Against Multilateralism

Unilateral actions are those that a single person, or small group of people, can take without consulting anybody else.

Multilateral actions are the opposite: actions that require the cooperation and approval of many people.

For instance, the "freedom to roam" or allemansrätten in Swedish, is a unilateral right in many Scandinavian countries -- any person can walk freely in the countryside, even over other people's land, without having to ask permission, provided he or she does not disturb the natural environment.  You don't have to "check in" with anyone; you just take a walk. 

People often mistrust unilateral actions even when at first glance they seem like "doing good":

  • Dylan Matthews at Vox opposes billionaire philanthropy (a unilateral donation to charitable causes the billionaire prefers) on the grounds that it undermines democracy (a multilateral process in which many voters, politicians, and government agencies deliberate on how money should be spent for the common good).
  • People are alarmed by geoengineering, a collection of technological methods for reversing global warming which are within reach of a single company acting unilaterally, and much more comfortable with multilateral tactics like international treaties to limit carbon emissions.
  • Gene drives that wipe out malaria-carrying mosquitoes would be a unilateral solution to eradicating malaria, in contrast to the multilateral solution of non-governmental aid organizations donating to malaria relief.  Gene drives are controversial because people are concerned about possible risks of releasing genetically modified organisms into the environment -- but they have the potential to eliminate malaria much faster and more cheaply than anything else.
  • Paul Krugman is troubled by the prospect of billionaires funding life extension research (a unilateral approach to solving the problems of age-related disease) because he's concerned they would ensure that only a privileged few would live long lives.

Often, unilateral initiatives are associated with wealth and technology, because both wealth and technology extend an individual's reach. 

I didn't really "get" why biotechnology innovation scared people until I watched the TV show Orphan Black.  There's a creepy transhumanist cabal in the show that turns out (spoiler!) to be murdering people. But before we know that, why is the show leading us to believe that this man onstage talking about genetic engineering is a bad guy?

I think it's about the secrecy, primarily. The lack of accountability.  The unilateralism.  We don't understand what these guys are doing, but they seem to have a lot of power, and they aren't telling us what they're up to.

They're not like us, and they can just do stuff without any input from us, and they have the technology and money and power to do very big things very fast -- how do we know they won't harm us?

That's actually a rational fear. It's not "fear of technology" in some sort of superstitious sense.  Technology extends power; power includes the power to harm.  The same technology that fed a starving planet was literally a weapons technology: nitrogen fixation gave us synthetic fertilizer and explosives alike.

Glen Weyl's post "Why I Am Not A Technocrat" basically makes this point.  Idealistic, intelligent, technologically adept people are quite capable of harming the populations they promise to help, whether maliciously or accidentally. He gives the examples of the Holodomor, a man-made famine created by Soviet state planning, and the rapid, US-economist-planned introduction of capitalism to Russia after the fall of the Soviet Union, which he claims was mismanaged and set the stage for Putin's autocracy.

In economic terms, Glen Weyl's point is simply that principal-agent problems exist. Just because someone is very smart and claims he's going to help you doesn't mean you should take his word for it.  The "agent" (the technocrat) promising to act on behalf of the "principal" (the general public) may have self-interested motives that don't align with the public's best interest; or he may be ignorant of the real-life situation the public lives in, so that his theoretical models don't apply.
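To make that concrete, here's a minimal toy model -- my own illustration, with invented payoffs, not anything from Weyl's post. The agent simply maximizes his own payoff, which need not coincide with the action that's best for the principal:

    # Toy principal-agent model. All payoff numbers are invented for
    # illustration; a sketch of the concept, not a real analysis.
    actions = {
        # action:            (payoff to principal, payoff to agent)
        "careful_reform": (10, 2),   # slow and unglamorous, but helps the public
        "grand_plan":     (-5, 8),   # prestigious for the technocrat, risky for everyone else
        "do_nothing":     (0, 0),
    }

    # The agent picks whatever maximizes *his* payoff...
    agent_choice = max(actions, key=lambda a: actions[a][1])
    # ...which is not what the principal would have chosen.
    principal_choice = max(actions, key=lambda a: actions[a][0])

    print(agent_choice)      # grand_plan
    print(principal_choice)  # careful_reform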

I think this is a completely valid concern.

The most popular prescription for solving principal-agent problems, though, especially when "technology" is mentioned, is simple multilateralism -- what Weyl calls "design in a democratic spirit."  That is: include the general public in decision-making. Do not make decisions that affect many people without the approval of the affected populations.

"Democratic designers thus must constantly attend, on equal footing, in teams or individually, to both the technical and communicative aspects of their work. They must view the audience for their work as at least equally being the broader non-technical public as their technical colleagues. They must view a lack of legitimacy of their designs with the relevant public as just as important as technical failures of the system."

In other words: if the general public isn't happy with a thing, it shouldn't be done. "Thin" forms of public feedback like votes or market demand are not enough for Weyl; if there's "political backlash and outrage" that in itself constitutes a problem, even if a policy is "popular" in the sense of winning votes or consumer dollars.  The goal for "democratic designers" is to avoid any appreciable segment of the public getting mad at them.

This is a natural intuition. Govern by consensus. Include all stakeholders in the decision process. It's how small groups naturally make decisions. 

Inclusion and consensus have a ring of justice to them, and they make for good slogans: "No taxation without representation." "Nothing about us without us."  And they really do provide a check on arbitrary power.

It is also extremely expensive and inhibits action.

I don't think you can have contemporary levels of technology and international trade while following the rule "everyone whose life is affected by a decision should be included in the decision process." Technology and trade allow strangers to affect our lives profoundly, without ever asking us how we feel about it.  Many people are unhappy that globalization and technology have altered their traditions. They have real problems and real cause for complaint. And yet, I'm pretty sure that a majority of the human race would have to die in order to get us "back" to a state where nobody could change your life from across the globe without your consent. If you want the world to be governed wholly by consensus, I think you have to be something like an anarcho-primitivist -- and that carries some brutal implications that I don't think Weyl would endorse.

The good news is, multilateral or democratic consensus is not the only mechanism for solving principal-agent problems.

I can think of three other categories of ways to put checks on the power to harm.

1. Law
If you define certain types of harm as unacceptable, you can impose criminal or civil penalties on anybody who commits them.
This is more efficient than consensus because it only imposes costs on illegal actions, while consensus imposes a cost on all actions (the time and resources spent on deliberation and the risk that consensus won't be achieved).

The difficulty, of course, is ensuring that the legal and judicial system is fair and considers everyone's interests. In democracies, we use deliberative consensus as part of the process for writing and approving laws. But that's still a lot more efficient than using consensus directly for all decisions in place of laws.
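As a back-of-the-envelope illustration (every number here is made up), compare the total cost of the two regimes: consensus pays a deliberation cost on every action, while law pays an enforcement cost only on the small fraction of actions that are illegal.

    # Back-of-the-envelope cost comparison; all numbers are invented.
    n_actions = 1_000_000       # actions taken in some period
    deliberation_cost = 50      # cost of reaching consensus on one action
    enforcement_cost = 500      # cost of prosecuting one illegal action
    illegal_fraction = 0.001    # fraction of actions that are illegal

    consensus_total = n_actions * deliberation_cost              # 50,000,000
    law_total = n_actions * illegal_fraction * enforcement_cost  # 500,000

    # Even at 10x the per-incident cost, law comes out ~100x cheaper here,
    # because it leaves the vast majority of (legal) actions untouched.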

2. Self-Protection
This includes all situations where the potential victims of harm have a readily available means to protect themselves from being harmed.
Again, it's more efficient than consensus because it doesn't impose costs on all actions, just harmful ones. It has an advantage over law in that it doesn't require anyone to specify the types of harm beforehand -- human life doesn't always fit neatly into a priori systems. It has a disadvantage in that, by default, the potential victims bear the costs of protecting themselves, which seems unfair; but laws and policies which lower the cost of self-protection or place some responsibility on perpetrators can mitigate this.

Self-protection includes:
  1. self-defense (as protection against violence)
  2. security protections against theft or invasion of privacy (locks, cryptography)
  3. various forms of exit (the right and opportunity to unilaterally leave a bad situation)
    1. the choice not to buy products you don't like and buy alternatives
    2. the choice to leave a bad job and find a better one
    3. the choice to leave one town or country for another
    4. the choice to leave an abusive family or bad relationship
  4. disclosure requirements on organizations, or free-speech rights for whistleblowers and journalists, that enable people to make informed decisions about who and what to avoid
  5. deliberately designing interventions to be transparent and opt-in, so that if people don't like them, they don't have to participate

3. Incentive Alignment

This includes things like equity ownership, in which the agent acting on behalf of a principal is given a share of the benefits he provides the principal. It also includes novel ideas like income share agreements, which introduce equity-like financial structures to human endeavors like education that haven't traditionally incorporated them.
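For instance, here's a minimal sketch of how an income share agreement might align incentives. The terms -- 10% of income above a $30,000 floor, capped at $15,000 a year -- are hypothetical, chosen purely for illustration:

    # Sketch of an income share agreement (ISA); the terms are invented.
    def isa_payment(income, share=0.10, floor=30_000, cap=15_000):
        """Student's annual payment: a share of income above a floor, up to a cap.

        The school's return scales with the student's later income, so the
        agent (the school) profits only if the principal (the student)
        actually ends up earning more.
        """
        return min(share * max(income - floor, 0), cap)

    print(isa_payment(20_000))   # 0.0 -- below the floor, nothing owed
    print(isa_payment(50_000))   # 2000.0
    print(isa_payment(200_000))  # 15000 -- hits the cap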

This has the advantage over consensus that you don't have to pay the costs of group deliberation for every decision, and the advantage over law that it doesn't require anyone to enumerate beneficial behaviors a priori -- the agent is incentivized to originate creative ways to benefit the principal. The disadvantage is that it's only as good as the exact terms of the contract and the legal system that enforces it, both of which can be rigged to benefit the agent. 

As with criminal law, consensus deliberation mechanisms can be used in a targeted way, on the "meta-problem" of defining the "rules of the game" in ways that are accountable to the interests of all citizens. We can have public deliberation on the question of what kinds of contracts should be enforceable, but then let the contractual incentives themselves, rather than costly mass deliberation, govern day-to-day operational decisions like those involved in running a company.


The Case For (Controlled) Unilateralism

It's clear that principal-agent problems exist. But we don't have to go back to primitive government-by-consensus in order to prevent powerful people from taking advantage of others. There are lots and lots of legal and governance mechanisms that handle principal-agent problems more efficiently than that.

Moreover, government-by-consensus isn't even that safe. It's vulnerable to demagogues who falsely convince people that their interests are being represented. In fact, I think a lot of highly unilateral, technological initiatives are getting pushback not because they're uniquely dangerous but because they're uniquely undefended by PR and political lobbying.  

We need unilateral solutions to problems because consensus and coordination are so difficult. Multilateral solutions often fail because some party who's critical to implementing them isn't willing to cooperate.  For instance, voters around the world simply don't want high carbon taxes. Imposing a coordination-heavy project on an unwilling population often takes a lot of violence and coercion.

Technology, by definition, reduces the costs of doing things. Inventing and implementing a technology that makes it easy to solve a problem is more likely to succeed, and more humane, than convincing (or forcing) large populations to make large sacrifices to solve that problem.

Of course, I just framed it as technology "solving problems" -- but technology also makes weapons. So whether you want humanity to be more efficient or less efficient at doing things depends a lot on your threat scenario.

However, I see a basic asymmetry between action and inaction.  Living organisms must practice active homeostasis -- adaptation to external shocks -- to survive. If you make a living thing less able to act, in full generality, you have harmed it. This is true even though it is possible for an organism to act in ways that harm itself.

The same is true to a much greater degree for human civilization. "Business as usual" for humanity in 2019 is change. World population is growing rapidly. Our institutions are designed around a prediction of continued exponential growth in resources.  A reduction in humanity's overall capacity to do things is not going to result in peaceful stability, or at any rate, not before killing a lot of people.

Do we want to guard against powerful unilateral bad actors? Of course. We need incentives to constrain them from hurting others, and that's the task of governance and law.  But the cost of opposing unilateralism indiscriminately is too high. We need mechanisms that are targeted, that impose costs especially on harmful actions, not on beneficial and harmful actions alike.
