Unilateral actions are those that a single person, or small group of people, can take without consulting anybody else.
Multilateral actions are the opposite: actions that require the cooperation and approval of many people.
For instance, the "freedom to roam" or allemansrätten in Swedish, is a unilateral right in many Scandinavian countries -- any person can walk freely in the countryside, even over other people's land, without having to ask permission, provided he or she does not disturb the natural environment. You don't have to "check in" with anyone; you just take a walk.
People often mistrust unilateral actions even when at first glance they seem like "doing good":
- Dylan Matthews at Vox opposes billionaire philanthropy (unilateral donations to charitable causes the billionaire prefers) on the grounds that it undermines democracy (a multilateral process in which many voters, politicians, and government agencies deliberate on how money should be spent for the common good).
- People are alarmed by geoengineering -- a collection of technological methods for reversing global warming, some of them within reach of a single company acting unilaterally -- and much more comfortable with multilateral tactics like international treaties to limit carbon emissions.
- Gene drives that wipe out malaria-carrying mosquitoes could be a unilateral solution to eradicating malaria, unlike the multilateral solution of non-governmental aid organizations donating to malaria relief. Gene drives are controversial because people are concerned about the possible risks of releasing genetically modified organisms into the environment -- but they have the potential to eliminate malaria much faster and more cheaply than anything else.
- Paul Krugman is troubled by the prospect of billionaires funding life extension research (a unilateral approach to solving the problems of age-related disease) because he's concerned they would ensure that only a privileged few would live long lives.
Often, unilateral initiatives are associated with wealth and technology, because both wealth and technology extend an individual's reach.
I didn't really "get" why biotechnology innovation scared people until I watched the TV show Orphan Black. There's a creepy transhumanist cabal in the show that turns out (spoiler!) to be murdering people. But before we know that, why is the show leading us to believe that this man onstage talking about genetic engineering is a bad guy?
I think it's about the secrecy, primarily. The lack of accountability. The unilateralism. We don't understand what these guys are doing, but they seem to have a lot of power, and they aren't telling us what they're up to.
They're not like us, and they can just do stuff without any input from us, and they have the technology and money and power to do very big things very fast -- how do we know they won't harm us?
That's actually a rational fear. It's not "fear of technology" in some sort of superstitious sense. Technology extends power; power includes the power to harm. The same technology that fed a starving planet was literally a weapons technology.
Glen Weyl's post Why I Am Not A Technocrat basically makes this point. Idealistic, intelligent, technologically adept people are quite capable of harming the populations they promise to help, whether maliciously or accidentally. He gives the examples of the Holodomor, a man-made famine created by Soviet state planning, and the rapid, US-economist-planned introduction of capitalism to Russia after the fall of the Soviet Union, which he claims was mismanaged and set the stage for Putin's autocracy.
In economic terms, Glen Weyl's point is simply that principal-agent problems exist. Just because someone is very smart and claims he's going to help you doesn't mean you should take his word for it. The "agent" (the technocrat) promising to act on behalf of the "principal" (the general public) may have self-interested motives that don't align with the public's best interest; or he may be ignorant of the real-life situation the public lives in, so that his theoretical models don't apply.
I think this is a completely valid concern.
The most popular prescription for solving principal-agent problems, though, especially when "technology" is mentioned, is simple multilateralism, what Weyl calls "design in a democratic spirit." That is: include the general public in decisionmaking. Do not make decisions that affect many people without the approval of the affected populations.
In other words: if the general public isn't happy with a thing, it shouldn't be done. "Thin" forms of public feedback like votes or market demand are not enough for Weyl; if there's "political backlash and outrage," that in itself constitutes a problem, even if a policy is "popular" in the sense of winning votes or consumer dollars. The goal for "democratic designers" is to avoid any appreciable segment of the public getting mad at them:
"Democratic designers thus must constantly attend, on equal footing, in teams or individually, to both the technical and communicative aspects of their work. They must view the audience for their work as at least equally being the broader non-technical public as their technical colleagues. They must view a lack of legitimacy of their designs with the relevant public as just as important as technical failures of the system."
There are mechanisms besides mass deliberation, though, that protect people against harm from powerful agents. They include:
- self-defense (as protection against violence)
- security protections against theft or invasion of privacy (locks, cryptography)
- various forms of exit (the right and opportunity to unilaterally leave a bad situation):
  - the choice not to buy products you don't like and buy alternatives
  - the choice to leave a bad job and find a better one
  - the choice to leave one town or country for another
  - the choice to leave an abusive family or bad relationship
- disclosure requirements on organizations, or free-speech rights for whistleblowers and journalists, that enable people to make informed decisions about who and what to avoid
- deliberately designing interventions to be transparent and opt-in, so that if people don't like them, they don't have to participate
3. Incentive Alignment
This includes things like equity ownership, in which the agent acting on behalf of a principal is given a share of the benefits he provides the principal. It also includes novel ideas like income share agreements, which introduce equity-like financial structures to human endeavors like education that haven't traditionally incorporated them.
This has the advantage over consensus that you don't have to pay the costs of group deliberation for every decision, and the advantage over law that it doesn't require anyone to enumerate beneficial behaviors a priori -- the agent is incentivized to originate creative ways to benefit the principal. The disadvantage is that it's only as good as the exact terms of the contract and the legal system that enforces it, both of which can be rigged to benefit the agent.
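A minimal sketch of the idea, with made-up numbers (a flat fee of 10 and a 20% equity share, chosen only for illustration): the agent's payoff rises only when the value delivered to the principal rises, so nobody has to enumerate beneficial behaviors in advance.

```python
# Toy fee-plus-equity contract: the agent is paid a flat fee plus a fixed
# share of whatever value they create for the principal.

def agent_payoff(value_created: float, base_fee: float, equity_share: float) -> float:
    """Agent earns a base fee plus a share of the value delivered."""
    return base_fee + equity_share * value_created

def principal_payoff(value_created: float, base_fee: float, equity_share: float) -> float:
    """Principal keeps whatever value remains after paying the agent."""
    return value_created - agent_payoff(value_created, base_fee, equity_share)

# Doubling the value created raises both payoffs: the agent's variable
# compensation goes from 20 to 40, the principal's net from 70 to 150.
for value in (100.0, 200.0):
    print(value,
          agent_payoff(value, base_fee=10.0, equity_share=0.2),
          principal_payoff(value, base_fee=10.0, equity_share=0.2))
```

The particular fee and share don't matter; the point is just that both parties' payoffs move in the same direction, which is exactly what the contract terms (and the courts enforcing them) have to be trusted to preserve.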
As with criminal law, consensus deliberation mechanisms can be used in a targeted way, on the "meta-problem" of defining the "rules of the game" in ways that are accountable to the interests of all citizens. We can have public deliberation on the question of what kinds of contracts should be enforceable, but then let the contractual incentives themselves, rather than costly mass deliberation, govern day-to-day operational decisions like those involved in running a company.
The Case For (Controlled) Unilateralism
It's clear that principal-agent problems exist. But we don't have to go back to primitive government-by-consensus in order to prevent powerful people from taking advantage of others. There are lots and lots of legal and governance mechanisms that handle principal-agent problems more efficiently than that.
Moreover, government-by-consensus isn't even that safe. It's vulnerable to demagogues who falsely convince people that their interests are being represented. In fact, I think a lot of highly unilateral, technological initiatives are getting pushback not because they're uniquely dangerous but because they're uniquely undefended by PR and political lobbying.
We need unilateral solutions to problems because consensus and coordination are so difficult. Multilateral solutions often fail because some party who's critical to implementing them isn't willing to cooperate. For instance, voters around the world simply don't want high carbon taxes. Imposing a coordination-heavy project on an unwilling population often takes a lot of violence and coercion.
Technology, by definition, reduces the costs of doing things. Inventing and implementing a technology that makes it easy to solve a problem is more likely to succeed, and more humane, than convincing (or forcing) large populations to make large sacrifices to solve that problem.
Of course, I just framed it as technology "solving problems" -- but technology also makes weapons. So whether you want humanity to be more efficient or less efficient at doing things depends a lot on your threat scenario.
However, I see a basic asymmetry between action and inaction. Living organisms must practice active homeostasis -- adaptation to external shocks -- to survive. If you make a living thing less able to act, in full generality, you have harmed it. This is true even though it is possible for an organism to act in ways that harm itself.
The same is true to a much greater degree for human civilization. "Business as usual" for humanity in 2019 is change. World population is growing rapidly. Our institutions are designed around a prediction of continued exponential growth in resources. A reduction in humanity's overall capacity to do things is not going to result in peaceful stability, or at any rate, not before killing a lot of people.
Do we want to guard against powerful unilateral bad actors? Of course. We need incentives to constrain them from hurting others, and that's the task of governance and law. But the cost of opposing unilateralism indiscriminately is too high. We need mechanisms that are targeted, that impose costs especially on harmful actions, not on beneficial and harmful actions alike.