Is Stupidity Strength? Part 4: Are VCs Stupid?

Defining the Question

If you want to write a thinkpiece bashing venture capitalists, it's easy enough. All you have to do is find an example of one VC-backed company that sounds stupid or mockable, and generalize that to condemning venture capitalists overall. Instant Valleywag article!

But "can you find a seemingly-dumb VC investment?" isn't an interesting question; the answer is obviously yes, and you can't do anything practical with that answer except drum up the public's knee-jerk resentment against Silicon Valley. I'm not interested in going that route.

Here are two interesting questions:

  1. Are institutional investors who invest in VC firms being economically rational by investing in VC the way they typically do today? Could they make more money doing something different? (That is, is "VC overrated" from the perspective of an institutional investor like a retirement fund or university endowment?)
  2. Are VCs being economically rational by choosing startups to invest in as they typically do today? Could a VC firm make more money doing something different? (That is, are VCs being "stupid" in the sense that a contrarian investment approach could strictly outperform them?)

If the answers to those questions are "no, they're not being rational" and "yes, they could do better," that doesn't mean all VCs are bad investors, just the typical VC. In fact, quite a few VCs argue that they're reliably beating the market by following a contrarian strategy.

If there's overinvestment in VC on the whole, or if there's a contrarian VC investment strategy that beats the market, that's good news -- it means there's an economic opportunity!

If not, that's a different kind of good news: the market is efficient and pretty much doing as well as a market can at allocating capital where it creates the most value. We can trust price signals to be something like quality signals.

There's upside whichever way the data shakes out, so we can go into this inquiry with open minds.

The VC Industry: How Big Is It? 

The National Venture Capital Association's latest 2019 Q3 report offers the following figures:

  • The US VC industry invests about $100 billion a year into companies
  • There are about 2000 US VC firms
  • US VC firms invest in about 10,000 companies a year
  • Over the past 10 years, total US VC investment has more than doubled

VC is still a tiny fraction of investment capital as a whole, however. The VC industry's total assets under management are worth $524 billion; by comparison, mutual funds manage $17 trillion.    

Most ordinary people don't invest in VC, but VC is popular with institutional investors like college endowments. For instance, 18 percent of Yale's endowment is invested in venture capital. 

Why care about VC, if it's a small fraction of all investments? At the very least, it's a matter of professional interest if you work in VC-funded industries like software or biotech; also, to the extent that VC is involved in funding technological innovation, how well VC does at funding real technologies determines how abundant and productive our future economy will be.

How Good Are Aggregate Returns on VC?

Does the VC industry as a whole have a good rate of return on investment compared to other asset classes? Should institutional investors be investing less in VC, more, or about the same?

This question is mostly about the average performance of VC firms; it could simultaneously be true that most VC firms have poor returns and that a few exceptional firms do so well that a randomly chosen VC firm would still have a good expected return.

Cambridge Associates' venture capital index estimates the returns on venture capital. The answer depends a lot on the time horizon: the 5-year rate of return is 13%, the 10-year is 14%, and the 30-year is 32%. Long-run growth is higher than that of stock indexes like the S&P 500, which has a 5-year rate of return of 11%, a 10-year rate of 16%, and a 30-year rate of 10% (though note that the S&P actually wins on the 10-year horizon). Investing in a random VC is higher-return in expectation, though also higher-variance, than just investing in an index fund. In other words, investing in VC is not so stupid that you could do strictly better by just putting your money in a random stock instead. But that's a low bar.

A more subtle analysis takes account of risk as well as return. VCs may have higher average returns than the stock market, but they're also more volatile. Investors rationally see a tradeoff between risk (variance) and return (expected value) and are willing to tolerate higher risk only if it also brings higher returns; standard portfolio theory defines a mathematically optimal balance, determining how an investor with a given level of risk aversion would maximize long-run returns.
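The usual way to quantify "return per unit of risk" is the Sharpe ratio: excess return over a risk-free benchmark, divided by the volatility of returns. A minimal sketch of the calculation (the return series and the 2% risk-free rate here are made-up illustrative numbers, not figures from the studies discussed below):

```python
# Sharpe ratio: (mean return - risk-free rate) / standard deviation of returns.
# Illustrative numbers only; not the actual VC or S&P series.

def sharpe_ratio(annual_returns, risk_free_rate=0.02):
    n = len(annual_returns)
    mean = sum(annual_returns) / n
    variance = sum((r - mean) ** 2 for r in annual_returns) / n
    return (mean - risk_free_rate) / variance ** 0.5

# A volatile high-return series vs. a steadier lower-return one:
vc_like    = [0.40, -0.20, 0.35, -0.10, 0.30]   # mean 15%, very bumpy
index_like = [0.12, 0.08, 0.10, 0.09, 0.11]     # mean 10%, smooth

# Despite the lower mean, the steady series has by far the better
# risk-adjusted return.
```

The point of the toy numbers: a portfolio with a higher average return can still be a worse deal on a risk-adjusted basis, which is exactly the question the studies below try to settle for VC.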

What's the risk-adjusted rate of return, comparing VC returns to the alternative of putting that same investment into an index fund? A 2015 study in the Journal of Finance says it's significantly negative (p = 0.015); this means that rational portfolio investors should be investing less in VC. A 2010 study using more conservative assumptions finds a statistically insignificant level of excess returns (alpha = 0.17, p = 0.54), implying that investors are investing basically the right amount in VC; it's neither overrated nor underrated. I'm not sure which set of assumptions is more reasonable.

As of 2018, not a single Ivy League school endowment (all of which include VC in their investment portfolios) had higher returns than a simple "60-40 portfolio" (60% stocks, 40% bonds), which would have made a 15% yearly return; and the Ivy portfolios were also much higher risk! So Ivy League fund managers, at least, could do strictly better by investing in no VCs at all!

Is it stupid to invest money in VC at all? It's hard to say, given the conflicting results from the data. But we can at least say that institutional investors shouldn't be putting more of their money into VC.

Variation Within VC: Luck or Skill?

Some VC firms have much higher returns than others.  The Column Group, a biotech VC firm, posted a staggering 408% return in the first quarter of 2019; the same report claims a return of just 10% for the median VC.

Some observers are even more pessimistic about the median VC; Paul Graham, writing in 2009, said that in his experience the median VC loses money. Israeli VC Gil Ben-Artzy claims that 95% of VC funds make less than 3x returns over a typical 10-year span, which amounts to about 11% per year -- worse than the S&P 500!
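Converting a fund's total multiple over its life into an annual rate is just a geometric mean; a quick sketch of the arithmetic behind that 3x figure:

```python
# Annualized return implied by a total multiple over a holding period:
# (multiple) ** (1 / years) - 1

def annualized(multiple, years):
    return multiple ** (1 / years) - 1

r = annualized(3.0, 10)  # a 3x fund over 10 years
# r is roughly 0.116, i.e. about 11.6% per year -- and that's before
# the fund's management fees and carried interest come out.
```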

A sample of 535 VCs also finds they have a median rate of return of 4% -- quite a bit worse than the S&P 500.

Most VCs, it seems, have terrible track records. Only one in every twenty makes more money than you could get by just investing in an index fund (while skipping the VCs' high fees).

But that doesn't necessarily mean the Column Group is smarter than the median VC; they might have just gotten lucky. A lottery winner has a very large return on investment compared to the median lottery player, but not because she has higher skill.

If you want to ascertain whether there's such a thing as investment skill, you need to look at investor track records. Do the same investors make above-average returns year after year? Then it might be skill (though it could also be something else, like monopoly power). But if investors show no consistency in returns, it definitely can't be skill.

A 2006 study suggests that investor skill exists. Firms funded by VCs with prior successful investments (where "success" means IPO) are more likely to succeed, but this effect goes away in firms funded by previously successful entrepreneurs.  So, top-tier investors are more likely to pick successful startups, but don't add much benefit to startups with experienced founders.

Looking at VC fund performance, investors who did well in the past continue to do well in the future; top-quartile firms make an average of 7-8 percentage points more each year than bottom-quartile firms.

Another study also shows a large effect of investor skill: VC firms in the 80th percentile for past performance made 15 percentage points more  a year than firms in the 20th percentile.

The evidence is unequivocal: some VCs are much better than others, consistently, and VCs who are any good at all are a minority.

Predictably Wrong Strategies

"Ok, VC as a whole doesn't have great returns relative to its risk, but some investors are much better than that! Why don't institutional investors just invest in good VC's and not bad ones?"

Well, maybe they can't; maybe identifying a VC firm with a good track record is hard. After all, fund performance data is private and jealously guarded, and every VC firm tries to only share numbers that make it look impressive.  

How could we test this hypothesis? Well, there's a way to disprove it: if there were an easy-to-check criterion that accurately distinguished good investors from bad ones, then you'd be able to use that criterion to choose good investors who get above-market returns, which means that anyone not using this criterion is being financially stupid.

Well, here's such a criterion: investors with strong jawlines lose money. No, really.

Investors with higher facial width-to-height ratios -- a predictor of high testosterone levels -- make over 5 percentage points less a year than their narrow-faced counterparts.


This is a huge effect, comparable to the difference between stocks and cash. If you invested only in funds run by low-testosterone investors, you'd be 6x as wealthy in 20 years.

(To be fair, these are hedge fund managers, not VCs; in this section I'll refer to evidence from a variety of investment types, but there seem to be commonalities). 

What's going on here? Well, clearly, many rich people have a bias towards masculine, confident, charismatic men -- so much so that they'll even invest money in crappy funds if they're run by guys with strong jawlines. Testosterone empirically causes people to make overly risky investments, which in turn correlates with worse performance. A lot of investors are apparently letting bias get in the way of profit.

Additionally, hedge fund managers rated as more psychopathic earned about 2 percentage points less a year than more empathetic managers (p < 0.05).

Fund managers who attended more selective colleges also outperform those from less selective colleges, by about a percentage point per year.

Conscientiousness correlates positively with investor performance: 80th percentile conscientiousness investors are 6x as likely to achieve top-quartile performance as median-conscientiousness investors.  Conscientiousness is also a strong predictor of success in entrepreneurs.

In venture capital specifically, venture capitalists are much more likely to make successful investments if they have science or engineering degrees, have past VC or startup experience, and don't have MBAs.

Among innovation-intensive businesses, the percentage of female executives correlates positively with firm performance. New businesses founded by men do perform better than those founded by women, but this difference is fully explained by male-founded businesses having more starting capital, being in more tightly clustered regions, and being more likely to focus on high-tech manufacturing.

Moreover, VCs given identical pitches from entrepreneurs reliably prefer male to female entrepreneurs, and they particularly like pitches from attractive men; attractiveness in women doesn't matter.

What does this all mean? In short: low-testosterone, non-psychopathic, STEM-educated, conscientious people make better investment decisions than aggressive, impulsive risk-takers with MBAs; investors have a bias towards handsome, masculine men; and female founders can do as well as male founders if they enter the most technically innovative industries and get adequate starting capital.

"Just invest in the most macho, wildly confident guy you can find" is a common strategy, and one that fails. These recklessly overconfident, less conscientious individuals are more likely to take unwise risks, and also more likely to commit fraud -- both of which are bad for business in the long run.

(This hypothesis also matches the pattern of big recent failures in startup performance like WeWork and Uber).

You can beat the market as an institutional investor just by not executing this stupid strategy. Therefore, we can be confident that a lot of capital is being invested stupidly, i.e. avoidably passing up opportunities for more money.

Do I have a problem with handsome, masculine men? Heck no; I married one!  

I'm claiming that there is a lot of "dumb money" out there, which favors handsome, masculine men even when they lose money. 

Gary Becker's theory of prejudice was that it ought to eventually die out: racist employers who won't hire black people will eventually go out of business in a competitive market, and any irrational prejudice in business owners or investors should be selected against relative to optimal profit-maximizing behavior. If we see persistent prejudice that goes against financial self-interest, the market must be non-competitive.

Well, we see persistent prejudice in investment! In the most obvious way you'd expect: bias towards traits that make people high-status in our society. People spend money on people who look like winners; they don't check track records of actual winning. Why don't they all go broke and thus remove themselves from the market? I don't know, but they don't. 

I do know, from the account of my friend Zvi, who used to work in sports betting, that most people who bet on sports are not even remotely optimizing for making money; they're just sports fans who bet on the home team. You can make money just by always betting on the away team. So many bets are made by "dumb money" that you can "beat the market" without doing anything cleverer than betting against blind fandom.

Maybe something not too different is happening in business as well.

This is good empirical corroboration for the existence of a Stupid Coalition in business.  If people who have a preference for macho men disproportionately invest in companies and investment firms run by macho men, they can prop each other up temporarily, but sooner or later these whole clusters of highly correlated bias will experience market crashes. 


Is Stupidity Strength? Part 3: Evolutionary Game Theory

Spite Strategies

Carlo Cipolla defined stupidity as causing harm to others as well as harming oneself, while benefiting nobody.

Another way of looking at this: a stupid decision is one which you could make a Pareto improvement on. A stupid decision means neglecting a win-win opportunity.  

Since people aren't omniscient and omnipotent, and we don't necessarily want to call that stupidity, we can narrow this: a stupid decision is one that avoidably causes harm to self and others.

In the previous post, I mentioned a possible incentive for a coalition of individuals to be stupid -- the "too big to fail" strategy.  If enough people commit to take imprudent risks, all at once, then they can force the prudent people to bail them out when catastrophe eventually comes. In the long term, everyone will be worse off in absolute terms than if the catastrophe had been prudently averted; but the Stupids will be relatively better off than the Prudents.

In evolutionary biology, this is a special case of what's called Hamiltonian spite, after its originator W.D. Hamilton. Imagine a gene that imposes a fitness cost on organisms that bear it, but an even greater fitness cost on members of the same species that do not bear it. This gene might be able to persist in the population, by enabling its bearers to outcompete their neighbors, even though it causes only harm and no benefit to anyone!
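The condition for such a gene to spread can be stated compactly in the inclusive-fitness framework (this is a sketch of the standard textbook form of Hamilton's rule, not Hamilton's original notation):

```latex
% Hamilton's rule: a behavior costing the actor c in fitness and
% changing the recipient's fitness by b is favored by selection when
rb - c > 0
% Altruism is the case b > 0, r > 0: pay a cost to help relatives.
% Spite is the case b < 0: the inequality can then only hold if
% r < 0, i.e. the recipient is *less* related to the actor than an
% average member of the population, so harming them raises the
% relative fitness of the actor's kin.
```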

Does spite ever happen?  

Many apparently spiteful behaviors in nature are actually selfish; when a male bowerbird destroys the nests of other bowerbirds, his own nest appears more attractive in comparison and he gets more mating opportunities. This is a straightforward case of zero-sum competition, not true spite. 

Hamilton himself thought cases of spite would be vanishingly rare in nature; his own equations show that the larger the population, the less likely spiteful strategies are to win, and since spite strategies diminish absolute fitness (i.e. the number of offspring), spite-dominated populations will tend to shrink towards extinction. In his original paper, Hamilton proposed that spite strategies might emerge in small, isolated populations and quickly drive those populations out of existence; we shouldn't expect to see them persist for long.

A more recent paper adds a wrinkle, however. Hamilton's original models assumed that populations could be of arbitrary size. But in nature, population sizes are often bounded above by the carrying capacity of the environment -- a given savannah only has enough resources to support so many lions, no matter how fit they are. If you add a carrying-capacity constraint to the equations, spite strategies can persist in the long term, provided the harm to those who don't bear the spite gene is sufficiently larger than the harm to those who do. The critical ratio grows with the maximum population size: it is easier for spite strategies to survive in environments with smaller carrying capacities.

This fact is suggestive for the question of whether spite strategies could have evolved in humans.  We are a highly K-selected species (compared to other mammals like mice) -- we have large bodies, slow metabolisms, and long lives, developing slowly, reproducing infrequently, and investing a lot of care into our offspring.  This pattern tends to evolve in organisms close to their environment's carrying capacity, such as in predators at the top of the food chain. Vast litters of offspring would do a K-selected mother no good; they would bump into the harsh limitations of the food supply and starve before they had children of their own. She would be better off investing resources into making her few offspring more robust; building them bigger, more long-lasting bodies, with bigger brains more able to adapt their behavior to survive; and guarding and feeding them while young; and, perhaps, sabotaging their competition!  It is in K-selected animals like us that spiteful behaviors have a plausible evolutionary advantage, since populations are stably small; just as it is in oligopolies, not competitive markets, where sabotaging a competitor can be a winning strategy.  

(Of course, the environment in which modern Homo sapiens evolved was the harsh Malthusian context of the Pleistocene; for the past 300 years the human population has exploded exponentially. Perhaps the spite strategies we evolved with are no longer adaptive in a context of improving technology and global trade.)

Likewise, there is a wider range of conditions under which spiteful strategies can persist when competition is more localized, so that only small populations can interact with each other. Global competition punishes lose-lose strategies, since these diminish the absolute fitness of those who carry them and their non-carrier victims; local competition can preserve these strategies in isolated enclaves.

In nature, we see spiteful behavior in the social insects; worker bees, wasps, and ants prevent other workers from reproducing by killing their eggs, and red fire ant workers kill unrelated queen ants. These actions do not provide any direct fitness benefit to the specific workers that do the killing; rather, they provide an indirect benefit to their sisters, the queens, by killing their unrelated rivals. 

It has been hypothesized that primates engage in spiteful behavior; they certainly engage in apparently spiteful behaviors like harassing copulating couples and killing non-kin infants, but there's no consensus I can find as to whether this is true Hamiltonian spite or mere self-interested competition for food and mates.

Spite and rent-seeking

In Tullock's model of rent-seeking, individuals compete for a winner-take-all prize; each individual decides how much to spend, and the more you spend relative to all the other individuals, the more likely you are to win the prize. What's the optimal amount to spend?

There is a unique Nash equilibrium strategy of how much to spend on trying to get the prize; that is, you can't improve your expected net gain by spending any more or any less. However, this is not an evolutionarily stable strategy! Populations that bid the Nash equilibrium will get overtaken by populations that spitefully bid more, at cost to themselves.

The two strategies are rather close, and get closer asymptotically in large populations; the Nash equilibrium bid is (n-1)/n^2 rV (where n is population size, V is the payout value, and r is a shape parameter of the win-probability function), while the ESS bid is rV/n.  Evolutionarily optimal play is slightly more aggressive than individually optimal play, in a large population with many-to-many competition. But in a small population, or in a tournament-like setup where pairs of individuals play one on one and losers get knocked out of the game, this difference is magnified, and of course compounds with time.
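A quick check of the two formulas quoted above, using the same symbols (n is population size, V the payout value, r the shape parameter of the win-probability function):

```python
# Tullock contest bids, per the formulas in the text:
#   Nash equilibrium bid: (n - 1) / n**2 * r * V
#   ESS bid:              r * V / n
# Their ratio is n / (n - 1), which tends to 1 as n grows.

def nash_bid(n, r, V):
    return (n - 1) / n ** 2 * r * V

def ess_bid(n, r, V):
    return r * V / n

for n in (2, 10, 1000):
    print(n, nash_bid(n, 1, 100), ess_bid(n, 1, 100))
# The ESS bid always exceeds the Nash bid, but the gap shrinks
# as the population gets large -- the "slightly more aggressive"
# evolutionary play described in the text.
```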

Direct resource competition between conspecifics is many-to-many competition; as soon as I eat a bite, it simultaneously becomes unavailable to everyone else.

Fighting between conspecifics, however, is one-to-one competition; only two rams can butt heads at once. 

We should expect to see "overinvestment" in adaptations that increase individuals' abilities to win such head-to-head conflicts (pun intended), relative to the individually "rational" Nash equilibrium amount.  Competing for resources is not in general a spite strategy, because the winner of a conflict does directly benefit; but overinvestment in resource competition can be a spite strategy.  It's net harmful to the individual, in expectation, but it's more net harmful to his opponent.

Spite and intergroup conflict

If we allow different evolutionary strategies to detect each other -- to treat "in-group" members differently from "out-group" members, as human nations do (as well as other species; ants go to war) -- we see even more interesting things about the dynamics of spite.

If individuals are assumed to interact only with local neighbors, to migrate around somewhat, but to be able to distinguish kin from non-kin even if migration has occurred, we observe that individuals tend to be altruistic (hurting themselves to help others) towards kin, and spiteful (hurting themselves to hurt others) towards non-kin. 

Moreover, minorities living in non-kin territory tend to be strongly altruistic towards their kin and only mildly spiteful towards the majority; while majorities tend to be only mildly altruistic towards each other and strongly spiteful towards minorities. This seems to match available evidence about human ethnic conflict.

Spite in human experiments

Humans display spiteful behavior in game-theoretic experiments:

Zizzo (2003a) in his paper on burning money experiments reported that subjects are often willing to reduce, at a cost for themselves, the incomes of players who had been given higher endowments. In some instances subjects with the same or less endowment were also targeted. In a similar vein, Dawes, Fowler, Johnson, McElreath and Smirnov (2007) find that subjects are willing to reduce other group members’ income independently of the history of interaction...

In their experiments on competitive behavior, Rustichini and Vostroknutov (2007) find that participants are more inclined to reduce someone else’s income if the punished subject has earned more money than the punisher. Surprisingly, this effect is stronger when the higher incomes of the punished subjects are due to merit rather than luck...
The most extreme form of anti-social punishment, where the punishment is directed against those who had previously behaved nicely towards the punisher, has been observed in public good games with punishment. In these games those who are more cooperative than others are frequently punished. Such evidence is reported in Cinyabuguma, Page and Putterman (2006), Gächter, Herrmann and Thöni (2005) and Herrmann et al. (2008).

In a "rent-seeking game" played with 3500 undergraduates, players significantly "over-spent" on winning relative to the Nash equilibrium; in particular, they spent twice as much when playing against another human vs the computer, which suggests that spite is a social emotion.  Players who defected on the Prisoner's Dilemma game engaged in more spiteful overspending than cooperative players, and players who were more risk-prone in a lottery test were also more prone to overspend. Finally, after engaging in a rent-seeking game, players cooperated significantly less on the Prisoner's Dilemma.

While players of a public good game punished free riders in all cities, in some cities players also engaged in antisocial punishment -- selectively penalizing the most generous contributors. This happened least in Anglophone cities (Boston, Melbourne, Nottingham) and most in Mediterranean, Middle Eastern, or Slavic cities (Muscat, Athens, Riyadh, Samara, Minsk, Istanbul); countries with high scores on social trust and rule of law displayed more "prosocial punishment" of free-riders and less "antisocial punishment" of contributors.

Several hundred Portuguese schoolchildren were assigned to play a spite game, where they could either play cooperatively or spitefully. If both players cooperate, both gain 15 points; if one cooperates and the other spites, the spiteful player gains 11 points (paying a cost) but his opponent only gains 5 points (a greater loss). Finally, if both players spite, they each get 2 points (a severe loss). 

This game can either be played with proportional winnings (each player gets a piece of candy for every 15 points), in which case playing cooperatively is optimal, or with winner-take-all conditions (the player with the most points gets a fancy chocolate), in which case playing spitefully is optimal.
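The payoff structure described above can be written out directly, which makes the two optima easy to verify (a sketch using the point values from the text):

```python
# Payoffs (my points, opponent's points) in the spite game described above.
# 'C' = cooperate, 'S' = spite.
payoff = {
    ('C', 'C'): (15, 15),
    ('C', 'S'): (5, 11),
    ('S', 'C'): (11, 5),
    ('S', 'S'): (2, 2),
}

# Proportional rewards: only my own point total matters.
# Cooperating strictly dominates: 15 > 11 and 5 > 2.
for theirs in ('C', 'S'):
    assert payoff[('C', theirs)][0] > payoff[('S', theirs)][0]

# Winner-take-all: only the point *difference* matters.
# Spiting never does worse on margin and sometimes wins outright:
# +6 against a cooperator, 0 against a spiter, while cooperating
# scores -6 against a spiter.
for theirs in ('C', 'S'):
    margin_spite = payoff[('S', theirs)][0] - payoff[('S', theirs)][1]
    margin_coop = payoff[('C', theirs)][0] - payoff[('C', theirs)][1]
    assert margin_spite >= margin_coop
```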

The experiment found that younger children (5th-7th grade) usually played cooperatively, while older children (8th-10th grade) played cooperatively in the proportional-rewards conditions and spitefully in the winner-take-all conditions. Students repeating a grade were much more likely to behave spitefully.  This suggests that spiteful behavior in humans may emerge in the teenage years.

The economic experimental literature is clear that spiteful strategies do exist in humans, that they correlate with social trust and rule of law in the expected (inverse) direction, and that they seem to emerge in adolescence.



Is Stupidity Strength? Part 2: Confidence

One very common way people believe stupidity can be a strength is that it can give greater confidence, which brings advantages.

If you are ignorant of your own flaws, you can perform self-assurance and boldness, which makes it more likely you will win success, especially in social competitive situations. (Getting the girl, getting the raise, winning the election.)

If you are ignorant of the risks of a new venture, you will be more likely to boldly attempt it; and many risky ventures are high in expected value.

If you are ignorant of the weaknesses in your ideas, you will proclaim them confidently and have more influence in society, and more of a sense of joyful certainty, than more reflective, self-critical people.  "The best lack all conviction, while the worst are full of passionate intensity."

Not knowing the flaws in your own character, your own plans, or your own opinions, seems like it might carry an advantage. Even those who think it's morally unacceptable to engage in self-serving delusion often think that the deludedly confident obtain selfish gain from their stupidity. After all, look what it got Adam Neumann -- a CEO so brashly incompetent and unscrupulous that he was recently paid over a billion dollars to leave his company.

But what is this "confidence", why is it good, and why can't you get it without self-delusion?

Confidence Is Willingness To Act

William James, in his "The Will to Believe", was obsessed with the question of whether it could be acceptable to choose a belief, for which you had no evidence, if it made you more decisive and better at functioning in life. 

This was a practical issue for James, as he was plagued with pathological indecisiveness and self-doubt all his life, as Louis Menand's wonderful group biography of the Pragmatists recounts. James spent 15 years deciding on a profession; he was speaking of himself when he said "There is no more miserable human being than one in whom nothing is habitual but indecision."

For James, the critical issue for decisive confidence was faith in God. He thought there was no adequate evidence for either believing or disbelieving in God, but that without religious faith, nobody could have the confidence to engage in a life of action or purpose. We would languish in passive despair, sure that our lives had no meaning. 

James defended the choice to believe because he thought the very nature of "belief" or "truth" is rooted in its function as an aid to decision. We are living creatures; we only evolved the capacity to apprehend the world because knowledge helps us make more survival-promoting decisions; a "belief" that doesn't cash out to anticipated experiences that matter to the holder of the belief, is in a sense not a belief at all, but an empty string of syllables he parrots. 

Therefore, a belief in a ground of meaningfulness or worthwhileness in the universe, a belief that anything at all is worth doing, is by the above decision-theoretic standard not only true, but the necessary foundation of all true beliefs.  And this, says James, is essentially what it means to believe in God. 

He's very carefully not saying that you may believe anything that makes you feel better, even if it's false; he's saying that the "belief" that it ever does any good to act is actually true, by the only reasonable and non-circular definition of truth he can come up with.

"A man's religious faith (whatever more special items of doctrine it may involve) means for me essentially his faith in the existence of an unseen order of some kind in which the riddles of the natural order may be found explained...
"Our only way, for example, of doubting, or refusing to believe, that a certain thing is, is continuing to act as if it were not. If, for instance, I refuse to believe that the room is getting cold, I leave the windows open and light no fire just as if it still were warm. If I doubt that you are worthy of my confidence, I keep you uninformed of all my secrets just as if you were unworthy of the same. If I doubt the need of insuring my house, I leave it uninsured as much as if I believed there were no need. And so if I must not believe that the world is divine, I can only express that refusal by declining ever to act distinctively as if it were so...
"So far as man stands for anything, and is productive or originative at all, his entire vital function may be said to have to deal with maybes. Not a victory is gained, not a deed of faithfulness or courage is done, except upon a maybe; not a service, not a sally of generosity, not a scientific exploration or experiment or text-book, that may not be a mistake. It is only by risking our persons from one hour to another that we live at all. And often enough our faith beforehand in an uncertified result is the only thing that makes the result come true. Suppose, for instance, that you are climbing a mountain, and have worked yourself into a position from which the only escape is by a terrible leap. Have faith that you can successfully make it, and your feet are nerved to its accomplishment. But mistrust yourself, and think of all the sweet things you have heard the scientists say of maybes, and you will hesitate so long that, at last, all unstrung and trembling, and launching yourself in a moment of despair, you roll in the abyss. In such a case (and it belongs to an enormous class), the part of wisdom as well as of courage is to believe what is in the line of your needs, for only by such belief is the need fulfilled. Refuse to believe, and you shall indeed be right, for you shall irretrievably perish. But believe, and again you shall be right, for you shall save yourself. You make one or the other of two possible universes true by your trust or mistrust,—both universes having been only maybes, in this particular, before you contributed your act.

Now, it appears to me that the question whether life is worth living is subject to conditions logically much like these. It does, indeed, depend on you the liver. If you surrender to the nightmare view and crown the evil edifice by your own suicide, you have indeed made a picture totally black. Pessimism, completed by your act, is true beyond a doubt, so far as your world goes. Your mistrust of life has removed whatever worth your own enduring existence might have given to it; and now, throughout the whole sphere of possible influence of that existence, the mistrust has proved itself to have had divining power. But suppose, on the other hand, that instead of giving way to the nightmare view you cling to it that this world is not the ultimatum. Suppose you find yourself a very well-spring, as Wordsworth says, of—

"Zeal, and the virtue to exist by faith

As soldiers live by courage; as, by strength

Of heart, the sailor fights with roaring seas."

Suppose, however thickly evils crowd upon you, that your unconquerable subjectivity proves to be their match, and that you find a more wonderful joy than any passive pleasure can bring in trusting ever in the larger whole. Have you not now made life worth living on these terms? 

Courage, here, is the willingness to act under uncertainty, the willingness to live at all rather than committing suicide or passively waiting out your years hoping for death.

Faith, to James, is simply the conviction that something you have not yet seen will someday resolve your uncertainties; that the universe makes sense and your life matters, even if the reasons are outside the frame of your current knowledge. This faith is the difference between seeing a life of hardships as a determined struggle and seeing it as an inescapable hell; it is the difference between seeing your problems and questions as ultimately resolvable and seeing the universe as a perverse, absurd, inherently unintelligible chaos, at every level fractally resisting your comprehension. You cannot prove you don't live in such a universe; but your ability to live, act, and learn, to obtain any good things in life, to have anything beyond depressive nihilism, depends on your believing the opposite.

In a more secular age, you might call this "faith" simply the belief in an intelligible universe in which survival is possible. But even traditional theologies often make sense if you translate "God" to mean "the universe, which is singular, and which exists even outside the frame of our perception and all our mental models." Witness all the prayers and holy texts that say that following God's teachings will help us flourish and make our descendants prosper and multiply; this is simply the claim that understanding the laws of Nature (and the decision-theoretic laws of ethics, or the social/psychological foundations of good societies) is in our long-run best interest. What is "I Am that I Am" but a poetic way of expressing the notion of existence itself, the Universe, the world "out there" that our words and guesses ultimately refer to?

The "faith" or stance that there is one universe, which is ultimately intelligible and habitable, even if we can't see how at the moment, is also held to be important by scientific atheist philosophers like David Deutsch, who calls it the conviction that "problems are solvable". Without a stance of optimism that coherent explanations are possible, no science could actually be done; nobody would ever search for an explanation for the brute facts they observe.

The stance that life matters and that you personally are overall capable of handling life and worthy to make your own decisions is called "self-esteem" in psychology. One can improve it -- and thereby improve performance on a variety of practical tasks -- by writing personal essays about what one values in life. (This is the original, older meaning of self-esteem, before it became redefined as "agreeing with positive statements about oneself," which doesn't correlate with work or school performance, improved mental health, or other practical successes. Exercises in which you praise yourself don't work; exercises in which you think about your values and priorities do.)

The "faith" that the universe makes sense and that it's worthwhile to live actively, making plans and decisions, is one of the key things that is destroyed in PTSD. 

This is not a loss of "confidence" or "trust" in any particular thing, which might well be rational after a traumatic experience (after being raped it is rational to have less trust in your rapist or in people similar to him), but a loss of the ability to have self-confidence generally or to trust in anything generally.  The idea of a generalized "loss of confidence" can't be interpreted as an epistemic belief; it's a change in stance, a change in the ground of all belief or action.

Jenny Holzer's art really captures this aspect of the traumatized experience.

This article investigates the philosophical interpretation of the generalized loss of trust, confidence, or meaning that occurs after trauma.

The Istanbul Protocol, a United Nations guide to documenting cases of torture, claims that torture survivors lose the will to look forward to, or shape, their own future.  "The victim has a subjective feeling of having been irreparably damaged and having undergone an irreversible personality change. He or she has a sense of foreshortened future without expectation of a career, marriage, children, or normal lifespan."

In PTSD, "we experience a fundamental assault on our right to live, on our personal sense of worth, and further, on our sense that the world (including people) basically supports human life. Our relationship with existence itself is shattered. Existence in this sense includes all the meaning structures that tell us we are a valued and viable part of the fabric of life..."

What, exactly, does this “shattering” involve? It could be that experiencing significant suffering at the hands of another person leads to a negation of engrained beliefs such as “people do not hurt each other for the sake of causing pain,” “people will help me if I am suffering,” and so on. Then again, through our constant exposure to news stories and other sources, most of us are well aware that people seriously harm each other in all manner of ways. One option is to maintain that we do not truly “believe” such things until we endure them ourselves, and various references to loss of trust as the overturning of deeply held “assumptions” lend themselves to that view. For example, Herman (1992/1997, p. 51) states that “traumatic events destroy the victim’s fundamental assumptions about the safety of the world,” and Brison (2002, p. 26) describes how interpersonal trauma “undermined my most fundamental assumptions about the world.” An explicitly cognitive approach, which construes these assumptions as “cognitive schemas” or fundamental beliefs, is adopted by Janoff-Bulman (1992, pp. 5–6), who identifies three such beliefs as central to one-place trust: “the world is benevolent;” “the world is meaningful;” and “the self is worthy.”

...

Many of us anticipate most things with habitual confidence. It does not occur to us that we will be deliberately struck by a car as we walk to the shop to buy milk or that we will be assaulted by the stranger we sit next to on a train. There is a sense of security so engrained that we are oblivious to it. Indeed, the more at home we are in the world, the less aware we are that “feeling at home in the world” is even part of our experience (Baier, 1986; Bernstein, 2011). It is not itself an object of experience but something that operates as a backdrop to our perceiving that p, thinking that q or acting in order to achieve r. To lose it is not just to endorse one set of evaluative judgments over another. It is more akin to losses of practical confidence that all of us feel on occasion, in relation to one or another performance. Suppose, for instance, one starts to “feel” that one can no longer teach well. Granted, evaluative judgments have a role to play, but loss of confidence need not originate in explicit judgments about one’s performance, and its nature is not exhausted by however many judgments. The lecture theater looks somehow different – daunting, oppressive, unpredictable, uncontrollable. Along with this, one’s actions lack their more usual fluidity and one’s words their spontaneity. The experience is centrally one of feeling unable to engage in a habitual, practical performance. And loss of confidence can remain resistant to change even when one explicitly endorses propositions such as “I am a good teacher.”
Such an experience can be fairly circumscribed, relating primarily to certain situations. However, we suggest that human experience also has a more enveloping “overall style” of anticipation. This view is developed in some depth by the phenomenologist Husserl (1991). According to Husserl, all of our experiences and activities incorporate anticipation. He uses the term “protention” to refer to an anticipatory structure that is integral to our sense of the present.
Edmund Husserl, much like contemporary neuroscientists, believed there is no perception without anticipation; all sensory perceptions, and indeed all motor actions, involve hypotheses about what we will observe next, or what will happen if we do this or that. The basic function of the brain is to form predictions and measure how and in what way they differ from our subsequent observations. There is no level at which our senses provide us with an unmediated, judgment-free snapshot of reality; it's prediction and error-correction all the way down.

Loss of trust in our ability to make correct predictions thus means a generalized weakness in our ability to perceive, think, and act.  It is loss of trust in the intelligibility of the universe and in our own ability to act to achieve goals; it is loss of trust that the future can be predicted, and thus that there's any point in planning or investing in the future. It is overall a loss of meaningfulness, a loss of the sense that anything has a point, a loss of will to act, a loss of confidence.  In other words, the problem caused by trauma is exactly the problem that confronted William James.

It's a common observation that the risk of PTSD is predicted not so much by the severity of the trauma as by the degree to which the victim is persuaded to deny her own experience: pressured (by abusers or bystanders) to believe that it didn't really happen, that it wasn't so bad, or that she deserved it. It's not surprising that this particular experience damages one's trust in one's own ability to make sense of reality or rationally assess risk to oneself.

Confidence Without Self-Delusion

The above model of how confidence works makes it clear that we don't have to be stupid or delusional to get most of the benefits of high confidence.  

Confidence is not a belief, in the ordinary sense.  It is not the belief that you are beautiful or brilliant or that your plans will work or that your ideas are right.  It is a stance of willingness to act, decisively and uninhibitedly. You can make a choice to act, without changing your assessment of any hypothesis; in machine-learning terms, confidence is a hyperparameter, an error threshold for "enough certainty" required before taking action or asserting a conviction, which you can lower if it is too high.

Eliezer Yudkowsky has remarked that he often thinks projects are worth trying on net despite only having, in his estimate, a 10% probability of success; while other people, in order to be motivated to try at all, need to psych themselves up into the unrealistically "confident" belief that success is virtually certain.  Most people conflate self-esteem or global self-confidence or courage, the willingness to try, with over-optimism about one's chances; but this is a needless error.
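To make the distinction concrete, here's a toy calculation (my own illustration; the payoff and cost numbers are hypothetical) showing how an honest 10% estimate plus a rational decision rule reaches the same "try" decisions as inflated confidence, without its downside:

```python
# Confidence as a decision rule, not a belief: keep the probability
# estimate honest, and act whenever expected value exceeds the cost.

def should_try(p_success, payoff, cost):
    """Act iff the expected payoff of trying exceeds the cost of trying."""
    return p_success * payoff > cost

# An honest 10% estimate still says "try" when the payoff is large
# relative to the cost of the attempt:
should_try(0.10, payoff=100, cost=5)    # True: expected value ~10 beats a cost of 5
should_try(0.10, payoff=100, cost=20)   # False: expected value ~10 doesn't cover 20

# Psyching yourself up to a delusional 95% reaches the same "try"
# decision in the first case, but also green-lights the genuinely bad bet:
should_try(0.95, payoff=100, cost=20)   # True -- the cost of self-delusion
```

The honest and the self-deluded agent behave identically on good bets; they diverge only on the bad ones, which is exactly where honesty pays.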

The need for external "validation" of one's basic worth as a person is likewise an error of looking for evidence of one's worthiness, when what you really want is permission to act as you desire; or, one might cash this out as a decision to act as you desire. Repeatedly looking for validation when you aren't really seeking new information -- when you already know what answer you expect and want -- isn't going to work, because data only conveys information (in the Shannon sense) if it's surprising. You can't come to "believe in yourself" by spamming your brain with the same data over and over. But if you know you want more self-confidence, you already have all the data you need to know more confidence would be good for you. You don't need to seek any more reassurance; you need to unilaterally change your stance to a decisive one.
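The Shannon point can be put in one line of math: the information carried by an observation is its surprisal, -log2(p), which goes to zero as the observation becomes fully expected. (A hypothetical illustration, not from the original text:)

```python
import math

def surprisal_bits(p):
    """Information content, in bits, of observing an event of probability p."""
    return -math.log2(p)

surprisal_bits(0.5)    # 1.0 bit: a fair coin flip genuinely tells you something
surprisal_bits(0.99)   # ~0.014 bits: reassurance you already expected is nearly empty
```

The hundredth repetition of a compliment you solicited has p near 1, so it carries essentially zero bits; that's the formal sense in which you can't update toward self-confidence on data you arranged in advance.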

Easier said than done! But here are some tactics that have worked for me:

1.) Writing about my values! It's the time-honored, evidence-based trick for increasing self-esteem, and it works for me.  (Yes, this blog post is itself an example.)

2.) Unilaterally doing something I feel like doing in the moment. (Usually a bodily craving like food or exercise, or a minor breach of social propriety like making ugly faces or shouting in my own home.)  

If I'm wrapped up in an anxious obsession with being liked or validated or given approval, I can break that cycle by proving to myself that I have "permission" to do whatever I feel like, except for a really sparse set of ethical and practical constraints that I'm truly committed to. I don't have to be good, in the sense of an identity or "personal brand"; there is what I impulsively feel like doing, there is what I really absolutely must do, and that's all. 

(I usually fast, as is traditional, on the Jewish holiday of Yom Kippur, the Day of Atonement; but this year I actually got a lot of mileage and dare-I-say spiritual growth out of breaking the rule and eating food, when I was falling into an unhealthy spiral of shame and resentment about the idea of "being good," and becoming unpleasant to my family due to hunger. Eating made me a better mom and wife that day, and the insight catalyzed my being a better friend to my friends in the following few days. Real ethics isn't about being any particular way, in the sense of an aesthetic or persona; that's fake "ethics," which is advertising. If there's anything you actually have to do, in reality rather than in a performative sense, then it will have a function ascertainable through ordinary cause and effect. The goal of real ethics is not to maintain a goody-two-shoes persona but to exert agency towards good outcomes. Violating a taboo, on an occasion when the taboo-violation directly helps someone and doesn't break any principle you're truly serious about, can help concretize this to yourself.)

False Confidence As Fraud

There's another kind of benefit self-deluded confidence can have, however, that courage and decisiveness by themselves cannot match; it can be part of a coalitional Stupid Strategy, as mentioned in the previous post. False beliefs are a luxury, an ornament, a costly signal that you are in a secure enough social position that you do not need to be realistic; if you fail, someone else will bail you out.

There is a common phenomenon that "mediocre white men" are given advantages for being unrealistically overconfident, while more-competent, humbler, more serious people who have less privilege (women, upwardly-mobile lower-class people, foreigners and immigrants, especially East Asians today and Jews historically) are seen as less appealing, less "likable", dispreferred as recipients of resources and privileges.  

Part of this is simply that the "overconfident" privileged people are acting on the correct amount of ambition for optimal outcomes, and others would do well to emulate their confidence. We should apply to more things, speak up more, negotiate more for ourselves, try more new things.

Another part of it is that having more resources makes it rational to take more risks; if you have savings or an inheritance, it actually is less risky to start a business. People born rich are free to be bolder; that's part of what wealth means. That's an argument that more people should have access to enough wealth to enable them to take useful calculated risks, but not that there's anything wrong with taking advantage of your good fortune, should you happen to have it.

But a third, perverse possible component of the "overconfidence of the privileged" is that they are signaling their ability to be wrong so they can align in a "too big to fail" coalition that is parasitic on the more-productive, less-grandiose people's work. For this purpose, it isn't enough to just be ambitious, bold, or confident; you have to be shamelessly wrong, to prove your membership in the Stupid Coalition of those privileged to be "secure" in their entitlement to valuable resources that other people produce.  Wrongness -- particularly in the form of excess confidence, where you make bets that would be disastrous in expectation for yourself unless someone else bailed you out -- is an unfakeable signal that you are sure other people will bail you out.  It's a form of playing Chicken with the (social) universe.

Like the tail of the peacock, irrational overconfidence is a self-imposed handicap; it's a gloriously flamboyant waste of resources, as a way of proving its bearer has resources to burn.

To the extent that this is true, we ought to see supernormal returns from investing in individuals who have the same apparent level of performance (in profits, product quality metrics, test scores, whatever) but a.) come from less-privileged backgrounds (women, minorities, LGBT individuals, people from working-class families) or b.) have a manner that's more serious, modest, down-to-earth, and less entitled or grandiose. There's actually some evidence to that effect: women-led firms have more than twice the average annual rate of return of companies worldwide (24% vs. 11%).

The logic is, someone who's performing at the top, but is "spending" less than her equally high-performing peers on wasteful display signaling, is a much better bet. Your dollar goes farther, in the long run, if you aren't spending half of it on peacock tails.

An actual peacock bears the weight of its tail with its own strong muscles. But the Stupid Coalition's "peacock tail" is supported by too-big-to-fail dynamics that rely on someone outside the coalition bailing them out. If you are confident you can find a "greater fool" to enter your Ponzi scheme, or that the use of force will ensure payment of your unsustainable debts (e.g. the government will print more money to continue funding your project, or a Saudi sovereign-wealth fund backed ultimately by violence will invest in the next round), then peacock tails can be a good investment -- if you think the bubble, or Ponzi scheme, or public trust in government, will hold. If there's enough risk that the whole system will crash, "too big to fail" or not, then peacock tails are a terrible investment and you'll do much better optimizing for long-run, resource-efficient value creation.

Is Stupidity Strength?

Definition By Examples

There is a meme that stupidity is associated with strength -- that being as intelligent as possible comes at a cost in power, money, happiness, or practical advantage.

Some instances of this include:

  • The trope of the "nerd" -- a stereotype that bundles intellect with social ostracism and physical weakness 
  • The lament that ignorance is bliss, as in Ecclesiastes: "For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow."
  • The truism that the way to succeed in a practical endeavor is to "not overthink it."

This pattern is not a human universal. Plato and Aristotle would have found it alien. Francis Bacon and Sun Tzu were firmly of the other opinion: that understanding the world would lead to mastery, including military victory.

I'm told, by people who grew up in China, England, France, and Germany, that they don't have the concept of a "nerd" as we do in America. There's no presumption that good students tend to be unpopular or unathletic. In France, it's even fashionable to profess an interest in math or philosophy; they have (trashy) pop-philosophy magazines the way we have pop-psychology magazines.

My guess is that the trope of the ineffectual intellectual in America starts with the Pragmatist movement at the turn of the 20th century, which often presupposed a dichotomy between thinking and doing.

Consider Theodore Roosevelt's famous speech to students at the Sorbonne:

 A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not, as the possessor would fain think, of superiority, but of weakness. 
. . .

Shame on the man of cultivated taste who permits refinement to develop into a fastidiousness that unfits him for doing the rough work of a workaday world. Among the free peoples who govern themselves there is but a small field of usefulness open for the men of cloistered life who shrink from contact with their fellows. Still less room is there for those who deride or slight what is done by those who actually bear the brunt of the day; nor yet for those others who always profess that they would like to take action, if only the conditions of life were not what they actually are. The man who does nothing cuts the same sordid figure in the pages of history, whether he be cynic, or fop, or voluptuary. 

. . .
Let those who have, keep, let those who have not, strive to attain, a high standard of cultivation and scholarship. Yet let us remember that these stand second to certain other things. There is need of a sound body, and even more need of a sound mind. But above mind and above body stands character—the sum of those qualities which we mean when we speak of a man's force and courage, of his good faith and sense of honor. I believe in exercise for the body, always provided that we keep in mind that physical development is a means and not an end. I believe, of course, in giving to all the people a good education. But the education must contain much besides book-learning in order to be really good. We must ever remember that no keenness and subtleness of intellect, no polish, no cleverness, in any way make up for the lack of the great solid qualities. Self-restraint, self-mastery, common sense, the power of accepting individual responsibility and yet of acting in conjunction with others, courage and resolution—these are the qualities which mark a masterful people.
Certainly these character traits are important, but notice how Roosevelt takes for granted that they are a separate magisterium from the intellect, as opposed to virtues whose necessity can be appreciated through reason or which can be developed by applications of intellect.

John Dewey, the Pragmatist philosopher of education, was deeply concerned that too much conceptual abstraction in education would produce impractical, antisocial intellectuals:

The gullibility of specialized scholars when out of their own lines, their extravagant habits of inference and speech, their ineptness in reaching conclusions in practical matters, their egotistical engrossment in their own subjects, are extreme examples of the bad effects of severing studies completely from their ordinary connections in life.

Philosopher and psychologist William James often opposed the "intellect," which he supposed to be abstract and disconnected from real life, to the "will," which enables the courage to act; e.g.:

"Or what can any superficial theorist's judgment be worth, in a world where every one of hundreds of ideals has its special champion already provided in the shape of some genius expressly born to feel it, and to fight to death in its behalf? The pure philosopher can only follow the windings of the spectacle, confident that the line of least resistance will always be towards the richer and the more inclusive arrangement, and that by one tack after another some approach to the kingdom of heaven is incessantly made.

James returns again and again to the shortcomings of "superficial" theory, devoid of motivation or practical application; often he makes valid points, but always he calls for endorsing the will as a counterbalancing supplement to the intellect, not an essential component of it.

There's a persistent anti-intellectual strain in Pragmatist writing that paints "superficial theorists" as cowardly, ineffectual, and emotionally barren, in need of balancing with "practicality." It survives today in the self-help tropes that people need to get "out of their heads." But the Pragmatists did retain appreciation for experimental science and applied craft (and, of course, James helped found the modern American research university).

If you want to see the extreme version of disdain for all thought and peaceful production, you have to look beyond Pragmatism to fascism, with its overt rejection of sense-making. To a fascist, the irrational is always more potent and "magical" than the rational; the mundane, boring correctness of a shopkeeper's arithmetic marks him as simultaneously pathetic and sinister. Pathetic, because "mere" logic cannot move masses of people to collective battle-frenzy, which is the fascist's source of power and only conception of strength; sinister, because "mere" logic cannot be moved by social contagion and thus its wielder, being difficult to seduce into unity, is a potential threat.

Milder, more reasonable versions of the anti-intellectual hypothesis correctly note that smart people aren't all supermen, that practical experience and motivation matter too (see Scott Alexander's early critique of "extreme rationality.")  Like the Pragmatists, Alexander thinks that reason isn't enough to make you win, that it has to be supplemented with distinct, independent virtues.  The extreme version of the anti-intellectual thesis goes farther and holds that you could actually win more -- become more charismatic, more decisive, more powerful -- by becoming dumber. 

Scott Adams is an example of a modern advocate of extreme irrationalism:

"People are not wired to be rational. Our brains simply evolved to keep us alive. Brains did not evolve to give us truth. Brains merely give us movies in our minds that keeps us sane and motivated. But none of it is rational or true, except maybe sometimes by coincidence.” 

“The evidence is that Trump completely ignores reality and rational thinking in favor of emotional appeal,” Adams writes. “Sure, much of what Trump says makes sense to his supporters, but I assure you that is coincidence. Trump says whatever gets him the result he wants. He understands humans as 90-percent irrational and acts accordingly.”

Adams adds: “People vote based on emotion. Period.”

“While his opponents are losing sleep trying to memorize the names of foreign leaders – in case someone asks – Trump knows that is a waste of time … ,” Adams writes. “There are plenty of important facts Trump does not know. But the reason he doesn’t know those facts is – in part – because he knows facts don’t matter. They never have and they never will. So he ignores them.

Trump “doesn’t apologize or correct himself. If you are not trained in persuasion, Trump looks stupid, evil, and maybe crazy,” Adams writes. “If you understand persuasion, Trump is pitch-perfect most of the time. He ignores unnecessary rational thought and objective data and incessantly hammers on what matters (emotions).”

In other words, Adams thinks Trump's indifference to facts, his irrationality, is a strength.

How Can Stupidity Be Advantageous?

Advocates of an extreme irrationalist or anti-intellectual view have an obvious challenge in arguing their case. Knowledge is power; there are obvious ways in which information can be turned to advantage. Prima facie, making yourself more ignorant should mostly harm you, not benefit you.

Obviously it's possible to have useless knowledge which is not worth the effort of acquiring. But that's very different from knowledge being actively harmful. How can knowing more, or better understanding the logical implications of what you know, cause you to make worse decisions? If you maintain ignorance of something, the unknown thing could hurt you in unexpected ways; whereas if you regret learning something, you can at least in principle just go back to behaving as you would have if you hadn't learned it.

Ignorance constrains your options.  So why seek it?

Well, the usual reason people seek constraint is as a commitment device.

If you don't know the secret codes, you can't reveal them under torture.

More generally, there are all sorts of things you might be pressured by others to do, which you can excuse yourself from doing if you make sure you don't know how.  Witness all the people who are "just hopeless" at housework or administration.

But even this is not a good reason to seek general ignorance or irrationality.  Granted it may be strategic to avoid gaining some particular bit of knowledge or skill, which is only useful for things you'd rather not do; but surely it can't be advantageous to cripple a fully general skill like logic or arithmetic! You need those too often! How can the advantage of the commitment device outweigh the loss from actually being bad at thinking?

The point is, irrationality is not an individual strategy but a collective one. Being bad at thinking, if you're the only one, is bad for you. Being bad at thinking in a coordinated way with a critical mass of others, who are bad at thinking in the same way, can be good for you relative to other strategies. How does this work? If the coalition of Stupids are taking an aggressive strategy that preys on the production of Non-Stupids, this can lead to "too big to fail" dynamics that work out in the Stupids' favor. 

"Here's a large mass of us who are stupid in the exact same way. This means when we fail, we all fail at once.  Now we're here, we're hungry, we're angry, and we're literally incapable of solving our own problems.  You really want to see what happens if you don't bail us out?"

Correlation of risk can lead to security, in this way. Make a mistake alone and you have to bear the cost; make a mistake along with an aggressive crowd and someone will have to rescue you.  
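A toy calculation (my own illustration; the numbers are made up) shows why the correlation, not the mistake itself, is what buys this security:

```python
# Ten actors who each fail with 10% probability. If their mistakes are
# independent, simultaneous mass failure -- the scenario that forces a
# bailout -- essentially never happens. If they are all stupid in the
# exact same way (perfectly correlated), it happens 10% of the time.

p_fail = 0.1   # each actor's individual chance of failing
n = 10         # actors in the coalition

p_mass_failure_independent = p_fail ** n   # about 1e-10: no bailout leverage
p_mass_failure_correlated = p_fail         # 0.1: "we all fail at once"
```

Correlated stupidity converts a negligible tail risk into a routine event, one large enough to give the coalition real leverage over everyone who would bear the cost of letting it fail.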

As the saying goes, if you owe the bank a million dollars, you have a problem; if you owe the bank a billion dollars, the bank has a problem. The bailouts of the 2008 financial crisis are an example of this phenomenon, as is the medieval practice of expelling the Jews from a kingdom once the king could not afford to pay his debts to Jewish moneylenders.

Less obviously, normalization of deviance is an example of this "stupid strategy." Organizations have standards for safety, quality control, and so on; in a functional organization, if a single worker falls short of the standard, she will be less professionally successful or even face disciplinary action. In a dysfunctional organization, violation of the standard gradually becomes so commonplace that it becomes normative. Nobody actually follows the rules; there's tacit common knowledge that the rules are unreasonably stringent and "just for show," and that people can't be expected to literally follow them; after all, if enough people are violating the rules, you can't just fire all of them! But to the extent that the organization's survival actually depends on those standards (e.g., in a company whose revenues depend on its products meeting certain quality standards), the rule-breaking strategy is parasitic on the minority of workers who actually try to meet the standards and have to clean up the rule-breakers' messes. The rule-breakers get job security and advancement without having to make the effort to meet standards -- until standards fall so far that the whole organization collapses, at which point they can claim it wasn't their fault, since they were behaving "normally." The rule-breaking coalition has become "too big to fail," and the (invariably less senior) rule-followers get screwed.

Note that a strategy doesn't have to produce good outcomes to be evolutionarily stable. It could be much better to live in a less stupid society and still, given one's current social environment, locally optimal to join the Stupids.
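The logic of a "stable but bad" equilibrium can be sketched as a toy payoff model. All the numbers and thresholds below are illustrative assumptions, not anything from this essay; the point is only the shape of the incentives:

```python
# Toy population game: each person plays "stupid" (break the rules and rely
# on the crowd's too-big-to-fail protection) or "rules" (follow standards).
# p = fraction of the population playing "stupid".

def payoff(strategy: str, p: float) -> float:
    crowd = p >= 0.5  # hypothetical threshold at which the crowd is "too big to fail"
    if strategy == "stupid":
        # Rule-breakers are bailed out only when there are enough of them;
        # a lone deviant just gets disciplined.
        return 2.0 if crowd else 0.0
    else:
        # Rule-followers thrive in a functional organization, but end up
        # cleaning up messes in a dysfunctional one.
        return 1.0 if crowd else 3.0

# In an all-Stupid society (p = 1), switching to rule-following pays less,
# so joining the Stupids is locally optimal:
assert payoff("stupid", 1.0) > payoff("rules", 1.0)

# Yet everyone would be better off in the all-rules society:
assert payoff("rules", 0.0) > payoff("stupid", 1.0)
```

Both all-rules and all-Stupid are stable under these (made-up) payoffs; the all-Stupid equilibrium is strictly worse for everyone, but no individual can profitably leave it alone.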



Against Multilateralism

Unilateral actions are those that a single person, or small group of people, can take without consulting anybody else.

Multilateral actions are the opposite: actions that require the cooperation and approval of many people.

For instance, the "freedom to roam" or allemansrätten in Swedish, is a unilateral right in many Scandinavian countries -- any person can walk freely in the countryside, even over other people's land, without having to ask permission, provided he or she does not disturb the natural environment.  You don't have to "check in" with anyone; you just take a walk. 

People often mistrust unilateral actions even when at first glance they seem like "doing good":

  • Dylan Matthews at Vox opposes billionaire philanthropy (a unilateral donation to charitable causes the billionaire prefers) on the grounds that it undermines democracy (a multilateral process in which many voters, politicians, and government agencies deliberate on how money should be spent for the common good).
  • People are alarmed by geoengineering, a collection of technological methods for reversing global warming which are within reach of a single company acting unilaterally, and much more comfortable with multilateral tactics like international treaties to limit carbon emissions.
  • Gene drives that could wipe out malaria-causing mosquitoes could be a unilateral solution to eradicating malaria, unlike the multilateral solution of non-governmental aid organizations donating to malaria relief.  Gene drives are controversial because people are concerned about possible risks of releasing genetically modified organisms into the environment -- but they have the potential to eliminate malaria much faster and more cheaply than anything else.
  • Paul Krugman is troubled by the prospect of billionaires funding life extension research (a unilateral approach to solving the problems of age-related disease) because he's concerned they would ensure that only a privileged few would live long lives.

Often, unilateral initiatives are associated with wealth and technology, because both wealth and technology extend an individual's reach. 

I didn't really "get" why biotechnology innovation scared people until I watched the TV show Orphan Black.  There's a creepy transhumanist cabal in the show that turns out (spoiler!) to be murdering people. But before we know that, why is the show leading us to believe that this man onstage talking about genetic engineering is a bad guy?

I think it's about the secrecy, primarily. The lack of accountability. The unilateralism. We don't understand what these guys are doing, but they seem to have a lot of power, and they aren't telling us what they're up to.

They're not like us, and they can just do stuff without any input from us, and they have the technology and money and power to do very big things very fast -- how do we know they won't harm us?

That's actually a rational fear. It's not "fear of technology" in some sort of superstitious sense.  Technology extends power; power includes the power to harm.  The same technology that fed a starving planet was literally a weapons technology.

Glen Weyl's post Why I Am Not A Technocrat basically makes this point.  Idealistic, intelligent, technologically adept people are quite capable of harming the populations they promise to help, whether maliciously or accidentally. He gives the examples of the Holodomor, a man-made famine created by Soviet state planning, and the rapid, US-economist-planned introduction of capitalism to Russia after the fall of the Soviet Union, which he claims was mismanaged and set the stage for Putin's autocracy.

In economic terms, Glen Weyl's point is simply that principal-agent problems exist. Just because someone is very smart and claims he's going to help you, doesn't mean you should take his word for it.  The "agent" (the technocrat) promising to act on behalf of the "principal" (the general public) may have self-interested motives that don't align with the public's best interest; or he may be ignorant of the real-life situation the public lives in, so that his theoretical models don't apply.

I think this is a completely valid concern.

The most popular prescription for solving principal-agent problems, though, especially when "technology" is mentioned, is simple multilateralism, what Weyl calls "design in a democratic spirit."  That is: include the general public in decisionmaking. Do not make decisions that affect many people without the approval of the affected populations. 

"Democratic designers thus must constantly attend, on equal footing, in teams or individually, to both the technical and communicative aspects of their work. They must view the audience for their work as at least equally being the broader non-technical public as their technical colleagues. They must view a lack of legitimacy of their designs with the relevant public as just as important as technical failures of the system."

In other words: if the general public isn't happy with a thing, it shouldn't be done. "Thin" forms of public feedback like votes or market demand are not enough for Weyl; if there's "political backlash and outrage" that in itself constitutes a problem, even if a policy is "popular" in the sense of winning votes or consumer dollars.  The goal for "democratic designers" is to avoid any appreciable segment of the public getting mad at them.

This is a natural intuition. Govern by consensus. Include all stakeholders in the decision process. It's how small groups naturally make decisions. 

Inclusion and consensus have a ring of justice to them. They make for good slogans: "No taxation without representation." "Nothing about us without us." And they really do provide a check on arbitrary power.

It is also extremely expensive and inhibits action.

I don't think you can have a contemporary level of technology and international trade that follows the rule "everyone whose life is affected by a decision should be included in the decision process." Technology and trade allow strangers to affect our lives profoundly, without ever asking us how we feel about it. Many people are unhappy that globalization and technology have altered their traditions. They have real problems and real cause for complaint. And yet, I'm pretty sure that a majority of the human race would have to die in order to get us "back" to a state where nobody could change your life from across the globe without your consent. If you want the world to be governed wholly by consensus, I think you have to be something like an anarcho-primitivist -- and that carries some brutal implications that I don't think Weyl would endorse.

The good news is, multilateral or democratic consensus is not the only mechanism for solving principal-agent problems.

I can think of three other categories of ways to put checks on the power to harm.

1. Law
If you define certain types of harm as unacceptable, you can place criminal or civil penalties on anybody who commits illegal acts.
This is more efficient than consensus because it only imposes costs on illegal actions, while consensus imposes a cost on all actions (the time and resources spent on deliberation and the risk that consensus won't be achieved).

The difficulty, of course, is ensuring that the legal and judicial system is fair and considers everyone's interests. In democracies, we use deliberative consensus as part of the process for writing and approving laws. But that's still a lot more efficient than using consensus directly for all decisions in place of laws.

2. Self-Protection
This includes all situations where the potential victims of harm have a readily available means to protect themselves from being harmed.
Again, it's more efficient than consensus because it doesn't impose costs on all actions, just harmful ones. It has an advantage over law in that it doesn't require anyone to specify the types of harm beforehand -- human life doesn't always fit neatly into a priori systems. It has a disadvantage in that, by default, the potential victims bear the costs of protecting themselves, which seems unfair; but laws and policies which lower the cost of self-protection or place some responsibility on perpetrators can mitigate this.

Self-protection includes:
  1. self-defense (as protection against violence)
  2. security protections against theft or invasion of privacy (locks, cryptography)
  3. various forms of exit (the right and opportunity to unilaterally leave a bad situation)
    1. the choice not to buy products you don't like and buy alternatives
    2. the choice to leave a bad job and find a better one
    3. the choice to leave one town or country for another
    4. the choice to leave an abusive family or bad relationship
  4. disclosure requirements on organizations, or free-speech rights for whistleblowers and journalists, that enable people to make informed decisions about who and what to avoid
  5. deliberately designing interventions to be transparent and opt-in, so that if people don't like them, they don't have to participate

3. Incentive Alignment

This includes things like equity ownership, in which the agent acting on behalf of a principal is given a share of the benefits he provides the principal. It also includes novel ideas like income share agreements, which introduce equity-like financial structures to human endeavors like education that haven't traditionally incorporated them.

This has the advantage over consensus that you don't have to pay the costs of group deliberation for every decision, and the advantage over law that it doesn't require anyone to enumerate beneficial behaviors a priori -- the agent is incentivized to originate creative ways to benefit the principal. The disadvantage is that it's only as good as the exact terms of the contract and the legal system that enforces it, both of which can be rigged to benefit the agent. 

As with criminal law, consensus deliberation mechanisms can be used in a targeted way, on the "meta-problem" of defining the "rules of the game" in ways that are accountable to the interests of all citizens. We can have public deliberation on the question of what kinds of contracts should be enforceable, but then let the contractual incentives themselves, rather than costly mass deliberation, govern day-to-day operational decisions like those involved in running a company.


The Case For (Controlled) Unilateralism

It's clear that principal-agent problems exist. But we don't have to go back to primitive government-by-consensus in order to prevent powerful people from taking advantage of others. There are lots and lots of legal and governance mechanisms that handle principal-agent problems more efficiently than that.

Moreover, government-by-consensus isn't even that safe. It's vulnerable to demagogues who falsely convince people that their interests are being represented. In fact, I think a lot of highly unilateral, technological initiatives are getting pushback not because they're uniquely dangerous but because they're uniquely undefended by PR and political lobbying.  

We need unilateral solutions to problems because consensus and coordination are so difficult. Multilateral solutions often fail because some party who's critical to implementing them isn't willing to cooperate.  For instance, voters around the world simply don't want high carbon taxes. Imposing a coordination-heavy project on an unwilling population often takes a lot of violence and coercion.

Technology, by definition, reduces the costs of doing things. Inventing and implementing a technology that makes it easy to solve a problem is more likely to succeed, and more humane, than convincing (or forcing) large populations to make large sacrifices to solve that problem.

Of course, I just framed it as technology "solving problems" -- but technology also makes weapons. So whether you want humanity to be more efficient or less efficient at doing things depends a lot on your threat scenario.

However, I see a basic asymmetry between action and inaction.  Living organisms must practice active homeostasis -- adaptation to external shocks -- to survive. If you make a living thing less able to act, in full generality, you have harmed it. This is true even though it is possible for an organism to act in ways that harm itself.

The same is true to a much greater degree for human civilization. "Business as usual" for humanity in 2019 is change. World population is growing rapidly. Our institutions are designed around a prediction of continued exponential growth in resources.  A reduction in humanity's overall capacity to do things is not going to result in peaceful stability, or at any rate, not before killing a lot of people.

Do we want to guard against powerful unilateral bad actors? Of course. We need incentives to constrain them from hurting others, and that's the task of governance and law.  But the cost of opposing unilateralism indiscriminately is too high. We need mechanisms that are targeted, that impose costs especially on harmful actions, not on beneficial and harmful actions alike.