Sarah Constantin -- srconstantin.posthaven.com

Tangled Thinking 2: Motivated Cognition and Its Opposite (2020-02-24)

Motivated cognition is the state of emotionally needing to believe something is true, whether it actually is true or not.

I've found that a good kinaesthetic metaphor for motivated cognition is pressure.  If you're forcing things with your mind -- if you're going "it's GOT to be this way, or ELSE" -- then you aren't actually open to the truth being whatever it might happen to be.  

Defensiveness, anxiety, revulsion, despair, or eagerness to please or be acceptable can motivate you to believe that the convenient and pleasant thing is true -- or that the awful worst-case scenario is -- or both at once (e.g. anxiety can make you fear the worst but flinch from it and profess the best-case scenario).

It's not motivated cognition for motivation to be involved in cognition.  We think most clearly, in fact, when we're motivated to do so; for instance, someone who stands to make money by making the correct prediction will be more motivated to be correct than someone who's merely having a conversation.  

In fact, I think motivation is essential to all the words we use to talk about thinking well  -- rationality, wisdom, objectivity, science, empiricism, common sense, "Looking", etc. These words get corrupted by connotations of smugness, coldness, superiority, authoritarianism, etc, and new words have to be continually invented to point at the same thing the old words were intended to point at.  The thing itself is, perhaps, best described as "thinking in the way everyone naturally does when they actually care about the object of their thought."  

If you care about the thing in the real world, you will not want to be wrong about it; a delusion, however pleasant, won't give you what you want.  You still can be wrong about it, of course, but your incentives are to be as correct as you can be.  A certain amount of pretense and posturing and game-playing may drop away suddenly when, for instance, you find your child's safety is at stake; suddenly it is vitally important to get real.

(Of course, phrases like "get real", "be sensible", "be reasonable", often are used to mean "shut up and do what I tell you", which is not the thing. A person who Actually Cares about getting something done may often be perceived as an unreasonable or irrational person, because she is doing something that doesn't meet with everyone's approval.)

There's something related about words like "literally", "truly", "actually", "really", "very", "honestly" -- and it's telling that over time language evolves to turn them all into mere intensifiers instead of markers of literal, exact truth.  It's hard to find a way to phrase in words "I'm pointing at reality now" -- as opposed to pointing at a model of reality, playing a game with language, or speaking 'in character' as the persona you want to embody right now.

Notions like "rationality" are attempts to encourage people to think and speak literally rather than performatively.

It seems like a mistake to present these as a specialized discipline to be taught, rather than as a stance that most people already adopt by default from time to time. Doing science doesn't actually involve going through The Scientific Method as you're taught in elementary school; but while there may not really be a Scientific Method, there is definitely a scientific mindset. It's the same mindset you have naturally when you're curious.

If you try to codify how people think when they're being curious, it winds up sounding like nothing at all.  "Just, y'know, think! Look! Care! Try!"

Or it comes across as condescending: "Most people go through life never actually trying! You should actually try!" There's not much content to this, but the "actually" is gesturing at something: the rubber meeting the road, the moon and not the pointing finger. 

By contrast, motivated cognition is being motivated to have certain cognitions, inside your head, rather than being motivated to seek outcomes out in the world.

It's kind of weird that we have this feature at all. Why would it be evolutionarily adaptive? Or, perhaps it's not adaptive but it's a 'natural flaw' that most possible ways to make a brain would fall into?

Why do we (often) care more about the insides of our heads than what's going on outside them?

Tangled Thinking 1: Mental Objects, Contradictions, Reconciliation (2020-02-21)

Mental Objects

Let's call a mental object anything that you're aware of, anything that is in your conscious mind. A sense perception, a feeling, a thought, a concept, anything that is under the spotlight of your consciousness at a given moment. 

Tautologically, if you have any access to something in the world, it must be a mental object; it must enter your perception.

So how do you even know there is an external world?  Why isn't it "all in your mind"?  What's even the difference between a totally imaginary world playing out in your consciousness, and a "real world" that's "out there"?

I'm not well versed in the history of philosophy or the many people who have attempted to answer this question, but I think there's a pretty straightforward way to approach it.

Difference Detection

One of the things your mind can do is ask the question "same or different?" about two mental objects.

"Same or different?" can apply on multiple meta-levels.  You can say "do these propositions contradict?"  You can say "do these motives conflict?"  You can say "does my observation match my prediction?" 

The "evidence" for this proposition is just that you can try it out in your own mind and see if it works.  It's also analytically necessary that this be true if we believe that people have goals.

Any kind of optimizing behavior can be viewed as optimizing to minimize a distance or difference between the desired and actual situation. Which means you need to be able to detect differences. If we are goal-achieving machines, we must also be difference-detecting machines.
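The claim that goal-seeking reduces to difference-detection can be made concrete with a toy feedback loop (illustrative Python; the names, gain, and tolerance are assumptions for the sketch, not anything from the post):

```python
def pursue_goal(current, target, gain=0.5, tol=1e-6, max_steps=1000):
    """Reach `target` purely by repeatedly detecting and shrinking
    the difference between desired and actual state."""
    for _ in range(max_steps):
        difference = target - current      # the "same or different?" check
        if abs(difference) <= tol:         # no detectable difference: done
            break
        current += gain * difference       # act to reduce the difference
    return current
```

Note that the loop never needs an absolute notion of "good"; the only primitive it uses is the signed difference, which is the point of the argument above.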

Moreover, there are some computational models that fit psychophysics data quite well, like Waltz filtering, in which the process of visually parsing a line drawing is viewed as an optimization problem: minimize the number of inconsistencies in interpreting the drawing as a 3-d object. (Irreconcilable inconsistencies lead to impossible figures, where we "see" the figure as representing a 3-d form locally, but can't extend that same interpretation consistently to the whole figure.)
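Waltz's actual algorithm uses a catalog of physically realizable junction labelings for line drawings; the sketch below (illustrative Python, with made-up edge names and labels) shows only the core pruning step -- the same "same or different?" check applied repeatedly until inconsistencies are resolved or exposed:

```python
from collections import deque

def waltz_filter(domains, constraints):
    """Prune labels that no neighboring element can agree with.

    domains:     {var: set of candidate labels}
    constraints: {(a, b): set of allowed (label_a, label_b) pairs}
    """
    queue = deque(constraints)
    while queue:
        a, b = queue.popleft()
        allowed = constraints[(a, b)]
        # Discard labels of `a` with no compatible label in `b`'s domain.
        pruned = {la for la in domains[a]
                  if not any((la, lb) in allowed for lb in domains[b])}
        if pruned:
            domains[a] -= pruned
            # Shrinking `a` may invalidate labels elsewhere: re-check
            # every arc whose consistency depends on `a`.
            queue.extend(arc for arc in constraints if arc[1] == a)
    return domains
```

If the constraints admit a consistent interpretation, the domains settle into it; if they don't, some domain empties out, which is the computational analogue of an impossible figure.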

Psychophysics data also tells us that our sensory perceptions are keyed to differences, not absolute magnitudes; far more people have relative pitch than absolute pitch, for instance, and our color perceptions are relative to background and lighting, not absolute.  Detecting "same or different?" seems to be more "primitive" or "fundamental" an operation in the brain than detecting "how much?"  In many cases, at least, it seems that we have "difference detectors" and construct absolute measurements out of those, rather than having "absolute magnitude detectors" and computing differences by subtracting them.

Resolving Contradictions

"A" and "Not A" is a flat contradiction -- an impossibility.  The mind boggles. It cannot be.

But not all differences, of course, are impossibilities. It's possible to notice that a dress has fabric in two different colors, and this doesn't slow us down a bit.  

What happens is a kind of "going meta", I think.  You say "oh, no problem: it's A here, and not-A there."  Distinction by dividing up the mental world. "Hey, A and not-A, you can share."  Now there is no contradiction and everything is hunky-dory.

Or, you can explain away one half of the contradiction: "oh, I only believed not-A because I was misled by such-and-such; now I can safely discard it as a mistake."  Again, no problem.

Or, you can reframe A and not-A so they are both parts of a whole, or not really opposites after all.  There are a lot of things you can do.

Essentially what you're doing is handling an ontological crisis.  You resolve an apparent contradiction by adding some complexity to your mental world, such that both apparently contradicting mental objects are compatible and explainable (or explain-away-able) by your new, expanded view of the world.  It's the process of noticing that the blind men were seeing different parts of the elephant.

There seems to be a contradiction, but really, if you shake it all around, if you learn more, if you do some trial and error, you can get into a new configuration where the knot untangles and it all makes sense.

This is what "thinking" is, I believe. Messing around with your mental objects until apparent contradictions resolve.  

And this tells us what it means to believe in an external "world."  It means that you believe that your space of mental objects will come to include things that it does not yet include, but which recontextualize today's apparent contradictions so they make sense.

It means "we will understand it better by and by."  Everything has an explanation; everything came from the same world.  If we took a wide enough view, everything would make sense.

It is a kind of faith, but a very minimal sort -- the faith that you live in an intelligible universe, that ultimately you can make more and more sense of things.  Or the stance of trying to see things from the perspective of how they would look once you had made sense of them.

Solving Problems Is What Brains Like Doing

Lulie Tanett likes to write about how "reason is fun" and "problems are good" -- that humans literally enjoy the process of problem-solving.

If you think about it, most of our "play" is puzzle-solving -- trying to achieve an objective despite an apparent obstacle, or trying to make sense of something initially confusing. Videogames are puzzles even when they're not "puzzle games". Sports are puzzles.  Even reading or watching a work of fiction, or listening to music, gets much of its "fun" from a dance between predictability and surprise (and ultimate resolution of the apparent mystery or discordance.) Many of the things we do for no reason other than enjoyment are problem-solving activities.

We have an instinctive desire to tug on problems in attempts to solve them. That doesn't mean all problems are perceived as unpleasant.  Sometimes the "problem-solving" happens faster than we can be aware of it; sometimes the process itself is pleasurable. Only sometimes do we have a negative feeling around the problem, and it's not merely because the problem exists.

Suffering = Problems Metastasizing

Just having a problem -- your conscious mind includes both "A" and "Not A" -- doesn't necessarily cause suffering. But if you then go "there IS a contradiction" and "but there CAN'T be a contradiction", then things start to get worse. Or "I have mixed feelings" but "but I MUSTN'T have mixed feelings", or "this is hard" and "but it SHOULDN'T be hard", and so on. Somehow a problem can result in, not an attempt at resolution, but more problems!  More contradictory beliefs, which vibrate against each other, and make more and more friction in your mind.

I think this is usually what's going on when something really bothers us.  Have you ever noticed that sometimes a big life problem stops being upsetting, not when it goes away, but when you find something you can do about it?  You flip out of "bemoaning and denying that the issue is there at all", focus instead on some constructive activity toward solving the issue, and suddenly it gets easier? Because now all you have to cope with is the issue itself, not the exhausting "is it real or isn't it", "should I admit it hurts or tough it out", internal debates. You now have one less problem to solve.

If you have problems about problems, issues about issues, etc, it gets much harder to deal with them. These are psychological truisms: negative emotions are worse if you're ashamed of feeling them; abuse is harder to recover from if everyone around you insists it's not real; dealing with misfortune is harder if you're still in denial about it.  

Having a "meta-problem" means having an internal contradiction around the concept of the problem itself.  "This problem exists" and "this problem doesn't exist" battle in your head. Trying to believe in a contradiction, trying to do the literally impossible, maintaining the conviction that you should be able to believe contradictions or do the impossible -- all of these are ways of making meta-problems out of problems.

The Buddhist concept of tanha is usually translated as "craving" or "desire", but it literally means "thirst" and is associated with clinging or persistence, trying to make mental states stick around.  If I were to try to map it to this model, I'd say that tanha is the mental motion of returning to a contradiction or a knot in the mind, and trying to will it to not be a knot, thereby creating a bigger meta-knot.  It's what you do when you remind yourself of a frustration in a way that makes it more and more frustrating.  You keep the contradiction bouncing back and forth, louder and louder -- it IS, but it SHOULDN'T BE, and moreover nothing should BE HOW IT SHOULDN'T, but some people told me I should ACCEPT THINGS AS THEY ARE, but I DON'T LIKE THAT...

That "outward spiral" makes it harder and harder to resolve the problem at the root of the whole thing, because every time you notice it, it activates all the other meta-problems.  I think this is the structural underpinning of what we experience as negative emotions, "touchy subjects", and "sore spots."  They're wounds that defend themselves from healing.


Ultimately, if we believe in a world, we must believe that this process too comes from the world, that there is a reason why problems sometimes grow their own defenses. But it's tricky and deserves exploration at length.  You're looking at a process that doesn't "want" to be looked at.
Defensiveness, Politeness and the Presumption of Hostility (2020-02-18)

Defensiveness is a maladaptive social interaction style characterized by responding as if someone is attacking you, even when they aren't.  

If you treat innocent questions or suggestions as though they're attempts to ridicule or condemn you, you're going to damage your personal relationships. Well-meaning friends and family don't like being mischaracterized as cruel attackers. If you reflexively apologize, justify yourself, brace for a blow, or counterattack, when people in your life initiate non-hostile interactions, you're going to hurt their feelings and drive them away.

What's interesting is that a lot of common norms around politeness or professionalism are consistent with a "defensive" worldview, an assumption that people are hostile until proven otherwise.

Declining offers: it's generally considered polite to refuse offers of food or other favors.  You should assume that the other person doesn't really want to help you, and will be relieved if you don't take them up on the offer; the considerate thing to do is to reduce the burden on them.

By the same token, you don't want to appear needy, or actually ask for anything you want; the "safe" thing to assume is that the other person would feel burdened if you made a request.

Being reluctant to initiate contact: the "safe" assumption about any person is that they don't want you to disturb them and would rather be left alone.  It's impolite to reach out too much or too often.  

Discretion: it's considered potentially embarrassing or career-limiting to "overshare" or be excessively candid in public; the more mysterious you are, the more respectable you appear. The implicit belief here is that the more people know about you, the less they'll approve of you.

Justifiability: in professional or scientific discourse, there's often an (implicit or explicit) frequentist paradigm: there is a "null hypothesis" that is the default assumption, and only a certain threshold of formally presented evidence is sufficient to justify a claim that any other hypothesis is credible.  You are expected to prove a case to a hostile or skeptical audience, rather than merely motivate or explain your reasons for your belief to a curious audience; unless, of course, your belief is the null hypothesis, in which case it needs no argument at all. 

We don't normally describe these behaviors as "defensive" or see them as maladaptive. In context, they're normal. But they share a worldview with the one that damages personal relationships: in every case, they assume that being open about who you are, what you think, and what you want leaves attack surface for your enemies, and that it's not even worth considering the possibility that people might be friends (who want to help you, want to interact with you, like you better after getting to know you, and would benefit from learning what you think).

I'm not sure what exactly to conclude, but this makes me think more about the potential downsides to "defensive" norms.

Aging Interventions from Older Publications that Deserve a New Look (2020-01-15)

Why Read Old Papers?

These days, from what I'm told by knowledgeable people, there's a fairly tight feedback loop between current aging research and the biotech industry. When a new, major aging-related paper comes out, there are people seriously evaluating whether they can start a company around it.

But that isn't necessarily true when it comes to old research. There's no automatic means by which old papers "go viral."  There are no conferences (that I know of) where people call their colleagues' attention to remarkable, decades-old results that haven't received follow-up investigation.

I think old papers deserve a second look, for a few reasons.

1.) Often a result that had little interpretability or applicability in the past can benefit from contemporary tools. 

Let's say, twenty years ago researchers found a way to extend life in rats -- but it was a surgical operation that would be too invasive or risky to try on healthy humans. And maybe that was the end of that research direction.  But now we have lots of new options! We can take tissue samples with and without the intervention, and look at gene and protein expression, even down to the individual cell level. We can identify genetic modifications or drug targets that could be used to simulate the intervention in a safer, more targeted way.  

2.) Looking at old papers reduces some of the biases that come from looking at the latest, most-cited papers.

The volume of scientific publications is increasing at an exponential rate, up to 4% a year.[1] 

However, the reliability of the average publication has probably decreased. If there are indeed "diminishing returns to science" in recent decades, as Patrick Collison and Michael Nielsen argue [2], with roughly constant rates of important discoveries (as rated by experts) and flat economic productivity (a measure that we'd expect to correlate with technological progress) despite exponentially growing numbers of scientists, publications, and dollars devoted to science, then the quality of the average paper, scientist, or dollar allocated to research must have gone down. In that case, a randomly chosen older paper should be more trustworthy than a newer paper.
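The dilution argument is simple arithmetic: if important discoveries per year stay flat while publications grow exponentially, the expected importance of a randomly chosen paper decays exponentially. A back-of-the-envelope sketch in Python (the baseline counts are illustrative, not real data; only the 4% growth rate comes from the text):

```python
GROWTH = 1.04        # publications grow ~4% a year [1]
DISCOVERIES = 10.0   # assumed flat rate of important discoveries per year
PAPERS_Y0 = 1000.0   # illustrative baseline publication volume

def hits_per_paper(year):
    # Expected "important discoveries per paper" in a given year.
    return DISCOVERIES / (PAPERS_Y0 * GROWTH ** year)

# Rule of 72: at 4% growth, volume doubles (and the per-paper rate
# halves) roughly every 72 / 4 = 18 years.
halving_ratio = hits_per_paper(18) / hits_per_paper(0)   # ~0.49
```

So under these assumptions, the average paper today is only about half as likely to contain an important result as the average paper of eighteen years ago.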

One might counter that the new papers that get the most attention aren't randomly chosen -- they're the highly cited papers, or the papers in prominent journals. Maybe the average paper has gotten worse, but the average is being pulled down by junk papers in journals so low-quality and obscure that barely anybody reads them; so, perhaps, the typical new paper that a colleague (or your twitter feed) brings to your attention is no less credible than a comparable old paper.

I think that optimistic outlook is doubtful; in fact, articles in prestigious (high-impact-factor) journals are more likely than average to be retracted, and many measures of research reliability anticorrelate with impact factor, implying that articles in the most prestigious journals are less trustworthy than average[3]:

  • overestimating effect size in gene-association studies increases with impact factor (more bias in more prestigious journals)
  • sample size in gene-association studies decreases with impact factor
  • statistical power in psychology and cognitive science papers decreases with impact factor
  • randomization in animal studies is reported less frequently in papers from high-impact-factor journals
  • errors in supplemental data (e.g. Excel auto-converting a gene name to a date) are more common in papers from high-impact-factor journals
  • p-value reporting errors, usually in the direction of misinterpreting a non-significant result as significant, are more common in papers from high-impact-factor journals
  • metrics to identify tell-tale signs of questionable research practices find lower research quality in higher-impact-factor journals.

So, no, we can't assume that the most-cited papers of today are the cream of the crop. If anything, there's more pressure today than ever to get dramatic but untrustworthy results, and that pressure is highest at the most competitive journals.

One way to reverse this effect is to go back in time.  If the amount of noise in the system is increasing, an old paper is more likely to have a valid signal than a new one.

3.) Low technology can be a blessing in disguise.

The miracle of modern molecular biology is that we keep developing better tools to affordably do breadth-first searches. "Sequence ALL the genes!" "Quantify ALL the transcripts!" "Quantify ALL the proteins!" And so on.

The danger of having these incredible tools is that you can cherrypick positive results -- and people do.

It's much harder to erroneously get an apparently therapeutic intervention if your tools are blunter and your search space is smaller. If somebody in 1944 says that shining light on a duck's head makes its testes grow[4], then by gum I bet that actually happens!  

Because it's not coming from a breadth-first search, somebody had to have a specific reason to think that the experiment would work, and because there just aren't that many experiments being done at random, you can expect that to be a well-informed reason. The prior is higher.  

There's a similar sense in which lack of standardization is a blessing in disguise. Most mammal experiments today are done on the same handful of strains of inbred mice, for instance. The standardization is a boon to researchers in many ways (you can make apples-to-apples comparisons, you don't have to spend time inventing the basics of experimental methods yourself) but it also means that experimental results can just turn out to be an artifact of the "standard methodology."  Looking at older experiments, which have greater diversity in model organisms and other experimental methods, can be a corrective.

4.) Old papers are undervalued opportunities.

The author of the latest exciting result is a ready-made advocate for the discovery and a potential founder or collaborator for new ventures to put it into practice.  The author of an old paper, by contrast, might be dead or retired, with nobody to champion the potential applications of the discovery. It's very easy for an area of research to quietly fall out of fashion through no inherent lack of merit, just because it never met the right opportunity for application. 

I'm just barely old enough to remember when neural nets were thought of as an embarrassing phase in the history of computer science; they became "hot" again in 2012, with AlexNet, when newly affordable GPUs proved that deep learning algorithms could suddenly outperform the competition. In other words, advances in a totally different technology made a "failed" research approach into an overnight success. 

Going through old, not necessarily well-known experiments to see if there are opportunities is something I don't believe is being done that often, and is probably an unusually good place to apply a little bit of time and attention for big returns.

That said, let's look at some specific examples!  These are all results from prior to the year 2000, that are experimental interventions affecting vertebrate lifespan or aging, and which aren't currently the focus of a research program that I'm aware of.

Lowering Body Temperature: 71% Life Extension in Fish

Unusually long-lived vertebrates (tortoises, sharks, rockfish, etc, which can survive for centuries, or naked mole rats, which are extremely long-lived for their size) are frequently cold-blooded.  Warm-blooded animals which are long-lived (in absolute terms, like whales, or relative to their size, like bats, hummingbirds, and squirrels) often undergo temporary reductions in body temperature, during diving or hibernation.  Moreover, interventions like dietary restriction which extend lifespan have the effect of reducing body temperature.  So can reducing body temperature directly extend life?

For a cold-blooded example, transferring fish from 20-degree water to 15-degree water extended lifespan by 71%, in a 1972 study.[5]

To reduce the body temperature of a warm-blooded animal, it's not enough to reduce ambient temperature, since warm-blooded animals generate heat to compensate. In fact, reducing the ambient temperature actually shortens mouse lifespan. However, there are tricks to lower body temperature in a warm-blooded animal.

Mice genetically modified to overexpress the uncoupling protein UCP2 in the hypothalamus have lower body temperature than wild-type[6], and they live longer than their wild-type counterparts (20% increase in female median lifespan, 12% in male).

You can also induce hypothermia by stimulating the heat-detecting cells in the hypothalamus, either by injecting capsaicin [7], heating the hypothalamus directly with a thermode [8], or stimulating the heat-sensing neurons optogenetically [9].

The natural next experiments to do are a.) see if any of these other methods of inducing hypothermia affect lifespan and diseases of aging in mice or other mammals; b.) do longitudinal transcriptomics or other broad assays to see what reduced body temperature is doing and whether its effects can be simulated chemically or genetically.

Altered Photoperiod Cycle Length: Short "Years" Shorten Lifespan 30% in Lemurs

Days get longer in summer and shorter in winter; by lengthening or shortening the cycle of seasonal change in photoperiod (using artificial lighting), you can give an animal a shorter or longer subjective "year".

This turns out to affect lifespan!

The gray mouse lemur is a prosimian primate, native to Madagascar, that is long-lived for its size. During the long-day summer, gray mouse lemurs breed and are more active; during the short-day winter, they gain weight, become lethargic, and don't copulate. If you alter the photoperiod cycle artificially, you can shift the timing of these behavioral and morphological changes accordingly -- and if you reduce the "year length" by a third, from 12 months to 8 months, lifespan also shortens by 30% and the onset of white fur happens 30% earlier.[10] In other words, lemurs live 9-10 "subjective years", whether those are 8-month years or 12-month years.
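The "fixed budget of subjective years" reading checks out arithmetically (a Python sketch; the 9.5-year figure is an assumed midpoint of the 9-10 range above, used only for illustration):

```python
SUBJECTIVE_YEARS = 9.5   # assumed midpoint of the 9-10 subjective-year range

def calendar_lifespan(months_per_year):
    # If the subjective-year budget is fixed, calendar lifespan
    # scales linearly with the length of the (artificial) year.
    return SUBJECTIVE_YEARS * months_per_year

normal = calendar_lifespan(12)   # 114 months
short = calendar_lifespan(8)     # 76 months
reduction = 1 - short / normal   # 1 - 8/12 = one third, close to the ~30% observed
```

Cutting the year from 12 to 8 months cuts calendar lifespan by exactly a third under this model, which is consistent with the roughly 30% shortening reported.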

The obvious follow-up experiment is to go the other direction -- do lemurs (or other animals) live longer if you subject them to 16-month subjective years? And to take some tissue and blood samples and try to identify how this effect works -- do we see pathological changes, transcriptional changes, hormonal changes, metabolic changes?

Constant Light Exposure: 25% Life Extension in Hamsters

A Syrian hamster model of congenital heart disease showed delayed onset of heart failure and 25% life extension if they were kept in continuously lit conditions.[11]

The obvious corollary studies are to take heart tissue samples and blood samples and look for altered gene expression or metabolic parameters that might explain the effect of light exposure on preventing heart failure. It also might be possible to experiment directly with continuous light exposure on humans, since it's probably not dangerous.

Pineal-Thymus Graft: 24% Life Extension in Aged Mice

Implanting the pineal gland of a young mouse into the thymus of an old (16-22 month) mouse extends lifespan 19% in C57BL6 mice, 20% in Balb/c mice, and 35% in hybrid mice, for an average of 24% overall.[12] This is consonant with a more extensive literature about the pineal gland or the main hormone (melatonin) it secretes having a life-extending effect through preventing the dysregulation of the circadian rhythm which occurs with age.

The obvious follow-up study to do is a replication of the same implantation experiment, along with longitudinal expression data, to find out how this works and work towards identifying how a similar effect could be replicated by a less invasive intervention.

Splenectomy: 19% Life Extension in Aged Mice

In a 1969 experiment, adding spleen cells to mice of the same age as the cell donors shortened lifespan; adding spleen cells from younger mice (14 week) to older mice (76 week) extended median lifespan from 105 weeks to 128 weeks, a 13% lifespan effect; and removing the spleens of mice altogether at age 97 weeks increased median remaining lifespan from 118 to 158 weeks, a 19% lifespan effect.

Clearly, the aged mouse spleen contains some factor that accelerates age-related decline. The obvious question is to find out what this is, through expression or proteomics studies on young, aged, and splenectomized mice, and see if there's a way to target the culprit pharmacologically.

Induced Hypothyroidism: 17% Life Extension in Rats

Exposing newborn rats to thyroid hormone permanently reduces their bodyweight and thyroxine levels; it's a way of artificially inducing hypothyroidism.[13] It also has the effect of dramatically elevating their prolactin levels; as prolactin is stimulated by TRH released from the hypothalamus, clearly neonatal T4 exposure doesn't shut down the hypothalamic signal, but rather impairs the thyroid's ability to respond to it.  This induced hypothyroidism also extends median lifespan by 17% and maximal lifespan by 6%.

Obviously, inducing hypothyroidism isn't a viable intervention for humans, but looking at changes in hormone levels and gene regulation in induced hypothyroidism might give clues to what downstream mechanisms are responsible for the lifespan increase and whether there's a less-side-effect-heavy way to induce it.

Castration: 17% Life Extension in Rats

Removing the testes of male Wistar rats has been found to extend lifespan 17% relative to unbred intact males; removing the ovaries extends lifespan 29% relative to unbred females. [14]

This isn't too surprising given that caloric restriction (a reliable life-extending intervention in rodents under typical lab conditions) has antigonadal effects, and that extremely dramatic lifespan effects can come from removing the germ cells in C. elegans.[15] There's also some correlational evidence -- for instance, castrated male cats arriving at veterinary hospitals lived 67% longer than intact males.[16]

Obviously, castration isn't a practical intervention for most humans, but it's possible that there's some downstream effect that doesn't alter fertility or observable sex characteristics and preserves some of the anti-aging effect; this is a good opportunity for looking at longitudinal expression changes in castrated vs. intact animals and trying to identify the mechanism of lifespan extension.

Lateral Hypothalamic Stimulation: 5% Life Extension in Aged Rats

Stimulating the lateral hypothalamus is pleasurable, and animals given the opportunity to self-stimulate will do so; this is what's known as wireheading.  Interestingly enough, there are interactions with aging here as well.  Young adult rats have more neurons and more electrical activity in the lateral hypothalamic area (LHA) than old rats; young rats also exhibit more self-stimulatory behavior than old rats when given access to a button that turns on the electrode. Moreover, in old rats, chronic stimulation of the LHA extended median lifespan from 1075 days to 1125 days (a 5% increase in median lifespan, an 8% increase in maximum lifespan, and a 35% increase in residual lifespan); stimulation reduced body mass as well.[17]

Is this just a dietary restriction effect, or is it something else? The natural thing to do is to repeat the experiment with pair-fed controls -- animals given exactly the same amount of food as the stimulated group -- and also to take brain samples after death, and possibly blood samples during life, to try to identify metabolic or regulatory changes caused by the stimulation.
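For what it's worth, the reported percentages can be sanity-checked with simple arithmetic (the residual-lifespan baseline below is back-calculated from the 35% figure, not reported in the text):

```python
# Check the reported lifespan-extension percentages from the LHA-stimulation
# study [17]. The 1075- and 1125-day medians come from the text; the
# residual-lifespan baseline is inferred, not reported directly.
control_median = 1075   # days
stimulated_median = 1125
gain = stimulated_median - control_median  # 50 days

pct_of_median = 100 * gain / control_median
print(f"gain as % of control median lifespan: {pct_of_median:.1f}%")  # ~4.7%, rounded to 5% in the text

# If 50 extra days is 35% of residual lifespan, control rats had roughly
# 50 / 0.35 ~ 143 days left to live when chronic stimulation began.
implied_residual = gain / 0.35
print(f"implied residual lifespan at start of stimulation: {implied_residual:.0f} days")
```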

Blindness: Increases Survival in Rats

Blindness affects the circadian rhythm; it effectively gives the same hormonal signals as perpetual darkness. Rats blinded at 25 days of age showed increased survival relative to controls: at 748 days, when the experiment concluded, the blind rats had a 95% survival rate while the control rats had a 50% survival rate.
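As a rough gauge of how strong that survival difference is, here's a two-proportion z-test sketch. The group sizes are an assumption (the text doesn't report them), so the resulting p-value is purely illustrative:

```python
# Rough check of the 95% vs. 50% survival difference at day 748, using a
# two-proportion z-test. Group sizes are ASSUMED (20 rats per group), since
# they aren't given above; with p1 this close to 1, the normal approximation
# is crude, so treat the p-value as a ballpark figure only.
from math import sqrt, erfc

n1 = n2 = 20           # assumed group sizes
p1, p2 = 0.95, 0.50    # survival fractions from the study

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = erfc(z / sqrt(2))  # two-sided normal tail probability

print(f"z = {z:.2f}, two-sided p ~ {p_value:.4f}")
```

Even under conservative assumptions about sample size, a gap this large is very unlikely to be chance, which is why a full lifespan study seems worth the cost.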

The natural follow-up is to do a full lifespan study so we can get an actual measurement of the effect on median lifespan, as well as measurements of other biomarkers so we can identify a mechanism and possibly a way to replicate the anti-aging effect without actually inducing blindness.

Fetal Hypothalamic Graft: Restores Fertility and Circadian Rhythm in Rats and Hamsters

In keeping with the pattern of neuroendocrine effects on aging, it turns out that transplanting the suprachiasmatic nucleus (the part of the hypothalamus responsible for entraining the circadian rhythm in response to day length) from fetal animals into the brains of aged animals can restore the periodicity of the circadian rhythm and restore diminished fertility. With age, circadian rhythms become less regular; animals wake more during the periods when they should be sleeping, and/or are more lethargic during the periods when they should be awake.  Fetal SCN grafts reverse this phenomenon in both hamsters [18] and rats.[19]

Moreover, 7 of 10 aged rats given fetal anterior hypothalamus transplants regained fertility and fathered a total of 106 pups[20], while medial basal hypothalamus transplants from rat fetuses into aged female rats reversed hypogonadism.[21]

The hypothalamus regulates a variety of hormonal signals, which become dysregulated with age; it seems that some of these effects can be reversed by transplanting a younger hypothalamus. The most natural question to ask is, first, does this extend life? Second, can we identify on the genetic or molecular level what the younger hypothalamus tissue is doing that improves aging-related phenotypes? If so, there might be a non-invasive way to replicate the effect.

What Now?
These are ten very broad suggestions for animal experiments to run, which might yield targets that are ripe for intervention.  I'd expect, without looking too deeply into details, that each of these ten experiments would have a 6-figure price tag. And I'm not aware of anyone working on these projects (please correct me if I'm wrong!)

Could these projects turn into biotech companies? It's hard to say, of course; it depends on whether the experimental results are good, among other things. But I'm pretty inclined to believe that we don't know all the aging-modulating targets yet. That suggests that phenotypic screening approaches (like what we're doing at Daphnia Labs) and target-discovery studies (like the ones proposed in this post, or like the ones being done at Gordian, BioAge, or Fauna) are quite valuable. We don't know everything that's out there, and early-stage exploration is a lot cheaper than depth-first drug development, so on the margin more exploration seems like a good buy.


References
[1]https://www.stm-assoc.org/2018_10_04_STM_Report_2018.pdf
[2]https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
[3]Brembs, Björn. "Prestigious science journals struggle to reach even average reliability." Frontiers in human neuroscience 12 (2018): 37.
[4]Benoit, Jacques, and L. Ott. "External and internal factors in sexual activity: effect of irradiation with different wave-lengths on the mechanisms of photostimulation of the hypophysis and on testicular growth in the immature duck." The Yale journal of biology and medicine 17.1 (1944): 27.
[5]Liu, R. K., and R. L. Walford. "The effect of lowered body temperature on lifespan and immune and non-immune processes." __Gerontology__ 18.5-6 (1972): 363-388.
[6]Conti, Bruno, et al. "Transgenic mice with a reduced core body temperature have an increased life span." __Science__ 314.5800 (2006): 825-828.
[7]Jancsó-Gábor, Aurelia, J. Szolcsanyi, and N. Jancso. "Stimulation and desensitization of the hypothalamic heat‐sensitive structures by capsaicin in rats." __The Journal of physiology__ 208.2 (1970): 449-459.
[8]Hammel, H. T., J. D. Hardy, and MiM Fusco. "Thermoregulatory responses to hypothalamic cooling in unanesthetized dogs." __American Journal of Physiology-Legacy Content__ 198.3 (1960): 481-486.
[9]Zhao, Zheng-Dong, et al. "A hypothalamic circuit that controls body temperature." __Proceedings of the National Academy of Sciences__ 114.8 (2017): 2042-2047.
[10]Perret, Martine. "Change in Photoperiodic Cycle Affects Life Span in a Prosimian Primate (Microcebus murinus)." __Journal of biological rhythms__ 12.2 (1997): 136-145.
[11]Tapp, Walter, and Benjamin Natelson. "Life extension in heart disease: an animal model." __The Lancet__ 327.8475 (1986): 238-24
[12]Pierpaoli, Walter, et al. "The pineal control of aging: the effects of melatonin and pineal grafting on the survival of older mice." __Annals of the New York Academy of Sciences__ 621.1 (1991): 291-313.
[13]Ooka, Hiroshi, Saori Fujita, and Emiko Yoshimoto. "Pituitary-thyroid activity and longevity in neonatally thyroxine-treated rats." __Mechanisms of ageing and development__ 22.2 (1983): 113-12
[14]Asdell, S. A., and S. R. Joshi. "Reproduction and longevity in the hamster and rat." __Biology of reproduction__ 14.4 (1976): 478-480.
[15]Hsin, Honor, and Cynthia Kenyon. "Signals from the reproductive system regulate the lifespan of C. elegans." Nature 399.6734 (1999): 362.
[16]Hamilton, James B. "Relationship of castration, spaying, and sex to survival and duration of life in domestic cats." __Reproduction and aging. New York, NY: MSS Information Corporation__ (1974): 96-115.
[17]Frolkis, V. V., et al. "The lateral hypothalamic area: Peculiarities of aging and the effect of chronic electrical stimulation on the lifespan in rats." __Neurophysiology__ 32.4 (2000): 276-282.
[18]Viswanathan, N., and F. C. Davis. "Suprachiasmatic nucleus grafts restore circadian function in aged hamsters." __Brain research__ 686.1 (1995): 10-16.
[19]Li, Hua, and Evelyn Satinoff. "Fetal tissue containing the suprachiasmatic nucleus restores multiple circadian rhythms in old rats." __American Journal of Physiology-Regulatory, Integrative and Comparative Physiology__ 275.6 (1998): R1735-R1744.
[20]Huang, H. H., J. Q. Kissane, and E. J. Hawrylewicz. "Restoration of sexual function and fertility by fetal hypothalamic transplant in impotent aged male rats." __Neurobiology of aging__ 8.5 (1987): 465-472.
[21]Matsumoto, Akira, et al. "Recovery of declined ovarian function in aged female rats by transplantation of newborn hypothalamic tissue." __Proceedings of the Japan Academy, Series B__ 60.4 (1984): 73-76.
]]>
Sarah Constantin
tag:srconstantin.posthaven.com,2013:Post/1491010 2019-12-20T21:53:10Z 2020-01-22T20:48:18Z Genes involved in aging: looking for intersections

A natural type of question you might ask, if you're interested in understanding aging, is which genes are involved in the aging process. The practical upshot of identifying genes with causal roles is that they're potential drug targets.  If diseases of aging are caused or worsened by the excess of some protein, you might want to inhibit the production or activity of that protein. If diseases of aging are caused or worsened by a deficit in some protein, you might want to stimulate its production.

There are a lot of different kinds of experiments that can be run to identify which genes and proteins are involved in aging, and thus a lot of aging-related "omics" studies. I'll briefly summarize a few categories I know about.

Longitudinal Transcriptomics

You can compare the expression of genes in tissue samples from old vs. young organisms (humans or mice) and see which genes are expressed more or less with age.  Today it's even possible to get single-cell resolution on gene expression: we can measure the transcription rate of each gene in an individual cell at a given time.  Similarly, you can get proteomics data, directly measuring the quantity of each protein in a tissue sample.

This gives us correlational information about which genes are altered in the aging process, in particular tissues and cell types. It doesn't by itself tell us which interventions might prevent disease.  If a particular gene is more expressed with age, it could be because it's a cause of some deleterious process, or because it's a symptom of that process, or because it's part of the body's attempt to mitigate that process.  Whether you want to inhibit that protein's activity or production depends on what it's doing, and expression levels by themselves can't tell you that.
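To make the old-vs-young comparison concrete, here's a toy sketch of the simplest version of such an analysis -- a per-gene Welch t-statistic on simulated data. Everything here (gene counts, group sizes, expression values) is invented for illustration; real pipelines use dedicated tools and apply multiple-testing correction, since ~20,000 genes are tested at once.

```python
# Toy differential-expression analysis: compute a Welch t-statistic per gene
# for old vs. young groups, then rank genes by |t|. All values are simulated;
# one gene (index 0) is spiked with a true age-related shift.
import random
import statistics as st
from math import sqrt

random.seed(0)
n_genes, n_per_group = 200, 8  # small groups, as in many real studies
young = [[random.gauss(10, 1) for _ in range(n_per_group)] for _ in range(n_genes)]
old = [[random.gauss(10, 1) for _ in range(n_per_group)] for _ in range(n_genes)]
old[0] = [x + 3 for x in old[0]]  # spike in one truly age-regulated gene

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    return (st.mean(a) - st.mean(b)) / sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))

ranked = sorted(range(n_genes), key=lambda g: abs(welch_t(old[g], young[g])), reverse=True)
print(ranked[:5])  # the spiked gene (index 0) should rank at or near the top
```

With only 8 animals per group, null genes occasionally produce large t-statistics by chance -- which is exactly the small-sample replication problem discussed later in this post.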

Comparative Genomics

Animals don't all age in the same way. Some are exceptionally long-lived, either in absolute terms (whales, elephants, tortoises, rockfishes, the Greenland shark) or relative to their size (bats, naked mole rats). Some mammals are remarkably resistant to cancer.  Can we identify "genes responsible for healthy longevity" in the animal kingdom? Could we "borrow" the adaptations that slow-aging animals have developed, as treatments for humans?

Given the genomes of two species, you can identify homologous genes -- genes that have very similar sequences and probably similar functions.  Something like half of our genes have homologues shared with insects and with all vertebrates.

If the homologue of a gene in an exceptionally long-lived species is absent or has a loss-of-function mutation, you might ask whether that gene contributes to aging.

If a gene family of similar proteins is "expanded" in a long-lived species (meaning there are more variations on that gene present) or if a gene has a high copy number (meaning there are many identical versions of that gene) you might ask whether that gene has a protective effect.

If a gene shows evidence of positive selection in a long-lived species, you might ask whether that gene has a protective effect against some aging process.

If there's a correlation between copy number or gene family size and lifespan (or lifespan per body weight) across species, that's somewhat stronger evidence that there's an association between those genes and lifespan.

Again, these are all correlational; they don't tell you how the gene works, or what would happen if you interfered with it experimentally.

Experimental Genetic Modifications

If you induce a genetic mutation in a mouse gene and the mouse lives longer or avoids the onset of age-related diseases, then you do have causal evidence that the gene is involved in regulating aging.

There are a variety of ways of inducing specific mutations, some permanent and some temporary, some causing total absence of the gene (knockout) while others cause a deficit (knockdown) or unusually high production of the gene product (overexpression).

Looking for the Intersection

Most broad studies (longitudinal transcriptomics, comparative genomics) looking for genes involved in aging come up with totally different lists of candidate genes. 

Sometimes, of course, this is expected, because they're looking at different tissues or different organisms. You don't expect all animals or all tissues to change in the same way with age. And, of course, comparing genomes between species and comparing gene expression over time within one species are apples-to-oranges comparisons.

But even in cases where the experiments are supposed to measure the same thing, there's poor replication. And that's not surprising because the samples are so small. It's not uncommon to see longitudinal transcriptomics studies with fewer than ten organisms in the "old" and "young" groups. 

And this matters: if you want any hope of translation to humans, you've got to have results that are consistent across different strains of mouse, or even species of mammal. If the signal is wiped out by natural variation between organisms, there's no way you're getting anything usable for designing human treatments.

So I did a very, very crude type of meta-analysis: I looked at all the studies I could find of these three types (transcriptomics and proteomics of aging; comparative genomics of aging in long-lived species; and interventional studies of genetic modifications -- all restricted to vertebrates), and ranked genes (or gene families) by the number of papers in which the gene popped out as significant.
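The aggregation step itself is simple counting. A minimal sketch, with invented per-paper gene lists standing in for the actual data:

```python
# Minimal sketch of the ranking step: count how many papers flag each gene as
# significant. The per-paper gene sets below are placeholders, not real data.
from collections import Counter

paper_hits = [
    {"IGF1", "SERPINA3", "HSPA1"},  # paper 1's significant genes
    {"IGF1", "TNF"},                # paper 2
    {"SERPINA3", "IGF1", "UCP2"},   # paper 3
]

counts = Counter(gene for hits in paper_hits for gene in hits)
ranking = counts.most_common()
print(ranking)  # IGF1 appears in 3 papers, SERPINA3 in 2, the rest in 1
```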

There are a couple of potential flaws in my methodology.

First of all, I used Google Scholar to search, and stopped when the relevant search terms stopped returning studies of the right type. There may well be studies this method missed; it's just much more time-efficient than searching PubMed, which reliably returns less relevant results but makes it easier to document exactly how many papers matched the search terms (which is why PubMed searches are the standard method in formal literature reviews).

Second of all, I didn't use a consistent cutoff in picking out which genes were significant. (In a study that tests all 20,000 or so human genes for differential expression, "statistically significant" is a very low bar.) I generally noted down the handful of genes that had the highest fold change and lowest p-value, not literally all the ones that met a significance threshold.  

Thirdly, some studies -- like lifespan studies of a single genetic modification -- aren't unbiased screens of all genes. Genes that have attracted more scientific interest are likely to be studied more often, so in part this list of "top genes" reflects the biases of the research literature.

And, finally, since we're comparing different types of studies, we're not making apples-to-apples comparisons. You don't expect the genes differentially expressed with age to be exactly the same as the genes modified in long-lived organisms, or the same as those that alter lifespan when experimentally mutated. If a gene shows up in all three types of studies, I think that's some evidence that it's "more likely to really be involved" in aging -- but not in the same straightforward sense in which a study is more credible when it replicates exactly.

However, I think it's worth doing something in this vein, as a way of orienting ourselves in a growing field. As more and more papers come out claiming to have found genes "associated" with aging, we'll want to be familiar with the ones that keep showing up.  Just as with genome-wide association studies for genetic predictors of disease, one correlation showing up in one study doesn't mean we've found the "gene for" anything. I think of the aggregation process as a learning experience: a way of getting a sense of what the field as a whole looks like.

Top Genes

The following table is color-coded for the primary "hallmark of aging" associated with the gene or gene family; red for nutrient-sensing, purple for proteostasis, green for intercellular communication and inflammatory signaling, blue for DNA repair, and pink for mitochondria.  




Here's a histogram of how many papers each gene appeared in.


The majority of genes only appeared in one paper; most genes that were significantly related to age or lifespan in one paper did not show up in any others.  

Corollaries
The top-scoring gene families suggest some conclusions.

1. It's probably worth doing interventional genetic modifications on mammals for genes that show a lot of correlational evidence of being involved in aging. Inhibiting the expression of serpins, heat shock proteins, or chemokines in mice might show a delay in some aging phenotypes.

2. Some of these genes make sense in light of the "hallmarks of aging" -- serpins and heat shock proteins are associated with proteostasis and the elimination of misfolded proteins; IGF, GH, and the FGFs are involved in nutrient sensing and growth; the UCPs (uncoupling proteins) are central to mitochondrial function; TNF and the chemokines are inflammatory signals. It would make sense that dysregulation of these functions plays a causal role in age-related disease. 

3. We need much larger sample sizes on longitudinal transcriptomics studies. The typical studies I found, mouse or human, had fewer than ten experimental subjects per age group. As you might expect, this yielded inconsistent results. Between-individual diversity can be a confounding factor that makes it harder to reliably identify age-related changes.

Interestingly, [22] clusters gene transcripts in human T cells according to their aging-related dynamics, and finds three kinds of genes:
  1. genes that follow a "U-shaped" curve, declining in expression level until about age 60 and then rising again; these include growth-and-proliferation-associated signals
  2. genes that follow an inverted-U curve, rising in expression level until about age 70 and then declining; these are mostly cancer-related genes as well as the mTOR and Jak-STAT pathways
  3. genes that start out high in expression level and start to decline at age 80; these are mostly mitochondrial and neurological.

I find this very interesting as a categorization and wonder if it holds up for more cell and tissue types. 

From a therapeutic perspective, if this pattern generalizes, I could imagine we might want to enhance production or activity of proteins in the third cluster, inhibit the production or activity of compounds in the second cluster, and be cautious about the tradeoffs in the first cluster, since both straightforwardly inhibiting and accelerating nonspecific growth signals can have serious side effects.  

It's hard to tell, for now. But to replicate this result, you'd also need more studies that sample multiple time points, instead of just taking old and young samples; once again, increasing the scale of longitudinal transcriptomic studies would be very valuable.

References

1. Bodyak, Natalya, et al. "Gene expression profiling of the aging mouse cardiac myocytes." Nucleic acids research 30.17 (2002): 3788-3794.

2. Park, Sang‐Kyu, et al. "Gene expression profiling of aging in multiple mouse strains: identification of aging biomarkers and impact of dietary antioxidants." Aging cell 8.4 (2009): 484-495.

3. Yoshida, Shigeo, et al. "Microarray analysis of gene expression in the aging human retina." Investigative ophthalmology & visual science 43.8 (2002): 2554-2560.

4. Zhou, Jing, et al. "Integrated study on comparative transcriptome and skeletal muscle function in aged rats." __Mechanisms of ageing and development__ 169 (2018): 32-

5. Dobson Jr, James G., et al. "Molecular mechanisms of reduced β-adrenergic signaling in the aged heart as revealed by genomic profiling." __Physiological genomics__ 15.2 (2003): 142-147.

6. Yang, S., et al. "Comparative proteomic analysis of brains of naturally aging mice." __Neuroscience__ 154.3 (2008): 1107-1120.

7. Glaab, Enrico, and Reinhard Schneider. "Comparative pathway and network analysis of brain transcriptome changes during adult aging and in Parkinson's disease." __Neurobiology of disease__ 74 (2015): 1-13.

8. Huang, Zixia, et al. "Longitudinal comparative transcriptomics reveals unique mechanisms underlying extended healthspan in bats." __Nature ecology & evolution__ (2019): 1.

9. Welle, Stephen, et al. "Gene expression profile of aging in human muscle." __Physiological genomics__ 14.2 (2003): 149-159.

10. Heras, Joseph, and Andres Aguilar. "Comparative Transcriptomics Reveals Patterns of Adaptive Evolution Associated with Depth and Age Within Marine Rockfishes (Sebastes)." Journal of Heredity 110.3 (2019): 340-350.

11. Ximerakis, Methodios, et al. "Single-cell transcriptomic profiling of the aging mouse brain." __Nature neuroscience__ 22.10 (2019): 1696-1708

12. Angelidis, Ilias, et al. "An atlas of the aging lung mapped by single cell transcriptomics and deep tissue proteomics." __Nature communications__ 10.1 (2019): 963.

13. Benayoun, Bérénice A., et al. "Remodeling of epigenome and transcriptome landscapes with aging in mice reveals widespread induction of inflammatory responses." __Genome research__ 29.4 (2019): 697-709.

14. Taylor, Jackson, et al. "Transcriptomic profiles of aging in naïve and memory CD4+ cells from mice." __Immunity & Ageing__ 14.1 (2017): 15.

15. Shi, Zhanping, et al. "Single-cell transcriptomics reveals gene signatures and alterations associated with aging in distinct neural stem/progenitor cell subpopulations." __Protein & cell__ 9.4 (2018): 351-364

16. Kuehne, Andreas, et al. "An integrative metabolomics and transcriptomics study to identify metabolic alterations in aged skin of humans in vivo." __BMC genomics__ 18.1 (2017): 169.

17. Peffers, Mandy Jayne, Xuan Liu, and Peter David Clegg. "Transcriptomic signatures in cartilage ageing." Arthritis research & therapy 15.4 (2013): R98.

18. Yu, Ying, et al. "A rat RNA-Seq transcriptomic BodyMap across 11 organs and 4 developmental stages." __Nature communications__ 5 (2014): 3230

19. Galatro, Thais F., et al. "Transcriptomic analysis of purified human cortical microglia reveals age-associated changes." __Nature neuroscience__ 20.8 (2017): 1162.

20. Galea, Gabriel L., et al. "Old age and the associated impairment of bones' adaptation to loading are associated with transcriptomic changes in cellular metabolism, cell-matrix interactions and the cell cycle." __Gene__ 599 (2017): 36-52.

21. Marttila, Saara, et al. "Transcriptional analysis reveals gender-specific changes in the aging of the human immune system." __PloS one__ 8.6 (2013): e66229

22. Remondini, Daniel, et al. "Complex patterns of gene expression in human T cells during in vivo aging." __Molecular bioSystems__ 6.10 (2010): 1983-1992.

23. Gao, Lin, et al. "Age-mediated transcriptomic changes in adult mouse substantia nigra." __PloS one__ 8.4 (2013): e62456.

24. Cai, Hui, et al. "Effects of aging and anatomic location on gene expression in human retina." __Frontiers in neuroscience__ 4 (2012): 8.

25. Jonker, Martijs J., et al. "Life spanning murine gene expression profiles in relation to chronological and pathological aging in multiple organs." __Aging cell__ 12.5 (2013): 901-909.

26. Lewis, Kaitlyn N., et al. "Unraveling the message: insights into comparative genomics of the naked mole-rat." Mammalian Genome 27.7-8 (2016): 259-278.

27. Keane, Michael, et al. "Insights into the evolution of longevity from the bowhead whale genome." __Cell reports__ 10.1 (2015): 112-122.

28. Seim, Inge, et al. "Genome analysis reveals insights into physiology and longevity of the Brandt’s bat Myotis brandtii." __Nature communications__ 4 (2013): 2212.

29. Quesada, Víctor, et al. "Giant tortoise genomes provide insights into longevity and age-related disease." __Nature ecology & evolution__ 3.1 (2019): 87.

30. Abegglen, Lisa M., et al. "Potential mechanisms for cancer resistance in elephants and comparative cellular response to DNA damage in humans." __Jama__ 314.17 (2015): 1850-1860.

31. Seabury, Christopher M., et al. "A multi-platform draft de novo genome assembly and comparative analysis for the scarlet macaw (Ara macao)." __PLOS one__ 8.5 (2013): e6

32. Foote, Andrew D., et al. "Convergent evolution of the genomes of marine mammals." __Nature genetics__ 47.3 (2015): 272.

33. Shaffer, H. Bradley, et al. "The western painted turtle genome, a model for the evolution of extreme physiological adaptations in a slowly evolving lineage." __Genome biology__ 14.3 (2013): R28.

34. Wirthlin, Morgan, et al. "Parrot genomes and the evolution of heightened longevity and cognition." __Current Biology__ 28.24 (2018): 4001-4008.

36. Ma, Siming, and Vadim N. Gladyshev. "Molecular signatures of longevity: insights from cross-species comparative studies." __Seminars in cell & developmental biology__. Vol. 70. Academic Press, 2017.

37. Fushan, Alexey A., et al. "Gene expression defines natural changes in mammalian lifespan." __Aging cell__ 14.3 (2015): 352-365.

38. Pickering, Andrew M., Marcus Lehr, and Richard A. Miller. "Lifespan of mice and primates correlates with immunoproteasome expression." __The Journal of clinical investigation__ 125.5 (2015): 2059-2068.

39. Pyo, Jong-Ok, et al. "Overexpression of ATG5 in mice activates autophagy and extends lifespan." __Nature communications__ 4 (2013): 2300.

40. Zhang, Yuan, et al. "The starvation hormone, fibroblast growth factor-21, extends lifespan in mice." __elife__ 1 (2012): e00065.

41. Conover, Cheryl A., and Laurie K. Bale. "Loss of pregnancy‐associated plasma protein A extends lifespan in mice." __Aging cell__ 6.5 (2007): 727-729.

42. Sun, Liou Y., et al. "Growth hormone-releasing hormone disruption extends lifespan and regulates response to caloric restriction in mice." __Elife__ 2 (2013): e01098.

43. Baker, Darren J., et al. "Increased expression of BubR1 protects against aneuploidy and cancer and extends healthy lifespan." __Nature cell biology__ 15.1 (2013): 96.

44. Selman, Colin, Linda Partridge, and Dominic J. Withers. "Replication of extended lifespan phenotype in mice with deletion of insulin receptor substrate 1." __PloS one__ 6.1 (2011): e16144.

45. Kanfi, Yariv, et al. "The SIRTuin SIRT6 regulates lifespan in male mice." __Nature__ 483.7388 (2012): 218.

46. Wu, Chia-Yu, et al. "A persistent level of Cisd2 extends healthy lifespan and delays aging in mice." __Human molecular genetics__ 21.18 (2012): 3956-3968.

48. Flurkey, Kevin, et al. "Lifespan extension and delayed immune and collagen aging in mutant mice with defects in growth hormone production." __Proceedings of the National Academy of Sciences__ 98.12 (2001): 6736-6741.

49. Harper, James M., J. Erby Wilkinson, and Richard A. Miller. "Macrophage migration inhibitory factor-knockout mice are long lived and respond to caloric restriction." __The FASEB Journal__ 24.7 (2010): 2436-2442

50. Streeper, Ryan S., et al. "Deficiency of the lipid synthesis enzyme, DGAT1, extends longevity in mice." __Aging (Albany NY)__ 4.1 (2012): 13.

51. Müller, Christine, et al. "Reduced expression of C/EBPβ-LIP extends health and lifespan in mice." __Elife__ 7 (2018): e34985

52. de Jesus, Bruno Bernardes, et al. "Telomerase gene therapy in adult and old mice delays aging and increases longevity without increasing cancer." __EMBO molecular medicine__ 4.8 (2012): 691-704.

53. Canaan, Allon, et al. "Extended lifespan and reduced adiposity in mice lacking the FAT10 gene." __Proceedings of the National Academy of Sciences__ 111.14 (2014): 5313-5318.

54. Al-Regaiey, Khalid A., et al. "Long-lived growth hormone receptor knockout mice: interaction of reduced insulin-like growth factor i/insulin signaling and caloric restriction." __Endocrinology__ 146.2 (2005): 851-860.

55. Holzenberger, Martin, et al. "IGF-1 receptor regulates lifespan and resistance to oxidative stress in mice." __Nature__ 421.6919 (2003): 182.

56. Schriner, Samuel E., and Nancy J. Linford. "Extension of mouse lifespan by overexpression of catalase." __Age__ 28.2 (2006): 209-218.

58. Satoh, Akiko, et al. "SIRT1 extends life span and delays aging in mice through the regulation of Nk2 homeobox 1 in the DMH and LH." __Cell metabolism__ 18.3 (2013): 416-430.

59. Kurosu, Hiroshi, et al. "Suppression of aging in mice by the hormone Klotho." __Science__ 309.5742 (2005): 1829-1833.

60. Shimokawa, Isao, et al. "Life span extension by reduction in growth hormone-insulin-like growth factor-1 axis in a transgenic rat model." __The American journal of pathology__ 160.6 (2002): 2259-2265.

61. Zhang, Guo, et al. "Hypothalamic programming of systemic ageing involving IKK-β, NF-κB and GnRH." __Nature__ 497.7448 (2013): 211

62. Wu, J. Julie, et al. "Increased mammalian lifespan and a segmental and tissue-specific slowing of aging after genetic reduction of mTOR expression." __Cell reports__ 4.5 (2013): 913-920.

63. Selman, Colin, et al. "Ribosomal protein S6 kinase 1 signaling regulates mammalian life span." __Science__ 326.5949 (2009): 140-144

64. Ikeno, Yuji, et al. "Delayed occurrence of fatal neoplastic diseases in Ames dwarf mice: correlation to extended longevity." The Journals of Gerontology Series A: Biological Sciences and Medical Sciences 58.4 (2003): B291-B296.

65. Matheu, Ander, et al. "Delayed ageing through damage protection by the Arf/p53 pathway." __Nature__ 448.7151 (2007): 375.

66. Ortega-Molina, Ana, et al. "Pten positively regulates brown adipose function, energy expenditure, and longevity." __Cell metabolism__ 15.3 (2012): 382-3

67. Hofmann, Jeffrey W., et al. "Reduced expression of MYC increases longevity and enhances healthspan." __Cell__ 160.3 (2015): 477-488.

68. Riera, Céline E., et al. "TRPV1 pain receptors regulate longevity and metabolism by neuropeptide signaling." __Cell__ 157.5 (2014): 1023-1036

69. Conti, Bruno, et al. "Transgenic mice with a reduced core body temperature have an increased life span." __Science__ 314.5800 (2006): 825-828.

70. Nóbrega-Pereira, Sandrina, et al. "G6PD protects from oxidative damage and improves healthspan in mice." __Nature communications__ 7 (2016): 10894.

71. Dell'Agnello, Carlotta, et al. "Increased longevity and refractoriness to Ca2+-dependent neurodegeneration in Surf1 knockout mice." __Human molecular genetics__ 16.4 (2007): 431-444.

72. Miskin, Ruth, and Tamar Masos. "Transgenic mice overexpressing urokinase-type plasminogen activator in the brain exhibit reduced food consumption, body weight and size, and increased longevity." The Journals of Gerontology Series A: Biological Sciences and Medical Sciences 52.2 (1997): B118-B124.

74. Fernández, Álvaro F., et al. "Disruption of the beclin 1–BCL2 autophagy regulatory complex promotes longevity in mice." Nature 558.7708 (2018): 136.

75. Vatner, Stephen F., et al. "Adenylyl cyclase type 5 disruption prolongs longevity and protects the heart against stress." __Circulation Journal__ 73.2 (2009): 195-200.

76. Schriner, Samuel E., et al. "Extension of murine life span by overexpression of catalase targeted to mitochondria." __Science__ 308.5730 (2005): 1909-1911.

77. Liu, Xingxing, et al. "Evolutionary conservation of the clk-1-dependent mechanism of longevity: loss of mCLK1 increases cellular fitness and lifespan in mice." __Genes & development__ 19.20 (2005): 2424-2434.

78. Blüher, Matthias, Barbara B. Kahn, and C. Ronald Kahn. "Extended longevity in mice lacking the insulin receptor in adipose tissue." __Science__ 299.5606 (2003): 572-574.

79. Borrás, Consuelo, et al. "RASGrf1 deficiency delays aging in mice." __Aging (Albany NY)__ 3.3 (2011): 262.

80. Matheu, Ander, et al. "Anti‐aging activity of the Ink4/Arf locus." __Aging cell__ 8.2 (2009): 152-161.

81. Markovich, Daniel, Mei-Chun Ku, and Dzaidenny Muslim. "Increased lifespan in hyposulfatemic NaS1 null mice." __Experimental gerontology__ 46.10 (2011): 833-835.

82. Redmann Jr, Stephen M., and George Argyropoulos. "AgRP-deficiency could lead to increased lifespan." __Biochemical and biophysical research communications__ 351.4 (2006): 860-864.

83. De Luca, Gabriele, et al. "Prolonged lifespan with enhanced exploratory behavior in mice overexpressing the oxidized nucleoside triphosphatase hMTH1." __Aging cell__ 12.4 (2013): 695-705.

84. Andersen, J. B., et al. "Role of 2-5A-dependent RNase-L in senescence and longevity." __Oncogene__ 26.21 (2007): 3081.


]]>
Sarah Constantin
tag:srconstantin.posthaven.com,2013:Post/1473985 2019-11-05T01:42:46Z 2019-11-09T15:30:54Z Is Stupidity Strength? Part 4: Are VCs Stupid?

Defining the Question

If you want to write a thinkpiece bashing venture capitalists, it's easy enough. All you have to do is find an example of one VC-backed company that sounds stupid or mockable, and generalize that to condemning venture capitalists overall. Instant Valleywag article!

But "can you find a seemingly-dumb VC investment?" isn't an interesting question; the answer is obviously yes, and you can't do anything practical with that answer except drum up the public's knee-jerk resentment against Silicon Valley. I'm not interested in going that route.

Here are two interesting questions:

  1. Are institutional investors who invest in VC firms being economically rational by investing in VC the way they typically do today? Could they make more money doing something different? (That is, is "VC overrated" from the perspective of an institutional investor like a retirement fund or university endowment?)
  2. Are VCs being economically rational by choosing startups to invest in as they typically do today? Could a VC firm make more money doing something different? (That is, are VCs being "stupid" in the sense that a contrarian investment approach could strictly outperform them?)

If the answer to the first question is "no" and the answer to the second (in its "stupid" formulation) is "yes," that doesn't mean all VCs are bad investors, just the typical VC. In fact, quite a few VCs argue that they're reliably beating the market by following a contrarian strategy.

If there's overinvestment in VC on the whole, or if there's a contrarian VC investment strategy that beats the market, that's good news -- it means there's an economic opportunity!

If not, that's a different kind of good news: the market is efficient and pretty much doing as well as a market can at allocating capital where it creates the most value. We can trust price signals to be something like quality signals.

There's upside whichever way the data shakes out, so we can go into this inquiry with open minds.

The VC Industry: How Big Is It? 

The National Venture Capital Association's latest 2019 Q3 report offers the following figures:

  • The US VC industry invests about $100 billion a year into companies
  • There are about 2000 US VC firms
  • US VC firms invest in about 10,000 companies a year
  • Over the past 10 years, total US VC investment has more than doubled

VC is still a tiny fraction of investment capital as a whole, however. The VC industry's total assets under management are worth $524 billion; by comparison, mutual funds manage $17 trillion.    

Most ordinary people don't invest in VC, but VC is popular with institutional investors like college endowments. For instance, 18 percent of Yale's endowment is invested in venture capital. 

Why care about VC, if it's a small fraction of all investments? At the very least, it's a matter of professional interest if you work in VC-funded industries like software or biotech; also, to the extent that VC is involved in funding technological innovation, how well VC does at funding real technologies determines how abundant and productive our future economy will be.

How Good Are Aggregate Returns on VC?

Does the VC industry as a whole have a good rate of return on investment compared to other asset classes?  Should institutional investors be investing less in VC, more, or about the same?

This question is mostly about the average performance of VC firms; it could be that most VC firms have poor returns while a few exceptional firms excel, so that if you picked a VC firm at random its expected return would still be good.

Cambridge Associates' venture capital index estimates the returns on venture capital. The answer depends a lot on the time horizon: the 5-year rate of return is 13%, the 10-year is 14%, and the 30-year is 32%. Compare stock indexes like the S&P 500, which has a 5-year rate of return of 11%, a 10-year rate of 16%, and a 30-year rate of 10%: VC lags at the 10-year horizon but clearly wins over the longest one. Investing in a random VC is higher-return in expectation over long horizons, though also higher-variance, than just investing in an index fund.  In other words, investing in VC is not so stupid that you could do strictly better by just putting your money in a random stock instead. But that's a low bar.

A more subtle analysis takes account of risk as well as return. VCs may have higher average returns than the stock market, but they're also more volatile. Investors rationally see a tradeoff between risk (variance) and return (expected value) and are willing to tolerate higher risk only if it also brings higher returns; standard portfolio theory defines a mathematically optimal balance, determining how an investor with a given level of risk aversion would maximize long-run returns.
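One standard way to put risk and return on the same scale is the Sharpe ratio: mean return in excess of a risk-free rate, divided by the volatility (standard deviation) of returns. A minimal sketch -- the return series below are hypothetical illustrations, not the actual VC or index data:

```python
import statistics

def sharpe_ratio(yearly_returns, risk_free_rate=0.02):
    """Mean excess return per unit of volatility (sample standard deviation)."""
    mean_excess = statistics.mean(yearly_returns) - risk_free_rate
    return mean_excess / statistics.stdev(yearly_returns)

# Hypothetical return series: VC-like (higher mean, much higher variance)
# vs. index-fund-like (lower mean, low variance).
vc_returns    = [0.45, -0.20, 0.30, -0.10, 0.25]   # mean 14%
index_returns = [0.12, 0.08, 0.10, 0.11, 0.09]     # mean 10%

# The higher-mean series can still be worse on a risk-adjusted basis.
print(round(sharpe_ratio(vc_returns), 2))
print(round(sharpe_ratio(index_returns), 2))
```

The point is only that "higher average return" and "better risk-adjusted return" can come apart; portfolio theory then allocates between the two assets (and a risk-free asset) according to the investor's risk aversion.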

What's the risk-adjusted rate of return, comparing VC returns to the alternative of putting that same investment into an index fund? A 2015 study in the Journal of Finance says it's significantly negative (p = 0.015); this means that rational portfolio investors should be investing less in VC. A 2010 study using more conservative assumptions finds a statistically insignificant level of excess returns (alpha = 0.17, p = 0.54), implying that investors are basically investing the right amount in VC; it's neither overrated nor underrated. I'm not sure which set of assumptions is more reasonable.

As of 2018, not a single Ivy League school endowment (all of which include VC in their investment portfolios) had higher returns than a simple "60-40 portfolio" (60% stocks, 40% bonds), which would have made a 15% yearly return; and the Ivy portfolios were also much higher risk! So Ivy League fund managers, at least, could do strictly better by investing in no VC at all!

Is it stupid to invest money in VC at all? It's hard to say, given the conflicting results from the data. But we can at least say that institutional investors shouldn't be putting more of their money into VC.

Variation Within VC: Luck or Skill?

Some VC firms have much higher returns than others.  The Column Group, a biotech VC firm, posted a staggering 408% return in the first quarter of 2019; the same report claims a return of just 10% for the median VC.

Some observers are even more pessimistic about the median VC; Paul Graham, writing in 2009, said that in his experience the median VC loses money.  Israeli VC Gil Ben-Artzy claims that 95% of VC funds make less than 3x returns over a typical 10-year span, which amounts to about 11% per year -- worse than the S&P 500!
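That arithmetic is easy to check: a total return multiple M over Y years implies a compound annual growth rate of M^(1/Y) - 1. A quick sketch:

```python
def cagr(total_multiple, years):
    """Compound annual growth rate implied by a total return multiple."""
    return total_multiple ** (1 / years) - 1

# A 3x return over a 10-year fund life works out to ~11.6% per year,
# in line with the "about 11%" figure above.
print(round(cagr(3, 10) * 100, 1))
```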

A sample of 535 VC funds likewise shows a median rate of return of 4% -- quite a bit worse than the S&P 500.

Most VCs, it seems, have terrible track records. Only one in every twenty is making more money than you could get by just investing in an index fund (while skipping the VCs' high fees).

But that doesn't necessarily mean the Column Group is smarter than the median VC; they might have just gotten lucky. A lottery winner has a very large return on investment compared to the median lottery player, but not because she has higher skill.

If you want to ascertain whether there's such a thing as investment skill, you need to look at investor track records. Do the same investors make above-average returns year after year? Then it might be skill (though it could also be something else, like monopoly power).  But if investors show no consistency in returns, it definitely can't be skill.

A 2006 study suggests that investor skill exists. Firms funded by VCs with prior successful investments (where "success" means IPO) are more likely to succeed, but this effect goes away in firms funded by previously successful entrepreneurs.  So, top-tier investors are more likely to pick successful startups, but don't add much benefit to startups with experienced founders.

Looking at VC fund performance, investors who did well in the past continue to do well in the future; top-quartile firms make an average of 7-8 percentage points more each year than bottom-quartile firms.

Another study also shows a large effect of investor skill: VC firms in the 80th percentile for past performance made 15 percentage points more a year than firms in the 20th percentile.

The evidence is unequivocal: some VCs are much better than others, consistently, and VCs who are any good at all are a minority.

Predictably Wrong Strategies

"Ok, VC as a whole doesn't have great returns relative to its risk, but some investors are much better than that! Why don't institutional investors just invest in good VC's and not bad ones?"

Well, maybe they can't; maybe identifying a VC firm with a good track record is hard. After all, fund performance data is private and jealously guarded, and every VC firm tries to only share numbers that make it look impressive.  

How could we test this hypothesis? Well, there's a way to disprove it: if there were an easy-to-check criterion that accurately distinguished good investors from bad ones, then you could use that criterion to choose investors who get above-market returns -- which means that anyone not using the criterion is being financially stupid.

Well, here's such a criterion: investors with strong jawlines lose money. No, really.

Investors with higher facial width-to-height ratios -- a predictor of high testosterone levels -- make over 5 percentage points less a year than their narrow-faced counterparts.


This is a huge effect, comparable to the difference between stocks and cash. If you invested only in funds run by low-testosterone investors, you'd be 6x as wealthy in 20 years.

(To be fair, these are hedge fund managers, not VCs; in this section I'll refer to evidence from a variety of investment types, but there seem to be commonalities.)

What's going on here? Well, clearly, many rich people have a bias towards masculine, confident, charismatic men -- so much so that they'll even invest money in crappy funds if they're run by guys with strong jawlines. Testosterone empirically causes people to make overly risky investments, which in turn correlates with worse performance.  A lot of investors are apparently letting bias get in the way of profit.

Additionally, hedge fund managers rated as more psychopathic earned about 2 percentage points less a year than more empathetic managers (p < 0.05).

Fund managers who attended more selective colleges also outperform those from less selective colleges, by about a percentage point per year.

Conscientiousness correlates positively with investor performance: 80th percentile conscientiousness investors are 6x as likely to achieve top-quartile performance as median-conscientiousness investors.  Conscientiousness is also a strong predictor of success in entrepreneurs.

In venture capital specifically, investors are much more likely to make successful investments if they have science or engineering degrees, have past VC or startup experience, and don't have MBAs.

Among innovation-intensive businesses, the percentage of female executives correlates positively with firm performance. Meanwhile, new businesses founded by men perform better than those founded by women, but this difference is fully explained by male-founded businesses having more starting capital, being in more tightly clustered regions, and being more likely to focus on high-tech manufacturing.

Moreover, VCs given identical pitches from entrepreneurs reliably prefer male to female entrepreneurs, and they particularly like pitches from attractive men; attractiveness in women doesn't matter.

What does this all mean? In short: low-testosterone, non-psychopathic, STEM-educated, conscientious people make better investment decisions than aggressive, impulsive risk-takers with MBAs; investors have a bias towards handsome, masculine men; and female founders can do as well as male founders if they enter the most technically innovative industries and get adequate starting capital.

"Just invest in the most macho, wildly confident guy you can find" is a common strategy, and one that fails. These recklessly overconfident, less conscientious individuals are more likely to take unwise risks, and also more likely to commit fraud -- both of which are bad for business in the long run.

(This hypothesis also matches the pattern of big recent failures in startup performance like WeWork and Uber).

You can beat the market as an institutional investor just by not executing this stupid strategy. Therefore: we can be confident that a lot of capital is being invested stupidly, i.e. avoidably passing up opportunities for more money.

Do I have a problem with handsome, masculine men? Heck no; I married one!  

I'm claiming that there is a lot of "dumb money" out there, which favors handsome, masculine men even when they lose money. 

Gary Becker's theory of prejudice was that it ought to eventually die out: racist employers who won't hire black people will eventually go out of business in a competitive market; any irrational prejudice in business owners or investors should be selected against relative to optimal profit-maximizing behavior. If we see persistent prejudice that goes against financial self-interest, the market must be non-competitive.

Well, we see persistent prejudice in investment! In the most obvious way you'd expect: bias towards traits that make people high-status in our society. People spend money on people who look like winners; they don't check track records of actual winning. Why don't they all go broke and thus remove themselves from the market? I don't know, but they don't. 

I do know, from the account of my friend Zvi, who used to work in sports betting, that most people who bet on sports are not even remotely optimizing for making money; they're just sports fans who bet on the home team. You can make money just by always betting on the away team. There are so many bets made by "dumb money" that you can "beat the market" without doing anything cleverer than betting against blind fandom.

Maybe something not too different is happening in business as well.

This is good empirical corroboration for the existence of a Stupid Coalition in business.  If people who have a preference for macho men disproportionately invest in companies and investment firms run by macho men, they can prop each other up temporarily, but sooner or later these whole clusters of highly correlated bias will experience market crashes. 


]]>
Sarah Constantin
tag:srconstantin.posthaven.com,2013:Post/1471258 2019-10-29T23:05:48Z 2020-03-27T19:18:43Z Is Stupidity Strength? Part 3: Evolutionary Game Theory

Spite Strategies

Carlo Cipolla defined stupidity as causing harm to others as well as harming oneself, while benefiting nobody.

Another way of looking at this: a stupid decision is one which you could make a Pareto improvement on. A stupid decision means neglecting a win-win opportunity.  

Since people aren't omniscient and omnipotent, and we don't necessarily want to call that stupidity, we can narrow this: a stupid decision is one that avoidably causes harm to self and others.

In the previous post, I mentioned a possible incentive for a coalition of individuals to be stupid -- the "too big to fail" strategy.  If enough people commit to take imprudent risks, all at once, then they can force the prudent people to bail them out when catastrophe eventually comes. In the long term, everyone will be worse off in absolute terms than if the catastrophe had been prudently averted; but the Stupids will be relatively better off than the Prudents.

In evolutionary biology, this is a special case of what's called Hamiltonian spite, after its originator W.D. Hamilton. Imagine a gene that imposes a fitness cost on organisms that bear it, but an even greater fitness cost on members of the same species that do not bear it. This gene might be able to persist in the population, by enabling its bearers to outcompete their neighbors, even though it causes only harm and no benefit to anyone!

Does spite ever happen?  

Many apparently spiteful behaviors in nature are actually selfish; when a male bowerbird destroys the nests of other bowerbirds, his own nest appears more attractive in comparison and he gets more mating opportunities. This is a straightforward case of zero-sum competition, not true spite. 

Hamilton himself thought cases of spite would be vanishingly rare in nature; his own equations show that spiteful strategies are less likely to win the larger the population size is, and since spite strategies diminish absolute fitness (i.e. the number of offspring), spite-dominated populations will tend to shrink towards extinction.  In his original paper, Hamilton proposed that spite strategies might emerge in small, isolated populations and quickly drive those populations out of existence; we shouldn't expect to see them persist for long.

A more recent paper adds a wrinkle, however. Hamilton's original models assumed that populations could be of arbitrary size. But in nature, population sizes are often bounded above by the carrying capacity of the environment -- a given savannah only has enough resources to support so many lions, no matter how fit they are.  If you add a carrying-capacity constraint to the equations, you see that spite strategies can persist in the long term, provided the harm to those who don't bear the spite gene is sufficiently larger than the harm to those who do bear it. The critical ratio must be larger the larger the maximum population size can be; it is easier for spite strategies to survive in environments with smaller carrying capacities.

This fact is suggestive for the question of whether spite strategies could have evolved in humans.  We are a highly K-selected species (compared to other mammals like mice) -- we have large bodies, slow metabolisms, and long lives, developing slowly, reproducing infrequently, and investing a lot of care into our offspring.  This pattern tends to evolve in organisms close to their environment's carrying capacity, such as in predators at the top of the food chain. Vast litters of offspring would do a K-selected mother no good; they would bump into the harsh limitations of the food supply and starve before they had children of their own. She would be better off investing resources into making her few offspring more robust; building them bigger, more long-lasting bodies, with bigger brains more able to adapt their behavior to survive; and guarding and feeding them while young; and, perhaps, sabotaging their competition!  It is in K-selected animals like us that spiteful behaviors have a plausible evolutionary advantage, since populations are stably small; just as it is in oligopolies, not competitive markets, where sabotaging a competitor can be a winning strategy.  

(Of course, the environment in which modern Homo sapiens evolved was the harsh Malthusian context of the Pleistocene; for the past 300 years the human population has exploded exponentially. Perhaps the spite strategies we evolved with are no longer adaptive in a context of improving technology and global trade.)

Likewise, there is a wider range of conditions under which spiteful strategies can persist when competition is more localized, so that only small populations can interact with each other. Global competition punishes lose-lose strategies, since these diminish the absolute fitness of those who carry them and their non-carrier victims; local competition can preserve these strategies in isolated enclaves.

In nature, we see spiteful behavior in the social insects; worker bees, wasps, and ants prevent other workers from reproducing by killing their eggs, and red fire ant workers kill unrelated queen ants. These actions do not provide any direct fitness benefit to the specific workers that do the killing; rather, they provide an indirect benefit to their sisters, the queens, by killing their unrelated rivals. 

It has been hypothesized that primates engage in spiteful behavior; they certainly engage in apparently spiteful behaviors like harassing copulating couples and killing non-kin infants, but there's no consensus I can find as to whether this is true Hamiltonian spite or mere self-interested competition for food and mates.

Spite and rent-seeking

In Tullock's model of rent-seeking, individuals compete for a winner-take-all prize; each individual decides how much to spend, and the more you spend relative to all the other individuals, the more likely you are to win the prize.  What's the optimal amount to spend?

There is a unique Nash equilibrium strategy of how much to spend on trying to get the prize; that is, you can't improve your expected net gain by spending any more or any less. However, this is not an evolutionarily stable strategy! Populations that bid the Nash equilibrium will get overtaken by populations that spitefully bid more, at cost to themselves.

The two strategies are rather close, and get closer asymptotically in large populations; the Nash equilibrium bid is ((n-1)/n^2)rV (where n is population size, V is the payout value, and r is a shape parameter of the win-probability function), while the ESS bid is rV/n. Since (n-1)/n < 1, the ESS bid always exceeds the Nash bid: evolutionarily optimal play is slightly more aggressive than individually optimal play, in a large population with many-to-many competition. But in a small population, or in a tournament-like setup where pairs of individuals play one-on-one and losers get knocked out of the game, this difference is magnified, and of course it compounds with time.
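Plugging the two formulas above into code makes the comparison concrete (the parameter values here are arbitrary illustrations, not from the paper):

```python
def nash_bid(n, r, V):
    """Individually optimal (Nash equilibrium) expenditure: ((n-1)/n^2) * r * V."""
    return (n - 1) / n**2 * r * V

def ess_bid(n, r, V):
    """Evolutionarily stable (slightly spiteful) expenditure: r * V / n."""
    return r * V / n

# The ESS bid exceeds the Nash bid by a factor of n/(n-1),
# so the gap is large for n = 2 and vanishes as n grows.
for n in (2, 10, 100):
    print(n, nash_bid(n, r=1, V=100), ess_bid(n, r=1, V=100))
```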

Direct resource competition between conspecifics is many-to-many competition; as soon as I eat a bite, it simultaneously becomes unavailable to everyone else.

Fighting between conspecifics, however, is one-to-one competition; only two rams can butt heads at once. 

We should expect to see "overinvestment" in adaptations that increase individuals' abilities to win such head-to-head conflicts (pun intended), relative to the individually "rational" Nash equilibrium amount.  Competing for resources is not in general a spite strategy, because the winner of a conflict does directly benefit; but overinvestment in resource competition can be a spite strategy.  It's net harmful to the individual, in expectation, but it's more net harmful to his opponent.

Spite and intergroup conflict

If we allow different evolutionary strategies to detect each other -- to treat "in-group" members differently from "outgroup" members, as human nations do (as do other species; ants go to war) -- we see even more interesting things about the dynamics of spite.

If individuals are assumed to interact only with local neighbors, to migrate around somewhat, but to be able to distinguish kin from non-kin even if migration has occurred, we observe that individuals tend to be altruistic (hurting themselves to help others) towards kin, and spiteful (hurting themselves to hurt others) towards non-kin. 

Moreover, minorities living in non-kin territory tend to be strongly altruistic towards their kin and only mildly spiteful towards the majority; while majorities tend to be only mildly altruistic towards each other and strongly spiteful towards minorities. This seems to match available evidence about human ethnic conflict.

Spite in human experiments

Humans display spiteful behavior in game-theoretic experiments:

Zizzo (2003a) in his paper on burning money experiments reported that subjects are often willing to reduce, at a cost for themselves, the incomes of players who had been given higher endowments. In some instances subjects with the same or less endowment were also targeted. In a similar vein, Dawes, Fowler, Johnson, McElreath and Smirnov (2007) find that subjects are willing to reduce other group members’ income independently of the history of interaction...

In their experiments on competitive behavior, Rustichini and Vostroknutov (2007) find that participants are more inclined to reduce someone else’s income if the punished subject has earned more money than the punisher. Surprisingly, this effect is stronger when the higher incomes of the punished subjects are due to merit rather than luck...
The most extreme form of anti-social punishment, where the punishment is directed against those who had previously behaved nicely towards the punisher, has been observed in public good games with punishment. In these games those who are more cooperative than others are frequently punished. Such evidence is reported in Cinyabuguma, Page and Putterman (2006), Gächter, Herrmann and Thöni (2005) and Herrmann et al. (2008).

In a "rent-seeking game" played with 3500 undergraduates, players significantly "over-spent" on winning relative to the Nash equilibrium; in particular, they spent twice as much when playing against another human vs the computer, which suggests that spite is a social emotion.  Players who defected on the Prisoner's Dilemma game engaged in more spiteful overspending than cooperative players, and players who were more risk-prone in a lottery test were also more prone to overspend. Finally, after engaging in a rent-seeking game, players cooperated significantly less on the Prisoner's Dilemma.

While players of a public good game punished free riders in all cities, in some cities players also engaged in antisocial punishment -- selectively penalizing the most generous contributors. This happened least in Anglophone cities (Boston, Melbourne, Nottingham) and most in Mediterranean, Middle Eastern, or Slavic cities (Muscat, Athens, Riyadh, Samara, Minsk, Istanbul); countries with high scores on social trust and rule of law displayed more "prosocial punishment" of free-riders and less "antisocial punishment" of contributors.

Several hundred Portuguese schoolchildren were assigned to play a spite game, where they could either play cooperatively or spitefully. If both players cooperate, both gain 15 points; if one cooperates and the other spites, the spiteful player gains 11 points (paying a cost) but his opponent only gains 5 points (a greater loss). Finally, if both players spite, they each get 2 points (a severe loss). 

This game can either be played with proportional winnings (each player gets a piece of candy for every 15 points), in which case playing cooperatively is optimal, or with winner-take-all conditions (the player with the most points gets a fancy chocolate), in which case playing spitefully is optimal.
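A tiny sketch using the payoffs described above confirms why the two reward schemes flip the optimal strategy (the encoding is mine, not the study's): cooperating dominates in absolute points, while spiting dominates in relative points.

```python
# Points for (my_move, opponent_move), per the payoffs described above:
# C = cooperate, S = spite.
PAYOFF = {("C", "C"): 15, ("C", "S"): 5, ("S", "C"): 11, ("S", "S"): 2}

def my_points(me, them):
    return PAYOFF[(me, them)]

def margin(me, them):
    """My points minus the opponent's -- what matters under winner-take-all."""
    return PAYOFF[(me, them)] - PAYOFF[(them, me)]

# Proportional rewards: whatever the opponent does, cooperating earns more.
assert all(my_points("C", them) > my_points("S", them) for them in ("C", "S"))

# Winner-take-all: whatever the opponent does, spiting never does worse on the
# margin, and it beats a cooperating opponent outright.
assert all(margin("S", them) >= margin("C", them) for them in ("C", "S"))
```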

The experiment found that younger children (5th-7th grade) usually played cooperatively, while older children (8th-10th grade) played cooperatively in the proportional-rewards conditions and spitefully in the winner-take-all conditions. Students repeating a grade were much more likely to behave spitefully.  This suggests that spiteful behavior in humans may emerge in the teenage years.

The economic experimental literature is clear that spiteful strategies do exist in humans, that they correlate with social trust and rule of law in the expected (inverse) direction, and that they seem to emerge in adolescence.



]]>
Sarah Constantin
tag:srconstantin.posthaven.com,2013:Post/1469376 2019-10-23T17:59:56Z 2019-10-23T19:50:30Z Is Stupidity Strength Part 2: Confidence

One very common way people believe stupidity can be a strength is that it can give greater confidence, which brings advantages.

If you are ignorant of your own flaws, you can perform self-assurance and boldness, which makes it more likely you will win success, especially in socially competitive situations (getting the girl, getting the raise, winning the election).

If you are ignorant of the risks of a new venture, you will be more likely to boldly attempt it; and many risky ventures are high in expected value.

If you are ignorant of the weaknesses in your ideas, you will proclaim them confidently and have more influence in society, and more of a sense of joyful certainty, than more reflective, self-critical people.  "The best lack all conviction, while the worst are full of passionate intensity."

Not knowing the flaws in your own character, your own plans, or your own opinions, seems like it might carry an advantage. Even those who think it's morally unacceptable to engage in self-serving delusion often think that the deludedly confident obtain selfish gain from their stupidity. After all, look what it got Adam Neumann -- a CEO so brashly incompetent and unscrupulous that he was recently paid over a billion dollars to leave his company.

But what is this "confidence", why is it good, and why can't you get it without self-delusion?

Confidence Is Willingness To Act

William James, in his "The Will to Believe", was obsessed with the question of whether it could be acceptable to choose a belief, for which you had no evidence, if it made you more decisive and better at functioning in life. 

This was a practical issue for James, as he was plagued with pathological indecisiveness and self-doubt all his life, as Louis Menand's wonderful group biography of the Pragmatists recounts. James spent 15 years deciding on a profession; he was speaking of himself when he said "There is no more miserable human being than one in whom nothing is habitual but indecision."

For James, the critical issue for decisive confidence was faith in God. He thought there was no adequate evidence for either believing or disbelieving in God, but that without religious faith, nobody could have the confidence to engage in a life of action or purpose. We would languish in passive despair, sure that our lives had no meaning. 

James defended the choice to believe because he thought the very nature of "belief" or "truth" is rooted in its function as an aid to decision. We are living creatures; we only evolved the capacity to apprehend the world because knowledge helps us make more survival-promoting decisions; a "belief" that doesn't cash out to anticipated experiences that matter to the holder of the belief, is in a sense not a belief at all, but an empty string of syllables he parrots. 

Therefore, a belief in a ground of meaningfulness or worthwhileness in the universe, a belief that anything at all is worth doing, is by the above decision-theoretic standard not only true, but the necessary foundation of all true beliefs.  And this, says James, is essentially what it means to believe in God. 

He's very carefully not saying that you may believe anything that makes you feel better, even if it's false; he's saying that the "belief" that it ever does any good to act is actually true, by the only reasonable and non-circular definition of truth he can come up with.

"A man's religious faith (whatever more special items of doctrine it may involve) means for me essentially his faith in the existence of an unseen order of some kind in which the riddles of the natural order may be found explained...
"Our only way, for example, of doubting, or refusing to believe, that a certain thing is, is continuing to act as if it were not. If, for instance, I refuse to believe that the room is getting cold, I leave the windows open and light no fire just as if it still were warm. If I doubt that you are worthy of my confidence, I keep you uninformed of all my secrets just as if you were unworthy of the same. If I doubt the need of insuring my house, I leave it uninsured as much as if I believed there were no need. And so if I must not believe that the world is divine, I can only express that refusal by declining ever to act distinctively as if it were so...
"So far as man stands for anything, and is productive or originative at all, his entire vital function may be said to have to deal with maybes. Not a victory is gained, not a deed of faithfulness or courage is done, except upon a maybe; not a service, not a sally of generosity, not a scientific exploration or experiment or text-book, that may not be a mistake. It is only by risking our persons from one hour to another that we live at all. And often enough our faith beforehand in an uncertified result is the only thing that makes the result come true. Suppose, for instance, that you are climbing a mountain, and have worked yourself into a position from which the only escape is by a terrible leap. Have faith that you can successfully make it, and your feet are nerved to its accomplishment. But mistrust yourself, and think of all the sweet things you have heard the scientists say of maybes, and you will hesitate so long that, at last, all unstrung and trembling, and launching yourself in a moment of despair, you roll in the abyss. In such a case (and it belongs to an enormous class), the part of wisdom as well as of courage is to believe what is in the line of your needs, for only by such belief is the need fulfilled. Refuse to believe, and you shall indeed be right, for you shall irretrievably perish. But believe, and again you shall be right, for you shall save yourself. You make one or the other of two possible universes true by your trust or mistrust,—both universes having been only maybes, in this particular, before you contributed your act.

Now, it appears to me that the question whether life is worth living is subject to conditions logically much like these. It does, indeed, depend on you the liver. If you surrender to the nightmare view and crown the evil edifice by your own suicide, you have indeed made a picture totally black. Pessimism, completed by your act, is true beyond a doubt, so far as your world goes. Your mistrust of life has removed whatever worth your own enduring existence might have given to it; and now, throughout the whole sphere of possible influence of that existence, the mistrust has proved itself to have had divining power. But suppose, on the other hand, that instead of giving way to the nightmare view you cling to it that this world is not the ultimatum. Suppose you find yourself a very well-spring, as Wordsworth says, of—

"Zeal, and the virtue to exist by faith

As soldiers live by courage; as, by strength

Of heart, the sailor fights with roaring seas."

Suppose, however thickly evils crowd upon you, that your unconquerable subjectivity proves to be their match, and that you find a more wonderful joy than any passive pleasure can bring in trusting ever in the larger whole. Have you not now made life worth living on these terms?"

Courage, here, is the willingness to act under uncertainty, the willingness to live at all rather than committing suicide or passively waiting out your years hoping for death.

Faith, to James, is simply the conviction that something you have not yet seen will someday resolve your uncertainties; that the universe makes sense and your life matters, even if the reasons are outside the frame of your current knowledge.  This faith is the difference between seeing a life of hardships as a determined struggle and seeing it as an inescapable hell; it is the difference between seeing your problems and questions as ultimately resolvable and seeing the universe as a perverse, absurd, inherently unintelligible chaos, at every level fractally resisting your comprehension.  You cannot prove you don't live in such a universe; but your ability to live, act, and learn, to obtain any good things in life, to have anything beyond depressive nihilism, depends on your believing the opposite.

In a more secular age, you might call this "faith" simply the belief in an intelligible universe in which survival is possible. But even traditional theologies often make sense if you translate "God" to mean "the universe, which is singular, and which exists even outside the frame of our perception and all our mental models." Witness all the prayers and holy texts that say that following God's teachings will help us flourish and make our descendants prosper and multiply; this is simply the claim that understanding the laws of Nature (and the decision-theoretic laws of ethics, or the social/psychological foundations of good societies) is to our long-run best interest.  What is "I Am that I Am" but a poetic way of expressing the notion of existence itself, the Universe, the world "out there" that our words and guesses ultimately refer to?

The "faith" or stance that there is one universe, which is ultimately intelligible and habitable, even if we can't see how at the moment, is also held to be important by scientific atheist philosophers like David Deutsch, who calls it the conviction that "problems are solvable". Without a stance of optimism that coherent explanations are possible, no science could actually be done; nobody would ever search for an explanation for the brute facts they observe.

The stance that life matters and that you personally are overall capable of handling life and worthy to make your own decisions is called "self-esteem" in psychology. One can improve it -- and thereby improve performance on a variety of practical tasks -- by writing personal essays about what one values in life.  (This is the original, older meaning of self-esteem, before it became redefined as "agreeing with positive statements about oneself", which doesn't correlate with work or school performance, improved mental health, or suchlike practical successes. Exercises in which you praise yourself don't work; exercises in which you think about your values and priorities do.)

The "faith" that the universe makes sense and that it's worthwhile to live actively, making plans and decisions, is one of the key things that is destroyed in PTSD. 

This is not a loss of "confidence" or "trust" in any particular thing, which might well be rational after a traumatic experience (after being raped it is rational to have less trust in your rapist or in people similar to him), but a loss of the ability to have self-confidence generally or to trust in anything generally.  The idea of a generalized "loss of confidence" can't be interpreted as an epistemic belief; it's a change in stance, a change in the ground of all belief or action.

Jenny Holzer's art really captures this aspect of the traumatized experience.

This article investigates the philosophical interpretation of the generalized loss of trust, confidence, or meaning that occurs after trauma.

The Istanbul Protocol, a United Nations guide to documenting cases of torture, claims that torture survivors lose the will to look forward to, or shape, their own future.  "The victim has a subjective feeling of having been irreparably damaged and having undergone an irreversible personality change. He or she has a sense of foreshortened future without expectation of a career, marriage, children, or normal lifespan."

In PTSD, "we experience a fundamental assault on our right to live, on our personal sense of worth, and further, on our sense that the world (including people) basically supports human life. Our relationship with existence itself is shattered. Existence in this sense includes all the meaning structures that tell us we are a valued and viable part of the fabric of life..."

What, exactly, does this “shattering” involve? It could be that experiencing significant suffering at the hands of another person leads to a negation of engrained beliefs such as “people do not hurt each other for the sake of causing pain,” “people will help me if I am suffering,” and so on. Then again, through our constant exposure to news stories and other sources, most of us are well aware that people seriously harm each other in all manner of ways. One option is to maintain that we do not truly “believe” such things until we endure them ourselves, and various references to loss of trust as the overturning of deeply held “assumptions” lend themselves to that view. For example, Herman (1992/1997, p. 51) states that “traumatic events destroy the victim’s fundamental assumptions about the safety of the world,” and Brison (2002, p. 26) describes how interpersonal trauma “undermined my most fundamental assumptions about the world.” An explicitly cognitive approach, which construes these assumptions as “cognitive schemas” or fundamental beliefs, is adopted by Janoff-Bulman (1992, pp. 5–6), who identifies three such beliefs as central to one-place trust: “the world is benevolent;” “the world is meaningful;” and “the self is worthy.”

...

Many of us anticipate most things with habitual confidence. It does not occur to us that we will be deliberately struck by a car as we walk to the shop to buy milk or that we will be assaulted by the stranger we sit next to on a train. There is a sense of security so engrained that we are oblivious to it. Indeed, the more at home we are in the world, the less aware we are that “feeling at home in the world” is even part of our experience (Baier, 1986; Bernstein, 2011). It is not itself an object of experience but something that operates as a backdrop to our perceiving that p, thinking that q or acting in order to achieve r. To lose it is not just to endorse one set of evaluative judgments over another. It is more akin to losses of practical confidence that all of us feel on occasion, in relation to one or another performance. Suppose, for instance, one starts to “feel” that one can no longer teach well. Granted, evaluative judgments have a role to play, but loss of confidence need not originate in explicit judgments about one’s performance, and its nature is not exhausted by however many judgments. The lecture theater looks somehow different – daunting, oppressive, unpredictable, uncontrollable. Along with this, one’s actions lack their more usual fluidity and one’s words their spontaneity. The experience is centrally one of feeling unable to engage in a habitual, practical performance. And loss of confidence can remain resistant to change even when one explicitly endorses propositions such as “I am a good teacher.”
Such an experience can be fairly circumscribed, relating primarily to certain situations. However, we suggest that human experience also has a more enveloping “overall style” of anticipation. This view is developed in some depth by the phenomenologist Husserl (1991). According to Husserl, all of our experiences and activities incorporate anticipation. He uses the term “protention” to refer to an anticipatory structure that is integral to our sense of the present.
Husserl, much like contemporary neuroscientists, believed there is no perception without anticipation; all sensory perceptions and indeed all motor actions involve hypotheses about what we will observe next, or what will happen if we do this or that.  The basic function of the brain is to form predictions and measure how they differ from our subsequent observations.  There is no level at which our senses provide us with an unmediated, judgment-free snapshot of reality; it's prediction and error-correction all the way down.
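This picture of perception can be sketched as code. The following is a minimal toy of my own construction (not Husserl's, and not any specific neuroscience model): a predictor that changes only in proportion to its prediction error, in the spirit of "prediction and error-correction all the way down."

```python
# Toy delta-rule predictor: the only signal that drives learning is the
# gap between what was predicted and what was observed.

def update(prediction: float, observation: float, rate: float = 0.1) -> float:
    error = observation - prediction     # how the world differed from the guess
    return prediction + rate * error     # correct the guess by a fraction of the error

prediction = 0.0
for observation in [10.0] * 50:          # the world keeps delivering "10"
    prediction = update(prediction, observation)
# prediction has converged toward 10; once error vanishes, nothing updates.
```

Nothing in this loop "sees" the world directly; all it ever has access to is the error term, which is the point of the predictive-processing picture.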

Loss of trust in our ability to make correct predictions thus means a generalized weakness in our ability to perceive, think, and act.  It is loss of trust in the intelligibility of the universe and in our own ability to act to achieve goals; it is loss of trust that the future can be predicted, and thus that there's any point in planning or investing in the future. It is overall a loss of meaningfulness, a loss of the sense that anything has a point, a loss of will to act, a loss of confidence.  In other words, the problem caused by trauma is exactly the problem that confronted William James.

It's a common observation that the risk of PTSD is not predicted so much by the severity of the trauma as by the degree to which the victim is persuaded to deny her own experience; pressured (by abusers or bystanders) to believe that it didn't really happen, that it wasn't so bad, or that she deserved it. It's not surprising that this particular experience is damaging to one's trust in one's own ability to make sense of reality or rationally assess risk to oneself.

Confidence Without Self-Delusion

The above model of how confidence works makes it clear that we don't have to be stupid or delusional to get most of the benefits of high confidence.  

Confidence is not a belief, in the ordinary sense.  It is not the belief that you are beautiful or brilliant or that your plans will work or that your ideas are right.  It is a stance of willingness to act, decisively and uninhibitedly. You can make a choice to act, without changing your assessment of any hypothesis; in machine-learning terms, confidence is a hyperparameter, an error threshold for "enough certainty" required before taking action or asserting a conviction, which you can lower if it is too high.

Eliezer Yudkowsky has remarked that he often thinks projects are worth trying on net despite only having, in his estimate, a 10% probability of success; while other people, in order to be motivated to try at all, need to psych themselves up into the unrealistically "confident" belief that success is virtually certain.  Most people conflate self-esteem or global self-confidence or courage, the willingness to try, with over-optimism about one's chances; but this is a needless error.
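The "hyperparameter" framing can be made concrete with a toy sketch (illustrative numbers of my choosing, not Yudkowsky's): a project can be worth trying at an honest 10% success estimate, with no self-deception required, because confidence enters only as the action threshold, never as an inflated belief.

```python
# Confidence as a decision threshold, kept separate from the probability
# estimate itself. The belief (p_success) is never touched; only the
# threshold determines how much expected value is demanded before acting.

def worth_trying(p_success: float, payoff: float, cost: float,
                 threshold: float = 0.0) -> bool:
    """Act iff expected value exceeds the confidence threshold."""
    expected_value = p_success * payoff - cost
    return expected_value > threshold

honest_estimate = 0.10   # the realistic 10% assessment

# Decisive stance: act on any positive expected value.
print(worth_trying(honest_estimate, payoff=100, cost=5, threshold=0))   # True
# Anxious stance: same beliefs, but demanding near-certainty paralyzes.
print(worth_trying(honest_estimate, payoff=100, cost=5, threshold=50))  # False
```

The two calls share the identical probability estimate; only the threshold differs, which is the sense in which lowering it changes your stance without changing your epistemics.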

The need for external "validation" of one's basic worth as a person is likewise an error of looking for evidence of one's worthiness, when what you really want is permission to act as you desire; or, one might cash this out as a decision to act as you desire.  Repeatedly looking for validation, when you aren't really seeking new information, because you know what answer you expect and want to get, isn't going to work, because data only conveys information (in the Shannon sense) if it's surprising.  You can't come to "believe in yourself" by spamming your brain with the same data over and over. But if you know you want more self-confidence, you already have all the data you need to know more confidence would be good for you. You don't need to seek any more reassurance; you need to unilaterally change your stance to a decisive one.
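The Shannon point above can be made quantitative: the self-information of an event is the negative log of its probability, so a reassurance you already expect with near-certainty carries almost nothing.

```python
import math

# Self-information ("surprisal") of an event you assign probability p.
# Highly expected events carry almost no information; this is why
# re-receiving the same reassurance can't add up to self-belief.

def surprisal_bits(p_expected: float) -> float:
    """Shannon self-information, in bits, of an event with probability p."""
    return -math.log2(p_expected)

# Reassurance you were 99.9% sure of hearing anyway: about 0.001 bits.
print(round(surprisal_bits(0.999), 3))
# A genuinely uncertain answer: a full bit.
print(surprisal_bits(0.5))
```

The probabilities here are illustrative; the formula itself is the standard Shannon definition.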

Easier said than done! But here are some tactics that have worked for me:

1.) Writing about my values! It's the time-honored, evidence-based trick for increasing self-esteem, and it works for me.  (Yes, this blog post is itself an example.)

2.) Unilaterally doing something I feel like doing in the moment. (Usually a bodily craving like food or exercise, or a minor breach of social propriety like making ugly faces or shouting in my own home.)  

If I'm wrapped up in an anxious obsession with being liked or validated or given approval, I can break that cycle by proving to myself that I have "permission" to do whatever I feel like, except for a really sparse set of ethical and practical constraints that I'm truly committed to. I don't have to be good, in the sense of an identity or "personal brand"; there is what I impulsively feel like doing, there is what I really absolutely must do, and that's all. 

(I usually fast, as is traditional, on the Jewish holiday of Yom Kippur, the Day of Atonement; but this year I actually got a lot of mileage and dare-I-say spiritual growth out of breaking the rule and eating food, when I was falling into an unhealthy spiral of shame and resentment about the idea of "being good," and becoming unpleasant to my family due to hunger.  Eating made me a better mom and wife that day, and the insight catalyzed my being a better friend to my friends in the following few days. Real ethics isn't about being any particular way, in the sense of an aesthetic or persona; that's fake "ethics," which is advertising. If there's anything you actually have to do, in reality rather than in a performative sense, then it will have a function ascertainable through ordinary cause and effect.  The goal of real ethics is not to maintain a goody-two-shoes persona but to exert agency towards good outcomes.  Violating a taboo, on an occasion when the taboo-violation directly helps someone and doesn't break any principle you're truly serious about, can help concretize this to yourself.)

False Confidence As Fraud

There's another kind of benefit self-deluded confidence can have, however, that courage and decisiveness by themselves cannot match; it can be part of a coalitional Stupid Strategy, as mentioned in the previous post. False beliefs are a luxury, an ornament, a costly signal that you are in a secure enough social position that you do not need to be realistic; if you fail, someone else will bail you out.

There is a common phenomenon that "mediocre white men" are given advantages for being unrealistically overconfident, while more-competent, humbler, more serious people who have less privilege (women, upwardly-mobile lower-class people, foreigners and immigrants, especially East Asians today and Jews historically) are seen as less appealing, less "likable", dispreferred as recipients of resources and privileges.  

Part of this is simply that the "overconfident" privileged people are acting on the correct amount of ambition for optimal outcomes, and others would do well to emulate their confidence. We should apply to more things, speak up more, negotiate more for ourselves, try more new things.

Another part of it is that having more resources makes it rational to take more risks; if you have savings or an inheritance, it actually is less risky to start a business. People born rich are free to be bolder; that's part of what wealth means. That's an argument that more people should have access to enough wealth to enable them to take useful calculated risks, but not that there's anything wrong with taking advantage of your good fortune, should you happen to have it.

But a third, perverse possible component of the "overconfidence of the privileged" is that they are signaling their ability to be wrong so they can align in a "too big to fail" coalition that is parasitic on the more-productive, less-grandiose people's work. For this purpose, it isn't enough to just be ambitious, bold, or confident; you have to be shamelessly wrong, to prove your membership in the Stupid Coalition of those privileged to be "secure" in their entitlement to valuable resources that other people produce.  Wrongness -- particularly in the form of excess confidence, where you make bets that would be disastrous in expectation for yourself unless someone else bailed you out -- is an unfakeable signal that you are sure other people will bail you out.  It's a form of playing Chicken with the (social) universe.

Like the tail of the peacock, irrational overconfidence is a self-imposed handicap; it's a gloriously flamboyant waste of resources, as a way of proving its bearer has resources to burn.

To the extent that this is true, we ought to see supernormal returns from investing in individuals who have the same apparent level of performance (in profits, product quality metrics, test scores, whatever) but who a.) come from less-privileged backgrounds (women, minorities, LGBT individuals, people from working-class families) or b.) have a manner that's more serious, modest, down-to-earth, and less entitled or grandiose. There's actually some evidence to that effect; women-led firms have more than twice the average annual rate of return of companies worldwide (24% vs. 11%).

The logic is, someone who's performing at the top, but is "spending" less than her equally high-performing peers on wasteful display signaling, is a much better bet. Your dollar goes farther, in the long run, if you aren't spending half of it on peacock tails.
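The compounding arithmetic behind "your dollar goes farther" is worth spelling out. Using the 24% vs. 11% annual-return figures cited above (purely illustrative; real returns vary year to year), a decade of compounding turns the annual gap into roughly a 3x difference in cumulative growth:

```python
# Compound the two cited annual return rates over ten years.
YEARS = 10

women_led_growth = 1.24 ** YEARS   # each dollar grows to ~8.59
average_growth = 1.11 ** YEARS     # each dollar grows to ~2.84

# Ratio of cumulative growth after a decade:
print(round(women_led_growth / average_growth, 2))  # ~3.03
```

Small differences in wasteful "signaling spend" per year compound into large differences in the long run, which is the sense of the peacock-tail argument.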

An actual peacock bears the weight of its tail with its own strong muscles. But the Stupid Coalition's "peacock tail" is supported by the too-big-to-fail dynamics that rely on someone outside the coalition bailing them out.  If you are confident you can find a "greater fool" to enter your Ponzi scheme, or that the use of force will ensure payment of your unsustainable debts (e.g. the government will print more money to continue funding your project, or a Saudi sovereign-wealth fund backed ultimately by violence will invest in the next round), then peacock tails can be a good investment -- if you think the bubble, or Ponzi scheme, or public trust in government, will hold.  If there's enough risk the whole system will crash, "too big to fail" or not, then peacock tails are a terrible investment and you'll do much better optimizing for long-run, resource-efficient value creation.
Sarah Constantin
Is Stupidity Strength? (2019-10-23)

Definition By Examples

There is a meme that stupidity is associated with strength -- that being as intelligent as possible comes at a cost in power, money, happiness, or practical advantage.

Some instances of this include:

  • The trope of the "nerd" -- a stereotype that bundles intellect with social ostracism and physical weakness 
  • The lament that ignorance is bliss; as in Ecclesiastes, "For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow."
  • The truism that the way to succeed in a practical endeavor is to "not overthink it."

This pattern is not a human universal. Plato and Aristotle would have found it alien. Francis Bacon and Sun Tzu were firmly of the other opinion: that understanding the world would lead to mastery, including military victory.

I'm told, by people who grew up in China, England, France, and Germany, that they don't have the concept of a "nerd" as we do in America. There's no presumption that good students tend to be unpopular or unathletic. In France, it's even fashionable to profess an interest in math or philosophy; they have (trashy) pop-philosophy magazines the way we have pop-psychology magazines.

My guess is that the trope of the ineffectual intellectual in America starts with the Pragmatist movement at the turn of the 20th century, which often presupposed a dichotomy between thinking and doing.

Consider Theodore Roosevelt's famous speech to students at the Sorbonne:

 A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life's realities—all these are marks, not, as the possessor would fain think, of superiority, but of weakness. 
. . .

Shame on the man of cultivated taste who permits refinement to develop into a fastidiousness that unfits him for doing the rough work of a workaday world. Among the free peoples who govern themselves there is but a small field of usefulness open for the men of cloistered life who shrink from contact with their fellows. Still less room is there for those who deride or slight what is done by those who actually bear the brunt of the day; nor yet for those others who always profess that they would like to take action, if only the conditions of life were not what they actually are. The man who does nothing cuts the same sordid figure in the pages of history, whether he be cynic, or fop, or voluptuary. 

. . .
Let those who have, keep, let those who have not, strive to attain, a high standard of cultivation and scholarship. Yet let us remember that these stand second to certain other things. There is need of a sound body, and even more need of a sound mind. But above mind and above body stands character—the sum of those qualities which we mean when we speak of a man's force and courage, of his good faith and sense of honor. I believe in exercise for the body, always provided that we keep in mind that physical development is a means and not an end. I believe, of course, in giving to all the people a good education. But the education must contain much besides book-learning in order to be really good. We must ever remember that no keenness and subtleness of intellect, no polish, no cleverness, in any way make up for the lack of the great solid qualities. Self-restraint, self-mastery, common sense, the power of accepting individual responsibility and yet of acting in conjunction with others, courage and resolution—these are the qualities which mark a masterful people.

Certainly these character traits are important, but notice how Roosevelt takes for granted that they are a separate magisterium from the intellect, as opposed to virtues whose necessity can be appreciated through reason or which can be developed by applications of intellect.

John Dewey, the Pragmatist philosopher of education, was deeply concerned that too much conceptual abstraction in education would produce impractical, antisocial intellectuals:

The gullibility of specialized scholars when out of their own lines, their extravagant habits of inference and speech, their ineptness in reaching conclusions in practical matters, their egotistical engrossment in their own subjects, are extreme examples of the bad effects of severing studies completely from their ordinary connections in life.

Philosopher and psychologist William James often opposed the "intellect," which he supposed abstract and disconnected from real life, to the "will," which enables the courage to act; e.g.:

"Or what can any superficial theorist's judgment be worth, in a world where every one of hundreds of ideals has its special champion already provided in the shape of some genius expressly born to feel it, and to fight to death in its behalf? The pure philosopher can only follow the windings of the spectacle, confident that the line of least resistance will always be towards the richer and the more inclusive arrangement, and that by one tack after another some approach to the kingdom of heaven is incessantly made."

James returns again and again to the shortcomings of "superficial" theory, devoid of motivation or practical application; often he makes valid points, but always he calls for endorsing the will as a counterbalancing supplement to the intellect, not an essential component of it.

There's a persistent anti-intellectual strain in Pragmatist writing that paints "superficial theorists" as cowardly, ineffectual, and emotionally barren, in need of balancing with "practicality." It survives today in the self-help tropes that people need to get "out of their heads." But the Pragmatists did retain appreciation for experimental science and applied craft (and, of course, James helped found the modern American research university.) 

If you want to see the extreme version of disdain for all thought and peaceful production, you have to look beyond Pragmatism to fascism, with its overt rejection of sense-making. To a fascist, the irrational is always more potent and "magical" than the rational; the mundane, boring correctness of a shopkeeper's arithmetic marks him as simultaneously pathetic and sinister. Pathetic, because "mere" logic cannot move masses of people to collective battle-frenzy, which is the fascist's source of power and only conception of strength; sinister, because "mere" logic cannot be moved by social contagion and thus its wielder, being difficult to seduce into unity, is a potential threat.

Milder, more reasonable versions of the anti-intellectual hypothesis correctly note that smart people aren't all supermen, that practical experience and motivation matter too (see Scott Alexander's early critique of "extreme rationality.")  Like the Pragmatists, Alexander thinks that reason isn't enough to make you win, that it has to be supplemented with distinct, independent virtues.  The extreme version of the anti-intellectual thesis goes farther and holds that you could actually win more -- become more charismatic, more decisive, more powerful -- by becoming dumber. 

Scott Adams is an example of a modern advocate of extreme irrationalism:

“People are not wired to be rational. Our brains simply evolved to keep us alive. Brains did not evolve to give us truth. Brains merely give us movies in our minds that keeps us sane and motivated. But none of it is rational or true, except maybe sometimes by coincidence.”

“The evidence is that Trump completely ignores reality and rational thinking in favor of emotional appeal,” Adams writes. “Sure, much of what Trump says makes sense to his supporters, but I assure you that is coincidence. Trump says whatever gets him the result he wants. He understands humans as 90-percent irrational and acts accordingly.”

Adams adds: “People vote based on emotion. Period.”

“While his opponents are losing sleep trying to memorize the names of foreign leaders – in case someone asks – Trump knows that is a waste of time … ,” Adams writes. “There are plenty of important facts Trump does not know. But the reason he doesn’t know those facts is – in part – because he knows facts don’t matter. They never have and they never will. So he ignores them.”

Trump “doesn’t apologize or correct himself. If you are not trained in persuasion, Trump looks stupid, evil, and maybe crazy,” Adams writes. “If you understand persuasion, Trump is pitch-perfect most of the time. He ignores unnecessary rational thought and objective data and incessantly hammers on what matters (emotions).”

In other words, Adams thinks Trump's indifference to facts, his irrationality, is a strength.

How Can Stupidity Be Advantageous?

Advocates of an extreme irrationalist or anti-intellectual view have an obvious challenge in arguing their case. Knowledge is power; there are obvious ways in which information can be turned to advantage. Prima facie, making yourself more ignorant should mostly harm you, not benefit you.

Obviously it's possible to have useless knowledge which is not worth the effort of acquiring. But that's very different than knowledge being actively harmful.  How can knowing more, or better understanding the logical implications of what you know, cause you to make worse decisions?  If you maintain ignorance of something, the unknown thing could hurt you in unexpected ways; whereas if you regret learning something, you can at least in principle just go back to behaving as you would have if you hadn't learned it.

Ignorance constrains your options.  So why seek it?

Well, the usual reason people seek constraint is as a commitment device.

If you don't know the secret codes, you can't reveal them under torture.

More generally, there are all sorts of things you might be pressured by others to do, which you can excuse yourself from doing if you make sure you don't know how.  Witness all the people who are "just hopeless" at housework or administration.

But even this is not a good reason to seek general ignorance or irrationality.  Granted it may be strategic to avoid gaining some particular bit of knowledge or skill, which is only useful for things you'd rather not do; but surely it can't be advantageous to cripple a fully general skill like logic or arithmetic! You need those too often! How can the advantage of the commitment device outweigh the loss from actually being bad at thinking?

The point is, irrationality is not an individual strategy but a collective one. Being bad at thinking, if you're the only one, is bad for you. Being bad at thinking in a coordinated way with a critical mass of others, who are bad at thinking in the same way, can be good for you relative to other strategies. How does this work? If the coalition of Stupids is taking an aggressive strategy that preys on the production of Non-Stupids, this can lead to "too big to fail" dynamics that work out in the Stupids' favor.

"Here's a large mass of us who are stupid in the exact same way. This means when we fail, we all fail at once.  Now we're here, we're hungry, we're angry, and we're literally incapable of solving our own problems.  You really want to see what happens if you don't bail us out?"

Correlation of risk can lead to security, in this way. Make a mistake alone and you have to bear the cost; make a mistake along with an aggressive crowd and someone will have to rescue you.  
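
This bailout dynamic can be made concrete with a toy simulation (my own illustration, not from the post; all parameters are arbitrary): solo agents fail independently, while a correlated bloc fails all at once, and any failure wave large enough to be threatening gets bailed out.

```python
import random

def run_trials(n_trials=10_000, n_solo=50, n_bloc=50,
               fail_p=0.3, bailout_threshold=0.4, seed=0):
    """Toy model of correlated risk as a collective strategy.

    - Solo agents each fail independently with probability fail_p.
    - Bloc agents all fail together with probability fail_p
      (one shared coin flip -- they are "stupid in the same way").
    - A failure normally costs 1, but if the fraction of agents
      failing at once exceeds bailout_threshold, everyone who
      failed that round is bailed out and pays nothing.

    Returns (average cost per solo agent, average cost per bloc agent).
    """
    rng = random.Random(seed)
    n_total = n_solo + n_bloc
    solo_cost = bloc_cost = 0.0
    for _ in range(n_trials):
        solo_fails = sum(rng.random() < fail_p for _ in range(n_solo))
        bloc_fails = n_bloc if rng.random() < fail_p else 0
        if (solo_fails + bloc_fails) / n_total <= bailout_threshold:
            solo_cost += solo_fails
            bloc_cost += bloc_fails
    return solo_cost / (n_trials * n_solo), bloc_cost / (n_trials * n_bloc)

solo_avg, bloc_avg = run_trials()
# The bloc fails just as often as any solo agent, but every bloc-wide
# failure is big enough to trigger a bailout, so bloc agents pay nothing;
# solo agents still pay for their failures in ordinary rounds.
```

With these numbers, each agent type fails 30% of the time, but the bloc's failures always cross the bailout threshold while a lone agent's almost never do, so the expected cost falls almost entirely on the solo agents.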

As the saying goes, if you owe the bank a million dollars, you have a problem; if you owe the bank a billion dollars, the bank has a problem. The bailouts of the 2008 financial crisis are an example of this phenomenon, as is the medieval practice of expelling the Jews from a kingdom once the king could no longer afford to pay his debts to Jewish moneylenders.

Less obviously, normalization of deviance is an example of this "stupid strategy." Organizations have standards for safety, quality control, and so on; in a functional organization, if a single worker falls short of the standard, she will be less professionally successful or even face disciplinary action. In a dysfunctional organization, violation of the standard gradually becomes so commonplace that it becomes normative. Nobody actually follows the rules; there's tacit common knowledge that the rules are unreasonably stringent and "just for show" and that people can't be expected to follow them literally; after all, if enough people are violating the rules, you can't just fire all of them!

But to the extent that the organization's survival actually depends on those standards (e.g. in a company whose revenue depends on its products meeting certain quality standards), the rule-breaking strategy is parasitic on the minority of workers who actually try to meet the standards and have to clean up the rule-breakers' messes. The rule-breakers get job security and advancement without having to make the effort to meet standards -- until standards fall so far that the whole organization collapses, at which point they can claim it wasn't their fault, since they were behaving "normally." The rule-breaking coalition has become "too big to fail," and the (invariably less senior) rule-followers get screwed.

Note that a strategy doesn't have to produce good outcomes to be evolutionarily stable. It could be much better to live in a less stupid society, yet still locally optimal, given one's current social environment, to join the Stupids.



Against Multilateralism (2019-09-16)

Unilateral actions are those that a single person, or small group of people, can take without consulting anybody else.

Multilateral actions are the opposite: actions that require the cooperation and approval of many people.

For instance, the "freedom to roam" (allemansrätten in Swedish) is a unilateral right in many Scandinavian countries -- any person can walk freely in the countryside, even over other people's land, without having to ask permission, provided he or she does not disturb the natural environment. You don't have to "check in" with anyone; you just take a walk.

People often mistrust unilateral actions even when at first glance they seem like "doing good":

  • Dylan Matthews at Vox opposes billionaire philanthropy (a unilateral donation to charitable causes the billionaire prefers) on the grounds that it undermines democracy (a multilateral process in which many voters, politicians, and government agencies deliberate on how money should be spent for the common good).
  • People are alarmed by geoengineering, a collection of technological methods for reversing global warming which are within reach of a single company acting unilaterally, and much more comfortable with multilateral tactics like international treaties to limit carbon emissions.
  • Gene drives that could wipe out malaria-causing mosquitoes could be a unilateral solution to eradicating malaria, unlike the multilateral solution of non-governmental aid organizations donating to malaria relief.  Gene drives are controversial because people are concerned about possible risks of releasing genetically modified organisms into the environment -- but they have the potential to eliminate malaria much faster and more cheaply than anything else.
  • Paul Krugman is troubled by the prospect of billionaires funding life extension research (a unilateral approach to solving the problems of age-related disease) because he's concerned they would ensure that only a privileged few would live long lives.

Often, unilateral initiatives are associated with wealth and technology, because both wealth and technology extend an individual's reach. 

I didn't really "get" why biotechnology innovation scared people until I watched the TV show Orphan Black.  There's a creepy transhumanist cabal in the show that turns out (spoiler!) to be murdering people. But before we know that, why is the show leading us to believe that this man onstage talking about genetic engineering is a bad guy?

I think it's about the secrecy, primarily. The lack of accountability. The unilateralism. We don't understand what these guys are doing, but they seem to have a lot of power, and they aren't telling us what they're up to.

They're not like us, and they can just do stuff without any input from us, and they have the technology and money and power to do very big things very fast -- how do we know they won't harm us?

That's actually a rational fear. It's not "fear of technology" in some sort of superstitious sense.  Technology extends power; power includes the power to harm.  The same technology that fed a starving planet was literally a weapons technology.

Glen Weyl's post Why I Am Not A Technocrat basically makes this point.  Idealistic, intelligent, technologically adept people are quite capable of harming the populations they promise to help, whether maliciously or accidentally. He gives the examples of the Holodomor, a man-made famine created by Soviet state planning, and the rapid, US-economist-planned introduction of capitalism to Russia after the fall of the Soviet Union, which he claims was mismanaged and set the stage for Putin's autocracy.

In economic terms, Glen Weyl's point is simply that principal-agent problems exist. Just because someone is very smart and claims he's going to help you, doesn't mean you should take his word for it.  The "agent" (the technocrat) promising to act on behalf of the "principal" (the general public) may have self-interested motives that don't align with the public's best interest; or he may be ignorant of the real-life situation the public lives in, so that his theoretical models don't apply.

I think this is a completely valid concern.

The most popular prescription for solving principal-agent problems, though, especially when "technology" is mentioned, is simple multilateralism, what Weyl calls "design in a democratic spirit."  That is: include the general public in decisionmaking. Do not make decisions that affect many people without the approval of the affected populations. 

"Democratic designers thus must constantly attend, on equal footing, in teams or individually, to both the technical and communicative aspects of their work. They must view the audience for their work as at least equally being the broader non-technical public as their technical colleagues. They must view a lack of legitimacy of their designs with the relevant public as just as important as technical failures of the system."

In other words: if the general public isn't happy with a thing, it shouldn't be done. "Thin" forms of public feedback like votes or market demand are not enough for Weyl; if there's "political backlash and outrage" that in itself constitutes a problem, even if a policy is "popular" in the sense of winning votes or consumer dollars.  The goal for "democratic designers" is to avoid any appreciable segment of the public getting mad at them.

This is a natural intuition. Govern by consensus. Include all stakeholders in the decision process. It's how small groups naturally make decisions. 

Inclusion and consensus have a ring of justice to them. They make for good slogans: "No taxation without representation." "Nothing about us without us." And they really do provide a check on arbitrary power.

It is also extremely expensive and inhibits action.

I don't think you can have a contemporary level of technology and international trade that follows the rule "everyone whose life is affected by a decision should be included in the decision process." Technology and trade allow strangers to affect our lives profoundly, without ever asking us how we feel about it. Many people are unhappy that globalization and technology have altered their traditions. They have real problems and real cause for complaint. And yet, I'm pretty sure that a majority of the human race would have to die in order to get us "back" to a state where nobody could change your life from across the globe without your consent. If you want the world to be governed wholly by consensus, I think you have to be something like an anarcho-primitivist -- and that carries some brutal implications that I don't think Weyl would endorse.

The good news is, multilateral or democratic consensus is not the only mechanism for solving principal-agent problems.

I can think of three other categories of ways to put checks on the power to harm.

1. Law
If you define certain types of harm as unacceptable, you can place criminal or civil penalties on anybody who commits illegal acts.
This is more efficient than consensus because it only imposes costs on illegal actions, while consensus imposes a cost on all actions (the time and resources spent on deliberation and the risk that consensus won't be achieved).

The difficulty, of course, is ensuring that the legal and judicial system is fair and considers everyone's interests. In democracies, we use deliberative consensus as part of the process for writing and approving laws. But that's still a lot more efficient than using consensus directly for all decisions in place of laws.

2. Self-Protection
This includes all situations where the potential victims of harm have a readily available means to protect themselves from being harmed.
Again, it's more efficient than consensus because it doesn't impose costs on all actions, just harmful ones. It has an advantage over law in that it doesn't require anyone to specify the types of harm beforehand -- human life doesn't always fit neatly into a priori systems. It has a disadvantage in that, by default, the potential victims bear the costs of protecting themselves, which seems unfair; but laws and policies which lower the cost of self-protection or place some responsibility on perpetrators can mitigate this.

Self-protection includes:
  1. self-defense (as protection against violence)
  2. security protections against theft or invasion of privacy (locks, cryptography)
  3. various forms of exit (the right and opportunity to unilaterally leave a bad situation)
    1. the choice not to buy products you don't like and buy alternatives
    2. the choice to leave a bad job and find a better one
    3. the choice to leave one town or country for another
    4. the choice to leave an abusive family or bad relationship
  4. disclosure requirements on organizations, or free-speech rights for whistleblowers and journalists, that enable people to make informed decisions about who and what to avoid
  5. deliberately designing interventions to be transparent and opt-in, so that if people don't like them, they don't have to participate

3. Incentive Alignment

This includes things like equity ownership, in which the agent acting on behalf of a principal is given a share of the benefits he provides the principal. It also includes novel ideas like income share agreements, which introduce equity-like financial structures to human endeavors like education that haven't traditionally incorporated them.

This has the advantage over consensus that you don't have to pay the costs of group deliberation for every decision, and the advantage over law that it doesn't require anyone to enumerate beneficial behaviors a priori -- the agent is incentivized to originate creative ways to benefit the principal. The disadvantage is that it's only as good as the exact terms of the contract and the legal system that enforces it, both of which can be rigged to benefit the agent. 
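
A toy numeric sketch of this alignment effect (the numbers and functional forms are my own illustrative assumptions, not from the post): an agent who keeps a share of the value he creates chooses higher effort than one paid a flat wage.

```python
def agent_payoff(effort, share, wage=1.0, effort_cost=0.4):
    """Agent earns a flat wage plus a share of the value created;
    effort is privately costly (quadratic cost)."""
    value_created = 10 * effort  # value delivered to the principal
    return wage + share * value_created - effort_cost * effort ** 2

efforts = [e / 10 for e in range(51)]  # candidate effort levels, 0.0 .. 5.0

# With no equity share, effort only costs the agent, so he does nothing;
# with a 20% share, his privately optimal effort rises, and the
# principal's value (10 * effort) rises with it.
best_flat = max(efforts, key=lambda e: agent_payoff(e, share=0.0))
best_equity = max(efforts, key=lambda e: agent_payoff(e, share=0.2))
```

Note that nobody had to enumerate "good behaviors" in advance here: the agent searches over efforts himself, and the share term makes the principal's interest part of his own objective.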

As with criminal law, consensus deliberation mechanisms can be used in a targeted way, on the "meta-problem" of defining the "rules of the game" in ways that are accountable to the interests of all citizens. We can have public deliberation on the question of what kinds of contracts should be enforceable, but then let the contractual incentives themselves, rather than costly mass deliberation, govern day-to-day operational decisions like those involved in running a company.


The Case For (Controlled) Unilateralism

It's clear that principal-agent problems exist. But we don't have to go back to primitive government-by-consensus in order to prevent powerful people from taking advantage of others. There are lots and lots of legal and governance mechanisms that handle principal-agent problems more efficiently than that.

Moreover, government-by-consensus isn't even that safe. It's vulnerable to demagogues who falsely convince people that their interests are being represented. In fact, I think a lot of highly unilateral, technological initiatives are getting pushback not because they're uniquely dangerous but because they're uniquely undefended by PR and political lobbying.  

We need unilateral solutions to problems because consensus and coordination are so difficult. Multilateral solutions often fail because some party who's critical to implementing them isn't willing to cooperate.  For instance, voters around the world simply don't want high carbon taxes. Imposing a coordination-heavy project on an unwilling population often takes a lot of violence and coercion.

Technology, by definition, reduces the costs of doing things. Inventing and implementing a technology that makes it easy to solve a problem is more likely to succeed, and more humane, than convincing (or forcing) large populations to make large sacrifices to solve that problem.

Of course, I just framed it as technology "solving problems" -- but technology also makes weapons. So whether you want humanity to be more efficient or less efficient at doing things depends a lot on your threat scenario.

However, I see a basic asymmetry between action and inaction.  Living organisms must practice active homeostasis -- adaptation to external shocks -- to survive. If you make a living thing less able to act, in full generality, you have harmed it. This is true even though it is possible for an organism to act in ways that harm itself.

The same is true to a much greater degree for human civilization. "Business as usual" for humanity in 2019 is change. World population is growing rapidly. Our institutions are designed around a prediction of continued exponential growth in resources.  A reduction in humanity's overall capacity to do things is not going to result in peaceful stability, or at any rate, not before killing a lot of people.

Do we want to guard against powerful unilateral bad actors? Of course. We need incentives to constrain them from hurting others, and that's the task of governance and law.  But the cost of opposing unilateralism indiscriminately is too high. We need mechanisms that are targeted, that impose costs especially on harmful actions, not on beneficial and harmful actions alike.

Using Bullet Points to Improve Arguments (2019-09-05)

Bullet points may not be elegant prose style, but I think they're helpful for making disagreements productive.  I learned this technique from Paul Christiano and I hope it catches on further.

Conversational back-and-forth is a terrible format for resolving disagreements in good faith. 

  • A conversation is single-threaded. Alice says something; Bob replies to Alice's last statement; Alice replies to Bob's last statement; and so on.
    • Sometimes a single conversational "turn" is not long enough to express the whole idea Alice was trying to get across. Bob interjects at what feels like a natural "stopping point", but Alice wasn't done, and now she has to either grab the conversation back (which feels rude) or give up on making her point.
    • Structured arguments are not single-threaded; they are branched. Each claim has supporting evidence.  If I believe A because B, C, and D, and after an hour you finally convince me that B is false, we might "feel" like you've "won" the argument, and not notice that you haven't convinced me that A is false.

  • Verbal conversations are often limited by one or both parties' mental energy or sense of social appropriateness.
    • Bob may agree with Alice not because he's convinced but because he's tired of arguing or worried that continued argument will damage their relationship.
    • The structure of the argument may become unclear when the discussion partners are overcome with strong emotion.
    • Alice may ask Bob for clarification once or twice, but will feel like it's rude to keep saying "no, I still don't understand" three times in a row, even if she really doesn't understand.
    • Detailed, nested arguments may never actually get across because it feels rude to ask busy people to read walls of text or have hours-long conversations.
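
The branched structure described above -- believing A because of B, C, and D -- can be sketched as a tiny claim tree (a hypothetical illustration of the idea, not any particular tool's format):

```python
class Claim:
    """A claim plus the sub-claims offered in support of it."""
    def __init__(self, text, supports=()):
        self.text = text
        self.supports = list(supports)
        self.refuted = False

    def surviving_supports(self):
        # Supports that have not (yet) been argued down.
        return [s for s in self.supports if not s.refuted]

b, c, d = Claim("B"), Claim("C"), Claim("D")
a = Claim("A", supports=[b, c, d])

b.refuted = True  # an hour of debate knocks out B...
# ...but A still rests on C and D, so "winning" against B
# hasn't actually refuted A.
remaining = [s.text for s in a.surviving_supports()]
```
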

Bullet points solve some of these problems:

  • They clearly identify which statements are supporting examples for which main points.
  • They de-emphasize rhetoric and foreground the structure of the claim.
  • They make it easier to point out which statements you agree or disagree with.
    • This reminds people that it's okay to partially agree; it promotes nuance.
  • They reduce length, so busy people can see the argument structure at a glance.
  • Because they're written rather than spoken, they allow people to take breaks from the discussion and pick up where they left off.

Apps with infinite nesting capabilities, like Workflowy, are especially good for this, but plain old text is fine.

Possible objections:

  • "Most arguments aren't really about a structure of claims and supporting evidence! Usually what people say they're arguing about isn't the thing they really care about deep down!"
    • True, but "arguing about one thing when the real issue motivating you is something else" is pretty much the definition of "arguing in bad faith". Sometimes people are arguing in good faith!
    • Bad faith arguments often don't make sense structurally, and structuring arguments explicitly can help make that clear.
      • e.g. if Bob feels sure Alice is wrong about something important, but doesn't know what, and argues against one of her points at random, his specific argument is likely not to hold water, even if his feeling of disquiet is justified.
      • Bullet points and other structural aids can make it easier to see that Bob's specific claim is wrong.
      • Alice and Bob also need to have the emotional maturity to realize that Bob may be seeing a problem he can't quite articulate, and cooperate to figure out what it is. Bullet points can't automagically give you that.
  • "Bullet points aren't flexible enough! To really formalize arguments you'd need logical operators or something!"
    • Yeah, "or something." Fully formalizing human speech is a hard problem. This is a very minimal stab at making some speech a little more structured.
  • "Bullet points are ugly/corporate/boring/not how my English teacher taught me!"
    • In my experience, it is hard enough to simply be clear that it's often worth sacrificing style to make sure people understand.
    • You can always go back and turn your bullet points into essays.
How to Make A Memex (2019-08-28)

Vannevar Bush's 1945 essay "As We May Think" prefigured the invention of hypertext and the Internet.

He imagined a "memex", a desk equipped with a microfilm apparatus, "in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory."

The memex allows its owner to link to sources and comment on them. This way, he can record, for his own recollection, what he was reading and how it was relevant to questions he was thinking about. He can create "trails" of research questions, which contain links to and excerpts from various sources he finds along the way. And these personal trails can be copied and shared with others, to put into their own personal memexes.

Arguably the Internet forms one big memex today. Bush was right in his prediction that "wholly new forms of encyclopedias will appear", that "The patent attorney has on call the millions of issued patents," and "The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics."

But Bush imagined the memex as a private (though shareable) record, not a communal one. Each person should have their own memex.

This matters because people need complex private thought.

Nicholas Carr, concerned about the effect of the Internet on human cognition, argues that a "complex personality" is actually the result of forming one's own interpretations of what one reads, forming a private "cathedral-like" structure, a "personally constructed and unique version of the entire heritage of the West."

Educated men and women of the 19th century worked at constructing inner lives through text. They kept diaries. They wrote letters. They kept files so they could remember what they were writing at different points in their lives.

Today, educated people also read and write a lot, but in an ephemeral and exposed fashion. Social media has a short memory. The defaults don't permit you to organize your own space. Moreover, the demands of immediate sharing with everyone mean that you're constantly modeling what other people will think of your writing. Not only are you incentivized away from writing controversial things, but also away from writing anything that might be confusing or involve a large inferential distance from the audience. Long and nested chains of reasoning are hard to convey to all readers. Private concepts that you've invented are too "jargony", and people may criticize you for "needlessly" inventing terms. If you do all your thinking in public, in venues where you always have to start from a presumption of zero familiarity with your other thought, you can't create complex thoughts at all.

The "intimacy" Bush wrote about is lost. You get dumber if your thoughts are limited to the bandwidth that you can successfully communicate in a few minutes to arbitrarily many strangers.

The solution is to create a personal memex. A record of your thoughts and associations, which you will only share parts of with others.

I use Roam for this.

(I know the founders but I'm not paid to promote Roam; I just genuinely love the tool.)

Roam has two main features that make it better than a simple notebook or text document: links and indents.

Links, of course, allow you to make associations between pages. Infinitely threaded indents allow you to impose hierarchical structures of arbitrary depth.

This allows you to make and visualize a graph of all your notes.
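
As a rough sketch of what such a note graph amounts to as a data structure (my own toy model; the page names and note text below are made up, and this is not Roam's actual implementation):

```python
import re
from collections import defaultdict

class Memex:
    """Minimal note graph: pages hold (depth, text) bullets, and any
    [[Page Name]] mention creates a link and a backlink."""
    def __init__(self):
        self.pages = {}                    # title -> list of (depth, text)
        self.links = defaultdict(set)      # title -> titles it links to
        self.backlinks = defaultdict(set)  # title -> titles that link to it

    def add_note(self, page, text, depth=0):
        self.pages.setdefault(page, []).append((depth, text))
        for target in re.findall(r"\[\[(.+?)\]\]", text):
            self.pages.setdefault(target, [])  # linking creates the page
            self.links[page].add(target)
            self.backlinks[target].add(page)

m = Memex()
m.add_note("2019-08-28", "Re-read [[As We May Think]]")
m.add_note("2019-08-28", "trails are shareable", depth=1)  # nested bullet
m.add_note("Memex", "Bush's desk-sized machine from [[As We May Think]]")
# The "As We May Think" page now knows every page that references it.
```

The link/backlink pair is what makes the graph navigable in both directions: a concept page accumulates pointers from every daily note that mentions it.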

Roam also has one important feature that nothing else I've used does -- very low friction to making and linking new pages. A memex is only as good as your willingness to use it. If it's clunky to take notes, you won't. So, Roam has keyboard shortcuts for links and indents and other features (like LaTeX markup!). It also has very rapid loading times, so it's less irritating to use than a word processor or Google Docs. You can add and link pages as fast as you can type.

Mainly, I use my memex as a personal record of my thoughts. I make pages for concepts of interest, including references to people. ("Memex" has its own page.) On each day's "daily notes", I add links to things I read, my reactions to conversations or things I've read, worries on my mind, drafts of arguments (diagrammed out with heavy use of indents; supporting arguments are threaded beneath the claims they justify), etc. When I mention a concept, I tag the word so it links to the corresponding concept page.

I've noticed this allows me to think in a more nuanced way. When I don't have to compress an idea to make it easier to communicate with others, I can allow it to have dependencies, exceptions, conditionals...all the sub-clauses that make it hard to fit in a tweet.

It also allows me to gain more temporal consistency -- more sense of the commonalities between me-of-last-week and me-of-this-week, to remember when I keep coming back to the same thoughts, etc.

And it has the usual advantages of diaries -- helps me process my emotions, helps me make sense of my thoughts, helps me keep track of my life.

Ultimately, I think it could be a replacement for a documents folder or Google Drive, though I'm not sure I'm quite ready to switch my whole text-based life into Roam. It's certainly an upgrade from either a diary or a blog. I hope more people try it!
