Second-Order Existential Risk

[Epistemic status: Low confidence]

[I haven’t seen this discussed elsewhere, though there might be overlap with Bostrom’s “crunches” and “shrieks”]

How important is creating the conditions to fix existential risks versus actually fixing existential risks? 

We can somewhat disentangle these. Let’s say there are two levels to “solving existential risk.” The first level includes the elements deliberately ‘aimed’ at solving existential risk: researchers, their assistants, and their funding. On the second level are the social factors that come together to produce humans and institutions with the knowledge and skills to even be able to contribute to reducing existential risk. This second level includes things like “a society that encourages curiosity” or “continuity of knowledge” or “a shared philosophy that lends itself to thinking in terms of things like existential risk (humanism?).” All of these have numerous other benefits to society, and they could maybe be summarized as “create enough surplus to enable long-term thinking.”

Another attribute of this second level is that these are all conditions that allow us to tackle existential risk. Here are a few more of these conditions:

  • Humans continue to reproduce. 
  • Humans tend to see a stable career as their preferred life path.
  • Research institutions exist.
  • Status is allocated to researchers and their institutions. 

If any of these were reversed, it seems conceivable that our capacity to deal with existential risk would be heavily impacted. Is there a non-negligible risk of these conditions reversing? If so, then perhaps research should be put into dealing with this “second-order” existential risk (population collapse, civilization collapse) the same way it’s put into dealing with “first-order” existential risk (nuclear war, AI alignment).

Reasons why second-order x-risk might be a real concern:

  • The above conditions are not universal and thus can’t be taken for granted.
  • Some of these conditions are historical innovations and thus can’t be taken for granted.
  • The continued survival of our institutions could be based more on inertia than any real strength.  
  • Civilizations do collapse, and given our increased interconnectivity, a global collapse seems possible.
  • A shift away from ‘American values’ over the coming decades could lead to greater conformity and less innovation. 
  • Technology could advance faster than humanity’s ability to adapt, significantly impacting our ability to reproduce ourselves. 

Reasons why second-order x-risk might not be a real concern:

  • Civilization keeps on trucking through all the disruptions of modernity. Whether or not the kids are alright, they grow up to complain about the next kids. 
  • Whatever poor adaptations people have to new technology, they’ll be selected against. Future humans might develop good attention habits and self-control. 
  • The bottleneck could really just be funding. You don’t need that many talented people to pluck all the significant x-risk fruit. They’re out there and they’ll be out there for years to come; they just need funding once found.

When considering whether or not second-order x-risk is worth researching, it’s also worth looking at where second-order existential risk falls in terms of effective altruist criteria: 

  • Scale: Impaired ability to deal with existential risk would, by definition, affect everybody. 
  • Neglectedness: Many people are already working on their version of preserving civilization.
  • Tractability: It is unclear what the impact of additional resources would be. 

My suspicion is that second-order x-risk is not as important as first-order x-risk. It might not even be a thing! However, I think the tractability is still worth exploring. Perhaps there are cheap, high-impact measures that maximize our future ability to deal with existential risk. It’s possible that these measures could also align with other EA values. Even decreasing disease burden in developing countries slightly increases the chances of a future innovator not dying of a preventable disease.

I am also personally interested in the exploration of second-order x-risk because there is a lot of overlap with conservative concerns about social and moral collapse. I think those fears are overblown but they are shared by a huge chunk of the population (and are probably the norm outside of WEIRD countries). I’m curious to see robust analyses of how much we realistically should worry about institutional decay, population collapse, and technological upheaval. It’s a ‘big question’ the same way religion is: if its claims are true, it would be a big deal, and enough people consider it a big deal that it’s worth checking. However, if it is rational to not worry about such things, then we could convince at least a few people with those concerns to worry about our long-term prospects instead.

Three Ways to Think Less Stupidly About Groups

[Epistemic status: Confident]

On simple ways to think better about groups. 

There are three common ways I see people reasoning poorly about groups. The errors overlap a lot, and so do their solutions.

Three dumb ways to think about groups:

  1. “Group opinion is monolithic”

This sounds obviously and intuitively foolish, but I think it’s one of those common-sense things we forget when it’s convenient to forget it. I most often see this way of thinking come out in politics. Imagine Alice (a liberal) and Bob (a conservative) arguing about Black Lives Matter. At some point Alice, frustrated with Bob, says, “You should listen to black people.” This sounds like a pretty big own, but it’s more or less dysfunctional. Black people are not a monolith. Obviously they, like any racial group, have a wide range of opinions and beliefs. However, this kind of statement isn’t an uncommon play in debate.

Just what does Alice mean? Imagine that Bob replies, “I do! I listen to black conservatives!” Now imagine Alice sighing and clarifying: “What I mean is, you should listen to black people who agree with me.” This doesn’t sound so impactful, but it’s really what’s being said! The group is being used as a stand-in for authority, a way to sneak in Alice’s own opinions.

Note that this way of thinking is somewhat more fair when applied to political groups, which actually are nominally united in beliefs. Then again, political groups don’t have the same slam-dunk emotional appeal. I’ve never heard, say, “Maybe try listening to some neoliberals.”

It also gets more complicated when somebody says to just “listen to the experts”. Experts, for obvious reasons, are more likely than entire identity groups to agree on sets of beliefs (and can be trusted to get there over time). So sometimes you really can appeal to expert opinion, though ideally you’d appeal to actual arguments (to be fair, life is short)! Problems emerge when experts don’t agree. Consider the fact that Sweden’s herd immunity strategy was developed by its public health officials: if you were to argue with a Swede about the necessity of lockdowns, both of you could pull the “experts recommended my nation’s strategy” card. Furthermore, on occasion, expert opinion can shift rapidly (see: masks and COVID). There isn’t really a replacement for a functional brain.

Anyway, I don’t think there is really a solution when somebody is telling you to shut up and listen to their preferred opinion-stand-in. However, I do think it’s possible to catch and stop yourself in this habit of thinking. Groups are not monolithic! Invoking them is not enough to get out of actually debating the arguments!

  2. “Groups are their worst members”

The simplest explanation is to just link to Cardiologists and Chinese Robbers*, but it bears repeating. Every group large enough to be noticed is large enough to have bad apples. Pointing out the excesses of a few nuts (ideally, all miscreants could be identified as something small and edible) is not enough to condemn the larger group. This applies to political groups but it also applies to any group. Consider a “reporter” whose game is to just retweet news about migrant crime! Obviously if you focus on the worst of your outgroup, you’ll see a lot of bad. And conversely, obviously if you only focus on the best of your ingroup, you’ll come out rosy. This should be an easy one to avoid!

“But Ideopunk!” you cry. “Some groups really are bad! The alt-right actually does suck!”

Yes! Some groups do suck! But that will be clear even when we look at members besides the worst 5%. 

“But Ideopunk!” you cry. “Even though most of my enemies don’t commit crime and abuse justice themselves, they covertly support the worst of the group!”

You know what, pal, you’re totally right. That does happen (police culture is a great example). However, you should be very careful before assuming: 

1) That this is a coherent group you’re describing,

2) That the majority really haven’t condemned the worst offenders, and

3) That the standards you’re holding the outgroup to are standards you hold yourself to.

Catch yourself doing this, and catch the parts of your beliefs that rely on this way of thinking. If somebody is citing anecdotal evidence of some group doing something terrible (Migrant crime!), ask them for base rates or for evidence that the larger culture is devoted to covering up said terrible thing.

  3. Names are nebulous

I think this one is hard to stop using. You have to stop treating political labels as information-rich and consistent. If you don’t know what I mean, here are a few examples:

  • “Like all utilitarians, he’s short-sighted.”
  • “You’re a conservative, so you hate poor people.”
  • “Socialism is just communism in disguise. You want to take away my rights.”
  • Bonus: Being confused about why a conservative supports a liberal policy.

All of these are ridiculous, and few people would actually think them out loud, but I think the underlying assumption is common and largely implicit. I suspect that most people feel they’ve learned a lot as soon as they’ve heard somebody name their political belief system. This is a mistake. People have complex (or confused!) definitions of their identities and complex (or confused!) reasons for why they chose that identity! Even if those definitions were simple and straightforward, they could vary wildly from one person to the next.

I’m not saying that you’ve learned nothing upon hearing that a person identifies as a libertarian. But you are more likely to make the mistake of overconfidence in your new assumptions than underconfidence. You should update your guesses about his beliefs. You should not (yet) chuckle, having confirmed him to be a bitcoin-obsessed polyamorist who wants to raze the poor.

Like I wrote at the beginning, these are all related, and sometimes they intersect in the most annoying ways. Consider a template that combines #1 and #3.

The idea is that the outgroup’s beliefs are modified or put away whenever they become inconvenient. One reason I find this template so frustrating is that it so neatly parallels one of my favorite gotchas: barefaced personal hypocrisy. For example:

[Via Ari Schulman]

The “person” version is valid because it’s an example of hypocrisy. The “outgroup” version is an example of different people with the same label having different beliefs, the most normal thing in the world. This should not be astounding.

What should you do if you notice yourself being annoyed by the hypocrisy of a group, but not by any one person? Just remind yourself that these are probably different people. Maybe you can give yourself permission to be very annoyed if it does turn out to be one person. 

All of these solutions involve reminding yourself that even though these are convenient ways of thinking, they’re convenient because they’re cheap. They avoid the work of actually dealing with arguments or the complexity of people. That’s harder, but will lead to more interesting questions and will make it easier to engage with people you disagree with–assuming, of course, that that’s what you want.

* This is, unfortunately, no longer possible

The Values of Others

[Epistemic status: Mildly confident, not surprised if missing important component]

Not to be confused with the value of others!

The other day I was asked to provide a short clip explaining why I’ve (pre-)volunteered for human challenge trials. I’ve talked about it in a few different places at varying lengths, but here they just wanted a sentence or two to go alongside clips from other volunteers in a story about HCTs for a show. Neat!

The org rep asking for the clip mentioned that the show is right-leaning, and what immediately came into my head was Jonathan Haidt’s moral foundations theory: what values should I appeal to in order to convince conservatives that HCTs are a good idea? I have a number of reasons for volunteering, there are a number of reasons that are valid, and there had to be a great gem that fit both of those!

In the end I decided against it. Instead I said what I’ve basically said to everybody: that the benefits to everybody far outweigh the personal risk. I could have tailored something to appeal to conservative values: “The economy is fucked, our children will be paying for it for years, every day counts.” But I didn’t. I felt weird appealing to the values of others. 

Is that the right call? It seems like common sense to appeal to the values of others, in a Dale Carnegie way. Show people why they should agree with you! People don’t care about your values; they want you to appeal to theirs!

On the other hand…

Interactions this condescending don’t happen often outside of memes, but I think they point to something in how we try to talk across political tribes. I feel less and less convinced by this approach and more concerned that it’s slimy.

Here’s how I currently see it: it’s not fair to ask misguided people to be a bit less wrong in a way that benefits you instead of asking them to stop being wrong. Imagine that you walk in on somebody stealing from the shared cookie jar (something you would never do), and you tell them that you’re not a thief yourself, but if the thief you just encountered could pass you a cookie, you’d appreciate it.

What would be better? Just tell them to stop being a thief! 

Some caveats apply: 

  1. If you share values, this doesn’t apply. If you think taking extraordinarily large quantities of cookies from the jar is fine, then there’s nothing wrong with asking a fellow thief to pass you one. The problem is in trying to exploit belief systems you don’t respect. 
  2. Obviously there’s a version of this that’s totally neutral. “You hate running, I love it. But you want to lose weight, so you should join me!” In case it needs explaining, the problem is in appealing to values you don’t hold yourself. If you think losing weight is a stupid value (perhaps you are fat-positive!), then you are condescending to your friend. You either don’t care about whether they actually achieve their goal, or you simply think it’s a goal that can’t be achieved, or, what’s most likely, you just don’t think about it at all: other people’s values are a means to an end.
  3. There’s a lot of overlap in values, and often the difference is really one of degree. Thank god, because otherwise negotiation and reasoning between value sets would be much harder. Nonetheless, it still happens that we encounter people whose value sets we disagree with but try to benefit from anyway.

Think about the common dichotomy of ‘speech’ and ‘violence’. There’s trying to smack your opponents, and then there’s trying to convince your opponents. But there’s also a third option: trying to exploit your opponents. The best metaphor here might be a network security one. When you try to convince your opponent, you are trying to upgrade their system in whatever way: your version is more efficient, more secure, whatever, and you think the network is better if any node is better. In contrast, when you try to use people’s values against them, you are treating their values as a backdoor exploit for using their resources.

There are nicer and less nice ways to convince, of course, and there are meaner and less mean ways to exploit. But there aren’t any good ways. 

In A Connecticut Yankee in King Arthur’s Court, the main character, teleported back in time, uses his knowledge of a historical eclipse to trick the King into believing he has immense power. I don’t see a difference between this form of exploiting beliefs and what people do when they tell each other that something is “really in their interest”. If you think somebody is wrong, don’t appeal to their wrong values. Maybe it would instead be more decent to tell them that their value system just doesn’t work.

This might explain why I like liberalism and dislike acting like every perspective is valuable. I like the idea of everybody having to convince each other instead of hitting each other. But I don’t like the idea of finding the value in every position, because some positions are fundamentally misguided and everybody but the believer knows it. Wanting people to retain dysfunctional beliefs is a lot like not wanting your children to grow up.

So go forth! Stop treating people’s values as means to your own values. Either be honest about what you want or leave them be!