Poetry With Heart

Kakinomoto no Hitomaro was a Japanese waka (tanka) poet who lived during the late 7th century. Wikipedia says he was known for his elegies for imperial princes, but his poems within 100 Poems from the Japanese are by and large about love, loss, and longing. He is considered one of the great Japanese poets.

Rupi Kaur is a contemporary Canadian Instagram poet. Her topics include love, trauma, and self-care. Though she is intensely popular, critics have called her work lazy, shallow, and simplistic. 

I don’t think I’m being cheeky when I say the above two poems have the exact same energy, separated by thirteen centuries. During that time, simple emotionality became unacceptable (insufficient) as poetry, so that now one poet is venerated and the other ‘isn’t a real poet’. To be fair, these are completely different cultures, and I don’t know what Japan likes in its contemporary tanka. But still, I’m left with questions. 

Past

Was such poetry regarded as equal in quality to ‘deeper’ poetry? Consider this other poem from Hitomaro: 

A strange old man

Stops me,

Looking out of my deep mirror 

This is such a perfect poem. It captures the jolt upon seeing oneself aged in a mirror, the disquiet of it, how it briefly removes selfhood–the reflection becomes a challenging foreigner, ‘stopping’ the narrator. 

In this collection, Hitomaro's poems are a mix of thoughtful, beautiful pieces like this one and simple expressions of angst. Were they all regarded equally? If so, what made them equal?

Future

A century from now, will there be any poets from the 2010s remembered besides Kaur? I've seen it proposed that a millennium from now, Tolkien will be the one author remembered from the 20th century (much the same way Dante is the author we know from the 14th century). The same logic should hold at the century scale: a century from now, we'll remember only one poet per decade. Maybe Anne Carson will claim the 1990s. But it seems to me that outselling Homer ten to one will snag the 2010s for Kaur. Will she be remembered the way we remember Hitomaro?

Present

Consider this quote from Ariana Reines, one of my favorite contemporary poets:

I want to say something about bad writing. I’m proud of my bad writing. Everyone is so intelligent lately, and stylish. Fucking great. I am proud of Philip Guston’s bad painting, I am proud of Baudelaire’s mamma’s boy goo goo misery. Sometimes the lurid or shitty means having a heart, which’s something you have to try to have.

Do we like bad writing? Does it have heart? What does it say about our culture that our good writing doesn’t? 

The Virtues of Melee

This post comes at an awkward time. 

Two weeks ago, improvements to the netcode for Super Smash Bros. Melee (Melee) dramatically expanded the potential for online play. I've started playing again, and it's gotten me thinking about the virtues of competition. From 2014 to 2016, Melee's tournament scene was how I came to appreciate competition: the joy of it and the demand of it.

One week ago, the Smash scene exploded as accusation after accusation of sexual misconduct by top players rolled in, and accusation after accusation was verified. This has largely affected Ultimate, the latest entry in the series, though a few Melee players have also been revealed to have done terrible things.

Amidst the brutality of all the revelations, I wanted to write this post to remind myself of what I love about Melee, competition, and tournaments. The community is now having an important and complicated discussion about how to make events safer. I don’t have much to add as I only attend a tournament once every few months. Mostly I hope people go through with steps to improve the scene rather than settle for just catching the current crop of abusers. 


What I want to talk about is the virtues that Melee inculcates. We don't intuitively connect virtue and gaming, but improving at Melee (or any other sport or complex game) requires virtue. What happens to those who can't accept losing? They're forced to cultivate virtue.

To get in the right mindspace, imagine that you play a video game for fun. It's a good time, you know some tricks, you can beat most of your friends. Then you meet somebody you can't beat. Imagine that it bugs you. For some reason, you can't stand losing. For some reason, you really, really want to win. Maybe you know why, maybe you don't. Whatever it is, you have to win.

This is where it begins. You start by being dissatisfied. You start by not being happy unless you win. The fire is lit. It only gets worse when you find out that not only are you not the best among your friends, but there are whole tournaments where everybody is much, much better than you.

There are two things you can do here. You can keep losing and stay salty or you can make the choice to improve. And if you choose the latter, you begin building 

Scholarship

You find communities and you start asking questions. Questions like "Why do I lose to Marth?" gradually refine themselves into questions like "What's the optimal punish when I grab a 0% Marth?" You start analyzing match after match. Why did that work? Why does he make this look easy? Maybe I could try… You save replays of your matches and religiously go over your mistakes. I always hesitate there. Maybe instead of panicking, I should… Maybe you even pay top players to analyze your matches. You memorize the frame data for your character. You learn minutiae that never come in handy until they do. (Sheik's needles do 16-18% damage depending on how far she is from an enemy.)

Discipline

At the same time, you’re becoming disciplined. You start practicing situation after situation. Frame-perfect ledge-dash after ledge-dash, losing a stock if you don’t have frame perfection. Waveshining Peach back and forth across FD. 

You start entering tournaments. At first you can’t beat anybody, week after week, until you take your first game. Then your first set. You watch as you slowly move out of the flotsam of bottom-seeds. You even start taking tournament days seriously. This used to just be for fun and now you’re making decisions like ‘get good sleep’ and ‘eat a healthy lunch’ to make sure you don’t crash mid-tournament. 

Stoicism

Oh shit, my girlfriend is watching and I’m getting wrecked, I have to turn this around.

Oh shit, I’m on stream, me getting bodied is going to be recorded forever, I wonder what the commentators are saying (Bonus: the commentators are feet away and you can actually hear them). 

Oh shit, he taunted, he thinks he’s better than me and I have to show him and right after I get a sick combo I’ll taunt him back. 

I’m a way better player, I should be winning, what the fuck is going on?

If I lose this game I’m out of the tournament and didn’t even make top 8.

If I win this game I know I’ll make my region’s power rankings. 

If I lose this game I’m out of the tournament and my sponsor might drop me. 

If I win this game I’ll make fifty bucks.

As your tech skill improves, the mental game becomes that much more important. As you improve at relinquishing intrusive thoughts like these, you gain the edge over everybody who has the skill but no mental self-control. Notice the commonalities between my examples. Many of them have to do with status: how your partner / community / opponent / self perceives you. Many of them involve entering a mode where you feel you must win.

Part of stoicism is making peace with what is beyond your control and focusing on the task at hand. You cannot control what is not on the screen in front of you, and to the extent you are thinking about those things, you are not thinking about what's inside that screen. Consider the expressions "he's in his own head" and "he's in his opponent's head." These highlight the ways in which your mind can (and will) be hijacked by thoughts and concerns beyond your control, ones that take away from your ability to focus on what is within your control.

Responsibility

Take responsibility for your losses, even to water bottles.

A fake conversation can illustrate this virtue better than an intensional definition. 

Q: Why did you lose?

A: He played so lame, he–

Q: No. Why did you lose?

A: It’s a bad matchup for my character actually–

Q: No. Why did you lose? 

A: Well, it was my first match of the day and my hands were cold from the harsh Canadian winter, so–

Q: No. Honestly, I don’t think you are hearing my question. 

A: I don't know what you're looking for! If any of those were different–if he didn't play like that, if it was a different matchup, if I had time to warm up–I would have won.

Q: A, do you think you could have won that match? That if things in the match had gone differently, you could have won? 

A: Well, yeah. There were some things. 

Q: Then I’ll rephrase: What did you do that made you lose? What could you have done differently, to make you win? 

A: Well, I guess I could have gotten there earlier to warm up. And if I had edgeguarded him better I would have actually finished off his stocks. And…

Play to win

These are the virtues of Melee: scholarship, discipline, stoicism, responsibility. But the most fundamental of these is responsibility, because it encompasses the others. What happens if you stop cultivating virtue? Then you stop improving. You don't stop winning, but you plateau.

To sum up, one of the reasons Melee is awesome is that it forces you to become a stronger person if you want to win. I don't play Melee competitively anymore, but I think it left me more disciplined, more ready to take responsibility, and more able to let go of what I can't control. That doesn't mean I recommend you go out and become a nerd, but if you aren't interested in physical sports, competitive gaming is one incredibly fun way to cultivate the above virtues. And if you ever want to play, I can be reached on Slippi at KAY#863.

Second-Order Existential Risk

[Epistemic status: Low confidence]

[I haven’t seen this discussed elsewhere, though there might be overlap with Bostrom’s “crunches” and “shrieks”]

How important is creating the conditions to fix existential risks versus actually fixing existential risks? 

We can somewhat disentangle these. Let's say there are two levels to "solving existential risk." The first level includes the elements deliberately 'aimed' at solving existential risk: researchers, their assistants, their funding. On the second level are the social factors that come together to produce humans and institutions with the knowledge and skills to be able to contribute to reducing existential risk at all. This second level includes things like "a society that encourages curiosity" or "continuity of knowledge" or "a shared philosophy that lends itself to thinking in terms of things like existential risk (humanism?)." All of these have numerous other benefits to society, and they could maybe be summarized as "create enough surplus to enable long-term thinking."

Another attribute of this second level is that it consists of conditions that allow us to tackle existential risk in the first place. Here are a few more of these conditions:

  • Humans continue to reproduce. 
  • Humans tend to see a stable career as their preferred lifepath. 
  • Research institutions exist.
  • Status is allocated to researchers and their institutions. 

If any of these were reversed, it seems conceivable that our capacity to deal with existential risk would be heavily impaired. Is there a non-negligible risk of these conditions reversing? If so, then perhaps research should be put into dealing with this "second-order" existential risk (population collapse, civilizational collapse) the same way it's put into dealing with "first-order" existential risk (nuclear war, unaligned AI).

Reasons why second-order x-risk might be a real concern:

  • The above conditions are not universal and thus can’t be taken for granted.
  • Some of these conditions are historical innovations and thus can’t be taken for granted.
  • The continued survival of our institutions could be based more on inertia than any real strength.  
  • Civilizations do collapse, and given our increased interconnectivity, a global collapse seems possible. 
  • A shift away from ‘American values’ over the coming decades could lead to greater conformity and less innovation. 
  • Technology could advance faster than humanity’s ability to adapt, significantly impacting our ability to reproduce ourselves. 

Reasons why second-order x-risk might not be a real concern:

  • Civilization keeps on trucking through all the disruptions of modernity. Whether or not the kids are alright, they grow up to complain about the next kids. 
  • Whatever poor adaptations people have to new technology, they’ll be selected against. Future humans might develop good attention habits and self-control. 
  • The bottleneck could really only be in funding. You don't need that many talented people to pluck all the significant x-risk fruit. They're out there and they'll be out there for years to come; they just need funding once found. 

When considering whether or not second-order x-risk is worth researching, it’s also worth looking at where second-order existential risk falls in terms of effective altruist criteria: 

  • Scale: Impaired ability to deal with existential risk would, by definition, affect everybody. 
  • Neglectedness: Many people are already working on their own version of preserving civilization. 
  • Tractability: It is unclear what the impact of additional resources would be. 

My suspicion is that second-order x-risk is not as important as first-order x-risk. It might not even be a thing! However, I think the tractability is still worth exploring. Perhaps there are cheap, high-impact measures that maximize our future ability to deal with existential risk. It's possible that these measures could also align with other EA values. Even decreasing disease burden in developing countries slightly increases the chances of a future innovator not dying of a preventable disease.

I am also personally interested in the exploration of second-order x-risk because there is a lot of overlap with conservative concerns about social and moral collapse. I think those fears are overblown, but they are shared by a huge chunk of the population (and are probably the norm outside of WEIRD countries). I'm curious to see robust analyses of how much we realistically should worry about institutional decay, population collapse, and technological upheaval. It's a 'big question' the same way religion is: if its claims are true, it would be a big deal, and enough people consider it a big deal that it's worth checking. However, if it turns out to be rational not to worry about such things, then we could convince at least a few people with those concerns to worry about our long-term prospects instead.