2024-05-09 - S-Risk - What's the Risk?

Hey there everyone. I’ve been pretty busy the last few months, so sorry for not uploading. But recently I was recommended a weird video and I kinda wanted to talk about it. This video, titled “S-Risks: Fates Worse Than Extinction?”, is largely about the ideas of Nick Bostrom and a philosophy called “Longtermism”, often associated with the Effective Altruism movement.

Wait, what are those things?

Well, to be honest, I just started looking into them, and I don’t think things are looking too good. I was familiar with Peter Singer’s utilitarianism and how he influenced people like Richard Dawkins, which had a profound impact on how people understand, aestheticize and moralize science. But it seems like I have come across a new branch in the story.

From what I could gather, Effective Altruism is basically a philosophy that tries to do the most good in the world. Sounds nice on the surface. But it’s also the ethical movement that is right now backing the actions of snobby AI tech CEOs who believe they can funnel enough money into problems to fix everything. Elon Musk, for example, is notorious for funding various Effective Altruism organizations. The problem, of course, is that Elon Musk is kind of known for being a psycho asshole. And if Elon Musk isn’t enough of a demonstration of why this is a bad idea, maybe Sam Bankman-Fried is. He was so altruistic that he funneled billions from FTX to fund his and his friends’ lavish lifestyles.

Longtermism, in comparison, appears to be a form of utilitarianism applied to future risks. It plugs into Effective Altruism as one specific cause area the movement can place its capital into. Nick Bostrom in particular is interested in this subject, writes about it quite extensively, and founded the Future of Humanity Institute. Upon further research, it seems like Oxford University isn’t very impressed - they decided not to renew funding for the Future of Humanity Institute this year. This is likely related to a resurfaced email from the 1990s in which he said extremely racist things - things he complains will be “maliciously framed” against his public image, and which I’m not going to say out loud because I’m not going to do that to myself.

So, to be honest, as far as a philosophy of ethics goes, not off to a good start. It doesn’t help that the video is literally jam packed with Shiba Inus and other cryptobro references everywhere. But what is the video actually arguing?

The video lays out a model for assessing risks to humanity’s future through different kinds of factors. For example, an X-risk is a risk that leads to extinction.

However, this video introduces something called an S-risk. What is an S-risk? It’s basically a risk of suffering on a vast scale - suffering spread far into the future and across an enormous population of descendants. To utilitarians, because the collective suffering of a nearly uncountable number of subjects is worse than those subjects not existing at all, an S-risk is something even more serious than extinction itself.

Yes, I completely understand if you’re confused. It is confusing, because its conclusions really don’t seem correct at all. Now keep in mind, I’ve only just started to dig into this rabbit hole, but this seems to be a consequence of the utilitarian framework that Bostrom depends on.

Utilitarianism, as a general overview, is concerned with measuring “utility”, sometimes represented as pleasure or suffering. In many models, it tries to represent this as a value that is transformed through various complex equations. But representing “suffering” or “pleasure” as a number or mathematical function is very limiting. Think about the times you suffered or had pleasure - do you think you could realistically rate them on a scale from 1 to 10 without cutting out some of what really made them suffering or pleasure in the first place? And if you can’t, then what is that number really measuring?

Additionally, in many cases, utilitarian models lead to conclusions that certain people would be better off not existing at all. In this video, there is an extreme emphasis on the scale of potential suffering that could exist, in comparison to how extinction events would eliminate all of mankind. It really sounds like the video is claiming that a future full of nearly countless suffering individuals is actually worse than everyone ceasing to exist altogether. Think about how many people are suffering right now, such as those with a disability. Do you really think any self-respecting disabled person believes it is in their best interest to simply not exist? Utilitarianism has been heavily derided by disabled people for exactly this reason, and modern utilitarians’ refusal to engage seriously with these critiques highlights a serious oversight in their ethical models. Additionally, a people who continue to exist are a people who have the ability to fight for a better world - a people who cease to exist can fight for nothing. Of course, this kind of reasoning has a massive impact on how we view endangered species, languages, cultures and societies.

So yeah, you heard that right - this video is trying to tell you that potential suffering billions of years in the future is actually worse than the suffering going on on the planet right now, because there will be more people suffering in more complex, larger social machines.

Now, while I personally think we should try to create the best future that we can, I don’t think that focusing our efforts on something so distant, alienated and disconnected from our lives is actually worthwhile. Considering the volatility of the world in the last 50 years, do you really believe that we can predict what things we should invest in to protect our future from S-risk even 50 years from now? Another problem I have with the concept of S-risk is how it applies backwards in time. If S-Risk represents a risk in the far future, does this mean that early modern humans 50,000 years ago were responsible for all the suffering that’s going on today? Should we hold them accountable for our misery? Or does it make more sense for ourselves to fight for a future that is less miserable right now?

It seems that if we take this idea to its logical conclusion, we cannot really justify the existence of life at all, because it will inevitably produce more and more kinds of suffering. Considering that we are living in a period of incredible biodiversity, does this imply we are also living in a period of incredibly diverse suffering, and therefore living in the worst of times according to utilitarians? How is this argument not just an extension of anti-natalism?

While I am obviously not averse to mitigating future risks of suffering, I think we should be focused on issues that exist in our more local vicinity. After all, we are building the future right now, through our actions right now. What we do with our suffering now determines how we suffer in the future. We can observe how things are changing in real time. Worrying about issues millions of years in the future serves to keep us from acting in a way that deals with the real world at all now.

It almost sounds like… a sales pitch from a bunch of tech CEOs huffing their own fumes. Not surprising for a bunch of people who think they can lecture the world on suffering while refusing to talk to anyone below their tax bracket. It is very telling what these people who supposedly care about suffering have to say about the people who are suffering the most in the world right now.

Anyways, I’m very interested in continuing research into whatever weird rabbit hole I seem to have drawn myself into.

Oxford Closes Future of Humanity Institute: https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

Future of Humanity Institute Website: https://www.futureofhumanityinstitute.org/

Email: https://nickbostrom.com/oldemail.pdf

Bostrom, Existential Risk Prevention: https://existential-risk.com/concept.pdf

Disabled Critique of Peter Singer: https://direct.mit.edu/ajle/article/doi/10.1162/ajle_a_00014/107218/THE-DISCORDANT-SINGER-How-Peter-Singer-s-Treatment

Utilitarian Responses to Disabled Critiques: https://askell.io/posts/2017/03/utilitarianism-and-disability-activists
