Are we in the middle of an extinction panic?


A Canada goose stands on the snow-covered figure of a Tyrannosaurus Rex at Cologne Zoo in Cologne, Germany in January 2024. | Federico Gambarini/Picture Alliance via Getty Images

Inside the wild world of AI doomsayers.

If you’ve followed the news in the last year or two, you’ve no doubt heard a ton about artificial intelligence. And depending on the source, it usually goes one of two ways: AI is either the beginning of the end of human civilization, or a shortcut to utopia.

Who knows which of those scenarios is nearer the truth. But the polarized nature of the AI discourse is itself interesting. We’re in a period of rapid technological growth and political disruption, and there are many reasons to worry about the course we’re on; that much almost everyone can agree on.

But how much worry is warranted? And at what point should worry deepen into panic?

To get some answers, I invited Tyler Austin Harper onto The Gray Area. Harper is a professor of environmental studies at Bates College and the author of a fascinating recent essay in the New York Times. The piece draws some helpful parallels between today’s existential anxieties and those of the past, most notably the 1920s and ’30s, when people were (rightly) terrified of machine technology and of the emerging research that would eventually lead to nuclear weapons.

Below is an excerpt of our conversation, edited for length and clarity. As always, there’s much more in the full podcast, so listen to and follow The Gray Area on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you find podcasts. New episodes drop every Monday.


Sean Illing

When you track the current discourse around AI and existential risk, what jumps out to you?

Tyler Austin Harper

Silicon Valley is really in the grip of a kind of science fiction ideology. That’s not to say I think there are no real risks from AI, but a lot of the ways Silicon Valley tends to think about those risks come through science fiction: through stuff like The Matrix and the fear of the rise of a totalitarian AI system, or even the idea that we’re potentially already living in a simulation.

I think something else that’s really important to understand is what an existential risk actually means according to scholars and experts. An existential risk doesn’t only mean something that could cause human extinction. They define existential risk as something that could cause human extinction or that could prevent our species from achieving its fullest potential.

So something, for example, that would prevent us from colonizing outer space or creating digital minds, or expanding to a cosmic civilization — that’s an existential risk from the point of view of people who study this and also from the point of view of a lot of people in Silicon Valley.

So it’s important to be careful: when you hear people in Silicon Valley say AI is an existential risk, that doesn’t necessarily mean they think it could cause human extinction. Sometimes it does, but it can also mean they worry about our human potential being curtailed in some way, and that gets into wacky territory really quickly.

Sean Illing

One of the interesting things about the AI discourse is its all-or-nothing quality. AI will either destroy humanity or spawn utopia. There doesn’t seem to be much space for anything in between. Does that kind of polarization surprise you at all, or is that sort of par for the course with these kinds of things?

Tyler Austin Harper

I think it’s par for the course. There are people in Silicon Valley who don’t have 401(k)s because they believe that either we’re going to have a digital paradise, a universal basic income in which capitalism will dissolve into some kind of luxury communism, or we’ll all be dead in four years, so why save for the future?

I mean, you see this in the climate discourse, too, where it’s either total denialism, the insistence that everything is going to be fine, or the conviction that we’re going to be living in a future hellscape of an uninhabitable earth. And neither of those extremes is necessarily the most likely outcome.

What’s most likely is some kind of middle ground, where we have life like we have it now except worse in every way, but something short of full-scale apocalypse. And I think the AI discourse is similar, in that it’s an all-or-nothing proposition: either we’ll have a paradise of techno-utopia and digital hedonism, or we’ll live as slaves under our robot overlords.

Sean Illing

What makes an extinction panic a panic?

Tyler Austin Harper

Extinction panics usually come in response to new scientific developments that seem to arrive suddenly, rapid changes in technology, or geopolitical crises, moments when it feels like everything is happening too fast all at once. Then you have this collective, cultural sense of vertigo: we don’t know where things go from here, everything seems in flux and dangerous, and the risks keep stacking up.

I compare extinction panics to moral panics, and for sociologists, one of the defining features of a moral panic is that it’s not necessarily based on nothing. A moral panic can have some basis in reality; what defines it is blowing up a kernel of reasonableness into a five-alarm fire. And that’s how I view our present moment.

I’m very concerned about climate change. I’m concerned about AI a little differently than the Silicon Valley folks, but I’m concerned about it. But it does seem that we are blowing up super reasonable concerns into a panic that doesn’t really help us solve them, and that doesn’t really give us much purchase on what the future’s going to be like.

Sean Illing

For something to qualify as an extinction panic, does it have to be animated by a kind of fatalism?

Tyler Austin Harper

There’s a kind of tragic fatalism or pessimism that defines an extinction panic where there is a sense that there’s nothing we can do, this is already baked in, it’s already foretold. And you see this a lot in AI discourse, where many people believe that the train is already too far down the tracks, there’s nothing we can do. So yeah, there is a fatalism to it for sure.

Sean Illing

We had a major extinction panic roughly 100 years ago, and there are a lot of similarities with the present moment, with plenty of new and repurposed fears. Tell me about that.

Tyler Austin Harper

Right after the end of World War I, we entered another period of similar panic. We tend to think of the end of World War II, with the dropping of two atomic bombs and the ushering in of the nuclear age, as the moment when humanity first became worried that it could cause its own destruction. But those fears arose much earlier; they were already percolating in the 1920s.

Winston Churchill wrote a little essay called “Shall We All Commit Suicide?” that predicted bombs the size of an orange capable of laying waste to entire cities. And these weren’t fringe views. The president of Harvard at the time blurbed Churchill’s essay and called it something all Americans needed to read.

There was a pervasive sense, particularly among elites, that a second world war might be the last war humanity would ever fight. Even concerns about a machine age, the replacement of human beings by machines and the automation of labor, appeared in the ’20s, too.

Sean Illing

In their defense, the people panicking in the ’20s don’t look that crazy in retrospect, given what happened in the following two decades.

Tyler Austin Harper

Absolutely. That’s one of the important points I’m trying to get at: panics are never helpful, but that doesn’t mean the fears aren’t grounded in real risks or real potential developments that could be disastrous.

Obviously, a lot of the predictions made in the 1920s were right, but a lot were wrong, too. H.G. Wells, the great science fiction novelist, who in his own day was actually more famous as a political writer, famously said, “On my tombstone, you should put, ‘I told you so, you damned fools.’” He thought that as soon as we had nuclear weapons, we’d be extinct within a few years. And yet we’ve survived eight decades with nuclear weapons; we haven’t used them since 1945.

That’s a remarkable accomplishment, and it’s one of the reasons I’m really resistant to this notion that we have an accurate sense of what’s coming down the pipeline, or of what humanity is capable of. Because I don’t think many people would’ve predicted that we could hold nuclear weapons semi-responsibly, without ever using them in war again.

Sean Illing

You wrote that something we’re seeing now, and have seen before, is the belief that the real threat posed by human extinction is nihilism: the idea that to go extinct is to have meant nothing cosmically. What does that mean, exactly?

Tyler Austin Harper

That’s at the core of longtermism, right? This sense that it’s the universe or nothingness, that humanity’s meaning depends on our immortality. And so they start from this almost Nietzschean view that the universe has no meaning, that life means nothing. But their twist is to say, “We can install meaning in the universe if we make ourselves permanent.”

So if we achieve digital immortality, if we colonize the cosmos, we can put meaning into what was previously a godless vacuum, and we can even become kinds of gods ourselves. The question of nihilism, and of overcoming nihilism through technology and digital immortality, runs all through contemporary extinction discourse.

Sean Illing

There does seem to be something deeply religious about this. I mean, religious people have always been obsessed with the end of the world and our place in the cosmos, and this strikes me as a secular analog to that.

Tyler Austin Harper

You know, people have been telling tales about the end of the world for as long as there have been human beings. But you do see a shift in the late 18th and early 19th century to the first naturalistic, non-religious imaginings of human extinction. By naturalistic I mean human extinction not from a divine cause, but from a natural event or from technology. And yet even as that conversation becomes secular, there are all sorts of religious holdovers suffused throughout the discourse.

I do think there’s a way in which longtermism has become a kind of secular religion. I mean, the stakes in their telling are as large as the stakes in something like the Bible. Both dream of a cosmic afterlife, of immortality and paradise and great things. There’s this sense of regaining the garden and creating a paradise that I think is deeply embedded in Silicon Valley, along with its own versions of damnation in hell: extinction, or slavery under AI overlords. So there are a lot of religious resonances, for sure.

Sean Illing

Another point you make is that extinction panics are almost always elite panics. Why is that the case?

Tyler Austin Harper

Yeah, I think they tend to reflect the social anxieties of elites who are worried about their changing position in society, and that the future might not be one catered to them. Take something like climate change, which, again, I can’t emphasize enough that I take really seriously. It’s hard to avoid noticing that for a certain kind of person, the panic over climate change is: I’m not going to be able to live in my nice suburban home with my two cars and my vacations.

And so it’s often a middle- and upper-class anxiety about changing fortunes, the worry that they’re not going to have the luxurious lifestyle they’ve enjoyed thus far, even as the global poor are the primary victims of climate change.

And there’s something similar in the AI discourse, where these elite tech bros are panicking, forgoing their 401(k)s, convinced we’re going to go the way of the dodo. Meanwhile, the people most affected by AI are going to be the poor, put out of work when their jobs are automated.

Yeah, it is elites who tend to shape the discourse, and “shaping” is the language I would use. Because it’s not that there’s no basis in reality to these concerns, but the narrative that forms around them tends to be one shaped by elites.

Sean Illing

It seems like your basic advice is to worry, but not panic. How would you distinguish one from the other?

Tyler Austin Harper

Yeah, it’s a great question. I would define panicking as catastrophizing and adopting a fatalistic attitude. Panic is predicated on certainty, the sense that I know what’s going to happen, when the history of science and technology tells us there’s a lot of uncertainty. In 1945, so many people were certain that the world was going to end in thermonuclear fire, and it didn’t.

Worry, by contrast, is having a realistic sense that there are real challenges for our species and our civilization, while still thinking: maybe I should invest in a 401(k), and maybe, if I want children, I should think about having them. It means not making sweeping life decisions at the individual level predicated on a certainty that the future is going to look one way or the other.

To hear the rest of the conversation, click here, and be sure to follow The Gray Area on Apple Podcasts, Google Podcasts, Spotify, Pandora, or wherever you listen to podcasts.

----------------------------------------

By: Sean Illing
Title: Are we in the middle of an extinction panic?
Sourced From: www.vox.com/the-gray-area/2024/3/3/24083523/artificial-intelligence-ai-doomsday-panic-extinction-climate-change-utopia-tech-bros-silicon-valley
Published Date: Sun, 03 Mar 2024 12:00:00 +0000
