Is This the End?

As the clock ticks toward the end times, it’s easy to lose faith in humanity’s ability to endure. But we can’t give up.

Asteroids, supervolcanoes, nuclear war, climate change, engineered viruses, artificial intelligence, and even aliens — the end may be closer than you think. This is the last in a series of essays drawn from editor Bryan Walsh’s new book End Times: A Brief Guide to the End of the World, which hits shelves on August 27 and is available for order now. We’re not helpless. It’s up to us to postpone the apocalypse.

On January 24, 2019, the Bulletin of the Atomic Scientists — a journal that has tracked the threat of human-made apocalypse since the dawn of the nuclear age — announced the new setting of its Doomsday Clock. It stood at two minutes to midnight, the same as the year before and tied for the closest the Clock has ever come to catastrophe. In keeping the Clock unchanged, the Bulletin made the only decision it could. The world hadn’t gotten perceptibly worse over the course of 2018, and there were even a few improvements, but on balance it hadn’t gotten any better, either.

The Bulletin had an apt term for the state we now find ourselves in, circling the drain of Armageddon: a “new abnormal,” as the physicist Robert Rosner put it on the day of the Clock’s unveiling, “a disturbing reality in which things are not getting better and are not being effectively dealt with.” It’s not that we find ourselves in a new state of existential fear — we’ve had reason for such fear since the Trinity test on July 16, 1945. But that fear has largely bred not passion but paralysis. Though we can imagine the end of the world all too easily, we can’t imagine coming together to save it. And that creeping futility is what we must overcome.

The Doomsday Clock is a brilliant symbol, but a symbol is all it is. There is no countdown to the end — at least not one we can hear. But if our current existential risks worsen with each passing year, and if we continue to add new ones, our chances of long-term survival will keep shrinking.

The hopeful view is that what appears to be ever-increasing existential risk is actually a temporary bottleneck created by new technologies we can’t yet control and by environmental challenges that are a function of our accelerating growth, like climate change. If we can make it through that bottleneck, we’ll find safety on the other side. We just need a breakthrough. Maybe it will be artificial intelligence, ethical and controlled. Maybe it will be some mix of clean energy and carbon engineering that defuses climate change and gives us centuries more to grow on this planet. Maybe Elon Musk and Jeff Bezos are right, and the move to space will keep us safe. In this vision we survive — and thrive — not by slowing down, but by speeding up.

The age of existential risk has sharpened the stakes, but this has been the central human challenge for as long as there have been human beings. Faced with limits, we invent new technologies and new practices that allow us to grow, which then use up more resources and create new risks, forcing us to innovate again to keep one step ahead of our growing capacity for both success and destruction. This is how we put 7.5 billion people on this planet. This is how we reached a point where the life of the average human being is longer and healthier and richer and just plain better than it has ever been before, no matter how fed up and pessimistic we may feel on a day-to-day basis. But with each passing year the race becomes faster and harder to run. Sooner or later we may stumble, and be overtaken.

The alternative is to slow down: to rein in our growth and the technologies that put us at risk in the first place. It might work. But we would be surrendering much as well. In seeking to avoid existential risk, we would be giving up existential hope, a lottery ticket to a technological heaven where limits no longer exist. And we would be asking ourselves to alter what seems to be a basic drive of humanity: growth. Great religions claiming billions of adherents counsel humility and abstention, yet look around. The drive to grow and compete is so hardwired into human beings that it can seem as if the only way to change it would be to change our very DNA. Not by political activism or moral suasion, but by rewriting our own source code.

The most startling conversation I had while researching this book wasn’t with a nuclear warrior like William Perry or a gene-editing scientist like George Church. It was with a philosopher of ethics at the University of Oxford named Julian Savulescu. Savulescu believes that the new technologies I’ve highlighted and the thirst for growth have put humanity at risk of what he calls “Ultimate Harm” — meaning the end of the world. It might be the global threat of climate change, the nation-state threat of nuclear war, or the individual threat of bioengineered viruses.

Savulescu’s point is that as the power to inflict Ultimate Harm spreads, human ethics become the only emergency brake. If the world will blow up if just one of us pushes the self-destruct button, then we will survive only if human beings — all human beings — can be trusted not to push that button. The problem, Savulescu told me, is that we don’t have the ethics to handle the dangerous world that we ourselves have created. “There’s this growing mismatch between our cognitive and technological powers and our moral capacity to use them,” he said.

Savulescu has a radical solution. He suggests that the same cutting-edge biotechnology that now poses an existential risk itself — the greatest looming existential risk, in my view — could one day be used to engineer more ethical and more moral human beings. As we learn to identify the genes associated with altruism and a sense of justice, we could upregulate them in the next generation, creating children who would innately possess the wisdom not to use that terrifying bioweapon, who would see the prudence in curbing their present-day consumption to ensure that future generations have a future at all. The options for self-destruction would still exist, but our morally bioenhanced offspring would be too good to choose them.

If that sounds like a desperate measure, well, so are the times. “I think we’re at this point where we need to look at every avenue,” Savulescu said. “And one of those avenues is not just looking to political reform — which we should be doing — but also to be looking at ourselves. We’re the ones who cause the problems. We’re the ones who make the choices. We’re the ones who create these political systems. No one wants to acknowledge the elephant in the room, and that is that human beings may be the problem, not the political system.” It’s not just ethical A.I. that we would need to create. It’s ethical human beings.

This is a moral philosopher’s thought experiment, not a concrete plan to begin editing genes for altruism into our babies — which, it should be noted, we’re not close to knowing how to do, even if we wanted to. But as we spoke — Savulescu in Oxford, me in Brooklyn — I realized he was trying to answer a question that had nagged me since I began reporting on environmental issues, and one that followed me throughout the time I spent on this book: Is it easier to change people or technology? If you hold out hope for people, then you believe that we can be persuaded to behave in a way that is more sustainable, even if it demands sacrifice. If you believe technology is more responsive, then you’re in favor of running the race against risk faster, putting faith in our ability to innovate ahead of doom.

From what I have observed, most of us speak as if we believe it is people who can be changed, but behave as if technology will keep us ahead. We embrace a rhetoric of political change and personal responsibility, but the lives we actually live depend on technological and economic growth, whatever the consequences. Savulescu was trying to split the difference: use technology to change people. That should tell you just how difficult it is to fundamentally change ourselves.

People have changed, of course. We’ve largely abandoned hideous practices like slavery, expanded the circle of human rights, and fought for the power to rule ourselves. But those changes largely fed the engine of growth, and put more power in the hands of individuals, to be used for good or ill. Short of a fundamental political or even spiritual revolution, what I can’t see changing is that primal human drive to expand.

Perhaps I’m suffering from a failure of imagination. The Marxist political theorist and literary critic Fredric Jameson, after all, once wrote that it is “easier to imagine the end of the world than to imagine the end of capitalism.” But everywhere I’ve traveled on this planet, I’ve seen people who want more. More for themselves, and more for their children. Who will tell them they can’t have it, even if it may cost the world?

So we must run faster, as if we’re running for our lives.

All Rights Reserved for Bryan Walsh
