A.I. Is the Cause Of — And Solution To — the End of the World

The development of artificial general intelligence offers tremendous benefits and terrible risks

Asteroids, supervolcanoes, nuclear war, climate change, engineered viruses, artificial intelligence, and even aliens — the end may be closer than you think. For the next two weeks, OneZero will be featuring essays drawn from editor Bryan Walsh’s forthcoming book End Times: A Brief Guide to the End of the World, which hits shelves on August 27 and is available for pre-order now, as well as pieces by other experts in the burgeoning field of existential risk. But we’re not helpless. It’s up to us to postpone the apocalypse.

There is no easy definition for artificial intelligence, or A.I. Scientists can’t agree on what constitutes “true A.I.” versus what might simply be a very effective and fast computer program. But here’s a shot: intelligence is the ability to perceive one’s environment accurately and take actions that maximize the probability of achieving given objectives. It doesn’t mean being smart in the sense of having a great store of knowledge, or being able to do complex mathematics.

My toddler son doesn’t know that one plus one equals two, and when I was writing my book End Times his vocabulary was largely limited to excitedly shouting “Dog!” every time he saw anything that was vaguely furry and walked on four legs. (I would not put money on him in the annual ImageNet Large Scale Visual Recognition Challenge, the World Cup of computer vision.) But when he toddles into our kitchen and figures out how to reach up to the counter and pull down a cookie, he’s perceiving and manipulating his environment to achieve his goal — even if his goal, in this case, boils down to sugar. That’s the spark of intelligence, a quality that only organic life — and humans most of all — has so far demonstrated.

Computers can already process information far faster than we can. They can remember much more, and they can remember it without decay or delay, without fatigue, without errors. That’s not new. But in recent years the computing revolution has become a revolution in artificial intelligence. A.I.s can trounce us in games like chess and Go that were long considered reliable markers of intelligence. They can instantly recognize images, with few errors. They can play the stock market better than your broker. They can carry on conversations via text almost as well as a person can. They can do much of what you can do — and they can do it better.

Most of all, A.I. is learning to learn. This is a human trait, but because an A.I. can draw on far more data than a human brain could ever hold, and process that data far faster than a human brain could ever think, it has the potential to learn more quickly and more thoroughly than humans ever could. Right now that learning is largely limited to narrow subjects, but if that ability broadens, artificial intelligence may become worthy of the name. If A.I. can do that, it will cease to merely be a tool of the bipedal primates who currently rule this planet. It will become our equal, ever so briefly. And then quickly — because an A.I. is nothing if not quick — it will become our superior. We’re intelligent — Homo sapiens, after all, means “wise man.” But an A.I. could become superintelligent.

We did not rise to the top of the food chain because we’re stronger or faster than other animals. We made it there because we are smarter. Take that primacy away and we may find ourselves at the mercy of a superintelligent A.I., in the same sense that endangered gorillas are at the mercy of us. And just as we’ve driven countless species to extinction not out of enmity or even intention, but because we decided we needed the space and the resources they were taking up, so a superintelligent A.I. might nudge us out of existence simply because our very presence gets in the way of the A.I. achieving its goals. We would be no more able to resist it than endangered animals have been able to resist us.

You have probably heard the warnings. Tesla and SpaceX founder Elon Musk has cited A.I. as “the biggest risk we face as a civilization,” and calls developing general A.I. “summoning the demon.” The late Stephen Hawking said that the “development of full artificial intelligence could spell the end of the human race.” Well before authentic A.I. was even a possibility, we entertained ourselves with scare stories about intelligent machines rising up and overthrowing their human creators: The Terminator, The Matrix, Battlestar Galactica, Westworld.

Existential risk exists as an academic subject largely because of worries about artificial intelligence. All of the major centers on existential risk — the Future of Humanity Institute (FHI), the Future of Life Institute (FLI), and the Centre for the Study of Existential Risk (CSER) — put A.I. at the center of their work. CSER, for example, was born during a shared cab ride when Skype co-creator Jaan Tallinn told the Cambridge philosopher Huw Price that he thought his chance of dying in an A.I.-related accident was as great as his chance of dying from heart disease or cancer. Tallinn is far from the only one who thinks so.

A.I. is the ultimate existential risk, because our destruction would come at the hands of a creation that would represent the summation of human intelligence. But A.I. is also the ultimate source of what some call “existential hope,” the flip side of existential risk.

Our vulnerability to existential threats, natural or man-made, largely comes down to a matter of intelligence. We may not be smart enough to figure out how to deflect a massive asteroid, and we don’t yet know how to prevent a supereruption. We know how to prevent nuclear war, but we aren’t wise enough to ensure that those missiles will never be fired. We aren’t intelligent enough yet to develop clean and ultra-cheap sources of energy that could eliminate the threat of climate change while guaranteeing that every person on this planet could enjoy the life that they deserve. We’re not smart enough to eradicate the threat of infectious disease, or to design biological defenses that could neutralize any engineered pathogens. We’re not smart enough to outsmart death — of ourselves, or of our species.

But if A.I. becomes what its most fervent evangelists believe it could be — not merely artificial intelligence, but superintelligence — then nothing will be impossible. We could colonize the stars, live forever by uploading our consciousness into a virtual heaven, eliminate all the pain and ills that are part of being human. Instead of an existential catastrophe, we could create what is called existential “eucatastrophe” — a sudden explosion of value. The only obstacle is intelligence — an obstacle put in place by our own biology and evolution. But our silicon creations, which have no such limits, just might pull it off — and they could bring us along.

No wonder that a Silicon Valley luminary as bright as Google CEO Sundar Pichai has said that A.I. will be more important than “electricity or fire.” A.I. experts are so in demand that they can earn salaries as high as $500,000 right out of school. Militaries — led by the United States and China — are spending billions on A.I.-driven autonomous weapons that could change the nature of warfare as fundamentally as nuclear bombs once did. Every tech company now thinks of itself as an A.I. company — Facebook and Uber have scooped up some of the best A.I. talent from universities, and in 2018 Google rebranded its entire research division as simply Google A.I. Whether you’re building a social network or creating drugs or designing an autonomous car, research in tech increasingly is research in A.I. — and everything else is mere engineering.

These companies know that the rewards of winning the race to true A.I. may well be infinite. And make no mistake — it is a race. The corporations or countries that develop the best A.I. will be in a position to dominate the rest of the world, which is why until recently little thought was given to research that could ensure that A.I. is developed safely, to minimize existential risk and maximize existential hope. It’s as if we find ourselves in the early 1940s, racing toward a nuclear bomb. And like the scientists who gathered in the New Mexico desert in the predawn hours of July 16, 1945, we don’t know for sure what our invention might unleash, up to and including the end of the world.

The physicists of the Manhattan Project could only wait to see what the Trinity test would bring. But we can try to actively shape how A.I. develops. This is why existential risk experts are so obsessed with A.I. — more than any other threat the human race faces, this is where we can make a difference. We can hope to turn catastrophe into eucatastrophe. This is a race as well, a race to develop the tools to control A.I. before A.I. spins out of control. It could be the difference between the life and the death of the future.

This is A.I. — the cause of and solution to all existential risk. As Hawking wrote in his final book, published after his death: “The advent of super-intelligent A.I. would be either the best or the worst thing ever to happen to humanity.”
