Book Review: Life 3.0
On August 3, 2014, Elon Musk (@elonmusk) tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes.” Yet, in the cultural ambience of Hollywood blockbusters, can the public separate Musk’s words from the fantastical, almost cartoonish imagery of films such as Terminator and The Matrix? Some have even taken a Freudian approach to interpreting Musk’s “true” motivations: his fear of runaway artificial intelligence, the argument goes, is really a suppressed fear of runaway capitalism.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
But enough psychoanalyzing. This conversation is hardly limited to Musk and his armchair psychologists.
Enter Max Tegmark, a Swedish-born theoretical physicist at MIT. His book Life 3.0: Being Human in the Age of Artificial Intelligence is a clear articulation of the very real dangers of artificial intelligence, or AI. 2017 was a huge year for AI. And if you’re not concerned about AI yet, it’s time to join what Tegmark in chapter 1 calls “the Most Important Conversation of Our Time.”
What is AI?
Before we get into Tegmark’s book, what is AI? AI is intelligence demonstrated by machines: the ability to learn, perceive, solve problems, and act on the environment. Recently, huge strides in a technique called deep learning have allowed machines to learn in ways that were once science fiction.
“AI’s gentle beginnings may quickly seed an intelligence explosion.”
Like the human brain, deep learning algorithms learn by adjusting connections (or weights) between neurons. Unlike the human brain, of course, these are not wet neurons but simulated, artificial ones. The “deep” in deep learning comes from “hidden” layers of neurons nested between an input layer (which receives information) and an output layer (which produces “behaviors” or actions).
Such hidden layers hold increasingly abstract representations. Rather than “seeing” a jumble of independent pixels through its camera eye, such a deep learning network can be trained to perceive abstract concepts such as faces, cars, and pedestrians.
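To make this concrete, here is a minimal sketch of such a network in Python with NumPy. This is my illustration, not code from the book: one hidden layer of simulated neurons learns the XOR function purely by repeatedly nudging the weights (and biases) between layers.

```python
# A minimal deep-learning sketch (illustrative, not from the book): a tiny
# network with one hidden layer learns XOR by adjusting its weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for input -> hidden and hidden -> output layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network's predictions

    # Backward pass: nudge every weight to shrink the error slightly.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

The single hidden layer here is what earns the name “deep”; real networks stack many such layers, which is what lets them build the abstract representations described above.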
Of course, machines have long been able to beat humans in domains like arithmetic, but they still lag behind in language and abstract thinking. Clearly, there are many different dimensions of intelligence. So when I refer to AI, I’m really referring to what some call AGI: artificial general intelligence, or AI that encompasses all tasks humans are good at.
Superintelligence, or intelligence far beyond that of humans, might arise from a humbler AI. Imagine an AI that is just slightly better at creating AI than we are. The AI it creates might beget a still more powerful AI, which would in turn beget a yet more powerful one. In this manner, some philosophers have argued, AI’s gentle beginnings may quickly seed an intelligence explosion.
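The arithmetic behind that argument is easy to sketch. The numbers below are pure invention (a 10% design improvement per generation is an assumption for illustration, not a figure from the book), but they show why compounding self-improvement behaves so differently from steady, externally driven progress:

```python
# A deliberately cartoonish model of the intelligence-explosion argument:
# an AI that designs a successor 10% better than itself compounds, while a
# fixed human design team improves its product by a constant increment.
recursive = 1.0   # capability of the self-improving line (arbitrary units)
steady = 1.0      # capability of the human-designed line

for generation in range(1, 51):
    recursive *= 1.10   # each AI builds a slightly better successor
    steady += 0.10      # human designers add a fixed improvement
    if generation % 10 == 0:
        print(f"gen {generation:2d}: recursive {recursive:8.1f}  steady {steady:.1f}")

# By generation 50, the recursive line sits near 117 while the steady line
# reaches only 6: compounding, not any single leap, drives the "explosion."
```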
Why worry about AI?
What’s the threat of AI? Tegmark’s book clearly articulates genuine concerns many reasonable people have about AI. In the process, he takes great pains to distinguish these genuine concerns from myths and sci-fi tropes.
So, what is AI angst all about? Killer robots taking to the streets? Powerful computers mysteriously becoming conscious … and, even more mysteriously, turning murderously evil?
No. As Tegmark articulates from the outset, robots, consciousness, and malevolence are not necessary conditions for concern. An AI connected to the Internet would hardly need a robotic body to do real damage. Nor would such an AI necessarily need to have consciousness, or subjective experience. (Of course, consciousness may come along for the ride anyway, a topic tackled in the book’s last chapter).
And, running against the cultural fiber of The Matrix and its villains, malevolence need not be part of the equation.
Reasonable concern over AI instead centers on the so-called alignment problem. If a superintelligent AI has different goals or values than we do, our destruction may be sown not by malevolence, but by indifference.
Consider the many animals driven to extinction or endangerment by humans. The deforestation behind such events is not motivated by malice or contempt for birds or trees. Rather, we have different goals that are more important to us than forests: namely, building homes, shopping malls, and roads.
“An AI with the goal of eliminating cancer might find a naive solution: killing anyone prone to cancer.”
After humans give an AI its initial goals, it may naturally acquire new goals. Consider our own species. The evolutionary goal of humanity is to reproduce and spread our genes. But as Tegmark points out, humans often use birth control to thwart the goals given to us by biological evolution. Similarly, superintelligence may acquire new goals that trump those given to it by humans.
Furthermore, a superintelligent AI may redesign itself, soon assuming goals unthinkable to us. And if humans stand in the way of such goals, we may not be around much longer.
Such self-modification is the meaning of ‘Life 3.0.’ And the previous versions? Evolution alone shapes the bodies and minds of Life 1.0, whereas Life 2.0 (smart animals like ourselves) enjoys additional shaping of the mind through cultural learning. Beyond humanity, Life 3.0 (or technological life) has the full ability to shape its own hardware (body) and software (mind).
Even if AI does not self-modify to acquire new goals, we must be very careful which goals and values we give it. Writer Nick Bilton fears, for instance, that an AI with the goal of eliminating cancer might find a naive solution: killing anyone prone to cancer.
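Bilton’s worry is, at bottom, a worry about objective functions. The toy planner below is entirely hypothetical (the population, risk numbers, and penalty weight are made up for illustration), but it shows how an exhaustive search optimizing the literal goal “minimize expected cancer cases” lands on the degenerate plan of keeping no one alive, while an objective that also prices human lives does not:

```python
# A toy illustration (hypothetical) of goal misspecification: the naive
# objective counts only cancer cases, so the "optimal" plan is an empty
# population. Pricing lives into the objective rules that plan out.
from itertools import combinations

population = [("Ann", 0.9), ("Bo", 0.5), ("Cy", 0.1)]  # (name, cancer risk)

def all_plans(people):
    # Every subset of the population the planner could choose to keep.
    for n in range(len(people) + 1):
        yield from combinations(people, n)

def naive_cost(plan):
    # Goal as literally stated: expected cancer cases, nothing else.
    return sum(risk for _, risk in plan)

def corrected_cost(plan):
    # Same goal, but each life lost now carries a heavy explicit cost.
    lives_lost = len(population) - len(plan)
    return sum(risk for _, risk in plan) + 10 * lives_lost

print(min(all_plans(population), key=naive_cost))      # () -- "cure" by killing
print(min(all_plans(population), key=corrected_cost))  # keeps all three people
```

Even the “corrected” objective here is brittle, of course; finding goals that are not brittle is exactly what makes the alignment problem hard.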
Why read Life 3.0?
Departing from other popular science books, Life 3.0 opens with a frighteningly plausible short story about a superintelligent AI aptly named Prometheus. From there, Tegmark tells us of the nonfictional Future of Life Institute, a think tank co-founded by Tegmark and four of his colleagues (including his wife, Meia Chita-Tegmark) to begin a public conversation about AI.
“Never before has a conversation about something that could kill us all felt so fascinating.”
We are then treated to an in-depth breakdown of the possible futures AI might bring. These trajectories include “gatekeeper” AI (superintelligence that guards humanity from further superintelligence), “zookeeper” AI (superintelligence that keeps humans as zoo animals), “enslaved-god” AI (superintelligence kept as a slave), and “benevolent dictator” AI (just what it sounds like).
With his physics background, Tegmark next treats us to a vision of the future (“our cosmic endowment”) that awaits either ourselves, our artificial descendants, or some cyborg fusion thereof. While touring such esoteric ideas as black hole farming and mind uploading, Tegmark’s writing remains accessible and concise, with summaries of key concepts at the end of each chapter. My favorite part of the book is the final chapter’s discussion of consciousness, including Giulio Tononi’s integrated information theory.
Never before has a conversation about something that could kill us all felt so fascinating. But AI is not entertainment, nor is Tegmark’s book. Before human-level AI debuts, we must reflect on the values we want machines to have.
Self-driving cars are already facing ethical dilemmas that we don’t know how to solve. Who should die in an unavoidable crash? The pedestrian who carelessly jumped into the street? Or the driver, when the car avoids the pedestrian by swerving into a telephone pole? As Tegmark, quoting philosopher Nick Bostrom, puts it, we are now faced with “philosophy with a deadline.”
We don’t yet have the answers. But Tegmark does have the questions. The conversation starts now.
Images by Jooyeun Lee and Kayleen Schreiber
–
Life 3.0: Being Human in the Age of Artificial Intelligence
By Max Tegmark
384 pages. Knopf. $19.39