'There are many incentives to getting something built and very few to getting it right' - Nate Soares, research fellow, Machine Intelligence Research Institute © Annie Tritt

The scene in the cramped office in Berkeley on a recent Saturday feels like a typical start-up carried along by the tech boom, with engineers working through the weekend in a race against time. The long whiteboard down one wall has been scrawled over in different-coloured pens. A large jar of candy and a glass-doored fridge full of soda sit by the entrance.

Nate Soares, a former Google engineer, is sitting on the edge of a sofa weighing up the chances of success for the project he is working on. He puts them at only about 5 per cent. But the odds he is calculating aren’t for some new smartphone app. Instead, Soares is talking about something much more arresting: whether programmers like him will be able to save mankind from extinction at the hands of its own most powerful creation.

The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence. Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken HAL 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.

If it were a sci-fi movie, a small band of misfits would be thrown together at this point to save the planet. To the people involved in this race, that doesn’t seem so far from reality. Besides Soares, there are probably only four computer scientists in the world currently working on how to program the super-smart machines of the not-too-distant future to make sure AI remains “friendly”, says Luke Muehlhauser, Miri’s director.

Their effort is prompted by a fear of what will happen when computers match humans in intelligence. At that point, humans would cede leadership in technological development, since the machines would be capable of improving their own designs by themselves. And with the accelerating pace of technological change, it wouldn’t be long before the capabilities – and goals – of the computers would far surpass human understanding.

In their single-mindedness, they would view their biological creators as mere collections of matter, waiting to be reprocessed into something they find more useful, says Muehlhauser. They would consume all the resources on earth before propelling themselves into space, sucking energy from distant stars and ultimately devouring much of the visible universe.

On a sunny Saturday morning in northern California, this provokes a distinct sense of unreality. Alternately leaning back or perching forward awkwardly on this too-low sofa, are we really trying to hold a rational conversation about something so far beyond human conception?

Terminator 3: The Rise of the Machines (2003) depicts a world terrorised by malevolent, human-killing robots © Warner Bros/ The Kobal Collection

It isn’t unusual to hear people express big thoughts about the future in Silicon Valley these days – though most of the technology visions are much more benign. It sometimes sounds as if every entrepreneur, however trivial the start-up, has taken a leaf from Google’s mission statement and is out to “make the world a better place”.

Usually this is just rhetoric. But there is also a strand of thinking that draws on the supposedly transformative effects of the technologies that will soon be within mankind’s grasp. It assumes that the human race is about to take its fate into its hands – for good or ill.

Peter Diamandis, a serial entrepreneur, author and space enthusiast, is one of the prophets of the advanced technological civilisation supposedly at hand. Nearly 20 years ago he was the brains behind the XPrize Foundation, which offered $10m for the first privately funded, reusable spacecraft. Among his current projects is a plan to mine minerals from asteroids. One large space rock in his sights contains platinum that he estimates would be worth $5,400bn at today’s prices on planet Earth.

For techno-optimists like him, the idea that computers will soon far outstrip their creators is both a given and something to be celebrated. Why would these machines bother to harm us, he says, when, to them, we will be about as interesting as “the bacteria in the soil outside in the backyard”?

'We haven’t seen 1 per cent of the change we’re going to see in the next 10 years' - Peter Diamandis © Annie Tritt

Countering the disaster-movie scenario of Miri, he sketches a future in which the machines shake off their earthly shackles and leave mankind behind: “It’s a huge universe, there’s plenty of resources and energy for them.” His matter-of-fact tone makes this science fiction outcome sound almost a given. “There’s no reason for them to stay here and battle with us – they can escape at the speed of light if they want.”

Connecting the present to a future in which humanity is liberated by advanced technology is what prophets like Diamandis are all about. He points to the members of a new super-rich class who sit at the head of the biggest tech companies and have both the money and the ambition to pursue true breakthrough ideas – people like Elon Musk of Tesla Motors and SpaceX, Jeff Bezos of Amazon and Larry Page of Google. “They have the wherewithal to make their dreams come true or to go after the world’s biggest problems,” he says. “It has very little value to work on an app for them.”

Artificial intelligence is one of the main ingredients in this. It is a technology that promises to make possible many others, for instance by letting people interact with computers by just talking to them, and by making computers far better at coming up with useful answers. AI also acts as the “brains” in robots, drones and driverless cars, bringing an awareness of the world to inanimate objects.

Companies that have stepped up investments in AI research over the past year, either by buying promising start-ups in the field or hiring well-known talent, include Google, Facebook and Amazon, as well as Chinese internet company Baidu. Asked at an internal Google event earlier this year whether the company had plans to try to develop human-level machine intelligence, co-founder Larry Page expressed optimism about the progress that might be made in future, though he also suggested the technology was still some way off, according to people familiar with his comments.

The history of AI research, which can be traced back 58 years to a conference at Dartmouth College in New Hampshire where the phrase was coined, has been littered with false dawns. If the latest hopes also fall short, it won’t be because of a lack of ambition or effort.

Google’s driverless car reflects co-founder Larry Page’s optimistic views on machine intelligence © Google

Extinction events are hard to contemplate for long, and not just because of their sheer awfulness. It is impossible to know how seriously to take them, since non-experts have no way of calculating the probability of catastrophes that have such a stark, binary outcome.

There is also extinction fatigue. The list of things that might finish us off has been growing. It includes not just global warming, but the microscopic, self-replicating machines of nanotechnology that might reduce the world to a grey goo or a plague released by irresponsible bioengineering. Frankly, who has time to worry about all this stuff?

Provided they seem sufficiently remote, truly horrific events can even be a little thrilling. From Icarus on, the idea of the creator being destroyed by his creation has been a compelling fantasy, a sort of Frankenstein narcissism for the tech elite. As Silicon Valley futurist Paul Saffo puts it, this touches on “a real, deep yearning. It’s the fall from the garden, it’s original sin.”

That might explain why the subject holds such fascination, both for those who warn of the risks as well as those who see AI as the tool that will instead liberate mankind. “Both sides are treating this like a secular religion,” Saffo says.

If this was all there was to the nightmare scenario of artificial intelligence, it might be easy to set aside. But the warnings have been growing louder. Astrophysicist Stephen Hawking, writing earlier this year, said that AI would be “the biggest event in human history”. But he added: “Unfortunately, it might also be the last.”

‘AI is potentially more dangerous than nukes’ - Elon Musk, CEO of SpaceX and Tesla Motors © Bloomberg

Elon Musk – whose successes with electric cars (through Tesla Motors) and private space flight (SpaceX) have elevated him to almost superhero status in Silicon Valley – has also spoken up. Several weeks ago, he advised his nearly 1.2 million Twitter followers to read Superintelligence, a book about the dangers of AI, which has made him think the technology is “potentially more dangerous than nukes”.

Mankind, as Musk sees it, might be like a computer program whose usefulness ends once it has started up a more complex piece of software. “Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted. “Unfortunately, that is increasingly probable.”

Nick Bostrom, the author of the book that provoked Musk’s alarming warning, is a Swedish philosophy professor and director of the University of Oxford’s Future of Humanity Institute. His clipped accent and dry, deliberate, sardonic delivery make him seem typecast for the role of Jeremiah.

Bostrom says he got interested in the subject in the 1990s, in an email discussion forum for an odd group known as the Extropians. Among the assorted “cranks” and “crackpots”, he says, was a handful of serious thinkers who were already looking ahead to a trans-humanist future in which technology would carry mankind beyond its biological limitations. They included Eliezer Yudkowsky, the guiding spirit behind Miri in Berkeley.

AI on the big screen

Ten landmark films

The Day the Earth Stood Still (1951, Robert Wise)

2001: A Space Odyssey (1968, Stanley Kubrick)

Star Wars: Episode IV - A New Hope (1977, George Lucas)

The Terminator (1984, James Cameron)

The Matrix (1999, Lilly & Lana Wachowski)

A.I. (2001, Steven Spielberg)

I, Robot (2004, Alex Proyas)

The Hitchhiker’s Guide to the Galaxy (2005, Garth Jennings)

Her (2013, Spike Jonze)

Avengers: Age of Ultron (2015, Joss Whedon)

The belief that self-inflicted extinction through technology is something worthy of serious academic study has been spreading. This year has seen the formation of the Future of Life Institute in the US (Musk is on the advisory board), while Cambridge university has created the Centre for the Study of Existential Risk. With no shortage of cataclysmic events to worry about, the most pressing question may be to decide what to worry about most.

“People are spending way too much time thinking about climate change, way too little thinking about AI,” says Peter Thiel, the Silicon Valley investor who is both a friend of Musk and a big financial backer of Yudkowsky’s group.

Behind all the warnings is a growing belief among computer scientists that machines will, within decades, reach the condition of “artificial general intelligence” and match humans in their intellectual capacity. That moment, Thiel says, “will be as momentous an event as extraterrestrials landing on this planet”. It will mark the birth of an intellect that is as capable as that of humans but is entirely inhuman, with unpredictable results.

Artificial intelligence has already provoked a public debate in recent months about a different kind of risk. This has centred on how it might wipe out human work, as clever computers and the robots they make possible take over most types of human employment. But the bigger issue may be whether AI wipes out mankind itself.

“The first question we would ask if aliens landed on this planet is not, what does this mean for the economy or jobs,” says Thiel. “It would be: are they friendly or unfriendly?”

Strictly speaking, according to Bostrom, the kind of machine-based intelligence that is heading humanity’s way wouldn’t wish its makers harm. But it would be so intent on its own goals that it could end up crushing mankind without a thought, like a human stepping carelessly on an ant. This is where the nightmare scenarios come into play. Once they soar past the intellects of their creators, machines are likely to reach their own conclusions about how best to achieve the goals programmed into them. And if humans can’t even prevent accidents in the moderately complex technological systems of today, what chance is there of controlling the systems to come?

Miri was founded on the belief that mankind’s ant brains will have to find a way to program safety into these godlike machines before they can reach their full potential. But anything human minds can dream up to restrain the unfathomable will of the supercomputers seems almost guaranteed to fail. And with complex systems governed by computers playing an increasingly central role in everyday life, that puts humanity at a distinct disadvantage if things go wrong.

Even the pessimists, however, say they are prepared to consider a happier outcome. Bostrom, for his part, says that there’s a chance things could turn out very well indeed. Aided by their brilliant machines, humans could quickly colonise space, cure ageing and upload their minds into computers – it’s just a case of getting past the dangerous moment of the intelligence explosion.

“If we can make it to the next century and achieve technological maturity, we could have another billion years,” he says.

Japan is a leading developer of robots designed to help humans: Toshiba’s 'ApriAttenda' housekeeper can open a fridge door © Getty

Like all technology races, the pursuit of a human-like machine intelligence is propelled by hope, idealism, ambition and greed. It is also carried along by its own momentum, as the exponential growth in computing power that has accompanied the information revolution adds inexorably to the capabilities at the disposal of the computer scientists.

Peter Diamandis embodies the hope that many in Silicon Valley feel these days amid the accelerating pace of technological change. Standing before an audience at Singularity University, the private training centre he helped to found a stone’s throw from Google’s headquarters, he predicts that a “massive tsunami of change” is coming. It will put an end to want for billions of people, he says – by which he means meeting the basic needs of “every man, woman and child. I don’t mean Louis Vuitton and Ferraris.” Provided the cost of computing power continues to fall at the rate it has since the arrival of the microchip, he predicts: “We haven’t seen 1 per cent of the change we’re going to see in the next 10 years.”

To optimists like Diamandis, an irrepressible human drive for discovery means that it is both impossible and undesirable to restrain new inventions. That is the case even if some of their uses are potentially harmful: “There is this genetic drive we have to explore. It drives us to do more because we can – and if we can, why wouldn’t we want to?”

There is also an unquestioned assumption in Silicon Valley that if something can be built, then, inevitably, it will be. To deliberately hold back from advancing a technology to its logical conclusion seems not just negligent but, in some unspoken way, morally wrong.

That is the assumption that Neil Jacobstein, co-head of the AI course at Singularity University, makes when describing how computers will one day become so advanced that they can simulate human brains. “We’re going to reverse-engineer the brain, that’s just the way it is,” he says.

‘Do we need more innovation? It’s non-obvious’ - Nick Bostrom, director, University of Oxford’s Future of Humanity Institute © Getty

Nor is there much social questioning of the headlong rush of technological development in the hands of private corporations. “Discussions around innovation are built on the premise that we need more,” says Bostrom. “It’s non-obvious, if you take a step back and look at the macro picture for humanity, that more innovation would be better.” Such dour pronouncements sound profoundly out of tune with the times.

And then there is the tech industry’s wealth-creation engine. Once cranked up, it becomes hard to apply the brakes. Technologies are built in a hurry and rushed to market. Fixes, where needed, are added later.

“There are many incentives to getting something built and very few to getting it right,” says Soares. Against these urges, self-restraint seems highly unlikely, he adds: “In history, that has almost never happened.” It is these unbalanced incentives that have persuaded him there is only a 5 per cent chance of programming sufficient safeguards into advanced AI (although he allows a further 15 per cent chance that something will happen that we can’t even imagine for now).

For an idea of how things could turn out, the internet is a model held up by those on both sides. A complex, networked system that draws together both human and machine intelligence, it has advanced in an ad hoc way. To some tech visionaries, it may even become the place where a collective hive mind emerges, transcending the individual.

Pervasive cyber security flaws show how systems like this are inherently vulnerable, says Soares. If similar glitches creep into the super-intelligent computer systems of the future, the prospects for mankind could be bleak.

Others, by contrast, see the internet as a forerunner of a more harmonious marriage of human and machine minds. To Google’s Larry Page, AI is already woven inextricably into online life. Services like web search or automatic translation between languages represent a high level of machine intelligence under the control of people. “It’s learning from you and you’re learning from it,” he says. “In some sense the internet is already that: it’s a combination of people and machine intelligence to make our lives better.”

Page, who is halfway through reading Bostrom’s book, says he is glad that the risks of AI are being aired – though he also criticises the “alarmism” around the subject. There will be plenty of time later on to work out how to control the advanced machine intelligence that is coming: “As we get closer and closer to it, I think we’ll know. I think we’ll learn a lot in the process.”

Yet that isn’t likely to silence the apocalyptic warnings. As Muehlhauser, the director of Miri, puts it: “We’re toying with the intelligence of the gods. And there isn’t an off switch.”

Photographs: Warner Bros/The Kobal Collection; MGM/The Kobal Collection; Getty Images; Bloomberg; Google

——————————————-

Letter in response to this article:

AI may conclude that existence is pointless / from Pascal Michels, Barcelona, Spain
