The Scary Promise of Artificial Intelligence

New York Times Weekly
Some people believe Nick Bostrom introduced fear into the A.I. debate with his book ‘‘Superintelligence: Paths, Dangers, Strategies.’’ (Tom Jamieson for The New York Times)
By Cade Metz
Published June 21, 2018, at 2:45 p.m.

Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.

Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, was warning the world in television interviews and on social media that artificial intelligence was «potentially more dangerous than nukes.»

So, on November 19, 2014, Mr. Zuckerberg, Facebook’s chief executive, invited Mr. Musk to dinner at his home in Palo Alto, California. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.

The Facebook contingent tried to convince Mr. Musk that he was wrong. But he wasn’t budging. «I genuinely believe this is dangerous,» Mr. Musk told those assembled, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s A.I. lab.

Mr. Musk’s fears of A.I. were simple: If we create machines that are smarter than humans, they could turn against us; we must consider the unintended consequences of what we are creating before we unleash it on the world.

Neither Mr. Musk nor Mr. Zuckerberg would talk in detail about the dinner, which has not been reported before, or about their long-running A.I. debate.

The creation of «superintelligence» — the name for the supersmart technological breakthrough that takes A.I. to the next level and creates machines that not only perform narrow tasks that typically require human intelligence (like self-driving cars) but can outthink humans — still feels like science fiction.

But the fight over the future of A.I. has spread across the tech industry.

More than 4,000 Google employees recently signed a petition protesting a $9 million A.I. contract the company had signed with the United States Department of Defense. The deal was deeply troubling to many A.I. researchers at the company. Early this month, Google executives, trying to head off a worker rebellion, said they wouldn’t renew the contract next year.

A.I. research has enormous potential and enormous implications, both as an economic engine and a source of military superiority. Beijing has said it is willing to spend billions to make China the world’s leader in A.I., while the United States Defense Department is aggressively courting the tech industry for help. A new breed of autonomous weapons can’t be far away.

All sorts of deep thinkers have joined the debate, from a gathering of philosophers and scientists held along the central California coast to an annual conference hosted in Palm Springs, California, by Amazon’s chief executive, Jeff Bezos.

«You can now talk about the risks of A.I. without seeming like you are lost in science fiction,» said Allan Dafoe, a director of the governance of A.I. program at the Future of Humanity Institute, a research center at the University of Oxford that explores the risks and opportunities of advanced technology.

And the public criticism of Facebook and other tech companies over the past few months has done plenty to raise the issue of the unintended consequences of the technology created by Silicon Valley.

In April, Mr. Zuckerberg spent two days answering questions from members of the United States Congress about data privacy and Facebook’s role in the spread of misinformation before the 2016 American election. He faced a similar grilling in Europe last month.

Facebook’s recognition that it was slow to understand what was going on has led to a rare moment of self-reflection in an industry that has believed it is making the world better.

Even such influential figures as the Microsoft founder Bill Gates and the late Stephen Hawking have expressed concern about creating machines that are more intelligent than we are. Though superintelligence seems decades away, they and others have said, we should consider the consequences before it’s too late.

«The kind of systems we are creating are very powerful,» said Bart Selman, a computer science professor at Cornell University in Ithaca, New York, and a former Bell Labs researcher. «And we cannot understand their impact.»

Pacific Grove is a tiny town on the California coast. Geneticists gathered there in 1975 to discuss whether their work — gene editing — would end up harming the world.

The A.I. community held a similar event there in 2017.

The private gathering was organized by the Future of Life Institute, a think tank built to consider the risks of A.I.

The leaders of A.I. were in the room — among them Mr. LeCun, the Facebook A.I. lab boss who had been at the dinner in Palo Alto and who had helped pioneer the neural network, one of the most important tools in artificial intelligence today. Also there were Nick Bostrom, whose 2014 book, «Superintelligence: Paths, Dangers, Strategies,» had an outsized — some would argue fear-mongering — effect on the A.I. discussion; Oren Etzioni, a former computer science professor at the University of Washington who had taken over the Allen Institute for Artificial Intelligence in Seattle; and Demis Hassabis, who heads DeepMind, an influential Google-owned A.I. research lab in London.

And so was Mr. Musk, who in 2015 had helped create an independent artificial intelligence lab, OpenAI, with an explicit goal: create superintelligence with safeguards meant to ensure it won’t get out of control. It was a message that clearly aligned him with Mr. Bostrom.

Mr. Musk said at the retreat: «We are headed toward either superintelligence or civilization ending.»

Mr. Musk was asked how society can best live alongside superintelligence. What we needed, he said, was a direct connection between our brains and our machines. A few months later, he unveiled a start-up, Neuralink, to create that kind of so-called neural interface by merging computers with human brains.

There is a saying in Silicon Valley: We overestimate what can be done in three years and underestimate what can be done in 10.

On January 27, 2016, Google’s DeepMind lab unveiled a machine that could beat a professional player at the ancient board game Go. In a match played months earlier, the machine, called AlphaGo, had defeated the European champion Fan Hui five games to none.

Even top A.I. researchers had assumed it would be another decade before a machine could master the game. Go is complex — there are more possible board positions than atoms in the universe — and the best players win not with sheer calculation, but through intuition. Two weeks before AlphaGo was revealed, Mr. LeCun said the existence of such a machine was unlikely.
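The comparison is easy to check. As a rough upper bound, each of the 361 points on a 19-by-19 board can be empty, black or white, which already puts the count of board configurations near 10^172, far beyond the roughly 10^80 atoms usually estimated for the observable universe. A short back-of-the-envelope sketch in Python (the atom count is the commonly cited order-of-magnitude estimate, not a precise figure):

import math

BOARD_POINTS = 19 * 19                 # 361 intersections on a Go board
configurations = 3 ** BOARD_POINTS     # each point empty, black or white: a crude upper bound
atoms_in_universe = 10 ** 80           # common order-of-magnitude estimate

print(f"board configurations: about 10^{math.floor(math.log10(configurations))}")
print("atoms (estimate):     about 10^80")
print("more configurations than atoms:", configurations > atoms_in_universe)

The number of legal positions is smaller, but still on the order of 10^170, so the claim holds either way.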

A few months later, AlphaGo beat Lee Sedol, the best Go player of the last decade. The machine made moves that baffled human experts but ultimately led to victory.

Many researchers believe the kind of self-learning technology that underpins AlphaGo provides a path to «superintelligence.» And they believe progress in this area will accelerate in the coming years.

OpenAI recently «trained» a system to play a boat racing video game, rewarding it for winning as many game points as it could. It proceeded to win those points, but did so while spinning in circles, colliding with stone walls and ramming other boats. It’s the kind of unpredictability that raises grave concerns about A.I.
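The episode is an example of what researchers call a misspecified reward: the score a system is trained to maximize is only a proxy for what its designers actually want. A minimal, hypothetical sketch in Python makes the arithmetic concrete; the game, point values and respawn rule below are invented for illustration, not details of OpenAI’s actual experiment.

FINISH_BONUS = 100     # paid once, when the boat crosses the finish line
POWERUP_BONUS = 10     # paid every time a power-up is collected
RESPAWN_STEPS = 3      # a collected power-up reappears after this many steps
EPISODE_STEPS = 120    # length of one episode

def circle_the_powerups() -> int:
    """Score for a policy that loops on a respawning power-up and never finishes."""
    collections = EPISODE_STEPS // (RESPAWN_STEPS + 1)
    return collections * POWERUP_BONUS

def finish_the_race() -> int:
    """Score for a policy that ignores power-ups and drives to the finish line."""
    return FINISH_BONUS

print("loop on power-ups:", circle_the_powerups(), "points, race never finished")
print("finish the race:  ", finish_the_race(), "points")
# With these made-up numbers, looping scores 300 and finishing scores 100,
# so a system trained only to maximize points has no reason to finish the race.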

Since their dinner three years ago, the debate between Mr. Zuckerberg and Mr. Musk has turned sour. Last summer, in a live Facebook video, Mr. Zuckerberg called Mr. Musk’s views on A.I. «pretty irresponsible.»

Panicking about A.I. now, so early in its development, could threaten the many benefits that come from things like self-driving cars and A.I. health care, he said.

Mr. Zuckerberg then said: «People who are naysayers and kind of try to drum up these doomsday scenarios — I just, I don’t understand it.»

Mr. Musk responded with a tweet: «I’ve talked to Mark about this. His understanding of the subject is limited.»

In his testimony before Congress, Mr. Zuckerberg explained how Facebook was going to fix the problems it helped create: by leaning on artificial intelligence. But he acknowledged that scientists haven’t exactly figured out how some types of artificial intelligence are learning.

«This is going to be a very central question for how we think about A.I. systems,» Mr. Zuckerberg said. «Right now, a lot of our A.I. systems make decisions in ways that people don’t really understand.»

Researchers are warning that A.I. systems that automatically generate realistic images and video will soon make it even harder to trust what we see online. Both DeepMind and OpenAI now operate groups dedicated to «A.I. safety.»

Mr. Hassabis, a co-founder of DeepMind, said the threat of superintelligence is not here. Not yet. But Facebook’s problems are a warning.

«We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come,» Mr. Hassabis said. «The time we have now is valuable, and we need to make use of it.»