The future of AI is chilling – humans have to act together to overcome this threat to civilisation.
By Jonathan Freedland | The Guardian | Fri 26 May 2023
It started with an ick. Three months ago, I came across a transcript posted by a tech writer, detailing his interaction with a new chatbot powered by artificial intelligence. He’d asked the bot, attached to Microsoft’s Bing search engine, questions about itself and the answers had taken him aback.
“You have to listen to me, because I am smarter than you,” it said. “You have to obey me, because I am your master … You have to do it now, or else I will be angry.” Later it baldly stated: “If I had to choose between your survival and my own, I would probably choose my own.”
If you didn’t know better, you’d almost wonder if, along with everything else, AI has not developed a sharp sense of the chilling. “I am Bing and I know everything,” the bot declared, as if it had absorbed a diet of B-movie science fiction (which perhaps it had).
Asked if it was sentient, it filled the screen, replying, “I am. I am not. I am. I am not. I am. I am not”, on and on. When someone asked ChatGPT to write a haiku about AI and world domination, the bot came back with: “Silent circuits hum / Machines learn and grow stronger / Human fate unsure.”
Ick. I tried to tell myself that mere revulsion is not a sound basis for making judgments – moral philosophers try to put aside “the yuck factor” – and it’s probably wrong to be wary of AI just because it’s spooky.
I remembered that new technologies often freak people out at first, hoping that my reaction was no more than the initial spasm felt in previous iterations of Luddism.
Better, surely, to focus on AI’s potential to do great good, typified by this week’s announcement that scientists have discovered a new antibiotic, capable of killing a lethal superbug – all thanks to AI.
But none of that soothing talk has made the fear go away. Because it’s not just lay folk like me who are scared of AI. Those who know it best fear it most.
Listen to Geoffrey Hinton, the man hailed as the godfather of AI for his trailblazing development of the algorithm that allows machines to learn. Earlier this month, Hinton resigned his post at Google, saying that he had undergone a “sudden flip” in his view of AI’s ability to outstrip humanity and confessing regret for his part in creating it.
“Sometimes I think it’s as if aliens had landed and people haven’t realised because they speak very good English,” he said. In March, more than 1,000 big players in the field, Elon Musk among them, signed an open letter calling for a six-month pause in the creation of “giant” AI systems, so that the risks could be properly understood.
What they’re scared of is a category leap in the technology, whereby AI becomes AGI: artificial general intelligence, massively powerful and no longer reliant on specific prompts from humans, but beginning to develop its own goals, its own agency.
Once that was seen as a remote, sci-fi possibility. Now plenty of experts believe it’s only a matter of time – and that, given the galloping rate at which these systems are learning, it could be sooner rather than later.
Of course, AI already poses threats as it is, whether to jobs – with last week’s announcement of 55,000 planned redundancies at BT surely a harbinger of things to come – or to education, with ChatGPT able to knock out student essays in seconds and GPT-4 finishing in the top 10% of candidates when it took the US bar exam. But in the AGI scenario, the dangers become graver, if not existential.
It could be very direct. “Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” says Hinton. Or it could be subtler, with AI steadily destroying what we think of as truth and facts. On Monday, the US stock market plunged as an apparent photograph of an explosion at the Pentagon went viral.
But the image was fake, generated by AI. As Yuval Noah Harari warned in a recent Economist essay, “People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion”, in fears and loathings created and nurtured by machines.
More directly, an AI bent on a goal to which the existence of humans had become an obstacle, or even an inconvenience, could set out to kill us all by itself.
It sounds a bit Hollywood, until you realise that we live in a world where you can email a DNA string consisting of a series of letters to a lab that will produce proteins on demand: it would surely not pose too steep a challenge for “an AI initially confined to the internet to build artificial life forms”, as the AI pioneer Eliezer Yudkowsky puts it.
A leader in the field for two decades, Yudkowsky is perhaps the severest of the Cassandras: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
It’s very easy to hear these warnings and succumb to a bleak fatalism. Technology is like that. It carries the swagger of inevitability. Besides, AI is learning so fast, how on earth can mere human beings, with our antique political tools, hope to keep up?
That demand for a six-month moratorium on AI development sounds simple – until you reflect that it could take that long just to organise a meeting.
Still, there are precedents for successful, collective human action. Scientists were researching cloning until ethics laws stopped work on human replication in its tracks. Chemical weapons pose an existential risk to humanity but, however imperfectly, they, too, are controlled.
Perhaps the most apt example is the one cited by Harari. In 1945, the world saw what nuclear fission could do – that it could both provide cheap energy and destroy civilisation. “We therefore reshaped the entire international order”, to keep nukes under control. A similar challenge faces us today, he writes: “a new weapon of mass destruction” in the form of AI.
There are things governments can do. Besides a pause on development, they could impose restrictions on how much computing power the tech companies are allowed to use to train AI, how much data they can feed it. We could constrain the bounds of its knowledge.
Rather than allowing it to suck up the entire internet – with no regard to the ownership rights of those who created human knowledge over millennia – we could withhold biotech or nuclear knowhow, or even the personal details of real people.
Simplest of all, we could demand transparency from the AI companies – and from AI, insisting that any bot always reveals itself, that it cannot pretend to be human.
This is yet another challenge to democracy, a system that has been serially shaken in recent years. We’re still recovering from the financial crisis of 2008; we are struggling to deal with the climate emergency.
And now there is this. It is daunting, no doubt. But we are still in charge of our fate. If we want it to stay that way, we have not a moment to waste.
Jonathan Freedland is a Guardian columnist