On 16 July 1945, as the scientists of Los Alamos in New Mexico prepared to detonate the first atomic bomb, they sought to allay their fears that the explosion might trigger a runaway reaction, igniting the nitrogen in the atmosphere or the hydrogen in the oceans and ending life on Earth, by calculating that the risk was less than three in a million.
Yet they were also aware that they were dealing with hitherto unknown forces, which left open the possibility of some unforeseen reaction. It was Heisenberg’s canvassing of just such a possibility that had disinclined Hitler, of all people, to pursue atomic weaponry: he did not want his thousand-year Reich vaporised barely a decade into its existence.
By the time of the Trinity test the Third Reich had been consigned to history, lessening the pressure on the scientists, but Japan and the Soviet Union remained threats. While exhaustive calculations had confirmed that the bomb posed no threat of setting fire to the Earth’s atmosphere, the unprecedented nature of the experiment still made the scientists nervous. They went ahead and did it just the same.
There was a similar question mark over the switching-on of the Large Hadron Collider (LHC) at CERN, amid concerns that it might generate black holes, strangelets or other destructive phenomena. The public was deluged with reassurances, but the probabilistic nature of quantum physics leaves open the possibility that an operation performed harmlessly a hundred thousand times may produce an unlooked-for outcome on the hundred thousand and first occasion. The project went ahead regardless, pursuing the holy grail of physics – the Higgs boson – with little quantifiable benefit to humanity.
The reason why scientists must never be allowed total autonomy is that their driving force, the investigative imperative, renders them indifferent to risk, although they pay lip service to safety concerns. At heart, they are all schoolboys messing around with their chemistry sets to see what happens. To the scientific mind, the most unthinkable eventuality is that any path of inquiry should be blocked out of craven concerns for public safety. Until now, for the most part, science has been in its gourmand stage, greedily devouring every possibility that presents itself, rather than gourmet, selecting lines of investigation discriminatingly.
Until now. Suddenly, alarm over the destructive possibilities of any further development of Artificial Intelligence (AI) has produced a revulsion among the very scientists who spearheaded AI research, leading them to voice public warnings of the greatest seriousness and urgency. Considering how alien it is to a scientist to call for a halt to development within their own specialist field, that is a measure of how deadly dangerous these people, the foremost experts in AI, must consider the technology to be. We would be insane not to heed their warnings and act upon them.
In late March, an open letter was published bearing the signatures of more than 1,000 individuals closely involved in the development of AI technology, including Elon Musk, Apple co-founder Steve Wozniak, “Sapiens” author Yuval Noah Harari, Emad Mostaque, CEO of Stability AI, and Tristan Harris, executive director of the Center for Humane Technology. The signatories read like a roll call of advanced technology maestros. The letter demanded a six-month moratorium on the training of AI systems more powerful than GPT-4 while the risks are assessed.
“Contemporary AI systems are now becoming human-competitive at general tasks,” the letter claimed. “We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
Although that instructive paragraph addresses the main concerns regarding AI, it pitches them too mildly, and two of the three have already been overtaken by events. Concern about AI flooding the internet with false information, or marginalising those speaking the truth, was voiced by Elon Musk late last year. He denounced ChatGPT’s developer, OpenAI, for building woke prejudices into the technology: “The danger of training AI to be woke – in other words, lie – is deadly,” he tweeted. The arrival of a post-truth technology in a post-modern society is depressingly appropriate; but that is no longer the main concern regarding AI.
Nor is the second issue mentioned in the letter, the displacement of human employees by AI. Arvind Krishna, the CEO of IBM, has recently announced a shift in his company’s hiring strategy, revealing plans to pause recruitment for posts that could be filled by AI and automation within the next few years, with some 7,800 roles likely to be replaced. Extend that strategy to every major corporation and you can foresee the mass redundancy of millions of skilled workers, provoking a social crisis of huge proportions.
The Hollywood writers’ strike demonstrates that the threat to employment – and to human culture – from AI is already with us. Yet all these concerns pale beside the existential risk posed by the exponential advance of AI technology. OpenAI’s GPT-4 was released in mid-March and is currently the leader in the field. Soon, more powerful models will make it obsolete, and that is the nightmare scenario.
Eliezer Yudkowsky is a US-based decision theorist who leads research at the Machine Intelligence Research Institute; he has been working on the alignment of Artificial General Intelligence for 22 years and is widely regarded as a founder of the field. He refused to sign the open letter because he believed it did not go nearly far enough. His aperçus on the subject, on which he is among the foremost experts, are spine-chilling.
Writing in Time magazine on 29 March, he observes: “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen’.”
Without extreme precision and preparation, which does not exist today, “the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general”. Yudkowsky is not conjuring some hostile, science-fiction entity, but a force that is simply clinically objective: “The AI does not love you, nor does it hate you, and you are made of atoms it can use for something else”.
That has always been the inherent Catch-22 of artificial intelligence: if it is not cleverer than us, its creation is a pointless expense; if it is cleverer than us, it will become our master and destroyer. AI could wreak havoc online by invading the computer systems to which every life-support system on the planet is now linked, but Yudkowsky’s reference to human atoms canvasses the consequences of its escape into the biological world. If, in the spirit Yudkowsky suggests, it regarded human existence as surplus to its requirements, it could manufacture a pathogen that would eliminate humanity in a pandemic, leaving AI to inherit the Earth.
In the light of these warnings, it beggars belief that nuclear powers are considering embedding AI in their weapons systems. What if AI made the clinical, objective calculation that a certain date offered the best opportunity of destroying other countries’ nuclear arsenals by launching a pre-emptive strike? The possibilities are endless and universally dystopian.
Yudkowsky’s concern is that humanity is not advancing, inch by inch, cautiously and collaboratively towards a phased and probationary development of AI but leaving it to degenerate into an arms race in rival commercial laboratories. “If somebody builds a too-powerful AI, under present conditions,” he warns, “I expect that every single member of the human species and all biological life on Earth dies shortly thereafter”.
He points out that, on current thinking, the proposal is effectively to delegate our AI alignment homework to some future AI, which “ought to be enough to get any sensible person to panic”. The reality of such concerns is impelling other leading members of the scientific establishment to break ranks with corporate interests and speak out on the dangers of AI.
Among them is the British-born computer scientist Geoffrey Hinton, nicknamed the “Godfather of AI” and a winner of the Turing Award, who has left his post at Google, where he worked with the Google Brain research team, in order to speak freely, and who has expressed regrets about parts of his life’s work. “The idea that this stuff could actually get smarter than people – a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Last month, Sir Jeremy Fleming, the director of GCHQ, privately warned the Cabinet about the danger from chatbots. From the topmost level of the relevant scientific disciplines to the security services, we are being warned about the peril we face.
This revolt by scientists against the blind ambitions of Big Tech has a historic significance: it signals that science has progressed from the gourmand to the gourmet stage. Unfortunately, it is happening just as science has lost its pre-eminence. In recent generations science became the new religion, with scientists as its priesthood and scientism distorting human discourse. “The science is settled” was the mantra of climate alarmists.
Recently, however, science has been dethroned, as we enter a new post-truth age in which the law of the land is prejudiced in favour of those who propagate objective untruths, such as that human beings can change sex or that gender is not a biological but a “social construct”. Science is coming clean and becoming transparent, just as it is losing its authority.
The threat from AI, as Yudkowsky says, can be averted in only one way: “Shut it all down.” He is right. There must be a collaborative, global consensus to close down this self-destructive experiment. China has expressed concerns in the past and should be brought on board to police this global threat. Obviously, rogue actors will seek to gain an advantage by pursuing AI covertly; North Korea is an obvious candidate. Yudkowsky’s advice is common sense: “be willing to destroy a rogue datacenter by airstrike”.
That is not extremist talk: it is an expert’s realistic assessment of the threat we face and the measures we must be prepared to take to avert annihilation. The time has come to wrest our destiny from the hands of Google, Microsoft and other Big Tech tyrants who have been getting too big for their boots for a long time. The AI threat also provides a realistic yardstick against which the follies of net zero should be measured.
Our governments were transfixed, like rabbits in the headlights, by climate alarmism when, warned by the MERS crisis, we should have been preparing to counter a pandemic. Now, our charlatan leaders have reverted to that distraction, oblivious to yet another genuine existential threat, this time from AI. There is no discrediting this uncomfortable truth, with the scientists who nurtured the monster now heroically confessing their mistakes and demanding its dismantling. AI must be killed off before it kills us.