An AI supercomputer in California. Elon Musk, a leader in AI technology, is leading calls for the risks of ‘giant’ AI computers to be assessed. Photograph: Rebecca Lewington/Reuters

Elon Musk joins call for pause in creation of giant AI ‘digital minds’


More than 1,000 artificial intelligence experts urge delay until world can be confident ‘effects will be positive and risks manageable’

More than 1,000 artificial intelligence experts, researchers and backers have joined a call for an immediate pause on the creation of “giant” AIs for at least six months, so the capabilities and dangers of systems such as GPT-4 can be properly studied and mitigated.

The demand is made in an open letter signed by major AI players including Elon Musk, who co-founded OpenAI, the research lab responsible for ChatGPT and GPT-4; Emad Mostaque, who founded London-based Stability AI; and Steve Wozniak, the co-founder of Apple.

Its signatories also include engineers from Amazon, DeepMind, Google, Meta and Microsoft, as well as academics including the cognitive scientist Gary Marcus.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter says.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The authors, coordinated by the “longtermist” thinktank the Future of Life Institute, cite OpenAI’s own co-founder Sam Altman in justifying their calls.

In a post from February, Altman wrote: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The letter continued: “We agree. That point is now.”

If researchers will not voluntarily pause their work on AI models more powerful than GPT-4, the letter’s benchmark for “giant” models, then “governments should step in”, the authors say.

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” they add.

Since the release of GPT-4, OpenAI has been adding capabilities to the AI system with “plugins”, giving it the ability to look up data on the open web, plan holidays, and even order groceries. But the company also has to contend with “capability overhang”: the problem that its own systems are more powerful than it realises when they are released.

As researchers experiment with GPT-4 over the coming weeks and months, they are likely to uncover new ways of “prompting” the system that improve its ability to solve difficult problems.

One recent discovery was that the AI is noticeably more accurate at answering questions if it is first told to do so “in the style of a knowledgeable expert”.
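The trick described above can be sketched in code. This is a minimal illustration, assuming the common chat-style message format of role/content dictionaries; the exact wording of the instruction and the helper function `expert_prompt` are illustrative assumptions, not a documented OpenAI recipe.

```python
# Sketch of the "expert framing" prompting trick described above.
# The system-message wording and the helper name are illustrative
# assumptions, not OpenAI's documented behaviour.

def expert_prompt(question: str) -> list[dict]:
    """Wrap a question in a system message asking for an expert-style answer."""
    return [
        {"role": "system",
         "content": "Answer in the style of a knowledgeable expert."},
        {"role": "user", "content": question},
    ]

messages = expert_prompt("Why does the sky appear blue?")
print(messages[0]["content"])
```

A message list built this way could then be passed to a chat-completion API; the point of the technique is simply that the framing instruction, sent before the question, measurably changes the quality of the answer.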

The call for strict regulation stands in stark contrast to the UK government’s flagship AI regulation white paper, published on Wednesday, which contains no new powers at all.


Instead, the government says, the focus is on coordinating existing regulators such as the Competition and Markets Authority and Health and Safety Executive, offering five “principles” through which they should think about AI.

“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow,” said the science, innovation and technology secretary, Michelle Donelan.

The Ada Lovelace Institute was among those that criticised the announcement. “The UK’s approach has significant gaps, which could leave harms unaddressed, and is underpowered relative to the urgency and scale of the challenge,” said Michael Birtwistle, who leads data and AI law and policy at the research institute.

“The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software.”

Labour joined the criticism, with the shadow culture secretary, Lucy Powell, accusing the government of “letting down their side of the bargain”.

She said: “This regulation will take months, if not years, to come into effect. Meanwhile, ChatGPT, Google’s Bard and many others are making AI a regular part of our everyday lives.

“The government risks reinforcing gaps in our existing regulatory system, and making the system hugely complex for businesses and citizens to navigate, at the same time as they’re weakening those foundations through their upcoming data bill.”
