Argument
An expert's point of view on a current event.

Every Country Is on Its Own on AI

Why AI regulation can’t follow in the footsteps of international nuclear controls.

By Bill Drexel, an associate fellow at the Center for a New American Security, and Michael Depp, a research associate at the Center for a New American Security.
A diver wearing a space suit walks underwater during a "Moonwalk" operation in the Mediterranean sea near Marseille, southern France, on June 8, 2016. Boris Horvat/AFP via Getty Images

Many artificial intelligence industry leaders see themselves as this century’s nuclear scientists—wielding a revolutionary new technology so powerful that it could threaten to wipe out humanity itself. Some, including the chiefs of industry front-runner OpenAI, are now pinning their hopes of averting such an AI cataclysm on establishing global AI governance structures styled after the nuclear arms controls that emerged from the Cold War.

In concept, that could be a very welcome development to help mitigate the wide range of serious risks AI already presents, to say nothing of the catastrophic scenarios that a majority of Americans now fear. British Prime Minister Rishi Sunak is so taken with the idea that he pitched U.S. President Joe Biden on the concept last week and is angling for Britain to house the new institution.

But establishing such an institution quickly enough to match AI’s accelerating progress is likely a pipe dream, given the history of nuclear arms controls and their status today. In the absence of an international agency, frontier labs and their governments must instead assume responsibility for the dangers created by new AI capabilities and act accordingly—especially as the geopolitical stakes and exact risks of the technology are still being determined.

There are good reasons why nuclear arms control is an alluring model for averting catastrophic AI risks: The world has not (yet) had to endure a nuclear exchange, nuclear stockpiles have been cut by 80 percent from their peak, and a range of countries that might otherwise have acquired nuclear weapons never did. But viewing these victories in hindsight can obscure just how precarious they were, and still are today.

Standing up the agreements and institutions that have been central to avoiding nuclear disaster was a slow, uncertain process. Twelve years elapsed between the United States’ acquisition of its first nuclear weapon in 1945 and the creation of the International Atomic Energy Agency (IAEA) in 1957, which OpenAI CEO Sam Altman and U.N. Secretary-General António Guterres both propose as a model for a new AI regulatory regime. During that time, Britain and the Soviet Union also acquired nuclear weapons and the United States’ arsenal grew rapidly, setting the stage for a nuclear arms race. For the first decade of its existence, the IAEA was principally focused on safety for peaceful nuclear energy technology and didn’t take on its more famous nonproliferation role in earnest until 1968, by which time France, China, and Israel had also armed.

For years, the clear threat of global devastation from a nuclear war between superpowers failed to produce robust nuclear controls. It took the Cuban Missile Crisis—which pushed humanity to the brink of nuclear Armageddon—for serious arms control measures to take shape, and these were often carried out through ad hoc treaties or norms rather than formal international institutions.

Indeed, if the AI scions pushing for a new IAEA are attracted by the legitimacy that comes from multilateral cooperation, they may be disappointed to learn that the United States’ dogged insistence on nonproliferation has been more effective than international institutions at keeping nations from arming. Realpolitik can also distort seemingly humanitarian efforts to curb catastrophic risks: The United States first proposed the IAEA, in part, to distract from its efforts to strengthen its own arsenal.

Though AI safety experts have the benefit of nuclear precedent to learn from, a parallel AI safety regime will likely run into similar issues, and new ones besides. Treaties and multilateral agreements tend to move far more slowly than AI, which is progressing at breakneck speed and will be shaped by the hard competitive interests of the superpowers. Mobilizing political will without a catastrophe or near miss may be a nonstarter.

Current geopolitical conditions are also unusually hostile to building a new control regime to deal with AI hazards. Nuclear controls are already more frayed than they have been at any time since the Cold War, as Russian President Vladimir Putin threatens nuclear strikes in Ukraine, China vastly expands its arsenal, and seminal nuclear control treaties lapse. Beijing has lately shown considerable resistance to making use of existing mechanisms to reduce friction between the superpowers, and it has eschewed cooperation on scientific risk management in public health and space operations.

The biggest challenge, however, is related to the evolving risks of AI itself. Compared to the relatively narrow use cases of nuclear technology—weapons and energy production—AI holds promise in a highly diverse, evolving set of domains, making regulation far more complex. AI development will doubtless be a key element in nations’ general economic competitiveness in the decades ahead, but a “killer app” for AI in geopolitics on the level of the atom bomb has not yet materialized. Likewise, although an ever-growing list of technology luminaries have professed concern about potential AI cataclysm on a similar scale to nuclear catastrophe, the specifics of that danger remain opaque. Until governments have a much clearer idea of the trade-offs between the geopolitical advantages and risks associated with advanced AI, it will be impossible to institute a control regime of the kind AI leaders are envisioning.

That doesn’t mean that working toward an international AI control body should not be a high priority, nor that such an institution couldn’t eventually play an important role in mitigating risks. But proponents of this plan should understand that, as in the nuclear era, humanity will have to navigate severe new technological risks in an atmosphere of geopolitical uncertainty using slow, flawed mechanisms for international coordination. If AI systems pose as urgent and grave a danger as industry leaders believe, international agreements for mitigating those threats likely won’t bear fruit in time for the arrival of cataclysmic dangers.

Even as they work toward international solutions, frontier labs should simultaneously recognize that the onus falls on them and their respective governments to tackle grave emerging challenges. They are, after all, the ones creating the dangers—not the diplomats. Given that only a handful of labs are at the forefront of emerging capabilities, coordination of norms and safeguards among them should be much more easily achievable than seeking consensus among governments in international fora. As severe risks that fall short of calamity become more apparent and capabilities diffuse, industry-led norms and safety coordination mechanisms may also be necessary to mitigate hazards.

Individual governments likewise have an outsized role to play, but through old-fashioned methods: establishing legally binding safety standards, licensing the resources needed to build the most complex AI models, and modernizing legal structures to cover new AI use cases could all be positive steps in the years ahead as new capabilities invite new risks. Already, the U.S. government’s controls on AI chip exports to China represent the right sort of measure for mitigating the dangers of China’s AI ambitions.

However attractive an international AI control regime might appear, success in realizing that vision is far from assured in today’s political climate and will not match the speed of AI progress. Leading AI labs must cooperate with each other and their governments to put in place necessary guardrails today rather than betting on a repeat of a precarious and politically fraught Cold War success.

Bill Drexel is an associate fellow at the Center for a New American Security, where he researches artificial intelligence, technology competition, and national security. Twitter: @bill_drexel

Michael Depp is a research associate at the Center for a New American Security, where he focuses on AI safety and stability. Twitter: @michaeljaydepp
