The Global Race to Regulate AI

The intelligence may be artificial, but the regulation is real—or might be.

An illustration shows a gavel cracking down on a digitized background of ones and zeroes for a story about regulating artificial intelligence.
Foreign Policy illustration

Somewhere between hype and fear about artificial intelligence, there is—or there will be—AI governance. Chatbots, image generators, and search engines powered by artificial intelligence are quickly proliferating, as are dire warnings about the unbridled development of the technology from many of the people who developed it. Calls for policy interventions have grown louder in recent weeks, and regulators are trying to step up their game.

Hundreds of technologists and researchers have warned about the dangers of AI in multiple open letters, with one published in late March advocating a six-month “pause” on the development of new AI models. More recently, veteran scientist Geoffrey Hinton—often referred to as the godfather of AI—stepped down from his role at Google with a similarly dire prognosis.

“I don’t think they should scale this up more until they have understood whether they can control it,” Hinton told the New York Times, calling for global regulation and collaboration to rein in a technology that he (and others) say could decimate the global job market, warp online reality, or, at worst, surpass human intelligence as AI systems grow more advanced. In some respects, he said, those systems may already be “a lot better than what is going on in the brain.”

Setting universal regulatory frameworks for technology has always been tricky, but it has gotten more challenging as that technology advances. Negotiating rules on everything from social media to 5G cellular technology has been fraught with geopolitical snafus and disagreements on the best approach. AI, with its immense potential to transform economies and societies—not necessarily for the better—presents an unprecedented challenge.

As so often with new technologies, Europe is at the front of the regulatory line. European Union lawmakers are set to vote next week on the bloc’s AI Act, first proposed two years ago and likely to come into force two years hence. The law designates “high risk” applications of artificial intelligence, such as law enforcement, critical infrastructure, education, and employment, that will be subject to more stringent compliance and testing requirements for companies that make and deploy those applications.

One of the problems is that technology moves faster than regulation. In the space of a couple of years, artificial intelligence has shown how it can tap so-called large language models to mimic human intelligence—when it’s not busy making pitch-perfect images of women enjoying salad.

“When the AI Act was designed, there was no generative AI or large language models,” said Gerard de Graaf, the EU’s senior digital envoy to the United States. European lawmakers are now taking a closer look at the proposed legislation. “We are not going to have another negotiation, so this has to stand the test of time,” he said.

Across the pond, U.S. regulation of AI is still a work in progress. Last year, the White House published a “Blueprint for an AI Bill of Rights” that lays out five principles to prevent discrimination and protect user privacy and safety, and the National Institute of Standards and Technology released its AI Risk Management Framework in January. Congress is also starting to mobilize, with Democratic Sen. Chuck Schumer, the majority leader, launching an effort last month to come up with comprehensive AI legislation.

But so far, Washington has adopted a voluntary approach to compliance, while experts say a more binding approach to AI regulation is needed.

“I think what we’ve learned over the past decade of crises and impact is that soft regulation is wildly insufficient to regulate this sector; what we need are enforceable laws,” said Sarah Myers West, the managing director of the AI Now Institute and a former senior advisor on AI to the Federal Trade Commission.

Some regulations are already on the books and still applicable, such as laws on copyright, privacy, discrimination, and data protection, the latter of which Italy used to temporarily ban ChatGPT, the hugely popular chatbot developed by Microsoft-backed company OpenAI.

Washington and Brussels are trying to lay the groundwork for global governance of AI through the bilateral Trade and Technology Council. The digital ministers of the G-7 group of countries also dedicated a significant portion of their meeting last weekend to “responsible AI and global AI governance,” endorsing a risk-based approach similar to that of the EU legislation. The United States is also throwing its weight behind the effort to develop a global framework, announcing on Thursday a National Standards Strategy for Critical and Emerging Technology, which includes sections on the topic of artificial intelligence.

“We’re bringing in more international partners and allies to support a shared standards framework,” a senior administration official told reporters.

The big challenge, of course, is the dragon in the room. China’s management of its technology sector has long stood in stark contrast to the open, global internet—there’s no “Great Firewall” in the West—and with AI, Beijing does not appear to be acting differently. As with social media and large tech companies in general, China has laid out regulations that impose much more exacting requirements on how AI companies collect data, train their algorithms, and produce output consistent with Beijing’s censorship and government control.

It took only a matter of days for China to restrict ChatGPT when the program began to take the world by storm earlier this year. On its own shores, China has adopted a more piecemeal approach than the EU’s broad-based one, with successive policies targeted at specific AI applications, such as video, images, and text. It’s a way to keep iterating and reacting to changes in technology while also retaining greater oversight, according to Matt Sheehan, a fellow at the Carnegie Endowment for International Peace whose research focuses on China’s technology sector.

“All the regulations so far, if you kind of look backwards through the Chinese policy writing and Chinese media, these pretty clearly have their roots in fears about losing control over the flow of information,” Sheehan said. “In terms of the Chinese government’s relationship to AI, it’s very broadly supportive but wanting to cover its bases on control of information.”

China has long-established ambitions to become a global leader in AI. It may well offer an alternative vision for the development and use of the technology that more authoritarian (or even somewhat less democratic) nations could choose to follow, rather than, say, the EU’s more risk-based approach to regulation.

“The national context in which the regulation is being deployed matters an awful lot,” said Myers West. “So to the extent that the [EU’s] AI Act is going to become a template for other countries to adopt, it’s also important to look at the constitutional protection, the rights, the democratic context of the nation employing the laws, because particularly with this risk-based approach, it won’t necessarily port neatly into other kinds of jurisdictions.”

In some ways, Sheehan said, the conversation around AI regulation with China might end up mirroring what has happened on climate change, with Beijing initially chafing at rules imposed by the West as a way to kneecap its own progress before ultimately determining that climate change is bigger than great power competition.

“If that recognition continues to grow in both countries, then maybe we could move toward that type of a shared understanding, or a push towards some nonproliferation guardrails,” he said. “At the moment, in the next one or two years, seems far-fetched, and then beyond that it’s kind of anybody’s guess.”

That doesn’t mean that officials want China to be excluded from the conversation just yet.

“We do not intend to exclude any country as we talk about this standards strategy rollout—we want everybody to come to the table so the best technological solutions globally can come to fruition,” a senior administration official said Wednesday. “I think a really bad outcome in any scenario is that the globe bifurcates in standards being developed in different regions that are not helpful to the U.S. economy.”

But that is already happening. Even though the technology is in its early days, a patchwork of different regulations with a wide range of priorities is emerging worldwide, with a significant East-West divide, according to a report last week by the Brookings Institution that analyzed AI governance plans across nearly three dozen countries, including the United States, Singapore, China, Russia, Mexico, and India. “The East is almost exclusively focused on building up its R&D capacity and is largely ignoring the traditional ‘guardrails’ of technology management,” the authors wrote. “By contrast, the West is almost exclusively focused on ensuring that these guardrails are in place.”

One of the other risks is wrapping AI development into the broader U.S.-China rivalry and potential economic and technological decoupling. Since China is going great guns to develop AI, some suggest that the United States should not hobble itself out of concern over just how the technology might be abused. That, suggested Myers West, would be a mistake.

“There is a growing arms race rhetoric that’s being adopted around the development of artificial intelligence, that kind of situates an arms race with China as a justification for why not to proceed with strengthening or enforcing existing regulations,” she said. “So I think there’s real reason to proceed with caution around this rhetoric and consider whose interests are being served by it.”

The flip side is that there is a practical concern given the sheer pace of AI development, and its likely central importance to knowledge industries in years to come: Well-intentioned but overburdensome regulation could hamstring AI.

“A regulator is always behind market developments,” said de Graaf. “The problem is when you get so far behind the market that regulation is going to be an obstacle to progress and to innovation.”

Still, even Edmond Hoyle’s rules have been revised and updated over the years.

“It isn’t an incremental change or something that will affect one sector—this is transformational. It is coming at us very, very fast,” said de Graaf. “So policymakers in all areas are now going to take a look at the rulebooks and ask themselves if it’s still fit for purpose. That’s what AI is doing to all of us.”

Rishi Iyengar is a reporter at Foreign Policy. Twitter: @Iyengarish
