The US unofficial position on upcoming EU Artificial Intelligence rules


The United States is pushing for a narrower Artificial Intelligence definition, a broader exemption for general purpose AI and an individualised risk assessment in the AI Act, according to a document obtained by EURACTIV.

The non-paper is dated October 2022 and was sent to targeted government officials in some EU capitals and the European Commission. It largely follows the ideas and wording of the initial feedback sent to EU lawmakers last March.

“Many of our comments are prompted by our growing cooperation in this area under the U.S.-EU Trade and Technology Council (TTC) and concerns over whether the proposed Act will support or restrict continued cooperation,” the document reads.

The document is a reaction to the progress made by the Czech Presidency of the EU Council on the AI regulation last month. A US Mission to the European Union spokesperson declined EURACTIV’s request for comment.

EU Council nears common position on AI Act in semi-final text

The Czech Presidency of the EU Council circulated a new compromise on the Artificial Intelligence (AI) Act on Wednesday (19 October), set to be the basis for an agreement next month.

AI definition

While the Americans showed support for the changes the Czech Presidency made to clarify the definition of Artificial Intelligence, they warned that the definition “still includes systems that are not sophisticated enough to merit special attention under AI-focused legislation, such as hand-crafted rules-based systems.”

To avoid over-inclusiveness, the non-paper suggests using a narrower definition that captures the spirit of the one provided by the Organisation for Economic Co-operation and Development (OECD) and clarifies what is and is not included.

AI Act: Czech Presidency pushes narrower AI definition, shorter high-risk list

The Czech Presidency of the EU Council pitched a narrower definition of Artificial Intelligence (AI), a revised and shortened list of high-risk systems, a stronger role for the AI Board and reworded national security exemption.

General purpose AI

The non-paper recommends different liability rules for the providers of general purpose AI systems (large models that can be adapted to perform various tasks) and for the users who might employ such models in high-risk applications.

The Czech Presidency proposed that the Commission should tailor the obligations of the AI regulation to the specificities of general purpose AI at a later stage via an implementing act.

By contrast, the US administration warns that placing risk-management obligations on these providers could prove “very burdensome, technically difficult and in some cases impossible.”

Moreover, the non-paper pushes back against the idea that general purpose AI providers should have to cooperate with their users to help them comply with the AI Act, including by disclosing confidential business information or trade secrets, albeit with appropriate safeguards.

The leading providers of general purpose AI systems are large American companies like Microsoft and IBM.

Czech Presidency proposes tailored requirements for general purpose AI

The Czech Republic wants the Commission to evaluate how to best adapt the obligations of the AI Act to general purpose AI, according to the latest compromise text seen by EURACTIV.

High-risk systems

In classifying a use case as high-risk, the US administration advocated for a more individualised risk assessment that should consider threat sources, vulnerabilities, likely occurrence of the harm and its significance.

By contrast, impacts on human rights would only be assessed in particular contexts. The US administration also made the case for an appeal mechanism for companies that think they have been wrongly classified as high-risk.

On international cooperation, Washington wants standards from the National Institute of Standards and Technology (NIST) to be accepted as an alternative means of compliance to the self-assessments mandated in the AI regulation.

The non-paper also states that “in areas considered to be “high risk” under the Act, many U.S. government agencies will likely stop sharing rather than risk that closely held methods will be disclosed more broadly than they have comfort with.”

While the document expresses support for the approach of the Czech Presidency in adding an extra layer for the classification of high-risk systems, it also warns that there might be inconsistencies with the regulatory regime of the Medical Device Regulation.

AI Act: Czech Presidency puts forward narrower classification of high-risk systems

A new partial compromise on the AI Act, seen by EURACTIV on Friday (16 September) further elaborates on the concept of the ‘extra layer’ that would qualify an AI as high-risk only if it has a major impact on decision-making.

Governance

The United States is pushing for a more substantial role for the AI Board, which will collectively gather the EU’s national competent authorities, relative to the authority of individual countries. It also proposes a standing sub-group within the board with stakeholder representatives.

As the Board will be responsible for advising on technical specifications, harmonised standards and the development of guidelines, Washington would like to see wording allowing the participation of representatives from like-minded countries, at least in this sub-group.

The European Commission has increasingly closed the door to non-EU countries on standard development, whilst the US is pushing for more bilateral cooperation.

Industry bodies oppose EU’s removal of non-European companies from expert group

Six interest groups have mobilised against the European Commission’s recent restrictions on the involvement of representatives of non-European companies in the Radio Equipment expert group, saying they go against the EU’s global values.

International cooperation

According to the non-paper, the regulation could hamper cooperation with third countries, as it covers public authorities outside the EU whose activities impact the bloc, unless there is an international agreement for law enforcement and judicial cooperation.

The concern is that the US administration might stop cooperation with the EU authorities for border control management, which the AI Act considers separate from law enforcement.

Another point raised is that the reference to ‘agreements’ is deemed too narrow, as binding agreements on AI cooperation might take years to conclude. Even existing law enforcement cooperation might suffer since it also takes place outside formal agreements.

Moreover, the non-paper suggests a more flexible exemption for the use of biometric recognition technologies in cases where there is a ‘credible’ threat, such as a terrorist attack, as strict wording could prevent practical cooperation to ensure the safety of major public events.

Source code

In May, the French Presidency included the possibility for market surveillance authorities to be granted full access to the source code of high-risk systems when ‘necessary’ to assess their conformity with the AI rulebook.

For Washington, what counts as ‘necessary’ needs to be better defined, a list of transparent criteria should be applied to avoid subjective and inconsistent decisions across the EU, and companies should be able to appeal the decision.

France pitches changes to the supervisory board, market surveillance in AI regulation

Yet another compromise text on the AI Act has been circulated amongst the diplomats of the EU Council by the French Presidency ahead of a working party meeting on Tuesday (10 May).

[Edited by Nathalie Weatherald]
