
UK and US marked down on responsible AI

The UK and US have been rated as leaders in government use of artificial intelligence, but the Nordics and Baltics attained the highest scores for responsible AI

Some of the world’s most advanced countries in terms of artificial intelligence (AI) are not prioritising and practising responsible AI, a report from Oxford Insights and the International Development Research Centre (IDRC) has warned.

In 2019, the Government AI readiness index reported that the UK led the world in terms of government use of AI. But the latest update for 2020 found that while the UK is still a leader on government AI, it lags behind many other countries in the responsible use of AI.

Richard Stirling, CEO of Oxford Insights, said AI was transforming banking and the way governments interact with citizens, but “that transformation is not happening in the same way in every country around the world”.

The index puts European countries in a strong position for AI adoption.

According to Walter Pasquarelli, project lead for the AI readiness index at Oxford Insights, Europe has a stable governance structure, which has been a great advantage when developing an AI ecosystem.

“Western European countries dominate the top 20 of this year’s index,” he said. “There has been a rapid increase of AI strategies in Europe. Last year, there were a number of countries that had AI readiness strategies. This year, every country in Europe has a national AI strategy.”

The study found that the UK has the third-highest proportion of AI unicorns, behind the US and China.

While the researchers recognised that startups drive innovation, the report noted that in the US, tech giants such as Google, Amazon and IBM have the scale to commercialise AI.

The researchers found that in Baltic and Nordic countries such as Finland and Estonia, there must be a greater focus on data representativeness and protection, privacy legislation and national ethics frameworks to protect citizens’ rights and prevent unfair and discriminatory outcomes for certain groups in society.


“Our report shows that some of the world’s most AI-advanced countries are not prioritising and practising responsible AI in the way they should be,” said Stirling. “Nations from the US and the UK to Russia, China and Israel need to ensure that as they implement AI technologies they do it in a way that benefits all their citizens.”

The researchers noted that China has ambitions to challenge the US for global AI dominance, and that while the US and the UK are rated as world leaders in government AI readiness, they scored noticeably lower in terms of responsible use of AI. The AI readiness index rated the US and UK first and second respectively in terms of government use of and investment in AI. However, they were 24th and 22nd respectively when assessed for responsible AI.

There are a number of possible factors behind this gap. The researchers reported that the US and the UK each have significant technology sectors, in which a number of companies score poorly on the Transparency International Corporate Political Engagement Index.

“There is, therefore, a risk of regulatory capture, where government policy reflects the interests of tech companies more than those of citizens,” they warned. As an example, they said that both the US and the UK have significant surveillance industries. As Computer Weekly has previously reported, the Met Police has faced criticism for its trial of facial recognition.

“AI is transforming the way in which countries are governed so it will become increasingly important that governments, while capitalising on AI’s potential, also have protocols and regulations in place to ensure implementation is ethical, transparent and inclusive,” said Stirling.

Read more about responsible AI

  • Despite the abundance of decision-making algorithms with social impacts, many companies are not conducting specific audits for bias and discrimination that can help mitigate their potentially negative consequences.
  • Artificial intelligence promises to change the way businesses operate. IT leaders are now taking bias in AI algorithms seriously.

