Should we be afraid of artificial intelligence?

Every month, in partnership with Display Europe, we take a close look at the media coverage of our continent from the angle of freedoms and democracy. In this first press review, our focus is artificial intelligence and its impact on democracy.

Published on 18 October 2023 at 16:24

Heralded as an imminent technological revolution, artificial intelligence (AI) is being applied in ever more areas of everyday life, provoking both alarm and enthusiasm. Supporters are convinced that AI will help solve many of the problems facing humanity (and create a few new billionaires in the process). Dissenters point to the as-yet-unknown risks posed by machines capable of thinking and acting infinitely faster than humans.

However, as Nello Cristianini, Professor of Artificial Intelligence at the University of Bath (UK), points out in The Conversation, "none of the above scenarios" – whether imagined by experts or industry entrepreneurs – "seem to map out a specific path to human extinction. This means that we are left with a generic sense of alarm, but with no possible action to take". This position is shared by Christopher Wylie, the whistleblower behind the Cambridge Analytica scandal, in the article we publish this week.

Ethics researcher Mhairi Aitken of the Alan Turing Institute takes a similar line in an article (for subscribers) in New Scientist. She believes that these apocalyptic warnings "are frightening because they are making a decisive difference to the debate on the consequences of artificial intelligence". Deeply rooted in the collective imagination, this thinking "has now crept into the political and regulatory spheres". That is worrying, she says, "because the evidence to support these alarmist theories is practically non-existent and does not stand up to scrutiny". In Aitken's view, the aim of these warnings is "to deflect demands for transparency and erase the debate on the responsibilities of developers".

And where does Europe stand in all this? For once, the European Union has been quick to address the issue, drawing up a regulation – the AI Act. It is due to come into force in 2024 and "provides for classes of risk for which the certification procedures to be carried out by the producer are more stringent as the risk increases", explains Francesca Lagioia, a researcher at the law and engineering departments of the European Commission, in an interview with Internazionale's Annamaria Testa. "Risk classes must guarantee the reliability and security levels of a system by means of prior checks and compliance and certification procedures, i.e. before these technologies are marketed and used and before any harm occurs". She warns of the main limitation of this system: "producers will be able to self-assess compliance with the standards of high-risk systems".

Also in Internazionale, Francesca Spinelli interviews Caterina Rodelli, an analyst at Access Now, a digital liberties organisation, on the shortcomings of the AI Act. Rodelli points out that the appeal mechanisms for high-risk systems do not allow public-interest organisations to lodge an appeal on an individual's behalf, since "the authorities fear that they will be overwhelmed by legal actions brought by NGOs". The current text, she adds, "also excludes from the high-risk category migration-forecasting systems, which are very popular with governments determined to block the arrival of asylum seekers and 'irregular migrants', and also with reception organisations". For their part, some sixty human-rights organisations have published an open letter on the Liberties platform, addressed to European lawmakers, calling for the AI Act to "require the EU to adopt robust safeguards to protect the very foundation our Union stands on. The misuse of AI systems, including opaque and unaccountable deployment of AI systems by public authorities, poses a serious threat to the rule of law and democracy".


On the same subject

Generative artificial intelligence is slowly entering children’s lives

Nathalie Koubayová | AlgorithmWatch | 25 September | EN

Amazon is developing a feature for its Alexa assistant on Echo devices to create bedtime stories for children using its in-house large language model (LLM). The system generates custom stories based on children's input and features such as character recognition via the device's camera. The initiative aims to compete with voice assistants from other tech companies such as Google and Apple. However, Amazon's move into child-friendly AI has run into privacy concerns, given the company's earlier $25-million settlement for illegally collecting children's data without parental consent. Legislation is currently being prepared to regulate AI technology aimed at children, with stricter rules in the European Union and some US states.

AI is a threat to internet freedom

Leonhard Pitz | Netzpolitik | 5 October | DE

The report Freedom on the Net 2023 by the NGO Freedom House reveals that AI "is the next threat to internet freedom". The technology is being used in many countries to amplify disinformation and to make censorship more sophisticated. The use of AI by governments, combined with self-moderation by platforms, is thus leading to a decline in internet freedom. The report highlights the need for regulation based on human rights, transparency and independent oversight.


Also worth reading

Thierry Breton in the arena of Elon Musk

Markus Reuter | Netzpolitik | 11 October | DE

The EU’s internal-market commissioner Thierry Breton has sent a letter to the mercurial boss of social network X (formerly Twitter) from his account on Mastodon, a rival (and open) network. In it Breton points to misinformation spread on X after the Hamas attacks on Israel, and reminds Elon Musk that the Digital Services Act requires X to remove illegal content within 24 hours. While the purpose of the letter makes sense, notes Markus Reuter, "what we have here is show politics over EU law, a public exchange between two men in the emotional boxing ring of social media".

The EU's "set menu" membership model is failing. It's time for an "à la carte" approach

Alberto Alemanno | The Guardian | 10 October | EN

The potential EU membership of new states, notably Ukraine, "offers an unmissable opportunity to make the union strategically independent in a threatening new world order and capable of leading on the climate emergency", says this professor of European law at HEC (and member of the advisory board of Voxeurop), who calls for a review of EU governance. Two initiatives, one by the European Parliament and the other by a Franco-German group, to reform EU structures – including a possible multi-speed setup – could foster integration and help the EU to meet global challenges.

In partnership with Display Europe, cofunded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the Directorate‑General for Communications Networks, Content and Technology. Neither the European Union nor the granting authority can be held responsible for them.
