
[INTERVIEW] The EU Artificial Intelligence Act: 4 questions to Joanna Bryson

By Tamian Derivry

The Council of the EU is close to finalising its position on the Artificial Intelligence Act, as the Czech Presidency has just shared the final version of the AI Act with the other EU member states. Joanna Bryson, Professor of Ethics and Technology at the Hertie School, agreed to answer the Chair’s questions and share some of her thoughts on this new regulation.

1. What is the AI Act, how does it stand in relation to other EU regulations and what are its main goals?

The EU has been regulating the digital economy for a while now. They wanted to make it more successful, but they also didn’t want to do that at the cost of their sovereignty or the safety of their citizens. The cornerstone of this regulatory strategy was of course the GDPR (General Data Protection Regulation), and to some extent it still is. Then there is the Digital Services Act, and in fact many of the things that I think about when I think of AI, namely digital services that do something, are covered by the DSA. And then there is the Digital Markets Act, which is about market competition. But what was left over for the AI Act? The EU wrote this big white paper, lots of ideas got pumped into it, and it wasn’t clear where it was going to go. It looked like it might be about liability. At the end of the day, my personal assessment is that the AI Act has turned out to be about what kinds of processes we think shouldn’t be done at all, and, for all the other kinds of processes, what the correct amount of oversight is. AI is basically the new paper, and the reason we use it like paper is because it’s a robot that can do something: you write down your intentions and then they are executed. I think the AI Act is really about how we keep human control when we have computational means of achieving things without direct, real-time human oversight, but also about when exactly we really need human control – what is not allowable in an automated way.

In my mind, it’s really not that big of a deal. People are fighting so much about it and trying to shove all kinds of things into it. In effect, it only comes down to two things: what are the kinds of things that you can’t do at all and what are the kinds of things that you basically don’t have to worry about, and then how do we handle the ones in between. The way you handle the ones in between is actually pretty lightweight. It’s just about making sure that everybody knows how that AI was written, that you have practiced due diligence and that you have done the best we know how to do to ensure that bad things are not going to happen. What I have been very enthusiastic about is that all the parts of the white paper about liability have come out in a revised version of the Product Liability Directive, which clarifies that software is a product and is therefore covered by standard liability rules. In a way, I think that the real action is happening in the new Liability Directive and the Digital Services Act.

With the GDPR, people from Google and Microsoft were coming up to me and saying “you know, when that thing comes in, we are going to pull out of Europe”. Nobody is going to walk away from 450 million rich people; that was just not going to happen. But the strange thing was that six months later, people came up to me and were saying “we are actually making more money in Europe now”. Of course, it’s a single market of 27 countries. I think it’s going to be the same when the AI Act finally goes through. People are going to find out that it’s really no big deal, just as the GDPR is not that big of a deal. The AI Act will help make sure that we can easily prosecute someone who doesn’t follow good practice and creates a corrupted product that causes harm. It will also show the world that AI is just software, and this is how you can handle it.

2. A lot of the discussion on the AI Act has focused on its scope and the very definition of AI. Earlier this year, you published an article in Wired in which you advocated for a broad definition of AI that would encompass more systems. What do you think are the main issues at stake here?

I define intelligence as any sort of conversion of context into action. It’s a form of computation, a capacity to derive your next action from the moment. In that case, artificial intelligence is just that subset of intelligence that is built intentionally. I think that this is incredibly broad, and it goes back to debates that have been going on for a century now. Is a thermostat intelligent? I think the correct answer is: “yes, but not very”. In most places, thermostats would not be considered high-risk (in hospitals maybe), but otherwise you don’t have to do anything. The only obligation you have if it’s not high-risk is that you have to make sure that people don’t mistake it for a person. Nobody is going to mistake a thermostat for a person. But what is the harm of defining it that way? I wouldn’t say that defining it broadly is the same as saying we don’t have a definition.

It seems that the Wired article I wrote was really effective. I’m in the Global Partnership on Artificial Intelligence and they definitely backed me a lot on what the AI Act looked like in the first place. But some people didn’t buy my perspective, and then the first modifications from the Council presidency came in, which constricted the definition of AI. But all we are saying is that if something is being automated, then we want to have a little bit of information and make sure that due diligence has been followed. Why would you not just apply this to everything, or at least to all the things that could do some damage? The EC thinks it’s going to apply to around 10% of AI products, maybe 20%. I think what will happen is that all software development is going to involve some of that due diligence practice, proportionately. So, lawyers will suggest to companies that they follow some of these practices even if they are not doing something the AI Act presently classifies as high-risk, just as protection against any other litigation. I think the problem is again the ignorance of American tech companies, which are trying to say “don’t do this, you are going to give AI to China, don’t overregulate this”. It’s not overregulated. This is a totally sensible regulation, and the American government might be a little more secure if they regulated this well too.

3. What are your thoughts on the AI Act’s approach to obligations regarding high-risk systems?

As I said, I think that all the EU is doing is saying that if a system could do some damage, then you had better be able to show that you followed due diligence when you built it, just like any other product. Every single time a driverless car has killed someone, we have all known within two days what went wrong, because the automotive industry is decently regulated. I was at a conference with Tim O’Reilly some years back in New York City, and a lot of the speakers were saying things about how hard it is to know what is going on inside AI systems, how machine learning is necessarily opaque, and so on. But when you went down to the basement of the building where the conference was held, you saw all these rooms that various companies had booked to sell systems that do exactly what the people on the main stage were saying is impossible. They are selling their systems to healthcare and to automotive, industries which demand that you show that you have practiced due diligence and that you have made your best effort to know exactly how your system works. Even if you provide no other transparency about your system, you could still just run enough trials to say whether it’s a safe product or not, and document those.

All we are asking is to see the books, the documentation of how you are developing these systems. We want to know who did the testing, whether you cyber-secured your data, and so on. Due diligence is all about keeping those records and making them available. People were saying: “AI is so complicated, it moves too fast, legislators are never going to be able to keep up with it”, but it turns out lots of things are complicated. Governments are more complicated than AI. The way the law is set up is to say that you have to follow best practice for your industry, especially if you get to a certain level. If you are a new company, you can just follow standard practice, but if you are the leading company, you have to follow best practice. Due diligence is great because nobody has to sit down and write somewhere what best practice is, and best practice can keep improving, so you are allowing society to keep increasing its abilities, to keep improving itself.

I was just at a meeting for the launch of the Global Initiative for Digital Empowerment, and they were talking about the notion of “duty of care”. If you’re a doctor, or even if you’re a baker, you have access to information about your customers that is deeply private. There are standards about what you do with that information. These guys are just saying that this should be true of everyone: anyone that holds any data should have to follow a duty of care. This is great because it means we can look across sectors to see if we have an agreement about what duty of care is: is it intuitive, does it match up with other sectors, or is this particular sector evil, and so on. We have a whole set of laws for dealing with that. In fact, my very first AI ethics paper was called “Just Another Artifact”. It was trying to remind people of the non-exceptionalism of AI. There are some exceptional things, and as I just mentioned, that is why the AI Act is there: to handle the fact that a process could go forward without any person there to second-guess it. A lot of people don’t even second-guess things they do observe, but at least if you have a bunch of people involved, some proportion might notice that something is going wrong. So, this is one of the small differences that the AI Act is compensating for.

4. There has also been a lot of discussion about facial recognition and more broadly about systems for which due diligence may not be enough. What do you think of the decision to ban certain AI systems?

There is facial recognition that is fine, that nobody has too many complaints about, and there is facial recognition that is banned under the current version of the AI Act, although the way it is written is not entirely coherent. Lots of people open their phones using facial recognition now. You gave your picture to the phone to say this is who you are. There are still going to be inequities, because those systems will work a little better for people who look a little bit more like whatever the average is. But by and large this is all by consent. The difference is when you are looking at every single person across society, scanning them all the time and therefore able to create databases about where they all are at any given moment. In fact, you can tell a lot about people just from vision: what they do, when they are feeling insecure and many other things. Then you have an even bigger problem of inequity at that scale, because different people are more likely to be hassled for looking like a criminal or something. And it opens the door to so many other forms of misuse.

Some people push back and say: “what if you’re blind and your phone could tell you that your friend is across the street?”, that would be more equalizing. But if I’m blind and I want my phone to be able to tell me when my friends are passing in the street, I would just ask my friends for their picture, the same way people do when they consent to their phone opening for their face. If you knew where everybody was everywhere and you also knew who someone’s friends were, you could say: “by the way, there is one of your friends”. But if you only knew who somebody’s friends were and there was a picture of each of the friends, then you could still say “there is one of your friends”. And that would be consensual; they would have said: “yes, I know we are friends now, I have your picture on Facebook”. Or you could get your friend to send you a picture, and you could say, I have a phone that knows all my Facebook contacts and recognizes them. It isn’t necessary that something is surveilling the identity of everyone on the street at all times.

The problem with the way the AI Act is currently phrased is that there are a couple of exceptions. If there has been a kidnapping, or there has been a terrorist attack, then we will surveil. I think that this is just broken. It could appear to be saying: “now we put these special things in place, now we are doing that with passports, we are looking everywhere for those people”. They said they would only turn this on when there is a kidnapping. But unfortunately, kidnappings are happening every day. I think what they really meant to say was that we might surveil every space on the street to see if a specific missing child or a specific terrorist is there. That is not the same as saying that when there is a missing child or when there is a terrorist, then we are going to recognize every person on the street. I think it’s really strange that people aren’t being clear about that distinction.

Joanna Bryson is a Professor of Ethics and Technology at the Hertie School in Berlin. Her research focuses on AI ethics and governance. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD). Since July 2020, she has been one of nine experts nominated by Germany to the Global Partnership on Artificial Intelligence.