Why AI is amoral, and artefacts can’t consent


Notes for my testimony to a joint meeting of two UK All Party Parliamentary Groups: the one on Future Generations, and the one on Artificial Intelligence, on the topic "Expanding the Moral Circle: The Rights of Robots." This testimony was given on 13 December 2021, so just a few weeks after UNESCO reached consensus that AI should never be given even legal personality (not everyone present at the APPGs' session had got the memo yet, though). The video of the full meeting was meant to be made available, but as of March 2022 it doesn't seem to have been.


In the meantime, you can watch me testify again to the APPG on AI at their subsequent (January 2022) event, AI and Intellectual Property Rights: IPR protection for AI-created work, where the speakers were in apparent consensus that there was no need for legal personality for AI, since the firm owning the AI would in any case have legal personality. See also my previous (2018) testimony to the APPG FG, where I was invited to speak on AI more generally alongside Nick Bostrom and Ed Felten.


Some of the points raised here are from my forthcoming book, but see below for links to existing publications on these topics.


Thank you for giving me the opportunity to participate in this debate. To be frank, ordinarily I decline any event that labels itself as concerning ‘robot rights’. I find the phrase itself abhorrent, and do not want either to legitimise it or even to draw attention to it. But given that I am a British citizen, and was a long-time British resident for most of the period 1991 through 31 January 2020, I of course comply with any invitation by my country’s parliament to speak on a topic in an area of my expertise.


The purpose of this committee is to discuss the moral circle, so let me start briefly by describing how I will use terminology relating to morality. My primary focus will then be on whether it is possible to construct a just society in which artefacts can be considered to offer or withhold consent. I will finish with a brief discussion of my co-panelists' considerations concerning rights.


These are definitions chosen from many available, for the purpose of communicating effectively with you right now in this particular context. I don’t want to argue about definitions, rather only with them; I am not claiming that these definitions are the only available or even the most generally useful.

  • Intelligence: capacity to act appropriately in the given moment – to compute action from context (Romanes 1882).

    • Includes plants, thermostats.

  • Agents: any vector of change for a context.

    • Includes chemical agents.

  • Moral agents: those agents a society considers responsible for their actions.

  • Moral patients: everything considered the responsibility of a society’s agents.

  • Ethics: every behaviour contributing to the regulation (perpetuation) of a society.

  • Morality: the subset of ethics which is conscious – subject to discussion and agreement.

  • Artefacts: everything for which the fundamental design either results from conscious decision or follows a culturally determined pattern.

    • Includes stone axes, governments, and robots for whose development machine learning was used; excludes (cross)breeding and GMOs.

  • Consenting: a moral agent expressing agreement with the planned action of one or more other conscious agents.


Note that it is presently agreed by most moral philosophers that slaves are by definition incapable of consenting to sex with their masters, so all children of slaveowners by their slaves resulted from acts of rape. This wasn’t how we thought about the agency of slaves, who are of course human, when I was an undergraduate. At that time we considered it impossible that humans had ever truly and legitimately been owned. But this present interpretation is the language and reasoning that I will be using here. And to be honest, I now think it is a better description of power dynamics. For similar reasons, most universities ban dating between faculty and students – whatever their phenomenological experience of mutual attraction – and I would say rightly so.


So I think you should be able to see from these definitions that the crux of my argument is not going to rest on some attribute of AI or robots – not consciousness, not opacity – but rather on whether attributing moral agency and the capacity for consent to something designed can be a sensible decision, or a coherent part of a stable system of justice.

  • The existence and design of artefacts are determined by conscious decisions, for which we ordinarily attribute responsibility.

  • Human morality rests on pre-moral ethical foundations including intuitions, phenomenology, and even evolved requirements for well-being such as concern for social inclusion and social status, and knowledge of our own vulnerabilities, including to old age and death. 

  • Nevertheless, morality is itself conscious agreement, and so itself an artefact that can be altered, for example through legislation.

  • Constructing laws and administering justice are processes through which a society regulates itself – that is, attempts to ensure its security and sustainability.


I hope it is clear from this that what we need here, and what is really up for debate, is a normative recommendation, not a fact of nature discoverable by science. So here is mine:

  • Excluding the occasional work of art, the vast majority of AI systems are and will continue to be commercial products, bought, sold, and operated.

  • Attributing moral or legal agency to these would facilitate constructing perfect “shell companies.” 

  • Recommendation: Do not allow powerful agencies or individuals to cap their legal or tax liabilities when they 

    • fully automate a part of their business process, or

    • obscure accountability for development and operation of their products. 

  • Note: software systems are manufactured products, even if producing software is a service, and software can be used to provide services. The engineered system itself is still a product.


What we want to do is to encourage every agency creating AI, including those creating robots, to create products that are as transparent and beneficial to the rule of law as possible.


I’m going to conclude by addressing two objections I know [guessed wrongly] that my co-panelists will [might] be making. First, given that we identify with AI, we must offer it the same moral patiency we offer each other, or we will reduce our capacity to do good towards others with whom we identify. But this argument overlooks that AI is an artefact, subject to design. In fact, we do not identify at all with the vast majority of intelligent artefacts, including robots; there is no call for rights for Web search, grammar checkers, or factory robots. If there proves to be some intelligent technology that we cannot help but identify with, we have two choices: ban it (as many people have done when they excluded voice-activated 'digital assistants' (e.g. Alexa) from their households), or offer it the same protections as any work of art.


Second, Jacob will be arguing [has said in the past] that the perspective I’ve offered here is overly culturally specific. In fact, I developed it over decades of working on AI policy with colleagues from all over the world, including from developing and East Asian countries. May I point out that if everything has a soul, then a soul makes a robot no more like a human than like a stone.


I hope you can see that my recommendation is structured on a logic generalisable to any secure, sustainable society. Thank you again for this opportunity to serve.


[Image: Zoom screenshot of myself, the other speakers, and various others with cameras on.]
The two other panelists were Professor David Gunkel and Jacob Turner, with an intro from Matt Warman MP, and chairing by Lord Tim Clement-Jones.





Related paper

Of, for, and by the people: the legal lacuna of synthetic persons, a formal academic paper by myself, Mihailis Diamantis (Iowa), and Tom Grant (Cambridge), who are actually law experts, particularly on legal personality. It also discusses the problems of the overextension of legal personality for corporations, failures to "pierce the veil", and so forth. See further Diamantis' other work on this overextension, which is a recent source of corruption in US law.

Related global agreement: Conveniently, 193 nations signed the UNESCO Ethics of AI recommendation on 24 November 2021; it states that AI systems should never be given legal personality.

Notes not used

I decided not to even engage on rights, but I include my notes here in case they are useful to someone else.


The 18th-century US version of rights is negative; contemporary rights, even fundamental ones, are positive, and in tension with each other. So it is possible that we have two things, both of which we are obliged to defend, and we still have to make moral decisions between them.


I had been taught that rights are something of your own that you are obliged to defend. In 2007 I was a coauthor on a student paper modelling the impact of social media on terrorism, and we took the case of the UK animal rights movement for our data. And indeed, at that time, whether a group talked about welfare or rights was a good indicator of whether it was activist or terrorist.



When we think about animal welfare, no one can deny that being terrified and shredded at the end of your life is not a good way to die, though sadly it sometimes befalls even humans. But I think most of us would prefer to have an otherwise good life that ends that way than to have the life of a factory-farmed pig – intelligent, naturally social animals often driven psychotic and to self-harm by an entire life of confinement. Of course, reasoning by empathy is highly morally precarious – it makes us more sympathetic to those with a shared background, whom we find easier to understand. Nevertheless, the mechanism by which other social mammals experience the social drive is sufficiently similar to our own that, when we are trying to understand what it is like to be a dog left alone at home, we might do well to read accounts of humans held hostage, with nothing to do and in isolation for the majority of their days.


So while there is no question that the activists in this country have massively improved the lives of, for example, laboratory animals, which are now socially housed with adequate mental stimulation, when I see “rights” activists who care only about foxes and lab animals, and not about factory farms or isolated pets, I think they have an agenda concerning class, or perhaps simply power and prominence, not real concern about wellbeing.


[I feel like people, particularly governments, do not adequately realise that we're extending the capacity of people to create complexity generally, not only in ML/AI, and that part of the rule of law is discouraging this and motivating transparency and accountability as necessary aspects of legitimate systems.]

