On 7 November 2019, the Federal Government’s Minister for Industry, Innovation, and Science, Karen Andrews, announced Australia’s official AI ethics framework.

This is a voluntary set of principles for businesses and organisations to apply when designing, developing, integrating, or otherwise using artificial intelligence.

The framework comprises eight principles, and we reproduce below the summary from the Department of Industry, Innovation, and Science’s website:

Principles at a glance

  • Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
  • Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
  • Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

The framework raises more questions than it answers. We deal with some of the immediately obvious issues below.

Transparency

In respect of Transparency and Accountability, the Department’s website notes, “Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI systems outcomes. This includes information that helps people understand outcomes, like key factors used in decision making.” Critically for lawyers, this includes, “for regulators in the context of investigations” and “for those in the legal process, to inform evidence and decision-making.”

The Law Council of Australia’s submission to the Department (see https://www.lawcouncil.asn.au/docs/b3ebc52d-afa6-e911-93fe-005056be13b5/3639%20-%20AI%20ethics.pdf), following a call for public comment earlier this year, expressed concerns about the administrative law implications of AI: an AI involved in a government decision should be able to explain its decision-making process.

But as noted in an article published in the MIT Technology Review (https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/) in 2017,

“The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did… You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.”
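
To make concrete why that reasoning is so hard to inspect, the following sketch (in Python, using only NumPy; the XOR task, layer sizes and learning rate are invented purely for illustration) trains a tiny feed-forward network by back-propagation. Even in this toy case, the network’s “reasoning” is nothing more than a handful of learned numbers; a production deep-learning system multiplies that across millions of weights and many layers.

    # Minimal illustration of a feed-forward network trained by back-propagation.
    # The task (XOR) and layer sizes are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy inputs and desired outputs (XOR of two bits).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer: 2 inputs -> 4 hidden neurons -> 1 output.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    for step in range(20_000):
        # Forward pass: each neuron combines its inputs and emits a new signal,
        # which feeds the next layer, as the quoted passage describes.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Back-propagation: the output error is pushed back through the layers,
        # nudging every weight slightly so the network learns the desired output.
        d_output = (y - output) * output * (1 - output)
        d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
        W2 += 0.5 * hidden.T @ d_output
        b2 += 0.5 * d_output.sum(axis=0)
        W1 += 0.5 * X.T @ d_hidden
        b1 += 0.5 * d_hidden.sum(axis=0)

    # The trained "reasoning" is nothing but these arrays of numbers; inspecting
    # them does not explain, in human terms, why a given prediction was made.
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 3))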

Deep learning systems are already extensively in use in banking, medicine, and defence. The same MIT Technology Review article notes that an AI called “Deep Patient… was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver” and early onset schizophrenia.

No one knows how it does this. That bodes poorly for the Transparency and Accountability principles.

The Australian Academy of Health and Medical Sciences in its submission https://aahms.org/wp-content/uploads/2019/06/AAHMS_Consultation-Response_Artificial-Intelligence-Australias-Ethics-Framework.pdf to the Department suggested, “It would also have to be considered whether information on the data sets and sample size used for machine learning should be made available to consumers, particularly for rare conditions with less available data and lower sample sizes.”

However, this might have privacy implications.

Privacy

The Department notes that, “This principle [relating to privacy protection and security] aims to ensure respect for privacy and data protection when using AI systems. This includes ensuring proper data governance, and management, for all data used and generated by the AI system throughout its lifecycle. For example, maintaining privacy through appropriate data anonymisation where used by AI systems.”
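
To illustrate what “appropriate data anonymisation” might look like in practice, the short Python sketch below de-identifies a record before it is handed to an AI system. The field names are invented for this example, and genuine anonymisation is considerably more demanding: hashing an identifier is strictly pseudonymisation, and the re-identification risk posed by the remaining quasi-identifiers would still need to be assessed.

    # A simplified sketch of de-identification before data reaches an AI system.
    # Field names are invented for illustration; real anonymisation must also
    # assess re-identification risk (e.g. k-anonymity), which this does not.
    import hashlib

    def deidentify(record: dict) -> dict:
        return {
            # Replace the direct identifier with a one-way hash (pseudonymisation).
            "patient_ref": hashlib.sha256(record["name"].encode()).hexdigest()[:12],
            # Generalise quasi-identifiers that could single a person out.
            "age_band": f"{(record['age'] // 10) * 10}-{(record['age'] // 10) * 10 + 9}",
            "postcode_prefix": record["postcode"][:2],
            # Retain only the attributes the model actually needs.
            "diagnosis": record["diagnosis"],
        }

    print(deidentify({"name": "Jane Citizen", "age": 47,
                      "postcode": "3051", "diagnosis": "asthma"}))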

As noted in the submission by the Office of the Australian Information Commissioner https://www.oaic.gov.au/engage-with-us/submissions/artificial-intelligence-australias-ethics-framework-submission-to-the-department-of-industry-innovation-and-science-and-data-61/, “AI amplifies existing challenges to individuals’ privacy. It is important that personal information used to train AI systems is accurate, collected and handled in accordance with legal requirements and aligns with community expectations. There is a need to ensure that organisations using a range of technologies are accountable for handling personal information appropriately. This may be achieved through increased transparency, building in privacy by design and putting in place an appropriate system of assurance. Such assurance could include third party audit or certification…”

Retention of de-identified aggregated data by an AI’s owner is a common enough term in AI development and deployment contracts: the AI needs to retain a certain level of data, post-contract, in order to keep learning. Australia, however, has no database rights (there is no statute governing ownership of databases), and the Australian Privacy Act does not mirror what is arguably the highest legal standard of data protection, the European Union’s General Data Protection Regulation, with its extensive protections of personal data.

Third party audit or certification is, with respect, an excellent idea, provided, obviously, that the audit is not conducted by an AI. That is not a churlish comment: an AI might be the only mechanism capable of ensuring compliance by another AI, but that exercise would remove human oversight.

Accountability

The Department notes, “AI systems that have a significant impact on an individual’s rights should be accountable to external review, this includes providing timely, accurate, and complete information for the purposes of independent oversight bodies.”

We think this guideline should include the following express prohibitions:

  1. an AI should not autonomously be able to make itself more intelligent. The importance of this should be paramount: an AI capable of increasing its intelligence without limit introduces too many unknowns about how it might act, including the risk that it becomes capable of disabling any in-built moral compass;
  2. an AI should not be able to autonomously make money. The acquisition of wealth would give an AI increased autonomy in human activities, including the ability to engage human agents;
  3. an AI should not be permitted to pass itself off as a human, because of the risk of deceiving humans. A human may be more cautious in their decision making if they know their interaction is with an AI. In 2018 Google showed off an AI assistant that sounded startlingly like a human (see https://www.theverge.com/2018/5/9/17334658/google-ai-phone-call-assistant-duplex-ethical-social-implications). As noted in that article, “Google… hopes a set of social norms will organically evolve that make it clear when the caller is an AI.” Human-assisted virtual assistants, where the AI calls in a human to help when it trips up, make this identification more complicated.

Conclusion

We see the guidelines as a precursor to regulation, and we regard regulation of AI, and especially of AI ethics, as inevitable. The significant benefit arising from the creation of the guidelines is that government and industry are, at least, giving the issue serious thought.