Humane AI requires a regulatory regime

Artificial intelligence (AI) is set to upend nearly every industry. It’s a technology that will deliver astronomical gains in productivity, dramatic cost reductions, and tremendous advances in research and development. With AI projected to add more than $15.7 trillion to global GDP by 2030, it can be easy to assume that the technology is an unalloyed good. That would be a dangerous mistake.

AI, like any technology, can have detrimental personal, societal, and economic effects. Common concerns include the tools it gives criminals to compromise the cyber security of individuals and organisations, and the swathe of privacy questions raised by its predictive abilities. Perhaps the most sinister possibility, however, is that AI will be used to erode the accountability structures within powerful organisations while perpetuating existing unjust biases along protected characteristics such as age, sex, or race.

Markets tend towards ends that do not necessarily prioritise the ethical and moral principles of individuals. While there are fantastic initiatives in the development of standards for AI, such as IEEE’s Ethically Aligned Design or the ISO standards for AI, we can’t rely on self-regulation to make this technology work. Instead, we must protect our ethics by creating fit-for-purpose regulatory frameworks that compel best practice and mitigate malicious activity in the sector.

The social challenges of AI

The two challenges that, as mentioned, I believe make the most compelling case for a regulatory regime are bias and accountability.

It is well established that AI can perpetuate undesirable social biases if it is improperly designed, developed, or deployed, because data inherently carries the biases that societies have accumulated throughout history. If a model is trained on biased data that overrepresents or underrepresents certain traits, outcomes, or groups, it can end up making socially unjust decisions on the basis of that bias. For example, many predictive policing systems deployed across America have been trained on data encoding now-discredited associations between crime and race, which biases their decision-making against ethnic minorities.
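
To make the mechanism concrete, here is a minimal sketch in Python using scikit-learn. The data is entirely synthetic, and the “group” and “skill” variables and every number in it are assumptions for demonstration, not real policing or lending data:

```python
# A sketch of sampling bias becoming model bias, on synthetic data:
# historical decisions penalised group 1 at the same skill level, so a
# model trained on those recorded labels learns to penalise group 1 too.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected characteristic (0 or 1)
skill = rng.normal(0.0, 1.0, n)      # the legitimate signal

# Recorded outcomes encode historical prejudice against group 1.
label = ((skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.0).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), label)

# Two identical applicants who differ only in group membership now
# receive different approval probabilities from the trained model.
print(model.predict_proba([[0, 1.0], [1, 1.0]])[:, 1])
```

Nothing in the model’s code mentions race, sex, or age; the discrimination arrives entirely through the training labels, which is precisely why it so easily goes unnoticed.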

When it comes to accountability, there is ambiguity about the responsibility of the individuals and organisations involved throughout the design, development, and operation of AI systems. Consider the case of a person who is knocked over by a self-driving car – who do we hold responsible? Traditionally, vehicle manufacturers were liable for damage caused by the design of their vehicles, but AI opens up a complex chain of accountability that enables buck-passing: the fault could be argued to lie with the manufacturer, with the software developer contracted to build the driving AI, with the compliance officer who signed the AI off, or with the industry group that approved the specifications for self-driving AI.

This network of stakeholders, each able to claim only a degree of culpability, means that those seeking restitution or justice for wrongs committed by an AI could face a bureaucratic nightmare. Considering that AI models can make major decisions that affect a person’s future – from whether a mortgage is approved to whether a person falls under suspicion by law enforcement – this inability to hold an organisation to account can leave individuals powerless to push back against institutional incompetence or malevolence.

Why regulation is the solution to these challenges

Thankfully, experts have made inroads in developing tools and processes to address both of the social problems above.

To mitigate bias, those working in AI know how to be more rigorous in selecting training data. Cyber security offers a useful analogy: it is acknowledged that while no magical silver bullet can make a system unhackable, it is possible to introduce processes and touchpoints that enforce best practices and close security loopholes.

Similarly, in the context of algorithmic bias, it is possible to introduce relevant processes and best practices that ensure undesired biases are mitigated. By leveraging the knowledge of domain experts and AI explainability techniques, organisations can mitigate risks and introduce interpretability into critical processes. This minimises the systematic biases carried over from training data, and thus prevents models from making discriminatory decisions. When it comes to restoring accountability, we can likewise introduce touchpoints throughout the AI lifecycle that place humans into the decision-making process, ensuring there is an accountability structure from end to end.
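
As an illustration of what such a touchpoint could look like in practice, the sketch below uses scikit-learn’s permutation importance to flag a model for human review when a protected attribute turns out to drive its predictions. The data, feature names, and threshold are all assumptions for demonstration, not a prescribed standard:

```python
# Hypothetical pre-deployment audit touchpoint: measure how much each
# feature drives the model's predictions and escalate to a human reviewer
# if a protected attribute turns out to be predictive. The 0.01 threshold
# and feature names are illustrative assumptions, not a recommended bar.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
X = np.column_stack([rng.integers(0, 2, n),     # "group": protected attribute
                     rng.normal(0.0, 1.0, n)])  # "skill": legitimate signal
y = ((X[:, 1] - 0.8 * X[:, 0] + rng.normal(0.0, 0.5, n)) > 0.0).astype(int)
model = LogisticRegression().fit(X, y)

names, protected = ["group", "skill"], {"group"}
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
flagged = [(name, score)
           for name, score in zip(names, result.importances_mean)
           if name in protected and score > 0.01]
if flagged:
    print("Block deployment and escalate to human review:", flagged)
```

The point is not the specific check but where it sits: a named person receives the flag and signs off on the decision, which is exactly the accountability structure that unsupervised automation erodes.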

However, we cannot expect private industry to ensure the above practices are widely implemented of its own accord. Instead, these best practices to combat bias, restore accountability, and improve explainability have to be incentivised from outside. This is why AI is calling out for a regulatory environment. To accomplish this, policymakers and developers will need to collaborate across their disciplines to balance technical, social, and legal demands.

An additional effect of regulatory compliance is that it will bring more confidence and stability to the field. If implemented properly, a regulatory regime that compels the adoption of best practices for AI deployment can create new jobs to carry out those practices, attract more investment into the industry, and spur yet more innovation, allowing AI to be humane as well as transformative.

Written by Alejandro Saucedo, engineering director at Seldon
