
Five Ways Companies Can Adopt Ethical AI


By Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning, World Economic Forum

In 2014, Stephen Hawking said that AI would be humankind’s best or last invention. Six years later, as we welcome 2020, companies are looking at how to use Artificial Intelligence (AI) in their business to stay competitive. The question they are facing is how to evaluate whether the AI products they use will do more harm than good.

Many public and private leaders worldwide are thinking about how to address these questions around safety, privacy, accountability, transparency and bias in algorithms. For example, the incoming EU Commissioner has said she wishes to see legislation to ensure the production of ethical AI in Europe within the next few months.

One of the major risks of AI is that flawed data and poorly constructed algorithms can produce poor results, which mean bad outcomes for the businesses that use them: internally, because the business does not get the insights it needs, and externally, with customers, if users feel that decisions exclude or marginalise them. For example, an algorithm could make a biased decision against awarding a loan or in hiring.

So, how can your company get ahead and avoid the pitfalls? Here are five lessons for the ethical use of AI.

Employ a Chief AI Ethics Officer: Chief AI Ethics Officers can guide companies in their use of AI, particularly some of the more controversial uses such as facial recognition and the exploitation of personal data. In 2017, in IEEE Spectrum, I suggested that companies should employ a Chief AI Ethics Officer. In 2014, an AI start-up recruited me to this position, which I then had to define from scratch. In addition to alerting the Board to any concerns, I organised a Panel of Advisors and worked with product teams to ensure ethical frameworks were embedded from the inception of each product. Similarly, Salesforce has appointed a Chief Technology Ethics Officer and an AI ethicist to work with the AI production team.

Educate your leaders: Both your executives and your boards need to be educated about the benefits and challenges of using AI. To help companies do this, the World Economic Forum will release a toolkit for board directors at our Annual Meeting in Davos-Klosters, Switzerland, 21-24 January. At the heart of the Toolkit is an ethics module that enables directors to ask good questions of the C-suite. The creation of ethics advisory boards could also help companies navigate how to produce or sell AI.

Watch government regulation: Government regulations can affect how a company approaches its AI product offerings. For example, the Forum worked with the UK government to co-create ethical guidelines for government procurement of AI. These guidelines have also been piloted in the UAE and Bahrain, and the objective is to scale their use globally. The key impacts are threefold: they allow a government to set out what it expects of ethical AI development in its jurisdiction without going through a lengthy regulatory process; they give companies a baseline understanding of the government's tolerance levels, enabling them to spend R&D money with confidence; and they increase the number of companies thinking about the ethical design, development and use of AI tools.

Identify risks: Companies should be aware of where particular AI risks arise, for example in the use of AI in human resources. Once they have identified those risks, it is useful to look at the developing standards and certifications in this area. To address this, the Forum is creating a toolkit for human resources departments to use when considering deploying AI solutions.

Look ahead: Companies should start thinking now about how they will retrain and educate employees as AI is introduced to work alongside them. They must also consider what new markets might be opened by the ethical design, development and use of AI, and plan for how they will monitor changes in algorithms or design to ensure approaches remain ethical.

It is my hope that we can avoid a techlash, which would cause companies to miss out on the benefits to be derived from AI. We must consider carefully and proactively the governance mechanisms needed to ensure ethical considerations in the deployment of AI tools. Building trust in technological solutions and tools must be our principal goal, so that humans and the planet can benefit from their use.

This article is related to the World Economic Forum’s Annual Meeting in Davos-Klosters, Switzerland, 21-24 January 2020.
