AI Is Amazing But Complicated, And We Don’t Necessarily Need To Plunge In Headfirst

Forbes Business Development Council

Eric Hutto is President and Chief Operating Officer at Unisys Corporation. https://www.unisys.com/

Artificial intelligence (AI) can help humans address many challenges, but it also creates challenges of its own. We know AI has biases. We know it’s complex. We understand that AI does not always draw fair and ethical conclusions. Yet it’s clear that AI is going to happen anyway.

Physicist Stephen Hawking said AI was a major concern. He commented, “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst.”

So, the question is: Are we spending enough time and effort on the soft side of this technology?

Questions about AI ethics are particularly relevant now as governments and businesses look to technologies such as AI, thermal cameras and scanners to address citizen and workplace safety. Such conversations tend to revolve around AI fairness, intended use, potential misuse, privacy pushback, worker displacement and product accuracy.

Complexity makes it tough to validate fairness.

One touted benefit of AI is that it could eliminate human biases and allow for fairer decisions. But experience has shown that AI efforts do not always deliver on that promise.

In 2014, Amazon began experimenting with AI for hiring. It trained an AI system on an internal database dominated by male software developers, and the system responded by filtering out female candidates. In 2018, Amazon said it would scrap the recruiting tool.

The challenge may be that AI is so complex that developers don’t know how it arrives at conclusions. That could make it harder for people to validate that AI has reached a “fair” or “ethical” decision.

Intended use and potential misuse require serious consideration.

The basic premise is that AI should do good. When does it not do good? When its intended use is altered. Google may have the most visible example of why it's important to define intended use.

In 2018, Google won a major Department of Defense (DoD) contract to analyze video footage from drones. But some Google employees did not want to participate in the business of war, and more than 3,000 of them signed a letter to the company’s CEO urging him to abandon the project. Google leadership responded by deciding not to renew the government contract and by publishing guidelines on the types of AI usage it will and will not support.

More recently, Clearview AI became part of the conversation about potential misuse. The company provides facial recognition technology to law enforcement: it matches captured images against publicly available photos and profiles, reportedly drawn from Facebook, YouTube and other websites. That can surface a lot of information about people, including their current location.

From a law enforcement perspective, the intended use is to catch criminals. That’s a positive intended use — it’s a do-good for humanity. But once you have that capability, how do you control and monitor it? Because if the technology can be used to identify a criminal, a predator could use the solution to identify and locate a woman or a child.

The rise of AI also creates accuracy, privacy and worker displacement concerns.

There are all kinds of great uses for AI. Law enforcement is one of them. Companies could also use AI to analyze the mood of a person entering a store, assess the health of an employee walking into an office, predict a job applicant’s likelihood of success based on a video interview of that individual, or evaluate a heavy equipment operator for signs of fatigue.

Consider the last example. Using cameras and AI to prevent accidents is a worthy goal. But if AI determines that a person appears to be fatigued, on drugs or intoxicated, it had better be extremely accurate, because such decisions can affect people’s jobs and reputations.

There’s also a privacy aspect to this, and that goes beyond just the monitoring. How do you explain to the other workers on the factory floor that a co-worker got sent home today?

The people AI analyzes may feel their privacy is being violated. They may be unhappy about their interaction with the company or organization. If enough people feel that way, it could tarnish the company’s brand with consumers, employees and potential hires.

Then there’s the AI ethics conversation about jobs themselves. As I discussed in a previous article, forecasts suggest that AI will take over a growing share of human work. That means employees heavily impacted by AI may need to learn new skills and possibly find new jobs.

That raises the questions: Who is responsible for helping these workers reskill and find new positions? Is it the employer or the government? And what exactly is that responsibility?

Conducting audits, defining intended use and easing into AI may help.

These are difficult questions to answer. But there are some approaches — and checks and balances — that organizations can implement to help enable ethical AI.

To help ensure accuracy and fairness, organizations may want to consider conducting random-select audits. If you are using AI to decide whether to select a job candidate, for example, you could have a human review one out of every 30 applications.
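As a rough sketch of what a random-select audit could look like in code (the 1-in-30 rate comes from the example above; the function name and record shape are illustrative assumptions, not a Unisys tool):

```python
import random

def select_for_audit(applications, rate=1/30, seed=None):
    """Randomly flag roughly one in every 30 AI decisions for human review.

    `rate` and the shape of each application record are assumptions for
    illustration; real systems would tune the rate to risk and volume.
    """
    rng = random.Random(seed)
    return [app for app in applications if rng.random() < rate]

# Example: queue about 1/30 of screened applications for a human check.
audit_queue = select_for_audit([{"id": i} for i in range(300)])
```

The point of sampling randomly, rather than auditing only flagged cases, is that the human check also catches errors the model is confident about.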

Addressing AI accuracy also involves monitoring for model drift and adjusting as needed. Suzanne Taylor of Unisys provides guidance on that in one of her recent articles.
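Her article goes deeper, but a minimal sketch of one common drift check, comparing the scores a model produces today against the scores it produced at launch, might look like this (the Population Stability Index metric and the roughly 0.2 alert threshold are industry rules of thumb, not anything prescribed by the source):

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two model-score distributions; a PSI above ~0.2 is a
    common rule-of-thumb signal that the model may have drifted."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor empty buckets at a tiny probability to avoid log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))
```

Run periodically against a stored baseline, a check like this can trigger retraining or human review before accuracy quietly degrades.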

It’s also important for organizations to define the intended use of AI applications and technologies. Businesses should also consider what they can do to prevent misuse.

Organizations also may want to grow their way into AI, starting in areas that are less focused on humans, such as the supply chain, before moving to use cases like job candidate selection.

Let’s not throw the baby out with the bathwater. There’s a lot of good in AI, but it’s still early days.

Why don’t we use AI for less human-oriented things first? Let’s perfect it, understand it, check it, audit it and control it before we start applying it to the "people" aspects of society.



