
Top Nine Ethical Issues In Artificial Intelligence

Forbes Technology Council

Margarita Simonova, founder and CEO of ILoveMyQA.com.

Intelligent machine systems are transforming our lives for the better every day, and the more capable these systems become, the more efficient our world becomes.

Some of today’s tech giants believe that artificial intelligence (AI) should be used far more widely. However, many ethical and risk-assessment issues must be addressed before that can become a reality. We discuss nine of them below.

1. How Do We Deal With Unemployment?

The majority of people sell most of their waking hours just to earn enough income to keep themselves and their families alive. Because of the time it saves, the success of AI will give people the opportunity to spend more time caring for their families, becoming involved in their communities and finding new ways of contributing to human society.

Take, for example, the trucking industry, which employs millions of people in the United States alone. If Tesla’s Elon Musk delivers on his promise of true self-driving cars (and, by extension, delivery trucks) and they become widely available within the next decade, what happens to those millions of workers? At the same time, self-driving trucks seem like an ethical option when we consider their potential to lower accident rates.

2. How Can We Equitably Distribute The Wealth Created By Machines?

AI, if it becomes widely used, can reduce a company’s reliance on the human workforce, which means that revenues will go primarily to people who own AI-driven companies.

Already, we are seeing startup founders take home the majority of the economic surplus they generate. So how do we equitably distribute the wealth created by machines?

3. Can Machines Influence Our Behavior And Interactions?

AI bots are becoming more effective at imitating human relationships and conversations. A notable moment came in 2014, when a chatbot named Eugene Goostman was widely reported to have passed the Turing test. In this challenge, human judges converse with an unknown entity over text and then guess whether that entity is human or machine. About a third of the judges who chatted with Eugene Goostman believed it was human, which the organizers claimed was enough to pass, though the result remains contested.

While this can prove very useful in nudging society toward more beneficial behavior, it can also prove detrimental in the wrong hands.

4. How Do We Guard Against Possible Detrimental Mistakes?

Intelligence results from learning, whether you’re human or machine. Systems normally have a training phase in which they “learn” to detect the right patterns and act on their input. After training, the system moves to a test phase, where new scenarios are thrown at it to see how it performs.

Because the training phase is highly unlikely to cover every scenario the system may encounter in the real world, the system can be fooled in ways that humans wouldn’t be. Therefore, if we are to rely on AI to replace human labor, we need to ensure that it performs as planned and cannot be manipulated by people with selfish intentions.
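As a concrete illustration of the training/test pattern described above, here is a minimal sketch in Python using scikit-learn and a synthetic dataset; the dataset, model and numbers are illustrative assumptions, not anything specific from this article.

```python
# Minimal train/test sketch: a model "learns" patterns from one slice of
# data, then is evaluated on held-out scenarios it has never seen.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic data stands in for real-world inputs.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Training phase: fit on 75% of the data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test phase: held-out examples probe how well the learned patterns generalize.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Even a high test score only reflects the scenarios in the held-out set; inputs the system has never encountered, or adversarially crafted ones, can still fool it.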

5. Can We Eliminate AI Bias?

Let’s not forget that AI systems are created by humans, who can sometimes be judgmental and biased. Yes, AI, if used right, can become a catalyst for positive change, but it can also fuel discrimination. AI can process information at a speed and scale that far exceed human capabilities; however, because it is shaped by human choices and human data, it cannot always be trusted to be neutral and fair.
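One simple way teams probe for this kind of bias is to compare a model’s decision rates across groups. The sketch below, which uses made-up data and a hypothetical “approval” decision rather than any real system, computes a basic demographic-parity gap.

```python
# Hypothetical bias check: compare the positive-decision rate across two
# groups (a simple "demographic parity" gap). All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)      # 0 or 1: a sensitive attribute
scores = rng.random(1000) + 0.1 * group    # skewed scores stand in for a model
approved = scores > 0.6                    # the model's yes/no decision

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"parity gap: {abs(rate_0 - rate_1):.2f}")  # a large gap flags possible bias
```

A gap on its own doesn’t prove discrimination, but it is the kind of signal that should prompt a closer look at the training data and the decision threshold.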

6. How Do We Protect AI From Adversaries?

The more powerful the technology, the more it can be used for good as well as nefarious purposes. AI is no exception; therefore, cybersecurity becomes all the more important.

7. How Can Unintended Consequences Be Avoided?

There’s also the possibility that AI could turn against us, not in an evil way but unintentionally. Take, for example, an AI system that is asked to rid the world of cancer. After all of its computing, it spits out a formula that does exactly that: It kills everyone on the planet, because with no people left, there can be no cancer. The goal was achieved, but not in the way humans intended.

8. Is There Any Way We Could Remain In Total Control Of AI?

Human dominance is not due to strong muscles and sharp teeth but rather to intelligence and ingenuity. We can defeat bigger, stronger and faster animals because we’re able to create and use physical and cognitive tools to control them.

This presents a real concern that AI will one day have the same advantage over us. Sufficiently trained machines may be able to anticipate our every move and defend themselves against us “pulling the plug.”

9. Should Humane Treatment Of AI Be Considered?

Machines imitate us so well that they’re becoming more and more like humans by the day. Soon we’re going to get to the point where we consider machines as entities that can feel, perceive and act. Once we get there, we might ponder their legal status. Can “feeling” machines really suffer?

So How Do We Address These Ethical Issues?

Many believe that because AI is so powerful and ubiquitous, it is imperative that it be tightly regulated. However, there is little consensus about how this should be done. Who makes the rules? So far, companies that develop and use AI systems are mostly self-policed. They rely on existing laws and negative reactions from consumers and shareholders to keep them in line. Is it realistic to continue this way? Obviously not, but as it stands, regulatory bodies are not equipped with the AI expertise necessary to oversee those companies.

Jason Furman, a professor of the practice of economic policy at the Kennedy School and a former top economic adviser to President Barack Obama, suggests that “The problem is these big tech companies are neither self-regulating nor subject to adequate government regulation. I think there needs to be more of both. …We can’t assume that market forces by themselves will sort it out. We have to enable all students to learn enough about tech and about the ethical implications of new technologies so that when they are running companies or when they are acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermines a decent civic life.”

While technological progress generally translates into better lives for everyone, we should bear in mind that new ethical concerns will continue to emerge around mitigating suffering and avoiding negative outcomes.



