Opinion: AI For Good Is Often Bad

Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
Designed to mitigate poaching, Intel's TrailGuard AI still won’t detect poaching's likely causes: corruption, disregarding the rule of law, poverty, smuggling, and the recalcitrant demand for ivory. Photograph: Carolyn Van Houten/The Washington Post/Getty Images

After speaking at an MIT conference on emerging AI technology earlier this year, I entered a lobby full of industry vendors and noticed an open doorway leading to tall grass and shrubbery recreating a slice of the African plains. I had stumbled onto TrailGuard AI, Intel's flagship AI for Good project, which the chip company describes as an artificial intelligence solution to the crime of wildlife poaching. Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red.

I was handed a printout with my blurry image next to a picture of an elephant, along with text explaining that the TrailGuard AI camera alerts rangers to capture poachers before they add to the roughly 35,000 elephants killed each year. Despite these good intentions, I couldn’t help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive the result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel’s computer vision?

Intel isn’t alone. Within the last few years, a number of tech companies, from Google to Huawei, have launched their own programs under the AI for Good banner. They deploy technologies like machine-learning algorithms to address critical issues like crime, poverty, hunger, and disease. In May, French president Emmanuel Macron invited about 60 leaders of AI-driven companies, like Facebook’s Mark Zuckerberg, to a Tech for Good Summit in Paris. The same month, the United Nations in Geneva hosted its third annual AI for Good Global Summit, sponsored by XPrize. (Disclosure: I have spoken at it twice.) A recent McKinsey report on AI for Social Good provides an analysis of 160 current cases claiming to use AI to address the world’s most pressing and intractable problems.

While AI for Good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask both the root causes of these problems and the risks of experimenting with AI on vulnerable people without appropriate safeguards.


Tech companies that set out to develop a tool for the common good, not only for their own self-interest, soon face a dilemma: They lack expertise in the intractable social and humanitarian issues facing much of the world. That’s why companies like Intel have partnered with National Geographic and the Leonardo DiCaprio Foundation on wildlife trafficking. And why Facebook partnered with the Red Cross to find missing people after disasters. IBM’s social-good program alone boasts 19 partnerships with NGOs and government agencies. Partnerships are smart. The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about.

The deeper issue is that no massive social problem can be reduced to the solution offered by the smartest corporate technologists partnering with the most venerable international organizations. When I reached out to the head of Intel’s AI for Good program for comment, I was told that the "poacher" label I received at the TrailGuard installation was in error—the public demonstration didn’t match the reality. The real AI system, Intel assured me, only detects humans or vehicles in the vicinity of endangered elephants and leaves it to the park rangers to identify them as poachers. Despite this nuance, the AI camera still won’t detect the likely causes of poaching: corruption, disregarding the rule of law, poverty, smuggling, and the recalcitrant demand for ivory. Those who still cling to technological solutionism are operating under the false assumption that because a company’s AI application might work in one narrow area, it will work on a broad political and social problem that has vexed society for ages.

Sometimes a company’s pro bono projects collide with its commercial interests. Earlier this year Palantir and the World Food Programme announced a $45 million partnership to use data analytics to improve food delivery in humanitarian crises. A backlash quickly ensued, led by civil society organizations concerned over issues like data privacy and surveillance, which stem from Palantir’s contracts with the military. Even though Palantir’s project is helping the humanitarian organization Mercy Corps aid refugees in Jordan, protesters and even some Palantir employees have demanded the company stop helping Immigration and Customs Enforcement detain migrants and separate families at the US border.

Even when a company’s intentions seem coherent, the reality is that for many AI applications, the current state of the art is pretty bad when applied to global populations. Researchers have found that facial recognition software, in particular, is often biased against people of color, and especially against women of color. This has led to calls for a global moratorium on facial recognition and prompted cities like San Francisco to effectively ban it. AI systems built on limited training data create inaccurate predictive models that lead to unfair outcomes. AI for Good projects often amount to pilot beta testing with unproven technologies. It’s unacceptable to experiment in the real world on vulnerable people, especially without their meaningful consent. And the AI field has yet to figure out who is culpable when these systems fail and people are hurt as a result.

This is not to say tech companies should not work to serve the common good. With AI poised to impact much of our lives, they have all the more responsibility to do so. To start, companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both the benefits and the harms these AI tools may have in the long run. Their publicity around the tools should reflect the reality, not the hype. (To Intel’s credit, the company promised to fix its demo to avoid future confusion.) Companies should involve the local people closest to the problem in the design process and conduct independent human rights assessments to determine whether a project should move forward. Overall, companies should approach any complex global problem with the humility of knowing that an AI tool won’t solve it.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Submit an op-ed at opinion@wired.com.

