Making Sure AI Is Socially Responsible

A report published recently by Martha Lane Fox’s Doteveryone think tank revealed that 59 per cent of tech workers have worked on products they felt might be harmful to society, with more than a quarter feeling so strongly that they quit their jobs over it. This was particularly marked in relation to AI products. Separately, 63 per cent said they want more space in their week to consider the impact of the tech they work on. The sample was small and may not have been representative, but the report is instructive nonetheless.

This connects to two recent trends. The first is the rise of employee activism over the social impact of Big Tech employers, from Amazon workers’ call for the company to deliver a climate change strategy (recently rejected by shareholders) to the #GoogleWalkout campaign protesting the search giant’s handling of sexual harassment, misconduct, transparency, and other workplace issues. The second is widespread concern over the implications of advances in AI, from the ethics of “everyday” applications such as “spying” voice assistants, liberty-busting facial recognition systems, and the perpetuation of entrenched biases by algorithms used for predictive policing or ostensibly fairer hiring, to the potential (propagated by science fiction cinema and philosophically inspired online games) for AI systems to eventually bring about mankind’s downfall.

Emerging recently as a counterpoint to this scepticism and scaremongering — some of it no doubt justified; some of it more fanciful — has been the concept of “AI for good”, a more specific strand of the “tech for good” movement.

The “AI for good” trend has led to the creation of initiatives such as the Google AI Impact Challenge, identifying and supporting 20 startups creating positive social impact through the application of AI to challenges across a broad range of sectors, from healthcare to journalism, energy to communication, education to environmentalism. Stanford University launched its Institute for Human-Centered Artificial Intelligence to great fanfare. Meanwhile, my Stanford colleagues Jennifer Aaker and Fei-Fei Li have developed a course, Designing AI to Cultivate Human Well-being, that seeks to address the need for AI products, services, and businesses to have social good at their core.

This is key, as the label “AI for good” is somewhat misdirected. It suggests that there are separate categories of AI, one for good and one for bad, whereas we should be focusing on making sure “goodness” is embedded in the very concept of AI. It would make no sense, for example, to refer to “electricity for good”: electricity, and the products and services built on it, was conceived as a good, even though it can of course be put to bad ends. Electricity, and later internet connectivity, became so fundamental to everyday life that it became a utility, and we are accelerating towards a near future in which AI (or at least the services and experiences driven by it) is set to become every bit as important and fundamental to daily life.

So, what does it mean to embed good at the core of AI? As I have written about before in discussing the often-misunderstood distinction between traditional and social enterprise, it is about more than simply tacking a positive application or outcome onto a process in mitigation of, or in response to, an otherwise negative impact. It is about avoiding generating negative impacts entirely, and making positive social value not just an afterthought or byproduct but the proactive goal of the activity.

In weighing the potential positive and negative impacts of AI, however, we must be careful to separate reality from myth. Popular culture and sensationalist coverage in the non-specialist media give a false sense of the short- to medium-term possibilities of AI. As Prof. Aaker points out in the notes to the first class of her course, the perpetuation of biases, conscious or unconscious, by algorithms trained on imperfect data sets poses a much greater threat than the “killer robots” that come to the fore every time Boston Dynamics demonstrates its creations’ latest capabilities.

The flip side of this is that the most immediate positive impact to be created by AI will arise from its proficiency at organising data, not necessarily understanding it. In fact, during his Oxford Internet Institute lecture on the opportunities and risks of AI, delivered during London Tech Week this month, Prof. Luciano Floridi noted that “intelligence” is neither necessary nor, in many cases, desirable for success in most of the tasks for which AI is currently being deployed. With this in mind, we need to ensure that the data going in is as good as possible, and that the models trained on that data are as free from bias as possible; per the AI Now Institute’s recent report, Discriminating Systems: Gender, Race, and Power in AI, we are not yet doing this as well as we should.
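To make that concrete for technically minded readers, the sketch below (written against the open-source scikit-learn library, with entirely invented data, and representing no specific system mentioned in this article) shows how a model trained on biased historical decisions reproduces that bias even when the protected attribute is excluded from its inputs.

```python
# A minimal, self-contained sketch (entirely synthetic data, not any real
# system) of how a model trained on biased historical decisions reproduces
# that bias even when the protected attribute is left out of its features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)            # the attribute that *should* decide

# Biased historical labels: past decision-makers favoured group A.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Group membership is deliberately excluded from the features, but a proxy
# variable correlated with it (think postcode or school attended) leaks in.
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Compare selection rates per group: the gap is the "demographic parity
# difference", one simple measure used to flag disparate outcomes.
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"selection rate, {name}: {pred[group == g].mean():.2f}")
```

The gap in selection rates printed at the end is one simple fairness measure; closing it is a matter of fixing the data and the proxy variables, not just tweaking the model.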

Notwithstanding the question of bias, AI can have a positive social impact, for example by automating processes that currently depend on human labour but exact a steep cost on the individuals performing it. One such instance is identifying and stopping the spread of child abuse images on the dark web. Another example of AI’s ability to relieve pressure on human agents and produce better outcomes is provided by Annie MOORE, developed by researchers at the Universities of Oxford and Lund. The software matches refugees to locations based on their needs and skills and on the availability of resources and opportunities; it increases the likelihood of someone finding employment within three months by more than 20 per cent, as well as improving their chances of integrating into their new communities. This data-processing power is also, through machine learning, accelerating the development of new models for understanding how the world is changing: ClimateAI, for instance, has developed a forecasting engine for the agriculture and energy sectors that can model the impact of climate change on asset values over periods ranging from a single season to an entire decade.
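Annie MOORE’s actual model is considerably more sophisticated than anything that fits here, but for curious readers the sketch below illustrates the underlying idea, assuming an upstream model has already predicted each family’s employment chances in each location: assign families to locations so that total predicted employment is maximised, using SciPy’s standard assignment solver and invented numbers.

```python
# A toy sketch of the matching idea behind software like Annie MOORE, whose
# real model is far richer: assign each family to the location maximising
# total predicted employment, one family per slot. All numbers are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

# employment_prob[i, j]: hypothetical predicted chance that family i finds
# work within three months if placed in location j.
employment_prob = np.array([
    [0.62, 0.35, 0.51],
    [0.18, 0.44, 0.29],
    [0.75, 0.40, 0.33],
])

# The solver minimises total cost, so negate the matrix to maximise the
# summed employment probability instead.
rows, cols = linear_sum_assignment(-employment_prob)

for family, location in zip(rows, cols):
    print(f"family {family} -> location {location} "
          f"(predicted employment: {employment_prob[family, location]:.0%})")
print(f"expected placements: {employment_prob[rows, cols].sum():.2f}")
```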

Examples such as these, combined with what appears to be a growing conscience among tech workers about the social impact of their work, provide a powerful counterweight to the negativity surrounding real-world applications of AI. It is nevertheless important to keep social impact front and centre in order to keep the pendulum moving in the right direction.
