
Study finds that few major AI research papers consider negative impacts

Tensor processing units (TPUs) in one of Google's data centers. Image Credit: Google

In recent decades, AI has become a pervasive technology, affecting companies across industries and throughout the world. These innovations arise from research, and the research objectives in the AI field are influenced by many factors. Together, these factors shape patterns in what the research accomplishes, as well as who benefits from it — and who doesn’t.

In an effort to document the factors influencing AI research, researchers at Stanford, the University of California, Berkeley, the University of Washington, and University College Dublin & Lero surveyed 100 highly cited studies submitted to two prominent AI conferences, NeurIPS and ICML. They claim that in the papers they analyzed, which were published in 2008, 2009, 2018, and 2019, the dominant values were operationalized in ways that centralize power, disproportionately benefiting corporations while neglecting society’s least advantaged.

“Our analysis of highly influential papers in the discipline finds that they not only favor the needs of research communities and large firms over broader social needs, but also that they take this favoritism for granted,” the coauthors of the paper wrote. “The favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment.”

In the papers they reviewed, the researchers identified “performance,” “building on past work,” “generalization,” “efficiency,” “quantitative evidence,” and “novelty” as the top values espoused by the coauthors. By contrast, values related to user rights and ethical principles appeared very rarely — if at all. None of the papers mentioned autonomy, justice, or respect for persons, and most only justified how the coauthors achieved certain internal, technical goals. Over two-thirds — 71% — didn’t make any mention of societal need or impact, and just 3% made an attempt to identify links connecting their research to societal needs.

One of the papers included a discussion of negative impacts and a second mentioned the possibility. But tellingly, none of the remaining 98 contained any reference to potential negative impacts, according to the Stanford, Berkeley, Washington, and Dublin researchers. Even after NeurIPS began requiring authors to state the “potential broader impact of their work” on society, starting with last year’s NeurIPS 2020, the language leaned toward positive consequences, often mentioning negative consequences only briefly or not at all.

“We reject the vague conceptualization of the discipline of [AI] as value-neutral,” the researchers wrote. “The upshot is that the discipline of ML is not value-neutral. We find that it is socially and politically loaded, frequently neglecting societal needs and harms, while prioritizing and promoting the concentration of power in the hands of already powerful actors.”

To this end, the researchers found that ties to corporations — either funding or affiliation — in the papers they examined doubled between the 2008-2009 and 2018-2019 cohorts, reaching 79%. Meanwhile, the share of papers with university ties declined to 81%, putting corporations nearly on par with universities in the most-cited AI research.

The trend is partly attributable to private sector poaching. From 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the growing movement of researchers from academia to corporations.

But whatever the cause, the researchers assert that the effect is the suppression of values such as beneficence, justice, and inclusion.

“The top stated values of [AI] that we presented in this paper such as performance, generalization, and efficiency … enable and facilitate the realization of Big Tech’s objectives,” they wrote. “A ‘state-of-the-art’ large image dataset, for example, is instrumental for large scale models, further benefiting [AI] researchers and big tech in possession of huge computing power. In the current climate where values such as accuracy, efficiency, and scale, as currently defined, are a priority, user safety, informed consent, or participation may be perceived as costly and time consuming, evading social needs.”

A history of inequality

The study is only the latest to argue that the AI industry is built on inequality. In an analysis of publications at two major machine learning conference venues, NeurIPS 2020 and ICML 2020, none of the top 10 countries in terms of publication index were located in Latin America, Africa, or Southeast Asia. A separate report from Georgetown University’s Center for Security and Emerging Technology found that while 42 of the 62 major AI labs are based outside the U.S., 68% of their staff are located in the United States.

The imbalances can result in harm, particularly given that the AI field generally lacks clear descriptions of bias and fails to explain how, why, and to whom specific bias is harmful. Previous research has found that ImageNet and OpenImages — two large, publicly available image datasets — are U.S.- and Euro-centric. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan than when they come from the United States. In the same vein, because items associated with words like “wedding” or “spices” look distinctly different across cultures, publicly available object recognition systems often fail to correctly classify many of these objects when they come from the Global South.

Initiatives are underway to turn the tide, like Khipu and Black in AI, which aim to increase the number of Latin American and Black scholars attending and publishing at premier AI conferences. Other communities based on the African continent, like Data Science Africa, Masakhane, and Deep Learning Indaba, have expanded their efforts with conferences, workshops, dissertation awards, and curricula developed for the wider African AI community.

But substantial gaps remain. AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google, reportedly in part over a paper discussing the risks of deploying large language models, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people. Google-affiliated coauthors later published a paper pushing back against Gebru’s environmental claims.

“We present this paper in part in order to expose the contingency of the present state of the field; it could be otherwise,” the University College Dublin & Lero researchers and their associates wrote. “For individuals, communities, and institutions wading through difficult-to-pin-down values of the field, as well as those striving toward alternative values, it is a useful tool to have a characterization of the way the field is now, for understanding, shaping, dismantling, or transforming what is, and for articulating and bringing about alternative visions.”
