Alondra Nelson Wants to Make Science and Tech More Just

The deputy director of the White House science office plans to tackle algorithmic bias and start candid conversations about the past.
“The potential benefits of automation will not be accomplished and, indeed, will fail, if we do not develop policy that prioritizes equity,” Alondra Nelson says. Photograph: Angela Weiss/Getty Images

The pandemic taught us a lesson that we needed to learn again, says Alondra Nelson: Science and technology have everything to do with issues of society, inequality, and social life.

After a year in which science became politicized amid a pandemic and a presidential campaign, President-elect Joe Biden in January appointed Nelson deputy director for science and society in the White House Office of Science and Technology Policy, a newly created position. Nelson will build a science and society division within OSTP aimed at addressing issues ranging from data and democracy to STEM education. In another first, Biden made his science adviser, Eric Lander, who is also director of OSTP, part of his Cabinet.

Nelson has spent her career at the intersection of race, tech, and society, writing about topics like how Afrofuturism can make the world better and how the Black Panthers used health care as a form of activism, leading the organization to develop an early interest in genetics. She's the author of several books, including The Social Life of DNA, which looks at the rise of the consumer genetics testing industry and how a desire to learn about their lineage led Black and Mormon people to become early users of the technology.

Nelson is a professor at the Institute for Advanced Study in Princeton, New Jersey. Before her appointment, she was writing a book about the OSTP and major scientific initiatives of the Obama administration, which included a series of reports on AI and government policy.

In her first formal remarks in her new role in January, Nelson called science a social phenomenon and said technology such as artificial intelligence can reveal or reflect dangerous social architectures that undergird the pursuit of scientific progress. In an interview with WIRED, Nelson said the Black community in particular is overexposed to the harms of science and technology and is underserved by the benefits.

In the interview, she talked about the Biden administration’s plans for scientific moonshots, why the administration has no formal position on banning facial recognition, and issues related to emerging technology and society that she thinks must be addressed during the administration’s time in office. An edited transcript follows.

WIRED: In January you talked about the “dangerous social architecture that lies beneath the scientific progress that we pursue” and cited gene editing and artificial intelligence. What prompted you to mention gene editing and AI in your first public remarks in this role?

Alondra Nelson: I think what genetic science and AI share is that they are data-centric. There are things that we know about data and how data analysis works at scale that are as true of large-scale genomic analysis as they are of machine learning in some regard, and so these are kind of foundational. What I think we still need to address as a nation are questions about the provenance of data analyzed with AI tools and questions about who gets to make decisions about what variables are used and what questions are posed of scientific and technical research. What I hope is different and distinctive about this OSTP is a sense of honesty about the past. Science and technology have harmed some communities, left out communities, and left people out of doing the work of science and technology.

Working in an administration that on day one identified issues of racial equity and restoring trust in government as key issues means that the work of science and technology policy has to be really honest about the past and that part of restoring trust in government—part of restoring trust in the ability for science and technology to do any kind of good in the world—is really being open about the history of science and technology's flaws and failures.

Unfortunately, there are lots of examples. Next month will mark another anniversary of the Associated Press story that exposed the Tuskegee syphilis study almost 50 years ago, so we're coming up on that anniversary again. Then of course we've got issues in AI, where research shows that the data being used is incomplete, and that incompleteness means systems are making inferences that are incomplete and inaccurate and that, when used in social services and the criminal justice system in particular, have disproportionately harmful effects on Black and brown communities.

Lander said in his confirmation hearing that OSTP will address discrimination stemming from algorithmic bias. How will that work?

It's certainly top of mind and at the top of our priorities, but that's still to be determined and played out.

The challenge is moving from the aspirational AI ethics principles many private and public sector organizations developed to a space where it's an actuality. That means forms of accountability need to be created, and it takes honesty about the ways in which AI is still very much a work in progress, particularly when you're dealing with its intersection with the social world.

I think it takes a real commitment to want to move from technical standards to what I call sociotechnical standards. A tool can be technically exactly right—we can think about the work that Joy Buolamwini and Timnit Gebru did around bias and facial recognition [which showed that many facial recognition programs are better at identifying white and male faces than female faces with dark skin].

But that doesn’t deal with the sociotechnical issue, which is that there's still potential for disproportionate harm based on the incompleteness of the database and what we think the data tells us and what we think it “predicts” in the world.

Does the Biden administration have a position on facial recognition and whether a ban or moratorium should be put in place?

Not yet.

Is that forthcoming, or something the administration intends to weigh in on?

Not that I'm aware of. Obviously there's a lot of regulatory development in this space at the state and local level.

The OSTP is currently reviewing the issue of facial recognition technology bans and moratoria. Both Eric Lander and I have publicly stated that addressing algorithmic bias is a priority for our work at OSTP, especially given the demonstrated harms to Black and brown communities that have resulted from the use of automated systems—in the criminal justice system, social services, housing, employment, and other sectors—that have exacerbated racial and economic inequality, or produced new forms of inequity and harm. The Biden-Harris administration is committed to technology policy that advances justice and equity, preserves rights, and expands opportunities.

While looking things up before we talked, I learned that you were working on a book about the OSTP.

Yeah, I was working on a book that was historical in some ways. There hadn't really been a lot written about this office.

It was established in 1976, and to my mind, one read on the history of the office is that it emerged just after the Vietnam War, at a time when the executive branch had to think about how to structure science and tech policy and strategy advice in the context of social change. I think that's been the case since the 1970s. From an organizational theory perspective, I became interested in OSTP as a kind of federal startup organization within a centuries-old bureaucracy.

So the book was imagined as being a kind of history up through the Obama administration. Under Obama, the staff grew to its largest level, and they brought a lot of new directions to the work of science policy—grand challenges, public commitments, these sorts of things. I think now the book that I imagined will no longer exist. It was kind of an arm's-length book about a topic that was in the distance, and now I'm right in the middle of it. So I think it will be a very different book in the end.

Is that something that you would publish soon, or after you leave?

No, that would be after I leave.

Just checking. In a speech about that book you mentioned five major scientific initiatives of the Obama administration around issues like AI, cancer research, and understanding the human brain. What are some of the moonshots that the OSTP and the Biden administration want to aim at during your time in office?

Certainly, I think that we want to think about pandemic preparedness. Eric spoke recently about wanting to get to a place of readiness for the next pandemic, which is also a social shock, and can we be ready with a vaccine in 100 days? So that's a really ambitious goal.

You will have heard also that there's ARPA-H [modeled on the Pentagon’s Defense Advanced Research Projects Agency], which is trying to create an advanced research platform that will sit in the National Institutes of Health, be more nimble, take on private sector partners, and be a lot more innovative in experimental health research. I don’t think the word moonshot has been used for either of them, but certainly they are big, ambitious goals with a timeline to accomplish them. In that way they are very much in line with President Kennedy's moonshot, which was a goal, a time period, and a way to get it done.

When the science team was named, I was clear that one of my goals is to be honest about the past with regard to the harms of science and technology and to make it more small-d democratic, to really own our democratic principles with regard to science and technology. And that means things like dealing with algorithmic bias and not having the disproportionate benefit or harm go to certain communities.

If you've looked at some of the executive orders, on procurement, for example—who is asked to be partners with government and the use of government resources for science and technology, research and development—it means thinking about the composition of advisory committees, who gets to sit at the table and who's invited to sit at the table. The presidential memorandum for scientific integrity asked explicitly that agencies look at who sits on their advisory committees with an eye toward making sure they're representative of American society and have the expertise and stakeholder perspectives that one needs.

As a sociologist of science and technology, I also envision a kind of fundamental transformation in the necessity and urgency of advancing science and technology policy through an equity frame. To me that's a moonshot also.

Science, like democracy, is a process, and it's never quite realized but always trying to perfect itself. And so I think that bringing that process and a real kind of intentionality to having more inclusive science and technology policy is a pretty big aspiration, particularly given that we know it's a space where people kind of felt like you didn't have to think about these issues. You can maybe think about them in social services, or you can think about them in culture or in the arts, but not in science. And so I'm really proud to be part of an administration that says even in science, and maybe especially in science, these issues matter.

What are some issues related to emerging technology and society that you think must be addressed during the Biden administration’s time in office?

Clean energy as one facet of a broad strategy to address climate change; policy for broad participation in STEM and in advanced manufacturing that leverage a whole-of-government approach; robust pandemic preparedness that, in light of the lessons of Covid-19, includes both biomedical and social strategies; and shoring up the American public’s confidence in federal science and technology by having clear policies and practices like scientific integrity and by offering the public insight into an array of data about government initiatives. The current and potential expansion of automated systems in society, including machine learning, AI, and risk assessment tools, warrant our attention. Any of the potential benefits of automation will not be accomplished and, indeed, will fail, if we do not develop policy that prioritizes equity, civil rights, and justice upstream in the development of automated systems and throughout their use.

