
AI and National Security: Examining First Principles- a conversation with Lucy Suchman

Justin Hendrix / Apr 9, 2021

Lucy Suchman is Professor Emerita, Anthropology of Science and Technology in the Department of Sociology at Lancaster University. Suchman has a long career working at the intersection of human-computer interaction and questions around contemporary war fighting, including the ethics of the development and use of artificial intelligence in military contexts.

Last month, the National Security Commission on Artificial Intelligence (NSCAI) released its Final Report and Recommendations to the President and Congress. For the AI Now Institute, where she is an Advisory Board member, Suchman wrote about six unexamined premises of the NSCAI report. I reached out to ask her for more context on her thinking about AI, the supposed arms race with China, and the role of industry activism in setting the ethical bounds for the nation.

Subscribe to the podcast in your favorite player here. The following is a lightly edited transcript of our discussion.

Justin Hendrix:

You have a history of being concerned with questions of national security and the use of artificial intelligence in weapons. I see your citations on everything from algorithmic violence to the use of drones and algorithms in warfare.

Lucy Suchman:

Yes. I have been involved with this since the 1980s. Through a series of circumstances, I ended up doing my PhD research as an anthropologist at Xerox's Palo Alto Research Center starting around 1980. This was the period of the Strategic Defense Initiative- Ronald Reagan's so-called Star Wars Initiative- which was the idea of a comprehensive ballistic missile shield.

This was at the height of the Cold War, of course, and associated with that was also something called the Strategic Computing Initiative, which was very similar to many of the things that are going on today. Basically the idea of incorporating AI into all aspects of the armed forces. And with my colleagues at Xerox PARC- particularly Severo Ornstein who was in the computer science lab- we organized Computer Professionals for Social Responsibility, which was initially focused on the idea of the automation of the command and control of nuclear weapons systems. We were arguing all of the reasons that that was a really bad idea.

So that was the beginning of my interest in these things and I've followed them ever since. And for me, this is again at the intersection of my longstanding critical engagement with artificial intelligence and also with US militarism. I came of age during the Vietnam war, and that was certainly a formative experience for me.

Justin Hendrix:

We've got this latest effort- a National Security Commission on Artificial Intelligence chaired by Eric Schmidt, the former Google executive, and then a variety of other notable individuals from across industry, former staffers and former lawmakers and people in academia who focus on these issues. Tell us a little bit about this report.

Lucy Suchman:

I've been following the work of the National Security Commission on AI as part of a longer-term interest of mine, beginning with the founding of the Defense Innovation Board in 2016, of which Eric Schmidt was also the chair. Then we had the formation of the Joint Artificial Intelligence Center within the DOD, and the National Security Commission on AI. And the players in these initiatives are recurring- Eric Schmidt is one of the continuities, along with Lieutenant General Jack Shanahan, who was the head of Project Maven and then became the first head of the Joint AI Center. And also former Deputy Secretary of Defense Robert Work- I would characterize him as a major technophile within the military.

So this is very much a coalition made up of people with a strong interest in promoting Silicon Valley as a site of technology innovation and research and development, and people within the military who are, I would say, captured by the idea that new technologies, and particularly artificial intelligence, are going to be able to address a series of very longstanding problems.

So I've been following the National Security Commission on AI- all of their public plenaries and then their final report. The final report is 750 pages. It's very extensive, but my experience is that the issues that to me are the most fundamental are very carefully placed outside the frame of the discussion. They're treated as taken for granted rather than actually opened up to question, and that is what provoked me to write a piece that I titled Six Unexamined Premises around National Security and Artificial Intelligence.

Justin Hendrix:

Well, let's go through each of them. The first one you say is the assumption that national security comes through military advantage, which comes through technological or specifically AI dominance. The core idea is that without AI dominance, we will lose our competitive advantage from a security perspective. What's wrong with that idea?

Lucy Suchman:

In the interest of full disclosure, I guess I should say that as a US citizen, I don't feel that the strategy of global dominance on the part of the United States actually makes me more secure. And yet, of course, in all of this it is absolutely taken for granted that the only way the United States can achieve national security is through global dominance. That very quickly is mapped onto the idea of global military dominance- economic dominance also, of course- but certainly military dominance.

Image: The Kill Chain, Yemen, 2018. "The frames indicate objects automatically identified as prospective targets."

And just to pause there for a moment on the idea that the United States is under any kind of threat, or even serious competition, militarily. US defense spending is roughly equivalent to that of the next 10 most highly militarized countries combined. As many people know, we have roughly 800 bases of operation around the world. So the US is the overwhelming global military power, and to represent the United States as vulnerable in that respect deserves, I think, some immediate and serious questioning. Then we shift into the relationship between military dominance and technological dominance, and those two are treated as very closely equivalent. Military dominance relies upon dominance in the area of weapons systems and the continuous production of new, more expensive, higher-tech weapon systems, which is of course the foundation of what we know so well as the Military-Industrial Complex. And now we have the further specification that within that technological military dominance, artificial intelligence is somehow the absolutely essential key to ensuring that the United States retains its global power.

So all of those things, I think, need to be called into question. Is militarism really the only avenue to national security, or does it- and I think there are a lot of good arguments for this case- make us less secure? Is it actually part of what generates the enmity that it is ostensibly created to address?

Justin Hendrix:

So looking at this body itself and the groups who came up with it- this is sort of the Military-Industrial Complex plus big tech. Who's on this commission, and why is that a problem?

Lucy Suchman:

So this is a kind of re-formation of the Military-Industrial Complex for the contemporary moment, with the idea that it's no longer industry in the sense of the traditional defense contractors, but high tech- and in particular big tech- that is really taking the lead here. And if we look at who comprises the National Security Commission on Artificial Intelligence, my second unexamined premise is the idea that the NSCAI is an independent body without conflicts of interest. The members of the commission are basically current and former CEOs and other senior managers of big tech companies- Amazon, Google, In-Q-Tel, Microsoft, Oracle- along with current and former members of the defense and intelligence agencies and senior members of universities with extensive DOD funding, places like Caltech and CMU.

Now, arguably, these people are on the commission because they have the requisite expertise, but they all also have a vested interest in increased funding for AI research and development. In my piece, I cite a quote from Eric Schmidt at the January plenary of the commission, where he was talking about the makeup of the commission. He said, "We ended up with a representative team of America." Again, this is something that I think needs to be challenged. The fact is that there is an enormous amount of self-interest on this commission. This is not an impartial body. It begins already convinced that artificial intelligence is the answer to everything and needs further investment, and then those 750 pages are basically a justification of that position.

Justin Hendrix:

Some of this thinking probably comes from prior works, and there are folks like Nick Bostrom, who wrote Superintelligence. In the book he explores the idea that whoever gets to a superintelligence first has the ultimate competitive advantage, that it will be impossible to conquer the nation that arrives at that type of supremacy. But you find that AI is not even a coherent field of technology development. So what is AI to begin with?

Lucy Suchman:

That's such an important question. Interestingly, I was recently reading the transcript of a press conference that Lieutenant General Jack Shanahan himself held around the beginnings of the Joint Artificial Intelligence Center. He was asked by a member of the press, "What is AI, how do you define it?" And he had a very difficult time answering that. He kind of indicated, well, we're still in the process of defining it. But at the same time he argues that it's going to be ubiquitous throughout the Department of Defense. And when he's asked to offer examples, he resorts to examples like the recommender systems of Netflix and Amazon.

So if we think about those recommender systems, first of all, you could ask: what does artificial intelligence actually have to do with the way those systems work? They're basically based on data analytics. In each case, there's the availability of an enormous and continually renewed body of data on what people are watching, reading, ordering online, et cetera, that can be continually analyzed and used to create the models that then generate predictions about what a particular person might want to watch or read or buy next. So we have a kind of closed world generating the formatted data that are needed in order to create the models that can be used to make the predictions. And we also have a situation where the cost of error is very low. If the system recommends something that you don't actually find interesting, no one is harmed by that.
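To make that point concrete, here is a minimal, hypothetical sketch of recommendation as plain data analytics: count which titles co-occur in viewing histories and suggest whatever co-occurs most often with what a person has already watched. The titles, data, and function names are invented for illustration- this is not how any particular product is implemented- but it captures the logic Suchman describes: correlations over a closed world of well-formatted data, with a very low cost of error.

```python
from collections import Counter
from itertools import combinations

# Hypothetical viewing histories: the "enormous, continually renewed body of data".
histories = [
    {"The Crown", "Chef's Table", "Drive to Survive"},
    {"The Crown", "Drive to Survive"},
    {"Chef's Table", "Salt Fat Acid Heat"},
    {"The Crown", "Salt Fat Acid Heat", "Chef's Table"},
]

# The "model" is nothing more than co-occurrence counts over that data.
co_occurrence = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(watched, k=2):
    """Suggest the k titles most often co-watched with what this person has seen."""
    scores = Counter()
    for seen in watched:
        for (a, b), count in co_occurrence.items():
            if a == seen and b not in watched:
                scores[b] += count
    return [title for title, _ in scores.most_common(k)]

print(recommend({"The Crown"}))  # a statistical correlation, not "intelligence"
```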

So really, even people working within artificial intelligence acknowledge that the term AI is largely a marketing term, and that the technologies involved are increasingly sophisticated techniques of data analysis. In order to work, they require enormous amounts of data- data that's been well formatted through various avenues- and a setting where it's possible to generate models, try them out, see how they work, and then assess how well they work. Both the input and the output are thoroughly human based.

So the curation of data, the formatting of data, the training of models to recognize particular categories of objects- all of that has to be done by humans. And then the assessment of the significance and validity of what is generated also has to be done by humans. So really, AI is data analytics. The term AI is extremely mystifying, and that mystification makes it possible to suggest that there's something radically different and quite magical going on in these technologies. Even within the field of artificial intelligence, the ideas of Nick Bostrom and others who talk about so-called artificial general intelligence or the singularity- this idea that AI is going to achieve more than human capacity- are highly questioned by a lot of people working in the field. Most people working in the field recognize the limits of these technologies. And again, we could talk more about that.

But it seems quite clear to me that at least a large number of the people within the Defense Department who are on board with these initiatives don't themselves really understand what the technology is, but they assume somebody else does. And they're, again, I think, intimidated and mystified, and don't want to actually reveal the limits of their own understanding of what's really going on.

Justin Hendrix:

So one of the things they no doubt have in mind is their position on artificial intelligence vis-a-vis China, which is of course the enemy du jour and seems to be the main concern of this report. We know that the Chinese are spending billions on developing the various technologies that we may lump together as AI. Why do you question that? Why do you question this idea that we're in an arms race with China?

Lucy Suchman:

Well, I think arms races are interesting phenomena. Arms races are represented, within the discourses about them, as some kind of naturally occurring phenomenon. But actually arms races are generated through those discourses and through the kinds of investments that they generate. And so I think the premise that we are in an arms race- and even more broadly an economic race- with China is a self-serving kind of argument. If you are someone who wants to see increasing investments in artificial intelligence or any other technology, to frame the situation as an arms race makes it imperative that that investment happens. It makes it something that isn't a matter of choice, but a matter of necessity. And I think, again, this completely places outside the frame the question of what possibilities there might be to de-escalate this competition in the interest of all parties, and of whose interest it serves that we are in such an arms race.

And throughout the history of framing our international relations in terms of arms races, that framing has always served the interests of those who develop weapon systems. The arms trade around the world is very much served by the idea that we're in an arms race and therefore must make those investments.

Justin Hendrix:

Well, that leads on to your next point- the idea that any limits, any threats, any vulnerabilities of our current AI capabilities are something we simply need to spend more money on. So this was a government-funded commission report that seems to really do the work of suggesting the government needs to spend loads more on these issues.

Lucy Suchman:

That's right. Yeah. The idea is that any questions that arise around either the limitations of artificial intelligence or the vulnerabilities of existing artificial intelligence technologies, rather than giving pause to the idea of further investment, are taken as demonstrating that, well, of course, we must invest more. So this might be a good time to go back to this question of what AI is and what kinds of limits matter in this context.

I was talking about the tremendous need for data, and for training data, in AI. Artificial intelligence really is based on either systems of categorization or statistical analysis of correlations that are identifiable within large amounts of formatted data. If we go to the question of categorization, the kind of classic example that's offered for AI- if we're talking about, say, image analysis- is the difference between a dog and a cat. We have a very strong ability to recognize them; those are stable categories. Animal taxonomies are some of the oldest taxonomies that we have, and these are domestic animals with which we're very familiar. So these are two very stable categories. We can sort creatures in a set of images into one or the other of these two categories without having to specify exactly what it is that we are using as the basis for that recognition. And we can get people- humans- to do an enormous amount of this. There are systems like Amazon's Mechanical Turk- piecework kinds of outsourcing systems where you get people to identify images and sort them into different categories.

So we can get massive amounts of training data, and then these technologies basically translate those collections- the images of cats and the images of dogs- into mathematical parameters, things like the pointedness of ears, so that the images can be analyzed statistically in computationally tractable ways. They really don't have anything to do with recognizing dogs and cats. They have to do with running analysis over these collections of images, looking for the regularities in each of the categories. So there we have dogs and cats.
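As a deliberately toy illustration of what "translated into mathematical parameters" means, here is a sketch in which human-labeled examples are reduced to two invented numeric features (ear pointedness, snout length), and a new image is assigned whichever label's average it is numerically closest to. The features, numbers, and labels are all made up for illustration- real systems derive their parameters from pixels- but the underlying logic is the same: statistical regularities over human-labeled collections, not recognition.

```python
# Toy "training data": humans (e.g., piecework labelers) have already sorted
# examples into two stable categories and reduced each image to numeric features.
# The features are invented for illustration: (ear_pointedness, snout_length).
labeled_examples = {
    "cat": [(0.90, 0.20), (0.80, 0.30), (0.95, 0.25)],
    "dog": [(0.40, 0.70), (0.30, 0.80), (0.50, 0.75)],
}

def centroid(points):
    """Average feature vector for one human-defined category."""
    return tuple(sum(dim) / len(points) for dim in zip(*points))

centroids = {label: centroid(points) for label, points in labeled_examples.items()}

def distance(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(features):
    """Return the label whose category average is numerically closest.

    Nothing here "knows" what a cat or a dog is; it only compares numbers
    derived from collections that humans labeled in advance.
    """
    return min(centroids, key=lambda label: distance(features, centroids[label]))

print(classify((0.85, 0.30)))  # prints "cat": a regularity, not recognition
```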

Now, try mapping that onto categories like combatants and non-combatants, or terrorists and non-terrorists. These are fundamentally different kinds of categories. The uncertainties, the questions around how someone gets placed into one or another of those categories, are profound, and the consequences of that placement are literally matters of life and death. There's very little acknowledgement of those differences across the kinds of entities that are supposedly being recognized by these systems. You could argue that we just need to put more money in and eventually we'll be able to stabilize the categories of combatant and non-combatant, or of who is a terrorist, but I don't think it works that way. In fact, there are good arguments that it is simply unfeasible- if not unethical, immoral, and illegal- to try to automate the process of making those kinds of categorical identifications.

Justin Hendrix:

And of course, we know in the history of warfare that mistakes in identifying a signal, a target, or an individual- or a wrong assessment of what has happened- have often led to the worst possible complications. You mentioned Vietnam; in part we got into Vietnam over a possible misreading of sonar signals.

Lucy Suchman:

That's right. And once we get into a situation like Vietnam, or like those characteristic of contemporary counter-terrorism operations, we're no longer dealing with situations where there are identifiable combatants in uniform operating in designated areas. We have incredibly complex relations between the so-called insurgents and civilian populations. And we know- particularly from operations that are remote, where we're fighting through drones and with the use of various kinds of surveillance-based systems for detecting what's going on at a distance- that the percentage of those killed who are actually known terrorists, that is, people who pose an imminent threat to the United States and have been positively identified, as opposed to unknown people or people identified as civilians, is somewhere around 2%.

So enormous numbers of people have been killed who we literally don't know who they are. They might have been identified through some kind of so-called pattern-of-life analysis as being associated with people who are on the list of known threats to the United States. Or they may simply have stood in some relation to those people- been in their households or compounds. So these are very imprecise forms of targeting, even though they're spoken about as precision weapons. I made this argument at one of the public meetings of the Defense Innovation Board: the language of precision conflates, on the one hand, the precision with which a weapon, once a designated target has been set, will hit that target, and on the other hand, the precision in the identification of who should be a target. And the precision in the first sense is not in any way reflected in what's going on in the second sense. We're in a situation where, more than ever, there are uncertainties around who is being identified as a target for our operations.

And of course, people point to previous bombing operations that have killed enormous numbers of civilians. But the fact that we may be killing fewer people through these operations than if we were doing carpet bombing is not an argument for continuing these operations, if they are in fact violating international humanitarian law in their indiscriminate killing of people on the ground.

Justin Hendrix:

Well, on that, your last point is around whether autonomous weapons systems are in fact inevitable. You note that the commission warns that, "AI will compress decision timeframes from minutes to seconds, expand the scale of attacks and demand responses that will tax the limits of human cognition." That's really terrifying- this idea of being in a war that we humans can't possibly comprehend, or respond to on timescales that allow our faulty brains to understand what is happening. Why is this one of the premises you find improper?

Lucy Suchman:

Absolutely. What we're really talking about here is automation, and the ways in which the increasing automation of weapon systems increases the speed at which things happen and further shuts down the space for deliberation- for having second thoughts about what's going on, or even making judgments about what's going on. Again, this is a case where there's a kind of self-fulfilling prophecy. The argument is that automation- and in this case, more specifically, AI- is going to further accelerate the speed of war fighting. Now, we might in response to that say: okay, then we really need to back off of these automation projects; we need to do everything we can to slow down the speed of war fighting and to create multilateral agreements that will create greater space for negotiation and deliberation. But instead, somehow the idea is that because automation is going to make things go faster, we need more automation. And you can see the self-fulfilling prophecy built into that.

And this is one of the most worrying things to me about the final report. There's another quote from the report: "AI promises that a machine can perceive, decide and act more quickly in a more complex environment with more accuracy than a human." Well, we have no evidence for this. The idea that somehow an artificial intelligence enabled weapon system will be able to perceive, decide and act more quickly and with more accuracy than a human- there's a lot of counter-evidence to that. And there is growing international debate over the legality and the morality of autonomous weapons systems, which would be weapons systems in which the identification of a target, as well as the subsequent use of force, would be automated.

So the automation of target identification is the next logical step in this progressive process of automating weapon systems- and the most questionable, the most debatable, and in no way inevitable. These are matters of political will, and there are debates going on within the United Nations and other bodies at the moment. There's an enormous amount of agreement: about 30 countries have now agreed that autonomous weapons systems- that is, systems that can identify targets and automatically strike- should be banned. And yet the United States is resistant to that. It does not support a prohibition on lethal autonomous weapons systems; it wants the freedom to continue this process of automation, in spite of the fact that, even by its own admission, this makes us less secure by leading to these accelerated timeframes.

Justin Hendrix:

At the beginning of our conversation, you brought up Project Maven. You brought up the fact that this commission was chaired by the former Google CEO, and its ties to the tech industry. There is a line of argument in some tech companies that we need our tech companies to be as big as possible to advance AI. We saw Mark Zuckerberg prepared to argue that Facebook needs to carry on its rapacious growth and push toward scale so that it can continue to develop artificial intelligence to help make America nationally competitive. But we saw tech employees certainly pushing back when that work was associated with defense interests, in particular at Google. What do you see as the future of this? The activism around it- things like the Campaign to Stop Killer Robots, NGOs, the government work that you're talking about- but then also tech employees themselves?

Lucy Suchman:

I think it's tremendously important. The growing voice of tech workers in these areas- and in a lot of areas- is absolutely crucial. Project Maven is a project to automate the analysis of full-motion video from surveillance drones in order to identify objects on- well, the reference is always to objects on the ground, but of course those objects are vehicles, buildings, and people. So again, the problem is that there is way more surveillance data in the form of full-motion video than any humans can possibly analyze and translate into so-called actionable intelligence, and Project Maven is a project to automate the analysis of that video footage. The Google employees who discovered that technologies they were working on were going to be the basis for that system objected that that was not anything they had signed up for, and that they had no confidence that those technologies could be used in a way that was legally and morally and ethically sound.

And their pushback was sufficient that Google then withdrew from the project. And this really, I think, shook things up. There was a kind of dismissal of it on the one hand, but on the other hand, it was clear that it really shook people up in the DOD, including Shanahan, who was the head of that project. And it led to a wave of ethics questions and the development of ethics guidelines- Google developed its own ethics guidelines, and the Defense Innovation Board was then charged with developing ethical guidelines for the DOD. So you can see the effects of these actions. I think they're tremendously important, and there's a lot of power in the hands of the people who actually work in these companies.

Justin Hendrix:

Are you optimistic about these issues in the long term?

Lucy Suchman:

Well, it is easy to feel overwhelmed, because we know from the history of the Military-Industrial Complex in the United States since the middle of the last century that this is incredibly deeply entrenched, and that the vested interest in the perpetuation of all of the things that frame the National Security Commission on AI report is very deep and very hard to undo. This is, to me, why it's so important to expand the frame of the discussion: to call out the things that are being treated as if they were unquestionable, open them up to question, and open up spaces for thinking about what alternatives there could be to a national security strategy based in US military and technological dominance- what possibilities there could be for greater investment in international diplomacy, in humanitarian aid, in fundamentally different ways of thinking about our security, about US security.

And there are people doing great work on this. People like Andrew Bacevich, President of the Quincy Institute for Responsible Statecraft, who does wonderful work on rethinking what US foreign policy could look like. Or Philip Alston, former United Nations Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions. These are people who are able, from a very deep knowledge of how things work, to articulate both the problems with our current ways of approaching questions of national security and what alternatives could look like. And there's very good critical work going on around artificial intelligence.

So as with all such efforts at social change, one has to have faith that small steps are crucial, even if it sometimes feels as though it's an overwhelming transformation to try to achieve.

Justin Hendrix:

Lucy, thank you for talking to me today.

Lucy Suchman:

You're very welcome.
