Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Watch “The Artificial Intelligence Era: What will the future look like?”

May 11th, 2021 / in AI, CS education, pipeline / by Khari Douglas

Recently, the Bulletin of the Atomic Scientists — a media organization that “equips the public, policymakers, and scientists with the information needed to reduce man-made threats to our existence” and is famous for its Doomsday Clock — held a virtual program titled “The Artificial Intelligence Era: What will the future look like?”

Nadya Bliss, a Computing Community Consortium (CCC) Executive Council member and the Executive Director of the Global Security Initiative at Arizona State University, moderated the program. The speakers were Eric Horvitz, Chief Scientific Officer at Microsoft and a former CCC Council member, and Mary (Missy) Cummings, the director of Duke’s Humans and Autonomy Laboratory and a co-organizer of the CCC’s workshop series on Assured Autonomy. The conversation focused on the recent Final Report of the National Security Commission on Artificial Intelligence (NSCAI) and the potential benefits and harms of AI.


To begin the session, Horvitz, who is a commissioner of the NSCAI, explained the motivation for the commission and its report. In 2018, in response to the rapid improvement and increased real-world deployment of AI systems, Congress passed the John S. McCain National Defense Authorization Act for Fiscal Year 2019, which called for the creation of “an independent Commission to review advances in artificial intelligence, related machine learning developments, and associated technologies.” 

Thus the NSCAI was born. Over the past two years, the Commission studied the AI landscape, and in March 2021 they released their final report, which “presents an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict” (p. 8).  

Following this introduction, Bliss asked both speakers about the current limitations of AI. Cummings expressed concern about the potentially lethal outcomes that can occur when we misunderstand the capabilities of these systems. Cummings was one of the first female fighter pilots in the U.S. and said that during her three years flying F/A-18 Hornets, one person died per month, on average, because of poor human-machine interaction. She cautioned that while AI has achieved human-level parity in some areas and by some definitions, there are still major gaps in understanding among both those who use and those who create the technology. She went on to describe neural-network-based AI as “dark magic” that we don’t fully understand and called uncritical belief in this “magic” very dangerous.

Horvitz agreed that while these systems work for very specific use cases, we have yet to understand how to use AI more broadly to augment human decision-making in the open world. He also emphasized the need to define ideas like fairness and argued that when government agencies use AI systems they should be transparent about what objectives those systems are attempting to optimize, work with multi-party stakeholders to develop these objectives, and then continually test these assumptions and objectives to ensure fairness. 

Following the discussion on the limitations of AI, Bliss asked how we might create an “AI peopleforce,” an educated population that understands the limitations of AI. Horvitz said there was concern among the Commission that the people using and relying upon these systems might not fully understand them, so the report calls for training programs to improve the understanding of topics like statistics and data analysis. Horvitz went on to say that misunderstanding of AI’s limitations is not the only concern — AI could also be used maliciously and the Commission is worried about how attack surfaces might grow as these systems are deployed. There is also concern about potential AI-enhanced cyberattacks and the possible weaponizing of mis- and disinformation to enhance societal tensions. 

Despite these fears, both speakers ended the main program on a more positive note. Cummings said that she is generally optimistic about the future of AI — like all technologies, it has the potential for abuse, and technology developers will have to continually work to overcome those challenges. She argued that the biggest problem in this area is education and technical literacy, both about AI and about computers in general. To improve this, Cummings said she believes “that in this country we should make all computer science classes at community colleges free for everyone.”

To close, Horvitz provided an example of the potential benefits of AI — he cited a 2016 Johns Hopkins study estimating that more than 250,000 people die in the U.S. per year because of avoidable medical errors. That would make medical errors the third leading cause of death in the country, after heart disease and cancer. Horvitz argued that AI could be used to provide safety nets and help detect errors in hospitals to avoid these deaths.

A few of the CCC’s activities are cited in the NSCAI report, including the 2019 AI Roadmap, a collaboration with the Association for the Advancement of Artificial Intelligence (AAAI) that lays out a plan to create a comprehensive national AI infrastructure and re-conceptualize AI workforce training, and the 2019 Evolving Academia/Industry Relations in Computing Research report, which surveyed recent trends in collaboration between academia and industry. The NSCAI report also cites the CRA/CCC CIFellows program as an example of how one might fund an external organization to administer a program to “invest in talent that will transform the field” (p. 443).

Read the NSCAI Final Report here, and watch the full “The Artificial Intelligence Era: What will the future look like?” program here.

