A Milestone Reached
New digital content provenance systems certify the origin and history of changes made to media and can alert viewers to tampering.

Strong democracies are built on an informed and compassionate citizenry. Disinformation campaigns, designed to mislead, persuade, and sow division, can distort our understanding and fan the flames of discontent. We just hit a significant milestone in technologies that can bring more trust to digital content. The members of the Coalition for Content Provenance and Authenticity (C2PA), including Adobe, Arm, BBC, Intel, Microsoft, Truepic, and Twitter, announced the release of a Content Provenance Specification and offered a preview of the new technology at a recent event. I was invited to kick off the summit with some framing thoughts about why I believe media provenance is so important in our quest to protect people and society from deceptive photos, videos, and audio engineered by malevolent actors.

Over the last five years, advances in machine learning and graphics have led to the general availability of surprisingly powerful tools for modifying and synthesizing digital content so realistic that it is difficult to distinguish fact from fiction. The proliferation of techniques for generating deepfakes, and for making lighter-weight edits and manipulations, threatens to cast doubt on the veracity of all digital media.

A year ago, we helped to found C2PA with other partners as a standards body pursuing methods that could bolster trust and transparency in digital content. While we had our eye on deepfakes, we also sought to address the challenges posed by other disinformation tactics, such as passing off old photos or video clips as breaking news, or dismissing photos of real events as fabrications.

The specification, the world’s first standard for certifying digital content provenance, will foster the creation of interoperable tools for media provenance, enabling content creators to cryptographically certify the source and history of changes made to digital assets, like images and videos, and to confirm that content has not been tampered with. The methods work by immutably embedding information about the media and its source and history as metadata that travels along with the digital content.
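The mechanism can be sketched in miniature. The following Python illustration is a simplified stand-in for the C2PA design, not the actual specification: it binds a SHA-256 hash of the content, together with source and edit-history fields, into a manifest and signs it (here with a symmetric HMAC key in place of the spec's certificate-based public-key signatures), so that any later alteration of either the bytes or the manifest breaks verification. All names, fields, and keys below are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; the real C2PA spec uses public-key
# signatures backed by X.509 certificates, not a shared secret.
SIGNING_KEY = b"demo-key"

def make_manifest(content: bytes, source: str, edits: list) -> dict:
    """Bind a hash of the content, its source, and its edit history
    into a signed manifest that can travel with the asset."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
        "edit_history": edits,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify(content: bytes, manifest: dict) -> bool:
    """Check that neither the manifest nor the content was altered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    # Re-hash the bytes: any edit to the content breaks the binding.
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()
```

In this sketch, editing a single byte of the asset, or rewriting any field of the claim, causes verification to fail, which is the core tamper-evidence property the paragraph above describes.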

Methods for certifying content provenance will fill a gap in our ability to detect deepfakes. Some have hoped that we might thwart the flow of synthesized or manipulated digital content by using AI methods to detect artifacts or irregularities. The problem is that our best AI-based techniques for creating deepfakes actually use our best detectors of fakes in the very process of generating the realistic content. Better detectors will only mean that the fabrications will get even more realistic. In short, we end up in an AI versus AI scenario, where AI cannot reliably win. Thus, we realized the need to develop a different kind of approach—and this led to innovation with the authentication of content provenance.
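The adversarial dynamic described above can be shown with a deliberately tiny toy model, not real deepfake tooling: a "generator" that treats a "detector" as its own training signal, keeping only the changes that lower the detector's fake score. Both functions here are hypothetical stand-ins invented for illustration.

```python
import random

def detector(sample):
    """Toy 'deepfake detector': flags a sample as fake when its mean
    drifts from the statistics of real data (assumed mean 0.5).
    Higher score means more likely fake."""
    mean = sum(sample) / len(sample)
    return abs(mean - 0.5)

def generate(steps=200, size=16):
    """Toy 'generator': starts from an obviously fake sample and keeps
    any random perturbation that LOWERS the detector's score -- i.e. it
    uses the detector itself to make its output pass as real."""
    sample = [0.9] * size  # obviously fake starting point
    for _ in range(steps):
        candidate = [x + random.uniform(-0.05, 0.05) for x in sample]
        if detector(candidate) < detector(sample):
            sample = candidate
    return sample
```

After a few hundred steps, the generator's output scores far lower on the very detector meant to catch it, which is why a stronger detector alone just yields a stronger generator.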

The new content provenance methods will not solve the deepfake and broader disinformation threats on their own. It’s going to take a multi-pronged approach: education aimed at media literacy, awareness, and vigilance; investments in quality journalism, with trusted reporters on the ground locally, nationally, and internationally; and new kinds of regulations that make it unlawful to generate or manipulate digital content with an aim to deceive. However, I see today’s announcement, and the technologies that will flow from it, as a significant step forward in injecting a new layer of resilience and trust into the digital content that we see.

None of this would be possible without the cooperation and productivity of the multi-party C2PA standards body, which initially brought together top talent from two complementary projects, Project Origin and the Content Authenticity Initiative (CAI)—and then expanded to include additional stakeholders, including hardware producers, broadcasters, news agencies, and providers of software and services.

We recognize that restoring trust in digital content is an ambitious goal that will require diverse perspectives and participation. We are encouraged to see industry, non-profit organizations, research institutions, and governments embracing the content provenance approach. 

Progress is already being made in the United States and in other parts of the world. For example, Senators Rob Portman (R-Ohio) and Gary Peters (D-Mich.) co-sponsored the bipartisan Deepfake Task Force Act earlier this year. The National Security Commission on AI (NSCAI), a panel on which I served, examined and reported on ways that our nation could leverage digital media provenance to mitigate the rising challenge of synthetic media. And in Europe, several national and European Union initiatives have been launched to introduce digital content provenance into legislation.

I believe that content provenance will have an important role to play in fostering transparency and fortifying trust in what we see and hear online. I encourage everyone to review the presentations from the C2PA virtual summit, which featured experts from government, academia, media, and technology discussing this approach, its benefits, and its limitations. Technology previews show how the methods work and how their use can be scaled across the internet. I particularly urge those involved in the business of capturing, creating, and transmitting news and information to take a careful look at this exciting development.

Rebecca Finlay

CEO at Partnership on AI

Great read recognizing the people and organizations who have worked together to create an international community focused on the authenticity of online content -- a crucial component in building a healthy information ecosystem. The work of Project Origin and CAI, together with the recent release of the C2PA standard, are leading examples of this work. Looking forward to continuing to support this effort at Partnership on AI through our Media Integrity Steering Committee of Partners.

Irina Raicu

Director of the Internet Ethics Program at the Markkula Center for Applied Ethics, Santa Clara University

Thanks for sharing Eric, this was a really interesting read 👍
Hirendrasinhji Rana

Founder & MD at Indo Nordic Strategic Association (NPO/Think-Tank), India

Thanks for sharing #Indonordicassociation(dot)org
