
Understanding The Times

Satellite Images and Shadow Analysis: How The Times Verifies Eyewitness Videos

Visual investigations based on social media posts require a mix of traditional journalistic diligence and cutting-edge internet skills.


In an effort to shed more light on how we work, The Times is running a series of short posts explaining some of our journalistic practices. Read more of this series here.

Was a video of a chemical attack really filmed in Syria? What time of day did an airstrike happen? Which military unit was involved in a shooting in Afghanistan? Does this dramatic image of glowing clouds really show wildfires in California?

These are some of the questions the video team at The New York Times has to answer when reviewing raw eyewitness videos, often posted to social media. It can be a highly challenging process, as misinformation shared through digital social networks is a serious problem for a modern-day newsroom. Visual information in the digital age is easy to manipulate, and even easier to spread.

Conducting visual investigations based on social media content thus requires a mix of traditional journalistic diligence and cutting-edge internet skills, as can be seen in our recent investigation into the chemical attack in Douma, Syria.

The following provides some insight into our video verification process. It is not a comprehensive overview, but highlights some of our most trusted techniques and tools.

We review numerous videos on any given day. In addition to news agencies, we scour social media sites such as Twitter, Facebook, YouTube and Snapchat for news-relevant content. We also access eyewitness videos through WhatsApp, either by directly interacting with witnesses on the ground or by joining relevant groups.

All of this content needs careful vetting.

Image: Using WhatsApp, a Syrian medic was one of six sources who confirmed the location of a tunnel entrance to a hospital near the site of a chemical attack in Syria. Credit: Malachy Browne

Our verification process is divided into two general steps: First, we determine whether a video is really new. Second, we dissect every frame to draw conclusions about location, date and time, the actors involved and what exactly happened.

A major challenge for journalists today is to avoid using “recycled content.” This challenge is exacerbated in breaking news situations.

When the U.S. launched airstrikes against Syria the night of April 13, little footage was initially available on wires and social media. As our team closely monitored Twitter that Friday night, a video started to circulate that showed a series of explosions, reportedly in Damascus. What the video actually showed, however, was fighting in the Ukrainian city of Luhansk in February 2015.

This did not stop major broadcasters from using it in their coverage.

Whether it’s about natural disasters, school shootings or armed conflicts, we see this sort of misattributed content all the time.

And the same rules apply to nonviolent events: When news circulated about a ski lift that went rogue in Georgia in early 2018, we had to make equally sure it was not an old event.

The more dramatic the footage, the more careful we have to be.
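One common first check for recycled content, and not specific to any newsroom's internal tools, is to pull still frames out of a suspect video and run them through reverse image search engines to see whether the same imagery circulated online before the claimed date. Below is a minimal sketch that samples frames with ffmpeg (assumed to be installed and on the PATH); the filename suspect.mp4 is a placeholder.

```python
import subprocess
from pathlib import Path


def extract_keyframes(video_path: str, out_dir: str = "frames", fps: float = 0.5) -> list:
    """Sample still frames from a video (by default one frame every two
    seconds) so they can be uploaded to reverse image search engines."""
    Path(out_dir).mkdir(exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", video_path,
            "-vf", f"fps={fps}",        # output sampling rate in frames per second
            "-q:v", "2",                # high JPEG quality
            f"{out_dir}/frame_%04d.jpg",
        ],
        check=True,
    )
    return sorted(Path(out_dir).glob("frame_*.jpg"))


if __name__ == "__main__":
    frames = extract_keyframes("suspect.mp4")  # placeholder filename
    print(f"Extracted {len(frames)} frames for reverse image search")
```

The resulting stills can then be uploaded by hand to reverse image search services; the search itself is not automated here.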

We strive to establish the provenance of each video — who filmed it and why — and ask for permission to use it. In the perfect scenario, this involves obtaining the original video file or finding the first version of the video shared online, vetting the uploader’s digital footprint and contacting the person — if it’s safe to do so.

Once we believe a video is genuine, we extract as much detail as possible. Since situations of armed conflict and severe state repression often make it challenging to connect with sources, for logistical or security reasons, we’ve developed skills and methodologies to independently confirm or corroborate what’s visible in a video.

When we receive footage from wires such as The Associated Press or directly from a source’s cellphone, our job is easy: such content arrives with metadata provided by the agency or left intact by the device. That embedded information details what camera or cellphone was used and the date and time, and it sometimes even includes exact coordinates that reveal the place.

Content from social media and messaging apps, on the other hand, typically arrives with its metadata altered or removed. What remains can still reveal whether a video was downloaded from Facebook or Twitter, which could debunk false claims from people who say they filmed an event themselves. We thus have to look for visual clues regarding location and date in the video itself.
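For files obtained directly from a source, one quick way to inspect whatever metadata survives is ffprobe, which ships with ffmpeg. This is a generic sketch rather than The Times’s actual tooling, and clip.mp4 is a placeholder filename; fields such as creation_time or location appear only if the recording device wrote them and no platform stripped them out.

```python
import json
import subprocess


def read_video_metadata(video_path: str) -> dict:
    """Dump a video's container and stream metadata as JSON using ffprobe."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "quiet",
            "-print_format", "json",
            "-show_format", "-show_streams",
            video_path,
        ],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


if __name__ == "__main__":
    meta = read_video_metadata("clip.mp4")  # placeholder filename
    tags = meta.get("format", {}).get("tags", {})
    # These tags exist only if the device recorded them and the file
    # has not been re-encoded by a social media platform.
    print("creation_time:", tags.get("creation_time", "not present"))
    print("location:", tags.get("location", "not present"))
```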

Videos with wide shots often reveal landmarks such as mosques, bridges or distinct buildings, or show geographic features such as mountains or rivers. All these features can be matched up with reference materials such as satellite images, street views and geotagged photographs to determine the approximate or exact location of an event. Most recently, I used Google Earth to map out the drone attack against President Nicolas Maduro of Venezuela.
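One way to sanity-check such a match, separate from the tools named above, is simple map geometry: if two landmarks appear in the same shot, the bearing and distance between them on a satellite map should be consistent with the camera’s line of sight. The sketch below uses standard great-circle formulas with made-up coordinates for two hypothetical landmarks.

```python
import math


def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees from north) and haversine
    distance (km) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)

    # Initial bearing
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(x, y)) + 360) % 360

    # Haversine distance, Earth radius ~6371 km
    dphi = phi2 - phi1
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    distance_km = 2 * 6371.0 * math.asin(math.sqrt(a))
    return bearing, distance_km


if __name__ == "__main__":
    # Made-up coordinates for two hypothetical landmarks seen in one shot
    brg, dist = bearing_and_distance(33.5102, 36.3065, 33.5148, 36.3221)
    print(f"bearing: {brg:.1f} deg, distance: {dist:.2f} km")
```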

Video: Geolocating a video as part of the Douma chemical attack investigation. Credit: The New York Times

While this may sound easy, often all we see in a video is street lamps, traffic signs or trees. In one of my most challenging geolocation efforts, I used lamps and other characteristics of a street in a blurry and shaky cellphone video to identify the exact street corner of an extrajudicial execution in Maiduguri in northeast Nigeria.

In the most challenging situations, we might also call on the public to help.

It’s important to pay attention to the audio as well, as local dialects might help corroborate the general location.

Determining the exact date and time of an incident is more challenging. We can use historical weather data to detect inconsistencies in a video, but of course that does not provide an exact date.

To corroborate the specific time of day, we can conduct shadow analysis. When we reviewed footage from the helmet camera of a U.S. soldier killed in Niger in October, I used a tool called SunCalc to confirm, based on the short shadows, that the ambush happened around noon.
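SunCalc handles this interactively, but the underlying calculation is standard solar geometry: from a date, time and location you can compute the sun’s elevation, and from that the ratio of a shadow’s length to the height of the object casting it. The sketch below uses a simplified textbook approximation rather than SunCalc itself, and the coordinates and timestamp are placeholders, not the verified details of the Niger ambush.

```python
import math
from datetime import datetime, timezone


def solar_elevation(lat_deg: float, lon_deg: float, when_utc: datetime) -> float:
    """Approximate solar elevation angle in degrees. Simplified model that
    ignores the equation of time; accurate to roughly a degree."""
    day = when_utc.timetuple().tm_yday
    # Approximate solar declination in degrees
    decl = 23.44 * math.sin(math.radians(360.0 / 365.0 * (day - 81)))
    # Local solar time: UTC hours plus one hour per 15 degrees of longitude
    solar_hours = when_utc.hour + when_utc.minute / 60.0 + lon_deg / 15.0
    hour_angle = 15.0 * (solar_hours - 12.0)  # degrees, zero at solar noon

    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    elevation = math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    )
    return math.degrees(elevation)


def shadow_ratio(elevation_deg: float) -> float:
    """Shadow length divided by object height for a given sun elevation."""
    return 1.0 / math.tan(math.radians(elevation_deg))


if __name__ == "__main__":
    # Placeholder: a point in western Niger around midday (local time is UTC+1)
    when = datetime(2017, 10, 4, 11, 0, tzinfo=timezone.utc)
    elev = solar_elevation(14.5, 2.3, when)
    print(f"sun elevation: {elev:.1f} deg, shadow/height ratio: {shadow_ratio(elev):.2f}")
```

A high sun elevation yields a shadow/height ratio well below one, which is what “short shadows around noon” looks like in the numbers.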

Finally, we take a close look at what else is visible in a video to draw conclusions about the event and the actors involved. We extract details on official insignia or military equipment. Our team identified over a dozen members of the Turkish president’s security detail who assaulted protesters in Washington, D.C., by doing a frame-by-frame analysis of multiple videos. And for a video that showed a U.S. soldier firing a weapon into the driver’s window of a civilian truck, our team identified the exact model of the shotgun and of the vehicle. This information was consistent with equipment used by U.S. Special Forces in Afghanistan.

Several projects are keeping our team busy. We have gathered security footage that — combined with shoe-leather reporting — has allowed us to reconstruct how a brutal murder in the U.S. unfolded. We are working with our international team to analyze a deadly crackdown on protests in one country and cut through the fog of war to distinguish real from fake atrocities spread on social media in another.

I also recently started researching how the emerging issue of deep fakes, or media generated with the help of artificial intelligence, will impact our newsroom. As these are computer-generated media, visual inspection and the verification process described above will not suffice. Instead, we have to build up the technical capacity to win the coming artificial intelligence-powered “arms race between video forgers and digital sleuths.”

You can watch more of our videos on our website, or by subscribing to our YouTube channel. You can also reach out to our team with feedback and story ideas at visual.investigations@nytimes.com.


Follow the @ReaderCenter on Twitter for more coverage highlighting your perspectives and experiences and for insight into how we work.

Christoph Koettl is a visual investigations journalist, specializing in geospatial and open-source research. He is an expert on armed conflicts, human rights and social media verification.

A version of this article appears in print in Section A, Page 2 of the New York edition with the headline: How We Verify Eyewitness Videos.
