What to Watch for in the Coming Wave of “Deep Fake” Videos
By now, you have probably heard some chatter about “deep fakes” — the emerging technology that inspired recent headlines like “We Are Truly F—ed” and “Deep Fakes: A Looming Crisis For National Security, Democracy and Privacy?”
A deep fake is a video that features one person’s face plastered onto another’s body. Face-swapping itself is nothing new, but “deep fake” videos can be surprisingly convincing, and the software to make them has spread rapidly, prompting headlines like the ones above.
The widely circulated video of what appears to be former President Barack Obama giving a public service announcement (PSA) on fake news — but is actually filmmaker Jordan Peele putting words in Obama’s mouth — is surprisingly realistic.
As speakers at a recent SXSW panel said, videos tap deeper into the human psyche than still images, and so have the potential to be much more impactful.
Products like FakeApp are already available to make these fake videos, and audio manipulation tools will soon be on the market. The audio tools could potentially be even more dangerous, according to Donie O’Sullivan, a producer on CNN’s Social Discovery team.
“If you can convincingly make it sound like someone said something that they didn’t say, and you post that audio,” O’Sullivan told GIJN, “that’s something that’s far more difficult to vet.”
The tech behind these “deep fake” manipulations is so new, said Bellingcat lead researcher and trainer Aric Toler, that the groups that usually focus on knocking down misinformation attempts are struggling to catch up.
But that doesn’t mean there aren’t a bunch of people out there trying.
Where Deep Fake Prevention Efforts Are Heading
Tools that can flag potential “deep fakes” are almost non-existent right now, but academics, journalists and tech companies are trying to catch up. Google announced that it is developing a tool to identify altered videos, but hasn’t said when it will come out, or what its capabilities will be.
A team at the Visual Computing Lab in Munich developed FaceForensics, a program that successfully identified altered videos in raw format, but was far less successful on videos compressed for the web.
Another team in Italy proposed a technique to spot irregularities in blood flow in a person’s face.
Gfycat, the website best known for hosting funny GIFs of cats and celebrities, might have the most advanced mechanisms so far. Wired magazine profiled the artificial intelligence tools Gfycat is using to trawl for deep fake content on its own site. One of them masks out the face and searches for a video that matches the background. Another flags a low-quality version of a celebrity video when a higher-quality version exists elsewhere, on the theory that the lower-quality copy could be a deep fake. It’s not a smoking gun, but it’s at least one way technology is being used to help identify these fakes.
But those tools are only being used internally, and Gfycat hasn’t announced any plans to make them public.
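Gfycat hasn’t published that code, but one family of techniques behind this kind of near-duplicate matching, perceptual hashing of frames, is easy to experiment with. Here is a minimal sketch using the open-source Pillow and ImageHash Python libraries; the frame file names are hypothetical, and this illustrates the general idea, not Gfycat’s actual system:

```python
from PIL import Image
import imagehash

# Perceptual hashes change very little when an image is re-encoded,
# resized or lightly compressed, so a small Hamming distance between
# two hashes suggests the frames show the same scene.
hash_a = imagehash.phash(Image.open("frame_from_suspect_video.jpg"))
hash_b = imagehash.phash(Image.open("frame_from_known_original.jpg"))

distance = hash_a - hash_b  # Hamming distance between the two hashes
print("likely the same scene" if distance <= 10 else "probably different scenes")
```

If a suspect clip’s frames match a known original almost everywhere except the face, that is worth a closer look.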
The team in Munich released their script on GitHub, but will only share their dataset with people who sign a Terms of Use form.
Licensing “deep fake” software, or restricting access to it, is one of the solutions a group of researchers proposed in a recent study on the malicious use of artificial intelligence. The research team, which includes members of the Electronic Frontier Foundation, Oxford University and other institutions, focused on theoretical ways forward, since the entire topic is so new.
A team at the University of Missouri is developing a program, called VeriPixel, that would use blockchain technology to attach a footprint to images taken on smartphones or cameras.
“You need a mechanism that will tell you that yes, a file is the original,” William Fries, one of the developers, told GIJN. “The benefit of blockchain is that… no matter what happens, that historical information is still accurate.”
But VeriPixel, like the other programs, is still in the development stage.
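VeriPixel’s internals haven’t been made public, but the basic idea Fries describes, an unforgeable record of the original file, can be sketched in a few lines of Python. Below, the digest of a hypothetical photo is stored in a plain dictionary standing in for the blockchain ledger; this is an illustration of the concept, not VeriPixel’s code:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a media file's raw bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At capture time, the fingerprint would be written somewhere
# tamper-evident (a blockchain, in VeriPixel's case). A dict stands
# in for that ledger here.
ledger = {"photo.jpg": fingerprint("photo.jpg")}

# Later, anyone can check whether a copy still matches the original:
# any edit to the file, however small, completely changes the digest.
print(fingerprint("photo.jpg") == ledger["photo.jpg"])
```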
In the meantime, journalists are stuck with what they’ve always relied on: critical thinking. Backgrounding, finding corroborating evidence and, most importantly, doing plain old-fashioned reporting are always the best ways to approach a sketchy source, “deep fake” or not.
While we wait for better tools to come out, here are some steps journalists can take when faced with a potentially “deep fake” video:
Watch for Common Shortcomings
1. Examine the Face and Movements
Right now, one of the advantages journalists have is that “deep fake” productions aren’t quite good enough to pass the “eye test,” Bellingcat’s Toler said.
Since the essence of a deep fake is a human face pasted onto another body, the area around the face is the most important place to look for discrepancies. Does the lighting seem off? What about the shadows, or the movements of the mouth, chin and jawline? Do they look like the natural movements of a human being?
The makers of these videos haven’t yet been able to perfectly emulate how hair moves, Charlie Warzel, a senior tech reporter at BuzzFeed, told GIJN. He said to focus on the hair and the mouth for anything unnatural.
This article by How-To Geek offers some tips on spotting discrepancies in the facial features of a deep fake video. Here are a few things to watch for (the sketch after this list shows one way to put a number on blurriness):
- Flickering
- Blurriness on the mouth or face
- Unnatural shadows or light
- Unnatural movements, especially of the mouth, jaw and brow
- A mismatch in skin tone or body type between the face and the body
- Mouth movements that resemble bad lip-syncing
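One of those signs, local blurriness, can even be roughly quantified. The sketch below uses the open-source OpenCV library and the common variance-of-Laplacian sharpness measure on a saved frame; the file name and the mouth-region crop are hypothetical (real work would locate the face with a detector rather than hard-coded coordinates):

```python
import cv2

def sharpness(gray_region):
    # Variance of the Laplacian is a standard sharpness measure:
    # lower values mean blurrier pixels.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

frame = cv2.imread("frame_00120.png")  # hypothetical extracted frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Rough crop of the lower-center of the frame, standing in for the
# mouth area of a centered, head-and-shoulders shot.
h, w = gray.shape
mouth = gray[int(h * 0.6):int(h * 0.85), int(w * 0.35):int(w * 0.65)]

# A mouth region far blurrier than the frame overall is a red flag.
print("whole frame:", sharpness(gray))
print("mouth area:", sharpness(mouth))
```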
2. Play It Frame by Frame
Warzel recommends slowing down a video or pausing it several times — anything you can do, he said, to up your chances of spotting a discrepancy.
How to spot a “deepfake” video like the Barack Obama one, according to @cwarzel
1️⃣ Slow it down and look for glitches
2️⃣ Watch it again and see if it seems “too good to be true” pic.twitter.com/njQXTUsyYb
— AM to DM by BuzzFeed News (@AM2DM) April 18, 2018
Playing BuzzFeed’s fake Obama video at half-speed, for instance, reveals blurriness around the mouth and chin. “Obama’s” mouth also moves toward the side of his face in a way that it wouldn’t if it were a real human talking.
Warzel also recommended viewing the clip frame by frame — either pausing it multiple times in succession, or opening it in a video editing program like Final Cut.
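If you don’t have an editing program handy, a few lines of Python with the open-source OpenCV library will dump a downloaded clip to individual frames you can flip through at leisure; the file name here is hypothetical:

```python
import cv2

# Read a local copy of the suspect clip and save every frame as an image.
capture = cv2.VideoCapture("suspect_clip.mp4")
count = 0
while True:
    ok, frame = capture.read()
    if not ok:  # no more frames
        break
    cv2.imwrite(f"frame_{count:05d}.png", frame)
    count += 1
capture.release()

print(f"Saved {count} frames. Scan them for blurring around the mouth and jaw.")
```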
Corroborate the Event
1. Examine the Sharing Networks
If a video is convincing in the technical sense, O’Sullivan said, try to find another version of it online. Journalists can and should use the same tactics for vetting a deep fake as they would for any other questionable video.
Trace the video’s origin — who allegedly filmed it? Published it to the web? Shared it, and to whom? Did the source share it many times over, or to many people?
“Sometimes that’s a sign that it’s part of a coordinated campaign rather than somebody gathering and sharing real news,” Warzel said.
2. Cross-reference Facial Features with Other Sources
Luckily, Warzel said, right now deep fakes tend to feature celebrities or other public figures. This means images and video footage of the person are usually easy to obtain. Even something as small as the figure’s teeth could be the clue to unraveling a fake, he said.
Background the Video
GIJN’s earlier guide to vetting videos offers several techniques for investigating videos of all kinds, deep fakes included. For example:
1. YouTube Data Viewer or InVID
The YouTube Data Viewer will tell you the day and time a clip was uploaded, as well as the thumbnails that YouTube generated for that particular video. InVID is another option, and it works on videos as well as images.
This clip, claiming to be a timelapse of someone’s driveway during Hurricane Harvey, goes so far as to include a date in its title: “August 27 2017.” The Data Viewer shows the video was uploaded on August 31, 2017, and according to news sources, the hurricane hit in late August. So far, there is nothing to disprove what the uploader claims.
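If the Data Viewer is down or you prefer to script the check, the same upload metadata can be pulled with the open-source youtube-dl library; a minimal sketch, with a hypothetical video URL:

```python
import youtube_dl

url = "https://www.youtube.com/watch?v=XXXXXXXXXXX"  # hypothetical video

# Fetch the video's metadata without downloading the clip itself.
with youtube_dl.YoutubeDL({"quiet": True, "skip_download": True}) as ydl:
    info = ydl.extract_info(url, download=False)

# upload_date comes back as a YYYYMMDD string; compare it against the
# date of the event the video claims to show.
print("Uploaded:", info.get("upload_date"))
print("Title:", info.get("title"))
```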
If the source claims the video shows an event from last week, but it was uploaded to YouTube a few years ago, you have your answer. If not, move on to the next test.
2. Reverse Image Search
The Data Viewer also provides handy links for reverse image search, which queries Google with an image, rather than text, as the search term.
Applying Reverse Image Search to the thumbnails in the flood video returns news articles from the Chicago Tribune and the Houston Chronicle — both about flooding in Meyerland. That’s two data points that match our video. So far, so good.
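You can also kick off the search yourself. Google has historically accepted an image URL as a query parameter on its search-by-image endpoint, though the endpoint is undocumented and could change; the thumbnail URL below is hypothetical:

```python
import webbrowser
from urllib.parse import quote

# In practice, use one of the thumbnail URLs the Data Viewer surfaces.
thumbnail_url = "https://i.ytimg.com/vi/XXXXXXXXXXX/hqdefault.jpg"

# Opens a Google reverse image search for the thumbnail in your browser.
webbrowser.open(
    "https://www.google.com/searchbyimage?image_url=" + quote(thumbnail_url, safe="")
)
```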
3. Googling
It may sound like a no-brainer, but simply starting from square one, with a Google search, can work wonders. As Toler noted in this article on vetting videos, which used the example of a video supposedly showing a convoy of military equipment in Lithuania, googling the terms “training exercises” and “night” would have turned up the original video, throwing the new one into doubt.
In the case of the Hurricane Harvey timelapse, searching Google for variations on “Harvey,” “flood,” “driveway,” “timestop” and “timelapse” could accomplish the same. Checking video repositories like YouTube, Vimeo and Gfycat would be a good step as well.
4. Google Street View
A lot of deep fakes center on a person speaking, so they may not have a distinctive background. But backgrounds and locations can be cross-checked using a tool called Google Street View.
The uploader of the flooding footage, for instance, claims to be in the “Meyerland Neighborhood,” which, a quick Google search shows, lies in southwest Houston. Opening Meyerland in Google Maps, you can drag and drop “Pegman” (the little orange man in the bottom right) to get a street-level view of the locale.
In this case, we see wide streets with light gray pavement, manicured lawns and a narrow strip of grass between the sidewalk and the road. Another point in favor of the uploader.
Though this Street View certainly doesn’t prove the video authentic, it eliminates one more way of proving it inauthentic. You can run a similar test with Google Earth, to see whether the region is mountainous, grassy, flat and so on.
5. Video Metadata
You may be familiar with EXIF data, the metadata on image files that often reveals the location, the type of camera and other details. For videos, Daniel Funke, a fact-checking reporter at the Poynter Institute, recommends downloading the clip to your computer and running it through a tool called ExifTool.
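ExifTool is a command-line program, but it’s easy to script. A minimal sketch in Python, assuming ExifTool is installed and on your PATH; the file name and the tags printed here are illustrative:

```python
import json
import subprocess

# Ask ExifTool for all metadata on a downloaded clip, as JSON.
result = subprocess.run(
    ["exiftool", "-json", "clip.mp4"],  # hypothetical local file
    capture_output=True, text=True, check=True,
)
metadata = json.loads(result.stdout)[0]

# A few tags worth checking; absent tags may mean the file was scrubbed.
for tag in ("CreateDate", "GPSPosition", "Make", "Model"):
    print(tag, "=>", metadata.get(tag, "not present"))
```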
But, as with all metadata, you have to be careful: it could have been edited after the fact, scrubbed entirely, or the clip could be a re-recording of another video. Still, as with all these tools, each check lets us tick one more factor off the list and work toward a process of elimination.
For more tips on investigating videos, check out GIJN’s earlier Advanced Guide on Verifying Video Content and First Draft News’s guide.
Samantha Sunne is a freelance reporter in New Orleans, Louisiana, where she writes data and investigative stories. She also teaches digital tools to journalists around the world and publishes a newsletter called Tools for Reporters.