Mary Meeker pointed out in her Annual Internet Trends Report that we’re spending an average of 6.3 hours a day on digital media. The opportunity to influence how people think via what they see, read, and hear online is vast. The problem, though, is that our digital media timelines are going to be increasingly tested by manipulated, deceptive media known as deepfakes.
What is a deepfake? Deepfakes are made with an imaging technique that uses AI (specifically, a generative adversarial network, or GAN) to combine and superimpose images and video onto other images or video. The result, when done properly, is a fake video that convincingly passes as an original creation.
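To make that concrete, here is a rough, simplified sketch of the adversarial training loop behind a GAN. The network sizes, data, and hyperparameters below are placeholders, not any specific deepfake tool; it only illustrates the generator-versus-discriminator dynamic.

```python
# Minimal GAN training loop (conceptual sketch, not a real face-swapping model).
# The generator learns to produce images the discriminator cannot tell from real ones.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # placeholder sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid()
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, image_dim)              # stand-in for a batch of real face images
    fake = generator(torch.randn(32, latent_dim)) # images the generator invents

    # Discriminator: label real images 1, generated images 0.
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call its output "real".
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks compete, the generator's output becomes harder and harder to distinguish from authentic footage, which is exactly why deepfakes improve so quickly.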
A harmless example, which we featured in the article “5 Ways that Deepfakes will actually bring good into the world,” was a deepfake of David Beckham speaking 9 languages for a marketing campaign (Beckham doesn’t actually speak that many languages).
A more harmful example is Mark Zuckerberg sitting at a desk giving a sinister speech about Facebook’s power. Fortunately, it’s poorly executed, and most people were quick to realize it was a forgery.
Nonetheless, you can see that the stakes can become very high when deepfakes are involved. And with the 2020 Presidential Election right around the corner, the urgency to create defenses against deepfakes is heightening.
Building Defenses Against Deepfakes
Currently, the only two defenses we have as individuals against deepfakes are our contextual knowledge and skepticism.
Contextual knowledge includes our understanding of the person in the video: their personality, their likelihood to act a certain way, the situation they’re in, what they’re discussing, and what else is going on in their life. If something they’re saying doesn’t align with who we know them to be, a red flag should be raised in our heads.
Skepticism builds loosely on that contextual knowledge but operates from a larger worldview: when things sound too good to be true, or just downright unlikely, our skepticism kicks in.
Together, context and skepticism help any viewer quickly conclude that what Zuckerberg says in the aforementioned video doesn’t align with what we’d expect him to say.
Honestly, though, we cannot expect 3.8 billion Internet users worldwide to increase their contextual knowledge and media skepticism in order to spot deepfakes. It’s just not realistic.
DARPA is pouring resources into its Media Forensics program to identify when a deepfake is at play. But this project will probably be kept in-house to track down content that stands to harm the government.
In the article “Millions of Dollars Will Be Made on Digital Forgeries That Can’t Be Detected,” I talk about hardware solutions such as Amber Authenticate, which exist in the law enforcement space to verify whether body camera footage is original or has been altered. There’s a possibility this approach could one day be applied in a similar fashion to our everyday cameras, but that’s still unlikely.
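I can’t speak to Amber Authenticate’s internals, but the underlying idea of fingerprinting footage at capture time and re-checking it later is simple to illustrate. The function names and file paths below are hypothetical; this is a minimal sketch of hash-based verification, not the product’s actual design.

```python
# Sketch of hash-based footage verification: fingerprint a file at capture time,
# then re-hash it later and compare. Any single-bit alteration changes the hash.
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a video file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_unaltered(path: str, recorded_hash: str) -> bool:
    """True if the file still matches the hash recorded when it was captured."""
    return fingerprint(path) == recorded_hash

# Hypothetical usage: store fingerprint("bodycam_clip.mp4") the moment it is recorded,
# then call is_unaltered("bodycam_clip.mp4", stored_hash) before treating it as evidence.
```

The catch, of course, is that this only proves a file hasn’t changed since capture; it says nothing about footage that was fake from the start.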
Ultimately, this rests in the hands of Facebook, YouTube, Twitter, and the other distribution platforms. After all, we’re talking about companies with some of the most advanced engineers and algorithms out there.
A Stoplight for Deepfakes
Algorithms, not humans, will be the best detectors of deepfakes. Unfortunately:
In AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it: Why sniff out other people’s fantasy creations when you can design your own?
Drew Harwell, The Washington Post
Currently, DARPA is the frontrunner in detecting deepfakes. They, among other researchers, understand what to train their algorithms to detect:
Forensic researchers have homed in on a range of subtle indicators that could serve as giveaways, such as the shape of light and shadows, the angles and blurring of facial features, or the softness and weight of clothing and hair.
With one new method, researchers at the universities of California at Berkeley and Southern California built a detective AI system that they fed hours of video of high-level leaders and trained it to look for hyper-precise “facial action units” — data points of their facial movements, tics and expressions, including when they raise their upper lips and how their heads rotate when they frown.
Drew Harwell, The Washington Post
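The Berkeley and USC system isn’t something I can reproduce here, but the general recipe the Post describes, extracting per-frame facial-movement features and then training a classifier on footage of a specific person, can be sketched roughly. In the sketch below, extract_action_units() is a hypothetical stand-in for a real facial-landmark or action-unit library, and the data is dummy data; it only shows the shape of the approach.

```python
# Rough sketch of the "facial action unit" approach: extract per-frame facial-movement
# features for a known speaker, then train a classifier to separate authentic footage
# from impersonations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_action_units(video_frames: np.ndarray) -> np.ndarray:
    """Placeholder: return one feature vector per frame (lip raises, head rotation, ...)."""
    return np.random.rand(len(video_frames), 17)  # 17 action units, dummy values

# Hours of real footage of the target person vs. known fakes (dummy arrays here).
real_features = extract_action_units(np.zeros(500))
fake_features = extract_action_units(np.zeros(500))

X = np.vstack([real_features, fake_features])
y = np.concatenate([np.ones(len(real_features)), np.zeros(len(fake_features))])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# For a new clip, score every frame and average: low scores suggest a deepfake.
suspect = extract_action_units(np.zeros(120))
print("probability authentic:", clf.predict_proba(suspect)[:, 1].mean())
```

The key design choice is that the model is trained per person: it learns one leader’s characteristic tics, so an impersonation that gets the face right but the mannerisms wrong still stands out.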
I’m confident someone will figure out how to continuously detect deepfakes (it’s an ongoing game of cat-and-mouse). And once they do, deploying that system to warn people of potential deepfakes will be just as important as the detection itself.
You’d think that Facebook or YouTube could just delete the fake videos. However, deletion often conflicts with their content policies, or they aren’t aligned on whether fakes should be deleted, demoted, or flagged.
We’ve seen YouTube place Wikipedia “fact-checkers” underneath videos that are flagged as conspiracy theories or grossly misleading. The problem, though, is that people don’t go on YouTube or Facebook to read.
We need something simple that is understood across cultures and can be implemented across all Internet frameworks. A common standard will make it very difficult for deepfakes to ruin our society.
Look at the stoplight. Everyone understands what the lights mean. We’re taught from an early age, as early as five years old, what each color means. The stoplight interface would make sense in the context of deepfakes (a minimal mapping is sketched after this list):
- Red = high probability of a deepfake
- Yellow = moderate probability of a deepfake
- Green = cleared, it’s real
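If a platform’s detector returned a probability that a clip is fake, the mapping to the three lights could be as simple as a pair of thresholds. The 0.7 and 0.3 cutoffs below are arbitrary placeholders, not tuned values; any real deployment would calibrate them against the detector’s error rates.

```python
# Map a detector's deepfake probability to a stoplight label.
# The 0.7 / 0.3 thresholds are arbitrary placeholders, not tuned values.
def stoplight(deepfake_probability: float) -> str:
    if deepfake_probability >= 0.7:
        return "red"      # high probability of a deepfake
    if deepfake_probability >= 0.3:
        return "yellow"   # moderate probability of a deepfake
    return "green"        # cleared, likely authentic

print(stoplight(0.92))  # -> "red"
```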
We’ve done it before with the “play” button, which is common across all videos. It’s miraculous that we collectively associate a rightward facing triangle with play and two parallel upright lines with pause. We are an incredible species when it comes to adopting cultural standards.
Deepfakes are just another (very challenging) threat facing us that requires a symbol that cautions us when misleading, untrustworthy content is present.