A lie can go halfway around the world before the truth can get its shoes on. What can we do to slow down these lies? This was the major theme at the House Intelligence hearing on the National Security Risks of Artificial Intelligence, as Congress began wrapping its head around deepfakes – the technique that uses AI to superimpose one person’s face or voice onto existing images, video, or audio, creating convincing false media.
If you’re unfamiliar with the massive threat that deepfakes pose to society, I’d suggest reading “FakeApp is the national threat you’ve never heard of, but will soon see everywhere” or yesterday’s article, “How will we defend against Deepfakes – the greatest threat to our democracy?”, before diving into this one.
The private research community, social media platforms, and the general public all play a role in the creation and propagation of deepfakes. The government, though, needs to take an active stance: starting conversations, creating policy, and demanding a swift response.
So how is Congress thinking about deepfakes?
Deepfakes Are A Threat
The House Intelligence Committee was joined by four experts on deepfakes with backgrounds spanning law, AI research, the FBI, and AI policy. There was no shortage of hypothetical, but very realistic, scenarios.
U.S. Representative Adam Schiff, Chairman of the House Intelligence Committee, spoke of:
A state-backed actor creating a deepfake video of a political candidate accepting a bribe with a goal of influencing an election. Or an individual hacker claiming to have stolen audio of a private conversation between two world leaders when, in fact, no such conversation took place.
David Doermann, Professor and Director of the Artificial Intelligence Institute at UB and former Defense Department official, mentioned:
One thing that kept me up at night was the thought that adversaries could create entire events. It might include images of scenes from different angles, video content from different devices, and text reports claiming that the event occurred, and it could lead to social unrest or retaliation before it’s countered.
Danielle Citron, Law Professor and author of Hate Crimes in Cyberspace, posited:
Imagine a deepfake the night before an IPO, timed just right, with the CEO saying something that he never said or did, basically admitting the company was insolvent, thus upending the IPO. The market will respond far faster than we can debunk it.
Even one of the committee members reminded us of Forrest Gump “meeting” JFK and the potential social, influential effects of deepfakes.
But not all the stories were hypothetical. Ms. Citron shared the story of Rana Ayyub, a journalist who became a deepfake victim overnight:
She [Rana Ayyub] wrote a provocative piece in April 2018 and what followed was posters circulated over the Internet, deepfake sex videos of Rana. So her face was morphed into pornography. That first day it goes viral, it’s on every social media site, WhatsApp, and as she explained to me, millions of phones in India.
The next day, paired with the deepfake sex video of Rana, came rape threats, her home address, and the suggestion that she was available for sex. Now, the fallout was significant. She had to basically go offline. She couldn’t work. Her sense of safety and security was shaken. It upended her life and she had to withdraw from online platforms for several months. So the economic and the social and the psychological harm is profound.
Hearing the variety of stories and probable scenarios shows the expansive nature of deepfakes and the scale of defense we need. Nobody is safe from becoming a ventriloquist’s dummy at the hands of someone who can effectively use deepfake technology.
The immediate impacts, as described above, are personal harm, political misdirection, media manipulation, and more. Together, they undermine our ability to discern real from fake.
And the problem with that is something called the Liar’s Dividend: once real video can easily be passed off as fake (a deepfake), liars can dismiss genuine footage, and we lose confidence in our judgment even when we’re sitting on real evidence.
Ultimately, if you flash forward five years, to a time when deepfakes may have completely invaded our media, we may not know what to believe and, therefore, may believe nothing at all. As Clint Watts, former FBI Special Agent, outlines:
The problem, if we believe nothing at all, is that it leads to apathy, which leads to overall destruction for the United States.
Thankfully, I think the message got across to the Committee that deepfakes are a genuine terror threat, and possible solutions were proposed.
Deepfake Defense is a Team Effort
There isn’t a silver bullet for this issue. The defense needs to be a multifaceted approach, constantly iterated upon. Below, I’ve outlined some of the different actors and the defensive stances they may be able to take.
Social Platforms
Ways to detect deepfakes currently exist, but analyzing videos one by one takes a lot of time.
Therefore, a triage approach makes sense for the social platforms, where each platform focuses on the most pressing and influential videos. Facebook, Twitter, and YouTube can all predict, or at least spot the signs of, a viral sensation before it reaches millions of people. Keeping an eye on viral hits and making it a high priority to fact-check them immediately was very important to the Committee.
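To make the triage idea concrete, here’s a minimal sketch in Python of how a review queue could pull the fastest-spreading videos first. The virality score here is a made-up stand-in; a real platform would feed in its own engagement-prediction signals.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical triage queue: videos with the fastest engagement
# growth get pulled for deepfake analysis first.

@dataclass(order=True)
class VideoReviewTask:
    priority: float                      # negative score (heapq is a min-heap)
    video_id: str = field(compare=False)

class TriageQueue:
    def __init__(self):
        self._heap = []

    def submit(self, video_id: str, shares_per_hour: float, views: int):
        # Stand-in for a virality predictor: weight recent share
        # velocity far more heavily than raw view count.
        virality_score = shares_per_hour * 10 + views * 0.001
        heapq.heappush(self._heap, VideoReviewTask(-virality_score, video_id))

    def next_for_review(self):
        # Return the most urgent video, or None if the queue is empty.
        return heapq.heappop(self._heap).video_id if self._heap else None

# Usage: a fast-spreading clip jumps the line ahead of a slow one.
queue = TriageQueue()
queue.submit("clip-001", shares_per_hour=50, views=2_000)
queue.submit("clip-002", shares_per_hour=4_000, views=150_000)  # going viral
print(queue.next_for_review())  # -> "clip-002"
```

The point isn’t the scoring formula; it’s that limited review capacity should chase velocity, not arrival order.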
Additionally, Clint Watts suggested creating rapid media response teams that can combat and discredit deepfake content at a moment’s notice. They’d work closely with public figures (politicians, journalists, celebrities, and other people of interest) and the social media platforms to rapidly verify or refute questionable media as it’s released.
Lastly, it’s important that the social platforms find a way to accurately label deepfake content and implement warning labels of some sort. In “How will we defend against Deepfakes – the greatest threat to our democracy?”, I talk about how this labeling needs to be standardized across the Internet to be effective.
Detection Technology
From a software detection standpoint, as David Doermann points out:
There are point solutions that can identify deepfakes reliably, but only because the focus of those developing the technology has been on visual deception, not on covering up trace evidence. If history is any indicator, it’s only a matter of time before the current detection capabilities are rendered less effective.
It’s a constant game of cat-and-mouse.
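To give a feel for why one-by-one analysis is so slow, here’s a rough sketch of frame-by-frame screening. The frame_fake_probability stub is my own placeholder for a trained detector (say, a model trained on manipulated vs. authentic faces); it’s an assumption, not a real library call.

```python
import cv2  # OpenCV, for frame extraction

def frame_fake_probability(frame) -> float:
    # Placeholder: a real detector would run the frame through a
    # trained model looking for blending and compression artifacts.
    return 0.0

def screen_video(path: str, sample_every: int = 30, threshold: float = 0.5) -> bool:
    """Return True if the sampled frames look manipulated on average."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:   # sample ~1 frame/second at 30 fps
            scores.append(frame_fake_probability(frame))
        index += 1
    capture.release()
    return bool(scores) and sum(scores) / len(scores) > threshold
```

Even with sampling, decoding and scoring every upload this way is expensive at platform scale, which is exactly why the triage above matters.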
On the hardware side of things, there’s potential for digital verification tied to a camera fingerprint – metadata embedded in each photo that records the date, time, and location of capture. This is much more difficult to implement. I talk about it more in this article: “Millions of dollars will be made on digital forgeries that can’t be detected.”
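As a rough illustration of the camera-fingerprint idea, here’s a hedged sketch using standard digital signatures: the device signs the capture metadata at the moment of capture, so any later edit breaks verification. The metadata fields and the key-distribution story are assumptions on my part.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical: this key would live in the camera hardware, with the
# public half published by the manufacturer for verifiers.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

# Capture metadata per the article: date, time, location of capture.
metadata = json.dumps({
    "date": "2019-06-13",
    "time": "14:02:11Z",
    "location": "38.8899,-77.0091",
}, sort_keys=True).encode()

signature = device_key.sign(metadata)

# Verifier side: any edit to the metadata invalidates the signature.
try:
    public_key.verify(signature, metadata)
    print("metadata intact")
except InvalidSignature:
    print("metadata altered or not from this device")
```

The hard part, of course, isn’t the cryptography; it’s getting every camera maker to embed keys and every platform to check them, which is why this is “much more difficult to implement.”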
Legal & Judicial
The knee-jerk reaction is to ban deepfakes on social media altogether. What makes any law or provision banning deepfakes difficult is that it walks a fine line with First Amendment rights. Deepfakes, which can be described as impersonations, often serve as parody or satire. Synthetic media is so crucial to our comedy and entertainment that banning deepfakes outright would be an injustice.
Danielle Citron made it clear that the legal field will play a very modest role in defending against deepfakes. But there are civil claims that targeted individuals can bring: they can sue for defamation, intentional infliction of emotional distress, and privacy torts. By helping create precedent, victims can become advocates for change.
General Public
Finally, we must realize that this is not just a technical challenge, but a social one as well. The people who share the content, you and I, are part of the problem. As David Doermann sums up:
Get the tools in the hands of individuals rather than completely relying on the government or on social media platforms to police content. If an individual can perform a sniff test and the media smells misuse, they should have ways to verify it or prove it or easily report it. The same tools should be available to the press, to social media sites, anyone who shares and uses this content.
If we empower every social media user with the ability to analyze and report deepfakes, then we have a true chance at stamping out the deception.
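What might such a tool look like? Here’s a hypothetical sketch: hash a media file and check it against a shared registry of already-flagged content. Both the registry and the reporting flow are stand-ins I’ve invented; a real system would need perceptual hashes that survive re-encoding and a cross-platform reporting API.

```python
import hashlib

# Hypothetical shared registry of hashes already flagged as manipulated.
KNOWN_FAKES = {"9f2c..."}  # placeholder entries, for illustration only

def media_fingerprint(path: str) -> str:
    # Exact-match fingerprint of the file bytes; a production tool
    # would use a perceptual hash robust to re-compression instead.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def sniff_test(path: str) -> str:
    digest = media_fingerprint(path)
    if digest in KNOWN_FAKES:
        return "flagged: known manipulated media"
    # In a real tool, this branch would offer a one-click report
    # that routes the file to the rapid-response teams above.
    return "unknown: submit for review"
```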
There’s a long road ahead of us in this matter. Ten years from now, we’ll either look back at this moment as the year we, as a people, took a strong stand against deepfakes and media deception, or we’ll look back with great regret that we didn’t act when we had a better chance.
Regardless, there needs to be increased education and funding around this topic. I’m not one to recommend C-SPAN to anyone, but I actually enjoyed watching the full hearing on this topic.