It all began on “the front page of the Internet” – Reddit – when a bored technologist decided to place Nic Cage’s face into familiar film moments: taking Harrison Ford’s place in Raiders of the Lost Ark, popping up in Game of Thrones, and potentially playing every character in Lord of the Rings.
It was a harmless Internet trick that has since taken a turn for the worse.
Using the same technology, fittingly named FakeApp (research at your own discretion), Redditors began replacing the faces of pornographic film stars with the faces of respectable celebrities – Jessica Alba, Emma Watson, and Daisy Ridley are among the dozens of victims.
Imagine this happening to you, your child, or a friend. The average person has hundreds of photos and video moments publicly available online that could be used to carry out one of these awful defamations of character.
Today, the average set of eyes can spot a deepfake.
In the next 4-6 years, this technology will reach the usability (and believability) of Snapchat’s FaceSwap. You won’t have to be well-versed in machine learning to create one of these fake videos. And that is quite frightening.
There’s a chance that this technology could be used to bring media manipulation to an entirely new level. There’s a chance this technology could extinguish what little grasp we have on what’s reality and what isn’t.
I emphasize that there’s “a chance” because most of the media attention around FakeApp has fear-mongered about an awful future that none of us want to be a part of.
In the process of scaring the sh** out of the general public, though, the media inadvertently inspired many minds to take action. At UC Berkeley, Hany Farid, a professor of computer science, is leading digital forensics and computer vision research to combat the spread of deepfakes.
Comedian Jordan Peele made the threat tangible in a PSA produced with BuzzFeed, lending his voice to a deepfaked Barack Obama who warns viewers about fabricated video. Peele’s message is that the tools to create and spread fake video news are here – and that each one of us, as consumers, needs to be far more skeptical of what we see online.
Inscribe is one company we uncovered that’s using computer vision and machine learning algorithms to verify the legitimacy of online documents. Although their digital forensics may not be able to fight against deepfake videos directly, it’s the same underlying challenge: teaching machines to spot manipulation the human eye would miss.
More concretely, the US Defense Department has launched a Media Forensics program (DARPA’s MediFor) with researchers like Professor Farid to develop tools that’ll detect manipulated images and video.
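To make “digital forensics” a little more concrete, here’s a toy sketch of one classic technique, error level analysis (ELA): recompress a JPEG and compare it to the original, and regions that were edited and re-saved tend to stand out because they compress differently from the rest of the image. This is purely an illustration in Python (using Pillow and NumPy), not the tooling Farid or the Media Forensics program actually uses, and the input file name is hypothetical.

```python
# Toy error level analysis (ELA) sketch – an illustration of one simple
# image-forensics idea, not a production deepfake detector.
from io import BytesIO

import numpy as np
from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an amplified difference image; bright areas recompress 'unusually'."""
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a fixed quality, entirely in memory.
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # Per-pixel absolute difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, resaved)

    # The differences are usually faint, so scale them up to be visible.
    arr = np.asarray(diff, dtype=np.float32)
    max_diff = max(float(arr.max()), 1.0)
    amplified = np.clip(arr * (255.0 / max_diff), 0, 255).astype(np.uint8)
    return Image.fromarray(amplified)


if __name__ == "__main__":
    # Hypothetical input file; bright patches in ela.png hint at edited regions.
    error_level_analysis("suspect_photo.jpg").save("ela.png")
```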
The examples above illustrate technology that will be used to counteract FakeApp and the deepfakes it spawns.
This is a Facebook Problem
Deepfakes wouldn’t be a major threat to society if it weren’t for social media – an extremely low-barrier outlet for the average person to share lies. This has been a huge topic of discussion since the 2016 US Presidential Election, with the cases of election meddling and fake news spreading on Facebook.
More than anything, it was a huge call to action for entrepreneurs and great thinkers to start companies that combat fake news. You’ve got browser plugins like BS Detector, which warn you when a source is unreliable. You’ve got AllSides, a news source that identifies which way an article’s political bias leans.
Additionally, Facebook and Twitter are doing their part by combatting the automated bots that are running rampant on their sites – stealing identities and spreading fake news. Twitter has suspended over 70 million bots and fake accounts. Facebook disabled 1.3 billion fake accounts earlier this year.
Coming back to FakeApp and the deepfake problem, history offers a useful parallel.
Following the 2001 anthrax attacks on American media companies, the FBI launched a years-long investigation to pin down the perpetrators and make sure it would never happen again. In tandem, another solution emerged: companies simply stopped sending mail. Email became even more central to day-to-day business communication.
If we experience a deepfake scandal of similar magnitude, we may see a similar shift: away from media we can’t verify, and toward channels we can trust.