How De-anonymizing Broadcast on Social Media Could Solve the Deepfake Issue

News about deepfakes has not stopped for the past two weeks and probably won’t slow down anytime soon. Unfortunately, media coverage has consolidated around a single narrative, “deepfakes cause political strife”, with technology presented as the only possible solution.

This narrative is important but limited in scope, which is why I will continue to propose new ways to think about deepfakes (remember deepfakes in marketing).

Here’s why I believe deepfakes aren’t purely a technological problem:

A Technological Problem?

It’s natural to look at deepfakes as needing a technological solution, considering they seem to be a technological problem. This line of thinking is what Evgeny Morozov has called “technological solutionism”: the ideology that everything can be solved with the right algorithm or the right piece of technology.

Under this line of thinking, we’ll forever be amidst a Deepfakes Arms Race.

Today, researchers from UC Berkeley and USC have a solution for detecting deepfakes of people with lots of public video material to reference (basically celebrities and political figures). But tomorrow, that process won’t work anymore, because new deepfakes will be engineered to evade those detection methods.
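To make the detection side of that arms race concrete, here is a minimal sketch of the general idea behind that research as it has been reported: build a behavioral “fingerprint” of how a specific person’s facial expressions and head movements co-vary across lots of authentic footage, then flag clips that don’t match it. The feature extraction, the 16-signal setup, and the threshold below are all my assumptions for illustration, not the researchers’ actual pipeline.

```python
import numpy as np

def mannerism_fingerprint(features: np.ndarray) -> np.ndarray:
    """features: (num_frames, num_signals) per-frame measurements, e.g. facial
    action-unit intensities and head-rotation angles from a tracking tool.
    Returns the upper triangle of the signal-to-signal correlation matrix,
    i.e. how this person's expressions and head motions co-vary."""
    corr = np.corrcoef(features, rowvar=False)      # (num_signals, num_signals)
    upper = np.triu_indices_from(corr, k=1)
    return corr[upper]

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder data standing in for real tracked features.
rng = np.random.default_rng(0)
reference_footage = rng.normal(size=(3000, 16))     # many authentic clips of the person
suspect_clip = rng.normal(size=(300, 16))           # the clip being checked

reference_fp = mannerism_fingerprint(reference_footage)
suspect_fp = mannerism_fingerprint(suspect_clip)

THRESHOLD = 0.5   # illustrative only; in practice tuned on clips known to be real
distance = cosine_distance(reference_fp, suspect_fp)
print("possible deepfake" if distance > THRESHOLD else "consistent with reference")
```

The catch, as noted above, is that this only works for people with hours of reference footage, and a generator trained against the same features can learn to mimic them.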

There absolutely needs to be buy-in from researchers and social media platforms to keep up technologically with the detection of deepfakes. But this isn’t a purely technological problem.

A Political Problem?

I think we all know the answer to this is “no”. Politics moves too slowly to stay ahead of deepfakes, even though last week’s committee hearing on deepfakes made some good points.

The problem with applying a political solution directly to deepfakes is that it is extremely difficult to create a policy that distinguishes a satirical deepfake (like The Fakenings) from a deepfake designed to bring terror to the world.

The Washington Post created a guide to help get everyone on the same page, linguistically, with how to classify deepfakes. Still, law will have a tough time being effective here.

A Social Problem?

More than anything, I believe that deepfakes (specifically the divisive ones) are a social issue. At their core, they are just lies. The deepfakes that could undermine our entire democracy are the lies that we, as a population, cannot detect.

This social problem (deepfakes, misinformation, hate speech, divisive language, etc.) is amplified and accelerated by anonymity. Social media platforms allow for anonymous broadcasting – something that simply does not exist in the real world.

To describe how absurd anonymous broadcasting is, I’ve created a real-world equivalent:

You’re at one of Elton John’s final concerts, enjoying the tunes with 50,000 other fans, when a masked man jumps on stage and steals the microphone.

He begins claiming that Elton John is a devil-worshipper and that his lyrics are bringing about the end of the world. Not only does no one stop him, but everyone tweets exactly what he is saying, without knowing who he is or why he believes it.

The man jumps off the stage, runs away, and is never heard from again. But, his message was shared.

That’s the equivalent of anonymous broadcasting in the real world. Absurd, right?

In the real world, security guards would’ve stopped the man. The police would’ve identified the man. The news would’ve reported on his background. And the public would’ve debated about how his background led to his actions.

Why, again, do we allow for anonymous broadcasting?

Ending Anonymous Broadcast

We believe that anonymity needs to exist in order to create a safe internet: one where we don’t fear sharing opinions, where cyberbullying is prevented, hate crimes are stamped out, and surveillance is avoided. Well, fear, cyberbullying, hate crimes, and surveillance all still exist under our current anonymity-friendly setup. So that belief doesn’t hold up.

Realistically, anonymous social media accounts are allowed so that Silicon Valley companies can inflate their numbers and look better to investors. Do you think there would be 2.7 billion active Facebook accounts if Facebook verified everyone’s identity via an ID, passport, or credit card? No way. Almost every kid under 18 has a “rinsta” and a “finsta” (a real and a fake Instagram account). It’s much easier to grow when you don’t need to verify identities.

Don’t get me wrong, anonymity on the Internet is fine for peer-to-peer transactions and content consumption. I don’t care what you buy or watch. That’s none of my business. But, the moment you press that share or post button, the public has every right to know who you are.

If you’re going to share an opinion or creation on the Internet, then you should have to show your face. This is how it works in the real world.

De-anonymizing broadcast on the Internet won’t stop people from creating and spreading lies. However, it will make people think twice about the effect a post will have on their reputation, and we’ll at least be able to trace these negative forces back to their sources.
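As one possible illustration of what “tracing back to the source” could look like in practice (my sketch, not any platform’s actual design): every public post is signed with a key that the platform has bound to a verified identity, so any broadcast can be attributed later. The identity registry and names below are hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical verification step: the platform records which public key
# belongs to which verified person (via ID, passport, credit card, etc.).
author_key = Ed25519PrivateKey.generate()
identity_registry = {"alice-verified-id": author_key.public_key()}

# Broadcasting step: the post is signed before it is published.
post = b"Elton John is a devil-worshipper."
signature = author_key.sign(post)

# Tracing step: anyone can later check which verified identity the post traces to.
for identity, public_key in identity_registry.items():
    try:
        public_key.verify(signature, post)
        print(f"post traces back to {identity}")
    except InvalidSignature:
        continue
```

The point of the sketch is simply that attribution is technically cheap; the hard part is the social and business decision to require verified identities for broadcast in the first place.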

This would be a step in the proper direction and would be complemented by some of the other solutions we’ve mentioned:

  • Monitoring viral hits and fact-checking high-priority posts
  • Creating a Rapid Media Response Team
  • Labeling deepfake content to let the viewer decide
  • Digital media fingerprinting (a rough sketch of this one follows below)
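
On that last point, here is a minimal sketch of one common fingerprinting technique, an “average hash”: shrink a frame down, compare each region to the mean brightness, and store the result as a compact fingerprint that a platform could match against known deepfakes on upload. The frame data and the match threshold below are placeholders for illustration.

```python
import numpy as np

def average_hash(gray_frame: np.ndarray, size: int = 8) -> np.ndarray:
    """gray_frame: 2-D grayscale image. Returns a size*size boolean fingerprint."""
    h, w = gray_frame.shape
    # Crude downscale: trim to a multiple of `size`, then average equal blocks.
    small = gray_frame[: h - h % size, : w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_distance(fp_a: np.ndarray, fp_b: np.ndarray) -> int:
    return int(np.count_nonzero(fp_a != fp_b))

# Placeholder frames: a "known deepfake" and a lightly re-encoded copy of it.
rng = np.random.default_rng(1)
known_deepfake_frame = rng.random((360, 640))
uploaded_frame = known_deepfake_frame + rng.normal(scale=0.01, size=(360, 640))

fp_known = average_hash(known_deepfake_frame)
fp_upload = average_hash(uploaded_frame)

# A small Hamming distance means the upload matches a fingerprinted deepfake.
print("match" if hamming_distance(fp_known, fp_upload) <= 5 else "no match")
```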

The short answer to a long post is that deepfakes will need a mixture of social, technological, and political solutions. Now is not the time to be shy about potential remedies to this problem.