Every great creator has a critic. Led Zeppelin had Rolling Stone magazine. Kubrick and Spielberg had Siskel and Ebert. But when we look out twenty years – realizing that the next Shakespeare, Pollock, or Spielberg could actually be a piece of software – will creative AI also face critical AI? Are creative algorithms such as GPT-2 at the mercy of reviews from algorithmic critics like GLTR?
I believe the answer is yes. But not for the same reasons humans have critics.
GPT-2: The AI Creator
A human critic will say their job is to push creativity further – to challenge the creator to be their best self – although it doesn’t always seem that way. With machine creativity, though, AI critics are arising out of necessity.
Many creative algorithms have gained wide attention for their progress. Deepfakes and the AI-generated painting that sold for $432,500 are notable examples. But the most advanced is a natural language generation (NLG) tool called GPT-2.
Created by OpenAI (the research lab co-founded by Elon Musk and dedicated to developing AI that benefits humanity), GPT-2 can generate coherent passages of text one word at a time. The results are shockingly realistic – so much so that OpenAI initially declined to release the full model, arguing it was too dangerous because it could be used to create copious amounts of fake media.
Instead, OpenAI has released small and medium-sized versions of the language model for developers to play around with.
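To give a sense of how accessible that release is, here's a minimal sketch of loading the medium model and sampling a continuation with the Hugging Face transformers library – my own illustration of the general recipe, not how any particular product is built:

```python
# Minimal sketch: sampling a continuation from the released GPT-2 medium weights.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "The old lighthouse keeper had not seen a ship in years."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate one token at a time, sampling from the 40 most likely
# candidates at each step to keep the output coherent.
output_ids = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```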
One developer, Adam King, built a tool on the medium-sized version of GPT-2 called Talk to Transformer. You give it a prompt and it generates a passage that actually makes sense. It’s that simple. Like this one:
All I did was give the tool a sentence, and it essentially wrote the first page of a novel. It’s not hard to imagine how this NLG tool and others – Wordsmith, Quill, Write with Transformer – could be used to fool us with mass-produced fake media. That’s why we need AI critics.
GLTR: The AI Critic
Researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a new tool for spotting text that has been generated using AI. Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand.
– Will Knight, MIT Technology Review
GLTR highlights each word based on how statistically likely it was to appear after the preceding text… the most predictable words are green; less predictable words are yellow and red; and the least predictable are purple.
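Under the hood, that amounts to asking a language model to rank each word of a passage against what it would have predicted from the context. Here's a rough sketch of the idea, using GPT-2 small as the scoring model – the bucket cutoffs (top 10, top 100, top 1,000) follow the published GLTR demo, but everything else is my own simplification:

```python
# Rough sketch of GLTR-style per-word predictability scoring.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def rank_tokens(text: str):
    """Rank each token against the model's predictions from the
    preceding context, then bucket the ranks GLTR-style."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: [1, seq_len, vocab_size]
    results = []
    for pos in range(1, ids.shape[1]):
        scores = logits[0, pos - 1]      # predictions for this position
        actual = ids[0, pos].item()
        rank = int((scores > scores[actual]).sum().item()) + 1
        if rank <= 10:
            color = "green"              # highly predictable
        elif rank <= 100:
            color = "yellow"
        elif rank <= 1000:
            color = "red"
        else:
            color = "purple"             # very unpredictable
        results.append((tokenizer.decode([actual]), rank, color))
    return results

for word, rank, color in rank_tokens("The quick brown fox jumps over the lazy dog."):
    print(f"{word!r:>10}  rank={rank:<6} {color}")
```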
I ran the AI-generated text from Talk to Transformer through the GLTR tool and received this report:
Compare that with the GLTR analysis from the first few paragraphs of this article:
As you can see, the human-written text contains far more reds and purples, meaning its words are statistically less predictable – a better indication of human creativity.
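If you wanted to reduce that coloring to a single number, one crude score is the share of words in the green (top-10) bucket. The helper below is hypothetical, building on the rank_tokens sketch above; the pattern is that sampled GPT-2 text tends to score far higher than comparable human prose:

```python
def top10_fraction(text: str) -> float:
    """Share of tokens the model ranked among its top-10 guesses ("green")."""
    ranks = rank_tokens(text)
    return sum(1 for _, r, _ in ranks if r <= 10) / len(ranks)
```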
A detection tool like GLTR could be essential for stamping out fake media, especially if we can find a way to embed it in social media platforms and other places where fake media spreads.
In that sense, the battle of GPT-2 vs. GLTR may be less a creator-critic relationship than a Skynet-Terminator one – a hunter designed to catch the dangerous media that could cause chaos.