What would happen if someone created an AI writer that could produce at the rate of the entire staff of The New York Times? What if it could match 10 full staffs? Or 100? Naturally, we all fear the worst: a complete crumbling of public media. That fear is what led OpenAI to adopt a gradual, staged release of their GPT-2 text generator – their extremely effective “AI writer”.
In February 2019, they released Stage 1. Three months later came Stage 2. And now, Stage 3. I’m excited, worried, and curious all at the same time.
What is GPT-2?
For a quick refresher, GPT-2 is trained simply to predict the next word in a text. The model takes a prompt (it could be one word, it could be a paragraph) and predicts what will come next. It generates a passage one word-prediction at a time.
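For the curious, here’s a minimal sketch of that loop in Python, using the Hugging Face transformers library (my own illustration; OpenAI’s original release shipped its own code, but the generation idea is the same):

```python
# Minimal sketch of GPT-2's autoregressive loop: predict the next token,
# append it, and repeat. Assumes a recent version of the "transformers"
# library; note GPT-2 actually predicts sub-word tokens, roughly words.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small, Stage 1-sized model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The future of journalism is", return_tensors="pt")

for _ in range(30):                          # generate 30 tokens
    with torch.no_grad():
        logits = model(ids).logits           # scores for every candidate next token
    next_id = logits[0, -1].argmax()         # greedy: pick the single most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

In practice, samplers add some randomness instead of always taking the single most likely token, which is what makes the generated passages feel less repetitive.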
What sets GPT-2 apart from many other natural language generation models is the sheer quantity of text it’s trained on. Its training corpus amounts to 40GB of plain text. To put that number in context:
Typically, researchers train their models on a file containing the complete works of Shakespeare, which amounts to about 5MB. That means GPT-2 is trained on a corpus 8,000 times the size of Shakespeare’s complete works.
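Just to sanity-check that multiplier (using decimal units, so 1GB = 1,000MB):

```python
# 40 GB of training text vs. roughly 5 MB for the complete Shakespeare file.
shakespeare_mb = 5
gpt2_corpus_mb = 40 * 1000               # 40 GB expressed in MB
print(gpt2_corpus_mb / shakespeare_mb)   # -> 8000.0
```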
Their computing power and data combine to create a beast of a text generator, according to OpenAI. That is why they decided to release it to the public in stages.
The Staged Release
The full-size GPT-2 model, let’s call it HULK, is built with over 1.5 billion parameters – the learned numerical weights that guide what it predicts. Fewer parameters generally yield worse results.
OpenAI’s first stage of the public release was a model with about 8% as many parameters as HULK. Shortly thereafter, the second stage came with a model about 23% of HULK’s size. And it produced some impressive results.
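To put those percentages in absolute terms, here’s a quick sketch using the approximate parameter counts from OpenAI’s release announcements (rounded public figures, my own tally rather than anything from this article):

```python
# Approximate parameter counts for GPT-2's staged releases, relative to the
# full model. Counts are rounded public figures, used here for illustration.
releases = {
    "Stage 1": 124_000_000,
    "Stage 2": 355_000_000,
    "Stage 3": 774_000_000,
    "HULK (full model)": 1_542_000_000,
}
full = releases["HULK (full model)"]
for name, params in releases.items():
    print(f"{name}: {params / full:.0%} of HULK")   # -> 8%, 23%, 50%, 100%
```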
One guy demonstrated how it could be used to generate fake YouTube comments. I used a text generator called Talk To Transformer (built on this model) to create an AI-generated business plan, shown in the video below.
A language model just under a quarter the size of HULK was generating remarkably coherent text. That seemed to confirm the public’s fears about HULK, the monster known as GPT-2.
On the flip side, GPT-2 also spurred an advance in the opposite direction: a sort of “Terminator” for AI-generated text.
Researchers from Harvard University and the MIT-IBM Watson AI Lab developed a new tool for spotting text that has been generated using AI. Called GLTR, it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand.
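To make that concrete, here’s a rough sketch of the underlying idea (my own illustration, not GLTR’s actual code): run a passage through GPT-2 and count how often each actual token falls within the model’s top-10 predictions. Machine-generated text tends to score high; human prose is less predictable.

```python
# Rough sketch of the GLTR idea: measure how "predictable" a passage is
# under a language model. The top-10 cutoff is an arbitrary choice for
# illustration; the real tool visualizes per-token ranks instead.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def fraction_in_top10(text: str) -> float:
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits
    hits = 0
    # At each position, ask: was the token that actually came next among
    # the model's ten most likely predictions?
    for pos in range(ids.shape[1] - 1):
        top10 = logits[0, pos].topk(10).indices
        if ids[0, pos + 1] in top10:
            hits += 1
    return hits / (ids.shape[1] - 1)

print(fraction_in_top10("The quick brown fox jumps over the lazy dog."))
```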
AI Creators vs. AI Critics
[Overall] The authors concluded that after careful monitoring, OpenAI had not yet found any attempts of malicious use but had seen multiple beneficial applications, including in code autocompletion, grammar help, and developing question-answering systems for medical assistance.
Karen Hao, MIT Tech Review
For this reason, they’ve released the third stage of the model – standing at 50% the size of HULK. In theory, we should see much better text generators emerge from it. And ideally, we’re getting closer to the AI writer that can do the work of an entire newsroom. The damage it could do is yet to be seen.
But just like the comic-book Hulk, GPT-2 may not be a monster at all. That is the crux of the debate.
Text Generator Debate
The staged release is really what drew all the media attention. Many researchers believe it was unnecessary. Some go as far as saying it was irresponsible, doing more harm than good by stifling research into defenses against powerful models like GPT-2.
Two researchers, passionately opposed to the doom-and-gloom narrative, went so far as to re-create a version of the withheld OpenAI language model and release it into the wild.
At the end of the day, it’s a difference of opinion.
Kind of like how some parents believe children should have their hands held throughout their developmental years, while others give their children full autonomy to explore and figure things out for themselves.
Even though I lean more toward the autonomy route, I think OpenAI took the proper approach in releasing their work. They succeeded in sparking an international dialogue and were able to study the waves their invention made.
Jack Clark, the policy director of OpenAI, places GPT-2 in the context of the organization’s broader mission. “If we are successful as an AI community in being able to build [artificial general intelligence], we will need a huge amount of historical examples from within AI” of how to handle high-stakes research, he says. “But what if there aren’t any historical examples? Well, then you have to generate [your own] evidence—which is what we’re doing.”
Karen Hao, MIT Tech Review