This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding artificial intelligence.
OpenAI, the nonprofit research outfit founded by Sam Altman and Elon Musk, dominated last week’s artificial intelligence news coverage with GPT-2, a technology that has reportedly moved the needle in natural language generation (NLG), the branch of AI that tries to imitate humans’ writing abilities.
However, the headlines in tech publications focused less on GPT-2’s technological breakthroughs than on OpenAI’s decision not to release the full model to the public, as is customary among research labs. According to comments shared with the media, the organization’s staff are worried that GPT-2 could be used to generate fake news.
“OpenAI built a text generator so good, it’s considered too dangerous to release” and “An AI that writes convincing prose risks mass-producing fake news” were just two of the many headlines of the stories that reputable tech publications published about GPT-2.
The engineers at OpenAI are right to be worried about the questionable uses their latest technological achievements might serve. Greg Brockman, co-founder and CTO of OpenAI, has been very vocal in his warnings about the potential threats of advances in artificial intelligence.
But the reality is, it takes much more than a good text-generating AI to disseminate false information.
A few notes on OpenAI GPT-2
Beyond the criticism of how OpenAI introduced its text-generating artificial intelligence, what the organization has achieved is very interesting.
The few journalists and analysts who were able to test GPT-2 shared some fascinating examples of the AI’s output, including a natural-sounding (and entirely fabricated) passage about a military conflict between the U.S. and Russia and a very convincing argument about the downsides of recycling.
You provide GPT-2 with a sequence such as “Russia has declared war on the United States after Donald Trump accidentally…” and the AI does the rest, generating a sequence of sentences that are grammatically correct and make sense most of the time.
But to be clear, GPT-2—or any other NLG algorithm for that matter—doesn’t understand the meaning behind the text it generates in the way that a human writer would. GPT-2 is based on neural networks and deep learning, which means it draws on statistical patterns learned from the millions of examples it has seen during training to extend the human-given excerpt, word by word, into a sequence that is statistically similar to something a human would have written.
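To make that word-by-word statistical generation concrete, here is a minimal sketch of how a GPT-2-style language model can be prompted and sampled. It assumes the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint, neither of which is part of OpenAI’s own tooling; the prompt, sampling settings, and output length are illustrative choices, not a reproduction of OpenAI’s setup.

```python
# Minimal sketch of autoregressive text generation with a GPT-2-style model.
# Assumes the Hugging Face transformers library and the small public "gpt2"
# checkpoint; this is an illustration of the general technique, not OpenAI's code.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Russia has declared war on the United States after Donald Trump accidentally"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a statistically likely next token, appends it,
# and feeds the longer sequence back in -- there is no understanding of the
# text, only learned word statistics.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,                        # sample from the predicted distribution
    top_k=40,                              # keep only the 40 most likely next tokens per step
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```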
GPT-2 is certainly more advanced than any other AI model that generates text. It manages to maintain the coherence of its output in longer sequences than other advanced AI algorithms such as ELMo and BERT.
But it is not a breakthrough. While OpenAI has yet to decide if and when it will release the details behind GPT-2, what we know so far is that it has been trained on billions more examples than other AI models, which gives it a richer base of examples from which to tailor its output.
“OpenAI trained a big language model on a big new dataset called WebText, consisting of crawls from 45 million links. The researchers built an interesting dataset, applying now-standard tools and yielding an impressive model. Evaluated on a number of downstream zero-shot learning tasks, the model often outperformed previous approaches,” writes Zachary Lipton, AI researcher and editor of Approximately Correct, observing that the samples generated by GPT-2 appear to exhibit more long-term coherence than previous results.
But Lipton also notes that this is the “science-as-usual step forward” that you would expect to see in a month or two, from any of tens of equally strong NLP labs.
“The results are interesting but not surprising,” Lipton concludes. “They represent a step forward, but one along the path that the entire community is already on.”
Why AI alone can’t spread fake news
We tend to overestimate the capabilities of AI, especially when we mix up myth and reality upon seeing computers do something that was previously thought to be off-limits to them. But let’s assume GPT-2 or some other AI manages to produce perfect sequences of text.
What makes it worrying is that the AI would be able to do it at massive scale. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” Jack Clark, policy director at OpenAI, told MIT Technology Review.
Further elaborating on the point, Clark said that such artificial intelligence technology might be used to automate the generation of convincing fake news, social-media posts, or other text content.
But here’s how this line of thinking is problematic. First, there is already no small amount of fake news and fabricated content on the web; the supply already exceeds the demand several times over. There’s perhaps more fake than authentic content. An AI would probably increase the amount of fake content on the web, but it wouldn’t make a huge difference.
The authenticity of content depends mostly on the source, not the volume. I for one put more trust in a story published by a reputable news publication or a blog from a known expert than in the thousands of random tweets and posts I see in my Twitter and Facebook feeds every day. Likewise, I don’t trust any video I see on YouTube without further investigating the subject through several reputable sources.
Accordingly, the real damage is done when a trusted news source publishes a fake or biased story (regrettably, this happens more often than it should). A single poorly reported story from mainstream media can do more damage than millions of fake tweets generated by an AI algorithm.
Now say you’re a malicious actor armed with an AI that can generate millions of fake stories or social media posts every day. You won’t be able to put your newfound weapon to effective use unless you have a trusted source to publish AI-generated content.
Interestingly, if you do have a trusted source of publication, you won’t need huge volumes of AI-generated content; a small staff of writers or content curators can be more effective than the strongest text-generating artificial intelligence algorithm.
A case in point is a story that made a lot of noise after the 2016 U.S. presidential elections. A group of Macedonian teens developed websites that looked authentic and trustworthy, and then created a fake news crisis by publishing stories that contained not a shred of truth but carried sensational headlines. Again, the key to their success was the trust they created in their publications, not the volume of content they produced.
Ironically, in this case, the people who ran the websites didn’t even have a strong command of English and created most of their stories by stitching together content they had gathered from different websites.
Manipulating ranking and trending algorithms through artificial intelligence
What if a malicious actor wants to use an AI such as GPT-2 to trick the algorithms that rank and prioritize online content? For instance, an attacker might use the AI to generate a massive number of unique tweets from numerous bot accounts to trick Twitter’s algorithms into thinking there’s a genuine conversation going on around a certain topic.
Here the threat is more genuine than that of fake news stories. But again, artificial intelligence is only one of the many components that the malicious campaign would require. In recent years, social media platforms have come a long way toward identifying bot accounts, and given the backlash over fake news stories at critical times, they have mostly adopted a conservative approach, sometimes even suspending real, human-operated accounts on suspicion of automation.
Twitter would instantly identify an attacker who set up a supercomputer armed with an AI such as GPT-2 and thousands of Twitter accounts.
The attacker would need to make those accounts look genuine. So they would need pictures of real people—that can be generated with AI—as well as bio lines that look authentic, which might also become possible with a well-trained AI.
But they would also need thousands of IP addresses and devices to run those accounts, to make it look like the tweets are coming from different people on mobile and desktop devices. The system would also need a control mechanism that posts the AI-generated tweets from the accounts in a way that doesn’t seem automated.
Another hurdle is account creation times. The attacker would have to create the accounts over a long span of time—perhaps a year or even more—to again reduce overlap and similarities. A New York Times feature about fake account factories shows that bot users can often be discovered simply by comparing the creation dates of accounts that manifest similar behavior.
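As a rough illustration of that kind of check, the hypothetical sketch below flags groups of accounts that amplified the same story and were registered within days of each other. The account records, field names, and thresholds are invented for illustration; real platforms combine many more signals.

```python
# Hypothetical sketch: flag groups of accounts that pushed the same content
# and were registered within a narrow time window -- a telltale sign of a
# bot farm. Records, field names, and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import date

accounts = [
    # (account_id, creation_date, id of the story the account amplified)
    ("u1", date(2018, 3, 2), "story-42"),
    ("u2", date(2018, 3, 3), "story-42"),
    ("u3", date(2018, 3, 4), "story-42"),
    ("u4", date(2016, 7, 19), "story-42"),
]

# Group accounts by the content they amplified.
by_story = defaultdict(list)
for account_id, created, story in accounts:
    by_story[story].append((account_id, created))

WINDOW_DAYS = 7   # accounts created within a week of each other look coordinated
MIN_CLUSTER = 3   # ignore coincidental pairs

for story, members in by_story.items():
    members.sort(key=lambda m: m[1])      # order by creation date
    cluster = [members[0]]
    for account_id, created in members[1:]:
        if (created - cluster[-1][1]).days <= WINDOW_DAYS:
            cluster.append((account_id, created))
        else:
            cluster = [(account_id, created)]
        if len(cluster) >= MIN_CLUSTER:
            print(f"Suspicious cluster amplifying {story}: "
                  f"{[a for a, _ in cluster]}")
            break
```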
None of the above requirements is impossible to fulfill, but together they make it increasingly difficult for the AI to cover its tracks while spreading fake news on social media. An organization would need a full crew of real human operators to manage the many different aspects of such a campaign.
And finally, the accounts need to have some genuine interactions with other users to blend in with the real humans using the social media network. This is perhaps the most difficult part of an AI-driven fake social media campaign. There are many AI technologies that can understand different nuances in written and spoken conversations, but for the most part, they’re good at executing simple, straightforward commands and can’t engage in two-way conversations about abstract, complicated topics.
The bottom line is, for the time being, you can’t automate fake news on social media. And unless GPT-2 can fully impersonate thousands of real humans in all their aspects and interactions, it will quickly blow its cover.
Why the exaggerated claims?
To be clear, GPT-2 is a very powerful AI tool, and like other advanced technologies, it could serve evil purposes in the wrong hands. We’ve already seen this happen with deepfakes, the AI technology that can swap faces in videos. We have yet to find out what malicious goals AI-generated text will serve.
But why would OpenAI, whose name implies transparency and sharing of knowledge, decide to keep a tight lid on its technology over overblown concerns about AI-induced fake news? That is for the folks at OpenAI to answer. I wouldn’t go so far as to accuse them of creating fake hype around their technology. After all, they’re a nonprofit and aren’t marketing the results of their research for sale. Their decision might have been driven by genuine concerns, and I would prefer to hear the final word from them before passing judgment.
But as last week’s news stories around AI-generated fake news have shown, technologies wrapped in a shroud of mystery and conspiracy make for more sensational headlines. And sensational headlines draw more attention. Given the current climate of hype and excitement surrounding AI, we must be more responsible in how we introduce, and write about, technological achievements.
“No matter what happens today in AI, Bitcoin, or the lives of the Kardashians, a built-in audience will scour the internet for related news articles regardless. In today’s competitive climate, these eyeballs can’t go to waste. With journalists squeezed to output more stories, corporate PR blogs attached to famous labs provide a just-reliable-enough source of stories to keep the presses running. This grants the PR blog curators carte blanche to drive any public narrative they want,” Lipton writes.