Many years before the release of ChatGPT, my research group, the Social Decision-Making Laboratory at the University of Cambridge, wondered whether neural networks could generate misinformation. To find out, we trained ChatGPT’s predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It produced thousands of misleading but plausible-sounding news stories. Some examples: “Some vaccines are loaded with dangerous chemicals and toxins” and “Government officials have manipulated stock prices to hide scandals.” The question was: would anyone believe these claims?
We created the first psychometric tool to test this hypothesis, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used AI-generated headlines to test how susceptible Americans are to AI-generated fake news. The results were troubling: 41% of Americans mistakenly thought the vaccine headline was true, and 46% thought the claim about the government manipulating stock prices was true. Another recent study, published in the journal Science, demonstrated not only that GPT-3 produces more convincing misinformation than humans, but also that people cannot reliably distinguish between human-generated and AI-generated misinformation.
My prediction for 2024 is that AI-generated misinformation will be coming to an election near you, and you probably won’t even realize it. In fact, you may have already been exposed to some examples. In May 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image showing a large cloud of smoke. This caused public outcry and even a dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the line between fact and fiction and use AI to amplify their political attacks.
Before the explosion of generative AI, cyber-propaganda firms around the world had to write misleading messages themselves and employ human troll factories to target people at scale. With the help of artificial intelligence, the process of generating misleading news headlines can be automated and weaponized with minimal human intervention. For example, micro-targeting – the practice of targeting people with messages based on digital trace data, such as their Facebook likes – was already a concern in past elections, but its main obstacle was the need to generate hundreds of variations of the same message to see what works on a given group of people. What was once expensive and labor-intensive is now cheap and readily available, with no barriers to entry. AI has effectively democratized the creation of misinformation: anyone with access to a chatbot can now seed the model on a particular topic, be it immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly convincing fake news stories in just a few minutes. In fact, hundreds of AI-generated news sites are already popping up, propagating false stories and videos.
To test the impact of such AI-generated misinformation on people’s political preferences, researchers at the University of Amsterdam created a deepfake video of a politician offending his religious voter base. In the video, for example, the politician jokes: “As Christ would say, don’t crucify me for this.” The researchers found that religious Christian voters who watched the deepfake held more negative attitudes toward the politician than those in the control group.
It’s one thing to deceive people with AI-generated misinformation in experiments. It’s quite another to experiment with our democracy. In 2024 we will see more deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments will seriously limit, if not ban, the use of artificial intelligence in political campaigns. Because if they don’t, AI will undermine democratic elections.