Elections in the Age of AI Misinformation and Deepfakes
Welcome to the age of mass misinformation campaigns powered by AI
More than 40 countries are set to hold national elections in 2024. The list includes eight of the ten most populous countries in the world - Bangladesh, Brazil, India, Indonesia, Mexico, Pakistan, Russia, and the United States - along with other key countries like Taiwan and Ukraine. The United Kingdom is expected to have elections too - the first since the UK left the European Union. In total, over 3 billion people will choose their next governments and leaders this year. Referring to 2024 simply as “an election year” would be an understatement. A better description might be “the year of elections”, the results of which will shape our daily lives and affect the decades to come.
At the same time, we are in the middle of massive advancements in AI, which can now generate text indistinguishable from what a human would write and create astonishingly realistic images. But the same technology that can concisely summarise a scientific paper or create never-before-seen images can also be used to run AI-powered, mass-scale misinformation campaigns.
From deepfakes of world leaders to chatbots making up facts, we will explore how generative AI blurs the line between what’s true and what’s not, and what that means for society at large in an era increasingly dominated by sophisticated artificial intelligence.
The AI community has made tremendous progress in the last 10 years. In 2014, we were in the middle of the deep learning revolution, when machines learned to recognise images better than humans and DeepMind’s AIs mastered Atari games. In that same year, Ian Goodfellow published his paper on generative adversarial networks (GANs). A GAN consists of two artificial neural networks: one network (the generator) produces an output, and the second (the discriminator) judges whether that output looks real. Every time the discriminator rejects the generator’s output, the generator adjusts itself to produce a better output next time. Repeat this process a few thousand or million times, and you get an AI that can generate high-quality images almost indistinguishable from real ones.
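To make that adversarial loop concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is a toy illustration of the two-network game described above, not the setup from Goodfellow’s paper: the network sizes are arbitrary, and random noise stands in for a real dataset.

```python
# A toy GAN training loop - an illustration of the idea described above,
# not the original paper's setup. Sizes and "real" data are placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# The generator turns random noise into a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
# The discriminator scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(10_000):
    real = torch.randn(32, data_dim)  # stand-in for a batch of real data
    fake = generator(torch.randn(32, latent_dim))

    # 1) Teach the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real), torch.ones(32, 1))
              + bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Teach the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The two losses pull in opposite directions - the discriminator is rewarded for spotting fakes, the generator for slipping past it - and that tug of war is what drives the quality of the generated output up.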
The AI community took the idea of GANs and started experimenting with it. In 2016, Face2Face was released, arguably marking the beginning of modern deepfakes. Face2Face was the first program to allow real-time facial reenactment in videos - transferring one person’s facial expressions onto another person’s face. It made waves in the AI community and inspired more research into face swaps and deepfakes. The general public got its first taste of what is possible with deepfakes in 2018, with the infamous Obama deepfake video from BuzzFeed. From there, deepfake technology became better, cheaper and more accessible.
Today, in 2024, we are in the middle of the generative AI revolution, powered by diffusion models and transformers. Current state-of-the-art text-to-image generators can produce extremely realistic images. To give an example, take a look at the image below.
If you came across this image on Instagram, you might assume it is a picture of a real person. But it is not - it was generated by Imagen 2, a text-to-image generator from Google DeepMind. The recently announced Midjourney v6, another popular text-to-image generator, is capable of generating photorealistic images, too. ChatGPT can also generate high-quality images with DALL·E 3.
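Imagen 2 and Midjourney are reached through their own apps, but image generation is also just a few lines of code away. As a minimal sketch, generating an image with DALL·E 3 through the OpenAI Python SDK looks roughly like this (assuming the openai package is installed and an OPENAI_API_KEY is set in the environment; the prompt is, of course, just an example):

```python
# Sketch: generating an image with DALL-E 3 via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
response = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic portrait of a person who does not exist",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # URL of the generated image
```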
The next natural step is to go from still images to moving ones. Text-to-video generators have not yet reached the same level of sophistication as their text-to-image counterparts. Right now, AI-generated videos look the way AI-generated images did just a couple of years ago, with that distinct AI style. However, these tools are rapidly improving, and it’s only a matter of time before text-to-video generators can create videos indistinguishable from real footage.
How to spread misinformation with AI
There are multiple ways AI can be used to spread misinformation. One method is to saturate the space with the desired message. Someone could create an army of bots, each with a completely fake persona, and unleash them on the internet to post comments, tweets, images and memes, or even publish entire articles. Thanks to large language models, these bots can now generate text as well as, if not better than, most humans. The goal of such an attack is to flood public discussion with a specific message and drown out everything else in noise. Under such relentless pressure, some people might break and adopt these views, especially if they are repeated frequently. After all, a lie repeated often enough becomes the truth.
When it comes to deepfakes of world leaders, political candidates, or other office seekers, the deepfakes that appear plausible could be the most effective. Deepfakes that completely reverse someone’s stance (like this deepfake of Zelensky talking of surrendering to Russia) can be easily dismissed as fake. But a deepfake showing someone being rude to others or having a moment of weakness depicts something that could plausibly happen, and it could spread like wildfire on social media. This kind of attack undermines how we see a candidate and their fitness for office (physical or mental). Deepfakes can also be used to subtly alter real footage so that the events shown can be given a different interpretation.
Voice can easily be replicated, too. There are numerous services offering voice cloning from a sample just a few seconds long. Voice deepfakes can be used to add convincing audio to deepfake videos or to fake voice messages. But we may also see new and creative applications of voice cloning. For example, a month ago, Imran Khan, the former prime minister of Pakistan, cloned his voice to deliver a speech he had written from prison to his supporters.
Apart from bots and deepfakes, there is a new source of misinformation we have to deal with - AI hallucinations, where an AI makes up facts and presents them with high confidence as if they were true. Last year we saw examples of AI making things up, from ChatGPT inventing legal cases that never happened to a supermarket publishing an AI-generated recipe that would create chlorine gas.
Tools like ChatGPT or Bing are becoming a source of information for a growing number of people. But as useful as they are, they, too, can be a source of misinformation. In December 2023, two non-profit organisations, AI Forensics and AlgorithmWatch, published a study showing that one-third of Bing Chat’s answers to election-related questions contained factual errors (Bing Chat uses GPT-4 under the hood). The errors included wrong election dates, outdated candidates, and even invented controversies about candidates. What’s concerning is that Microsoft seems either unwilling or unable to fix the issue. The researchers informed Microsoft about their findings, and the company announced it would address them. A month later, the researchers took another sample and found that little had changed.
How to deal with AI misinformation
There are efforts to either detect or label AI-generated content. However, these efforts are usually playing catch-up.
Images produced by text-to-image generators can be watermarked with patterns that are invisible to the human eye but that a computer can quickly detect. One such example is Google DeepMind’s SynthID, which will label all images generated by Imagen 2 (including images generated through Gemini). Similar watermarks can also be applied to AI-generated audio clips and videos.
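SynthID’s actual technique is proprietary, so as a deliberately simplified illustration of the general idea - hiding a machine-readable pattern the eye cannot see - here is a toy least-significant-bit watermark in Python. Real watermarks like SynthID are far more robust; this one would not survive compression or cropping, and the WATERMARK_BITS pattern is made up.

```python
# Toy illustration of an invisible watermark. SynthID's actual method is
# not public; this least-significant-bit scheme only shows the general
# idea of hiding a pattern that a computer, but not the eye, can detect.
import numpy as np
from PIL import Image

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # made up

def embed(image_path: str, out_path: str) -> None:
    pixels = np.array(Image.open(image_path).convert("RGB"))
    flat = pixels.reshape(-1)  # view over the pixel bytes
    # Overwrite the least significant bit of the first few bytes.
    n = len(WATERMARK_BITS)
    flat[:n] = (flat[:n] & 0xFE) | WATERMARK_BITS
    # Save losslessly: JPEG compression would destroy the pattern.
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, format="PNG")

def detect(image_path: str) -> bool:
    flat = np.array(Image.open(image_path).convert("RGB")).reshape(-1)
    return bool(np.array_equal(flat[: len(WATERMARK_BITS)] & 1, WATERMARK_BITS))
```

The point of the sketch is the asymmetry: the pattern changes the image imperceptibly for a human, but a detector that knows what to look for finds it instantly.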
Detecting AI-generated text, however, has proven to be a more challenging task. OpenAI tried to build such a detector but shelved the project a year ago due to its low success rate. Where OpenAI failed, other companies have stepped in and are working on detecting AI-generated text. There are also competitions aimed at developing better detectors - one is happening on Kaggle right now (there are about two weeks left to submit a solution if you are interested).
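To see why this is hard, here is a sketch of one common heuristic that text detectors build on: score the text with a language model and flag text the model finds suspiciously predictable (low perplexity). This is a simplification of research approaches such as GLTR or DetectGPT, not how any particular product works, and the threshold below is an arbitrary assumption.

```python
# Sketch of a perplexity-based heuristic for spotting AI-generated text.
# Text that a language model finds "too predictable" is more likely to be
# machine-generated. Real detectors are more sophisticated; the threshold
# of 50.0 is an arbitrary assumption for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def looks_ai_generated(text: str, threshold: float = 50.0) -> bool:
    # Low perplexity -> suspiciously fluent and predictable.
    return perplexity(text) < threshold
```

The weakness is visible in the sketch itself: a human who writes plainly can score as “predictable”, while lightly paraphrased machine text can score as “human” - which is roughly why detectors like OpenAI’s struggled.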
Lawmakers have identified fake content as a potential threat arising from the misuse of AI and have prepared legislation to address it. The upcoming EU AI Act will require companies offering AI generators of any kind to label their systems’ output as AI-generated and to design those systems so that AI-generated media can be detected. Similar regulations have been proposed in China. The US, meanwhile, has no equivalent regulation yet - only a voluntary commitment made by top US AI companies to the White House to develop tools to watermark and detect AI-generated content.
Detectors can help spot AI-generated content, but they won’t prevent it from being published. Technology can take us only so far. Beyond that, we are left to rely on our vigilance and our willingness to check reputable and diverse sources to confirm whether what we see or hear is true. However, that approach can lead to losing trust in everything we see or hear online. Every article, every TikTok, every YouTube video becomes potentially fake content. This can create a zero-trust society, in which people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it becomes easier to sow doubt about specific events.
AI misinformation is here to stay
Unfortunately.
The Brexit campaign and the 2016 US presidential election showed how a well-executed social media campaign can sway public opinion and change the political landscape for years to come. Those campaigns were powered by big data analysis. Today, eight years later, generative AI lets misinformation campaigns reach a completely new level while being cheaper and quicker to create.
But politicians and world leaders are not the only targets of AI-powered misinformation campaigns. Business leaders can be targeted too, with economic or reputational damage as the result. Even ordinary people are not safe from these attacks - there have already been cases of scammers cloning the voices of relatives to steal money from people.
The advancements in AI, while remarkable, have ushered in a new wave of misinformation that threatens to undermine the very fabric of our democratic processes and societal trust. We must address this challenge with a multi-faceted strategy: embracing technological solutions for detecting fake content, fostering legislative action to hold tech companies accountable, and cultivating media literacy among the public. Vigilance, critical thinking, and a commitment to ethical AI practices must be at the forefront as we stride into a future where the lines between what’s real and what’s not become increasingly blurred.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!