
Combating Misinformation in the Age of AI: Challenges and Solutions
While 2024 may be remembered for significant advancements in AI technology, it is also a landmark election year. According to Time Magazine, more voters than ever will head to the polls this year: at least 64 countries (plus the European Union) are holding elections, and about 49 percent of the world's population is expected to choose its leaders. Debates over housing, pro-life/pro-choice issues, poverty, unemployment, immigration, and trade will play out on an unprecedented scale. This also creates a ripe opportunity for propaganda machines to hijack and destabilize public conversation, leaving voters fatigued and distrustful of electoral outcomes. Misinformation has historically been weaponized to create voter distrust, apathy, electoral violence, and security concerns around the world. With the advent of AI tools from companies like OpenAI and Google, AI-generated content has dramatically scaled the capacity to produce highly convincing fake news on a budget.
The spread and pervasiveness of misinformation in the wake of these global elections is expected to take on unimagined proportions. In 2023, NewsGuard reported that the number of websites hosting AI-created false articles had increased by more than 1,000 percent, growing rapidly from 49 sites to more than 600. According to Ali Swenson and Kelvin Chan of the AP News, deepfake video and audio have been used to spread false political information around the globe. This uncomfortable trend cuts across developed and developing nations alike, and the rise of weaponized AI has the potential to disrupt elections worldwide. Manipulated videos and false reports have gone viral in Ukraine, Russia, South Africa, China, the United States, Indonesia, India, Moldova, Slovakia, and beyond. Large language models (LLMs) like those behind Gemini and ChatGPT have made it quicker and easier to generate large bodies of text instantly: what would once have taken a room of paid bloggers a week can now be done in a few hours. New text-to-video generators like Sora can produce high-definition clips that could pass for professionally made films or real footage.
Elections stir deep sentiments in most citizens, and news plays a pivotal role in how electorates participate. Among the youth, there is a danger of voter apathy when fake news overwhelms this demographic with unverifiable information designed to cause panic. In conservative cultures, AI-generated misinformation can create tension and spark riots. Where citizens' trust is low, it may become even more difficult to govern and grow the economy. According to the Ipsos Global Trustworthiness Monitor, politicians remain the world's least trusted profession, with just 14 percent of people across 31 countries saying they consider them trustworthy. They rank "just below advertising executives and Government Ministers/Cabinet Officials, both of whom are considered trustworthy by less than one in five." The challenges that AI and fake news instigate ultimately center on trust between voters, candidates, political parties, and government agencies.
Responses to the challenges posed by AI-generated fake news and content have to be adaptable, flexible, and culturally relevant. While censorship might be the most convenient response to media misinformation, it has proven inadequate, as fake news outlets evade discovery. It is important to create policy frameworks for ethical AI usage and disclosure; many platforms now require that AI use be disclosed. A collaborative approach to news verification can also be promoted across social media. On platforms like X (formerly Twitter), users can now add context to media items or verify their originality, and similar protocols should be encouraged on other platforms. Government leaders engaging with tech platforms and communities have a role to play in establishing acceptable AI usage. Public education on misinformation can be scaled using the same AI tools. Countering AI abuse is as much a full-time job as any other and requires both private and public sector investment.
Dr Jonathan Oladeji is a Fellow of the Sixteenth Council, based in South Africa.



