Robots & Generative AI by Masadepan
With an estimated four billion people voting in elections in more than 60 countries this year, observers fear the increasing use of artificial intelligence will pose problems for the political process, from “deepfake” videos to mass disinformation campaigns. At an AFD-hosted conference last month, AI experts discussed the risks posed to human rights, possible solutions, and the work that remains to be done.

An automatic “robocall” phone message imitating President Joe Biden’s voice warns New Hampshire voters in the United States to stay away from the ballot box in the run-up to November’s elections. An AI-generated video replicating the image and voice of the late Indonesian dictator Suharto goes viral as he urges voters to support the Golkar party at the polls. 

These are just two of many examples of artificial intelligence being marshalled to manipulate public opinion.

“AI amplifies the power of disinformation,” Olivier Lechien, AFD’s expert on Citizens and Institutions, told AFD’s Paris conference on Artificial Intelligence in January. “With images that seem completely believable, audio that perfectly imitates the voice of a sitting US President calling on people not to vote in the primaries – the more plausible these productions are, the more people can be inclined to believe the opposite of what is true.”

Mass deception, mass produced

The disinformation and “deepfakes” that have influenced elections in recent years are now being turbo-charged. Using social networks, individuals have been able to create and spread fake news for some time. But with new generative AI technology, such messaging can be confected in mere minutes, and spread like wildfire. 


Further reading: AI and the Sustainable Development Goals: A New Frontier


Flicking across the conference auditorium screen are a series of dramatic images. US presidential candidate Donald Trump is depicted running from police. French President Emmanuel Macron is shown protesting against his own government’s pension reform bill. Pope Francis appears in the kind of large puffer jacket normally sported by rappers. All fake images, generated by AI software. 

“We’ve seen AI advance at a rapid pace,” says Peter Addo, AFD’s Lead Data Scientist and head of the Emerging Tech Lab. “And in 2024, I expect to see a lot of high-profile AI deepfake election scandals. It’s already started, because now, the tools that have been developed are in the mainstream; anyone can get an AI tool and start playing with it.”


Watch the replay here (the video is mainly in French; you can set English subtitles)


Taming the Wild West

In December 2023, the European Union’s Parliament and Council agreed on a draft of the Artificial Intelligence Act, which aims to protect people from manipulation and social profiling.  

Under the pioneering bill, AI systems will have to comply with transparency requirements, and a rating system will classify AI systems according to the level of risk they pose to human rights. AI systems used to influence the outcome of elections and voter behavior, for example, will be classified as high or “unacceptable” risk.

In the same way that the EU’s General Data Protection Regulation (GDPR) on information privacy provided a regulatory model for other countries to follow, the new AI legislation could also lead the way. 

“With both the AI Act and the Digital Services Act, you have a comprehensive framework for working with systemic risks for digital public space,” says Anastasia Stasenko, Lecturer in Digital Strategy and Data Analysis at Sorbonne-Nouvelle University and founder of the startup Pleias.

The regulations – among the first AI-related legislation in the world – could provide a template for improved governance in countries struggling to keep up with the blistering pace of AI development.

“The regulations around artificial intelligence [are born of] a clear vision shared by European countries,” said Elisabeth Barsacq, Head of European and International Affairs at CNIL, France’s national commission for digital freedoms. “Along with the GDPR, I think it's a solid basis on which companies, NGOs and institutions can build, across the Global South.”

Making AI accessible across the Global South

“The question is how it will be implemented,” says Stasenko. “But before developing countries even get to the stage of drafting legislation, they need access to better infrastructure and capacity to develop AI. Without the infrastructure, there will be a big digital divide.” 

Most countries lack the computational resources necessary for their own AI initiatives, as well as the capacity to detect AI-generated fakery. 

“They need diverse, culturally attuned AI, and also capacity for communities to build their own AI ecosystems,” says Stasenko. 

That will mean sharing AI language learning tools and datasets on open source platforms, so that communities across the Global South can hone their own systems. 


Further reading: AFD’s Sustainable Development Goal Prospector: Mobilizing AI to Achieve the SDGs


AI Awareness

“AI is both a threat and part of the solution,” says Olivier Lechien. “The good news, though, is that AI also helps fact-checkers determine whether a given image, text or audio is fake.”

Political leaders could ward off interference by consulting with their electorates more frequently, says Lechien. While holding elections every four or five years gives hackers and manipulators the time to plot and intervene, multiplying political consultations – especially at the local level – makes manipulation more difficult. “They can’t be everywhere all the time,” he adds.

AFD Group is backing a number of projects supporting access to information, digital governance and civic participation. Initiatives range from the MediaSahel Project, which helps young people engage with the media in West and Central Africa, to Qarib, which supports media outlets that aim to reduce tensions and improve relations among communities and public authorities across the Middle East.  

“We need to rethink educational systems at all levels, from curricula to teacher training,” says Peter Addo. “We need to mainstream data and AI literacy, and develop critical thinking skills so that we question everything we see online.”


Join us on 26 March for the next Conference: AI for Sustainable & Inclusive Futures - a day of plenary sessions and workshops on AI