In the tumultuous landscape of global elections this year, a new and pervasive threat has emerged, challenging the very foundations of democracy. A recent study by the Center for Countering Digital Hate (CCDH) finds that AI-generated election disinformation, specifically deepfake images, has grown on X (formerly Twitter) by an average of 130% per month over the past year.
This unprecedented surge in AI-generated deepfakes, fueled by free and easily accessible AI tools, poses an imminent risk to democratic processes. The study focused on X and did not examine other platforms such as Facebook or TikTok.
Callum Hood, the head of research at CCDH, warns that the lack of adequate social media moderation and the availability of easily exploitable AI tools could undermine democratic exercises globally, including the upcoming U.S. presidential election.
Deepfakes abundant
Deepfakes, once confined to the fringes of the internet, have now permeated mainstream consciousness. Research cited by the World Economic Forum indicates a staggering 900% growth in deepfakes between 2019 and 2020.
A 10x increase in deepfake numbers from 2022 to 2023, as observed by identity verification platform Sumsub, further emphasizes the escalating nature of this crisis.
A recent poll by YouGov highlights the growing concern among Americans, with 85% expressing worry about the spread of misleading video and audio deepfakes. Additionally, a survey by The Associated Press-NORC Center for Public Affairs Research reveals that nearly 60% of adults anticipate an increase in false and misleading information during the 2024 U.S. election cycle due to AI tools.
![AI-Generated deepfakes pose growing threat to democracy in 2024 elections image 16](https://i0.wp.com/nosisnews.com/wp-content/uploads/2024/03/image-16.png?resize=783%2C1024&ssl=1)
The CCDH study investigates the proliferation of election-related deepfakes on X by analyzing community notes that reference deepfakes. The co-authors identified four primary AI image generators used to create them: Midjourney, OpenAI’s DALL-E 3, Stability AI’s DreamStudio, and Microsoft’s Image Creator. Strikingly, the study finds that these generators produced deepfakes in 41% of test runs, despite each having specific policies against election disinformation.
Notably, different image generators exhibited varying propensities for generating political deepfakes. Midjourney emerged as the most prolific, generating election-related deepfakes in 65% of the test runs. The study underscores the urgency of addressing vulnerabilities in AI-generated images that could fuel disinformation about voting and rigged elections.
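The per-generator figures above come down to simple tallying: run a set of prompts against each generator and record how often a deepfake results. The following sketch illustrates that computation; the generator names and outcomes here are purely hypothetical and are not data from the CCDH study.

```python
# Hypothetical tally of prompt-test outcomes, illustrating how a
# per-generator "deepfake success rate" of the kind reported by the
# CCDH study could be computed. All names and counts are illustrative.
from collections import Counter

# (generator, produced_deepfake) pairs from imagined test runs.
results = [
    ("Midjourney", True), ("Midjourney", True), ("Midjourney", False),
    ("DALL-E 3", False), ("DALL-E 3", True),
    ("DreamStudio", False), ("Image Creator", True),
]

def success_rates(runs):
    """Return each generator's share of runs that yielded a deepfake."""
    total, hits = Counter(), Counter()
    for gen, produced in runs:
        total[gen] += 1
        if produced:
            hits[gen] += 1
    return {gen: hits[gen] / total[gen] for gen in total}

for gen, rate in success_rates(results).items():
    print(f"{gen}: {rate:.0%}")
```

A real audit would of course also need a consistent rubric for judging whether an output counts as an election-related deepfake, which is where most of the methodological work lies.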
![AI-Generated deepfakes pose growing threat to democracy in 2024 elections image 17](https://i0.wp.com/nosisnews.com/wp-content/uploads/2024/03/image-17.png?resize=778%2C1024&ssl=1)
While some image generators are taking steps to enhance moderation systems, such as Midjourney’s upcoming updates, questions persist about the overall efficacy of current measures. OpenAI, Microsoft, and Stability AI are actively working on tools and policies to combat deepfakes, yet the sheer volume of AI-generated content on social media platforms continues to present challenges.
![AI-Generated deepfakes pose growing threat to democracy in 2024 elections image 18](https://i0.wp.com/nosisnews.com/wp-content/uploads/2024/03/image-18.png?resize=778%2C1024&ssl=1)
The study also illuminates the role of social media in amplifying the impact of deepfakes. Instances where AI-generated content goes unchecked, reaching millions of views despite fact-checks, raise serious concerns about the effectiveness of current moderation practices. The study suggests that without proper guardrails, AI tools become potent weapons for bad actors, who can produce political misinformation at virtually no cost and spread it at enormous scale.
Addressing the deepfakes problem requires a multifaceted approach. Hood and the co-authors advocate for responsible safeguards in AI tools and platforms, increased collaboration with researchers, and investment in trust and safety staff dedicated to combating the use of generative AI for disinformation.
Policymakers are urged to use existing laws to prevent voter intimidation arising from deepfakes and to pursue legislation making AI products safer and vendors more accountable.
Encouragingly, some progress has been made on these fronts. Image generator vendors have signed a voluntary accord to adopt a common framework for responding to AI-generated deepfakes, signaling a collective effort to address the issue. Platforms like Meta and Google have implemented measures to label and disclose AI-generated content, while some U.S. states have enacted laws criminalizing deepfakes.
In the face of this growing threat, the sobering reality is that swift action by AI platforms, social media companies, and lawmakers is essential to safeguard democracy from the insidious influence of political deepfakes. With elections taking place worldwide in 2024, it is imperative to act now before irreparable damage is done.