In a collective effort to combat election-related deepfakes, major tech companies, including Microsoft, Meta, Google, Amazon, Adobe, and IBM, signed an accord at the Munich Security Conference.
The voluntary agreement aims to establish a common framework for responding to AI-generated deepfakes designed to mislead voters. Thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs, and Stability AI, along with social media platforms X (formerly Twitter), TikTok, and Snap, joined the accord.
The signatories commit to employing methods to detect and label misleading political deepfakes on their platforms, sharing best practices, and responding swiftly when such content begins to spread. Emphasizing attention to context, the companies aim to safeguard legitimate expression, including educational, documentary, artistic, satirical, and political content. Critics argue that because the accord is voluntary and carries no enforcement mechanism, it amounts to little more than virtue signaling.
Although no U.S. federal law explicitly prohibits deepfakes, ten states have enacted statutes criminalizing their use, with Minnesota among the first to specifically target deepfakes deployed in political campaigning.
Federal agencies such as the FTC and FCC have also moved against the spread of deepfakes. The FTC recently proposed expanding an existing rule barring impersonation of businesses and government agencies to cover impersonation of individuals, including politicians. Around the same time, the FCC ruled that robocalls using AI-generated voices are illegal under existing anti-robocall law.
In the European Union, the AI Act would require clear labeling of AI-generated content, while the Digital Services Act aims to curb the spread of deepfakes on large platforms. Despite these efforts, deepfakes are on the rise: Clarity, a deepfake detection firm, reports a 900% year-over-year increase. Recent incidents, such as AI-generated robocalls mimicking U.S. President Joe Biden's voice in New Hampshire and AI-generated audio impersonating a political candidate in Slovakia, underscore the persistent challenge.
Public concern over the spread of misleading video and audio deepfakes is evident, with 85% of Americans expressing concern, according to a YouGov poll. Another survey by The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe AI tools will contribute to the spread of false information during the 2024 U.S. election cycle.