The European Union (EU) has opened a consultation on draft election security guidelines aimed at large online platforms such as Facebook, Google, TikTok, and Twitter.
The focus extends beyond well-established concerns such as content moderation and political ad transparency to the risks posed by generative AI and deepfakes. The guidelines aim to ensure that major tech players take comprehensive measures to mitigate election-related risks, with particular emphasis on the threats posed by advanced AI tools.
Key points of the election security guidelines:
- Scope of Guidelines: The EU’s election security guidelines are directed at approximately two dozen platform giants and search engines designated under the Digital Services Act (DSA).
- Concerns Regarding Generative AI: The guidelines address the risks associated with large language models (LLMs) and generative AI, emphasizing the potential use of AI-generated content to mislead voters and manipulate electoral processes.
- Labeling and Transparency: The draft guidelines suggest that platforms clearly and persistently label AI-generated content, deepfakes, and other manipulated media that appreciably resemble real persons, objects, places, entities, or events. Platforms are also encouraged to give users accessible tools for labeling AI-generated content themselves.
- Watermarking for Distinguishability: The use of watermarking, including metadata, is recommended to distinguish AI-generated content. This is particularly crucial for content involving candidates, politicians, or political parties.
- Adherence to Legislative Proposals: The guidelines suggest drawing risk mitigation measures from the recently agreed legislative proposal, the AI Act, and the non-legally binding AI Pact. The emphasis is on ensuring that providers of generative AI systems use state-of-the-art solutions for content marking.
- Public Consultation: The draft guidelines are open for public consultation until March 7th, allowing stakeholders to provide input on the proposed measures.
- Election Integrity Measures: Recommendations include making reasonable efforts to ensure that AI-generated information in the electoral context draws on reliable sources, warning users about potential errors in GenAI output, and implementing safeguards against the creation of false content with a strong potential to influence user behavior.
- Red Teaming and Performance Metrics: Platforms may be urged to conduct red-teaming exercises focused on electoral processes, set appropriate performance metrics, and continually monitor the performance of generative AI systems.
- Support for Researchers: The guidelines stress support for external researchers in scrutinizing AI-generated content, with suggestions to set up dedicated tools for researchers to access and analyze such content.
- Adaptation of Ad Systems: Platforms are recommended to adapt their ad systems to consider potential risks associated with generative AI in ads, such as providing advertisers with ways to label GenAI content.
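To make the metadata-based watermarking point above concrete, here is a minimal, illustrative sketch of attaching a machine-readable provenance label to an image by inserting a standard tEXt metadata chunk into a PNG file. This is not what the guidelines mandate, and the `AIGenerated` keyword is an invented example; real deployments would use interoperable, tamper-evident standards such as C2PA content credentials rather than a bare metadata tag.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    crc = zlib.crc32(ctype + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", crc)

def make_minimal_png() -> bytes:
    """A valid 1x1 grayscale PNG, used here only as a stand-in for generated media."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # one filter byte + one pixel
    return PNG_SIG + _chunk(b"IHDR", ihdr) + _chunk(b"IDAT", idat) + _chunk(b"IEND", b"")

def add_provenance_label(png: bytes, keyword: str, value: str) -> bytes:
    """Insert a tEXt chunk carrying a provenance label right after IHDR."""
    assert png.startswith(PNG_SIG), "not a PNG"
    ihdr_len = struct.unpack(">I", png[8:12])[0]
    ihdr_end = 8 + 12 + ihdr_len  # signature + chunk framing + IHDR data
    text = _chunk(b"tEXt", keyword.encode("latin-1") + b"\x00" + value.encode("latin-1"))
    return png[:ihdr_end] + text + png[ihdr_end:]

def read_labels(png: bytes) -> dict:
    """Walk the chunk list and collect all tEXt keyword/value pairs."""
    labels, pos = {}, 8
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            labels[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length
    return labels

labeled = add_provenance_label(make_minimal_png(), "AIGenerated", "true")
print(read_labels(labeled))  # {'AIGenerated': 'true'}
```

Plain metadata like this is trivially stripped, which is why the draft pairs it with visible labeling and points providers toward state-of-the-art marking techniques.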
The final election security guidelines, expected in the coming months, will provide comprehensive recommendations and best practices for tech giants to address election security concerns.
The EU confirmed today that the election security guidelines are the first set in the works under the VLOPs/VLOSEs-focused Article 35 (“Mitigation of risks”) provision, saying the aim is to provide platforms with “best practices and possible measures to mitigate systemic risks on their platforms that may threaten the integrity of democratic electoral processes.”
In a statement, Thierry Breton, the EU’s commissioner for internal market, highlighted the significance of the Digital Services Act in addressing systemic risks on online platforms affecting democratic societies. The guidelines aim to ensure compliance with obligations and prevent misuse of platforms in influencing elections while safeguarding freedom of expression.