The European Commission has taken a significant step toward regulating generative artificial intelligence (AI) by sending formal requests for information (RFIs) to major tech players, including:
- Bing
- Google Search
- Snapchat
- TikTok
- YouTube
- X
These requests come under the framework of the Digital Services Act (DSA), the EU’s revamped rules governing e-commerce and online governance, targeting platforms designated as very large online platforms (VLOPs).
The Commission’s focus is on assessing and mitigating risks associated with generative AI, particularly concerning issues such as the spread of misinformation, deepfakes, and manipulation of services with the potential to influence voters.
![European Commission urges tech giants to address risks of generative AI image 78](https://i0.wp.com/nosisnews.com/wp-content/uploads/2024/03/image-78.png?resize=450%2C253&ssl=1)
The requests for information delve into the platforms’ mitigation measures for these risks, emphasizing the importance of safeguarding electoral processes, combating the dissemination of illegal content, protecting fundamental rights, addressing gender-based violence, and protecting minors’ well-being and mental health.
In addition to seeking internal documents and risk assessments, the EU plans to conduct stress tests after Easter to evaluate platforms’ preparedness for dealing with generative AI risks, especially in light of the upcoming European Parliament elections. The Commission aims to finalize election security guidelines by March 27, pushing platforms to enhance their readiness to detect and respond to potential incidents.
The Commission underscored the decreasing cost of producing synthetic content, heightening concerns about the proliferation of misleading deepfakes during elections. While acknowledging recent industry efforts to combat deceptive AI use, such as the tech industry accord arising from the Munich Security Conference, the EU believes its forthcoming election security guidance will provide more robust safeguards.
The EU’s approach involves leveraging existing regulatory frameworks, including the DSA’s due diligence rules, the Code of Practice Against Disinformation, and forthcoming transparency labeling and AI model marking rules under the AI Act.
The goal is to establish an enforcement ecosystem capable of addressing various generative AI risks in the lead-up to elections.
Furthermore, the Commission’s RFIs extend beyond voter manipulation to encompass a broader range of generative AI risks, including deepfake pornography and other malicious synthetic content. The requests also target smaller platforms and AI tool makers, acknowledging their potential role in disseminating harmful content despite not falling under explicit DSA oversight.
In summary, the European Commission’s actions signal a proactive stance in regulating generative AI to mitigate risks and ensure the integrity of democratic processes, reflecting the EU’s commitment to upholding fundamental values and protecting its citizens in the digital age.