Vera Jourova, vice president of the European Commission, recommended that businesses using generative AI tools that have the potential to disseminate misinformation be publicly identified.
Officials from the European Union are in talks to introduce further regulations that would improve the transparency of artificial intelligence (AI) tools like OpenAI’s ChatGPT. The main concern is the potential spread of misinformation by generative AI systems. Vera Jourova, the European Commission’s vice president for values and transparency, advised that businesses deploying these AI tools label their content in order to stop the spread of “fake news.”
Jourova notably cited Google’s Bard and Microsoft’s Bing Chat as examples of services that incorporate generative AI, highlighting the need for controls to prevent these technologies from being misused for disinformation. The goal is to stop bad actors from using AI tools to disseminate false or misleading information.
The EU’s strengthened Code of Practice on Disinformation, introduced in 2022, has already won the support of several major digital firms, including Google, Microsoft, and Meta Platforms (formerly Facebook). Jourova urged these businesses and others to submit reports by the end of July detailing the new AI safety measures they are putting in place.
The discussions reflect the EU’s broader effort to regulate AI and encourage its ethical deployment. By pushing for transparency and safeguards in AI systems, the EU seeks to reduce the risks of AI technologies being abused for disinformation campaigns.
The proposed measures reflect a growing recognition that the societal effects of AI, and its influence on information ecosystems, must be addressed. By promoting labeling and safeguards, EU officials aim to give the public greater transparency and accountability in the use of AI technologies, especially those capable of producing content that spreads misinformation.
The proposal has drawn mixed responses. Some consider the labeling requirement cumbersome and unnecessary, while others see it as an essential step to protect users from the potential risks of AI-generated content.
The EU is expected to vote on the proposal in the coming months. If approved, the labeling requirement would take effect in 2024.