In a move to combat the rise of AI-fueled misinformation, Meta is expanding its labeling of AI-generated content on Facebook, Instagram, and Threads.
While the company has been detecting and labeling content created with its own “Imagine with Meta” generative AI tool, it will now extend the practice to synthetic imagery produced by rival AI tools. The expansion relies on common technical standards, developed in collaboration with industry partners, that signal when content is AI-generated.
According to Nick Clegg, Meta’s president of global affairs, the expanded labeling will roll out gradually over the coming months and is intended to cover every language each app supports. Meta will pay particular attention to major election periods around the world during the rollout, using them to inform how the expansion proceeds in different markets.
Detecting AI-Generated Imagery
Meta’s approach to detecting AI-generated content relies on both visible marks applied by its generative AI and “invisible watermarks” embedded in the image files. These signals, which rival AI tools also embed, are what Meta’s detection technology will look for.
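To make the idea concrete, here is a minimal sketch of how embedded provenance signals can be checked in practice. It looks for the IPTC “trainedAlgorithmicMedia” digital-source-type value that some generators write into XMP metadata, and for a C2PA manifest marker in the raw bytes. The function name and the specific markers are illustrative assumptions, not a description of Meta’s actual detection pipeline.

```python
# Sketch: scan an image file for common AI-provenance markers.
# The markers checked here are illustrative; real detection systems
# parse XMP/IPTC metadata and C2PA manifests properly rather than
# searching raw bytes.

from pathlib import Path

# IPTC digital-source-type value several tools use to flag AI-generated media
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"
# C2PA/JUMBF manifests embed this label in the file
C2PA_MARKER = b"c2pa"


def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file carries a known AI-provenance marker."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_MARKER in data or C2PA_MARKER in data


if __name__ == "__main__":
    # Hypothetical usage with a local file
    print(looks_ai_generated("example.jpg"))
```

A byte-level search like this is only a heuristic; the industry standards Meta refers to define structured metadata fields that a production system would parse and verify, and watermarks that survive cropping or re-encoding require dedicated decoders.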
Clegg points to collaboration with other AI companies, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, to establish common standards and best practices for identifying AI-generated content.
However, Clegg notes that AI-generated video and audio are harder to detect because marking and watermarking have not been widely adopted for those formats. To address this, Meta is changing its policy to require users to manually disclose “photorealistic” AI-generated video or “realistic-sounding” audio, with potential penalties for non-disclosure.
Using Large Language Models (LLMs) for Content Moderation
Meta is also exploring generative AI, specifically Large Language Models (LLMs), as a supplement to its content moderation efforts. It is testing LLMs trained on its Community Standards to help determine whether content violates those policies. Clegg is optimistic that generative AI can make the takedown process faster and more accurate, particularly during heightened-risk periods such as elections.
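As a rough illustration of the approach, the sketch below shows a post and a policy excerpt being passed to an LLM for a structured verdict. The `call_llm` function is a hypothetical placeholder for whatever model endpoint is used, and the policy text is an invented example, not Meta’s Community Standards.

```python
# Hedged sketch of LLM-assisted moderation: show the model a policy
# excerpt and a post, then ask for a structured violation judgment.
# `call_llm` is a placeholder, not a real API.

import json

POLICY_EXCERPT = (
    "Posts must not contain credible threats of violence "
    "or incite others to commit violence."
)

PROMPT_TEMPLATE = """You are a content-policy classifier.
Policy:
{policy}

Post:
{post}

Answer with JSON: {{"violates": true or false, "reason": "..."}}"""


def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a hosted or local model.
    raise NotImplementedError("wire this to a real LLM endpoint")


def check_post(post: str) -> dict:
    """Ask the model whether a post violates the policy excerpt."""
    prompt = PROMPT_TEMPLATE.format(policy=POLICY_EXCERPT, post=post)
    return json.loads(call_llm(prompt))
```

In a real deployment, such a classifier would typically run alongside existing automated systems and human review rather than replacing them, which is consistent with Clegg framing LLMs as a supplement to Meta’s current moderation tools.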
While Meta’s efforts are aimed at curbing AI-generated disinformation, the efficacy of these measures and the prevalence of synthetic content on its platforms remain unclear.