Meta, the social media company formerly known as Facebook, is developing an invisible watermarking mechanism for AI-generated images to deter misuse. The watermarks are designed to be difficult to remove, increasing the transparency and traceability of AI-generated photos.
Unlike typical watermarks, the marks applied to images from Meta's "Imagine with Meta AI" tool are claimed to survive cropping, color changes, and screenshots. A deep-learning model embeds the invisible watermark, which stays imperceptible to the human eye but can be detected by a corresponding model.
The rollout will begin with images generated by Meta AI and later extend to other Meta services that use AI-generated images. Regular watermarks can be easily deleted or altered, whereas the deep-learning approach resists image tampering. The company's effort to prevent misuse responds to well-known problems with AI-generated content, including deepfakes and manipulated imagery used for fraud and misinformation.
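Meta has not published implementation details of its watermarking models. As a rough intuition for the embed/detect pairing described above, the sketch below uses a classical spread-spectrum scheme: a keyed, low-amplitude noise pattern is added to the image (invisible to the eye), and a detector holding the same key recovers it by correlation, even after mild edits. This is a toy illustration, not Meta's method; all names and parameters here are hypothetical.

```python
import numpy as np

def embed_watermark(image, key, strength=4.0):
    """Toy spread-spectrum embedder (NOT Meta's unpublished model).

    Adds a keyed +/-1 pseudorandom pattern at low amplitude, so the
    pixel changes are imperceptible but statistically detectable.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image, key, threshold=2.0):
    """Detect by correlating the image with the keyed pattern.

    The detector needs the same key -- analogous to Meta's paired
    detection model. Correlation survives mild distortions such as
    added noise (a crude stand-in for screenshot artifacts).
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    centered = image - image.mean()
    score = float(np.mean(centered * pattern))
    return score > threshold

# Demo on a random grayscale image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256)).astype(float)
marked = embed_watermark(img, key=42)
noisy = np.clip(marked + rng.normal(0, 1.0, size=img.shape), 0, 255)

print(detect_watermark(noisy, key=42))   # True: survives added noise
print(detect_watermark(img, key=42))     # False: unmarked original
print(detect_watermark(marked, key=7))   # False: wrong key
```

Real robust watermarking must additionally survive cropping, resizing, and recompression, which is why production systems train neural embedder/detector pairs rather than relying on a fixed pattern like this one.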
Meta’s action follows a wave of generative AI tools being used to manipulate photos, video, and audio, prompting calls for safeguards. Scammers have used such tools to produce fake content depicting public figures, spreading misinformation and affecting financial markets and public perception. By embedding invisible watermarks, Meta aims to deter misuse of AI-generated images and promote transparency about content origin and authenticity.
“Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed.” — Andy Campbell (@AndyBCampbell), May 22, 2023
Meta AI’s latest release lets Facebook Messenger and Instagram users create and share AI-generated images through the “reimagine” feature. The invisible watermark will apply in both messaging platforms as part of a broader strategy for handling concerns about AI-generated content.
Unlike DALL-E and Midjourney, which provide standard watermarking, Meta’s approach emphasizes robustness to manipulation. These controls are being extended across Meta’s services as part of the company’s proactive response to AI-generated content issues in its ecosystem.