The Oversight Board, Meta’s independent policy council, is turning its attention to how the company’s social platforms handle explicit AI-generated images. The Board has announced two separate investigations into Meta’s handling of such content: one involving Instagram in India, the other Facebook in the U.S. Both inquiries stem from instances where Meta’s systems failed to adequately detect and address explicit material.
In both cases, the offending media has since been removed. To protect the privacy and dignity of the people targeted by the AI-generated images, the Board is not disclosing their identities.
The Oversight Board takes up cases concerning Meta’s moderation decisions, typically after users have exhausted Meta’s appeals process. The Board will publish its full findings and conclusions on these cases at a later date.
The First Case: In the first case, a user reported an AI-generated nude image of an Indian public figure on Instagram as pornography. Despite the report, Meta did not remove the content promptly. After subsequent appeals, the Oversight Board took up the case, and Meta removed the image for violating its community standards on bullying and harassment.
The Second Case: On Facebook, a user posted an explicit AI-generated image resembling a U.S. public figure in a group focused on AI. In this instance, Meta had already taken the image down, categorizing it as derogatory sexualized content and adding it to a Media Matching Service Bank.
When asked why it chose a case in which the content had already been removed, the Oversight Board said that such cases highlight broader issues across Meta’s platforms and allow a fuller examination of how effectively the company’s policies work globally.
The Challenge of Deepfake Porn and Gender-Based Violence
The proliferation of generative AI tools has made deepfake porn easier than ever to create, raising significant ethical concerns and exacerbating gender-based violence, particularly in regions like India. While the harm to victims is evident, few laws anywhere address the production and distribution of porn generated with AI tools: a handful of U.S. states have statutes against deepfakes, and the U.K. introduced a law this week to criminalize the creation of sexually explicit AI-generated imagery.
Meta’s Response and Future Steps
Meta acknowledges the challenges associated with detecting and addressing AI-generated explicit content. While AI and human review mechanisms are employed, there are inherent limitations, and the company continues to refine its approaches.
The Oversight Board is inviting public input on these cases, including on the harms of deepfake porn and on Meta’s response strategies. Despite ongoing efforts, platforms struggle to keep up with the evolving tactics perpetrators use to disseminate harmful content.
Ultimately, these cases underscore the ongoing struggle of platforms like Meta to adapt their moderation processes to the rapid evolution of AI-generated content, and the need for continuous innovation and vigilance in safeguarding users’ safety and well-being.