Google has announced new guidelines for developers creating AI-powered apps on Google Play, aiming to prevent the generation and distribution of restricted content such as sexual material and depictions of violence. The move comes amid growing concern about AI apps that produce harmful content, including deepfake nudes and other offensive material.
Under the new guidelines, AI apps on Google Play must include mechanisms to block the creation of restricted content and provide users with options to report offensive material. This initiative is part of Google’s broader effort to ensure that apps adhere to its AI-Generated Content Policy, which mandates that apps must not facilitate the generation of any restricted content.
Google is also cracking down on the promotional practices of AI apps. If an app’s marketing materials suggest it can perform prohibited actions, such as creating nonconsensual nude images, it risks being banned from the platform. This policy is in response to recent incidents where apps were advertised on social media with claims of using AI to generate deepfake nudes, leading to their removal from both Google Play and the Apple App Store.
The guidelines stress the importance of community feedback in keeping Google Play safe. AI apps must prioritize user reports of inappropriate content and act on them promptly. Additionally, apps in which user interactions shape what content is shown must rigorously manage and filter those interactions to prevent harmful content from being promoted.
Developer responsibilities
Developers are responsible for thoroughly testing their AI tools to ensure they respect user safety and privacy. Google encourages the use of its closed testing feature, which lets developers gather early feedback from a limited group of users before a wider release. Developers are also expected to document their testing processes, as Google may request this documentation to verify compliance with its guidelines.
To assist developers in navigating these new requirements, Google is offering resources like the People + AI Guidebook. This guidebook is designed to support developers in building AI applications that are not only effective but also ethically responsible and aligned with Google’s standards for content and user interaction.
These measures signal a growing recognition that AI applications need stricter oversight, especially as they become capable of profoundly influencing public perceptions and behavior. By setting firm guidelines and providing supporting resources, Google aims to foster a safer and more responsible AI ecosystem on its platform.