OpenAI, the artificial intelligence (AI) research organization, is taking steps to address concerns about the potential misuse of its AI tools by children. Under scrutiny from activists and parents, the company has established a dedicated Child Safety team to explore ways to prevent the abuse or misapplication of its AI technologies by underage users.
In a recent job listing on its careers page, OpenAI revealed the existence of the Child Safety team. The team works with platform policy, legal, and investigations groups within OpenAI, as well as with external partners, to manage processes, incidents, and reviews related to underage users.
The company is actively seeking a child safety enforcement specialist to join the team. This specialist will play a crucial role in applying OpenAI’s policies regarding AI-generated content, with a specific focus on “sensitive” content related to children. The move reflects the company’s commitment to maintaining a responsible and safe environment for users, especially those who are underage.
The decision to form the Child Safety team aligns with OpenAI’s anticipation of a potentially significant underage user base. The company’s current terms of use already require parental consent for users aged 13 to 18 and prohibit use of its AI tools by children under 13.
The establishment of this new team comes on the heels of OpenAI’s recent partnership with Common Sense Media to collaborate on creating guidelines for kid-friendly AI. OpenAI has also secured its first education customer, indicating a growing focus on ensuring its AI tools are suitable for educational environments.
While AI tools like ChatGPT have become popular among kids and teens for help with schoolwork and personal issues, concerns have arisen about their potential misuse. A poll by the Center for Democracy and Technology found that a significant share of kids have used ChatGPT to deal with anxiety, mental health issues, friendship problems, and family conflicts.
Despite AI’s positive impact, challenges such as plagiarism and misinformation have prompted some schools and colleges to ban ChatGPT. OpenAI has responded by publishing documentation for using ChatGPT in classrooms, offering educators guidance on employing AI responsibly as a teaching tool.
The move to form the Child Safety team signals OpenAI’s proactive stance in addressing potential challenges related to the use of AI by minors. With growing calls for guidelines and regulations on children’s use of generative AI, organizations like UNESCO are pushing for age limits, data protection measures, and user privacy safeguards in the educational application of AI technologies. OpenAI’s efforts reflect a broader industry acknowledgment of the need for responsible AI use, particularly when it comes to younger audiences.