In the ongoing battle against online harm, governments worldwide are increasingly scrutinizing the role of artificial intelligence (AI). Ofcom, the UK’s communications regulator, now plans to examine how AI can help combat harmful online content, particularly where children’s safety is concerned.
Ofcom, which enforces the UK’s Online Safety Act, has announced plans to open a consultation on the use of AI and other automated tools to proactively detect and remove illegal content online, with a particular focus on protecting children from harmful material and on identifying child exploitation material that has previously gone undetected.
Mark Bunting, a director in Ofcom’s Online Safety Group, stresses the need to evaluate how effectively existing AI screening tools identify and shield children from harmful content. While some platforms already use such tools, their accuracy and effectiveness remain underexplored. Ofcom aims to close this gap by recommending industry standards for assessing tool accuracy while managing risks to free expression and privacy.
The introduction of AI tools for content moderation is not without challenges, however. Critics point to the inherent limitations of AI detection and its potential implications for free expression. Even so, advances in AI research offer promising avenues for detecting and mitigating online threats, including deepfakes and fraudulent activity.
Ofcom’s move coincides with its latest research showing rising digital engagement among younger children in the UK. With more children using smartphones and tablets, concerns over age-appropriate content and online safety are mounting. Social media use among younger age groups is on the rise, with platforms such as WhatsApp and TikTok gaining popularity even among 5- to 7-year-olds.
Despite parental efforts to educate children about online safety, a disconnect remains between children’s exposure to harmful content and their willingness to report such experiences to parents. This underscores the need for comprehensive measures to address online risks and to empower children to navigate the digital landscape safely.
As AI continues to evolve as a tool for online content moderation, regulatory bodies like Ofcom play a pivotal role in shaping industry standards and safeguarding vulnerable users, particularly children, in the online realm.