Microsoft has changed the terms of service for its Azure OpenAI Service to prohibit U.S. police departments from using generative AI for facial recognition. The change, announced Wednesday, marks a significant shift in the company's stance on law enforcement use of AI.
The updated terms explicitly bar integrations with Azure OpenAI Service from being used "by or for" police departments in the U.S. for facial recognition, a prohibition that extends to OpenAI's text- and speech-analyzing models. A separate new provision applies to law enforcement globally, barring the use of real-time facial recognition on mobile cameras, such as body cameras and dashcams, to identify people in uncontrolled environments.
The revision comes shortly after Axon unveiled a product that uses OpenAI's GPT-4 generative text model to summarize audio from body cameras. Critics were quick to raise concerns about potential pitfalls such as hallucinations and racial biases, underscoring how fraught AI integration can be in law enforcement contexts.
It is unclear whether Axon was using GPT-4 via Azure OpenAI Service, and whether the policy update was a response to Axon's product launch. Notably, OpenAI had previously restricted the use of its models for facial recognition through its own APIs.
Still, Microsoft's revised terms leave some room for interpretation. The prohibition on Azure OpenAI Service use applies only to police in the U.S., not to international law enforcement. Nor does it expressly cover facial recognition in controlled environments, such as a back office, although the terms do restrict any such use by U.S. police.
That nuance tracks with Microsoft's and OpenAI's evolving approach to AI in law enforcement and defense. Recent collaborations with the Pentagon mark a departure from the companies' earlier restrictions and signal a shift toward broader engagement in defense-related projects.
Azure OpenAI Service is also available in Microsoft's Azure Government product, which offers compliance and management features tailored to government agencies, including law enforcement. Candice Ling, SVP of Microsoft Federal, has said that Azure OpenAI Service would undergo additional authorization processes to support Department of Defense missions, a sign of the platform's growing role in critical government operations.
Update: After publication, Microsoft said its original change to the terms of service contained an error, and in fact the ban applies only to facial recognition in the U.S. It is not a blanket ban on police departments using the service.