Microsoft has strengthened its stance against the use of generative AI for facial recognition by U.S. police departments through its Azure OpenAI Service, a managed enterprise solution built around OpenAI’s technology.
TechCrunch reports that in a recent update to its terms of service, Microsoft has made it explicitly clear that integrations with Azure OpenAI Service are prohibited from being used “by or for” police departments in the United States for facial recognition purposes. This ban extends to current and potential future image-analyzing models developed by OpenAI.
The updated policy also addresses law enforcement agencies globally, specifically banning the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to identify individuals in uncontrolled environments.
These changes come on the heels of Axon’s announcement of a new product that uses OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to highlight potential issues with this application, including the tendency of generative AI models to invent facts (known as hallucinations) and racial biases absorbed from training data. The latter, critics argue, is particularly concerning given that people of color are stopped by police at disproportionately higher rates than their white counterparts.
While it remains unclear whether Axon was using GPT-4 through Azure OpenAI Service and if the updated policy was a direct response to their product launch, the move aligns with Microsoft and OpenAI’s recent approach to AI-related law enforcement and defense contracts.
The new terms, however, do leave some room for interpretation. The complete ban on Azure OpenAI Service usage applies only to U.S. police, not international law enforcement. Additionally, the mobile-camera restriction does not cover facial recognition performed with stationary cameras in controlled environments, such as back offices, though U.S. police remain barred from any use of facial recognition through the service.
This stance is consistent with Microsoft and OpenAI’s recent engagements with government agencies. In January, reports revealed that OpenAI is collaborating with the Pentagon on various projects, including cybersecurity capabilities, marking a shift from the startup’s previous ban on providing AI to militaries. Meanwhile, Microsoft has proposed using OpenAI’s image generation tool, DALL-E, to assist the Department of Defense in developing software for military operations.
Read more at TechCrunch here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.