Microsoft bans US police officers from using its AI technology for facial recognition

Microsoft has updated the terms of use for its Azure OpenAI Service, restricting access by US law enforcement agencies to the artificial intelligence (AI) models developed by OpenAI.

The provision is intended to strengthen privacy protections and promote the ethical use of these technologies, particularly real-time facial recognition.

Additionally, Microsoft’s new policy dictates that no police department in the US may use such AI tools for applications involving text and speech analysis in their investigations.

The company also banned the use of real-time facial recognition technology with mobile cameras operated by any police agency worldwide.

A Microsoft spokesperson noted that integrations with the Azure OpenAI Service must not be used to identify people in uncontrolled environments, match individuals against databases of suspects, or conduct ongoing surveillance based on individuals’ personal or biometric data.

Models affected by the update include GPT-3, GPT-4 and its Turbo version, the Vision and Codex tools, DALL·E 2, DALL·E 3, and Whisper.

Microsoft has also implemented limits on facial analysis to ensure that personal characteristics such as emotional state, gender, or age are not inferred.

The move appears to contradict earlier reports suggesting a possible collaboration between Microsoft and the US military on the use of these technologies.

