Microsoft Disables Facial Recognition, What AI Thinks May Not Be What You Express


Expressing emotions through facial expressions is an almost innate human ability, and people habitually read others' emotions from their faces. Now that technology has advanced at a rapid pace, AI (artificial intelligence) can also recognize people's facial states and expressions.

▲Image from: ResearchGate

Not long ago, Microsoft, which has long invested in facial recognition technology, released a guide titled 'A framework for building AI systems responsibly', publicly sharing the responsible AI standards that guide how Microsoft builds AI systems.

▲Image from: Microsoft

The document mentions Microsoft's decision to retire facial analysis capabilities in Azure Face services, because they can be used to try to infer emotional states and identity attributes which, if misused, could subject people to stereotyping, discrimination, or unfair denial of services.

Currently, these Face capabilities are available only to Microsoft managed customers and partners. Existing customers will have one year to transition but must discontinue the features by June 21, 2023; new users can request access using the Face Recognition Request Form.

▲Image from: Microsoft

In fact, Microsoft isn't disabling the capability altogether, but rather integrating it into 'controlled' accessibility tools such as Seeing AI for people with visual impairments. Seeing AI describes objects and text, reads signs, interprets people's facial expressions, and provides navigation for the visually impaired; it currently supports English, Dutch, French, German, Japanese, and Spanish.

▲Image from: Microsoft

The guidance published by Microsoft illustrates the company's decision-making process, including its focus on principles such as inclusiveness, privacy, and transparency, and is the first major update to the standard since its launch in late 2019.

Microsoft says it made such a sweeping change to its facial recognition features because it recognizes that, for AI systems to be trustworthy, they need to appropriately address the problems they are designed to solve.

▲Image from: Microsoft

As part of the effort to align Azure Face Services with the requirements of responsible AI standards, Microsoft will also disable the ability to infer emotional states (happy, sad, neutral, angry, etc.) and identity attributes (such as gender, age, smile, facial hair, hair, and makeup).

In the case of emotional states, Microsoft decided not to provide open API access to technologies that scan people's faces and claim to be able to infer their emotional state based on their facial expressions or movements.

▲Image from: Microsoft

Microsoft's documentation shows that detection locates 27 facial landmarks on a person's face, and that various facial attributes can be determined, such as whether a given face is blurred, whether it has accessories, estimated gender and age, whether glasses are worn, the type of hair, and whether the expression is a smile.
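As an illustration of what such an attribute payload looks like, the sketch below parses a made-up sample modeled on the documented JSON shape of a Face detection response (faceRectangle, faceLandmarks, faceAttributes) and collects the attribute fields the article mentions. The field names follow Microsoft's published schema, but the sample values are invented and the `extract_attributes` helper is hypothetical, not part of any SDK.

```python
import json

# Made-up sample modeled on the documented shape of a Face "detect"
# response. Values are invented for illustration only.
SAMPLE_RESPONSE = json.dumps([
    {
        "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
        "faceRectangle": {"top": 54, "left": 394, "width": 78, "height": 78},
        "faceLandmarks": {
            "pupilLeft": {"x": 412.7, "y": 78.4},
            "pupilRight": {"x": 446.8, "y": 74.2},
            # ...the full payload carries 27 landmark points...
        },
        "faceAttributes": {
            "age": 27.0,
            "gender": "female",
            "smile": 0.88,
            "glasses": "NoGlasses",
            "blur": {"blurLevel": "low", "value": 0.06},
            "accessories": [],
            "emotion": {"happiness": 0.9, "neutral": 0.08, "anger": 0.0},
        },
    }
])

def extract_attributes(response_text):
    """Hypothetical helper: pull out the attribute fields the article
    mentions (age, gender, smile, glasses, dominant emotion) per face."""
    faces = json.loads(response_text)
    return [
        {
            "age": f["faceAttributes"]["age"],
            "gender": f["faceAttributes"]["gender"],
            "smile": f["faceAttributes"]["smile"],
            "glasses": f["faceAttributes"]["glasses"],
            "top_emotion": max(
                f["faceAttributes"]["emotion"],
                key=f["faceAttributes"]["emotion"].get,
            ),
        }
        for f in faces
    ]

print(extract_attributes(SAMPLE_RESPONSE))
```

It is precisely these inference fields, such as `emotion` and the identity attributes, that Microsoft is now removing from open API access.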

Experts inside and outside Microsoft have pointed to the lack of scientific consensus on the definition of 'emotion' and the significant privacy concerns surrounding this capability. Microsoft has therefore also decided that it needs to carefully analyze all AI systems designed to infer people's emotional states, whether those systems use facial analysis or any other AI technology.

▲Image from: Microsoft

It's worth noting that Microsoft isn't the only company taking a careful look at facial recognition: IBM's CEO Arvind Krishna also wrote to the US Congress revealing that the company has pulled out of the facial recognition business. Both companies' decisions are inseparable from the earlier, widely publicized death of George Floyd.

▲ Image from: BBC

This is because of fears that the technology could provide law enforcement agencies with surveillance tools that lead to human rights violations, that citizens' privacy could be compromised, and because current US legislation in this area is still underdeveloped.

So the companies that hold the technology have decided to start by restraining themselves, so that the technology is not abused and citizens' privacy and human rights have stronger safeguards. When the use of a technology is not yet governed by sound norms, regulating it at the level of technology development itself may be the better option.
