Business

Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’

For years, activists and scholars have raised concerns that facial analysis software claiming to identify a person’s age, gender, and emotional state can be biased, unreliable, or invasive, and should not be sold.

Microsoft acknowledged some of those criticisms on Tuesday and said it plans to remove the features from its artificial intelligence service that detects, analyzes, and recognizes faces. They will stop being available to new users this week and will be phased out for existing users within a year.

The change is part of Microsoft’s push for tighter controls on its artificial intelligence products. After a two-year review, a team at Microsoft developed a “Responsible AI Standard,” a 27-page document that sets out requirements for AI systems to ensure they do not have a harmful effect on society.

The requirements include ensuring that systems provide “effective solutions to the problems they are designed to solve” and “a similar quality of service to identified demographic groups, including marginalized groups.”

Before release, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services, or life opportunities will be subject to review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

Concern had been growing within Microsoft about its emotion recognition tool, which labeled someone’s expression as anger, contempt, disgust, fear, happiness, neutrality, sadness, or surprise.

“There are many cultural, geographic, and individual differences in the way we express ourselves,” Ms. Crampton said. That led to reliability concerns, along with the bigger question of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being removed, along with other tools that detect facial attributes such as hair and smiles, could be useful for interpreting visual images for people who are blind or have low vision, for example. But the company decided it was problematic to make such profiling tools generally available to the public, Ms. Crampton said.

In particular, she added, the system’s so-called gender classifier was binary, and “that doesn’t match our values.”

Microsoft will also put new controls on its facial recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to deploy it.

Users will also have to apply for access and explain how they will use other potentially abusable AI systems, such as Custom Neural Voice. The service can generate a human voice print based on a sample of someone’s speech, so that an author, for example, can create a synthetic version of his or her voice to read an audiobook in a language he or she does not speak.

Because the tool could be misused to create the impression that people have said things they have not, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.

“We are taking concrete steps to live up to our AI principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical AI group in 2018.

Microsoft, like other technology companies, has had stumbles with its artificial intelligence products. In 2016, it released a chatbot on Twitter called Tay, which was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spouting racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers discovered that speech-to-text tools developed by Microsoft, Apple, Google, IBM, and Amazon worked less well for Black people. Microsoft’s system was the best of the group, but it misidentified 15 percent of words for white speakers, compared with 27 percent for Black speakers.

The company had collected diverse speech data to train its AI systems but had not understood just how diverse language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to know about, which went beyond demographic and regional diversity to the ways people speak in formal and informal settings.

“Thinking about race as a determining factor in how someone speaks is actually a bit misleading,” Ms. Crampton said. “What we learned in consultation with experts is that in reality a very wide range of factors influence linguistic variety.”

Ms. Crampton said the journey to fix the speech-to-text disparity helped inform the guidance set out in the company’s new standard.

“This is an important standard-setting period for AI,” she said, pointing to Europe’s proposed regulations setting rules and restrictions on the use of artificial intelligence. “We hope to be able to use our standard to contribute to the bright, necessary discussion that needs to be had about the standards technology companies should be held to.”

A lively debate about the potential harms of AI has been underway for years in the technology community, fueled by mistakes and errors that have real consequences for people’s lives, such as algorithms that determine whether people receive welfare benefits. The Dutch tax authorities, for example, penalized people with dual citizenship because of a defective algorithm.

Automated software that recognizes and analyzes faces has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. “There are many concerns about the place of facial recognition technology in society,” said the company’s vice president of artificial intelligence.

Several Black men have been wrongfully arrested after faulty facial recognition matches. And in 2020, around the time of the Black Lives Matter protests that followed the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by police in the United States, saying clearer laws on their use were needed.

Since then, Washington and Massachusetts have passed regulations requiring, among other things, judicial oversight of police use of facial recognition tools.

Ms. Crampton said Microsoft had considered making its software available to police in states with such laws on the books but had decided, for now, not to do so. She said that could change as the legal landscape changes.

Arvind Narayanan, a Princeton professor of computer science and a well-known AI expert, said companies might be stepping back from technologies that analyze the face because they are “more visceral, as opposed to other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”

Companies may also find that, at least for now, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users its facial analysis features had. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because those technologies are a “cash cow.”
