
Amazon Still Pushing Biased Facial-Recognition Software To Law Enforcement, MIT Researcher Contends



Amazon CEO Jeff Bezos was made aware of biases in the company's facial-recognition software last June, when Joy Buolamwini, a researcher at the MIT Media Lab and founder of the Algorithmic Justice League, an organization established to combat bias in decision-making software, wrote an open letter revealing that the company's Rekognition tool underperformed particularly in identifying darker-skinned individuals and women.

Seven months later, Amazon is still selling the technology and, following a recent New York Times story about Rekognition, has disputed MIT's findings, saying the study didn't use the latest version of Rekognition and relied on flawed methodology.

In her latest Medium post, Buolamwini responds to Amazon's dismissal. "I share this information because Amazon continues to push unregulated and unproven technology not only to law enforcement but increasingly to the military," Buolamwini wrote in a press statement. Algorithmic decision-making, she notes, can lead to illegal discrimination and unfair practices that limit opportunities, economic gains and freedom.

Rekognition software has been marketed and sold to both federal and local law enforcement since 2016. In May 2018, shareholders, as well as the American Civil Liberties Union (ACLU) and other civil rights organizations, urged Amazon to stop selling the software. In an AWS blog post responding to the ACLU in July, Matt Wood, general manager of artificial intelligence at AWS, cautioned law enforcement on how to use the tool, reiterating the need for a higher level of accuracy when it is used in that context.

“There’s a difference between using machine learning to identify a food object and using machine learning to determine whether a face match should warrant considering any law enforcement action,” Wood said. “The latter is serious business and requires much higher confidence levels,” he continued. “We continue to recommend that customers do not use less than 99% confidence levels for law enforcement matches, and then to only use the matches as one input across others that make sense for each agency.”
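
Wood's guidance corresponds to a specific API parameter. Below is a minimal sketch, assuming the boto3 SDK and an S3 bucket with two example images (the bucket and object names are hypothetical placeholders), of applying a 99% similarity threshold when comparing faces with Rekognition:

```python
# Minimal sketch: applying the 99% threshold Wood describes when comparing
# faces with Amazon Rekognition via boto3. Bucket and object names are
# hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "gallery.jpg"}},
    SimilarityThreshold=99.0,  # only return matches at or above 99% similarity
)

# Per Amazon's guidance, a match above the threshold is still only one input
# among others, not a determination on its own.
for match in response["FaceMatches"]:
    print(f"Candidate match with similarity {match['Similarity']:.1f}%")
```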

A later MIT study calls those accuracy levels into question, finding specifically that the tool is less accurate at identifying black women.

In an August 2018 study, MIT researchers found that Rekognition identified white men with 100% accuracy, but that accuracy dropped dramatically, to 68.6%, when identifying women of color.
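
Audits of this kind report accuracy disaggregated by demographic subgroup rather than as a single overall number. The toy sketch below illustrates that bookkeeping; the records are invented placeholders, not the MIT data:

```python
# Toy sketch of a disaggregated accuracy calculation: accuracy is computed
# separately for each demographic subgroup rather than as one overall figure.
# The records below are invented placeholders, not the MIT data.
from collections import defaultdict

records = [
    {"group": "lighter-skinned men", "correct": True},
    {"group": "darker-skinned women", "correct": False},
    {"group": "darker-skinned women", "correct": True},
    # ... one record per labeled test image
]

totals, hits = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    hits[r["group"]] += r["correct"]

for group, n in totals.items():
    print(f"{group}: {100 * hits[group] / n:.1f}% accurate")
```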

An Amazon spokeswoman said the inconsistency in bias-testing results may have stemmed from testing a version of the software that had not been updated.

“The results from the [MIT] study last week and the results from the letter [Buolamwini] shared in June don’t match,” an Amazon spokeswoman told Forbes. “We investigated that as well, and at the time, it happened that there hadn’t been any changes to the service rolled out during that time frame.”

But Buolamwini notes that there may be a lag in customers adopting the new version, as older iterations of the software persist.

“Amazon states that they have made a new version of their Rekognition system available to customers since our August 2018 audit,” said Buolamwini. “This does not mean all customers are using the new system,” she continued. “Legacy use often occurs particularly when adopting a new system can mean having to invest resources into making updates with existing processes.”

What’s more, Buolamwini notes that Amazon did “not submit AI systems to the National Institute of Standards and Technology (NIST) for the latest rounds of facial recognition evaluations.”

In its response to Buolamwini’s latest post, Amazon says it hasn’t submitted its software for testing because NIST does not have a test that supports its platform.

“(NIST) allows a simple computer vision model to be tested in isolation,” Wood said. “However, Amazon Rekognition uses multiple models and data processing systems under the hood, which cannot be tested in isolation,” he continued. “We welcome the opportunity to work with NIST on improving their tests to allow for more sophisticated systems to be tested objectively.”

Beyond the lack of a test that accommodates the nature of the platform, the Amazon spokeswoman said intellectual property concerns are also a barrier to NIST testing.

“NIST doesn’t support the protection of intellectual property that is part of our service, so it makes it challenging, but we do want to work with NIST so that we can do a test with them,” she said.

Amazon also reiterated that the technology the researchers tested, facial analysis, is distinct from the facial recognition used by law enforcement, and that results from one can’t be correlated with the other:

“The research that is being done is on facial analysis, not facial recognition, and these are two totally different technologies. It’s an apples and oranges comparison; it’s impossible to draw correlations from a facial analysis test and try to confer them to any kind of meaning or implications for facial recognition.”
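
For readers unfamiliar with the distinction Amazon is drawing, the two capabilities correspond to different Rekognition operations. The sketch below, using boto3 with a hypothetical image file and collection ID, contrasts DetectFaces (facial analysis, which estimates attributes such as gender) with SearchFacesByImage (facial recognition, which matches a face against a stored collection):

```python
# Sketch contrasting Rekognition's facial-analysis and facial-recognition
# operations via boto3. The image path and collection ID are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Facial analysis: estimates attributes of a detected face; no identity matching.
analysis = rekognition.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    print("Estimated gender:", face["Gender"]["Value"], face["Gender"]["Confidence"])

# Facial recognition: searches a pre-built face collection for likely matches.
matches = rekognition.search_faces_by_image(
    CollectionId="example-collection",
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=99.0,
)
for match in matches["FaceMatches"]:
    print("Match:", match["Face"]["FaceId"], match["Similarity"])
```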

Yet concern around biases identified through testing of the product at any stage is still warranted.

“The main message is to check all systems that analyze human faces for any kind of bias,” said Buolamwini. “If you sell one system that has been shown to have a bias on human faces, it is doubtful your other face-based products are also completely bias-free.”