Why face-recognition technology has a bias problem - CBS News

Ongoing U.S. protests over racially biased policing are also putting a spotlight on the tools of law enforcement, including widely used — but completely unregulated — facial recognition technology.

Democrats in Congress are probing the FBI and other federal agencies to determine whether the surveillance software has been deployed against protesters, while states including California and New York are considering legislation to ban police use of the technology.

At the same time, major tech companies in the field are edging away from their artificial intelligence creations. Amazon on Wednesday announced a one-year pause in police use of its controversial facial recognition product, called Rekognition, after years of pressure from civil rights advocates. IBM also recently announced that it was abandoning facial-recognition research altogether, citing concerns about the human rights implications.

"We are terrified that so many of the images that are being posted on social media by protesters will be weaponized by police against them," said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, which is pushing for limits on the technology in New York. "It's just deeply chilling to think that engaging in protected activity, exercising your most fundamental rights, could end you up in a police database."

What's in a face?

Law enforcement agencies use a range of advanced technologies to make their jobs easier, but facial analysis is one that is particularly powerful, and potentially dangerous.

So-called "facial analysis" systems can be put to many uses, including automatically unlocking your iPhone, letting a person into a building, identifying the sex or race of a person, or determining if the face matches a mugshot.

The problem is that no facial analysis system is perfectly accurate. And while that's less of an issue when it comes to a locked iPhone, it becomes a major obstacle when used to identify human suspects.

Rekognition, Amazon's face-ID system, once identified Oprah Winfrey as male, in just one notable example of how the software can fail. It has also wrongly matched 28 members of Congress to a mugshot database. Another facial identification tool last year wrongly flagged a Brown University student as a suspect in the Sri Lanka bombings, and the student went on to receive death threats.

"If you look at the top three companies [in the field], none of themperforms with 100% accuracy. So we're experimenting in real time withreal humans," said Rashida Richardson, director of policy research at the AI Now Institute.

Research shows these errors aren't aberrations. An MIT study of three commercial gender-recognition systems found error rates of up to 34% for dark-skinned women — nearly 49 times the rate for white men, whose implied error rate was well under 1%.

A Commerce Department study late last year showed similar findings. Looking at instances in which an algorithm wrongly identified two different people as the same person, the study found that error rates for African men and women were two orders of magnitude higher than for Eastern Europeans, who showed the lowest rates.

Repeating this exercise across a U.S. mugshot database, the researchers found that algorithms had the highest error rates for Native Americans as well as high rates for Asian and black women.

Perfectly imperfect

The bias and inaccuracy such research reveals comes down to how these tools are developed. Algorithms "learn" to identify a face after being shown millions of pictures of human faces. However, if the faces used to train the algorithm are predominantly those of white men, the system will have a harder time recognizing anyone who doesn't fit that profile.
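To make that concrete, here is a minimal, hypothetical sketch (synthetic data and scikit-learn's off-the-shelf logistic regression, not any vendor's actual system) of how a training set skewed toward one group produces lower accuracy for the underrepresented group:

    # Hypothetical sketch: synthetic "face" data, not any real vendor's system.
    # The two demographic groups carry their label signal in different features;
    # the training set is 95% group A, so the model mostly learns group A's signal.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_samples(n, group):
        # Labels are the attribute being classified (e.g. the "gender" label in
        # the MIT study); the features stand in for a face embedding.
        labels = rng.integers(0, 2, size=n)
        x = rng.normal(0.0, 1.0, size=(n, 2))
        signal = 2.0 * (2 * labels - 1)      # +2 or -2 depending on the label
        if group == "A":
            x[:, 0] += signal                # group A: signal lives in feature 0
        else:
            x[:, 1] += signal                # group B: signal lives in feature 1
        return x, labels

    # Unrepresentative training set: 950 samples from group A, 50 from group B.
    xa, ya = make_samples(950, "A")
    xb, yb = make_samples(50, "B")
    model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

    # Balanced held-out evaluation: the underrepresented group fares much worse.
    for group in ("A", "B"):
        xt, yt = make_samples(2000, group)
        print(f"group {group}: accuracy {model.score(xt, yt):.2f}")

In this toy setup, accuracy for the overrepresented group comes out far higher than for the underrepresented one, purely because the latter's signal was barely present during training.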

Joy Buolamwini, a leading researcher on algorithmic bias, found this out the hard way as an undergraduate in computer science. One of her assignments required interacting with a robot equipped with computer vision, but the robot was unable to "see" her. She later found that a computer camera did not recognize her face — until she put on a white mask.

"What a lot of those systems are doing are looking at vast amounts of data to recognize the patterns within it, and then being used against a different database to function in the real world," Richardson said.

Some scientists believe that, with enough "training" of artificial intelligence and exposure to a widely representative database of people, the bias problem can be eliminated. Yet even a system that classifies people with perfect accuracy can still be dangerous, experts say.

For instance, "smart policing" systems often rely on data showing past crime patterns to predict where crimes are likely to occur in the future. However, data on reported crime are heavily influenced by police activity and, rather than being neutral, "is a reflection of the department's practices and priorities; local, state or federal interests; and institutional and individual biases," Richardson wrote in a recent paper. That means a system that perfectly reproduces a pattern of biased policing would do nothing to rectify it.
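A back-of-the-envelope sketch of that point, using invented numbers purely for illustration: two districts with identical underlying crime look very different in reported-crime data once patrol intensity differs, and a model trained on those reports simply learns the patrol pattern.

    # Purely illustrative numbers: two districts with the same underlying crime
    # rate, but one is patrolled far more heavily, so it generates far more
    # reported crime, which is what a "smart policing" model is trained on.
    import numpy as np

    rng = np.random.default_rng(1)
    true_crime_rate = np.array([0.10, 0.10])    # identical underlying crime
    patrol_intensity = np.array([0.90, 0.30])   # district 0 watched 3x as closely
    population = 10_000

    # A crime only enters the data if someone was there to record it.
    reports = rng.binomial(population, true_crime_rate * patrol_intensity)
    predicted_rate = reports / population       # the model's training signal

    print(predicted_rate)  # roughly [0.09, 0.03]: district 0 now "looks" 3x more
                           # dangerous, draws more patrols, and the loop repeats.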

There's no law for that

As imperfect as biased algorithms are, activists and researchers agree that the reverse — a system that can perfectly identify any individual — would be far worse, as it could spell the end of privacy as Americans know it.

"Face surveillance is dangerous when it works and when it doesn't," the ACLU's Kade Crockford said during recent testimony to the Boston City Council.

That's why more and more people are calling for government limits on when and how such technology may be used. Currently, facial surveillance in the U.S. is largely unregulated.

"There's not yet a standard for how you evaluate [facial recognition] and how you consider whether or not the technology makes sense for one application or another," said Kris Hammon, a professor of computer science at Northwestern University.

"For something that recognizes my face when I look at my phone, I don't care. I want it to be good enough to protect my phone, but it doesn't need to explain itself to me," he said. "When I look at the same kind of technology applied to the law, the question is, 'Is the result of this technology admissible in court?' Who's going to answer that?"
