Study finds gender and skin-type bias in commercial artificial-intelligence systems


Source: Larry Hardesty
Affiliation: MIT News

This article tells the story of how Joy Buolamwini, a Black researcher at the MIT Media Lab, discovered that commercial facial-analysis systems were more likely to make incorrect predictions when classifying darker-skinned people. Using the Fitzpatrick scale, a dermatological scale that classifies skin into six types by tone, Buolamwini analyzed the accuracy of widely used facial-analysis programs by skin type and found that error rates were highest for people whose skin fell into the three darkest Fitzpatrick categories. She also found that these systems had higher error rates for women than for men, with the worst performance on darker-skinned women. These findings prompted IBM to release a new model that performed better on Buolamwini's benchmarks, thanks to training on a more balanced dataset and a more robust underlying neural network.
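To make the audit methodology concrete, the sketch below shows one way to compute error rates disaggregated by skin-type group and gender, grouping Fitzpatrick types I-III as "lighter" and IV-VI as "darker" as the Gender Shades study did. This is a minimal illustration, not the study's actual code; the record format and the `error_rates_by_group` helper are hypothetical.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates for a classifier audit.

    `records` is a list of dicts with keys:
      'fitzpatrick' (1-6), 'gender' ('female'/'male'),
      'predicted' and 'actual' classification labels.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        # Group subjects as in the Gender Shades audit:
        # Fitzpatrick types I-III are "lighter", IV-VI are "darker".
        tone = "lighter" if r["fitzpatrick"] <= 3 else "darker"
        group = (tone, r["gender"])
        totals[group] += 1
        if r["predicted"] != r["actual"]:
            errors[group] += 1
    # Error rate = misclassifications / total subjects in each group.
    return {g: errors[g] / totals[g] for g in totals}

# Toy data, purely illustrative; real audits use benchmark photo sets.
sample = [
    {"fitzpatrick": 2, "gender": "male", "predicted": "male", "actual": "male"},
    {"fitzpatrick": 5, "gender": "female", "predicted": "male", "actual": "female"},
    {"fitzpatrick": 6, "gender": "female", "predicted": "female", "actual": "female"},
]
print(error_rates_by_group(sample))
```

Reporting accuracy per subgroup rather than in aggregate is the key design choice: a system can show high overall accuracy while failing badly on the smallest or least-represented group.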

Keywords: Computer Science, Facial Recognition, Tech Ethics