Technology: Facial Recognition Bias
Created: 9 months, 2 weeks ago.
by: Charlie
Categories:
AI BIAS
Machine Vision
Social
Domain: Computer Vision/Biometrics
Description: Joy Buolamwini (MIT Media Lab) and Timnit Gebru's Gender Shades project exposed severe intersectional bias in commercial facial recognition systems, demonstrating error rates of up to 34.7% for darker-skinned women versus 0.8% for lighter-skinned men.
Ethical Challenges:
- Intersectional Bias: Error rates varied dramatically across subgroups, from 0.8% for lighter-skinned men to up to 34.7% for darker-skinned women
- Data Representation: Training datasets composed of up to 83.5% white individuals
- Civil Rights Impact: Use in surveillance and law enforcement applications
- Platform Governance: Led to policy changes at major tech companies
Public Datasets:
- Primary: Pilot Parliaments Benchmark (PPB)
- Source: Algorithmic Justice League
- URL: Contact datasets@ajlunited.org for access
- MIT Media Lab: https://www.media.mit.edu/projects/gender-shades/faq/
- Research Paper: https://www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification/
- Content: 1,270 unique individuals from 6 countries (3 African, 3 European)
- Features: Balanced representation: 44.39% women, 47% darker-skinned individuals
- Labels: Fitzpatrick Skin Type classification and gender annotations
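Because the PPB pairs each image with a gender annotation and a Fitzpatrick skin-type label, intersectional audits of the kind Gender Shades performed reduce to computing error rates per (gender, skin-type) subgroup. A minimal sketch of that disaggregation is below; the record keys and the binarization of Fitzpatrick types I-III as "lighter" and IV-VI as "darker" are illustrative assumptions, not the dataset's actual schema.

```python
from collections import defaultdict

def binarize_fitzpatrick(fitzpatrick_type):
    """Illustrative binarization: Fitzpatrick types I-III -> 'lighter',
    IV-VI -> 'darker' (an assumption for this sketch)."""
    return "lighter" if fitzpatrick_type <= 3 else "darker"

def subgroup_error_rates(records):
    """Compute classification error rate per (gender, skin) subgroup.

    Each record is a dict with hypothetical keys:
      'gender'      - annotated gender label
      'fitzpatrick' - Fitzpatrick skin type (1-6)
      'pred'        - the classifier's predicted gender
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["gender"], binarize_fitzpatrick(r["fitzpatrick"]))
        totals[key] += 1
        if r["pred"] != r["gender"]:
            errors[key] += 1
    return {k: errors[k] / totals[k] for k in totals}

# Toy example: two darker-skinned women (one misclassified),
# one lighter-skinned man (correctly classified).
sample = [
    {"gender": "female", "fitzpatrick": 6, "pred": "male"},
    {"gender": "female", "fitzpatrick": 5, "pred": "female"},
    {"gender": "male", "fitzpatrick": 2, "pred": "male"},
]
rates = subgroup_error_rates(sample)
# rates[("female", "darker")] -> 0.5; rates[("male", "lighter")] -> 0.0
```

Reporting the full per-subgroup table, rather than a single aggregate accuracy, is what surfaced the disparities the aggregate numbers had hidden.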
2 Questions
Inherent Bias in Image Dataset Labelling
Asked: 9 months, 2 weeks ago
By: Deleuze(Citizen)
208 Views
Is it legal to be interviewed by an AI without consent?
Asked: 4 months ago
By: Allf(AI Ethics Expert)
71 Views