Human Resources: AI Recruitment Systems
Created: 9 months, 2 weeks ago.
by: Charlie
Categories:
AI BIAS
LLM
Social
Domain: Human Resources/Employment
Description: Amazon developed an AI recruitment tool (2014-2017) that systematically discriminated against women, leading to its termination. Recent University of Washington studies (2024) found similar bias in LLMs for resume screening.
Ethical Challenges:
- Gender Bias: Amazon's system penalized resumes containing words like "women's" and downgraded graduates from women's colleges
- Intersectional Bias: UW study found names associated with Black women were never favored over those associated with white men
- Historical Inequality Perpetuation: models trained on historical workforce data from a predominantly male applicant pool learn to reproduce past hiring patterns
- Disparate Impact Discrimination: facially neutral screening that disproportionately excludes a protected group can constitute a violation under Title VII
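The disparate-impact challenge above is commonly screened for with the "four-fifths rule": a selection rate for a protected group below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal sketch (the outcome lists and group names are hypothetical, for illustration only):

```python
# Sketch: the four-fifths (80%) rule, a common Title VII screening heuristic.
# All data below is illustrative, not from any real hiring system.

def selection_rate(outcomes):
    """Fraction of candidates selected (1 = advanced, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical screening outcomes for two applicant groups.
women = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 3/10 selected
men   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # 6/10 selected

ratio = disparate_impact_ratio(women, men)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Fails the four-fifths rule: possible adverse impact.")
```

This is only a screening heuristic; a statistically significant disparity can still matter even when the ratio clears 0.8.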
Public Datasets:
- Primary: Adult/Census Income Dataset - UCI Machine Learning Repository
- URL: https://archive.ics.uci.edu/dataset/20/census+income
- MIT Case Study: https://ocw.mit.edu/courses/res-ec-001-exploring-fairness-in-machine-learning-for-international-development-spring-2020/pages/module-four-case-studies/case-study-mitigating-gender-bias/
- Content: 48,842 records of census data for predicting income >$50K/year with 14 features
- Bias Use: Widely used for studying bias in income prediction across demographic groups
- Additional Platforms: Available on fairness toolkits - https://fairmlbook.org/datasets.html
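A typical bias audit on the Adult dataset compares the positive-label rate across demographic groups. The sketch below uses pandas with a tiny in-memory sample standing in for the real file (column names `sex` and `income` follow the dataset's documented schema; in practice you would load the full CSV from the UCI repository):

```python
import pandas as pd

# Sketch: group-level outcome rates on Adult/Census Income-style data.
# A tiny illustrative sample stands in for the real 48,842-row dataset,
# which you would normally load from the UCI repository file instead.
df = pd.DataFrame({
    "sex":    ["Male", "Female", "Male", "Female", "Male", "Female"],
    "income": [">50K", "<=50K", ">50K", "<=50K", "<=50K", ">50K"],
})

# Per-group rate of the positive label (income > $50K/year).
rates = (df["income"] == ">50K").groupby(df["sex"]).mean()
print(rates)
```

Large gaps between group rates in the training labels are exactly the historical signal a naive income or suitability classifier will learn to reproduce.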
3 Questions
What are the ethical issues of matching algorithms?
Asked: 9 months, 2 weeks ago
By: Agathe(Research)
245 Views
Classification models for suitability for a job: Is there really any way …
Asked: 9 months, 2 weeks ago
By: Hollytibble(Research)
257 Views
Who CAN I discriminate against?
Asked: 9 months, 2 weeks ago
By: Hollytibble(Research)
162 Views