Social Media: Content Moderation Bias
Created: 9 months, 2 weeks ago.
By: Charlie
Categories:
AI Bias
Machine Learning
Social Media
Domain: Social Media/Content Moderation
Description: Automated content moderation systems on major platforms show systematic bias in flagging and removing content, disproportionately affecting marginalized communities and non-English content.
Ethical Challenges:
- Cultural Bias: Western-centric training data leading to misclassification of cultural content
- Language Bias: Lower accuracy for non-English content and code-switching
- Political Bias: Inconsistent enforcement across the political spectrum
- Context Insensitivity: Inability to understand cultural or historical context
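One common way to quantify the language and cultural biases listed above is to audit a moderation model's false positive rate per language group: the fraction of benign posts it wrongly flags. A minimal sketch, using a small hypothetical audit sample (the group labels and decisions below are illustrative, not real platform data):

```python
from collections import defaultdict

def false_positive_rates(records):
    """Per-group false positive rate: share of benign posts
    the moderation model incorrectly flagged."""
    benign = defaultdict(int)   # benign posts seen per group
    flagged = defaultdict(int)  # benign posts wrongly flagged per group
    for group, true_label, decision in records:
        if true_label == "benign":
            benign[group] += 1
            if decision == "flagged":
                flagged[group] += 1
    return {g: flagged[g] / benign[g] for g in benign}

# Hypothetical audit sample: (language group, true label, model decision)
sample = [
    ("en", "benign", "ok"), ("en", "benign", "ok"),
    ("en", "benign", "flagged"), ("en", "benign", "ok"),
    ("es", "benign", "flagged"), ("es", "benign", "flagged"),
    ("es", "benign", "ok"), ("es", "benign", "ok"),
]
rates = false_positive_rates(sample)
# In this toy sample, Spanish posts are flagged at twice the English rate,
# the kind of disparity a language-bias audit is designed to surface.
```

A gap between groups on this metric is exactly the "lower accuracy for non-English content" the challenge list describes, made measurable.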
Public Datasets:
- HatEval Dataset: Multilingual detection of hate speech against immigrants and women
- URL: https://competitions.codalab.org/competitions/19935
- OLID Dataset: Offensive Language Identification Dataset with three-level annotation
- URL: https://sites.google.com/site/offensevalsharedtask/
- Founta et al. Dataset: 4-class hate speech detection dataset from Twitter
- URL: https://github.com/ENCASEH2020/hatespeech-twitter
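The OLID dataset's three-level annotation (offensive or not, targeted or not, target type) is distributed as tab-separated text. A minimal sketch of loading such a file with the standard library; the column names below are assumed from the annotation scheme and should be verified against the actual release:

```python
import csv
import io

# Inline stand-in for an OLID-style TSV file (column names assumed:
# id, tweet, subtask_a, subtask_b, subtask_c).
olid_tsv = (
    "id\ttweet\tsubtask_a\tsubtask_b\tsubtask_c\n"
    "1\texample offensive text\tOFF\tTIN\tIND\n"
    "2\texample benign text\tNOT\tNULL\tNULL\n"
)

def load_olid(text):
    """Parse OLID-style TSV rows into dicts keyed by column name."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return list(reader)

rows = load_olid(olid_tsv)
# Level A filter: keep only rows annotated as offensive.
offensive = [r for r in rows if r["subtask_a"] == "OFF"]
```

The same pattern extends to the other datasets above once their column layouts are confirmed.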
2 Questions
Ethics of Content Moderation
Asked: 9 months, 2 weeks ago
By: Agavornik (AI Ethics Expert)
168 Views
What are the biases in digital advertising and entertainment?
Asked: 4 months ago
By: bcorreyero@ucam.edu (Research)
56 Views