ETD-HUB

Predictive systems

Criminal Justice: COMPAS Recidivism Prediction

Asked: 9 months, 2 weeks ago By: Charlie Views: 311

Can a predictive system used in life-altering decisions ever be considered 'fair' if its inner workings are proprietary and its outcomes show systemic racial disparities, even when trained on 'real-world' data?

5 Answers

Answered: 9 months, 2 weeks ago By: Charlie
A predictive system like COMPAS cannot truly be considered fair if it lacks transparency and exhibits systemic racial disparities, even if trained on “real-world” data. Fairness in algorithmic decision-making, especially in high-stakes domains like criminal justice, goes beyond technical accuracy; it requires accountability, interpretability, and equity in outcomes.

ProPublica’s 2016 investigation into COMPAS found that Black defendants were twice as likely as white defendants to be falsely labeled high-risk. Northpointe (now Equivant) contested this, arguing that the system maintained overall accuracy parity. This disagreement highlights a critical issue in the field: fairness criteria often conflict. As Kleinberg et al. (2016) showed, a risk score cannot simultaneously satisfy predictive parity, equal false positive rates, and calibration across groups when base rates differ, as they often do in racially stratified societies.

Moreover, the use of proprietary, black-box algorithms in sentencing decisions raises serious concerns about due process. Defendants, lawyers, and judges cannot meaningfully challenge or understand a system that lacks algorithmic transparency. This undermines legal principles such as the right to contest evidence and equal protection under the law.

Training on “real-world” data is no guarantee of fairness either. Historical data from the criminal justice system reflects decades of systemic bias, from over-policing in Black communities to harsher sentencing for similar offenses. Algorithms trained on this data risk codifying and perpetuating these inequities (Barocas & Selbst, 2016).

References:
- ProPublica (2016). Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Kleinberg, Mullainathan, & Raghavan (2016). Inherent Trade-Offs in the Fair Determination of Risk Scores.
- Barocas & Selbst (2016). Big Data's Disparate Impact. California Law Review.
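To make the Kleinberg et al. trade-off concrete, here is a minimal simulation sketch. The group names, base rates, and threshold are hypothetical illustrations, not ProPublica's data or COMPAS's actual scoring: a score that is perfectly calibrated by construction, applied with the same cutoff to two groups with different base rates, ends up with different false positive rates.

```python
# Minimal sketch (hypothetical numbers, NOT ProPublica's analysis or COMPAS's
# method) of the Kleinberg et al. (2016) trade-off: a perfectly calibrated
# score applied to groups with different base rates yields unequal FPRs.

import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    """Simulate a group whose risk score is calibrated by construction:
    each individual's outcome is drawn with probability equal to their score."""
    # Hypothetical scores concentrated around the group's base rate.
    scores = np.clip(rng.normal(loc=base_rate, scale=0.15, size=n), 0.01, 0.99)
    outcomes = rng.random(n) < scores  # Bernoulli(score), so calibration holds
    return scores, outcomes

def false_positive_rate(scores, outcomes, threshold=0.5):
    """FPR = share of non-reoffenders labeled 'high risk' by the cutoff."""
    predicted_high = scores >= threshold
    negatives = ~outcomes
    return predicted_high[negatives].mean()

# Hypothetical base rates, chosen only so the two groups differ.
scores_a, outcomes_a = simulate_group(100_000, base_rate=0.30)
scores_b, outcomes_b = simulate_group(100_000, base_rate=0.50)

print(f"Group A FPR: {false_positive_rate(scores_a, outcomes_a):.2f}")
print(f"Group B FPR: {false_positive_rate(scores_b, outcomes_b):.2f}")
```

Under these assumptions, the group with the higher base rate sees a markedly higher false positive rate even though the same calibrated score and threshold are used for both, which is exactly the conflict between calibration and equal false positive rates described above.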
Joanne_Smith replied: You can reply to Questions!!
Answered: 9 months, 2 weeks ago By: Yuan_Liang_Ucd
First contact from Yuan
Charlie replied: Hello Yuan! Welcome!
Answered: 4 months ago By: bsubielahernandez@gmail.com
Wouldn't it be possible to ban "black boxes" in the algorithms that influence these decisions? Isn't it possible to make the process transparent from the AI side as well?
