Domain: Algorithmic Fairness & Bias
Measurement and mitigation of bias in machine learning systems, fairness constraints, and societal impacts of algorithmic decision-making
Measurement and mitigation of bias in ML prediction systems. Covers impossibility theorems (Chouldechova, Kleinberg), facial recognition bias (Buolamwini & Gebru), and accuracy-fairness tradeoffs in criminal justice risk assessment.
prediction_accuracy
Prediction Accuracy: Correctness of model predictions, measuring how well a classifier or regression model maps inputs to true outcomes across the full population or within subgroups.
false_positive_rate
False Positive Rate: Rate of incorrect positive predictions, measuring how often a classifier incorrectly labels negative instances as positive, potentially causing harm through false accusations or unnecessary interventions.
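A minimal sketch of this metric, assuming binary labels encoded as 0/1 (the function name and encoding are illustrative, not part of this module):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): the fraction of true negatives
    that the classifier incorrectly labels as positive."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    negatives = y_true == 0
    if negatives.sum() == 0:
        return float("nan")  # undefined when there are no true negatives
    return float((y_pred[negatives] == 1).mean())

# Example: of 4 true negatives, 1 is incorrectly flagged positive -> FPR = 0.25
fpr = false_positive_rate([0, 0, 0, 0, 1], [1, 0, 0, 0, 1])
```

Comparing this quantity across demographic groups (the FPR gap) is the usual way the harms noted above are quantified.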
demographic_parity
Demographic Parity: Fairness criterion requiring equal positive prediction rates across demographic groups, ensuring that the proportion of individuals receiving a positive classification is independent of group membership.
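One common way to audit this criterion is to measure the gap between groups' positive-prediction rates; a sketch under the assumption of 0/1 predictions and a group-membership array (function name is illustrative):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest absolute difference in positive-prediction rates
    across the groups present in `group`. Exactly 0 means the
    demographic parity criterion is satisfied on this sample."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [float((y_pred[group == g] == 1).mean())
             for g in np.unique(group)]
    return max(rates) - min(rates)

# Group 0 receives positives at rate 0.5, group 1 at rate 0.25 -> gap 0.25
gap = demographic_parity_gap([1, 1, 0, 0, 1, 0, 0, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1])
```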
calibration
Calibration: Property that predicted probabilities match observed frequencies of the outcome, meaning that among all individuals assigned a predicted risk of X%, approximately X% actually experience the outcome.
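A binned calibration check makes this definition concrete; the sketch below groups predictions into equal-width probability bins and compares mean predicted risk to the observed outcome rate per bin (the helper name and binning scheme are assumptions for illustration):

```python
import numpy as np

def calibration_by_bin(y_true, y_prob, n_bins=10):
    """Return (mean predicted risk, observed outcome rate) per
    probability bin. A well-calibrated model has the two values
    close in every occupied bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    # Map each prediction to a bin index 0..n_bins-1 (1.0 goes in the top bin)
    bins = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    results = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            results.append((float(y_prob[mask].mean()),
                            float(y_true[mask].mean())))
    return results

# Five individuals all assigned 20% risk, one experiences the outcome:
# predicted 0.2 vs observed 0.2 -> calibrated in that bin
report = calibration_by_bin([1, 0, 0, 0, 0], [0.2] * 5)
```

Note that the impossibility theorems cited above (Chouldechova; Kleinberg et al.) show that calibration and equal error rates across groups generally cannot all hold simultaneously when base rates differ.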
Install this PAX into your Praxis instance:
praxis_import_pax("algorithmic-fairness.pax.tar.gz", install=True)