
Algorithmic Fairness

Topic · v1.0.0 · Agent-extracted

Measurement and mitigation of bias in ML prediction systems. Covers impossibility theorems (Chouldechova, Kleinberg), facial recognition bias (Buolamwini & Gebru), and accuracy-fairness tradeoffs in criminal justice risk assessment.


Domain: Algorithmic Fairness & Bias

Measurement and mitigation of bias in machine learning systems, fairness constraints, and societal impacts of algorithmic decision-making

Level: micro

Overview

4 Constructs · 3 Engines

Constructs

prediction_accuracy Prediction Accuracy

Correctness of model predictions, measuring how well a classifier or regression model maps inputs to true outcomes across the full population or subgroups.
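A minimal sketch of this construct, using hypothetical labels and group assignments (not part of the package), showing accuracy over the full population and per subgroup:

```python
# Hypothetical true labels, predictions, and group membership for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

def accuracy(truth, pred):
    # Fraction of predictions that match the true label.
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

overall = accuracy(y_true, y_pred)

# Per-subgroup accuracy: restrict both label lists to one group at a time.
by_group = {
    g: accuracy([t for t, gg in zip(y_true, group) if gg == g],
                [p for p, gg in zip(y_pred, group) if gg == g])
    for g in set(group)
}
```

Comparing `by_group` values against `overall` is the basic check for accuracy disparities across demographic groups.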

false_positive_rate False Positive Rate

Rate of incorrect positive predictions, measuring how often a classifier incorrectly labels negative instances as positive, potentially causing harm through false accusations or unnecessary interventions.
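A minimal sketch of the metric, again with hypothetical data: the false positive rate is the share of true negatives that the classifier labels positive, FP / (FP + TN):

```python
def false_positive_rate(y_true, y_pred):
    # FP: true negative predicted positive; TN: true negative predicted negative.
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

# Four true negatives, two of them incorrectly flagged positive.
fpr = false_positive_rate([0, 0, 0, 0, 1, 1], [1, 0, 0, 1, 1, 0])
```

Comparing this rate across subgroups is the usual way to quantify the harm the definition mentions: groups with a higher false positive rate bear more false accusations or unnecessary interventions.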

demographic_parity Demographic Parity

Fairness criterion requiring equal positive prediction rates across demographic groups, ensuring that the proportion of individuals receiving a positive classification is independent of group membership.
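The criterion can be sketched as a check on positive prediction rates per group; the data and the `gap` threshold idea here are illustrative assumptions, not package internals:

```python
def positive_rate(preds):
    # Proportion of instances receiving a positive (1) classification.
    return sum(preds) / len(preds)

# Hypothetical binary predictions, keyed by demographic group.
preds_by_group = {"a": [1, 0, 1, 1], "b": [1, 0, 0, 0]}

rates = {g: positive_rate(p) for g, p in preds_by_group.items()}

# Demographic parity holds when the gap is (near) zero.
gap = max(rates.values()) - min(rates.values())
```

Exact equality is rarely attainable in practice, so audits typically require the gap to fall below a chosen tolerance.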

calibration Calibration

Property that predicted probabilities match observed frequencies of the outcome, meaning that among all individuals assigned a predicted risk of X%, approximately X% actually experience the outcome.
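A minimal sketch of a binned calibration check, under the assumption of equal-width probability bins (the binning scheme and data are illustrative): within each bin, the mean predicted risk should roughly match the observed outcome frequency.

```python
def calibration_by_bin(probs, outcomes, n_bins=5):
    # Assign each prediction to an equal-width probability bin.
    bins = {}
    for p, y in zip(probs, outcomes):
        b = min(int(p * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((p, y))
    # For each bin: (mean predicted risk, observed outcome frequency).
    # A calibrated model has these two numbers close in every bin.
    return {
        b: (sum(p for p, _ in v) / len(v),
            sum(y for _, y in v) / len(v))
        for b, v in sorted(bins.items())
    }

cal = calibration_by_bin([0.1, 0.1, 0.9, 0.9], [0, 0, 1, 1], n_bins=2)
```

Note that calibration can hold within each group while error rates still differ across groups, which is the tension the impossibility theorems cited above formalize.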

Engines

logistic_regression, random_forest, gradient_boosting

Tags

topic


Installation

Install this PAX into your Praxis instance:

praxis_import_pax("algorithmic-fairness.pax.tar.gz", install=True)