Fairness in Binary Classification
To detect and mitigate societal bias in binary classification, you can use the fairnessMetrics, fairnessWeights, and disparateImpactRemover functions in Statistics and Machine Learning Toolbox™. First, use fairnessMetrics to evaluate the fairness of a data set or classification model using bias and group metrics. Then, use fairnessWeights to reweight observations, or use disparateImpactRemover to remove the disparate impact of a sensitive attribute.
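For example, the following minimal sketch shows the evaluate-then-mitigate workflow. The table tbl, the response variable "Label", and the sensitive attribute "Group" are hypothetical placeholder names, and the syntaxes reflect the documented forms of these functions; check the function reference pages for the exact signatures in your release.

% Evaluate data-level fairness using bias and group metrics.
evaluator = fairnessMetrics(tbl,"Label",SensitiveAttributeNames="Group");
report(evaluator)

% Mitigate by reweighting observations, then train a classifier
% using the fairness weights as observation weights.
weights = fairnessWeights(tbl,"Group","Label");
mdl = fitctree(tbl,"Label",Weights=weights);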
The fairnessWeights and disparateImpactRemover functions provide preprocessing techniques that allow you to adjust your predictor data before training (or retraining) a classifier. To assess the model behavior after training, you can use the fairnessMetrics function as well as various interpretability functions. For more information, see Interpret Machine Learning Models.
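As a sketch of post-training assessment, continuing the hypothetical tbl, "Label", "Group", and mdl names from the previous example (the Predictions name-value argument and the metric name passed to plot are assumptions based on the documented syntax):

% Compare model predictions against true labels, grouped by the
% sensitive attribute, then report and plot the fairness metrics.
predictions = predict(mdl,tbl);
modelEvaluator = fairnessMetrics(tbl,"Label", ...
    SensitiveAttributeNames="Group",Predictions=predictions);
report(modelEvaluator)
plot(modelEvaluator,"StatisticalParityDifference") % example metric name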
Functions
fairnessMetrics | Bias and group metrics for a data set or classification model
report | Generate fairness metrics report
plot | Plot bar graph of fairness metric
fairnessWeights | Reweight observations for fairness in binary classification
disparateImpactRemover | Remove disparate impact of sensitive attribute
transform | Transform new predictor data to remove disparate impact
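For example, a minimal sketch of the disparateImpactRemover and transform workflow, using the same hypothetical tbl, "Label", and "Group" names; the [remover,transformedTbl] output order is an assumption based on the documented syntax:

% Transform the continuous predictors to remove the disparate impact
% of the sensitive attribute, then train without that attribute.
[remover,transformedTbl] = disparateImpactRemover(tbl,"Group");
transformedTbl.Group = [];
mdl2 = fitctree(transformedTbl,"Label");

% Apply the same transformation to new predictor data before predicting.
% tblNew is a hypothetical table with the same variables as tbl.
tblNewTransformed = transform(remover,tblNew);
labels = predict(mdl2,tblNewTransformed);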
Topics
- Introduction to Fairness in Binary Classification
Detect and mitigate societal bias in machine learning by using the fairnessMetrics, fairnessWeights, and disparateImpactRemover functions.
Related Information
- Explore Fairness Metrics for Credit Scoring Model (Risk Management Toolbox)
- Bias Mitigation in Credit Scoring by Reweighting (Risk Management Toolbox)
- Bias Mitigation in Credit Scoring by Disparate Impact Removal (Risk Management Toolbox)