Fairness in Binary Classification

Explore fairness in binary classification

To detect and mitigate societal bias in binary classification, you can use the fairnessMetrics, fairnessWeights, disparateImpactRemover, and fairnessThresholder functions in Statistics and Machine Learning Toolbox™. First, use fairnessMetrics to evaluate the fairness of a data set or classification model using bias and group metrics. Then, use fairnessWeights to reweight observations, disparateImpactRemover to remove the disparate impact of a sensitive attribute, or fairnessThresholder to optimize the classification threshold.
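For example, the following minimal sketch evaluates fairness metrics for both a data set and a trained model. It assumes the census1994 sample data set that ships with the toolbox, with salary as the response and sex as the sensitive attribute; treat the syntax as an outline to check against the reference pages for your release, and substitute your own table, response, and attribute names.

% Load the sample census data (tables adultdata and adulttest).
load census1994

% Data-level metrics: evaluate how the response is distributed across
% the groups of the sensitive attribute "sex".
dataEvaluator = fairnessMetrics(adultdata,"salary", ...
    SensitiveAttributeNames="sex");
report(dataEvaluator)
plot(dataEvaluator,"spd")   % statistical parity difference

% Model-level metrics: pass the predictions of a trained classifier.
mdl = fitctree(adultdata,"salary");
labels = predict(mdl,adultdata);
modelEvaluator = fairnessMetrics(adultdata,"salary", ...
    SensitiveAttributeNames="sex",Predictions=labels);
report(modelEvaluator)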

The fairnessWeights and disparateImpactRemover functions provide preprocessing techniques that allow you to adjust your predictor data before training (or retraining) a classifier. The fairnessThresholder function provides a postprocessing technique that adjusts labels near prediction boundaries for a trained classifier. To assess the final model behavior, you can use the fairnessMetrics function as well as various interpretability functions. For more information, see Interpret Machine Learning Models.
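The sketch below strings the three mitigation techniques together on the same census1994 data, again assuming sex as the sensitive attribute and salary as the response; the exact argument and output order of each call is recalled from the reference pages and is worth verifying for your release. In practice, you would choose the technique (or combination) that suits your application rather than applying all three.

load census1994

% Preprocessing with fairnessWeights: compute observation weights that
% balance the response across groups, and supply them when training.
weights = fairnessWeights(adultdata,"sex","salary");
weightedMdl = fitctree(adultdata,"salary",Weights=weights);

% Preprocessing with disparateImpactRemover: repair the continuous
% predictors so they carry less information about the sensitive
% attribute, and apply the same repair to new data with transform.
% (In practice, also exclude the sensitive attribute from the predictors.)
[remover,newAdultData] = disparateImpactRemover(adultdata,"sex");
repairedMdl = fitctree(newAdultData,"salary");
newAdultTest = transform(remover,adulttest);
repairedLabels = predict(repairedMdl,newAdultTest);

% Postprocessing with fairnessThresholder: optimize the classification
% threshold of a trained model, then predict and compute loss with the
% adjusted threshold.
mdl = fitctree(adultdata,"salary");
thresholder = fairnessThresholder(mdl,adultdata,"sex","salary");
fairLabels = predict(thresholder,adulttest);
fairLoss = loss(thresholder,adulttest);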

Functions

fairnessMetrics          Bias and group metrics for a data set or classification model
report                   Generate fairness metrics report
plot                     Plot bar graph of fairness metric
fairnessWeights          Reweight observations for fairness in binary classification
disparateImpactRemover   Remove disparate impact of sensitive attribute
transform                Transform new predictor data to remove disparate impact
fairnessThresholder      Optimize classification threshold to include fairness
loss                     Classification loss adjusted by fairness threshold
predict                  Predicted labels adjusted by fairness threshold

Topics