@inproceedings{abbasi2019fairness,
  title={Fairness in representation: quantifying stereotyping as a representational harm},
  author={Abbasi, Mohsen and Friedler, Sorelle A. and Scheidegger, Carlos and Venkatasubramanian, Suresh},
  booktitle={Proceedings of the 2019 SIAM International Conference on Data Mining},
  pages={801--809},
  year={2019},
  organization={SIAM}
}
While harms of allocation have been increasingly studied within the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest as downstream allocative harms in the machine learning pipeline. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.