Design diagrams as ontological source
Pranay Lohia, Kalapriya Kannan, et al.
ESEC/FSE 2019
Whereas previous post-processing approaches for increasing the fairness of predictions of biased classifiers address only group fairness, we propose a method for increasing both individual and group fairness. Our novel framework includes an individual bias detector that prioritizes data samples in a bias mitigation algorithm aimed at improving the group fairness measure of disparate impact. We show superior performance to previous work in the combination of classification accuracy, individual fairness, and group fairness on several real-world datasets in applications such as credit, employment, and criminal justice.