Victor Akinwande, Megan Macgregor, et al.
IJCAI 2024
Risk assessment is a growing application of machine learning models. When used in high-stakes settings, especially ones regulated by anti-discrimination laws or governed by societal norms for fairness, it is important to ensure that learned models do not propagate and scale any biases that may exist in the training data. In this paper, we consider an additional challenge beyond fairness: unsupervised domain adaptation to covariate shift between a source and target distribution. Motivated by the real-world problem of risk assessment in new markets for health insurance in the United States and mobile money-based loans in East Africa, we provide a precise formulation of the problem of machine learning under covariate shift with score parity. Our formulation focuses on situations in which protected attributes are unavailable in either the source or target domain. We propose two new weighting methods: prevalence-constrained covariate shift (PCCS), which does not require protected attributes in the target domain, and target-fair covariate shift (TFCS), which does not require protected attributes in the source domain. We empirically demonstrate their efficacy in two applications.
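The abstract's weighting methods build on the standard importance-weighting approach to covariate shift. As a generic sketch only (not the paper's PCCS or TFCS algorithms, and with hypothetical synthetic data), source examples can be reweighted by an estimated density ratio p_target(x)/p_source(x), obtained here via a domain classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates exhibiting covariate shift between domains.
X_src = rng.normal(0.0, 1.0, size=(500, 2))
X_tgt = rng.normal(0.7, 1.0, size=(500, 2))

# Density-ratio estimation via a domain classifier: train a model to
# distinguish source (label 0) from target (label 1); for equal-sized
# samples, w(x) = p_tgt(x)/p_src(x) ~ P(tgt|x) / P(src|x).
X = np.vstack([X_src, X_tgt])
d = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
domain_clf = LogisticRegression().fit(X, d)
p_tgt = domain_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)

# The weights are then used as sample weights when fitting the risk
# model on labeled source data (labels y_src are synthetic here).
y_src = (X_src[:, 0] + rng.normal(0, 0.5, len(X_src)) > 0).astype(int)
risk_model = LogisticRegression().fit(X_src, y_src, sample_weight=weights)
```

The paper's contribution is how these weights are further constrained (PCCS) or computed without source protected attributes (TFCS) to additionally satisfy score parity; the sketch above enforces no fairness constraint.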
Hannah Powers, Ioana Baldini Soares, et al.
NeurIPS 2024
Yu-Hui Chen, Dennis Wei, et al.
ICIP 2015
Tian Gao, Dennis Wei
ICML 2018