How data science workers work with data
Michael Muller, Ingrid Lange, et al.
CHI 2019
Ensuring fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate the process, we need effective, unbiased, and user-friendly explanations that people can confidently rely on. Towards that end, we conducted an empirical study with four types of programmatically generated explanations to understand how they impact people's fairness judgments of ML systems. With an experiment involving more than 160 Mechanical Turk workers, we show that: 1) Certain explanations are considered inherently less fair, while others can enhance people's confidence in the fairness of the algorithm; 2) Different fairness problems, such as model-wide fairness issues versus case-specific fairness discrepancies, may be more effectively exposed through different styles of explanation; 3) Individual differences, including prior positions and judgment criteria of algorithmic fairness, impact how people react to different styles of explanation. We conclude with a discussion on providing personalized and adaptive explanations to support fairness judgments of ML systems.
Q. Vera Liao, Michal Shmueli-Scheuer, et al.
IUI 2019
Yunfeng Zhang, Rachel Bellamy, et al.
CHI EA 2021
Q. Vera Liao, Moninder Singh, et al.
CHI EA 2020