AI Explainability 360 Toolkit
Vijay Arya, Rachel Bellamy, et al.
CODS-COMAD 2021
Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in ways similar to humans and often in different ways. Fair model-assisted decision making involves more than the application of unbiased models: it requires consideration of the application context, the specifics of the decisions being made, the resolution of conflicting stakeholder viewpoints, and so forth. Even so, mitigating bias in machine-learning software is important and possible, yet difficult and too often ignored.
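To make the abstract's claim that bias can be measured and mitigated concrete, here is a minimal sketch using the open-source AI Fairness 360 (aif360) Python package from the same IBM Research group. The choice of package, dataset (UCI Adult), and protected attribute ('sex') are illustrative assumptions, not details specified by this citation.

# A minimal sketch of bias measurement and mitigation; requires
# `pip install aif360` and the Adult data files downloaded per
# aif360's documentation.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the census-income dataset with 'sex' as the protected attribute.
dataset = AdultDataset(protected_attribute_names=['sex'],
                       privileged_classes=[['Male']])

privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Statistical parity difference: 0 means both groups receive the
# favorable outcome at the same rate; negative values disadvantage
# the unprivileged group.
before = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('Parity difference before:', before.statistical_parity_difference())

# Reweighing assigns instance weights so that the label is
# statistically independent of the protected attribute.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(transformed,
                                 unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print('Parity difference after:', after.statistical_parity_difference())

After reweighing, the parity difference computed on the weighted data should be near zero; a downstream classifier would then train on the weighted examples.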
Rachel Bellamy, Sean Andrist, et al.
CHI EA 2017
Sandeep Kaur Kuttal, Jarow Myers, et al.
VL/HCC 2020
Jeremy Lau, Matthew Arnold, et al.
ACM SIGPLAN Notices