Publication
INFORMS 2020
Talk
Human Cognitive Biases in Interpreting Machine Learning
Abstract
People are the ultimate consumers of machine learning model predictions and explanations in many high-stakes applications. However, people’s perception and understanding are often distorted by their cognitive biases, such as confirmation bias, anchoring bias, and availability bias, to name a few. If our goal is to enable a human–machine collaboration that achieves the best possible classification accuracy (better than either the human or the machine working alone), we have to mitigate these cognitive biases. In this work, we make progress toward this goal through both mathematical modeling and human experiments. Specifically, we focus our human experiments on collaborative decision-making in the presence of anchoring bias.