Why or Why Not? The Effect of Justification Styles on Chatbot Recommendations
Abstract
Chatbots, or conversational recommenders, have gained increasing popularity as a new paradigm for Recommender Systems (RS). Prior work on RS showed that providing explanations can improve transparency and trust, which are critical for the adoption of RS. Their interactive and engaging nature makes conversational recommenders a natural platform not only to provide recommendations but also to justify the recommendations through explanations. The recent surge of interest in explainable AI enables diverse styles of justification, and also invites questions on how styles of justification impact user perception. In this article, we explore the effect of "why" justifications and "why not" justifications on users' perceptions of explainability and trust. We developed and tested a movie-recommendation chatbot that provides users with different types of justifications for the recommended items. Our online experiment (n = 310) demonstrates that "why" justifications (but not "why not" justifications) have a significant impact on users' perception of the conversational recommender. In particular, "why" justifications increase users' perception of system transparency, which affects perceived control and trusting beliefs, and in turn influences users' willingness to depend on the system's advice. Finally, we discuss design implications for decision-assisting chatbots.