When Is It Morally Acceptable to Break the Rules? A Preference-Based Approach
Abstract
Humans make moral judgements about their own actions and the actions of others. Sometimes they make these judgements by following a utilitarian approach, other times they follow simple deontological rules, and at still other times they find (or simulate) an agreement among the relevant parties. To build machines that behave similarly to humans, or that can work effectively with humans, we must understand how humans make moral judgements. This includes when to use a specific moral approach and how to appropriately switch among the various approaches. We investigate how, why, and when humans decide to break rules. We study a suite of hypothetical scenarios, each describing a person who might break a well-established norm or rule, and we ask human participants to provide a moral judgement of this action. To effectively embed moral reasoning capabilities into a machine, we model the human moral judgements made in these experiments via a generalization of CP-nets, a common preference formalism in computer science. We describe what is needed to model both the scenarios and the moral decisions, which requires an extension of existing computational models. We discuss how this leads to future research directions in the areas of preference reasoning, planning, and value alignment.
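To make the preference formalism concrete, the following is a minimal sketch of a classical CP-net in Python: each variable carries a conditional preference table (CPT) that orders its values given an assignment to its parent variables. This illustrates only the standard formalism, not the paper's generalization, and the variable names (emergency, cut_in_line) are hypothetical examples.

# A minimal sketch of a classical CP-net, assuming finite-domain variables.
# Each variable has a CPT: a map from parent assignments to a preference
# order over the variable's values, most preferred first.

class CPNet:
    def __init__(self):
        # Maps each variable to a pair (parents, table).
        self.cpts = {}

    def add_variable(self, var, parents, table):
        self.cpts[var] = (parents, table)

    def preferred(self, var, assignment):
        """Return the preference order over var's values, given an
        assignment to all of its parent variables."""
        parents, table = self.cpts[var]
        key = tuple(assignment[p] for p in parents)
        return table[key]


# Hypothetical example: whether breaking a "do not cut in line" norm is
# preferred depends on whether the situation is an emergency.
net = CPNet()
net.add_variable("emergency", parents=[],
                 table={(): ["no", "yes"]})        # unconditionally prefer no emergency
net.add_variable("cut_in_line", parents=["emergency"],
                 table={("no",): ["no", "yes"],    # no emergency: prefer keeping the norm
                        ("yes",): ["yes", "no"]})  # emergency: prefer breaking the norm

print(net.preferred("cut_in_line", {"emergency": "yes"}))  # ['yes', 'no']

The conditional structure is what makes CP-nets a natural fit here: the same rule-breaking action can be ranked differently depending on context, which is exactly the kind of context-sensitive moral judgement the scenarios are designed to elicit.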