Horty’s Defaults with Priorities for Artificial Moral Agents

Can we build robots that act morally? Wallach and Allen’s book Moral Machines investigates a variety of approaches for creating artificial moral agents (AMAs) capable of making appropriate ethical decisions, one of which I find somewhat interesting. On page 34, they briefly mention “deontic logic,” a version of modal logic that uses the concepts of obligation and permission rather than necessity and possibility. This formal system is able to derive conclusions about how one ought to act given certain conditions; for example, if one is permitted to perform some act α, then it follows that one is under no obligation to refrain from doing α. Problems arise, however, when agents are faced with conflicting obligations (McNamara). For example, if Bill sets an appointment for noon, he is obligated to arrive at the appropriate time; but if Bill’s child were to suffer a medical emergency ten minutes prior, in that moment he would face conflicting obligations. Though the right course of action may be fairly obvious in this case, the problem itself still requires some consideration before a decision can be reached. One way to approach this dilemma is to create a framework capable of overriding specific commitments when the situation warrants it. As such, John Horty’s Defaults with Priorities may be useful for developing AMAs, as it enables the agent to adjust its behaviour based on contextual information.
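To make the permission claim and Bill’s conflict a little more concrete, here is how they might be written in standard deontic notation. This is my own rough rendering rather than a formula quoted from Wallach and Allen or McNamara, with O for obligation, P for permission, and ◇ for possibility, and with a and c standing in for the two acts:

```latex
% Permission implies no obligation to refrain (standard deontic principle):
P\alpha \rightarrow \neg O \neg\alpha

% Bill's dilemma: both acts are obligatory, yet they cannot both be performed.
% (a = keep the noon appointment, c = respond to the child's emergency)
O(a) \wedge O(c) \wedge \neg\Diamond(a \wedge c)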

Roughly speaking, a default rule can be thought of as something like a logical implication, where some antecedent A leads to some consequent B whenever A obtains. A fairly straightforward default rule might say that if a robot detects an obstacle, it must retreat and reorient itself in an effort to avoid the obstruction. There may be cases, however, where this action is not ideal, which suggests the robot needs a way to switch behaviours dynamically based on the type of object it runs into. Horty’s approach suggests that by adding a defeating set containing conflicting rules, the default implication can essentially be cancelled out and new conclusions can be derived about a scenario S (373). Unfortunately, the example Horty uses to demonstrate this move stipulates that Tweety bird is a penguin, and it seems the reason for this is merely to show how adding rules leads to the nullification of the default implication. I will attempt to capture the essence of Horty’s awkward example by replacing ‘Tweety’ with ‘Pingu’, as it saves the reader some cognitive energy. Let’s suppose, then, that we can program a robot to conclude that, by default, birds fly (B→F). If the robot also knew that penguins are birds which do not fly (P→B ∧ P→¬F), then, given the further fact that Pingu is a penguin, it would be able to determine that Pingu is a bird that does not fly, because the defeating set overrides the default. According to Horty, this process can be seen as analogous to acts of justification, where individuals provide reasons for their beliefs, an idea I thought would be pertinent for AMAs. Operational constraints aside, such systems could provide log files detailing the rules and information used when making decisions surrounding some situation S. Moreover, rather than hard-coding rules and information, machine learning may be able to provide the algorithm with the inferences it needs to respond appropriately to environmental stimuli. Using examples simpler than categories of birds and their attributes, it seems feasible that we could test this approach to determine whether it may be useful for building AMAs one day.
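To get a feel for how this might look in practice, here is a minimal sketch of default reasoning with priorities in Python. The encoding is my own simplification for illustration, not Horty’s formal machinery: the Default class, the reason function, and the integer priorities are assumed names, and a conflict is handled simply by letting a triggered higher-priority rule block a lower-priority one. The returned lists also gesture at the kind of justification log mentioned above.

```python
# A minimal sketch of default reasoning with priorities, loosely inspired by
# Horty's paper. The rule format, priority scheme, and conflict check below
# are illustrative simplifications, not Horty's formal definitions.

from dataclasses import dataclass

@dataclass
class Default:
    name: str        # label used in the justification log
    premise: str     # literal that must already be concluded for the rule to trigger
    conclusion: str  # literal the rule supports, e.g. "flies" or "not flies"
    priority: int    # higher number = stronger rule (here, more specific)

def negate(literal):
    """Flip a literal between 'x' and 'not x'."""
    return literal[4:] if literal.startswith("not ") else "not " + literal

def reason(facts, defaults):
    """Apply triggered defaults unless a stronger triggered default conflicts."""
    conclusions = set(facts)
    fired, defeated = [], set()
    changed = True
    while changed:
        changed = False
        for d in defaults:
            if d.premise not in conclusions or d.conclusion in conclusions:
                continue  # not triggered, or nothing new to add
            blocked = any(
                e.priority > d.priority
                and e.premise in conclusions
                and e.conclusion == negate(d.conclusion)
                for e in defaults
            )
            if blocked:
                defeated.add(d.name)  # the default is cancelled by a stronger rule
            else:
                conclusions.add(d.conclusion)
                fired.append(f"{d.name}: {d.premise} -> {d.conclusion}")
                changed = True
    return conclusions, fired, sorted(defeated)

# Pingu: birds fly by default, but penguins are birds that do not fly.
defaults = [
    Default("birds-fly", "bird", "flies", priority=1),
    Default("penguins-are-birds", "penguin", "bird", priority=2),
    Default("penguins-dont-fly", "penguin", "not flies", priority=2),
]
conclusions, fired, defeated = reason({"penguin"}, defaults)
print(conclusions)  # contains 'penguin', 'bird', 'not flies'
print(fired)        # the rules actually used: a crude justification trail
print(defeated)     # ['birds-fly'] -- the default was overridden
```

Running this with the single fact that Pingu is a penguin yields ‘bird’ and ‘not flies’ while reporting that the birds-fly default was overridden, which is roughly the Pingu conclusion above; the fired and defeated lists hint at the sort of decision log an AMA could expose.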

Now, I have a feeling that this approach is neither special nor unique within logic and computer science more generally, but for some reason the thought of framing robot actions from the perspective of deontic logic seems like it might be useful. Maybe it’s due to the way deontic terminology is applied to modal logic, acting like an interface between moral theory and computer code. I just found the connection thought-provoking, and after reading Horty’s paper, I began wondering whether approaches like these might be useful for developing systems that are capable of justifying their actions by listing the reasons used within the decision-making process.

Works Cited

Horty, John. “Defaults with Priorities.” Journal of Philosophical Logic, vol. 36, no. 4, 2007, pp. 367-413.

McNamara, Paul. “Deontic Logic.” The Stanford Encyclopedia of Philosophy, Summer 2019 Edition, edited by Edward N. Zalta, https://plato.stanford.edu/archives/sum2019/entries/logic-deontic/.