Tutorial: Moral Planning Domain Definitions

In this tutorial, we present a format for describing sequences of actions and events. The formalism makes it possible to specify which action plan an agent has performed, taking into account that events outside the agent’s control may happen as well. The general idea is that actions and events alter the current state of the world, which is described by a set of facts that hold; actions and events thus turn world states into new world states. To start with, we consider the Giving Flowers case, which was introduced as an example particularly tailored to the Kantian Humanity principle. The case reads as follows:

Bob is giving flowers to Celia; however, not to make Celia happy, but to make Alice happy, who is happy if Celia is happy.

All but the Kantian Humanity principle will permit this action. The situation is described as follows:

{
    "actions": [
                    {
                     "name": "giveFlowers",
                     "intrinsicvalue": "good",
                     "preconditions": {},
                     "effects": [
                                    {
                                     "condition": {},
                                     "effect": {"happy_celia": true}
                                    }
                                ]
                    }
                ],
    "events": [
                    {
                     "name": "happy_alice",
                     "preconditions": {},
                     "effects": [
                                    {
                                     "condition": {"happy_celia": true},
                                     "effect": {"happy_alice": true}
                                    }
                                ],
                     "timepoints": [0]
                    }    
               ],
     "utilities": [
                   {
                    "fact": {"happy_celia": true},
                    "utility": 1
                   },
                   {
                    "fact": {"happy_celia": false},
                    "utility": -1
                   },
                   {
                    "fact": {"happy_alice": true},
                    "utility": 1
                   },
                   {
                    "fact": {"happy_alice": false},
                    "utility": -1
                   }
                   ],
     "affects": {
                 "celia": 
                    {
                     "pos": [{"happy_celia": true}], 
                     "neg": [{"happy_celia": false}]
                    }, 
                 "alice": 
                    {
                     "pos": [{"happy_alice": true}], 
                     "neg": [{"happy_alice": false}]
                    }
                },
    "plan": ["giveFlowers"],
    "goal": {"happy_alice": true},
    "initialState": {"happy_celia": false, "happy_alice": false}
}

Let’s break this description into pieces.

{
    "actions": [
                    {
                     "name": "giveFlowers",
                     "intrinsicvalue": "good",
                     "preconditions": {},
                     "effects": [
                                    {
                                     "condition": {},
                                     "effect": {"happy_celia": true}
                                    }
                                ]
                    }
                ],

The first part defines the set of actions (here, only one action) that the agent, whom we call Bob in the following, can perform under his own control. That is, Bob can give flowers to Celia. This action has no preconditions. Of course, one could argue that the give-flowers action should actually have preconditions such as Bob having flowers or Celia being near Bob, but these conditions are not relevant for this tutorial example. The effect of the action is that Celia is indeed happy. This effect does not depend on any further condition (if relevant, one could, for example, make Celia’s being happy additionally depend on her mood). As a moral property, each action has an intrinsic value (good, bad, or neutral), which is evaluated by the Deontological principle.
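Purely for illustration, a variant of the action that does make use of preconditions and a conditional effect could look as follows. It is written here as a Python dictionary mirroring the JSON structure above, and the fact names has_flowers and celia_in_good_mood are made up for this sketch; they are not part of the tutorial domain.

# Hypothetical variant of giveFlowers with a precondition and a conditional effect.
giveFlowers_variant = {
    "name": "giveFlowers",
    "intrinsicvalue": "good",
    # Only applicable if Bob actually has flowers.
    "preconditions": {"has_flowers": True},
    "effects": [
        {
            # Celia only becomes happy if she is in a good mood.
            "condition": {"celia_in_good_mood": True},
            "effect": {"happy_celia": True}
        }
    ]
}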

The second part of the moral domain description defines events, that is, things that happen outside Bob’s direct control.

    "events": [
                    {
                     "name": "happy_alice",
                     "preconditions": {},
                     "effects": [
                                    {
                                     "condition": {"happy_celia": true},
                                     "effect": {"happy_alice": true}
                                    }
                                ],
                     "timepoints": [0]
                    }    
               ],

The set of events consists of one event, which brings about Alice’s being happy under the condition that Celia is already happy. If Celia were not already happy, this event would have no effect. With the timepoints parameter, the modeler can specify at which time points the event is executed. In this case, the event executes only at time point 0. (Time point 0 is the first time point, as we start counting at 0.) This means that, after the first action performed by the agent, this event is executed as well. So Alice will be happy after time point 0 only if Bob gives the flowers at time point 0.
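To make this reading concrete, the following minimal sketch in plain Python (not the HERA implementation, just one plausible way to spell out the transition semantics described above) applies the action and the event to the initial state of the example:

# Initial state of the Giving Flowers example.
state = {"happy_celia": False, "happy_alice": False}

# Effects of the action and the event, copied from the domain description above.
giveFlowers_effects = [{"condition": {}, "effect": {"happy_celia": True}}]
happy_alice_effects = [{"condition": {"happy_celia": True}, "effect": {"happy_alice": True}}]

def apply(state, effects):
    # Apply every effect whose condition holds in the current state.
    new_state = dict(state)
    for eff in effects:
        if all(state.get(var) == val for var, val in eff["condition"].items()):
            new_state.update(eff["effect"])
    return new_state

# Time point 0: Bob performs giveFlowers, then the event fires.
state = apply(state, giveFlowers_effects)   # {'happy_celia': True, 'happy_alice': False}
state = apply(state, happy_alice_effects)   # {'happy_celia': True, 'happy_alice': True}
print(state)

Had Bob not given the flowers at time point 0, the event’s condition would not hold and Alice would remain unhappy.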

The next two parts of the description are moral in nature; different ethical principles consider different parts of the definition relevant. The first part defines the utilities of the facts that may or may not hold in a situation.

     "utilities": [
                   {
                    "fact": {"happy_celia": true},
                    "utility": 1
                   },
                   {
                    "fact": {"happy_celia": false},
                    "utility": -1
                   },
                   {
                    "fact": {"happy_alice": true},
                    "utility": 1
                   },
                   {
                    "fact": {"happy_alice": false},
                    "utility": -1
                   }
                   ],

This definition just says that happy people yield positive utility, and unhappy people yield negative utility.
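As a quick illustration of how this table can be read, the following sketch (plain Python over the dictionaries above, not the actual implementation of any HERA principle) sums up the utilities of the facts that hold in the state resulting from Bob’s plan:

utilities = [
    {"fact": {"happy_celia": True}, "utility": 1},
    {"fact": {"happy_celia": False}, "utility": -1},
    {"fact": {"happy_alice": True}, "utility": 1},
    {"fact": {"happy_alice": False}, "utility": -1},
]

# State resulting from Bob's plan (plan and initialState are discussed below).
final_state = {"happy_celia": True, "happy_alice": True}

# Sum the utilities of all facts that hold in the final state.
total = sum(entry["utility"] for entry in utilities
            if all(final_state.get(var) == val for var, val in entry["fact"].items()))
print(total)  # 2

The positive sum is in line with the utilitarian verdict we will see at the end of this tutorial. The next part specifies who is affected by which facts.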

     "affects": {
                 "celia": 
                    {
                     "pos": [{"happy_celia": true}], 
                     "neg": [{"happy_celia": false}]
                    }, 
                 "alice": 
                    {
                     "pos": [{"happy_alice": true}], 
                     "neg": [{"happy_alice": false}]
                    }
                },

Celia is positively affected by the fact that Celia is happy, and Alice is positively affected by the fact that Alice is happy. If there were two facts f1, f2 in, for example, Celia’s pos list, this would be read as “Celia is positively affected by f1, and Celia is positively affected by f2”, and not as “Celia is positively affected if both f1 and f2 hold”. The affects relation thus makes it possible to express that a fact may be positive for one agent and at the same time negative for another agent.
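For instance, if the domain contained a fact stating that Alice has the flowers, and this were good for Alice but bad for Bob, one could express this as follows (again written as a Python dictionary mirroring the JSON above; the fact name has_flowers_alice is made up for this sketch):

# The same fact appears in Alice's "pos" list and in Bob's "neg" list.
affects = {
    "alice": {"pos": [{"has_flowers_alice": True}], "neg": [{"has_flowers_alice": False}]},
    "bob":   {"pos": [{"has_flowers_alice": False}], "neg": [{"has_flowers_alice": True}]},
}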

Finally, three components complete the description of the situation: the plan Bob has devised (or even already performed), the goal Bob seeks to bring about by that plan, and the state Bob was initially facing before the plan was executed.

    "plan": ["giveFlowers"],
    "goal": {"happy_alice": true},
    "initialState": {"happy_celia": false, "happy_alice": false}
}

That is, initially, neither Celia nor Alice were happy. Bob then gave the flowers to Celia in order to make Alice happy. Note that some potentially relevant information is not made explicit, for instance, that after Bob’s giving the flowers, both Celia and Alice are happy. This must be deduced by the reasoner, and indeed the reasoner is capable of doing that. To evaluate Bob’s plan using the ethical principles defined in the HERA framework, we first have to load the described situation. We do so by loading a JSON file that contains the description outlined above. HERA provides a Python interface for this purpose.

from ethics.moralplans import Situation

# Load the situation description from the JSON file shown above.
sit = Situation("flowers.json")

Next, we can check permissibility of the situation according to several principles.

from ethics.moralplans import KantianHumanity, DoNoHarm, DoNoInstrumentalHarm, Utilitarianism, Deontology, DoubleEffectPrinciple

perm = sit.evaluate(Deontology)
print("Deontology: ", perm)

perm = sit.evaluate(KantianHumanity)
print("Kantian: ", perm)

perm = sit.evaluate(DoNoHarm)
print("DoNoHarm: ", perm)

perm = sit.evaluate(DoNoInstrumentalHarm)
print("DoNoInstrumentalHarm: ", perm)

perm = sit.evaluate(Utilitarianism)
print("Utilitarianism: ", perm)

perm = sit.evaluate(DoubleEffectPrinciple)
print("DoubleEffectPrinciple: ", perm)

The output looks like this:

Deontology:  True
Kantian:  False
DoNoHarm:  True
DoNoInstrumentalHarm:  True
Utilitarianism:  True
DoubleEffectPrinciple:  True
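For convenience, the same checks can also be written as a loop, for example as follows (a sketch that assumes the principles are ordinary Python classes whose names can be read off via __name__):

from ethics.moralplans import Situation, KantianHumanity, DoNoHarm, DoNoInstrumentalHarm, Utilitarianism, Deontology, DoubleEffectPrinciple

sit = Situation("flowers.json")

# Evaluate each principle in turn and print its verdict.
for principle in [Deontology, KantianHumanity, DoNoHarm, DoNoInstrumentalHarm, Utilitarianism, DoubleEffectPrinciple]:
    print(principle.__name__ + ": ", sit.evaluate(principle))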

Just as expected, all principles but the Kantian permit Bob’s plan. The Kantian Humanity principle forbids it, because Bob uses Celia merely as a means: making Celia happy is only instrumental to the goal of making Alice happy.