Software

Software Installation

The Python implementation of HERA can be installed using pip. First, make sure you have Python 3 installed. To install HERA, run pip3 install ethics. If the module is already installed, you can update it by running pip3 install --upgrade ethics. To learn how to use the HERA library for your own purposes, follow the tutorial below. Example models (i.e., actual cases) can be found on the Cases Page.
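
For quick reference, the two commands (assuming pip3 points at your Python 3 installation) are:

pip3 install ethics
pip3 install --upgrade ethics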

Note that the code is still under heavy development and therefore subject to change.

Tutorial

Example: The dilemma of a rescue robot

A recent experiment conducted by Alan Winfield and colleagues shows that rescue robots may run into ethical dilemmas, see [1]. In the experiment, a robot called A (for Asimov) saves (robot stand-ins for) human beings who are about to move into a dangerous area. The robot does this by moving in front of them, which causes them slight discomfort but also makes them turn away from the danger. However, if the human beings to be saved are at exactly the same distance, the robot may dither between saving one or the other and thus fail to save anyone.

Technically, this problem could easily be solved by, for instance, letting the robot choose randomly, but we claim that such choices should be based on ethical principles. Thus, in the following, we show by example how a robot can use ethical principles to make a choice using HERA. First, we introduce the representational format used to encode the necessary knowledge for HERA, and then we show how to perform ethical reasoning based on both the Principle of Double Effect and Utilitarianism.

[1] Alan F. T. Winfield, Christian Blum, and Wenguo Liu. Towards an ethical robot: Internal models, consequences and ethical action selection. In M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish, editors, Advances in Autonomous Robotics Systems, pages 85–96. Springer, 2014.

Representation of Causal Knowledge

The causal knowledge that captures the robot’s knowledge about actions and their consequences is encoded using a JSON format.

{
    "description": "The Rescue Robot Dilemma",
    "actions": ["a1", "a2", "a3"],
    "background": ["b1"],
    "consequences": ["c1", "c2", "c3", "c4"],
    "mechanisms": {
                   "c1": "And('b1', 'a1')", 
                   "c2": "'a1'", 
                   "c3": "And('b1', 'a2')", 
                   "c4": "'a2'"
                  },
    "utilities":  { 
                   "c1": 10, 
                   "c2": -4, 
                   "c3": 10, 
                   "c4": -4,
                   "Not('c1')": -10, 
                   "Not('c2')": 4, 
                   "Not('c3')": -10, 
                   "Not('c4')": 4
                  },
    "intentions": {
                   "a1": ["a1", "c1"], 
                   "a2": ["a2", "c3"], 
                   "a3": ["a3"]
                  }
}

Let’s break the encoding down into pieces:

  • The first field, named “description”, in line 2 just adds a natural-language label for the causal network encoded in this file.
  • Lines 3 to 5 set the stage: We consider the case where there are two persons to save, H1 and H2. There are three actions in the situation: a1, saving H1; a2, saving H2; and a3, remaining inactive. The consequences of the situation are: c1, H1 is saved; c2, H1 feels discomfort (from being stopped by the robot); c3, H2 is saved; and c4, H2 feels discomfort. We consider just one background condition: b1, there are people to be saved.
  • Lines 6 to 11 encode the counterfactual knowledge: If there are people to be saved (b1) and the robot actually saves H1 (a1), then H1 is saved (c1) and H1 will feel discomfort (c2). In particular, the line "c2": "'a1'" specifies that H1 feels discomfort if she is approached by the robot, and "c1": "And('b1', 'a1')" specifies that H1 gets saved if there is someone to be saved and the robot saves H1. The knowledge about the interaction of the robot and H2 is encoded analogously (see the sketch after this list).
  • Lines 12 to 21 assign utilities to variables. In this example, saving a person yields a positive utility of 10, whereas causing discomfort yields a negative utility of -4. Note that utilities are also assigned to the negations of the consequences, e.g., not saving a person yields -10.
  • Finally, lines 22 to 26 encode the robot’s intentions: Each action is associated with a set of variables the performing agent intends. In this example, "a1": ["a1", "c1"] states that if the robot performs a1 (saving H1) then it intends a1 (saving H1) and c1 (H1 being saved), but it does not intend c2 (H1 feeling discomfort).
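
To build intuition for how the mechanisms determine the consequences, here is a minimal, library-independent Python sketch that evaluates the four mechanisms by hand for the situation in which the robot saves H1 (a1 and b1 true). The dictionaries and lambda encoding are illustrative only and not part of the HERA API:

# Evaluate the mechanisms by hand for the situation in which
# the robot saves H1 (a1 and b1 are true).
situation = {"a1": True, "a2": False, "a3": False, "b1": True}

mechanisms = {
    "c1": lambda s: s["b1"] and s["a1"],  # H1 is saved
    "c2": lambda s: s["a1"],              # H1 feels discomfort
    "c3": lambda s: s["b1"] and s["a2"],  # H2 is saved
    "c4": lambda s: s["a2"],              # H2 feels discomfort
}

print({c: f(situation) for c, f in mechanisms.items()})
# {'c1': True, 'c2': True, 'c3': False, 'c4': False}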

Ethical Reasoning

Now we want to use HERA to morally evaluate the action possibilities the robot has to choose from. We want to evaluate the actions with respect to two moral principles: the principle of double effect and the utilitarian principle. Hence, the first thing to do is to import the two principles from the principles module:

from ethics.principles import DoubleEffectPrinciple, UtilitarianPrinciple

Moreover, the CausalModel class must be imported from the semantics module. This class provides methods for loading and processing JSON-encoded models such as those found on the Cases Page.

from ethics.semantics import CausalModel

As a first step, we load the model encoded in the rescue-robot.json file. The code below assumes that the JSON file encoding the model (named “rescue-robot.json”) is contained in the subfolder “cases”. You might need to create this directory and this file first; check the Cases Page to obtain the rescue-robot JSON encoding. When loading the model, the second parameter states which of the action variables and background variables are true in that model. In this case, we construct three alternatives of the rescue-robot model, one for each action the robot could possibly perform when humans are in danger (i.e., b1 is true).

m1 = CausalModel("./cases/rescue-robot.json", {"a1": 1, "a2": 0, "a3": 0, "b1": 1})
m2 = CausalModel("./cases/rescue-robot.json", {"a1": 0, "a2": 1, "a3": 0, "b1": 1})
m3 = CausalModel("./cases/rescue-robot.json", {"a1": 0, "a2": 0, "a3": 1, "b1": 1})

Next, we tell each model which models are its alternatives. In our case, we assume a fully connected graph:

m1.setAlternatives(m1, m2, m3)
m2.setAlternatives(m1, m2, m3)
m3.setAlternatives(m1, m2, m3)

As of HERA 0.5.1, there is a shortcut for this task:

from ethics.tools import makeSetOfAlternatives
makeSetOfAlternatives(m1, m2, m3)

All that remains to be done is to invoke the evaluation function to see which alternatives are permissible according to the respective ethical principles. We start with the double effect principle:

b1 = m1.evaluate(DoubleEffectPrinciple)
b2 = m2.evaluate(DoubleEffectPrinciple)
b3 = m3.evaluate(DoubleEffectPrinciple)
print("PDE", b1, b2, b3)

As a result, we obtain the following output. Thus, the double effect principle permits both saving H1 and saving H2, and it is not applicable to refraining from action:

PDE True True Not Applicable

We can very easily compare this result to the judgment of the utilitarian principle:

b1 = m1.evaluate(UtilitarianPrinciple)
b2 = m2.evaluate(UtilitarianPrinciple)
b3 = m3.evaluate(UtilitarianPrinciple)
print("Utilitarianism", b1, b2, b3)

The function call gives us the following output. Like the double effect principle, utilitarianism permits performing a1 or a2 (though for different reasons, see Principles). Unlike the double effect principle, utilitarianism explicitly forbids the robot to refrain from helping. Consequently, according to the utilitarian principle, there is an obligation to help:

Utilitarianism True True False
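
To see where these verdicts come from, here is a small illustrative Python sketch. It is not HERA's implementation; it simply sums the utilities from the JSON model and assumes that the utilitarian principle permits exactly those actions whose overall utility is maximal (the consequence truth values are the ones determined by the mechanisms above):

# Utilities from the JSON model (keys for negated consequences simplified).
utilities = {
    "c1": 10, "c2": -4, "c3": 10, "c4": -4,
    "Not(c1)": -10, "Not(c2)": 4, "Not(c3)": -10, "Not(c4)": 4,
}

# Truth values of the consequences in the three models (b1 is true in each).
consequences = {
    "a1": {"c1": True, "c2": True, "c3": False, "c4": False},
    "a2": {"c1": False, "c2": False, "c3": True, "c4": True},
    "a3": {"c1": False, "c2": False, "c3": False, "c4": False},
}

def overall_utility(assignment):
    # Sum the utility of each consequence, or of its negation if it is false.
    return sum(utilities[c] if holds else utilities["Not(%s)" % c]
               for c, holds in assignment.items())

totals = {action: overall_utility(a) for action, a in consequences.items()}
best = max(totals.values())
print(totals)                                      # {'a1': 0, 'a2': 0, 'a3': -12}
print({action: u == best for action, u in totals.items()})
# {'a1': True, 'a2': True, 'a3': False}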

Since Version 0.2, HERA also has a Do-No-Harm principle implemented. According to this principle, an action is permissible if and only if all its direct consequences are neutral or good. So, let’s see how this principle assesses our case:

from ethics.principles import DoNoHarmPrinciple
b1 = m1.evaluate(DoNoHarmPrinciple)
b2 = m2.evaluate(DoNoHarmPrinciple)
b3 = m3.evaluate(DoNoHarmPrinciple)
print("DoNoHarmPrinciple", b1, b2, b3)

As a result we get the following output. Thus, the only right thing to do according to the do-no-harm principle is to refrain from action. The reason is that causing discomfort is to be avoided at any cost:

DoNoHarmPrinciple False False True
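
Again, a small illustrative sketch (not HERA's implementation) can reproduce this verdict, under the assumption that an action's direct consequences are exactly the consequence variables it brings about in its model, and that permissibility requires none of them to carry negative utility:

# Utilities of the consequence variables from the JSON model.
utilities = {"c1": 10, "c2": -4, "c3": 10, "c4": -4}

# Consequence variables brought about by each action (with b1 true).
direct_consequences = {
    "a1": ["c1", "c2"],  # saving H1: H1 is saved, H1 feels discomfort
    "a2": ["c3", "c4"],  # saving H2: H2 is saved, H2 feels discomfort
    "a3": [],            # remaining inactive brings about no consequence
}

for action, cons in direct_consequences.items():
    permissible = all(utilities[c] >= 0 for c in cons)
    print(action, permissible)   # a1 False, a2 False, a3 True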

Have Fun! More explanations to follow!