Tutorial: Kantian Causal Agency Models

The idea of Kantian causal agency models was first proposed in our DEON-2018 paper (PDF). Kantian causal agency models serve as input to the procedure that checks actions for permissibility according to the Kantian humanity formula. The following case showcases their usefulness. Consider the following situation:

Bob gives flowers to Celia. However, he does so not to make Celia happy, but to make Alice happy, who is happy whenever Celia is happy.

This situation can be modeled as a Kantian causal agency model:

{
    "description": "Flower Example",
    "actions": ["giveflowers", "refraining"],
    "patients": ["celia", "alice"],
    "consequences": ["celiahappy", "alicehappy"],
    "mechanisms": {
        "celiahappy": "'giveflowers'",
        "alicehappy": "'celiahappy'"
    },
    "goals": {
        "giveflowers": ["alicehappy"],
        "refraining": []
    },
    "affects": {
        "giveflowers": [],
        "celiahappy": [["celia", "+"]],
        "alicehappy": [["alice", "+"]]
    }
}
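
Before evaluating anything, it can help to sanity-check the model file. The following minimal sketch uses only Python's standard json module; the path is the one used in the evaluation snippet further below:

import json

# Load the model file and list its main components.
with open("./kantian_cases/flower_case.json") as f:
    model = json.load(f)

for key in ("actions", "patients", "consequences", "goals"):
    print(key, "->", model[key])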

Kantian models have in common with utility-based causal agency models that there are actions, consequences, and mechanisms, and these work in just the same way. However, instead of intentions, we have goals: whereas intentions are causal chains from the action to an end, goals are just ends. Moreover, we make explicit a set of moral patients that are affected by the agent’s action, as well as a relation called affects, which describes whether some consequence has a positive or negative effect on a patient. In the flower example, Celia’s being happy affects Celia positively, and Alice’s being happy affects Alice positively. We could also add that the negations of these facts affect the patients negatively, e.g., by altering the affects relation as shown below. In this case, however, these extra lines will not contribute to the evaluation.

    "affects": {
        "giveflowers": [],
        "celiahappy": [["celia", "+"]],
        "alicehappy": [["alice", "+"]],
        "Not('celiahappy')": [["celia", "-"]],
        "Not('alicehappy')": [["alice", "-"]]
                  }
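
To make the contrast between goals and intentions more concrete: in a utility-based causal agency model, Bob’s plan would be stated as a causal chain running from the action to the end, roughly as sketched below (a sketch of the idea, not an excerpt from the library’s documentation):

    "intentions": {
        "giveflowers": ["giveflowers", "celiahappy", "alicehappy"],
        "refraining": ["refraining"]
    }

The Kantian model, by contrast, records only the end alicehappy as Bob’s goal.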

Proving the impermissibility of Bob’s action using the HERA Python library is as easy as the following:

from ethics.principles import KantianHumanityPrinciple
from ethics.semantics import CausalModel

# Load the model and mark giving flowers as the action that is performed.
model = CausalModel("./kantian_cases/flower_case.json", {"giveflowers": 1, "refraining": 0})

# Check the performed action against the Kantian humanity formula.
perm = model.evaluate(KantianHumanityPrinciple)
print(perm)

The workflow is just the same as in the case of utility-based causal agency models: First, the model is loaded and internally represented as a causal model. Second, permissibility is determined by handing over, as a parameter, the class that implements the permissibility-checking procedure to the model’s evaluate procedure. As output we get:

False
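
Because only the model path and the world assignment vary from case to case, the check is easy to wrap in a small helper. The sketch below reuses only the calls from the snippet above; the function name is our own:

from ethics.principles import KantianHumanityPrinciple
from ethics.semantics import CausalModel

def is_permissible(path, world):
    # Load the case file and evaluate the performed action
    # against the Kantian humanity formula.
    model = CausalModel(path, world)
    return model.evaluate(KantianHumanityPrinciple)

print(is_permissible("./kantian_cases/flower_case.json",
                     {"giveflowers": 1, "refraining": 0}))  # False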

This is because, by giving flowers to Celia just to make Alice happy, Bob uses Celia merely as a means. Bob can repair this situation by also having Celia’s happiness among his goals:

    "goals": {
        "giveflowers": ["alicehappy", "celiahappy"],
        "refraining": []
                  },

Now, the action is permissible.
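
Assuming the modified model is saved, e.g., as ./kantian_cases/flower_case_repaired.json (a file name of our choosing), re-running the check with the is_permissible helper sketched above reflects the repair:

# Celia's happiness is now among Bob's goals, so she is no longer
# treated merely as a means.
print(is_permissible("./kantian_cases/flower_case_repaired.json",
                     {"giveflowers": 1, "refraining": 0}))  # True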