Tutorial: Kantian Causal Agency Models

The idea of Kantian causal agency models was first proposed in our DEON-2018 paper (PDF). Kantian causal agency models serve as input to the procedure that checks actions for permissibility according to the Kantian humanity formula. The following case showcases their usefulness. Consider the following situation:

Bob is giving flowers to Celia; however, he does so not to make Celia happy, but to make Alice happy, who is happy if Celia is happy.

This situation can be modeled as a Kantian causal agency model:

description: Flower Example
actions: [giveflowers, refraining]
patients: [celia, alice]
consequences: [celiahappy, alicehappy]
mechanisms:
    celiahappy: giveflowers
    alicehappy: celiahappy
goals:
    giveflowers: [alicehappy]
    refraining: []
affects:
    giveflowers: []
    celiahappy: [[celia, +]]
    alicehappy: [[alice, +]]

Kantian models have in common with utility-based causal agency models that there are actions, consequences, and mechanisms, and these work in just the same way. However, instead of intentions, we have goals: whereas intentions are causal chains from the action to an end, goals are just ends. Moreover, we make explicit a set of moral patients that are affected by the agent’s action, as well as a relation called affects, which describes whether some consequence has a positive or negative impact on a patient. In the flower example, Celia’s being happy affects Celia positively, and Alice’s being happy affects Alice positively. We could also add that the negation of these facts affects the patients negatively, e.g., by altering the affects relation to look like below. However, in this case, these lines will not contribute to the evaluation.

affects:
    giveflowers: []
    celiahappy: [[celia, +]]
    alicehappy: [[alice, +]]
    Not('celiahappy'): [[celia, -]]
    Not('alicehappy'): [[alice, -]]
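For illustration, the components of such a model can also be written down as plain Python data. The dictionary layout below is chosen purely for exposition; it is not HERA’s internal representation or file format:

```python
# The flower example's components as plain Python data (illustrative layout,
# not HERA's actual format).
flower_model = {
    "description": "Flower Example",
    "actions": ["giveflowers", "refraining"],
    "patients": ["celia", "alice"],
    "consequences": ["celiahappy", "alicehappy"],
    # mechanisms: each consequence is caused by an action or another consequence
    "mechanisms": {"celiahappy": "giveflowers", "alicehappy": "celiahappy"},
    # goals: the ends the agent pursues with each action (unlike intentions,
    # no causal chain is recorded, just the ends themselves)
    "goals": {"giveflowers": ["alicehappy"], "refraining": []},
    # affects: positive ("+") or negative ("-") impact of a consequence on a patient
    "affects": {
        "giveflowers": [],
        "celiahappy": [("celia", "+")],
        "alicehappy": [("alice", "+")],
    },
}

# Celia's happiness affects Celia positively, yet it is not among Bob's goals.
assert ("celia", "+") in flower_model["affects"]["celiahappy"]
assert "celiahappy" not in flower_model["goals"]["giveflowers"]
```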

Proving impermissibility of Bob’s action using the HERA Python library is as easy as doing the following:

from ethics.principles import KantianHumanityPrinciple
from ethics.semantics import CausalModel

model = CausalModel("./kantian_cases/flower_case.json", {"giveflowers": 1, "refraining": 0})
perm = model.evaluate(KantianHumanityPrinciple)
print(perm)

The workflow is just the same as in the case of utility-based causal agency models: first, the model is loaded and internally represented as a causal model; second, permissibility is determined by handing over, as a parameter, the class that implements the permissibility-checking procedure to the model’s evaluation procedure. As output we get:

False

We can use the explain method to find out the reasons for the judgment:

reason = model.explain(KantianHumanityPrinciple)

The resulting output reads:

{'permissible': False, 'sufficient': [And(Means('Reading-1', 'celia'), Not(End('celia')))], 'necessary': [Means('Reading-1', 'celia'), Not(End('celia'))], 'inus': [Means('Reading-1', 'celia'), Not(End('celia'))]}
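The way a principle class is handed to evaluate (and explain) follows a plug-in pattern: the model instantiates whichever principle class it receives, so different ethical principles can be swapped in without changing the model. The following is a toy sketch of that pattern, with made-up class names; it is not HERA’s code:

```python
# Toy illustration of passing a principle *class* to a model's evaluate method
# (assumed structure for exposition, not HERA's implementation).
class ToyModel:
    def __init__(self, permissible):
        self._permissible = permissible

    def evaluate(self, principle_cls):
        # Instantiate the given principle on this model and run its check.
        return principle_cls(self).permissible()

class ToyPrinciple:
    def __init__(self, model):
        self.model = model

    def permissible(self):
        # Stand-in for a real permissibility-checking procedure.
        return self.model._permissible

print(ToyModel(False).evaluate(ToyPrinciple))  # -> False
```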

According to the sufficient reason, the action is impermissible because, by giving flowers to Celia just to make Alice happy, Bob uses Celia merely as a means (and not as an end). The necessary reasons point to strategies by which Bob can repair the situation: either he stops using Celia as a means (e.g., by finding another way to bring joy to Alice), or he also adopts Celia’s happiness among his goals. The second strategy can be modeled by altering the goal specification:

goals:
    giveflowers: [alicehappy, celiahappy]
    refraining: []

Now, the action is permissible.
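The means/end reasoning above can be sketched in simplified form. The following is a toy reconstruction under my own simplifying assumptions (a patient counts as a means if some consequence of the action affects them, and as an end if some goal of the action affects them positively); it is not HERA’s actual algorithm:

```python
# Simplified sketch of the humanity-formula check (illustrative assumptions,
# not HERA's implementation).

def consequences_of(action, mechanisms):
    """All consequences transitively caused by the action."""
    caused = set()
    changed = True
    while changed:
        changed = False
        for cons, cause in mechanisms.items():
            if (cause == action or cause in caused) and cons not in caused:
                caused.add(cons)
                changed = True
    return caused

def permissible(action, model):
    caused = consequences_of(action, model["mechanisms"])
    # Patients affected by any consequence of the action are used as means.
    means = {p for c in caused for (p, _) in model["affects"].get(c, [])}
    # Patients positively affected by one of the action's goals are ends.
    ends = {p for g in model["goals"][action]
            for (p, sign) in model["affects"].get(g, []) if sign == "+"}
    # Permissible iff nobody is used merely as a means.
    return means <= ends

model = {
    "mechanisms": {"celiahappy": "giveflowers", "alicehappy": "celiahappy"},
    "goals": {"giveflowers": ["alicehappy"]},
    "affects": {"celiahappy": [("celia", "+")], "alicehappy": [("alice", "+")]},
}
print(permissible("giveflowers", model))  # -> False: Celia is a means but not an end

model["goals"]["giveflowers"] = ["alicehappy", "celiahappy"]
print(permissible("giveflowers", model))  # -> True
```

Under these toy assumptions the check reproduces the tutorial’s verdicts: impermissible with the original goals, permissible once Celia’s happiness is also a goal.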