In my previous two blogs, I showed that machine-supported decisions are necessarily biased and that this bias can be reduced if the decision maker asks nine questions about the decision.

The core of artificial intelligence decision making is a prediction machine that contains a model learned from a database of decisions. The prediction machine uses this model to predict an outcome for an individual case. The prediction is used by the decision maker to make a decision.
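This pipeline, learning a model from past decisions, predicting an outcome for a new case, and leaving the final decision to a human, can be sketched in a few lines. The following is a toy illustration in plain Python; the data, the thresholding "model" and all names are invented:

```python
# A toy "prediction machine": learn a model (here just a threshold)
# from a database of past decisions, then predict an outcome for a
# new individual case. The prediction is input to a human decision.

# Hypothetical historical decisions: (score, outcome) pairs.
history = [(0.2, "reject"), (0.4, "reject"), (0.6, "accept"), (0.9, "accept")]

def learn_threshold(cases):
    """Learn the simplest possible model: the midpoint between the
    highest rejected score and the lowest accepted score."""
    rejected = max(s for s, o in cases if o == "reject")
    accepted = min(s for s, o in cases if o == "accept")
    return (rejected + accepted) / 2

def predict(threshold, score):
    """The machine's prediction for one new case."""
    return "accept" if score >= threshold else "reject"

model = learn_threshold(history)   # 0.5
prediction = predict(model, 0.7)   # "accept"
# The prediction is only an input: the decision maker still decides,
# and may overrule the machine.
```

Any bias in `history` is learned into the model and passed on to every prediction, which is why the questions below matter.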

The sources of bias in this process and the questions to be asked by the decision maker are summarized in the next diagram.

But not everyone has the time and capability to answer the nine questions. A judge who takes 30 seconds for a decision about detainment has too little time; a driver using a semi-autonomous driving system has no time. To even be able to answer these questions, we must redesign the decision-making ecosystem and allocate the task of answering these questions to different actors. And this changes the allocation of responsibility for a decision.

In this blog I first sketch the outline of a decision-making ecosystem and then give some hints on how to divide decision-making responsibility in it.

Decision machines are part of ecosystems

Even systems considered to be autonomous, such as deep-sea exploration robots and drones, are embedded in ecosystems containing designers, manufacturers, quality control, pilots, sensor operators, maintenance personnel and regulators. Decision-making is distributed among all these actors. They cooperate to make the autonomous system fly, drive or navigate.[i] Prediction machines are no different.

Here is a diagram of some of the stakeholders involved in using a prediction machine to make a decision.

The blue stakeholders correspond to components of the decision-making diagram shown at the start of this blog. The others are present in most decisions. Each kind of application of a prediction machine requires its own ecosystem diagram.

Common to all decision ecosystems is that the prediction machine is part of a decision-making system and that the decisions are made about part of the context of the decision-making system. For example, hiring decisions are made by management of a business and they are made about applicants in the context. An autonomous car is a decision-making system that makes predictions about the presence and behavior of other cars and obstacles in its context.

Our goal is to make people and prediction machines cooperate to improve decision-making. All stakeholders in the above diagram contribute to the decisions that are made. Which of them is to be treated as the decision maker? Let’s first ask who needs to answer the decision-making questions listed above.

Who answers the nine questions?

The two stakeholders most involved in answering the questions are the data scientists and decision makers. Data scientists should reflect on prejudice in the population, the sampling method, the quality of data, the construct validity of the data, and the error rates of the learning algorithm.

For some of this they need the help of subject matter experts such as the decision makers. For example, construct validity is about the meaning of data and should be discussed with subject matter experts. Prejudice that may be present in the population is a concern of the subject matter experts too.

Sampling methods and error rates are the expertise of the data scientist and these should be explained to the decision maker.
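What "error rates" means here can be made concrete. Below is a minimal sketch in plain Python, with invented predictions and outcomes, that computes the false-positive rate separately per group; a difference between groups is exactly the kind of number the data scientist should surface and explain to the decision maker:

```python
# Invented evaluation data: (group, predicted, actual) per case.
results = [
    ("A", "high_risk", "no_reoffend"), ("A", "high_risk", "reoffend"),
    ("A", "low_risk",  "no_reoffend"), ("A", "low_risk",  "no_reoffend"),
    ("A", "low_risk",  "reoffend"),
    ("B", "high_risk", "no_reoffend"), ("B", "high_risk", "no_reoffend"),
    ("B", "high_risk", "reoffend"),    ("B", "low_risk",  "no_reoffend"),
    ("B", "low_risk",  "reoffend"),
]

def false_positive_rate(rows, group):
    """Share of actual non-reoffenders in a group who were
    wrongly flagged as high risk."""
    negatives = [(p, a) for g, p, a in rows
                 if g == group and a == "no_reoffend"]
    flagged = sum(1 for p, a in negatives if p == "high_risk")
    return flagged / len(negatives)

fpr_a = false_positive_rate(results, "A")  # 1 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(results, "B")  # 2 of 3 non-reoffenders flagged
```

In this invented data the machine wrongly flags group B twice as often as group A; the decision maker cannot see this from an individual prediction, only from such aggregate error rates.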

The explanation of correlations is a task of the data scientist. Creating a theory about the underlying mechanism is a task of subject matter experts, including the decision makers.

Assessing the reliability of the case data and the structural similarity of the case to the cases in the training sample is a responsibility of the decision maker.

We see that responsibility for answering these questions is distributed across the ecosystem, but that all answers should flow to the decision maker. This brings us to the question of who, in the ecosystem, is the decision maker.

Who is the decision maker? Where is the decision maker? When?

As noted above, not everyone has the time and capability to answer the nine decision-making questions: a judge who takes 30 seconds for a decision about detainment has too little time, and a driver using a semi-autonomous driving system has none.

Yet, to take responsibility for a decision, the decision maker must answer all decision-making questions. If we introduce AI in decision-making, there are two possibilities.

  • If the person using the prediction machine does not have the time to answer the nine decision-making questions, then the decision-making system behaves as an autonomous machine. The decision is then really made elsewhere in the ecosystem, for example by the manufacturer of the prediction machine, who should take responsibility for it.

This means that the real decision maker will, ahead of time, decide that new case data is reliable and that all new cases are structurally similar to cases in the training sample.

  • If the person using the prediction machine does have the time to ask the nine decision-making questions reviewed above, then he or she should do so and take responsibility for the results. This should make the decision maker aware of bias not only in the prediction machine but also in his or her own decisions. It is the best way to improve not only the decisions but also the decision maker.

If both these options seem unattractive to you, there is a third option:

  • Don’t use a prediction machine. But this may mean passing up an opportunity to improve our decisions. France, when confronted with biases in judicial decisions revealed by statistical analysis, banned the use of AI in legal decision-making.

Reducing bias is not just a matter of introducing AI or improving the learning algorithm. It involves a redesign of the decision-making ecosystem to allow time and provide capability to answer the nine decision-making questions.
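Part of that redesign can be encoded in the system itself. If the manufacturer decides ahead of time which cases count as structurally similar to the training sample, that decision can be shipped with the model as a guard. A minimal sketch, assuming the training sample's feature ranges are distributed alongside the model (all feature names and numbers are invented):

```python
# Feature ranges observed in the training sample, shipped with the
# model by the manufacturer as an ahead-of-time similarity decision.
training_ranges = {"age": (18, 65), "income": (10_000, 120_000)}

def is_in_training_range(case, ranges):
    """Crude structural-similarity check: every feature of the new
    case must fall inside the range seen during training."""
    return all(lo <= case[feature] <= hi
               for feature, (lo, hi) in ranges.items())

case_ok = {"age": 30, "income": 50_000}
case_odd = {"age": 80, "income": 50_000}   # outside the training range

in_range = is_in_training_range(case_ok, training_ranges)    # True
out_of_range = is_in_training_range(case_odd, training_ranges)  # False
# When the check fails, the case should be escalated to a human
# decision maker rather than handled by the prediction machine.
```

Such a guard does not answer the similarity question, it only enforces, at run time, the answer that the real decision maker gave ahead of time.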

[i] David A. Mindell, Our Robots, Ourselves: Robotics and the Myths of Autonomy. Viking, 2015.