The recent proposal for a regulation on a European approach to AI has a problem. What kind of systems does it want to regulate? The current definition of subject and scope begs the question of what AI is. And if it is not clear what the regulation is about, then its 125 pages regulate nothing.
The current definition consists of two parts (Article 3.1).
- An AI system is software developed with one or more of the data-driven or rule-based techniques listed in Annex I, and
- The system can, given human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions that influence the environment it interacts with.
Begging the question
The problem with part (1) is that the list in Annex I is contingent on the current state of the art. Annex I lists machine learning techniques, logic-based techniques, and statistical and Bayesian approaches. However, new techniques will emerge.
To be able to extend the list in Annex I, the Commission has given itself the right to update this list “on the basis of characteristics that are similar to the techniques and approaches” listed in the Annex (Article 4).
But what is “similar”? Future AI techniques may be radically different from anything we know today. What if we could grow artificial brains? That would not be like anything in Annex I. And since it is not similar, Article 4 would not allow it to be added as an AI technique.
Also, in the future some of the techniques listed in Annex I may not be regarded as AI techniques anymore. When a technique is commercially available on a large scale, and we get used to it, we tend to stop considering it as AI.
In short, to know which techniques are similar to the ones in Annex I, we need to define AI. The definition in the Regulation begs the question of what AI is.
Reactive systems
The problem with part (2) is that there are many systems that influence their environment by means of content, predictions, recommendations, or decisions, that we would not consider to be AI systems. Software for weather predictions, product recommendation software, and the cruise control software of a car are not commonly regarded as AI systems.
This part of the definition actually describes reactive systems [1]. These are systems that maintain a model of their environment and respond to input based on the current state of the model. If the model is very sophisticated, it can make reliable predictions about its environment. The system can then respond to input based on its predictions of behavior in the environment. This makes the system look intelligent. But is having a sophisticated model sufficient for a system to be an AI system? And what is “sophisticated” anyway?
What is special about AI systems?
What is special about AI systems is not the techniques they implement nor the kind of output they produce, but the fact that they make decisions that require moral reasoning. Let me unpack what this means.
A cruise control system is a reactive system, and its output may benefit or harm people. It is a reactive system because it maintains a model of its environment (the current and desired speed of the car) and responds to changes in the current speed depending on the desired speed. Its output benefits the driver because it saves the driver the work of pushing the pedal. And it may also harm the driver because it may cause the car to crash into the car in front.
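To make the mechanics concrete, here is a minimal sketch, in Python, of a cruise controller as a reactive system: it keeps a model of its environment (current and desired speed) and reacts to each new speed reading. The class, names, and numbers are invented for illustration, not taken from any real controller.

```python
# Minimal sketch of a cruise controller as a reactive system (illustrative only).
# Its "model of the environment" is nothing more than the current and desired speed.

class CruiseControl:
    def __init__(self, desired_speed_kmh: float):
        self.desired_speed_kmh = desired_speed_kmh   # the driver's setpoint
        self.current_speed_kmh = 0.0                 # the system's model of its environment

    def on_speed_reading(self, measured_speed_kmh: float) -> float:
        """React to a new sensor input: update the model, return a throttle adjustment."""
        self.current_speed_kmh = measured_speed_kmh
        error = self.desired_speed_kmh - self.current_speed_kmh
        # Simple proportional response: open the throttle when too slow, close it when too fast.
        return max(-1.0, min(1.0, 0.05 * error))

cc = CruiseControl(desired_speed_kmh=100)
print(cc.on_speed_reading(92))   # positive value: open the throttle
print(cc.on_speed_reading(104))  # negative value: close the throttle
```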
But it is not an AI system, because its decisions to open or close the throttle do not require moral sensibility or moral reasoning.
By contrast, a system that classifies an action of a person as fraudulent is an AI system. For example, the system used by the Dutch tax department to classify requests for child-care allowance as valid or fraudulent is an AI system. These decisions require reasoning about possible benefits and harm to individuals and to the public, about decision fairness, and about the good faith of people, among other things. In short, they require moral reasoning.
My proposal is to define the systems for which the EU regulation is intended as systems that make decisions that require moral reasoning.
We can still call these systems AI systems. But be aware that the technology may be quite simple. For example, a fraud decision may be based on a decision tree without any machine learning at all. And there is evidence that simple statistical techniques (e.g. the weighted sum of two variables) perform on a par with complex algorithms [2]. Defining AI systems in terms of a list of technologies misses the point.
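To illustrate just how simple such a decision procedure can be, here is a hypothetical fraud rule built on a weighted sum of two variables, with no machine learning at all. The variables, weights, and threshold are invented for illustration and are not taken from any real system.

```python
# Hypothetical fraud flag based on a weighted sum of two variables (no machine learning).
# Variables, weights, and the threshold are invented for illustration.

def fraud_score(amount_claimed_eur: float, missing_documents: int) -> float:
    """Weighted sum of two hand-picked variables."""
    return 0.001 * amount_claimed_eur + 2.0 * missing_documents

def flag_as_fraud(amount_claimed_eur: float, missing_documents: int) -> bool:
    # The threshold itself encodes a moral choice: how many honest applicants
    # are we willing to flag in order to catch one fraudulent one?
    return fraud_score(amount_claimed_eur, missing_documents) > 10.0

print(flag_as_fraud(amount_claimed_eur=3000, missing_documents=4))  # True  (3.0 + 8.0 > 10)
print(flag_as_fraud(amount_claimed_eur=3000, missing_documents=2))  # False (3.0 + 4.0 <= 10)
```

Technically this is trivial. Morally it is not, which is exactly why a definition in terms of technology misses the point.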
My proposed definition raises a question. Can moral decisions be automated at all? Can AI systems, defined this way, exist?
Can moral decisions be automated?
Let’s take an example. The decision to grant a housing subsidy involves reasoning about fairness, good faith, and private and public interests, so it requires moral reasoning. At the same time, it should be based on clear rules applicable to all. Perhaps these rules are so clear that they can be automated? The decisions would then become like the decisions made by a cruise control system, which are not moral decisions.
However, in this case moral reasoning would have moved to another level: Are the decision rules for housing subsidy morally defensible in all cases? The moral question does not disappear but becomes invisible in the practice of decision making.
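A small, purely hypothetical sketch may help to show where the moral reasoning goes. The variables and limits below are invented and not taken from any actual subsidy scheme.

```python
# Hypothetical automated subsidy rule: the decision itself is mechanical,
# but the thresholds embody moral choices made elsewhere.
# All variables and limits are invented for illustration.

def grant_housing_subsidy(annual_income_eur: float, monthly_rent_eur: float,
                          household_size: int) -> bool:
    income_limit = 30_000 + 5_000 * max(0, household_size - 1)  # who counts as needy?
    rent_limit = 800                                             # which homes deserve support?
    return annual_income_eur <= income_limit and monthly_rent_eur <= rent_limit

# The rule executes without any moral reasoning, yet whether these limits are
# defensible in all cases (e.g. for a household just above the income limit) is a
# moral question that has moved from the caseworker to the designers of the rule.
print(grant_housing_subsidy(annual_income_eur=28_000, monthly_rent_eur=750, household_size=2))  # True
print(grant_housing_subsidy(annual_income_eur=35_500, monthly_rent_eur=750, household_size=2))  # False
```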
Even if you believe that we will one day be able to automate moral reasoning completely, let’s acknowledge that this is not possible today. If a decision about housing subsidy is automated, then the moral part of it moves out of the immediate decision-making context.
Does that mean that we should not use AI systems at all? Evidence shows that professional judgment can be improved when supported by AI systems [2]. So I think it is a good thing to use AI systems for these decisions.
The EU AI regulation is intended precisely for those cases in which AI systems are used but do not capture the moral aspect of the decisions they support. Which brings us back to the definition. What, exactly, are these cases?
Which decisions require moral sensibility?
A general specification of decisions that require moral reasoning is a task for moral philosophers, which is to say there is no definite answer. The proposed Regulation solves this by giving a list of examples in Annex III. The list mentions applications in the judiciary, law enforcement, personnel management, education, and public services, as well as all applications of biometric systems. AI systems used in these cases are considered high-risk and need to be regulated.
If we update the definition of “AI system” to my proposal (systems that make decisions that require moral reasoning), then Annex III suffices to clarify what this means. There is no need to give a general definition.
The advantage of a definition in terms of moral reasoning, illustrated by examples, is that it makes clear why AI systems need regulation (they can be used to make decisions that require moral reasoning) without entering an interminable discussion of what characterizes these systems. At the same time, manufacturers know whether they fall within the current scope of the definition.
Unlike the list of technologies in Annex I, the examples in Annex III will never become obsolete. But we can expect new applications of AI systems to emerge in the future that may have to be added to the list. The Commission may update Annex III based on criteria concerning the possible harm done to the subjects of a decision (Article 7).
What is regulated are AI ecosystems, not AI systems
Analysis of the examples in Annex III shows that each of these cases constitutes a complex sociotechnical ecosystem. Stakeholders in the ecosystem include designers, engineers, data scientists, maintainers, operators, decision makers, decision subjects, technical support staff, quality management staff, etc. When an AI system is used to automate a decision, the moral part of it is not dealt with by the machine but by other actors in the AI ecosystem.
The proposed Regulation aims to structure this AI ecosystem in conformance with the public interest and Union values (preamble). It requires stakeholders to perform risk management, data governance, and record-keeping, and requires that the system has proper technical documentation, is accurate, robust and secure, and provides transparent advice. There must be human oversight of the system as well as national and European bodies for notification and audit. So far, so good.
Who is responsible?
One ugly property of human ecosystems is that if a morally bad decision is made, the stakeholders in the decision-making ecosystem will point fingers at each other. The least powerful of the lot will probably lose this game. They will get the blame or, if they were harmed by the decision, they will not get redress.
The requirement of human oversight demanded in Article 14 of the proposed Regulation is too weak to deal with this. It should be strengthened to human responsibility. It must be clear in advance of using an AI system who is responsible for the decisions made by, or with the help of, the system.
This in turn implies a strengthening of the transparency requirements in Article 13. In an earlier blog I showed that in a data-driven AI system, data scientists must be aware of the unavoidable prejudices in the training sample, unavoidable limits to the representativeness of the sample, and limits to the construct validity of the measured variables. They must understand the strengths and weaknesses of the choice of error rates in the learning algorithm.
And data scientists must explain these properties of the AI system to the decision makers so they can take responsibility. The decision makers are responsible for assessment of the similarity of the case at hand to the cases on which the algorithm was trained and tested — this requires moral reasoning too.
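To make the point about error rates concrete, here is an illustrative sketch of how the choice of a decision threshold trades false positives against false negatives. The risk scores and labels are invented; the tradeoff they demonstrate is what data scientists must explain to decision makers.

```python
# Illustration (with invented scores and labels) of how the choice of a decision
# threshold trades false positives against false negatives.

# Risk scores produced by some model, paired with the true outcome (1 = fraud).
scores_and_labels = [(0.92, 1), (0.81, 0), (0.75, 1), (0.40, 0), (0.35, 1), (0.10, 0)]

def error_rates(threshold: float):
    false_pos = sum(1 for s, y in scores_and_labels if s >= threshold and y == 0)
    false_neg = sum(1 for s, y in scores_and_labels if s < threshold and y == 1)
    return false_pos, false_neg

for threshold in (0.3, 0.5, 0.8):
    fp, fn = error_rates(threshold)
    # A lower threshold catches more fraud but wrongly flags more honest people;
    # deciding which error matters more is a moral choice, not a technical one.
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```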
Does it work?
Would the regulations, suitably improved, work? Would they contribute to the public interest and Union values, as stated in the preamble?
Let’s return to the fraud classification system used by the Dutch tax department. The decisions of this system required moral reasoning, so the Regulation applies to this case.
As the record shows, the decision-making ecosystem failed miserably. Valid requests made in good faith were frequently misclassified as fraudulent. As a result, people lost their houses or jobs. What would have happened if the ecosystem had been designed according to the requirements of the proposed EU AI regulation?
To test the proposed Regulation, I propose analyzing this failure in detail to see how the regulation would have applied and to assess whether this would have prevented moral failure.
References
[1] R. J. Wieringa, Design Methods for Reactive Systems, Morgan Kaufmann, 2003.
[2] S. Goel, R. Shroff, J. L. Skeem and C. Slobogin, “The accuracy, equity, and jurisprudence of criminal risk assessment,” SSRN, December 26, 2018.