The AI regulation proposed by the EU is bound to increase the power of Big Tech and neglects the rights of European citizens. It threatens to create a system in which unfair decisions about citizens can be made in bulk, and in which attempts to correct this get bogged down in a swamp of good intentions and unrealizable repairs.
Let’s start from the beginning.
What’s the problem the regulation wants to solve?
The regulation attempts to regulate high-risk decisions made by AI systems. These are decisions that affect human dignity, health, and safety: for example, decisions about employment and schooling, promotion at work or at school, access to public services, immigration, and the interpretation of legal evidence.
Obviously, one wants these decisions to be made responsibly and with justification. Errors may cause loss of human dignity, loss of health, or loss of safety. Automating moral decisions comes with risks, and it is these risks that the regulation wants to manage.
What’s the proposed solution?
The EU views these risks as properties of AI systems. True to this systems orientation, the regulation demands that systems that automate high-risk moral decisions be certified. To get a certificate, there must be quality management, technical documentation, risk management, record-keeping, transparency of decisions, human oversight, and reliability of the system.
The system provider is responsible for certification. For certified systems, users are obliged to follow their instructions for use.
This increases the power of the providers
For most kinds of AI systems, providers can conduct this assessment themselves, which puts too much power in their hands.
Moreover, once a system is certified, users are obliged to follow its instructions for use. But this implies that when they use these systems to decide about public subsidies, promotion at work, entry to an educational institution, immigration, law enforcement, and so on, they must follow the provider's procedures. This is undesirable. The responsibility for these procedures must lie with the users themselves.
Certified systems still make mistakes
Even certified systems will make wrong decisions. The data on which they have been trained will be incomplete: the only complete set of data about a population is the population itself. Not only will the data set be incomplete, it will contain errors and biases. In addition, the algorithms used to train the system necessarily have false positive and false negative rates. And the subjects about whom decisions are made may differ from the training sample in important ways.
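To make these error rates concrete, here is a minimal sketch in Python with purely illustrative numbers; the population size, fraud rate, and error rates below are assumptions chosen for the sake of the example, not figures taken from the Regulation or from any real system. Even a classifier that sounds accurate produces thousands of wrong decisions when applied to an entire population.

```python
# Illustrative sketch only: all numbers below are assumptions, not real data.
population = 300_000          # assumed number of benefit applications per year
fraud_rate = 0.01             # assumed share of genuinely fraudulent cases
false_positive_rate = 0.02    # assumed: 2% of honest applicants wrongly flagged
false_negative_rate = 0.10    # assumed: 10% of fraudulent cases missed

honest = population * (1 - fraud_rate)
fraudulent = population * fraud_rate

wrongly_flagged = honest * false_positive_rate   # honest people treated as fraudsters
missed_fraud = fraudulent * false_negative_rate  # fraud that slips through

print(f"Honest applicants wrongly flagged: {wrongly_flagged:,.0f}")
print(f"Fraudulent cases missed:           {missed_fraud:,.0f}")
# With these assumed rates, roughly 5,940 honest people are wrongly flagged
# each year -- each one a case that needs explanation, appeal, and repair.
```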
The focus on systems and their certification blinds the EU regulator to the fact that what must be regulated are not systems but the moral decisions made with the help of these systems. Any moral decision made by a machine poses a risk. What happens if a wrong decision is made? And what happens if morally wrong decisions are made in bulk, as automation makes possible?
Looking at the decisions, we see a glaring omission.
You and I are missing from the Regulation
The subject of a decision has a right to know what considerations went into the decision and why the decision maker thinks this is a fair decision. To guarantee this right, the decision maker should be able to explain the decision.
And if the subject does not agree, he or she should have the right to appeal the decision. This implies that the decision maker is responsible for the decision, even if the decision has been made, or has been recommended, by a machine. To take that responsibility, the human decision maker must support the decision.
These are human rights that follow from European values such as human dignity, democracy and the rule of law.
The current Proposal does not guarantee these rights. Transparency of a decision, as required in Article 13, is not the same as explainability of the decision. Human oversight, required in Article 14, is not the same as human responsibility.
Automation does not make moral decision making easier, but it can make it fairer
Studies have shown that AI can improve the fairness of legal decisions. So in some cases at least, automation can improve moral decision making.
But this requires that the decision maker understands the decision, is able to explain it to the subject, and supports that explanation. We should not accept that the maker of a moral decision about schooling, employment, access to public services, or any of the other high-risk decisions listed in the Regulation does not understand the decision, does not know why it is made, or does not support it. Decision makers should take responsibility.
This means that automating moral decisions creates more work for decision makers, because they must now do the work of understanding and explaining each decision and of deciding whether or not they support it. But only in this way can automation improve the quality of decision making.
Large-scale automated moral decision-making can lead us into a swamp of good intentions and unrepairable suffering
The automation of decisions about access to public services went horribly wrong in the Dutch childcare benefits scandal. Based on automated decisions, childcare benefits were withdrawn from tens of thousands of families who, judging from the investigation of some individual cases, probably all had a right to the subsidy. The withdrawal caused severe financial strain on these families. Some people were evicted from their homes because they could no longer pay the rent, some lost their jobs, and some couples divorced under the stress. One person committed suicide.
Attempts by the government at bulk reparation payments have attracted fraudsters and victims alike and have increased the non-automatable workload on the government administration beyond its capacity. To this day, this delays the reassessment of the tens of thousands of victims' cases. As I write this, Secretary of State Alexandra van Huffelen has announced that no solution is in sight before 2023, because the administration lacks the machines and the people to judge all cases individually any sooner.
The scandal illustrates that automating moral decisions on a massive scale, without requiring case-by-case explanation and responsibility by a human decision maker, leads to a decision maker that has collapsed under its workload, that has morphed into a swamp of good intentions and incompetence, and that is unable to deal with the suffering it has caused. To use an unappealing metaphor that sticks: if you have shit on one hand and try to clean it with your other hand, you get shit on both hands.
The lesson is clear: when automating moral decisions, the decision maker should still be a human who takes responsibility for the decision and is able to explain it in individual cases. This can improve the quality of the decisions.
But it also prevents mass execution of the decisions, which is a good thing, because massively wrong decision making is a horror for subjects and decision makers alike. Automation of moral decisions should not aim at quantity but at quality.