Moral Computers? Trusting AI with Right and Wrong

24.05.2022

On March 12, 1992, a light-rail trolley in service on the Gothenburg, Sweden mass-transit system got stuck without power at the top of a hill. The passengers were evacuated, and the traffic manager ordered the driver to release the brakes and let the two-car train roll down the hill about a quarter mile, to where its motors could reconnect with power.

This, he said, would reactivate the brakes. 

The manager was wrong: it was technically impossible to engage the brakes this way. The maneuver was also illegal. 

As the two-car train began moving, it rolled down from the light-rail stop at the top of the hill and past the Chalmers University of Technology, where it was supposed to pick up power. Without functioning brakes, it accelerated down Aschebergsgatan, reaching an estimated speed of 100 km/h (62 mph).

About 1.6 kilometers (1 mile) down the hill, just past the University of Gothenburg main building, the train crashed into the platforms of the Vasaparken light-rail stop. It crushed cars, caused a fire and killed 13 people. Another 29 were taken to hospital, many with serious injuries. 

A court later found the traffic manager fully responsible. His order to the driver to illegally override established safety protocol had caused the accident. However, according to some accounts at the time, the accident may also have been the result of rare technical circumstances.

In either case, the tragedy is a real-life proxy for the so-called trolley problem, which is sometimes used in empirical research on moral values. In the thought experiment, an out-of-control trolley is heading down a track where several people stand in its way, and the only way to save them is to divert the trolley onto another track, where it will hit a smaller group of people. The person whose values are being studied is asked to choose which track to send the trolley down.

At the time of the accident in Gothenburg, there was only one track along the path of the runaway train. A few years later, however, a new track, going to a different part of town, was added about a quarter of the way down the hill. Had it been in place in 1992, the transit traffic manager would have had the choice to send the runaway train onto the alternative track instead.

Which track would the traffic manager have chosen? What if the manager's role had been performed by an artificial intelligence? Would the decision have been different?

Computers are becoming increasingly involved in moral problem solving. This is inevitable, as we make ourselves dependent on them for more and more functions of our lives. The trend raises the question:

Should AI make moral decisions?

There is an implicit premise in the expansion of AI technology, namely that it is flawless, or at least far less prone to error than humans are. This is a dicey premise to rely on, especially since the most visible recent arrival of AI in our lives, self-driving cars, has proven just the opposite. Not only are self-driving cars far from flawless, but as a report from the Insurance Institute for Highway Safety (IIHS) in the United States explains, autopiloted cars are not very good at reducing accidents.

The reason, says the IIHS, is that when AI gets behind the wheel, it drives much like humans do.

Why? The answer to this question tells us all we need to know about why we do not want AI making moral decisions for us. 

With the rise of AI technology comes a growing body of literature that examines the relationship between programming and moral values in both theory and practice. One of many good examples is a recent article over at BigThink.com, where Jonny Thomson asks: “Whose ethics should be programmed into the robots of tomorrow?” With the trolley problem in mind, he continues:

Who should decide how our new machines behave? Perhaps we need not fear a “tyranny of the majority,” as de Tocqueville mused, but rather the tyranny of a tiny minority based in Silicon Valley or robotics factories. Are we happy to have their worldview and ethical values as the model for the brave new world ahead of us?

These are good questions, and highly relevant given the rapid advancements in artificial intelligence. At the same time, the problem that Thomson’s questions address is not as novel as it may seem. Trusting computers with ethical matters is technically new, but the idea of handing moral decision-making over to machines is not. The mechanized form of moral decision-making envisioned in the growth of AI technology is in many ways an adaptation of another, well-established form of mechanized decision-making: government-run health care.

Many decisions with direct implications for patients are made by standardized, institutionalized functions within a government bureaucracy. Designated boards of experts use statutes, regulatory guidelines and discretionary powers to decide what treatment methods are permitted, and which ones are banned. Other experts decide what hospitals, clinics, and areas of medicine should get more funding, and which ones should have their budgets downsized. 

The mechanized moral decision-making in government health care systems is manned by humans. They write the regulations and then make decisions based on them. 

Three challenges for the moral AI

Would it be desirable to replace those humans with AI? Consider the following three examples:

1. Two cancer patients are waiting for surgery. The operation will permanently cure them, but any further delay will be fatal. Government has not appropriated enough money for both patients to be treated in time to save their lives. Both patients are 40 years old, married, and have two kids. One of them makes twice as much money as the other and, thanks to the progressive income-tax scale in the country, pays more than twice as much in taxes as the other patient.

Many countries use a decision-making tool known as QALY, Quality-Adjusted Life Years, in their health-care systems. The tool is used for allocating medical resources and for setting priorities as to who gets what medical treatment, and when. In the current example, QALY-based reasoning would guide the hospital to prioritize the patient who pays more in taxes, since this guarantees a higher return to the government on the cost of the surgery.
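To make the arithmetic concrete, here is a minimal sketch in Python of how such a rule could be mechanized. The patient data, the field names, and the tax-revenue term are all invented for this illustration; the standard QALY figure is simply the expected quality-of-life weight multiplied by the life years gained, and the fiscal weighting is added only to mirror the decision rule described above.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    quality_weight: float      # expected quality of life after treatment, 0.0 to 1.0
    life_years_gained: float   # expected additional life years from the operation
    annual_tax_paid: float     # hypothetical fiscal input, not part of the standard QALY figure

def qaly_gain(p: Patient) -> float:
    """Standard QALY arithmetic: quality-of-life weight times life years gained."""
    return p.quality_weight * p.life_years_gained

def fiscal_priority_score(p: Patient) -> float:
    """Hypothetical scoring rule in the spirit of the example above:
    QALYs weighted by the expected tax return to the government."""
    return qaly_gain(p) * p.annual_tax_paid

patients = [
    Patient("patient_a", 0.9, 35.0, 30_000.0),
    Patient("patient_b", 0.9, 35.0, 60_000.0),  # identical prognosis, pays twice the taxes
]

# With equal medical prognoses, the fiscal term alone decides who is operated on first.
prioritized = max(patients, key=fiscal_priority_score)
print(prioritized.name)  # -> patient_b
```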

2. Two patients, equal in all relevant respects, are being treated in the same hospital for the same rare medical condition. Treatment is expensive and therefore heavily rationed by government. Only one patient can get the treatment offered by the hospital.

One of the patients explains that his wealthy brother has promised to buy a new medical drug from abroad. It is known to cure the condition, but due to its high cost it has been excluded from the supply of pharmaceutical products in this country. A group of experts denies the patient privately funded access to the medicine, on the grounds that it would lead to an unequal distribution of medical treatment.

3. A mother gives birth to a child with a medical condition that will require life-long treatment. The treatment will allow the child a normal life; without it, the child will still live but will be handicapped and dependent on others for the rest of its life.

Upon evaluating the child’s medical condition, government orders euthanasia for the child. Doing so will save taxpayers money, either on the cost of life-long treatment or on the cost of life-long assistance. It will also, government argues, spare the patient the pain of living a low-quality life.

As I explained in my books Remaking America and The Rise of Big Government, the first two examples are mildly stylized versions of real-life decisions made regularly in government-run health care systems. The third example is—to the best of my knowledge—still illegal in the civilized world, but the case is morally not far away from the other two. Besides, if the advancing frontline of the abortionist war on life is combined with government-run health care, infanticide may soon be not only legal, but prescribed under the given circumstances.

In all three examples, the moral judgments regarding the allocation of resources and application of treatment methods have been removed from the patient. The decisions regarding who gets what treatment, and who is left to die, are guided by moral preferences found in legal statutes or regulatory guidelines. 

Prefabricated moral algorithms

Decisions on life or death have become administrative, bureaucratic—and, yes, mechanical.

Can we, and should we, hand over these types of decisions to an artificial intelligence? If so, as per Thomson’s first question, someone will have to decide how the AI should make its moral decisions. Someone will have to write the decision guidelines that the AI will apply.

That “someone” will most likely be the same people who write the guidelines for moral decisions in government-run health care today. They will be deemed to have the appropriate expertise and experience.

Over time, as artificial intelligence becomes more entrenched in the realm of moral decision-making, it is entirely possible that the AIs used for those decisions become standardized. Manufacturers will increasingly equip them with prefabricated moral preferences to address what are considered the most common problems the AI will face. Those preferences will be tailored to the mechanized decision guidelines that most buyers of AI are already using.

The criteria for prioritizing patients, the evaluation of treatment methods based on standardized testing, and the decision whether a newborn baby shall be euthanized can all be subjected to standardized, prefabricated algorithms. Over time, all health-care AIs will be equipped with them. Buyers, i.e., the government agencies in charge of allocating health-care resources, can then fine-tune them for their own particular needs.

Think of it as a car that can be configured for sale to the general public, as a police car, or as a taxi. Different applications, but the same machine underneath.
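To make the idea concrete, here is a minimal Python sketch of what such a prefabricated, buyer-configurable decision policy might look like. Every name, default value, and threshold below is hypothetical; the point is only that the manufacturer ships one default policy and each agency adjusts its parameters.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TriagePolicy:
    """Hypothetical prefabricated decision policy shipped with a health-care AI."""
    max_cost_per_qaly: float = 50_000.0  # spending cut-off per quality-adjusted life year

def approve_treatment(policy: TriagePolicy, expected_qalys: float, cost: float) -> bool:
    """Apply the configured policy to a single treatment decision."""
    if expected_qalys <= 0:
        return False
    return cost / expected_qalys <= policy.max_cost_per_qaly

# The manufacturer ships one factory default ...
FACTORY_DEFAULT = TriagePolicy()

# ... and each buying agency fine-tunes the same machinery to its own budget rules.
agency_policy = replace(FACTORY_DEFAULT, max_cost_per_qaly=35_000.0)

print(approve_treatment(FACTORY_DEFAULT, expected_qalys=2.0, cost=80_000.0))  # -> True
print(approve_treatment(agency_policy, expected_qalys=2.0, cost=80_000.0))    # -> False
```

The same code underneath, different configurations on top: the police-car-versus-taxi analogy in software form.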

This is all possible. But is it desirable?

The answer depends less on the form in which the decision is made, by an algorithm or by a human brain, and more on what moral values the decision maker applies to the problem. Specifically, since this is about health care, we need to know what those values say about human life itself.

Is it possible to equip any application of artificial intelligence with an absolute ban on negotiating human life? In theory, the answer is of course affirmative: just as we can decide that life is sacrosanct, we can write into a computer algorithm that life is sacrosanct. In practice, however, it is impossible to construct a situation where AI can make a difference for the better while still respecting that life is sacrosanct.

Plainly, artificial intelligence holds no decisive advantage in qualitative evaluations; its edge is on the quantitative side. There, working on measurable variables, AI can apply its superior ability to process large amounts of data, for example in calculating the outcomes of different distributive patterns for health-care resources. By estimating outcomes in greater detail than humans could under comparable circumstances, artificial intelligence can make better quantitative decisions.

The only way the AI’s quantitative superiority can be used for qualitative evaluations is if we, meaning the humans who write the algorithms, assign quantitative values to qualitative variables. We can, for example, give the AI the ability to calculate whether a patient’s life will be “good” or “bad” as a result of medical treatment. If the AI is told to maximize “good,” it will recommend as many treatments as possible based on that expected outcome.
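As a minimal sketch of what that quantification amounts to, assume an invented “goodness” score and invented treatment data; once “good” has been reduced to a number, maximizing it is trivial arithmetic for the machine.

```python
# Hypothetical treatment options; the figures are invented for illustration only.
treatments = [
    {"name": "treatment_a", "expected_quality_of_life": 0.8, "expected_years": 10},
    {"name": "treatment_b", "expected_quality_of_life": 0.6, "expected_years": 20},
    {"name": "no_treatment", "expected_quality_of_life": 0.3, "expected_years": 20},
]

def goodness(option: dict) -> float:
    """Hypothetical quantification of a 'good' outcome: quality times duration.
    The hard question, namely what counts as quality, is settled by whoever writes this line."""
    return option["expected_quality_of_life"] * option["expected_years"]

# The machine can rank the options flawlessly, but only against the definition it was given.
best = max(treatments, key=goodness)
print(best["name"], goodness(best))  # -> treatment_b 12.0
```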

Then again, someone will have to decide what “good” actually means. The AI will never be able to make that decision on its own, unless of course it learns to study value theory in philosophy independently.
