ETHICAL ALGORITHMS  

Can Machines Have Morals?

No hands, no liability? Source: DPA

The day will come when a driver takes a nap behind the wheel as his self-driving car speeds down a steep incline. Suddenly, a woman on a bicycle falls over in front of him with a little boy in the bike’s child seat. A few milliseconds will be enough for the car’s software to determine that the braking distance is too short. What should the self-driving car do? If it continues straight ahead, it will drive over the mother and child. If it veers to the left, the car and driver will plunge down the hillside. Three human lives, two options.
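A back-of-the-envelope check, using the standard kinematic stopping-distance formula and purely hypothetical numbers for speed, reaction time and road friction, shows how little computation such a verdict requires (the slope of the road is ignored for simplicity):

```python
# Hypothetical stopping-distance check, roughly as a car's software might perform it.
# All numbers are illustrative assumptions, not data from any real vehicle.

G = 9.81  # gravitational acceleration in m/s^2

def stopping_distance(speed_kmh: float, reaction_time_s: float, friction: float) -> float:
    """Reaction distance plus braking distance from the standard kinematic formula."""
    v = speed_kmh / 3.6                      # km/h to m/s
    reaction_distance = v * reaction_time_s  # distance covered before braking starts
    braking_distance = v ** 2 / (2 * friction * G)
    return reaction_distance + braking_distance

# A car doing 80 km/h, obstacle suddenly 35 meters ahead:
needed = stopping_distance(speed_kmh=80, reaction_time_s=0.2, friction=0.7)
print(f"Distance needed to stop: {needed:.1f} m")  # roughly 40 m
print("Braking alone will not be enough" if needed > 35 else "The car can stop in time")
```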

An ethics committee appointed by the German Federal Ministry of Transport recently submitted 20 rules for self-driving cars. The committee dodged the issue of whether a car should accept the death of one or another person in an unavoidable accident. It only decided there could be no weighing of one human life against another. But is it possible to dictate to an algorithm not to draw conclusions from the data on which it functions?

Algorithms already navigate us through traffic; they decide what we want to read and hear, determine who can be insured and for how much, and recommend to police how long a suspect should be held in detention. The debate over decisions made by self-driving cars serves as a template for the services machines will provide in the future – robots working in factories alongside humans, computers in hospitals that recommend drug dosages. Can ethical conduct be programmed? Can machines be taught moral standards?

“When you lose your child in an accident, it isn’t enough that someone pays for the damages – and points to an improved accident statistic.”

Volker Lüdemann, lawyer specializing in digital ethics

Matthias Ulbrich is Audi’s top innovator. He stresses that experts agree automated driving will reduce accidents involving injuries and fatalities. “Human error is involved in 90 percent of all accidents today.” But decreasing the frequency of something does not make it less horrific.

“People want to know why something happened,” says lawyer Volker Lüdemann. “When you lose your child in an accident, it isn’t enough that someone pays for the damages – and points to an improved accident statistic.” Mr. Lüdemann specializes in the ethical dilemmas of going digital.

When a bicyclist falls in front of a car, the driver reacts instinctively. In most cases, he brakes, with or without success. A machine, however, calculates the various options in a fraction of a second, and assesses them as it was trained to. The death of the bicyclist is no longer a matter of fate but programming. That’s the quandary facing those training robot cars with traffic regulations.
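What such an assessment might look like can be sketched in a few lines. The options, risk estimates and weights below are invented for illustration and are not drawn from any manufacturer's software; the point is that someone has to choose the weights.

```python
# Hypothetical sketch of how a self-driving car might score its options.
# Options, risk estimates and weights are invented; none of this comes from real software.

options = {
    "brake and continue straight": {"passenger_risk": 0.1, "cyclist_risk": 0.9},
    "swerve left off the road":    {"passenger_risk": 0.8, "cyclist_risk": 0.0},
}

# Whoever sets these weights decides whose safety counts for how much.
weights = {"passenger_risk": 1.0, "cyclist_risk": 1.0}

def expected_harm(risks: dict) -> float:
    return sum(weights[name] * value for name, value in risks.items())

for name, risks in options.items():
    print(f"{name}: expected harm {expected_harm(risks):.2f}")

chosen = min(options, key=lambda name: expected_harm(options[name]))
print("Chosen:", chosen)
```

With equal weights, this toy car sacrifices its own passenger; raise the passenger weight and the choice flips. That flip is exactly the decision Mr. Lüdemann doubts any buyer will accept.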

On the one hand, it’s in the interest of car companies to protect passengers; they are, after all, the customers. “No one is going to buy a car with a built-in hero’s death algorithm,” Mr. Lüdemann says. But placing one human life above another is a PR disaster, as Daimler learned when an executive spoke out in favor of always protecting the lives of the car’s passengers. The company hastened to clarify that it had not, in fact, decided in favor of its passengers.

Eric Hilgendorf sat on the self-driving car ethics committee – a lawyer among carmakers and consumer protectionists, philosophers and computer scientists. “We are a long way away from programming moral precepts,” he says, adding that machines do not have all the information in a critical situation. “Sensors also have only limited ability to register things.” Moreover, questions of ethics and legal procedure, such as weighing the value of competing interests, have yet to be translated into computer code. The software found in all smart machines is designed to learn, using humans as models, moral defects and all.

The police in the English town of Durham will soon use artificial intelligence to decide how long to keep a suspect in custody. The software, the Harm Assessment Risk Tool (HART), bases its recommendations on the severity of the crime, the suspect’s prior convictions and the risk that the suspect will flee, drawing on local crime statistics from recent years. Similar software is already in use in the US – and its weak spots are already showing. It classifies black suspects, for example, as more dangerous than white ones.
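This is not HART's actual model, but a minimal hypothetical sketch shows how a score can be assembled from the inputs named above, and how a feature derived from local statistics can quietly tilt the result:

```python
# Hypothetical custody risk score built from the kinds of inputs the article names.
# The weights are invented; real tools derive them from historical crime data,
# which is exactly how past enforcement patterns can leak into the score.

def risk_score(severity: int, prior_convictions: int, flight_risk: float,
               neighborhood_arrest_rate: float) -> float:
    """Return a score between 0 and 1; higher suggests holding the suspect longer."""
    score = (0.4 * severity / 10                     # severity on a 0-10 scale
             + 0.3 * min(prior_convictions, 5) / 5   # capped at five priors
             + 0.2 * flight_risk                     # already between 0 and 1
             + 0.1 * neighborhood_arrest_rate)       # proxy drawn from local statistics
    return min(score, 1.0)

# Two otherwise identical suspects from differently policed neighborhoods:
print(risk_score(severity=4, prior_convictions=1, flight_risk=0.2, neighborhood_arrest_rate=0.1))
print(risk_score(severity=4, prior_convictions=1, flight_risk=0.2, neighborhood_arrest_rate=0.8))
```

The last feature never mentions skin color, yet because arrest rates differ from neighborhood to neighborhood, the two scores differ too.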

Now that machines, toys and security systems are being trained, the question is what we teach them – and what we should keep from them. Should a sensor in a self-driving car know that a pedestrian is pregnant and therefore especially worth protecting? Should it make judgments based on employment status or criminal record?

Sohan Dsouza is letting people decide. As a doctoral student at the Massachusetts Institute of Technology’s Media Lab, he has developed a simple online game that asks a serious question: Whom would you kill? The Moral Machine offers a selection of scenarios and asks, for example, whether a self-driving car should run down a man on the crosswalk or swerve and kill its five passengers. Should it slam into a wall with two children sitting in the backseat or veer off and kill two elderly pedestrians?

The Moral Machine’s 3.5 million players have already made their choices. Mr. Dsouza and his colleagues found that in some dilemmas, respondents everywhere tend toward the same solution; in others, there are strong cultural differences. In China, Eastern Europe and Russia, for example, people care more about the lives of the passengers; in the West, the welfare of the largest number takes precedence. “People decide very differently depending on the culture, country and global region,” Mr. Dsouza says.
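The kind of aggregation behind such a finding is simple to sketch; the responses below are invented and merely stand in for the millions of real answers the Moral Machine collected:

```python
# Hypothetical tally of dilemma responses by region; the records are invented
# and only illustrate how choices can be aggregated per region.
from collections import Counter, defaultdict

responses = [
    {"region": "Western Europe", "choice": "spare the larger group"},
    {"region": "Western Europe", "choice": "spare the larger group"},
    {"region": "Eastern Europe", "choice": "spare the passengers"},
    {"region": "China",          "choice": "spare the passengers"},
    {"region": "China",          "choice": "spare the passengers"},
]

tallies = defaultdict(Counter)
for response in responses:
    tallies[response["region"]][response["choice"]] += 1

for region, counts in tallies.items():
    top_choice, votes = counts.most_common(1)[0]
    print(f"{region}: {top_choice} ({votes}/{sum(counts.values())})")
```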

A year ago, a Tesla on autopilot slammed into a semi at an intersection. The car hadn’t recognized the white side of the truck against the bright sky and didn’t brake, killing the driver. The accident was considered proof of defective technology, but it also shows how the US company is driving forward development of the self-driving car in typical Silicon Valley style: testing in practice and updating as it goes along – certainly not by way of drawn-out ethical debates.

Germany is the only country to have set up an ethics commission, which is no surprise. Idealistic motives guided by conscience and duty play a greater role in the German philosophical tradition than in that of Anglo-Saxon countries, where a moral standard in which the end justifies the means is more popular: an action is judged by its benefit to the greater number.

Ethics committee member Mr. Hilgendorf is firmly against weighing one life against another. “One individual must not be obliged to offer his life for another,” he says. Even if it means preventing the death of children on the street, a self-driving car cannot be allowed to swerve onto the sidewalk and kill a pedestrian there.

Machines won’t just become increasingly autonomous in traffic. There are other scenarios where life and death could be in the balance. Two years ago, a robot arm at the Volkswagen works in Hesse killed an assembly worker; the machine did not know that it was supposed to protect humans. A year ago, a security robot patrolling a mall in America injured a small child. Drones that recognize obstacles and people by camera will soon have to decide for themselves, in the event of a technical fault, where to make an emergency landing without injuring anyone.

“Apple and Google don’t want to make cars,” Mr. Lüdemann says; they want to build operating systems for the multimedia-networked, mobile person of the future. They want to insert themselves between producers and customers and rake in the lion’s share of value creation as the providers of an attractive mobility concept.

Carsharing, Mr. Lüdemann says, has been impractical so far because the complex system isn’t controlled by an omniscient algorithm. But what if you no longer needed to walk in the rain because the next available car simply drives up? What if it took children to their sports games without the parents having to play taxi driver? “All that would be so attractive to the individual that they would probably no longer worry about ethical issues,” predicts Mr. Lüdemann. “When the first iCar, with all those apps, goes on sale, people will want to have it.”

 

This article was originally published in Handelsblatt’s sister publication WirtschaftsWoche. To contact the authors: varinia.bernau@wiwo.de, stefan.hajek@wiwo.de and andreas.menn@wiwo.de.
