Can Machines Have Morals?

No hands, no liability? Source: DPA

The day will come when a driver takes a nap behind the wheel as his self-driving car speeds down a steep incline. Suddenly, a woman on a bicycle falls over in front of the car, a little boy in the bike’s child seat. A few milliseconds are enough for the car’s software to determine that the braking distance is too short. What should the self-driving car do? If it continues straight ahead, it will run over the mother and child. If it veers to the left, the car and its driver will plunge down the hillside. Three human lives, two options.

An ethics committee appointed by the German Federal Ministry of Transport recently submitted 20 rules for self-driving cars. The committee dodged the question of whose death a car should accept in an unavoidable accident; it decided only that one human life may never be weighed against another. But is it possible to forbid an algorithm from drawing conclusions from the very data on which it runs?

Algorithms already navigate us through traffic, decide what we read and hear, determine who can be insured and at what cost, and recommend to police how long a suspect should be held in detention. The debate over decisions made by self-driving cars serves as a template for the services machines will provide in the future – robots working in factories alongside humans, computers in hospitals that recommend drug dosages. Can ethical conduct be programmed? Can machines be taught moral standards?
