Mar 18, 2015

Can We Trust Robot Cars to Make Hard Choices?

Posted in categories: driverless cars, ethics

By SingularityHub

Image: http://cdn.singularityhub.com/wp-content/uploads/2015/03/robots-making-hard-choices-11-1000x400.jpg

The ethics of robot cars has been a hot topic recently. In particular, if a robot car encounters a situation where it is forced to hit one person or another—which should it choose and how does it make that choice? It’s a modern version of the trolley problem, which many have studied in introductory philosophy classes.

Imagine a robot car is driving along when two people run out onto the road, and the car cannot avoid hitting one or the other. Assume neither person can get away, and the car cannot detect them in advance. Various thinkers have suggested how to make an ethical decision about who the car should hit:

  • The robot car could run code to make a random decision.
  • The robot car could hand off control to a human passenger.
  • The robot car could make a decision based on values pre-programmed by the car’s designers, or on values programmed by the owner.

The last of these deserves a little more detail. What would these values be like?
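To make the idea concrete, here is a minimal sketch of what a value-based choice could look like in software. Everything in it is an assumption for illustration: the label names, the harm weights, and the function itself are hypothetical, not drawn from any real autonomous-vehicle system. It also folds in the first strategy from the list, falling back to a random choice when the programmed values give no guidance.

import random

# Hypothetical harm weights a designer or owner might pre-program.
# The labels and numbers are illustrative only.
VALUE_WEIGHTS = {
    "pedestrian_child": 10.0,
    "pedestrian_adult": 8.0,
    "cyclist": 7.0,
}

def choose_unavoidable_target(options, weights=VALUE_WEIGHTS):
    """Pick the option with the lowest pre-programmed harm score.

    `options` is a list of labels describing who or what would be hit.
    Falls back to a random choice when no weights apply, mirroring the
    'random decision' strategy above.
    """
    known = [(weights[opt], opt) for opt in options if opt in weights]
    if not known:
        return random.choice(options)   # no values apply: decide randomly
    return min(known)[1]                # minimize the programmed harm score

print(choose_unavoidable_target(["pedestrian_adult", "cyclist"]))

Even this toy version makes the ethical question visible: someone has to decide what the numbers are, and whether the designer or the owner gets to set them.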

Read more
