Making Moral Robots

One of Google’s famously ambitious projects is the self-driving car. Less famous, though probably not for long, is the ethical problem of programming cars to drive themselves. When circumstances make an accident inevitable, how do you program the car to choose between outcomes? Should it protect the driver at all costs, or should it protect the greatest possible number of human lives?

Robots of all kinds, not just cars, face this problem. While we haven’t yet reached the I, Robot or Matrix stage of robot morality, we are fast approaching the point where robots will have to choose between morally competing outcomes.
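To make the dilemma concrete, here is a minimal sketch of what two competing decision rules might look like in code. Everything in it is hypothetical: the Outcome record, the protect_driver and maximize_lives policies, and the toy scenario are illustrations of the trade-off, not the API of any real autonomous-driving system.

```python
# Hypothetical illustration of the dilemma: two candidate policies for an
# unavoidable-collision scenario. The outcome model is deliberately
# oversimplified; no real system reduces the choice to two numbers.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    driver_survives: bool
    total_lives_saved: int


def protect_driver(outcomes: list[Outcome]) -> Outcome:
    """Policy 1: prefer any outcome in which the driver survives."""
    survivable = [o for o in outcomes if o.driver_survives]
    # Fall back to all outcomes only if none spares the driver.
    candidates = survivable or outcomes
    return max(candidates, key=lambda o: o.total_lives_saved)


def maximize_lives(outcomes: list[Outcome]) -> Outcome:
    """Policy 2: prefer the outcome that saves the most lives overall,
    even if the driver is among the casualties."""
    return max(outcomes, key=lambda o: o.total_lives_saved)


scenario = [
    Outcome("swerve into barrier", driver_survives=False, total_lives_saved=4),
    Outcome("brake in lane", driver_survives=True, total_lives_saved=2),
]

print(protect_driver(scenario).description)   # brake in lane
print(maximize_lives(scenario).description)   # swerve into barrier
```

The two policies disagree on the same scenario, which is the whole problem: both rules are easy to program, and neither is obviously the right one to ship.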

A fair amount of digital ink has been spilled on the topic. Here are a few interesting articles that can get you started:
