If data is human - who sets the moral code, and other ethical issues


The BBC has a long-running programme called The Moral Maze, hosted by Michael Buerk. This episode was a good one, exploring the ethical issues of social media: http://www.bbc.co.uk/iplayer/episode/b01ntgw5/Moral_Maze_The_Moral_Code_of_Social_Media/

However, as we begin to employ more computer-controlled cars, robots, and machines that need to operate autonomously in our chaotic, real-time environment, situations will inevitably arise in which the software has to choose between a set of tragic, unpleasant, bad, even horrible alternatives.

Example 1. You’re driving along in a car fitted with an insurance protection system, which can see that an uninsured, poor driver is about to run a red light in front of you and cause a crash. The automatic system takes over and brings you to a sudden halt. The driver behind you now takes evasive action, swerves to avoid you, and hits the same uninsured driver, killing that person instead of you.

Example 2. Your self-driving car is crossing a bridge as a group of schoolchildren walks along the pedestrian path. Suddenly there’s a tussle and one of the kids is pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: either it swerves off the bridge, possibly killing you, or it runs over the child.

How do you program a computer to choose the lesser of two evils? Should the moral code of the one person who writes the algorithm make your decision for you? What are the criteria, and how do you weigh them?
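To make the point concrete, here is a deliberately over-simplified sketch in Python. Everything in it is an assumption for illustration: the outcome descriptions, the fatality estimates, and above all the weights. The point is not that a real vehicle decides this way, but that whoever picks those weights has, in effect, written the moral code.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupant_fatalities: float   # expected deaths inside the vehicle (illustrative numbers)
    bystander_fatalities: float  # expected deaths outside the vehicle (illustrative numbers)

# Hypothetical weights: these two numbers ARE the programmer's moral judgement.
WEIGHT_OCCUPANT = 1.0
WEIGHT_BYSTANDER = 1.0

def harm_score(o: Outcome) -> float:
    """Lower is 'better' under this particular weighting."""
    return (WEIGHT_OCCUPANT * o.occupant_fatalities
            + WEIGHT_BYSTANDER * o.bystander_fatalities)

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the lowest weighted harm."""
    return min(outcomes, key=harm_score)

# The bridge scenario from Example 2, reduced to two stark alternatives.
options = [
    Outcome("swerve off the bridge", occupant_fatalities=0.9, bystander_fatalities=0.0),
    Outcome("stay on course", occupant_fatalities=0.0, bystander_fatalities=1.0),
]
print(choose(options).description)
```

Nudge WEIGHT_OCCUPANT above WEIGHT_BYSTANDER and the same code reaches the opposite decision; the ethics live entirely in those constants, chosen long before the crash by someone who will never see it.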

We aren’t very good at codifying responses to moral dilemmas ourselves, particularly when the factors that affect a situation can’t be predicted ahead of its occurrence. Should we expect coders and programmers to take on that burden?