Wrong question: Who lives and dies in a self-driving car accident?

Original article: https://www.trustedreviews.com/news/self-driving-cars-life-death-3640133

These are important discussions, and as humanity we are not having enough of them. But rather than click-bait headlines chasing advertising revenue, what are the first-principles questions?

This specific moral dilemma makes a number of assumptions:

  • the machine can differentiate between the potential victims
  • the machine can make the choice (algorithm / software / data)
  • it is physically possible to take one action over another (motion and time)
  • where does experience / learning sit in the feedback loop?
  • who said we had the choice in the first place?

Given that, for the victim, a road accident is currently random (other than premeditated and malicious acts), who gave anyone the right to pick or select me? If I am selected, that creates a new liability for someone. The existing system is based on risk and acts of free will; allowing machines to decide, where a human has predetermined the outcome through programming and selection, removes that free will. However, if the system is AI-based and the machine decides on its own, is that not closer to the existing system, where luck is part of the unknown equation?
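The contrast above can be made concrete with a toy sketch. This is purely illustrative, not a real autonomous-driving policy: the function names, the party labels, and the priority rule are all invented for this example. The point is that a human-authored rule produces a traceable decision (and therefore a liable decision-maker), while a random draw has no selector to hold accountable.

```python
import random

# Hypothetical illustration of the argument above; every name and
# rule here is invented for the sketch.

def programmed_choice(parties):
    # A human-authored priority rule: someone wrote this ordering,
    # so someone is accountable for who gets selected.
    priority = {"pedestrian": 0, "cyclist": 1, "passenger": 2}
    return min(parties, key=lambda p: priority[p])

def random_outcome(parties, rng=None):
    # No selector: the outcome mirrors today's "luck", where no one
    # chose the victim in advance.
    rng = rng or random.Random()
    return rng.choice(parties)

parties = ["passenger", "pedestrian", "cyclist"]
print(programmed_choice(parties))  # always "pedestrian": a traceable, attributable decision
print(random_outcome(parties))     # varies with the draw: no one picked in advance
```

With the programmed rule, the same inputs always produce the same victim, which is exactly the "new liability" the paragraph above describes; the random version reproduces the unknown-luck character of the existing system.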

One suspects we need a wider and more rigorous debate.