The subject of self-driving cars opens a can of worms. If a self-driving car crashes, this will raise the question of who is liable along the chain from designer to point of sale (and across the wider ecosystem), and it will no doubt lead to complex debate and disclaimers.

There is also a tricky moral dimension to grapple with. Should a car sacrifice its driver to save a pedestrian? Should a greater number of occupants be protected over a smaller number of people crossing the road? What if children are involved? Where will wildlife feature in the hierarchy of protections? Who can really decide?

Essentially, all of the above will come down to how an algorithm (which will be responsible for making split-second decisions based on data) is rigged, and there may need to be conformity on this point across different self-driving cars, enforced through regulation. This is where morality and science (and law) would come together.
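To make that concrete, here is a deliberately crude sketch (in Python, with every name, weight and scenario invented purely for illustration; no real vehicle works this way) of what "rigging" such an algorithm might amount to: the hierarchy of protections becomes a table of numbers that someone has to choose, and that regulators might one day standardise across manufacturers.

```python
# Purely hypothetical sketch: a Bentham-style "least total harm" rule.
# All names, weights and scenarios below are invented for illustration.
from dataclasses import dataclass

# Hypothetical harm weights: this table IS the moral hierarchy, and it is
# exactly what regulation might need to standardise across manufacturers.
HARM_WEIGHTS = {
    "child": 1.5,   # assumption: children weighted above adults
    "adult": 1.0,
    "animal": 0.1,  # assumption: wildlife ranks low in the hierarchy
}

@dataclass
class Outcome:
    """One possible manoeuvre and the parties it would put at risk."""
    label: str
    at_risk: list  # e.g. ["adult", "child"]

def expected_harm(outcome: Outcome) -> float:
    """Sum the weighted harm of everyone endangered by this manoeuvre."""
    return sum(HARM_WEIGHTS[party] for party in outcome.at_risk)

def choose(outcomes: list) -> Outcome:
    """Crude utilitarian rule: pick the manoeuvre with the least total harm."""
    return min(outcomes, key=expected_harm)

if __name__ == "__main__":
    swerve = Outcome("swerve into barrier", at_risk=["adult"])        # the driver
    brake = Outcome("brake in lane", at_risk=["adult", "child"])      # pedestrians
    print(choose([swerve, brake]).label)  # -> "swerve into barrier"
```

Note that nothing in the code is technically hard; the hard part is the weights, which is precisely why this looks like a question for regulators and ethicists rather than engineers alone.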

However, even supposing car buyers want a car programmed to act like Jeremy Bentham (of utilitarian fame), what if hackers enter the scene and begin to tamper with the virtuous path set for the car?

All of this feels difficult to digest; I'm afraid I am going to catch a taxi (with a human driver)...