I will not bore you with a long description of Foot’s work; let’s just pretend you are the driver of a runaway trolley hurtling down the tracks toward five workmen who cannot get out of the way fast enough. To avoid killing them you can pull a switch that diverts the trolley onto a side track, but diverting would kill one worker standing there. What would you do? Would you take the utilitarian route and do the thing that kills just one man, or try to brake and let God decide who lives?
There are many variations of this experiment, each probing how changing the variables changes our moral arithmetic: would you still kill the one worker if he were your son? What if you had to choose between killing ten people and killing yourself by steering the trolley into a wall? Is one person’s life worth more than another’s?
Sometimes in real life things happen so quickly, and we are so distracted by our surroundings, that we really don’t have time to make any conscious decision. But it seems the trolley problem is finally about to enter our future in a very tangible way, through the software that runs self-driving cars.
An autonomous vehicle continuously evaluates the scenario: road and weather conditions, nearby vehicles and their type, mass, speed, and heading… and from all that it defines a short-term strategy to keep its passengers safe while still working toward the final destination. What if an unavoidable accident is the only outcome of every short-term strategy? Should the software take a utilitarian approach and choose the supposedly less lethal course of action?
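To make the dilemma concrete, here is a deliberately naive sketch of what a purely utilitarian decision rule would look like. All names and numbers are invented for illustration; no real autonomous-driving stack reduces to anything this simple:

```python
# Toy illustration of a utilitarian planner: score each feasible
# maneuver by expected casualties and pick the least lethal one,
# regardless of who the victims are.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # output of a hypothetical risk model
    passenger_risk: float       # probability the passengers die

def utilitarian_choice(options):
    # Pure utilitarian rule: minimize total expected harm, making
    # no distinction between passengers and bystanders.
    return min(options, key=lambda m: m.expected_casualties)

options = [
    Maneuver("stay on course", expected_casualties=5.0, passenger_risk=0.0),
    Maneuver("swerve into wall", expected_casualties=1.0, passenger_risk=0.9),
]

print(utilitarian_choice(options).name)  # → swerve into wall
```

Note that this rule happily sacrifices the car’s own passengers whenever that minimizes the body count, which is exactly the property the buyer-perspective question below turns on.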
The paper linked below offers an interesting angle: it looks at the self-driving car from a future buyer’s perspective and uncovers unexpected psychological dynamics in how we will evaluate and choose cars and insurance contracts, and in how we weigh the importance of our own life against others’.
Are we really going to buy a car that will kill us if necessary?