I just answered the Moral Machine self-driving-car scenarios, and I did so honestly, unlike, it seems, most respondents. This is one of the artifacts of surveys like this. The issue comes down to the reference point you apply to your answer. When asked moral dilemmas, most people try to answer what they think others want to hear; they take the "what if my answer were published in the New York Times?" approach, typically choosing whatever is politically popular today. That is why in this survey the average response to the "protect passengers" questions sits exactly in the middle: indifferent. Yet is everyone really indifferent?
To combat this bias I took the purposeful approach of answering the questions not as if it were some theoretical car with theoretical people. I decided to answer as if it were my car with my family in it, and I didn't know the animals or pedestrians. That is the 99.9% real-life scenario. When I buy a self-driving car, I want it to expose settings I can configure on these kinds of things, and I for one will set it to always protect me and my family. When researchers ask the question this way (your family in the car), they in fact find a significant preference for protecting the passengers.
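To make the idea concrete, here is a minimal sketch of what such an owner-configurable setting might look like. Everything here is hypothetical: no real vehicle exposes an API like this, and the names (`CollisionEthicsConfig`, `occupant_priority`) are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class CollisionEthicsConfig:
    """Hypothetical owner-facing setting for unavoidable-collision behavior.

    occupant_priority ranges from 0.0 (always favor those outside the car)
    to 1.0 (always favor the occupants).
    """
    occupant_priority: float = 1.0

    def protects_occupants(self) -> bool:
        # Simple threshold: at 0.5 or above, the car favors its occupants.
        return self.occupant_priority >= 0.5

# An owner who shares the preference described above would set:
my_config = CollisionEthicsConfig(occupant_priority=1.0)
print(my_config.protects_occupants())  # True
```

Whether regulators would ever allow such a dial is a separate question; the sketch only illustrates that "protect my family" is a configuration choice, not a fixed moral constant.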
When reading these kinds of survey results, always ask yourself whether the designers framed the reference point correctly, and whether you would answer differently if you were in the car with your children.