AI, autonomous cars and moral dilemmas

You’re a train conductor speeding along when you suddenly see five people tied up on the tracks in front of you. You don’t have enough time to stop, but you do have enough time to switch to an alternate track. That’s when you see there’s one person tied up on the alternate track. Do you pull the lever to make the switch, or stay the course?

Any college graduate who has ever set foot in an introductory philosophy course is likely to recognize this problem immediately. The question is a classic jumping-off point for discussions about utilitarianism, consequentialism and fairness. Subsequent twists on the question — what if the one person tied up on the other track were a child? — raise new moral dilemmas and further abstract discussion. There is no clear correct answer. In this ambiguity lies conversation.

The tech community as a whole now faces a similar conundrum when it comes to programming machines. This time, though, the philosophical decisions aren’t theoretical — and nobody will be saved by the bell. With the advent of smart machines whose learning capabilities are powered by artificial intelligence, we need to reach a consensus for a very practical purpose: We need to teach robots how to be moral.
