It seems almost inevitable that self-driving cars will soon fill our roadways. Equipped with 360-degree awareness that far surpasses human eyesight – plus split-second reaction times – these vehicles will be able to “see” potential hazards and, in most cases, react before an accident occurs. By some estimates, self-driving vehicles could reduce road fatalities by 90 percent.
Addressing the “what-ifs”
But automakers still have some kinks to work through – including difficult programming decisions with far-reaching ethical ramifications. These issues center on the one-in-a-million scenarios in which the vehicle can’t possibly protect everyone from harm. What parameters should guide its actions in these rare (yet potentially catastrophic) cases? Put simply, how will it decide whom to save and whom to sacrifice?
Many agree that, in theory, self-driving cars should abide by utilitarian principles. They should be programmed to take whatever course of action will save the most lives. Applying that theory in real life, however, gets more complicated. What if the life of a child is at stake? Should that have more weight? And, given that most Americans have misgivings about self-driving cars to begin with, will they really entrust their lives to a vehicle that would potentially “choose” to let them die so that occupants of another vehicle could live?
Looking to the law for answers… in vain
Proposed federal legislation on self-driving cars addresses design and safety issues but doesn’t explore these ethical quagmires. The law can’t anticipate every possible nightmare that could occur, although it can articulate principles that automakers must abide by. In all likelihood, these issues will remain largely unaddressed – for now.
One thing is for sure: In this ever-evolving intersection between law, morality and technology, there are no easy answers.