
Driven To a Dilemma

May 02, 2017 | John Sarich

 The moral issue of how a driverless car will respond to split-second, life-and-death situations is something that requires much more thought.

In the daily drumbeat of news on driverless cars, one topic rarely comes up: the moral judgments these vehicles must make in the split seconds before an accident. Why should driverless cars make the insurance industry snap to attention? Along with the ethical issues surrounding this new technology, which we discuss herein, they represent a huge shift in liability. The question is, who is the insured: the manufacturer or the vehicle’s owner? Currently, it’s the owner. But in this new model, it’s likely that the manufacturer will be responsible, in which case we migrate from auto insurance to product liability insurance, with a possible shift of some $250 billion in premium revenue from auto to product lines.

That’s a huge topic that needs to be thought through. But I’d like to raise a seldom-discussed concern that all the euphoria around driverless cars has crowded out, and that’s morality. In the current, owner-insured model, the driver is the one responsible for how the vehicle behaves. Self-driving cars muddy the waters. Consider, for example, that self-driving cars have a difficult time identifying bicycles; their sensors don’t “see” thin frames very well. Who is responsible for hitting that biker?

Rarely have software developers been asked to code moral dilemmas that have a life-and-death dimension. However, if you insist on being in the artificial intelligence business as it pertains to driverless cars, then you must be willing to explain the life-and-death issues that surround the operation of a driverless vehicle.

In comments published by Car and Driver in October 2016, a Mercedes-Benz executive said all of Mercedes-Benz’s future Level 4 and Level 5 autonomous cars will prioritize saving the people they carry. “If you know you can save at least one person, at least save that one,” according to Christoph von Hugo, the automaker’s manager of driver assistance systems and active safety. “Save the one in the car,” von Hugo said in an interview at the Paris auto show. “If all you know for sure is that one death can be prevented, then that’s your first priority.” Save the driver, not the child in the street.
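To make the dilemma concrete, here is a minimal, purely hypothetical sketch of what encoding such an occupant-first rule might look like. The names and probabilities are invented for illustration; no real autonomous-driving system is anywhere near this simple, and nothing below is drawn from Mercedes-Benz’s actual software.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Maneuver:
    name: str
    occupant_survival: float   # estimated probability the occupants survive
    bystander_survival: float  # estimated probability bystanders survive

def choose_maneuver(options: List[Maneuver]) -> Maneuver:
    # Encodes a "save the one in the car" priority: maximize occupant
    # survival first, and only break ties on bystander survival.
    return max(options, key=lambda m: (m.occupant_survival, m.bystander_survival))

options = [
    Maneuver("brake in lane", occupant_survival=0.90, bystander_survival=0.40),
    Maneuver("swerve toward barrier", occupant_survival=0.55, bystander_survival=0.95),
]
print(choose_maneuver(options).name)  # -> "brake in lane"
```

Even this toy version makes the point: the moral judgment is baked into a single line of ranking logic, written long before the accident it will someday decide.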

We’re not necessarily in new territory here: Ethics, philosophy and psychology have all weighed in on the question of artificial intelligence and driverless cars. Consider the “trolley problem.” The modern form of the trolley problem was articulated in 1967 by British philosopher Philippa Foot using this example: Imagine a runaway streetcar is racing toward five workers. Would you pull the switch to direct the trolley onto another track, where one man works alone? Or do you stand by and do nothing? Later, Judith Thomson of the Massachusetts Institute of Technology added a couple of variations on Foot’s scenario. In one of them, a streetcar is headed for five workers and you are on a footbridge over the streetcar’s tracks. A large man is walking next to you. If you pushed him over the bridge, his body would stop the streetcar from hitting the workers. Should you do it? What the trolley examples are attempting to tease out is whether there can be a best possible action or outcome: one person dying instead of five.

When an Air Force pilot goes through flight training, “what ifs” are a large part of that training. What happens if control surfaces are lost, if you run out of fuel, or if an engine flames out? Pilots are trained to understand that if they have some control and are heading for a crash in the desert, then they should bail out.

However, if they are in a populated area, the objective is to get the plane to crash where it will not kill or injure people. Ejecting isn’t the first option—in fact, it isn’t an option at all. The only option for the pilot is to fly it into the ground in a safe area. Why? Because we value innocent lives and will do our utmost to save lives at the expense of our own.

The range of human behavior, when confronted with do-or-die situations, is often superhuman as well as ingenious, creative and instantaneous. We’ve seen cases where people small in stature perform Herculean tasks that are beyond comprehension, such as lifting a car off a person or carrying someone out of a burning building.

The point is, while artificial intelligence might mimic human behavior, there is an unprogrammable gap between precisely how a human will react and how a machine will. Humans, drawing on thousands of years of accumulated experience, are superior to whatever someone recently wrote into a computer program. That gap between machine and human is real, and it is quite large.

As noted earlier, we are facing some very vexing, complex, and little-understood areas of human behavior.

It’s a good thing that we have not yet begun to put too many of these vehicles on the road. From a societal standpoint, we aren’t yet ready to go down this path without a lot more introspection, study and understanding, not to mention infrastructure.

The moral issue of how a driverless car will respond to split-second, life-and-death situations is something that requires much more thought, discussion and debate. There are many moral questions surrounding driverless cars, and voices from religious, political, legal and academic circles, as well as from ordinary people, need to be heard.

We need to understand that autonomous vehicles are not just about whiz-bang technology. Nor is the only issue the enormous liability that auto insurers haven’t begun to figure out how to deal with. There is much more at stake here.

 

Best’s Review: May 2017

Copyrighted A.M. Best Company, Inc. 2017

All Rights Reserved, Reprinted with Permission   

 

