Who’s responsible when autonomous vehicles kill?

In spite of evidence to the contrary, we keep being told that an onslaught of autonomous cars will flood the market—and the highways—in the next decade. There certainly are a lot of promises, big numbers and millions of dollars being thrown around: GM has spent more than $1 billion to acquire self-driving startup Cruise ($581 million) and a 9% stake in Lyft ($500 million); Ford will invest $1 billion in Argo AI and promises a Level 4 (highly automated) vehicle by 2021.

Market research firms are producing global estimates for autonomous vehicles that range from 600,000 units in 2025 to 21 million units in 2035, and the latest figure from IHS Markit puts autonomous vehicle sales at 33 million annually in 2040. Yet, according to a number of reports in various news media, very few big cities (where autonomous vehicles make the most sense) are even ready for this projected onslaught.

Robert Huschka, in his online publication A3 Insider, asks the question, “When Artificial Intelligence Makes a Decision, Who is Responsible?” Huschka noted in his Nov. 9 editorial that “insurance giant Allianz cited emerging AI technologies as the seventh top risk to business—ahead of political turmoil or climate change,” and warned companies that they “face new liability scenarios and challenges as responsibility shifts from human to machine.”

“Assignment of blame will prove challenging, ‘increasing the pressure on manufacturers and software vendors and decreasing the strict liability of consumers,’” the Allianz report says.

About 15 years ago I spoke with an automotive industry attorney in Michigan about liability issues for molders and mold makers. She concluded at the end of our conversation that once self-driving vehicles became a reality (as they have today), the real problem for automotive OEMs and their suppliers would be liability. “People who get hurt by a self-driving vehicle can now sue the automotive OEM with deep pockets instead of the 16-year-old kid with no insurance,” she commented.

In light of all the considerations that must be explored in an autonomous vehicle world, there are a few questions the industry needs to take into account. What happens if an AI system breaks the law? “If AI breaks the law, who should pay the fine or go to jail?” Huschka asks. “Legal scholars are already furiously debating how laws should apply to AI crimes.”

For example, if a person driving a car hits and kills a pedestrian, the driver typically would be at fault, especially if the pedestrian was in a crosswalk. The person could be charged with vehicular homicide or manslaughter, put on trial and even serve jail time. If an autonomous (fully self-driving) vehicle hits and kills a pedestrian, who is to blame? Most likely the automaker and the AI manufacturer would be held responsible. Or would they?

This is a huge question, since a pedestrian was hit and killed by an autonomous vehicle being tested in Tempe, AZ. The question also arose when a Tesla Model S operating in Autopilot mode drove into the side of a tractor-trailer, the system mistaking the expansive white side of the trailer for empty sky.

In more recent news, a Florida man is suing Tesla for negligence, claiming its Autopilot feature failed to detect a car stalled alongside a highway, leading to a collision that left him with permanent injuries. The driver says he was told that the Autopilot feature would allow him to work during his two-hour daily commute. And a woman in Salt Lake City suffered a broken ankle when her Tesla, operating in Autopilot mode, failed to detect a stopped fire truck in front of her and collided with it.

A Tesla spokeswoman commented, “Tesla has always been clear that Autopilot doesn’t make the car impervious to all accidents, and Tesla goes to great lengths to provide clear instructions about what Autopilot is and is not.”

Can AI systems make ethical decisions? That is another question Huschka asks us to consider. “Obviously, computer systems don’t know good from evil,” he said in his editorial. “It’ll be up to humans to build this in. But who decides what those morals and ethics should be?”

I would ask whether an AI system can actually be programmed to see every possible scenario that a driver could encounter on city streets and highways and make the snap decisions that humans have to make every day.
