We Still Haven't Figured Out How Self-Driving Cars Should Make Life-Or-Death Decisions

Autonomous cars are coming. Slowly. The rosy predictions we heard back in 2016 proved to be overblown, and research and development have cost companies billions of dollars, but robotaxis are already on the road today. It hasn’t gone perfectly, but it’s going. So how are companies approaching the ethical issues around the danger robotaxis pose to the public? Recently, the Wall Street Journal took a look at several of the biggest questions programmers are grappling with.

For example, before self-driving cars can avoid pedestrians, they first have to detect them. Even with all the sensors that companies such as Waymo use in their robotaxis, it can still be hard to distinguish a person from a mannequin or a street-level ad, and the software is still worse at detecting people with dark skin than it should be. One possible solution is to have cars detect nearby cell phones, giving them a better idea of who’s in the area and where they’re headed.

That idea, of course, comes with all sorts of privacy concerns, but there’s another side to it, too. If self-driving cars depend on cell phones to detect people, they’re more likely to miss someone walking without one. Should you really be expected to carry a charged phone at all times to avoid being hit by a car? Should people who can’t afford a cell phone have to accept that the streets will be more dangerous for them simply because they’re poor?
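To make that failure mode concrete, here’s a minimal sketch of how a naive phone-assisted detector could end up biased against people without phones. This is purely hypothetical, not any company’s actual code; the names, scores, and threshold are all made up for illustration:

```python
# Hypothetical illustration only -- not any company's actual detection code.
# Shows how fusing phone signals with camera vision can bias detection
# against pedestrians who aren't carrying a phone.

from dataclasses import dataclass


@dataclass
class Detection:
    camera_confidence: float  # vision model's pedestrian score, 0.0-1.0
    phone_signal: bool        # did we sense a nearby phone (e.g., BLE/Wi-Fi)?


def pedestrian_score(d: Detection, phone_boost: float = 0.25) -> float:
    """Naive fusion: a detected phone boosts the pedestrian score."""
    score = d.camera_confidence
    if d.phone_signal:
        score = min(1.0, score + phone_boost)
    return score


THRESHOLD = 0.6  # assumed decision threshold for "treat as pedestrian"

# Same ambiguous camera reading, different phone status:
with_phone = Detection(camera_confidence=0.45, phone_signal=True)
without_phone = Detection(camera_confidence=0.45, phone_signal=False)

print(pedestrian_score(with_phone) >= THRESHOLD)     # True  -> braked for
print(pedestrian_score(without_phone) >= THRESHOLD)  # False -> missed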

And before you even get to a hypothetical trolley problem involving robotaxis, you first have to consider the far more common occurrence of animal encounters:

In a collision, moose and deer pose an existential risk to vehicles and their occupants. Smaller animals such as hedgehogs, or cats and dogs, present less of a risk. Is it morally acceptable for AI to weigh the lives of these animals differently?

For large animals, Waymo gives priority to “reducing injury-causing potential” for humans, through avoidance maneuvers, Margines says. When it comes to small animals, such as chipmunks and birds, Waymo’s AI “recognizes that braking or taking evasive action for some classes of foreign objects can be dangerous in and of itself,” he says.

How might the equation differ for midsize animals such as porcupines or foxes? Or for animals that might be pets?

Most people would probably agree that it’s better to hit a squirrel than crash into a tree, but what about a dog or a cat? What about animals that are large enough to damage the car? One bioethicist is concerned that programmers will be “speciesist” and disregard the safety of smaller animals in favor of owner comfort and convenience.
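For a rough picture of the policy Waymo describes, here’s a sketch of a size-weighted avoidance rule: large animals trigger evasion for the occupants’ sake, small animals usually don’t justify a risky maneuver, and midsize animals are the gray zone the article asks about. Everything here, from the size classes to the thresholds, is a hypothetical illustration, not Waymo’s actual logic:

```python
# Purely illustrative sketch of the tradeoff described above -- not Waymo's
# actual code. Assumes three rough size classes and a maneuver-risk estimate.

from enum import Enum


class SizeClass(Enum):
    SMALL = "small"    # e.g., squirrels, birds
    MEDIUM = "medium"  # e.g., foxes, cats, dogs
    LARGE = "large"    # e.g., deer, moose


def plan_response(size: SizeClass, maneuver_risk: float) -> str:
    """Pick a response given the animal's size class and an estimate of how
    risky braking/swerving would be for humans (0.0 = safe, 1.0 = very risky)."""
    if size is SizeClass.LARGE:
        # Large animals threaten the occupants, so avoidance gets priority.
        return "evade"
    if size is SizeClass.SMALL:
        # A hard maneuver for a small animal can endanger humans more than
        # the collision would -- only evade when it's essentially free.
        return "evade" if maneuver_risk < 0.1 else "continue"
    # MEDIUM is exactly the open question: where should the line fall?
    return "brake" if maneuver_risk < 0.5 else "continue"


print(plan_response(SizeClass.LARGE, maneuver_risk=0.7))   # evade
print(plan_response(SizeClass.SMALL, maneuver_risk=0.3))   # continue
print(plan_response(SizeClass.MEDIUM, maneuver_risk=0.2))  # brake
```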

While we don’t have all the answers just yet, the entire article is a great read. It’s also far too long to fully summarize here, so head on over to the Wall Street Journal to check it out.