Cruise Knew Its Robotaxis Struggled To Detect Children But Still Kept Them On The Road

Photo: Cruise

Things aren’t going well for General Motors’ autonomous Cruise division right now. After a human driver struck a pedestrian and knocked her into the path of a Cruise robotaxi, the robotaxi ran her over and pinned her to the ground, and it later came out that the vehicle had actually dragged her about 20 feet before stopping. Understandably, California then suspended Cruise’s permits to operate driverless vehicles. As it turns out, though, Cruise had way more problems than that one incident. Allegedly, it also knew its robotaxis were a danger to children but kept them on the road anyway, the Intercept reports.

Internal materials reviewed by the Intercept show that Cruise knew its autonomous vehicles struggled to detect children and wouldn’t drive more cautiously when they were nearby. One safety assessment stated flatly, “Cruise AVs may not exercise additional care around children.” The company also acknowledged it needed “the ability to distinguish children from adults so we can display additional caution around children.”

Cruise was also reportedly worried that it didn’t have enough data on how children behave for its vehicles to operate safely around them. It did, however, know that in one test, a car detected a toddler-sized dummy and still hit it while traveling nearly 30 mph. From the Intercept:

The internal materials attribute the robot cars’ inability to reliably recognize children under certain conditions to inadequate software and testing. “We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly. The materials say Cruise, in an attempt to compensate for machine learning shortcomings, was relying on human workers behind the scenes to manually identify children encountered by AVs where its software couldn’t do so automatically.

In a statement, Cruise told the Intercept that its software “hadn’t failed to detect children but merely failed to classify them as children” — a distinction that matters because Cruise treats children as a special category more likely to behave unpredictably. According to Cruise, “Before we deployed any driverless vehicles on the road, we conducted rigorous testing in a simulated and closed-course environment against available industry benchmarks. These tests showed our vehicles exceed the human benchmark with regard to the critical collision avoidance scenarios involving children.”

Children weren’t Cruise’s only detection problem, either. Apparently, its robotaxis also struggled to detect holes in the ground, and there was a good chance one would drive right into a pit it encountered. For those details, head over to the Intercept and read the whole story there.