Social vs. Interpersonal Trust and AV Safety

Bruce Schneier has written a thought-provoking piece on the social-fabric versus individual-human-behavior aspects of trust. Just like “safety,” the word “trust” means different things in different contexts, and those differences matter in a very pressing way right now for the larger topic of AI.

While not specifically about self-driving cars, his theme is how companies exploit our tendency to make category errors between interpersonal trust and social trust. Interpersonal trust is, for example, trusting that the other car will try as hard as it can to avoid hitting me because the other driver is behaving competently, or perhaps because that driver has some personal connection to me as a member of my community. Social trust is, for example, trusting that the company that designed the car faces strict regulatory requirements and a duty of care for safety, both of which incentivize it to be completely sure about acceptable safety before it starts to scale up its fleet. Sadly, that social trust framework for computer drivers is weak to the point of being more apparition than reality. (For human drivers the social trust framework involves jail time and license points, neither of which currently apply to computer drivers.)

Exactly per Bruce’s article, the car companies have guided the public discourse to be about interpersonal trust. They want us to trust their computer drivers as if they were super-human people driving cars, when in fact those computer drivers are not people, do not have a moral code, and do not fear jail as a consequence of reckless behavior. (And as news stories constantly remind us, they have a long way to go on the super-human driving skills part too.)

The Cruise debacle highlights once again (see also Tesla and Uber ATG, not to mention conventional automotive scandals) that the real issue is the weak framework for creating social trust in the corporations that build the cars. That lack of a framework is a direct result of those corporations’ lobbying, messaging, regulatory capture efforts, and other actions.

Interpersonal trust doesn’t scale. Social trust is the tool our society uses to scale up goods, services, and benefits. Corporations have compelling localized incentives to game social trust for their own benefit, but when an entire industry succeeds spectacularly at doing so, it invites long-term harm to the industry itself as well as to everyone who never receives the promised benefits. We’re seeing that process play out now in the vehicle automation industry.

There is no perfect solution here; it is a balance. But right now, the trust situation is far out of balance for vehicle automation technology. Historically, it has taken a horrific, front-page mass-casualty event to restore that balance through safety regulation. Even then, real change tends to require that the victims include someone “important” or an especially vulnerable and protection-worthy group.

Industry can still change if it wants to. We’ll have to see how it plays out for this technology.

The piece you should read is here: https://www.belfercenter.org/publication/ai-and-trust