- Google’s self-driving cars have been in 11 accidents because humans are dumb (PDF)
- Humans at fault in self-driving car crashes – LA Times (PDF)
- Self-driving car accidents: How big a problem is this? (PDF)
First, the bad news: Self-driving cars have been getting in accidents. An exclusive Associated Press report Monday revealed that four self-driving cars have been involved in fender benders on California’s streets just since September. That’s out of a total of fewer than 50 legally permitted self-driving cars on the state’s public roadways.
Three of the cars were Google’s; the fourth was a self-driving Audi owned by Delphi Automotive. Two of the four were under human control when they wrecked; the other two were in autonomous mode. All of the accidents were minor, with no injuries reported. The AP obtained the data from an unnamed source who wasn’t allowed to talk about accident reports publicly.
On its face, that sounds pretty bad for self-driving cars. Google told the AP its autonomous vehicles have covered a total of roughly 140,000 miles in the state since September. As the AP points out, three wrecks per 140,000 miles is actually significantly worse than the national average of 0.3 “property-damage-only” accidents reported per 100,000 miles driven. Hey, wasn’t the whole point of these things to make us safer? And what happened to all those triumphant media reports about how far Google’s self-driving car had gone without a single accident in autonomous mode?
Well, here’s the good news. Shifting its PR team into overdrive, Google responded to the AP’s report within hours in the form of a detailed Medium post by the self-driving car project’s director, Chris Urmson. And he said not one of the three accidents was caused by the self-driving car in question.
In fact, Urmson writes, Google’s self-driving cars have logged some 1.7 million miles over the six-year life of the project, and they’ve been involved in a grand total of 11 accidents. Not one has been serious. More importantly, if you believe Google, not a single one of those accidents was caused by the self-driving car. Eleven minor accidents in 1.7 million miles is a much less alarming ratio. And, Google noted, comparing it to the national average may be misleading, since a great many minor accidents go unreported.
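To make the two framings concrete, here is a quick back-of-the-envelope calculation using only the figures quoted above (the mileage numbers are the approximate ones Google and the AP reported, normalized to the per-100,000-mile basis of the national average):

```python
# Back-of-the-envelope accident-rate comparison, using the figures quoted above.
MILES_BASIS = 100_000  # the national average is quoted per 100,000 miles driven

# The AP's framing: 3 Google wrecks over roughly 140,000 miles since September.
ap_window_rate = 3 / 140_000 * MILES_BASIS

# Google's framing: 11 minor accidents over roughly 1.7 million miles, six years.
project_rate = 11 / 1_700_000 * MILES_BASIS

NATIONAL_AVERAGE = 0.3  # reported property-damage-only accidents per 100,000 miles

print(f"AP's window:     {ap_window_rate:.2f} accidents per 100,000 miles")
print(f"Project lifetime: {project_rate:.2f} accidents per 100,000 miles")
print(f"National average: {NATIONAL_AVERAGE:.2f} reported per 100,000 miles")
```

The AP's window works out to roughly 2.1 accidents per 100,000 miles, well above the 0.3 national figure, while the project-lifetime rate is about 0.65, much closer to it; and as the article notes, the national figure counts only *reported* accidents, so the true baseline is higher.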
Oh, and zero accidents caused in 1.7 million miles is obviously a strong record by any standard.
So what’s the takeaway here? Are Google’s robot Lexuses supersafe or superscary? The answer, actually, is the same as it’s been all along: As far as we know, they’re really quite safe—but to render a verdict would still be premature.
For one thing, we assume Google is telling the truth and not fudging its figures. But it would be nice, if we’re going to trust these things with our lives in the foreseeable future, to be able to verify such assertions with something other than the occasional anonymously sourced investigative AP story.
Meanwhile, there’s another confounding factor in these numbers that’s often overlooked. When a Google employee behind the wheel of a self-driving car sees a risky situation developing, her instructions are to take the wheel herself. That means Google’s cars don’t even have a chance to cause an accident unless the person behind the wheel fails at her job first. No wonder crashes in autonomous mode are rare!
Now, whenever this happens, Google takes the car back to its shop and simulates what would have happened if the driver hadn’t taken over. In every case, Google says, the simulations show that the car would have automatically avoided causing the accident.
That’s reassuring if true, but there also seems to be some circularity at work here. Remember, these are Google’s own computer simulations we’re talking about—presumably using the same assumptions that are built into the car itself. How can we be sure the simulations aren’t mistaken?
To recap, here’s what we can say with confidence: Self-driving cars are not running amok and causing accidents at an alarming rate. If anything, they appear to be excellent drivers based on the limited evidence we have so far. By the same token, they will not save you from all the other idiots who populate our nation’s fair roadways.
What we cannot say with confidence yet is whether self-driving cars will in practice substantially reduce the rate of traffic accidents in a world where they share the road—and sometimes, the wheel—with human drivers. Which is a very good argument for exactly the kind of road testing that California and several other states are now permitting on their public roads. It’s also a good argument for a little more transparency as to how those self-driving cars are performing.
I chose the least sympathetic of the three articles to make a point. We need not be shy about the early testing of these autonomous cars. There will be a learning curve as testing goes forward.
If you were ever involved in the early days of the wireless communications that became today's cellphone, you would know that things did not always go well when these devices moved between cell towers.
Likewise, keep in mind that every airliner these days has an autopilot mode. These planes often land themselves because the automation can do it better than the pilots.
What we should be worried about, however, is what it will take to keep bicyclists from continually killing pedestrians in the crosswalks of America.