The first death due to “self-driving” cars: who is at fault?
Tesla is now being investigated by the NHTSA following a May accident in which a driver was killed while his car was in Autopilot mode. There are a few articles out there, but this Washington Post piece has the best information by far; the accident site is the picture at top. In addition to the obvious human cost, this is the first time a human has been killed while a car was in autonomous or semi-autonomous self-driving mode. There will be a lot of debate about where the fault lies. I am not interested in the legal aspects, but there are a few parties who share some fault. Sorry for the cursing below, but a guy is dead and I am angry about that.
The Driver: Not a nice thing to say, but the Tesla website and owner’s manual and everywhere else tells you SPECIFICALLY to keep your eyes on the road, your hands on the wheel, and always be ready to take control. That said, attempts to make this all about the (dead) driver are NOT going to fly with popular opinion, politicians, the media, regulators, and so on.
Tesla: Stop calling it fucking Autopilot. It is a very sophisticated and capable advanced cruise control – and calling it Autopilot makes drivers think it is more capable than it is. Yes, Tesla warns people not to trust it too much, but if a pilot cruising along at 35,000 feet turns on the “autopilot” the plane NEVER smashes into another airplane. It will automatically avoid a crash, which did not happen in this case.
The Media: Stop fucking calling Teslas “self-driving cars.” They are nothing close, at least in 2016. Tesla’s warnings that the Autopilot feature is in beta, needs to be backed up by human drivers, and so on get turned into background noise by a million media mentions that call them self-driving. Nobody reads End User License Agreement disclaimers, but that usually doesn’t end up with people getting killed.
Tesla Again: Time to get a little technical. There are two broad approaches to making vehicles more autonomous that are related to how the car “sees” the road. One is to put a big, expensive, active sensor on top of the car a la Google. The Google car uses LIDAR, which is like laser radar, to scan the environment with great precision. It costs a lot of money, is pretty ugly, and doesn’t work in snow, but it does have certain advantages. One of them is that it would have detected a truck in the path of Joshua Brown’s car.
The other approach is to have a suite of cameras that look in all directions. This is cheaper, blends in better with the car, and works well under many circumstances. This is what Tesla uses, and it appears to be at least in part responsible for the fatal crash. To quote the Tesla blog post announcing the crash: “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” In other words, the lighting conditions were such that purely optical systems (whether a human eye or a semi-autonomous car with cameras) were not good enough. Elon Musk stated publicly in October 2015 that fully autonomous vehicles don’t need to use LIDAR, but would need “passive optical and then with maybe one forward RADAR… if you are driving fast into rain or snow or dust.” I think we can now add bright daylight and white trucks to “rain or snow or dust,” and remove the word “maybe.”

I will make a prediction here: purely optical solutions are not sufficient, and all autonomous (and maybe even semi-autonomous, see next point) vehicles MUST have at least one active sensing technology at wavelengths different from those the human eye uses. We will not settle for robot cars that are roughly as dangerous as human drivers: they need to be safer, or there’s not much point.

[Edited to add: I was unclear above. The Tesla does have a front-facing radar unit, but it only scans the road ahead up to about the level of the hood. The truck body was high enough that the radar didn’t ‘see’ it, was missed by the cameras, and was still low enough to cause a catastrophic and fatal crash.]
Semi-autonomous Vehicles: There is a fundamental problem here. Developing fully autonomous vehicles is going to take a while, and there are many benefits to getting there incrementally. Rolling out features like automatic emergency braking (which will be standard on most American cars for sale in 2022) will save thousands of lives and billions of dollars, get consumers to trust the technology, and allow it to reach economies of scale. But there is an uncanny valley in terms of driving.
The uncanny valley refers to the fact that Elmer Fudd is kind of adorkable, but the characters from Polar Express are nightmarish! As animation moves from the cartoonish to the almost-human, there is a perverse effect where “superior” animation actually looks worse.
In the same way, nobody became a worse driver because they had an automatic transmission. Even cruise control doesn’t seem to have increased accident rates. But as semi-autonomous technology gets better and better, there is a very real risk that human drivers will be lulled into inattentiveness by the improvements in autonomy.
I am not sure there is an easy fix for that last point, except getting active sensing into cars fast.
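To make the active-sensing argument concrete, here is a toy sketch (my own illustration, not any manufacturer’s actual logic) of why fusing sensors with different failure modes beats any single modality: braking triggers if any sensor operating within its reliable envelope reports an obstacle.

```python
# Toy sensor-fusion illustration. All sensor names and the "degraded" flag
# are my own simplification, not a real autonomous-driving API.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str      # e.g. "camera", "radar", "lidar"
    obstacle: bool   # did this sensor report an obstacle?
    degraded: bool   # is the sensor outside its reliable operating envelope?

def should_brake(detections):
    """Brake if any non-degraded sensor reports an obstacle."""
    return any(d.obstacle and not d.degraded for d in detections)

# The crash scenario as described above: the camera is washed out by the
# bright sky, so an optical-only system sees nothing.
camera_only = [Detection("camera", obstacle=False, degraded=True)]

# Add an active sensor at a non-visible wavelength that does see the trailer.
with_lidar = camera_only + [Detection("lidar", obstacle=True, degraded=False)]

print(should_brake(camera_only))  # False: the optical system alone misses it
print(should_brake(with_lidar))   # True: the active sensor catches it
```

The point of the sketch: each modality has blind conditions (bright sky for cameras, high-riding obstacles for a hood-level radar), and a second active sensor only helps if its blind conditions don’t overlap with the first sensor’s.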
Compared to What?: Tesla is spending a lot of time saying that this was the first fatality in over 130 million miles driven, and the average in the US is one fatality every 94 million miles driven. That is true, but beside the point in two ways.
First, I think the public and regulators are going to demand more of semi-autonomous cars. Making the same mistakes a human would have made won’t be good enough, and it is clear that the Tesla camera approach was not good enough in this instance.
Second, this was a car, an expensive car, on a divided highway. As part of that 94 million mile stat, there are many motorcycles (15% of fatalities), old and unsafe vehicles, and collisions in bad weather, at night, or on much more dangerous roads. Given the conditions, I think most of us would expect our robot cars to do better.
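The back-of-envelope math behind that comparison can be made explicit. The mileage figures below are the ones quoted above; the motorcycle adjustment is my own rough illustration, and it treats motorcycle mileage as negligible relative to total miles driven.

```python
# Back-of-envelope fatality-rate comparison using the figures in the post.
US_MILES_PER_FATALITY = 94e6      # overall US average
TESLA_MILES_PER_FATALITY = 130e6  # Tesla's quoted Autopilot figure
MOTORCYCLE_FATALITY_SHARE = 0.15  # motorcycles' share of fatalities

# Fatalities per 100 million miles, a common reporting unit.
us_rate = 100e6 / US_MILES_PER_FATALITY
tesla_rate = 100e6 / TESLA_MILES_PER_FATALITY

# Remove motorcycle fatalities from the baseline (rough: assumes motorcycle
# mileage is a negligible share of total miles driven).
car_only_miles_per_fatality = US_MILES_PER_FATALITY / (1 - MOTORCYCLE_FATALITY_SHARE)

print(f"US overall: {us_rate:.2f} fatalities per 100M miles")        # 1.06
print(f"Autopilot:  {tesla_rate:.2f} fatalities per 100M miles")     # 0.77
print(f"Cars-only baseline: one per {car_only_miles_per_fatality/1e6:.0f}M miles")
```

Even this crude adjustment moves the baseline from one fatality per 94 million miles to roughly one per 111 million, and stripping out old vehicles, bad weather, and dangerous roads would narrow the gap further, which is the point.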