Tesla’s Autopilot: a potentially dangerous distraction from true autonomous cars

As Tesla's Autopilot comes under scrutiny, are we finding ourselves in a dangerous middle ground of autonomous car tech?

The fatal Tesla crash I wrote about last week is still sparking debate. Nobody is seriously arguing that autonomous cars are not on the way – the question is the path we take there. And there’s a growing consensus that the Tesla Autosteer solution might be too little, too soon.

Developer and podcaster Marco Arment is a Tesla owner, and he has shared his own experience of using Autopilot:

Autosteer is a strange feeling in practice. It literally turns the steering wheel for you, but if you take your hands off for more than a few minutes, it slows down and yells at you until you put your hands back on the wheel. It’s an odd sensation: You’ve given up control of the vehicle, but you can’t stop mimicking control, and while your attention is barely needed, you can’t safely stop paying attention.

It is very much not an autonomous driving solution – it’s an assisted driving system, and therein lies the problem:

It’s automated enough that people will stop paying attention, but it’s not good enough that they can. You could say the same about cruise control, but cruise control feels like an acceptable balance to me, whereas Autosteer feels like it’s just over the line. History will probably prove me wrong on that, but it feels a bit wrong today.

Beta tech in cars can kill

Alan Patrick of analyst firm Broadsight also questions the readiness of the tech:

Firstly, it is clear now that the technology is not yet ready for wide-scale deployment – a camera probably should not be the prime mode of sensing (night and difficult lighting conditions, lens obstructed) and the current radar is clearly sub-optimal – there are a lot of potential obstacles below car roof height.

This is, of course, why most true autonomous car efforts use Lidar – a laser-based ranging sensor rather than a visual recognition system. But that tends to place an ugly “bump” on the car, something that Tesla owners would probably find aesthetically unacceptable right now.

It almost smacks of software industry culture – push a Beta product out there, let the customers find the bugs. But bugs in heavy, powerful, fast mechatronic devices can kill.

Writer Nick Carr goes further:

With the Tesla accident, the evidence suggests that the crash happened before the driver even realized that he was about to hit a truck. He seemed to be suffering from automation complacency up to the very moment of impact. He trusted the machine, and the machine failed him. Such complacency is a well-documented problem in human-factors research, and it’s what led Google to change the course of its self-driving car program a couple of years ago, shifting to a perhaps quixotic goal of total automation without any human involvement.

The blame can’t easily be laid at the feet of either the system or the driver. On the one hand, this is not what the system was designed for: letting your attention drop when using an assisted steering system can still lead to accidents.

Assist the driver – or replace the driver?

So, under that reasoning, the driver is at fault. The problem is that these systems, intentionally or not, lead to drivers’ attention wandering. They don’t have to be in complete control of the vehicle, so their minds lose focus. That’s natural – and perhaps unavoidable. As Carr notes above, research has shown that this tendency exists.

Increasingly, this looks like an area where the middle ground – a human driver with some computerised assistance – is the wrong direction. We either need completely autonomous cars – or human driver assist systems which step in when problems are detected, rather than asking the human to step in when something goes wrong. Even a few seconds’ delay in a distracted driver reacting is a long, long way on fast roads…
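
To put that last point in concrete terms, here’s a quick back-of-the-envelope sketch – the speeds and delay values are my own illustrative assumptions, not figures from the article – showing how far a car travels while a distracted driver is still reacting:

```python
# Rough illustration: distance covered during a reaction delay.
# Speeds and delays below are assumed, illustrative values only.

def distance_travelled(speed_kmh: float, delay_s: float) -> float:
    """Distance covered in metres at a constant speed during a delay."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * delay_s

for speed_kmh in (100, 120, 140):   # typical motorway speeds
    for delay_s in (1, 2, 3):       # plausible reaction delays in seconds
        d = distance_travelled(speed_kmh, delay_s)
        print(f"{speed_kmh} km/h, {delay_s} s delay: ~{d:.0f} m")
```

At 120 km/h, a three-second delay covers roughly 100 metres – most of a football pitch – before the driver even begins to respond.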