Driverless cars may prevent accidents. But at what cost?

Posted by ap507 at Jan 26, 2017 09:35 AM
Dr Neil Walkinshaw from the Department of Informatics discusses the advent of driverless car technology

Think: Leicester does not necessarily reflect the views of the University of Leicester - it expresses the independent views and opinions of the academic who has authored the piece. If you do not agree with the opinions expressed, and you are a doctoral student/academic at the University of Leicester, you may write a counter opinion for Think: Leicester and send to ap507@le.ac.uk  

The advent of driverless car technology has been accompanied by an understandable degree of apprehension from some quarters. These cars are, after all, entirely controlled by software, much of which is difficult to validate and verify (especially given that this software tends to involve a lot of behaviour that is the result of Machine Learning). These concerns have been exacerbated by a range of well-publicised crashes of autonomous cars. Perhaps the most widely reported was the May 2016 crash of a Tesla Model S, which “auto piloted” into the side of a tractor trailer that was crossing a highway, killing the driver in the process.
 
As a counter-argument, proponents of driverless technology need only point to the data. The US Department of Transportation report on the aforementioned Tesla accident observed that the activation of Tesla’s autopilot software had resulted in a 40% decrease in crashes that resulted in airbag deployment. Tesla’s Elon Musk regularly tweets links to articles that reinforce this message, such as one stating that “Insurance premiums expected to decline by 80% due to driverless cars”.
 
Why on earth would we not embrace this technology? Surely it is a no-brainer?
 
The counter-argument is that driverless cars will probably themselves cause accidents (possibly very infrequently) that wouldn’t have occurred without driverless technology. I have tried to summarise this argument previously: the enormous complexity and heavy reliance upon Machine Learning could make these cars prone to unexpected behaviour (cf. articles on driverless cars running red lights and causing havoc near bicycle lanes in San Francisco).
 
If driverless cars can in and of themselves pose a risk to their passengers, pedestrians and cyclists (and this seems increasingly apparent), then an interesting dilemma emerges. On the one hand, driverless cars might lead to a net reduction in accidents. On the other hand, they might cause some accidents that wouldn’t have occurred under the control of a human. If both are true, then the argument for driverless cars is in essence a utilitarian one: they will benefit the majority, and the harm they cause to a minority is treated as an acceptable price.
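The arithmetic behind this utilitarian trade-off can be made concrete with a toy calculation. The figures below are entirely hypothetical, invented purely to illustrate the shape of the argument; they are not real accident statistics:

```python
# Hypothetical illustration of the utilitarian trade-off described above.
# All figures are invented for the sake of the arithmetic; they are not
# real accident statistics.

human_accidents = 1000        # accidents per year under human control (hypothetical)
prevented_by_autonomy = 400   # accidents autonomy avoids (a 40%-style reduction)
caused_by_autonomy = 50       # new accidents autonomy itself introduces (hypothetical)

# Accidents per year if the fleet switched to driverless technology
autonomous_accidents = human_accidents - prevented_by_autonomy + caused_by_autonomy

net_change = human_accidents - autonomous_accidents
print(autonomous_accidents)  # 650
print(net_change)            # 350 fewer accidents overall, despite 50 new ones
```

On these invented numbers the technology is a clear net win, yet 50 of the remaining accidents are ones that would not have happened at all without it; that residue is precisely what the utilitarian framing asks us to accept.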
 
At this point, we step from a technical discussion to a philosophical one. I don’t think that the advent of this new technology has really been adequately discussed at this level.

Should we accept a technology that, though it brings net benefits, can also cause accidents in its own right? This is anything but a no-brainer, in my opinion.
