As companies such as Google, Mercedes-Benz and Volvo race to develop vehicles that are smart enough to drive themselves, even the most sophisticated systems are vulnerable to human error.
In a recent incident, a Volvo employee attempting to demonstrate safety features instead ran into a group of people. The driver thought the car was equipped with an option that detects pedestrians and brakes automatically. It wasn't, and the way the driver gunned the XC60 sport utility vehicle might have overridden the function anyway.
The mishap wasn't isolated. One of Google's driverless cars was rear-ended last week, bringing the company's total to 13 accidents over six years of testing, nine of them rear-end collisions. While smart cars are intended to prevent accidents, the incidents illustrate the challenges posed by the interplay between people and computers.
As cars do more of the driving themselves, alternating control between the machine and a distractible human is “going to be a tough issue”, says Philippe Crist, an economist at the Organisation for Economic Co-operation and Development (OECD) who coordinated a May report on autonomous driving. That’s a big reason why many car makers probably won’t introduce completely automated vehicles any time soon. There is also a risk that such cars will result in new types of crashes, he says.
Even if robot cars are still a way off, manufacturers are adding more and more systems that can take the wheel in certain situations. The lure for car makers is clear. Demand for features that ease the more tedious aspects of driving, such as steering through stop-and-go traffic, could create a $42 billion market by 2025, Boston Consulting Group estimates.
The accidents involving Google’s autonomous cars seem to stem from the fact that they don’t bend traffic rules the way human drivers expect, according to Mr Crist.
“We were stationary” for most of the accidents, said Astro Teller, head of the Google research laboratory handling the driverless car effort, at a developers’ conference last month. “The car wasn’t driving. The human wasn’t driving either. We were just rear-ended” by another vehicle.
With 94 per cent of crashes linked to some form of driver error, giving computers more control might be a good idea.
“We strongly believe this technology will help reduce accidents,” says Eric Schuh, head of Swiss Re’s Casualty Center, which analyses risk for the Zurich-based company’s reinsurance business.
Volvo agrees, despite the embarrassing crash last month, which went viral on YouTube with more than 4 million views.
“There was nothing wrong with this car itself,” the Gothenburg, Sweden-based car maker said. “The unfortunate incident happened only due to human error.”
Indeed, Volvo is banking on automated-driving technology to eliminate deaths and serious accidents in its new cars in the coming years. What the crash did show – besides the obvious fact that it’s not a good idea to drive straight into people – is that it’s critical to understand a vehicle’s safety capabilities, which can differ vastly from model to model.
“Does that car just adjust the speed when you drive on the motorway, or does that car indeed also know when a human crosses the street in front of it and put on the brakes?” Mr Schuh asks.
The most effective technology doesn’t require any human interaction. Igor Kryuchkov, managing director of T3 Risk Management SA, a Geneva-based consultancy, knows this first hand.
He was driving his BMW coupe on a French motorway last month, returning from a trip with his wife and son, when a Volkswagen Golf swerved into his lane. The BMW, which was equipped with a collision-avoidance system, tightened the seat belts and braked before Mr Kryuchkov had time to react.
“You don’t really appreciate it until it becomes very useful,” Mr Kryuchkov says. “It was kind of like the car braced for impact,” which it fortunately avoided.