First fatality for self-drive cars as Tesla driver killed in crash

US authorities have launched an investigation after the driver of a Tesla Model S was killed in what is believed to be the first fatality involving a self-driving vehicle

The US National Highway Traffic Safety Administration (NHTSA) has opened an investigation after the driver of a Tesla Model S was killed when his car crashed in autopilot mode, in what appears to be the first fatality involving an autonomous or self-driving vehicle.

The fatal accident occurred in May 2016 on a stretch of divided highway, or dual carriageway, near the town of Williston in Florida, when the Model S, driven by Joshua Brown, struck a truck crossing the road. The car’s roof was ripped off and Brown was killed; the truck driver, Frank Baressi, was not injured.

In a statement, Tesla said that it appeared neither autopilot nor the driver had noticed the white side of the truck against a brightly lit sky, and so neither applied the brakes.

The high ride height of the truck’s trailer, combined with its position across the road perpendicular to the Model S, allowed the car to pass underneath it, shearing off the roof.

According to the company, if the car had hit the trailer at virtually any other angle, its crash safety system would probably have prevented serious injury.

“The customer who died in this crash had a loving family and we are beyond saddened by their loss. He was a friend to Tesla and the broader electric vehicle community, a person who spent his life focused on innovation and the promise of technology, and who believed strongly in Tesla’s mission. We would like to extend our deepest sympathies to his family and friends,” said Tesla.


Brown owned a wireless networking firm called Nexu. He was a long-time Tesla enthusiast and known in the self-driving vehicle community for YouTube postings of dashcam videos filmed with autopilot enabled. One of his videos showed his vehicle taking action to avoid a collision.

Autopilot mode was launched in autumn 2015 in a software update. The feature does not enable fully autonomous driving, but is designed for use at motorway speeds, assisting with manoeuvres such as changing lanes and adjusting speed using a combination of cameras, radar, sensors and map data. The system is available in the UK.

Driver responsibility

At the launch, Tesla chief executive Elon Musk said autopilot mode was designed to improve driver confidence, although he stressed this did not mean users could “abdicate responsibility” and that drivers would still be liable for accidents.

Tesla said the feature was disabled by default and required explicit acknowledgement that it was new technology in beta phase before it could be enabled. Autopilot also requires drivers to keep their hands on the steering wheel at all times; if it cannot detect that they are doing so, it alerts the driver and slows the car until manual control is restored.
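
Tesla has not published the internals of this behaviour, but the escalation it describes can be pictured as a simple monitoring loop. The sketch below is purely illustrative: the car interface, method names and timing thresholds are hypothetical assumptions made for explanation, not Tesla’s implementation.

```python
# Purely illustrative sketch of the escalation described above: alert the
# driver when hands are not detected, then slow the car until manual control
# is restored. All names and thresholds here are hypothetical, not Tesla's.
import time

ALERT_AFTER_S = 5.0   # hypothetical: warn after 5s without hands detected
SLOW_AFTER_S = 10.0   # hypothetical: begin slowing after 10s

def monitor_hands(car, poll_interval=0.1):
    last_hands_on = time.monotonic()
    while car.autopilot_engaged():
        if car.hands_on_wheel():
            last_hands_on = time.monotonic()
            car.clear_alert()
        else:
            elapsed = time.monotonic() - last_hands_on
            if elapsed > ALERT_AFTER_S:
                car.alert("Hold the steering wheel")  # visual/audible warning
            if elapsed > SLOW_AFTER_S:
                car.reduce_speed()  # keep slowing until hands return
        time.sleep(poll_interval)
```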

First fatality

Tesla said it was the first known fatality in over 130 million miles of autopilot-enabled driving. It pointed out that globally there was a fatality for every 60 million miles driven.
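
Taking the two figures Tesla cited at face value, a back-of-envelope comparison of the implied per-mile fatality rates (simply the reciprocals of the mileage figures above, not additional data) looks like this:

\[
\text{autopilot: } \frac{1}{1.3\times10^{8}\ \text{miles}} \approx 7.7\times10^{-9}\ \text{per mile},
\qquad
\text{global average: } \frac{1}{6.0\times10^{7}\ \text{miles}} \approx 1.7\times10^{-8}\ \text{per mile}
\]

On those numbers, the autopilot rate is roughly half the worldwide average, though a single fatality is far too small a sample to support firm statistical conclusions.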

“As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.

“Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving,” said the company.

Questions raised

There have been numerous accidents involving self-driving vehicles, including Teslas. The majority have been minor shunts that tend to involve errors made by human drivers in their vicinity, although one incident involved two autonomous cars.

Nevertheless, as autonomous vehicles clock up more miles every day in testing, it was only a matter of time before the first fatality happened. Now that it has, questions over the safety of the technology in the real world will inevitably be raised.

The US authorities have already indicated a willingness to waive some road safety laws to enable more real-world testing of autonomous vehicles. That may now be called into question.

UK position

Meanwhile, the UK government laid down a non-statutory code of practice for autonomous car testing in 2015, and created a joint unit at the Department for Transport and the Department for Business, Innovation and Skills to coordinate policy.

The code of practice sets out rules around insurance for testing, training of drivers taking part in trials, and the involvement of local authorities and emergency services. It also contains guidance on security best practice, both to protect data generated during the trials, and to prevent hackers from taking control of an autonomous vehicle.

A number of tests are taking place in the UK, including in Greenwich in London, and in the Midlands, where Jaguar Land Rover has set up a ‘living lab’.

Another test in London, run by Sweden’s Volvo, is to begin in 2017. Highways England is also participating in a number of experiments on public roads, including wireless transmission of information on road conditions to adapted vehicles, enabling them to route drivers around congestion.


Join the conversation


My concern isn't even about the death of the driver, who practically signed up to be a beta tester, but about other drivers, who didn't consent to becoming guinea pigs on the roads they share with experimental cars. For that matter, shouldn't cities be concerned about their infrastructure being used for field tests?