However many millions of miles self-driving cars, or connected and autonomous vehicles (CAVs), cover in real-world testing, it will never be enough to fully ease public concerns that the technology presents a threat to road safety and public health, according to French technology giant Thales.
The firm, which specialises in aerospace, defence, transport and security technology, says the burgeoning CAV industry must urgently adopt a more rigorous, repeatable testing process if the government’s target of allowing commercially available CAVs onto the UK’s roads by 2021 is to be met.
A recent study of 2,000 UK adults, commissioned by Thales and conducted by OnePoll, reveals the scale of mistrust in CAV technology, with respondents distinctly uneasy at the prospect of driverless cars taking to the roads in the next few years.
“The question ‘how do you feel about the prospect of self-driving cars on the UK’s roads in the next three years?’ brought these answers: ridiculous, stupid, daft, dread, panic, petrified, terrified, horrified, concerned and worried,” says Thales UK CEO Victor Chavez. “These are not the most positive of terms.”
Just 16% of those surveyed said they would feel safe riding in a CAV, with the most pressing concerns being pedestrian safety (56%), passenger safety (51%), a rise in accidents (49%), connectivity and backhaul network failure (35%) and cyber attacks on personal data (29%). One-fifth of respondents said CAVs scared them, 23% said they were apprehensive, and just 12% – mostly in younger age groups – were excited or optimistic.
Safe testing = confident humans
Armed with these statistics, it should be reasonably clear to any informed observer that there needs to be a titanic industry effort to sell CAVs to the wary British public, and rigorous, repeatable and, above all, safe testing will be a crucial part of this, according to Timothy Coley, product specialist at XPI Simulation and a Thales alumnus.
“It is not yet clear what the process is going to be for the safety approval of these vehicles, but it is likely to involve some element of controlled test environments and some element of public testing. There is also a key use case here for the application of simulation in order to accelerate that testing,” says Coley.
“If I can create a virtual environment that has all the richness of the real world and the different moving entities in it, the different road layouts, the different materials, the weather conditions that an autonomous vehicle will be expected to encounter, then I can start to create different scenarios for that autonomous vehicle in order to find out how it’s going to behave in those conditions.”
Coley says there should be a “good level” of validity between what happens in the virtual and real worlds, which means he can then start to build confidence that CAVs can behave safely.
“Earlier on in the design cycle, we can do virtual testing at a much greater scale than it would be possible or practical to do in the real world, and of course because it’s in a virtual environment, it’s completely safe as well,” he says. “You can run the tests as many times as necessary in order to gather the evidence you require.”
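Coley’s point about repeatable, large-scale virtual testing can be sketched as a loop over seeded simulation runs: the same scenario is replayed as many times as needed, deterministically, to gather statistical evidence. This is a toy illustration only; `run_scenario`, its parameters and its failure model are invented for the example and do not reflect XPI Simulation’s actual tooling.

```python
import random

# Hypothetical stand-in for one simulated test drive. The pass/fail
# logic is a toy model: harder conditions raise the failure probability.
def run_scenario(seed, rain=False, night=False):
    """Run one simulated drive; return True if the vehicle behaves safely."""
    rng = random.Random(seed)  # seeded, so every run is repeatable
    failure_rate = 0.01 + (0.05 if rain else 0.0) + (0.03 if night else 0.0)
    return rng.random() > failure_rate

def gather_evidence(trials=10_000, **conditions):
    """Repeat the same scenario many times and report the observed pass rate."""
    passes = sum(run_scenario(seed, **conditions) for seed in range(trials))
    return passes / trials

print(f"clear day:     {gather_evidence():.4f}")
print(f"rain at night: {gather_evidence(rain=True, night=True):.4f}")
```

Because each trial is keyed to a fixed seed, the whole evidence-gathering run can be reproduced exactly, which is the property Coley highlights: run the tests as many times as necessary, safely, and get the same answer back.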
Latent Logic, a spin-out from the computer science department at the University of Oxford, is already using deep learning techniques to develop autonomous systems that imitate complex human behaviours and teach CAVs road safety.
Alongside 11 consortium partners and backed by a grant of £2.7m, Latent Logic is leading OmniCAV, one of six projects awarded funds in a CAV R&D contest backed by the Centre for Connected and Autonomous Vehicles and Innovate UK in the summer of 2018.
“We are doing tests in the real world and also loads of tests in simulation to amplify and to grow the range of different scary scenarios you can put your self-driving car up against to make sure that the self-driving car we’re testing is actually able to pass those different tests,” says Kirsty Lloyd-Jukes, CEO of Latent Logic.
The immediate application of this to CAVs is to create virtual reality environments for the cars to explore. It is like taking the world’s most difficult, dangerous and downright scary driving test imaginable, says Lloyd-Jukes, with a driving instructor who has a grudge against you.
“If you do this testing in simulation, you can recreate these scary situations before you see them on the road,” says Lloyd-Jukes. “That is totally critical because you don’t want the first time your self-driving cars encounter this to be when they’re actually driving for real.”
A big part of this is addressing problems around how humans and cars “talk” to each other today. This means building in variables such as hand gestures and other cues that pedestrians and drivers have learned to pick up on almost innately over the past 50 to 60 years of mass-car ownership in Britain.
“Once you start being aware of this, it’s really worth noticing while you’re driving or walking all the different things you do, slightly subconsciously, but actually you’re giving indicators all the time,” says Lloyd-Jukes.
“What we do is try to imitate that in simulation using lots of machine learning and artificial intelligence [AI], which enables you to learn what humans do in real-life situations and play them back in the simulator. If you put those things together you can create this really realistic environment, which means you can take away a lot of the fear and worry.”
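The “learn what humans do in real life and play it back in the simulator” idea can be reduced to a minimal sketch. Here the learned behaviour is hard-coded as a short trajectory of (x, y) positions, whereas Latent Logic’s real system learns behaviour models from footage using machine learning; all names here are hypothetical.

```python
# A "recorded" pedestrian path, standing in for behaviour extracted from
# real-world footage (illustrative values, not real data).
recorded_pedestrian = [(0.0, 0.0), (0.5, 0.1), (1.1, 0.3), (1.6, 0.2)]

class SimPedestrian:
    """Replays a captured real-world trajectory inside the simulation."""
    def __init__(self, trajectory):
        self.trajectory = trajectory
        self.step = 0

    def tick(self):
        # Advance one simulation frame; hold the final position once
        # the recorded trajectory is exhausted.
        pos = self.trajectory[min(self.step, len(self.trajectory) - 1)]
        self.step += 1
        return pos

ped = SimPedestrian(recorded_pedestrian)
positions = [ped.tick() for _ in range(6)]  # simulator advances 6 frames
```

Populating a virtual road with many such agents, each replaying (or generalising from) observed human behaviour, is what gives the simulated environment the realism Lloyd-Jukes describes.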
Lloyd-Jukes believes this will ultimately give CAVs a big advantage over a human driver because for humans, the learning process doesn’t end when the instructor says “I’m pleased to tell you you’ve passed”. People learn far more, continuously, by picking up subtle cues about road conditions over the course of many years’ driving – part of the reason why young drivers cost so much to insure.
The CAV, on the other hand, can be immediately programmed using millions of miles of driving experience that a human driver would never achieve on their own if they lived to be 150.
Addressing edge cases
Many of the challenges around how machine learning and AI will teach CAVs road safety arise in so-called edge case scenarios, where rarely occurring, interdependent and ultra-specific sets of circumstances that are hard to predict come into play. These will also need to be unpacked, says Coley.
“When we talk about edge cases, we’re thinking about combinations of different things that create failures,” he says. “It might be that your sensor works fine in the rain, but on this stretch of road with these lighting conditions, with this type of pedestrian, that combination of factors means that things might go wrong.
“We’re looking in simulation to try to identify combinatorial effects that lead to failures with autonomous vehicles.
“In terms of how you can look at the applications of AI and machine learning within that, there is AI in the vehicle itself, so the vehicle is in learning mode while it is being exposed to these conditions, and you can teach it from the presentation of data how to behave and what to expect,” says Coley.
“You can also have algorithms that are incentivised to come up with increasingly difficult tests and to really get good coverage of the different conditions in order to expose those failures.”
Mind that cow! Why CAVs need diverse design
As we have seen, the OmniCAV consortium is soliciting input and data from all over the world to help the automotive industry devise safe CAVs that can be driven confidently in any scenario.
Latent Logic CEO Kirsty Lloyd-Jukes has already been speaking to a number of potential partners in India, which has by some margin one of the world’s largest automotive sectors, and, with car ownership per capita still very low, plenty of room to grow.
But Lloyd-Jukes says that when she talks to Indian investors, they often ask whether the CAV system simulates cows to teach CAVs to avoid them.
“No,” she says. “But we should, because if you want to drive around in India, you have to work out how to avoid hitting cows.”
Cows are venerated and respected in Hindu culture, and in most of India it is illegal to possess or consume their meat. As a result, it is not uncommon when driving in India to see loose cows wandering as they please, even using the road system as pedestrians do. Although hitting a human pedestrian will usually have dire consequences only for the pedestrian, hitting a bovine one is a far more serious matter for the driver.
“It’s probably more important not to hit a Brahmin cow than it is to hit a tuk-tuk [a rickshaw built on a motorcycle or scooter platform], because a tuk-tuk can get out the way,” says Lloyd-Jukes.
The need to design cows into the system for Indian users is a perfect example of why it is crucial for developers to get better at designing for diverse cultures and needs, rather than simply designing systems that benefit white people living in western countries.
Lloyd-Jukes adds: “AI and machine learning can be really helpful to enable us to understand what real is, and by that I mean collecting data on how people behave in real-world situations.
“We do that using computer vision. That enables you to process hours and hours of footage. That gives us really good data, which you can analyse automatically using computer vision, to actually work out how humans really behave in different road situations.”
What this also means is that we can start to understand how people drive differently around the world, which will be critical if CAVs are ever going to scale up. Most trials that are happening in the real world today are very local and confined to specific areas – usually in Silicon Valley.
But no one will make any money from CAVs unless they can scale beyond the relatively predictable suburban roads of California. This means the industry must understand how people drive differently around the world.
“Getting round Rome is completely different from trying to go around London or Palo Alto,” says Lloyd-Jukes. “So that ability to scale the industry can be helped by simulators recreating driving conditions around the world.”
Learning from others
Thales’ study also revealed that 65% of respondents were perfectly comfortable with flying. Because, in terms of miles travelled, aviation is by far the safest transport industry in the world, Thales is urging the CAV industry to investigate the potential of the simulation technology that the likes of Airbus and Boeing use to develop new planes, and that airlines use to exhaustively test and train their human pilots.
“By using synthetic environment technologies, currently used for full-flight simulators in aerospace and vehicle simulators, we are able to subject autonomous driving systems to huge numbers of scenarios to gain confidence in their safety,” says Alvin Wilby, Thales UK vice-president of research, innovation and technology.
“If successful, this work will lay the foundations for the development and certification of all types of unmanned vehicles,” he adds.
XPI’s Coley is fully on board with this. “The aviation industry has a much more robust methodology around safety in terms of sharing data around accidents, having independent investigations when things go wrong, and making sure the findings from those are widely shared and implemented across the industry,” he says. “I think that’s something the automotive industry is going to have to look to adopt as well, to build public trust and confidence, and also to raise the bar for safety.”
Read more about CAVs
- Cab operator Addison Lee has teamed up with CAV software specialist Oxbotica to start preparing the streets of London for self-driving cars.
- Self-driving cars may have to break the rules of the road to operate safely, and the Law Commissions of England, Wales and Scotland are embarking on a consultation to seek input on how to manage this.
- Living in Digital Times’ Robin Raskin took her first self-driving car ride at CES 2019. While uneventful, the sensor technology overload was impressive.
Coley identifies a further challenge here: the economic viability of CAVs. A plane is clearly far more expensive than a car, so the automotive industry will have to build in similar levels of redundancy and reliability on a platform that remains commercially viable.
This cost will drive commercial change in the ground transport sector, Coley reckons, giving rise to more asset (car) sharing schemes, with fewer cars running for more time to generate maximum value.
Ben Pritchard, research group lead at Thales Research, Innovation and Technology (RTI), says: “In the air, we have heard about this very systematic approach to sharing and learning and how that is done in an open way. There’s no complacency or embarrassment, it’s done for the benefit of the industry and it continually ratchets up safety for the many, not the few. That’s a really important point that needs to be transferred to autonomous vehicles.”
Pritchard says the automotive industry can also learn from metro systems such as London’s DLR or San Francisco’s BART, which have been driverless since the 1980s and 1970s, respectively.
“That doesn’t mean there isn’t a member of staff,” he says. “There’s always an attendant on the DLR, but they’re available so that passengers can see them, feel secure, get help from them, interact with them, which they can’t do if the staff member is locked in a cab.
“I think there’s some learning perhaps for professional drivers on what that role of the professional member of staff should be.”