U.S. Congress committee warned that autonomous vehicles could ‘pose an avenue for terrorism’

By Canadian Underwriter | February 14, 2017 | Last updated on October 30, 2024
4 min read

The industry and regulators need to “think very broadly” around the cyber security of autonomous vehicles, and having autonomous cars operate in such a way that human drivers must be ready to take control can actually increase the risk of collision, experts warned a House of Representatives committee Tuesday.

“Transportation is one of the areas that receives a lot of attention from hacking because it is a way to disrupt our transportation system,” said Nidhi Kalra, co-director and senior information scientist with the Rand Corp.’s Center for Decision Making Under Uncertainty.

Kalra, who has a PhD in robotics, made her comments in Washington, D.C. in a hearing before the Subcommittee on Digital Commerce and Consumer Protection of the House of Representatives Committee on Energy and Commerce. The hearing was titled Self Driving Cars: The Road to Deployment.

Frank Pallone, a Democrat who represents the 6th District of New Jersey in the House of Representatives, asked witnesses how real the threat of hacking is.

“It is a very real threat,” Kalra replied. “It’s not only hacking for fun and profit but autonomous vehicles provide an avenue for terrorism as well … the threat is no longer just suicide bombers that blow themselves up but now we have vehicles that can drive around. I don’t want to overstate the risk at this time but we need to think very broadly around cyber security.”

Another witness appearing before the subcommittee was Mike Ableson, vice president of global strategy at General Motors.

“We need to design vehicles from the ground up with that threat in mind,” Ableson said of cyber security.

“Cyber security is not something that can be shrink wrapped on top of the vehicle because there are so many parts that contribute to the ultimate vehicle, but it has to be baked in from the ground up,” Kalra said.

The U.S. government uses SAE International definitions of automation, according to background from the U.S. Federal Autonomous Vehicles Policy. At SAE Level 0, the human does everything. At SAE Level 1, the automated system can sometimes assist the human on some part of the driving tasks. At SAE Level 2, the automated system can conduct some parts of the driving task but the human continues to monitor the driving environment and performs the rest of the driving task. At SAE Level 3, the automated system conducts some part of the driving and monitors the driving environment in some instances but the human must be ready to take back control.

“There is evidence to show that Level 3 may show an increase in traffic crashes and so it is defensible and plausible for auto makers to skip Level 3,” Kalra told the subcommittee Tuesday. “I don’t think there is enough evidence to suggest that it should be prohibited at this time but it does pose safety concerns that a lot of auto makers are recognizing and trying to avoid.”

At SAE Level 4, the automated system can conduct the driving tasks and the human does not need to take back control, but the automated system can only operate in certain environments and under certain conditions.

Volvo suggests it will accept liability, as a manufacturer, for collisions at SAE Level 4.

On Tuesday, Congressman Pallone asked Anders Karrberg, vice president of government affairs for Volvo Car Group, to explain its decision.

“Car makers should take liability for any system in the car,” Karrberg told the subcommittee. “We have declared that if there is a malfunction to the system when operating autonomously, we would take the product liability.”

Subcommittee chairman Bob Latta, a Republican representing the 5th District of Ohio in the House, asked Gill Pratt, executive technical advisor and CEO of Toyota Research Institute, when Toyota might be ready to deploy autonomous vehicles.

“We don’t have a specific date as to when we are going to remove the driver from the car, very much like GM, but rather we are going to test and to see when the system is safe enough to do so,” replied Pratt.

Toyota will have “a step by step process for removing the amount of supervision that’s necessary by the driver,” Pratt suggested. “Eventually the goal is that no supervision is necessary … but checking each stage to ensure it is safe enough.”

Currently, more than 90% of vehicle crashes are caused by human factors such as driving too quickly, impairment, distraction and fatigue, Kalra told the committee.

“Autonomous vehicles have the potential to significantly mitigate this public safety crisis by eliminating many of the mistakes that human drivers routinely make,” she said. “To begin with, autonomous vehicles cannot be drunk, distracted or tired. These factors are involved in 29%, 10% and 2.5% respectively, of all fatal crashes.”

But she added that autonomous vehicles are not likely to eliminate all crashes.

“For instance, inclement weather and complex driving environments pose challenges for autonomous vehicles as well as for human drivers, and autonomous vehicles might perform worse than human drivers in some cases, particularly at the early stages of testing and development,” Kalra said.

Canadian Underwriter