Cracking the Code: Driverless Cars vs. Hackers
Feb. 3, 2016 – Electrical and computer engineering researchers at USU are finding ways to defeat the safety and security measures in place to keep driverless cars rolling safely down the road.
And while it’s a complex exercise, the group says it has yet to find a single layer of security it couldn’t defeat. It’s all part of a four-year study led by Assistant Professor Ryan Gerdes, who was recently awarded a major grant from the National Science Foundation to study the security of autonomous vehicles and to incorporate what he calls ‘adversarial thinking’ into the design process.
“We’re working to ensure the safety, reliability and integrity of America’s transportation system as we come to rely more and more on automation technology,” said Gerdes.
He and his colleagues are discovering how resilient automation systems are against hackers and malicious attacks. Such a threat could stem from a lone assailant, bent on causing a single accident, or from corporate sabotage – an automaker’s attempt, for example, to undermine the competition. There are a lot of questions to be asked before the first consumer-friendly driverless passenger cars hit the roadways.
“Security in this realm really just hasn’t been touched,” added Gerdes. “Vehicle communication can be jammed, sensors can be jammed, and attackers could try to do just about anything to cause the system to be unsafe.”
To test these concepts, Gerdes and his team of undergraduate and graduate students use four-wheeled robots capable of autonomous travel. The fleet of tiny vehicles is tested at USU’s Electric Vehicle and Roadway, or EVR, complex – a brand-new research facility and quarter-mile-long test track in North Logan.
“We use the robots to look at the security of control algorithms,” said Gerdes. “What actually determines how a vehicle reacts to what’s happening in front of or behind it? The control algorithm tells the vehicle what the car in front of it is doing. If the lead car accelerates, it tells your car to accelerate at a specific speed.”
The control algorithms are designed to achieve stability in a system made up of multiple vehicles. When everything goes to plan, the platoon speeds along seamlessly. But what happens when an attacker manipulates a piece of information? Even worse: what if adversarial thinking is never considered during the design phase of the algorithms?
“What happens is the system fails to meet its objective and you get catastrophic collisions,” added Gerdes. “Even much worse than we would see with human drivers.”
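The failure mode Gerdes describes can be illustrated with a small simulation. The sketch below uses a generic constant-time-headway following policy with made-up gains and limits – a common textbook controller, not the USU team’s actual algorithm – and shows how feeding the follower a falsified lead-vehicle speed collapses the following gap toward a collision.

```python
# Illustrative two-car platoon with a constant-time-headway controller.
# All gains, limits and headway values are hypothetical, chosen only to
# demonstrate the effect of a spoofed lead-speed message.

def follower_accel(gap, v_follow, v_lead, h=1.0, s0=5.0, kp=0.5, kv=0.8):
    """Commanded acceleration: close the spacing error and match the lead speed."""
    desired_gap = s0 + h * v_follow          # desired gap grows with speed
    return kp * (gap - desired_gap) + kv * (v_lead - v_follow)

def simulate(spoof_lead_speed=None, steps=600, dt=0.05):
    """Simulate 30 s; optionally report a falsified lead speed to the follower."""
    x_lead, v_lead = 50.0, 20.0              # lead car cruises at 20 m/s
    x_fol, v_fol = 0.0, 20.0
    for _ in range(steps):
        gap = x_lead - x_fol
        reported_v = spoof_lead_speed if spoof_lead_speed is not None else v_lead
        a = follower_accel(gap, v_fol, reported_v)
        a = max(-8.0, min(3.0, a))           # physical acceleration limits
        x_lead += v_lead * dt
        x_fol += v_fol * dt
        v_fol = max(0.0, v_fol + a * dt)
    return x_lead - x_fol                    # final gap in meters

honest_gap = simulate()                        # settles near s0 + h*v = 25 m
spoofed_gap = simulate(spoof_lead_speed=35.0)  # attacker inflates lead speed
```

With honest data the follower settles at a safe 25 m gap; with the spoofed 35 m/s report it tailgates at roughly a one-meter gap, so any real disturbance would produce exactly the collision Gerdes warns about.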
The research focuses on how safety and security algorithms function while vehicles are in platoons – groups of vehicles traveling together. Platooning increases energy efficiency, has been shown to significantly reduce traffic congestion and is one of the key aspects of driverless travel over long distances.
“A lone autonomous vehicle doesn’t increase road capacity,” said Gerdes. “In fact we actually see a decrease in roadway capacity if everyone is traveling around in, for example, a Google Car. A single self-driving car requires more headway time than even a human driver. If autonomous vehicles make up a sizable percentage of vehicles on the highway, you have to platoon, otherwise you lose capacity.”
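Gerdes’s capacity point follows from simple flow arithmetic: one vehicle passes a point per headway interval, so a lane carries roughly 3600 seconds divided by the headway in vehicles per hour. The headway figures below are illustrative assumptions, not measurements from the study.

```python
# Back-of-envelope lane capacity: flow ~= 3600 s / headway (vehicles per hour).
# Headway values are illustrative assumptions, not measured data.

def lane_capacity(headway_s):
    return 3600.0 / headway_s

human = lane_capacity(1.5)       # human driver, ~1.5 s gap -> 2400 veh/h
lone_av = lane_capacity(2.0)     # cautious lone self-driving car, ~2 s -> 1800 veh/h
platoon = lane_capacity(0.6)     # coordinated platoon, ~0.6 s -> 6000 veh/h
```

Under these assumed headways, a lone cautious autonomous car carries less traffic than a human driver, while tight platooning more than doubles lane capacity – which is why platooning becomes necessary once autonomous vehicles are a sizable share of highway traffic.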
The work highlights a series of important issues that engineers and computer security experts will have to resolve in the near future. The workload doesn’t intimidate Gerdes, however. He says he’s excited to be at the leading edge of such important research.
“We will have automated vehicles in the future,” he said. “A key factor to people embracing them and feeling confident in their operation is that we design security into their systems from the very beginning.”