
How AI-Driven Vehicles Can Be Tricked Into Detecting Fictional Objects

According to the WHO, up to 1.35 million people around the globe lose their lives to road accidents each year. Research has also repeatedly found that a large share of these accidents result from human error. Together, these factors have driven the development of autonomous vehicles, which aim to remove tired, distracted, and dangerous drivers from the roads.

Unfortunately, driverless cars are not yet ready to replace human drivers. While several driverless cars are already on the road, much of their success comes from driving the same routes over and over. Moreover, recent experiments have shown that autonomous vehicles can be deceived with surprisingly little effort, with potentially dangerous consequences.

A whitepaper published by Tencent’s Keen Security Lab shows how researchers placed three small stickers on the road to spoof a Tesla Model S into switching lanes and driving directly into oncoming traffic.

But how did this happen? 

Let’s start by discussing the technology behind autonomous vehicles, then look at its shortcomings and some suggestions for overcoming them.

The Concept Behind Autonomous Vehicles

As you would agree, an autonomous vehicle must be able to sense what’s happening around it and make instantaneous decisions to avoid collisions and other road accidents. To enable this, most autonomous vehicles gather information about their surroundings from a combination of cameras, radar sensors, and LiDAR (light detection and ranging) sensors. Sophisticated software merges these data streams and sends instructions to actuators that control acceleration, braking, and steering. In addition, predictive modeling and object recognition algorithms enable the software to navigate around obstacles and follow traffic rules.
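To make the fusion step a little more concrete, here is a minimal sketch in Python of how readings from several sensors might be combined into a single obstacle estimate before a control decision is made. The class names, values, and thresholds are invented for illustration only; real systems rely on far richer data and filtering techniques such as Kalman filters.

# Minimal sensor-fusion sketch with hypothetical camera, radar, and LiDAR readings.
from dataclasses import dataclass

@dataclass
class Reading:
    source: str        # "camera", "radar", or "lidar"
    distance_m: float  # estimated distance to the nearest obstacle ahead
    confidence: float  # 0.0 to 1.0, how much this estimate is trusted

def fuse(readings: list[Reading]) -> float:
    """Confidence-weighted average of the distance estimates from all sensors."""
    total_weight = sum(r.confidence for r in readings)
    return sum(r.distance_m * r.confidence for r in readings) / total_weight

def control(distance_m: float, speed_mps: float, min_gap_s: float = 2.0) -> str:
    """Brake if the time gap to the obstacle drops below a safety threshold."""
    time_gap = distance_m / max(speed_mps, 0.1)
    return "brake" if time_gap < min_gap_s else "maintain"

if __name__ == "__main__":
    frame = [
        Reading("camera", distance_m=38.0, confidence=0.6),
        Reading("radar",  distance_m=41.5, confidence=0.9),
        Reading("lidar",  distance_m=40.8, confidence=0.95),
    ]
    fused = fuse(frame)
    print(f"fused distance: {fused:.1f} m -> action: {control(fused, speed_mps=25.0)}")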

While this system sounds great and does work, it is not exactly fool-proof or safe. For example, lane dividers are often not visible when roads are covered in snow during winter. And would the LiDAR sensors of multiple autonomous vehicles travelling on such roads interfere with each other? Even setting weather aside, there are several examples, like the one quoted above, in which autonomous vehicles have been duped into misinterpreting traffic signs with something as simple as a small piece of tape, with possibly dangerous consequences.

How Are Autonomous Vehicles Spoofed?

McAfee researchers recently tricked a Tesla into speeding with a piece of tape. They placed a 2-inch strip of black electrical tape across the middle of the 3 on a 35 MPH speed limit sign. As a result of this small alteration, the system read the sign as 85 MPH instead of 35 MPH and accelerated accordingly.

Image Source: mcafee.com

Work by the RobustNet Research Group at the University of Michigan and its partners has also shown that a vehicle’s LiDAR-based perception system can be spoofed into “seeing” non-existent obstacles. If planned strategically, such attacks could cause severe road accidents and consequent damage to life and property.

The explanation for such divergent behavior lies in slight alterations to an image that, though invisible or innocuous to the human eye, lead to bizarre interpretations by machine learning algorithms. You can find several examples of such adversarial images on the internet. In the video below, an adversarial patch tricks an AI model into classifying a banana as a toaster, even though no toaster is present.

https://www.youtube.com/watch?v=i1sp4X57TL4&feature=youtu.be
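To illustrate the basic idea behind such perturbations, here is a minimal sketch of the well-known Fast Gradient Sign Method (FGSM) in PyTorch. This is not the specific attack used in the demonstrations above; the pretrained ResNet model and the image path are placeholders, and ImageNet normalization is omitted for brevity.

# FGSM sketch: nudge each pixel slightly in the direction that increases the
# classifier's loss, producing an image that looks unchanged to humans but
# can flip the model's prediction.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# A standard ImageNet classifier stands in for a perception model.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Placeholder image path.
image = preprocess(Image.open("stop_sign.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Use the model's own top prediction as the label to attack.
logits = model(image)
label = logits.argmax(dim=1)

loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.03  # small per-pixel budget, barely perceptible to the human eye
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("original prediction:   ", label.item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())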

Of course, adversarial attacks like these are difficult to replicate in real life, as an attacker rarely has digital access to the inputs of the neural network being targeted. Moreover, the neural network in an autonomous vehicle analyzes images of a sign from various angles and distances, which significantly reduces the scope for error. Yet one cannot rule out attacks based on the physical alteration of road signs.

A paper published by researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California, Berkeley supports this observation. It points out that visual classification algorithms can be tricked by making slight alterations in the physical world (as in the tape example quoted above). The experiments described in the paper show that carefully placed graffiti, spray paint, or stickers on a stop sign almost always spoofed a deep neural network classifier into interpreting it as a speed limit sign, which is quite dangerous in real life.

The Road Ahead For Autonomous Vehicles

While autonomous vehicles are not yet ready to replace humans, it is worth remembering that traffic signs were designed to aid human drivers, not software. Meanwhile, autonomous vehicle technology is being developed to take inputs from multiple sources, including crowdsourced mapping, to vet the data received from camera sensors for better and safer decision-making.
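As a rough illustration of that kind of vetting, the hypothetical check below compares a camera-read speed limit against the limit recorded in map data and falls back to the safer value when the two disagree. The function name, values, and tolerance are assumptions for illustration only.

# Plausibility check: accept the camera-read limit only if it roughly agrees
# with the mapped limit for the current road segment; otherwise prefer the
# safer (lower) value. Thresholds here are illustrative.
def vet_speed_limit(camera_mph: int, map_mph: int, tolerance_mph: int = 10) -> int:
    if abs(camera_mph - map_mph) <= tolerance_mph:
        return camera_mph            # the two sources agree closely enough
    return min(camera_mph, map_mph)  # they disagree: choose the safer limit

# The spoofed "85" from the tape example would be rejected against a mapped 35.
print(vet_speed_limit(camera_mph=85, map_mph=35))  # -> 35
print(vet_speed_limit(camera_mph=40, map_mph=35))  # -> 40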

On a lighter note, it is clear that teaching a computer to drive is far more complicated than teaching a human, and more research is needed to make autonomous driving systems safer and closer to fool-proof.

About the author

Laduram Vishnoi

Laduram Vishnoi is CEO and Founder at Acquire. He loves to share his research and development on artificial intelligence, machine learning, neural networks, and deep learning.