Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane

Security researchers from Tencent have demonstrated a way to use physical attacks to spoof Tesla’s Autopilot

An integral part of the Autopilot system in Tesla’s cars is a deep neural network that identifies lane markings in camera images. Neural networks “see” things very differently than we do, and it’s not always obvious why, even to the people who create and train them. Usually, researchers train neural networks by showing them an enormous number of pictures of something (like a street) with features such as lane markings explicitly labeled, often by humans. The network gradually learns to identify lane markings based on similarities that it detects across the labeled dataset, but exactly what those similarities are can be very abstract.
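
To make that training process concrete, here is a minimal sketch of this kind of supervised learning, assuming PyTorch. The tiny convolutional network, the random stand-in images, and the sparse “lane” masks are all illustrative assumptions, not Tesla’s actual model or data.

```python
# Minimal supervised-learning sketch: a toy fully convolutional network learns
# per-pixel "lane marking" scores from (synthetic) labeled images.
import torch
import torch.nn as nn

# Toy stand-in for a lane-detection network: image in, logit per pixel out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    # Stand-in batch: random "street" images and human-labeled lane masks.
    images = torch.rand(8, 3, 64, 64)
    masks = (torch.rand(8, 1, 64, 64) > 0.95).float()  # sparse "lane" pixels

    logits = model(images)
    loss = loss_fn(logits, masks)  # penalize disagreement with the labels

    optimizer.zero_grad()
    loss.backward()   # gradients nudge the weights toward whatever patterns
    optimizer.step()  # statistically separate "lane" from "not lane"
```

The key point is that the network is never given a definition of a lane marking; it only gets examples, and it internalizes whatever statistical regularities distinguish the labeled pixels from everything else.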

Because of this disconnect between what lane markings actually are and what a neural network thinks they are, even highly accurate neural networks can be tricked by “adversarial” images, which are carefully constructed to exploit this kind of pattern recognition. Last week, researchers from Tencent’s Keen Security Lab showed [PDF] how to trick the lane-detection system in a Tesla Model S into both hiding lane markings that would be visible to a human and creating markings that a human would ignore, which (under some specific circumstances) can cause the Tesla’s Autopilot to swerve into the wrong lane without warning.
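
The digital version of this trick is easy to demonstrate. Below is a sketch of the classic fast gradient sign method (FGSM) applied to the toy model from the training sketch above. This is not Keen Lab’s actual method, which optimized the placement of physical stickers on the road surface, but it illustrates the same principle: a tiny, nearly imperceptible change to the input can erase lane markings from the network’s point of view.

```python
# Digital adversarial-example sketch (FGSM), reusing `model` from the
# training sketch above. Illustrative only, not Keen Lab's attack.
import torch

image = torch.rand(1, 3, 64, 64, requires_grad=True)

logits = model(image)
# Target: make every pixel look like "no lane marking" to the network.
target = torch.zeros_like(logits)
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
loss.backward()

# Step the pixels a tiny amount in the direction that minimizes the targeted
# loss, staying within a small perturbation budget and valid pixel range.
epsilon = 0.03
adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# To a human the two images look essentially identical; to the network the
# lane markings have faded away.
with torch.no_grad():
    print("mean lane probability before:", torch.sigmoid(model(image)).mean().item())
    print("mean lane probability after: ", torch.sigmoid(model(adversarial)).mean().item())
```

A physical attack like Keen Lab’s has to survive real camera angles and lighting rather than tweaking raw pixel values, but it exploits the same gap between human perception and what the network has actually learned.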
