Much like how the Galaxy S8's iris scanner was hoodwinked by a photo, researchers have shown that self-driving cars can be deceived by a simple defacing trick that even a prankster could pull off.
A new study led by the University of Washington found that the kind of computer-vision system set to be used in autonomous cars can be fooled into reading a stop sign as a 45 mph speed-limit sign, or another sign entirely, with just a little graffiti. If attackers learn how the vehicle classifies incoming objects, they can generate a sticker that throws the car's sensors off. After the researchers added the words "LOVE" and "HATE" to a stop sign, the computer-vision algorithm no longer recognized it as a stop sign and instead classified it as a speed-limit notice.
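The core trick is that small, carefully chosen changes to an input can flip a classifier's decision. Here is a minimal sketch of that idea using a toy linear classifier in NumPy; the weights, inputs, and class labels are all invented for illustration, and real attacks like the one in the study target deep vision models rather than a linear model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sign classifier": linear scores over a flattened 8x8 grayscale patch.
# Class 0 = stop sign, class 1 = speed-limit sign (hypothetical labels).
W = rng.normal(size=(2, 64))

def classify(x):
    return int(np.argmax(W @ x))

# Find a random input the model currently labels "stop".
x = rng.normal(size=64)
while classify(x) != 0:
    x = rng.normal(size=64)

# For a linear model, the fastest way to raise the "speed limit" score
# relative to the "stop" score is to step along sign(W[1] - W[0]) --
# the same intuition behind gradient-sign attacks on neural networks.
direction = np.sign(W[1] - W[0])
margin = (W[0] - W[1]) @ x                        # how strongly it says "stop"
eps = 1.05 * margin / np.abs(W[1] - W[0]).sum()   # just enough to flip it

x_adv = x + eps * direction   # a small, structured change -- the "sticker"
print(classify(x), "->", classify(x_adv))         # 0 -> 1
```

The point of the sketch is that the perturbation is not random noise: it is computed from knowledge of the model, which is why the attack requires learning how the vehicle classifies objects first.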
It's easy to see the potential problems.
As a backup in situations like this, driverless cars use several inputs, including hyper-accurate Global Positioning System mapping, to help them avoid problems caused by confusing signage, but they still need to reconcile that data to make the correct, sometimes life-or-death, decision. It's also worth considering alternatives to relying solely on visual information to determine what a road sign means.
There are ways to fight this. The researchers suggest using contextual information to catch implausible readings: a 65 mph speed-limit sign in a suburb, or a stop sign on a highway, simply doesn't make sense. That way, a car wouldn't act on a stop sign in the middle of a busy highway.
Many GPS devices already know speed limits and road signage because that information is pre-programmed along roads and popular travel routes.
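The contextual defense the researchers describe can be sketched as a simple plausibility check: before acting on a detected sign, compare it against what the pre-programmed map says about the current road. The road names, sign labels, and map structure below are made up for illustration; a real system would fuse many more signals.

```python
# Hypothetical pre-programmed map data: road type and posted speed limit.
MAP_DATA = {
    "elm_street": {"road_type": "residential", "speed_limit_mph": 25},
    "i_90":       {"road_type": "highway",     "speed_limit_mph": 65},
}

# Which sign readings are plausible on each road type (illustrative only).
PLAUSIBLE_SIGNS = {
    "residential": {"stop", "yield", "speed_limit_25"},
    "highway":     {"speed_limit_55", "speed_limit_65", "exit"},
}

def sanity_check(road_id: str, detected_sign: str) -> bool:
    """Return True if the detected sign makes sense for this road."""
    road = MAP_DATA.get(road_id)
    if road is None:
        return True  # no map info: fall back to trusting the camera
    return detected_sign in PLAUSIBLE_SIGNS[road["road_type"]]

# A "stop" sign reported in the middle of a highway is flagged as suspect,
# while the same reading on a residential street passes the check.
print(sanity_check("i_90", "stop"))        # False
print(sanity_check("elm_street", "stop"))  # True
```

A flagged reading wouldn't have to be discarded outright; it could simply trigger a more conservative behavior, such as slowing down while other sensors confirm or reject the detection.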
There are also anti-sticker materials that governments could apply to signs.
Self-driving cars are, supposedly, a safer alternative to today's over-congested, human-filled roads, and more countermeasures can be applied as the cars become smarter. However, attacks like this could create real safety issues if the technology isn't implemented well enough. For now, we can only wait for further testing to see more results.