Optical Illusions for AI: Bad News?

Khryss | Published 2017-11-14 20:06

Researchers have found a way to trick an AI into seeing a turtle as a rifle and a cat as guacamole.

A group of MIT students who call themselves labsix managed to make Google’s image classifier InceptionV3 see a turtle as a rifle. The researchers used “adversarial examples”, which are like optical illusions for neural networks. Even when viewed from “a variety of angles, viewpoints, and lighting conditions”, the turtle was still classified as a rifle.
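labsix haven’t published their code in this article, but the basic idea behind adversarial examples can be sketched in a few lines. The toy below (a hypothetical linear classifier standing in for InceptionV3, in Python with NumPy) uses the classic “fast gradient sign” style of attack: nudge each input value by a small amount in the direction that most hurts the model, and the prediction flips.

```python
import numpy as np

# Toy linear "classifier": predicts class 1 when w.x + b > 0.
# (A stand-in for a real network -- the values here are made up.)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model classifies as class 1 (say, "turtle").
x = np.array([2.0, 0.5, 1.0])

# Adversarial perturbation: step each coordinate by at most eps
# against the sign of the model's gradient with respect to the input.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x))       # class 1 on the clean input
print(predict(x_adv))   # prediction flips on the perturbed input
```

The point is that `x_adv` differs from `x` by at most `eps` in every coordinate, yet lands on the other side of the decision boundary. Real attacks on deep networks work the same way, just with gradients computed through many layers.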

“The choice of turtle is by no means special, the algorithm is very general,” says Andrew Ilyas, a member of labsix. “The biggest reason we chose a turtle was because it was the first printable 3D model we could find.” The researchers also fooled the neural network into thinking that a picture of a cat was actually guacamole, though when the image was rotated slightly, the system correctly identified it as a cat. In addition, the researchers tricked InceptionV3 into classifying a real baseball as an espresso from multiple angles.
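Surviving rotation is the key trick: instead of fooling the classifier on one fixed image, the perturbation is optimized over a whole distribution of viewpoints (labsix call this “Expectation Over Transformation” in their paper). A rough sketch of that idea, again with a made-up linear classifier standing in for the real network:

```python
import numpy as np

# Toy 2-D classifier: class 1 when w.x > 0 (values invented for illustration).
w = np.array([1.0, 0.5])

def predict(x):
    return int(w @ x > 0)

# "Viewpoints": rotations over a range of angles, a stand-in for the
# camera angles a printed 3-D object might be photographed from.
angles = np.deg2rad(np.arange(-30, 31, 10))
rots = [np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]]) for a in angles]

x = np.array([2.0, 1.0])   # clean input: class 1 from every viewpoint

# Expectation Over Transformation, in miniature: perturb against the
# gradient averaged over all viewpoints, so the attack survives each
# rotation rather than just one.
avg_grad = np.mean([R.T @ w for R in rots], axis=0)
eps = 2.5   # exaggerated step so the toy example flips cleanly
x_adv = x - eps * np.sign(avg_grad)

print(all(predict(R @ x) == 1 for R in rots))       # clean: class 1 everywhere
print(all(predict(R @ x_adv) == 0 for R in rots))   # adversarial: fooled everywhere
```

A perturbation tuned for a single angle would typically stop working once the object is rotated; averaging over the transformation set is what makes the turtle stay a “rifle” from every side.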

“The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you’d never see a rifle underwater, or an espresso in a baseball mitt,” the researchers wrote in their paper.

“There is now a rather extensive literature showing that deep neural networks can be easily fooled in a myriad of ways,” says Yevgeniy Vorobeychik at Vanderbilt University. The study’s findings show that adversarial examples make AI “vulnerable” and pose a “practical concern”: with some minor adjustments and a little effort, image recognition systems could be sabotaged.

“A hacker could make a hospital look like a target to a military drone, or a person of interest look like an innocent stranger to a face-recognition security system,” says Jeff Clune at the University of Wyoming, an author of one of the first studies about adversarial examples.

AI could mistake objects for something else entirely, or see things that don’t exist, which could lead to dangerous results. “We do not currently know how to solve this problem,” Clune says. “So far it has resisted the best efforts of the brightest minds for years.”

https://www.newscientist.com/article/2152331-visual-trick-fools-ai-into-thinking-a-turtle-is-really-a-rifle/
