A neural network trained to recognize objects in images turned out to be surprisingly easy to deceive, which casts doubt on much of the progress in AI algorithms over the past few years. The experiment was carried out by researchers from Kyushu University (Japan), and they needed only one pixel to do it.
While testing modern image recognition systems, the researchers deliberately altered a single pixel in each picture. Not at random, but at strategically chosen coordinates derived from an analysis of how the AI's algorithm works. The system then began to confuse everything: cats with dogs, horses with cars.
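The idea can be sketched in a few lines of code. The snippet below is a toy illustration, not the researchers' actual method: the `predict` function is a made-up stand-in classifier, and the attack uses brute-force search where the Kyushu team used differential evolution. The goal is the same, though: find one pixel whose change flips the model's answer.

```python
import numpy as np

# Toy stand-in classifier (hypothetical, NOT the networks from the paper):
# labels an 8x8 grayscale image "bright" (1) when its total intensity
# exceeds 32, else "dark" (0).
def predict(img):
    return 1 if img.sum() > 32 else 0

def one_pixel_attack(img, values=(0.0, 1.0)):
    """Try overwriting each single pixel with each candidate value and
    return the first change that flips the label.  The real attack
    searches with differential evolution instead of brute force."""
    original = predict(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            for v in values:
                candidate = img.copy()
                candidate[y, x] = v          # change exactly one pixel
                if predict(candidate) != original:
                    return candidate, (y, x, v)
    return None, None

img = np.full((8, 8), 0.5)        # sits right at the decision boundary
print(predict(img))               # -> 0
adv, change = one_pixel_attack(img)
print(predict(adv), change)       # -> 1 (0, 0, 1.0)
```

Models whose decision lies close to a boundary, as in this contrived example, are exactly the ones a single well-placed pixel can tip over.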
One "fake pixel" is enough to trick the system on an image of 1,000 pixels; for a million-pixel image, only a couple of hundred pixels need to be changed. Exploiting the same principle, researchers from MIT 3D-printed a toy turtle that a neural network mistook for a rifle, and a baseball that it mistook for a cup of coffee. And this is a huge problem.
In the near future, cars will be expected to recognize real objects on their own and navigate the real world. If they are this easy to deceive, the risk of errors with catastrophic consequences grows enormously, and it is the people who interact with these AIs who will now need fool-proofing. There is an upside, though: if Skynet ever launches its terminator uprising, crudely painting yourself to look like a bush may be enough to leave the robots completely confused.
Examples of modified photos shown to the AI; the recognition result is given in brackets
Source: arXiv