How does it work?
Google has image-recognition software, in use for years now, that draws on thousands of reference images of known things to identify objects in a photo. It sees a dog-like feature and tells you “this photo is a dog!” It’s very complex and self-learning.
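To make the "compare against reference images" idea concrete, here's a toy sketch (not Google's actual system, which learns its features with deep neural networks): it just scores a photo's feature vector against a few hand-made, hypothetical reference vectors and picks the closest label.

```python
import numpy as np

# Toy illustration only: labeled reference "feature vectors" standing in
# for the thousands of reference images a real system would learn from.
references = {
    "dog":   np.array([0.9, 0.1, 0.2]),
    "tree":  np.array([0.1, 0.9, 0.3]),
    "house": np.array([0.2, 0.3, 0.9]),
}

def classify(features):
    """Return the reference label most similar to the given features."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(references, key=lambda label: cosine(features, references[label]))

photo = np.array([0.85, 0.15, 0.25])  # a dog-like feature vector
print(classify(photo))  # prints "dog"
```

A real recognizer learns those feature vectors from data rather than having them written by hand, but the final step, "which known thing does this look most like?", is the same kind of comparison.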
Take this one step further, and create a program that confuses the recognition tool into reporting there IS a dog in the photo when there really is not. Then tweak the tolerances until it begins finding dogs and houses and trees all over the image! Then another program edits the image to ‘clean up’, accentuate and enhance any dog- or tree-like feature it finds, and voila.
This results in all sorts of craziness. For example, they gave it a picture of blurry static and told it to look for dumbbells. What it came up with was a whole lot of dumbbells, but every dumbbell also had an arm weirdly attached. The computer had only ever seen dumbbells with arms attached to them, so it concluded that a dumbbell must come with an arm.
The computer started seeing things where there wasn’t really anything, because it’d say “oh, this clump of pixels looks sliiightly like <X>, I’ll make it look a tiny bit more like <X>”, and when you do that 3.2 million times you start seeing some wild things.
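That "nudge it a tiny bit more like <X>, then repeat" loop can be sketched in a few lines. This is a hedged, toy version of the idea, not the real algorithm: the "detector" is just a dot product with a made-up template, and each pass pushes the pixels slightly in the direction that raises the detector's score, so a faint resemblance snowballs.

```python
import numpy as np

# Toy "dog detector": the score is the dot product of a patch with this
# template (all names and values here are illustrative, not from Google).
template = np.array([1.0, -1.0, 1.0, -1.0])

def amplify(patch, steps=200, rate=0.01):
    """Repeatedly nudge the patch to raise its detector score.

    The gradient of (patch . template) with respect to the patch is just
    the template, so each gradient-ascent step adds a small multiple of it.
    """
    patch = patch.astype(float).copy()
    for _ in range(steps):
        patch += rate * template
    return patch

noise = np.array([0.05, 0.02, -0.01, 0.03])  # barely dog-like static
before = float(noise @ template)
after = float(amplify(noise) @ template)
print(before, after)  # the "dogness" score grows with every pass
```

The real systems do the equivalent thing with a deep network's activations instead of a dot product, but the feedback loop is the same: score the image, push the pixels toward a higher score, repeat.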
By running the computer through these feedback loops, you’re screwing around with its sensory perception, much like LSD or other hallucinogenic drugs affect a human brain’s sensory perception, making us see things that aren’t there. It’s a really cool look into the mind of a computer that taught itself, once it begins seeing things that aren’t there.
The actual algorithms and logic that make it all happen are complex, with self-learning, neural-network-like processes. Want to learn a bit more about AI and neural networks? Mario can teach you: