- Japanese researchers have developed an AI that can literally see what you’re imagining.
- The system uses deep image reconstruction from human brain activity.
- After initial training, the DNN was able to reconstruct pictures it had never seen before.
Research on artificial intelligence over the last few years has amazed us all. On a daily basis, we interact with dozens of AI programs: personal voice assistants, web search engines, and social media platforms. Recently, Samsung introduced a technology that uses machine learning to upscale video of any format to 8K resolution. The possibilities are endless.
It seems like every month brings some astonishing development in AI that helps change the world for the better. At the end of 2017, Nvidia revealed a neural network capable of imagining scenes and pictures it had never seen before. And now, a team of Japanese researchers at ATR Computational Neuroscience Labs has developed an AI that can literally see your thoughts.
Just think of anything, and the system tries to visualize it. Researchers have made this possible through deep image reconstruction from human brain activity. If a person is looking at the letter “Z”, for instance, the AI will form an image resembling a fuzzy version of it. It actually reads the person’s mind – sort of. Let’s find out in detail how this system works.
So far, machine-learning-based analysis of fMRI (functional magnetic resonance imaging) signals has allowed us to decode perceptual content, but it has been limited to low-level reconstructions built from simple image bases.
Last year, advances in neuroscience made it possible to decode visual cortical activity into the hierarchical feature levels of a deep neural network (DNN).
The predicted features could be used to identify imagined object categories by comparing them against feature vectors computed for a range of object images. Decoding imagined objects in this way reveals a progressive recruitment of high- to low-level visual representations, and it gave researchers a technique for exploiting data from hierarchical visual features.
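The decoding step described above – predicting DNN feature values from brain activity – is typically framed as a set of linear regressions from fMRI voxel responses to feature values. Here is a minimal toy sketch with synthetic data; the dimensions, the linear model, and the noise level are all illustrative assumptions, not the study’s actual setup:

```python
import numpy as np

# Hypothetical toy data: 200 "scans" of 50 voxels, each paired with a
# 10-dimensional DNN feature vector for the image viewed during that scan.
rng = np.random.default_rng(0)
n_scans, n_voxels, n_features = 200, 50, 10

true_weights = rng.normal(size=(n_voxels, n_features))   # unknown brain-to-feature map
voxels = rng.normal(size=(n_scans, n_voxels))            # simulated fMRI responses
dnn_features = voxels @ true_weights \
    + 0.01 * rng.normal(size=(n_scans, n_features))      # features + measurement noise

# Fit one linear decoder per DNN feature dimension by least squares.
weights, *_ = np.linalg.lstsq(voxels, dnn_features, rcond=None)

# Decode the DNN features for a new, unseen brain scan.
new_scan = rng.normal(size=(1, n_voxels))
decoded = new_scan @ weights
```

Once such decoders are trained, any new scan can be turned into a feature vector and compared against the features of candidate images – even for categories the decoders were never trained on.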
The New Image Reconstruction Technique
Now, Japanese scientists have come up with an image reconstruction technique that optimizes the pixel values of an image so that its DNN features, across multiple layers, match those decoded from human brain activity.
Reference: bioRxiv | doi: 10.1101/240317
The system analyzes the signals coming from an fMRI scanner in real time using a DNN, which then recreates what people are seeing. To train the system, pictures of the natural world were shown to both the AI and the participants, so that the network could precisely match the pattern of blood flow in the brain to the photo being perceived.
After the first phase of training (10 weeks), the DNN was able to reconstruct pictures in real time that it had never seen before, such as numbers and letters of the alphabet.
Reconstructions of alphabetical letters
Reconstruction quality of colored artificial shapes
It learns in much the same way human psychics do – by guessing. The system knows what your brain activity looks like when you are thinking of a particular number, and it visualizes your thoughts by guessing the output you want to see, based on that activity.
Unlike humans, it can do a great deal of guessing. It takes all the data it has (extracted from brain activity in the form of DNN features) and transforms it into an image. The system does this over and over until it arrives at a sensible image.
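That repeated guess-and-adjust loop is essentially gradient descent on pixel values: start from noise, compare the current image’s features to the features decoded from brain activity, and nudge the pixels to shrink the mismatch. Here is a toy numerical sketch, with a fixed linear map standing in for the DNN feature extractor – a simplifying assumption, since the real system uses a deep network and features decoded from fMRI:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_features = 64, 16

# A fixed linear "feature extractor" stands in for the DNN (assumption).
extractor = rng.normal(size=(n_features, n_pixels))

target_image = rng.uniform(size=n_pixels)      # what the subject "saw"
target_features = extractor @ target_image     # stands in for decoded brain features

pixels = rng.uniform(size=n_pixels)            # start reconstruction from noise
lr = 0.005                                     # gradient-descent step size

# Repeatedly adjust the pixels to reduce the feature mismatch.
for _ in range(5000):
    residual = extractor @ pixels - target_features
    pixels -= lr * extractor.T @ residual      # gradient of 0.5 * ||residual||^2

loss = np.linalg.norm(extractor @ pixels - target_features)
```

After enough iterations the feature mismatch is driven essentially to zero; the resulting pixels are an image whose features agree with the decoded ones, which is why real reconstructions look like fuzzy versions of the original rather than exact copies.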
Of all the recent artificial intelligence developments, this one sounds like science fiction. The technology could be really helpful for people who can’t speak. For now, its mind reading is far from perfect, so the study needs further refinement.
In the future, more advanced AI approaches could open up new modes of communication. As we said earlier, the possibilities are limitless.