AI Reveals What Neurons In the Brain’s Visual System Prefer To Look At

  • Researchers have developed an AI-based model that can reliably determine what visual neurons prefer to look at. 
  • The model combines a generative deep neural network with a genetic algorithm named XDREAM. 

For over five decades, scientists have known that neurons in the visual cortex do not respond equally to different pictures. This selectivity is what allows animals to recognize, understand, and interpret the vast amount of visual cues around them.

For instance, certain visual neurons in a brain region called the inferior temporal cortex fire more when animals look at text, objects, places, or faces. However, scientists still do not understand exactly what these neurons respond to.

Recently, a team of researchers at Harvard Medical School developed an artificial intelligence (AI) model that can reliably determine what individual neurons prefer to look at.

Experiments carried out so far have attempted to analyze neuronal preferences using real images. Such images can be inherently biased for two main reasons: they represent only real-world scenes, and they are chosen by humans for the tests. The new model avoids this problem by generating synthetic images tailored to the preferences of individual neurons.

The Experiment

The team recorded the neural responses of six macaque monkeys and fed them to the AI-based model. The model measured the firing rate of each visual neuron and generated self-adjusting images in response.

The monkeys were shown these synthetic images in 0.1-second blips over a few hours. The first image was an arbitrary grayscale textural pattern. The AI gradually introduced colors and shapes based on the neural responses, until the final image morphed into something that fully embodied the preference of a neuron.

This AI-based approach pairs a generative deep neural network with a genetic algorithm named XDREAM, which the researchers designed specifically to guide the evolution of novel synthetic images.
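The closed loop described above can be sketched in code. The toy below is not the actual XDREAM implementation: the `generator` and `neuron_response` functions are hypothetical stand-ins for the real deep generative network and the recorded macaque neurons, and all parameter values are illustrative. It shows only the genetic-algorithm idea: score latent codes by the neuron's firing rate, keep the best, then recombine and mutate them to form the next generation of images.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(code):
    """Toy 'generative network': map a 16-number latent code to an 8x8 image."""
    return np.outer(np.tanh(code[:8]), np.tanh(code[8:]))

# The pattern our simulated neuron "prefers" (unknown to the algorithm).
TARGET = generator(np.linspace(-1.0, 1.0, 16))

def neuron_response(image):
    """Simulated firing rate: higher when the image matches the preference."""
    return -np.mean((image - TARGET) ** 2)

def evolve(pop_size=32, code_len=16, generations=60, mutation=0.1):
    """XDREAM-style loop: select the highest-scoring codes, recombine, mutate."""
    pop = rng.normal(size=(pop_size, code_len))
    for _ in range(generations):
        scores = np.array([neuron_response(generator(c)) for c in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]          # selection
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, code_len)) < 0.5             # crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(scale=mutation, size=pop.shape)         # mutation
    best = max(pop, key=lambda c: neuron_response(generator(c)))
    return best, neuron_response(generator(best))
```

In the real experiment, the fitness signal came from spikes recorded while a monkey viewed each generated image, so the loop evolved pictures without anyone specifying in advance what the neuron should prefer.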

Reference: Cell | DOI:10.1016/j.cell.2019.04.005 | The Harvard Gazette

By the end of each experiment, the AI had generated a super-stimulus for the neurons. The outcomes across experiments were quite consistent: many neurons evolved images that, while not identical, were remarkably similar.

Some images met the researchers’ expectations. For instance, a neuron expected to respond to faces evolved round pink images with two dark spots resembling eyes.

Natural images vs images evolved by neurons in the monkey’s inferotemporal cortex | Courtesy of researchers 

Some results were astonishing. For example, one monkey’s neuron consistently evolved images of a monkey’s body with a red splotch around its neck. The team eventually discovered that this monkey had been raised alongside another monkey that wore a red collar.

However, not all final images made sense: one monkey’s neuron evolved an amorphous dark shape, while another evolved a tiny black square.

What’s Next?

According to the researchers, these neuronal responses are not innate; rather, they result from continuous exposure to visual stimuli over time. Exactly how neurons come to encode these images is still unknown, and the team plans to address this question in their next study.

Read: Altering Brain Neurons Can Remove The Natural Craving For Sweets

Learning how the visual cortex responds to certain images could help scientists better understand the mechanisms driving cognition-related conditions, including autism spectrum disorders and learning disabilities.
