AI Can Visualize What You See From Brain Scans

The ability of image-generating AI to recreate what people are seeing from their fMRI data is improving.

Functional magnetic resonance imaging, or fMRI, is one of the most sophisticated methods for studying how we think. The fMRI scanner creates fascinating and vivid visuals of brain activity as the subject performs various mental tasks inside it.

In this way, neuroscientists can determine which brain regions a person is using. What they cannot determine is what that person is thinking, perceiving, or feeling. Researchers have been working on breaking that code for decades, and thanks to artificial-intelligence-based number crunching, they are finally making significant strides. Using advanced image-generating AI and fMRI data, two scientists in Japan recently translated study participants' brain activity into images that uncannily resembled the ones the participants saw during their scans.

In the most recent study, the researchers used Stable Diffusion, a so-called diffusion model from London-based start-up Stability AI. Diffusion models, a category that also includes image generators such as DALL-E 2, are "the main character of the AI explosion," according to Yu Takagi, one of the two scientists behind the study. These models learn by adding noise to their training images. Like TV static, the noise degrades the visuals, but in predictable ways that the model starts to pick up on. Eventually, the model can create images from nothing but "static."
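To make the noise-and-denoise idea concrete, here is a minimal training sketch in Python. The noise schedule, array shapes, and the denoiser network are illustrative assumptions, not Stable Diffusion's actual implementation; the point is only the mechanism described above.

    import torch

    # Minimal sketch of diffusion-model training (illustrative assumptions,
    # not the Stable Diffusion code). `denoiser` is any network trained to
    # predict the noise that was mixed into an image at a given step.
    T = 1000                                      # number of noising steps
    betas = torch.linspace(1e-4, 0.02, T)         # noise added at each step
    alpha_bars = torch.cumprod(1 - betas, dim=0)  # signal remaining per step

    def training_step(denoiser, images):
        t = torch.randint(0, T, (images.shape[0],))         # random step per image
        noise = torch.randn_like(images)                    # the "TV static"
        a = alpha_bars[t].view(-1, 1, 1, 1)
        noisy = a.sqrt() * images + (1 - a).sqrt() * noise  # corrupt the images
        return ((denoiser(noisy, t) - noise) ** 2).mean()   # learn to spot the static

At generation time the process runs in reverse: the trained network starts from pure static and strips away predicted noise step by step until an image remains.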

Because Stable Diffusion can fold in the text captions attached to its training photographs, it got more out of less training data for each participant than earlier attempts to read brain scans with AI algorithms, which required training on massive data sets. It is a revolutionary strategy that combines textual and visual information to "decipher the brain," according to cognitive neuroscientist Ariel Goldstein of Princeton University, who was not involved with the experiment.
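In effect, the decoding step amounts to learning a mapping from fMRI voxel patterns into an embedding space the image generator already understands. The sketch below illustrates that framing with ridge regression on random stand-in arrays; the dimensions, regularization strength, and variable names are assumptions for illustration, not the study's actual pipeline.

    import numpy as np
    from sklearn.linear_model import Ridge

    n_images, n_voxels, embed_dim = 8000, 5000, 768

    X_train = np.random.randn(n_images, n_voxels)    # fMRI responses (stand-in data)
    Y_train = np.random.randn(n_images, embed_dim)   # embeddings of the viewed images' captions

    # A regularized linear map from voxel space to embedding space.
    decoder = Ridge(alpha=100.0).fit(X_train, Y_train)

    # For a new scan, predict an embedding describing what the person saw,
    # which can then condition the image generator.
    predicted_embedding = decoder.predict(np.random.randn(1, n_voxels))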

According to Shinji Nishimoto, a systems neuroscientist at Osaka University who collaborated on the project, the AI program takes advantage of data from various brain regions involved in visual perception, such as the occipital and temporal lobes. The software analyzed fMRI brain scans, which track changes in blood flow to the brain's active regions. The temporal lobes primarily register information about an image's contents, whereas the occipital lobe primarily registers information about its layout and perspective, including the size and placement of those contents. The fMRI captures all of this information as peaks in brain activity, and the AI can later reconvert these patterns into an imitation of the original image.
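Putting the two streams together, a single reconstruction might look like the schematic below. Every name in it (the decoders, the voxel index arrays, and the diffusion_generate callable) is a hypothetical placeholder standing in for the study's actual components.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical two-stream decoders, trained as in the earlier sketch on
    # stand-in data; real ones would be fit on recorded scan/image pairs.
    occipital_decoder = Ridge().fit(np.random.randn(200, 2000), np.random.randn(200, 4096))
    temporal_decoder = Ridge().fit(np.random.randn(200, 3000), np.random.randn(200, 768))

    def reconstruct(scan, occipital_idx, temporal_idx, diffusion_generate):
        # Occipital voxels carry layout and perspective: map them to a
        # rough initial image latent (the spatial sketch of the scene).
        layout_latent = occipital_decoder.predict(scan[occipital_idx][None])

        # Temporal voxels carry contents: map them to a caption-like
        # embedding describing what the image shows.
        content_embedding = temporal_decoder.predict(scan[temporal_idx][None])

        # A diffusion sampler (stand-in callable) refines the rough latent,
        # steered by the content embedding, into an imitation image.
        return diffusion_generate(layout_latent, content_embedding)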

The AI system was evaluated only on the same four people whose scans were used to train it, and extending it to new people would require retraining it on their brain scans. It may therefore be some time before this technology is generally available. However, according to Iris Groen, a computational neuroscientist at the University of Amsterdam, "these diffusion models have [an] unprecedented ability to generate realistic images," which may open up new avenues for research in cognitive neuroscience.

Nishimoto envisions using the technology to monitor dreams and imagined thoughts or to help researchers better understand how other animals see the world.