Using a machine to read the information inside a brain has been the stuff of sci-fi for years. Now, scientists at UC Berkeley’s Gallant Lab have demonstrated that it’s possible.
The scientists used functional magnetic resonance imaging (fMRI) to reconstruct movies subjects watched by reading their brain activity.
The process involved measuring activity in the part of the brain governing vision while the subject watched a selected set of movies. The scientists then developed computational models from that data and correlated those models with signals measured in the subject’s brain while the subject watched a different set of movies.
The process was long and involved, requiring hours of recordings and heavy computation.
“To achieve reconstructions, we currently need a relatively large amount of computational resources and several hours of recordings for each subject,” Shinji Nishimoto, the first author of the Gallant Lab paper and a post-doctoral researcher at the Helen Wills Neuroscience Institute at UC Berkeley, told TechNewsWorld.
Gallant Lab doesn’t expect practical results any time soon, although “it’s hard to predict the future,” Nishimoto said.
What Gallant Lab Did
First, Gallant Lab used fMRI to measure the brain activity in the visual cortex of three subjects, all of whom are co-authors of the paper, as they watched movies for several hours.
That data served as a baseline for developing computational models that could predict the pattern of brain activity elicited by arbitrary movies, including ones the subjects had never seen. The models described how movies are transformed into brain activity.
The researchers then gave the subjects a second, different set of movies to watch and measured their brain activity again by fMRI. The computational models developed previously were then used to decode the brain activity evoked by this second set of movies and reconstruct what the subjects had seen.
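To make that pipeline concrete, here is a minimal sketch in Python of how such an encoding model can be fit and then applied to new movies. Everything here is illustrative (the array shapes, the ridge-regression fit, the random data); it shows the general shape of the approach, not the lab’s actual code:

```python
import numpy as np

# Hypothetical data with illustrative shapes: feature vectors extracted
# from the training movies (e.g., motion-energy filter outputs) and the
# fMRI responses recorded while the subjects watched them.
n_timepoints, n_features, n_voxels = 1000, 500, 200
X_train = np.random.randn(n_timepoints, n_features)  # movie features
Y_train = np.random.randn(n_timepoints, n_voxels)    # BOLD responses

# Fit one regularized linear model per voxel (ridge regression).
# W maps movie features to predicted brain activity.
alpha = 10.0
XtX = X_train.T @ X_train + alpha * np.eye(n_features)
W = np.linalg.solve(XtX, X_train.T @ Y_train)        # (features, voxels)

# Predicting the activity pattern for any new movie is now just a
# matrix multiplication on that movie's features.
X_test = np.random.randn(50, n_features)
Y_pred = X_test @ W
```

Because prediction reduces to a single matrix product, the fitted model can be run cheaply over an enormous number of candidate clips, which matters for the database step described below.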
The motion-encoding model Gallant Lab developed describes the filtering process through which the brain makes sense of visual input, using filters tuned to properties such as spatial position, motion direction and speed.
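A single motion-energy filter can be sketched as a quadrature pair of space-time sinusoids whose squared outputs are summed, which makes the response insensitive to the exact phase of the stimulus. The version below is deliberately stripped down (a full implementation would add a Gaussian envelope to localize the filter in space and time), and all parameter names and defaults are our illustration:

```python
import numpy as np

def motion_energy(patch, sf=0.1, tf=0.2, theta=0.0):
    """Response of one motion-energy filter to a space-time patch.

    patch: array of shape (time, height, width) of pixel luminances.
    sf, tf: spatial and temporal frequency (cycles/pixel, cycles/frame).
    theta: preferred motion direction in radians.
    """
    t, h, w = patch.shape
    ts, ys, xs = np.meshgrid(np.arange(t), np.arange(h), np.arange(w),
                             indexing="ij")
    # A drifting-grating carrier: the phase advances across space and
    # time, so the filter prefers motion in direction theta at speed tf/sf.
    phase = 2 * np.pi * (sf * (xs * np.cos(theta) + ys * np.sin(theta))
                         - tf * ts)
    # Quadrature pair: squaring and summing the even and odd responses
    # yields a phase-invariant motion-energy value.
    even = np.sum(patch * np.cos(phase))
    odd = np.sum(patch * np.sin(phase))
    return even ** 2 + odd ** 2

# A bank of such filters, varied over position, direction and speed,
# turns each moment of a movie into a feature vector.
patch = np.random.rand(16, 32, 32)       # 16 frames of a 32x32 patch
energy = motion_energy(patch, theta=np.pi / 4)
```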
However, fMRI measures hemodynamic changes (differences in blood flow, volume and oxygenation) caused by neural activity, and those changes unfold much more slowly than the neural activity itself.
So, Gallant Lab used a two-stage encoding model. In the first stage, a large bank of motion-energy filters spanning a range of positions, motion directions and speeds processes the movie. The output of that stage feeds into the second, which describes how neural activity translates into hemodynamic activity.
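The second stage can be approximated by convolving the fast neural signal from the first stage with a slow, gamma-shaped hemodynamic kernel that peaks several seconds later. The kernel below is a generic textbook-style stand-in, not the study’s actual hemodynamic model:

```python
import numpy as np

def hemodynamic_response(neural, tr=1.0):
    """Turn a fast neural time course into a slow BOLD-like signal.

    Convolves the input with a simple gamma-shaped kernel (peaking
    around 5 seconds); 'tr' is the sampling interval in seconds.
    """
    t = np.arange(0.0, 20.0, tr)          # kernel spans ~20 seconds
    hrf = t ** 5 * np.exp(-t)             # gamma-like shape
    hrf /= hrf.sum()
    return np.convolve(neural, hrf)[:len(neural)]

# Stage 1 produces a neural drive signal from motion-energy features;
# stage 2 smears it out in time the way blood flow does.
neural_drive = np.random.rand(300)        # illustrative time course
bold_signal = hemodynamic_response(neural_drive)
```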
The experiment is the first demonstration that dynamic natural visual experiences can be recovered from very slow brain activity recorded by fMRI, Gallant Lab researchers stated.
Previous attempts at brain reading could only decode static information, but this experiment focused on dynamic visual experiences because they are the most compelling aspect of visual experience, Gallant Lab stated.
Gallant Lab had to build a huge database of video clips to make the experiment work.
“We tried many different ideas without using a movie database, including analytics approaches,” Nishimoto said. “Currently, the solution using a database works best.”
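In rough terms, the database makes reconstruction a matching problem: run every clip through the encoding model, then keep the clips whose predicted brain activity best matches what was actually measured. The sketch below scores clips by plain correlation and picks the top matches; the lab’s actual reconstruction procedure is more sophisticated, and all shapes here are illustrative:

```python
import numpy as np

# Hypothetical setup: the encoding model has already predicted the brain
# activity each database clip would evoke; 'measured' is the activity
# recorded while the subject watched the test movie.
n_clips, n_voxels = 5000, 2000
predicted = np.random.randn(n_clips, n_voxels)   # one row per clip
measured = np.random.randn(n_voxels)             # test-movie response

# Score every clip by the correlation between its predicted activity
# pattern and the measured pattern.
pred_z = predicted - predicted.mean(axis=1, keepdims=True)
pred_z /= predicted.std(axis=1, keepdims=True)
meas_z = (measured - measured.mean()) / measured.std()
scores = pred_z @ meas_z / n_voxels              # Pearson r per clip

# The reconstruction is then assembled from the best-matching clips,
# for example by averaging the frames of the top 100 (not shown).
top_clips = np.argsort(scores)[::-1][:100]
```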
Using Brain Imaging in the Real World
Brain reading devices could be used to help in the diagnosis of conditions such as stroke or dementia, to assess the value of medical treatment such as drug and stem-cell therapy, to function as the core of a neural prosthesis, or to build a brain-machine interface.
“The more detail that we can learn about the anatomy of the brain, and about the functional anatomy of the brain, the greater the insight it will provide to us,” Joe Rizzo, a professor at Harvard Medical School and director of the Center for Innovative Visual Rehabilitation at the VA Boston Healthcare System’s JP Campus, told TechNewsWorld.
“That kind of information could be useful in visual rehabilitation,” Rizzo added.
The Foundation Fighting Blindness might have a use for the Gallant Lab work.
“We are open to all applicants, including the Gallant Lab, so they can apply to us for support in response to our open call for applications,” Stephen Rose, the Foundation’s chief research officer, told TechNewsWorld.