Scientists are one step closer to knowing what you’ve seen by reading your mind.
Having modeled how images are represented in the brain, the researchers translated recorded patterns of neural activity into pictures of what test subjects had seen.
Though practical applications are decades away, the research could someday lead to dream-readers and thought-controlled computers.
“It’s what you would actually use if you were going to build a functional brain-reading device,” said Jack Gallant, a University of California, Berkeley neuroscientist.
The research, led by Gallant and Berkeley postdoctoral researcher Thomas Naselaris, builds on earlier work in which they used neural patterns to identify pictures from within a limited set of options.
The current approach, described Wednesday in Neuron, uses a more complete view of the brain’s visual centers. Its results are closer to reconstruction than identification, which Gallant likened to “the magician’s card trick where you pick a card from a deck, and he guesses which card you picked. The magician knows all the cards you could have seen.”
In the latest study, “the card could be a photograph of anything in the universe. The magician has to figure it out without ever seeing it,” said Gallant.
To construct their model, the researchers used an fMRI machine, which measures blood flow through the brain, to track neural activity in three people as they looked at pictures of everyday settings and objects.
As in the earlier study, they looked at parts of the brain linked to the shape of objects. This time, they also looked at regions whose activity correlates with general classifications, such as “buildings” or “small groups of people.”
Once the model was calibrated, the test subjects looked at another set of pictures. After interpreting the resulting neural patterns, the researchers’ program plucked corresponding pictures from a database of 6 million images.
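The matching step described above can be sketched in a few lines of code. This is a simplified illustration of the identification idea, not the researchers' actual pipeline: `encoding_model` stands in for their fitted model of visual responses, and the score is a plain correlation rather than a full Bayesian posterior.

```python
import numpy as np

def reconstruct_image(observed_activity, candidate_images, encoding_model):
    """Pick the candidate image whose predicted brain activity best
    matches the observed fMRI pattern.

    A minimal sketch, assuming `encoding_model` maps an image to a
    predicted pattern of voxel responses, as the study's model does.
    """
    best_score, best_image = -np.inf, None
    for image in candidate_images:
        predicted = encoding_model(image)  # model's predicted activity pattern
        # Score the candidate by how well its predicted pattern
        # correlates with the actual recorded pattern
        score = np.corrcoef(predicted, observed_activity)[0, 1]
        if score > best_score:
            best_score, best_image = score, image
    return best_image
```

In the study, the candidate pool was the 6-million-image database; the program returned whichever picture's predicted activity best explained the recorded scan.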
Frank Tong, a Vanderbilt University neuroscientist who studies how thoughts are manifested in the brain, said the Neuron study wasn’t quite a pure, draw-from-scratch reconstruction. But it was impressive nonetheless, especially for the detail it gathered from measurements that are still extremely coarse.
The researchers’ fMRI readings bundled the output of millions of neurons into single output blocks. “At the finer level, there is a ton of information. We just don’t have a way to tap into that without opening the skull and accessing it directly,” said Tong.
Gallant hopes to develop methods of interpreting other types of brain activity measurement, such as optical laser scans or EEG readings.
He mentioned medical communication devices as a possible application, along with computer programs suited to visual thinking: CAD-CAM or Photoshop, run straight from the brain.
Such applications are decades away, but “you could use algorithms like this to decode other things than vision,” said Gallant. “In theory, you could analyze internal speech. You could have someone talk to themselves, and have it come out in a machine.”
Citation: “Bayesian Reconstruction of Natural Images from Human Brain Activity.” By Thomas Naselaris, Ryan J. Prenger, Kendrick N. Kay, Michael Oliver, and Jack L. Gallant. Neuron, Vol. 63 Issue 6, September 24, 2009.
Image: From Neuron. Images seen by test subjects are in the left column. In the middle column are the image reconstructions returned by the researchers’ older, structure-focused analysis. At right are the image reconstructions produced by the newer, category-including model.