Friday, September 23, 2011
Reconstructing Visual Perception using fMRI - Wowza!
I'm still trying to wrap my head around this research, but ... WOWZA!
(Note: Follow any of the links on this page to see videos of this in action.)
Researchers from the University of California, Berkeley used fMRI scans to "reverse engineer" the visual stimuli a person was watching - essentially reconstructing movies from brain activity. Below is their "simple outline" of the study (taken from their summary on the Gallant Lab web page).
"The goal of the experiment was to design a process for decoding dynamic natural visual experiences from human visual cortex. More specifically, we sought to use brain activity measurements to reconstruct natural movies seen by an observer. First, we used functional magnetic resonance imaging (fMRI) to measure brain activity in visual cortex as a person looked at several hours of movies. We then used these data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set used to build the model). Next, we used fMRI to measure brain activity elicited by a second set of movies that were completely distinct from the first set. Finally, we used the computational models to process the elicited brain activity, in order to reconstruct the movies in the second set of movies"
It's a tricky one to understand - I think the summary on the Gizmodo blog is a bit clearer, and this summary article in a Berkeley newsletter includes quotes that might help.
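Before my plain-English attempt below, here's a rough sketch in code of the first step - the "computational model" that predicts brain activity from a movie. Big caveat: I'm assuming a plain ridge regression from movie features to voxel responses, which is almost certainly simpler than the researchers' actual model, and every name here is mine, not theirs.

```python
import numpy as np

def fit_encoding_model(features, voxel_responses, ridge=1.0):
    """features: (n_timepoints, n_features) - numbers describing each moment of the movie.
    voxel_responses: (n_timepoints, n_voxels) - fMRI activity measured at each moment.
    Returns W, a weight matrix mapping movie features to predicted voxel activity."""
    X, Y = features, voxel_responses
    # Ridge regression: W = (X'X + lambda*I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_activity(clip_features, W):
    """Predict the fMRI pattern a new movie clip *should* elicit."""
    return clip_features @ W
```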
Here's how I think I'd summarize it for students (and there is a STRONG chance I might be wrong here, so please correct me in the comments!): Participants spent time (a long time!) in an fMRI scanner watching movie trailers, and the researchers used that data to build a model of what their brains were doing while watching the movies. Then they collected a LOT of random YouTube clips and used the model to predict the brain activity (measured in "voxels," the 3D pixels of an fMRI scan) that each clip would produce. The computer picked the YouTube clips whose predicted activity best matched the fMRI data actually recorded from the participants, and smooshed all those video clips into a composite video. When we watch the composite video, we can see the similarities to the original clips the fMRI participants watched (although there is a high chance of confirmation bias here, right?)
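And here's a sketch of the matching-and-smooshing step as I described it above. Again, the names and simplifications are mine - I'm treating each YouTube clip as one feature vector plus one representative image, which glosses over a lot:

```python
import numpy as np

def reconstruct(measured_activity, clip_library, W, top_k=100):
    """measured_activity: (n_voxels,) fMRI pattern recorded while watching the test movie.
    clip_library: list of (clip_features, clip_image) pairs for the YouTube clips.
    Returns the average ("smoosh") of the images from the best-matching clips."""
    scores = []
    for clip_features, clip_image in clip_library:
        predicted = clip_features @ W  # activity this clip *should* evoke
        # How similar is the prediction to what the brain actually did?
        scores.append(np.corrcoef(predicted, measured_activity)[0, 1])
    best = np.argsort(scores)[-top_k:]  # indices of the top matches
    composite = np.mean([clip_library[i][1] for i in best], axis=0)
    return composite
```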
Please holler in the comments about this if you have time - I'd love to know if I'm understanding this correctly!
posted by Rob McEntarffer