Berkeley scientists reconstructed movie clips from brain scans of people who had just watched them (Source: newscenter.berkeley.edu)
Using fMRI and computational models, researchers were able to decipher and reconstruct movies from our minds

Researchers at the University of California, Berkeley have used functional magnetic resonance imaging (fMRI) and computational models to watch clips of movies inside the minds of people who just viewed them.

Jack Gallant, study leader and a UC Berkeley neuroscientist, and Shinji Nishimoto, a post-doctoral researcher in Gallant's lab, were able to "read the mind" by deciphering and rebuilding the human visual experience.

In previous studies, Gallant was able to record activity in the visual cortex (the part of the brain that processes visual information) while participants viewed black-and-white photos, and then use a computational model to predict what each participant was looking at.

Now, Gallant and his team have decoded the brain signals generated by viewing moving pictures. They did this by placing Nishimoto and two other members of the research team in an MRI scanner while they viewed two sets of Hollywood movie trailers. While the subjects watched the first set of trailers, the fMRI machine measured blood flow through the visual cortex, and this information was fed to a computer, which divided the brain into tiny three-dimensional cubes called "voxels," or volumetric pixels. For each voxel, a model described how motion and shapes in the movie are translated into brain activity. The computer program thus learned to associate visual patterns in the trailers with the corresponding brain activity.
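The study's actual per-voxel models are far more sophisticated (motion-energy filters fit to each voxel's response), but the core idea of learning a mapping from stimulus features to voxel activity, then using that mapping to predict the response to new footage, can be sketched with a simple ridge regression. All function names and the feature representation here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fit_voxel_models(features, responses, alpha=1.0):
    """Fit one ridge-regression encoding model per voxel.

    features  : (n_timepoints, n_features) stimulus features per movie frame
    responses : (n_timepoints, n_voxels)  measured fMRI (BOLD) signal
    Returns a (n_features, n_voxels) weight matrix W such that
    features @ W approximates responses.
    """
    n_feat = features.shape[1]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'Y
    gram = features.T @ features + alpha * np.eye(n_feat)
    return np.linalg.solve(gram, features.T @ responses)

def predict_responses(features, weights):
    """Predict the brain activity a new stimulus would evoke."""
    return features @ weights
```

Once fit, the same weights can be run "forward" on footage the subject never saw, which is the step the reconstruction stage relies on.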

The second set of trailers was used to test the algorithm: the computer was fed 18 million seconds of random YouTube clips and predicted the brain activity that each clip would induce.

Then, a reconstruction of the original trailer was produced by merging the YouTube clips whose predicted brain activity was most similar to the activity actually recorded from the viewer. The end result is a bit blurry, but it represents a large step toward reconstructing the images humans see and process.
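That merging step can be pictured as a nearest-neighbour search followed by averaging: score every candidate clip by how well its predicted activity correlates with the observed scan, then blend the best matches. A minimal sketch, assuming a precomputed library of predicted responses (all names hypothetical):

```python
import numpy as np

def reconstruct(observed, predicted_library, clips, top_k=100):
    """Average the clips whose predicted brain activity best
    matches the observed activity.

    observed          : (n_voxels,) measured response to the unknown clip
    predicted_library : (n_clips, n_voxels) model-predicted responses
    clips             : (n_clips, height, width) greyscale frames
    """
    # Correlate the observed activity with each clip's predicted activity
    obs = observed - observed.mean()
    lib = predicted_library - predicted_library.mean(axis=1, keepdims=True)
    scores = (lib @ obs) / (np.linalg.norm(lib, axis=1)
                            * np.linalg.norm(obs) + 1e-12)
    best = np.argsort(scores)[::-1][:top_k]
    # Averaging many roughly-matching clips yields the blurry reconstruction
    return clips[best].mean(axis=0)
```

Averaging is why the published reconstructions look smeared: no single library clip matches exactly, so the output is a consensus of many near-misses.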

The team hopes that this research can lead to technology that deciphers what is happening in the minds of people who cannot communicate verbally, such as stroke victims and coma patients. Eventually, it could lead to interfaces that allow people with paralysis, for instance, to control machines with their minds.












Copyright 2017 DailyTech LLC.