Brain Videos!

For the first time ever, scientists have reconstructed video images from activity in the brain’s visual pathway, a new study reports.

The next step is obviously to develop contact lenses that a computer can wear.

By recording fMRI scans of volunteers’ brains as they watched various movie clips, the scientists were able to correlate neuronal firing patterns with certain aspects of visual images – like, say, coordinates or colors. A specially designed software program then looked back through the brain scans and assembled its own composite video clips corresponding to the patterns in the volunteers’ brain activity.

This might sound a little confusing, so let’s break the process down step-by-step.

As the journal Current Biology reports, a team led by UC Berkeley’s Jack Gallant began by showing movie trailers to volunteers as they lay in an fMRI scanner. The scans were fed into a computer system that recorded patterns in the volunteers’ brain activity. Then, the computer was shown 18 million seconds of random YouTube video, and it predicted the brain activity each clip would evoke, based on the patterns it had recorded from the volunteers.

The computer then chose the 100 YouTube clips whose predicted activity corresponded most closely to the actual brain activity observed in, say, “volunteer A,” and combined them into a “superclip.” Although the results are blurry and low-resolution, they match the video in the trailers pretty well.
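
If it helps to see that matching step in code, here’s a toy sketch – every name and number below is illustrative, standing in for the study’s real encoding models and 18-million-second library:

```python
import numpy as np

# Toy reconstruction step: given the brain activity a volunteer produced
# while watching a trailer frame, find the library clips whose *predicted*
# activity matches it best, and average them into a "superclip" frame.

rng = np.random.default_rng(0)

n_voxels = 2000            # voxels in visual cortex (illustrative)
n_library_clips = 5000     # stand-in for the 18-million-second library
clip_shape = (32, 32, 3)   # tiny blurry frames, for the sketch

# Predicted activity for every library clip (in the real study this comes
# from an encoding model fit to each volunteer; here it's just random).
predicted_activity = rng.standard_normal((n_library_clips, n_voxels))
library_frames = rng.random((n_library_clips, *clip_shape))

# Activity actually observed while the volunteer watched the trailer.
observed_activity = rng.standard_normal(n_voxels)

# Correlate the observed activity with each clip's predicted activity.
pred_z = predicted_activity - predicted_activity.mean(axis=1, keepdims=True)
pred_z /= predicted_activity.std(axis=1, keepdims=True)
obs_z = (observed_activity - observed_activity.mean()) / observed_activity.std()
similarity = pred_z @ obs_z / n_voxels

# Take the 100 best-matching clips and blend them together.
top_100 = np.argsort(similarity)[-100:]
superclip_frame = library_frames[top_100].mean(axis=0)
print(superclip_frame.shape)  # (32, 32, 3) – one blurry reconstructed frame
```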

While the human visual pathway works much like a camera in some ways – retinotopic maps, for instance, are laid out a lot like the Cartesian (X,Y) coordinates a TV screen uses – in many other areas, such as color and movement, there’s no direct resemblance between how a pattern of brain activity looks on a scan and what it actually encodes. So – as in so much of neuroscience – this study instead focused on finding correlations between certain brain activity patterns and certain aspects of visual data.
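
If you’re curious what “retinotopic” means in practice, here’s a toy illustration using the classic log-polar approximation of V1’s map – the function and constants below are illustrative, and none of this comes from the study itself:

```python
import numpy as np

# Toy retinotopy: V1's map of the visual field is often approximated by a
# log-polar ("complex log") transform of screen coordinates. This is the
# textbook approximation, not anything from the Gallant lab's paper.

def visual_field_to_cortex(x, y, a=0.5, k=15.0):
    """Map a screen position (degrees of visual angle) to an
    approximate cortical position (mm), via w = k * log(z + a)."""
    z = complex(x, y)        # treat the (X, Y) screen point as complex
    w = k * np.log(z + a)    # classic monopole model of V1 retinotopy
    return w.real, w.imag

# Points near the center of gaze get lots of cortex; the periphery gets
# squeezed – unlike a TV screen's uniform pixel grid.
for ecc in (0.5, 2.0, 8.0, 32.0):
    cx, cy = visual_field_to_cortex(ecc, 0.0)
    print(f"{ecc:5.1f} deg out -> {cx:6.1f} mm along cortex")
```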

Up until recently, one major problem with finding these correlations was that even the quickest fMRI scans – which record changes in blood flow throughout the brain – couldn’t keep up with the rapid changes in neuronal activity patterns as volunteers watched video clips. The researchers got around this problem by designing a two-stage model that analyzed both neural activity patterns and changes in blood flow:

The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies.
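
That’s dense, so here’s a minimal sketch of the “slow hemodynamics” half – assuming a common difference-of-gammas HRF shape and a made-up feature time course, not the authors’ actual model:

```python
import numpy as np
from scipy.stats import gamma

# Minimal sketch of why a two-stage model helps: fast visual features get
# smeared through a slow hemodynamic response before they show up in BOLD.
# The HRF shape below is a common difference-of-gammas approximation; the
# feature time course is made up.

tr = 1.0                     # one fMRI sample per second
t = np.arange(0, 30, tr)     # HRF support, ~30 seconds

# Canonical-style HRF: a peak around 5-6 s minus a small undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.sum()

# Stage 1 (fast): a visual feature flickering on and off second by second.
feature = (np.random.default_rng(1).random(200) > 0.7).astype(float)

# Stage 2 (slow): the sluggish BOLD signal a voxel tuned to that feature
# would produce – the feature train convolved with the HRF.
bold = np.convolve(feature, hrf)[: len(feature)]

print(feature[:10])          # crisp 0/1 events...
print(bold[:10].round(3))    # ...turned into a slow, smeared response
```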

In plain English, the system recorded both blood flow (BOLD) and voxelwise data, which map firing bursts from groups of neurons to sets of 3D coordinates. This gave the scientists plenty of info to feed the computer for its reconstructions. As this wonderful Gizmodo article explains it:

Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what most closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of video are not what the subject is seeing. They are random bits used just to compose the brain image.

In short, the composite clip isn’t an actual video of what a volunteer saw – it’s a rough reconstruction of what the computer thinks that person saw, based on activity patterns in the volunteer’s visual cortex.

Even so, I think it’s totally feasible that (as this article suggests) systems like this could someday let us re-watch our dreams when we wake up. My guess, though, is that we’ll be surprised at how swimmy and chaotic those visuals actually are – I mean, have you ever tried to focus on a specific object in a dream?

Then again, I would totally buy a box set of Alan Moore’s dreams.
