
Three Big Doubts About Brain-Mapping Efforts

Neuroscience research has come a hell of a long way since the days of scalpels and electrodes.

While some research teams are exploring the molecular machinery that churns at the hearts of nerve cells, others are working to assemble wiring diagrams for whole regions of the human brain. Just as biological science never looked the same once Watson and Crick explained the structure of DNA, neuroscience is transforming into a field filled with laser-controlled neurons, programmable stem cells and micro-scale brain scans.

Beyond all this excitement, though, looms a far more vast and ambitious goal – one whose scale and complexity exceed even the mapping of the human genome. Over the past several years, a growing group of scientists has been fighting for the idea that we can (and should) produce, within our lifetimes, a digital map of every function of every one of the trillions of synaptic connections in a human brain: A complete human connectome. Teams around the world, such as the minds behind the Human Connectome Project, are already working hard toward this goal, often freely sharing the data they discover along the way.

The Human Connectome Project’s first huge data sets are already freely available to scientists around the world.

Meanwhile, this February, the White House announced the launch of the BRAIN Initiative, a decade-spanning effort to build a “Brain Activity Map” or BAM – a simulation, in other words, of all the activity in a human brain, from the cellular level on up. The project’s launch budget is $100 million, and some scientists expect that costs will soar into the billions before it starts cranking out useful data.

Unsurprisingly, this has stirred up a hurricane of press coverage – not all of it positive. While some advocates of the BAM project promise that it’ll unleash a wealth of new cures for neurological and psychological diseases, opponents argue that even billions of dollars and years of research won’t be enough to decode the brain’s workings on such a comprehensive scale – especially if, as some anti-BAM pundits say, we’re still a long way from knowing how the brain even encodes information at all.

I’ve put together a little write-up on three of the biggest BAM bones of contention. Though I can’t cover the whole issue in detail with just one article, these summaries should help you score some points in a BAM-related argument – and give you some fuel for your own exploration. So let’s see what (some of) this fuss is all about.


Doubt #1: Do we have the computing power to simulate a whole human brain?

Nvidia's "Titan" supercomputer, which (as of April 2013) holds the world speed record of 20 petaflops.

Nvidia’s “Titan” supercomputer, which (as of April 2013) holds the world speed record of 20 petaflops.

The BAM invites a lot of comparisons – both positive and negative – with the Human Genome Project of the 1990s. Both are long-term projects, both are hugely expensive, and both involve number-crunching and analysis on scales that demand tight cooperation from top scientists and universities around the globe.

But whereas the Human Genome Project set out to map somewhere in the neighborhood of 20,000 to 25,000 genes, all of them constructed from the same four nucleotide molecules, a map of the human connectome would have to incorporate the behavior of roughly 86 billion neurons and as many as 150 trillion synapses – all communicating via a dizzying menagerie of messenger chemicals, not to mention physically reshaping themselves as a brain grows and learns.

Estimates vary widely on the question of how much computing power it’ll take to simulate a whole human brain, but even the most optimistic experts believe it’ll take a computer capable of performing at least 1 quintillion (that’s 1,000,000,000,000,000,000) floating point operations per second (1 exaflop). By comparison, your average home computer processor maxes out around 7 billion flops (7 gigaflops), a fast graphics card can reach over 300 billion flops (300 gigaflops), and the latest supercomputers clock in at a little over 20 quadrillion flops (20 petaflops). So, on that front at least, our resources are rapidly approaching the goal – scientists at Intel predict that we’ll be computing in exaflops before this decade is over.
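If you want a feel for that arithmetic yourself, here’s a quick back-of-envelope sketch in Python. Every parameter – the synapse count, the update rate, the per-synapse cost – is an illustrative assumption rather than a measured value, but it shows why the estimates land in exaflop territory:

```python
# Back-of-envelope: how much compute might a synapse-level brain simulation
# need, and how do today's machines compare? All parameters are assumptions.

SYNAPSES = 150e12               # upper-end estimate of synapses in one brain
TIMESTEP_HZ = 1000              # assume the model updates once per millisecond
FLOPS_PER_SYNAPSE_UPDATE = 10   # assumed cost of updating a single synapse

required = SYNAPSES * TIMESTEP_HZ * FLOPS_PER_SYNAPSE_UPDATE   # ~1.5e18 flops

machines = {
    "home CPU (~7 gigaflops)":       7e9,
    "fast graphics card (~300 GF)":  300e9,
    "Titan-class machine (~20 PF)":  20e15,
    "exascale machine (1 exaflop)":  1e18,
}

for name, flops in machines.items():
    # seconds of real computing per second of simulated brain time
    print(f"{name:30s} -> {required / flops:>15,.0f}x slower than real time")
```

Under these (very debatable) assumptions, even a 20-petaflop supercomputer runs about 75 times slower than real time, while an exascale machine comes within shouting distance of keeping up with a brain.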

But raw computing power is only one part of the equation. In the most basic sense, even the most advanced computer is just a machine that follows instructions – so even after we’ve built our exaflopping supercomputer, we’ll still need to know what instructions to give it.

[UPDATE! - May 8, 2013]

Carlos Brody, a neuroscientist at Princeton’s Brodylab, has added a clarification of his own to this section. Here’s what he has to say:

“I think Doubt #1 is about the European Human Brain project, not about the U.S.-based BRAIN Initiative. The way I’ve understood it, the Europeans, with their billion-euro Human Brain project, are trying to simulate every neuron in a brain. In contrast, the U.S.-based BRAIN Initiative/BAM is about developing the technology to allow us to record the activity of every neuron in a brain. Not simulate, but measure what’s there. It’s a big difference, because in order to simulate you have to build in a lot of knowledge we don’t yet have (i.e., put in a giant truckload of untested assumptions). That is largely why many people think the simulation effort is pointless, there’s so many untested assumptions going in that what you end up with may bear little to no relation to an actual brain. The goal of measuring the activity, as in BAM, is to gain that knowledge we don’t yet have.”

Thanks, Dr. Brody, for your insight into that distinction!

[END OF UPDATE]


Doubt #2: Do we know enough about brains to know what we’re attempting?

All human DNA is made up of just four “bases,” known as nucleotides: adenine, cytosine, guanine and thymine.

The Human Genome Project set out to map the position – but not necessarily the function – of each nucleotide in all 23 human chromosomes.

Contrary to oft-repeated belief, the Human Genome Project’s goal was never to decode the function of every gene in human DNA – it was to map (sequence) the order and position of every nucleotide molecule in all 23 human chromosomes.

Scientists have only begun to make a dent in decoding the 20,000+ genes whose positions the Human Genome Project mapped. Even today, leading researchers are still debating how many genes the human genome actually contains – let alone what functions most of those genes encode. And that’s more than half a century after Watson and Crick described, in detail, the way that DNA encodes recipes for manufacturing the molecules that make up our bodies.

When it comes to the brain, on the other hand, the world’s top neuroscientists are still puzzling over the question of how neural activity encodes information at all. We’re using computers to reconstruct videos of entire visual scenes based on the brain activity of people watching them – but only after scanning subjects’ brains as they watched hundreds of videos, then telling a computer to reverse the process and assemble a video that matches the brain activity patterns it sees.
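Under the hood, that “reversal” is essentially a matching game: learn a model that predicts brain activity from video features, then search a library of clips for the one whose predicted activity best matches what you actually recorded. Here’s a drastically simplified numpy sketch of that logic – the dimensions and data are entirely synthetic, invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_train, n_library = 200, 50, 500, 1000

# Step 1 (encoding): learn how video features map to voxel responses.
X_train = rng.standard_normal((n_train, n_features))   # features of watched clips
W_true = rng.standard_normal((n_features, n_voxels))   # the brain's "real" mapping
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_voxels))

# Ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# Step 2 (decoding): given a new brain pattern, predict the response to every
# clip in a big library and pick the clip whose prediction matches best.
library = rng.standard_normal((n_library, n_features))
secret = 42                                            # clip the subject "watched"
observed = library[secret] @ W_true + 0.1 * rng.standard_normal(n_voxels)

predicted = library @ W                                # one pattern per clip
pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
obs_z = (observed - observed.mean()) / observed.std()
best = int(np.argmax(pred_z @ obs_z))

print(f"watched clip: {secret}, decoded clip: {best}")  # should match
```

The published work is far more sophisticated than this, but the skeleton – record, model, match – is the same idea.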

This is no small achievement, to be sure – but even so, it’s sorta like learning to recognize whether the letters in a book are Chinese, Japanese or Arabic (assuming you don’t read any of those languages). You might be able to match a new book with the country that produced it, and maybe even recognize whether it’s, say, a novel or a dictionary. But none of that tells you much of anything about what a specific line on the page actually says.

This is one of the trickiest questions for BAM advocates to answer – and the answers tend to come in two main flavors. One response is that the fastest way to crack the neural code is to try simulating it digitally – just as the fastest way to learn a new language is to start writing and speaking it yourself. Another response is that a base-level understanding of the code may not be necessary for a rich and detailed understanding of how a brain works. Scientists have already mapped the connections and interactions of all 302 neurons in the nervous system of the tiny roundworm known as C. elegans. Even without knowing exactly how these neurons encode information, we’ve still built up a precise understanding of how each of them influences other neurons and muscle cells throughout the worm’s body.
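If you’re wondering what a map of “connections and interactions” even looks like in practice, the standard answer is a weighted, directed graph. Here’s a toy Python sketch – the neuron names below are real C. elegans cells, but every connection and weight is invented for illustration, not taken from the actual wiring data:

```python
# Toy connectome as a weighted, directed graph. Neuron names are real
# C. elegans cells; the connections and weights are illustrative only.

connectome = {
    # presynaptic: {postsynaptic: weight (+ excites, - inhibits)}
    "PLML": {"PVCL": +2.0},        # tail touch sensor -> command interneuron
    "PVCL": {"AVBL": +3.0},        # command interneuron -> forward circuit
    "AVAL": {"AVBL": -1.5},        # backward command inhibits forward drive
    "AVBL": {"muscle": +4.0},      # drives body-wall muscles
}

def trace(neuron, depth=0, seen=None):
    """Print everything a neuron influences, directly or indirectly."""
    seen = set() if seen is None else seen
    for target, w in connectome.get(neuron, {}).items():
        print("  " * depth + f"{neuron} -> {target} (w={w:+.1f})")
        if target not in seen:
            seen.add(target)
            trace(target, depth + 1, seen)

trace("PLML")   # follow a touch stimulus all the way to the muscles
```

Swap in the full 302-neuron dataset, and questions like “what does this neuron ultimately influence?” become a single graph traversal – exactly the kind of query a whole-brain map would let us run at human scale.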

Although the human brain’s 86 billion neurons aren’t exactly a small step up from C. elegans’s 302, it stands to reason that if we do develop software and hardware that can simulate all our neurons’ interactions, we’ll be in a much better position to pinpoint specific processes and problems down at the cellular level.


Doubt #3: Will an epic mapping project produce useful results?

Just as no two humans share exactly the same set of genes, no two human brains are wired in exactly the same way.

BAM critics like to draw a third unflattering parallel between the BAM project and the Human Genome Project: As the Human Genome Project approached completion, its White House advocates predicted that a sequenced human genome would lead to cures for diseases like cancer and Alzheimer’s, along with “a complete transformation in therapeutic medicine.” But more than a decade after the Project’s completion, very few of those medical benefits have actually materialized.

What has resulted from the Human Genome Project is a vast storehouse of data on how human DNA differs from that of other animals – and from one human being to another. This means that when we consider the outcome of the BAM, it’s important to keep our sights not on vague and grandiose promises about cures for poorly understood problems, but on what we can be sure would come out of a successful BAM project: A more detailed, accurate and integrated understanding of the human brain’s workings than we’ve ever had before.

If one thing about the BAM is certain, it’s that the project’s news coverage – and the intensity of the debates that coverage stirs up – will increase in step with the BRAIN Initiative’s funding demands and timing estimates. As I said at the beginning of this article, a few thousand words aren’t nearly enough to cover all the ink that’s already been spilled in the earliest stages of this debate – so jump into the comments and chime in with your own opinions, doubts, speculations and questions. Because in the end, the only way to resolve an argument is to talk it out.

Roundtable Round 1

“Engineering a Mind (Part 1)” — Podcast 6: David Saintloth and Wai Tsang

Episode 6 of the Connectome podcast brings together two guests who are obsessed with understanding how intelligence and thinking work – not by studying patients in MRI scanners, but by working to develop software that recognizes patterns and connections in the same way a brain does.

Our guests are David Saintloth, a software engineer who’s working on programs that use a technique he calls “action-oriented workflow” to proactively learn and adapt as they find connections between data patterns; and Wai H. Tsang, a thinker, lecturer, futurist and software programmer who champions what he calls the “fractal brain theory”: the idea that everything the brain does can be described in terms of a single type of fractal pattern.

To be fair, many of the ideas we discuss here – or, at least, very similar ones – have already been developed in detail by theorists like Jeff Hawkins and Ray Kurzweil. So our goal here is simply to share and compare what we’ve each learned so far, and bring you in on our conversation. We’re looking forward to dialoguing and debating with you in the comments.

Who knows – maybe you’ve noticed something all three of us are totally missing.


Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club)


“Senses That Bleed” — Podcast 5: David Eagleman

On episode 5 of the Connectome podcast, I chat with David Eagleman, author of the international bestseller Incognito: The Secret Lives of the Brain. Eagleman’s lab mainly studies the ways our brains encode sensory perceptions – but as you’ll hear, he’s also fascinated by questions on the nature of consciousness, synesthesia, meaning and representation, and even the potential development of new human senses.

Eagleman starts by talking about his new paper on overlearned sequences (word lists like days of the week or months of the year), but he also describes some findings in a 2009 paper he authored. Both papers are available for free online, and they’re intriguing to read.


 

Oh, and here’s a diagram that may come in handy as you’re listening. It might look a little confusing now, but it’s actually a nice visual aid as we dig into the details of Eagleman’s research.

Now sit back, relax, and get ready to have your mind blown.


Enjoy, and feel free to email us questions and suggestions for next time!
(Produced by Devin O’Neill at The Armageddon Club)


The Top 5 Neuroscience Breakthroughs of 2012

More than any year before, 2012 was the year neuroscience exploded into pop culture. From mind-controlled robot hands to cyborg animals to TV specials to triumphant books, brain breakthroughs were tearing up the airwaves and the internets. From all the thrilling neurological adventures we covered over the past year, we’ve collected five stories we want to make absolutely sure you didn’t miss.

Now, no matter how scientific our topic is, any Top 5 list is going to turn out somewhat subjective. For one thing, we certainly didn’t cover every neuroscience paper published in 2012 – we like to pick and choose the stories that seem most interesting to us, and leave the whole “100 percent daily coverage” free-for-all to excellent sites like ScienceDaily.

As you may’ve also noticed, we tend to steer clear of headlines like “Brain Region Responsible for [X] Discovered!” because – as Ben talks about with Matt Wall in this interview – those kinds of discoveries are usually as vague and misleading as they are overblown by the press.

Instead, we chose to focus on five discoveries that carry some of the most profound implications of any research published this past year – both for brain science, and for our struggle to understand our own consciousness.

So on that note, here – in countdown order – are the five discoveries that got us the most pumped up in 2012!

 

5. A Roadmap of Brain Wiring

A grid of fibers, bein’ all interwoven and stuff.

Neuroscientists like to compare the task of unraveling the brain’s connections to the frustration of untangling the cords beneath your computer desk – except that in the brain, there are tens of billions of cords, and at least one hundred trillion plugs. Even with our most advanced computers, some researchers were despairing of ever seeing a complete connectivity map of the human brain in our lifetimes. But thanks to a team led by Van Wedeen at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, 2012 gave us an unexpectedly clear glimpse of our brains’ large-scale wiring patterns. As it turns out, the overall pattern isn’t so much a tangle as a fabric – an intricate, multi-layered grid of cross-hatched neural highways. What’s more, it looks like our brains share this grid pattern with many other species. We’re still a long way from decoding how most of this wiring functions, but this is a big step in the right direction.

 

4. Laser-Controlled Desire

Scientists have been stimulating rats’ pleasure centers since the 1950s – but 2012 saw the widespread adoption of a new brain-stimulation method that makes all those wires and incisions look positively crude. Researchers in the blossoming field of optogenetics develop delicate devices that control the firing of targeted groups of neurons – using only light itself. By hooking rats up to a tiny fiber-optic cable and firing lasers directly into their brains, a team led by Garret D. Stuber at the University of North Carolina at Chapel Hill School of Medicine was able to isolate specific neurochemical shifts that cause rats to feel pleasure or anxiety – and switch between them at will. This method isn’t only more precise than electrical stimulation – it’s also much less damaging to the animals.

(Thanks again, Mike Robinson, for sharing this image of your team’s laser-controlled brain!)

 

3. Programmable Brain Cells

Pluripotent stem cell research took off like a rocket in 2012. After discovering that skin cells can be genetically reprogrammed into stem cells, which can in turn be reprogrammed into just about any cell in the human body, a team led by Sheng Ding at UCSF managed to engineer a working network of newborn neurons from a harvest of old skin cells. In other words, the team didn’t just convert skin cells into stem cells, then into neurons – they actually kept the batch of neurons alive and functional long enough to self-organize into a primitive neural network. In the near future, it’s likely that we’ll be treating many kinds of brain injuries by growing brand-new neurons from other kinds of cells in a patient’s own body. This is already close on the horizon for liver and heart cells – but the thought of being able to technologically shape the re-growth of a damaged brain is even more exciting.

 

2. Memories on Disc

We’ve talked a lot about how easily our brains can modify and rewrite our long-term memories of facts and scenarios. In 2012, though, researchers went Full Mad Scientist with the implications of this knowledge, and blew some mouse minds in the process. One team, led by Mark Mayford of the Scripps Research Institute, used genetic tagging techniques that let scientists label the ensemble of neurons encoding a mouse’s memory of a familiar place – and then reactivate that memory on demand. Mayford’s team figured out how to turn specific mouse memories on and off with the flick of a switch – but they were just getting warmed up. The researchers then reactivated the stored memory of one place while a mouse was forming a memory of a second, different place. The result was a bizarre “hybrid memory” – familiarity with a combined place the mouse had never actually visited. Well, not in the flesh, anyway.

 

1. Videos of Thoughts

Our most exciting neuroscience discovery of 2012 is also one of the most controversial. A team of researchers from the Gallant lab at UC Berkeley discovered a way to reconstruct videos of entire scenes from neural activity in a person’s visual cortex. Those on the cautionary side emphasize that activity in the visual cortex is fairly easy to decode (relatively speaking, of course) and that we’re still a long, long way from decoding videos of imaginary voyages or emotional palettes. In fact, from one perspective, this isn’t much different from converting one file format into another. On the other hand, though, these videos offer the first hints of the technological reality our children may inhabit: A world where the boundaries between the objective external world and our individual subjective experiences are gradually blurred and broken down. When it comes to transforming our relationship with our own consciousness – and those of the people around us – it doesn’t get much more profound than that.

 

So there you have it: Our picks for 2012’s most potentially transformative neuroscience breakthroughs. Did we miss an important one? Did we overstate the importance of something? Almost certainly, yes. So jump into the comments and let us know!


Deciphering Sleep: Our Interview with David Rye

Why do we need to sleep? In all of human biology, few questions are more persistent – or more mythologized – than this one. Almost as puzzling as sleep itself are sleep disorders like narcolepsy and insomnia, which make us wonder why some of us need so much more sleep than others do.

David Rye, a neurologist at Emory University’s School of Medicine, thinks he may finally have some answers to these age-old questions. While studying hypersomnia, an unusual disorder characterized by long yet unsatisfying sleep, David and his co-researchers discovered a new brain chemical that may not only be responsible for hypersomnia, but could help explain why we need to sleep at all.

As soon as I read David’s paper, I knew I had to call him up and talk in more detail. I think you’ll agree that his ideas could turn out to have big implications for all of us.    –Ben
 
 
Ben Thomas: How’d you get interested in studying hypersomnia, David? Was there a particular patient…?

David Rye: Any sleep physician will see hypersomnia patients from time to time. I’ve been doing this since 1992, and you sometimes run into patients like this: They’re fairly young, but they’re just addicted to sleep. They sleep for long periods of time, and they’re very efficient sleepers – they tend to sleep deep, and they can sleep through alarms and fires and sometimes even bombs. But despite sleeping all that time, they wake up very unrefreshed and they’re still sleepy during the day.

BT: And you’ve pointed out that this pattern is different from what we see in narcolepsy.

DR: Hypersomnia has been labeled with so many inappropriate names, and “narcolepsy” is the biggest one we hear. But narcolepsy has been classified very precisely into several categories – for instance, Type I narcolepsy, which is characterized by the loss of this neuropeptide chemical known as hypocretin. And narcolepsy – the literal translation is “to be seized by sleep.” Narcoleptics don’t actually sleep more than healthy people over an average 24-hour period; their disorder is characterized by sudden attacks of sleepiness. So to characterize a person with hypersomnia as “narcoleptic” is really a disservice to the language.

BT: What are some unique features of hypersomnia?

DR: Hypersomnia patients don’t really respond to traditional stimulants – they can still sleep for hours even after taking a large dose of caffeine, or an amphetamine. And what they usually describe is that these stimulants make them feel physically awake, but they never quite wake up mentally – they feel as if they’re in a fog. Academics at sleep centers around the country agree that they’ve seen these patients, but no one knew exactly how to treat them.

BT: Could there be such a thing as “hypersomnia with narcoleptic symptoms,” or are these two disorders completely distinct?

DR: I think it’s closer to the former, which makes the process of diagnosing these disorders even more confusing. There are people who are labeled “narcoleptic” because they fall asleep during the day and go right into a dream – and this is characteristic of the “genuine” narcolepsy I was just mentioning. But in reality, recent research has found that many narcoleptics are more like our hypersomnia patients – they sleep for long periods of time during the day.

BT: So how do you distinguish these diagnoses in the clinic?

DR: Well, see, that’s what makes this even trickier: Patients don’t come into a clinic complaining that they fall directly into dream states – that’s just not a complaint you’d hear. So my question is, why are we classifying patients as “narcoleptic” based on whether or not they fall into a dream state when they have a nap during the day? As physicians, we start diagnosing a new disorder based on clusters of signs and symptoms, rather than on underlying biological processes – but as time goes by and we come to better understand the biological basis for a particular disorder, we update our diagnostic criteria so we can more accurately categorize the patients we see. And that’s been a huge uphill battle for these sleep disorders, because we’re only beginning to understand their biochemical causes.

BT: Do we have any idea, neurochemically, why hypersomnia symptoms are so unusual?

DR: I think we’re putting our finger on it with this research. The traditional approach is sort of a “Western” approach – what I mean is, when Western neuroscience deals with a problem, we have a bias to characterize the problem as “something’s missing.” In this case, that plays out as, “This patient’s wake-up system isn’t working properly,” so we say, “Give it more stuff to make it awake.”

BT: But that’s focusing on the wrong issue.

DR: Right. By analogy, we’re pushing the accelerator pedal – but the problem isn’t that we need more gas; it’s that the parking brake is still on. The “parking brake” is gamma-aminobutyric acid (GABA), which is an inhibitory neurotransmitter chemical that helps promote sleep. And that parking brake needs to be disengaged before we can hit the gas and hope to actually get moving.

BT: And in this study, you looked at substances that interact with GABA receptors, and isolated a chemical that’s strongly correlated with hypersomnia.

DR: Exactly. We’ve characterized this chemical quite well – in fact, when we submitted our paper to the New England Journal of Medicine, they told us they were impressed with the level of detail in our pharmacological analysis of this chemical’s interaction with GABA receptors. The real question now is, is this biological agent unique to the types of hypersomnia we studied, or might it also be a factor in other known types of narcolepsy that we didn’t look at? Or might it be part of a general-purpose pathway to sleepiness, common to all human brains? That’s actually the explanation we’re leaning toward, though you can’t make that claim from our data in this particular paper.

BT: There’s been a lot of media blitz around this study since you announced your results. Why do you think this work resonates so strongly with the general public?

DR: I think it speaks, first of all, to our culture’s traditional and literary fascination with sleep. There’s Sleeping Beauty, Rip Van Winkle – for centuries, we’ve been enamored with stories of people who sleep for long periods of time. It’s also the opposite side of the coin of insomnia, which is something a lot of us struggle with. When I talk to reporters about hypersomnia, one comment I hear a lot is, “Can you give me a dose of that?” They’re jealous! So I think it speaks to a lot of topics that puzzle and intrigue us.

BT: It sounds like people find the actual experience of hypersomnia difficult to relate to, even if they’re jealous of it.

DR: And yet, what our data really seem to suggest is that hypersomnia is just one end of a spectrum – a bell curve – that includes all of us. The majority of us sleep for seven and a half, eight hours – and what we’re showing here is that even people who need to sleep nine or ten hours are probably just less extreme variants from the center of that curve. So my wife and I joke that we can definitely relate to this – when her family come to visit for the holidays, they can sleep for nine hours and take a two-hour nap in the afternoon.

BT: So this data could give us insight into why some people are fine with six hours of sleep, while others are drowsy if they get less than nine.

DR: I think that’s the point. Beyond the direct implications of being able to treat hypersomnia, we’re getting some good insights into how normal sleep is generated – and maybe even why we need to sleep at all.

BT: That question’s always intrigued me – it’s definitely one of the all-time great science mysteries.

DR: Well, if we know what this molecule is, and what chemical pathways it’s involved in, and we can make specific statements like, “It accumulates in excess under these certain conditions,” I think we’re gonna be pretty damn close to understanding why we sleep.

BT: That’ll be really interesting to see. I’m excited to hear about where that research goes.

DR: We’ve actually gathered quite a bit of sleep-related data that didn’t make it into this paper, so we’ll be publishing more about our findings in the near future. So stay tuned.
 
 

I definitely will, and I’ll do my best to keep all of you in the loop, too. Thanks for joining us – we’ve got lots more exciting interviews coming up soon!


Science Fights Back With Open Access

A major paradigm shift is taking the science world by storm. Open source is taking over.

For more than a century, scientists have depended on peer-reviewed journals to keep them up to date on the latest research. But as many of these journals have raised their subscription fees to bank-breaking levels, and locked life-saving research behind exorbitant paywalls, the gloves are finally coming off. Thousands of researchers are fighting back by boycotting publishers, submitting their papers to open-access journals like PLOS ONE, and – most excitingly of all – making their datasets freely available online, for everyone.

As of September 2012, PLOS ONE appears to be the biggest scientific journal in the world.

But that’s just the beginning. In October 2012, an international team of neuroscientists known as the CONNECT Consortium released the first micro-structure atlas of the human brain: A massive open-access database of brain connectivity down to the micron scale. Quite a bit of their data is already available on their website, to be used with free open-source brain imaging programs like OpenWalnut and BrainVISA. Anyone with a computer and a willingness to tinker can play with the same data that professional neuroscientists are using.


The CONNECT project has already led to dozens of breakthroughs, including more precise techniques for modeling white matter wiring, advanced technologies for preserving live samples of neural tissue, and more than 80 new peer-reviewed journal papers based on the data the project assembled. And the work continues today – the team plans to continue gathering data, refining their techniques, and releasing new tools and models to scientists and the public.

The CONNECT Consortium aren’t the only ones with this dream. The Human Connectome Project, which aims to map the wiring of an entire human brain, makes intricate reconstructions of brain structure and function available to anyone who qualifies for an account. If you’re in the mood for instant access, the Open Connectome Project will let you explore the brains of various animals online, right now. BrainMaps.org also has loads of models for you to play with.

If you’d rather gather your own data, grab a $299 neuroheadset and check out OpenVIBE, a free open-source program that helps you record your brainwaves, design your own experiments, and even create video games and art projects controlled by your brain activity. Though the EEG recordings you’ll get with a neuroheadset aren’t as deep or precise as the fMRI and DTI data used in many imaging studies, plenty of modern research is still EEG-based. It’s all about choosing the right tool for the right experiment.
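Once you’ve recorded a trace, the classic first analysis is band power – for instance, how much of your signal lives in the 8–12 Hz “alpha” rhythm that strengthens when you close your eyes. Here’s a minimal numpy sketch on a synthetic signal (headset SDKs and OpenVIBE each have their own data formats, so treat this as the bare math rather than a recipe for any particular device):

```python
import numpy as np

fs = 128                        # sampling rate in Hz (typical for consumer EEG)
t = np.arange(0, 10, 1 / fs)    # ten seconds of samples

# Synthetic "EEG": a 10 Hz alpha wave buried in noise. With a real headset,
# you'd load a recorded channel here instead.
rng = np.random.default_rng(0)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(eeg)) ** 2        # power spectrum
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)    # frequency of each bin

alpha = (freqs >= 8) & (freqs <= 12)
broad = (freqs >= 1) & (freqs <= 40)
print(f"alpha share of 1-40 Hz power: {power[alpha].sum() / power[broad].sum():.0%}")
```

Run it with your eyes-closed and eyes-open recordings and compare the two numbers – that contrast is one of the oldest experiments in all of EEG.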

What is a scientist, really? Anyone who’s willing to dig into the data, test expectations, and revise conclusions when they don’t agree with reality. If you agree that it’s more important to be true to the facts than to win an argument, you’re already a scientist at heart.

It’s like in the old quote attributed to the Buddha:

Do not believe in anything simply because you have heard it… But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it.

Every day, the world is metamorphosing. The landscape of science is shifting and quaking. Have you ever dreamed of making a scientific discovery? What’s stopping you?


Q&A: Can We Preserve Our Brains After Death?

As promised, here’s the first-ever official Connectome Q&A! We’ve been getting lots of incoming questions on our Facebook and Twitter pages – some of them on the technical side; others of the more “general interest” variety. Most of these questions require pretty involved answers – and it’s important to me that each of them gets the full treatment it deserves.

So for today’s Q&A, I’ve decided to focus on just one question. That doesn’t mean we’ve forgotten the others, though – we keep everything archived, and we’ll be doing plenty more Q&As in the future. So keep sending those neuroscience queries and article requests our way.

And now, let’s get into today’s question!     —Ben

 

Q. “What is the best current method to preserve a connectome or whole brain – cryonics, vitrification, chemical brain preservation or something else?”     Ivan Smirnov, via Facebook

 

A. There are actually two different questions here: 1) “What’s the best way to preserve a brain?” and 2) “What’s the best way to preserve all the information in a brain?” There’s also a third question that’s sorta implied by the first two: “Is it possible to preserve a person’s ‘self’ by preserving that person’s connectome?” Let’s take these questions one-by-one.

When it comes to preserving brain tissue, the most successful and widely used technique available today is cryopreservation: cooling a brain to sub-zero temperatures, which stops all biological processes. Techniques like slow programmable freezing (SPF), along with cryoprotectant chemicals, can protect cells from being damaged by freezing. In fact, laboratories around the world use SPF and cryoprotectants to preserve living cells, organs – and even some simple organisms – for later resurrection.

A technique that shows some promise for the near future is neurovitrification, which transforms the water in living neural tissue into a glass-like solid, allowing cells to be frozen with minimal damage. Though vitrification already preserves some tissues and small organs quite well, we’re still years away from being able to freeze and resurrect a mammalian brain – let alone a human one. In theory, a perfected neurovitrification technique would preserve living neurons right down to the molecular level, locking every atom of every synapse in place.

That kind of lockdown would be crucial for preserving and resurrecting a brain, because much of the brain’s information may be stored not just in its physical connections, but in electrochemical states that shift and morph from moment to moment. Keeping those synaptic molecules in place might be the only way to resurrect a brain without “wiping its hard drive.” For the same reason, it’d be vital (no pun intended) to preserve a brain before its owner had died; once those neurons have stopped firing, much of the brain’s stored information may be lost forever.


This brings us to another approach: preserving the information encoded in a brain’s connectome. The idea here is that if we can understand precisely how the brain encodes various kinds of information, we could record and save a digital “copy” of its connectome – perhaps without having to map the location of every molecule in every synapse – and, helpfully, without having to freeze a living person (possibly ending his/her life and thus defeating the whole point).

To understand how a connectome does what it does, efforts like the Human Connectome Project, the Whole Brain Catalog and the Open Connectome Project aim to produce functional simulations of brain activity at all scales. Before we can build a digital copy of a connectome, we need to understand exactly what needs to be copied, and how it’s supposed to work.

Connectome naysayers claim that even this won’t be enough. They draw comparisons between the Human Connectome Project and its earlier cousin, the Human Genome Project (HGP). Although the HGP has led to plenty of breakthroughs, we’re still a long way from understanding what most of our genes actually do. And whereas a human genome has about 3,000,000,000 (3 billion) base pairs, a human connectome may involve as many as 100,000,000,000,000 (one hundred trillion) synaptic connections – all of them constantly transforming and exchanging information with their neighbors. At the very least, we’re going to need some major innovations in computing power, and a vastly expanded program of data-gathering, before we can simulate an entire human connectome at the synaptic level.
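To make that scale gap concrete, here’s the arithmetic as a few lines of Python. The bytes-per-synapse figure is a deliberately generous assumption – a real map would need far more than a bare connection list:

```python
GENOME_BASE_PAIRS = 3e9       # base pairs in a human genome
SYNAPSES = 100e12             # high-end estimate of synapses in a human brain
BYTES_PER_SYNAPSE = 16        # assumed: two neuron IDs, a weight, a type tag

genome_bytes = GENOME_BASE_PAIRS / 4             # 2 bits per base = 4 bases/byte
connectome_bytes = SYNAPSES * BYTES_PER_SYNAPSE

print(f"genome:     ~{genome_bytes / 1e9:.2f} GB")       # fits on a single DVD
print(f"connectome: ~{connectome_bytes / 1e15:.1f} PB")  # ~1,600 terabytes
```

And that’s just a static snapshot – it says nothing about capturing the moment-to-moment dynamics a working simulation would have to reproduce.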

Of course, none of this tells us anything about whether preserving a person’s connectome or brain – even flawlessly – will preserve that person’s mind/self/individuality. Sebastian Seung, one of the pioneers of connectomics, has called the question of immortality “the only truly interesting problem in science and technology.”

We know that some animals, such as frogs and salamanders, can be frozen for long periods, thawed, and awakened to keep going about their amphibian business. So far, though, no one’s been able to do this with a mammal – so it’s unclear whether it’d be possible to resurrect your dog or cat, spunky personality and all. (Unsurprisingly, that hasn’t stopped wealthy pet owners from putting Rover on ice just in case.)

As with the connectome mapping projects, the problem with preserving a personality is that no one knows exactly what needs to be preserved: living nerve cells, positions of neurotransmitter molecules, and/or certain kinds of neurochemically encoded data might all be necessary pieces of the puzzle.

In other words, what works for frogs and salamanders may or may not work for mammals like us. It’s unclear whether what we call a “self” is correlated with stable body temperature, or even with a cerebral cortex – for example, there’s some evidence that octopuses may have a form of self-awareness, and they’re ectothermic and have very different neural wiring from vertebrates.

It’s also very unlikely that one day in vertebrate evolution, the “self” just popped into being – more likely, self-awareness arose gradually out of the interactions among many self-referential circuits of brain activity; the same goes for personalities, emotions, hopes, fears, and all the other vaguely defined ghosts that haunt our machinery.

What this implies is that the processes of understanding, disassembling and reverse-engineering the “self” are going to be very gradual ones. It’s likely that we’ll spend decades studying correlations between neural activity and subjective experience before we understand exactly how they correlate – which means that for the foreseeable future, the only one who’ll know for certain whether a resurrected brain contains “the true-blue you” will be …well… you.


Brain Scans and Bold Plans: Our Interview with Matt Wall

Sometimes, a conversation takes you to places you never would’ve expected. Matt Wall and I struck up a chat about brain-scanning technology early this year, and he mentioned that he’d like to do an interview for The Connectome.

Since he’s got 5+ years of published brain research under his belt, I jumped at the chance.

I figured we’d be talking about what life’s like in an fMRI lab, and maybe about some recent discoveries – but we wound up chatting about science misconceptions, the nature of pain, post-traumatic stress disorder and psychedelic drug trips (among other things). I think you’ll enjoy hearing his insights as much as I enjoyed picking his brain.    –Ben
 

Ben Thomas: So, Matt, how’d you get involved in fMRI research?
 
Matt Wall: My bachelor’s degree was actually in psychology. I earned my Ph.D. in Cambridge, and I started working with fMRI in my post-doctoral work – first in a vision lab, looking at low-level visual physiology, and then working on the spinal cord. So I keep getting further and further away from my roots as a psychologist.
 
BT: And you’ve done some work in cognitive neuroscience as well. So, quite a broad range of topics. What’s the unifying theme in all this work?
 
MW: Well, in a sense, I’ve just been moving from one field to another as I find interesting projects – but the unifying theme is really the fMRI itself. I’ve come away from each of these projects with a richer understanding of the technology. fMRI is such a technical specialty that it often takes years to become familiar enough with the technology to understand how to perform effective research with it. Once you’ve got that understanding, though, you can apply it to studying lots of different areas of neuroscience.
 
BT: Once you’ve reached that level of familiarity, what kinds of tasks are you expected to be involved in? What’s day-to-day life like in an fMRI lab?
 
MW: My latest project has been studying reactions to pain – so a lot of my day-to-day work has been strapping electrodes to people’s heads, giving them electrical shocks, and recording their brain activity. Which sounds pretty sadistic, but we’re doing it for a good reason: We’re particularly interested in the trigeminal nerve, and its role in processing pain signals. Although this nerve is technically part of the peripheral nervous system, you can – in theory – get BOLD (blood oxygen level-dependent) fMRI signals from it, as you can from, say, the cerebral cortex. Our goal has been to try to figure out where exactly in the pain system a given drug starts messing with the signal. We haven’t quite managed to get things working as we’d hoped, though, and my contract with that lab is about to be up. But it’s been an interesting project to work on.
 
BT: To backtrack just a bit, you mentioned looking for BOLD signals, which is a great opportunity to talk about a common source of confusion I see in articles on brain studies: fMRI doesn’t directly measure neural activity at all.
 
MW: Right. What fMRI is actually tracking is the ratio of oxygenated to deoxygenated hemoglobin in a given area – basically, the oxygen level in the blood pumping through that region of the brain. One thing we know about the brain’s vascular system is that it tends to overcompensate; so the more active a certain area of the brain becomes, the more oxygen it demands, and the more oxygen the vascular system dumps into it.
 
BT: So fMRI is measuring the level of oxygen in that area, but not what that area’s doing – it could be processing input from some other area; it could be sending out excitatory or inhibitory signals, or a mixture of the two…
 
MW: That’s one of the trickiest things about analyzing fMRI data. You’re comparing brain activity during a task with “control” activity – i.e., activity in that same area when the task isn’t being performed – and sometimes you notice an increase in activity in certain regions during the task, and a decrease in others, which were more active in the control condition. So there’s often the possibility that an increase of activity in one area could mean it’s inhibiting activity in another. But it’s very difficult to be sure about those kinds of causal interpretations.
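[In code terms, the basic contrast Matt is describing boils down to comparing each voxel’s signal between task and control conditions. Here’s a bare-bones sketch with synthetic data – real pipelines like SPM or FSL add motion correction, hemodynamic modeling and multiple-comparisons control on top of this. –Ben]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_task, n_rest = 1000, 40, 40

rest = rng.standard_normal((n_rest, n_voxels))   # control-condition scans
task = rng.standard_normal((n_task, n_voxels))   # task-condition scans
task[:, :50] += 1.0     # pretend 50 voxels respond MORE during the task
task[:, 50:60] -= 1.0   # ...and 10 respond LESS (the tricky case above)

t, p = stats.ttest_ind(task, rest, axis=0)       # one t-test per voxel
sig = p < 0.001         # naive threshold; real studies correct for 1,000 tests

print(f"voxels more active during task: {np.sum(sig & (t > 0))}")
print(f"voxels less active during task: {np.sum(sig & (t < 0))}")
```

Notice that the output only says which voxels differ – it can’t tell you whether a hot spot is doing the work or suppressing something else, which is exactly the interpretive gap Matt is pointing at.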
 
BT: And yet every day we’re reading headlines like, “Brain Area Responsible for [X] Discovered!” All we really know is that’s an area that becomes more active during task [X]. It could be processing that task, or it could be inhibiting another region that interferes with the task.
 
MW: A better way to phrase it would be, “Brain Area Associated with [X] Discovered.” It’s quite a leap to say that activity in a certain area is directly causing a certain effect – especially if that supposed effect is something as complex as a whole behavior or a personality trait.
 
BT: Yeah, no kidding. So, now that this pain research is wrapping up, what’s on the horizon for you?
 
MW: Lately I’ve been getting into neuropharmacology. I recently wrapped up some work for GlaxoSmithKline, at a lab that was studying pharmaceutical drugs before they went to market. But my most recent project has actually been a TV series exploring the effects of recreational drugs on the brain. I was just a relatively small part of a large team working on the program – the major players were Prof. David Nutt of Imperial College and Prof. Val Curran of UCL, with Dr. Robin Carhart-Harris being the principal investigator. Most recently we performed fMRI scans on people who were on quite large doses of MDMA (ecstasy). So that’s been fun.
 
BT: Any interesting discoveries yet?
 
MW: A few things, actually; yeah. The point of the MDMA episode was to examine the therapeutic uses of that drug. MDMA was used in clinical settings quite a lot in the 1970s and early 1980s – but once it became known as a street drug, all that research stopped, and a lot of misinformation started spreading.
 
BT: The claims that it caused “holes in the brain” and all that.
 
MW: Right. But there’s some evidence that MDMA can be used to help treat PTSD (post-traumatic stress disorder). A therapist can walk patients through memories of their traumatic experiences while they’re on ecstasy; they’re still lucid about the details of the memory, but the drug seems to counteract a lot of the agitation and distress they’d normally feel. So after a few walkthroughs of the traumatic memory while on MDMA, the patients develop the ability to think through the details of the experience without suffering that intense anxiety.
 
BT: And what’s fMRI telling you about how that works neurologically?
 
MW: What we’re finding so far is that MDMA usage is correlated with increased activity in the visual cortex (vs. placebo) during recall of emotionally positive memories. And that lines up with our subjects’ reports – they generally report that those memories are much more vivid than usual when they’re on MDMA. And during recall of emotionally negative memories, we’re seeing that MDMA use is correlated with much lower activity in the parahippocampal gyrus, which is involved in memory retrieval, and in the amygdala, which is associated with negative emotions like fear and disgust. So it’s possible that MDMA inhibits the negative emotions associated with those memories.
 
BT: Sounds like MDMA shows some promise for clinical treatment.
 
MW: I think that’s likely. Oh, and there was also another episode where we were studying the effects of psilocybin, the chemical in “magic mushrooms.” I took part in that one myself, which was quite interesting. It was a crazy experience, actually. A real roller coaster.
 
BT: The last time a lot of these drugs were seriously studied was back in the ’60s, when they were still legal. And back in those days, we didn’t have fMRI; we had EEG (electroencephalography), which really just measures electrical activity across the scalp. So I’m really excited to see what we’ll discover now that we can correlate activity in a specific brain region with all these perceptual distortions and expansions that psychedelic drug users describe.
 
MW: I think the tide of public opinion is starting to shift. MDMA, psilocybin, LSD, and so on are really powerful mind-altering compounds. There’s a lot of anecdotal evidence about them; and so far, only a few studies demonstrating that, say, psilocybin can alleviate depression in people who aren’t responsive to other treatments. These are very small studies; it’s tough to get bigger funding and wider sample groups, because, as you say, these drugs are still illegal in many parts of the world. But over the past few years, I’m hearing more and more researchers saying, “Hey, if these drugs have potential therapeutic benefits, we really should be investigating them further.” Any powerful mind-altering compound carries certain risks, of course – but there’s no reason they shouldn’t be tested in clinical environments.
 
BT: Absolutely. This sounds like a fascinating project; when I asked you to talk with me about fMRI research, I had no idea we’d wind up talking about psychedelic drug trips.
 
MW: Yeah; it’ll be interesting to see where this research takes us.
 
 
Thanks again, Matt, for taking the time to chat. We’ve got more interviews with big names in neuroscience on the way, so stay tuned for lots more exciting times!

 

 

 


“Using Worms to Crack the Human Brain” — Podcast 4: Scott Emmons

On episode 4 of the Connectome podcast, I chat with Scott W. Emmons, Ph.D., a professor of neuroscience and genetics at Albert Einstein College of Medicine. Dr. Emmons talks about his cutting-edge connectomics research, which may help us understand how neural circuits “decide” on a particular behavior. Though his recent work focuses on the nervous systems of microscopic worms, its implications may reach all the way to the human brain.

 


Enjoy, and feel free to email us questions and suggestions for next time!

 

(Produced by Devin O’Neill at The Armageddon Club)


The Lurking Lizard

He has haunted us for more than fifty years – this strange scientist, with his theory of primal reptiles embedded in each of us. And for years I wondered, Could this bizarre hypothesis be true? Might it explain the ancient instincts – so contrary to my intentions – which I felt arising from the depths of my unconscious mind? Could it really be possible that inside my primate brain lurked a vicious lizard mind, a relic from the era when reptiles ruled the earth?

As it turns out… no, actually. The theory – if not the man who coined it – has proven flat-out wrong in light of the latest scientific research. And yet, speakers and writers around the globe continue to trumpet the truth of the idea.

Let me lay the facts bare for you, and explain the enduring appeal of this weird theory, in my latest article for Scientific American: “Revenge of the Lizard Brain!”

“There’s a scene in Fear & Loathing in Las Vegas in which the writer, high out of his mind on hallucinogens, watches a roomful of casino patrons transform into giant lizards and lunge at each other in bloody combat. Under the veneer of civilization, the scene suggests, we’re all still reptiles, just waiting for the moment to strike.

Like the Fear & Loathing scene, the Triune Brain idea holds a certain allegorical appeal: The primal lizard – a sort of ancestral trickster god – lurking within each of us. But today, writers and speakers are dredging up the corpse of this old theory, dressing it with some smart-sounding jargon, and parading it around as if it’s scientific fact…”
