
“2013’s Nobel Prize Winners” — Podcast 11: James Rothman, Randy Schekman & Thomas Südhof

On Episode 11 of The Connectome Podcast, I’m joined by all three of 2013’s Nobel laureates in Physiology or Medicine — James Rothman, Randy Schekman and Thomas Südhof!

All three of these guys contributed crucial pieces to a longstanding puzzle: How, exactly, do our brain cells communicate with each other? See, biologists had known since the 1960s that nerve cells pass chemical messages to one another inside tiny membrane-bound bubbles called synaptic vesicles — and yet, as recently as the early 1990s, no one had figured out much about the molecular machinery that makes this process work.

Meanwhile, as James Rothman and Randy Schekman plugged away on their own seemingly unrelated projects — cell metabolism and yeast genetics — they were both starting to notice something intriguing: The chemical reactions they were studying looked like suspiciously good candidates for certain stages of the brain’s vesicle transmission process. And sure enough, before long, a young researcher named Thomas Südhof started to discover many of those very same chemicals in brain cells…

Click the “Play” button below, and they’ll tell you how their journey to a Nobel prize unfolded from there. And for more info on these guys and their research, check out my article in Scientific American: “The Search for a Nobel Prize-Winning Synapse Machine.”

Click here to play or download:

Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club, with lots of help from Tim Udall)


The Top 5 Neuroscience Breakthroughs of 2013

If 2012 was the year neuroscience exploded into pop culture, 2013 was the year it stepped into the halls of power.

The Obama administration’s $100-million BRAIN Initiative stirred up furious debate, as proponents cheered to see so much funding and press attention thrown at large-scale efforts to map the human brain, while opponents claimed that the whole thing might be a gigantic waste of valuable resources. Meanwhile, across the Atlantic, the European Union’s Human Brain Project sparked similar disputes – disputes that continue even as unexpected breakthroughs have begun to surface.

It’s also been a year of explosive growth here at The Connectome. I’ve been spending less time posting on this blog because (gratuitous brag alert!) I’m now regularly blogging for national press outlets like Scientific American, The Huffington Post, Forbes and Discover Magazine. But when I do post here, I make sure to leverage every connection in my address book to bring you guys bigger, cooler, more exciting content – like podcast interviews with researchers like Oliver Sacks, David Eagleman and Sebastian Seung. On other fronts, my TEDx talk finally made it onto YouTube, you guys have been showing love for my webseries, “DEBUNKALYPSE,” and The Connectome’s Facebook, Google Plus and Twitter feeds each broke 1,000 followers this year.

None of this could’ve happened without you guys. I owe this all to you. You’re awesome. I mean it. And lots more cool stuff is on the horizon, I promise.

But enough about how amazing The Connectome is. That’s not why you’re here.

And so, without further fanfare, here – in countdown order – are the five most thrilling neuroscience discoveries of 2013!

 

5. The Emergence of Individuality in Clones

If you’ve ever raised a litter of newborn puppies or kittens, you’ve seen that each baby displays its own personality right from the start. Some are feisty and adventurous, some hog all the milk, some hide close to mom, some bully their siblings mercilessly, and so on. Years of studies have found that this is even true of genetically identical animal clones – but it wasn’t until 2013 that Gerd Kempermann, a professor of genomics at the Center for Regenerative Therapies (CRTD) in Dresden, Germany, scoped out exactly how these differences in experience shape the unique development of each individual’s brain. Kempermann and his team cloned a group of genetically identical mice and set them loose in a large enclosure with lots of places to play. Within just a few months, the mouse clones that had explored the most actively had sprouted new nerve cells throughout their brains – especially in the hippocampus, a region that’s crucial for memory – while the less-adventurous clones showed less brain development. Although this research doesn’t tell us why some mouse clones were more adventurous in the first place, it’s still a clear demonstration that individual experiences sculpt individual brains, right from the earliest months of life – even if those brains are genetically identical.

 

4. “Two Brains in One Cortex”

Your cerebral cortex – the outermost “rind” or “bark” of that cauliflowery mass that makes up most of your brain – isn’t just a single structure. All across your brain, the cortex is divided into stacked layers of neurons, many of them overlapping like the patches of a quilt. Each layer plays its own part in processing information; and since the early twentieth century, most neuroscientists have taught that these layers work as a strict hierarchy: That each layer does its part, then passes its results on to the next layer, all nice and orderly-like. But in 2013, Columbia University neuroscientist Randy Bruno showed that cortical layers 4 and 5 both receive “copies” of the same exact information, and perform their processing simultaneously. The discovery led Bruno to declare, “It’s almost as if you have two brains built into one cortex.” The exact implications of this revised cortical hierarchy aren’t quite clear yet – but it’s another humbling reminder that our understanding of brain wiring is still at a very primitive stage.

 

3. “Mini-Computers” Hidden in Nerve Cells

For more than 100 years of brain research, scientists thought that dendrites – those branch-like projections that connect one neuron to others – were just passive receivers of incoming information. But in 2013, researchers at the University of North Carolina at Chapel Hill demonstrated that dendrites do a lot more than just passively relay signals – they also perform their own layer of active processing, hinting that the brain’s total computing power may be many times greater than anyone expected. This discovery is so new that no one’s had much time to figure out what, exactly, all this additional processing power changes about our understanding of the brain; or how we’ll have to revise our models of brain function to incorporate it. But mark my words – this is gonna turn out to be a major paradigm shifter over the next few years.

 

2. Crowdsourced Connectomics

When researchers first started talking seriously about human connectomics – the science of constructing cellular-level wiring diagrams for entire regions of the human brain – back in 2005, supporters of the idea were all but laughed out of the building. We had nowhere near enough computing power, opponents claimed, to even attempt to map the human brain’s 84 billion (-ish) neurons and 100 trillion (-ish) interconnections – and even if we did, we’d still need humans to double-check every synapse the computers tried to map. Even today, the science of human connectomics has loads of vocal critics. But in 2013, a collaborative effort by researchers at MIT, along with another team at Germany’s Max Planck Institute for Medical Research, used an innovative combination of computerized rendering and human tracing to map the precise shapes and points of contact between all 950 neurons in a patch of mouse retina – and they did it in 1/100th of the time, and at a fraction of the cost, that naysayers predicted. It’s a small step in the grand scheme of connectomics, but it’s a proof-of-concept for a cheap, efficient technique that can be applied throughout an entire brain – and a hint that the dream of a complete human connectome isn’t necessarily out of reach in our own lifetimes.

 

1. The Human Brain-to-Brain Interface

Back in 2012, researchers at Harvard found that if they stuck electrodes into certain points in the brains of two rats, they could enable the first rat to control the physical movement of the second one using only the power of its thoughts. Human-to-rat interfaces soon followed – but it wasn’t until 2013 that University of Washington scientists Rajesh Rao and Andrea Stocco created the first human-to-human wireless brain-to-brain interface. Sitting on one side of campus, Rao thought, “tap the spacebar,” and at the other end of campus, Stocco’s hand tapped his spacebar involuntarily. It’s a simple interface, but the implications aren’t hard to see: Movement impulses – and someday, perhaps even thoughts and memories – can be beamed directly from one human brain to another.

 

And those are The Connectome’s picks for the most fascinating, transformative, implication-riddled neuroscience breakthroughs of 2013. What about you – which of this year’s discoveries do you think made the biggest waves? Which ones are poised to change the world? Which ones did I miss? Jump into the comments and tell us all what’s up!


“Crowdsourcing a Neuroscience Revolution” — Podcast 10: Sebastian Seung

On Episode 10 of The Connectome Podcast, I chat with Sebastian Seung, a neuroscience researcher whose latest work — in cooperation with teams at MIT, at Germany’s Max Planck Institute and at other cutting-edge institutions — is proving that an improbable-sounding dream isn’t so improbable after all: We may be able to map the structure and function of every neural connection in an entire mammalian nervous system, from the cellular level up… and it may happen within our lifetimes.

Seung’s bestselling book Connectome offers an exciting tour through this fast-growing field of connectomics — and in fact, it was his TEDTalk, “I Am My Connectome,” that sparked the creation of this very website, almost three years ago. His lab also created the free crowdsource game EyeWire, which lets anyone with a computer and an internet connection help his research team map the cellular structure of the brain.

But he’s on the show today to talk about the latest project he and his co-researchers have published: A structural map of all 950+ neurons in a patch of retina. Not only does this project represent a leap upward in complexity of neural mapping — it also required innovative new techniques for crunching massive amounts of data; and the result is a proof-of-concept for a revolution in the way we approach our study of the brain.

You can read more here, in my article for Scientific American: “The Neuroscience Revolution Will Be Crowdsourced.”

Click here to play or download:

Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club)


Sexy Neuroscience IV

Every culture and subculture has its own rituals of greeting and affection – handshakes, backslaps, fist-bumps, hugs and so on – but when it comes to erotic contact, cultural differences seem to melt away into something more primal: Touch that just feels good for its own sake.

In fact, a new study has confirmed that erogenous zones are remarkably similar and consistent among people from widely different cultures. This first “systematic survey of the magnitude of erotic sensations from various body parts” found that both men and women in Britain and in sub-Saharan Africa love to be caressed on their lips, necks, ears and inner thighs; while pretty much no one is into kneecap-play (rule 34, though, folks). In short, erogenous zones seem to have a whole lot more to do with touch-sensitive nerves than they do with cultural conditioning.

And so, in the spirit of Part I, Part II and Part III of the Sexy Neuroscience series – which, incidentally, got this site banned from buying advertising on Google (yes, really) – The Connectome presents Sexy Neuroscience IV: Global Erogenous Zone Challenge!

As the journal Cortex reports, a team led by Bangor University’s Oliver Turnbull surveyed 800 people, mostly from Britain and sub-Saharan Africa. The investigators asked the participants which body parts (aside from genitalia) produced the most intense erotic sensations when others touched them. While the researchers did discover a few differences between male and female erogenous zones – for instance, men found it more arousing to be touched on the backs of their legs, and on their hands, than women did – most of the participants ranked a list of 41 body parts in similar erogenous order.

“Surprising!” say the researchers. “Why?” reply the rest of us.

I mean, most of us learn what our own bodies enjoy long before we clearly understand what sex and eroticism are. And plenty of us have defied cultural conventions when they didn’t line up with our own experiences of physical pleasure. I’d say it makes more sense that the whole concept of erogenous zones, and the culture surrounding them, both stem from common physical experiences; not the other way around.

But this study actually does reshuffle the erogenous-zone deck in one surprising way: It revises the sensory homunculus yet again. As I explained back in Part II, the sensory homunculus is a concept developed in the 1950s – by a bunch of men, which turns out to be a very significant part of the story.

The core concept is pretty simple: As you can see in this picture, touch sensations in various parts of our bodies are mapped onto a series of adjacent but differently sized brain areas; the larger the area, the more touch-sensitive a body part is. So far, so good. Except that until a few years ago, hardly anyone bothered to mention that this entire model was based solely on male brains. The cervix, the labia and the clitoris weren’t on it at all. And it took until 2011 for someone to come along and fix this.

So it makes sense that this latest erogenous-zone study has cleared up yet another longstanding myth about the sensory homunculus: That the bottoms of the feet are erogenous zones. Previous researchers had claimed this was true because a) lots of people think feet are sexy, and b) the sensory brain areas devoted to the bottoms of our feet lie right alongside the areas devoted to genitalia.

And although there’s no doubt that feet can be sexy – both visually and to the touch – and that they’re highly touch-sensitive and often ticklish, three fourths of the people surveyed in Britain and sub-Saharan Africa gave feet an erogenous touch rating of zero, right alongside kneecaps.

Turnbull and his team suspect that those previous researchers may have confused fetishistic touch with erogenous touch – two related but distinct phenomena. Those two feelings can – and often do – feed off one another; but there’s nothing to suggest that a caress on the foot feels inherently erotic in the same way that, say, a nip on the earlobe or a breath on the neck does. If anything, feet seem to serve as a clear example of culturally (and/or experientially) conditioned eroticism.

So where does this leave us as far as sensory homunculi and erogenous zones? Well, results like this reinforce the importance of communicating with your partner(s) instead of just following sexual ideas you’ve picked up from others. Erogenous zones may be strikingly similar across genders and cultures, but no two of us are exactly alike: Some find erotic what others find ticklish or painful – and some find tickling and pain erotic. The only way to find out is to ask. Who knows – you might even find someone who enjoys kneecap foreplay.


“Learning How Brains Learn” — Podcast 9: Jeff Hawkins

On Episode 9 of the Connectome podcast, I’m joined by Jeff Hawkins, a computer engineer and neuroscience geek who’s obsessed with understanding how the brain learns.

Jeff is the inventor of the Palm Pilot and the founder of Palm Computing – as well as another computing company called Handspring – but in addition to his computer skills, he’s also been fascinated by neuroscience since the late 70s. Today, his company Numenta designs a range of software known as Grok, which learns and thinks like a living brain.

Jeff’s superb book On Intelligence lays out his theory in detail, and he also runs over the basics in this podcast. If you’re interested in digging further, here’s a link to Numenta’s technical documentation of how their software works, and here’s a page with lots of videos of Jeff’s other media appearances.

As you’ll hear on this podcast, though, Jeff’s curiosity extends far beyond software engineering, and explores subjects from space exploration to computing’s future to the nature of intelligence itself. Listen in, and you may find that your own curiosity gets sparked, too.

Click here to play or download:

Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club)


“Hallucination & Imagination” — Podcast 8: Oliver Sacks

On Episode 8 of the Connectome podcast, I talk with Oliver Sacks, renowned neuroscientist and author of such books as The Man Who Mistook His Wife for a Hat, Musicophilia and Hallucinations. In particular, Sacks joins us to talk about some patients of his who’ve been hallucinating strange varieties of musical notation.

But musical hallucinations are only the beginning – Sacks also shares his insights on dreams, hallucinogenic drugs, selfhood, and plenty of other phenomena that make subjective experience so mysterious. Whether you’re new to Dr. Sacks’ work or a lifelong fan of his writing, this interview raises some consciousness-related questions that you may never have considered before.

Click here to play or download:

Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club)


“Engineering a Mind (Part 2)” — Podcast 7: David Saintloth and Wai Tsang

On Episode 7 of the Connectome podcast, we rejoin our two-part roundtable discussion on the nature of intelligence, on the differences between biological and artificial intelligence, and on the ways in which the idea of digital intelligence can inform our understanding of how our own minds work. (Here’s the link to Part 1 of this discussion.)

Joining us, once again, are David Saintloth, a software engineer who’s working on programs that use a technique he calls “action-oriented workflow” to proactively learn and adapt as they find connections between data patterns; and Wai H. Tsang, a thinker, lecturer, futurist and software programmer who champions what he calls the “fractal brain theory”: the idea that everything the brain does can be described in terms of a single type of fractal pattern.

As before, we’re discussing a lot of ideas developed by thinkers like Jeff Hawkins and Ray Kurzweil, and our goal here is simply to compare notes on each of our perspectives, look for ways in which computer science can inform neuroscience (and vice versa), and hash out some general outlines of a shared descriptive vocabulary for comparing intelligence across digital and biological platforms. So feel free to jump into the comments and share your thoughts, criticisms and insights.

Who knows – your idea might be the spark that launches this discussion in a whole new direction.

Click here to play or download:

Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club)


Three Big Doubts About Brain-Mapping Efforts

Neuroscience research has come a hell of a long way since the days of scalpels and electrodes.

While some research teams are exploring the molecular machinery that churns at the hearts of nerve cells, others are working to assemble wiring diagrams for whole regions of the human brain. Just as biological science never looked the same once Watson and Crick explained the structure of DNA, neuroscience is transforming into a field filled with laser-controlled neurons, programmable stem cells and micro-scale brain scans.

Beyond all this excitement, though, looms a far more vast and ambitious goal – one whose scale and complexity exceed even the mapping of the human genome. Over the past several years, a growing group of scientists have been fighting for the idea that we can (and should) produce, within our lifetimes, a digital map of every function of every one of the trillions of synaptic connections in a human brain: A complete human connectome. Teams around the world, such as the minds behind the Human Connectome Project, are already working hard toward this goal, often freely sharing the data they discover along the way.

The Human Connectome Project's first huge data sets are already freely available to scientists around the world.


Meanwhile, this February, the White House announced the launch of the BRAIN Initiative, a decade-spanning effort to build a “Brain Activity Map” or BAM – a simulation, in other words, of all the activity in a human brain, from the cellular level on up. The project’s launch budget is $100 million, and some scientists expect that costs will soar into the billions before it starts cranking out useful data.

Unsurprisingly, this has stirred up a hurricane of press coverage – not all of it positive. While some advocates of the BAM project promise that it’ll unleash a wealth of new cures for neurological and psychological diseases, opponents argue that even billions of dollars and years of research won’t be enough to decode the brain’s workings on such a comprehensive scale – especially if, as some anti-BAM pundits say, we’re still a long way from knowing how the brain even encodes information at all.

I’ve put together a little write-up on three of the biggest BAM bones of contention. Though I can’t cover the whole issue in detail with just one article, these summaries should help you score some points in a BAM-related argument – and give you some fuel for your own exploration. So let’s see what (some of) this fuss is all about.


Doubt #1: Do we have the computing power to simulate a whole human brain?

The “Titan” supercomputer at Oak Ridge National Laboratory, powered by Nvidia GPUs, which (as of April 2013) holds the world speed record of roughly 20 petaflops.


The BAM invites a lot of comparisons – both positive and negative – with the Human Genome Project of the 1990s. Both are long-term projects, both are hugely expensive, and both involve number-crunching and analysis on scales that demand tight cooperation from top scientists and universities around the globe.

But whereas the Human Genome Project set out to map somewhere in the neighborhood of 20,000 to 25,000 genes, all of them constructed from the same four nucleotide molecules, a map of the human connectome would have to incorporate the behavior of at least 84 billion neurons and as many as 150 trillion synapses – all communicating via a dizzying menagerie of messenger chemicals, not to mention physically reshaping themselves as a brain grows and learns.

Estimates vary widely on the question of how much computing power it’ll take to simulate a whole human brain, but even the most optimistic experts believe it’ll take a computer capable of performing at least 1 quintillion (that’s 1,000,000,000,000,000,000) floating point operations per second (1 exaflop). By comparison, your average home computer processor maxes out around 7 billion flops (7 gigaflops), a fast graphics card can reach over 300 billion flops (300 gigaflops), and the latest supercomputers clock in at a little over 20 quadrillion flops (20 petaflops). So, on that front at least, our resources are rapidly approaching the goal – scientists at Intel predict that we’ll be computing in exaflops before this decade is over.
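As a sanity check, the gaps between those numbers are easy to work out. Here is a quick back-of-envelope sketch using the round figures quoted above (the article's rough estimates, not real benchmarks):

```python
# Rough compute scales from the paragraph above, in floating point
# operations per second (the article's estimates, not measured benchmarks).
home_cpu = 7e9          # ~7 gigaflops: a typical home processor
graphics_card = 300e9   # ~300 gigaflops: a fast graphics card
supercomputer = 20e15   # ~20 petaflops: fastest supercomputer, circa 2013
brain_sim = 1e18        # 1 exaflop: the optimistic whole-brain estimate

# How much further compute has to scale before a whole-brain
# simulation comes within reach:
print(brain_sim / supercomputer)    # 50.0 -- a 50x gap from the record holder
print(round(brain_sim / home_cpu))  # ~143 million home processors' worth
```

In other words, even the 2013 record holder falls a factor of 50 short of the optimistic whole-brain figure, which is why Intel's exaflop timeline matters so much to this debate.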

But raw computing power is only one part of the equation. In the most basic sense, even the most advanced computer is just a machine that follows instructions – so even after we’ve built our exaflopping supercomputer, we’ll still need to know what instructions to give it.

[UPDATE! - May 8, 2013]

Carlos Brody, a neuroscientist at Princeton’s Brodylab, has added a clarification of his own to this section. Here’s what he has to say:

“I think Doubt #1 is about the European Human Brain project, not about the U.S.-based BRAIN Initiative. The way I’ve understood it, the Europeans, with their billion-euro Human Brain project, are trying to simulate every neuron in a brain. In contrast, the U.S.-based BRAIN Initiative/BAM is about developing the technology to allow us to record the activity of every neuron in a brain. Not simulate, but measure what’s there. It’s a big difference, because in order to simulate you have to build in a lot of knowledge we don’t yet have (i.e., put in a giant truckload of untested assumptions). That is largely why many people think the simulation effort is pointless, there’s so many untested assumptions going in that what you end up with may bear little to no relation to an actual brain. The goal of measuring the activity, as in BAM, is to gain that knowledge we don’t yet have.”

Thanks, Dr. Brody, for your insight into that distinction!

[END OF UPDATE]


Doubt #2: Do we know enough about brains to know what we’re attempting?

All human DNA is made up of just four "bases," known as nucleotides: Adenine, cytosine, guanine and thymine.

The Human Genome Project set out to map the position – but not necessarily the function – of each nucleotide in all 23 pairs of human chromosomes.

Contrary to oft-repeated belief, the Human Genome Project’s goal was never to decode the function of every gene in human DNA – it was to map (sequence) the order and position of every nucleotide molecule in all 23 pairs of human chromosomes.

Scientists have only begun to make a dent in decoding the 20,000+ genes whose positions the Human Genome Project mapped. Even today, leading researchers are still debating how many genes the human genome actually contains – let alone what functions most of those genes encode. And that’s more than half a century after Watson and Crick described, in detail, the way that DNA encodes recipes for manufacturing the molecules that make up our bodies.

When it comes to the brain, on the other hand, the world’s top neuroscientists are still puzzling over the question of how neural activity encodes information at all. We’re using computers to construct videos of entire visual scenes based on the brain activity of people watching them – but that’s only after recording brain scans of dozens of patients as they watched hundreds of videos, then telling a computer to reverse the process and assemble a video that matches the brain activity patterns it sees.

This is no small achievement, to be sure – but even so, it’s sorta like learning to recognize whether the letters in a book are Chinese, Japanese or Arabic (assuming you don’t read any of those languages). You might be able to match a new book with the country that produced it, and maybe even recognize whether it’s, say, a novel or a dictionary. But none of that tells you much of anything about what a specific line on the page actually says.

This is one of the trickiest questions for BAM advocates to answer – and the answers tend to come in two main flavors. One response is that the fastest way to crack the neural code is to try simulating it digitally – just as the fastest way to learn a new language is to start writing and speaking it yourself. Another response is that a base-level understanding of the code may not be necessary for a rich and detailed understanding of how a brain works. Scientists have already mapped the functions and interactions of all 302 neurons in the nervous system of the tiny roundworm known as C. elegans. Even without knowing exactly how these neurons encode information, we’ve still built up a precise understanding of how each of them influences other neurons and muscle cells throughout the worm’s body.
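To make that C. elegans point concrete: a connectome of this kind is essentially a directed graph, with neurons as nodes and synapses as weighted edges, and "how each neuron influences others" becomes a simple graph query. Here is a minimal sketch; the neuron names and synapse counts are illustrative placeholders, not data from the actual worm map:

```python
# A connectome as a directed graph: neurons are nodes, synapses are
# weighted edges. The entries below are made up for illustration only.
from collections import defaultdict

synapses = [              # (presynaptic neuron, postsynaptic neuron, count)
    ("AVAL", "VA08", 5),  # hypothetical connections, not real worm data
    ("AVAL", "DA05", 3),
    ("PLML", "AVAL", 2),
]

# Build an adjacency list: for each neuron, who does it talk to, and
# how strongly? This answers "influence" questions without needing to
# know how the neurons encode information.
downstream = defaultdict(list)
for pre, post, count in synapses:
    downstream[pre].append((post, count))

print(downstream["AVAL"])  # [('VA08', 5), ('DA05', 3)]
```

Scaling this same data structure from 302 neurons to 84 billion is precisely the engineering leap the BAM debate is about.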

Although the human brain’s 84 billion neurons aren’t exactly a small step up from C. elegans’s 302, it stands to reason that if we do develop software and hardware that can simulate all our neurons’ interactions, we’ll be in a much better position to pinpoint specific processes and problems down at the cellular level.


Doubt #3: Will an epic mapping project produce useful results?

Just as no two humans share exactly the same set of genes, no two human brains are wired in exactly the same way.


BAM critics like to draw a third unflattering parallel between the BAM project and the Human Genome Project: As the Human Genome Project approached completion, its White House advocates predicted that a sequenced human genome would lead to cures for diseases like cancer and Alzheimer’s, along with “a complete transformation in therapeutic medicine.” But more than a decade after the Project’s completion, very few of those medical benefits have actually materialized.

What has resulted from the Human Genome Project is a vast storehouse of data on how human DNA differs from that of other animals – and from one human being to another. This means that when we consider the outcome of the BAM, it’s important to keep our sights not on vague and grandiose promises about cures for poorly understood problems, but on what we can be sure would come out of a successful BAM project: A more detailed, accurate and integrated understanding of the human brain’s workings than we’ve ever had before.

If one thing about the BAM is certain, it’s that the project’s news coverage – and the intensity of the debates that coverage stirs up – will increase in step with the Brain Initiative’s funding demands and timing estimates. As I said at the beginning of this article, a few thousand words aren’t nearly enough to cover all the ink that’s already been spilled in the earliest stages of this debate – so jump into the comments and chime in with your own opinions, doubts, speculations and questions. Because in the end, the only way to resolve an argument is to talk it out.


“Engineering a Mind (Part 1)” — Podcast 6: David Saintloth and Wai Tsang

Episode 6 of the Connectome podcast brings together two guests who are obsessed with understanding how intelligence and thinking work – not by studying patients in MRI scanners, but by working to develop software that recognizes patterns and connections in the same way a brain does.

Our guests are David Saintloth, a software engineer who’s working on programs that use a technique he calls “action-oriented workflow” to proactively learn and adapt as they find connections between data patterns; and Wai H. Tsang, a thinker, lecturer, futurist and software programmer who champions what he calls the “fractal brain theory”: the idea that everything the brain does can be described in terms of a single type of fractal pattern.

To be fair, many of the ideas we discuss here – or, at least, very similar ones – have already been developed in detail by theorists like Jeff Hawkins and Ray Kurzweil. So our goal here is simply to share and compare what we’ve each learned so far, and bring you in on our conversation. We’re looking forward to dialoguing and debating with you in the comments.

Who knows – maybe you’ve noticed something all three of us are totally missing.

Click here to play or download:

Enjoy, and feel free to email us questions and suggestions for next time!

(Produced by Devin O’Neill at The Armageddon Club)


“Senses That Bleed” — Podcast 5: David Eagleman

On episode 5 of the Connectome podcast, I chat with David Eagleman, author of the international bestseller Incognito: The Secret Lives of the Brain. Eagleman’s lab mainly studies the ways our brains encode sensory perceptions – but as you’ll hear, he’s also fascinated by questions on the nature of consciousness, synesthesia, meaning and representation, and even the potential development of new human senses.

Eagleman starts by talking about his new paper on overlearned sequences (word lists like days of the week or months of the year), but he also describes some findings in a 2009 paper he authored. Both papers are available for free online, and they’re intriguing to read.

Click here to play or download:

 

Oh, and here’s a diagram that may come in handy as you’re listening. It might look a little confusing now, but it’s actually a nice visual aid as we dig into the details of Eagleman’s research. Just click for a bigger version.

Now sit back, relax, and get ready to have your mind blown.


Enjoy, and feel free to email us questions and suggestions for next time!
(Produced by Devin O’Neill at The Armageddon Club)
