The Watson Situation

Or, “Logistical Problems in Subsymbolic Computation: A Neurophysiological Perspective.”

Because that title’s fun - that’s why.

Today, I want to take a little break from connectome hacking and talk about a question that’s been generating some lively debate lately: what exactly is IBM’s Watson doing?

Watson's grandfather in a cheeky pose, mocking its creator.

I don’t mean “doing,” so much, in the sense that can be answered, “Crushing every human opponent with a silicon fist,” but in a more abstract, philosophical sense: “Is Watson thinking? Does it have awareness? Does it display any of the components of conscious thought?”

I think we can learn a lot about consciousness by comparing some things Watson does with some things that happen in a biological connectome. So let’s dig into this debate, and see if we can come to a more precise understanding of some important differences – and similarities – between the two.

A good place to start is 1972, because it’s the year Neil Young’s “Heart of Gold” broke the top ten. More relevant to this discussion, it’s the year the philosophy professor Hubert Dreyfus released the first edition of his book “What Computers Can’t Do.”

As a little background, the late 1960s were an exciting time for artificial intelligence (A.I.) research, because the development of the integrated circuit allowed computers the size of a filing cabinet (which was stunningly compact for the time) to solve mathematical equations and win games like chess – and their speed and complexity were leaping ahead every year. Some researchers predicted that by the early ’70s, computers would be proving new mathematical theorems and diagnosing people’s psychological problems.

Not so much ackshully, said Dreyfus. By the ’70s, some unanticipated hurdles on the path to Strong A.I. had been discovered. Dreyfus pointed out four false assumptions in particular:

1) The biological assumption: that the brain processes information via on/off switches, similar to a computer. But neurons are much more like “fuzzy” analog signal carriers (such as antennas) than digital gates.

2) The psychological assumption: that brain activity is based on equivalents of “bits” of information, and follows discrete rules for moving these “bits” around. But a connectome is much more holistic than this – each “operation” it performs seems to be a dynamic congruence of a multitude of perceptions and biases.

3) The epistemological assumption: that all knowledge can be formalized and represented. But plenty of human knowledge – especially things like “what really matters” about a situation – resists formalization, because it only makes sense within a unique experiential context.

4) The ontological assumption: that the universe consists of discrete facts that can be translated into pure information. But some subjective experiences may lie outside the realm of objectivity, simply because they’re subjective. For example, we can all agree on some objective definition of what the phrase “in love” means, but we can’t define exactly what it feels like to be in love in any objective sense – we can only experience it subjectively.

All four of these objections point back to the same essential idea: that the vast majority of human knowledge – the knowledge we’re acting on when we make instinctive decisions, or “lean” one way or the other – is inherently experiential. Every connectome participates in a ceaseless feedback loop with its environment, and that feedback – especially from others of the same species – is crucial to understanding (for example) what “really matters” about a certain situation, or what counts as a “weird” answer.

Watson, on the other hand, can only compare mathematical probabilities of correctness (or “weirdness”) based on rather strict formulas – which is why it guessed “Toronto” in a category called “U.S. Cities.”

During its training phase, Watson had learned that categories are only a weak indicator of the answer type … The problem is that lack of attention to such a mismatch will sometimes produce a howler. Knowing when it’s relevant to pay attention to the mismatch and when it’s not is trivial for a human being. But Watson doesn’t understand relevance at all. It only measures statistical frequencies.
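To make that contrast concrete, here’s a minimal sketch in Python – with made-up feature names and weights, not Watson’s actual DeepQA pipeline – of how a scorer that only adds up weighted statistical evidence can rank “Toronto” above an actual U.S. city: the category match is just one weak feature among several, so strong support from other features outscores the mismatch, and nothing in the arithmetic flags the answer as absurd.

```python
# A minimal, hypothetical sketch (not Watson's real scoring pipeline):
# combine weighted evidence features into one confidence number. The
# category match gets a deliberately small weight, mirroring the point
# that categories are only a weak indicator of the answer type.

def confidence(evidence, weights):
    """Weighted sum of per-feature evidence scores in [0, 1]."""
    return sum(weights[f] * evidence[f] for f in weights)

# Invented feature weights, standing in for whatever training produced.
weights = {"passage_support": 0.6, "answer_type_match": 0.3, "category_match": 0.1}

candidates = {
    # Strong textual support, terrible category fit.
    "Toronto": {"passage_support": 0.9, "answer_type_match": 0.8, "category_match": 0.1},
    # Decent textual support, perfect category fit.
    "Chicago": {"passage_support": 0.6, "answer_type_match": 0.8, "category_match": 1.0},
}

best = max(candidates, key=lambda name: confidence(candidates[name], weights))
print(best)  # Toronto -- the mismatch is simply outscored; nothing registers it as absurd
```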

The temptation at this point is to slip into a debate about what we really mean when we say “understand” – that is, whether understanding is simply a perception of a perception, or if it’s a different order of phenomenon altogether. But I think there’s an even more obvious reason why Watson can’t be said to truly understand or think anything: it’s not designed to factor new experiences into its analytical process.

Oh, it can factor in new facts at a dazzling rate. What it can’t do is abstractly assess the reason it got a question wrong – it can’t look for errors in its own thought process, and recalibrate its own analysis methods to avoid errors of that type in the future. Animals (including humans) do this automatically all the time – operant and classical (Pavlovian) conditioning are two examples.

But these types of learning depend not on data points and formulas, but on the strength of connections between certain neurons and groups of neurons. Feelings like pain and pleasure play a major part in forming or weakening those connections. And as the philosopher John Haugeland put it, “The problem with computers is that they just don’t give a damn.”
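As a toy illustration of that difference, here’s a sketch of reward-modulated learning – the class, the numbers, and the stove are all invented, and nothing here models real neurons. The point is only the shape of the process: the “strength of a connection” gets adjusted by how an outcome felt, not by looking up a rule someone wrote down.

```python
# Toy sketch: a single stimulus-response "connection" whose strength is
# nudged up or down by a pleasure/pain signal, loosely in the spirit of
# operant conditioning. Purely illustrative, not a neural model.

class Connection:
    def __init__(self, strength=0.5, learning_rate=0.1):
        self.strength = strength            # how strongly the stimulus drives the response
        self.learning_rate = learning_rate

    def feel(self, outcome):
        """outcome > 0 acts like pleasure, outcome < 0 like pain."""
        self.strength += self.learning_rate * outcome
        self.strength = min(1.0, max(0.0, self.strength))  # clamp to [0, 1]

touch_hot_stove = Connection()
for _ in range(5):
    touch_hot_stove.feel(-1.0)              # it hurts every single time

print(round(touch_hot_stove.strength, 3))   # 0.0 -- the behavior is extinguished
                                            # without anyone writing a rule about stoves
```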

The writer of this New York Times article sums up this contrast neatly:

What computers can’t do, we don’t have to do because the worlds we live in are already built; we don’t walk around putting discrete items together until they add up to a context; we walk around with a contextual sense — a sense of where we are and what’s at stake and what our resources are — already in place; we inhabit worldly spaces already organized by purposes, projects and expectations.

In other words, context isn’t a piece of information fed into a connectome, or a conclusion drawn by one – context is inherent in each connectome’s unique identity. Every connectome evolves ceaselessly from one moment to the next, and recalibrates its thoughts and behavior in response to new stimuli. Watson, on the other hand, has no framework for knowing when its own rules can – or should – be bent.

So symbol manipulation can only take Watson so far – to make the leap into what we’d call “understanding,” an A.I. would need to somehow carry around an awareness of what it’s like to be that unique A.I. This is where we move from the symbolic to the subsymbolic.

Symbols. Not pictured: meanings.

As an analogy for subsymbolism, think of pixels in an image – an individual pixel doesn’t symbolize anything; it just is what it is: a dot of color. A grouping of pixels can collectively symbolize a letter, a shape, or a whole photo; but even then, the symbol itself has no inherent meaning – the meaning is in the mind of the interpreter(s). Meaning is a subjective process, rather than an objective fact.1
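In code, the analogy is almost embarrassingly small – the grid below (an invented 3×3 “letter”) is nothing but numbers, and it only reads as a character to an interpreter who already knows the alphabet.

```python
# A subsymbolic grid: the data structure itself is just numbers. Only an
# interpreter who knows the Latin alphabet reads the rendered output as a
# "T"; the meaning never lives in the pixels.

pixels = [
    [1, 1, 1],
    [0, 1, 0],
    [0, 1, 0],
]

for row in pixels:
    print("".join("#" if p else " " for p in row))
# prints:
# ###
#  #
#  #
```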

No one’s questioning that our thought processes rely heavily on symbols – they obviously do. The objection is that symbol manipulation is insufficient to explain consciousness itself – that is, the subjective experience in which those symbols have meaning. Just as the subsymbolic pixels in a line of text only have significance to an interpreter who understands the letters they collectively signify, there must be a “language,” of sorts, in which the symbols of the mind have significance – and that “language” is the experience of being subjectively conscious.

This isn’t just philosophy – researchers in fields like computational neuroscience increasingly suspect that although consciousness makes use of symbolic systems, it isn’t based on symbol manipulation, but on patterns of interaction that develop in complex neural networks over time. If anything, the question “What is consciousness?” only looks blurrier the harder we stare at it.

Maybe that’s why Dreyfus has followed up with a new book: “What Computers Still Can’t Do.”

So, what is Watson doing? It’s correlating data, just as your word processor checks its dictionary for words spelled similarly to the one you just misspelled – and like your word processor, Watson doesn’t learn from its mistakes unless it’s explicitly taught that they’re mistakes. Because Watson doesn’t have subjective experiences, it doesn’t carry around feelings like “success” or “failure” or “difficulty,” so it can’t factor such ideas into its reasoning. It simply computes probabilities mathematically, and selects the highest one.
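That spell-checker comparison can be taken almost literally. Here’s a sketch using Python’s standard difflib, with a made-up mini-dictionary: the program ranks entries by string similarity and returns the best match, and it will keep returning that same match forever unless somebody edits the list, because nothing in it ever registers “that was a mistake.”

```python
# Literal spell-checker sketch: rank dictionary entries by string similarity
# and pick the closest. The word list and the typo are invented for the example.

from difflib import get_close_matches

dictionary = ["connectome", "consciousness", "conditioning", "context"]
typo = "connectum"

suggestion = get_close_matches(typo, dictionary, n=1, cutoff=0.5)
print(suggestion)  # ['connectome'] -- a similarity score, computed and ranked,
                   # with no sense of success, failure, or difficulty attached
```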

And what’s a connectome doing? Experiencing, of course. Every thought, feeling, stimulus, and reaction is a part of that experience. It’s not just context that separates biological minds from machines – it’s the fact that subjective experience itself is the context.

_____________

1. There’s a Heraclitus quote that seems à propos here: “Ever-newer waters flow on those who step into the same rivers.” Or, as another translator put it, “We both step and do not step in the same rivers. We are and are not.”


3 Responses to “The Watson Situation”

  1. Quora says:

    By what means can consciousness be defined or perceived?…

    If you’re still checking this thread, Craig, thanks for clarifying those points. Your explanations helped me get around a few conceptual problems of my own. It was an oversight on my part not to have considered the role of perceived self/other boundar…

  2. hyunhochang says:

Well said! This brings us, I think, toward recognizing what a wonder it is that we living things have learned to perceive and learn. We fabricate a system of consciousness out of a tangle of neurons and learn to thrive, even though the only world we are aware of is entirely in our heads.

    Great analysis of Watson and why he, despite his monstrous amount of processing power, made such laughable answers. I appreciate how thorough you were.

    Also: “Meaning is a subjective process, rather than an objective fact.” YES!! Because, in the end, matter and energy don’t possess inherent meaning. Their arrangements only take meaning when perceived by a being capable of putting meaning into them. It exists in the collective heads of living creatures.

  3. [...] most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information [...]

