The Consolations of Philosophy

Jay Thompson
August 18, 2010

After a few beers, I thought I’d comported myself fine in my college French with the Niçoise beachbums on my summer trip. I grasped a few things the one time I read Hegel. I can tell gas from brake and could probably call myself a “driver” so long as all roads were straight, flat and vacant. But do I speak, do I fathom, and do I know how to drive? When someone says they understand, what do they mean?

In 1980, philosopher John Searle–a perfectly reasonable man who’s written widely on facts, consciousness, and the mind–devised a thought experiment so contentious and unkillable, its ramifications (like those around Nietzsche’s Superman, or Camus’s claim of Sisyphus’s heroism) have entered the lexicon of philosophical table-talk. The experiment is called the Chinese Room.

So. John S., someone who knows no Chinese, shuts himself in a large, comfy box with a very fast instructional computer. On a keyboard attached outside, Chinese speakers write him messages or questions in their native language: anything from What’s your favorite nighttime snack to Hi, my name is Jiang.

Inside, John S. receives messages he doesn’t understand, and, following character-by-character instructions, consults his computer to respond. However, he never touches a Chinese-English dictionary or translates a response himself; the operations he follows are entirely syntactical. Something like: When you see this Chinese character (the computer shows him the Chinese characters for “snack” or “my name is”), type this character (the computer shows him the “crackers” character, or the “how do you do? My name is” characters).
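
To see how purely syntactic that job is, here is a minimal sketch in Python. The rule table and the Chinese phrases are my own rough stand-ins, not anything from Searle; the point is only that the operator maps input symbols to output symbols without ever knowing what either side means.

```python
# A toy Chinese Room: replies come from pure symbol lookup.
# The rule table is invented for illustration; the operator never
# needs to know what any of these strings mean.

RULES = {
    "你最喜欢的宵夜是什么": "饼干",          # roughly: "favorite nighttime snack?" -> "crackers"
    "你好，我叫江": "你好，我叫约翰",        # roughly: "Hi, my name is Jiang" -> "Hello, my name is John"
}

def respond(message: str) -> str:
    """Match incoming symbols against the rule table and copy out the
    prescribed reply. No translation, no understanding, just lookup."""
    for pattern, reply in RULES.items():
        if pattern in message:
            return reply
    return "请再说一遍"  # a stock "please say that again"

print(respond("你好，我叫江"))  # a fluent-looking reply, produced blindly
```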

The painstaking response John S. assembles is thus meaningless to him, but, once completed and displayed, it constitutes a satisfactory reply to the Chinese questioner waiting (maybe waiting awhile) outside. So, the questioner assumes there’s a Chinese speaker inside the box. However, Searle stresses, this John S. cannot be said to understand Chinese.

Got that? Since John S. was only following instructions from a computer that plucked terms from a dictionary, phrasebook, etc., he wasn’t speaking Chinese in creating a reply; he was only completing an operation.

This distinction is important: it’s the reason, Searle says, that no computer will ever truly think.

Got that? Since, Searle says, computers perform only syntactic, switch-based operations, they’re the equivalent of John S. in the box, using inputs to arrange lexical content they don’t “understand,” but can sequence.

Google cross-references your search string and image tags to bring you that picture of a sneezing panda; the chatbot weighs your idioms and mood and aims straight for the middle in its prefab reply. But the brain, Searle stresses, does more than enact formal computations on symbols; thought requires semantics, the mysterious contentual something that Searle mostly characterizes by what it isn’t.
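
The chatbot half of that sentence works the same way as the box. Here is a small sketch, again in Python; the keywords and canned lines are invented for illustration, but scoring surface features and emitting the blandest matching prefab reply is the whole trick.

```python
# A toy chatbot in the same spirit: tally surface features of the
# input, then emit a prefab line. Keywords and replies are invented.

PREFAB = [
    ({"sad", "tired", "lonely"}, "That sounds hard. Tell me more."),
    ({"happy", "great", "excited"}, "That's wonderful to hear!"),
]
DEFAULT = "I see. Go on."  # the safe middle when nothing matches

def reply(text: str) -> str:
    words = set(text.lower().split())
    # Score each prefab line by keyword overlap with the input.
    scored = [(len(keys & words), line) for keys, line in PREFAB]
    best_score, best_line = max(scored, key=lambda pair: pair[0])
    return best_line if best_score > 0 else DEFAULT

print(reply("i feel tired and lonely tonight"))  # -> "That sounds hard. Tell me more."
```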

Some philosophers call the Chinese Room experiment a fallacy, akin to dismissing the theory of electromagnetism because you see no glow when you shake a fridge magnet. Others simplify Searle’s situation: would, say, a 3G network precisely simulating the neural activity of a brain in pain, feel pain?

The question goes further, though: what the hell is understanding?

To understand understanding (lord), we have to wrestle with the idea of intentionality. Intentionality means the ability to be about or toward something, and, Searle claims, it’s unique to the brain. Language as it’s spoken or written means only what the listener or reader understands of it; its intentionality is said to be only derived. But our fancies, propositions, and longings themselves aren’t just empty vases. They feel to us like they have intentional content.

I wake up from a dream still muddled, thinking about dinosaurs (about); I wish my girlfriend were back from Lopez Island (toward); I plan a trip (toward) before I trip-plan. So. Is intentionality intrinsic to flesh? Or to some sort of immaterial tingle only found in flesh?

One philosopher, the charming Daniel Dennett, says no; he goes so far as to imply that Searle is a gray-matter chauvinist. All intentionality, from mice to iPads, he says, is derived. An expression means only what the perceiver (including the self) takes it to mean. So we can call something intentional whenever an intentional explanation predicts behavior.

Got that? The conclusion would be that “states of suitably organized causal systems can have content” (that is, meaning) “no matter what systems are made of.” (The formulation is David Cole’s, from the addictive Stanford Encyclopedia of Philosophy.) So, Dennett might say, yes, the 3G network is in pain.

Then again, the desire to be unique. Are we only thinking if we know we’re thinking? Must a computer–or a toaster, which performs, after all, a simple 0-to-1 operation–be aware it’s intentional to be intentional?

David Cole, in his overview of the Chinese Room argument, asks another doozy of a question. How could evolution have led to understanding, or intentionality, anyway? Evolution doesn’t presuppose that others have minds; it selects not on the basis of understanding, but of behavior. A goldfish’s mental shark-image itself won’t save the goldfish from the shark. So how could evolution have led to understanding, if understanding doesn’t confer an advantage?

Or is what feels like understanding to us just a happy higher-order confluence, like a footpath walked into a shortcut through the grass, or thunderheads forming from small ionizations and electron shocks? Could it be merely that our intuitions and terms are wrong?

More on these questions next week, after I push my forecasting, desirous head ahead through work, privacy, pleasure, travail, and imaginings.

4 thoughts on “The Consolations of Philosophy”

  1. @Helen:

    That’s so rad! Why haven’t WE had that conversation? Stay tuned, I think the next column I’m going to write here is going to be about belief.

    @MZA:

    A friend of mine compares anxiety to a staticky radio– trying and failing to hold and follow a clear signal, different parts seeming to buzz off in different directions. So, being handed a thought– does a calm mind feel more like an old sunk ship with eels of input, sensation, desire swimming in and out of the holes? –Yet when I get really excited about an idea, I think I am having original thoughts about it though I am not, I am simply eagerly loosing eels out through the holes, the same ones that came in.

    @Chris:

    It’s interesting that Searle and Dennett both are willing to consider understanding a ‘higher-order process,’ and maybe even an aggregate or epiphenomenon of billions of ‘lower-order processes’– something that only SEEMS like one discrete thing, the way a single cumulus cloud seems discrete. Their dispute, it seems, lies mostly in the purely biological nature Searle attributes to this process.

    Or, as Cait tells me, the vanguard of prion research might make nonsense of any attempt to build a neuron-imitating machine. http://www.sciencedaily.com/releases/2010/02/100204144420.htm
    Is it possible that our delicate heads are the most complicated things in the universe? Let’s never die.

  2. Two weeks ago the Trek in the Park cast party ended with the small late-night remainder having a long conversation about the Chinese box. Minds, like particles, moving in unison without touching, yes?

  3. I think it’s worth explicitly mentioning the Systems Reply to the Chinese Room thought experiment (which you basically outlined without calling it such). The Systems Reply claims that it’s the whole system of box, man, cards, instructions, etc., that “knows” Chinese or that “is a Chinese speaker.” As an analogy to the human brain, we would say: in the mind of a Chinese speaker, Broca’s area (a region linked with language production) does not “understand Chinese”; only the entire brain “understands Chinese.”

    The Systems Reply was never super-convincing for me, and it hasn’t been for Searle. But since reading Daniel Dennett’s book, Consciousness Explained, it seems obvious to me that the Systems Reply is correct.

    In particular, I’m thinking of what Dennett says about the fallacy of the Cartesian Theatre. In talking about vision, for example, he says that it’s very intuitive for us to think that somewhere inside the brain, there’s a place where all the input data from the visual system is assembled into a picture for an inner observer, some place where “it all comes together,” a place that is “the seat of conscious experience,” or that IS equivalent to the running stream of consciousness that we experience.

    He calls this the Cartesian Theatre because he sees this inner homunculus model as a remnant of Cartesian Dualism, which most scientists reject. Descartes posited that the pineal gland was the point at which the immaterial soul interacted with the material brain (and hence body). But in discarding Descartes’s dualism (Dennett argues), many scientists and philosophers retain his notion that there must be a choke-point, or a single location where your phenomenal experience resides in totality, where the real “I” sits.

    Modern neuroscience and computational psychology suggest that this is an unnecessary (and likely false) premise. And Searle’s Chinese Room is entirely predicated on it. Though you’ve got to give it to Searle for coming up with such a delicious metaphor.
