After a few beers, I thought I'd comported myself fine in my college French with the Niçoise beachbums on my summer trip. I grasped a few things the one time I read Hegel. I can tell gas from brake and could probably call myself a driver so long as all roads were straight, flat, and vacant. But do I speak, do I fathom, and do I know how to drive? When someone says they understand, what do they mean?
In 1980, philosopher John Searle (a perfectly reasonable man who's written widely on facts, consciousness, and the mind) devised a thought experiment so contentious and unkillable, its ramifications (like those around Nietzsche's Superman, or Camus's claim of Sisyphus's heroism) have entered the lexicon of philosophical table-talk. The experiment is called the Chinese Room.
So. John S., someone who knows no Chinese, shuts himself in a large, comfy box with a very fast instructional computer. On a keyboard attached outside, Chinese speakers write him messages or questions in their native language: anything from "What's your favorite nighttime snack?" to "Hi, my name is Jiang."
Inside, John S. receives messages he doesn't understand, and, following character-by-character instructions, consults his computer to respond. However, he never touches a Chinese-English dictionary and translates a response himself; the operations he follows are entirely syntactical. Something like: when you see this Chinese character (the computer shows him the Chinese characters for "snack" or "my name is"), type this character (the computer shows him the "crackers" character, or the "how do you do? My name is" characters).
The painstaking response John S. assembles is thus meaningless to him, but when completed and displayed, it constitutes a satisfactory reply to the Chinese questioner waiting (maybe waiting awhile) outside. So, the questioner assumes there's a Chinese speaker inside the box. However, Searle stresses, this John S. cannot be said to understand Chinese.
Got that? Since John S. was only following instructions from a computer that plucked search terms from a dictionary, phrasebook, etc., he wasn't speaking Chinese in creating a reply; he was only completing an operation.
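If it helps to see how thin that operation really is, here's a toy sketch (my own illustration, not Searle's formalism): the room is nothing but a lookup table from input symbol strings to output symbol strings. The particular rules and glosses below are invented for the example; the point is that nothing in the code "knows" what any symbol means.

```python
# A minimal sketch of the room's purely syntactic operation.
# The rule book maps symbol patterns to canned replies; the English
# glosses are for the reader, not for the "room," which matches
# characters blindly. (Rules invented for illustration.)
RULE_BOOK = {
    "点心": "饼干",        # pattern glossed "snack" -> reply glossed "crackers"
    "我叫": "你好！我叫",   # pattern glossed "my name is" -> a greeting reply
}

def room(message: str) -> str:
    """Return a reply by pure pattern matching: no translation, no meaning."""
    for pattern, reply in RULE_BOOK.items():
        if pattern in message:
            return reply
    return "？"  # no rule matches; the room can only shrug

# The questioner outside sees a sensible answer; inside, only string matching happened.
print(room("你最喜欢的点心是什么？"))
```

The function passes Searle's test exactly as John S. does: its output is satisfying to the questioner, while the mechanism producing it manipulates forms it cannot interpret.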
This distinction is important: it's the reason, Searle says, that no computer will ever truly think.
Got that? Since, Searle says, computers perform only syntactic, switch-based operations, they're equivalent to John S. in the box, using inputs to arrange lexical content they don't understand, but can sequence.
Google cross-references your search string and image tags to bring you that picture of a sneezing panda; the chatbot weighs your idioms and mood and aims straight for the middle in its prefab reply. But the brain, Searle stresses, does more than enact formal computations on symbols; thought requires semantics, the mysterious contentual something that Searle mostly characterizes by what it isn't.
Some philosophers call the Chinese Room experiment a fallacy, akin to dismissing the theory of electromagnetism because you see no glow when you shake a fridge magnet. Others simplify Searle’s situation: would, say, a 3G network precisely simulating the neural activity of a brain in pain, feel pain?
The question goes further, though: what the hell is understanding?
To understand understanding (lord), we have to wrestle with the idea of intentionality. Intentionality means the ability to be about or toward something, and, Searle claims, it's unique to the brain. Language as it's spoken or written means only what the listener or reader understands of it; its intentionality is said to be only derived. But our fancies, propositions, and longings themselves aren't just empty vases. They feel to us like they have intentional content.
I wake up from a dream still muddled, thinking about dinosaurs (about); I wish my girlfriend were back from Lopez Island (toward); I plan a trip (toward) before I trip-plan. So. Is intentionality intrinsic to flesh? Or to some sort of immaterial tingle only found in flesh?
One philosopher, the charming Daniel Dennett, says no; he goes as far as implying that Searle is a gray-matter chauvinist. All intentionality, from mice to iPads, he says, is derived. An expression means only what the perceiver (including the self) takes it to mean. So we can call something intentional whenever an intentional explanation predicts its behavior.
Got that? The conclusion would be that states of suitably organized causal systems can have content (that is, meaning) no matter what the systems are made of. (The formulation is David Cole's, from the addictive Stanford Encyclopedia of Philosophy.) So, Dennett might say, yes, the 3G network is in pain.
Then again, the desire to be unique. Are we only thinking if we know we're thinking? Must a computer (or a toaster, which performs, after all, a simple 0-to-1 operation) be aware it's intentional to be intentional?
David Cole, in his overview of the Chinese Room argument, asks another doozy of a question. How could evolution have led to understanding, or intentionality, anyway? Evolution doesn't presuppose that others have minds; it selects not on the basis of understanding, but of behavior. A goldfish's mental shark-image itself won't save the goldfish from the shark. So how could evolution have led to understanding, if understanding doesn't confer an advantage?
Or is what feels like understanding to us just a happy higher-order confluence, like a footpath walked into a shortcut through the grass, or thunderheads forming from small ionizations and electron shocks? Could it be merely that our intuitions and terms are wrong?
More on these questions next week, after I push my forecasting, desirous head ahead through work, privacy, pleasure, travail, and imaginings.