A Horse of a Different Color
November 1, 2013
Stuart W. Mirsky in Epistemology, John Searle, Language, Ludwig Wittgenstein, Philosophy of Mind, Stuart Mirsky, The Chinese Room Argument, Understanding, W. V. O. Quine, philosophy

How is it that we take meaning from some sounds, symbols, visual cues and not others -- and what does it mean to say of any of these that they have meaning, and that we get it?

It's not wrong to suppose that we impute meanings to such things and that, without us or entities like us, there would be none. But what is it that we think we are imputing to the things in question? And how does our imputation constitute the attachment by which the sound, symbol or other cue carries with it that same imputation for others?

A symbol inscribed in some long forgotten language, when unearthed by an archaeologist, would have no meaning attached to it unless and until someone uncovers the key to it. It might not even be recognizable as something meaningful at all until the key is discovered. Absent a key, we should take it for nothing more than random markings or the like. But with a key for decoding we find meaning there. What is this meaning we have unlocked?

Wittgenstein might have said it's just the use to which the symbol was put by its long ago makers, a use we discover for ourselves by effective exercises in decoding (possibly through reliance on some standard, like the Rosetta Stone, or by using mathematical means to discern linguistic frequencies and deduce, from these, the role the markings once played for their makers in the long lost language). Words and other physical signifiers get their meaning because we give it to them by coming to understand their intended uses.
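The frequency-based approach to decipherment mentioned above can be sketched in a few lines. This is a minimal illustration, not a real decipherment tool: it simply ranks the symbols of an undeciphered text by how often they occur, on the assumption that the most frequent symbols are candidates for the most common letters or words of the lost language. The sample inscription is invented.

```python
from collections import Counter

def symbol_frequencies(text):
    """Relative frequency of each non-whitespace symbol in a text."""
    counts = Counter(ch for ch in text if not ch.isspace())
    total = sum(counts.values())
    return {sym: n / total for sym, n in counts.items()}

# A made-up inscription: the most frequent symbols become candidates
# for common elements of the lost language.
freqs = symbol_frequencies("◆▲◆●◆▲■◆●")
ranked = sorted(freqs, key=freqs.get, reverse=True)
```

This is only the first step a decipherer would take; matching the ranked symbols against the known frequencies of a candidate language is where the real work (and the Rosetta-Stone-style standard) comes in.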

But what does it mean to understand the use? Is it just that now we know how to use it as well? In a sense that's true, but what does that understanding itself consist of, beyond our performing the right acts (those the signifier was intended to evoke in us) or having the potential to do so under the appropriate circumstances? What underlies the behaviors that follow from and demonstrate our understanding of a discovered use? John Searle's Chinese Room argument (CRA) offers an opportunity to explore this aspect of meaning, i.e., the occurrence of understanding in a subject.

His argument is deceptively simple. Addressed to the question of whether a computer could be made to think, it pivots on the presumption that something peculiarly private and subjective goes on in us whenever we are thinking, and that this sort of phenomenon is denied to computers. The argument goes like this: Put a man who does not understand Chinese in a sealed room and give him a file of written Chinese symbols along with a set of rules, in a language he does understand, for relating the symbols to one another purely by their shapes, rules crafted so that the outputs they yield read as meaningful to actual Chinese speakers. Then present this incarcerated individual with a series of questions in those same Chinese characters from the outside, with no other information. Following the rules with which he has been provided, and assuming they are sufficiently comprehensive, he can now respond to the queries from the outside as if he understood the symbols' content. But, of course, he has no idea what he is reading or what the symbols he is outputting to his questioner say. He doesn't understand Chinese any more than a computer understands the symbols in the programs it is called upon to manipulate.
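The thought experiment can be caricatured in code. Below is a minimal sketch of the room reduced to a lookup table: the program matches input shapes to output shapes, and no meaning is represented anywhere in it. The particular question-and-answer pairs are invented stand-ins, not anything from Searle's own presentation.

```python
# A toy "Chinese Room": the rule book is a plain lookup table.
# The program matches character strings by shape alone; nothing in it
# represents what any of the symbols mean.
RULE_BOOK = {
    "你好吗？": "我很好。",        # hypothetical question/answer pairs
    "你会说中文吗？": "当然会。",
}

def room(question: str) -> str:
    """Return the rule book's answer, or a stock reply for unknown input."""
    return RULE_BOOK.get(question, "请再说一遍。")

print(room("你好吗？"))  # fluent-looking output, zero understanding
```

However large such a table (or however sophisticated the rules replacing it), the point of the argument is that nothing changes in kind: the manipulation remains purely formal.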

The formal argument then goes something like this:

1) Computers consist of nothing more than machines running programs.

2) Programs are nothing more than rules for symbol manipulation, i.e., they are just "syntax" (formal operations without any relation to the manipulated symbols' content).

3) Minds, on the other hand, have "semantics" (i.e., they are able to recognize and understand the contents of symbols with which they are presented).

4) Syntax does not constitute and is not sufficient for semantics. (This is self-evidently true, Searle tells us, and so indisputable.)

5) We know that brains "cause" (produce) minds while

6) Programs running on computers can't do the critical thing we know minds do: understand.


7) Therefore, computers can't "cause" (produce) minds.

On these grounds Searle asserts the impossibility of what he calls "Strong Artificial Intelligence," the idea that computers can be engineered and programmed to think. This doesn't foreclose the possibility of great sophistication in computational programming and computer behaviors, of course, some of which may look very much like the smart behaviors humans are capable of. It just says that, however complex and sophisticated we make a computer, it will never be able to understand something in the way we mean when we speak of understanding in relation to ourselves.

Left out in all this, though, is an account of meaning and understanding. For Searle the idea, which he generalizes as "semantics," is self-evident to anyone who takes a moment to introspect, to examine the contents of his or her own thoughts when in the act of understanding anything. Do we see the word "horse" spelled out in English letters (assuming we are English speakers and literate, of course)? Then we know what is meant while the computer just manipulating those symbols cannot. We get the meaning of the markings that form that word.

But what constitutes getting the meaning? What exactly is gotten? As Wittgenstein suggested, it is not this or that mental picture, because we're all likely to have quite different ones. Your horse may seem a pony to me, or a statue, or an image or a toy or a workbench used by carpenters. Mine may be a full-on profile, a picture of a handsome racehorse I once saw, perhaps, which I happen to recall when presented with the word "horse," while you may think of a frontal view of a very different-looking animal, or a running one, or perhaps you'll see a wild mustang loose on the range. Or a herd of them. What is it about all these different pictures that allows us to recognize them as being the same thing the word signifies when we see it, the same as what another means by that word "horse" when he or she utters it?

If it's not this or that picture, can we dispense with pictures entirely then? For some words that is no doubt quite possible. Sometimes the thought a word expresses may be so abstract as to resist pictorial reference, or it will allow for only a very vague and confused one. Try to think of a square circle, for instance, or the "flying purple people eater" referenced by an old rock song. Even the term "horse" itself can signify an abstraction as in the class of creatures of many different shapes, sizes, colors and capabilities we call by that word. Or perhaps a class of creatures we have never laid eyes on at all. In that case, though, would there be no pictures at all? The Indians of North America, when they first saw European horses, are said to have thought them very large dogs, a kind of creature with which they were quite familiar. And since the Europeans not only rode them but used them to pull things, as the Indians used dogs, that makes a kind of sense. But eventually the Indians learned that horses were nothing like the dogs of their experience and presumably their pictures of these creatures changed.

While it's possible to imagine meaning without mental pictures in some cases, or with only rudimentary or very vague ones, it seems unlikely we can eliminate pictures entirely from the idea of understanding. So, Wittgenstein notwithstanding, we cannot take pictures out of the account. But he certainly seems to have been right about eliminating the notion of a simple correlation between some symbol and some mental picture in any attempt to say just what understanding and semantic content amount to.

Willard Van Orman Quine suggested something he called the indeterminacy of translation (which actually fits in quite nicely with Wittgenstein's views re: language games, meaning as use and the familial nature of word-use relations). Simply stated, this is the idea that there are no exact translations, that they are always approximate, always a little vague, at least around the edges. Even within a common language we don't always know exactly what another means. The idea of synonymity is then seen to be elusive, as most folks who spend a good deal of their time writing for a living would probably be quick to confess. Writers will spend a great deal of time searching for the right word or phrase in a bit of text they're crafting, particularly if they happen to be so unfortunate as to be denied the right word's unbidden appearance in the stream of their consciousness. But such searches reveal the weakness of the concept of synonymity itself, for there are few if any words that mean precisely the same thing, even in the same language. There are always nuances, connotations and so forth to any word choice and, indeed, it is precisely to get the right mix of these that the writer searches when he is so unfortunate as to lack an inspired choice right off.

On this view, translation between languages, the process of getting meanings from unfamiliar words, works the same way. Moving between languages, one must constantly take care to pick the right version of the idea one means to express in the other language. A look at a translation program like Google Translate quickly shows the pitfalls. A word like "short," for instance, translated to Chinese, nets nine different results, many of which look superficially similar in their English equivalents but for which familiarity with Chinese usage reveals further variations not captured in Google's somewhat limited program. Three of the Chinese words that pop up as potential translations for "short" are glossed identically as "short, brief"; Google doesn't differentiate them, though a translator would treat them as interchangeable at his or her own risk. Another appears as "short, brief, concise"; one translates only as "short"; another is given as "short, concise." Another adds "low" and "low in grade, low in rank" to "short," while another introduces "insufficient, lacking, inadequate, not worth" (sic). Still another refers to the rapid passage of time, a short span as it were, and adds "hurried" and "pressing." Anyone wanting a simple one-to-one translation of "short" into its Chinese equivalent could not just recognize the sought word in the output, because that word can carry a variety of connotations and applications in Chinese, as it can in any given language.
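The many-to-many structure described above can be made concrete in a small sketch. The Chinese words and their glosses below are illustrative candidates chosen to mirror the senses the paragraph describes, not an authoritative dictionary, and the lookup function is a toy:

```python
# Illustrative (not authoritative) candidate translations for "short".
# Each Chinese word maps to a list of English glosses; note that no
# single gloss picks out a single word.
CANDIDATES = {
    "短":  ["short", "brief"],
    "矮":  ["short", "low", "low in grade", "low in rank"],
    "缺":  ["insufficient", "lacking", "inadequate"],
    "短暂": ["short", "brief", "transient"],
    "短促": ["short", "hurried", "pressing"],
}

def back_translations(english_word):
    """Chinese candidates whose gloss lists include the English word."""
    return [zh for zh, glosses in CANDIDATES.items() if english_word in glosses]
```

Querying `back_translations("short")` returns several distinct words, each carrying senses the others lack, which is exactly the failure of one-to-one correlation the paragraph describes.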

Aside from rendering John Searle's Chinese Room argument just a bit more problematic than it initially appeared, this tells us something important, not only about the vagaries of translation but about what understanding itself entails. As Quine suggested, we don't seem to understand on a bit for bit, or word for word, basis but, rather, in context, as part of a whole language -- that is, as part of a fairly broad working knowledge of how the words in a given language fit together.

Are they applied to time or distance, to physical entities or abstractions, to colors or shapes, to past or present or future phenomena? Each word has its own distinct set of connections and, within a given language, among speakers of a shared language, there will be a range of connections, too. Understanding is not something we do in the isolation of symbol-to-symbol comparisons and meanings are not simple thoughts or ideas that link, one-to-one, with any given signifier. Nor is understanding just the behaviors we are prompted to by the occurrence of a particular symbol to the extent we understand it. There is, as Searle rightly sees, something going on in the mind as well, something subjective.

But it doesn't have the look of a simple phenomenon, of being a feature such as perceptual capability (e.g., seeing a color or tasting a flavor -- though these are likely no less complex underneath even if they give the appearance of relative simplicity). Rather, understanding appears to be a complex phenomenon even as it manifests for us, consisting of the capacity to associate past and present inputs in complex web-like patterns -- which probably helps explain why it's so hard to pin this down and say what it is in fairly straightforward terms. Understanding, on this view, appears to be a matter of achieving a certain critical mass of commonality in the pictures held, crossing, as it were, a certain threshold of similarities.

If understanding the meanings of signifiers is not just a matter of one-to-one correlations between words and the pictures of objects they are thought to signify, then indeterminacy of translation would be just what we would expect. My horse is not your horse even if we both recognize the meaning of the signifying markings "horse" in ways that seem comprehensible to each of us because they prompt familiar behaviors most of the time. And when they don't there is confusion -- until, of course, we sort out the differences between our ideas of what's been said and realize, say, that you mean the carpenter's device on which he may cut a two by four and I mean the last winner of the Triple Crown.

Article originally appeared on Ludwig (http://ludwig.squarespace.com/).