Can Machines Get It?
John Searle famously claimed, in his Chinese Room Argument (CRA), that computers running programs, however sophisticated or complex, can never be expected to understand anything, because computer programs are, as he put it, formal (consisting of rules of operation that have no relation to the content of the symbols they manipulate). Computers running programs, the argument goes, cannot have semantics (cannot understand meanings) because they don't get the meaning of the symbols they manipulate according to the programmed instructions they follow. But Searle never provides an analysis of just what this thing computers are allegedly missing, this semantics (this getting of meaning), amounts to. We know it when we have it, he suggests, and what we know we have is manifestly absent from programs running on computers. Just look and see for yourself! That's the CRA in a nutshell.
It has long seemed to me that one of the serious flaws in that argument (and there are several) is its failure to elucidate this mysterious feature he calls semantics (the meaning of a symbol, word, statement and so on). After all, if machines like computers can't have it, we ought at least to know what it is we think they are unable to possess. It can hardly be enough to suppose that what we find in ourselves at moments of introspection, when we are aware of understanding a symbol, word or statement, isn't available to computers merely because we can't imagine it occurring in such systems.
We can't "see" into the minds of other humans either, yet we have little doubt that most entities like ourselves have the sort of mental lives we have -- even if there are differing degrees of understanding available to this or that individual. We can readily imagine other persons having the kind of mental life we find in ourselves. But computers, being man-made machines built of silicon chips and wires and running electrical currents, just seem different. We're flesh and blood with constituent organs, such as brains which do mysterious things (mysterious as of now at least) and all our experience tells us that only organisms like ourselves have the capacity for behaviors which manifest consciousness (including instances of understanding). Machines (at least in our experience to date) are, at best, glorified calculators, doing the things we make them do without benefit of thinking about any of it.
And yet it's perplexing because we can't say just what this missing component, this understanding, amounts to. We can recognize it behaviorally, of course, but if by some chance some machines were to behave as we do, would we even acknowledge it? Not, one supposes, if we embrace the Searlean argument that it is logically denied them.
Searle's argument suggests that it's inconceivable for machines to have the kind of interior life (the mental phenomena) that characterizes our existence on a subjective level. But is the argument enough? After all, we have access to no one's interior life but our own, even among the community of the animate, so knowing it when you have it cannot possibly be a sufficient basis for denying another's interiority. The issue, then, is what we are being asked to think is absent in the machine yet present in us, and why a machine, made of chips and wires and other inert materials, can't have it too.
Some years ago I found myself on a road trip with my wife, driving back to New York after a brief sojourn in Florida, when, passing through the Carolinas, I noticed a road sign which threw me for a loop. "Burn lights with wipers" it read, and for a moment I had an image of a blazing bonfire with local folks tossing light bulbs and windshield wiper blades into the middle of the raging flames. It was a peculiar picture that made no sense and for a few moments I was befuddled. What was that sign trying to tell me, a driver on the highway in his vehicle, hurtling along at 60 miles per hour? Then the pictures I had changed and I got it.
What was meant, of course, was that drivers should turn on their vehicles' headlights when inclement weather obliged them to turn on their windshield wiper blades. It was quite simple really, only expressed in an idiom with which I wasn't familiar. Understanding had finally, if belatedly, arrived, and it did so in the form of several new pictures: mental images of my car's dashboard and of me leaning toward it to yank the headlight switch on as a pounding rain beat against the windshield and my car's wiper blades rhythmically stroked the sheets of cascading water away. I had another image, too, of a car smashing head-on into mine because I had failed to follow the sign's instructions, and so that car's driver, in the terrible downpour, hadn't seen me coming toward him.
All of that was prompted in me by the simple words of that sign, replacing the earlier, out-of-place image of light bulbs going into a bonfire. The new images fit the circumstance in which I found myself; the earlier one had not.
I turned to my wife, intending to tell her what had occurred, but she was talking about something else and I didn't want to interrupt her, and in the end I let the moment pass and never mentioned the incident to her afterwards. If I hadn't had occasion to refer to it over the years, in passages like this one, it would long since have faded from my memory, as so many others must have done over the course of my life. But what had really occurred at that moment? What had my understanding consisted of?
Anyone observing me from the outside might have said that it was just my behavior that would have changed following my intake of the sign's information. But, of course, my behavior never actually changed, so that can hardly be an adequate description of the instance of understanding I'd had. Not even my verbal behavior changed, since I hadn't said anything about it to my wife and there was no one else around to report it to. Indeed, the moment passed, except for my recalling it some time later and thinking about it periodically since.
Now Wittgenstein had made the point that the meaning of a word cannot lie in some picture it conjures up for us and that seems right. After all, my pictures are not accessible to others and, indeed, the people who made that sign would not have had the same pictures in mind that occurred to me at the moment I got their sign's message. And what are the chances that anyone else, reading and understanding the sign, would have the same pictures I had? What they would think of, what they would see in their own minds' eyes at the moment of comprehension, would be a function of their own particular histories, the things they had seen and experienced over the course of their own lives, just as my mental images were the result of the experiences that had informed my life.
At a minimum, others could be expected to see different dashboards and actions than I had, different scenes of rain and roads and events, each unique and quite unlike mine. And while I saw a crash in which I was involved for failing to heed the sign, they might see something else entirely, their minds and their associations working with quite different memories than mine. And yet all of us, reading that sign, if we understood it, would have to share something, or there could be no possibility of communication via language at all. What, then, was this strange thing, this instance of understanding, which Searle suggests we can have but which is necessarily unavailable to computers?
The popular television game show Jeopardy!, in which contestants are presented with various arcane words and phrases and asked to formulate the single appropriate question to which that word or phrase is the answer, offers a possible way in. IBM built a complex array of ninety or so interconnected servers with vast databases of information and a complex system of programs to run on this array, calling the whole thing "Watson". The point was to build a system that could respond like a human when confronted with the same degree of complex information. The programs were designed to interpret human language and, importantly, to make associations when given inputs in the form of Jeopardy "questions", associations that hadn't been pre-scripted, in order to come up with the kinds of answers humans would produce in similar circumstances. Watson was then pitted against human contestants and, in the event, did quite well in a series of matches broadcast on national television. It was something of a big deal, because Watson made human-like associations and beat its human counterparts a good deal of the time.
So did Watson understand? Not in the sense we humans do, because Watson, for all its sophisticated programming and engineering, still lacked the full panoply of human experiences (a history of remembered associations to constitute a self) and, importantly, the human capacity to have mental images based on its inputs. Watson associated inventively, unpredictably, but still without the added elements we humans can claim. As Searle put it afterwards in a piece he wrote for the Wall Street Journal, unlike a human being, Watson didn't know it was in a contest or that it had won when it did. Watson had no knowledge of the broader situation in which it was operating and no capacity for elation at its own success. But is that fatal?
Searle's argument against the possibility of machine consciousness hinges on these absences, but if a system can be built to associate inputs as we do, why not to do those other things we do as well? What is there about what we are capable of doing that is beyond the scope of increasingly complex computational programming? Can machines have mental images as we do? Well, as with understanding, it depends on what we mean by the terms. Certainly, closing one's eyes and evoking a mental picture of some image one has seen is in no way like seeing that image in fact. And yet there is a link between what we have seen and what we recall having seen: I can tell the difference between my memory of a face and actually seeing that face, and the former is nothing like the latter.
What then is a mental life for us and why can't machines, quite unlike ourselves in their constituents and organization, also have that? If the system called "Watson" could run programs that get the point of an input sufficiently to come up with an answer a human might select, why couldn't it also run programs that see and recall images?
What is seeing, after all, but the processing of inputs in ways that connect them with other retained inputs, fitting them into some larger, usable body of retained information? And why couldn't a machine, driving a vehicle on the road and confronted with the same sign, with its symbols, that confronted me, make the necessary associations as I did and, in doing so, run the same kinds of images, for its own review, that flew through my mind when understanding occurred in me?
Searle argues that semantics is excluded from computational processes running on computers, but if semantics is just those meanings we get from certain inputs by way of the associations they prompt in us, then there is no obvious impediment to building machines that can have that, too.
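To see what that associative picture might look like in the barest mechanical terms, here is a deliberately crude toy sketch in Python: a small store of remembered associations, a phrase that evokes some of them, and a filter that keeps only those which fit the current situation. Everything in it (the AssociativeMemory class, the stored "memories", the context set) is invented for illustration; it is not a model of Watson, of the brain, or of any real system, only a cartoon of the idea that "getting" a sign is a matter of which associations it activates and which ones fit.

```python
# Toy sketch of "meaning as association": an input phrase is "understood" to the
# extent that it evokes stored associations (remembered images and actions) that
# fit the system's current situation. All names and data here are invented.

from dataclasses import dataclass, field


@dataclass
class AssociativeMemory:
    # maps a cue word to the remembered images/actions it tends to evoke
    associations: dict = field(default_factory=dict)

    def learn(self, cue: str, memory: str) -> None:
        """Store an association between a cue word and a remembered scene."""
        self.associations.setdefault(cue.lower(), []).append(memory)

    def interpret(self, phrase: str, context: set) -> list:
        """Return the associations evoked by the phrase that also fit the
        current context (the 'images that fit the circumstance')."""
        evoked = []
        for word in phrase.lower().split():
            for memory in self.associations.get(word, []):
                if context & set(memory.lower().split()):
                    evoked.append(memory)
        return evoked


if __name__ == "__main__":
    mind = AssociativeMemory()
    # a driver's remembered associations, built up from experience
    mind.learn("lights", "reach for the dashboard headlight switch while driving")
    mind.learn("lights", "toss light bulbs onto a bonfire")
    mind.learn("wipers", "wiper blades stroking rain off the windshield while driving")

    sign = "Burn lights with wipers"
    context = {"driving", "rain", "highway"}
    print(mind.interpret(sign, context))
    # Only the driving/rain memories survive; the bonfire image, evoked but out
    # of place, is filtered out -- a crude analogue of finally "getting it".
```

Nothing in this sketch settles the philosophical question, of course; it only shows that "associations prompted by inputs, filtered by circumstance" is the sort of thing a program can straightforwardly do, which is all the argument above requires.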