Editor's note: Professor Simon's article appears here in 5 parts. This is the second part.
A word that is recognized may be associated with a vast amount of information in memory. How long an essay could a person write from all of the information associated in memory with the word "religion"? However, not all of this information becomes available immediately upon recognition of the word. What symbols will be evoked from memory when the word is recognized depends on context, where context includes not only memory of surrounding elements of the text but also other relevant memory elements that have been active recently.
Accessing items in memory activates them so that they are likely to be evoked again by subsequent recognition of symbols that are at all connected with them. Because activation gradually dies away, only a small fraction of memory will be activated at any one time. If Roman Catholicism has been a recent topic of conversation or contemplation, then the word "religion" is likely to evoke information about Catholicism, but information about Islam may not come immediately to mind; while if the conversation was about recent developments in the Islamic world, the same word is more likely to evoke information about Islam.
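This picture of activation that is boosted by access and then gradually dies away can be rendered as a toy program. The sketch below is purely illustrative (the class, its decay rate, and the "religion" example are my own inventions, not Simon's formalism): items carry an activation level, access raises it, time lowers it, and a cue evokes whichever of its associates is currently most active.

```python
class ActivationMemory:
    """Toy model of context-dependent evocation: activation decays over
    time, and recognition of a cue evokes its most active associate."""

    def __init__(self, decay=0.5):
        self.decay = decay        # fraction of activation retained per time step
        self.activation = {}      # item -> current activation level
        self.links = {}           # item -> set of associated items

    def associate(self, a, b):
        # Record a (symmetric) association between two memory items.
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def access(self, item, boost=1.0):
        # Accessing an item activates it, so it is more easily evoked later.
        self.activation[item] = self.activation.get(item, 0.0) + boost

    def tick(self):
        # Activation gradually dies away.
        for item in self.activation:
            self.activation[item] *= self.decay

    def evoke(self, cue):
        # Recognition of a cue evokes its currently most active associate.
        associates = self.links.get(cue, set())
        return max(associates,
                   key=lambda x: self.activation.get(x, 0.0),
                   default=None)

mem = ActivationMemory()
mem.associate("religion", "Catholicism")
mem.associate("religion", "Islam")
mem.access("Catholicism")        # Catholicism was a recent topic
print(mem.evoke("religion"))     # -> Catholicism
mem.tick(); mem.tick()           # time passes, activation decays
mem.access("Islam")              # conversation turns to the Islamic world
print(mem.evoke("religion"))     # -> Islam
```

The same cue thus evokes different contents on different occasions, depending entirely on what has been active recently.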
The meaning of the text, then, will be a function of the memory contents that are accessed by recognition of words. Which of the whole collection of memory contents will be accessed depends on context: that is, on which contents are associated, directly or indirectly, with the word recognized, and on the extent to which they have recently been activated.
Many meanings can be evoked by recognition of a single word or phrase, and, especially if the context is lean, more than one of these meanings may be compatible with the text. Ambiguity of meaning derives from the multiplicity of elements that may be evoked by a text in specific readers and on particular occasions.
The contents of the reader's memory have their origins both in direct experience with the physical environment and in communication with the surrounding culture, the latter generally accounting for far more content than the former. Hence, insistence that meaning is evoked in an individual memory in no way ignores the social content and origins of meaning.
Compare, for example, the meanings evoked by the first and the last sentences, respectively, of Stendhal's Chartreuse de Parme. The novel begins:
On the 15th of May, 1796, General Bonaparte made his entry into Milan at the head of that young army which managed to cross the bridge of Lodi, and to teach the world that after so many centuries, Caesar and Alexander had a successor.
Since we have just begun the book, our context derives from the knowledge and experiences we already have had prior to opening it. To give meaning to this sentence we must draw on our knowledge of history, of Napoleon and the critical battle at Lodi, of Rome, and of the Hellenistic world. What affect the passage evokes will depend on our own attitudes toward Napoleon, the French Revolution, and military conquest. Stendhal himself, of course, must have had in mind additional meanings that will only become apparent--will only be evoked--as the book proceeds.
The last sentence of Chartreuse reads:
The prisons of Parma were empty, the count immensely rich, Ernest V adored by his subjects who compared his government to that of the grand dukes of Tuscany.
This final sentence gains meaning from the whole novel that precedes it, in particular from the contrast between the glorious era of Napoleon and the mean epoch (in the novel's description) that followed. The last sentence of the Chartreuse explicates the first one, as the first now illuminates the last. The meaning of each is expanded by knowledge of the meaning of the other.
The process of evoking is absolutely central to finding meaning and, indeed, to most thinking processes. When we attend to stimuli, certain patterns in the stimuli serve as cues: noticing them produces acts of recognition, and acts of recognition give us access to information stored in memory about the recognized things. We observe a face, and its features allow us to recognize a friend, thereby gaining access to the friend's name and all sorts of additional information about him or her.
Recovering information by recognition is exactly like finding information in a book from an entry in the index. The cue, the index entry, directs us to the pages in the book where the information is stored. In fact, we will not be misled if we think of the human memory as a richly indexed encyclopedia, which is also liberally provided with cross-references (associations) from one part of the text to others. Observing a word or phrase in something we are reading, we recognize it, thereby getting access in a second or less to information stored in memory about it. Then we augment that information further (expand the meaning) by following associations (cross-references) in memory. In particular, we are likely to follow those associations that are activated.
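The indexed-encyclopedia picture can be made concrete with a small sketch. The entries and cross-references below are invented for illustration (drawn loosely from the Stendhal passage above): an index maps a recognized cue to stored information, and "see also" links let us augment that information by following associations.

```python
# Memory as a richly indexed encyclopedia with cross-references.
# (Entries are illustrative, not a real knowledge base.)
encyclopedia = {
    "Lodi":      {"info": "bridge; site of the 1796 battle",
                  "see_also": ["Bonaparte", "Milan"]},
    "Bonaparte": {"info": "general of the young army",
                  "see_also": ["Lodi"]},
    "Milan":     {"info": "city entered on 15 May 1796",
                  "see_also": ["Bonaparte"]},
}

def look_up(cue, depth=1):
    """Recognize a cue, retrieve its entry, then expand the meaning by
    following cross-references (associations) to a limited depth."""
    entry = encyclopedia.get(cue)
    if entry is None:
        return []
    meanings = [entry["info"]]
    if depth > 0:
        for ref in entry["see_also"]:
            meanings.extend(look_up(ref, depth - 1))
    return meanings

print(look_up("Lodi"))
```

Recognition retrieves the entry itself; following the cross-references is what "expands the meaning," and the depth parameter stands in for how far activation carries the chain.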
Another text, the opening lines of Camus' La Chute, illustrates brilliantly these mechanisms at work:
May I, Sir, offer you my services without risking being importunate? I fear that you may not know how to make the estimable gorilla who presides over the destinies of this establishment understand you. Indeed, he speaks only Dutch. Unless you allow me to plead your cause, he will not guess that you want gin.
Where might the cross-references in the text lead us? By the third sentence, we become aware that we are in the Netherlands, perhaps in Amsterdam (Dutch → Netherlands → Amsterdam). Of course "Amsterdam" can now evoke the whole mass of information we have about that city and associations with it. "Gin," in turn, cross-references to "bar," defining the milieu and identifying the "gorilla" as a bartender. Other connections are more subtle. "Plead your cause" suggests the law and indeed foreshadows the profession of the speaker. And the word "importunate," applied to the relation of speaker to listener, can lead some readers to the Ancient Mariner. Moreover, the tone of the passage may evoke in the reader whatever affect is associated in that reader's memory with a courtly manner (or a pretentious one, if it is heard that way). If we perseverate on the paragraph, there is much more that can be evoked--much of it idiosyncratic to the reader.
Thus, the evocation of certain symbols may evoke others by the chain reaction that we call mental association. The burst of evocation that a bit of text may induce is limited only by the richness and complexity of the memory structures that it activates. The more elaborate the structures that are evoked, the more the meaning to the reader is defined by the reader's memory, the less by the author's words. The meaning of text is determined by a relation between the text itself and the current state of the memory of the reader, its contents, and its state of activation.
Although there is no doubt that the processes just described take place in the collection of neurons called the brain, neurophysiology has not yet discovered by what biological mechanisms they are accomplished. Nor do we know, physiologically, how neurons store memories. We do know, thanks to modern electronic computers, that memories can be stored in physical devices, and that such processes as evoking, activating, and associating can be carried out with them.
Without attempting to imitate the underlying physiological mechanisms, we can program digital computers to simulate human thinking closely at the level of symbolic processes (information processes). The hypothesis that this is possible--that the symbolic processes a computer can execute are precisely the processes that are needed for thinking of the sort we humans do--is referred to as the "physical symbol system hypothesis." Let us see just what this means--but first, what it does not mean.
The physical symbol system hypothesis does not mean that there is any resemblance between chips and neurons or between the arrangements of circuitry in a computer and in the human brain. But at a more aggregate level, symbols can be represented by neuronal patterns, and, in a manner that is wholly different physically, symbols can also be represented by patterns of electromagnetism in computers. It is at this symbolic level that, despite the radical difference in implementation, the symbols stored and processed in a computer can simulate the symbols stored and processed in the brain.
A symbol is a pattern, any pattern that denotes or points to some other pattern. The pattern pointed to may be another pattern stored in the brain (or computer) or a pattern in the external world. The basic processes that a computer can perform with symbols are to input them into memory, combine and reorganize them into symbol structures, store such structures over time, erase them, output them through motor processes, compare pairs of symbols for equality or inequality, and "branch" (behave conditionally on the outcome of such tests). The physical symbol system hypothesis asserts that possessing these processes is the necessary and sufficient condition for a system to be capable of thinking.
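The list of basic processes just enumerated can be illustrated in a few lines of code. The sketch below is not a claim about any particular architecture; it merely shows that inputting, combining, storing, erasing, outputting, comparing, and branching on symbols are perfectly definite, mechanizable operations.

```python
# A minimal physical-symbol-system sketch (names are illustrative).
memory = {}

def input_symbol(name, pattern):
    """Input a symbol (a pattern) into memory."""
    memory[name] = pattern

def combine(name, *parts):
    """Combine stored symbols into a symbol structure."""
    memory[name] = tuple(memory[p] for p in parts)

def erase(name):
    """Erase a symbol from memory."""
    del memory[name]

def output(name):
    """Output a symbol (standing in for motor processes)."""
    return memory[name]

def same(a, b):
    """Compare a pair of symbols for equality."""
    return memory[a] == memory[b]

input_symbol("s1", "dog")
input_symbol("s2", "barks")
combine("fact", "s1", "s2")       # the structure ("dog", "barks")
if same("s1", "s1"):              # "branch": behave conditionally on a test
    result = output("fact")
print(result)
```

Nothing in the sketch depends on the symbols being numbers; words, and by extension diagrams or pictures encoded as patterns, serve equally well.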
To the extent that the hypothesis is true (an empirical question), computers can be used to simulate human thinking and, to the extent of their success, can provide testable and tested theories of thinking. The computer program (technically, a set of difference equations) is the formal equivalent of the systems of differential equations that natural scientists so commonly use to express their theories. One important feature that distinguishes this new form of theory is that it is not limited to numbers: symbolic computer programs can incorporate words, diagrams, or pictures as readily as numbers. As a matter of fact, numbers seldom enter at all into the programs that have been written to simulate human thinking.
The confidence with which, in the previous sections of this paper, I have spoken of the structure of human memory as resembling an indexed encyclopedia, or of the process of evoking meanings from memory, is based on the fact that just such processes have been embodied in computer simulations, and the behavior of the resulting programs in the face of varied tasks has been found to resemble closely human behavior in the same tasks (Feigenbaum and Simon, 1984).
The use of computers to simulate human thinking is usually regarded as a part of the larger discipline of artificial intelligence (as well as of cognitive psychology). So it is, but we should not confuse the aims and content of work in artificial intelligence that seeks to simulate human thought with the aims and content of quite different work that seeks to produce intelligent programs to complement (or sometimes substitute for) human intelligence. Artificial intelligence embraces both endeavors.
If we write a computer program to play chess as well as it can (and the best existing programs play very well indeed), we need not observe the same limits on memory and processing speed as restrict humans. In fact, the most successful programs at present substitute a great deal of brute force search for human cunning. Deep Thought, the current champion among computer programs, may look at 50 million branches of the game, or more, before choosing its next move. Human grandmasters seldom look at more than 100 (but they almost always look at the relevant ones).
When we write a computer program to imitate (and understand) how grandmasters operate, we forego the speed and memory advantages of the machine. Instead, we try to incorporate a rich body of chess knowledge in the program, and imitate the human capability of choosing good moves after searching very selectively. Computer programs, quite unlike Deep Thought, that play along these lines have performed at perhaps the Expert level, but not the Grandmaster level. This means that, while we have managed to simulate the principal mechanisms of human chess play, we have not yet managed to provide a single program with the full array of knowledge that a grandmaster possesses.
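The contrast between the two approaches can be quantified with a toy game tree. The branching factor and pruning parameter below are invented for illustration (real chess positions average roughly 30 to 40 legal moves): exhaustive search examines every successor to a fixed depth, while knowledge-guided search keeps only a few plausible moves at each position.

```python
def brute_force(depth, branching=8):
    """Examine every successor to a fixed depth.
    Returns the number of positions looked at."""
    if depth == 0:
        return 1
    return 1 + sum(brute_force(depth - 1, branching)
                   for _ in range(branching))

def selective(depth, branching=8, keep=2):
    """Human-style selectivity: chess knowledge prunes all but a few
    plausible moves at each position."""
    if depth == 0:
        return 1
    return 1 + sum(selective(depth - 1, branching, keep)
                   for _ in range(keep))

print(brute_force(5))   # 37449 positions at branching factor 8
print(selective(5))     # 63 positions when only 2 moves survive per level
```

Even in this miniature, the gap between tens of thousands of positions and a few dozen echoes the gap between Deep Thought's millions of branches and the grandmaster's hundred. The hard part, of course, is the knowledge that makes the selection good.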
To say much more about computer simulation of thought would go beyond the scope of this paper, but I and others have explained in some detail on other occasions how the processing of meaning is accomplished by computers (Simon, 1981, chapters 3 and 4). Computer simulation is the gold coin that stands today behind the promissory notes of neurophysiologists, who some day will tell us how the manipulations of meanings we now carry out in computers are carried out by brains. Until that time, simulation tells us why the symbolic manipulation called thinking can be carried out at all and provides us with a high degree of precision in expressing and testing our theories of human thinking. It assures us that when we talk about meanings, evoking, and associating, we are talking about perfectly definite processes that can be executed by mechanisms.
Let us turn to even simpler examples of meaning than those already discussed. Meanings can attach to units as small as a word (or even a morpheme within a word), as well as to clauses, sentences, and much larger units. We begin at the bottom. One can think about the meaning of the word "dog" in a variety of ways. One can think of the executable perceptual tests stored in memory that enable one to answer, more or less veridically, the question: "Is that a dog?" These tests provide what is often called the intensional meaning of the term.
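The "executable perceptual tests" view of intensional meaning can itself be written out as an executable test. The features and their representation below are invented for illustration; the point is only that a stored procedure can answer, more or less veridically, "Is that a dog?"

```python
def is_dog(percept):
    """A toy battery of perceptual tests for 'dog'.
    The percept is a dict of observed features (illustrative only)."""
    tests = [
        percept.get("legs") == 4,
        percept.get("barks", False),
        percept.get("has_fur", False),
    ]
    return all(tests)

fido = {"legs": 4, "barks": True, "has_fur": True}
cat  = {"legs": 4, "barks": False, "has_fur": True}
print(is_dog(fido))   # True
print(is_dog(cat))    # False
```

The predicate captures only the intension; the extension (the particular dogs one has known) and everything else one knows about dogs would be further structures in memory, as the next paragraph describes.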
But there is more to dogs than just recognizing them. One has also stored in memory a large body of information about dogs that goes far beyond the capacity of recognition. There is the whole set of dogs one has known: a part of the extension of the term. One knows also a great deal about what dogs eat, what noises they make (and under what circumstances), what behaviors toward them might cause them to wag their tails, to bark, or (worse yet) to bite. On the basis of this knowledge one can form expectations and predictions about a dog's behavior. One has not only knowledge and beliefs about dogs, but feelings as well. The mere propinquity of a dog arouses strong fear in some people and irresistible impulses to pet it in others.
I haven't begun to exhaust the meaning of "dog" for almost anyone. There's the Dog Star, and dog days, hangdog behavior, a dog's life, and there are doggish people. All of these chunks, and others, stored in memory, are part of the meaning of "dog." Thinking of one or another of the chunks may evoke the word "dog," and hearing or reading the word may evoke some of the chunks.