SEHR, volume 4, issue 1: Bridging the Gap
Updated 8 April 1995

Editor's note: Professor Simon's reply appears here in two parts. This is the first part.

reply to commentaries

literary criticism: a cognitive approach

Herbert A. Simon

Meaning, I say in my paper on literary criticism, is what is evoked by the stimulus. Having evoked about 60,000 words of commentary from 33 busy people, my paper must have been quite meaningful. But "meaningful" does not imply that these readers deemed everything it said to be correct. In general, people comment on things they want to improve, add to, put straight, or deplore. The present case is no exception. So now I have the formidable task of commenting on the proposed corrections and addenda.

Given the 4/1 ratio of commentary to text, there is a danger that my responses could expand to a quarter million words. To avert that disaster, I will not reply separately to each author, but will organize my remarks under a few main headings: the definition of meaning, literature as rhetoric, the non-cognitive aspects of meaning, computer simulation as a means for understanding human thinking, the special role of embodiment in human understanding, situated action and LCT, research methods, and some brief comments on a few other topics.

I hope that no one will take umbrage or think that I have not benefitted from his or her comments if I do not reply individually or if I direct some of my replies at a subset of those who commented on a particular issue. For brevity, I will reference comments by the last name of the author and page number. I will not reply to statements of the form: "We already knew that" (e.g., Dupré and Gagnier, 54; Akman, 31-32; Adams, 28; Rotman, 99), for I take such statements as an agreeable sign that some bridges are already in place for conversation between cognitive science and literary criticism.

The whole exercise has been a valuable learning experience for me. I have learned a great deal about how the critical enterprise is regarded by some of those engaged in it (and some not engaged in it), have learned what I must phrase differently if my meaning is to be understood by readers, have learned of valuable writing and research with which I was not previously familiar, and have even altered my views on a few matters. (It is well understood that exchanges of this sort seldom bring about major conversions.) In any event, I am deeply grateful to the commentators for their thoughtful and courteous remarks, and I am understanding of the occasional discourteous ones generated either by genuine passion or in obedience to the formulary (especially in deconstructionist and LCT genres) that calls for scolding and scoffing.

My paper centered around the question of ascertaining the meaning of a text. This is certainly a focal question for literary criticism, but my preoccupation with it led some commentators to suppose that I thought it embraced the entire subject. They mentioned other topics that are both relevant and important, thereby giving me an opportunity now to broaden the picture somewhat. I will comment on such matters as the rhetorical content of text, the affect it may evoke, and whether meaning is meaning to the reader or to readers in the plural and social sense. But in my replies I will continue to focus on meaning and the psychological processes that establish and extract it, for understanding meanings is prerequisite to any functions-descriptive, esthetic, rhetorical, moral, or what not-that the text may perform.

At the outset, I have a few words of apology and explication for Holland (65), who points out that, while I generally adopt a reader-active stance in describing how meaning is extracted from text, I occasionally slip into treating the text as the active agent (as in my reference above to "the affect the text may evoke.") Let me say that I intend every such phrase to mean that "the reader, in processing the text, evokes such and such symbol structures from memory." Clearly, the processes are in the reader, not in the text.

Some commentators find my definition of meaning too broad and vague (Dupré and Gagnier, 54; Adams, 28; van Brakel, 113). Is everything evoked by mention of the word "dog" to be part of its meaning? We surely need a term to cover that whole domain, and "meaning" seems to be the nearest equivalent in English. Is this vague? Not if we build computer models of the evoking process and use them to predict behavior. Of course in my paper, I did not describe the actual models that are available for examination in the cognitive psychology literature, e.g., John Anderson's ACT* (1983), or Feigenbaum, Richman, and Simon's EPAM (Feigenbaum and Simon, 1984). In such models there is nothing vague about what contents of memory are evoked at a given time when a particular stimulus is presented-although other things may be evoked at other times (pace Korb, 75). Complicated, but that's the way the world of meaning is; and as more than a few commentators observed, literature counts on its being intricate in this way.
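The evoking process Simon points to can be illustrated with a toy sketch. This is not EPAM or ACT*; the memory contents, context names, and filtering rule below are invented for illustration only, to show how "meaning" can be rendered as a precise, testable evoked set rather than a vague notion:

```python
# Toy illustration (not EPAM or ACT*): "meaning" as the set of memory
# contents a stimulus evokes, modulated by the current context.
# All associations and context names here are invented.
MEMORY = {
    "dog": {"animal", "barks", "pet", "wolf-kin"},
    "apple": {"fruit", "red", "tart", "nausea"},  # idiosyncratic association
}

CONTEXT_WEIGHTS = {
    "kitchen": {"fruit", "tart", "red", "nausea"},
    "zoo": {"animal", "barks", "pet", "wolf-kin"},
}

def evoke(stimulus, context):
    """Return the evoked set: stored associations filtered by context."""
    associations = MEMORY.get(stimulus, set())
    relevant = CONTEXT_WEIGHTS.get(context, associations)
    return associations & relevant

print(evoke("apple", "kitchen"))  # context selects which associations surface
print(evoke("dog", "kitchen"))    # nothing relevant is evoked here
```

In a model of this kind there is nothing vague about what is evoked at a given moment, even though the same stimulus evokes different contents in different contexts, which is exactly the complication Simon says literature counts on.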

Dreyfus (50) is concerned about something he calls the "relevance" problem: is everything that is evoked in any given context relevant to interpretation? Of course not. If your mind works like mine, it is troubled by irrelevancies all the time, and it would be wrong in a theory of mind to pretend they aren't evoked. Not everything that is evoked is retained and used for understanding and interpreting. It is one of the functions of conscious awareness to perform a filtering function that strives to keep thinking on its path-something our minds do not always fully achieve.

So it seems quite appropriate to me that nausea be part of the meaning of "apple" if nausea is evoked by the word (Dupré and Gagnier, 54). (Indeed, I find this a startlingly apt example, as baked apples were in fact nauseous to me for some years when I fortuitously contracted a serious illness immediately after-but not because of-eating some. Nausea was then definitely as much a part of the meaning of baked apple to me as was the color or taste.)

If the objectors (who are almost all philosophers) wish to substitute another term for meaning, say "evoked set," I have no objection-provided that they can convince the scientific and humanist communities to adopt this usage. Philosophy has had its own tangled encounter with the word "meaning." In general, the word is not used in formal philosophy as a technical term, and when it is, it is used fairly specifically for denotation and divides into two species: extension and intension. The extension of "dog" is the set of all objects that are dogs; the intension is the predicate (test) that we use to determine whether something is a dog. In formal languages, these terms can be precise; in real life we encounter hybrid wolf-dogs and dogs that are three-legged as a result of birth defects or accidents. One badly needs contexts (evoked sets) to deal with these cases: "What I am seeing is obviously a dog that has been injured."

(This may be a good point at which to apologize for my sloppiness in the use of the term "intension," which I have defined in the previous paragraph in its proper sense. On pages 4, 10 and 11, I nodded, and used it where "intention" was meant. Elsewhere, I used it appropriately.)

Suppose that we accept my usage, which equates "meaning" with "evoked set." What do we add that literary criticism did not already know? We add just the precision that is provided by the evocation mechanism, empirical tests of its operation in people (in laboratory and natural settings), and running computer programs that we can use to test our hypotheses about how human beings handle language. If I were to revise my paper now, I would emphasize more than I did that experiments and computer simulations are a principal contribution of cognitive science to literary criticism, precisely because of the precision they provide for terms that are horribly and inescapably vague in common usage. The text of my paper is best regarded as a (very inadequate) surrogate for demonstrations of actual programs and comparisons of their behavior with the behavior of people confronting the same tasks. I will come back to this point when I discuss the comments that were offered on computer simulation.

Several commentators (van Brakel, 113; Korb, 76) are fearful that defining meaning in terms of evocation leaves the mind lost in thought, without contact with the outside world, hence without semantics. They forget that the reader senses external stimuli and encodes them and that these encoded stimuli, through recognition processes (simulated, for example, by the discrimination nets of EPAM), associate the external object with the appropriate information in memory. A semantic meaning (an intension) attaches to dogs, for when a dog appears in vision, the tests that distinguish dogs from other things produce the appropriate recognition via the EPAM net and evoke other information about dogs (including the word, "dog"). Similarly, the word, "dog," recognized via a path in the EPAM net, evokes information about its denotation: dogs.

The tests embedded in the discrimination net define the system's intensions (spelled correctly with an "s" this time). In a system like EPAM, these discrimination nets are learned or "grown" by experience with objects that are to be distinguished. The EPAM system inputs vectors representing features that have already been extracted from stimuli by a sensory processor, but of course many robotic devices carry out the whole encoding from the raw sensory stimuli.
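A minimal sketch may make the discrimination-net idea concrete. The real EPAM nets are learned from experience and are far richer; the node structure, feature names, and stored "images" below are invented for illustration:

```python
# Minimal discrimination-net sketch in the spirit of EPAM. The tests at
# interior nodes define the system's intensions; terminal nodes hold the
# information ("images") that recognition evokes. Features are invented.
class Node:
    def __init__(self, test=None, yes=None, no=None, image=None):
        self.test, self.yes, self.no, self.image = test, yes, no, image

def recognize(node, features):
    """Sort a feature set down the net to a terminal node's stored image."""
    while node.test is not None:
        node = node.yes if node.test in features else node.no
    return node.image

# A hand-built net that discriminates dogs from cats by two feature tests.
net = Node(
    test="barks",
    yes=Node(image={"word": "dog", "category": "canine"}),
    no=Node(
        test="meows",
        yes=Node(image={"word": "cat", "category": "feline"}),
        no=Node(image={"word": "unknown"}),
    ),
)

print(recognize(net, {"barks", "four-legged"})["word"])  # -> dog
```

The same net answers for both directions Simon describes: encoded features of a seen dog sort down to the node that evokes the word "dog," and recognition of the word can evoke the stored information about its denotation.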

Two of the commentators (Palma, 91; Smith, 105) carry the misconception just corrected even further, describing me as an "internalist" who cannot legitimately claim for my system any connection with meaningful things in the outside world. My strategy, says Smith, is betrayed by "writing in unsubscripted language" (his italics), which he sees as "an amazing ruse." I have given away the "ruse" in the previous paragraph, and like all magicians' tricks it is just a straightforward application of natural law.

Palma (92) continues with an argument from Searle, related to the latter's notorious Chinese Room "proof" that a machine cannot understand language. As has often been pointed out, the Chinese Room argument only works because Searle's room has no windows, and without windows it cannot connect words with their semantic meanings "outside." Here Searle's argument fails because our room has windows that admit stimuli from outside and a (self-taught) recognition net that discriminates among these stimuli, hence handles intensions.

I share some responsibility for the error these commentators have made. Most of the examples of stimuli used in my paper were words (especially names of things) rather than pictures or stimuli produced by real objects. Since much of my own recent research is concerned with the attachment of meanings to diagrams and other pictorial displays, it was rather blind of me to use mainly verbal examples, but as my topic was literature, hence texts, they were the examples that came naturally to mind. I hope that I have now made clear how words become attached to their denotations in the real world. Of course, accounting for the denotations of abstract terms and theoretical terms (terms in scientific theories that do not denote observables) would call for further elaboration that I cannot undertake here. This question has been addressed elsewhere, for example in our work on scientific discovery (Langley, et al., 1987).

This suggests a comment on the important problem that Matteuzzi raises (79-81), which I would paraphrase: How do we go, via thought, from language to its denotations? What is this "thought" that Matteuzzi places between them? If we acknowledge that symbol structures are not limited to representing language strings, but that they also are capable of representing pictures and diagrams or even much more abstract structures, then thoughts are simply symbol structures in memory, some of them linguistic, some pictorial, some neither linguistic nor pictorial. I am just repeating, in fact, the symbol system hypothesis; and indicating, on the basis of what we have been able to program computers to do, how this hypothesis provides an answer to Matteuzzi's question. Perhaps he finds it a problem because he identifies "symbol" too closely with "linguistic structure"; he says (incorrectly) that "as we must develop programs running on a computer, obviously we cannot avoid a linguistic medium." On the contrary, symbol structures in computers may represent any kinds of patterns; they are not limited to numbers and letters.

I find it rather interesting that the issues discussed so far were raised by philosophers or computer scientists trained in philosophy. Perhaps I directed my paper too much toward students of literature, who I thought were my principal audience, and neglected the philosophers; but I have now learned that there are many misconceptions among philosophers, and even computer scientists, about how cognitive simulations operate, and my paper would have been clearer had I anticipated and dealt with these misconceptions.

Still focusing on the cognitive dimension of meaning, I will turn next to the relation between description and persuasion in literature. Several of the commentators (Bookstein and Winn, Patel, Byrd, Kaul, and others) observed that, while the intent of much writing is to persuade and to influence values, my paper dealt with writing only as description. They used a variety of terms for what was missing: "values" (Carnochan), "rich meanings" and "relevance" (Dreyfus), "happenings" (Byrd, who also suggests that literature is more concerned with "important" and "unimportant" than with "true" and "false.")

The charge is not without warrant that I rather neglected rhetoric even while practicing it (affect will be dealt with in the next section). I cannot even use my vocation of scientist as an excuse. In recent years, students of the history and sociology of science have emphasized strongly (sometimes overemphasized) that scientific writing is not just an account of facts and the theories supported by them. Effective scientific writing is as much persuasion as description. In agreeing with this, I am not taking a Feyerabendian position of extreme social relativism but simply observing that scientific papers, in acquainting scientists with facts and theories, also aim at persuading them of the believability of these facts and theories. Their goal is to induce knowledge and belief, not just awareness.

My lack of explicit reference to argument and rhetoric is also a bit incongruous with my having devoted the first half of my career to studying human decision-making processes. (I should be grateful for the pointers to the literature provided by Bookstein and Winn, but in this area I cannot really claim amateur status or prior ignorance.)

All is not lost. Once we have established a theory of meaning and its evocation, it is easy to build the connection from meaning to rhetoric. Some of the examples I cited-for example, both Stendhal and Camus-exemplify that relation admirably. Stendhal's contrast of the Napoleonic era with its successor aims at persuading readers to prefer the heroism of the former to the stuffiness and oppression of the latter. Camus is creating an atmosphere that may persuade readers to contemplate and reevaluate their own feelings about respectability, their links with other human beings and virtue, perhaps altering their behaviors. Both authors are persuading and preaching.

These examples illustrate a point quite analogous to one I made about affect in my paper: the rhetorical dimension of literary texts does not reside mainly (and certainly not exclusively) in explicit logical argument. It resides largely in accounts of concrete situations and events that evoke-in interaction with attitudes, values and feelings already stored in the reader's mind-the judgments the author would like to induce.

This, together with the axiom that hot cognition is often more powerful than the cold kind, sanctions the writer's claim to the role of social critic. Flaubert describing French provincial life in Madame Bovary as both oppressive and vacuous is the writer as revolutionary-an enormously popular view today of what writing is all about (and a view embraced by several of my commentators, e.g., Kaul). This is also a basis for the claim that the writings of Marx and Freud are a part of literature and not "merely" science.

Literary criticism has perhaps been insufficiently concerned with the special problems created by the dual role of much writing as exposition and persuasion. About any such piece of writing we can ask, "Is it good rhetoric?" We can also ask, "Is it good science?" The answers to these two questions are not always the same nor are the criteria for settling their answers. Here is a domain, the coexistence of exposition and persuasion in the same text, that calls for combining scientific and rhetorical tools that are not often seen together. Science and literature each stake their claims to be legitimate interpreters of the human condition, without always respecting the responsibility that claim imposes for maintaining both the veridicality of their descriptions and the power of their persuasions. For example, students of the human sciences are often put off by writings that have too little concern for the evidentiary tests that Marxist and Freudian (or anti-Freudian) claims should be required to satisfy before they are accepted as truths.

In partial atonement for neglecting the topic of rhetoric in my paper, let me pursue a little further the observation made above that writing persuades more by evocation than argument-in particular, by evoking in the reader's mind values and attitudes that provide firm and direct premises for the conclusions the author wishes the reader to accept. Proof reaches conclusions from accepted premises, and if premises are not already accepted, they must, in turn, be derived from previous premises. The more links in the reasoning chain, the greater the chance the reader will find a link implausible and breakable. Hence, the fewer links, the stronger the persuasion.
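The observation that shorter chains persuade more strongly can be given a worked illustration. The independence assumption below is mine, not Simon's, and is only a first approximation:

```python
# If a reader accepts each inferential link independently with plausibility
# p, a chain of n links survives intact with probability p ** n. Hence the
# fewer the links, the stronger the persuasion. (The independence
# assumption is an illustrative simplification, not a claim from the text.)
def chain_plausibility(p, n):
    """Probability that a reader accepts every link in an n-link argument."""
    return p ** n

print(chain_plausibility(0.9, 1))  # a single direct link: 0.9
print(chain_plausibility(0.9, 5))  # a five-link chain: about 0.59
```

Even with quite plausible individual links, a long chain loses much of its force, which is why the rhetorician who can evoke premises the reader already holds enjoys such an advantage.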

A rhetorician is lucky if he or she can count on readers' holding beliefs that support directly the views being offered and unlucky if readers have beliefs that oppose these views. Persons in harmony with the Holy Alliance would not share the premises on which Stendhal's persuasion depends, hence would find his rhetoric unappealing. For rhetoric it is important that most of us carry around many values and beliefs that are mutually inconsistent and often lead in opposite directions. Then it is the task of the rhetorician to evoke the subset of attitudes that link directly to his conclusions and to leave those asleep that would lead the reader to opposing conclusions. (This is a commonplace in the strategy of political debate.)

It is instructive to examine from this viewpoint the rhetoric in the present exchange with my commentators, especially the rhetoric that relates to computers. Much of the argument of the commentators rests on the "self-evident" premises that "machines can't think" or that their thinking is unrelated to emotion (or to esthetic values, or to the body), hence unhuman. If such premises are indeed self-evident, and it appears that they are for most of the commentators, then (a) no evidence or argument need be presented for them, and (b) they lead immediately to serious flaws in the conclusions I reached in my paper about the abilities of computers to simulate human thinking. The critics, starting from the widely held belief that computers can't think, can present the short argument. Faced with the necessity of countering that belief, I must present a more elaborate case.

In identifying myself as a rhetorician who must challenge common social beliefs and attitudes, hence must counter short chains of reasoning with long ones, I am seeking neither sympathy nor pity. Rather, I am trying to evoke in readers an awareness of the asymmetry-and of their need to reexamine beliefs they have regarded as self-evident-reassessing them in the light of the computer simulations that have been tested and the psychological experiments that have been run in the past forty years. My hope is to persuade you to look at the evidence, not just in my essay but in the literature, or at least in literature-based reviews like my Sciences of the Artificial (1981), before reaching a conclusion.

My main task now, however, is just to acknowledge, not to repair, the gap in my account of literary criticism that followed from my inattention to the rhetorical side of writing. I am grateful to my commentators for raising this issue. Especially in times when writing takes on the tasks of social criticism, we need to understand how good science can be good rhetoric, and vice versa. As the examples of Marx and Freud show, it ain't easy.

Many of the humanists among my commentators, far from thinking that evocation in context gave too broad scope to meaning, sometimes found my definition too narrow. Palma (91) attributes to me the view that meaning "pertains to single words." Somehow, he interpreted my "dog" example to imply that only lexical units have meaning. A rereading of my text should persuade him that this is not my view. For instance, my quotations from Stendhal, Camus, Stevens, and others are decisive counterexamples, as is my emphasis on context as intrinsic to meaning.

Schleifer (102) also ascribes limits to my definition but makes them a little broader than Palma's. Schleifer thinks that, contrary to the teaching of semiotics, I do not carry "my analyses of meaning beyond the confines of sentences." Following Charles Sanders Peirce and Charles Morris (the latter one of my teachers), I have always interpreted semiotics as comprehending syntax, semantics, and pragmatics-the latter embracing not only persuasion and affect but all consideration of meaning that makes reference to the reader, listener, writer, or speaker, and certainly including the connection between the symbols in the reader's head and their referents in the world outside. Using the terms syntax, semantics, and pragmatics in their conventional meanings, it is clear that my essay was saturated with all three, but especially pragmatics. Johnston (67) aptly speaks of "Simon's notion of the world as a text in our heads." No neglect of semiotics there.

Petitot's (96) comments on "semic contrasts" identify another role for a discrimination net like EPAM: to select the contexts within which texts will be interpreted. In fact, the current EPAM IV is used in exactly this way. When a (simulated) chess master is presented with a board to be recalled later, previously stored knowledge is used by EPAM to recognize the board as of the same type as boards seen previously, hence to provide a representation within which information about the positions of individual pieces in the new position can be stored "meaningfully."

The more frequent concern of this group of commentators was that, in focusing upon cognitive meanings, I had omitted or played down emotion and affect. (Dupré and Gagnier; Carnochan; Bookstein and Winn; Dreyfus; Patel; Byrd; Kaul). Clearly, these critics have rather different views from those who found my definition too broad. Their complaint is not unrelated to that about my neglect of rhetoric. At the present point in history, psychology has better founded things to say about cognition than about affect or motivation, and the relative attention I devoted to the two topics reflects this. But I dealt quite explicitly with affect as central to meaning in pages 13-14 and in several of my examples. I said little about it in connection with computer modeling, and some commentators suggest that it can't be dealt with at all by computers. I'll return to that question a little later.

Harrison (59-61) seems rather unhappy that I quoted one verse of Wallace Stevens' "Thirteen Ways of Looking at a Blackbird" to make a very specific point about visual imagery and did not offer a comprehensive interpretation of the poem; but I have no problem with Harrison's exegesis of it, which illustrates elegantly the rich flood of ideas and emotions that rereading it evoked in him. I do have serious problems with his conclusion that his enterprise of interpretation is totally different from mine. His own prose ("a gust of raw, Alpine air rushing through the claustrophobic laboratory of the cognitive scientist") is highly evocative and not a little Byronic. It is consistent with the evidence from his elucidation of Stevens that reading is, for him, something quite like the context-sensitive interpretive process I describe in my paper. In particular, his response illustrates how rich an expert's context can be; specifically, how the expert reader can challenge the poet's ownership of his text (19). Or does he suppose that his evoked thoughts simply report Stevens' intent?

Byrd (43) sums up what we might call the "anti-cognitive" view in his assertion that "The poem is not an object but an event. The poem does not mean but happens." I rather like this manifesto and his elaboration of it, but it hardly exiles meaning from the happening. Celebration of the event begins with a sequence of words, seldom of nonsensical letters (I think I have not excluded Apollinaire, Mallarmé, Joyce, or Eliot), and certainly has a great deal to do with what the words evoke. And the musical metaphor that Byrd (44) borrows from Pound to explain the organization of the poetic event is already present on page 17 of my essay, where I discuss the representational and nonrepresentational aspects of literature. So perhaps Byrd's view is not anti-cognitive after all.

In a vein similar to Harrison's and Byrd's, Wynter (124) regrets my "refusal to explore . . . the phenomenon of wonder, which is, of course, the phenomenon of the aesthetic," thereby reenacting "the very division [Simon's] essay set out to bridge." I interpret this, as well as many of the other comments about the indifference of my paper to affect, as a denial that cold cognition can be employed as an aid to understanding the mechanisms of hot cognition and the connections between the two. I expressed no reluctance in my essay, nor exhibited any, to exploring the phenomena of wonder (more generally, of esthetics) in order to understand how the evocation of meanings from text can lead to esthetic and other affective responses.

As my quotation from Simon Stevinus ["wonderful, yet not unfathomable"] was intended to suggest, understanding and explaining wonder creates no barrier to experiencing it. There is nothing about the (relative) objectivity of science that forbids the scientist's feeling or expressing her or his emotions, even while seeking to understand these emotions or the emotions of others. I should suppose that this same statement could be made about the humanist, and that humanist and scientist could share the adventure and the wonder of seeking to understand what we do when we encounter a literary text. Contrary to the fears of Wild (66), nothing in that adventure constitutes a refusal to experience wonder, much less to explore it.

There are other expressions of concern in the commentaries that too much objectivity in addressing texts must destroy or diminish their affective and esthetic content. Perhaps the examples I have just given will make clear that this concern arises from a confusion between the processes that go on when one is engaged in the happening of literature (to borrow Byrd's happy phrase) and those that go on when one is analyzing and understanding the happening. While engaging in the latter does not destroy the wonder of either, we must keep clear which we are doing at any given moment.

A theory of literary criticism (that is what my paper was about) attempts to make veridical statements about the processes of criticism, and of writing and reading in general. The validity of such statements is an empirical matter, to be settled by marshaling evidence. In that respect, the commentators are right in sensing my respect for the values of science. The same values are central to humanist endeavors also, especially when the humanists are seeking to understand processes, including psychological and social processes, of creation or appreciation. In no way do these values exclude other values from the minds and hearts of either humanists or scientists.

In my paper, I asked readers to consider psychological experiments as relevant to literature. In addition, I even asked them to consider that computer simulations of thinking might be relevant. Some commentators appear to have been underpersuaded. Because my argument challenges deeply engrained beliefs (e.g., that machines cannot think) and because, as we have seen, rhetoric is least persuasive when it pits long arguments against short ones that start with "self-evident" premises, I should not be astonished that I have been only partially successful.

Can "Machines" (Computers) Model Thinking?

Today the computer is popularly regarded as the quintessential machine. So deeply engrained is its image in popular thought that Biddick (35) can write a passage that begins:

Both the sciences and the humanities collaborated to "invent" the von Neumann architecture in the crucible of colonialism and imperialism. It was a long time in the making. We can find rusted fragments of its assemblage in heterogeneous spaces such as medical meetings held in late-nineteenth century London; in census reports for India distributed in the Imperial Gazetteer of India. . . .

I will not try to guess how the veridicality of such a remarkable sequence of words could be tested, but I will return to a simpler context. In ordinary English, to be machinelike is "to engage in repetitive action or produce products of stereotyped uniformity." So sayeth my Webster, which also defines as a machine "a person or organization that resembles a machine, as in being methodical, tireless or unemotional." Either computers are poor instruments for modeling human behavior (except the behavior of the methodical, tireless, and unemotional minority) or the dictionary definition has not quite caught up with computers. Anyone familiar with AI applications of computers, especially with computer simulations of human thinking, knows that in these contexts computers do not "engage in repetitive action or produce products of stereotyped uniformity." On that demonstrated fact rests my argument for computer models.

Again, this is an empirical question, although Eldridge (57) believes (in italic type) that it is not. I know that you, reader, (and other human beings) think because I observe you when you are presented with a problem and note that you sometimes solve it, but more important, that you go through certain characteristic stages and perform certain characteristic manipulations while trying to solve it. These are, in fact, about the same stages and manipulations that can be observed in me when I am thinking, although I have additional private evidence of what goes on then. Even without that private evidence I am (generously) willing to acknowledge that you too are thinking whenever the public symptoms are present (and perhaps on other occasions, too). This is, in fact, my semantic definition of thinking-the intension of the word for me-and, I expect, for you. That's how you decide to believe and assert about me, "You are thinking." In fact, even Eldridge, judging from the lines at the top of page 57 of his essay, makes his assessments of what human beings are doing in much the same way.

Having this useful word, I find it convenient to apply it to computers also, using the same intensional tests. This is convenient, because it gives me a testable theory of what may be going on in a human being when the human being is thinking. As I emphasized in my paper, it is not enough for this purpose that the computer solve a problem or problems. To the extent that evidence allows me to compare the stages and manipulations of the computer's thought processes with those of the human, the computer must behave in the same way as the human, traversing the same kinds of paths in its search for a solution. Otherwise, it does not satisfy my definition of thinking. A computer that is solving a large set of simultaneous equations ("number-crunching") is not thinking in this sense.

Others may decline to follow this usage, and, if they prefer a different term, I have no quarrel with them. Whatever word we use, to the extent that we can write computer programs that mimic people's behavior-not only the outcomes but also the paths along the way-these programs provide symbolic descriptions of the processes that people use when they (we) think. In every other science I am familiar with, such descriptions of phenomena, if they match the facts up to a reasonable approximation, are regarded as successful theories. Whether and to what extent they match the facts is decided by observation and experiment, not abstract debate.

What Does the Evidence Say?

In my paper, I gave a few pointers to the vast body of evidence that today supports the theories of human thinking embodied in such programs as EPAM (Feigenbaum and Simon, 1984), the General Problem Solver (Newell and Simon, 1972), ISAAC (Novak, 1977), UNDERSTAND (Hayes and Simon, 1974), ACT* (Anderson, 1983), Soar (Newell, 1990), and others. (These are not competing, but complementary, theories, for they address different but overlapping aspects of thought processes.) To these references I could add, among many others, two volumes of my own experimental and theoretical papers, Models of Thought. I find precious little reference to this or to other evidence in the remarks of the skeptics.

Akman (31) actually chides me (gently) for having included in my paper some comments on evidence which he believes "contribute little to the overall theme." I, on the contrary, believe that most of the disagreements with my position put forth by commentators involve questions of fact; such questions are properly settled only by producing evidence. The question of whether the thought processes used in reading are as I describe them to be can perhaps be answered by empirical research but certainly not by armchair philosophizing. In the next few paragraphs I will comment on some of the more egregious attempts of critics to settle such issues without looking at the substantial body of evidence that is available.

Dreyfus, for example, who has made an entire career of pronouncing on what computers can't do, does not (either in his commentary here or in his other writings) base his case on any detailed analysis of the psychological evidence but prefers to report what he believes to be the "prevailing opinion" on these issues. He "would have thought that research over the past thirty years has tended to show the implausibility of Cognitive Science (50)," then "proves" this by citing his most recent book on the subject, which is mainly an armchair discussion and in no way a systematic review of the empirical evidence reported in the psychological literature. In one of his typical rhetorical moves, he explains "one of the reasons that [Cognitive Science] has fallen into disrepute (51)" without first establishing that it has, in fact, fallen into disrepute-a claim that is decisively refuted by the extensive treatment of serial symbolic theories of thinking in the current professional books and papers in psychology. One should not provide explanations for non-existent phenomena.

With similar ingenuousness, Dreyfus refers to my "implicit admission that [Cognitive Science] has so far been unable to simulate" the ability to play grandmaster chess (51). What "admission"? He does not mention (perhaps he does not know, but he should know) that the MATER program (Baylor and Simon, 1966) discovers deep mating combinations by searches that seldom exceed 100 branches of the game tree and that demonstrably examine the same branches that human masters do, thereby illuminating the recognition and search mechanisms strong chess players employ to find such combinations. Dreyfus's "rebuttal" is like claiming that DNA does not say much about genetics, because biologists "admit" that the genes responsible for the synthesis of most proteins have not yet been identified and large stretches of the human genome are still uninterpreted.

Connectionist versus Serial Systems

Dreyfus and some other commentators think that parallel, connectionist computer programs (so-called "neural networks") provide a more promising approach to computer simulation of human thinking than the serial symbolic programs that I emphasize. In fact, both classes of programs are symbolic, and Dreyfus "admits" (50) that the patterns connectionist systems create and use fit my definition of symbol. When he claims that such systems "do not compare for equality nor do anything resembling discrete branching" he simply reveals that he has never examined closely or understood the inputs and outputs of the so-called "hidden layers" in connectionist systems and the way in which particular stimuli are sorted to one output node or another in the process of recognition. Oddly, he thinks that structures that are weighted combinations of simpler structures (nodes) are not structures at all. He has missed the fact that prominent symbolic systems, e.g., Anderson's ACT* (Anderson, 1983), also assign weights to structures as a function of their levels of activation (i.e., evocability). A reader who wishes to pursue these particular issues further may wish to study Richman and Simon (1989), where a connectionist system is compared with EPAM when the two systems are performing identical tasks.
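The point about hidden layers and discrete sorting can be made concrete with a toy sketch (my own illustration, with hand-picked weights; it is not drawn from any of the programs discussed here). Each hidden node is a weighted combination of simpler nodes, yet the network as a whole sorts each stimulus to exactly one output node, which is as discrete a branching decision as any serial program makes:

```python
import math

# A minimal feedforward "connectionist" sketch: one hidden layer of
# weighted combinations of input nodes, then an output layer whose most
# active node "recognizes" the stimulus.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each node is a weighted combination of the layer below -- a structure
    # built out of simpler structures.
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

def recognize(stimulus, hidden_w, output_w):
    hidden = layer(stimulus, hidden_w)
    output = layer(hidden, output_w)
    # The stimulus is sorted to the single most active output node:
    # an effectively discrete branching decision.
    return max(range(len(output)), key=lambda k: output[k])

# Illustrative weights that route one stimulus to node 0, the other to node 1.
hidden_w = [[4.0, -4.0], [-4.0, 4.0]]
output_w = [[4.0, -4.0], [-4.0, 4.0]]

print(recognize([1.0, 0.0], hidden_w, output_w))  # prints 0
print(recognize([0.0, 1.0], hidden_w, output_w))  # prints 1
```

However the weights are arrived at, the output of the recognition process is a discrete category, not a blur.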

No verdict, nor anything close to a verdict, has been handed down by the psychological community as to whether connectionist or serial symbolic systems are the more promising, and contrary to Dreyfus' assertion, researchers employing the serial approach are more numerous than connectionists. The question of which approach (if either) is correct will not in any event be decided by vote. It will be decided by a gradual accumulation of evidence that will show in what domains of brain functioning the mechanisms are serial and in what domains they are parallel. As in other fields of science, as the evidence accumulates the "vote" will come closer and closer to unanimity. (It is hard to find scientists today who support the phlogiston theory of combustion or who question whether DNA has anything to do with inheritance. At one time such scientists were numerous, even the majority.) Until the happy day of consensus arrives, the rational strategy is to continue to enrich our body of evidence, and to give greatest credence to those components of the two forms of theory that point to the same conclusions. There are many of these.

My own view, for what it is worth, is that parallel connectionist simulations will continue to find their main applications, as they have in the past decade, in understanding sensory processes, the early stages of perceptual processes and perhaps some learning processes; but that most activities to which we apply the word "thinking" will prove to be serial. Everyday experience, consistent with laboratory evidence, shows that human beings have very limited capacities for carrying on more than one task at a time (e.g., driving in heavy traffic while engaged in a serious philosophical or literary conversation), unless the tasks have been automatized and become completely routine ("mechanical").

There have been no convincing demonstrations to date that connectionist systems are able to model human complex thinking that requires attention. In the no-man's-land of perception, near the presumed boundary between parallel and serial functions in the nervous system, the Richman-Simon (1989) comparison of EPAM with a connectionist model, mentioned above, showed that the connectionist system was able to perform the task precisely because its structure contained two key components in series (technically, two serial "hidden layers"). This is hardly a demonstration of the powers of parallelism.

For our purposes, the relevant point is not whether one particular theory will prove more satisfactory than another but, rather, what computer simulations can teach us about the nature of human thought. It is not unusual in science for two or more distinct theories to cast light on the same phenomena (e.g., wave mechanics and matrix mechanics, or thermodynamics and statistical mechanics). Up to the present time, serial symbolic theories have been developed much farther than connectionist theories, and they provide an excellent explanation, supported by extensive evidence, of a wide range of cognitive phenomena. That is the message I tried to convey, and nothing in the comments shows that I should change it.

The other commentators who are skeptical about computer simulation also are short on empirical evidence. Patel, for example, states that "it is claimed that [serial symbolic] models 'simulate thinking' though this conclusion is highly controversial" (94). What evidence does he cite in support of this view? No evidence, but the equally unempirical views of three philosophers-Searle, Harnad, and himself. If we let empirical matters be settled by philosophers instead of observation and experiment, controversy will persist for a long time.

Similarly, Patel (94) makes the blanket statement that "ISAAC. . . .fails to model just the sort of richness and ambiguity in meaning evocation that is being offered to Literary Critics as a contribution to solving their differences over what meaning a particular text evokes." Does he think that physics students (ISAAC reads problem descriptions in textbooks) do not grapple with "ambiguity in meaning evocation" when they try to go from a verbal statement of a problem to a more precise representation of it? Recent experiments by Reif and Larkin on this very point show that Patel is plain wrong, as do earlier experiments by Hayes and myself using the UNDERSTAND program to simulate people's interpretations of "story puzzles." What the former experiments show clearly is that even in the "unambiguous" contexts of physics problems, all kinds of alternative interpretations can be, and are, evoked by texts. Revealing the sources of ambiguity in such impoverished contexts (if, indeed, they are impoverished) can be a valuable step to understanding them in other kinds of contexts, including literary ones.

Some of the important ambiguities arise because we must engender some sort of representation before we can interpret the text (or while interpreting it). Miall (82) points out that I said relatively little about this process (he calls it "schema generation"), and suggests that this is an important area for research. I agree fully with him. In fact, both the UNDERSTAND and ISAAC programs, just mentioned, are efforts in this direction, and I will mention a third (research on the so-called "mutilated checkerboard" problem) towards the end of my reply. But these are only a beginning, and the formation of representations is one of the burning research topics in cognitive science today.

Keil-Slawik's skepticism derives from the viewpoint of situated action, which I will deal with later. I will only observe here that he completely ignores learning processes in computers. He says (72), "if we regard software as a mathematical object that is interpreted by a machine, its semantics are a static attribute of the program text." Surely he is aware, for example, of adaptive production systems that construct new productions in the course of problem solving and add these to their programs (even, I might say, as you and I). He must also be aware of programs like EPAM that grow their own discrimination nets in response to experience, or programs like Soar that learn to solve problems by "chunking" the relevant procedures, discovered through experience. In what respects are these learning processes different from human learning processes and what is his evidence that they are different?
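To see why the semantics of such a program are anything but static, consider a toy discrimination net in the spirit of EPAM (a loose sketch of my own, not Feigenbaum and Simon's actual program). The net begins empty and grows new test nodes only when experience forces it to distinguish two stimuli, so the "program text" that does the recognizing is itself a product of learning:

```python
# A toy EPAM-style discrimination net. Each node either stores an image
# of a stimulus or tests one more letter; learning grows new test
# branches whenever two stimuli collide at the same leaf.

class Node:
    def __init__(self):
        self.branches = {}   # letter -> Node (one more discriminating test)
        self.image = None    # the stimulus image stored at this leaf

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def sort(self, item):
        # Apply successive letter tests until no further branch applies.
        node, depth = self.root, 0
        while depth < len(item) and item[depth] in node.branches:
            node = node.branches[item[depth]]
            depth += 1
        return node, depth

    def learn(self, item):
        node, depth = self.sort(item)
        while node.image is not None and node.image != item:
            # Collision: push the old image one test deeper, then re-sort.
            old, node.image = node.image, None
            if depth < len(old):
                node.branches.setdefault(old[depth], Node()).image = old
            if depth < len(item) and item[depth] in node.branches:
                node = node.branches[item[depth]]
                depth += 1
            else:
                break
        if node.image is None:
            node.image = item

    def recognize(self, item):
        return self.sort(item)[0].image

net = DiscriminationNet()
for word in ["cat", "cab", "cup"]:
    net.learn(word)
print(net.recognize("cat"))  # prints cat
print(net.recognize("cab"))  # prints cab
```

After experience with three words, the net contains test nodes that no programmer wrote into it; its discriminating behavior is a record of its history, not a fixed attribute of its initial text.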

Simulation of Affective Processes

We come next to the question of whether computer simulations can go beyond cognition into the realms of emotion, motivation and esthetic judgment. Even if they could not, their value for understanding human cognitive processes would not be destroyed. But the wall between cognition and affect is not at all solid, and some useful insights have already been gained about ways of simulating events on both sides and their interactions. Clearly, the exploration of the affective side of the wall has not gone nearly as far as exploration of the cognitive side, but it has achieved some important successes. I will mention just two of these.

Twenty years ago, the psychiatrist Kenneth Colby (1975) wrote a computer program to simulate paranoid processes. The program embodied a theory of paranoia that was already familiar in psychiatry in a less precise verbal form. Paranoia, according to this theory, involves the interaction of belief systems held in memory with the autonomic nervous system. When certain specific beliefs are evoked (by a spoken word or an event), signals associated in memory with those beliefs activate nerves in the autonomic system, whose response is experienced subjectively as an emotion (fear, say, or anger). The emotional response, in turn, interrupts ongoing attention and switches it to a part of memory closely linked with that emotion. For example, the word "father" may evoke in the patient fear that has already been associated with that term, interrupting the patient's attention and turning it to his fear (based in reality or imagination) that the Mafia is after him. Notice that in this system, affect is linked to cognition via the attentional mechanism, explaining how intelligent systems that are fundamentally serial can shift attention from a current goal to another that has, in affective terms, become momentarily more urgent.
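The attentional mechanism just described can be sketched in a few lines (my own illustration of the theory's logic, not Colby's PARRY code; the belief entries and threshold are hypothetical). Beliefs in memory carry emotional charge; when an input word raises the charge past a threshold, the ongoing goal is interrupted and attention switches to the memory linked with that emotion:

```python
# A minimal sketch of belief-driven attention interruption.

FEAR_THRESHOLD = 0.5

# Hypothetical belief memory: word -> (fear increment, linked fear topic)
beliefs = {
    "father": (0.6, "the Mafia is after me"),
    "weather": (0.0, None),
}

def respond(words, current_goal="answer the interviewer's question"):
    fear = 0.0
    attention = current_goal
    for w in words:
        charge, topic = beliefs.get(w, (0.0, None))
        fear += charge  # the autonomic response to an evoked belief
        if fear > FEAR_THRESHOLD and topic is not None:
            # The interrupt: emotion redirects a fundamentally serial
            # system from its current goal to the charged memory.
            attention = topic
    return attention

print(respond(["nice", "weather"]))  # attention stays on the current goal
print(respond(["your", "father"]))   # fear interrupts; attention switches
```

Primitive as this is, it exhibits the essential architecture: affect enters a serial cognitive system not as a separate faculty but as an interrupt on attention.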

The mechanisms of PARRY, which is Colby's name for his program, are very simple, one might say primitive. Murray (88) calls them "too wooden and predictable to be persuasive models of human psychodynamics." Yet they deceived a number of professional psychiatrists who could not tell when they were communicating (via computer keyboard) with a computer program (PARRY) and when with a paranoid person. There can be no claim that this simple system is a full-blown model of human affect, but it definitely supports the view that simple mechanisms, complemented by a rich semantic memory, can account for some basic phenomena of emotion, and it thereby dissipates some of the mystery usually associated with affect. It demonstrates that computers are not forever shut out from the simulation of motivation and emotion.

My second example is the Aaron program, written by the painter Harold Cohen and described in Pamela McCorduck's book, Aaron's Code (1991). Aaron draws pictures, which in the early days of its existence were non-representational, but now deal with human and floral figures in a landscape. Each picture is created autonomously; left to its own devices, Aaron will create an indefinite sequence of them, all different, although similar in style. Changes in style are a result of Cohen's intervention.

Now we easily recognize that the program's method of painting is its teacher's (even, we may say again, as yours and mine), but this does not settle the question of whether Aaron has an esthetic component. For on each new drawing, Aaron must choose the succession of lines and strokes it will place on the paper and their arrangement, and must decide also when it has a completed drawing and should stop. (Anyone who has drawn or painted knows that this is a critical and non-trivial decision.) Moreover, in looking at a number of these (non-identical) drawings we are immediately convinced that the human figures in them are arranged in ways that are not only esthetically pleasing from a compositional standpoint, but in ways that persuade the viewer that the figures are engaged in meaningful social interaction. It is not unreasonable to claim that, along these dimensions at least, Aaron simulates important aspects of an artist's application of esthetic criteria to guide what she or he is doing. (Apparently Aaron's drawings have some esthetic value, for they are hung in a number of locations, including my home.)

As I am not an unbiased witness for the esthetic merits of Aaron's drawings, let me quote from someone who is, Eugene M. Schwartz, a collector, curator, and Fellow of the World Academy of Arts and Sciences. Of Aaron's Code he says:

Until this book, we had believed that only we could create art. Now. . . .we find that there is a new kind of artistic intelligence working alongside of us. This confronts us with the ultimate challenge: the human use of human artists. This book will send first a chill, and then a burst of energy through the art world. Its effect on tomorrow's art may equal the invention of the camera.

I will stop with these two examples, with the hope that they have introduced into minds heretofore convinced that computers can say nothing about affect or esthetics just a little element of doubt that the wall is as thick or high as has been supposed. As with all these issues, the decision will not be reached in the armchair, but by experiments to determine how far computer simulation of these matters (yes, even of poetry) can be carried. To carry it very far, it will be essential to introduce learning into the affective system, as it already has been introduced into computer cognition. What are the processes that transform the young Picasso of Barcelona into the Picasso of his Blue Period, and thence into the later Picassos?

Previous Next Up Comments