SEHR, volume 4, issue 1: Bridging the Gap
Updated 8 April 1995

simon's simple solutions

Hubert L. Dreyfus

Simon argues for two claims:

(1) that Cognitive Science has reached an understanding of human thinking, and

(2) that this understanding has something to contribute to literary criticism and the humanities in general.

Both claims seem to me to be highly problematic.

If Cognitive Science is taken to be what I would prefer to call Cognitivism, i.e. an understanding of human thinking based on Newell and Simon's Physical Symbol System hypothesis, I would have thought that research over the past thirty years has tended to show the implausibility of Cognitive Science (Dreyfus, 1992). The Physical Symbol System hypothesis claims that to produce thought a physical device, whether a computer or a brain, need only store and process physical instantiations of symbols. Where

the basic processes that a computer can perform with symbols are to input them into memory, combine and reorganize them into symbol structures, store such structures over time, . . . compare pairs of symbols for equality or inequality, and "branch" (behave conditionally on the outcome of such tests) (7).

This is a testable hypothesis because the processes involved are not the only kind of processes that computers and brains might instantiate in order to produce thought. Neural networks and simulations of them also manipulate patterns (which on Simon's definition can be called symbols), but they do not manipulate them in a way that involves operating over discrete symbols. Neural networks have patterns with continuous elements (neural activities) and certainly do not compare for equality, nor do they do anything resembling discrete branching. They also do not store structures of neural activities; they store weights, which can produce such structures. Network activations evolve continuously, and their evolution is describable by non-linear, perhaps chaotic, dynamics. Given the difficulties that Cognitivism has faced trying to deal with learning and pattern recognition, as well as commonsense knowledge and relevance, most Cognitive Scientists now consider simulated neural networks a more promising research program than GOFAI.1
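The contrast just drawn can be made concrete in a few lines of code. The following sketch is only an illustration of the distinction, not a model of any actual cognitive architecture; all names and numbers in it are invented for the example.

```python
import math

def symbolic_step(memory, a, b):
    """Physical-symbol-style processing, in the spirit of the operations
    Simon lists: discrete symbols, an equality test, and a conditional
    "branch" on the outcome of that test."""
    if memory[a] == memory[b]:           # compare a pair of symbols for equality
        memory["result"] = (a, b)        # combine them into a new symbol structure
    else:                                # branch on the outcome of the test
        memory["result"] = None
    return memory

def network_step(weights, activations):
    """Connectionist-style processing: continuous activations transformed
    by a nonlinear update. There is no equality test and no discrete
    branch, and what persists between episodes are the weights, not the
    activation patterns themselves."""
    return [math.tanh(sum(w * x for w, x in zip(row, activations)))
            for row in weights]

mem = symbolic_step({"a": "dog", "b": "dog"}, "a", "b")
acts = network_step([[0.5, -0.3], [0.2, 0.8]], [1.0, 0.5])
```

The first function instantiates exactly the operations quoted from Simon above; the second produces only a new vector of continuous values, with nothing that could be called storing, comparing, or branching over discrete symbols.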

Oddly, the paradigm shift in Cognitive Science from Cognitivism to neural network simulation does not even elicit a defensive comment from Simon. Indeed, it seems that for Simon, despite his explicit avowals to the contrary, the Physical Symbol System hypothesis has ceased to be an empirical theory and has become a tautology. The suspicion first arises on page 13 where, in discussing work on images, Simon speaks of the processing of images as an example of symbolic processing. He tells us that "the mental image is neither words nor light but a symbolic coding of the initial stimulus"(13). Yet the view seems to be winning out among Cognitive Scientists that human image processing is some sort of analog process that does not involve discrete symbol manipulations at all (Block, 1990).

For Simon any bunch of organized neurons seems to count as a pattern and so as a symbol, and transforming the pattern continuously without branching or comparison still counts as GOFAI. But if the Physical Symbol System hypothesis merely states that the brain contains patterns that denote other patterns and features of the world, and that these patterns go through processes that change them and thereby produce thought, it is completely vacuous.

As I mentioned above, one of the reasons that GOFAI has fallen into disrepute is that it has failed to solve the problem of relevance. This failure is hinted at in Simon's frank remark on chess programs that

the current champion among computer programs may look at 50 million branches of the game, or more, before choosing its next move. Human grandmasters seldom look at more than 100 (but they almost always look at the relevant ones) (8).

And in his implicit admission that GOFAI has so far been unable to simulate this ability.

This is just one instance of the general relevance problem. The same problem appears in Simon's account of meaning. The "potential meaning" of the word "dog" is supposedly all the information one associates with the word.

The whole store of information indexed by the string D-O-G (as well as other things that can be obtained from it by association: cat, mammal, wolf, pet, and whatnot) constitutes the potential meaning of "dog" (9).

Presumably if I associate dog with log because the words sound similar that becomes part of the term's potential meaning.

Simon thus equates the meaning of "dog" with everything a given person has ever associated with dogs. "All of these chunks, and others, stored in memory, are part of the meaning of 'dog'" (9). This may perhaps capture the meaning of "dog" in the sense of what "dog" means to me, although it fails to face the problem that eventually everything one knows will be drawn into the potential meaning of "dog." In any case, this sense of meaning is so broad as to have little or nothing to do with what one normally means by the meaning of a word.

To determine the "actual meaning" of "dog" Simon brings in the context.

The tiny subset of this vast totality that is evoked in a particular reader on seeing the word in a particular context may be regarded as the actual meaning in that context on that occasion (9).

But what counts as the context is just another version of the same problem of delimiting what is relevant. Indeed, Simon tells us:

context includes memory of surrounding elements of the text, but also other relevant memory elements that have been active recently (5) [My emphasis].

Here relevant information either includes all information that has been active in memory recently, and in that case any association, like "I want a drink," may be part of any context, or relevance is supposed to be independently defined, in which case Simon still owes us a definition of context.

But Simon seems to have no better proposal for defining context than whatever information is evoked when one limits in some arbitrary way one's search through the network of associations.

At any given moment, memory has certain contents. Some of these may be more accessible than others, either because they are temporarily in short-term memory, or because they are in a high state of arousal or activation by reason of recent use or of association with something recently accessed. Meaning is shaped by the particular parts of the contents of memory that are accessed; these constitute the context (10).

But again one wants to say it is the relevant associations, not just the recent ones, that define the context.
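Simon's proposal, as described in the passages quoted above, can be caricatured in a few lines to bring out the difficulty Dreyfus is pressing. The association network, the decay rate, and the activation threshold below are all hypothetical illustrations, not anything taken from Simon's text.

```python
# A toy version of the context-by-recency proposal under discussion:
# "context" is simply whatever memory elements have been activated
# recently, with no independent test of relevance.

ASSOCIATIONS = {  # hypothetical association network
    "dog": ["cat", "mammal", "wolf", "pet", "log"],  # "log" by mere sound
    "drink": ["thirst", "water"],
}

def read_words(words, activation=None, decay=0.5):
    """Activate each word and its associates; older activations fade."""
    activation = dict(activation or {})
    for word in words:
        for key in activation:
            activation[key] *= decay                 # recency fades
        for item in [word] + ASSOCIATIONS.get(word, []):
            activation[item] = activation.get(item, 0.0) + 1.0
    return activation

def context(activation, threshold=0.4):
    """Simon-style 'context': everything recently active enough."""
    return {k for k, v in activation.items() if v >= threshold}

act = read_words(["drink", "dog"])
```

On this definition, having recently thought "I want a drink" puts "thirst" into the context of "dog" alongside "cat" and "mammal": precisely the intrusion of irrelevant associations that the surrounding argument objects to.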

To show that his proposal for defining context by association works Simon cites a program that "has shown how contexts can be developed incrementally while reading the (English) text of a physics problem. . . ."(11). Of course in artificial cases such as physics problems, where relevance and context are fixed in advance and all other associations are ruled out, "the concept of context can be made wholly operational"(11). But this only shows how far physics problems are from texts in the humanities where all facts are potentially relevant. It certainly does not show that the definition of context in terms of evoked associations presents "no difficulties in principle"(11). If anything, it shows just the contrary.

No one knows how human beings determine meaning and relevance, but one thing seems fairly certain. What is relevant, and therefore what counts as part of the meaning of our terms and our texts, depends on what human beings find important. That in turn depends partly on having bodies, partly on commonsense knowledge of the everyday world, and partly on what culture one is socialized into. The humanities focus on the last of these. Our history gives us many accounts of what human beings essentially are, from heroes, to saints, to autonomous subjects. Each cultural self-interpretation serves to define what is important and so to help fix meaning and relevance for that period.

This changing account of what matters to human beings and why is what the humanities are about. Simon shows little appreciation of our changing interpretations of ourselves, and our constant re-interpretations of these interpretations. Having already settled the question of what human beings are, viz. information processing devices, and finessed the question of relevance by saying whatever is associated with a text counts as its meaning, all Simon can see in the humanities in general and literary criticism in particular is the attempt to make the network of associations evoked by a text as large as possible.

This can be seen in Simon's irenic response to the canon wars dividing humanities faculties. After making some sensible suggestions that in no way depend on his mechanical theory of meaning, Simon argues that what matters is giving students "the opportunity to acquire knowledge that will assure the evocation of rich meanings from any text"(20) [My italics]. If "rich" means relevant to the current concerns of human beings in our culture then this just rephrases what is being disputed. If, however, the theory of meaning Simon is proposing can "help us examine issues like these" (21) we should conclude that "rich" means as large a set of associations as possible. On this account the meaning of the Bible would be enriched once one realized, like the poet in Sleeper, that God is dog spelled backwards. Simon should approve Woody Allen's bemused response, "It makes you think."

No wonder the parties engaged in the canon wars appear to Simon to be arguing merely for "fun"(24). They should all realize that they are in the same business of increasing associations and pick their texts accordingly. That they might be arguing over what human beings are and therefore which associations are relevant and important cannot be dreamt of in Simon's mechanistic philosophy.



1. To avoid long circumlocutions each time I refer to symbol-processing of the sort Simon defends, I will use John Haugeland's acronym, GOFAI, Good Old Fashioned AI, which has become a technical term in the field.