SEHR, volume 4, issue 2: Constructions of the Mind
Updated 20 July 1995

swamped by the updates

expert systems, semioclasm, and apeironic education

Michael L. Johnson

First, let me, borrowing from Peter Jackson, offer a brief definition: "An expert system is a computing system capable of representing and reasoning about some knowledge-rich domain, such as internal medicine or geology, with a view to solving problems or giving advice."1 Then, let me offer a prophecy: the expert system of the future will think like a woman. Now, let me explain, gradually, what I mean.

Perhaps the most daunting obstacle facing expert-systems research and development is the so-called frame problem. Though variously theorized for some time-arguably since before Plato-it remains essentially an epistemological problem of how to update ("up-data") a world-model in terms of a changing world. Given the persistence of this seemingly intractable problem, we must find a better approach to it. I propose that we consider a semiotic approach, a kind that is foreshadowed by much research on the problem and that virtually begs for further exploration, especially in relation to an analogous problem now plaguing education generally.

John McCarthy and Patrick Hayes first define the frame problem explicitly for the domain of artificial intelligence in 1969. Though they discuss it as one of several "major traditional problems of philosophy" that bear on the issue of what knowledge, what "general representation of the world in terms of which its inputs are interpreted," a computer program must have in order to be "capable of acting intelligently in the world,"2 they clearly suggest that it is a practical problem of indubitable importance. And it figures in their conception of "a Missouri [show-me] program" for which "the world and its laws of change" can be represented (469).

McCarthy and Hayes investigate the difficulties of construing that representation as a system of interactive abstract descriptions called finite automata. Though such a deterministic system has certain attractions, it also involves, among other snags, geometric multiplications of information and patent epistemological inadequacy. They entertain the idea of avoiding the first, most bothersome difficulty by the introduction of a frame-based principle of economy, so that "A number of fluents [situational functions] are declared as attached to the frame and the effect of an action is described by telling which fluents are changed, all others being presumed unchanged"(487). There remains, however, "the impossibility of naming every conceivable thing that may go wrong," and McCarthy and Hayes discourage any hope that such uncertainty could be dealt with by "attaching probabilities" to statements of the representational formalism or by describing the laws of change in any given situation through parallel processing (490). Their critical survey of the literature concerned with simpler mechanisms for grappling with this knotty enigma (making use of modal logics of contingent truth, multiple possible worlds, cohistoricality, schemes for maintaining consistency through change), while not wholly pessimistic, surely indicates its durability.

Over the next few years a number of techniques for dealing with or reducing the frame problem were tried. None were really successful, but many were suggestive. Consider, for example, the method-learning STRIPS system of Fikes and Nilsson, which introduces some interesting features of heuristic economy (by stripping away subgoals as it goes) into state-space theorem proving. Or consider the PLANNER programming language developed and refined by Carl Hewitt and others, which provides principled strategies for changing propositions in a state-space database. But surely it is Marvin Minsky who first takes thinking about the frame problem to the next, somewhat more productive stage.3

Minsky begins by endorsing what he sees as a general movement away from the theories of both behaviorists and logic-oriented AI researchers and toward cognitively-oriented theories that, "in order to explain the apparent speed and power of mental activities," see their processes as "larger and more structured, and their factual and procedural contents...[as] more intimately connected...."4 Drawing upon previous work in the latter vein, he offers his own theory, admitting that it "raises more questions than it answers"(212). Because his brief version of that theory lays out its premises so systematically and also because it anticipates future theorizing as well as my own later discussion, I here quote it at some length:

When one encounters a new situation (or makes a substantial change in one's view of the present problem) one selects from memory a substantial structure called a frame. This is a remembered framework to be adapted to fit reality by changing details as necessary. A frame is a data-structure for representing a stereotyped situation.... Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are not confirmed.

We can think of a frame as a network of nodes and relations. The "top levels" of a frame are fixed, and represent things that are always true about the supposed situation. The lower levels have many terminals-"slots" that must be filled by specific instances or data. Each terminal can specify conditions its assignments must meet. (The assignments themselves are usually smaller "subframes.") Simple conditions are specified by markers that might require a terminal assignment to be a person, an object of sufficient value, or a pointer to a sub-frame of a certain type. More complex conditions can specify relations among the things assigned to several terminals.

Collections of related frames are linked together into frame systems. The effects of important actions are mirrored by transformations between the frames of a system....

For visual scene analysis, the different frames of a system describe the scene from different viewpoints, and the transformations between one frame and another represent the effects of moving from place to place. For nonvisual kinds of frames, the differences between the frames of a system can represent actions, cause-effect relations, or changes in metaphorical viewpoint. Different frames of a system share the same terminals; this is the critical point that makes it possible to coordinate information gathered from different viewpoints.

Much of the phenomenological power of the theory hinges on the inclusion of expectations and other kinds of presumptions. A frame's terminals are normally already filled with "default" assignments. Thus a frame may contain a great many details whose supposition is not specifically warranted by the situation....

The default assignments are attached loosely to their terminals, so that they can be easily displaced by new items that better fit the current situation....

The frame systems are linked, in turn, by an information retrieval network. When a proposed frame cannot be made to fit reality-when we cannot find terminal assignments that suitably match its terminal marker conditions-this network provides a replacement frame....

Once a frame is proposed to represent a situation, a matching process tries to assign values to the terminals..., consistent with the markers at each place. The matching process is partly controlled by information associated with the frame (which includes information about how to deal with surprises) and partly by knowledge about the system's current goals. There are important uses for the information obtained when a matching process fails. (212-213)
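Minsky's description is already close to a data structure, and a minimal sketch may make its anatomy plainer. The following is my illustration only, not Minsky's implementation; the class and slot names are hypothetical. It shows fixed top-level facts, terminals carrying marker conditions and weakly bound default assignments, and a matching process that retains its failure information rather than discarding it.

```python
# A minimal, illustrative sketch of Minsky's frame idea (not his implementation):
# a frame has fixed top-level facts plus "terminals" (slots), each carrying a
# marker condition and a weakly bound default assignment.

class Terminal:
    def __init__(self, name, condition, default=None):
        self.name = name
        self.condition = condition   # predicate a candidate filler must satisfy
        self.default = default       # weakly bound default assignment
        self.value = default

class Frame:
    def __init__(self, name, top_level, terminals):
        self.name = name
        self.top_level = top_level   # things "always true" of the situation
        self.terminals = {t.name: t for t in terminals}

    def match(self, data):
        """Try to assign observed data to terminals; collect failures.

        Failure information is returned rather than discarded, since Minsky
        stresses its use in selecting a replacement frame."""
        failures = []
        for key, value in data.items():
            term = self.terminals.get(key)
            if term is None:
                failures.append((key, "no terminal for this datum"))
            elif term.condition(value):
                term.value = value   # displace the default
            else:
                failures.append((key, "marker condition not met"))
        return failures

# A toy "room" frame: the walls are fixed; the furniture is a slot.
room = Frame(
    "room",
    top_level={"has_walls": True},
    terminals=[Terminal("furniture", lambda v: isinstance(v, str), default="chair")],
)
print(room.terminals["furniture"].value)   # default before matching: 'chair'
print(room.match({"furniture": "table"}))  # [] -- condition met, default displaced
print(room.terminals["furniture"].value)   # 'table'
print(room.match({"animal": "platypus"}))  # failure: no terminal for this datum
```

The defaults are "attached loosely": the first datum that satisfies a terminal's marker simply displaces them, exactly the weak binding Minsky describes.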

Minsky later pursues the question of how such "failure" information may be used in choosing an alternative frame, a process he compares to the notion of paradigm shift. The thrust of his theory is toward a fuller understanding of human intelligence as a problem-solving faculty engaged through vision, but he conceives no distinction between such a theory and "a scheme for making an intelligent machine"(215). He believes that both must be more complex and procedurally-oriented than their typical predecessors.

Minsky is quick to dismiss any theory "that we see so quickly that our image changes as fast as does the scene" and presses his own case that "the changes of one's frame-structure representation proceed at their own pace; the system prefers to make small changes whenever possible; and the illusion of continuity is due to the persistence of assignments to terminals common to the different view frames"(221). This dependence of continuity "on the confirmation of expectations" thus provides a way of explaining why human vision apparently does not-and in the case of a machine, I gather, should not-require moment-to-moment "complete reprocessing"(221, 224). Still, however persistent those assignments may be-and Minsky offers the "conjecture that frames are never stored in long-term memory with unassigned terminals" but rather "with weakly bound default assignments at every terminal!"-they are all "weakly bound" and therefore ultimately changeable (228).

Stretching his theory a bit more, Minsky compares it to the developmental learning theory of the Swiss psychologist Jean Piaget. Minsky suggests that "there is a similarity between Piaget's idea of a concrete operation and the effects of applying a transformation between frames of a system"(229). He appears less sanguine about the possibility of describing the role played by frames in Piaget's formal stage of thinking, when children are "able to reason about, rather than with transformations," when they "learn the equivalent of operating on the transformations themselves"-though he easily analogizes such an operation to that by which an AI system reads and comments on its own programs (230). I see no reason why one could not articulate frames (or meta-frames and so on) as "'representations of representations'"(230), but one would be advised to think of them at all levels, including that of what is "originally" represented, in textualist or semiotic terms.

Such speculation leads Minsky to linguistic and, more specifically, semantic concerns. Transcribing Chomsky's transformational grammar into frame terms, he argues, for instance, that grammatical rules and conventions that people use function "to induce others to make assignments to terminals of structures," a line of thought he follows until he arrives at a theory of the interrelationship of the grammatical and the meaningful: "if the top levels are satisfied but some lower terminals are not, we have a meaningless sentence; if the top is weak but the bottom solid, we have an ungrammatical but meaningful utterance"(231, 232). Also, he reminds us that "terminals and their default assignments," especially in natural-language structures, "can represent purposes and functions, not just colors, sizes, and shapes"(232)-though he neglects to mention other dimensions of experience that a full-blown semiotic approach to representation would want to take into account.

Still, Minsky's frame-oriented discussion of natural language is quite suggestive, especially his treatment of discourse. Much of that treatment concerns a brief fable, an animal story. Careful to avoid any "radical confrontation between linguistic vs. nonlinguistic representations," Minsky nonetheless risks assuming their intimacy or identity at some level and offers a hypothetical "frame-oriented scenario for how coherent discourse might be represented," one that finally bristles with implications for both machine and human learning in the postmodern age:

At the start of a story, we know little other than that it will be a story, but even this gives us a start. A conventional frame for "story" (in general) would arrive with slots for setting, protagonists, main event, moral, etc.... Each sentential analysis need be maintained only until its contents can be used to instantiate a larger structure. The terminals of the growing meaning-structure thus accumulate indicators and descriptors, which expect and key further assignments.... As the story proceeds, information is transferred to superframes whenever possible, instantiating or elaborating the scenario.... But what if no such transfer can be made because the listener expected a wrong kind of story and has no terminals to receive the new structure? (236-237)

As Minsky explores this failure, larger issues begin to emerge between the lines:

We go on to suppose that the listener actually has many story frames, linked by...retrieval structures.... First we try to fit the new information into the current story frame. If we fail, we construct an error comment like "there is no place here for an animal." This causes us to replace the current story frame by, say, an animal-story frame. The previous assignments to terminals may all survive, if the new story frame has the same kinds of terminals. But if many previous assignments do not transfer, we must get another new story frame. If we fail, we must either construct a basically new story frame-a major intellectual event, perhaps-or just give up and forget the assignments. (Presumably that is the usual reaction to radically new narrative forms! One does not learn well if the required jumps are too large....) (237)
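The replacement process Minsky narrates can be sketched procedurally. This is my toy rendering under stated assumptions (two hypothetical story frames, a first-candidate retrieval rule), not his proposal: facts are fitted into the current frame, and a datum with no terminal triggers a switch to a frame that can receive it, with earlier assignments surviving only where the new frame shares the terminals.

```python
# An illustrative sketch (frame names hypothetical) of Minsky's story-frame
# replacement: fit incoming facts into the current frame; on failure, switch
# to a frame that has a terminal for the new datum, carrying over whatever
# earlier assignments still transfer.

STORY_FRAMES = {
    "story": {"setting", "protagonist", "main_event"},
    "animal-story": {"setting", "protagonist", "main_event", "animal"},
}

def listen(facts, current="story"):
    assignments = {}
    for slot, value in facts:
        if slot not in STORY_FRAMES[current]:
            # "there is no place here for an animal" -> look for a replacement
            candidates = [f for f, slots in STORY_FRAMES.items() if slot in slots]
            if not candidates:
                # constructing a basically new frame -- a "major intellectual
                # event" -- or giving up would go here
                continue
            replacement = candidates[0]
            # previous assignments survive only if the new frame shares the terminals
            assignments = {k: v for k, v in assignments.items()
                           if k in STORY_FRAMES[replacement]}
            current = replacement
        assignments[slot] = value
    return current, assignments

frame, got = listen([("setting", "a forest"), ("animal", "a fox")])
print(frame)  # 'animal-story'
print(got)    # both assignments transferred
```

The "large jump" case is visible in the filtering step: the fewer terminals two frames share, the more assignments are lost in the switch, which is precisely where learning becomes difficult.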

Much learning theory, particularly that of Joseph Novak, stresses that effective learning builds, in a principled way, on what the student already knows. The more the updating (new terminal assignments) involved, all the way to the radical "updating" of changing frames or creating a new one, the more crucial and complex becomes the educator's mediation. When the "jump" is large enough-and the "stories" have more at stake-we are in the realm not of conventional but of what I call apeironic education (from the Greek àpeiron, the "indeterminate," "endless," "unfamiliar"). That realm involves, following Minsky's treatment above, intra- and inter-frame problems, both of which, as we shall see, may be most productively conceived not simply in linguistic and/or nonlinguistic terms but in semiotic terms.5

If Minsky does not inaugurate a semiotic approach to the frame problem, he surely does offer theoretical provisions helpful to such an inauguration. Reviewing earlier work, he is led "to a view of the frame concept in which the 'terminals' serve to represent the questions most likely to arise in a situation"(246). This view proposes a more poststructuralist understanding of the terminals as having not only weak assignments but interrogative ones whose meaning is non-primitive in some sense and to that extent open. This view also occasions his recasting his definition of a frame as, now, "a collection of questions to be asked about a hypothetical situation; it specifies issues to be raised and methods to be used in dealing with them"(246). Grappling with the problem of how one locates "a frame to represent a new situation," he argues that locating it "must depend largely on (learned) knowledge about the structure of one's own knowledge"(247). Such a provision entails the complicating postulates that "an active frame cannot be maintained unless its terminal conditions are satisfied" and that "even the satisfied frames must be assigned to terminals of superior frames," along with "any substantial fragments of 'data' that have been observed and represented"(248). By these provisions for the interconnection, at once hierarchical and rhizomatic, of frames at once open and "satisfied," Minsky's portrait does indeed begin to suggest the human brain/mind itself. It hints at the staggering complexity of the interlaced processes involved in keeping the whole ever-switching thing updated in terms of progressively refined "difference information" arising from attempts to match situations with memories of other situations (253).

Minsky retreats from this snarl-though also from the opportunity of semiotically exploring the mechanism of "difference-describing processes"-in order to deal with the more practical issues of problem solving in what he contends is typically for humans "a small context"(255, 257). In doing so he proposes a shift away from older approaches to solving problems (by game theory and so on) and toward a frame-oriented (and apeironic) approach. It entails "a more mature and powerful paradigm," but it also presumes more productive management of the frame problem:

The primary purpose in problem solving should be better to understand the problem space, to find representations within which the problems are easier to solve. The purpose of search is to get information for this reformulation, not-as is usually assumed-to find solutions; once the space is adequately understood, solutions to problems will more easily be found. (259)

Thus his approach to problem solving constitutes "a way to improve the strategy of subsequent trials," but its lingering frame-problem questions are addressed only by the invocation of a model for "the frame-filling process"(263). This model seems somewhat jerry-built (with "demons" and such) (263)-though it does provide for a fairly "realistic" flexibility in dealing with the dialectics of known and unknown.

Others since have variously redefined the frame problem, dealt with the loose ends of Minsky's pioneering work, or taken off in directions of their own.

Bertram Raphael, for example, is especially concerned with frame-problem implications for robot systems. He acknowledges that the problem has discouraged the development of advanced AI systems and contributed considerably to turning attention toward the development of what we now call expert systems. This trend, I would contend, is increasingly fueled by delusions of simplicity, and its limitations now make imperative a full-scale and more creative return to that problem. It is still true, as Raphael concludes, that "No completely satisfactory method has been discovered" for dealing with it.6

Daniel Dennett sees the frame problem as comprising an area where AI researchers and philosophers should collaborate more. Like other thought-experimenters, he realizes that the originary enigma of the frame problem is that it is, in effect, already "solved" by organically-embodied cognition:

When a cognitive creature...performs an act, the world changes and many of the creature's beliefs must be revised or updated. How? It cannot be that we perceive and notice all the changes (for one thing, many of the changes we know to occur do not occur in our perceptual fields), and hence it cannot be that we rely entirely on perceptual input to revise our beliefs. So we must have internal ways of up-dating our beliefs that will fill in the gaps and keep our internal model...roughly faithful to the world.7

If perceptual processes cannot account for how all the updating is accomplished, what "internal" processes could account for the rest? Whatever they are, they are not, as Dennett, echoing McCarthy and Hayes, persuasively argues, driven by propositional logic. Since "systems relying only on such processes get swamped by combinatorial explosions in the updating effort," it appears that "our entire conception of belief and reasoning must be radically revised if we are to explain the undeniable capacity of human beings to keep their beliefs roughly consonant with the reality they live in"(125-126). Now that is the real "updating effort" required; while Dennett does not propose a specific game plan for it, he does recognize that it must be interdisciplinarily collaborative on a new scale-though he neglects to speculate on the pertinence of semiotics.

But many philosophers who have engaged the frame problem have done so with little sympathy for the whole AI/ES enterprise-a generalization that holds also for most semioticians, who bracket or reject the idea of a machine engaging in semiosis. Consequently, such philosophers typically have used the problem not as a spur toward deeper understanding and more creative strategizing but as an (or the) Achilles heel into which they can shoot all manner of epistemological arrows. One could easily compile a list of such "negative" philosophers, but probably the most exemplary is Hubert Dreyfus, who therefore can represent the rest.

Dreyfus compares Minsky's frame model to Husserl's analysis "of intelligence as a context-determined, goal-directed activity-as a search for anticipated facts"-through which "the noema, or mental representation of any type of object, provides a context or 'inner horizon' of expectations or 'predelineations' for structuring the incoming data..."8 The comparison is not extended, but it does include analogous functions for Husserl's predelineations and Minsky's default assignments; and Dreyfus finds it compelling enough to argue that, just as Husserl's attempt at formalizing such functions "ran into serious trouble," so will Minsky's (35). Thus Minsky, like Husserl, has taken on a futile "'infinite task'" but has done so more naïvely-by losing track of how human intelligence "presupposes a background of cultural practices and institutions" that "pervade our lives as water encompasses the life of a fish"(36). Terry Winograd's work toward a more holistic account of context through KRL receives a similar judgment.9

Indeed, Dreyfus's critique of KRL, with its focus on the problem of regress in specifying context, entails his most forceful counterstatement on frames. It begins with the originary enigma:

Human beings, of course, don't have this problem. They are, as Heidegger puts it, already in a situation, which they constantly revise. If we look at it genetically, this is no mystery.... Human beings are gradually trained into their cultural situation on the basis of their embodied precultural situation, in a way no programmer using KRL is trying to capture. But for this very reason a program in KRL is not always-already-in-a-situation. Even if it represents all human knowledge in its stereotypes, including all possible types of human situations, it represents them from the outside like a Martian or a god. (52-53)

As many AI workers have observed, critics of their enterprise have a way of constantly altering the definition of what would constitute its success, a maneuver that Dreyfus here takes to the tautological extreme of arguing that machine intelligence, however impressive, is not human because it is not humanly embodied. And he seems oblivious to the possibility that an extraordinary intelligence, because it is other than the usual genetically- and culturally-conditioned kind, might be quite useful.

But Dreyfus labors this argument, and at the same time he blames AI workers for ignoring "noncognitive aspects of the mind"(53), rather than suggesting how they might study them more productively (semiotically, I would propose) toward ends other than a total mimesis that probably is both impossible and ultimately irrelevant to their enterprise. Well, maybe I am unfair to Dreyfus, accusing him of not doing what he never set out to do. Let me apply his critique more positively but also point out a blind spot in it that needlessly impedes thinking about the frame problem. His ceteris paribus thesis is essentially a sort of Wittgensteinian translation of that problem. It goes like this:

whenever human behavior is analyzed in terms of rules, these rules must always contain a ceteris paribus condition, i.e., they apply "everything else being equal," and what "everything else" and "equal" means in any specific situation can never be fully spelled out without a regress. Moreover, this ceteris paribus condition is not merely an annoyance which shows that the analysis is not yet complete.... Rather...[it] points to a background of practices which are the condition of the possibility of all rulelike activity. (56-57)

Furthermore, those practices are a matter of "what we are," which is, because of the incompletable regress, "something we can never explicitly know"(57). Certainly the thesis makes a caveat worth having, but it not only serves as perhaps the significant criterion of AI accountability: it also tells us that the ceteris paribus condition is the most sensitive condition with which apeironic education must deal when engaging the indeterminate.

Still, regardless of Dreyfus's conclusions, one has to remember that the regress somehow is halted in the human brain/mind and by a process hardly as meta-cognitive as that which allows us to discern and puzzle it. That process, whatever its representational logic, is largely unconscious, as practical finally as an expert system in limiting "infinite tasks," and surely best understood in terms not so much of logical conundrums as of semiotic mechanisms.

Nonetheless, some work in unconventional logic has bearing on the possibility of a semiotic approach to the frame problem, as may be seen in the 1980 double issue of the journal Artificial Intelligence.10 That issue was devoted to the theme of non-monotonic (or, to simplify, inconsistent) reasoning and featured papers concerned with extending the system of conventional logic, either by refining and enlarging it or by elaborating mechanisms for "meta-ing" it. Whatever the ostensible theme of those papers, the frame problem, in various guises, expectably pops up enough in them to be regarded as the leitmotif of the issue. The problem is hardly "solved" there, but it is suggestively reconsidered; and I would recommend the issue to anyone concerned with the background to a semiotic approach.

However, it is Paul Thagard who really introduces a new step into the non-monotonic dance by arguing for "the relative unimportance of consistency as a property of knowledge systems"(233).11 He moves toward a semiotic view of the frame problem at least to the extent of defining a frame as "a particular kind of nested association list" that is "more than thinly disguised sets of propositions in predicate calculus" and emphasizing its procedural efficacy (236, 241). But he also adduces results from psychological research that argue for "at least a presumption that the human information processing system uses framelike structures"(245). For Thagard the particular power of frames inheres in their capability for rich procedural interrelation. This capability is of such importance, in humans, that one should not be unduly irritated by the "proneness to error" of default values (251). Indeed, he proposes that frame-based AI systems should procedurally accommodate inconsistencies and contradictions by placing constraints on inference production when they occur, localizing their effects much as human cognition, with its "modular character," does (253). Thus he proposes more attention to the overall capability of such systems, as humanly instantiated, and less to their fallibility.
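Thagard's point about localizing inconsistency can be made concrete. The sketch below is my illustration, not his code: in classical logic a single contradiction licenses any conclusion whatever, whereas a modular, frame-like system can quarantine the contradiction within one module and go on reasoning in the others.

```python
# A sketch of Thagard's proposal (my illustration, not the PI program): in
# classical logic one contradiction lets everything be derived; a modular
# system instead constrains inference locally and keeps inferring elsewhere.

def infer(modules):
    """Each module is a set of literals; 'p' and '-p' together are contradictory."""
    conclusions = {}
    for name, facts in modules.items():
        contradictory = any(("-" + f) in facts for f in facts if not f.startswith("-"))
        if contradictory:
            conclusions[name] = "quarantined"   # constrain inference production here
        else:
            conclusions[name] = sorted(facts)   # safe to reason with these
    return conclusions

result = infer({
    "birds": {"flies", "-flies"},          # inconsistent module
    "geology": {"stratified", "igneous"},  # untouched by the contradiction
})
print(result["birds"])    # 'quarantined'
print(result["geology"])  # ['igneous', 'stratified']
```

The "modular character" of human cognition that Thagard invokes corresponds to the per-module loop: an error in the bird frame never reaches the geology frame.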

In retrospect one can see Thagard's preoccupations as setting the stage for the surfacing of a paper by Brachman (1985) that had been "circulating underground for quite some time."12 Informally structuralist but not quite semiotic in his approach, he mounts a sobering attack against the widespread use of prototypes in knowledge representation:

Along with a way to specify default properties for instances of a description, proto-representations allow the overriding, or "cancelling" of properties that don't apply in particular cases. This supposedly makes representing exceptions...easy; but, alas, it makes one crucial type of representation impossible-that of composite descriptions whose meanings are functions of the structure and interrelations of their parts. (80)

As a consequence, frame-based AI systems "are able to represent only a fraction of what it might appear they can," because they fail to incorporate "some definitional capability," without which "frames cannot express even simple composite descriptions, like 'elephant whose color is gray'..."(80, 81). Brachman argues not that default reasoning is unnecessary but that it is far from sufficient. It supplies negative power-that of dealing with what does not change or, in cancellation, with what is not the case-but lacks the means for defining what does change or is the case and so entails primitives at a rather high level of analysis (a limitation with which Minsky struggled).
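Brachman's contrast between cancellable defaults and definitional composition can be shown in miniature. This is my sketch, not his formalism, and the names are illustrative: a prototype that permits overriding any default cannot *define* the composite "elephant whose color is gray," because an instance may simply cancel the color; a definitional test makes the property criterial rather than default.

```python
# An illustrative contrast (my sketch, not Brachman's formalism): defaults
# with cancellation supply only "negative power," while a composite
# description requires that a property be part of what the term means.

from dataclasses import dataclass

@dataclass
class Prototype:
    defaults: dict
    def instantiate(self, **overrides):
        # exception-handling: any default may be cancelled by an override
        return {**self.defaults, **overrides}

def is_gray_elephant(instance):
    # definitional: the color belongs to the meaning of the description
    return instance.get("kind") == "elephant" and instance.get("color") == "gray"

elephant = Prototype({"kind": "elephant", "color": "gray"})
clyde = elephant.instantiate(color="pink")       # the prototype happily cancels gray
print(is_gray_elephant(elephant.instantiate()))  # True
print(is_gray_elephant(clyde))                   # False -- the default proved nothing
```

Nothing in the prototype blocks Clyde's cancellation; only the definitional predicate captures the composite description Brachman has in mind.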

Brachman admits the difficulties of any strategy of definition (circularity, incompleteness, and so on), especially for complex phenomena in a changing world. Still, he rightly insists that the avoidance of it, of some smaller grain of "compositional structuring" (and thus "representation by structured correspondence"), has given rise to frame-based systems that entrain crippling paradoxes of typicality (88). He calls such paradoxes "entertaining dilemmas" (nicely exemplified by the platypus as "the typical atypical mammal"), and they suggest that "In general it is probably a good idea to keep 'typical' out of the names of our nodes"(89). However, though he echoes Thagard in his conclusion that "the pendulum seems to have swung too far in the direction of exception-handling," he does not pursue the development of an alternative approach (92).

The frame problem has since received more thought relevant to a semiotic approach, albeit none of it yet revolutionary.

In his elaboration of the requirements for a better theory of change, Yoav Shoham outlines worthwhile refinements in temporal reasoning, partly by renaming the frame problem proper "the inter-frame problem" and distinguishing it from the "intra-frame problem" (a matter of internal non-monotonicity arising from concurrency of actions).13 Though his theory requires that both problems be avoided, he offers no instrumentality for doing so.

Paul Thagard (1986), in contrast, makes an important advance into the territory of machine semiosis by announcing that, "If C.S. Peirce were alive today, he would be an avid practitioner of artificial intelligence"-because "computational models provide new ways of investigating signs dynamically, seeing them in fluid interaction through diverse modes of inference and association."14 Thagard does not attempt a thoroughgoing semiotic translation of the frame problem, but he does suggest the fruitfulness of dealing with it through Peircean abduction. The program that he, along with Holyoak and others, developed to exploit such abduction is called PI (Processes of Induction, abduction being regarded as a kind of probabilistic induction). The program is apeironic to the extent that it can combine existing concepts to form new ones and thus "produce new knowledge under conditions of uncertainty"(292). Though its use of both production rules and concepts (which cluster rules in a frame-like way) makes it deductive as well as inductive and reliant on typicality rather than definition, the program incorporates features of economy that suggest a certain psychological realism, mostly in its provision for inferential focus. Thagard and Holyoak have exploited its constraints on informational swamping in limited simulation, but such work stops short of considering "mattering" in semiotic terms, especially in situations of radical (informational) change.15
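The abductive move at the heart of PI can be caricatured in a few lines. The following is a toy of my own devising, far simpler than the actual PI program of Thagard and Holyoak, with invented rule names: given a surprising fact and rules of the form "hypothesis implies observation," abduction proposes the antecedents that would explain the fact.

```python
# A toy sketch of Peircean abduction as PI treats it (illustrative only; the
# actual program is far richer): from a surprising fact and rules of the form
# hypothesis -> observation, hypothesize the explaining antecedents.

RULES = [
    ("it_rained", "grass_is_wet"),
    ("sprinkler_ran", "grass_is_wet"),
    ("frost_formed", "grass_is_white"),
]

def abduce(surprising_fact, rules):
    """Return candidate hypotheses whose consequent matches the fact."""
    return [h for h, obs in rules if obs == surprising_fact]

print(abduce("grass_is_wet", RULES))  # ['it_rained', 'sprinkler_ran']
```

That multiple hypotheses are returned is the point: abduction is probabilistic, not deductive, and producing new knowledge under uncertainty means living with rival candidate frames.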

Still, Thagard is on the right track in arguing that any useful theory of abduction must be "part of some sort of information processing theory of memory and problem solving," one that accounts for rich patterns of association and retrieval.16 Thus, as he explains, "PI implements parallelism by allowing for the simultaneous activation of numerous concepts and for the (simulated) simultaneous firing of numerous rules" and, thereby, for the interrelating of "lots of problematic facts"(294). Nonetheless, he readily admits that much remains to be done to determine whether or not such "mechanisms for spreading activation" could adequately describe the processes that shape the "amazing fit between our minds and the world that makes it possible for us to construct abductions"(294). Be that as it may, his argument that there is no "special faculty" unaddressed by his discussion-unless, I might suggest, it is semiosis itself-persuades.

In his survey of the status of expert systems, Peter Jackson finds the frame problem-or variations on it-ubiquitous. It clearly has played a major role in defining AI research in what he terms its "Modern Period" (from the late 1970s on), which "is characterized by an increasing self-consciousness and self-criticism, together with an orientation towards techniques and applications."17 This attempt to escape from an AI enterprise paralyzed by critical self-awareness into an ES enterprise hell-bent on practical applications has, because of the persistence of the frame problem, turned out to be a move from the frying pan into the fire. Jackson doesn't say that, but he implies as much.

Jackson's survey illuminates this irony suggestively, albeit controversially. He agrees with Brachman that the appealing project of building "frame- and object-based systems" that rely on analogical representations instead of production rules and predicate logic has gone awry in its obsession with exception-handling (67). Definition, conceptual composition, structural correspondence, minimal conditions of consistency, whatever useful supplementation to semantic nets such systems once promised-all are "open to question" until there is "further work on their epistemological foundations," which are "quite shallow"(67, 68). Whatever those neglected foundations amount to, they have not been substantially deepened by the development of expert systems based on either structured objects or a combination of frames and inference engine.

Jackson's conclusions concerning the state of ES are sobering. Many of them implicate the frame problem, and they suggest the importance of approaching it semiotically and with an awareness of the increasingly apeironic nature of knowledge. He asserts confidently-and, I think, correctly-that knowledge engineers want expert systems to function more like human beings and less like theorem provers. His assertion is "not motivated by anthropomorphism" but is made in recognition of "an essential requirement for the mechanization of expertise," one that "may mean that future systems will be less, not more, 'logical' than they are now." Why? Because expert systems "deal not with truth, but with knowledge..." And, since "Knowledge is corrigible," progress in machine learning "has been unsurprisingly slow" in the camps of both "'cognitivists'" and "'automatic programmers'"(205). In their representations of it, both have relied on oversimplifications of the uncertainty, incompleteness, and exceptionality that are part and parcel of human experience. Moreover, as Jackson stresses, implementing an expert system typically "is not a controlled experiment, in that the contrast with alternative approaches is seldom systematic"(216). But I would argue that ES development will hardly go further without such experimentation, which ought to involve also comparing different combinations of approaches. As Jackson observes, "There is certainly little evidence from psychology to suggest that human beings use a single representational scheme for encoding information"(216). If one expands "psychology" until it comprises semiotics, then that evidence is much to the contrary.

No matter how multiple the approach to ES implementation, however, it must encounter the frame problem. Just as theorem provers are, to follow Jackson, dominantly "rule-based" systems, so human beings appear to be dominantly "model-based" systems, precisely the kind for which frames "are especially suitable"(218). Consequently, any approach will have to deal with all the issues of dynamically structured data and (in)consistency with which I have been concerned, and it will have to incorporate a world-model and mechanisms for updating it, rapidly and radically in many cases.

Well, after this history, what kind of model? Doubtless frame-like in some way, but composed of what? Surely Thomas Sebeok c(l)ues us here: "The Innenwelt of every animal comprises a model...that is made up of an elementary array [not a bad synonym for frame] of several types of nonverbal signs..."; in hominids that array is compounded, consisting of "two mutually sustaining repertoires of signs, the zoosemiotic nonverbal, plus, superimposed, the anthroposemiotic verbal."18 However one theorizes those repertoires, some equivalent of them must be instantiated in any future expert system that minimizes the frame problem. The Innenwelt of the machine must comprise a model made up of a frame of both verbal and nonverbal signs.

Thus I do not agree with Feigenbaum and McCorduck (1984) that "the critical bottleneck in artificial intelligence," particularly "applied AI," is "the problem of knowledge acquisition" rather than "the problem of knowledge representation" or "the problem of knowledge utilization."19 I am persuaded that representation is the most crucial of the three, though I suspect that we will discover them all to be (versions of) the same (frame) problem.

Recently on the television program Beyond 2000 one of the hosts used the word semiotics in reference to some high-tech research problem but then apologized for it and quickly passed on to another topic. We still live in an age in which such a gesture is perhaps typical-but not for long. AI and ES researchers clearly have been exploring approaches with semiotic overtones; and, as is demonstrated by Sebeok's (1986) survey concerning the most important goals for semiotics during the 1990s, some eminent semioticians have a burgeoning interest in machine semiosis. The time is ripe for reframing the frame problem.

Let me propose, more specifically, that, in thinking of the frame as an array of signs, the frame problem should be recast as one of semioclasm, of "breaks" between signifiers and signifieds arising from changes in referents (which idea should be distinguished from Roland Barthes's notion of "'semioclasty,' a destruction of the sign"20). By such recasting, terminal assignments are conceived as signifieds, relatively more "abstract" at super- than at sublevels of the frame, whose "floatingness" (but not necessarily "destruction") of attachment is conditioned by the labilities of their referents (thus the destruction of a referent, however one imagined that, might well entail a destruction of attachment-as well as loss of the relevant signifier). Or, to turn this about, the degree of lability of the referent, as mediated by endosemiosis, determines the degree to which the signified (and associated signifieds in the tree of the frame) is called into question. In effect my proposal is that we shift our frame from one that interprets the frame problem in terms of theorem-prover non-monotonicity (and theorematic contingency) to one that interprets it, no less "logically," in terms of human semioclasm (and semiotic contingency).

What is gained by this shift? Certainly the theoretical particulars of this semiotized frame will remain vague for some time-in large part because AI and ES workers have no more productive agreement on the conceptual architecture or the terminology of the frame than semioticians have on those of the sign array-but the advantages of the shift are not difficult to argue. Terminals can be readily understood as subframes, assignments as more or less floating or detached (less or more viable as defaults) on a continuum from apeironic to conventional knowledge. If Minsky's nodes are regarded as signifieds (or signifiers-or both-depending on one's perspective), then their relations could be described as interpretant (inter-)relations. (Here, though, one might want to invoke a terminology more like that associated with Lamb's notion of the "nection." He defines that entity as "the internal sign or the micro-sign, ...the basic module of which the individual semiotic systems are built" and "which we could describe as an organizing device that connects a combination of mental features to a meaning or consequence or function" and thus, through interconnections, constitutes the cognitive-semiotic "relational network."21) Provisions for inter-frame sharing of terminals, the elaboration of frame systems, the evolution of meta-frames (or meta-frame systems and so on), as well as the fine details of functions like terminal markers-all seem equally "natural" to the ingenuities of this approach. It appears to be the only one that can even begin to deal with codifying Dreyfus's "cultural situation" and sorting out the cognitive economies implied by the ceteris paribus condition. 

The semiotic approach should have little trouble accommodating Thagard's characterization of the frame as a nested association list or his defensible insistence on its tolerance for inconsistency-since, as Floyd Merrell observes, "Inconsistency...lies at the heart of semiosis, coiled like a worm, doubled back onto/into itself."22 The sign array, however constructed as software, is definitional by its nature and should lend itself readily to Brachman's composite descriptions. If, as Thagard says, computational models allow us to see signs dynamically, surely it is conversely true that our increasingly dynamic view of signs should enable us to build computational models that instantiate fluidly interactive processes. Such models would entail frames that are semiotically procedural (far beyond being merely fancy databases with updatable slots) and can incorporate practical relevance restrictions and constraints on attention (mattering after all has to do mostly with precisely which frame is retrieved in response to a given situation and-a little more complexly, in a meta-frame perspective-with why). The parallelism/concurrency mandated here now seems less a difficulty at the hardware level, more one that might be managed at the software level (if the two levels can still be so distinguished) by richly interrelated arrays in which sharp-edged inference is intertangled with much fuzzier associational patterns.

But how do we translate the frame problem into semioclastic terms? That task requires that we learn more of how humans handle semioclasm. In more or less routine situations they seem to handle it well; that is, if a "normal" 30-year-old person moves from his/her first house to a second one, then the break between my house (as signifier) and the first house (figured as the signified) is mended by an updating that substitutes the second house (figured as a signified). But what if the person is 80 years old and moves from his/her nineteenth to twentieth house? (And one could continue: what if s/he is married for the fifth time, is afflicted by Alzheimer's, and so on?) The point is that, though humans have "solved" the frame/semioclasm problem, they have done so only to some degree. There are circumstances, increasingly prevalent in a world of accelerative change, in which humans cannot deal with that problem well. The "outer" situation may be too changeful for "normal" human comprehension, or the "inner" world-model may be sclerotic with defaults (traditionally typical of old age?) or adrift on weak and constantly switching terminal assignments (traditionally typical of youth?). There also may be combinations of these possibilities, any of which can be imagined, in extremis, as psychotic. And any of them may be described as involving disjunctive signifieds, things not meaning what they once did or being indeterminate in meaning (culturally or even neurologically). But surely we can learn from these hyperbolic circumstances much about the character of the human modeling system, how it deals (and fails to deal) with semioclasm.
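
The house-moving example can be made concrete as a toy frame whose signifier stays fixed while its terminal assignment is updated as the referent changes. The class and its fields are hypothetical, a sketch of the mending/sclerosis distinction rather than a proposal for an actual ES design:

```python
class Frame:
    def __init__(self, signifier, signified):
        self.signifier = signifier
        self.signified = signified        # current terminal assignment
        self.default = signified          # the original, "sclerotic" fallback
        self.history = [signified]        # trace of earlier assignments

    def update(self, new_signified):
        # Mending the semioclastic break: attach a new signified,
        # retaining the old one as history rather than destroying it.
        self.history.append(new_signified)
        self.signified = new_signified

# The routine, well-mended case: one move, one update.
my_house = Frame("my house", "first house")
my_house.update("second house")

# The hyperbolic case: eighteen more moves in rapid succession.
for n in range(3, 21):
    my_house.update(f"house #{n}")
```

After the loop the current assignment is "house #20" while the default is still "first house": a reader who answers from the default exhibits exactly the sclerosis described above, while one lost in the lengthening history exhibits the drift of constantly switching assignments.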

We do not have to look far to find circumstances of interest. A controlled semioclasm doubtless is crucial to cognitive growth and creativity. Lev Vygotsky says as much of the former when he characterizes literacy in terms of the acquisition of the ability to detach signs from any unique context.23 Somewhat similarly, creativity, as many have argued, very much involves the displacement of "old" meanings and the assignment of "new" ones (in both cases signs are broken and mended/updated with a different signified). But what if semioclasm is not so controlled? A version of what happens is what happened a hundred years or so ago in Meiji Japan:

When European languages and their phonetic orthographies replaced Chinese as the central repository of otherness, language itself became ineluctably bifurcated into sign and meaning. The infusion of foreign technological and cultural artifacts...transformed the environment into one of kaleidoscopic opacities resistant to immediate comprehension. Perception itself became a form of translation, an epistemological encounter with the dislocation of meaning constituting the modern experience of the world.24

This circumstance did not occur-and does not variously continue-without dire consequences, though the Japanese in some ways arguably have succeeded in dealing with it as a positively apeironic opportunity. But notice the unavoidable implication that the Japanese are not alone in their "epistemological encounter." Theirs is ours, is increasingly everyone's; and insofar as our experience of the world is no longer modern but postmodern, that dislocation is all the more (frame-)problematic.

No one has explored the scale and purport of this postmodern dislocation of meaning more cogently than Bill McKibben. Though he is concerned principally with the greenhouse effect and related phenomena, they entail issues semiotic in flavor:

We have not ended rainfall or sunlight; in fact, rainfall and sunlight may become more important forces in our lives. It is too early to tell exactly how much harder the wind will blow, how much hotter the sun will shine.... But the meaning of the wind, the sun, the rain-of nature-has already changed. Yes, the wind still blows-but no longer from some other sphere, some inhuman place.25

The consequence of the impact of human artifice becomes apparent: "We have deprived nature of its independence, and that is fatal to its meaning"-so that "there is nothing but us"(58). Thus "Summer is going extinct, replaced by something else that will be called 'summer,'" but "it will not be summer, just as even the best prosthesis is not a leg"(59). In our "postnatural world" the connection between past and present is broken, so that "Those 'record highs' and 'record lows' that the weathermen are always talking about-they're meaningless now"(60). Such phrases are generally used as if their previous meanings were still pertinent to the situations to which they refer. Likewise, we continue to use the signifier rain as if it did not mean, among other subframe associations, something like "poisonous acid solution" or sunlight as if it did not mean, in similar fashion, something like "actinic carcinogen"-though we seem to be gradually updating or at least tweaking the subframe for sex to include more associations with "death." Our world-model has not been updated in terms of a radically changing world: the superframe nature is riddled with semioclasm. To mend it with updated meaning-as McKibben has it, "to work out our relationship with it" as a "new 'nature'"-"will take us a very long time..., if we ever do"(96). Its unpredictability now has less to do with (the nostalgic notion of) the whims of Mother Nature than with a lack of correspondence with our model.

"What do you do when the past is no longer a guide to the future?" asks McKibben (133), indexing the nonlinearity, the non-monotonicity, of this apeironic (if not psychotic) circumstance. If the defaults are out-of-date, the new assignments/signifieds are direly floating. "The problem is," to trope McKibben a bit, "there are no good substitutes," and we have thereby "a vast collection of 'mights'"(133). This line of meditation about the seeming definiteness of the past leads him to speculate about ghostlier demarcations: "Such notions will quickly become quaint. The idea that nature-that anything-could be defined will soon be outdated. Because anything can be changed"(168-169). What most troubles him about this semioclastic apocalypse is that such redefinition as can be achieved may be worked out in machine terms, with the consequence that, by a kind of meta-semioclasm, all distinction between life and nonlife will break down (and, one supposes, culture will become "culture"). In view of McKibben's proposal for the global development of a "deep ecology" that would counter the momentum of such a possibility (181), I would insist that its most important issue is that of how world and world-model became so dangerously divergent. What has limited our updating? Can we learn-and this is the primary question for apeironic education-to do it better? And can we create AI and expert systems that can do it better or, even, better than we can-not to master us but to help us close the gap between mind, multifarious in its cultural articulations, and a runaway world?

If we are to understand more fully semioclasm and the mending of it, we must have not only a more sophisticated theory of change, as Shoham has emphasized, but one that treats change as temporalized semiosis that involves, to cite Dominick LaCapra's capsuling of the historical process, "iteration with alteration"-with allowance for "major discontinuities or breaks across time."26 Such a theory must take into account not only the way "objects may function as signifiers" but also the "'slippage of signifieds under signifiers'" and how a system can resist both the "vertiginous lability and rigid binary opposition" of meanings (247). In textualist terms, such a theory must deal with how a world-text can be read diachronically as various text-worlds or how various world-texts can be read diachronically as the same text-world; that is, it must provide for the switching (or not) of signifieds "under" signifiers (the making [or not] of new terminal assignments) in shuttling (or not) between differently constituted past/present interpretive frames. It also should offer ways to expedite such shuttling and to enrich the (endosemiotic) world-text/text-world interface in terms of the mutual changes entailed. Without such a theory we will not go much further in the development of human-like expert systems; indeed, without it we may find ourselves profoundly lost in the woods of a world of just such historical alienation (not knowing what to update from) as McKibben envisions us already entering, a post-postmodern version of the world at the end of Umberto Eco's The Name of the Rose, a place where we "no longer know what it [the text] is about."27

But we have not gotten so lost yet. And we are beginning to become more sensitive to and intelligent about semioclasm. James White, in a book aptly entitled When Words Lose Their Meaning, investigates interconnected changes in the world, in the world as constituted by language, in the reader who reads and writes the text of the world. Since "a text is in fact largely about the ways in which its reader will be changed by reading it,"28 reading (in the broadest sense) itself involves, by this construal, a process of updating. Thus "reading" a text (a situation, the world) requires a crucial sense of difference between past and present, for words (signifiers) do lose their meanings (signifieds) over time and do not have "exactly the same meaning each time they are used..."(23). Any "mode of thought" not responsive to such differences would be "impossibly mathematical"(23)-synchronically prisoned much as the "thought" of AI and expert systems is today. Recognizing the increasingly radical changefulness of meaning, we are gradually enacting White's advice "not to lament the loss of fixity but to learn to sail" on meaning's "shifting sea"(278). Can we help machines do so as well?

Perhaps a hint may be found in Susan Noakes's proposal for "a new strategy" of what she calls, as her title has it, Timely Reading. Her fundamental argument for this strategy is that reading "must be understood as the constantly transformed product of historical change, not a timeless process focused on timeless texts but rather a 'timely' activity."29 Though her principal concern is with literary texts, much of the argument she elaborates for her proposal can easily be seen as pertinent to the "reading" of other kinds of "texts"-even by a machine. Timely reading is an oscillatory reading of a text in terms of both what it meant then and what it means now, a shuttling back and forth "between exegesis and interpretation," each of which "mirrors and depends upon the other"(12, 13). Thus Noakes is interested, to translate to my perspective, in how a given world-text can be read as two text-worlds, two interrelated frames-in updating not in terms of changes in the world-text (though they may be taken into account) but in terms of its context as embodied in the reader, exactly the background that the reader uses (or is used by) in reading the world-text and that Dreyfus regards as outside the AI/ES domain.

Noakes argues that a useful understanding of timely reading, which involves "two representations [of the same object] as both congruent and non-identical" (compare LaCapra's "iteration with alteration"), requires that the process be semiotized (210). Her semiotization attends to semiosis as the "generation of interpretant-signs," with particular focus on "the temporality of the interpretant"(212). For her, as for Eco and much less for Peirce, "Change is essential to the interpretant," and she emphasizes "the fundamentally temporal character of...reading, showing how the interpretant changes-that is, metamorphoses itself into another, and still another, interpretant-" on various time scales (212). Thus the interpretant "must be conceived of as intrinsically and necessarily dynamic rather than static," as characterized by what she calls "alternativity," a "tendency to posit more than one possible meaning"(213). Through such a conception she is concerned to understand better a process that could help us overcome the restriction whereby, as the poet Howard Nemerov has it,

we do not learn from history...,
Because we are not the people who learned last time

an oblique but startling evocation of the frame problem.30

In Noakes's conception of the interpretant, "Reading is by nature a 'timely' activity" not only because the reader expects that the text means/meant something but also because "it is a process of sign production in which there occurs over time a series of substitutions of interpretants, backward and forward and back again, for the representation in the text..."(215). With everything read (words, objects, events) "but elements in the process of sign production"(216), temporal contextualization, backward and forward, is imperative-which would seem to demand that frames be able not only to update interpretants but to "back-date" them as well (for which provision subgoal-stripping is patently counterproductive).
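
Noakes's shuttling "backward and forward and back again" suggests, as a data structure, a sign that keeps a dated chain of interpretants, so that a reader (or a frame) can both update and "back-date." The sketch below is my illustration only; the dates and readings are invented, and nothing here claims to capture the interpretant in its full Peircean sense:

```python
import bisect

class Sign:
    def __init__(self, signifier):
        self.signifier = signifier
        self.interpretants = []   # sorted list of (year, reading)

    def record(self, year, reading):
        bisect.insort(self.interpretants, (year, reading))

    def read(self, year):
        # Return the interpretant current at the given year: the latest
        # reading recorded at or before it. Asking about a past year is
        # "back-dating"; asking about the present is updating.
        i = bisect.bisect_right(self.interpretants, (year, chr(0x10FFFF)))
        if i == 0:
            return None   # no reading yet recorded for so early a date
        return self.interpretants[i - 1][1]

rain = Sign("rain")
rain.record(1900, "life-giving water from an inhuman sphere")
rain.record(1990, "precipitation carrying industrial acids")

then = rain.read(1950)   # exegesis: what it meant then
now = rain.read(1995)    # interpretation: what it means now
```

Because nothing is ever overwritten, the chain supports exactly the oscillation between exegesis and interpretation that timely reading requires, which is also why subgoal-stripping (discarding the intermediate assignments) would be counterproductive here.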

This dynamic of interpretant shuttling does not get us out of the frame problem; indeed, it gets us more deeply into it-but with a fuller understanding that implies both important limitations on anthroposemiosis and, in Noakes's perspective, the possibility, perhaps for machines as well, of making (once again) a virtue of necessity:

I propose that the subject seeking to be conscious of its own temporality and of that of the texts it encounters must be always making its continuity in the present, and consciously doing so, while deliberately encountering an ever-widening range of information from the past, information that announces its discontinuity with the present.... The range of information that must be encountered must indeed remain "ever-widening" in that as one element of information becomes incorporated into the continuity being made in the present, it loses its character as temporally discrepant from the present; and different ones must be encountered to serve anew the old function, entirely essential to "timely reading," of recalling to the reader his or her temporal discrepancy from the text. (233-234).

Helping people be better exegetes/interpreters thus becomes a matter of "explicit education in the principles of hermeneutics"(243)-and, I would add, of semiotics-good advice for machine learning as well. And such education will have to be saliently apeironic because there is no possibility of ever really being updated, only one of being engaged in richer updating. In consequence of the permanence of electronic memory, machines may succeed over indefinite periods of time in being, as it were, both "the people who learned last time" and those who are learning this time (reproductive and inventive), but they still will have to make the best of a no-more-than-asymptotic mending of semioclasm. What Jules Henry says of human learning holds also, mutatis mutandis, for machine learning: "we will never quite learn how to learn, for since Homo sapiens is self-changing, and since the more culture changes the faster it changes, man's methods and rate of learning will never quite keep pace with his need to learn," a predicament that "is the heart of the problem of 'cultural lag,'"31 itself a version of the frame problem.

While mechanosemioclasm so far is a relatively simple phenomenon, the anthroposemioclasm from which we need to learn may well be much subtler than I have suggested. And though I offer the latter with little hesitation as rubrical for what must be the most basic failure (when unmended) of semiosis as adaptive behavior and its most basic success (when serving as a precondition for a new sign structure), empirical/experimental research has gleaned little about its mechanisms. Certainly one is tempted to theorize how it underlies Henry's cultural lag, how its mending involves (to translate Noakes into Derrida) the retention of the trace of what was changed (as unchanged), how it maintains beliefs/expectations that hinder the acquisition of (new) knowledge, how it figures in the relation between frame density and learning ability, how it correlates with probabilistic information-theoretical terms (specifically, the degradation of information into redundancy [increasing semioclastic defaults] or its enhancement through the accommodation of surprise [increasing apeironic mending]), how it might help us better comprehend the shared features of various kinds of semantic aphasias/amnesias, and so on. But such theorizing will not become fruitful for AI/ES development-and human education-without more investigation, direct or indirect, of anthroposemioclasm in operation. There is, however, some research that bears on the matter, and more may be in the offing.

Let me cite a few examples. John Hutchinson and Daniel Beasley, reporting results of their research on aphasia and related disorders in older persons, observe that "One common linguistic problem concerns a disturbance in semantic functioning" (by which patients have trouble understanding the meanings of words or cannot evoke words to express what their thoughts mean [anomia]), and they call for more experimental research "devoted to the study of subtle symbolization changes that may exist among geriatric patients."32 (160, 167). Reviewing the work of Piro (1971) and Irigaray (1973), Bär (1976) jointly reads the former's synthesis of findings regarding semantic dissociation in schizophrenics (in which the "relation between the sign and what it ordinarily signifies in a given culture is modified in various degrees") with the latter's study of linguistic automatism in senile psychotics and relates them "to the much more advanced semiotic studies of aphasia," seeing in all three kinds of disorders symptoms of semioclasm: "loss of lexical stability," "reduction of metalinguistic distancing," "incapacity to deal with multiple contexts simultaneously," and "general reduction or even abolition of the various receptor and central skills required to deal with novel information"-all "features...of quasi-closed systems which tend to abandon interaction not only with the external environment, but often with the internal one as well"(271, 275-276). 

Sebeok notes how little we know about either the formation or impairment of human sign systems; speculates about the possibility that "'repetitiousness'" in aging or aged persons is "not simply a symptom of physiological deterioration" but "rather a semiotic manifestation of an adaptive strategy" that, along with other compensatory semiotic modifications, helps them "to cope with the unusually, often dramatically, altered social environment" in which they live; and is "convinced that the semiotics of old age is one of the most promising research areas for the immediate future"-especially, as I read him, if aging involves not only semioclasm but also processes of coping different from the "semiotic competencies 'normal' adults take for granted."33 Deirdre Kramer and Diana Woodruff's study, corroborating earlier research, shows "lower conceptual differentiation among older adults," but their study finds also that young women demonstrate "higher conceptual differentiation relative to all other groups"-results that are cogent but in need of further investigation, and clearly related to "highly educated older women" demonstrating "the greatest breadth of categorization."34 Such results may well be pertinent to a fuller understanding of semioclastic sclerosis (terminals locked with outdated defaults or somehow otherwise-say, by neural necrosis-incapable of updating) and how it might be minimized. Furthermore, they correlate interestingly with the conclusion of Nancy Mergler and Michael Goldstein-who, like Irigaray and Sebeok, argue that the old process information by a system different from that of younger persons (and certainly different from that of very young persons)-that "the information-processing system of the elderly adult, though poorly suited for rapid, interpretative encoding of information, is well suited for the decoding and transmission of information already in the system."35

There is a strong implication here that the characteristics of dysfunctional semioclasm in humans are much the same as those in present AI and expert systems at the edges of their cognitive-behavioral envelopes. The better we understand such "pathological" semiosis, the better we will understand how and why such systems are hindered by the frame problem. If the key to updating is careful attention to and flexible reconceptualization (going "against the grain" of the defaults) in response to change/difference-I think it is-and if Kramer and Woodruff's findings are corroborated by further work-I think they will be-then the most productive way of dealing with the frame problem will be discovered/invented by exploring what must be called gynosemiosis. Virgil had it right (though for the wrong reason) when he described woman as "varium et mutabile." That is why the expert system of the future will think like a woman.

Which is not to say that we cannot gain from other semiotic studies knowledge useful to that discovery/invention. But it is certainly to say that we are slowly learning, from studies of sexual dimorphism and gender differences too numerous to cite here, a good deal about gynosemiotic capacities that have been devalued by the male-dominant cultures of the world but are crucially relevant to dealing with the frame problem-and that we should enlarge and deepen such studies. Those capacities, all present in men but apparently generally stronger (perhaps partly by reason of hemispheric morphology) in women, include-though doubtless are not limited to-acute sensitivity to changes in the immediate environment, fine-motor skills, an ability to detect and formulate small sensory distinctions, a keen sense of affective/biological relevance, a tolerance for differing or contradictory points of view and the inconsistencies of their interplay, and (as Kramer and Woodruff observed) a powerful faculty for conceptual differentiation and categorization. The recognition of such capacities seems wholly appropriate in an age increasingly wary of objective universals, certainties, and absolutes (male narratives of sameness) and intent on intersubjective particulars, uncertainties, and relatives (female narratives of otherness).

To put it another way, our age is witnessing and more or less self-consciously promoting a shift from "the grand narratives" of knowledge to "the little narrative";36 consequently, as Karlis Racevskis synthesizes a cognate argument, "the role of the 'universal' intellectual...has now given way to that of the 'specific' intellectual, the savant or expert."37 And that expert, machine or human (though perhaps we should remain the meta-experts), is a creature whose god is in the details, the bailiwick of gynosemiotic discriminations. To acknowledge that situation, surely fortunate for ES development (an enterprise that needs to welcome more women into its ranks), is not to advocate an overthrow of anthroposemiosis but to anticipate a continuing renversement, a corrective balancing of perspectives.

In an essay tellingly entitled "An Uncertain Semiotic," previously cited, Merrell attempts to define our era in terms of how meaning is viewed. "Between the meaning determinists and the indeterminists," he asserts, "the scales at present weigh in favor of the latter."38 That tipping began "around 1905 with Albert Einstein's special theory of relativity," when "a new world-perspective finally began to emerge," one that marks our era as that of "The Emergent Perspective"(250, 251). Merrell elaborates:

Just as quantum theory has superseded classical physics, so the "new cybernetics" approach has superseded the classical theory of communication. In this new era, and speaking generally of the reigning conceptual framework, incompleteness, openness, inconsistency, statistical models, undecidability, indeterminacy, complementarity, polyvocity, interconnectedness, and fields and frames of reference are the order of the day. (252)

One easily could extend his inventory to include fractals, chaos theory, string theory, fuzzy logic-all the conceptual apparatus we have developed to engage events, however categorized, that keep meaning in flux.

In an era so intricately woven with indeterminacy of meaning, the more we appeal to unrevised conventional wisdom (the established Certeau-esque "semiocracy") for answers and solutions, default harbors in the storm of environmental complexity, the more we risk institutionalizing a general unmending cultural semioclasm: an unresponsive closure to the indicators of (the signs constituted by) change/difference ("external" or "internal," perhaps Derridean différance itself). That closure puts us in the position of the old (at least those who are not "highly educated older women"), of those otherwise withdrawn into automatism, of our present intelligent machines. If humankind is to survive and endure more meaningfully, then it must, as it grows older, learn-and learn by-even more powerful heuristic strategies of mending. It must build machines that are "younger" and less automatic. In their joint dealing with the unknown, humankind and its machines must become more feminized, so that finally both are engaged in a common apeironic education.



I would like to thank Doug Klusmeyer for editorial assistance on this essay.

1 Peter Jackson, Introduction to Expert Systems (Wokingham, UK: Addison, 1986) 1.

2 John McCarthy and Patrick Hayes, "Some Philosophical Problems from the Standpoint of Artificial Intelligence," Machine Intelligence 4, ed. Bernard Meltzer and Donald Michie (New York: American Elsevier, 1969) 463.

3 See R.E. Fikes and N.J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence 2 (1971) 189-208; Carl Hewitt, "Procedural Embedding of Knowledge in PLANNER," Proceedings of the Second International Joint Conference on Artificial Intelligence (1971) 167-184; Marvin Minsky, "A Framework for Representing Knowledge," The Psychology of Computer Vision, ed. Patrick Henry Winston (New York: McGraw, 1975) 211-277.

4 Minsky, 211.

5 See Joseph D. Novak, A Theory of Education (Ithaca: Cornell UP, 1977).

6 Bertram Raphael, The Thinking Computer: Mind Inside Matter (San Francisco: Freeman, 1976) 175.

7 Daniel C. Dennett, Brainstorms: Philosophical Essays on Mind and Psychology (Montgomery, VT: Bradford, 1978) 125.

8 Hubert L. Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence (1972; New York: Harper, 1979) 34.

9 See Daniel G. Bobrow and Terry Winograd, "An Overview of KRL, a Knowledge Representation Language," Cognitive Science 1.1 (1977) 3-46.

10 See Daniel G. Bobrow, ed., Artificial Intelligence 13.1-2 (1980).

11 Paul Thagard, "Frames, Knowledge, and Inference," Synthese 61 (1984) 233.

12 Ronald J. Brachman, "'I Lied about the Trees' Or, Defaults and Definitions in Knowledge Representation," AI Magazine 6.3 (1985) 80.

13 Yoav Shoham, "Ten Requirements for a Theory of Change," New Generation Computing 3 (1985) 470, 472.

14 Paul Thagard, "Charles Peirce, Sherlock Holmes, and Artificial Intelligence," rev. of Umberto Eco and Thomas A. Sebeok, eds., The Sign of Three, Semiotica 60 (1986) 289.

15 See Paul Thagard and Keith Holyoak, "Discovering the Wave Theory of Sound: Inductive Inference in the Context of Problem Solving," Proceedings of the Ninth International Joint Conference on Artificial Intelligence, Vol. 1 (1985) 610-612.

16 Paul Thagard, "Charles Peirce," 294.

17 Jackson, 8.

18 Thomas A. Sebeok, "In What Sense Is Language a 'Primary Modeling System'?" Proceedings of the 25th Symposium of the Tartu-Moscow School of Semiotics, Imatra, Finland, 27th-29th July, 1987, ed. Henri Broms and Rebecca Kaufmann (Helsinki: Arator, 1988) 74.

19 Edward A. Feigenbaum and Pamela McCorduck, The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World (New York: Signet, 1984) 85, 84.

20 Roland Barthes, "Lecture in Inauguration of the Chair of Literary Semiology, Collège de France, January 7, 1977," trans. Richard Howard, October 8 (1979) 14.

21 Thomas A. Sebeok, S. M. Lamb, and J.O. Regan, Semiotics in Education: A Dialogue (Claremont, CA: The Claremont Graduate School, 1988) 21, 22.

22 Floyd Merrell, "An Uncertain Semiotic," The Current in Criticism: Essays on the Present and Future of Literary Theory, ed. Clayton Koelb and Virgil Lokke (West Lafayette, IN: Purdue UP, 1987) 255.

23 Lev S. Vygotsky, Thought and Language, ed. and trans. Eugenia Hanfmann and Gertrude Vakar (Cambridge, MA: MIT Press, 1962) 99.

24 Earl Jackson, Jr., "The Metaphysics of Translation and the Origins of Symbolist Poetics in Meiji Japan," Publications of the Modern Language Association 105 (1990) 261.

25 Bill McKibben, The End of Nature (New York: Random, 1989) 48.

26 Dominick LaCapra, Rethinking Intellectual History: Texts, Contexts, Language (Ithaca: Cornell UP, 1983) 336.

27 Umberto Eco, The Name of the Rose, trans. William Weaver (New York: Warner, 1984) 611.

28 James Boyd White, When Words Lose Their Meaning: Constitutions and Reconstitutions of Language, Character, and Community (Chicago: The University of Chicago Press, 1984) 19.

29 Susan Noakes, Timely Reading: Between Exegesis and Interpretation (Ithaca: Cornell UP, 1988) xii.

30 Howard Nemerov, "Ultima Ratio Reagan," War Stories: Poems about Long Ago and Now (Chicago: The University of Chicago Press, 1987) 6.

31 Jules Henry, Culture against Man (New York: Random, 1963) 284.

32 John M. Hutchinson and Daniel S. Beasley, "Speech and Language Functioning among the Aging," Aging and Communication, ed. Herbert J. Oyer and E. Jane Oyer (Baltimore: University Park Press, 1976) 167, 160.

33 Thomas A. Sebeok, The Sign & Its Masters (Austin: University of Texas Press, 1979) 59, 70.

34 Deirdre A. Kramer and Diana S. Woodruff, "Breadth of Categorization and Metaphoric Processing: A Study of Young and Older Adults," Research on Aging 6 (1984) 282-283.

35 Nancy L. Mergler and Michael D. Goldstein, "Why Are There Old People: Senescence as Biological and Cultural Preparedness for the Transmission of Information," Human Development 26 (1983) 78.

36 Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge, trans. Geoff Bennington and Brian Massumi (Minneapolis: University of Minnesota Press, 1984) 60.

37 Karlis Racevskis, Michel Foucault and the Subversion of Intellect (Ithaca: Cornell UP, 1983) 129.

38 Merrell, "An Uncertain Semiotic," 250.