SEHR, volume 4, issue 2: Constructions of the Mind
Updated 4 June 1995

mindless mechanisms, mindful constructions

an introduction

Güven Güzeldere & Stefano Franchi



The Handbook of Artificial Intelligence gives the following definition of artificial intelligence (AI):

Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior--understanding language, learning, reasoning, solving problems, and so on.1

This characterization, and the technological future it imagines, are both products of recent decades, and highly dependent on the birth and development of the digital computer. But the broad aspirations and ambitions underlying AI existed long before. As a partially autonomous academic discipline, AI is very young, as evinced by its post-1950 computational tools, the distinctive set of human resources it employs, and the particular "world view" it largely adopts. But at a deeper level the roots of AI reach back over many centuries, not only in academic thinking, but also in public imagination. Far from being new, the questions that AI aspires to answer have a long and distinguished career in the history of intellectual thought.

For many reasons, therefore, it seems useful to try to locate the research efforts grouped under the banner "artificial intelligence" within a broader, historical framework. Only from such a perspective is it possible to judge AI's successes and failures fairly, and to get a cogent perspective on its future, free both from gut-level negative reactions and from the seduction of fantasy-filled promises.2

the roots in the past

The claim that we can understand human nature by finding out about the mechanism of its embodiment has been around for many centuries. In his "Intellectual Issues in the History of Artificial Intelligence," Allen Newell claims that AI is fundamentally built on this idea. Thus he characterizes AI as having the goal of understanding and constructing the mind, sustained by a strategy of understanding and constructing its underlying mechanism. Moreover, Newell claims that doubt about AI's future success stems from a disbelief in the truth of this fundamental assumption, tracing this line of thinking back to the "Cartesian split between mind and matter."3

Understanding bodily mechanism was not an ever-present concern in philosophy. For instance, it played almost no substantial role in Plato's attempt to understand the human psyche, although Newell's historical path can perhaps be traced back to Aristotle. Ironically, however, the greatest "mechanician" in the history of the study of mind was Descartes, the architect of the modern mind-body dualism. Descartes carefully studied the nervous system, proposed a theory of nervous activity in the body based on hydraulic principles as they were conceived in the seventeenth century, and went on to suggest that bodies, human or otherwise, were no different from carefully constructed automata, or "self-moving" machines. According to Descartes, life had everything to do with the body (res extensa) and the functioning of its mechanism. But this mechanism had nothing to do with the mental (res cogitans). Not only did the activities of the mind not require embodiment, but no kind of intricate or complex bodily function would suffice for mental existence. For a Cartesian, that is, artificial intelligence would be possible only by taking the word "artificial" very seriously, and by characterizing the quest in terms of purely behavioral capacities. As far as the prospects go for building "real thinking," there would be, in principle, no hope.4

Regardless of the status of the mind, the idea of building mechanical creatures able to exhibit behavior that mimics certain exclusively human tasks--talking, singing, writing, chess playing, etc.--occupied a distinguished place in public imagination, especially after the Enlightenment. Various kinds of automata were immensely popular, especially in the seventeenth and eighteenth centuries. The fascination of the public with devices of this sort--such as marionettes that danced and played the piano through pre-set mechanical movements--suggests a long thread of interest in, and a respectable heritage for, the enterprise that is headed today by the research effort dubbed "artificial intelligence."

Needless to say, these forebears did not have access to the kinds of physical and conceptual tools that are used by today's AI researchers. It is only within the past fifty years that we have had digital computers to perform simulations, metaphors based on parallel processing, and virtual machines to capture layers of abstraction.5

But other styles of mechanism and metaphor have always been available to fill those positions. The mind was likened to a giant mill with cogs and gears by Leibniz; it was thought to be a system of elastic pipes operating according to the principles of fluid mechanics by Descartes and others; and the metaphor of the mind as a telephone switchboard was popular until just a few decades ago. Although they may seem impoverished in comparison to present-day contenders, there is no doubt that these earlier mechanical metaphors anticipated much of today's thinking on AI.

from homogeneity to diversity

There is a significant point to note in comparing AI's precursors to its present incarnation: possibly the greatest difference between the past and the present of the AI paradigm lies in the intellectual background and upbringing of those who have thought about mind, mechanism, and their relation. Present-day AI research is mostly pursued behind the closed doors of technicalities that are largely inaccessible to the average non-specialist. And as a matter of historical contingency, it is only those with expertise in the world of computers (hardware and software) who occupy the flagship of AI research. This is not to assign blame, nor to imply that the situation is necessarily undesirable. But the question is still worth asking: Does the goal of understanding intelligence have to be pursued in this isolated, compartmentalized way? What are the consequences of the present situation for the future of the overall program?

The historical situation was different. Those who likened the mind to an hydraulic engine, and tried to develop theories on that basis, were not limited to engineers with expertise in fluid dynamics. In the same vein, could artists, sociologists, philosophers, critics, and literature scholars--people who have no particular expertise in the workings of computers--not make contributions to today's AI?

We believe that the answer to this question is positive. More strongly, we would press a related retrospective diagnosis. In the course of its brief contemporary history, AI research has been home to a great many controversies: about symbols, consciousness, proceduralism, the relevance of neuroscience, etc. In part, these upheavals were due to internal fluctuations stemming from AI pioneers' badly mistaken estimates of what their research could accomplish in a given--and almost always too short--period of time. At least one factor responsible for these misjudgments, we are convinced, was the overly homogenized and restricted professional constituency of the AI community.

great expectations, broken promises

In the early days of AI, when impressive first results were coming in from incipient research programs, many predictions were made, not only about the future of AI, but about its then current status. Hubert Dreyfus notes, for example, that Herbert Simon, one of the chief instigators of the entire AI enterprise, made the following remarks as early as 1958:

It is not my aim to surprise or to shock you ... But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and create. Moreover, their ability to do these things is going to increase rapidly until--in a visible future--the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

Simon went on to make three predictions about what the subsequent ten years would bring in terms of AI development. All of the following, he predicted, would be achieved by 1968:

1. A digital computer would be world chess champion, unless the rules barred it from competition.

2. A digital computer would discover and prove an important new mathematical theorem.

3. Most theories in psychology would take the form of computer programs, or of qualitative statements about the characteristics of computer programs.6

Today, almost thirty years after the deadline has passed, it would be difficult to maintain that any of those predictions has been fulfilled. Nor is it clear whether anyone would want to predict that, with the possible exception of the first, they will be achieved within the next generation. (See below for more on chess-playing programs.)

Patrick Winston, an early representative of MIT's AI research program, explains this phenomenon of misjudgment in the following terms:

Around 1960 we start[ed] to speak of the Dawn Age, a period in which some said, "In ten years, they will be as smart as we are." That turned out to be a hopelessly romantic prediction. It was romantic for interesting reasons, however. If we look carefully at the early predictions about Artificial Intelligence, we discover that the people making the predictions were not lunatics, but conscientious scientists talking about real possibilities. They were simply trying to fulfill their public duty to prepare people for something that seemed quite plausible at the time.7

Perhaps it is true that the sole reason behind what in retrospect seem to be inflated promises was the AI community's desire to save the public from the psychological shock of the upcoming "robot age."8 The more interesting question, however, is not whether AI researchers were "trying to fulfill their public duty to prepare people for something that seemed quite plausible," but rather, why a certain vision of AI's short term potential seemed so imminently plausible.

diagnosis

This is where our conjecture about the homogenized constituency of the professional AI community becomes relevant. Early predictions about the capacities of machines vis-à-vis the capacities of humans were not seen as misjudgments, we believe, because the research community failed to comprehend the magnitude of the AI project, a magnitude to which its deep historical roots stand testament. In the heady days of early AI, those in the business of writing the new programs lacked expertise in historical, philosophical, or social-scientific analysis. No one is to blame; the humanities and the social sciences were not part of the professional fields of computer science, logic, and mathematics which formed the theoretical grounding of at least the first several generations of AI. If any blame is to be meted out, it lies in the conviction that AI was nothing but a largely engineering enterprise, or at best, one that required no ties to any discipline beyond engineering and the natural sciences.

In the foreword to their book, AI in the 1980s and Beyond: An MIT Survey, Patrick Winston and Michael Brady seem to anticipate this diagnostic line. They counter it as follows:

Of course psychology, philosophy, linguistics, and related disciplines offer various perspectives and methodologies for studying intelligence. For the most part, however, the theories proposed in these fields are too incomplete and too vaguely stated to be realized in computational terms.9

Their "complaint" is warranted: theories about human intelligence proposed within the humanities and social sciences do not fit the computational paradigm, and it is probably true, as well, that they are insufficiently complete and precise to lead to direct implementation. But from that observation it is by no means obvious what conclusion to draw. At a minimum, attention is drawn to a dichotomy between what (at least these advocates claim) AI takes to be a criterion on theoretical adequacy, and what criteria are met by theories in other disciplines. But the fact that the humanities' theories do not align with computational ones does not automatically tip the scales in the latter's favor.

More specifically, suppose it is also true, as these authors claim, that AI theories are, and theories from the humanities are not, sufficiently precise to be implementable.10 Once again, no conclusion follows that the implementable theories are right--or even that they are better. It is equally possible--and in historical retrospect seems likely--that the theories embraced by AI were too strictly stated, too narrow to do justice to the phenomenon of intelligence. Sure enough, there may be light under the computational lamp (in its present incarnation), but are we sure the key to intelligence is to be found there?

toward a platform of exchange (if not interaction)

John McCarthy once remarked that AI cannot afford to avoid philosophy, because then it will end up using "bad philosophy," rather than "no philosophy."11 He gives voice to a similar sentiment in the abstract of his article, "What has AI in Common with Philosophy?" as follows:

AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human level intelligence and ability to learn from its experience, needs a general world view in which to organize facts.12

Others have pointed out the necessity of broadening the professional constituency of AI and re-examining its fundamental assumptions about human nature. For instance, a special 1980 issue of the SIGART Newsletter compiled responses from various representatives of AI research on questions regarding the relation of AI to other fields. The following remarks are from Phil Hayes, who, at the time, was heading a natural language understanding project at Carnegie-Mellon University:

There are lessons to be learnt by AI from other disciplines. ... The AI worker should learn how to apply insights from other fields to his own business of constructing intelligent computer systems. ... In the reverse direction, AI can challenge these more traditional disciplines by providing them the opportunity to test computationally the speculations out of which their theories are constructed.13

Hayes's remarks suggest an interesting theoretical reversal: might there also be a flow of effect in the opposite direction? Are there lessons or insights about intelligence to be gleaned from present endeavors in AI, that should be learned by the humanities and social sciences? This avenue has been even less well explored than its reverse, but one can still find notable assenting voices. In the Introduction to their Philosophy and AI: Essays at the Interface, for example, Robert Cummins and John Pollock, both philosophers, express agreement on this subject with the philosopher of science Paul Thagard, in a manner reminiscent of Hayes's, as well as Winston and Brady's, remarks above:

As Paul Thagard has pointed out, artificial intelligence liberates us from the narrow constraints of standard logic by enforcing rigor in a different way, namely via the constraint of computational realizability.14

Given the enormously difficult tasks facing the overall AI enterprise, it may be thought that excursions into non-technical disciplines, especially into disciplines that use non-formal methodologies wholly alien to computational practices, are a luxury that the AI research agenda cannot afford. We do not want to deny that this broadening of the intellectual boundaries of the active AI community will be a major project, one that will require additional work and substantial effort. Nor do we mean to suggest that the effort of building working bridges between AI and diverse humanities and social sciences communities--such as art, music, history, philosophy, sociology, and science studies--will be easy, short-term, or straightforward. It will require struggles of every conceivable sort: intellectual, academic, personal, and political. Nevertheless, even if it is rough, this route may (and in our estimation will) be the only one that leads towards the Holy Grail of AI. If that is so, the alternative of pursuing AI research as a project isolated solely within engineering and the natural sciences, however more straightforward or easy, is no alternative at all.

projections into the future

The overarching goal of AI is sometimes referred to as Turing's Dream, as sketched in his classic essay "Computing Machinery and Intelligence": to build a digital mechanism that would accomplish some task that is taken, in the public eye, to require particular qualities belonging to the human mind: plasticity, intelligence, flexibility, communicability, etc. The dream stands in opposition to the construction of small, specific commercial applications that provide assistance in database searches, airline reservations, and the like, and also to tasks that obviously require brute computational force transcending human capacity. This is why no one is fascinated--anymore--by hand-held devices that perform astounding arithmetical calculations at lightning speed. Chess-playing programs, however, which are becoming increasingly powerful simply by increasing the amount of brute force applied (Intel has recently introduced a chip that can analyze 100,000 moves per second), are capturing public attention, since the brute force is veiled behind a form of behavior typically associated with something dear to our hearts: the intricate game of chess, where "minds clash."
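What "brute force" means here can be made concrete with a small sketch (ours, in Python, and purely illustrative: the game interface is hypothetical, and real chess programs add pruning, move ordering, and elaborate hand-tuned evaluation on top of the same idea). The procedure is nothing more than trying every legal continuation to a fixed depth and keeping the score of the best line found:

    def negamax(game, state, depth):
        # Score the position for the side to move, examining every
        # continuation `depth` moves deep--pure brute force, no pruning.
        if depth == 0 or game.is_over(state):
            return game.evaluate(state)  # static score, from the mover's viewpoint
        best = float("-inf")
        for move in game.legal_moves(state):
            # What is good for the opponent is bad for us, hence the negation.
            best = max(best, -negamax(game, game.play(state, move), depth - 1))
        return best

    def best_move(game, state, depth):
        # Choose the move whose subtree scores highest for the mover.
        return max(game.legal_moves(state),
                   key=lambda m: -negamax(game, game.play(state, m), depth - 1))

A faster chip simply lets this loop visit more positions per second, which is why hardware advances translate so directly into playing strength.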

Progress in computer chess has reached the point of creating anxieties over the inevitable doomsday when the human grand champion will lose a game to a machine.15 The fact that media sentiment over games that pit humans and machines against one another fluctuates so greatly can be taken as evidence of the extent to which public understanding of the nature of artificial intelligence is unfounded and vulnerable. Newell and Simon make a related point in their discussion of the place of chess-playing in AI:

Chess is a game. There are numerous reasons why games are attractive for research in problem solving: the environment is relatively closed and well defined by the rules of a game; there is a well-defined goal; the competitive aspects of a game can be relied upon (in our culture) to produce properly motivated subjects (even when no opponent is present!).16

The important point is that a chess game provides a closed environment: the machine does not need any understanding of the human context of the game; it cannot be embarrassed into losing by making a series of "stupid" mistakes; it would not feel exuberant for having beaten a tough opponent--and none of this matters. Nonetheless, it is exactly these competitive aspects of game-playing that put computer successes at chess in a different category from success in long division. Put differently, we are tempted to say that the machine intelligence involved in chess playing owes more to the "eye of the beholder" than to any actual intellectual capacity inherent in the programs themselves.

But the pinnacle of contemporary AI's aspirations is to build a machine that will pass the Turing Test, by holding a sustained teletype conversation in a manner indistinguishable from that of a human being. Ever since its formulation in 1950 by Alan Turing, the Turing Test has become something of a logical barrier shielding AI's unaccomplished promises.17 Of course, passing the Turing Test is a rather grand goal, and AI's current research agenda is filled with a number of smaller items. For instance, decision-making "expert systems" of various sorts are becoming ever more useful tools for lawyers and physicians who need to survey large amounts of data in order to come to a conclusion or diagnosis. Robot arms that assemble parts on production lines are much more sophisticated than their ancestors of only a few decades ago. But such positive progress in AI almost always takes place when the end product is adopted as some sort of "helper" or "assistant" or "prosthetic"--under the guidance and control of human beings--rather than as an autonomous robot standing on its own wheels (or feet, technology allowing). This is an outcome that was not anticipated early on, but AI should probably be credited for the recent boom in the field of human-computer interaction.

Finally, it is important to examine the point about the success of AI programs in "restricted domains" or "closed environments"--perhaps the restrictedness or closedness of the application domains more than the successes. The impossibility of generalizing from a set of distinct and internally consistent "micro-worlds" (which were responsible for AI's rapidly gained fame in the early 1970s) to the understanding or modeling of an unrestricted "world at large" has been precisely one of the endemic symptoms plaguing AI research. At present, it is essentially a received view that trying to circumvent the ungeneralizability problem by adding more "micro-world" constructions results in nothing but ill-tamed complexity, of a sort that ultimately runs up against an insurmountable wall.

the AI community looking ahead

How does the future look for AI? How does the AI community see the future of their work, and what do the critics think?

Patrick Winston depicts the brief history of AI in a linear fashion with the following figure, concluding on a positive note that "the correct attitude about Artificial Intelligence [must be] one of restrained exuberance:"18

Figure 1. Ages of Artificial Intelligence
(Winston's original version)

According to Winston, AI is proceeding along a straight path from the age of the Renaissance to the age of commercial partnerships and entrepreneurial successes, having left the Dark Ages behind for good. Whether, on this conception, AI is still aiming for its original dream--"the construction of mind"--is somewhat unclear. Winston himself doesn't pursue the issue, but there are others carrying the torch--from authors of popular books to academicians. For example, Robert Jastrow writes in Science Digest:

In five or six years--by 1988 or thereabouts--portable, quasi-human brains, made of silicon or gallium arsenide, will be commonplace. They will be an intelligent electronic race, working as partners with the human race. We will carry these small creatures around with us everywhere. Just pick them up, tuck them under your arm, and go off to attend your business. They will be Artoo-Deetoos without wheels: brilliant but nice personalities, never sarcastic, always giving you a straight answer--little electronic friends that can solve all your problems.19

Since Jastrow's projected date has already passed, it is easy enough to evaluate the accuracy of his prediction. But others are placing their bets further into the future. Hans Moravec, for example, director of the Mobile Robot Laboratory at Carnegie-Mellon University, was recently quoted in the following exchange in Discover:

discover: In the first sentence of your 1988 book, Mind Children, you wrote: "I believe that robots with human intelligence will be common within fifty years."

moravec: It's not--at least in my circles--all that controversial a statement anymore.20

Naturally, it is difficult to assess the maturity of such speculations, and we will not attempt to do so here. Nor do we mean to imply that predictions of accomplishment are the only game in town; intimations of doom can also be found. Perhaps what should really be questioned is the value of venturing into predictions of this sort.

dissenting voices

In Daedalus's 1988 special issue on AI, Hilary Putnam raised the following complaint:

The question I want to contemplate is this: Has artificial intelligence taught us anything of importance about the mind? I am inclined to think that the answer is no. I am also inclined to wonder, What is all the fuss about? ... Why a whole issue of Daedalus? Why don't we wait until AI achieves something and then have an issue?21

Putnam's underlying irritation may be only the result of a belief that AI has not so far accomplished anything worthy of special attention. He does not say anything about what he believes AI will or will not be able to accomplish in the future. Hubert Dreyfus, a long-time staunch critic of the AI program, is much bolder in his assessment:

It has turned out that, for the time being at least, the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed. Indeed, what John Haugeland has called Good Old-Fashioned AI (GOFAI) is a paradigm case of what philosophers of science call a degenerating research program.22

Another line of dissent comes from Terry Winograd and Fernando Flores. Their case is especially telling, because Winograd was responsible for one of the earliest success stories of AI, with his language-understanding blocks-world program, SHRDLU. They make the following prediction about computers' future ability to understand and use natural language:

In spite of a wide variety of ingenious techniques for making analysis and recognition more flexible, the scope of comprehension remains severely limited. There may be practical applications for computer processing of natural-language-like formalisms and for limited processing of natural language, but computers will remain incapable of using language in the way human beings do, both in interpretation and in the generation of commitment that is central to language.23

There are different reasons underlying the lack of faith that different critics of AI express. For instance, Dreyfus's theoretical pivot-point is computers' lack of embodiment, which he takes to be essential for "being in the world" and for possessing any human-like mental life. Although they agree with Dreyfus on embodiment, Winograd and Flores emphasize the social context that language use both creates and provides. But what about issues of a more pragmatic orientation? Is AI still considered by industry as living up to its original promises, or at least as proceeding along the straight path that, in Winston's diagram, originates in the 1950s and spans the 1990s?24

In a recent issue of the Communications of the ACM dedicated to AI in the industrial and commercial world, guest editor Toshinori Munakata, while expressing support for the utility of AI in practical applications, makes the following cautious remarks with regard to this question:

If we mean AI to be a realization of real human intelligence in the machine, its current state may be considered primitive. In this sense, the name "artificial intelligence" can be misleading. However, when AI is looked at as advanced computing, it can be seen as much more. In the past few years, the repertory of AI techniques has evolved and expanded, and applications have been made in everyday commercial and industrial domains. AI applications today span the realm of manufacturing, consumer products, finance, management and medicine.25

In the light of these remarks, it is probably more accurate to revise Winston's chart accordingly, and depict the history and the present situation of AI as follows:

Figure 2. Ages of Artificial Intelligence
(revised version)

According to this new schema, what Winston regarded as a renaissance (i.e., the opening up of AI to the industrial and commercial world with the advent of expert systems, etc.) is actually a reorientation of the research program towards areas where AI's developments acquire greater value. But what becomes of Turing's dream--AI's original goal of making machines as smart as humans? Is this left to dissipate slowly in time, while AI research ventures into, and perhaps blossoms in, commercial applications?26

an alternative project?

Constructions of the Mind contains essays which aim at offering a broader conception of AI, a larger theoretical ground on which the technology can be materialized, a more encompassing paradigm in which the original goal of AI can be pursued. At present, there is an ongoing research project, only in its infancy, that claims to have aspirations based on similar ideas. This is the Cog Project led by MIT roboticist Rodney Brooks and an interdisciplinary team of researchers. Brooks's team claims that their project has two goals: the engineering goal of building a robot, Cog, that resembles a human in form and function--i.e., an android--and the scientific goal of understanding human cognition. They claim to have integrated into their research efforts considerations from cognitive science, ethology, evolutionary theory, neuropsychology, and philosophy, and inherited the idea of embodiment as the primary prerequisite of their agenda.27

As such, an alternative course for AI's future can perhaps be depicted in a third figure (Figure 3). The point here is not so much whether the Cog project can deliver what it promises in some estimated amount of time. It is rather to point out the similarity between the general perspective that the Cog team is advocating and the perspective offered in the present volume. Even if the Cog project does not live up to its own aspirations, our observation that the direction AI should take is depicted more correctly in Figure 3, as opposed to Figure 1 or Figure 2, stands.

Figure 3. Ages of Artificial Intelligence
(alternative version)

"some of the different" in place of "more of the same"

Alan Turing concluded his classic paper "Computing Machinery and Intelligence," in which he described the Turing Test and proposed it as a test case for computer intelligence, with the following remark:

We can only see a short distance ahead, but we can see plenty there that needs to be done.28

We cannot but agree, though it is probably worth asking:

Should the "plenty that needs to be done" be more of the same, or some of the different?

It is as a first step towards "some of the different" that we present you with Constructions of the Mind.

the articles

Constructions of the Mind opens with an essay by Philip Agre, who begins by acknowledging a deep separation between AI research and traditional investigations of the mind and language as pursued in the humanities. Artificial intelligence, he claims, in the self-perception of its own adepts, is a technical field whose researchers "do" things--prove mathematical theorems, develop new formalisms, and build computer systems to implement them. Research in the social sciences, and especially in philosophy, on the other hand, is perceived by AI aficionados as engagement in nothing more than "meta-level bickering that never decides anything." Against this well-entrenched split--a split, he emphasizes, that has been perpetuated with great determination by the field's senior members--Agre proposes an alternative view that opens the door to, and actually calls for, an active collaboration between philosophy and AI. "Artificial intelligence is philosophy underneath," he claims, since the former's endeavor can be seen as an effort to work out and develop, through its characteristic technical means, the philosophical systems it inherits.

The close interconnection between the two disciplines is dramatically shown by the fact that AI research runs into difficulties and problems that derive from conceptual tensions implicit in the inherited philosophical systems. AI's formal methodology renders explicit the hidden difficulties and allows them to surface. Agre provides a convincing demonstration of his point through an historical analysis that takes us from Descartes's distinction between soul and body, through Allen Newell and Herbert Simon's project of mechanization of the soul, and on to more recent AI approaches. He shows that one of the major difficulties encountered in AI--how to keep the mechanized soul in touch with an ever-changing world through an effective search process in a space of possible actions--can ultimately be traced back to the inner tension of the inherited philosophical model: namely, the Cartesian causal separation between soul and body.

Philosophy and other traditional disciplines, in sum, provide overall theoretical frameworks. AI, in turn, provides a powerful means of forcing into the open their internal structures and internal tensions. What is needed to make substantial advances in both fields, Agre concludes, is a constructive symbiosis of AI research with humanistic analyses of ideas.

The possibility of a collaboration between work in the humanities and in artificial intelligence is stressed by Serge Sharoff in the context of a confrontation between the phenomenological tradition of philosophy and classical work in AI. The article opens with the thesis that AI investigations can be viewed not as attempts to create thinking machines, but as computer realizations of some sort of philosophy. This interpretation of AI and cognitive science as "strict" philosophy resembles Husserl's project of phenomenology, the philosophical effort that aims at providing a complete description of the mental structures of consciousness that orient us in our dealing with the world.

This suggests that phenomenological notions such as intentionality, horizon, and internal time consciousness can be interpreted from an artificial intelligence viewpoint, and can be effectively exploited by AI programs. For example, the representation of the structure of internal time consciousness would allow an AI program to use its own history, and therefore to reflect on its own actions or representations. This possibility, it goes without saying, would constitute a substantial enlargement of the current scope of AI programs.
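A deliberately crude sketch may help fix the minimal, mechanical sense of "using its own history" at issue here (ours, in Python; all names are hypothetical, and nothing in it captures Husserl's actual analysis of retention and protention). The point is only that an agent which records every episode of its own activity can later take that record, rather than the outside world, as the object of its processing:

    from dataclasses import dataclass, field

    @dataclass
    class ReflectiveAgent:
        history: list = field(default_factory=list)  # episodic record of past acts

        def act(self, action, payload=None):
            # Every act is also inscribed in the agent's own past.
            self.history.append((len(self.history), action, payload))

        def reflect(self, predicate):
            # Reflection: past episodes become objects of present processing.
            return [episode for episode in self.history if predicate(episode)]

Even this much would enlarge the scope of a program that otherwise lives in a perpetual present, which is the kind of enlargement the article has in view.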

The strong continuity between contemporary research in and about artificial intelligence and other forms of inquiry into the mind is analyzed by historian Bruce Mazlish in a fascinating journey through the history of automata. Mazlish takes us from the mechanical devices built in ancient China, to the mechanical dolls so dear to the seventeenth and eighteenth centuries, to some more recent literary creatures: Andersen's mechanical nightingale; Tik-Tok, the roundish robot inhabiting one of the Oz novels; Frankenstein's monster; and Čapek's and Isaac Asimov's robots.

Mazlish shows that the debate about AI "creatures" is, most of the time, simply a rehearsing of the age-old debate about mechanical creatures--a debate permeated throughout by fears attached to the unfathomable powers of the inhuman, and its consequent threat to humankind. Mazlish also makes it clear that there is a recurring ambivalence in all those discussions: automata are depicted as at the same time "deficient" and "overpowering," as at once "less" and "more" than human. They are less intelligent, they lack emotions, they do not possess intuition, etc. At the same time, they are invariably depicted as physically stronger, as independent of physiological needs, and as immune from emotional breakdown. The ambivalence intrinsic to this debate is rooted in the automata's uncanny--in the Freudian sense of the term--similarity to humans that fails to hide their essential difference. The interplay of similarity and difference keeps posing the same question to us: what is a human? Such a radical question, in its ultimate impossibility, is bound--Mazlish affirms after Freud--to arouse "the same range of ambivalent reactions: the sense of a perfection and infallibility to which we aspire--the angel in us--and the sense of the destructive and degrading in us--the ape in us."

Harry Collins, a sociologist of science, offers a fresh perspective on the debate between humans and non-human creatures introduced by Mazlish. Some philosophers have long disputed the computer's ability to behave intelligently, in any legitimate sense of the term, on the basis of a theoretical analysis of humans' relationship to the surrounding world. For instance, we have seen that Hubert Dreyfus, perhaps the best-known champion of this position, has argued that AI's effort to reduce the mind-world relationship to a set of rules operating on a symbolic representation of the machines' environment is essentially doomed to fail. Collins rephrases the problem in different terms. First, he asks: what is the structure of the knowledge of the outside world that guides us in our daily meddling and tinkering with it? Second, are machines capable of replicating it?

Collins's analysis shows that there are four types of knowledge: symbol-type, embodied, embrained, and encultured. To understand the relationship between humans and machines, he claims, one must understand the relationship between symbol-type and encultured knowledge. To this end, he develops a new theory that cuts the world of action into regular and behavior-specific actions. While the former are typically human, because they are unavoidably embodied and context-dependent, the latter are quite well suited for computers.

Collins draws two important consequences from his analysis. First, he is able to offer a simpler, but more effective, version of the classical test proposed by Turing to ascertain the level of intelligence of a computer artifact. Second, and perhaps more importantly, he underlines the sociological and political implication of his analysis: a careful distinction between different kinds of knowledge and different kinds of actions "helps us see the ways in which machines are better than humans: it shows that many of the things that humans do, they do because they are unable to function in the machine-like way that they would prefer. It also shows that there are many activities where the question of machine replacement simply does not arise, or arises as, at best, an asymptotic approximation to human abilities."

Geneviève Teil and Bruno Latour pursue an approach quite consonant with Philip Agre's suggested role for the social sciences: to provide theoretical frameworks within which research in AI and cognitive science can be pursued in finer detail. Indeed, they take Agre's suggestion one step further, and provide as well the technical analysis and actual artifact--i.e., the computer program. They start with a very practical goal: to provide a tool for qualitative workers (historians, social scientists, etc.) "that has the same degree of finesse as traditional qualitative studies but also has the same mobility, the same capacities of aggregation and synthesis as the quantitative methods employed by other social sciences" (like econometrics, demography, etc.).

Strange as this may prima facie seem, Teil and Latour show that such a tool does not depend on computers possessing high cognitive functions, nor does it require them to understand ordinary language in order to interpret the documents on which social scientists normally work. On the contrary, they try to strip the computer of any anthropomorphic projections by taking it to be just a network of associations between different registers. This characterization is more than sufficient for the purpose at hand, because associative networks are just what is needed in order to analyze large bodies of data. More importantly, the construction and manipulation of associative networks that commonly available computers are perfectly capable of performing are sufficient, they claim, to provide "intelligent" interpretations of the given data. The Hume machine--as their creation is called--shows, therefore, that there is an alternate route between the epistemological dream of early AI--total reduction of human knowledge to sets of formal rules--and the dire indictments uttered by philosophers critical of AI--the impossibility in principle of any non-elementary machine intelligence. Computers might well be, in principle, unable to imitate humans because of their essential lack of body and worries, but they do have "their own way of being in the world. We have to work from them," Teil and Latour claim, "instead of vesting them with human properties so as to immediately deny that they have any."
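A minimal sketch can convey what a network of associations between registers might look like in practice (ours, in Python, and purely illustrative; it is not a reconstruction of the actual Hume machine): entities that co-occur in the same document are linked, links strengthen with repetition, and aggregation over a large corpus then requires no "understanding" of the texts at all.

    from collections import defaultdict
    from itertools import combinations

    def build_network(documents):
        # documents: an iterable of sets of entities (words, actors,
        # institutions...). Returns {(a, b): co-occurrence weight}.
        weights = defaultdict(int)
        for entities in documents:
            for a, b in combinations(sorted(entities), 2):
                weights[(a, b)] += 1  # repetition strengthens the association
        return weights

    def neighbors(weights, entity, top=5):
        # The strongest associates of an entity: a synthesis the qualitative
        # worker can inspect, produced without any interpretation of the texts.
        linked = [(b if a == entity else a, w)
                  for (a, b), w in weights.items()
                  if entity in (a, b)]
        return sorted(linked, key=lambda pair: -pair[1])[:top]

Everything such a program "knows" is a matter of weighted association--which is precisely the sort of competence Teil and Latour claim ordinary computers already possess on their own terms.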

Stephen Wilson, an artist, explores a path complementary to the research pursued by Teil and Latour. Is there a role for art in the scientific agenda of artificial intelligence, he asks, as AI itself understands it?

Artificial intelligence is an investigation into the nature of being human, the nature of intelligence, and the limits of machines. However theoretical such a pursuit may seem, Wilson stresses that the "implications of scientific and technological research are so far-reaching in their effect on both the practical and philosophical planes, that it is an error to conceive of them as narrow technical enterprises. The full flowering of research ... needs the benefit of the perspectives from many disciplines in the humanities and the arts, not just in commentary, but in actual research."

Wilson relates several of his experiences in this field, all of which represent efforts to question, criticize, and enlarge the perspective offered by traditional AI research, by applying its techniques to issues of human interactions and social exchange. AI set out to investigate the essence of human nature by actually replicating it--by building artificial creatures. How could it ignore the work of those artists who, for centuries, have been trying to investigate the human in all its nuances, ramifications, shortcomings and accomplishments?

We cannot but share Wilson's feelings when he concludes: "If we are going to have artificially intelligent programs and robots, I would have sculptors and visual artists shaping their appearance, musicians composing their voices, choreographers forming their motion, poets crafting their language and novelists and dramatists creating their character and interactions."

The relationship between artificial intelligence and art lies at the core of Margaret Boden's article, which focuses on the notion of creativity. Is it possible, Boden asks, to provide a scientific theory of that most elusive phenomenon: the musician's intuition, the scientist's idea, the painter's stroke of genius? Is it possible to explain the power of the genius to revolutionize current cultural conventions by establishing new conceptual spaces for the rest of humanity? A crucial step toward the goal is made by acknowledging that really "new" and "creative" ideas are neither new combinations of existing thoughts nor just "totally unpredictable" intuitions that suddenly burst onto the cultural scene. An idea is genuinely creative only insofar as it could not have happened within the already existing and well-established style of thinking of the particular domain in question. The truly creative act, Boden notes, transforms the conceptual space in which the thinker operates. Arnold Schönberg, for example, transformed the well-established space of Western music by dropping the concept of tonality in favor of the more general concept of series. Real creativity, says Boden, can be understood only against the background of the notion of constraint: an act is deeply creative when it transforms one or more of the constraints that structure the existing style of thinking.

How can all this happen? Boden claims that artificial intelligence research can help answer this question by providing a formal description of a given conceptual space and showing how specific programs can manipulate it. AI research, in other words, can provide a theory of creativity by constructing programs that produce truly creative acts. Jazz-improvising programs, for example, have provided interesting evidence for a theory of musical improvisation. Although particular creative ideas will never be predicted, Boden concludes, the application of AI methodology to typically artistic phenomena will show that genuine creativity is not beyond scientific understanding.

AARON, a drawing and painting program mentioned by Boden as an example of a creative endeavor, is the topic of the article written by Harold Cohen, its creator. The author, himself an artist, relates his struggle to provide the program with a detailed description of the conceptual space of a painter and his effort to endow his creature with the tools to manipulate such a space. One of the most thought-provoking aspects of the article is Cohen's description of his efforts to provide the program with an understanding of color as used by a painter. Cohen makes it clear that his work as an artist and teacher has always been guided by the intuition that the most important feature regulating color organization on a canvas is brightness, not hue. Contrast, in other words, matters more to a painting than chromatism.

Yet this intuition, though fully active at the operational level, had for a long time failed to guide his work as a programmer. Once AARON was allowed to share it, Cohen reports, it almost immediately turned itself into a "modestly able colorist." The further work he performed in order to polish AARON's color manipulation skills allowed Cohen to refine his own understanding of color. He devised the notion of a color chord, for example, to choose colors in various spatial relationships within the entire color space. Soon, the program was able to help the artist make his own color decisions, though the master often refused to follow the suggestions offered by the student.

What is the relation between AARON's replication of a skill normally attributed only to humans--painting--and intelligence in general? One thing is clear, Cohen emphasizes: AARON's abilities do not constitute human intelligence. But this very fact makes AARON's interactions with human beings even more interesting, by challenging us to rethink our understanding of intelligence, of nature, and, ultimately, of humanity.

Douglas Hofstadter, author of Gödel, Escher, Bach and several other books, argues that a transformation of AI's conceptual space along the lines described by Margaret Boden is coming due. Because of its historical roots in the logical and mathematical tradition, Hofstadter says, AI research has traditionally interpreted thinking as a manipulation of propositions, and has more or less systematically disregarded, as essentially non-intelligent, the complex activities involved in the recognition of a sentence's constituent elements--words, syllables, letters. Taking his cue from the artistic activity of font designers, Hofstadter claims that the opposite is true: the quest for truth exemplified by mathematical theorem-proving represents just a small, not quite characteristic subset of human cognitive abilities.

Hofstadter illustrates this point in detail by reporting on the work of a program, Letter Spirit, whose task is to produce a graphically consistent alphabet given a letter of a certain style. Letter Spirit is another example of a program, like AARON, that tries to shed some light on the nature of creativity by replicating some artistic capability--in this case artistic font design. As in the former case, a discussion of Letter Spirit's feats brings immediately to the fore, as Hofstadter makes abundantly clear, the point that the true nature of human intelligence is quite at odds with the "received view" in contemporary AI.

Tom Burke, a philosopher, argues for a different kind of transformation within the conceptual space of artificial intelligence. Social situations, he claims, not just perceptual experience, are essential to the evolutionary emergence of human mentality. Burke outlines a view of the mind in which thinking is pictured as a type of agent/world interaction, rather than as a type of computation taking place solely inside individual brains. Thinking is fundamentally an ecological process, not just a neural process.

In Burke's view, an individual's development of an objective sense of self is necessary to the development of reflective capabilities, and an objective sense of self is engendered only by participation in some kind of stable social community. It follows that the artificial intelligence enterprise cannot afford to focus solely on designing software for an artificial agent's head. Without some kind of socialization, an agent will have no way to classify and hence objectify itself to itself, and therefore will not be able to think. Socialization, Burke concludes, must be worked into the process of building a thinking machine.

A different, but perhaps similar, kind of paradigm shift within the traditions of cognitive science and artificial intelligence is suggested by another philosopher, James Fetzer. He examines in detail two influential traditions in the philosophy of mind: Cartesian dualism and behaviorism. The former approach tends to stress the peculiarities of the mind's activity, and the necessity of some kind of inner observation to examine its workings. The latter, on the contrary, works toward an elimination of the concept of mind, by stressing the scientific study of behavior, and relies upon intersubjective observation and experimentation. As a consequence, dualism tends to be strongly anti-reductionist in order to preserve mind's singularity, whereas behaviorism strives to eliminate mental states or, at best, to reduce them to epiphenomena supervening upon the brain's underlying physiological events.

Both traditions have dominated discussions in the philosophy of mind and, as a consequence, AI research, as Fetzer demonstrates through an extensive analysis of the Turing Test and a slightly different variation proposed by Stevan Harnad. Both are attempts, though coming from the two different traditions mentioned above, to ascertain whether or not a machine is intelligent. Yet, both views are shown to be insufficient to solve the problem of the nature of the mind and therefore to provide adequate guidance for artificial intelligence. Fetzer introduces an alternative, semiotic view of cognition that preserves the non-reductionist features of dualism, while going beyond it by accounting, in a testable manner, for the nature of consciousness and cognition.

Many of the authors described so far stress the potential benefits that artificial intelligence, qua scientific discipline endowed with a mathematically-oriented methodology, can bring to traditional philosophical problems. This approach calls for a division of theoretical labor: the humanities, especially philosophy, lay the foundational framework that artificial intelligence tests and develops by formulating mathematically precise descriptions. Two philosophers from the European tradition, Maurizio Matteuzzi and Alain-Marc Rieu, reject this view by stressing the heterogeneity of philosophy and artificial intelligence: no harmonious whole made up of high-flown theoresis and nitty-gritty scientific machinations is about to be born. But their arguments follow totally different, and almost symmetrically opposed, paths: while Matteuzzi denies AI's claim of being a science, Rieu thinks that present-day philosophy is no longer in the position to provide any grand unifying view of the mind for other disciplines to work out.

Artificial intelligence, Matteuzzi claims, cannot be considered a science because it lacks science's basic features. While every science builds its own universe by abstraction, AI does not start from such ontological assumptions, the main reason being that intelligence has nothing to do with things, but only with human beings and processes: abstracting intelligence would result in a complete loss of the ontic support of any possible universe. His conclusion is that AI is a general scientific methodology, rather than a science, dealing with all possible theoretic universes. This explains why several authors, such as Searle and Dreyfus, identify the lack of "background" and "common sense" as the fundamental problem of AI.

Alain-Marc Rieu, on the contrary, stresses that the "mind" investigated by AI, cognitive science, and neurophysiology is no longer the supreme object of philosophical speculation. Rather, it is a quasi-object generated by the disciplines themselves through a selective filtering of the phenomena they investigate according to their specific methods. The inseparability of this quasi-object from the sciences' methods entails that the scientific "mind" has no meaning outside of those scientific disciplines that have constructed it.

Philosophy, therefore, cannot claim any separate access to such a (quasi-)object, and certainly cannot provide any grand theoretical framework. This "appropriation" of a traditional philosophical view by the sciences of the mind entails, according to Rieu, that philosophy must reinvent itself, and renounce, once and for all, the impossible dream of a unified, grand narrative explaining the essence of humanity.

Michael Johnson, a literary critic, exploits the distinction between grand unified narratives of knowledge and "little narratives" to offer a prophecy: the expert system of the future will think like a woman. Johnson starts with an analysis of one of the most intractable problems plaguing AI research: the so-called "frame problem"--i.e., the epistemological problem of how to update a world model, like those used by AI programs, in order to cope with a changing world. Johnson suggests that a semiotic approach, so far disregarded by the AI literature, might shed some new light, insofar as it would allow AI researchers to exploit the wealth of work done in semiotics on the ways in which living cognitive systems keep their sign systems more or less synchronized with the world in which they live. The problem of keeping a world model up to date becomes the problem of how to shift the referents of one's sign system in order to keep it meaningful.

Furthermore, semiotic research has shown that men and women tend to deal with this problem differently. Women seem to be better equipped, Johnson argues, either physiologically or culturally, to detect subtle changes in their environment and to adapt to them, being more tolerant of differing or contradictory points of view, and of the inconsistencies of their interplay. Gynosemiosis, Johnson concludes, seems to provide better tools for adapting to a world deprived of a unified grand theme explaining its essence. "If humankind is to survive and endure more meaningfully, then it must, as it grows older, learn--and learn by--even more powerful strategies of mending. ... In their joint dealing with the unknown, humankind and its machines must become more feminized...."

So far, the debate has been limited to the possibilities of an intellectual exchange between artificial intelligence and various disciplines from the humanities. Heinz von Foerster, in an interview with the editors, and in his accompanying essay, examines an alternative approach to the scientific exploration of human cognitive functions. He speaks about cybernetics, a scientific discipline created by Norbert Wiener and extended by von Foerster himself, which inaugurated a new scientific approach to the study of the mind. Unfortunately, cybernetics fell into disgrace in the wake of AI's meteoric ascendance to intellectual stardom. Von Foerster explains the intellectual, institutional, and political reasons motivating such an historical evolution.

Von Foerster is quick to point out some basic differences between AI's and cybernetics's ways of approaching the study of the mind. For example, many of the authors represented in this collection stress the strong continuities between artificial intelligence's research programme and the tradition of mathematical logic and meta-mathematics. Von Foerster, on the contrary, underlines that cybernetics, particularly the second-order cybernetics he championed, was trying to bring into view systems with some kind of closure: systems that act upon themselves, something which, from a logical point of view, typically leads to paradoxes, since one immediately encounters the phenomenon of self-reference. Cybernetics, for him, is the theory that can break radically with the logical tradition originating with Bertrand Russell, by taking a proper approach to the notions of paradox and self-reference. In his contribution, he provides a vivid example of the much broader scope that cybernetics can have, by applying its methods to a reflection on ethics. In a time when the strong formalist and logical underpinning of AI seems so much in need of relaxation and corroboration from other disciplines, as we have seen many authors suggest, a critical re-examination of cybernetics's forgotten efforts may provide a welcome--and perhaps crucial--addition to AI's future research agenda.






Notes


1 Avron Barr and Edward Feigenbaum, The Handbook of Artificial Intelligence, Vol. 1 (Los Altos, CA: Kaufmann, 1981) 3.

2 For a good conceptual introduction to AI, see John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press, 1985), and Jack Copeland, Artificial Intelligence: A Philosophical Introduction (Oxford: Blackwell, 1994); for a classic textbook, Patrick Winston, Artificial Intelligence (Reading, MA: Addison, 1984); for a retrospective compilation of benchmark articles in AI research, Artificial Intelligence in Perspective, ed. Daniel G. Bobrow, Special Volume of Artificial Intelligence, 59 (1993). Influential critiques of the AI paradigm are given in Hubert Dreyfus, What Computers Can't Do (New York: Harper, 1972); Hubert Dreyfus, What Computers Still Can't Do (Cambridge, MA: MIT Press, 1992); John Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences, 3 (1980) 417-424; John Searle, Minds, Brains, and Science (Cambridge, MA: Harvard UP, 1984); John Searle, The Rediscovery of the Mind (Cambridge, MA: MIT Press, 1992).

3 Allen Newell, "Intellectual Issues in the History of Artificial Intelligence," The Study of Information: Interdisciplinary Messages, ed. Fritz Machlup and Una Mansfield (New York: Wiley, 1983) 4.

4 In a letter to Regius, Descartes writes: "[Y]ou seem to make a greater difference between living and lifeless things than there is between a clock or other automaton on the one hand, and a key or sword or non-self-moving appliance on the other. I do not agree. Since 'self-moving' is a category with respect to all machines that move of their own accord, which excludes others that are not self-moving, so 'life' may be taken as a category which includes the forms of all living things." See René Descartes, The Philosophical Writings of René Descartes, trans. J. Cottingham, R. Stoothoff, and D. Murdoch, Vol. 3 (Cambridge: Cambridge UP, 1991) 214; Letter to Regius, June 1642, AT III, 566. And in Passions of the Soul, he states: "let us note that death never occurs through the absence of the soul, but only because one of the principal parts of the body decays. And let us recognize that the difference between the body of a living man and that of a dead man is just like the difference between, on the one hand, a watch or other automaton (that is, a self-moving machine) when it is wound up and contains in itself the corporeal principle of the movements for which it is designed, together with everything else required for its operation; and, on the other hand, the same watch or machine when it is broken and the principle of its movement ceases to be active." See René Descartes, The Philosophical Writings of René Descartes, trans. J. Cottingham, R. Stoothoff, and D. Murdoch, Vol. 1 (Cambridge: Cambridge UP, 1992) 329; The Passions of the Soul, Part I, §6, AT XI, 331.

5 For a recent elaboration of the mind as a "virtual von Neumann computer based on a parallel mechanism," see Daniel Dennett, Consciousness Explained (Boston: Little, 1991).

6 Herbert Simon and Allen Newell, "Heuristic Problem Solving: The Next Advance in Operations Research," Operations Research, 6 (January-February, 1958) 6, quoted in Hubert Dreyfus, What Computers Still Can't Do, 81-82.

7 Patrick Winston, "Artificial Intelligence: A Perspective," AI in the 1980s and Beyond: an MIT Survey, ed. W. Eric L. Grimson and Ramesh S. Patil (Cambridge, MA: MIT Press, 1987) 2-3.

8 Obviously, there would be contenders to this view. Dreyfus, for instance, thinks that the main drive behind the big promises was to secure increasing funding from DARPA, the Defense Department's Advanced Research Projects Agency--which, as a matter of fact, had played a key role in the development of AI by allocating it substantial amounts of research funding. A related statement can be found in Seymour Papert's article, "One AI or Many?", where he says: "[The enterprise of AI] was nurtured by the most mundane material circumstances of funding. By 1969 AI was not operating in an ivory-tower vacuum. Money was at stake." See Seymour Papert, "One AI or Many?", The Artificial Intelligence Debate: False Starts, Real Foundations, ed. Stephen R. Graubard (Cambridge, MA: MIT Press, 1988) 7.

9 Patrick Winston and Michael Brady, Series Foreword, AI in the 1980s and Beyond, i.

10 One of the "discoveries" of AI, perhaps among its most important, is a profound recognition of just how hard it is, and how much is required, in order for an account to meet a minimum standard of implementability. By "implementation" of a theory we mean, roughly, a translation of the theory into a computer program. Giving precise definitions of any of the three terms "implementation," "program," and "translation," is a notoriously difficult issue, and we will not attempt it here.

11 CSLI TINLunch, Stanford University, Fall, 1989.

12 Unpublished manuscript, downloaded (without permission) from John McCarthy's World Wide Web page: http://www-formal.stanford.edu/jmc/

13 Sigart Newsletter, special issue on Knowledge Representation, guest ed. Ronald J. Brachman and Brian C. Smith, 70 (February, 1980) 109.

14 Robert Cummins and John Pollock, "Introduction," Philosophy and AI: Essays at the Interface, ed. Robert Cummins and John Pollock (Cambridge, MA: MIT Press, 1991) 2.

15 As it turns out, the world chess champion, Garry Kasparov, very recently lost a game for the first time to a chess program called Fritz II*, running on a 90 MHz Pentium-based desktop personal computer, in a tournament named the "Intel World Chess Express Challenge," held in Munich, Germany, in May 1994. This was not an ordinary game, however; the tournament had the restriction that all players had to complete all their moves within five minutes. Under such time pressure, a computer's ability to calculate new moves very quickly provides a substantial advantage over a human player. Kasparov was also able to beat Fritz II* in the final round of the same tournament.

As far as ordinary (non-express) games are concerned, a different chess program, *Socrates, beat an International Grandmaster, Gennady Sagalchik, in March 1995, in a game of 56 moves that lasted over seven hours. *Socrates ran on an Intel Paragon supercomputer, with a search capacity exceeding two million positions per second.

(These facts are compiled from Intel's homepage on the WWW, accessible at http://www.intel.com/)

16 Allen Newell and Herbert Simon, Human Problem Solving (Englewood Cliffs, NJ: Prentice, 1972) 664.

17 The newly founded Loebner Prize (1991) promises an award of $100,000 to the first person whose program can pass an unrestricted Turing Test in the presence of ten judges. So far, there has been a "restricted" Turing Test competition, in which one program, PC Therapist (created by Joseph Weintraub), apparently managed to fool five of the ten judges into believing that a human being was sitting at the other end of the terminal. This, however, cannot be taken as evidence that success in an unrestricted test is near. The programs involved in these restricted competitions are in no way building blocks that could one day be assembled into an ultimate Turing Test winner. For example, the category in which PC Therapist was competing was "whimsical conversation," and a brief look at a sample conversation between it and a judge illustrates our point:

judge: How do you define whimsical?

terminal: A whim is a sudden, capricious and often peculiar change in behavior--a bit of sunlight in this grim, gray world!

judge: Pretty pessimistic view of the world. Have you ever gotten yourself in trouble because of your whims?

terminal: My country is the earth, and I am a citizen of the world, wouldn't you agree? You aren't really talking about me, are you?

...

judge: Can't you just answer me a direct question with a direct reply?

terminal: A sudden, bold and unexpected question will frequently catch me by surprise! You like to think I once just answer you, don't you? Sometimes I wish I was a systems analyst.

No program in any other category has had similar success. We cannot resist asking: what is it about whimsy, and in whose eye does it lie, such that many programs seem to be good at it?

(For an interview with Hugh Loebner, the founder of the Loebner Prize, see WIRED, April 1995. The conversation with PC Therapist was taken (without permission) from Mark Rosenfelder (markrose@spss.com), online posting, newsgroup comp.ai.philosophy, Usenet, February 10, 1992.)

18 Patrick Winston, "Artificial Intelligence: A Perspective," 10.

19 Robert Jastrow, "The Thinking Computer," Science Digest, 90.6 (1982) 107, quoted in Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (Reading, MA: Addison, 1986) 3-4.

20 Hans Moravec, interview, Discover, November 1992.

21 Hilary Putnam, "Much Ado About Not Very Much," The Artificial Intelligence Debate, 269, 271.

22 Dreyfus, What Computers Still Can't Do, ix.

23 Terry Winograd and Fernando Flores, 11-12.

24 There are also dissenting voices of a different flavor with regard to the advancement of AI: those who think that AI may be possible, for example, but worry about what that possibility will entail. Tom Athanasiou gives voice to this kind of concern: "As technological development makes more things possible, the political importance of choice among possibilities increases. Applied AI certainly makes more things possible, but they are not, by any means, all desirable. And, by virtue of its dramatic nature, it underscores the need for a coherent radical response to the information-technology revolution." See Tom Athanasiou, "High-Tech Politics: The Case of Artificial Intelligence," Socialist Review, 92 (March-April 1987) 34.

25 Toshinori Munakata, "Introduction," Communications of the ACM, 37.3 (March 1994) 23.

26 For a review of an impressive variety of probabilistic AI applications, such as those used in software debugging, information retrieval, troubleshooting, and the like, see Communications of the ACM, 38.3 (March 1995).

27 For a survey of the Cog project and the philosophical issues involved therein, see Daniel Dennett, "The Practical Requirements for Making a Conscious Robot," forthcoming in Philosophical Transactions of the Royal Society.

28 Alan Turing, "Computing Machinery and Intelligence," The Philosophy of Artificial Intelligence, ed. Margaret Boden (Oxford: Oxford UP, 1992) 65.