The Augmentation of Human Intellect

as an Alternative Research Program to Artificial Intelligence:

Implications for the Definition of the Human-Machine Boundary

by Thierry Bardini

Department of Communication

Université de Montréal

CP. 6128 Succursale Centre-ville

Montréal QC H3C 3J7 CANADA

Tel: (514) 343-5799

bardinit@com.umontreal.ca

April 2000

The Augmentation of Human Intellect

as an Alternative Research Program to Artificial Intelligence:

Implications for the Definition of the Human-Machine Boundary

Journal entry 37. Thoughts of the Brain are experienced by us as arrangements and rearrangements--change--in a physical universe; but in fact it is really information and information processing that we substantialize. We do not merely see its thoughts as objects, but rather as movement, or, more precisely, the placement of objects: how they become linked to one another. But we cannot read the patterns of arrangement; we cannot extract the information in it--i.e. it as information, which is what it is. The linking and relinking of objects by the Brain is actually a language, but not a language like ours (since it is addressing itself and not someone or something outside itself).

Philip K. Dick, Valis.

The computing press has been predicting the next "computing revolution" for a while now, and many words have slowly invaded our technological culture as a result: multimedia, hypermedia, virtual reality, etc. The explosive success of the World Wide Web in the early 1990s, with its graphical interfaces for browsing datascapes of text, sound, and still or animated pictures, promises soon to realize earlier fictional visions of a "cyberspace," that "consensual hallucination" (Gibson, 1984: 5) offering a new realm for socialization. But such technological breakthroughs also give ground to fin-de-siècle fears and Luddite condemnations, and hardly a day passes without a redundant but hyped media statement urging the public to get ready for the cultural shockwave of multimedia computerized communication.

For the historian of technology, all this verbal explosion might evoke Francis Bacon's version of Solomon's sentence, "that all novelty is but oblivion." In this paper, I describe a fundamental scenario in the history of the ideas and artifacts that led to this supposed present "revolution." Without going as far as proposing an evolutionary narrative that would emphasize continuity over change, or even reading the present situation as the only "logical" outcome of an irrepressible progress, I merely want to uncover part of what these revolutionary discourses regularly consign to oblivion, to put the present situation in historical perspective, and therefore to provide a better understanding of the innovative practices ahead of us. To paraphrase Jean Cocteau, the tenses of the innovative action are like those of the verb to love: its past is never simple, its present is merely indicative, and its future always conditional.

In this perspective, I focus on the computer science community of the U.S. during the 1960s, where many of the present innovations were first conceived and, for most of them, implemented. I especially emphasize two alternative conceptions of the redistribution of intelligence in the human-computer association, leading on one hand to the Artificial Intelligence (AI) research program and, on the other hand, to the Augmentation of Human Intellect (AHI) research program. From their early roots in the Cybernetics project, these two alternative foci blossomed in the community funded by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA) and provide a central polarity for making sense of present and future developments.

These two trends in the history of computing have been unequally covered in previous historiography. The AI project has been, and still is, relatively over-emphasized, while the AHI project has never been formally acknowledged as an alternative to AI. Only parts of the AHI project's achievements and questions have been recognized, and they are usually reduced to the most trivial and institutionalized historical narratives. Few historical accounts indeed give us an overall picture able to present the links between crucial developments like expert systems and hypermedia environments and, even more importantly, to put them in the perspective of a history of the ideas that they actualized. It is precisely such an overall picture that I attempt to sketch in the present paper.

Very few people outside the computer industry know Douglas Engelbart, the leading figure of the AHI project, and among those people many still merely credit him with some technological innovations like the mouse, the outline processor, the electronic mail system, or, even sometimes, the window interface. But Douglas Engelbart never really gets credit for what he usually claims: an integrative and comprehensive framework tying together the technological and social aspects of the development and use of personal computing technology in human organizations. In this paper, I demonstrate how Douglas Engelbart participated in setting the agenda for current social uses of computing technology at two different levels: (1) the integration of natural language in human-computer interaction, and (2) the introduction of the body of the user in the human-computer interaction process.

In the 1950s, computing technology was still in its early age of massive machines devoted to number crunching. Input and output technology was rudimentary, only one user at a time could access the computing power, and very few users indeed did so. It was the time of the mainframe computer priesthood, when the power of the machine was thought to serve very large corporations or military uses. In the late 1950s and early 1960s, thanks to a few visionary pioneers of computer science, time-sharing of computing power by various users simultaneously became the order of the day. Among these pioneers, one of the most influential was certainly J.C.R. Licklider, a former professor of psychology at Harvard University who, in 1962, became the first director of ARPA's Information Processing Techniques Office (IPTO).

Licklider and his successors as directors of IPTO (Ivan Sutherland, Robert Taylor, and Larry Roberts) were members of the computer science community who all had the opportunity to direct and influence the overall shape of computer science during their tenure at the head of the most important source of funding for this activity in the 1960s. Altogether, they allocated over 200 million U.S. dollars over a period of ten years to a dozen institutions that are still today at the leading edge of computing research. They created a network from which stemmed personal distributed computing technology, both as a dedicated individual tool (the personal workstation) and as an on-line means of communication (the ARPAnet, ancestor of the Internet).

In this paper, I show how the two moves I credit Engelbart with were indeed crucial issues in the ARPA community, and how, under the successive IPTO directors' guidance, they were appropriated by various groups. More precisely, I attempt to show how the first move, the integration of natural language, became the central concern of the AI clique, and perhaps at once its utmost achievement and its worst failure, precisely because AI researchers failed to take into account the second and most decisive contribution of the AHI clique: the introduction of the body of the user into the human-computer interaction process.

Licklider and the ARPA-IPTO Community

J. C. R. Licklider played a major part in the history of personal computing technology. His 1960 paper laid the groundwork for a program of action on "man-computer symbiosis." Licklider was a psychologist at MIT, where he was appointed in the department of Electrical Engineering and in the Lincoln Laboratory. His conception was that computers could become a means for "interactivity": humans could communicate with each other by using computers as a channel of communication. Prior to this time, mainframe computers were used for the batch processing of numbers. Such "number crunching" was not interactive, in that a user would hand over a data set plus the instructions for analyzing these data, and then typically wait for some hours before getting the results (which often led to a further request, and so on).
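The contrast with batch processing can be made concrete with a toy sketch of time-slicing, the mechanism behind time-sharing: the processor cycles rapidly among its users, granting each a short quantum of machine time, so that every user experiences an apparently continuous interactive session. The sketch below (in Python, for convenience) illustrates the principle only; it does not model any actual 1960s system.

from collections import deque

def round_robin(jobs, quantum=2):
    # jobs maps each user to units of work; every user repeatedly
    # receives a short quantum, so all appear to be served at once.
    queue = deque(jobs.items())
    timeline = []
    while queue:
        user, remaining = queue.popleft()
        timeline.append(user)
        if remaining > quantum:
            queue.append((user, remaining - quantum))
    return timeline

print(round_robin({"ada": 5, "bob": 3, "eve": 4}))
# -> ['ada', 'bob', 'eve', 'ada', 'bob', 'eve', 'ada']

Because the turns are interleaved far faster than human reaction time, the machine seems individually responsive to each of its simultaneous users.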

Global tensions with the Soviet Union, dramatized by the launch of Sputnik in 1957, led the Federal Government to stress technological competition with Russia: the Advanced Research Projects Agency (ARPA) was established in the U.S. Department of Defense in 1958. The U.S. Congress provided ARPA with a large budget for R&D, and ARPA's Information Processing Techniques Office (IPTO) was created in 1962 to fund R&D efforts in computing. Licklider was appointed in October of 1962 to direct IPTO, where he set out to implement his vision of interactive computing.

The organization and the management of IPTO were central to the shaping of early personal computing. ARPA soon became the major source of funding for academic research in computer science. Through IPTO, Licklider established a proto-network of computer scientists on the basis of a philosophy of conflict and cooperation. From the beginning of his tenure, Licklider formed an advisory committee to IPTO composed of representatives of the main funding institutions involved in computing at that time. One member was a young psychologist, Dr. Robert Taylor, who was then heading a research program on computing at NASA. In 1964, Licklider resigned from ARPA in order to return to MIT, and was replaced as director of IPTO by Ivan Sutherland, at that time a 26-year-old Army lieutenant at Ft. Meade, Maryland. Taylor served as Sutherland's Associate Director until 1965, when Sutherland accepted a faculty position at Harvard University. Thus Taylor became the third director of IPTO.

Over the next five years, IPTO funded 20 or so large research projects, mostly at U.S. universities. Particularly heavy funding went to the computer science departments at MIT, Carnegie Mellon University, and Stanford University, three departments still top-rated in computer science today. IPTO served a coordinating role in facilitating the formation of an invisible college among the ARPA R&D contractors in the computer field. For example, starting in 1965, Taylor annually called together the IPTO principal investigators for a two- or three-day meeting. Taylor also visited each of his funded projects for several days each year, which provided him with an opportunity to observe various styles of managing R&D on computing. Later, after 1970, at Xerox PARC, Taylor was able to implement the management lessons that he had learned at ARPA from 1965 to 1969.

The key feature of the management style developed by Licklider at IPTO was the absence of the peer review system, replaced by a more informal networking system designed to spot "the best people doing the best projects related to its mission." Licklider described this networking mode in the following words: "I had been going to computer meetings for quite a while. I'd heard many of these people talk...There is a kind of networking. You learn to trust certain people, and they expand your acquaintance. I did a lot of traveling, and in a job like that, when people know you have some money it's awful easy to meet people; you get to hear what they are doing." This informal networking system is best epitomized by the annual IPTO contractors' meetings started by Robert Taylor in 1965, where the various PIs would give short presentations of their work and engage in quite lengthy arguments and debates. In this management style, according to Alan Kay, decisions were in the hands of the director of the office, himself a member of the community, and were usually based on controversy rather than consensus.

The recollections of some of the major actors of the IPTO contractors' community were documented in the participants' discussion following Licklider's presentation at the Association for Computing Machinery Conference on the History of Personal Workstations: "To some extent, much of what we have in personal workstations is a result of the quality of research that Lick[lider] funded" (John Brackett)... "One of the things that characterized ARPA's history was a selection of foci" (Allen Newell)... "I think that ARPA, through Lick, realized that if you get n good people together to do research on computing, you're going to illuminate some reasonable fraction of the ways of proceeding because the computer is such a general instrument" (Alan Perlis).

Everyone in this group agrees that there was no common, definite project to build a personal workstation. Why this lack of a defined workstation project? Each participant had a different explanation. Together, these explanations add up to the way in which IPTO followed the philosophy of cooperation that Perlis makes clear: the organization of the IPTO-contractor relationships was congruent with the shape of the computer as "a general instrument." Present-day microcomputers and personal workstations are (also) technological systems that reflect the legacy of the collective output of a network of creative individuals bound together in shaping an evolving representation: personal computing. As we will now see, Douglas Engelbart's relative dismissal has much to do with the particular position that he occupied in this contractors' community.

Douglas Engelbart and the Augmentation of Human Intellect

When J. C. R. Licklider proposed his ideas on "man-computer symbiosis" in 1960, Douglas Engelbart was beginning to implement at SRI (then the Stanford Research Institute, now SRI International) a vision that would be the quest of his career: the Framework for the Augmentation of the Human Intellect. Engelbart had been developing his augmentation framework since 1948, when he was an engineer in the electrical section of the NACA Ames Laboratory in Mountain View, CA. He earned his Ph.D. in electrical engineering at the University of California at Berkeley in 1956, where he realized that "It really became clear in a disappointing way that [he] couldn't do what [he] wanted to do." By 1956, his vision had been transformed into what he refers to as his "crusade," a long-lasting obsession to "augment the human basic capability to cope with the increasing complexity/urgency of human problems."

In tune with Licklider's "Man-Computer Symbiosis," Engelbart's perspective was based on the premise that computers should be able to perform as a powerful auxiliary to human communication. Although Engelbart "literally at that time didn't know how the computer worked," he realized that he "just knew that much, that if it could do the calculations and things, it could do what [he] wanted." Moreover, it was clear to Engelbart that for the augmentation to take place, a co-evolution of the computer and the human being was necessary. Here he drew on the biological notion of symbiotic association, proposed earlier by Licklider as a model, in which two or more entities co-evolve toward an ever better fit: the computer should learn to manipulate the human language, and the human should learn to use computers.

The core of this anticipated co-evolution was based on the notion of "bootstrapping," considered as a co-adaptive learning experience. In Engelbart's framework, the tool system and the human system are equally important. The technological development of computing is associated with the human capacity to take advantage of the tool. Engelbart's bootstrapping philosophy was not originally conceived as a design principle, but as a basic methodology to attain a rather abstract goal, augmentation of the human intellect. The focus was not on a product or an artifact, but on the process, as both tool and human organization were considered complementary parts of the augmentation system.

In Licklider's groundbreaking proposal, one of the main ideas was that human-computer interaction should be seen as a communicative act. But there were at least two major communication models underlying the various representations of what personal computing should be. The conceptual innovation that led to the birth of personal computing was a new conception of the computer, progressively reconceptualized from a task-oriented logic machine to a "dynamic personal medium." With his concept of Man-Machine symbiosis, J.C.R. Licklider launched a new program of action for the then emerging field of Human-Computer Interaction (HCI): "To think in interaction with a computer in the same way you think with a colleague whose competence supplements your own will require much tighter coupling between man and machine than is suggested by the example and than is possible today." For this first representation then, HCI is conceptualized as a communicative act between the user and the computer, modelled on a conversation between "colleagues."

Although Licklider's ideas seemed to Engelbart to be very close to his own, their relationship is complex to decipher. Engelbart, indeed, was not heavily funded by ARPA-IPTO until 1967 and his involvement in the ARPANET Network Information Center. His funding for the Augmentation of Human Intellect research dates back to 1959, with a first small grant from the Air Force Office of Scientific Research (under the supervision of Rowena Swanson and Harold Wooster), increased in 1962 by Bob Taylor's funding from NASA, and finally by Licklider's funding at IPTO from 1963 on. But this early IPTO funding was relatively marginal, as Engelbart himself reported in his contribution to the 1988 ACM History of Personal Workstations Conference:

Lick was willing to put some more support into the direct goal (more or less as originally proposed), but the support level he could offer wasn't enough to pay for both a small research staff and some interactive computer support [...] What saved my program from extinction was the arrival of an out-of-the-blue support offer from Bob Taylor, who at that time was a psychologist working at NASA headquarters.

To better understand this situation, we now need a clear, dynamic picture of the structure of the ARPA-IPTO contractors' community and of the main themes that IPTO funded in the 1960s.

Augmented versus Artificial Intelligence

This conversational model applied to HCI was a fundamental innovation because it cast a new light on what computing was about. In a McLuhanian perspective on the history of media, it introduced the computer as a new kind of medium: an extension of the brain. But the conversation was rapidly reconceptualized in two very different ways. On the one hand, this conversation could be seen as a sort of internal conversation, as if the computer were a prosthesis of the brain, an extension of the thinking processes of the user. On the other hand, the computer could be seen as an autonomous entity, an "artificial" colleague.

This second program of action was developed in what is known as the program of artificial intelligence (AI): the idea was to enable the computer to behave as a colleague, and therefore to mimic the highest human attribute, intelligence. The first program of action was taken over by Engelbart and his Augmentation Research Center (ARC) group from a very distinct perspective:

When interactive computing in the early 1970s was starting to get popular, and they [researchers from the AI community] start writing proposals to NSF and to DARPA, they said well, what we assume is that computer ought to adapt to the human [...] and not require the human to change or learn anything. And that was just so antithetical to me. It's sort like making everything to look like a clay tablet so you don't have to learn to use paper.

This trend was established before the early 1970s, and what Engelbart narrates here can be seen as the result of the structuration of the contractors' community funded mainly by IPTO. Between 1962 and 1967, three main themes emerged in IPTO's funding of computer science: time-sharing, graphics, and artificial intelligence. But AI research was somewhat of an emergent program, and ARPA budgets did not include it as a separate line item until 1968. Bob Taylor gave the rationale for this situation from his standpoint as IPTO director:

The AI people, who were getting support from ARPA when I was there, may have thought that the reason why I was supporting AI was because I believed in AI, qua AI. If they thought that, they were mistaken. I was supporting it because of its influence on the rest of the field, not because I believed that they would indeed be able to make a ping-pong playing machine in the next three years, but because it was an important stimulus to the rest of the field. There was no reason for me to tell them that, of course.

As we will now see, Engelbart's position appears relatively marginal in the institutional network that progressively emerged at that time, and his interest in the augmentation of the intellect of the user (not the intelligence of the computer, as in AI) finally lost out.

Time-sharing was without contest Licklider's first interest in the early 1960s. One of his first decisions when he arrived at IPTO was to restate the contract with the Santa Monica-based System Development Corporation (SDC), the only contract he inherited. SDC, the computing arm of the Department of Defense since its work on the SAGE system for the Air Force, was in possession of four DOD-owned IBM AN/FSQ-32s from the SAGE program, the largest computers of that time. Much to the unhappiness of some at SDC, Licklider reworked their work statement in order to design and build a time-sharing system for this machine.

At the same time, Licklider let another contract with the University of California at Berkeley (UCB), where David Evans and Harry Huskey were the Principal Investigators. The work statement of this second contract was to put a Model 33 Teletype in the UCB lab and connect it on-line to the time-sharing system at SDC. According to Robert Taylor, this second contract was meant to help Licklider evaluate, motivate, and stimulate SDC's progress.

Since his Ph.D. work at UCB under Paul Morton's supervision in the early 1950s, Douglas Engelbart had kept some contacts with his alma mater. For instance, some UCB computer science students came to ARC for summer jobs. Engelbart, in connection with Evans' contract with IPTO, also got to work on-line on the SDC AN/FSQ-32. Engelbart's situation (and funding from IPTO) in the contractors' community was therefore marginal: one modestly funded project in the West Coast sub-network centered on SDC-RAND.

Apart from this West Coast pole of the ARPA-IPTO contractors' network, Licklider established an East Coast pole centered on the Boston-Cambridge community, including institutions and projects like MIT's Project MAC and Lincoln Laboratory, Bolt Beranek and Newman (BBN), and a number of smaller projects and companies. In this second pole, IPTO's three main themes were investigated by prestigious scientists such as John McCarthy, Marvin Minsky, Wesley Clark, and Edward Fredkin, and by younger members such as Ivan Sutherland. MIT also had an early time-sharing system, and Ivan Sutherland developed Sketchpad, the first interactive graphics program, on the TX-2 computer. But what made the strength of the East Coast pole was certainly the research on Artificial Intelligence. Apart from the MIT-centered Boston community, the East Coast pole also included the Carnegie Mellon University (CMU) group led by Herbert Simon and Allen Newell (joined by Alan Perlis) after Newell had left RAND in the early 1960s.

One last set of reasons helps us understand Engelbart's relatively marginal position inside the IPTO contractors' community. These reasons deal with Engelbart's personality and communicative skills. Engelbart was not exactly the biggest star in this community (which was full of such "stars," recipients of Turing Awards and other honors), and this, added to his relative lack of communication skills and his inability to compromise on his "crusade" (the rather arcane Augmentation of the Human Intellect), did not help him attract much interest until his famous 1968 San Francisco presentation; very often, on the contrary, it helped categorize him as a "loner" doing "weird stuff."

Interpretive Flexibility and a Basic Consensus

For the reasons presented above, Engelbart's framework for the Augmentation of the Human Intellect was never discussed inside the IPTO contractors' community as a potential alternative to a research program in Artificial Intelligence. But even as a means to augment the human intellect, the computer acquired a basic human attribute: the capacity to communicate through language. Whether this acquisition of language redefined the divide between human and machine became a major philosophical question for debate. But whatever position they took on this philosophical issue, the ARPA contractors gave birth to a new way of conceiving human-computer interaction, as a communicative act:

Prior styles of interaction between people and machines--such as driver and automobile, secretary and typewriter, or operator and control room--are all extremely lean: there is a limited range of tasks to be accomplished and a narrow range of means (wheels, levers and knobs) for accomplishing them. The notion of the operator of a machine arose out of this context. But the user is not an operator. He does not operate the computer, he communicates with it to accomplish a task. Thus we are creating a new arena of human action: communication with machines rather than operation of machines.

However, the existence of such a fundamental debate on the nature of the computer created an interpretive flexibility about the way this communicative act was performed. Apart from the early conversational model, a second model progressively emerged, for which the computer was best conceptualized as a kind of protean clay, an indefinite material whose essence rested in between humanity and machinery. The tighter coupling of man and machine evoked by Licklider contained in germ the cyborg-like association that is the other, mechanical side of the symbiosis. Once this interpretive flexibility was acknowledged, the problem therefore became how to conceive the interaction not according to the essence of the man-machine association, but according to the modalities of human-computer communication. This came to be translated in terms of conceiving the computer as a medium:

Creative, interactive communication requires a plastic or moldable medium that can be modeled, a dynamic medium in which premises will flow into consequences, and above all a common medium that can be contributed to and experimented with by all. Such a medium is at hand--the programmed digital computer. Its presence can change the nature and value of communication even more profoundly than did the printing press and the picture tube, for, as we shall show, a well-programmed computer can provide access both to informational resources and to the processes for making use of these resources.

But to consider the computer as a medium does not close the debate on its essence; it only brackets the question and moves it to different ground. If the computer is conceived as a medium, it can be thought of as a means of communication among humans: the communicative act originally thought to occur between man and machine is then displaced to a subordinated interaction between them, in the service of a purely human communication. This third vision of the computer, as a medium analogous to the telephone, for example, emerged slowly and grew to its full potential in the interactive networking program.

In this perspective, the personal computer becomes the individual's portal into a network of similarly equipped individuals, and the coupling of the user and this portal becomes an "interface," a means to translate and then channel information through the network. The computer becomes an encoding-decoding device allowing distributed communication. The strength of this new medium comes from its protean nature: what allows better communication among humans via a computer network is the fact that even if the computer is indefinite in essence, the interaction with it (or him, or her) remains conversational and therefore analogous to an (unmediated) interpersonal communication.

Even if the field of computer science has coped with, and developed on the basis of, this interpretive flexibility as to the essence of the computer and of the human-computer association, I feel that it is time to re-open the debate from a historical standpoint: not at all in order to solve a problem (i.e., to define the essence of the human-computer association on the basis of an analysis of the communication modalities of their interactions), but as a heuristic project of understanding what this set of phenomena can mean for a reconceptualization of the computer-mediated communication process itself. Language is central to this enterprise, and the influence of Engelbart's framework on this matter deserves renewed attention.

Language in the Augmentation Framework

It is now very difficult to remember the time when computers could not deal with "natural language," especially for people like the author of the present paper, who never knew such a time. As strange as it may appear now, the notion of an interface between the computer and its user is a relatively young idea that did not occur without cognitive uncertainty. If we now look at the computer as a medium more than as a tool for number crunching, it is the result of a slow process of teaching both the user and the computer to talk to each other, to find a common language.

Engelbart on Language

In Engelbart's framework, Human-Computer Interaction is an internal process of information exchange that takes place inside a higher-order entity that he calls the "H-LAM/T system," for "Human using Language, Artifact, Methodology, in which he is Trained." Figure 1 depicts Engelbart's scheme of the H-LAM/T system.

Figure 1: Engelbart's portrayal of the H-LAM/T system

In this schematic representation, arrows represent flows of energy between domains of the system and the "outside world," and a caption of the original picture refers to the grey areas as "matching processes." For Engelbart, these "explicitly" human and artifactual processes match through language, understood in both its physiological and social dimensions. Through language, the computer is neither simply an extension of the brain nor simply a medium; it is through language, indeed, that the computer can appear both as an extension of the brain (in its physiological dimension) and as a medium (in its social dimension):

I remember the revelation to me when I was saying, "Let's look at all the other things that probably are out there in the form of tools," and pretty soon focusing on language; realizing how much there was already that is added to our basic capability [...] It amounts to an immense system that you essentially can say augments the basic human being.

Moreover, Engelbart made it clear in the same interview that the main evolution in the maturation of his vision for the Augmentation of Human Intellect was the shift from the display of symbols to the learning of a common language. The most important cognitive feature of this process is the realization that language is more than the manipulation of symbols, and that the most basic means of augmenting human intellect lies in the co-evolution strategy that would make this difference available to the user. To make this point, we need to go back to the meaning of "language" in the first comprehensive expression of Engelbart's framework:

Language--the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model the world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts.

This definition clearly provides two levels for understanding the meaning of "language" in Engelbart's framework: language means (1) concept structuring and (2) symbol structuring, in order to model and represent "a picture of the world." The difference between the idea of displaying new symbols and the idea of augmenting human intellect via the use of a common language opens the realm of conceptualization to the medium. The computer becomes an open medium when it can be used to present a certain "picture of the world," and not only to re-present it. Another excerpt from the 1962 framework states this point even more clearly:

A natural language provides its user with a ready-made structure of concepts that establishes a basic mental structure, and that allows relatively flexible, general-purpose concept structuring. Our concept of "language" as one of the basic means for augmenting the human intellect embraces all of the concept structuring which the human may make use of...The other important part of our "language" is the way in which concepts are represented--the symbols and symbol structures.

The evolution of this definition over twenty years of Engelbart's publications shows that the meaning of "language" remains constant in his work, as one of the most basic assumptions of his framework over time: in the chapter in Vistas in Information Handling, "parcels out" becomes "classifies," and the rest remains unchanged; in the chapter in Emerging Office Systems, "language" becomes "how we conceptualize, attach labels and symbols, externalize, portray, model, communicate."

The Whorfian Connection

In order to understand how crucial the shift from symbol to language is for Engelbart's framework, we need to explain its relation to one of the few influences that Engelbart acknowledges in the first publication of the framework: the Whorfian hypothesis.

The Whorfian hypothesis states that "the world view of a culture is limited by the structure of the language which this culture uses." But there seems to be another factor to consider in the evolution of language and human reasoning ability. We offer the following hypothesis, which is related to the Whorfian hypothesis: Both the language used by a culture, and the capability for effective intellectual activity, are directly affected during their evolution by the means by which individuals control the external manipulation of symbols.

Engelbart's formulation of the Whorfian hypothesis resonates with a stream of philosophical ideas pervasive in most classical Euro-Mediterranean thought, as well as in much of modern social theory. But Engelbart's use of the notion of "worldview" refers more accurately to the German influences on Whorf, and especially Humboldt, who, like Whorf, combined the knowledge of non-Indo-European languages with a broad philosophical background.

In his reading of the Whorfian hypothesis, Engelbart in fact postulates a dialectical relationship between the two sub-levels of "language" previously introduced: the symbolic representation of the concepts affects the way these concepts picture the world. The computerized display of new symbols should therefore affect the way we conceptualize our world. The computer medium radically changes intellectual activity, and not only by improving its efficiency, that is, by making it faster, more economical, etc. This dialectical relationship between the computer and the intellectual activity it mediates is therefore at the heart of the strategy to augment the human intellect.

The most important change caused by the use of the computer medium in comparison to other media (print, oral) is its non-linearity: The importance of considering the two dimensions of language (and not only the symbolic function) lies at this level. The opening of the conceptualization dimension in the medium enables the framework to conceive collaboration independently of the linear fashion in which concepts are usually communicated. The question therefore becomes that of conventions, as this excerpt from the famous presentation that Engelbart and his group gave in 1968 in San Francisco demonstrates:

With the view that the symbols one works with are supposed to represent a mapping of one's associated concepts, and further that one's concepts exist in a "network" of relationships as opposed to the essentially linear form of actual printed records, it was decided that the concept-manipulation aids derivable from real-time computer support could be appreciably enhanced by structuring conventions that would make explicit (for both the user and the computer) the various types of network relationships among concepts.

Language as a Social Construction

Before concluding this section, I want to stress one more point about the legacy of Engelbart's work for the ways we now consider computing and its main applications. For many analysts of the field, the next "revolution" in computing will be based on hypermedia applications. Along with Ted Nelson, Douglas Engelbart is often credited for his pioneering work in the field of hypertext or hypermedia, defined as "a style of building systems for information representation and management around a network of nodes connected together by typed links." This legacy is determined in an important manner by the developments of the framework that I describe in the present article.
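The definition just quoted, a network of nodes connected together by typed links, can be rendered concrete in a few lines of code. The sketch below is only an illustration of the general idea; the node names and link types are hypothetical and do not reproduce the data model of any actual system, Engelbart's NLS included. What it shows is what distinguishes a typed link from a mere pointer: traversal can be restricted to one kind of relationship.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    content: str
    links: list = field(default_factory=list)  # outgoing (type, target) pairs

    def link(self, kind, target):
        # A typed link carries a relationship ("cites", "refutes", ...),
        # not just a destination.
        self.links.append((kind, target))

    def follow(self, kind):
        # Traverse only the links of the requested type.
        return [target for (k, target) in self.links if k == kind]

claim = Node("claim", "Hypertext predates the Web.")
memex = Node("memex", "Bush (1945) describes the Memex.")
claim.link("cites", memex)
print([n.name for n in claim.follow("cites")])  # -> ['memex']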

In fact, one of the main characteristics of current hypermedia systems is the opposition between outline-based and network-based systems, the former being associated with Engelbart's legacy and the latter with Nelson's. If we go back further in the genealogy, we find that this opposition reflects the opposition between the conceptions of two other forefathers of the technology, Vannevar Bush and Benjamin Whorf. The influence of Vannevar Bush and his Memex has been well documented in recent years, but such is not the case for Whorf's legacy.

In working on Engelbart's vision, it appeared to me that it is difficult to understand the origin of current hypermedia systems without taking into account the opposition between "association" and "connection," as the conceptual translation of the opposition between Bush's and Whorf's influences. Most authors dealing with hypertext or hypermedia systems refer to the following quotation from Bush's (1945) "As We May Think" as the conceptual origin of hypertext:

The human mind...operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain... Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it...The first idea, however, to be drawn from the analogy concerns selection. Selection by association, rather than by indexing, may yet be mechanized.

For instance, Linda Smith's citation analysis of "As We May Think" identified 375 citing documents, half of which were published in the period 1981-1990, and stated that "the continuing high level of citation of the 1945 article in the 1980s can be attributed at least in part to the association of Bush with concepts similar to those underlying hypertext." But the same authors (including Smith as well as all the contributors to Nyce and Kahn's From Memex to Hypertext) usually neglect to mention an alternative that was historically available at that time, at least at the conceptual level, in the work of Benjamin Whorf:

The "connection" of ideas, as I call it in the absence of another term, is quite another thing from the "association" of ideas. In making experiments on the connecting of ideas, it is necessary to eliminate the "associations," which have an accidental character not possessed by the "connections." [...] "Connection" is important from a linguistic standpoint because it is bound up with the communication of ideas. One of the necessary criteria of a connection is that it be intelligible to others, and therefore the individuality of the subject cannot enter to the extent that it does in free association, while a corresponding greater part is played by the stock of conceptions common to people. The very existence of such a common stock of conceptions, possibly possessing a yet unstudied arrangement of its own, does not yet seem to be greatly appreciated; yet to me it seems to be a necessary concomitant of the communicability of ideas by language; it holds the principle of this communicability, and is in a sense the universal language, to which the various specific languages give entrance.

While most authors writing about hypertext mention Engelbart's work, they usually fail to understand how crucial and direct Whorf's influence on his framework is--even if Engelbart himself does not fully acknowledge it and only refers to the Whorfian hypothesis. It is nevertheless difficult to explain the total lack of acknowledgement of Whorf's influence on current hypertext or hypermedia systems, especially since this influence appears crucial as soon as one focuses on Engelbart's work.

Put that Body Back in the Picture

In a previous paper, I characterized the interface as the space in which user and designer meet. I then focused on the modalities of their meeting, and described it as the negotiation leading to the construction of the entities capable of action (agents, actants) that inhabit this space. But so far, what I have called "user" and "designer" are abstract actors, disembodied characters in our narrative, pure essences. The central tension between the two dimensions of their being, biologically concrete individual and abstract member of a social community, has somehow been evacuated from my narrative through the artifice of directing my discourse to their action. It is now time to consider how this action is possible. However multiple and collective they may be in action through time, user and designer are, beyond any possible doubt, flesh and blood at any given moment, embodied in the organic, individual sensorimotor systems that we call their bodies.

The historical sketch of the evolution of the models of the user-interface that I proposed previously must then be paralleled by a historical sketch of the ways in which the sensorimotor system of the user has interacted with the computer through time. Here, I examine the turning point that led Douglas Engelbart and his colleagues at SRI to introduce the body of the person into his Framework for the Augmentation of Human Intellect. For Engelbart, the augmentation of the human intellect should start with a systematic analysis of the potential candidates for change, beginning with what he refers to as "the basic human capabilities" (Figure 2).

Figure 2. Engelbart's Simplified Model of the Basic Human Capabilities.

For Engelbart, the sensorimotor system ("the body") is at the interface between the "mental part" of the human being and the "outside world." The deliberate decision to "begin with the basics" led Engelbart and his group to develop a series of artifacts that mirror the importance of the body on the computer side of the interface. I refer here to the display system (the eye), the mouse, and the keyset (the hands). Prior to the work of Engelbart's ARC group, the physical interaction between human and computer was mostly limited to typing. In many ways, the keyboard and the teletype display available in the early 1960s were a mere extension of the punch card as a communication medium between human and computer. The communication was based on the manipulation of symbols, first numeric and then alphanumeric. The body of the user entered the picture only as a medium to transfer his or her symbolic manipulations, and the connection of the body to the rest of the world was only incidental. Hands and eyes were extensions of the tool system, as input and output devices.

In developing the mouse and the chord keyset in 1964, Engelbart and his ARC group at SRI made a quantum leap in human-computer interaction: the introduction of the body as a whole, as a set of connected basic sensorimotor capabilities. The experiments that the group conducted were not limited to the hands and the eye, but involved many other parts of the body (the knee, the back, the head) as potential sensorimotor ways to control a pointer on the screen. The liberation of the left hand from the typing process, made possible by the invention of the chord keyset (one-handed typing), allowed a direct connection between the eye (perception) and the hand (motor action).
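The arithmetic of the chord keyset is worth a short illustration: five keys pressed in combination yield 2**5 - 1 = 31 non-empty chords, enough for the alphabet with a few chords to spare. The direct binary chord-to-letter mapping sketched below follows the commonly reported NLS convention (chord value 1 for 'a', 2 for 'b', and so on), but it is a simplifying assumption rather than a reconstruction of the actual device.

def decode_chord(keys_down):
    # keys_down holds the indices (0..4) of the depressed keys,
    # read as the bits of a 5-bit number: the chord's value.
    value = sum(1 << k for k in keys_down)
    if 1 <= value <= 26:
        return chr(ord('a') + value - 1)  # 1 -> 'a', 2 -> 'b', ...
    return '?'  # the five remaining chords (27..31) are left unassigned here

print(decode_chord({0}))     # 'a' (binary 00001)
print(decode_chord({1}))     # 'b' (binary 00010)
print(decode_chord({0, 1}))  # 'c' (binary 00011)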

The mouse is certainly the most famous device developed by Engelbart and his group at SRI. But a little-commented aspect of the invention of the mouse is crucial to my thesis, as well as, we may claim, to its fate as the most used pointing device in modern user-interfaces. I refer here to the origin of the idea of the mouse, as revealed by Douglas Engelbart: "I remember thinking, 'Oh, how would you control a cursor in different ways?' I remember how my head went back to a device called a planimeter that engineering uses."
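The planimeter principle that Engelbart recalls here can also be sketched in a few lines: two orthogonal wheels each pick up the component of the hand's motion along one axis, and their accumulated rotations become a position on a bounded screen. The parameter names and values below are illustrative assumptions, not specifications of the SRI device.

def move_cursor(pos, wheel_x, wheel_y, counts_per_unit=10,
                width=1024, height=768):
    # Each wheel reports a rotation count proportional to the motion
    # along its axis; scaling turns counts into screen units.
    x = pos[0] + wheel_x / counts_per_unit
    y = pos[1] + wheel_y / counts_per_unit
    # Clamping maps the open space of the gesture onto the bounded display.
    return (min(max(x, 0), width - 1), min(max(y, 0), height - 1))

pos = (512, 384)                 # start at the center of the screen
pos = move_cursor(pos, 50, -30)  # a stroke to the right and slightly up
print(pos)                       # -> (517.0, 381.0)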

As the conceptual grandchild of the planimeter, the mouse also translates motion (the arm of its holder) into graphical mathematics. It therefore not only allows the user to point at any object on the screen, but also introduces a direct connection between the topographical space of the interface and the human gesture of the user. By extension, the invention of the mouse opens a space for any translation of human motion into the electronic space of the computer interface. This point is fundamental in that it allows us to set aside definitively the notion of cognition as purely abstract representation, and to introduce instead "embodied action" in the computer space:

...perception does not consist in the recovery of a pregiven world, but rather in the perceptual guidance of action in a world that is inseparable from our sensorimotor capacities. Cognitive structures emerge from recurrent patterns of perceptually guided action. I can summarize, then, by saying that cognition consists not of representations but of embodied action. Correlatively, the world we know is not pregiven; it is, rather, enacted through our history of structural coupling.

The Construction of the Observer

In his remarkable book Techniques of the Observer, Jonathan Crary describes the shift in the historical construction of vision in the early nineteenth century, and relates it from the start to the recent "sweeping reconfiguration of relations between an observing subject and modes of representation that effectively nullifies most of the culturally established meanings of the terms observer and representation." According to Crary, the recent developments of computer graphics techniques (from computer-aided design to virtual environments) are manifestations of an "on-going abstraction of vision," a process where "historically important functions of the human eye are being supplanted by practices in which visual images no longer have any reference to the position of an observer in a 'real,' optically perceived world." The strength of his demonstration comes from the enlightening (no pun intended) way in which he ties this on-going transformation of vision to the early nineteenth-century production of a "new kind of observer," as "a new set of relations between the body on one hand and forms of institutional and discursive power on the other [hand]."

Many of these new forms of institutional and discursive power obviously dealt with the social and cultural construction of what counts as a "real, optically perceived world" at a given time. In this perspective, Crary shows how the autonomization of sight is a central phenomenon, characteristic of an "industrial remapping of the body in the nineteenth century," one that "enabled the new objects of vision (whether commodities, photographs, or the act of perception itself) to assume a mystified and abstract identity, sundered from any relation to the observer's position within a cognitively unified field." For Crary, then, the origin of present ways of visually perceiving and representing the "real world" in "a plane severed from a human observer" is to be found in a fundamental shift, in the early nineteenth century, from the classic seventeenth- and eighteenth-century notion of point of view to modern subjective vision.

Many commentators have stressed the importance of the end result of this process for our twentieth-century culture, but Jonathan Crary goes one step further when he explains the logic of the process itself, most clearly expressed in the nuance he draws between "spectator" and "observer":

Unlike spectare, the Latin root for "spectator," the root for "observe" does not literally mean "to look at." Spectator also carries specific connotations, especially in the context of nineteenth-century culture, that I prefer to avoid--namely, of one who is a passive onlooker at a spectacle, as at an art gallery or theatre. In a sense more pertinent to my study, observare means "to conform one's action, to comply with," as in observing rules, codes, regulations and practices. Though obviously one who sees, an observer is more importantly one who sees within a prescribed set of possibilities, one who is embedded in a system of conventions and limitations. And by "conventions" I mean to suggest far more than representational practices. If it can be said there is an observer specific to the nineteenth century, or to any period, it is only as an effect of an irreducibly heterogeneous system of discursive, social, technological, and institutional relations. There is no observer prior to this continually shifting field.

In a parallel perspective, I have shown in this paper how two fundamental steps in shaping personal computer technologies were taken at Engelbart's ARC laboratory: (1) the introduction of natural language in a Whorfian "connectionist" fashion, and (2) the introduction of the whole body of the user into the human-computer interaction process. It is now time to tie up these two steps in one dynamic, comprehensive description of the social construction of the "prescribed set of possibilities" that constrains the action of today's computer users.

Kinaesthesia, Synesthesia and Language

We have seen that, through the notion of "linguistic relativity," the influence of Benjamin Lee Whorf's work was central in the genesis of Douglas Engelbart's framework. But my genealogical enterprise would not be complete if I relied entirely on Engelbart's translation of Whorf. The next step of my inquiry requires a deeper look at Whorf's formulation of the linguistic relativity hypothesis (also named the "Sapir-Whorf hypothesis"). To do so, I need to go back to one of the early expressions of the hypothesis, in "The Relation of Habitual Thought and Behavior to Language."

From the start, Whorf's interest is more limited than the broad claim about the relationship between "culture" and "language" that many critics and followers alike (including Engelbart) ascribed to his work. Here is how Whorf introduces his inquiry:

That portion of the whole investigation here to be reported may be summed up in two questions: (1) Are our own concepts of time, space, and matter given in substantially the same form by experience to all men, or are they in part conditioned by the structure of particular languages? (2) Are there traceable affinities between (a) cultural and behavioral norms and (b) large-scale linguistic patterns?

In an illuminating parenthesis that a footnote makes even clearer, Whorf adds: "I should be the last to pretend that there is anything so definite as a 'correlation' between culture and language...We have plenty of evidence that this is not the case...The idea of 'correlation' between culture and language is certainly a mistaken one." And the answer given to the second question near the end of the article certainly reinforces this: "There are connections but not correlations or diagnostic correspondences between cultural norms and linguistic patterns." For Whorf, however, there is such a thing as a "principle of linguistic relativity," but his own formulation is quite different from the broad translations that made his fame, as can be seen from this excerpt from "Linguistics as an Exact Science," published in 1940 in Technology Review:

The phenomena of language are background phenomena, of which the talkers are unaware or, at the most, very dimly aware...These automatic, involuntary patterns of language are not the same for all men but are specific for each language and constitute the formalized side of language, or its "grammar"...From this fact proceeds what I have called the "linguistic relativity principle," which means, in informal terms, that users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world.

Thus, for Whorf, the principle of linguistic relativity applies at the level of this prescribed set of possibilities that constrains the "observer" to whom Crary was referring: the connection between language, cultural norms, and behavior is to be found at the level of the relationship between observation (that is, perception) and representation. Whorf proposes, in other words, a principle that establishes a link between (individual, biological) perception and the (collective, social) construction of what counts as "the real world" at a given time and in a given culture. However unconscious it may be, Whorf postulates that language always plays a part in this process, and his idea of "connection" (vs. "association") helps to convey this point: the construction of the "real world" stems from the process of sharing meaning through language. In this perspective, Crary's observer cannot be understood apart from the collective he identifies with (whether consciously or not does not really matter).

To understand fully this point, one has to go back to Whorf's answer to his first question about the way particular languages may "condition" our concepts of time, space and matter. On this point, Whorf's answer is more subtle:

Concepts of time and matter are not given in substantially the same form by experience to all men but depend upon the nature of language or languages through the use of which they have been developed. But what about our concept of space, which was also included in our first question? (...) Probably the apprehension of space is given in substantially the same form by experience irrespective of language (...) but the CONCEPT OF SPACE will vary somewhat with language, because, as an intellectual tool, it is so closely linked with the concomitant employment of other intellectual tools, of the order of time and matter, which are linguistically conditioned.

It is in the matter of "space" that Whorf's answer is the most interesting, because it clearly articulates two levels: the "apprehension of space" and the "concept of space." At the first level, space is perceived in a similar fashion by, and is therefore common to, all human beings; at the second level, space is a linguistic construction and therefore varies among the different human groups singularized by their languages. The first level is the level of the individual, described by its physiology, and the second level is the level of the group, described by its sociology and, for Whorf, mostly by its language. To this second level, says Whorf, belong "Newtonian" and "Euclidean" notions of space.

The consideration of the two levels previously introduced in the deconstruction of the concept of space led Whorf to propose to "make more conscious," through language, the notions of kinaesthesia and synesthesia. Whorf defines kinaesthesia as the "sensing of muscular movement" and synesthesia as the "suggestion by certain sense receptions of characters belonging to others." From the Greek syn (together) and aisthanesthai (to perceive), synesthesia is one of those topics that have periodically reemerged on the scientific agenda since its first medical description in 1710, when Thomas Woolhouse, an English ophthalmologist, described the case of a blind man who perceived sound-induced colored visions.

Whorf's point about the connection between synesthesia, kinaesthesia, and language is fundamental, as it allows us to understand the link between the two main advances in computing proposed by Engelbart in his Framework for the Augmentation of Human Intellect. Since Whorf's writings, numerous results in cognitive science have shown that this connection deserves a central position in our growing understanding of the evolution of the human brain. Whorf had certainly intuited some of these later results when he stated that "probably in the first instance metaphor arises from synesthesia and not the reverse." But for him, metaphor is nevertheless the key to reaching a higher level of consciousness, and we find here the organizing principle that allows us to link the two major threads of Engelbart's contribution that our previous analysis revealed: it is through the metaphorical capability of language that the introduction of natural language and the introduction of the whole body of the user are linked in the same project, augmenting the human intellect.


Metaphors within a metaphor: implications for the human-machine boundary

The boundary between human and computer can only be located metaphorically. There are two major levels of justification for this claim. First, the very use of the word "boundary" in this context is itself metaphorical: it suggests that there is an interface on the ontological maps of the human being and the computer, a "space" where they are in contact, a line where one cannot be distinguished from the other except by convention (usually set after a war, if one follows the lessons of human history). Therefore, to wonder about the boundary between human and computer is to think about the nature of this interface, which physically represents abstract concepts (the ontological nature of humans and machines).

Secondly, to talk about the analogies between human and computer at this specific time (the end of the twentieth century) has to be metaphorical, since direct perception (sight, sound or touch) is still enough to know absolutely that humans and machines are different things. Since the early days of computer science, however, the most common test to decide whether a computer can be considered analogous to a human being has been the Turing test, a variation on the imitation game whose experimental setting makes sure that there can be no direct perception. In his elegant "simple comment regarding the Turing test," Benny Shanon has stressed this point and demonstrated how "the test undermines the question it is purported to settle, for with it a case of petitio principii is introduced." The conclusion of the paper states this point more clearly:

But, of course, there are ways to tell the difference between computer and man (sic). Everybody knows them. Confronted with candidates for identification, look at them, touch them, tickle them, perhaps see whether you fall in love with them. Stupid, you will certainly say: the whole point is to make the decision without seeing the candidates, without touching them, only by communicating with them via a teletype. Yes, but this, we have seen, is tantamount to begging the question under consideration.

To say that "the mind is a meat machine," or, more accurately, that "the mind is a computer" is to use a metaphor: it relies on an analogy that "invites the listener to find within the metaphor those aspects that apply, leaving the rest as the false residual, necessary to the essence of the metaphor." With regard to this second metaphor, the mind-computer metaphor, the greatest source of this false residual lies in the direct perception of the computer. Now, when one considers this metaphor as a means to make sense of the "boundary" metaphor (a metaphor within a metaphor), the obvious conclusion is that the topographical aspects are definitely not what is determining: if the compared materiality of human beings and computers is the false residual of the mind-as-computer metaphor, one should conclude that there is no "natural" relief to help locate the boundary. No ontological connection, that is, between our materiality--our body--and the material manifestation of the computer. Past meat and circuits, ahead with conceptual conventions.

For AI indeed, the computer-as-mind metaphor operates at the level of information processing and symbolic manipulation. In this perspective, the greatest philosophical achievement of the AI research program might very well be that it provides an invaluable source of insight into the formal/conventional nature of the ontological boundary between humans and machines. In his chapter in The Boundaries of Humanity, quoted previously, Allen Newell expressed dissatisfaction with metaphorical thinking in general: "it is clearly wrong to treat science as metaphor, for the more metaphorical, the less scientific." For him, AI is a theory of mind, and not a metaphor for the mind: it should provide organized knowledge about the mind. In his general introduction to the book, however, Morton Sosna echoes AI critics who "have questioned whether AI has remained, or can or ought to remain, unmetaphorical."

In the same book, Terry Winograd introduced yet another metaphor to describe the traditional research program in Artificial Intelligence: the bureaucracy-of-the-mind metaphor. For Winograd, AI is the ultimate avatar of the Western philosophical program that, since Descartes, Hobbes and Leibniz, has sought to "achieve rational reason through a precise method of symbolic calculation." This "mechanization of reason" relies heavily on the techniques of "formulation of rule-governed operations on symbol systems," which are to the mind what bureaucracy is to human social interaction. Back to the original metaphor of the human-machine boundary, the implication here is that the boundary is a border, marked by the existence of a bureaucratic apparatus (customs, the immigration office) in charge of enforcing it. This metaphor for the human-machine boundary is consistent with our previous conclusion about the absence of a "natural formation" at the border: no river or mountain, no interface where carbon-based organization merges with silicon-based organization, but an arbitrary definition that states "here you are in machine territory; there, in human territory." For Winograd, AI's historical claim of approaching the workings of the mind through the building of a symbol-processing machine is overstated. He argued that the computer ought to be seen as a "language machine" rather than a "thinking machine":

The computer is the physical embodiment of the symbolic calculations envisaged by Hobbes and Leibniz. As such, it is really not a thinking machine but a language machine. The very notion of "symbol system" is inherently linguistic, and what we duplicate in our programs with their rules and propositions is really a form of verbal agreement, not the workings of mind.

In this perspective, the language-machine metaphor makes sense of the boundary metaphor by locating the boundary more accurately within the realm of "verbal agreement." Now, one can still wonder whether this claim does not simply reproduce the tautological problem at the heart of the AI research program since the formulation of the Turing test. We argue here that such is not the case, provided one does not equate language with symbol processing. Somehow, AI has been at the same time over-ambitious in its claim to model human intelligence or "thinking" and under-ambitious in its understanding of the linguistic phenomenon. If the notion of "symbol system" is indeed inherently linguistic, language, on the other hand, cannot be reduced to the conventional manipulation of symbols.

Hubert L. Dreyfus has regularly stated this objection since 1972: there are things that computers (still) can't do, because they function in a binary logic at odds with human reasoning, and the binary translation of symbols into machine logic is far from enough to mimic human thinking. Jean-François Lyotard has recently summarized the position of this phenomenological tradition (from Husserl to Merleau-Ponty) on the issue, concluding that:

Now, these are the paradoxical operations that constitute the experience of a body, of an "actual" or phenomenological body in its space-time continuum of sensibility and perception. Which is why it's appropriate to take the body as model in the manufacture and programming of artificial intelligence if it's intended that artificial intelligence not be limited to the ability to reason logically.

It's obvious from this objection that what makes thought and the body inseparable isn't just that the latter is the indispensable hardware of the former, a material prerequisite of its existence. It's that each of them is analogous to the other in its relationship with its respective (sensible, symbolic) environment: the relationship being analogical in both cases.

This point was not entirely ignored by AI researchers, but I have demonstrated in this paper how the Whorf-Engelbart perspective (the connection between language and world-view for the former; between language, world-view and communication technologies for the latter) helps us make better sense of the human-computer boundary. If we take both Whorf's and Engelbart's claims seriously, we should realize that the articulation between language and technology is specifically human, and that as a "language machine" the computer could serve as a boundary-spanning object. In this perspective, the materiality of humans and computers takes on a different meaning than that of a "false residual": both language and technology are inherently tied to the "body" on the human side of the border, and to the circuits on the mechanical side of it. It is perhaps in this perspective that the metaphor regains its natural character, in the Deleuzian fashion of geological strata.

Conclusions

Following his goal of augmenting the human intellect, Douglas Engelbart contributed to setting the agenda for present computing technology. More than for the numerous innovations that his laboratory at SRI produced, his work deserves credit for its role in helping to establish the current (and even the next, for that matter) paradigm in human-computer interaction. In this perspective, the ARC legacy is to be understood as one of the leading loci for the implementation of a strategy of technological development that aimed at a co-evolution of man and his tools: Douglas Engelbart and his colleagues worked for the improvement of the man-machine relationship, and not only for the creation of smarter machines.

In the present paper, I have shown that such an undertaking should be read in the extended perspective of the relativist program that spans the twentieth century, starting with physics and mathematics and making its way into the social sciences. In this perspective, I demonstrated the influence of Benjamin Lee Whorf on Douglas Engelbart's framework, and showed that this influence should be read from within the relativist diaspora. I insist on the historical importance of such a connection, which locates Douglas Engelbart's work within a broader intellectual context and informs it on the basis of a stream of philosophical ideas that Engelbart himself seldom acknowledges or comments on.

The human-machine boundary is indeed a strong metaphor when it links, through language, some basically human attributes with technology at its best at a given point in time. And in so doing, the metaphor questions the very nature of these attributes, hides them for a while, abstracts them with their own codes, understands them. The articulation of language and technology is what constitutes human beings. To model, create and use a "language machine" has been and still is the source of knowledge claims of epistemic dimensions. It goes straight to the heart of the metaphor of humanity and opens many new questions.