SEHR, volume 4, issue 2: Constructions of the Mind
Updated 5 September 1995

dance floor blues

the case for a social ai

Tom Burke


Put yourself in the following position. You are a fifteen- or sixteen-year-old white male, and here you are at a white middle-class suburban high-school dance. And there is Mary across the room. You think she's pretty fantastic. Maybe she likes you, but as a matter of fact you've never really talked to each other. Maybe you should ask her to dance. That bass guitar is drumming out a good beat. You do want to dance with her. Or do you? Do you really want to cross the room and ask? She's smiled at you a couple of times in English class, and one time you both sat at the same table at lunch. But so what? She smiles at everyone, and she eats lunch with a lot of people. So where do you stand? Do you really want to dance with her? Well of course you want this to be an enjoyable-dance-with-Mary situation. In that sense you definitely do want to dance with her. But that doesn't answer the question. The question is: is this that kind of situation? What if she says no? If that were the situation, then, no, you might be better off to just not ask, and, in fact, you do not want to dance with her right now. If she doesn't really like you or if she doesn't like to dance or if she is anywhere nearly as nervous as you are, then you don't want to press it and make matters more complicated than they already are. Or worse, what if she couldn't care less about you, and were to make it obvious to everyone that she doesn't want to dance with a toad like you? In that case, you definitely do not want to dance with her. But what if she does want to dance with you? Did she just look at you?! Damn; you weren't paying attention. For sure, what you want depends on what she wants. You want to dance with her only if she wants to dance with you. So what do you do? What do you want to do? You have to do something, to press the issue, to clear this up. You ought to make a move. Of course, you don't have to be dumb about it. You could go over there, but leave yourself a way out. You could go talk to someone you know standing close by. That would be a good thing to do anyway. Maybe you'll get a chance to talk to Mary that way. She only has to show you some kind of sign (look at you and smile, maybe even say something, anything) so that you'll know what to do, so that you'll want to dance.

However this little story ends up, it is designed to illustrate some facts about desire and other aspects of mentality, at least mentality of a younger white adolescent male variety, circa 1965.1 The uncertainty involved in this situation means that you are uncertain as regards your own "desire-state": not that you cannot articulate your desire clearly to yourself, but that the desire itself is not yet determined. The same can be said for some of your beliefs in this situation. It is possible to determine what you believe, desire, and intend to do only as the situation unfolds.

This points to something that is largely absent in the account of your situation on the dance floor: namely, the interactive character of experience. What you believed was not geared just to your own actions, desires, and beliefs, but also to her actions, desires, and beliefs. Your beliefs, like your desires, were initially undetermined, but they were capable of being determined in the course of and by virtue of various mutual actions, not just in terms of what you did with regard to Mary but also in terms of her actions toward you and her reactions to you. It was up to Mary and you together in a larger social setting (not just you alone and not just Mary alone) to coordinate your actions and otherwise clarify the situation by cooperative and consensual means. It is not clear that one's desires and beliefs are themselves socially distributed, but their determination, the active gathering and processing of information by which they are made determinate, often is socially distributed. Whether or not you want to dance with Mary depends, from one moment to the next, on how you and Mary proceed to interact, not solely on anything you can discern from past and present information.

The story also illustrates how indeterminacies in social situations can reveal us to ourselves in ways that mere perceptual experience generally does not. By compelling you to consider your own actions as means for altering present social circumstances, this type of situation served to highlight and bring your self into focus. The uncertainty that was involved resulted in putting the spotlight on you. "Do I make a move, or do I hold back? I am uncertain, hence I am." Activities like playing ping-pong might yield similar results, but it would more likely be the social character of the game and not just its perceptual character that brings the self into focus.

These observations support the claim that social situations, as opposed to merely perceptual experience, are what first gave rise, way back when, to the evolutionary development of an objective sense of self and ultimately to the emergence of human mentality. That claim is what I want to investigate in this paper. The lesson to be drawn from this, or so I will argue, is that the Artificial Intelligence enterprise cannot afford to focus solely on designing software for an artificial agent's head. Without some kind of socialization, an agent will have no way to classify and hence objectify itself to itself, and hence it will not have the kind of constitution it takes to engage in mental activity. Socialization must be worked into the process of building a thinking machine.

If we believe the social psychology of George Herbert Mead and John Dewey, then the AI enterprise, at least with respect to its more ambitious aims to build humanly intelligent and autonomous machines, will not begin to achieve its goals without introducing a social dimension into the theoretical picture.2 Mead and Dewey, working in the first half of this century, developed a view of human mentality and self-awareness which fundamentally draws on the social character of human nature. In this view, one would have to say that the AI enterprise has so far been misguided because it works with a faulty conception of human mentality, particularly in having no real appreciation of its social aspect. I do not wish to argue against the AI enterprise simply by pointing out that having a social life is something computers can't do; what computers can and cannot do may not be the important issue here, particularly since in talking about either artificial or natural agents we are talking about something more than just a computer.3 But what and how much more? To begin with, a natural agent inextricably exists and acts in the world, and hence is more than just a symbol processor inside of a skull. Moreover, according to Mead and Dewey, if such an agent has a mind, then that is the case by virtue of its possessing more basically a social nature. The aim of this paper is to summarize this social-psychological account of human mentality. I do not want to make claims about what AI researchers can or cannot accomplish, but only to point out certain design principles, having to do with the social character of human mentality, which so far do not seem to figure into what it is they think they are doing. I will try to paint a comprehensible picture of a rather complex view, sketching in fairly broad strokes a framework of ideas that serves as a positive alternative to certain erroneous preconceptions characteristic of the three-hundred-year-old epistemology inherent in most AI research.

To start, in order to understand what mind is, we first need to see how much of human experience we can account for without appealing to or taking for granted the presence or availability of mind in the first place. We should be able to go a long way toward understanding the nature of mind by studying the natural evolution and development of experience more broadly. According to Mead and Dewey, the workings of the mind in experience turn out to be only a small and later part of this evolutionary/developmental story. As mental agents, we may find it difficult if not impossible to step outside of ourselves to get the right perspective on this fact. Nevertheless, it stands to reason, in their view, that this is a fact.

To properly address the question of what it means to have a mind or how mentality arises and functions in human experience, we will need to work out a constructive account of how our thinking relates to the world, i.e., how it is that our thoughts (concepts, ideas, beliefs, etc.) correspond to the world at large (objects, events, facts, etc.). The account of correspondence between mind and world which I want to develop in this paper constitutes an elaboration of Dewey's alternative to a classical British empiricist picture of correspondence between facts in the world and true beliefs. The latter picture simplistically aligns and conflates various dichotomies: outer versus inner, physical versus mental, actual event versus representation, and so forth. In contrast, Mead and Dewey tended, in the absence of evidence otherwise, to treat all such distinctions as mutually orthogonal. They talked in particular about a correspondence not between two kinds of ontological realms but between two kinds of experience: one which is basically perceptual and otherwise immediately engaged in the world, versus another which is involved in planning and controlling the first kind of experience. There is a positive correspondence between these two kinds of experience if one's plans and expectations fit well with what is actually happening ("like a key fits a lock," as Dewey would say).4

In more detail, the classical view of human mentality (associated with a variety of thinkers, from Locke to Russell) looks something like Figure 1. Things are divided into two realms: a) an external world at large, populated by objects, events, properties and relations, facts; and b) an internal world of the mind, populated by representations of things in the external world (concepts, ideas, thoughts), and supporting processes which manipulate such representations (inferences, reasoning, beliefs, desires, decisions). Following Descartes, this distinction constitutes an ontological duality of matter versus mind.

[Figure 1: The Classical Picture]

These two realms affect one another. Perception is a process by which the outer realm somehow affects the inner realm, producing ideas and such in the mind as results of causal processes originating in the world. Likewise, by virtue of decisions based on certain beliefs and desires, agents perform actions in and on the outer world, so that the inner realm thereby affects the outer realm. For example, a cup of coffee is in front of your eyes, so you see a cup of coffee (i.e., the outer realm affects the inner realm). You believe a cup of coffee is present; so if you want to drink some coffee, you reach out, grasp the cup's handle, bring the cup to your lips, and you sip (i.e., the inner realm affects the outer realm). You taste the coffee (outer affects inner). You return the cup more or less to its original place (inner affects outer). And so forth and so on.

In this view, a true belief is a belief about things which corresponds to the facts, i.e., to the way things actually are. The actions you perform in various situations will be determined by your desires and beliefs about such facts. Your belief that Mary wants to dance with you is true just in case Mary, in fact, wants to dance with you. Or to say the same thing somewhat differently: a concept (in the mind) applies to an object (in the world) if the object actually has traits and fits specifications characteristic of the concept. For you, Mary falls under the concept "wants to dance with me" just in case Mary, in fact, wants to dance with you.

This outline of the classical view is simplistic, but even after some refinement, various well-known problems remain. For instance, it isn't clear how these two realms actually influence one another, though obviously there has to be some such influence so that we can perceive things in the first place, much less think about them and act on our thoughts.

Various solutions to this puzzle have been proposed. Assuming that the external/internal distinction lines up with a physical/mental distinction, one might think that the duality is only apparent and that everything is really just physical. In some sense, the brain is the mind. Variations of this view include the claim that the mind, while not strictly identified with the brain, is to be found in how the brain functions; that is, we should look for the mind in the brain's software rather than in the hardware. There are various subtle and not so subtle ways one can try to make such ideas work. No such solution so far proposed has come to be generally accepted, though this is the view that is implicit in AI research. If some such solution pans out, we avoid the problem of the gap between these two ontological realms, since the matter of how software and hardware influence one another is presumably well understood.

On the other hand, one might think that everything is really just mental. Everything apparently external to us, as well as everything internal to us, is just ideas: our ideas, God's ideas, someone's ideas. This sounds less plausible, or at least we quickly overreach our ability to understand what we are talking about by holding such a view. But at least on the surface, it would solve the ontological-gap problem.

Putting such metaphysical questions to one side, there are still epistemological questions to consider, such as whether or not (and how) we can perceive things as they are, directly, or whether in the course of perceiving things we ever get outside the realm of our own ideas and beliefs about what or how things are. How one answers such questions leads to further questions about the nature of the alleged correspondence between these two realms, and the usual answers yield an ad hoc naive realism, or in any case leave us with no account of what it means to know anything of any concrete existential significance. That predicament is just an historical fact about modern philosophy as practiced according to the strictures of British empiricist epistemology. Such problems are not unique to this long-standing philosophical tradition, but they are problems which proponents of this view have not been able to properly address after several hundred years of trying.

The social-psychological philosophy of mind developed by Mead and Dewey is different in various ways. They present a view which does not posit any kind of strict duality to begin with, so they are not faced with having to figure out how these realms interact or which of the two realms is more or less real or basic. There are distinctions to be made, but not in the simple dualistic manner characteristic of British empiricism.

The view developed by Mead and Dewey looks something like Figure 2. They start with a distinction similar to the classical divide between an agent and the world; however, this is not a strict ontological duality, but rather a definite though somewhat fuzzy if not floating distinction, marking off one kind of thing (an agent) acting within a single larger ontological domain. We should note that it is not appropriate to refer to this one grand agent/world realm either as "physical" or as "mental," since at this point we neither have nor want a physical/mental distinction to draw on. Without fixing categorical boundaries but simply to acknowledge some distinctions, it is also important to note that the distinction between an agent and the world is orthogonal to the distinction between an organism and its environment.5 The agent as such is an organism/environment system, and the world outside of the agent will include things within the organism as well as in the environment. Clothing, dental fillings, eye glasses, canes, automobiles, and other tools and devices in many circumstances function as part of the agent (and of course in other circumstances not). Hair, fingernails, hands, feet, and virtually any other part of the anatomy can function as part of the world in circumstances where it is acted on by the agent (clipping fingernails, combing hair, dressing a cut finger, removing a loose tooth, and so forth).

[Figure 2: An Alternative View]

Rather than an ontological bifurcation, the distinction between an agent and the world is more like the distinction between a knot and the rest of a rope, which is itself an interweaving of both organismic and environmental fibers. One can easily point to the knot and to the rest of the rope, but it might be difficult to specify where the knot begins and where the rest of the rope ends. In the case of an agent in the world, unlike an ordinary knot in a rope, various transactions and changes of perspective occur from one set of circumstances to the next which make a clean boundary between the two even harder to locate (as if the knot were to be incessantly changing size, shape, and position on the rope). Nevertheless, an agent/world distinction is no doubt acceptable for explaining what perception and thought are, insofar as perception and thought constitute the transactions and transformations of circumstances which distinguish the agent within, and bind it to, the world at large.

In this picture, perception, rather than being a one-way causal process, is itself a two-way interactive process. An agent does not just passively register sensory excitations in order to perceive; rather, perception is an active, ongoing process in which motor activities are as essential as stimulations of nerve endings. Eyes and ears function dynamically in a dynamic world if they function at all.

Similarly, and this is the more interesting claim, "thought" (or what Dewey also calls "reflection") is a kind of two-way agent/world interaction. Thinking doesn't happen just "inside the head" or purely "within the agent," contrary to the computer metaphor.6 Rather, one thinks by virtue of using, for instance, chalk and chalkboard, pencil and paper, paints and canvas, keyboard and monitor, objective media of all sorts, such that the hardware involved includes more than just connected systems of neurons. And the "software" controls not just brain processes but rather processes which include and exploit regularities and dependable constraints in the agent/world system as a whole.

In this view, thinking is a dynamic process which has basically the same structure as that of perception, although it involves a different and more specific range of agent/world interactions. We therefore get a different notion of correspondence between the things we perceive and whatever it is that we think about those things. In this case, the correspondence is not between entities or processes in two distinct ontological realms but between two kinds of agent/world interaction.

So far we have a rather broad outline of an account of what thinking is and of its general function in experience. We will look at each of the processes of perception and thought more closely later. But first note that even at this broad level of detail, the picture is incomplete in the sense that it depicts only slices or flavors of experience. In particular, what is it that motivates any of this perceptual or reflective activity? What compels it to proceed in one way and not another? How or why does any of this two-way interactivity occur at all? To answer questions like these, Dewey and Mead proposed a theory of experience which was built around the notion of inquiry and problem-solving.7

Neither perception nor thought, alone or together, constitutes the "experience" embodied in the process of problem-solving. In order to complete an account of what experience is, we have to acknowledge a third element or dimension of activity which is independent of perception and thought as such but which is the common basis for their existence in the first place. That is to say, perception and thought occur as pieces or phases or parts of an overarching process which we want to identify as experience. According to Dewey and Mead, this overarching process consists of the activities involved in an agent's attempting to resolve conflicts, breakdowns, predicaments, or otherwise troublesome situations, in some broad sense of those terms. This natural impulse to resolve unresolved situations is characteristic of living systems whose existence depends not just on reproductive capabilities but also on their adaptability to changing conditions. Episodes of resolving discordant situations are the dynamic contexts in which perception and thought take place. Perception and thought have no other function or purpose except insofar as they are motivated, as problem-solving activities, by an innate and overarching impulse to resolve discordant situations.

In this view, experience consists of episodes of problem-solving, which in its simplest form is a matter of an agent's being motivated to maintain some kind of stable existence. It is not that experience occurs within such episodes of problem-solving, but that occasions of experience are such episodes.

These episodes of problem-solving could be termed "inquiries" to the extent that thinking is involved in the process. That is to say, experience can be merely perceptual, as when you can't quite make out a visual image until you squint or move closer to the object in view. This requires only perceptual resolution procedures, not necessarily thought processes. On the other hand, "inquiry," which is a reflective sort of problem-solving, involves both perception and thought.

When we bring these different features together-perception and thought in problem-solving contexts-we come up with something like a corkscrew picture of experience (a "hermeneutic helix"), depicted in Figure 3. According to this picture, experience can be resolved into 1) a linear, progressive, teleological component (pointing in the "direction" of solving a given problem) and, orthogonal to that, 2) a circular, interactive component (consisting of perception and thought processes, which are not always in accord with each other, despite the figure).

[Figure 3: The Corkscrew Pattern of Experience]

The linear dimension of experience, which constitutes its primary impetus and its "directedness," is a process of transformation of an unresolved situation into one which is no longer troublesome or problematic. This progressive, conative component constitutes the basic intentional character of experience. Even the most rudimentary forms of experience involve a kind of reference and attribution, even if this reference and attribution is neither linguistic nor cognitive in nature. Any given episode of experience involves a simple form of indexical reference to the present situation as the "subject" of experience; and it involves the attribution of a "predicate" to that subject, in the form of an acceptance or rejection of the present course of activity (which is the content of the predicate) as being an appropriate response to the problem which gave rise to the "subject" in the first place.

The circular component of experience, on the other hand, consists of interactive processes of perception and thought. These interactive processes constitute the motions of the gears and drive-trains that make the linear transformation of the given situation happen. The story here is that the agent gets into a position to step its way through various transformations of a situation by sifting through details of the given situation and scoping out a space of possible courses of action. Perception and thinking are initiated and conducted only in such contexts of transforming some given situation (otherwise there is no impetus to think or do anything at all). So far as mere perception is concerned, the transformation proceeds more or less automatically according to the dictates of relevant habits, natural dispositions, and circumstantial accidents. To whatever extent thought is involved, this transformation is guided by more or less autonomous processes insofar as alternative courses of action may be explored and attempted according to the dictates of reason. So how do these processes actually work? We will now look at these two "circular" aspects of experience in more detail.

Perhaps we are going a bit overboard with so many line drawings, but the structure of perception looks something like Figure 4. The terminology here is taken mainly from Dewey, with some borrowing from ecological psychology.8

[Figure 4: Perception]

Perception is depicted here not merely as the reception of sensory data but rather as a two-way action/reaction process. An agent's perceptual systems are two-way input/output devices sensitive to continual feedback, not just input-plus-transduction devices. In this view, perception is an interaction between the world at large and an agent which is attuned by evolutionary forces to various processes and constraints in this interactive domain.

This interaction has a more or less cyclical structure, which we can explain by walking our way around Figure 4: In the process of perception, an agent performs actions, as determined by some collection of attunements to various regularities in the world. Such attunements are systematically bundled into more or less definite packages which Dewey and Mead refer to as habits. The actions which an agent performs have repercussions in the world, and the world acts back on the agent in response, producing some kind of detectable results. The results of particular actions are registered by the agent as qualities of the immediate situation, which, as potentially familiar traits characteristic of certain kinds of things, trigger or otherwise activate selected habits and not others. (A notion of noncognitive rationality is introduced into the picture here, as measured by the appropriateness of given habits in given instances. The rationality involved in determining which habits are triggered in a given instance and which are not is a function of the systematicity of the spaces of constraints and processes which make up the contents of the various habits, matched against whatever actions and results are actually occurring in the present situation.) Such triggerings of the agent's habits constitute noncognitive "interpretations" of registered qualities, in the sense that habits bring to bear and otherwise make salient certain noncognitive expectations about what the triggering qualities signify. Such signification occurs on the strength of whatever constraints constitute those habits. That is to say, on the basis of registered qualities, the affordances of things are detected by virtue of whichever habits are triggered by those qualities. But then such affordances determine straight away what further actions are possible; and the process goes around and around, progressively transforming a given perceptual situation as new actions lead to new results and vice versa. All of this fits together to generate the process of perception: a two-way process where actions lead to results, results lead to actions, and the whole process is tempered and modulated by the way the world is and by various habits and attunements brought to bear by the agent.
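Since this cycle is essentially a feedback loop, it may help to see it rendered as one. The following is a minimal sketch in Python, keyed to the cola-cup example in the next paragraph. The names (Habit, world_react, perceive) and the toy qualities and actions are illustrative assumptions, not machinery prescribed by Dewey or Mead.

    # A minimal, runnable sketch of the perceptual cycle of Figure 4.
    class Habit:
        """A bundle of attunements: the qualities that trigger it and the
        further actions (affordances) it makes available once triggered."""
        def __init__(self, name, trigger_qualities, affordances):
            self.name = name
            self.trigger_qualities = set(trigger_qualities)
            self.affordances = list(affordances)

        def triggered_by(self, qualities):
            return self.trigger_qualities <= qualities

    def world_react(action):
        """The world acts back: each action has repercussions, registered
        by the agent as qualities of the immediate situation."""
        outcomes = {
            "look":  {"dark liquid", "bright paper cup"},
            "grasp": {"dark liquid", "bright paper cup", "cool", "light"},
            "sip":   {"sweet", "carbonated"},  # ...or {"oily"}; the world decides
        }
        return outcomes.get(action, set())

    def perceive(habits, first_action, steps=4):
        """Around the cycle: action -> results/qualities -> triggered
        habits -> affordances -> further action, and around again."""
        action, qualities, active = first_action, set(), []
        history = [action]
        for _ in range(steps):
            qualities |= world_react(action)            # results registered as qualities
            active = [h for h in habits if h.triggered_by(qualities)]
            if not active:
                break                                   # nothing familiar is triggered
            options = [a for h in active for a in h.affordances
                       if a not in history]             # affordances delimit further action
            if not options:
                break
            action = options[0]
            history.append(action)
        return qualities, [h.name for h in active]

    cola = Habit("cola-drinking", {"dark liquid", "bright paper cup"},
                 ["grasp", "sip"])
    print(perceive([cola], "look"))  # six registered qualities, ['cola-drinking']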

For example, you walk into a room and there on the table, free for the taking, is a dark substance in a brightly colored paper cup. You will probably see a cola drink, if you are like me, or perhaps you will see coffee or used motor oil or blackstrap molasses, depending on your attunements. Notice that what you perceive is based on a very limited array of qualities that are not sufficient by themselves to uniquely determine what you do, in fact, perceive. Continued visual probing will as likely as not confirm your current perception, but it will not rule out other justifiable possibilities. Your subsequent actions might in fact lead you to alter your perception, such as when you walk over to the table, grasp the cup, bring it to your lips, and sip the liquid. The resulting taste may, in fact, disconfirm whatever expectations you might have had which permitted you to perform those subsequent actions and otherwise treat the substance as if it were cola rather than motor oil. However things turn out, these and other ongoing actions are part of the process of perceiving the cola-like substance. This interaction may be entirely mindless in the sense that it may proceed automatically, requiring no thought or deliberation.

In contrast, thinking is a distinct and more or less independent activity in its own right. Nevertheless, it is patterned after perception so far as its overall structure goes (as depicted in Figure 5). That is to say, thinking is itself a two-way interactive process. But instead of the world at large, it more specifically involves symbol systems embodied one way or another in linguistic, discursive, or expressive media. In the course of thinking, we use objects or events in the world (such as specially designed pencil marks on paper, for a canonical example) to represent other objects or events in the world. In this regard, thinking necessarily employs symbols, which is to say that it is a special kind of situated activity.9 Thinking is not just an internal computation process, not simply neural activity that connects up inputs and outputs in some regular fashion (we will say more about this as we go). It is rather a more or less autonomous two-way action/reaction process, sensitive to continual adjustment and feedback, involving the world as much as it does the brain. It is an interaction between systems of symbols and an agent which is attuned, through evolutionary and developmental processes, to various activities and constraints governing the nature and use of symbols. Working concretely in some such representational domain in the course of thinking, the agent acts on the world, and the world acts back.

[Figure 5: Thought (Reflection)]

This interaction has a more or less cyclical structure, which we can explain by walking our way around Figure 5: Acting within a domain of symbols, the agent is able to formulate certain proposals concerning whatever matters are being represented, thereby positing or formulating certain possibilities as actual, by writing or speaking or using some such expressive medium. These proposals lead to various consequences by virtue of derivations, calculations, proofs, computations, cipherings, or other more or less mechanical processes applied to symbolic formulations of those possibilities. The consequences of given proposals are registered as symbolic expressions (or should we say symbolic impressions), and they are taken as formulations of facts of the case. When presented systematically or otherwise coherently, these facts will be characteristic of certain concepts (in particular, those concepts, or modifications thereof, which suggested the initial proposals to begin with) and not others. (A notion of cognitive rationality is introduced here, as measured by the appropriateness of given concepts in given instances. The rationality involved in determining which concepts get triggered by certain facts and which do not is a function of the systematicity of the contents of various concepts, matched against whatever proposals and consequences are actually derived in the present situation.) The concepts brought to bear in this way constitute interpretations of the given facts, making salient certain cognitive expectations about what those facts mean, on the strength of systematic constraints built into the interpreting concepts. But then such interpretations generate ideas (suggestions) about the scope of possibilities for the case at hand, delimiting what further proposals might consistently be made and how prior proposals might be reformulated. The process goes around and around, serving to plot out details of the current situation, at least tentatively, as regards what is actually the case as well as what the current potentialities are as results of different courses of action. Thinking encompasses this entire two-way cyclical process (deriving consequences from given proposals, and making proposals on the basis of given consequences), the whole process being tempered and modulated by formal properties of the symbol systems one uses and by the conceptual apparatus brought to bear in the process.
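The structural parallel with perception can be made vivid by writing this cycle in exactly the same shape as the perceptual sketch above, reusing its Habit class. Here too the names (derive, reflect) and the toy proposals about the dance floor are hypothetical illustrations, not a canonical rendering of the theory.

    # The reflective cycle of Figure 5, deliberately shaped like perceive():
    # proposals play the role of actions, derived consequences the role of
    # results, formulated facts the role of qualities, and concepts the
    # role of habits. Assumes the Habit class from the sketch above.

    def derive(proposal):
        """More or less mechanical derivation: a symbolically formulated
        proposal leads to consequences, registered as facts of the case."""
        rules = {
            "she wants to dance": {"a smile is to be expected"},
            "she is indifferent": {"no reaction is to be expected"},
        }
        return rules.get(proposal, set())

    def reflect(concepts, first_proposal, steps=4):
        """Around the cycle: proposal -> consequences/facts -> triggered
        concepts -> suggested proposals -> further proposal."""
        proposal, facts = first_proposal, set()
        tried = [proposal]
        for _ in range(steps):
            facts |= derive(proposal)
            active = [c for c in concepts if c.triggered_by(facts)]
            if not active:
                break
            suggestions = [s for c in active for s in c.affordances
                           if s not in tried]           # ideas about what to propose next
            if not suggestions:
                break
            proposal = suggestions[0]
            tried.append(proposal)
        return facts, tried

    # A "concept" here is just a Habit whose triggers and affordances are
    # symbolic formulations; more on this identification below.
    reading_signals = Habit("reading-her-signals",
                            {"a smile is to be expected"},
                            ["she is indifferent"])
    print(reflect([reading_signals], "she wants to dance"))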

The computer metaphor in modern philosophy of mind is not entirely misguided, but consider another analogy. Thinking, in the present view, is not unlike using a clutch-and-transmission system, allowing the agent to disengage itself from concrete problematic activities so that it can readjust its conduct according to changing conditions and foresight into possible consequences of feasible actions. The value of reflective thought lies in its allowing an agent to scope out possibilities, on the basis of results of past and current experience, and thereby to avoid troublesome alternatives and choose more promising ones. Referring back to Figure 3 and perhaps pushing this analogy too far, perception (like an automobile's engine and drive train) inevitably moves the transformation of a situation along, for better or worse, according to the dictates of established habits; whereas mentality (like a clutch and transmission system) is a mechanism by which one disengages from a given situation in order to survey alternative courses of action and to determine how to adjust one's conduct (to shift gears). Once engaged (releasing the clutch), the actual changes in one's conduct should serve to effect transformations in the given situation, presumably to move matters toward a solution to the given problem. Broadly speaking, this clutch mechanism is continually engaged or disengaged as dictated by the circumstances and fortunes of one's ongoing conduct.
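Pressed into code, the analogy might look like the following toy: conduct runs on habit while circumstances stay settled, and the "clutch" disengages for rehearsal only when the situation turns discordant. The situation encoding and the scoring of options are invented purely for illustration.

    # A toy rendering of the clutch-and-transmission analogy.
    def rehearse(options):
        """Disengaged reflection: scope out possibilities and weigh their
        foreseeable consequences without actually suffering them."""
        return max(options, key=lambda course_and_promise: course_and_promise[1])

    def next_move(situation, habit_for, options):
        if not situation["discordant"]:
            # Clutch engaged: habit moves conduct along automatically.
            return habit_for[situation["kind"]]
        # Clutch disengaged: survey alternatives, then re-engage.
        best_course, _ = rehearse(options)
        return best_course

    habit_for = {"dance": "keep dancing"}
    print(next_move({"kind": "dance", "discordant": False}, habit_for, []))
    print(next_move({"kind": "dance", "discordant": True}, habit_for,
                    [("walk straight up to Mary", 0.4),
                     ("drift toward your friend nearby", 0.7)]))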

For instance, as you walk across the dance floor in Mary's direction, you can proceed resolutely according to some fixed plan and walk right up to her no matter what she does. Or you may reflect a bit on what is happening as you go. You don't want to burn out your clutch, but as you walk you might take note of her reactions to your taking this initiative (if there are any) and modify your course of action accordingly. If she looks at you and smiles or turns to face you or some such thing, then, yes, this is going to be a fine evening. But if she frowns and turns away or seems apprehensive or bothered, or worse, if she appears to be entirely unaware of your existence, even now as you walk across the room toward her, then perhaps you should head toward your friend standing nearby and otherwise ease up and look for another opening.

All of this, of course, depends on your being able to read and interpret the gestures and behaviors characteristic of white middle-class suburban high-school dance etiquette. This is not exactly a well-defined symbol system with a formalizable syntax, but it is a definite body of conventions and rules which, by common consent, govern the behaviors of everyone present. Of course, one can and should successfully function in this setting in a largely spontaneous and intuitive manner, without reflecting as much about what is going on as you have just done. But you, after all, are rather unsophisticated and hence a bit ill at ease when it comes to these matters. And besides, you really don't know where you stand so far as Mary is concerned. Her various behaviors are not simple events to you but rather stand as symbolic gestures in the context of a more or less well-defined system of social conventions. You are predisposed, therefore, to try to fathom the significance of her various behaviors and to otherwise think about what you are doing, rather than simply do what you are doing. There is nothing about the present setting by itself that necessarily calls for reflection; rather, it is your being ill at ease which turns the present setting into that kind of situation. Alas, the situation may be so overwhelmingly unfathomable that you continue to do nothing but consider your options, without ever making a move to walk across the floor while you still have an opportunity to do so.

As we have said, the value of thought lies in the capacity it affords the agent to back off from the world in the concrete and deal with it only in terms of possibilities represented symbolically. The evolutionary value of an increased capacity not just to solve problems but to foresee and avoid problems is obvious. One can thereby intelligently formulate and rehearse possible courses of action without being bound to suffer the actual consequences of those actions. This disengaged symbolic activity is useful, of course, only to the extent that one brings the developments of thought back into the concrete world. Though it may proceed independently more or less for its own sake, the primary function of thinking is to monitor and control actions in an efficient and effective way: to avoid problems, but not by avoiding action altogether. You can't hold in the clutch forever. Thinking must answer to the practical application of its results in the real problem-solving situations which give rise to it in the first place. To think is primarily to think about things that matter.

Note that these two kinds of interaction, perception and thought, are on an ontological par. Both are equally accessible and directly comparable. The question of correspondence between thought and perception is then relatively straightforward, posing a concrete empirical problem in the course of given inquiries, but not a philosophical problem as such. Possible actions whose consequences may be scoped out reflectively can be actually performed, and predicted consequences can be compared against actual results. In this way, perception and thought are able to work in unison, so as to be mutually consistent. This functional correspondence in given instances is the measure (or at least one measure) of an inquiry which is succeeding.

Mead and Dewey would seem to agree with Kant at least in this one regard (despite so many differences otherwise), namely, that an agent's autonomy is given by that agent's ability to exercise reason and to apply the results of rational thought to the control of its own conduct.10 The sense of freedom that we have as rational agents is a sense that the choices we make and the actions we perform on the basis of those choices are able to change the world in some sense, such that the world is not just changing itself through us but that our own cognitive decision-making processes take place and yield results independently of the rest of the world. But this is precisely what is meant by likening thought to a clutch-and-transmission system. For Mead and Dewey, freedom is a particular capacity (more or less forced on us) to deal with contexts where there is some impetus to coordinate contrary impulses and to otherwise weigh incompatible alternatives as to possible courses of events. Conflict encourages detachment, detachment encourages reflection, and that in turn may give rise to altered courses of activity. It is not that thought does not have a naturally systematic character just like anything else in the world, but that an agent's mental faculty is a piece of the world that is designed to operate autonomously to varying degrees. As such, thinking is the basis of our spontaneity in a universe otherwise governed by natural law; not that thinking isn't as lawful as anything else, but that it is by design a free-running bit of machinery capable of engaging with and disengaging from everything else (more or less) on the basis of its own operations. That we have such a capacity, especially as brought to light in uncertain situations where the whole point is to determine how to readjust one's conduct in the face of conflicting circumstances, is what fosters our sense of free-agency and efficacy in a world in which we otherwise seem to be buffeted by forces beyond our control.

The AI enterprise on this view should be aimed at understanding not just the systematic character of symbol systems, but also the character of mentality as a capacity of an agent to engage and disengage itself from activities ordinarily driven by habit and blind appetitive impulse. Thinking is a process of sorting out the possible courses of events in a given situation from a distance, but generally so as to monitor and control the transformation of that situation more effectively. It hardly seems appropriate to characterize thinking or perception as mere computation, unless this metaphor is developed in such a way as to be embedded within some such broader theory of situated activity. In appealing to notions of computation, representation, and symbolic processing as central to a theory of intelligence,11 what is actually meant by a "symbol" or a "representation," and, more specifically, where are such things to be found?

In the present sense of those terms and contrary to the computer metaphor, perception by itself does not employ symbols and representations, though it does involve organized systems of correlated activities that include states and processes that are in some sense "internal" to the agent. To refer to such internal states and processes as symbols or representations of the external states and processes to which they are dynamically tied is like saying the front half of a car is a representation of the rear half, or that gasoline is a representation of a carburetor, or that a car as a whole is a representation of the road it is driven on. The road signifies the potential presence of automobiles, but it is not on those grounds a symbol or representation of automobiles. A footprint in the sand is evidence of a person having passed by recently, but it is not thereby a representation of such a person. If anything, it is a detached presentation of that person, the result of one way in which that person has been presented to you. The same can be said of internal states (e.g., qualitative results of motor activities) which are integral to the overall system of agent/world interactions which constitute perception.

Symbols are something else altogether besides significant correlates of other things. It is a waste of a good technical term, and one that collapses important theoretical distinctions, to refer across the board to states and processes inside the agent as "symbolic." It goes without saying that symbols are functional correlates of other things; but more specifically, they are things in the world that stand in place of other things in the world in the course of representational activity, in a robust and literal sense of representation. In this richer sense of the term, contrary to the computer metaphor, symbols may be anywhere in the world, not necessarily in the head, and only so long as they function properly in representational activity.

Suppose for instance that you are playing sandlot football (U.S.-style). In the huddle, the quarterback draws some Xs in the sand. "This is you," he or she says, pointing to you and then to one of the Xs, and then draws and explains some arching lines from and around various Xs to picture how the next play will go. Perhaps discussion ensues and different possible plays are considered. In any case, the Xs and lines are literally representations of people and projected paths of motion, by virtue of their function in the planning by means of which your team coordinates its next action. Our thesis here is that perception deals with presentations of things, like footprints in the sand, whereas reflection trades in representations, like the Xs and arcs.

It is somewhat ironic that the systematic bundles of attunements which constitute habits for dealing with presentations of things (cf. the lower dotted oval in Figure 4) answer fairly closely to what Vera and Simon mean by symbol systems. But rather than a general theory of cognition, Vera and Simon are proposing an account of something which Dewey and Mead would identify as just one piece of the perception puzzle. Habits are in a sense input/output systems internal to perceptual processes, and they very likely can be modeled as computational systems; but such computational systems are not "symbol systems" in the present sense of the term, even if they take as inputs and outputs objects which the cognitive scientist can read as symbols. What matters is how they function in the specimen agent's activities; and in the case of that agent's perception, such inputs are direct indications of things, not representations of them. On this account, any such computational system constitutes only a piece of the picture, not a full account, of what perception is, which is itself only a piece of the picture of what intelligent agency is. A standard symbol-systems approach to AI, besides watering down the notion of a "symbol" and otherwise collapsing the distinction between presentations and representations of objects, generally lacks an account of the functional context in which symbol systems are orchestrated to interpret and steer behavior. It is as if Vera and Simon were claiming that we would have a theory of driving once we had a schematic for a piece of an automobile. A theory of how an automobile engine works, for instance, is important to, but does not constitute, a theory of how the engine is used in driving. One also needs an account of the rest of the automobile, not to mention some sense of the various traffic patterns and terrains in which different functions of the engine come into play, to have a full account of what an engine is. Similarly, a theory of symbol systems cannot by itself yield an adequate explanation of cognitive abilities, even once we distinguish symbol systems as such from the computational systems that constitute habits. Each of these various systems is only a separate piece of what constitutes intelligence.

If thinking and perceiving are mere computing, then computing is something that encompasses a world of symbol-processing and motor activity, out there on the dance floor, not just inside the head. One's perceptions and beliefs and desires take shape by virtue of such activity, by no other means and for no other purpose.

But now there is the harder question of why and how things are this way. One might allow that the present account of perception and thought up to this point is plausible, but why think this is the way things are?

In particular, the account at times sounds rather behavioristic, in the worst sense of the term. How would Mead and Dewey explain the internal brainy feel of silent thinking where one does not use pencil and paper or chalk and chalkboard? What is going on as you stand there looking at Mary or staring at the floor, tracing out possible courses of action but being too unsure of yourself to make a move? In a great many such cases, thinking would appear to take place solely within the agent, not in some interactive agent/world domain.

In talking about thinking as an autonomous activity, have we introduced some kind of unbridgeable duality between thought and perception? What is the connection, if any, between these two kinds of agent/world interaction? While presumably they are structurally similar, they still seem like two entirely different kinds of activity. Mead and Dewey maintain that there is some sort of continuity between perceptual and reflective activities, so that they are pieces of a single fabric. In this view, thinking is a unique kind of activity in its own right; but allegedly it is a variation on a theme already at work in the dynamics of perception. But why think of thinking in this way as a variation on a theme? And why that particular theme? How do we account for the continuity?

It has been pointed out that thinking involves the use of concepts. But what does that mean? What is a concept? For instance, you apparently have certain concepts about what high-school dances are all about, and you are drawing on those concepts accordingly to come to terms with the present situation. According to Mead and Dewey, concepts as such are not really learned by a cognitive agent until they can be used by that agent as means for solving problems and otherwise for making one's way through the world. In particular, concepts play the same role in thought which habits and attunements play in perception; but are concepts something different from habits? Happily, nothing we have said so far suggests an ontological distinction between habits and concepts. The difference is presumably a functional one. It is clear that not all habits are conceptual, in that not all habits deal with the use of symbols and symbol systems. But concepts are those habits that do pertain to the use of symbols and which thereby function as such in thought processes. Not all habits are concepts, but all concepts are at bottom habits of a particular sort. We might even suppose that in order to build a machine that can think, we first need to build a machine that can perceive, and then apply the same design principles to activity within the more limited domain of symbol systems.
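In the vocabulary of the earlier sketches, this functional claim amounts to a simple subclass relation: a concept is a habit whose triggers and affordances are confined to a symbolic domain. The is_symbolic test below is a placeholder assumption, standing in for whatever actually distinguishes symbolic from nonsymbolic media.

    # The functional (not ontological) relation just described, continuing
    # the Habit sketch above: not all habits are concepts, but every
    # concept is a habit of a particular sort.
    def is_symbolic(item):
        # Stand-in: a real model would check membership in some
        # conventional symbol system, not merely that the item is text.
        return isinstance(item, str)

    class Concept(Habit):
        """Not a new ontological kind: a habit confined to a symbolic
        domain, and thereby fit to function in thought."""
        def __init__(self, name, trigger_facts, suggestions):
            assert all(is_symbolic(x)
                       for x in set(trigger_facts) | set(suggestions))
            super().__init__(name, trigger_facts, suggestions)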

Or is it that simple? How did machines that can think naturally emerge? Mead and Dewey were not motivated to build artificial thinking machines; but they were motivated to understand the nature of mentality as a natural phenomenon, and the view they developed was decidedly evolutionary in character. According to this view, perceptual and thinking abilities are evolutionarily (and hence developmentally) linked by an intermediate series of similar interactive capabilities. The emergence of thinking as a natural sort of activity constitutes not an ontological bifurcation but rather an evolutionary expansion or extension or specialization of already-existing capabilities. Whether or not it helps in the effort to build an artificial thinking machine, we can say something fairly definite about this evolutionary connection, drawing more perhaps on Mead's views than on Dewey's. To understand their account of this evolutionary linkage, consider Figure 6.

[Figure 6: Evolutionary Development of Reflective Abilities]

The defining characteristic of thought is that it is a representational activity in a representational domain. To account for how thinking came about, one has to give an account of the natural emergence of such representational domains. One might try to tell this story in neuropsychological terms. For Mead and Dewey, that is not going to be enough. An essential part of the story about the connectedness of perception and thought involves the fact that we are social animals. Perhaps contingently but nevertheless as a matter of fact, we are mental creatures by virtue of having evolved and developed as social creatures. Thinking is built upon an edifice that includes not only perceptual capabilities but also social and cultural features. Symbol systems, as the objective media of thought, are features or elements or aspects of a cultural milieu that have come about as a result of the need to coordinate shared activities and to stabilize our capabilities to do so. Your high-school dance situation illustrates not just a particular scenario where thinking might occur; it is the kind of social interactive ooze out of which human thinking abilities emerged in the first place.

Let's work through Figure 6 in detail, from left to right. We have already looked at the nature of perception in the discussion of Figure 4. As a next step, we should be able to take as a premise the claim that human individuals exist as members of some kind of social system. Like anything else, a social sphere is a subdomain of the world at large, not something distinct from it. And as a relatively specific kind of agent/world interaction, the overall structure of social activity is not all that different from the structure of perception, though it involves a more limited range of actions and results (such as nonverbal gestures and responses to gestures).

The following description is too brief, but it is just a matter of walking our way around Figure 7: In accordance with various manners and social dispositions developed (like any other habits) through extended social interaction, the individual interacts with other individuals by way of gestures and other means for signaling and influencing others. Such actions bring about particular reactions from others-approvals, disapprovals, protests, encouragements, support, or resistance. When interpreted in terms of one's given manners and social dispositions, such reactions engender certain social attitudes which determine one's subsequent activity. And around and around it goes. What is happening on the dance floor is not an exercise in mere perception, but rather it is also a social interaction geared to relatively complex systems of conventional constraints.

[Figure 7: Social Interaction]
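As with the perceptual and reflective cycles, this loop can be sketched in the same minimal style. The repertoire of gestures, the canned reactions, and the majority-approval rule below are invented stand-ins for the manners and dispositions just described.

    # One individual's walk around Figure 7's cycle: gesture -> others'
    # reactions -> social attitude -> adjusted gesture.
    def interact(individual, others, rounds=3):
        for _ in range(rounds):
            gesture = individual["repertoire"][individual["attitude"]]
            reactions = [respond(gesture) for respond in others]
            approvals = sum(r == "approve" for r in reactions)
            # Reactions, read through one's dispositions, engender the
            # social attitude that determines subsequent activity.
            individual["attitude"] = ("forward" if approvals > len(others) / 2
                                      else "hesitant")
        return individual["attitude"]

    mary = lambda gesture: "approve" if gesture in ("glance", "smile") else "ignore"
    you = {"attitude": "hesitant",
           "repertoire": {"hesitant": "glance", "forward": "smile"}}
    print(interact(you, [mary]))  # -> forward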

While not intrinsically mental in itself (in that it neither presupposes nor requires reflective processes), social activity yields new and unique capabilities over and above mere perception. The fundamental motivation of social activity is a drive to coordinate shared activity. Many of the actions we perform as a social group are not simply decomposable into the actions of individuals; rather, they are distributed across a social system as shared acts and exist only in this distributed sense, like dancing, for example. A social system must be able to function as a single agent, performing such communal actions as hunting, fighting, dancing, loving, playing games, buying and selling, teaching and learning. The distributed character of social activity introduces evolutionary potentials which many animal species seem to have picked up on and developed more or less extensively while others have not. Compare, for instance, the mating practices of a flounder with those of humans and other primates. Even if the sexual activity of significant numbers of human beings were comparable in its social character to that of flatfish, this would not negate the fact that, for the most part, human sexual activity is embedded in relatively complex systems of social ritual and convention with recognized conditions and consequences beyond itself. Your problem on the dance floor comes down to whether or not you and Mary can achieve some kind of coordination, on the dance floor itself but also in terms of any consequences that coordination might have in that and other social contexts.

The individual member of a society cannot help but be affected by the fact that the requisite attunements and habits that such membership engenders involve constraints of a looser kind than is typical of mere perception. Social activity involves more sensitive and less reliable means of information flow and communication, requiring conventional rules, agreements, contracts, and policies (which one can only presume are secure) in order to maintain some kind of regularity and systematicity in complex social spheres. The greater likelihood of uncertainty in social situations, plus the fact that one's own actions do not simply bring about reactions from others but also partially determine social reality, tends to reveal us to ourselves in ways that mere perception is not likely to do. In this sense, it is within contexts of shared experience, of socially distributed experience, that an individual self can start to emerge as a distinct, objective constituent of reality. A social sphere creates a space in which the self can exist as such. The fact that other individuals react so sensitively to your actions, and that you react to their actions in similar ways, brings attention to the efficacy and reactivity of your own attitudes, in a way that is not present in ordinary perception. (As we will see, this tendency of social problem-solving to reveal us to ourselves constitutes the social basis of mentality.)

It is not impossible that a sense of self could emerge in nonsocial agents, e.g., by virtue of perceptual activities that do not involve social coordination. But this does not seem so likely, given that one's actions in perceptual activities do not have the same salience and level of effectiveness characteristic of social situations, where one's actions normally contribute to making reality, in the way, for example, that one's queries in a conversation help to constitute the very discourse one is trying to comprehend. Perceptual capabilities supply the basic pattern of social interaction, but perception normally does not involve the same levels of indeterminacy nor the same kinds of resolution procedures characteristic of social interaction. One does not dance with a tree in order to perceive a tree, even though perceiving a tree does involve a kind of interaction with the tree. The activities intrinsic to perception, while effecting a kind of phenomenal filtering process, serve nevertheless to reveal the affordances of a largely independent world, whereas the social reality of which we are a part and the activities inherent in discerning social reality are often one and the same thing. That is, what we do to discern social reality is often precisely what we do to make social reality: not just to perceive other human beings but to engage in social relationships. This is, in fact, something of a dilemma for an anthropologist, namely, to have to influence a given society just to be able to observe it. Plain sensory perception does not bring us around to ourselves in the same way. It does not make us aware of ourselves as autonomous agents in quite the way that social interactions do. In fact, it is hardly obvious that perception is "interactive" at all, which would account for why a passive-receptive view of perception such as Locke's could be taken seriously for so long. (Unfortunately this passive-receptive view of perception still has far too much influence on how human agency is viewed by the AI community.)

From an evolutionary perspective, we should expect that the volatility of constraints in a social sphere will engender a more salient sense of possibility, due to the greater likelihood of breakdown in social coordination. That is, a greater likelihood of social discordance requires a greater facility not only to formulate and exploit possibilities but also to design and appropriately implement contingency plans. Processes of planning and rehearsing lines of conduct are valuable as means for establishing reliable and effective courses of shared activity. The foresight afforded by planning no doubt facilitates identification and avoidance of potential pitfalls, and this capability is surely useful in situations where given constraints are relatively unreliable, such as in a social sphere. Distance detection in social space is somewhat more complex than it is in visual space, since the constraints structuring social space are conventional and relatively unreliable, whereas those structuring visual space are comparatively stable. While planning of a sort is already present in the very design of perception (by virtue of sensitivities to the affordances of things), social activities call for an enhanced refinement and cultivation of broader based abilities to exercise foresight.

In earlier evolutionary eras, such planning was most likely simple in character. Nothing we have said so far introduces or allows recourse to mental abilities. Without assuming the existence of mind to begin with, can we describe primitive social settings where we could imagine reflective activities emerging out of communicative activities more broadly? We run the risk of sounding like nineteenth-century evolutionists to even ask such a question. But the point here is to try to give an evolutionary account of the emergence of human reflective abilities. Can we imagine such abilities coming about where all we have to start with are agents who have perceptual capabilities and who are otherwise sensitive to social constraints? Full-fledged reflective capabilities aside, how about the emergence of simple representational acts to begin with? Dance, for one thing, seems like a typical candidate for a prereflective communicative activity which might yield a kind of representation.12 That is, if dancing were to have any significance beyond itself, it might be as a matter of recounting or mimicking successful exploits that are somehow worth relating to others. But then such activity becomes more than just "significant" beyond itself. It is a kind of representation. The earliest kind of planning might have been indistinguishable from acting out successful feats relevant to the social group as a whole (hunting, fighting, etc.), where recounting and rehearsing such actions would amount to the same thing. A younger individual could learn to participate in such collective endeavors by first playing a role in their rehearsal. A social group, as a singular agent in its own right, could thereby develop a looser kind of coordination than what underlies perceptual capabilities of single individuals (as with hand/eye coordination, etc.); but the basic idea is the same.

The next step in the evolutionary story would be that the value of rehearsing-of looking backward and forward in experience, of exploiting memory and of making predictions-encourages the standardization of representational gestures and the institution of routine customs, icons, rituals, ceremonies, and any other such means of effectively formulating and codifying-and thereby strengthening, solidifying, and stabilizing-social constraints and processes. It is by such means that a social sphere can give rise to a transmittable cultural milieu of some sort. We thus move further to the right in Figure 6.

Cultural milieux afford a new and different kind of agent/world activity, thus allowing the development of new kinds of operational capabilities. The main difference between social and cultural activity is that social interaction is directed at other individuals, whereas cultural activity is directed at the community at large. One participates in a culture as a kind of generalized conversation, using methods and media that everyone finds significant, so that one converses with the community at large rather than specifically with other individuals. Socially, one engages in give-and-take with other individuals, whereas culturally, one engages in give-and-take with one's society-as-a-whole-one dons a certain kind of clothing or hairstyle, writes a poem, publishes a paper, paints a painting, performs a rite of some kind, as an utterance directed to one's whole society. Today, in the United States, one is educated, holds down a job, pays one's taxes, votes, watches or reads the news, as manners of conduct within a particular culture. One helps to make and modify the culture not just by writing books or making movies or inventing new appliances but also on the basis of what one chooses to read, watch, or use. Otherwise the structure of participation in a culture, broadly conceived, is basically the same as the structure of social activity and hence of perception. This is depicted in Figure 8.

figure 8: Participation in a Culture

It is by way of participation in a culture that an individual develops a sense of personal character and hence a sense of self. To begin with, there is a precognitive significance available even in simple social interactions, given one's sensitivity to the effects of one's own attitudes, postures, and conduct on the attitudes, postures, and conduct of other individuals (and vice versa). It is within a social sphere that the self emerges as an object. Your self-absorption on the dance floor is evidence of this tendency. But a sense of oneself, not just as a felt-object but as an object with characteristic traits, is engendered not merely by virtue of social sensitivities, but in and by a process of gaining a sense of one's culture. Having a sense of one's community supplies one with a sense of oneself insofar as one's self is thereby an instantiation of a kind. It probably did not occur to you at the time, but you were not just at a dance but at a white middle-class suburban U.S. high-school dance. The latter characterization of yourself, and in much more detail, would actually come later, in retrospect, against the backdrop of a very different sense of who you are, as tempered by a broader sense of the cultural diversity that exists in the world and of where you stand in this diverse milieu. But at the time, whether or not you were cognizant of it, that is where you in fact stood and you must have had some grasp of it in order to take part in that activity in the first place.

The emergence of a self as a felt-object is a necessary but not sufficient condition for what we think of as human mentality. More is needed beyond mere reference to a self. Having and exercising a bona fide sense of character and identity, of regarding oneself as an instantiation of a kind, is also a prerequisite for having and exercising mental abilities. Gaining a sense of one's culture provides the means needed to classify oneself; and once objectified as such, this sense of oneself provides the raw materials for the evolutionary structuring of mental activities. That is, participating in a culture is like a conversation with another individual, except that the other "individual" is one's very own culture. Mental abilities, in a proper sense of the term, come about as a matter of reflexivizing discourse with this so-called "generalized other," that is, of reflexivizing the process of participation in a culture. Mentality emerges as a process of using stabilized means of social communication (languages) to converse with oneself in ways typical of those means of communication. That is to say, as languages become stable, one becomes sensitive to typical responses to standard actions, to some extent or other, so that, as if in a linguistic dance with oneself, one can simultaneously play the role of respondent to one's own utterances. This is accomplished by objectifying a sense of one's own culture, that is, a sense of what it is to be a member of a given society, and conversing as if this part of oneself were a typical "other." This process is successful in proportion to the stability, reliability, systematicity, and expressiveness of the means of communication which constitute one's culture. Such means of communication, thus objectified, are then able to function in their own right as an independent domain of activity, providing the ways and means of "reflection" to the extent that such reflexive activity may (but need not) have some representational role to play in this or that concrete context. Language (or culture, more broadly), in this reflexive representational capacity and to whatever extent it is in fact reliably systematic and hence objectifiable, thus gives rise to full-blown symbol systems.

Current AI research does not evidently draw on any such cultural grounding of symbols when referring to intelligent agents as symbol systems, drawing instead on formal "languages" used to program computers to cash out what they mean by a symbol.13 In any case, whatever one might think a symbol is, it is not enough to talk merely about symbol processing in order to explain intelligence in general and mental capabilities in particular, even if symbol processing is an essential part of the story. Rather, more to the point, one must center such talk about symbol systems around the fact that it is a sense of one's culture which gives rise to the kind of symbol processing which we refer to as thinking, insofar as we view thinking as the self (as an individual agent) talking to itself (in the guise of a generalized other). Without this general sense of one's culture, one has no "inner self" with which to converse. Such reflexive discourse becomes possible as one starts to grasp the social and cultural significance of one's own utterances in such a way as to be attuned already to how others would respond. One can thereby respond to one's own utterances just as others would. Full-fledged reflective thought is able to take place once one is able to objectify languages and use them as the means and media for carrying on this conversation in a representational domain. In this view, language is the medium not just of conversation but of thought itself-language, that is, in a full-blown natural sense, loaded with cultural content and patterns of meaning, rather than in the comparatively anemic syntactic sense of an implicit "language of thought" based on an analogy with FORTRAN, LISP, or the predicate calculus.14

Of course, the emergence of mind in a cultural setting is going to feed back onto the social and cultural world which gave rise to it. We should not underemphasize the capacity which mind affords the individual agent to be uniquely different-not just another carbon copy of one and the same cultural identity. One is compelled to be different, if only to maintain a place (territory, status, standing) in one's social and cultural environment. The story we have told here is primarily about the emergence of mind and not so much about further evolutionary consequences of its existence. In particular, it does not follow from what we have said that we should all be so similar, given that no two individuals can have the same perspective on the world, including its social and cultural domains (so that my "generalized other" will vary to some degree from yours); yet what we have said comfortably allows for the degree to which we are in fact so similar by virtue of shared cultural traditions.

But this now basically completes an account of the developmental connection between perception and thought, at least in rough outline. Symbol systems in particular are a special manifestation of cultural artifacts, serving as agent-independent media that give form and substance to thinking.

Certainly, now that human beings have a capacity to think, this capacity is what it is on its own terms. An individual does not have to be particularly communicative or overtly social in order to exercise reflective abilities. Mead and Dewey were arguing rather that these abilities initially developed, in the history of the species, as a refinement of our social and cultural nature. Otherwise, thinking is a process carried out by the individual as part of that individual's experience. The claim here is not that an individual alone on a desert island would not be able to think. Rather, the claim is that, whether on a desert island or in the middle of a crowd, an individual whose species does not have the right kind of social and cultural evolutionary history will not have thinking abilities in the first place.

Despite the origins of mentality in overtly social and cultural domains of activity, there is no reason to think that over the course of time an ability to reflexivize this activity would not become so refined, efficient, and "intimate" that the outward features of "conversation" might be grossly altered if not completely refined away, in which case not all thinking would come across literally as verbally talking to oneself. Perhaps the evolutionary relationship between thinking and conversation is analogous to that between humans and other existing primates, in the sense that thinking did not evolve from anything like modern conversation, but rather evolved with it, from some common source in early preverbal sociocultural communicative practices. This does not weaken the present thesis, but only suggests why thinking silently would not be like reading The Times or talking on the telephone.

In any case, there is also no reason to think that our thinking would not exploit and take explicit advantage of its overt social and cultural origins whenever that is reasonable-for example, not to proceed silently and invisibly, but to extend itself in the world, to use sounds, scripts, expressive media of any kind, to take notes, to make lists, to write out formulas, to sketch diagrams, to give its processes and products some kind of external substance. It is quite likely that the latter kind of thinking-using external media to give form and stability to one's representational activity-is far more characteristic of thought than is armchair reverie, wherein the objects and processes of agent/world interaction take place as a silent reflexive conversation. To say that this activity is reflexivized does not entail that it is internalized as a silent and invisible process. If it is ever entirely internalized, then the organism is nevertheless in that instance playing the role of both agent and world. But more commonly, in this view, thinking takes place overtly in a domain of agent/world interaction. In that sense, mind is a process not enclosed within the agent, much less located in the brain.

The significant symbol is then the gesture, the sign, the word which is addressed to the self when it is addressed to another individual, and is addressed to another, in form to all other individuals, when it is addressed to the self ...

[I]nsofar as thought-that inner conversation in which objects as stimuli are both separated from and related to their responses-is identified with consciousness, that is insofar as consciousness is identified with awareness, it is the result of this development of the self in experience ...

Mind, which is a process within which [analyses of objects] take place, lies in a field of conduct between a specific individual and the environment, in which the individual is able, through the generalized attitude he assumes, to make use of symbolic gestures, i.e., terms, which are significant to all including himself. While the conflict of reactions [which separates objects from (and relates them to) their meanings] takes place within the individual, the analysis takes place in the objects. Mind is then a field that is not confined to the individual, much less is located in a brain. Significance belongs to things in their relations to individuals. It does not lie in mental processes which are enclosed within individuals.15

In broad strokes, this is the picture of mentality and thought found in the social psychology of Mead and Dewey. This picture is clearly at odds with the ontological dualism of classical epistemology. It is not simple, but it enjoys a notable degree of breadth and coherence, and it is not subject to the metaphysical and epistemological dilemmas and puzzles characteristic of classical views. Descartes found epistemological bedrock in a phenomenalistic "I think, I am." For different reasons and with a different purpose in mind, the pattern of inference outlined in Dewey's and Mead's social-psychological philosophy of mind is more along the lines of "I dance, I am, I think."

It remains unclear how this social-psychological view of human mentality might inform ongoing research in AI and robotics, much of which is based on a view of an artificial mind encapsulated inside a machine's computer-brain, set over and against an external world. Even without the social element, AI research would in the present view do better to think of mind as residing in a domain of agent/world interactivity, and to think of thought and information-processing more generally as an interactive process, not as a unilateral algorithmic process enclosed in some kind of a box with input and output slots. It is hardly a minor fact that research in robotics usually requires some engineering of the environment in which the robot works (quite heavily in some cases, e.g., as with the early Shakey project at SRI)16 or else long training periods in which the robot's software attunes itself to specific physical domains. This has been viewed as a temporary ad hoc fix, to be dispensed with once we figure out how to design the software so that we can plunk the robot down in arbitrary environments. We certainly want a robot to have as much flexibility and adaptability as possible, but the lesson that has been missed here is that the problem does not and cannot center around designing software solely for the robot's head but must also include structures and processes in the environment as part of the robot's architecture.17
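To make the contrast concrete, here is a minimal sketch-in Python, with every class, cue, and method invented purely for illustration, modeled on no actual robotics system-of the difference between a mind-in-a-box architecture and one that counts engineered structures in the environment as part of the agent's own architecture:

    # A minimal, hypothetical sketch of the architectural contrast discussed
    # above; nothing here models Shakey or any other actual robot.

    class Environment:
        """Engineered scaffolding: markers that designers placed in the world."""
        def __init__(self):
            self.markers = ["charging-dock", "corridor-stripe", "door-beacon"]
            self.history = []

        def salient_cue(self):
            # The world, not the agent's software, supplies this structure.
            return self.markers[len(self.history) % len(self.markers)]

        def register(self, action):
            # The agent's action becomes part of the environment's state.
            self.history.append(action)

    class BoxedAgent:
        """Mind-in-a-box: a unilateral percept -> algorithm -> output pipeline."""
        def act(self, percept):
            return "plan computed entirely inside the box from " + repr(percept)

    class InteractiveAgent:
        """Counts structures and processes in the environment as part of its
        own architecture; its competence is defined only relative to them."""
        def __init__(self, environment):
            self.environment = environment

        def act(self):
            cue = self.environment.salient_cue()   # world supplies structure
            response = "follow " + cue             # agent supplies reaction
            self.environment.register(response)    # acting reshapes the world
            return response

    print(BoxedAgent().act("camera frame"))
    robot = InteractiveAgent(Environment())
    for _ in range(3):
        print(robot.act())

The point of the sketch is only that the InteractiveAgent's behavior cannot be specified, let alone debugged, without specifying the Environment along with it; the scaffolding is not an ad hoc fix but part of the design.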

But more than that, it is only with a social dimension added to this interactive view, not as a behavioral goal but as a condition of initial design, that AI research can hope to achieve some kind of success in its more ambitious aims (e.g., to build a machine that is able to converse in English or some other natural language). It is only by virtue of such considerations that some kind of practical connection is made with what it is that makes symbol systems and representational information processing what they are in the first place. Or is this claim too strong? This is not to say that thinking, real or artificial, is necessarily a social process and so must be designed as such, nor is it a claim that an individual, real or artificial, can think only if it maintains some kind of social life. Thinking, as a natural evolutionary phenomenon, is obviously something that an individual can do all by itself, assuming the individual has that capacity to begin with. So why can't we build an artificial agent so that it can do this very same thing, all by itself, without our having to mimic the social and cultural developments out of which this capacity actually naturally emerged in the development of one particular species (viz., us)? How do we instill in an artificial agent a capacity to think in the first place, and more specifically, does this absolutely require socialization processes or can we circumvent that part of actual human evolution as just one out of any number of ways that such a capacity might come to exist?

The evolutionary story outlined by Mead and Dewey is remarkable even if taken as nothing more than a plausibility argument for one particular naturalistic philosophy of mind. In outlining at least one way to think about thinking as a natural evolutionary phenomenon, they have avoided a number of classical philosophical pitfalls and otherwise demonstrated how to move the study of mind out of metaphysics and into the domain of science. Let us assume that they are right in broad outline about what thinking is and how it actually came into being. Are they bound to claim that this is the only way that thinking might come about? The answer seems to be yes. Thinking is a kind of reflexive discourse in which a self interacts with what is essentially itself. Thinking presupposes not just reference to a self but also a kind of activity in which the self is objectified as one of a kind. In this view, an artificial thinking machine is going to have to have a sense of self which it can in turn objectify as such. If we could explain how to give the machine a sense of self without utilizing some kind of real or simulated socialization process (in which the agent accrues the habits which eventually constitute its sense of identity), then the stronger claim would be undermined. But is this likely? In objectifying itself, an agent will necessarily classify itself as one of some kind or other and interact with itself in ways appropriate to that kind of thing. Talking is not something that we normally do with trees and chairs. Rather, the story is that talking and other refinements of gestural communication are something we do with others of our own kind by virtue of their being of our own kind. Without some sense of one's own kind, i.e., some sense of a generalized other, there is no reliable sense of systematic communication, hence no objective identification of self, and hence no thinking. We might aim to code up some such generalized other and work that into the machine's software, and we might even try to avoid having to train the thing in actual social settings; but the aim in that case would be to give it precisely what would eventually come about in the course of some such socialization process. If not a homunculus, we would be attempting to build into the machine what is essentially a sense of membership in a discourse community.
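For what it is worth, the shape of this conclusion can be put in a few lines of code. The following toy-in Python, where the table of typical responses is a hypothetical stand-in for the habits that an actual socialization process would accrue-shows thinking as reflexive conversation: the agent addresses itself and answers in the role of a generalized other:

    # A toy, hypothetical sketch of reflexive discourse with a "generalized
    # other"; the response table stands in for what socialization would
    # actually have to supply.

    TYPICAL_RESPONSES = {
        "want to dance?": "depends -- do you mean it?",
        "depends -- do you mean it?": "yes, I mean it",
        "yes, I mean it": "then let's dance",
    }

    def generalized_other(utterance):
        """Respond as a typical member of the discourse community would."""
        return TYPICAL_RESPONSES.get(utterance, "I don't follow")

    def think(opening, steps=3):
        """Self-conversation: utter, then answer in the role of the
        generalized other, and continue the exchange."""
        transcript = [opening]
        for _ in range(steps):
            transcript.append(generalized_other(transcript[-1]))
        return transcript

    print(think("want to dance?"))

Notice that all the work here is done by the response table, which is precisely the part we do not know how to write down except by pointing to a socialization process; the conversational loop itself is trivial.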

The point is simple enough. Without socialization, the machine has no way to classify and hence objectify itself. Socialization has to be part of the process of building a thinking machine. If this sounds like a theoretically intractable or impractical admonition, then so much the worse for the AI enterprise. In that case we will have to be content with trying to build what are essentially nonthinking artificial animals able simply to run on automatic or not at all.


Notes

I would like to thank Güven Güzeldere, Larry Hickman, Laura Kerr, Allen Poteshman, Crystal Thorpe, and participants in the 1993-94 Symbolic-Systems-in-Education seminar at Stanford, particularly John Baugh, Randi Engle, Bob Floden, Jim Greeno, Mimi Ito, Jan Kerkhoven, Laura Kerr, Ray McDermott, Denis Phillips, Christian Rohrer, and Decker Walker, for challenging discussions and useful comments on the ideas in this paper. This is an expanded version of a paper presented at the March 1994 meeting of the Society for the Advancement of American Philosophy, at Rice University. This work was supported by the National Academy of Education under the Spencer Post-doctoral Fellowship program.

1 Bobby Freeman, "Do You Wanna Dance?" (New York: Josie Records, 1958).

2 Jo Ann Boydston, ed., John Dewey: The Early Works, 1882-1898 (Vols. 1-5), John Dewey: The Middle Works, 1899-1924 (Vols. 1-15), and John Dewey: The Later Works, 1925-1953 (Vols. 1-17) (Carbondale: Southern Illinois UP, 1967-1990); John Dewey, "The Reflex Arc Concept in Psychology," Psychological Review 3 (1896) 357-370, rpt. Boydston, Early Works 5:96-109; Human Nature and Conduct (New York: Holt, 1922), rpt. Boydston, Middle Works 14; Experience and Nature (Chicago: Open Court, 1926), rpt. Boydston, Later Works 1; How We Think (Chicago: Henry Regnery, 1933), rpt. Boydston, Later Works 8:105-354; Logic: The Theory of Inquiry (New York: Holt, 1938), rpt. Boydston, Later Works 12; George Herbert Mead, On Social Psychology, ed. Anselm Strauss (Chicago: University of Chicago Press, 1956); Selected Writings, ed. Andrew J. Reck (Chicago: University of Chicago Press, 1964); "Social Consciousness and the Consciousness of Meaning," Psychological Bulletin 7 (1909) 397-405, rpt. Mead, Selected Writings 123-133; "A Behavioristic Account of the Significant Symbol," Journal of Philosophy 19 (1922) 157-163, rpt. Mead, Selected Writings 240-247.

3 John Haugeland, ed., Mind Design: Philosophy, Psychology, Artificial Intelligence (Cambridge, MA: MIT Press, 1981); Hubert L. Dreyfus, What Computers Can't Do: A Critique of Artificial Reason (New York: Harper, 1972); "From Micro-Worlds to Knowledge Representation: AI at an Impasse," Mind Design, ed. John Haugeland (excerpted from the Introduction of the 2nd ed. of Dreyfus, What Computers Can't Do).

4 John Dewey, "Propositions, Warranted Assertibility, and Truth," Journal of Philosophy 38.7 (1941) 169-196, rpt. Boydston, Later Works 14:168-188.

5 Dewey, Logic: The Theory of Inquiry, ch. 2; Mead, On Social Psychology, 189-200.

6 Allen Newell and Herbert A. Simon, "Computer Science as Empirical Inquiry: Symbols and Search," Communications of the Association for Computing Machinery 19 (1976) 113-126, rpt. John Haugeland, ed., Mind Design. Zenon W. Pylyshyn, Computation and Cognition (Cambridge, MA: MIT Press, 1985). Alonso H. Vera and Herbert A. Simon, "Situated Action: A Symbolic Interpretation," Cognitive Science 17.1 (1993) 7-48.

7 See, for instance, Dewey, Logic: The Theory of Inquiry, 3-4; Mead, On Social Psychology, 184-186.

8 Dewey, Logic; James J. Gibson, The Ecological Approach to Visual Perception (Boston: Houghton, 1979); Judith A. Effken and Robert E. Shaw, "Ecological Perspectives on the New Artificial Intelligence," Ecological Psychology 4.1 (1992) 247-270; Tom Burke, Dewey's New Logic: A Reply to Russell (Chicago: The University of Chicago Press, 1994), ch. 3.

9 Contrary to the view in Vera and Simon, "Situated Action."

10 Immanuel Kant, The Critique of Practical Reason, trans. Lewis White Beck, 3rd ed. (1788; New York: Macmillan, 1993); Mead, On Social Psychology, 185-186.

11 Vera and Simon, "Situated Action."

12 Aggressive posturing as a prelude to, if not a substitute for, fighting is another example. Playful imitation and simple games are others. See, for instance, Mead, On Social Psychology, 214-228.

13 Newell and Simon, "Computer Science"; Vera and Simon, "Situated Action."

14 Jerry Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science (Cambridge, MA: MIT Press, 1981); "Propositional Attitudes," Monist 61.4 (1978), rpt. Fodor, Representations; The Language of Thought (Cambridge, MA: Harvard UP, 1979).

15 Mead, "A Behavioristic Account," 246-247.

16 See, for example, Bertram Raphael, The Thinking Computer: Mind Inside Matter (New York: Freeman, 1976) 252, 275-281.

17 See, for instance, Effken and Shaw, "Ecological Perspectives." See also Rodney Brooks, "Elephants Don't Play Chess," Robotics and Autonomous Systems 6 (1990) 3-15; "Intelligence Without Reason," Computers and Thought, Proceedings of the International Joint Conference on Artificial Intelligence, Sydney (Los Altos, CA: Kaufmann, 1991); "Intelligence Without Representation," Artificial Intelligence 47 (1991) 139-159.