From Günther to Dennett: The Mechanics of Consciousness

 

“In order to demonstrate that consciousness demands an integrating unit, Plato uses the example of the Trojan horse. Inside this horse were seated many Greek heroes, like Ulysses, Diomedes, and others. But although there were brain functions going on ‘inside’ the horse, this wooden monster did not derive any consciousness from them. Accordingly, young Theaetetus is told: ‘It would be a singular thing, my lad, if each of us was, as it were, a wooden horse, and within us were seated many separate senses, since manifestly these senses unite into one nature, call it soul or what you will; and it is with this central form through the organs of sense that we perceive sensible objects.’

 

There is little doubt that our present ‘thinking’ machines are hardly more than wooden horses.”  

 

 - Gotthard Günther, 1953

 

 

 

            Gotthard Günther, a German philosopher of the twentieth century who invented the theory of polycontextural logic, suggests that modern attempts to create artificial beings with a thinking component are simply exaggerated versions of the ancient Greeks’ Trojan Horse; that is, they are merely conglomerations of parts lacking a unifying interface. These creations may have complicated circuitry and a center for logic, but beyond logic there must be something else making a judgement on the inputs of the system - some mechanism that can make the system “aware” that the information going in is separate from the system itself. The “thinking machines” of Günther’s fifties were no more than scraps of metal wired with programs to take in data and deduce simple answers to puzzles. They were nowhere near as complex as current conceptions of thinking machines, which propose to integrate sense-imitating software with synthetic organic material on top of the baseline robotic features. In fact, in Günther’s 1953 paper Can Mechanical Brains Have Consciousness?, the example of high technology he most frequently refers to is the calculator - not the computer, the robot, or the android. But despite the obvious differences between Günther’s huge, clunky single-task machines and current multi-task robots with “brains” consisting of thousands of microchips and “bodies” made of synthetic organic polymers, Günther’s comparison of modern technology to the Trojan Horse still applies within twenty-first-century robotics. When posing the question “Does my robot have consciousness?”, it is important to return to the idea of unifying the parts of the whole so that an entity emerges that is larger than the sum of its constituents.

 

            To understand where Günther is coming from in his assertion about mechanical brains and Trojan Horses, it is necessary to delve deeper into his philosophy of human consciousness, and of consciousness in general. He has a rather outlandish view of the mind that is more a conceptual design than an empirical science. But it is fascinating nonetheless for its proposition that all of consciousness can be reduced to a series of mechanisms.

 

            The basis for Günther’s philosophy of mind comes from his polycontextural logic, which is an extension of Kant’s classical, or what Günther calls transcendental, logic.[1] Günther asserts that transcendental logic alone can explain how consciousness works; for this he proposes a design that relies on the transcendental idea of self-reflection. The design for consciousness is as follows: suppose there is a system that is trying to incorporate the concept of “a rose” into its sphere of knowledge. This system, this brain, will take the concept of “a rose” and imprint a copy of it:

                                    a rose

onto an imaginary projector screen within the brain. The projector screen is a converging point where further information can be compared and interpreted; but for now, there is just the screen with the concept “a rose” imprinted on its surface. This concept, or idea, then passes through a series of filters, which Günther refers to as mechanism 1, until it reaches the second converging point of information: the logical processing center. At this point, the concept of the rose is “acknowledged”, and it proceeds to pass through another set of filters, called mechanism 2, that translates the concept into a percept. Instead of being “a rose”, the entry becomes “I see a rose”. Though Günther does not refer to neuroscience explicitly, the change from concept to percept is like raw data entering the senses and ultimately being registered as a visual perception through electrical impulses in the nervous system. At this point, the percept “I see a rose” is sent back to the screen and superimposes a copy of itself:

                                    I see a rose

onto the preexisting imprint of “a rose”. Consciousness, Günther attests, is the point at which the system notices the discrepancy between the two ideas “a rose” and “I see a rose”. In effect, they are equivalent, because the latter is just a logical reflection of the former back onto itself - thus, self-reflection. But somehow they are different, and the system notices this difference between identity and non-identity. When the system receives these two ideas simultaneously, without crashing, consciousness has been created.
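Günther’s design can be caricatured in a few lines of code. The following Python sketch is my own illustration, not Günther’s formalism: the “projector screen” is a list of imprints, mechanism 1 and mechanism 2 are stub functions (their inner workings are exactly what Günther leaves unspecified), and “consciousness” is modeled as the system registering that the two imprints differ in form while one is a reflection of the other.

```python
# Illustrative sketch only (not Günther's own formalism): the two-mechanism
# self-reflection design, with the "projector screen" as a list of imprints.

def mechanism_1(concept: str) -> str:
    """Pass the imprinted concept toward the logical processing center.
    The actual filtering is left unspecified in Günther's design."""
    return concept

def mechanism_2(concept: str) -> str:
    """Translate the acknowledged concept into a percept."""
    return f"I see {concept}"

def reflect(concept: str) -> bool:
    screen = [concept]                    # first imprint: "a rose"
    acknowledged = mechanism_1(concept)   # reaches the logic center
    percept = mechanism_2(acknowledged)   # becomes "I see a rose"
    screen.append(percept)                # superimposed second imprint
    # "Consciousness" here = registering that the two imprints are not
    # identical in form, yet one contains the other as its reflection.
    return screen[0] != screen[1] and screen[1].endswith(screen[0])

print(reflect("a rose"))  # True
```

The sketch makes the essay’s later complaint concrete: the two mechanisms are trivial stubs precisely because Günther never says what happens inside them.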

 

            There is a difference between consciousness and self-consciousness that Günther strongly urges the reader not to confuse. Consciousness is the state in which a system is aware of objects existing outside the system of awareness. Self-consciousness is the awareness of the awareness of objects existing outside the system of awareness. If this seems confusing, think of it as the awareness of consciousness. Once the system knows of its own ability to discern objects outside its state of being, it develops a sense of self. Günther does not propose a mechanism for this second tier of consciousness, nor does he commit one way or the other on whether such a mechanism even exists. Regardless, there is currently no diagram outlining the process of self-consciousness, so in his mind it is not reproducible in robots or in any other sort of synthetic thinking machine.

 

Consciousness, on the other hand, comes with a mechanism, so under the right set of circumstances and with the right tools, a conscious robot can be created. The only thing Günther leaves out of his grand design is an explanation of what exactly mechanism 1 and mechanism 2 are! He describes what they do - how they turn over inputs to reflect a concept back on itself - but not how this is achieved or what sub-systems are involved in the processing. His design falls short on the practical level. True, Günther claims only to prove that consciousness is replicable, using transcendental logic and the axiom that anything with a mechanism can be reproduced. But when put into action, his design brings scientists no closer to creating a conscious robot. Simply proposing that consciousness can be achieved outside the sphere of human existence, without offering empirical evidence or a mechanical blueprint for future engineers, does not facilitate the production of thinking robots. But perhaps a theoretical design with no immediate application was all Günther had in mind from the outset. His task was merely to present a design for subsequent engineers, neuroscientists, and biologists to dissect; it is their tedious job, not his, to work out the intricacies of each individual mechanism.

 

Regardless of whether or not there are practical applications of Günther’s mechanical consciousness, the design is a fine piece of logical machinery (A ⊃ B ⊃ C ⊃ ~A). Not only does Günther propose an intelligent mechanism, but he also defines an ambiguous variable - consciousness - and says exactly how and when the mechanism achieves what he has defined as that variable. Part of the reason Günther is able to sell his idea of consciousness to a large subset of his audience is that he toys with the semantics of the word “consciousness”. He defines consciousness as something different from self-consciousness, thus breaking one seemingly incomprehensible concept into two smaller ones, one of which has a solution and the other of which remains inexplicable. M.I.T. professor Marvin Minsky describes this common tactic among people attempting to explain something extremely complex: “In particular, consider the problem of describing the brain in detail - in view of the fact that it is the product of tens of thousands of different genes. We can certainly see the attractiveness of proposing to get around all that stuff, simply by postulating some novel ‘basic’ principle by which our minds are animated by some vital force or essence we call Mind, or Consciousness, or Soul” (Minsky, 1991). Instead of accepting that certain things are too complex to be summarized, many scientists and philosophers propose an explanation that reduces the concept to simpler terms and logics that they themselves can understand. If there is an exception to the theory, or a question that cannot be explained, the remainder is summed up in a vague, metaphysical term such as “soul”. It is the default answer for any question unanswerable by the present design. Descartes, for example, held that man was just a complex automaton built from millions of sub-particles interacting in perfect machine-like fashion. His treatise on the intricate workings of the human body is long and meticulously detailed, but it has one short escape clause: the fundamental difference between man and man-made automata is man’s endowment with a soul (Mayr, 1986). The soul is the scapegoat explanation for everything else about man (such as reason, emotion, free will, and morality) that is inexplicable under Descartes’ theory of the mechanistic body.

 

While it may be obvious that philosophers from hundreds of years ago, like Descartes, pulled this trick of semantics, it may be less apparent that the exact same problem arises from Günther’s division of consciousness from self-consciousness. He provides a detailed explanation for the workings of consciousness, but leaves a conspicuous loose end that begs the reader to ask: if mechanical brains can have consciousness, can they then have self-consciousness? Günther exchanges one loaded question for another, and unfortunately, the reader is left unsatisfied with the answer. Perhaps his original question had no simple answer to begin with, and in delving into an explanation of self-consciousness one unavoidably surfaces another set of unanswerable questions, which in turn pose even more, ad infinitum.

 

One does not have to break a complex idea into several smaller ones with individual answers, as Günther does. Setting the intricacies of Günther’s philosophy aside for a moment, I wish to briefly introduce another contemporary theory of mechanical consciousness and compare it with Günther’s earlier one. Daniel Dennett, director of the Center for Cognitive Studies at Tufts University, who is perhaps more famous for his work on Cog - a humanoid robot - than for his theories on mechanical consciousness, brings up several good points about the way consciousness is represented in modern thought.

 

It is a popular belief that consciousness is a gestalt of awareness resulting from the perfect interaction of components and mechanisms. Günther believes this; his screen analogy describes consciousness as arising the moment a system recognizes the superimposed concepts of identity and non-identity. In fact, most people believe that consciousness, in some way or another, is an overarching emergent property of the system that, once achieved, is unchanging and unique to the system. Dennett, on the other hand, does not believe that consciousness is an all-or-nothing property. He says,

The creation of conscious experience is not a batch process but a continuous process. The micro-takings have to interact. A micro-taking, as a sort of judgement or decision, can’t be just inscribed in the brain in isolation; it has to have its consequences … the interaction of micro-takings has the effect that a modicum of coherence is maintained, with discrepant elements dropping out of contention, and without the assistance of a Master Judge. Because there is no Master Judge, there is no further process of being appreciated-in-consciousness, so the question of exactly when a particular element was consciously (as opposed to unconsciously) taken admits no nonarbitrary answer (1994).

 

Instead of defining consciousness as a singular event that has either occurred or not occurred, Dennett proposes that consciousness is a continuous, ever-changing process. He describes consciousness as a stream “with swirls and eddies, but - and this is the most ‘architectural’ point of our model - there is no bridge over the stream” (1994). He uses the idea of a stream of consciousness quite literally to mean that one cannot grasp hold of consciousness, for it is amorphous and constantly moving. He also brings in the concept of a “bridge” passing over the stream of conscious events. Whereas some might hold that there is a tangible, steadfast bridge of awareness passing over the conscious events, Dennett says no. Consciousness is the water, not the bridge. And again, as with Günther, there is quibbling over the basic definitions of the terms in question.

           

            Another popular notion is that consciousness arises from a particular place in the brain. Descartes said the soul was located in the pineal gland; Günther said metaphorically that consciousness occurred at the level of the brain’s “projector screen”. Dennett opposes the idea of a converging spot within the cerebrum:

I call this mythic place in the brain where it all comes together (and where the order of arrival determines the order of consciousness) the Cartesian Theater. There is no Cartesian Theater in the brain. That is a fact. Besides, if there were, what could happen there? … if all the important work gets done at a point (or just within the narrow confines of the pea-sized pineal gland), how does the rest of the brain play a role in it? (1994)

 

Like Damasio, Kinsbourne, and other neuroscientists, Dennett supports the idea of an interface between the entire brain and the system of awareness. Through this interface, when certain events produce enough salient activity within the neurological circuitry, conscious episodes emerge. Consciousness comes into being when an event becomes part of a temporarily dominant activity in the cerebral cortex. In effect, it is the summation of all the “micro-takings” (each brain event and its consequences). One problem Dennett has confronted with this model is pinpointing a threshold value for the number of cerebral events necessary to make an idea conscious. He says that there may or may not be such a value; he is more inclined to believe that consciousness is a spectrum - at one end are events with a weak summation of cortical activity, and at the other end events with a strong summation. Technically each summed event along the spectrum is an element of consciousness, but depending on the person and the strength with which an event is recalled, there will always be an arbitrary set of conscious elements that is dominant within the system of awareness.
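This reading of Dennett can likewise be caricatured in code. The Python sketch below is my own illustration, not Dennett’s model (the function name and the relative cutoff are assumptions): each “micro-taking” is a brain event carrying an activity level, activity is summed per content with no Master Judge presiding, and the “dominant” conscious elements are defined only relative to the current distribution, since any absolute threshold would be arbitrary.

```python
# Toy illustration of the "micro-takings" reading of Dennett (my own sketch).
from collections import defaultdict

def dominant_elements(micro_takings, top_fraction=0.5):
    """Sum activity per content and return the contents whose summed
    activity lies in the top fraction of the current distribution.
    The cutoff is relative and arbitrary - there is no fixed threshold
    separating conscious from unconscious takings."""
    summed = defaultdict(float)
    for content, activity in micro_takings:
        summed[content] += activity      # consequences accumulate
    if not summed:
        return []
    cutoff = max(summed.values()) * top_fraction
    return sorted(c for c, s in summed.items() if s >= cutoff)

# Repeated "rose" takings reinforce each other; "thorn" drops out of contention.
events = [("rose", 0.4), ("rose", 0.5), ("thorn", 0.2), ("vase", 0.6)]
print(dominant_elements(events))  # ['rose', 'vase']
```

Note that changing `top_fraction` changes which elements count as conscious, which is precisely the point: the cut is nonarbitrary nowhere in the model.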

 

             

Dennett’s hypothesis pertains to the high end of consciousness and the final products of the mind. While it does not provide a diagrammatic mechanism with inputs, outputs, arrows, and returns like Günther’s polycontextural design, it does explain how the machinery of the brain can give the illusion of a singular, emergent property of the system known as consciousness. Both Günther and Dennett ultimately apply their theories to robotics, and they surprisingly converge on the idea that mechanical consciousness can be attained in non-human beings. Their conclusions rest on different schemes of logic, however, and are contingent upon personally constructed definitions of what exactly consciousness is. Evaluating the idea of mechanical consciousness by dissecting these two brainchildren of philosophical thought has left us still pining for an answer to the original question: can mechanical brains have consciousness? More importantly, it has raised the fundamental question: is consciousness mechanical? Until there is agreement on the definition of consciousness, there will always be questions left unanswered, theorems half-completed, innovative design matrices visualized, and Trojan Horses lurking in the closet.

      

 

 


 

References

            Damasio, Antonio R. (1994). Descartes’ Error. Putnam’s Sons, New York, NY. 223-244, 248-252.

 

            Dennett, Daniel. (1994). Consciousness in Human and Robot Minds, for IIAS Symposium on Cognition, Computation and Consciousness, Kyoto, Sept. 1-3, 1994.

 

            Dennett, Daniel. (1998). Julian Jaynes’s Software Archeology. Brainchildren: Essays on Designing Minds. MIT Press, Cambridge, MA. 121-130.

 

            Dennett, Daniel. (1998). Real Consciousness. Brainchildren: Essays on Designing Minds. MIT Press, Cambridge, MA. 131-140.

 

            Dennett, Daniel. (1998). The Practical Requirements for Making a Conscious Robot. Brainchildren: Essays on Designing Minds. MIT Press, Cambridge, MA. 153-170.

 

            Mayr, Otto. (1986). The Clockwork Universe: Authority, Liberty & Automatic Machinery in Early Modern Europe. Johns Hopkins University Press, Baltimore, MD. 65.

 

            Minsky, Marvin. (1991). Conscious Machines. Machinery of Consciousness. Proceedings, National Research Council of Canada, 75th Anniversary Symposium on Science and Society.

 

 

 

           



[1] There are other influences, such as the early proponents of cybernetics, that led to the formation of polycontextural logic; within the context of this paper, however, it is important only to know that Günther was a follower of Kant and Hegel.