Fire-Control and Human-Computer Interaction: Towards a History of the Computer Mouse (1940-1965)

by Axel Roch


Nowadays the mouse is a standard input device for graphical user interfaces. This article traces its history back to World War Two. The interaction between human operators and fire-control systems in gunfire control framed post-war interaction with computers. At Stanford Research Institute during the 1960s, the inventors of the mouse, Douglas Engelbart and William English, designed interfaces for commercial applications drawing on techniques they had previously experienced while operating radar devices.


Keywords: Human-Computer Interaction, History of the Computer Mouse, History of Graphical User Interfaces

"The fully augmented or command display does not tell the operator what is happening but instead tells him what to do." Charles R. Kelley

"When I first heard about computers, I understood from my radar experience that if these machines can show you information on printouts, they could show that information on a screen." [8, p. 74] Douglas C. Engelbart

1. Introduction

The mouse is threatening to replace the keyboard as the standard input device for graphical user interfaces, because it has a virtually flat learning curve. The principles of pointing and clicking to select texts, pictures, or areas on the screen are sometimes called "interaction" in virtual realities. As a model or standard of computer interactivity, the mouse calls for an archeological analysis of the historical and technical conditions of its possibility.

At the end of the 1950s the Stanford Research Institute (SRI) in Menlo Park, California, researched interactivity for the average person and came up with the mouse on the desk. But screen-oriented computer applications had no need for improvisation. In the historical genealogy of computer interfaces, radar-networked defense systems like Whirlwind and SAGE had been using Braun's cathode-ray tubes instead of telex systems as output devices since the early 1950s. The cathode-ray tubes used in Whirlwind showed the first symbolic code on the screen: the letters T and F, for target and fighter. Long before Alan Kay at Xerox PARC divided the screen into windows and long before the first graphical user interfaces appeared, the Augmented Human Intellect Center at SRI was working with on-screen text manipulation. Then as now, researchers called input onto the screen using control devices "interaction", probably out of embarrassment. The input devices and the accompanying screens did not have to be invented; the preliminary decision of which types of input devices to use, and how to use them, had already been made. In the competition at SRI, the most important devices, with the fewest errors and the highest hit rate in selecting text on the screen, were the joystick, the lightpen, and the mouse, all of which can be dated back to military and strategic "dispositifs". [see 1, p. 5]

2. Lightpen and Joystick

The lightpen, a device used to mark off an area on the screen, had its precursor in the so-called lightgun. Project Whirlwind, developed after 1945 at the Massachusetts Institute of Technology to explore tactical control systems, produced the lightgun as a device for selecting discrete symbols on the screen. The decision to terminate Project Whirlwind and to continue with SAGE made the lightgun the device used for tactical real-time control of a radar-networked airspace. [2, p. 375]

The joystick, which competed with the mouse at SRI and much later became the standard input for the video games of the 1980s, is no less military. Today, it is still used for the remote control of guided missiles. The German engineer Herbert Wagner planned the first remote-controlled weapons against movable point targets, on the orders of the Reichsluftfahrtministerium (Reich Air Ministry), just after the outbreak of war in 1939. As with torpedoes at the beginning of the twentieth century, control sticks were used in 1942 to guide glide bombs, either by sight from the cockpit of a carrier airplane, with the help of a trail of light on the weapon, or by means of a television camera in the nose of the bomb transmitting a picture to a guiding operator. [5, p. 106ff] Max Kramer built a remote-controlled air-to-air missile for the Deutsche Versuchsanstalt für Luftfahrt (German Aviation Research Establishment) in Berlin. The intercepting fighter planes of the Luftwaffe were supposed to fire the new self-propelled guided missiles against the superior forces of an Allied bombing wing while staying out of range of the bombers' guns. Because the missiles were launched from carrier planes at high altitude, they had to be guided from the cockpit of the plane rather than from a ground station. After firing the rocket, the fighter pilot had to use the control stick to correct only two dimensions of a Cartesian guiding system (see Fig. 1). [9, p. 166ff]

Figure 1. Joystick in a cockpit, 1942.


However, this air-to-air missile never saw service, because the Allied forces bombed the production sites. After the war, the joystick, armed with electronics, became the successor to the control stick. Guided missiles needed only two-dimensional control devices, because their self-propulsion required only minor corrections to their flight path; the third coordinate axis automatically coincided with the target.

Instead of guiding rockets with the joystick, the scientists at SRI used it to control an electronic spot of light on the screen. The newly invented mouse served the same function. In contrast to the variable picture transmitted from the camera in the nose of a rocket, a monitor defines a static picture in a two-dimensional coordinate system. The mouse and the joystick are pointing devices that move a cursor relative to its current position on an absolute plane. The technical conditions necessary for the guidance of cursors on cathode-ray tubes become clear through an examination of the history of anti-aircraft artillery systems.

3. Electric and Tactical Fire-Control

The mechanical and optical anti-aircraft components of the First World War demanded physical and intellectual effort from soldiers. The strenuous and error-prone operation of increasingly heavy guns soon could not keep up with the increasing mobility of targets. Until 1940, it was taken for granted that anti-aircraft guns had to be operated by a whole crew, using charts and manual adjustments. Power-assisted mechanical controls and servomechanisms promised better anti-aircraft systems. To cope with the acceleration and increased mobility of aerial targets at the beginning of the Second World War, the Americans armed their anti-aircraft guns with electrical and computer-supported guidance systems.

In 1940, the young engineer David B. Parkinson at Bell Laboratories developed the first electric fire-control system. In the same year, the National Defense Research Committee -- Section D-2, Fire Control -- commissioned Bell Labs with the development, and Western Electric with the production, of a prototype. By the time of the first tests, shortly after the Japanese attack on Pearl Harbor, the United States had entered World War Two. The success of the electric fire-control strategy was proved during the second Battle of Britain, when American anti-aircraft systems shot down most of the V-1 flying bombs, the so-called miracle weapon of the German Wehrmacht. [3, p. 148]

Through simplified operation and computer-assisted firepower, the new fire-control systems achieved better hit rates. An analog computer extrapolated the future position of a flying object from the electric signal of its tracked course, which soldiers entered by turning handwheels while tracking the enemy optically (see Fig. 2).

Figure 2. Typical Tracking Unit of an Anti-Aircraft Fire-Control System, 1941.


The velocity of a projectile was low in comparison to that of its target. The rate of downed planes was improved by analyzing and, above all, by predicting the flight path of the targets. Thanks to the prediction theories of Norbert Wiener and Claude E. Shannon, the destruction of fast-flying objects through target tracking became possible. [11, p. 24]

During the course of the Second World War, Bell Laboratories continued to improve the technical control of area coordinates in air defense. Radar, a wartime achievement serving both attack and defense, automated optical tracking, led to the electric evaluation of trajectories, and supported air defense during enemy pursuit.

Bell Laboratories was familiar with radar systems using several types of representation, of which two are important here. [3, p. 49] One was tracking radar. Using handwheels, as in optical tracking, the operator could now align two different echoes of the same target on the radar screen in order to 'point' directly at the target with the radar beam. The operator's task was to smooth the flight path of the target for prediction purposes by matching the signals on the screen to each other. The other radar system, developed by R. M. Page at the Naval Research Laboratory, showed the radar echoes on a plan position indicator (PPI). This map-like view of an air space or surface area in a two-dimensional coordinate system was originally used for navigation and bombing runs as well as for long-range search radar and early warning. The search radar used in air defense showed the radar echoes of targets as shooting stars radiating from the center of a dark screen. After the war, the Army and the Navy both integrated this form of radar into their air defense systems as a tool for locating possible targets. These techniques for selecting the enemy on a radar screen had emerged during World War Two.

Figure 3. Operation at a tactical Fire-Control Unit, 1952.


Shortly before the Korean War, Bell Laboratories placed a fully integrated computer- and radar-supported air defense system at the Army's disposal. [3, p. 360ff] This system, which supported manual, semi-automatic, and automatic tracking, featured a radar console for the tactical selection of the most dangerous targets, in addition to the tracking screens (see Fig. 3). The improvements in automated tracking meant that soldiers were needed only for the tasks of marking the enemy and selecting potential targets. Flying objects could be marked electronically on the radar screen and then turned over to the system to be tracked and subsequently shot down.

It is important here to note two points. First, the electronic markings, which simplified the selection of targets on two-dimensional radar screens from World War Two on, constitute the first type of cursor (e.g. [4, p. 1]). Second, targets on planar radar screens were selected exactly as in tracking radar by using manual hand-wheels. The tracking technology and the search-radar display merged to make the tactical selection of enemy objects on the screen possible: human-machine interaction.

4. Pointing at Targets

After the War, the Americans did not forget the catastrophic losses inflicted upon their fleet by Japanese kamikaze attacks. New fire-control systems were developed to solve the Navy's problem of control over tactical air space. As in the Army, the defense systems which emerged left the commanding officers with only one job: making the tactical decision as to which target objects were most dangerous. After making this decision, they used control devices to select the object on the screen; target tracking and, once in firing range, shooting were fully automated. Today there are only a few ships left in the fleet which are still equipped with manual control devices for locating targets on the screen. The structure of fire-control -- tracking, computing/predicting, and firing -- framed future interaction with machines: input, processing, and output.

The automation of the classic fronts -- ground, air, and water -- means that only orbital space remains as a front today, in the words of Paul Virilio. Military equipment now floats in outer space, in essence keeping watch over geographic power relations and crises when shifts in power occur, with the ability to coordinate an intervention if necessary. This reveals the commercialized computer and its control devices to be a spinoff of long-gone war strategies. Thanks to the inventors of the mouse, Douglas C. Engelbart and William K. English, we now find these control devices at every computer terminal rather than on fire-control systems, and we use them to select texts, icons, and other items on the screen. In fact, English served as an officer in the U.S. Navy for five years, and Engelbart was a radar technician in the U.S. Navy, before the Stanford Research Institute had even begun investigating the relationship between human and machine. [8, p. 73]

Figure 4. Human-Machine Interaction at SRI with Keyboard, Push-Buttons, and Mouse, 1960s.


The SRI researchers' achievement, therefore, was to separate the technology used to locate targets on radar screens -- the military control devices -- from its integrated environment and to adapt it to the problem of screen-oriented computer applications. Targeting the enemy was reborn in the form of a mouse on an ordinary computer desktop. In the First World War, artillery searchlights had the function of illuminating the target and starting the process of tracking and firing. [7, p. 183ff] The function of this optical illumination was to register the target for subsequent firing. Apart from the fact that the armed eye was replaced by radar technology, we can regard the cursor used in air defense as the return of the searchlight on a tactical command level: the computer screen.

5. Clicking for Commands

One of the SRI scientists' inventions has gone unmentioned so far. Engelbart and English supplemented their control devices with push buttons, whose genealogy is no less bound up with military history. In contrast to the First World War, World War Two was fought not along fixed front lines but along mobile fronts that had to be coordinated quickly. For this reason, in 1940 the American intelligence service drew up specifications for mobile ground communications. Once again, Bell Laboratories was called in, this time to apply its civilian radio expertise to the problem of portable military radios. In response to the specifications requested by the Signal Corps, Bell Labs introduced the push button. After a short time, this type of communication strategy was even labeled 'Push-Button Warfare'. [10, p. 70]

More than 100,000 push-button radios were used by light and armored artillery on the various front lines. [6, p. 52ff] The push button, which replaced the complicated knobs found on radios, was not only more resistant to mechanical shock; it also accelerated the command sequence, because the radio operator no longer had to waste time and attention on the fine-tuning of frequencies. Since switching between preset frequencies was quickly learned, serially produced radio devices could be swapped easily in the rapid sequence of wartime production, as could radio operators under battle conditions. In graphical user interfaces, push-button technology has given rise not only to push buttons and radio buttons; it also enriches keyboards with today's function keys (see Fig. 4). In any case, the research scientists at SRI can be credited with combining the button with the mouse.

Thus, long before the advent of personal computing, practical buttons and tactical control devices demonstrated the commanding nature and elegance of device interfaces -- in other words, their adjustment to a command flow.


The translation of this essay from an earlier German article (also available online), published in 1996 in Lab. Jahrbuch der Kunsthochschule für Medien in Köln, Germany, was made possible by Prof. David Mindell of the Program in Science, Technology, and Society at the Massachusetts Institute of Technology. His sponsorship is gratefully acknowledged.

Biographical Sketch:

Axel Roch received his master's degree in cultural studies from the Humboldt University in Berlin, Germany. He is currently a scientific assistant at the Academy of Arts and Media, Cologne.


References:

1. English, W. K. et al. 1967. Display-Selection Techniques for Text Manipulation. IEEE Transactions on Human Factors in Electronics 8(1): 5-15.

2. Everett, R. R. 1980. Whirlwind. In: Metropolis, N. et al. A History of Computing in the Twentieth Century. Place: Publisher.

3. Fagen, M. D. 1978. A History of Engineering and Science in the Bell System. New York: BTL Inc.

4. Godell, W. F. 1945. Electronic Cursor for AN/APS-15. M.I.T. Radiation Laboratory. Technical Report M-175. Boston: M.I.T. Archive.

5. Hermann, J. 1987. The Remote-Controlled Glide Bomb Hs 293 (in German). In: Benecke, T. et al. Flugkörper und Lenkraketen. Berlin: Bernhard und Graefe.

6. Hilliard, V. 1944. Radio Telephones Guide the 'Blitz Buggies'. Bell Telephone Magazine 23.

7. Kittler, F. A. 1994. A Short History of Searchlights ... (in German). In: Wetzel, M. et al. Der Entzug der Bilder. Place: Publisher.

8. Rheingold, H. 1991. Virtual Reality. New York: Summit Books.

9. Schliephake, H. 1987. The Steered Air-to-Air Rocket X 4 (in German). In: Benecke, T. et al., op. cit.

10. Thompson, G. R. et al. 1957. The Signal Corps: The Test. Place: Publisher.

11. Wiener, N. 1948. Cybernetics or Control and Communication in the Animal and the Machine. New York: M.I.T. Press.