All But War Is Simulation: The Military-Entertainment Complex
Tim Lenoir
Stanford University
To appear in Configurations, Fall 2000
The box office smash from spring 1999, The Matrix, projects a vision of a world in which "real"-world objects are actually simulations emerging from streams of bits. Pursued on a rooftop with no escape except a helicopter, the movie's hero asks his guide, "Can you fly that thing?" "Not yet," she says, as she calls their home base systems administrator for software that uploads just in time.
In a
similar vein, one of Intel's 1999 ads for the Pentium II processor articulates
the consumer's desire for ever faster uploads and ultimately for fusing the
digital and the real. As a skydiver plummets to earth, alternating anxious glances between the camera and his chute, which appears on screen one agonizing row of pixels at a time, the voiceover asks "Time for a Pentium II
Processor?"
Such
images are amusing fantasies. They are also reminders that we are becoming
immersed in a growing repertoire of computer-based media for creating,
distributing and interacting with digitized versions of the world. In numerous
areas of our daily activities, we are witnessing a drive toward fusion of
digital and physical reality: not the replacement of the real by a
hyperreal—the obliteration of a referent and its replacement by a model without
origin or reality—as Baudrillard predicted, but a new country of ubiquitous
computing in which wearable computers, independent computational
agent-artifacts, and material objects are all part of the landscape.
To
paraphrase the description of the matrix by Gibson in Neuromancer, data is being made flesh.[1]
These new media are reshaping the channels of our experience, transforming our
conception of the "real," redefining what we mean by
"community," and some would maintain, what we mean by our
"selves."[2]
As we come to entrust more of our lives to Internet communications and as we
spend more time in virtual, electronic space, our notions of materiality and
reality will inevitably change.
I am
intrigued by the notion that we are on the verge of a new renaissance, one that, like the Renaissance of the fourteenth and fifteenth centuries, is deeply connected with a revolution in information technology. That most celebrated
Renaissance is frequently heralded as the birth of humanism. I sympathize with
several contemporary theorists who characterize our renaissance as heralding a
posthuman era in which the human being becomes seamlessly articulated with the
intelligent machine. In the posthuman state, there are no demarcations between
bodily existence and computer simulation, between cybernetic mechanism and
biological organism.[3]
A minimal condition for a new, “post”-human era would certainly be a fundamental shift in our notions of material reality. By exploring the recent history of
what I am calling the military-entertainment complex I hope to suggest some of
the pathways through which a so-called posthuman future might emerge. Our
experience of materiality is deeply tied to technologies that affect how we
experience space and time and how we use our bodies. Changes in these
technologies have a profound impact on our sense of the real.
A sign of
these posthuman times is the rapid fusion of the digital and the real going on
around us, taking place in personal digital assistants, cell phones, and Palm
Pilots™ (about to become wearable servers) that accompany us throughout the
day. The sign is more clearly perceptible perhaps in technologies such as
web-based personal shopping assistants that learn our preferences and then
crawl the web in search of software upgrades, information, and commodities that
define us as consumers of information.
No less
important for effecting these changes in our notions and experience of material
reality will be the implementation of research and development efforts to embed
information technologies in the world around us, in objects other than
communications devices. For a generation we have been used to thinking of the
computer as the symbol of the information revolution, but one way to think
about our present stage within this revolution is that the computer is in fact
disappearing. If developments funded by military research agencies such as
DARPA at several research universities and at organizations like Xerox PARC come
to pass, that large box we are used to staring into all day will vanish. In its
place will be a world filled with special purpose chips, “smart” devices, and
agents that interact with us constantly. These agents and devices will not sit
on our desktops, but rather will be embedded in wearable microdevices and
implants, leading to a world of ubiquitous computing.
Since
1996, for instance, the DARPA Smart Modules program has been developing and
demonstrating novel ways of combining sensors, microprocessors, and
communications in lightweight, low-power, modular packages that offer war
fighters and small fighting units new methods to enhance their situational
awareness and effectively control their resources on the battlefield. Smart
modules are integrated into personal and portable information products that
sense, compile, analyze, display, compare, store, process, and transmit
information. The resulting products create opportunities to exploit data-rich
battlefield environments at the individual war fighter level. Instead of the
normally limited set of information resources at the disposal of the individual
war fighter (maps, compasses, hand-held global positioning systems) and limited
connectivity (primarily voice radio) to information infrastructures, Smart
Modules allow individuals to better perceive their environment (see, hear, and
feel the electromagnetic spectrum), augment their ability to remember and make
decisions through use of electronic devices, and provide mechanisms for
connection to wireless distributed data networks. Modular information products
are part of clothing, worn on a belt or put into a pocket. These products will
capitalize on current rapid developments in micro electromechanical systems,
head-mounted and small direct-view displays, optoelectronics, integrated
sensors and video modules, energy storage, and low-power electronics.
DARPA’s
“smart matter” programs go beyond the wearable modular communication devices
and information systems described above. Smart Matter research is based in
large part on MEMS (micro electromechanical systems), very small sensors and
actuators that are etched into silicon or other media using
photolithography-based techniques. Integrated with computation, these sensors
and actuators form a bridge between the virtual and physical worlds, enabling
structures to dynamically respond to conditions in their environment. Smart
materials and structures mimic the natural world where animals and plants have
the clear ability to adapt to their environment in real time. Designers and
promoters of these “biomimetic” technologies dream about the possibilities of
such materials and structures in the man-made world, engineering structures
operating at the very limit of their performance envelopes and to their
structural limits without fear of exceeding either. “Smart” structures could
give maintenance engineers a full report on their performance history, as well
as the location of any defect as it occurs. Furthermore, that same structure
could be given the capacity to self-repair or the ability to counteract
unwanted or potentially dangerous conditions such as excessive vibration.
The nexus
between computer simulation and virtual reality for military purposes and the
entertainment industry has a thirty-five-year history tracing its origin to
Ivan Sutherland's head-mounted display project.[4] The project usefully illustrates both the
synergy between problem-focussed environments of industry and government-funded
(military and otherwise) projects, and the less product-oriented research focus
of university work that spills across disciplinary boundaries. In 1966
Sutherland moved from ARPA to Harvard as an associate professor in applied
mathematics. At ARPA Sutherland had participated in implementing J.C.R.
Licklider's vision of human-computer interaction, and he returned to academe
inspired to pursue his own program of extending human capabilities.[5]
One such project was his head-mounted display.
Funding
for this project came from a variety of sources: military, academic, industry.
The CIA provided $80,000, and funding support was also provided by ARPA, the
Office of Naval Research, and Bell Labs. Bell Helicopter provided equipment.
The Air Force provided a PDP-1 computer, while MIT Lincoln Labs, also under an
ARPA contract, provided an ultrasonic head-position acoustic sensor.
Sutherland's experiments built on the network of personal and professional
contacts he had developed at MIT and at ARPA as well as on earlier work on
head-mounted displays at the Bell Helicopter Company, centered on input from
servo-controlled cameras which would move with the user's head and thus move
the user's visual field. At Bell Helicopter Company, the head-mounted display
was coupled with an infrared camera that would give military helicopter pilots
the ability to land at night in rough terrain. An infrared camera, which moved
as the pilot's head moved, was mounted on the bottom of a helicopter. The
pilot's visual field was the camera's.
The
helicopter experiments demonstrated that a human could become totally immersed
in a remote environment through the "eyes" of a camera. With the
viewer inside a building, a camera was mounted on the roof, with its field of
view focused on two people playing catch. The viewer immediately responded to
the motion of the ball, moving the camera to follow the game of catch by moving
his head. Proof of the viewer's involvement in this remote environment came
when the ball was thrown at the camera and the viewer ducked. When the camera
panned the horizon, the viewer reported a panoramic skyline. When the camera
looked down to reveal that it was "standing" on a plank extended off
the roof of the building, the viewer panicked.[6]
In 1966, as an associate professor at Harvard, Sutherland, together with his student Bob Sproull, took the "Remote Reality" vision systems of the Bell Helicopter
project and turned them into "Virtual Reality" by replacing the
camera with computer-generated images.[7]
The first such computer environment was no more than a wire-frame room with the
cardinal directions—North, South, East, and West—initialed on the walls. The
viewer could "enter" the room by way of the West door, and turn to
look out windows in the other three directions. What they called the
"Head-Mounted Display" later became known as Virtual Reality.
Sutherland
later recalled that at the time he formulated the head-mounted display project
he was clear that there was no hope of immediately realizing it. But the
project was important, he recalled, "as an 'attention focuser' which defined a set of problems that
motivated people for a number of years." VR was a target impossible to
reach. It provided a holy grail, "a reason to go forward and push the
technology as hard as you could. Spinoffs from that kind of pursuit are its
greatest value."[8]
In
Sutherland's view, the most important spinoff from such projects was the
students; the personal and professional connections supported future work in
the area. Sociologists of science talk about the importance of "core
sets" of individuals who define the intellectual and technological
direction of a domain. Certainly the bevy of students Evans and Sutherland
trained constitutes one of the most dramatic examples of such a core set in the
history of computer science. Among the students who worked on the "holy
grail" of VR with Sutherland at Harvard were Charles Seitz, Robert
Sproull, Ted Lee, Dan Cohen, and Quintin Foster. In 1968 Sutherland left for
Utah, where he joined the Computer Science Department at the University of Utah
founded by Dave Evans in 1965, the first computer science program to focus on
graphics and graphical interfaces. Sutherland had known Evans from his ARPA
days, and together they founded Evans & Sutherland Computer Corporation in
1968, which manufactured graphical display systems and constructed military
flight and tank simulators under government contract. A number of Evans' and Sutherland's students worked on an
ARPA-supported project on 3-D graphics, and several worked at Evans &
Sutherland on simulations. Of the original Harvard group, several came with Sutherland to form Evans and Sutherland, including Chuck Seitz, who joined the Utah faculty in 1970 and remained until 1973, when he moved to Cal Tech; he later founded Myricom with Dan Cohen, another of the original Harvard team who had contributed to the head-mounted display. The interaction between the research on basic
problems and development-directed hardware and software systems for government
and military projects at E&S was an important feature of work at Utah.
At
Harvard briefly, and then from 1968 to 1974 at the University of Utah, Sutherland
set out a research program for work in interactive computer graphics that
guided the field in much of its early development and continues to be relevant
for the discussion of current trends in medical graphics.[9]
For Sutherland the display screen was to be considered a window, through
which the user looks at a virtual, conceptual 3-D universe. Sutherland’s program called for inventing ways to make the image
in the window more and more realistic, until at last it becomes
indistinguishable from the image in a real window, a real window augmented,
that is, by “magical” powers of scaling, labelling, rotating and
cross-sectioning.
In
addition to visible realism, Sutherland sketched two other directions. A second class of graphical applications relates purely to representing abstractions,
such as force fields, molecules, mathematical objects, and data graphing, for
which visual realism is irrelevant. But in this context Sutherland considered
that it would be useful to extend the domain of information available to the
user by incorporating information from other sensory modalities. Sutherland
coined the term “virtual worlds” for systems in which users are immersed in
scenes created completely by computer graphics; and he urged that the goal of
this work should be to make the objects in the scene “look real, sound real,
feel real, and move realistically as the user interacts with them.”[10]
The third
form of interactive graphics Sutherland outlined is one particularly relevant
to current medical applications such as virtual surgery: namely, the ability to
superimpose abstract representations on an object, as in cartography, where
abstractions are superimposed on a realistic rendering of a geographical space.
One of Sutherland's first attempts at practical application of the head-mounted
display was in fact in pursuit of this third form of graphical interface. The
first published research project deploying the head-mounted three-dimensional
display engaged problems of representing hemodynamic flow in models of
prosthetic heart valves; the goal of this research was to use calculations applying the physical laws of fluid mechanics, together with a variety of numerical analysis techniques, to generate a synthetic object that one could walk towards, move around, or even into.[11]
The
period from the late 1960s through the late 1970s was a golden era of computer
graphics at Utah, and students of the Utah ARPA-funded program contributed to a
number of exploratory systems in computer graphics and the identification of
key problems for future work. Among these were various efforts to develop fast
algorithms for removing hidden surfaces for color and 3-D graphics, a problem
identified as a key computational bottleneck.[12]
Two important contributions in this field were made by students of the Utah program: an area-subdivision method by Warnock,[13] and a scan-line algorithm developed by Watkins, which was implemented as a hardware system.[14]
Perhaps the most important breakthrough came just at the close of the decade
with Henri Gouraud's development of a simple scheme for continuous shading.[15]
Unlike flat polygonal shading, where an entire polygon was shaded with a single level of grey, Gouraud's scheme computed an intensity at each vertex from its surface normal and interpolated those intensities across the polygon, yielding a closer approximation to reality. The effect made a surface composed of discrete polygons appear to be continuous.
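The idea behind Gouraud's scheme can be suggested in a few lines of modern code. The following Python fragment is an illustrative reconstruction, not Gouraud's own implementation; the function names and the simple Lambertian lighting term are my own assumptions. It computes an intensity at each vertex from its normal and a light direction, then interpolates those intensities across a triangle using barycentric weights:

```python
import numpy as np

def vertex_intensity(normal, light_dir, ambient=0.1):
    # Simple Lambertian (diffuse) term evaluated once per vertex.
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return ambient + max(float(np.dot(n, l)), 0.0)

def gouraud_shade(point, tri_verts, tri_normals, light_dir):
    """Intensity at 'point' inside a 2-D triangle, Gouraud style."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri_verts)
    p = np.asarray(point, dtype=float)
    # Barycentric weights of p with respect to the triangle (a, b, c).
    m = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(m, p - a)
    weights = np.array([1.0 - u - v, u, v])
    # Interpolate the per-vertex intensities; interpolating the normals
    # themselves would be Phong shading, a later refinement.
    intensities = np.array([vertex_intensity(n, light_dir) for n in tri_normals])
    return float(weights @ intensities)

# Example: a triangle whose vertex normals tilt gradually from +z toward +x.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
normals = [(0, 0, 1), (0.5, 0, 1), (1, 0, 1)]
print(gouraud_shade((0.3, 0.3), tri, normals, light_dir=(0, 0, 1)))
```

Because the intensity varies smoothly inside each polygon and matches at shared vertices, the faceted structure of the underlying mesh disappears from view.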
The list
of alumni from the Utah program in the years between 1968 and 1978 is impressive
indeed. Below are listed a few members of this illustrious group and their
accomplishments.
TABLE 1. Select Alumni of the University of Utah’s Computer Graphics Program

Name | Affiliation | Accomplishments
Alan Kay | Ph.D. 1969 | Developed the notion of a graphical user interface at Xerox PARC, which led to the design of Apple Macintosh computers. Developed Smalltalk. Director of Research, Disney Imagineering.
John Warnock | Ph.D. 1969 | Worked on the Illiac 4 project, a NASA space flight simulator, and airplane simulators at Evans & Sutherland. Developed the Warnock recursive subdivision algorithm for hidden surface elimination. Founder of Adobe Systems, which developed the PostScript language for desktop publishing.
Chuck Seitz | Faculty 1970-73 | Pioneer in asynchronous circuits. Co-designer of the first graphics machine, LDS-1 (Line Drawing System). Designed the Cosmic Cube machine as a research prototype that led to the design of the Intel iPSC. Founder of Myricom Corp.
Nolan Bushnell | B.S. 1969 | Developed the table tennis game Pong in 1972, which launched the video game industry. Founder of Atari, which became the leading company in video games by 1982.
Henri Gouraud | Ph.D. 1971 | Developed the Gouraud shading method for polygon smoothing, a simple rendering method that dramatically improved the appearance of objects.
Ed Catmull | Ph.D. 1974 | Pioneer in computer animation. Developed the first computer animation course in the world. Co-founder of Pixar Animation Studios, a leading computer graphics company which has done work for LucasFilm and was recently involved in the production of the movie Toy Story. Received a technical Academy Award (with Tom Porter, Tom Duff, and Alvy Ray Smith) on March 2, 1996 in Beverly Hills from the Academy of Motion Picture Arts and Sciences (AMPAS) for "pioneering inventions in Digital Image Compositing".
Jim Clark | Ph.D. 1974 | Rebuilt the head-mounted display and 3-D wand to see and interact with 3-dimensional graphic spaces. Former faculty at Stanford University. Founder of Silicon Graphics Inc., Netscape Communications Corporation, and most recently Healtheon.
Bui Tuong-Phong | Ph.D. 1975 | Invented the Phong shading method for capturing highlights in graphical images by modeling specular reflection. Phong's lighting model is still one of the most widely used methods for illumination in computer graphics.
Henry Fuchs | Ph.D. 1975 | Federico Gil Professor, University of North Carolina at Chapel Hill. Research in high-performance graphics hardware; 3D medical imaging; head-mounted displays and virtual environments. Founder of Pixel Planes.
Martin Newell | Ph.D. 1975; Faculty 1977-79 | Developed procedural modeling for object rendering. Co-developed the Painter's algorithm for surface rendering. Founder of Ashlar, Inc., which develops computer-assisted design software.
James Blinn | Ph.D. 1978 | Invented the first method for representing surface textures in graphical images. Scientist at JPL, where he worked on computer animation of the Voyager fly-bys.
The work
of these individuals alone suggests the high level of fundamental research that
was done at the University of Utah under federally sponsored projects in a
variety of graphics fields, including surface rendering, simulations, computer
animation, graphical user interface design, and early steps toward virtual
reality.[16]
The number of significant
commercial firms generated by the members of this group is astounding. No fewer
than 11 commercial firms, several of which ship more than $100 million in
product annually, were the offspring of the Utah program.
Many of
these firms have their own research divisions and have contributed importantly
to the fundamental research base in computer graphics (both hardware and
software) that has been essential to the take-off of VR. But here, once again,
the importance of long-term government support, particularly by DARPA, to
sustaining innovative research directions emerges as clearly as in our earlier
example. The case of Atari illustrates this point dramatically. Founded by Nolan Bushnell, a Utah graduate in computer science, Atari was at one point in its history the fastest-growing company in America. Started in 1972 with an initial investment of $500,
Atari reached sales of $536 million in 1980. During the late 1970s and early
1980s Atari hosted exciting developments in software and chip design for the
home entertainment market, and a joint venture with LucasFilm in 1982, in which
Atari licensed and manufactured games designed by LucasFilm, established
cross-pollination between videogames and film studios. Atari was also a center
of developments in VR, and several of the pioneering figures in the VR field
got their start at Atari. For instance, Warren Robinett, who has directed the
head-mounted display and nanomanipulator projects at the University of North
Carolina, Chapel Hill (discussed below), developed the extremely popular
videogame Adventure at Atari from
June 1977 through November 1979. Jaron Lanier, who developed the DataGlove in
1985, got his start by creating the videogame Moondust, the profits from which Lanier used to launch VPL-Research
in 1984, the first commercial VR company.
In 1980
Atari created its own research center, directed by Alan Kay, who came over from
Xerox PARC and assembled a stunning team of the best and brightest in the field
of interface design and VR research. Kay's team at the Atari Research Lab
included Brenda Laurel (who had been at Atari since 1979), Scott Fisher, who
had studied with Nicholas Negroponte at MIT before coming to Kay's lab to work
on visual displays and virtual reality, and William Bricken, a recent Ph.D.
from Stanford in computer science and educational psychology.
But Atari
fell on hard times. Having reached annual sales of $512 million in 1980, Atari
registered $536 million in losses for 1983. The Atari Research Lab was,
obviously, one of the casualties of the economic crash in the video game
industry (and computer industry more generally). Most of the people working in
VR at Atari migrated to work in federally funded VR projects, like Jaron
Lanier, who created VPL-Research in 1984 and landed a government contract to
build the DataGlove for NASA. What emerges from this example is not that
federal projects provided fortunate safety nets for failed industry
initiatives, but more importantly that centers such as NASA Ames and UNC had
the right mix of basic research and long-term vision to move the technology
forward. Thus, Scott Fisher moved from Atari to NASA Ames where he directed the
Virtual Environment Workstation and VR projects. Joining Fisher were
Warren Robinett and Brenda Laurel. As noted above Robinett eventually moved from
NASA to Chapel Hill in 1989. William Bricken moved from Atari to Advanced
Decision Systems, where he pioneered high-performance inference engines, visual
programming systems, and instructable interfaces, then on to Autodesk Research
Lab, where he developed the Cyberspace CAD prototype of virtual reality.
Bricken then moved from industry to the University of Washington's Human
Interface Technology Laboratory, where he designed and implemented the Virtual
Environment Operating System and interactive tools of the VR environment.[17]
There was little question that the continued development of virtual reality
technology in the 1980s was not something industry was prepared to do on its
own: indeed Lanier's failed efforts to market for Nintendo a consumer entertainment
version of the DataGlove, called PowerGlove, demonstrated that the time was not
yet right for a sustained industry push. Federal support was crucial to
building the array of hardware and software necessary for industry to step in
and move VR forward. The impressive synergism of federally funded projects and
industry developments bringing about the emergence today of the new VR
technologies in surgery and other fields would not have been possible without
sustained federal funding in centers where the different components of VR work
were developed in tandem. As several pioneers in the field observed in a 1991
Senate hearing, the merging of the substantially different technologies at
stake in virtual worlds could not be undertaken by commercial interests whose
horizon of return on investment is short, particularly while the technologies
at issue remained in a precompetitive situation for so many years.[18] It is instructive to explore how a sustained
mixture of government, industry, and university-based research and development
turned the dim portrait of the future depicted in these 1991 Senate hearings
into the extremely bright picture of the late 1990s.
By the
mid-1980s it was universally acknowledged that the creation of virtual worlds
technology depended upon developments in several fields, including computer
architectures, processors, operating systems, and languages. DARPA funding
played the crucial role in these initiatives.
One critical turning point for enabling this next phase of development
was the DARPA VLSI (very large scale integration) and reduced instruction set
computing (RISC) programs begun in the late 1970s. For the first 15 years of
its life, the microprocessor improved its performance by an impressive 35% per
year. But these performance gains began to slow down, and increasing chip
fabrication costs led DARPA program managers to be concerned about future
growth. In 1976, they commissioned a RAND study on the problem.[19]
The study showed that
the U.S. computer technology environment of the mid-1970s was characterized by
(1) a tapering off in the rate of improvement in computer performance as the
marginal costs rose and marginal gains from extending prevailing technologies
declined; (2) extensive insulation of commercial microelectronics firms,
concentrating on their own proprietary developments, from academic communities
which were limited in their access to advanced equipment and industry
technologies; and (3) exponential growth in the cost of equipment and of
implementing device design, as industry concentrated on incremental efforts to
pack more gates and transistors into semiconductor devices. The authors of the
study also realized that university engineering and computer science
departments were getting shut out of much of the microelectronics revolution
because they couldn't afford the equipment necessary to manufacture silicon
chips. Even those universities that could afford some equipment could never
keep up with the rapidly advancing state of the art.[20]
It was in this
environment that DARPA originated the VLSI and RISC programs. Through his relations
with the academic community going back to the early 1970s, Dr. Robert E. Kahn
was aware of both the technology potentials of work being done at academic
centers of excellence in computer science, and of the cost and limits placed on
their ability to implement, validate, and demonstrate their work because of the
proprietary practices of industry.[21]
The VLSI and RISC programs were undertaken specifically to revitalize and tap
creativity in the academic community, which had played an important role in
earlier computer and semiconductor developments but which had a declining role
by the mid-1970s due to its increasing distance from technology developments in
industry. As a result of research at universities and
industrial laboratories supported by the DARPA programs, performance gains
began to increase by 1987 to about 55% per year -- a doubling of performance
every 18 months.[22]
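As a rough check of this arithmetic (my calculation, not a figure from the cited study), an annual improvement rate r implies a doubling time of

$$t_{\text{double}} = \frac{\ln 2}{\ln(1 + r)}, \qquad \frac{\ln 2}{\ln 1.55} \approx 1.6 \ \text{years} \approx 19 \ \text{months},$$

which is close to the eighteen-month doubling popularly associated with Moore's Law.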
[Figure
1: The Development of Processing Power and Memory]
RISC processors
have advanced the field of interactive graphics and contributed significantly
to the development of VR. Silicon Graphics, co-founded by Jim Clark in 1982,
was an early adopter of RISC processors and led in the recent development of
high-end graphics, including virtual reality. Clark joined the Stanford
engineering faculty in 1979, having done his Ph.D. with Ivan Sutherland on
problems related to the head-mounted display. Clark worked with John Hennessy
and Forrest Baskett on the Stanford VLSI program and was supported by DARPA for
a project on the Geometry Engine, the goal of which was to harness the custom
integrated-circuit technology of MIPS to create cost-effective high-performance
graphics systems. In 1981 Clark received a patent for his Geometry Engine, the
3-D algorithms built into the firmware that allow the unit to serve up realtime
interactive 3-D. The patent on the Geometry Engine formed the basis of Silicon Graphics, founded in 1982 with Kurt Akeley, then a research assistant working with Clark at Stanford. Clark also invented the Graphics Library, the graphics interface language used to program SGI's computers. These systems offered built-in 3D graphics capabilities, high-speed RISC processors, and symmetrical (multiple-processor) architectures. The following year, 1983, SGI marketed its first graphics terminal, the IRIS 1000.
The
development of Silicon Graphics not only shows that federal funding initiatives have had major impacts on the economy; it also illustrates the contribution of
commercial developments to the field of interactive graphics and VR. Silicon
Graphics, Evans & Sutherland, HP, Sun Microsystems, DEC, and others have
generated products enabling simulations of all sorts, scientific visualizations,
and CAD programs for engineering. No less significant has been their
contribution to the entertainment industry, particularly to the film and video
game industries. Indeed, as I have noted above, the entertainment industry has
been a major stimulus to graphics throughout its history, in providing sources
not only of employment and markets for products but also of substantial
research contributions.[23]
The relationship between these different partners has been mutually enriching;
the arrows of influence point in both directions.
Several
spectacular examples of the contribution of the entertainment industry to
graphics might be discussed here, but one of the most widely appreciated is
RenderMan, developed by Pixar Animation Studios. Ed Catmull, another alumnus of
the Utah graphics program in the 1970s, joined Alvy Ray Smith in the computer
graphics lab at LucasFilm in 1979. Catmull and Smith had collaborated on the
integrated alpha channel in 1977 at the New York Institute of Technology, a
fundamental technology in computer graphics.[24]
Smith then went on to direct the Genesis scene that LucasFilm created for Star Trek II, a sequence several
minutes long generated by computer graphics depicting the spread of life across
a new world. In the view of George Lucas and his organization, such work
signaled that computer animation was finally coming of age as a tool for
building movies. To realize the dream of constructing an entire film from
computer-generated material, Smith and Catmull recruited a number of young
computer-graphics talents to LucasFilm, among them, Loren Carpenter from the
Boeing Company in Seattle, Washington, who had studied Mandelbrot's research
and then modified it to create realistic fractal images. At the 1980 SIGGRAPH
conference Carpenter had presented a stunning film entitled "Vol
Libre," a computer-generated high-speed flight through rugged fractal
mountains. In 1981 Carpenter wrote the first renderer for Lucasfilm, called
REYES (Renders Everything You Ever Saw), which was the beginning of RenderMan.
In 1986 the
computer graphics division of LucasFilm's Industrial Light and Magic was spun
off as a separate company, Pixar, with Catmull as president and Smith as
vice-president. Under their direction
work continued at Pixar on developing a rendering computer. Pat Hanrahan joined
the REYES machine group at Pixar in 1986. At the University of Wisconsin and
then at the New York Institute of Technology Computer Graphics Laboratory,
where he was Director of the 3D Animation Systems Group, Hanrahan published a
number of path-breaking papers on methods of volume rendering, including papers
on ray-tracing algebraic surfaces and beam-tracing polygonal surfaces. Hanrahan
joined Robert Drebin and Loren Carpenter in developing the first
volume-rendering algorithms for the Pixar image computer.[25]
These algorithms were quite different from earlier approaches in that they
created images directly from three-dimensional arrays without the intermediate
steps of converting to standard surface representations such as polygons.
Hanrahan was responsible for the interface as well as the rendering software
and the graphics architecture of RenderMan.
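To make the contrast concrete, the general idea of rendering directly from a volume can be sketched as follows. This is a schematic Python illustration of direct volume rendering in general, not the Pixar team's algorithm; the transfer function, sampling scheme, and parameter values are hypothetical:

```python
import numpy as np

def transfer_function(density):
    # Hypothetical mapping from a voxel's density to a grey level and an opacity.
    grey = float(np.clip(density, 0.0, 1.0))
    return grey, 0.05 * grey

def render_ray(volume, origin, direction, step=0.5, max_steps=400):
    """Composite colour and opacity front to back along one viewing ray."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    colour, alpha = 0.0, 0.0
    for _ in range(max_steps):
        i, j, k = np.floor(pos).astype(int)
        if all(0 <= idx < dim for idx, dim in zip((i, j, k), volume.shape)):
            c, a = transfer_function(volume[i, j, k])
            colour += (1.0 - alpha) * a * c     # accumulate weighted colour
            alpha += (1.0 - alpha) * a          # accumulate opacity
            if alpha > 0.99:                    # early termination once opaque
                break
        pos += step * d
    return colour

# Example: one ray marched through a random 32x32x32 "dataset".
volume = np.random.rand(32, 32, 32)
print(render_ray(volume, origin=(0.0, 16.0, 16.0), direction=(1.0, 0.0, 0.0)))
```

The point of the approach is visible in the loop: the image is accumulated from the voxel data themselves, with no polygonal surface ever extracted.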
The
rendering interface of the system evolved into the RenderMan standard now
widely used in the movie industry. The RenderMan standard describes everything
the computer needs to know -- the objects, light sources, cameras, atmospheric
effects, and so on -- before rendering a 3D scene. Once a scene is converted to
a RenderMan file, it can be rendered on a variety of systems, from Macs to PCs
to Silicon Graphics Workstations. This opened up many possibilities for 3D
computer-graphics software developers. With RenderMan all the developer had to
do was give the modeling system the capability of producing
RenderMan-compatible scene descriptions. Once it did this, the developer
could bundle a RenderMan rendering engine with the package, and not worry about
writing a renderer. Another strength of RenderMan is its "shaders,"
pieces of programming code for describing surfaces, lighting, and atmospheric
effects. The spatial texture of an object is generated by the computer in 3D
space. In contrast to most texture-mapping techniques which map the texture to
the outside surface of the object, Hanrahan's procedural textures run
completely through the object in 3D, so that if, for example, a cube of wood is
sectioned, you see wood grain running through the whole cube. When the initial
specification of RenderMan was announced, over 19 firms endorsed it, including
Apollo, Autodesk, Sun Microsystems, NeXT, MIPS, Prime, and Walt Disney.
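The flavor of the solid, procedural textures described above can be suggested with a toy example. Actual RenderMan shaders are written in the RenderMan Shading Language; the Python sketch below only illustrates the principle, and its function and parameter names are my own:

```python
import math

def wood_grain(x, y, z, rings_per_unit=8.0):
    """Colour at an arbitrary 3-D point: concentric rings around the z axis."""
    r = math.sqrt(x * x + y * y)
    ring = 0.5 + 0.5 * math.sin(2.0 * math.pi * rings_per_unit * r)
    light = (0.85, 0.65, 0.40)   # early-wood tone
    dark = (0.55, 0.35, 0.20)    # late-wood tone
    return tuple(l * ring + d * (1.0 - ring) for l, d in zip(light, dark))

# Because the colour depends on the 3-D point itself rather than on a surface
# parameterization, any cut through the object exposes consistent grain.
print(wood_grain(0.3, 0.1, 0.7))
```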
RenderMan was used in creating Toy
Story, the first feature-length computer-animated film, the dinosaurs in Jurassic Park, the cyborg in Terminator 2, and numerous other major
effects. But this powerful tool has not been limited to use in the film
industry. It has also been an important tool in recent work on scientific
visualization and volume rendering in a number of fields in science,
engineering and medicine. Moreover, the hardware and software components are
not the only things that have circulated between industry and academe. The
people have circulated too. Thus Ed Catmull and Alvy Ray Smith moved from
the academic environments of NYIT and Berkeley (in Smith's early career) to LucasFilm and Pixar; Pat Hanrahan, after starting at NYIT and then Pixar,
moved back to an academic lab, first as an associate professor at Princeton,
and more recently as professor at Stanford, where he has gone on to contribute
to several areas of graphics, including development of applications of the
Responsive Workbench, a 3D interactive virtual environment workspace, to areas
of scientific visualization, architecture, and medicine. The work in Hanrahan's
lab on the workbench has been a cooperative project between Stanford University
and the GMD (the German National Research Center for Information Technology), and has been
supported by grants from Interval Research Corporation, DARPA (visualization of
complex systems), and NASA Ames (virtual windtunnel). Equipment donations have
been provided by Silicon Graphics and Fakespace, Inc.
Through
films such as Jurassic Park and Toy Story, the media industries have created a
desire for computer-generated imagery. Entertainment such as IMAX films, the Star Tours simulation ride at
Disneyland, and more recently “Magic Edge” flight simulators all have
contributed to creating the desire for sensory immersion experiences. The film Titanic is emblematic of our current
desire for digital effects. James Cameron and his organization actually pursued
digital effects as ends in themselves—indeed they drew upon effects generated
by 19 different visual effects and graphics companies—stealing pride of place
from older film techniques, stage effects, and models (which the film also
employs to a limited extent). We have come to desire these effects even when
the film could be made without them. The desire for “realism” in visual effects
forms a feedback loop with whatever technologies are currently available, being
inspired in part by them at the same time the imaginary inspires more extreme
and exotic visions. The science-fiction novel Ender's Game by Orson Scott Card provides an example of how this desire for the fusion of the digital and the real actually preceded the full availability of the technology. Ender's Game centers on a boy-ninja who saves the world from aliens in a war game where the video game simulation becomes not only the training ground for real-world warriors but the actual war itself. Although the story was originally written in 1977, years before networked battle simulators existed, the training scenario in Ender's Game has so inspired military training programs that the novel was adopted as required reading by the Marine Corps University in Quantico, Virginia. Graphics designers and computer
scientists frequently cite science fiction as a source of inspiration. For
example, Perlin and Goldberg of Disney Imagineering, the authors of Improv, a
system for scripting autonomous interacting actors for virtual worlds, note the
influence of Neal Stephenson’s description of the problems in constructing
authoring tools for avatars in the Metaverse in his novel Snow Crash. Numerous programmers of contemporary (1999) video games
and military flight simulators report the inspiration they have derived from
this novel.[26]
Desire
for realistic computer-generated images has combined with the stimulus to the
computer graphics and hardware markets provided by exponential improvements in
processors (Moore’s Law) and new chip architectures to fuel the growth of
companies like Silicon Graphics, driving the price of machines equivalent to first-generation Onyx workstations, once over $20,000, down to that of powerful desktop computers at around $5,000. The potential markets for multimedia have
stimulated the search for new architectures for image caching and compression
techniques which can greatly reduce bandwidth and memory requirements of
expensive high-end machines like the SGI InfiniteReality Engine with its tens
of megabytes of graphics memory and multiple memory buses hundreds of bits
wide, in order to bring high-end multimedia performance to PC prices.[27] A sense of the market forces driving this
convergence of high-end computer architectures, graphical rendering hardware
and software with low-end commercial markets for computer graphics ultimately
bringing VR to everyone can be seen in Silicon Graphics' partnership with
Nintendo to produce Nintendo64.
On August
23, 1993, Silicon Graphics, NEC, and Nintendo announced a partnership to build
the world's most powerful game machine. Speaking to a crowd of analysts, news
media, and industry pundits, Silicon Graphics founder and then CEO Jim Clark
outlined an ambitious project, Project Reality, which he claimed would
revolutionize the consumer electronics industry. Never one for understatement,
Clark declared that Project Reality would harness the "combined computer
power of hundreds of PCs" for less than $250. In line with a goal Clark had often stated since he started the company, the plan called for Silicon Graphics to design
two chips to form the heart of the system: the R4300i processor and the Reality
CoProcessor (RCP). The R4300i processor, a low-cost, low-power MIPS RISC CPU,
would handle the interaction with the game player and manage the game's control
tasks. The RCP, a media-processing engine, would handle all the high-performance
graphic and music-synthesis tasks. The R4300i processor team was already in
place at MIPS, recently acquired by Silicon Graphics and staffed with
experienced engineers. However, the Project Reality team, slated to design the
RCP and write the software, had to be built from scratch. NEC fabricated the
RCP chips on a totally new, state-of-the-art chip fab line in Japan, built at a
cost of more than one billion dollars. The chips in Nintendo64 were the first
microchips produced in volume using .35 micron semiconductor technology.
Nintendo's partners, Silicon Graphics and NEC, succeeded in getting the world's
most advanced semiconductor technology into a consumer product. Nintendo64,
shipped in April of 1996, has been one of the most successful entertainment
products in history. By the end of 1997 SuperMario64 had enabled Nintendo to
capture a worldwide base of 6 million users with video game revenues breaking
the $2 billion mark.
In 1997,
Silicon Graphics CEO Ed McCracken explained the importance of this development
in his letter of introduction to the Silicon Graphics booth at the National
Broadcasters convention:
Through the years, many of you have asked why the entertainment
market is critical to the success of Silicon Graphics. The answer is simple.
Our entertainment customers drive our technological innovation. And
technological innovation is the foundation of Silicon Graphics.[28]
Indeed,
in the 12 months ending in March 1994, SGI reported revenues of $1.5 billion.
In 1997 revenues were reported as $3.66 billion.[29]
SuperMario was certainly super to SGI. Kurt Akeley, a cofounder of Silicon
Graphics, echoed McCracken's sentiments
to a group of SGI developers at a meeting in Munich in the spring of
1998:
That's what Silicon Graphics has been about since 1982, when I was
one of the people that started it. We've had a huge impact, with you, making
that come true. We've done it in
domains that seemed obvious at the time: computer-aided design, scientific
visualization, as well as domains that were not anticipated.
It's easy to imagine that we've affected more people directly with
the technology in the Nintendo64 than we have collectively with all of our
other computers. We've certainly sold more of them - by far - than all of the
rest of the workstations we've done. So we've had an effect, not just in the
technical domain, not just in the places that would have been fairly obvious to
applied 3-D technology, but across the board-- in people's homes and in their
lives, and we're going to continue doing that.[30]
By making
the technology more affordable, by finding ways to scale it to large consumer
markets, by aiming, in short, to make technologies like the RISC chip
everywhere present, developments such as those illustrated by the
research-entertainment nexus including Pixar, Silicon Graphics, and Nintendo
have made use of imaging technology in science and medicine possible on a scale
and at a pace that would not otherwise be imaginable.
In
addition to the central role of the research-entertainment complex, the
examples discussed in the preceding sections point to the importance of federal
funding of university research as well as research in government-funded labs
(primarily through DARPA contracts) as crucial in creating and sustaining the
hardware developments critical to the fields of 3-D graphics, simulation
technology and virtual reality. This is only half of the picture. Although
networks are usually thought of apart from computer graphics, network
considerations are in fact crucial to large-scale interactive 3-D graphics.
Graphics and networks have become two interlocking halves of a larger whole:
distributed virtual environments. Central to this work have been DARPA funding
and the US Army's creation of SIMNET, the military's distributed SIMulator
NETworking program.
Simulators
developed prior to the 1980s were stand-alone systems designed for specific
task-training purposes, such as docking a space capsule or landing on the deck
of an aircraft carrier. Such systems were quite expensive, for example, more
than $30-$35 million for an advanced pilot simulator system in the late 1970s,
and $18 million for a tank simulator at a time when an advanced individual
aircraft was priced around $18 million and a tank considerably less. High-end
simulators cost twice as much as the systems they were intended to simulate.
Jack A. Thorpe was brought into DARPA to address this situation based on a
proposal he had floated in September 1978. Thorpe's idea was that aircraft
simulators should be used to augment
aircraft. They should be used to teach air-combat skills that pilots could not
learn in peacetime flying, but that could be practiced with simulators in
large-scale battle-engagement interactions. Thorpe proposed the construction of
battle-engagement simulation technology as a 25-year development goal.[31] Concerned about costs for such a system, Thorpe
actively pursued technologies developed outside the DoD such as video-game
technology from the entertainment industries.[32] In 1982 Thorpe hired a team to develop a
network of tank simulators suitable to collective training. The team that
eventually guided SIMNET development consisted of retired Army Colonel Gary W.
Bloedorn, Ulf Helgesson, an industrial designer, and a team of designers from
Perceptronics of Woodland Hills, California, led by Robert S. Jacobs.
Perceptronics had pioneered the first overlay of computer graphics on a display
of images generated by an (analog) videodisc as part of a tank gunnery project
in 1979.
The
SIMNET project was approved by DARPA in late 1982 and began early in the spring
of 1983 with three essential component contracts. Perceptronics was to develop
the training requirements and conceptual designs for the vehicle simulator
hardware and system integration; BBN Laboratories Inc., of Boston, which had
been the principal ARPANET developer, was to develop the networking and
graphics technology; and the Science Applications International Corporation
(SAIC) of La Jolla, California was to conduct studies of field training experiences
at instrumented training ranges at the National Training Center in Fort Irwin,
California.
Affordability
was the chief requirement Thorpe placed on the development of SIMNET
components. Sticking to this requirement led to the most highly innovative
aspects of SIMNET. Prior to the late 1980s, simulators were typically designed to emulate the vehicles they represented
as closely as engineering technology and the available funds permitted. The
usual design goal was to reach the highest possible level of physical fidelity
-- to design "an airplane on a stick," as it were. The SIMNET design
goal was different. It called for learning first what functions were needed to
meet the training objectives, and only then specifying the needs for simulator
hardware. Selective functional fidelity, rather
than full physical fidelity, was SIMNET's design goal, and as a result, many
hardware items not regarded as relevant to combat operations were not included
or were designated only by drawings or photographs in the simulator.
Furthermore, the design did not concentrate on the armored vehicle per se. Rather, the vehicle simulator was
viewed as a tool for the training of crews as a military unit. The major
interest was in collective, not individual, training. The design goal
was to make the crews and units, not the devices, the center of the
simulations.[33]
This approach helped minimize costs, thus making possible the design of a
relatively low-cost device.[34]
An early crisis that
threatened to undo the project was that the visual-display and networking
architecture being developed by BBN would not support the SIMNET system concept
within the limits of the low-cost constraints. Analyses and expert judgments,
from both within and outside of DARPA, indicated that the planned use of
available off-the-shelf visual-display technology would not support the
required scene complexity within the cost, computer, and communications
constraints set by the SIMNET goals. However, a proposal from Boeing allowed
Thorpe to take advantage of the new generation of DARPA-funded microprocessor
advances in VLSI and RISC for development of a new low-cost
microprocessor-based computer image generating technology for visual displays.
The technology proposed by M. Cyrus of Boeing met the scene complexity
("moving models") requirements at acceptably low dollar and
computational costs. Also, it permitted use of a simpler, less costly
networking architecture. The proposed technology would use microprocessors in
each tank simulator to compute the visual scene for that tank's own
"virtual world," including the needed representations of other
armored vehicles, both "friendly" and "enemy." The network
would not have to carry all the information in the visual scenes (or potential
visual scenes) of all simulators. Rather, the network transmission could be
limited to a relatively small package of calibration and "status
change" information.[35]
With these
architecture and design elements in place, SIMNET was constructed of local and
long-haul nets of interactive simulators for maneuvering armored vehicle combat
elements (M1 tanks and M2/3 fighting vehicles), combat-support elements
(including artillery effects and close air support with both rotary and
fixed-wing aircraft), and all the necessary command-and-control, administrative
and logistics elements for both "friendly" and "enemy"
forces. A distributed-net architecture was used, with no central computer
exercising executive control or major computations, but rather with essentially
similar (and all necessary) computation power resident in each vehicle
simulator or center-nodal representation.[36]
The terrains for the
battle engagements were simulations of actual places, 50 kilometers by 50
kilometers initially, but eventually expandable by an order of magnitude in depth
and width. Battles were to be fought in real time, with each simulated
element—vehicle, command post, administrative and logistics center, etc.—being
operated by its assigned crew members. Scoring would be recorded on combat
events such as movements, firings, hits, and outcomes, but actions during the
simulated battle engagements would be completely under the control of the
personnel who were fighting the battle. Training would occur as a function of
the intrinsic feedback and lessons learned from the relevant battle-engagement
experiences. Development would proceed in steps, first to demonstrate platoon-level
networking, then on to company and battalion levels, and later perhaps on to
even higher levels.
Each simulator was
developed as a self-contained stand-alone unit, with its own graphics and sound
systems, host microprocessor, terrain data base, cockpit with
task-training-justified controls and displays only, and network plug-in
capability (Figure 2). Thus, each simulator generated the complete battle-engagement
environment necessary for the combat mission training of its crew. For example,
each tank crew member could see a part of the virtual world created by the
graphics generator using the terrain data base and information arriving via the
net regarding the movements and status of other simulated vehicles and battle
effects. The precise part of the virtual world was defined by the crew member's
line of sight—forward for the tank driver, or from any of three viewing ports
in a rotatable turret for the tank commander.
The visual display
depended primarily on the graphics generator resident in each simulator. This
computer image generation (CIG) system differed in several important
characteristics from earlier CIG systems. First, it was microprocessor-based
(vs. large mainframe or multiple minicomputer based), and therefore relatively
low in cost (less than $100,000 per simulator visual-display subsystem, vs.
more than $1 million per visual channel; typical flight simulators have at
least five visual channels). Second, it was high in environmental complexity
with many moving models and special effects, but low in display complexity with relatively few pixels, small viewing
ports, and a relatively slow update rate of 15 frames per second (vs. the opposite
with earlier CIG systems and the technology being developed to improve and
replace them). The development of the essentially unique graphics generator for
SIMNET was a principal factor in permitting the system to meet the
low-cost-per-unit constraint of the plan.
Figure
2. Architecture of a Single M1 (Abrams Tank) Simulator in SIMNET
(From J.A. Thorpe, "The New Technology of Large Scale
Simulator Networking: Implications for Mastering the Art of Warfighting,"
in Proceedings of the 9th Interservice/Industry
Training Systems Conference, Nov. 30-Dec. 2, 1987, American Defense
Preparedness Association, 1987, p. 495.)
The architecture of
the microprocessor-based graphics generator permits anyone or any simulator so
equipped to connect to the net. This, combined with the distributed computing
architecture of the net, provides an extremely powerful and robust system. New
or additional elements can be included simply by "plugging into" the
network. Once connected to the net, simulators transmit and receive data
"packets" from other simulators or nodes (such as stations for
combat-support or logistics elements), and compute their visual scenes and
other cues (such as special effects produced by the sound system). Because the
data packets need to convey only a relatively small amount of information
(position coordinates, orientation, and unique events or changes in status),
the communications load on the net and the increase in load with the addition
of another simulator are both quite modest. Also, where updating information is
slow in coming from another simulator, its state can be inferred, computed, and
displayed. Then, when a new update is received, the actual-state data are used
in the next frame, and any serious discontinuity is masked by the receiving simulator's
automatic activation of a transition-smoothing algorithm. Should a simulator
fail, the rest of the network continues without its contribution. Thus, network
degradations are soft and graceful.
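The logic of these small state-update packets and of the in-between extrapolation can be sketched schematically. The Python fragment below is a hypothetical illustration of the general dead-reckoning idea, not the actual SIMNET protocol or its field layout:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class StatePacket:
    # Hypothetical fields, for illustration only.
    vehicle_id: int
    time: float                              # timestamp of the report
    position: Tuple[float, float, float]     # world coordinates
    velocity: Tuple[float, float, float]
    heading: float                           # orientation in radians
    event: str = ""                          # e.g. "fired", "hit"; empty for routine updates

def dead_reckon(last: StatePacket, now: float):
    """Extrapolate a remote vehicle's position from its last reported state."""
    dt = now - last.time
    return tuple(p + v * dt for p, v in zip(last.position, last.velocity))

def smooth_toward(displayed, reported, blend=0.2):
    """Ease the displayed position toward a newly reported one to mask the jump."""
    return tuple(d + blend * (r - d) for d, r in zip(displayed, reported))

# Example: a tank last reported at t = 10 s keeps being drawn at t = 10.5 s.
last = StatePacket(7, 10.0, (100.0, 200.0, 0.0), (5.0, 0.0, 0.0), 1.57)
predicted = dead_reckon(last, now=10.5)
print(predicted)                                   # (102.5, 200.0, 0.0)
print(smooth_toward(predicted, (103.0, 201.0, 0.0)))
```

Because each simulator needs to transmit only occasional, compact reports of this kind, the load on the network grows only modestly as vehicles are added.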
The prototypes and
early experiments with SIMNET elements were carried out between 1987 and 1989, and
the system was made operational in January 1990. The Army bought the first
several hundred units for the Close Combat Tactical Trainer (CCTT) system, an
application of the SIMNET concept, the first purchase of a system that would
eventually contain several thousand units at a total cost of $850 million.[37]
Throughout
the period examined here, a key characteristic of federal funding of university
research through agencies such as the NSF, NASA, and NIH, as well as through
Defense Department agencies such as IPTO and DARPA, has been the interest in
sustaining imaginative, exploratory, often “holy grail” research expanding the
frontiers of knowledge.
But as
examples such as the VLSI program suggest, support of federal agencies has also
been directed toward seeing that the products of federal research funding get
transferred to technologies in service of both national defense and the
commercial sector. For most of the period covered to this point, up to the end of the 1980s, policy discussions about these two goals (seeing that research served national defense and that it ultimately benefited the commercial sector) were either kept rigidly separate or delicately balanced in a complicated dance.
With the
end of the Cold War, a stronger emphasis was placed during the 1990s on running
a fiscally efficient military built on the practices of sound business and of
making military procurement practices interface seamlessly with commercial
industrial manufacturing processes. With pressure to reduce military spending
applied by the Federal Acquisition Streamlining Act of 1994, the Department of
Defense remodeled policies and procedures on procurement (through DOD
Directives 5000.1 and 5000.2) that had been in place for over 25 years. Among
the policies the new directives established was a move away from the
historically based DOD reliance on contracting with segments of the US
technology and industrial base dedicated to DOD requirements, moving instead by
statutory preference toward the acquisition of commercial items, components,
processes and practices. In the new mandated hierarchy of procurement
acquisition, commercially available alternatives are to be considered first,
while choice of a service-unique development program has the lowest priority in
the hierarchy. DOD components were directed to acquire systems, subsystems, equipment, supplies and services in accordance with the statutory requirements for competition set out in 10 USC 2304. Organizational changes were
required to implement these changes. Adapting technology development and
acquisition to the fast-paced high technology sector of the US economy meant
adopting simplified flexible management processes found in commercial industry,
including the institutionalization of Integrated Product Teams, treating cost
as an independent variable, and implementing a paperless procurement system of
electronic commerce by the year 2000. Program managers were informed that this mandated change meant that military planners would work more closely with industrial partners in team fashion, sharing information on designs and specifications. In effect these changes, introduced by Secretary of Defense
William Perry, have transformed military contracting units into business
organizations. In keeping with this new shift in mentality, “Company” websites
now routinely list their “product of the month.”
As we
have seen, the DOD has been the major source of long-term funding for 3-D
graphics and work on VR throughout their 30-year history. As a result of its
changes in procurement and indeed its entire culture for contracting, the DOD
will continue to be a major force in developing these technologies in the near
future, both through DARPA funding for support of graphics labs at universities
and through DOD funding of military projects. Directive 5000.1 on defense
procurement acquisition mandated that models and simulations be required of all
proposed systems, and that “representations of proposed systems (virtual
prototypes) shall be embedded in realistic, synthetic environments to support
the various phases of the acquisition process, from requirements determination
and initial concept exploration to the manufacturing and testing of new
systems, and related training.”[38] The total 1998 budget for programs for
modeling and simulation exceeded $2.5 billion.[39]
When such considerable resources are channeled through the new DOD procurement
system intent upon seamless integration into the civilian high-tech industrial
sector, a new and important role of federal funding in the post-Cold War era as
accelerator of the development and dissemination of modeling and simulation
technologies becomes evident.
An
example suggesting the crucial role federal funding will continue to play in
the future of visualization and simulation technology is provided by the
growing synergy between the U.S. Army's Simulation, Training and Instrumentation Command (STRICOM) and the entertainment industry. For the last several years, the videogame industry has been one of the fastest growing sectors of the entertainment business.[40] Physicians and computer scientists working on real-time volume rendering of medical imaging data are quick to point out that the systems they are developing depend on the ability to deliver live 3-D images on a desktop computer in a physician's office.[41] This will require improved graphics
capabilities in PCs and higher bandwidth networking technologies. Developments
in the entertainment industry such as those emerging from the partnership
between Nintendo and Silicon Graphics produce such capabilities. In a similar
fashion, those engaged in the VR field have argued that VR's breakthrough to
acceptance has depended on the dissemination of VR technologies in the entertainment
market for videogames and video arcades. One of the brightest new players in
that industry is Real3D of Orlando, Florida.
Large DOD Development Programs in Modeling and Simulation

Project Name | Description | Estimated Program Cost ($ millions)

Close Combat Tactical Trainer | Networked simulation system for training army mechanized infantry and armor units. It is composed of various simulators that replicate combat vehicles, tactical vehicles, and weapons systems interacting in real time with each other and semiautonomous opposing forces. | $846

Battle Force Tactical Training | Tactical training system for maintaining and assessing fleet combat proficiency in all warfare areas, including joint operations. It will train at both the single-platform and battle group levels. | 165

Warfighter's Simulation 2000 | Next-generation battle simulation for training Army commanders and battle staffs at the battalion through theater levels. It has a computer-assisted exercise system that links virtual, live, and constructed environments. | 172

Joint Tactical Combat Training System | Joint effort by the Navy and Air Force to create a virtual simulation at the battle group level in which combat participants will interact with live and simulated targets that are detected and displayed by platform sensors. | 270

Synthetic Theater of War (STOW) Advanced Concept Technology Demonstration | STOW is a program to construct synthetic environments for numerous defense functions. Its primary objective is to integrate virtual simulation (troops in simulators fighting on a synthetic battlefield), constructive simulation (war games), and live maneuvers to provide a training environment for various levels of exercise. The demonstration program will construct a prototype system to allow the U.S. Atlantic Command to quickly create, execute, and assess realistic joint training exercises. | 442

Joint Simulation System (core) | A set of common core representations to allow simulation of actions and interactions of platforms, weapons, sensors, units, command, control, communications, computers, and intelligence systems, etc., within a designated area of operations, as influenced by environment, system capability, and human and organizational behavior. | 154

Distributed Interactive Simulation | A virtual environment within which humans may interact through simulation at multiple sites that are networked using compliant architecture, modeling, protocols, standards, and databases. | 500

TOTAL | | $2,549

SOURCE: U.S. Department of Defense, Office of the Inspector General, Requirements Planning for Development, Test, Evaluation, and Impact on Readiness of Training Simulators and Devices, draft proposed audit report, Project No. 5AB-0070.00, January 10, 1997, Appendix D.
While its
present incarnation is new, Real3D has a venerable history tracing its origins
back to the first GE Aerospace Visual Docking Simulator for the Apollo lunar
landings. In 1991, GE Aerospace began exploring commercial applications of its
real-time 3D graphics technology, which led to a contract with Sega Enterprises
Ltd. of Japan, the largest manufacturer of arcade systems in the world. Sega
was interested in improving its arcade graphics hardware so that its games would present more realistic images. GE Aerospace adapted a miniaturized version of its real-time 3D graphics technology specifically for Sega's Model 2 and Model 3 arcade systems, incorporating new algorithms for features such as antialiasing and providing a visual experience far exceeding expectations.[42]
To date, Sega has shipped more than 200,000 systems that include what is today
Real 3D technology.
This
spinoff of technology originally developed for defense contracts is not in
itself new, but the next phase of the story points to the impact of the
procurement reforms in creating a synergy between government and industry
sectors of potential benefit to both the research and the industrial
communities. In the newly streamlined, flexibly managed military of the 90s,
STRICOM is the DOD's executive agent in charge of developing the Advanced
Distributed Simulation Technology Program behind much of the military’s
simulator training efforts. STRICOM has an interesting web presence. On one
side of STRICOM's spinning web logo is a figure in what might be either a space suit or a cleanroom suit worn by a chip worker. In the background are objects that could be tanks or chips on a board. The figure holds what could be a laser gun. Just when the viewer begins to wonder, "Is this a video game?", the reverse side of the spinning logo dispels that illusion. The figure there
holds a lightning bolt as a weapon, but is otherwise a traditional helmet-clad
soldier. The rim of the logo reads, "All But War Is Simulation."
In its
capacity as manager of the military simulation training effort STRICOM arranged
a partnership of the San Diego-based Science Applications International
Corporation (SAIC) and Lockheed Martin to develop hardware, software, and
simulation systems for, among other things, networking simulations in live
simulation environments such as SIMNET. Given the new imperative to build on
products supplied by commercial industry, one key to success in this program of
“integrated product development” is the development of standards for
distributed interactive simulations (DIS standards) and the high level architecture (HLA) that sets specifications, interfaces and standards for a wide range of simulations.[43] The adoption of these standards across the board by industry and by the American National Standards Institute prepares the ground for assimilating networked videogaming and more robust military simulations.
Developments
connected with companies like Real3D can be seen as seminal in the historical
evolution of the Post-Cold War effort to create a seamless environment in which
research work carried out for the high-end military projects can be integrated
with systems in the commercial sector. In 1993, GE Aerospace was acquired by
Martin Marietta, another leader in the field of visual simulation. Martin
Marietta not only advocated expansion of the relationship with Sega, but also
encouraged further research and analysis to look at other commercial markets,
such as personal computers and graphics workstations. In 1995, Martin Marietta
merged with Lockheed Corporation to form Lockheed Martin, and shortly
thereafter launched Real 3D to focus solely on developing and producing 3D
graphics products for commercial markets. To that end, in November 1996 a strategic alliance was formed between Real3D and Chips and Technologies, Inc. of San Jose, CA, aimed at selling and distributing Real 3D®'s R3D/100 two-chip graphics accelerator exclusively to the PC industry and at bringing world-class 3D applications to the PC environment for professionals who use 3D graphics acceleration on Windows® NT machines.[44]
Finally, in December 1997, Lockheed Martin established Real 3D, Inc. as an
independent company and at the same time announced Intel had purchased a 20
percent stake in the firm. Real 3D thus builds on more than three decades of
experience in real-time 3D graphics hardware and software going back to the
Apollo Visual Docking Simulator, experience in a variety of projects related to
construction of real-time distributed simulations, and its considerable
intellectual property, consisting of more than 40 key patents on 3-D graphics
hardware and software. These assets, together with its strategic relationships with Lockheed Martin, Intel, and Chips, position the company well for getting high-end graphics from leading-edge research environments onto the desktops of physicians, engineers, and scientists. The company also profits by supplying the research community that develops military training simulators with commercial videogame technologies developed by companies like Sega.
But it is not just the 3D graphics capabilities that are being made more widely accessible through such developments. High-level research on distributed simulation environments such as SIMNET and on the use of artificial intelligence to generate synthetic agents, both high-priority research problems in computer science, offers further examples of federally funded research work being more rapidly disseminated through the military's new integrated product teams. Once again, Real3D's relation
to Intel and the entertainment industry is thought-provoking. Intel is
committed to advancing the capabilities of the PC platform; with its Pentium II
processor with MMX technology, the corporation has launched an all-out campaign
focused on bringing 3D technology to mainstream PCs. In July 1997 Intel, together with 60 hardware and software manufacturers in the arcade industry including Real 3D, Evans and Sutherland, 3Dfx Interactive, and Quantum 3D, joined in the Open Arcade Architecture Forum to encourage the development of hardware and software for open arcade systems through proactive market development efforts that ensure systems and software compatibility while delivering arcade-game performance equaling or exceeding that of proprietary systems. The Open Arcade
Architecture (OAA) specification, which Intel announced in April 1997, supports
dual processor-based arcade systems, which allow for faster, richer games and
provide additional processing power for networking, video and voice
conferencing.[45]
Examination
of the work and careers of individuals who have participated in both the
military simulation community and the entertainment industry suggests paths
through which the dissemination of research ideas across these seemingly
different fields takes place. For example, prior to joining Walt Disney Imagineering
in 1992, Dr. Eric Haseltine was an executive at Hughes Aircraft Co., where he
held a series of posts in the Human Factors, Flight Simulation, and Display
System areas. Haseltine joined Hughes in 1979 after completing a Ph.D. in
physiological psychology at Indiana University and a post-doctoral fellowship
in neuroanatomy at Vanderbilt University School of Medicine. Haseltine has
published in the fields of Sensory Physiology, Neuroanatomy, Flight Simulation,
Training Systems Development, and Display Systems Engineering; and he holds a
number of patents in laser projection and electro-optical imaging. At Disney Imagineering Haseltine is vice president and chief scientist of research and development for projects including advanced head-mounted displays, optical systems, wireless communications, user interfaces, paperless animation systems, data security, and biomedical imaging.
Dr.
Robert S. Jacobs, currently director and president of Illusion, Incorporated,
offers a similarly illustrative profile. He has a B.S.E. in systems engineering
from the University of California, Los Angeles, an M.S. in management science
from the University of Southern California, and a Ph.D. in engineering
psychology from the University of Illinois, Urbana-Champaign. Having headed up
the design team at Perceptronics that worked on the original design of SIMNET,
he has been a technical contributor to the majority of later, related training
programs. At Illusion Jacobs has directed the definition, development, and
manufacturing of advanced technology training and simulation products including
analytical studies, hardware design, software development and courseware
production.
SIMNET
has been an incubator for the ideas and technology behind many
current-generation video games. Consider the company description of WizBang! Software Productions, Inc., which created the 3D environments for Hyperblade and Microsoft Baseball:
WizBang! Software Productions, Inc. ("WizBang!") is a 3D computer games company founded in 1994. WizBang!'s founders and staff combine expertise and years of experience
in military simulation, artificial intelligence, traditional gaming, music
composition and theater production, as well as game development. With this
unique perspective, they continue to be at the forefront of the ever-evolving
high-tech game industry.[46]
Indeed
among WizBang!'s illustrious team members is company founder Stuart Rosen, with
experience in both the development of computer games and military simulations.
Rosen's computer game development experience began at Atari, where he managed the PAC MAN project for Atari's home computer and advanced video game system. Rosen also headed the design team for one of the first movie-to-computer game spin-offs: Steven Spielberg's E.T. Rosen left Atari to manage the Image Generation Department at Singer-Link Flight Simulation, one of the early companies in the flight simulator business, which built such systems as the Apollo Docking Station, the DC8 flight simulator used by airlines around the world, and many others. For Singer-Link
Rosen developed virtual reality databases and advanced modeling tools for pilot
training simulators. Rosen then moved to Bolt Beranek & Newman Advanced
Simulation, where he led the design, development and integration of networked
interactive simulation systems for U.S., British and Japanese forces. This
included extensive work on the SIMNET project.[47]
Andrew
Johnston, WizBang!'s other founder and president, was also a key contributor to
SIMNET. Along with M. Cyrus from Boeing, Johnston was the co-founder, vice president, and director of engineering of Delta Graphics (later acquired by Bolt Beranek & Newman), and he directed the software development effort for the Computer Image Generator (CIG) I have described above, the CAD modeling system for the CIG database, and commercial computer animation software. Prior to that, while at the Boeing Aerospace Company in Seattle, Johnston managed a group of 45 engineers involved in research and development in advanced computer-image generation; he was a key architect of a real-time 3D computer-image generation system under contract with DARPA. This system was the basis of the Boeing B-1B Weapons System Trainer, a large-scale computer-image generation system.[48]
For my purposes, the work and career of Real3D senior software engineer Steven Woodcock illustrate how such career trajectories can disseminate research ideas. Woodcock has been lead software engineer for Gameware Development at Lockheed-Martin Real3D since January 1995; he began his career in the development
of game simulations for Martin Marietta. From October 1989-January 1992
Woodcock was senior software engineer and from 1992-95 lead software and
technical engineer for Martin Marietta Information Group, National Test Bed,
where he was responsible for all weapons code development, testing,
integration, and documentation for ARGUS, the Advanced Real-time Gaming
Universal Simulation.[49] ARGUS is a real-time, distributed,
interactive command and control simulation focusing on Ballistic Missile
Defense (BMD) and Theater Missile Defense (TMD), running on a TCP/IP network
consisting of a Cray-2 supercomputer and more than 50 Silicon Graphics
workstations. As noted above Martin Marietta contracted with Sega to build the
Model 2 arcade platform. Woodcock contributed to this effort. From March 1995 to March 1997 Woodcock shifted his venue from military network simulations to the interactive game industry, where he was lead programmer and oversaw all aspects of game development on the Sega-produced Model 2 arcade game Behind Enemy Lines, featuring a true 3D environment and use of AI. Woodcock has noted that his previous experience at Martin Marietta on the NTB and ARGUS from 1989-95 in distributed applications, real-time simulations, and artificial intelligence has proven invaluable in designing the real-time, 3D, multiplayer environments of the games he has been working on since 1995. During the same period, from September to October 1996, he worked
with another of the companies in the Intel-initiated Open Arcade Architecture
Forum, Dreamality Technologies, on the location-based entertainment (LBE)
simulator DreamGlider. For that project Woodcock integrated a
message layer based on the military Distributed Interactive Simulation (DIS)
protocols, designed to support large-scale, many-machine, network connectivity.
From January-June 1996 he was AI and game engine developer for a Sony
PlayStation project named Thundering
Death. On this project Woodcock implemented the first goal-based AI on the
PlayStation using neural networks to provide an ever-learning opponent.
If the career of Steven Woodcock illustrates the ways in which ideas, technologies, and personnel have flowed from military simulation efforts to the entertainment industries, doom II, produced by Id Software, and falcon 4.0, one of Spectrum Holobyte's videogames, provide glimpses into how the exchange is now being accelerated in the opposite direction.
The shift
in culture of the military reflected in procurement policies discussed above is
also evident in new military approaches to developing critical thinking.
Emblematic of this shift is Marine Corps
Commandant Gen. Charles C. Krulak's directive 1500.55 issued in 1996 aimed at
implementing improvements in what he termed "Military Thinking and Decision
Making Exercises." In his comments on the planning guidance Gen. Krulak
wrote: "It is my intent that we reach the stage where Marines come to work
and spend part of each day talking about warfighting: learning to think, making
decisions, and being exposed to tactical and operational issues." He
identified an important way to exercise these skills:
The
use of technological innovations, such as personal computer (PC)-based
wargames, provide great potential for Marines to develop decision making
skills, particularly when live training time and opportunities are limited.
Policy contained herein authorizes Marines to use Government computers for
approved PC-based wargames.[50]
General Krulak
directed furthermore that the Marine Combat Development Command assume
responsibility for the development, exploitation, and approval of PC-based
wargames. In addition, they were to maintain the PC-based Wargames Catalog on the Internet[51]. With this incentive some Marine simulation experts from
the Marine Corps Modeling and Simulation Management Office in the training and
education division at Quantico, Virginia, tracked down a shareware copy of the commercial game doom produced by Id Software, Inc. and began experimenting with it. This led to the adaptation of the game as a fire team simulation, with some of the input for the Marine version coming from Internet doom gamers employing shareware software tools.[52] They then rewrote the code for the commercial game doom II. Instead of employing fantasy weapons to face down monster-like characters in a labyrinthine castle, real-world images were scanned into WAD files along with images of weapons such as the M16A1 rifle, the M-249 squad automatic weapon,
and M-67 fragmentation grenades. The game was also modified from its original
version to include fighting holes, bunkers, tactical wire, "the fog of
war," and friendly fire. marine
doom trainees use Marine-issue assault rifles to shoot it out
with enemy combat troops in a variety of terrain and building configurations. In addition to training fire teams in various
combat scenarios, the simulation can also be configured for a specific mission
immediately prior to engagement. For example, Marines tasked with rescuing a
group of Americans held hostage in an overseas embassy could rehearse in a
virtual building constructed from the actual floor plans of the structure.
Users needed only to purchase version 1.9 of the commercial game and add the
Marine rewrite code to run the new tactical simulation. The Quantico-based
software could not run without the original commercial package, so no licensing
violations occurred. Indeed, any personal computer owner with doom II can download the code
for marine doom from the
Modeling and Simulation Management Office's web page. You too can become a
military assault commando.
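The add-on mechanism that keeps the Marine rewrite dependent on the commercial release is worth sketching. doom keeps its maps, sprites, and sounds in WAD archive files; the commercial game ships a main WAD, while a modification like marine doom can be distributed as a patch WAD whose entries override the originals when the game is launched. The Python fragment below, a minimal sketch based on the publicly documented WAD file layout rather than on the Marine Corps' actual files, simply lists the named entries in such an archive; the file name in the comment is hypothetical.

    import struct

    def list_wad_lumps(path):
        """List the named entries ("lumps") in a doom WAD or patch-WAD file."""
        with open(path, "rb") as f:
            # 12-byte header: 4-byte type ("IWAD" or "PWAD"), lump count, directory offset
            wad_type, num_lumps, dir_offset = struct.unpack("<4sii", f.read(12))
            f.seek(dir_offset)
            for _ in range(num_lumps):
                # each 16-byte directory entry: data offset, data size, 8-byte name
                offset, size, name = struct.unpack("<ii8s", f.read(16))
                print(wad_type.decode(), name.rstrip(b"\0").decode(), size)

    # e.g. list_wad_lumps("marine1.wad") would show the replacement sprites and
    # maps that doom II loads in place of its own when the patch file is added.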
The success of the doom II simulation rewrite led
the Marines to look ahead to the next step in commercial war gaming.
Discussions with MÄK (pronounced "mock") Technologies (Cambridge, MA), a commercial game manufacturer specializing in network simulation tools for distributed interactive simulations, led to the design of a tactical operations game built to Marine specifications. According to the contract, the Marine Corps
would help develop the software code and in turn would receive a site license
to train on this game, while MÄK would sell it
commercially as an official Marine Corps tactical training game. This
from-the-ground-up development would eliminate all of the nuances of the other
adapted games that are not particular to Marine combat.
MÄK was
founded in 1990 by two MIT engineering graduates, Warren Katz and John
Morrison. After graduating from MIT both were original members of Bolt Beranek
& Newman's SIMNET project team from 1987 to 1990, which developed low-cost,
networkable 3D simulators for the Department of Defense. MÄK's corporate goal
is to provide cutting-edge research and development services to the Department
of Defense in the areas of distributed interactive simulation (DIS) and
networked virtual reality (VR) systems and to convert the results of this
research into commercial products for the entertainment and industrial markets.
MÄK's first commercial product, the VR-Link™ developer's toolkit, is the most
widely used commercial DIS interface in the world. It is an application
programmer's toolkit that makes possible networking of distributed simulations
and VR systems. The toolkit complies with the Defense Department's DIS
protocol, enabling multiple participants to interact in real time via
low-bandwidth network connections. VR-Link is designed for easy integration
with existing and new simulations, VR systems, and games. Thanks to such
products, MÄK was ranked 36th in the 1997 New England Technology Fast 50 and
380th in the 1997 National Technology Fast 500 based on revenue growth between
1992 and 1996.
In
addition to its work in the defense community, the company's software has been
licensed for use by several entertainment firms, such as Total Entertainment
Network and Zombie Virtual Reality Entertainment, to serve as the launching pad
for real-time, 3D, multi-user video games. One such game, Spearhead, a
multi-user tank simulation game released in mid-1998, was written by MÄK and
published by Interactive Magic. Spearhead can be played over the Internet and
incorporates networking technology similar to that used in military simulations.
MÄK's
products use technologies called Distributed Interactive Simulation (DIS) and
High Level Architecture (HLA). Both technologies efficiently connect thousands
of 3D simulations together on a computer network. Replacing the DIS standard
for net-based simulations, HLA has been designated as the new standard
technical architecture for all DoD simulations. All simulations must be
HLA-compatible by the end of 1999. The transition to HLA is part of a DoD-wide
effort to establish a common technical framework to facilitate the
interoperability of all types of models and simulations, as well as to
facilitate the reuse of modeling and simulation components. This framework
includes HLA, which represents the highest priority effort within the DoD modeling
and simulation community. MÄK intends to leverage its technology for both the
military and commercial markets by taking advantage of the nearly $500 million
a year spent by the US government on optimizing the speed and capabilities of
DIS and HLA. State-of-the-art military DIS systems are now capable of running
over 10,000 simulations simultaneously, networked together across far-ranging
geographies. As low-cost commercial data services (bi-directional cable TV,
ADSL, etc.) become more widely available to consumers, industry analysts
project the market for on-line, 3D, multi-user simulations to reach $2 billion
in the year 2000. The networking capabilities of distributed simulation
technology developed by MÄK and other government suppliers will enable entertainment
providers to create platforms for 3D worlds supporting up to 100,000
participants simultaneously. Katz has described his vision provocatively in a
chapter for the book Digital Illusion:
Entertaining the Future with High Technology. The chapter is titled
“Networked Synthetic Environments: From DARPA to Your Virtual Neighborhood.”[53] MÄK co-founders Katz and Morrison are betting that in the near future Internet-based populations the size of a mid-sized U.S. city will be able to stroll through an electronic shopping mall, explore and colonize a virtual universe, or race for prizes in cyberspace's largest 3D road rally.
The
contract awarded by the US Marine Corps to MÄK in 1997 will assist this vision
of vastly shared virtual reality; it further erodes the distinction between
military simulation technology and the technology available to ordinary users.
The contract is for meu 2000, a computer-based tactical decision-making game for US
Marines which will also be released simultaneously as a commercial computer
game. The player of meu 2000 assumes the role of a
Marine officer coordinating the actions of a "Marine Expeditionary
Unit—Special Operations Capable [MEU (SOC)]." The player will see the
battle from a 3-D tactical view, enabling him to select units, issue orders,
and monitor the progress of his forces. meu 2000 will be a multiplayer
game. Each player may assume a position in the command hierarchy of either US
or opposing forces. (Players will only be able to command US equipment).
Additionally, players of platform-level simulations will be able to assume
their appropriate positions in the hierarchy. meu 2000 will be a
real-time, networkable, 3D strategy game simulating modern US Marine Corps
warfare, developed in cooperation with the US Marine Corps in order to ensure
that a high level of realism is incorporated into the simulation. MÄK will use the same game engine in both its military and
civilian versions. The military version will add more accurate details about
tactics and weapons, while the civilian game will be less demanding. But both
versions will allow multiple players to compete against each other over a
local-area network or the Internet.
While a
number of military simulations and commercial airline flight simulators have
been adapted to the commercial game market, falcon 4.0
is the first flight simulation video game to be adapted to military training. falcon 4.0 is a network-based game which supports either
single player or multiplayer modes. Multiplayer mode supports dogfights with up
to four squadrons of four F-16s each. The game’s whopping 600-page manual suggests the seriousness of play involved
and indicates why the military finds it attractive for its own training
purposes. As producer Gilman Louie explains, falcon 4.0 is a detailed simulation re-creating the feel of being an F-16 pilot operating over a modern battlefield. The simulation
has a highly accurate flight model and avionics suite that incorporates
flight parameters conforming to real-world
specifications. falcon 4.0
accurately re-creates such effects as deep
stall (to escape, the player must use the real-world procedure of flipping the
Manual Pitch Override switch and "rocking" the aircraft out—the
standard game trick of simply lighting the afterburners won't restore normal
flight in this simulation). Weapon modeling is equally realistic and, except
for omitting a few classified details, provides an amazingly accurate
representation of weapons deployment. The simulation is so detailed, in fact,
that reviewers of the game report consulting a real-world "Dash 1"
manual for the F-16 when playing the game. The realism of falcon 4.0 is further enhanced by graphics generated from
actual aerial photographs and map data from the Korean peninsula. In its current version, the game plays best on a computer with a processor of 400 MHz or higher.
The extreme realism in this video game led Peter Bonanni, a graduate of the F-16 Fighter Weapons School and a pilot instructor with the Virginia Air National Guard, to work with Spectrum HoloByte Inc. to modify
the falcon 4.0 flight simulator
game for military training. According to Bonanni, falcon 4.0
mimics the look and feel of real military aircraft and allows users to play
against computer-generated forces or, in a networked fashion, against other
pilots, which facilitates team-training opportunities. Another reason for Bonanni’s enthusiasm is the virtual world around the
player. Although the product features scripted Tactical Engagement missions as
well as an Instant Action mode for newcomers, the heart and soul of the product
is the dynamic campaign mode, where the player assumes the role of a pilot in
an F-16 squadron during a conflict on the Korean peninsula. The campaign engine runs an entire war,
assigning missions to units throughout the theater. A list (displayed either by
priority to the war effort or by launch time) shows the missions available to
the player's squadron. The player can fly any of these missions, with the
freedom to choose air-to-air or air-to-ground sorties. Unlike games with pre-scripted outcomes, the campaign engine allows story lines, missions, and
outcomes to be dynamically generated. Each play of the game influences the
next. If a player is first assigned a mission to destroy a bridge but fails,
the next mission may be to provide support to friendly tanks engaged by an
enemy that just crossed the bridge.
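The principle of the dynamic campaign can be captured in a toy example. The sketch below is merely illustrative of the feedback loop described above and makes no claim about Spectrum Holobyte's engine; the bridge scenario is taken from the example in the text, and everything else (the theater state, the mission list, the random outcomes) is an assumption of mine.

    import random

    class Theater:
        """Toy theater state: the only fact tracked is whether the bridge stands."""
        def __init__(self):
            self.bridge_standing = True

        def generate_missions(self):
            # Tasking is generated from the current state rather than from a script.
            if self.bridge_standing:
                return ["destroy bridge", "air-to-air sweep"]
            return ["support friendly tanks", "air-to-air sweep"]

        def apply_outcome(self, mission, success):
            if mission == "destroy bridge" and success:
                self.bridge_standing = False

    def run_campaign(days=5, seed=1):
        random.seed(seed)
        theater = Theater()
        for day in range(1, days + 1):
            mission = theater.generate_missions()[0]   # player flies the top task
            success = random.random() > 0.5            # stand-in for flying it
            print(f"Day {day}: {mission} -> {'success' if success else 'failure'}")
            theater.apply_outcome(mission, success)    # each play shapes the next

    run_campaign()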
Networked video games such as falcon 4.0 are emblematic of the
calculated emergence of a military-entertainment complex but also of the fusion
of the digital and the real happening around us. It is hardly surprising that
Bonanni not only helps adapt the video game to military training needs but also
writes a regular column for the www.falcon4.com website on tactics and
has designed several of the 31 pre-built training missions included with the
game. He is co-author of two best-selling books on falcon 4.0,
one with colleague James Reiner, also an
F-16 instructor pilot and graduate of the F-16 Fighter Weapons School, and like
Bonanni a consultant on the game. Beginning with some basics on the game and
the various gameplay options, falcon 4.0: Prima's Official Strategy Guide gives readers a guide to instant action
missions, multiplayer dogfights, and full-fledged campaigns. The book is a
serious no-nonsense manual, devoting separate chapters to laser-guided bombs
and even the AGM-65 Maverick missile. Bonanni’s second book, falcon 4.0 Checklist, is scheduled to appear soon and is already high
on the Amazon.com sales list before it has even hit the bookstores. Recalling
that Ender’s Game has been taught in
flight schools, would-be Falcon pilots will probably want to add a copy to
their Amazon.com shopping cart for inspirational reading.
Until the last two or three years these crossovers between military simulations and the entertainment industries have been unplanned and opportunistic. In December of
1996 the National Academy of Sciences hosted a workshop on modeling and
simulation aimed at exploring mutual ground for organized cooperation between
the entertainment industries and defense.[54] The report stimulated the Army in August 1999
to give $45 million to the University of Southern California over the next five
years to create a research center to develop advanced military simulations. The
research center will enlist film studios and video game designers in the
effort, with the promise that any technological advances can also be applied to
make more compelling video games and theme park rides. The idea for the new
center, to be called the Institute for Creative Technologies, reflects the fact
that although Hollywood and the Pentagon may differ markedly in culture, they
now overlap in technology. Moreover, as we have seen, military technology,
which once trickled down to civilian use, now often lags behind what is
available in games, rides and movie special effects. As STRICOM Chief Scientist
and Acting Technical Director Dr. Michael Macedonia wrote in a recent article
in Computer:
As Siggraph—the computer-graphics community’s showcase—has demonstrated
over the past several years, the demands of digital film development are making
way for computer games’ even more demanding real-time simulation requirements.
As a mass market, games now drive the development of graphics and processor
hardware. Intel and AMD have added specialized multimedia and graphics
instructions to their line of processors in their battle to counter companies
such as Nvidia, whose computer graphics chips continue breaking new performance
boundaries.
…
By aggressively maneuvering to seize and expand their market
share, the entertainment industry’s biggest players are shaping a 21st century
in which consumer demand for entertainment—not grand science projects or
military research—will drive computing innovation. Private-sector research-and-development
spending, which now accounts for 75 percent of total US R&D, will increase
to about $187.2 billion in 2000, up from an estimated $169.3 billion in 1999,
according to Battelle Memorial Institute’s annual R&D forecast.[55]
In
opening the new Institute for Creative Technology Secretary of the Army Louis
Caldera said, "We could never hope to get the expertise of a Steven
Spielberg or some of the other film industry people working just on Army
projects." But the new institute, Caldera said, will be "a win-win
for everyone."
While
putting more polygons on the screen for less cost is certainly one of the
military's objectives at the Institute for Creative Technologies and in similar
alliances, other dimensions of simulated worlds are equally important for their
agenda. Military simulations have been extremely good at modeling hardware
components of military systems. Flight and tank simulators are excellent tools
for learning and practicing the use of complex, expensive equipment. However,
movies, theme park rides, and increasingly even video games are driven by
stories with plot, feeling, tension, and emotion. To train for real-world military engagements is not just to learn how to use the equipment but also how to cope with the implementation of strategy in an environment with uncertainties, surprises, and participants with actual fears. As Marine Corps Commandant Gen. Charles C. Krulak's directive
on "Military Thinking and Decision Making Exercises" emphasized,
decisions made in war must frequently be made under physical and emotional
duress. The directive stated that the PC-based wargame exercises in peacetime
should replicate some of the same conditions: "Imaginative combinations of
physical and mental activities provide Marines the opportunity to make
decisions under conditions of physical stress and fatigue, thereby more closely
approximating combat."[56]
Early military simulations incorporated very rote behaviors. They
did not capture "soft" characteristics well. An effort to go beyond
this was launched in 1991 by the Institute for Defense Analyses in their effort
to construct a computer-generated "magic carpet"
simulation-recreation of the Battle of 73 Easting, based on in-depth
debriefings of 150 survivors of a key battle that had taken place during the
Gulf War.[57] The goal of the project was to get timeline-based
experiences of how individuals felt, thought and reacted to the dynamic
unfolding of the events--their fears and emotions as well as actions--and
render the events as a fully three-dimensional simulated reality which any
future cadet could enter and relive. Going a step beyond the traditional
"staff ride"--a face-to-face post-battle tutorial at the site itself
in which a commander leads his staff in a verbal recreation of the skirmish--this
tour of a battle site was a simulacrum of the war itself. Work on data
gathering for the simulation began one month after the battle had taken place. The IDA brought in the soldiers who had actually taken part and had them sketch out the battle. They walked over the battlefield amidst the twisted wreckage of
Iraqi tanks, recalling the action as best they could. A few soldiers supplied
diaries to reconstruct their actions. Some were even able to consult personal
tape recordings taken during the chaos. Tracks in the sand gave the simulators
precise traces of movement. A black box in each tank, programmed to track three
satellites, confirmed its exact position on the ground to eight digits. Every
missile shot left a thin wire trail which lay undisturbed in the sand.
Headquarters had a tape recording of radio-voice communications from the field.
Sequenced overhead photos from satellite cameras gave the big view. A digital
map of the terrain was captured by lasers and radar.[58]
With this data a team
at the IDA Simulation Center spent nine months constructing a simulation of the
battle. A few months into the project, they had the actual desert troops, then
stationed in Germany, review a preliminary version of the recreation. The
simulacra were sufficiently fleshed out that the soldiers could sit in tank
simulators and enter the virtual battle. They reported corrections of the
simulated event to the technicians, who modified the model. One year after the confrontation the recreated Battle of 73 Easting was demonstrated for high-ranking military officers in a facility with panoramic views on three 50-inch TV screens at the resolution of a very good video game.
The
Battle of 73 Easting is an extremely accurate historical reconstruction of a
battle whose outcome is known. It set the standard for a future genre of training simulations, something like the Saving Private Ryan of staff rides. Although the cost of creating the simulation
is not available, it was undoubtedly expensive. As a computer simulation with
programmable variables, however, the scenario could be replayed with different
endings. Indeed the next logical step after creating this fantastically
accurate simulation would be to use the data and behaviors of the simulation as
inputs to a game engine, like marine
doom, or a more current best-seller, quake.
By making the simulation reprogrammable, the staff ride could become an
adaptable tool for battle training. Embedded simulations involving real
global-positional data, information on opposing forces and their capabilities
could be built into the M1 tank units, attack helicopters, or F-16s themselves
as real soldiers train for an impending mission right up to the hour of the
engagement.
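Such a reprogrammable staff ride can be imagined as nothing more exotic than a set of recorded vehicle tracks that a game engine replays faithfully up to a chosen branch point, after which control passes to a trainee or a behavior model. The sketch below is purely illustrative and rests on my own assumptions; the track format, the branch logic, and the sample data have nothing to do with the IDA's actual files.

    from dataclasses import dataclass, replace

    @dataclass
    class TrackPoint:
        time: float   # seconds from the start of the engagement
        x: float      # position, standing in for the recorded satellite fixes
        y: float

    def replay_with_branch(recorded, branch_time, behavior_model, extra_steps=3):
        """Replay the historical track up to branch_time, then hand control to a
        behavior model (or a trainee) so that the engagement can end differently."""
        out = [p for p in recorded if p.time < branch_time]   # faithful replay
        state = out[-1]
        for _ in range(extra_steps):
            state = behavior_model(state)                     # divergent outcome
            out.append(state)
        return out

    # Tiny demonstration with invented data: after 120 seconds the "tank" turns north.
    history = [TrackPoint(t, x=t * 5.0, y=0.0) for t in (0, 60, 120, 180, 240)]
    turn_north = lambda p: replace(p, time=p.time + 60, y=p.y + 100.0)
    for point in replay_with_branch(history, branch_time=180, behavior_model=turn_north):
        print(point)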
How might
the interest in pursuing this line of development in new settings like the Institute
for Creative Technology (ICT) proceed? At this early date we can only
speculate. In light of the new military practice of forming product development teams consisting of military, industry, and possibly academic partners, and in light of the effort to merge military and entertainment projects for their mutual benefit, I would like to propose an imaginary scenario of teamwork involving
elements from each of these sectors. Several of the members of the new ICT work
on constructing semi-automated forces and multiple distributed agents for
virtual environments, such as training programs. Others in the ICT work on building models of emotion for use in synthetic training
environments. The work of professors Jonathan Gratch and Jeff Rickel is prototypical. Prior to the formation of the ICT these researchers had been working on the construction of intelligent agent technology for incorporation into state-of-the-art military simulation systems. Focused on modeling training behaviors, they have not been particularly interested in developing "believable agents" for video games or film. The goal of one of their projects is to develop command and control agents that can model the capabilities of a human military commander: such commander agents must plan, monitor the execution of their plans, and replan when necessary.
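The plan-monitor-replan cycle at the heart of such command agents can be rendered schematically. The sketch below is my own illustration of that cycle, not the Gratch-Rickel architecture; the bridge scenario, the rule-based planner, and the single monitored assumption are all contrivances for the example.

    def planner(bridge_standing):
        """Tiny rule-based planner for the goal of holding the far bank."""
        if bridge_standing:
            return ["suppress defenses", "cross bridge", "hold far bank"]
        return ["suppress defenses", "call engineers", "ford river", "hold far bank"]

    def command_agent(world, max_cycles=10):
        """Schematic plan / monitor / replan loop for a commander agent."""
        assumed = world["bridge_standing"]
        plan = planner(assumed)
        for _ in range(max_cycles):
            if not plan:
                return "goal reached"
            if world["bridge_standing"] != assumed:   # monitor: assumption violated
                assumed = world["bridge_standing"]
                plan = planner(assumed)               # replan from the new situation
            task = plan.pop(0)
            print("executing:", task)
            if task == "suppress defenses":
                world["bridge_standing"] = False      # stand-in for an enemy action
        return "out of cycles"

    command_agent({"bridge_standing": True})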
We could imagine many potential collaborations with commercial videogame companies
that would leverage the skills and knowledge of both commercial and academic
partners interested in artificial agents and historically accurate "staff
ride" training scenarios that build in uncertainty, fear, emotion, and a
gripping sense of story and narrative. I find Atomic Games an interesting
candidate. Its personnel and company history map the trajectory from military
to commercial applications we have explored above. Atomic Games is a company of ten persons founded in 1991 by
Keith Zabalaoui. Today Atomic is a subsidiary of Microsoft Games. Before
entering the video game business Zabalaoui and his colleagues worked for Rockwell
International at the Johnson Space Center in Houston, Texas. Zabalaoui worked
on a space-based robotic retriever for recapturing astronauts, tools, or
anything else that might become detached from the space shuttle. After the
retriever project was canceled Zabalaoui shifted his activities full-time to
what had been until then his recreation during breaks at the Center: a board
game called atlantic wall with
three boards set up in different rooms for the Allies, Axis and referees.
Zabalaoui started bringing his Macintosh computer with him to the game and between moves began writing the first v for victory game, the series that became the trademark of Atomic Games. That first title, v for victory: utah beach, was selected as Game of the Year by Strategy Plus in 1992.
Atomic Games' most successful attempt to build an historically accurate game is close combat 2: a bridge too far, a rendering of a WWII German-American tank battle. The game has won many awards for its realism. In part this realism is achieved by the addition of sound and movie-like visual effects, but a key element is provided by models of the behavior of men under fire. This human aspect of combat has been informed by advisors such as Dr. Steven Silver, a combat psychologist.
Whether or not this imaginary alliance between Atomic Games and AI researchers in the ICT is ever realized, my point is to illustrate how the Army's goal of leveraging technology from the film and video game industry for its own purposes might be achieved at sites like this incubator institute. The military has contributed enormously to the development of the digital technologies that are transforming our world, but it has become a backseat player in the new digital economy.[59] According to the Interactive Digital Software Association
(IDSA), the sale of game and edutainment software for computers, video
consoles, and the Internet generated revenues of $5.5 billion in the U.S. alone, making it the fastest growing
entertainment industry in the world. Video game rentals accounted for a
further $800 million in 1998. The interactive entertainment software industry
that created these products did so with only about 70,000 employees. Compare
these figures with the motion picture business, which generated $6.9 billion,
but employed more than 240,000 people in doing so.[60] In 1998, software sales
continued to skyrocket, increasing by 22 percent on a dollar basis, making it
the third consecutive year the industry experienced double-digit growth. Video
game sales racked up more than $3.7 billion, and computer game sales topped
$1.8 billion. Retail sales remained strong throughout the year, with each month outperforming the same month a year earlier. In addition, unit sales increased by 33 percent, with 181 million units of PC and video games sold in the U.S. alone,
or almost two per household. Through the first three quarters of 1999, video
game unit sales were up 31 percent, and dollar sales were up 21 percent. Unit
growth for computer games increased 22 percent and dollar sales increased
almost 20 percent. Total sales reached $3.3 billion, a 19 percent increase
compared to the same period in 1998.
What these figures
suggest is that sufficient economic incentives exist alongside the policy and
organizational structures I have been describing to fuel the continued rapid
diffusion and improvement of military SIMNET technology through its fusion with
videogame and film. Perceptronics, for example, one of the original contractors for SIMNET, has been committed to the redeployment and further development of that technology into its Internet Collaborative 3D™ Framework (IC3D™) for mass-market, people-oriented 3D experiences on the web, in which multiple users can interact fully, naturally, collaboratively, and in real time within virtual environments.[61] For those who see such
developments as contributing to the fusion of the digital and the real, and as
I have argued, creating the precondition for a “posthuman” future, the ride
isn’t over yet.
The Institute for
Creative Technology seeks to merge the military's interests in interactive
simulation technology with shared interests in technology of academics and the
film industry. Such incubators of mutual interests in computing and
communications technologies have been launched in other domains as well. A few
days following the Army's announcement of its investment in Creative
Technologies, the CIA announced that it was investing $28 million in a venture
capital firm, In-Q-It, headed by 39-year-old Gilman Louie, the former head of
game company Spectrum Holobyte, now Hasbro, which produces falcon 4.0 among other leading games.[62] Just what the CIA hopes to gain from this arrangement remains unclear, but Louie said that the new company is designed to move information technology to the agency more quickly than traditional government procurement processes allow. Among the new company's board members are John Seely Brown, director of the Xerox Corporation's Palo Alto Research Center; Lee Ault, director of Equifax Alex Brown; Stephen Friedman of Goldman Sachs; Norm Augustine, chairman of Lockheed Martin; and William Perry, the former Secretary of Defense who, I have argued above, contributed enormously to transforming the military into a commercially efficient engine.[63]
At the
outset I discussed developments related to ubiquitous computing, MEMS, and
smart matter supported by a consortium of private companies and government
funded research, particularly DARPA. To some the remaking of the world
suggested by proponents of those projects has more the chilling fantastic
character of science fiction depicted in The
Matrix or in Stephenson’s Snow Crash
than of warm-blooded reality. Earlier audiences no doubt voiced similar reactions to Ender's Game and even 2001 as unrealizable, paranoid figments of the cultural imagination (indeed, the introduction to the 1988 edition of Ender's Game claims that any such realization on the Internet will be far in the distant future). As an exercise
in thinking about how such a world might develop, given events and programs
currently in place, I have drawn an analogy to a parallel but closely related
development within the military-entertainment complex. I have attempted to show
how the boundaries between exotic graphics and computer simulations for
military purposes on the one hand and video games and entertainment graphics on
the other have dissolved into bonds of mutual cooperation symbolized powerfully
by the creation of the joint military/film industry-funded Institute for
Creative Technologies. I have also argued that in the course of that
development a fusion of the digital and the real has taken place, and with it
the disappearance of the boundary between fantasy and reality. The fact that
the Campaign Engine driving preparations for F-16 missions and tank maneuvers
in future Bosnias and Serbias is the very same technology we use to engage our
skills in Internet gaming is certainly suggestive. That it represents a fusion
of the digital and the real is perhaps even more strongly indicated by the
midterm report filed in August 1999 by Colonel Mark E. Smith, the director of
the Joint Advanced Distributed Simulation Joint Test Force (JADS JTF),
responsible for monitoring progress in implementation of the simulation
projects discussed above. Among the successful tests of the ADS system reported
was a “Live Fly Phase”(LFP) conducted in October 1997 in which live units were
interconnected with simulation units in a training scenario. Distributed simulation techniques were
used to link two live F-16 aircraft (flying on the Gulf Test Range at Eglin Air
Force Base, Florida) representing the shooter and target to a simulated
air-to-air missile in the “Hardware-in-the-Loop” (HWIL) laboratory (also at
Eglin). The shooter aircraft “fired” the air-to-air missile in the missile lab
at the F-16 target and provided data link updates of the target position and
velocity to the missile during its flyout. Other combinations of
simulation-live unit fusion are being tested as well. Smith's report pronounced the tests a surprising success, the only downside being a 3.1-second latency in one of the data links.
On
September 1, 1999 Intel Corporation announced the first of a new series of
network processors designed to solve bandwidth problems of the sort encountered
in the LFP test. The new processors comprise programmable, scalable switching
and formatting engines and physical layer devices. In all, 13 different
components of the new processors can be used to develop network devices for
local and wide area networks (LAN and WAN) as well as Internet-based networks.
Such technologies are aimed at delivering real-time voice and video transmission over the Internet, resolving the discrepancy between real-world and simulated experience. Orson Scott Card's vision of a young squadron of Enders
switching between live and simulated versions of a military engagement may not
be that far off.
Endnotes
[1] William
Gibson, Neuromancer, p. 51.
[2] For discussions of computer-mediated communication and computational sciences see Richard Mark Friedhoff and William Benzon, The Second Computer Revolution: Visualization, New York: W.H. Freeman, 1989. Other important discussions are in Information Technology and the Conduct of Research: The User's View,
Report of the Panel on Information Technology and the Conduct of Research,
National Academy of Sciences, 1989. B.H. McCormick, T.A. DeFanti, and M.D. Brown, Visualization in Scientific Computing, NSF Report, published as a special issue of Computer Graphics, Vol. 21 (6) (1987). An equally impressive survey is the special issue on computational physics in Physics Today, October 1987. See especially the articles by Norman Zabusky, "Grappling with Complexity," ibid., pp. 25-27; Karl-Heinz
A. Winkler, et al., "A Numerical Laboratory," ibid., pp. 28-37;
Martin Karplus, "Molecular Dynamics Simulations of Proteins," ibid.,
pp. 68-72. For a consideration of computer-mediated communication and
computational science in relation to theory, see Timothy Lenoir and Christophe
Lécuyer, "Visions of Theory: Fashioning Molecular Biology as an Information
Science," in M. Norton Wise, ed., Growing
Explanations, Princeton: Princeton University Press (in press). For
computer-mediated communication and notions of the self see: Sherry Turkle, Life on the Screen: Identity in the Age of the Internet, New York: Simon & Schuster, 1995, especially chapter 7, pp. 177-209, and chapter 10, pp. 255-270; Brian Rotman, "Going Parallel: Beside Oneself," 1996, <http://www-leland.stanford.edu/class/history204i/Rotman/Beside/top.html>.
[3] N. Katherine
Hayles, How We Became Posthuman: Virtual Bodies in
Cybernetics, Literature, and Informatics (Chicago: University of
Chicago Press, 1999), pp. 2-3.
[4] The first
reference I am aware of to the "military-entertainment complex" was
the lead article in the first issue of Wired
Magazine by Bruce Sterling in 1993. See Bruce Sterling, "Virtual War
is Hell," Wired Magazine, Vol 1,
No. 1, January 1993, online at: http://www.wired.com/wired/archive/1.01/virthell.html?topic=&topic_set=
See especially page 7.
[5] Ivan E.
Sutherland, "Virtual Reality Before It Had That Name," Videotaped
lecture before the Bay Area Computer History Association.
[6] Ibid.
[7] Other
head-mounted display projects using a television camera system were undertaken
by Philco in the early 1960s. For a discussion see: Stephen R. Ellis,
"Virtual Environments and Environmental Instruments," in Simulated
and Virtual Realities, K. Carr and R. England, Editors. Taylor & Francis: London, 1996, pp.
11-51.
[8] Ivan E. Sutherland, “Virtual Reality Before It Had That
Name.”
[9] Ivan E.
Sutherland, "The Ultimate Display," Proceedings of the IFIP Congress, 1965: pp. 506-508; Ivan E. Sutherland, "Three Kinds of
Graphic Data Processing,"
Proceedings of the International Federation for Information Processing, 2
1965: pp. 582-83; These ideas are
elaborated in Ivan E. Sutherland, "Computer Displays," Scientific American, 222(no
6, June) 1970: pp. 56-81.
[10] Ivan E.
Sutherland, "The Ultimate Display," Proceedings of the IFIP Congress, 1965: pp. 506-508.
[11] Harvey Greenfield, Donald Vickers, Ivan Sutherland, Willem Kolff, et al., "Moving Computer Graphic Images Seen From Inside the Vascular System," Transactions of the American Society for Artificial Internal Organs, Vol. 17 (1971), pp. 381-85, quote from p. 381.
[12] Ivan Sutherland, Robert F. Sproull, and Robert A. Schumacker, "A Characterization of Ten Hidden Surface Algorithms," ACM Computing Surveys, March 1974.
[13] John E. Warnock, A Hidden Surface Algorithm for Computer Generated Half-Tone Pictures, Ph.D. thesis, University of Utah, 1969.
[14] Garry S. Watkins, A
Real-Time Visible Surface Algorithm, Ph.D. thesis, University of Utah,
1970.
[15] H. Gouraud, "Computer Display of Curved Surfaces," IEEE Transactions on Computers, June 1971.
[16] Other noteworthy graduates of the Utah program in the late 1970s include Jim Kajiya (Ph.D. 1979), who developed the frame buffer concept for storing and displaying single-raster images, and Gary Demos, who started several major computer graphics production companies and had a large impact on the introduction of computer graphics technology in the film industry.
[17] M. Bricken,
"Virtual worlds: No interface to design," Technical Report R-90-2,
Human Interface Technology Lab, University of Washington, 1990.
W. Bricken, "Virtual Environment Operating System:
Preliminary functional architecture," Technical Memorandum M-90-2, Human
Interface Technology Lab, University of Washington, 1990.
W. Bricken, "Coordination of multiple participants in
virtual space," Technical Memorandum M-90-11, Human Interface Technology
Lab, University of Washington, 1990.
[18] United States Congress, Senate Committee on Commerce, Science, and Transportation, New Developments in Computer Technology: Virtual Reality: Hearing before the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science, and Transportation, One Hundred Second Congress, First Session, May 8, 1991, especially pp. 33-34, 42-49.
[19] I.E.
Sutherland, C.A. Mead, T.E. Everhart, Basic
Limitations in Microcircuit Fabrication Technology, RAND Corporation Report
No. AD-A035149, Santa Monica, California, Nov. 1976, 58 pages, prepared under
DARPA Contract No. DAHC15-73-C-0181.
[20] John Markoff, Phillip Robinson, and Donna Osgood, "Homebrew Chips," BYTE, May 1985, p. 363.
[21] During this period, Kahn advanced from Chief Scientist to Deputy Director of DARPA's Information Processing Techniques Office (IPTO) in 1976 and became Director of IPTO in November 1979.
[22] See "Computer Hardware and Software for the Generation of Virtual Environments," pp. 247-303, in Nathaniel I. Durlach and Anne S. Mavor, eds., Virtual Reality: Scientific and Technological Challenges (Washington, D.C.: National Academy Press, 1995), especially figure 8-4, "The History of Workstation Computation and Memory," on p. 257.
[23] Discussed by Scott Fisher in his presentation to the Committee on Virtual Reality Research and Development, Woods Hole, Mass., August 1993. See Nathaniel I. Durlach and Anne S. Mavor, eds., Virtual Reality: Scientific and Technological Challenges (Washington, D.C.: National Academy Press, 1995), p. 508. Also see Frederick P. Brooks, "Project GROPE: Haptic Displays for Scientific Visualization," ACM Computer Graphics, Vol. 24, No. 4 (1990), pp. 177-85, especially p. 184.
[24] See Alvy Ray
Smith's Academy Award citation: http://research.microsoft.com/research/graphics/alvy/memos/award.htm
[25] R.A. Drebin, L. Carpenter, and P. Hanrahan, "Volume Rendering," SIGGRAPH 88 Conference Proceedings, Computer Graphics, Vol. 22, No. 4, Aug. 1988, pp. 65-74.
[26] See, for example, Tony Parisi, "VRML: Low-Tech Illusion for the World Wide Web," in Clark Dodsworth, Jr., ed., Digital Illusion: Entertaining the Future with High Technology (New York: ACM Press, 1998), pp. 129-136, especially p. 134. Also see Ken Perlin and Athomas Goldberg, "IMPROV: A System for Scripting Interactive Actors in Virtual Worlds," SIGGRAPH 96 Conference Proceedings, Computer Graphics, 1996, pp. 205-216, especially pp. 205-206.
[27] Martin Randall, "Talisman: Multimedia for the PC," IEEE Micro, Vol. 17, No. 2 (March/April 1997), pp. 11-19.
[28] Edward McCracken, "Inspired by Vision: A Letter from Ed McCracken," in National Association of Broadcasters '97 & National Association of Broadcasters MultiMedia World, 1997: http://www.sgi.com/features/studio/nab/index.html. McCracken also noted:
While there have been incredible advances across many areas of science and technology -- the new Craylink architecture for supercomputers, new improvements on the space shuttle, sheep cloning -- no advance has been more prolific, more ubiquitous, more wide reaching than consumer oriented entertainment developments.
[29] See Ed McCracken, Strategic Computing: Defining the Workflow Across the Organization, Silicon Graphics Computer Systems Summary Annual Report, 1997: http://www.sgi.com/company_info/investors/annual_report/97/ceo.html; also see the comparative financial data reported for 1993-97 at: http://www.sgi.com/company_info/investors/annual_report/97/fin_sel_info.html
[30] Kurt Akeley, "Riding the Wave," Silicon Graphics European Developers Forum, Munich and Tel Aviv, 1998: http://www.sgi.com/developers/marketing/forums/akeley.html; see especially slide 7: http://www.sgi.com/developers/marketing/forums/akeley7.html
[31] Jack A. Thorpe, "Future Views: Aircrew Training 1980-2000," unpublished concept paper at the Air Force Office of Scientific Research, 15 September 1978, discussed in Richard H. Van Atta, Sidney Reed, and Seymour J. Deitchman, DARPA Technical Accomplishments: An Historical Overview of Selected DARPA Projects, 3 volumes, Institute for Defense Analyses, IDA Paper P-2429, 1991, Vol. 2, chapter 16, p. 10.
[32] Ibid., note 50, chapter 16, p. 10.
[33] The training
concept was to provide a means of cueing individual behavior, with the armored
vehicle being part of the cueing. When individuals and crews reacted, they
would provide additional cues to which others would react. Thus, the technology
was to play a subservient role in the battle-engagement simulations, making no
decisions for the crews, but rather simply and faithfully reproducing
battlefield cues.
[34] Van Atta,
Chapter 16, p. 13.
[35] Once the decision was made to remove BBN from the graphics portion of the project, Cyrus left Boeing and formed an independent company, Delta Graphics, in order to devote his full energies to developing the graphics technology for SIMNET. The initial contractor, BBN, retained responsibility for the network technology, but with the needed change in architecture, i.e., with the use of microprocessor-based graphics generators.
[36] See Jack A. Thorpe, "The New Technology of Large Scale Simulator Networking: Implications for Mastering the Art of Warfighting," in Proceedings of the 9th Interservice Industry Training Systems Conference, November 30-December 2, 1987, American Defense Preparedness Association, 1987, pp. 492-501.
[37] R.J.
Lunsford, Jr., US Army Training Systems
Forecast, FY 1990-1994, Project Manager for Training Devices (US Army
Materiel Command), Orlando, Florida, October 1989, p. 14. Cited in Van Atta,
Chapter 16, p. 31.
[38] DoD Directive
5000.1, March 15, 1996, Section D: Policy, Para 2: Acquiring Quality Products,
item (f): Modeling and Simulation.
[39] U.S. Department of Defense, Office of the Inspector General, Requirements Planning for Development, Test, Evaluation, and Impact on Readiness of Training Simulators and Devices, 1997, cited by Committee on Modeling and Simulation, Modeling and Simulation: Linking Entertainment and Defense (Washington, D.C.: National Academy Press, 1997), Table 1.1, p. 17: http://www.nap.edu/readingroom/books/modeling/table1.1.html (main URL: http://www.nap.edu/readingroom/books/modeling/).
[40] In 1999 video games alone grossed $6 billion. According to a recent survey by Entertainment Weekly of entertainment preferences in American households, 35% listed reading books as their favorite entertainment; playing video games came in second place at 30%, while watching a video ranked at 17%.
[41] Based on responses in interviews I have conducted for a project on the development of computers in medicine; the point is also frequently mentioned in articles in the popular press.
[42] See the
discussion by Jeffrey Potter of Real 3D in
Modeling and Simulation: Linking Entertainment and Defense, pp. 164-165.
Also see: http://www.real3d.com/sega.html
[43] For the
program description see: http://www.stricom.army.mil/STRICOM/PM-ADS/ADSTII/
[44] The R3D/100 chipset directly interfaces with Microsoft®-compliant APIs (application programming interfaces), such as OpenGL™.
[45] See the press release on the Open Arcade
Architecture forum: http://www.intel.com/pressroom/archive/releases/CN71497B.htm
Also see the speech by Andy Grove at the June 20, 1997
Atlanta Entertainment Expo, "The PC Is Where the Fun Is," http://www.intel.com/pressroom/archive/speeches/asg62097.htm
[46] See
"Company" at: http://www.wizbang.com/
[47] Ibid.
[48] Ibid.
[49] For Steven Woodcock's bio see: http://www.cris.com/~swoodcoc/stevegameresume.html. Also see Steven Woodcock's interview on the future of AI technology and the impact of multi-player network-capable games in the Wall Street Journal Interactive Edition (May 19, 1997), and Donna Coco, "Creating Intelligent Creatures: Game Developers are Turning to AI to Give Their Characters Personalities and to Distinguish Their Titles from the Pack," Computer Graphics World, July 1997, Vol. 20, No. 7, pp. 22-28: http://www.cgw.com/cgw/Archives/1997/07/07story1.html
[50] General
Charles C. Krulak, Marine Corps Order 1500.55, "Military Thinking and
Decision Making Exercises," online at http://www.tediv.usmc.mil/dlb/milthink/
[51] For the
PC-Wargames Catalog, see: http://www.tediv.usmc.mil/dlb/milthink/catalog/title.html
[52] For an
interesting discussion of Marine Doom, see Rob Riddell, "Doom Goes to War:
The Marines are Looking for a Few Good Games," in Wired Magazine, Vol 5, No. 4, April 1997. Online at: http://www.wired.com/wired/archive/5.04/ff_doom.html?topic=&topic_set=
[53] The book is published by Addison-Wesley. For more
information, visit http://www.aw.com/cseng/.
[54] Cited in note 39 above: Committee on Modeling and Simulation, Modeling and Simulation: Linking Entertainment and Defense (Washington, D.C.: National Academy Press, 1997): http://www.nap.edu/readingroom/books/modeling/
[55] Michael Macedonia, "Why Digital Entertainment Drives the Need for Speed," Computer, Vol. 33, No. 3, 2000: http://www.computer.org/computer/co2000/r3toc.htm
[56] See note 49 above.
[57] For interesting discussions of the Battle of 73 Easting see Bruce Sterling, "War Is Virtual Hell," Wired Magazine, Vol. 1, No. 1, January 1993, online at: http://www.wired.com/wired/archive/1.01/virthell.html?topic=&topic_set= (see especially pp. 6-7 of the online article). An important discussion of the place of the 73 Easting simulation in military strategic planning is to be found in Stephen Biddle, "Victory Misunderstood: What the Gulf War Tells Us about the Future of Conflict," International Security, Vol. 21, No. 2 (1996), pp. 139-179.
For details on the construction of the simulation, see:
Jesse
Orlansky and Colonel Jack Thorpe, eds., 73 Easting: Lessons Learned from Desert
Storm via Advanced Distributed Simulation Technology, IDA D-1110 (Alexandria,
Va.: Institute for Defense Analyses, 1992).
Colonel Michael D. Krause, The Battle of 73 Easting, 26 February 1991: A Historical Introduction to a Simulation (Washington, D.C.: U.S. Army Center for Military History and the Defense Advanced Research Projects Agency [DARPA], August 27, 1991).
J.R. Crooks et al., 73 Easting Re-Creation Data Book (Westlake, Calif.: Illusion Engineering, Inc., 1992), IEI Report No. DA-MDA972-1-92.
W.M. Christenson and Robert Zirkle, 73 Easting Battle Replication, IDA P-2770 (Alexandria, Va.: Institute for Defense Analyses, 1992).
[58] See Kevin Kelly, "God Games: Memorex Warfare," from Out of Control (New York: Addison-Wesley, 1994): http://panushka.absolutvodka.com/kelly/ch13-e.html
[59] On the role of the military in creating and sustaining the computer revolution, see Thomas P. Hughes, ed., Funding a Revolution: Government Support for Computing Research (Washington, D.C.: National Academy Press, 1999); Thomas P. Hughes, Rescuing Prometheus (New York: Pantheon Books, 1998).
[60] See the
Interactive Digital Software Association's 1999
State of the Industry Report. http://www.idsa.com/
[61] See
descriptions of Perceptronics' recent work at http://www.perceptronics.com
[62] CIA Press
Release, September 29, 1999: http://www.odci.gov/cia/public_affairs/press_release/index.html
[63] See John Markoff, "High-Tech Advances Push C.I.A. Into New Company," New York Times, September 29, 1999.