SEHR, volume 4, issue 2: Constructions of the Mind
Updated 4 June 1995

book review

practical philosophy

human reliability analysis: context and control

Erik Hollnagel (London: Academic Press, Computers and People Series, 1993)


Niklas S. Damiris

It might, at first glance, seem odd to find a review of an ergonomics textbook in a humanities journal. The hope is, however, that the reader will see the pertinence of the subject by the end of this short commentary. Ever since Heidegger first problematized the issue of technology in his essays "The Question Concerning Technology" and the "Letter on Humanism," scholars in the humanities have been preoccupied with what he called "enframing," that is

that way of revealing which holds sway in the essence of modern technology and which is itself nothing technological. [. . . .] Enframing means the gathering together of that setting-upon which sets upon man, i.e. challenges him forth, to reveal the real, in the mode of ordering, as standing reserve.[1]

However, there has been a shift from the Heideggerian preoccupation with Being and Technics to an engagement with the Artificial Intelligence practitioners' claims about dissolving traditional philosophical problems, like the dualism of mind and matter. Such claims must have sounded to many humanist ears as compatible with, if not similar to, Heidegger's and Derrida's announcement of the end of metaphysics.

At least since the mid-seventies there has been a rising fascination with the Artificial Intelligence community's highly publicized claims that there are computational systems in the making whose wondrous intelligence will inaugurate a new age, that of the smart machine (an age which, I might add, will transcend the fallible human). Of course, the literary imagination, itself the creator of Golems, robots, and monsters like Frankenstein's, was already primed to receive such fantastic claims. A philosophical discussion nevertheless ensued concerning the kind of "mind," "intelligence," and "expertise" that was to be exhibited by the new machines and whether they would or could eventually replace human mentality.

However, a much more complicated picture emerges when one looks into the development and commercial application of actual machines that participate in various work environments -- one which cognitive science, as presently practiced, has difficulty theorizing about. Here, too, as in metaphysics, one forgets that it is the exigencies of the concrete situation and the specific techno-logies (writing and computation, respectively) which allow or disallow things to happen, not some unified theory. Such theorizing makes it difficult to see that there is cognition already at work -- albeit cognition of an embedded, embodied, and practical kind.

Enter Erik Hollnagel and his monograph Human Reliability Analysis: Context and Control. For starters, it is clearly written and free of jargon. The only thing I find bothersome is the excessive use of acronyms, but this, I understand, is standard practice in his field.

The field is cognitive ergonomics, which means

system design with the characteristics of the joint cognitive system (the operator and the computer) as a frame of reference.

One does not find here inflated claims about computational minds. Instead, Hollnagel offers what I would call a phenomenological description -- he calls it "task analysis" -- of the interaction of human operators and information technology; and he is concerned with the precarious balance that has to be maintained if the joint system they form is to be a) reliable, b) robust, and c) adaptable. Specifically, in Hollnagel's own words,

Reliability is the ability of the system (human and machine) to perform its required function under specified environmental conditions during a given interval of time; Robustness is the ability of the system to perform its required function in case of environmental conditions which the system is not designed or constructed to tolerate; Adaptability is the ability of the system to perform its required function in case of environmental conditions which prevent it from using the normal procedures.

Do not let any Robocop phantasies at this point lead you astray! Rather, imagine a situation more akin to what Donna Haraway describes in her Cyborg manifesto.[2] Think next of Three Mile Island, or Bhopal, or Chernobyl: all cases of what C. Perrow has called "normal accidents," meaning that the possibility for the occurrence of accidents is built into the very structure of complex technological systems and cannot generally be eliminated by improved organizational designs, more complete information, or "smarter" organizational staff.[3] Perrow's rather gloomy diagnosis is that disaster is the "normal price" we must pay for highly complex systems. In other words, any technologically sophisticated system always brings with it a proportional rise in unavoidable and irreducible risk. This is particularly so in cases of tight coupling, where the system is precisely this magma of human actions and machine operations or, alternatively, machine behavior and operator decisions.

I bring this up so that the reader may appreciate the difficulties involved in the field of human reliability analysis, of which Hollnagel is a leading practitioner. Unlike the ubiquitous Herbert Simon and his school, he does not try to do applied epistemology; he does not allow the metaphysical assumptions underlying his model to dictate how the system modelled should behave. In short, he does not locate the usefulness of a model in its formal properties, but rather treats it as a prosthetic device for those caught in concrete problematic situations. That is, a model is good if it enables cognitive skills to be deployed and rationed in a socially accountable and response-able way.

A.I. ideologues prefer instead to use accident statistics to push for the design of systems which, they claim, not only compensate for mistakes but eventually replace the fallible human operator. Such an effort is both hubris and the easy way out. By privileging some formal, that is, internal criteria of rationality or efficiency, it blatantly ignores the much thornier issue of contextual control, where "messy" parameters are the norm: for example, estimating subjectively available time, responding to sudden changes in the situation, evaluating on the spot the effects of several possible lines of action, and so on. Hence the concern with reliability, rather than rationality "bounded" or "unbound."

Academic cognitive science also omits something else that is crucial: to err is human, and humans do learn from their errors. Thus, in order for them to develop and improve, provisions have to be made for mistakes. Instead of pretending that eventually a way can be found to eliminate all human fallibility, one needs to make room for human error, for it is only through such errors that the joint system improves.

Hollnagel, a Dane, belongs to the Scandinavian tradition for which cognitive skills like reasoning, judgement, and interpretation are not abstract mental functions best analyzed through formalization. Instead, cognition is the embodied capacity to act appropriately and ethically in the situation one is part of.


Notes

1. Martin Heidegger, The Question Concerning Technology and Other Essays (New York: Harper, 1977) 20.

2. See Donna Haraway, Simians, Cyborgs and Women (New York: Routledge, 1991).

3. See Charles Perrow, Normal Accidents (New York: Basic, 1984).