Mary Keeler

(Here is the version I posted on the PORT list, to which you will see some references.)

------------------

I spent a great deal of time on this report because the conference offered me the chance to collect and review many subtle and difficult issues in one document, from which I might hope to distill a core argument to cover the circumstances we face in on-line resource development.

[Part I is short (around two pages) and may serve as an "executive summary." Part II is a collection of more detailed notes on the presentations and discussions (around eight pages). My project's discussion list includes LIS professionals, resource developers, and technologists from industry and academe.]

----------------------------------------------

A PORTean View of the Sloan Conference

Part I. Discussion of Conclusions

"Using the World Wide Web for Historical Research in Science and Technology," (Aug. 20-21), organized by Harvard and Stanford Universities, for the Sloan Foundation to present and discuss the projects Sloan has funded in a program called "Web Interactivity/Collaboration: Technologies That Work" (http://sloan.stanford.edu/SloanConference/SloanProjects.htm). Results of five STIM (Science and Technology in the Making) projects were presented.

1. MouseSite (for exploring the history of human-computer interaction)
2. Making PCR (the story of the invention of the polymerase chain reaction in biotechnology)
3. EV Online (the history of electric car development)
4. The Blackout History Project (memories of the two major power failures in New York City)
5. The Big Dig (collection of original materials used in planning the Central Artery/Tunnel, at MIT)

The Sloan Foundation program officer (Jesse Ausubel) concluded that "the job of digitizing pre-existing knowledge is great, but not as great as the job of maintaining the documentation for recent technology development." [PORT has both jobs to do!]

I have listed the four conclusions that I think are most significant, drawn from the comments offered by participants at the conference. (Numbers and asterisks in brackets refer to contexts in my more detailed notes of the meeting, listed in Part II.)

1. No one (alone) can create complex online projects such as the five that the Sloan Foundation has funded. No one realm of expertise is enough; no one can know everything required. These are necessarily collaborative projects, which means that managing or facilitating (or supporting) the coordination and cooperation of team members is crucial to the success of project development. Ideally, that team should include a project manager, a curator, a designer, a content expert, and a technical developer. Robert Bud (Science Museum, London) [13* and 17*] offered many pertinent remarks, and Douglas Engelbart (Bootstrap Institute) [16*] presented a detailed, pragmatic plan of operation for a collective, project-building operation. [See the marked points at 13*.] Effectively engaging the resource users is a consideration that is integral to online resource design, not an afterthought! [See the most ambitious attempt at 12*.] [Also, see comments at 6*-8*.] Many participants stressed the dire need for guaranteed persistence of resources, through broadly based institutional programs of responsibility and cooperation. [See comments at 14*.] Several of the project developers mentioned the need to examine what e-developers in other disciplines are doing, because the problems to be solved (both behavioral and technical) are often similar across widely diverse content domains.

2. Peter Lyman (School of Information Management and Systems, UC Berkeley) [1*] gave an excellent keynote address on the "politics of information," in which he stressed the difficulty--but necessity--of taking the user's view. Neither technologists nor library science experts represent this view effectively. The current model of "digital library systems" takes a traditional library "collections view," from which to create standards (such as the Dublin Core); these cannot adequately serve the needs of scholars in the context of their research. At what might be called the "resource content level" of operation, users need far more flexible and diverse methods of access to and interaction with digital materials [as the SGML-TEI initiative has demonstrated]. Jim Coleman (Harvard's Library Digital Initiative) [who details some requirements, 18*] and others mentioned the serious problem of real-world institutional recognition and rewards for "Web-world work," noting that John Unsworth (Director of UVA's Institute for Advanced Technology in the Humanities) is the only academic to have received tenure through "e-work." Participants agreed that tenured faculty (who run no political risk) need to take the initiative in developing e-projects, but also that effective engagement of resource users constitutes the major barrier to online resource development. Both the behavioral and the technical components of that problem must be addressed as serious issues. Lyman and others urged the collaboratory method, but no one specified what constitutes that method (in terms of research program, infrastructure, and testbeds).

3. Jerry McGann (one of the first tenured faculty to take the e-project initiative) was quoted [10*]: "We no longer need books to study books; we no longer need to be limited by our tools." And the projects presented at this conference demonstrate very well that we certainly do not need (or want to be limited by) books to study material that has never been represented in books. Roy Rosenzweig (Center for History and New Media, George Mason U.) [10*] concluded that "these works transform the relation between artifact and argument: more _is_ different," but that the conventions of the book make books easy to read, while these new media presentations "defy the agreement between reader and writer." Lyman suggested that we should critically examine the most common mode of interaction on the Web, the scroll (much like papyrus publishing), for its suitability. The _American Quarterly, Hypertext Scholarship in American Studies_, an online journal, was repeatedly mentioned as a model for the review and critique of online resources, which is obviously needed in all disciplines. Most digital project developers seem unaware that they have become software developers in the "new medium." As Paul Heckel (in _The Elements of Friendly Software Design_) very elegantly argues, writing friendly software is a _communication_ task, and to do it effectively we must critically apply the techniques that have been developed by writers, filmmakers, and other communicators over 2,500 years. He says the successful designer must "learn to think like a communicator and to practice an artistic craft as well as an engineering one"; the designer must know the medium and must be able to assume the user's view in order to judge the success of any design.

4. That brings me to what I think is the most serious issue of all. Abby Smith (Director of Programs, Council on Library and Information Resources) [2*] reminded us that, without a better understanding of what we are doing, what is possible, and what we want, we are suffering the effects of trying to transfer old methods into new media. Without a mechanism for "self-critical investigation," we cannot effectively learn from experience. We are largely not even aware of this need, and I am reminded of the last line of a song by the Montana Logging and Ballet Company (performed on National Public Radio), written in response to the Kansas Board of Education's recent decision on the teaching of evolution: "As long as dogma rules, you don't need to learn a thing." Misinterpretations and perpetuated false conceptions result from poor-quality representations and limited access to archived primary evidence, which are fundamental causes of dogmatic scholarship (and boring learning experiences). Effective research in any field depends on high-quality (and useful) primary resources to provide rich evidence for continued interpretation and discovery (by means of a testbed mechanism, I might add). Clifford Lynch (Coalition for Networked Information) [15*] and Abby Smith warned that, in the drive to serve "audience reach," we must not "corrupt archival standards" of quality. In particular, Jed Buchwald (Dibner Institute at MIT) [11*] and Spencer Weart (American Institute of Physics) [5*] stressed that any online resource project should use technological advantages to provide views (or displayed versions) of archived material for many purposes, but that these views should always be effectively connected with high-quality archive source versions, for unlimited discovery of yet more evidence. [Also, see 9*.] "All knowledge has its origins in our perceptions." --Leonardo da Vinci

Briefly, here is my general assessment of what took place and what was missing.

Understandably, there was a great deal of "back-patting"; the STIM projects were quite ambitious ventures. But without pre-defined criteria for what would constitute success (or the progress hoped for), and with no documented discussion of lessons learned or of what the (many disappointing) project results might imply _should_ be done in further attempts, what they learned from the experience is difficult to say. There was almost no provision for any "self-critical mechanism" (evaluation or assessment), either within or among projects, and (apparently) no recognition that such a mechanism was needed. These projects were conducted very unscientifically (that is, unpragmatically, in the Peircean sense of what it takes to learn from experience). Although they were called "experiments," they were certainly not conducted as such, in any respect that I could tell. At least, no hypotheses were explicitly formulated and tested (or, if they were, that process was not discussed in the presentations).

Even so, I think they constitute impressive "suggestions" for what might be done, and they certainly indicate what must be done to address the problem identified by the Sloan Foundation, as I explain in Part II. (I particularly like the MouseSite, with its multi-view archive by which to track the careers of pioneering computer technologists. Perhaps I am biased in judgment by the content?) Better understanding and institution of a _pragmatic_, collaboratory approach (in terms of required infrastructure, research program, and testbeds) would surely improve their learning and development progress!

Part II. My Notes on the Sloan Conference.

The Sloan Foundation is concerned about what Stewart Brand [3*] and others have called the "Digital Dark Age." Much of the last 20-30 years of computer technology development has been recorded in digital documents, which are now either unreadable or have been destroyed. Historians will find few records as evidence for a history of this period. [Tim Lenoir (MouseSite Director) told us that Stanford recently acquired Apple's records, shortly before the company had planned to destroy them.] Sloan has funded the five projects as experiments in new methods of historical research, engaging technology research communities in the process of creating and archiving the records needed for their historical study.

I do not specifically refer to the STIM projects in this report, except for the Blackout Project (which offers the best model for PORT's critical examination, I think). Here, in a short list of "needs," are the most pragmatic observations we can gain from the STIM "experiments." These _needs_ were all mentioned at some time during the event, but were not explicitly covered in any concluding remarks. All relate to the work we have proposed for PORT/ACCORD.

1. A registry for projects that categorizes their similarities and differences.
2. A system for sharing experimental tools, methods, and results.
3. A mechanism of evolving guidelines for collectively observing project advancement.

Jesse Ausubel, Program Director, explained the difficulties of documenting the recent history of science and technology and concluded that no one can afford to do history in the old way, which relies on collected documents. We need a "new habit" of historical recording, by which participants become active allies (at least, through professional societies). Future scholars will want to create 4-D models, not traditional papers. We must overcome the discontinuities of the current "vanity press" Web, "harness the desire to exhibit," and improve on the inefficiencies of traditional scholarship. He concluded: "How can we learn to be self-organizing?"

Several projects, not among the STIM group, were invited for their model qualities and experience (I presume). One specially invited group represented three projects from the Max Planck Institute for the History of Science: the Cuneiform Origins of Mathematics, the Galileo project, and the Virtual Lab in Physiology (VLP). The projects are not yet completely on-line (that is planned for the end of the year), but three project members attended the conference. I was particularly eager to see what they had done and are planning, since one of our NSF reviewers urged us to consider their Galileo project as a model. I was able to talk with their Head of Computing Systems (Joerg Kantel) at length, and he demonstrated their system.

They provide automatic processing of Web pages through Python (for scripting) and DTML (see Zope), with Java applets for animation. A FileMaker object base for data input and a Frontier (UserTalk) manager handle structured and unstructured data. They have digitized 300 notebook pages, which can be viewed at three levels (an overview of the folio page, a working view for individual comments, and a high-resolution facsimile view). Their VLP project operates as a "platform for discussion groups in research and learning" (for students, too) and is organized in three levels (called "exposition," "concepts," and "library"). Their research program includes hardware and software evolution and growing support for XML. They have certainly reached a stage that prepares them to appreciate the sort of community-based development operation that we have in mind, but they have not yet begun to think about how to manage user engagement effectively.
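To make the three-level view scheme concrete, here is a minimal sketch (in Python, which they already use for scripting) of how such view derivatives might be generated from a master scan. This is my own illustration, not their implementation; it assumes the Pillow imaging library, and the file names and pixel widths are invented:

    # derive_views.py -- hypothetical sketch: derive the two reduced view
    # levels (overview and working view) from a high-resolution master scan;
    # the facsimile level simply serves the untouched master file.
    import os
    from PIL import Image  # assumes the Pillow imaging library

    LEVELS = {"overview": 200,   # thumbnail of the whole folio page
              "working": 1000}   # readable view for individual comments

    def derive_views(master_path, out_dir):
        img = Image.open(master_path).convert("RGB")
        w, h = img.size
        base = os.path.splitext(os.path.basename(master_path))[0]
        for level, width in LEVELS.items():
            view = img.resize((width, int(h * width / float(w))))
            view.save(os.path.join(out_dir, "%s_%s.jpg" % (base, level)))

The point of the pattern is that only the master is archival; every displayed view is derived and therefore disposable, which is exactly the archive/delivery separation urged elsewhere at the conference.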

Now, here is a more detailed list of what I think were the important points made.

1. Peter Lyman (School of Information Management and Systems, UC Berkeley) stressed the serious need to examine the changing relationship between research and publication as a result of network advancement. Researchers must be able to make use of primary research materials (in archives) in an organized way, by means of high-quality resources. We must consider the need to "redefine the reader as a participant in the construction of knowledge, through interviews, participant observation, 'show-and-tell,' and email." The new kinds of narrative that emerge will require new kinds of interfaces, and "current search engines are impossibly stupid for the resources that are emerging." Adequate cataloguing is missing; without cataloguing, you have "papyrus technology" (digital documents have vague boundaries). On the Web, we have no control of context; how do we control (or even recommend) a context for any publication? The XML approach can make some improvement, but we need to "personalize how resource sites respond to users." He also mentioned the emerging issue of "shrink-wrap licenses."

Lyman thinks we must ask the right questions, such as: "Can this new medium constitute a literature?" A literature must be _preserved_ (CDs might last 50 years, in terms of the physical medium, but are unlikely to be readable for that long). Web literature is "fugitive." Brewster Kahle's Archive the Internet project is one approach (growing at 2 terabytes a month); another is the Digital Library Federation (around 20 libraries in the US). Quality control (or "the art of throwing away") is only possible in the context of preservation and the structure it provides. On the Web, that will involve the analysis of realtime use of documents, but who will participate? He proposes three strategies: 1. take advantage of existing social networks, 2. build communities of readers as participants, and 3. develop collaboratories.

2. Abby Smith (Director of Programs, Council on Library and Information Resources) warned that *we are in danger of simply transferring old schemes of operation into a new medium, using the dumbest proprietary systems, hoping that data will be "readable" for the longest time. But "books" now need to be conceived more like spores, and cataloguing is the thread that tracks their productivity. She mentioned a model being developed at the University of Ghent (by Herbert van de Sompel; see the Preservation and Access Newsletter at the CLIR website), which would allow Web users to go directly to the creator, through a library archive, or "a pyramid repository." We need a model for the sustainability of resources: "Who will be responsible (or serve as the archivist) for the 'care and feeding' of 'digital objects'?" Who will acquire, select, and organize (create records and manage to keep a collection comprehensive and redundant), not merely _save documents_? Decisions to save should depend on life-cycle relevance, and the keys for accountability are integrity, completeness, accuracy, and usability. She stressed the need for a historian-scientist interdependency to make identifying a discrete digital object safe. Vital research is based on access to more evidence, which depends on the researcher's access to the original material! She warned against using too much technology too fast, and advocated a "less-is-more" approach to project design. [A well-managed testbed (with user adaptation and technology modification balanced), it seems, would be required in this approach?]

3. Stewart Brand: the "Digital Dark Age" will be caused by too many poor storage media that become unreadable (the rule, so far, is to migrate every 10 years). Yet the older the data, the more valuable it becomes; no one has a business plan for longevity. We have no precedent for destructive obsolescence. "Moore's Law is Moore's Wall." Metcalfe's Law: the power of the net grows as the square of its nodes. The Open Source movement is very good, but it has not yet "stepped up to the longevity problem." We need "write once, read anytime" technology! It's up to us: will our data be immortal or ephemeral? "The best resource for innovation is the collected past." [Charles Faulhaber (Bancroft Library, Berkeley) told us that IBM put their Columbus celebration project (10 million documents) on optical WORM disks in 1992, and the format is now obsolete!]

4. Thom Edgar (Institute of Chemical Engineers, from U. of Texas) says we need collaboratories as Learning Community Portals of students, scholars, and industrial practitioners (sharing data, equipment, and preprints), but how large can they scale up? --We need business models that save faculty time, encourage participation, and increase productivity. --We need hypertext-based journals, links to more data, 3-D graphics. --We need to get to the root of these issues.

5. Spencer Weart (American Institute of Physics) mentioned that the AIP has 90 finding aids! The Web was invented as a medium for scholarly interchange, but now we use it to go get little bits of information, not much for interaction at all. Will interactive pre-prints have self-correction features (see Einstein Online)? In science, a whole lot of people work together to correct; not yet so in historical (or humanist) research. He described what the "journal model of scientific work" (e.g., Los Alamos) should be in e-form: supplementary data archives (backup data), so that from a "boiled down" level above you can "drill down" to the source-data archives. History has been presented as "a 'picture' of everything happening at once." Now we need hyperlinks as chains of inference by user choice. Readers organize data to "structure their own narratives." We must study how readers use these structures; not only should the content be dynamic, but the structures should also be dynamic. We have problems with heavily text-based materials on a 70dpi screen. When we have 300dpi, we will have dynamic narrative structures!

6. John Roberts (Senior Archivist at IBM, in charge of converting 8,000 cubic feet of paper documents) has the challenge of matching archival advances to corporate and marketing needs--but also to scholars' needs. Gerstner (the current CEO) wants to preserve the heritage of the company (by next year!) in a Lotus database and a modified museum system (by Gallery Systems). They want index description down to the folder level, to link archive and library search methods. The problem is that technology is advancing ahead of content preparation and may render currently used technology useless.

7. Leo B. Slater (Chemical Heritage Foundation) says the most significant question is "can we use Web technology for intellectual management?" (CORE and TULIP studies were mentioned.) We are prevented by these "infrastructural facts": --don't know how to work together, --don't get rewarded for doing so, and --don't have a new mode of understanding what accounts to keep in the process of collaboration.

8. Faulhaber suggested developing a protocol for what to keep. Smith said we need the experience of senior faculty in experiments, since they have nothing to lose by serving as models. Many participants agreed that we need scholarly and managerial models for changes in practice. And we need "registries for Web objects"--to establish who is taking responsibility for the persistence of links.

9. Alexa McCray (Lister Hill National Center for Biomedical Communications at the National Library of Medicine) explained that the NIH had early experience in digital conversion (in 1992 they digitally archived 40,000 pages). In the Profiles in Science project they learned the critical role of metadata (independent records as templates for document types), the need for topical terms, and the value of TIFFs (as digital archive master copies) of the original documents, saved as GIFs (for service copies) and then as PDF files (for Web serving), so that users can always go back to high-quality images for detail. Their Profiles website became operational in 1998, "as incentive for more scientists to participate in developing their biographies, and so that students can look behind the scenes." The archive includes text, audio, still images, and video, and is organized into three levels:

(1)"Electronic Exhibit" as the introduction [initial interface] with other material "behind." (they use standard terminology from the A&A thesaurus for document types, the PDFs have zoom facility, and they underlay the OCR for text search; they use QuickTime video, with Real Media. (2) "Online Annotation" for the profiled scientist to interact with and maintain development. ("Design Principles" can be found on their Website, with an architecture diagram.) (3) "Metadata for collection management" links all aspects (programs generate KML[? or XML] from metadata RDBMS) to generate alternative views and provide filtering. "Dublin Core elements are derived from the metadata entry system for simplicity, semantic interoperability, international consensus, and modularity." As for the problem of providing context? Alexa hopes for a "distributed approach."

10. Roy Rosenzweig (Center for History and New Media, George Mason University) showed samples of his projects, such as an "Internet roundtable" on interpreting the Declaration of Independence. *Also showed us the _American Quarterly, Hypertext Scholarship in American Studies_, an online journal (esp. a sample of Thurston's presentation, with anchor text and frames). *Quoted Jerry McGann: "We no longer need books to study books; we no longer need to be limited by our tools." Showed several other interesting examples, such as <sscnet.ucla.edu>, and said that *these works "transform the relation between artifact and argument: more _is_ different." He commented that the conventions of the book make books easy to read, and that these new media presentations "defy the agreement between reader and writer."

11. Jed Buchwald (Dibner Institute at MIT) uses MS FrontPage, and showed us Katherine Renne's "Hydrographic Topology of Baroque and Ancient Rome" (Aquae Urbis). *Emphasized that the Institute includes the Burndy Library (a science collection beginning in the 15th century) and that resource users must be able to "drill down" from a Web-presented PDF to high-quality archive views of material. (Any project should be bound to an extensive library of background material, he contends.) They use a Web-based cataloguing system (TLC) on UNIX and NT systems. He reminded us that we can't expect simply to put a printed page on screen; any display system transforms the format, so we need designers to create an appropriate new format. [Tim Lenoir mentioned the Virtual London project in that regard.]

12. Jim Sparrow (Director, New York Blackout Project, the History of Technological Failure) took as the premise for the project: "the audience can make its own history." They used the Web to collect, arrange, and present dialogue between the public and utility employees about the 1965 and 1977 NYC blackouts. Their plan was to model and facilitate engagement in historical analysis. *User comments inherit metadata from the topics offered for response, from which an "automatic catalogue" would be generated so the site could "grow itself." They hoped to create a "multi-faceted online space" that would be "all things to everyone engaged," introduced by a chronology and narrative. There is a "Highlights" page. *They used dialogue "to improve participants' memory specifics, but many suffered stage fright." They found that the Web "lacks the authority of a broadcast documentary or a book," and one of their biggest problems was "the blurring of the duality represented by the utilities employees and the public." *Their sources (or respondents) "did not comply with Web-robustness" (or the communicative potential offered by their interface to the Web), but used only the simplest modes of response and analysis (the "lowest common denominator" prevailed).

Using the metadata to "grow the site" became a problem, mostly because "the Web is not transparent for most users." Their advice was: "start with more primitive technology, then port to more advanced as users develop their habits and adapt." I think they said they have used Tango to build a replacement system that is elegant and robust. *They also used face-to-face interviews, and found that communication mode better for gaining participants' attention; then they could use the Web for maintaining contact and developing the persistence of their contributions. *[Their Web page looks like someone from _Wired_ designed it; I think their difficulties in getting responses stem from complex (and confounded) causes that need to be carefully sorted out, all the way from the site's initial perceptual impact to its techniques for "cultivating cognitive strategies." "All knowledge has its origins in our perceptions." --Leonardo da Vinci]
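The metadata-inheritance idea seems worth carrying over to PORT, whatever the Blackout site's troubles with it. A minimal sketch of the mechanism as I understand it from the presentation (the field names and record structure here are my own guesses, not their system):

    # inherit_meta.py -- hypothetical sketch: a user comment inherits the
    # catalogue metadata of the topic it responds to, so new contributions
    # are catalogued automatically and the site can "grow itself."
    def catalogue_comment(topic, comment_text, author):
        entry = dict(topic["metadata"])           # inherit topic metadata
        entry.update({"type": "user-comment",
                      "creator": author,
                      "relation": topic["id"]})   # link to the parent topic
        entry["text"] = comment_text
        return entry

    topic = {"id": "blackout-1977",
             "metadata": {"subject": "1977 blackout",
                          "coverage": "New York City"}}
    print(catalogue_comment(topic, "I remember that night...", "A. Reader"))

Note that the scheme works only as well as the topic metadata itself; a badly catalogued topic propagates its faults to every response.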

13. *Robert Bud (Head of Research, Collections at the Science Museum of London) said, "Anybody who invents a better mouse trap will go bust, without marketing." [All these projects would have benefited from traditional media marketing techniques, and certainly graphic design, interface, and programming (that is, media programming) would have improved their operation.]

14. David Kirsch asked if the STIM metadata system was salvageable. Their metadata structures were based on the Dublin Core subset of MARC record standards. Every entry in the database was required to have at least that minimum metadata associated with it. [I'll have more to relate from my continuing discussions with David, soon.] Jim Coleman responded that metadata probably need to serve as a particular tool for each particular project ("perhaps, compatible across all projects at some level of abstraction"). Jim Sparrow remarked that they need to be able to share code across all projects, for scholarly pursuits--an OS (Open System), "scholars need a forum for exchange." *Alexa McCray said, "scholars need a system that they can use and continue to develop further, and the metadata to make that possible!"
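The "required minimum" rule Kirsch describes is easy to enforce mechanically; here is a hypothetical sketch (the particular Dublin Core elements chosen are my assumption, not the STIM set):

    # min_meta.py -- hypothetical sketch: reject any database entry that
    # lacks the agreed minimum Dublin Core metadata.
    REQUIRED = {"title", "creator", "date", "identifier"}

    def check_entry(entry):
        missing = REQUIRED - set(entry)
        if missing:
            raise ValueError("entry rejected; missing metadata: "
                             + ", ".join(sorted(missing)))
        return entry

Such a check at the point of entry is what would make metadata "compatible across all projects at some level of abstraction," as Coleman put it.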

15. Clifford Lynch (Director of the Coalition for Networked Information) spoke generally, about the "environment and trends" for online project development. General trends will lead to more rather than less documentation: 1. storage is getting cheaper, with ever less pressure to discard anything; 2. we will have much more video coming along as part of research data; 3. we will use OCR, speech recognition, and image analysis in an explosion of source material. However, he emphasized that the copyright act (just passed in the U.S.) gives control to the author for life+75 years, which will make intellectual property an enormous barrier. He sees two distinct domains to be developed: Archives, with institutional commitment for data-source persistence, and Scholarship, the activity of synthesizing based on the data. We now need new relations between scholars and archives. There is new technology emerging for authentication and provenance protection (public key encryption), and just in time! "The Patent Office is losing its collective mind, issuing patents for things invented long ago; they need to be better able to keep track. We need to be able to identify e-works as scholarly communication."

*Lynch emphasized "the big difference between the archival and delivery forms" (high-quality capture of the source vs. audience reach); "audience reach" must not corrupt archiving standards! *He made a plea for attention to naming and linking (to avoid breaks and increase persistence): website developers must identify objects--not locations (or local lookup paths). We need to be thinking at "a level up from sites" and develop methods of organizing sites *(scoping sites and subject portals). Format migration must be part of site maintenance (it takes active architecture work). *We need "financial frameworks for maintenance, not just for creation of materials." This is a profound difference from traditional library responsibility, which is to collect books _when they are done_; but when is a live site done, and when is it in maintenance mode? On-going research in the new Web mode is experiencing difficulties in obtaining contributions (as these Sloan projects show). "We must understand better the need for mediation." The Internet Engineering Task Force archives may serve as a model (for their dialogue format)? He concluded that "we must examine the relation between the traditional way of creating historical documents and the new way that science is done."
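Lynch's "objects, not locations" plea can be restated as a simple design pattern: publish a stable identifier, and keep one table mapping it to the current location, so that only the table changes when files move. A hypothetical sketch (the identifier scheme and URL are invented):

    # resolver.py -- hypothetical sketch: resolve persistent object names
    # to current locations, so published links survive site reorganization.
    LOCATIONS = {
        "port:blackout/interview-042": "http://example.org/archive/b/042.html",
    }

    def resolve(object_id):
        # the published name never changes; only this table is updated
        if object_id not in LOCATIONS:
            raise KeyError("unknown persistent identifier: " + object_id)
        return LOCATIONS[object_id]

Whatever form the table takes (a database, a redirect server), the institutional commitment to maintaining it is the real point; the code is trivial, the custodianship is not.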

16. Douglas Engelbart (Founder and Director, Bootstrap Institute -- "Bootstrapping Organizations into the 21st Century"): way back in 1969, he maintained that computers are not mere calculating devices; they can be used to _augment human intelligence_. His strategy is now based on NICs (Networked Improvement Communities) for collective IQ and strategic improvement. The conditions we must recognize can be explained this way:

*A* is a collective capability for coping with complex, urgent problems (CUP). Pursuit of *A* _is_ a CUP! The strategic goal of *A* applies to the pursuit of *A*, and that's bootstrapping.

*But there is a "paradigm drag" in the evolution of human habits (that is, we fail to recognize threats and opportunities). "We need a radically improved 'nervous system'." *We need "pragmatic co-evolution, because no one can think big enough to determine what will happen (no one-vendor document system). You might say, we need an 'investment strategy' for creating 'collective IQ'."

Engelbart has proposed an "Open Hyperdocument System" since the 1960s. Now he calls it an "Improvement Infrastructure," for faster, more effective development cycles. He also calls for an "Improvement Alliance on the C Level"--a NIC of NICs, or the Bootstrap Alliance. Such a facility would be built on recorded dialog, intelligence collection, and knowledge production (scanning, ingesting, and interacting). He used a presentation (in Macromedia Director) to demonstrate the architectural plan of his system. [Conference participants raised the issue of how individual credit and reward would be possible, but Abby Smith pointed out how recent and unique the idea of personal intellectual property really is, after all.]

17. Robert Bud (Head of Research and Collections at the Science Museum, London) reminded us that "an electronic archive is not just stuff but also interpretation, which doesn't end and is not individually constructed (if it's well done)." His collections (Defiant Modernism, 1939-1968) are presented with timelines and a highly designed graphical interface (one must see it to appreciate it; developed by the Art of Memory Co. and programmed in Director) that can be used to re-center the archive around the user's focus of interest. Historical narratives are linked to unique narrative roots to make the acts of interpretation explicit. *Bud emphasized that a range of skills is required for such "exhibits" (project manager, curator, designer, content expert). He also mentioned the Media Lab's "Interface of the Future," a walk-through of the 20th century, with Jerome Wiesner.

18. Jim Coleman (Projects Manager, Library Digital Initiative at Harvard University) summarized the conference content and purpose as addressing the problem of making the transition from "the Real World" to "the Web World," and "getting it done in the interim." He explained:

"Real world work is well-understood and has a social infrastructure, while Web-world work searches for the question to which it is the answer. These Sloan projects test the 'membrane' between the two."

*He mentioned the serious problem of real-world institutional recognition and rewards for Web-world work, and that John Unsworth (Director of UVA's Institute for Advanced Technology in the Humanities) is the only academic to have received tenure through "e-work." He asked, "What seems to make Web-work 'real'?" He offered the following observations:

1. Pre-formed communities can be enhanced and continued through Web operations.
2. Conditions must allow new communities to self-evolve.
3. Community members must form their Web world interactively (which encourages "stickiness").

He listed what he thinks the STIM Core Site Objectives should be: --to offer consulting and design services for PIs, --to provide core technical infrastructure and support, --to build a toolkit for project development, and --to promote team management skills. He advised (based on the experience of the current STIM projects) that:

1. Technical support should not be a responsibility of the project itself. Technical staff should be trained in the technical skills for which they are responsible, not trained content experts.
2. Project management skills are essential and must be learned, not acquired by trial and error.
3. There must be clear agreement on needs and implementation plans.
4. Projects need stronger integration into a research agenda, with a clear vision of when the research goal is met.

He concluded that we need to "understand the Web-work lifestyle better," and that its success is dependent on real-world transactions. He warned that, at this stage, "both costs and possibilities are unbound." Real-world economics works against persistence; market forces require obsolescence for technology to remain competitive. "Market hegemony is both blessing and curse" (advancement and disability). "The interim is forever," and we are constantly troubled by the circumstance that standards need market support. "The economic model of scholarly interchange will inevitably change, because the Web invigorates scholarly practices": --it renews concern for primary documentation, --its products are open to more community scrutiny, and --incremental costs are insignificant. What will be involved in "Getting it Done"?

1. Decide what level of risk is reasonable.
2. Develop an economic model, locally, that reflects global conditions and objectives.
3. Concentrate on bridge-building between the Real world and the Web world.
4. Risk behaving "as if" the community business model and infrastructure were in place.

19. David Kirsch led a wrap-up session for answers to the question: "Based on the conference discussions, what do we need?" He divided the responses into three domains, represented by groups of conference attendees; the primary identified needs are listed. _Scholars_ need more Web development skills. _Professional societies and publishers_ need a business model for persistence. _Libraries and archives_ are still working blind and need a vision for their future. The group generally concluded that they need to be able to share methods among projects, and some attendees pointed out that they need to find out what related work is being done in other content areas.