Book Reviews

In-Depth Review

Computational Linguistics in Information Science: Information Retrieval (Full-Text or Conceptual), Automatic Indexing, Text Abstraction, Content Analysis, Information Extraction, Query Languages: Bibliography. Conrad F. Sabourin. Montreal: Infolingua; 1994: 2 vols. 1046 pp. Infolingua Series in Linguistics-Informatics-Communications, No. 16.1, 16.2. Price: $150 (pbk.). (ISBN vol. 1: 2-921173-23-9; vol. 2: 2-921173-24-7.)

This work came to my attention from a books received column in Information Technology and Libraries (Harrison, 1995, p. 60). From the title proper, it sounded like a state-of-the-art review, and I called it to the attention of JASIS’ book review editor. When the volumes arrived, however, I realized that the ITAL entry had omitted the subtitle indicating form, i.e., Bibliography.

Bibliographies can be discussed from two main perspectives: (1) scope and coverage, and (2) organization and presentation. Both aspects are treated in this review.

The broad scope of the work is evident from the topics enumerated in the subtitle; all are of interest to ASIS members. The series title and number indicate that this is one of many bibliographies in the field of information-linguistics, all by Sabourin, and all published in 1994. Infolingua can thus be characterized as an omnibus indexing service, like Excerpta Medica: numerous sources are scanned, and relevant document surrogates are routed to specialized bibliographies.

The Preface, which seems to be a general one for the entire series, indicates that “there were 73846 entries in the database” in January 1994 (p. iii), and that “approximately 15% of references can be found in more than one volume” (p. vi). Such duplication is common among specialized indexes produced by omnibus indexing services.

Other titles in the series likely to be of interest to ASIS members include no. 6, Natural language interfaces; no. 7, Machine translation; and no. 13, Quantitative and statistical linguistics. Table 1 in the Preface indicates that the work in hand, with 6,390 entries, is one of the largest in the seventeen-part series.

Table 2 provides data on the forms of document that are covered; these include periodical articles, proceedings papers, reports, monographs, and dissertations. Coverage is international and multilingual, with an admitted bias towards North America and Western Europe (p. iv). The chronological coverage ranges from 1950 to 1993, with the majority of documents published since 1985 (Table 4).

Relevant documents were identified through systematic scanning of 400 periodicals and 800 conference proceedings, i.e., primary sources, as well as the references to these papers, admittedly secondary sources likely to contain inaccuracies (p. v). The secondary sources are not identified with the device commonly used by bibliographers: an asterisk indicating “not seen by compiler,” nor are there notes on the sources of individual secondary citations. The three citations for John O’Connor that lack dates (entries 4253-55) are no doubt secondary. Bibliographic ghosts may have crept in.

A list of periodicals scanned is not provided, but in flipping through volume 1, which contains the bibliographic references, all the expected titles are found, among them, Computers and the Humanities, Information Processing & Management, and Communications of the ACM. Bradford’s Law tells us that approximately 80% of the documents on a topic are identified by the scanning of core journals (Bradford, 1948, Chapter IX); the “citation indexing” done by Sabourin no doubt added to this percentage.

One wonders why no mention is made of well-established secondary services that cover much of the same territory: Information Science Abstracts and Linguistics and Language Behavior Abstracts. These could have been searched in hard copy or online, albeit not back to 1950. For the past few decades, however, these indexes could have served as a check on the completeness and accuracy of Infolingua’s database. INSPEC, which according to a study by Candy Schwartz has excellent coverage of information science, does cover the early period of computation and should have been searched. Concern about copyright violation is an unlikely reason for these databases not having been consulted. Hans Wellisch’s (1980) retrospective bibliography contains a substantial section on automatic indexing; it is not cited in the Preface as a source, although there is a reference to it (entry 6089). I spot-checked Wellisch’s entries on automatic indexing against Sabourin’s; not all items listed in the older source are in the work under review.

The volume of bibliographic references is arranged alphabetically by author, with multiple entries for one person subarranged by date. One can check known works to see whether they are cited, but for an examination of topical coverage, one must refer to the first part of volume 2, which contains the subject index. Here we make the transition from a discussion of the coverage of the work to its organization.

The subject index is arranged alphabetically by broad topics; first-level subheadings follow a colon on the same line as the heading; second-level subdivisions are indented under the main heading. The locator is the sequential number assigned to the bibliographic reference in volume 1. A typical entry is:

SYSTEM : DIALOG
    ANALYSIS : SEMANTIC  640

The title of the proceedings paper in entry 640 is “Natural language information retrieval dialog,” a false drop for someone who interprets the subject entry to mean “a semantic analysis of the DIALOG information system,” as I did. Wellisch’s (1983) paper on the disadvantages of using all capital letters in bibliographic references comes to mind; lowercasing dialog would have prevented the incorrect interpretation.

The Preface notes the development of a “specialized thesaurus” for the index (p. v). Sabourin’s use of the term keywords (pp. v-vi) for descriptors demonstrates his lack of knowledge of the terminology of thesauri. The controlled vocabulary does not meet the criteria of the NISO (1994) thesaurus standard: there are no cross-references from synonyms or from direct forms of the many inverted headings. The heading WORD : COMPOUND, for example, has no cross-reference from COMPOUND WORD.

The subheading ANALYSIS : SYNTACTIC under the heading WORD : COMPOUND brought to mind R. B. Lees’ (1960) classic book on this subject. I flipped to volume 1 to see whether there is an entry for it: there is not, although the book is cited in one of my ASIS Proceedings papers (entry 2439), and Sabourin claims to have indexed references. Perhaps Lees’ book, which does not have an information science focus, is listed in another volume in the Infolingua series.

WORD : COMPOUND is one of many headings entered under head nouns that are unlikely to be sought. The inverted terms are less of a problem than the generic postings, however. POPULATION : CHILD has no alternative access point under C. Conversely, PRECIS cannot be found under P, but is hidden under C, with the heading CONNECTIVE and the subheading SYSTEM. The advice given in the Preface, “If a keyword cannot be found . . . look under a more generic term” (p. vi), is not helpful, as these superordinate terms do not readily come to mind.

Owing to the lack of cross-references, I initially thought that none of my publications in the area of information-linguistics were cited in the bibliography, but then I serendipitously encountered nine citations under “HASS-WEINBERG Bella,” a form of name I have never used. Referring from the last element of a compound name is standard cataloging practice (AACR2, 1988, rule 26.2A3), but such access is not provided, even in the author index.

Cross-references from alternative forms of compound surnames would have prevented the scatter of the index entries for Corinne Jorgensen, a coauthor of two papers entered under “LIDDY Elizabeth Duross” (entries 3509-10). The bibliography gives her name first as “JOERGENSON Corinne Lyon,” and then as “LYON-JOERGENSON Corinne.” The index has one entry under J and the second under L, both with the spelling JORGENSON, without an umlaut. (Weren’t the index entries generated from the database records?)

The inconsistent Jorgensen entries contradict the general pattern of uniform headings for authors throughout the work. Full forenames are given, even where the original source has initials and there are no homographic names. Thus, while the title page of Wellisch’s bibliography features “Hans H.,” the reference has “Hans Hanan.” Sabourin describes his descriptive cataloging policies on p. vi.

Gerard Salton would have been pleased to find 128 document numbers after his name in the author index, but this is not a useful array from a user perspective. These heading-locator combinations, while providing no information to help a user locate a specific entry, waste space: ten full lines are taken up with the consecutive numbers 4935-5053. It would have been more helpful to indicate this range in elided form, and then to give the numbers of references in which Salton is a secondary author. Alternatively, the author index could have been eliminated, and see also references to jointly authored papers could have been incorporated into the bibliographic volume.
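
Such elision is straightforward to produce mechanically. A minimal Python sketch (an illustration only, not anything from the work under review; the two trailing sample numbers are hypothetical) that collapses consecutive locators into ranges:

```python
def elide(locators):
    """Collapse sorted entry numbers into elided runs, e.g. 4935-5053."""
    runs, start, prev = [], None, None
    for n in sorted(locators):
        if start is None:
            start = prev = n            # open the first run
        elif n == prev + 1:
            prev = n                    # extend the current run
        else:
            runs.append(f"{start}-{prev}" if prev > start else str(start))
            start = prev = n            # close the run, open a new one
    if start is not None:
        runs.append(f"{start}-{prev}" if prev > start else str(start))
    return ", ".join(runs)

print(elide(list(range(4935, 5054)) + [5120, 5122]))
# prints: 4935-5053, 5120, 5122
```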

The numerous postings under ANONYMOUS in the author index do not complement the bibliographic volume in any way; the locators merely point to the references that have ANONYMOUS as the main entry. There is no title access for these documents. There are a few corporate entries in the index; they are subheadings of ANONYMOUS in the bibliography (following the heading with a role indicator: “ANONYMOUS, (editor),” p. 16). For example, System Development Corporation is indexed under S, while its study on indexing is cited under A (entry 201).

There is an almost complete lack of typographic variation in the bibliography. Given the ubiquity of laser printers and fancy fonts, this is inexplicable, especially for a series that treats computational character processing (no. 12).

The bibliographic volume gives author/editor surnames, journal titles, and imprint data in all caps; the index volume has no lowercase characters at all. The formatting of subject entries has already been described; author index entries, in all caps, include no punctuation marks to separate surnames and forenames. The hyphen is preserved in (occasionally incorrect) compound names.

The work features no italic or boldface type. There are no running heads in volume 1 to identify the title of the bibliography or series. Users often photocopy a single page with a desired entry; interlibrary loan librarians need to know the source of the citation, however. Running heads would have been useful in volume 2 to call attention to the two parts of the index: these are not noted on the title page, nor is there a table of contents listing the components of the bibliography.

It is expected that errors will occur in a bibliography containing thousands of data elements, but spell-check could have been used on the subject index to avoid such headings as “WORD LENGHT” (p. 961).

As NISO (1995) has issued a draft standard for filing, mention should be made of the unusual arrangement of entries in the index. Corinne Jorgensen’s compound name serves to illustrate the unexpected position of the hyphen, after the letters of the alphabet:

LYONS PATRICK
LYON-JORGENSON CORINNE

This is followed by the sequence LYTINEN; L’HOMME. In the subject index, single-word headings follow compound terms:

WORD SIGNATURE
WORD : ABSTRACT-CONCRETE

This seems to be a straight ASCII sort, without any of the normalization routines commonly used in indexing software. The sequence is counterintuitive, no matter whether one prefers word-by-word or letter-by-letter filing; whether one subscribes to the field-subfield principle (Hines & Harris, 1966) or prefers to file straight across a heading, disregarding punctuation preceding a subdivision.
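
For concreteness, a small Python sketch (an illustration only, not the software used to produce the book) of the two conventional filing normalizations just mentioned; note that neither key yields the sequence found in the index, which files punctuation after the letters of the alphabet:

```python
headings = ["LYONS PATRICK", "LYON-JORGENSON CORINNE", "LYTINEN", "L'HOMME"]

def word_by_word(h):
    """Word-by-word filing: hyphens file as spaces; other punctuation is ignored."""
    s = h.replace("-", " ")
    return "".join(c for c in s if c.isalnum() or c == " ")

def letter_by_letter(h):
    """Letter-by-letter filing: spaces and all punctuation are ignored."""
    return "".join(c for c in h if c.isalnum())

print(sorted(headings, key=word_by_word))
print(sorted(headings, key=letter_by_letter))
# Under either convention L'HOMME files first and LYON-JORGENSON precedes
# LYONS, the opposite of the arrangement in the work under review.
```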

The work is substandard in terms of organization and presentation, but it is still a useful compilation. I believe that it will primarily serve the researcher with a good knowledge of the literature who needs to cite a classic paper. For example, I have noticed that many recent papers on computer handling of Arabic script fail to cite Mohammed Aman’s early JASIS paper on the subject. This bibliography makes it easy to find the reference (entry 106) without incurring online fees.

It has long been recognized that an author-title catalog facilitates a known-item search; another observation regarding the divided catalog is that subject searches are done mainly by junior scholars who do not know the literature. I believe that the subject index to this work is not likely to be used much: the terminology is poorly structured, and there are no links between related headings. I heartily echo the compiler’s statement: “. . . a refinement of the thesaurus followed by a reindexation of some of the references . . . would result in a significant improvement” (p. vii).

There are no document surrogates under the subject terms to assist with relevance judgments, only entry numbers. I cannot imagine anyone checking the 169 locators for INFORMATION RETRIEVAL : ENGLISH : ANALYZED (p. 733). Author-date references would have been preferable and would not have taken much more space than entry numbers. A great deal of space was wasted by the double posting of entries under headings such as INDEXATION : AUTOMATIC-INFORMATION RETRIEVAL (171 locators) and their permuted forms.

The bibliographic volume lacks abstracts, and the compiler notes in the Preface (p. v) that the “keywords” compensate for this, but they are accessible only in the index. Displaying descriptors under the references would not have been very space-consuming, and would have helped the user identify document content.

It is amazing that someone could undertake a large bibliographic project apparently without examining the coverage of related sources, and without studying cataloging, indexing, thesaurus design, filing rules, or good models for the organization of printed bibliographies. I find it amusing that the copyright page states that parts of this book “may not be reproduced in any information storage and retrieval system now known or to be invented”; much of what is known in information science has not been applied here. This is far from a “rigorous systematic bibliography” (Bates, 1992), but it brings together numerous sources reporting on attempts to apply computers to the processing of natural language for the purpose of information retrieval.

Bella Hass Weinberg
Division of Library and Information Science
St. John’s University
Jamaica, New York 11439

References

AACR2 (1988). Anglo-American cataloguing rules (2nd ed., 1988 revision). Chicago: American Library Association.

Bates, M. J. (1992). Rigorous systematic bibliography. In H. D. White, M. J. Bates, & P. Wilson, For information specialists: Interpretations of reference and bibliographic work (pp. 117-130). Norwood, NJ: Ablex.

Bradford, S. C. (1948). Documentation. London: Lockwood.

Harrison, S. B. (Ed.). (1995). Book reviews. Information Technology and Libraries, 14, 55-60.

Hines, T. C., & Harris, J. L. (1966). Computer filing of index, bibliographic, and catalog entries. Newark, NJ: Bro-Dart Foundation.

Lees, R. B. (1960). The grammar of English nominalizations. Bloomington: Indiana University Press.

NISO (1994). Guidelines for the construction, format and management of monolingual thesauri: An American National Standard. Bethesda, MD: NISO Press. (ANSI/NISO Z39.19-1993).

NISO (1995). Alphabetical arrangement of letters and the sorting of numerals and other symbols: A draft American National Standard. Bethesda, MD: NISO Press. (ANSI/NISO Z39.75-199X).

Wellisch, H. H. (1980). Indexing and abstracting: An international bibliography. Santa Barbara, CA: ABC-Clio.

Wellisch, H. H. (1983, February). Capital games: The problem of compatibility of bibliographic citations in the databases and printed publications. Database, 6, 54-57.

Seeking Meaning: A Process Approach to Library and Information Services. Carol Collier Kuhlthau. Norwood, NJ: Ablex Publishing Corporation; 1993: 199 pp. Price $24.50 (pbk.). (ISBN 1-56750-019-6.) References and Index.

The simple announcement, “A research paper is required for this course,” is sure to promote high levels of stress among most high school and college students. In the past, librarians have tried to alleviate this stress by improving access to information. Emphasis on physical access supports the bibliographic paradigm of collecting, organizing, and retrieving information. The author argues that, although this paradigm has been the traditional basis of the information profession, it is time to recognize the importance of intellectual access to collections. Kuhlthau defines intellectual access as interpretation of information and ideas, and states that her book “is about library and information services for intellectual access to information and ideas, and the process of seeking meaning” (xvii).

It seems that a number of the aforementioned students helped crystallize the author’s interest in intellectual access when she observed that students working on research papers all displayed similar difficulties, regardless of the topic or their ability level. Particularly at the beginning of their research, students were confused, irritated, anxious, and frustrated. Kuhlthau points out that this common scenario conflicted with the bibliographic paradigm, which assumed these same students should have been able to proceed with confidence when given clear instructions, ample assistance, and access to a well-organized collection of information.

As a result of these discrepancies, the author describes her efforts to develop a process theory for library and information services based on constructivist views of learning proposed by John Dewey, George Kelly, and Jerome Bruner. Emphasis is placed on Kelly’s work, which describes learning as an active, confusing, complex process of continually constructing one’s personal world by making sense of new experiences. A thorough analysis of the literature and insightful comments on the application of learning theory to the research process will have anyone with reference experience nodding their heads in recognition (and possibly agreement).

Four separate studies were conducted to investigate the information search process within the framework of constructivist theory. The studies involved only individuals seeking information for research papers, but did include various levels of students (high school and college), as well as different types of libraries (public, school, and academic). Methodology, with accompanying graphs and charts, is described in detail. Analysis of these studies resulted in the identification of six distinct stages in what is referred to as the Information Search Process.

These six stages are summarized from Chapter Three of the book. The first stage is Task Initiation, which occurs when a person first recognizes information will be needed to complete an assignment. At this stage, feelings of uncertainty and apprehension are common.

Topic Selection is the second stage and it is here that a general topic for investigation is selected. Students become optimistic after their selection, but these feelings are quickly replaced by confusion, uncertainty, and doubt as they begin to investigate information on their topic to increase their personal understanding. This is the third stage, labeled Prefocus Exploration.

In the fourth stage, Focus Formulation, students narrow their search from a general topic to a specific concept or idea. Not all students complete this stage. Those who do not will experience difficulty with the rest of the research paper process, but those who complete this stage demonstrate feelings of renewed confidence and a sense of “ownership” of the research topic.

Information Collection is the fifth stage and the task is to collect information on the focused topic. Interaction between the user and the information system is most effective at this stage and the user can accurately specify information needs. The final stage, Search Closure, brings the search for information to an end and begins preparation for presenting the information discovered. Not surprisingly, feelings of relief are common, along with a sense of satisfaction if things have gone well, and disappointment if they have not.

The identification of these six stages provides the foundation for what Kuhlthau describes as “a proposal for an emerging theory of intervention” (110) to be used by information professionals in assisting with the research process.

“The Uncertainty Principle” forms the basis for this proposal and is defined as follows: “Uncertainty is a cognitive state that commonly causes affective symptoms of anxiety and lack of confidence. Uncertainty and anxiety can be expected in the early stages of the Information Search Process. The affective symptoms of uncertainty, confusion, and frustration are associated with vague, unclear thoughts about a topic or question. As knowledge states shift to more clearly focused thoughts, a parallel shift occurs in feelings of increased confidence. Uncertainty due to a lack of understanding, a gap in meaning, or a limited construct initiates the process of information seeking” (111).

You say you already knew that? Probably so, but Kuhlthau goes on to suggest that students should also know it. They should be told at the beginning of the research process to expect confusion and frustration, and accept these feelings as a normal part of research procedure. In addition, information providers need to maintain an awareness of the uncertainty principle so they can be effective mediators in their patrons’ search for information.

To that end, five levels of mediation were suggested as appropriate intervention strategies for librarians. These included: (1) Organizer, (2) Locator, (3) Identifier, (4) Advisor, and (5) Counselor. Each one of these strategies is effective at different steps along the way in the research process. Accurate identification of the six stages of research allows effective intervention with a research problem because the librarian can then select the most appropriate type of mediator role for each situation.

With this in mind, the author then goes on to suggest that the continued use of technology will require even more mediation by librarians as they must explain how to use the overwhelming amount of information now available to most students. Unfortunately, there is no explanation of how librarians will find extra pieces of the day for the time-consuming role of counselor, but the author does an admirable job of emphasizing the need for this type of service. She concludes by stressing that information professionals need to place a new emphasis on the process of learning from information access, rather than focusing solely on the retrieval of information.

Although it reads like a doctoral thesis, this book would be an excellent addition to the collection of any librarian or teacher involved with the research paper process. Admittedly, this is not a comprehensive look at all the ramifications of the topic, but the author makes no such claim. In fact, she seems to regard her efforts as a first step and encourages others to take up the cause and continue looking at the myriad implications suggested by her preliminary work.

Some might question the need for what appears, at first glance, to be another typical user study. However, the emphasis here is on the user’s learning process while doing research, and how it impacts the research process. This gives the book a different perspective, one that could prove beneficial to almost anyone involved with the reference function.

Susan Dunman
Waterfield Library
Murray State University
Murray, KY 42071

The Information Society: A Study of Continuity and Change. John Feather. London: Library Association Publishing, Inc.; 1994: 168 pp. Price: $35.00. (ISBN 1-85604-058-5.)

Though the title of John Feather’s book is The Information Society: A Study of Continuity and Change, it is neither a research-based study (in the sense of presenting new findings) nor a comprehensive analysis of current thinking about social changes going under the rubric of “the information society.” Instead, it briefly describes the evolution and current state of various information technologies (especially documentary media), their economics, and the legal structures pertaining to them. The book is clearly aimed at a generalist audience, perhaps librarians wishing to learn about new technologies or other interested professionals needing a condensed review of the legal and economic effects of information technologies.

Feather proceeds from a couple of implicit assumptions. First, he takes the technologically deterministic view that information and communication technologies are themselves responsible for the changes he observes: “. . . the computer is driving the revolution of the late twentieth century” (p. 2); “These tools are the building blocks of the information society” (p. 4). In this respect, this book is typical of much of the information society “genre” of writing, which often neglects the cultural, political, relational, and personal choices that individuals make in the process of employing technology, in order to maintain the “value-free” image of technology and its builders and users: Technology drives society, not the reverse.

The second assumption of Feather’s work is the primacy of documentary technologies, i.e., those which record and distribute information for large markets, rather than those that support “oral” or “conversational” forms of interaction and communication. He calls publishing (specifically, book publishing) the “paradigm of information transfer” (p. 36) for all other information technologies that have followed it. This assertion is questionable on several grounds; for example, historically book publishing has not been organized along the same monopolistic or oligopolistic lines as broadcasting, film, or the telephone system, at least until recently. And unlike other media, publishing has enjoyed exceptional legal protection in the form of the First Amendment in the U.S. However, Feather’s emphasis on documentary media does reflect the conventional perspective of librarianship, and on this basis he relates his points to a familiar framework for his readers.

Feather spends two chapters out of seven describing the history of media, beginning with the innovation of the alphabet and written language. Consistent with the assumptions mentioned above, documentary technologies (especially print) are given the most extensive treatment, though he does admit that “Even at the height of its domination, print never displaced the spoken word as the most common means of human communication and information transfer” (p. 24). Interactive technologies like the telephone and electronic mail are given far shorter shrift. He glosses the demonstrated interpersonal power of e-mail and listserver-type services and suggests that their real value will only be realized by imposing editorial control to make them more like conventional documentation: “In a sense, they have added a new dimension to the dissemination and communication of information. It is, however, possible to make them both more formal and more limited in their applications, by controlling input and by limiting the capacity of recipients to respond. The system which best explains this model of usage is that of the electronic journal . . .” (emphasis in the original) (p. 73). Feather also refers to television as the “last and greatest of the mass media” (p. 30) but does not make a strong case demonstrating whether or how television followed the publishing paradigm.

Two chapters each are devoted to the economics and politics of information technologies, respectively. Overall the author seems to take an economically as well as technologically deterministic view, and advocates the commodification and market model for the distribution of information. “At the heart of the creative process lies a commercial transaction without which the results of the author’s creativity cannot be shared with an audience” (p. 41). He does not seem to recognize that the market model is not inevitable, but a culturally-constructed agreement among members of a society about the appropriateness and value of certain forms of interaction and relationship. Instead, Feather adopts a pragmatic stance: “The chain of communication from author to reader is informed and determined at every stage by commercial considerations” (emphasis added) (p. 44).

His pragmatism allows Feather to marginalize librarians’ ethical concerns about access to information. “Outside the institution of the library, information providers have few of the inhibitions which have traditionally made librarians look askance at such matters” (p. 6). He does not acknowledge that librarians and other information professionals have a clearly articulated ethical position regarding access, and do not merely have “inhibitions.” He reiterates (p. 60), “The new competitors in the information marketplace do not share, or perhaps even understand, the benevolent desire to inform which is the common heritage of the traditional information providers, the librarians”; nor does Feather suggest that the “benevolent desire” has any grounding in political or economic considerations.

Feather sees information inequity as the principal political issue associated with the use of new technologies, but warns that observers must “. . . be careful not to equate information wealth and information poverty with particular kinds of information supplied through particular media or institutions” (p. 90). He notes that different people in various social contexts have different kinds of information needs and sources of knowledge, not all of which are necessarily adaptable to electronic forms.

However, he points out that information wealth has ordinarily been associated with economic wealth in developed nations, and links differential access to global networks with differential levels of national development (though he also argues that access to information networks may make countries information rich at the national level while information poverty persists on the local or individual level). In developed countries, he believes, there is no real problem with access. Rather, in those places the problem lies with the kind of content carried by these accessible channels. Therefore, “Ignorance is deliberate, a result of exercising a choice not to know. The means of remedying the ignorance are, however, at hand; the problem is lack of information, not lack of access” (p. 99).

At the same time, Feather seems to contradict himself: He stresses that new technologies present a paradox, making information more available while also restricting access to it. He leaves the impression (primarily via the passive voice of the text) that this situation is an inevitable consequence of the nature of the technologies themselves, rather than the result of deliberate decisions by individuals which affect how technologies are implemented and used. He tends to treat computerization as a development which simply “emanated” without active agency on anyone’s part. In the end, his views about whether an “information society” exists are mixed and take the form of a three-page “Afterword.”

To summarize, Feather states clearly that the book is aimed at the beginning librarian rather than the specialist. Specialists will not find much new material presented here. Feather does not cite sources in the text, and instead provides a short list for further reading at the end of the book. The text contains occasionally confusing or contradictory arguments due to a deterministic approach, and adopts the ideology of commoditization largely without question. However, the book is a readable and clear synopsis of many of the issues related to the use of information technologies, even if it does not succeed as a summary statement about the “information society.” It may serve as a useful introduction for those who are unfamiliar with this complex and multi-layered set of problems.

Leah A. Lievrouw
Associate Professor
Department of Library & Information Science
University of California, Los Angeles
216 GSE & IS Building, Box 951520
Los Angeles, CA 90095-1520

The Cult of Information: A Neo-Luddite Treatise on High-Tech, Artificial Intelligence, and the True Art of Thinking, 2nd Ed. Theodore Roszak. Berkeley, CA: University of California Press; 1994: 270 pp. Price: $10.00. (ISBN 0-520-08584-1.)

As if we didn’t have enough with which to concern ourselves (disease, the economy, and terrorism), The Cult of Information tosses one more log on the fire: Technology. Yes, it seems as if society’s acceptance of the computer as a tool for enhancing access to information, among myriad other uses, may well lead to the end of the civilized world. Humanity will no doubt fall victim to the inherent evils of technology and those who control it. This idea constitutes an interesting and quite popular premise for a book, one which Roszak employs continually in The Cult of Information.

Granted, the societal implications of the Information Age are many. Not a notion to be taken lightly, few could deny that there are a plethora of social and moral implications with which one must deal when technological advances are merged into every aspect of life. And it is difficult to disagree with the idea that machines cannot function in the same fashion as the human mind. Roszak himself professes an admiration for the technology and insists that this work, like the first edition, offers merely “measured criticisms” that will be completely agreeable to “serious students” who maintain a “reasonably balanced view.” After offering this explanation, Roszak abandons his self-professed moderation and plunges head first into an assault on our “object of veneration,” the computer.

After establishing a powerful point, that the “mind has never been dependent on machines to reach the peak of ideas,” Roszak begins to employ a multitude of examples of technology at its worst. With the exception of a handful of extremists, few could disagree with the dangers so clearly demonstrated by the author. Unfortunately, he chooses to use these extremists to depict the evils of a world with computers and consequently, his arguments, if not examined closely, seem quite logical.

Oddly enough, Roszak readily admits to having written The Cult of Information on a word processor and performed his research using on-line databases. But he fears that the computer will not be kept in its proper place, and continually warns of the dangers if we allow it to infiltrate our lives. Invasion of privacy, overreliance on polling, relinquishing military decision making, corruption of education and our children, and economic ruin, all are lurking just around the corner if we fail to heed his advice.

Those who predicted the popularity and power of computing, “the hackers and the hucksters,” are the very individuals of whom we should all be wary. Their failed attempt at creating artificial intelligence, which we now know is not possible, comes as no surprise to Roszak. He repeatedly points out that duplicating the human thought process will never be possible, and any who even considered the idea were being bamboozled by the computer industry.

One of Roszak’s biggest fears is the influence computers will have on education. He believes that they have entered the school “on a wave of commercial opportunism,” depicting teachers as helpless creatures who are left to fend for themselves under the relentless attack of vulture-like vendors. Even more victimized by the computer industry’s opportunistic and economically motivated attacks on education are the children. Roszak believes that the prime example of this is Logo, a programming language developed by Seymour Papert (one of those dangerous computer types) for use as part of an international computer literacy campaign.

The author agrees with Papert’s assertion that the computer is often misused in schools, but this is where the agreement ends. Papert believes that the computer has the potential to “transform the world of education.” Building on the belief that the computer “can be an instrument for teaching anything,” Papert constructed a language basic enough to be taught at the kindergarten level, a development the author finds to be a travesty. Logo, of course, can only teach programming. It cannot teach creativity, the type of developmental activities that students need. He cautions that unless teachers clearly inform the youngsters, they will grow up believing that any problem can be solved with programming.

As if the risks were not great enough, the average American must also beware of what Roszak calls the “data glut.” The availability of vast quantities of information, we are told, is not a good thing. It is dangerous. Even worse, it is part of an attempt on the part of the government and special interest groups to control the population. One must then assume that Roszak carefully tiptoed on the fringe of this glut when electronically collecting information for his book. How fortunate he was not to have been completely engulfed and controlled by this sinister mass.

Roszak also addresses libraries in The Cult of Information, labeling them as “genuinely idealistic institutions dedicated to a high standard of public service, offering sustenance to our society since the days of Benjamin Franklin.” Coupled with his obvious distrust for technology, this does not lead one to believe that Roszak thinks libraries need to look in the direction of computers, much less become digital entities. Surprisingly, this is not quite the case. He is an avid supporter of the “use of electronic apparatus in this environment,” acknowledging the value of the reference librarian’s use of databases and electronic information. They can protect the public by navigating the databases for them. The librarians shall become the guides and the guardians for the unsuspecting public, providing access to all information and sheltering them from the dangers of the computer.

As evidenced in his previous works, Roszak can be an “enthusiastic, persuasive and caustic” writer. The Cult of Information presents no evidence to the contrary. This could explain his insistence that the computer and the industry that promotes it are the Pied Pipers that will lead all of society to its demise. One would think that an examination of history would reveal that for all of our faults, human beings have been adept enough to avoid such travesties. Gullible perhaps, but we are a pretty astute group when it comes right down to it. Roszak’s view must differ, but then one would expect as much from a self-proclaimed Neo-Luddite.

The Cult of Information: A Neo-Luddite Treatise on High-Tech, Artificial Intelligence, and the True Art of Thinking was originally published in 1986 as The Cult of Information: The Folklore of Computers and the True Art of Thinking. Roszak claims that his first edition was “far from being a wholesale rejection of high-tech.” Rather it was aimed at discriminating between the “use and abuse of computers.” The critics of the first edition were not convinced of this.

Indeed Roszak does not present an attack on the machine itself. When detailing his examples, however, he continually depicts the most extreme cases of the misuse of technology. The descriptions themselves are essentially one-sided. How could anyone disagree with such flagrant abuses of technology? As more and more people have become aware of the tremendous possibilities afforded them by this technology, Roszak has gone to work, warning the naive and technophobic of the inherent dangers of computing.

After a lengthy preface and introduction, The Cult of Information is composed of eleven chapters, the last of which, “Descartes’s Angel: Reflection on the True Art of Thinking,” reminds us that computer scientists have failed to give the Angel of Truth her due credit, as we overlook the fact that we are unable to design machinery that mimics human thought. But has it been the attempt of computer scientists to “revolutionize the foundations of human thought”? Are we really trying to teach our children to think like machines? As these questions are considered (and they should be), Roszak reminds us to keep a watchful eye on our computers.

Pamela Cobbs
School of Library and Information Science
University of Kentucky
Lexington, KY 40506

The Creative Process: A Computer Model of Storytelling and Creativity. Scott R. Turner. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers; 1994: 298 pp. Price: $40.00 (ISBN 0-8058-1576-7.)

Turner begins with the statement, “Someday computers will be artists.” (p. ix) MINSTREL, the “author” program described in this book, is an attempt to “illuminate many of the interesting and difficult issues involved in constructing a creative computer.” (p. ix) Turner suggests that the book will be of interest to three types of people: AI researchers, psychologists interested in the study of creativity, and people interested in how authors write. In this review, I discuss three aspects of this book: its discussion of creativity, problem solving, and writing; MINSTREL itself; and Turner’s presentation of the material.

Turner takes the view that creativity, an essential part of writing, is a type of problem solving. Given a problem, a person trying to solve the problem will first attempt to recall if he or she has solved the problem before, and if so, how. If the problem is a novel one, or the conditions for a known solution are not met, the solver will utilize creative means to try to adapt a known solution to the current situation, or to reformulate the problem so that a solution can be found. Turner uses examples such as “how to clean up spilled milk” to illustrate “standard” (use a towel) and “creative” (get a kitten to lap up the milk) problem solving. Writing is viewed as a type of problem solving. For example, how can two characters encounter each other by accident? Creative solutions tend to result in a more interesting story, but they must still be appropriate for the context. Turner provides a rather superficial discussion of the nature of creativity and the relationship between creativity and problem solving in writing as background for the description of MINSTREL.

MINSTREL is a computer program that writes theme-based stories set roughly in the time of King Arthur. MINSTREL’s overall goal is to produce a story that illustrates a saying such as “pride goes before a fall,” using characters such as knights, princesses, and dragons. The resulting stories are rather awkward, containing passages such as:

Grunfeld wanted to impress the king. Grunfeld wanted to move towards the woods so that he could fight a dragon. Grunfeld moved towards the woods. Grunfeld was near the woods. Grunfeld fought a dragon. The dragon died. (p. 9)

MINSTREL’s knowledge base contains several types of information, but it is not always clear just how the information is represented and organized, nor is it made clear how much information MINSTREL starts with, and how much it learns. Stereotypical information about characters is represented by a class hierarchy of schemas. For example, knights are violent people, and princesses and hermits are peaceful people. Other kinds of information such as “people pick berries in the woods” are also available to MINSTREL. Schemas representing story fragments, both those that MINSTREL knows about initially and those it writes, are stored in episodic memory indexed by slots and slot values. This allows the retrieval of story fragments that are similar in some way to a known fragment. For example, all instances of a knight killing a monster can be found starting at a known episode of a knight killing a dragon. Further search could find all instances of a knight killing anyone or anything. MINSTREL’s world knowledge and repertoire is extremely limited, which contributes to the flatness of its stories.
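
As a rough illustration of what “indexed by slots and slot values” implies, an episodic memory of this kind might be approximated in Python as follows; the slot names and fragments below are invented for the sketch, not MINSTREL’s actual representation:

```python
from collections import defaultdict

index = defaultdict(set)      # (slot, value) -> ids of fragments with that feature
fragments = {}                # fragment id -> schema (a dict of slots)

def store(fid, schema):
    """File a story fragment under every (slot, value) pair it contains."""
    fragments[fid] = schema
    for slot, value in schema.items():
        index[(slot, value)].add(fid)

def recall(cue):
    """Return ids of fragments matching every (slot, value) pair in the cue."""
    matches = [index[(s, v)] for s, v in cue.items()]
    return set.intersection(*matches) if matches else set()

store("e1", {"actor": "knight", "act": "kill", "object": "dragon"})
store("e2", {"actor": "knight", "act": "kill", "object": "troll"})

print(recall({"actor": "knight", "object": "dragon"}))  # only e1
print(recall({"actor": "knight", "act": "kill"}))       # dropping a slot widens
                                                        # recall to e1 and e2
```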

The more interesting components of MINSTREL’s knowledge representation are the heuristics that represent writing strategies. These are of two types: those that represent author-level plans, and those that represent creative problem solving routines. Author-level plans (ALPs) include very general strategies such as those that set the scene of a story and introduce the characters, and more specific ones such as those intended to add characterizations, suspense, or foreshadowing to a story. An agenda keeps track of current goals and subgoals, and a story is finished when the agenda is empty. ALPs are an attempt to model an author’s decisions about story structure, pacing, and other aspects that contribute to literary quality. The ideas behind all these strategies are interesting, but their realization in MINSTREL is somewhat simplistic. It is difficult to evaluate the merit of a strategy intended to add suspense to a story when the stories themselves are so limited. For example, one such strategy is to get a character into a life-threatening situation, and have the character’s attempt to escape fail. This is certainly a well-respected plot device; however, when MINSTREL uses it, the result is a rather pedestrian passage.

Later, Bebe believed that he would die because he saw a dragon moving towards him and believed it would eat him. Bebe was very scared. Bebe tried to run away but failed! (p. 126)

Transform-Recall-Adapt Methods (TRAMs) are the second type of heuristic, which model creative problem solving. These case-based reasoning strategies utilize story fragments stored in episodic memory to provide solutions to the current problem. In essence, the goal is to find solutions to similar problems in memory, and adapt them to the current situation. The trick is to relax enough constraints so that a similar problem can be found, but not so many that the solution to the old problem cannot be adapted to the current problem. For example, if an ALP calls for a knight and a princess to meet by accident, MINSTREL searches for other instances of these two kinds of characters meeting by accident. If a solution has not been used too many times in previous stories (so that it still may be considered “creative”), that solution is used. If it has been overused, TRAMs are applied to look for solutions to closely related problems, for instance, a knight meeting some other kind of character by accident. TRAMs may also be applied recursively, so that a variant of a variant of a solution is adapted to the story. Turner claims that these types of strategies model creative problem solving in general, not just problems in writing. As with the ALPs, it is difficult to evaluate the success of these strategies. MINSTREL shares a problem with many AI programs, in that its implementation is so limited that it is not possible to tell whether the ALPs and TRAMs are really a first attempt at some important general-purpose strategies, or merely some clever programming that works in what is essentially a blocks world.
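
Continuing the sketch above, the recall-and-adapt cycle the review describes (constraint relaxation plus a “boredom” check) might look roughly like this; the transform order and reuse limit are invented for illustration and omit the recursive case:

```python
use_count = {}                 # how often each recalled fragment has been reused

def relaxations(cue):
    """Yield the cue itself, then versions with one constraint dropped."""
    yield cue
    for slot in list(cue):
        relaxed = {s: v for s, v in cue.items() if s != slot}
        if relaxed:
            yield relaxed

def tram(cue, boredom_limit=2):
    """Recall a fragment for the cue, relaxing constraints until a
    not-yet-overused solution is found, then adapt it to the cue."""
    for relaxed in relaxations(cue):
        for fid in sorted(recall(relaxed)):
            if use_count.get(fid, 0) < boredom_limit:
                use_count[fid] = use_count.get(fid, 0) + 1
                adapted = dict(fragments[fid])
                adapted.update(cue)      # re-impose the original constraints
                return adapted
    return None                          # nothing close enough to adapt

print(tram({"actor": "knight", "act": "kill", "object": "giant"}))
# adapts a recalled dragon- or troll-killing episode to the novel "giant" cue
```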

Overall, this book mentions some interesting issues in a general way, and describes an AI program that contains some interesting heuristics on a small scale. Unfortunately, the material is not presented very well. There is a great deal of repetition throughout the book, with an accompanying lack of integration. For example, Chapter 3 introduces MINSTREL as though this is the reader’s first encounter with the program, although Section 1.3 contains essentially the same information. The flow of the narrative is occasionally interrupted by jarring statements. For example, on p. 4, Turner says, “Humans spend the first 20 years of their lives learning about the world, about how people act, and about ways to understand the world.” This is true as far as it goes, but what about the rest of their lives? Why stop at 20? In discussing MINSTREL as a model of creativity, Turner claims that it “implements an artistic drive to create.” (p. 26) This is done by including a “boredom assessment”; if a solution has been used too many times in the past, it will not be used again. Clearly there is more to the “drive to create” than a desire to avoid repetition.

There are several errors of varying severity in the book. In an effort to provide some evaluation of how readers perceive the quality of MINSTREL’s stories, 9 subjects were recruited in some way (we are not told how) from the Internet. They read two versions of a story. The first version was just as MINSTREL produced it, and the second was post-edited to make the language a little more polished. Subjects filled out a brief survey after reading each version. In addition to questions such as how old they thought the author was and how much education they thought the author had, the subjects were asked whether they thought the author was male or female. They were to answer “0” if female, “1” if male. The table reporting the results of the first survey shows that the author’s sex was 0.4. The text then states that in the second survey, “the author was thought to be more male” (p. 229), with a reported value of 0.6. Now it is true that there are some interesting issues of gendered voices in discourse; however, since the question did not ask “how male” the subjects thought the author was, it seems likely that these values really refer to the proportion of responses of “0” or “1.” In a couple of places, a section of a story that is discussed in the text does not match the printout of the full story given on the preceding pages. In addition, there are several typos that add to the impression that there was a lack of attention to detail in creating the book.

In summary, I cannot recommend this book with any enthusiasm. Someone with a great interest in AI programs that create stories ought to read this book for the sake of completeness. Those with interest in models of creativity or authorship will not find any deep discussion or analysis here, and there are few references to allow further exploration of these topics. Some of the heuristics that MINSTREL uses are of potential interest, but it is difficult to tell whether they will scale up in any useful way. Finally, more effort should have been spent on organization and editing.

Stephanie W. Haas
Associate Professor
School of Information and Library Science
CB# 3360, 100 Manning Hall
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3360
E-mail: [email protected]

Managing Internet Information Services. Cricket Liu, Jerry Peek, Russ Jones, Bryan Buus, and Adrian Nye. Sebastopol, CA: O’Reilly & Associates; 1994: 630 pp. Price: $29.95. (ISBN 1-56592-062-7.)

This work has been out since last winter, but has not been reviewed as often as works in the enormous category of general Internet publications. This thorough and readable text stands head and shoulders above most in the crowd of Net books. Managing Internet Information Services is technical, but the authors consistently move from the general to the specific, and ease the reader into learning more about UNIX-based Internet services along the way. It was cited as a resource on the Library Information Technology Association’s “Guide to Internet Guides” handout in the Internet Room at the American Library Association Conference in Chicago in June 1995. There are a few inaccuracies in the work that I detected (noted below) but, in general, this is a useful text. The work is published by O’Reilly and Associates (ORA), a large UNIX publisher and Internet provider. ORA works include Ed Krol’s seminal book The Whole Internet User’s Guide and Catalog, and the GNN (Global Network Navigator) pages and indices on the World Wide Web (Web) at URL http://gnn.com/.

“About the Authors” on p. 631 summarizes their educational and skills backgrounds. A search of Wilson’s Library Literature indicates that most of the authors and the editor have solid publications to their credit (except for new author Russ Jones, and young author Bryan Buus). The authors note in Chapter 1 (p. 19) that they never met in person! They worked with each other and the editor over the Internet, mainly via E-mail. For this book, the authors collaborated overall, but each had the lead on certain sections: Liu wrote the introduction, legal, FTP, and WAIS sections; Peek wrote the E-mail services sections; Jones wrote the World Wide Web (Web) sections; and Buus wrote the Gopher sections. Twenty people reviewed the text prior to publication, according to the authors’ comments in the Preface. The text was coded in Standard Generalized Markup Language (SGML). There is a detailed explanation of all applications and filters the authors used in the colophon at the back of the book.

In “Why we wrote this book” (xxvii of the introduction), the authors state their goal of sharing tools for Internet publishing toward information diversity on the Net: “. . . it’s important that we [all Net users] establish individual and small scale publishing . . . otherwise we [the authors] fear the future Internet may be like the worst of television-a few enormous conglomerates generating trash for the mass consumer.” According to the clear information in the Preface, the text is intended primarily for UNIX system administrators. However, there are also sections geared toward “data librarians,” the authors’ term for people who are not necessarily UNIX gurus, but who do create documents (pages, files, etc.) for Internet sites.

An open book icon highlights this less technical, but key, information throughout the text. This reviewer is more on the “data librarian” side, but I found the text accessible because of the clear writing style and step-by-step explanations. Indeed, I have learned a great deal more about UNIX-based Net services, particularly WAIS and FTP, by reading this work. The authors do mention libraries as Internet communicators and information providers, so librarians should not feel excluded by this text. Managing Internet Information Services is written in a friendly, informative tone. Liu et al. have an emphasis on maintenance of World Wide Web pages that is lacking in many of the other how-to-publish-on-the-Web books extant today. The writing style is conversational. The authors often pose a question, then answer it. There are occasional lapses into colloquialisms like “more . . . than one can shake a sharp stick at” that are a bit jarring. I was pleased to find that the authors vary gender-specific pronoun usage, e.g., sometimes saying “the user she,” other times “the recipient he.”

The book comprises 30 chapters, 8 appendices, a preface, a clear and detailed table of contents, and an index. The work moves from general to specific. It includes: two introductory chapters; one chapter on finger, inetd, and telnet services; three chapters on FTP, highlighting Washington University’s services as an example; two on WAIS; eight on Gopher and related services, including Veronica and Jughead (they do not cover Archie); five on World Wide Web, including an especially useful authoring chapter; five on E-mail; separate chapters on firewalls, xinetd, and legal and copyright issues; appendices of Gopher examples; one appendix on Hyper Text Markup Language (HTML) tagging; and four appendices on Web sites and information. The Preface includes one-sentence summaries of each main and appendix chapter. This is atypical, but very useful. Each chapter has a shadow box with bullet points on the key items in the chapter prominently displayed on the first chapter page. These finding aids, along with the Preface and the good index, make this work one of the most accessible technical books ever encountered by this reviewer! It is vastly better than most software product manuals. The text has a clear and stated UNIX focus, though PC platforms are discussed. A good definition of client/server, used throughout, first appears on p. 7: the client interacts with the user; the server performs tasks as directed by the client. This book is primarily about servers. Good session examples appear throughout, with user input in boldface. Screen replications are a little blurry in places, but mostly legible. The authors begin by telling readers where to get utility and service applications. There are instructions provided throughout on where to get (usually via FTP) files. Many technical terms are asterisked, with definitions appearing as footnotes on the pages on which the terms first appear. This format is helpful and usable. The authors include good reminders throughout about users’ different levels and means of access to information on the Internet. Remember, not everyone has a graphical browser!
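To make that client/server definition concrete, here is a minimal sketch (mine, not the book’s; the book’s examples are real UNIX services such as FTP and Gopher daemons, while the host, port, and echo task below are arbitrary):

```python
# Minimal client/server sketch: the server only performs a task
# (uppercasing text); the client interacts with the user and
# directs the server.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 8765  # arbitrary local address for the demo

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)      # task directed by the client
            conn.sendall(data.upper())  # perform it and reply

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The client half: take the user's input, ship it over, show the result.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello, server")
    print(cli.recv(1024).decode())  # prints: HELLO, SERVER
```

The same division of labor holds throughout the book: the daemons it teaches readers to run are the task-performing halves, while FTP clients, Gopher clients, and Web browsers are the user-facing halves.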

Liu et al. frequently include price information when giving examples of UNIX server set-ups. This is helpful, taken along with the caveat that prices always change. Another approach that sets this work apart from many others is that the authors discuss the strengths and the weaknesses of Internet services. Additionally, they note product-specific variations when relevant. One such note, on how the Sun Solaris operating system with its dynamic linking requires both a runtime loader and a standard shared C library (p. 60), was helpful for our Lab’s team that is setting up an FTP archive as part of a fulltext fileserver. Cited references to other ORA books do not usually note the authors’ names. It would have been helpful (and more courteous) to do so. The authors call the University of Illinois research computing unit “CSO” in discussions of telnet sites and elsewhere. As a University alumna, this reviewer must note that the unit has been called CCSO (Computing and Communications Services Office; they added the communications) for several years now (though some E-mail addresses have not changed). One of their Web page examples includes a Mosaic error image for a misloaded graphical file (p. 329), but this is not explained.

In the chapter on authoring for the Web, the authors state (p. 326) that line breaks cannot be controlled in HTML documents. This is not true: the <br> tag inserts a line break! However, in Appendix D they do note the usefulness of the line break command in HTML. Netscape was not yet available at the time this text was written, so it does not contain any discussion of this now-extremely-popular browser, nor of Netscape’s HTML extensions (though proposed HTML 3 standards are discussed). Nevertheless, the authors are careful to note that not all clients are the same, and to give guides and examples that are common enough to work on all browsers, even text-based ones. So, this book can still enable one to set up good Internet services, even in this time of Netscape prominence.

Despite the inaccuracies just discussed, this ORA work proved quite helpful for this reviewer on the job. At my library we are currently updating our Library Web pages and planning how to keep them updated as a front end to both the Library online catalog and the fulltext fileserver for technical publications. I think other information professionals could also benefit from this practical text.

Sara Tompson¹
Fermi National Accelerator Laboratory
P.O. Box 500
Library-MS 109
Batavia, IL 60510
E-mail: [email protected]

The Trouble with Computers: Usefulness, Usability, and Productivity. Thomas K. Landauer. Cambridge, MA: MIT Press; 1995: xiii + 425 pp. Price: $27.50. (ISBN 0-262-12186-7.)

Tom Landauer’s name is familiar to many of us in information science for his work on a variety of human factors and IR-related problems during his years directing Cognitive Science Research at Bellcore. In a variety of experiments conducted with talented colleagues (such as Dennis Egan, Susan Dumais, Louis Gomez, and George Furnas), Landauer has explored indexing problems in retrieval, evaluated the “SuperBook” retrieval system, compared command-based with menu-based approaches, and measured the efficiency of various types of text-editors. He has been a faithful cheerleader for the importance of usability measures in the design of the computer interface.

As the title of his book indicates, Landauer believes that “computers are in deep trouble . . . they do a lot of great things that could make us more efficient, and they do a lot of stupid or unintended things that get in our way, and they’re not cheap . . . Not only are the economic data equivocal on the productivity effects of computing, but most direct evaluations of their effects on work efficiency are pretty disappointing” (pp. 4-5). That quote illustrates both Landauer’s folksy writing style as well as the theme of the first half of his book: why investments in computing have not paid off thus far. In contrast, the second half puts forward quite a different theme: that the kind of research programs he directed at Bellcore, if applied to commercial software, could turn the equation around and make a real difference in human productivity. The contrast between the beginning and ending points of Landauer’s book adds a schizophrenic aspect to this volume: can we really hope to reverse 30 years of commercial shortsightedness?

¹ Sara R. Tompson is the Library Administrator at Fermi National Accelerator Laboratory, operated by Universities Research Association, Inc., for the U.S. Department of Energy. Her opinions are her own, and do not necessarily reflect those of URA or DOE.

Beginning what we could call the “con” side of his book, Landauer trots out study after study illustrating that computers have made little impact at either a macro- or micro-economic level. The great growth in white-collar work in the 1970s and 1980s was not matched by productivity gains among office workers and managers. In perhaps the most-studied example, the efficiency gains made by secretaries using word processors, the advantage is obscured by the greater costs of equipping and training such workers, and by the fact that some of the typing has been passed “upwards” to workers who are even more highly paid (e.g., managers). Micro-level studies of word processing show that it tends to increase the number of drafts of documents, which further dilutes the productivity gain. Finally, typing is simply a tiny proportion of labor, and one which is low paid to begin with, and thus not a prime candidate for saving money. Compared with the hundred- and even thousand-fold cumulative increases in productivity spurred by automation in manufacturing, the tiny percentages found in office work are hardly worth considering.

Other applications reviewed by Landauer include computer-assisted instruction, information retrieval systems, hypertext, and groupware for decision-making. In these cases measures of efficiency seem mildly positive, while increases in effectiveness are much harder to measure; in any case, few studies have factored in the considerable costs of the equipment and training required to automate the tasks. Even computer-aided design applications, while absolutely uncontested in improving the design of integrated circuits and airplane wings, fall down in less esoteric work domains. Finally, another well-known example of alleged productivity gains, the automated teller machine, is shown to have virtually no effect on the bottom line of banks; while an ATM transaction costs half as much as a manual one, ATM transactions represent less than 5% of a bank’s total transactions and, again, require substantial investment in equipment and other resources.
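A back-of-envelope calculation (my arithmetic, not a formula from the book) shows why those two figures doom the ATM’s bottom-line impact:

```latex
% Let c be the cost of a manual transaction; an ATM transaction costs
% 0.5c, and ATMs handle at most 5% of all transactions. The gross
% saving, as a fraction of an all-manual transaction-processing cost:
\[
0.05 \times (c - 0.5c) \;=\; 0.025\,c ,
\]
% i.e., at most 2.5%, before the equipment and other investment
% Landauer mentions is even subtracted.
```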

Claimed productivity gains for all of the above-named applications together mostly range from “none” to 100% (p. 63). (Information scientists can take heart in the fact that only IR applications have produced studies claiming more than 100% efficiency gains, some as high as 600%, although Landauer is skeptical of such claims.) Looking instead at industries, the picture is brighter: there are at least a few, such as telephony, package delivery services, and retailing, in which computing seems to have improved services while holding the line on costs; as with manufacturing, well-defined and routinized tasks offer the best opportunity for automation. But turning our attention to collective measures of industrial productivity, the picture turns gloomy again; it seems that investments in information technology were a break-even proposition in most industries (and for most firms); hence, it was the firms and industries that invested less in IT that tended to have better returns on assets in the 1970s and 1980s!

Landauer then offers, and subsequently defeats, several common counter-arguments against the claim that computers have not improved efficiency; for example, that not enough time has passed to show an economic effect, or that we are not measuring the right effects (e.g., “competitive advantage,” even though that is tangential to productivity), or that some aspects of human productivity have permanently plateaued, or that many “individual testimonials” to the power of computing are all the evidence that we need to consider. Most of these arguments boil down to “computers are powerful, computers are popular, and therefore they must be efficient, too.”

The reasons for the lack of productivity gains in computer applications are all too obvious, says Landauer: equipment and software are costly; much of the labor force is unskilled and slow to adapt; incompatibilities among hardware and software cost time; computers are frequently “down”; organizations make poor judgments about buying and applying technology; the technology tends to be “misused” to merely redistribute activities rather than improve them; and, most importantly, the applications themselves are poorly designed. Here Landauer gives a number of reasons why the best intentions of companies and programmers go awry as they develop and bring their software to market. A major source of the problem is that applications, features, and services are invented for which the actual need is marginal, if it exists at all. (Landauer later provides the example of navigation systems in cars as an “obvious” application that proved to be not-so-useful when tested with ordinary drivers; it turns out that drivers perform much better with verbal instructions than with customized maps; therefore a better application to pursue would be directions given in synthesized speech.)

At this point the author changes gears and begins, with case studies drawn from a variety of companies (DEC, IBM, Xerox, Bellcore), to illustrate what makes for successful applications. Among the tools examined here are DEC’s electronic mail system, the 1984 Olympic Message System, the interface to Xerox’s photocopiers, and Bellcore’s SuperBook project. In each case, a better product was developed through repeated testing with, and observation of, actual users, along with iterative improvements at each step. The secret, says Landauer, is putting the user at the center of design, development, and deployment efforts. One does this by carefully analyzing the tasks one hopes to augment, using formative evaluation to improve design, measuring usability throughout the process, and carrying those techniques into the field as the product is deployed.

Landauer is convinced that “user-centered” design could result in 40-80% improvements in those tasks that are computer-assisted, leading to at least a 4.5% annual productivity growth overall (assuming that about 12.5% of all work time will use computers). In a concluding chapter, “Fantasy Business Systems,” Landauer explores some familiar examples (intelligent ordering systems, personal workstations, computer-based shopping systems, and electronic libraries) with an eye toward how they might evolve, given the right design methods. Throughout are sprinkled disclaimers like this one (p. 363): “It won’t work-not until it is based on thorough task analysis in a realistic setting, not until it’s mocked-up and tried and revised, then again; then prototyped and tried and revised, then again; then experimentally implemented and evaluated and revised; then again.”
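The arithmetic behind that projection is easy to reconstruct (this is my reading of the figures quoted above, not a formula Landauer presents):

```latex
% Overall gain ~ (share of work time that is computer-assisted)
%              x (per-task improvement from user-centered design)
\[
0.125 \times 0.40 \;\approx\; 0.05 ,
\]
% so even the low end of the claimed 40-80% task improvements,
% applied to 12.5% of work time, is consistent with the roughly
% 4.5% overall productivity growth Landauer projects.
```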

The frequent disclaimers illustrate a problem: Landauer’s opening arguments are rather hard to overcome at later stages in the book. If so many designers and users are willing to accept sub-optimal design, due to ignorance, greed, or cynicism, what is going to turn them around? If many employees (including managers) are so intent on resisting fundamental change, what will it take to make them change? If most companies are obsessed with taking business away from their competitors, rather than creating new business (or improving productivity), what hope is there? Doesn’t an economic system that promotes short-term thinking among producers and lemming-like acceptance among consumers work against advances in quality and efficiency?

The Trouble with Computers remains less than convincing on these points. And yet it is a book that deserves a thorough read, not only for its advice (and evidence) about the value of research in software design, but also for an introduction to one of the thorniest problems of our time: economic productivity. Landauer’s book should be of great interest to students of information retrieval systems, human-computer interaction, and information society issues.

Donald O. Case
School of Library and Information Science
502 King Library South
University of Kentucky
Lexington, KY 40506-0039
E-mail: [email protected]

Building IBM: Shaping an Industry and Its Technology. Emerson W. Pugh. Cambridge: MIT Press; 1995: 405 pp. Price: $29.95. (ISBN 0-262-16147-8.)

Emerson W. Pugh’s most recent book on the history of IBM is, above all else, an insider’s fond retelling of how IBM became the dominant force in computer technology. Unlike so many of IBM’s gleeful critics, Pugh finds more to applaud than to deplore in IBM’s effective responses to the dramatic social and technological changes that have taken place since Herman Hollerith incorporated the Tabulating Machine Company in December of 1896. Despite IBM’s disastrous mishandling of the PC market and its continuing inability to deliver good software on time, a major theme of Pugh’s narrative is that for more than seventy years IBM’s outstanding performance was largely the result of consistently outstanding leadership by top management (p. xi).

Although Mr. Pugh begins his story of IBM with the accomplishments of Herman Hollerith, Thomas Watson, Sr., was, in Pugh’s opinion, the indispensable leader who provided the vision and guidance that eventually led IBM to dominate first punched card data processing and then business computing. While Tom Watson, Sr., as Pugh notes, will always be “remembered for dress code, company songs, alcohol prohibitions, paternalistic policies, authoritarian practices, and salesmanship,” he also deserves to be remembered for his “remarkable ingenuity and flexibility as he readied the company for its post-World War II thrust into electronic computers” (p. 36). With remarkable prescience Watson allocated money for an Education Department in 1932, a time when conventional managerial wisdom dictated that slashing all unnecessary expenses (including education and training) was the best way to survive the depression. Watson’s belief in education could also take some thoroughly unconventional paths. For instance, Watson was one of the first executives to recognize that women represented an enormous, untapped talent pool. In 1935 Watson ordered IBM’s personnel department to hire women trainees selected from top colleges such as Bryn Mawr, Smith, and Wellesley. After a three-month training course the first class of women trainees was assigned to branch offices as systems service women. A graduate of this program, Ruth M. Leach, became IBM’s first woman corporate vice president in 1943 (p. 59).

When describing Watson’s role in funding research on Howard Aiken’s Mark I computer, Pugh points out that it was Watson, acting on the advice of James Bryce, his most trusted technical advisor, who personally made the decision to underwrite the entire cost of building the Mark I for the then sizable sum of $200,000. In 1946 Tom Watson, Sr., brought his son into Bryce’s laboratory to see what would become the IBM 603 Electronic Multiplier (p. 149). The performance of the IBM 603 convinced the son that computers would play an increasingly central role in data processing. The accepted view that the elder Watson was reluctant to move into electronic data processing is simply not true. As Pugh notes, the development of the IBM 603 was undertaken with the full support of Tom Watson, Sr. In Pugh’s version of the story, it was Watson, Sr., who pushed his son into the development of electronic processing equipment. Pugh’s mastery of IBM’s internal records allows him to marshal convincing evidence for this revision of computing history; for example, he notes that a 1943 letter to engineering expressed concern that IBM was lagging behind in the development and use of electronic technology (p. 149).

Pugh’s portrait of Tom Watson, Sr., depicts a man whose talents extended far beyond slogans, suits, and salesmanship. There is no doubt that Watson, Sr., remained, above all else, a salesman; he never pretended to be an engineer or even to possess any real understanding of the new electronic technologies. Nevertheless, he was a truly visionary leader who succeeded brilliantly in guiding his company through the dangerous transition period in which the old card processing technology was replaced by mainframes. It was a fitting accomplishment for the man who said in 1932 (p. 109):

There is no business in the world which can hope to move forward if it does not keep abreast of the times, look into the future and study the probable demands of the future.

The second noteworthy revision of computing history presented in Building IBM concerns the widespread belief that IBM was a late entrant in the development of electronic computing. Once again, drawing upon his matchless knowledge of internal IBM documents, Pugh weaves a convincing argument that supports his contention that:

. . . IBM had the fastest start in the market for electronic computing capability. It did not have to catch up as most accounts of this era suggest.

Pugh notes that as early as 1945 IBM had assembled a team to develop a supercomputer: the SSEC (p. 124). He goes on to state that the SSEC was, in many respects, more advanced than ENIAC (p. 134), claiming that the SSEC was the first operational computer “to satisfy the modern definition of a stored-program computer” (p. 136). Pugh concludes by speculating that the erroneous view of IBM as slow to perceive the importance of computers to data processing stems from Watson’s resistance to the term “computer.” This term, of course, was the title used for a person who did computing with the help of calculating equipment. By calling their machine a computer, Eckert and Mauchly conjured up the perception of ENIAC as possessing the intellectual capabilities of a person. Watson, the consummate salesman, resisted such terminology because he feared that the public would be more likely to resist the new technology if it seemed likely to replace people in the workplace. “All that resulted from Watson’s effort was a common misperception that early machines that were called calculators must have lacked some important quality of a computer” (p. 143).

As the title indicates, Pugh confines his story to the building of IBM. His story essentially ends with the IBM 370. The melancholy saga of the IBM PC, the RISC fiasco, and IBM’s recent woes are all compressed into less than ten pages. Surprisingly, Pugh has little to say about what went wrong. In what is certainly the weakest part of the book, Pugh argues that IBM executives were distracted by the protracted (thirteen-year) antitrust suit. He also notes that with the retirement of Tom Watson, Jr., there were no “take charge” leaders available (p. 318).

No doubt there is some truth to both of these explanations.

Nevertheless, they remain too superficial to be convincing. After all, when a corporation loses some $75 billion of its stock-market value and writes off $20 billion of assets, a more substantive explanation is needed (Carroll, 1994, p. 4). But “the unmaking of IBM,” to use Paul Carroll’s evocative phrase, is not the story Mr. Pugh strives to tell.

Despite these minor limitations, Building IBM: Shaping an Industry and Its Technology provides an indispensable and marvelously entertaining biography of one of the most important corporations, if not the most important corporation, in the development of the information age. After all, for nearly three decades IBM was “. . . the most profitable, the most admired, the best company in the world, maybe in the history of the world” (Carroll, 1994, p. 3). As such, it is a worthy addition to Mr. Pugh’s earlier monographs on the development of IBM’s early computers and the famous 360/370 systems.

Stan Hannah
Nova Southeastern University
3301 College Avenue
Fort Lauderdale, FL 33314

Reference

Carroll, P. (1994). Big blues: The unmaking of IBM. New York: Crown Publishers. The decline of IBM is brilliantly told by the reporter who covered IBM for the Wall Street Journal for seven years.

Measurement in Information Science. Bert R. Boyce, Charles T. Meadow, and Donald H. Kraft. San Diego: Academic Press; 1994: xvii + 283 pp. Price: $59.95. (ISBN 0-12-121450-8.)

The authors of this book observe that, in contrast to the physical sciences, studies in information science have not adequately employed measurement techniques. Their book is written with the objective of helping readers define and use measures precisely and, hence, paving the way for the results of future studies on information retrieval and database management systems to be rigorous and repeatable. In addition, it is expected that knowledge of the material covered will allow one to become a productive critic of the literature involving measurement and encourage information scientists to use precise measurement in their work to a greater extent.

The book is divided into five parts. The first deals with certain fundamentals of measurement. Chapter 1 defines the meaning of measurement, gives reasons for measuring, and discusses various facets of the problem in the context of information science. In Chapter 2, commonly used methods for measuring are presented. A myth that surrounds the quantification of imprecise concepts by the use of measures is that the process automatically makes the outcome objective. The authors promptly dispel this by observing that measurement is NOT objective per se, since there must already exist an intuitive notion of the property that is to be measured. For example, one should have the concept of what it means for A to be “taller” than B even before a measure is devised. Chapter 3 considers criteria by which the quality of measuring devices can be investigated. Finally, Chapter 4 introduces the notion of scale and its relevance for deciding what manipulations on the measure are permissible.

Part 2 is an overview of concepts from statistics. Chapters 5 and 6 cover some introductory concepts of descriptive statistics and statistical inference. Coverage of such a vast topic in two chapters leads to a lack of precision and completeness. I think that the material on probability and sampling fits better with the focus of Chapter 6 (statistical inference and hypothesis testing) than with that of Chapter 5. Alternatively, the relationship between probability and frequency could be more fully exploited in Chapter 5 to go from histograms to density functions. Chapter 7 covers statistical methods such as factor analysis, clustering, and multi-dimensional scaling that are of particular importance to information science. This reviewer finds that the connection of the material on fuzzy sets to the rest of the chapter is not clearly brought out.

Given that the book is aimed at information science professionals, Parts 3 and 4 can be said to constitute the real meat of this work. Part 3 is on measures of text and bibliographic phenomena. Part 4, which includes Chapters 10 through 14, elaborates on measurement problems and current solutions in the context of database and information retrieval systems, as well as users of such systems. Given the scope of the material and the difficulty of presenting rigorous material to mathematically naive readers, the authors provide, for the most part, a good balance between generality and precision. There are, however, several exceptions.

The whole field of information theory, for example, is covered in about two pages (Section 9.7). Relevance feedback is described in half a page (Section 13.4.3). Furthermore, these topics seem to be a misfit in their current locations. Perhaps relevance feedback could have been included with the background material on information retrieval systems in Chapter 10.

The important formal relationship between finding an optimal query through relevance feedback and discriminant analysis (Section 7.2.1) is not mentioned at all. Although matching functions such as cosine, dot product, and Jaccard are introduced in Chapter 7 (in the context of document clustering), no mention is made of their common usage for retrieval purposes (i.e., for matching documents and queries). This could have been done in Section 13.4.
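For readers unfamiliar with those matching functions, here is a small illustrative sketch (mine, not the book’s) applying them to binary term-incidence vectors for a hypothetical five-term query and document:

```python
# Three classic matching functions applied to query-document matching.
import math

def dot(q, d):
    # Dot product: number of shared terms, for binary vectors.
    return sum(qi * di for qi, di in zip(q, d))

def cosine(q, d):
    # Dot product normalized by vector lengths.
    norm = math.sqrt(dot(q, q)) * math.sqrt(dot(d, d))
    return dot(q, d) / norm if norm else 0.0

def jaccard(q, d):
    # Shared terms divided by terms present in either vector.
    union = sum(1 for qi, di in zip(q, d) if qi or di)
    return dot(q, d) / union if union else 0.0

query = [1, 1, 0, 0, 1]  # terms present in the query
doc   = [1, 0, 0, 1, 1]  # terms present in the document

print(dot(query, doc))                # 2 (shared terms)
print(round(cosine(query, doc), 3))   # 0.667
print(round(jaccard(query, doc), 3))  # 0.5
```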

In terms of IR system evaluation measures, certain comparisons are very vague. For example, on page 198, it is mentioned that Shaw’s measure is close to Vickery’s. But, in fact, the measures of Heine, Vickery, and Shaw are all strictly monotonic transformations of each other (as with cm vs. km, for length). Hence, as ordinal scales, they will yield identical results (conclusions). More generally, although scale types are introduced, those ideas are not applied in any way throughout the book. For example, is recall or precision an interval scale? Why or why not? Under what conditions would either be one?
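The reviewer’s point can be stated compactly (standard measurement theory, not notation from the book): two measures that differ only by a strictly increasing transformation rank systems identically.

```latex
% If g = h \circ f with h strictly increasing, then for any systems x, y:
\[
f(x) < f(y) \iff h(f(x)) < h(f(y)) \iff g(x) < g(y),
\]
% so f and g induce the same ordering; as ordinal scales they support
% exactly the same conclusions, just as lengths expressed in cm and in
% km order objects identically.
```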

Part 5, which pertains to information system measures, has two chapters. Chapter 15 covers software metrics. Although I recognize the importance of this area for professionals in software engineering and development, I fail to see its importance for students and practitioners in information science. However, in the context of recent interest in the creation of software repositories, a chapter with an orientation towards databases containing software (analogous to Chapter 9, for text, and Chapter 11, for structured data) would have been more fitting. Chapter 16 concerns information services, but provides little detail concerning measures.

Given the general topic of this book, it is clear that the results of measurement theory are of utmost importance. But the authors do not adequately refer to decades of work on measurement theory. For example, even the milestone book of Krantz, Luce, Suppes, and Tversky is not mentioned.

To summarize, this book handles a subject area of great importance to information science and brings together material that one cannot otherwise find in a single volume. Questions such as what sorts of things are important to measure in information retrieval and database systems, and what measures are currently used, are handled well. The various problems mentioned are easily corrected, for example, in a new edition. Therefore, I believe this work would serve well as a solid reference book for most information scientists.

Vijay Raghavan
Center for Advanced Computer Studies
P.O. Box 44330
Lafayette, LA 70504

Reference

Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement (Vol. 1). New York: Academic Press.

Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Jeffrey Rubin. New York: John Wiley & Sons; 1994: 330 pp. Price: $34.95. (ISBN 0-471-59403-2.)

Jeffrey Rubin’s contribution to the growing assortment of writings on evaluation of computer-based systems has identified a specific niche and done a good job of nesting there. Rubin asserts that many people are expected to perform usability tests without formal training in human factors or usability engineering. His text is intended to help these practitioners faced with directives to develop user-friendly systems.

It will come as no surprise to information professionals that “many high-tech products, such as computer-based systems, electronic equipment, and even everyday appliances . . . [are] hard to use” (p. 3). Rubin presents various reasons for the absence of user-centered design in product development, including the traditional emphasis on machines/systems and technical expertise rather than on the user and the user’s needs. This discovery of the importance of users’ perspectives can be observed as well in the increase in user-centered research related to information retrieval systems and information seeking.

Many information professionals encounter poorly designed systems as they cope with their own or other users’ anxiety and frustration; occasionally one feels that tools which should assist the information seeker are designed instead to thwart every rational, and sometimes irrational, approach to information retrieval. Rubin argues that “the design of usable systems is a difficult, unpredictable endeavor, yet many organizations treat it as if it were just ‘common sense’ . . . [E]veryone has an opinion about usability, and how to achieve it” (p. 6).

Most of the Handbook of Usability Testing aims to provide basic assistance “in the fast-paced, highly pressurized development environment in which most readers find themselves” (p. 28). Moreover, the author observes, poorly conducted research can lead to inaccurate findings, which can be worse than not assessing a system’s usability at all because it may instill false confidence in the design.

Rubin contrasts scientific studies of usability with the practical orientation he espouses, implying that the more complete model of scientific research, with hypothesis testing and random samples, is beyond the capability of those untrained in human factors research methods. In fact, the exigencies of conducting a usability evaluation will ring true to any social science researcher, and deviations from the classical model occur when social scientists use qualitative methods and exploratory studies to understand the complexities of human-computer interaction.


Rubin’s practical approach can be used in settings other than those he envisions. His experience as a usability consultant backs up his advice, often with first-person accounts, and the presentation makes generous use of graphics, lists, and tables which provide ready access to the material. This handbook has the feel of a Rubin workshop translated onto paper. For example, he describes his experience of allowing product developers, in the absence of any other information, to suggest the maximum time for task completion in order to set benchmarks. Even with these generous definitions, system problems emerged, and the developers were more willing to recognize the problems when they had, in essence, set the standards for judging performance.

The sections on developing skills as a test monitor, conducting the test, and debriefing participants will be especially helpful to novices and reassuring to those moderately experienced with observational research techniques. Rubin’s confidence in iterative design, and in the ability of each phase of testing to turn up the most significant design flaws, helps the reader overcome fears of conducting a less-than-perfect test. The eight-page index provides detailed entries, helping one find vaguely remembered bits of advice.

The handbook concludes with a preliminary plan for phasing in a usability program, again presented from a highly practical, support-building perspective. Several usability-oriented professional societies and their publications are mentioned in this section, and the code of ethics from the Human Factors and Ergonomics Society is presented as an appendix.

By way of comparison, A Practical Guide to Usability Testing (Dumas & Redish, 1993) covers similar topics and provides comparable advice with a slightly more scholarly approach. Rubin’s book is more individual, conveying the informality of a workshop. As such, it is especially suited to the practitioner assigned to make systems user friendly without much preparation for the job; the Dumas and Redish book seems to be designed more for students in classes on human-computer interaction/systems design and evaluation. Either or both would be valuable to anyone conducting usability tests.

Debora Shaw
School of Library and Information Science
Indiana University
Bloomington, IN 47405
E-mail: [email protected]

Reference

Dumas, J. S., & Redish, J. C. (1993). A practical guide to usability testing. Norwood, NJ: Ablex.
