Measuring the Readability of Business Writing: The Cloze Procedure Versus Readability Formulas

Kevin T. Stevens, Kathleen C. Stevens, and William P. Stevens
Journal of Business Communication, 1992, 29(4): 367. DOI: 10.1177/002194369202900404
Published by SAGE Publications on behalf of the Association for Business Communication.
The online version of this article can be found at: http://job.sagepub.com/content/29/4/367




Measuring the Readability of Business Writing: The Cloze Procedure Versus Readability Formulas

Kevin T. Stevens, DePaul University
Kathleen C. Stevens, Northeastern Illinois University
William P. Stevens, DePaul University

Readability formulas determine the readability level of a passage by examining word difficulty and sentence length. Computer software packages are now available that use readability formulas to assign a readability score for the "average reader." However, readability formulas have severe methodological flaws and are not appropriate measures of the readability of materials written for adults.

In contrast, the cloze procedure is a method of determining the readability of adult material by testing the target audience itself. The cloze procedure is widely considered to be the accepted method of assessing the readability of college-level material. The procedure may be particularly useful for business writers who have access to specific populations of readers on whom the procedure can be performed.

The personal computer is a powerful business communication tool that permits writers to revise material easily and to check spelling and grammar. However, simply because a technique can be computerized does not mean that it is valid. We refer to the use of computerized readability formulas, such as the Flesch (1948) formula, either to measure the readability of various business-related documents such as annual reports (for example, Courtis, 1987; Haar & Kossack, 1991; Jones, 1988; Razek & Cone, 1981; Schroeder & Gibson, 1990) or to measure and improve the readability of business writing by students and others (Bates, 1984; Nelson, 1987; Penrose, Bowman, & Flatley, 1987; Schroeder & Gibson, 1987; Sterkel, Johnson, & Sjogren, 1986; Wedell & Allerheiligen, 1991). Several popular software programs incorporate one or more of these formulas for computing an instant "readability" score. However, as we will argue below, the use of formulas is almost always an inappropriate means of assessing readability. The fact that those formulas are computerized does not make them any more appropriate. Instead, recent research on reading suggests that the cloze procedure, rather than readability formulas, is the method of choice for assessing the readability of adult-level reading material (Klare, 1988). The purpose of this paper is to contrast readability formulas with the cloze procedure, to describe the use of the cloze procedure, and to recommend the use of the cloze procedure by business writers.

Readability formulas were developed on the reading comprehension performance of middle-grade readers (fourth, fifth, and sixth graders). The applicability of readability formulas to adult, specialized populations is open to question. In contrast, the cloze procedure gives business writers a method of assessing whether written material is comprehensible by a specific audience. Testing the readability of business writing through the cloze procedure is possible whenever writers have access to a sample of the intended audience. For example, writers of employee training manuals can test the readability of passages intended to be read by an audience possessing specialized vocabularies and skills. In contrast, documents such as instructions to Internal Revenue Service forms are intended for a broader audience with varying knowledge bases and vocabularies. Writers of these types of materials can use the cloze procedure to test the readability of the documents by sampling a broad spectrum of the users of the instructions.

ARE READABILITY FORMULAS VALID AND RELIABLE?

Readability formulas have greatly influenced the teaching of business writing (Selzer, 1981). Readability formulas measure two aspects of a piece of text: sentence length and word difficulty. Each formula then applies a slightly different equation to get a score which is expressed as a grade level (for example, readable by a person with a 10th grade reading level). However intricate the equation, the measures upon which the readability score is based remain the same: sentence length and word difficulty. There are problems with each of these measures.
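To make concrete what such an equation looks like, the sketch below implements the published Flesch (1948) Reading Ease formula, score = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words). The vowel-run syllable counter is our own rough approximation (Flesch's counts were made by hand), not part of the formula itself:

```python
import re

def count_syllables(word: str) -> int:
    # Rough approximation: one syllable per run of consecutive vowels.
    # (An assumption for illustration; Flesch's syllable counting rules
    # were applied by hand and are more elaborate.)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch (1948) Reading Ease score; higher means easier text.

    score = 206.835 - 1.015 * (words / sentences)
                    - 84.6 * (syllables / words)
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

Note that nothing but counts of words, sentences, and syllables enters the score; the reader's knowledge, motivation, and vocabulary appear nowhere, which is precisely the objection developed below.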

Readability formulas based on sentence length assume that a longer sentence is more difficult to comprehend than a shorter one. However, this is not always the case, as longer sentences are not necessarily syntactically more complex. Recent reading research now considers reading formulas to be methodologically flawed and inappropriate measures of adult reading material. Many linguists have questioned the assumption that a longer word or sentence is necessarily harder to read than a shorter one (Davison, 1984). For example, consider the following two sentences:

1. To be or not to be, that is the question.

2. I went to the store and bought eggs, bacon, and bread, brought them home and made a big breakfast.


The Flesch formula would consider the second sentence more difficult to read than the first (and four times as difficult as "I think, therefore I am.").

In addition to the fact that it is simplistic to associate syntactic complexity with length of sentences, one longer sentence is sometimes easier to comprehend than two shorter sentences. For instance, Anderson, Armbruster and Kantor (1980) cite these two sentences from a sixth grade science book revised to be more "readable":

1. You probably saw lily pads, grass, reeds and water weeds growing in the shallow water near the shore.

2. You probably saw lily pads, grass, reeds and water weeds. These plants grow in the shallow water near the shore.

While the revised sentence (2) has a lower (that is, more readable) formula score because the two sentences are shorter than the one long sentence, the sixth grade readers found version (1) to be more comprehensible because the relationship between the plants and the water in which they grew was direct and explicit. Oftentimes, causal links are lost when sentences are shorter:

1. It is raining. The picnic is canceled.

2. Because it is raining, the picnic is canceled.

While version 1 rates as more "readable" (because of shorter sentences), version 2 is easier to comprehend because the causal link is direct and explicit. Thus, shorter sentences do not always translate directly into more readable material. For example, poetry often has very short sentences, yet it can be obtuse and difficult to comprehend.

Assessing the difficulty of words presents even greater sources of error. Some formulas (Fry, 1977) use the syllabic length of the word as a measure of word difficulty. In other formulas (Dale & Chall, 1948), a piece of prose is rated as more difficult to read if it has a high proportion of words not appearing on common word lists. However, these lists fail to account for the multiple meanings of words. For example, the term "boot" is on formula lists of common words, but has a significantly different meaning in tax parlance than in common usage. A reader could perfectly comprehend the meaning of "boot" as footwear but not understand the arcane meaning of that term as it applies to tax accounting. Yet, because the word boot is "common," formulas based on lists of familiar words will judge it to be easy to comprehend.


The same arguments that were made against sentence length as a measure of semantic complexity can be made against word length. Shorter words are not necessarily easier to comprehend than longer words. Consider, for example, the word "thane" versus "television." The Flesch formula would judge "television" to be four times as difficult to comprehend as "thane" simply because "television" is four syllables but "thane" is one syllable. Nor are word lists of "uncommon" words a true measure of semantic difficulty. Not only are the multiple meanings of many English words a problem, but also the familiarity of word concepts is highly dependent upon the reader's pre-existing knowledge.

As these examples suggest, a serious limitation of formulas is that they do not consider the prior knowledge (schema), language ability, or motivation of the reader. Communication involves not only elements of text difficulty but also elements of reader ability. That is, formula scores do not assess the interactive nature of reading comprehension. Formulas give an estimated grade difficulty score (with a standard error of measurement of plus or minus 2.0 years) based solely on the length of the sentences or the purported difficulty of the words. Formulas do not consider, among other things, "the match between the conceptual background of the reader and the conceptual load of the text" (Courtis, 1987, p. 21).

THE AVERAGE READER?

Essentially, readability formulas measure only two factors in reading complexity: word difficulty (measured by either length of word or "familiarity") and sentence length. They do not (and cannot) measure such reader-specific factors as "motivation, interest, competitiveness, and purpose" (Bruce, Rubana, & Starr, 1981; Roundy, 1981). They do not consider the varied backgrounds of the readers, but, instead, compute a reading score for an "average" reader. These scores are particularly inadequate measures of the comprehension ability of specially educated, adult readers (for example, college graduates) who possess a specialized vocabulary and knowledge base not held by the "average" reader (McKenna & Robinson, 1980; Means, 1981; Singer & Donlan, 1980; Zakaluk & Samuels, 1988).

Formulas (and computer programs based on those formulas) encourage writers to use shorter words and sentences (Selzer, 1981). Hagge (1984) states that writing instructors should encourage flexibility rather than adherence to rigidly structured rules on sentence length and complexity. Furthermore, writing style should be highly influenced by the audience. Thus, the superior writer first considers who will be reading his work. For instance, an article on baseball written for the neophyte (even the neophyte with a very high reading level) will use an entirely different vocabulary and structure than an article on baseball written for the seasoned student of the game (Voss & Bisanz, 1985). The good writer is not considering the "reading level" of her audience (nor is she considering a readability formula score), but, rather, the knowledge and sophistication of her readers. A formula score offers absolutely no help in this regard. Soldow (1982) notes that it is important to tailor messages to the intended readers and to provide cognitively complex readers (for example, readers of annual reports) with syntactically complex messages. Means (1981) comments that readers of annual reports are probably well able to read material judged to be "difficult" or "very difficult" by readability formulas.

Those who use formulas to assess the readability of business documents such as annual reports state that formulas are reliable and valid measures of readability (Razek & Cone, 1981; Schroeder & Gibson, 1991). However, there has been little empirical evidence to support that claim and some evidence to the contrary (Karlinsky, 1983; Klare, 1976, 1988). When such formulas are developed, the creators of the formula must find a criterion measure; that is, they must find material that they are sure is written at a designated level. If the material on which the readability formula is based is not accurately graded, then the formula can never be valid (Ekwall & Shanker, 1989). The early reading formulas such as the Flesch (1948) and Dale-Chall (1948) were based on the McCall-Crabbs Standard Test Lessons in Reading. Stevens (1980, p. 414) notes that the McCall-Crabbs Test Lessons were

not based on extensive testing and the grade level scores they yield[ed] lack reliability and comparability. Complete technical information on the lessons is not available from the publisher [of the McCall-Crabbs Test Lessons] because the lessons were never intended to be used as tests.

As a matter of fact, the McCall-Crabbs Test Lessons (from which the Flesch and many other readability formulas derived their readability grade levels) were given only to several hundred New York students in grades 3-8; they were never intended to be general indicators of reading ability across all age or educational groups (Stevens, 1980).

Since all widely-used readability formulas measure the same attributes, sentence length and word difficulty, one would expect similar formulas to provide similar measures of readability, and one would also expect these measures to tap actual reader comprehension. However, Klare (1976) reviewed 65 studies in which readability formula estimates of difficulty and reader comprehension were correlated. In only 20 of those studies were positive correlations reported. That is, fewer than one-third of the studies found a correlation between formula scores and actual readers' comprehension. Klare acknowledges that formulas can be useful but urges extreme caution in their application. Zakaluk and Samuels (1988) and McConnell (1982) go further in criticizing formulas and observe that the lack of inter-formula consistency in judging the difficulty of reading material calls the reliability of reading formulas into severe question. Davison and Kantor (1982) state that readability formulas "fail to give any adequate characterization of readability" (p. 207).

Since formulas are so flawed, why are they so widely used? The answer is undoubtedly that measuring the reading audience's ability to understand a printed message is extremely important. Writers want their messages to communicate, and they want to see readability formulas as a gauge of readability. Thus, business writers and teachers of business writing may well be tempted to use computerized readability formulas as a means of assessing and revising writing. However, using computerized readability formulas to revise business writing can, in fact, make the writing less readable (Davison & Kantor, 1982). Revising written material is successful only when "the adaptor function[s] as a conscientious writer rather than someone trying to make a text fit a level of readability defined by a formula" (Davison & Kantor, 1982, p. 187).

However, a more interactive means of measuring comprehensibility exists: the cloze procedure. The cloze procedure, rather than readability formulas, is the "criterion of choice" for assessing the readability of adult reading material (Klare, 1988, p. 24). The procedure has come to be accepted by the education community as a "very reliable and objective measure" of directly measuring the comprehensibility of written material (Haar & Kossack, 1990, p. 192). Indeed, after long study, the professional association in the field of reading, the International Reading Association, has urged the use of the cloze procedure to measure readability (Klare, 1988).

THE CLOZE READABILITY PROCEDURE

Communication between reader and writer can occur only when there is a commonality in language and in pre-existing knowledge structures (schemata). That is, the ability to comprehend a passage depends on the reader's pre-existing general knowledge of the language used in the passage and the knowledge of the topic presupposed by the author (Bransford & Johnson, 1973; Voss & Bisanz, 1985). A large body of research illustrates the fact that the reader's prior knowledge is the key variable in comprehension. Bransford and Johnson (1973) demonstrated in their research that when the topic of a passage (washing clothes) was known to the reader, comprehension increased markedly. Anderson, Reynolds, Schallert, and Goetz (1977) found that when the reading matter was baseball, prior knowledge of baseball was actually a more powerful determinant of comprehension than the general measured reading ability of the individual. This effect of prior knowledge, or schema, has been found to be operative in comprehension in empirical studies done from second grade (Pearson, Hansen, & Gordon, 1977) to college, and at every level in between (Freebody & Anderson, 1983; Hocevar & Hocevar, 1978; Langer, 1984; Reynolds, Taylor, Steffenson, Shirey, & Anderson, 1982; Robovich, 1979; Stevens, 1980). Indeed, research done in the 1970s and 1980s illustrates that the reader's prior knowledge is what reading comprehension is based upon.

Thus, the reader must be familiar with the concepts (for example, efficient market), terms (for example, revenue), and context (for example, financial report) if the reader is to comprehend the message that the writer hopes to convey. The meaning of a passage "cannot be determined independently of the context into which a reader is trying to assimilate it" (Bransford & Johnson, 1973, p. 414). One might note that an assessment of such context-independent understanding is exactly what reading formulas attempt to do. In contrast, the cloze procedure is a means of determining the difficulty of reading and comprehending a given piece of material for a particular audience. It takes into account the vocabulary, background, and language of the reader vis-a-vis the specific piece of writing. Essentially, the cloze procedure measures the degree of the interaction (including their prior knowledge) between the readers and the material read (Anderson, 1974; Bormuth, 1966, 1968, 1969; Kaplan, Carvellas, & Metlay, 1971; Nesvoid, 1972).

One conducts the procedure by constructing a cloze passage wherein every fifth word from the reading material of interest, up to a minimum of fifty words, is deleted. Prior statistical research (Bormuth, 1965; Taylor, 1956) indicates that a minimum of three passages randomly selected from the reading material of interest provides a sufficiently representative sample of the material. Readers are given the passage from which the words have been deleted. Their task is to fill in the blanks with the one, exact word that they think the author actually used.

A readability measure is then derived by calculating the percentage of words correctly predicted by each reader. The more words successfully predicted by the reader, the more readable the material is for that reader, indicating more commonality (in knowledge and language) between the reader and the writer. Norms have been developed indicating the minimum percentage of correct predictions necessary for a passage to be "readable" (Bormuth, 1965, 1966; Miller & Coleman, 1967; Peterson, Peters, & Paradis, 1972; Taylor, 1956). For adult readers, a percentage score of 57 percent exact word replacements represents an independent reading level. That is, subjects who correctly predict at least 57 percent of the author's words are able to read the material unaided (without instruction from others) and with 90 percent comprehension. They have shown the ability to interact with the writer by successfully predicting the writer's message. Prior research indicates that the norm of 57 percent as an indicator of 90 percent reading comprehension is stable across adult populations (McNinch, Kazelkis, & Cox, 1974; Peterson et al., 1972). In addition, a score of 44 percent or better indicates an "instructional level" (75 percent comprehension). At this level, the reader could understand the material if given instructional help (for example, if the difficult concepts are taught or arcane terms defined prior to the reading assignment). Presumably, then, this 75 percent comprehension level is the reading level at which instructional-type texts should be aimed.
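The mechanics just described can be sketched in a few lines of code. The five-word deletion interval and the 57/44 percent norms come from the studies cited above; the function names and the simplified case-insensitive exact matching are our own assumptions, and a real administration would also handle punctuation and sample at least three fifty-deletion passages:

```python
def make_cloze(text: str, interval: int = 5) -> tuple[str, list[str]]:
    """Delete every `interval`-th word, returning the gapped passage
    and the answer key of deleted words (Taylor, 1956; Bormuth, 1965)."""
    words = text.split()
    answers = []
    for i in range(interval - 1, len(words), interval):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def cloze_score(responses: list[str], answers: list[str]) -> float:
    """Percentage of exact word replacements (case-insensitive here)."""
    hits = sum(r.strip().lower() == a.strip().lower()
               for r, a in zip(responses, answers))
    return 100.0 * hits / len(answers)

def reading_level(score: float) -> str:
    # Norms cited in the text: 57%+ exact replacements -> independent
    # level (about 90% comprehension); 44%+ -> instructional (about 75%).
    if score >= 57.0:
        return "independent"
    if score >= 44.0:
        return "instructional"
    return "frustration"
```

A passage scoring below the instructional cutoff (conventionally called the frustration level in the reading literature) signals that the material is too hard for that audience without revision.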

The cloze procedure allows one to measure the readability of a particular document by its intended audience. Thus, if a writer or user of reading material wishes to determine if what was written is comprehensible by its readers, she can measure that directly by providing a cloze passage to a sample of the audience. Indeed, the cloze procedure was devised in the field of journalism to ascertain if the mass market magazines were actually reaching their audience (Taylor, 1956). While readability formulas were available, magazine editors were aware that the "grade level of an average reader" would tell them nothing about the comprehensibility of their prose to the targeted audience. The business community's needs to reach its audience are arguably every bit as important as magazine editors'. How might the cloze procedure be applied here?

POSSIBLE USERS OF THE CLOZE PROCEDURE

The importance of readability seems apparent: documents that are not readable are costly both to the preparer and the user of the document. The costly and difficult part of the cloze procedure is, of course, obtaining sufficient subjects willing to fill out a cloze passage. However, many organizations could, for at least three reasons, easily implement the cloze procedure to assess the readability of their written communications.


First, many organizations have large "captive" audiences (for example, IBM has nearly 400,000 employees). Second, actually completing a cloze passage takes only about 15 to 20 minutes; therefore, compensating subjects for their time need not be prohibitively expensive. And third, the benefits of the cloze procedure, for example, increased comprehensibility of training manuals, may well exceed the costs of conducting the test.

Possible users of the cloze procedure include school systems, government agencies, and large corporations. For example, textbook adopters at large universities or university systems have access to many student readers in large lecture classes and could easily implement the cloze procedure to assess the readability of textbooks. Indeed, high schools often do use the cloze procedure to decide among otherwise equivalent texts. The several texts under consideration provide material (randomly selected) for cloze passages. Then, the passages are randomly distributed to classes of students who are potential users of the textbooks. The text that the users find most readable (that is, receives the highest cloze score) is thus identified, and readability can then be factored into the textbook decision, along with such other factors as quality of supplemental material or depth of coverage. Colleges could just as easily ascertain the readability of potential texts by pretesting them via a cloze procedure. The costs of such a test would appear to be amply offset by the benefits of discovering and eliminating unreadable texts from adoption consideration.

The cloze procedure can also be used by the course instructor to assess the reading level of assigned textbooks when instructors do not have a choice of texts. By ascertaining if the text is at the independent or instructional level for a given class, the instructor can tailor his teaching style to the background of the class. For example, if the book is at an independent level for the group, the instructor can assume that the students can comprehend the material prior to class discussion. However, if the book is at an instructional level, perhaps the instructor ought to preteach the important concepts and terms.

The U.S. government also has a vested interest in the readability of its documents and could use the procedure by sampling its huge population of employees. For example, the Internal Revenue Service (IRS) could use the cloze procedure to assess the readability of the instructions to the over 100 million individual income tax returns filed annually (IRS, 1988). The cost to both the government (for example, lost tax revenue or increased audits) and the filers of returns (for example, fines or preparers' fees) is high if the instructions are unreadable. Anecdotal evidence indicates that many individuals find the instructions to tax returns very difficult to read. For example, in 1987, over 50 percent of those filing either of the relatively simple Forms 1040EZ or 1040A (essentially, taxpayers with only wage and interest income and no itemized deductions) paid a preparer to complete the form (IRS, 1988). Admittedly, it has not been proven that taxpayers pay preparers to do their returns because the instructions to the returns are difficult to read. However, we argue that when over half the population cannot complete their relatively easy tax returns, it seems reasonable to conclude that the readability of the instructions is an important factor. Perhaps the IRS could use the cloze procedure on readers from its own workforce or from other government agencies to determine which forms and their instructions are readable.

Other possible users of the cloze procedure are the U.S. armed forces. Military personnel (many of them with no more than a high school education) spend many hours reading highly technical instructions (for example, on the maintenance of the M-1 tank). Obviously, the armed forces have access to a huge population of readers and could use this captive audience and the cloze procedure to assess the readability of their various training manuals for their various audiences. The relatively low costs (for example, organizing classes and printing and scoring the cloze passages) to the military of implementing the cloze procedure would surely be outweighed by the benefits of discovering whether the instructions to its expensive and lethal equipment are readable.

A wide array of nongovernment writers of business or technical reports could also feasibly use the cloze procedure. For example, consider companies who make similar products and are highly reliant on customer satisfaction for repeat sales (for example, manufacturers of cars or computer software programs). Manufacturers of competing software programs (for example, spreadsheet packages such as Lotus 1-2-3 versus Excel) could examine the relative readability of their instructions by administering the cloze procedure either in-house or at conventions and trade shows. It seems reasonable to argue that customer satisfaction with material such as computer software packages is strongly influenced by whether the customer can read the instructions and that customers will prefer and purchase the product with more readable instructions.

Similarly, car manufacturers could sample either their customers (for example, at car shows) or their employees to determine the readability of their service manuals. Clearly, the cost to both the car manufacturer and the customer will be high if the customer cannot read the instructions, fails to maintain the car properly, and is dissatisfied with the product.

Furthermore, companies with many employees (for example, General Motors or IBM) could easily implement the cloze procedure to review the readability of crucial in-house documents. For example, consider the importance to General Motors and its customers that the technical instructions to assembly line workers are clearly understood. General Motors could set aside a half-hour of its regular training schedule to assess the readability of the instructions. The costs of paying the workers to perform the cloze test might be much smaller than the cost of downtime or product defects caused by unreadable instructions.

Quasi-governmental organizations such as the Financial Accounting Standards Board (FASB) or the American Bar Association could also use the cloze procedure to determine whether their pronouncements are readable. For example, the FASB promulgates generally accepted accounting principles. These principles directly affect the financial reporting of all publicly held businesses. Presumably, the readability of these pronouncements affects the costs (for example, accurate implementation of new standards) of financial reporting. The FASB could request the assistance of such accounting organizations as the American Accounting Association (for example, at national or regional meetings) or the American Institute of Certified Public Accountants (for example, with volunteer employees from large public accounting firms) to provide readers of cloze passages. Those pronouncements that these sophisticated audiences find difficult to read could be rewritten and refined until understanding resulted. The results of the cloze test will indicate when that understanding is achieved.

CONCLUSIONS

The cloze procedure allows a writer to determine if a match exists between the written material and its intended audience. Readability formulas do not consider the target audience; therefore, they cannot be used to determine comprehensibility.

However, the cloze procedure is not designed as a tool for students or other writers to self-test the readability of their own writing. Furthermore, the cloze procedure does not provide guidance on how to revise and improve writing. Use of the cloze procedure does enable writers to check their prose against a demonstrable measure of understandability. That is, writers can actually "field test" their prose to see if the intended audience finds it comprehensible. If the results of the cloze procedure indicate that the audience is having difficulty understanding the material, the writer can revise the prose to make it more accessible, paying particular attention to possible gaps in pre-existing knowledge structures on the part of the audience. Furthermore, the author can examine the prose for excessive jargon, for inexact causal links, and for possible ambiguities. In short, the cloze procedure is a useful tool, but it can only diagnose poor writing. The cure for poor writing is, as always, constant revision and conscientious attention to the intended audience.

Certainly, few would argue against the use of personal computers as a means of improving business communication. We are not Luddites (Munter, 1986) blindly opposing the use of computers as too newfangled. However, computers can be misused if the underlying technique is itself invalid. We argue that this is the case with computerized readability formulas. These outdated formulas have severe methodological flaws that preclude their use either manually or electronically. Indeed, Selzer (1981, p. 24) describes the influence of readability formulas on the teaching of business writing style as "pernicious" because formulas "eliminate choice—perhaps the most important word in any writing course." Business educators and writers should eschew using these formulas to measure the readability of business materials or to improve business writing. Instead, those interested in the readability of business communication can use the cloze procedure. The cloze procedure is preferable because it measures how difficult a piece of material is for a given population of adult readers. This procedure allows one to assess the readability of material by its intended audience, an audience often sophisticated enough to comprehend longer sentences and an audience which, because of specialized prior knowledge bases, can also comprehend "difficult" words.

NOTE

Kevin T. Stevens (C.P.A., D.B.A.) is an Assistant Professor in the School of Accountancy at DePaul University; Kathleen C. Stevens (Ph.D.) is a Professor in the Department of Reading at Northeastern Illinois University; William P. Stevens (C.P.A., Ph.D.) is an Associate Professor in the School of Accountancy at DePaul University.


REFERENCES

Anderson, R. C., Reynolds, R., Schallert, D., & Goetz, E. (1977). Frameworks for comprehending discourse. American Educational Research Journal, 14, 367-381.

Anderson, T. H. (1974). Cloze measures as indices of achievement comprehension when learning from prose. Journal of Educational Measurement, 11, 83-92.

Anderson, T. H., Armbruster, B., & Kantor, R. (1980). How clearly written are children's textbooks? (ERIC Document Reproduction Service No. ED 192-275).

Bates, P. (1984). How to turn your writing into communication. Personal Computing, 8, 84-93.

Bormuth, J. R. (1965). Optimum sample size and cloze test length in readability measurement. Journal of Educational Measurement, 2, 111-116.

Bormuth, J. R. (1966). Readability: A new approach. Reading Research Quarterly, 2, 79-132.

Bormuth, J. R. (1968). Cloze test readability: Criterion reference scores. Journal of Educational Measurement, 5, 189-196.

Bormuth, J. R. (1969). Factor validity of cloze test measures of reading comprehension ability. Reading Research Quarterly, 4, 358-365.

Bruce, B., Rubin, A., & Starr, K. (1981). Why readability formulas fail. IEEE Transactions on Professional Communication, 24, 50-52.

Courtis, J. K. (1987). Fry, Smog, Lix, and Rix: Insinuations about corporate business communications. The Journal of Business Communication, 24, 19-28.

Dale, E., & Chall, J. (1948). A formula for predicting readability. Educational Research Bulletin, 23, 11-20, 37-54.

Davison, A. (1984). Readability: The situation today (Technical Report No. 359). Urbana: University of Illinois, Center for the Study of Reading.

Davison, A., & Kantor, R. (1982). On the failure of readability formulas to define readable texts. Reading Research Quarterly, 17, 187-209.

Ekwall, E., & Shanker, J. (1989). Teaching reading in the elementary school. Columbus, Ohio: Merrill.

Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221-223.

Freebody, P., & Anderson, R. C. (1983). Effects of vocabulary difficulty, text cohesion, and schema availability on reading comprehension. Reading Research Quarterly, 18, 277-294.

Fry, E. (1977). Fry's readability graph: Clarifications, validity, and extension to level 17. Journal of Reading, 21, 242-252.

Gunning, R. (1952). The technique of clear writing. New York: McGraw-Hill.

Haar, J., & Kossack, S. (1990). Employee benefits packages: How understandable are they? The Journal of Business Communication, 27, 185-200.


Hagge, J. (1984). Review essay 1: Whatever will be, will be: Frequency analysis of English usage in business and technical writing. The Journal of Business Communication, 21, 25-36.

Hayes, D. A., & Tierney, R. J. (1982). Developing readers' knowledge through analogy. Reading Research Quarterly, 18, 256-280.

Hocevar, S., & Hocevar, D. (1978). The effect of meaningful versus nonmeaningful material on oral reading errors in 1st through 3rd grade children. Journal of Reading Behavior, 10, 297-299.

Internal Revenue Service. (1988). Statistics of Income Bulletin. Washington, D.C.: U.S. Government Printing Office.

Jenkinson, M. (1957). Selected processes and difficulties in reading comprehension. Unpublished doctoral dissertation, University of Chicago.

Jones, M. J. (1980). A longitudinal study of the readability of the chairman's narratives in the corporate reports of a UK company. Accounting and Business Research, 10, 297-298.

Kaplan, I. T., Carvellas, T., & Metlay, W. (1971). Effect of context on verbal recall. Journal of Verbal Learning and Verbal Behavior, 10, 202-212.

Karlinsky, S. S. (1983). Readability is in the mind of the reader. The Journal of Business Communication, 20, 57-70.

Klare, G. R. (1976). A second look at the validity of readability formulas. Journal of Reading Behavior, 8, 129-152.

Klare, G. R. (1988). In B. Zakaluk & S. J. Samuels (Eds.), Readability: Past, present, and future. Newark, Delaware: International Reading Association.

Langer, J. (1984). Examining background knowledge and text comprehension. Reading Research Quarterly, 19, 240-245.

Means, T. I. (1981). Readability: An evaluative criterion of stockholder reactions to annual reports. The Journal of Business Communication, 18, 25-34.

McConnell, G. R. (1982). Readability formulas as applied to college economic textbooks. Journal of Reading, 26, 14-17.

McKenna, M. C., & Robinson, R. (1980). An introduction to the cloze procedure. International Reading Association.

McNich, G., Kazelkis, R., & Cox, J. A. (1974). Appropriate cloze deletion schemes for determining suitability of college textbooks. In P. L. Nacke (Ed.), Interaction: Research and practice for college-adult reading. National Reading Conference.

Miller, G. R., & Coleman, E. B. (1967). A set of thirty-six prose passages calibrated for complexity. Journal of Verbal Learning and Verbal Behavior, 6, 851-854.

Munter, M. (1986). Using the microcomputer in business communication course. The Journal of Business Communication, 23, 31-42.

Nelson, R. (1987). Word processing: Let's hear it for CAW. Personal Computing, 11, 49-52.


Nesvold, K. J. (1972). Cloze procedure correlation with perceived readability. Journalism Quarterly, 49, 592-594.

Pearson, P. D., Hansen, J., & Gordon, C. (1979). The effect of background knowledge on young children's comprehension of explicit and implicit information. Journal of Reading Behavior, 11, 201-209.

Penrose, J. M., Bowman, J. P., & Flatley, M. E. (1987). The impact of microcomputers on ABC with recommendations for teaching, writing, and research. The Journal of Business Communication, 24, 79-91.

Peterson, J., Peters, N., & Paradis, E. (1972). Validation of the cloze procedure as a measure of readability with high school, trade, and college populations. In F. P. Greene (Ed.), Investigations relating to mature reading. National Reading Conference.

Razek, J. R., & Cone, R. E. (1981). Readability of business communication textbooks—an empirical study. The Journal of Business Communication, 18, 33-40.

Reynolds, R., Taylor, M., Steffenson, M., Shirey, L., & Anderson, R. C. (1982). Cultural schemata and reading comprehension. Reading Research Quarterly, 18, 353-366.

Ribovich, J. K. (1979). The effect of informational background on various reading-related behaviors in adult subjects. Reading World, 18, 240-245.

Roundy, N. (1983). A program for revision in business and technical writing. The Journal of Business Communication, 20, 55-66.

Schroeder, N., & Gibson, C. (1990). Readability of management's discussion and analysis. Accounting Horizons, 4, 78-88.

Schroeder, N., & Gibson, C. (1987). Using microcomputers to improve written communication. CPA Journal, 57, 50-57.

Selzer, J. (1981). Readability is a four-letter word. The Journal of Business Communication, 18, 23-34.

Singer, H., & Donlan, D. (1980). Reading and learning from text. Little, Brown.

Soldow, G. F. (1982). A study of the longitudinal dimensions of information processing as a function of cognitive complexity. The Journal of Business Communication, 19, 185-200.

Sterkel, K. S., Johnson, M. I., & Sjogron, D. D. (1986). Text analysis with computers to improve the writing skills of business communication students. The Journal of Business Communication, 23, 43-61.

Stevens, K. (1982). Can we improve reading by teaching background information? Journal of Reading, 25, 526-529.

Stevens, K. (1980). Readability formulas and the McCall-Crabbs standard test lessons in reading. The Reading Teacher, 33, 413-415.

Stevens, K. (1980). The effect of background knowledge on the reading comprehension of 9th graders. Journal of Reading Behavior, 12, 151-154.

Taylor, W. L. (1956). Recent developments in the use of the cloze procedure. Journalism Quarterly, 42, 48-99.


Voss, J. F., & Bisanz, G. L. (1985). Sources of knowledge in reading comprehension: Cognitive development and expertise in a content domain. In A. M. Lesgold & C. A. Perfetti (Eds.), Interactive processes in reading (pp. 215-239). Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Wedell, A. J., & Allerheiligen, R. (1991). Computer assisted writing instruction: Is it effective? The Journal of Business Communication, 28, 131-141.

Zakaluk, B., & Samuels, S. J. (Eds.). (1988). Readability: Past, present, and future. Newark, Delaware: International Reading Association.

First submission 4/8/91

Accepted by NLR 12/19/91

Final revision 1/30/92