Adapting the Kirkpatrick Model to Technical Communication Products and Services



by Saul Carliner

Saul Carliner is an executive vice president of Fredrickson Communications, Inc. He may be reached at Fredrickson Communications, Inc., 119 North Fourth Street, Suite 513, Minneapolis, Minnesota 55401; via telephone: (612) 339-7970; or by fax: (612) 339-6516.

Training evaluation attempts to put economic value on the work of those who produce knowledge products like training and documentation for a living. With its ultimate focus on business results, the Kirkpatrick model makes a strong contribution in that effort. Its various levels represent a staged effort to measure the economic value of our work, starting with noneconomic data that assesses participants’ perceptions of their satisfaction with a course, working through measures of short- and long-term learning, and finally assessing training’s impact on the bottom line. Figure 1 summarizes the various levels of the model. For all of the limitations that others have identified (Kaufman, 1994; Lincoln and Dunet, 1995), the Kirkpatrick model represents an important step towards quantifying “knowledge” work like ours in ways that accountants can place on the corporate financial books.

One of the Kirkpatrick model’s many advantages is its wide use. According to Training magazine’s annual industry survey, nearly all organizations conduct some of the model’s levels of evaluation (TRAINING, 1995). That means trainers have a common language. In fact, so many human performance technologists use and understand this model that we refer to it in shorthand: Level 1, Level 2, Level 3, and Level 4. These levels provide not only a common language, but a common tool that lets us compare results much like price-to-earnings ratios let businesspeople compare performance among otherwise unlike companies.

On the other hand, many human performance technologists develop technical manuals and other forms of communication for which the Kirkpatrick model does not apply. When we work with those methods of communication, how can we assess the value of our work?

In this article, I propose a model for doing just that. This model adapts the Kirkpatrick model in ways that suit technical communication products and services. This information can help us more effectively price communication products and information design and development services.

Assessing the Value of Our Work

Assessing technical communication products such as users’ guides, service guides, and references has not reached the level of sophistication of assessing training materials. Technical communicators have just completed their first comprehensive study of the value technical communication adds (Redish and Ramey, 1995).

Most of the assessment done so far has focused on the quality of the products rather than their impact on users. Examples include:


- the U-metric, a list of 100-plus characteristics that are associated with the quality of communication products. The presence of these characteristics does not guarantee quality, and the effort of measuring them proved so cumbersome that the client for whom the U-metric was created chose not to use it.

Technical communicators have increasingly tried to assess the usability of their communication products, although most see this as a type of formative evaluation rather than as an integral part of evaluating the effectiveness of technical communication products.

In their landmark study, Redish and Ramey concluded that technical communication products can reduce support costs, eliminate rework, generate sales, and provide several other similar benefits (1995). The financial benefits similarly vary depending on the problems that the communication products were commissioned to address.

Limitations in Evaluating Technical Communications

Although some aspects of the Kirkpatrick model might be useful in evaluating technical communication products, other aspects are not. Consider the limitations of the model at various levels of evaluation.

Level 1: Reaction. Although printed technical communication products such as users’ guides and reference manuals end with forms for readers’ comments, most technical communicators use these forms to ferret out errors in the information, not to learn opinions about the product. In addition, technical communicators do not universally include such comment forms with every communication product. Many human performance technologists distribute “smiley sheets,” and most online communication products do not even provide users with an opportunity to provide feedback.

Level 2: Learning. When we develop training materials, we should measure whether users learned what the material was intended to teach. Most technical communication products, however, are not intended for training. Instead, the information is intended for one of the following:


Figure 2. Kirkpatrick model adapted to assess technical communication products and services.

…no longer need it or are not likely to use it again for a long time and are therefore not expected to retain it. Measuring learning in such instances seems inappropriate. Instead, we should measure users’ ability to perform the tasks described in the communication product.

Level 3: Transfer. Most assessments of users’ ability to perform tasks also evaluate how effectively users can transfer technical information to the task at hand. For “disposable” technical communication products or those intended for occasional use, these two levels can be assessed at once. For technical communication products intended to train or motivate, a separate assessment of transfer is still appropriate.

Level 4: Business results. This is the measurement of the value technical communication products add. It is necessary to begin each project with a business objective; that is, before developing a user’s guide or reference (even a training course), it is necessary to state what the communication product is intended for and how the performance technologist will determine whether the product achieved its goals. For example, if a communication product is supposed to increase revenue by 4 percent, do the business’ measures indicate this? If it is supposed to reduce support costs by 5 percent, did the support costs go down? If not, why not?
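To make that check concrete, here is a minimal sketch in Python of comparing measured results against such objectives. The revenue and support-cost figures are invented purely for illustration; they do not come from the article.

    # Minimal sketch: checking a communication product's business
    # objective against measured results. All figures are hypothetical.

    def percent_change(before, after):
        """Return the percentage change from a baseline value."""
        return (after - before) / before * 100

    # Hypothetical objective: increase revenue by 4 percent.
    revenue_change = percent_change(before=2_500_000, after=2_610_000)
    print(f"Revenue changed {revenue_change:+.1f}% (goal: +4.0%)")

    # Hypothetical objective: reduce support costs by 5 percent.
    support_change = percent_change(before=400_000, after=388_000)
    print(f"Support costs changed {support_change:+.1f}% (goal: -5.0%)")

In this invented case the revenue goal was met (+4.4 percent) but the support-cost goal was not (-3.0 percent), which is exactly the situation that prompts the question “If not, why not?”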

In addition to these limitations, the Kirkpatrick model also fails to assess client satisfaction with the materials produced for them. One of the key reasons for tracking the effectiveness of technical communication products and services is to promote repeat business. By measuring the products’ economic benefits, it is possible to help clients more easily justify the need for our services in the future. Justifying the economic benefit alone does not guarantee future business; the need for technical communication products and services is driven, in part, by needs that are still intangible, at least as expressed by quantifiable measures. We therefore should assess client satisfaction and adjust our services as necessary to increase it.

A Four-Level Model for Assessing Effectiveness

In light of these limitations, the Kirkpatrick model should be adapted to assess technical communication products and services effectively. I therefore suggest adapting it as shown in Figure 2.

Level 1: Assessing user satisfaction. This type of evaluation can assess how users feel about a given communication product. Post-class surveys explore reactions to the instructor, classroom facilities, handouts, and visuals. A similar survey could ask users if they are able to obtain the information they need. Do they feel that the information addresses their questions? How satisfied were they with gathering this information? Often, the way that users feel about a product affects their performance with it and their acceptance of the product, service, or concept it describes.

When collecting data about user satisfaction, it is a good idea to ask questions about several specific issues. First, ask how users feel about the product. Ask them to demonstrate their point in words as well as in numbers: words provide a mirror into their real feelings and numbers provide a measurement to track. Then ask a series of questions about specific aspects of the product to ferret out what drives this overall impression.

Second, ask how users read the product: from beginning to end or just parts that pertain to certain needs? Users’ feelings about a product often depend on the intended use and how well the product is perceived to be designed to fulfill that intention.

Third, ask how users search for information in the product. Where do they look first: the table of contents or the index? It is then possible to design communication products that better meet the users’ searching patterns.

Fourth, ask how easily users can find information in the product. Are they satisfied with the speed at which they find information? This question touches a related issue: the ease with which they perceive finding information. Do they perceive they can find it easily or not? Although users might sometimes find information in what we believe to be a brief time, they might think that they should be able to find it more quickly. This perception colors their overall impression of the product.

Fifth, ask how clear users believe the information to be. Even if editors and technical reviewers certify the information, if users do not perceive the information as clear, they will probably not be satisfied with the product.

Sixth, ask how easily users follow the instructions. If they can understand how to use the product, they are more likely to feel satisfied with it.

Seventh, ask how well users understood the technical subject before and after using the communication product. If


…the level of satisfaction.
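As a minimal sketch of how such satisfaction responses might be tallied in both numbers and words, consider the Python fragment below. The questions, the 1-to-5 scale, and the sample answers are assumptions for illustration; the article prescribes no particular scale or instrument.

    # Minimal sketch of tallying user-satisfaction responses in both
    # numbers and words, echoing the advice above. All data invented.

    from statistics import mean

    responses = {
        "overall impression": [(4, "mostly helpful"), (2, "hard to navigate")],
        "ease of finding information": [(3, "index is thin"), (4, "contents work")],
        "clarity of instructions": [(5, "steps were clear"), (4, "good examples")],
    }

    for question, answers in responses.items():
        ratings = [rating for rating, _ in answers]
        print(f"{question}: mean {mean(ratings):.1f} of 5")
        for rating, comment in answers:
            print(f"  {rating}: {comment}")

Keeping the verbatim comments alongside the numeric means preserves the “mirror into their real feelings” while still producing a measurement to track over time.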

Consider the main objective “copyedit text,” which is intended for professional writers using a particular word processor. Supporting objectives address “checking spelling,” “checking grammar,” and “recognizing when the suggestions of the built-in spelling and grammar checkers are inappropriate.” A usability scenario would ask users to copyedit text in a real-life situation. Figure 5 shows an example of a scenario.

Prepare a guide sheet for the observer instructing him or her to watch whether users perform certain tasks and to record the time needed for the task and any errors. Figure 6 contains a guide sheet for the observer.

Note that in all cases of performance assessment, the assessment tool emerges directly from main tasks that were established before product development began. As with educational evaluation, by testing the main tasks, it is also possible to test the supporting tasks because users must be able to perform the supporting tasks before they can perform the main one.
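A minimal Python sketch of the kind of record such an observer’s guide sheet might yield, assuming the copyedit scenario above (the task names, times, and error counts are invented for illustration):

    # Minimal sketch of recording usability-test observations against the
    # main and supporting tasks defined before development. Data invented.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        task: str        # task from the observer's guide sheet
        seconds: float   # time the user needed
        errors: int      # errors the observer recorded
        completed: bool  # whether the user finished the task

    observations = [
        Observation("copyedit sample text", seconds=340.0, errors=2, completed=True),
        Observation("check spelling", seconds=95.0, errors=0, completed=True),
    ]

    for obs in observations:
        status = "completed" if obs.completed else "did not complete"
        print(f"{obs.task}: {status} in {obs.seconds:.0f}s, {obs.errors} error(s)")

Because each record names a task established before development, the observations roll up directly to the main objectives rather than to incidental behavior.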

Level 3: Assessing client performance. This type of evaluation measures the value technical communication products add; that is, the extent to which a product is able to meet its business objective. To perform this type of evaluation, identify quantifiable business goals and express them in financial terms before developing the product. Business objectives identify the extent to which a communication product helps an organization realize one of the following benefits:



…
- compliance with corporate, industry, or government regulations.

To assess client performance successfully, it is necessary to identify a business measure in one of these areas when developing the evaluation plan. Follow the changes in that measurement from before the release date to long afterward because most business goals indicate a change over time, such as an increase in revenue or a decrease in expenses. Unless you track the trend in these measurements before you introduce the product, you cannot assert with any credibility that introducing the product caused the change in business performance. Similarly, indicate other events that occurred while you were tracking the business measure that might have influenced the changes. Track changes for longer than one measurement period to assess the product’s impact.
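One minimal sketch of that kind of tracking, assuming monthly support costs as the business measure (all numbers invented): fit a trend to the pre-release months, then compare post-release actuals against what the trend projected, so that a change is not credited to the product before the existing trend has been ruled out.

    # Minimal sketch: compare a tracked measure (hypothetical monthly
    # support costs) against its pre-release trend.

    def fit_line(ys):
        """Least-squares fit of y against x = 0, 1, 2, ...; returns (slope, intercept)."""
        n = len(ys)
        xs = range(n)
        x_mean = sum(xs) / n
        y_mean = sum(ys) / n
        slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
                 / sum((x - x_mean) ** 2 for x in xs))
        return slope, y_mean - slope * x_mean

    pre_release = [40_000, 41_200, 42_100, 43_000, 44_200, 45_100]  # months 0-5
    post_release = [45_000, 44_100, 43_200]                         # months 6-8

    slope, intercept = fit_line(pre_release)
    for month, actual in enumerate(post_release, start=len(pre_release)):
        projected = slope * month + intercept
        print(f"month {month}: projected {projected:,.0f}, actual {actual:,.0f}")

In this invented example, costs were already rising about $1,000 a month before release; the product’s contribution is the gap between the projected continuation of that rise and the actual post-release figures, not the raw month-to-month change.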

For example, if the objective is to keep support costs to the projected level, begin tracking support costs immediately after the project begins. Were support costs increasing before the product was released? If so, how does this affect the measurement? If not, is the new product likely to affect a situation that is already under control? Or is its purpose to ensure that the situation stays under control? Continue to track support costs through the entire year after the product is published. Note the trend and other …
