
The Value of Evaluation: Making Training Evaluations More Effective

An ASTD Research Study

© 2009 by the American Society for Training & Development. All rights reserved.

No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to ASTD Research, 1640 King Street, Box 1443, Alexandria, VA 22313-1443.

Ordering Information

Research reports published by ASTD can be purchased by visiting our website at store.astd.org or by calling 800.628.2783 or 703.683.8100.

ASTD Product Code: 790907
ISBN-10: 1-56286-563-3
ISBN-13: 978-1-56286-563-4

ASTD Editorial Staff

Director of Content: Dean Smith
Manager, ASTD Research: Andrew Paradise
Manager, ASTD Press: Jacqueline Edlund-Braun
Senior Associate Editor: Justin Brusino
Interior Design and Production: Kristi King
Cover Design: Rose Mcleod and Kristi King


| Contents |

Foreword
Executive Summary
Introduction
Section I: What Evaluation Techniques Are Companies Using?
Section II: How and When Do Organizations Conduct Evaluations?
Section III: What Are the Barriers to Training Evaluation?
Section IV: What Are Companies Spending on Training Evaluation?
Section V: How Can Organizations Improve Training Evaluation?
Conclusion and Policy Recommendations
References
Appendix: The Value of Evaluation Survey Overview
About the Authors and Contributors
About the Contributing Organizations


| Foreword |

Measuring the impact of learning continues to be one of the most challenging aspects of the learning function. The tough economy demands that business leaders scrutinize costs to find even greater efficiencies, but it can be difficult to show hard dollar savings for areas like training and development that have so many intangible processes. Companies employ myriad strategies to identify and quantify the results of training, but most are not satisfied with their evaluation efforts.

A 2006 ASTD/IBM report titled C-Level Perceptions of the Strategic Value of Learning noted that while chief learning officers and chief executive officers have different needs with regard to learning evaluation, at a high level they agree learning is strategically valuable. Both groups also agreed that isolating and measuring learning's financial contribution to business are difficult, and often the perceptions of stakeholders (employees, business unit leaders, and executives) are a key indicator of learning's success. Both CLOs and CEOs recognized the need for strong governance processes for planning, allocating, and managing learning investments, and that the efficiency of the learning function can be increased by streamlining and standardizing processes, leveraging technology, and right-sourcing the function. Agreement on these broad topics is important because it means common ground exists on which to build a better, more effective learning evaluation process, which will in turn positively affect business goals.

This study takes a close look at the value of learning. The Value of Evaluation: Making Training Evaluations More Effective explores the complex issue of learning evaluation, the techniques being used, barriers to effective implementation, and strategic use of learning metrics. With only about one-quarter of the survey respondents telling us they get a solid bang for their buck from their training evaluation efforts, I believe the best practices in this report can help many organizations become much more proficient and strategic in evaluating learning.

I hope you will use the information in this report to take action as you make learning evaluation a process that strengthens your organization.

Tony Bingham
President and CEO
ASTD


| Executive Summary |

The pursuit of excellent learning evaluations continues, but so far few organizations think they've mastered them, according to our study. In fact, only about one-quarter of respondents to our Value of Evaluation Survey said they agreed that their organization got a solid bang for the buck from its training evaluation efforts. Reactions were only slightly better when respondents were asked if their learning evaluation techniques helped meet learning and business goals.

These findings are in line with some other recent studies, including a 2009 survey that Chief Learning Officer magazine conducted among its Business Intelligence Board. In that survey, 41 percent of respondents said they were dissatisfied with their organization's learning measurement versus 35 percent who said they were satisfied (Anderson, 2009). In a 2007 survey conducted by Expertus and Training Industry Inc., 77 percent of respondents said their evaluation efforts fall short in providing meaningful information for business planning (Radhakrishnan, 2008).

But organizations are not giving up on successful measurement of the learning function. They continue to explore ways to communicate and document the value of the training and development they provide to employees. Our study finds that most companies do at least some form of evaluation, although they may be unsure how to go about it and not sure what to do with the results. We believe the data in this report can help many firms become much more proficient in these areas.

What Evaluation Techniques Are Companies Using?

The five-level Kirkpatrick/Phillips model of learning evaluation is the most common practice. The five levels include participant reaction (Level 1), level of learning achieved (Level 2), changes in learner behavior (Level 3), business results derived from training (Level 4), and the ROI from training (Level 5). Ninety-two percent of respondents said they measure at least Level 1 of the model. But the use of the model drops off dramatically with each subsequent level. This tendency suggests that managers may not fully grasp how the model should be used. Many experts recommend that organizations evaluate at all five levels but trim the number of programs that get evaluated as they move up the levels. Instead, it appears that organizations are evaluating at the first few levels and then dropping off completely. Companies also employ the Brinkerhoff Success Case Method, which can be described as evaluation studies with successful trainees. About half of respondents use some version of this method, which highlights individual training success stories to communicate the value of learning.

What Are the Barriers to Effective Evaluation?

Respondents cite numerous barriers to the evaluation of learning. For example, metrics such as business results and return-on-investment (ROI) are sometimes seen as too complex and time-consuming to calculate. Difficulty in isolating training as a factor that affects behaviors and results also impedes learning evaluation.

Perhaps most disturbing is that many training professionals claim that leadership isn't actually interested in training evaluation information. This is worrisome in an age when employee skills are more critical than ever, being one of the few differentiators among businesses in a global economy. Learning professionals need to gain a better understanding of the roots of such apathy toward evaluation.

What Are Companies Spending on Training Evaluation?

Depending on their size, companies can spend anywhere from a few thousand dollars to tens of millions of dollars on training annually. About 5.5 percent of that gets spent on evaluating the training: a little more among small companies and a little less among large companies. Most of the money is spent on internal resources, but the smaller the company, the higher the external expenditures. Organizations tend to spend the largest share of their evaluation budget on Level 1 (reaction) evaluations and then less on subsequent levels. On average, companies spend half of their evaluation budget measuring the reactions of participants. The question for learning professionals is whether this is the best use of their limited resources. Our study suggests otherwise.

How Can Organizations Improve Training Evaluation?

This study uncovers how companies use information gathered from learning evaluations. It also identifies which techniques are most strongly linked to successful evaluation and, more generally, better market performance.

Our study found that, in most cases, using Kirkpatrick/Phillips (K/P) levels is associated with greater success in the area of learning metrics. That is, if respondents said their organizations used a given K/P level of evaluation, then they were also more likely to give their organizations a higher score in terms of learning evaluation success. The exception to this rule is when organizations only measure Level 1.

This study also looked at specific practices. It found, for example, that firms using evaluation to gauge whether employees are learning what is required or to calculate learning's effects on business outcomes are far more likely to view their evaluation efforts as successful and to have better market performance. Other important actions include making sure learning positively influences employee behavior and demonstrating the value of learning to others in the organization. The most valuable use of evaluation, however, is to improve overall business results. Companies that say they do this to a high extent tend to see results, as this action was the most positively correlated to market performance.

The key findings suggest that today's organizations do not need to settle for learning evaluations that are done as more of a ritual than as a business-improvement strategy. Good evaluation techniques are available, and they can be used to make the whole organization stronger.


| Introduction |

In a difficult economy, business leaders begin to scrutinize every aspect of their organization with an eye toward costs. From the supply chain to real estate to office supplies, companies want to make sure they are getting their money's worth, and training is no exception. A large company can spend tens of millions of dollars every year on training and developing employees, yet very rarely do leaders get a clear picture of what they are getting in return. As with most "soft" issues, it can be difficult to show tangible results from such intangible processes. Companies employ various strategies to identify and quantify the results of training, and this study examines some of the more common ones, how they are used, what prevents evaluation success, and what could lead to future successes.

Satisfaction with Learning Metrics and the Evaluation Success Index

When asked if their company got a solid bang for the buck from learning metrics, more than one-third of the respondents remained neutral, neither agreeing nor disagreeing. Slightly fewer said they disagreed, while only about one-quarter said they agreed that their organization did indeed get a good return on its investment. Reactions were slightly more positive when respondents were asked if their learning evaluation techniques helped meet learning and business goals. Specifically, more than one-third of respondents (36.5 percent) reported that their learning evaluation techniques help them meet their organization's business goals to a high or very high extent. Slightly more (41.3 percent) reported that their evaluation techniques help them meet their organization's learning goals to a high or very high extent. These results are in line with a 2009 survey Chief Learning Officer magazine conducted among its Business Intelligence Board. In that survey, 41 percent of respondents said they were dissatisfied with their organization's learning measurement, and 35 percent said they were satisfied (Anderson, 2009).

These findings illustrate that there's much more progress to be made in the area of learning evaluation in most organizations. When only about a quarter of participants say their companies get a good return on their learning metrics investments, then something is deeply wrong. One of the most important purposes of evaluations should be to determine what is most and least effective and then to invest more resources in what's most effective. Evaluation is a feedback tool intended not only to help employers meet their learning goals but also to make learning itself more efficient. With these findings in hand, learning professionals in many companies clearly need to take a hard look at their evaluation programs. If they don't think they're getting a solid return on investment, then they should conduct a rigorous analysis of how, in the future, this can be achieved.

[Figure 1: Please state the degree to which you agree with the following statements on evaluation practices. Our evaluation techniques help us meet our learning goals: 41.3%. Our evaluation techniques help us meet our business goals: 36.5%. We get a solid bang for the buck from our learning metrics: 25.6%.]

Responses to these three items were combined to create an Evaluation Success Index (ESI), which was used to correlate against the specific evaluation practices. Readers will see this index frequently cited in this study, along with the Market Performance Index (MPI). The MPI was determined by asking a series of questions about respondent organizations' overall market performance over the past five years. The results of these indices were mapped to other survey responses.


Describing Learning Evaluation Methods

While there are various ways to evaluate the effectiveness of learning within an organization, most methods tend to fall somewhere into the range of evaluation techniques making up the Kirkpatrick/Phillips model. That model consists of five levels of evaluation and was developed by Donald L. Kirkpatrick, professor emeritus at the University of Wisconsin and author of Evaluating Training Programs: The Four Levels, who developed Levels 1 through 4, and Jack Phillips, author of The Human Resource Scorecard, who developed Level 5. The first four levels measure (1) the reaction of participants, (2) the level of learning achieved, (3) changes in learner behavior, and (4) business results derived from training. Level 5 measures the return on investment (ROI) that learning programs deliver.

Another evaluation technique that has gained headway in recent years is the Brinkerhoff Success Case Method. That method, developed by Robert O. Brinkerhoff, professor of education at Western Michigan University and author of several human resource development books including The Success Case Method, entails identifying successful learners and interviewing them to find out what made the learning experience work for them and what they achieved as a result. It can also be applied to unsuccessful candidates to determine what went wrong. These case studies can then be used to communicate the value of learning throughout the organization, identify what works and what doesn't, and help improve learning programs.

When study respondents were asked about other types of learning evaluation techniques, their answers included a variety of techniques, including

• Action learning projects
• ADDIE (assess, design, develop, implement, evaluate) model
• Alignment measures
• Balanced scorecards
• Benchmarking, often against local metrics, ASTD metrics, or other industry standards
• Bloom's taxonomy
• Cost avoidance
• Customer/client assessments
• Employee engagement


• Evaluation capacity building
• Evidence-based analysis
• Executive satisfaction with learning being delivered to business units
• Impact maps
• Impressions of trainers
• Interviews and focus groups
• Leading and lagging indicators of business trends
• Linkage model
• Relevance index
• Return on expectations
• Seat count and number of hours delivered
• Stakeholder satisfaction
• Technical certification training.

It was beyond the scope of this study to investigate every learning evaluation technique in a detailed manner, and there can be debate about how many of these other methods could legitimately fall under the umbrella of the Kirkpatrick/Phillips model. Still, we found it enlightening to see how many other techniques participants view as available to them, and there is clearly room for future studies that go into greater detail on these topics.


[Sidebar: A note about correlation data]


| Section I |
What Evaluation Techniques Are Companies Using?

The majority of the respondents (91.6 percent) said their companies evaluate at least some of their learning programs at Level 1, the reaction of the participants. This makes sense, as it is typically the easiest measurement to take. Whether it is an online survey given after completing a web-based course or "smile sheets" handed out following classroom instruction, it is common practice to assess the experiences of learners following training. Companies can easily gauge reactions and determine which courses employees enjoy and which ones they don't.

The usage of other types of evaluation drops off as we move up the K/P model levels. A little more than four out of five survey respondents said their organizations evaluate at Level 2, where companies try to find out what knowledge was actually gained, which skills were developed or improved, and what employee attitudes were changed by the training. The difficulty in obtaining this information may explain the drop in usage, but the value that participants attach to this type of evaluation helps explain why the drop isn't larger.

Figure 2 | Percentage who use these Kirkpatrick/Phillips levels in their organizations, and percentage who attribute a high or very high level of value to those levels

Level | Use the corresponding level to any extent | Say this level has high or very high value*
Reaction of participants | 91.6% | 35.9%
Evaluation of learning | 80.8% | 54.9%
Evaluation of behavior | 54.5% | 75.0%
Evaluation of results | 36.9% | 75.0%
Return on investment | 17.9% | 59.4%
None of the above | 4.1% | n/a

* Only among those who say they use this level to some extent

Level 3 brings more complexity and another drop in usage. This level evaluates the change in employee behavior that the training achieved. Only 54.6 percent of respondents said their companies evaluate at this level, signifying its greater level of difficulty. According to Kirkpatrick, this difficulty stems from the unpredictability of behavior changes. First, employees must be presented with an opportunity to use the new behaviors. Second, it is impossible to know when the change will occur, if at all. Even if the change does occur, says Kirkpatrick, an employee may or may not continue the new behavior. All of this conspires to make behavior changes somewhat difficult to measure.

Kirkpatrick's final level, Level 4, measures what is arguably the most important metric: results. Yet only 36.9 percent of respondent companies evaluate at this level to any degree. Again, the difficulty of such measures may be one of the reasons, although we could argue that business results are easier to measure than employee behaviors. Another possible reason is that, according to some experts,


companies should measure learning via Levels 1 through 3 before moving on to Level 4. By then, however, many companies will have stopped bothering to evaluate. Another reason is that it's simply hard to prove the relationship between business results and training. Businesses are complex, multidimensional entities, and it's often difficult to tease out cause and effect. It becomes virtually impossible to prove beyond a shadow of a doubt that a learning program has been responsible for better business results. The best many businesses can hope for is that a preponderance of evidence points to the idea that training produces better business results.

While some would argue that Kirkpatrick's Level 4 includes the idea of return on investment, or ROI, Jack Phillips proposed that the word "results" does not go far enough. He put forth Level 5, dedicated solely to ROI. Our study shows that Level 5 gets the least participation, with only 17.9 percent of respondent companies using it to any extent.

With ROI for all business lines being so important to senior leadership, why is it not tracked more for learning programs? First, it is probably the most difficult aspect of learning to measure. It can be a difficult process to translate the value of a training program into dollars and cents. There are, of course, numerous calculations that can be used to determine ROI, including a cost-benefit analysis:

ROI = (Benefits / Costs) × 100.

A modified alternative is

ROI = ((Benefits − Costs) / Costs) × 100.

There's also a forecasted calculation:

ROI = ((Benefits − Costs) / Costs) × 100 × %A × %P,

where %A is the percentage of learners who are expected to use the training and %P is the percentage of employees who received the training (Murray and Effendioglu, 2007, pp. 374-375; Phillips, Pulliam, & Wurtz, 1998).

These and other, more complicated equations such as internal rate of return and net present value can be intimidating, especially when it can be so difficult to identify what exactly to put in the "Benefits" spot. But these equations can be used to determine the benefits of training over time, allowing companies to calculate training ROI similarly to the way in which they calculate ROI for more tangible assets ("Is Training Worth It?", 2008). According to Jack Phillips, data collected in Level 4 through follow-up questionnaires, program assignments, action plans, and performance monitoring can all be converted into monetary data (Phillips, Pulliam, & Wurtz, 1998). Designing training programs with traceable outcomes is the key to being able to translate training benefits into dollar amounts that can be used in any given equation (Platt, 2008).

It was somewhat surprising to find that the use of the Kirkpatrick Levels 1-4 is more prevalent at larger organizations but that the use of Level 5 is more likely at smaller companies. It's possible that the data is easier to measure or less time-consuming to gather for a smaller company and is therefore a more manageable process than for a larger firm.

Companies Value Some Levels More Than Others

As was shown in figure 2, the degree of usage of a K/P evaluation level does not tell us much about its perceived value. Although Level 1 is the most commonly used type of evaluation, it is not the most likely to be seen as having high or very high value. In fact, only 35.9 percent of respondents whose companies use Level 1 evaluation said it had high or very high value.

By comparison, 54.9 percent said Level 2 had high or very high value, and three-quarters said the same about Level 3 and Level 4. This puts Level 1 at the bottom of the value chain for the four Kirkpatrick levels. It should be noted that this data pertains only to those respondents whose organizations actually use these levels of learning evaluation.

Although these findings might strike some as puzzling, they make a certain sense. There is obviously more value in knowledge gained, behaviors changed, and results achieved than in participants' reactions. What isn't as clear is why more companies aren't evaluating more programs at these higher levels.

Phillips' ROI-based level was seen as having high or very high value by 59.4 percent of respondents, counting only those companies that measure it. By contrast, only 8.6 percent went so far as to say ROI carried no value at all, the highest for any level.

Figure 3 | In instances where you use it, how much value do you think each of the following types or levels of evaluation has for your organization?

Level | No value | A little value | Some value | High value | Very high value
Reaction of participants | 1.2% | 14.2% | 48.6% | 22.8% | 13.1%
Evaluation of learning | 0.5% | 4.2% | 40.4% | 38.1% | 16.8%
Evaluation of behavior | 1.3% | 2.8% | 21.0% | 46.4% | 28.6%
Evaluation of results | 2.8% | 3.4% | 18.8% | 35.7% | 39.3%
Return on investment | 8.6% | 6.0% | 25.9% | 26.7% | 32.7%

We need to ask why ROI isn't seen as having as much value as Levels 3 and 4. One theory is that it simply isn't used widely enough for it to be seen as having high value. That is, as the next section will show, even in organizations where it is used, it is used for only a small fraction of learning programs (though not out of line with Kirkpatrick's recommendations). Given this, it may become difficult for firms to assign it greater value. It's also possible that executives, who are accustomed to seeing ROI attached to more measurable business practices, are sometimes skeptical of seeing it assigned to learning-related practices.

The Type of Evaluation and Perceived Success

Our study found that, with the exception of Level 1, using Kirkpatrick/Phillips levels of evaluation is always associated with higher Evaluation Success Index (ESI) scores. That is, if respondents said their organizations used a given K/P level of evaluation, then they were also more likely to report learning evaluation success.


The exception to this rule is at Level 1. More than nine out of 10 respondents say their organizations gather information on the reaction of participants to learning programs, but this type of evaluation is not significantly related to success. Levels 2 through 5 are, however, strongly correlated with success. What's more, three of the levels are statistically linked with market performance scores as well (see figure 4).

Correlation is not the same as causation, of course, but the bottom line is that using most K/P levels and types of learning metrics is linked with self-reported evaluation success.

Figure 4 | Percentage of organizations using Kirkpatrick/Phillips levels, and significant correlations between the usage of these levels and scores on the ESI and MPI

Level | Average percentage | Correlation with Evaluation Success Index | Correlation with Market Performance Index
Reaction of participants | 91.6% | |
Evaluation of learning | 80.8% | ** | **
Evaluation of behavior | 54.6% | ** | *
Evaluation of results | 36.9% | ** | *
Return on investment | 17.9% | ** |

ANOVA results show that the mean responses are significantly different (* at the 0.05 level; ** at the 0.01 level).

Evaluating Different Types of Training

As noted above, those who responded that their organizations use the Kirkpatrick/Phillips levels to evaluate training do not typically use them for all of their programs. In fact, respondents in organizations that use Level 1 evaluations say that, on average, only 78 percent of all learning programs use such evaluations. What's more, from there the percentage of programs evaluated drops off steeply as the level increases. Respondents in organizations that use Level 2 evaluation said such evaluation is used for about half of all learning programs. The percentage for Level 3 programs is 25.2 percent.

Such results are somewhat out of line with Kirkpatrick's own recommendations, which target measuring 100 percent of programs at Level 1, 60 percent at Level 2, 30 percent at Level 3, 10 percent at Level 4, and 5 percent at Level 5 (ASTD, 2005). Our respondents were nowhere near 100 percent for Level 1, and they were well short of expectations for Level 2 as well. Program evaluation for Levels 3, 4, and 5 was more in line, for companies that used these levels.

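Comparing the survey averages against Kirkpatrick's recommended coverage targets is simple arithmetic. A small sketch using the figures quoted above (the observed values are the averages reported for all learning programs in figure 5):

# Observed share of programs evaluated at each K/P level (survey averages)
# versus Kirkpatrick's recommended coverage targets (ASTD, 2005).
observed = {1: 78.4, 2: 48.9, 3: 25.2, 4: 15.4, 5: 7.1}
recommended = {1: 100, 2: 60, 3: 30, 4: 10, 5: 5}

for level in sorted(observed):
    gap = observed[level] - recommended[level]
    print(f"Level {level}: observed {observed[level]:.1f}% "
          f"vs recommended {recommended[level]}% (gap {gap:+.1f})")
# Levels 1 and 2 fall short of the targets; Levels 4 and 5 actually exceed them.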



Results for individual programs such as leadership development and technical learning were similar to the overall results, but companies seem to evaluate sales programs differently. Organizations are far less likely to evaluate employee reaction to sales training when compared with other types of learning programs. But they are slightly more likely to evaluate the ROI. This may be because ROI for sales training is easier to determine. If sales go up, it is likely that the training was effective.

The method in which training is delivered can be a strong determinant for whether and how that training will be evaluated. While more than 80 percent of live classroom training programs are evaluated by employee reaction, only 52 percent of e-learning programs are measured at this level. In fact, classroom programs were more likely to be evaluated at every level than e-learning programs. It could be that the time, effort, and money needed to produce classroom programs warrants greater scrutiny, while e-learning is sometimes cheaper to produce and requires less of the learner's time. Companies also use far more classroom programs, with an average of 70 percent of learning being delivered this way, while an average of 30 percent is delivered electronically.

Still, we remain somewhat puzzled by the relative lack of evaluation among technology-based methods of learning. It would seem a relatively simple matter to deliver a Level 1 evaluation to those who use such systems. And, by comparing evaluation data from technology-based learning systems with data from other systems (such as

[Figure 5: What percentage of your learning programs is evaluated at each level of the Kirkpatrick/Phillips five-level evaluation model? For all learning programs, the averages are 78.4% (Level 1), 48.9% (Level 2), 25.2% (Level 3), 15.4% (Level 4), and 7.1% (Level 5). Sales programs show the pattern described in the text: 54.0%, 36.7%, 23.8%, 17.7%, and 9.9%. The figure also reports separate columns for leadership development programs and technical programs. Percentages are based on the group of respondents that use these Kirkpatrick/Phillips levels to any extent.]


performance management, customer relationships, or sales), companies might be able to more easily find correlations between training program successes and other types of business or behavioral results. In short, learning professionals should check to see if there are opportunities being lost in regard to the evaluation of technology-based learning.

The Brinkerhoff Success Case Method

Another popular technique used to evaluate training is the Brinkerhoff Success Case Method. This method involves identifying likely success cases and then interviewing those individuals to verify that they are a true case of learning success. If they are, companies learn how the person used the training and what results were achieved. These stories can then be disseminated throughout the company, highlighting the positive effect of the learning experience. The method can also be used to identify non-success stories, allowing learning professionals to discover what is not working. We specifically asked respondents if their organizations conducted evaluation studies or interviews with successful trainees, though we did not name the Brinkerhoff Success Case Method. The study found that almost half of respondents said they did, with several more using an "other" selection as a chance to write in "Brinkerhoff" or "Success Case Method" in a previous question.

[Figure 6: What percentage of your delivery methods is evaluated at each level of the Kirkpatrick/Phillips five-level evaluation model? Live classroom: 80.9% (Level 1), 49.5% (Level 2), 24.7% (Level 3), 13.6% (Level 4), 6.1% (Level 5). Technology-based learning: 52.0%, 43.4%, 14.6%, 9.6%, 5.8%. Percentages are based on the group of respondents that use these Kirkpatrick/Phillips levels to any extent.]


According to Brinkerhoff, this method can be used to find out how many employees are using the training successfully, what the business value of those successes is, and how much value from the training has gone unrealized (Brinkerhoff, 2006). Among the companies that use this method, however, we found that many do not use the evaluation studies the way Brinkerhoff intended.

Successful case studies should be disseminated throughout the company. Also, the results of both successful and unsuccessful case studies should be used to determine what factors impede or facilitate learning. And finally, the results of the Brinkerhoff Success Case Method should be used to improve learning programs.

This study found that, among those who said their organization has conducted an evaluation interview with successful trainees, just three-quarters said the interviews have helped them develop more effective learning services, only 59.7 percent said they disseminate the positive stories throughout the company, and only a little more than a third use the studies to identify factors that enhance or impede business impact.

Figure 7 | Has your organization ever conducted an evaluation study (e.g., interviews) with successful trainees? Yes: 47.7%. No: 52.3%.


Therefore, although the Brinkerhoff approach does, in fact, seem to be influential as a learning evaluation technique, it apparently is not used as systematically as its creator would advocate. This finding bears a resemblance to the data about the Kirkpatrick/Phillips model in that many organizations apparently use components of the model without using it in the kind of integrated manner that learning experts would recommend.

We also looked at the potential effectiveness of the Brinkerhoff Success Case Method. Using an ANOVA analysis, we compared the mean scores of respondents who answered that they had conducted an evaluation study with successful trainees against respondents who did not. The group that answered they had done so had a mean score of 3.2 on the Evaluation Success Index, compared to 2.8 for the "no" group. This difference was significant, which means that those respondents who have conducted evaluation studies with successful trainees report higher ESI scores compared with those who did not use the success case method.

[Figure 8: In regard to those evaluation studies of successful trainees, please state whether your organization has taken the following actions. We have developed more effective learning services as a result: 76.5%. We have disseminated the positive stories throughout the company: 59.7%. We have used the studies to identify factors that enhance or impede business impact: 36.5%.]


| Section II |
How and When Do Organizations Conduct Evaluations?

There are several points at which a company can measure changes in job behavior (that is, Level 3 of the Kirkpatrick/Phillips model), and organizations often take measurements at more than one time. A majority (55.8 percent) of respondents whose organizations use Level 3 evaluations say they take measurements between two weeks and two months after the training is administered, and 51.7 percent say measurements occur more than two months after the learning event.

More than one-third take this measurement within the first two weeks of training, which is somewhat surprising because Kirkpatrick recommends that ample time be given for employees to have an opportunity to use the behavior (Kirkpatrick, 2007). It is also surprising that only about 40 percent of the companies that evaluate at Level 3 take a measurement prior to training. According to Kirkpatrick, measuring the behaviors of a group of employees prior to training creates a control group against which further measurements can be compared (Kirkpatrick, 2007).

[Figure 9: If your organization uses Level 3 learning metrics (that is, the evaluation of behaviors), when does it regularly take measurements? Prior to the learning event: 39.3%. Immediate post-training evaluation (within two weeks of the learning event/course): 37.9%. Short-term post-training evaluation (two weeks to two months after the learning event/course): 55.8%. Long-term post-training evaluation (more than two months after the learning event/course): 51.7%.]

    There are several points at which a company canmeasure changes in job behavior (that is, Level 3o the Kirkpatrick/Phillips model), and organizationso ten take measurements at more than one time. A major-ity (55.8 percent) o respondents whose organizations useLevel 3 evaluations say they take measurements betweentwo weeks and two months a ter the training is administered,and 51.7 percent say measurements occur more than twomonths a ter the learning event.

    More than one-third take this measurement within therst two weeks o training, which is somewhat surprising

    because Kirkpatrick recommends that ample time begiven or employees to have an opportunity to use thebehavior (Kirkpatrick, 2007). It is also surprising thatonly about 40 percent o the companies that evaluate atLevel 3 take a measurement prior to training. Accordingto Kirkpatrick, measuring the behaviors o a groupo employees prior to training creates a control groupagainst which urther measurements can be compared(Kirkpatrick, 2007).

    The most common way to measure changes in job behavioris with ollow-up surveys o participants, used to a highor very high extent by 31.0 percent o respondent companies.These surveys are also signi cantly correlated with theEvaluation Success Index (ESI). Supplemental statisticalanalyses also identi ed ollow-up surveys as a good strategyto drive evaluation success.
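The multiple regression mentioned here checks whether performance record monitoring still predicts evaluation success once other follow-up practices are taken into account. A minimal sketch of that kind of analysis using the statsmodels library; the data is randomly generated and the model specification is an assumption, since the report does not publish either:

# Sketch of a multiple regression: does performance record monitoring
# predict ESI scores after controlling for other Level 3 practices?
# All data below is simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
surveys = rng.integers(1, 6, n)            # 1-5 usage ratings per practice
action_plans = rng.integers(1, 6, n)
record_monitoring = rng.integers(1, 6, n)

# Simulate an ESI that partly depends on record monitoring and surveys.
esi = 2.0 + 0.15 * record_monitoring + 0.1 * surveys + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([surveys, action_plans, record_monitoring]))
model = sm.OLS(esi, X).fit()
print(model.summary())  # inspect the coefficient and p-value for each practice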



[Figure 10: To what degree does your organization use the following approaches for measuring the behavior and the application or transfer of information (that is, Level 3 of the Kirkpatrick/Phillips model)? Percentage using each approach to a high or very high extent, with its correlation to the Evaluation Success Index: follow-up surveys of participants, 31.0% (0.21**); action plans, 26.9% (0.21**); performance record monitoring, 24.3% (0.19**); observation on the job, 23.9% (0.17**); follow-up focus groups, 9.1% (0.23**). Four further approaches, including supervisor surveys and interviews with trainees and their supervisors, fall between 15.2% and 18.5%, with correlations from 0.14* to 0.25**. * Correlation is significant at the 0.05 level (2-tailed); ** significant at the 0.01 level (2-tailed).]

[Sidebar: If an organization must choose, which approaches work best for Level 3 measurement?]

Measuring Program Impact

How do organizations most commonly measure the results of learning, that is, Level 4 evaluation? The most frequently used metric is customer service, used to a high or very high extent by 38.9 percent of respondents' organizations. Measuring learners' perceptions of program impact is also common, as is measuring proficiency and competency levels.

The least likely metric is actual business outcomes, which nearly a quarter of respondents said they do not measure at all in terms of learning evaluation. Only 22.4 percent said they use this approach to a high or very high extent. About 18 percent also said they do not use productivity metrics at all to evaluate the impact of learning programs, and just 26.1 percent use such metrics to a high or very high extent.

Such findings strike us as alarming. After all, one of the primary, if not the primary, goals of learning experiences is to positively influence business outcomes. Everything else, from boosting retention and engagement to ensuring workers have solid skill sets, is generally intended to increase business results.




Indeed, we were not surprised to find that a multiple regression analysis showed that actual business outcomes was one of three Level 4 outcomes to explain a sizable portion of success when it comes to learning evaluations. If companies have to choose three approaches to focus on in terms of Level 4, we recommend actual business outcomes, learner/employee perceptions of impact, and the proficiency/competency levels of learning recipients.

Having said this, we should note that each of the Level 4 approaches we listed in the survey is significantly correlated to the ESI as well as the Market Performance Index (MPI), with the exception of supervisor perceptions for the MPI. Employee satisfaction was the metric most highly correlated with market performance. None of them is likely to be a waste of time. And a survey of ASTD's BEST Award-winning organizations found that those firms use employee and customer satisfaction, quality of products and services, cycle time, retention, revenue, and overall productivity as measures for effective learning (Murray & Effendioglu, 2007).

[Sidebar: If an organization must choose, which approaches work best for Level 4 measurement? The recommended approaches are learner/employee perceptions of program impact, actual business outcomes (e.g., revenues, sales), and proficiency/competency levels.]

[Figure 11: To what degree does your organization use the following approaches to measure program impact or results (that is, Level 4 of the Kirkpatrick/Phillips model)? Percentage using each approach to a high or very high extent, with its correlation to the Evaluation Success Index: customer satisfaction, 38.9% (.33**); employee satisfaction, 37.0% (.25**); learner/employee perceptions of impact, 36.3% (.33**); proficiency/competency levels, 33.0% (.35**); productivity data, 26.1% (.31**); actual business outcomes (e.g., revenues, sales), 22.4% (.31**); two further metrics at 31.4% (.29**) and 24.8% (.27**). ** Correlation is significant at the 0.01 level (2-tailed).]


Holding Managers Accountable Increases Evaluation Success

Supervisors play a key role in training evaluation, but not many companies use them to their full potential. In fact, a minuscule 11.4 percent of respondents say their organizations hold managers accountable for tracking pre- and post-training performance to a high or very high extent. But organizations that do this are much more likely than others to say their learning evaluation systems are successful. Therefore, this uncommon practice may well help organizations boost the performance of their learning evaluation systems.



It is also important for supervisors to set goals with employees prior to training, yet more than a quarter of participants said that supervisors are not responsible for this, and just 17.4 percent said their companies hold managers responsible for this to a high or very high extent.

Companies were more likely to make supervisors responsible for giving employees the opportunity to use the knowledge gained from training, but even here the low scores are surprising. Only 22.9 percent did this to a high or very high extent, raising the question of whether training activities are simply squandered in many of today's organizations. Firms in which managers are, in fact, held accountable for this are more likely than others to enjoy higher market performance.

Figure 12 | To what extent are supervisors held accountable for the following?

Action | Not at all | Small extent | Moderate extent | High extent | Very high extent | Correlation with ESI | Correlation with MPI
Setting goals with employees prior to training | 26.7% | 33.7% | 22.3% | 13.3% | 4.1% | .32** |
Giving employees the opportunity to use the knowledge gained from training | 15.1% | 29.1% | 33.0% | 19.0% | 3.9% | .37** | .14**
Tracking pre- and post-training performance | 36.5% | 32.4% | 19.7% | 8.8% | 2.6% | .37** |

** Correlation is significant at the 0.01 level (2-tailed). Note: Only statistically significant correlations are presented.


| Section III |
What Are the Barriers to Training Evaluation?

Given the fact that learning evaluations can have sizable benefits for organizations, why don't companies use them to a greater extent? To get at this question, our research team asked about the barriers to the proper measurement of learning.

We found that the barrier that looms largest is the difficulty companies have isolating learning as a factor that has an impact on results. Of course, this tends to come into play mostly in Level 4 and Level 5 of the Kirkpatrick/Phillips model. It is not a barrier for Level 1, since there are no other factors involved in the participants' reactions. Phillips suggests that the effects of training can be isolated through the use of control groups, trend-line analysis, forecasting models, and impact estimates (Phillips, Pulliam, & Wurtz, 1998).

We should note, however, that there might be a "self-fulfilling prophecy" problem here. Companies that simply give up on evaluation because they can't prove it has results might well be sabotaging their own learning effectiveness. After all, data from this study shows that there's a significant negative correlation between the extent to which respondents see this as a barrier and their own learning evaluation effectiveness (see figure 13).

One possible takeaway is that companies should not try to prove that a learning experience affects results but rather show that, given the preponderance of evidence, it very likely does. Managers and executives should understand this argument if it's made carefully, especially if they recognize, as this study indicates, that viewing this as an insurmountable barrier does the company more harm than good, in terms of both learning evaluation and market performance.

The next-most commonly noted barrier, cited by 40.8 percent of respondents, is the lack of a useful evaluation function as part of the company's learning management system (LMS). This may signal that, as companies rely more heavily on software solutions to create and deliver learning, they become dependent on those systems to provide metrics. Even if their system does have the proper reporting functions, it may not be robust enough to provide the data they need for effective evaluation.

[Figure 13: To what extent are the following seen as barriers to the evaluation of learning in your organization? Percentage who see each as a barrier to a high or very high extent, with its correlation to the Evaluation Success Index: it is too difficult to isolate training's impact on results, 51.7% (-.18**); our LMS does not have a useful evaluation function, 40.8% (-.21**); evaluation data is not standardized enough to compare well across functions, 38.0% (-.23**); it takes too much time to conduct higher-level evaluations, 32.2% (not significant); leaders don't generally care about evaluation data, 24.1% (-.12**); two further barriers at 18.9% (not significant) and 14.5% (-.20**). ** Correlation is significant at the 0.01 level (2-tailed). Note: Only statistically significant correlations are presented.]



Another and sometimes related problem that organizations encounter is that the learning data they collect is not standardized across functions and is, therefore, difficult to compare. This can be both a process and a technology problem, one that can be difficult to address without the willing participation of a multifunctional team.

High costs also keep some companies from evaluating learning effectively, especially when it comes to the higher levels of the K/P model. This raises the interesting question, "What is the ROI of calculating ROI?" If companies simply don't think the ROI is high enough, then they'll be more likely to cut such calculations from the learning budgets.

Apathy toward the data can also be a barrier. Almost a quarter of respondents said that business leaders who don't actually care about the evaluation data represent a barrier to a high or very high extent. It's possible that leaders don't value the data because there is not much they can do with participation-level and employee-reaction data, which is what gets measured most. If leaders knew they were going to get information on how training improves results and boosts the bottom line, they might be more interested. On the other hand, some studies show that leaders can also be skeptical of certain business results data, especially if the assertions about training's impact seem uncertain or slippery.

Other research produces these same recurring themes when it comes to the difficulties in measuring learning. The Chief Learning Officer survey cited earlier found that the most common barriers to effective learning evaluation included a lack of resources, interest, and leadership support (Anderson, 2009). Companies often find it so challenging to isolate training as a factor affecting bottom-line results that they abandon trying to calculate ROI altogether, saying that knowing the information is not worth the time and effort it takes to measure


(Naughton, 2008). Others say that senior management isn't actually interested in it or that they do not accept the numbers as valid (Wilhelm, 2007).

A 2004 study by IBM and ASTD asked business executives which factors should be used to hold learning departments accountable for adding value to the business, and ROI ranked as one of the least important factors (Redford, 2007). Another report, this one from the Corporate University Xchange, says that management often does not accept the data that is presented and, in many cases, simply believes that training adds value without asking for evaluation data at all (Todd, 2008).

Another interesting point of view comes from a 2008 WorldatWork survey of European HR leaders. It found that ROI for learning was high on the scale for importance when it comes to the ROI metrics that HR can report, ranking it higher than data related to demographics and diversity, compensation and benefits, productivity, and workplace health and safety. But when it comes to HR's ability to report those metrics, ROI for training is dead last (Collins & Hill, 2008).

Addressing Evaluation Barriers

Each of the barriers documented in this study was correlated with the Evaluation Success Index (ESI), a measure of the extent to which respondents believe their learning metrics are a worthwhile investment of time and resources. The purpose of these correlations is to understand which barriers are most strongly associated with successful (or unsuccessful) evaluation. In short, the more that barriers are seen as hindering learning evaluations, the less likely a respondent is to say that his or her organization has a successful learning evaluation program.

    (Naughton, 2008). Others say that senior managementisnt actually interested in it or that they do not acceptthe numbers as valid (Wilhelm, 2007).

    A 2004 study by IBM and ASTD asked business executiveswhich actors should be used to hold learning departmentsaccountable or adding value to the business, and ROIranked as one o the least important actors (Red ord,2007). Another report, this one rom the CorporateUniversity Xchange, says that management o ten doesnot accept the data that is presented and, in many cases,simply believes that training adds value without asking

    or evaluation data at all (Todd, 2008).

    Another interesting point o view comes rom a 2008

    WorldatWork survey o European HR leaders. It oundthat ROI or learning was high on the scale or importancewhen it comes to the ROI metrics that HR can report,ranking them higher than data related to demographicsand diversity, compensation and bene ts, productivity,and workplace health and sa ety. But when it comes toHRs ability to report those metrics, ROI or training isdead last (Collins & Hill, 2008).
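Correlations like these are plain Pearson coefficients between two sets of survey scores. A minimal sketch of how such a figure is computed, using invented ratings, since the study's raw responses are not included in the report:

# Pearson correlation between a barrier rating and ESI scores, as reported
# throughout this study. Both arrays are invented for illustration.
from scipy import stats

barrier_rating = [5, 4, 4, 3, 5, 2, 1, 3, 4, 2]  # "data not standardized" (1-5)
esi_score = [2.1, 2.5, 2.4, 3.0, 2.2, 3.4, 3.8, 2.9, 2.6, 3.5]

r, p = stats.pearsonr(barrier_rating, esi_score)
print(f"r = {r:.2f}, p = {p:.4f}")  # a negative r mirrors the -.23 reported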

These findings provide insights into specific steps organizations can take to avoid learning evaluation pitfalls. First, learning evaluation data loses some of its value when it cannot be compared across functions. The investment of time to coordinate across functions may be seen as excessive, yet it appears that when organizations fail to standardize key metrics initially, the overall effectiveness of the evaluation process suffers. Additionally, taking steps to ensure the usefulness of the evaluation function of an organization's LMS can be worth the effort.

Generally speaking, an LMS should be purchased or modified with evaluation usage in mind. A system that does not help learning professionals gain clear insight into the success of their learning programs is not doing its job well enough. If learning professionals are essentially stuck with such a system, then at least they should make it clear to the vendor or creator that a better evaluation function will be expected in the next version of the application.



In addition to correlations with the ESI, each barrier was correlated with the Market Performance Index (MPI). Compared with the ESI, there were fewer significant correlations, and those that were significant were not as strong. However, three of the six barriers were found to be significantly and negatively correlated with the MPI. The strongest correlation with the MPI (-.12**) was with the idea that leaders don't generally care about evaluation data. In this case, the more respondents say their leadership doesn't hold evaluation metrics in high regard, the less likely they are to report strong market performance across leading indicators. In general, if specific leaders or the prevailing organizational culture downplay the value of evaluation data, it is possible that the quality of decisions and confidence in those decisions suffer and lead to compromised market performance.


| Section IV |
What Are Companies Spending on Training Evaluation?


Learning budgets vary widely across respondent companies, with some reaching tens of millions of dollars per year. The proportion of the budget spent on evaluating learning is relatively consistent regardless of company size, although small companies, those with fewer than 100 employees, spend an average of 6.5 percent of their learning budget on evaluation, whereas large companies, those with more than 10,000 employees, allocate 4.4 percent of the learning budget to evaluation.

About two-thirds of that evaluation budget is then allocated to internal resources, while the rest is spent externally. Large companies spend more internally (72.4 percent) than small companies (58.2 percent), most likely because they have the resources to do so. Small companies may need to rely on more outside assistance to conduct any evaluation.
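As a worked example of how these percentages compound, the short sketch below applies the all-company averages reported in this section (5.5 percent of the learning budget to evaluation, 68.2 percent of that spent internally) to a hypothetical $2 million learning budget; the budget figure is invented for illustration.

```python
# Hypothetical example: how the evaluation share of a learning budget
# splits into internal and external spending.
learning_budget = 2_000_000  # total annual learning spend (invented)

evaluation_share = 0.055     # 5.5% of learning budget goes to evaluation (all-company average)
internal_share = 0.682       # 68.2% of evaluation spend stays internal (all-company average)

evaluation_budget = learning_budget * evaluation_share  # $110,000
internal_spend = evaluation_budget * internal_share     # about $75,000
external_spend = evaluation_budget - internal_spend     # about $35,000

print(f"Evaluation budget: ${evaluation_budget:,.0f}")
print(f"  Internal: ${internal_spend:,.0f}")
print(f"  External: ${external_spend:,.0f}")
```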

Figure 14 | Please indicate what percentage of your organization's total expenditure for employee learning is allocated to evaluation.

Small (fewer than 100 employees): 6.5%
Mid-size (from 100 to 10,000 employees): 5.7%
Large (10,000 or more employees): 4.4%
All organizations: 5.5%


Measuring the reaction of learners is probably the least expensive endeavor of all five levels, yet that is where the most money is being spent simply because it is done the most. Of course, as we've noted elsewhere in this study, Level 4 and Level 5 evaluations are done much less frequently and for a smaller range of programs than lower levels of evaluation, so it makes sense that they are seen as a much smaller portion of total expenditures.

However, this raises the issue of whether such upper-level evaluations are really as expensive as is commonly thought. Jointly, Level 4 and Level 5 account for only 10.6 percent of all evaluation spending.


Where Companies Are Spending Their Budgets
When companies use the Kirkpatrick/Phillips model, they tend to spend their budgets in parallel to how they use the levels. Nearly half of the budget is spent on Level 1, the reaction of the participants, where the majority of programs are evaluated. From there, the spending drops by about half for each subsequent level until it reaches three percent for Level 5.

Figure 15 | Of the total expenditure for learning evaluation, what percentage is spent on internal versus external resources?

(Internal: staff, administrative support, development and delivery of evaluations, and IT support. External: outside products and services, external consultants, workshops, and training programs.)

All organizations: 68.2% internal / 31.8% external
Large organizations: 72.4% internal / 27.6% external
Mid-size organizations: 69.1% internal / 30.9% external
Small organizations: 58.2% internal / 41.8% external


Improper Spending
Although the budget allocations for Kirkpatrick/Phillips make sense based on how companies use the model, they don't seem to make as much sense for executing effective evaluations or even for having a successful company.

As mentioned above, the clear front-runner, based on the percentage of evaluation spending, is Level 1 evaluation. Almost 50 percent of evaluation budgets are spent on measuring the reactions of participants to learning programs. Despite the fact that this method is used most frequently, the correlations point to problems. Specifically, the larger the percentage of the evaluation budget spent on participant reactions, the less likely our survey respondents were to score well on the Evaluation Success Index (ESI) (r=-.38). They were also more likely to score lower in terms of their market performance (r=-.12).

Finding no relationship at all between an investment in evaluation and the ESI would be surprising; surely evaluating something is better than doing nothing, resulting in a significant positive correlation. In this case, however, a strong inverse relationship exists between Level 1 spending and both indices.

This is not suggesting, however, that Level 1 measurement is an inherently flawed measure or investment of time. More likely, these results suggest that spending on Level 1 evaluation is not effective unless it is combined with other levels.

Clearly, asking how participants feel about the training alone may not gather any good, substantive information. Ian Cunningham, visiting fellow at the University of Sussex in the United Kingdom, says he has seen many examples where participants gave bad reviews to the training in Level 1 evaluations but their behavior had been positively changed when measured at Level 3 (Cunningham, 2007).

Figure 16 | Of the total expenditure for learning evaluation, what percentage is spent on the following:

Responses | Percentage of evaluation expenditure | Correlation with Evaluation Success Index
Reactions of participants (Level 1) | 49.8% | -.38**
Evaluation of learning (Level 2) | 22.8% | .13**
Evaluation of behaviors (Level 3) | 14.3% | .25**
Evaluation of results (Level 4) | 7.6% | .32**
Return on investment (Level 5) | 3.0% | .26**
Other models and evaluation methods | 2.5% | (none reported)

** Correlation is significant at the 0.01 level (2-tailed).
Note: Only statistically significant correlations are presented.


    Level 4 appears to deliver the greatest value.

The opposite can be true as well. There are instances when participants will love the training sessions because they are fun and exciting, but there is no change in behavior at all. Our study results suggest that many companies would never know if their training was a failure because they never went any further than Level 1 and were satisfied that the employees gave the program high marks. It seems clear that companies using predominantly Level 1 measurements are less successful in both their evaluation efforts and their business results.

Interestingly, the correlations between the ESI and every other level of measurement are significant and positive. This data suggests that the larger the percentage of the evaluation budget that is spent on any level except Level 1, the more likely respondents are to report overall evaluation success.

This seems to be especially true for Level 4, the evaluation of results. Not only does evaluation of results have the highest correlation with the ESI (r=.32**), but it also has the only significant positive correlation with the MPI (r=.11**). According to these results, regardless of the other levels an organization chooses to measure, the most value appears to be gained by evaluating training results.


| Section V | How Can Organizations Improve Training Evaluation?

Identifying which programs to evaluate, determining how extensively to measure them, and conducting the actual evaluations are really only half of the equation. Once the metrics have been acquired, what can a company do with the information that has been gathered?

It is only through proper application of the data that evaluation can be considered a success. To that end, the survey asked respondents what actions are taken based on the information derived from evaluation. Out of a list of 14 possible uses for evaluation results, no action is taken to a high or very high extent by a large majority of respondents.

The action taken to the highest extent involves helping improve learning programs, although it is a bit puzzling why this number isn't even higher. It is hard to imagine why an organization wouldn't use evaluation results to improve its learning programs.

Figure 17 | To what extent does your organization use learning evaluation results to take each of the following actions? (Percentage who answered they do this to a high or very high extent; responses for the actions shown ranged from 52.9 percent, for using evaluation results to improve learning programs, down to 27.8 percent.)


It is interesting that, in every case, companies clearly believe they should be taking these actions to a higher extent than they actually are. The biggest gap between what companies are doing and what they should be doing comes with calculating the effect that learning has on business results, where only 17.5 percent said they are doing this to a high or very high extent and 85.8 percent said they should be doing it.
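The should/do gap itself is simple arithmetic: the percentage saying an action should be taken to a high or very high extent minus the percentage saying it actually is. A minimal sketch, using the two pairs of figures reported in this section:

```python
# The "should/do gap" is the percentage-point difference between how many
# respondents say an action should be taken (to a high or very high
# extent) and how many say it actually is taken.
actions = {
    # action: (percent who do, percent who say they should)
    "calculate learning's effect on business results": (17.5, 85.8),
    "calculate the ROI of learning programs": (10.0, 70.6),
}

for action, (do, should) in actions.items():
    gap = should - do
    print(f"{action}: {gap:.1f}-point gap")
# -> 68.3 points, and 60.6 points (Figure 18 reports 60.5, reflecting
#    rounding of the underlying survey percentages)
```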


This correlates to the fact that so few companies evaluate at Level 4 (results) to any extent. It appears they know they should be evaluating at Level 4 more often in order to measure learning's effect on business results.

In contrast, it seems companies are satisfied with their use of participant reaction measurement (Level 1). There was only an 8.6 percentage-point difference between respondents who said they use evaluations to make sure employees like the programs to a high or very high extent and those that said they should do so.

Making sure employees like programs was one of the most common evaluation actions taken; most of the average company's evaluation time, energy, and money are spent on it. Ironically, Level 1 is actually seen as the least important.

Respondents also said their organizations were not calculating learning's ROI enough, with only 10 percent saying they calculated ROI to a high or very high extent and 70.6 percent saying they should. Again, this is borne out by the small percentage of programs evaluated at Level 5 and the small portion of budgets allocated to that area.

Which Uses of Evaluation Results Are Most Important?
As with the barriers discussed earlier, each of the 14 uses of evaluation results was examined against the Evaluation Success Index to identify the uses most strongly associated with successful evaluation.

Figure 18 | To what extent does your organization use learning evaluation results to take each of the following actions, and to what extent should it? (Percentage-point difference between the degree to which companies actually take these actions to a high or very high extent and the degree to which they say they should do so; the gaps shown ranged from 68.3 points, for calculating the effect that learning has on important business results, down to 40.2 points.)



Priorities for Improvement
We believe that the practices with the five largest should/do gaps, all with percentage-point differences exceeding 50, provide major clues about what learning professionals can and should be focusing on in the near future. Here are the top five:

• Calculate the effect that learning has on important business results.
• Ultimately improve overall business results.
• Gauge the ROI of learning programs.
• Ensure that learning programs positively influence the behaviors of employees.
• Demonstrate to others in the organization the value of the learning function.

Note that the first three of these are related to business results and one to ROI in particular. It's clear that, in general, respondents believe their firms should be doing a better job of using learning evaluations to improve business results. There needs to be a plan to accomplish this in many of today's organizations.

But learning and learning evaluations can't do much to improve business results until they change and track the behaviors of employees. "Training that is well learned, but never used, or poorly used, produces no value for the business that invested in the training," Brinkerhoff has noted. He cites research estimates claiming that only about 15 out of 100 people ever use new training to produce valuable performance results (Brinkerhoff, 2006).

That is, behaviors aren't really changed by training alone. In fact, success is often related to how well managers follow up on the training with their employees. Do they allow the training to be put to use? Are workers encouraged to use it? Until these types of issues are addressed, behaviors will not change, and there's little chance that a learning experience can have a positive impact on business results.

The fifth action that has a very large should/do gap is "demonstrate to others in the organization the value of the learning function." That sounds self-serving for learning professionals, and perhaps it is to some extent. But there's also a legitimate need to demonstrate and reinforce the most effective learning experiences because those will be one of the best means of ratcheting up the skill levels and effectiveness of the organization over time.

Study participants also wish their companies were using learning evaluation more to improve business results, ensure learning programs positively influence employee behavior, and demonstrate the value of learning to others in the organization. These are actions that are easier to undertake when the proper evaluation is done, whether it is Level 3 for behaviors, Level 4 for results, or the Brinkerhoff Success Case Method for demonstrating the value of learning.

Our gap analysis of these five areas is buttressed by a multiple regression analysis showing that three out of these five practices (overall business results improvement, positive influences on employee behaviors, and demonstrating the value of the learning function) do much to help explain the success of learning evaluation. If companies want to make their learning evaluation systems better, these items, in particular, may be great places to start.
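A minimal sketch of the kind of multiple regression described above, written in Python with pandas and statsmodels; the file name and column names are hypothetical stand-ins, not the study's actual variables.

```python
# Hypothetical sketch of a multiple regression: regress the Evaluation
# Success Index (ESI) on the extent to which respondents use evaluation
# results for each of the five high-gap practices.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical survey extract

predictors = [
    "improve_business_results",      # 1-5 extent ratings (invented names)
    "influence_employee_behavior",
    "demonstrate_learning_value",
    "calculate_business_effect",
    "gauge_roi",
]

X = sm.add_constant(df[predictors])  # add an intercept term
model = sm.OLS(df["esi"], X).fit()   # ordinary least squares fit

# The coefficients and p-values indicate which practices best explain
# evaluation success when the other practices are held constant.
print(model.summary())
```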


| Conclusion and Policy Recommendations |

It's clear that learning professionals want to demonstrate the value of the programs they deliver and at the same time improve those programs. They want to deliver programs that employees enjoy, impart knowledge and skills, change behaviors for the better, deliver business results, and provide the best bang for the buck.

But organizations are having a hard time determining if their programs do these things, and that makes it hard to improve the learning function. There are strategies available to achieve this goal, but the information has to be collected properly and used effectively.

The type of evaluation that is done is greatly influenced by the audience for the data. Instructors are mostly focused on employees' reactions and the amount of learning that takes place, which allow them to critique and improve their programs. Yet business leaders are not as interested in that information. They want to know how performance, productivity, and other business metrics have improved as a result of the training (Berge, 2008). With so much focus on Levels 1 and 2, it's not surprising that many leaders do not buy into conventional learning evaluation.

To obtain valuable learning metrics, one main question has to be answered: What is the purpose of the training? The answer to that question may not be as simple as "to improve performance." Training can be used for any number of additional reasons, whether to change behaviors, to increase retention, or for motivation (Rands, 2007). Whatever the reason, it becomes the guide by which the learning is measured.

There should be clear objectives and goals to be measured from the outset of the training program. Trying to identify things to measure after the fact can be difficult and not entirely effective. If evaluating at Level 3, for example, the behaviors that are to change need to be identified and measured before and after the training. The business results identified in Level 4 need to be measured prior to learning to provide a baseline for comparison after the program is completed. These are the basics of learning evaluation, of course, but this study clearly shows that the basics aren't all that common in today's organizations.

Recommended Actions
This study has resulted in excellent insights into the current state of learning evaluation. But, having gone through the findings, what are the main takeaways that learning professionals can quickly act on? Every organization's situation will be different, of course, and not every practice will work equally well in all of them. Given those caveats, below are our suggestions for those intent on improving learning evaluations:

• Collect data that is meaningful to leaders. Otherwise, they will never see the value of evaluations.

• Where possible, standardize evaluation data across different functions within the organization to make it easier to use the data effectively.

• Spend more time and money evaluating behaviors and results and less on participant reactions. At the very least, don't rely on reactions so completely.

• Give supervisors more responsibility when it comes to learning evaluation. They should give employees opportunities to use their training, and they should help track performance both prior to and after the training.

• When evaluating changes in behavior, use strategies such as follow-up sessions, focus groups, and participant surveys. Used with action planning and performance monitoring, these strategies are the most highly correlated with evaluation success.


• When choosing a learning management system, investigate the evaluation tools available. A robust solution can make evaluation much easier and the results more reliable. In addition, it becomes a way to audit the effectiveness of the LMS itself.

• Most important, don't simply give up on evaluation. While learning metrics must be used wisely to be effective, simply abandoning them is no solution. After all, using metrics well is associated not only with evaluation success but also with overall organizational success.

• When evaluating the impact that training has had on results, find some ways to help filter out factors other than training that may have an influence (a simple sketch of one such approach follows this list). But also keep in mind that organizations are not laboratories where all the factors can be controlled. The learning function should not be held to a higher standard than others. What learning professionals should be trying to show is that a preponderance of evidence indicates that learning has an impact. Leaders should not try to hold the learning function to a "beyond a shadow of a doubt" level of evidence. If they do, then evaluations are likely to fail, and this is associated with weaker organizations.

• When evaluating results, focus on metrics such as proficiency and competency levels, customer satisfaction, and employee perceptions of training impact. Actual business outcomes and productivity measures such as time and employee output are also metrics worth capturing. In any case, prior to training, identify the key performance indicators to be measured.
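As referenced in the third recommendation above, one simple way to filter out non-training factors is to compare trained employees against a similar untrained group, before and after the program. The sketch below is a deliberately crude difference-in-differences style estimate; all numbers are invented for illustration.

```python
# Hypothetical sketch: estimating training impact on a business metric
# while filtering out a trend that affects everyone (trained or not).

# Average monthly sales per employee, before and after the program
trained_before, trained_after = 100.0, 118.0
untrained_before, untrained_after = 101.0, 106.0

trained_change = trained_after - trained_before        # +18.0
untrained_change = untrained_after - untrained_before  # +5.0 (market trend, etc.)

# The untrained group's change approximates what would have happened
# without training; subtracting it isolates the training effect.
training_effect = trained_change - untrained_change    # +13.0
print(f"Estimated training effect: {training_effect:+.1f} units per employee")
```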


| References |

Anderson, C. (2009, May). Overcoming Analysis Paralysis. Chief Learning Officer, 54-56.

ASTD (American Society for Training & Development). (2005, November). ASTD Benchmarking Forum 2005 Learning Evaluation Practices Report. Alexandria, VA.

Berge, Z. (2008). Why It Is so Hard to Evaluate Training in the Workplace. Industrial and Commercial Training, 390-395.

Brinkerhoff, R. (2006). Increasing Impact of Training Investments: An Evaluation Strategy for Building Organizational Learning Capability. Industrial and Commercial Training, 302-307.

Brinkerhoff, R. (2003). The Success Case Method. San Francisco: Berrett-Koehler.

Brinkerhoff, R. (2005, February). The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training. Advances in Developing Human Resources, 86-101.

Collins, M., and Hill, S. (2008, May). Trends in European Human Capital Measurement. workspan, 64-70.

Cunningham, I. (2007). Sorting Out Evaluation of Learning and Development: Making It Easier for Ourselves. Development and Learning in Organizations, 4-6.

Is Training Worth It? (2008). Development and Learning in Organizations, 34-36.

Kirkpatrick, D. (1994). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.

Kirkpatrick, D. (2007, January). The Four Levels of Evaluation. Infoline.

Murray, L., and Efendioglu, A. (2007). Valuing the Investment in Organizational Training. Industrial and Commercial Training, 372-379.

Naughton, J. (2008, September). IOL: Determining the Impact of Learning. Chief Learning Officer, 32-36.

Platt, G. (2008, August). The Hard Facts About Soft Skills Measurement. Training Journal, 53-56.

Phillips, J., P. Pulliam Phillips, and R. Stone. (2001). The Human Resources Scorecard. Boston: Butterworth-Heinemann.

Phillips, J., P. Pulliam, and W. Wurtz. (1998, May). Level 5 Evaluation: Mastering ROI. Infoline.

Radhakrishnan, M. (2008, October). Learning Measurements: It's Time to Align. Chief Learning Officer, 36-39.

Rands, A. (2007, February). Extending the Half-Life of Training. Training Journal, 40-43.

Redford, K. (2007, June). What's the Point of ROI? Training & Coaching Today, 12-13.

Todd, S. (2008, September). Mission Accomplished? Measuring Success of Corporate Universities. Chief Learning Officer, 38-40.

Wilhelm, W. (2007, December). Does Senior Leadership Buy ROI for Learning? Chief Learning Officer, 86-88.


| Appendix | The Value of Evaluation Survey Overview

Survey Process

Target Survey Population
The target survey population of the ASTD/i4cp Value of Evaluation Survey consisted of an email list of primarily high-level business, HR, and learning professional contacts from ASTD and i4cp. In total, 704 people responded to the survey. Respondents represented a variety of organizational sizes and industries.

Survey Instrument
In this survey, multiple questions used the well-accepted 1-5 Likert-type scale, with a 1 rating generally designated as "not at all" and a 5 rating as "a very high extent." Additionally, several questions asked respondents to assign percentages (for example, the percentage of learning programs evaluated at each level) in their organizations. There were 38 questions in all, including those geared toward the demographics of respondents.

Some questions were open-text types, and that data does not appear below. Moreover, survey data from multiple questions was combined in certain tables. In a few cases, low-priority data is not shown at all in this appendix.

Procedure
A link to an online survey was emailed to the target population during May 2009.

Demographic/Company Profile Questions and Results

Q1: What is your current title?
Respondents represented a broad array of mid- to high-level management positions, with more than half of those surveyed at the director level or above. Directors make up the largest proportion of those respondents (43.2 percent).

What is your current title?
CEO/President/Chairman: 4.8%
Executive VP/Senior VP: 2.8%
Vice president: 9.8%
Director: 43.2%
Chief Learning Officer: 3.3%
Manager: 20.9%
Supervisor: 0.9%
Other: 14.4%


Q2: Describe your organization's type of operation.
Forty percent of survey respondents represented firms that were multinational (defined as operations that act independently of one another) or global (defined as having a high level of global integration) in nature. Fifty-nine percent of respondents were from national organizations currently operating in one country only.

Describe your organization's type of operation.
National: 59.4%
Multinational: 17.6%
Global: 22.9%

Q3: In what region is your organization headquartered?
A large majority of the respondents were from organizations headquartered in North America (82.7 percent). Participants also included firms headquartered in Europe (7 percent) and Asia (6 percent), with smaller representations among firms headquartered in Oceania/Australia, Latin America, the Mideast, and Africa.

In what region is your organization headquartered?
North America: 82.7%
Latin America: 1.1%
Europe: 6.6%
Mideast: 2.4%
Africa: 0.7%
Asia: 5.7%
Oceania/Australia: 0.9%


Q4: In what sector does your organization operate?
Respondents represented a wide variety of industries. The largest specified proportion was employed in health care (12.2 percent).

Within which sector does your organization primarily work?
Aerospace & Defense: 1.3%
Agriculture: 0.4%
Automotive & Transport: 1.1%
Banking: 3.3%
Beverages: 0.1%
Business Services: 5.6%
Charitable Organizations: 0.1%
Chemicals: 1.1%
Computer Hardware: 0.4%
Computer Services: 1.4%
Computer Software: 1.7%
Construction: 2.5%
Consumer Products Manufacturers: 2.5%
Consumer Services: 0.5%
Cultural Institutions: 0.0%
Education: 8.9%
Electronics: 0.6%
Energy & Utilities: 2.7%
Financial Services: 8.1%
Food: 3.0%
Foundations: 0.0%
Government: 4.5%
Health Care: 12.2%
Industrial Manufacturing: 4.0%
Insurance: 6.3%
Leisure: 1.1%
Media: 1.0%
Membership Organizations: 0.6%
Metals & Mining: 0.4%
Not-for-profit: 6.0%
Pharmaceuticals: 2.5%
Real Estate: 0.6%
Retail: 2.5%
Security Products & Services: 0.2%
Telecommunications Equipment: 0.4%
Telecommunications Services: 1.6%
Transportation Services: 1.9%
Other: 8.9%


Q5: What is the size of your organization's workforce?
Organizations of all sizes responded to the survey. More than half of respondents are employed by organizations with workforces of 1,000 or more, while about a quarter work in organizations with more than 10,000 employees.

What is the size of your organization's workforce?
Fewer than 100 employees: 16.2%
100-499: 17.6%
500-999: 11.9%
1,000-3,499: 17.7%
3,500-4,999: 4.5%
5,000-9,999: 6.7%
10,000-24,999: 9.6%
25,000-49,999: 6.7%
50,000-99,999: 4.8%
100,000 or more: 4.2%

Q6: What is your organization's total revenue?
Participating organizations reported a wide range of revenue. The largest single revenue category noted revenues of less than $10 million. Nonetheless, respondents were well distributed among total revenue categories, with just more than three out of 10 representing firms with revenues of $1 billion or more.

What is your organization's total revenue?
Less than $10 million: 20.5%
$10 to $24 million: 7.9%
$25 to $49 million: 9.1%
$50 to $99.9 million: 8.6%
$100 to $249 million: 9.1%
$250 to $499 million: 6.2%
$500 to $999 million: 6.9%
$1 billion to $2.99 billion: 11.1%
$3 billion to $9.99 billion: 9.8%
$10 billion or more: 10.7%


Q9: Please select all of the Kirkpatrick/Phillips levels that you use to any extent in your organization.
Of study participants who use the Kirkpatrick/Phillips model, the most common level of measurement is the reactions of participants (Level 1), with 91.6 percent of companies evaluating learning with this technique.

Please select all of the Kirkpatrick/Phillips levels that you use to any extent in your organization.
(Percentage who use the corresponding level to any extent)

Responses | Small organizations | Midsize organizations | Large organizations | All organizations
Reactions of participants (Level 1) | 88.9% | 89.7% | 97.8% | 91.6%
Evaluation of learning (Level 2) | 72.6% | 79.0% | 90.4% | 80.8%
Evaluation of behaviors (Level 3) | 54.7% | 50.6% | 63.5% | 54.6%
Evaluation of results (Level 4) | 40.2% | 31.8% | 46.6% | 36.9%
Return on investment (Level 5) | 26.5% | 13.4% | 22.5% | 17.9%
None of the above | 5.1% | 5.1% | 1.1% | 4.1%

Q10: Do you use other forms of learning program evaluation aside from the five levels described in the Kirkpatrick/Phillips model?
The Kirkpatrick/Phillips model is clearly the preferred model for learning evaluation, with 82.7 percent of respondents reporting that they use this model exclusively. However, this data should be interpreted cautiously because responses to subsequent questions indicate there may be greater diversity of evaluation techniques than is apparent from this question. In particular, the results from Q12 indicate nearly half of the respondents conduct evaluation studies, so there appears to be some confusion about what is technically included in the K/P model.

Do you use other forms of learning program evaluation aside from the five levels described in the Kirkpatrick/Phillips model?
Yes: 17.3%
No: 82.7%


Q12: Has your organization ever conducted an evaluation study (for example, interviews) with successful trainees?
Responses are divided almost equally between organizations that have and have not conducted an evaluation study with successful trainees. Almost 48 percent of survey participants say they have used this as a learning evaluation method. Although this does not necessarily contradict responses to Q10, it does show that many companies are do