
The Assessment Centre is Not Dead! How to Keep it Alive and Well

by Steven H. Appelbaum, Frank Kay and Barbara T. Shapiro, Concordia University, Montreal, Canada

Human Resource Management (HRM) is responsible for the planning, implementation and evaluation of the organisation's recruitment, selection, appraisal, compensation, training and development activities, and for providing the necessary ingredient to assist the management of the firm to plan and focus its strategic objectives effectively. To the extent that appropriate human resources will provide the organisation with the opportunity to continue fulfilling its objectives in an effective manner, talent assessment may be the most important task in HRM today. The accurate assessment of human resources is critical to an organisation's selection, promotion and business/manpower planning decisions. The soundness of a business depends on a succession of personnel with the knowledge, skills and ability to manage, and the long-range foresight to ensure its continued viability as an organisation. There is a need for proper and accurate assessment, and a greater need for the vehicle where this can occur: the assessment centre.

Most human resource managers agree that it is desirable to match human abilities with job requirements, and filling jobs with key individuals is a dynamic process which is extremely complex and difficult to manage. Common alternatives available to aid management in assessing the knowledge, skills and abilities that are critical for success in key positions include interviews, paper-and-pencil tests, work samples, reference checks and assessment centres[1]. This article explores some of these methods by comparing and contrasting them against four major criteria: job-relatedness, validity, cost-effectiveness, and descriptive or predictive outcomes. The question of what constitutes an assessment centre will provide the necessary basis for analysing its effectiveness and answering management's questions as to whether this process and device should be entertained as an HRM tool for assessment. Despite some shortcomings and problems, the assessment centre fills an important role for organisations, but to keep it viable and alive there are issues and processes in need of managing. This article will address these and propose a blueprint for the future.

Assessment Centres: Potentials and Futures

Assessment centres today are a "standardised" form of employee appraisal which relies on multiple types of evaluation and multiple raters. Individuals are given the opportunity to participate in a series of situations which resemble what they might be called on to do in the real world[2]. The centre combines a series of job-related exercises and simulations which have been designed to enable an applicant to demonstrate whether he/she has the skills required[3]. These exercises are observed by a team of trained assessors who later pool their information and reach an agreed assessment of each candidate on the basis of previously identified performance dimensions. Besides ensuring that sound decisions are made, namely selecting the candidate with the closest "fit", this also provides the organisation with a list of each candidate's training and development needs which can be used as a basis for that individual's development plans. Thus the assessment centre is neither exclusively a selection nor a development programme, but rather a combination of the two[4].

Assessment centres are not places where test batteries are administered, but rather a composite of techniques used in selection decisions. One of the reasons for the successful emergence of these centres has been the inability of traditional performance appraisals to accurately reflect an individual's ability to perform. If traditional appraisals of management potential were valid, there would be little need for assessment centre evaluations. Essentially, a future-oriented appraisal system focuses on future performance by evaluating employee potential, and this method is usually applied to groups of middle managers who appear to have the potential to perform at more responsible levels in the organisation[5]. It is a systematic approach to identifying precisely what is required for success in a particular job and then labelling these requirements in terms of a short-list of tightly defined criteria[3]. Leadership, integrity, tenacity and team building are typical criteria which might be included for a middle management position. Information from these sessions is placed into the human resource management information system to assist HR planning and other personnel management functions. Results can thus be extremely useful for aiding management development and placement decisions which ultimately impact on the future of both the firm and the individual.

As an approach to selection, assessment centres were originally used in the military[6,7]. They are most widely used in industry as a way of selecting managers who have the potential to benefit from accelerated development, a transition begun by the director of human resources research at American Telephone & Telegraph in the 1950s[5]. Today, more than 2,000 North American companies use them as a primary selection device. Among programmes to assess managerial candidates, the number of dimensions of effective performance in the companies studied has varied from about 10 to 52. Several companies have factor analysed criteria to try to explicate the most important constructs. Generalising from the managerial dimensions selected and factored, the following seem to be important: (a) leadership, (b) organising and planning, (c) decision making, (d) oral and written communication skills, (e) initiative, (f) energy, (g) analytical ability, (h) resistance to stress, (i) use of delegation, (j) behaviour flexibility, (k) human relations competence, (l) originality, (m) controlling, (n) self-direction, and (o) overall potential[8]. However, the entire process of assessment centres is not without its own problems, nor is it a managerial panacea.

As anyone familiar with the traditional psychometric literature can corroborate, the whole idea of assessment centres is preposterous[8]. The basic principle requires that candidates, usually for management positions in organisations, go through a series of individual and group tests and exercises in one concentrated period while being evaluated by a group of assessors. The absurdity is that most of the procedures used to predict future job success are the very ones experience has demonstrated do not work. For example:

(1) Clinical, not actuarial, predictions are typically relied on, although most studies have shown the latter to be more accurate.

(2) Multiple predictors are used in spite of evidence that clinical prediction may be worse with the inclusion of more than a few variables.

(3) Projective tests may be included, although their reliability and validity are highly questionable.

(4) An interview is usually an integral part of the process, in spite of its dubious validity.

(5) Personality tests are often included, although it has been claimed that they have little or no value for personnel selection.

(6) Situational tests are relied on most heavily, although they are still in an embryonic stage compared to classical psychometric tests and failed dismally at predicting the performance of clinical psychologists.

(7) Managers are asked to integrate all this information and predict behavioural traits as well as potential success, even though psychologists are still struggling to demonstrate that even they can do it well[8].

Due to serious concerns about the nature and validity of this technique and the necessity for some minimal professional standards, a group of professionals actively engaged in the assessment centre method met throughout the latter 1970s and developed a set of guidelines by which to distinguish and implement these centres (see Figure 1). Endorsed by the Third International Congress on the Assessment Centre Method in 1975, these guidelines have since been used successfully as evidence defending assessment centre reliability in a number of litigation cases. Continued acceptance of assessment centres as a viable selection and development tool will rest ultimately with their ability to satisfy the three important effectiveness criteria of job-relatedness, validity and cost-effectiveness, as well as outcomes, both descriptive and/or predictive. To date, they appear to be on target. However, an analysis of these criteria will help to illuminate several enigmatic issues.

Figure 1. What Constitutes an Assessment Centre

The Task Force on Development of Assessment Centre Standards has recommended that a programme be considered an assessment centre only if it meets the following minimum requirements:

1. Multiple assessment techniques must be used. At least one of these techniques must be a simulation.
2. Multiple assessors must be used, and must receive training prior to participating in the centre.
3. Judgements resulting in an outcome (i.e. recommendation for promotion, specific training or development) must be based on pooling information from assessors and techniques.
4. An overall evaluation of behaviour must be made by assessors at a separate time from observation of behaviour.
5. Simulation exercises are used. These exercises are developed to tap a variety of predetermined behaviours and have been pre-tested prior to use to ensure that the techniques provide reliable, objective and relevant behavioural information for the organisation in question.
6. The dimensions, attributes, characteristics or qualities evaluated by the assessment centre are determined by an analysis of relevant job behaviours.
7. The techniques used in the assessment centre are designed to provide information that is used in evaluating the dimensions, attributes or qualities previously determined.

The following activities do not constitute an assessment centre:

1. Panel interviews or a series of sequential interviews as the sole technique.
2. Reliance on a specific technique (regardless of whether it is a simulation or not) as the sole basis for evaluation.
3. Using only a test battery composed of a number of paper-and-pencil measures, regardless of whether judgements are made by a statistical or judgemental pooling of scores.
4. Single-assessor assessment (measurement by one individual using a variety of techniques).
5. Use of several simulations with more than one assessor where there is no pooling of data (i.e. each assessor prepares a report on performance in an exercise and individual, unintegrated reports are used as the final product of the centre).
6. A physical location labelled as an "assessment centre" that does not conform to the requirements noted above.

Source: [9]

Criterion I: Effectiveness Requirements: Job-relatedness

There are six activities which are components of job-relatedness. These include:

(1) Job Analysis. The assessment centre approach is a formal system of varied management simulations intended to assess management ability which helps to assure affirmative action compliance. Effective assessment centres must analyse appropriate job characteristics and develop successful measures of ability on as many levels as possible[10]. Job analysis research will determine the appropriate knowledge, skills, ability and personal characteristics necessary for the job under consideration. For assessment centres to function properly, this initial requirement must be developed as accurately and concisely as possible, since subsequent exercises and measurement criteria will be developed on the basis of these initial findings.

(2) Skill Identification. Data from the job analysis must now be used to determine the skills necessary for performing the job effectively. For the most part, such skills will serve as common labels frequently identified with success in the job. Examples include decision making, organising, planning and interpersonal skills. Since this identification is based on specific job analysis procedures, it is representative of the elements required for effective performance in the target position[11]. These in turn will determine the performance dimensions which assessors will seriously consider in developing exercises and evaluating candidates.

(3) Setting Objectives. Insofar as a clear statement of behavioural and skill objectives must be developed prior to conducting the centre, assessments will provide an accurate reflection of the degree to which a participant already exhibits the desired skills and behaviours[12]. Objectives would involve identifying potential for senior management and/or establishing the individual's training and career development needs, as well as assisting in developing potential which has already been identified. Each would require different exercises and a performance emphasis tailored to the objective in need of developing.

(4) Exercise Development. Working from the job analysis results and the situational data and work samples, exercises need to be developed which would simulate the most critical, essential task areas in the classification. Simulated work exercises often include in-basket exercises, decision-making exercises and computer-based business games. Candidates are often subjected to a variety of psychological tests, analyses, interviews, peer ratings and leaderless group discussions. Each of these activities is assigned a different weight by the human resources professional administrator on the basis of its importance to the position under consideration[7]. The more these exercises emulate the task and job, the greater their validity.

(5) Centre Design. More than most other aspects, the design and delivery of an assessment centre programme is most influenced by an organisation's unique characteristics, human resources, management commitment, policies, personnel and location. To ensure the best possible alignment between the job to be assessed and the candidate, each assessment centre should be designed to reflect and match these organisational factors without sacrificing the principles on which the process was founded[13]. An accurate representation of the organisation and its climate and idiosyncratic profile during the assessment stage means a better transition by the chosen candidate to the actual job itself, since his/her abilities have been tested in a realistic manner within a realistic context.

(6) Observation, Evaluation and Feedback. Since candidates are tested only on job-relevant abilities and behaviour, criteria for evaluation need to be tightly linked to exact job requirements. This, along with the use of multiple raters, will provide a high degree of validity and objectivity to the whole assessment process. Such objectivity will reinforce a candidate's belief in assessment centres as a credible experience, increasing the likelihood of (feedback) acceptance regardless of performance results[11, 14].


Criterion II: Validity

Permeating all job-related considerations are issues dealing with the validity of the assessment centre. Defined simply, validity is the degree to which the centre serves its purpose and measures what it is intended to measure. The validity of the assessment is closely linked to how accurately it measures the candidate's performance in the job dimensions under consideration. Although it is possible to dichotomise between job-relatedness and validity, it is important that they be considered in relation to one another, since validity is a function of how well the requirements of the job are being tested and measured.

There are several types of statistical analysis which should be utilised in evaluating assessment centre predictions. They involve the same basic approach but differ in how the results are presented. The approach involves relating one set of scores to another, typically assessment centre performance ratings with those based on job performance. This is the correlational technique, which is based on the inference that if the effective or ineffective behaviour required of and evaluated by one set of measures (assessment centre exercises) is similar to that required of another set of measures (on-job tasks), the two sets of measures are associated or related to each other. In terms of assessment centre results, a high positive relationship will be obtained when candidates who do well or poorly in the assessment centre also do well or poorly, respectively, on their jobs. If candidates who perform well in the assessment centre perform poorly on their jobs and vice versa, the correlation will be strong but in the opposite (or negative) direction from what would be desired. Finally, if little or no discernible pattern between performance in the assessment centre and that on the job is found, the correlation will be negligible or near zero[15].
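To make the correlational logic concrete, here is a minimal sketch (not from the article) that computes a Pearson correlation between hypothetical assessment centre ratings and later job performance ratings for a small group of candidates; the sign and size of the coefficient are read exactly as described above.

```python
# Minimal sketch (hypothetical data): correlating assessment centre ratings
# with later job performance ratings, as in the validation approach described above.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical overall ratings for eight candidates (1 = poor, 5 = outstanding).
centre_ratings = [4.5, 3.0, 2.0, 5.0, 3.5, 1.5, 4.0, 2.5]
job_ratings    = [4.0, 3.5, 2.5, 4.5, 3.0, 2.0, 4.5, 2.0]

r = pearson(centre_ratings, job_ratings)
print(f"r = {r:.2f}")
# A strongly positive r supports predictive validity; r near zero means no
# discernible pattern; a negative r means the centre ranks candidates in the
# opposite order to their actual job performance.
```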

Criteria-oriented validity, which we need to endorse, refers to the degree of concentration on criteria which are best predicted by the assessment centre[16]. A high degree of criteria-oriented validity can be better ensured when the assessment centre considers the organisation's contextual factors in developing the centre design and can provide trained assessors with better information regarding unique situational demands and role requirements. To establish content validity, assessment centres need to be designed to ensure the correspondence of dimensions and jobs, dimensions and exercises, and exercises and jobs.

The concept of content validity is a more realistic barometer against which to evaluate assessment centre results. Content validity refers to the extent to which scores on a test, or other indices, represent performance within the specifically defined content area the tests purport to sample. Related to the assessment centre process, this simply refers to the extent to which the simulation exercises represent the performance content of the job which they were designed to sample. This is based on an inference that the factors responsible for performance on one set of measures (i.e. exercises) are similar to those on another set of measures (i.e. job performance). For example, if an exercise is designed to measure the concept of leadership, it is content valid if and only if some evidence exists that it samples the requirements of leadership (however defined) on the job. Leadership can only be defined in the context of the particular job in question. It actually does not matter what the concept is called but rather how it is behaviourally defined. Ultimately, the label is unimportant. It is those behaviours, variables or other factors representing the job that are used to define the label. These serve as the only basis for the content validity of that label.

The difficulty in applying the content validity concept to evaluating the assessment centre process, or, for that matter, any other predictive tool, is that it does not readily lend itself to statistical analysis. Oddly enough, because of problems in obtaining accurate statistical results, content validity has emerged as a viable alternative for validating such a programme. The concept of content validity has always been the very basis for the design and implementation of the assessment centre process, although unfortunately it has never been spelled out as such[15]. This is depicted in Figure 2.


Figure 2. Relationships which Must be Established to Show the Content Validity of an Assessment Centre

The figure links three elements: Job Activities, Dimensions and Exercises.

A. Dimensions must be job-related and describe all common and important parts of the job.
B. Exercises must be job-related and represent the common and significant job activities. These exercises must be comparable in complexity and difficulty level to that on the job.
C. Performance dimensions must be observable in the exercises.

Source: [3]
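As a purely illustrative aside (not part of the original article), the sketch below treats the three relationships in Figure 2 as mappings between hypothetical job activities, dimensions and exercises, and checks that each relationship is actually covered by a centre design.

```python
# Hypothetical content-validity audit inspired by Figure 2: each dimension must
# trace back to job activities (A) and be observable in an exercise (C), and each
# exercise must represent job activities (B). All names below are illustrative.
dimension_to_activities = {
    "leadership": ["run weekly team meeting", "resolve staffing conflicts"],
    "planning": ["prepare quarterly budget"],
    "decision making": ["approve credit applications"],
}
exercise_to_dimensions = {
    "in-basket": ["planning", "decision making"],
    "leaderless group discussion": ["leadership"],
}
exercise_to_activities = {
    "in-basket": ["prepare quarterly budget", "approve credit applications"],
    "leaderless group discussion": ["resolve staffing conflicts"],
}

def audit():
    problems = []
    observed = {d for dims in exercise_to_dimensions.values() for d in dims}
    for dim, acts in dimension_to_activities.items():
        if not acts:
            problems.append(f"A: dimension '{dim}' not derived from job activities")
        if dim not in observed:
            problems.append(f"C: dimension '{dim}' not observable in any exercise")
    for ex, acts in exercise_to_activities.items():
        if not acts:
            problems.append(f"B: exercise '{ex}' does not represent job activities")
    return problems or ["all three relationships are covered"]

for line in audit():
    print(line)
```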

The better the selection procedure simulates and resembles the job, the greater the assurance of content validity. The major validity argument used for advocating the assessment centre approach, which involves improvements to content validity, is reflected by efforts to incorporate as many actual job demands as possible into the various tailored simulations[17]. To better determine the training and development needs for each candidate, attempts to isolate the extent to which assessment centre performance is a function of management skills, personality variables or business knowledge need to be undertaken early, thereby increasing the degree of construct validity[16].

While it is recognised that this process is not Utopian and there are minor problems with interpreting correlations, the correlation coefficients obtained in the research on assessment centres have generally been very strong. In almost all cases, the correlations are in the positive direction. Out of 23 studies reviewed by one author, only one was found with a negative correlation. Furthermore, 22 of the studies showed the assessment centre process to be more effective than other approaches; none showed it less effective. Correlations between assessment centre predictions and various on-job performance measures ranged as high as 0.64[18]. Another review of 18 research studies on the assessment centre approach showed it to be consistently related to a variety of job criteria. These correlations averaged 0.40 when the number of promotions beyond the candidate's level was used as the criterion, and 0.63 when managers' ratings of the candidate's promotion potential were used[19].

In examining how the centre measures what it is intended to measure, in essence its validity, there are three issues to be considered at this juncture: assessor training, pre-screening and research support.

Assessor Training. The competence of assessors is a major factor which can distinguish between an accurate assessment and an inaccurate one. When exercises are designed to be validated objectively, the role of the assessors is relatively unimportant, as evaluation is based only on a non-discretionary score-keeping process[20]. The problem arises, however, when exercises are designed to achieve validity through the intersubjective consensus of assessors: for a valid assessment of candidates' abilities, assessors must be knowledgeable about most technical requirements and skilled in interpreting how candidates meet them. This is where the training of assessors becomes crucial in ensuring validity.

Research has indicated that experienced assessors are more proficient in interviewing management candidates and obtaining relevant information about them, in verbally communicating and defending information on these qualifications for management, and in concisely presenting this information in written form[21]. Since proficiency in acquiring, evaluating and communicating information is important for management effectiveness, assessorship training is in itself critical for management training, as well as being key to the intellectually honest formulation and underpinning of the process. Another inexpensive by-product will be the improvement of assessors' managerial skills as a result of the assessment process.

The Task Force on Assessment Centre Standards has published a list of the minimum training goals necessary for assessors (see Figure 3). Whatever the approach to assessor training, the objective must remain one of generating accurate assessor judgements. Any effort to the contrary will give rise to potentially biased assessments, which may invalidate the whole assessment centre process and damage the credibility of the human resource management effort.

Figure 3. Minimum Training Goals for Assessors

1. Thorough knowledge and understanding of the assessment techniques used, including the kinds of behaviours elicited by each technique, relevant dimensions to be observed, expected or typical behaviours, examples or samples of actual behaviour, etc.
2. Thorough knowledge and understanding of the assessment dimensions, including definitions of dimensions, relationship to job performance, examples of effective and ineffective performance, etc.
3. Skill in behaviour observation and recording, including knowledge of the forms used by the centre.
4. Thorough knowledge and understanding of evaluating and rating procedures, including how data is integrated by the assessment centre staff.
5. Thorough knowledge and understanding of assessment policies and practice of the organisation, including restrictions on how data is to be used.
6. Thorough knowledge of feedback procedures where appropriate.
7. Length of training. This may vary due to a variety of considerations that can be categorised into three major areas.


Pre-screening. Organisations choosing to utilise assessment centres should recognise that the pre-screening process helps determine the validity of the assessment centre itself[22]. To the extent that pre-screening can: (1) save money by reducing the number of assessments required, (2) increase morale by decreasing the number of unsuccessful candidates, (3) increase objectivity, reducing the likelihood of lawsuits, and (4) supplement the validity of assessment centres, organisations should pay more attention to this process. The final selection of a particular technique needs to be influenced by both the intrinsic characteristics of the method and its validity, legal defensibility and acceptability. These criteria must not be negotiable throughout this experience.

Research Support. Overall, testimonies to the validity of assessment centres have been well documented. Assessment technology has repeatedly demonstrated validity and reliability even when subjected to controlled research experimentation[11]. Research findings indicate that assessment centres are a good predictor of promotability within organisations[23] and of actual on-the-job performance 75 per cent of the time[24]. These data have similar implications for the selection of outstanding performers, an added testimonial to the process.

Further support for the objectivity and reliability of assessment centre findings was established early in 1973, when the Equal Employment Opportunity Commission issued a consent decree calling for their use to correct existing appraisal problems. This is consistent with findings which indicate that the method is more effective than any other means for identifying and analysing a candidate's management potential regardless of sex and race[25]. Assessment centres have fared just as well in the courts. Rulings have consistently emphasised the consistency and validity of assessment centres as a non-discriminatory and job-related mode of selection and development. This is most encouraging for human resource professionals and administrators concerned about litigious activities arising from the appraisal and evaluation functions for which they are responsible.

Criterion III: Cost-effectiveness

The most common criticism levied against assessment centres is their costliness. Aside from being time-consuming, costs can run as high as several thousand dollars a candidate[26]. Since activities are usually conducted over a few days at a location physically removed from the job site, candidates are away from their jobs, travel and accommodation must be paid for, and evaluators are often company managers assigned to these centres, so expenses do pile up[5]. These figures do not take into account the cost of the psychologists and personnel professionals who operate the centre and are responsible for evaluations which impact not only on the professional and personal lives of the assessees but also on the effectiveness of the supporting organisations prescribing this activity.

One of the major challenges for contemporary human resource professionals must be to estimate the costs of ineffective employee performance in the quest to determine the cost-effectiveness of the assessment centre. One starting point might be to focus on turnover due to lack of ability. If we can enhance the probability of job success by more effective prediction, it is possible to reduce the occurrence of ineffective performance. Some of the costs that need to be calculated are those associated with: (a) start-up time required of a replacement for the incumbent; (b) down-time associated with the incumbent changing jobs either internally or externally; (c) training and/or retraining associated with both the replacement and the incumbent; (d) travel and moving expenses if applicable; (e) the difference in productivity (related to dollars) that might have accrued if the incumbent was effective; and (f) the more difficult, yet undeniably present, psychological impact on the "failed" incumbent, and also on the morale of those surrounding him/her[15].

The question remains: how can the firm start to evaluate the usefulness of an assessment centre? A modest way to begin would be to estimate the assessment centre costs and compare them with what it would cost the organisation for only one person not to succeed in the position for which the assessment centre is to be developed. If the assessment centre programme can prevent even one candidate from being selected who otherwise might have been selected through alternative methods, it has covered its worth. This does not even consider the potential for improved productivity and revenues from the selection of one person who succeeds, both for the firm and for him or herself.

The fact remains, however, that developing and operating an effective appraisal process is neither inexpensive nor simplistic. The question often boils down to whether alternative assessment methods can contribute as much to the assessment of managerial potential as assessment centres. The issue of economic cost must be balanced with the objective of effectiveness. To this end, assessment centres may be regarded as an investment and not a cost, one which must be expanded and resourced in the interest of the organisation's long-run viability. This is especially important given the reality that most assessments are conducted for the purpose of isolating the company's available and potential pool of key management personnel. A realisation that "redundancy begins with recruitment" and a desire to "live slim" and manage with fewer people place a great onus on the selection process, but recruitment is rarely cheap at any level, and the time and money expended on assessment centres can have benefits going far beyond the particular selection system[27]. This is a critical dualism which human resource professionals must carefully weigh in their decision-making matrix.
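The breakeven reasoning above can be made concrete with a back-of-the-envelope sketch. All figures below are hypothetical assumptions, and the cost categories follow items (a) to (f) listed above.

```python
# Hypothetical breakeven sketch: compare the cost of running an assessment centre
# with the cost of one failed selection, using the cost categories (a)-(f) above.
# Every number here is an assumption for illustration, not data from the article.

failed_selection_costs = {
    "(a) start-up time of replacement": 15_000,
    "(b) down-time while incumbent changes jobs": 8_000,
    "(c) training/retraining of replacement and incumbent": 12_000,
    "(d) travel and moving expenses": 5_000,
    "(e) lost productivity vs an effective incumbent": 30_000,
    "(f) morale/psychological impact (rough proxy)": 10_000,
}

candidates_assessed = 12
cost_per_candidate = 4_000          # "several thousand dollars a candidate"
centre_cost = candidates_assessed * cost_per_candidate

failure_cost = sum(failed_selection_costs.values())
print(f"Assessment centre cost:       ${centre_cost:,}")
print(f"Cost of one failed selection: ${failure_cost:,}")

# The article's test: the centre covers its worth if it prevents at least one
# poor selection that an alternative method would have made.
breakeven_failures_avoided = centre_cost / failure_cost
print(f"Failed selections that must be avoided to break even: {breakeven_failures_avoided:.2f}")
```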

Criterion IV: Assessment Centre Outcomes (Descriptive or Predictive)

Most assessment programmes are geared to a specific outcome, such as selecting a candidate for further advancement, using data collected to help place an individual in a future assignment, or prescribing development or training needs. Assessment centre outcomes are often best reflected in the composite or overall rating provided. Centres that are geared to assist with selection or advancement activities rely on an overall rating. Centres which do not provide an overall rating but emphasise individual dimensional ratings, such as leadership and decision making, tend to be used for placement and/or development purposes. Often, however, these distinctions are blurred or confused.


If an overall rating is provided, it makes a big difference whether this rating is a composite arrived at by averaging or by weighting dimensions according to some predetermined format. These result in descriptive judgements. Most centres for entry-level sales or management jobs are of this nature. Assessors are trained to observe behaviours and determine whether or not they occurred. The evaluation process focuses on whether behaviours did or did not occur, and exercise reports and evaluation discussions are used to describe what did occur. For example, in-basket exercises can be scored based on the relationship of a response to an item and its assumed importance to an external criterion.

A second kind of overall assessment rating is the result of inferences and judgement. These result in predictive outcomes. Assessors in this type of setting are trained to integrate information in order to make a prediction concerning the overall likelihood of success in the target job. These kinds of judgements are more common in assessment centres for higher-level management jobs. The emphasis of the evaluation process shifts from a discussion of whether or not a behaviour has occurred to the impact of observed behaviour when predicting subsequent performance. This kind of discussion also requires agreement on whether or not the behaviour did occur, but goes beyond a descriptive dialogue to indicate whether the behaviour appears to be typical (and therefore highly predictive) or less typical. Assessors are trained to integrate data from many simulations rather than "scoring" each simulation in a discrete fashion[28].

There are major differences between overall ratings arrived at by description and by prediction approaches. As noted, description-oriented assessment programmes tend to rely on a composite assessment score. Assessors are trained either to weight specific dimensions to reflect job analysis findings or to add up and average individual ratings in order to determine a composite score. Assessors may use formal or informal approaches depending on their training or the folklore of the centre. In any event, the goal of these centres is to ensure consistency in ratings. Discussions are designed to resolve differences in assessor ratings and to arrive at a consensus decision on individual dimensions. Most of the evaluation session is aimed at making sure that the individual dimension ratings are as accurate and as descriptive as possible. When an overall rating is made, there is usually little disagreement, since it represents an averaging of composite dimension ratings.

Prediction-driven assessment ratings tend to look for patterns of performance rather than a composite. Compensatory judgements often appear. These are judgements which reflect the fact that an individual may perform differently in different situations. For example, an individual's leadership style may vary across simulations. Let us hypothesise that a participant was very effective in one situation and demonstrated a great deal of leadership talent. In another group exercise, however, this person did little to lead others but was a helpful, supportive group member. Rather than averaging leadership ratings (i.e. very effective as a leader in one situation plus less effective as a leader in another equals an "average" leadership rating), prediction-oriented centres tend to look at the context in which the behaviour occurred. For example, this individual may be most effective when leading others in structured tasks and have more difficulty taking the lead when the task is unstructured; or this person may be best when leading others in competitive situations but less effective when faced with co-operative tasks.

The overall assessment rating reflects more than a composite rating. It is a prediction of overall effectiveness; i.e. an outstanding rating means that this individual will have little difficulty in almost any aspect of the more complex management jobs simulated at the centre, and a poor rating suggests that this individual will have difficulty making a transition to more complex job demands[28].

As seen from this discussion, description-oriented programmes emphasise dimensions whereas prediction-oriented programmes focus on simulations. Prediction-oriented approaches allow greater flexibility for placement, development and selection outcomes, while description-oriented programmes work best when they focus exclusively on selection issues.

Unfortunately these distinctions are often blurred. Description-oriented assessment programmes require a less sophisticated staff. Assessors can come from almost anywhere and may not need to know much about the host organisation. The training of assessors is usually shorter and less detailed than in prediction-oriented programmes. The typical training consists of teaching assessors to indicate whether or not a behaviour occurred, and assessors are encouraged to use checklists, behaviour observation report forms, predetermined report forms or other techniques to ensure consistency. Description-oriented programmes tend to use commercially available assessment exercises, forms and rating instruments. If a job analysis has been performed, its purpose was to verify that existing commercially prepared exercises do reflect aspects of performance in the host organisation. As noted, these kinds of programmes are very popular for assessing entry-level sales and management jobs.
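To show the arithmetic behind a description-oriented overall rating, the following sketch (with hypothetical dimensions, ratings and weights) averages each dimension across assessors and then applies job-analysis weights to produce a single composite score. A prediction-oriented centre, as described above, would instead examine the pattern of ratings across simulations before making its judgement.

```python
# Hypothetical description-oriented composite: average each dimension across
# assessors, then apply job-analysis weights to produce one overall rating.
# Dimension names, ratings (1-5 scale) and weights are illustrative assumptions.

ratings_by_assessor = {
    "assessor 1": {"leadership": 4, "planning": 3, "decision making": 4},
    "assessor 2": {"leadership": 5, "planning": 3, "decision making": 3},
    "assessor 3": {"leadership": 4, "planning": 4, "decision making": 4},
}

# Weights reflecting (hypothetical) job analysis findings; they sum to 1.0.
weights = {"leadership": 0.5, "planning": 0.2, "decision making": 0.3}

def composite(ratings, weights):
    """Weighted composite of per-dimension averages across assessors."""
    dims = weights.keys()
    averages = {
        d: sum(r[d] for r in ratings.values()) / len(ratings) for d in dims
    }
    overall = sum(averages[d] * weights[d] for d in dims)
    return averages, overall

averages, overall = composite(ratings_by_assessor, weights)
for dim, avg in averages.items():
    print(f"{dim}: {avg:.2f}")
print(f"overall composite rating: {overall:.2f}")
```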

Prediction-oriented assessment programmes tend to require a more sophisticated assessment staff. While some description-oriented programmes have exceptionally talented assessors, many operate with assessors who are available rather than requested. Assessors in a prediction-oriented programme must have extensive knowledge about the target jobs and must be able to differentiate between important aspects of these jobs. For example, assessors should be able to determine which positions require considerable work as an individual contributor versus which jobs require team effort; which jobs demand little supervision of the incumbent versus which jobs are characterised by close supervision; which positions are fundamentally unstructured versus which jobs follow standard procedures; and which positions require an incumbent to think on his or her feet versus which positions allow the individual a great deal of time before giving a response. In contrast to description-oriented programmes, prediction-oriented assessment programmes tend to develop rather than purchase simulations, and are most frequently used for middle and upper management positions[28].

Is There a Future for an Effective Assessment Centre?

The key to proper management selection is not to focus on one evaluation point, or even on several points, but to match the skills of the candidate with the primary demands of the job at the present time[20]. Assessment centres are an important tool for human resource managers who are empowered to match individuals to jobs on a constant, consistent and equitable basis.

Although assessment centres are in a sense an independent activity, in practice they cannot be isolated from the total organisational context. The objectives, values and procedures of the organisation shape every stage of the assessment centre process[27]. To be effective, they must become integrated into the total organisational system for dealing with the identification and development of management talent. In planning and implementing an assessment centre programme, other dimensions of the HRM system must be considered, such as compensation, training, development and career planning. Part of the plan for gaining a realistic commitment should include an understanding of where the centre will fit in and reinforce the existing promotion system, or whether a new design will be required.

Assessment centre results must, accordingly, be kept in realistic perspective: they are not an end in themselves, but part of a larger appraisal system. If seen as the sole determinant of future career progress, the assessment process will be viewed by employees as threatening; if, however, results are used to appraise an individual's strengths and weaknesses, with options for improving areas of deficiency, then the centre can be a positive force for developing future talent within the company. Also, as the need to document performance assumes a greater role in the future, the failure of most performance appraisal systems will become a major problem in need of a solution. Most of the criticisms and pitfalls surrounding assessment centres occur when short-cuts are taken. Organisations will occasionally dilute a full job analysis in an effort to "speed up" the assessment procedures, invalidating the whole process. On aggregate, the primary threats come more from poor design and implementation than from any characteristic deficiencies specific to assessment centres themselves. Despite some pitfalls, the assessment centre is considerably more effective than most other means for analysing and identifying a candidate's management potential. Further examination and rejection of alternative appraisal systems, such as psychological measurement, which has already been rejected by human resource professionals in selection decisions, and interviews and other performance reviews, which are laden with proven inherent problems, will narrow the range of possible instruments to content-valid simulation instruments, the basis of assessment centre philosophy and technology.

Assessment centres must be entered into and developed thoughtfully to provide the basis for effective human resource utilisation. As an extension of the organisation's HRM policy, commitment to the concept of the assessment centre must exist for it to succeed. To this end, support for the programme must be preceded by an understanding of the HRM approach to management.

It appears to be quite clear that the assessment centre does have a future if it adheres to the model balanced by job-relatedness, validity, cost-effectiveness and the measurement of outcomes. In the quest to enhance effectiveness, an assessment centre will demand a job analysis with the skills to be measured being observable, relevant and measurable. Instruments will need to be prepared that reflect the job demands, a staff of assessors will need to be trained, and the process must be properly integrated into a system that most effectively uses the information for the proper development of the individual and the organisation. This is the balance which will guarantee its future.

References

1. Friedman, B.A. and Mann, R.W., "Employee Assessment Methods Assessed", Personnel, Vol. 58, 1981, pp. 69-74.

2. Jaffee, C.L. and Sefcik, J.T., "What is an Assessment Center?", The Personnel Administrator, Vol. 25, February 1980, pp. 40-43, 67.
3. Byham, W.C., "Starting an Assessment Center the Correct Way", The Personnel Administrator, Vol. 25, February 1980, pp. 27-34.
4. Nichols, L.C. and Hudson, J., "Dual Role Assessment Center: Selection and Development", Personnel, Vol. 60, 1981, pp. 380-86.
5. Werther, W.B. et al., Canadian Personnel Management and Human Resources, 2nd ed., McGraw-Hill Ryerson, Toronto, 1985, pp. 256-7.
6. Lee, C., "Assessment Centers: A Method with Proven Methods", Training, Vol. 22, 1985, pp. 69-70.
7. Steines, J., "Assessment Centers: Promoting the Right Employee", Security Management, Vol. 29, 1985, pp. 23-5.
8. Howard, A., "An Assessment of Assessment Centers", Academy of Management Review, Vol. 17, 1974, pp. 115-34.
9. "Standards and Ethical Considerations for Assessment Centre Operations", chaired by Moses, J.L., Task Force on Assessment Centre Standards, 1978.

10. Quarles, C.L., "Public Services Managerial Selection Methods: A Comparison with Performance Appraisal", Public Administration Quarterly, Vol. 7, 1983, pp. 310-22.
11. Cohen, S.L., "Pre-packaged vs Tailor-made: The Assessment Debate", Personnel Journal, Vol. 59, 1980, pp. 939-91.

12. Quick, J.C., Fisher, W.A., Schkade, L.L. and Ayers, G.W., "Developing Administrative Personnel through the Assessment Center Techniques", The Personnel Administrator, Vol. 25, February 1980, pp. 44-6, 62.
13. Russell, C.J., "Individual Decision Processes in an Assessment Center", Journal of Applied Psychology, Vol. 70, 1985, pp. 737-46.
14. Teel, K.S. and Dubois, H., "Participants' Reactions to Assessment Centers", Personnel Administration, Vol. 28, 1983, pp. 85-91.
15. Jaffee, C.L. and Cohen, S.L., "Improving Human Resource Effectiveness through Assessment Center Technology: Emergence, Design, Application and Evaluation", in Miller, E.L., Burack, E.H. and Albrecht, M.A. (Eds.), Management of Human Resources, Prentice-Hall, Englewood Cliffs, New Jersey, 1980, pp. 350-79.
16. Thornton, G.C. and Byham, W.C., Assessment Centers and Managerial Performance, Academic Press, Toronto, 1982.
17. Frank, F.D. and Preston, J.R., "The Validity of the Assessment Center Approach and Related Issues", Personnel Administrator, Vol. 27, 1982, pp. 87-95.
18. Byham, W.C., "Assessment Centers for Spotting Future Managers", Harvard Business Review, Vol. 48, 1970, pp. 150-60.
19. Cohen, B.M., Moses, J.L. and Byham, W.C., The Validity of Assessment Centers: A Literature Review, Monograph II, Developmental Dimensions Press, Pittsburgh, Pennsylvania, 1974.
20. Cunningham, R.B. and Olshfski, D.F., "Evaluating Task Leadership: A Problem for Assessment Centers", Public Personnel Management, Vol. 14, 1985, pp. 293-9.


21. Lorenzo, R.V., "Effects of Assessorship on Managers' Proficiency in Acquiring, Evaluating and Communicating Information about People", Personnel Psychology, Vol. 37, 1984, pp. 617-34.
22. Warmke, D.L., "Pre-selection for Assessment Centers: Some Choices of Issues to Consider", Journal of Management Development, Vol. 4, 1985, pp. 18-39.
23. Collins, R.D., "The Assessment Center", Management Insights, Vol. 83, 1982, pp. 85-6.
24. Bucalo, J.P., "The Assessment Center: A More Specified Approach", Human Resource Management, Fall 1974, pp. 2-13.
25. Badawy, M.K., "Managing Career Transitions", Research Management, Vol. 26, 1983, pp. 28-31.
26. Edwards, M.R., "OJQ Offers Alternative to Assessment Center", Public Personnel Management Journal, Vol. 12, 1983, pp. 146-55.
27. Rothwell, S., "Manpower Matters: The Use of Assessment Centers", Journal of General Management, Vol. 10, 1985, pp. 79-84.
28. Moses, J., "Assessment Centers", in Craig, R. (Ed.), Training and Development Handbook, 3rd ed., McGraw-Hill, New York, 1987, pp. 248-62.