
™ DRU-A PROFESSIONAL FURTHER EDUCATION AND TRAINING COLLEGE® - © Copyrighted

Conduct Moderation of Outcomes-based Assessments

DAPFETC


Dirk Andrews

First edition, 2009

Published by: DRU-A Professional Further Education and Training College

Distributed by:

DRU-A Professional Further Education and Training College (DAPFETC)
Telephone: +27-53 244 0880
Facsimile: +27-086-274 0207
Email: [email protected] / [email protected]

This manual is based on the following Unit Standard: Conduct moderation of outcomes-based assessment.

NLRD Number: 115759

Level: 6.

Credits: 10.

PART 1: MODERATION OF ASSESSMENT

INDEX

CHAPTER 1: MODERATION WITHIN THE CONTEXT OF AN OUTCOMES-BASED ASSESSMENT SYSTEM 8

Introduction. 8

The assessment policy and procedure. 9

Procedure for the appointment of Moderators. 17

Requirements for moderating. 17

Moderation of examination scripts. 19

Moderation of assessment results. 19

Principles of assessment. 20

Summary of chapter 1. 25

CHAPTER 2: PLAN AND PREPARE FOR MODERATION 26

Introduction. 26

Planning for moderation. 28

Components of a moderation system. 29

Establishing the scope of the moderation. 31

Candidates with special needs. 32

Moderation of recognition of prior learning (RPL) assessment. 33

Designing a moderation system. 34

Summary of chapter 2. 34

CHAPTER 3: CONDUCT MODERATION 36

Introduction. 36

How does moderation occur? 36

Appeals procedure. 42

Summary of chapter 3. 44

CHAPTER 4: ADVISE AND SUPPORT ASSESSORS 46

Introduction. 46

Providing advice to Assessors. 46

Disclosure of information. 47

Award recommendations. 49

Report, record and administer moderation. 49

The management structure. 51

Summary of chapter 4. 52

CHAPTER 5: REVIEW MODERATION SYSTEMS AND PROCESSES 55

Introduction. 55


The improvement of moderation systems. 57

Summary of chapter 5. 62

LIST OF FIGURES

Figure 1: The position of moderation in the quality assurance system. 9

Figure 2: The assessment plan compared to assessment of learner achievement. 11

Figure 3: The relationship between assessment of learning outcomes and moderation of assessment. 25

Figure 4: Planning and preparing for moderation. 35

Figure 5: Conducting moderation. 45

Figure 6: The certification process. 49

Figure 7: The role and responsibilities of the Moderator. 53

Figure 8: Administration of moderation. 54

Figure 9: Structure of the moderation system. 63

LIST OF TABLES

Table 1: Checklist for the moderation of assessment design. 15

Table 2: Checklist for the moderation of the Assessor. 44

LIST OF EXERCISES

Exercise 1: Advise and support Assessors. 15

Exercise 2: Collect, analyse, organize and evaluate assessment information. 20

Exercise 3: Meeting the requirements for consistency. 23

Exercise 4: Principles of assessment. 25

Exercise 5: Logistical arrangements. 30

Exercise 6: Plan, prepare and conduct moderation. 31

Exercise 7: Materials to be moderated. 32

Exercise 8: Special moderation needs. 32

Exercise 9: Moderation of RPL. 33

Exercise 10: Role players in moderation. 34

Exercise 11: Objectives with moderation. 40

Exercise 12: Methods of moderation. 42

Exercise 13: Evidence for moderation. 43


Exercise 14: Planning for moderation. 44

Exercise 15: Moderation procedures. 44

Exercise 16: Circumstances for moderation. 44

Exercise 17: ETQA requirements in terms of assessment design. 47

Exercise 18: Communication of assessment results. 48

Exercise 19: Recording and communication of moderation findings. 48

Exercise 20: The Moderator’s role in the approval of award recommendations. 49

Exercise 21: ETQA policy regarding record keeping. 51

Exercise 22: Learning institution’s policy regarding record keeping. 51

Exercise 23: Moderation documentation. 51

Exercise 24: Components of a moderation system. 55

Exercise 25: Evaluation of a moderation checklist. 58

PART 2: VERIFY MODERATION OF ASSESSMENT

PREFACE 64

CHAPTER 6: PLAN AND PREPARE FOR VERIFICATION 67

Introduction. 67

Confirming the scope and purpose of verification. 68

Selecting verification methods. 68

Planning the scope and nature of verification activities. 69

Preparing verification documentation. 70

Doing a Verifier visit. 85

Physical and human resources required for verification. 86

Verification techniques. 88

Summary of chapter 6. 90

CHAPTER 7: CONDUCT VERIFICATION 91

Introduction. 91

The verification plan. 93

Verifier preparation and documentation review. 94

The verifier visit. 94


Verifier follow-up. 95

Checking and judging the verification process. 95

Requirements for validity. 96

Requirements for reliability and consistency. 100

Evaluating the verification plan and processes. 102

Summary of chapter 7. 107

CHAPTER 8: VERIFICATION STATISTICS 109

Introduction. 109

Frequency distribution. 110

The normal distribution. 113

Measures of variability. 116

Test validity. 121

Correlation and prediction. 123

The scatter plot. 129

Interpreting correlation coefficients. 130

Correlation vs. causation. 131

The influence of sample size. 132

Test reliability. 133

Determining test reliability by means of the test-retest method. 135

The equivalent test method. 137

The divided half method. 137

Determining test item/learning outcome congruence. 141

Item analysis. 144

Determining the effectiveness of distracters in multiple-choice items. 150

Summary of chapter 8. 151

CHAPTER 9: RECORD AND REPORT VERIFICATION FINDINGS AND RECOMMENDATIONS 153

Introduction. 153

Reporting verification findings. 154

Content of the verification report. 154

Verification recommendations. 156

Record keeping. 157


Confidentiality of information. 157

Summary of chapter 9. 158

CHAPTER 10: ADVISE AND SUPPORT MODERATORS AND PROVIDERS 159

Introduction. 159

Feedback to Moderators. 160

ETQA requirements. 161

Improvement of moderation processes. 162

Handling disputes. 164

Summary of chapter 10. 166

REFERENCES 167

ANNEXURE A: FACILITATOR’S NOTES ON EXERCISES 169

LIST OF FIGURES

Figure 10: Potential performance in an outcomes-based learning system. 65

Figure 11: Symmetrical vs. asymmetrical distribution curves. 114

Figure 12: Distribution curve of 30 scores. 115

Figure 13: The effect of skewing on the mean, median and mode. 115

Figure 14: A comparison of the degree of dispersion between the test scores at two different learning institutions. 118

Figure 15: Example of a scatter plot. 128

Figure 16: Example of a scatter plot depicting a strong relationship between two variables. 129

Figure 17: Example of linear relationships. 130

LIST OF TABLES

Table 3: Register of learners who completed courses. 70

Table 4: Quality assurance of learner achievement. 75

Table 5: Assessment activities evaluation checklist. 77

Table 6: Evidence evaluation form. 78

Table 7: Aspects to observe regarding the assessment situation. 79

Table 8: Quality assurance of learner achievement checklist. 79

Table 9: Moderation process checklist. 80

Table 10: Evaluate verification plan and process. 81


Table 11: Irregularities report. 83

Table 12: Verification report and result summary. 85

Table 13: Checklist for the evaluation of a verification plan. 106

Table 14: Frequency distribution of 30 scores. 111

Table 15: Determining the median. 112

Table 16: Test scores at two different learning institutions. 117

Table 17: Determining the mean. 120

Table 18: Rank difference. 127

Table 19: Example of the test-retest method. 136

Table 20: First example of the divided half method. 138

Table 21: Second example of the divided half method. 140

Table 22: Worksheet for item analysis. 146

Table 23: Determining two indexes. 147

Table 24: The discrimination index. 149


CHAPTER 1: MODERATION WITHIN THE CONTEXT OF AN OUTCOMES-BASED ASSESSMENT SYSTEM

INTRODUCTION

Moderation forms part of the system of quality assurance in learning. You probably

are already qualified in assessment of learning outcomes. Moderation of assessment

is a further step in the process. This means that you probably already know how to

design and develop assessment instruments and how to plan and conduct

assessment of learning outcomes. Being knowledgeable and experienced in

assessment will definitely be to your advantage in this course. Nevertheless, we will

briefly touch on the assessment issues that are necessary for moderation purposes.

The quality assurance specialists among you will probably wish to go even one step

further, by gaining competence in verification of moderation. We will briefly cover

verification, seeing that the moderator and assessor should keep in mind that their

efforts will, eventually, be verified before candidates can be issued accredited

certificates.

The position of moderation in the quality assurance process can be illustrated as follows:

Figure 1: The position of moderation in the quality assurance system. [The figure shows assessment of learning (which can include a first and second Assessor, and in exceptional cases even additional Assessors), followed by moderation of assessment (done by internal quality assurance bodies, sometimes called “internal verification”), followed by verification of moderation (normally done by external quality assurance bodies, sometimes called “external verification”), all within the National Qualifications Framework (NQF) with its quality assurance bodies (SAQA, NSBs, SGBs, SETAs, ETQAs) and procedures (standard setting, legislation, the National Skills Development Strategy, etc.).]

Specific outcome: Demonstrate understanding of moderation within the context of an outcomes-based assessment system.

Assessment criteria:
1. Moderation is explained in terms of its contribution to quality assured assessment and recognition systems within the context of principles and regulations concerning the NQF.
2. Key principles of assessment are described in terms of their importance and effect on the assessment and application of the assessment results. Examples are provided to show how moderation may be effective in ensuring that the principles of assessment are upheld.
3. Examples are provided to show how moderation could verify the fairness and appropriateness of assessment methods and activities used by Assessors in different assessment situations.

Moderation is defined as follows in the ETDQA Guidelines for Moderation dated

October 2003: “Moderation is a process that ensures that assessments conducted

by registered assessors meet the specified outcomes as described in the NQF

standards and qualifications, and are fair, valid and reliable. Moderation is the

responsibility of providers, who may train and register members of their own staff or

bring in moderators from outside to perform the function.”

THE ASSESSMENT POLICY AND PROCEDURE

Seeing that we work in a system, it is only logical that the assessment Policy and

Procedure will include moderation. Furthermore, the Moderator must ensure that an

assessment policy and procedure exists, that it is in line with ETQA requirements

and that it is adhered to in the planning and conducting of assessment of learner

achievement. The following are the elements that an assessment Policy and

Procedure should have:

1. Management of Assessment. Management of assessment consists of the

following processes:

1.1. The assessment plan. The Facilitator/Assessor must discuss the

assessment plan with all learners. Both the Assessor/Facilitator and

candidate/learner must sign the assessment plan once it has been

discussed with him or her.

1.2. Processing of Examination Marks and Examination Scripts. Both

formative and summative assessment should be provided for. For

formative assessment learners may be assessed in terms of “class

attendance”, participation in interactive communication during lectures

and objective-type questionnaires. Summative (final) examinations

must always be done under supervision, unless done online, in which

case the online platform will provide the required security and

confidentiality of information. Final examinations may also be done at

the workplace of the learner. Remember that the Assessor and/or

Facilitator are not the only people who can supervise examinations.

The employer or supervisor can also do so.

1.3. Communication of results. Results must be communicated in confidence, and learners are to receive the results first.

1.4. Safe-keeping of Examination Scripts. Examination scripts must be kept

safe and only people who are authorised to see the scripts, may be

allowed access.

1.5. The Memorandum. Both the contents and the safekeeping of the

memorandum must be managed.

1.6. Compilation and Moderation of Question Papers. This is discussed in

detail later in this manual, seeing that it concerns moderation more

directly.

1.7. Reporting Procedures. It must be specified clearly when, where and

how learners will receive examination results.

2. Assessment of learner achievement. In this respect the assessment policy

should cover the following:


2.1. It must be clear whether a course/subject/year mark is required and if it

will count towards the final mark.

2.2. Time to be spent on learning per Unit Standard, module or subject.

2.3. Hours of contact learning versus self-study and experiential learning constitute notional hours, which will ultimately contribute towards credits.

2.4. In what format final marks will be expressed (mostly as a percentage)

and the weight of each assessment (theoretical, practical, formative,

etc).

2.5. Policy in respect of adjustment of marks, e.g. where a learner is not

competent based on a small margin or misses a distinction by, say,

less than 2 percent.

2.6. Procedure for the briefing of learners on what will be expected of them

in terms of assessment.

The difference between management of assessment and management of learner

achievement can be illustrated as follows:

Figure 2: The assessment plan compared to assessment of learner achievement. [The figure depicts the assessment plan as comprising processing of marks and scripts, reporting/communication of results, safekeeping of scripts, compilation of instruments, the memorandum and moderation, and assessment of learner achievement as comprising the year mark, time spent, contact and experiential learning, the final mark, adjustment of marks and briefing of learners.]

3. The Purpose of Assessment. The purpose of formative and summative

assessment must be defined.



4. The Role of the Learners in Assessment. It is important that the learner

understands from the outset what his or her role and responsibilities are

regarding the assessment. The learner needs to understand what the process

is and why it is so. The learner needs to know what to expect from the

Assessor. The Assessor needs to explain to them what he/she expects from

the learners. The learner and Assessor should both be satisfied that the

timing of the assessment is suitable, that the opportunities identified are

suitable, and the place of assessment is suitable.

Informing the learners about their assessment is also important. The learner

may, based on his or her maturity and experience, be in a position to alert the

Assessor to other aspects that the latter should consider in planning the

assessment.

The learner should be questioned early in the learning process on his or her

prior experience, in order to determine his or her knowledge and skills. This

will be used to identify relevant unit standards to be achieved.

The language in which the assessment will be conducted is decided upon

through consultation with the learner. If it is the learning institution’s policy to

offer learning in one language only, this should be stated clearly, in which

case proficiency in the particular language becomes an entry requirement.

Administrative arrangements, such as appropriate documentation, records and forms, the venue for assessment, etc., are also discussed with the learners.

5. Assessment Design. Assessment design should cover the following:

5.1. The process of assessment design must meet the ETQA requirements.

5.2. The requirements of the particular Unit Standard or qualification.

5.3. The appropriate assessment materials, e.g. question paper, checklist, portfolio of evidence.

6. Moderation of the design of assessment. Moderation of assessment

design ensures that the assessment conducted is consistent, accurate and

well designed. The process includes design, implementation and review. The

following questions are used to verify the consistency, accuracy and quality of

design:

6.1. Does each test item measure an important learning outcome included

in the test specification?

6.2. Is each item appropriate for the particular learning outcome to be

measured?

6.3. Does each item present a clearly formulated task?

6.4. Is the item stated in simple, clear language?

6.5. Is the item free from extraneous clues?

6.6. Is the difficulty of the items appropriate?

6.7. Is each test item independent, and are the items as a group free from

overlapping?

6.8. Do the items to be included in the test provide adequate coverage of

the test specifications?

Most ETQAs will require at least the following in terms of assessment design:

1. The design of the instrument (test, written assignment, portfolio, practical

demonstration and tasks, observation, interviews, combination of tools, etc) is

appropriate (meets the outcomes as specified in the standards, is at the right

level, etc.)

2. The design of the instrument is based on information taken from relevant

source documents (unit standards, and any other documents which prescribe

what must be assessed and the criteria on which judgements will be made).

3. The design of the instrument is linked to an appropriate assessment strategy

(e.g. taking into account opportunities for integrated assessment, or for

gathering naturally-occurring evidence).

4. Instructions to learners are clear and unambiguous.

5. Time given for gathering and presentation of evidence (whether in one sitting

or over time) is sufficient to allow an average learner to demonstrate competence.

6. There is a relationship between course methodology, content and the

assessment (not applicable to RPL systems).

7. Grading design (assessment criteria, issues of weighting, format of

judgements, etc) is done concurrently with instrument design, and is

compatible with the instrument.

8. Assessment design includes the development of an assessment guide laying

out details and instructions for the assessment activity. Explicit grading

instructions are developed (for marking, recording on observation sheets, or

evaluating a product such as a lesson plan or training event, etc.)

9. The design makes provision for special needs without compromising the

validity of the assessment.

10. The assessment is implementable within any reasonable site, cost and time

requirements.

The following is an example of a checklist that can be used for moderation of

assessment design:

THE ASSESSMENT INSTRUMENT (answer each item Yes or No)

1. Was the assessment instrument designed in accordance with the quality assurance policy?
2. Were instructions to the learners clear and unambiguous?
3. Was the assessment instrument sufficient to protect the integrity of standards and qualifications?
4. Were the choice and design of assessment methods and instruments appropriate to the unit standards and qualifications being assessed?
5. Is the assessment instrument consistent, accurate and well designed?
6. Does the assessment instrument make provision for reassessment?
7. Will it be necessary to redesign the assessment instrument?
8. Has the memorandum been prepared according to the organisation’s quality assurance policy?
9. If annotated drawings are required, do complete drawings with annotations appear in the memorandum?
10. Is the design of the assessment instrument linked to an assessment strategy? (Environmental analysis to find the best assessment opportunities and approach.)
11. Is the grading design compatible with the assessment instrument? (Assessment criteria, weighting, format for judgements, etc.)
12. Is the assessment instrument implementable with regard to reasonable site, costs and time requirements?
13. Are marks for sections and subsections shown clearly?
14. Did the assessment instrument make provision for special needs without compromising the validity of the assessment?
15. Was the assessment instrument career- and practice-oriented?
16. Does the assessment instrument endeavour to determine the attitude of the learner towards his or her vocation as well as his or her sense of responsibility towards his or her vocation?
17. Are critical cross-field outcomes also assessed?

Table 1: Checklist for the moderation of assessment design.
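The checklist in Table 1 is completed with a Yes/No judgement and, where necessary, a comment per item. Purely as an illustration (not part of the unit standard or of Mentornet's procedures), such a checklist could be captured electronically so that outstanding items feed directly into the moderation report; the sketch below uses hypothetical names (ChecklistItem, outstanding_items) and Python as the illustration language.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ChecklistItem:
        """One row of a moderation checklist such as Table 1."""
        number: int
        question: str
        answer: Optional[bool] = None   # True = Yes, False = No, None = not yet checked
        comment: str = ""

    def outstanding_items(checklist: List[ChecklistItem]) -> List[ChecklistItem]:
        """Items answered 'No' or not yet answered; candidates for the moderation report."""
        return [item for item in checklist if item.answer is not True]

    # Example with two of the Table 1 questions.
    checklist = [
        ChecklistItem(1, "Was the assessment instrument designed in accordance "
                         "with the quality assurance policy?", answer=True),
        ChecklistItem(2, "Were instructions to the learners clear and unambiguous?",
                      answer=False, comment="Instruction 3 is ambiguous; revise."),
    ]
    for item in outstanding_items(checklist):
        print(f"Item {item.number}: {item.comment or 'not yet checked'}")

Recording the answers in this way is only one possible approach; a paper checklist filed with the moderation report satisfies the same requirement.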

7. Marking requirements. The following guidelines regarding reaching an

assessment result should be borne in mind:

7.1. Assessors should work to explicit grading instructions.

7.2. Adequate and specific training must be provided for Assessors to arrive

at comparable results, e.g. when large groups of Assessors such as

markers are required in a particular session. (To mark examination

scripts, assignments or portfolios.)

7.3. Appropriate documentation (mark memos, observation sheets, etc)

should be provided.

7.4. Indicators for the identification of irregularities should exist (false evidence, work has not been done by the registered learner, etc).

7.5. A system for dealing with identified irregularities must be in place.

7.6. The capturing and recording (electronic or manual) of results must be

checked at least twice.

7.7. Results must be processed timeously.

7.8. An appeals system for learners to query their results must be in place.

7.9. Learner results must be submitted to the ETQA for certification

according to the specified procedures.

7.10. Feedback about the assessment must be given to all relevant parties, including candidates, Facilitators, Moderators, etc as applicable.

EXERCISE 1: As Moderator, how would you advise and support Assessors?

8. Admission to culminating assessments. The conditions under which a

candidate will be admitted to culminating assessments (examinations) must

be specified.

9. Examination Irregularities and disciplinary measures. The manner in

which examination irregularities or any other irregularities concerning

assessment will be dealt with, should be explained.

10. Powers of the Assessor. The powers of the Assessors in terms of who may

be admitted to a place of assessment, dealing with candidates and problems

they may have, should be explained.

11. Appointment of Assessors. The Policy and Procedure should specify who

appoints Assessors, what the requirements for an Assessor are, and what

their responsibilities will be.

12. Appointment of Moderators. The Moderators will be appointed by a person

or committee authorised to do so. The Moderator’s responsibilities should be

included in his or her appointment and should include to:

12.1. Moderate draft question papers.

12.2. Re-mark examination scripts of these subjects, in accordance with the

instructions on re-marking, where applicable. Yes, it is admissible for a

Moderator to remark examination papers, but this will normally be done by a second Assessor, rather than the Moderator.

12.3. Moderate a sample of or all the scripts in accordance with the

provisions for moderating. Example: “A sample is normally not less

than 5 scripts or 10% of the scripts, if 10% is more than 5, i.e. in the

event that there are more than 50 learners on a course.”
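The sampling rule quoted above (at least 5 scripts, or at least 10% of the scripts once 10% of the total exceeds 5, i.e. once there are more than 50 scripts) amounts to a simple calculation. The sketch below is an illustration only, in Python; the function name moderation_sample_size is hypothetical, and rounding the 10% figure up is an assumption, since the manual does not say how fractions are treated.

    import math

    def moderation_sample_size(total_scripts: int) -> int:
        """Scripts to moderate: at least 5, or 10% of the total when that is larger.

        Assumes the 10% figure is rounded up and that the sample never exceeds
        the number of scripts actually available.
        """
        ten_percent = math.ceil(0.10 * total_scripts)
        return min(total_scripts, max(5, ten_percent))

    # Examples: 30 scripts -> 5 moderated; 80 scripts -> 8; 240 scripts -> 24.
    for n in (30, 80, 240):
        print(n, moderation_sample_size(n))

For 3 scripts the function returns 3, reflecting the assumption that the sample cannot exceed the scripts available; the manual itself does not address groups smaller than 5.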

PROCEDURE FOR THE APPOINTMENT OF MODERATORS

The following additional procedures apply in respect of the appointment of

Moderators for exit-level courses (towards a national qualification):

1. The course leader/Facilitator concerned must make arrangements with the

Moderators involved, either telephonically or in person, and must obtain their

approval.

2. For each subject/course/module a Moderator must be appointed internally

and he or she must first moderate the question paper and memorandum

before they are submitted to the person responsible for quality assurance in

the organisation.

3. The official appointment of moderators must be dealt with by the person

responsible for quality assurance in the organisation.

REQUIREMENTS FOR MODERATING

The following requirements regarding moderating must be borne in mind:

1. Moderation planning includes addressing assessment design, and activities

before, during and after assessment, as well as assessment documentation.

2. The design of the assessment instrument must be checked by at least one

other person with the appropriate expertise. Feedback and suggested

alterations should be carried out, and final design endorsed by the Moderator.

3. The design of the instrument’s grading system (assessment criteria,

weighting, format for judgements/decisions/findings, etc) must be checked by

at least one other person with the appropriate expertise. Feedback and

suggested alterations should be carried out, and final design endorsed by the

Moderator.


4. Procedures for arriving at results must be moderated. The form and frequency

of this will depend on the nature of the event. For example, where

assessment is based on observation of performance by an individual, a

Moderator will attend a particular Assessor’s sessions at prescribed times to

compare findings with the Assessor. Modular assessment will be moderated

by peer review. Where assessment is based on a ‘marking session’ bringing

together a team of Assessors, Moderator(s) must be present to check that

there is consistency across individual judgements, and that consensus

agreements are applied. Sample moderation of other forms of resulting is also

required.

5. Moderators will check that accommodation of special needs has not

compromised assessment standards.

6. Moderators will produce a moderation report at specified intervals. This report

will serve as a management review reflecting on the system, noting problem

areas and commenting on the consistency (in relation to quality and

efficiency) of the assessment cycle over time. This could mean, for example,

being alert to anomalies in learner results between two different sessions, or

to patterns over a longer span of time.

7. The Moderator must study the draft question paper and the memorandum,

comment on them and ensure that there are no grammatical or other errors in

the draft question paper; that it corresponds to the unit standard and the

instructions; that the desired standard is maintained; that the allocation of

marks was done correctly; and that it will be possible to answer the questions

within the allocated time.

8. If necessary, the Moderator must return the question paper, with the

suggested adjustments indicated on it, to the Assessor.

9. The assessor must then make the necessary corrections or comment on the

suggested adjustments, after which final improvements can be made.

10. The Moderator must moderate the draft question paper within ten days of

receiving it, sign it if it is in order and hand it to the first Assessor.

11. Each Assessor must also check a hard copy of the question paper and certify

it as correct.

12. The question paper itself may not be amended at this stage.

13. The Assessor must be available during the examination, in case mistakes or lack of clarity do occur.

MODERATION OF EXAMINATION SCRIPTS

The following is an extract from the Mentornet Policy and Procedure on the

moderation of examination scripts:

1. The Moderator must moderate, in respect of all examination papers, at least 5 scripts if there are fewer than 50 scripts, or a sample of at least 10% of the scripts if there are more than 50 scripts.

2. The Moderator must compile a report containing information such as:

2.1. Name of the course being examined.

2.2. Number of scripts received and number of scripts moderated.

2.3. Moderation findings regarding the assessment practice, e.g. coverage

of the outcomes, appropriateness of the assessment methods, validity

of the evidence obtained, and compliance with established assessment

principles.

2.4. Conclusions about the fairness and accuracy of the assessment and

recommendations for future improvements.

2.5. The Moderator must return the scripts, together with the Moderator’s

report, to the administrative office within a reasonable time, normally

five days, of having received them.

MODERATION OF ASSESSMENT RESULTS

The Moderator should attend to the following concerning assessment results:

1. Establish systems to standardize assessments including plans for moderation.

2. Monitor consistency of assessment records.

3. Through sampling, check the design of assessment materials for

appropriateness before they are used, monitor assessment processes, check

candidates’ evidence, and check the results and decisions of Assessors for

consistency.

4. Co-ordinate Assessor meetings.

5. Liaise with verifiers.

6. Provide appropriate and necessary support, advice and guidance to Assessors.

7. Moderate assessment practices.

8. Maintain and monitor arrangements for processing assessment information.

PRINCIPLES OF ASSESSMENT

Assessment is central to the recognition of achievement; therefore, the quality of that assessment is important in order to provide credible certification. Credibility in assessment is assured through assessment procedures and practices being governed by certain principles. These principles are (SAQA Guidelines, 1999: 6):

Transparency. The purpose of the assessment needs to be clear to everyone

involved, especially the learner. Questions as to what is being assessed, why

it is being assessed, how evidence will be collected, judged and used, should

be clearly answered.

Validity. Validity refers to assessment measuring what it says it is measuring;

be it knowledge, understanding, subject content, skills, information,

behaviours, etc. Assessment procedures, methods, instruments and materials

have to match what is being assessed. Judgements or results showing

measurement of other things outside of what is stated, are invalid. In the case

of assessing research skills, a learner’s ability to write may not necessarily

provide evidence that the learner has the ability to do research. The

assessment must assess the learner’s ability to perform. If a learner is being

assessed on their ability to carry out research, then the learner should be

assessed on the various activities of the stages of research – formulation of

the research question, literature review, development of research instruments,

collection of data, analysis of data and writing a report.

In order to achieve validity in the assessment, Assessors should:

State clearly what outcomes are being assessed.

Use an appropriate type or source of evidence.

Use an appropriate method of assessment.

Select an appropriate instrument of assessment.

EXERCISE 2: As Moderator, how do you collect, analyse, organize and evaluate assessment information?

When designing assessment, it is important that the Assessor looks at the

specific outcome/s, assessment criteria and range, so as to determine the

kind and amount of evidence that should be generated by the learner. The

kind and amount of evidence required, should determine the assessment

method and instruments that should be used and selected.

Assessment should measure what it is intended to measure. The evidence

collected should be directly related to the skills and knowledge being

assessed, i.e. the outcomes of the unit standard are either competently or not

yet competently demonstrated and explained.

Validity will be reinforced if the following preconditions exist (Spady & Schwan, 1999: 10):

Expect all learners to successfully demonstrate learning performance

at a high level of fluency and maturity.

Expect all learners to practice demonstrating learning performance at

appropriate levels of difficulty, complexity, and fluency.

Establish and display the standards and criteria for learning

performance publicly prior to proceeding with instructional planning and

deliberately focus instructional planning around the standards and

criteria.

Conduct formal assessment of learning performance in settings that

require learners to deal with the content, challenges, and conditions

described in the outcomes.

Allow learners to tailor their demonstrations of learning performance to

their unique interests and life-long goals, using diverse methods,

content, and situations to exhibit them.

Implement an electronic record-keeping and reporting system, which

learners and educators can monitor in real time, that directly embodies

and reflects the expected learning performances and awards credit for

improved performance on any outcome whenever or wherever it occurs.

Maintain a portfolio of authentic evidence for all learners pertaining to

their highest level demonstration of learning performances, and

supporting competences that learners exhibit.

Reliability/consistency. Evaluation must be reliable. This is achieved by

using assessment instruments (questions, interviews, competency tests, etc.),

which are aligned, consistent and supportive towards the achievement of the

outcome. To ensure reliability, the assessment instruments and methods

should be simple, clear and unambiguous. Reliability in assessment is about

consistency. Consistency refers to the same judgements being made in the

same or similar contexts each time a particular assessment for specified

stated intentions is administered. It is important to ensure appropriate

judgement of evidence and to minimise inconsistency of one Assessor among

learners, or inconsistency among Assessors on one unit standard or

qualification. Assessment results should not be perceived to have been

influenced by variables such as different Assessors applying different

standards, Assessor stress and fatigue, not enough evidence gathered,

Assessor bias in terms of the learner’s gender, ethnic origin, sexual

orientation, religion, like/dislike, appearance, and Assessor assumptions

about the learner based on previous performance.

To avoid such variance in judgement (results), assessments should ensure

that each time an assessment is administered the same or similar conditions

prevail and the procedures, methods, instruments and practices are the same

or similar. In addition:

Assessors should give clear, consistent and unambiguous instructions.

Assessment criteria and guidelines for unit standards and qualifications

should be adhered to.

Assessors should meet and talk to each other.

Assessors should have knowledge of their learning field.

Internal and external moderation procedures for assessment should be

in place.

Clear and systematic recording systems should be in place.


Where possible more than one Assessor should be involved in the

assessment of one learner.

Assessors should use checklists where appropriate.

Assessment should produce similar results over time and across different

Assessors. Enough evidence needs to be collected and people should be

able to rely on the results of the assessment.

EXERCISE 3: How do you ensure that the requirements for consistency are achieved?

Relevance. Assessment should focus on evidence that is relevant to the

skills, knowledge and understanding that are being assessed and the agreed

purposes of the assessment. Unnecessary requirements should not be built

into the assessment.

Flexibility. Assessment must be flexible in order to be appropriate to the

range of knowledge, skills and understanding encompassed by competency

standards, as well as to the range of delivery modes, standards of delivery

and needs of the training. The assessment should always be flexible to

accommodate the focus of the candidate as well. It is crucial to understand

that variations, like individual approaches, creativity and other factors, should

also be taken into account.

Sufficiency. Assessment must be sufficient through adequately gathering evidence with regard to the assessment criteria related to the critical and specific outcomes, as well as the achievement of the outcome itself.

Fairness. Fairness refers to assessment that does not in any way hinder or

advantage a learner. Complaints about unfairness in assessment may be

based on perceptions about inequality of opportunities, resources and

appropriate teaching and learning approaches in place for the acquisition of

knowledge, understanding and skills being assessed. They may also be

based on perceived bias such as ethnicity, gender, age, disability, social class and racial bias, in that assessment approaches, methods, instruments and materials may be seen as not taking into account these differences.

Comparison of learners’ work can also be viewed as unfair, particularly in

situations of diversity (learning styles, home language, values, gender, race,

life experiences, etc.) The process needs to be transparent about the values

which underpin the assessment system. It should not discriminate against

learners in terms of gender, race, language, age or disability. Learners must

be prepared and consulted throughout.

Authenticity. Assessment should be based on evidence that is directly linked

to or created by the assessee.

Developmental. Assessment should be designed and implemented in ways

that enable people to develop new skills, knowledge and understanding. It

should be linked to learning opportunities. Support mechanisms such as

counselling, advice and guidance need to be provided.

Legitimacy. The assessment should be designed and implemented with full

participation of all stakeholders, so that it is regarded as credible, relevant and

valuable.

Practicability. Practicability refers to ensuring that assessment takes into

account available financial resources, facilities, equipment and time. It is

important that assessment is feasible in terms of money, facilities, equipment

and time. Assessment that is too costly may cause the assessment system to

fail. Assessment that requires elaborate arrangements of equipment and

facilities and that takes too long to perform, may overburden the system and

lead to failure of the system. In instances where the ideal assessment

requires a great deal of financial resources, time, elaborate arrangement of

equipment and facilities, such assessment can be done through simulation

exercises, or in the case of workplace learning, through the collection of

evidence during work.

EXERCISE 4: What principles of assessment do you check for when moderating

an assessment?


SUMMARY OF CHAPTER 1

The objective of this chapter is to link the assessment process and procedure with

moderation of assessment. In order to do so, some of the most important elements

and requirements for assessment and moderation respectively were discussed. The

relationship between assessment of learning outcomes and moderation of

assessment can be illustrated as follows:

Figure 3: The relationship between assessment of learning outcomes and moderation of assessment. [The figure depicts the process of quality assurance in learning, in which the assessment policy and procedure governs assessment, moderation, verification and certification, and covers the compilation of question papers and scripts, processing and safekeeping of examination marks and scripts, the memorandum, moderation of question papers and scripts, reporting procedures/communication of results, and assessment of learner achievement.]


CHAPTER 2: PLAN AND PREPARE FOR MODERATION

INTRODUCTION

Moderation ensures that people who are being assessed are being assessed in a

consistent, accurate and well-designed manner. It ensures that all Assessors who

assess a particular unit standard or qualification are using comparable assessment

methods and making similar and consistent judgements about learners’

performance. Moderation of assessment occurs at both the level of the provider

(internal moderation), and the level of the ETQA (external moderation, done by a

verifier or team of verifiers).

The SAQA moderation system is framed thus: (SAQA Guidelines, 1999: 31.)

Unit standards and qualifications that NSBs submit to SAQA and recommended for

registration have to include requirements for moderation. ETQAs accredited with

responsibility of quality assuring the delivery of registered unit standards and

qualifications have to establish moderation systems and procedures for the providers

they accredit. Providers have to establish internal moderation systems consistent

with the requirements of the ETQA moderation system and procedures and the

moderation requirements stated in unit standards and qualifications. SAQA may appoint moderating bodies to ensure credibility of assessment decisions with regard to unit standards and qualifications of one or more ETQAs.

Specific outcome: Plan and prepare for moderation.

Assessment criteria:
1. Planning and preparation activities are aligned with moderation requirements.
2. The scope of the moderation is confirmed with relevant parties.
3. Planning the extent and methods of the moderation activities ensures manageable moderation. Planning makes provision for sufficient moderation evidence to enable a reliable judgement to be passed on the assessments under review.
4. The contexts of the assessments under review are clarified with the Assessors or assessment agency, and special needs are taken into consideration in the moderation planning.
5. Moderation methods and processes are sufficient to deal with all common forms of evidence for the assessments to be moderated, including evidence gathered for recognition of prior learning.
6. The documentation is prepared in line with the moderation requirements and in such a way as to ensure moderation decisions are clearly documented.
7. Required physical and human resources are ready and available for use. Logistical arrangements are confirmed with relevant role-players prior to the moderation.

The NQF system is one in which centralised mainly public examinations at exit

levels, as we have known them thus far, are only a small part of the assessment

system. A substantial amount of assessment is devolved to the provider and

individual Assessors. The importance of moderating systems cannot be

overemphasised in a system such as this in order to ensure that the system is

credible and Assessors and learners behave in ethical ways. Moderation within the

NQF primarily functions as a means for professional interaction and the improvement

of skills that will continuously improve the quality of assessment.

The main functions of the moderation system are:

To verify that assessments meet the requirements for assessment.

To identify the need to redesign assessments if required.

To provide an appeal procedure for dissatisfied learners.

To provide a procedure for the reassessment of learners.

To evaluate the performance of Assessors.

To provide procedures for the de-registration of incompetent Assessors.

To provide feedback to CPs.

Although many writers use the terms “internal Moderator” when referring to the

person or group responsible for the moderation of assessment within a learning

institution, and “external Moderator” when referring to the person or body verifying

the moderation process and work done by the Moderator, we will use the term

“Moderator” when referring to the person or body responsible for internal moderation

and “Verifier” when referring to the person or body responsible for external

verification.


PLANNING FOR MODERATION

As with any planning project, one should ask (and answer satisfactorily) the following questions when planning for moderation; a sketch of how the answers might be recorded follows the list:

What?

Will all registered standards be moderated?

Will all candidates be moderated?

Will all assessments be moderated?

Will all training programmes be moderated?

Who will conduct the moderation?

How will the moderation be done?

Before assessment?

Post-assessment?

Both?

‘Rolling’ over a five-year period with moderation of different aspects

each year?

When?

Continuously?

Monthly?

Quarterly?

Annually?

What will the costs of moderation be and who will pay the costs?

Reporting?

Who provides information?

To whom?

What system will be in place to evaluate the effectiveness of the moderation

system itself?
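Purely as an illustration of how the answers to these planning questions might be recorded (none of the field names below are prescribed by the manual; they are hypothetical), a moderation plan could be captured as a simple structured record:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ModerationPlan:
        """Hypothetical record of the planning questions: what, who, how,
        when, costs and reporting, plus how the system itself is evaluated."""
        standards_in_scope: List[str]   # What: which registered standards are moderated
        moderate_all_candidates: bool   # What: all candidates, or a sample
        moderators: List[str]           # Who conducts the moderation
        methods: List[str]              # How: before assessment, post-assessment, or both
        frequency: str                  # When: continuously, monthly, quarterly, annually
        cost_bearer: str                # Who pays the costs of moderation
        reports_to: str                 # Reporting: who provides information, and to whom
        system_evaluation: str          # How the moderation system itself is evaluated

    # Example: quarterly post-assessment moderation of one unit standard.
    plan = ModerationPlan(
        standards_in_scope=["US 115759"],
        moderate_all_candidates=False,
        moderators=["internal Moderator"],
        methods=["post-assessment sample of scripts and portfolios"],
        frequency="quarterly",
        cost_bearer="the provider",
        reports_to="provider management and the ETQA",
        system_evaluation="annual review of the moderation system",
    )
    print(plan.frequency, plan.methods[0])

Whether such a record is kept electronically or on paper is immaterial; what matters is that each of the questions above has a documented answer before moderation starts.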


COMPONENTS OF A MODERATION SYSTEM

The components of a moderation system include: appropriate timing, extent,

materials and personnel.

Timing. Moderation can take place at different intervals. All assessment

guides should be moderated prior to the Assessor beginning with

assessments. Recently trained Assessors may require more regular

moderation of their ability to conduct assessments than more experienced

Assessors. However, all Assessors need to be moderated at designated

intervals.

Time-frames for moderation should be linked to time-frames for assessment

and deadlines for turnover times for feedback to candidates and clients.

Clients can, of course, also be the candidates. The gap between submitting

assessment tasks or portfolios and receiving feedback must not be too long.

Both assessors and moderators must bear this in mind when they assess and

moderate tasks, exam papers or portfolios.

Extent. Each and every unit standard and qualification, assessment materials

and Assessor fall within the moderation process. Moderation activities need to

be sufficient to protect the integrity of standards and qualifications. The quality

of the registered Assessors and moderation systems will be a key factor.

Initially, fairly frequent moderation might be a requirement and there may be a

need to conduct spot checks on a case-by-case basis in an evolving system.

This could taper off once providers have earned the right to conduct

decentralised assessment by proving over time that they have the capacity to

maintain credible assessment systems.

Materials. Materials might include the following:

Assessment activities or assessment activity exemplars.

Assessment guides or assessment guide exemplars.

Case studies or exemplars.

Assessed learners’ work samples.


Providers need to know in advance when these materials are to be made

available for verification. When these materials are expected to have a

particular shape or format, the requirements should be negotiated with

providers by the ETQAs. For the system to work, the ETQA will have to avoid

making unreasonable demands or intruding on the academic freedom of

providers. Verification is focused on ensuring that assessment is to the

required standard and therefore it should not become excessively controlling

and rigid.

Personnel. Moderators can be drawn from the providers and companies

where assessments are being conducted, or they could be external

appointees.

Wherever Moderators are drawn from, they will need to have standing and

unquestionable skills in the curriculum and in assessment practices, as well

as having close understanding of the expectations of all users.

Any person who is appointed to act as a Moderator or to chair a moderation

panel will also need to have sound communication skills. Criteria and

procedures of selecting Moderators must be established.

It is not only personnel that must be ready and available for moderation, but

also physical resources, such as forms, stationery, a video recorder in some

instances, for example when oral presentations are to be moderated.

Required physical and human resources must thus be ready and available for

use. Logistical arrangements need to be confirmed with relevant role-players

prior to the moderation.

EXERCISE 5: What logistical arrangements are necessary for moderation?

Moderation methods. It will be necessary to plan for the moderation system to

evolve. This will require changing the methods used over time. The range from which

one or the other combination of methods is used, could include:


Revising exemplars and benchmark materials.

Recognising expert Assessors.

Doing statistical moderation (a brief sketch follows below).

Common assessment activities and assessment guides.

Having Moderators conduct site visits.

Having Moderators conduct panel meetings.

Please note that the last two methods above refer to the linking of the moderation

process with verification, whereas the first four refer to the assessment of learning

outcomes process.
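Statistical moderation, mentioned in the list above, typically means comparing the marks awarded by different Assessors (or at different sites) and flagging unusual patterns for closer review. The following is a minimal sketch of that idea in Python; the Assessor names, marks and the tolerance of ten marks are illustrative assumptions, not figures prescribed by SAQA or any ETQA.

from statistics import mean

def flag_assessors(marks_by_assessor, tolerance=10.0):
    """Flag Assessors whose average mark deviates from the pooled average
    by more than `tolerance` marks - a crude signal that their marking
    standard may need attention, not proof of an error."""
    all_marks = [m for marks in marks_by_assessor.values() for m in marks]
    pooled_average = mean(all_marks)
    flagged = []
    for name, marks in marks_by_assessor.items():
        deviation = mean(marks) - pooled_average
        if abs(deviation) > tolerance:
            flagged.append((name, round(deviation, 1)))
    return flagged

# Illustrative data only: three Assessors marking the same unit standard.
marks = {
    "Assessor A": [62, 70, 65, 68],
    "Assessor B": [64, 66, 71, 63],
    "Assessor C": [88, 92, 90, 91],   # noticeably more lenient marking
}
print(flag_assessors(marks))   # [('Assessor C', 16.1)]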

ESTABLISHING THE SCOPE OF THE MODERATION

The Moderator will have a good idea of what each particular moderation intervention

should consist of and what the moderation will be evaluated against.

Bearing this in mind, he or she will finalise the scope of the moderation with relevant

parties. (Parties include the Assessors or assessment agencies and moderating

bodies.)

Planning the scope and nature of the moderation activities ensures the

manageability of moderation and enables a fair judgement to be passed on the

assessments under review.

The contexts of the assessments under review are clarified with the Assessors or

assessment agency, and special needs are taken into consideration in the planning.

The Moderator will require assessed learners' work samples from the Assessor.

You should consider the following factors when designing a moderation system:

1. The management structure.

2. The functions allocated to your moderation system.

3. The components of the moderation system.

4. The moderation methods you will use.

EXERCISE 6: As Moderator, how do you plan, prepare and conduct moderation?

EXERCISE 7: What materials will you as Moderator require from the Assessor?

CANDIDATES WITH SPECIAL NEEDS

I am deliberately writing this section on special needs so that you will take

cognizance of the fact that there is a difference between special needs for moderation on the

one hand, and providing for special needs of learners in assessment on the other

hand.

It is not only people in wheelchairs who could be candidates with special needs.

People with special needs are all people with disabilities, be they physical,

psychological or emotional. These are people who may struggle with some form

of learning barrier and, even more importantly, with barriers that can make it difficult, if

not impossible, for them to be subjected to standard forms of assessment. So, it is

often necessary to cater for such people’s special needs to allow them the privilege

of writing or doing assessments.

People with disabilities often cannot or do not wish to speak about their disabilities.

Therefore, it is up to the Facilitator and Assessor (if they are not the same person) to

identify the disability. Once the disability has been identified, it is advisable to

discuss the problem with the candidate to see if the Assessor and candidate could,

perhaps, find a way in which the disability can be accommodated. It is important that

candidates with disabilities be given the same opportunity to be assessed as candidates

who are not disabled.

Candidates with special needs may only be excluded from assessment or

moderation if their disabilities prevent the actual performance or if assessment could

result in an unsafe act. All candidates with special needs who do not fall within the

exclusion categories must have their special needs addressed.

EXERCISE 8: What special moderation needs may exist in the Unit Standards

for which you are responsible?


MODERATION OF RPL ASSESSMENT

Moderation of RPL assessment does not differ much from moderation of normal

assessment. As with normal assessment, the objective of moderation in RPL

assessment is to assure the quality of the assessment process and outcomes within

an enterprise, in line with the relevant ETQA’s policy and standards.

The Moderator is responsible for the following:

1. To collect, analyse, organize and evaluate assessment information.

2. To plan, prepare and conduct moderation.

3. To advise and support RPL practitioners, including Assessors and Facilitators.

4. To report, record and administer the moderation of RPL.

5. To review moderation systems and requirements.

6. To report, endorse or refer assessment results according to the RPL centre’s

policy and procedures, and to comply with the relevant ETQA’s policy and

standards for assessment of RPL.

7. To approve/endorse awarding recommendations.

Moderators are to liaise with other practitioners within their enterprises, learning

institutions or RPL centres that are involved with the recognition process. They

should have regular meetings with them to plan and schedule the RPL process as

well as to resolve outstanding issues which may include special needs.

Moderation processes must be sufficient to deal with all common forms of evidence,

including evidence gathered for recognition of prior learning.

What we have discussed thus far are the moderation system requirements. The

moderation documentation must be prepared in line with the moderation system

requirements and in such a way as to ensure moderation decisions are clearly

documented.

EXERCISE 9: What evidence do you require for moderation of RPL purposes?


DESIGNING A MODERATION SYSTEM

Whether centralised, decentralised or a mix of both, the form of moderation systems

which are eventually established will be determined by capacity at operating level in

terms of management structures and the functions which can be allocated

satisfactorily to ETQAs and providers of education and training.

SAQA will need to consider NSBs' moderation proposals in the light of the

management structures it assumes will be available, the functions allocated to the

moderation system and the moderation methods proposed.

The guiding question to be asked is: “What is the least amount of moderation

needed to ensure that assessment is fair, valid, reliable and practicable?”

When designing a moderation system the following factors need to be considered:

(SAQA Guidelines, 1999: 36.)

The management structure.

The functions allocated to the moderation system.

The components of the moderation system.

The moderation methods to be used.

SUMMARY OF CHAPTER 2

The management structure, functions and components of the moderation system,

and the moderation methods to be used are all factors to consider when designing a

moderation system. Furthermore, they all impact upon the components of the

moderation system, including the scope, candidates and types of assessment to be

moderated. All of this can be summarised by means of the following illustration:

EXERCISE 10: Who may be the role players in your moderation?

(The figure shows the planning considerations: WHO (personnel) - own staff or external appointees; MATERIAL - forms, stationery, equipment; WHAT - registered standards, candidates, assessments, training programmes; EXTENT.)

Figure 4: Planning and preparing for moderation.


CHAPTER 3: CONDUCT MODERATION

INTRODUCTION

Moderation systems combine moderation and verification. Both moderation and

verification systems must ensure that all Assessors produce assessments that meet

the requirements for assessment (fair, valid, reliable, etc.). The Moderator actually

ensures that the Assessor complied with the requirements for assessment. The

same evidence of assessment information thus needs to be gathered. The

Moderator, however, must not reassess the assessments submitted by the learners.

Specific outcome: Demonstrate understanding of moderation within the context of a standards-based assessment system.

Assessment criteria:

1. A variety of moderation systems and methods are described and compared in terms of strengths, weaknesses and applications. The descriptions show how moderation is intended to uphold the need for manageable, credible and reliable assessments.

Specific outcome: Conduct moderation.

Assessment criteria:

1. The moderation is conducted in accordance with the moderation plan.

2. Unforeseen events are handled without compromising the validity of the moderation.

3. The assessment instruments and process are judged in terms of the extent to which the principles of good assessment are upheld.

4. Moderation confirms that special needs of candidates have been provided for but without compromising the required standards.

5. The proportion of assessments selected for checking meets the quality assurance body's requirements for consistency and reliability. The use of time and resources is justified by the assessment history or record of the Assessors and/or assessment agency under consideration.

6. Appeals against assessment decisions are handled in accordance with organizational appeal procedures.

7. The moderation decision is consistent with the quality assurance body's requirements for fairness, validity and reliability of assessments to be achieved.

HOW DOES MODERATION OCCUR?

In simple terms, moderation should be carried out as follows (a sketch of the resulting decision flow appears after the steps):

1. The Assessor submits the assessment instruments to the Moderator for moderation.

2. The Moderator evaluates the methods of assessment against specific criteria

with special concern for candidates with special needs.

3. The Moderator evaluates the judgement against the standard applied using

the assessment instruments.

4. If the assessment instruments uphold the Assessor’s judgement, the

Moderator countersigns the documentation and returns them to the Assessor.

The Moderator should include any comments that might help to improve the

quality and fairness of the assessment. Moderation should always be used as

a learning opportunity for the Assessor.

5. If the Moderator suspects or finds a problem, he or she discusses this with the

Assessor to clarify the reasons for the judgement. The Moderator’s judgement

can supersede the judgement of the Assessor.

6. The Moderator endorses the Assessor’s finding or modifies it if necessary and

sends the final results to be read into the candidates' records and processed for

submission to the ETQA for accreditation purposes.
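The six steps above amount to a simple decision flow: evaluate the instruments and the judgement, then either countersign or query and, if necessary, supersede the finding. The sketch below expresses that flow in Python purely as an illustration; the function and field names are invented for the example and are not taken from the unit standard.

def moderate_submission(assessment, instruments_sound, judgement_upheld,
                        modified_result=None, comments=""):
    """Illustrative decision flow for a single moderated assessment.

    assessment:        dict with at least 'candidate' and 'assessor_result'.
    instruments_sound: the Moderator's view on the assessment methods and instruments.
    judgement_upheld:  the Moderator's view on the Assessor's judgement.
    modified_result:   used only when the Moderator supersedes the Assessor.
    """
    outcome = {"candidate": assessment["candidate"], "comments": comments}
    if instruments_sound and judgement_upheld:
        # Step 4: countersign and return, with comments that may improve the assessment.
        outcome["decision"] = "endorsed"
        outcome["final_result"] = assessment["assessor_result"]
    else:
        # Steps 5 and 6: discuss with the Assessor; the Moderator's judgement may supersede.
        outcome["decision"] = "modified" if modified_result is not None else "referred back"
        outcome["final_result"] = modified_result
    return outcome

print(moderate_submission({"candidate": "L001", "assessor_result": "competent"},
                          instruments_sound=True, judgement_upheld=True,
                          comments="Consider adding a model answer for question 3."))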

Moderation. Moderation ensures that assessment that is conducted by a single

learning provider is consistent, accurate and well designed. The three main stages of

moderation are:

1. Design : Ensuring that the choice and design of assessment methods and

instruments are appropriate to the unit standards and qualifications being

assessed. It could be seen as the check to ensure that Assessors selected

the correct assessment instruments.

2. Implementation : Ensuring that assessment is appropriately conducted and

matches the specifications of unit standards and qualifications – it involves

making sure that appropriate arrangements have been made and having

regular discussions amongst Assessors.

3. Review : Ensuring that any lessons learnt from the other two stages are

considered and that the necessary changes required, are made.

The ability and means to moderate is one of the prerequisites for private learning

institutions to be accredited by a SETA or the CHE. Providers would have to show

that they have the capacity to implement a moderation system that will facilitate and

ensure that these activities will be carried out effectively and efficiently in order to


gain accreditation.

The roles of the Moderator show that individuals who are designated as such in

learning institutions and sites should be experienced Assessors in whom other

Assessors have confidence. They should also have undergone training in

moderation and have knowledge of the learning area.

Sampling. Once the initial assessment has been made, the Moderator might have

some difficulty deciding on a sample size. If the Moderator is to assess the quality of

the candidates’ evidence, a 10 percent sample can be taken from an alphabetical list

of all the candidates submitting portfolios at the learning institution. If the Moderator

is to sample the standard of the Assessor's marking, the sample will have to be

chosen from each Assessor’s list of candidates. This might be difficult if there is a

large variation in the number of candidates with each Assessor.

Regardless of the sample size, it is necessary that at least two sets of real

assessment materials (two methods in which the candidate was assessed) and six

assessor decisions (based on at least six questions or practical tasks) be moderated.
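As a rough illustration of this sampling guidance, the sketch below draws roughly a 10 percent sample from an alphabetical candidate list and checks the minimum coverage of two assessment methods and six assessor decisions. The helper names and the every-tenth-candidate selection rule are assumptions made for the example.

def draw_moderation_sample(candidates, fraction=0.10):
    """Take every k-th candidate from the alphabetical list so that roughly
    `fraction` of all portfolios are selected for moderation."""
    ordered = sorted(candidates)
    step = max(1, round(1 / fraction))
    return ordered[::step]

def meets_minimum_coverage(methods_moderated, decisions_moderated):
    """At least two real assessment methods and six assessor decisions."""
    return methods_moderated >= 2 and decisions_moderated >= 6

candidates = ["Abrahams", "Botha", "Dlamini", "Jacobs", "Khumalo", "Mokoena",
              "Naidoo", "Petersen", "Sithole", "Van Wyk", "Williams", "Zulu"]
print(draw_moderation_sample(candidates))              # ['Abrahams', 'Williams']
print(meets_minimum_coverage(methods_moderated=2,
                             decisions_moderated=6))   # True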

The learning institution may have several satellites or outside centres which need to

be compared. If that were the case, the Moderator would have a third sampling

choice because the standards of outside assessment would be of particular

importance to the Verifier.

Whichever sample is chosen, moderation is always improved if shared by a team of

Assessors. Moderation is an excellent way of helping all members of the team to

keep up to date and to get direct feedback on the overall standard of assessment.

Samples which contain at least one example of the Assessor's own candidate give

an ideal chance for a direct comparison with other Assessors’ work. The Moderator

may be asked to select a sample for the Verifier, and such a good staff training

opportunity should not be missed. When any sample of candidates’ work is selected,

the Moderator should give the assessment team members a chance to inspect the

portfolios at the same time. (Cotton, 1995: 126.)


Moderation of objective tests. In objective tests the assessment and moderation take

place before the test is carried out. It may not be the learning institution team who

writes the outcome items, but the items have to be constantly updated with material from

people at the Assessor level.

Formal examination methods. There are two methods which can be applied for

moderation of examination marking. The first method is to appoint a subject matter

specialist as Moderator for a section, or the whole of an examination if there are a

small number of candidates. The Moderator has the following duties:

To agree to a marking plan for each question with the team of markers

assigned to the question. This can be carried out before the examination, and

if the team are new to the job, the marking team can be briefed at the same

time.

Once a marker starts on a question, the Moderator checks every tenth

answer. This re-marking has to be done quickly at the beginning because if

the Moderator is not satisfied with the standard of marking, he or she may

have to give further instructions about the required standard and the marker

will be asked to start again.

The second method of moderation that can be used in an examination is the

resolution of marks after ‘blind double marking’. The method works as follows:

The first marker is given the examination paper, a pile of answers to one of

the questions and a form which has the name of the examination, the number

of the question, the first marker’s name and a list of the candidate

examination numbers which corresponds to the answers in the pile.

The first marker does not write comments on the scripts but makes notes and

puts a mark on the form opposite the candidate’s examination number.

The examination officer keeps the form containing the first marker’s

comments and marks and issues the same pile of answers with an

examination paper and a new form with the second marker’s name at the top

and the same list of candidate numbers.

When all the marking has been carried out a meeting is called for the

resolution of the marks. Sometimes the differences between markers are easy

to resolve. In the event of difficulty in coming to a conclusion, the scripts and


memorandums are available. The examinations officer records the final mark

agreed upon.

Both methods are excellent for inducting new members of staff to marking standards

and both help to maintain an even standard of marking across all scripts. (Cotton,

1995: 132.)
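The resolution step of the second method can also be illustrated briefly: each candidate's two blind marks are compared, small differences are resolved directly and larger differences are set aside for the resolution meeting, where the scripts and memorandums are consulted. The sketch below is a minimal Python illustration; the tolerance of five marks and the averaging rule are assumptions for the example, not prescribed practice.

def resolve_double_marks(first_marks, second_marks, tolerance=5):
    """Resolve blind double marking for one question.

    first_marks / second_marks: dicts keyed by candidate examination number.
    Differences within `tolerance` are averaged; the rest go to the
    resolution meeting.
    """
    resolved, for_meeting = {}, []
    for number, mark_1 in first_marks.items():
        mark_2 = second_marks[number]
        if abs(mark_1 - mark_2) <= tolerance:
            resolved[number] = round((mark_1 + mark_2) / 2)
        else:
            for_meeting.append((number, mark_1, mark_2))
    return resolved, for_meeting

first_marker  = {"EX101": 62, "EX102": 74, "EX103": 55}
second_marker = {"EX101": 64, "EX102": 58, "EX103": 53}
print(resolve_double_marks(first_marker, second_marker))
# ({'EX101': 63, 'EX103': 54}, [('EX102', 74, 58)])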

In general, moderation is done with one or more of the following purposes and requirements in mind:

1. Moderation to address the design of the assessment, activities before, during

and after assessment, and assessment documentation.

2. Moderation to include assessments of candidates with special needs and for

RPL cases.

3. Evidence must be gathered for on-site and off-site moderation.

4. Evidence must be gathered for moderation in situations where the moderation

process confirms the assessment results, and where the moderation process

finds it cannot uphold the assessment results.

The moderation is conducted in accordance with the moderation plan. Unforeseen

events are handled without compromising the validity of the moderation.

The assessment instruments and processes are checked and judged in terms of

their appropriateness, fairness, validity and sufficiency for assessment. The

moderation decision enables the quality assurance body’s requirements for fairness,

appropriateness, validity and sufficiency to be achieved. (Requirements include the

interpretation of assessment criteria and correct application of assessment

procedures.)

Confirmation of assessment decisions enables the quality assurance body’s

requirements for consistency to be achieved.

Moderation confirms that special needs of candidates have been provided for but

without compromising the required standards.

EXERCISE 11: What are your objectives with moderation?


Moderation can include one or more of the following:

1. Drafting question papers.

2. Re-marking examination papers.

3. Moderating a sample of the scripts in accordance with the provisions for

moderating.

4. Checking the standard, consistency and fairness of the allocation of marks.

The moderation design should provide for the following:

1. Moderation planning includes addressing assessment design and activities

before, during and after assessment, as well as assessment documentation.

2. The design of the assessment instrument must be checked by at least one

other person with the appropriate expertise. Feedback and suggested

alterations should be carried out, and final design endorsed by the Moderator.

3. The design of the instrument’s grading system (assessment criteria,

weighting, format or judgements/decisions/findings, etc) must be checked by

at least one other person with the appropriate expertise. Feedback and

suggested alterations should be carried out, and final design endorsed by the

Moderator.

4. Procedures for arriving at results must be moderated. The form and frequency

of this will depend on the nature of the event. For example, where

assessment is based on observation of performance by an individual, a

Moderator will attend a particular Assessor’s sessions at prescribed times to

compare findings with the Assessor. Modular assessment will be moderated

through peer review. Where assessment is based on a ‘marking session’

bringing together a number of Assessors, Moderator(s) must be present to

check that there is consistency across individual judgements, and that

consensus agreements are applied. Sample moderation of other forms of

resulting is also required.

5. Moderators will check that accommodation of special needs has not

compromised assessment standards.

6. Moderators will produce a moderation report at specified intervals. This report

will serve as a management review reflecting on the system, noting problem

areas and commenting on the consistency (in relation to quality and


efficiency) of the assessment cycle over time. This could mean, for example,

being alert to anomalies in learning results between two different sessions, or

to patterns over a longer period of time.

APPEALS PROCEDURES

Appeals against assessment decisions are handled in accordance with the appeals

procedures. The following are important aspects for which the appeals procedure

must make provision:

1. Learners must have the security of knowing that, if they feel that unfairness,

invalidity, unreliability, impracticability, inadequacy of experience or expertise,

or unethical practices were present in assessment, they may appeal against

the results of the assessment.

2. Appeals must be brought to the attention of the person in the learning

institution who can investigate and deal with such matters. Normally it would

be the Academic Director, dean, training manager, principal, etc.

3. The investigator may appoint a Moderator other than the original Assessor to

moderate the script, assignment, etc.

4. Where learners appeal against the scoring of a given item or items, they

should be asked to explain their answers.

5. The investigator/Moderator may also decide to ask other learners who got the

answer correct to explain why they chose their answers.

6. The Assessor should explain the concept on which the test item was

developed and explain how the item was derived out of that concept.

7. If the item analysis shows that an overwhelming proportion of the learners got the

right answer, the question was probably correctly formulated and in line with

the assessment criteria.

8. If evidence of ambiguity is shown, the Moderator may eliminate the item from

the scoring. (A brief sketch of this item analysis follows the list.)

9. Arbitration may be used in cases of strong remonstrations (remonstration =

strong appeal or objection to an assessment ruling).
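Points 7 and 8 above rest on a simple item analysis: the facility value, that is, the proportion of learners who answered the item correctly. The sketch below shows one way of computing it in Python; the 0.8 and 0.3 cut-offs are illustrative assumptions and not rules drawn from any prescribed appeals procedure.

def item_facility(responses, correct_answer):
    """Proportion of learners who answered the item correctly."""
    correct = sum(1 for answer in responses if answer == correct_answer)
    return correct / len(responses)

def appeal_guidance(facility, high=0.8, low=0.3):
    """Rough guidance for a Moderator handling an appeal on a single item."""
    if facility >= high:
        return "Item probably well formulated; uphold the marking."
    if facility <= low:
        return "Possible ambiguity; consider eliminating the item from the scoring."
    return "Ask the Assessor to explain the concept behind the item and review the answers."

responses = ["B", "B", "C", "B", "B", "A", "B", "B", "B", "D"]
facility = item_facility(responses, correct_answer="B")
print(round(facility, 2), "-", appeal_guidance(facility))
# 0.7 - Ask the Assessor to explain the concept behind the item and review the answers.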

EXERCISE 12: Which methods do you use for moderating assessments?


We have already discussed an example of a checklist that can be used to moderate

assessment instruments. An important element of post-assessment moderation, i.e.

moderation after the candidates have been subjected to the examination or test, is

the moderation of the assessor – how the assessor prepared and executed the

assessment process. The following is an example of a checklist that can be used for

moderation of the Assessor:

THE ASSESSOR (tick YES or NO)

1. Have the learners been assessed in a consistent, accurate and well-designed manner?

2. Did the assessment meet the requirements for assessment (fair, valid, reliable, etc.)?

3. Did the Assessor provide for the reassessment of learners if necessary?

4. Was the assessment appropriately conducted and did it match the specifications of unit standards and qualifications?

5. Was the quality of the candidate's evidence judged in a fair and consistent manner?

6. Was the standard of marking acceptable?

7. Were a satisfactory number of specific outcomes assessed? (More than 75%.)

8. Did the Assessor clearly indicate that learners may appeal if they feel that they have not been assessed fairly and objectively?

9. Were the learners briefed on what the Assessor expected of them before they were subjected to the assessment?

10. Was the timing of the assessment correct? (Did the learners have enough time to prepare for the assessment and to complete assignments?)

11. Was the language in which the assessment is to take place decided upon through consultations with the learners?

12. Did the Assessor process the assessment results timeously?

13. Did the Assessor clearly sign the marked assessment instruments?

14. Did the Assessor adhere to the rules and procedures for marking as specified in the Mentornet Policy and Procedure?

15. Did the Assessor re-mark all borderline cases?

Table 2: Checklist for the moderation of the Assessor.

EXERCISE 13: What kinds of evidence do you require for moderation purposes?
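A checklist such as Table 2 also lends itself to a simple structured record. The sketch below stores a few of the questions with their yes/no findings and lists the items that need follow-up in the moderation report; the structure and names are invented for the illustration and are not part of the Mentornet documentation.

ASSESSOR_CHECKLIST = [
    "Have the learners been assessed in a consistent, accurate and well-designed manner?",
    "Did the assessment meet the requirements for assessment (fair, valid, reliable, etc.)?",
    "Did the Assessor provide for the reassessment of learners if necessary?",
]

def outstanding_items(answers):
    """Return the checklist questions answered 'no', for the moderation report.

    answers: list of booleans in the same order as ASSESSOR_CHECKLIST.
    """
    return [question for question, ok in zip(ASSESSOR_CHECKLIST, answers) if not ok]

# Illustrative findings for one moderation: the third item needs follow-up.
print(outstanding_items([True, True, False]))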

SUMMARY OF CHAPTER 3

Conducting moderation of assessment is actually quite simple, compared to

preparing for moderation. If the moderation instruments and procedures are well

prepared, implementation is almost a formality.

The following is an illustration of how moderation can be conducted:

EXERCISE 14: Describe how you would plan for moderation.

EXERCISE 15: Describe the procedures you follow when doing moderation.

EXERCISE 16: What should typically be moderated?

(The figure shows the three stages, with feedback in all stages: DESIGN - submission of assessment instruments for moderation; evaluation of methods of assessment. IMPLEMENTATION - evaluation of judgements; endorsement of assessment. REVIEW - clarification of assessment problems; submission for verification.)

Figure 5: Conducting moderation.

CHAPTER 4: ADVISE AND SUPPORT ASSESSORS

INTRODUCTION

Moderators are normally selected from the ranks of experienced Assessors.

Consequently, they usually have substantial knowledge and experience of

assessment, so that they can be utilized as mentors for less experienced Assessors.

In addition to providing advice and support on assessment procedures and

principles, they can also render valuable assistance in ensuring the maintenance of

high quality standards. The nature and quality of advice facilitate a common

understanding of the relevant standards and issues related to assessment.

PROVIDING ADVICE TO ASSESSORS

The nature and quality of advice promotes assessment in accordance with good

assessment principles and enhances the development and maintenance of quality

management systems in line with ETQA requirements. Ideally ETQA requirements

should be included in the assessment policy and procedure of learning institutions.

Even if this is the case, Moderators can still advise Assessors on the interpretation of

requirements and any changes that might have occurred since the policy and

procedure was compiled. Moderators work closely with ETQAs, often including

verifiers, so that they will probably be informed about changes in ETQA

requirements.

Specific outcome: Report, record and administer moderation.

Assessment criteria:

1. Moderation findings are reported to designated role-players within agreed time frames and according to the quality assurance body’s requirements for format and content.

2. Records are maintained in accordance with organisational quality assurance and ETQA requirements.

3. Confidentiality of information relating to candidates and Assessors is preserved in accordance with organisational quality and ETQA requirements.


The Moderator should give the following advice to the Assessors:

1. They must be familiar with the unit standards they will be assessing.

2. How to use assessment guides.

3. How to plan their assessments.

4. How to guide learners in the collection and submission of evidence.

5. What feedback is required.

6. How assessment results should be recorded and reported.

7. The assessment processes to follow.

DISCLOSURE OF INFORMATION

One of the areas in which especially inexperienced Assessors often transgress is in

disclosing information on assessment results. Common errors that occur include:

1. Communicating assessment results before they have been endorsed by the

Moderator.

2. Communicating assessment results to individuals who are not entitled to the

information.

3. Not communicating assessment results to stakeholders.

4. Communicating incomplete or inaccurate assessment results.

All communications must be conducted in accordance with relevant confidentiality

requirements. This implies that the individual’s right to privacy of information must be

respected. It would be wrong to disclose information about assessment results to

people whom the learner would prefer not to know the results, or at least not to know

them before the learner does. The Moderator must ensure that procedures are in

place that will ensure confidentiality of information about assessments.

Moderation findings must be reported to designated role-players/stakeholders within

agreed time-frames and according to the quality assurance body’s requirements for

format and content. (Role-players could include ETQA or Moderating Body

personnel, internal or external Moderators and Assessors.)

EXERCISE 17: What are the requirements of the ETQA with whom you liaise in

terms of assessment design?

Just as confidential information must be communicated to role players according to

specific prescribed procedures, so must Moderators communicate moderation

decisions with the same measure of confidentiality. Timing is quite important here,

since assessment results must not be disclosed before moderation

results have been received. The Moderator will provide a moderation report to the

Assessor(s), which each individual Assessor must study and sign. Where

necessary the Moderator may opt to discuss the moderation results with the

Assessor. This is a good opportunity for the Moderator to provide advice and support

to the Assessor and the practice should be encouraged. It is, furthermore, advisable

that the Assessor should provide feedback to the Moderator, within a reasonable time,

on what he or she did about the recommendations for improvement stated in the

moderation report. An example of the format of the Moderator's report and of the feedback

by the Assessor is given in Chapter 6. Of course, records must be kept and maintained

in accordance with ETQA requirements.

Confidentiality of information relating to candidates, Assessors and assessing

agencies must be preserved in accordance with the requirements of the assessing

agency and ETQA requirements. Your own learning institution will mostly be the

assessing agency, but some learning institutions, especially those who do not have

their own registered Assessors, may make use of external Assessor agencies.

AWARD RECOMMENDATIONS

EXERCISE 18: How are assessment results communicated to learners in your

organisation? What improvements would you like to introduce to your existing

procedure?

EXERCISE 19: How would you record and communicate moderation findings to

Assessors and learners?


The Moderator has a crucial role to play in the awarding of certificates. No

certificates issued without the Moderator first endorsing the assessment results can

ever be legitimate. In fact, even with the Moderator’s endorsement, accredited

certificates cannot be issued without endorsement of the results by the verifier(s).

The NQF awarding procedure includes the entire process of quality assurance, and,

unless the process has been completed, accreditation of qualifications or Unit

Standards cannot take place. Accreditation of qualifications or Unit Standards is only

complete once the learner’s number of credits (on a particular level) has been read

into the National Learners’ Record Database (NLRD), and this only happens once

the moderation of assessment has been verified. The process can be illustrated as

follows:

Figure 6: The certification process. (The figure shows the sequence: 1. Learning intervention; 2. Assessment; 3. Moderation (endorsement of assessment); 4. Verification of moderation; 5. Qualification or Unit Standard is read into the NLRD by the ETQA; 6. Certification.)

REPORT, RECORD AND ADMINISTER MODERATION

“The job has not been completed until the paperwork has been done”. In my youth

this was a slogan that could be found in almost every public service office all over

the country. I never found it very clever or inspiring, but would not deny that there is

an element of truth in it. Unfortunately for those of us who dislike paperwork,

moderation of assessment also has its fair share of paper generation, and paper

generation inevitably goes hand in hand with organisational structures. So, a

prerequisite for the administration of moderation is that the paperwork (reporting and

recording) must not be neglected.

EXERCISE 20: What is your (as Moderator's) role in the approval of award

recommendations?

The recording of moderation is important for the ETQA, and, more specifically, the

verification team who will need to verify the moderation before the learner

achievements can be endorsed and read into the National Learners’ Record

Database (NLRD). The following is a list of what needs to be recorded:

1. A record of the assessments that were moderated.

2. Assessment review documents.

3. A record of the meeting with the Assessor(s).

4. The moderation checklists for each moderation done.

5. A visit report.

The visit report must:

1. Be accurate.

2. Identify the purpose and frequency of the visits.

3. Describe the moderation process in use by the learning institution.

4. Provide clear recommendations for improvement.

A typical policy regarding record keeping should include at least the following:

1. All assessment evidence and results for individual learners (including those

that do not form part of the summative record) are securely recorded for a

specified period of time, in case of appeals or availability of validation

procedures.

2. Data capture systems must be suitable for the following (a brief sketch follows this list):

2.1. Registering learners with the organization.

2.2. Processing and recording of results, and submission of results to the

ETQA in the required format.

2.3. Undertaking the kinds of data analysis that might be required by the

Moderator, or by the ETQA.
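As a rough illustration of such a data capture system, the sketch below registers a learner, records a moderated result and produces a simple export record. The field names and the export shape are assumptions made for the example; in practice they would be dictated by the relevant ETQA's required format.

from dataclasses import dataclass, field

@dataclass
class LearnerRecord:
    """Illustrative learner record for a provider's data capture system."""
    learner_id: str
    name: str
    results: dict = field(default_factory=dict)   # unit standard number -> moderated result

    def record_result(self, unit_standard, result):
        self.results[unit_standard] = result

    def etqa_export(self):
        """One flat record per result, in a shape suitable for submission."""
        return [{"learner_id": self.learner_id,
                 "unit_standard": unit_standard,
                 "result": result}
                for unit_standard, result in self.results.items()]

learner = LearnerRecord("L001", "T. Mokoena")
learner.record_result("115759", "competent")
print(learner.etqa_export())
# [{'learner_id': 'L001', 'unit_standard': '115759', 'result': 'competent'}]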


The record keeping policy of a learning institution must comply with the ETQA policy

and can, as an example, include the following:

1. A file is opened for each learner who registers with the learning institution, be

it for a Unit Standard, part-qualification or full qualification.

2. Copies of files are kept for a period of ten years from the last date on which

the learner registered for a learning programme.

3. Files will contain all information regarding the particular learner, namely:

3.1. Personal particulars of the learner.

3.2. Course for which the learner is registered.

3.3. Academic history.

3.4. Experiential learning and academic qualifications and credits given on

account of formal learning, experiential learning, RPL and previous

learning.

THE MANAGEMENT STRUCTURE

The kind of (moderation) management structure most likely to work for the

organisation should be organised around the following questions:

Requirements

Who within the organisation will make moderation policies for particular

standards and qualifications and how will this be done?

Who within the organisation will implement these policies and how will this

be done?

Who within the organisation will evaluate policies and implementations and

how will this be done?

EXERCISE 21: What is your ETQA’s policy regarding record keeping?

EXERCISE 22: What is your learning institution’s policy regarding record

keeping?

EXERCISE 23: What documentation do you prepare for and after moderation?


Resources

What are the costs of setting up and operating a moderation system likely

to be?

Who will pay for this?

How will resource effectiveness be achieved?

It would be rather difficult to provide answers to the questions that will apply to all

organisations responsible for moderation. It is a fact that every organization is

different in terms of size, type and levels of learning and assessment provided,

mission, personnel profile and many other factors. You will revisit the issue when doing your

practical summative assessment assignment.

SUMMARY OF CHAPTER 4

The role of the Moderator. We have reached the stage where we can actually

already summarize the role and responsibilities of the Moderator. I am doing it now,

rather than at the end of the last chapter because I would like to round the last

chapter off with an overview of the entire moderation process. By now you will

probably have noticed that the Moderator acts as a link between four groups, namely

the Assessors, the learners and candidates, the verifiers and the organizations

involved in quality assurance, namely SAQA, the NSBs, SETAs and the ETQAs. The

role and responsibilities of the Moderator can be illustrated as follows:

Figure 7: The role and responsibilities of the Moderator. (The figure shows THE MODERATOR linking four groups: VERIFIERS - liaise with verifiers; feedback; attend panel meetings; attend site visits. SAQA, NSBs, SETAs, ETQAs - feedback; suggestions for improvement; negotiate assessment material. LEARNERS/CANDIDATES - monitor consistency; moderate assessment practice; principles of assessment; appeal procedure; special needs; moderate RPL assessment. ASSESSORS - coordinates Assessor meetings; support, advice, guidance; monitors processing of assessment information; quality of assessment; evaluates performance; de-registration of Assessors; assessment design.)

The administrative process. The administrative process supporting moderation

should consist of the following:

Figure 8: Administration of moderation. (The figure shows the elements of the administrative process: draft moderation policy; cost of setting up a moderation system; approved moderation policy; implementation of the moderation policy; evaluation of the moderation policy; review of the moderation policy; and the recording of moderation - review of assessments, meeting with Assessors, moderation checklist, visit report.)

CHAPTER 5: REVIEW MODERATION SYSTEMS AND PROCESSES

INTRODUCTION

Strengths and weaknesses of moderation systems and processes are identified in

terms of their manageability and potential to make judgements on the quality and

validity of assessment decisions.

As we have already discussed, the components of a moderation system are:

1. Timing. (The intervals at which moderation will take place.)

2. Extent. (The quality of registered Assessors and moderation system.)

3. Materials. (When and which assessment materials will be made available for

moderation?)

4. Personnel. (Who will take part in the moderation?)

EXERCISE 24: After having considered the strengths and weaknesses of different

moderation systems, what components would you select for yours?

Specific outcome: Review moderation systems and procedures.

Assessment criteria:

1. Strengths and weaknesses of moderation systems and processes are identified in terms of their manageability and effectiveness in facilitating judgements on the quality and validity of assessment decisions.

2. Recommendations contribute towards the improvement of moderation systems and processes in line with ETQA requirements and overall manageability.

3. The review enhances the credibility and integrity of the recognition system.

THE MODERATION POLICY

These requirements for a Moderation Policy are based on the ETDQA Guidelines for

Moderation dated November 2003. The Moderation Policy should include the

following elements:

1. Guidelines to moderators stating the principles of good assessment and the

rules of evidence and outlining the process to be followed.

2. A procedure for the quality manager for assessment to follow when requesting

moderation, and a form to be completed.

3. The overall average percentage of assessments to be moderated.

4. Time-frames for moderation in different contexts, which should be clearly spelt

out and justified in terms of the assessment plan for the programme in

question.

5. Criteria for selection of assessments to be moderated, for example:

5.1. A random sample of 10%.

5.2. A sample of assessments conducted by any new or inexperienced

assessors.

5.3. A sample across a variety of assessors and sites in order to check

reliability and consistency in the interpretation of standards.

5.4. A sample including assessments where the candidate was found not

yet competent, as well as those found competent; the feedback to

candidates should be appropriate and constructive.

5.5. A sample including RPL candidates with special needs, where these

exist.

5.6. A sample including assessments from learning sites where

achievement is unusually low or unusually high. (Remember, however,

that outcomes-based assessment is not governed by a normal

distribution. There is nothing wrong with all learners being found

competent, provided high standards of facilitation and assessment are

maintained.)

5.7. Any assessments that the assessor has queried and asked for

moderation. (Often such situations will, however, be resolved by

making use of a second assessor rather than a moderator.)

5.8. Samples from specified candidates or specified sites or associates.

5.9. Any assessments where the assessor or candidate or a relevant third

party has made a complaint or an appeal.

6. Reasons for selection of any given sample, which should be recorded on the

request for moderation form, using a combination of the above or other

suitable criteria; these will vary from one request for moderation to the next.

7. A form to be completed by moderators in drawing up a moderation plan.


8. A form to be completed by moderators when making a moderation report; this

should include a check-list to ensure that the correct assessment process has

been followed, and the principles of good assessment and the rules of

evidence have been observed.

9. A feedback form from moderators to the relevant assessors to assist in their

professional development.

10. A form to be completed by moderators if they decide they need to make an

intervention and change the assessor’s results.

11. Guidelines for moderators including a check-list for moderating assessment

instruments and guides (when new or being piloted) in terms of validity,

fitness for purpose, sufficiency, authenticity, and, where appropriate, currency.

12. A feedback form for each assessor to provide feedback on unit standards if

they wish to give this; and a process for moderators to collate this information

and give it to the provider’s quality manager for assessment to be forwarded

to the ETQA.

13. A plan indicating dates set for professional development meetings called by

moderators of assessors in a particular field or sub-field to discuss issues of

interpretation of standards and share problems, solutions and experiences.

14. A set of administrative and filing procedures to ensure that a record is kept

and data is accessible on request.

THE IMPROVEMENT OF MODERATION SYSTEMS

Recommendations on moderation systems and processes have the potential to

facilitate their improvement in line with ETQA requirements and overall

manageability. These recommendations, however, must be the product of a sound

evaluation of the existing system. The system must only be changed if and when

definite deficiencies are identified. There must be a need for change, and change

must lead to an improvement on the previous system and to the overall manageability of

the system. This can be ensured by the following:

1. The learning institution must have a central person (training manager,

academic director) who will liaise with the ETQA on assessment and

moderation matters.

2. All required information must be submitted to the ETQA on an annual basis


(Annual Training Report or Workplace Skills Plan (WSP) Implementation

Report) or when specifically asked for.

3. Any interventions by the ETQA must be accommodated and the ETQA must

be given full support, e.g. during site visits.

4. Assessment instruments and procedures must be reviewed on a regular

basis, either based on own observation, learner inputs or ETQA feedback.

CHECK LIST: MODERATION OF ASSESSMENT

Name of the Assessor: …………………………………………………

Name of Moderator: ………………………………………………….

Course: …………………………………………………………………

Duration: ………………………………………………………………

Customer: ……………………………………………………………..

This moderation of assessment is based on the practical assignments submitted by

the following learners:

1. ……………………………..

2. ……………………………..

3. ……………………………..

4. ……………………………..

5. ……………………………..

EXERCISE 25: The following is an example of a checklist that can be used for moderation

of assessment design. Evaluate the checklist based on the contents of this course so far,

as well as on your own exposure and experience in moderation. What recommendations

can you make to improve the checklist?


Type of assessment: ……………………………………………………

THE ASSESSOR (tick YES or NO)

1. Have the learners been assessed in a consistent, accurate and well-designed manner?

2. Did the assessment meet the requirements for assessment (fair, valid, reliable, etc.)?

3. Did the Assessor provide for the reassessment of learners if necessary?

4. Was the assessment appropriately conducted and did it match the specifications of unit standards and qualifications?

5. Was the quality of the candidate's evidence judged in a fair and consistent manner?

6. Was the standard of marking acceptable?

7. Were a satisfactory number of specific outcomes assessed? (More than 75%.)

8. Did the Assessor clearly indicate that learners may appeal if they feel that they have not been assessed fairly and objectively?

9. Were the learners briefed on what the Assessor expected of them before they were subjected to the assessment?

10. Was the timing of the assessment correct? (Did the learners have enough time to prepare for the assessment and to complete assignments?)

11. Was the language in which the assessment is to take place decided upon through consultations with the learners?

12. Did the Assessor process the assessment results timeously?

13. Did the Assessor clearly sign the marked assessment instruments?

14. Did the Assessor adhere to the rules and procedures for marking as specified in the Mentornet Policy and Procedure?

15. Did the Assessor re-mark all borderline cases?

THE ASSESSMENT INSTRUMENT (tick YES or NO)

16. Was the assessment instrument designed in accordance with the Mentornet quality assurance policy?

17. Were instructions to the learners clear and unambiguous?

18. Was the assessment instrument sufficient to protect the integrity of standards and qualifications?

19. Were the choice and design of assessment methods and instruments appropriate to the unit standards and qualifications being assessed?

20. Is the assessment instrument consistent, accurate and well designed?

21. Does the assessment instrument make provision for reassessment?

22. Will it be necessary to redesign the assessment instrument?

23. Has the memorandum been prepared according to the Mentornet quality assurance policy?

24. If annotated drawings are required, do complete drawings with annotations appear in the memorandum?

25. Is the design of the assessment instrument linked to an assessment strategy? (Environmental analysis to find the best assessment opportunities and approach.)

26. Is the grading design compatible with the assessment instrument? (Assessment criteria, weighting, format for judgements, etc.)

27. Is the assessment instrument implementable within any reasonable site, costs and time requirements?

28. Are marks for sections and subsections shown clearly?

29. Did the assessment instrument make provision for special needs without compromising the validity of the assessment?

30. Was the assessment instrument career- and practice-oriented?

31. Does the assessment instrument endeavour to determine the attitude of the learner towards his or her vocation as well as his or her sense of responsibility towards his or her vocation?

32. Are critical cross-field outcomes also assessed?

33. Was a memorandum available?

GENERAL REMARKS

1. It is not clear ……………………………...

2. The Facilitator provided learners with …………………………………

3. Etc.

…………………………………………. ………………………………………

SIGNED: MODERATOR SIGNED: ASSESSOR

The Moderator's report must be completed in duplicate. It is meant for the Assessor

and not the learners. The Assessor keeps one copy and signs the second. The

signed (second) copy is filed with the other course documentation. The Assessor

must report to the Moderator in writing what he or she did regarding the aspects

pointed out in the Moderator’s report as deficiencies. The Assessor may use the

following format for preparing a feedback report:


FEEDBACK ON MODERATION OF ASSESSMENT

Course: ……………………………………………………………………..

Client: ……………………………………………………………………….

Type of assessment: ………………………………………………………

Moderator: ………………………………………………………………….

Assessor: ……………………………………………………………………

Ref | Moderator's comments | Feedback/Implementation

1 | Indicate the learners may appeal. | Clause has been included in the revised assessment marksheet.

2 | Redesign the assessment instrument. | The assessment instrument has been redesigned.

Etc.

(signed)

………………………………………

ASSESSOR

SUMMARY OF CHAPTER 5

Actually moderation is quite simple and to the point. We should not make more of it

than it really is, namely but one aspect of the quality assurance process. In

summary, moderation should be designed and executed around the assessment

plan. This can be illustrated as follows:


Figure 9: Structure of the moderation system. (The figure shows the assessment plan and the elements against which assessment is moderated.)

The assessment plan: discussion with learners/candidates; minutes written on discussions; appeal procedure; based on the Workplace Skills Plan (WSP); outcomes-based.

Assessment is moderated against:

Assessment design: ETQA requirements; Unit Standard or qualification requirements; assessment material requirements; consistency and accuracy; admission to assessment; provision for special needs; standards not compromised because of special needs.

Reporting and briefing procedure: language; time; type of assessment; rules for assessment; equipment needed; venue for assessment.

The memorandum: draft question papers; corresponds with the Unit Standard; corresponds with assessment instructions; standard is maintained; allocation of marks; time for writing/doing the assessment is realistic.

Processing of examination marks: grading instructions; appointment of Assessors and Moderators; training of Assessors; documentation; signs of irregularities; capturing and recording of results; appeals procedure.

Communication of results: confidentiality; respect for individual privacy; accuracy; piety; reaction to appeals; time-frame; learner registers.

Safekeeping of scripts and results: assessment documentation; security; access; period of safekeeping; file per learner; assessment review documents.


PART 2: VERIFY MODERATION OF ASSESSMENT

PREFACE

In general assessment in education and training is about collecting evidence of

learners’ work so that judgments about learners’ achievements or non-achievements

can be made and decisions arrived at. These decisions may have to do with the

learner:

Is the learner able to do a certain job?

Is the learner able to embark on a particular course of study?

What other learning does the learner need in order to be deemed competent?

They may also have to do with the learning programme:

What is the quality of the programme?

What improvements or changes are needed?

Decisions may need to be made about the education and training system itself, and

judgments made in the assessment process can inform such decisions. The most

important use of assessment, though, is to judge the performance of learners in

education and training so that qualifications may be awarded. SAQA views

assessment as a structured process for gathering evidence and making judgments

about an individual’s performance in relation to registered national standards and

qualifications. (SAQA Guidelines, 1999: 6.)

In an educational system based on mastery learning, learning was not a finite

resource but was unlimited, allowing the potential of every learner to be maximized

and not regulated by a belief in the random distribution of intelligence. The

successful achievement of one learner did not hinge upon the failure of another

learner; all had the opportunity to achieve. The potential performance level for every

learner within the new outcomes-based system is represented by the graph which

follows. As indicated in the new learning curve, the learning system will boost a

substantial percentage of learner test scores to a high performance level. (Desmond,

1996: 89.)


Figure 10: Potential performance in an outcomes-based learning system (number of learners plotted against learner test scores, with the outcomes-based learning system shifting a substantial proportion of learners towards the high performance level compared to the previous learning system).

According to Guskey and Bailey (2001: 36) grading and reporting should always be

done in reference to specific learning criteria (outcomes) rather than in reference to

normative criteria or “on the curve” (see figure 1). It is certainly true that using a

normal distribution curve as a basis for assigning grades will yield consistent grade

distributions from one Facilitator to the next. In other words, the classes of every

Facilitator will have the same percentage of A’s, B’s, C’s, and so on. But the

consequences of this practice are overwhelmingly negative. Strong evidence shows

that it is detrimental to the relationships among learners and to the relationships

between Facilitators and learners.

Grading “on the curve” makes learning a highly competitive activity in which learners

compete against one another for the few scarce rewards (high grades and

certificates) distributed by the Facilitator. Under these conditions, learners readily

see that helping others become successful threatens their own chances for success.

High grades are attained not through excellence in performance but simply by doing

better than one’s classmates. As a result, learning becomes a game of winners and

losers, and because the number of rewards is kept arbitrarily small, most learners

are forced to be losers. (Guskey and Bailey, 2001: 36.)

Many learners can relate horror tales based on their experiences in classes where


they were graded “on the curve”. Many recall the anger they felt towards the high-

scoring learners in the class who “inflated the curve” and, in their minds, caused

other class members to receive a low grade. Some remember being the object of

their classmates’ anger because they were the high-scoring inflators of the curve.

Perhaps most important, grading “on the curve” communicates nothing about what

learners have learned or are able to do, at least not whether they are competent in

terms of the learning outcomes or not.

Outcomes-based assessment is about testing competence, and it is the

responsibility of the verifier to ensure that assessment and moderation are done in

terms of this and not in terms of a theoretical distribution curve. Stated differently – it

is the responsibility of the verifier to ensure that learners are subjected to valid,

consistent and reliable assessment and moderation.

This brings us to the aim of the second part of this publication. Verifiers need to be

taught certain skills and knowledge to enable them to do their job. I hope this section

on verification will contribute towards ensuring high quality learning, assessment and

moderation.


CHAPTER 6: PLAN AND PREPARE FOR VERIFICATION

INTRODUCTION

Verification means offering the evidence to others to confirm or disagree with one’s interpretation. This can take a variety of forms: external examiners, as impartial and independent subject experts appointed by the learning institution to monitor a particular programme, scrutinise purposive evidence, that is, all the assessment material at the end of the learning period. Incidental evidence may be verified informally by listening to the views of the service users – the clients, learners or employers. (Ilott and Murphy, 1999: 105.)

For the purpose of this manual we will see verification, also known as external

moderation, as a means of ensuring that two or more providers delivering to the

same unit standard and qualifications are assessing consistently to the same

standard, and in a well-designed way. Verification is mostly carried out by ETQAs, but the possibility that other external bodies, or even bodies belonging to the learning institution, may conduct verification should not be ruled out. Verification involves:

Specific outcome: Plan and prepare for verification.
Assessment criteria:
1. The scope and purpose of the verification are confirmed with Moderators and/or moderating bodies, and the policy and criteria for the verification are confirmed with the relevant ETQA.
2. The verification methods selected are appropriate to the context and enable the purposes of the verification to be met.
3. Planning of the scope and nature of the verification activities ensures the manageability of the verification and enables a fair judgement to be passed on the moderation of all the assessments under review.
4. The documentation is prepared in line with the quality assurance body’s system requirements and in such a way as to ensure that the results are clearly documented.
5. Required physical and human resources are ensured to be ready and available for use. Logistical arrangements are confirmed with relevant role-players prior to the verification process.
6. A variety of verification techniques are described and compared in terms of strengths, weaknesses and applications. The descriptions address the need for manageable, credible and reliable verification.


Checking that the system required for supporting the provision across the

institution/learning site is appropriate and works effectively.

Providing evidence and guidance to providers.

Maintaining an overview of provision across providers.

Checking that all the staff involved in assessment are appropriately qualified

and experienced.

Checking the credibility of assessment methods and instruments.

Checking (internal) moderation systems.

Monitoring and observing assessment processes and candidates’ evidence,

through sampling.

Checking Assessors’ decisions.

CONFIRMING THE SCOPE AND PURPOSE OF VERIFICATION

A verification schedule will normally be drawn up by the relevant ETQA, and verifiers

should be allocated to providers depending on their areas of expertise, availability

and location. It would be impractical to allocate verifiers residing in Johannesburg to

a learning provider in Cape Town, for example. However, seeing that most ETQAs, if not all of them, depend on the services of external consultants, objectivity is best served by allocating verifiers who have the least interest in the outcome of the verification process; in other words, verifiers who are not competitors of the learning provider should be used as far as possible.

It is customary, and in some instances ETQA policy, to verify 10% of all moderated

assessments.
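As a rough illustration of this sampling rule only, the following sketch (in Python, with hypothetical record names that are not prescribed by any ETQA) draws a random 10% sample of moderated assessments for verification, rounded up so that at least one assessment is always selected:

import math
import random

def select_for_verification(moderated_assessments, proportion=0.10, seed=None):
    # Return a random sample of moderated assessments for verification.
    # Assumes 'moderated_assessments' is any list of identifiers or records;
    # the 10% proportion follows the customary rule described above.
    if not moderated_assessments:
        return []
    sample_size = max(1, math.ceil(len(moderated_assessments) * proportion))
    rng = random.Random(seed)  # a fixed seed makes the selection reproducible and auditable
    return rng.sample(moderated_assessments, sample_size)

# Example: 43 moderated assessments yield a sample of 5 for verification.
assessments = ["ASSESSMENT-%03d" % n for n in range(1, 44)]
print(select_for_verification(assessments, seed=2009))

A reproducible random sample is only one possible approach; an ETQA may equally well prescribe purposive sampling, for example always including borderline or appealed results.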

SELECTING VERIFICATION METHODS

In general, there are two main methods of verification, namely off-site verification

and on-site verification.

Off-site verification normally takes place at the offices of the verifier and can range from simple confirmation of the authenticity of information given by the provider in the early stages of learning, to Assessor information and the submission of learner achievements for verification.

On-site verification takes place at the site of the learning provider. This can either be

done by verifiers employed by the quality assuring body (usually an ETQA) or by

contracted verifiers who will be deployed, according to their areas of expertise, to

carry out verification of assessment of learner achievements. On-site verification

may follow different patterns, in which some of the following methods may be used:

1. Sampling moderated assessments.

2. Attending a selection of actual performance or practical assessments to

observe the assessment process.

3. Checking that the evidence supplied is appropriate to the unit standards being

assessed.

4. Checking that the Assessor, as moderated by the Moderator, has followed the

assessment process.

5. Ensuring that the learner has been properly informed of their rights and

responsibilities.

6. Checking that the judgement regarding the outcome of assessment is valid.

7. Checking that assessments are moderated and that this is according to a

moderation plan.

8. Checking that the record of learner achievements includes all relevant

information.

9. Ensuring the unit standards/qualifications that the learner has been assessed

against are registered on the NQF.

10. Checking that the assessments are carried out at an appropriate NQF level.

PLANNING THE SCOPE AND NATURE OF THE VERIFICATION ACTIVITIES

The verifier may request to be present at an actual assessment in progress, for

example a performance assessment that is being moderated, or a feedback session

between the Assessor and the candidate. Attendance at an assessment may be

rather intimidating for the learner, because the Assessor, Moderator and the verifier

would then all be present, and this may interfere with the judgement made regarding

their competence against that unit standard. However, since the verifier is mostly

concerned with the quality of moderation which checks the assessment process,


assessment tools and Assessor quality, this should not happen often.

In brief, the verification process will consist of the following steps:

1. Preparing for verification.

2. Doing a Verifier visit.

3. Following up on the verification.

4. Writing and submitting the verifier report.

PREPARING VERIFICATION DOCUMENTATION

Verification requires some planning and preparation before the time. The following

are some of the preparations that should be done:

1. The verifier should write a letter of introduction, informing the provider that

verification will take place as well as when it will take place. This letter should

reach the provider early enough for him to prepare for the verification visit,

normally two weeks in advance.

2. The Verifier should draw up a Verifier’s Plan, including the selected methods and feedback mechanisms to Moderators.

3. The Verifier must prepare the tools that he or she will use. This can include a laptop with certain (dedicated) software, background information on the provider, checklists, notepaper and pen, etc.

4. The Verifier must study the unit standard or unit standards to be verified in advance.

5. The Verifier must have the Report Form readily available and not ask the provider to obtain it for him or her.

It is important for ETQAs and learning providers to use a standardized set of forms,

for the following reasons:

1. To ensure that the information is received in a standardized format by the

ETQA, so that it will not be necessary to rewrite all the information in a format

that the ETQA (computerized) management information system can accept.

2. So that the information can be forwarded to SAQA in the same format, to be

downloaded into the NLRD without having to rewrite the information in a

different format.


3. To simplify the verification process.
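By way of illustration only, the sketch below (in Python) shows how a provider might capture learner information in one standardised, machine-readable layout before submitting it to the ETQA. The field names are hypothetical and merely mirror the kind of information listed for the Learner Information Form below; they do not represent an actual SAQA or ETQA load specification.

import csv
import io

# Hypothetical field names, loosely following the Learner Information Form
# described below; a real ETQA/SAQA specification would differ.
FIELDS = [
    "date_received", "date_captured", "learner_id", "surname", "first_names",
    "address", "telephone", "race", "gender", "provider_name",
    "provider_accreditation_no", "unit_standard_no", "learning_programme",
    "summative_assessment_date", "assessor_registration_no",
]

def to_standard_csv(records):
    # Serialise learner information records to a single CSV layout so that the
    # ETQA's (computerised) management information system can read them
    # without the information having to be re-captured.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=FIELDS)
    writer.writeheader()
    for record in records:
        writer.writerow({field: record.get(field, "") for field in FIELDS})
    return buffer.getvalue()

example = [{"learner_id": "6712415090180", "surname": "Anderson",
            "unit_standard_no": "115759", "assessor_registration_no": "A-0001"}]
print(to_standard_csv(example))

The same record could just as easily be rendered as fixed-width text or XML; the point is simply that the provider, the ETQA and the NLRD all agree on one layout.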

There are two main groups of verification documentation, namely:
1. Forms containing data that will trigger the verification process, the contents of which should be tested by the verification process.
2. Forms to be completed by the verifier during the verification process, and which form the basis of the verification report.

The following is a breakdown of the documentation that will typically be required

during the verification process:

1. The Learner Information Form. This form captures all the basic information

about learners, independent of provider enrolment, qualifications or completion

data. The Facilitator must have the learners complete the form on the first day

of the course, so that the administrative staff can immediately start reading the

information into the computer. Gradually ETQAs are adopting electronic

systems, so that much of the information can be entered into dedicated

software and forwarded electronically to the ETQA. The Learner Information Form will normally contain the following information:

1.1. Dates (received and captured).

1.2. The learner’s personal details (ID, names, surname, address, telephone number, etc.).

1.3. Race and gender.

1.4. Provider details (name, accreditation status, etc.)

1.5. Learning details (U/S number, learning programme, learnership, etc.)

1.6. Date on which summative assessment will be completed.

1.7. Assessor registration number.

2. The Assessor Information Form. This form should capture information

about Assessors as part of the learning achievement records. It can be read into the

computer together with the learner achievements, once they are available, or even

before the course commences, once it has been confirmed that the course will take

place. It is read into the computer and submitted to the ETQA once only, after which

the computer programme will be able to link it to the achievement forms via the

Assessor registration number. Of course this will not work if the information systems do not comply with the SAQA specifications.

3. The Unit Standard Achievement Form. This form should be submitted

once a learner has completed a unit standard. It contains information on the learner,

the unit standard and the Assessor. It contains information that confirms the

achievement of competence against a particular unit standard and must, obviously, contain detailed information on the learner and the unit standard.

4. The Qualification Achievement Form. This form should be submitted once

a learner has completed a full qualification. It contains information on the learner, the

qualification identification, the Assessor and the period of learning.

5. The Course/Learning Programs Form. This form should be submitted once

a learner has completed a course/learning programme. This form has been designed

to address qualifications that have been achieved under the older “legacy” definition of qualifications as well as the new NQF qualifications. The presence of an NQF-

compliant qualification identification in the achievement record will indicate that this

is not a legacy qualification achievement.

6. A Learner Register. Although this is not prescribed by SETAs, it is a handy

document to assist the management staff of a learning institution to find learner

information quickly and to control the administration of learning progress. A learner

register will typically contain the following information:

6.1. Learner name, ID, postal address, telephone and fax numbers, e-mail

address.

6.2. Unit standards attended.

6.3. Assessment marks.

6.4. Results per unit standard (competent or not yet competent).

The following is an extract from such a register:


MENTORNET REGISTER OF LEARNERS WHO COMPLETED COURSES

Abbreviations: ID = identification or force number; C = Coloured; B = Black; W = White; A = Asian; F = female; M = male; ASS = assessment; AVE = average; W/D = with distinction; DNS = did not submit.

Columns: name and address; ID number; race and gender; course name; date of course; course code; NQF level; credits; assessment 1 mark; assessment 2 mark; assessment 3 mark; average mark; competent (C) or not yet competent (NYC).

K.F. Anderson (ID 6712415090180), W/M, P.O. Box 119, Jan Dons Park, Pretoria, 0029, Tel: (011) 653 5550:
SDF (Special course), 9 May – 5 Jul 02, course code SDF II, NQF level 5, 18 credits; marks 63%, DNS, DNS; average FAIL; result NYC.
Facilitating, course code FAC I, NQF level 4, 12 credits; marks 75%, 76%, N/A; average 75.5%; result C (W/D, with distinction).

E.W. Bandeda (ID 6511344429084), W/M, Fin ETD Trainer, P.O. Box 10244, Moreen Park, 0177, Tel: (021) 565 2319:
Assessment, 9 May – 30 Nov 01, course code ASS I, NQF level 5, 18 credits; marks 75%, 50%, 60%; average 61.67%; result C.

Table 3: Register of learners who completed courses.

7. Unit standard Files. Learning institutions should maintain a file for every unit

standard offered as a course. It will normally be a box file, because it contains all

information on a particular unit standard-based course. The following documentation

should typically be on the file:

7.1. Marked theoretical examination papers.


7.2. Learner Information Forms for all learners.

7.3. Facilitator evaluation sheets completed by all learners.

7.4. Endorsement of Learner Achievement form.

7.5. Endorsement of Learner Achievement Form 1: Unit Standards for each

learner.

7.6. Assessor Information Form.

7.7. A summary of all achievements of all learners for both theoretical and

practical assessments.

7.8. All mark sheets for practical assignments.

7.9. All practical assignments submitted by the learners.

7.10. A copy of the course report on each learner. (A second copy will have been forwarded to the learner, together with his or her course certificate.)

The following is a list of possible forms which the Verifier can use during verification:

Checklist | Who uses it? | When is it used?
Checklist of Verification | Moderator and Verifier | When assembling documents for verification.
Verifier’s Preparation Checklist | Verifier | When planning the verification exercise.
Assessment Activities Evaluation | Verifier/Moderator | When evaluating the assessment activities.
Assessment Process Evaluation Checklist | Verifier/Moderator | When evaluating the assessment process.
Evidence Evaluation Form | Verifier/Moderator | When evaluating the overall evidence.
Observation Checklist for Assessment Situation | Verifier/Moderator | Only on the occasions where the selection of moderated assessments specifies that observation is to be a part of verification.
Quality Assurance of Learner Achievement | Verifier/ETQA Auditor | When verifying the quality assurance of learner achievement.
Moderation Process Checklist | Verifier/Moderator | When evaluating the Moderator’s report.
Evaluate Verification Process and Plan | Verifier | After the verification.
Irregularities Report | Verifier | For the verification report.
Verification Report and Result Summary | Verifier | After the verification.

Table 4: Checklists and forms used by the Verifier during verification.

The Verifier must summarise his overall findings as part of his report to the ETQA. A

standard form will probably be available for this purpose, and the following

information should be reported on:

1. Verifier’s findings on the availability and legitimacy of assessment evidence.

In this respect details should be given on the conduct and competence of the

Assessor, the fairness and relevance of the assessment instruments used,

the objectivity of the assessment processes, the validity of the assessment

results, etc.

2. The accuracy of the Moderator’s findings on the assessment.

3. Whether the Moderator is qualified and registered.

Assessment Activities Evaluation. The Verifier evaluates the assessment activities by examining the Moderator’s findings to consider whether the Moderator covered the full scope of the assessment principles and considered each factor in detail. The Verifier evaluates both the Assessor’s and the Moderator’s views on the assessment tools and ultimately decides whether or not to endorse the Moderator’s evaluation. The Verifier is not to re-evaluate the assessment tools, but to use this checklist to see whether the Moderator covered all areas of consideration. Should the Verifier not agree with the Moderator’s evaluation, he or she will make a judgement on whether or not to endorse the decision regarding certification.

Assessment Process Evaluation. The Verifier examines the Moderator’s findings and considers whether the Moderator covered the full scope of the assessment. The assessment process evaluation is used to consider the Moderator’s views on the assessment tools and ultimately to decide whether or not to endorse the Moderator’s evaluation. Remember that the Verifier is not required to re-evaluate the assessment process, but to ensure that the Moderator covered all areas of consideration. Should the Verifier not agree with the Moderator’s evaluation, he or she will make a judgement on whether or not to endorse the decision regarding certification, and indicate this in the final report. The following should be evaluated regarding the assessment process:

Assessment Process
1. A pre-assessment discussion was conducted between the Assessor and the candidate to clarify the rights, roles and responsibilities. The candidates acknowledge their awareness of their rights, roles and responsibilities (which could be signaled through signing a form).
2. The assessment plan (including time frames) is agreed to by the candidates and the Assessor prior to the assessment taking place.
3. The assessment guide (assessment methodologies and activities) is agreed to by the candidate and the Assessor prior to the assessment taking place.
4. Documented assessment methods are compiled and readily available for all outcomes and/or unit standards that need to be assessed.
5. The learner had access to the relevant assessment tools before the assessment took place, where appropriate.
6. The Assessor informed the candidate of the evidence requirements in relation to the unit standard in question.
7. The Assessor explained what the candidate and the Assessor would be doing during the assessment.
8. The candidate acknowledges the authenticity of the evidence they submitted.
9. The assessment was conducted during circumstances that reflect realistic or lifelike conditions.
10. There is an indication that reasonable accommodation had been made for candidates with disabilities, e.g. provision of wheelchair ramps to gain access to the assessment venue.
11. Assessment was conducted without any obtrusive disruptions to the candidate’s performance.
12. The Assessor gathered naturally occurring evidence during the assessment, where appropriate.
13. The Assessor accurately evaluated the candidate’s evidence and performance against specific outcomes and assessment criteria.
14. Candidates had access to an appeal procedure if they wished to challenge assessment decisions.
15. The Assessor gave clear and constructive feedback during (if appropriate) and following the assessment decision.
16. The candidate and Assessor agree on a re-assessment option in the case of a “not yet competent” decision.
17. Assessment results and consequent action steps were formally recorded and actioned by the Assessor and candidate.

Table 5: Assessment process evaluation checklist.

Evidence Evaluation. The Verifier should examine the Moderator’s findings and consider whether the Moderator covered the full scope of the assessment process. This information is used to consider the Moderator’s views on the assessment tools and ultimately to decide whether or not to endorse the Moderator’s evaluation. The following should be considered:

The Assessor should consider the following in assessing the candidate’s evidence:
1. The evidence was valid. Was the evidence appropriate for the Specific Outcomes and Assessment Criteria being assessed? Did the learner have any difficulties in performing the assessment activities due to circumstantial reasons, such as a lack of resources, or time or timing (e.g. fatigue after a night shift)? Did the evidence enable a clear decision of Competent/Not Yet Competent to be reached by the Assessor?
2. The evidence was authentic. Is there proof that the evidence was the candidate’s own work? Were testimonials of observed work signed by the responsible individual?
3. The evidence was current. Did the evidence relate to the latest version of the unit standard or qualification? Does the evidence relate to current competence?
4. The evidence was consistent. Does the evidence reflect consistent competence across different circumstances?
5. The evidence was sufficient. Is there evidence for all selected Specific Outcomes and Assessment Criteria? Is there evidence for all the selected range statements?

Table 6: Evidence evaluation form.

Observing the Assessment Situation. The Verifier should examine the Moderator’s findings and consider whether the Moderator covered the full scope of the assessment process. This information is used to consider the Moderator’s views on the assessment tools and ultimately to decide whether or not to endorse the Moderator’s evaluation. Again, the Verifier will use this information, together with all the other information gathered, to make a judgement on whether or not to endorse the Assessor’s and Moderator’s recommendation regarding certification. The following should be checked:

Observation of the Assessor
1. The Assessor asked questions that were clear and did not unfairly lead the learner towards the answer.
2. The Assessor gave feedback relating to the candidate’s performance only.
3. The Assessor gave feedback within an agreed timeframe.
4. The Assessor provided feedback on all outcomes and all assessment criteria.
5. The Assessor provided constructive feedback in instances of a candidate being “not yet competent”.
6. The Assessor provided for sensitive handling in instances of the learner being upset or nervous.
7. The Assessor listened effectively.
8. The Assessor clarified or confirmed his/her understanding wherever necessary.
9. The learner was made aware of the outcome of the assessment and the reasons for the decision.

Table 7: Aspects to observe regarding the assessment situation.

Quality Assurance of Learner Achievements. All of the above actually forms part of the quality assurance process. However, there are a number of issues specifically concerning quality assurance that should also be checked. Here are some examples:

Quality assurance of learner achievements
1. All assessed learners are enrolled with the Provider or workplace provider.
2. A database for the recording of learner information, learner achievements and learner certification is in place.
3. The learner database is current.
4. The database enables the organisation to submit reports to the SETA and SAQA.
5. A policy to protect the confidentiality and security of the data is in place.
6. Access to the database is controlled.
7. The record of learner achievement is endorsed by the Assessor and Moderator.
8. An appeals and grievances policy and procedure for assessment is in place.

Table 8: Quality assurance of learner achievement checklist.
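As a rough sketch of the kind of database referred to in items 2 to 6 of the checklist above, the following Python example uses SQLite with hypothetical table and column names; an actual provider system would follow its SETA’s reporting specification and its own confidentiality and access-control policy.

import sqlite3

# Hypothetical schema for recording learner information, achievements and
# certification so that reports can be extracted for the SETA and SAQA.
SCHEMA = """
CREATE TABLE IF NOT EXISTS learner (
    learner_id  TEXT PRIMARY KEY,
    surname     TEXT NOT NULL,
    first_names TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS achievement (
    learner_id                TEXT REFERENCES learner(learner_id),
    unit_standard_no          TEXT NOT NULL,
    result                    TEXT CHECK (result IN ('C', 'NYC')),  -- competent / not yet competent
    assessor_registration_no  TEXT,
    moderator_registration_no TEXT,
    endorsed                  INTEGER DEFAULT 0  -- endorsed by the Assessor and Moderator
);
"""

connection = sqlite3.connect(":memory:")  # in-memory database, for the example only
connection.executescript(SCHEMA)
connection.execute("INSERT INTO learner VALUES (?, ?, ?)",
                   ("6511344429084", "Bandeda", "E. W."))
connection.execute("INSERT INTO achievement VALUES (?, ?, ?, ?, ?, ?)",
                   ("6511344429084", "ASS I", "C", "A-0001", "M-0002", 1))

# A simple per-learner report of the kind a SETA or SAQA return might draw on.
for row in connection.execute(
        "SELECT l.surname, a.unit_standard_no, a.result "
        "FROM learner AS l JOIN achievement AS a USING (learner_id)"):
    print(row)

Keeping achievements in a structured store like this also makes it straightforward to demonstrate the controlled access and endorsement status that the checklist asks about.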

The Moderation Process. The Verifier should consider whether the Moderator followed a fair and valid process in evaluating the Assessor and the assessment process. The Moderator will need to provide evidence that they (the Moderator and the Assessor) considered each of the following in the moderation process:


Moderator’s Report – check the following:
1. There is evidence of a moderation plan, which complies with the ETQA requirements.
2. There is evidence that the Moderator has provided feedback to the Assessor(s) on the moderation findings.
3. There is evidence that the assessment tools were moderated prior to the assessment.
4. There is a system in place to audit, moderate and update the validity, currency and accuracy of the assessment tools on an ongoing basis.
5. Moderation results and findings of the moderation are recorded.
6. The moderation process includes a consideration of: the moderation tools; the assessment process; the Assessor; the assessment results.
7. The moderation process is consistent and rigorous (as per Unit Standard ASSMT03).
8. There is evidence of the Moderator’s report and it is available to the Verifier.
9. The Moderator’s results, including recommendations for non-conformance areas, are fed back to the ETQA.
10. A valid and representative sample of assessments was moderated according to the ETQA sampling criteria.
11. The Moderator considered whether the Assessor’s judgement was sound.
12. The Moderator considered whether the Assessor’s behaviour was reasonable.
13. Advice and support are provided to the Assessors and assessment agencies.

Table 9: Moderation process checklist.

Evaluating the Verification Plan and Process. In summarising the evaluation process, the Verifier should address some critical points. To a large extent these points would be the aspects already checked. The following should be included:

Process
1. Preparation process.
2. Verifier’s Tool.
3. Verification plan.
4. Verification of moderated assessments: process and results.
5. Reporting documentation.
6. Feedback and support mechanisms for the Moderators.

Table 10: Evaluate verification plan and process.

The Irregularities Report. The Verifier must complete an Irregularities Report, the

format of which can differ between different ETQAs. This form is used to capture and

summarise the Verifier’s findings relating to irregularities and to assist him/her to

decide on the consequence. The final decision must accompany the final report. At

all times the verifier must make recommendations as to how the situation may be

rectified, possibly through providing Moderator support. The following is an example

of an Irregularities Report:

Irregularity (with explanation) | Consequence/recommendation
Assessment instruments not available: the assessment instruments and assessment specification for the unit standards were not available for scrutiny by the Verifier. | Hold the certification on the unit standard or qualification.
Inappropriate assessment instruments: the assessment tools selected did not assess the relevant outcomes validly, e.g. where a practical competency is assessed by a written test. | Hold the certification on the unit standard or qualification.
Incorrect or inappropriate assessment specification: either the assessment criteria or the range and evidence requirements defined in the unit standard were not followed. | Hold the certification on the unit standard or qualification.
No evidence or insufficient evidence of candidates’ performance: sufficient evidence of assessment of all relevant units was not supplied. | Hold the certification on the unit standard or qualification.
Inappropriate judgement of candidates’ performance: the Assessor has incorrectly interpreted the evidence supplied by candidates and has judged the candidates competent when the standard specified in the assessment criteria has not yet been met, or not yet competent although the evidence is sufficient. | Hold the certification on the unit standard or qualification.
No record of candidates’ achievement: the centre has failed to keep records about candidates’ competence. | Hold the certification on the unit standard or qualification.
Moderation arrangements unsatisfactory: the Provider or workplace Provider has failed to (a) show that the outcomes of the assessment are consistent between Assessors, (b) show that the outcomes of assessment are consistent across candidates, and (c) show that all assessments designed for a particular unit standard are comparable in terms of the demands made on the candidate. | Verifier to suggest corrective action and schedule a return date.
Other – please specify: the learning/assessment site has failed to meet the following requirements for assessment: venue unsuitable or difficult to access; equipment unserviceable for the purpose of the assessment; unsuitable time for the candidate, imposing unnecessary pressure; use of substandard materials for the assessment. |

Table 11: Irregularities report.

8. The Verification Report and Result Summary. The Verification Report and Result Summary must be submitted to the ETQA. The remainder of the completed Verifier’s Tool is to be retained by the Verifier as his or her evidence. Here is an example:

Details of verification:
Verifier name:
Verifier ETQA number:
Moderated assessments being verified:
Assessor ETQA number:
Moderator being verified:
Unit standard title(s):
Unit standard number(s):
Unit standard level(s):
Unit standard credit(s):

Results of verification:
Moderated assessment endorsed:
Moderated assessment endorsed, with minor revisions:
Moderated assessment not endorsed:
Comments and recommendations:
Identified irregularities and explanation/proof:
Recommendations regarding the irregularities:
Verifier signature:
Moderator signature:
Provider signature:
Date:

Follow-up support plan for Moderators:
Date:
Follow-up objectives and requirements:

Table 12: Verification report and result summary.


DOING A VERIFIER VISIT

During the Verifier visit the Verifier should allow the provider to first do a presentation

on the learning done and systems in place. This will help the Verifier understand

much better why certain things are done the way they are, thus saving substantial

time which would otherwise have been necessary to ask questions and consult

additional documentation. The Verifier should check the following:

1. Check if the Assessor:

1.1. Has adhered to the ETQA requirements. The Moderator should actually already have checked this, but it is important enough to justify a measure of duplication.

1.2. Is registered. This can be done by checking the Assessor’s ETQA

registration number.

2. The Verifier should also check the assessment guide and evaluation form.

The Moderator/provider must have the following available as evidence:

2.1. The unit standards against which the learning took place.

2.2. The assessment guide.

2.3. Assessment tools and activities.

2.4. Completed principles of good assessment checklist.

3. Although the Moderator should check the assessment process, the verifier

can still check if the Moderator did this properly. The verifier can check the

following:

3.1. The Assessment Plan endorsed by the Assessor and the candidate.

3.2. Completed assessment documentation and candidate’s evidence.

3.3. Completed Evidence Evaluation Checklist, if available.

3.4. Proof of candidate and Assessor meetings/discussions e.g. signed

minutes of meetings.

3.5. Proof of completed appeals procedure, if required.

3.6. Proof of re-assessment, if required.

4. The assessment results. Assessment results must be professionally

documented and endorsed. Both the Assessor and candidate must have

signed the results. Records of learner achievement must be kept.

5. The Verifier must check if the Moderator has adhered to the ETQA

requirements and is registered.


6. Most importantly the moderation process must be checked. This includes

checking the following:

6.1. The Moderation Plan.

6.2. Evidence of the Moderator’s evaluation of the assessment results, tools

and processes.

6.3. The Moderator’s Report.

7. Once the Verifier has completed his or her evaluation, he or she must prepare

the following:

7.1. An Irregularities Report. It must be specified if there were no irregularities.

7.2. The Verification Report.

PHYSICAL AND HUMAN RESOURCES REQUIRED FOR VERIFICATION

Verification is probably the most important step in the quality assurance process. As

a sub-system of the quality assurance process, it is in actual fact a process in itself.

Being part of the open system of quality assurance, verification should not be regarded or approached as a process divorced from the learning system. This

implies that the physical and human resources involved in the learning process

should, as far as possible, also form part of the verification process. Then again, like

in any game or sport, the game of learning also needs referees. After all, how will we

know if we are achieving anything with our learning efforts if it is not measured, and if

nobody can tell us so?

The following are probably the most important role players in the verification process:

1. The verifier, who can be an individual or a group of individuals, each with a

particular role to play.

2. The Moderator or Moderators.

3. The Assessor or Assessors.

4. The Facilitator, if he or she is not the Assessor anyway.

5. The learners.

6. Administration and management staff.

The Verifier is the person or persons who are responsible for externally verifying the


moderation process and confirming or overturning moderation findings. The verifier

verifies whether specific learners have actually met the outcomes that they, through

their providers, claimed to have achieved. Verifiers can also check if the assessment

instruments meet the requirements for fair and valid assessment and if the

assessment process is managed properly and professionally.

The Moderator is responsible for reviewing and socially ratifying the Assessor’s assessments, that is, the Assessor’s judgement of the value of learners’ work. Moderation is mostly a review process aimed at monitoring the quality of assessment, ensuring that it is fair and that acceptable procedures are followed in assessment, and checking on interpretations – i.e. how criteria have been applied in the assessment

process. (Torrance, 1995: 124.)

The Assessor in the context of this section is responsible for assessing

competence, i.e. to ensure that the learner has achieved the learning outcomes and

essential embedded knowledge. Assessment may also be aimed at the identification

of strengths and weaknesses in the learner, identifying potential areas of learning

difficulty, to provide guidance and feedback, to provide a grade which contributes to

the final award (standard or qualification) and to serve as a source of external

discipline to ensure that the learner does not fall too far behind in his or her learning.

The Facilitator can also be the Assessor. As Facilitator, he or she will be

responsible for ensuring that learners are prepared for the requirements of

assessment, especially for providing evidence of competence. The Facilitator and

Assessor will discuss the contents and form of assessment to ensure that the

assessment is fair and valid.

The learner identifies unit standards to be assessed, participates in the drawing up

of an assessment plan, decides whether he or she would need an interpreter

present, produces evidence of prior achievement and current competence, and

produces evidence in a structured format.

Administrative and management staff are responsible for the administration of the

assessment process. They communicate information on the assessment to the


learners prior to as well as after the assessment. They are also responsible for

ensuring that the assessment venue and instruments are well prepared and

available on the date on which the assessment is to take place. Administrative and

management staff maintain current and accurate files on assessment results and

ensure that all assessment activities (preparing and marking assessment papers,

moderating assessment, sending learner and assessment information through to the

verifying body and filing assessment results) are addressed. They are also

responsible for the safekeeping and security of assessment instruments and results.

VERIFICATION TECHNIQUES

The principle of triangulation is useful to apply when collecting, collating and

synthesizing the evidence. This means obtaining information from different sources

and by different methods to obtain a complete picture and so reduce bias or

personality clashes. (Ilott and Murphy, 1999: 105.)

Inviting second opinions. There is much reassurance when your judgement is

confirmed by a respected other person. Seeking a second opinion also protects the

learner and Facilitators from the possibility of a personality clash. A second and third

opinion, through moderation and verification, is an intrinsic part of most learning

systems. Group decision-making usually ranges from informal staff discussion to formal quality assurance bodies which ratify the results. Implicit assessment

constructs and incidental evidence may be a feature of informal discussion. These

include the learner’s profile of performance throughout the whole course, whether

they ‘deserve’ to fail, attendance and professional behaviour, and possible grounds

for an appeal. (Ilott and Murphy, 1999: 106.)

Second opinions are equally important for work-based Assessors and Moderators.

There are stark differences between the collective and anonymous academic

procedures and the personal, face-to-face, direct judgements and continuous contact

in work settings. Greater reliance upon incidental evidence and the one-to-one supervisory relationship, with fewer opportunities for comparison or verification, all make work-based assessments and moderations extra difficult. This means alternative arrangements, either existing or one-off, need to be made to ensure that “not yet competent” grades are a team rather than an individual decision. Such

arrangements may include informal consultation with peers or as part of a split

placement (between different work-based settings or supervisors). Line managers

can contribute through supervision and support. The views of other members of the

multidisciplinary team and support workers may also be sought about overall

impressions or specific concerns. Another source of evidence is the opinion,

feedback and reactions of the service users. They are customers and consumers of

the service provided by the learner, i.e. how well the learner can perform his or her

newly acquired skills and knowledge.

Work-practice organisers, where they exist, are another source of background

information and support. These are academic staff who straddle the academic and

work-based components of the course. They highlight the interrelationship between

theory and practice through having access to both environments, but without really

belonging to either. The role of work-practice organisers may be compared with that

of verifiers. They are influential, external agents who oversee the assessment and

moderation processes. However, they may be perceived as less objective or

impartial because they are employees of the higher education establishment.

Especially in instances where ETQAs make use of external verifiers who are

themselves employees, be it full- or part-time, of private learning institutions, their

objectivity and motives may be regarded as suspect. Work-practice organisers act as

independent advisers, giving advice on alternative supervisory strategies and

information about regulations, procedures and paperwork. (Ilott and Murphy, 1999:

107.)

SUMMARY OF CHAPTER 6

Verification is a means to ensure that two or more providers delivering to the same

unit standards and qualifications are assessing consistently to the same standard,

and in a well-designed way.

ETQAs are responsible for planning and conducting verification.

In general there are two main methods of verification, namely off-site and on-site


verification. The Moderator plays an important role in the moderation process, seeing

that his or her findings are verified. Therefore, it is important for the Moderator to be

present during verification visits.

The verifier may request to be present at an actual assessment, but this should not

be allowed if it may have a negative effect on the performance of the candidates.

Planning checklists can greatly simplify the task of the verifier. There are two groups

of verification documentation, namely:

1. Forms containing data that will trigger the verification process, the contents of

which should be tested by the verification process.

2. Forms to be completed by the verifier during the verification process, and which

form the basis of the verification report.

The physical and human resources involved in the learning process should, as far as

possible, also form part of the verification process. The only additional role players

are the verifiers.

Unbiased and accurate verification is dependent on accurate and complete

assessment information. Therefore, second opinions should be invited in addition to

the findings of the Assessor and the Moderator.


CHAPTER 7: CONDUCT VERIFICATION

INTRODUCTION

Two basic approaches may be used to conduct verification, i.e. to measure the

quality and accuracy of moderation and, by implication, assessment. The first

approach is through judgements by knowledgeable people using established criteria

(checklists). The second approach is through appropriate statistics. These two

approaches are not necessarily mutually exclusive – statistics can, for example, be

used to confirm findings based on criteria.

In this chapter we will discuss criteria-based approaches to verification.

In all areas of learning, verifiers are appointed to have overall control of the quality of

educational and training programmes. The following is a general list of their responsibilities (Cotton, 1995: 117):

To judge impartially.

To compare an individual’s performance with his or her peers.

To approve the form and content of examinations and other forms of

assessment.

To agree on changes in assessment procedures.

To attend examiner’s meetings.

To observe all learners’ work.

To overview marks given by Moderators.

Specific outcome: Conduct verification.
Assessment criteria:
1. The verification is conducted in accordance with the verification plan. Unforeseen events are handled without compromising the validity of the verification.
2. The moderation process is checked; inconsistencies in moderation processes and results are identified and are justifiable in terms of verifiable data.
3. Verification of moderation decisions enables the quality assurance body’s requirements for consistency to be achieved.
4. The proportion of moderation decisions selected for verification meets the quality assurance body’s requirements for consistency and reliability and the use of time and resources.


To conduct a viva voce (oral) examination (usually when a candidate has

unavoidably missed an examination).

To check that processes follow the assessment regulations.

To report on the effectiveness of the assessment methods.

Verifiers have a great deal of power and it is important that they are experienced and responsible. The verifier is the link between the awarding body, SAQA and the Department of Education, and is expected to provide accurate, complete, current and relevant support on the following topics (based on Cotton, 1995: 118):

Planning, resourcing, quality assurance and recording arrangements in the

learning institution.

The needs of candidates with special assessment requirements.

Centre staff training and development for their assessment roles.

Interpretations of the awarding body guidance and criteria.

National standards.

Quality assurance arrangements.

Equality of access and the elimination of unfair discrimination.

The verifier helps the learning institution to establish and maintain a verification plan

which covers the following principles (Cotton, 1995: 119):

Inspection of a representative sample of assessments.

Effective use of resources in meeting awarding body verification

requirements.

The quality and consistency of Assessors’ judgements.

Positive feedback when good practice is identified.

Prompt action if there is a departure from awarding body requirements.

Checks that national standards are met.

Checks that valid assessment is achieved.

Disputes regarding assessment are resolved equitably.

An ETQA would have to show evidence of the capacity to manage an external

moderation system that facilitates and ensures that these activities can be done

effectively and efficiently before gaining accreditation.

Individuals who are to carry out the roles of verifiers should be experienced, know the

learning area well, have undergone training for moderation, and have credibility

among Assessors and within their area of knowledge and expertise.

ETQAs will have to ensure that moderation systems established are consistent with

capability and means. The external moderation system could be centralised and

directive or it could consist of a system of local networks.

If a centrally directed system is set up by an ETQA, it could allocate the moderation

function to one or a combination of agents. The following are examples of agents:

A panel established to oversee the assessment of unit standards or

qualifications.

A national professional association.

An individual provider or consortium of providers.

Private consultants.

If a system of local networks is the choice, the moderation system could be designed

by providers. Local user group representation could be included in this option.

THE VERIFICATION PLAN

The first step in the verification plan is initiated by the learning provider. The learning

provider must submit learner achievements for endorsement at the scheduled time

indicated on the learner information form. The ETQA will agree on a time and date

on which to send verifiers to the provider.

Typically the verification process will consist of four steps, namely:

1. Verifier preparation and documentation review.

2. The verifier visit.

3. Verifier follow-up.

4. Verifier report.

VERIFIER PREPARATION AND DOCUMENTATION REVIEW

The documentation involved in the verification process has already been discussed.

Suffice it to say that it is the responsibility of the verifier to ensure that he or she has all

documents required for the verification process readily available.

A verification schedule will be drawn up by the relevant ETQA, and verifiers will be

allocated to Providers depending on their areas of expertise, availability and location.

At least 10% of all moderated assessments will be verified.

The ETQA, in co-operation with the verifier or verifiers, will inform the learning

institution of the date and time of the verification, and ask them to have the following

available for the verification process:

1. A list of documentation to be available for verification.

2. An agreement as to who should be present for the verification. All Moderators

should, for example, be available.

3. A list of documentation that the learning provider should make available to the

verifier prior to the visit. The verifier(s) will study these documents prior to the

verification visit.

THE VERIFIER VISIT

The Moderator should have all the moderated assessments for the last period

available for verification purposes. Moderated assessments should be sorted out into

the following categories:

1. Those that confirmed the Assessor’s decision, split into competent and not yet

competent.

2. Those that rejected the Assessor’s decision, split into competent and not yet

competent. The issues should be resolved as far as possible by the Moderator,

and the verifier will check that the process followed has been fair.

The verifier will select at least 10% of the assessments from each category. The

ETQA will specify prior to the verification whether the verification will relate to

consistency across unit standards or across Assessors. If only one Assessor was

involved, the verification will, of course, relate to consistency across unit standards.

The verifier will ensure that a spread of the different Assessors is selected for

verification and ensure that a spread of the different unit standards being assessed

are selected.
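To make this sampling step concrete, here is a minimal sketch, in Python, of how a verifier or ETQA administrator might draw the minimum 10% sample while keeping a spread across Assessors and unit standards. The record fields (assessor, unit_standard, category) and the grouping rule are assumptions for illustration only, not a prescribed ETQA procedure.

import math
import random
from collections import defaultdict

def select_verification_sample(moderated, proportion=0.10, seed=None):
    """Select at least `proportion` of moderated assessments, spread across
    each Assessor / unit standard combination.

    `moderated` is a list of dicts with (assumed) keys:
    'assessor', 'unit_standard' and 'category' (e.g. 'confirmed-competent')."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for record in moderated:
        # Group so that every Assessor / unit standard combination is represented.
        groups[(record["assessor"], record["unit_standard"])].append(record)

    sample = []
    for records in groups.values():
        # At least one record per group, and at least 10% of each group overall.
        k = max(1, math.ceil(len(records) * proportion))
        sample.extend(rng.sample(records, k))
    return sample

# Example usage with hypothetical moderation records:
records = [
    {"assessor": "A1", "unit_standard": "115759", "category": "confirmed-competent"},
    {"assessor": "A1", "unit_standard": "115753", "category": "confirmed-not-yet-competent"},
    {"assessor": "A2", "unit_standard": "115759", "category": "rejected-competent"},
]
print(select_verification_sample(records, seed=1))

Because every Assessor/unit standard group contributes at least one moderated assessment, a sample drawn in this way can only meet or exceed the 10% minimum.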

Verifiers may request to be present at an actual assessment in progress, for

example a performance assessment that is being moderated, or a feedback session

between the Assessor and the candidate. Bear in mind that attendance at a

performance assessment may be intimidating for the candidate as the Assessor, the

Moderator and the verifier may be present, and this may interfere with the judgement

made regarding the candidate’s competence against a unit standard. (ETDPQA,

2002: 2.)

VERIFIER FOLLOW-UP

It is important for the verifier to follow up each verification process. The verifier

follow-up is an important step in the quality assurance process, because it

demonstrates the involvement of the verifier in the learning process and ensures that

deficiencies in the moderation process are addressed. The verifier follow-up can take

two forms, and it is advisable that both are used. The most obvious follow-up step is

to ask the Moderator for feedback on the Verification Report. Feedback should

include an explanation and confirmation that weaknesses in the moderation process

have been corrected. From time to time the verifier can re-visit the provider to ensure

that there is an improvement in the moderation process.

CHECKING AND JUDGING THE VERIFICATION PROCESS

As with assessment and moderation, the verification process must also be subjected

to quality assurance. This is the responsibility of the Quality Assuring Body.

However, Providers may also report on the conduct of verifiers during verification.

A second step in checking and judging the verification process would be to identify

and correct deficiencies in the verification process. This should be part of the

research and development process. Research and development is an inherent

characteristic of a metanoic (growth) organization, and should therefore be a

standard working procedure of every Quality Assurance Body.

REQUIREMENTS FOR VALIDITY

Validity of a moderation system is the degree to which it measures what it was

meant to measure. It is not realistic, nor is it advisable, to divorce validity of

moderation from validity of assessment. As with assessment, validity of moderation,

and, ultimately, validity of verification, is determined by the validity of the assessment

instruments and processes used.

Validity does not refer to assessments or information gathering procedures

themselves. Rather, it refers to the appropriateness and adequacy of interpretations

made from that information. (Guskey and Bailey, 2001: 46.) For example, if

assessment is to be used to describe learners’ ability to do a needs analysis, we

would like our interpretations to be based on evidence that actually reflects the ability

to do a needs analysis and not other irrelevant factors, such as learners’ gender or

background characteristics. In this way, validity is always specific to the particular

interpretation or application.

So, validity refers to the appropriateness, meaningfulness, and usefulness of the

specific inferences made from test scores. Test validation is a process of

accumulating evidence to support such inferences. A variety of inferences may be

made from scores produced by a given test, and there are many ways of

accumulating evidence to support any particular inference. Validity, however, is a

unitary concept. Although evidence may be accumulated in many ways, validity

always refers to the degree to which that evidence supports the inferences that are

made from the scores. The inferences regarding specific applications of a test are

validated, not the test itself. (Osterlind, 1998: 63.)

No assessment is valid for all purposes. The results of a test in statistics, for

instance, may have a high degree of validity for indicating learners’ computational

skill, a low validity for indicating their mathematical reasoning, and essentially no

validity for predicting success in reading or writing. (Not that a test in statistics needs to test reading or writing ability. In fact, these should not be tested at all, if possible, unless they are additional learning or critical cross-field outcomes.)

Particularly important in assessments of learners’ learning is construct validity. Reading, mathematics, and social studies achievement are examples of constructs.

So too are personal attributes such as responsibility, diligence, and effort. We

address construct validity by asking questions such as these (Guskey and Bailey,

2001: 46):

Does this procedure, intended to measure problem solving, actually reflect

higher order abilities rather than recall?

Does this set of questions on scientific concepts provide a sound basis for

generalizing about learners’ understanding of the domain of science?

How much do difficulties with the English language impair performance of

some learners on mathematics assessment?

The construct validity of an assessment or information-gathering procedure is

frequently undermined by construct-irrelevant influences. These occur when

learners’ performance is affected by knowledge, skills, or traits other than those the

assessment is intended to measure. Such ancillary attributes can have either

positive or negative effects. Testwiseness (knowing how to take a specific type of

test) and neatness of writing, for example, can inflate scores and confound

interpretations of results. Learners’ inability to understand the language used in

assessment is perhaps the most common example of a negative effect along with

personal characteristics such as test anxiety and impulsivity.

In addition to its use with assessments and information-gathering procedures,

validity can describe the appropriateness and adequacy of interpretations made from

grades or other reporting methods. For example, if grades are to represent learners’

achievement in a subject area over a specified period of time, we must ensure that

the grade is based on evidence that actually reflects achievement and not other

irrelevant factors, such as learners’ effort or their punctuality in turning in

assignments. To include these construct-irrelevant factors diminishes the validity of

our interpretation of the grade as a descriptor of achievement.

Similarly, if grades are to represent learners’ current level of achievement or

performance, we must ensure that they are based on evidence of what learners can

do now and not what they were able to do last week or last month. This means that including scores from assessments administered weeks or months earlier lowers the validity of our interpretation of the grade as a descriptor of learners' current level of accomplishment or proficiency.

We must always remember, therefore, that validity refers to the interpretation of an

assessment or grade, and it is a matter of degree. Validity does not exist on an all-

or-none basis. Consequently, we should avoid thinking of assessment results or

grades as valid or invalid. Validity is best considered in terms of categories such as

high validity, moderate validity, and low validity, depending on the use or

interpretation of the information. (Guskey and Bailey, 2001: 47.)

The relevant type of validity in the measurement of achievement is called content validity. In assessing the content validity of an achievement test, one asks, “To what

extent does the test require demonstration by the learner of the achievements that

constitute the learning outcomes in this area?" For a test to have high content

validity, it should be a representative sample of both the content/topic and the

cognitive processes/abilities outcomes (competence) of a given course, unit

standard or learning programme. (Hopkins, 1998: 73.)

Content validation is primarily a process of logical analysis. By means of a careful

and critical examination of the test items in relation to the learning outcomes and

instruction, one must make the following professional judgement:

1. Does the test content parallel the learning outcomes in content and processes?

2. Are the test and curricular emphasis in proper balance?

3. Is the test free from prerequisites that are irrelevant or incidental to the present

measurement task? (For example, is the reading level of the statistics test

appropriate for the learners?)

Validity is also dependent on quantity. Any single source of evidence on learner

learning can be flawed or misinterpreted. Even the most carefully designed

assessments can, under certain conditions, yield imprecise measures of what

learners have learned or are able to do. It is not uncommon, for example, for some

unusual occurrence in a learner's life to impinge on his or her performance on that

particular day and adversely affect assessment results. Perhaps the learner fell ill at

the time of the assessment or had an experience earlier in the day that had an

enduring emotional impact. Because of the potential fallibility of any single indicator,

it is essential that we use multiple sources of evidence in grading and reporting

learner learning. In other words, more evidence is usually better evidence. (Guskey

and Bailey, 2001: 48.)

For an assessment instrument to be valid, it must meet the following requirements:

1. The assessment instrument must be constructed in such a manner that it is

appropriate, meaningful and useful. Stated differently, the test items must

measure the learning outcomes for the course.

2. The assessment instrument must meet the purpose of the test, i.e. it must measure what the test was intended to measure. The Assessor must prepare the test items, and the Moderator must evaluate whether they are consistent with the test's purpose.

3. The assessment instrument must be conducive to lifelong learning, i.e. it must

enable the learner to continually assemble, reassemble, structure, and tune

the accumulating body of knowledge and skills into functional systems for use

in thought, behaviour and in further learning.

4. The test content must be clear. Simple language and subject-related

terminology must be used.

5. The test content and item specifications must not be too prescriptive. It is not

realistic to try to describe nearly every conceivable delimitation to every test

item. Very narrow or laboriously defined specifications may inhibit the

learners’ creativity. A delicate balance needs to be struck between providing

too much and too little information.

REQUIREMENTS FOR RELIABILITY AND CONSISTENCY

Reliability refers to the consistency of assessment results. If, for example, we attain

very similar scores when the same assessment procedures are used with the same

learners at two different times, we would conclude that our results have a high

degree of reliability. Similarly, if two different Facilitators independently rate learners’

performance on the same task and obtain similar scores, we would say the results

are highly reliable from one rater to another, or that they have a high degree of “inter-

rater” reliability. (Guskey and Bailey, 2001: 48.) The same is true in terms of grading.

If two different Facilitators independently look at the same collection of evidence on a

learner’s achievement and assign the same grade, we would say that the grading

process is highly reliable. In each of these cases our major concern is with

consistency of results.
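As a rough illustration of inter-rater reliability, the following sketch computes simple percentage agreement between two Facilitators' competent/not-yet-competent decisions on the same ten candidates. Percentage agreement is only the crudest consistency index, and the decision lists here are invented for the example.

def percentage_agreement(rater_a, rater_b):
    """Proportion of cases on which two raters made the same decision."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("Both raters must judge the same, non-empty set of cases.")
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return agreements / len(rater_a)

# Hypothetical competent (C) / not-yet-competent (N) decisions on ten candidates:
facilitator_1 = ["C", "C", "N", "C", "N", "C", "C", "N", "C", "C"]
facilitator_2 = ["C", "C", "N", "C", "C", "C", "N", "N", "C", "C"]
print(percentage_agreement(facilitator_1, facilitator_2))  # 0.8 -> a fairly high degree of agreement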

Sometimes, reliability and validity are confused, especially by those new to the areas

of assessment and evaluation. Reliability is necessary for assessment results to be

valid, but results can also be reliable and not valid. In other words, certain measures

may be highly consistent and yet provide the wrong information or be interpreted

inappropriately. Results from a standardized assessment composed of multiple-

choice items dealing with grammar and punctuation, for example, might be highly

reliable and yet have very low validity as a measure of learners’ writing skills. A valid

assessment of writing skills is likely to require learners to do some writing. Similarly,

a grading procedure that outlines the specific elements to be included in the grade,

and describes how those elements will be weighed and combined, may yield

consistent results. But this doesn’t make the grade a valid indicator of learners’ level

of achievement and performance, especially if some of those elements have low

validity as measures of learners’ learning. Nor does it make the grading process any

more objective. Facilitators still subjectively decide what elements are included in

determining the grades and how those elements will be combined. In all areas of

assessment and grading, reliability is concerned with the consistency of results,

whereas validity is concerned with the appropriateness of the interpretations made

from the results. (Guskey and Bailey, 2001: 48.)

Causes of unreliability. Moderation judgements can be unreliable (Tombari and

Borich, 1999: 62):

1. If the sample of behaviour was too small to provide a reasonable probability of being accurate.

2. If the number of times the behaviour was observed was too small to produce a reasonable probability of being accurate.

3. If the tasks that were assessed were unclear.

4. Scoring imprecision.

Let’s consider each of them.

Limited sample of behaviour. For many practical reasons, moderation has to be

done within a limited space of time. Thus, the number of items that can be

moderated is limited. This might have a detrimental influence on the reliability of the

outcomes. The fewer the number of items moderated or tasks observed, the more

likely it is that the verifier’s judgement of the Moderator’s work will be unreliable.

Random influences can also affect the Moderator during moderation. The more

items moderated, the better the likelihood that the effect of these random influences

can be eliminated.

Small number of observation occasions. Moderators have good and bad days,

just like Assessors and, for that matter, learners. A test, a one-time observation, a

single opportunity to demonstrate procedures, each increases the likelihood that

factors other than the Moderator’s moderation skills will affect the outcome.

Judgements about non-cognitive attributes such as motivation, attitude or getting

along with others are especially susceptible to unreliability due to limited

opportunities to observe. Thus, the more occasions on which the Moderator bases

his or her judgement, the more reliable it will be.

Unclear tasks. Ambiguity and lack of clarity are major hazards to reliability. If a

Moderator did not meet the requirements for moderation (and assessment), it must

not be because he or she was not given proper guidelines or did not know what was

expected of him or her. Vague questions, ambiguous requirements or unclear

guidelines allow Moderators to interpret the task in ways the verifier did not intend. In

such cases, the verifier’s judgement about what is lacking or missing in the

Moderator’s performance, may say more about the guidelines given, than about the

quality of the moderation.

Scoring imprecision. Scoring contributes significantly to unreliability, especially if

the Moderator did not use well-structured checklists or did not prepare proper

Moderator’s reports. This leaves room for subjectivity and incorrect decisions on the

part of the Moderator, and eventually the verifier, if he or she did not recognize the

flaw.

Building reliability into the verification process presents a great challenge to the

Moderator and verifier’s design capabilities in the case of practical assignments as

opposed to theoretical examination papers. Still, regardless of the learning outcomes

one wishes to verify, there are some general rules that should be adhered to, in

order to ensure reliability. They are:

1. Increasing the number of performances.

2. Increasing the number of occasions.

3. Writing clear guidelines.

4. Increasing scoring objectivity.

EVALUATING THE VERIFICATION PLAN AND PROCESSES

Reliability and validity – these are the key concepts, prerequisites, principles or

whatever you may wish to call them, in the entire system of assessment. As much as

reliability and validity in assessment are important, so it is in moderating assessment

and, of course, in verifying moderation. This means that much of what has already

been discussed and will still be discussed in this manual on reliability and validity,

also applies to this evaluation of the verification plan and processes.

How do we know that the verification plan and processes will work and that they will

produce reliable and valid results? Probably the most efficient way in which to

evaluate them is by making use of simple problem-solving techniques. Some of you,

especially those of you who studied management, will probably know some very

good problem solving techniques, and also how to use them. You are most welcome

to do so.

Identifying strengths and weaknesses of the verification plan and processes. There are many problem-solving techniques that can be used to identify strengths

and weaknesses. Discussing them in detail, however, falls outside the objectives of this course. More important is that strengths are identified so that they can be maintained and improved upon, and weaknesses so that they can be eliminated.

How will we know if an element of the verification plan is a strength or a weakness?

Simply by asking others. We are all subjective human beings, and as such tend not

to perceive ourselves and what we do in the same manner as we are being

perceived by others. Therefore, it is important to ask the opinion of others, especially

those who have no emotional or other interest in the verification process. People

who may be consulted include Moderators (remember, they probably are emotionally

involved), colleagues, ETD researchers, consultants and verifiers from other quality

assurance bodies. In the final analysis, however, the verifier needs to analyse the

available information and to take corrective action if necessary.

The verification plan should not be a static instrument. It must be updated

continuously, so that the verification process will improve all the time. The following

checklist can be used to evaluate a verification plan:

Yardsticks (answer Yes or No for each, and record a recommendation where needed):

1. Does the verification plan provide evidence that can help guide providers of learning, assessment and moderation to improve the assessment process?

2. Does the verification plan check that all staff involved in assessment and moderation are appropriately qualified and experienced?

3. Does the verification plan check the credibility of the moderation process and instruments?

4. Does the verification plan monitor the assessment and moderation processes?

5. Does the verification plan check the accuracy and validity of moderation decisions?

6. Does the verification plan check a sample of at least 10% of all moderated assessments?

7. Does the verification plan confirm authenticity of moderation information?

8. Does the verification plan provide for interviewing Moderators on the moderation process?

9. Is a balanced spread of Moderators and different unit standards verified?

10. Does the verification plan provide for verifier preparation and documentation review?

11. Does the verification plan provide for a verifier visit?

12. Does the verification plan provide for follow-up?

13. Does the verification plan provide for preparation and submission of a verifier report?

14. Is the verification planning checklist complete?

15. Is the verification planning checklist still valid?

16. Is the verification plan based on a standardized set of forms?

17. Are the verification forms still relevant?

18. Does the verification plan provide for statistical analysis?

19. Are the statistics relevant?

20. Does the verification plan support impartial judgement?

21. Does the verification plan provide for changes in the moderation process?

22. Does the verification plan check that assessment and moderation policies are followed?

23. Does the verification plan provide accurate, complete, current and relevant support to the moderation process?

24. Does the verification plan support effective use of resources?

25. Does the verification plan provide positive feedback where it is deserved?

26. Does the verification plan ensure prompt corrective action if the moderation process does not meet the requirements?

27. Does the verification plan ensure that disputes regarding assessment and moderation are resolved equitably?

28. Does the verification plan include multiple reporting tools, each with its own specific and well-defined purpose?

29. Does the verification plan provide for correct and accurate record keeping?

LIST THE STRENGTHS IDENTIFIED AND WHAT SHOULD BE DONE ABOUT THEM

LIST THE WEAKNESSES IDENTIFIED ABOVE AND HOW THEY CAN BE ELIMINATED/IMPROVED UPON

Table 13: Checklist for the evaluation of a verification plan.
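A verifier who prefers to keep this checklist electronically could capture each yardstick as a small record and let a few lines of code separate the strengths (Yes answers) from the weaknesses (No answers). The Python sketch below is only one possible representation; the two example answers and the recommendation text are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Yardstick:
    number: int
    question: str
    answer: Optional[bool] = None   # True = Yes, False = No, None = not yet judged
    recommendation: str = ""

def summarise(checklist):
    """Split a completed checklist into strengths (Yes) and weaknesses (No)."""
    strengths = [y for y in checklist if y.answer is True]
    weaknesses = [y for y in checklist if y.answer is False]
    return strengths, weaknesses

# Two illustrative yardsticks from Table 13, with invented answers:
checklist = [
    Yardstick(6, "Does the verification plan check a sample of at least 10% of all moderated assessments?", True),
    Yardstick(18, "Does the verification plan provide for statistical analysis?", False,
              "Add a simple statistical summary to the verifier report."),
]
strengths, weaknesses = summarise(checklist)
print([y.number for y in strengths], [y.number for y in weaknesses])  # [6] [18]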

Recommendations to improve verification systems. The recommendations to

improve the verification system should be identified and listed as shown above. The

best method of evaluating the strengths and weaknesses is to enlist the assistance of other experts in the field. Working as a team has the advantage that a wider variety of ideas can be obtained, and it is less subjective than it would be if the

verifier were to work alone.

It is important that recommendations lead to action. Probably one of the most serious

and frustrating problems encountered in verification is that the process leads to no

improvement in assessment or moderation. It is true that verification has an

accrediting function, but this should not be the only outcome. Verification is a most

powerful tool for ensuring positive change. Besides, if no improvement in the verification process takes place, it is very likely that assessment and moderation will stagnate as well.

Credibility and integrity of the recognition system. The credibility and integrity of the recognition system are dependent upon the following:

1. The standard of the verification process.

2. The knowledge and experience of the verifiers.

3. The strictness with which the verification findings are monitored.

4. The manner in which providers of learning, assessment and moderation

support and act on verification findings.

5. The support that verifiers receive from the ETQA and higher quality assurance

bodies.

SUMMARY OF CHAPTER 7

Criteria-based verification should be supported by appropriate statistical analysis.

Verifiers should control the quality of education and training programmes by closely

monitoring and evaluating moderation of assessment. Thus, it is the responsibility of

the verifier to support the learning process. Verifying bodies must be able to fulfill

their tasks effectively and efficiently. Likewise, individuals who act as verifiers should

be experienced and qualified to do the work.

The verification process is initiated by the learning provider submitting learner

achievements for verification. This is followed by the verification process, consisting

of four steps, namely:

1. Verifier preparation and documentation review.

2. The verifier visit.

3. Verifier follow-up.

4. Verifier report.

The verification process must be valid, i.e. it must measure what it was meant to

measure. Validity consists of construct validity (relevance of the assessment

procedure), content validity (outcomes-relatedness) and quantity (more than one

source of evidence should be consulted).

The verification process must be reliable and consistent. This means that the same

procedures must deliver the same results with the same learners at different times.

Similarly, different Assessors should rate the same learner's performance the same

on the same task. Of course it will not be possible to achieve perfect reliability and

consistency, but the measure of consistency is an indication of the accuracy of the

verification findings.

The verification plan and process must be tested for reliability and validity. Perhaps

the most efficient way to test for reliability and validity is to identify the strengths and

weaknesses of the verification plan. Strengths should be maintained and capitalized

on, while weaknesses should be eliminated or at least reduced.

It means very little to identify strengths and weaknesses if nothing is done about

them. An action plan should be prepared and it must be implemented if the

assessment and moderation processes are to be improved.

The credibility and integrity of the recognition system (verification) are dependent on

a number of factors. These factors should be borne in mind to ensure that

verification is accepted, supported and protected by learning institutions and the ETD

community at large.

CHAPTER 8: VERIFICATION STATISTICS

INTRODUCTION

If information is important, it must be communicated as accurately as possible.

Numbers can be used in the reporting and interpretation of quantitative information.

Numbers lend themselves to the language and procedures of statistics and

mathematics. Many variables can be described more precisely by using numbers

rather than, or in addition to, words.

Increasingly, education and training development (ETD) practitioners are expected to

interpret information using concepts of central tendency, variability, percentiles,

standard scores, reliability and validity. Some understanding of these concepts is

essential for proper use and interpretation of assessments and moderation. In

addition, at least a rudimentary grasp of basic statistical concepts is required for

comprehending much of the current ETD verification and reporting procedures.

(Hopkins, 1998: 18.)

The statistical methods that we will discuss will not be complicated ones – this is not

a course in statistical methods. Learners who are strong in, or feel comfortable with

statistics may use the methods discussed here, but they are not a prerequisite for

achieving competence. Also remember – the calculations required by the

mathematical algorithms on which the statistical methods are based are mostly done

by means of dedicated computer programmes, so that in-depth knowledge of the

formulas for statistics is not really needed.

Specific outcome: Evaluate verification plan and processes.

Assessment criteria:

1. Strengths and weaknesses of the verification plan and processes are identified in terms of their manageability and potential to make judgements on the quality and validity of moderation decisions.

2. Recommendations to verification systems and processes facilitate their improvement in line with ETQA requirements and overall manageability.

3. The review enhances the credibility and integrity of the recognition system.

Understanding the meaning and value of statistical concepts is more important

than the ability to manipulate numbers, because we have computers and computer

programmes at our disposal that can do almost any statistical calculations that we

may need. So, the purpose of this chapter is not to make you experts in statistical

methods, but rather to create an awareness of the calculations that can be used and

the rationale behind them. Let’s kick off with some important statistical concepts.

FREQUENCY DISTRIBUTION

Although it is true that, as I said in the Preface to this manual, the normal distribution

curve does not belong in an outcomes-based approach to learning, we can still learn

something from the distribution of test results. The most obvious inference to be

made from the distribution of test marks is how difficult or easy the test was.

Suppose a learner scored 15 out of 20 possible marks for a test (any test). It would

be easy to determine that the learner scored 75% for the test – that already gives us

some information. But we can evaluate the learner’s performance much better if we

know something about the difficulty of the test. A percentage of 75 may be poor if

most of the learners scored more than this. Or it may be very good if it is the highest

score in the class. When we examine the scores of a class of 30 learners, the

particular learner’s score begins to acquire more meaning. Suppose those 30 scores

were: 13, 12, 15, 13, 14, 18, 13, 13, 12, 14, 16, 17, 14, 15, 11, 16, 15, 14, 19, 14, 16,

17, 11, 9, 18, 12, 17, 16, 15, and 20. It can already be sensed that the learner’s

score of 15 is somewhat near the middle of the group. The score can be interpreted

more accurately if these raw scores are tallied into a frequency distribution.

(Hopkins, 1998: 20.)

A frequency distribution shows the number of persons (frequency, or ƒ) who

obtained each score (X). If we list each score (X) with the number of times it was

obtained, we are constructing a frequency distribution. The following is a frequency

distribution of the 30 scores mentioned above:

X ƒ X ƒ X ƒ

20 1 15 4 10 0

19 1 14 5 9 1

18 2 13 4 8 0

17 3 12 3 7 0

16 4 11 2 6 0

Table 14: Frequency distribution of 30 scores.

From the frequency distribution we can see that:

1. The highest score was 20 and the lowest score was 9, resulting in a difference

of 11 (which we will call the range for the test.)

2. The range gives us an idea of the variability of the test. We will discuss

variability in more detail later.

3. The point where the scores tend to cluster can be expressed as the mean, the median or the mode.

4. The mean is the sum of the scores divided by the number of scores.

5. The score that occurs most frequently is called the mode. For this test the

mode is 14 (it occurs five times).

6. The median is the point that divides the distribution of scores into two equal

halves. The median is another name for the fiftieth percentile (or P50) because

50 percent of the scores fall below it. The simplest way to determine the

median is to rank order the scores and then count up to the middle score, that

is, to the point that exceeds one-half of the scores. When there is an even

number of scores, like in our example, the median will be the value between

the two middle scores. This can be illustrated as follows:

If we had an uneven number of scores, the median would simply have been the middle score in the ranked list. Suppose, for example, we did not have the learner who obtained a score of 9, i.e. we had 29 learners only. The median would then be 15, the fifteenth score in the ranked list, as illustrated below.

Ranked scores for all 30 learners:
20, 19, 18, 18, 17, 17, 17, 16, 16, 16, 16, 15, 15, 15, 15 | 14, 14, 14, 14, 14, 13, 13, 13, 13, 12, 12, 12, 11, 11, 9
The median (14.5) falls halfway between the fifteenth score (15) and the sixteenth score (14).

Ranked scores for 29 learners (without the score of 9):
20, 19, 18, 18, 17, 17, 17, 16, 16, 16, 16, 15, 15, 15, [15], 14, 14, 14, 14, 14, 13, 13, 13, 13, 12, 12, 12, 11, 11
The median is the fifteenth score, which is 15.

Table 15: Determining the median.

Even though the median is best for most descriptive purposes, the most widely used measure of central tendency is the mean or average. The mean is simply the sum of the scores divided by the number of scores. The procedure for computing

the mean is defined by the following formula (Hopkins, 1998: 22):

Mean = μ = ΣX / N, where:

Σ is a symbol meaning "the sum of",
X represents the scores, and
N is the number of scores.

This formula is simply a shorthand way of saying that the mean equals the sum of

the scores divided by the number of scores, which is:

(13 + 12 + 15 + 13 + 14 + 18 + 13 + 13 + 12 + 14 + 16 + 17 + 14 + 15 + 11 + 16 + 15

+ 14 + 19 + 14 + 16 + 17 + 11 + 9 + 18 + 12 + 17 + 16 + 15 + 20) / 30 = 439/30 =

14.63.

The mean is sensitive to every score in the distribution, which is not true of the mode

and the median.
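For readers who would rather let the computer do the arithmetic, the short Python sketch below reproduces the mean, median and mode of the 30 scores using the standard library statistics module. It is simply a convenience, in the same spirit as using a spreadsheet, and assumes nothing beyond the scores listed above.

import statistics

scores = [13, 12, 15, 13, 14, 18, 13, 13, 12, 14, 16, 17, 14, 15, 11,
          16, 15, 14, 19, 14, 16, 17, 11, 9, 18, 12, 17, 16, 15, 20]

print(round(statistics.mean(scores), 2))   # 14.63 -> the mean
print(statistics.median(scores))           # 14.5  -> the median (between the 15th and 16th ranked scores)
print(statistics.mode(scores))             # 14    -> the mode (it occurs five times)
print(max(scores) - min(scores))           # 11    -> the range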

THE NORMAL DISTRIBUTION

I mentioned in the preface of this manual that outcomes-based learning no longer

judges success by means of a normal distribution curve. This, however, does not

mean that the normal distribution curve cannot provide us with some valuable

information about the quality of learning and assessment. A well-designed learning

programme and learning event plans will provide learners with knowledge and skills

based on specific standards. Likewise, a well structured assessment instrument will

test the knowledge and skills that the (well-designed) learning programme and

learning event plans would have taught them. In theory this should give us a

symmetrical distribution curve. It is, however, very important not to misuse the

normal distribution curve to increase or decrease requirements for success

artificially, but rather to improve the quality of learning.

Theoretical distributions that are precisely symmetrical and have a certain

mathematical bell shape are termed normal distributions. In a true normal distribution

the mode, median and the mean have the same value. If the mean and the median

of a distribution curve differ considerably, the shape of the distribution is not

symmetrical and therefore cannot be normal. The difference between a symmetrical

and asymmetrical distribution curve can be illustrated as follows:

SYMMETRICAL ASYMMETRICAL

Figure 11: Symmetrical vs asymmetrical distribution curves.

The shape of the distribution can provide some useful clues about the adequacy of a

test. The unimodal (one-mode) asymmetrical distribution of the above example (the

curve on the right) is said to be skewed (it appears as if a normal curve has been

pushed to one side). When the tail points to the left, as in the above example, the

curve is said to be skewed negatively. A curve can be skewed to the left or to the right; the labels themselves matter less than what the shape tells us, and they often cause confusion and frustration amongst Assessors, Moderators and verifiers. Let's attach values to the curve by referring back to our example of 30 marks.

[Figure: frequency plotted against marks from 8 to 20 for the 30 scores in Table 14.]

Figure 12: Distribution curve of 30 scores.

This distribution curve is slightly skewed, with the bulk of the marks towards the higher end, meaning that there are more high scores than low ones. Actually it is not a bad distribution, and the verifier should be quite satisfied with the results. Also note that the curve has been "smoothed" somewhat, to give it a less uneven appearance.

When skewing is present and the number of scores is large, the three measures

(mode, mean and median) will differ accordingly. The mean is always pulled most

towards the tail and the mode the least; the median is between the mean and the

mode. Since the height of the curve represents frequency, the highest point in the

curve indicates the mode. The movements of the measures can be illustrated as

follows:

[Figure: two skewed distribution curves, one skewed in each direction, each running from low to high values with the positions of the mode, median and mean marked.]

Figure 13: The effect of skewing on the mean, median and mode.

If the majority of the values are high (always on the right-hand side of the curve), we

say the curve is skewed negatively. If the majority of the values are low, we say the

curve is skewed positively. If the mean is considerably greater than the median, the

distribution is probably skewed positively. If the mean has the lowest value of the

three measures of central tendency, the distribution is probably skewed negatively. If

the median, mean and mode have the same value, the shape of the distribution may

be symmetrical.
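These rules of thumb can be checked directly against the 30 marks from the earlier example. The Python snippet below merely automates the comparison described above; the 0.5-mark cut-off for "considerably greater" is an assumption made for the illustration, not a formal test of skewness.

import statistics

scores = [13, 12, 15, 13, 14, 18, 13, 13, 12, 14, 16, 17, 14, 15, 11,
          16, 15, 14, 19, 14, 16, 17, 11, 9, 18, 12, 17, 16, 15, 20]

mean, median, mode = statistics.mean(scores), statistics.median(scores), statistics.mode(scores)
if abs(mean - median) < 0.5:   # "considerably" is a judgement call; 0.5 marks is an assumed cut-off
    print("Mean and median are close: the distribution is roughly symmetrical.")
elif mean > median:
    print("Mean considerably above the median: probably skewed positively.")
else:
    print("Mean considerably below the median: probably skewed negatively.")
print(mean, median, mode)      # about 14.63, 14.5 and 14 -> the mean is only slightly above the median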

MEASURES OF VARIABILITY

Although measures of central tendency indicate the values about which the scores

tend to cluster, they provide no information on the degree of individual differences or

variability that exists amongst learners. We have already been introduced to one

measure of variability, namely range. To refresh your memory – range is the

difference between the highest and lowest scores. (Hopkins, 1998: 30.)

To illustrate why measures of variability are needed, consider the following normal distributions, which could depict test scores from the learners at two different learning institutions. The test scores can be tabulated as follows:

LEARNING INSTITUTION A                 LEARNING INSTITUTION B
Score   Frequency                      Score   Frequency
80      2                              80      2
75      2                              75      5
70      2                              70      4
65      5                              60      6 (mode)
60      6 (mode)                       50      5
55      5                              45      5
50      3                              30      2
45      2                              20      1
30      2
20      1

Total of the 30 scores: 1730 at both institutions.
Mean: 57.67 at both institutions. Median: 60 at both institutions (the fifteenth and sixteenth ranked scores are both 60).

Table 16: Test scores at two different learning institutions.

The differences in distribution can be seen more clearly if we plot the scores on

distribution graphs:

[Figure: frequency plotted against scores from 10 to 100, with the distributions for learning institution A and learning institution B overlaid.]

Figure 14: A comparison of the degree of dispersion between the test scores at two different learning institutions.

We can see that the central tendencies (modes, means and medians) are exactly the

same, but the difference in degree of dispersion is quite evident from the graphs.

The more heterogeneous learning institution (the one with the greater degree of

individual differences in scores) is learning institution B. The more homogeneous

learning institution (the one with the lesser variability in scores) is learning institution

A.

Since the range is a crude, undependable indicator of variability, more refined

measures of dispersion have been devised. The most widely used is the standard

deviation, which is symbolized for a population with the Greek letter σ (sigma), or for

a sample by the symbol s. We know now that the mean, median and mode are

measures of central tendency, while the range, variance and standard deviation are

measures of variability.

Now, let's apply our newly-found knowledge of variability to the example above. The

formula for the variance is:

σ² = Σx² / N, where

σ² is the variance of the distribution of scores,
N is the number of scores, and
Σx² is the sum of the squared deviations of each of the N scores from the mean, μ, i.e. x = X - μ.

Let’s apply the formula to the scores of learning institutions A:

X       x = X - μ       x²
80      22.33333        498.77778
80      22.33333        498.77778
75      17.33333        300.44444
75      17.33333        300.44444
70      12.33333        152.11111
70      12.33333        152.11111
65       7.33333         53.77778
65       7.33333         53.77778
65       7.33333         53.77778
65       7.33333         53.77778
65       7.33333         53.77778
60       2.33333          5.44444
60       2.33333          5.44444
60       2.33333          5.44444
60       2.33333          5.44444
60       2.33333          5.44444
60       2.33333          5.44444
55      -2.66667          7.11111
55      -2.66667          7.11111
55      -2.66667          7.11111
55      -2.66667          7.11111
55      -2.66667          7.11111
50      -7.66667         58.77778
50      -7.66667         58.77778
50      -7.66667         58.77778
45     -12.66667        160.44444
45     -12.66667        160.44444
30     -27.66667        765.44444
30     -27.66667        765.44444
20     -37.66667       1418.77778

SUM     1730                        5686.6667
MEAN    57.66667

Note: These figures have been calculated with a standard spreadsheet. This minimizes errors and saves time. It is but one of many computer programmes/methods that can be used to simplify statistical procedures.

Table 17: Determining the deviations and squared deviations from the mean.

So, the variance is:

σ² = Σx² / N = 5686.6667 / 30 = 189.56

Although the variance is widely used in statistical inference, it is not a good

descriptive statistic. What does the 189.56 actually mean? Its square root, the

standard deviation, is used extensively in interpreting test performance. A normal

hand calculator or spreadsheet can be used to obtain the square root of 189.56.

According to my calculator it is 13.77 (to the second decimal).
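The same result can be obtained without a hand calculator. The Python sketch below applies the statistics module to the learning institution A scores; pvariance and pstdev are the population (divide-by-N) versions that match the formula above.

import statistics

institution_a = [80, 80, 75, 75, 70, 70, 65, 65, 65, 65, 65,
                 60, 60, 60, 60, 60, 60, 55, 55, 55, 55, 55,
                 50, 50, 50, 45, 45, 30, 30, 20]

print(round(statistics.pvariance(institution_a), 2))  # 189.56 -> the variance
print(round(statistics.pstdev(institution_a), 2))     # 13.77  -> the standard deviation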

So what? How can we use the standard deviation to interpret performance or any

other learning tendency for that matter? Remember, central tendency and variability

convey important information about the quality of an assessment instrument or

moderation process. Central tendency is an indicator of an assessment instrument’s

level of difficulty, and variability indicates the degree of individual differences among

the scores.

Again, you may ask, what does this mean for the verification of learning quality?

Well, no distribution is adequately described solely by a measure of central

tendency. Differences in variability must also be considered. And, as we have seen,

variability is the extent to which the scores of a group tend to differ or spread above

and below a central point in the distribution. Clearly, it is important to have some

convenient method of describing the variability of a group. Two common measures

of variability are the range and the standard deviation. Whereas measures of central

tendency are points, these measures of variability are expressed as differences; the

larger their values, the greater the variability in the distribution scores.

The standard deviation is an important measure of performance when norms are

available. A small standard deviation indicates great homogeneity. In the spirit of

outcomes-based learning, one must be cautious not to misuse standard deviation as

a norm for success or failure. Learners should not be penalized (failed) if they don’t

measure up to the norm for the course – each individual learner must be judged

against his or her ability to prove competence in the learning outcomes of the course

or learning programme. The standard deviation can, however, be a useful indicator

of whether the course is offered on the right level and if assessment instruments are

prepared on the right level (not too difficult or easy for the level of the course).

TEST VALIDITY

Validity defined. As we have already seen in Chapter 2, the validity of a measure is

how well it fulfills the function for which it is being used. The validity of an

assessment can be viewed as the “correctness” of the inferences made from

performance in the assessment. These inferences will pertain to (Hopkins, 1998: 72):

1. Performance on a universe of items (content validity).

2. Performance on some criterion (criterion-related validity).

3. The degree to which certain constructs are actually represented by

assessment performance (construct validity).

Content validity. The relevant type of validity in the measurement of learning

achievement is content validity. In evaluating the content validity of an assessment

instrument, one asks, “To what extent does the test require demonstration by the

learner of the achievements that constitute the learning outcomes in this area?” For

an assessment instrument to have a high content validity, it should be a

representative sample of both the knowledge (essential embedded knowledge) and

skills (learning outcomes) of a given course or unit of learning.

Content validation is primarily a process of logical analysis. By means of a careful

and critical examination of the test items in relation to the learning outcomes and

essential embedded knowledge, one must make the following professional

judgements:

1. Does the test content parallel the learning outcomes in content and

processes?

2. Are the test and curricular emphasis in proper balance?

3. Is the test free from prerequisites that are irrelevant or incidental to the

present measurement task? (For example, is the reading level of the statistics

test appropriate for the learners?)

4. Does the assessment instrument test critical cross-field outcomes?

Content validity should not be confused with face validity. Face validity lacks the

systematic rational analysis required in the assessment of content validation. An

assessment instrument is said to have face validity if, on first impression, it appears

to measure the intended knowledge or skills. It is important that assessment

instruments have face validity; otherwise learners may feel they are being unfairly

assessed. Typically, a content-valid test will also have face validity, but it is possible

to have one without the other. Easy tests usually have high face validity, especially

for people who are unfamiliar with research on the topic, but they may lack reliability,

an indispensable prerequisite for valid assessment. Conversely, a test item that is

not presented in a practical context may actually measure an important concept of

competence, but it may lack face validity for some learners. (Hopkins, 1998: 77.)

Multiple-choice questions formulated in the negative often appear unfamiliar to

learners, i.e. they lack face validity. Formulating questions in such a way that the

learner must discriminate between degrees of accuracy can be even worse, e.g. if

the Assessor were to ask: “Which one of the following is the least accurate

determinant of learning needs?” Assessment instruments with good content validity

usually have face validity; the reverse, however, is much less likely to be true.

The content validity of an assessment instrument is the degree to which the items of

that test are a representative sample of the essential embedded knowledge and the


learning outcomes of the unit standard or learning programme being assessed.

Criterion-related validity. In contrast to content validity, which is based almost

entirely on rational analysis, criterion-related validity is purely an empirical matter.

There are two subclasses of criterion-related validity. The more common of these is

predictive validity, in which the assessment instrument has the task of predicting

some subsequent performance on a criterion. For example, a business wishes to

test which employees are most likely to be successful in management roles, so that

they can be trained as supervisors. The objective of the testing is to minimize risk,

i.e. to invest in the candidates with the best chance of being successful. (Hopkins,

1998: 78). The accuracy of the predictions is described by the correlation coefficient

between test scores and a subsequent empirical criterion. This correlation coefficient is called a validity coefficient because it defines the degree of criterion-related validity of the test. Decisions involving selection are predictive in nature, and an understanding of correlation and correlation coefficients is therefore essential to an understanding of criterion-related validity.

CORRELATION AND PREDICTION

Two traits are correlated if they “go together”. If high scores on a variable X tend to

be accompanied by high scores on variable Y, then the variables X and Y are

correlated, since the scores “covary.” For example, there is a tendency for some dog

breeds to bark more than others (at least, this is the perception of my neighbour!).

Since dog breed and barking tend to covary, they are said to be correlated.

We can describe the degree of correlation between variables by terms such as

strong, low, or moderate, but these terms are not sufficiently explicit. A more precise

method is to compute a coefficient of correlation between the sets of scores. A

coefficient of correlation is a statistical summary of the degree of relationship

between two variables. Correlation coefficients range in magnitude from +1.0 to –

1.0. A positive correlation coefficient means that high scores on one measure tend to

be associated with high scores on another measure, and low scores on one tend to

be associated with low scores on the other. (Hopkins, 1998: 78.) For example, if the

perception were true that direct, face-to-face communication promotes co-operation

between individuals, then Facilitators who deliberately seek face-to-face


communication with learners will enjoy better co-operation from them. The

correlation coefficient for this example would be positive. If, on the other hand, face-to-face communication actually discouraged co-operation between individuals, the correlation coefficient would be negative; and if it simply made no difference, the coefficient would be close to zero.

The sign (+ or -) of the correlation coefficient indicates the direction of the

relationship. When low scores on variable A are accompanied by low scores on

variable B, and high scores on A by high scores on B, rAB (the coefficient of

correlation between A and B) is positive. If high scores on A are associated with low

scores on C (and vice versa), rAC would be negative.

The sign of r does not indicate the strength of a relationship. For example, if test

scores correlate -.3 with number of days absent for a group of learners, then the

correlation between test scores and days present would be .3, or +.3. (Remember,

the sign does not indicate the strength of a relationship – it is the number that is

important.) So, the numeric value of r denotes the degree of relationship; the higher

the absolute value (the value irrespective of the sign), the stronger the relationship. If

rAB = +.55 and rAC = -.70, there is a stronger relation between variable A and variable

C than between variable A and variable B.

When r = +1.0 or –1.0, there is a perfect relationship between the two variables. In

both cases a knowledge of one of the variables makes it possible to predict the

second variable without error. With r = +1.0, the z-score of each individual would be

the same on both measures. With r = -1.0, the highest score on one measure would

be associated with the lowest score on the second measure, and so on. Any pair of

z-scores would be identical in value but opposite in sign, for example, +1.3 and –1.3.

If two tests correlate +1.0 and learner A ranked third on a test, he or she would rank

third on test B also. If her z-score on test A is +1.8, her z-score on test B would (also)

be +1.8.

Although a correlation cannot be interpreted as a percentage of agreement, it does indicate what proportion of the known deviation from the mean on the first variable we should expect to see on the second variable. For example, if one could determine through scientific testing that the correlation between the findings of a particular verifier and Moderator is 0.8, and that the average value of a representative sample of assessments is 100, then if the Moderator gave a value of 60 to a particular evaluation of assessment results, we would predict the verifier's value to lie (1 – 0.8)(100 – 60) = 0.2(40) = 8 units closer to the mean than the Moderator's, i.e. at about 68. This assumes, of course, that the two sets of findings are evaluated numerically on comparable scales.

It is important to bear in mind that a correlation coefficient expresses the ratio of the

average or expected deviation from the mean on the predicted variable (Y) to the

known deviation from the mean on the predictor value (X) in standard deviation units.

The following regression equation makes this clear (Hopkins, 1998: 80):

ẑY = r × zX, where

ẑY is the expected or predicted z-score on variable Y,

r is the correlation between variables X and Y, and

zX is the known z-score on variable X.
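To make the prediction equation concrete, here is a minimal Python sketch that applies ẑY = r × zX to the verifier/Moderator example above. Working in raw deviations from the mean is a simplification made purely for illustration: it assumes that the two sets of findings have the same mean and standard deviation, which the text does not state.

# Illustrative sketch of the regression ("prediction") equation z_Y = r * z_X,
# using the verifier/Moderator example from the text. The equal-mean,
# equal-spread assumption is made only to keep the arithmetic simple.

def predict_score(r, known_score, mean):
    """Predict the second score from the first, assuming equal means and SDs."""
    deviation = known_score - mean          # known deviation from the mean
    predicted_deviation = r * deviation     # expected deviation shrinks by the factor r
    return mean + predicted_deviation

r = 0.8                # correlation between Moderator and verifier findings
mean = 100             # mean of a representative sample of assessments
moderator_value = 60   # value the Moderator assigned

verifier_expected = predict_score(r, moderator_value, mean)
print(verifier_expected)                      # 68.0
print(verifier_expected - moderator_value)    # 8.0 closer to the mean, as in the text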

Although the calculation of a correlation can be done by means of a scientific hand calculator, it is a

rather cumbersome procedure. A spreadsheet on a desktop computer is much

simpler to use. If you do not have a computer and spreadsheet at your disposal, it

would probably be easiest to use the Spearman rank correlation coefficient rranks,

which is a very close relative of r, and which can quite easily be computed manually.

(Hopkins, 1998: 80.)

Correlation can be viewed simply as the degree to which persons maintain the same

relative positions or ranks on two variables, i.e. it “predicts” a tendency. If there is

much change, the correlation coefficient will be low; if there is little change, the

coefficient will be high. A correlation coefficient can be obtained between any two

variables if scores are available on both. Let’s take, for example, test results by 11

learners for spelling (S) and grammar (G). To compute the Spearman rank

correlation coefficient, we will follow the following procedure:


1. Rank the learners on the first variable (S) from the highest score (a rank of 1)

to the lowest score (a rank of 11 in our example). The highest score for

spelling is 50, so we will call it VS (Value Spelling). The VS of 50 receives the

highest ranking of 1. 49 is the next-highest score and receives a ranking of 2,

and so on. The lowest VS score is 35, and has a ranking of 11.

2. Rank the learners on the second variable (VG) in the same way. The highest

score is 61, which receives a ranking of 1, and so on. The lowest VG is 32,

which has a ranking of 11.

3. Find the difference between ranks for each learner, putting this value in the

column headed “Rank difference” (D). (The sign of the difference is not

relevant, since all values will be squared.)

4. Square each rank difference (D) value and place the result in the column, D2.

5. Sum the values in the D2 column to obtain the sum of D2, which we can

express as ΣD2.


Learner   VS score   VG score   VS rank   VG rank   D (rank difference)   D²

A 50 51 1 5 4 16

B 49 56 2 4 2 4

C 48 59 3 2 1 1

D 47 48 4 6 2 4

E 46 37 5 9 4 16

F 45 32 6 11 5 25

G 44 58 7 3 4 16

H 43 44 8 8 0 0

I 42 34 9 10 1 1

J 41 61 10 1 9 81

K 35 46 11 7 4 16

ΣD2 = 180

Table 18: Rank difference.

6. Now that we have ΣD², we are ready to compute rranks. We do this by using the formula:

rranks = 1 – 6ΣD² / [N(N² – 1)]     (N is the number of learners.)

= 1 – 6(180) / 11[(11)² – 1]

= 1 – 1080 / 11(121 – 1)

= 1 – 1080 / 11(120)

= 1 – 1080 / 1320

= 1 – 0.82

= 0.18
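If you would rather check such a calculation electronically, the short Python sketch below reproduces the Spearman rank correlation for the spelling and grammar scores of Table 18. It assumes, as in the worked example, that there are no tied scores.

# Spearman rank correlation (r_ranks) for the spelling (VS) and grammar (VG)
# scores of Table 18. Assumes no tied scores, as in the worked example.

VS = [50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 35]
VG = [51, 56, 59, 48, 37, 32, 58, 44, 34, 61, 46]

def ranks(scores):
    """Rank 1 for the highest score, rank N for the lowest (no ties expected)."""
    ordered = sorted(scores, reverse=True)
    return [ordered.index(s) + 1 for s in scores]

rank_s, rank_g = ranks(VS), ranks(VG)
d_squared = [(a - b) ** 2 for a, b in zip(rank_s, rank_g)]
n = len(VS)

sum_d2 = sum(d_squared)                          # 180, as in Table 18
r_ranks = 1 - (6 * sum_d2) / (n * (n ** 2 - 1))
print(sum_d2, round(r_ranks, 2))                 # 180 0.18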

The low relationship between the VS and VG scores can be illustrated graphically by

a scatterplot – a plot that shows the pattern of relationship between the two

variables. Here it is:

[Scatter plot: VG rank (vertical axis, 1–11) plotted against VS rank (horizontal axis, 1–11) for the eleven learners; the plotted points (VS rank, VG rank) are (1, 5), (2, 4), (3, 2), (4, 6), (5, 9), (6, 11), (7, 3), (8, 8), (9, 10), (10, 1), (11, 7).]

Figure 15: Example of a scatter plot.

A scatter plot depicting a strong relationship between VS and VG would have looked

something like this (don’t try to find the values in a table – this is a purely

hypothetical example):


[Scatter plot in which the points lie close to a straight line, illustrating a strong relationship between the two rank variables (hypothetical values).]

Figure 16: Example of a scatter plot depicting a strong relationship between two variables.

THE SCATTERPLOT

Let’s discuss the scatter plot and what it means. The scatter plot enables one to

study the nature of the relationship between two variables. Each dot of the scatter

plot indicates the intersection of the two measures, X and Y, for one individual. The

scatter plot reveals whether the relationship between two variables is linear. The

relationship between two variables is linear if a straight line fits the dots of the scatter plot more closely than any curved line does. (Hopkins, 1998: 84.) The following

are examples of some extreme cases:


Figure 17: Examples of linear relationships.

INTERPRETING CORRELATION COEFFICIENTS

In interpreting a coefficient of correlation, several factors must be considered. The

first is the sign of the coefficient; the second is the magnitude of the coefficient. The

sign (+ or -) indicates the direction of the relationship. Positive coefficients indicate

direct relationships, that is, the tendency for higher scores on X to be associated with

higher scores on Y, and vice versa. High values in one column are associated with

high values in the other column, and so on. Negative values of r indicate an “upside-down”

relationship – a tendency for the two scores to vary in opposite directions – high

values on one variable are associated with low values on the other variable, and vice

versa. (Hopkins, 1998: 86.)

The panels of Figure 17 illustrate the following cases:

A perfectly positive linear relationship (r = 1.0).

A perfectly negative linear relationship (r = –1.0).

No relationship between the two variables X and Y (r = 0).

An intermediate relationship, e.g. r = .60.

A curvilinear (or nonlinear) relationship, in which the variables are related but not linearly related. Although such a plot shows no linear relationship between X and Y, if an individual's score on measure X is known, the score on measure Y can be predicted with considerable accuracy, despite the fact that the value of r, if computed, would be zero. For scatter plots of this sort, r will greatly underestimate the degree of relationship between the two measures, and it is therefore not an appropriate measure of relationship when the relationship is not linear.


The magnitude of the value of r indicates the degree or closeness of the relationship.

If there is no relationship between X and Y, the value of r is 0. A perfect relationship

is denoted by an r-value of +1 or – 1. If r is either 1.0 or –1.0, the exact value of Y

can be predicted from knowledge of X. Similarly, all other values of the same size,

such as -.50 and .50 indicate the same degree of relationship – the size, not the

sign, of the coefficient indicates the closeness of degree of relationship.

Bear in mind that r cannot be interpreted directly as a percentage in the usual sense; it can only be read as a proportion of expected deviation when the scores are expressed in standard deviation units, as in the regression equation above. Also bear in mind that a coefficient of correlation is only a best estimate of the relationship between two variables. No more than this can be obtained: with an imperfect relationship, which is what we work with in almost all scenarios, there is no single prediction that must be right. (Gregory, 1973: 209.)

CORRELATION VERSUS CAUSATION

It is important to understand that because two variables correlate, it does not

necessarily mean that there is a causative relationship between the two. Another

variable other than the two under consideration may be responsible for the

association. Furthermore, problems in the same disciplines are sometimes too

complex to be explained in terms of a single cause. The fact that people who have

taken formal driver training have fewer car accidents than those without formal driver

training, is a far cry from establishing a causal relationship between car accidents

and lack of formal driver training.

Let us consider a second example. Research showed that the percentage of dropouts in college varies inversely (is negatively correlated) with the number of books per

learner in the libraries of those colleges. Common sense though, tells us that merely

piling more books into the libraries will not affect the dropout rate.

Attributing causation to correlation is a very common error. It may be true that

learners from homes where at least one newspaper is bought every day achieve

higher marks than learners from homes where newspapers are not bought. (This is


a hypothetical example, not necessarily based on fact.) But this does not mean that learners will achieve better marks if they subscribe to (more) newspapers. The underlying cause of the higher achievement may be a family culture that values reading and staying informed, rather than the newspaper subscription itself.

Another factor influencing correlation is the size of the sample.

THE INFLUENCE OF SAMPLE SIZE

The size of the sample upon which the correlation coefficient was determined is

another factor of importance in interpreting correlation coefficients. To keep us from

being misled by high correlation coefficients resulting from small samples,

statisticians have developed procedures to assist in determining whether a given

correlation coefficient can be attributed to chance (sampling error) or whether a

genuine relationship exists. If a correlation is statistically significant, we can be

confident that there is some degree of true relationship between the two variables.

A common error in interpreting statistical significance is to assume that because a relationship is statistically significant, it is therefore a large relationship. “Statistically

significant” indicates that one can be confident that the true correlation in the

population (i.e. the parameter, ρ) is not 0. If the sample size is very large, the

relationship may be statistically significant and yet be trivial. For example, the

relationship between age and academic performance might (in some isolated cases),

be statistically significant, but too low to be of value for practical purposes. (Hopkins,

1998: 89.)

To guard against the error in interpreting statistically significant relationships as

strong relationships, confidence intervals should be used. These intervals give the

upper and lower limits for the value of the true correlation coefficient in the

population. For example, if a highly significant correlation of r = .32 was observed

between study habits and learning performance for 172 learners in training

management, how much error is there in the value of .32? The confidence interval

around r gives a reasonable estimate as to how high or low the value of the


correlation coefficient might be if the entire population in question were included in

the sample. In this instance the 95 percent confidence interval is (.16, .44).

Consequently, we can be “95 percent confident” that, in spite of the sample of only

172, the true correlation coefficient in the population is at least .16, and perhaps as

high as .44.

If our sample is representative and if we repeatedly use this strategy, we will be

correct 95 percent of the time.
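One common way of approximating such a confidence interval is the Fisher z transformation, sketched below in Python for the r = .32, n = 172 example. It yields limits of roughly .18 and .45, close to, but not identical with, the interval quoted from Hopkins; the exact figures depend on the approximation and rounding used, so treat the sketch as illustrative only.

# Approximate 95% confidence interval for a correlation coefficient using the
# Fisher z transformation. Illustrative only; the (.16, .44) interval quoted in
# the text was presumably obtained with a slightly different method or rounding.
import math

def correlation_ci(r, n, z_crit=1.96):
    z = math.atanh(r)                    # Fisher transformation of r
    se = 1 / math.sqrt(n - 3)            # standard error of the transformed value
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # transform back to the r scale

low, high = correlation_ci(0.32, 172)
print(round(low, 2), round(high, 2))     # approximately 0.18 0.45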

Since high or low values for r are much more likely to occur by chance with small ns

(sample sizes), high values of r are required for statistical significance when n is

small. So we can conclude that (Hopkins, 1998: 89):

Correlation coefficients based on small samples are not very reliable.

Very weak relationships can be statistically significant with a very large

sample, or n.

TEST RELIABILITY

Anyone who regularly plays a sport is acutely aware of the variability in human

performance. No one operates at his or her personal best on all occasions, be it in

physical or mental activity. Quantification of the consistency and inconsistency in

assessment performance constitutes the essence of reliability analysis.

To do its job well, a measure must yield accurate results. A test has very little value if

learner A’s score today is quite different from the score it would yield for him or her

under similar conditions tomorrow. It is theoretically possible, however, for a test to

yield highly consistent results from day to day without having any validity. A highly

reliable measure of reaction time may be useful for predicting the quickness with

which one applies an automobile brake, but it has no validity for indicating how well

one reads or solves statistics problems.

A measure can be reliable without being valid, but it can never have any validity if

it is totally unreliable. Evidence Facilitators and Assessors in recognition of prior learning

must be especially wary of the low reliability that is often associated with the


assessment of portfolios of evidence.

Suppose we wish to develop a simple inventory to measure the trait of honesty. Let’s

call it the Honesty Inventory (HI). (Hopkins, 1998: 108.) We will develop a series of

questions that appear to sample honest behavior, such as:

“Do you ever deliberately distort the truth?”

“When you have had the opportunity, have you ever cheated on a test?”

“If a clerk makes a mistake and gives you too much change, do you ignore the

error?”

Suppose the HI were administered to 100 persons. Are the HI scores reliable and

valid? If we re-administered the HI to the same group a week later, do you think the

two sets of scores would correlate highly? The correlation coefficient between the

two sets of HI scores (i.e., the first score with the second score) is a test-retest

reliability coefficient. The coefficient may range from 0 to +1; the reliability coefficient

may be .9, or even higher. If each person answered all the questions exactly the

same way both times, can we be sure that the answers were truthful? The reliability

coefficient could be 1.0, yet the HI's validity depends on the extent to which the reported answers are truthful. It is possible, but unlikely, that the most honest person has the best

HI score. The least honest person may wish to conceal his or her imperfections;

thus, validity of the HI could be nil even if the reliability of the HI is perfect. If the

examinees took the HI anonymously, the HI’s validity would probably increase

greatly – people are unlikely to distort the truth unless a dishonest answer is

perceived to be self-serving.

The point is that reliability (measurement precision) is necessary, but not sufficient,

for validity. If repeated measurements yield disparate scores, we cannot have

confidence in their validity. Reliability is an essential prerequisite for validity.

If the HI contained a representative sample of honest behavior, it would be viewed

as having good content validity; but if the conditions under which the HI was

administered did not disarm examinees of all incentives to “fake it”, the inventory

would lack construct validity (i.e. the extent to which scores on the test represent the

examinees’ standing on the trait or construct of honesty). The criterion-related


validity of the HI is the extent to which HI scores correlate with some observed

criterion measure (e.g., anonymous honesty ratings given by peers, Facilitators, or

supervisors) or more direct behavioural measures (e.g., “cash shortages” for clerks).

DETERMINING TEST RELIABILITY BY MEANS OF THE TEST-RETEST METHOD

In the test-retest method the same test is given to the same group on two different

occasions and the correlation between the two sets of scores is then calculated.

One of the most important problems experienced with this method is to determine

how long the period should be between the first and second applications of the test.

If the test is repeated too soon, memory will play a role, as some of the candidates

will be able to remember the answers to certain items. When the time lapse is too long, however, candidates may acquire additional knowledge that increases their achievement, or they may spend so much time on new sections of the work that their knowledge of the area covered by the test has faded, so that they do not achieve as well as the previous time.

Here is an example of how the calculations in the test-retest method are done:

Candidate   Test (X)   Retest (Y)   X²   Y²   XY

A 15 15 225 225 225

B 14 14 196 196 196

C 14 16 196 256 224

D 13 10 169 100 130

E 13 12 169 144 156

F 13 13 169 169 169

G 12 11 144 121 132

H 12 10 144 100 120

I 11 9 121 81 99

J 11 6 121 36 66

K 11 10 121 100 110

L 10 9 100 81 90

M 10 10 100 100 100


N 9 9 81 81 81

O 8 9 64 81 72

P 8 8 64 64 64

Q 7 6 49 36 42

R 6 6 36 36 36

S 3 3 9 9 9

T 2 2 4 4 4

N = 20   ΣX = 202   ΣY = 188   ΣX² = 2282   ΣY² = 2020   ΣXY = 2125

Table 19: Example of the test-retest method.

Correlation (r) = [NΣXY – ΣXΣY] / √{[NΣX² – (ΣX)²][NΣY² – (ΣY)²]}

= [(20 × 2125) – (202 × 188)] / √{[(20 × 2282) – (202)²][(20 × 2020) – (188)²]}

= (42,500 – 37,976) / √[(45,640 – 40,804)(40,400 – 35,344)]

= 4,524 / √(4,836 × 5,056)

= 4,524 / 4,944.78

= 0.915

There is thus a very high positive correlation between the test and retest scores.
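The same arithmetic is easily delegated to a spreadsheet or a few lines of Python. The sketch below implements the raw-score formula used above and reproduces the test-retest correlation directly from the two score columns of Table 19.

# Pearson correlation from raw scores, using the formula
# r = [N*SumXY - SumX*SumY] / sqrt([N*SumX2 - (SumX)^2][N*SumY2 - (SumY)^2]),
# applied to the test-retest data of Table 19.
import math

test   = [15, 14, 14, 13, 13, 13, 12, 12, 11, 11, 11, 10, 10, 9, 8, 8, 7, 6, 3, 2]
retest = [15, 14, 16, 10, 12, 13, 11, 10,  9,  6, 10,  9, 10, 9, 9, 8, 6, 6, 3, 2]

def pearson_r(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    numerator = n * sxy - sx * sy
    denominator = math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))
    return numerator / denominator

print(round(pearson_r(test, retest), 3))   # 0.915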

THE EQUIVALENT TEST METHOD

This method is very similar to the test-retest method, but in the second application an


equivalent test is used instead of the same one. By equivalent test is meant that the

second test must be totally equivalent to the first test in terms of contents and degree

of difficulty. It must therefore be comparable item for item with the first test. It is

obvious that it will not be easy to compile such a test.

THE DIVIDED HALF METHOD

Both the previous methods can create problems and the results may not always be a

very trustworthy (reliable?) indication of reliability. A test method that generally has

fewer problems is the divided half method. In this method the test is only done once,

but the items with even numbers are grouped together and the items with uneven

numbers are grouped together. The test is then regarded as consisting of two half

tests.

This test method actually measures the internal consistency of the test, rather than

the consistency of the scores of the candidates, but the results still give an indication

as to the test’s reliability.

After the candidate has completed the test, the answers are marked. A working-

sheet is then compiled so that the candidates’ achievements in items with even

numbers (represented by X) and uneven numbers (represented by Y) can be

grouped together. The following table is an example of such a working-sheet:

Candidate   Items: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15   Total   Even (X)   Uneven (Y)   X²   Y²   XY

A 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 15 7 8 49 64 56

B 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 14 6 8 36 64 48

C 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 14 7 8 49 64 56

D 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 14 6 8 36 64 48

E 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 14 7 7 49 49 49

F 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 13 6 8 36 64 48

G 1 1 1 1 1 1 1 1 0 1 0 0 1 1 1 12 7 6 49 36 42


H 1 1 1 1 1 1 0 1 0 1 0 1 1 1 1 12 7 5 49 25 35

I 1 1 1 1 0 1 0 1 1 1 0 1 0 1 1 11 6 4 36 16 24

J 1 1 1 1 1 0 1 1 1 1 0 1 0 0 1 11 5 6 25 36 30

K 1 0 1 1 1 1 0 1 1 0 0 1 1 0 1 10 5 6 25 36 30

L 1 1 0 1 1 1 0 1 0 1 1 1 1 0 0 10 6 4 36 16 24

M 1 0 0 0 1 0 1 0 1 0 1 1 1 1 1 9 2 7 4 49 14

N 0 1 0 0 0 1 1 1 1 1 1 0 1 0 1 9 6 5 36 25 30

O 1 0 1 0 1 0 1 0 1 0 1 1 0 1 1 9 1 7 1 49 7

P 0 1 0 0 0 1 1 1 0 1 0 1 0 1 1 8 5 2 25 4 10

Q 0 1 1 0 1 0 1 0 0 0 1 0 1 1 0 7 2 5 4 25 10

R 0 0 1 0 1 0 1 0 0 1 0 1 1 0 1 7 3 5 9 25 15

S 1 0 0 1 0 0 0 1 0 1 0 1 0 0 1 6 4 2 16 4 8

T 0 0 1 0 1 0 1 0 0 0 1 0 0 1 0 5 0 4 0 16 0

N = 20   Item totals: 15 14 15 12 16 13 15 13 12 15 11 15 14 13 17   ΣT = 210   ΣX = 98   ΣY = 115   ΣX² = 570   ΣY² = 731   ΣXY = 584

Table 20: First example of the divided half method.

The correlation can now be calculated as follows:

Correlation (r) = [NΣXY – ΣXΣY] / √{[NΣX² – (ΣX)²][NΣY² – (ΣY)²]}

= [(20 × 584) – (98 × 115)] / √{[(20 × 570) – (98 × 98)][(20 × 731) – (115 × 115)]}

= (11,680 – 11,270) / √[(11,400 – 9,604)(14,620 – 13,225)]

= 410 / √(1,796 × 1,395)

= 410 / 1,582.85

= 0.259

In this example the correlation is rather low. Let’s look at an example that shows a

somewhat better correlation:

Candidate   Items: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15   Total   Even (X)   Uneven (Y)   X²   Y²   XY

A 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 15 7 8 49 64 56

B 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 14 6 8 36 64 48

C 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 14 7 8 49 64 56

D 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 14 6 8 36 64 48

E 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 14 7 7 49 49 49

F 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 13 6 8 36 64 48

G 1 1 1 1 1 1 1 1 0 1 0 0 1 1 1 12 7 6 49 36 42

H 1 1 1 1 1 1 0 1 0 1 0 1 1 1 1 12 7 5 49 25 35

I 1 1 1 1 1 1 0 1 1 1 0 1 0 0 1 11 6 5 36 25 30

J 1 1 1 1 1 0 1 1 1 1 0 1 0 0 1 11 5 6 25 36 30

K 1 0 1 1 1 1 0 1 1 0 0 1 1 0 1 10 5 6 25 36 30

L 1 1 0 1 1 1 0 1 0 1 1 1 1 0 0 10 6 4 36 16 24

M 1 0 1 1 1 0 1 1 0 0 0 1 1 1 1 10 4 6 16 36 24

N 0 1 0 0 0 1 1 1 1 1 0 0 1 0 1 8 6 4 36 16 24

O 1 1 1 1 0 1 0 0 1 0 0 1 0 1 0 8 3 3 9 9 9

P 0 1 0 0 0 1 1 1 0 1 0 0 1 0 1 7 6 3 36 9 18

Q 0 1 1 0 1 0 1 0 0 0 1 0 1 1 0 7 2 5 4 25 10

R 0 0 1 0 1 0 1 0 0 1 0 1 1 0 1 7 3 5 9 25 15

S 1 0 0 1 0 0 0 1 0 1 0 1 0 0 1 6 4 2 16 4 8

T 0 1 1 0 1 0 1 0 0 0 0 1 0 1 0 6 1 3 1 9 3

N = 20   Item totals: 15 16 16 14 16 14 14 14 11 15 7 15 15 11 16   ΣT = 209   ΣX = 104   ΣY = 110   ΣX² = 602   ΣY² = 676   ΣXY = 607

Table 21: Second example of the divided half method.


Correlation (r) = [NΣXY – ΣXΣY] / √{[NΣX² – (ΣX)²][NΣY² – (ΣY)²]}

= [(20 × 607) – (104 × 110)] / √{[(20 × 602) – (104 × 104)][(20 × 676) – (110 × 110)]}

= (12,140 – 11,440) / √[(12,040 – 10,816)(13,520 – 12,100)]

= 700 / √(1,224 × 1,420)

= 700 / 1,318.36

= 0.53

The correlations in the above two examples are actually correlations between the scores of two half tests. By dividing the test into two parts, the effective length of the test is halved. To make provision for the fact that the whole test is twice as long as each half, the Spearman-Brown formula can be applied. (Van Niekerk, 1985: 146.) This formula indicates how the reliability of a test is affected (increased) when similar items lengthen the test. The Spearman-Brown formula is:

rtt = nr / [1 + (n – 1)r], where

rtt = the reliability of the lengthened test,

r = the reliability of the shorter test, and

n = the number of times that the lengthened test is longer than the shorter one.


In the divided half method n = 2 (the whole test is twice as long as each half) and the formula therefore becomes:

rtt = 2roe / (1 + roe), where

rtt = the reliability of the whole test, and

roe = the correlation between the even and uneven items.

Therefore, using the correlation in the second example above:

rtt = 2(0.53) / (1 + 0.53) = 1.06 / 1.53 = 0.69

So, with a correlation of 0.53 between even and uneven items, the reliability of the whole test is 0.69.
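The divided half calculation and the Spearman-Brown correction can likewise be automated. The Python sketch below defines the raw-score Pearson formula again so that the listing is self-contained, correlates the even and uneven half-scores of Table 21, and then applies rtt = 2roe/(1 + roe).

# Divided half (split-half) reliability with the Spearman-Brown correction,
# using the even (X) and uneven (Y) half-scores from Table 21.
import math

def pearson_r(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    sxy = sum(a * b for a, b in zip(x, y))
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

even_half   = [7, 6, 7, 6, 7, 6, 7, 7, 6, 5, 5, 6, 4, 6, 3, 6, 2, 3, 4, 1]
uneven_half = [8, 8, 8, 8, 7, 8, 6, 5, 5, 6, 6, 4, 6, 4, 3, 3, 5, 5, 2, 3]

r_oe = pearson_r(even_half, uneven_half)   # correlation between the two halves
r_tt = 2 * r_oe / (1 + r_oe)               # Spearman-Brown corrected reliability
print(round(r_oe, 2), round(r_tt, 2))      # 0.53 0.69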

DETERMINING TEST ITEM/LEARNING OUTCOME CONGRUENCE

Determining if a test item tests what it is meant to test. It is important to determine if

test items measure what they are supposed to measure, i.e. if they are congruent to

a particular learning outcome. This normally requires a consensus of informed

opinions about the degree of congruence between particular test items and learning

outcomes. This typically requires convening a panel of expert verifiers who will rate

the item-to-outcome congruence according to some established criteria. Perhaps I

should mention that this should actually be done prior to subjecting learners to a test.

A panel of Assessors and, perhaps, one or more Moderators, should do the testing,

while the verifier or verifiers will probably only check if the testing was accurate and

valid.

It would not be practical or efficient to use many people to do the testing. Especially

in smaller learning institutions, where small numbers of learners are involved, testing

can be based on determining consensus between a panel of three or four people

who are expert in the subject area and registered Assessors. In such an instance the

panel's opinion can be determined by voting, and a two-thirds vote or better can be


accepted as confirmation that there is congruence between test items and the

learning outcomes they are supposed to test.

Where large numbers of learners are involved, the panel should consist of a

correspondingly large number of experts. A panel of eight experts should suffice. In

such a case one can make use of statistical methods to test item/outcome

congruence. For example, if the panel decides that for every test item there should

be one, and only one, clear match to a learning outcome, an index of the item-

outcome congruence may be derived. For this procedure, panel members would be

instructed to assign a +1 if there is a strong match between the item and the

outcome, a 0 if a panel member is uncertain whether congruence exists, and a –1 if

the item does not match the objective.

A formula indicating that any particular item, k, is congruent with a specific outcome,

i, can be applied to the panel member’s ratings. This formula is (Osterlind, 1998:

263):

Ijk = [(N – 1)ΣXijk + NΣXijk – ΣXijk] / [2(N – 1)n]

Although the formula might look rather imposing at first glance, it is actually quite simple and can easily be worked through with a set of data. In the formula:

Ijk is the index value.

i is the specific learning outcome.

k is the test item.

Xijk is the rating assigned by a particular panel member.

∑ is, of course, the symbol for summation (adding up a set of numbers); here each sum runs over the ratings of the n panel members.

N is the number of learning outcomes to be assessed.

Suppose an examination paper has 36 items that are intended to assess five

learning outcomes. Suppose, furthermore, that we are testing the item/outcome

congruence between the first test item and the second learning outcome. Imagine a

panel of nine experts has rated the item for congruence to the particular learning

outcome. One of the panel members rated the item as a poor match (-1), a second

panel member rated the item as a moderate match (0), and seven of the panel

members rated the item/outcome match as strong (+1). The sum (∑) of the panel

member’s ratings is 6 (i.e., (-1) + (0) + 7(+1) = 6). Applying these numbers to the

item/outcome congruence formula yields the following:

Ijk = [(5 – 1)6 + 5(6) – 6] / [2(5 – 1)9] = [24 + 30 – 6] / [(8)9] = 48 / 72 = .67
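The index lends itself to a small helper function. The Python sketch below implements the formula exactly as it is presented in this manual (three sums of the panel ratings over the n panel members) and reproduces the .67 of the worked example; it is an illustration of that formula, not a general-purpose congruence package.

# Item/outcome congruence index, following the formula as presented in the text:
# I_jk = [(N - 1)*S + N*S - S] / [2*(N - 1)*n],
# where S is the sum of the panel members' ratings (+1, 0 or -1),
# N is the number of learning outcomes and n is the number of panel members.

def congruence_index(ratings, n_outcomes):
    s = sum(ratings)                 # sum of the +1 / 0 / -1 ratings
    n_panel = len(ratings)
    numerator = (n_outcomes - 1) * s + n_outcomes * s - s
    denominator = 2 * (n_outcomes - 1) * n_panel
    return numerator / denominator

# Worked example: nine raters, five outcomes; one -1, one 0 and seven +1 ratings.
ratings = [-1, 0] + [1] * 7
print(round(congruence_index(ratings, n_outcomes=5), 2))   # 0.67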

This formula will yield an index score of between +1 and –1. A +1 will be obtained if

all the panel members agree that there is a strong item/outcome match. Conversely,

if none of the panel members agree that the item is matched to a particular learning

outcome, the formula will yield an index of –1.

To use the index as a yardstick of item/outcome congruence, the Assessors, Moderators or

verifiers, depending on who does the testing, will have to decide on a criterion level

for the index. This criterion level may be set by deciding on the poorest level of rating

that would be acceptable.

ITEM ANALYSIS

It is quite obvious that each item in a test will have an influence on the final results.

This contribution to the marks obtained by the learner can be either positive or

negative. The aim of item analysis is to determine what each item’s contribution to

the test is, so that good items can be selected and ineffective items can be removed


from the test. Item analysis comprises three aspects, namely:

1. The difficulty index of each item.

2. The discrimination index of each item.

3. The effectiveness of the distractors in multiple-choice items.

Calculation of difficulty and discrimination indexes. The difficulty and

discrimination indexes of items are especially connected with the reliability of a test.

If a test complies with the other requirements (such as validity and objectivity) but the

test’s reliability is low, then item analysis can be of much value as an indicator of

reliability.

The difficulty index of an item is the proportion of learners tested who correctly

answer the item. The higher the value of this proportion, the easier the item is. If the

level of the learning outcomes and essential embedded knowledge is correct, then

an average difficulty is the most desirable result. Items with a much higher difficulty

index are too easy and those with a very low difficulty index are too difficult; such

items should rather be omitted from the test.

The discrimination index indicates how well the item discriminates (distinguishes)

between the learners tested who, in general, are good achievers and those who are

weak achievers. The following guidelines for the judgement of an item’s

discrimination index may be used:

A value of .40 or higher indicates a good item.

A value between .30 and .39 indicates a fairly good item, but there is room for

improvement.

A value between .20 and .29 is a border case that can usually be improved

upon.

A value of .19 or lower indicates a weak item that can either be omitted or

revised.

Suppose a group of 22 learners wrote a test consisting of 20 test items and that the

Facilitator wants to determine the difficulty and discrimination indexes of the test.

Although the evaluation can easily be done by means of a computer, I am showing

you the manual process merely to give you an understanding of how it works. If done


manually, the answers to test items can be checked (1 for each correct answer and 0

for each incorrect or unmarked answer). The Facilitator then puts the answer sheets

in sequence (from the learner with the highest total to the learner with the lowest).

If a complete item analysis is required, the Facilitator will compile a working sheet for

the purpose. Table 22 is an example of such a worksheet.

Learner   Items: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20   Total

A 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 19

B 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 17

C 1 1 1 1 1 1 1 0 1 1 1 0 0 1 1 1 1 1 1 1 17

D 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 0 17

E 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 0 17

F 1 1 1 0 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 0 16

G 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 0 0 1 1 16

H 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 16

I 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 0 1 0 1 16

J 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 1 1 0 16

K 1 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 1 1 17

L 1 1 1 1 1 0 1 1 1 1 0 0 1 1 1 0 1 1 1 1 16

M 1 1 1 1 1 1 0 1 1 1 0 1 0 1 1 1 1 1 1 0 16

N 1 1 1 0 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 16

O 1 0 1 1 1 0 1 1 1 1 0 1 0 1 1 1 1 1 0 1 15

P 1 1 1 0 1 0 1 1 1 1 0 1 1 1 0 1 1 1 1 0 15

Q 1 0 1 0 1 0 1 1 1 1 0 1 0 1 0 1 1 1 1 1 14

R 1 0 1 1 1 0 1 0 1 1 0 1 1 1 0 1 1 1 0 1 14

S 1 0 1 0 1 0 1 1 1 1 0 1 1 1 1 0 0 1 1 1 14

T 1 1 1 1 0 0 0 1 1 1 1 1 1 1 0 0 1 1 1 1 15

U 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0 1 0 0 0 0 12

V 1 0 1 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 12

Correct per item:   22 17 22 16 19 13 18 18 20 22 13 20 11 17 15 17 16 19 16 12


Table 22: Worksheet for item analysis.

As soon as the Facilitator has completed the worksheet, he or she will have some

useful information on the test items, e.g.

1. The achievement of learners tested varied from 12 to 19 out of a possible 20.

2. Item 20 was answered correctly by only 12 of the 22 learners.

3. Items 1, 3 and 10 were answered correctly by all 22 learners.

An easy method of determining the difficulty and discrimination indexes is to compare the highest scoring group of learners tested with the lowest scoring group, while the middle group is omitted. It has been found that the best results are obtained when the top 27% of the learners tested is compared with the lowest scoring 27%, while the remaining 46% is omitted. For the data in Table 22, 27% is determined as follows: 27% of 22 is 27/100 × 22 = 5.94, which we can safely round off to 6. So the highest scoring group of six learners would be learners A to F, while learners Q to V form the lowest scoring group.

The following table can now be prepared to determine the two indexes. (Remember

– we will normally do this by computer – this is merely to illustrate the procedure.)


Learner   Items: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20   Total

A 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 19

B 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 17

C 1 1 1 1 1 1 1 0 1 1 1 0 0 1 1 1 1 1 1 1 17

D 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 0 17

E 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 0 17

F 1 1 1 0 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 0 16

Correct (H):   6 6 6 5 6 6 6 4 5 6 6 5 3 3 6 5 6 5 6 2

Q 1 0 1 0 1 0 1 1 1 1 0 1 0 1 0 1 1 1 1 1 14

R 1 0 1 1 1 0 1 0 1 1 0 1 1 1 0 1 1 1 0 1 14

S 1 0 1 0 1 0 1 1 1 1 0 1 1 1 1 0 0 1 1 1 14

T 1 1 1 1 0 0 0 1 1 1 1 1 1 1 0 0 1 1 1 1 15

U 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0 1 0 0 0 0 12

V 1 0 1 0 0 1 0 1 1 1 0 1 1 1 1 0 1 1 0 0 12

Correct (L):   6 2 6 3 4 2 4 4 6 6 2 6 4 5 2 3 4 5 3 4

Group   Items: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20

H 6 6 6 5 6 6 6 4 5 6 6 5 3 3 6 5 6 5 6 2

L 6 2 6 3 4 2 4 4 6 6 2 6 4 5 2 3 4 5 3 4

H + L 12 8 12 8 10 8 10 8 11 12 8 11 7 8 8 8 10 10 9 6

H - L 0 4 0 2 2 4 2 0 -1 0 4 -1 -1 -2 4 2 2 0 3 -2

M 100 67 100 67 83 67 83 67 92 100 67 92 58 67 67 67 83 83 75 50

D 0 67 0 33 33 67 33 0 -17 0 67 -17 -17 -33 67 33 33 0 50 -33

Table 23: Determining two indexes.

In the above table you can see that the first step is to count how many of the six highest-scoring learners answered each of the 20 items correctly; this is indicated in row H

(for highest). The same procedure is followed to complete row L (for lowest). In the


next row (H+L) (the highest plus the lowest) the correct answers of the two groups

are added in preparation of the calculation of the difficulty index. Row H-L (highest

minus lowest) serves as preparation of the calculation of the discrimination index.

For row M (the difficulty index) the following calculation is done for each item:

Difficulty index = (correct answers in H + L) × 100 / (number of learners in the two groups combined)

For item 1 it is therefore ((6 + 6)/12) × 100 = (12/12) × 100 = 100.

For item 2 it is (8/12) × 100 = 67, and so on.

The calculation for row D (the discrimination index) uses half of that number of learners:

Discrimination index = (correct answers H – L) × 100 / (number of learners in one group)

For item 1 it is ((6 – 6)/6) × 100 = 0.

For item 2 the discrimination index is ((6 – 2)/6) × 100 = 67 (rounded off), and so on.
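In practice rows M and D are produced by software rather than by hand. The Python sketch below derives both rows of Table 23 from the per-item counts of correct answers in the highest (H) and lowest (L) scoring groups, with six learners in each group as in this example.

# Difficulty (M) and discrimination (D) indexes from the per-item counts of
# correct answers in the highest (H) and lowest (L) scoring groups of Table 23.

H = [6, 6, 6, 5, 6, 6, 6, 4, 5, 6, 6, 5, 3, 3, 6, 5, 6, 5, 6, 2]
L = [6, 2, 6, 3, 4, 2, 4, 4, 6, 6, 2, 6, 4, 5, 2, 3, 4, 5, 3, 4]
group_size = 6                      # 27% of the 22 learners, rounded to 6

difficulty = [round((h + l) * 100 / (2 * group_size)) for h, l in zip(H, L)]
discrimination = [round((h - l) * 100 / group_size) for h, l in zip(H, L)]

print(difficulty)       # row M of Table 23: 100, 67, 100, 67, 83, 67, ...
print(discrimination)   # row D of Table 23: 0, 67, 0, 33, 33, 67, ...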

When we look again at the guidelines for judging the discrimination index and apply them to the discrimination indexes determined in Table 23, we can draw the

following conclusions:

Guidelines    Items                               Conclusions

40+           2, 6, 11, 15, 19                    High discrimination index; no need to improve these items.

30 – 39       4, 5, 7, 16, 17                     Relatively high discrimination index; these are good items, although there is some room for improvement.

20 – 29       (none)                              Not so good – such items should be improved.

1 – 19        (none)                              The index is very low – such items should either be revised or omitted.

0 or lower    1, 3, 8, 9, 10, 12, 13, 14, 18, 20  No or negative discrimination ability – these items must be omitted.

Table 24: The discrimination index.

The difficulty index should preferably be near 50: if it is much higher, the question is too easy, and if it is much lower, the question is too difficult. Regarding Table 23:

There are no difficult questions, so the level of the test is probably too low.

Questions 2, 4, 6, 8, 11, 13, 14, 15, 16 and 20 are probably about right, i.e. their difficulty index is reasonably close to 50.

Questions 1, 3, 5, 7, 9, 10, 12, 17, 18 and 19 are too easy, and these items need improvement.

When it must be decided what items should be omitted from the test, the

discrimination index is looked at first, and then the difficulty index. In some cases

both of them together will show that an item should be omitted. In other cases items

should be omitted merely on the basis of the discrimination index, even if the

difficulty index is good. If the discrimination index is reasonable, but the difficulty

index is not as it should be, the item needs improvement.

It is true that unfavourable discrimination and difficulty indexes are not the only

yardsticks in the decision on the omission of items. Sometimes such items cover

important aspects of the work or field of investigation, but in this case the items will

have to be revised and improved.


DETERMINING THE EFFECTIVENESS OF DISTRACTORS IN MULTIPLE-CHOICE

ITEMS

Suppose a particular item in a multiple-choice test consists of a choice of 5

alternatives (correct answer d and distractors a, b, c and e). Suppose, furthermore,

that there were 37 candidates of whom 24 answered the question correctly. An

analysis of the answers can, for example, show the following:

Alternative a (distractor) was marked by 7 candidates.

Alternative b (distractor) was marked by 5 candidates.

Alternative c (distractor) was marked by one candidate.

Alternative d (correct answer) was marked by 24 candidates.

Alternative e (distractor) was not marked by any of the candidates.

From this we can see that alternative e serves no purpose, as no-one saw it as a

possible answer to the question. The Assessor will therefore have to substitute this

distractor by a more acceptable one (especially for the sake of weaker candidates).

As far as the other three distractors are concerned, the limit of acceptability can be

calculated as follows:

Limit of acceptability = (N – C) / (2 × D), where

N = the number of candidates,

C = the number of correct answers, and

D = the number of distractors.

So, the limit of acceptability in our example will be: (37 – 24)/(2 × 4) = 13/8 = 1.625.

As alternatives a and b were marked by more candidates than the acceptable limit (7

and 5 respectively), they can be regarded as successful distractors, because both 7

and 5 are more than the limit of acceptability (1.625). Only one candidate marked

alternative c, so it is not an acceptable distractor (1 is less than 1.625, which is the


limit of acceptability). The Assessor must bear in mind that the whole item is affected

when one or more of the distractors are changed. After he or she has replaced

alternatives c and e with other distractors, he or she will have to test the effectiveness of

the distractors again.
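A small routine can flag weak distractors automatically. The Python sketch below applies the limit-of-acceptability rule to the worked example above; the option letters and counts are those of the example, and the function prints a simple verdict for each distractor.

# Distractor analysis for one multiple-choice item, using the rule
# limit of acceptability = (N - C) / (2 * D), where N = number of candidates,
# C = number of correct answers and D = number of distractors.

choices = {"a": 7, "b": 5, "c": 1, "d": 24, "e": 0}   # counts per alternative
correct_option = "d"

n_candidates = sum(choices.values())                  # 37
n_correct = choices[correct_option]                   # 24
distractors = {k: v for k, v in choices.items() if k != correct_option}
limit = (n_candidates - n_correct) / (2 * len(distractors))

print(limit)                                          # 1.625
for option, count in distractors.items():
    verdict = "acceptable" if count > limit else "should be replaced"
    print(option, count, verdict)   # a and b acceptable; c and e should be replaced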

SUMMARY OF CHAPTER 8

The normal distribution curve can provide us with some valuable information about

the quality of learning and assessment. It is, however, important not to misuse it to

increase or decrease requirements for success artificially, but rather to improve the

quality of learning.

Measures of variability could be used to determine the degree of individual

differences that exists amongst learners. Measures of variability include the range,

standard deviation and variance.

The degree of correlation between sets of variables indicates whether they “interact” with one another, and to what extent they interact. This is valuable for the improvement of

construct validity. Correlation can help the verifier “predict a tendency”. Correlation

calculations can be supported and presented visually by means of a scatter plot. The

scatter plot reveals whether the relationship between two variables is linear (i.e. a

straight line more closely fits the dots of the scatter plot than any curved line does.)

Test reliability can be determined by means of the test-retest method. This means

that learners are subjected to the same test twice, after which the correlation between the two sets of scores is calculated. The time lapse between the two administrations can impair the method. This problem can be reduced by the equivalent test method, where an equivalent test is used the second time. Reliability can also be determined (and improved) by using the divided

half method. With this method the test is only done once, and the items with even

numbers are grouped together, as are the items with uneven numbers. The test is

then treated as two half-tests.


A further method by means of which the reliability of a test can be measured is the

Kuder-Richardson formula.

Test item/learning outcome congruence determines if a test item tests what it is

meant to test.

Item analysis can be used to determine what each item’s contribution to the test is,

so that good items can be selected and ineffective items removed from the test. Item

analysis comprises three aspects:

1. The difficulty index of each item.

2. The discrimination index of each item.

3. The effectiveness of distractors in multiple-choice items.


CHAPTER 9: RECORD AND REPORT VERIFICATION FINDINGS AND RECOMMENDATIONS

INTRODUCTION

Recording and reporting verification findings are important tools to help learning

institutions and learners meet high educational standards. The National Skills

Development Strategy and NQF provide a sound accountability system to ensure

that private and public learning institutions achieve nationally and internationally

acceptable results.

An important prerequisite for efficient reporting is the matter of time. If the reporting

process takes too long, it loses impact, importance and relevance. Consequently the

process becomes an almost fruitless exercise. In this respect the learning institution,

Moderators and verifiers all share the responsibility to provide the results of

verification as soon as possible.

Specific outcome: Record and report verification findings and recommendations.

Assessment criteria:

1. Verification findings are reported to designated role-players within agreed timeframes and according to the quality assurance body’s requirements for format and content.

2. The report addresses key topics and is an accurate and complete reflection of findings based on verifiable data.

3. Presentations of statistics and their interpretations are valid, appropriate and clear.

4. Recommendations are practical and achievable, facilitate the improvement of moderation processes and promote the objectives of quality assessment and credible recognition systems.

5. Records are kept and maintained in accordance with ETQA requirements.

6. Confidentiality of information relating to candidates, Assessors and assessing agencies is preserved in accordance with the ETQA requirements.


REPORTING VERIFICATION FINDINGS

Reporting procedures must be tailored to fit very specific purposes - in this case

reporting on the validity and accuracy of moderation. At the same time, reporting

procedures must be multi-faceted in order to provide an objective picture of the

moderation process. Therefore, a reporting system should include multiple reporting

tools, each with its own specific and well-defined purpose. The specific purpose of

each tool guides its development, determines its format, and establishes the criteria

by which its effectiveness will be judged.

The key to success in selecting the tools to be included in a reporting system is first

to determine what purpose or purposes we want to serve. In other words, we must

decide what information we want to communicate, who is the primary audience for

that information, and how we would like the information to be used. Then, having

decided on the specific purposes, we can select the tool or tools that best serve

those purposes. Different tools can and should be designed to serve different

purposes. Since most reporting systems are designed to serve multiple purposes,

they need to include multiple reporting tools.

The verifier should not endorse learner achievement if the verification process has revealed any of the following:

1. Fraud or irregularities in the provider’s process resulting in the outcome.

2. Flaws in the assessment tools.

3. Inadequate or inconsistent assessment.

4. Learner enrolment information does not match the learner achievement records.

5. Any other irregularities that may be detected during the verification process.

CONTENT OF THE VERIFICATION REPORT

Once the verifier has completed the verification process, he or she will prepare a

verification report. By using the formerly discussed verification forms, the verifier can

ensure that all areas are considered when reviewing the evidence provided by the

provider. Ultimately, the verifier will identify recommendations to the Moderator and

the ETQA on whether the assessment results are endorsed or not. The summary of


the findings and recommendations is abstracted from the overall findings table and

captured in the final report to be submitted to the ETQA. (ETDP SETA, May 2002: 5

(Verifier reporting tool).)

The key questions to consider are whether the Assessor can show that the

candidate was assessed correctly using appropriate tools and following an

appropriate, fair and valid assessment process. The verifier is not to play the role of

the Moderator and is merely checking that there is evidence of the assessment

process. The form used for the consolidation of the verification findings is given in

Chapter 1 (Form V0).

This evidence may take various forms within different providers. Predominantly the

assessment tools, plan and results are documented and this evidence should be

easily available to the verifier.

The verifier must indicate whether the Moderator ‘endorsed’ or ‘did not endorse’ the

assessment findings. The verifier is considering whether the Moderator asked the

following questions in each focus area:

1. Did the Moderator endorse/not endorse that the Assessor is registered?

2. Did the Moderator endorse/not endorse the use of the assessment guide, tools

and activities?

3. Did the Moderator endorse/not endorse that the correct assessment process

was followed?

4. Did the Moderator endorse/not endorse that the Assessor found the correct

assessment results?

In essence the verifier is noting whether the Moderator agreed or not with the

Assessor’s decisions. What the verifier will pick up is whether there are

discrepancies between what is provided by the Assessor and the judgements

declared by the Moderator. This may signal to the verifier that the Moderator needs

support in particular areas. It is therefore critical that the verifier is given access to

the actual assessment documentation, although it is not the role of the verifier to re-

evaluate this information. The verifier checks that the Moderator’s findings are

consistent with the assessment information, process and results.


The verifier must decide whether to ‘endorse’ or ‘not endorse’ the Moderator’s

evaluation. The verifier checks the following (an illustrative way of consolidating the answers is sketched after this checklist):

1. Do I agree with the Moderator’s findings on the Assessor?

2. Do I agree with the Moderator’s findings on the assessment tools?

3. Do I agree with the Moderator’s findings on the assessment process?

4. Do I agree with the Moderator’s findings on the assessment results?

5. Do I agree with the information that the Moderator is registered?

6. Do I endorse the moderation process which was followed?
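As an illustration of how the answers to this checklist might be consolidated, the sketch below records an answer for each focus area and derives an overall endorse or not-endorse recommendation. The focus-area names and the simple rule that every answer must be positive are assumptions for demonstration; in practice the verifier applies professional judgement and the ETQA's own consolidation form (Form V0).

# Illustrative sketch only: one way a verifier might consolidate the checklist
# answers into an overall endorse / not-endorse recommendation. The "all
# answers must be yes" rule is an assumption for demonstration.

FOCUS_AREAS = [
    "moderator_findings_on_assessor",
    "moderator_findings_on_assessment_tools",
    "moderator_findings_on_assessment_process",
    "moderator_findings_on_assessment_results",
    "moderator_registration_confirmed",
    "moderation_process_endorsed",
]

def consolidate(answers: dict) -> str:
    # 'answers' maps each focus area to True (agree/endorse) or False.
    missing = [a for a in FOCUS_AREAS if a not in answers]
    if missing:
        raise ValueError(f"No answer recorded for: {missing}")
    return "endorse" if all(answers[a] for a in FOCUS_AREAS) else "not endorse"

example = {area: True for area in FOCUS_AREAS}
example["moderator_findings_on_assessment_results"] = False
print(consolidate(example))   # "not endorse": record recommendations for support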

If irregularities are picked up during the moderation process or in the moderation findings, the verifier must make recommendations indicating this. Once the verifier has worked through the above issues, he or she will also review the self-evaluation and the irregularities checklist based on the key findings from this process. This rigorous process thus provides all the information to be included in the Final Report V9, which is to be submitted to the ETQA. It will also give the verifier enough insight to make a final judgement on whether or not to endorse the Moderator’s decision regarding certification. Forms V0 to V8 are merely used to prompt the verifier with the key issues to be considered while evaluating the moderation. At all times the verifier must ensure that integrity and professionalism are applied in the verification process.

VERIFICATION RECOMMENDATIONS

In terms of SAQA guidelines within the quality assurance function of ETQAs, the

regulations (section 9(1)) state that ETQAs shall take responsibility for the

certification of constituent learners. Furthermore, the regulations specify that ETQAs

have the authority to delegate this responsibility. ETQAs must also have definite

strategies in place to prevent the issuing of fraudulent certificates.

The design of certificates should be according to SAQA and ETQA criteria and

should include:

Learner’s name.

Description of the registered standards and qualification achieved by the


learner (including title, number, level, and credits).

Date of issue.

Certificate number.

NLRD number.

Provider’s signatories.

RECORD KEEPING

The verifier has to make certain that the learning institution documentation is correct

and accurate. This includes:

Verification records and the frequency and purpose of visits.

Reports for centre visits and advisory activities.

Reports to the awarding body.

Recommendations for improvements to assessment practices.

Learning providers must make sure that they have seen and signed the verifier’s

report on the verification findings.

CONFIDENTIALITY OF INFORMATION

Features must be built into the certificates to ensure that tampering and copying are

eliminated. The following can serve as measures of control (ETDP SETA QA, 2002:

4):

1. Use a SAQA logo as a hologram on tamper-evident labels (TEL). These

holograms would be distributed to the provider to attach to certificates.

2. Sequential serial numbers should be included with the hologram to assist with

the auditing trail by SAQA, ETQA and providers.

3. Alternative tracking mechanisms should also be used, e.g. a seal or bar code.

Learning providers are required to have the following security measures in place

(ETDP SETA QA, 2002: 4):

Learning providers that use the SAQA hologram must develop internal security

measures to ensure tight control over the use of these holograms.

The provider that uses the holograms has to develop a register to record, for example (a minimal register sketch follows this list):

Hologram sequential numbers.

Name of person issued with a certificate.

Date of issue.

Certificate number.

ID number of the person issued with the certificate.

NLRD number.

The ETQA must monitor and audit providers issued with holograms according

to SAQA auditing and monitoring criteria and guidelines.

Learning providers will issue certificates to the learners only upon receipt of

endorsed learner achievement from the ETQA.
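As an illustration only, the sketch below shows a minimal register of the kind described above, in which each hologram serial number is linked to the certificate it was attached to. The field names, file format and sample values are assumptions for demonstration; the actual register format is determined by the provider, the ETQA and SAQA.

# Illustrative sketch only: a minimal CSV register linking each hologram serial
# number to the certificate it was attached to. Field names and sample values
# are assumptions for demonstration.

import csv
from dataclasses import dataclass, asdict

@dataclass
class RegisterEntry:
    hologram_serial: str       # sequential serial number printed with the hologram
    learner_name: str          # name of the person issued with the certificate
    learner_id_number: str     # ID number of the person issued with the certificate
    certificate_number: str
    nlrd_number: str           # NLRD number of the unit standard/qualification
    date_of_issue: str         # e.g. YYYY-MM-DD

def append_to_register(path: str, entry: RegisterEntry) -> None:
    # Append one issued certificate to the register for later auditing.
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(entry).keys()))
        if f.tell() == 0:          # new file: write the column headings first
            writer.writeheader()
        writer.writerow(asdict(entry))

append_to_register("hologram_register.csv", RegisterEntry(
    hologram_serial="SAQA-000123",
    learner_name="A. Learner",
    learner_id_number="8001015009087",
    certificate_number="CERT-2009-0456",
    nlrd_number="115759",
    date_of_issue="2009-06-30",
))

A register kept in this form gives SAQA, the ETQA and the provider a simple audit trail from each hologram serial number to the learner and certificate concerned.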

SUMMARY OF CHAPTER 9

Recording and reporting verification findings rapidly lose value if the process takes

too long. Reporting must be:

1. Timeous.

2. Purpose-driven.

3. Multi-faceted.

The verification report must always indicate if the verifier endorses the moderation

findings or not. It should, furthermore, include recommendations to the Moderator

and ETQA. The ETQA shall take responsibility for the certification of constituent

learners.

The verifier has to make sure that the learning institution documentation is correct

and accurate. Confidentiality and security of information must be ensured. This

includes report findings as well as certificates.


CHAPTER 10: ADVISE AND SUPPORT MODERATORS AND PROVIDERS

INTRODUCTION

The development of outcomes-based learning substantially increased the

importance of assessing performance (competence). The need to promote learning

of skills and competences that cannot be tested by more traditional techniques, and

to report on these, lies behind the range of related developments in assessment and

moderation practice.

There is a strong emphasis on encouraging people to continue their education rather

than to exclude them, as was the case in traditional assessment practice. Above all,

there is an urgent need for education systems to train people who will have the

appropriate range of skills and attitudes to be capable of undertaking a variety of

work roles in a climate of rapid technological change. Problem-solving ability,

personal effectiveness, thinking skills and willingness to accept change are typical of

the general competencies straddling cognitive and affective domains that are now

being sought in young people. To the extent that assessment practice falls short of matching these new educational priorities with appropriate new techniques, it will also inhibit the pursuit of these new educational goals.

Specific outcome: Advise and support Moderators and providers.

Assessment criteria:

1. Moderators are informed of the verification findings timeously and in a manner that promotes understanding of the impact of the findings on their moderation activities.

2. The nature and quality of advice facilitate a common understanding of the issues related to assessment and moderation of assessment.

3. The nature and quality of advice support moderation that upholds good assessment principles and enhances the development and maintenance of quality management systems in line with ETQA requirements.

4. The feedback is sufficient to enable Moderators and providers to further access developmental resources and/or engage in their own development.

5. The regularity and format of the communication are such that effective communication channels are maintained.

6. All communications are conducted in accordance with relative confidentiality requirements.

7. Disputes regarding moderation and verification findings are ensured to be resolved in an equitable manner.


The new assessment culture emphasizes the following (Torrance, 1995: 12):

1. An increasing emphasis on formative, learning-integrated assessment

throughout the process of learning.

2. A commitment to raising the level of Facilitator understanding and expertise in

assessment procedures associated with the devolution of responsibility for

quality assurance in the certification process.

3. An increasing emphasis on validity in the assessment process, which allows the full range of learning outcomes (including the cognitive, psychomotor and even affective domains of learning) to be covered by techniques for gathering evidence of competence.

4. An increasing emphasis on describing learning outcomes in terms of particular

standards achieved – often associated with the pre-specification of such

outcomes in a way that reflects the integration of curriculum and assessment

planning.

5. An increasing emphasis on using the assessment of individual learner’s

learning outcomes as an indicator of the quality of learning provision, whether

this be at the level of the individual group, the institution, the country or for

international comparison.

It is the task of the Assessors to ensure that assessment practice and instruments

used by them support this new learning culture.

FEEDBACK TO MODERATORS

The forms used for feedback have already been discussed in Chapter 1. It is vitally

important that verification feedback be done as soon as possible, for the following

reasons:

1. It is only after feedback has been received that learning institutions may issue

learners with accredited certificates. This means that the NLRD (National Learners’ Records Database) number of each learner must be included with the

feedback. Receiving the NLRD numbers can also serve as confirmation that the

unit standard/qualification for each learner has been entered in the NLRD by

SAQA.

2. Issuing certificates only a year or longer after the course took place can be very


damaging to the image of the learning institution, the ETQA and SAQA.

Feedback on moderation and entering learner data in the NLRD should

therefore not take longer than three weeks from the date on which the

verification took place. Verification should be done not more than three weeks

after the learning institution submitted an application for the moderation of a

course or courses to be verified. If it takes the learning institution another three

weeks to issue certificates, the total process will take nine weeks from the date

on which the learning institution applied for verification, which is already an

almost unacceptably long period of time! (A simple calculation of these intervals is sketched after this list.)

3. Feedback on moderation is needed to improve assessment, moderation and

probably also the course content and learning approach. The longer it takes to

receive and process feedback, the longer it will take before deficiencies in

assessment, moderation and learning can be rectified.

4. Feedback can often become obsolete and irrelevant if it is received too late.

The reasons for this are that dynamic Facilitators, Assessors and Moderators

change (hopefully improve) learning, assessment and moderation processes all

the time. The environment and learning requirements also change, as do the

needs of the learners.
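To make the timeframes in point 2 concrete, the sketch below works out the target dates implied by the three-week intervals described above, counting from the date on which the institution applied for verification. The variable names and the use of calendar weeks are assumptions for illustration; an ETQA may count working days or set different intervals.

# Illustrative sketch only: target dates implied by the three-week intervals
# described above (verification, feedback/NLRD capture, certificate issue).
# Calendar weeks are assumed here for simplicity.

from datetime import date, timedelta

def verification_timeline(application_date: date) -> dict:
    return {
        "verification completed by": application_date + timedelta(weeks=3),
        "feedback and NLRD capture by": application_date + timedelta(weeks=6),
        "certificates issued by": application_date + timedelta(weeks=9),
    }

for milestone, due in verification_timeline(date(2009, 2, 2)).items():
    print(f"{milestone}: {due.isoformat()}")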

ETQA REQUIREMENTS

According to the ETDQA, moderation planning includes addressing assessment

design, activities before, during and after assessment, as well as assessment

documentation (ETDP SETA. May 2002: 4.) The design of the assessment

instrument must be checked by at least one other person with the appropriate

expertise. Feedback and suggested alterations should be carried out, and final

design endorsed by the Moderator.

The design of the assessment instrument’s grading system (assessment criteria, weighting, format for judgements, decisions, findings, etc.) must be checked by at

least one other person with the appropriate expertise. Feedback and suggested

alterations should be carried out, and final design endorsed by the Moderator.

Procedures for arriving at results must likewise be moderated. The form and


frequency of this will depend on the nature of the event. For example, where

assessment is based on observation of performance by an individual, a Moderator

will attend a particular Assessor’s sessions at prescribed times to compare findings

with the Assessor. Modular assessment can be moderated through peer review.

Where assessment is based on a ‘marking session’ bringing together a team of

Assessors, Moderator(s) must be present to check that there is consistency across

individual judgements, and that consensus agreements are applied. Adequate moderation of other ways of arriving at results is also required.

Moderators will check that accommodation of special needs has not compromised

assessment standards.

Moderators will produce a moderation report at specified intervals. This report will

serve as management review reflecting on the system, noting problem areas and

commenting on the consistency (in relation to quality and efficiency) of the

assessment cycle over time. This could mean, for example, being alert to anomalies

in learner results between two different sessions, or to patterns over a longer period

of time.
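As an illustration of the kind of anomaly checking mentioned above, the following minimal sketch compares pass rates between two assessment sessions and flags a large difference for the moderation report. The 15-percentage-point threshold and all names are assumptions for demonstration, not an ETQA requirement.

# Illustrative sketch only: flag an anomaly when the pass rate of one session
# differs sharply from another. The 0.15 threshold is an assumed value; a
# Moderator would set it in line with the provider's history and ETQA guidance.

def pass_rate(results):
    # 'results' is a list of booleans: True = competent, False = not yet competent.
    return sum(results) / len(results)

def anomaly_between_sessions(session_a, session_b, threshold=0.15):
    difference = abs(pass_rate(session_a) - pass_rate(session_b))
    return difference > threshold, difference

session_june = [True] * 18 + [False] * 2      # 90% found competent
session_november = [True] * 12 + [False] * 8  # 60% found competent

flagged, diff = anomaly_between_sessions(session_june, session_november)
print(f"Difference in pass rate: {diff:.0%}, flag for the moderation report: {flagged}")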

IMPROVEMENT OF THE MODERATION PROCESSES

Moderation ensures that people who are being assessed are being assessed in a

consistent, accurate and well-designed manner. It ensures that all Assessors who

assess a particular unit standard or qualification are using comparable assessment

methods and making similar and consistent judgements about learners’

performance. If this is not what is achieved through the moderation process, the

need for improvement is obvious.

Different ETQAs will in all probability have their own requirements for moderation,

and moderation systems will in all probability not be the same for all ETQAs,

primarily because of differences in learning approaches and content. Nevertheless,

in spite of possible differences in moderation requirements and procedures, it still

functions as a means for professional interaction and the improvement of skills that

will continuously improve the quality of assessment.


The following are indications that the criteria generally applicable to moderation are not being met, and that improvement is required:

1. Assessment practices and instruments that do not meet the requirements for

assessment.

2. Poorly designed assessment materials.

3. Lack of consistency in results and assessment decisions.

4. Uncoordinated or no assessment meetings.

5. No or inadequate appeal procedures for learners.

6. No or inadequate reassessment opportunities for learners.

7. Poor performance by Assessors or Moderators.

8. Lack of feedback to National Standards Bodies.

9. Inappropriate choice of design of assessment methods and instruments in a

single learning provider.

10. Assessment that does not match the specifications of the unit standard or

qualification being assessed.

11. Absence of standardised assessment systems.

12. No or inadequate moderation plans.

13. No or irregular liaison with verifiers.

14. Poor management of assessment information.

15. Inadequate capacity of the learning provider to implement a moderation system.

16. Poorly trained or inexperienced Moderators.

17. Biased sampling of candidates’ evidence.

It is necessary to review assessment even after tests or examinations, to identify good and bad practice in assessment design and process, and to incorporate the lessons learned into the assessment redesign. This is often done after verification feedback has been

received.

Changes to assessment can take place at different levels – the individual lecturer,

course team, department and learning institution. At any of these, it is possible to

make a change. For example, an individual Facilitator may wish to introduce a new

assessment method and may be able to do this without affecting other people. At the

other end of the scale, the learning institution may formulate a mission whose achievement requires changes in the design and implementation of assessment – for example, to produce more independent learners or achieve better results. Some

changes require a concerted effort at more than one level. For example, to achieve a

consistent approach to assessment in financial management would require

agreement by individual Facilitators, course teams and the department. (Freeman &

Lewis, 1998: 311.) Learners may also be involved in the review process.

Weaknesses in the assessment design and processes that could have compromised

the fairness of the assessment should be identified and changed in accordance with

the institution’s assessment policy. Weaknesses in the assessment arising from poor

quality of unit standards or qualifications should also be identified and relevant

bodies be informed if changes call for their participation.

HANDLING DISPUTES

The verifier may encounter two possible appeals, the first being the Moderator

appealing against the verification results, and the second being from learners

appealing against the assessment and moderation outcomes.

The learner should have the security of knowing that in the case of unfairness,

invalidity, unreliability, impracticability, inadequacy of expertise and experience, and

unethical practices they are able to appeal. (SAQA Guidelines, 1999: 29.)

An appeal against an assessment decision or the manner in which the assessment

was conducted may be lodged by any of the role players in the assessment process.

This is the proposed procedure to follow in the event of an appeal (an illustrative check of the lodgement deadline is sketched after these steps):

1. Assessment conducted.

2. Feedback given to learner.

3. Appeal lodged within 3 working days.

4. Review by the Moderator.

5. Review by the Education and Training Committee, if the Moderator could not

resolve the appeal.

6. Top management for final decision.
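The sketch below illustrates the three-working-day lodgement window and the escalation path in the steps above. How working days are counted (and whether public holidays are excluded) is an assumption here; the actual rule is set by the institution's appeals policy.

# Illustrative sketch only: check whether an appeal was lodged within three
# working days of feedback, and show the escalation path if it is unresolved.
# The counting rule (Monday to Friday, no public holidays) is an assumption.

from datetime import date, timedelta

ESCALATION_PATH = [
    "Review by the Moderator",
    "Review by the Education and Training Committee",
    "Top management for final decision",
]

def working_days_between(start: date, end: date) -> int:
    # Count Monday-Friday days after 'start', up to and including 'end'.
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:          # 0-4 = Monday to Friday
            days += 1
    return days

def lodged_in_time(feedback_date: date, appeal_date: date, limit: int = 3) -> bool:
    return working_days_between(feedback_date, appeal_date) <= limit

print(lodged_in_time(date(2009, 6, 1), date(2009, 6, 3)))   # True: 2 working days
print(lodged_in_time(date(2009, 6, 1), date(2009, 6, 8)))   # False: 5 working days

# An in-time appeal escalates through the path until it is resolved:
for stage in ESCALATION_PATH:
    print("Next stage if unresolved:", stage)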


Appeals may often lead to reassessment. Therefore, the assessment process has to

include a built-in sub-process for reassessment. When learners have to undergo

reassessment, they have to be given feedback prior to reassessment to address

areas of weakness. Ideally, continuously conducting formative assessment should

minimize the need for reassessment, as the Assessor and learner will decide on

carrying out summative assessment when both have agreed that the learner is ready

for it.

Reassessment should comply with the following conditions: (SAQA Guidelines,

1999: 29.)

The reassessment should take place in the same situation or context and

under the same conditions.

The same method and instrument may be used, but the task and materials

should be different. The latter should, however, be of the same complexity

and level as the previous assessment. In case the methods and instruments

are changed, it must be ensured that they are appropriate for the outcomes

specified.

Care should be taken regarding how often reassessment can be done and the time

lapse between the original assessment and the reassessment. Limits should be set

to the number of times a learner can undergo reassessment and for the length of

time between assessments. A learner who is repeatedly unsuccessful should be

given guidance on other possible and more suitable learning avenues.

Appeals by a Moderator (or the provider whom the Moderator represents) should

also be lodged in writing. Different SETAs will probably have different rules and

procedures for this, but the following general norms should apply:

1. Appeals must be lodged in writing within a specified period of time.

2. All appeals must be investigated.

3. The investigating team must consist of qualified verifiers – an individual should

not investigate an appeal all by him- or herself.

4. The results of the appeals investigation must be provided in writing, also within

a specified period of time, normally within three weeks.

5. Provision for arbitration may also be made in rare cases of serious dispute.


SUMMARY OF CHAPTER 10

One of the most important reasons for quality assurance, including assessment,

moderation and verification, is the promotion of life-long learning. This led to a new

assessment culture, strongly rooted in an outcomes-based approach to learning and

assessment.

Feedback is an important part of the new approach to assessment, because it

promotes growth in learning and assessment. This applies to all facets of the quality

assurance system, including design, assessment, moderation, verification, research

and evaluation.

Assessment instruments must be designed in such a way that they provide for

preparation (input), implementation (process) and feedback (output). This will enable

learning institutions to continuously improve upon their systems and procedures,

thus ensuring sound quality assurance and growth.

The assessment, moderation and verification processes must, furthermore, make

provision for appeals. This is to ensure that the process is fair, transparent, valid,

consistent and objective.


REFERENCES

Adams, M., Bell, L.A., Griffen, P. 1997. Teaching for Diversity and Social Justice.

Routledge, New York & London.

Assessors Course. MEIETB Learner’s Manual. 1999.

Bellis, I. June 1997. Equity Issues in Education and Assessment. Outcomes-based Education: Issues of Competence and Equity in Curriculum and Assessment. South African Certification Council.

Chase, C.I., 1999. Contemporary Assessment for Educators. Longman, New

York.

Cotton, J., 1995. The Theory of Assessment. Kogan Page. London.

Craig, R.L., 1993. Training Development Handbook. A Guide to Human Resource Development. Third Edition. McGraw-Hill Book Company.

Centre for Educational Research, Evaluation & Policy (CEREP). October 1998.

Outcomes-based Education: Perspectives, Policy, Practice and Possibilities.

University of Durban-Westville.

Desmond, C.T., 1996. Shaping the Culture of Schooling. State University of New

York Press.

ETDP SETA. May 2002. Quality Assurance of Learner Achievement. Policy

document.

ETDQA Guidelines for Moderation dated October 2003.

Freeman, R. and Lewis, R., 1998. Planning and Implementing Assessment. Kogan Page. London.


Gregory, S. 1973. Statistical Methods and the Geographer. Third Edition. Lowe and Brydone (Printers) Ltd, Thetford, Norfolk.

Gronlund, N.E. 1998. Assessment of Learner Achievement. Allyn and Bacon, Boston.

Government Gazette. Act no. 97 of 1998: Skills Development Act, 1998.

Guskey, T.R. and Bailey, J.M., 2001. Developing Grading and Reporting Systems for Student Learning. Corwin Press, Inc. Thousand Oaks, California.

Hopkins, K.D., 1998. Educational and Psychological Measurement and Evaluation. Eighth Edition. Allyn and Bacon. Boston.

Ilott, I. and Murphy, R. 1999. Success and Failure in Professional Education: Assessing the Evidence. Whurr Publishers, London.

Marzano, R.J. March 1994. Lessons from the Field About Outcomes-Based Performance Assessment. Educational Leadership.

Olivier, C. 1999. Let’s Educate, Train and Learn Outcomes-based: A 3D Experience in Creativity. NQF-based Design Book.

Osterlind, S.J., 1998. Constructing Test Items: Multiple-choice, Constructed-Response, Performance, and Other Formats. Kluwer Academic Publishers,

Boston.

Pahad, M. 1998. Outcomes-based Assessment: The Need for a Common Vision of What Counts and How to Count It. University of Durban-Westville.

Raggatt, P. Cookwood, F. (Ed.) 1994. Materials Production in Open and Distance Learning. Paul Chapman Publishing.

South African Qualifications Authority Bulletin. Volume 2, Number 31. August

1998 – January 1999.


Spady, W. & Schwahn, C. October 1999. The Operating Essentials and Indicators of Total Learning Communities. A Concrete Vision for Education in the Information Age. Breakthrough Learning Systems.

The National Qualifications Framework: An Overview. February 2000. SAQA

Publication.

Tombari, M.L. and Borich, G.D., 1999. Authentic Assessment in the Classroom. Application and Practice. Merril, an imprint of Prentice Hall. Upper Saddle River,

New Jersey.

Torrance, H., 1995. Assessing Assessment. Evaluating Authentic Assessment. Open University Press. Buckingham, Philadelphia.

Van Niekerk, R., 1985. Research Manual. SADF College for Educational

Technology.