
This article was downloaded by: [BIBLIOTHEK der Hochschule Darmstadt]
On: 24 November 2014, At: 09:04
Publisher: Routledge
Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Assessment & Evaluation in Higher Education
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/caeh20

Assessment for Learning: quality and taxonomies
Bradford W. Imrie, College of Higher Vocational Studies, City Polytechnic of Hong Kong
Published online: 23 Jun 2008.

To cite this article: Bradford W. Imrie (1995) Assessment for Learning: quality and taxonomies, Assessment & Evaluation in Higher Education, 20:2, 175-189, DOI: 10.1080/02602939508565719

To link to this article: http://dx.doi.org/10.1080/02602939508565719



Assessment & Evaluation in Higher Education, Vol. 20, No. 2, 1995

Assessment for Learning: quality and taxonomies

BRADFORD W. IMRIE, College of Higher Vocational Studies, City Polytechnic of Hong Kong

ABSTRACT Assessment for student learning and of student performance, formative and summative, is a crucial quality issue, and the paper discusses principles and procedures of taxonomies for good practice in regard to assessment. Five taxonomies, with some variations, are described; exhibits are used to present details. The implications of a taxonomic approach to assessment are profound in that the use of taxonomies requires peer discussion and provides a framework for accountability and quality assurance. From a pedagogical point of view, there should be a clear and justifiable link between objectives, assessment and outcomes, with appropriate teaching and assessment methods selected by teachers. From a quality point of view, a taxonomic approach is an indicator of quality assurance.

Background

In April 1993, 'Assessment Policy and Guidelines' (APG) was approved by the Academic Board of the City Polytechnic of Hong Kong (CPHK), and subsequently published by the Academic Secretary's Office. The 'publication' was prepared by a polytechnic-wide working party established to study issues of assessment and, in context, represented a systematic interpretation and communication of knowledge, two of the four major components of scholarship identified by Boyer (1990) with reference to priorities of the professoriate. The policy and the guidelines have the general aims of quality assurance and the maintenance of standards based on the adopted definition of assessment:

Derivation and communication of information and judgements about student learning and performance, for the purpose of guiding learning and reporting on student progress and achievement in a course of study. (APG, 1993)

In a pedagogical context, a study of issues of assessment should consider the appropriate use of taxonomies with reference to types and levels of learning. Surprisingly, there was strong reaction from members of the working party based on their previous experiences

0260-2938/95/020175-15 © 1995 Journals Oxford Ltd


of requirements associated with Bloom's taxonomy for the cognitive domain. There did not seem to be much awareness of other taxonomies and this prompted a review of some taxonomies which might provide teachers, as professionals, with a framework of theory to ensure that assessment is an effective link between pedagogy and quality of learning and performance. For definition, Woods (1994) indicates that taxonomy is a 'fancy name for a structured set of levels for development of a subject'.

Introduction

Many commentators have identified assessment as a crucial and neglected area of quality learning. There are clear implications for professionalism and, therefore, accountability, since assessment has implications both for quality assurance and for quality control: on the one hand, operational systems for guiding learning and enhancing performance; on the other, the maintenance of standards. For quality assurance, if 'good practice' operational systems are in place then, systematically, intended outcomes are more likely to be achieved.

When Biggs (1993) gave an invited Quality Assurance Seminar at CPHK on assessing learning quality, he started by describing 'backwash' as the phenomenon of assessment methods driving institutional learning. In the institutional context, he noted corroboration (Crooks, 1988a; Elton & Laurillard, 1979). Further corroboration is provided by Entwistle et al. (1992): "Although there are substantial practical problems in changing assessment procedures, such changes represent the single most effective way of influencing the quality of student learning."

Related to awareness of backwash is the issue of neglect of the quality assurance of good assessment, as indicated by Boud (1990): "Some of the most profoundly depressing research on learning in higher education has demonstrated that successful performance in examinations does not even indicate that students have a good grasp of the very concepts which staff members believed the examinations to be testing." This is typical of other such comments in the literature with clear implications for the professionalism of the staff who have professional responsibilities for teaching, learning and assessment.

Although it is now customary to write educational objectives as outcomes, the lack of a systematic framework (taxonomy) means that quality assurance is not evident or verifiable and that there could be a mismatch between the stated (intended) outcomes and the actual behaviour of the students. Biggs (1992) makes this point in noting that, when students study, they attempt to understand the material according to their perceptions of the assessment requirements. Biggs specifically links 'understand' and 'perception' with reference to a recent study on 'forms of understanding' (Entwistle & Entwistle, 1992) which noted concerns about (1) the extent to which exams stood in the way of students achieving their own personal understandings of taught content, and (2) the limited extent to which examinations did indeed tap conceptual understanding.

These considerations clearly overlap with the concepts of 'surface' and 'deep' understanding (Marton & Saljo, 1976) representing two levels or tiers. Some related considerations are noted with, first of all, a definition of understanding because of its particular significance for the quality of learning. The following definition is that used by a Working Party on Assessment of the Engineering Professors' Conference (EPC, 1992) in the UK:

Understanding is the capacity to use explanatory concepts creatively; it is the basis of 'thinking', especially the ability to think logically. It enables people


Assessment for Learning 177

to tackle problems they have not tackled before, such as explaining new phenomena, designing new artefacts, finding the causes of unfamiliar faults and seeing how to correct them, asking searching questions, etc. Concepts, unlike knowledge, cannot be grasped instantaneously, and so understanding takes time to acquire. (p. 5)

For example, in an article 'Using examinations to encourage deep learning', Cox (1993) reports on a simple modification to examination procedures which encourages students to think about and understand their subject-matter. "Allowing students to take material into examinations has had many significant benefits. As far as we can tell there were no adverse effects and we have had to make no unfortunate tradeoffs. The examinations allow students to show their understanding, allow them to be creative and discourage rote learning." In this case the 'backwash' is positive. Similar considerations are noted for the open book examination by Clift & Imrie (1981) and by Gray (1993).

In this brief introduction, issues of perception and understanding, professionalism and practice, backwash and taxonomies, have been identified. This paper now discusses some selected taxonomies with reference to pedagogy, quality, professional practice and professionalism. Eight exhibits are appended to provide additional detail.

Taxonomies

The best known and most widely used taxonomy for objectives assessment is that developed by Bloom (1956) for the cognitive domain. Exhibit 1 gives details of the cognitive domain as represented by Hall & Johnson (1987) in a module for in-service training of academic staff. Note the 'instruction' words as a basis for systematic assessment. Taxonomies have also been developed for the psychomotor (Simpson, 1966) and affective domains (Krathwohl, 1964); also noted, for example, by Woods (1994) for their relevance to problem-based learning. A ladder analogy is used to indicate the interdependence of the cognitive and affective domains for effective learning and for development at higher levels: "The attainment of some complex goal is made possible by alternately climbing a rung on one ladder, which brings the next rung of the other ladder within reach" (Krathwohl, 1964, p. 60).

The three domains are summarised:

(a) The cognitive domain is concerned with knowledge and information and is subdivided into levels of cognitive ability (lower to higher): knowledge; comprehension; application; analysis; synthesis; evaluation.

(b) The psychomotor domain is concerned with the performance of skills: readiness; guided response; mechanism; complex response; adaptation; origination.

(c) The affective domain deals with areas of learning that include such concepts as receiving; responding; valuing; characterization; organization; conceptualization.

Much research has been conducted on such taxonomies, particularly the cognitive taxonomy. Applications have been diverse. One recent example (Benson et al., 1992) describes a model for guiding students in writing reviews of family literature, based on Bloom's cognitive taxonomy. Another example (Reeves, 1990) examines the implications of the cognitive and affective taxonomies for teaching business ethics. Three examples of modification and application of Bloom's taxonomy are now described briefly.


Knowledge (lower level): the ability to recall what has been learned.
  Examples of skills: recalls terms, facts, methods, formulae, principles.
  Instruction words: name, define, list, label, select, state, identify, describe, reproduce, tabulate.

Comprehension: the ability to show that learned material is understood.
  Examples of skills: understands facts, concepts, theories, rules, principles; interprets information in various forms (charts, tables, graphs, written passages).
  Instruction words: explain, summarise, interpret, give examples, compare (simple), contrast (simple), infer, rewrite, précis, defend, illustrate.

Application: the ability to use learned material in a new or novel task.
  Examples of skills: applies concepts, rules, principles to new or novel situations; applies laws and theories to practical situations.
  Instruction words: apply, modify, predict, demonstrate, find, solve, discover.

Analysis: the ability to break up information logically into its component parts.
  Examples of skills: recognises unstated assumptions; argues logically; distinguishes between facts and inferences.
  Instruction words: analyse, break down, distinguish, relate, discriminate, separate, find ... with reasons, infer, deduce, classify.

Synthesis: the ability to structure a situation of information to form a new pattern or whole.
  Examples of skills: writes a well organised theme; writes a creative story, poem, piece of music; combines information from different sources to solve a problem; devises a new taxonomy.
  Instruction words: devise, design, plan, reorganise, rearrange, create, combine, generate, solve, invent, compose.

Evaluation (higher level): the ability to evaluate the worth of material, theories, methods, information, etc. for a given purpose.
  Examples of skills: judges whether conclusions are supported by data; uses criteria to judge the value of a work (art, music, story, plan, computer program).
  Instruction words: compare (complex), contrast (complex), justify, appraise, criticise, determine, draw conclusions.

EXHIBIT 1. Bloom's taxonomy of education objectives: cognitive domain (Hall & Johnson, 1987).


TABLE 1. The RECAP model

Tier 1
  RECAP skill level: Recall; Comprehension; Application.
  Bloom's category: Knowledge; Comprehension; Application.
  Assessment level and task: assessment of minimum essential skills using, e.g., objective, short-answer and structured questions.

Tier 2
  RECAP skill level: Problem-solving skills.
  Bloom's category: Analysis; Synthesis; Evaluation.
  Assessment level and task: assessment of more advanced skills using, e.g., supply questions.

The first example is the RECAP model (Imrie, 1984) (see Table 1) which is an adaptation of Bloom's taxonomy and provides a two-tier structure for both in-course (coursework) and end-course assessment (e.g. examination, project), linked to four levels of learning referred to by the acronym RECAP: REcall, Comprehension, Application, Problem solving. RECAP covers all the abilities of Bloom's taxonomy with direct correspondence for the first three levels (knowledge, comprehension, application) but groups Bloom's top three categories (analysis, synthesis, evaluation) together as problem-solving skills.

Tier 1: The course objectives state the minimum essentials which students should achieve. Objective, short-answer and structured questions are usually suitable for assessing these skills; mastery testing (to see if students have achieved basic competence, at mastery level, in the skills being assessed) may be appropriate.

Tier 2: The course objectives focus on skills which are important for problem solving. Essay, structured and case-study questions are examples of procedures suitable for assessing these skills. Both criterion-referenced and norm-referenced assessments are appropriate at this level.
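The two-tier grouping just described is essentially a lookup from Bloom's six categories to a RECAP tier. As a minimal sketch (the dictionary and function names below are illustrative, not part of Imrie's published model):

```python
# Sketch of the RECAP two-tier grouping of Bloom's cognitive categories.
# Names are illustrative, not from Imrie (1984).
RECAP_TIERS = {
    "knowledge": 1,       # RECAP: REcall
    "comprehension": 1,   # RECAP: Comprehension
    "application": 1,     # RECAP: Application
    "analysis": 2,        # grouped together as
    "synthesis": 2,       # Problem-solving skills
    "evaluation": 2,
}

def recap_tier(bloom_category: str) -> int:
    """Return the RECAP assessment tier (1 or 2) for a Bloom category."""
    return RECAP_TIERS[bloom_category.lower()]
```

Such a mapping makes the direct correspondence for the first three levels, and the grouping of the top three, explicit.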

Exhibit 2 (Petter, 1982) provides an example of a pedagogical analysis with reference to a law school curriculum in Canada in the context of the Bloom (1956) and Krathwohl (1964) taxonomies. The conclusions were that there should be provision for both domains but that "only one out of eleven levels of cognitive and affective learning objectives appears to be consistently addressed ... knowledge, the bottom step on the cognitive ladder" (Petter, 1982, p. 96). In effect, the conclusion seems to be that staff (faculty) have not discharged their professional responsibilities to assess student performance at the appropriate or intended levels.

Exhibit 3 is another example of a scheme (Crooks, 1988b) derived from Bloom's taxonomy, in this case using three categories:

Recall or Recognition: students to demonstrate that they know particular information.

Comprehension or Simple Application: students to demonstrate that they can comprehend and/or apply information, formulae, principles or skills.

Critical Thinking or Problem Solving: students to make use of principles, skills or their creativity to solve problems or work with ideas in ways which are clearly more than routine exercises.



In Taxonomy of Educational Objectives, the seminal work on learning objectives, Benjamin Bloom and his collaborators identify learning objectives as dividing into three domains: the cognitive domain, the affective domain, and the psychomotor domain. According to Bloom, the cognitive domain:

. . .includes those objectives which deal with the recall or recognition of knowledge and the development of intellectual abilities and skills. (Bloom, 1956)

The affective domain

... includes objectives which describe changes in interest, attitudes, and values, and the development of appreciations and adequate adjustment. (Krathwohl, 1964)

Because we agree with Bloom that the psychomotor domain has little relevance to the academic aspects of a university education, it will not be discussed. Our focus here, then, will be on learning objectives in the cognitive and the affective domains.

It is highly desirable, if not essential, that law schools provide for all six levels of cognitive learning objectives within their curricula. Law schools, after all, are training grounds for lawyers and, in order to prepare lawyers to act effectively on behalf of their clients, it is necessary that they teach their students to: (1) know the law; (2) comprehend it; (3) apply it to particular fact situations; (4) break it down into its component parts; (5) reorganize it and employ it creatively so as to serve clients' interests; and (6) evaluate the strength of its authority and its probable impact on clients.

What this means, then, is that if teaching within the law school is going to encompass all six levels of cognitive learning objectives, time must be set aside for cultivating intellectual skills as well as for imparting knowledge. Yet the reality in law schools today seems to be that the vast majority of teaching hours (as opposed to student study or group work time) are directed toward imparting knowledge with few hours devoted to the development of intellectual skills.

Conclusion

This paper has utilized both Bloom's and Krathwohl's taxonomies to examine learning objectives in the cognitive and affective domains. Our examination has shown that both domains and all of the levels of objectives within them are relevant to and should be provided for in a law school curriculum. At the same time, however, we have discovered that only one out of eleven levels of cognitive and affective learning objectives appears to be consistently addressed in law school teaching today; and that is knowledge, the bottom step on the cognitive ladder. In short, if we were to think of the objectives that we have discussed as comprising the 'house' of learning, then legal education, it seems, is being conducted in a closet.

EXHIBIT 2. A closet within the house: learning objectives and the law school curriculum (Petter, 1982).


Books on test construction often recommend use of a test planning grid (sometimes called a table of specifications) to help ensure that a test covers the content and skills that the teacher believes should be assessed. One dimension of the grid specifies content areas to be assessed, the other the types of skills to be assessed based on that content. For example, a test planning grid for a test on biology might look like this:

Content areas (rows): Biochemistry; Cells/Tissues; Genetics, Reproduction; Invertebrates; Vertebrates; Plant Life; Ecological Issues; with column totals.
Skills (columns): Recall or Recognition; Comprehension or Simple Application; Critical Thinking or Problem Solving; with row totals.

The numbers in the grid indicate the percentage of marks on the test for each particular combination of content area and type of skill. Thus, for instance, the combination of the content area of genetics and reproduction with the skills of comprehension and simple application is specified as contributing 10 percent of the total marks for the test. This might be achieved by constructing questions, counting a total of 10 percent, which demanded understanding of basic genetics principles and the ability to apply them to relatively straightforward specific examples which had not been covered in class or in the textbook.

The main benefit from drawing up a test planning grid before starting to write items for the test is that it guides the selection of items. Items which involve lower level skills (e.g. recall) tend to be easier to write, and some content areas are more fertile sources of items than others. Without advance planning, it is all too easy to construct a test which does not sample content areas and skills appropriately.

EXHIBIT 3. Use of an assessment planning grid (Crooks, 1988b).


Use of an assessment planning grid is recommended and a test on biology (Otago University) is used for illustration. It is noted that the taxonomy is the basis of the planning grid to emphasise that, "without advance planning it is all too easy to construct a test which does not sample content areas and skills appropriately" (Crooks, 1988b, p. 20). Note that (a) and (b) correspond to Tier 1, and (c) corresponds to Tier 2 in the RECAP model.
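The arithmetic of such a planning grid, where cell percentages must account for 100% of the marks across both rows and columns, can be sketched as follows (the content areas follow Crooks's biology example, but the percentage figures are invented for illustration and are not those of the Otago test):

```python
# Illustrative test planning grid: content area -> % of marks per skill
# column. Columns follow Crooks (1988b): recall/recognition,
# comprehension/simple application, critical thinking/problem solving.
# The percentages here are made up for illustration only.
grid = {
    "Genetics, Reproduction": [5, 10, 5],
    "Cells/Tissues":          [10, 10, 5],
    "Ecological Issues":      [5, 15, 10],
    "Plant Life":             [10, 10, 5],
}

# Row totals: marks allocated to each content area.
row_totals = {area: sum(marks) for area, marks in grid.items()}

# Column totals: marks allocated to each skill type.
column_totals = [sum(col) for col in zip(*grid.values())]

# A well-formed grid allocates exactly 100% of the marks.
assert sum(row_totals.values()) == 100
assert sum(column_totals) == 100
```

Writing the grid down before writing items is what guards against over-sampling the easy recall cells.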

For professional and vocational education, attitudes and values are particularly important and there are two taxonomies which provide an inclusive framework. The first (EPC, 1989) is a teaching/learning analysis for quality in engineering education that could be generalised to other fields and includes: knowledge; skills; understanding; attitudes and values; personal qualities (Exhibit 4). Rather than levels of learning, the EPC taxonomy identifies types of learning and links them with teaching: resources, process and assessment of outcome. Subsequently, Sparkes (1994) reported a further development of this taxonomy by the addition of 'know-how' as a distinct type of learning. There is reference to deep learning and surface learning with the former being the "key issue in achieving quality in education" (EPC, 1989, p. 5). As noted previously, the key pedagogical issue is that of 'understanding' (EPC, 1992). Two domains (cognitive and affective) are presented with the emphasis on understanding which "involves grasping concepts and being able to use them creatively" (EPC, 1989, p. 4). With regard to assessment, "although most academics intend to teach understanding, the examinations they set can often be dealt with ... by adopting a surface approach to learning" (EPC, 1989, p. 5). Professional abilities are involved and, three years later, EPC (1992) presented a taxonomy of examination styles which indicates potential to "test levels of learning of different kinds" (Appendix A) and takes into consideration staff concerns such as "staff time per student".

The second inclusive taxonomy is the 'experiential taxonomy' (ET) which was developed by Steinaker & Bell (1979) to provide an integrated framework related to actual learning experience with due consideration for the varying and time-dependent interactions of cognitive, affective and psychomotor learning. This taxonomy sets out educational objectives related to the following stages of learning experience and development (Exhibit 5): exposure; participation; identification; internalisation; dissemination. At the dissemination stage, assessment requires the student (in some specified way) to demonstrate learning and there should be clear correspondence between the actual and the intended outcomes in terms of the 'qualities/skills' that the course seeks to develop in students. The level of the award can then be justifiably related to the aims, objectives and content of the course. Steinaker & Bell (1979) wanted a taxonomy "that could speak to the totality of an experience"; Exhibit 5 also sets out the ET chapters to indicate the various educational applications of the taxonomy. As such, ET provides a powerful structure for the management of learning experiences to improve teaching performance and methods, and for the organisation of educational development.

Exhibit 6 (Biggs, 1992) is the last example of a taxonomy selected for this paper. The SOLO taxonomy (Biggs & Collis, 1982) sets out a 'structure of the observed learning outcome' which may be used to evaluate learning quality or to set curriculum objectives. In general, for each of five modes of learning associated with age (maturation) and corresponding to forms of knowledge (higher levels of learning), the taxonomy distinguishes five structural levels in a learning cycle which repeats in each mode. These structural levels could also be considered as 'experiential'. The formal and postformal modes should occur in higher education.


Type of learning: KNOWLEDGE
  Teaching resources: provide information in the best way (lectures, data bases, video or audio tapes, books, etc.).
  Process: show relevance of information to Engineering; teach simple study skills; use discovery methods where appropriate.
  Assessment of outcome: test for recall by questioning.

Type of learning: SKILLS
  Teaching resources: provide facilities appropriate to the skills being learnt (labs, problem classes, computers, group projects).
  Process: instruct and demonstrate the skills and make opportunities for practice, often, but not necessarily, supervised.
  Assessment of outcome: set tasks that require the exercise of the skills.

Type of learning: KNOW-HOW (Sparkes, 1994)
  Teaching resources: provide equipment on which relevant experience can be obtained.
  Process: guided problem-solving activity (e.g. apprentice training).
  Assessment of outcome: set tasks which test the know-how obtained on the given or similar equipment.

Type of learning: UNDERSTANDING
  Teaching resources: provide a rich educational environment (lectures, labs, computers, library, tutorials, coffee bar, VCRs, problem classes, electronic mail, etc.); further training for staff.
  Process: focus teaching on concepts; encourage students to use as many facilities as they can to help grasp new concepts; set projects; add problem solving to all laboratory experiments.
  Assessment of outcome: set new tasks that require understanding, not just skills and memory, for their completion (projects, open-ended questions, correcting other people's errors, designing, explaining).

Type of learning: ATTITUDES, VALUES & PERSONAL QUALITIES
  Teaching resources: provide congenial surroundings and good quality teachers and teaching methods; provide outlets for personal projects; provide newspapers, discussion groups, external contacts.
  Process: motivate where necessary (in lectures and visits, by example, by teaching 'Learning to Learn', by setting challenges that can be met successfully, etc.).
  Assessment of outcome: by personal contact: by the effort exerted, by the extent to which challenges are welcomed, by the questions asked, by breadth of outlook, by enthusiasm.

These three kinds of 'cognitive' learning (i.e. knowledge, skills and understanding) are the main domains of educational activity of most university engineering departments and together, in an appropriate mix, make up their education of engineering capability. The resources needed, the appropriate educational methods, and the appropriate assessment methods (if required) are summarised above for each kind of learning.

To test for understanding it is necessary to set students new challenges. Unfortunately this is not easy to do continually in an educational establishment, so there is a tendency to set exams which, though intended to test understanding, turn out to be variations of a fairly standard type. The result is that although most academics intend to teach understanding, and encourage students to be 'deep' learners, the examinations they set can often be dealt with successfully mainly by the exercise of only memory and well-practised skills, that is, by adopting a 'surface' approach to learning. Evidently this is a key issue in achieving quality in education.

EXHIBIT 4. A teaching learning analysis of engineering education (EPC, 1989).


Experiential Taxonomy (ET) Categories

1.0 EXPOSURE (initial level)
    1.1 Sensory
    1.2 Response
    1.3 Readiness

2.0 PARTICIPATION
    2.1 Representation
        2.1.1 Covertly
        2.1.2 Overtly
    2.2 Modification

3.0 IDENTIFICATION
    3.1 Reinforcement
    3.2 Emotional
    3.3 Personal
    3.4 Sharing

4.0 INTERNALISATION
    4.1 Expansion
    4.2 Intrinsic

5.0 DISSEMINATION (final level)
    5.1 Informational
    5.2 Homiletic

ET Chapters

4 CURRICULUM DEVELOPMENT
5 LEARNING PRINCIPLES
6 ROLE MODEL FOR TEACHERS
7 CREATIVITY, CRITICAL THINKING AND PROBLEM SOLVING
8 TEACHING STRATEGIES
9 EVALUATING THE TEACHING-LEARNING ACT
10 TEACHER CHANGE
11 STAFF IN-SERVICE PROGRAMS

EXHIBIT 5. The experiential taxonomy (Steinaker & Bell, 1979).


Table 2.1. Modes and levels in the SOLO taxonomy

Mode: Next
  5. Extended abstract. The learner now generalizes the structure to take in new and more abstract features, representing a new and higher mode of operation.
  4. Relational. The learner now integrates the parts with each other, so that the whole has a coherent structure and meaning.

Mode: Target
  3. Multistructural. The learner picks up more and more relevant or correct features, but does not integrate them.
  2. Unistructural. The learner focuses on the relevant domain and picks up one aspect to work with.

Mode: Previous
  1. Prestructural. The task is engaged, but the learner is distracted or misled by an irrelevant aspect belonging to a previous stage or mode.

The essentials of the model are given in Figure 2.1.

FIG. 2.1. Modes, learning cycles, and forms of knowledge. [Figure: modes (sensori-motor, concrete-symbolic, post-formal) are plotted against age on the abscissa, with the corresponding forms of knowledge (tacit, intuitive, declarative, theoretical, metatheoretical) on the ordinate.]

Modes typically appear at the ages indicated on the abscissa, and accumulate as indicated on the ordinate, remaining as potential media for learning throughout life. The learning cycle progresses from unistructural (U), through multistructural (M), to relational (R) within each mode.

EXHIBIT 6. Modes of learning, forms of knowing, and ways of schooling (Biggs, 1992).


186 B. W. Imrie

[A] 1. Prestructural Use of irrelevant information or no meaningful response.

2. Unistructural Answer focuses on one relevant aspect only.

3. Multistructural Answer focuses on several relevant features, but they are not coordinated.

4. Relational The several parts are integrated into a coherent whole.

5. Extended abstract The answer generalises the structure beyond the information given; higher order principles are used to bring in a new and broader set of issues.

(From Ramsden, 1992.)

[B] The Structure of Observed Learning Outcomes (SOLO) taxonomy is a research-based measure of the quality of learning outcomes (cf. Biggs & Collis, 1982) and is a widely applicable framework for judging the structure of essays, answers to technical questions, medical diagnoses or students' accounts of their reading:

Level 1: Ignorance The learner reveals no correct knowledge about the question.

Level 2: Unistructural The answer contains one correct feature or item of information.

Level 3: Multistructural The answer is list-like, containing a number of unconnected items.

Level 4: Relational The answer relates components together to make a case or logical whole.

Level 5: Extended abstract Level 4 with, in addition, a connection to a related area of knowledge beyond the explicit demand of the question.

These categories have an intuitive relation to degree categories. Level 3 answers gain a pass, or if the list is long enough, a lower second, level 4 answers gain upper seconds and level 5 answers gain firsts. (From Gibbs, 1992.)
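Gibbs's suggested correspondence between SOLO levels and degree categories can be expressed as a simple mapping. The sketch below is illustrative only: the function name and the threshold for a "long enough" list (six items) are invented for the example and are not taken from the source.

```python
# Illustrative sketch of the correspondence Gibbs (1992) suggests between
# SOLO levels and UK degree categories. The list-length threshold of 6 is
# an invented assumption, not a figure from the source.

def degree_category(solo_level: int, list_length: int = 0) -> str:
    """Map a SOLO level (1-5) to a degree category."""
    if solo_level <= 2:
        # Ignorance/unistructural answers fall below a pass.
        return "fail"
    if solo_level == 3:
        # Multistructural: a pass, or a lower second "if the list is long enough".
        return "lower second" if list_length >= 6 else "pass"
    if solo_level == 4:
        return "upper second"   # relational
    return "first"              # extended abstract

print(degree_category(3))                 # pass
print(degree_category(3, list_length=8))  # lower second
print(degree_category(5))                 # first
```

Making the correspondence explicit in this way also makes its weakness visible: a list of unconnected items earns marks purely by length, which is precisely the concern discussed later in this paper.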

[C] A similar type of scale, but possibly less complex, is that suggested by Entwistle & Brennan (1971) for assessing qualities of learning:

Level Description

Deep active Student demonstrates an understanding of the argument and shows how it is supported by the evidence.

Deep passive Student mentions the main argument, but does not relate evidence to conclusion.

Surface active Student describes the main points made without integrating them into an argument.

Surface passive Student mentions a few isolated points as examples.

EXHIBIT 7. Some references to the SOLO taxonomy.


Exhibit 7 sets out two recent references to the SOLO taxonomy together with a scale or taxonomy for deep and surface learning. Reference (A) is an application by Ramsden (1992) of the structural levels of the SOLO taxonomy to the assessment of an essay or 'open-ended' problem. A similar application of these levels is made in reference (B) by Gibbs (1992). However, the attempt to link these levels to degree categories is problematic. I discussed this with Biggs and share his concern that level 3 answers "containing a number of unconnected items" might gain a pass or, "if the list is long enough", a lower second. Elsewhere, Gibbs (1991) is particularly scathing about the lack of professionalism in using assessment for learning.

Reference (C) is included partly because of the earlier reference to Entwistle's (Entwistle & Brennan, 1971) ongoing work in this area and partly because of the conceptual overlap with the SOLO taxonomy. In this case qualities of learning are linked with levels of learning for assessment of student performance.

Concluding Comments

From a pedagogical point of view, there should be clear and justifiable links between objectives, assessment and outcomes, with appropriate teaching and assessment methods selected by teachers. Assessment of student performance (formative and summative) provides the principal domain of interaction between students and teachers in the context of a course, institution or culture. This paper has set out considerations for using taxonomies to provide accountable frameworks for intended levels/types of learning. The examples used, together with other conclusions from the literature of higher education, indicate that levels of learning (or abilities) tend to be neglected, i.e. assessment is not an integral part of teaching: "the initiation and management of learning" (Warnock, 1990).

From a professional point of view, levels of learning need to be identified (as well as domains of knowledge) so that the rationale for assessment is accessible to students (for effective learning), to peers (for effective teaching, moderation and continuity) and for external requirements of quality assurance. If systematic procedures, such as taxonomies, are not used to relate levels of learning to teaching and assessment, then 'quality' is at risk and related issues of professionalism may need to be addressed. The development or adaptation of an assessment framework (e.g. taxonomy) by a peer group of academics would be an act of both scholarship and professional development.

Adapting an existing taxonomy, or devising a new one, by academic staff addresses the criticism that a taxonomic approach is reductive, i.e. that specification is implicitly exclusive. Provided that professional judgement has been used to ensure that a taxonomy is 'fit for its purpose', the outcome is a framework or synthesis (cf. Exhibit 1) that serves as a basis for further professional judgement and development.

From a quality point of view, the concepts and principles of a taxonomic approach should be used to demonstrate quality assurance in the design and delivery of the curriculum; also to provide a systematic approach to the continuing professional development of academic staff. An appropriate comment is provided by Rieck in a letter (10 February 1993) to The Chronicle of Higher Education; he identifies three principles for improving assessment:

First, an understanding that spending time on teaching and assessment is the first priority for a faculty member, followed by publication. Second, realizing


that students will produce, or at least try to, at the level you demand, but not higher. Finally, faculty members need to do some reading or course auditing to learn how to assess students realistically and fairly.

From a management point of view, the apparent lack of professionalism and quality in assessment needs to be addressed urgently. It is not just a question of minding the 'Ps' and 'Qs' of pedagogy and quality; it is a question of caring for the needs and rights of students. In contrast with assessment of performance, assessment should be for learning, as an intrinsic part of teaching which initiates and manages learning. The quality of assessment is a professional responsibility of staff which impacts on the rights of students; professionalism implies capability as well as accountability.

Notes on Contributor

BRADFORD W. IMRIE, BSc (Eng), MSc, CEng, MIMechE, FHKIE, is Director of Professional Development & Quality Services, City University of Hong Kong, where he was previously (1991-94) Principal of the College of Higher Vocational Studies. Author of Compressible Fluid Flow (1973), Assessing Students, Appraising Teaching (co-author, 1981) and some 70 papers in the fields of technology and higher education, Brad Imrie's current interests are professional development of staff, student development, vocational education and quality assurance in higher education. Correspondence: Bradford W. Imrie, Professional Development & Quality Services, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong. Fax: 852 2788 8210. E-mail: [email protected].

REFERENCES

APG (1993) Assessment Policy and Guidelines (Hong Kong, Academic Secretary's Office, City Polytechnic of Hong Kong).

BENSON, M. J., SPORAKOWSKI, M. J. & STREMMEL, A. J. (1992) Writing reviews of family literature: guiding students using Bloom's taxonomy of cognitive objectives, Family Relations, 41, pp. 65-69.

BIGGS, J. B. (1992) The psychology of educational assessment and the Hong Kong scene, Bulletin of the Hong Kong Psychological Society, 28/29, pp. 5-26.

BIGGS, J. B. (1993) Assessing learning quality: reconciling institutional, staff and educational demands, Quality Assurance Seminar, City Polytechnic of Hong Kong, 17 June 1993.

BIGGS, J. B. & COLLIS, K. F. (1982) Evaluating the Quality of Learning: the SOLO taxonomy (New York, Academic Press).

BLOOM, B. S. (Ed.) (1956) Taxonomy of Educational Objectives: Handbook 1, Cognitive Domain (New York, Longman).

BOUD, D. (1990) Assessment and the promotion of academic values, Studies in Higher Education, 15(1).

BOYER, E. L. (1990) Scholarship Reconsidered: priorities of the professoriate, Carnegie Foundation for the Advancement of Teaching (New Jersey, Princeton University Press).

CLIFT, J. C. & IMRIE, B. W. (1981) Assessing Students, Appraising Teaching (London, Croom Helm).

COX, K. (1993) Using examinations to encourage deep learning, Department of Computer Science, City Polytechnic of Hong Kong.

CROOKS, T. J. (1988a) The impact of evaluation practices on students, Review of Educational Research, 58, pp. 438-481.

CROOKS, T. J. (1988b) Assessing Student Performance, Green Guide 8, Higher Education Research & Development Society of Australasia (Sydney, HERDSA).

ELTON, L. & LAURILLARD, D. (1979) Trends in student learning, Studies in Higher Education, 4, pp. 87-102.

ENTWISTLE, A. & ENTWISTLE, N. (1992) Experiences of understanding in revising for degree examinations, Learning and Instruction, 2, pp. 1-22.


ENTWISTLE, N. J. & BRENNAN, T. (1971) The academic performance of students, British Journal of Educational Psychology, 41, pp. 268-276.

ENTWISTLE, N., THOMPSON, S. & TAIT, H. (1992) Guidelines for Promoting Effective Learning (Edinburgh, Centre for Learning and Instruction, University of Edinburgh).

EPC (1989) Quality in Engineering Education, Engineering Professors' Conference (UK), Occasional Paper No. 1.

EPC (1992) Assessment Methods in Engineering Degree Courses, Engineering Professors' Conference (UK), Occasional Paper No. 5.

GIBBS, G. (1991) Eight myths about assessment, The New Academic, 1(1).

GIBBS, G. (1992) Improving the quality of student learning through course design, in: BARNETT, R. (Ed.) Learning to Effect (Milton Keynes, SRHE/Open University Press), Ch. 10.

GRAY, T. G. F. (1993) Open book examinations, The New Academic, 3(1).

HALL, C. & JOHNSON, A. (1987) Planning a test or examination, Module A5 in: IMRIE, B. W. & HALL, C., Assessing Student Performance (Wellington, Authority for Advanced Vocational Awards).

IMRIE, B. W. (1984) In search of academic excellence: samples of experience, Proceedings of the Tenth International Conference on Improving University Experience (University of Maryland, University College), pp. 160-183.

KRATHWOHL, D. R. (Ed.) (1964) Taxonomy of Educational Objectives: Handbook 2, Affective Domain (New York, Longman).

MARTON, F. & SALJO, R. (1976) On qualitative differences in learning: I. Outcome and process, British Journal of Educational Psychology, 46, pp. 4-11.

PETTER, A. (1982) A closet within the house: learning objectives and the law school curriculum, in: Essays in Legal Education (Toronto, Butterworths), Ch. 5.

RAMSDEN, P. (1992) Learning to Teach in Higher Education (London, Routledge).

REEVES, M. F. (1990) An application of Bloom's taxonomy to the teaching of business ethics, Journal of Business Ethics, 9, pp. 609-616.

RIECK, W. A. (1993) Letter to the Editor, The Chronicle of Higher Education, 10 February 1993.

SIMPSON, E. J. (1966) The classification of educational objectives, psychomotor domain (Illinois, University of Illinois Press).

SPARKES, J. J. (1994) Defining quality is easy, achieving it is the problem, paper presented to the 20th International Conference on Improving University Teaching, University of Maryland University College.

STEINAKER, N. W. & BELL, M. R. (1979) The Experiential Taxonomy (New York, Academic Press).

WARNOCK, M. (1990) Teaching Quality, Report to the UK Polytechnics and Colleges Funding Council.

WOODS, D. R. (1994) Problem-based Learning: How to Gain the Most from PBL (Waterdown, Ont., Donald R. Woods).
