Research Notes, Issue 47, February 2012

    ISSN 1756-509X

Research Notes, Issue 47 / February 2012: A quarterly publication reporting on research, test development and validation

Guest Editor: Dr Jayanti Banerjee, Program Manager, Cambridge Michigan Language Assessments

Senior Editor and Editor: Dr Hanan Khalifa, Assistant Director, Research and Validation Group, Cambridge ESOL; Dr Fiona Barker, Senior Research and Validation Manager, Research and Validation Group, Cambridge ESOL

Editorial Board: Dr Nick Saville, Director, Research and Validation Group, Cambridge ESOL; Angela ffrench, Assistant Director, Assessment and Operations Group, Cambridge ESOL; Ron Zeronis, Assessment Group Manager, Assessment and Operations Group, Cambridge ESOL; Dr Ivana Vidakovic, Senior Research and Validation Manager, Research and Validation Group, Cambridge ESOL

Production Team: Caroline Warren, Research Support Administrator, Cambridge ESOL; Rachel Rudge, Marketing Production Controller, Cambridge ESOL; John Savage, Editorial Assistant, Cambridge ESOL

Printed in the United Kingdom by Océ (UK) Ltd.

  • CAMBRIDGE ESOL : RESEARCH NOTES : issue 47 / february 2012 | 1

© UCLES 2012. The contents of this publication may not be reproduced without the written permission of the copyright holder.

Research Notes

Editorial notes

Welcome to issue 47 of Research Notes, our quarterly publication reporting on matters relating to research, test development and validation within University of Cambridge ESOL Examinations.

This issue, the first of 2012, presents the research outcomes from the first round of Cambridge ESOL's Funded Research Programme, undertaken in 2010. It benefits from the guest editorship of Dr Jayanti Banerjee, Program Manager at Cambridge Michigan Language Assessments.

Following Dr Banerjee's guest editorial (see the following page), which describes the projects and suggests their impact for Cambridge ESOL and more widely, there are four articles based on the Cambridge ESOL Funded Research Programme covering a range of topics and contexts relevant to the teaching or testing of Cambridge English. The reported research includes investigations of the validity of test items and candidates' output, and of the impact and use of various Cambridge English tests in two specific contexts. Such studies enable Cambridge ESOL to support research that goes beyond the normal range of studies we are able to commission or undertake ourselves. They enhance our understanding of the nature and impact of the language tests we work with on a daily basis, and they also provide important outsider viewpoints from both established and newer researchers in the language testing and teaching fields.

The second round of research funded by this programme is close to completion, and the third round is already underway, so we look forward to reporting on these studies in future issues of Research Notes. For those readers inspired to submit their own research proposals, the Call for Proposals for the fourth round is expected to be available in August 2012 on the Cambridge ESOL Research and Validation website, so visit the website later this year for further details.

We finish this issue with an update on ALTE events from Martin Nuttall of the ALTE Secretariat; the announcement of the winners of the Caroline Clapham IELTS Masters Award 2011 and the 2012 Cambridge/ILTA Lifetime Achievement Award; and details of the 30th volume to be published in the Studies in Language Testing series.

    With the new calendar year we are thinking of introducing various innovations to Research Notes, and are planning a reader survey later this year to help inform the future direction of this publication.

Editorial notes 1

Guest editorial: Jayanti Banerjee 2

Investigating learners' cognitive processes during a computer-based CAE Reading test: Stephen Bax and Cyril Weir 3

Investigating figurative proficiency at different levels of second language writing: Jeannette Littlemore, Tina Krennmayr, James Turner and Sarah Turner 14

The attitudes of teachers and students towards a PET-based curriculum at a Japanese university: Jun Nagao, Toru Tadaki, Makiko Takeda and Paul Wicking 27

FCE exam preparation discourses: insights from an ethnographic study: Dina Tsagari 36

ALTE briefing 48

Caroline Clapham IELTS Masters Award 2011 49

Winner of the 2012 Cambridge/ILTA Lifetime Achievement Award 50

Studies in Language Testing 51




    English language tests matter. They matter for the children who are compiling their language portfolios as well as for young adults hoping to study in an English-medium university. They matter for university admissions personnel or employers who are selecting the best candidates for their degree programmes or jobs. English language tests have tremendous symbolic power (Shohamy 2001:118) because they confer access to privileges, certify, and by extension, delimit knowledge.

As a result, providers of English language tests have a great responsibility to stakeholders. Test users rely on test developers to provide high-quality tests that meet professional standards. They also expect testing organisations to present evidence to support test score interpretations and uses. Cambridge ESOL takes these professional responsibilities seriously and has developed a Principles of Good Practice booklet that encapsulates the organisation's commitment to five essential principles: validity, reliability, impact, practicality and quality.

    As part of this commitment, in late 2009 the organisation launched the Cambridge ESOL Funded Research Programme. The first Call for Proposals encouraged studies of its Cambridge English exams in the following areas:

• test validation issues

• issues relating to contexts of test use

• issues of test impact.

    This issue of Research Notes showcases the four projects that were funded in the first round and which took place in 2010. Each study provides insight into one or more Cambridge English examinations in a specific context or from a specific perspective.1

Bax and Weir (this issue) have investigated the cognitive processes employed by participants on a computer-based Cambridge English: Advanced (CAE) Reading test, in order to check the extent to which the items elicit the range and level of cognitive processes expected of an advanced-level Reading test that seeks to emulate real-world academic reading processes. They used eye-tracking technology to collect data in the form of Gaze Plots and Heat Maps, which indicate both how the volunteer test takers' eyes moved when reading the input texts and answering the questions and how long the test takers looked at particular sections of the text. Bax and Weir also administered questionnaires to capture immediate retrospections from test takers. The resulting data confirmed that the test takers employed the appropriate range and level of cognitive processes targeted by the CAE items. The paper not only provides evidence for the validity of the CAE Reading section but also demonstrates the value of eye-tracking technology in test validation.

Littlemore, Krennmayr, Turner and Turner (this issue) have analysed a subset of exam scripts from the Cambridge Learner Corpus to investigate the features of metaphor that distinguish performances at different levels of the Common European Framework of Reference (CEFR, Council of Europe 2001). Using the Metaphor Identification Procedure (MIP) developed by the Pragglejaz Group (2007), Littlemore et al. found that metaphor use increases with proficiency level, and that metaphor clusters emerge only at the intermediate levels. They also found that the types of metaphor used change with proficiency level, as do the functions these metaphors perform. These findings suggest that descriptors for metaphor use could feasibly be incorporated into rating scales for writing.

Nagao, Tadaki, Takeda and Wicking, and Tsagari (this issue) have focused on test use in specific contexts. Nagao et al. have investigated attitudes towards the Cambridge English: Preliminary (PET) in Japan, an emerging market for the test. This study is particularly interesting because the PET is relatively new in Japan, and the study has captured knowledge about the exam, as well as attitudes towards it, at a very early stage of its introduction. The study shows that the test does meet learners' needs but is less popular with teachers. It identifies the need for teacher support programmes, and it also sheds some light on the PET's fitness for purpose in the Japanese context.

Tsagari has studied Cambridge English: First (FCE) test preparation classes in Cyprus. Through a combination of classroom observations and teacher interviews, Tsagari amassed a rich description of the learning activities and teacher talk. She found considerable influence of the test upon the learning activities in the classroom and also upon the teacher talk, particularly the advice that teachers gave to their students. Some of this influence was very positive, but there were also barriers to positive impact. Tsagari points out that the teachers were not an open conduit of information about the exam. Rather, the impact of the FCE upon the classroom was mediated through the teachers' knowledge and beliefs about the exam, their professional skills, and their own language ability. As such, in addition to providing a window into FCE preparation c