
Challenges in educational research

The need for evidence-based education

The need for evidence to inform both policy and practice in education has been recognised recently with the establishment of the Campbell Collaboration and Best Evidence Medical Education (BEME). The Campbell Collaboration was established in 1999 to provide policy makers with evidence on 'what works' in the social and behavioural sectors, including education.1 BEME is more focused on educational practice, and has been defined as 'the implementation, by teachers and educational bodies in their practice, of methods and approaches to education based on the best evidence available'.2

I welcome these developments, which are part of a broader movement to make education more evidence-based. The reasons for attempting to do this are overwhelming. The first is that medical education is expensive. The Service Increment for Teaching (SIFT) budget (which supports undergraduate medical education) in England and Wales was £431 million in 1998–99.3 In countries where medical education is not publicly funded, students (or their parents) make considerable financial investment in their education. Educators must be accountable for spending such sums, and must be able to demonstrate that educational programmes achieve the desired outcomes.4

Secondly, the rationale for spending time, effort and money on medical education is that we believe it has an impact on the way doctors practise in the future, and hence on health care. This puts medical education in the same position as any other health technology. In these days of evidence-based medicine, where all new health technologies are rigorously evaluated before widespread implementation, educational changes must be evaluated equally rigorously.

Both the Campbell Collaboration and BEME are limited in their activity by the quality of educational research available, and unfortunately much of this is still at a relatively early stage of development. I do not wish to recap the current vigorous debate as to what constitutes evidence in education, how educational research should be assessed, and to what extent the criteria of evidence-based medicine can be transferred to educational research.5,6 What is clear is that much of the current work in educational journals is descriptive, and frequently the evaluation is limited to determining student satisfaction with the new course. This is unsatisfactory. A number of reasons contribute to this, which I discuss below along with some potential solutions.

Improving the quality of educational research: problems and solutions

Educational interventions are complex interventions

Educational research shares many similarities with health services research in that the intervention under study is often complex and multifactorial. The environment in which the intervention occurs is the real world and, as such, is subject to multiple economic, political and social factors, outwith the control of the investigator, which may change during the study period, making interpretation of results more difficult. As the interventions are complex, it is often difficult to know which part of the intervention is responsible for which effect, and to what extent the intervention can be adapted to fit local circumstances without losing its effectiveness. However, health service researchers are developing methodologies to deal with complex interventions, and I urge educational researchers to follow their lead.

The experience of evaluating complex interventions in health services research suggests that both quantitative and qualitative approaches are necessary. Campbell et al. have suggested a phased approach to both development and evaluation of complex interventions,7 which I believe is highly applicable to educational research. They suggest that the first phase is a theoretical phase, identifying the evidence that the intervention might have the desired effect. Subsequently the different components of an intervention must be defined, and their interrelationships considered. This can be done through qualitative testing with focus groups, preliminary surveys or case studies. Qualitative research can also determine how the intervention works, and potential barriers to implementation. The next phase is defining the trial and intervention design and considering the methodological issues for the main trial (randomisation, blinding, recruitment, etc.). The final phase is promoting effective implementation.

Thus, although the randomised controlled trial was initially developed to determine the effects of a single intervention, such as a drug, the methodology, particularly of pragmatic trials,8 can be transferred to complex interventions such as educational innovations. As RCTs tend to treat the intervention under study as a 'black box', qualitative methods are needed to explore the relative importance of the various components of the intervention and to provide meaning and explanation of the findings.9–11

Health service researchers are developing new methodologies for situations where a randomised controlled trial is not possible. Examples of these new methodologies include the balanced incomplete block design, which allows comparison of multiple treatments or interventions with a relatively small sample size. Patients are randomised to receive different treatment sequences, so that all possible combinations are included.12 Mason et al. have described how block designs can be used in trials of interventions to change professional practice.13
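As a concrete, purely illustrative sketch of this idea, the Python fragment below builds an 'unreduced' balanced incomplete block design, in which every possible combination of a fixed number of treatments forms a block, and randomly allocates participants to blocks. The teaching interventions named in it are hypothetical and do not come from the studies cited above.

```python
import itertools
import random

def unreduced_bibd(treatments, block_size):
    """Return every possible block of `block_size` treatments.

    In this 'unreduced' balanced incomplete block design each pair of
    treatments appears together in the same number of blocks, so all
    pairwise comparisons are possible even though no participant
    receives every treatment.
    """
    return list(itertools.combinations(treatments, block_size))

def allocate(participants, blocks, seed=42):
    """Randomly assign each participant to one block (a subset of the
    treatments) and randomise the order in which they receive them."""
    rng = random.Random(seed)
    allocation = {}
    for person in participants:
        block = list(rng.choice(blocks))
        rng.shuffle(block)          # randomised sequence within the block
        allocation[person] = block
    return allocation

if __name__ == "__main__":
    # Hypothetical example: four teaching interventions, blocks of two.
    treatments = ["lecture", "PBL", "e-learning", "bedside teaching"]
    blocks = unreduced_bibd(treatments, block_size=2)   # 6 blocks in total
    students = [f"student_{i}" for i in range(1, 13)]
    for student, sequence in allocate(students, blocks).items():
        print(student, "->", sequence)
```

In a real trial the numbers allocated to each block would be balanced rather than left to chance, and the analysis would reflect the design; the point here is only that each pair of interventions can be compared directly without every participant receiving every intervention.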

Problems with randomisation

There are significant practical problems facing researchers who wish to randomise students, whether at the level of whole curricula or of individual courses. In most countries, students apply for specific schools, and have an expectation of which curriculum they will follow. Within any one curriculum, the combination of ethical and logistical difficulties in randomising allocation to a new course can seem almost insuperable. Do you obtain formal consent from students before randomising them to a new or conventional course, and what happens to students who decline to enter the trial? Given the complexity of most medical school timetables, these questions can deter all but the most committed researchers.

Despite this, the ethical and intellectual basis for encouraging schools to promote RCTs of new courses is clear. It is ethical to randomise students when there is genuine uncertainty about the relative benefits of a new and a traditional course.14 Students should be encouraged to contribute to the research culture within the medical school, and adoption of 'patient preference' methodologies15 within randomisation is likely to accommodate the anxieties of most students.
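To illustrate the 'patient preference' approach mentioned above, the sketch below shows one common variant of a preference design: students who express a strong preference receive their chosen course (and are followed up as preference cohorts), while only indifferent students are randomised. This is not code from the article; the course names and function are hypothetical.

```python
import random

def preference_allocation(students, preferences,
                          arms=("new course", "conventional course"), seed=7):
    """Illustrative allocation for a 'patient preference' style trial.

    Students with a strong preference are given their preferred arm;
    students with no strong preference are randomised between the arms.
    """
    rng = random.Random(seed)
    allocation = {}
    for student in students:
        pref = preferences.get(student)      # None means no strong preference
        if pref in arms:
            allocation[student] = {"arm": pref, "allocated_by": "preference"}
        else:
            allocation[student] = {"arm": rng.choice(arms),
                                   "allocated_by": "randomisation"}
    return allocation

if __name__ == "__main__":
    students = ["A", "B", "C", "D"]
    preferences = {"A": "new course", "C": "conventional course"}  # hypothetical
    for student, result in preference_allocation(students, preferences).items():
        print(student, result)
```

Analysing such a trial requires care, because the preference cohorts are not randomised; the design's purpose is simply to make participation acceptable to students who would otherwise refuse.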

There have been some notable successes in randomisation of students, particularly at the level of the whole curriculum.16 These have demonstrated that the difficulties are not insuperable where there is a strong political will to undertake the research. It may be that a cultural unwillingness to prioritise educational research is at least a contributory factor to the scarcity of RCTs in medical education. Health service researchers face comparable difficulties in designing and implementing RCTs, but have been determined to overcome them.17

Problems with funding

One vivid manifestation of the low priority placed on educational research amongst the research community is the paucity of funds available for educational research, and the difficulty in obtaining funding for evaluation of educational initiatives. Thus, in the UK, despite the pressure from the General Medical Council, which has resulted in almost every medical school introducing a new curriculum, there are no dedicated funds for evaluating the impact of these new curricula.

Problems with defining outcomes

An issue common to much educational research is the difficulty of defining desirable outcomes of an educational intervention. Moreover, even when desirable outcomes have been defined, there is a real lack of assessment tools for most outcomes other than knowledge and clinical skills.4 Thus it is almost impossible to determine whether different educational methods have any impact on traits currently considered desirable, such as team working, respect for patients and colleagues, or cross-cultural competency.

Cultural problems

In my view this is the single most important issue. Able and eminent clinicians who actively participate in creating the evidence base for their clinical discipline do not, as yet, apply the same intellectual rigour to their teaching or to curriculum change.18 Educational research is still a relatively low status field, at least in the UK, so it can be difficult to attract high calibre applicants. As a result, research capacity is low in many places, and many of the academic staff involved in medical education are overloaded with service development and implementation.

This situation is not inevitable. Thirty years ago academic general practice faced many of the problems confronting educational researchers today. The intervening decades have demonstrated that change is possible: the recent review of funding of research and development identified the development of research and research capacity in primary care as a priority.19 New chairs in primary care research are available, and departments of primary care have become thriving contributors to the academic output of major universities.20

Conclusions

The Campbell Collaboration and BEME are doing their best to ensure that educational research reaches policy makers and teachers. There is an onus on educational researchers to ensure that our research is of sufficient quality and relevance to be of use to these constituencies. This requires us to develop a sound theoretical basis for our research, and to adopt new methodologies capable of addressing the questions that need answering.

Acknowledgements

The author is grateful to Cees van der Vleuten and Richard Grol for their encouragement in developing the ideas in this paper.

Elizabeth Murray
London, UK

Correspondence: Elizabeth Murray, Department of Primary Care and Population Sciences, Royal Free and University College Medical School, University College London, Archway Campus, Highgate Hill, London N19 3UA, UK. E-mail: elizabeth.murray@pcps.ucl.ac.uk

References

1 Boruch R, Petrosino A, Chalmers I. The Campbell Collaboration: a proposal for systematic, multi-national, and continuous reviews of evidence. The Campbell Collaboration 1999: pp. 1–21. http://campbell.gse.upenn.edu.

2 Lilley P. Best evidence medical education (BEME): Report of meeting 3–5 December 1999, London, UK. Med Teacher 2000;22:242–5.

3 Donaldson L. Service Increment for Teaching Accountability Report 1998/99. Leeds: Department of Health; 1999: pp. 1–55.


4 Murray E, Gruppen LD, Catton P, Hays RB, Wooliscroft JO. The accountability of clinical education: its definition and assessment. Med Educ 2000;34:871–9.

5 Oakley A. An infrastructure for assessing social and educational interventions: the same or different. The Campbell Collaboration 1999: pp. 1–10. http://campbell.gse.upenn.edu.

6 Norman GR. Reflections on BEME. Med Teach 2000;22(2):144.

7 Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D et al. Framework for design and evaluation of complex interventions to improve health. BMJ 2000;321:694–6.

8 Roland M, Torgerson DJ. What are pragmatic trials? BMJ 1998;316:285.

9 Pope C, Mays N. Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995;311:42–5.

10 Bradley F, Wiles R, Kinmonth AL, Mant D, Gantley M. Development and evaluation of complex interventions in health services research: case study of the Southampton heart integrated care project (SHIP). The SHIP Collaborative Group. BMJ 1999;318:711–5.

11 Young LE, Jillings CR. Qualitative methods add quality to cardiovascular science. Can J Cardiol 2000;16:793–7.

12 Kraiczi H, Hedner J, Peker Y, Grote L. Comparison of atenolol, amlodipine, enalapril, hydrochlorothiazide, and losartan for antihypertensive treatment in patients with obstructive sleep apnoea. Am J Respir Crit Care Med 2000;161:1423–8.

13 Mason J, Wood J, Freemantle N. Designing evaluations of interventions to change professional practice. J Health Serv Res Policy 1999;4:106–11.

14 Chard JA, Lilford RJ. The use of equipoise in clinical trials. Soc Sci Med 1998;47:891–8.

15 Torgerson DJ, Sibbald B. Understanding controlled trials. What is a patient preference trial? BMJ 1998;316:360.

16 Moore GT, Block SD, Style CB, Mitchell R. The influence of the New Pathway curriculum on Harvard medical students. Acad Med 1994;69:983–9.

17 Stephenson J, Imrie J. Why do we need randomised controlled trials to assess behavioural interventions? BMJ 1998;316:611–3.

18 Nelson MS, Clayton BL, Moreno R. How medical school faculty regard educational research and make pedagogical decisions. Acad Med 1990;65:122–6.

19 Rook R. Strategic Framework for the Use of the NHS R&D Levy. Wetherby: Department of Health; 1997: pp. 1–7.

20 Kendrick T, Campbell J, Armstrong D. Prospects for general practice research are bright despite research assessment exercise. BMJ 1999;318:194.
