School Performance, School Effectiveness and the 1997 White Paper

Alex Gibson & Sheena Asthana (1998) School Performance, School Effectiveness and the 1997 White Paper, Oxford Review of Education, 24:2, 195-210, DOI: 10.1080/0305498980240204
http://dx.doi.org/10.1080/0305498980240204
Published online: 07 Jul 2006 (Routledge).


Oxford Review of Education, Vol. 24, No. 2, 1998 195

School Performance, School Effectiveness and the 1997 White Paper

ALEX GIBSON & SHEENA ASTHANA

ABSTRACT The rhetoric of the 1997 White Paper, Excellence in Education, appears to mark a 'rediscovery' of the role played by contextual factors in influencing the quality and performance of schools. Despite this, the policy initiatives outlined in the White Paper remain wedded to the idea that school improvement can take place largely independently of contextual constraints. This paper argues that this survival of a perspective more in keeping with New Right thinking reflects the continuing dominance of the School Effectiveness Research (SER) paradigm. Providing a methodological critique of SER, we argue that the focus on the use of prior attainment data in the measurement of the 'value added' by schools has deflected attention from studies of how patterns of school performance reflect underlying patterns of social advantage/disadvantage. Reviewing a growing body of work which demonstrates how schooling serves to reinforce existing patterns of social advantage, we conclude that genuine and widespread school improvement is unlikely unless contextual constraints on performance are better understood and directly addressed by policy makers.

INTRODUCTION

The 1997 election campaign, the landslide Labour victory, and the almost immediate publication of the White Paper, Excellence in Education (DFEE, 1997), have ignited an awareness of and interest in education policy in a manner not seen in the UK since James Callaghan initiated his 'Great Debate' on education in October 1976. Indeed, the series of regional meetings organised as part of the current White Paper consultative exercise is strongly reminiscent of events twenty years ago. There are other parallels to be drawn, not least being hints of a 'rediscovery' of the role that contextual factors play in influencing the quality and performance of schools and their pupils. The White Paper certainly contains a rhetoric which has hardly been heard since the late 1970s:

To overcome economic and social disadvantage and to make equality of opportunity a reality, we must strive to eliminate, and never excuse, underachievement in the most deprived parts of our country. ... We must overcome the spiral of disadvantage, in which alienation from, or failure within, the education system is passed from one generation to the next. (DFEE, 1997, p. 3)

With the proposal to create 'Education Action Zones' at least one defining policy of the 1970s—then known as Educational Priority Areas—is also being reinvented for the late twentieth century.

Yet this is not simply a case of the wheel turning full circle and a new generation

0305-4985/98/020195-16 © 1998 Carfax Publishing Ltd


returning to the ideas of a previous era. What is perhaps most surprising about the contemporary debate, particularly since this is the first Labour government after over two decades of often radical Conservative education reforms, is how the traditional Labour concern with 'equality of opportunity' has been influenced by ideas which flourished in the political climate of the New Right. This reflects the overwhelming dominance of the School Effectiveness Research (SER) paradigm and the success of its advocates in taking their ideas to policy makers in both the UK and USA (Elliott, 1996, pp. 199-200).

If there is a single principle which underpins SER it is the idea that 'Schools Matter'. Ever since Ron Edmonds outlined his 'five-factor theory' (1979a, 1979b) the underlying goal has thus been to isolate those characteristics which distinguish effective schools from the rest (Scheerens, 1992, pp. 47-68). The extent to which such a goal has been achieved, or even whether it is attainable (Elliott, 1996), has concerned some critics of the school effectiveness paradigm and, notwithstanding attempts to forge a closer relationship between the School Effectiveness and School Improvement traditions (Gray et al., 1996), those working within the paradigm have become increasingly aware that practical conclusions are less forthcoming than had once been hoped (Reynolds & Packer, 1992; Hopkins et al., 1994). Yet the underlying rationale remains as it always has been: that the responsibility for school performance, and thus for school improvement, rests with individual schools—their staff, governors, parents and pupils:

Not only did the early 'effective schools' research conclude that schools do make a difference, but there was also agreement on two further issues. First, the differences in outcome were systematically related to variations in the school's climate, culture or ethos. Second, the school's culture was amenable to alteration by concerted action on the part of the school staff. (Hopkins et al., 1994, p. 44)

The focus has thus been on identifying factors which characterise 'effective' schools, and the practical outcome has been a philosophy of improvement which emphasises the actions which must be taken by individual schools and teachers. Contextual factors, not least those associated with the home background of pupils, have been largely ignored. Indeed, the very idea that the performance of schools is to any meaningful degree explicable in terms of the socio-economic background of pupils has been largely dismissed. As a result, notwithstanding the wealth of school performance data which has recently become available, relatively few studies have explicitly sought to relate contextual factors to patterns of school performance. With a dearth of clear empirical evidence, contemporary policy initiatives remain firmly wedded to a paradigm of improvement which places the responsibility for improvement almost entirely upon schools and teachers. In other words, although these initiatives expressly aim to 'overcome economic and social disadvantage and to make equality of opportunity a reality', they seek to achieve such goals whilst largely ignoring the impact of disadvantage on school performance.

Under the last Conservative administration this improvement was to take place in the face of the competitive pressures of a quasi-market in education. Under the present Labour government the pressure is to be applied through performance targets. But both regimes are predicated on the belief that schools are able to respond largely independently of contextual constraints. With failure seen as a failure of individual schools, and solutions framed in terms of the actions those schools must undertake, the rhetoric of reform is that it is 'standards, not structures' which matter.


This paper argues that this legacy of School Effectiveness Research ignores increasingly compelling evidence which demonstrates a profoundly close relationship between performance and context, and that until this relationship is recognised and the processes which underpin it are better understood and directly addressed, it is unlikely that the spiral of disadvantage alluded to in the White Paper can be broken.

THE EMERGENCE OF THE SCHOOL EFFECTIVENESS RESEARCH PARADIGM

During the 1960s and 1970s the view that schooling was, to all intents and purposes, unable to overcome the effects of social background gradually became the prevailing orthodoxy. The work of Coleman et al. (1966) arguing that the social class/prior achievement mix of schools was the only school variable which seemed to have any impact on academic outcomes was particularly influential. The policy focus was thus to be engineering the social mix of schools. In the US this led to 'bussing', the physical transfer of students between schools in an attempt to create more socially (and racially) balanced school populations. Yet this soon proved politically untenable and the ultimate rejection of what Coleman et al. had viewed as the only intervention available to educational planners seriously compromised the post-war liberal belief that education could play a central role in equalising students' academic achievements and life chances. Such pessimism was further compounded by research suggesting that comprehensive schools in the UK, which also sought to engineer more balanced social class/prior achievement intakes, were also failing to improve life chances (Thrupp, 1995, p. 187).

By the mid-1970s it was thus becoming widely accepted that schools could not compensate for society and that educational processes and policies were powerless to address entrenched social inequalities. As Thrupp observes, underlying explanations differed:

For the right the reason lay in the genetically determined nature of intelligence (Jensen, 1969); for the Marxist left, schools could not promote equality of opportunity because they were effectively agents of the ruling class (Bowles & Gintis, 1976) while for liberals like Jencks (Jencks et al., 1972) the route to greater equality of opportunity lay not in education but in other social and economic policies. (1995, p. 187)

Whatever the logic, so long as the problem was framed in terms of the pursuit of equality of opportunity, school performance research appeared to be at an impasse.

This was to be short-lived, for the late 1970s and 1980s saw a marked shift to the right in political debate in both the USA and UK. With the emphasis no longer on issues of equality, questions were now to be phrased in terms of the efficiency of schools, their value for money, and their effectiveness in achieving measurable goals. Indeed, early proponents of the 'effective schools movement' viewed a concern with socio-economic factors as little more than an unwelcome and unhelpful distraction. Drawing strength from 'exemplar' schools which flourished in the face of remarkably disadvantaged circumstances, they argued, first, that the link between social background/prior achievement and performance was in no sense deterministic (with which few would have disagreed), but second, that school improvement could only move forward once the 'myth' that variations in performance were underpinned by variations in pupil backgrounds had been dispelled. Ron Edmonds, for instance, argued that:


a repudiation of the social science notion that family background is the principal cause of pupil acquisition of basic school skills is probably prerequisite to successful reform of [US] public schools for the children of the poor. (1979b, p. 23)

The desire to avoid the negative effects of labelling is a recurrent theme in the school effectiveness literature. For many, such as Ron Edmonds (who was, after all, a leading black educator striving to redress the educational disadvantages experienced by minority students in the USA), this was a moral issue. Thus Fitz-Gibbon (1996, p. 16) argues that 'to try to adjust examination results and say that the progress expected of pupils from difficult home circumstances is less than the progress expected from pupils in comfortable home circumstances presents moral and interpretative problems'. For others, however, the link drawn between academic performance and social background merely served to provide 'failing' schools with both a ready-made excuse and a disincentive for change.

One of the most powerful underlying reasons for low performance in our schools has been low expectations which have allowed poor quality teaching to continue unchallenged. Too many teachers, parents and pupils have come to accept a ceiling on achievement which is far below what is possible. (DFEE, 1997, p. 25)

This perspective is well illustrated by Murphy, who maintains that prior to the effective schools movement explanations for student failures 'were focused on deficiencies in the students themselves and in the home/community environments in which they were nurtured' (1992, p. 167). The key contribution of the effective schools movement was thus 'its attack on the practice of blaming the victim for the shortcomings of the school itself' and its 'insistence upon requiring the school community to take a fair share of the responsibility for what happens to the youth in its care' (ibid.). Whether drawing a link between school performance and home environments could ever be described as 'victim blaming' is a moot point—bussing in the USA and the comprehensive movement in the UK were predicated on a belief in the 'educability of learners' just as much as the effective schools movement has been. The difference is that solutions were conceived in terms of restructuring education systems rather than addressing the failings of individual schools. The idea that responsibility for patterns of poor performance should be placed at the door of individual schools has, however, clearly proved attractive to politicians and policy makers of the New Right. As expressed in a recent Schools Curriculum and Assessment Authority (SCAA) report to the British Secretary of State for Education, 'it is increasingly being recognised that the elevation of social class into a central position in the debate is mistaken' (SCAA, 1994, p. 47). A practical consequence of this perspective, as discussed by Goldstein and Cuttance (1988), was the UK Government's decision not to adjust school performance indicators by any reference to external factors, including social class.

Importantly, this rejection of home background as a significant explanatory factor has been supported by the particular methodology used to assess the effectiveness of schools; a development which, along with the political shift to the right, served to breach the impasse faced by school performance researchers by the end of the 1970s. It was thus soon accepted that in order to understand the effectiveness of a school it is necessary to establish the 'value' the school adds to the pupils passing through it, and this requires measures of prior achievement against which to compare educational outcomes. There are three aspects of this approach which need to be highlighted. First,


it served to redefine the criterion of successful schooling. No longer was this to be 'defined in absolute terms but as the value added to what students brought to the educational process' (Murphy, 1992, p. 167). Second, as it is clearly inappropriate to employ the same test at different stages of a student's academic career, the entry and exit measures had to be relative to other schools and judgements on value-added were explicitly comparative. Finally, the analytical emphasis was to be on pupil-level data. The impact on how the relationship between socio-economic status and academic performance was conceived and measured could not have been more profound.

THE CONCEPT AND CONSEQUENCES OF A 'VALUE-ADDED' METHODOLOGY

To focus on the value added by schools to 'what students bring to the educational process', particularly when that process is compartmentalised into a number of discrete testing intervals (in the UK between the three National Curriculum key stages, Key Stage 3 and GCSE, and GCSE and A level), is to 'design out' a concern with underlying socio-economic explanations of variations in performance. There is no longer a need to consider the home background of students because the baseline for school assessment now lies with measures of prior attainment; school effectiveness is a comparative judgement made against schools with similar intakes. As various reports by, or on behalf of, SCAA illustrate all too clearly, socio-economic factors are thus all but ignored (SCAA, 1994; Fitz-Gibbon, 1995; Jesson, 1996).

Even where, as is increasingly the case (Cuttance, 1992; Sammons et al., 1996, p. 23), researchers have sought to investigate the impact of the socio-economic background of students on their subsequent progress, the significance of such factors is invariably found to be slight and always far outweighed by that of prior attainment (Thomas & Mortimore, 1996, p. 5). The difficulty with this, as acknowledged by Fitz-Gibbon, is that 'the effects of home background are already present in the measures of prior achievement' (1996, p. 149). It is how this observation is dealt with that is instructive. For Fitz-Gibbon, 'to be as fair as possible to everyone, we need value-added measures based on cognitive measures and student-level data, not on home-background measures and not on aggregated data'. Her aversion to aggregated data is explored below, but the immediate point is that in declining to focus upon, or even to genuinely engage with, the socio-economic dimensions of educational performance she is making a value judgement and not drawing a research conclusion. Thomas and Mortimore (1996) also recognise that there is 'a systematic relationship between socio-economic measures and both pupil prior attainment measures and pupil outcomes'. However, they too choose to underplay the significance of the relationship through a methodological fixation on prior attainment and value added.

In the absence of prior attainment data, socio-economic factors explain a substantial proportion of the variation in student outcomes that is due to prior attainment as well as variance that is independently related to, say, measures of low family income [emphasis added]. (1996, pp. 6-7)

Whilst this perspective acknowledges that socio-economic factors have a continuing impact on pupil performance throughout their school careers, it also reveals the way in which prior attainment is accorded independent explanatory status. School effectiveness research demonstrates that prior attainment 'accounts for' substantially more of the variation in pupil outcomes than does social status and home background but this


is not, as is often portrayed, an objective and 'scientific' research finding. It is rather an a priori construct; an inevitable consequence of adopting a value-added methodology.
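The mechanics of this a priori construct can be illustrated with a small simulation. The data and coefficients below are entirely invented for illustration: if socio-economic background drives both prior attainment and later outcomes, a model that conditions on prior attainment will necessarily report that background adds almost nothing, even though background is, by construction, the sole exogenous cause.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic pupils: background (SES) drives prior attainment,
# and both feed the final outcome.
ses = rng.normal(size=n)
prior = 0.8 * ses + rng.normal(scale=0.6, size=n)    # SES already 'inside' prior attainment
outcome = 0.9 * prior + 0.2 * ses + rng.normal(scale=0.5, size=n)

def r2(X, y):
    """Variance in y explained by an OLS fit on the columns of X."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"SES alone:   R2 = {r2([ses], outcome):.2f}")
print(f"prior alone: R2 = {r2([prior], outcome):.2f}")
print(f"prior + SES: R2 = {r2([prior, ses], outcome):.2f}")
# Prior attainment 'accounts for' far more variance than SES, and adding
# SES to a model containing prior attainment barely moves R2 at all.
```

Prior attainment dominates here only because, as Fitz-Gibbon concedes, the effects of home background are already present in the measures of prior achievement.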

In contrast, the emphasis placed by School Effectiveness Research on pupil-level data has led to a clarification of the relationship between socio-economic factors and academic performance. There is no doubt that inappropriate conclusions have all too often been drawn from aggregated data, falling foul of the ecological fallacy outlined nearly five decades ago (Robinson, 1950). This arises when areal-unit data are utilised to draw 'ecological correlations' when the objects of study are in fact individual-level characteristics and relationships. As with the associated Modifiable Areal Unit Problem (Openshaw, 1984), the principal danger is that correlations measured using areal-unit data are highly sensitive to both scale and grouping effects. Thus correlations using aggregated data will usually be greater in absolute magnitude than individual-level correlations and may even differ in sign (a phenomenon explored by Fieldhouse and Tye's (1996) innovative use of the Samples of Anonymised Records from the 1991 Census). Aitkin and Longford (1986) are often credited with bringing this issue to the fore in educational research, but the case has been pursued by a number of authors. Woodhouse and Goldstein are particularly uncompromising in their insistence that 'aggregate-level analyses are uninformative and that useful comparisons [between schools] cannot be obtained without employing multi-level analyses using student-level data' (1988, p. 301), though Fitz-Gibbon is scarcely less so in concluding that there is 'an overwhelmingly strong case for not using aggregated data ... in monitoring school performance' (1996, p. 147).

It is important to recognise, however, that these and other criticisms are of the use of aggregate-level correlations as either a surrogate for individual-level correlations (which is undoubtedly statistically improper) or, more importantly, as a measure of school effectiveness. Yet the legitimacy of the latter rests upon an a priori definition of the meaning and thus measurement of school effectiveness. This definition is reductionist in the sense that it explicitly and intentionally seeks to divorce (conceptually and methodologically) school effectiveness from its broader social, cultural and political context.

This, we argue, is an incomplete perspective on school performance and our case is outlined, albeit unintentionally, by Fitz-Gibbon in her argument against the use of school-level statistics to draw conclusions about the relationship between home background and academic performance (1996, pp. 141-150). In exploring what she considers to be the widely misunderstood consequences of using aggregated data, she uses the example of GCSE mathematics performance in a sample of 695 pupils in five schools and how it relates to what she calls 'cultural capital'; an attempt to quantify the level of support pupils receive at home for education. At an individual level, correlation between performance in mathematics and 'cultural capital' is 0.34, i.e. relatively weak and, she argues, relatively typical. As one would expect, individual-level correlations within each of the five schools are of a similar magnitude: 0.39, 0.37, 0.38, 0.34 and 0.32. Use aggregated data, however, and very different statistics emerge. If the individual pupils are randomly assigned into five groups (representing schools) the aggregate-level (or 'mean on means') correlation is essentially zero. Cluster the same students into five completely segregated 'schools' and the correlation is almost perfect (r = 0.99). Allocate the students into their actual schools and the 'mean on means' correlation is 0.83.

The lesson Fitz-Gibbon draws from this is that aggregated statistics do not tell you what you think they tell you, and that 'the use of data aggregated to the school level has


produced a misleading impression that SES is as strong a predictor of achievement as prior achievement' (p. 146). This is undoubtedly true if one is concerned with individual pupils or, for that matter, with value-added measures based upon the experience of individual pupils. But there is nothing necessarily misleading about the use of school-level data. To be absolutely literal, about 68% of school-level variation in her (small and purely illustrative) sample of pupils taking GCSE mathematics can be accounted for by between-school variations in 'cultural capital' (r2 = 0.68). Nothing more, nothing less. It does not matter whether this relationship is primarily a function of the grouping effect or due to an underlying pupil-level relationship. Both are significant to school performance because, as Fitz-Gibbon herself emphasises, socio-economic segregation between schools, particularly in British cities, is a fact of life.
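Fitz-Gibbon's grouping experiment is easy to reproduce on synthetic data. The cohort below is invented, not her sample of 695 pupils, and is made larger so that the school means are stable: sorting pupils into socially segregated 'schools' drives the 'mean on means' correlation towards 1 even though the pupil-level correlation stays modest, while under random allocation all five school means sit near the grand mean and the aggregate correlation becomes noise-dominated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pupils, n_schools = 5_000, 5

# Invented pupil-level data: 'cultural capital' weakly predicts a GCSE score
# (pupil-level r of roughly 0.33, close to the 0.34 reported above).
capital = rng.normal(size=n_pupils)
score = 0.35 * capital + rng.normal(size=n_pupils)

def mean_on_means_r(groups):
    """Correlation between school-level means ('mean on means')."""
    g_cap = [capital[g].mean() for g in groups]
    g_sco = [score[g].mean() for g in groups]
    return np.corrcoef(g_cap, g_sco)[0, 1]

# Random allocation: school means differ only by sampling noise, so the
# five-point aggregate correlation is unstable and carries no signal.
random_groups = np.array_split(rng.permutation(n_pupils), n_schools)

# Complete segregation: pupils sorted by cultural capital before grouping.
segregated_groups = np.array_split(np.argsort(capital), n_schools)

print(f"pupil-level r:          {np.corrcoef(capital, score)[0, 1]:.2f}")
print(f"random 'schools' r:     {mean_on_means_r(random_groups):.2f}")
print(f"segregated 'schools' r: {mean_on_means_r(segregated_groups):.2f}")
```

Real schools fall between these extremes (her actual-school figure is 0.83), which is precisely the point made above: it is between-school segregation that generates the strong school-level association.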

It is our contention, therefore, that a value-added approach is inadequate precisely because patterns of segregation between schools serve to amplify the impact of socio-economic factors on school-level performance. As very many others have observed before us, and as we explore further below, any school's ranking in examination league tables is intimately bound up with the socio-economic status of the community it serves and, whatever one thinks of the legitimacy and consequences of using examination performance as a proxy measure of school performance, there is no doubting that genuine differences between schools are being highlighted (Richmond, 1996, p. 151). If nothing else, the 'ethos' in a school with 10% of its pupils achieving five or more GCSEs at grade C or above will be very different from that prevailing in a school with 60% or more of its pupils attaining what is, after all, a benchmark of acceptance into further education. Thus, although value-added approaches utilising multi-level methods may have an important role to play in monitoring school and, particularly, classroom effectiveness (which, as a number of studies are beginning to demonstrate, is where variations in effectiveness are most significant; Hill & Rowe, 1996), the overwhelming dominance of this approach has been to the detriment of a broader understanding of school performance. Schools do not operate in isolation and, as demonstrated below, a very sharp social gradient underpins variations in performance.

PUTTING SCHOOL PERFORMANCE IN CONTEXT

The Conservative reforms of the 1980s rested upon the idea that it was only through the discipline of the market that schools would improve. Purchasers (parents) were to choose to which supplier (school) they would send their children, and funding would follow the child. In principle, as more parents send their children to the 'best' schools, so 'excellence' in those schools is rewarded and expansion takes place. Conversely, ineffective schools will lose pupils and, if the policy is taken to its logical conclusion, will eventually close. Any such market requires that parents have access to the information necessary for them to make informed and effective choices and thus, in addition to the circuits of informal knowledge about schools which have always existed, it was deemed necessary both to redefine the School Inspection process (with the transition from HMI to OFSTED) and to devise and publish performance indicators for schools. The outcome, the DFEE's School Performance Tables published each November, quickly established itself as one of the key events of the school year and today represents the biggest single annual public information exercise undertaken by any Government department.

Concern has been widely expressed about two particular aspects of the DFEE's approach. The first is that it may serve to narrow a school's focus onto measurable


outcomes at the expense of broader affective outcomes, or that the emphasis laid on the 'gold standard' measure of the proportion of pupils with five or more GCSE grades A-C may lead to an undue concern with pupils on the C/D boundary (Goldstein, 1997; Murphy, 1997). The second is that the tables use 'raw' performance figures, notwithstanding a long-standing recognition that schools are themselves very different and serve different communities (Woodhouse & Goldstein, 1988; Kendall, 1995; McCallum, 1996; Myers & Goldstein, 1996). The DFEE's Tables, it has been argued, simply do not compare like with like and, as such, cannot measure school performance in any meaningful sense of the term. For instance, the examination results of fully or partially selective schools cannot reasonably be compared with those of schools with a comprehensive intake, particularly when that intake is skewed by the presence of selective schools. Similarly, it has long been recognised that academic performance varies systematically between different categories of pupils; for instance between girls and boys and with respect to the home background of pupils (Kelly, 1995, 1996). In an attempt to better understand the extent to which these factors influence school performance as measured by the DFEE, some Local Education Authorities (LEAs) and academics have recently sought to contextualise school examination results according to a variety of criteria and using a variety of methods.

The results of this research are clear: a substantial proportion of between-school variation in examination performance can be accounted for by relatively few contextual factors. For instance, analysis by the National Consortium for Examination Results (NCER) of 1100 schools in 48 LEAs found high levels of (non-linear) correlation between the proportion of pupils entitled to Free School Meals (%FSM) and school-level examination performance (Kelly, 1995, 1996). The relationship between %FSM and GCSE performance varies between subjects, between boys and girls, and between different 'families' of LEAs, but taking all subjects in all LEAs no less than 53% of between-school variation in GCSE performance could be accounted for by between-school variations in %FSM eligibility (r = 0.73; r² = 0.53). Similar results were obtained by Conduit et al. (1996) in their analysis of the relationship between material deprivation and the percentage of pupils gaining 5 or more grades A-C in 38 schools in Dudley and Walsall. Using Townsend's Index of Material Deprivation calculated for wards lying within 2 km of schools, they concluded that 'across the 38 schools in the two boroughs, more than half the variance in league table positions was accounted for by schools' location' (1996, p. 203). A study by Sammons et al. (1994), meanwhile, similarly extracted census data for neighbouring wards to derive three of the seven 'background characteristics' used to describe a sample of 388 non-selective schools from throughout the country. Using standard Ordinary Least Squares (OLS) regression techniques they concluded that 'using a relatively small number of intake measures (and notably excluding prior attainment), it is possible to account for a substantial proportion (62%) of the school-level variation in GCSE performance' (1994, p. iii).
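The school-level r² statistics reported above can be reproduced in outline. The following sketch uses synthetic, purely illustrative data (not the NCER, Conduit or Sammons datasets) to show how a correlation between %FSM eligibility and GCSE performance translates into a proportion of between-school variance explained:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic school-level data: %FSM eligibility and the percentage of
# pupils achieving five or more GCSE grades A-C. A negative association
# is built in to mimic the published pattern; the coefficients are invented.
n_schools = 200
pct_fsm = rng.uniform(2, 60, n_schools)
pct_5ac = np.clip(65 - 0.8 * pct_fsm + rng.normal(0, 8, n_schools), 0, 100)

# Pearson correlation between the two school-level measures.
r = np.corrcoef(pct_fsm, pct_5ac)[0, 1]

# r-squared: the share of between-school variance in GCSE performance
# accounted for by between-school variation in %FSM (cf. r = 0.73, r² = 0.53).
r_squared = r ** 2

print(f"r = {r:.2f}, r^2 = {r_squared:.2f}")
```

Because the data are school-level aggregates, such an r² describes between-school variance only; it says nothing about variation between pupils within a school.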

In Durham, 55% of between-school variation in GCSE performance could be explained by variations in %FSM (Durham County Council Education Department, 1996), whilst in Sheffield no less than 84% of between-school variation in the percentage of pupils attaining 5 or more GCSE grades A-C, and 87% of between-school variation in average GCSE point scores per pupil, could be accounted for by an OLS model incorporating three 'structural' factors (percentage of girls, authorised absence, and unauthorised absence) and two multi-variate measures of socio-economic status; an 'index of advantage' and an 'index of disadvantage' (Budgell, 1995). To this wealth of evidence can be added the results from our own analysis of 259 schools in 12 LEAs (of which ten were selective). Using multiple regression techniques and a postcode-census method of estimating the socio-economic characteristics of school populations [1], we have been able to account for over 75% (r = 0.873; adjusted r² = 0.752) of between-school variance in the proportion of pupils achieving five or more GCSE grades A-C (Gibson & Asthana, 1998). For the 23 schools in the study sample located in London, all of which were non-selective, no less than 87% of between-school variance in GCSE performance can be accounted for by the model (r = 0.938; adjusted r² = 0.876).

The significance of contextual effects on school performance cannot be doubted. The DFEE's School Performance Tables, as currently constituted, are fundamentally flawed simply because they provide no way of distinguishing between schools which do well (or poorly) in spite of their circumstances from those which do well (or poorly) because of their circumstances. What possible significance can a 'league placing' have, for instance, in either London or Sheffield when in both cities well over 80% of between-school variations in the proportion of pupils achieving 5 or more GCSE grades A-C can be accounted for by contextual differences between schools?

Yet this growing body of research does more than merely undermine the legitimacy of the DFEE's School Performance Tables as a measure of the 'quality' of schools. First, their use by parents in a competitive environment, precisely because the tables provide no meaningful insight into differences in the quality of education, would not necessarily be to the advantage of those schools which do provide high quality education. Rather, the advantage would lie with those schools which perform well at examinations, and this is inextricably bound up with the circumstances within which they operate. Second, the research demonstrates that, apart from selection by academic ability, the single most important contextual factor is the nature of the communities served by schools. In our own study we found that four of the more commonly used indices of social and/or material deprivation (Lee et al., 1995) accounted for between 26% and 41% of between-school variation in GCSE performance amongst non-selective schools (the r² figures being as follows: DOE Index, 0.255; Jarman's Underprivileged Area Index, 0.337; Townsend's Material Deprivation Index, 0.384; Carstairs Index, 0.411). Yet these are indices which were designed to assess the influence of socio-economic disadvantage in other contexts, in particular with regard to health outcomes. A key additional census variable with respect to educational outcomes is the proportion of the population aged 18 plus without higher education qualifications. Incorporate this variable within the Carstairs Index, for instance, and the adjusted r² figure rises to 0.527.
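The adjusted r² figures quoted here discount the improvement in fit that comes simply from adding another regressor. A minimal sketch of the standard adjustment follows; the sample size and variable count passed in are illustrative assumptions, not the exact specification used in the studies cited:

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted R-squared for an OLS model fitted to n observations
    with k explanatory variables; penalises each added regressor."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Illustration only: with roughly 249 schools, an R² of 0.535 from a
# hypothetical four-variable model adjusts only slightly downwards.
print(round(adjusted_r2(0.535, n=249, k=4), 3))
```

With large samples relative to the number of regressors, as here, the adjustment is small; it matters far more when extra census variables are added to models fitted over a few dozen schools.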

The census variables used to derive indices of disadvantage and deprivation are, of course, all highly intercorrelated, and no overwhelming case can be made on purely statistical grounds in favour of any particular combination of variables. Nevertheless, it makes sense to utilise census characteristics directly relevant to school-age children whenever possible and we have thus devised our own 'Index of Educational Disadvantage', using stepwise multiple regression as a means of selecting variables. The resulting index is calculated as the unweighted sum of the following four variables, each standardised using Z-scores:

1) the proportion of the population 18 or over without HE qualifications;
2) the proportion of dependent children in households without a car;
3) the proportion of the population unemployed or on a Government scheme;
4) the proportion of dependent children not in owner-occupied accommodation.
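An index of this form can be sketched as follows. The ward-level proportions below are invented for illustration, and the parameter names are our own shorthand rather than actual census field codes:

```python
import numpy as np

def z_scores(x: np.ndarray) -> np.ndarray:
    """Standardise a variable to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()

def disadvantage_index(no_he_quals, no_car, unemployed, not_owner_occupied):
    """Unweighted sum of four Z-scored variables, in the manner of the
    'Index of Educational Disadvantage' described in the text."""
    parts = [no_he_quals, no_car, unemployed, not_owner_occupied]
    return sum(z_scores(np.asarray(p, dtype=float)) for p in parts)

# Synthetic proportions for five school catchments (illustrative only).
idx = disadvantage_index(
    no_he_quals=[0.55, 0.70, 0.62, 0.80, 0.48],
    no_car=[0.20, 0.45, 0.30, 0.55, 0.15],
    unemployed=[0.06, 0.14, 0.09, 0.18, 0.05],
    not_owner_occupied=[0.25, 0.50, 0.35, 0.60, 0.20],
)
print(idx)  # higher values indicate greater educational disadvantage
```

Standardising each variable before summing prevents any one proportion from dominating simply because it is measured on a wider range; the unweighted sum reflects the authors' decision not to privilege any single census characteristic.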

Accounting for 53.5% of between-school variation in the proportion of pupils achieving five or more GCSE grades A-C (r = 0.732; r² = 0.535), this index is marginally more powerful at explaining differences in GCSE performance than the amended Carstairs Index. Yet whichever index is used, it is clear, as Figure 1 demonstrates, that a sharp social gradient underpins school-level performance at GCSE. Indeed, given that the group of six underperforming schools in the bottom left-hand corner of the graph are all located close to grammar schools, a model which incorporated the negative impact on performance of neighbouring selective schools would have resulted in an even more sharply defined relationship.

[Figure 1: scatterplot of the percentage of pupils achieving five or more GCSE grades A-C (0-70%) against the Index of Social Disadvantage (-10 to 15), one point per school.]

FIG. 1. Performance at GCSE and the socio-economic status of school populations (non-selective schools only; n = 249).

Whilst there is nothing deterministic about this link between the socio-economic characteristics of school populations and school-level examination performance, it is a simple fact that the more socially disadvantaged the community served by a school, the more likely it is that the school will appear to underachieve. This is hardly an original observation, yet it is one which has very seldom been made over the last decade or so.

A key reason for this has been the dominance of the School Effectiveness Research paradigm which, as discussed above, explicitly avoids engaging with the socio-economic dimensions of variations in academic performance, but it also reflects the influence of the New Right over educational thinking and policy during the 1980s and early 1990s. The neo-liberal perception of the market as 'the most effective means currently known of organising social life in order to maximise the private interest and public good' (Ribbins & Sherratt, 1997, p. 13) meant that the key policy task was to create the conditions under which the market could operate as effectively as possible. Through 'choice and diversity' (the title of the 1992 White Paper) the belief was that the market would act to raise standards for all, even if differences in the quality of schools were in the short term inevitable. As Lawton (1994, p. 64) observed, 'if parental choice is meaningful, parents must be allowed to choose between better and worse schools; if they are merely being allowed to choose between different kinds of 'good' (music and art, for example) then why publish league tables of achievement designed to demonstrate differences in quality?' To neo-liberal advocates of the New (or 'Market') Right, therefore, it becomes largely irrelevant whether a socio-economic gradient is found to underpin differences between schools; any interventionist response would only serve to disrupt the proper operation of a market which, in time, will raise standards for all. Indeed, it might be argued that, as 'experts are unnecessary when the market takes over' (Lawton, 1994, p. 77), there is little reason even to study such phenomena.

If New Right ideology were the only reason for this marked lack of practical concern with the socio-economic dimensions of school performance, then the recent change of government should have led to a dramatic shift in policy. There has certainly been a change in rhetoric, not least the explicit recognition in the recent White Paper (DFEE, 1997, passim) of an intimate relationship between school performance and the home background of pupils, but there is little evidence that this is being translated into policies explicitly designed to address the social gradient which, as illustrated by Figure 1, so clearly underpins patterns of school performance.

THE 1997 WHITE PAPER AND SCHOOL IMPROVEMENT

An important and instructive exception is the concept of Education Action Zones. These are to be set up in areas 'with a mix of underperforming schools and the highest levels of disadvantage' (DFEE, 1997, p. 39). There is an obvious parallel to be drawn with the Educational Priority Areas (EPAs) of the 1970s (Halsey, 1972), but the concept is extremely limited, both in scope and ambition. A pilot programme of up to 25 such Action Zones will be set up over the next two to three years, each Zone typically covering two or three secondary schools and their feeder schools. An 'action forum' set up in each zone will be required to draw up an 'action programme', and LEAs will be expected to support, using existing powers, any 'reasonable programme' put forward by the forum. The Standards and Effectiveness Unit of the DFEE will monitor the operation of the Action Zone, and the Secretary of State will appoint a representative to the 'action team' to provide advice and support.

The emphasis is clearly on 'action', but it seems that few resources have been specifically earmarked to support what is in any case an extremely narrowly focused programme.

Zones will have first call on funds from all relevant central programmes—for example, the literacy and numeracy initiatives, the homework centres, the specialist school initiative—provided that satisfactory proposals are put forward. We will also consider whether an Action Zone can be given additional flexibility in matters of staffing or the organisation of schools. The advantages offered would be conditional on ambitious and achievable targets being set out in the action programme. (DFEE, 1997, pp. 39-40)

There are three points of particular note. The first is that this initiative, even if a successful pilot leads to its extension in future years, is intended to address the difficulties faced by schools only in the most difficult circumstances. Yet, as Figure 1 illustrates, the relationship between socio-economic circumstance and school performance is progressive and underpins educational outcomes across the entire spectrum of schools. Second, the emphasis of the programme is not on 'top-down' intervention but on supporting 'bottom-up' school-based initiatives. Finally, although the White Paper intimates that funding formulae are to be reviewed 'so that schools' budgets fairly reflect their circumstances—including ... the pressures of providing high quality education in disadvantaged areas' (DFEE, 1997, p. 70), the programme stands alone as a direct response to the acknowledged 'under-achievement in the most deprived parts of our country' (DFEE, 1997, p. 3).

In general, whilst the 1997 White Paper makes it quite clear that the government aims 'to overcome economic and social disadvantage and to make equality of opportunity a reality' (DFEE, 1997, p. 3), the belief is that this will take place through the improvement of individual schools rather than through any radical overhaul of the education system within which schools operate. Thus Local Management of Schools (LMS) is to be extended (DFEE, 1997, pp. 69-70) and key aspects of the quasi-market approach to education show little sign of being dismantled, even if some important aspects of the previous administration's ambitions, such as more widespread selection by ability, will not now be implemented. Of particular concern is the fact that although the performance tables are to be modified (DFEE, 1997, pp. 25-6), they will be retained and are clearly still intended to act as a guide for parents in choosing between schools (DFEE, 1997, pp. 24-6). The key phrase, oft repeated, is that the government wishes to address 'standards, not structures'.

The fact is that, notwithstanding Education Action Zones and the rhetoric concerning the relationship between school circumstances and school performance, the philosophy which underpins the 1997 White Paper accords contextual factors very little practical significance. It appears, in other words, that the School Effectiveness Research paradigm lies at the very heart of the new Labour government's thinking on education. Again and again it is made clear that underperformance is seen as the outcome of poor teachers using inappropriate techniques in poorly organised and ill-disciplined schools:

We know what it takes to create a good school: a strong, skilled head who understands the importance of clear leadership, committed staff and parents, high expectations of every child and above all good teaching. (DFEE, 1997, p. 12)

Where such is lacking it needs to be addressed by individual schools, albeit with the 'pressure and support' of government:

All the evidence indicates that standards rise fastest where schools themselves take responsibility for their own improvement. (DFEE, 1997, p. 24)

The main responsibility for improving schools lies with the schools themselves. (DFEE, 1997, p. 12)

The policy thrust of the White Paper is thus two-fold. The first emphasis is on establishing a framework which encourages schools to improve themselves, largely through externally moderated target-setting:

The main responsibility for raising standards lies with schools themselves. But they will be more effective in doing so if they work in active partnership with LEAs, OFSTED and the DFEE. The LEA's role is to help schools set and meet targets. OFSTED's role is to inspect performance by individual schools and LEAs, and to provide an external assessment of the state of the school system as a whole. The DFEE's role is to set the policy framework, promote best practice, and to provide pressure and support in relation to LEAs as LEAs do for their schools. (DFEE, 1997, p. 27)


If particular schools are found to be failing they 'will have to improve, make a fresh start, or close' (DFEE, 1997, p. 12). The second policy thrust is on improving the quality of new and existing teachers and heads and, as with much of the White Paper, there is both support:

There is no instant or single solution, but the standard of teaching in schools is of critical importance. All of our key proposals will be linked to effective training and support of new and existing teachers. (DFEE, 1997, p. 13)

and pressure:

Equally, because teachers play such a key role, they must be held accountable for their success in sustaining and raising the achievements of their pupils. We will be prepared to act where the performance of teachers or heads falls below acceptable standards. (DFEE, 1997, p. 46)

It is, of course, self-evident that school improvement is a process which takes place in individual schools, and that the quality and performance of individual teachers is central to the experience of children. But is a policy which focuses almost entirely upon the actions of individual schools sufficient when, as Figure 1 illustrates so graphically, perhaps the most striking characteristic of contemporary education is how schooling serves to reflect, perhaps even reinforce, existing patterns of social advantage?

CONCLUSION

School Effectiveness Research, with its emphasis on within-school and within-class processes and a methodology which, as discussed above, explicitly avoids engaging with the socio-economic dimensions of school-level performance, clearly remains the dominant interpretative framework. Its role in providing a sound basis for monitoring the impact of different teaching methods and school structures cannot be doubted, but the utilisation of a value-added methodology has in particular served to obscure the extent to which profound differences in the educational outcomes achieved by schools are underpinned by underlying variations in the cultural capital upon which individuals and communities draw (Gewirtz et al., 1995).

The practical consequence has been that the problem of underperformance has been largely, though not entirely, conceived of as a failure of schools and of teachers:

[National measures of pupil achievement] show that children, whatever their background, can achieve a great deal if they are well taught and well motivated. But they also show that, in practice, schools with similar intakes of pupils achieve widely differing results. The differences are a measure of a school's effectiveness in teaching and motivating its pupils. (DFEE, 1997, pp. 24-5)

What School Effectiveness Research has failed to provide, and has manifestly failed to attempt, is an understanding of the processes which have led to the remarkably strong and surprisingly consistent relationship between socio-economic context and school performance. That some 'exemplar' schools succeed in spite of their circumstances, or that it has been possible to turn around particular schools in the manner lauded by the White Paper (DFEE, 1997, pp. 30, 69), is irrelevant; the underlying pattern illustrated by Figure 1 remains indisputable. It is not enough to dismiss this phenomenon as a statistical artefact—an illusion brought about through the use of school-level as opposed to pupil-level data—nor does it seem at all probable that it is mere coincidence that 'failing' schools overwhelmingly serve disadvantaged communities. It has to be accepted that systematic explanations for the pattern of performance illustrated by Figure 1 exist and, although one may be as pessimistic as Jencks et al. (1972) about the power of educational policies to address what are, after all, long-standing inequalities (Heath & Clifford, 1990), it is certain that solutions are unlikely to be found unless a broader perspective is adopted than that advocated by proponents of School Effectiveness Research and adopted by the new Labour government.

NOTES

[1] The socio-economic profiling of schools was undertaken using the 1991 Census, Crown Copyright. ESRC purchase.

REFERENCES

AITKIN, M. & LONGFORD, N. (1986) Statistical modelling issues in School Effectiveness Studies, Journal of the Royal Statistical Society A, 149, 1, pp. 1-43.

BOWLES, S. & GINTIS, H. (1976) Schooling in Capitalist America (London, Routledge).

BUDGELL, P. (1995) Value-added (Sheffield, Sheffield City Council).

COLEMAN, J., CAMPBELL, E., HOBSON, C., MCPARTLAND, J., MOOD, A., WEINFELD, F. & YORK, R. (1966) Equality of Educational Opportunity (Washington, US Government Printing Office).

CONDUIT, E., BROOKES, R., BRAMLEY, G. & FLETCHER, C.L. (1996) The value of school locations, British Educational Research Journal, 22, 2, pp. 199-206.

CUTTANCE, P. (1992) Evaluating the effectiveness of schools, in: D. REYNOLDS & P. CUTTANCE (Eds) School Effectiveness: research, policy and practice (London, Cassell).

DFEE (DEPARTMENT FOR EDUCATION AND EMPLOYMENT) (1997) White Paper: Excellence in Schools (London, HMSO).

DURHAM COUNTY COUNCIL EDUCATION DEPARTMENT (1996) 1995 GCSE Performance and Measures of Disadvantage using 1991 Census Data (Durham, Durham County Council Internal Report).

EDMONDS, R. (1979a) Programs of school improvement: an overview, Educational Leadership, 37, pp. 4-11.

EDMONDS, R. (1979b) Effective schools for the urban poor, Educational Leadership, 37, pp. 20-24.

ELLIOTT, J. (1996) School Effectiveness Research and its critics: alternative visions of schooling, Cambridge Journal of Education, 26, 2, pp. 199-224.

FIELDHOUSE, E.A. & TYE, R. (1996) Deprived people or deprived places? Exploring the ecological fallacy in studies of deprivation with the Samples of Anonymised Records, Environment and Planning A, 28, pp. 237-259.

FITZ-GIBBON, C.T. (1995) The Value Added National Project: issues to be considered in the design of a national value added system (London, SCAA).

FITZ-GIBBON, C.T. (1996) Monitoring Education: indicators, quality and effectiveness (London, Cassell).

GEWIRTZ, S., BALL, S.J. & BOWE, R. (1995) Markets, Choice and Equity in Education (Buckingham, Open University Press).

GIBSON, A. & ASTHANA, S. (1998) Schools, pupils and exam results: contextualising school performance, British Educational Research Journal, 24, 3.

GOLDSTEIN, H. (1997) Value added tables: the less-than-holy grail, Managing Schools Today, 6, 6, pp. 18-19.

GOLDSTEIN, H. & CUTTANCE, P. (1988) A note on national assessment and school comparisons, Journal of Education Policy, 3, 2, pp. 197-202.

GRAY, J., REYNOLDS, D., FITZ-GIBBON, C. & JESSON, D. (1996) Merging Traditions: the future of research on school effectiveness and school improvement (London, Cassell).

HALSEY, A.H. (1972) Educational Priority, 5 vols. (London, HMSO).

HEATH, A.F. & CLIFFORD, P. (1990) Class inequalities in education in the twentieth century, Journal of the Royal Statistical Society A, 153, 1, pp. 1-16.

HILL, P.W. & ROWE, K.J. (1996) Multilevel modelling in school effectiveness research, School Effectiveness and School Improvement, 7, 1, pp. 1-34.

HOPKINS, D., AINSCOW, M. & WEST, M. (1994) School Improvement in an Era of Change (London, Cassell).

JENCKS, C.S. (1972) Inequality: a reassessment of the effect of family and schooling in America (New York, Basic Books).

JENSEN, A.R. (1969) How much can we boost IQ and scholastic achievement?, Harvard Educational Review, 39, pp. 1-123.

JESSON, D. (1996) Value added measures of school GCSE performance: an investigation into the role of Key Stage 3 assessments in schools (London, HMSO).

KELLY, A. (1995) Free School Meal Contextualisation of GCSE Examination Results: report to National Consortium for Examination Results (London, NCER).

KELLY, A. (1996) Comparing like with like, Education, 187, 1, pp. 14-15.

KENDALL, L. (1995) Contextualisation of school examination results, 1992, Educational Research, 37, 2, pp. 123-139.

LAWTON, D. (1994) The Tory Mind on Education, 1979-94 (London, The Falmer Press).

LEE, P., MURIE, A. & GORDON, D. (1995) Area Measures of Deprivation: a study of current methods and best practices in the identification of poor areas in Great Britain (Birmingham, Centre for Urban and Regional Studies).

MCCALLUM, I. (1996) The chosen ones?, Education, 187, 3, pp. 12-13.

MURPHY, J. (1992) Effective schools: legacy and future directions, in: D. REYNOLDS & P. CUTTANCE (Eds) School Effectiveness: research, policy and practice (London, Cassell).

MURPHY, R. (1997) Drawing outrageous conclusions from national assessment results: where will it all end?, British Journal of Curriculum and Assessment, 7, 2, pp. 32-34.

MYERS, K. & GOLDSTEIN, H. (1996) Get it in context?, Education, 187, 7, p. 12.

OPENSHAW, S. (1984) The Modifiable Areal Unit Problem (Norwich, Geo Books).

REYNOLDS, D. (1994) School effectiveness and quality in education, in: P. RIBBINS & E. BURRIDGE (Eds) Improving Education: promoting quality in schools (London, Cassell).

REYNOLDS, D. & PACKER, A. (1992) School effectiveness and school improvement in the 1990s, in: D. REYNOLDS & P. CUTTANCE (Eds) School Effectiveness: research, policy and practice (London, Cassell).

RIBBINS, P. & SHERRATT, B. (1997) Radical Educational Policies and Conservative Secretaries of State (London, Cassell).

RICHMOND, J. (1996) Quantitative measures of secondary school performance using school-level data, Educational Management and Administration, 24, 2, pp. 151-162.

ROBINSON, W.S. (1950) Ecological correlations and the behaviour of individuals, American Sociological Review, 15, pp. 351-357.

SAMMONS, P., THOMAS, S., MORTIMORE, P., OWEN, C. & PENNELL, H. (1994) Assessing School Effectiveness: developing measures to put school performance in context (London, Institute of Education for the Office for Standards in Education (Ofsted)).

SAMMONS, P., MORTIMORE, P. & THOMAS, S. (1996) Do schools perform consistently across outcomes and areas?, in: J. GRAY, D. REYNOLDS, C. FITZ-GIBBON & D. JESSON (Eds) Merging Traditions: the future of research on school effectiveness and school improvement (London, Cassell).

SCAA (SCHOOL CURRICULUM AND ASSESSMENT AUTHORITY) (1994) Value-added Performance Indicators for Schools: a report by the School Curriculum and Assessment Authority to the Secretary of State for Education (London, SCAA).

SCHEERENS, J. (1992) Effective Schooling: research, theory and practice (London, Cassell).

THOMAS, S. & MORTIMORE, P. (1996) Comparison of value-added models for secondary-school effectiveness, Research Papers in Education, 11, 1, pp. 5-33.

THRUPP, M. (1995) The school mix effect: the history of an enduring problem in educational research, policy and practice, British Journal of Sociology of Education, 16, 2, pp. 183-203.

WOODHOUSE, G. & GOLDSTEIN, H. (1988) Educational performance indicators and LEA league tables, Oxford Review of Education, 14, 3, pp. 301-320.

Correspondence: Dr Alex Gibson, Department of Geography, Exeter University, Amory Building, Rennes Drive, Exeter, Devon EX4 4RJ, UK.
