Humanities and Metrics – a UK Perspective
Posted on 10-Jan-2016
Humanities and Metrics – a UK Perspective
Tony McEnery, Lancaster University
Introduction
Political background
Regulatory background
The spark
The reaction
The counter reaction
The HEFCE/AHRC group
Possibilities
Findings
What does it all mean?
Political Background
The UK context: dual support. Both sides conduct peer-review checking; this talk will focus on the generic support of research (QR) rather than project-specific support.
There was a desire to shift from a quality-blind to a quality-informed generic research support system, and to do this in a way which allocated funds on the basis of a fairly detailed review.
Depth of review
US
UK
Netherlands, Germany
Poland
Australia
Finland
Regulatory Background
Assessments in 1986, 1989, 1992, 1996, 2001 and 2008 were primarily outputs-based and peer-review heavy.
These assessments have teeth: in 2005–6 nearly £0.25B was allocated to A&H departments in the UK through this process.
With each UK RAE there have been changes to how research quality is assessed and rated.
What changes have been wrought over the years? A key change is that the system has downplayed the importance of quantity in research productivity in favour of measures of quality in research production over time.
The funding bodies are clear about what this has led to: an ever-improving system which has become ever more transparent, comprehensive and systematic. Indeed, HEFCE claimed that the 2001 RAE was the most rigorous and thorough exercise to date.
We should be cautious, however: government bodies rarely admit mistakes, and often engage in policy-based evidence making. That said, members of the academic community have become accustomed to the RAE to the extent that they believe it is indeed a good system, as will be seen.
However, prior to the system being threatened with radical change, we should also recall that it was widely vilified.
What caused an academic community that was dissatisfied with a research quality assessment system to change its mind so? Metrics.
The Spark
The RAE in the UK is undoubtedly a major burden on the University system.
It is also, however, a key way in which the funders can influence the behaviours of academics.
What if those behaviours had become so stable that one could measure compliance with them while removing the burden of assessment?
Lord May, a fierce critic of the RAE, argued that there was a positive correlation between the two arms of research support: research council funding correlated with QR funding.
So why bother? If one type of income can predict another, use one to allocate the other.
There were a variety of problems with May's argument. Crucially, it did not encompass the A&H subjects. However, had the Philosopher's Stone been found?
The government thought so.
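May's argument can be sketched numerically: if research council (RC) income and QR income correlate strongly, a funder could simply allocate the QR pot in proportion to RC income. The sketch below is mine, not from the talk, and all departmental figures are hypothetical.

```python
# Illustrative sketch of the "use one income to allocate the other" argument.
# All figures are hypothetical; only the logic of the argument is shown.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def allocate_by_metric(metric, total_pot):
    """Allocate a fixed pot in proportion to a single metric (here RC income)."""
    total = sum(metric.values())
    return {dept: total_pot * v / total for dept, v in metric.items()}

# Hypothetical departmental RC income and historic QR income (in £m).
rc_income = {"A": 4.0, "B": 2.5, "C": 1.0}
qr_income = {"A": 3.6, "B": 2.8, "C": 1.2}

r = pearson_r(list(rc_income.values()), list(qr_income.values()))
new_qr = allocate_by_metric(rc_income, total_pot=sum(qr_income.values()))
```

Note that even with a high aggregate correlation, individual departments can gain or lose noticeably under the metric-driven allocation, which is exactly the volatility the Universities later objected to.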
The Reaction
After consulting a small number of Vice-Chancellors, the then Chancellor of the Exchequer (Gordon Brown) announced a shift to metrics-based allocations in early 2006.
In the announcement, the possible exclusion of the A&H subjects from this system was raised. This in turn raised the issue of how to deal with those subjects.
The HEFCE/AHRC Group
Due largely to the efforts of Sir Brian Follett, a team was set up to answer two main groups of questions.
What are the distinguishing characteristics of excellence in research in the Arts and Humanities, and how might these be recognised and reinforced through any system of quality assessment and funding allocation to deliver the policy aim of government and the public funding bodies? What behaviours, and what types of research activity, should assessment and funding systems consequently seek to incentivise?
What metrics-based approaches to assessing quality and allocating funding in the arts and humanities are possible now, or could become so in the next few years? To develop a robust and effective approach, how broad a field of potential metrics and related indicators of quality should be considered?
The Counter Reaction
As the group set to work, a general counter-reaction began.
Universities decided they did not like metrics.
Academics decided they did not like metrics.
What were their motives?
Universities
Financial: their calculations revealed that what might be an acceptable correlation at a high level of aggregation could translate into a significant shift of resource. It has been calculated that using research grants as a metric to drive QR income would have cut the grant of Nottingham by 12% but boosted that of Warwick by 26%. Such volatility was deeply unattractive to Universities.
Academics
The change of system may have required a change of behaviour, especially in the A&H subjects, for which grants are new. The previous change had become normalised and entrenched. Talk of 'perverse incentives' arose.
A perception that peer review was valuable: this has grown in strength, across all disciplines, since the shift to metrics was proposed.
Subject associations were influential in the current system; they would be less so in the new system.
An allergy to numbers? Cling to nurse for fear of worse?
Possibilities
The group swiftly concluded that one metric alone would be unsatisfactory for the A&H subjects: the diversity of the subjects was too great. No clear, stable correlation of any single X with research quality was visible across the whole of the A&H subjects. Consequently we explored a wide range of metrics which we could viably use, and mapped them to where they had been used by others.
All might work to a degree for any given discipline. In what follows I note the measures proposed, and where each measure is already in use elsewhere for these purposes.
Operating income - Germany, Netherlands
Staff numbers - Germany, Netherlands, Poland
Third-party income (excluding RC income) - Germany, Poland, Australia
Income from international sources (EU etc.) - Germany, Poland, Australia
RC income - Germany, Poland, Australia
Internationalisation of research
Co-operation in RC-funded national networks - Germany, Finland
Visiting lecturers / incoming researchers - Germany, Poland
Incoming graduate students - Germany
Numbers of researchers in international networks - Finland, Poland, Slovenia
Numbers of monographs - Netherlands, Finland, Poland, Australia, Slovenia
Numbers of journal articles - Netherlands, Finland, Poland, Australia, Slovenia
Publications in leading international journals - Germany, Slovenia
Indexed in international bibliographic databases - Slovenia
Book chapters - Australia
Published conference proceedings (refereed) - Australia
Bibliometric analyses, citations - Netherlands, Belgium
Patents - Netherlands, Finland, Poland, Slovenia
Development of databases etc. - Slovenia
Organisation of significant national or international conferences - Poland
PhD completion rates - Netherlands, Finland, Poland, Australia
Masters degrees - Finland, Poland, Australia
Measures of research esteem
Number of RC reviewers - Germany
Membership of national evaluation bodies - Poland
Invited lectures / keynote speeches - Netherlands
Awards and prizes - Poland
Integration of research into teaching - Slovenia
KT / relevance to industry - Finland, Poland, Slovenia
Expert reports commissioned - Poland, Slovenia
Wider dissemination of research findings - Poland
Problems range across all of these measures, the most essential of which relates to the assessment of quality. How does one sift the wheat from the chaff? This is the key problem: otherwise one may simply stimulate the oversupply of low-grade outputs.
The problem impacts less on some measures because they have peer review underlying them, notably RC income.
Yet even a measure such as journal publication may not allow one to assume that peer review has taken place on the output: different disciplines take quite different approaches to peer review at times.
The problem of quality dogs questions related to esteem in particular; it can become overwhelming in the arts.
The Findings
It is for reasons such as these that the group developed the distinction between a metrics-led and a metrics-informed approach to research quality assessment. A metrics-informed approach uses metrics but moderates the findings through peer review.
This view appears to be in the ascendant in the UK; the solution appears to be as applicable in the sciences as in the arts and humanities.
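The metrics-led versus metrics-informed distinction can be made concrete with a minimal sketch. The moderation rule below (a weighted blend of metric and panel scores) is my own illustrative assumption, not the group's specification; a real panel could instead override, cap, or adjust the metric case by case.

```python
# Illustrative sketch of metrics-led vs metrics-informed assessment.
# The blending rule and all scores are hypothetical.

def metrics_led_score(metric_score):
    """Metrics-led: the metric alone determines the quality score."""
    return metric_score

def metrics_informed_score(metric_score, peer_review_score, weight=0.5):
    """Metrics-informed: peer review moderates the raw metric.

    Modelled here as a simple weighted blend between the metric score
    and the panel's peer-review score, both on a 0-1 scale."""
    return weight * metric_score + (1 - weight) * peer_review_score

# A department whose metrics look strong but whose peer review is weaker.
raw, panel = 0.9, 0.5
led = metrics_led_score(raw)                    # 0.9
informed = metrics_informed_score(raw, panel)   # 0.7
```

The point of the sketch is that under the metrics-informed approach, peer review pulls the final judgement away from what the raw numbers alone would dictate.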
What Does It All Mean?
Measuring research quality is a difficult and perhaps foolish task. Typically it is an individual judgement. It is certainly a task which is political (at both the macro level and the micro level).
Measuring research quality is about influencing research behaviours as much as measuring quality in any objective sense; indeed, measurement may at times be the antithesis of this.
However, in an environment in which public funds are dispensed in aid of research, we should expect those that give to want to influence behaviours and to reward those behaviours they desire.
However, as an academic community we should always maintain a healthy scepticism about such an enterprise. As individual academics we should value our academic freedom and, mindful of the potential consequences, do what we think is right rather than what the system asks of us when needs be.
That said, what the systems ask of us is often so reasonable... Perhaps I have been habituated!