How to use research metrics to improve
research performance
13 - 17 February 2017
Presentation to various organizations in Moscow & Samara
John Green
formerly Chief Operating Officer, Imperial College London
now Life Fellow, Queens’ College Cambridge
jtg11@cam.ac.uk
Benchmarking or ranking?
Rankings are compiled by various providers; each creates relative
positions based on particular weightings, and some even allow a
university to choose the weightings and so create its own ranking.
Unsurprisingly, a Rector chooses the ranking in which his university
comes out best when marketing the university to new students.
Benchmarking has a very different purpose: it is a precise exercise
that measures activity (e.g. funding, and outcomes such as impact)
through carefully defined metrics.
Benchmarking results in higher
ranking
Rankings are important – they raise the profile of an institution –
we all want to be as high as possible in rankings
Moving higher in rankings is the result of improved performance
Higher performance comes from using benchmarking
To benchmark, one needs precise measures of activity (e.g. funding,
and outcomes such as impact) based on carefully defined metrics
The Snowball partner universities believed that there was
no robust methodology for benchmarking
Snowball Metrics enable robust,
university-driven benchmarking
Snowball Metrics enable accurate benchmarking to drive
quality and efficiency.
Universities need standard metrics to benchmark themselves
relative to peers, so they can strategically align resources to
their strengths and weaknesses
Snowball Metrics approach
• Bottom-up initiative: universities define and endorse metrics to
generate a strategic dashboard. The community is their guardian
• Draw on all data: university, commercial and public
• Ensure that the metrics are system- and tool-agnostic
• Build on existing definitions and standards where possible and
sensible
• Ensure they are robust, so that they compare apples with apples
The output of Snowball Metrics
www.snowballmetrics.com/metrics
“Recipes” – agreed and tested metric
methodologies – are the output of
Snowball Metrics
From Statement of Intent:
• Agreed and tested methodologies…
are and will continue to be shared
free-of-charge
• None of the project partners will
at any stage apply any charges for
the methodologies
• Any organization can use these
methodologies for their own
purposes, public service or
commercial
Statement of Intent available at http://www.snowballmetrics.com/wp-content/uploads/Snowball-Metrics-Letter-of-Intent.pdf
Snowball Metrics delivered
(Metrics span Research Inputs, Research Process, and Research Outputs and Outcomes)

Research
• Applications Volume (enhancement)
• Awards Volume (enhancement)
• Success Rate
• Income Volume
• Market Share

Publications and citations
• Scholarly Output (enhancement)
• Citation Count
• Citations per Output
• h-index
• Field-Weighted Citation Impact
• Outputs in Top Percentiles
• Publications in Top Journal Percentiles

Collaboration
• Collaboration
• Collaboration Impact
• Collaboration Field-Weighted Citation Impact
• Collaboration Publication Share
• Academic-Corporate Collaboration
• Academic-Corporate Collaboration Impact

Impact
• Altmetrics
• Public Engagement
• Academic Recognition

Enterprise Activities/Economic Development
• Academic-Industry Leverage
• Business Consultancy Activities
• Contract Research Volume
• Intellectual Property Volume
• Intellectual Property Income
• Sustainable Spin-Offs (enhancement)
• Spin-Off-Related Finances

Postgraduate Education
• Research Student Funding
• Research Student to Academic Staff Ratio
• Time to Approval of Doctoral Degree
• Destination of Research Student Leavers

Denominators
• Institution (enhancement)
• Discipline (enhancement)
• HESA cost centre – HERD mapping
• HESA funder types – FundRef mapping
• Funding type
• Post-graduate research student, and FTE proportion
• Gender
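As an illustration of how a few of the publication and citation metrics above might be computed, here is a toy sketch; the output list and the per-field citation baselines are invented for demonstration, and the official Snowball recipes define the exact rules.

```python
def h_index(citation_counts):
    """Largest h such that h outputs each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def field_weighted_citation_impact(outputs):
    """Mean of (actual citations / expected citations for the output's field)."""
    ratios = [o["citations"] / o["expected_citations"] for o in outputs]
    return sum(ratios) / len(ratios)

# Hypothetical outputs; "expected_citations" stands in for the field baseline.
outputs = [
    {"citations": 12, "expected_citations": 6.0},
    {"citations": 3,  "expected_citations": 6.0},
    {"citations": 8,  "expected_citations": 4.0},
    {"citations": 0,  "expected_citations": 2.0},
]

scholarly_output = len(outputs)                          # Scholarly Output
citation_count = sum(o["citations"] for o in outputs)    # Citation Count
citations_per_output = citation_count / scholarly_output # Citations per Output
h = h_index([o["citations"] for o in outputs])           # h-index
fwci = field_weighted_citation_impact(outputs)           # FWCI
```

An FWCI above 1.0 means the outputs are cited more than the world average for their fields, which is what makes the metric comparable across disciplines.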
How do Snowball metrics help
universities align their strategies to
their strengths and weaknesses?
Metrics can be size-normalized
Metrics can be “sliced and diced”
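A minimal sketch of what size-normalising and "slicing" a metric might look like; the records, field names and numbers are invented, and the real denominators (discipline, funder type, gender, FTE and so on) are defined by the Snowball recipes.

```python
# Hypothetical funding records for two institutions.
records = [
    {"institution": "A", "discipline": "Biology", "income": 120.0, "fte": 40},
    {"institution": "A", "discipline": "Physics", "income": 200.0, "fte": 50},
    {"institution": "B", "discipline": "Biology", "income": 90.0,  "fte": 20},
]

def income_per_fte(rows):
    """Size-normalised metric: total income divided by total FTE staff."""
    return sum(r["income"] for r in rows) / sum(r["fte"] for r in rows)

def slice_by(rows, **filters):
    """Keep only rows matching every filter, e.g. discipline='Biology'."""
    return [r for r in rows
            if all(r[k] == v for k, v in filters.items())]

# Whole-institution value, then the same metric sliced to one discipline:
a_all = income_per_fte(slice_by(records, institution="A"))  # 320 / 90
b_bio = income_per_fte(slice_by(records, institution="B", discipline="Biology"))  # 90 / 20
```

Size-normalising lets a small institution be compared fairly with a large one; slicing lets the comparison be restricted to, say, one discipline or one funder type.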
Research Student Funding
[Chart: sources of funding as % of active students – full-time, UK male, home students in Biology, 2014/15 – comparing the University of Cambridge, University of Bristol and University of Leeds across sources: no award or financial backing; provider waiver/award; Research Councils; UK charity; UK public sector; UK industry; overseas industry; EU government; other overseas sources; other sources.]
Time to Award of Doctoral Degree
[Chart: ratio of time to award of doctoral degree to expected course length, comparing the University of Cambridge, University of Bristol, University of Leeds, University of Oxford, Queen's University Belfast, Imperial College London, University College London and the University of St Andrews.]
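The ratio charted here can be computed as in this toy sketch; the cohort and its numbers are invented for illustration.

```python
def time_to_award_ratio(students):
    """Mean of (actual years to award / expected course length in years)."""
    return sum(s["years_to_award"] / s["expected_years"]
               for s in students) / len(students)

# Hypothetical doctoral cohort with a three-year expected course length.
cohort = [
    {"years_to_award": 3.5, "expected_years": 3.0},
    {"years_to_award": 4.0, "expected_years": 3.0},
    {"years_to_award": 3.0, "expected_years": 3.0},
]

# Values above 1.0 mean degrees take longer than the expected course length.
ratio = time_to_award_ratio(cohort)
```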
Research Student to Academic Staff Ratio (Biosciences)
[Chart: research student to academic staff ratio in Biosciences, comparing the University of Cambridge, University of Bristol, University of Leeds, University of Oxford, Queen's University Belfast, Imperial College London, University College London and the University of St Andrews.]
What benefits have Imperial
College London seen?
• Understanding strengths and weaknesses
• Understanding competitors and identifying our peer group
• Recruitment of faculty
• Developing strategies to focus resource and collaborate
• An increasingly selective strategy (Global Themes)
• Improving research income & outputs
• Strategic approach
Some real examples
• Decrease in neuroscience income
• Recruiting a new professor
• Divestment of an institute
Benefits for universities
• Trusted comparison of metrics on a robust standard
(comparing apples to apples)
• Universities are in control
• Methods (recipes) are not proprietary
• Metrics are agnostic to systems or suppliers
– anyone can use them for their own purposes
• Ability to choose and control with whom one shares/benchmarks
(the crossroad/traffic light model)
• Ability to benchmark nationally and internationally
Globalizing Snowball Metrics
• US: Michigan, Northwestern University, University of Illinois
at Urbana-Champaign, Arizona State, MD Anderson, Kansas State
• Australia/New Zealand: Queensland, Western Australia, Auckland, Canberra
• Japan: RU11 & MEXT, 2013 to date
• APRU: Association of Pacific Rim Universities
• HKU & NTU
• European Commission for H2020
• Fundação para a Ciência e a Tecnologia (FCT) in Portugal
• Sweden
• Denmark
John Green
jtg11@cam.ac.uk
Snowball Metrics http://www.snowballmetrics.com/
Towards global standards for
benchmarking
Snowball Metrics denominators should enable global benchmarking as far as
possible. We do not know how this will look, but one possibility is:

[Diagram labels: UK metrics; Australia/New Zealand metrics; Japan metrics.
Illustrative only, testing underway.]
Common core, where benchmarking against global peers can be conducted.
The aim is to make this as big as possible
Shared features, where benchmarking between Countries 1 and 2 can be
supported by shared or analogous data, e.g. regional benchmarking
National peculiarities support benchmarking within Country 1, but not
globally, i.e. national benchmarking
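One way to picture the layered model above is as set operations on each country's metric set: the common core is the intersection of all sets, shared features are pairwise intersections, and national peculiarities are what remains. The metric names below are invented examples, not the agreed lists.

```python
# Hypothetical national metric sets (names are illustrative only).
uk = {"Citation Count", "h-index", "FWCI", "Income Volume", "HESA cost centre"}
anz = {"Citation Count", "h-index", "FWCI", "ERA discipline"}
japan = {"Citation Count", "FWCI", "MEXT category"}

common_core = uk & anz & japan        # supports global benchmarking
uk_anz_shared = (uk & anz) - japan    # supports regional benchmarking
uk_only = uk - anz - japan            # supports national benchmarking only
```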
Snowball Metrics exchange
• Any institution that has Snowball Metrics can use the exchange
• Institutional members are responsible for generating Snowball Metrics
according to the Snowball recipes
• An institution can be a member of one or more ‘benchmarking clubs’
(the Russell Group has agreed to form a Snowball benchmarking “club”)
• Institutions choose what to share
• The exchange service encrypts all metrics, and only entitled
institutions can decrypt them
• The data underlying the metrics will never be exchanged
• A CRIS system could act as both provider and client, communicating
directly with the exchange APIs
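The sharing rules above can be sketched as a toy entitlement model; the class and method names are invented, and the real exchange additionally encrypts metric values rather than relying on access checks alone.

```python
class MetricsExchange:
    """Toy model: metrics are shared per club; only members can read them."""

    def __init__(self):
        self.clubs = {}    # club name -> set of member institutions
        self.metrics = {}  # (club, institution, metric) -> value

    def form_club(self, club, members):
        self.clubs[club] = set(members)

    def publish(self, club, institution, metric, value):
        # An institution chooses what to share, and with which club.
        if institution not in self.clubs[club]:
            raise PermissionError("publisher is not a club member")
        self.metrics[(club, institution, metric)] = value

    def read(self, club, reader, institution, metric):
        # Only entitled (member) institutions can read a shared metric.
        if reader not in self.clubs[club]:
            raise PermissionError("reader is not entitled to this club")
        return self.metrics[(club, institution, metric)]

exchange = MetricsExchange()
exchange.form_club("Russell Group", {"Cambridge", "Imperial", "UCL"})
exchange.publish("Russell Group", "Cambridge", "Scholarly Output", 1376)
value = exchange.read("Russell Group", "Imperial", "Cambridge", "Scholarly Output")
```

Note that only the metric value is stored and shared, never the underlying data, mirroring the bullet above.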
Snowball Metrics exchange
[Diagram: data sources on the input side (InCites, SciVal, Symplectic, Pure,
Converis, spreadsheets) send metrics – e.g. Scholarly Output = 1,376 – to the
Snowball Metrics eXchange database in encrypted form; entitled data sources on
the output side (the same systems) receive and decrypt them, recovering
Scholarly Output = 1,376.]
Metrics are only valuable when
used responsibly
• Always use more than one research metric
– A strong research ecosystem produces and recognises diverse types of
impact
– Reduces the chance of gaming
– Reduces the chance of driving undesirable changes in behaviour
• Select metrics that represent behaviour you want to
encourage
• Data source coverage, disciplinary focus, and timeline
should affect the metrics selected
• Select a set of metrics whose strengths complement each
other’s weaknesses
Are research metrics alone enough to
fully describe performance? No
Is expert opinion alone enough to fully
describe performance? No
Can performance be fully described? Yes
When used with common sense, research metrics together with
qualitative input give a complete, balanced, multi-dimensional view of
performance
Two Golden Rules for using
research metrics
Always use both qualitative and
quantitative input into your
decisions
Always use more than one
research metric as the
quantitative input
The importance of research metrics
• Research metrics help us to make better decisions
– More informed
– Reduce chance of error
• Metrics are responsive to changes in performance
• Metrics enable benchmarking even against peers we don’t know
• Metrics can save time and money, if used wisely to indicate where peer
review should be used for validation
• Metrics are an objective complement to expert opinion
– Both approaches have strengths and weaknesses
– Valuable information is available when these approaches differ in message
• Always use both approaches in combination
Qualitative (peer review, expert opinion, narratives) combined with
quantitative (rankings, benchmarking, narrative support)
The basket of metrics is diverse…

Metric theme: Metric sub-themes
A. Funding: Awards
B. Outputs: Productivity of research outputs; Visibility of communication channels
C. Research Impact: Research influence; Knowledge transfer
D. Engagement: Academic network; Non-academic network; Expertise transfer
E. Societal Impact: Societal Impact
F. Qualitative input

Number of metrics shown is indicative only. “S” denotes Snowball Metrics (www.snowballmetrics.com)
… and the diverse metrics are
available for all entities

Entities: outputs (e.g. article, research data, blog, monograph); custom set
of outputs (e.g. funders’ output, articles I’ve reviewed); researcher or
group; institution or group; subject area; serial (e.g. journal,
proceedings); portfolio (e.g. publisher’s title list); country or group

Metric themes: A. Funding; B. Outputs; C. Research Impact; D. Engagement;
E. Societal Impact; F. Qualitative input
Users in different countries select
different metrics
Metric                                  | Russia | World | Australia | Canada | China | Germany | Japan | UK | US
Field-Weighted Citation Impact          |   ?    |   1   |     1     |   1    |   3   |    2    |   4   |  3 |  1
Outputs in Top Percentiles              |   ?    |   2   |     2     |   3    |   1   |    4    |   1   |  1 |  6
Publications in Top Journal Percentiles |   ?    |   3   |     4     |   2    |   2   |    6    |   2   |  2 |  5
Collaboration                           |   ?    |   4   |     6     |   6    |   5   |    1    |   3   |  5 |  7
Citations per Publication               |   ?    |   5   |     3     |   7    |   6   |    3    |   5   |  4 |  3
Citation Count                          |   ?    |   6   |     5     |   5    |   4   |    8    |   6   |  6 |  2
h-indices                               |   ?    |   7   |     7     |   4    |   8   |    7    |   7   |  7 |  4

Usage of metrics available in SciVal’s Benchmarking module from 11 March 2014 to 28 June 2015.
A partial list of the metrics available at that time is shown, focusing on the most frequently
used. Scholarly Output is excluded since this is the default.
How should the basket of metrics
be used?
1. Define the question clearly, so that you can
2. Select appropriate metrics for the particular situation, then
3. Calculate those metrics for the entities you are investigating, and
4. Calculate them for suitable peers, so you can benchmark performance
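The four steps above can be sketched as follows; the question, the metric values and the peer set are all invented, and in practice step 3 would compute the values from the Snowball recipes rather than hard-coding them.

```python
# Step 1: define the question clearly.
question = "Is our citation impact competitive with our peers?"

# Step 2: select appropriate metrics for this situation.
selected_metrics = ["Field-Weighted Citation Impact",
                    "Outputs in Top Percentiles"]

# Step 3: calculated metric values for ourselves and suitable peers
# (hypothetical numbers for illustration).
values = {
    "Us":     {"Field-Weighted Citation Impact": 1.4,
               "Outputs in Top Percentiles": 18.0},
    "Peer 1": {"Field-Weighted Citation Impact": 1.6,
               "Outputs in Top Percentiles": 22.0},
    "Peer 2": {"Field-Weighted Citation Impact": 1.1,
               "Outputs in Top Percentiles": 12.0},
}

# Step 4: benchmark by ranking each metric across the whole peer group.
def rank(metric):
    ordered = sorted(values, key=lambda e: values[e][metric], reverse=True)
    return ordered.index("Us") + 1

positions = {m: rank(m) for m in selected_metrics}
```

Using two complementary metrics, per the Golden Rules, guards against reading too much into any single number.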
So do metrics help?
• It’s not all metrics, or no metrics – it’s not a black and white decision
• Metrics can provide data points on which to build using expert
opinion (peer review) to delve deeper & deal with outliers
• Metrics aren’t a replacement for human judgment – they complement it
• Metrics aren’t the antithesis of peer review
• (Biblio)-metrics incorporate decisions made by peer review,
e.g. whether to publish, what to cite
• But metrics aren’t just bibliometrics – there are many measures
that can and should be used
• We value objective normalized universal information that
enables meaningful comparisons
• After all, academia is an evidence-based activity!
• First define the question; then pick the metrics to answer it
Conclusion
• Research metrics help us to make better decisions, if used
responsibly
– Golden Rule 1: Always use both qualitative and quantitative
input into your decisions
– Golden Rule 2: Always use more than one research metric as
the quantitative input
• Successful benchmarking relies on a clearly defined question and a
selection of appropriate metrics, benchmarked against suitable
peers
• Suggest a common core of metrics, with complementary
sets suitable for particular disciplines and institution types
• Use new metrics alongside traditional metrics, not instead
of them
If universities in other countries
want to explore metrics …
• Work bottom-up
• Have a manageable group of universities working together
– need those who are willing to work at it (it took us 5 years)
• Involve an organisation which can test the metrics
• UK universities are not very good at managing projects ….
• Need high level Steering Group (to ensure buy-in from the top)
• Need data experts from within universities
• AND… a project group which “does the work”
• Need to get buy-in from all parts of the sector: funders,
government, researchers …
• Leverage the work others have done (Snowball!)
• Trust each other