
Responsible Use of Metrics Statement

Durham University is a world-leading research-intensive University. We recognise the potentially important role that metrics can play in facilitating research and improving research visibility and impact. We see metrics as a resource which, used responsibly, supports the University’s staff in achieving research excellence and publishing in high-quality journals. We also recognise, as do peers across the sector, that the relevance and appropriateness of metrics vary across disciplines and that their use is complex. The statement below outlines the University’s approach to the responsible use of metrics. It builds on the ten principles for responsible research evaluation contained within the Leiden Manifesto for Research Metrics.

The University has been a signatory to the Declaration on Research Assessment (DORA) since 8 November 2018.

1. Quantitative evaluation should support qualitative, expert assessment

We recognise that expert peer review remains the most accurate means of assessing research quality. Quantitative metrics and indicators can nonetheless be useful, alongside peer review, in contextualising research, challenging tendencies towards bias in subjective peer review, and facilitating institutional benchmarking and the collective assessment of research units. We do not, however, support the use of quantitative indicators as a substitute for rigorous peer review and informed judgement, recognising that quantitative indicators can themselves be subject to bias in a variety of ways.

2. Measure performance against the research missions of the institution, group or researcher

The 2017-2027 University Strategy focusses on the delivery of world-leading research that is visible, useful and impactful for wider society. The appropriate use of quantitative evaluation can assist in achieving this objective, for example by contextualising expert judgement on the quality of a research output with reference to the quality of its publication outlet (journals and other fora) and to citation data. We will use metrics to track progress towards our overarching research mission (and those of our faculties and departments) rather than treating them as goals in themselves. This reflects our recognition that quantitative data are best understood in a broader context, taking particular account of disciplinary differences.

3. Protect excellence in locally relevant research

We are aware of the potential biases present in many metrics and will uphold pluralism and local relevance where appropriate. This may include cases where outstanding research is aimed at non-English-speaking audiences, recognising that many quantitative indicators favour English-language research outputs, or where outstanding research aimed at English-speaking audiences has strong localised impact and relevance that some traditional publication or citation metrics do not reflect effectively. It may also include recognising the value of outputs published in local forums, such as local journals, in order to reach local audiences and the public. We further recognise that the global research environment, and the technological means of accessing research outcomes, favour outputs from Western regions and Western approaches.

4. Keep data collection and analytical processes open, transparent and simple

An inherent tension exists between simple, easily digestible and overarching data collection and the use of more complex analytical methods that can account for disciplinary differences. Academic departments and research units will therefore be supported in developing specific, clearly stated indicators, tailoring data collection and analysis to their particular needs. We will take a balanced approach, combining transparent data collection using reproducible methodology with robust analytical processes developed in consultation with the research community. We will be consistent in the collection and application of these data and, wherever possible, standard reporting will be established to support this endeavour and to ensure transparency.

5. Allow those evaluated to verify data and analysis

Data collectors and analysts, alongside evaluators, will strive continuously to ensure accuracy and consistency throughout the research process. We expect that when data are gathered, steps will be taken to ensure that they are accurate, analysed within a relevant research context and deployed only for their stated purpose. Our expectation is that both researchers and those involved in research evaluation behave with integrity and conform to the University’s Research Integrity Policy. It is recognised that, in order to support this endeavour, staff must be allocated appropriate resources and training.

6. Account for variation by field in publication and citation practices

We recognise that publication and citation practices vary between disciplines and even subdisciplines, and that certain research indicators and sources of data underpinning those practices may be inappropriate. As such, we will work with research units and academic departments to implement current best practice, selecting from a range of possible research quality indicators and data sources those most suitable for a given research unit or academic department. It is acknowledged that there are circumstances in which the use of metrics is not appropriate at all.

7. Base assessment of individual researchers on a qualitative judgement of their portfolio

Evidence indicates that quantitative data used in isolation fail to account sufficiently for biases arising from individual circumstances, for example career stage, gender, ethnicity, and the language, format and timing of publication. Metrics-based assessments of publications will not normally be used alone for the purposes of recruitment, probation evaluation, performance management or progression. We recognise that consideration must be given to the full range of activities, expertise, experience and engagement of individual researchers, contextualised amongst their peers. Furthermore, under no circumstances will quantitative data be gathered or analysed without due regard for the impact on equality and diversity. Data collection, evaluation and use will always be undertaken in a manner consistent with the University’s Diversity and Equality policies.

8. Avoid misplaced concreteness and false precision

We endorse the practice of using multiple indicators to provide a contextualised, robust and accurate picture of research quality. We caution against misplaced concreteness: where uncertainty and risk can be quantified, they will be incorporated into data analysis. We will guard against false precision, for example sole reliance on journal or publisher rankings or on single metrics.

9. Recognize the systemic effects of assessment and indicators

We will anticipate the ways in which quantitative data might shift the behaviour of individual researchers and academic departments: in short, stated indicators change the assessment process through the incentives they establish. We reiterate our commitment to using, where practical, a suite of indicators rather than a single indicator, since reliance on one measure invites gaming and establishes perverse incentives.

10. Scrutinize indicators regularly and update them

The social, political, economic and technological context in which universities operate is subject to rapid change. The goals of research assessment shift, and the research system itself co-evolves. We commit to the regular review and, where appropriate, revision of the range of research quality indicators used by the University and its academic departments. At all times, the paramount objective will be that the collection of quantitative data facilitates, rather than substitutes for, the accurate assessment of research quality.