Bibliometrics is the quantitative analysis of books, articles, and other publications, used to measure the impact and influence of research.
Bibliometrics is a term describing the quantification of publications and their characteristics. It includes a range of approaches, such as the use of citation data to quantify the influence or impact of scholarly publications. When used in appropriate contexts, bibliometrics can provide valuable insights into aspects of research in some disciplines.
However, bibliometrics are sometimes used uncritically, which can be problematic for researchers and research progress when they are applied in inappropriate contexts. For example, some bibliometrics have been commandeered for purposes beyond their original design. The journal impact factor was reasonably developed to indicate the average number of citations to a journal's articles over a defined time period, but it is often used inappropriately as a proxy for the quality of individual articles within that journal. Further, research “excellence” and “quality” are abstract concepts that are difficult to measure directly but are often inferred from bibliometrics.
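The mismatch is easier to see with a concrete calculation. The sketch below computes the classic two-year impact factor; the journal and all figures are invented for illustration. Because the metric is an average over every article a journal published in a two-year window, it carries no information about the citation count of any single article:

```python
def impact_factor(citations_in_year, citable_items):
    """Classic two-year journal impact factor: citations received in
    year Y to items the journal published in years Y-1 and Y-2,
    divided by the number of citable items published in those years.
    Illustrative sketch only."""
    return citations_in_year / citable_items

# Hypothetical journal: 1,200 citations in year Y to articles from
# the previous two years, which together held 400 citable items.
print(impact_factor(1200, 400))  # → 3.0
```

An impact factor of 3.0 here is consistent with a journal where a handful of articles attract hundreds of citations while most attract almost none, which is why the metric cannot stand in for the quality of an individual paper.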
Such superficial use of research metrics in research evaluations can be misleading. Inaccurate assessment of research can become unethical when metrics take precedence over expert judgement, since the complexities and nuances of research or a researcher’s profile cannot be fully quantified. When applied in the wrong contexts, such as hiring, promotion, and funding decisions, irresponsible metric use can incentivise undesirable behaviours, such as chasing publications in journals with high impact factors regardless of whether this is the most appropriate venue for publication, or discouraging the use of open science approaches such as preprints or data sharing.
As such, UCL has produced a policy and associated guidance on the appropriate use of metrics at UCL. This builds on a number of prominent external initiatives with the same aim, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics, and the Metric Tide report. The latter urged UK institutions to develop a statement of principles on the use of quantitative indicators in research management and assessment, where metrics should be considered in terms of robustness (using the best available data); humility (recognising that quantitative evaluation can complement, but does not replace, expert assessment); transparency (keeping the collection of data and its analysis open to scrutiny); diversity (reflecting a multitude of research and researcher career paths); and reflexivity (updating our use of bibliometrics to take account of the effects that such measures have had). These initiatives and the development of institutional policies are also supported or mandated by research funders in the UK (e.g., UK Research Councils, Wellcome Trust, REF).
This Policy Statement aims to balance the benefits and limitations of bibliometric use to create a framework for the responsible use of bibliometrics at UCL and to suggest ways in which they can be used to deliver the ambitious vision for excellence in research, teaching, and learning embodied in the UCL 2034 strategy. We recognise that UCL is a dynamic and diverse university, and no metric or set of metrics can be applied universally across our institution. Many disciplines or departments do not use research metrics in any way, because they are not appropriate in the context of their field. UCL recognises this and will not seek to impose the use of metrics in these cases. For those fields where metrics are used, this Policy Statement is deliberately broad and flexible to take account of the diversity of contexts, and is not intended to provide a comprehensive set of rules. To help put this into practice, we will provide an evolving set of guidance material with more detailed discussion and examples of how these principles could be applied. UCL is committed to valuing research and researchers based on their own merits, not the merits of metrics.
Principles for the responsible use of bibliometrics
Quality, influence, and impact of research are abstract concepts that cannot be measured directly. There is no simple way to measure research quality, and quantitative approaches can only be interpreted as indirect proxies for it.
- Different fields have different perspectives on what characterises research quality, and different approaches for determining what constitutes a significant research output (for example, the relative importance of book chapters vs journal articles). All research outputs must be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes.
- Both quantitative and qualitative forms of research assessment have their benefits and limitations. Depending on the context, the value of different approaches must be considered and balanced. This is particularly important when dealing with a range of disciplines with different publication practices and citation norms. In fields where quantitative metrics are neither appropriate nor meaningful, UCL will not impose their use for assessment in that area.
- When making qualitative assessments, avoid making judgements based on external factors such as the reputation of authors, or of the journal or publisher of the work; the work itself is more important and must be considered on its own merits.
- Not all indicators are useful, informative, or will suit all needs; and metrics that are meaningful in some contexts can be misleading or meaningless in others. For example, in some fields or subfields, citation counts can estimate elements of usage, but in others they are not useful at all.
- Avoid applying metrics to individual researchers, particularly metrics which do not account for individual variation or circumstances. For example, the h-index should not be used to directly compare individuals, because the number of papers and citations differs dramatically among fields and at different points in a career.
- Ensure that metrics are applied at the correct scale of the subject of investigation, and do not apply aggregate level metrics to individual subjects, or vice versa. For example, do not assess the quality of an individual paper based on the impact factor of the journal in which it was published.
- Quantitative indicators should be selected from those which are widely used and easily understood to ensure that the process is transparent and they are being applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.
- If goals or benchmarks are expressed quantitatively, care should be taken to avoid the metric itself becoming the target of research activity at the expense of research quality.
- New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the other principles listed here for more traditional metrics. Additionally, consider the sources and methods behind such metrics and whether they are vulnerable to being gamed, manipulated, or fabricated.
- Bibliometrics are available from a variety of services, with differing levels of coverage, quality and accuracy, and these aspects should be considered when selecting a source for data or metrics. Where necessary, such as in the evaluation of individual researchers, choose a source that allows records to be verified and curated to ensure records are comprehensive and accurate, or compare publication lists against data from the UCL IRIS/RPS systems.
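The point above about individual-level metrics such as the h-index can be made concrete. The h-index is defined as the largest h such that a researcher has at least h papers each cited at least h times; the sketch below computes it from a list of hypothetical citation counts. The definition itself shows why cross-field or cross-career comparisons mislead: the value is capped by the number of papers, so an early-career researcher or one in a low-citation field cannot score highly regardless of the quality of their work:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers each have
    at least h citations. Illustrative sketch only."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical citation counts for two researchers:
print(h_index([10, 8, 5, 4, 3]))       # established: 5 papers  → 4
print(h_index([50, 40, 2]))            # early career: 3 papers → 2
```

Note that the second (hypothetical) researcher has far more total citations yet a lower h-index, simply because they have published fewer papers, which is one reason the principle above warns against using such metrics to compare individuals directly.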
This policy was approved by UCL Academic Committee, 27th February 2020.
The text of this document is made available under a Creative Commons Attribution license and can be adapted or redistributed by third parties. However, to avoid confusion, please ensure that any modified version is not labelled as a UCL policy.