Bibliometrics is concerned with the analysis of research based on citation counts and patterns. The measures used are also commonly referred to as bibliometrics, or citation metrics. They can be used to evaluate the influence of an individual research output, such as a journal article, or a collection of research outputs, such as all works by a particular author, research group or institution. View the UCL bibliometrics policy and the wider context.
- Why are bibliometrics important?
- Bibliometrics can be used as an indication of the importance and impact of your work or that of a research group, department or university, and therefore of its value to the wider research community.
- Applications for funding, research positions or promotion may require bibliometric data and you may choose to include it in your CV.
- Bibliometrics are increasingly being used to measure and rank research output both within institutions and on a national or international level. University rankings may take bibliometrics into account and they are utilised in the Research Excellence Framework (REF).
- Bibliometrics can be used as a tool to identify research strengths and inform decisions about future research interests.
- Bibliometrics in different disciplines
Differences in publishing practices between disciplines mean that bibliometrics cannot be compared across disciplines.
Bibliometrics are generally focussed on citation data from journal articles. They may therefore be less relevant in disciplines that rely less on journal publishing, such as the arts, humanities, social sciences, computer science and engineering.
- Where can I find bibliometrics?
Various tools are available to identify a range of bibliometric measures. At UCL the main tools available are Web of Science, Journal Citation Reports, InCites (requires registration) and Scopus. Google Scholar also contains citation data. When reporting bibliometrics it is important to state the source of the data. These pages provide guides to using these tools to find various types of bibliometrics.
- Limitations of bibliometrics
Bibliometric data offers a quantitative method of analysing authors' or journals' output, but there are limitations with using bibliometrics:
- Comparisons between subject areas must be avoided. Some subject areas have a higher rate of publication and citation. For example, molecular biology articles are produced rapidly and cited frequently compared to computer science or mathematics articles. This means that an average molecular biologist would probably have a larger h-index than a leading computer scientist.
- It is important not to make comparisons between authors of different ages or lengths of professional activity. Authors who have published for many years have had more time to accumulate citations and reputation. For a fairer comparison, you can to some extent limit bibliometric results to a specific date range.
- Papers often have multiple authors – but what proportion of the work can be attributed to each author? Citation metrics assume that each named author is equally accountable, when this might not always be the case.
- Citation counts could be misleading, for example if an author includes a large number of self citations, or if a peer group agree to cite each other to boost their citation rates. The peer review process for journals should spot and prevent this.
- Negative citations are counted as valid: a paper cited in order to criticise or refute it still gains a citation.
- Papers by the same author may appear under different forms of their name, and affiliation details are not standardised. Databases have ways round this, including grouping name variants and assigning each researcher an individual numerical ID.
- Some publication types tend to receive more citations than others. Review articles and methods papers, for example, are likely to be more highly cited than a paper based on a laboratory study.
- Citation metrics will differ depending on the data source, as different databases include different journals and years of coverage.
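The h-index mentioned above has a simple definition: it is the largest number h such that the author has at least h papers each cited at least h times. A minimal sketch in Python (the example citation counts are illustrative, not real data):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h of the given papers have h or more citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break  # remaining papers are cited too few times
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has
# four papers with at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the result depends entirely on which papers and citations the source database includes, which is why, as above, the data source should always be stated.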
In addition, bibliometrics are a measure of the impact of research on further research, not necessarily of the quality of that research. Bibliometrics should therefore always be used with caution and not be considered a replacement for peer review, but are best used to complement or verify qualitative evaluation.
The Leiden Manifesto for research metrics presents ten principles for best practice in metrics-based research assessment.