Bibliometrics training

I recently attended a session on bibliometrics, led by Yvonne Nobis of the Betty & Gordon Moore Library at the University of Cambridge. Bibliometrics is the statistical analysis of written publications. Although it was developed in a library context, to understand the relationships between publications and to inform decisions about journal purchasing, it is now more commonly used in academia as a way of measuring the impact of an academic publication, such as a journal article. Impact is typically assessed by gathering data on how frequently a particular publication is cited.

Yvonne took us through several flaws and dangers in using bibliometrics in this manner.

Firstly, bibliometrics can be an inaccurate measure of a journal’s impact: an impact factor is calculated across the journal as a whole, which means that a small number of heavily cited papers can artificially inflate it and allow other articles published in that journal to coast on this prestige. This has repercussions: researchers compete to publish in a handful of ‘prestigious’ journals, or are discouraged from publishing in smaller ones for fear of the effect on their careers. Publishers then use high impact factors to justify high subscription prices, while the concentration of submissions in a few prestigious titles drives up rejection rates.
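For context (the session didn’t go into the arithmetic), the standard two-year impact factor is just a ratio: citations received in a given year to the journal’s articles from the previous two years, divided by the number of citable items it published in those years. A minimal sketch, using invented numbers for a hypothetical journal:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year journal impact factor for year Y: citations received
    in Y to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 210 citations in 2016 to the 70 articles it
# published in 2014-15 gives an impact factor of 3.0.
print(impact_factor(210, 70))  # 3.0
```

A single paper with hundreds of citations would dominate the numerator, which is exactly the distortion described above.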

Secondly, when measuring the impact of an individual researcher’s publications, we have to rely on the H-index: the largest number h such that the researcher has published h papers that have each been cited at least h times. In principle, this sounds fair enough. The problem arises, however, in the tools we use to measure this H-index: the Scopus and Web of Science databases (it’s also possible to use Google Scholar, but this is a fraught and opaque method, as Google does not disclose the range of journals it covers and thus where its data comes from). Web of Science and Scopus cover slightly different ranges of dates and journals, so the material they draw on to determine a researcher’s H-index may differ, leading to different H-index scores for the same academic. Problems can also arise if an academic has published under variations on the same name (think J. Smith, John Smith, and John T. Smith), or if they share a name with many other researchers.
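To make that definition concrete, here is a minimal sketch of the H-index calculation (the citation counts are invented for illustration):

```python
def h_index(citations: list[int]) -> int:
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3:
# three papers have at least 3 citations, but not four with at least 4.
print(h_index([10, 8, 5, 2, 1]))  # 3
```

The calculation itself is mechanical; the discrepancies come from the two databases feeding different citation lists into it, which is why the same person can come out with different scores.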

Leaving inaccuracy aside, there is an argument that relying solely on a quantitative measure of impact ignores many of the complexities surrounding referencing and citation. One of the most highly cited articles of all time is the infamous Lancet article by Andrew Wakefield, which linked autism with the MMR vaccine and has since been discredited. However, most of the citations of this article are discrediting it, a nuance that is lost when we focus solely on the number of citations. A newer paper is likely to have fewer citations than an older one, but this tells us nothing about the importance or accuracy of its research. Likewise, a senior academic with a long publishing history is likely to have more citations than an early-career academic.

As with any measure of prestige, there have been attempts to game the H-index system, and Yvonne touched briefly on several scandals in which academics created fake researchers in order to cite their own publications and drive up the apparent impact of their work. This behaviour seems to me to be the inevitable result of focusing on quantitative measures of impact at the expense of all other contributions to the research landscape. Yvonne mentioned that although metrics are not counted in the REF (the UK’s Research Excellence Framework), individual universities are now using them as performance indicators for their staff, although this has not happened so far in Cambridge.

In spite of these negatives, it was a useful session, and I came away with some practical tips on how to create citation reports and maps in both Web of Science and Scopus, as well as a clearer understanding of where bibliometrics fit into the research landscape. My feeling is that they are a useful tool, particularly for tracing the evolution and development of ideas as they spread through a particular field, but that they should be used with care when assessing the impact of an article or the work of an individual academic.
