Tomas Rees, PhD, Oxford PharmaGenesis Ltd, Oxford, UK; Stacy Konkiel, MLS/MIS, Altmetric, Farringdon, London, UK; Andrew Shepherd, PhD, Envision Pharma Group, Horsham, UK

This article was derived from the efforts of the ISMPP Social Media and Web-based Metrics Working Group. #ISMPP_socialmedia_webmetrics

Effective publication planning requires a good understanding of the uptake and impact of individual publications. In the past, our ability to measure this has been limited because publication planners have only had access to journal impact factors and citation counts. With the advent of web-native and article-level metrics, however, publication planners now have access to a vast array of data, including usage statistics (pageviews and download counts), alternative metrics (commonly known as altmetrics), and traditional citation counts.

The sheer volume of possible metrics can seem bewildering. However, by understanding what insights can be obtained from different metrics, it is possible to select tools that are best suited for particular purposes (see Figure 1 below). Through close examination of all available options, publication planners can also gain a fuller understanding of opportunities and potential pitfalls in these metrics’ use. This article will explore how the responsible use of metrics can help publication professionals better understand the full range of interest in, and use of, publications. In turn, this information can be used to improve our publication planning efforts.

Figure 1. Types of article interactions that can be assessed using article-level metrics

What Insights Can Article-level Metrics Provide?

Reach

The reach of research can be thought of as the number of people who have been exposed to a journal article, for example by seeing a link shared on social media. In a perfect world, reach could be measured by tracking how many people have at least viewed the title or abstract of a paper. In reality, it is much more difficult to measure. For example, publishers and other research platforms may not share viewership data, article titles and abstracts can be read on non-publisher platforms (such as PubMed Central), and many readers find publications through internet search engines (which do not share metrics).

Reach can also be measured indirectly, through a variety of proxy measures. Journal readership (either subscribers or number of website visitors) is a key proxy metric for reach, but it has important limitations. For example, a healthcare institution may have one subscription that is accessed by hundreds of users, which will be counted as only one subscriber. Article-level metrics can also provide proxy measures of reach, such as the number of times an article is shared on social media or is mentioned in a news article or blog post. They can also answer more detailed questions, such as: how sharing varies by country, by platform, or by user demographics; who is sharing the article and how many followers do they have; which news sites are covering the story, and what is their circulation? However, the picture will always be incomplete because readers might share the secondary sources that discuss the research – the press release, news article, or conference presentation – rather than sharing the article itself. This kind of interaction will be missed, as will any activity that takes place on websites that altmetrics services don’t track.


Readership

The number of people who have accessed and read a publication is a critical indicator for publication planners. While it is difficult to measure the true readership of an article (metrics like “time spent on page” are rarely shared by publishers), readership is often approximated through usage statistics, such as article page views and PDF downloads. Interpretation of usage statistics can be complicated if the research can be accessed on a variety of platforms or if a journal article exists in a variety of formats. For example, a journal article might be shared as a link to the publisher’s “version-of-record” article, or as a pre-publication accepted manuscript that has been archived on a preprint server such as bioRxiv.

Another proxy for readership is the number of times that papers have been saved to personal reference libraries (eg, Mendeley). Saving a paper to a personal reference library might indicate that an individual has either read it or intends to read it later.


Engagement

Engagement follows awareness – once individuals have seen and read a paper, what do they do with the new information? These actions could include discussing it with colleagues, writing an opinion piece, or sharing it with a new audience.

Engagement metrics can include discussions on expert peer review sites, such as Faculty of 1000 Prime, and on social media platforms. Although most social media shares are simply tweets or retweets of the article title, some discussions do offer substantive commentary. For this reason, engagement metrics should always be accompanied by a summary of related discussions. Automated tools can be helpful for identifying spikes in interest or obvious examples of criticism (eg, through sentiment analysis). However, many of these tools are still in development; for example, citation analysis tools can provide some basic insight into whether a citation supports or contradicts the original research, but they tag the overwhelming majority of citations as “neutral.” For the time being, human assessment remains critical to truly understand what people are saying about a paper.

Translational impact

The ultimate goal of authors and publication professionals is to make a discernible contribution to scientific endeavor or clinical practice. A key indicator of success in translational impact can be citations in clinical guidelines, published either in journals or in policy and guidance documents produced by healthcare organizations. Citations in publications may also indicate an impact on research, and citations in a published expert opinion piece or meta-analysis may indicate an impact on clinical practice. In these instances, the metrics are less valuable than the context of who is citing your article and why they are citing it.

Other useful citations that may indicate translational impact include those in Wikipedia (reportedly consulted by 50% of doctors in the course of their daily work[1]) or any of the many online “point-of-care” resources for doctors, patients, and caregivers.

Aggregate Metrics

Some providers of article-level metrics offer aggregated scores or rubrics that summarize many metrics, making the data easier to interpret.

The Altmetric Attention Score:

  • Offered by the company Altmetric.
  • Aggregates 17 different metrics, each independently weighted, into a single, article-level score.
  • Attention Scores can be compared against those of other articles of the same age and in the same journal using percentile information listed on article Details Pages.
  • Because the Altmetric Attention Score combines many independent metrics, the interpretation of high scores can vary (for high-scoring articles, it often reflects high Twitter or news interest).[2]

The Plum Print:

  • Offered by Plum Analytics.
  • Segregates metrics into five different areas (citations, usage, captures, mentions, and social media), but does not weight them.
  • This gives an indication of the kind of attention that an article has received and can be useful when a more nuanced picture of engagement is needed.
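To make the two approaches concrete, the short Python sketch below contrasts a weighted single score (the Altmetric-style approach) with an unweighted, categorized summary (the Plum-style approach). The weights, source names, and category mappings here are invented for illustration only; they are not the actual values used by Altmetric or Plum Analytics.

```python
from collections import defaultdict

# Hypothetical per-source weights for a single aggregated score
# (illustrative values, NOT Altmetric's proprietary weighting)
WEIGHTS = {"news": 8.0, "blog": 5.0, "twitter": 1.0, "wikipedia": 3.0}

# Hypothetical grouping of sources into Plum-style categories
CATEGORIES = {
    "news": "mentions", "blog": "mentions", "wikipedia": "mentions",
    "twitter": "social_media",
    "mendeley_saves": "captures",
    "pageviews": "usage",
    "citations": "citations",
}

def weighted_score(counts):
    """Collapse several metrics into one number (single-score approach)."""
    return sum(WEIGHTS.get(source, 0.0) * n for source, n in counts.items())

def category_print(counts):
    """Group metrics into labelled buckets without weighting (rubric approach)."""
    buckets = defaultdict(int)
    for source, n in counts.items():
        buckets[CATEGORIES.get(source, "other")] += n
    return dict(buckets)

counts = {"news": 2, "twitter": 40, "mendeley_saves": 15, "citations": 3}
print(weighted_score(counts))   # 2*8 + 40*1 = 56.0 (unweighted sources score 0)
print(category_print(counts))   # separate totals for mentions, social media, etc.
```

The contrast mirrors the trade-off described above: the single score is easy to compare but hides which sources drove it, while the categorized summary preserves the kind of attention at the cost of a one-number answer.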

Using Article-Level Metrics to Improve Publication Planning

A good appreciation of metrics can help a planner assess whether a publication plan is achieving its intended goal. A publication that makes little or no impression on the metrics collected may still be of value – for example, it may be a key part of the evidence for product or formulary approval. But, typically, publications are expected to have an uptake among their intended audiences that is measurable using one or more of these article-level metrics.

An excellent introduction to the general topic of measuring the influence of research is provided by the Leiden Manifesto for Research Metrics, which provides ten principles to guide research evaluation.[3]

Leiden Manifesto recommendations that are directly relevant to medical publication professionals include the following:

  • The variations across academic fields in citation rates and altmetric mentions should be taken into account.
  • Quantitative evaluation should support qualitative, expert assessment (not replace it).
  • The choice of metrics should be clearly aligned to the publication objectives.
  • While simple approaches are attractive, oversimplification can mask the true diversity of impact (an appropriate balance must be achieved).

There are four key stages to the use of article-level metrics to assist publication planning (see Figure 2 below).

Figure 2. Key stages in using article-level metrics to improve publication planning


Align metrics with publication objectives

A planner is often concerned with publications across a whole clinical trial, study program, or portfolio. It is, therefore, important to connect the intended objective of each publication with the relevant metrics to determine whether the publication plan is achieving the desired levels of external impact. If it’s not, the metrics can help to chart a course for improvement.


Combine multiple metrics

Altmetrics, citations, usage statistics, and other publication metrics paint a richer picture of publication impact when used as a group, rather than in isolation. There is a lot about research impact that cannot be directly quantified, so individual metrics, on their own, may be incomplete or imprecise.[4]


Put metrics in context

Metrics should always be viewed in context. The simplest approach is to consider how your article metrics look in comparison with similar articles, but the difficulty then is to define what counts as “similar.” This could mean articles published in the same journal or related to the same therapy. Some metrics providers offer subject-level comparisons, and there are even specialized metrics developed to do this, such as the Field Citation Ratio and Relative Citation Ratio. Above all, because metrics accumulate over time at different rates, and because altmetrics may be impacted by the number of users of a platform at the time of publication, it’s important to make any comparisons against articles of a similar age.
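As a rough illustration of age-matched benchmarking, the sketch below computes where an article's metric falls among a cohort of articles of similar age in the same journal. The cohort counts are invented for this example; in practice they would come from a metrics provider's comparison data.

```python
from bisect import bisect_left

def percentile_rank(value, cohort):
    """Return the percentage of cohort values that fall below `value`."""
    ordered = sorted(cohort)
    return 100.0 * bisect_left(ordered, value) / len(ordered)

# Hypothetical attention counts for ten articles of similar age
# published in the same journal (invented data)
same_age_cohort = [0, 1, 1, 2, 3, 5, 8, 12, 20, 45]

print(percentile_rank(12, same_age_cohort))  # 70.0: above 7 of the 10 peers
```

Comparing against an age-matched cohort, rather than against the journal's all-time articles, avoids penalizing recent publications whose metrics have had less time to accumulate.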


Interpret and act

Once you have chosen which metrics to use, the next practical challenge is how to interpret them. Publications that underperform may have been published in a journal with insufficient reach in the target audience. Alternatively, publications may lack clarity or potentially even be redundant (perhaps indicating that they could have been merged).

If a publication is of high interest, then supplementary tactics (eg, a press release, video abstract, or infographic) may be appropriate to increase reach. If a publication has scored highly on some metrics but not others, then a consideration may be whether the choice of supplementary tactic was sufficient or appropriate. And, of course, it is vital to track and assess the content of any engagement, because the discussions around a publication can be used to improve future publication plans.

Understanding how to use and interpret article-level metrics can be a powerful tool for publication planners to evaluate the effectiveness of their plans. This, in turn, can aid in ensuring that important medical information reaches its intended audience to help inform and improve patient care.


[1] Beck J. Doctors’ #1 source for healthcare information: Wikipedia. 2014. Available from: (Accessed October 15, 2019).

[2] Araújo R, et al. Top altmetric scores in the Parkinson’s disease literature. J Parkinsons Dis 2017;7(1):81-87.

[3] Hicks D, et al. Bibliometrics: the Leiden Manifesto for research metrics. Nature 2015;520:429-31.

[4] Wilsdon J. The metric tide: independent review of the role of metrics in research assessment and management. London: Sage, 2016.
