As journals and scholars have moved online, and citation indexing has been automated, the wealth of information for citation discovery and analysis has vastly increased.
- 27 May 2017
See also Altmetrics | Author impact metrics | Bibliometrics | ImpactStory | Scholarly publishing and communication | Webometrics
Measuring research "impact"
Impact factors are a product of the metrics-driven era in which we live, and quantitative methods of assessing scholarly impact are clearly on the rise in academia. In scholarship, "impact" can refer to journal citation counts, journal impact factors and researcher-specific metrics such as the h-index, all of which provide a means of measuring research impact. Research metrics can be used to support applications for grant funding, the promotion of researchers, and the updating of research profiles for faculty reviews. Some data sources used for measuring research impact include:
- Researcher metrics (author impact metrics) - the number of times a researcher is cited, and the number of publications over a period of time
- Article metrics - the number of times an article is cited overall; altmetrics such as page views, downloads and blog posts about an article can help (really, any database of items can provide some bibliometric information that might be useful)
- Journal metrics - the number of articles published in a journal each year; the number of journals in a specific subject area; the cited half-life of journals; "impact factors"
"The" impact factor
- Journal impact metrics measure the importance of a journal in a particular field relative to other journals in the same field. There are several different metrics employed to measure this relative importance.
- The impact factor (or journal impact factor, JIF) is a metric of significance ascribed to a journal based on the number of times articles within it are cited over a period of time
- IFs are determined by averaging: the number of times a journal's articles are cited by others over a fixed window, divided by the number of articles the journal published in that window
- The IF was conceived by Eugene Garfield, an information scientist, in the late 1950s, and came into wider use in the 1970s. Garfield's company, the Institute for Scientific Information (ISI), was later purchased by the publishing conglomerate Thomson Reuters.
- Journal IFs are well-established and used (and also misused) for a range of purposes; to address the latter, the San Francisco Declaration on Research Assessment (DORA) was published in 2013
- Academic libraries use IFs to determine the influence of journals and see relationships across institutions between where researchers publish, and the nature of their collaborations
- Finding IFs requires searching a citation index such as the Web of Science, formerly the Web of Knowledge.
- Some JIF-like metrics can be derived from Google Scholar, but critics say the tool suffers from errors and inflated counts.
- One project, Scholarometer, uses Google Scholar to determine author and journal influence.
- IFs have an impact on promotion and tenure, publishing cycles, and funding and/or grants. Some top journals in medicine are the New England Journal of Medicine, the British Medical Journal, the Journal of the American Medical Association (JAMA) and The Lancet.
- York University Libraries has a good research guide about impact factors.
Beyond impact factors
- Immediacy index - the average number of times a journal's articles are cited in the year they are published
- Journal cited half-life - the median age of a journal's articles cited in the Journal Citation Reports (JCR) year (see ref)
- a related measure is the citing half-life: in JCR 2010, the journal Food Biotechnology has a citing half-life of 9.0, which means that 50% of all articles cited by articles in Food Biotechnology in 2010 were published between 2001 and 2010 (inclusive)
- only journals that publish 100 or more cited references have a citing half-life; cited-only journals do not have a citing half-life
- Aggregate impact factor - calculated from the number of citations to all journals in a subject category and the number of articles published by all the journals in that category.
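As a rough sketch of the citing half-life idea above (a simplified proxy, not JCR's exact procedure, which interpolates a cumulative percentage curve), the measure is essentially the median age of the references a journal cites in a given year:

```python
from statistics import median

def citing_half_life(citing_year, cited_years):
    """Median age, in years, of the references a journal cites in a
    given year -- a rough proxy for JCR's citing half-life."""
    ages = [citing_year - y + 1 for y in cited_years]  # same-year citation = age 1
    return median(ages)

# Hypothetical reference list cited by a journal's 2010 articles
cited = [2010, 2009, 2008, 2006, 2004, 2002, 2001, 1998, 1995]
print(citing_half_life(2010, cited))  # 7
```

A citing half-life of 7 would mean that half of the journal's cited references were published within the last seven years.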
Impact factors refer to journals, not to specific articles or authors. The number of citations received by an individual article is called its citation impact. It is possible to measure the IF of the journals in which a particular person has published, a controversial but widespread use. Garfield himself warned about this "misuse in evaluating individuals", since there is "wide variation from article to article within a single journal".
The IF is calculated over a three-year window: the average number of times papers published in the previous two years are cited in the current year. For example, the 2009 impact factor for a journal is calculated as follows:
- A = number of times articles published in 2007-2008 were cited in indexed journals during 2009
- B = number of articles, reviews, proceedings or notes published in 2007-2008
- 2009 impact factor = A/B
- (Note: 2009 impact factors are published in 2010, after all 2009 publications have been processed.)
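The A/B calculation above is a simple ratio; the sketch below uses invented citation counts for illustration:

```python
def impact_factor(citations_received, citable_items):
    """Impact factor for year Y: citations received during Y to items
    published in Y-1 and Y-2 (A), divided by the number of citable items
    (articles, reviews, proceedings, notes) published in Y-1 and Y-2 (B)."""
    return citations_received / citable_items

# Hypothetical journal: 420 citations in 2009 to its 2007-2008 output
# of 150 citable items.
print(impact_factor(420, 150))  # 2.8
```

So a journal whose recent articles are cited 2.8 times each, on average, has an IF of 2.8.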
Another way to read an IF: a journal whose every published article is cited exactly once has an IF of 1. Thomson Reuters excludes certain types of items (i.e. news items, correspondence, and errata) from the denominator of the IF. New journals indexed from their first published issue receive IFs after two years of indexing; citations to, and article counts from, before Volume 1 are treated as zero values.
Controversies of impact factors
Impact factors are useful metrics for comparing journals and their influence within a field. For example, a sponsor of research may want to compare the productivity of its projects and their impact. At times, an objective measure of the importance of publications is needed, and the impact factor (or number of publications) is the only one available. It is important to remember that scholarly disciplines have different publication and citation practices, which affect the number of citations and how quickly articles in a subject reach their peak citation counts. In all cases, it is more relevant to consider the rank of a journal within a category of its peers than the raw impact factor value.

Impact factors are not infallible measures of journal quality. For example, it is unclear whether the number of citations a paper garners measures its actual quality or simply reflects the sheer number of publications in that particular area of research, and whether there is a difference between the two. Furthermore, in a journal with a long lag time between submission and publication, it may be impossible to cite articles within the three-year window; indeed, for some journals the time between submission and publication can exceed two years, which leaves less than a year for citation. On the other hand, a longer temporal window would be slow to adjust to changes in journal impact factors. Thus, while the impact factor is appropriate for fast-moving fields of science such as molecular biology, it is not for subjects with a slower publication pattern. (It is possible to calculate an impact factor for any desired period, and instructions are available online.)
Why IFs are useful
- Used for promotion and tenure;
- Wide international coverage at ISI;
- Web of Knowledge indexes 9000 science and social science journals from 60 countries.
- Results are widely (though not freely) available to use and understand;
- An objective measure, with a wider acceptance and reliability than the alternatives;
- One alternative measure of quality is "prestige" - a rating by reputation, which is very slow to change and cannot be quantified or used objectively; it merely demonstrates popularity.
Drawbacks of impact factors
- Inadequate international coverage; Web of Knowledge indexes journals from 60 countries, but coverage is uneven
- Few publications from languages other than English are included, and few from less-developed countries.
- Numbers of citations to papers in particular journals do not measure quality or scientific merit.
- Journals with low circulation, regardless of scientific merit of content, will never obtain high impact factors.
- Since defining the quality of an academic publication is problematic, involving non-quantifiable factors, such as the influence on scientists, assigning a value is difficult.
- Time factors for citations are too short. Classic articles are cited frequently even after several decades.
- The number of researchers, the average number of authors per paper, and the nature of results in different research areas make comparisons of impact factors between different groups of scientists difficult.
- Generally, medical journals have high impact factors - a fact accepted by publishers - but this does not mean their IFs are meaningful for comparison with other fields; such a use indicates a misunderstanding.
- By counting frequency of citations per article and disregarding the prestige of the citing journals, IF becomes merely a metric of popularity not prestige.
Manipulation of impact factors
A journal can adopt editorial policies that increase its impact factor without improving the quality of the science it publishes. For instance, journals may publish a larger percentage of review articles: while some original articles remain uncited after three years, nearly all review articles receive at least one citation within three years, raising the journal's impact factor. Thomson ISI gives directions for removing these journals from calculations, and for researchers or students familiar with a field, review journals are obvious.

Editorials in a journal do not count as publications; however, when they cite published articles, often articles from the same journal, those citations increase the citation counts of the cited articles. This effect is hard to evaluate, since distinguishing between editorial comment and short articles is not obvious: "letters to the editor" might belong to either class. Editors may also encourage authors to cite articles from the journal; the degree to which this affects citation counts and impact factors remains to be examined. Most of these factors are discussed in the literature, along with ways of correcting the figures for these effects if desired. It is normal for articles in a journal to cite that journal's own articles, since those are often the articles of similar merit in the same field; but if self-citation is done artificially, the effect will be significant for journals with the lowest citation counts, affecting placement at the bottom of the list.
Eighty-nine percent (89%) of citations to individual papers in Nature were generated by 25% of its papers. The most cited Nature paper in 2002-03 was the mouse genome, which represents the culmination of great work but is inevitably an important point of reference rather than an expression of deep insight. It has received more than 1,000 citations; in 2004 alone it received 522. The next most cited paper from 2002-03 (on the yeast proteome) received 351 citations. Only 50 of the roughly 1,800 citable items Nature published in those two years received more than 100 citations in 2004; the great majority of its papers received fewer than 20. This emphasizes that impact factors reflect the average number of citations per paper: most papers published in a high-impact journal will be cited fewer times than the impact factor suggests, and some will not be cited at all. The journal impact factor should not be used as a substitute measure of the impact of individual articles in the journal.
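The skew described above is easy to demonstrate: for a heavy-tailed citation distribution, the mean (which is what the IF reports) sits well above the median. The counts below are invented for illustration:

```python
from statistics import mean, median

# Hypothetical citation counts for ten papers in one journal:
# one blockbuster, a few modest papers, several barely cited.
citations = [522, 40, 18, 9, 6, 4, 3, 2, 1, 0]

print(mean(citations))    # 60.5 -- the "impact-factor-like" average
print(median(citations))  # 5   -- what a typical paper actually receives
```

Here a single heavily cited paper drags the average an order of magnitude above what the typical paper receives.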
Alternative journal impact factor (IF) metrics
- Bernard Becker Medical Library Project, Assessing the Impact of Research is a model for assessment of research impact
- Eigenfactor.org is an academic research project at the University of Washington. Developed by West and Bergstrom, the Eigenfactor is a rating of the total importance of a scientific journal. It is reminiscent of Google's PageRank algorithm in that journals are rated according to "link love", i.e. incoming citations, with citations from highly-ranked journals weighted more heavily than those from poorly-ranked ones. An Eigenfactor score rises with the total impact of a journal, so journals that generate a higher impact in their field have a larger Eigenfactor score. The Eigenfactor approach is also used in network analysis to develop methods for evaluating the influence of scholarly journals and mapping academic output in various disciplines.
- In 2014, Scopus announced a new journal metric called Impact per Publication (IPP). IPP measures the ratio of citations per article published, and provides an additional metric for comparing and evaluating journals. Access the IPP metric from the "Compare journals" tool in Scopus.
- SCImago Journal Rank (SJR) is an open access journal metric based on data from the Scopus® database (Elsevier B.V.); it uses an algorithm similar to PageRank and provides an alternative to the impact factor (IF). Average citations per document in a two-year period, abbreviated Cites per Doc. (2y), is another SCImago index that measures the scientific impact of an average article published in the journal; it is computed using the same formula as Thomson Reuters' journal impact factor.
- Y-Factor (see Bollen et al, 2006) proposes to use Google PageRank with the ISI impact factor to distinguish the "quality" of citations and improve IF calculations.
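The PageRank-style weighting behind Eigenfactor and the Y-Factor can be sketched as power iteration over a toy citation network (a simplified illustration only; the real Eigenfactor algorithm also discards self-citations and normalises by article counts):

```python
# PageRank-style journal rating: iterate so that citations from
# highly-ranked journals count for more than those from low-ranked ones.
def pagerank_scores(C, damping=0.85, iters=100):
    """C[i][j] = citations from journal j to journal i."""
    n = len(C)
    col_sums = [sum(C[i][j] for i in range(n)) for j in range(n)]
    # column-normalise: each journal distributes its outgoing citations
    M = [[C[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    r = [1.0 / n] * n  # start from a uniform rating
    for _ in range(iters):
        r = [(1 - damping) / n + damping * sum(M[i][j] * r[j] for j in range(n))
             for i in range(n)]
    return r

# Three hypothetical journals; journal 1 is cited heavily by the other two.
C = [[0, 1, 2],
     [5, 0, 4],
     [1, 2, 0]]
scores = pagerank_scores(C)
print(scores.index(max(scores)))  # journal 1 comes out on top
```

Because each journal's outgoing citations are normalised, a citation from a journal that cites sparingly (or is itself highly rated) carries more weight than one from a journal that cites everything.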
Rank  Impact Factor               PageRank                Combined
1     52.28 ANNU REV IMMUNOL      16.78 NATURE            51.97 NATURE
2     37.65 ANNU REV BIOCHEM      16.39 J BIOL CHEM       48.78 SCIENCE
3     36.83 PHYSIOL REV           16.38 SCIENCE           19.84 NEW ENGL J MED
4     35.04 NAT REV MOL CELL BIO  14.49 PNAS              15.34 CELL
5     34.83 NEW ENGL J MED         8.41 PHYS REV LETT     14.88 PNAS
6     30.98 NATURE                 5.76 CELL              10.62 J BIOL CHEM
7     30.55 NAT MED                5.70 NEW ENGL J MED     8.49 JAMA
8     29.78 SCIENCE                4.67 J AM CHEM SOC      7.78 LANCET
9     28.18 NAT IMMUNOL            4.46 J IMMUNOL          7.56 NAT GENET
10    28.17 REV MOD PHYS           4.28 APPL PHYS LETT     6.53 NAT MED
The table shows the top 10 journals by Impact Factor, PageRank and a modified system that combines the two. Nature and Science are regarded as the most prestigious journals, and in the combined system they come out on top. The New England Journal of Medicine is cited more often than Nature or Science which might reflect the mix of review articles and original articles it publishes. It may be necessary to analyze data for a journal in light of a detailed knowledge of the journal literature.
According to Acharya et al (Rise of the rest: the growing impact of non-elite journals. arXiv, 16 October 2014), articles in non-elite journals (traditionally, those that have not been cited much) have started to be cited more in the last ten years, due in part to Google Scholar.

References
- Bollen J, Rodriguez MA, Van de Sompel H. Journal status. Scientometrics. 2006;29(3-5):669–687.
- Brown H. How impact factors changed medical publishing—and science. BMJ 2007;334(7593):561–564.
- Brown T. Journal quality metrics: options to consider other than impact factors. Am J Occup Ther. 2011;65:346–350.
- Colaco M, Svider PF, Mauro KM, Eloy JA, Jackson-Rosario I. Is there a relationship between NIH funding and research impact in academic urology? J Urology. 1 March 2013.
- Cronin B, Meho L. Using the h-index to rank influential information scientists. JASIST. 2006;57:1275–1278.
- Cronin B. Bibliometrics and beyond: some thoughts on web-based citation analysis. J Information Science. 2001;27(1):1–7.
- Egghe L, Rousseau R. An h-index weighted by citation impact. Information Processing & Management. 2008;44:770–780.
- Egghe L. An improvement of the h-index: the g-index. ISSI Newsletter. 2006;2:8–9.
- Falagas ME, Kouranos VD, Arencibia-Jorge R, Karageorgopoulos DE. Comparison of SCImago journal rank indicator with journal impact factor. FASEB J. 2008 Aug;22(8):2623–8.
- Franceschet M. A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics. 2010;(3):243–258.
- Garfield E. The agony and the ecstasy: the history and meaning of the journal impact factor. International Congress on Peer Review And Biomedical Publication
- Garfield E. Citation indexes to science: a new dimension in documentation through association of ideas. Science. 1955;122(3159):108–11.
- Garfield E. Citation analysis as a tool in journal evaluation. Science. 1972;178(4060):471–479.
- Hirsch JE. An index to quantify an individual’s scientific research output. PNAS. 2005;102:16569–16572.
- Kuo W, Rupe J. R-Impact: reliability-based citation impact factor. IEEE Transactions on Reliability. 2007;56(3):366–367.
- Lehmann S, Jackson AD. Measures for measures. Nature. 2006;444:1003–1004.
- Lehmann S, Jackson AD, Lautrup BE. A quantitative analysis of measures of quality in science; 2007.
- Li X, Thelwall M, Giustini D. Validating online reference managers for scholarly impact measurement. Scientometrics. 2011;91(2):461–471.
- Liang LM. H-index sequence and h-index matrix: Constructions and applications. Scientometrics. 2006;69:153–159.
- Lokker C, Haynes RB, Chu R. How well are journal and clinical article characteristics associated with the journal impact factor? a retrospective cohort study. JMLA. 2012;100:28–33.
- Pickard KT. Impact of open access and social media on scientific research. J Participat Med. 2012 Jul 18;4:e15.
- Rad AE, Brinjikji W. The h-index in academic radiology. Acad Radiol. 2010 May 14.
- Siebelt T, Pilot P, Bloem RM, Bhandari M, Poolman RW. Citation analysis of orthopaedic literature; 18 major orthopaedic journals compared for Impact Factor and SCImago. BMC Musculoskelet Disord. 2010 Jan 4;11:4.
- Spearman CM, Quigley MJ. Survey of the h index for all of academic neurosurgery: another power-law phenomenon? J Neurosurg. 2010 May 14.
- Van Noorden R. Metrics: a profusion of measures. Nature. 2010;465:864.
- Vucovich LA, Baker JB, Smith JT. Analyzing the impact of an author’s publications. JMLA. 2008;96(1):63–66.
- Wu Q. The w-index: a significant improvement of the h-index. Physics and society. 2008