Author impact metrics (in contrast to journal impact factors) are a range of indices used to measure a researcher's or scientist's impact in their field. There is a very large basket of tools capable of measuring the impact of individual researchers and articles (at least 37 variants of the h-index alone, according to Bornmann et al. (2011)). The debate, however, focuses on the methods used to extract the data and the validity of the methods used to interpret it. Author metrics are quantitative methods for determining the scholarly impact of individual authors, and they are not as easy to apply in an era of social media and digital access as is sometimes assumed. For years, lists of citations have been culled from various data sources in order to examine the scholarly impact of intellectual objects and the researchers who create them. Two citation-tracking approaches are cited-reference searching and journal impact metrics; the latter refers to aggregated counts of the articles published in a journal and the number of times they are cited by others. To understand author metrics fully, author impact should be considered separately from other impact measures; it is important not to conflate metrics for authors with journal metrics, for example.
The Hirsch index (h-index) is a common author metric used by Google Scholar, the Web of Science and Scopus, all three of which can be used to locate citation counts. Google Scholar is effective at locating references and articles within the footnotes and bibliographies of papers on the web, but it may suffer from metadata errors. One of the best-curated tools is the Web of Science (WoS), a multidisciplinary tool that provides more precise counts along with visualization features. Its main competitor is Scopus.
Pan et al. (2014) introduced an author impact factor in their article entitled Author Impact Factor: tracking the dynamics of individual scientific impact. They describe it as "...a dynamic index to quantify the impact of recent work of scientists, enabling one to track the evolution of the performance of scholars along their careers, especially trends. ...[while] scientometrics is full of performance indicators, we are generally against an indiscriminate proliferation of metrics. However, AIF fills an important gap, as current indicators of individual performance are not able to follow the dynamics of careers, and are not sufficiently sensitive to major events, like sharp variations in the citation flows to an author's work". Only time will tell whether the new AIF gains traction in the author impact area; for now, it is not included in the list below. As of 2017, there may be other examples.
Inspired by Jin's The AR-index: complementing the h-index, the AWCR is an age-weighted citation rate in which the number of citations a paper has received is divided by the age of that paper. Jin defines the AR-index as the square root of the sum of all age-weighted citation counts over all papers that contribute to the h-index. In Publish or Perish, the sum is taken over all papers, as these represent the impact of a scholar's total body of work. (This allows younger and less-cited papers to contribute to the AWCR, even though they may not yet contribute to the h-index.)
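The two definitions above can be sketched in a few lines of Python. This is a minimal illustration, assuming papers are given as (citations, publication_year) pairs and that a paper published in the current year has an age of one; it is not the exact Publish or Perish implementation.

```python
from math import sqrt

def awcr(papers, current_year):
    """Age-weighted citation rate: each paper's citation count divided by
    its age. `papers` is a list of (citations, publication_year) tuples
    (a hypothetical input format); summed over ALL papers, as in
    Publish or Perish."""
    return sum(c / max(1, current_year - y + 1) for c, y in papers)

def ar_index(papers, current_year):
    """Jin's AR-index: square root of the age-weighted citation sum,
    restricted to the papers in the h-core."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    # h = number of papers whose rank does not exceed their citation count.
    h = sum(1 for i, (c, _) in enumerate(ranked, start=1) if c >= i)
    core = ranked[:h]
    return sqrt(sum(c / max(1, current_year - y + 1) for c, y in core))
```

For an author with one paper from 2015 cited 10 times and one from 2013 cited 4 times, evaluated in 2017, the AWCR is 10/3 + 4/5.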
2) Contemporary h-index
Proposed in Generalized h-index for disclosing latent facts in citation networks, this index aims to improve on the h-index by giving more weight to recent articles, thus rewarding academics who maintain a steady level of activity. The age-related weighting is parametrized; the Publish or Perish implementation uses gamma=4 and delta=1, as the authors did in their experiments. This means that the citations of an article published during the current year count four times, the citations of an article published 4 years ago count only once, the citations of an article published 6 years ago count 4/6 times, and so on.
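The weighting described above can be sketched as follows. This is an illustrative implementation only: the (citations, publication_year) input format is an assumption, and the age convention (a current-year paper counts as age 1) is chosen to reproduce the worked examples in the paragraph above.

```python
def contemporary_h_index(papers, current_year, gamma=4, delta=1):
    """Sketch of the contemporary h-index: citations are down-weighted
    by article age before the usual h-style threshold is applied.
    papers: list of (citations, publication_year) tuples."""
    # Weight each paper's citations by gamma / age**delta.
    # A current-year paper is treated as age 1, so its citations count
    # gamma (= 4) times; a 4-year-old paper's citations count once.
    scores = sorted(
        (gamma * c / max(1, current_year - y) ** delta for c, y in papers),
        reverse=True,
    )
    h = 0
    for i, s in enumerate(scores, start=1):
        if s >= i:
            h = i
    return h
```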
Eigenfactor.org is an academic research project at the University of Washington. Developed by West and Bergstrom, the Eigenfactor is a rating of the total importance of a scientific journal. Eigenfactor is reminiscent of Google's PageRank algorithm in that journals are rated according to "link love", that is, the number of incoming citations; moreover, citations from highly ranked journals are weighted more heavily than those from poorly ranked ones. An Eigenfactor score rises with the total impact of a journal, so journals that generate a higher impact in the field have a higher Eigenfactor score.
Eigenfactor is also used in network analysis to develop methods to evaluate the influence of scholarly journals and map academic outputs in various disciplines.
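The PageRank-style idea behind this kind of rating can be illustrated with a toy power iteration over a journal citation matrix. This is only a sketch of the general principle, not the actual Eigenfactor algorithm (which, among other things, uses a fixed citation window and its own normalizations); the matrix format and damping value are assumptions.

```python
def citation_rank(matrix, damping=0.85, iterations=100):
    """Toy PageRank-style iteration over a journal citation matrix.
    matrix[i][j] = citations from journal i to journal j (hypothetical data).
    Returns one score per journal; a citation from a high-scoring journal
    contributes more than one from a low-scoring journal."""
    n = len(matrix)
    out = [sum(row) for row in matrix]      # total outgoing citations
    scores = [1.0 / n] * n                  # start from a uniform score
    for _ in range(iterations):
        new = []
        for j in range(n):
            # Inflow: each citing journal passes on a share of its score.
            inflow = sum(
                scores[i] * (matrix[i][j] / out[i])
                for i in range(n) if out[i]
            )
            new.append((1 - damping) / n + damping * inflow)
        scores = new
    return scores
```

Two journals that cite only each other end up with equal scores, since neither has a structural advantage.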
In the Theory and practice of the g-index, Leo Egghe (2006) aims to improve on the h-index by giving more weight to highly cited articles. The g-index is an index for quantifying scientific productivity based on publications, calculated from the distribution of citations received by a given researcher's publications. Given a set of articles ranked in decreasing order of the number of citations they have received, the g-index is the (unique) largest number such that the top g articles together received at least g² citations.
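The g-index definition translates directly into code. A minimal sketch, assuming citation counts are supplied as a plain list of integers:

```python
def g_index(citations):
    """Egghe's g-index: the largest g such that the top g articles
    together received at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c           # cumulative citations of the top i papers
        if total >= i * i:
            g = i
    return g
```

For citation counts [10, 5, 3], the cumulative sums 10, 15 and 18 meet the thresholds 1, 4 and 9, so the g-index is 3.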
The e-index, complementing the h-index for excess citations, is the square root of the surplus of citations in the h-core beyond h². One aim of the e-index is to differentiate between scientists with identical h-indices but different citation counts. Another advantage is that it reflects the contributions of an author's highly cited papers, which the h-index usually ignores. Zhang says that the e-index "is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index."
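The e-index can be sketched as follows, again assuming citation counts arrive as a plain list of integers:

```python
from math import sqrt

def e_index(citations):
    """Zhang's e-index: square root of the citations in the h-core
    in excess of h**2."""
    ranked = sorted(citations, reverse=True)
    # h = number of papers whose rank does not exceed their citation count.
    h = sum(1 for i, c in enumerate(ranked, start=1) if c >= i)
    excess = sum(ranked[:h]) - h * h   # surplus citations beyond h**2
    return sqrt(excess)
```

For citation counts [10, 10, 5], h is 3, the h-core holds 25 citations, and the e-index is sqrt(25 - 9) = 4.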
6) Google's i10-index
The i10-index indicates the number of papers an author has written that have been cited at least ten times by other scholars. It was introduced by Google in 2011 as part of its work on Google Scholar, a search tool that locates academic and related papers. Because of problems with inaccurate counts, Google's i10-index has come under close scrutiny and criticism.
Advantages of the i10-index: it is simple and straightforward to calculate; further, My Citations in Google Scholar is free and easy to use.
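The simplicity noted above is easy to see in code; a one-line sketch, assuming citation counts as a plain list:

```python
def i10_index(citations):
    """Google Scholar's i10-index: the number of papers cited
    at least ten times."""
    return sum(1 for c in citations if c >= 10)
```

An author with papers cited 12, 10, 9 and 3 times has an i10-index of 2.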
The h-index is an indicator of a researcher's lifetime impact in their field, and it can be calculated for researchers in many fields.
A scientist who has published n articles, each cited at least n times, has an h-index (or h-factor) of n. This rewards the publication of many good articles; poorly cited papers do not raise it.
It is difficult to increase your h-index through self-citation, a common problem with raw citation counts; one or a few "hits" alone will not improve your h-factor.
The h-index only becomes reliable once you have a substantial body of research output, and it is important to emphasize that a single number cannot describe any scholar; the h-index is only one measure of their impact.
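The definition above (n articles, each cited at least n times) can be sketched directly. A minimal illustration, assuming citation counts are given as a plain list of integers:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that h papers each
    have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:       # the i-th most-cited paper has at least i citations
            h = i
    return h
```

For citation counts [10, 8, 5, 4, 3], four papers have at least four citations each but five papers do not all have five, so the h-index is 4.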
Since Hirsch introduced the h-index in 2005, this measure of academic impact has garnered widespread interest, as well as proposals for other indices based on analyses of publication data, such as the g-index, h(2)-index, m-quotient and r-index, to name a few. Several commonly used databases, such as Elsevier’s SciVerse Scopus, Thomson Reuters’ Web of Science, Google Scholar’s Citations and Microsoft’s Academic Search, provide h-index values for authors.
The h-index can be manually determined using citation databases such as Scopus and the Web of Science.
In The w-index: a significant improvement of the h-index, Wu describes an index similar to the h-index. Under Hirsch's criteria, an h-index of 9 indicates that a researcher has published at least 9 papers, each of which has been cited 9 or more times. The w-index instead indicates that a researcher has published w papers with at least 10w citations each; a researcher with a w-index of 24 therefore has 24 papers with at least 240 citations each.
Wu says his index improves on the h-index because it "accurately reflects the influence of a scientist’s top papers", and says it should be called the "10h-index". The w-index is easy to calculate using the Web of Knowledge, Scopus (Elsevier) or Google Scholar, in the same way as the h-index: search for a researcher's name and list all of their papers with the most highly cited first.
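The "10h" relationship described above amounts to a single changed threshold in the h-index calculation. A minimal sketch, assuming citation counts as a plain list of integers:

```python
def w_index(citations):
    """Wu's w-index: the largest w such that w papers each
    have at least 10*w citations."""
    ranked = sorted(citations, reverse=True)
    w = 0
    for i, c in enumerate(ranked, start=1):
        if c >= 10 * i:  # the i-th paper needs ten times the h-index bar
            w = i
    return w
```

For citation counts [250, 245, 60], all three papers clear the 10, 20 and 30 citation thresholds, so the w-index is 3.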
Citation metrics and h-indices can differ significantly depending on the bibliometric database from which the data are derived. If, for example, you compiled the publications, citations, h-index and years since first publication of 340 researchers from all over the world, their h-indices would vary by source. On average, Google Scholar yields the highest h-index, number of publications and citations per researcher, and the Web of Science the lowest: the number of papers in Google Scholar is on average 2.3 times higher, and the number of citations 1.9 times higher, than in the Web of Science. Scopus metrics are slightly higher than those of the Web of Science.
According to Minasny et al. (2013), the number of citations, the number of publications and the h-index differ vastly depending on the source of the data. In their analysis, they concluded that the choice of database affects citation and evaluation metrics, but that "bibliometric transfer functions" exist to relate the metrics from these three databases. Minasny et al. also investigated the relationship between a journal's impact factor and Google Scholar's h5-index, finding the h5-index to be a better measure of a journal's citation impact than either the 2-year or 5-year window impact factor (Minasny, 2013).
Scopus now has a journal metric called Impact per Publication (IPP). IPP measures the ratio of citations per article published and provides an additional metric for comparing and evaluating journals. Access the IPP metric from the "Compare journals" tool in Scopus. (For more information, see Impact factors.)
The model for assessment of research impact is a framework for tracking the diffusion of research outputs and activities in order to locate indicators that demonstrate evidence of biomedical research impact.
The notion of citing others is embodied in Newton's phrase "standing on the shoulders of giants": an acknowledgement that knowledge creation has a cumulative element, with researchers building on previous knowledge to create the new.