DORA was endorsed by ~150 leading scientists and 75 scientific organizations.
See also Altmetrics | Author impact metrics | Bibliometrics | Impact factors | Scientometrics | Webometrics
In 2013, the San Francisco Declaration on Research Assessment (DORA) (see Wikipedia), initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, formally acknowledged the need to improve how scientific research journals are evaluated. The group met in 2012 during the ASCB Annual Meeting in San Francisco and circulated a draft declaration among stakeholders. DORA as it now stands has benefited from input by many of the original signatories. DORA is a worldwide initiative covering all scholarly disciplines, and it encourages individuals and organizations concerned about the appropriate assessment of scientific research to sign the declaration.
- DORA questions the use of journal impact factors as the main tool for journal assessment and proposes other, more meaningful approaches
- DORA looks at the need to eliminate journal-based metrics in funding, appointment, and promotion considerations
- DORA calls for research to be assessed on its own merits rather than on the basis of the journal in which it is published; it also stresses the growing need to capitalize on opportunities provided by online publication, such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact
- journal impact factors (the average number of citations that a journal's articles receive over a given period) should not be used "as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contribution, or in hiring, promotion or funding decisions."
- DORA argues that articles and researchers should be judged on "their own merits" and emphasizes the "value of all research outputs," not just publications. Altmetrics may be part of the solution, but there are dangers of gaming the system and difficulties in capturing some channels of impact
- According to Bertuzzi et al. (2013): "...the Journal Impact Factor (JIF), developed to help librarians make subscription decisions, has de facto been repurposed by researchers, journals, administrators, and funding and hiring committees as a proxy for the quality and importance of research publications. The result of this shortcut is that researchers are judged by where their articles are published rather than by the content of their publications. This is fundamentally wrong."
- citation metrics (also bibliometrics) can be useful but are problematic because citation practices vary widely across fields; they can put excellent researchers at a disadvantage while advancing weaker researchers simply because they work in popular areas where papers accumulate citations easily
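The objection to the impact factor in the points above comes down to simple arithmetic: the JIF is a mean, and citation distributions are highly skewed, so a handful of heavily cited papers can dominate the figure while the typical article is cited rarely. A minimal sketch (with invented citation counts, purely for illustration) makes the point:

```python
# Illustrative sketch: the journal impact factor is essentially the mean
# number of citations per article over a counting window. The citation
# counts below are hypothetical, not real data for any journal.

def impact_factor(citations_per_article):
    """Mean citations per article -- the JIF calculation in miniature."""
    return sum(citations_per_article) / len(citations_per_article)

# A hypothetical journal: most articles are rarely cited,
# but two "blockbuster" papers dominate the total.
citations = [0, 1, 0, 2, 1, 0, 3, 150, 90, 1]

jif = impact_factor(citations)
median = sorted(citations)[len(citations) // 2]

print(f"impact factor (mean): {jif:.1f}")  # 24.8
print(f"median citations:     {median}")   # 1
```

Judged by the mean, this hypothetical journal looks outstanding; judged by the median, a typical article in it is cited once. This is why DORA argues the JIF says little about the quality of any individual article published in the journal.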
In the News