Posted on 12/31/2013 at 02:56:50 PM by Student Blogger
By Corrie Whisner, PhD
It's that time of year again, a time for life-reflection and resolutions for a better future. This year I've challenged myself to make a scientific resolution for 2014. Following the recent scrutiny of journal impact factors, my new year's scientific resolution is to focus more on the thrill of research and less on journal citation metrics, which, according to Science editor Bruce Alberts, can harm scientific advancement by promoting self-comparison and hindering scientific progress.
As a young scholar I have been given lots of advice, usually helpful although sometimes overwhelming, regarding the use of journal impact factors and personal citation measures (e.g., the h-index) to gauge success in academics. For those who haven't yet been introduced to these metrics, let me fill you in. Impact factor is a score given to each scientific journal based on the average number of times its recent articles have been cited in other publications. The h-index is similar but instead measures an individual scientist's impact: a researcher has an h-index of h if h of his or her papers have each been cited at least h times.
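To make the h-index definition concrete, here is a minimal sketch of how it can be computed from a list of per-paper citation counts (the function name and the sample counts are my own illustration, not from any official tool):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Example: six papers with these citation counts
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3
```

In this example the scientist's third most-cited paper has 5 citations (at least 3), but the fourth has only 3 (fewer than 4), so the h-index is 3.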
At its inception, the impact factor was designed to assess the quality of scientific journals. However, growing competition in scientific fields has led to the misuse of such metrics, with scientists being ranked, tenured, and promoted based on the citation weight of their publications. Alberts' editorial explains how this use of citation metrics can bias evaluation against less-cited fields (e.g., the social sciences and humanities) in comparison with STEM fields. He also states that the practice leads to a phenomenon called "me-too science," in which scientists strive to publish in already highly cited areas rather than explore new fields.
Nobel Prize winner Randy Schekman recently voiced a similar opinion, stating that his lab would no longer pursue publication in elite journals because they artificially restrict the number of articles they accept. He also calls the impact factor a "toxic influence" because papers are not necessarily cited for containing good science: heavily cited papers can also result from bad science, an increasing problem as high-impact journals have seen a greater volume of falsified data and paper retractions.
In an effort to combat the improper use of citation metrics, a group of scientists met in December of 2012 to create the San Francisco Declaration on Research Assessment (DORA), which dissuades scientists from using the impact factor as a measure of individual scientific contribution or in promotion, hiring, and funding decisions. While the debate about the impact factor continues, my resolution is to publish good scientific work and worry less about comparing myself to other scientists in my field. So, please consider supporting the positive use of impact factor metrics in your own research, and share your personal scientific resolutions below.