(Translation of the article published in COMeIN, originally written in Catalan.)
Scientists have been debating a single piece of data, an indicator, for years. It is as simple and clever as when Eugene Garfield first conceived it. But the cleverer it is, the more headaches it has caused. Of course, we are talking about the impact factor. Yes, it is just an indicator of the citations a paper can be expected to receive in a given period. So simple, and yet it has been debated for 40 years.
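Garfield's two-year impact factor is indeed a simple ratio. A minimal sketch of the calculation, with invented numbers for a hypothetical journal:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in year Y to items
    the journal published in years Y-1 and Y-2, divided by the number of
    citable items it published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2014 to its 2012-2013 papers,
# of which there were 200 citable items.
print(impact_factor(600, 200))  # 3.0
```

The controversy discussed here is not about the arithmetic, which is trivial, but about everything this single number is then used to decide.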
The main premise of the impact factor, as we know, is that a citation can be considered evidence of quality. The problem arises when many decisions come to rely on it. Originally, the impact factor of a journal was created to help libraries decide which journals to buy. Later, it came to be used for decisions about scientists, research groups and institutions, and about their quality. Using the impact factor as a sign of quality can mean comparing dissimilar disciplines, such as mathematics and librarianship. It is too broad a measure to differentiate them.
To be fair, it is also true that in recent times the impact factor has been balanced by other new indexes, such as the Hirsch or h-index. Accordingly, the focus when considering quality and citations is now more on articles than on journals. We are now probably evaluated more by the citations our own papers receive from other papers. But the underlying assumption remains that a citation is an indicator of quality.
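The h-index mentioned above shifts the unit of analysis to the individual author's articles: it is the largest h such that the author has h papers with at least h citations each. A small sketch:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    h = 0
    # Sort citation counts from highest to lowest and walk down the list.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4 and 3 times has h = 4:
# four papers with at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that the input is still nothing but citation counts, which is the point made above: the indicator changed, but the premise that a citation signals quality did not.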
Our academic universe is full of data and tools for monitoring it. But many years ago, before the Internet, there was only one indicator to measure it: citations. We could not know how many times a paper was read, recommended, photocopied or printed. Electronic journals now allow it. The great pioneers have been journals and institutions like PLOS, with its Article-Level Metrics. So an article can now be measured by many more data and indicators than citations alone. The concept of academic impact may become much broader.
Nevertheless, and this is not really a criticism, deep down we are describing the same reality, an article, with more data and more indicators describing its use. We will know better how the scientific knowledge in a paper behaves and spreads. Perhaps quality and impact will become two different realities, meaning different things. We will know how many people have cited, downloaded, tweeted and retweeted it; we will know the whole digital trail of an article. But what will this really tell us about the quality of the paper? We could have a paper with notable results that is not widely spread. We could also have a paper with poor methodology but conclusions that are very attractive to the media: for instance, anything related to health, the future of the media and so on, or even sensitive conclusions related to politics.
So, from my point of view, the main advantage of altmetrics is that we will no longer depend on a single indicator, one that every year can be seen as a judgement on scientific editors, librarians and researchers. But there are also weak spots and doubts to be resolved. For instance, how should we assign a weight to each indicator? Is it better to be tweeted or downloaded? Or is it better to be shared via Mendeley? Is being tweeted an indicator of the quality of your work, or of the strength of your academic network? How can we prevent social media from creating citation networks unrelated to the quality of research? We are perhaps in new territory, and an ethics for the academic use of social media has yet to be created. Times are changing, that much is certain.
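To make the weighting question concrete, here is a purely illustrative sketch of one naive way to combine several altmetric signals into a single score. The indicator names and weights are invented for the example; choosing and justifying them is exactly the open problem described above.

```python
# Invented weights for illustration only: is a download worth a tenth of a
# citation? A tweet a twentieth? Nobody knows, and that is the problem.
WEIGHTS = {"citations": 1.0, "downloads": 0.1, "mendeley_saves": 0.3, "tweets": 0.05}

def altmetric_score(counts, weights=WEIGHTS):
    """Weighted sum of per-indicator counts; unknown indicators count zero."""
    return sum(weights.get(name, 0.0) * value for name, value in counts.items())

# A hypothetical paper's digital trail.
paper = {"citations": 12, "downloads": 340, "mendeley_saves": 25, "tweets": 80}
print(altmetric_score(paper))
```

Any such linear combination also inherits the gaming problem raised above: whichever indicator is weighted most heavily is the one a well-organized network will inflate.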
To conclude, new indicators for evaluating research imply new uses and changes in the way researchers create, behave and spread scientific knowledge. So these are new times for building different models of scientific communication. At last, so many years after the Internet changed many other fields such as music, cinema and literature, it will change the way we disseminate scientific information. We assume it will all be for the best.
Aranda, D. (2012). “ISI ho cremem tot?”. COMeIN. Revista dels Estudis de Ciències de la Informació i la Comunicació, no. 17 (December).
Lin, J.; Fenner, M. (2013). “Altmetrics in Evolution: Defining and Redefining the Ontology of Article-Level Metrics”. Information Standards Quarterly, vol. 25, no. 2, p. 20.
López-Borrull, A. (2014). “Retos de la comunicación científica”. Anuario ThinkEPI, vol. 8.
López-Borrull, A. (2013). “Jo encara diria més, senyor Dupont”. COMeIN. Revista dels Estudis de Ciències de la Informació i la Comunicació, no. 18 (January).
Roemer, R. B.; Borchardt, R. (2012). “From bibliometrics to altmetrics: A changing scholarly landscape”. College & Research Libraries News, vol. 73, no. 10, pp. 596-600.
If you want to cite this article, you can use this citation:
López-Borrull, A. (2014). “Almetrics, or altenative metrics, statistics that describes us more. But, best?”. Fahrenheit 2014. https://fahrenheit2014.wordpress.com/2014/02/16/almetrics-or-altenative-metrics-statistics-that-describes-us-more-but-best/