December 2014
A recent post on the SciELO in Perspective blog analyzed a study proposing a taxonomy of the motives for citing publications in scientific articles. The authors suggested that the range of motives leading authors to cite one publication or another, regardless of field of knowledge, can be grouped into four main categories and several subcategories.
In recent years, the use of citation-based impact indicators to evaluate researchers, research projects, hiring decisions and career promotions has drawn much criticism. The limitations of the Impact Factor are numerous, and more comprehensive and reliable article-level alternative metrics are now available that take into account download data, sharing on social networks, and coverage in print and online media.
However, scientometrics has relied largely on citation-based metrics for about half a century. Brazil, whose journals indexed in the Journal Citation Reports represent only 1% of the total number of journals indexed, bases the evaluation of the scientific production of its graduate programs primarily on the Impact Factor, as measured in the Web of Science database (Thomson Reuters).
Two studies recently published in Nature discuss citations and scientific impact. The first, authored by Richard Van Noorden and coworkers, analyzed the 100 most cited articles of all time, based on the Science Citation Index (Web of Science, Thomson Reuters), for their contribution to science. The citation counts of these top 100 papers range from 12,000 to 305,000, and the analysis reveals that most of these highly cited papers describe biological and biochemical techniques, two of which resulted in Nobel Prizes. Also present in the top-100 list are articles on bioinformatics, phylogenetics (the study of evolutionary relationships among species), statistics, density functional theory (a computational method for studying the behavior of electrons in a material in order to predict its properties) and crystallography. It is noteworthy, however, that an article reporting a true breakthrough, perhaps the greatest of the twentieth century in the field of biochemistry – the discovery by Watson and Crick of the DNA double-helix structure in 1953, which led to a Nobel Prize – received relatively few citations: about 5,000. Another example is the article by Farman, Gardiner and Shanklin reporting the discovery of the hole in the ozone layer in 1985, which received only 1,871 citations.
The second article, by John Ioannidis and coworkers, reports a study in which the most prolific authors in the biomedical sciences were interviewed and asked to score their ten most cited articles in six ways. The authors set out to answer questions such as: Are the most cited articles the most important ones? Does science progress mainly through evolution or through revolution? Are these mutually exclusive or complementary processes, and which one is better reflected by high citation rates? Were surprising results harder to publish? The study has many interesting findings; however, instead of answering these questions, it raises even more.
The research methodology consisted of sending questionnaires to the 400 most cited authors in the biomedical sciences in the 1996-2011 period, asking for their view of their 10 most cited articles published between 2005 and 2008; they were asked to score each article from zero to 100 along six dimensions, according to its influence and impact. The restricted publication window avoids two biases: old, highly cited articles tend to become stereotyped and are often treated as canonical, while more recent work has not yet had time to accrue enough citations.
About one third of the researchers – 123 of them – responded to the questionnaire, listing 1,214 papers overall. The questionnaire asked the authors to rate their work in six categories: Continuous Progress, Broader Interest and Greater Synthesis, classified as Measures of Evolution; and Disruptive Innovation and Surprise, classified as Measures of Revolution. Publication Difficulty was not assigned to either group, but correlated mostly with the Measures of Revolution. Researchers tended to rank their most influential articles in the first three categories, and their less cited articles in the last three. For most of them, the most influential articles were published easily, with some exceptions. The mean and median of the self-assigned scores stood at around 50 for the first three categories, and between 20 and 40 for the last three.
Only 20 researchers (16%) indicated that their most relevant papers published in 2005-2008 were not among their 10 most cited articles; these 20 articles, however, still rank among the most cited articles of the period. Fifty-two papers were evaluated by at least two authors, indicating co-authorship. Agreement between co-authors, both on the category in which the article was ranked and on the scores assigned, ranged from 74% to 86% across the six dimensions, although the limited sample size must be taken into account.
The authors expected that papers would be self-assessed as either evolutionary or revolutionary in nature, but not both. The survey results do not fully bear this out, since the scores in the Broader Interest category correlated strongly with both the Measures of Revolution and the Measures of Evolution.
The study has limitations, according to its authors. First, only about 30% of the authors responded to the survey, and the non-respondents might have given higher scores for the Measures of Revolution or Evolution. Second, the publication period was limited in order to give uniformity to the data. Third, authors may evaluate their own work more positively than others would. A fourth factor is that the sample consists of researchers whose work has been widely accepted (and cited); researchers with innovative ideas that were not well received were not included in this group. The authors believe that the mark of evolution would be more frequent among moderately cited work.
The research by Ioannidis and coworkers confirms the wealth of information contained in citation analysis and how much of scientometrics remains to be studied. The authors speculate: How can an innovative paper be identified early? Does this kind of article make connections between knowledge areas that are not typically linked, or is it cited by papers in distant fields? These and many other questions emerge from this study. According to the authors, one way to answer them would be to survey the authors who cite highly cited articles, or to examine the profile of moderately cited articles. In any case, the authors are certain of the need to complement citation-based metrics with other indexes in the assessment of science.
Also see:
WULF, S. The Revista Médica project: medical journals as instruments of German foreign cultural policy towards Latin America, 1920-1938. Hist. cienc. saude-Manguinhos, vol. 20, no. 1, p. 181-201, Mar. 2013. ISSN 0104-5970.