Authors: Jason Priem, Paul Groth, Dario Taraborelli
What paper should I read next? Who should I talk to at a conference? Which research group should get this grant? Researchers and funders alike must make daily judgments about how best to spend their limited time and money, and these judgments are becoming increasingly difficult as the volume of scholarly communication increases. Not only does the number of scholarly papers continue to grow; it is joined by new forms of communication, from data publications to microblog posts.
To deal with this incoming information, scholars have always relied on filters. At first these filters were manually compiled compendia and corpora of the literature. But by the mid-20th century, filters built on manual indexing began to break under the weight of booming postwar science production. Garfield [1] and others pioneered a solution: automated filters that leveraged scientists' own impact judgments, aggregating citations as "pellets of peer recognition" [2].
These citation-based filters have grown dramatically in importance and have become central to how research impact is measured. But, like manual indexing 60 years ago, they may today be failing to keep up with the literature's growing volume, velocity, and diversity [3].
Citations are heavily gamed [4]–[6], are painfully slow to accumulate [7], and overlook increasingly important societal and clinical impacts [8]. Most importantly, they miss new scholarly forms like datasets, software, and research blogs that fall outside the scope of citable research objects. In sum, citations reflect only formal acknowledgment and thus provide only a partial picture of the science system [9]. Scholars may discuss, annotate, recommend, refute, comment on, read, and teach a new finding before it ever appears in the formal citation registry. We need new mechanisms to create a subtler, higher-resolution picture of the science system.
Journal: PLoS ONE 7(11): e48753, 2012