At the 2015 Force11 conference a 1k challenge was run, inviting proposals for radically improving scholarly communication. Force11 members were then invited to vote for the best proposal, and after a round of voting, three proposals received the highest number of votes. The winning proposal was ‘Crowdreviewing: the Sharing Economy at its Finest’, by Werner Liebregts of Utrecht University in the Netherlands. Liebregts’ proposal is, simply put, that peer review by a small number of anonymous reviewers – as is current predominant practice – should be replaced by a post-publication ‘rating’ by ‘anyone who has read the paper’ according to ‘a standardized set of questions’.
There was some controversy regarding this outcome, however: a large number of people signed up suddenly after the 2015 conference, seemingly only to vote on this challenge, which made it appear that they were not interested in Force11 as such, but only in voting for this proposal. We informed the three winners that we were concerned the voting process might have been somewhat skewed; the second- and third-place winners decided to decline, but Liebregts did not – since the crowd had spoken and chosen his proposal as the best, he concluded that his first place was justly deserved. Importantly, the community engaged in a positive and thoughtful discussion about this situation, and we want to thank everyone for their contributions. See the list of all 1k challenge submissions.
As an outcome of this discussion, we have decided that, to reduce the risk of skewed judging in the future, we will elect the 1k winner during the conference itself (while allowing non-participants to vote via social media). But of course, the 1k challenge itself is an example of a crowdreviewing system. So what did it teach us about crowdsourcing the evaluation of scholarly communication?
Subsequent discussions within the Force11 Steering Committee and on the website show there are certain downsides to crowdsourcing. First, there is the question of who gets to judge. The Force11 election was possibly skewed by the addition of new members who may have joined only to vote for one of the proposals, most likely at the request of one of the authors. If an author can – and given this experience and others like it, most likely will – garner supporters to vote for his or her paper, the evaluation of scientific quality is reduced to a popularity contest. As anyone who has attended middle school, browsed Buzzfeed, or followed the elections in any major country over the past decade can attest, crowds ain’t always all that wise.
Second, if everyone’s vote can propel a paper to stardom, then the author’s goal becomes to impress everyone, not just a few well-informed specialists (which we assume today’s peer reviewers to be). Just as the media now generate content only so it will be clicked [1] and politicians spout opinions only so they will be elected, in a crowdreviewed world we can expect the message – the scientific paper – to evolve to optimally suit the opinion of its evaluator – the masses. I sincerely doubt that this will lead to better science.
Liebregts is, of course, very welcome to the 1k challenge award – first of all because he won, fair and square, but also because the interesting way in which his proposal won, and the ensuing discussions, brought this matter to the attention of the Force11 community in a very compelling way. To me, the most interesting sentence in Liebregts’ proposal is the first one: ‘The current review process of scientific articles is outdated.’ If that is indeed the case, how can we replace it with a system that does lead to more interesting papers – papers that contain better science? I am greatly looking forward to continued debates on – and practices of – alternative modes of merit review at the Force11 conferences of the future.
[1] ‘Inside the Buzz-Fueled Media Startups Battling for Your Attention’, Wired Magazine, 17 December 2014, http://www.wired.com/2014/12/new-media-2/
5 thoughts on “Are the Crowds Really All That Wise? What the FORCE11 1k Challenge Taught us About Crowdreview”
1k Challenge Information
Here is a link to the 1k challenge winner
One crucial difference
Dear Anita, thank you for your well-written and well-argued piece about crowdreviewing. I certainly agree that it remains to be seen whether crowdreviewing scientific articles will work. However, the hopefully lively discussions that will follow might help us succeed in setting up a good system, better tailored to today's needs and wants. I am quite positive about this, as I think you have missed one crucial difference between the challenge and my idea, namely a fully open review process, such that everyone knows who has reviewed what, and what his or her considerations have been in assessing the article the way he or she did.
Thanks Werner, you're right, but all we know through our system is how many people voted for a specific idea; there were no opportunities to motivate the vote. Perhaps we can have some open review/comments in this conversation :-)? Thanks – Anita
Lessons from the challenge?
That's exactly the reason why I don't think the challenge has taught us anything about the crowdreviewing process that I propose. Comments and suggestions are always very welcome.
More discussions on the other page!
Just wanted to merge this page with the discussions on the 1k page – https://www.force11.org/group/force2015-ps1k-challenge-winner – let's continue the discussion there!