New Peer Review Paradigms for Astrobiology and Origin of Life

Abstract
Scientific publishing is changing. More scientific papers are being published now than at any other time in history. The digital era is facilitating new publishing practices, such as preprint servers and Open Access journals. At the same time, there is growing concern among scientists about the integrity and equity of the peer review process as we know it. These new practices present the scientific community with an array of new opportunities, which may help revitalize peer review and provide new means of scientific discourse. The astrobiology and origin of life (OoL) communities, as young and dynamic fields of inquiry, can and should lead the broader scientific community in seizing these new opportunities by creating and utilizing new collaborative platforms.


Introduction
The process of peer review is essential for scientific progress. For well over three hundred years the mechanisms used to instantiate some of the simple ideals of peer review have remained largely unchanged, until now. The digital era is making new mechanisms of scientific discourse possible by allowing information and methodology to be readily disseminated. A large new generation of technologically literate and globally connected scientists is demanding science move faster to keep pace with a dynamic world [3]. Meanwhile, one of the key tenets of peer review, the reproducibility of results, seems to be on the decline [2], and concomitantly the number of retractions in scientific literature is on the rise [8]. I will outline some of the key features of these issues and argue that they are all symptoms of a peer review process which could be improved by utilizing new collaborative platforms.


What are the problems in peer review?
Scientific publications take a long time to process. The typical time between first submission and publication in traditional journals can be months or years, depending on the publisher. On its face, this is not a huge problem: peer review is a key feature of scientific progress, and the means by which it is executed should be diligent and thorough, which inevitably takes time. However, manuscripts are often rejected not because the results presented are invalid, but rather because the writing could be clarified, because the editor of the journal finds it to be ill-placed, or because reviewers require more experiments to validate the results. In many of these situations, it's preferable to have faster, direct communication with the peer reviewers, rather than slower anonymized communication mediated by a busy editor. Presently, there are few tools to facilitate the kind of rapid, public, and collaborative review process which could improve the clarity and quality of manuscripts prior to their submission to journals.

In spite of the lengthy review process involved in most journal submissions, the number of retractions in scientific literature is rising [8]. This could be due to a number of factors, including the possibility that invalid results are being found and refuted faster, which on its face is a good thing for the peer review process. However, a recent Nature survey of more than 1,500 scientists reported that 90% agreed there is a reproducibility crisis in modern science, and more than half agreed that the crisis is "significant" [2]. Across every discipline surveyed, more than half of the respondents reported failing to reproduce someone else's work, but only 23% reported that they had tried to publish failed reproduction attempts, and only 13% had successfully published a failure to reproduce results. When asked about the factors which might be contributing to these issues, respondents overwhelmingly agreed that pressure to publish is an important factor, among other reasons such as insufficient peer review, poor instruction, and insufficient statistical tests.


An overwhelming pressure to publish is a common complaint among contemporary scientists [7], and particularly among early career scientists [1]. One of the main driving factors behind this pressure is that citation statistics (so-called bibliometrics) are one of the only means available to quantitatively evaluate scientists [6]. The rise of interdisciplinary science [9] compounds this problem, because patterns of publication vary between disciplines and fields, making potential candidates for positions or grants difficult to evaluate and compare. Worse, bibliometrics are typically blind to the peer review process. A scientist who publishes less of their own work, but reproduces others' results and/or gives clear, detailed, and constructive feedback to other scientists, may be contributing much more to their field than a scientist who quickly publishes many articles but provides limited, unconstructive, or incoherent feedback in their reviews. Bibliometrics alone would rank the latter above the former in terms of contribution to their field.


Bibliometrics identify a scientist's reputation with the number of citations their work has received. Accordingly, scientists typically attempt to publish in the highest-impact journals possible. While high-impact journals are perceived as prestigious in the scientific community, they often limit access to scientists' work by requiring subscriptions. These subscriptions typically cost upwards of $200 per year per journal, and individuals who do not have access via their institution typically cannot access work published in these journals. There are alternatives: so-called Open Access (OA) journals. These journals do not charge users to read content; instead, they charge authors to have their work published. Publishing a single OA article can cost upwards of $1,200. While many authors prefer to have their work available to all, the financial burden of OA publishing is often not justifiable for many scientists, particularly graduate students and post-docs.


What are the opportunities for peer review?
In physics, mathematics, and computer science, the use of the preprint server arXiv.org is ubiquitous. ArXiv.org acts as a supplement to traditional peer review mechanisms in those fields. It hosts scientific manuscripts which have not yet been subjected to complete peer review, typically uploaded at the same time as their submission to a scientific journal; these manuscripts are called preprints. By hosting work in progress, arXiv.org (and other preprint servers) facilitate rapid communication of new results, discoveries, and ideas. In contrast to scientific journals, which take months or years to publish newly submitted work, preprint servers take only a matter of days. While many other fields have begun to adopt preprint servers [5], their use is by no means ubiquitous in the OoL and astrobiology communities. This presents an opportunity for those communities: following suit from traditional biology, we can push to make use of preprint servers and improve the availability of our science immediately.


One of the key benefits of preprints is the ability for scientists to get feedback on their work faster than through journal-mediated peer review. For that to be productive, scientists must be willing to dedicate more of their time to reading, reviewing, critiquing, and improving the work of their peers. Most scientists are already doing this by reviewing journal submissions, but adopting preprints means that scientists will have to go out of their way to improve the work of someone with whom they might be directly competing. This might appear to be a naive waste of time for early career scientists struggling to get ahead. However, in the face of rising pressure to publish, it is becoming clear that scientists are becoming increasingly collaborative [7]. This willingness appears to be based on the fact that they can still receive credit for small contributions, allowing them to gain more publications, which are the gold standard of scientists' reputations. This suggests that if scientists were to receive some kind of useful credit for reviewing the work of others in a meaningful and constructive manner, they would make time to do so. This presents an important opportunity to improve the quality of peer review. If we can establish a means to credit reviewers (in an easily quantifiable manner) for their contributions, everyone will benefit.


While preprints are not always peer reviewed, they provide free and open access to articles which have been subjected to peer review. This is because most journals, including Nature, Science, and PNAS, allow submission of articles which have been posted to preprint servers. There is a demand among scientists for easier access to all types of articles. This demand is evidenced by the widespread use of the scientific piracy site Sci-Hub [4]. It appears that even academics with institutional access are using the site, which blatantly violates a number of intellectual property policies, because it is easier, more convenient, and more intuitive than accessing the articles through publishers. While this is clearly a missed opportunity on the part of journals and publishers, it demonstrates that scientists crave quick and convenient access to content. Preprints provide access in this manner, and they are easily searchable via common search engines such as Google Scholar, which helps authors increase the (legal) visibility of their work without having to foot the bill for an open access journal.


How can we exploit the opportunities to solve the problems?
It is becoming clear that there is a demand for new modes of scientific discourse. The current journal-centric paradigm is slowing down peer review and limiting access to scientific content, while devaluing diligent and constructive peer review. ArXiv.org is a clear example of the power and utility of preprints, but widespread adoption of preprints alone will not solve some of these key problems. Scientists need to be incentivized to critique and improve other people's work. In order to do that, scientific articles must be easily accessible, and scientists must be able to communicate rapidly and freely.

I suggest that astrobiologists and origin of life researchers work together to develop a new collaborative platform (perhaps here on SAGANet.org) which will allow users to post preprints and interact with peers by reviewing and reproducing results. This new platform would provide easy access to preprints, as well as new metrics to evaluate scientists. A reputation system would need to be put in place to track the contributions of individual scientists. Such a system could identify users' areas of expertise (from the perspective of the community), as well as the number of experiments and results they've confirmed and the quality of the comments and suggestions they've provided to their peers. A reputation system of this type could extend beyond experiments and articles to include contributions to databases and software platforms, allowing for enhanced collaboration throughout all aspects of scientific work [6]. This platform could be built as an extension of SAGANet that accommodates a reputation system. Stack Exchange is a very successful platform which hosts question-and-answer forums and utilizes an effective reputation reward system. While the goals of a peer review platform would be very different from those of Stack Exchange, we could learn a lot from the lessons of that site. The preprints could either be hosted on a new Astrobiology/OoL server, or they could be hosted on an existing preprint server. ArXiv.org has a developing API and a quantitative-biology category, which might appeal to the physical scientists already using arXiv. While the technical details of this endeavor will undoubtedly be complicated, the tools and techniques are out there.
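As a small illustration that the tools are indeed out there, here is a minimal sketch (Python, standard library only) of how such a platform could pull recent preprint metadata from the public arXiv API. The endpoint and the `search_query`/`sortBy` parameters follow arXiv's documented query interface; the choice of category (`q-bio.PE`, arXiv's Populations and Evolution listing, as a stand-in for OoL-relevant work) and the function name are my own assumptions for the example.

```python
import urllib.parse

# Public arXiv API endpoint (returns Atom XML).
ARXIV_API = "http://export.arxiv.org/api/query"

def build_preprint_query(category="q-bio.PE", max_results=5):
    """Build an arXiv API URL listing the newest preprints in a category."""
    params = {
        "search_query": f"cat:{category}",  # restrict to one subject category
        "sortBy": "submittedDate",          # newest submissions first
        "sortOrder": "descending",
        "start": 0,
        "max_results": max_results,
    }
    return ARXIV_API + "?" + urllib.parse.urlencode(params)

# Fetching the URL (e.g. with urllib.request.urlopen) yields an Atom feed
# whose entries carry the title, authors, abstract, and PDF link of each
# preprint -- everything a review platform would need to display a listing.
print(build_preprint_query())
```

A platform built on SAGANet would only need to parse the returned Atom feed to attach its own review threads and reputation scores to existing arXiv preprints, rather than hosting manuscripts itself.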

Beyond any technical or logistical challenges presented by establishing this type of platform, there will be a huge social challenge. We will have to convince scientists to reveal their work prior to publication, which is still a major hurdle in many fields. We will have to convince other scientists to review and recreate others' experiments whenever possible. This will not be easy. However, it won't require the support of any institution, university, or funding agency, and it won't depend on the support of senior researchers. It will depend on the support of graduate students and post-docs who are passionate about doing the best science they can. Astrobiology and origin of life research investigates some of the deepest, most conceptually challenging questions in science. A new collaborative platform for those communities would allow scientists in those fields to hold themselves to the highest possible standards of peer review. It would allow those fields to be the most open, equitable, and accessible in the scientific community. Peer review has allowed science to achieve the unimaginable year after year. In order to tackle the hardest problems of our time, we need to make peer review work better, and the tools to do it are feasible; we just need to build them.


If you are interested in making these improvements in our fields, please let me know.
Review this piece of writing, tell me what you think, comment below, or contact me directly at cole.mathis@asu.edu. I look forward to your comments and criticism.


References
[1] Measures of success. Science, 352(6281):28-30, March 2016.
[2] M. Baker. 1,500 scientists lift the lid on reproducibility. Nature, 533(7604):452-454, May 2016.
[3] N. Bhalla. Has the time come for preprints in biology? Molecular Biology of the Cell, 27(8):1185-1187, 2016.
[4] J. Bohannon. Who's downloading pirated papers? Everyone. Science, April 2016.
[5] J. Inglis and R. Sever. bioRxiv: a progress report, 2016.
[6] H. Piwowar. Altmetrics: Value all research products. Nature, 493(7431):159, 2013.
[7] A. Plume and D. van Weijen. Publish or perish: The rise of the fractional author.
[8] R. G. Steen, A. Casadevall, and F. C. Fang. Why has the number of scientific retractions increased? PLoS ONE, 8(7):e68397, 2013.
[9] R. Werner. The focus on bibliometrics makes papers less useful. Nature, 517(7534):245, January 2015.


Comment by Harrison on August 2, 2016 at 1:05pm

And to add to my already long comment, SE just came out with the documentation aspect of their site. So instead of the standard question+answer format, people collaboratively contribute to documents (kind of like a wiki from my first impression), and the documents are based on a programming language or topic. Maybe way down the road, this is how all scientific research will be done. You just have living documents for particular research areas that people update, contribute to, and contextualize as experiments and research get done. /endcrazyidea

Comment by Harrison on August 2, 2016 at 1:01pm

I really like your ideas Cole. Perhaps this is too much of a paradigm shift (or just a flawed idea), but I think that instead of using paper count + impact factor as a measure of a scientist's productivity, it would be nice if it was pushed towards some combination of Stack Exchange type reputation + GitHub type activity (kind of like it is in computer science oriented fields). Really this would be due to the increasingly collaborative nature of work, and the desire to incentivize and quantify the not-so-sexy but still important work of scientists (eg. editing, reviewing, reproducing, measuring basic reaction rates, etc...).

So you upload a paper, and people can comment on recommended changes or suggestions. Or volunteer to "review it". You can also volunteer to reproduce or verify parts of the work. All these things are rewarded with tracking your contribution in a public way, and giving you some kind of reputation for all of these things. So now you can earn public recognition not only for publishing, but for reviewing, editing, making suggestions, or reproducing aspects of work. I also imagine that the reputation you'd earn for these things would be proportional to some kind of voting system, like what you see on Stack Exchange. So people can up-vote and put bounties on papers they think are important, and you'd earn more reputation for editing, reviewing, or reproducing parts of those highly voted for papers than lower interest papers. But just like Stack Exchange (SE), you'd of course still earn reputation for contributing to improving the lower-interest work. On SE, you can earn a lot of reputation (which gives you more privileges) by asking or answering a few popular/important questions (sometimes hard to do), or you can earn it by asking or answering more-than-a-few lower popularity/important questions.

Maybe this is too pie-in-the-sky, but I don't think it would actually be much more (if any more) work for scientists than the traditional publish-and-review process. This is just a more distributed way to do it, but it also allows people to focus on only the papers or topics that they want, without respect to any particular discipline-based boundary.

Comment by Sanjoy Som on June 20, 2016 at 4:11pm

I love this idea, and I'm pretty sure SAGANet can help make this happen!

© 2017 Blue Marble Space, a non-profit organization committed to science and science outreach.