Earlier this semester, when I went to use my lab’s pH meter, it was in bad shape. The storage buffer had dried out, the internal solution had crystallized, and the standards had expired over a decade ago. In a way, this pH meter fell victim to the tragedy of the commons. It was only taken care of while people directly benefitted from it, and it faced neglect once those caretakers left the lab. No one else in the lab had any particular reason to maintain it, so it deteriorated. Because of the neglect of a piece of shared equipment, my experiment was delayed.
This scenario is not unlike the peer review process, which greatly benefits those receiving the reviews but is an act of volunteer service for the reviewers. Because many people don’t see a personal benefit to doing reviews, the work tends to fall to the bottom of their to-do lists. Consequently, relatively few scientists invest substantial time in the process. According to a 2016 article (linked below), 20% of scientists do about 70-95% of reviews. When good reviewers are hard to find, review quality can suffer, and papers can take longer to get published. Because peer review provides few tangible benefits to the person writing the review, the process suffers from the tragedy of the commons.
The health of both shared equipment and the peer review process currently requires people to volunteer their time. The volunteer nature of these activities has an inherent flaw: when schedules fill up, both equipment maintenance and peer review tend to fall through the cracks. My lab’s faulty pH meter was easy to fix, since replacing the parts that broke during a period of neglect is straightforward. Peer review, however, doesn’t come with replacement parts. While various tactics have been suggested to improve peer review, significant change is unlikely unless institutions shift the way they do things to better value and support the role that peer review plays in the scientific process.