Last week, The Scholarly Kitchen posted an article by Angela Cochran, Vice President of Publishing at the American Society of Clinical Oncology, about the inability of publishers to deal with research fraud. She writes:

“The bottom line is that journals are not equipped with their volunteer editors and reviewers, and non-subject matter expert staff to police the world’s scientific enterprise.”

https://scholarlykitchen.sspnet.org/2024/03/28/putting-research-integrity-checks-where-they-belong/

Cochran’s argument is that although publishers manage the peer review process, it was never an expectation of peer review that they would perform ‘forensic analysis’ of datasets and associated materials. Given the huge amount of fraud currently being discovered, and presumably the huge amount that remains undiscovered, publishers and their academic volunteers do not have the resources to police the scientific literature. Instead, Cochran writes, ‘every research article submitted to a journal should come with a digital certificate validating that the authors’ institution(s) has completed a series of checks to ensure research integrity.’ The work of research integrity should therefore fall on universities rather than the publishing industry.

Naturally, many scientists jumped on this piece as an example of the publishing industry looking to externalise costs in order to maintain its margins, much as it does by relying on volunteer academic labour. For the fraud investigator Elisabeth Bik, the piece sounds like Boeing saying ‘you know how much money quality control costs???‘. While the piece does read like a publisher trying to shift blame for something for which it should be at least partly responsible, Cochran makes the equally correct point that not enough attention has been paid to the role of universities in research integrity scandals.

I’m interested in the proposal that universities take on a greater role in assessing publications prior to formal dissemination. In fields such as high-energy physics, institutions organise quite rigorous internal review processes between large research teams, a practice that facilitated the dissemination of preprints and led to near-universal open access to high-energy physics research. There is absolutely no reason why universities could not organise such processes for other disciplines too.

With preprints in the news this week thanks to the recently updated Gates policy, which no longer requires publication in a journal but does require that authors share a preprint of their research, the proposal for universities to assess research prior to formal journal submission is attractive: it facilitates immediate sharing while adding a layer of trust to the content through an internal review process that is standardised across institutions and operates between them. This practice will in turn encourage researchers to preprint their work, because doing so will become normalised and will carry a baseline level of verification that the work can be shared.

I’m not proposing a specific kind of review process here, only arguing that arranging such a process before dissemination is both achievable and desirable. It speaks to the idea that preprints require a degree of structure and labour that I think is often ignored by open science advocates, while also placing research dissemination under the control of research communities rather than commercial publishing houses that extract our free labour and content. Bringing research dissemination back in house in this way is one way of reducing the market-driven incentives that harm scholarly communication. Yet the idea still recognises the intentional work needed to disseminate good research: any technical or platform-based solution will fail if it does not account for the fact that this work needs to be done.

When I propose bringing publishing back under researcher control in this way, someone always chimes in with the idea that neoliberal universities are themselves essentially businesses too and so cannot be relied upon to vet their own work adequately in the way described. This is why such an idea has to be researcher-led, not managerial, and a collaboration between institutions (much like our friends in high-energy physics). I am not proposing that universities bluntly “vet” their own research, but rather that experimentation with intra- and inter-institutional review processes is an excellent way both to encourage rapid dissemination of research and to take publishing back from an industry that seems largely hellbent on running scholarly communication into the ground through APCs and money-saving automation.

To come full circle, I think that academic societies are actually in a good position, theoretically at least, to both advocate for and organise the kinds of processes I’m describing. Clearly many of them are completely wedded to the traditional commercial publishing models that fund their activities, but many are not and so have the ability to experiment with different approaches to research communication. Societies can reflect a kind of collectivity that is so needed in higher education right now, while also providing the practical forms of governance needed to effect real change.