Why open science is primarily a labour issue

Reforming research assessment and culture is a hot topic in higher education, particularly in how these issues relate to research funding. I discussed the HELIOS initiative in my last post, a funder-led approach to incentivising open science practices in North American tenure and promotion guidelines. And in the past week, EU science ministers have agreed on a plan to facilitate coordinated reform of research assessment processes.

As I noted last week, research assessment reform is often predicated upon nurturing cultures of open science by encouraging researchers to share the materials and underlying processes behind their research. In doing this, the argument goes, research becomes ‘democratised’ and ‘collectivised’ because removing price and permission barriers to the reuse of materials brings more people into the scientific conversation. Open science, I argue, is an overly resource-focused approach to the knowledge commons (free code, data and publications), rather than one focused on the relationalities and the different possible forms of organisation through which these knowledge resources are produced. In addition to freely available resources, these alternative relationalities are vital for a more emancipatory university.

But emancipatory from what? Underpinning all these approaches to assessment reform is the brutally competitive nature of marketised higher education and the fact that precarious and exploited labour props up so much of what the university does. To this extent, open science is primarily a labour issue, not an epistemological one, although it is rarely approached by policymakers in this way. Knowledge production does not benefit from precarity or poor working conditions, not least due to the way they turn researchers into individuals competing with one another at every turn for scarce resources. If open science is to have any meaning, then, it must be grounded in a politics that is emancipatory from capital and the problems of researchers being oriented around capital at every point.

So despite the oft-touted association between open science and collectivity, or the democratisation of higher education, the link is weak at best, especially when promoted by senior managers and policymakers — i.e., those with a stake in maintaining the neoliberal academy. A truly collectivising approach to research assessment reform would foreground the labour issues associated with contemporary higher education, on the assumption that open (or better) science would follow from less individuation and more collective governance over what the university is and does.

I have argued elsewhere that the push toward open access, while regressive in many ways, frees up resources that allow for more progressive and socially just pockets of activity in the margins. Being able to squat within the discourses of efficiency, openness, and other such concepts affords the capacity to experiment with politically exciting approaches to common and collectively-managed endeavours, even while the profiteering and market-making associated with open access publishing continues apace. Is there a way for us to benefit from the push for research assessment reform in the same way, by foregrounding these labour issues and radically reimagining what knowledge production and dissemination could look like?

Part of the problem with policy-led approaches is that they fix and lock down what ‘openness’ is and intends to achieve, while also forcing researchers to conform to this definition and comply with its demands. Yet openness itself, as many have argued, facilitates and requires experimentation, particularly around the forms of organisation required to facilitate the kinds of relationalities that could help us build collective power in higher education. This, I argue, is what research assessment reform should be based on: building the capacity to explore and imagine different ways of producing knowledge, not simply reworking incentives towards open publishing, etc. In many ways, this means leaving behind assessment and replacing it with capacity building (as we’ve argued for in a different context elsewhere) or something altogether detached from the assessment of individual ‘performance’.

For the most part, this vision requires radical thinking — which is why so many incremental approaches to assessment and culture reform fall flat for their tendency to rehearse all the pre-existing issues with the old system. My argument is simply that no one really knows how to best reimagine the new forms of organisation needed for more ethical knowledge production (but many people could give it a good shot given the opportunity, especially the very people currently so exploited by precarity). It involves bringing people together in a variety of ways, sustaining their collective efforts, and not continually dividing them up into individual units to be assessed at every possible turn. This also entails the ceding of control from policymaker to smaller, decentralised collectives of knowledge producers…which is probably a tough sell to the average policymaker.

How does open science ‘democratise’ and ‘collectivise’ research?

A recent article in The Scientist discusses the newly launched Higher Education Leadership Initiative for Open Scholarship (HELIOS). Composed of ‘leaders’ from over 75 US colleges and universities, HELIOS is committed to incentivising open science practices in order to make research more ‘inclusive, transparent, and efficient’. It is an approach designed to reorient assessment mechanisms towards open science practices, including ‘publishing in open-access journals, posting data using FAIR (findable, accessible, interoperable, and reusable) principles, and sharing other research outputs such as computer code.’

Throughout the article, we hear how open science ‘democratises’ science and works against the rampant individualism that characterises so much of higher education. Open science is ‘collaborative’ and entails the sharing of data, code and publications for anyone to access and reuse; it also allows research to reach and engage other communities not traditionally considered as part of the research process. These are familiar themes from years of open science advocacy.

Yet it isn’t clear what the relationship is between the greater sharing of research materials and the so-called democratisation at work in open science. What actually is democratising and collectivising about what HELIOS is trying to do?

It is important to ask this question because HELIOS is, by all accounts, a top-down initiative led by senior figures at research-intensive universities in the US. Despite the casual association between open science and collectivity, HELIOS appears to be more a way for university leaders to coerce researchers into cultural change than something led by the research community at large. While changes to tenure guidelines that prioritise publishing in open access journals, sharing FAIR data and releasing reusable open code may have some good outcomes, they are not themselves the basis for greater collective governance of science. Instead, they will provide an economic reason for researchers to adopt open science practices, a reason still based on individual progress within the academy.

Clearly, it makes sense to incentivise behaviours that are good. But the problem here is that the incentivising should itself take place through greater democratic governance of science. This matters all the more because the lack of collective governance within higher education is one of the biggest issues facing knowledge production right now: addressing it could lead to much healthier cultures of academic research, certainly more so than HELIOS’s narrow focus on open science.

The relationship between ‘openness’ and democratisation is a false one, or at least there is no obvious or necessary connection between the two (see Tkacz’s work for more on the politics of openness). This is because open science is largely focused on the outputs of scientific research rather than the cultures through which they are produced. Or rather, open science is mainly interested in efficient and reproducible modes of production, not ethical or collectively-governed ones. The latter may be a consideration in some visions of open science, but it is not their defining feature.

When policymakers and university leaders mandate openness to specific resources, democratic governance gets left behind. This is because this kind of openness does not require community accountability for it to be realised, only a vague sense that giving resources away will lead to a kind of inclusion that previously did not exist. This focus on resources is what allows the market and private enterprise — the ultimate expression of individualism — to dominate the provision of openness at the expense of community governance.

For open science to adequately ‘democratise’ or ‘collectivise’, it must consider the closures involved in such processes. By closures, I mean the actively designed and nurtured cultures of inclusion — and exclusion, by extension — that are required to foreground the good stuff (different cultures of knowledge, mutual reliance and care) and relegate the bad (everything oriented around profit). At some point, we’re going to have to work out how to leave the openness of openness behind and piece back together a more ethical system of knowledge production based on democratic self-governance of the university itself.

How can we shed light on in-house publication review processes?

This post makes a case for universities investing in people and processes for reviewing research in house before publication. This idea has no doubt been proposed before and is probably already a feature of some academic institutions, but I wanted to clarify here why I think it would benefit academic research.

High-energy physics research is often held up as the archetypal open science discipline. Researchers upload their preprints to the arXiv when they are ready to share them, rather than after traditional journal-based peer review, meaning that the discipline is almost entirely open access. Cultures of preprinting have been adopted across many other disciplines too and have been hailed as hugely important in the response to the pandemic. Sharing unrefereed research is increasingly commonplace for researchers of many disciplines.

Yet as Kling and McKim wrote in their hugely influential paper on disciplinary publishing differences, the process by which high-energy physics research makes it to the arXiv is not as simple as just throwing out unrefereed research for public consumption. Instead, it reflects an extensive and careful process of evaluation between multiple groups of collaborating institutions:

High-cost (multimillion dollar) research projects usually involve large scientific teams who may also subject their research reports to strong internal reviews, before publishing. Thus, a research report of an experimental high-energy physics collaboration may have been read and reviewed by dozens of internal reviewers before it is made public.

https://arxiv.org/abs/cs/9909008

Once an experimental high-energy physics article is made public, it has undergone an extensive, internally-managed review process that may be conceived as a kind of peer review. There is care in the process of bringing together the many pairs of eyes that have a stake in the authorship of a paper. Authors have confidence in their preprint not just because they are protected by the safety of hundreds or thousands of co-authors, but because those co-authors have developed internal processes for ensuring confidence in their research.

Although I do not want to reify this or any other kind of peer review as more efficacious in obtaining scientific ‘truth’, it is clear that the reputation physicists have acquired for working entirely in the open may not tell the full story about their review processes. This reputation instead elides the fact that a preprint is often a well-evaluated document rather than a first draft or unfinished research paper, even if it may change according to subsequent feedback or revisions. What I am interested in is whether a similar kind of internal process might be useful across other disciplines too.

I think it’s fair to say that, across all disciplines, many researchers share their work with colleagues before wider dissemination via a journal, preprint server, etc. This could be through informal networks of peers, mentors, or simply other people in their department or lab. But what if there were processes to facilitate and nurture these practices, and what if these processes were made clear to readers at the point of wider dissemination? If an organisation were to formalise and certify this process — devoting resources to organising it according to disciplinary conventions — would this both encourage earlier dissemination of research and give readers confidence that they are not the first people reading it?

I am essentially arguing for a job position within a discipline-specific setting: someone who knows intimately the manner and rhythm of their field’s publishing cultures, and who can help organise this process across their department (or wherever). Doing this would shed light on the processes of feedback and revision that took place before the research was disseminated, while offering support to others trying to do the same. (It is in no way intended to be punitive.)

I am not arguing that these internal review processes tell us much about the research itself (any more than any peer review process can), only that papers from a specific lab/department/centre/etc. would have acquired feedback in broadly the same way. In externalising these processes and encouraging more people to adopt them, we begin to understand the varying levels of assessment that research receives as it becomes more widely disseminated. This would also work against publishers as the final gatekeepers and would dilute their influence over peer review as a black-and-white system of verification.

In building the capacity for such a system of feedback, and funding people to make this work, universities would begin to shift culture and reclaim elements of knowledge production from the marketised publishing industry. This approach would encourage better practices prior to sharing unrefereed research, ultimately leading to greater confidence in such research across all disciplines.

An illustration of the problem with the literature on predatory publishing

I’m becoming increasingly interested in the academic literature on predatory publishing, especially the differing definitions and argumentative strategies these articles use to illustrate the problem of poor-quality publishing. Over the weekend I scanned the recently-published article ‘Publishing in Predatory Journals: Guidelines for Nursing Faculty in Promotion and Tenure Policies’, by Broome et al. Through interviews and analysis of tenure and promotion documentation, the article explores the extent to which predatory publishing is mentioned or discussed in the publishing guidance given to faculty in schools of nursing in the USA.

While skimming the article to note down what definition of predatory it uses, I noticed this sentence in the authors’ literature review:

Unfortunately, the rise in number of these predatory publishers, which has recently been estimated to be an industry worth $10.5 billion annually (Wilkinson et al., 2019), has caused alarm, with many in academic communities fearing the potential destruction of the scientific literature.

https://doi.org/10.1111/jnu.12696

$10.5 billion seemed unbelievably high to me, so I followed the reference from which this figure was taken. This led me to Wilkinson et al., whose first sentence contains the line: ‘In the last 10 years, a subset of ‘predatory’ publishers has been able to flourish within the $10.5 billion per year market [1-4].’ Already, then, it seems that the figure referenced by the authors of the original paper refers to a larger market of which predatory publishers may be a part. Unfortunately, no more context than this is given, so I followed the references [1-4] cited by Wilkinson et al. Rather than an academic article, this led me to The Scholarly Kitchen, a blog covering the publishing industry, and an article by Joseph Esposito on the size of the open-access market. It turns out the figure cited is from an industry report by a company named Simba:

Simba notes that the primary form of monetization for OA journals is the article processing charge or APC. In 2013 these fees came to about $242.2 million out of a total STM journals market of $10.5 billion. I thought that latter figure was a bit high, and I’m never sure when people are quoting figures for STM alone or for all journals; but even so, if the number for the total market is high, it’s not far off.

https://scholarlykitchen.sspnet.org/2014/10/29/the-size-of-the-open-access-market/

This is quite different from the initial quote from Broome et al., which incorrectly implies that the predatory journal market itself represents ‘$10.5 billion annually’, when the figure in fact represents the total STM journals market in 2013 (and even that is disputed by Esposito). So unless all the journals in the STM market are predatory (an argument for another time), this figure is way off the mark.

It turns out that a separate group of authors, Shen and Björk, did try to estimate the size of the predatory market and came up with a figure of $74 million, though even this is probably an over-estimate given the variety of definitions of predatory publishing that exist (and other factors explored by Walt Crawford on his blog). But even if this inflated figure were accurate, the figure cited by Broome et al. would still be over 100 times higher, giving the impression that predatory publishing is much, much larger than even the highest estimates claim.
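For what it’s worth, the arithmetic here is easy to check. A quick sketch (the dollar figures are those quoted above; the comparison itself is mine):

```python
# Figures quoted above (Simba via Esposito; Shen and Björk).
stm_market_2013 = 10.5e9    # total STM journals market, 2013
predatory_estimate = 74e6   # Shen and Björk's estimate of the predatory market

# How many times larger is the figure Broome et al. quote than the
# highest available estimate of the actual predatory market?
ratio = stm_market_2013 / predatory_estimate
print(f"{ratio:.0f}x")  # roughly 142x, i.e. 'over 100 times higher'

# Predatory publishing as a share of the whole STM market:
print(f"{predatory_estimate / stm_market_2013:.1%}")  # under 1%
```

In other words, even on the most generous estimate, predatory journals account for well under one percent of the market that Broome et al. mistake for the predatory market itself.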

I’m not entirely sure what to conclude from this, but the error seems pretty basic and took me only a few minutes to get to the bottom of. Should this have been picked up by an expert peer reviewer? Probably. The article was subject to double-blind external review, as outlined by the Journal of Nursing Scholarship author guidelines, and so you would have thought that an expert on academic publishing would have caught it. Peer review is of course not a perfect way of evaluating manuscripts, but this figure does feel too egregiously wrong to be part of the scholarly record.

As part of their definition of predatory publishing, the authors themselves cite the ‘questionable peer review done by these journals’. Here, we have a clear instance of questionable peer review that impacts the way the entire article is framed. This is not to say that the Journal of Nursing Scholarship is predatory, but rather that definitions of predatory are consistently insufficient and do not actually tell us anything about the substance or veracity of the article in question. They try to separate out good actors (in this case a subscription journal published by the for-profit commercial publisher Wiley) from bad actors based on inconsistently applied criteria often founded on prejudice (as I have argued previously). This has real-world effects because a huge body of scholarship is now dedicated to the analysis of ‘predatory’ journals supposedly leading to the ‘destruction of the scientific literature’, despite the fact that there is no fixed or useful definition of what predatory publishing actually is. Paradoxically, as in this instance, many of these articles commit exactly the same sins they claim are characteristic of the kinds of publishing they critique.

What does the UKRI policy mean for open access book publishing?

UK Research and Innovation today published its updated policy on open access. For journals, the policy is simplified and normalised across the disciplines. Immediate open access under CC BY is mandated (with exceptions considered on a case-by-case basis), meaning no embargoes for green open access. Hybrid publishing will not be funded by UKRI where the journal in question does not have a transitional agreement. All in all, the policy is reflective of the direction of travel towards immediate open access for research articles, something the policymakers feel that the more mature market is now able to accommodate.

The policy also mandates open access book publishing, subject to a one-year embargo. Unlike for journals, open access is not yet a dominant method of publishing long-form scholarship. The economics of book publishing are different, including the reliance on specialist editorial and production work that needs to be accounted for, alongside printing and distribution costs (particularly as print sales are likely to remain one of the main ways of funding open access books). Many models have been developed to support OA monographs, but no single workable model has emerged.

In recognition of the need to explore new models, UKRI has earmarked a block grant of £3.5 million to support open access book publishing. Though it isn’t immediately clear what this money can be spent on, it is reasonable to assume that the dreaded book processing charge is one possible approach. Often totalling upwards of £10,000, the book processing charge is a staple model used by commercial publishers for open access books. It is a single payment intended to cover editorial and production costs and to mitigate the loss of revenue implied by giving away a free digital copy. In practice, these same publishers are able to sell print copies through regular channels, and so BPCs (which are eye-wateringly expensive) remove risk for commercial organisations wanting to publish open access while allowing them to monetise books as they have always done. It isn’t a great model for publishing.

As more prestigious venues will charge more, the BPC will be just as pernicious as the article-processing charge has been for journal publishing. Authors are spending someone else’s money and so there is no reason for them to be price-sensitive, especially given the high reward that prestige offers. Without further intervention, it is likely that freeing up public money through a block grant will cement the BPC as the primary business model for open access books. This will create a two-tiered system whereby researchers with funding can publish open access books, while those without cannot.

It is important to bear in mind that open access book publishing was pioneered by presses that do not require author payment and instead rely on a range of models and subsidies to support their work. The Radical Open Access Collective is home to lots of them, and Lucy Barnes’s Twitter thread illustrates more. Small, often scholar-led presses have been pioneering OA books for years and their contribution needs to be recognised. But how do they access the funding available for open access monographs? Do they have to start charging BPCs — thus rehearsing all the problems with marketisation — or can the money instead be used to fund their operations directly, through consortial funding (as the COPIM project is developing) or direct payments to presses? Without this, we’ll see commercial publishers swoop in and use BPCs to snatch the funding that UKRI has made available.

This has always been the main problem with open access policies: they do not take a view on the publishing market, instead merely promoting open over closed access. This not only glosses over the broader motivations for open access, which are about redirecting scholarly communication towards more ethical models and organisations, but also creates new problems by freeing up money that allows commercial publishers to consolidate their power. As with journals, we may well see the emergence of publishing models designed to remove the expert labour and editorial care involved in book publishing (which is already happening in much of the commercial book publishing world) and to automate book production to make it more commercially viable.

But academic book publishing is not and should not be commercially viable — it should be subsidised by universities and made freely available to all who want access. Open access offers the chance to reassess how the market shapes publishing and to return control of it to research communities themselves. It is vital, then, that the block grant for books announced by UKRI can be used to support the alternative ecosystem of open access book publishers and not (simply) those charging BPCs.

The future relationship between university and publisher

As rumours circulate about the forthcoming UKRI open access policy announcement, fierce lobbying is underway by publishers worried that the policy may undermine their business models. Elsevier has even taken the step of directly emailing their UK-based academic editors to criticise the rumoured policy and encourage academics to relay the publisher’s views to UKRI. While these disagreements may not seem particularly new to anyone familiar with the open access movement, it also feels like things are coming to a head between academic publishers and the university sector. Ultimately, as I’ll argue here, universities need to take a view on what their future relationship with publishing should be.

In some respects, the debate over open access has always been about the antagonism between universities and publishers. Although access to research is an important and defining feature of these debates, the spectre of publishing profit margins and extractive business models loomed large from the beginning. There is no getting around the fact that publishers rely on labour and content they get for free. The editorial work of publishing is instead remunerated by universities as part of academic salaries, which of course does not fall evenly on individual academics (many of whom are precarious, overworked and/or not employed by a university). Nevertheless, the university sector funds much of what the publishing industry relies upon for its operations and expects something in return.

To the extent that it has been marketised, the publishing industry is viewed as standing outside the university and not controlled by it. This is despite the fact that academics (for the most part) maintain editorial control of the publications they edit and peer review. Having talked to numerous editors of commercial journals, I sense very clearly that publishers are regarded as service providers rather than part of the scholarly community. They might not provide the level of service that many editors expect, but they are service providers all the same. As scholarly communication has been ceded entirely to this market of service providers, universities have lost economic and material control of the publications they rely on (which also impacts on editorial control in various ways). This is all the more apparent given the dual functions the industry serves: knowledge dissemination and researcher evaluation. Universities have outsourced both of these crucial functions to a separate, external industry.

As the university sector grapples with this loss of control, initiatives like the Rights Retention Strategy have emerged to help authors retain ownership of intellectual property and circumvent publisher contracts that claim exclusive ownership. Such is the separation between university and publisher that researchers are being advised against signing publisher contracts that transfer copyright. Instead, researchers can assert ownership of their copyright prior to transferring it to a journal, allowing them to immediately deposit and share their accepted manuscript in a repository. Suffice to say that publishers loathe this strategy — which has the potential to enable immediate green open access — and are coming out against it with all guns blazing.

Much of the current push for OA is thus predicated on the antagonism between publishers and universities. Access to publications is not a simple price negotiation between seller and consumer but instead reflects a struggle over the conditions that shape the negotiation. This situation is not particularly beneficial or sustainable for academic research, not least because universities do not appear to be particularly good at the hard-nosed negotiating that Elsevier is so well known for. An antagonistic approach seems unlikely to have a long-term future and will only perpetuate the current system over which universities have ceded control. Sooner or later, universities will have to make a difficult call about the conditions of their relationship with the publishing industry, not just the price they pay to read and publish content. This means assessing the publishers they work with and considering the mechanisms that future control should take.

I have made many calls on this blog for greater governance of scholarly publishing by the research community. When I argue for the need to bring publishing back in house, I mean it in the sense of university press culture, university-managed infrastructure and governance of the publishers we work with. Universities need to build and manage infrastructure for this (as many increasingly do), but they also need to demand better accountability from publishers, such that initiatives like the Rights Retention Strategy become unnecessary or unproblematic. There is arguably far more effort devoted within the university to building a parallel publishing ecosystem through new university presses and open access publishers, but this new ecosystem will not on its own unsettle the dominance of a handful of large, profiteering publishers with questionable ethics. A long-term strategy requires the alternative ecosystem, an understanding of how you want the old guard to change, and a plan to eventually cut loose those publishers that refuse increased accountability.

Such a plan would help to inform the negotiations currently underway between the UK university sector and Elsevier (led by Jisc). Universities require access to Elsevier journals, and Elsevier will realistically not back down much on price, so the negotiators should seek formal pockets of governance over Elsevier publications as part of any deal. It remains to be seen what the priorities for governance should be and where demands might be met, but one could imagine issues relating to journal/data ownership, rights retention, diversity, metric implementation and journal policy changes being up for grabs, in the long term at least. Introducing these issues into the negotiation now would signal to Elsevier that universities intend to be more active in their push for accountability and control over the industry.

Crucially, increased governance should be an aim across the industry — not just over the oligopoly — in order to cement best practice within the market more broadly. Governance should be an indication of partnership, trust and collaboration, not something punitive. This would also signal to academic editorial boards that publishers are not mere service providers and are part of the scholarly community, but only inasmuch as they act as members of it. This would also mean that academics would not be divorced from the important aspects of academic publishing and would instead be encouraged to use their editorial power for a more ethical and accountable market.

Although the push for governance might feel hopelessly reformist (because the true objective is getting rid of marketisation in both the university and the publishing industry), it is still necessary given the parameters of the neoliberal university and its commercial imperatives. Greater governance does not preclude the possibility of radical alternatives in publishing and merely acts as a counterweight to the worst aspects of marketisation. This is similar to Christopher Newfield’s argument in the recent issue of Radical Philosophy. He argues that we should not ‘wait for wider social change’ before seeking transformation of the neoliberal university. The work to be done is at once reformist and transformative.

But while it may appear reformist, greater governance of commercial publishing is also a task of enormous magnitude. Not only do we not know what we require or how greater governance would work in practice, it is also highly unlikely that the more profiteering actors in the industry will entertain the idea. This is why universities need to make difficult decisions about their future relationship with publishers: those willing to open themselves up to greater oversight should be prioritised in negotiations, while those unwilling will stand out for their intransigence. Prioritising governance and oversight will therefore add complexity to negotiations currently based primarily on price, paving the way for less antagonistic relationships between ‘good’ commercial actors and the university while leaving those publishers committed to the injustices of the free, ungovernable market out in the cold.

All publishers great and small

It is common knowledge that the academic publishing industry is oligopolistic: a handful of large corporate publishers control the vast majority of the industry. This dominance gives the oligopoly market power through tentacular economies of scale and control of the publications that libraries must access. This is bad not only for negotiations over price; it also means that the values and practices of the larger publishers are hegemonic in their influence over what publishing should look like. I have written previously about how this shapes debate around the costs of publishing.

Although dividing the industry into a handful of ‘big’ publishers and a large number of ‘small’ ones is unhelpfully binary and elides a great deal of complexity in publishing, there is still the unavoidable fact that publishing is both highly concentrated and becoming ever more consolidated. It is (usually) taken as a bad thing that a handful of multinational for-profit companies control scholarly communication. Objections to certain policy interventions, business models and approaches to open access are often predicated on the fact that big publishers will be able to use their size to their benefit, thus consolidating the industry further.

Having commissioned a report on how smaller publishers should not be locked out of open access agreements, the architects of Plan S are clearly keen to tap into the distinction between big and small. On the other side of the debate, the editor-in-chief of publishing industry blog The Scholarly Kitchen argues that aspects of Plan S in fact favour ‘larger incumbent publishers’ who can better respond to reporting requirements. From either perspective, it seems clear that size is important: people want to prevent big publishers from getting bigger.

Yet the implication here is that ‘big is bad’ rather than ‘small is good’. Policymakers and industry representatives want (or need to be seen to want) a fair and competitive market of commercial players in which no one actor has too much power. The corollary is that we should intervene in markets only when competition is diminished, and certainly not in ways that increase consolidation. The underlying assumption is that publishing is a market that should function with minimal interference at most.

The problem with viewing publishing in this way is that it treats publisher size as important only inasmuch as one publisher should not have too much power (so as to control price). There is no implication that the size of the publisher impacts the kind of publishing taking place, only that one or two publishers should not be disproportionately larger than the rest.

Consider, though, that publishing is a situated activity. It benefits from editorial care, community involvement and scholarly experimentation. Revenue-maximising economies of scale, upon which ‘bigger’ publishing is based, homogenise these elements, water down careful human expertise and standardise publishing through cookie-cutter production processes. This has led to the development of platformised publishing infrastructures that seek to remove human expertise where possible and automate all that goes into publishing an article. In contrast, small, community-led publishing is something to be valued primarily because it is embedded within the communities that produce scholarship, not abstracted from them. My colleague Janneke Adema and I explore these issues in our article on ‘scaling small’. Bigness is not bad simply for market reasons; it also works against good — which is to say situated — publishing.

The problem for advocates of ‘small’ commercial forms of publishing (irrespective of their profit status) is that the market does not accommodate smallness very well. The market requires growth and is sustained by it. The need for growth is a problem for small publishers who want to stay small but whose work does not sit well with marketisation (often a problem for university presses, for example). It means that all forms of publishing are shaped by the market even if they hope to stand outside or work against it. With open access, this plays out through policy interventions that assume that publishing is predominantly about self-sustaining commercial operations, thus reinforcing the status quo. Of course, publishing is about self-sustaining commercial operations, but that is exactly the problem with it. We need visions for publishing that look beyond the revenue-seeking imperative and the need to make market returns.

This is why arguing that the oligopoly is bad is really an argument for abolishing the market. The oligopoly is merely a symptom of marketisation.

New article in Development and Change

I’ve just published the article ‘Open Access, Plan S and “Radically Liberatory” Forms of Academic Freedom’ in the journal Development and Change. Abstract below.

Link: https://doi.org/10.1111/dech.12640

Abstract

This opinion piece interrogates the position that open access policies infringe academic freedom. Through an analysis of the objections to open access policies (specifically Plan S) that draw on academic freedom as their primary concern, the article illustrates the shortcomings of foregrounding a negative conception of academic freedom that primarily seeks to protect the fortunate few in stable academic employment within wealthy countries. Although Plan S contains many regressive and undesirable elements, the article makes a case for supporting its proposal for zero‐embargo repository‐based open access as the basis for a more positive form of academic freedom for scholars around the globe. Ultimately, open access publishing only makes sense within a project that seeks to nurture this positive conception of academic freedom by transforming higher education towards something more socially just and inclusive of knowledge producers and consumers worldwide.

Look to the commons for the future of R&D and science policy

Originally posted on the LSE Impact Blog

The production and distribution of the COVID-19 vaccine is unquestionably good news and hopefully heralds the beginning of the end of the global pandemic. Much of this progress is down to the spirit of collaboration shown by scientists around the world in the race to beat the virus.

Yet the fact that the vaccine remains private intellectual property, despite being publicly funded, is illustrative of a major failure with R&D policy and its tendency to elevate the concerns of the market over those of the common good. Policymakers should instead turn to the commons as an alternative philosophy for governing scientific knowledge production.

Often positioned as a ‘third way’ between the market and the state, ideas of the commons relate to the self-governance and maintenance of shared resources in a way that foregrounds cooperation over competition and shared ownership over private property. Elinor Ostrom, the first woman to win the Nobel Prize for economics, devoted her career to the study of the commons and the ways in which collective action can deliver superior outcomes to private and competitive forms of enterprise. There are hundreds of successful examples of commons, from groundwater basins and irrigation systems, to online citizen science projects and community centres. Our own work on the Community-led Open Publishing Infrastructures for Monographs (COPIM) project also seeks to find ways of further embedding community collaboration within infrastructures and models for open access knowledge dissemination. All of these projects prioritise – in varying degrees – community collaboration and management of shared resources.

Importantly, the distinguishing feature of commons-based modes of production is their participatory and structured nature rather than the extent to which the resources they generate are freely shared with the public. So although a commons-based approach would ultimately lead to a commonly owned or ‘People’s’ vaccine, it is more important to generate meaningful and numerous collaborative interactions that create the conditions for such vaccines to be publicly accessible. This is because the commons refers to the self-organisation of labour as a mode of production, not a method of distributing resources, although it is exactly this self-organisation that would allow the vaccine to be distributed for the common good (as opposed to the interests of private enterprise).

Yet, instead of promoting collective action as a means of production, policymakers have been preoccupied by openness in the form of open access or open data (see the European Plan S, for example). These concepts relate merely to the method of distributing intellectual resources, not the ways in which they are produced. Openness does little to combat the ingrained competitiveness in scientific research, nor does it work against the control of knowledge production infrastructures by a handful of multinational companies. What’s needed is an R&D policy that both prioritises cooperation in knowledge production and allows the infrastructures, workflows and results to be owned commonly rather than by individuals.

Reorienting R&D funding towards commons-based projects would not only prioritise meaningful collaborations, such as those that helped generate the vaccine, but would also ensure that vaccines and other intellectual property would be owned in common. This could, for example, allow all scientific publications and data to be freely available in perpetuity (because they cannot be enclosed), rather than commercially owned and made freely available only for the duration of the pandemic at the whims of publishers. We would not, for instance, have to rely on Elsevier to grant scientists temporary access to their Coronavirus Information Center, because we would already own the intellectual property on which it is based.

Simply put, policymakers should reorient their focus away from mere open access to the outputs of scientific research and instead nurture the commons across the research lifecycle. It would mean less of a winner-takes-all strategy to research funding – away from huge grants dictated by bogus ideas of ‘excellence’ – and more of one that encourages small, careful, collaborative research by and between diverse groups of scientists. This could be facilitated through basic research income, grant lotteries and other non-competitive methods, with the outputs from each grant owned in common by scientists across the globe.
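To make one of these non-competitive mechanisms concrete, here is a toy sketch in Python of a partial-randomisation grant lottery, in which applications passing a basic quality threshold are funded by random draw rather than fine-grained competitive ranking. All names, scores and thresholds here are hypothetical and purely illustrative.

```python
import random

def grant_lottery(applications, fund_count, threshold, seed=0):
    """Toy model of a partial-randomisation grant lottery.

    Applications scoring at or above a basic quality threshold enter a
    random draw; winners are chosen by lot rather than by fine-grained
    ranking, removing the pressure to out-compete peers at the margins.
    """
    # Screen for basic quality only -- no ranking beyond the threshold
    eligible = [a for a in applications if a["score"] >= threshold]
    # Seeded RNG so the draw is auditable and reproducible
    rng = random.Random(seed)
    winners = rng.sample(eligible, min(fund_count, len(eligible)))
    return sorted(a["id"] for a in winners)

# Six hypothetical applications with reviewer scores
apps = [{"id": i, "score": s}
        for i, s in enumerate([0.9, 0.4, 0.7, 0.8, 0.6, 0.95])]
print(grant_lottery(apps, fund_count=2, threshold=0.6))
```

The design choice worth noting is that the threshold does all the evaluative work: beyond it, chance replaces ranking, which is the feature that makes the mechanism non-competitive.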

Funders could also stimulate oversight and governance of the infrastructures for knowledge production as knowledge commons, i.e., those that are governed by the communities that use them rather than the market at large. This would allow researchers to decide how these infrastructures are designed and built upon, preventing acquisition of critical knowledge infrastructures and data by undesirable actors. We can stimulate common ownership in scientific research through data trusts, common patent pools and other democratic procedures for sharing resources.

The commons therefore offers a different frame – a third way – to the traditional R&D strategy that currently emphasises the public, the private, or the interplay between the two. It prioritises self-organisation over state- and market-based forms, emphasising collaboration in an industry beholden to competition. As academic research is likely to take a hit in the post-pandemic economic slowdown, the commons would be a useful way of directing research to foreground process over brute outcomes, and collaboration over competition.

Or as the economist Kate Raworth puts it, ‘if you ignore the commons, you’re ignoring one of the most vibrant spaces of the 21st century economy’.

OASPA panel on funding and business mechanisms for equitable open access

On 22nd September I’ll be participating in a panel on funding and business mechanisms for equitable open access for the 2020 OASPA conference. I’ll be using the opportunity to discuss some of the projects I’m involved in – notably the Radical Open Access Collective and the Community-led Open Publishing Infrastructures for Monographs (COPIM) project – in order to highlight the different approaches to business models and sustainability that these projects may entail. In particular, drawing on my recent work with Janneke Adema, I will be discussing ‘scaling small’, an organisational philosophy that seeks to build resilience within scholarly publishing through mutual reliance and collaboration. Scaling small is an approach that preserves the locality and (biblio)diversity of approaches to publishing while encouraging presses to work together on shared technical, infrastructural and other publishing projects. Predicated on an ethic of care, in direct opposition to the cookie-cutter economies of scale preferred by the larger commercial publishers, scaling small intends to nurture cooperation (over competition) as a sustaining force for global scholarly communication. I’ll be discussing the opportunities and potential drawbacks of this approach for a more ethical and equitable ecosystem of open access scholarly publishing.

The panel is on Tuesday 22nd September at 5pm BST and will feature the following other participants:

  • Vivian Berghahn, Berghahn Books, UK
  • Sharla Lair, LYRASIS, USA
  • Alexia Hudson-Ward, Oberlin College and Conservatory, USA
  • Chair: Charles Watkinson, University of Michigan, USA

You can register for the meeting here: https://webforms.copernicus.org/OASPA2020/registration (I’m told the registration fee can be waived if you do not have funding).