What does the UKRI policy mean for open access book publishing?

UK Research and Innovation today published its updated policy on open access. For journals, the policy is simplified and normalised across the disciplines. Immediate open access under CC BY is mandated (with exceptions considered on a case-by-case basis), meaning no embargoes for green open access. Hybrid publishing will not be funded by UKRI where the journal in question does not have a transitional agreement. All in all, the policy reflects the direction of travel towards immediate open access for research articles, something the policymakers feel the more mature journals market is now able to accommodate.

The policy also mandates open access for books, subject to a one-year embargo. Unlike for journals, open access is not yet a dominant method of publishing long-form scholarship. The economics of book publishing are different: specialist editorial and production work needs to be accounted for, alongside printing and distribution costs (particularly as print sales are likely to remain one of the main ways of funding open access books). Many models have been developed to support OA monographs, but no single workable model has emerged.

In recognition of the need to explore new models, UKRI has earmarked a block grant of £3.5 million to support open access book publishing. Though it isn’t immediately clear what this money can be spent on, it is reasonable to assume that the dreaded book processing charge (BPC) is one possible approach. Often totalling upwards of £10,000, the BPC is a staple model used by commercial publishers for open access books: a single payment intended to cover editorial and production costs and to mitigate the loss of revenue implied by giving away a free digital copy. In practice, these same publishers are still able to sell print copies through regular channels, so BPCs (which are eye-wateringly expensive) remove the risk for commercial organisations wanting to publish open access while allowing them to monetise books as they always have. It isn’t a great model for publishing.

As more prestigious venues will charge more, the BPC will be just as pernicious as the article-processing charge has been for journal publishing. Authors are spending someone else’s money and so there is no reason for them to be price-sensitive, especially given the high reward that prestige offers. Without further intervention, it is likely that freeing up public money through a block grant will cement the BPC as the primary business model for open access books. This will create a two-tiered system whereby researchers with funding can publish open access books, while those without cannot.

It is important to bear in mind that open access book publishing was pioneered by presses that do not require author payment and instead rely on a range of models and subsidies to support their work. The Radical Open Access Collective is home to many of them, and Lucy Barnes’ Twitter thread below lists more. Small, often scholar-led presses have been pioneering OA books for years and their contribution needs to be recognised. But how will they access the funding available for open access monographs? Do they have to start charging BPCs — thus rehearsing all the problems with marketisation — or can the money instead be used to fund their operations directly, whether through consortial funding (of the kind the COPIM project is developing) or direct payments to presses? Without this, we’ll see commercial publishers swoop in and, through BPCs, snatch the funding that UKRI has made available.

This has always been the main problem with open access policies: they do not take a view on the publishing market, instead merely promoting open over closed access. This not only glosses over the broader motivations for open access, which are about redirecting scholarly communication towards more ethical models and organisations, but also creates new problems by freeing up money that allows commercial publishers to consolidate their power. As with journals, we may well see the emergence of publishing models designed to remove the expert labour and editorial care involved in book publishing (which is already happening in much of the commercial book publishing world) and to automate book production to make it more commercially viable.

But academic book publishing is not and should not be commercially viable — it should be subsidised by universities and made freely available to all who want access. Open access offers the chance to reassess how the market shapes publishing and to return control of it to research communities themselves. It is vital, then, that the block grant for books announced by UKRI can be used to support the alternative ecosystem of open access book publishers and not (simply) those charging BPCs.

The future relationship between university and publisher

As rumours circulate about the forthcoming UKRI open access policy announcement, fierce lobbying is underway by publishers worried that the policy may undermine their business models. Elsevier has even taken the step of directly emailing its UK-based academic editors to criticise the rumoured policy and encourage academics to relay the publisher’s views to UKRI. These disagreements may not seem particularly new to anyone familiar with the open access movement, but it feels like things are coming to a head between academic publishers and the university sector. Ultimately, as I’ll argue here, universities need to take a view on what their future relationship with publishing should be.

In some respects, the debate over open access has always been about the antagonism between universities and publishers. Although access to research is an important and defining feature of these debates, the spectre of publishing profit margins and extractive business models has loomed large from the beginning. There is no getting around the fact that publishers rely on labour and content they get for free: the editorial work of publishing is instead remunerated by universities as part of academic salaries, a burden that of course does not fall evenly on individual academics (many of whom are precarious, overworked and/or not employed by a university at all). Nevertheless, the university sector funds much of what the publishing industry relies upon for its operations and expects something in return.

To the extent that it has been marketised, the publishing industry is viewed as standing outside the university and not controlled by it. This is despite the fact that academics (for the most part) maintain editorial control of the publications they edit and peer review. Having talked to numerous editors of commercial journals, I get a very real sense that they see their publishers as service providers rather than as part of the scholarly community. The publishers might not provide the level of service that many editors expect, but they are service providers all the same. As scholarly communication has been ceded entirely to this market of service providers, universities have lost economic and material control of the publications they rely on (which also impacts on editorial control in various ways). This is all the more apparent given the dual functions the industry serves: knowledge dissemination and researcher evaluation. Universities have outsourced both of these crucial functions to a separate, external industry.

As the university sector grapples with this loss of control, initiatives like the Rights Retention Strategy have emerged to help authors retain ownership of their intellectual property and circumvent publisher contracts that claim exclusive ownership. Such is the separation between university and publisher that researchers are being advised against signing publisher contracts that transfer copyright. Instead, researchers can assert ownership of their copyright prior to transferring it to a journal, allowing them to immediately deposit and share their editorially accepted manuscript in a repository. Suffice it to say that publishers loathe this strategy — which has the potential to enable immediate green open access — and are coming out against it with all guns blazing.

Much of the current push for OA is thus predicated on the antagonism between publishers and universities. Access to publications is not a simple price negotiation between seller and consumer but instead reflects a struggle over the conditions that shape the negotiation. This situation is neither beneficial nor sustainable for academic research, not least because universities do not appear to be very good at the hard-nosed negotiating that Elsevier is so well known for. An antagonistic approach seems unlikely to have a long-term future and will only perpetuate the current system, over which universities have ceded control. Sooner or later, universities will have to make a difficult call about the conditions of their relationship with the publishing industry, not just the price they pay to read and publish content. This means assessing the publishers they work with and considering the mechanisms through which future control should be exercised.

I have made many calls on this blog for greater governance of scholarly publishing by the research community. When I argue for the need to bring publishing back in house, I mean in the sense of university press culture, university-managed infrastructure and governance of the publishers we work with. Universities need to build and manage this infrastructure themselves (as many increasingly do), but they also need to demand better accountability from publishers, such that initiatives like the Rights Retention Strategy become unnecessary or unproblematic. There is arguably much more attention paid within the university to building a parallel publishing ecosystem through new university presses and open access publishers, but this new ecosystem will not on its own unsettle the dominance of a handful of large, profiteering publishers with questionable ethics. A long-term strategy requires the alternative ecosystem, an understanding of how you want the old guard to change and a plan to eventually cut loose those publishers that refuse increased accountability.

Such a plan would help to inform the negotiations currently underway between the UK university sector and Elsevier (led by Jisc). Universities require access to Elsevier journals, and Elsevier will realistically not back down much on price, so the negotiators should seek formal pockets of governance over Elsevier publications as part of any deal. It remains to be seen what the priorities for governance should be and where demands might be met, but one could imagine issues relating to journal/data ownership, rights retention, diversity, metric implementation and journal policy changes being up for grabs, in the long term at least. Introducing these issues into the negotiation now would signal to Elsevier that universities intend to be more active in their push for accountability and control over the industry.

Crucially, increased governance should be an aim across the industry — not just over the oligopoly — in order to cement best practice within the market more broadly. Governance should be an indication of partnership, trust and collaboration, not something punitive. It would signal to academic editorial boards that publishers are not mere service providers but are part of the scholarly community, though only inasmuch as they act as members of it. It would also mean that academics are not divorced from the important aspects of academic publishing and are instead encouraged to use their editorial power to push for a more ethical and accountable market.

Although the push for governance might feel hopelessly reformist (because the true objective is getting rid of marketisation in both the university and the publishing industry), it is still necessary given the parameters of the neoliberal university and its commercial imperatives. Greater governance does not preclude the possibility of radical alternatives in publishing and merely acts as a counterweight to the worst aspects of marketisation. This is similar to Christopher Newfield’s argument in the recent issue of Radical Philosophy. He argues that we should not ‘wait for wider social change’ before seeking transformation of the neoliberal university. The work to be done is at once reformist and transformative.

But as well as appearing reformist, greater governance of commercial publishing is also a task of enormous magnitude. Not only do we not know exactly what we require or how greater governance would work in practice, it is also highly unlikely that the more profiteering actors in the industry will entertain the idea. This is why universities need to make difficult decisions about their future relationship with publishers: those that are willing to open themselves up to greater oversight should be prioritised in negotiations, while those unwilling will stand out for their intransigence. Prioritising governance and oversight would therefore add complexity to negotiations currently based primarily on price, paving the way for less antagonistic relationships between ‘good’ commercial actors and the university while leaving those publishers committed to the injustices of the free, ungovernable market out in the cold.

All publishers great and small

It is common knowledge that the academic publishing industry is oligopolistic: a handful of large corporate publishers control the vast majority of the industry. The oligopoly maintains its market power through tentacular economies of scale and through control of the publications that libraries must access. This is bad not only for negotiating over price; it also means that the values and practices of the larger publishers are hegemonic in their influence over what publishing should look like. I have written previously about how this shapes debate around the costs of publishing.

Although dividing the industry into a handful of ‘big’ publishers and a large number of ‘small’ ones is unhelpfully binary and elides a great deal of complexity in publishing, there is no avoiding the fact that publishing is both concentrated and becoming more consolidated. It is (usually) taken as a bad thing that a handful of multinational for-profit companies control scholarly communication. Objections to certain policy interventions, business models and approaches to open access are often predicated on the fact that big publishers will be able to use their size to their benefit, thus consolidating the industry further.

Having commissioned a report on ensuring that smaller publishers are not locked out of open access agreements, the architects of Plan S are clearly keen to tap into the distinction between big and small. On the other side of the debate, the editor-in-chief of publishing industry blog The Scholarly Kitchen argues that aspects of Plan S in fact favour ‘larger incumbent publishers’ who can better respond to reporting requirements. From either perspective, it seems clear that size is important: people want to prevent big publishers from getting bigger.

Yet the implication here is that ‘big is bad’ rather than ‘small is good’. Policymakers and industry representatives want (or need to be seen to want) a fair and competitive market of commercial players in which no one has too much power. The corollary is that we should intervene in markets only when competition is diminished, and certainly not in ways that increase consolidation. On this view, the publishing industry is essentially a market that should function with minimal interference at most.

The problem with viewing publishing in this way is that it treats publisher size as important only inasmuch as no one publisher should have too much power (and thereby control prices). There is no implication that the size of a publisher affects the kind of publishing taking place, only that one or two publishers should not be disproportionately larger than the rest.

Consider, though, that publishing is a situated activity. It benefits from editorial care, community involvement and scholarly experimentation. The revenue-maximising economies of scale upon which ‘bigger’ publishing is based homogenise these elements, water down careful human expertise and standardise publishing through cookie-cutter production processes. This has led to the development of platformised publishing infrastructures that seek to remove human expertise where possible and automate all that goes into publishing an article. In contrast, small, community-led publishing is to be valued primarily because it is embedded within the communities that produce scholarship, not abstracted from them. My colleague Janneke Adema and I explore these issues in our article on ‘scaling small’. Bigness is not bad simply for market reasons; it also works against good — which is to say situated — publishing.

The problem for advocates of ‘small’ commercial forms of publishing (irrespective of their profit status) is that the market does not accommodate smallness very well. The market requires growth and is sustained by it. This need for growth is a problem for small publishers who want to stay small but whose work does not sit well with marketisation (often the case for university presses, for example). It means that all forms of publishing are shaped by the market, even those that hope to stand outside or work against it. With open access, this plays out through policy interventions that assume publishing is predominantly a matter of self-sustaining commercial operations, thus reinforcing the status quo. Of course, much publishing currently is about self-sustaining commercial operations, but that is exactly the problem with it. We need visions for publishing that look beyond the revenue-seeking imperative and the need to make market returns.

This is why when you’re arguing that the oligopoly is bad, you’re really arguing to abolish the market. The oligopoly is merely a symptom of marketisation.

New article in Development and Change

I’ve just published the article ‘Open Access, Plan S and ‘Radically Liberatory’ Forms of Academic Freedom’ in the journal Development and Change. Abstract below.

Link: https://doi.org/10.1111/dech.12640

Abstract

This opinion piece interrogates the position that open access policies infringe academic freedom. Through an analysis of the objections to open access policies (specifically Plan S) that draw on academic freedom as their primary concern, the article illustrates the shortcomings of foregrounding a negative conception of academic freedom that primarily seeks to protect the fortunate few in stable academic employment within wealthy countries. Although Plan S contains many regressive and undesirable elements, the article makes a case for supporting its proposal for zero‐embargo repository‐based open access as the basis for a more positive form of academic freedom for scholars around the globe. Ultimately, open access publishing only makes sense within a project that seeks to nurture this positive conception of academic freedom by transforming higher education towards something more socially just and inclusive of knowledge producers and consumers worldwide.

Look to the commons for the future of R&D and science policy

Originally posted on the LSE Impact Blog

The production and distribution of the COVID-19 vaccine is unquestionably good news and hopefully heralds the beginning of the end of the global pandemic. Much of this progress is down to the spirit of collaboration shown by scientists around the world in the race to beat the virus.

Yet the fact that the vaccine remains private intellectual property, despite being publicly funded, is illustrative of a major failure with R&D policy and its tendency to elevate the concerns of the market over those of the common good. Policymakers should instead turn to the commons as an alternative philosophy for governing scientific knowledge production.

Often positioned as a ‘third way’ between the market and the state, ideas of the commons relate to the self-governance and maintenance of shared resources in a way that foregrounds cooperation over competition and shared ownership over private property. Elinor Ostrom, the first woman to win the Nobel Prize for economics, devoted her career to the study of the commons and the ways in which collective action can deliver superior outcomes to private and competitive forms of enterprise. There are hundreds of successful examples of commons, from groundwater basins and irrigation systems to online citizen science projects and community centres. Our own work on the Community-led Open Publishing Infrastructures for Monographs (COPIM) project also seeks to find ways of further embedding community collaboration within infrastructures and models for open access knowledge dissemination. All of these projects prioritise – to varying degrees – community collaboration and the management of shared resources.

Importantly, the distinguishing feature of commons-based modes of production is their participatory and structured nature rather than the extent to which the resources they generate are freely shared with the public. So although a commons-based approach would ultimately lead to a commonly owned or ‘People’s’ vaccine, the priority is to generate meaningful and numerous collaborative interactions that create the conditions for such vaccines to be publicly accessible. This is because the commons refers to the self-organisation of labour as a mode of production, not a method of distributing resources, although it is exactly this self-organisation that would allow the vaccine to be distributed for the common good (as opposed to in the interests of private enterprise).

Yet, instead of promoting collective action as a means of production, policymakers have been preoccupied with openness in the form of open access or open data (see the European Plan S, for example). These concepts relate merely to the method of distributing intellectual resources, not the ways in which they are produced. Openness does little to combat the ingrained competitiveness of scientific research, nor does it work against the control of knowledge production infrastructures by a handful of multinational companies. What’s needed is an R&D policy that both prioritises cooperation in knowledge production and allows the infrastructures, workflows and results to be owned commonly rather than by individuals.

Reorienting R&D funding towards commons-based projects would not only prioritise meaningful collaborations, such as those that helped generate the vaccine, but would also ensure that vaccines and other intellectual property would be owned in common. This could, for example, allow all scientific publications and data to be freely available in perpetuity (because they cannot be enclosed), rather than commercially owned and made freely available only for the duration of the pandemic at the whim of publishers. We would not, for instance, have to rely on Elsevier to grant scientists temporary access to its Coronavirus Information Center, because we would already own the intellectual property on which it is based.

Simply put, policymakers should reorient their focus away from mere open access to the outputs of scientific research and instead nurture the commons across the research lifecycle. It would mean less of a winner-takes-all approach to research funding – away from huge grants dictated by bogus ideas of ‘excellence’ – and more of one that encourages small, careful, collaborative research by and between diverse groups of scientists. This could be facilitated through basic research income, grant lotteries and other non-competitive methods, with the outputs from each grant owned in common by scientists across the globe.

Funders could also stimulate oversight and governance of the infrastructures for knowledge production as knowledge commons, i.e., infrastructures governed by the communities that use them rather than by the market at large. This would allow researchers to decide how these infrastructures are designed and built upon, preventing acquisition of critical knowledge infrastructures and data by undesirable actors. Common ownership in scientific research can be encouraged through data trusts, common patent pools and other democratic procedures for sharing resources.

The commons therefore offers a different frame – a third way – to the traditional R&D strategy that currently emphasises the public, the private, or the interplay between the two. It prioritises self-organisation over state- and market-based forms, emphasising collaboration in an industry beholden to competition. As academic research is likely to take a hit in the post-pandemic economic slowdown, the commons would be a useful way of directing research to foreground process over brute outcomes and collaboration over competition.

Or as the economist Kate Raworth puts it, ‘if you ignore the commons, you’re ignoring one of the most vibrant spaces of the 21st century economy’.

OASPA panel on funding and business mechanisms for equitable open access

On 22nd September I’ll be participating in a panel on funding and business mechanisms for equitable open access at the 2020 OASPA conference. I’ll be using the opportunity to discuss some of the projects I’m involved in – notably the Radical Open Access Collective and the Community-led Open Publishing Infrastructures for Monographs (COPIM) project – in order to highlight the different approaches to business models and sustainability that these projects represent. In particular, drawing on my recent work with Janneke Adema, I will be discussing ‘scaling small’, an organisational philosophy that seeks to build resilience within scholarly publishing through mutual reliance and collaboration. Scaling small is an approach that preserves the locality and (biblio)diversity of approaches to publishing while encouraging presses to work together on shared technical, infrastructural and other publishing projects. Predicated on an ethic of care, in direct opposition to the cookie-cutter economies of scale preferred by the larger commercial publishers, scaling small intends to nurture cooperation (over competition) as a sustaining force for global scholarly communication. I’ll be discussing the opportunities and potential drawbacks of this approach for a more ethical and equitable ecosystem of open access scholarly publishing.

The panel is on Tuesday 22nd September at 5pm BST and will feature the following other participants:

  • Vivian Berghahn, Berghahn Books, UK
  • Sharla Lair, LYRASIS, USA
  • Alexia Hudson-Ward, Oberlin College and Conservatory, USA
  • Chair: Charles Watkinson, University of Michigan, USA

You can register for the meeting here: https://webforms.copernicus.org/OASPA2020/registration (I’m told the registration fee can be waived if you do not have funding).

The datafication in transformative agreements for open access publishing

Transformative agreements are an increasingly common way for universities and consortia to shift publisher business models towards open access. They do this through a prearranged payment that allows institutions to access subscription content while allowing future research to be published in an openly accessible form. These deals are a way for publishers to continue to receive subscription income and boast about their open access content, while universities value them as a cost-neutral strategy for transitioning away from subscriptions towards open access (read Lisa Hinchliffe’s primer for an excellent summary of transformative agreements).

Continue reading “The datafication in transformative agreements for open access publishing”

How can we understand the different effects of UKRI’s open access policy on small learned societies in the humanities?

The UKRI open access consultation deadline is this Friday and we’re likely to see a flurry of responses leading up to it. One response to the consultation caught my eye today from the Friends of Coleridge, a society that ‘exists to foster interest in the life and works of the poet Samuel Taylor Coleridge and his circle’. I wanted to jot down a couple of thoughts on this because I think it represents something quite interesting about the way that open access is playing out within UK humanities organisations.

Continue reading “How can we understand the different effects of UKRI’s open access policy on small learned societies in the humanities?”

COVID-19 and the future of open access

On February 26th, what feels like a lifetime ago now, the Los Angeles Times published a column with the headline ‘COVID-19 could kill the for-profit science publishing model. That would be a good thing’. Its author, Michael Hiltzik, argues that for-profit publishing is ‘under assault by universities and government agencies frustrated at being forced to pay for access to research they’ve funded in the first place.’ Hiltzik doesn’t really go into how open access confronts the for-profit model, and instead offers a somewhat crude summary of the importance of open science during the pandemic, including preprints, open collaboration, data sharing and open access to research.

Continue reading “COVID-19 and the future of open access”