What’s so bad about consolidation in academic publishing?

Today’s Scholarly Kitchen blog post is an attempt by David Crotty — the blog’s editor — to quantify the increasing consolidation of the academic publishing industry. Crotty concludes:

Overall, the market has significantly consolidated since 2000 — when the top 5 publishers held 39% of the market of articles to 2022 where they control 61% of it. Looking at larger sets of publishers makes the consolidation even more extreme, as the top 10 largest publishers went from 47% of the market in 2000 to 75% in 2023, and the top 20 largest publishers from 54% to controlling 83% of the corpus.

https://scholarlykitchen.sspnet.org/2023/10/30/quantifying-consolidation-in-the-scholarly-journals-market/

It’s helpful to have more data on the increasing power that a small number of academic publishers hold. Crotty charts this consolidation from the year 2000 onwards: from the concentration brought about by the Big Deal to the present day, in which five publishers control 61% of article output thanks to dominant open access business models built on greater volume and technological scale. The author points the finger at Coalition S for instigating a ‘rapid state of change’ that has allowed author-pays open access to flourish.
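As an aside, the figures Crotty reports are n-firm concentration ratios: the share of total article output held by the n largest publishers. Here is a minimal sketch of that calculation (my own illustration, not code from the post; the per-publisher article counts are invented), just to make the arithmetic concrete:

```python
# Illustrative only: the n-firm concentration ratio (CRn), the measure behind
# claims like "the top 5 publishers control 61% of the market".
# The article counts below are hypothetical, not Crotty's data.

def concentration_ratio(article_counts, n):
    """Share of total articles published by the n largest publishers."""
    total = sum(article_counts)
    top_n = sum(sorted(article_counts, reverse=True)[:n])
    return top_n / total

# Hypothetical articles-per-publisher counts for a single year
counts = [500_000, 420_000, 360_000, 300_000, 250_000] + [40_000] * 30

for n in (5, 10, 20):
    print(f"CR{n}: {concentration_ratio(counts, n):.0%}")
```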

I’m no fan of open access policies, Plan S especially, and I’m sure that policy interventions play a part in the consolidation we are seeing. There are of course many ways of achieving open access without recourse to author fees, transformative agreements, or technologies that replace human expertise with automation and scale. But while there is nothing natural or necessary about the relationship between open access and consolidation, there is a much stronger connection between commercialisation and consolidation. The recent history of academic publishing has been one of marketisation and, hence, consolidation.

I always bristle when I read that open access is to blame for the problems with the publishing market, not simply because open access does not have to be a market-based activity (and is better when it isn’t) but more because the explanation is so shallow. It is a position that usually takes as its starting point that the natural and proper way to organise academic publishing is as a commercial activity, and that any intervention working against this is to blame for the deleterious effects of commercialisation. On this view, publishing is and always will be a business (possibly a reflection of the constituents that the Scholarly Kitchen represents), despite the fact that it is exactly the commercial nature of publishing that is the problem.

Yet open access is good precisely because it allows us to reorient academic publishing away from commercial practice and to experiment with forms of publishing that are less reliant on competition, profiteering and the extraction of free and under-remunerated labour. Policymakers are starting to wake up to this fact, as in part illustrated by the turn towards no-fee and non-commercial forms of open access, and I am cautiously optimistic about this turn. The danger is that policymakers mandate no-fee open access in accordance with the requirements of commercial publishing and the need to devalue skilled labour in the pursuit of revenue.

So it’s easy to criticise open access policies for their harmful impact, but this has to be done from a position that understands how the same issues of profiteering and extraction were a consequence of the subscription market too. They are the logical extension of publishing as a market-based activity, not of wanting to make the literature freely available to all. Open access policies do not address the problem of marketisation largely because they are not designed to do so. Profit-seeking actors create business models to maximise their revenues as a result of such policies. The whole system rests on this logic of extraction and that’s what needs to be opposed.

How to cultivate good closures: ‘scaling small’ and the limits of openness

Text of a talk given to the COPIM end-of-project conference: “Scaling Small: Community-Owned Futures for Open Access Books”, April 20th 2023

Open access publishing has always had a difficult relationship with smoothness and scale. Openness implies seamlessness, limitlessness or structurelessness – the idea that the removal of price and permission barriers is all that’s needed to allow research to reach its full potential. The drive for seamlessness is on display in much of the push for interoperability of standards and persistent identifiers that shape the infrastructures of openness. Throughout the evolution of open access, many ideas have been propagated around, for example, the necessity of CC BY as the one and only licence that facilitates this interoperability and smoothness of access and reuse. Similarly, failed projects such as One Repo sought to create a single open access repository to rule them all, in response to a perceived messy and stratified landscape of institutional and subject repositories.

Yet this relationship between openness and scale also leads to new kinds of closure, particularly the commercial closures of walled gardens that stretch across proprietary services and make researcher data available for ever-greater user surveillance. The economies of scale of commercial publishers require cookie-cutter production processes that remove all traces of care from publishing, in exchange for APCs and BPCs (article and book processing charges), thus ensuring that more publications can be processed cheaply with as little recourse to paid human labour as possible. Smoothness and scale are simply market enclosures by another name.

When Janneke and I were writing our ‘scaling small’ article, we were particularly interested in exploring alternative understandings of scale that preserve and facilitate difference and messiness for open access book publishing. How can we nurture careful and bibliodiverse publishing through open access infrastructures when it is exactly this difference and complexity that commercial forms of sustainability want to standardise at every turn? In outlining ‘scaling small’, we looked to the commons as a way of thinking through these issues.

As a mode of production based on collaboration and self-organisation of labour, the commons was a natural fit for the kinds of projects we were involved in. We charted the informal mutual reliance – what we referred to as the latent commons, borrowing from Anna Tsing (2017) – within the Radical Open Access Collective right through to the expansive formality of the COPIM project. In doing so, we illustrated the different forms of organisation that facilitate alternative publishing projects that stand in opposition to the market as the dominant mode of production. Scaling small is primarily about how open access can be sustained if we embed ourselves in each other’s projects and infrastructures in a way that has ‘global reach but preserves local contexts’ (Adema and Moore 2021). It is a reminder that the commons is an active social process rather than a fixed set of open resources available to all.  

In their posthumously released book On the Inconvenience of Other People, Lauren Berlant writes against a fixed understanding of the commons that ‘merely needs the world to create infrastructures to catch up with it’ (Berlant 2022). Instead, for Berlant, the ‘better power’ of the commons is to ‘point to a way to view what’s broken in sociality, the difficulty of convening a world conjointly’. From our perspective, the commons is about revealing how hard it is to scale small in a world dominated by the need for big, homogenising platforms. It is not, then, about having a fixed understanding of the infrastructures necessary for open access publishing but more about experimenting with the different kinds of socialities that may allow experimental infrastructures of different scales and formalities to flourish.

This is why scaling small reveals the limits of openness and forces us instead to cultivate good closures (echoing the ‘good cuts’ of Sarah Kember and Joanna Zylinska’s (2012) reading of Karen Barad) based on what we want to value ethically and politically. So rather than leaving everything to the structurelessness of market-centric openness, through COPIM we learn how to deal with the fact that things like governance, careful publishing and labour-intensive processes do not scale well according to economic logic. In my time on the COPIM project, for example, I learned how community governance requires pragmatic decision-making and norms of trust within the community; it is not something that can be completely organised through rules and board structures. Yet we still proceed to build these structures to see what works and what doesn’t, relying on the fact that we all share a broad horizon of better, more ethical futures for book publishing.

Yet of course, antagonism still exists within and outside the COPIM project. Is it OK that the models and infrastructures being developed within this community are being extracted from it by commercial publishers? Bloomsbury, for example, has just proudly announced that it is the first commercial book publisher to utilise the kind of collective funding model being developed by COPIM, the Open Library of Humanities and other scholar-led publishers. How is it possible to scale small when a big commercial actor is waiting to take what you have developed and piggyback on it for commercial gain? Do we engage with commercial publishers or keep them at arm’s length?

Again, part of the answer to this question lies in sociality, or the fact that COPIM has managed to carve out a pretty unique situation in neoliberal higher education, bringing together a vast array of likeminded people and organisations with the explicit goal of undermining the monopolisation of commercial publishers in favour of community-led approaches. Coupled with the move to diamond open access journals that is gaining traction particularly in continental Europe, we have an important counter-hegemonic project being formed around communities cross-pollinating with one another rather than competing. Commercial publishers may treat COPIM’s work as free R&D, but they cannot extract the social glue that keeps it together and sets it apart from marketised models.

This is why I am so excited about the recent announcement of Open Book Futures and its potential to further reach out to and engage libraries, infrastructure providers and communities outside the Global North, increasing the messiness that allows us to scale small. As someone now working in a library, I am especially pleased to see libraries treated as partners rather than chequebooks – as is too often the case with new open access initiatives – and given meaningful governance over the future of the Open Book Collective. Scaling small will only work if libraries are understood as part of the community and part of the cross-pollination at work. Without this, there is a danger that the additional labour of collections librarians is undervalued, or objectified as a mere tool for the provision of open access, even though it is a crucial and active facilitator of the smallness we desire.

I hope that we, as an interdisciplinary, multi-practitioner group of advocates for better publishing futures, can also consider how scaling small may help transform our professional networks away from the commercially driven conservatism of learned societies and towards expansive forms of mutual reliance and care within and between them. In doing so, it can help build the necessary chains of equivalence between previously disparate learned societies and member organisations, allowing us to turn our attention to the brutally individuating structures of marketised academia (which, at bottom, is the bigger issue at hand).

So in conclusion, I hope to have conveyed in these short remarks that scaling small is, above all, a project of sociality, building new connections and getting together in different ways, and not simply or even primarily about the publications and resources being produced and shared. The point is to continue learning how to hold onto this social diversity and bibliodiversity through the decisions we take and the institutional closures we enact, particularly as more and more actors become involved. Viewed in this light, scaling small reveals the limits of openness and the necessity of cultivating good closures with other (inconvenient) people.

Works cited

Adema, Janneke, and Samuel A. Moore. 2021. ‘Scaling Small; Or How to Envision New Relationalities for Knowledge Production’. Westminster Papers in Communication and Culture 16 (1). https://doi.org/10.16997/wpcc.918.

Berlant, Lauren. 2022. On the Inconvenience of Other People. Writing Matters! Durham: Duke University Press.

Kember, Sarah, and Joanna Zylinska. 2012. Life after New Media: Mediation as a Vital Process. Cambridge, Mass.: MIT Press.

Tsing, Anna Lowenhaupt. 2017. The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins. Princeton, NJ: Princeton University Press.

Preprints and the futures of peer review

Yesterday, the preprint repositories bioRxiv/medRxiv and arXiv released coordinated statements on the recent memo on open science from the White House Office of Science and Technology Policy. While welcoming the memo, the repositories claim that the ‘rational choice’ for making research immediately accessible would be to mandate preprints for all federally funded research. They write:

This will ensure that the findings are freely accessible to anyone anywhere in the world. An important additional benefit is the immediate availability of the information, avoiding the long delays associated with evaluation by traditional scientific journals (typically around one year). Scientific inquiry then progresses faster, as has been particularly evident for COVID research during the pandemic.

https://connect.biorxiv.org/news/2023/04/11/ostp_response

This is familiar rhetoric, and a couple of commentators on social media have noted the similarities between what the preprint servers are proposing here and ‘Plan U’, the 2019 initiative that attempted to ‘sidestep the complexities and uncertainties’ of initiatives like Plan S through a similar requirement for immediate preprinting of all funded research. The designers of Plan U argue that preprinting is much simpler and cheaper because preprint servers do not perform peer review and so operate on a lower cost-per-paper basis.

I think it’s important to acknowledge the disciplinary differences when it comes to preprinting: not just that many scholars in the arts, humanities and social sciences are more reluctant to preprint, but also that the tradition of preprinting referenced here emerged out of a particularly narrow set of disciplinary conditions and epistemic requirements in high-energy physics. While high-energy physicists are by no means the only disciplinary group to share working papers (see economics, for example), preprinting has a particularly rich history in their field. This is primarily because of the ways in which HEP research projects are structured between large overlapping groups from a number of institutes, each with its own internal review and quality-control processes that take place before a preprint is uploaded to the arXiv. I discussed this more in a previous post on in-house peer review processes.

The reason I’m mentioning this is that the statement made by the repositories fails to account for one of the more important elements of Plan U: that such an approach will ‘produce opportunities for new initiatives in peer review and research evaluation’. The need to experiment with new models of peer review/research evaluation should be front and centre of any call for a new, open system of publishing, especially one predicated on lower costs due to the lack of peer review. Without well-explored alternatives, there is an implication that the online marketplace of ideas will just figure out the best way to evaluate and select research.

Yet the history of high-energy physics tells us that ‘openness’ is not a free-for-all but is predicated on well-designed socio-technical infrastructures that evolved from, and thus suit, the discipline in question. These infrastructures and new socialities need to be designed and implemented with care, not left for us all to just figure out. This is why any system of publishing based on preprinting+evaluation needs adequate support to experiment and build the structures necessary for non-publisher-based forms of peer review. I’m convinced, for example, that a lot of this could be built into the work of disciplinary societies and in-house publishing committees. Yet these ideas require financial support to see what works.

In the context of the never-ending crisis in peer review, the OSTP memo could serve as a springboard for figuring out and operationalising new forms of research evaluation. This could make for a sensitive transition away from traditional publication practices that help the commercial publishing industry maintain control of scholarly communication. The memo is therefore a good opportunity to experiment and build entirely new approaches to academic publishing — but we have to make the correct arguments for how we figure this out, not arguments based on cheaper publishing and web-based editorial free-for-alls.

New preprint: the politics of rights retention

I’ve just uploaded ‘The Politics of Rights Retention’ to my Humanities Commons site: https://hcommons.org/deposits/item/hc:52287/. The article is a preprint of a commentary currently under consideration for a special issue on open access publishing.

Abstract

This article presents a commentary on the recent resurgence of interest in the practice of rights retention in scholarly publishing. Led in part by the evolving European policy landscape, rights retention seeks to ensure immediate access to accepted manuscripts uploaded to repositories. The article identifies a trajectory in the development of rights retention from something that publishers could previously ignore to a practice they are now forced to confront. Despite being couched in the neoliberal logic of market-centric policymaking, I argue that rights retention represents a more combative approach to publisher power by institutions and funders that could yield significant benefits for a more equitable system of open access publishing.

The curious internal logic of open access policymaking

This week, the White House Office of Science and Technology Policy (OSTP) declared 2023 its ‘Year of Open Science’, announcing ‘new grant funding, improvements in research infrastructure, broadened research participation for emerging scholars, and expanded opportunities for public engagement’. This announcement builds on the OSTP’s open access policy announcement last year, which will require immediate open access to federally funded research from 2025. Given the state of the academic publishing market, and the tendency for US institutions to look towards market-based solutions, such a policy change will result in more article-processing charge payments and, most likely, publishing agreements between libraries and academic publishers (as I have written about elsewhere). The OSTP’s policy interventions will therefore hasten the marketisation of open access publishing by further cementing the business models of large commercial publishers — having similar effects to the policy initiatives of European funders.

As the US becomes more centralised and maximalist in its approach to open access policymaking, European institutions are taking a leaf out of the North American book by implementing rights retention policies — of the kind pioneered by Harvard in 2008 and adopted widely in North America thereafter. If 2023 will be the ‘year of open science’ in the USA, it will surely be the year of rights retention in Europe. This is largely a response to funders now refusing to pay APCs for hybrid journals — a form of profiteering initially permitted by many funders who now realise the error of their ways. With APC payments prohibited, researchers need rights retention to continue publishing in hybrid journals while meeting their funder requirements.

There is a curious internal logic here: the USA follows the market-making of Europe, while Europe locks horns with the market and adopts US-style rights retention policies. Maybe this means that we’re heading towards a stable middle ground between these two separate (but equally neoliberal) approaches, or maybe one of the hegemonic blocs is simply further along the road that both are travelling (not to mention the impact these shifts have on countries in the Global South, or simply those outside Europe and North America).

Clearly I’m being too binary and eliding a great deal of complexity in this very short post, but it struck me that there is a curious internal logic at work here. The push for open access has forced a shift in the business models of academic publishers, but this very same shift causes more of the profiteering that open access was responding to in the first place. Policymakers dance back and forth trying to make open access workable for researchers and affordable for universities, but neither of these aims will be possible to achieve while researchers are required to publish in journals owned by a publishing industry more answerable to shareholders than research communities.

Research assessment in the university without condition

Cross-posted on the Dariah Open blog as part of their series on research assessment in the humanities and social sciences

In his lecture entitled ‘The future of the profession or the university without condition’, Jacques Derrida makes the case for a university dedicated to the ‘principal right to say everything, whether it be under the heading of fiction and the experimentation of knowledge, and the right to say it publicly, to publish it’ (Derrida 2001). Beyond mere academic freedom, Derrida is arguing for the importance not just of the right to say and publish, but to question the very institutions and practices upon which such freedom is based – those performative structures, repertoires and boundaries that make up what we call (and do not call) ‘the humanities’.

One such structure – implicit in much of what Derrida writes – relates to the material conditions of the university, or the relationship between ‘professing’ and being ‘a professor’ tasked with the creation of singular works representing their thought (‘oeuvres’). Derrida identifies a gulf between the unconditional university he is arguing for and the material conditions that work against the realisation of such a university. Academics are conditioned to publish work in certain ways, not in the service of this unconditional university but in order to simply earn a living. Integral to this situation are the ways in which humanities research and researchers are assessed and valued by universities and funders. We publish in prestigious books and journals so that we might continue to ‘profess’ in the university for a while longer.

For the most part, research assessment reform promises to tinker with these structures and propose new, ‘fairer’ ways to evaluate research. For example, the recent European agreement on reforming research assessment seeks to eradicate journal markers and inappropriate quantitative measures, while promoting qualitative measures that reward the plurality of roles and ways in which academics can contribute to research. These recommendations are made in the service of recognising those ‘diverse outputs, practices and activities that maximise the quality and impact of research’ (p. 2). The implication is that good research is being done, but that current approaches to research assessment cannot detect it. Assessment is conceived primarily as an epistemological issue — one of more accurately rewarding those doing the best work.

Absent from these reforms is a thoroughgoing consideration of the fact that research assessment is, more than anything else, a labour issue. It is about the material conditions that allow participation and progression within higher education institutions. Researchers chase prestige at every turn because this is the path to being rewarded in a brutally competitive academic job market. Without a greater push to end precarity and ease workloads, changing evaluative criteria will have no impact on the labour conditions within the contemporary university. This is why such reforms need to be coupled with a real commitment to improving labour conditions, which will in turn have its own epistemological benefits in the form of less pressure to publish and a greater freedom to experiment.

I’m not here to propose alternatives to the European reforms. Instead, I want us to consider whether Derrida’s university without condition – though no more than a theoretical construct – also requires us to refuse the conditions of external research assessment, and especially researcher assessment. Abandoning assessment could be undertaken in favour of careful and collectivising appreciation by the communities that create and sustain research. We should therefore take as our starting point that the assessment of research for the distribution of scarce resources is not strictly necessary to the pursuit of research. Clearly, financial resources have to be distributed, but a more equitable alternative would be to give all members of the university greater democratic governance over how resources are distributed: randomly, communally, through basic income or however else. This is the more urgent work of research assessment reform, not tinkering at the margins.

These more experimental approaches, although gaining traction, presuppose that the university should not be beholden to liberal ideas of meritocracy or individual excellence. Assessment reform should instead lower barriers to participation and facilitate experimental, diverse and collective approaches to knowledge production for their own sake. But it is this connection to the material conditions of labour that is most important to recognise and support: the university without condition requires it.

Work cited

Derrida, Jacques. 2001. ‘The Future of the Profession or the Unconditional University (Thanks to the Humanities That Could Take Place Tomorrow)’. In Jacques Derrida and the Humanities: A Critical Reader, edited by Tom Cohen. Cambridge: Cambridge University Press.

New Horizons in Open Access Publishing: upcoming Open Access Week talk

On October 25th I’ll be giving an online talk at University College Cork for their event on New Horizons in Open Access Publishing. Details below:

‘Scaling small’, or why there are no BIG solutions to the problem of ethical open access

As Plan S gains steam in Europe and the US mandates public access to all research published from 2026, subscription publishing seems likely to be an increasingly unviable business model in the near future. We are rapidly moving to a time in which all academic research articles – and increasing numbers of books – will be available to access and share without payment. Yet although open access has won the day, it is worth considering why this victory also feels like something of a defeat. Publishing is still largely controlled by a handful of profiteering companies who are rapidly expanding into areas beyond research articles, such as research data, user data and other elements of the knowledge production workflow. At the same time, many researchers remain unengaged, motivated instead by regressive research cultures that promote competition over collaboration, and see open access as an imposition or something to be ignored entirely. But what is to be done here, and why are there no easy or big solutions? This talk will argue that the all-encompassing solutions promised by open access mandates, funder platforms and transformative agreements are part of the problem. Instead, open access practitioners need to consider the necessity of ‘smallness’ and local solutions in nurturing a diverse and ethical diamond open access publishing ecosystem.

Thoughts on the new White House OSTP open access memo

Cross-posted on the University of Cambridge’s Unlocking Research blog.

In the USA last Thursday, the White House Office of Science and Technology Policy announced its decision to mandate public access to all federally funded research articles and data. From 2026, the permitted embargo period of one year for funded publications will be removed and all publications arising from federal funding will have to be immediately accessible through a repository. Although more details are to be announced, my colleague Niamh Tumelty, the OSC’s Head of Open Research Services, shared a helpful summary of the policy and some initial reaction here. I want to offer my own personal assessment of what the new policy might mean from the perspective of open access to research articles, something we are working hard to promote and support throughout the university.

To be sure, the new OSTP memo is big news: the US produces a huge amount of research that will now be made immediately available without payment to the world at large. Following in the footsteps of Plan S in Europe, the open access policy landscape is rapidly evolving away from embargo periods and towards immediate access to research across all disciplines. Publishing industry consultants Clarke & Esposito have even argued that this intervention will make the subscription journal all the more unviable, eventually leading to its demise.

Indeed, responses from the publishing industry have been mixed. The STM Association, for example, offer a muted one-paragraph response claiming tepid support for the memo, while organisations such as the AAP were more vocally against what they see as a lack of ‘formal, meaningful consultation or public input’ on the memo, despite the fact that many more details are still to be announced (presumably, following consultation). A similar sense of frustration was displayed by some of the authors of the industry-supported Scholarly Kitchen blog. It’s fair to say that the publishing industry itself – at least the part of it that makes money from journal subscriptions – has not welcomed the new memo with open arms.

Understandably, funders and advocacy organisations have welcomed the news. Johan Rooryck from Coalition S called the memo a ‘game changer for scholarly publishing’, while the Open Research Funders Group ‘applauds bold OSTP action’ in its response. Open access advocates SPARC described the memo as a ‘historic win’ for open access and a ‘giant step towards realizing our collective goal of ensuring that sharing knowledge is a human right – for everyone’. Certainly, for those arguing in favour of greater public access to research, the memo will indeed result in just this. But I still have my reservations.

My PhD thesis analysed and assessed the creation and implementation of open access policy in the UK. As Cambridge researchers no doubt know, the open access policy landscape is composed of a number of mandates, with varying degrees of complexity, and affects the vast majority of UK researchers in one way or another. This is for better and for worse: there is an increase in bureaucracy associated with open access policy (particularly through repositories), even though it results in greater access to research. However, when you remove this bureaucracy through more seamless approaches to OA like transformative agreements, there is a risk of consolidating the power of large commercial publishers who dominate this space and make obscene profits (a fear also shared by Jeff Pooley in his write-up of the policy). There is therefore a delicate balance to be struck between simply throwing money at market-based solutions and requiring researchers and librarians to take on more of the burden of compliance.

The problem with indiscriminate policy mandates for public access to research, such as the OSTP’s memo, is that they shore up the idea that publishing has to be provided by a private industry that is not especially accountable to research communities or the university more broadly. This is precisely because these policies are indiscriminate and therefore apply to everyone equally, which for academic publishing means benefitting those already in a good position to profit. Larger commercial publishers have worked out better than anyone else how to monetise open access through a range of different business models. As long as researchers need to continue publishing with the bigger publishers, which they do for career reasons, these publishers will always be in a better position to benefit from open access policies. It is hard to imagine how the individual funding bodies could implement the OSTP memo in a way that foregrounds a more bibliodiverse publishing system at the expense of commercialism (not least because this goal does not appear to be the target of the memo).

I do not mean to overplay the pessimism here: it is great that we are heading for a world of much more open access research. The point now is to couple this policy with funding and support to continue building the capacity of an ethical and accountable publishing ecosystem, all while trying to embed these ethical alternatives within the mainstream. This kind of culture change cannot be achieved by mandates like the one the OSTP is proposing, but it can be achieved through the harder work of raising awareness of alternatives and highlighting the downsides of current approaches to publishing. It is also important to reveal the ways in which research cultures shape how researchers decide to publish their work – often at the expense of experimentation and openness – and how those cultures can be changed for the better.

So I am interested to see how the memo is implemented in practice, especially how it is funded and the conditions set on immediate access to research. I am also keen to see what role, if any, rights retention plays in the implementation and how US libraries decide to support the policy and the changing environment more broadly. Ultimately, however, the move to a more scholar-led and scholar-governed ecosystem will not occur on an open/closed binary, nor on a top-down/bottom-up one, and so we must find a range of ways to support new cultures of knowledge production and dissemination in the university and beyond.

Why open science is primarily a labour issue

Reforming research assessment and culture is a hot topic in higher education, particularly where these issues relate to research funding. I discussed the HELIOS initiative — a funder-led approach to incentivising open science practices in North American tenure and promotion guidelines — in my last post. In the past week, EU science ministers have agreed on a plan to facilitate coordinated reform of research assessment processes.

As I noted last week, research assessment reform is often predicated upon nurturing cultures of open science based on encouraging researchers to share the materials and underlying processes behind their research. In doing this, the argument goes, research becomes ‘democratised’ and ‘collectivised’ by its ability to bring more people into the scientific conversation through the removal of price and permission barriers to the reuse of materials. Open science, I argue, is an overly resource-focused approach to the knowledge commons (free code, data and publications), rather than one focused on the relationalities and different possible forms of organisation in how these knowledge resources are produced. In addition to freely available resources, these alternative relationalities are vital for a more emancipatory university.

But emancipatory from what? Underpinning all these approaches to assessment reform is the brutally competitive nature of marketised higher education and the fact that precarious and exploited labour props up so much of what the university does. To this extent, open science is primarily a labour issue, not an epistemological one, although it is rarely approached by policymakers in this way. Knowledge production does not benefit from precarity or poor working conditions, not least due to the way they turn researchers into individuals competing with one another at every turn for scarce resources. If open science is to have any meaning, then, it must be grounded in a politics that is emancipatory from capital and the problems of researchers being oriented around capital at every point.

So despite the often-touted association between open science and collectivity, or the democratisation of higher education, this association is weak at best, especially when promoted by senior managers and policymakers — i.e., those with a stake in maintaining the neoliberal academy. A truly collectivising approach to research assessment reform would foreground the labour issues associated with contemporary higher education, on the assumption that open (or better) science would follow from less individuation and more collective governance over what the university is and does.

I have argued elsewhere that the push toward open access, while regressive in many ways, frees up resources that allow for more progressive and socially just pockets of activity in the margins. Being able to squat within the discourses of efficiency, openness and other such concepts affords the capacity to experiment with politically exciting approaches to common and collectively-managed endeavours, even while the profiteering and market-making associated with open access publishing continues apace. Is there a way for us to benefit from the push for research assessment reform in the same way, by foregrounding these labour issues and radically reimagining what knowledge production and dissemination could look like?

Part of the problem with policy-led approaches is that they fix and lock down what ‘openness’ is and what it intends to achieve, while also forcing researchers to conform to this definition and comply with its demands. Yet openness itself, as many have argued, facilitates and requires experimentation, particularly around the forms of organisation required to facilitate the kinds of relationalities that could help us build collective power in higher education. This, I argue, is what research assessment reform should be based on: building the capacity to explore and imagine different ways of producing knowledge, not simply reworking incentives towards open publishing and the like. In many ways, this means leaving behind assessment and replacing it with capacity building (as we’ve argued for in a different context elsewhere) or something altogether detached from the assessment of individual ‘performance’.

For the most part, this vision requires radical thinking — which is why so many incremental approaches to assessment and culture reform fall flat, rehearsing all the pre-existing issues with the old system. My argument is simply that no one really knows how best to reimagine the new forms of organisation needed for more ethical knowledge production (though many people could give it a good shot given the opportunity, especially the very people currently so exploited by precarity). It involves bringing people together in a variety of ways, sustaining their collective efforts, and not continually dividing them up into individual units to be assessed at every possible turn. This also entails ceding control from policymakers to smaller, decentralised collectives of knowledge producers…which is probably a tough sell to the average policymaker.

How does open science ‘democratise’ and ‘collectivise’ research?

A recent article in The Scientist discusses the newly launched Higher Education Leadership Initiative for Open Scholarship (HELIOS). Composed of ‘leaders’ from over 75 US colleges and universities, HELIOS is committed to incentivising open science practices in order to make research more ‘inclusive, transparent, and efficient’. It is an approach designed to reorient assessment mechanisms towards open science practices, including ‘publishing in open-access journals, posting data using FAIR (findable, accessible, interoperable, and reusable) principles, and sharing other research outputs such as computer code.’

Throughout the article, we hear how open science ‘democratises’ science and works against the rampant individualism that characterises so much of higher education. Open science is ‘collaborative’ and entails the sharing of data, code and publications for anyone to access and reuse; it also allows research to reach and engage other communities not traditionally considered as part of the research process. These are familiar themes from years of open science advocacy.

Yet it isn’t clear what the relationship is between the greater sharing of research materials and the so-called democratisation at work in open science. What actually is democratising and collectivising about what HELIOS is trying to do?

It is important to ask this question because HELIOS is, by all accounts, a top-down initiative led by senior figures at research-intensive universities in the US. Despite the casual association between open science and collectivity, it appears that HELIOS is more a way for university leaders to coerce researchers into cultural change than something led by the research community at large. While changing tenure guidelines to prioritise publishing in open access journals, sharing FAIR data and releasing reusable open code may have some good outcomes, such changes are not in themselves the basis for greater collective governance of science. Instead, they will provide an economic reason for researchers to adopt open science practices, a reason still based on individual progress within the academy.

Clearly, it makes sense to incentivise behaviours that are good. But the problem here is that greater democratic governance of science should be the means by which the incentivising takes place. This matters all the more because the lack of collective governance within higher education is one of the biggest issues facing knowledge production right now: addressing it could do far more for the cultures of academic research than HELIOS’s narrow focus on open science.

The relationship between ‘openness’ and democratisation is a false one, or at least there is no obvious or necessary connection between the two (see Tkacz’s work for more on the politics of openness). This is because open science is largely focused on the outputs of scientific research instead of the cultures through which they are produced. Or rather, open science is mainly interested in efficient and reproducible modes of production, not ethical or collectively-governed ones. The latter may figure in some visions of open science, but it is not their defining feature.

When policymakers and university leaders mandate the opening up of specific resources, democratic governance gets left behind. This is because this kind of openness does not require community accountability to be realised, only a vague sense that giving resources away will lead to a kind of inclusion that previously did not exist. This focus on resources is what allows the market and private enterprise — the ultimate expression of individualism — to dominate the provision of openness at the expense of community governance.

For open science to adequately ‘democratise’ or ‘collectivise’, it must consider the closures involved in such processes. By closures, I mean the actively designed and nurtured cultures of inclusion — and exclusion, by extension — that are required to foreground the good stuff (different cultures of knowledge, mutual reliance and care) and relegate the bad (everything oriented around profit). At some point, we’re going to have to work out how to leave the openness of openness behind and piece back together a more ethical system of knowledge production based on democratic self-governance of the university itself.