The term ‘predatory publisher’ reveals a limit of language – or rather it asks too much of language. It seeks a binary separation between ‘predatory’ and ‘non-predatory’ where no such separation can exist, ultimately illustrating more about the motivations and hidden biases of the accuser than about the supposedly predatory journal at hand. We therefore need another way to conceptualise the practices that ‘predatory publishing’ seeks to describe.

The limitations of the term are on display in a preprint circulated this week by Severin et al., titled ‘Who reviews for predatory journals? A study on reviewer characteristics’. The study used a review tracking service called Publons to identify researchers who declared having reviewed for journals the authors consider predatory. They conclude:

[T]he profiles of scholars who review for predatory journals tend to resemble those scholars who publish their research in these outlets: they tend to be young and inexperienced researchers who are affiliated with institutions in developing regions.

According to the analysis, predatory journals both publish the work of and rely on the reviewing expertise of the same groups of inexperienced researchers based in ‘developing’ regions. This leads the authors to suggest, as a possible explanation, that ‘predatory journals have become an integral part of the workflow for many scholars in low-and lower-middle income countries’ (p. 8). The kinds of people publishing in these journals are demographically similar to those reviewing for them.

The authors base their definition of ‘predatory’ on Cabell’s lists, a proprietary database of journals identified as ‘potentially not following scientific publication standards or expectations on quality, peer reviewing, or proper metrics’ (p. 4). The authors make it clear that predation is not a ‘simple binary phenomenon’ and that some classified journals may exist in a ‘grey zone’ between predatory and legitimate. As with much humanities/social science research, when something isn’t a binary it is often conceived as a spectrum.

But the problem with the authors’ strategy is that their analysis is still conducted on a binary between predatory and non-predatory, rather than on a spectrum of questionable practices. If the authors were to analyse journals along such a spectrum, they would have to move beyond the list provided by Cabell’s to all journals. We only have to look at websites like Retraction Watch or various ‘sting’ articles to know that bad practice occurs across all forms of publishing, not just those identified by Cabell’s, Jeffrey Beall or whoever else may have a financial or ideological interest in accusations of predation. We might also consider the extractive nature of publisher profiteering a similar kind of bad practice, one that impacts the quality of the research published.

I’m not in any way questioning the authors’ motivations with this study (and they recognise some of these concerns in a Nature story about the article), but simply highlighting that the study of bad publishing practice cannot be conducted according to whether or not a journal is ‘predatory’. This is especially important because the journals identified as predatory are most often those from outside the Global North. There are always colonial and racial overtones to the analysis of predatory publishers that — consciously or not — separate them from the ‘trustworthy’ outlets in Europe and North America, when in fact any study of trustworthiness in publishing should not be limited to publishers already identified as ‘predatory’.

So the term ‘predatory publisher’ is an aporia: the moment you define an organisation as ‘predatory’ is the moment the term collapses, revealing a motivation to decide in advance which publishers are good and which are bad (often along geographical boundaries). But this issue cannot be decided in advance — it is undecidable — and so remains continually open to interpretation and shifting context. Either publishing is always-already a predatory practice, or we have to find a different way of analysing trustworthiness.