As a 2024-25 Open Future Fellow, I explored the use of AI across the research and publishing lifecycle. The output of this research is available as a report here:
https://openfuture.eu/publication/governing-the-scholarly-ai-commons
Summary:
In recent years, commercial publishers and information analytics companies have increased their reliance on AI-based technologies to conduct a range of tasks across the research lifecycle. From submission to publication and beyond, automated technologies are assisting with tasks relating to fraud detection, peer review, article production, and citation analysis. These technologies may be developed in-house or introduced as part of the rapidly growing network of startups and companies benefiting from the huge injection of investment in this area. AI is big business and relies on grand claims about its efficacy and potential, making it especially important that affected communities can shape its implementation.
This report, written by Open Future’s fellow Dr. Samuel A. Moore, explores a range of strategies and interventions for governing the use of AI in academic research production and communication. Looking specifically at commercial research and publishing organisations, the report examines various ways in which communities might begin to reclaim control over AI in knowledge production (or what the report terms the ‘scholarly commons’), from regulation through to union activity and editorial board management.
In addition, the report argues that more work is needed to design robust structures for governing these technologies, while also nurturing open-source and community-led approaches in these areas.
This research contributes to Open Future’s work on the Open Movement, which envisions a more democratic digital future by preserving the achievements of the past two decades and harnessing the power of the Digital Commons.