This essay examines the shifting landscape of trust in academic scholarship, challenging the traditional model in which trust is outsourced to publishers and journals as proxies for validation and quality assessment. While this system developed important mechanisms for scholarly trust, including persistent identification, version control, peer feedback, and contextual placement, technological change offers an opportunity to reclaim and enhance these mechanisms. Drawing on principles of emergent scholarship, I explore how trust can be reimagined through knowledge connection, innovation through openness, identity through community, value through engagement, and meaning through medium. This approach does not reject traditional scholarship but builds bridges between established practices and new possibilities, enabling a shift from institutional proxies to visible processes. The essay proposes a three-tier technical framework that maintains compatibility with traditional academic structures while introducing new possibilities: a live working environment where scholarship evolves through visible iteration; preprints with DOIs enabling persistent citation; and journal publication connecting to established incentive structures. This framework offers significant benefits, including greater scholarly autonomy, enhanced transparency, increased responsiveness, and recognition of diverse contributions. However, it also presents challenges: technical barriers to participation, potential fragmentation, increased resource demands, and the difficulty of gaining recognition within traditional contexts. The result is not a replacement for traditional scholarship but an evolution that shifts trust from institutional proxies to visible processes, creating scholarship that is more connected, open, engaged, and ultimately more trustworthy.
The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex: a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article, I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.