13 items in this folder.

  • Problem-based learning and the structural conditions for productive AI integration

    Higher education's response to AI has focused on the artefact: detecting it, restricting it, and restoring confidence in what students produce. This essay argues that the structural features of problem-based learning — problem-driven inquiry, collaborative knowledge construction, facilitation over instruction, and metacognitive reflection — are the same conditions under which AI integration becomes educationally productive rather than substitutive. The alignment is structural, not retrospective: PBL was designed around these conditions before AI existed. The argument extends further: AI shifts what category of problem PBL can engage with, expanding access to wicked problems previously beyond students' reach. Investing in PBL's structural conditions is simultaneously investing in AI readiness.

  • AI tutor accuracy in health professions education: The accuracy-engagement paradox

    A mathematical framework demonstrating that AI tutoring systems with 10–15% error rates can achieve superior learning outcomes through dramatically increased engagement compared to more accurate but largely unused alternatives. Drawing on evidence from health professions education, this essay shows that the multiplicative relationship between accuracy and utilisation creates an accuracy-engagement paradox: imperfect but engaging systems outperform perfect but unused ones. The argument carries three critical qualifications: errors vary in consequence, and safety-critical content demands high accuracy; generative AI poses distinctive epistemic challenges that may undermine conventional error-correction mechanisms; and engagement is necessary but not sufficient for learning, since superficial use patterns can nullify the predicted benefits entirely. A framework for calibrating accuracy requirements to context and consequence is proposed.
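    The multiplicative relationship described above can be sketched as a toy model. The numbers below are illustrative assumptions, not figures from the essay:

    ```python
    # Toy model of the accuracy-utilisation trade-off: expected learning
    # benefit scales with how often a system is used AND how often its
    # guidance is correct. All numbers are hypothetical.

    def effective_learning(accuracy: float, utilisation: float) -> float:
        """Expected learning benefit under a simple multiplicative model."""
        return accuracy * utilisation

    # A highly accurate system that students rarely engage with...
    pristine = effective_learning(accuracy=0.99, utilisation=0.10)
    # ...versus an imperfect system (~13% error rate) used routinely.
    engaging = effective_learning(accuracy=0.87, utilisation=0.60)

    print(f"accurate-but-unused: {pristine:.3f}")  # 0.099
    print(f"imperfect-but-used:  {engaging:.3f}")  # 0.522
    ```

    On these assumed numbers the less accurate system delivers roughly five times the expected benefit, which is the paradox in miniature; the essay's qualifications (error consequence, superficial use) would modify both terms.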

  • Documentation becomes infrastructure when AI agents are the readers

    When AI agents consume documentation as operational input, it undergoes a category shift from reference material to operational architecture — inaccuracies no longer merely inconvenience readers, they cause system failures. This essay argues that the primary bottleneck for institutional AI integration is not AI capability but information architecture: how institutional knowledge is structured, maintained, and made available to AI systems. Documentation written for human readers cannot function as reliable AI input without deliberate restructuring around explicit relationships and rigorous maintenance workflows. Treating this transition as a governance imperative — rather than a technical afterthought — determines whether AI integration delivers on its institutional promise.
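    One way to picture the maintenance workflows the essay calls for is a routine check over machine-readable document metadata. The field names, paths, and 90-day staleness threshold below are hypothetical illustrations, not the essay's prescription:

    ```python
    # Sketch: docs carry metadata (verification date, dependencies), and a
    # scheduled check flags pages an AI agent should not rely on — either
    # stale themselves or downstream of a stale page. All values invented.

    from datetime import date, timedelta

    docs = [
        {"path": "policies/assessment.md", "last_verified": date(2025, 1, 10),
         "depends_on": ["policies/grading-scale.md"]},
        {"path": "policies/grading-scale.md", "last_verified": date(2023, 6, 2),
         "depends_on": []},
    ]

    def stale(doc, today, max_age=timedelta(days=90)):
        return today - doc["last_verified"] > max_age

    def audit(docs, today):
        """Return paths that are stale or depend on a stale page."""
        stale_paths = {d["path"] for d in docs if stale(d, today)}
        flagged = set(stale_paths)
        for d in docs:
            if any(dep in stale_paths for dep in d["depends_on"]):
                flagged.add(d["path"])
        return sorted(flagged)

    print(audit(docs, today=date(2025, 2, 1)))
    # ['policies/assessment.md', 'policies/grading-scale.md']
    ```

    The point of the sketch is the category shift: once dependencies are explicit, "is this page safe for an AI agent to consume?" becomes a check that can fail a build rather than a judgement left to each reader.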

  • Avoiding innovation theatre: A framework for supporting institutional AI integration

    Higher education institutions face persistent pressure to demonstrate AI engagement, often resulting in 'innovation theatre' — the performance of transformation without corresponding structural change. This essay presents a diagnostic framework distinguishing between performative and structural AI integration across four domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike linear maturity models, the framework surfaces gaps between institutional rhetoric and operational reality. Three legitimate strategic positions — incremental, selective, and transformative — help institutions move from accidental drift towards conscious choice. Treating AI integration as an ongoing strategic practice rather than a fixed deployment helps institutions preserve agency over technology decisions and keep those decisions aligned with institutional values.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are extensively documented but not systematically queryable, creating artificial information scarcity that makes compliance reporting and quality assurance labour-intensive. This essay proposes a three-layer architecture — graph databases as the source of truth for curriculum structure, vector databases for semantic content retrieval, and a Model Context Protocol layer for stakeholder access — that transforms documentation into operational infrastructure. The architecture incorporates temporal versioning for longitudinal evidence, role-based access controls for multi-stakeholder environments, and internal quality audit against institutional policy alongside external regulatory compliance, enabling verification in hours rather than weeks.
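    The 'queryable curriculum' idea can be sketched with a toy graph. The node labels, relationship types, and standard codes below are invented for illustration, and the essay's proposed stack uses graph and vector databases rather than in-memory structures:

    ```python
    # Toy curriculum graph: nodes connected by typed edges, so the
    # compliance question "which modules evidence standard X?" becomes a
    # traversal rather than a manual document audit. All names invented.

    curriculum = {
        # node -> list of (relation, target) edges
        "Module: Clinical Reasoning 1": [
            ("HAS_OUTCOME", "Outcome: Interpret diagnostic results"),
            ("HAS_OUTCOME", "Outcome: Communicate uncertainty"),
        ],
        "Outcome: Interpret diagnostic results": [
            ("MAPS_TO", "Standard: REG-2.3"),
        ],
        "Outcome: Communicate uncertainty": [
            ("MAPS_TO", "Standard: REG-4.1"),
        ],
    }

    def evidence_for(standard: str) -> list[str]:
        """Return every module with an outcome mapped to a standard."""
        hits = []
        for module, edges in curriculum.items():
            for rel, outcome in edges:
                if rel != "HAS_OUTCOME":
                    continue
                mappings = curriculum.get(outcome, [])
                if any(r == "MAPS_TO" and t == standard for r, t in mappings):
                    hits.append(module)
        return hits

    print(evidence_for("Standard: REG-2.3"))
    # ['Module: Clinical Reasoning 1']
    ```

    At institutional scale the same traversal runs over a graph database, with the vector and Model Context Protocol layers handling semantic retrieval and stakeholder access; the speed-up the essay claims (hours rather than weeks) comes from replacing document reading with queries like this one.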

  • Taste and judgement in human-AI systems

    Contemporary AI discourse often focuses on 'sanctuary strategies' — defensive attempts to identify uniquely human capabilities — positioning humans and AI as competitors for finite cognitive territory. This essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction, and introduces 'taste' as a framework for cultivating contextual judgement: sophisticated discernment about when, how, and why to engage AI in service of meaningful purposes. Unlike technical literacy, taste development involves iterative experimentation and reflection, preserving human agency over value determination. By shifting from 'What can humans do that AI cannot?' to 'How might AI help us do more of what we value?', the essay builds a case for abundance-oriented human-AI partnership.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education's focus on prompt engineering — teaching technical skills for crafting AI queries — represents a misunderstanding of learning. This essay argues that prompts emerge from personal meaning-making frameworks, not technical mechanics, and that the institutional impulse to control AI interaction reveals a 'learning alignment problem': systems optimising for measurable proxies like grades rather than authentic curiosity. Drawing parallels to AI safety's value alignment problem, it shows how AI exposes that many assignments were already completable without genuine intellectual work. Universities must shift from control to cultivation paradigms, recognising that learning is personal and resistant to external specification, ensuring AI becomes a partner in human flourishing rather than a tool for strategic performance.

  • Context sovereignty for AI-supported learning: A human-centred approach

    This essay proposes 'context sovereignty' as a framework for maintaining human agency in AI-supported learning, arguing that context engineering — not just prompting — is the key to meaningful human-AI collaboration.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    The predominant AI interface paradigm — text boxes and chronological chat histories — reproduces a deeply embedded cognitive metaphor that misaligns with how professional expertise develops. Drawing on Lakoff and Johnson's container schema, this essay traces how a single organising metaphor has been uncritically reproduced across physical, digital, and AI-mediated learning environments, artificially enclosing knowledge that practitioners must mentally reintegrate. Rather than proposing to replace bounded learning spaces, this essay explores graph-based learning environments as an alternative paradigm where bounded spaces become visible communities within a navigable network. AI serves as both conversational partner and network weaver, with conversations spatially anchored to relevant concepts rather than isolated in chronological chat histories. This reconceptualisation — from enclosure to constrained traversal — suggests possibilities for AI-supported learning environments that better develop the integrative capabilities defining professional expertise.

  • From teaching to learning: Rethinking education for a world of information abundance

    Traditional education systems are structured around teaching, assuming it inevitably produces learning. This essay argues that this unidirectional model — knowledge flowing from expert to novice — no longer suits a world of information abundance and AI disruption. A networked, learner-centred approach offers an alternative, reconceptualising learning as a complex process where educators shift from knowledge authority to learning facilitator, enabling diverse participants to contribute to collective understanding. This transformation is urgent as AI develops capabilities once exclusive to human experts, and requires reimagining institutional structures around learning networks rather than teaching hierarchies to prepare graduates for complexity and uncertainty.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a fundamental challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. This essay introduces a theoretically grounded framework for integrating AI into health professions education that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, six principles emerge — dialogic knowledge construction, critical consciousness, adaptive expertise, contextual authenticity, metacognitive development, and networked knowledge building — to guide AI integration in ways that prepare professionals for the complexity and uncertainty of contemporary healthcare practice.

  • Technological nature of language and the implications for health professions education

    Language is humanity's first general-purpose technology, developed to extend cognitive capabilities beyond biological limits. Large language models represent the latest evolution in a continuum stretching from spoken language through writing and print to digital text, extending language's capabilities through unprecedented scale, cross-domain synthesis, and cognitive adaptability. For health professions education, this framing shifts priorities from knowledge acquisition toward cognitive partnership and adaptive expertise. It demands a reconceptualisation of AI literacy — moving beyond technical prompting to understand how these tools shape reasoning — and requires assessment to evaluate students' capacity for collaborative problem-solving. Understanding LLMs as language technology offers a middle path between uncritical enthusiasm and reflexive resistance.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. This essay argues that scientific journals must transform from metrics-driven repositories — prioritising publication volume over meaningful progress — into vibrant knowledge communities using AI to facilitate discourse. AI can support this by surfacing connections between research, making peer review more dialogic, and enabling multimodal knowledge translation. Meaningful change requires coordinated action across institutions, funding bodies, and journals willing to prioritise scientific progress over quantitative metrics. By reimagining journals as AI-supported communities rather than article-processing platforms, the research ecosystem can better serve scientific knowledge development and clinical outcomes.