6 items with this tag.
A mathematical framework demonstrating that AI tutoring systems with 10–15% error rates can achieve superior learning outcomes through dramatically increased engagement compared to more accurate but largely unused alternatives. Drawing on evidence from health professions education, this essay shows that the multiplicative relationship between accuracy and utilisation creates an accessibility paradox: imperfect but engaging systems outperform perfect but unused ones. The argument carries three critical qualifications—errors vary in consequence and safety-critical content demands high accuracy; generative AI poses distinctive epistemic challenges that may undermine conventional error correction mechanisms; and engagement is necessary but not sufficient for learning, with superficial use patterns capable of nullifying predicted benefits entirely. A framework for calibrating accuracy requirements to context and consequence is proposed.
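The multiplicative relationship the essay describes can be sketched with hypothetical numbers (the error rates and utilisation figures below are illustrative assumptions, not values from the essay):

```python
# Illustrative sketch of the accuracy × utilisation relationship described above.
# All figures are hypothetical, chosen only to demonstrate the arithmetic.

def effective_learning(accuracy: float, utilisation: float) -> float:
    """Expected learning benefit modelled as the product of accuracy and utilisation."""
    return accuracy * utilisation

# An AI tutor with a 12% error rate (88% accuracy) that students actually use (60% of study needs)
engaging_but_imperfect = effective_learning(accuracy=0.88, utilisation=0.60)

# A near-perfect resource (99% accuracy) that students rarely open (5% utilisation)
accurate_but_unused = effective_learning(accuracy=0.99, utilisation=0.05)

print(f"{engaging_but_imperfect:.3f}")  # ~0.53
print(f"{accurate_but_unused:.3f}")     # ~0.05 — roughly a tenth of the benefit
```

Under these assumed figures the imperfect-but-used system delivers around ten times the expected benefit, which is the accessibility paradox the abstract summarises.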
Generative AI presents serious ethical challenges in education—to academic integrity, to equity, to the nature of learning itself. This post acknowledges these concerns while arguing that AI also represents an unprecedented opportunity for learning at scale, particularly for the kinds of personalised, adaptive learning that have always been theoretically desirable but practically impossible to deliver. For health professions educators committed to expanding access to quality education, this opportunity deserves serious, open-minded consideration.
Most commentary on AI in education focuses on what AI cannot do, or catalogues its failures as warnings. This post argues for a different approach—instead of performative critique, demonstrate thoughtful use in your own practice. By modelling considered, reflective engagement with AI tools, health professions educators can critique from experience rather than speculation, help shape how AI is integrated into professional education, and play a better game than the one they're currently losing.
Higher education's focus on prompt engineering — teaching technical skills for crafting AI queries — represents a misunderstanding of how learning works. This essay argues that prompts emerge from personal meaning-making frameworks, not technical mechanics, and that the institutional impulse to control AI interaction reveals a 'learning alignment problem': systems optimising for measurable proxies like grades rather than authentic curiosity. Drawing parallels to AI safety's value alignment problem, it shows how AI exposes that many assignments were already completable without genuine intellectual work. Universities must shift from control to cultivation paradigms, recognising that learning is personal and resistant to external specification, so that AI becomes a partner in human flourishing rather than a tool for strategic performance.
This essay proposes 'context sovereignty' as a framework for maintaining human agency in AI-supported learning, arguing that context engineering — not just prompting — is the key to meaningful human-AI collaboration.
Traditional education systems are structured around teaching, assuming it inevitably produces learning. This essay argues that this unidirectional model — knowledge flowing from expert to novice — no longer suits a world of information abundance and AI disruption. A networked, learner-centred approach offers an alternative, reconceptualising learning as a complex process where educators shift from knowledge authority to learning facilitator, enabling diverse participants to contribute to collective understanding. This transformation is urgent as AI develops capabilities once exclusive to human experts, and requires reimagining institutional structures around learning networks rather than teaching hierarchies to prepare graduates for complexity and uncertainty.