11 items with this tag.
AI assessment scales and similar policies are taxonomies of containment that ask how to protect existing assessment practices from AI, not whether those practices remain fit for purpose. This post argues that they're asking the wrong question, and examines what higher education might be asking instead, with particular implications for health professions education.
Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024).
Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment.
A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.
How arms race dynamics in higher education create adversarial relationships between institutions and students, and what drives these cycles.
When educators embed hidden instructions in assessment materials to detect AI use, they import adversarial security thinking into educational relationships. This post examines what AI tripwires reveal about institutional assumptions (namely, that assessment is about artefact authentication rather than learning measurement) and argues that this approach creates escalating countermeasure dynamics while detecting only carelessness, not genuine disengagement. The alternative requires rethinking what assessment is actually for in an era when artefact production has become trivially automatable.
Rich Sutton's 'Bitter Lesson' from AI research — that general methods leveraging computation outperform human-crafted knowledge — has a direct parallel in education. When AI can produce the kinds of artefacts that assessments have traditionally relied on, it exposes a fundamental problem we have long ignored: we were never really measuring learning; we were measuring the difficulty of producing certain artefacts. This post explores what the Bitter Lesson means for assessment design in health professions education, and why AI makes it impossible to continue pretending otherwise.
Generative AI presents serious ethical challenges in education — to academic integrity, to equity, to the nature of learning itself. This post acknowledges these concerns while arguing that AI also represents an unprecedented opportunity for learning at scale, particularly for the kinds of personalised, adaptive learning that have always been theoretically desirable but practically impossible to deliver. For health professions educators committed to expanding access to quality education, this opportunity deserves serious, open-minded consideration.
Higher education's focus on prompt engineering — teaching technical skills for crafting AI queries — rests on a misunderstanding of how learning works. This essay argues that prompts emerge from personal meaning-making frameworks, not technical mechanics, and that the institutional impulse to control AI interaction reveals a 'learning alignment problem': systems optimising for measurable proxies like grades rather than authentic curiosity. Drawing parallels to AI safety's value alignment problem, it shows how AI exposes that many assignments were already completable without genuine intellectual work. Universities must shift from control to cultivation paradigms, recognising that learning is personal and resistant to external specification, so that AI becomes a partner in human flourishing rather than a tool for strategic performance.
The predominant AI interface paradigm — text boxes and chronological chat histories — reproduces a deeply embedded cognitive metaphor that misaligns with how professional expertise develops. Drawing on Lakoff and Johnson's container schema, this essay traces how a single organising metaphor has been uncritically reproduced across physical, digital, and AI-mediated learning environments, artificially enclosing knowledge that practitioners must mentally reintegrate. Rather than proposing to replace bounded learning spaces, it explores graph-based learning environments as an alternative paradigm in which bounded spaces become visible communities within a navigable network. AI serves as both conversational partner and network weaver, with conversations spatially anchored to relevant concepts rather than isolated in chronological chat histories. This reconceptualisation — from enclosure to constrained traversal — suggests possibilities for AI-supported learning environments that better develop the integrative capabilities defining professional expertise.
Traditional education systems are structured around teaching, on the assumption that teaching inevitably produces learning. This essay argues that this unidirectional model — knowledge flowing from expert to novice — no longer suits a world of information abundance and AI disruption. A networked, learner-centred approach offers an alternative, reconceptualising learning as a complex process in which educators shift from knowledge authority to learning facilitator, enabling diverse participants to contribute to collective understanding. This transformation is urgent as AI develops capabilities once exclusive to human experts, and it requires reimagining institutional structures around learning networks rather than teaching hierarchies if graduates are to be prepared for complexity and uncertainty.