6 items with this tag.
Higher education's response to AI has focused on the artefact: detecting it, restricting it, and restoring confidence in what students produce. This essay argues that the structural features of problem-based learning — problem-driven inquiry, collaborative knowledge construction, facilitation over instruction, and metacognitive reflection — are the same conditions under which AI integration becomes educationally productive rather than substitutive. The alignment is structural, not retrospective: PBL was designed around these conditions before AI existed. The argument extends further: AI shifts what category of problem PBL can engage with, expanding access to wicked problems previously beyond students' reach. Investing in PBL's structural conditions is simultaneously investing in AI readiness.
AI assessment scales and similar policies are taxonomies of containment that ask how to protect existing assessment practices from AI, not whether those practices remain fit for purpose. This post argues that they're asking the wrong question, and examines what higher education might be asking instead, with particular implications for health professions education.
Academic offences committees are investigating the wrong party. When AI is integral to authentic professional practice, assessment that excludes it does not protect rigour; it tests performance in a professional context that no longer exists. Valid assessment measures what graduates will actually need to do, and for most health professions graduates in 2025 that includes thinking well with AI. Accountability for assessment design lies with educators, not students.
How arms-race dynamics in higher education create adversarial relationships between institutions and students, and what drives these escalating cycles.
When educators embed hidden instructions in assessment materials to detect AI use, they import adversarial security thinking into educational relationships. This post examines what AI tripwires reveal about institutional assumptions (namely, that assessment is about artifact authentication rather than learning measurement) and argues that this approach creates escalating countermeasure dynamics while detecting only carelessness, not genuine disengagement. The alternative requires rethinking what assessment is actually for in an era when artifact production has become trivially automatable.
Rich Sutton's 'Bitter Lesson' from AI research (that general methods leveraging computation outperform human-crafted knowledge) has a direct parallel in education. When AI can produce the kinds of artefacts that assessments have traditionally relied on, it exposes a fundamental problem we have long ignored: we were never really measuring learning; we were measuring the difficulty of producing certain artefacts. This post explores what the Bitter Lesson means for assessment design in health professions education, and why AI makes it impossible to continue pretending otherwise.