11 items with this tag.
More than 10,000 healthcare professionals have taken one of the courses I've created for Physiopedia Plus. This post focuses on the AI Masterclass for Healthcare Professionals Programme — a practical introduction to AI in clinical practice, education, and research. Physiopedia Plus members get full access, and a 30% discount code is included for new sign-ups.
Higher education's response to AI has focused on the artefact: detecting it, restricting it, and restoring confidence in what students produce. This essay argues that the structural features of problem-based learning — problem-driven inquiry, collaborative knowledge construction, facilitation over instruction, and metacognitive reflection — are the same conditions under which AI integration becomes educationally productive rather than substitutive. The alignment is structural, not retrospective: PBL was designed around these conditions before AI existed. The argument extends further: AI shifts what category of problem PBL can engage with, expanding access to wicked problems previously beyond students' reach. Investing in PBL's structural conditions is simultaneously investing in AI readiness.
AI assessment scales and similar policies are taxonomies of containment that ask how to protect existing assessment practices from AI, not whether those practices remain fit for purpose. This post argues that they're asking the wrong question, and examines what higher education might be asking instead, with particular implications for health professions education.
Academic offences committees are investigating the wrong party. When AI is integral to authentic professional practice, assessment that excludes it does not protect rigour — it tests performance in a professional context that no longer exists. Valid assessment measures what graduates will actually need to do; for most health professions graduates in 2025, that includes thinking well with AI. The accountability for assessment design lies with educators, not students.
A presentation for students participating in an EU-funded Blended Intensive Programme at Thomas More Hogeschool in Belgium. Examines how AI separates the production of artefacts from the learning they were meant to evidence, what problem-based learning already does differently, how AI changes group work and inquiry, and three practical shifts students can make in how they use AI within PBL.
The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry these features; generative AI largely does not.
Most universities have responded to AI by rewriting assessment policies and running prompt-writing workshops. Context engineering demands something different: infrastructure decisions that commit institutions to a direction. This post explains what context engineering involves, why it matters for health professions education, and why the gap between changing words and changing structures is where most institutions are stuck.
A mathematical framework demonstrating that AI tutoring systems with 10–15% error rates can achieve superior learning outcomes through dramatically increased engagement compared to more accurate but largely unused alternatives. Drawing on evidence from health professions education, this essay shows that the multiplicative relationship between accuracy and utilisation creates an accessibility paradox: imperfect but engaging systems outperform perfect but unused ones. The argument carries three critical qualifications — errors vary in consequence and safety-critical content demands high accuracy; generative AI poses distinctive epistemic challenges that may undermine conventional error correction mechanisms; and engagement is necessary but not sufficient for learning, with superficial use patterns capable of nullifying predicted benefits entirely. A framework for calibrating accuracy requirements to context and consequence is proposed.
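The multiplicative relationship can be sketched with a toy calculation. All numbers here are illustrative, not figures from the essay: the point is only that a system used often enough can outperform a more accurate system that sits unused.

```python
# Toy model of the accessibility paradox described above: expected
# learning benefit scales with BOTH the proportion of correct content
# and the proportion of learning opportunities actually taken up.
# All numbers are hypothetical, chosen only to illustrate the shape
# of the multiplicative relationship.

def expected_learning(accuracy: float, utilisation: float) -> float:
    """Return a simple accuracy-times-utilisation score in [0, 1]."""
    return accuracy * utilisation

# A highly accurate resource that learners rarely engage with...
accurate_but_unused = expected_learning(accuracy=0.99, utilisation=0.20)

# ...versus an imperfect (roughly 10-15% error rate) but engaging AI tutor.
imperfect_but_used = expected_learning(accuracy=0.88, utilisation=0.70)

print(round(accurate_but_unused, 3))  # 0.198
print(round(imperfect_but_used, 3))   # 0.616, higher despite more errors
```

The essay's qualifications still apply to any such model: in safety-critical content the cost of an error is not symmetric, so a single multiplicative score would understate the harm of inaccuracy there.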
Professional education curricula face a fundamental infrastructure problem: while comprehensively documented, they lack systematic queryability. This presentation introduces a three-layer architecture using graph databases as the source of truth for curriculum structure, supported by vector databases for content retrieval and the Model Context Protocol for stakeholder interfaces.
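The three-layer idea can be illustrated with a minimal sketch. Everything here is a stand-in under stated assumptions: a plain adjacency list replaces the graph database, keyword overlap replaces embedding-based vector retrieval, and an ordinary function replaces a Model Context Protocol interface; all node names and content are hypothetical.

```python
# Minimal sketch of the three-layer architecture: graph as source of
# truth for curriculum structure, a toy retrieval index for content,
# and a simple query function standing in for an MCP-style interface.
# All names and data are hypothetical examples.

# Layer 1: curriculum structure as a graph (adjacency list).
curriculum_graph = {
    "BSc Physiotherapy": ["Module: Clinical Reasoning"],
    "Module: Clinical Reasoning": ["Outcome: Assess gait"],
    "Outcome: Assess gait": [],
}

# Layer 2: content retrieval; keyword overlap stands in for embeddings.
content_index = {
    "Outcome: Assess gait": "Observational gait analysis and common deviations.",
}

def retrieve(query: str) -> list[str]:
    """Return curriculum nodes whose content shares a term with the query."""
    terms = set(query.lower().split())
    return [node for node, text in content_index.items()
            if terms & set(text.lower().split())]

# Layer 3: a stakeholder-facing query that walks the structure graph.
def outcomes_under(node: str) -> list[str]:
    """Collect all learning outcomes reachable below a curriculum node."""
    found = []
    for child in curriculum_graph.get(node, []):
        if child.startswith("Outcome:"):
            found.append(child)
        found.extend(outcomes_under(child))
    return found

print(outcomes_under("BSc Physiotherapy"))  # ['Outcome: Assess gait']
print(retrieve("gait analysis"))            # ['Outcome: Assess gait']
```

The design point is the separation of concerns: structural questions ("which outcomes sit under this programme?") are answered by graph traversal, while content questions are answered by retrieval, with the interface layer free to combine both.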
When AI can generate text, images, and ideas at scale, what remains distinctively human? This post argues that evaluative judgement — the capacity to assess what is worth creating, what deserves attention, and what matters — becomes the core human contribution in knowledge work. Drawing on research into evaluative judgement in health professions education, it explores how educators can make this capacity explicit and deliberately develop it, rather than treating it as an invisible by-product of experience.
Health professions education faces a fundamental challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. This essay introduces a theoretically grounded framework for integrating AI into health professions education that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, six principles emerge — dialogic knowledge construction, critical consciousness, adaptive expertise, contextual authenticity, metacognitive development, and networked knowledge building — to guide AI integration in ways that prepare professionals for the complexity and uncertainty of contemporary healthcare practice.