8 items with this tag.
More than 10,000 healthcare professionals have taken one of the courses I've created for Physiopedia Plus. This post focuses on the AI Masterclass for Healthcare Professionals Programme — a practical introduction to AI in clinical practice, education, and research. Physiopedia Plus members get full access, and a 30% discount code is included for new sign-ups.
Higher education's response to AI has focused on the artefact: detecting it, restricting it, and restoring confidence in what students produce. This essay argues that the structural features of problem-based learning — problem-driven inquiry, collaborative knowledge construction, facilitation over instruction, and metacognitive reflection — are the same conditions under which AI integration becomes educationally productive rather than substitutive. The alignment is structural, not retrospective: PBL was designed around these conditions before AI existed. The argument extends further: AI shifts what category of problem PBL can engage with, expanding access to wicked problems previously beyond students' reach. Investing in PBL's structural conditions is simultaneously investing in AI readiness.
AI assessment scales and similar policies are taxonomies of containment: they ask how to protect existing assessment practices from AI, not whether those practices remain fit for purpose. This post argues that they're asking the wrong question, and examines what higher education might ask instead, with particular implications for health professions education.
Academic offences committees are investigating the wrong party. When AI is integral to authentic professional practice, assessment that excludes it does not protect rigour — it tests performance in a professional context that no longer exists. Valid assessment measures what graduates will actually need to do; for most health professions graduates in 2025, that includes thinking well with AI. The accountability for assessment design lies with educators, not students.
AI-generated text is fluent regardless of whether its content is accurate or well-reasoned. Fluency was once a reasonable trace of genuine thinking — a student who wrote clearly had usually thought clearly. That relationship no longer holds. Worse, the AI literacy response of teaching output evaluation is a temporary fix: as models improve, output quality converges on expert-level across every artefact we care to measure. The question isn't how to spot current failure modes. It's what you'll do when those failure modes are gone.
A presentation for students participating in an EU-funded Blended Intensive Programme at Thomas More Hogeschool in Belgium. Examines how AI separates the production of artifacts from the learning they were meant to evidence, what problem-based learning already does differently, how AI changes group work and inquiry, and three practical shifts students can make in how they use AI within PBL.
The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.
An assessment approach that uses automated verification and longitudinal data to evaluate student competence through the creation of digital artifacts.