18 items with this tag.
Develop multidimensional capability with generative AI for academic work
More than 10,000 healthcare professionals have taken one of the courses I've created for Physiopedia Plus. This post focuses on the AI Masterclass for Healthcare Professionals Programme — a practical introduction to AI in clinical practice, education, and research. Physiopedia Plus members get full access, and a 30% discount code is included for new sign-ups.
Higher education's response to AI has focused on the artefact: detecting it, restricting it, and restoring confidence in what students produce. This essay argues that the structural features of problem-based learning — problem-driven inquiry, collaborative knowledge construction, facilitation over instruction, and metacognitive reflection — are the same conditions under which AI integration becomes educationally productive rather than substitutive. The alignment is structural, not retrospective: PBL was designed around these conditions before AI existed. The argument extends further: AI shifts what category of problem PBL can engage with, expanding access to wicked problems previously beyond students' reach. Investing in PBL's structural conditions is simultaneously investing in AI readiness.
Academic offences committees are investigating the wrong party. When AI is integral to authentic professional practice, assessment that excludes it does not protect rigour — it tests performance in a professional context that no longer exists. Valid assessment measures what graduates will actually need to do; for most health professions graduates in 2025, that includes thinking well with AI. The accountability for assessment design lies with educators, not students.
AI-generated text is fluent regardless of whether its content is accurate or well-reasoned. Fluency was once a reasonable trace of genuine thinking — a student who wrote clearly had usually thought clearly. That relationship no longer holds. Worse, the AI literacy response of teaching output evaluation is a temporary fix: as models improve, output quality converges on expert-level across every artefact we care to measure. The question isn't how to spot current failure modes. It's what you'll do when those failure modes are gone.
Claude produced the word "contribuves" in a piece of writing, which is obviously not a real word. This is a different kind of error from hallucination, and the distinction matters.
A presentation for students participating in an EU-funded Blended Intensive Programme at Thomas More Hogeschool in Belgium. Examines how AI separates the production of artefacts from the learning they were meant to evidence, what problem-based learning already does differently, how AI changes group work and inquiry, and three practical shifts students can make in how they use AI within PBL.
The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.
An internal staff development session for the CPC team introducing AI through Microsoft Copilot. Covers what AI is and isn't, safe working practices, structured prompting with the RGID heuristic, and hands-on practice — with the goal of each participant leaving with one specific task to try that week.
Vibe coding describes using AI tools without maintaining genuine accountability for what they produce: accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction between vibe coding and vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.
Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.
Most academics treat AI models as interchangeable general-purpose tools. They aren't. Different models have different characteristics that make them better suited to particular kinds of cognitive work, and matching tasks to those characteristics may improve both efficiency and output quality. This post explores what that looks like in personal workflows and how the same logic scales to institutional AI strategy.
A framework for embedding AI literacy development into existing modules and courses, enabling students to develop AI capability while learning disciplinary content.
AI literacy is a multidimensional capability spanning recognition, critical evaluation, functional application, creation, ethical awareness, and contextual judgement, and is not reducible to any single dimension.
Any claim that a course or programme of study develops AI literacy requires important qualifications—literacy develops through sustained practice, is developmental and contextual, and cannot be fully assessed at course completion.
Most commentary on AI in education focuses on what AI cannot do, or catalogues its failures as warnings. This post argues for a different approach—instead of performative critique, demonstrate thoughtful use in your own practice. By modelling considered, reflective engagement with AI tools, health professions educators can critique from experience rather than speculation, help shape how AI is integrated into professional education, and play a better game than the one they're currently losing.
AI-forward describes institutions treating AI integration as ongoing strategic practice requiring active engagement, rather than fixed deployment of finished solutions.
A template classroom policy for generative AI use that educators can adapt for their own modules and courses.