36 items with this tag.
More than 10,000 healthcare professionals have taken one of the courses I've created for Physiopedia Plus. This post focuses on the AI Masterclass for Healthcare Professionals Programme — a practical introduction to AI in clinical practice, education, and research. Physiopedia Plus members get full access, and a 30% discount code is included for new sign-ups.
Higher education's response to AI has focused on the artefact: detecting it, restricting it, and restoring confidence in what students produce. This essay argues that the structural features of problem-based learning — problem-driven inquiry, collaborative knowledge construction, facilitation over instruction, and metacognitive reflection — are the same conditions under which AI integration becomes educationally productive rather than substitutive. The alignment is structural, not retrospective: PBL was designed around these conditions before AI existed. The argument extends further: AI shifts what category of problem PBL can engage with, expanding access to wicked problems previously beyond students' reach. Investing in PBL's structural conditions is simultaneously investing in AI readiness.
AI assessment scales and similar policies are taxonomies of containment that ask how to protect existing assessment practices from AI, not whether those practices remain fit for purpose. This post argues that they're asking the wrong question, and examines what higher education might be asking instead, with particular implications for health professions education.
When AI could write everything I'd ever written, I had to ask: what had I been doing all this time? The answer changed how I understand both writing and AI — and what it means to be a scholar in a world where words are cheap.
A field note on building arguments with AI: the brainstorming command I use to engage with any source, with an excerpt showing what a session looks like — Claude surfacing vault notes and Zotero sources, and why the conversation living in markdown matters.
Every week I annotate articles in Zotero, save highlights in Reader, and clip podcasts in Snipd, all of which syncs to Obsidian. By Friday I have a week's worth of material, tagged and structured, but unreviewed. This post describes the weekly review command I built to surface what matters and create a reason to engage with it.
A presentation for students participating in an EU-funded Blended Intensive Programme at Thomas More Hogeschool in Belgium. Examines how AI separates the production of artefacts from the learning they were meant to evidence, what problem-based learning already does differently, how AI changes group work and inquiry, and three practical shifts students can make in how they use AI within PBL.
A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.
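As a rough sketch of that composability, assuming a CLI such as Claude Code's print mode (`claude -p`) that reads piped stdin and prints its response, a script can treat the model like any other Unix filter:

```python
import subprocess

def summarise(text: str) -> str:
    """Send text to a headless model and return its output.

    Assumes a CLI that accepts a prompt non-interactively, reads
    context on stdin, and prints the response to stdout. Here
    `claude -p` stands in; swap in whatever tool you use.
    """
    result = subprocess.run(
        ["claude", "-p", "Summarise this in two sentences."],
        input=text,
        capture_output=True,
        text=True,
        check=True,  # raise if the CLI exits with an error
    )
    return result.stdout.strip()

if __name__ == "__main__":
    with open("notes.md") as f:
        print(summarise(f.read()))
```

Because this is just a process reading stdin and writing stdout, cron jobs, Makefiles, and shell pipelines can schedule and chain it like anything else.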
Most advice on AI effectiveness focuses on prompt engineering. The real leverage comes from somewhere less obvious: knowing your professional commitments clearly enough to turn them into context an AI can work within. This post describes how to build AI personas for professional practice — structured documents that compress your values, frameworks, and evidence into a form an AI agent can actually use.
A field note on the time Claude deleted my file. The agent followed my instructions precisely, and that was the problem. A reflection on a different kind of AI failure mode, and what the model's apology reveals about where responsibility actually sits.
I've been writing lecture slides in markdown for several years, mostly because I enjoyed working in structured formats and plain text. That decision turned out to matter in ways I didn't anticipate. When AI agents have access to your local filesystem, the format your teaching materials live in determines what's possible.
An internal staff development session for the CPC team introducing AI through Microsoft Copilot. Covers what AI is and isn't, safe working practices, structured prompting with the RGID heuristic, and hands-on practice — with the goal of each participant leaving with one specific task to try that week.
A field note on what the recent Claude outage revealed about where I am on the dependency curve, and what the difference between a session limit and an outage tells you about infrastructure.
Vibe coding describes using AI tools without maintaining genuine accountability for what they produce: accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.
What does it actually take to work with AI agents in a disciplined way, and how does someone get there from where they are now? This post draws on thinking from developers who've been working through this question longer than most academic knowledge workers have, and translates the hard-won lessons across. Three prerequisites emerge: planning before handoff, documentation treated as infrastructure rather than record, and domain expertise sufficient to evaluate what agents produce. The path in isn't a course or a tutorial — it's building something real with stakes attached.
An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.
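As a minimal sketch of that plan, act, observe, adapt loop (the model client and tool dispatcher below are stubbed placeholders, not any particular library's API):

```python
# A minimal sketch of the agent loop: plan, act, observe, adapt.
# `call_model` and `run_tool` are stubs standing in for a real model
# API and tool dispatcher; only the control flow is the point here.

def call_model(context: str) -> dict:
    # A real implementation would send `context` to a language model
    # and parse the action it chooses. This stub finishes immediately.
    return {"type": "finish", "answer": "stub answer"}

def run_tool(tool: str, args: dict) -> str:
    # A real dispatcher would route to file access, code execution,
    # web search, and so on.
    return f"ran {tool} with {args}"

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_model("\n".join(history))                  # plan
        if action["type"] == "finish":
            return action["answer"]
        observation = run_tool(action["tool"], action["args"])   # act
        history.append(f"{action['tool']} -> {observation}")     # observe
    return "Stopped: step limit reached."  # explicit constraint honoured

print(run_agent("Find and summarise this week's annotations"))
```

What distinguishes an agent from an assistant is the loop itself: each result feeds back in as context for the next decision rather than ending the exchange.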
For the last few months, my screen has been split between Obsidian and a terminal, with two or three AI agents running in parallel tabs. This post describes what that shift in academic workflow looks like and what made it possible. The change is not simply additive: the work has shifted from execution to direction. What that distinction means in practice — and why it matters for those of us working in knowledge-intensive academic roles — is what I try to work out here.
Seven principles for extended AI collaboration, distilled from a week-long project to restructure a large note collection using Claude Code. The principles cover goal-setting, understanding what AI can and cannot contribute, investing in planning conversations, adaptive planning, safety infrastructure, treating AI output as drafts, and expecting to learn something about your own thinking. Offered not as rules to follow but as patterns to recognise.
A detailed account of a week-long project to restructure 5,819 Obsidian notes using AI as a working partner. The project involved building a 23-category taxonomy, migrating thousands of legacy notes to a consistent metadata structure, and generating AI-written descriptions for every note in the collection. The piece describes not just what was done, but how extended planning conversations, external project documentation, and careful human review at each phase made the work tractable. The most unexpected outcome was that building infrastructure for a note collection required articulating, for the first time, precisely how I think about my academic field.
A field note on switching from Claude Opus 4.6 to Sonnet 4.6 as my default in Claude Code, and what I'm noticing after the first hour.
The process by which a trained language model generates outputs; the computational work that happens each time you send a prompt and receive a response.
The idea that different kinds of cognitive work have different computational costs in large language models, and that matching task complexity to model capability matters for both efficiency and output quality.
Most academics treat AI models as interchangeable general-purpose tools. They aren't. Different models have different characteristics that make them better suited to particular kinds of cognitive work, and matching tasks to those characteristics may improve both efficiency and output quality. This post explores what that looks like in personal workflows and how the same logic scales to institutional AI strategy.
What happens when you query the Zotero database with AI, treating your entire reference library as context rather than searching it document by document? This field note documents a proof of concept using Claude Code to read a Zotero SQLite database directly. The approach works, but what breaks reveals how much your metadata practices actually matter.
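For a sense of what reading the database directly can look like, here is a minimal sketch in Python. The table and column names follow Zotero's SQLite schema as I understand it, so treat them as assumptions to verify against your own database; the path is a placeholder, and the file is opened read-only so nothing contends with Zotero itself.

```python
import sqlite3

# Placeholder path; point this at your own Zotero profile directory.
# mode=ro opens the database read-only, so Zotero's own lock is safe.
DB = "file:/path/to/Zotero/zotero.sqlite?mode=ro"

conn = sqlite3.connect(DB, uri=True)
rows = conn.execute(
    """
    -- Zotero stores field values in an EAV layout: itemData links an
    -- item to a field, itemDataValues holds the actual text.
    SELECT v.value AS title
    FROM itemData d
    JOIN fields f ON d.fieldID = f.fieldID
    JOIN itemDataValues v ON d.valueID = v.valueID
    WHERE f.fieldName = 'title'
    LIMIT 10
    """
).fetchall()
for (title,) in rows:
    print(title)
conn.close()
```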
Most discussions of AI in writing focus on output. This post describes a different experience—using AI as a thinking partner to challenge my choices and claims during a writing session.
A mathematical framework demonstrating that AI tutoring systems with 10–15% error rates can achieve superior learning outcomes through dramatically increased engagement compared to more accurate but largely unused alternatives. Drawing on evidence from health professions education, this essay shows that the multiplicative relationship between accuracy and utilisation creates an accessibility paradox: imperfect but engaging systems outperform perfect but unused ones. The argument carries three critical qualifications—errors vary in consequence and safety-critical content demands high accuracy; generative AI poses distinctive epistemic challenges that may undermine conventional error correction mechanisms; and engagement is necessary but not sufficient for learning, with superficial use patterns capable of nullifying predicted benefits entirely. A framework for calibrating accuracy requirements to context and consequence is proposed.
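As an illustration of that multiplicative relationship (with hypothetical numbers, not figures from the essay):

```latex
\[
  \text{benefit} \;\propto\; a \times u
  \qquad\text{where } a = \text{accuracy},\; u = \text{utilisation}.
\]
% Hypothetical values: a 12% error rate with real uptake
% versus near-perfect accuracy that sits unused.
\[
  0.88 \times 0.60 = 0.528
  \quad\text{vs.}\quad
  0.99 \times 0.05 = 0.0495
\]
```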
The accumulated cost of outdated, ambiguous, or poorly structured institutional knowledge — manageable when humans compensate, operationally consequential when AI agents depend on it literally.
Most advice on organising your notes for AI treats it as a retrieval problem. The harder problem is translation: making your thinking machine-readable without losing what makes it yours. Contextual interoperability is the infrastructure that enables genuine AI collaboration in scholarly work.
Most conversations about AI focus on what it produces. This post describes what an AI workflow for academics actually looks like in practice — building structured context through documentation, iteration, and judgement that makes AI collaboration increasingly effective over time. Drawing on several weeks of restructuring scholarly output with Claude Code, I describe the iteration cycle, the role of documentation as external memory, and what the process reveals about the relationship between explicit information architecture and productive AI collaboration.
The discourse around AI and human cognition tends to focus on differences, but what happens when we invert the question and use LLM terminology to explore the similarities between AI and human thinking? This post examines parallels between AI cognitive architecture and human thinking—context windows, training data bias, tokenisation, temperature, hallucination, and pattern matching—not to claim that humans are language models, but to ask what these similarities reveal about our own cognitive processes and why we are so invested in denying them.
When educators embed hidden instructions in assessment materials to detect AI use, they import adversarial security thinking into educational relationships. This post examines what AI tripwires reveal about institutional assumptions (i.e. that assessment is about artefact authentication rather than learning measurement) and argues that this approach creates escalating countermeasure dynamics while only detecting carelessness, not genuine disengagement. The alternative requires rethinking what assessment is actually for in an era when artefact production has become trivially automatable.
Rich Sutton's 'Bitter Lesson' from AI research—that general methods leveraging computation outperform human-crafted knowledge—has a direct parallel in education. When AI can produce the kinds of artefacts that assessments have traditionally relied on, it exposes a fundamental problem we have long ignored: we were never really measuring learning, we were measuring the difficulty of producing certain artefacts. This post explores what the Bitter Lesson means for assessment design in health professions education, and why AI makes it impossible to continue pretending otherwise.
When AI can generate text, images, and ideas at scale, what remains distinctively human? This post argues that evaluative judgement—the capacity to assess what is worth creating, what deserves attention, and what matters—becomes the core human contribution in knowledge work. Drawing on research into evaluative judgement in health professions education, it explores how educators can make this capacity explicit and deliberately develop it, rather than treating it as an invisible by-product of experience.
Generative AI presents serious ethical challenges in education—to academic integrity, to equity, to the nature of learning itself. This post acknowledges these concerns while arguing that AI also represents an unprecedented opportunity for learning at scale, particularly for the kinds of personalised, adaptive learning that have always been theoretically desirable but practically impossible to deliver. For health professions educators committed to expanding access to quality education, this opportunity deserves serious, open-minded consideration.
AI meeting scribes are increasingly being adopted as productivity tools, automatically transcribing and summarising organisational meetings. But who controls these records, and who benefits from perfect organisational memory? This post explores how AI meeting scribes can entrench existing power dynamics by giving those in authority unprecedented access to communication patterns, informal decision-making, and dissent—all rendered visible and retrievable without those present realising the implications for how organisations are governed.
Most commentary on AI in education focuses on what AI cannot do, or catalogues its failures as warnings. This post argues for a different approach—instead of performative critique, demonstrate thoughtful use in your own practice. By modelling considered, reflective engagement with AI tools, health professions educators can critique from experience rather than speculation, help shape how AI is integrated into professional education, and play a better game than the one they're currently losing.