54 items with this tag.
Develop multidimensional capability with generative AI for academic work
A mode of working in which AI agents handle execution-level tasks while the human operates at the direction layer — deciding what should be made, defining what good looks like, and evaluating outputs.
Seven principles for extended AI collaboration, distilled from a week-long project to restructure a large note collection using Claude Code. The principles cover goal-setting, understanding what AI can and cannot contribute, investing in planning conversations, adaptive planning, safety infrastructure, treating AI output as drafts, and expecting to learn something about your own thinking. Offered not as rules to follow but as patterns to recognise.
A detailed account of a week-long project to restructure 5,819 Obsidian notes using AI as a working partner. The project involved building a 23-category taxonomy, migrating thousands of legacy notes to a consistent metadata structure, and generating AI-written descriptions for every note in the collection. The piece describes not just what was done, but how extended planning conversations, external project documentation, and careful human review at each phase made the work tractable. The most unexpected outcome was that building infrastructure for a note collection required articulating, for the first time, precisely how I think about my academic field.
A field note on switching from Claude Opus 4.6 to Sonnet 4.6 as my default in Claude Code, and what I'm noticing after the first hour.
Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.
The process by which a trained language model generates outputs; the computational work that happens each time you send a prompt and receive a response.
The idea that different kinds of cognitive work have different computational costs in large language models, and that matching task complexity to model capability matters for both efficiency and output quality.
Most academics treat AI models as interchangeable general-purpose tools. They aren't. Different models have different characteristics that make them better suited to particular kinds of cognitive work, and matching tasks to those characteristics may improve both efficiency and output quality. This post explores what that looks like in personal workflows and how the same logic scales to institutional AI strategy.
Learn about LLM context drift (or context rot) and how the reduction in output quality as tokens increase affects complex AI workflows.
Most universities have responded to AI by rewriting assessment policies and running prompt-writing workshops. Context engineering demands something different: infrastructure decisions that commit institutions to a direction. This post explains what context engineering involves, why it matters for health professions education, and why the gap between changing words and changing structures is where most institutions are stuck.
Most discussions of AI in writing focus on output. This post describes a different experience—using AI as a thinking partner to challenge my choices and claims during a writing session.
A mathematical framework demonstrating that AI tutoring systems with 10–15% error rates can achieve superior learning outcomes through dramatically increased engagement compared to more accurate but largely unused alternatives. Drawing on evidence from health professions education, this essay shows that the multiplicative relationship between accuracy and utilisation creates an accessibility paradox: imperfect but engaging systems outperform perfect but unused ones. The argument carries three critical qualifications—errors vary in consequence and safety-critical content demands high accuracy; generative AI poses distinctive epistemic challenges that may undermine conventional error correction mechanisms; and engagement is necessary but not sufficient for learning, with superficial use patterns capable of nullifying predicted benefits entirely. A framework for calibrating accuracy requirements to context and consequence is proposed.
The accumulated cost of outdated, ambiguous, or poorly structured institutional knowledge — manageable when humans compensate, operationally consequential when AI agents depend on it literally.
A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.
Most advice on organising your notes for AI treats it as a retrieval problem. The harder problem is translation; making your thinking machine-readable without losing what makes it yours. Contextual interoperability is the infrastructure that enables genuine AI collaboration in scholarly work.
Most conversations about AI focus on what it produces. This post describes what an AI workflow for academics actually looks like in practice — building structured context through documentation, iteration, and judgement that makes AI collaboration increasingly effective over time. Drawing on several weeks of restructuring scholarly output with Claude Code, I describe the iteration cycle, the role of documentation as external memory, and what the process reveals about the relationship between explicit information architecture and productive AI collaboration.
Learned numerical representations of text that capture semantic meaning, enabling similarity-based search and retrieval
A technique that improves LLM responses by retrieving relevant information from external sources and including it in the prompt
Direct question-answer retrieval based on statistical similarity, the default reasoning pattern in RAG systems
A database that stores embeddings for similarity-based retrieval, serving as the knowledge layer for RAG systems
Professional education curricula face a fundamental infrastructure problem: while comprehensively documented, they lack systematic queryability. This presentation introduces a three-layer architecture using graph databases as the source of truth for curriculum structure, supported by vector databases for content retrieval and the Model Context Protocol for stakeholder interfaces.
When AI agents consume documentation as operational input, it undergoes a category shift from reference material to operational architecture — inaccuracies no longer merely inconvenience readers, they cause system failures. This essay argues that the primary bottleneck for institutional AI integration is not AI capability but information architecture: how institutional knowledge is structured, maintained, and made available to AI systems. Documentation written for human readers cannot function as reliable AI input without deliberate restructuring around explicit relationships and rigorous maintenance workflows. Treating this transition as a governance imperative — rather than a technical afterthought — determines whether AI integration delivers on its institutional promise.
The discourse around AI and human cognition tends to focus on differences, but what happens when we invert the question and use LLM terminology to explore the similarities between AI and human thinking? This post examines parallels between AI cognitive architecture and human thinking—context windows, training data bias, tokenisation, temperature, hallucination, and pattern matching—not to claim that humans are language models, but to ask what these similarities reveal about our own cognitive processes and why we are so invested in denying them.
How arms race dynamics in higher education create adversarial relationships between institutions and students, and what drives these cycles
A model for accessing AI capabilities while personal context remains private and under individual control, separating computational intelligence from data ownership.
A lightweight programme that exposes specific data sources or capabilities through the Model Context Protocol standard, acting as an adapter between AI systems and diverse data sources.
An open standard enabling AI systems to access diverse data sources through standardised interfaces with fine-grained permission control.
Persistent context included in every message to an AI model, establishing consistent behaviour, knowledge, or constraints across interactions.
When educators embed hidden instructions in assessment materials to detect AI use, they import adversarial security thinking into educational relationships. This post examines what AI tripwires reveal about institutional assumptions (i.e. that assessment is about artifact authentication rather than learning measurement) and argues that this approach creates escalating countermeasure dynamics while only detecting carelessness, not genuine disengagement. The alternative requires rethinking what assessment is actually for in an era when artifact production has become trivially automatable.
Large language models are deep learning models with billions of parameters, trained on vast text corpora using self-supervised learning, capable of general-purpose language tasks.
A framework for embedding AI literacy development into existing modules and courses, enabling students to develop AI capability while learning disciplinary content.
A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning
AI reasoning capability that draws conclusions by traversing multiple connected concepts
AI literacy is a multidimensional capability spanning recognition, critical evaluation, functional application, creation, ethical awareness, and contextual judgement, and is not reducible to any single dimension.
A six-dimension framework that underlies all forms of literacy—information, media, digital, data, and AI literacy share the same structural pattern.
Any claim that a course or programme of study develops AI literacy requires important qualifications—literacy develops through sustained practice, is developmental and contextual, and cannot be fully assessed at course completion.
When AI can generate text, images, and ideas at scale, what remains distinctively human? This post argues that evaluative judgement—the capacity to assess what is worth creating, what deserves attention, and what matters—becomes the core human contribution in knowledge work. Drawing on research into evaluative judgement in health professions education, it explores how educators can make this capacity explicit and deliberately develop it, rather than treating it as an invisible by-product of experience.
Generative AI presents serious ethical challenges in education—to academic integrity, to equity, to the nature of learning itself. This post acknowledges these concerns while arguing that AI also represents an unprecedented opportunity for learning at scale, particularly for the kinds of personalised, adaptive learning that have always been theoretically desirable but practically impossible to deliver. For health professions educators committed to expanding access to quality education, this opportunity deserves serious, open-minded consideration.
AI meeting scribes are increasingly being adopted as productivity tools, automatically transcribing and summarising organisational meetings. But who controls these records, and who benefits from perfect organisational memory? This post explores how AI meeting scribes can entrench existing power dynamics by giving those in authority unprecedented access to communication patterns, informal decision-making, and dissent—all rendered visible and retrievable without those present realising the implications for how organisations are governed.
Most commentary on AI in education focuses on what AI cannot do, or catalogues its failures as warnings. This post argues for a different approach—instead of performative critique, demonstrate thoughtful use in your own practice. By modelling considered, reflective engagement with AI tools, health professions educators can critique from experience rather than speculation, help shape how AI is integrated into professional education, and play a better game than the one they're currently losing.
Using natural language to produce desired responses from large language models through iterative refinement
A framework positioning personal context—knowledge, values, goals, thinking patterns—as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.
AI-forward describes institutions treating AI integration as ongoing strategic practice requiring active engagement, rather than fixed deployment of finished solutions.
Higher education institutions face persistent pressure to demonstrate AI engagement, often resulting in 'innovation theatre' — the performance of transformation without corresponding structural change. This essay presents a diagnostic framework distinguishing between performative and structural AI integration across four domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike linear maturity models, it reveals gaps between institutional rhetoric and operational reality. Three legitimate strategic positions — incremental, selective, and transformative — help institutions move from accidental drift toward conscious choice. Treating AI integration as ongoing strategic practice rather than fixed deployment ensures institutions preserve agency over technology decisions aligned with institutional values.
Professional curricula are extensively documented but not systematically queryable, creating artificial information scarcity that makes compliance reporting and quality assurance labour-intensive. This essay proposes a three-layer architecture — graph databases as the source of truth for curriculum structure, vector databases for semantic content retrieval, and a Model Context Protocol layer for stakeholder access — that transforms documentation into operational infrastructure. The architecture incorporates temporal versioning for longitudinal evidence, role-based access controls for multi-stakeholder environments, and internal quality audit against institutional policy alongside external regulatory compliance, enabling verification in hours rather than weeks.
Contemporary AI discourse often focuses on 'sanctuary strategies' — defensive attempts to identify uniquely human capabilities — positioning humans and AI as competitors for finite cognitive territory. This essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction, and introduces 'taste' as a framework for cultivating contextual judgement: sophisticated discernment about when, how, and why to engage AI in service of meaningful purposes. Unlike technical literacy, taste development involves iterative experimentation and reflection, preserving human agency over value determination. By shifting from 'What can humans do that AI cannot?' to 'How might AI help us do more of what we value?', the essay builds a case for abundance-oriented human-AI partnership.
Higher education's focus on prompt engineering — teaching technical skills for crafting AI queries — represents a misunderstanding of learning. This essay argues that prompts emerge from personal meaning-making frameworks, not technical mechanics, and that the institutional impulse to control AI interaction reveals a 'learning alignment problem': systems optimising for measurable proxies like grades rather than authentic curiosity. Drawing parallels to AI safety's value alignment problem, it shows how AI exposes that many assignments were already completable without genuine intellectual work. Universities must shift from control to cultivation paradigms, recognising that learning is personal and resistant to external specification, ensuring AI becomes a partner in human flourishing rather than a tool for strategic performance.
This essay proposes 'context sovereignty' as a framework for maintaining human agency in AI-supported learning, arguing that context engineering — not just prompting — is the key to meaningful human-AI collaboration.
The predominant AI interface paradigm — text boxes and chronological chat histories — reproduces a deeply embedded cognitive metaphor that misaligns with how professional expertise develops. Drawing on Lakoff and Johnson's container schema, this essay traces how a single organising metaphor has been uncritically reproduced across physical, digital, and AI-mediated learning environments, artificially enclosing knowledge that practitioners must mentally reintegrate. Rather than proposing to replace bounded learning spaces, this essay explores graph-based learning environments as an alternative paradigm where bounded spaces become visible communities within a navigable network. AI serves as both conversational partner and network weaver, with conversations spatially anchored to relevant concepts rather than isolated in chronological chat histories. This reconceptualisation — from enclosure to constrained traversal — suggests possibilities for AI-supported learning environments that better develop the integrative capabilities defining professional expertise.
Health professions education faces a fundamental challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. This essay introduces a theoretically grounded framework for integrating AI into health professions education that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, six principles emerge — dialogic knowledge construction, critical consciousness, adaptive expertise, contextual authenticity, metacognitive development, and networked knowledge building — to guide AI integration in ways that prepare professionals for the complexity and uncertainty of contemporary healthcare practice.
Language is humanity's first general-purpose technology, developed to extend cognitive capabilities beyond biological limits. Large language models represent the latest evolution in a continuum stretching from spoken language through writing and print to digital text, extending language's capabilities through unprecedented scale, cross-domain synthesis, and cognitive adaptability. For health professions education, this framing shifts priorities from knowledge acquisition toward cognitive partnership and adaptive expertise. It demands a reconceptualisation of AI literacy — moving beyond technical prompting to understand how these tools shape reasoning — and requires assessment to evaluate students' capacity for collaborative problem-solving. Understanding LLMs as language technology offers a middle path between uncritical enthusiasm and reflexive resistance.
LLM terminology provides unexpectedly precise language for human cognitive constraints we've struggled to describe—revealing that the similarities might be more extensive than professional identity allows us to admit
A template classroom policy for generative AI use that educators can adapt for their own modules and courses.