26 items with this tag.
Why I built a bespoke AI knowledge infrastructure from scratch, what MCP makes possible that nothing else did, and why the investment is worth it.
How I used Claude Haiku to score 4,460 Zotero items for relevance to my current research territory — and what the results revealed about how a reference library accumulates over time.
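The post walks through the scoring pipeline itself; purely as orientation, a batch run of this kind can be as small as the sketch below. Everything in it is an illustrative assumption rather than the setup described in the post: the 0–10 scale, the prompt wording, the hypothetical `items.json` export, and the model alias.

```python
# A minimal sketch of a relevance-scoring loop, not the actual pipeline.
# The prompt, the 0-10 scale, the model alias, and items.json are assumptions.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def score_item(title: str, abstract: str) -> int:
    """Ask a Haiku-class model for a single relevance score for one Zotero item."""
    prompt = (
        "Rate the relevance of this reference to my current research territory "
        "on a scale of 0-10. Reply with the number only.\n\n"
        f"Title: {title}\nAbstract: {abstract or 'n/a'}"
    )
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed alias; substitute your own
        max_tokens=10,
        messages=[{"role": "user", "content": prompt}],
    )
    return int(response.content[0].text.strip())

# items.json is a hypothetical export of key/title/abstract records from Zotero.
with open("items.json") as f:
    items = json.load(f)

scores = {item["key"]: score_item(item["title"], item.get("abstract", "")) for item in items}
```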
Adding the Obsidian vault and Wikipedia modules to the commonplace MCP server — and what became possible when personal notes and reference library could be queried together.
Closing the infrastructure series: what fourteen tools across three modules enable in practice, the Claude commands that invoke them, and what's still missing.
A field note on building arguments with AI: the brainstorming command I use to engage with any source, with an excerpt showing what a session looks like — Claude surfacing vault notes and Zotero sources, and why the conversation living in markdown matters.
Every week I annotate articles in Zotero, highlights in Reader, and podcasts in Snipd, all of which are synced to Obsidian. By Friday I have a week's worth of material, tagged and structured, but unreviewed. This post describes the weekly review command I built to surface what matters and create a reason to engage with it.
Most advice on AI effectiveness focuses on prompt engineering. The real leverage comes from somewhere less obvious: knowing your professional commitments clearly enough to turn them into context an AI can work within. This post describes how to build AI personas for professional practice — structured documents that compress your values, frameworks, and evidence into a form an AI agent can actually use.
A field note on what the recent Claude outage revealed about where I am on the dependency curve, and what the difference between a session limit and an outage tells you about infrastructure.
Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.
The previous posts described what makes agentic workflows coherent at the individual level: a plan, documentation as infrastructure, and domain expertise that can evaluate outputs. Together, these form an informal harness: the conditions within which delegation stays accountable. At institutional scale, a personal harness is not enough: multiple people directing agents without shared constraints produce compounding drift that no amount of human oversight can track. This post examines what AI agent governance in higher education actually requires, and why a harness, not better oversight, may be the right frame.
What does it actually take to work with AI agents in a disciplined way, and how does someone get there from where they are now? This post draws on thinking from developers who've been working through this question longer than most academic knowledge workers have, and translates the hard-won lessons across. Three prerequisites emerge: planning before handoff, documentation treated as infrastructure rather than record, and domain expertise sufficient to evaluate what agents produce. The path in isn't a course or a tutorial — it's building something real with stakes attached.
Seven principles for extended AI collaboration, distilled from a week-long project to restructure a large note collection using Claude Code. The principles cover goal-setting, understanding what AI can and cannot contribute, investing in planning conversations, adaptive planning, safety infrastructure, treating AI output as drafts, and expecting to learn something about your own thinking. Offered not as rules to follow but as patterns to recognise.
A detailed account of a week-long project to restructure 5,819 Obsidian notes using AI as a working partner. The project involved building a 23-category taxonomy, migrating thousands of legacy notes to a consistent metadata structure, and generating AI-written descriptions for every note in the collection. The piece describes not just what was done, but how extended planning conversations, external project documentation, and careful human review at each phase made the work tractable. The most unexpected outcome was that building infrastructure for a note collection required articulating, for the first time, precisely how I think about my academic field.
Most academics treat AI models as interchangeable general-purpose tools. They aren't. Different models have different characteristics that make them better suited to particular kinds of cognitive work, and matching tasks to those characteristics may improve both efficiency and output quality. This post explores what that looks like in personal workflows and how the same logic scales to institutional AI strategy.
What happens when you query the Zotero database with AI, treating your entire reference library as context rather than searching it document by document? This field note documents a proof of concept using Claude Code to read a Zotero SQLite database directly. The approach works, but what breaks reveals how much your metadata practices actually matter.
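The field note documents the full proof of concept; as a point of orientation, a direct read of Zotero's SQLite file looks roughly like the sketch below. The path, the specific joins, and the read-only connection are assumptions based on Zotero's published schema, not the exact queries from the post.

```python
# A minimal sketch of reading zotero.sqlite directly, assuming Zotero's standard
# schema (items, itemData, fields, itemDataValues). Paths and joins are
# illustrative, not the exact queries used in the field note.
import sqlite3
from pathlib import Path

# Point this at a copy of zotero.sqlite if Zotero is running; it locks the live file.
db_path = Path.home() / "Zotero" / "zotero.sqlite"

conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
rows = conn.execute(
    """
    SELECT i.key, idv.value AS title
    FROM items i
    JOIN itemData id        ON id.itemID = i.itemID
    JOIN fields f           ON f.fieldID = id.fieldID AND f.fieldName = 'title'
    JOIN itemDataValues idv ON idv.valueID = id.valueID
    ORDER BY i.dateAdded DESC
    LIMIT 20
    """
).fetchall()

for key, title in rows:
    print(key, title)
conn.close()
```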
Most universities have responded to AI by rewriting assessment policies and running prompt-writing workshops. Context engineering demands something different: infrastructure decisions that commit institutions to a direction. This post explains what context engineering involves, why it matters for health professions education, and why the gap between changing words and changing structures is where most institutions are stuck.
A database that stores explicit relationships between entities, serving as the storage layer for knowledge graphs.
A lightweight programme that exposes specific data sources or capabilities through the Model Context Protocol standard, acting as an adapter between AI systems and diverse data sources.
An open standard enabling AI systems to access diverse data sources through standardised interfaces with fine-grained permission control.
A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning.
A structured representation of knowledge using entities connected by explicit, typed relationships.
AI reasoning capability that draws conclusions by traversing multiple connected concepts.
A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents.
A framework positioning personal context—knowledge, values, goals, thinking patterns—as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.
Professional curricula are extensively documented but not systematically queryable, creating artificial information scarcity that makes compliance reporting and quality assurance labour-intensive. This essay proposes a three-layer architecture — graph databases as the source of truth for curriculum structure, vector databases for semantic content retrieval, and a Model Context Protocol layer for stakeholder access — that transforms documentation into operational infrastructure. The architecture incorporates temporal versioning for longitudinal evidence, role-based access controls for multi-stakeholder environments, and internal quality audit against institutional policy alongside external regulatory compliance, enabling verification in hours rather than weeks.
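The essay describes the architecture at the design level; purely as an illustration of what the Model Context Protocol layer might look like, the sketch below exposes a single hypothetical curriculum-lookup tool using the FastMCP helper from the official MCP Python SDK. The tool name, the module codes, and the stub data are invented for the example, and a dict stands in for the graph database the essay proposes as the source of truth.

```python
# A minimal, hypothetical sketch of the MCP access layer described above.
# FastMCP is the high-level helper in the official MCP Python SDK; the tool
# name, arguments, and stub data are invented for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("curriculum")

# In the proposed architecture this would query the graph database holding
# curriculum structure; here a dict stands in for that source of truth.
OUTCOMES = {
    "MOD101": ["Take a focused patient history", "Perform a basic physical examination"],
}

@mcp.tool()
def list_outcomes(module_code: str) -> list[str]:
    """Return the learning outcomes recorded for a module."""
    return OUTCOMES.get(module_code, [])

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable client
```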
This essay proposes 'context sovereignty' as a framework for maintaining human agency in AI-supported learning, arguing that context engineering — not just prompting — is the key to meaningful human-AI collaboration.