13 items with this tag.
Every week I annotate articles in Zotero, highlights in Reader, and podcasts in Snipd, all of which are synced to Obsidian. By Friday I have a week's worth of material, tagged and structured, but unreviewed. This post describes the weekly review command I built to surface what matters and create a reason to engage with it.
A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.
Most advice on AI effectiveness focuses on prompt engineering. The real leverage comes from somewhere less obvious: knowing your professional commitments clearly enough to turn them into context an AI can work within. This post describes how to build AI personas for professional practice — structured documents that compress your values, frameworks, and evidence into a form an AI agent can actually use.
A field note on the time Claude deleted my file. The agent followed my instructions precisely and that was the problem. A reflection on a different kind of AI failure mode, and what the model's apology reveals about where responsibility actually sits.
I've been writing lecture slides in markdown for several years, mostly because I enjoyed working in structured formats and plain text. That decision turned out to matter in ways I didn't anticipate. When AI agents have access to your local filesystem, the format your teaching materials live in determines what's possible.
A field note on what the recent Claude outage revealed about where I am on the dependency curve, and what the difference between a session limit and an outage tells you about infrastructure.
Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.
The previous posts described what makes agentic workflows coherent at the individual level: a plan, documentation as infrastructure, and domain expertise that can evaluate outputs. Together, these form an informal harness: the conditions within which delegation stays accountable. At institutional scale, a personal harness is not enough: multiple people directing agents without shared constraints produce compounding drift that no amount of human oversight can track. This post examines what AI agent governance in higher education actually requires, and why a harness, not better oversight, may be the right frame.
Vibe coding describes using AI tools without maintaining genuine accountability for what they produce: accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.
What does it actually take to work with AI agents in a disciplined way, and how does someone get there from where they are now? This post draws on thinking from developers who've been working through this question longer than most academic knowledge workers have, and translates the hard-won lessons across. Three prerequisites emerge: planning before handoff, documentation treated as infrastructure rather than record, and domain expertise sufficient to evaluate what agents produce. The path in isn't a course or a tutorial — it's building something real with stakes attached.
An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.
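The plan, act, observe, adapt loop described above can be sketched in a few lines. This is a toy skeleton, not any vendor's API: `stub_model` stands in for the language-model planning step, and the tool set is hypothetical.

```python
from typing import Callable

# Hypothetical tool set; a real agent would wire these to file access,
# code execution, or web search.
TOOLS: dict[str, Callable[[str], str]] = {
    "upper": lambda arg: arg.upper(),
}

def stub_model(goal: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for the model's planning step: choose a tool and an
    argument given the goal and what has been observed so far."""
    if not history:
        return "upper", goal      # first step: act on the goal
    return "done", history[-1]    # seen a result: finish

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Plan, act, observe, adapt, looping until the model says 'done'."""
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = stub_model(goal, history)  # plan
        if tool == "done":
            return arg
        observation = TOOLS[tool](arg)         # act
        history.append(observation)            # observe; next plan adapts
    return history[-1]

print(run_agent("ship it"))  # SHIP IT
```

The loop, not the model call, is what makes a system an agent rather than an assistant: each iteration feeds observations back into the next planning step.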
For the last few months, my screen has been split between Obsidian and a terminal, with two or three AI agents running in parallel tabs. This post describes what that shift in academic workflow looks like and what made it possible. The change is not simply additive: the work has shifted from execution to direction. What that distinction means in practice — and why it matters for those of us working in knowledge-intensive academic roles — is what I try to work out here.
What happens when you query the Zotero database with AI, treating your entire reference library as context rather than searching it document by document? This field note documents a proof of concept using Claude Code to read a Zotero SQLite database directly. The approach works, but what breaks reveals how much your metadata practices actually matter.
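For a sense of what "reading the database directly" involves: Zotero stores field values in an entity–attribute–value layout spread across `items`, `itemData`, `itemDataValues`, and `fields`, so even listing titles requires a three-way join. The sketch below runs against a tiny in-memory fixture rather than a live library; the schema slice and fixture IDs are assumptions, so verify them against your own `zotero.sqlite`.

```python
import sqlite3

# In-memory fixture mimicking the relevant slice of Zotero's schema.
# Against a real library you would instead open ~/Zotero/zotero.sqlite.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE items (itemID INTEGER PRIMARY KEY);
    CREATE TABLE fields (fieldID INTEGER PRIMARY KEY, fieldName TEXT);
    CREATE TABLE itemDataValues (valueID INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE itemData (itemID INTEGER, fieldID INTEGER, valueID INTEGER);

    INSERT INTO items VALUES (1);
    INSERT INTO fields VALUES (110, 'title');
    INSERT INTO itemDataValues VALUES (7, 'Working in Public');
    INSERT INTO itemData VALUES (1, 110, 7);
""")

# Field values live in an EAV layout, so fetching every title means
# joining itemData to both the field names and the stored values.
titles = db.execute("""
    SELECT v.value
    FROM itemData d
    JOIN fields f ON f.fieldID = d.fieldID
    JOIN itemDataValues v ON v.valueID = d.valueID
    WHERE f.fieldName = 'title'
""").fetchall()

print(titles)  # [('Working in Public',)]
```

On a live library, open the file read-only (`sqlite3.connect("file:zotero.sqlite?mode=ro", uri=True)`) so the query cannot collide with a running Zotero instance.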