15 items with this tag.
Transform your workday with sustainable working practices and systematic time management
Academic publishing treats scholarship as a finished, individually owned artefact. This post describes a writing and publishing workflow built on a different premise: that a scholarly corpus could work like an open source project — readable, contributable, forkable, and never permanently owned by anyone.
Academic offences committees are investigating the wrong party. When AI is integral to authentic professional practice, assessment that excludes it does not protect rigour — it tests performance in a professional context that no longer exists. Valid assessment measures what graduates will actually need to do; for most health professions graduates in 2025, that includes thinking well with AI. The accountability for assessment design lies with educators, not students.
When AI could write everything I'd ever written, I had to ask: what had I been doing all this time? The answer changed how I understand both writing and AI — and what it means to be a scholar in a world where words are cheap.
A field note on building arguments with AI: the brainstorming command I use to engage with any source, with an excerpt showing what a session looks like — Claude surfacing vault notes and Zotero sources, and why the conversation living in markdown matters.
Most advice on AI effectiveness focuses on prompt engineering. The real leverage comes from somewhere less obvious: knowing your professional commitments clearly enough to turn them into context an AI can work within. This post describes how to build AI personas for professional practice — structured documents that compress your values, frameworks, and evidence into a form an AI agent can actually use.
I've been writing lecture slides in markdown for several years, mostly because I enjoyed working in structured formats and plain text. That decision turned out to matter in ways I didn't anticipate. When AI agents have access to your local filesystem, the format your teaching materials live in determines what's possible.
Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.
The previous posts described what makes agentic workflows coherent at the individual level: a plan, documentation as infrastructure, and domain expertise that can evaluate outputs. Together, these form an informal harness: the conditions within which delegation stays accountable. At institutional scale, a personal harness is not enough: multiple people directing agents without shared constraints produce compounding drift that no amount of human oversight can track. This post examines what AI agent governance in higher education actually requires, and why a harness, not better oversight, may be the right frame.
What does it actually take to work with AI agents in a disciplined way, and how does someone get there from where they are now? This post draws on thinking from developers who've been working through this question longer than most academic knowledge workers have, and translates the hard-won lessons across. Three prerequisites emerge: planning before handoff, documentation treated as infrastructure rather than record, and domain expertise sufficient to evaluate what agents produce. The path in isn't a course or a tutorial — it's building something real with stakes attached.
For the last few months, my screen has been split between Obsidian and a terminal, with two or three AI agents running in parallel tabs. This post describes what that shift in academic workflow looks like and what made it possible. The change is not simply additive: the work has shifted from execution to direction. What that distinction means in practice — and why it matters for those of us working in knowledge-intensive academic roles — is what I try to work out here.
Seven principles for extended AI collaboration, distilled from a week-long project to restructure a large note collection using Claude Code. The principles cover goal-setting, understanding what AI can and cannot contribute, investing in planning conversations, adaptive planning, safety infrastructure, treating AI output as drafts, and expecting to learn something about your own thinking. Offered not as rules to follow but as patterns to recognise.
What happens when you query the Zotero database with AI, treating your entire reference library as context rather than searching it document by document? This field note documents a proof of concept using Claude Code to read a Zotero SQLite database directly. The approach works, but what breaks reveals how much your metadata practices actually matter.
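To make the idea concrete: Zotero stores item metadata in an entity-attribute-value layout inside `zotero.sqlite`, so even pulling out titles means joining several tables. The sketch below is a minimal, hypothetical illustration of that kind of query — it builds a tiny in-memory stand-in for the relevant tables rather than opening a real library file (which Zotero locks while running), and the table names reflect the commonly documented schema, not the post's actual code.

```python
import sqlite3

# Minimal in-memory stand-in for the Zotero tables involved in
# resolving an item's title. Against a real library you would
# connect to zotero.sqlite read-only, with Zotero closed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (itemID INTEGER PRIMARY KEY);
CREATE TABLE fields (fieldID INTEGER PRIMARY KEY, fieldName TEXT);
CREATE TABLE itemData (itemID INTEGER, fieldID INTEGER, valueID INTEGER);
CREATE TABLE itemDataValues (valueID INTEGER PRIMARY KEY, value TEXT);
INSERT INTO items VALUES (1);
INSERT INTO fields VALUES (1, 'title');
INSERT INTO itemData VALUES (1, 1, 1);
INSERT INTO itemDataValues VALUES (1, 'Scholarship Reconsidered');
""")

# Each field of each item is one row in itemData, pointing at a
# field name and a value — hence the three joins just for titles.
titles = conn.execute("""
    SELECT idv.value
    FROM items i
    JOIN itemData d        ON d.itemID = i.itemID
    JOIN fields f          ON f.fieldID = d.fieldID
    JOIN itemDataValues idv ON idv.valueID = d.valueID
    WHERE f.fieldName = 'title'
""").fetchall()

print(titles)  # [('Scholarship Reconsidered',)]
```

The indirection is the point of the field note: an agent querying this schema depends entirely on how consistently those value rows were populated in the first place.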
Academic culture has converged on the peer-reviewed journal article as the default unit of scholarly output, creating a hierarchy that excludes many valuable forms of intellectual work. This post makes the case for essays as a legitimate form of scholarship—not as a lesser alternative to empirical research, but as a distinct mode that enables exploration, synthesis, and engagement with audiences that traditional publishing cannot reach. Drawing on Boyer's model of scholarship, it argues for a more generous conception of what counts as scholarly contribution.
Academic publishing has converged on the written journal article as the dominant form of scholarly output, but knowledge has always been transmitted through conversation, dialogue, and oral communication. This post explores whether audio scholarship—podcasts, recorded dialogues, oral histories—deserves recognition as legitimate scholarly work. Drawing on Boyer's model of scholarship, it argues that format matters less than the rigour, intention, and intellectual contribution behind the work, and considers what it would take for academic culture to broaden its definition of what counts.