YAML is a human-readable format for storing structured data as plain text. In knowledge management and publishing workflows, it appears most commonly as the frontmatter block at the top of markdown files, where it holds metadata — title, author, date, tags — that tools can read without parsing the document itself.
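As a sketch, a markdown file with a YAML frontmatter block might begin like this (the field names are illustrative; each tool defines which keys it expects):

```yaml
---
title: "Weekly review workflow"
author: Jane Doe          # hypothetical author
date: 2024-05-01
tags: [obsidian, workflow]
---
```

The document body follows the closing `---`; tools read only the delimited block for metadata, without parsing the rest of the file.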
Distributed version control is an approach to tracking file changes where every contributor holds a complete copy of the repository and its full history, rather than depending on a central server. It enables offline work, parallel development, and resilience against data loss.
Git is a distributed version control system that tracks changes to files over time. It records who changed what and when, allows you to move between earlier and later states of a project, and lets multiple people work on the same files without overwriting each other's contributions.
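A minimal sketch of that record-and-revisit loop, using standard git commands (file names and messages are illustrative):

```shell
mkdir demo && cd demo
git init                                   # start tracking this directory
git config user.name "Jane Doe"            # local identity for the demo
git config user.email jane@example.com
echo "first draft" > notes.md
git add notes.md
git commit -m "Add first draft of notes"   # record who changed what, and when
echo "second draft" > notes.md
git commit -am "Revise notes"              # a second recorded state
git log --oneline                          # list recorded states, newest first
```

Because every clone carries the full history, `git log` works offline and any earlier state can be restored locally.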
Language models can transform documents into interactive tools in minutes. This post walks through a concrete example, turning a 21-page Word questionnaire into a working web app, and reflects on what that capability makes possible.
AI-generated text is fluent regardless of whether its content is accurate or well-reasoned. Fluency was once a reasonable proxy for genuine thinking — a student who wrote clearly had usually thought clearly. That relationship no longer holds. Worse, the AI literacy response of teaching output evaluation is a temporary fix: as models improve, output quality converges on expert-level across every artefact we care to measure. The question isn't how to spot current failure modes. It's what you'll do when those failure modes are gone.
Claude produced the word "contribuves" in a piece of writing, which is obviously not a real word. This is a different kind of error than hallucination, and the distinction matters.
Every week I annotate articles in Zotero, highlights in Reader, and podcasts in Snipd, all of which sync to Obsidian. By Friday I have a week's worth of material, tagged and structured, but unreviewed. This post describes the weekly review command I built to surface what matters and create a reason to engage with it.
A presentation for students participating in an EU-funded Blended Intensive Programme at Thomas More Hogeschool in Belgium. Examines how AI separates the production of artifacts from the learning they were meant to evidence, what problem-based learning already does differently, how AI changes group work and inquiry, and three practical shifts students can make in how they use AI within PBL.
The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.
A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.
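That composability can be sketched in shell. The model invocation below is hypothetical (flag names vary by tool, so check your CLI's documentation); the point is the plumbing, which is identical for any program that reads stdin and writes stdout:

```shell
# Hypothetical headless invocation -- adjust to your tool's actual flags:
#   claude -p "Summarise the action items" < notes.md > actions.md

# The same pattern with an ordinary Unix filter standing in for the model:
printf 'hello' | tr 'a-z' 'A-Z'   # prints HELLO
```

Because the interface is just stdin and stdout, cron jobs, Makefiles, and pipelines can treat the model like any other filter.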
Most advice on AI effectiveness focuses on prompt engineering. The real leverage comes from somewhere less obvious: knowing your professional commitments clearly enough to turn them into context an AI can work within. This post describes how to build AI personas for professional practice — structured documents that compress your values, frameworks, and evidence into a form an AI agent can actually use.
A field note on the time Claude deleted my file. The agent followed my instructions precisely and that was the problem. A reflection on a different kind of AI failure mode, and what the model's apology reveals about where responsibility actually sits.
I've been writing lecture slides in markdown for several years, mostly because I enjoyed working in structured formats and plain text. That decision turned out to matter in ways I didn't anticipate. When AI agents have access to your local filesystem, the format your teaching materials live in determines what's possible.
An internal staff development session for the CPC team introducing AI through Microsoft Copilot. Covers what AI is and isn't, safe working practices, structured prompting with the RGID heuristic, and hands-on practice — with the goal of each participant leaving with one specific task to try that week.
A field note on what the recent Claude outage revealed about where I am on the dependency curve, and what the difference between a session limit and an outage tells you about infrastructure.
Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.
The previous posts described what makes agentic workflows coherent at the individual level: a plan, documentation as infrastructure, and domain expertise that can evaluate outputs. Together, these form an informal harness: the conditions within which delegation stays accountable. At institutional scale, a personal harness is not enough: multiple people directing agents without shared constraints produce compounding drift that no amount of human oversight can track. This post examines what AI agent governance in higher education actually requires, and why a harness, not better oversight, may be the right frame.
Vibe coding describes using AI tools without maintaining genuine accountability for what they produce: accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.
What does it actually take to work with AI agents in a disciplined way, and how does someone get there from where they are now? This post draws on thinking from developers who've been working through this question longer than most academic knowledge workers have, and translates the hard-won lessons across. Three prerequisites emerge: planning before handoff, documentation treated as infrastructure rather than record, and domain expertise sufficient to evaluate what agents produce. The path in isn't a course or a tutorial — it's building something real with stakes attached.
An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.
For the last few months, my screen has been split between Obsidian and a terminal, with two or three AI agents running in parallel tabs. This post describes what that shift in academic workflow looks like and what made it possible. The change is not simply additive: the work has shifted from execution to direction. What that distinction means in practice — and why it matters for those of us working in knowledge-intensive academic roles — is what I try to work out here.
What happens when you query the Zotero database with AI, treating your entire reference library as context rather than searching it document by document? This field note documents a proof of concept using Claude Code to read a Zotero SQLite database directly. The approach works, but what breaks reveals how much your metadata practices actually matter.
An agentic AI command-line tool designed to understand, modify, and manage complex repositories of code and documentation.
A high-quality typesetting system that separates content from style, designed for the production of technical and scientific documentation.
A lightweight markup language for creating formatted text using plain-text syntax, enabling portability and interoperability.
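A few of the plain-text conventions, as a sketch:

```markdown
# A heading

Some *emphasis*, some **strong emphasis**, and a [link](https://example.com).

- a bullet list
- with two items
```

The source stays readable as-is in any text editor, which is what makes the format portable across tools.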
A powerful command-line tool that converts documents between dozens of different markup formats, enabling interoperability and workflow flexibility.
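A typical invocation, assuming pandoc is installed (file names are illustrative):

```shell
# Convert markdown to HTML; pandoc can infer formats from file extensions,
# or they can be stated explicitly with -f (from) and -t (to):
pandoc notes.md -f markdown -t html -o notes.html

# Markdown to Word is a one-liner too:
pandoc notes.md -o notes.docx
```

The same pattern covers dozens of format pairs, which is what makes pandoc a natural hub in plain-text publishing workflows.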
The foundational layer of digital information, stored as a sequence of readable characters without proprietary formatting.
A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents.