9 items with this tag.
A detailed account of a week-long project to restructure 5,819 Obsidian notes using AI as a working partner. The project involved building a 23-category taxonomy, migrating thousands of legacy notes to a consistent metadata structure, and generating AI-written descriptions for every note in the collection. The piece describes not just what was done, but how extended planning conversations, external project documentation, and careful human review at each phase made the work tractable. The most unexpected outcome was that building infrastructure for a note collection required articulating, for the first time, precisely how I think about my academic field.
A field note on switching from Claude Opus 4.6 to Sonnet 4.6 as my default in Claude Code, and what I'm noticing after the first hour.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
Prompt injection is a technique in which instructions embedded in text cause an AI system to treat them as commands. In educational contexts it has been used as a way of detecting AI-generated work in assessment, which raises sharper questions about authorisation and trust than it might initially seem.
The process by which a trained language model generates outputs; the computational work that happens each time you send a prompt and receive a response.
The idea that different kinds of cognitive work have different computational costs in large language models, and that matching task complexity to model capability matters for both efficiency and output quality.
Most academics treat AI models as interchangeable general-purpose tools. They aren't. Different models have different characteristics that make them better suited to particular kinds of cognitive work, and matching tasks to those characteristics may improve both efficiency and output quality. This post explores what that looks like in personal workflows and how the same logic scales to institutional AI strategy.
Learn about LLM context drift (also called context rot): the degradation in output quality as token counts grow, and how it affects complex AI workflows.
A mathematical framework demonstrating that AI tutoring systems with 10–15% error rates can achieve superior learning outcomes through dramatically increased engagement compared to more accurate but largely unused alternatives. Drawing on evidence from health professions education, this essay shows that the multiplicative relationship between accuracy and utilisation creates an accessibility paradox: imperfect but engaging systems outperform perfect but unused ones. The argument carries three critical qualifications: errors vary in consequence, and safety-critical content demands high accuracy; generative AI poses distinctive epistemic challenges that may undermine conventional error correction mechanisms; and engagement is necessary but not sufficient for learning, with superficial use patterns capable of nullifying predicted benefits entirely. A framework for calibrating accuracy requirements to context and consequence is proposed.
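The multiplicative relationship the essay describes can be sketched as a toy model. All numbers below are illustrative assumptions, not figures from the essay: the point is only that outcome scales with the product of accuracy and utilisation, so a modest accuracy deficit can be outweighed by much higher use.

```python
def expected_outcome(accuracy: float, utilisation: float) -> float:
    """Toy model: learning benefit as the product of accuracy and utilisation."""
    return accuracy * utilisation

# A tutor with a ~12% error rate but heavy use (hypothetical values)...
imperfect_but_used = expected_outcome(accuracy=0.88, utilisation=0.70)

# ...versus a near-perfect but rarely used alternative.
accurate_but_unused = expected_outcome(accuracy=0.99, utilisation=0.20)

# Under this model, the imperfect system wins: 0.616 vs 0.198.
assert imperfect_but_used > accurate_but_unused
```

The same arithmetic also makes the essay's qualifications visible: if superficial use drives the effective utilisation term toward zero, or if a single error is catastrophic rather than merely costly, the simple product no longer captures the outcome.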