13 items with this tag.
Building personal knowledge systems for research and learning
Why I built a bespoke AI knowledge infrastructure from scratch, what MCP makes possible that nothing else did, and why the investment is worth it.
Building the Zotero module for the commonplace MCP server, and what I found when I looked properly at what was actually in my reference library.
How I used Claude Haiku to score 4,460 Zotero items for relevance to my current research territory — and what the results revealed about how a reference library accumulates over time.
Adding the Obsidian vault and Wikipedia modules to the commonplace MCP server — and what became possible when personal notes and the reference library could be queried together.
Closing the infrastructure series: what fourteen tools across three modules enable in practice, the Claude commands that invoke them, and what's still missing.
Seven principles for extended AI collaboration, distilled from a week-long project to restructure a large note collection using Claude Code. The principles cover goal-setting, understanding what AI can and cannot contribute, investing in planning conversations, adaptive planning, safety infrastructure, treating AI output as drafts, and expecting to learn something about your own thinking. Offered not as rules to follow but as patterns to recognise.
A detailed account of a week-long project to restructure 5,819 Obsidian notes using AI as a working partner. The project involved building a 23-category taxonomy, migrating thousands of legacy notes to a consistent metadata structure, and generating AI-written descriptions for every note in the collection. The piece describes not just what was done, but how extended planning conversations, external project documentation, and careful human review at each phase made the work tractable. The most unexpected outcome was that building infrastructure for a note collection required articulating, for the first time, precisely how I think about my academic field.
Source: Matuschak, A., & Nielsen, M. (2019). How can we develop transformative tools for thought? numinous.productions/ttft
Most advice on organising your notes for AI treats it as a retrieval problem. The harder problem is translation: making your thinking machine-readable without losing what makes it yours. Contextual interoperability is the infrastructure that enables genuine AI collaboration in scholarly work.
Most conversations about AI focus on what it produces. This post describes what an AI workflow for academics actually looks like in practice — building structured context through documentation, iteration, and judgement that makes AI collaboration increasingly effective over time. Drawing on several weeks of restructuring scholarly output with Claude Code, I describe the iteration cycle, the role of documentation as external memory, and what the process reveals about the relationship between explicit information architecture and productive AI collaboration.
The capacity to make human knowledge machine-readable while preserving its meaning, enabling AI to reason within a specific intellectual framework.
A framework positioning personal context—knowledge, values, goals, thinking patterns—as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.