#academic-integrity

View all 1 →
  • Prompt injection

    Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.

#academic-practice

View all 1 →
  • Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.

  • Headless AI

    A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.

  • Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.

  • Vibe coding

    Vibe coding describes using AI tools without maintaining genuine accountability for what they produce; accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.

  • AI agents

    An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.

  • Agentic workflows

    A mode of working in which AI agents handle execution-level tasks while the human operates at the direction layer — deciding what should be made, defining what good looks like, and evaluating outputs.

  • Claude code

    An agentic AI command-line tool designed to understand, modify, and manage complex repositories of code and documentation.

#ai-agents

View all 1 →
  • LLM context drift

    Learn about LLM context drift (or context rot) and how the reduction in output quality as tokens increase affects complex AI workflows.

#ai-integration

View all 6 →
  • Headless AI

    A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.

  • Vibe coding

    Vibe coding describes using AI tools without maintaining genuine accountability for what they produce; accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.

  • AI agents

    An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.

  • Inference

    The process by which a trained language model generates outputs; the computational work that happens each time you send a prompt and receive a response.

  • Token budget

    The idea that different kinds of cognitive work have different computational costs in large language models, and that matching task complexity to model capability matters for both efficiency and output quality.

Show 1 more notes →

#ai-literacy

View all 6 →
  • The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.

  • Vibe coding

    Vibe coding describes using AI tools without maintaining genuine accountability for what they produce; accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.

  • Prompt injection

    Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.

  • AI literacy

    AI literacy is a multidimensional capability spanning recognition, critical evaluation, functional application, creation, ethical awareness, and contextual judgement, and is not reducible to any single dimension.

  • Any claim that a course or programme of study develops AI literacy requires important qualifications—literacy develops through sustained practice, is developmental and contextual, and cannot be fully assessed at course completion.

Show 1 more notes →

#assessment

View all 1 →
  • An assessment approach that uses automated verification and longitudinal data to evaluate student competence through the creation of digital artifacts.

#automation

View all 2 →
  • Claude code

    An agentic AI command-line tool designed to understand, modify, and manage complex repositories of code and documentation.

  • An assessment approach that uses automated verification and longitudinal data to evaluate student competence through the creation of digital artifacts.

  • Claude code

    An agentic AI command-line tool designed to understand, modify, and manage complex repositories of code and documentation.

#cognitive-science

View all 1 →
  • LLM terminology provides unexpectedly precise language for human cognitive constraints we've struggled to describe—revealing that the similarities might be more extensive than professional identity allows us to admit

#collaboration

View all 1 →
  • Software with source code that is publicly accessible, allowing for community-driven development, inspection, and modification.

#community

View all 1 →
  • Software with source code that is publicly accessible, allowing for community-driven development, inspection, and modification.

#competency

View all 1 →
  • An assessment approach that uses automated verification and longitudinal data to evaluate student competence through the creation of digital artifacts.

  • LLM context drift

    Learn about LLM context drift (or context rot) and how the reduction in output quality as tokens increase affects complex AI workflows.

  • System prompt

    Persistent context included in every message to an AI model, establishing consistent behaviour, knowledge, or constraints across interactions.

#context-engineering

View all 9 →
  • Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.

  • Graph database

    A database that stores explicit relationships between entities, serving as the storage layer for knowledge graphs

  • MCP server

    A lightweight programme that exposes specific data sources or capabilities through the Model Context Protocol standard, acting as an adapter between AI systems and diverse data sources.

  • An open standard enabling AI systems to access diverse data sources through standardised interfaces with fine-grained permission control.

  • GraphRAG

    A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning

Show 4 more notes →

#context-sovereignty

View all 3 →
  • The capacity to make human knowledge machine-readable while preserving its meaning, enabling AI to reason within a specific intellectual framework.

  • A model for accessing AI capabilities while personal context remains private and under individual control, separating computational intelligence from data ownership.

  • A framework positioning personal context—knowledge, values, goals, thinking patterns—as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.

#conversion

View all 1 →
  • Pandoc

    A powerful command-line tool that converts documents between dozens of different markup formats, enabling interoperability and workflow flexibility.

#copyright

View all 1 →
  • A legal framework that enables the sharing and adaptation of creative and scholarly work while maintaining the rights of the creator.

#curriculum-infrastructure

View all 1 →
  • A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.

  • Plain text

    The foundational layer of digital information, stored as a sequence of readable characters without proprietary formatting.

#documentation

View all 5 →
  • YAML

    YAML is a human-readable format for storing structured data as plain text. In knowledge management and publishing workflows, it appears most commonly as the frontmatter block at the top of markdown files, where it holds metadata — title, author, date, tags — that tools can read without parsing the document itself.

  • Distributed version control is an approach to tracking file changes where every contributor holds a complete copy of the repository and its full history, rather than depending on a central server. It enables offline work, parallel development, and resilience against data loss.

  • Git

    Git is a distributed version control system that tracks changes to files over time. It records who changed what and when, allows you to move between earlier and later states of a project, and lets multiple people work on the same files without overwriting each other's contributions.

  • Documentation debt

    The accumulated cost of outdated, ambiguous, or poorly structured institutional knowledge — manageable when humans compensate, operationally consequential when AI agents depend on it literally.

  • Markdown

    A lightweight markup language for creating formatted text using plain-text syntax, enabling portability and interoperability.

#education

View all 1 →
  • An assessment approach that uses automated verification and longitudinal data to evaluate student competence through the creation of digital artifacts.

#generative-ai

View all 10 →
  • The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.

  • Vibe coding

    Vibe coding describes using AI tools without maintaining genuine accountability for what they produce; accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.

  • Embeddings

    Learned numerical representations of text that capture semantic meaning, enabling similarity-based search and retrieval

  • A technique that improves LLM responses by retrieving relevant information from external sources and including it in the prompt

  • Vector database

    A database that stores embeddings for similarity-based retrieval, serving as the knowledge layer for RAG systems

Show 5 more notes →

#health-professions-education

View all 1 →
  • The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.

#higher-education

View all 2 →
  • A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.

  • How arms race dynamics in higher education create adversarial relationships between institutions and students, and what drives these cycles

#human-cognition

View all 1 →
  • LLM terminology provides unexpectedly precise language for human cognitive constraints we've struggled to describe—revealing that the similarities might be more extensive than professional identity allows us to admit

#information-architecture

View all 5 →
  • Documentation debt

    The accumulated cost of outdated, ambiguous, or poorly structured institutional knowledge — manageable when humans compensate, operationally consequential when AI agents depend on it literally.

  • A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.

  • Graph database

    A database that stores explicit relationships between entities, serving as the storage layer for knowledge graphs

  • Knowledge graph

    A structured representation of knowledge using entities connected by explicit, typed relationships

  • A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents

#information-management

View all 4 →
  • YAML

    YAML is a human-readable format for storing structured data as plain text. In knowledge management and publishing workflows, it appears most commonly as the frontmatter block at the top of markdown files, where it holds metadata — title, author, date, tags — that tools can read without parsing the document itself.

  • Distributed version control is an approach to tracking file changes where every contributor holds a complete copy of the repository and its full history, rather than depending on a central server. It enables offline work, parallel development, and resilience against data loss.

  • Git

    Git is a distributed version control system that tracks changes to files over time. It records who changed what and when, allows you to move between earlier and later states of a project, and lets multiple people work on the same files without overwriting each other's contributions.

  • Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.

#information-retrieval

View all 4 →
  • Embeddings

    Learned numerical representations of text that capture semantic meaning, enabling similarity-based search and retrieval

  • A technique that improves LLM responses by retrieving relevant information from external sources and including it in the prompt

  • Direct question-answer retrieval based on statistical similarity, the default reasoning pattern in RAG systems

  • Vector database

    A database that stores embeddings for similarity-based retrieval, serving as the knowledge layer for RAG systems

#institutional-dynamics

View all 1 →

#interoperability

View all 3 →
  • Markdown

    A lightweight markup language for creating formatted text using plain-text syntax, enabling portability and interoperability.

  • Pandoc

    A powerful command-line tool that converts documents between dozens of different markup formats, enabling interoperability and workflow flexibility.

  • Plain text

    The foundational layer of digital information, stored as a sequence of readable characters without proprietary formatting.

#knowledge-graphs

View all 5 →
  • Graph database

    A database that stores explicit relationships between entities, serving as the storage layer for knowledge graphs

  • GraphRAG

    A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning

  • Knowledge graph

    A structured representation of knowledge using entities connected by explicit, typed relationships

  • AI reasoning capability that draws conclusions by traversing multiple connected concepts

  • A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents

#knowledge-management

View all 1 →
  • Documentation debt

    The accumulated cost of outdated, ambiguous, or poorly structured institutional knowledge — manageable when humans compensate, operationally consequential when AI agents depend on it literally.

#knowledge-representation

View all 2 →
  • The capacity to make human knowledge machine-readable while preserving its meaning, enabling AI to reason within a specific intellectual framework.

  • Knowledge graph

    A structured representation of knowledge using entities connected by explicit, typed relationships

#knowledge-work

View all 1 →
  • Agentic workflows

    A mode of working in which AI agents handle execution-level tasks while the human operates at the direction layer — deciding what should be made, defining what good looks like, and evaluating outputs.

#language-model

View all 3 →
  • The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.

  • Headless AI

    A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.

  • AI agents

    An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.

#large-language-models

View all 4 →
  • Prompt injection

    Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.

  • Inference

    The process by which a trained language model generates outputs; the computational work that happens each time you send a prompt and receive a response.

  • Token budget

    The idea that different kinds of cognitive work have different computational costs in large language models, and that matching task complexity to model capability matters for both efficiency and output quality.

  • LLM context drift

    Learn about LLM context drift (or context rot) and how the reduction in output quality as tokens increase affects complex AI workflows.

#learning-theory

View all 1 →
  • LLM terminology provides unexpectedly precise language for human cognitive constraints we've struggled to describe—revealing that the similarities might be more extensive than professional identity allows us to admit

#machine-learning

View all 1 →
  • Large language models are deep learning models with billions of parameters, trained on vast text corpora using self-supervised learning, capable of general-purpose language tasks.

#model-context-protocol

View all 1 →
  • A framework positioning personal context—knowledge, values, goals, thinking patterns—as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.

#note-taking

View all 1 →
  • YAML

    YAML is a human-readable format for storing structured data as plain text. In knowledge management and publishing workflows, it appears most commonly as the frontmatter block at the top of markdown files, where it holds metadata — title, author, date, tags — that tools can read without parsing the document itself.

  • A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.

#open-scholarship

View all 3 →
  • Distributed version control is an approach to tracking file changes where every contributor holds a complete copy of the repository and its full history, rather than depending on a central server. It enables offline work, parallel development, and resilience against data loss.

  • Git

    Git is a distributed version control system that tracks changes to files over time. It records who changed what and when, allows you to move between earlier and later states of a project, and lets multiple people work on the same files without overwriting each other's contributions.

  • The research industrial complex describes the self-reinforcing system of incentives across universities, funding bodies, journals, and publishers that rewards publication volume and impact metrics over meaningful scientific progress. The term draws on Eisenhower's military-industrial complex to highlight how interconnected institutional interests can sustain a system that actively works against its own stated mission.

#peer-review

View all 1 →
  • The research industrial complex describes the self-reinforcing system of incentives across universities, funding bodies, journals, and publishers that rewards publication volume and impact metrics over meaningful scientific progress. The term draws on Eisenhower's military-industrial complex to highlight how interconnected institutional interests can sustain a system that actively works against its own stated mission.

#personal-knowledge-management

View all 2 →
  • The capacity to make human knowledge machine-readable while preserving its meaning, enabling AI to reason within a specific intellectual framework.

  • A framework positioning personal context—knowledge, values, goals, thinking patterns—as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.

#precision

View all 1 →
  • LaTeX

    A high-quality typesetting system that separates content from style, designed for the production of technical and scientific documentation.

  • A model for accessing AI capabilities while personal context remains private and under individual control, separating computational intelligence from data ownership.

#prompt-engineering

View all 3 →
  • Prompt injection

    Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.

  • System prompt

    Persistent context included in every message to an AI model, establishing consistent behaviour, knowledge, or constraints across interactions.

  • Prompt engineering

    Using natural language to produce desired responses from large language models through iterative refinement

#publishing

View all 1 →
  • The research industrial complex describes the self-reinforcing system of incentives across universities, funding bodies, journals, and publishers that rewards publication volume and impact metrics over meaningful scientific progress. The term draws on Eisenhower's military-industrial complex to highlight how interconnected institutional interests can sustain a system that actively works against its own stated mission.

#reasoning

View all 3 →
  • LLM context drift

    Learn about LLM context drift (or context rot) and how the reduction in output quality as tokens increase affects complex AI workflows.

  • Direct question-answer retrieval based on statistical similarity, the default reasoning pattern in RAG systems

  • AI reasoning capability that draws conclusions by traversing multiple connected concepts

#retrieval-augmented-generation

View all 1 →
  • GraphRAG

    A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning

#scholarship

View all 1 →
  • A legal framework that enables the sharing and adaptation of creative and scholarly work while maintaining the rights of the creator.

  • A legal framework that enables the sharing and adaptation of creative and scholarly work while maintaining the rights of the creator.

  • Software with source code that is publicly accessible, allowing for community-driven development, inspection, and modification.

#standards

View all 1 →
  • An open standard enabling AI systems to access diverse data sources through standardised interfaces with fine-grained permission control.

  • AI-forward

    AI-forward describes institutions treating AI integration as ongoing strategic practice requiring active engagement, rather than fixed deployment of finished solutions.

#sustainability

View all 1 →
  • Plain text

    The foundational layer of digital information, stored as a sequence of readable characters without proprietary formatting.

  • Pandoc

    A powerful command-line tool that converts documents between dozens of different markup formats, enabling interoperability and workflow flexibility.

#typesetting

View all 1 →
  • LaTeX

    A high-quality typesetting system that separates content from style, designed for the production of technical and scientific documentation.

#workflows

View all 1 →
  • Agentic workflows

    A mode of working in which AI agents handle execution-level tasks while the human operates at the direction layer — deciding what should be made, defining what good looks like, and evaluating outputs.

  • LaTeX

    A high-quality typesetting system that separates content from style, designed for the production of technical and scientific documentation.

  • Markdown

    A lightweight markup language for creating formatted text using plain-text syntax, enabling portability and interoperability.

Untagged

  • A multidimensional framework for scholarship spanning discovery, integration, application, and teaching.

  • A six-dimension framework that underlies all forms of literacy—information, media, digital, data, and AI literacy share the same structural pattern.