Prompt injection is a technique in which instructions embedded in text cause an AI system to follow them as commands. In educational contexts it has appeared as an AI detection mechanism in assessment — which raises sharper questions about authorisation and trust than it might initially seem.
Harness engineering is the practice of building the full architectural scaffolding within which AI agents operate — structured documentation they can reason with, constraints that enforce invariants, and feedback loops that let them know when they've succeeded. It is distinct from prompt engineering, which shapes individual tasks, and from oversight, which monitors outputs after the fact. The harness is the infrastructure that makes delegation coherent at scale.
A headless AI model runs non-interactively — no chat interface, no conversation. You pass it text, it returns output, and it exits. This makes AI tools composable with the same scripts and schedulers that have coordinated Unix processes for decades.
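The composability described above can be sketched in a few lines. This is a minimal illustration of the headless pattern, text in, text out, exit, using a standard Unix tool (`tr`) as a stand-in for a headless AI command; any non-interactive AI CLI slots into the same function.

```python
import subprocess

def run_headless(command, text):
    """Pipe text into a non-interactive command and return its output.

    This is the composition pattern headless AI tools share with
    classic Unix processes: no session, no prompt loop, just
    stdin -> stdout -> exit.
    """
    result = subprocess.run(command, input=text, capture_output=True,
                            text=True, check=True)
    return result.stdout

# A stand-in command; replace with a headless AI CLI invocation.
out = run_headless(["tr", "a-z", "A-Z"], "hello agents")
```

Because the function only assumes stdin and stdout, the same wrapper works unchanged inside cron jobs, Makefiles, or CI pipelines.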
Vibe coding describes using AI tools without maintaining genuine accountability for what they produce; accepting outputs without the scrutiny, direction, or judgement needed to evaluate and improve them. Simon Willison drew the key distinction: vibe coding versus vibe engineering, where the latter uses the same tools while remaining genuinely accountable for every output.
An AI agent is a system that autonomously executes multi-step tasks using language model reasoning — distinct from an AI assistant, which responds to individual prompts. Agents plan, act, observe results, and adapt, using tools such as file access, code execution, and web search. They perform best when given clear goals, explicit constraints, and well-prepared context.
A mode of working in which AI agents handle execution-level tasks while the human operates at the direction layer — deciding what should be made, defining what good looks like, and evaluating outputs.
An agentic AI command-line tool designed to understand, modify, and manage complex repositories of code and documentation.
Context drift (also called context rot) is the reduction in output quality as the number of tokens in a model's context grows, a constraint that particularly affects long, complex AI workflows.
The process by which a trained language model generates outputs; the computational work that happens each time you send a prompt and receive a response.
The idea that different kinds of cognitive work have different computational costs in large language models, and that matching task complexity to model capability matters for both efficiency and output quality.
The structural features of an information source that enable its knowledge claims to be challenged, traced back to evidence, and evaluated against the source's track record. Traditional sources carry it; generative AI largely does not.
AI literacy is a multidimensional capability spanning recognition, critical evaluation, functional application, creation, ethical awareness, and contextual judgement, and is not reducible to any single dimension.
Any claim that a course or programme of study develops AI literacy requires important qualifications: literacy develops through sustained practice, is developmental and contextual, and cannot be fully assessed at course completion.
An assessment approach that uses automated verification and longitudinal data to evaluate student competence through the creation of digital artifacts.
LLM terminology provides unexpectedly precise language for human cognitive constraints we have struggled to describe, revealing that the similarities may be more extensive than professional identity allows us to admit.
Software with source code that is publicly accessible, allowing for community-driven development, inspection, and modification.
Persistent context included in every message to an AI model, establishing consistent behaviour, knowledge, or constraints across interactions.
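The persistence described here is mechanical rather than magical: the same system text is simply included in every request. A minimal sketch, assuming the common chat-message list shape; `SYSTEM_PROMPT` and the message contents are illustrative, and sending the list to a real model is left out.

```python
# A persistent system prompt prepended to every model call.
SYSTEM_PROMPT = (
    "You are a cautious editorial assistant. "
    "Always cite the source document when you quote it."
)

def build_messages(history, user_input):
    """Return the full message list for one model call.

    The system prompt is included on every call, so its constraints
    persist across the whole conversation rather than a single turn.
    """
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + list(history)
        + [{"role": "user", "content": user_input}]
    )

msgs = build_messages(
    history=[{"role": "user", "content": "Hello"},
             {"role": "assistant", "content": "Hi, how can I help?"}],
    user_input="Summarise section 2.",
)
```

Because the system message is rebuilt into each request, changing the constant changes behaviour for every subsequent interaction at once.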
A database that stores explicit relationships between entities, serving as the storage layer for knowledge graphs.
A lightweight programme that exposes specific data sources or capabilities through the Model Context Protocol standard, acting as an adapter between AI systems and diverse data sources.
An open standard enabling AI systems to access diverse data sources through standardised interfaces with fine-grained permission control.
A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning.
The capacity to make human knowledge machine-readable while preserving its meaning, enabling AI to reason within a specific intellectual framework.
A model for accessing AI capabilities while personal context remains private and under individual control, separating computational intelligence from data ownership.
A framework positioning personal context (knowledge, values, goals, thinking patterns) as central to human-AI collaboration, with individuals maintaining control over their cognitive environment while accessing AI capabilities.
A powerful command-line tool that converts documents between dozens of different markup formats, enabling interoperability and workflow flexibility.
A legal framework that enables the sharing and adaptation of creative and scholarly work while maintaining the rights of the creator.
A standardised ontology providing business, data, and application architectures for the higher education sector — and a practical starting point for making institutional knowledge machine-readable.
The foundational layer of digital information, stored as a sequence of readable characters without proprietary formatting.
YAML is a human-readable format for storing structured data as plain text. In knowledge management and publishing workflows, it appears most commonly as the frontmatter block at the top of markdown files, where it holds metadata — title, author, date, tags — that tools can read without parsing the document itself.
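The "tools can read it without parsing the document" point can be made concrete. A minimal sketch of extracting a frontmatter block, handling only flat `key: value` pairs; a real workflow would use a full YAML parser such as PyYAML, and the sample document is invented for illustration.

```python
def read_frontmatter(text):
    """Return (metadata dict, body) for a markdown string.

    Frontmatter is the block between an opening and a closing '---'
    line at the top of the file.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter block at all
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return meta, "\n".join(lines[i + 1:])
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return {}, text  # no closing delimiter: treat as plain body

doc = """---
title: Harness engineering
tags: ai, infrastructure
---
The body of the note starts here.
"""
meta, body = read_frontmatter(doc)
```

The metadata comes back as a plain dictionary, which is exactly why build tools, static site generators, and AI harnesses can act on it without understanding the prose beneath.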
Distributed version control is an approach to tracking file changes where every contributor holds a complete copy of the repository and its full history, rather than depending on a central server. It enables offline work, parallel development, and resilience against data loss.
Git is a distributed version control system that tracks changes to files over time. It records who changed what and when, allows you to move between earlier and later states of a project, and lets multiple people work on the same files without overwriting each other's contributions.
The accumulated cost of outdated, ambiguous, or poorly structured institutional knowledge — manageable when humans compensate, operationally consequential when AI agents depend on it literally.
A lightweight markup language for creating formatted text using plain-text syntax, enabling portability and interoperability.
Learned numerical representations of text that capture semantic meaning, enabling similarity-based search and retrieval.
A technique that improves LLM responses by retrieving relevant information from external sources and including it in the prompt.
A database that stores embeddings for similarity-based retrieval, serving as the knowledge layer for RAG systems.
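The three ideas above, embeddings, retrieval-augmented generation, and the vector store that connects them, form one loop. A toy sketch using bag-of-words counts in place of learned embeddings: real systems use a trained embedding model and a proper vector database, but the retrieve-then-augment shape is the same. The sample documents are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a word-count vector (real embeddings are learned)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

documents = [
    "Git is a distributed version control system.",
    "YAML frontmatter holds metadata at the top of markdown files.",
    "Pandoc converts documents between markup formats.",
]
# The 'vector database': each document stored alongside its vector.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    """Include retrieved context in the prompt: the 'augmented' step."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does git track?")
```

Swapping `embed` for a real embedding model and `index` for a vector database changes the quality of retrieval, not the structure of the loop.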
The arms race dynamics by which higher education institutions and students are drawn into adversarial relationships, and the incentives that drive these escalating cycles.
A structured representation of knowledge using entities connected by explicit, typed relationships.
A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents.
Direct question-answer retrieval based on statistical similarity, the default reasoning pattern in RAG systems.
AI reasoning capability that draws conclusions by traversing multiple connected concepts.
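Traversing connected concepts can be shown with a tiny knowledge graph stored as explicit, typed edges. The facts below are invented purely for illustration; the point is the chain of hops, with single-hop retrieval as the one-element special case.

```python
# Each edge is (subject, relation, object): the typed-relationship
# representation a knowledge graph stores explicitly.
edges = [
    ("Ada Lovelace", "wrote_about", "Analytical Engine"),
    ("Analytical Engine", "designed_by", "Charles Babbage"),
    ("Charles Babbage", "born_in", "London"),
]

def neighbours(entity):
    """Pairs of (relation, object) reachable from entity in one edge."""
    return [(rel, obj) for subj, rel, obj in edges if subj == entity]

def multi_hop(start, relations):
    """Follow a fixed chain of relations from a starting entity.

    Returns the entity reached after the final hop, or None if any
    hop is missing. A one-element chain is single-hop retrieval.
    """
    current = start
    for wanted in relations:
        step = {rel: obj for rel, obj in neighbours(current)}
        if wanted not in step:
            return None
        current = step[wanted]
    return current

answer = multi_hop("Ada Lovelace",
                   ["wrote_about", "designed_by", "born_in"])
```

The answer to the three-hop question is not stated in any single edge; it only emerges by connecting edges, which is exactly what similarity-based retrieval alone cannot do.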
Large language models are deep learning models with billions of parameters, trained on vast text corpora using self-supervised learning, capable of general-purpose language tasks.
The research industrial complex describes the self-reinforcing system of incentives across universities, funding bodies, journals, and publishers that rewards publication volume and impact metrics over meaningful scientific progress. The term draws on Eisenhower's military-industrial complex to highlight how interconnected institutional interests can sustain a system that actively works against its own stated mission.
A high-quality typesetting system that separates content from style, designed for the production of technical and scientific documentation.
Using natural language to produce desired responses from large language models through iterative refinement.
AI-forward describes institutions treating AI integration as ongoing strategic practice requiring active engagement, rather than fixed deployment of finished solutions.
A multidimensional framework for scholarship spanning discovery, integration, application, and teaching.
A six-dimension framework that underlies all forms of literacy: information, media, digital, data, and AI literacy share the same structural pattern.