Large language models are deep learning models with billions of parameters, trained on vast text corpora through self-supervised learning and capable of performing general-purpose language tasks.
AI meeting scribes have automated the control of organisational memory, entrenching existing power dynamics while making them less visible.
Rich Sutton's 'Bitter Lesson' applies to education: AI reveals that artefact-based assessment never truly measured learning.
As AI makes creation and curation trivially easy, evaluative judgement about what should exist becomes the primary human contribution.
A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning.
An AI reasoning capability that draws conclusions by traversing multiple connected concepts, as sketched below.
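The two entries above describe complementary halves of one mechanism: a knowledge graph supplies the connections, and traversal across those connections assembles grounded context before generation. A minimal Python sketch, with every concept, relation, and passage invented purely for illustration:

```python
# Toy multi-hop traversal over a knowledge graph, the step that Graph RAG
# adds on top of plain retrieval. All data here is illustrative.
from collections import deque

# Each edge is (relation, target concept).
GRAPH = {
    "hypertension": [("treated_by", "ace_inhibitors")],
    "ace_inhibitors": [("contraindicated_in", "pregnancy")],
    "pregnancy": [("monitored_with", "blood_pressure_checks")],
}

# Stand-in for passages a vector store would return per concept.
PASSAGES = {
    "hypertension": "Hypertension is sustained high arterial blood pressure.",
    "ace_inhibitors": "ACE inhibitors lower blood pressure by ...",
    "pregnancy": "ACE inhibitors are avoided in pregnancy because ...",
}

def multi_hop_path(graph, start, goal):
    """Breadth-first search returning the chain of (concept, relation) hops."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path + [(node, None)]
        for relation, target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, path + [(node, relation)]))
    return None

# Traverse, then gather the passages along the path: this becomes the
# grounded context handed to a language model for the final answer.
path = multi_hop_path(GRAPH, "hypertension", "pregnancy")
for concept, relation in path:
    print(concept if relation is None else f"{concept} --[{relation}]-->")
context = [PASSAGES[c] for c, _ in path if c in PASSAGES]
```

The conclusion (ACE inhibitors connect hypertension to a pregnancy contraindication) is only reachable by chaining two edges; no single retrieved passage contains it, which is the point of multi-hop reasoning.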
Despite the ethical concerns, generative AI represents an enormous opportunity for learning at scale. Here's why I'm optimistic.
Higher education institutions face persistent pressure to demonstrate visible engagement with artificial intelligence, often resulting in what we characterise as innovation theatre: the performance of transformation without corresponding structural change. This paper presents a diagnostic framework that distinguishes between performative and structural integration through analysis of four operational domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike maturity models that prescribe linear progression, this framework enables institutional leaders to assess whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality. The framework emerged from critical analysis of institutional AI responses but evolved toward practical utility for decision-makers operating within genuine constraints. We position this work as practitioner pattern recognition requiring subsequent empirical validation, outline specific validation pathways, and discuss implications for institutional strategy in contexts of technological disruption.
Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. That scarcity creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solutions.
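To make the claim concrete, here is a minimal sketch of the structural queries the abstract describes, using an in-memory stand-in rather than an actual graph database; every module code, competency name, and field is hypothetical:

```python
# Two structural questions a queryable curriculum makes mechanical:
# (1) what must precede a module, and (2) which required competencies
# no assessment currently covers. All data is invented for illustration.
PREREQS = {
    "PHARM301": ["PHYS201"],
    "PHYS201": ["BIO101", "CHEM101"],
    "BIO101": [],
    "CHEM101": [],
}

ASSESSES = {  # module -> competencies its assessments are mapped to
    "PHARM301": {"prescribing_safety"},
    "PHYS201": {"physiological_reasoning"},
    "BIO101": {"foundational_science"},
    "CHEM101": {"foundational_science"},
}

def prerequisite_chain(module, prereqs, seen=None):
    """Depth-first walk returning every module that must precede `module`."""
    seen = set() if seen is None else seen
    for parent in prereqs.get(module, []):
        if parent not in seen:
            seen.add(parent)
            prerequisite_chain(parent, prereqs, seen)
    return seen

def uncovered(required, assesses):
    """Required competencies that no assessment currently maps to."""
    covered = set().union(*assesses.values())
    return required - covered

print(prerequisite_chain("PHARM301", PREREQS))          # {'PHYS201', 'BIO101', 'CHEM101'}
print(uncovered({"prescribing_safety", "ethics"}, ASSESSES))  # {'ethics'}
```

In the proposed architecture the same questions would be expressed as queries against the source-of-truth graph database; the point is that prerequisite chains and coverage gaps become mechanical lookups rather than weeks of manual document review.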
Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in forms of human exceptionalism that privilege separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy, which focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, "What can humans do that AI cannot?", into the more generative inquiry, "How might AI help us do more of what we value?" The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration. The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.
Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack the capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to accommodate.
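One technical pillar named above, persistent memory that accumulates understanding over time, can be shown in a minimal sketch; the storage format and field names are assumptions for illustration, not a reference implementation:

```python
# A tiny persistent memory layer: understanding accumulates across sessions
# instead of being restated in each prompt. Schema is invented for this sketch.
import json
from pathlib import Path

MEMORY_FILE = Path("learner_memory.json")

def load_memory():
    """Return accumulated memory, or an empty structure on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"concepts": {}, "sessions": 0}

def record(memory, concept, insight):
    """Attach a new insight to a concept, preserving everything prior."""
    memory["concepts"].setdefault(concept, []).append(insight)

def save(memory):
    memory["sessions"] += 1
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

memory = load_memory()
record(memory, "renal_physiology",
       "Confused GFR with renal plasma flow; resolved via worked example.")
save(memory)  # the next session starts from this state rather than from zero
```

The contrast with prompt engineering is the persistence: the prompt-centred approach discards this state after every interaction, while a context-engineered system treats it as infrastructure.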
The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.
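Two of the three principles lend themselves to a small illustration: a learner-owned context record that persists between sessions (persistent understanding), with the learner rather than the tool deciding what any AI system may see (individual agency). Everything here, including the compose_prompt helper, is hypothetical:

```python
# Context sovereignty in miniature: the learner owns the record and sets
# the sharing boundary; the AI system adapts to whatever it is shown.
LEARNER_CONTEXT = {
    "goals": "Preparing for clinical placement in paediatrics.",
    "misconceptions": "Tends to over-generalise adult dosing to children.",
    "private_notes": "Anxiety around OSCE performance.",  # never shared
}

SHAREABLE = {"goals", "misconceptions"}  # boundary set by the learner, not the tool

def compose_prompt(question, context, shareable):
    """Prepend only learner-approved context, so the AI adapts to the learner."""
    shared = "\n".join(f"{k}: {v}" for k, v in context.items() if k in shareable)
    return f"Learner context:\n{shared}\n\nQuestion: {question}"

print(compose_prompt("How do I adjust this dose for a 6-year-old?",
                     LEARNER_CONTEXT, SHAREABLE))
```

The inversion is the point: instead of the episodic burden of re-uploading and re-prompting, the persistent record travels with the learner, and its boundaries remain under their control.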
Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.