Found 106 total tags.

academic-career

1 item with this tag.

academic-development

1 item with this tag.

academic-practice

1 item with this tag.

accessibility

1 item with this tag.

  • Maximising utility through optimal accuracy: A model for educational AI

    Educational support systems have long prioritised accuracy as the primary metric of quality, resulting in technically excellent resources that remain largely unused. We present a mathematical framework demonstrating that AI tutoring systems with 10-15% error rates might achieve superior learning outcomes through increased utility compared to more accurate but less accessible alternatives. We show that the multiplicative relationship between accuracy and utilisation creates an 'accessibility paradox' where imperfect-but-accessible systems outperform perfect-but-unused ones. Furthermore, we argue that education's inherent error correction mechanisms and the pedagogical value of critical evaluation make this domain particularly suited for moderate-accuracy AI deployment. Our framework provides quantitative thresholds for acceptable error rates and challenges the prevailing assumption that educational AI must meet the same accuracy standards as, for example, diagnostic AI in healthcare.
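The multiplicative utility model described in this abstract can be sketched in a few lines. Everything here is a hypothetical illustration: the function name, the specific accuracy and utilisation figures, and the assumption that near-perfect systems see lower uptake are mine, not the paper's; only the multiplicative form (utility as accuracy times utilisation) comes from the abstract.

```python
# A minimal sketch of the accuracy-utilisation trade-off: the 'accessibility
# paradox' arises because expected benefit is modelled as a product, so a
# rarely-used near-perfect system can lose to an imperfect-but-popular one.

def utility(accuracy: float, utilisation: float) -> float:
    """Expected learning benefit under the multiplicative model: U = a * u."""
    return accuracy * utilisation

# Two hypothetical systems (figures invented for illustration):
# - a near-perfect tutor that is rarely used (slow, gated, or costly to access)
# - a tutor with a 12% error rate that students actually reach for
perfect_but_unused = utility(accuracy=0.99, utilisation=0.20)  # 0.198
imperfect_but_used = utility(accuracy=0.88, utilisation=0.75)  # 0.66

print(perfect_but_unused < imperfect_but_used)  # True: the accessibility paradox
```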

AI

1 item with this tag.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex, a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

AI-forward

2 items with this tag.

  • Avoiding innovation theatre: A framework for supporting institutional AI integration

    Higher education institutions face persistent pressure to demonstrate visible engagement with artificial intelligence, often resulting in what we characterise as innovation theatre - the performance of transformation without corresponding structural change. This paper presents a diagnostic framework that distinguishes between performative and structural integration through analysis of four operational domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike maturity models that prescribe linear progression, this framework enables institutional leaders to assess whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality. The framework emerged from critical analysis of institutional AI responses but evolved toward practical utility for decision-makers operating within genuine constraints. We position this work as practitioner pattern recognition requiring subsequent empirical validation, outline specific validation pathways, and discuss implications for institutional strategy in contexts of technological disruption.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solution.
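The "queryable curriculum structure" idea above can be illustrated with a toy prerequisite graph. The module codes and the adjacency-dict schema are invented for this sketch; a real deployment would use a graph database as the abstract proposes, but the traversal logic is the same.

```python
# A minimal sketch of querying a prerequisite chain, assuming a hypothetical
# module -> prerequisites mapping (data invented for illustration).

PREREQS = {
    "CLIN301": ["ANAT201", "PHYS202"],
    "ANAT201": ["BIO101"],
    "PHYS202": ["BIO101"],
    "BIO101": [],
}

def prerequisite_chain(module: str) -> set[str]:
    """All direct and transitive prerequisites of a module."""
    chain: set[str] = set()
    stack = list(PREREQS.get(module, []))
    while stack:
        prereq = stack.pop()
        if prereq not in chain:
            chain.add(prereq)
            stack.extend(PREREQS.get(prereq, []))
    return chain

print(sorted(prerequisite_chain("CLIN301")))  # ['ANAT201', 'BIO101', 'PHYS202']
```

The point of the sketch is the one the abstract makes: once structure is explicit, questions like "what must a student complete before CLIN301?" become a query rather than a manual document review.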

AI-literacy

6 items with this tag.

AI-principles

1 item with this tag.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than the outcome of technical skills, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, traits like curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification. Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.


AI-tutoring

1 item with this tag.

  • Maximising utility through optimal accuracy: A model for educational AI

    Educational support systems have long prioritised accuracy as the primary metric of quality, resulting in technically excellent resources that remain largely unused. We present a mathematical framework demonstrating that AI tutoring systems with 10-15% error rates might achieve superior learning outcomes through increased utility compared to more accurate but less accessible alternatives. We show that the multiplicative relationship between accuracy and utilisation creates an 'accessibility paradox' where imperfect-but-accessible systems outperform perfect-but-unused ones. Furthermore, we argue that education's inherent error correction mechanisms and the pedagogical value of critical evaluation make this domain particularly suited for moderate-accuracy AI deployment. Our framework provides quantitative thresholds for acceptable error rates and challenges the prevailing assumption that educational AI must meet the same accuracy standards as, for example, diagnostic AI in healthcare.

artificial-intelligence

14 items with this tag. Showing the first 10 items.

assessment

2 items with this tag.

audio

1 item with this tag.

authorship

1 item with this tag.

  • From journals to networks: How transparency transforms trust in scholarship

    This essay examines the shifting landscape of trust in academic scholarship, challenging the traditional model where trust has been outsourced to publishers and journals as proxies for validation and quality assessment. While this system developed important mechanisms for scholarly trust, including persistent identification, version control, peer feedback, and contextual placement, technological change offers an opportunity to reclaim and enhance these mechanisms. Drawing on principles of emergent scholarship, I explore how trust can be reimagined through knowledge connection, innovation through openness, identity through community, value through engagement, and meaning through medium. This approach does not reject traditional scholarship but builds bridges between established practices and new possibilities, enabling a shift from institutional proxies to visible processes. The essay proposes a three-tier technical framework that maintains compatibility with traditional academic structures while introducing new possibilities: a live working environment where scholarship evolves through visible iteration; preprints with DOIs enabling persistent citation; and journal publication connecting to established incentive structures. This framework offers significant benefits, including greater scholarly autonomy, enhanced transparency, increased responsiveness, and recognition of diverse contributions. However, it also presents challenges: technical barriers to participation, potential fragmentation, increased resource demands, and recognition within traditional contexts. The result is not a replacement for traditional scholarship but an evolution that shifts trust from institutional proxies to visible processes, creating scholarship that is more connected, open, engaged, and ultimately more trustworthy.

career-development

1 item with this tag.

collaboration

1 item with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in human exceptionalism that privileges separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, 'What can humans do that AI cannot?', into the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration. The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

communication

1 item with this tag.

complexity

2 items with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in human exceptionalism that privileges separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, 'What can humans do that AI cannot?', into the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration. The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

complexity-theory

1 item with this tag.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

connectivism

1 item with this tag.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

constructivism

1 item with this tag.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

context-engineering

8 items with this tag.

  • Context engineering

    A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents

  • GraphRAG

    A technique that combines knowledge graphs with retrieval-augmented generation for structured reasoning

  • Knowledge graph

    A structured representation of knowledge using entities connected by explicit, typed relationships

  • Multi-hop reasoning

    AI reasoning capability that draws conclusions by traversing multiple connected concepts
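The three glossary entries above (knowledge graph, GraphRAG, multi-hop reasoning) fit together: a knowledge graph stores typed relationships, and multi-hop reasoning traverses them. A toy sketch, with entities and relations invented purely for illustration:

```python
# A tiny knowledge graph as (subject, relation, object) triples, and a
# multi-hop traversal that reaches conclusions no single triple states.
# All entities and relations here are hypothetical examples.

TRIPLES = [
    ("metformin", "treats", "type 2 diabetes"),
    ("type 2 diabetes", "is_a", "metabolic disorder"),
    ("metabolic disorder", "studied_in", "endocrinology"),
]

def hop(entity: str) -> list[tuple[str, str]]:
    """Typed relations reachable from an entity in one hop."""
    return [(rel, obj) for subj, rel, obj in TRIPLES if subj == entity]

def multi_hop(entity: str, hops: int) -> set[str]:
    """Entities reachable within the given number of hops."""
    frontier, seen = {entity}, set()
    for _ in range(hops):
        frontier = {obj for e in frontier for _, obj in hop(e)}
        seen |= frontier
    return seen

# Two hops from "metformin" connect a drug to a disease category, a link
# that appears in no single triple.
print(multi_hop("metformin", 2))
```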

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solution.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

context-sovereignty

3 items with this tag.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solution.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

control

1 item with this tag.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, traits like curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification.
Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

critical-pedagogy

1 item with this tag.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

critical-thinking

1 item with this tag.

curriculum-design

1 item with this tag.

curriculum-development

1 item with this tag.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solutions.

curriculum-infrastructure

1 item with this tag.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solutions.

digital-literacy

1 item with this tag.

discernment

1 item with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in human exceptionalism that privileges separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach shifts the limiting question, 'What can humans do that AI cannot?', toward the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration.
The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

distributed-cognition

1 item with this tag.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

ecological-systems

1 item with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in human exceptionalism that privileges separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach shifts the limiting question, 'What can humans do that AI cannot?', toward the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration.
The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

editor

1 item with this tag.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex, a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

education

4 items with this tag.

  • The common architecture of literacy

    A six-dimension framework that underlies all forms of literacy—information, media, digital, data, and AI literacy share the same structural pattern.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, traits like curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification.
Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, and where AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed for suggesting meaningful connections across knowledge domains. 
The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

  • Technological nature of language and the implications for health professions education

    This essay explores the idea of language as humanity's first general purpose technology—a system we developed to extend human capabilities across a range of domains, which enabled complementary innovations. Through this conceptual lens, large language models (LLMs) emerge not merely as new digital tools, but as a significant evolution in the continuum of language technologies that stretches from spoken language through writing, printing, and digital text. The essay explores how LLMs extend language's core capabilities through unprecedented scale, cross-domain synthesis, adaptability, and emerging multimodality. These extensions are particularly relevant to health professions education, where students face the dual challenge of information overload and inadequate preparation for complex practice environments. By viewing LLMs as an evolution of our most fundamental technology rather than simply new applications, we can better understand their implications for clinical education. This perspective suggests shifting educational emphasis from knowledge acquisition to clinical reasoning and adaptive expertise, developing new forms of AI literacy specific to healthcare contexts, and reimagining assessment approaches. Understanding LLMs as part of language's ongoing evolution offers a nuanced middle path between uncritical enthusiasm and reflexive resistance, informing thoughtful integration that enhances rather than diminishes the human dimensions of healthcare education.

educational-technology

5 items with this tag.

  • A better game: Thoughtful AI use over performative critique

    Rather than cataloguing AI's failures, demonstrate thoughtful use, critique from practice, and amplify what matters to you.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solutions.

  • Maximising utility through optimal accuracy: A model for educational AI

    Educational support systems have long prioritised accuracy as the primary metric of quality, resulting in technically excellent resources that remain largely unused. We present a mathematical framework demonstrating that AI tutoring systems with 10-15% error rates might achieve superior learning outcomes through increased utility compared to more accurate but less accessible alternatives. We show that the multiplicative relationship between accuracy and utilisation creates an 'accessibility paradox' where imperfect-but-accessible systems outperform perfect-but-unused ones. Furthermore, we argue that education's inherent error correction mechanisms and the pedagogical value of critical evaluation make this domain particularly suited for moderate-accuracy AI deployment. Our framework provides quantitative thresholds for acceptable error rates and challenges the prevailing assumption that educational AI must meet the same accuracy standards as, for example, diagnostic AI in healthcare.
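    The multiplicative relationship between accuracy and utilisation can be made concrete with a toy calculation. The functional form and the numbers below are illustrative assumptions, not figures from the paper, but they show why the 'accessibility paradox' follows from treating utility as a product rather than accuracy alone.

    ```python
    # Illustrative sketch: utility modelled as accuracy multiplied by
    # actual utilisation. The specific values are invented for the example.

    def expected_learning_value(accuracy: float, utilisation: float) -> float:
        """A resource only helps to the extent it is both correct and used."""
        return accuracy * utilisation

    # A near-perfect resource that students rarely consult...
    perfect_but_unused = expected_learning_value(0.99, 0.10)       # 0.099

    # ...versus a tutor with a ~12% error rate that students use often.
    imperfect_but_accessible = expected_learning_value(0.88, 0.70)  # 0.616

    assert imperfect_but_accessible > perfect_but_unused
    ```

    Under this toy model, the less accurate system delivers several times the expected learning value, which is the core of the argument that moderate error rates can be acceptable where education's own correction mechanisms catch mistakes.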

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack the capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms.
This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

email-management

1 item with this tag.

emergent-scholarship

3 items with this tag.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, and where AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed for suggesting meaningful connections across knowledge domains. 
The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

  • From teaching to learning: How emergent scholarship disrupts traditional education hierarchies

    Traditional education systems are built on a paradox: institutions dedicated to learning are structured almost entirely around teaching. This fundamental misalignment has persisted for centuries, with higher education operating on the assumption that teaching inevitably produces learning. This paper argues that the traditional model, where knowledge flows unidirectionally from expert to novice, no longer serves a world of information abundance and technological disruption. Emergent scholarship offers an alternative approach that reconceptualises learning as a complex, networked process emerging from connections rather than transmission. By shifting from knowledge authority to learning facilitation, educators can create environments where diverse participants contribute to collective understanding, challenging hierarchies that position faculty as sole knowledge producers. This transformation is particularly urgent as artificial intelligence develops capabilities once exclusive to human experts, fundamentally altering the educational landscape. Rather than fighting these technological changes, emergent scholarship integrates them as participants in the learning ecosystem while focusing on uniquely human capabilities like critical thinking and collaborative problem-solving. The shift from teaching hierarchies to learning networks requires reimagining not just pedagogical approaches but institutional structures, potentially creating educational environments that better prepare graduates for complexity and uncertainty while fostering more engaging experiences for all participants.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

engagement

1 item with this tag.

  • Maximising utility through optimal accuracy: A model for educational AI

    Educational support systems have long prioritised accuracy as the primary metric of quality, resulting in technically excellent resources that remain largely unused. We present a mathematical framework demonstrating that AI tutoring systems with 10-15% error rates might achieve superior learning outcomes through increased utility compared to more accurate but less accessible alternatives. We show that the multiplicative relationship between accuracy and utilisation creates an 'accessibility paradox' where imperfect-but-accessible systems outperform perfect-but-unused ones. Furthermore, we argue that education's inherent error correction mechanisms and the pedagogical value of critical evaluation make this domain particularly suited for moderate-accuracy AI deployment. Our framework provides quantitative thresholds for acceptable error rates and challenges the prevailing assumption that educational AI must meet the same accuracy standards as, for example, diagnostic AI in healthcare.

essays

1 item with this tag.

exceptionalism

1 item with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in human exceptionalism that privileges separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, 'What can humans do that AI cannot?', into the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration.
The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

framework

3 items with this tag.

general-purpose-technology

1 item with this tag.

  • Technological nature of language and the implications for health professions education

    This essay explores the idea of language as humanity's first general purpose technology—a system we developed to extend human capabilities across a range of domains, which enabled complementary innovations. Through this conceptual lens, large language models (LLMs) emerge not merely as new digital tools, but as a significant evolution in the continuum of language technologies that stretches from spoken language through writing, printing, and digital text. The essay explores how LLMs extend language's core capabilities through unprecedented scale, cross-domain synthesis, adaptability, and emerging multimodality. These extensions are particularly relevant to health professions education, where students face the dual challenge of information overload and inadequate preparation for complex practice environments. By viewing LLMs as an evolution of our most fundamental technology rather than simply new applications, we can better understand their implications for clinical education. This perspective suggests shifting educational emphasis from knowledge acquisition to clinical reasoning and adaptive expertise, developing new forms of AI literacy specific to healthcare contexts, and reimagining assessment approaches. Understanding LLMs as part of language's ongoing evolution offers a nuanced middle path between uncritical enthusiasm and reflexive resistance, informing thoughtful integration that enhances rather than diminishes the human dimensions of healthcare education.

generative

1 item with this tag.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex—a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

generative-ai

2 items with this tag.

governance

1 item with this tag.

graph-database

1 item with this tag.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solutions.

health-professions-education

1 item with this tag.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

higher-education

5 items with this tag.

  • A bitter lesson for higher education

    Rich Sutton's 'Bitter Lesson' applies to education: AI reveals that artifact-based assessment never truly measured learning.

  • AI for learning at scale: Why I'm optimistic

    Despite the ethical concerns, generative AI represents an enormous opportunity for learning at scale. Here's why I'm optimistic.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, curiosity, understanding, and intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification.
Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, and where AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed for suggesting meaningful connections across knowledge domains. 
The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

  • From teaching to learning: How emergent scholarship disrupts traditional education hierarchies

    Traditional education systems are built on a paradox—institutions dedicated to learning are structured almost entirely around teaching. This fundamental misalignment has persisted for centuries, with higher education operating on the assumption that teaching inevitably produces learning. This paper argues that the traditional model, where knowledge flows unidirectionally from expert to novice, no longer serves a world of information abundance and technological disruption. Emergent scholarship offers an alternative approach that reconceptualises learning as a complex, networked process emerging from connections rather than transmission. By shifting from knowledge authority to learning facilitation, educators can create environments where diverse participants contribute to collective understanding, challenging hierarchies that position faculty as sole knowledge producers. This transformation is particularly urgent as artificial intelligence develops capabilities once exclusive to human experts, fundamentally altering the educational landscape. Rather than fighting these technological changes, emergent scholarship integrates them as participants in the learning ecosystem while focusing on uniquely human capabilities like critical thinking and collaborative problem-solving. The shift from teaching hierarchies to learning networks requires reimagining not just pedagogical approaches but institutional structures, potentially creating educational environments that better prepare graduates for complexity and uncertainty while fostering more engaging experiences for all participants.

human-AI-collaboration

3 items with this tag.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. This creates significant problems for institutions: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through technological solutions.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. 
This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

human-ai-relationships

1 item with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in human exceptionalism that privileges separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, 'What can humans do that AI cannot?', into the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration.
The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

information-architecture

3 items with this tag.

  • Context engineering

    A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents.

  • Knowledge graph

    A structured representation of knowledge using entities connected by explicit, typed relationships.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. 
This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

information-management

1 item with this tag.

journal

2 items with this tag.

  • From journals to networks: How transparency transforms trust in scholarship

    This essay examines the shifting landscape of trust in academic scholarship, challenging the traditional model where trust has been outsourced to publishers and journals as proxies for validation and quality assessment. While this system developed important mechanisms for scholarly trust, including persistent identification, version control, peer feedback, and contextual placement, technological change offers an opportunity to reclaim and enhance these mechanisms. Drawing on principles of emergent scholarship, I explore how trust can be reimagined through knowledge connection, innovation through openness, identity through community, value through engagement, and meaning through medium. This approach does not reject traditional scholarship but builds bridges between established practices and new possibilities, enabling a shift from institutional proxies to visible processes. The essay proposes a three-tier technical framework that maintains compatibility with traditional academic structures while introducing new possibilities: a live working environment where scholarship evolves through visible iteration; preprints with DOIs enabling persistent citation; and journal publication connecting to established incentive structures. This framework offers significant benefits, including greater scholarly autonomy, enhanced transparency, increased responsiveness, and recognition of diverse contributions. However, it also presents challenges: technical barriers to participation, potential fragmentation, increased resource demands, and recognition within traditional contexts. The result is not a replacement for traditional scholarship but an evolution that shifts trust from institutional proxies to visible processes, creating scholarship that is more connected, open, engaged, and ultimately more trustworthy.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex - a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

judgement

1 item with this tag.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in forms of human exceptionalism that privilege separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, 'What can humans do that AI cannot?', into the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration.
The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

knowledge

1 item with this tag.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, and where AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed for suggesting meaningful connections across knowledge domains. 
The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

knowledge-graphs

4 items with this tag.

knowledge-management

1 item with this tag.

knowledge-representation

2 items with this tag.

  • Knowledge graph

    A structured representation of knowledge using entities connected by explicit, typed relationships
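
    As an illustrative sketch (not drawn from the entry itself, and using hypothetical entity and relation names), such a structure can be represented as a set of (entity, relationship, entity) triples with explicit relationship types:

    ```python
    # Minimal illustrative knowledge graph: entities linked by explicit, typed
    # relationships. All entity and relation names here are hypothetical examples.
    from collections import defaultdict

    class KnowledgeGraph:
        def __init__(self):
            self.triples = set()                 # (subject, predicate, object)
            self.by_subject = defaultdict(set)   # subject -> {(predicate, object)}

        def add(self, subject, predicate, obj):
            """Add a typed relationship between two entities."""
            self.triples.add((subject, predicate, obj))
            self.by_subject[subject].add((predicate, obj))

        def related(self, subject, predicate=None):
            """Entities linked from `subject`, optionally filtered by relation type."""
            return [o for (p, o) in self.by_subject[subject]
                    if predicate is None or p == predicate]

    kg = KnowledgeGraph()
    kg.add("insulin", "treats", "diabetes")
    kg.add("insulin", "produced_by", "pancreas")
    kg.add("diabetes", "is_a", "metabolic disorder")

    print(kg.related("insulin", "treats"))  # ['diabetes']
    ```

    The typed predicates ("treats", "produced_by") are what distinguish a knowledge graph from a plain link structure: the relationship itself carries meaning that can be queried.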

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. 
This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

language

1 item with this tag.

  • Technological nature of language and the implications for health professions education

    This essay explores the idea of language as humanity's first general purpose technology—a system we developed to extend human capabilities across a range of domains, which enabled complementary innovations. Through this conceptual lens, large language models (LLMs) emerge not merely as new digital tools, but as a significant evolution in the continuum of language technologies that stretches from spoken language through writing, printing, and digital text. The essay explores how LLMs extend language's core capabilities through unprecedented scale, cross-domain synthesis, adaptability, and emerging multimodality. These extensions are particularly relevant to health professions education, where students face the dual challenge of information overload and inadequate preparation for complex practice environments. By viewing LLMs as an evolution of our most fundamental technology rather than simply new applications, we can better understand their implications for clinical education. This perspective suggests shifting educational emphasis from knowledge acquisition to clinical reasoning and adaptive expertise, developing new forms of AI literacy specific to healthcare contexts, and reimagining assessment approaches. Understanding LLMs as part of language's ongoing evolution offers a nuanced middle path between uncritical enthusiasm and reflexive resistance, informing thoughtful integration that enhances rather than diminishes the human dimensions of healthcare education.

language-model

3 items with this tag.

  • Context engineering

    A system-level discipline focused on building dynamic, state-aware information ecosystems for AI agents

  • Technological nature of language and the implications for health professions education

    This essay explores the idea of language as humanity's first general purpose technology—a system we developed to extend human capabilities across a range of domains, which enabled complementary innovations. Through this conceptual lens, large language models (LLMs) emerge not merely as new digital tools, but as a significant evolution in the continuum of language technologies that stretches from spoken language through writing, printing, and digital text. The essay explores how LLMs extend language's core capabilities through unprecedented scale, cross-domain synthesis, adaptability, and emerging multimodality. These extensions are particularly relevant to health professions education, where students face the dual challenge of information overload and inadequate preparation for complex practice environments. By viewing LLMs as an evolution of our most fundamental technology rather than simply new applications, we can better understand their implications for clinical education. This perspective suggests shifting educational emphasis from knowledge acquisition to clinical reasoning and adaptive expertise, developing new forms of AI literacy specific to healthcare contexts, and reimagining assessment approaches. Understanding LLMs as part of language's ongoing evolution offers a nuanced middle path between uncritical enthusiasm and reflexive resistance, informing thoughtful integration that enhances rather than diminishes the human dimensions of healthcare education.

leadership

1 item with this tag.

  • Avoiding innovation theatre: A framework for supporting institutional AI integration

    Higher education institutions face persistent pressure to demonstrate visible engagement with artificial intelligence, often resulting in what we characterise as innovation theatre - the performance of transformation without corresponding structural change. This paper presents a diagnostic framework that distinguishes between performative and structural integration through analysis of four operational domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike maturity models that prescribe linear progression, this framework enables institutional leaders to assess whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality. The framework emerged from critical analysis of institutional AI responses but evolved toward practical utility for decision-makers operating within genuine constraints. We position this work as practitioner pattern recognition requiring subsequent empirical validation, outline specific validation pathways, and discuss implications for institutional strategy in contexts of technological disruption.

learning

7 items with this tag.

  • A better game: Thoughtful AI use over performative critique

    Rather than cataloguing AI's failures, demonstrate thoughtful use, critique from practice, and amplify what matters to you.

  • A bitter lesson for higher education

    Rich Sutton's 'Bitter Lesson' applies to education: AI reveals that artifact-based assessment never truly measured learning.

  • AI for learning at scale: Why I'm optimistic

    Despite the ethical concerns, generative AI represents an enormous opportunity for learning at scale. Here's why I'm optimistic.

  • Qualifications for AI literacy

    Any claim that a course or programme of study develops AI literacy requires important qualifications—literacy develops through sustained practice, is developmental and contextual, and cannot be fully assessed at course completion.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, traits like curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification.
Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, and where AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed for suggesting meaningful connections across knowledge domains. 
The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

  • From teaching to learning: How emergent scholarship disrupts traditional education hierarchies

    Traditional education systems are built on a paradox - institutions dedicated to learning are structured almost entirely around teaching. This fundamental misalignment has persisted for centuries, with higher education operating on the assumption that teaching inevitably produces learning. This paper argues that the traditional model, where knowledge flows unidirectionally from expert to novice, no longer serves a world of information abundance and technological disruption. Emergent scholarship offers an alternative approach that reconceptualises learning as a complex, networked process emerging from connections rather than transmission. By shifting from knowledge authority to learning facilitation, educators can create environments where diverse participants contribute to collective understanding, challenging hierarchies that position faculty as sole knowledge producers. This transformation is particularly urgent as artificial intelligence develops capabilities once exclusive to human experts, fundamentally altering the educational landscape. Rather than fighting these technological changes, emergent scholarship integrates them as participants in the learning ecosystem while focusing on uniquely human capabilities like critical thinking and collaborative problem-solving. The shift from teaching hierarchies to learning networks requires reimagining not just pedagogical approaches but institutional structures, potentially creating educational environments that better prepare graduates for complexity and uncertainty while fostering more engaging experiences for all participants.

learning-alignment

1 item with this tag.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, traits like curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification.
Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

learning-outcomes

1 item with this tag.

  • Maximising utility through optimal accuracy: A model for educational AI

    Educational support systems have long prioritised accuracy as the primary metric of quality, resulting in technically excellent resources that remain largely unused. We present a mathematical framework demonstrating that AI tutoring systems with 10-15% error rates might achieve superior learning outcomes through increased utility compared to more accurate but less accessible alternatives. We show that the multiplicative relationship between accuracy and utilisation creates an 'accessibility paradox' where imperfect-but-accessible systems outperform perfect-but-unused ones. Furthermore, we argue that education's inherent error correction mechanisms and the pedagogical value of critical evaluation make this domain particularly suited for moderate-accuracy AI deployment. Our framework provides quantitative thresholds for acceptable error rates and challenges the prevailing assumption that educational AI must meet the same accuracy standards as, for example, diagnostic AI in healthcare.
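
    The multiplicative relationship described above can be illustrated with a toy model. The functional form and the numbers below are illustrative assumptions for this index, not the paper's actual parameters or thresholds:

    ```python
    # Toy illustration of the 'accessibility paradox': expected learning benefit is
    # modelled as accuracy x utilisation, on the assumption that pushing accuracy
    # towards perfection reduces accessibility (cost, speed, availability) and
    # therefore utilisation. All figures are hypothetical.

    def expected_benefit(accuracy, utilisation):
        # Multiplicative model: an unused system delivers no benefit,
        # however accurate it is.
        return accuracy * utilisation

    # A near-perfect but expensive, slow system that students rarely use:
    perfect_but_unused = expected_benefit(accuracy=0.99, utilisation=0.10)

    # An imperfect (10-15% error) but cheap, instant, always-available system:
    imperfect_but_used = expected_benefit(accuracy=0.87, utilisation=0.70)

    print(round(perfect_but_unused, 3))   # 0.099
    print(round(imperfect_but_used, 3))   # 0.609 - the less accurate system wins
    ```

    Under these assumed figures the imperfect-but-accessible system delivers roughly six times the expected benefit, which is the paradox in miniature: because the terms multiply, utilisation can dominate accuracy.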

learning-theory

2 items with this tag.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. 
This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

literacy

1 item with this tag.

machine-learning

1 item with this tag.

model-context-protocol

2 items with this tag.

note-taking

1 item with this tag.

organisation

1 item with this tag.

organisational-change

2 items with this tag.

  • AI meeting scribes, organisational memory, and new governance structures

    AI meeting scribes have automated the control of organisational memory, reinforcing existing power dynamics while making them less visible.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack capabilities to address effectively. Context engineering creates technical dependencies making traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. 
This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to address effectively.

organisational-infrastructure

1 item with this tag.

  • Avoiding innovation theatre: A framework for supporting institutional AI integration

    Higher education institutions face persistent pressure to demonstrate visible engagement with artificial intelligence, often resulting in what we characterise as innovation theatre - the performance of transformation without corresponding structural change. This paper presents a diagnostic framework that distinguishes between performative and structural integration through analysis of four operational domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike maturity models that prescribe linear progression, this framework enables institutional leaders to assess whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality. The framework emerged from critical analysis of institutional AI responses but evolved toward practical utility for decision-makers operating within genuine constraints. We position this work as practitioner pattern recognition requiring subsequent empirical validation, outline specific validation pathways, and discuss implications for institutional strategy in contexts of technological disruption.

pedagogy

1 item with this tag.

personal-knowledge-management

1 item with this tag.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than sitting in discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack the capabilities to address effectively. Context engineering creates technical dependencies that make traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to accommodate.

personal-learning

2 items with this tag.

  • Context engineering and the technical foundations of educational transformation

    Higher education institutions face a fundamental choice in AI engagement that will determine whether they undergo genuine transformation or sophisticated preservation of existing paradigms. While institutional responses have centred on prompt engineering—teaching students to craft effective AI queries—this approach inadvertently reinforces hierarchical knowledge transmission models and container-based educational structures that increasingly misalign with professional practice. Context engineering emerges as a paradigmatic alternative that shifts focus from optimising individual AI interactions toward architecting persistent knowledge ecosystems. This demands sophisticated technical infrastructure including knowledge graphs capturing conceptual relationships, standardised protocols enabling federated intelligence, and persistent memory systems accumulating understanding over time. These technologies enable epistemic transformations that fundamentally reconceptualise how knowledge exists and operates within educational environments. Rather than sitting in discrete curricular containers, knowledge exists as interconnected networks where concepts gain meaning through relationships to broader understanding frameworks. Dynamic knowledge integration enables real-time incorporation of emerging research and community insights, while collaborative construction processes challenge traditional academic gatekeeping through democratic validation involving multiple stakeholder communities. The systemic implications prove profound, demanding governance reconceptualisation, substantial infrastructure investment, and operational transformation that most institutions currently lack the capabilities to address effectively. Context engineering creates technical dependencies that make traditional educational approaches increasingly untenable, establishing path dependencies favouring continued transformation over reversion to familiar paradigms. This analysis reveals context engineering as a potential watershed moment for higher education institutions seeking educational relevance and technological sophistication within rapidly evolving contexts that traditional academic structures struggle to accommodate.

  • Context sovereignty for AI-supported learning: A human-centred approach

    The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of *context engineering* as an operational framework that supports personal learning and the philosophical goal of *context sovereignty*. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

pkm

1 item with this tag.

podcasts

1 item with this tag.

policy

1 item with this tag.

practice

1 item with this tag.

productivity

2 items with this tag.

professional

1 item with this tag.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, one in which AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed to suggest meaningful connections across knowledge domains. The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

professional-development

2 items with this tag.

professional-education

1 item with this tag.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skills, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, curiosity, understanding, and intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that those assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification. Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

professional-learning

1 item with this tag.

prompt-engineering

2 items with this tag.

  • Prompt engineering

    Using natural language to produce desired responses from large language models through iterative refinement.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skills, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, curiosity, understanding, and intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that those assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification. Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

publication

1 item with this tag.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex - a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

publishing

2 items with this tag.

reasoning

1 item with this tag.

research

3 items with this tag.

  • Essays as scholarship

    The peer-reviewed article dominates academia, but essays deserve recognition as scholarship—enabling exploration and synthesis that formal research cannot.

  • Boyer's model of scholarship

    A multidimensional framework for scholarship spanning discovery, integration, application, and teaching.

  • Publishing with purpose: Using AI to enhance scientific discourse

    The introduction of generative AI into scientific publishing presents both opportunities and risks for the research ecosystem. While AI could enhance knowledge creation and streamline research processes, it may also amplify existing problems within the research industrial complex - a system that prioritises publication metrics over meaningful scientific progress. In this viewpoint article I suggest that generative AI is likely to reinforce harmful processes unless scientific journals and editors use these technologies to transform themselves into vibrant knowledge communities that facilitate meaningful discourse and collaborative learning. I describe how AI could support this transformation by surfacing connections between researchers' work, making peer review more dialogic, enhancing post-publication discourse, and enabling multimodal knowledge translation. However, implementing this vision faces significant challenges, deeply rooted in the entrenched incentives of the current academic publishing system. Universities evaluate faculty based largely on publication metrics, funding bodies rely on those metrics for grant decisions, and publishers benefit from maintaining existing models. Making meaningful change therefore requires coordinated action across multiple stakeholders who must be willing to accept short-term costs for long-term systemic benefits. The key to success lies in consistently returning to journals' core purpose: advancing scientific knowledge through thoughtful research and professional dialogue. By reimagining journals as AI-supported communities rather than metrics-driven repositories, we can better serve both the scientific community and the broader society it aims to benefit.

research-skills

1 item with this tag.

retrieval-augmented-generation

1 item with this tag.

risk-management

1 item with this tag.

  • Avoiding innovation theatre: A framework for supporting institutional AI integration

    Higher education institutions face persistent pressure to demonstrate visible engagement with artificial intelligence, often resulting in what we characterise as innovation theatre - the performance of transformation without corresponding structural change. This paper presents a diagnostic framework that distinguishes between performative and structural integration through analysis of four operational domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike maturity models that prescribe linear progression, this framework enables institutional leaders to assess whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality. The framework emerged from critical analysis of institutional AI responses but evolved toward practical utility for decision-makers operating within genuine constraints. We position this work as practitioner pattern recognition requiring subsequent empirical validation, outline specific validation pathways, and discuss implications for institutional strategy in contexts of technological disruption.

scholarship

4 items with this tag.

  • What does scholarship sound like?

    Audio scholarship—podcasts, dialogues, oral histories—deserves recognition as legitimate scholarly work. The format matters less than the quality of thinking.

  • Essays as scholarship

    The peer-reviewed article dominates academia, but essays deserve recognition as scholarship—enabling exploration and synthesis that formal research cannot.

  • Boyer's model of scholarship

    A multidimensional framework for scholarship spanning discovery, integration, application, and teaching.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, one in which AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed to suggest meaningful connections across knowledge domains. The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

skills

1 item with this tag.

social

1 item with this tag.

  • A theoretical framework for integrating AI into health professions education

    Health professions education faces a significant challenge: graduates are simultaneously overwhelmed with information yet under-prepared for complex practice environments. Meanwhile, artificial intelligence (AI) tools are being rapidly adopted by students, revealing fundamental gaps in traditional educational approaches. This paper introduces the ACADEMIC framework, a theoretically grounded approach to integrating AI into health professions education (HPE) that shifts focus from assessing outputs to supporting learning processes. Drawing on social constructivism, critical pedagogy, complexity theory, and connectivism, I analysed learning interactions across six dimensions: power dynamics, knowledge representation, agency, contextual influence, identity formation, and temporality. From this comparative analysis emerged seven principles—Augmented dialogue, Critical consciousness, Adaptive expertise development, Dynamic contexts, Emergent curriculum design, Metacognitive development, and Interprofessional Community knowledge building—that guide the integration of AI into HPE. Rather than viewing AI as a tool for efficient content delivery or a threat to academic integrity, the ACADEMIC framework positions AI as a partner in learning that can address longstanding challenges. The framework emphasises that most students are not natural autodidacts and need guidance in learning with AI rather than simply using it to produce better outputs. By reframing the relationship between students and AI, educators can create learning environments that more authentically prepare professionals for the complexity, uncertainty, and collaborative demands of contemporary healthcare practice.

strategy

1 item with this tag.

taste

2 items with this tag.

  • AI and evaluative judgement: Cultivating taste in the age of capability

    As AI makes creation and curation trivially easy, evaluative judgement about what should exist becomes the primary human contribution.

  • Taste and judgement in human-AI systems

    Contemporary discourse surrounding artificial intelligence demonstrates a persistent pattern of defensive positioning, characterised by attempts to identify capabilities that remain exclusively human. This sanctuary strategy creates increasingly fragile distinctions that position human and artificial intelligence as competitors for finite cognitive territory, establishing zero-sum relationships that constrain collaborative possibilities. Through critical analysis of binary thinking frameworks, this essay reveals how defensive approaches inadvertently diminish human agency while failing to address the practical challenges of navigating human-AI relationships. Drawing on ecological systems thinking and cognitive science, the essay reframes human-AI relationships as embedded within complex cognitive ecologies where meaning emerges through interaction. This ecological perspective challenges our investment in forms of human exceptionalism that privilege separation over collaborative participation, revealing how distinctiveness might emerge through sophisticated engagement rather than defensive isolation. The essay introduces taste as a framework for cultivating contextual judgement that transcends binary categorisation while preserving human agency over meaning-making and value determination. Unlike technical literacy that focuses on operational competency, taste development involves iterative experimentation and reflection that enables sophisticated discernment about when, how, and why to engage AI capabilities in service of personally meaningful purposes. This approach transforms the limiting question, 'What can humans do that AI cannot?', into the more generative inquiry, 'How might AI help us do more of what we value?' The analysis demonstrates how taste development enables abundance-oriented partnerships that expand rather than constrain human possibility through thoughtful technological collaboration. The implications extend beyond individual capability to encompass potential transformations in professional practice, educational approaches, and cultural frameworks for understanding human-technological relationships. By repositioning human agency within collaborative intelligence systems, taste development offers a pathway toward more sophisticated and sustainable approaches to navigating increasingly complex technological landscapes while preserving human authorship over fundamental questions of purpose and meaning.

teaching

3 items with this tag.

  • Classroom policy on the use of generative AI

    A template classroom policy for generative AI use that educators can adapt for their own modules and courses.

  • Boyer's model of scholarship

    A multidimensional framework for scholarship spanning discovery, integration, application, and teaching.

  • From teaching to learning: How emergent scholarship disrupts traditional education hierarchies

    Traditional education systems are built on a paradox - institutions dedicated to learning are structured almost entirely around teaching. This fundamental misalignment has persisted for centuries, with higher education operating on the assumption that teaching inevitably produces learning. This paper argues that the traditional model, where knowledge flows unidirectionally from expert to novice, no longer serves a world of information abundance and technological disruption. Emergent scholarship offers an alternative approach that reconceptualises learning as a complex, networked process emerging from connections rather than transmission. By shifting from knowledge authority to learning facilitation, educators can create environments where diverse participants contribute to collective understanding, challenging hierarchies that position faculty as sole knowledge producers. This transformation is particularly urgent as artificial intelligence develops capabilities once exclusive to human experts, fundamentally altering the educational landscape. Rather than fighting these technological changes, emergent scholarship integrates them as participants in the learning ecosystem while focusing on uniquely human capabilities like critical thinking and collaborative problem-solving. The shift from teaching hierarchies to learning networks requires reimagining not just pedagogical approaches but institutional structures, potentially creating educational environments that better prepare graduates for complexity and uncertainty while fostering more engaging experiences for all participants.

technology

1 item with this tag.

  • Technological nature of language and the implications for health professions education

    This essay explores the idea of language as humanity's first general-purpose technology—a system we developed to extend human capabilities across a range of domains, which enabled complementary innovations. Through this conceptual lens, large language models (LLMs) emerge not merely as new digital tools, but as a significant evolution in the continuum of language technologies that stretches from spoken language through writing, printing, and digital text. The essay explores how LLMs extend language's core capabilities through unprecedented scale, cross-domain synthesis, adaptability, and emerging multimodality. These extensions are particularly relevant to health professions education, where students face the dual challenge of information overload and inadequate preparation for complex practice environments. By viewing LLMs as an evolution of our most fundamental technology rather than simply new applications, we can better understand their implications for clinical education. This perspective suggests shifting educational emphasis from knowledge acquisition to clinical reasoning and adaptive expertise, developing new forms of AI literacy specific to healthcare contexts, and reimagining assessment approaches. Understanding LLMs as part of language's ongoing evolution offers a nuanced middle path between uncritical enthusiasm and reflexive resistance, informing thoughtful integration that enhances rather than diminishes the human dimensions of healthcare education.

template

1 item with this tag.

time-management

1 item with this tag.

trust

1 item with this tag.

  • From journals to networks: How transparency transforms trust in scholarship

    This essay examines the shifting landscape of trust in academic scholarship, challenging the traditional model where trust has been outsourced to publishers and journals as proxies for validation and quality assessment. While this system developed important mechanisms for scholarly trust, including persistent identification, version control, peer feedback, and contextual placement, technological change offers an opportunity to reclaim and enhance these mechanisms. Drawing on principles of emergent scholarship, I explore how trust can be reimagined through knowledge connection, innovation through openness, identity through community, value through engagement, and meaning through medium. This approach does not reject traditional scholarship but builds bridges between established practices and new possibilities, enabling a shift from institutional proxies to visible processes. The essay proposes a three-tier technical framework that maintains compatibility with traditional academic structures while introducing new possibilities: a live working environment where scholarship evolves through visible iteration; preprints with DOIs enabling persistent citation; and journal publication connecting to established incentive structures. This framework offers significant benefits, including greater scholarly autonomy, enhanced transparency, increased responsiveness, and recognition of diverse contributions. However, it also presents challenges: technical barriers to participation, potential fragmentation, increased resource demands, and recognition within traditional contexts. The result is not a replacement for traditional scholarship but an evolution that shifts trust from institutional proxies to visible processes, creating scholarship that is more connected, open, engaged, and ultimately more trustworthy.

user-interface

1 item with this tag.

  • Beyond text boxes: Exploring a graph-based user interface for AI-supported learning

    This essay critically examines the predominant interface paradigm for AI interaction today—text-entry fields, chronological chat histories, and project folders—arguing that these interfaces reinforce outdated container-based knowledge metaphors that fundamentally misalign with how expertise develops in professional domains. Container-based approaches artificially segment knowledge that practitioners must mentally reintegrate, creating particular challenges in health professions education where practice demands integrative thinking across traditionally separated domains. The text-entry field, despite its ubiquity in AI interactions, simply recreates container thinking in conversational form, trapping information in linear streams that require scrolling rather than conceptual navigation. I explore graph-based interfaces as an alternative paradigm that better reflects how knowledge functions in professional contexts, and where AI serves as both conversational partner and network builder. In this environment, conversations occur within a visual landscape, spatially anchored to relevant concepts rather than isolated in chronological chat histories. Multimodal nodes represent knowledge across different modalities, while multi-dimensional navigation allows exploration of concepts beyond simple scrolling. Progressive complexity management addresses potential cognitive overload for novices while maintaining the graph as the fundamental organising metaphor. Implementation opportunities include web-based knowledge graph interfaces supported by current visualisation technologies and graph databases, with mobile extensions enabling contextual learning in practice environments. Current AI capabilities, particularly frontier language models, already demonstrate the pattern recognition needed for suggesting meaningful connections across knowledge domains. The barriers to implementing graph-based interfaces are less technological than conceptual and institutional—our collective attachment to container-based thinking and the organisational structures built around it. This reconceptualisation of learning interfaces around networks rather than containers suggests an alternative that may better develop the integrative capabilities that define professional expertise and reduce the persistent gap between education and practice.

value-alignment

1 item with this tag.

  • The learning alignment problem: AI and the loss of control in higher education

    Higher education institutions have responded to AI technologies by emphasising prompt engineering—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the product of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration or academic integrity. The institutional focus on prompt control reveals what I call the learning alignment problem, where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, traits like curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face similar challenges: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise, often moving them further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms—from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification. Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.

values

1 item with this tag.

vector-database

1 item with this tag.

  • Beyond document management: Graph infrastructure for professional education curricula

    Professional curricula are comprehensively documented but not systematically queryable, creating artificial information scarcity. The consequences for institutions are significant: regulatory compliance reporting consumes weeks of staff time, quality assurance requires exhaustive manual verification, and curriculum office teams cannot efficiently answer structural questions. Current approaches—manual document review, VLE keyword search, curriculum mapping spreadsheets, and purpose-built curriculum management systems—fail to expose curriculum structure in queryable form. We propose an architecture where graph databases become the source of truth for curriculum structure, with vector databases for content retrieval and the Model Context Protocol providing accessible interfaces. This makes documented curriculum structure explicitly queryable—prerequisite chains, competency mappings, and assessment coverage—enabling compliance verification in hours rather than weeks. The architecture suits AI-forward institutions—those treating AI integration as ongoing strategic practice requiring active engagement with evolving technologies. Technology handles structural verification; educators retain essential authority over educational meaning-making. The proposal argues for removing technical barriers to interrogating curriculum complexity rather than eliminating that complexity through a technological solution.