About this essay

Abstract

The discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper introduces context sovereignty: a framework grounded in the recognition that the most valuable context we bring to any AI interaction is irreducibly personal: our values, intellectual commitments, professional identity, and meaning-making frameworks. While data sovereignty asks who controls personal information, context sovereignty asks who controls the meaning-making environment through which information becomes personally significant. Drawing on learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is essential for productive human-AI collaboration. Current approaches to context-setting — primarily prompting and document uploading — treat context as additive rather than foundational, creating episodic burdens that force learners to adapt to AI systems rather than the reverse. Context sovereignty offers an alternative paradigm: a foundational commitment to personal meaning-making, supported by three operational principles: persistent understanding, individual agency, and cognitive extension. The framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. By centring personal context, the framework ensures AI systems adapt to human thinking rather than requiring human adaptation to AI constraints, enabling a relationship where AI increases the reach and effectiveness of human reasoning.

Beyond prompting

The conversation around AI literacy in education still revolves largely around teaching students to write better prompts. Prompting is a useful skill, but it addresses a small part of what matters when working with AI systems (Schulhoff et al., 2024). The more consequential challenge lies in context-setting: the deliberate shaping of the information, relationships, and signals that guide AI behaviour toward personally meaningful outcomes. This paper introduces context sovereignty for AI-supported learning as a framework grounded in the recognition that the most valuable context we bring to any AI interaction is irreducibly personal: our values, intellectual commitments, professional identity, and meaning-making frameworks.

Context engineering as a system-level discipline. Recent work defines context engineering as a distinct discipline operating at the system level (Teki, 2025), and holistic AI literacy frameworks now explicitly include “contextualization” as a core component (Allen & Kendeou, 2023). Rather than crafting better prompts, context engineering emphasises the informational architecture of the entire ecosystem — the knowledge, tools, and signals an AI needs to interpret and act effectively. The most capable models underperform when given incomplete or poorly structured context, regardless of their underlying sophistication. The relevant shift is from model-centric optimisation to context-centric architecture.

A useful analogy: onboarding a new colleague. When a knowledgeable colleague joins a team, we do not simply hand them a stack of documents. We share how we think about problems, what matters to us, where the tensions and open questions are. Context engineering is the process of onboarding AI into an intellectual world — not just providing information, but making visible the frameworks, values, and commitments through which that information is interpreted.

Compression, filtering, and structured retrieval. Increasing the volume of context is not sufficient. Intelligent compression and filtering are necessary for surfacing relevant information without overwhelming the model. LLMs have limited context windows; earlier information in a conversation may be lost or compressed to fit within processing limits. While these windows are expanding, they cannot extend over the years of a degree programme or across a professional lifetime. Better context is more valuable than more context.
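The point that better context beats more context can be made concrete. A minimal sketch (all names hypothetical; word overlap stands in for a real relevance measure such as embeddings): score candidate notes against the task at hand and pack only the most relevant ones into a fixed budget, rather than sending everything.

```python
def score(note: str, query: str) -> int:
    """Crude relevance score: count of shared words (a stand-in for embeddings)."""
    return len(set(note.lower().split()) & set(query.lower().split()))

def pack_context(notes: list[str], query: str, budget: int) -> list[str]:
    """Greedily select the most relevant notes that fit within a word budget."""
    ranked = sorted(notes, key=lambda n: score(n, query), reverse=True)
    chosen, used = [], 0
    for note in ranked:
        cost = len(note.split())
        if score(note, query) > 0 and used + cost <= budget:
            chosen.append(note)
            used += cost
    return chosen

notes = [
    "Piaget: learning proceeds by assimilation and accommodation",
    "Grocery list: milk, eggs, bread",
    "Transfer of learning depends on conditionalised knowledge",
]
# Irrelevant notes are filtered out; the rest are ranked to fit the budget.
print(pack_context(notes, "how does learning transfer work", budget=20))
```

The same shape — rank, filter, fit — is what production retrieval pipelines do at scale; the point is that the selection policy, not the model, decides what the AI gets to see.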

From static to agentic systems. The field is moving from static, linear pipelines toward dynamic, agentic systems where AI agents plan, use tools, reflect, and collaborate to address complex tasks. New architectures are emerging to simulate memory and enable more adaptive learning experiences. In this expanding ecosystem of interacting agents — each bringing a different contextual model to the interaction — context engineering becomes a foundational competence for AI-supported learning.

The nature of context and its significance for learning

Learning involves integrating new understanding with existing cognitive structures through assimilation and accommodation (Piaget, 1977). The quality of this integration depends on the contextual connections learners make between new and existing knowledge. Context provides the relational substrate within which connections form, determining their durability and transferability — what Bransford et al. (2001) termed “conditionalised knowledge,” connected to appropriate conditions of application. Without rich contextual understanding, knowledge remains inert, bound to the circumstances of its acquisition. Deep contextual awareness enables pattern recognition across domains, supporting flexible application in significantly different situations (Barnett & Ceci, 2002; Salomon & Perkins, 1989). This capacity does not emerge from decontextualised principles but from understanding how contextual elements interact to create conditions for knowledge application.

Context is not only cognitive. The most consequential dimensions of personal context often lie beneath explicit knowledge: values, ethical commitments, aesthetic preferences, professional identity, intellectual temperament, and the frameworks through which individuals make meaning. These are not supplementary to learning — they are constitutive of it. A clinician does not simply apply biomedical knowledge to a patient; they interpret it through frameworks shaped by professional values, cultural understanding, and accumulated experience. A researcher does not simply analyse data; they bring theoretical commitments and methodological dispositions that shape what counts as evidence and what questions are worth asking. This axiological dimension — the values, commitments, and identity that shape interpretation and action — is what makes personal context irreducibly individual. It is also what makes it irreplaceable by AI. No amount of model capability can generate someone’s intellectual commitments from the outside. These must be authored by the individual and made available through deliberate context architecture.

These evolving, interconnected networks of understanding, values, and frameworks are absent from language models. Models do not, by default, retain memory across sessions or build upon previous experience, and learners must therefore take responsibility for curating and maintaining their own context as a living resource rather than a static asset. Personal knowledge management (PKM) practices offer important insights here. These frameworks explore systematic approaches through which learners capture, organise, connect, and retrieve information across their intellectual lives, transforming scattered content into coherent understanding (Chatti, 2012). Effective PKM systems create environments where ideas, experiences, and insights form dynamic networks of personal meaning — through reflective note-taking, concept mapping, and deliberate connection-making between domains. This infrastructure preserves not just what learners know, but how they know it: the reasoning patterns, emotional associations, and conceptual frameworks that give information personal significance. The effectiveness of AI outputs depends more on how context is structured and managed than on the intelligence of the underlying model (King, 2025).
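One way to picture such a PKM network (a toy sketch, not any specific tool): each note records its outgoing links, and an index derives the backlinks, so that retrieving one idea also surfaces its neighbours in both directions — the connective tissue, not just the content.

```python
from collections import defaultdict

# A toy note store: each note lists the notes it links to.
notes = {
    "assimilation": {"links": ["piaget", "schema"]},
    "accommodation": {"links": ["piaget"]},
    "piaget": {"links": []},
    "schema": {"links": ["accommodation"]},
}

# Derive backlinks so every link is traversable in both directions.
backlinks = defaultdict(set)
for name, note in notes.items():
    for target in note["links"]:
        backlinks[target].add(name)

def neighbourhood(name: str) -> set[str]:
    """All notes one hop away, in either direction — the idea's local context."""
    return set(notes[name]["links"]) | backlinks[name]

print(neighbourhood("piaget"))  # notes that link to or from "piaget"
```

It is this derived structure — which ideas the learner has chosen to connect — that carries the "how they know it" the paragraph describes, and it is exactly what a bare document upload discards.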

Context therefore functions not as background information supporting AI interactions but as the active field within which meaning emerges and understanding develops. It shapes not only what we understand, but how we understand — creating conditions within which knowledge becomes personally meaningful. Personal context encompasses prior experience, cultural frameworks, emotional associations, values, and conceptual structures. It is irreducibly individual while existing in dynamic relationship with broader social, cultural, and institutional contexts. Context engineering — through information architecture, system integration, and implementation strategy — enables AI to engage meaningfully with this personal landscape. Poor or missing context produces unreliable outputs, regardless of model sophistication.

Current approaches to context in AI-supported learning

Contemporary AI engagement in education relies primarily on prompting strategies and retrieval-augmented generation (RAG) through document uploading (Lee & Palmer, 2025; Li et al., 2025). Both treat context as additive rather than foundational, reducing it to static information while missing its relational and personal qualities. Researchers have identified this limited contextual understanding as a primary barrier to effective human-AI collaboration (Yan et al., 2023).

Prompting strategies require learners to provide explicit contextual information within each request — background knowledge, learning objectives, preferred explanation styles, and relevant circumstances. This places the entire contextualisation burden on the learner, demanding fresh articulation with every interaction and leaving important information unspoken. The cognitive overhead diverts mental resources from learning toward orienting the AI system, creating what we term episodic burden: the load of repeatedly establishing context rather than building upon accumulated understanding.
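The difference between episodic prompting and a persistent context can be sketched in a few lines (all names hypothetical): a learner-authored profile is maintained once, outside any single conversation, and rendered into every request — instead of being re-articulated each time.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """A persistent, learner-authored context, maintained outside any one chat."""
    background: str
    goals: list[str]
    preferences: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Background: {self.background}",
                 "Goals: " + "; ".join(self.goals)]
        if self.preferences:
            lines.append("Preferences: " + "; ".join(self.preferences))
        return "\n".join(lines)

def contextualise(profile: LearnerProfile, question: str) -> str:
    """Prepend the standing profile so each question starts from accumulated context."""
    return profile.render() + "\n\nQuestion: " + question

profile = LearnerProfile(
    background="Second-year nursing student",
    goals=["link pharmacology to ward practice"],
    preferences=["worked examples before theory"],
)
print(contextualise(profile, "Why do beta blockers mask hypoglycaemia?"))
```

The profile evolves as the learner does; the per-question cost of establishing context drops to zero, which is the episodic burden the paragraph names.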

Document uploading via RAG compounds the problem by conflating information with context. Uploading research papers, slides, or lecture notes provides a language model with content anyone could share. It does not capture the personal lens through which information becomes meaningful to an individual learner — the connections to existing knowledge, goals, values, and thinking patterns. The distinction between information context and personal context matters: the former is shareable; the latter requires understanding how information connects to the individual’s cognitive and axiological frameworks.

Current AI interactions place the entire burden of re-contextualisation on the learner.

Even memory-enabled models simulate continuity without constituting genuine understanding of how someone thinks.

These episodic approaches treat context as interaction addenda rather than the foundation for cognitive partnership. They create fragmented learning experiences, missing opportunities to develop persistent, accumulating understanding. They also establish asymmetrical power relationships: learners surrender personal information to access enhanced AI capabilities, while commercial entities accumulate value from user data without learners having visibility into how their context is managed. By treating context as static input rather than something that evolves through engagement, current approaches cannot support the kinds of personalised learning, collaborative inquiry, and cognitive extension that become possible when context is properly understood and preserved within learner control.

A conceptual framework for context sovereignty in learning

Context sovereignty positions learners’ personal context — knowledge, values, goals, thinking patterns — as the central element around which AI interaction is organised. Unlike current approaches that require learners to adapt to AI systems, context sovereignty enables AI systems to adapt to learners’ established cognitive patterns while maintaining individual control over personal information.

The foundational insight: what we bring is ourselves

The argument for context sovereignty rests on a recognition that sits prior to any technical or structural consideration. The most valuable context we bring to any AI interaction is irreducibly personal: our values, intellectual commitments, professional identity, ethical frameworks, and meaning-making patterns. Only we can supply these. No other person can provide them on our behalf, and no AI system can generate them from the outside.

The one thing we always bring to an AI interaction is ourselves — our values, commitments, identity, and frameworks for making sense of the world.

This is the irreducibly personal context that no one else can supply and no model can generate.

This is what distinguishes context sovereignty from both data sovereignty and from technical solutions to model memory. Data sovereignty concerns the control of information. Model memory concerns the persistence of facts across sessions. Context sovereignty concerns something more fundamental: ownership of one’s meaning-making environment. A model with perfect memory and unlimited context windows would still lack the learner’s values, commitments, and interpretive frameworks unless these were deliberately authored and made available.

Context sovereignty is distinct from data sovereignty.

Data sovereignty asks who controls information; context sovereignty asks who controls the meaning-making environment that determines what information means.

The practical consequences are significant. The primary task of context engineering is not information management but self-articulation — making visible the values, commitments, and frameworks that shape how we think, so that AI systems can engage with us on terms we have authored. The quality of human-AI collaboration depends less on model capability and more on the richness of the personal context the learner provides. And context sovereignty is not a technical problem to be solved by better models but a human practice to be cultivated — one that develops metacognitive awareness, strengthens intellectual identity, and builds self-knowledge that is valuable whether or not AI is involved.

Three operational principles

The foundational insight is protected and operationalised through three principles that create the conditions for cognitive partnership.

Three operational principles — persistent understanding, individual agency, and cognitive extension — protect and leverage what is irreducibly personal.

Together they shift AI from a generic tool to a genuine cognitive partner.

Persistent understanding transforms episodic encounters into continuous collaboration where interactions build upon previous understanding. Persistence here means more than memory. Current model memory features store and retrieve facts, but they do not constitute developmental understanding — a model that remembers a learner’s topic preferences has not become any smarter about that learner’s intellectual trajectory. Genuine persistent understanding would require continual learning: the capacity for a system to develop through ongoing interaction rather than requiring periodic retraining (Wang et al., 2024). Until AI systems achieve this, the burden of maintaining developmental continuity falls on the learner’s external context architecture — PKM systems, structured knowledge, and knowledge graphs (Ehrlinger & Wöß, 2016; Tamašauskaitė & Groth, 2023) — explicit frameworks that evolve as the learner does. This is not a temporary workaround. It is the practice of context sovereignty itself.
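An external context architecture of the kind described can start very simply — for instance, as a set of subject–predicate–object triples that the learner extends over time (a toy sketch of the knowledge-graph idea; the predicates and facts here are illustrative, not a prescribed vocabulary):

```python
# A learner's knowledge graph as (subject, predicate, object) triples,
# grown across sessions rather than rebuilt for each one.
triples = {
    ("transfer", "depends_on", "conditionalised knowledge"),
    ("conditionalised knowledge", "discussed_by", "Bransford et al."),
    ("transfer", "studied_by", "Barnett & Ceci"),
}

def about(subject: str) -> set[tuple[str, str]]:
    """Everything the graph currently asserts about a subject."""
    return {(p, o) for s, p, o in triples if s == subject}

def learn(s: str, p: str, o: str) -> None:
    """The graph evolves as the learner does: record a new assertion."""
    triples.add((s, p, o))

learn("transfer", "blocked_by", "inert knowledge")
print(sorted(about("transfer")))
```

Because the graph lives with the learner, any model — today's or next year's — can be handed the relevant slice of it; the developmental continuity belongs to the architecture, not to any one system's memory feature.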

Individual agency ensures learners maintain control over personal context, preserving what Freire (2000) described as the learner’s role as “critical co-investigator” rather than passive recipient. This principle responds to power asymmetries in current AI systems, maintaining the humanising dimensions of education within technological mediation and ensuring learners retain authorship over their cognitive development. Agency extends beyond data control to encompass control over the interpretive frameworks, values, and commitments that shape how AI engages with the learner.

Cognitive extension positions AI as an amplifier of human reasoning rather than a substitute, drawing on distributed cognition and the extended mind thesis (Clark & Chalmers, 1998). Meaningful extension requires deep understanding of the human cognitive partner — their reasoning patterns, knowledge structures, values, and meaning-making approaches. Without this understanding, AI extends generic capabilities. With it, AI extends this particular person’s thinking in ways responsive to their intellectual identity.

Operational dimensions

These principles manifest through interconnected operational dimensions.

Personal context curation requires learners to develop metacognitive awareness of their own learning patterns, knowledge structures, values, and intellectual goals. This transforms Schön’s (1984) “reflection-in-action” into systematic approaches for making thinking visible to AI. Context curation through PKM creates structured representations — notes, reflections, knowledge maps, bi-directional linking — that serve as interfaces between human cognition and AI, transforming information management from passive archive into active cognitive infrastructure.

Continual learning architectures would enable AI systems to evolve their understanding as learners develop and change (Wang et al., 2024). This goes beyond memory of past interactions to include adaptive understanding of how learners’ thinking patterns, goals, and frameworks shift over time. Until models achieve genuine continual learning, this developmental function is served by the learner’s own evolving context architecture — but the aspiration remains important as a design principle for the systems that support context sovereignty.

Contextual interoperability separates AI reasoning from personal context. Context remains private and locally controlled; intelligence is accessed as a service (Lins et al., 2021). Learners benefit from powerful AI capabilities without compromising autonomy over personal information or meaning-making processes — a form of federated intelligence where the reasoning capabilities of different language models, local or remote, can be applied to local context depending on the task and the learner’s preference (Long, 2024).
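The separation described here can be sketched as follows (hypothetical names throughout; no real provider API is assumed): the context store stays local, only a task-relevant, learner-approved slice is ever attached to an outgoing request, and the choice of model is a per-task parameter.

```python
def redact(context: dict, allowed: set[str]) -> dict:
    """Release only the fields the learner has explicitly approved for sharing."""
    return {k: v for k, v in context.items() if k in allowed}

def build_request(context: dict, allowed: set[str], task: str, model: str) -> dict:
    """Assemble a model-agnostic request: approved context slice + task + chosen model."""
    return {"model": model, "context": redact(context, allowed), "task": task}

local_context = {
    "values": "patient-centred care",
    "draft_notes": "private reflections on a difficult placement",
    "current_topic": "pharmacokinetics",
}

# Only approved fields leave the device; the model is swappable per task.
request = build_request(local_context,
                        allowed={"values", "current_topic"},
                        task="Explain first-pass metabolism",
                        model="local-small")  # or a remote model, per preference
print(request["context"])
```

The design choice is the asymmetry: intelligence is interchangeable and remote-capable, while context is singular and stays under local control.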

Taken together, these principles and operational dimensions represent a movement from AI-centric to human-centric collaboration:

| Current approach | Context sovereignty | Significance |
| --- | --- | --- |
| Episodic interactions | Persistent relationships | Eliminates cognitive overhead of repeated contextualisation |
| Explicit context articulation | Implicit contextual understanding | Enables natural, efficient communication |
| Generic AI capabilities | Personalised cognitive extension | Transforms AI from tool to thinking partner |
| Information sharing | Meaning preservation | Maintains personal significance of knowledge |
| Data extraction models | Individual context sovereignty | Preserves meaning-making environment, not just data |
| Human adaptation to AI | AI adaptation to human patterns | Centres human agency in the relationship |
| Prompting skills | Context curation capabilities | Develops metacognitive awareness and self-articulation |

Implications of context sovereignty for learning

Productive intellectual challenge through deep contextual understanding

A common concern about personalised AI is that it creates echo chambers — systems that confirm rather than challenge. Context sovereignty addresses this by enabling more sophisticated challenge, not less (Bjork & Bjork, 2009). Productive intellectual challenge requires meaningful engagement: the AI must understand a learner’s existing positions — values, assumptions, reasoning patterns, knowledge structures — deeply enough to identify genuine limitations, contradictions, or unexplored implications. This differs from forced opposition (“tell me why I’m wrong”) or generic alternatives (institutional prompt libraries). An AI with rich understanding of a learner’s intellectual frameworks can surface contradictions between stated beliefs, identify assumptions lacking evidential support, or suggest alternative interpretations that carry force precisely because they connect to existing commitments. Generic AI produces generic challenge. Contextually rich AI produces challenge that is difficult to dismiss because it engages with the learner’s own reasoning on its own terms.

For educators, context sovereignty directly addresses concerns about echo chambers.

A system with rich understanding of a learner’s existing frameworks can provide precisely calibrated intellectual challenge that generic prompting cannot.

Distributed context sovereignty and collaborative learning

Context sovereignty enables collaborative learning through distributed contextual awareness — environments where multiple contextually-aware AI systems interact while maintaining clear boundaries around context-sharing (Ferber, 1999). Consider professional education settings where learners bring personal AI agents into institutional contexts that deploy agents of their own. These agents interact and integrate personal and institutional context — values, processes, knowledge frameworks — in ways that surface tensions and contradictions. Rather than adapting to lowest common denominators, distributed context sovereignty treats individual differences as collective learning resources, with AI systems identifying productive complementarities between participants’ knowledge and perspectives.
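A toy sketch of such context-sharing boundaries (hypothetical throughout): each participant's agent holds a private context and an explicit sharing policy, and a matchmaking step sees only what each policy releases — so complementarities can be found without surrendering sovereignty.

```python
class Agent:
    """A participant's agent: private context plus an explicit sharing policy."""
    def __init__(self, owner: str, context: dict, shareable: set[str]):
        self.owner = owner
        self._context = context          # never exposed directly
        self._shareable = shareable      # the owner's sharing policy

    def public_view(self) -> dict:
        """Only fields the owner chose to share cross the boundary."""
        return {k: v for k, v in self._context.items() if k in self._shareable}

def complementarities(a: "Agent", b: "Agent") -> set[str]:
    """Topics one participant can offer that the other wants — from shared views only."""
    va, vb = a.public_view(), b.public_view()
    return set(va.get("strengths", [])) & set(vb.get("interests", []))

alice = Agent("Alice", {"strengths": ["statistics"], "diary": "private"}, {"strengths"})
bob = Agent("Bob", {"interests": ["statistics", "ethics"]}, {"interests"})
print(complementarities(alice, bob))
```

The matchmaking function never touches `_context`; what counts as shareable is set per participant, which is the boundary the paragraph describes.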

Reconceptualising authentic intellectual work and assessment

The question is not “what does the student know without AI assistance?” but “what important problem did the student and AI solve together?” and “how effectively did the student mobilise contextual knowledge through AI partnership?” Professional and civic life will include AI support; students need to demonstrate effective AI collaboration rather than performance under its artificial absence. Evidence suggests that learners who engage in iterative, highly interactive processes with AI achieve significantly better outcomes than those who use it as a static information source (Nguyen et al., 2024).

Assessment should shift from “what can this student produce without AI?” to a more meaningful question.

How effectively did this student mobilise contextual knowledge through an AI partnership to solve a meaningful problem?

Assessment in this paradigm examines learners’ capacity to curate relevant context, guide AI reasoning toward productive insights, and critically evaluate AI-generated ideas within personal knowledge frameworks. Authentic intellectual work becomes the orchestration of human and AI capabilities in the service of personally and socially meaningful problem-solving.

Preserving human agency through cognitive amplification

By centring personal context, context sovereignty ensures AI adapts to human thinking rather than requiring the reverse. Power dynamics shift from models that extract value from user data toward relationships where learners maintain sovereignty over their meaning-making processes. AI becomes a medium through which learners extend cognitive reach while remaining primary agents of their own development. Rather than replacing human judgement with algorithmic decision-making, context sovereignty enables learners to leverage AI as an extension of their own thinking — maintaining critical oversight and creative control over intellectual work. The test of whether this works is whether the human’s meaning-making environment remains under their own authorship. When it does, technological sophistication enhances human capability rather than diminishing it.

Conclusion

Context sovereignty offers a vision of AI in education that preserves human agency while enabling genuine cognitive partnerships. The framework rests on a recognition that the most valuable context any learner brings to an AI interaction is irreducibly personal — values, commitments, identity, and the meaning-making frameworks that only they can author. This insight is not threatened by improvements in model memory or expanding context windows. Even perfect recall does not constitute understanding of someone’s intellectual world.

The three operational principles — persistent understanding, individual agency, and cognitive extension — protect and leverage this personal foundation. Together they enable sophisticated intellectual challenge, collaborative learning that preserves individual autonomy, assessment practices recognising collaborative problem-solving as authentic achievement, and access to powerful AI capabilities while maintaining control over the meaning-making environment.

Context sovereignty points toward learning environments designed for cultivating human-AI collaboration, assessment practices that examine learners’ capacity to mobilise contextual knowledge through AI partnerships, and institutions that prepare learners for a world where AI is a persistent cognitive partner. The essential task is not building better models but building better context: the structured, evolving, personally authored knowledge architectures that make AI collaboration productive. That task belongs to the learner, and so does the sovereignty over what it produces.

References

Allen, L. K., & Kendeou, P. (2023). ED-AI Lit: An Interdisciplinary Framework for AI Literacy in Education. Journal of Educational Computing Research. https://doi.org/10.1177/07356331231216550

Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol

Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn?: A taxonomy for far transfer. Psychological Bulletin, 128(4), 612–637. https://doi.org/10.1037/0033-2909.128.4.612

Bjork, E. L., & Bjork, R. A. (2009). Making Things Hard on Yourself, But in a Good Way: Creating Desirable Difficulties to Enhance Learning. In Psychology and the Real World (pp. 55–64). Worth Pub.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2001). How People Learn: Brain, Mind, Experience, and School. Washington, DC: The National Academies Press. https://doi.org/10.17226/10067

Chalef, D. (2025). What is Context Engineering, Anyway. https://blog.getzep.com/what-is-context-engineering/

Chatti, M. A. (2012). Knowledge management: A personal knowledge network perspective. Journal of Knowledge Management, 16(5), 829–844. https://doi.org/10.1108/13673271211262835

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.

Ehrlinger, L., & Wöß, W. (2016). Towards a Definition of Knowledge Graphs. SEMANTiCS.

Essa, S. G., Celik, T., & Human-Hendricks, N. E. (2023). Personalized Adaptive Learning Technologies Based on Machine Learning Techniques to Identify Learning Styles: A Systematic Literature Review. IEEE Access, 11, 48392–48409. https://doi.org/10.1109/access.2023.3276439

Ferber, J. (1999). Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence (1st. ed.). Addison-Wesley Longman Publishing Co., Inc., USA.

Freire, P. (2000). Pedagogy of the oppressed (30th anniversary ed). Continuum.

Hummel, P., Braun, M., Tretter, M., & Dabrock, P. (2021). Data sovereignty: A review. Big Data & Society, 8(1), 2053951720982012. https://doi.org/10.1177/2053951720982012

King, S. (2025). Context Engineering: Why Feeding AI the Right Context Matters. Inspired Nonsense blog. https://inspirednonsense.com/context-engineering-why-feeding-ai-the-right-context-matters-353e8f87d6d3

Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: A systematic review to help inform curricula. International Journal of Educational Technology in Higher Education, 22(1), 7. https://doi.org/10.1186/s41239-025-00503-7

Li, Z., Wang, Z., Wang, W., Hung, K., Xie, H., & Wang, F. L. (2025). Retrieval-augmented generation for educational application: A systematic survey. Computers and Education: Artificial Intelligence, 8, 100417. https://doi.org/10.1016/j.caeai.2025.100417

Lins, S., Pandl, K. D., Teigeler, H., Thiebes, S., Bayer, C., & Sunyaev, A. (2021). Artificial Intelligence as a Service. Business & Information Systems Engineering, 63(4), 441–456. https://doi.org/10.1007/s12599-021-00708-w

Long, G. (2024). The rise of federated intelligence: From federated foundation models toward collective intelligence. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 8547–8552. https://doi.org/10.24963/ijcai.2024/980

Nezhurina, M., Cipolina-Kun, L., Cherti, M., & Jitsev, J. (2024). Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models (No. arXiv:2406.02061). arXiv. https://doi.org/10.48550/arXiv.2406.02061

Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Higher Education Research & Development. https://doi.org/10.1080/07294360.2024.2341994

Piaget, J. (1977). The development of thought: Equilibration of cognitive structures. (Trans A. Rosin). Viking.

Salomon, G., & Perkins, D. N. (1989). Rocky Roads to Transfer: Rethinking Mechanism of a Neglected Phenomenon. Educational Psychologist. https://doi.org/10.1207/s15326985ep2402_1

Schön, D. A. (1984). Reflective practitioner: How professionals think in action. Taylor & Francis Group.

Schulhoff, S., Ilie, M., Balepur, N., Kahadze, K., Liu, A., Si, C., Li, Y., Gupta, A., Han, H., Schulhoff, S., Dulepet, P. S., Vidyadhara, S., Ki, D., Agrawal, S., Pham, C., Kroiz, G., Li, F., Tao, H., Srivastava, A., … Resnik, P. (2024). The Prompt Report: A Systematic Survey of Prompting Techniques (No. arXiv:2406.06608). arXiv. https://doi.org/10.48550/arXiv.2406.06608

Tamašauskaitė, R., & Groth, P. (2023). Defining a Knowledge Graph Development Process Through a Systematic Review. ACM Computing Surveys. https://doi.org/10.1145/3592624

Teki, S. (2025). Context Engineering: The Key to Effective AI Agents. Sundeep Teki blog. https://www.sundeepteki.org/blog/context-engineering-a-framework-for-robust-generative-ai-systems

Wang, L., Zhang, X., Su, H., & Zhu, J. (2024). A Comprehensive Survey of Continual Learning: Theory, Method and Application (No. arXiv:2302.00487). arXiv. https://doi.org/10.48550/arXiv.2302.00487

Yan, L., Zhao, L., Martinez-Maldonado, R., Jin, Y., Gašević, D., Echeverria, V., Nieto, G. F., & Swiecki, Z. (2023). Human-AI Collaboration in Thematic Analysis using ChatGPT: A User Study and Design Recommendations. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3580874

Yan, W. (2025). Don’t Build Multi-Agents. Cognition. https://cognition.ai/blog/dont-build-multi-agents