Context sovereignty for AI-supported learning: A human-centred approach

Abstract

The current discourse around artificial intelligence in education has become preoccupied with prompting strategies, overlooking more fundamental questions about the nature of context in human-AI collaboration. This paper explores the concept of context engineering as an operational framework that supports personal learning and the philosophical goal of context sovereignty. Drawing from complexity science and learning theory, we argue that context functions as a dynamic field of meaning-making rather than static background information, and that ownership of that context is an essential consideration. Current approaches to context-setting in AI-supported learning—primarily prompting and document uploading—create episodic burdens requiring learners to adapt to AI systems rather than insisting that AI systems adapt to learners. Context sovereignty offers an alternative paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework addresses concerns about privacy, intellectual challenge, and authentic assessment while enabling new forms of collaborative learning that preserve human agency. Rather than treating AI as an external tool requiring skilful manipulation, context sovereignty suggests AI can become a cognitive partner that understands and extends human thinking while respecting individual boundaries. The implications extend beyond technical implementation to fundamental questions about the nature of learning, assessment, and human-AI collaboration in educational settings.

Key takeaways

  • Context engineering is a distinct discipline: Context engineering is not simply an extension of prompt engineering, but a system-level approach focused on designing dynamic, state-aware information ecosystems for AI agents. The quality and structure of context are often the primary differentiators between fragile demos and robust, reliable AI-supported learning environments.
  • Context is dynamic and relational: Rather than static information to be uploaded, context functions as a complex adaptive system characterised by emergence, temporal evolution, and relational dynamics that continuously shape meaning-making processes.
  • Current approaches create cognitive burdens: Episodic prompting and document uploading require constant re-contextualisation, creating cognitive overhead while establishing power asymmetries where learners adapt to AI systems rather than vice versa.
  • Three principles enable cognitive partnership: Persistent understanding, individual agency, and cognitive extension work together to create conditions for genuine collaboration where AI systems adapt to human cognitive patterns while preserving individual control over personal context.
  • Assessment should focus on collaborative problem-solving: Rather than attempting to distinguish between human and AI contributions, assessment should examine learners' capacity to mobilise contextual knowledge through AI partnerships to solve meaningful problems.
  • Privacy and functionality can coexist: By separating intelligence and reasoning services from personal context, learners can access sophisticated AI capabilities while maintaining control over personal information and meaning-making processes.
  • Implementation requires cultural transformation: Realising context sovereignty's potential demands not just technical capability but fundamental shifts in educational practices, institutional frameworks, and cultural understanding of human-AI collaboration.

Beyond prompting

As educators work to develop AI literacy in students, much of the conversation still revolves around teaching better prompting strategies. While prompting is a valuable skill, it is only a small part of what matters when working with AI systems. The real challenge—and opportunity—lies in the broader practice of context-setting: the deliberate shaping of the information, relationships, and signals that guide AI behaviour towards a personally meaningful learning experience.

A key reason for the shift from prompt engineering to context engineering is the stateless and static nature of large language models (LLMs). Language models do not remember previous interactions (statelessness), and their knowledge is fixed at the time of training (static). This means that every interaction with an AI system starts from scratch, with no memory of prior context or ability to update its knowledge dynamically. As a result, users and application layers (e.g. ChatGPT's Memory, Claude's Projects, or Gemini's Web Search) must take responsibility for constructing and maintaining context, and for providing up-to-date, relevant information with each interaction. This technical limitation is especially significant in learning, where continuity, personalisation, and the accumulation of understanding are essential.
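To make this statelessness concrete, the sketch below shows an application layer resending the full conversation history with every call; the model itself retains nothing between turns. This is a minimal sketch assuming an OpenAI-style chat-completions client; the model name is an arbitrary placeholder, and any comparable API behaves the same way.

```python
# Minimal sketch: the model is stateless, so the application layer must
# reconstruct and resend all relevant context on every call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history: list[dict] = []  # the "memory" lives here, outside the model

def ask(user_message: str, system_context: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works the same way
        # The entire accumulated history travels with every single request.
        messages=[{"role": "system", "content": system_context}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Without `history`, the model could not answer this question at all.
print(ask("What did we discuss earlier?", "You are a patient tutor."))
```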

In this paper we introduce context sovereignty as a way to empower individuals to maintain agency and authorship over their own learning and meaning-making in partnership with AI. We argue that current approaches to context-setting—episodic prompting and document uploading—ignore the temporal, relational nature of context and create cognitive burdens that force learners to adapt to AI systems rather than vice versa. Context sovereignty offers a different paradigm based on three principles: persistent understanding, individual agency, and cognitive extension. This framework imagines the accumulating relationship between human and artificial intelligence as a way of preserving learner autonomy while also enabling a more sophisticated cognitive partnership than the linear and isolated chats supported by current web-based frontier models. It addresses concerns about echo chambers, authentic assessment, and human agency by showing how deeper contextual understanding enables more nuanced intellectual challenge, collaborative problem-solving, and preservation of human authorship over learning processes. But before we can explore context sovereignty, we first need to understand the evolution of context engineering as a discipline and how it differs from typical prompt engineering.

Context engineering as a system-level discipline. Recent advances define context engineering as a distinct, system-level discipline (Teki, 2025). Rather than focusing solely on crafting better prompts, context engineering emphasises the informational architecture of the entire ecosystem of knowledge, tools, and signals that an AI needs to interpret and act effectively. This discipline is shaped by the statelessness of LLMs: because models do not retain memory between interactions, all relevant information must be provided with each prompt. Even the most capable models can underperform if they are given incomplete or poorly structured context, regardless of their underlying sophistication. This shift in focus—from model-centric optimisation to context-centric architecture—has become foundational for building robust, scalable, and human-centred AI-supported learning environments.

Context-as-a-compiler. A helpful way to understand this shift is through the "context-as-a-compiler" analogy. In this view, the AI model acts like a code compiler, translating ambiguous human intent into useful outputs. The context—everything the model needs to do its job—acts like the libraries, dependencies, and environment variables that a traditional compiler relies on (Teki, 2025). For educators and learners, this means that the primary skill is not just writing prompts, but curating and structuring the context that guides the AI. In the future, we may even see the rise of "context development environments," where managing data sources and retrieval strategies becomes as important as any other tool or process aimed at supporting learning.

Compression, filtering, and structured retrieval. As AI systems grow more capable, simply increasing the amount of context is not enough. Intelligent context compression and filtering are essential for surfacing relevant information without overwhelming the model. Because LLMs are stateless and have a limited context window, earlier information in a conversation may be lost or must be summarised to fit within the model’s processing limits. This is especially important in educational settings, where continuity and personalisation are critical aspects of personal learning. Techniques such as query-aware selection, context compression frameworks, and structured retrieval (like Graph RAG) are emerging to help AI systems reason more effectively and efficiently. While the technical details are complex, the key idea is that the quality, structure, and relevance of context matter as much as the quantity. In other words, better context is better than more context.
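As a hedged illustration of query-aware selection, the toy sketch below scores stored notes against the current query and keeps only the most relevant ones within a fixed character budget. Production systems would use embeddings or structured retrieval such as Graph RAG; simple word overlap stands in here purely for readability.

```python
# Toy query-aware context selection: rank notes by relevance to the query and
# pack the best ones into a budget that stands in for the context window.
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def relevance(query: str, note: str) -> float:
    q = tokens(query)
    return len(q & tokens(note)) / (len(q) or 1)

def select_context(query: str, notes: list[str], budget_chars: int) -> list[str]:
    ranked = sorted(notes, key=lambda n: relevance(query, n), reverse=True)
    selected, used = [], 0
    for note in ranked:
        if used + len(note) > budget_chars:  # respect the (stand-in) window
            break
        selected.append(note)
        used += len(note)
    return selected

notes = [
    "Piaget: assimilation and accommodation integrate new knowledge.",
    "Shopping list: oat milk, coffee beans.",
    "Transfer depends on conditionalised knowledge (Bransford).",
]
# Only the two learning-related notes fit the budget; the rest are dropped.
print(select_context("How does knowledge transfer work?", notes, budget_chars=130))
```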

From static to agentic systems. The field is also moving from static, linear pipelines to more dynamic, agentic systems. In these systems, AI agents can plan, use tools, reflect, and even collaborate with other agents to solve complex tasks. This evolution is, in part, a response to the limitations of stateless and static LLMs. New architectures and application-layer strategies are emerging to simulate memory and enable more adaptive, personalised learning experiences, addressing the need for continuity and developmental growth in educational contexts. In this wider and more complex ecosystem of interacting agents, each of which brings a different contextual model to the interaction, context engineering skills will become essential aspects of AI-supported learning.
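To give a sense of what "agentic" means in practice, here is a deliberately simplified sketch of a plan, act, reflect loop, with state held by the application rather than the model. Every function name here is an illustrative assumption, not a reference implementation.

```python
# A deliberately simplified agent loop: plan, act (call a tool), then answer.
# Simulated memory lives in the application, not in the stateless model.
def plan(goal: str, memory: list[str]) -> str:
    # Decide the next action: gather information first, then answer.
    return "answer" if memory else f"lookup:{goal}"

def lookup(goal: str) -> str:
    return f"notes about {goal}"  # stand-in for a real retrieval tool

def agent(goal: str, max_steps: int = 3) -> str:
    memory: list[str] = []
    for _ in range(max_steps):
        action = plan(goal, memory)
        if action.startswith("lookup:"):
            memory.append(lookup(goal))  # act: use a tool, remember the result
        else:
            return f"Answer to '{goal}' using {memory}"  # reflect and respond
    return "No answer within the step budget."

print(agent("far transfer"))
```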

The nature of context and its significance for learning

Learning involves the integration of new understanding with existing cognitive structures through assimilation and accommodation (Piaget, 1977). The quality of this integration depends on the contextual connections that learners make between new information and existing understanding. Context provides the relational substrate within which connections form, determining their durability and transferability, creating "conditionalised knowledge" that is connected to appropriate application conditions (Bransford et al., 2000). Context thus serves as the foundation for transfer—applying knowledge across different situations (Barnett & Ceci, 2002). Without rich contextual understanding, knowledge remains inert, bound to acquisition circumstances, whereas deep contextual awareness enables the recognition of patterns that transcend particular domains, supporting "far transfer"—flexible application across significantly different contexts (Salomon & Perkins, 1989). This capacity emerges not from decontextualised principles but from understanding how contextual elements interact to create the conditions where knowledge can be applied.

However, these persistent, evolving knowledge structures supporting human learning are absent from language models. They do not possess true memory or the ability to build upon previous experiences, which means that learners must take responsibility for curating and maintaining their own context by treating it as a living product and not a static asset. Personal knowledge management (PKM) practices provide important insights into how learners can deliberately cultivate and maintain rich contextual landscapes to support their learning. These frameworks explore systematic approaches through which learners capture, organise, connect, and retrieve information across their intellectual lives, transforming scattered content into coherent understanding (Chatti, 2012). Unlike passive information storage, effective PKM practices create systems where ideas, experiences, and insights form dynamic networks of personal meaning. Through practices like reflective note-taking, concept mapping, and deliberate connection-making between domains, learners build external scaffolding that supports and extends their thinking processes. This contextual infrastructure becomes particularly valuable for learning because it preserves not just what learners know, but how they know it—the reasoning patterns, emotional associations, and conceptual frameworks that give information personal significance. The effectiveness of AI outputs is therefore more dependent on how context is structured and managed than on the intelligence of the underlying model itself (King, 2025). By making thinking visible and organising information with intention, PKM creates the foundation for more sophisticated learning experiences where new information can be integrated meaningfully with existing contextual understanding.
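As a concrete (if minimal) sketch of this idea, the snippet below models a PKM note graph with bidirectional links, using nothing beyond the standard library. Tools such as Obsidian or Roam implement far richer versions of the same structure; the class names here are illustrative only.

```python
# A minimal PKM note graph: notes hold outgoing links and backlinks, which is
# what turns scattered notes into a navigable, connected context.
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str
    links: set[str] = field(default_factory=set)      # outgoing links
    backlinks: set[str] = field(default_factory=set)  # incoming links

class NoteGraph:
    def __init__(self) -> None:
        self.notes: dict[str, Note] = {}

    def add(self, title: str, body: str) -> None:
        self.notes[title] = Note(title, body)

    def link(self, src: str, dst: str) -> None:
        # Linking is bidirectional: each note also knows what points at it.
        self.notes[src].links.add(dst)
        self.notes[dst].backlinks.add(src)

graph = NoteGraph()
graph.add("Transfer", "Applying knowledge across situations.")
graph.add("Conditionalised knowledge", "Knowledge tied to application conditions.")
graph.link("Transfer", "Conditionalised knowledge")
print(graph.notes["Conditionalised knowledge"].backlinks)  # {'Transfer'}
```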

Thus, context evolves from background information supporting AI interactions towards an active, dynamic field within which meaning emerges and understanding develops. Drawing from complexity science, context functions as a space of dynamic exchanges between components rather than a static backdrop (Capra & Luisi, 2014; Davis & Sumara, 2008). Context shapes not only what we understand, but how we understand, creating conditions within which knowledge becomes personally meaningful and intellectually productive. This ecological understanding aligns with complexity theory's emphasis on emergence and non-linear dynamics in learning systems. Context operates as a complex adaptive system, where small changes can lead to significant transformations in understanding (Holland, 2014). Personal context encompasses the prior experiences, cultural frameworks, emotional associations, and conceptual structures that learners bring to any encounter. This context is irreducibly individual while, at the same time, existing in dynamic relationship with broader social, cultural, and institutional contexts. Context engineering—through information architecture, integration of systems, and implementation strategy—enables AI tools to deliver more meaningful results; poor or missing context leads to unreliable or mediocre outputs, regardless of model sophistication.

Current approaches to context in AI-supported learning

Contemporary AI engagement in education relies primarily on prompting strategies and retrieval-augmented generation (RAG) through document uploading (Lee & Palmer, 2025; Li et al., 2025). Both approaches treat context as additive rather than foundational, reducing context to static information while missing its relational and processual qualities. Prompting strategies require learners to provide explicit contextual information within requests—background knowledge, learning objectives, preferred explanations, and circumstances. This places the entire contextualisation burden on learners, requiring fresh articulation with each interaction and leaving open the possibility that important information will be left out. The cognitive overhead diverts mental resources from learning toward orienting the AI system, creating episodic burden—the load of repeatedly establishing context rather than building upon accumulated understanding. This contradicts context's temporal and evolutionary nature, treating as static what is actually dynamic and developing.

Document uploading via RAG may lead to a confusion between information and context. Uploading documents like research papers, presentation slides, or lecture notes merely provides the language model with content that anyone could share, but fails to capture the personal lens through which information becomes meaningful to individual learners. The distinction between "information context" and "personal context" is important—while the former can be shared through upload, the latter requires understanding how information connects to existing knowledge, goals, values, and thinking patterns within individuals. The approach reflects a mechanistic rather than ecological understanding, treating context simply as collections of documents and prompts rather than active environments where meaning emerges as part of a dynamic relationship unique to each individual.

Most fundamentally, these episodic approaches treat context as an addendum to individual interactions rather than the foundation of a genuine cognitive partnership. They create fragmented, rather than coherent, learning experiences, missing opportunities to develop persistent, accumulating understanding that could transform human-AI collaboration in learning. This undermines meaningful human-AI collaboration, establishing asymmetrical power relationships where learners surrender personal information to access enhanced AI capabilities, creating "cognitive colonisation" in which commercial entities accumulate value from user data while users lack visibility into how their context is managed and processed. By failing to recognise context as a complex adaptive system that evolves through engagement, current approaches cannot support nuanced personal learning challenges, collaborative learning, and the kinds of cognitive extension that become possible when context is properly understood and preserved within learner control.

A conceptual framework for context sovereignty in learning

Context sovereignty offers a fundamentally different approach to human-AI collaboration in learning, positioning learners' personal context—their knowledge, values, goals, and thinking patterns—as the central element around which AI interaction is organised. Unlike current approaches requiring learners to adapt to AI systems, context sovereignty enables AI systems to adapt to learners' established cognitive patterns while maintaining individual control over personal information. Context sovereignty can be distinguished from data sovereignty by its emphasis on meaning rather than information. While data sovereignty focuses on who controls raw information (Hummel et al., 2021), context sovereignty emphasises personal significance, cognitive relationships, and the relevance of personal knowledge. It recognises context not as data to be stored and retrieved, but as the active cognitive environment within which learning and understanding develop.

The concept encompasses three fundamental principles that create the conditions for genuine cognitive partnership, transforming AI from a generic tool into a personal cognitive partner that adapts to individual learning patterns and intellectual development.

Persistent understanding enables AI systems to develop accumulating comprehension of learners over time, acknowledging that context continuously evolves through engagement with ideas, experiences, and other cognitive agents. This creates co-evolutionary relationships where human and artificial intelligence adapt to each other's development (Kauffman, 1995). Episodic encounters become continuous cognitive collaboration in which interactions build upon previous understanding, reducing the cognitive burden of repeated contextualisation while maintaining clear boundaries between the learner's context and the AI system.

Individual agency ensures learners maintain complete control over personal context, preserving what Freire (2000) described as the learner's role as "critical co-investigator" rather than passive recipient. This principle responds to power asymmetries in current AI systems, maintaining the humanising aspects of education within technological mediation and ensuring learners retain authorship over cognitive development and meaning-making processes.

Cognitive extension positions AI as an amplifier of human reasoning rather than a substitute, drawing on concepts like distributed cognition and the extended mind thesis (Clark & Chalmers, 1998). This cognitive extension creates an ecology where human and AI form integrated systems that enhance, rather than diminish, human capabilities. Meaningful extension requires that the AI has a deep understanding of human cognitive partners, including their reasoning patterns, knowledge structures, and meaning-making approaches.

These principles would enable an authentic partnership between human learners and AI, reflecting the ecological and temporal understanding of context that privileges and preserves human agency in personal meaning-making processes. The principles manifest through interconnected operational dimensions and have the following implications for how learners think about, and enact, their interactions with generative AI systems.

Personal context curation (also known as context engineering) requires learners to develop metacognitive awareness of their own learning patterns, knowledge structures, and intellectual goals. This principle transforms Schön's (1984) "reflection-in-action" into systematic approaches for making (human) thinking visible to AI collaboration. Context curation, using systems like PKM, creates structured representations (through, for example, notes, reflections, knowledge maps, and bi-directional internal linking) that serve as interfaces between human cognition and AI systems, transforming personal information management from passive archives of content into active cognitive infrastructure.
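One hedged way to picture such an interface: a locally maintained learner profile is rendered into a structured preamble that travels with requests to the model. The field names below are illustrative assumptions rather than any standard schema.

```python
# Context curation as an interface: a learner-owned profile, serialised into a
# structured preamble the model can read. The learner controls every field.
import json
from dataclasses import dataclass, asdict

@dataclass
class LearnerContext:
    goals: list[str]
    prior_knowledge: list[str]
    preferred_explanations: str
    open_questions: list[str]

def render_context(ctx: LearnerContext) -> str:
    # The learner decides what is shared; the model only ever sees this view.
    return "Learner context (curated by the learner):\n" + json.dumps(asdict(ctx), indent=2)

ctx = LearnerContext(
    goals=["understand far transfer"],
    prior_knowledge=["Piaget's assimilation/accommodation"],
    preferred_explanations="concrete examples before theory",
    open_questions=["when does analogical reasoning fail?"],
)
print(render_context(ctx))
```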

Continual learning would enable AI systems to evolve their understanding of learners' contexts as learners themselves develop and change, without the frequent model retraining necessary for updating context (Wang et al., 2024). This goes beyond simple memory of past interactions, and includes adaptive understanding of how learners' thinking patterns, knowledge structures, goals, and intellectual frameworks change over time. As learners update their contexts through new experiences, changing priorities, or evolving expertise, the AI system would correspondingly update its model of the learner. This creates genuinely developmental relationships where the AI 'learns on the job', developing a more nuanced understanding of the learner's growth, recognising recurring themes while adapting to new directions in the learner's intellectual journey.
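A toy sketch of what this might look like at the application layer, without retraining the underlying model: a learner model that reinforces topics seen in each session and decays the rest, so the system's picture of the learner drifts as the learner does. The update rules are illustrative assumptions only.

```python
# Application-layer continual learning: update a learner model from session
# summaries instead of retraining the LLM. Rules here are purely illustrative.
from collections import Counter

class LearnerModel:
    def __init__(self) -> None:
        self.themes: Counter = Counter()  # recurring interests and their weight
        self.boost = 10                   # weight given to freshly seen topics

    def update(self, session_topics: list[str]) -> None:
        # Decay every existing theme slightly, then reinforce what was seen,
        # so the model tracks how the learner's focus shifts over time.
        for topic in self.themes:
            self.themes[topic] = max(0, self.themes[topic] - 1)
        self.themes.update({t: self.boost for t in session_topics})

    def current_focus(self, n: int = 3) -> list[str]:
        return [t for t, _ in self.themes.most_common(n)]

model = LearnerModel()
model.update(["transfer", "complexity theory"])
model.update(["assessment", "transfer"])
print(model.current_focus())  # ['transfer', 'assessment', 'complexity theory']
```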

Contextual interoperability addresses the challenge of separating intelligence and reasoning from personal context. Context should remain private and locally controlled while intelligence is accessed as a service (Lins et al., 2021). This ensures that learners benefit from powerful AI capabilities without compromising autonomy over personal information or meaning-making processes. This functions as a kind of federated intelligence where the intelligence and reasoning capabilities of different language models (local or remote) can be applied to local context, depending on the nature of the task and the preference of the learner (Long, 2024).
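The sketch below illustrates this separation under stated assumptions: personal context lives in a local store, while interchangeable intelligence services (local or remote) are invoked against it, with the learner deciding per task what is shared. The class and protocol names are hypothetical.

```python
# Contextual interoperability: context stays local; intelligence is a
# swappable service. Names and stubbed responses are illustrative only.
from typing import Protocol

class IntelligenceService(Protocol):
    def complete(self, prompt: str) -> str: ...

class LocalModel:
    def complete(self, prompt: str) -> str:
        return f"[local model answer to: {prompt[:40]}...]"

class RemoteModel:
    def complete(self, prompt: str) -> str:
        return f"[remote frontier model answer to: {prompt[:40]}...]"

class SovereignContext:
    """Personal context never leaves this object unless the learner opts in."""
    def __init__(self, notes: list[str]) -> None:
        self._notes = notes

    def ask(self, question: str, service: IntelligenceService,
            share_context: bool) -> str:
        # The learner chooses, per task, which service sees which context.
        prompt = question
        if share_context:
            prompt = "\n".join(self._notes) + "\n\n" + question
        return service.complete(prompt)

ctx = SovereignContext(["My goal: understand far transfer."])
print(ctx.ask("Suggest a reading plan.", LocalModel(), share_context=True))
print(ctx.ask("Define RAG.", RemoteModel(), share_context=False))
```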

Context sovereignty therefore represents interconnected conceptual shifts constituting a new paradigm for human-AI collaboration:

| Current approach | Context sovereignty | Significance |
| --- | --- | --- |
| Episodic interactions | Persistent relationships | Eliminates cognitive overhead of repeated contextualisation |
| Explicit context articulation | Implicit contextual understanding | Enables natural, efficient communication |
| Generic AI capabilities | Personalised cognitive extension | Transforms AI from tool to thinking partner |
| Information sharing | Meaning preservation | Maintains personal significance of knowledge |
| Data extraction models | Individual data sovereignty | Preserves privacy while accessing intelligence and reasoning |
| Human adaptation to AI | AI adaptation to human patterns | Centres human agency in the relationship |
| Prompting skills | Context curation capabilities | Develops metacognitive awareness of personal knowledge |

These shifts collectively represent movement from AI-centric to human-centric collaboration, where AI systems become media through which learners extend their thinking capabilities rather than separate entities requiring constant orientation and instruction.

Implications of context sovereignty for learning

The idea of context sovereignty leads to important implications for learning environment design, creating new possibilities from a more nuanced understanding of dynamic context development.

Transforming personal learning from content delivery to cognitive partnership

Context sovereignty fundamentally reframes personal learning by shifting from content adaptation to cognitive partnership. Traditional educational AI systems operate through algorithmic content matching, adjusting difficulty or sequences based on performance indicators, treating personal learning as an optimisation problem (Essa et al., 2023). Context sovereignty builds towards a cognitive partnership where AI systems adapt reasoning processes to align with individual cognitive patterns and meaning-making frameworks (supplied by the learner through whatever approach to context engineering they have adopted). This recognises that personal learning systems would be characterised by emergence and non-linear dynamics rather than predictable relationships and standardised platforms. In cognitive partnerships, AI develops a more nuanced understanding of how individual learners think, as they make their reasoning patterns, connection-making approaches, values and motivations explicit, rather than simply tracking learners' knowledge. This enables personal learning that respects learners' irreducible contextual individuality while also providing access to sophisticated intelligence and reasoning support that builds upon human cognitive capabilities.

Context as the foundation for productive intellectual challenge

Context sovereignty addresses concerns about intellectual echo chambers by enabling more sophisticated challenges as part of the learning process (Bjork & Bjork, 2009). Productive intellectual challenge emerges from meaningful engagement, where the AI system understands learners' existing positions deeply enough to identify genuine limitations, contradictions, or unexplored implications, rather than from forced opposition (for example, prompting the AI to "tell me why I'm wrong") or generic alternatives (like structured prompt libraries provided by institutions). When AI systems have a rich understanding of learners' unique intellectual frameworks—their values, assumptions, reasoning patterns, and knowledge structures—they can provide more precisely calibrated cognitive challenges that target productive points of tension. Rather than presenting opposing viewpoints that might be easily dismissed as irrelevant by the learner, AI can identify specific aspects of thinking that merit exploration or refinement, surfacing contradictions between beliefs, identifying assumptions lacking evidential support, or suggesting alternative interpretations of evidence that the learner finds meaningful. Current educational landscapes already suffer from intellectual homogeneity, with encounters with alternative perspectives occurring largely by accident. Context sovereignty enables the deliberate introduction of intellectual diversity that connects meaningfully with learners' existing frameworks rather than underwhelming them with generic alternatives.

Distributed context sovereignty and collaborative learning

Context sovereignty enables new forms of collaborative learning through distributed contextual awareness: learning environments where multiple contextually-aware AI systems interact while maintaining clear context-sharing boundaries (Ferber, 1999). Consider, for example, professional education settings where learners bring personal AI agents into institutions that deploy agents of their own. As these agents interact, they integrate personal and institutional context (values, processes, and knowledge frameworks) in ways that may introduce new tensions, and even contradictions, into the system. This creates a structural coupling in which different systems maintain their own identities while also being influenced by the agents they interact with. This distributed approach to sharing context suggests collaborative learning that transcends traditional group work limitations. Rather than adapting to lowest common denominators, distributed context sovereignty enables collaboration where individual differences become collective learning resources, with AI systems identifying productive complementarities between participants' knowledge and perspectives.

Reconceptualising authentic intellectual work and assessment

Context sovereignty suggests that we reframe educational goals and remove the artificial boundaries between human and AI. Instead of asking "What does the student know without AI assistance?", we might instead ask "What important problem did the student and AI solve together?" or "How effectively did the student mobilise contextual knowledge through AI partnership?" This acknowledges that professional and civic contexts will include AI support, making it essential for students to demonstrate effective AI collaboration, rather than seeing this collaboration as a threat to learning authenticity and validity. This suggests an approach to assessment that focuses on collaborative problem-solving rather than isolated information recall, or even individual knowledge production. Instead, we might assess learners' capacity to curate relevant context, guide AI reasoning toward productive insights, and critically evaluate AI-generated ideas within personal knowledge frameworks. Assessments like this would recognise authentic intellectual work as the sophisticated orchestration of novel combinations of human and AI capabilities, in the service of personal and socially meaningful problem-solving.

Preserving human agency while enabling cognitive amplification

Context sovereignty's most significant implication lies in preserving and enhancing human agency with sophisticated AI systems. By placing personal context at the centre, the framework ensures that AI systems adapt to human thinking patterns rather than requiring human adaptation to AI constraints. This fundamentally shifts power dynamics from models that extract value from user data toward relationships where learners maintain sovereignty over their information and meaning-making processes. This creates "cognitive amplification", where AI increases the reach, sophistication, and effectiveness of human reasoning while preserving human autonomy over the learning process. Rather than replacing human judgement with algorithmic decision-making, context sovereignty enables learners to leverage AI capabilities as extensions of their own thinking, maintaining critical oversight and creative control over intellectual work. The AI becomes a medium through which learners extend their cognitive reach while remaining primary agents of their own learning and development. This addresses concerns about AI's potentially dehumanising effects by demonstrating how technological sophistication can enhance human capabilities, suggesting possibilities for educational institutions that cultivate sophisticated human-AI collaboration while maintaining their fundamental mission of human empowerment.

Conclusion

Context sovereignty offers a vision of AI in education that preserves human agency while enabling the development of genuine cognitive partnerships. The shift from prompt engineering to context engineering represents a reimagining of human-AI relationships, where context-setting is understood as a complex adaptive system rather than the sharing of static information. As context engineering matures, best practices will continue to evolve, and human-centred approaches must remain at the core of system design. The three principles introduced here—persistent understanding, individual agency, and cognitive extension—enable more precisely calibrated intellectual challenge, collaborative learning that preserves individual autonomy, and assessment practices that recognise collaborative problem-solving as authentic intellectual achievement. The framework also shows how powerful artificial intelligence and reasoning can be accessed while maintaining individual control over personal context. Context sovereignty points us toward the development of learning environments as spaces for cultivating sophisticated human-AI collaboration, assessment practices that examine learners' capacity to mobilise contextual knowledge through AI partnerships, and educational institutions that prepare learners for AI-augmented citizenship. This will require cultural and conceptual transformation in understanding the purpose of an education system where AI is recognised as a cognitive partner that understands and extends human thinking while respecting personal boundaries and preserving human agency. Context sovereignty thus provides the foundation for reimagining human-AI collaborative partnerships that amplify human learning through powerful artificial intelligence, while preserving and enhancing human capabilities.

References

  • Anthropic. (2024). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol
  • Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn?: A taxonomy for far transfer. Psychological Bulletin, 128(4), 612–637. https://doi.org/10.1037/0033-2909.128.4.612
  • Bjork, E. L., & Bjork, R. A. (2009). Making Things Hard on Yourself, But in a Good Way: Creating Desirable Difficulties to Enhance Learning. In Psychology and the Real World (pp. 55–64). Worth Pub.
  • Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How People Learn: Brain, Mind, Experience, and School: Expanded Edition. Washington, DC: The National Academies Press. https://doi.org/10.17226/9853
  • Capra, F., & Luisi, P. L. (2014). The Systems View of Life: A Unifying Vision. Cambridge University Press. https://doi.org/10.1017/CBO9780511895555
  • Chalef, D. (2025). What is Context Engineering, Anyway. https://blog.getzep.com/what-is-context-engineering/
  • Chatti, M. A. (2012). Knowledge management: A personal knowledge network perspective. Journal of Knowledge Management, 16(5), 829–844. https://doi.org/10.1108/13673271211262835
  • Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.
  • Davis, B., & Sumara, D. (2008). Complexity as a theory of education. Transnational Curriculum Inquiry, 5(2), 33–44.
  • Essa, S. G., Celik, T., & Human-Hendricks, N. E. (2023). Personalized Adaptive Learning Technologies Based on Machine Learning Techniques to Identify Learning Styles: A Systematic Literature Review. IEEE Access, 11, 48392–48409. https://doi.org/10.1109/access.2023.3276439
  • Ferber, J. (1999). Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence (1st. ed.). Addison-Wesley Longman Publishing Co., Inc., USA.
  • Freire, P. (2000). Pedagogy of the oppressed (30th anniversary ed). Continuum.
  • Holland, J. H. (2014). Complexity: A Very Short Introduction (1st Edition). Oxford University Press.
  • Hummel, P., Braun, M., Tretter, M., & Dabrock, P. (2021). Data sovereignty: A review. Big Data & Society, 8(1), 2053951720982012. https://doi.org/10.1177/2053951720982012
  • Kauffman, S. A. (1995). At Home in the Universe: The Search for Laws of Self-organization and Complexity. Oxford University Press.
  • King, S. (2025). Context Engineering: Why Feeding AI the Right Context Matters. Inspired Nonsense blog. https://inspirednonsense.com/context-engineering-why-feeding-ai-the-right-context-matters-353e8f87d6d3
  • Lee, D., & Palmer, E. (2025). Prompt engineering in higher education: A systematic review to help inform curricula. International Journal of Educational Technology in Higher Education, 22(1), 7. https://doi.org/10.1186/s41239-025-00503-7
  • Li, Z., Wang, Z., Wang, W., Hung, K., Xie, H., & Wang, F. L. (2025). Retrieval-augmented generation for educational application: A systematic survey. Computers and Education: Artificial Intelligence, 8, 100417. https://doi.org/10.1016/j.caeai.2025.100417
  • Lins, S., Pandl, K. D., Teigeler, H., Thiebes, S., Bayer, C., & Sunyaev, A. (2021). Artificial Intelligence as a Service. Business & Information Systems Engineering, 63(4), 441–456. https://doi.org/10.1007/s12599-021-00708-w
  • Long, G. (2024). The rise of federated intelligence: From federated foundation models toward collective intelligence. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 8547–8552. https://doi.org/10.24963/ijcai.2024/980
  • Norberg, P. A., Horne, D. R., & Horne, D. A. (2007). The Privacy Paradox: Personal Information Disclosure Intentions versus Behaviors. Journal of Consumer Affairs, 41(1), 100–126. https://doi.org/10.1111/j.1745-6606.2006.00070.x
  • Piaget, J. (1977). The development of thought: Equilibration of cognitive structures. (Trans A. Rosin). Viking.
  • Salomon, G., & Perkins, D. N. (1989). Rocky Roads to Transfer: Rethinking Mechanism of a Neglected Phenomenon. Educational Psychologist. https://doi.org/10.1207/s15326985ep2402_1
  • Schön, D. A. (1984). Reflective practitioner: How professionals think in action. Taylor & Francis Group.
  • Teki, S. (2025). Context Engineering: The Key to Effective AI Agents. Sundeep Teki blog. https://www.sundeepteki.org/blog/context-engineering-a-framework-for-robust-generative-ai-systems
  • Wang, L., Zhang, X., Su, H., & Zhu, J. (2024). A Comprehensive Survey of Continual Learning: Theory, Method and Application (arXiv:2302.00487). arXiv. https://doi.org/10.48550/arXiv.2302.00487
  • Yan, W. (2025). Don’t Build Multi-Agents. Cognition. https://cognition.ai/blog/dont-build-multi-agents