The learning alignment problem: AI and the loss of control in higher education
Metadata
- Author: Michael Rowe (ORCID)
- Affiliation: University of Lincoln (mrowe@lincoln.ac.uk)
- Created: June 05, 2025
- Version: 0.5 (updated July 02, 2025)
- Modified: See GitHub record
- Keywords: AI principles, control, education, higher education, learning, learning alignment, professional education, prompt engineering, value alignment
- License: Creative Commons Attribution 4.0 International
- Preprint DOI: Why no DOI?
- Peer reviewed: No
Abstract
Higher education institutions have responded to AI technologies by emphasising "prompt engineering"—teaching students technical skills for crafting effective AI queries. This essay argues that a focus on interaction mechanics represents a fundamental misunderstanding of both AI engagement and learning itself. Rather than being the outcome of technical skill, prompts emerge from students' personal meaning-making frameworks—their individual contexts for determining what questions matter and what constitutes intellectual exploration, or 'academic integrity'. The institutional focus on prompt control reveals what I call the "learning alignment problem", where educational systems optimise for measurable proxies (grades, compliance, technical proficiency) rather than authentic learning outcomes (for example, curiosity, understanding, intellectual development). AI acts as a mirror, highlighting that students were already circumventing meaningful engagement in favour of strategic optimisation for institutional rewards. When students use AI to complete assignments without learning, they reveal that assignments were already completable without genuine intellectual work. This analysis draws parallels to the value alignment problem in AI safety research—the difficulty of creating systems that pursue intended goals instead of optimising for specified metrics. Educational institutions face a similar challenge: we cannot directly measure what we say we value (learning), so we create proxies that students rationally optimise for, often moving them further from authentic learning. The essay suggests that universities shift from control paradigms to cultivation paradigms: from teaching prompt engineering to fostering learning purpose, from managing student behaviour to creating conditions where thoughtful engagement emerges naturally. This means recognising that learning is inherently personal, contextual, and resistant to external specification. Educational environments must cultivate intellectual joy and curiosity rather than technical compliance, supporting students' meaning-making processes rather than standardising and seeking to control their interactions with AI.
Key takeaways
- Prompt engineering misses the point. Teaching students technical AI skills treats symptoms rather than causes. Instead, focus on helping students clarify their learning purposes rather than optimising their query construction.
- AI exposes existing misalignment. Student AI use highlights that educational systems were already optimising for performance over learning. The problem isn't the technology—it's the misaligned incentive structures.
- Learning cannot be engineered. Authentic learning is emergent, contextual, and personal—it resists external specification and control. Attempts to manage learning through detailed guidelines drive genuine intellectual engagement away.
- Cultivate conditions, don't control behaviours. Instead of restricting AI use, create educational environments where thoughtful AI partnership serves students' authentic learning goals. When learning feels genuinely valuable, students will naturally use AI to deepen their engagement with ideas.
- Joy and curiosity drive sustainable learning. Students who experience learning as joyful discovery engage with AI as thinking partners. Cultivating intellectual curiosity becomes crucial for preparing adaptive, lifelong learners.
The prompting preoccupation
Higher education has a new pedagogical priority: teaching students to craft better prompts. Across institutions, we see a proliferation of workshops on effective AI communication, guidelines for appropriate prompt construction, and assessment rubrics that evaluate prompt and language model transparency. This focus, while understandable given the rapid pace of technological change and genuine concerns about 'academic integrity', reflects a deeper impulse to maintain control through technical specification. The enthusiasm for prompt engineering reveals something important about our priorities in the higher education sector.
Despite the rhetoric celebrating critical thinking and intellectual development, the institutional response to the transformative technology of generative AI centres on managing interaction mechanics—as if learning quality depends primarily on prompt structure rather than the purpose and curiosity driving that interaction. This is a category error: treating AI engagement as a technical skill to be mastered rather than recognising it as an expression of deeper meaning-making processes.
Every prompt emerges from a student's unique framework for determining what questions matter, what constitutes useful exploration, and what direction inquiry should take. In other words, prompts are informed by professional values. When institutions focus on standardising the mechanics of prompting, they are trying to control something fundamentally personal and contextual. The real challenge isn't teaching better prompting—it's understanding why students engage with AI in the first place and whether our educational environments cultivate the curiosity and joy that drive authentic learning.
Context is personal meaning-making
Current approaches to AI education treat "context" as an information management problem—background details to include, parameters to specify, constraints to articulate. This technical framing misunderstands what context really represents in human learning processes. Context isn't data to be packaged and transmitted; it's an individual's meaning-making framework that determines what is worth exploring, and why. Students bring unique cognitive and emotional perspectives to their interactions with AI, including their prior experiences with similar concepts, their vision of professional identity, and an understanding of what constitutes value in their field. A medical student who has navigated family illness brings a fundamentally different context to healthcare discussions than one who discovered medicine through academic achievement. No prompt template can capture these differences because they exist at the level of personal meaning, not technical information.
This changes how we should approach "responsible AI use". You cannot improve educational outcomes by focusing solely on the mechanics of interaction while ignoring the purposes and values that shape our relationships with AI. The quality of language model interaction depends not only on prompt optimisation but also on clarity of purpose and the presence of curiosity about the subject matter. When students use AI efficiently but joylessly—to complete assignments rather than explore questions—they reveal something important about their relationship with learning itself, which is in turn informed by the institution's relationship with learning. The technical proficiency of their prompts becomes irrelevant if the underlying drive for discovery has been displaced by strategic optimisation for institutional rewards.
AI as a mirror
The emergence of powerful AI tools hasn't created new problems in education; it has simply made the existing challenges impossible to ignore. Students were already optimising for grades over understanding, seeking efficient paths through what many perceive as arbitrary requirements, and learning to perform rather than engage deeply with ideas (see performative compliance in AI). AI has merely made these adaptations more visible and efficient. When students use AI to complete assignments without learning, they expose that our assignments were already completable without meaningful engagement. When they focus on prompt optimisation rather than conceptual exploration, they reflect educational systems that reward surface performance over deep understanding. When they treat AI as a shortcut rather than a thinking partner, they reveal what our institutions have taught them about the purpose of education.
This mirror effect should prompt institutional reflection rather than defensive responses. If students can use AI to pass assessments without developing the capabilities those assessments supposedly measure, the problem lies not in the technology but in assessment design. If they can progress through programmes by optimising outputs rather than understanding, our educational structures may be systematically misaligned with their stated purposes. For health professions education, this revelation is particularly uncomfortable. If students can generate patient care plans without developing care, or produce communication scenarios without developing interpersonal skills, we may all be engaging in performative compliance rather than professional competence: a pantomime of learning.
The impossibility of engineering learning
The institutional fixation on prompt control reflects a deeper misconception about learning itself. Universities approach education as if it were a manufacturing process: design appropriate inputs, manage variables, control behaviours, and reliably produce desired outputs. This mechanistic thinking drives the prompt engineering obsession—if institutions can just specify how students interact with AI, they can ensure appropriate learning outcomes.
But learning resists such specification because it is fundamentally emergent, contextual, and personal. Meaningful learning arises through unexpected connections, serendipitous insights, and combinations of ideas that cannot be predetermined. What constitutes transformative understanding for one student may be irrelevant for another, even within identical curricular contexts. The aspects of learning that matter most—developing professional judgement, making meaningful connections between theory and practice, finding personal significance in disciplinary knowledge—are precisely those that resist quantification and external control. They emerge through engagement with authentic challenges rather than completion of predetermined tasks.
When institutions create detailed AI guidelines, students don't simply comply—they adapt their meaning-making processes to work within or around constraints. Each new restriction generates creative workarounds, not because students are inherently deceptive, but because human curiosity and the drive to make sense of the world cannot be meaningfully constrained through policy. This creates a peculiar dynamic where institutional energy focuses on managing visible symptoms while actual learning processes become increasingly invisible and unsupported. Students develop bilingual fluency: speaking the language of compliance while thinking in terms of their authentic intellectual interests.
The learning alignment problem
This pattern represents education's version of the alignment problem in artificial intelligence development. In AI safety research, alignment refers to the challenge of creating systems that pursue what we actually want rather than what we can specify or measure. AI systems consistently optimise for given metrics, even when doing so completely subverts the purposes those metrics were meant to represent. Educational institutions face an identical challenge. We cannot directly specify or measure the learning outcomes we value—intellectual curiosity, adaptive thinking, professional wisdom, creative problem-solving. Instead, we create proxies: grades, completion rates, assessment scores, prompt rubrics. Students, as rational agents, optimise for these proxies. But optimising for measurable indicators often moves them further from authentic learning rather than closer to it.
The specification problem
We cannot fully specify in advance the learning outcomes we believe are truly important, because learning, like human values, is:
- Irreducibly contextual: What constitutes meaningful learning for one person at one point in their life may be completely different for another person, or even the same person at a different time. A pre-med student and a philosophy major might both take biology, but what "learning biology" means to each of them - how it connects to their goals, interests, and sense of purpose - is fundamentally different.
- Emergent and unpredictable: Real learning often happens in ways we don't expect, through connections we couldn't have planned for. The most significant learning experiences are often serendipitous, arising from the collision of ideas, experiences, and personal readiness in ways that can't be engineered in advance.
- Evolutionary: What we think we want to learn changes as we learn. The person who starts studying history to understand the past may discover they're actually interested in understanding power dynamics, which leads them toward political theory, which opens questions about human nature. The learning process itself transforms the learner's understanding of what they want to learn.
- Resistant to measurement: The aspects of learning that matter most - developing judgement, making meaningful connections, finding personal significance in ideas - are precisely the things that resist quantification.
This creates what might be called a recursive misalignment. When students optimise for grades rather than understanding, institutions typically respond with more sophisticated grading systems, additional assessment requirements, and enhanced monitoring procedures. Students develop correspondingly sophisticated optimisation strategies. Each cycle moves the system further from its educational purpose while maintaining the appearance of academic rigour.
The prompt engineering focus exemplifies this dynamic. Institutions want thoughtful AI engagement but can only measure technical prompt structure. They create detailed rubrics for prompt construction. Students optimise their prompts to score well. But this optimisation may have no relationship—or even negative relationship—with actual learning value. The fundamental issue is that educational systems consistently optimise for what can be measured rather than what actually matters, creating environments where strategic performance becomes more valuable than intellectual engagement.
Joy, curiosity, and courageous learning
Lost in discussions of AI control and prompt optimisation is perhaps the most crucial element of meaningful education: the joy of discovery that transforms learning from obligation into exploration. Students who experience learning as joyful discovery naturally engage with AI as a thinking partner rather than a completion tool. Those who associate education primarily with evaluation and compliance use AI to navigate those systems efficiently. The capacity for intellectual joy—the genuine excitement that comes from understanding something new, making unexpected connections, or seeing familiar concepts from fresh perspectives—represents the foundation of lifelong learning. When students lose touch with this joy, education becomes instrumental rather than transformative. They learn to optimise for outcomes rather than pursue understanding for its intrinsic value.
Curiosity serves as the engine of this joyful engagement. Students driven by genuine curiosity about their field naturally use AI to explore deeper questions, test emerging hypotheses, and examine complex scenarios. They engage with AI prompts not as technical exercises but as opportunities to pursue compelling intellectual challenges. Their interactions reflect personal investment in understanding rather than strategic navigation of requirements. This curiosity becomes especially crucial for preparing students to thrive in uncertain futures. The specific knowledge and skills valued today may become obsolete, but the capacity for joyful, curious engagement with new ideas remains permanently valuable. Students who associate learning with discovery rather than compliance develop resilience and adaptability that serve them throughout their careers.
Educational environments that cultivate joy and curiosity create natural alignment between institutional goals and student motivations. When learning feels genuinely valuable, students protect and pursue it rather than circumventing it. When intellectual exploration connects to personal meaning and professional aspirations, AI becomes a partner in that exploration rather than a shortcut around it.
From control to cultivation
Recognising that learning cannot be controlled or engineered doesn't lead to institutional helplessness—it points toward a more effective approach. Instead of attempting to manage student behaviour directly, educational institutions can focus on cultivating conditions where authentic learning emerges naturally because it serves students' own purposes and aspirations. This shift requires examining what educational environments actually incentivise. If students consistently use AI to bypass learning processes, perhaps those processes don't feel valuable enough to engage with authentically. If they optimise for task completion over understanding, perhaps task completion is what the system genuinely rewards, regardless of stated intentions. Creating environments that support joyful, curious learning means designing experiences that connect to students' developing professional identities, that honour their meaning-making frameworks, and that reward intellectual courage rather than strategic compliance. It means supporting students in articulating their own learning aspirations and then providing resources—including AI partnerships—that serve those aspirations.
The role of educators evolves from content deliverers or behaviour managers to what might be called "becoming coaches"—professionals skilled at helping students clarify their aspirational goals and design learning experiences that support movement toward those visions. This requires understanding each student's unique context, curiosity patterns, and professional trajectory. Assessment transforms from measuring predetermined outcomes to examining alignment between students' actions and their stated learning values. Instead of evaluating prompt construction, educators might explore whether students' AI partnerships serve their authentic intellectual development. Instead of detecting inappropriate use, they might investigate whether students are developing the adaptive thinking capabilities they'll need in their future practice.
The choice we face
The institutional response to AI represents a fundamental choice about educational purpose and method. Universities can continue investing in increasingly sophisticated control mechanisms—more detailed guidelines, more nuanced detection systems, more elaborate prompt rubrics. This path offers the comfort of appearing to manage the situation, even as student adaptations consistently outpace institutional controls. Alternatively, institutions can embrace the more challenging but ultimately more rewarding work of creating educational environments where thoughtful AI use emerges naturally because it serves students' authentic purposes. This means shifting focus from prompt engineering to purpose cultivation, from behaviour management to supporting students' becoming, from technical specifications to meaning-making frameworks.
This transformation doesn't mean abandoning all structure or guidance. It means recognising that the most powerful influence on how students use AI isn't the technical instructions provided but the educational culture created. When learning feels genuinely valuable, students protect it. When intellectual growth connects to professional aspirations, students pursue it. When educational experiences honour their capacity for joy and curiosity, students engage authentically. For health professions education specifically, this approach aligns with what clinical practice actually demands. Healthcare requires professionals who can think adaptively when protocols don't apply, who can continue learning when knowledge evolves, who can make complex judgements in uncertain situations. These capabilities don't develop through prompt engineering workshops—they emerge through educational experiences that honour complexity and support authentic professional development.
The AI moment in education reveals what was always true: we cannot control learning, we can only create conditions where it flourishes. The institutions that thrive will be those that stop trying to engineer student prompts and start cultivating student purposes. They'll recognise that a student who uses AI thoughtfully in service of joyful discovery is infinitely more valuable than one who creates technically perfect prompts in service of compliance. When educational environments align with human meaning-making rather than trying to override it, when they support becoming rather than managing behaviour, when they cultivate curiosity and joy rather than engineering outputs—then AI becomes what it should be: a powerful partner in the deeply human process of learning and growth. The choice between control and cultivation, between engineering and emergence, between compliance and curiosity, ultimately determines whether educational institutions remain relevant to the human beings they claim to serve.