Higher education's focus on prompt engineering — teaching technical skills for crafting AI queries — rests on a misunderstanding of how learning works. This essay argues that effective prompts emerge from personal meaning-making frameworks, not technical mechanics, and that the institutional impulse to control AI interaction reveals a 'learning alignment problem': educational systems optimising for measurable proxies such as grades rather than for authentic curiosity. Drawing a parallel with AI safety's value alignment problem, it shows how AI exposes that many assignments could already be completed without genuine intellectual work. Universities must shift from a paradigm of control to one of cultivation, recognising that learning is personal and resistant to external specification, so that AI becomes a partner in human flourishing rather than a tool for strategic performance.