The technological nature of language: Implications for education
Metadata
- Author: Michael Rowe (ORCID)
- Affiliation: University of Lincoln
- Created: March 28, 2025
- Version: 0.6 (last updated: June 29, 2025)
- Modified: See GitHub record
- Keywords: artificial intelligence, cognitive extension, educational technology, general purpose technology, language models, technological literacy
- License: Creative Commons Attribution 4.0 International
- Preprint DOI: Why no DOI?
- Peer reviewed: No
Abstract
This essay reconceptualises language as humanity's first general purpose technology—a sophisticated system we developed to extend cognitive capabilities across domains and enable complementary innovations. Through this lens, large language models (LLMs) emerge not as novel digital tools but as the latest evolution in a continuum spanning spoken language, writing, printing, and digital text. LLMs extend language's fundamental capabilities through unprecedented scale, cross-domain synthesis, cognitive adaptability, and emerging multimodality. This technological evolution presents profound challenges for education, where students face information overload while remaining underprepared for complex practice environments that demand adaptive expertise.
The essay examines how viewing LLMs as language technology rather than mere computing applications transforms educational priorities from knowledge acquisition toward cognitive partnership, from technical skill development toward contextual reasoning, and from individual assessment toward collaborative problem-solving. Drawing on complexity theory and distributed cognition research, I argue that language technologies have always mediated human thinking, and that LLMs represent a qualitative shift in this mediation that requires fundamental reconsideration of educational goals, methods, and assessments. The analysis addresses concerns about cognitive dependency, authentic learning, and power dynamics while proposing practical frameworks for educational transformation that enhance rather than diminish human capabilities.
Key takeaways
Language is technology, not just biology: Human language functions as a sophisticated technological system—humanity's first general purpose technology—that extends cognitive capabilities far beyond biological limitations through structured rule systems and symbolic representations.
LLMs represent technological evolution, not revolution: Large language models constitute the latest stage in language technology's historical progression from speech through writing, printing, and digital text, each extending human cognitive reach in new dimensions.
Educational priorities must shift fundamentally: Moving from knowledge transmission to cognitive partnership requires emphasising adaptive expertise, contextual reasoning, and collaborative intelligence over information acquisition and recall.
Assessment needs complete reconceptualisation: Rather than testing individual knowledge without AI assistance, education should evaluate students' capacity to leverage AI partnerships for meaningful problem-solving and critical analysis.
Power dynamics and bias require explicit attention: Language technologies, including LLMs, embed particular worldviews and power structures that educational institutions must critically examine rather than uncritically adopt.
Technical literacy demands deeper understanding: "AI literacy" requires comprehending how language technologies shape thinking processes, not just developing prompting skills or tool manipulation capabilities.
Implementation requires systemic transformation: Integrating LLMs effectively demands institutional change in curriculum design, faculty development, infrastructure, and fundamental educational philosophy.
The technological nature of language
When a student opens ChatGPT to help with an assignment, they are engaging with humanity's oldest and most consequential technology—not just its newest application. This perspective, viewing language itself as technology, transforms how we understand both artificial intelligence and education's response to it. Yet despite language's foundational role in human civilisation, we rarely recognise it as the sophisticated technological achievement it represents.
Language constitutes humanity's first general purpose technology—a systematic application of knowledge that extends human capabilities across multiple domains while enabling countless complementary innovations. Unlike animal communication systems that remain largely instinctual and domain-specific, human language is generative, recursive, and infinitely creative. We discuss past and future, construct hypotheticals, build abstract concepts, and communicate about communication itself. These capabilities mark language as technological achievement rather than mere biological endowment.
The philosopher Andy Clark describes language as "the ultimate artifact," noting that "words enable us to objectify our own thoughts and to reason about them." Through language technology, we externalise cognitive processes, creating what Daniel Dennett calls "tools for thinking" that have driven human progress across millennia. This cognitive extension through language establishes the foundation for understanding how large language models represent not a break from human tradition but its latest evolution.
Language functions as a complex adaptive system characterised by emergence, self-organisation, and nonlinear dynamics. Small changes in linguistic structure can produce dramatic shifts in cognitive capability—the development of written language transformed not just information storage but the nature of human thought itself. Mathematical notation enabled scientific revolution by providing precise symbolic representation. Legal language created frameworks for complex social organisation. Each represents language technology enabling cognitive and social capabilities impossible without it.
Yet language as technology is never neutral. Every linguistic system embeds particular worldviews, power structures, and ways of understanding reality. As James Baldwin observed, language controls and maintains power relationships, determining whose voices are heard and whose experiences are validated. This non-neutrality becomes crucial when considering how large language models, trained on vast linguistic datasets, may amplify existing biases while appearing objective or authoritative.
The technological view of language also reveals its collaborative nature. Language enables distributed cognition—thinking that spans individual minds through shared symbolic systems. Scientific communities develop specialised languages that enable collective reasoning impossible for any individual. Professional languages create shared frameworks for understanding complex domains. These collaborative capabilities suggest that artificial intelligence represents not replacement of human cognition but potential extension of language's fundamental collaborative function.
Language as general purpose technology
General purpose technologies share three defining characteristics: pervasive effects across economic sectors, broad applicability across diverse domains, and the capacity to spawn complementary innovations. Steam power, electricity, and computing exemplify this pattern, each transforming civilisation through their combinatorial effects. Language, examined through this framework, emerges as the archetypal general purpose technology—one whose transformative effects exceed any subsequent innovation.
Language's economy-wide effects shaped human civilisation from its earliest development. Complex economic coordination became possible only through sophisticated communication systems that enabled division of labour, trade relationships, and shared understanding of value. From paleolithic hunting coordination to contemporary global supply chains, economic activity depends fundamentally on linguistic capabilities that allow humans to communicate intentions, coordinate actions, and establish trust relationships across time and space.
The applications of language span virtually every domain of human endeavour. Science depends on linguistic frameworks for hypothesis formation, evidence evaluation, and knowledge sharing. Governance requires linguistic systems for law creation, policy implementation, and democratic participation. Art uses language for narrative construction, emotional expression, and cultural transmission. Education relies entirely on linguistic capabilities for knowledge transfer and skill development. No other technology demonstrates such universal application across human activities.
Perhaps most significantly, language enables complementary innovations that compound its effects across generations. Writing systems extended language beyond immediate temporal and spatial constraints, enabling knowledge preservation and long-distance communication. Mathematical notation provided precise symbolic frameworks for quantitative reasoning. Legal systems codified social norms through linguistic structures. Scientific methodology established systematic approaches to empirical investigation through formalised language use. Digital technologies created new modalities for linguistic expression and connection.
Each complementary innovation built upon language's foundation while extending its capabilities in new directions. This generative capacity distinguishes general purpose technologies from more limited innovations—they create platforms for endless further development rather than solving discrete problems. Language's most transformative effect lies in enabling cumulative knowledge development beyond individual lifespans. Unlike other species limited to immediate experience and genetic transmission, humans build knowledge progressively through linguistic preservation and sharing of discoveries across generations.
This cumulative effect accelerated dramatically with each language technology evolution. Oral traditions enabled cultural knowledge preservation within communities. Writing systems expanded knowledge storage and transmission across time and geography. Printing democratised knowledge access while standardising linguistic forms. Digital technologies connected global linguistic communities while enabling new forms of collaborative knowledge creation.
Understanding language as a general purpose technology provides crucial context for evaluating large language models. Rather than representing unprecedented technological disruption, LLMs continue language technology's historical trajectory of extending human cognitive capabilities while enabling new forms of complementary innovation. This perspective suggests that educational responses should build upon accumulated wisdom about integrating language technologies rather than treating AI as an entirely novel challenge requiring completely new approaches.
The evolution of language technologies and cognitive extension
The progression of language technologies reveals a consistent pattern: each major innovation extends human cognitive capabilities while enabling new forms of thinking that were previously impossible. Spoken language allowed humans to coordinate complex activities and share knowledge across time. Writing systems externalised memory, enabling complex reasoning that transcended individual cognitive limitations. Printing democratised knowledge while creating shared reference points for intellectual communities. Digital technologies connected global linguistic networks while enabling real-time collaborative thinking.
Large language models represent the latest stage in this evolutionary progression, but with qualitative differences that distinguish them from previous language technologies. While writing extended human memory and printing amplified knowledge distribution, LLMs extend linguistic generation and pattern recognition capabilities at unprecedented scale and sophistication.
LLMs overcome individual cognitive limitations in several dimensions. Where human language use remains constrained by working memory, expertise boundaries, and processing speed, LLMs can engage with vast linguistic corpora simultaneously, identifying patterns and generating connections that would be impossible for individual human cognition. They can shift between linguistic registers, domains, and styles with flexibility that would require years of immersion for human language users to develop.
The scale effects are profound but often misunderstood. LLMs don't simply provide access to more information—they enable new forms of linguistic pattern recognition that emerge from processing billions of language examples. This creates capabilities for cross-domain synthesis, identifying connections between previously isolated knowledge domains, and generating insights that might be difficult for human experts working within specialised boundaries to discover.
Perhaps most significantly, LLMs extend language's collaborative potential beyond traditional human limitations. While previous language technologies enabled collaboration across time and space, they remained bounded by human cognitive capabilities and social coordination constraints. LLMs effectively incorporate linguistic patterns from vast human communities, creating a form of distributed collaboration that transcends traditional group dynamics.
However, this extension of collaborative capability raises important questions about agency and authorship. When students engage with LLMs, they participate in linguistic collaboration that extends far beyond their individual capabilities—but also beyond their individual control. The cognitive extension enabled by LLMs is more comprehensive and less transparent than previous language technologies, creating new challenges for understanding the relationship between individual and collective intelligence.
The cognitive extension enabled by LLMs also transforms the nature of expertise itself. Traditional expertise involved mastering specialised linguistic domains—medical terminology, legal language, technical vocabularies. LLMs can engage competently across multiple specialised domains simultaneously, potentially democratising access to expert-level linguistic capabilities while also challenging traditional notions of what expertise means.
This democratisation effect parallels previous language technology innovations. Writing challenged the authority of oral tradition keepers. Printing undermined scribal monopolies on knowledge reproduction. Digital technologies disrupted traditional publishing gatekeepers. LLMs may similarly challenge expertise hierarchies based on linguistic domain mastery, creating new possibilities for knowledge creation while potentially destabilising existing educational and professional structures.
Distributed cognition and the extended mind
The extended mind thesis, developed by philosophers Andy Clark and David Chalmers, proposes that cognitive processes can extend beyond individual brain boundaries to include external tools and technologies that reliably augment thinking capabilities. Language technologies have always functioned as extended mind systems—writing enables external memory storage, mathematical notation supports complex reasoning, and digital tools facilitate cognitive offloading across multiple domains.
LLMs represent a qualitative expansion of extended mind capabilities, offering cognitive extension that is more comprehensive, adaptive, and interactive than previous language technologies. Unlike passive tools that store information or perform predetermined functions, LLMs engage in dynamic linguistic interaction that can adapt to individual cognitive patterns while providing sophisticated reasoning support.
This cognitive extension operates through what researchers call "distributed cognition"—thinking that spans individual minds, technological systems, and environmental structures. Effective distributed cognition requires alignment between human cognitive patterns and technological capabilities, creating integrated systems where human and artificial intelligence complement rather than compete with each other.
However, distributed cognition through LLMs raises concerns about cognitive dependency and authentic learning that require careful consideration. When cognitive extension becomes too comprehensive or insufficiently transparent, it may undermine rather than enhance human cognitive development. Students who rely on LLMs for thinking tasks they should develop independently may experience cognitive atrophy rather than cognitive enhancement.
The challenge lies in designing educational experiences that leverage LLMs' cognitive extension capabilities while ensuring students develop essential reasoning abilities. This requires understanding which cognitive functions benefit from technological augmentation and which require sustained human development. Pattern recognition, information synthesis, and linguistic generation may benefit from AI partnership, while critical evaluation, creative synthesis, and contextual judgment require sustained human development.
The distributed cognition perspective also highlights the importance of metacognitive awareness—understanding how one's own thinking interacts with technological augmentation. Students need to develop sophisticated understanding of when to rely on AI capabilities, when to work independently, and how to evaluate the quality and appropriateness of AI-generated outputs within specific contexts.
This metacognitive dimension transforms traditional notions of academic integrity and authentic assessment. Rather than viewing AI assistance as inherently problematic, distributed cognition suggests focusing on students' capacity to orchestrate human and artificial intelligence effectively for meaningful problem-solving. The question shifts from "What can students do without AI?" to "How effectively can students mobilise AI capabilities for worthwhile purposes?"
Implications for educational transformation
Viewing LLMs as evolved language technology rather than novel digital tools fundamentally transforms educational priorities and methods. If language technologies have always mediated human learning and thinking, then education must adapt to new forms of linguistic mediation rather than treating AI as an external threat to established practices.
From knowledge acquisition to cognitive partnership
Traditional educational models emphasise knowledge acquisition and information recall—functions that language technologies have progressively externalised. Medical students memorise drug interactions that are better accessed through databases. Engineering students learn calculation procedures that software performs more reliably. History students memorise dates and facts that are instantly accessible through digital searches.
LLMs accelerate this externalisation process while enabling new forms of cognitive partnership that transcend simple information access. Rather than replacing human knowledge with artificial knowledge, effective educational integration should focus on developing students' capacity for productive collaboration with intelligent systems.
This partnership model requires fundamentally different educational approaches. Instead of presenting students with predetermined content to memorise, education should focus on developing capabilities for contextual reasoning, critical evaluation, and creative synthesis that complement rather than compete with AI capabilities. Students need to develop sophisticated understanding of how to frame questions, evaluate responses, and integrate AI-generated insights with their own reasoning and values.
The partnership model also requires recognition that different domains may require different relationships between human and artificial intelligence. Technical fields like engineering or computer science may benefit from extensive AI collaboration for routine calculations and code generation, while creative disciplines may use AI for inspiration and exploration while maintaining human agency over artistic decisions.
Reconceptualising technical literacy for the AI age
Traditional approaches to "AI literacy" often focus on technical skills—prompt engineering, tool manipulation, and understanding model capabilities and limitations. While these skills have value, the technological view of language suggests deeper forms of literacy that address how language technologies shape thinking processes and social relationships.
Technical literacy in the age of LLMs requires understanding how artificial intelligence extends and potentially transforms human cognitive capabilities. Students need metacognitive awareness of their own thinking processes and how these interact with AI augmentation. They need critical consciousness about the biases and limitations embedded in AI systems. They need ethical frameworks for responsible AI engagement that preserve human agency while leveraging technological capabilities.
This expanded notion of technical literacy also requires understanding the social and political dimensions of language technologies. Students should understand how AI systems are developed, by whom, and for what purposes. They should recognise how these systems may perpetuate or challenge existing power structures. They should develop capacity for critical evaluation of AI outputs that considers not just accuracy but also perspective, bias, and agenda.
For technical education specifically, this means moving beyond tool-focused training toward deeper understanding of human-AI collaboration principles. Computer science students need to understand not just how to build AI systems but how these systems interact with human cognition and social structures. Engineering students need to develop capacity for AI-augmented design that maintains human creativity and ethical responsibility. Mathematics students need to understand how computational thinking complements rather than replaces mathematical reasoning.
Assessment and authentic intellectual work
The integration of LLMs into education forces fundamental reconsideration of assessment practices and what constitutes authentic intellectual work. Traditional assessments often test students' capacity to reproduce information or perform routine cognitive tasks—functions that AI systems can now accomplish with greater speed and accuracy than most students.
Rather than attempting to prevent AI use or detect AI-generated work, educational assessment should evolve toward evaluating students' capacity for meaningful human-AI collaboration. This requires assessments that examine students' ability to frame important questions, critically evaluate AI responses, synthesise multiple perspectives, and apply insights to complex real-world problems.
Authentic assessment in the AI age might focus on students' capacity to:
- Identify meaningful problems that benefit from AI augmentation
- Develop sophisticated contextual understanding that guides effective AI collaboration
- Critically evaluate AI outputs for accuracy, bias, and appropriateness
- Synthesise AI-generated insights with human reasoning and values
- Apply collaborative human-AI intelligence to novel challenges
- Reflect metacognitively on their own learning and thinking processes
This approach requires moving from individual to collaborative assessment models that recognise cognitive partnership as legitimate and valuable rather than problematic. Rather than testing what students can do without assistance, assessment should examine how effectively students can mobilise available resources—including AI capabilities—for worthwhile purposes.
Power dynamics and institutional change
The integration of LLMs into education inevitably involves questions of power and control that educational institutions must address explicitly rather than leaving to implicit market forces. Who controls the AI systems that students use? What data is collected about student interactions? How do AI systems shape what counts as valid knowledge or appropriate thinking?
Educational institutions have traditionally served as mediators between students and knowledge technologies—providing access to libraries, laboratories, and digital resources while maintaining some degree of institutional control over learning environments. LLMs potentially disrupt this mediating function by providing direct access to sophisticated cognitive capabilities that may be controlled by commercial entities with interests that differ from educational goals.
This disruption creates both opportunities and risks. LLMs may democratise access to sophisticated cognitive tools that were previously available only through expensive educational institutions. However, this democratisation may also create new forms of dependency on commercial AI providers whose business models may not align with educational values.
Educational institutions need to develop strategies for maintaining meaningful agency over learning environments while leveraging powerful AI capabilities. This might involve developing institutional AI policies, creating local AI infrastructure, or negotiating with AI providers for educational-specific services that preserve institutional values and student privacy.
The power dynamic question also extends to faculty-student relationships and traditional expertise hierarchies. When students have access to AI systems that can perform many functions traditionally reserved for experts, traditional teaching models based on knowledge transmission become less relevant. Faculty roles may need to evolve toward coaching, facilitation, and wisdom development rather than information delivery.
Challenges, limitations, and critical perspectives
While the technological view of language offers valuable insights for educational integration of LLMs, it also reveals significant challenges and limitations that require honest acknowledgment and careful consideration.
The risk of cognitive dependency
One of the most serious concerns about LLM integration involves the potential for cognitive dependency that undermines rather than enhances human cognitive development. If students rely too heavily on AI for thinking tasks they should develop independently, they may experience cognitive atrophy rather than cognitive enhancement.
This concern is not merely theoretical. Research on GPS navigation suggests that heavy reliance on technological navigation aids can atrophy spatial reasoning capabilities. Similarly, extensive calculator use in mathematics education may undermine number sense development if not carefully managed. LLMs present similar risks for linguistic and reasoning capabilities if educational integration lacks sufficient attention to human cognitive development.
The challenge lies in identifying which cognitive functions benefit from technological augmentation and which require sustained human development for optimal learning outcomes. Critical thinking, creative synthesis, and contextual judgment appear to require significant human development, while information gathering, pattern recognition, and routine analysis may benefit from AI augmentation.
Educational institutions need to develop sophisticated understanding of cognitive dependency risks and design learning experiences that leverage AI capabilities while ensuring essential human cognitive development occurs. This requires research on learning outcomes, careful curriculum design, and ongoing assessment of student cognitive development in AI-augmented environments.
Bias amplification and epistemic concerns
LLMs trained on vast datasets of human language inevitably incorporate the biases, assumptions, and limitations present in their training data. When educational institutions integrate these systems without adequate critical examination, they risk amplifying existing biases while lending them the authority of technological objectivity.
The biases embedded in LLMs are often subtle and systemic rather than explicit and individual. They may reflect historical patterns of exclusion, cultural assumptions about knowledge and value, or commercial interests that shaped training data collection and model development. Students who engage extensively with biased AI systems may internalise these biases without recognising their presence or influence.
Educational integration of LLMs requires explicit attention to bias recognition and mitigation strategies. Students need to develop critical consciousness about AI systems' limitations and biases. Faculty need training in recognising and addressing AI bias in educational contexts. Institutions need policies for evaluating and selecting AI systems that minimise bias amplification.
The epistemic concerns extend beyond bias to fundamental questions about knowledge, truth, and authority. LLMs can generate fluent, authoritative-sounding text on topics where they lack genuine understanding or accurate information. Students may struggle to distinguish between AI-generated content that reflects genuine knowledge and content that merely sounds plausible.
This epistemic challenge requires educational approaches that develop students' capacity for critical evaluation of information sources, including AI systems. Students need to understand how LLMs generate responses, what kinds of errors they tend to make, and how to verify AI-generated information through independent sources and reasoning.
Authenticity and meaning in learning
The integration of LLMs into education raises fundamental questions about what constitutes authentic learning and meaningful intellectual work. If AI systems can perform many cognitive tasks more efficiently than students, what is the purpose of educational processes that require students to perform these tasks independently?
This challenge is particularly acute in disciplines where linguistic expression constitutes a central component of learning. Writing assignments serve not just to demonstrate knowledge but to develop thinking capabilities through the process of articulating ideas. If students can generate sophisticated written work through AI assistance, traditional writing pedagogy may need fundamental reconceptualisation.
However, the authenticity concern may reflect overly narrow conceptions of learning and intellectual work. Rather than viewing AI assistance as inherently inauthentic, educational institutions might reconceptualise authenticity around students' capacity for meaningful engagement with important questions and problems, regardless of the technological tools they use in that engagement.
This reconceptualisation requires clarity about educational goals and values. If the purpose of education is to develop human capabilities for meaningful participation in social, professional, and civic life, then AI integration should enhance rather than replace these capabilities. If education aims to prepare students for a world where AI collaboration is routine, then developing sophisticated human-AI partnership skills becomes an authentic educational goal.
Implementation challenges and resource requirements
The practical implementation of educational transformation for the AI age requires significant institutional investment and cultural change that many educational institutions may struggle to achieve. Faculty development, infrastructure upgrading, curriculum redesign, and policy development all require sustained commitment and substantial resources.
Faculty development represents a particularly significant challenge, as many educators lack experience with AI technologies and may be uncertain about effective integration strategies. Professional development programmes need to address not just technical skills but pedagogical approaches, assessment methods, and ethical considerations for AI-augmented education.
Infrastructure requirements extend beyond simply providing access to AI tools. Institutions need to develop policies for AI use, privacy protection, academic integrity, and quality assurance. They need to create support systems for students and faculty navigating AI integration challenges. They need to develop assessment methods appropriate for AI-augmented learning environments.
The resource requirements may be particularly challenging for institutions serving disadvantaged populations, potentially exacerbating existing educational inequalities. While AI technologies may democratise access to sophisticated cognitive tools, they also require significant technological infrastructure and support systems that not all institutions can provide equally.
These implementation challenges require coordinated effort across educational institutions, technology providers, and policy makers to ensure that AI integration enhances rather than undermines educational equity and effectiveness.
Future directions and conclusion
The technological view of language positions large language models within a long historical trajectory of cognitive extension through linguistic innovation. This perspective offers both inspiration and caution for educational transformation in the AI age.
LLMs represent not a break from human tradition but its latest evolution—the continuation of language technology's trajectory of extending human cognitive capabilities while enabling new forms of thinking and collaboration. This evolutionary perspective suggests that educational institutions should build upon accumulated wisdom about integrating language technologies rather than treating AI as an entirely unprecedented challenge.
However, the scale and sophistication of LLMs also create qualitative differences that require careful attention. The cognitive extension enabled by these systems is more comprehensive and less transparent than previous language technologies, creating new challenges for maintaining human agency while leveraging technological capabilities.
The path forward requires educational transformation that preserves and enhances human capabilities while enabling productive collaboration with artificial intelligence. This means shifting from knowledge transmission toward cognitive partnership, from individual assessment toward collaborative problem-solving, and from technical skill development toward deeper forms of literacy that address how language technologies shape thinking and social relationships.
Educational institutions have historically adapted to each evolution in language technology while maintaining their fundamental mission of human development and empowerment. The AI moment represents another such adaptation opportunity—one that requires thoughtful integration of powerful new capabilities with enduring educational values.
The success of this integration will depend on educational institutions' capacity to maintain focus on human development while embracing technological augmentation, to address power dynamics and bias explicitly while leveraging AI capabilities, and to reconceptualise learning and assessment for an age of human-AI collaboration.
By understanding LLMs as part of language technology's ongoing evolution rather than as external disruption to educational tradition, we can navigate this transformation in ways that enhance rather than diminish the fundamentally humanising mission of education. The goal is not to compete with artificial intelligence but to develop forms of human-AI collaboration that amplify human capabilities while preserving human agency, creativity, and wisdom.
The technological nature of language reminds us that humans have always been enhanced by the tools we create for thinking and communication. Large language models represent the latest chapter in this enhancement story—one that offers tremendous potential for educational transformation if we approach it with both enthusiasm for new possibilities and commitment to enduring human values.