If you do not change direction, you may end up where you are heading.
Lao Tzu
Lesson overview
Objective: Understand what generative AI is and how to approach it as language-based cognitive extension rather than software to operate
Summary: This lesson establishes the conceptual foundation for productive AI engagement. Most confusion about AI comes from applying the wrong mental model—treating it as a search engine, traditional software, or knowledge database. Understanding AI as language-based cognitive extension transforms how you approach collaboration and sets the stage for developing genuine literacy.
Key habits:
- Conversational engagement: Approach AI as a thinking partner rather than a query interface, providing context and building on responses iteratively
- Complementary error awareness: Recognise that you and AI make different kinds of mistakes, and use this difference so that each of you catches what the other misses
- Critical evaluation: Verify AI outputs against your scholarly judgement rather than accepting responses uncritically
The scenario
Dr Sarah Chen sits at her desk at 11pm, surrounded by printed articles on student engagement. She’s been researching for three hours and feels overwhelmed—some papers focus on pre-pandemic engagement, others on emergency remote teaching, but nothing quite addresses her specific question about blended learning in UK universities.
Out of desperation, she opens ChatGPT and types: “literature review on student engagement blended learning UK”
The response is generic—a superficial overview that could apply to any context. She closes the tab, frustrated. “AI doesn’t work for research,” she thinks.
The next morning, her colleague mentions using AI differently: “I don’t ask it for answers. I have conversations with it about my thinking.” Sarah decides to try again, but this time she types:
“I’m researching how student engagement has changed in UK universities since the pandemic, particularly in blended learning. I’ve found papers on engagement generally and on pandemic teaching separately, but I’m struggling to connect both. Can you help me think through relevant theoretical frameworks that might bridge these literatures?”
This conversation leads somewhere useful.
Before we begin
How would you currently approach using AI for a research challenge like Sarah’s? What’s your mental model of what AI does?
What AI isn’t (and why it matters)
Most confusion about AI comes from applying the wrong mental model. Before exploring what AI is, let’s clarify what it isn’t.
Not a search engine
When you search Google Scholar, the engine matches your keywords to documents and returns ranked results. You do the work of reading and synthesising.
Generative AI works differently—it generates responses based on patterns learned from vast amounts of text. It’s not retrieving documents; it’s constructing language probabilistically in response to your specific query.
Why this matters: You can have a conversation with AI—asking follow-up questions, requesting clarification, exploring ideas iteratively. This conversational capability fundamentally changes what’s possible, but it also means you can’t expect AI to simply ‘retrieve’ information. It constructs responses, which requires your critical evaluation.
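To make ‘constructing language probabilistically’ concrete, here is a deliberately toy sketch in Python. The words and probabilities are invented for illustration; a real model computes a distribution over tens of thousands of tokens from learned parameters, conditioned on everything said so far, then samples one token at a time.

```python
import random

# Toy illustration of generation versus retrieval. The probabilities
# below are invented for this example; a real model computes them
# from learned parameters, conditioned on the conversation so far.
# Imagined context: "Blended approaches improve student ..."
next_word_probs = {
    "engagement": 0.45,
    "learning": 0.30,
    "outcomes": 0.15,
    "attendance": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling means repeated runs can yield different continuations.
# Nothing is looked up; the text is constructed on the fly.
for _ in range(3):
    print(random.choices(words, weights=weights, k=1)[0])
```

Two consequences follow directly: the same question can produce different answers on different runs, and fluent output is no evidence of accuracy.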
Quick reflection
Think of a recent time you used AI. Did you treat it like a search engine? What happened?
Not traditional software
Most software operates through explicit commands: you execute an action and get a predictable result. AI responds to natural language: you describe what you need (often imprecisely at first), and the system interprets your intent and adapts as the conversation develops.
Why this matters: You don’t need to learn an interface or memorise commands. You communicate naturally, which lowers the barrier to engagement—but requires different skills around clear communication. The quality of what you get depends on how well you communicate what you need.
Quick reflection
Have you ever approached AI like software, expecting to learn ‘how to use it’? What frustrated you?
Not a knowledge database
AI doesn’t store facts like a database. It has learned patterns from training data, allowing it to generate plausible text on almost any topic. But it doesn’t ‘know’ things in the way humans do, and it can confidently generate incorrect information—what we call hallucinations.
Why this matters: You can’t trust AI outputs uncritically. Everything requires your scholarly judgement and verification. The value isn’t in delegating evaluation to AI—it’s in extending your thinking through collaboration whilst maintaining your responsibility for accuracy.
Quick reflection
When have you caught AI being confidently wrong about something in your field?
Quick practice: transactional vs conversational
Activity: Experience the difference
Time required: 5 minutes
Let’s experience the difference between treating AI as a search engine versus a thinking partner.
Step 1: Simple query
Choose a topic relevant to your current work—something you’re genuinely curious about or stuck on. Open ChatGPT or Claude and ask a simple, direct question. For example: “What are the main theories of student engagement?” or “What methods work for teaching statistics?”
Read the response but don’t engage further. Note what you got and what’s missing.
Step 2: Contextual conversation
Now provide context: What’s your specific situation? What have you already tried? What’s the actual problem you’re trying to solve?
Read AI’s response. Does it feel different? More useful?
Reflection: What did conversation provide that the simple query didn’t? When would you use each approach?
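If you ever script these interactions, the mechanism behind the difference becomes concrete: a chat conversation is simply a growing list of messages that the model re-reads on every turn, so the context you supply is literally part of what it responds to. A minimal sketch, assuming a hypothetical `chat()` helper rather than any particular vendor’s API:

```python
# Hypothetical stand-in for a real model call. Any chat API receives
# the full message history each turn, which is why context accumulates.
def chat(messages: list[dict]) -> str:
    return f"[model reply based on {len(messages)} message(s) of context]"

# Transactional: a bare question, nothing for the model to anchor to.
transactional = [
    {"role": "user",
     "content": "What are the main theories of student engagement?"},
]
print(chat(transactional))

# Conversational: situation, what you've tried, the actual problem;
# all of it travels with every subsequent question.
conversational = [
    {"role": "user",
     "content": ("I'm researching post-pandemic engagement in UK blended "
                 "learning. I've found work on engagement and on emergency "
                 "remote teaching, but I can't connect the two.")},
    {"role": "assistant",
     "content": "[model suggests candidate bridging frameworks]"},
    {"role": "user",
     "content": "Which of those frameworks best bridges the two literatures?"},
]
print(chat(conversational))
```

The point is not the code itself: ‘providing context’ is not a politeness ritual but the literal input the model works from.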
Language as cognitive extension
If AI isn’t search, software, or a knowledge base, what is it? The answer requires thinking about language not just as communication, but as technology that extends human cognition.
Human thinking has always been extended by language technologies. Writing created persistent records accessible to people who’d never met the author. Printing made knowledge available at scale. Digital text added searchability and hyperlinks. Generative AI is the latest development: language systems that engage conversationally, generate contextually appropriate text, and adapt to your needs in real time.
Each technological leap changed what humans could think, not just how they recorded thoughts. Plato worried that writing would weaken memory—and he was right. We no longer cultivate the elaborate memory techniques that oral cultures required. But writing didn’t diminish cognition—it extended it, making new forms of thinking possible.
The pattern is consistent: new language technologies feel threatening because they change intellectual work. But they don’t replace thinking; they extend it, whilst requiring us to develop new capabilities.
Consider how you already use language to extend thinking. Writing clarifies vague ideas by forcing precise articulation. Teaching deepens understanding by requiring clear explanation. Conversation with colleagues helps you work through complex problems. AI extends these familiar practices—it provides a conversational partner for thinking through ideas, an audience for articulating vague intuitions, and a collaborator for working through problems.
The key difference: AI responds in real time, adapts to your context, and doesn’t tire of repeated questions.
But like all cognitive extensions, AI is only as valuable as the quality of engagement you bring to it.
Learn more about the history of language technologies
Oral tradition allowed ideas to persist beyond individual memory, but required sophisticated mnemonic techniques we’ve now lost. Mediaeval scholars worried that printing would dilute scholarship by making books too accessible. Today we worry that AI will diminish critical thinking. Each transition sparked legitimate concerns, but each also enabled new forms of intellectual work that weren’t possible before.
Knowledge check
What’s the key difference between AI and previous language technologies like writing or printing?
Answer:
Real-time conversational adaptation. Unlike static text, AI responds to your specific context and adjusts based on your questions, creating a dynamic rather than fixed extension of thinking.
The collaboration framework
Understanding AI as cognitive extension leads to a different model of engagement—one based on collaboration rather than tool operation.
Beyond the tool metaphor
When we call something a ‘tool’, we imply one-directional control. A hammer doesn’t have agency. The relationship is straightforward: you control, it responds.
AI doesn’t fit this model. It interprets your language, makes choices about how to respond, and generates outputs you couldn’t fully predict. The relationship is bidirectional: you shape what AI produces, and AI responses shape what you ask next.
A better metaphor is AI as collaborative thinking partner. Not equal partnership—you bring scholarly expertise, disciplinary knowledge, critical judgement. But genuine collaboration where both participants contribute to the outcome.
This reframing has practical implications:
- Responsibility is shared. You’re responsible for clear communication, appropriate framing, and critical evaluation. AI contributes relevant knowledge, alternative perspectives, and structured thinking.
- Quality emerges through interaction. You refine through follow-up questions, clarifications, and iterative development.
- Context matters enormously. The more effectively you communicate your context and needs, the more valuable AI contributions become.
Understanding complementary errors
One of the most valuable aspects of human-AI collaboration is that you and AI make different kinds of mistakes. Recognising this transforms how you work together.
Interactive assessment
Select the errors you commonly make in your academic work:
Overlooking relevant literature outside my immediate field
How AI helps: Ask AI to suggest connections between your work and adjacent fields. For example: “I’m researching X in education. What parallel work exists in organisational psychology or cognitive science that might inform my approach?”
Why this works: AI’s breadth across domains helps surface connections you might not encounter through standard database searches.
What you still need: Evaluating whether suggested connections are genuinely relevant or superficial analogies.
Getting stuck in familiar thinking patterns
How AI helps: Explicitly ask AI to challenge your assumptions. For example: “I’m approaching this problem by assuming X. What alternative frameworks might I be overlooking?”
Why this works: AI can generate alternative perspectives without the social awkwardness of challenging your own thinking.
What you still need: Scholarly judgement about which alternative perspectives have merit in your specific context.
Missing logical inconsistencies in complex arguments
How AI helps: Present your argument structure to AI and ask: “What assumptions am I making here? Where might this logic break down?”
Why this works: AI can track multiple premises and spot gaps in reasoning that you might miss when deeply immersed.
What you still need: Evaluating whether identified gaps are genuine problems or acceptable limitations.
Losing track of details in extended reasoning
How AI helps: Use AI to summarise and structure complex information, asking it to track threads through long documents or conversations.
Why this works: AI doesn’t experience cognitive fatigue and can maintain attention across extended texts.
What you still need: Verifying that AI hasn’t introduced errors or missed crucial nuances in the summarisation.
Now recognise what AI gets wrong:
AI makes different errors from humans: it generates plausible but incorrect information (hallucinations), misses disciplinary nuance and conventions, produces generic outputs that lack scholarly sophistication, and fails to recognise when simplification distorts meaning.
The collaboration principle: Productive engagement means you catch what AI misses whilst AI catches what you overlook. Neither participant is infallible, but together you reduce the error rate of working alone.
Scaffolded practice: from observation to independence
Let’s work through how collaboration actually functions using a real example from your work.
Step 1: observe an expert collaboration (modelling)
Here’s how an experienced academic might collaborate with AI on improving a conference abstract:
Academic’s prompt: “I’ve written a 250-word abstract for a conference on digital pedagogy. The feedback I got was that it’s too descriptive and doesn’t clearly articulate my contribution. Here’s the abstract: [text]. Can you help me think through what might be missing?”
AI’s response: Analyses the abstract, identifies that it focuses on ‘what was done’ rather than ‘what was learned’, and suggests reframing around the insight rather than the activity.
Academic’s follow-up: “That’s helpful. The key insight was that X contradicted our expectations. How might I foreground that contradiction rather than just describing the study design?”
Notice: The academic doesn’t ask AI to rewrite the abstract. They use conversation to clarify their own thinking about what’s missing, then do the rewriting themselves.
Step 2: your turn to complete the final step (fading)
You’re preparing a research proposal and want to strengthen your literature review section. You’ve identified relevant sources but the section feels like a list rather than a narrative.
Starter prompt (already written for you): “I’m writing a literature review on [your topic]. I’ve identified key sources but the section reads like a catalogue rather than building an argument. Here’s what I have: [paste 200 words]. What structural approach might create a stronger narrative?”
Now you complete
After reading AI’s response, what follow-up question would you ask to clarify the most useful suggestion? Write it below.
Step 3: independent application
Think and draft
Think of something you’re currently writing or revising. What specific challenge are you facing with it? Draft a conversational prompt you’d use with AI to think through this challenge. Remember: you’re not asking AI to do the work, but to help you think more clearly about it.
Activity
10-minute exploration
Choose a current challenge in your academic work—something you’re genuinely stuck on or uncertain about. This might be:
- A section of writing that isn’t working
- A methodological decision you’re weighing
- A concept you’re trying to explain to students
- A research direction you’re uncertain about
Step 1: Initiate conversation
Spend 10 minutes conversing with AI about this challenge. Important: You’re not asking AI to solve it; you’re using conversation to think more clearly about it.
Step 2: Observe your process
As you converse, notice:
- When does conversation help clarify your thinking?
- When does AI’s response feel unhelpful or generic?
- What follow-up questions make the exchange more valuable?
- How does talking through the problem change your understanding of it?
Reflection: What became clearer through this conversation? What would you do differently next time?
Key takeaways
- AI as cognitive extension: AI is language-based cognitive extension, not software you operate. It is the latest development in a long history of humans using language technologies to extend thinking. You’re not learning to operate a tool; you’re developing fluency in a new form of language-based collaboration.
- Shared responsibility for quality: You and AI make different kinds of errors. Productive engagement means you catch what AI misses (hallucinations, lack of nuance, generic outputs) whilst AI catches what you might overlook (missing perspectives, logical inconsistencies, unnoticed connections).
- Conversation over commands: When facing complex challenges, extended conversational engagement produces better outcomes than single transactional queries. Provide context, ask follow-up questions, and treat initial responses as starting points rather than final outputs.
- Appropriate expectations: AI excels at breadth across domains, pattern recognition, generating variations, and conversational exploration. It struggles with disciplinary expertise, original insight, accurate factual recall, and understanding your specific context without being told.
Your commitment
Pause and reflect
Based on this lesson, what’s one specific way you’ll engage with AI differently this week? Document this commitment in your Action Journal.
Looking ahead
In the next lesson, we’ll use this foundational understanding to develop structured prompting approaches that dramatically improve output quality. You’ll learn how to communicate your needs clearly, build iterative conversations, and create a personal prompt library for recurring academic tasks.
Resources
- Bommasani, R., et al. (2021). On the opportunities and risks of foundation models. arXiv.
- Bender, E., et al. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT ‘21.
- Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio.
- Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford University Press.
- Ong, W. (2002). Orality and literacy: The technologizing of the word. Routledge.
- Mollick, E. & Mollick, L. (2023). Practical AI for instructors and students. YouTube series.
- UNESCO. (2023). Guidance for generative AI in education and research. UNESCO.