If you do not change direction, you may end up where you are heading.

Lao Tzu

Lesson overview

Objective: Understand what generative AI is and how to approach it as language-based cognitive extension rather than software to operate

Summary: This lesson establishes the conceptual foundation for productive AI engagement. Most confusion about AI comes from applying the wrong mental model—treating it as a search engine, traditional software, or knowledge database. Understanding AI as language-based cognitive extension transforms how you approach collaboration and sets the stage for developing genuine literacy.

Key habits:

  • Conversational engagement: Approach AI as a thinking partner rather than a query interface, providing context and building on responses iteratively
  • Complementary error awareness: Recognise that you and AI make different kinds of mistakes, and use this to catch what each other misses
  • Critical evaluation: Verify AI outputs against your scholarly judgement rather than accepting responses uncritically

The scenario

Dr Sarah Chen sits at her desk at 11pm, surrounded by printed articles on student engagement. She’s been researching for three hours and feels overwhelmed—some papers focus on pre-pandemic engagement, others on emergency remote teaching, but nothing quite addresses her specific question about blended learning in UK universities.

Out of desperation, she opens ChatGPT and types: “literature review on student engagement blended learning UK”

The response is generic—a superficial overview that could apply to any context. She closes the tab, frustrated. “AI doesn’t work for research,” she thinks.

The next morning, her colleague mentions using AI differently: “I don’t ask it for answers. I have conversations with it about my thinking.” Sarah decides to try again, but this time she types:

“I’m researching how student engagement has changed in UK universities since the pandemic, particularly in blended learning. I’ve found papers on engagement generally and on pandemic teaching separately, but I’m struggling to connect the two. Can you help me think through relevant theoretical frameworks that might bridge these literatures?”

This conversation leads somewhere useful.

Before we begin

How would you currently approach using AI for a research challenge like Sarah’s? What’s your mental model of what AI does?

What AI isn’t (and why it matters)

Most confusion about AI comes from applying the wrong mental model. Before exploring what AI is, let’s clarify what it isn’t.

Not a search engine

A search engine retrieves existing documents that match your keywords. AI generates new text in response to your language. Sarah’s first attempt produced a generic overview because she typed keywords into a system that works through conversation.

Quick reflection

Think of a recent time you used AI. Did you treat it like a search engine? What happened?

Not traditional software

Traditional software has fixed functions you learn to operate, and the same input reliably produces the same output. AI interprets your language and responds differently depending on how you frame a request, so there is no single procedure to master.

Quick reflection

Have you ever approached AI like software, expecting to learn ‘how to use it’? What frustrated you?

Not a knowledge database

A database stores and retrieves facts reliably. AI generates plausible text that can be confidently wrong, which is why its outputs need verifying against your own scholarly judgement.

Quick reflection

When have you caught AI being confidently wrong about something in your field?

Quick practice: transactional vs conversational

Look back at Sarah’s two prompts in the scenario. The first is transactional: keywords in, a generic answer out. The second is conversational: it explains her context, what she has already found, and where she is stuck. What does the second prompt give the AI that the first doesn’t?

Language as cognitive extension

If AI isn’t search, software, or a knowledge base, what is it? The answer requires thinking about language not just as communication, but as technology that extends human cognition.

Human thinking has always been extended by language technologies. Writing created persistent records accessible to people who’d never met the author. Printing distributed knowledge at scale. Digital text added searchability and hyperlinks. Generative AI is the latest development: language systems that engage conversationally, generate contextually appropriate text, and adapt to your needs in real time.

Each technological leap changed what humans could think, not just how they recorded thoughts. Plato worried that writing would weaken memory—and he was right. We no longer cultivate the elaborate memory techniques that oral cultures required. But writing didn’t diminish cognition—it extended it, making new forms of thinking possible.

The pattern is consistent: New language technologies feel threatening because they change intellectual work. But they don’t replace thinking; they extend it, whilst requiring us to develop new capabilities.

Consider how you already use language to extend thinking. Writing clarifies vague ideas by forcing precise articulation. Teaching deepens understanding by requiring clear explanation. Conversation with colleagues helps you work through complex problems. AI extends these familiar practices—it provides a conversational partner for thinking through ideas, an audience for articulating vague intuitions, and a collaborator for working through problems.

The key difference: AI responds in real time, adapts to your context, and doesn’t tire of repeated questions.

But like all cognitive extensions, AI is only as valuable as the quality of engagement you bring to it.

Knowledge check

What’s the key difference between AI and previous language technologies like writing or printing?

Show answer

Real-time conversational adaptation. Unlike static text, AI responds to your specific context and adjusts based on your questions, creating a dynamic rather than fixed extension of thinking.

The collaboration framework

Understanding AI as cognitive extension leads to a different model of engagement—one based on collaboration rather than tool operation.

Beyond the tool metaphor

When we call something a ‘tool’, we imply one-directional control. A hammer doesn’t have agency. The relationship is straightforward: you control, it responds.

AI doesn’t fit this model. It interprets your language, makes choices about how to respond, and generates outputs you couldn’t fully predict. The relationship is bidirectional: you shape what AI produces, and AI responses shape what you ask next.

A better metaphor is AI as collaborative thinking partner. Not an equal partnership: you bring scholarly expertise, disciplinary knowledge, critical judgement. But it is genuine collaboration, where both participants contribute to the outcome.

This reframing has practical implications:

  • Responsibility is shared. You’re responsible for clear communication, appropriate framing, and critical evaluation. AI contributes relevant knowledge, alternative perspectives, and structured thinking.
  • Quality emerges through interaction. You refine through follow-up questions, clarifications, and iterative development.
  • Context matters enormously. The more effectively you communicate your context and needs, the more valuable AI contributions become.

Understanding complementary errors

One of the most valuable aspects of human-AI collaboration is that you and AI make different kinds of mistakes. Recognising this transforms how you work together.

Interactive assessment

Select the errors you commonly make in your academic work:

Now recognise what AI gets wrong:

AI makes different errors than humans—it generates plausible but incorrect information (hallucinations), misses disciplinary nuance and conventions, produces generic outputs lacking scholarly sophistication, and fails to recognise when simplification distorts meaning.

The collaboration principle: Productive engagement means you catch what AI misses whilst AI catches what you overlook. Neither participant is infallible, but together you make fewer errors than either would alone.

Scaffolded practice: from observation to independence

Let’s work through how collaboration actually functions, moving from observing an expert example to applying the approach in your own work.

Step 1: observe an expert collaboration (modelling)

Here’s how an experienced academic might collaborate with AI on improving a conference abstract:

Academic’s prompt: “I’ve written a 250-word abstract for a conference on digital pedagogy. The feedback I got was that it’s too descriptive and doesn’t clearly articulate my contribution. Here’s the abstract: [text]. Can you help me think through what might be missing?”

AI’s response: Analyses the abstract, identifies that it focuses on ‘what’ was done rather than ‘what was learned’, suggests reframing around the insight rather than the activity.

Academic’s follow-up: “That’s helpful. The key insight was that X contradicted our expectations. How might I foreground that contradiction rather than just describing the study design?”

Notice: The academic doesn’t ask AI to rewrite the abstract. They use conversation to clarify their own thinking about what’s missing, then do the rewriting themselves.

Step 2: your turn to complete the final step (fading)

You’re preparing a research proposal and want to strengthen your literature review section. You’ve identified relevant sources but the section feels like a list rather than a narrative.

Starter prompt (already written for you): “I’m writing a literature review on [your topic]. I’ve identified key sources but the section reads like a catalogue rather than building an argument. Here’s what I have: [paste 200 words]. What structural approach might create a stronger narrative?”

Now you complete

After reading AI’s response, what follow-up question would you ask to clarify the most useful suggestion? Write it below.

Step 3: independent application

Think and draft

Think of something you’re currently writing or revising. What specific challenge are you facing with it? Draft a conversational prompt you’d use with AI to think through this challenge. Remember: you’re not asking AI to do the work, but to help you think more clearly about it.

Key takeaways

  • AI as cognitive extension: AI is language-based cognitive extension, not software you operate. This is the latest development in a long evolutionary history of humans using language technologies to extend thinking. You’re not learning to operate a tool—you’re developing fluency in a new form of language-based collaboration.

  • Shared responsibility for quality: You and AI make different kinds of errors. Productive engagement means you catch what AI misses (hallucinations, lack of nuance, generic outputs) whilst AI catches what you overlook (missing perspectives, logical inconsistencies, unnoticed connections).

  • Conversation over commands: When facing complex challenges, extended conversational engagement produces better outcomes than single transactional queries. Provide context, ask follow-up questions, treat initial responses as starting points rather than final outputs.

  • Appropriate expectations: AI excels at breadth across domains, pattern recognition, generating variations, and conversational exploration. It struggles with disciplinary expertise, original insight, accurate factual recall, and understanding your specific context without being told.

Your commitment

Pause and reflect

Based on this lesson, what’s one specific way you’ll engage with AI differently this week? Document this commitment in your Action Journal.

Looking ahead

In the next lesson, we’ll use this foundational understanding to develop structured prompting approaches that dramatically improve output quality. You’ll learn how to communicate your needs clearly, build iterative conversations, and create a personal prompt library for recurring academic tasks.

Resources

  • Bommasani, R., et al. (2021). On the opportunities and risks of foundation models. arXiv.
  • Bender, E., et al. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT ’21.
  • Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio.
  • Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford University Press.
  • Ong, W. (2002). Orality and literacy: The technologising of the word. Routledge.
  • Mollick, E. & Mollick, L. (2023). Practical AI for instructors and students. YouTube series.
  • UNESCO. (2023). Guidance for generative AI in education and research. UNESCO.