Lesson overview

Objective: Develop adaptation-level literacy—using AI to build genuine competence rather than just extract information, whilst developing metacognitive awareness about your understanding

Summary: This lesson marks the shift from substitution to adaptation. You’ll learn to distinguish competence from familiarity, build working knowledge through staged complexity, and test your understanding systematically. The goal is genuine competence you can deploy, not recognition you can cite.

Key habits:

  • Staged complexity: Build understanding progressively—basic idea, contextual understanding, procedural detail—testing at each stage
  • Immediate testing: Test understanding before progressing to reveal gaps early, not after consuming all information
  • Metacognitive calibration: Accurately assess what you understand versus what you merely recognise

The illusion of understanding

The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.

Daniel J. Boorstin

Quick check

What is grounded theory methodology?

Good. Now answer these without looking anything up:

  • Could you design a small study using grounded theory right now?
  • Could you explain to a student when to use it versus phenomenology?
  • Could you identify a study that applied it incorrectly?

If you answered the first question but struggled with the next three, you have familiarity, not competence.

Familiarity means you’ve heard of something and can recognise it. Competence means you can use it, explain when it’s appropriate, and identify misapplication.

This lesson shows you how to build competence, not just acquire familiarity.

The contrast

Dr. Chen needs to understand discourse analysis for her methodology chapter. She opens Claude and asks: “Summarise discourse analysis in 200 words.”

She reads the summary carefully. Makes some notes. Moves on to other work. Total time: 5 minutes.

Two months later, writing her methodology chapter, she tries to explain her analytical approach. She realises she can’t describe what she’ll actually do. She knows what discourse analysis is called. She can’t explain how it works or why it’s appropriate for her data. She has familiarity without competence.

Dr. Williams needs the same understanding. She spends 60 minutes in structured dialogue with AI:

  • Gets accessible explanation of the core idea, tests by explaining it back
  • Adds contextual understanding of when it’s appropriate versus alternatives
  • Works through methodological detail about the actual process
  • Tests by describing how she’d apply it to sample data from her research
  • Identifies gaps in her understanding, works through confusion
  • Can now design and defend her analytical approach

She has competence, not just familiarity.

Before we begin

Think of a methodology or framework you’ve read about but couldn’t confidently use. What’s the difference between recognising it and being able to deploy it?

Competence versus familiarity

Substitution-level reading (lessons 1-5) taught you to extract information efficiently—know what papers say in 5 minutes. That produces familiarity: you recognise concepts, know who uses them, can cite appropriately.

Adaptation-level reading builds competence: understand how something works, when to use it, how to apply it. This requires 60+ minutes of structured engagement, not 5-minute extraction.

The distinction:

Familiarity (substitution approach):

  • Recognise concepts when you encounter them
  • Can define using technical terms
  • Know who uses these approaches
  • Can cite papers appropriately
  • Time: 5 minutes extraction

Competence (adaptation approach):

  • Explain to non-experts clearly
  • Know when to use versus alternatives
  • Can apply to your specific work
  • Identify misapplication or limitations
  • Time: 60 minutes structured learning

Both matter. Most concepts in your field require only familiarity. A few that are critical to your research require competence.

Quick reflection

What’s one methodology or framework critical to your research where you need competence, not just familiarity?

Metacognitive calibration

Before learning to build competence, calibrate your metacognitive awareness—your ability to accurately assess what you understand versus what you merely recognise.

Calibration insight: Genuine competence means being able to explain simply, identify boundaries, recognise misuse, and compare alternatives, not merely define using technical language.

This lesson teaches you to build this level of understanding systematically.

Building competence: The staged approach

Building competence requires working through complexity progressively—starting accessible, adding layers systematically, testing understanding at each stage.

This prevents the illusion of understanding: reading sophisticated explanations creates familiarity without competence. You recognise terms but can’t apply concepts.

The four stages:

Stage 1: Get the basic idea → What is this? What problem does it solve?

Stage 2: Understand when to use it → When appropriate? When not? Versus alternatives?

Stage 3: Know how to do it → Actual process? Decisions? Challenges?

Stage 4: Recognise its limits → Criticisms? When it fails? Boundaries?

We’ll focus on stages 1-3 in this lesson. Stage 4 becomes important when you’re ready to deploy competence critically, not while you’re still building it.

Stage 1: Get the basic idea

Start with the simplest explanation that captures core concepts without nuance.

Faded practice: From observation to independence

Stage 1: Observe expert application

Here’s how an experienced researcher builds foundation understanding:

[Prompt] “Explain phenomenology in plain language for someone encountering it for the first time. What’s the basic idea? What problem does it solve?”

[AI responds with explanation]

[Scholar immediately tests understanding] Without looking at AI’s explanation, they write in their own words: “Phenomenology studies lived experience from the person’s perspective—how they interpret what happens to them rather than objective facts about what happened.”

[They check accuracy] They compare their summary to AI’s explanation. Close enough. They understand the core idea.

Notice: They didn’t just read the explanation—they immediately tested whether they could reproduce the core idea without looking. This reveals actual understanding versus illusion of understanding.

Self-explanation

Why test immediately rather than reading all stages first then testing at the end?

Show answer

Testing immediately reveals gaps before you build on shaky foundations. If you don’t understand stage 1, stage 2 explanations won’t make sense. Immediate testing prevents the illusion that reading equals understanding.

Stage 2: Guided practice

Stage 3: Independent application

You now know the pattern: prompt for explanation → immediately test understanding → identify gaps → clarify before progressing.

You’ll apply this pattern to the next complexity stages.
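For readers who find it easier to see a workflow as code, the prompt → test → clarify pattern can be sketched as a small loop. This is purely illustrative: `ask_ai` and `self_test` are hypothetical placeholders standing in for your chat interface and your own written self-check, not a real API.

```python
# Illustrative sketch of the staged-complexity loop: for each stage,
# request an explanation, test yourself, and only progress once the
# self-test passes. ask_ai and self_test are hypothetical placeholders.

def staged_learning(concept, stages, ask_ai, self_test, max_rounds=3):
    """Work through complexity stages, testing before progressing."""
    completed = []
    for stage in stages:
        explanation = ask_ai(f"Explain {concept}: {stage}")
        for _ in range(max_rounds):
            if self_test(stage, explanation):
                completed.append(stage)
                break
            # A gap was found: ask a follow-up before re-testing
            explanation = ask_ai(f"Clarify {concept} ({stage}); I'm still confused.")
        else:
            # Self-test never passed: stop rather than build on shaky foundations
            return completed
    return completed
```

Note the design choice: the loop returns early when a self-test keeps failing, mirroring the principle that progressing on a shaky foundation only produces detailed familiarity.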

Stage 2: Understand when to use it

Add context: why this approach matters, how it differs from alternatives, when it’s appropriate.

The approach

Prompt structure:

“Now add more context to help me understand [concept]. What scholarly debates or problems motivated this approach? How does it differ from [related approach]? When would researchers choose this versus alternatives?”

Engage conversationally: Don’t just read AI’s response—ask follow-up questions about anything unclear. “What does [confusing term] actually mean in practice?” or “Can you give an example of when this wouldn’t work?”

This conversational engagement is creation/communication—actively constructing understanding through dialogue, not passively receiving information.

If you can’t articulate when NOT to use this approach, ask AI: “What are situations where [concept] wouldn’t be appropriate? When should researchers choose alternatives?”

Literacy note: The ability to articulate when something is inappropriate requires contextual judgement—understanding not just what something is, but when it serves particular purposes versus when it doesn’t.

Stage 3: Know how to do it

Add procedural detail: how you actually do this, what choices researchers make, what challenges emerge.

If you can’t envision application, ask AI: “Walk me through how I would apply [concept] to my specific research situation: [describe your context, research question, data type]. What would I actually do?”

Testing your competence

You’ve worked through three complexity stages. Now test whether you’ve built genuine competence or just acquired detailed familiarity.

Choose ONE test that best assesses competence for your concept:

Option A: The explanation test

Prompt:

“I’m going to explain [your concept] back to you in my own words. Please identify where my understanding is accurate, where I’m confused, and what I’m missing. Here’s my explanation:”

[Write 2-3 paragraphs explaining the concept without looking at anything]

What this reveals: Gaps in understanding you didn’t recognise, areas where you’re using terms without grasping meaning.

Option B: The application test

Prompt:

“How would I apply [your concept] to this specific situation: [describe a real scenario from your research with enough detail that AI can assess appropriate application]. What would it look like? What challenges would I face? Where might I struggle?”

What this reveals: Whether you can translate abstract understanding into practical deployment. Application tests reveal gaps that explanation alone obscures.

Option C: The comparison test

Prompt:

“What’s the key difference between [this concept] and [related concept] I already know? When would I use each? What are the implications of choosing one versus the other?”

What this reveals: Whether you can distinguish related ideas, not just understand them individually.

Literacy insight: Testing reveals the difference between recognition (you’ve seen explanations) and genuine understanding (you can use the concept). This metacognitive awareness—knowing what you actually understand—is crucial for adaptation-level literacy.

Decision point: Competence or familiarity?

Let’s see what genuine competence enables versus what superficial familiarity produces.

The scenario

You’ve spent 30 minutes learning about grounded theory through staged complexity. A colleague asks: “Should I use grounded theory for my study on teacher burnout? I want to understand what factors contribute to burnout in secondary schools.”

How do you respond?

Pause and reflect

How would a response grounded in competence differ from one based on mere familiarity? What does genuine competence enable that familiarity doesn’t?

Decision principle: Invest 60 minutes building competence when you need to use, explain, or evaluate concepts. Use 5-minute extraction when recognition suffices.

Co-evolutionary learning in practice

Building competence with AI is co-evolutionary—you adapt together through iterative engagement. This differs fundamentally from substitution-level extraction.

Literacy insight: This co-evolutionary process produces better learning than generic explanations because it connects abstract concepts to your specific scholarly work. This is creation/communication—actively constructing understanding through dialogue.

Competence checklist

Let’s assess whether you’ve built genuine competence or detailed familiarity.

For the concept you’ve been learning, can you:

Explain simply:

  • Explain it clearly to a non-expert in 2 minutes
  • Use your own examples, not AI’s
  • Avoid jargon or define terms when necessary

Identify boundaries:

  • Name 3 situations where it would be inappropriate
  • Explain why alternatives would serve better in those situations
  • Articulate its main limitations honestly

Apply practically:

  • Describe what you’d do first if using it
  • Identify key decisions you’d need to make
  • Anticipate challenges you’d face

Evaluate critically:

  • Compare it to 2 alternative approaches meaningfully
  • Explain when you’d choose each alternative
  • Recognise potential misapplication

Interpreting your results

All 12 checked: Genuine competence—you can deploy this in your work

8-11 checked: Working competence—you could use this with guidance

4-7 checked: Developing familiarity—you need deeper engagement

0-3 checked: Superficial familiarity—consider whether you need competence

If you have gaps: Identify which specific capabilities you’re missing, then work through those complexity stages again with targeted testing.

Literacy note: This honest metacognitive assessment—accurately recognising what you understand versus what you merely recognise—is crucial for adaptation-level literacy. Overconfident assessment leads to deployment failures.


Key takeaways

  • Competence requires active construction through testing: Information consumption differs fundamentally from competence building. Consumption is passive—you read sophisticated explanations, take notes, move on. You recognise concepts but can’t use them. Competence is active—you engage through staged complexity, test understanding at each level, work through confusion until concepts become deployable tools.

  • Immediate testing prevents illusion of understanding: Reading sophisticated explanations creates the illusion of understanding—it feels like learning but produces no usable knowledge. You recognise terms but can’t apply concepts. Testing reveals the difference between recognition and genuine understanding. Test immediately after each complexity stage before progressing.

  • Progressive staging builds on solid foundations: Jumping straight to sophisticated explanations creates detailed familiarity without competence—you recognise terminology but can’t deploy concepts. Stage complexity deliberately instead: get the basic idea first (5-10 min), understand when to use it (10-15 min), know how to do it (15-20 min). Test understanding at each stage before progressing.

  • Co-evolutionary learning tailors to your context: Generic explanations produce generic understanding. Co-evolutionary learning—where you provide research context and AI adapts explanations to your situation—produces understanding connected to your work. This iterative, contextual approach builds understanding you can actually use, not just abstract knowledge.

Your commitment

Pause and reflect

Based on this lesson, what concept will you build competence in this week? How will you test whether you’ve achieved genuine understanding? Document this commitment in your Action Journal.

Looking ahead

This lesson developed competence through staged complexity—active construction of understanding through AI-supported learning. You’ve moved from substitution (extract information in 5 minutes) to adaptation (build competence in 60 minutes).

The next adaptation lesson explores using AI for argument development—testing your thinking through structured dialogue. Both lessons emphasise creation/communication (actively constructing understanding), contextual judgement (recognising what serves your goals), and metacognitive awareness (knowing what you actually understand).

The adaptation stage is about reshaping practice around AI capabilities—not just using AI more effectively for existing tasks, but discovering what becomes possible when AI supports sustained intellectual work.

Resources

  • Ericsson, K.A. (2008). Deliberate practice and acquisition of expert performance. Academic Emergency Medicine, 15(11), 988-994.
  • Sadler, D.R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550.
  • Flavell, J.H. (1979). Metacognition and cognitive monitoring. American Psychologist, 34(10), 906-911.
  • Dunlosky, J. & Metcalfe, J. (2008). Metacognition. Sage Publications.
  • Zimmerman, B.J. (2002). Becoming a self-regulated learner. Theory into Practice, 41(2), 64-70.
  • Adler, M.J. (1972). How to read a book: The classic guide to intelligent reading. Touchstone.
  • Karnofsky, H. (2021). Reading books vs engaging with them. Cold Takes.