Lesson overview
Objective: Develop adaptation-level literacy—using AI to build genuine competence rather than merely extract information, whilst cultivating metacognitive awareness of your own understanding
Summary: This lesson marks the shift from substitution to adaptation. You’ll learn to distinguish competence from familiarity, build working knowledge through staged complexity, and test your understanding systematically. The goal is genuine competence you can deploy, not recognition you can cite.
Key habits:
- Staged complexity: Build understanding progressively—basic idea, contextual understanding, procedural detail—testing at each stage
- Immediate testing: Test understanding before progressing to reveal gaps early, not after consuming all information
- Metacognitive calibration: Accurately assess what you understand versus what you merely recognise
The illusion of understanding
The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.
Daniel J. Boorstin
Quick check
What is grounded theory methodology?
Good. Now answer these without looking anything up:
- Could you design a small study using grounded theory right now?
- Could you explain to a student when to use it versus phenomenology?
- Could you identify a study that applied it incorrectly?
If you answered the first question but struggled with the next three, you have familiarity, not competence.
Familiarity means you’ve heard of something and can recognise it. Competence means you can use it, explain when it’s appropriate, and identify misapplication.
This lesson shows you how to build competence, not just acquire familiarity.
The contrast
Dr. Chen needs to understand discourse analysis for her methodology chapter. She opens Claude and asks: “Summarise discourse analysis in 200 words.”
She reads the summary carefully. Makes some notes. Moves on to other work. Total time: 5 minutes.
Two months later, writing her methodology chapter, she tries to explain her analytical approach. She realises she can’t describe what she’ll actually do. She knows what discourse analysis is called. She can’t explain how it works or why it’s appropriate for her data. She has familiarity without competence.
Dr. Williams needs the same understanding. She spends 60 minutes in structured dialogue with AI:
- Gets accessible explanation of the core idea, tests by explaining it back
- Adds contextual understanding of when it’s appropriate versus alternatives
- Works through methodological detail about the actual process
- Tests by describing how she’d apply it to sample data from her research
- Identifies gaps in her understanding, works through confusion
- Can now design and defend her analytical approach
She has competence, not just familiarity.
Before we begin
Think of a methodology or framework you’ve read about but couldn’t confidently use. What’s the difference between recognising it and being able to deploy it?
Competence versus familiarity
Substitution-level reading (lessons 1-5) taught you to extract information efficiently—knowing what papers say in 5 minutes. That produces familiarity: you recognise concepts, know who uses them, and can cite appropriately.
Adaptation-level reading builds competence: understand how something works, when to use it, how to apply it. This requires 60+ minutes of structured engagement, not 5-minute extraction.
The distinction:
Familiarity (substitution approach):
- Recognise concepts when you encounter them
- Can define using technical terms
- Know who uses these approaches
- Can cite papers appropriately
- Time: 5 minutes of extraction
Competence (adaptation approach):
- Explain to non-experts clearly
- Know when to use versus alternatives
- Can apply to your specific work
- Identify misapplication or limitations
- Time: 60 minutes of structured learning
Both matter. Most concepts in your field require only familiarity; the few critical to your research require competence.
Understanding the literacy progression
Substitution literacy dimensions:
- Functional application: Using AI effectively for bounded tasks
- Critical evaluation: Honest assessment of whether extraction worked
- Binary evaluation: Did this give me what I needed?
Adaptation literacy dimensions:
- Creation/communication: Actively constructing understanding through dialogue
- Contextual judgement: Recognising when concepts apply appropriately
- Metacognitive awareness: Knowing what you understand versus what you merely recognise
- Reflective evaluation: Does this help me learn? What am I learning about learning?
This lesson develops adaptation-level capabilities that substitution doesn’t address.
Quick reflection
What’s one methodology or framework critical to your research where you need competence, not just familiarity?
Metacognitive calibration
Before learning to build competence, calibrate your metacognitive awareness—your ability to accurately assess what you understand versus what you merely recognise.
Calibration exercise (5 minutes)
Think of something you’re confident you understand well—a methodology you use regularly, a theoretical framework central to your work.
Now test your understanding WITHOUT looking anything up:
Explain it to an intelligent 12-year-old in 3 sentences:
Name 3 situations where using it would be inappropriate:
Identify 2 common mistakes people make applying it:
Compare it to one alternative approach—how do you choose between them?
Could you answer all four coherently?
- Yes: You have competence
- Partially: You have working familiarity but gaps remain
- No: You have recognition but not genuine understanding
Calibration insight: This is what genuine competence looks like—the ability to explain simply, identify boundaries, recognise misuse, and compare alternatives. Not just define using technical language.
This lesson teaches you to build this level of understanding systematically.
Building competence: The staged approach
Building competence requires working through complexity progressively—starting accessible, adding layers systematically, testing understanding at each stage.
This prevents the illusion of understanding: reading sophisticated explanations creates familiarity without competence. You recognise terms but can’t apply concepts.
The four stages:
Stage 1: Get the basic idea → What is this? What problem does it solve?
Stage 2: Understand when to use it → When appropriate? When not? Versus alternatives?
Stage 3: Know how to do it → Actual process? Decisions? Challenges?
Stage 4: Recognise its limits → Criticisms? When it fails? Boundaries?
We’ll focus on stages 1-3 in this lesson. Stage 4 becomes important when you’re ready to deploy competence critically, not while you’re still building it.
Stage 1: Get the basic idea
Start with the simplest explanation that captures core concepts without nuance.
Faded practice: From observation to independence
Step 1: Observe expert application
Here’s how an experienced researcher builds foundation understanding:
[Prompt] “Explain phenomenology in plain language for someone encountering it for the first time. What’s the basic idea? What problem does it solve?”
[AI responds with explanation]
[Scholar immediately tests understanding] Without looking at AI’s explanation, they write in their own words: “Phenomenology studies lived experience from the person’s perspective—how they interpret what happens to them rather than objective facts about what happened.”
[They check accuracy] They compare their summary to AI’s explanation. Close enough. They understand the core idea.
Notice: They didn’t just read the explanation—they immediately tested whether they could reproduce the core idea without looking. This reveals actual understanding versus the illusion of understanding.
Self-explanation
Why test immediately rather than reading all stages first then testing at the end?
Answer:
Testing immediately reveals gaps before you build on shaky foundations. If you don’t understand stage 1, stage 2 explanations won’t make sense. Immediate testing prevents the illusion that reading equals understanding.
Step 2: Guided practice
Your turn
Choose a methodology, theoretical framework, or analytical approach that is relevant to your research but unfamiliar to you. Choose something you genuinely need to understand.
My concept:
Why I need competence (not just familiarity):
Write your plain language prompt:
After AI responds, IMMEDIATELY test without looking at the explanation:
State the core principle in one sentence (your own words):
Now check: Compare your sentence to AI’s explanation. Did you capture it accurately?
- Yes, close enough → Progress to Stage 2
- No, missed key elements → Ask AI to clarify what you missed
Self-check:
- I can state the core idea without looking
- I understand what problem this solves
- I’m not just repeating AI’s exact words
Step 3: Independent application
You now know the pattern: prompt for explanation → immediately test understanding → identify gaps → clarify before progressing.
You’ll apply this pattern to the next complexity stages.
Stage 2: Understand when to use it
Add context: why this approach matters, how it differs from alternatives, when it’s appropriate.
The approach
Prompt structure:
“Now add more context to help me understand [concept]. What scholarly debates or problems motivated this approach? How does it differ from [related approach]? When would researchers choose this versus alternatives?”
Engage conversationally: Don’t just read AI’s response—ask follow-up questions about anything unclear. “What does [confusing term] actually mean in practice?” or “Can you give an example of when this wouldn’t work?”
This conversational engagement is creation/communication—actively constructing understanding through dialogue, not passively receiving information.
Quick practice
Continue with your chosen concept from Stage 1.
Your context prompt:
After AI responds, test your contextual understanding:
When this approach makes sense (2-3 situations):
When alternatives might serve better:
How it differs from [related approach you already know]:
Self-check:
- I can explain when to use this versus alternatives
- I understand the scholarly problem it addresses
- I can articulate choice points for selecting this approach
If you can’t articulate when NOT to use this approach, ask AI: “What are situations where [concept] wouldn’t be appropriate? When should researchers choose alternatives?”
Literacy note: The ability to articulate when something is inappropriate requires contextual judgement—understanding not just what something is, but when it serves particular purposes versus when it doesn’t.
Stage 3: Know how to do it
Add procedural detail: how you actually do this, what choices researchers make, what challenges emerge.
The procedural detail workflow
Prompt structure
“Walk me through the actual process step-by-step for [concept]. What does a researcher do first? What decisions do they make along the way? What challenges commonly arise?”
Engage with practical application
“In my specific research context [describe briefly], how would I apply this? What would it look like? What challenges would I face?”
Test understanding
Can you describe what you would actually do if using this? Can you anticipate challenges and identify decision points?
Quick practice
Your procedural prompt:
After AI responds, test your procedural understanding:
What I’d need to do first:
Key decisions I’d face:
Challenges I’d likely encounter:
Application to my specific research:
Self-check:
- I can describe the actual process, not just define the concept
- I can anticipate decision points
- I can imagine deployment in my specific context
If you can’t envision application, ask AI: “Walk me through how I would apply [concept] to my specific research situation: [describe your context, research question, data type]. What would I actually do?”
Testing your competence
You’ve worked through three complexity stages. Now test whether you’ve built genuine competence or just acquired detailed familiarity.
Choose ONE test that best assesses competence for your concept:
Option A: The explanation test
Prompt:
“I’m going to explain [your concept] back to you in my own words. Please identify where my understanding is accurate, where I’m confused, and what I’m missing. Here’s my explanation:”
[Write 2-3 paragraphs explaining the concept without looking at anything]
What this reveals: Gaps in understanding you didn’t recognise, areas where you’re using terms without grasping meaning.
Option B: The application test
Prompt:
“How would I apply [your concept] to this specific situation: [describe a real scenario from your research with enough detail that AI can assess appropriate application]. What would it look like? What challenges would I face? Where might I struggle?”
What this reveals: Whether you can translate abstract understanding into practical deployment. Application tests reveal gaps explanation alone obscures.
Option C: The comparison test
Prompt:
“What’s the key difference between [this concept] and [related concept] I already know? When would I use each? What are the implications of choosing one versus the other?”
What this reveals: Whether you can distinguish related ideas, not just understand them individually.
Record your test results
Test chosen: A / B / C
What the test revealed about my understanding:
Gaps I need to address:
Next learning steps (if gaps identified):
Literacy insight: Testing reveals the difference between recognition (you’ve seen explanations) and genuine understanding (you can use the concept). This metacognitive awareness—knowing what you actually understand—is crucial for adaptation-level literacy.
Decision point: Competence or familiarity?
Let’s see what genuine competence enables versus what superficial familiarity produces.
The scenario
You’ve spent 60 minutes learning about grounded theory through staged complexity. A colleague asks: “Should I use grounded theory for my study on teacher burnout? I want to understand what factors contribute to burnout in secondary schools.”
How do you respond?
Response A: Superficial familiarity (substitution approach)
You say: “Yeah, grounded theory is good for qualitative research. It develops theory from data instead of testing hypotheses. You should use it if you’re doing interviews.”
Colleague asks: “Should I start with a conceptual framework about burnout, or develop everything from the data? When do I stop collecting data?”
You realise: You can’t answer these questions. You know what grounded theory is called and that it’s qualitative, but you don’t understand how it works or when it’s appropriate versus alternatives.
What happens: Your colleague leaves confused. You’ve provided generic advice that doesn’t help them make an informed decision. Later, they choose grounded theory inappropriately and reviewers critique their methodology.
Time investment: 5 minutes of extraction
Result: Familiarity without utility
Learning: You consumed information but didn’t build competence. You can define but not consult. This is what substitution-level extraction produces—recognition without deployment capability.
Response B: Genuine competence (adaptation approach)
You say: “It might work well since you don’t have predetermined hypotheses about burnout patterns. Grounded theory would let those patterns emerge from teacher experiences. But you’d need to be comfortable with open-ended data collection—you won’t know how many interviews you need until your categories reach theoretical saturation. How does that fit your timeline and institutional review requirements?”
Colleague asks: “What if I want to use existing burnout theory as a starting framework?”
You explain: “Then you might want thematic analysis instead. Grounded theory assumes you’re developing new theory, not applying or testing existing frameworks. If you want to explore how existing burnout theory plays out in secondary schools specifically, thematic analysis would let you do that while staying open to unexpected themes.”
What happens: Your colleague has enough understanding to make an informed choice. They recognise grounded theory isn’t appropriate for their goals and choose thematic analysis instead. Their methodology section is coherent and defensible.
Time investment: 60 minutes of structured learning
Result: Competence that enables consultation
Learning: You built competence through staged learning and systematic testing. You can explain when it’s appropriate, identify requirements, recognise inappropriate application, and suggest alternatives.
Pause and reflect
What’s the difference between these responses? What does genuine competence enable that familiarity doesn’t?
Decision principle: Invest 60 minutes building competence when you need to use, explain, or evaluate concepts. Use 5-minute extraction when recognition suffices.
Co-evolutionary learning in practice
Building competence with AI is co-evolutionary—you adapt together through iterative engagement. This differs fundamentally from substitution-level extraction.
Try co-evolutionary learning (5 minutes)
Step 1: Provide your context
Explain your research context to AI: “I’m studying [topic] using [methodology] in [context]. My research question is [question]. I’m working with [type of data].”
Step 2: Get tailored explanation
Now ask: “Given my research context, how would [concept I’m learning] apply to my specific situation? What would it look like in practice?”
Notice: AI’s explanation is now tailored to YOUR research, not generic. This is what makes learning co-evolutionary.
Step 3: Iterate through confusion
Ask a follow-up about something specific to your work: “What if my participants are [specific detail about your research]? How would that change the approach?”
Notice: You’re co-creating understanding together—you provide specific context, AI adapts explanations to your situation, you refine through questions.
Your tailored understanding:
Literacy insight: This co-evolutionary process produces better learning than generic explanations because it connects abstract concepts to your specific scholarly work. This is creation/communication—actively constructing understanding through dialogue.
Competence checklist
Let’s assess whether you’ve built genuine competence or detailed familiarity.
For the concept you’ve been learning, can you:
Explain simply:
- Explain it clearly to a non-expert in 2 minutes
- Use your own examples, not AI’s
- Avoid jargon or define terms when necessary
Identify boundaries:
- Name 3 situations where using it would be inappropriate
- Explain why alternatives would serve better in those situations
- Articulate its main limitations honestly
Apply practically:
- Describe what you’d do first if using it
- Identify key decisions you’d need to make
- Anticipate challenges you’d face
Evaluate critically:
- Compare it to 2 alternative approaches meaningfully
- Explain when you’d choose each alternative
- Recognise potential misapplication
Interpreting your results
All 12 checked: Genuine competence—you can deploy this in your work
8-11 checked: Working competence—you could use this with guidance
4-7 checked: Developing familiarity—you need deeper engagement
0-3 checked: Superficial familiarity—consider whether you need competence
If you have gaps: Identify which specific capabilities you’re missing, then work through those complexity stages again with targeted testing.
Literacy note: This honest metacognitive assessment—accurately recognising what you understand versus what you merely recognise—is crucial for adaptation-level literacy. Overconfident assessment leads to deployment failures.
Activity
Metacognitive reflection
Time required: 5 minutes
Reflect on what you learned about learning itself, not just the concept.
Process reflection:
How did staged complexity differ from reading a sophisticated explanation all at once?
What role did immediate testing play in revealing what you actually understood?
Did the co-evolutionary approach (tailoring to your research) help build understanding?
Literacy development:
Did this build genuine competence or just detailed familiarity? How do you know?
Would you use this approach again for other concepts requiring competence?
What did you learn about your learning process itself?
⚠️ Illusion check: If you think you understand but can’t check most boxes on the competence checklist, you have the illusion of understanding. Feeling like you understand ≠ being able to deploy understanding.
Your commitment
This week I will:
- Apply [concept I learned] to [specific aspect of my research]
- Schedule ONE 90-minute session to build competence in [another concept]
- Test understanding through [explanation/application/comparison] before assuming I’m ready
Specific task:
Day/time:
Key takeaways
- Competence requires active construction through testing: Information consumption differs fundamentally from competence building. Consumption is passive—you read sophisticated explanations, take notes, move on. You recognise concepts but can’t use them. Competence is active—you engage through staged complexity, test understanding at each level, and work through confusion until concepts become deployable tools.
- Immediate testing prevents the illusion of understanding: Reading sophisticated explanations creates the illusion of understanding—it feels like learning but produces no usable knowledge. Testing reveals the difference between recognition and genuine understanding, so test immediately after each complexity stage before progressing.
- Progressive staging builds on solid foundations: Jumping straight to sophisticated explanations creates detailed familiarity without competence—you recognise terminology but can’t deploy concepts. Stage complexity deliberately instead: get the basic idea first (5-10 min), understand when to use it (10-15 min), then learn how to do it (15-20 min), testing understanding at each stage before progressing.
- Co-evolutionary learning tailors to your context: Generic explanations produce generic understanding. Co-evolutionary learning—where you provide research context and AI adapts explanations to your situation—produces understanding connected to your work. This iterative, contextual approach builds knowledge you can actually use, not just abstract familiarity.
Pause and reflect
Based on this lesson, what concept will you build competence in this week? How will you test whether you’ve achieved genuine understanding? Document this commitment in your Action Journal.
Looking ahead
This lesson developed competence through staged complexity—active construction of understanding through AI-supported learning. You’ve moved from substitution (extract information in 5 minutes) to adaptation (build competence in 60 minutes).
The next adaptation lesson explores using AI for argument development—testing your thinking through structured dialogue. Both lessons emphasise creation/communication (actively constructing understanding), contextual judgement (recognising what serves your goals), and metacognitive awareness (knowing what you actually understand).
The adaptation stage is about reshaping practice around AI capabilities—not just using AI more effectively for existing tasks, but discovering what becomes possible when AI supports sustained intellectual work.
Resources
- Ericsson, K.A. (2008). Deliberate practice and acquisition of expert performance. Academic Emergency Medicine, 15(11), 988-994.
- Sadler, D.R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550.
- Flavell, J.H. (1979). Metacognition and cognitive monitoring. American Psychologist, 34(10), 906-911.
- Dunlosky, J. & Metcalfe, J. (2008). Metacognition. Sage Publications.
- Zimmerman, B.J. (2002). Becoming a self-regulated learner. Theory into Practice, 41(2), 64-70.
- Adler, M.J. & Van Doren, C. (1972). How to read a book: The classic guide to intelligent reading. Touchstone.
- Karnofsky, H. (2021). Reading books vs engaging with them. Cold Takes.