Lesson overview
Objective: Apply functional application literacy to reading, using AI for information extraction while developing critical judgement about when extraction suffices and when comprehension requires direct engagement with the text
Summary: This lesson focuses on substitution-level reading: using AI to extract information efficiently when that’s what you need. You’ll learn to triage your reading backlog, extract information strategically, and develop judgement about when summaries suffice versus when you need deep reading. The goal is creating headspace for papers that genuinely matter.
Key habits:
- Strategic triage: Use AI to rapidly assess relevance before investing reading time
- Extraction versus comprehension: Recognise which goal you have for each paper and choose your approach accordingly
- Reading infrastructure: Maintain a system that distinguishes papers needing deep reading from those where extraction suffices
The contrast
Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking.
Albert Einstein
Dr. James Chen looks at his reading backlog: 47 papers marked “need to read.” He’s been trying to read them all carefully, one at a time. Last week, he spent 2 hours on one paper about qualitative methodology. It was good, but not directly relevant to his work. He still has 46 papers to go. The backlog grows faster than he can manage it.
He feels perpetually behind on his field.
Dr. Sarah Martinez has a similar backlog. On Monday morning, she spends 20 minutes using AI to process 10 papers from her queue:
- 2 are directly relevant to her research → marked for deep reading this week
- 5 provide useful information but don’t need careful reading → extracts key details, documents notes
- 3 turn out to be irrelevant → deletes from backlog
Twenty minutes later, she's triaged all 10 papers and knows exactly what deserves her sustained attention. Tuesday afternoon, she reads the 2 important papers carefully, building genuine understanding. Wednesday, she returns to her backlog and processes 8 more papers.
Three weeks later, James has spent roughly 50 hours on his backlog and carefully read only 3 more papers. Sarah has spent 20 hours in total: 45 papers triaged and 8 read deeply.
Before we begin
How do you currently approach your reading backlog? Do you try to read everything carefully? How does that work for you?
Extraction versus comprehension
You’ve used AI for content creation (lesson 3). The same principle applies to reading—AI handles information extraction while you provide scholarly judgement.
The crucial distinction for reading: information extraction versus comprehension building.
Information extraction answers specific questions: What’s the main argument? What methodology did they use? Is this relevant to my work? You’re getting information out of the text efficiently.
Comprehension building develops understanding: How does this methodology work? Why does this theoretical framework matter? What are the implications? You’re building competence and insight through engagement with the text.
AI literacy means recognising which goal you have for each paper, then choosing your approach accordingly. Someone who is truly literate doesn’t just know how to use AI for extraction; they know when extraction serves their purposes and when it doesn’t.
This lesson focuses on substitution-level reading: using AI to extract information efficiently when that’s what you need. You’re not changing your reading practice fundamentally; you’re adding AI to accelerate extraction so you can read more strategically.
How this builds on previous lessons
Substitution framework continues:
- Same goal: Stay current with your field, identify relevant papers, extract information for your work
- Same process: You still decide what to read, evaluate relevance, determine what you need
- What changes: AI extracts information faster, allowing you to process more texts strategically
Same literacy dimensions as lesson 3:
- Functional application: Using extraction-level engagement effectively
- Critical evaluation: Honestly assessing whether extraction gave you what you needed
- Binary assessment: Did this work or didn’t it?
Same purpose: Creating headspace. Time saved through AI extraction should go toward deeper reading of papers that genuinely matter, not just processing more volume.
Quick reflection
If you could process your reading backlog efficiently, what papers would you make time to read deeply?
Calibration exercise: Can you trust AI summaries?
Before using AI to extract information, you need to calibrate—understanding what AI gets right and what it misses in your specific domain.
Calibration activity (5 minutes)
Choose one paper you’ve already read carefully—something you know well enough to evaluate a summary.
Ask AI to: “Summarise this paper in 200 words covering: (1) main research question, (2) methodology used, (3) key findings, (4) main conclusions.”
Now compare AI’s summary to your understanding:
- What AI got right:
- What AI missed or oversimplified:
- Could you have cited this paper confidently based only on this summary? Yes / No
Calibration insight: This exercise shows you AI’s limitations in your domain. You’re building critical evaluation skills by understanding where AI summaries are trustworthy and where they need your verification.
Workflow 1: Basic summarisation
The most common use of AI for reading is summarisation—condensing papers into key points. This works when you need to know what a paper says rather than deeply understand it.
Faded practice: From observation to independence
Stage 1: Observe expert application
Here’s a complete summarisation prompt with critical evaluation. Notice how it specifies exactly which elements to include:
[Prompt] “Summarise this paper in 200 words covering: (1) main research question, (2) methodology used, (3) key findings, (4) main conclusions or implications.”
[AI generates summary]
[Critical evaluation] After receiving the summary, ask yourself:
- Does this tell me what I needed to know?
- Can I determine relevance based on this?
- Would I feel comfortable citing this paper based on this information?
- What’s missing that matters for my purposes?
Self-explanation
Why specify “200 words” rather than just asking for “a summary”?
Show answer
Specifying word count prevents AI from generating either too-brief summaries (missing key details) or too-long ones (defeating the efficiency purpose). 200 words provides enough detail for most triage purposes while being quick to read. This is functional application—clear communication about exactly what you need.
Stage 2: Apply to your work
Choose one paper from your reading backlog. Create a summarisation prompt for it, being specific about what elements you need.
Your turn
Write your summarisation prompt, then evaluate:
- Did this give you what you needed? Yes / No
- What would you need to know before citing this paper?
Stage 3: Refine your approach
If the summary was insufficient, what specific element was missing? Revise your prompt to request that information.
When basic summarisation works well:
- Checking relevance before deeper reading
- Getting the gist of papers in adjacent fields
- Quickly reviewing papers you read previously
- Scanning what’s new in your field
When you need more:
- Papers central to your research
- Methodologies you’re considering adopting
- Theoretical frameworks you need to apply
- Papers you’ll engage critically in your writing
Typical time comparison:
- Reading and noting a paper manually: 30-60 minutes
- AI summary + evaluation: 3-5 minutes
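If your backlog is large and you're comfortable with light scripting, the same prompt can be run programmatically. Below is a minimal sketch, assuming the OpenAI Python SDK (pip install openai) with an API key in your environment; the file name paper.txt and the model name are placeholders, and any chat-capable model or provider would work the same way.

```python
# Minimal summarisation sketch (assumptions: OpenAI Python SDK,
# OPENAI_API_KEY set in the environment, a plain-text copy of the paper).
from openai import OpenAI

client = OpenAI()

with open("paper.txt", encoding="utf-8") as f:  # placeholder file name
    paper_text = f.read()

prompt = (
    "Summarise this paper in 200 words covering: "
    "(1) main research question, (2) methodology used, "
    "(3) key findings, (4) main conclusions or implications.\n\n"
    + paper_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The critical evaluation step stays manual: check the output against the four questions from Stage 1 before relying on it.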
Workflow 2: Targeted extraction
Often you need specific details rather than general summaries. This is targeted extraction—more efficient than reading entire papers when you have clear information needs.
The approach
Instead of asking for everything, specify exactly what you need:
Prompt structure:
“From this paper, extract: (1) theoretical framework used, (2) sample size and characteristics, (3) how they measured [specific variable], (4) acknowledged limitations.”
Critical evaluation: Did this extraction give you exactly what you needed? Is anything missing that matters for your purposes?
When this works well:
- Writing literature reviews (need methodology details)
- Checking methods before citing
- Comparing approaches across studies
- Identifying gaps in existing research
Quick practice
Choose a paper you need to cite. What specific information do you need from it? Write a targeted extraction prompt.
Self-check:
- I specified exactly which details I needed
- The extraction provided citation-ready information
- I know what’s still missing (if anything)
- I’ve documented whether I need to read the full paper
Literacy note: This targeted approach requires clear communication (functional application from lesson 2) about exactly what you need. Vague requests produce vague results. Remember the RGID framework—specificity in instructions produces better outputs.
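If you extract the same details from many papers, the prompt structure above can be parameterised. A sketch under the same assumptions as before (OpenAI Python SDK, placeholder model name); the field list mirrors the example prompt and should be adapted to your own needs.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_fields(paper_text: str, fields: list[str]) -> str:
    """Ask the model to extract only the named fields from a paper."""
    numbered = ", ".join(f"({i}) {field}" for i, field in enumerate(fields, start=1))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"From this paper, extract: {numbered}.\n\n{paper_text}",
        }],
    )
    return response.choices[0].message.content

# Example call mirroring the prompt structure above (hypothetical fields):
# notes = extract_fields(open("paper.txt").read(), [
#     "theoretical framework used",
#     "sample size and characteristics",
#     "how they measured the main outcome variable",
#     "acknowledged limitations",
# ])
```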
Workflow 3: Checking relevance efficiently
Before investing reading time, quickly assess whether papers warrant your attention.
The relevance checking workflow
Prompt structure
“I’m researching [your specific topic]. Based on this paper’s abstract and introduction, is it relevant? What specific contributions does it make that might be useful for my work?”
Critical evaluation
Does this help you decide whether to read further? Does it identify connections you might have missed?
When this suffices
Quick triage prevents reading papers that won’t serve your work. This is efficiency for headspace—redirecting time toward papers that matter.
Time saved
- Checking relevance by reading abstracts: 10-15 minutes per paper
- AI relevance assessment: 2-3 minutes per paper
Quick practice
Take 3 papers from your backlog. For each, ask AI whether it’s relevant to your specific research question.
Paper 1: Title: ___ | AI says: Relevant / Possibly relevant / Not relevant | Your assessment: Agree / Disagree
Paper 2: Title: ___ | AI says: Relevant / Possibly relevant / Not relevant | Your assessment: Agree / Disagree
Paper 3: Title: ___ | AI says: Relevant / Possibly relevant / Not relevant | Your assessment: Agree / Disagree
How many could you confidently remove from your backlog?
Literacy insight: This relevance filtering requires contextual judgement. AI can identify potential connections, but you evaluate whether those connections matter for your scholarly goals. Your domain expertise trumps AI’s suggestions.
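Relevance checking is where scripting pays off most, because the same question runs over many abstracts. A minimal batch sketch under the same assumptions as the earlier examples; the research question, file format, and model name are all placeholders to replace with your own.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RESEARCH_QUESTION = "your specific research topic"  # placeholder

def check_relevance(abstract: str) -> str:
    """Ask for a triage verdict plus a one-sentence justification."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"I'm researching {RESEARCH_QUESTION}. Based on this abstract, "
                "answer 'Relevant', 'Possibly relevant', or 'Not relevant', "
                f"then give one sentence of justification.\n\n{abstract}"
            ),
        }],
    )
    return response.choices[0].message.content

# Hypothetical usage: abstracts.txt holds one abstract per blank-line-separated block.
# for abstract in open("abstracts.txt", encoding="utf-8").read().split("\n\n"):
#     print(check_relevance(abstract), "\n")
```

The "Your assessment: Agree / Disagree" column stays yours; the script only produces AI's side of the comparison.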
Decision point: Extraction versus deep reading
The crucial judgement is recognising when extraction serves your goals versus when you need deeper engagement. Let’s practice this decision-making.
The scenario
You’ve found a paper on qualitative coding that looks relevant. You have 30 minutes before your next meeting. The paper is 8,000 words.
What’s your approach?
Option A: Use AI for quick summary, move on
Your approach: You ask AI for a 150-word summary. It tells you the paper uses grounded theory for interview analysis. Sounds useful. You save it to cite.
What happens: Two weeks later, writing your methodology section, you realise you don’t understand grounded theory well enough to compare it to your approach. You need to read the paper properly anyway, but now you’re on a deadline.
Time total: 3 minutes now + 40 minutes later when rushed = 43 minutes
Learning: Extraction worked for the initial relevance check but not for the depth you actually needed; your writing revealed insufficient understanding. You should have either read the paper properly at the outset or noted that it needed deeper reading.
Option B: Read the paper carefully for 30 minutes
Your approach: You start reading from the beginning, taking notes carefully.
What happens: You get through the introduction and literature review but don't finish. The meeting interrupts you. Later that day, you can't remember the details and need to re-read sections. The paper takes 90 minutes total across three sessions.
Time total: 30 + 30 + 30 = 90 minutes (fragmented)
Learning: Deep reading is important, but 30-minute chunks aren't sufficient for an 8,000-word paper. You should have either allocated proper time or triaged with extraction first to determine whether full reading was necessary.
Option C: Use AI for detailed extraction, assess if you need more
Your approach: You spend 5 minutes getting AI to extract: coding approach used, how grounded theory is applied, what makes this approach different from other methods, acknowledged limitations, and key takeaways about when this approach works well.
What happens: You now understand it's similar to your approach but emphasises different aspects of emergence in coding. You save detailed notes. When writing your methodology section, you know exactly which 2 pages to read carefully for the specific variation you want to discuss.
Time total: 5 + 15 = 20 minutes (strategic)
Learning: Targeted extraction helped you understand relevance AND what specifically to read deeply. You invested reading time where it mattered most. This is strategic triage serving your scholarly goals.
Pause and reflect
Which approach best served scholarly goals? When might each be appropriate?
Decision principle: Extraction first for triage, then read deeply only what your work requires. Don’t default to either always extracting or always reading everything carefully.
When to extract versus when to read deeply
Before processing any paper, use this decision checklist to determine your approach:
Decision checklist
For each paper, answer these questions:
1. Will I be citing specific claims or arguments from this paper? → YES: Needs verification through reading → NO: Continue to question 2
2. Do I need to understand HOW something works (not just WHAT they found)? → YES: Deep reading required → NO: Continue to question 3
3. Is this potentially central to my research? → YES: Deep reading required → NO: Continue to question 4
4. Am I just checking if it’s relevant to my work? → YES: Extraction sufficient → NO: Continue to question 5
5. Do I need to know WHAT they found or concluded? → YES: Extraction sufficient → NO: May not need this paper at all
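Because the checklist is a fixed sequence of yes/no questions, it is effectively a small decision tree, and it translates directly into code. A sketch for readers who like their heuristics executable; the function and argument names are invented for illustration, and the question order follows the list above exactly.

```python
def reading_decision(
    will_cite_specific_claims: bool,   # question 1
    need_to_understand_how: bool,      # question 2
    central_to_research: bool,         # question 3
    just_checking_relevance: bool,     # question 4
    need_what_they_found: bool,        # question 5
) -> str:
    """Mirror of the five-question checklist, evaluated in order."""
    if will_cite_specific_claims:
        return "Needs verification through reading"
    if need_to_understand_how or central_to_research:
        return "Deep reading required"
    if just_checking_relevance or need_what_they_found:
        return "Extraction sufficient"
    return "May not need this paper at all"

# Example: a paper you only need the findings of.
# reading_decision(False, False, False, False, True)  # -> "Extraction sufficient"
```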
Pause and reflect
Think of 2-3 papers currently in your backlog. Run each through this checklist. Do most require deep reading, or are you defaulting to deep reading when extraction would suffice?
⚠️ Literacy warning: If everything ends up as “deep reading required,” you may be defaulting to reading everything carefully out of anxiety rather than strategic judgement. Aim for roughly 20-30% deep reading, 40-50% extraction, 20-30% delete/irrelevant. Adjust based on your actual needs.
Activity
Process your backlog strategically
Time required: 15 minutes
The four-tier system
Categorise papers into four tiers based on what they require:
- Tier 1 (Read deeply): Central to my work, needs careful reading—extraction can’t replace engagement
- Tier 2 (Targeted reading): Interesting, will read specific sections—use extraction to identify which sections matter
- Tier 3 (Extraction sufficient): Good to know about, summary provides what I need—extraction serves my purposes
- Tier 4 (Not relevant): Can delete from backlog—extraction revealed it doesn’t serve my work
Process 5-7 papers
For each paper:
- Ask AI for a 150-word summary
- Ask AI: “Is this relevant to [your specific research question]?”
- Categorise using the four-tier system
Results
- Tier 1 (Read deeply): ___ papers
- Tier 2 (Targeted reading): ___ papers
- Tier 3 (Extraction sufficient): ___ papers
- Tier 4 (Delete from backlog): ___ papers
Track your time savings
Let’s make the efficiency gains concrete rather than abstract.
Before this activity:
- Papers in your current backlog: ___
- Average time you spend reading a paper carefully: ___ minutes
- Number of papers you typically process per week: ___
After this activity:
- Papers processed in 15 minutes: ___
- Papers you’re confident you understand well enough: ___
- Papers you removed from backlog as not relevant: ___
- Time you would have spent reading these papers manually: ___ minutes
- Time actually spent with AI extraction: ___ minutes
- Time saved: ___ minutes
Weekly projection: If you processed 10-15 papers per week this way instead of trying to read everything carefully:
- Traditional approach: ___ hours/week reading everything
- Strategic approach: ___ hours/week with triage + targeted deep reading
- Potential time savings: ___ hours/week
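To make the projection concrete, here is a worked example with hypothetical numbers; substitute the figures you recorded above.

```python
# Worked weekly projection with hypothetical numbers; replace with your own.
papers_per_week = 12
careful_read_min = 45     # average time for one careful read
triage_min = 4            # AI summary + relevance check per paper
deep_reads_per_week = 2   # Tier 1 papers that survive triage

traditional_min = papers_per_week * careful_read_min                              # 540
strategic_min = papers_per_week * triage_min + deep_reads_per_week * careful_read_min  # 138

print(f"Traditional: {traditional_min / 60:.1f} h/week")                    # 9.0
print(f"Strategic:   {strategic_min / 60:.1f} h/week")                      # 2.3
print(f"Saved:       {(traditional_min - strategic_min) / 60:.1f} h/week")  # 6.7
```

With these assumed numbers, triage plus targeted deep reading reclaims roughly 6-7 hours per week; your own figures will differ.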
Pause and reflect
What will you do with 2-3 reclaimed hours per week? This time should go toward deeper reading of papers that truly matter, not just processing more volume.
Set up your reading infrastructure
Create a simple system for tracking your backlog that distinguishes extraction from deep reading.
Choose your approach
Option A (Low-tech): Folders in reference manager
- Read deeply (Tier 1)
- Targeted reading (Tier 2)
- Extraction sufficient (Tier 3)
- Processed (with saved summaries)
Option B (Spreadsheet):
| Paper title | Tier | Summary/notes | Status | Date processed |
|---|---|---|---|---|
| [Title] | 1 | [Brief notes] | Not started | 2024-12-11 |
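If you prefer Option B but want a plain file rather than a spreadsheet application, the same columns work as CSV. A minimal sketch; reading_log.csv and the example row are placeholders.

```python
import csv
from datetime import date

# Append one processed paper to a plain-file version of the Option B tracker.
row = {
    "Paper title": "Example title",  # placeholder entry
    "Tier": 3,
    "Summary/notes": "Key finding noted; extraction sufficed.",
    "Status": "Processed",
    "Date processed": date.today().isoformat(),
}

with open("reading_log.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(row))
    if f.tell() == 0:  # empty file: write the header row first
        writer.writeheader()
    writer.writerow(row)
```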
Set it up now
- I’ve created my tracking system
- I’ve moved the papers from this activity into appropriate categories
- System is ready to use for future papers
This week’s commitment:
- Process 5-10 more papers from my backlog using AI extraction
- Read deeply at least ONE Tier 1 paper without AI
- Document what I learn in my own words
- Specific day/time for deep reading: ___
- Calendar entry created: Yes / Not yet
Literacy note: This infrastructure supports metacognition—reflecting on your reading goals and making intentional choices about engagement level.
⚠️ Maintaining critical reading skills: Plan to read at least 20-30% of papers without AI assistance to maintain your ability to critically evaluate texts. If you only read through AI summaries, your evaluation skills atrophy.
Key takeaways
- Extraction versus comprehension is a literacy distinction: Information extraction and comprehension building serve different reading goals. Extraction answers specific questions about what a paper says. Comprehension builds understanding of how something works and why it matters, which requires your own engagement with texts. Recognising which goal you have for each paper is fundamental to contextual judgement.
- Knowing when summaries suffice requires judgement: Extraction suffices for routine monitoring of your field, scanning breadth in adjacent areas, checking relevance before investing reading time, and retrieving specific information. You need deeper engagement for papers central to your research, methodologies you're considering adopting, and theoretical frameworks you'll apply.
- Strategic triage prevents backlog paralysis: Most academics have reading backlogs growing faster than they can manage. AI helps you actually process your backlog by enabling rapid triage: identifying what needs careful reading, what needs only information extraction, and what's not relevant. You read less total volume but read the right things at the right depth.
- Guard against over-reliance: Using AI for extraction carries the risk of defaulting to summaries when you need comprehension. Guard against this by setting explicit criteria for when you'll read papers yourself, regularly reading without AI to maintain critical reading skills, and documenting what you learn in your own words. Aim for 20-30% of your reading to be deep engagement without AI assistance.
Your commitment
Pause and reflect
Based on this lesson, how will you process your reading backlog this week? What’s one Tier 1 paper you commit to reading deeply? Document this commitment in your Action Journal.
Looking ahead
You’ve applied functional application literacy to both content creation (lesson 3) and reading (this lesson). The next substitution lesson applies these same capabilities to writing assistance—using AI as a writing partner while maintaining your distinctive scholarly voice.
Before moving on, make sure you’ve set up your reading tracking system and scheduled time for deep reading of at least one important paper. Both extraction and deep reading matter—literacy is knowing when to use which.
Resources
- Adler, M.J. & Van Doren, C. (1972). How to read a book: The classic guide to intelligent reading. Touchstone.
- Karnofsky, H. (2021). Reading books vs engaging with them. Cold Takes.
- Howard, P.N. & Jamil, S. (2023). The use of generative AI to support research. SocArXiv.
- Shanahan, M. (2024). Talking about large language models. Communications of the ACM, 67(2), 68-79.
- Weinberger, D. (2011). Too big to know: Rethinking knowledge now that the facts aren’t the facts. Basic Books.