Lesson overview
Objective: Develop evaluative judgement about argument construction—using AI for pre-writing exploration whilst maintaining your distinctive analytical voice and scholarly goals
Summary: This lesson introduces pre-writing exploration: testing argument structures, evidence sufficiency, and implicit assumptions before committing to prose. You’ll learn to distinguish challenges that strengthen your thinking from those that complicate unnecessarily—a core adaptation-level skill called evaluative judgement.
Key habits:
- Pre-writing exploration: Test arguments systematically before drafting to reveal structural problems early
- Evaluative judgement: Distinguish helpful challenges from unnecessary complexity based on your goals
- Proportionate response: Address weaknesses at appropriate depth rather than over- or under-responding
The drafting paralysis problem
The first draft is just you telling yourself the story.
Terry Pratchett
You’re writing an argument section. Three hours later, you have 2,000 words. Reading it back, something feels wrong:
- The logic seems circular
- Evidence doesn’t quite support your claims
- A major assumption you didn’t realise you were making is now glaringly obvious
- The structure emphasises tangential points and buries your strongest evidence
- You realise there’s a counterargument you should have anticipated
You start revising. But the problems aren’t surface-level—they’re structural. You need to rethink the argument fundamentally. Six hours total, and you’re restarting.
What happened? You tried to develop and articulate your argument simultaneously. You discovered problems after committing to prose.
This lesson shows you how to work through argument development BEFORE writing a single sentence.
The contrast
Dr. Martinez needs to argue that increased administrative workload decreases research productivity. She opens a document and starts writing: “Universities should recognise that administrative burden…”
Three hours later, she has 2,000 words. Re-reading, she realises:
- Her evidence shows correlation, not causation
- She assumed teaching and research are zero-sum without defending this
- Her structure leads with methodology instead of findings
- There’s a selection effect problem she didn’t address
She starts over. Total time: 6+ hours of writing and rewriting. Still unsatisfied.
Dr. Thompson needs to make the same argument. She spends 45 minutes with AI before writing anything:
- Explores three argument structures, chooses one that leads with findings
- Tests whether evidence supports causal claims (realises she needs to qualify)
- Surfaces the zero-sum assumption and decides to defend it explicitly
- Generates strongest counterargument (selection effects) and plans how to address it
- Identifies which evidence is necessary vs merely available
Then she drafts for 2 hours—articulating arguments she’s already tested. The prose flows because she knows exactly what she’s arguing and why. Total time: about 3 hours, and a confident submission.
Before we begin
Think about your last argument section. Did you discover structural problems after you’d already drafted? What would have changed if you’d explored the argument first?
Argument weakness calibration
Before learning pre-writing exploration, assess your current argument development practice.
Calibration exercise (5 minutes)
Think of the last significant argument you wrote (paper section, chapter, substantial blog post).
Answer honestly:
1. Did you discover after drafting that your evidence didn’t fully support your claims? Yes / No / Partially
2. Did reviewers or readers point out assumptions you hadn’t consciously made? Yes / No / Partially
3. Did you realise after writing that the structure emphasised the wrong elements? Yes / No / Partially
4. Did counterarguments emerge in review that you should have anticipated? Yes / No / Partially
5. Did you have to substantially restructure or rewrite after initial drafting? Yes / No / Partially
Interpreting your results
3+ “Yes” answers: You discover argument problems after committing to prose. Pre-writing exploration would reveal these issues before you write, saving revision time and producing stronger arguments.
1-2 “Yes” answers: You have some argument development practice but could strengthen systematic testing.
0 “Yes” answers: Either you have exceptional argument development skills, or you haven’t had sufficient critical review to see gaps yet.
Calibration insight: Most academics draft first, then discover structural problems. This lesson teaches systematic pre-writing exploration that tests arguments before you commit to prose. It takes 30-45 minutes upfront but prevents hours of structural revision.
Pre-writing versus drafting-first
Drafting-first approach (substitution): AI drafts prose → you revise extensively → discover structural problems → rewrite
Pre-writing exploration (adaptation): AI helps explore arguments → you test reasoning systematically → you draft prose with tested structure
This reshapes your writing practice fundamentally. Instead of discovering problems while drafting, you work through complexity upfront.
Time investment:
- Drafting-first: 0 minutes upfront + 3-6 hours drafting/revising/restructuring
- Pre-writing: 45 minutes exploration + 2-3 hours confident drafting = roughly 3 hours total
Literacy focus:
- Substitution: Functional application, voice preservation
- Adaptation: Evaluative judgement (recognising helpful vs unhelpful challenges), contextual judgement (choosing structures that serve goals)
Understanding the shift
What changes from substitution to adaptation:
Substitution focuses on prose—getting words on the page efficiently. Adaptation focuses on reasoning—testing argument structure before writing.
In substitution, you use AI for specific bounded tasks: draft an email, revise a paragraph, generate alternatives. In adaptation, you use AI for sustained intellectual work: explore argument structures over 30+ minutes, test reasoning from multiple angles, surface assumptions you didn’t know you were making.
The adaptation stage is about discovering what becomes possible when AI supports extended pre-writing exploration—something that wasn’t feasible when you were thinking alone.
Quick reflection
When you write arguments, do you draft first then discover problems, or explore systematically before writing?
Exploring argument structures
The same evidence can support different arguments depending on structure. Most academics use the first structure that occurs to them. Adaptation-level literacy means deliberately exploring alternatives.
Faded practice: From observation to independence
Stage 1: Observe expert application
Here’s how an experienced scholar explores structural options systematically:
[Scholar’s prompt] “I’m arguing that teaching evaluations are biased against women faculty. My evidence includes: meta-analysis showing gender gaps in ratings, experimental studies with identical teaching but different perceived genders, qualitative data on gendered language in comments. Suggest three different ways to structure this argument. For each: what it emphasises, what logical flow it creates, what it might obscure.”
[AI suggests 3 structures]
Structure 1: Meta-analysis → experimental evidence → qualitative confirmation (Emphasises: statistical strength, de-emphasises: lived experience)
Structure 2: Lived experience (qualitative) → experimental explanation → statistical scope (Emphasises: human impact, de-emphasises: generalisability)
Structure 3: Problem (bias exists) → mechanisms (why it happens) → evidence (across methods) (Emphasises: theoretical understanding, de-emphasises: methodological rigour)
[Scholar evaluates against goals] “My audience is sceptical administrators who value data over stories. Structure 1 serves this—lead with statistics, establish pattern, then explain mechanisms through qualitative data. Structure 2 would feel anecdotal to them despite strong evidence. Structure 3 risks seeming abstract.”
[Scholar makes deliberate choice] “I’ll use Structure 1 because it meets administrators where they are—they need statistical evidence first. I’m choosing to de-emphasise lived experience in the structure while still including it as confirmation.”
Notice: The scholar judged structures against specific goals (persuade sceptical administrators), not abstract quality. They recognised what each structure emphasises and made a deliberate choice about what to foreground.
Self-explanation
Why generate three structures when you might already have one in mind?
Show answer
Generating alternatives makes structure a conscious choice rather than default. Your first idea might be good, but seeing alternatives reveals what you’re emphasising/de-emphasising. This lets you choose deliberately based on goals rather than using whatever occurred to you first.
Stage 2: Guided evaluation
Your turn
Think of an argument you’re currently developing. Write your claim and main evidence briefly.
My argument claim:
My main evidence:
My audience and goals:
Now prompt AI:
“I’m arguing that [claim]. My evidence includes [list]. Suggest three different ways to structure this argument. For each: what it emphasises, what logical flow it creates, what it might obscure.”
Three structural options AI suggested:
Structure 1:
- Flow:
- Emphasises:
- De-emphasises:
Structure 2:
- Flow:
- Emphasises:
- De-emphasises:
Structure 3:
- Flow:
- Emphasises:
- De-emphasises:
Now evaluate against YOUR goals:
Which structure best serves what you need this argument to accomplish?
Why does this structure serve your goals?
What are you choosing to emphasise?
What are you comfortable de-emphasising?
Self-check:
- I evaluated structures against my specific goals
- I made a deliberate choice, not accepting first suggestion
- I understand what this structure foregrounds vs backgrounds
- This reflects my analytical approach, not AI’s
Stage 3: Independent application
You now know the pattern: generate structural options → evaluate against your goals → make deliberate choices about emphasis. Apply it on your own the next time you develop an argument; a sample prompt follows below.
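A minimal sketch of the full pattern as a single prompt, following the bracketed-placeholder style of the templates above (fill in your own material):
“I’m arguing that [claim]. My evidence includes [list]. My audience is [audience], and my goal is [goal]. Suggest three different ways to structure this argument. For each: what it emphasises, what logical flow it creates, and what it might obscure.”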
Testing evidence: Does it support your claims?
Strong arguments require evidence that actually supports claims. But it’s easy to assume connections that don’t fully hold. Systematic testing reveals gaps before reviewers find them.
The sufficiency test
Prompt structure:
“I’m claiming [your claim] based on [your evidence]. Evaluate whether this evidence is sufficient to support this claim. What alternative explanations could account for my evidence without supporting my conclusion? What additional evidence would strengthen the claim?”
Example:
“I’m claiming workload increases cause research productivity decline based on survey data showing both increased workload and decreased output over 5 years. Is this evidence sufficient? What alternative explanations could account for both patterns?”
Quick practice
Apply the sufficiency test to your argument from the previous exercise.
Your sufficiency test prompt:
What AI identified:
Alternative explanations AI suggested:
Additional evidence that would help:
Now evaluate: Is this helpful or unhelpful?
This revealed a genuine weakness I need to address: Yes / No
If yes, what will you do?
- Qualify my claims (acknowledge correlation ≠ causation)
- Find additional evidence
- Restructure to make a weaker but defensible claim
- Explain why alternatives are unlikely
This introduced unnecessary complexity: Yes / No
If yes, why was this unhelpful?
Literacy insight: This honest assessment of whether challenges help is evaluative judgement developing. Not all AI suggestions improve arguments. Some reveal genuine weaknesses; others overcomplicate. Your judgement about which is which develops through practice.
Testing evidence: Counterevidence and objections
Testing isn’t just about whether evidence supports claims—it’s about what contradicts them.
The steel man test
Don’t ask AI for weak objections you can easily dismiss. Ask for the strongest possible challenge to your thinking.
Prompt structure:
“Present the strongest possible argument against my claim that [your claim]. What would the most sophisticated sceptic say? Don’t give me easy objections—give me the ones that would actually challenge my thinking.”
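Example (a sketch reusing the workload claim from the sufficiency test):
“Present the strongest possible argument against my claim that workload increases cause research productivity decline. What would the most sophisticated sceptic say? Don’t give me easy objections—give me the ones that would actually challenge my thinking.”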
Why “steel man” not “straw man”?
Straw man: A weak version of an opposing argument that’s easy to knock down.
Steel man: The strongest possible version of an opposing argument, one that genuinely challenges you.
Steel man objections are more useful. If you can address the strongest version, you’ve developed a robust argument. If you can only address weak objections, your argument has gaps.
Quick practice
Generate a steel man objection to your argument.
Your steel man prompt:
Strongest objection AI generated:
Evaluate this challenge:
This revealed something I need to address: Yes / No / Partially
This is a genuine intellectual challenge: Yes / No
This introduced unnecessary complexity: Yes / No
My plan for addressing this:
- Address it directly in my argument
- Acknowledge as a limitation
- Explain why it doesn’t undermine my core claim
- Ignore it (explain why below)
Self-check:
- I took the objection seriously
- I evaluated whether it genuinely weakens my argument
- I have a plan for how to handle it
- I didn’t dismiss it just because it’s uncomfortable
Surfacing implicit assumptions
We’re often blind to our own assumptions. AI can surface them for explicit examination.
The three assumption types
Conceptual assumptions: What you’re assuming about how concepts work
Causal assumptions: What mechanisms you’re assuming connect cause and effect
Normative assumptions: What values you’re treating as obvious when they might be contested
Choose the ONE most relevant to your argument:
Surface your key assumptions
Choose your assumption type: Conceptual / Causal / Normative
Conceptual assumptions prompt: “What am I assuming about how key concepts work in my argument about [topic]? What definitional choices am I making implicitly?”
Causal assumptions prompt: “What causal relationships am I assuming in my argument? What mechanisms am I implying connect [cause] to [effect]?”
Normative assumptions prompt: “What values or priorities am I treating as obvious in my argument? Where might readers from different perspectives challenge my normative premises?”
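Example (a sketch of the causal version, reusing the workload claim from earlier):
“What causal relationships am I assuming in my argument that workload increases cause research productivity decline? What mechanisms am I implying connect increased workload to decreased research output?”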
Your prompt:
Key assumptions AI surfaced:
1.
2.
3.
For the most important assumption, I will:
- Defend it explicitly (make it visible and justify it)
- Modify it to be more defensible (weaken or qualify)
- Acknowledge it as a limitation (note but don’t resolve)
- Test whether it holds (gather evidence)
Why this approach?
Literacy insight: Deciding how to handle assumptions requires contextual judgement about your audience and goals. There’s no formula—it depends on what you’re trying to accomplish and what your readers will accept.
Decision point: Helpful versus unhelpful challenges
Not all AI challenges improve arguments. Let’s practice distinguishing helpful from unhelpful suggestions.
The scenario
You’re arguing that increased workload decreases research productivity based on longitudinal survey data showing both trends.
AI suggests: “But what if high achievers choose heavy workloads? Your evidence might show correlation, not causation. You need to address selection effects, endogeneity, and omitted variable bias. Consider instrumental variable approaches or regression discontinuity designs.”
How do you respond?
Response A: Accept uncritically
You decide: AI is right—this is fatal. You abandon your straightforward argument and spend hours researching selection effects. You try to find instrumental variables in your survey data (there aren’t any). Your argument becomes defensive and methodologically complex.
What happens: Your core claim gets buried in methodological details. Readers lose track of your main point. You’re now defending technical choices tangential to your argument. The paper becomes about methods, not findings.
Total time: 6+ hours on methodological complexity
Learning: Not all challenges improve arguments. Selection effects might be relevant, but the level of methodological sophistication AI suggested distracted from your goals. The challenge introduced complexity that didn’t serve your scholarly purposes.
Response B: Reject defensively
You decide: AI doesn’t understand my field or my data; the argument is fine as it is. You ignore the concern entirely.
What happens: You draft your argument. Submit. Reviewer 2 writes: “The authors conflate correlation with causation. Selection effects are not addressed. The causal claim is unjustified given the observational data.”
Major revisions or rejection.
Total time: Weeks of delay, revision cycles
Learning: You rejected a genuine concern. It was uncomfortable but legitimate. The challenge was worth engaging with, even if not at the level AI suggested.
Response C: Evaluate proportionately
You consider: Is this a genuine weakness or unnecessary complexity for my purposes?
Your decision: “Selection effects are a legitimate concern. But given my data and goals, I’ll acknowledge this proportionately rather than trying to rule it out completely. I can say: ‘While selection effects may play a role, the temporal pattern—where workload changes precede productivity changes within individuals—supports a causal interpretation. We acknowledge that experimental designs would provide stronger causal evidence.’”
What happens: You addressed the concern without getting derailed. Your argument anticipates the challenge proportionately. Reviewers see you’ve thought through limitations without letting them dominate the paper.
Total time: 15 minutes addressing it appropriately
Learning: This is evaluative judgement—recognising the concern is legitimate but deciding how much attention it deserves given your goals, evidence, and audience. Not every concern requires extensive methodological response.
Pause and reflect
What’s the difference between these responses? How do you decide what level of engagement a challenge deserves?
Decision principle: Evaluate challenges against your goals and evidence. Some reveal genuine weaknesses. Some introduce valuable sophistication. Some overcomplicate. Your judgement about which is which determines whether pre-writing exploration helps or hinders.
Evaluative judgement practice
Let’s practice the core literacy skill for this lesson: distinguishing helpful from unhelpful suggestions.
Scenario 1
Your argument: You’re conducting qualitative research on teacher experiences using rich interview data and interpretive analysis.
AI suggests: “Your evidence is anecdotal. You need quantitative data and larger sample sizes to make generalisable claims.”
Your decision: Accept / Modify / Reject
Show reasoning
Likely: Reject - AI applied quantitative standards to qualitative work inappropriately. Your epistemological approach values rich description over generalisability. This suggestion misunderstands qualitative research paradigms.
However, if your claims do overreach (claiming “all teachers” when you studied 12), you might Modify by being more careful about scope claims.
Scenario 2
Your argument: You’re making methodological claims in your field.
AI suggests: “You haven’t addressed Scholar X’s 2019 critique of this methodology. Their argument about [specific limitation] seems relevant to your work.”
Your decision: Accept / Modify / Reject
Show reasoning
Likely: Accept - If Scholar X’s critique is legitimate and known in your field, this is important scholarly engagement. Reviewers will expect you to address it.
Modify if: Scholar X’s critique applies to different contexts. You might acknowledge it briefly while explaining why it doesn’t apply to your work.
Reject if: Scholar X is not actually relevant to your specific approach.
Scenario 3
Your argument: You’re making a claim that rests on a disciplinary assumption common in your field.
AI suggests: “This assumption seems problematic. How do you know readers will accept it? You should defend it extensively.”
Your decision: Accept / Modify / Reject
Show reasoning
Likely: Modify - AI correctly identified an assumption, but the response depends on your audience.
If writing for your field: Modify by making the assumption explicit briefly without extensive defence (readers share this assumption).
If writing for broader audience: Accept by defending it more thoroughly (readers may not share this assumption).
Reject if: The assumption is so foundational that defending it would be condescending to your disciplinary audience.
Literacy insight: Evaluative judgement develops through practice. You learn to recognise patterns: what types of challenges tend to help your thinking, what types tend to complicate unnecessarily, how to distinguish productive from unproductive engagement.
Your distinctive analytical voice
Pre-writing exploration is valuable only if it strengthens YOUR thinking—not if it replaces it with generic approaches.
Here are three ways to engage with the same counterargument. Which reflects distinctive analytical voice versus generic or defensive prose?
Version A (Generic AI style)
“While critics may argue otherwise, the evidence clearly demonstrates that this interpretation is justified by the methodological approach employed. Furthermore, recent scholarship substantiates this perspective.”
Version B (Defensive)
“Some reviewers might question this, but they would be wrong. The evidence overwhelmingly supports my interpretation. Those who disagree simply don’t understand the methodology.”
Version C (Thoughtful analytical voice)
“Alternative interpretations exist—particularly from scholars prioritising experimental over observational data. But given the complexity of real-world academic contexts, observational approaches reveal patterns experiments can’t capture. The tradeoff between experimental control and ecological validity is worth it for addressing this question. We gain richness at the cost of causal certainty.”
Which sounds like distinctive scholarly engagement? Version C
What makes version C stronger?
- Acknowledges alternatives respectfully
- Explains reasoning transparently
- Shows awareness of tradeoffs
- Defends choices without defensiveness
- Sounds like a thoughtful scholar thinking aloud
Your analytical voice emerges through:
- Which structural options feel true to how you think
- What evidence you find genuinely compelling (not just convenient)
- How you interpret complexities without oversimplifying
- What assumptions you’re willing to defend vs acknowledge
- How you engage with counterarguments—respectfully and thoughtfully
Quick practice
Rewrite this generic sentence in your analytical voice:
Generic: “The data suggests that further research is needed in this area.”
Your voice:
Literacy note: Adaptation-level engagement means your practice reshapes around AI capabilities whilst your distinctive scholarly approach remains central. You’re using AI to think more rigorously like yourself—not to think like AI suggests.
Argument development map
Here’s how the pieces fit together in actual practice:
Pre-writing argument development sequence:
1. Explore structures (12 min)
↓ Choose structure that serves your goals
2. Test evidence sufficiency (7 min)
↓ Identify weaknesses to address
3. Generate steel man objection (7 min)
↓ Plan how to address proportionately
4. Surface key assumptions (6 min)
↓ Decide how to handle each assumption
5. Evaluate what helped vs hindered (5 min)
↓ Develop evaluative judgement through reflection
6. NOW draft prose (2-3 hours)
→ Articulating arguments you’ve already tested
Total investment: 35-40 minutes before writing
Result: Arguments tested before committing to prose
When to use this approach
Yes—invest the time:
- Core arguments in papers or chapters
- High-stakes claims where reviewers will scrutinise
- Arguments where you’re unsure if evidence supports claims
- Complex reasoning with multiple assumptions
- Arguments where structure choice really matters
No—just draft:
- Straightforward explanations
- Routine argument structures you’ve used successfully before
- Low-stakes contexts where quick is better than perfect
- Arguments you’ve already explored extensively in conversation
Guideline: If reviewers will seriously evaluate your reasoning, invest the 35 minutes. If you’re just explaining something straightforward to supportive readers, write directly.
Activity
Reflection and evaluative judgement development
Time required: 5 minutes
Reflect on what you learned about argument development and your own evaluative judgement.
Evaluative judgement reflection
Challenges that improved your argument:
- [Specific challenge] → Improved because:
Challenges you rejected:
- [Specific challenge] → Rejected because:
What you learned about your evaluative judgement:
What types of challenges tend to help your thinking?
What types tend to complicate unnecessarily?
How do you distinguish helpful from unhelpful?
Process reflection
How did pre-writing exploration differ from drafting first?
What did systematic testing reveal that you might have missed while drafting?
Did AI challenges strengthen your argument or complicate unnecessarily? What helped you judge which was which?
Your commitment
Before drafting my next major argument, I will:
- Spend 35 minutes on pre-writing exploration
- Test evidence systematically
- Surface at least one key assumption
- Document what improves my thinking vs what complicates
Specific argument:
When:
Key takeaways
- Pre-writing exploration produces stronger arguments faster: Systematic pre-writing thinking produces stronger arguments than drafting first and discovering problems later. Before writing prose, use AI to explore argument structures, test evidence systematically, surface implicit assumptions, and generate steel man objections. This reveals gaps before reviewers find them.
- Evaluative judgement distinguishes helpful from unhelpful challenges: Not all AI challenges improve arguments. Some reveal genuine weaknesses requiring attention. Others introduce unnecessary complexity, apply inappropriate standards, or miss your scholarly goals. Recognising which challenges strengthen thinking versus which complicate unnecessarily is the critical literacy dimension for this lesson.
- Structure shapes argument fundamentally: The same evidence supports different arguments depending on structure. Most academics default to their first structural idea. Adaptation-level literacy means deliberately exploring alternatives to make conscious choices based on scholarly goals. Generate multiple structural options, evaluate how they handle your evidence, then choose deliberately.
- Distinctive voice remains central through deliberate choices: Pre-writing exploration is valuable only if it strengthens your thinking—not if it replaces it with generic approaches. Your voice emerges through which options feel true to how you think, what evidence you find genuinely compelling, how you interpret without oversimplifying, and how you engage with counterarguments respectfully.
Your commitment
Pause and reflect
Based on this lesson, what argument will you develop through pre-writing exploration this week? How will you evaluate whether challenges strengthen or complicate your thinking? Document this commitment in your Action Journal.
Looking ahead
This lesson developed evaluative judgement through argument exploration—learning to distinguish helpful from unhelpful challenges, maintaining your voice whilst engaging with alternative perspectives, making deliberate choices about structure and reasoning.
The next adaptation lesson applies similar literacy dimensions to problem decomposition—using AI to break down complex scholarly challenges systematically. Both lessons emphasise creation/communication (actively constructing understanding), contextual judgement (recognising what serves goals), and evaluative judgement (distinguishing productive from unproductive engagement).
Resources
- Toulmin, S. (2003). The uses of argument. Cambridge University Press.
- Booth, W., Colomb, G., Williams, J., Bizup, J., & Fitzgerald, W. (2016). The craft of research. University of Chicago Press.
- Paul, R., & Elder, L. (2019). The miniature guide to critical thinking concepts and tools. Foundation for Critical Thinking.
- Facione, P. (2020). Critical thinking: What it is and why it counts. Measured Reasons LLC.
- Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550.
- Schön, D. A. (1984). The reflective practitioner: How professionals think in action. Basic Books.