Lesson overview

Objective: Develop evaluative judgement about argument construction—using AI for pre-writing exploration whilst maintaining your distinctive analytical voice and scholarly goals

Summary: This lesson introduces pre-writing exploration: testing argument structures, evidence sufficiency, and implicit assumptions before committing to prose. You’ll learn to distinguish challenges that strengthen your thinking from those that complicate unnecessarily—a core adaptation-level skill called evaluative judgement.

Key habits:

  • Pre-writing exploration: Test arguments systematically before drafting to reveal structural problems early
  • Evaluative judgement: Distinguish helpful challenges from unnecessary complexity based on your goals
  • Proportionate response: Address weaknesses at appropriate depth rather than over- or under-responding

The drafting paralysis problem

The first draft is just you telling yourself the story.

Terry Pratchett

You’re writing an argument section. Three hours later, you have 2,000 words. Reading it back, something feels wrong:

  • The logic seems circular
  • Evidence doesn’t quite support your claims
  • A major assumption you didn’t realise you were making is now glaringly obvious
  • The structure emphasises tangential points and buries your strongest evidence
  • You realise there’s a counterargument you should have anticipated

You start revising. But the problems aren’t surface-level—they’re structural. You need to rethink the argument fundamentally. Six hours total, and you’re restarting.

What happened? You tried to develop and articulate your argument simultaneously. You discovered problems after committing to prose.

This lesson shows you how to work through argument development BEFORE writing a single sentence.

The contrast

Dr. Martinez needs to argue that increased administrative workload decreases research productivity. She opens a document and starts writing: “Universities should recognise that administrative burden…”

Three hours later, she has 2,000 words. Re-reading, she realises:

  • Her evidence shows correlation, not causation
  • She assumed teaching and research are zero-sum without defending this
  • Her structure leads with methodology instead of findings
  • There’s a selection effect problem she didn’t address

She starts over. Total time: 6+ hours of writing and rewriting. Still unsatisfied.

Dr. Thompson needs to make the same argument. She spends 45 minutes with AI before writing anything:

  • Explores three argument structures, chooses one that leads with findings
  • Tests whether evidence supports causal claims (realises she needs to qualify)
  • Surfaces the zero-sum assumption and decides to defend it explicitly
  • Generates strongest counterargument (selection effects) and plans how to address it
  • Identifies which evidence is necessary vs merely available

Then she drafts for 2 hours, articulating arguments she’s already tested. The prose flows because she knows exactly what she’s arguing and why. Total time: 3 hours, and a confident submission.

Before we begin

Think about your last argument section. Did you discover structural problems after you’d already drafted? What would have changed if you’d explored the argument first?

Argument weakness calibration

Before learning pre-writing exploration, assess your current argument development practice.

Calibration insight: Most academics draft first, then discover structural problems. This lesson teaches systematic pre-writing exploration that tests arguments before you commit to prose. It takes 30-45 minutes upfront but prevents hours of structural revision.

Pre-writing versus drafting-first

Drafting-first approach (substitution): AI drafts prose → you revise extensively → discover structural problems → rewrite

Pre-writing exploration (adaptation): AI helps explore arguments → you test reasoning systematically → you draft prose with tested structure

This reshapes your writing practice fundamentally. Instead of discovering problems while drafting, you work through complexity upfront.

Time investment:

  • Drafting-first: 0 minutes upfront + 3-6 hours drafting/revising/restructuring
  • Pre-writing: 45 minutes exploration + 2-3 hours confident drafting = roughly 3-4 hours total, and a faster overall result

Literacy focus:

  • Substitution: Functional application, voice preservation
  • Adaptation: Evaluative judgement (recognising helpful vs unhelpful challenges), contextual judgement (choosing structures that serve goals)

Quick reflection

When you write arguments, do you draft first then discover problems, or explore systematically before writing?

Exploring argument structures

The same evidence can support different arguments depending on structure. Most academics use the first structure that occurs to them. Adaptation-level literacy means deliberately exploring alternatives.

Faded practice: From observation to independence

Stage 1: Observe expert application

Here’s how an experienced scholar explores structural options systematically:

[Scholar’s prompt] “I’m arguing that teaching evaluations are biased against women faculty. My evidence includes: meta-analysis showing gender gaps in ratings, experimental studies with identical teaching but different perceived genders, qualitative data on gendered language in comments. Suggest three different ways to structure this argument. For each: what it emphasises, what logical flow it creates, what it might obscure.”

[AI suggests 3 structures]

Structure 1: Meta-analysis → experimental evidence → qualitative confirmation (Emphasises: statistical strength, de-emphasises: lived experience)

Structure 2: Lived experience (qualitative) → experimental explanation → statistical scope (Emphasises: human impact, de-emphasises: generalisability)

Structure 3: Problem (bias exists) → mechanisms (why it happens) → evidence (across methods) (Emphasises: theoretical understanding, de-emphasises: methodological rigour)

[Scholar evaluates against goals] “My audience is sceptical administrators who value data over stories. Structure 1 serves this—lead with statistics, establish pattern, then explain mechanisms through qualitative data. Structure 2 would feel anecdotal to them despite strong evidence. Structure 3 risks seeming abstract.”

[Scholar makes deliberate choice] “I’ll use Structure 1 because it meets administrators where they are—they need statistical evidence first. I’m choosing to de-emphasise lived experience in the structure while still including it as confirmation.”

Notice: The scholar judged structures against specific goals (persuade sceptical administrators), not abstract quality. They recognised what each structure emphasises and made a deliberate choice about what to foreground.

Self-explanation

Why generate three structures when you might already have one in mind?

Answer:

Generating alternatives makes structure a conscious choice rather than default. Your first idea might be good, but seeing alternatives reveals what you’re emphasising/de-emphasising. This lets you choose deliberately based on goals rather than using whatever occurred to you first.

Stage 2: Guided evaluation

Stage 3: Independent application

You now know the pattern: generate structural options → evaluate against your goals → make deliberate choices about emphasis.

Testing evidence: Does it support your claims?

Strong arguments require evidence that actually supports claims. But it’s easy to assume connections that don’t fully hold. Systematic testing reveals gaps before reviewers find them.

The sufficiency test

Prompt structure:

“I’m claiming [your claim] based on [your evidence]. Evaluate whether this evidence is sufficient to support this claim. What alternative explanations could account for my evidence without supporting my conclusion? What additional evidence would strengthen the claim?”

Example:

“I’m claiming workload increases cause research productivity decline based on survey data showing both increased workload and decreased output over 5 years. Is this evidence sufficient? What alternative explanations could account for both patterns?”

Literacy insight: This kind of honest assessment of whether a challenge helps is how evaluative judgement develops. Not all AI suggestions improve arguments: some reveal genuine weaknesses; others overcomplicate. Learning to tell which is which takes practice.

Testing evidence: Counterevidence and objections

Testing isn’t just about whether evidence supports claims—it’s about what contradicts them.

The steel man test

Don’t ask AI for weak objections you can easily dismiss. Ask for the strongest possible challenge to your thinking.

Prompt structure:

“Present the strongest possible argument against my claim that [your claim]. What would the most sophisticated sceptic say? Don’t give me easy objections—give me the ones that would actually challenge my thinking.”

Example:

“Present the strongest possible argument against my claim that administrative workload causes research productivity decline. What would a sophisticated sceptic say about my longitudinal survey evidence?”

Surfacing implicit assumptions

We’re often blind to our own assumptions. AI can surface them for explicit examination.

The three assumption types

Conceptual assumptions: What you’re assuming about how concepts work

Causal assumptions: What mechanisms you’re assuming connect cause and effect

Normative assumptions: What values you’re treating as obvious when they might be contested

Choose the ONE most relevant to your argument, and ask AI to surface assumptions of that type.

Literacy insight: Deciding how to handle assumptions requires contextual judgement about your audience and goals. There’s no formula—it depends on what you’re trying to accomplish and what your readers will accept.

Decision point: Helpful versus unhelpful challenges

Not all AI challenges improve arguments. Let’s practice distinguishing helpful from unhelpful suggestions.

The scenario

You’re arguing that increased workload decreases research productivity based on longitudinal survey data showing both trends.

AI suggests: “But what if high achievers choose heavy workloads? Your evidence might show correlation, not causation. You need to address selection effects, endogeneity, and omitted variable bias. Consider instrumental variable approaches or regression discontinuity designs.”

How do you respond?

Pause and reflect

How could you respond, and how do you decide what level of engagement a challenge deserves?

Decision principle: Evaluate challenges against your goals and evidence. Some reveal genuine weaknesses. Some introduce valuable sophistication. Some overcomplicate. Your judgement about which is which determines whether pre-writing exploration helps or hinders.

Evaluative judgement practice

Let’s practice the core literacy skill for this lesson: distinguishing helpful from unhelpful suggestions.

Scenario 1

Your argument: You’re conducting qualitative research on teacher experiences using rich interview data and interpretive analysis.

AI suggests: “Your evidence is anecdotal. You need quantitative data and larger sample sizes to make generalisable claims.”

Your decision: Accept / Modify / Reject

Scenario 2

Your argument: You’re making methodological claims in your field.

AI suggests: “You haven’t addressed Scholar X’s 2019 critique of this methodology. Their argument about [specific limitation] seems relevant to your work.”

Your decision: Accept / Modify / Reject

Scenario 3

Your argument: You’re making a claim that rests on a disciplinary assumption common in your field.

AI suggests: “This assumption seems problematic. How do you know readers will accept it? You should defend it extensively.”

Your decision: Accept / Modify / Reject

Literacy insight: Evaluative judgement develops through practice. You learn to recognise patterns: what types of challenges tend to help your thinking, what types tend to complicate unnecessarily, how to distinguish productive from unproductive engagement.

Your distinctive analytical voice

Pre-writing exploration is valuable only if it strengthens YOUR thinking—not if it replaces it with generic approaches.

Here are three ways to engage with the same counterargument. Which reflects distinctive analytical voice versus generic or defensive prose?

Version A (Generic AI style)

“While critics may argue otherwise, the evidence clearly demonstrates that this interpretation is justified by the methodological approach employed. Furthermore, recent scholarship substantiates this perspective.”

Version B (Defensive)

“Some reviewers might question this, but they would be wrong. The evidence overwhelmingly supports my interpretation. Those who disagree simply don’t understand the methodology.”

Version C (Thoughtful analytical voice)

“Alternative interpretations exist—particularly from scholars prioritising experimental over observational data. But given the complexity of real-world academic contexts, observational approaches reveal patterns experiments can’t capture. The tradeoff between experimental control and ecological validity is worth it for addressing this question. We gain richness at the cost of causal certainty.”

Which sounds like distinctive scholarly engagement? Version C.

What makes Version C stronger?

  • Acknowledges alternatives respectfully
  • Explains reasoning transparently
  • Shows awareness of tradeoffs
  • Defends choices without defensiveness
  • Sounds like a thoughtful scholar thinking aloud

Your analytical voice emerges through:

  • Which structural options feel true to how you think
  • What evidence you find genuinely compelling (not just convenient)
  • How you interpret complexities without oversimplifying
  • What assumptions you’re willing to defend vs acknowledge
  • How you engage with counterarguments—respectfully and thoughtfully

Literacy note: Adaptation-level engagement means your practice reshapes around AI capabilities whilst your distinctive scholarly approach remains central. You’re using AI to think more rigorously like yourself—not to think like AI suggests.

Argument development map

Here’s how the pieces fit together in actual practice:

Pre-writing argument development sequence:

1. Explore structures (12 min)
   ↓ Choose structure that serves your goals

2. Test evidence sufficiency (7 min)
   ↓ Identify weaknesses to address

3. Generate steel man objection (7 min)
   ↓ Plan how to address proportionately

4. Surface key assumptions (6 min)
   ↓ Decide how to handle each assumption

5. Evaluate what helped vs hindered (5 min)
   ↓ Develop evaluative judgement through reflection

6. NOW draft prose (2-3 hours)
   → Articulating arguments you've already tested

Total investment: 35-40 minutes before writing
Result: Arguments tested before committing to prose

When to use this approach

Yes—invest the time:

  • Core arguments in papers or chapters
  • High-stakes claims where reviewers will scrutinise
  • Arguments where you’re unsure if evidence supports claims
  • Complex reasoning with multiple assumptions
  • Arguments where structure choice really matters

No—just draft:

  • Straightforward explanations
  • Routine argument structures you’ve used successfully before
  • Low-stakes contexts where quick is better than perfect
  • Arguments you’ve already explored extensively in conversation

Guideline: If reviewers will seriously evaluate your reasoning, invest the 35-40 minutes. If you’re just explaining something straightforward to supportive readers, write directly.

Key takeaways

  • Pre-writing exploration produces stronger arguments faster: Testing arguments systematically before drafting produces stronger work than writing first and discovering problems later. Before writing prose, use AI to explore argument structures, test evidence sufficiency, surface implicit assumptions, and generate steel man objections. This reveals gaps before reviewers find them.

  • Evaluative judgement distinguishes helpful from unhelpful challenges: Not all AI challenges improve arguments. Some reveal genuine weaknesses requiring attention. Others introduce unnecessary complexity, apply inappropriate standards, or miss your scholarly goals. Recognising which challenges strengthen thinking versus which complicate unnecessarily is the critical literacy dimension for this lesson.

  • Structure shapes argument fundamentally: The same evidence supports different arguments depending on structure. Most academics default to their first structural idea. Adaptation-level literacy means deliberately exploring alternatives to make conscious choices based on scholarly goals. Generate multiple structural options, evaluate how they handle your evidence, then choose deliberately.

  • Distinctive voice remains central through deliberate choices: Pre-writing exploration is valuable only if it strengthens your thinking—not if it replaces it with generic approaches. Your voice emerges through which options feel true to how you think, what evidence you find genuinely compelling, how you interpret without oversimplifying, and how you engage with counterarguments respectfully.

Your commitment

Pause and reflect

Based on this lesson, what argument will you develop through pre-writing exploration this week? How will you evaluate whether challenges strengthen or complicate your thinking? Document this commitment in your Action Journal.

Looking ahead

This lesson developed evaluative judgement through argument exploration—learning to distinguish helpful from unhelpful challenges, maintaining your voice whilst engaging with alternative perspectives, making deliberate choices about structure and reasoning.

The next adaptation lesson applies similar literacy dimensions to problem decomposition—using AI to break down complex scholarly challenges systematically. Both lessons emphasise creation/communication (actively constructing understanding), contextual judgement (recognising what serves goals), and evaluative judgement (distinguishing productive from unproductive engagement).

Resources

  • Toulmin, S. (2003). The uses of argument. Cambridge University Press.
  • Booth, W., Colomb, G., Williams, J., Bizup, J., & Fitzgerald, W. (2016). The craft of research. University of Chicago Press.
  • Paul, R. & Elder, L. (2019). The miniature guide to critical thinking concepts and tools. Foundation for Critical Thinking.
  • Facione, P. (2020). Critical thinking: What it is and why it counts. Measured Reasons LLC.
  • Sadler, D.R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550.
  • Schön, D.A. (1984). The reflective practitioner: How professionals think in action. Basic Books.