Lesson overview
Objective: Apply functional application literacy to routine content creation—using structured prompting to streamline common academic tasks while developing critical evaluation skills
Summary: This lesson develops substitution-level literacy: applying the RGID framework to accelerate bounded tasks like lesson planning, slides, assessment, and professional writing. You’ll learn workflows for common content creation, practice with your actual work, and honestly evaluate whether AI produces genuine efficiency gains. The goal isn’t to do more work; it’s to create headspace for work that genuinely matters.
Key habits:
- Task selection: Choose high time-investment tasks with clear structure for AI assistance; quick tasks may be faster to do manually
- Honest evaluation: Track time including all revision to assess whether AI actually saved time
- Time protection: Schedule reclaimed time for high-value work before it disappears into your schedule
The contrast
Efficiency is doing things right; effectiveness is doing the right things.
Peter Drucker
Dr Elena Rodriguez spends Monday morning creating a lesson plan for next week’s seminar on qualitative research methods. She starts at 09:00 with coffee, a blank document, and good intentions.
By 11:30, she has a rough outline but keeps second-guessing the activity sequence. Should the group discussion come before or after the methodology explanation? What examples will resonate with this particular cohort? She breaks for lunch, still not quite satisfied with the structure.
After lunch, she refines the timing, adds discussion prompts, and creates a handout template. By 14:00—after 3 hours of work—she has a usable lesson plan. It’s good, but the process was exhausting.
The following Monday, Elena tries something different. She opens Claude and spends 5 minutes structuring a detailed prompt: her pedagogical approach, student background, learning objectives, time constraints, and specific deliverables needed. AI generates a structured lesson plan in 30 seconds. Elena spends the next 40 minutes evaluating and refining—adjusting examples to her students, adding her research insights, modifying activities based on what she knows works with this cohort.
By 10:00—after 45 minutes total—she has a lesson plan that’s just as good as last week’s, and she’s protected 2 hours for research writing.
Before we begin
Think about a recent time you spent hours on routine content creation (lesson plans, slides, emails, documentation). What made it time-consuming? Which parts required your unique expertise, and which were mostly structural work?
Why efficiency matters for academics
You’ve learned what AI is (lesson 1) and how to communicate effectively with it (lesson 2). Now you apply those foundational capabilities to actual academic tasks.
Academic work involves relentless content creation: lesson plans, presentation slides, assessment questions, emails, documentation. Each task requires creativity and expertise, but much of the time goes to structural work (organising information, formatting, creating first drafts) rather than to the parts that require your unique scholarly judgement.
This lesson develops substitution-level literacy: applying functional application to existing workflows without changing your underlying practice. You’re doing the same things, the same way, just faster. AI handles structural heavy lifting; you provide expertise, context, and critical evaluation.
The goal isn’t to do more work. The goal is creating headspace—mental and temporal space for work that genuinely matters. Saving 30 minutes on slides only matters if you protect that time for research, deeper teaching preparation, or leaving work on time occasionally. Without intentional planning, efficiency gains simply disappear into your schedule.
Your evaluation focus: Is this usable? Did this actually save time including revision? Would I use this approach again? At the substitution level, evaluation is straightforward—it worked (saved time, maintained quality) or it didn’t. Both outcomes teach you about meaningful AI engagement in your specific context.
Understanding where this lesson fits in the course
This course follows a three-stage progression from novice application through adaptive practice to transformative capability:
Stage 1: Substitution (this lesson) - You’re applying functional literacy to existing workflows. Same tasks, same process, less time. Evaluation is binary: does this work or not?
Stage 2: Adaptation (later lessons) - Your practice begins reshaping around AI capabilities. You’re not just doing tasks faster—you’re doing different things because AI makes new approaches possible.
Stage 3: Transformation (much later) - You’re doing work that wasn’t previously possible, pursuing different scholarly questions, engaging with scholarship in fundamentally new ways.
We start with substitution because you need functional competence before attempting complex applications. You need to develop judgement about when AI helps and when it hinders through actual practice with bounded tasks.
Quick reflection
If AI saved you 2-3 hours per week on routine content creation, what would you do with that time?
Workflow 1: Creating lesson plans
Teaching materials consume substantial time. Your functional application literacy from lesson 2 can accelerate creation while you maintain pedagogical judgement.
The approach
Before prompting, define your parameters:
- Learning objectives for this session
- Student level and prior knowledge
- Time available
- Your pedagogical preferences
- Specific content requirements
Faded practice: From observation to independence
Stage 1: Observe expert application
Here’s a complete RGID prompt for a lesson plan. Read through and notice how it provides specific context:
[Role] You are an experienced university teacher familiar with active learning approaches and seminar-style teaching.
[Goal] I need a structured lesson plan for a 90-minute seminar on interview methodology in qualitative research for second-year sociology students.
[Instruct] Please include: (1) Timing breakdown showing how 90 minutes should be allocated, (2) Key concepts with brief explanations appropriate for this level, (3) Two active learning activities with clear instructions for students, (4) Three discussion prompts that encourage critical thinking about methodological choices, (5) One quick assessment opportunity to check understanding.
[Discuss] Background: These students completed an introductory research methods module last year but have limited practical experience with qualitative approaches. I prefer activities where students work with real data or scenarios rather than abstract discussions. After generating the structure, I’ll want to discuss adapting activities for different learning styles and potentially adding a reflective component.
Self-explanation
Why does specifying “second-year sociology students” rather than just “students” matter?
Show answer
Specifying the level and discipline activates patterns relevant to that context—the sophistication level, disciplinary conventions, typical background knowledge, and common student challenges in sociology vs other fields. “Students” would produce generic content averaging across all contexts.
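Optional aside if you’re comfortable with a little scripting: once you reuse the RGID structure regularly, it can live as a fill-in-the-blanks template. The short Python sketch below is purely illustrative (the function name and placeholder text are not part of the RGID framework); it simply joins the four components into one prompt you can paste into the chat, and a saved document template works just as well.

def build_rgid_prompt(role, goal, instruct, discuss):
    # Join the four RGID components into a single labelled prompt string.
    parts = [("[Role]", role), ("[Goal]", goal),
             ("[Instruct]", instruct), ("[Discuss]", discuss)]
    return "\n\n".join(f"{label} {text.strip()}" for label, text in parts)

prompt = build_rgid_prompt(
    role="You are an experienced university teacher familiar with seminar-style teaching.",
    goal="I need a structured lesson plan for a 90-minute seminar on interview methodology.",
    instruct="Please include: (1) a timing breakdown, (2) key concepts, (3) two active learning activities.",
    discuss="Students have limited practical experience with qualitative approaches.",
)
print(prompt)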
Stage 2: Complete the instruction component
Dr Patel needs a lesson plan for a statistics workshop. The Role and Goal are provided below. Your task: complete the Instruct section with 4-5 specific deliverables.
[Role] You are an experienced statistics educator familiar with teaching quantitative methods to social science students.
[Goal] I need a structured lesson plan for a 2-hour workshop on regression analysis for first-year research methods students who find statistics intimidating.
Your turn: Write 4-5 numbered items
Write 4-5 numbered items specifying what the AI should include.
See example completion
[Instruct] Please include: (1) Timing breakdown showing a gradual progression from simple to complex, (2) Key concepts explained with everyday analogies before technical language, (3) Three scaffolded activities building from guided practice to independence, (4) Common misconceptions students have about regression and how to address them, (5) Visual aids or demonstrations that make abstract concepts concrete.
Stage 3: Create your own complete prompt
Think of a lesson or seminar you need to prepare. Create a complete RGID prompt for it.
Your turn: Create your complete RGID prompt
Self-check:
- Role specifies pedagogical approach or teaching context
- Goal states student level, topic, and time frame
- Instruct includes at least 4 specific deliverables
- Discuss mentions student background or teaching constraints
If you’re missing elements, revise before continuing.
After AI generates the plan
Critical evaluation (your expertise matters):
- Does timing align with your knowledge of this cohort?
- Are activities appropriate for your students’ capabilities?
- Does it match your teaching philosophy?
- What examples would resonate with your specific students?
Refine through follow-up: “The discussion activity in section 2 seems too abstract. Suggest a more concrete activity connecting to [specific context relevant to your students].”
Personalise where it matters: Add examples from your research, anecdotes that connect concepts, adaptations for students you know struggle, disciplinary context AI can’t provide.
Typical time comparison:
- Manual lesson planning: 2-3 hours
- With AI assistance: 45-60 minutes
- Track honestly: if revision takes 2 hours, you didn’t save time
Workflow 2: Creating presentation slides
Whether for conferences, departmental meetings, or teaching, slide creation consumes significant time.
The slide creation workflow
Structure your prompt
[Role] You are a presentation design expert familiar with academic contexts and effective visual communication.
[Goal] Create slide content for a 20-minute conference presentation on [your research topic].
[Instruct] Please provide: (1) Title slide with compelling framing of the research problem, (2) 3-4 main sections with key points as brief bullets (not full sentences), (3) Conclusion slide emphasising implications and contributions, (4) Suggestions for visual elements for each slide (graphs, diagrams, images).
[Discuss] My research focuses on [brief context]. Audience includes both specialists in my subfield and researchers from adjacent areas. After reviewing the structure, I’ll need help refining technical explanations for accessibility without oversimplification.
After generation
Critical evaluation:
- Are slides visual-focused rather than text-heavy?
- Does flow make sense for your actual presentation?
- Are technical concepts explained appropriately for audience?
Refine: “Slide 3 has too much text. Reduce to 3 bullet points maximum that I can elaborate on verbally.”
Add your voice: Your specific data, your examples, your insights, your personality in speaker notes.
Typical time comparison:
- Manual slide creation (15-20 slides): 2-3 hours
- With AI assistance: 45-60 minutes
Literacy note: Extraction-level engagement works well here because slides are bounded, structured outputs. But generic slides aren’t usable—your critical evaluation determines whether you deploy them.
Quick practice
Think of your next presentation. Write a Goal statement for the slide deck that specifies topic, time frame, and audience.
Workflow 3: Creating assessment questions
Quiz questions, exam prompts, discussion questions, assignment briefs—assessment creation is time-intensive and cognitively demanding.
The assessment creation workflow
Structure your prompt
[Role] You are an assessment design expert familiar with [your discipline] education and evidence-based assessment practices.
[Goal] Generate 10 multiple-choice questions assessing understanding of [specific concept] at application level, not just recall.
[Instruct] For each question: (1) Create a scenario or problem requiring application of the concept, (2) Provide one clearly correct answer, (3) Include three plausible distractors that reflect common misconceptions, (4) Avoid tricky wording—test understanding, not test-taking skills, (5) Briefly explain why the correct answer is correct.
[Discuss] Students in this module struggle most with [common misconception]. After reviewing questions, I’ll want to refine any that don’t effectively address this challenge or that use overly technical language for this level.
After generation
Critical evaluation (complementary errors from lesson 1):
- Do questions test what you want to assess?
- Are distractors plausible but clearly incorrect to someone who understands?
- Would these work with your actual students?
- Do they address common misconceptions you know about?
Adapt: Add disciplinary context, adjust difficulty based on student knowledge, ensure alignment with your teaching.
Typical time comparison:
- Manual creation (10 good questions): 1-2 hours
- With AI assistance: 30-45 minutes
Literacy note: This is where complementary errors become evident. AI generates plausible questions but may miss subtleties. You catch disciplinary nuance AI misses. Quality emerges through this partnership.
Quick practice
Generate 3 quiz questions for a topic you’re currently teaching. Use AI, then evaluate: would you actually use these with students? What needs adjustment?
Workflow 4: Professional emails and documentation
Routine professional correspondence—scheduling, documentation, responses—accumulates throughout the week.
The email workflow
Structure your prompt
[Role] You are a professional communication expert familiar with academic contexts and conventions.
[Goal] Draft an email to [recipient] about [purpose].
[Instruct] Please include: (1) Brief but warm opening that acknowledges context, (2) Clear statement of what I need or am asking, (3) Specific action items or questions, (4) Professional close with explicit next steps.
[Discuss] My relationship with this person is [formal/collaborative/friendly professional]. Tone should be [professional but friendly/formal/direct]. They’re busy, so brevity matters while maintaining warmth.
After generation
Critical evaluation:
- Does it sound like something you’d write?
- Is the tone appropriate for this relationship?
- Are action items clear?
Adapt: Adjust for your voice, add personal touches, modify formality level.
Warning about false efficiency: If you spend 15 minutes editing what AI produces when you could have written it yourself in 10, you didn’t save time. Email is where false efficiency appears most often.
Typical time comparison:
- Simple emails: May add overhead rather than saving time
- Complex professional writing (proposals, documentation): May save 30-50%
Honest evaluation matters: Not every task benefits from AI assistance.
Quick practice
Compare approaches. Draft one routine email manually (time it). Draft a similar email with AI assistance (time it including revision). Which was actually faster?
Task selection: Choosing wisely
Not every content creation task benefits from AI assistance. Let’s practice recognising which tasks warrant AI engagement.
The scenario
You have 45 minutes before your next meeting. Looking at your task list, you need to:
- Create slides for tomorrow’s 90-minute lecture (need 25 slides with examples and activities)
- Respond to 3 student emails about assignment extensions (each requires 2-3 sentences)
- Draft 5 discussion questions for next week’s seminar on a topic you know well
Which tasks should you use AI for?
Option A: Use AI for all three tasks
Your approach: You try to prompt for slides, emails, and questions all within 45 minutes.
What happens: You rush through prompting and get mediocre outputs for all three. The slides lack the examples you wanted, the emails feel impersonal, and the discussion questions miss the nuance you wanted to develop. You spend the meeting distracted because nothing feels quite right. Later that evening, you spend an hour fixing everything.
Time total: 45 min during day + 60 min evening = 105 minutes
Learning: Not every task benefits from AI. Very quick tasks (like short emails) often take longer to prompt than to write. Rushing through prompts produces poor outputs that require extensive revision.
Option B: Use AI only for slides (highest time investment)
Your approach: You focus AI engagement on the most time-intensive task—slides. You spend 20 minutes creating a strong prompt and reviewing output, 15 minutes refining with follow-up questions, 5 minutes adding your examples. You manually write the three emails (3 minutes total) and draft discussion questions from your expertise (7 minutes).
What happens: You have solid slide structure that you’ll polish tomorrow morning. The emails are personal and appropriate. The discussion questions reflect exactly what you wanted to explore. You protected time for what matters.
Time total: 20 + 15 + 5 + 3 + 7 = 50 minutes (5 minutes over, but manageable)
Learning: Prioritise AI for high time-investment tasks with clear structure. Quick tasks are often faster to do manually. Tasks requiring your distinctive voice or expertise may not benefit from AI’s generic starting point.
Option C: Don't use AI—do everything manually
Your approach: You decide to create slides, write emails, and draft questions all manually.
What happens: Creating 25 slides manually takes 2 hours. You run out of time before finishing, end up skipping the meeting to keep working, and still need to complete everything later. The emails and questions never get written.
Time total: 2+ hours, incomplete work, missed meeting
Learning: For genuinely time-intensive tasks with clear structure, AI assistance can prevent you from being overwhelmed. The question isn’t “should I use AI?” but “which tasks benefit most from AI assistance?”
Pause and reflect
Which option best balanced time investment with quality outcomes? What does this reveal about task selection?
Task selection principle: Use AI for bounded, time-intensive tasks with clear structure. Working manually may be faster for very quick tasks or those requiring your distinctive voice throughout.
Activity
Apply to your actual work
Time required: 20-25 minutes
Choose ONE content creation task from your actual workload this week:
- Lesson plan or teaching material
- Presentation slides
- Assessment questions
- Professional email or documentation
- Other routine content you need to create
Commit to deploying it: Only count this as successful if you actually use what you create in your work, not just as a practice exercise.
Step 1: Write your RGID prompt
Step 2: Track your time
- Prompt creation: ___ minutes
- AI response review: ___ minutes
- First revision round: ___ minutes
- Second revision round: ___ minutes
- Final polish: ___ minutes
Step 3: Critical evaluation before finishing
- I would actually deploy this in my work without major additional revision
- It meets my normal quality standards for this task type
- It includes my distinctive perspective/examples/voice where it matters
Honest efficiency evaluation
This is the critical evaluation dimension of literacy. Be brutally honest—both successes and failures teach you about meaningful AI engagement.
Time analysis
- Normal time for this task: ___ minutes
- Time with AI (including all revision): ___ minutes
- Net time saved (or lost): ___ minutes
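If you’d rather tally these numbers than estimate them, the short Python sketch below is one way to do the arithmetic; every value in it is hypothetical, and a spreadsheet or notebook margin works just as well. The point is that the AI-assisted total must include every phase, from prompt creation through final polish.

normal_minutes = 150  # hypothetical: what this task usually takes, start to finish

ai_assisted_minutes = {
    "prompt creation": 10,
    "AI response review": 5,
    "first revision round": 20,
    "second revision round": 15,
    "final polish": 10,
}

total_with_ai = sum(ai_assisted_minutes.values())  # 60 minutes in this example
net_saved = normal_minutes - total_with_ai         # 90 minutes; negative means time lost

print(f"Total with AI, including all revision: {total_with_ai} minutes")
print(f"Net time saved: {net_saved} minutes")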
Quality check
If it worked:
- Usable without major additional revision
- Meets my normal quality standards
- Would use this approach again for this task type
What made this task suitable for AI assistance?
Reflect on what worked well.
If it didn’t work:
- Required too much revision to be worthwhile
- Below my normal quality standards
- Added overhead rather than saving time
- Wouldn’t use AI for this task type again
What made this unsuitable?
Reflect on what didn’t work.
Troubleshooting: Why didn't this work?
Possible causes:
- Task too simple - Faster to do manually than prompt (e.g., 2-sentence email)
- Task too complex - Requires sustained expertise throughout, not just evaluation at the end
- Insufficient context in prompt - AI couldn’t generate useful output because it lacked critical information
- Task needed your voice from the start - Generic AI starting point added revision burden rather than saving time
What to do:
- Identify which factor applied
- If prompting issue: revise RGID structure once more with better context
- If task mismatch: note that this task type doesn’t suit substitution
- Move on—recognising what doesn’t work is valuable learning
Literacy note: Honest evaluation—including recognising what doesn’t work—is essential for developing taste. If AI added overhead, that’s valuable information about task characteristics, not a failure.
Protecting the time you saved
If you saved 20-30 minutes, that only matters if you protect that time for high-value work. Without intentional planning, efficiency gains simply disappear into your schedule.
Schedule your reclaimed time
I will use this reclaimed time for: (examples: 30 minutes of uninterrupted writing, reading one paper, leaving work earlier, a proper lunch break, a specific research task)
When: (Specific day and time this week)
Where: (Location)
Put this in your calendar before other work fills the space:
- Calendar entry created
- I’ve protected this time
Why this matters: This is contextual judgement—understanding how efficiency serves what matters to you. Saving time on slides is meaningless if you just fill that time with more email. Instrumental efficiency becomes meaningful when it serves your scholarly goals.
Pattern recognition: Building your taste
Looking across the workflows you’ve learned and the task you completed, identify patterns about when AI assistance works well.
Pause and reflect
List 2-3 routine content creation tasks you do regularly where AI might help. For each, note:
- Task type
- Frequency (weekly/monthly)
- Time per instance
- Good candidate for AI? (Yes/Maybe/No)
What characteristics do your “good candidate” tasks share? Possible patterns:
- They’re bounded and routine, with a clear start and end
- They follow predictable structures
- They require expertise to evaluate, but not to generate the initial structure
- They’re time-consuming but don’t need your unique voice throughout
This is taste development: Through repeated honest evaluation, you’re building professional judgement about when substitution serves your work and when it doesn’t. This pattern recognition is how literacy develops beyond technique mastery.
Key takeaways
- Substitution applies foundational literacy to specific tasks: You’re using extraction-level engagement and structured prompting (RGID from lesson 2) to accelerate bounded tasks—lesson planning, slides, assessment, professional writing—while maintaining quality through critical evaluation. Success means completing necessary content faster while maintaining acceptable quality.
- Efficiency serves headspace, not productivity: The value of AI-assisted content creation isn’t producing more content—it’s producing necessary content efficiently so you have time for work that genuinely matters. Without intentional planning, efficiency gains simply disappear into your schedule. Identify high-value work that matters to you, then protect the time you save for those activities.
- Honest evaluation builds literacy: Ask: Does this work? Is it usable? Did it actually save time including all revision? The evaluation is largely binary at substitution level—it works or it doesn’t—but the honesty matters enormously for literacy development. If AI adds overhead rather than saving time for a particular task, that’s valuable information.
- Not all work should be efficient: Use AI to accelerate tasks that don’t require deep engagement. Reserve your cognitive resources for work requiring genuine intellectual engagement—research, deep reading, developing original arguments. The efficiency you gain through substitution should serve your capacity for deep, meaningful scholarly work.
Your commitment
Pause and reflect
Based on this lesson, what’s one specific content creation task you’ll try with AI assistance this week? How will you evaluate whether it produced genuine efficiency gains? Document this commitment in your Action Journal.
Looking ahead
You’ve applied functional application literacy to content creation. The next substitution lesson applies these same skills to reading academic literature—using AI as a reading companion to manage volume while maintaining critical engagement.
Before moving on, make sure you’ve identified at least one routine content creation task that benefited from AI assistance and one that didn’t. Both observations inform your developing taste.
Resources
- Mollick, E. & Mollick, L. (2023). Using AI to implement effective teaching strategies in classrooms. SSRN.
- Mollick, E. & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. SSRN.
- Newport, C. (2016). Deep work: Rules for focused success in a distracted world. Grand Central Publishing.
- Burkeman, O. (2021). Four thousand weeks: Time management for mortals. Farrar, Straus and Giroux.
- Allen, D. (2015). Getting things done: The art of stress-free productivity. Penguin.
- Drucker, P. (2006). The effective executive: The definitive guide to getting the right things done. Harper Business.