The single biggest problem in communication is the illusion that it has taken place.
Attributed to George Bernard Shaw
Lesson overview
Objective: Develop functional application literacy—learning to communicate effectively with AI through structured prompting and understanding three levels of engagement
Summary: This lesson introduces the RGID framework (Role, Goal, Instruct, Discuss) for structuring prompts that produce useful outputs. You’ll also learn to match engagement levels to different academic tasks—extraction for quick outputs, conversation for complex thinking, and context for sustained projects. These skills form the foundation for everything that follows in subsequent lessons.
Key habits:
- Structured prompting: Use the RGID framework to communicate clearly rather than hoping simple queries will work
- Level matching: Choose engagement depth (extraction, conversation, context) based on task requirements
- Prompt documentation: Build a personal library of effective prompts for recurring tasks
The contrast
Dr James Mitchell needs help analysing 20 semi-structured interviews about teachers’ experiences with technology adoption. He’s feeling overwhelmed by the volume of data and isn’t sure how to start coding.
First attempt:
He opens ChatGPT and types: “Help me analyse my interview data”
AI responds with a generic overview of qualitative analysis—thematic analysis, grounded theory, content analysis. It’s accurate but completely useless. James has no idea how to apply any of this to his specific situation. He closes the tab, frustrated.
Second attempt:
The next day, James tries again with a different approach:
“You are an experienced qualitative researcher specialising in interview methodology, with expertise in grounded theory approaches. I need a step-by-step coding framework for analysing 20 semi-structured interviews exploring teachers’ experiences with technology adoption. Please: (1) Review key theoretical frameworks relevant to technology adoption in education, (2) Propose 5-7 initial coding categories based on common themes, (3) For each category, provide a clear definition, example indicators, and potential sub-categories, (4) Suggest an approach for managing codes that don’t fit existing categories. After generating the framework, I’d like to discuss handling contradictory data.”
This time, AI generates a specific, actionable coding framework with relevant theoretical grounding. James can immediately start applying it to his data.
Before we begin
What’s the difference between these two approaches? Why did the second work so much better than the first?
Why structure matters
In lesson 1, you learned AI is language-based cognitive extension. But how do you actually communicate effectively with it?
Most academics default to simple queries like “help me with X”—treating AI like a search engine. Type keywords, hope for useful results. But as you learned, AI isn’t retrieving information—it’s constructing responses based on how you frame your request. Vague prompts produce vague responses. Structured prompts that establish context, specify needs, and provide clear instructions yield outputs far more useful for academic purposes.
Think of prompting as briefing a research assistant. You wouldn’t walk into someone’s office and simply say “research methods.” You’d provide context about your project, explain what you’re trying to achieve, and ask specific questions.
This lesson develops two essential literacy skills: how to structure prompts using a simple framework called RGID, and how to match your engagement level to different academic tasks. These capabilities form the foundation for everything that follows in subsequent lessons.
RGID component 1: Role
The RGID framework gives you a mental scaffold for communicating clearly with AI. It’s not a rigid formula—it’s a thinking tool. Let’s build it one component at a time.
Define the AI’s perspective
Establish what perspective or domain of knowledge you want AI to draw from. This isn’t about anthropomorphising—it’s about establishing the frame of reference, activating the patterns in AI’s training data that are most relevant to your needs.
Naive approach: “Help me with my research methods”
Structured approach: “You are an experienced qualitative researcher specialising in interview methodology, with expertise in grounded theory approaches”
The role establishes which patterns in AI’s training data become most relevant. Remember from lesson 1 that AI generates responses based on patterns learned from vast amounts of text—the role helps focus which patterns matter most.
Why this works: Without a role, AI draws from generic patterns. With a specific role, it weights responses toward expertise in that domain. You’re not creating a personality—you’re focusing the statistical patterns that generate the response.
Quick practice: Write a role statement
Choose something you’re currently working on. What perspective or expertise would be most helpful?
Write a role statement for this task. Does your role specify a domain of expertise or perspective relevant to your task? If it’s too generic (“you are a helpful assistant”), try adding specificity.
RGID component 2: Goal
Clearly state your desired outcome
Be specific about format, scope, and purpose. What exactly do you need? Vague goals produce vague outputs.
Naive approach: “I need help with data analysis”
Structured approach: “I need a step-by-step coding framework for analysing 20 semi-structured interviews exploring teachers’ experiences with technology adoption”
Clarity about goals helps AI calibrate its response appropriately. This connects to the shared responsibility framework from lesson 1—you’re responsible for clear communication about what you need.
What makes a goal specific:
- States the format (framework, summary, questions, outline)
- Defines the scope (20 interviews, 3 key themes, first draft)
- Clarifies the purpose (for analysis, for teaching, for publication)
Quick practice: Transform a vague goal
Take the vague goal “I need help with literature review.” Transform it into a specific, structured goal statement.
Does your goal answer “what format?”, “how much?”, and “for what purpose?” If someone else read your goal, could they understand exactly what you need?
RGID component 3: Instruct
Provide specific steps
Break down what you want AI to do into clear, actionable steps. This creates structure that produces more coherent outputs.
Example for the interview analysis:
1. Review key theoretical frameworks relevant to technology adoption in education
2. Propose 5-7 initial coding categories based on common themes
3. For each category, provide: clear definition, example indicators, potential sub-categories
4. Suggest an approach for managing codes that don't fit existing categories
Clear instructions leverage AI’s strength at following structured guidance while acknowledging that you retain responsibility for directing the collaboration.
Why numbered steps work: They create a logical sequence, make the output predictable and scannable, and ensure nothing important gets omitted from the response.
Quick practice: Create numbered steps
Think about your goal from the previous exercise. What are 3-4 specific steps AI should take to achieve that goal? Write them as a numbered list.
Could someone follow your steps in order? Do they lead logically from one to the next? Are they specific enough that AI knows what to produce at each stage?
RGID component 4: Discuss
Signal openness to iteration
Plan to ask follow-up questions and refine through dialogue. The ‘Discuss’ component signals that you’re open to iterative refinement—first responses are starting points for conversation.
Example: “After generating the framework, I’d like to discuss handling contradictory data and explore how this compares to alternative approaches”
This connects directly to the conversation paradigm from lesson 1. Quality emerges through interaction, not through expecting perfect first outputs.
What this accomplishes:
- Signals you expect to refine, not just receive
- Prepares AI for follow-up questions
- Frames engagement as collaborative rather than transactional
Quick practice: Plan follow-up questions
For your emerging prompt (role + goal + instructions), what follow-up questions would help refine the output? What aspects might need clarification? Write 2-3 questions you’d plan to ask.
Do your questions dig deeper into the output you’ll receive, or do they just ask for more of the same? Good follow-ups typically ask “why,” “how does this apply when,” or “what if.”
Putting RGID together: Faded practice
You’ve now built all four RGID components. Let’s practice combining them through three progressive stages.
Stage 1: Observe expert application (fully worked)
Here’s a complete RGID prompt for a research question development task. Read through and notice how all four components work together:
[Role] You are an expert in higher education research with knowledge of academic workload, wellbeing, and organisational culture.
[Goal] I need research questions for a study examining how UK academics manage competing demands, particularly the relationship between workload and research productivity.
[Instruct] Please: (1) Identify 3-5 key areas of debate in the literature, (2) Formulate 2-3 research questions for each area, (3) Explain why each matters and what gap it addresses.
[Discuss] I’d like to discuss feasibility and potential methodological approaches for the most promising questions.
Self-explanation
Why does specifying “UK academics” rather than just “academics” improve the output?
Answer
Specifying “UK academics” activates patterns relevant to the UK higher education context—REF pressures, teaching-intensive vs research-intensive institutions, casualisation trends. Generic “academics” would produce responses averaging across different national systems with different pressures and constraints.
Stage 2: Complete the final components (fading)
Dr Sarah Lee needs help with literature review structure. The Role and Goal are provided. Your task: complete the Instruct and Discuss components.
[Role] You are an expert in systematic literature review methodology with experience in education research.
[Goal] I need a structural framework for organising 60 papers on student engagement in blended learning, grouped thematically to identify gaps and research opportunities.
Complete the Instruct component
Write 3-4 numbered steps AI should take
Complete the Discuss component
What follow-up questions would refine this output?
Example completion
[Instruct] Please: (1) Identify 4-6 major themes across the 60 papers, (2) For each theme, summarise the key arguments and methodological approaches, (3) Map which themes are well-developed versus underexplored, (4) Suggest 3-5 specific research gaps emerging from theme patterns.
[Discuss] I’d like to explore how these themes have evolved chronologically and discuss which gaps are most feasible for a two-year doctoral study.
Stage 3: Create your own complete RGID prompt (independent)
Now apply RGID to a real task from your work. Combine all four components into a complete, structured prompt you could actually use this week.
Create your complete RGID prompt
Does your prompt include all four components?
- Role: Specifies perspective or expertise
- Goal: States specific desired outcome
- Instruct: Provides clear numbered steps
- Discuss: Signals openness to iteration
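If you ever work with AI through an API rather than a chat window, the same structure carries over directly. Below is a minimal sketch in Python, assuming the Anthropic SDK listed in the resources; the model name is a placeholder, and any chat-style API follows the same shape. Note how the Role maps naturally onto the system prompt, while Goal, Instruct, and Discuss form the user message.

```python
# A minimal sketch: assembling the four RGID components into one request.
# Assumes the Anthropic Python SDK (pip install anthropic); any chat-style
# API works the same way. The model name below is a placeholder.
import anthropic

role = ("You are an expert in higher education research with knowledge "
        "of academic workload, wellbeing, and organisational culture.")
goal = ("I need research questions for a study examining how UK academics "
        "manage competing demands.")
instruct = ("Please: (1) Identify 3-5 key areas of debate in the literature, "
            "(2) Formulate 2-3 research questions for each area, "
            "(3) Explain why each matters and what gap it addresses.")
discuss = ("After generating the questions, I'd like to discuss feasibility "
           "and potential methodological approaches.")

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    system=role,  # the Role component maps onto the system prompt
    messages=[{"role": "user", "content": f"{goal}\n\n{instruct}\n\n{discuss}"}],
)
print(response.content[0].text)
```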
Three levels of engagement
The RGID framework helps structure individual prompts. But AI literacy also requires understanding different engagement levels—matching how you interact with AI to what the task requires.
Not every task needs the same depth of engagement. Sometimes you need quick outputs; sometimes you need extended thinking; sometimes you need accumulated understanding over time. Recognising which level serves your purpose is a core literacy skill.
Extraction: Quick outputs for bounded tasks
Typical time: 2-5 minutes
What it is: Single-prompt interactions that produce quick, specific outputs for clearly defined tasks.
When to use it:
- Summarising papers or reports
- Generating teaching materials from existing content
- Extracting key information from documents
- Creating lists, templates, or examples
When NOT to use it:
- Understanding complex frameworks
- Working through methodological decisions
- Exploring your own thinking
- Developing nuanced arguments
Example task: “Summarise the methodology section of this paper in 200 words, focusing on data collection and analysis approaches”
Conversation: Extended dialogue for complex thinking
Typical time: 15-30 minutes
What it is: Multi-turn exchanges where you work through complex ideas, test assumptions, and develop understanding through dialogue.
When to use it:
- Exploring methodological choices and trade-offs
- Understanding unfamiliar theoretical frameworks
- Testing your reasoning and assumptions
- Developing arguments or course structures
When NOT to use it:
- Simple factual questions
- Tasks requiring accumulated context from previous conversations
- When you just need a quick output
Example task: A 20-minute conversation exploring: “I’m deciding between survey and interview approaches for studying academic stress. Help me think through the implications of each for my specific research questions…”
Context: Sustained collaboration for ongoing projects
Typical time: ongoing, building across sessions
What it is: Returning to the same conversation over time, building shared understanding as the AI accumulates context about your project.
When to use it:
- Developing course materials over several weeks
- Working through a complex research project
- Building something that requires iterative refinement
- When each session builds on previous understanding
When NOT to use it:
- One-off questions unrelated to ongoing work
- When starting fresh gives better results than accumulated context
- Tasks where you need AI to challenge rather than build on your thinking
Example task: Returning to the same conversation across multiple sessions while developing a module, each time building on previous discussions about learning objectives, assessment design, and content structure.
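For readers who use AI programmatically, the three levels differ chiefly in how much history the model sees on each request. A hedged sketch, reusing the same illustrative SDK as above: extraction sends one message and keeps nothing, while conversation and context append every turn so later prompts are interpreted in light of earlier ones.

```python
# Sketch only: the engagement levels as differences in accumulated history.
import anthropic

client = anthropic.Anthropic()

def ask(history: list[dict]) -> str:
    """Send the accumulated history and return the model's reply."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=history,
    )
    return response.content[0].text

# Extraction: a single prompt, no history retained.
summary = ask([{"role": "user",
                "content": "Summarise this methodology section in 200 words: ..."}])

# Conversation/context: each turn is appended, so the model interprets
# every new prompt in light of everything said before.
history = [{"role": "user",
            "content": "I'm deciding between survey and interview approaches "
                       "for studying academic stress. Help me think through "
                       "the implications of each."}]
history.append({"role": "assistant", "content": ask(history)})
history.append({"role": "user",
                "content": "How does each approach handle a small sample?"})
history.append({"role": "assistant", "content": ask(history)})
```

Chat interfaces do this appending for you; the point is simply that at conversation and context level, the value comes from the growing shared history, not from any single prompt.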
Branching scenario: Matching level to task
Let’s practice choosing the right engagement level. Read the scenario and select your approach.
The scenario
You’re preparing to teach a new module on research methods next semester. You’ve taught research methods before, but that was in sociology—this time you’re teaching it in education. The fundamental concepts are the same, but the examples, applications, and disciplinary conventions are different.
Which engagement level would you use and why?
Option A: Extraction — Quick material generation
Your approach: Write a single prompt: “Generate 10 examples of research questions in education using both qualitative and quantitative approaches.”
What happens: You get 10 generic examples. They’re technically correct but don’t reflect the nuances of education research. When you use them in class, students ask questions you can’t answer fluently because the examples don’t quite fit the disciplinary context.
Reflection: Extraction works for adaptation when you already deeply understand the new context. Here, you needed to develop that understanding first, not just get outputs.
Option B: Conversation — Explore adaptation challenges
Your approach: Spend 20 minutes in conversation: “I’ve taught research methods in sociology but now teaching it in education. What are the key differences in how these disciplines approach research? What examples work well in education? What disciplinary conventions should I know?”
What happens: Through dialogue, you identify that education research has different relationships with practitioner knowledge, different ethical considerations around research with children, and different expectations about actionable findings. You develop genuine understanding of how to adapt your existing knowledge.
Reflection: Conversation helped you understand the adaptation challenge rather than just generating outputs. This serves the task well—you needed to develop understanding, not just materials.
Option C: Context — Build comprehensive understanding
Your approach: Return to the same conversation over several weeks, progressively building understanding of education research methods, testing example problems, discussing student misconceptions, refining materials based on what you learn.
What happens: You develop a deep understanding of education research methods. The time investment creates excellent materials and genuine fluency. However, conversation-level engagement would have been sufficient: you didn't actually need the accumulated context that context-level engagement provides.
Reflection: Context level can work, but it’s overkill for this task. The time investment isn’t justified by the incremental benefit over conversation-level engagement. Save context for projects requiring sustained, iterative development.
Pause and reflect
Which option best balanced time investment with the understanding needed? What did this reveal about matching engagement level to task requirements?
Activity
Compare approaches with your work
Time required: 8-10 minutes
Choose a specific task you need AI help with this week—something real from your actual work.
Step 1: Quick extraction (3 minutes)
Write a simple, direct prompt for this task. Keep it under 20 words. Try it with an AI tool and note what you get.
Step 2: Structured RGID (5 minutes)
Now apply the full RGID framework to the same task. Spend 2-3 turns refining the output based on what you need.
Step 3: Reflect on the difference
- Which approach fits the task better?
- How much time did each take?
- Which output can you actually use in your work?
- What does this reveal about structured versus naive prompting?
Start your prompt library
Create a simple document titled “Prompt Library” (you can use a Word doc, note-taking app, or even just a text file).
Save 1-2 prompts from today’s activities that worked well. For each, note:
Task type: What were you trying to accomplish?
Prompt structure: The actual prompt (copy-paste it)
Why it worked: What made this effective?
Engagement level: Extraction, conversation, or context?
Example entry:
Task type: Literature review organisation
Prompt: [Role] You are an expert in systematic literature review... [etc]
Why it worked: The numbered steps created clear structure, and planning follow-up questions helped me refine the themes
Engagement level: Conversation (needed 20 minutes of dialogue to develop the framework)
This becomes a resource you’ll build throughout the course—a personal collection of what works for your specific contexts.
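If you prefer something more structured than a text document, the same fields translate directly into machine-readable entries. A minimal sketch in Python; the field names and file format are illustrative choices, not a prescribed standard.

```python
# Sketch of a machine-readable prompt library using the fields above.
# The format (dataclass + JSON file) is an illustrative choice.
import json
from dataclasses import dataclass, asdict

@dataclass
class PromptEntry:
    task_type: str         # what you were trying to accomplish
    prompt: str            # the actual prompt, copy-pasted
    why_it_worked: str     # what made it effective
    engagement_level: str  # "extraction", "conversation", or "context"

library = [
    PromptEntry(
        task_type="Literature review organisation",
        prompt="[Role] You are an expert in systematic literature review...",
        why_it_worked="Numbered steps created clear structure; planned "
                      "follow-up questions helped refine the themes.",
        engagement_level="conversation",
    ),
]

# Save next to your notes so entries survive across tools and sessions.
with open("prompt_library.json", "w") as f:
    json.dump([asdict(entry) for entry in library], f, indent=2)
```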
Literacy note: Documentation supports metacognition—reflecting on your practice and building frameworks you can apply across situations. This is how expertise develops.
Key takeaways
- Structured prompting produces better results: The RGID framework (Role, Goal, Instruct, Discuss) provides a mental scaffold for communicating clearly with AI. By establishing perspective, specifying desired outcomes, providing clear steps, and signalling openness to dialogue, you fundamentally change the quality of AI responses.
- Three engagement levels serve different purposes: Extraction works for bounded tasks where you want quick outputs. Conversation works for complex challenges requiring extended thinking. Context works for sustained projects where accumulated understanding creates value. Matching engagement level to task requirements is a key aspect of developing taste.
- Iteration develops collaboration and understanding: Effective AI engagement is rarely about finding the perfect prompt on the first try. It's about starting with structured communication using RGID, then refining through follow-up questions. Each iteration develops understanding: both AI's understanding of what you need and your understanding of the task itself.
- Taste develops through reflective practice: Professional judgement about when and how AI engagement serves your work can't be taught through rules. It develops by trying different approaches, reflecting on what produces meaningful value, and gradually building intuitions about effective engagement.
Your commitment
Pause and reflect
Based on this lesson, what’s one specific task you’ll try with structured RGID prompting this week? How will you evaluate whether it produced meaningful value? Document this commitment in your Action Journal.
Looking ahead
You’ve now developed functional application—the ability to communicate effectively with AI through structured prompts and appropriate engagement levels. In the next lesson, you’ll apply these skills to reading academic literature, using AI as a reading companion to manage volume while maintaining critical engagement.
Before moving on, make sure you’ve saved at least one successful prompt to your library. You’ll build on this foundation in every subsequent lesson.
Resources
- Anthropic. (2024). Prompt engineering guide. https://docs.anthropic.com/
- OpenAI. (2023). Prompt engineering. https://platform.openai.com/docs/guides/prompt-engineering
- Schulhoff, S., et al. (2024). The prompt report: A systematic survey of prompting techniques. arXiv.
- Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of CHI '20.
- Ng, D. T. K., et al. (2021). A conceptual framework for AI literacy. Computers and Education: Artificial Intelligence, 2.
- Mollick, E., & Mollick, L. (2023). Practical AI for instructors and students, Part 3: Prompting AI. YouTube.
- Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. SSRN Electronic Journal.