The single biggest problem in communication is the illusion that it has taken place.

George Bernard Shaw

Lesson overview

Objective: Develop functional application literacy—learning to communicate effectively with AI through structured prompting and understanding three levels of engagement

Summary: This lesson introduces the RGID framework (Role, Goal, Instruct, Discuss) for structuring prompts that produce useful outputs. You’ll also learn to match engagement levels to different academic tasks—extraction for quick outputs, conversation for complex thinking, and context for sustained projects. These skills form the foundation for everything that follows in subsequent lessons.

Key habits:

  • Structured prompting: Use the RGID framework to communicate clearly rather than hoping simple queries will work
  • Level matching: Choose engagement depth (extraction, conversation, context) based on task requirements
  • Prompt documentation: Build a personal library of effective prompts for recurring tasks

The contrast

Dr James Mitchell needs help analysing 20 semi-structured interviews about teachers’ experiences with technology adoption. He’s feeling overwhelmed by the volume of data and isn’t sure how to start coding.

First attempt:

He opens ChatGPT and types: “Help me analyse my interview data”

AI responds with a generic overview of qualitative analysis—thematic analysis, grounded theory, content analysis. It’s accurate but completely useless. James has no idea how to apply any of this to his specific situation. He closes the tab, frustrated.

Second attempt:

The next day, James tries again with a different approach:

“You are an experienced qualitative researcher specialising in interview methodology, with expertise in grounded theory approaches. I need a step-by-step coding framework for analysing 20 semi-structured interviews exploring teachers’ experiences with technology adoption. Please: (1) Review key theoretical frameworks relevant to technology adoption in education, (2) Propose 5-7 initial coding categories based on common themes, (3) For each category, provide a clear definition, example indicators, and potential sub-categories, (4) Suggest an approach for managing codes that don’t fit existing categories. After generating the framework, I’d like to discuss handling contradictory data.”

This time, AI generates a specific, actionable coding framework with relevant theoretical grounding. James can immediately start applying it to his data.

Before we begin

What’s the difference between these two approaches? Why did the second work so much better than the first?

Why structure matters

In lesson 1, you learned AI is language-based cognitive extension. But how do you actually communicate effectively with it?

Most academics default to simple queries like “help me with X”—treating AI like a search engine. Type keywords, hope for useful results. But as you learned, AI isn’t retrieving information—it’s constructing responses based on how you frame your request. Vague prompts produce vague responses. Structured prompts that establish context, specify needs, and provide clear instructions yield outputs far more useful for academic purposes.

Think of prompting as briefing a research assistant. You wouldn’t walk into someone’s office and simply say “research methods.” You’d provide context about your project, explain what you’re trying to achieve, and ask specific questions.

This lesson develops two essential literacy skills: how to structure prompts using a simple framework called RGID, and how to match your engagement level to different academic tasks. These capabilities form the foundation for everything that follows in subsequent lessons.

RGID component 1: Role

The RGID framework gives you a mental scaffold for communicating clearly with AI. It’s not a rigid formula—it’s a thinking tool. Let’s build it one component at a time.

Define the AI’s perspective

Establish what perspective or domain of knowledge you want AI to draw from. This isn’t about anthropomorphising—it’s about establishing the frame of reference, activating the patterns in AI’s training data that are most relevant to your needs.

Naive approach: “Help me with my research methods”

Structured approach: “You are an experienced qualitative researcher specialising in interview methodology, with expertise in grounded theory approaches”

The role establishes which patterns in AI’s training data become most relevant. Remember from lesson 1 that AI generates responses based on patterns learned from vast amounts of text—the role helps focus which patterns matter most.

Why this works: Without a role, AI draws from generic patterns. With a specific role, it weights responses toward expertise in that domain. You’re not creating a personality—you’re focusing the statistical patterns that generate the response.

RGID component 2: Goal

Clearly state your desired outcome

Be specific about format, scope, and purpose. What exactly do you need? Vague goals produce vague outputs.

Naive approach: “I need help with data analysis”

Structured approach: “I need a step-by-step coding framework for analysing 20 semi-structured interviews exploring teachers’ experiences with technology adoption”

Clarity about goals helps AI calibrate its response appropriately. This connects to the shared responsibility framework from lesson 1—you’re responsible for clear communication about what you need.

What makes a goal specific:

  • States the format (framework, summary, questions, outline)
  • Defines the scope (20 interviews, 3 key themes, first draft)
  • Clarifies the purpose (for analysis, for teaching, for publication)

RGID component 3: Instruct

Provide specific steps

Break down what you want AI to do into clear, actionable steps. This creates structure that produces more coherent outputs.

Example for the interview analysis:

  1. Review key theoretical frameworks relevant to technology adoption in education
  2. Propose 5-7 initial coding categories based on common themes
  3. For each category, provide: clear definition, example indicators, potential sub-categories
  4. Suggest an approach for managing codes that don’t fit existing categories

Clear instructions leverage AI’s strength at following structured guidance while acknowledging that you retain responsibility for directing the collaboration.

Why numbered steps work: They create a logical sequence, make the output predictable and scannable, and ensure nothing important gets omitted from the response.

RGID component 4: Discuss

Signal openness to iteration

Plan to ask follow-up questions and refine through dialogue. The ‘Discuss’ component signals that you’re open to iterative refinement—first responses are starting points for conversation.

Example: “After generating the framework, I’d like to discuss handling contradictory data and explore how this compares to alternative approaches”

This connects directly to the conversation paradigm from lesson 1. Quality emerges through interaction, not through expecting perfect first outputs.

What this accomplishes:

  • Signals you expect to refine, not just receive
  • Prepares AI for follow-up questions
  • Frames engagement as collaborative rather than transactional

Putting RGID together: Faded practice

You’ve now built all four RGID components. Let’s practice combining them through three progressive stages.

Stage 1: Observe expert application (fully worked)

Here’s a complete RGID prompt for a research question development task. Read through and notice how all four components work together:

[Role] You are an expert in higher education research with knowledge of academic workload, wellbeing, and organisational culture.

[Goal] I need research questions for a study examining how UK academics manage competing demands, particularly the relationship between workload and research productivity.

[Instruct] Please: (1) Identify 3-5 key areas of debate in the literature, (2) Formulate 2-3 research questions for each area, (3) Explain why each matters and what gap it addresses.

[Discuss] I’d like to discuss feasibility and potential methodological approaches for the most promising questions.
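If you find yourself reusing the same structure, the four components can become a small template. Here is a minimal sketch in Python; the `rgid_prompt` helper and its parameter names are illustrative conventions, not part of any library:

```python
def rgid_prompt(role, goal, instruct, discuss):
    """Assemble a structured prompt from the four RGID components.

    `instruct` is a list of steps; numbering them automatically keeps
    the sequence explicit and makes it easy to add or reorder steps
    between drafts.
    """
    steps = ", ".join(f"({i}) {s}" for i, s in enumerate(instruct, start=1))
    return (
        f"You are {role}. "
        f"{goal} "
        f"Please: {steps}. "
        f"{discuss}"
    )

# The worked example above, expressed through the template:
prompt = rgid_prompt(
    role="an expert in higher education research with knowledge of "
         "academic workload, wellbeing, and organisational culture",
    goal="I need research questions for a study examining how UK academics "
         "manage competing demands.",
    instruct=[
        "Identify 3-5 key areas of debate in the literature",
        "Formulate 2-3 research questions for each area",
        "Explain why each matters and what gap it addresses",
    ],
    discuss="I'd like to discuss feasibility for the most promising questions.",
)
print(prompt)
```

The point is not automation for its own sake: writing the components as named fields forces you to notice when one is missing, which is exactly the discipline RGID is meant to build.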

Self-explanation

Why does specifying “UK academics” rather than just “academics” improve the output?

Answer

Specifying “UK academics” activates patterns relevant to the UK higher education context—REF pressures, teaching-intensive vs research-intensive institutions, casualisation trends. Generic “academics” would produce responses averaging across different national systems with different pressures and constraints.

Stage 2: Complete the final components (fading)

Dr Sarah Lee needs help with literature review structure. The Role and Goal are provided. Your task: complete the Instruct and Discuss components.

[Role] You are an expert in systematic literature review methodology with experience in education research.

[Goal] I need a structural framework for organising 60 papers on student engagement in blended learning, grouped thematically to identify gaps and research opportunities.

Stage 3: Create your own complete RGID prompt (independent)

Now apply RGID to a real task from your work. Combine all four components into a complete, structured prompt you could actually use this week.

Three levels of engagement

The RGID framework helps structure individual prompts. But AI literacy also requires understanding different engagement levels—matching how you interact with AI to what the task requires.

Not every task needs the same depth of engagement. Sometimes you need quick outputs; sometimes you need extended thinking; sometimes you need accumulated understanding over time. Recognising which level serves your purpose is a core literacy skill.

  • Extraction: Quick outputs for bounded tasks
  • Conversation: Extended dialogue for complex thinking
  • Context: Sustained collaboration for ongoing projects

Branching scenario: Matching level to task

Let’s practice choosing the right engagement level. Read the scenario and decide which approach you would take.

The scenario

You’re preparing to teach a new module on research methods next semester. You’ve taught research methods before, but that was in sociology—this time you’re teaching it in education. The fundamental concepts are the same, but the examples, applications, and disciplinary conventions are different.

Which engagement level would you use and why?

Pause and reflect

Which engagement level best balanced the time investment with the understanding you needed? What did your choice reveal about matching engagement level to task requirements?

Activity

Start your prompt library

Create a simple document titled “Prompt Library” (you can use a Word doc, note-taking app, or even just a text file).

Save 1-2 prompts from today’s activities that worked well. For each, note:

Task type: What were you trying to accomplish?

Prompt structure: The actual prompt (copy-paste it)

Why it worked: What made this effective?

Engagement level: Extraction, conversation, or context?

Example entry:

Task type: Literature review organisation
Prompt: [Role] You are an expert in systematic literature review... [etc]
Why it worked: The numbered steps created clear structure, and planning follow-up questions helped me refine the themes
Engagement level: Conversation (needed 20 minutes of dialogue to develop the framework)
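If you prefer a machine-readable library, the same four fields map naturally onto a small JSON file. A sketch, assuming a file called `prompt_library.json` in your working directory (the file name and field names are just one possible convention):

```python
import json

# One library entry mirrors the fields above: task type, the prompt
# itself, why it worked, and the engagement level used.
entry = {
    "task_type": "Literature review organisation",
    "prompt": "[Role] You are an expert in systematic literature review ...",
    "why_it_worked": "Numbered steps created clear structure; follow-up "
                     "questions helped refine the themes",
    "engagement_level": "conversation",
}

# Load the existing library, creating it on first use.
try:
    with open("prompt_library.json") as f:
        library = json.load(f)
except FileNotFoundError:
    library = []

library.append(entry)

with open("prompt_library.json", "w") as f:
    json.dump(library, f, indent=2)
```

A plain text file works just as well; the structured version simply makes it easier to search by task type or engagement level as the library grows.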

This becomes a resource you’ll build throughout the course—a personal collection of what works for your specific contexts.

Literacy note: Documentation supports metacognition—reflecting on your practice and building frameworks you can apply across situations. This is how expertise develops.

Key takeaways

  • Structured prompting produces better results: The RGID framework—Role, Goal, Instruct, Discuss—provides a mental scaffold for communicating clearly with AI. By establishing perspective, specifying desired outcomes, providing clear steps, and signalling openness to dialogue, you fundamentally change the quality of AI responses.

  • Three engagement levels serve different purposes: Extraction works for bounded tasks where you want quick outputs. Conversation works for complex challenges requiring extended thinking. Context works for sustained projects where accumulated understanding creates value. Matching engagement level to task requirements is a key aspect of developing taste.

  • Iteration develops collaboration and understanding: Effective AI engagement is rarely about finding the perfect prompt on the first try. It’s about starting with structured communication using RGID, then refining through follow-up questions. Each iteration develops understanding—both AI’s understanding of what you need and your understanding of the task itself.

  • Taste develops through reflective practice: Professional judgement about when and how AI engagement serves your work can’t be taught through rules. It develops by trying different approaches, reflecting on what produces meaningful value, and gradually building intuitions about effective engagement.

Your commitment

Pause and reflect

Based on this lesson, what’s one specific task you’ll try with structured RGID prompting this week? How will you evaluate whether it produced meaningful value? Document this commitment in your Action Journal.

Looking ahead

You’ve now developed functional application—the ability to communicate effectively with AI through structured prompts and appropriate engagement levels. In the next lesson, you’ll apply these skills to reading academic literature, using AI as a reading companion to manage volume while maintaining critical engagement.

Before moving on, make sure you’ve saved at least one successful prompt to your library. You’ll build on this foundation in every subsequent lesson.

Resources

  • Anthropic. (2024). Prompt engineering guide. https://docs.anthropic.com/
  • OpenAI. (2023). Prompt engineering. https://platform.openai.com/docs/guides/prompt-engineering
  • Schulhoff, S., et al. (2024). The prompt report: A systematic survey of prompting techniques. arXiv.
  • Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. CHI '20.
  • Ng, D. T. K., et al. (2021). A conceptual framework for AI literacy. Computers and Education: Artificial Intelligence, 2.
  • Mollick, E., & Mollick, L. (2023). Practical AI for instructors and students Part 3: Prompting AI. YouTube.
  • Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. SSRN Electronic Journal.