Lesson overview
Objective: Develop the contextual judgement and metacognitive awareness that distinguish AI literacy from mere competence
Summary: This lesson addresses what frameworks and techniques cannot: the professional judgement that determines whether AI engagement is meaningful or superficial. You’ll learn to evaluate engagement quality at three levels (output, process, and meta-evaluation) and establish a practice-reflection cycle that develops taste over time. Unlike the skills in previous lessons, this capability develops through months of systematic practice, not through lesson completion.
Key habits:
- Attend to engagement quality: Notice whether AI collaboration strengthens your thinking or bypasses it
- Domain-specific evaluation: Apply different standards to research, teaching, and administration
- Meta-reflection: Regularly evaluate your own judgement quality, not just your outputs
The judgement question
Taste is the fundamental quality which sums up all the other qualities. It is the nec plus ultra of intelligence.
Comte de Lautréamont
You’ve written an excellent piece with AI collaboration. The prose is polished, the argument is clear, colleagues are impressed.
But was this meaningful engagement or superficial assistance?
Did AI help you think more deeply about your argument, surfacing complexity you needed to address? Or did it smooth over unclear thinking with professional-sounding prose that masks conceptual gaps you should have resolved?
You’re preparing a lecture on a complex topic. You could use AI to generate examples, draft explanations, create discussion activities. The materials would be good—clear, well-structured, pedagogically sound.
But should you? For which parts does AI collaboration serve your teaching goals, and for which parts does it undermine them? When does efficiency support learning, and when does it distance you from content you need to own?
These questions require professional judgement that frameworks, techniques, and infrastructure can’t answer. They require taste—contextual judgement about meaningful engagement.
Reflection: Evaluating meaningful engagement
Before we begin: Think of a recent substantial AI engagement. Can you articulate whether it was meaningful or superficial? What distinguishes the two?
What taste means
Taste is professional judgement about meaningful AI engagement. It integrates all the literacy dimensions you’ve developed:
- Functional application: Knowing how to collaborate effectively
- Critical evaluation: Assessing whether collaboration serves your goals
- Ethical awareness: Maintaining appropriate boundaries
- Contextual judgement: Knowing when and how engagement serves your work
This integration—all dimensions working together in contextually appropriate ways—is what distinguishes AI literacy from mere competence.
Why taste can’t be taught through rules:
Wine experts don’t follow rules like “good wine has 12% alcohol.” Their taste develops through tasting many wines, attending to what they experience, developing a vocabulary for distinctions, understanding context (what suits which occasion), and recognising that “good wine” depends on purpose.
Similarly, you develop judgement about AI engagement through many engagements across different work types, attending to what produces value versus what feels superficial, developing a vocabulary for meaningful versus performative collaboration, understanding context (what serves research versus teaching versus administration), and recognising that “good engagement” depends on goals.
The core insight: Same task, different contexts, different judgement. Writing an abstract with AI might be meaningful engagement (you’ve developed your argument, and AI helps articulate it clearly) or superficial engagement (your thinking is unclear, and AI papers over the gaps with polished prose). The task is identical; whether it’s meaningful depends on context that only professional judgement can evaluate.
This is why literacy requires judgement, not just competence with techniques.
Three levels of evaluation
Taste operates at three levels, each requiring judgement:
Level 1: Output quality
Questions:
- Does this serve my scholarly goals or just look sophisticated?
- Does this maintain my distinctive voice or sound generic?
- Would I be satisfied claiming this as my work?
- Is quality appropriate for my standards?
Level 2: Process quality
Questions:
- Was this intellectually productive or did it bypass thinking I should do?
- Did I maintain critical stance or accept suggestions uncritically?
- Did collaboration strengthen understanding or smooth over complexity?
- Did time invested produce proportional value?
Level 3: Meta-evaluation (the literacy level)
Questions:
- Am I developing good judgement or poor habits?
- What patterns reveal when engagement serves versus hinders my work?
- Is my judgement about AI engagement sound?
- How is my taste evolving?
This third level distinguishes literacy from competence. You’re not just judging engagement quality—you’re judging your own judgement. This metacognitive awareness enables ongoing development.
Reflection: Self-assessment of evaluation level
At which level do you typically evaluate your AI engagements? Most academics focus on outputs, occasionally on process, and rarely on meta-evaluation. Literacy requires all three.
Decision point: Recognising meaningful versus superficial engagement
Let’s practice evaluating what constitutes meaningful engagement through a realistic scenario.
The scenario
You’re writing a challenging theory section for a paper. You need to position your work against existing frameworks whilst developing your distinctive contribution. This is intellectually demanding work.
You have three options for AI engagement:
Option A: Draft entirely yourself without AI
What happens: You spend 6 hours over three days writing 2,000 words. It’s intellectually demanding—you wrestle with theoretical positioning, work through how different frameworks relate to your case, develop your argument through writing. The section is distinctively yours—colleagues would recognise your analytical voice.
Outcome: High intellectual integrity, time-intensive process, distinctively your thinking.
Reflection: This maintained intellectual quality but was demanding. Could AI have helped without undermining the thinking?
Pattern: Sometimes the struggle is the point—the intellectual work happens through wrestling with complexity. Efficiency isn’t always the goal.
Option B: AI drafts, you revise extensively
What happens: You prompt AI with your research question and key points. AI generates a 2,000-word theoretical section in 5 minutes. You spend 5 hours revising—rewriting 80%, restructuring the argument entirely, removing generic theoretical language, adding your specific examples and analytical insights, changing the framing to match your distinctive approach.
Outcome: Final product is good, but you wonder if drafting yourself would have been more efficient. You spent nearly as much time but started with text that didn’t actually scaffold your thinking.
Reflection: This saved perhaps an hour, but the AI draft didn’t help you think: it gave you text to revise rather than helping you develop your argument. The generic draft may even have constrained your thinking by providing a structure you felt obligated to work within.
Pattern: This felt efficient but was superficial. The AI draft looked like progress but didn’t advance your intellectual work. You mistook text production for thinking.
Option C: AI helps explore frameworks, you draft yourself
What happens: You spend 45 minutes with AI exploring theoretical frameworks. You ask: “How does organisational justice theory relate to this workload issue? What about institutional theory? Where might these frameworks conflict?” AI helps you map relationships between theories, surface connections you hadn’t considered, identify tensions requiring resolution. This clarifies your thinking.
Then you draft the section yourself (4 hours). The argument is yours, the voice is yours, the analytical contribution is yours. But it’s more theoretically sophisticated than you would have written alone—the AI conversation helped you think more comprehensively.
Outcome: Intellectually productive collaboration, distinctively your work, enhanced theoretical sophistication.
Reflection: This is meaningful engagement. AI enhanced your thinking without replacing it. The section is recognisably yours but benefits from the exploratory conversation that helped you see theoretical connections more clearly.
Pattern: AI served as thinking partner during development, not as text producer. The intellectual work remained yours but was strengthened through collaboration.
This is taste in action: Recognising that Option C serves your scholarly work whilst Option B feels efficient but undermines intellectual development. Option A maintains integrity but misses collaborative benefits. Rules can’t capture these distinctions—only judgement cultivated through experience.
Reflection: Your typical engagement pattern
Which option would you typically choose? Why? What does that reveal about your taste?
Domain-specific judgement
Taste is domain-specific: what constitutes meaningful engagement differs across research, teaching, and administration. Developing literacy means recognising these differences.
Key takeaway: Meaningful engagement looks different in research, teaching, and administration. What strengthens scholarly thinking might undermine student learning or administrative effectiveness. Develop domain-specific standards.
Research context: Intellectual integrity
Meaningful engagement characteristics:
- Maintains scholarly rigour appropriate to your field
- Advances genuine understanding, not just publishable text
- Surfaces complexity rather than smoothing it over
- Strengthens your distinctive analytical contribution
- Enables thinking you couldn’t access alone whilst remaining recognisably yours
Superficial engagement characteristics:
- Bypasses intellectual work you should do yourself
- Produces generic scholarship lacking distinctive voice
- Over-relies on AI for judgements requiring your expertise
- Sacrifices depth for efficiency inappropriately
- Makes work faster without making it better
Key evaluation question: Did this collaboration strengthen my scholarly thinking whilst maintaining my analytical voice?
Teaching context: Pedagogical appropriateness
Meaningful engagement characteristics:
- Helps develop pedagogically sound materials efficiently
- Frees cognitive resources for high-value teaching activities
- Maintains your distinctive teaching voice and approach
- Serves student learning goals substantively
- Enhances rather than replaces your teaching presence
Superficial engagement characteristics:
- Produces generic materials lacking your pedagogical context
- Distances you from teaching content you should own
- Undermines authentic teacher-student relationships
- Prioritises efficiency over pedagogical appropriateness
- Makes preparation faster without making teaching better
Key evaluation question: Does this serve student learning whilst maintaining my teaching presence?
Administration context: Professional effectiveness
Meaningful engagement characteristics:
- Handles routine communications efficiently whilst maintaining quality
- Preserves relationship context and personal touch appropriately
- Produces clear, professional communication serving its purpose
- Saves cognitive load for higher-value work without sacrificing effectiveness
- Maintains professional standards whilst reducing administrative burden
Superficial engagement characteristics:
- Produces generic communication lacking appropriate personal touch
- Damages professional relationships through inappropriate tone
- Saves time at expense of communication effectiveness
- Over-automates work requiring human judgement
- Makes work faster whilst making relationships worse
Key evaluation question: Does this maintain professional relationships whilst reducing administrative burden?
The pattern across domains
In each domain, meaningful engagement serves domain-appropriate goals—intellectual integrity (research), pedagogical appropriateness (teaching), professional effectiveness (administration)—whilst superficial engagement prioritises efficiency inappropriately.
Developing taste means building domain-specific evaluation standards that recognise these differences. What serves research may undermine teaching. What serves administration may be inappropriate for scholarship.
The practice-reflection cycle
Taste develops through systematic practice-reflection cycles. This is the core framework for cultivating judgement over time.
The cycle
1. Engage: Work with AI on actual scholarly tasks (not practice exercises—real work where outcomes matter)
2. Attend: Notice what happens during engagement:
- Process indicators: Am I thinking more deeply or offloading thinking? Does this feel intellectually productive or performatively efficient?
- Emotional indicators: Does this feel meaningful or am I going through motions? Would I be proud discussing this process with colleagues?
- Output indicators: Does this maintain my voice? Is quality appropriate? Would I be satisfied claiming this as my work?
3. Document: Capture engagement details before memory fades:
- What were you working on? What approach did you take?
- Was process intellectually productive? Why or why not?
- Does output serve your goals and maintain your voice?
- What does this reveal about when AI helps versus hinders your work?
4. Reflect: Look for patterns across multiple engagements (monthly):
- When does AI consistently help? When does it consistently hinder?
- What distinguishes meaningful from superficial engagement?
- Which contexts produce quality outcomes?
5. Adjust: Refine practice based on patterns:
- What will you do more of? Less of? Differently?
- Which literacy dimensions need more development?
6. Meta-reflect: Evaluate your own judgement quality:
- Is your taste developing appropriately?
- What blind spots might you have?
- How is your judgement evolving?
This cycle, repeated systematically, cultivates taste. Each iteration develops judgement further. But—crucially—the cycle only works if you attend honestly to your experience and reflect on patterns, not just document actions mechanically.
Making evaluation criteria explicit (optional advanced practice)
As your taste develops, you may want to articulate explicit standards for different work types. What does “maintains my voice” mean in research versus teaching? What constitutes “intellectually productive” in each domain?
For each domain, you might develop:
- Quality indicators specific to that work
- Process criteria that define meaningful engagement
- Meta-evaluation standards for assessing your judgement
Making criteria explicit can accelerate development—but this is optional advanced practice, not essential. Many academics develop sophisticated taste through the basic practice-reflection cycle without explicit criteria articulation.
Activity
Establish your practice (15 minutes setup + ongoing)
This activity establishes the simple practice infrastructure that develops taste over time.
Step 1: Create your engagement log (5 minutes)
In your note-taking system, create a note called “AI Engagement Reflections”.
Use this template for each engagement:
Date: [___]
Context: Research / Teaching / Administration
Task: [What you were trying to accomplish]
Approach: [How you engaged with AI]
Process evaluation:
- Was this intellectually productive? Why or why not?
- Did I maintain critical stance or accept suggestions uncritically?
- Did this feel meaningful or superficial?
Output evaluation:
- Does this serve my goals and maintain my voice?
- Would I be satisfied claiming this as my work?
- Is quality appropriate for my standards?
Pattern recognition:
- What does this reveal about when AI helps my work?
- What would I do differently next time?
Literacy connection:
- Which literacy dimensions was I exercising well?
- Which need more development?
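For illustration, here is what a completed entry might look like. The details are hypothetical, loosely based on Option C from the scenario earlier in this lesson:

Date: [last Tuesday]
Context: Research
Task: Drafting the theory section of a paper
Approach: Spent 45 minutes exploring theoretical frameworks with AI, then drafted the 2,000-word section myself
Process evaluation: Intellectually productive; the conversation surfaced a tension between frameworks I hadn’t considered. I rejected two suggested framings that didn’t fit my case, so the critical stance held. It felt meaningful rather than performative.
Output evaluation: The section is recognisably mine and more theoretically comprehensive than I would have managed alone. I’d be satisfied claiming it.
Pattern recognition: AI helps me most before drafting, as a thinking partner. Next time I’ll cap exploration at an hour so it doesn’t displace the drafting.
Literacy connection: Critical evaluation exercised well; contextual judgement about when exploration stops adding value needs development.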
Step 2: Document one recent engagement (8 minutes)
Think of a substantial AI engagement from the past week. Document it now using your template.
Be honest in your evaluation—both successes and failures teach. The goal isn’t to demonstrate good practice; it’s to notice patterns in your actual practice.
Step 3: Commit to ongoing practice (2 minutes)
Your commitment:
- I will document: 2-3 substantial AI engagements per week
- I will reflect monthly: 15-20 minutes reviewing documented engagements to identify patterns, adjust practice, and evaluate judgement quality
- First monthly review scheduled: [Date - add to calendar right now]
Why this matters: Taste develops through this sustained practice over months, not through completing this activity once. Your literacy continues developing through regular engagement with this simple cycle.
Optional: Advanced practice infrastructure
As your practice matures, you may add:
Reference library: Examples of excellent engagement (exemplars) and poor engagement (counter-exemplars) with analysis of what distinguishes them
Quarterly criteria review: Evaluating whether your evaluation standards still serve you well, adjusting as needed
Yearly comprehensive assessment: Reflecting on how taste has evolved, which literacy dimensions need attention, development priorities for the coming year
But start simple: document engagements, reflect monthly. Other infrastructure can develop later if helpful.
Reflection and commitment
Taste is:
- Professional judgement about meaningful engagement
- Integration of all literacy dimensions in contextually appropriate ways
- Domain-specific understanding (research ≠ teaching ≠ administration)
- Metacognitive awareness of your own judgement quality
- Continuously developing through systematic practice-reflection
Taste is not:
- A set of rules to follow
- Achieved through completing this lesson
- Generic across all work types
- Fixed once developed
Your commitment to ongoing development
I understand that:
- Taste develops through months of systematic practice, not through lesson completion
- The practice-reflection cycle is simple: engage, attend, document, reflect monthly, adjust
- Meta-evaluation (judging my own judgement) distinguishes literacy from competence
- My taste will continue evolving as AI capabilities change and my work develops
I commit to:
- Documenting 2-3 substantial engagements weekly
- Monthly 15-20 minute reflection on patterns
- Honest evaluation of both successes and failures
- Ongoing attention to when AI serves versus hinders my work
What sophisticated AI literacy means for my practice: [___]
Key takeaways
- Taste is integrated literacy: Professional judgement where functional application, critical evaluation, ethical awareness, and contextual judgement work together in contextually appropriate ways. This integration distinguishes literate from merely competent AI users. You’re not just using AI well; you’re exercising sophisticated professional judgement about meaningful engagement.
- Develops through practice-reflection cycles: Engage with AI on real work, attend to what happens (process quality, output quality, your experience), document engagements before memory fades, reflect monthly on patterns, adjust practice based on learning, and meta-evaluate your own judgement quality. This cycle, repeated systematically over months, cultivates taste. Single experiences teach little; patterns across many experiences develop judgement.
- Domain-specific and continuously developing: What constitutes meaningful engagement differs across research (intellectual integrity), teaching (pedagogical appropriateness), and administration (professional effectiveness). Develop appropriate standards for each domain whilst recognising they’ll evolve. Taste continues developing throughout your career as AI capabilities change, your work develops, and you gain experience. This is literacy as ongoing practice, not achieved expertise.
Your commitment
Pause and reflect
Based on this lesson, how will you establish a practice-reflection cycle? What domains need the most attention? Document this commitment in your Action Journal.
Looking ahead
You’ve now worked through the core of the AI literacy course, developing functional application, critical evaluation, ethical awareness, creation capabilities, context sovereignty, and contextual judgement. The final lesson provides frameworks for integrating these literacy dimensions into sustained scholarly practice. You’ve developed sophisticated capabilities; next you’ll learn to maintain and continue developing them over time.
Resources
- Dreyfus, H. & Dreyfus, S. (2005). Peripheral vision: Expertise in real world contexts. Organization Studies, 26(5), 779-792.
- Schön, D. (1983). The reflective practitioner: How professionals think in action. Basic Books.
- Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481.