Where human value now lives
AI can learn your patterns—what you typically choose. But it cannot judge whether those patterns produce outcomes worth amplifying. As creation and curation become trivial, evaluative judgement about what should exist becomes the primary human contribution.
Content creation has become trivially easy: what used to take hours now takes minutes. The technical barrier to producing text, images, presentations, and video has collapsed in ways that seemed impossible just two years ago.
Unfortunately, the result is slop: low-effort output flooding every channel. The common response is to valorise curation—we tell ourselves that creation is cheap but curation is expensive, that the real skill is knowing what to make, not how to make it.
But I think this framing misses something important: taste, the evaluative judgement about what should exist. This post is an attempt to explore what that means.
First, we need to distinguish between three different things:
- Creation: the technical execution—writing, building, coordinating—which is becoming trivially easy.
- Curation: the process of selecting and organising—choosing what to make from available options. This is also becoming easier as AI learns your patterns and preferences.
- Taste: the evaluative judgement about what matters and what should exist. This is fundamentally different from the other two.
The distinction matters because AI can be descriptive but not evaluative. It can learn what you typically choose—your patterns, preferences, and historical decisions. But it cannot judge whether those patterns produce outcomes worth amplifying. You might have consistent preferences like writing style and reading lists (which AI can learn), but questionable taste (which it cannot replace).
You might object by saying that a well-aligned AI might help improve taste by questioning our choices. And maybe we’ll get to a point where an AI could challenge your decisions, suggest you’re optimising for the wrong outcomes, or refuse to generate obvious slop. But this raises uncomfortable questions about who decides what constitutes ‘good taste’ or ‘slop’—you, the AI, or the companies building these systems? These are important considerations, but I don’t think they change the core point: even if AI challenges your choices, you’re still the one making the final evaluation. The judgement is still yours.
Both creation and curation are becoming trivially easy. What remains is the harder question: do your patterns—the things you consistently choose to do—reflect good evaluative judgement about what should exist in the world?
From execution to evaluation
Professional value has traditionally been bound to execution capability—the ability to write compelling arguments, design effective systems, or analyse complex data. And professional identity emerges from these craft skills.
I believe that this relationship is breaking down. The craft itself is increasingly automated now that generative AI can both read and write based on your historical patterns and preferences. What cannot be automated, I think, is the evaluative judgement that determines what should be done in the first place.
A small example: writing a blog post. Previously, skill meant wordsmithing—finding the right phrase, structuring arguments, maintaining voice. Now, skill means evaluating whether this particular idea deserves to exist in this particular form for this particular audience at this particular moment. The execution becomes automatic; the evaluation is still human.
A bigger example: coordinating AI agents to manage a school. Technical orchestration matters, but value lives in the taste shaping every interaction—what counts as good tutoring, when to escalate questions to humans, how to balance efficiency with empathy. This judgement cascades through thousands of interactions daily.
This is taste: evaluative judgement about what matters and what makes something worth doing. And taste, unlike technical skill or learned preferences, scales in unexpected ways when mediated by AI.
Why taste matters at scale
Someone with poor taste writing blog posts creates localised slop—annoying but containable. That same person coordinating AI systems creates scaled harm. Poor taste amplified produces outcomes nobody wants: manipulative interfaces, metrics optimised for irrelevance, decisions that serve systems rather than people.
The inverse is also true. Good taste amplified creates disproportionate value. Someone with refined judgement about what matters can coordinate systems that produce better outcomes than they could ever create individually.
This isn’t a technical problem. It’s a taste problem. And taste doesn’t improve automatically through practice—it requires deliberate cultivation.
What cultivation looks like
Taste develops through exposure and reflection. You encounter diverse examples—good and bad—and develop evaluative judgement about why particular approaches or outcomes work or fail. This operates across interconnected dimensions:
Aesthetic taste: What creates clarity versus obscurity? When does complexity serve understanding?
Strategic taste: What problems actually matter? What creates lasting value versus temporary fixes?
Ethical taste: Who benefits and who bears costs? What are the second-order effects?
These aren’t separate categories but interconnected aspects of a unified sensibility about what’s worth doing.
Cultivating taste involves:
- Paying attention to why some approaches succeed where others fail
- Finding examples of excellence and examining what makes them excellent
- Developing sensitivity to when something feels wrong, even if you can’t articulate why
- Building confidence to trust your judgement while staying open to being wrong
- Exposing yourself to perspectives that challenge your default assumptions
This is different from skill development. Skills have clear progression paths. Taste requires evaluative judgement, which means that sometimes you’re going to be wrong. It demands caring enough to develop strong opinions while remaining open to revision.
This is increasingly where human value lives. As AI handles more execution and pattern-matching, evaluation becomes the primary human contribution.
The identity shift
If you’ve built your professional identity around execution capabilities—writing well, analysing competently, designing effectively—this shift is going to be uncomfortable. These capabilities still matter, but they’re no longer sufficient. The question shifts from “can I do this well?” to “should this be done at all?”
You can see this playing out in universities right now. Institutions are focused on detecting AI use in student assessments—worried about whether students are doing their own writing—when the more important question is whether we should be assessing through essays at all. Staff are using AI to read and write reports, speeding up the process, when they should be asking whether those reports need to exist in the first place. We’re optimising execution when we should be evaluating purpose. The fixation on “how do we do this?” prevents us from asking “should we be doing this?”
You’re not just shaping individual artefacts anymore. You’re shaping systems that produce artefacts, decisions, and actions at scale. And the primary constraint on those systems is your taste—your evaluative judgement about what deserves to exist.
The opportunity
This shift isn’t purely threatening. For most of human history, having good taste wasn’t enough. You might know exactly what should exist without having the technical ability to make it real. The gap between vision and execution was enormous.
But if you can discern what matters and what deserves to be made real, you can increasingly coordinate systems that give form to ideas. The bottleneck shifts from execution to judgement.
This makes taste accessible as a primary form of value. You don’t need to spend a decade mastering a craft before you can make a meaningful contribution. Instead, you need to develop discernment about what’s worth making and how to make it well.
And taste has this interesting property: it’s contagious. When you encounter something made with care and judgement, it shifts your sense of what’s possible. Good taste creates standards that elevate surrounding work. As more people develop taste and coordinate AI systems to express it, we can create environments where excellence becomes more visible and reproducible.
An invitation
So: what are you putting into the world? Not just what content, but what outcomes? What happens because you chose to create this rather than something else?
AI can learn your patterns—what you typically choose to do. But it cannot evaluate whether those patterns produce outcomes worth amplifying. That evaluation is yours.
If you’re satisfied with your answer—if your taste produces outcomes you’d want amplified—then the work going forward is refinement. Making it more precise, more generous, more attuned to what genuinely matters.
If you’re not satisfied—if you recognise that your use of AI is adding noise rather than signal—then your work is cultivation. Not punishment but possibility. You get to decide what deserves existence.
The technical barriers have collapsed. What remains is a deeply human question: what’s worth making? What should exist that doesn’t yet exist? What outcome would make the world incrementally better?
These are questions of taste. And unlike technical skill or learned patterns, taste isn’t zero-sum. My developing better taste doesn’t prevent yours. It potentially helps—because good taste is contagious, and exposure to excellent work improves everyone’s discernment.
This is the opportunity hidden in the slop crisis. Not just curating better content, but becoming people whose taste shapes better outcomes. People who understand what matters and can coordinate systems to realise it. People who add signal rather than noise.
That’s the work. That’s what becomes possible when both creation and curation cost nothing, and evaluation becomes everything.
Provenance
This post is based on an earlier article, “AI and judgement: Cultivating taste in an age of capability”, originally published on 15 October 2025.