A different game
Technology amplifies human intention. Rather than cataloguing AI’s failures, demonstrate thoughtful use, critique from practice, and amplify what matters to you. The question isn’t whether AI is good or bad—it’s what you choose to point it at.
If your LinkedIn feed is anything like mine, you’re seeing a steady stream of posts warning about the dangers of AI. They include critiques of the motives of tech companies and their CEOs, videos of errors presented as evidence of failure, and moral panic about misinformation. This post argues for thoughtful AI use as an alternative to performative critique.
Walk into any number of discussions about emerging technology and you’ll find a predictable rhythm. New development announced. Immediate deconstruction follows. Corporate motives questioned. Potential harms catalogued. Limitations highlighted. Discussion concludes with knowing nods about capitalism, surveillance or inequality.
Carlo Iacono, Wonder as Resistance
Now, it may be reasonable to engage with the technology only by pointing out its flaws, but I’d say it isn’t helpful.
So I want to suggest another perspective.
An alternative game
Technology amplifies human intention. This is true of electricity, cars, social media—and it’s true of AI. Misinformation and cheating existed before large language models. But so did education, creativity, and expertise. AI doesn’t create new categories of harm or benefit; it amplifies what we can do in the categories we choose to point it at.
It’s as if we’re playing a game where every post is an attempt to allocate points to one of two teams: “AI is Good” or “AI is Bad”. But the reality is that AI is whatever you choose it to be, and it will amplify whatever you choose to amplify.
An alternative version of this game looks like this:
Engage in good faith. Use the tools to solve real problems in your work rather than generating examples designed to fail. Prompt thoughtfully. Choose appropriate applications. Learn what works and what doesn’t through practice, not by setting up straw men.
Look for value. When a new capability appears, ask what becomes possible rather than cataloguing what remains impossible. What could this enable? What would make it more useful? What’s almost within reach if the technology improves?
Demonstrate use. Show how you’re actually working with AI rather than just describing what others might do. Share what works, what doesn’t, what you learned. Make your practice visible so others can learn from it.
Critique from practice. Ground your concerns in actual use. Point to specific problems you’ve encountered, not hypothetical disasters. Suggest alternatives based on what you’ve tried. This is far more useful than imagining worst cases.
Expose systemic problems. Use AI to reveal how existing systems are sub-optimal. Show how (most) current assessment practices don’t really measure learning. Demonstrate how publishing incentives in research are misaligned. Make visible the gatekeeping and access problems that existed long before AI. The technology can expose these issues and you can use that exposure productively.
What thoughtful AI use looks like
I use AI daily. For brainstorming ideas and getting feedback on my thinking. For quick literature reviews and synthesising documentation. For peer-reviewing material before I share it with others. For supporting my own learning when I’m trying to understand something complex. As an editor for my writing.
I teach my students to use it the same way: as a thinking partner that supports their learning and research processes, helping them engage with ideas more effectively. While they can use it to generate outputs and submit them as their own work, we talk about why this misses the point. They are in that classroom to develop their capacity to think critically and learn independently. Insofar as AI can help move us collectively towards that goal, I try to work with my students to explore what that process looks like.
These aren’t revolutionary uses. They’re small, daily practices that make my work more productive and (I hope) my students’ learning more effective. Anyone can do this. The barrier isn’t technical capability—it’s orientation. Are you looking for what might go wrong, or what might work?
It’s like the difference between looking for evidence of cheating versus looking for evidence of learning. Same data, different lens. One assumes the worst of people. The other looks for growth and capability. There’s an idea in research that you’ll find what you’re looking for, and this is a perfect example of it.
The choice
To be fair, the critical posts I see on social media could serve a purpose. They raise concerns, point out limitations, and question motives. But if that’s all they do, we’re spending our time and energy imagining problems rather than demonstrating solutions, and we’re missing an opportunity to shape how this technology develops. Most importantly, we’re choosing to amplify a negative view of the world.
Yes, you could write another post highlighting the increasingly rare edge cases where the technology fails. Or you could use it to make your research more accessible, to help students develop critical thinking, to surface connections across literatures, to make conferences more inclusive, to expose broken systems and suggest alternatives.
Both approaches are possible. One maintains the status quo while performing concern (I think of this as armchair quarterbacking). The other helps work towards something better.
Next time you see a post describing AI’s failures or questioning tech companies’ motives, remember: there’s another game available. One where you demonstrate thoughtful use, share what you’ve learned, critique from practice, and use the technology to amplify what matters to you.
The question then becomes: what will you choose to amplify?
Provenance
This post is based on an earlier article, “A better game: Choosing what to amplify with AI”, originally published on 10 October 2025.