AI didn't threaten my writing — it revealed what it was always actually for
When AI could produce the kind of prose I’d spent years learning to write, I had to ask what I’d actually been doing. The answer wasn’t comfortable, but it was clarifying: the writing was never the distinctive part. The glimpse upstream of it — the thing that needed to be worked out and made visible — always was.
I used to say I was compelled to write. Not that I enjoyed it — compelled. There’s a difference. Enjoyment is about the process. Compulsion is about something that won’t let you rest until it’s done.
Then AI arrived, and I discovered it could write. Not badly, either. Given the right prompt, it could produce an essay with the shape and register of something I might write: structured arguments, reasonable prose, the kind of hedged academic confidence I’d spent years cultivating. Seeing that unsettled me. It wasn’t the quality of the output but the question it raised, a question that made AI and academic writing uncomfortably personal: if AI could produce that, what had I actually been doing all this time?
The identity trap
The answer I first reached for was wrong. I told myself the output wasn’t quite right — missing something, a quality of thought, a human presence. And there was something to that. But it wasn’t the real issue.
The real issue was simpler and more uncomfortable: I had built my identity around a particular tool. “I’m a writer” was load-bearing in how I understood myself as an academic. The writing was the most visible part of the process, so it had become the identity. And now the tool was threatened, which felt like the identity was threatened.
Jussupow, Heinzl and Spohrer (2018) studied this pattern in medical professionals facing AI: when a technology challenges the knowledge that defines your professional identity, the threat lands not just as inconvenience but as something closer to existential. The most relevant dimension they identify is threat to expertise — the sense that what made you distinctively you is no longer distinctively yours.
What I worked out is that I had misidentified the expertise. The words were never the distinctive part. They were always reproducible — by me, on a good day, and it turns out by AI, on most days. The thing that wasn’t reproducible was upstream of the words entirely.
The glimpse
Before any piece of writing that mattered to me, there was a moment. Not an idea fully formed, but something more like a direction — a brief clarity about a relationship between two things that convention kept separate, or a hidden assumption that everyone was treating as a fact, or the core of something complex that had been obscured by its own accretions. A glimpse, before any words attached to it.
This is what the compulsion was actually responding to. The glimpse of something as-yet-unseen felt urgent, needing to be got out of my head and made solid before the fog closed in again. The writing was how I anchored it, how I turned the ephemeral into something I could then mould into a coherent thought.
Paul Graham put it precisely: good writing isn’t about demonstrating what you already understand; it’s about discovering new ideas. The French etymology is telling: the word essai comes from essayer, to try. An essay is an attempt, a working-toward, not a demonstration of arrival. That compulsion was always the compulsion to attempt — to work the glimpse out, and then to make it visible to someone else.
That’s what I was doing. Not writing, in the narrow sense. Making something visible.
What changed — and what didn’t
Feynman described writing as not merely a record of what he thought, but the medium through which he thought — the thing that made conscious thinking possible. That framing holds for me, and it’s what survived the AI transition intact. Writing is still thinking. What’s changed is what I mean by “writing.”
I used to write with a pen. Then with a word processor. Now with AI. But the asymmetry matters. A pen and a word processor are passive: they transcribe; they don’t participate. AI is categorically different because it contributes. It offers framings I hadn’t considered, surfaces connections I’d missed, and pushes back in ways that force me to articulate the idea more precisely. It’s closer to a colleague than a tool.
That participation is exactly what makes it generative — and exactly what makes it risky if you’re not careful about how you use it.
The distinction that matters
When AI generates fluent, coherent text, there’s a temptation to accept it. The prose is good. The structure holds. It sounds right. But fluency is a form of noise. It can create the appearance that an idea has been captured when it has only been approximated. The test isn’t “does this read well?” — it’s “does this say what I was trying to say?” Those questions can have opposite answers.
Working well with AI in writing requires learning to look past the fluency to assess whether the glimpse was actually captured. To treat polished output with the same scepticism you’d apply to a first draft. To reject plausible-but-wrong text and use the AI’s response as a prompt to go deeper, not as a finished product. Nguyen, Hong, Dang and Huang (2024) found exactly this pattern in doctoral students: those who engaged in iterative, back-and-forth collaboration with AI achieved better writing outcomes than those who used it as a supplementary information source and accepted what came back.
Bedington, Halcomb and McKee (2024) call this “human-machine teaming” and frame it around authorial agency: the author is the one who exercises judgement, regardless of who produces the words. That framing is right. What makes writing yours isn’t the source of the words, but the exercise of evaluative judgement over them.
The failure mode has a name now, borrowed from software: vibe coding with words. The vibe coder accepts whatever AI produces and hopes it works. They’ve outsourced not just the execution but the judgement. The writing is fluent, possibly even good. But the glimpse — if there was one — has been replaced by the model’s average.
As Carlo Iacono (2025) put it: “when words get cheaper, meaning gets more expensive.” Discernment is what becomes scarce. If we’re not exercising it, we’re not writing with AI. We’re just prompting.
What I am, then
I’m not a writer who uses AI. That framing keeps the identity in the wrong place — attached to the output form, the words, the thing that turned out to be reproducible.
I’m a scholar. And scholarship, in its everyday form, is messy, personal, and purposeful: the practice of noticing things worth noticing, working them out, and making them visible to others. The tool has always been secondary to that. I wrote with a pen, then a keyboard, and now in dialogue with a model that pushes back and extends and occasionally says something that surprises me into a sharper version of what I was reaching for.
The glimpse is still mine. The compulsion is still mine. What’s changed is that the medium for working it out has become richer and stranger than anything I expected.
And if that doesn’t feel like writing, it’s because we’ve been using “writing” as a shorthand for something that was always bigger than words.
Bedington, A., Halcomb, E. F., & McKee, H. A. (2024). Writing with generative AI and human-machine teaming. Computers and Composition. https://doi.org/10.1016/j.compcom.2024.102833
Iacono, C. (2025). Thoughts for 2026.
Jussupow, E., Heinzl, A., & Spohrer, K. (2018). I am; We are — Conceptualizing professional identity threats from information technology. Conference paper.
Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education. https://doi.org/10.1080/03075079.2024.2323593