Screenshot of the Claude status page during the March 2026 outage: the disruption that revealed what AI dependency looks like in practice when you can no longer plan your work

Claude had intermittent outages over a couple of days this week. It wasn’t a single dramatic failure, but hours at a time when it was unreliable or unavailable. Each time, I switched to Gemini, carried on working, and got through what I needed to. But the experience made something clear about AI dependency in practice: I’d crossed a threshold without noticing.

It wasn’t that I couldn’t work. Gemini is a very capable model. The difference is more like having two equally competent colleagues and simply getting on better with one of them. The work gets done either way. But the switch itself (the disruption to how I’d been thinking, the loss of the context I’d built up, the friction of adjusting) told me something about where AI now sits in my practice.

What made it clear was the contrast with a different kind of interruption, one I’ve already adapted to. When I hit my session limit with Claude, I now set work aside rather than finish it manually. It’s a simple calculation: if I can wait two hours and have AI complete something in five minutes, it doesn’t make sense to spend an hour doing it myself. I’ll work on something that doesn’t currently benefit from AI, and come back. That’s a rational resource allocation decision: I know the constraint, I know the timeline, and I can plan around it.
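The arithmetic is simple enough to write down. Here’s a toy sketch of the decision; the function, the “usable wait” flag, and the numbers are just my own framing of the example above, not anything rigorous:

```python
def plan_around_limit(wait_min: float, ai_min: float, manual_min: float,
                      wait_is_usable: bool = True) -> str:
    """Wait for the session limit to reset, or do the task by hand?

    If the waiting time can be spent on other work, its cost is effectively
    zero, so only the hands-on AI time counts against the manual estimate.
    """
    effective_cost = ai_min if wait_is_usable else wait_min + ai_min
    return "wait for AI" if effective_cost < manual_min else "do it manually"

# A two-hour wait plus five minutes of AI time versus an hour by hand:
print(plan_around_limit(wait_min=120, ai_min=5, manual_min=60))  # wait for AI

# The same numbers when the wait can't be filled with other work:
print(plan_around_limit(120, 5, 60, wait_is_usable=False))       # do it manually
```

In these terms, what the outage took away was a known `wait_min`: the calculation only works when the constraint comes with a timeline.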

The outage was qualitatively different, and the intermittent pattern made it worse. Not because the capability was gone (I had Gemini, I had local models) but because the uncertainty removed my ability to plan. Do I rebuild context in another model? Do I wait? How long? That inability to plan is the signature of infrastructure dependence: not that you can’t function without it, but that its absence introduces uncertainty into how you organise your work. The whole experience felt unsettling in a way it wouldn’t have done even two months ago.

This is the individual version of a question I worked through in a post on institutional AI governance. If a single practitioner moving into agentic workflows already notices this, what does it look like when agentic processes are embedded across an institution, and the supporting infrastructure falls over?