The real governance question
AI meeting scribes haven’t created new power dynamics—they’ve automated existing ones, making them more technical, less visible, and more scalable. The question isn’t how to stop gaming but how to govern dynamics that have always existed.
Years ago, I realised that when you take notes during meetings and share them with attendees, you control the narrative for that meeting. Your notes become the canonical reference for what was discussed, what was decided, and who’s responsible for what happens next. It’s not manipulation; this is just how organisational memory works. Now AI meeting scribes have automated this process, making the dynamic both more powerful and more subtle.
Bruce Schneier recently described “AI summarisation optimisation” (AISO): the practice of strategically shaping speech during meetings to influence how AI scribes (note-takers) capture and prioritise information. Rather than persuading colleagues directly, “clever meeting attendees can manipulate this system’s record by speaking more to what the underlying AI weights for summarisation and importance than to their colleagues.” They use high-signal phrases like “key takeaway” and “action item,” keep statements brief, repeat critical points, and speak at strategic moments.
Why AI meeting scribes are vulnerable to exploitation
This gaming is possible because AI meeting scribes exhibit predictable technical vulnerabilities. These systems over-rely on content positioned at the start and end of conversations, systematically under-weighting information in the middle. They can’t reliably distinguish between embedded instructions and ordinary content, especially when phrasing mimics salient cues or uses formulaic language. These aren’t accidental flaws but fundamental limitations in how language models process sequential information. Once people understand these patterns, exploitation becomes inevitable, because the vulnerabilities are systematic and learnable.
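To make these two biases concrete, here is a minimal sketch of a toy extractive summariser that over-weights the start and end of a transcript and gives a flat bonus to cue phrases. Everything here (the `CUE_PHRASES` list, the weights, the scoring formula) is invented for illustration and is not drawn from any real meeting-scribe product; the point is only that once such heuristics are learnable, a cue-laden opener beats a substantive mid-meeting remark.

```python
# Toy extractive summariser illustrating two biases described above:
# positional weighting (start/end score higher than the middle) and
# cue-phrase weighting ("key takeaway", "action item", ...).
# All phrases and weights are illustrative assumptions.

CUE_PHRASES = ("key takeaway", "action item", "decision", "next step")

def score(sentence: str, index: int, total: int) -> float:
    # Positional bias: U-shaped weight, highest at the edges.
    position = index / max(total - 1, 1)           # 0.0 .. 1.0
    positional_weight = 1.0 + abs(position - 0.5)  # 1.0 mid, 1.5 edges
    # Cue-phrase bias: each salient phrase adds a flat bonus.
    cue_bonus = sum(0.5 for p in CUE_PHRASES if p in sentence.lower())
    return positional_weight + cue_bonus

def summarise(sentences: list[str], top_k: int = 2) -> list[str]:
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: score(sentences[i], i, len(sentences)),
        reverse=True,
    )
    # Return the top-k sentences in their original order.
    return [sentences[i] for i in sorted(ranked[:top_k])]

transcript = [
    "Key takeaway: my proposal is the way forward.",   # cue phrase, first slot
    "I have detailed reservations about the budget.",  # middle, no cues
    "We should examine the vendor contract closely.",  # middle, no cues
    "Action item: Dana drafts the rollout plan.",      # cue phrase, last slot
]
print(summarise(transcript))
# The two gamed sentences win; the substantive middle remarks are dropped.
```

The substantive objections in the middle never make the summary, while the strategically placed, cue-laden statements do, which is exactly the exploitation pattern AISO relies on.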
This feels new, but it’s really an evolution of something that already existed. Meeting dynamics have always been adversarial to some degree; people have always positioned agenda items strategically, controlled air time, and used particular terminology to frame decisions. What’s changed is that these dynamics are becoming:
- More technical: Success requires understanding algorithmic preferences, not just social dynamics
- Less visible: Gaming happens through subtle language choices rather than obvious dominance behaviours
- More scalable: Once you understand the patterns, you can deploy them consistently across every meeting
For me, the organisational leadership question isn’t “how do we stop AISO?” It’s “how do we govern power dynamics that have always existed but now have new technological expression?” In some ways this parallels what we’re seeing play out in other higher education contexts: initial attempts to ban AI use have now morphed into attempts to constrain it.
Three layers of organisational response
Organisations will soon need governance across three domains:
- Cultivating social awareness by helping people recognise these patterns and creating norms around authentic versus adversarial communication
- Establishing organisational policies that acknowledge why this gaming happens and address the underlying incentive structures
- Implementing technical safeguards that make AI meeting scribes more robust to manipulation while maintaining their utility
This isn’t about preventing specific behaviours. It’s about building organisations that remain healthy when adversarial dynamics gain new technological mechanisms. The technology didn’t create the problem; it just made existing dynamics harder to ignore and more urgent to address.
Provenance
This post is based on an earlier article, “Gaming AI meeting scribes: Why organisational memory needs new governance”, originally published on 08 December 2025.