When AI agents consume documentation as operational input, that documentation undergoes a category shift from reference material to operational architecture — inaccuracies no longer merely inconvenience readers; they cause system failures. This essay argues that the primary bottleneck for institutional AI integration is not AI capability but information architecture: how institutional knowledge is structured, maintained, and made available to AI systems. Documentation written for human readers cannot function as reliable AI input without deliberate restructuring around explicit relationships and rigorous maintenance workflows. Treating this transition as a governance imperative — rather than a technical afterthought — determines whether AI integration delivers on its institutional promise.
When educators embed hidden instructions in assessment materials to detect AI use, they import adversarial security thinking into educational relationships. This post examines what AI tripwires reveal about institutional assumptions — namely, that assessment is about artifact authentication rather than learning measurement — and argues that this approach creates escalating countermeasure dynamics while detecting only carelessness, not genuine disengagement. The alternative requires rethinking what assessment is actually for in an era when artifact production has become trivially automatable.