Beyond document management

Graph infrastructure for professional curricula


Michael Rowe (University of Lincoln)
Wesley Lynch (Snapplify)

The revalidation problem

The BSc Nursing programme was revalidated last year across three fields: adult, mental health, and paediatric nursing.

5 staff members. 20 weeks of preparation.

  • Documents across Worktribe, SharePoint, and OneDrive
  • Competency mapping done manually in spreadsheets
  • Communication via email (outside the systems it was about)
  • Implicit knowledge holding the process together

Word, PDF, PowerPoint, and Excel documents. No connections between them. No queryable relationships.

Artificial information scarcity

The curriculum already has explicit structure:

  • Programmes contain modules with prerequisites
  • Learning outcomes map to NMC proficiencies
  • Assessments test outcomes across year levels

The structure exists. But it lives in prose, spreadsheets, and institutional memory; not in any form a machine can reliably traverse.

The information isn't absent. It's just not available in a form that lets AI create trustworthy connections. This is artificial information scarcity.

Documentation becomes infrastructure

When AI agents consume curriculum documents as operational input, documentation shifts into another category.

Curriculum reference material -> Operational architecture

  • Designed for human readers -> Consumed by AI agents
  • Humans infer, interpret, work around ambiguity -> Agents interpret literally, execute against structure
  • Inaccuracies are inconvenient but tolerable -> Inaccuracies cause system failures

Rowe (2026)

What this means for curricula

A module specification in Word describes prerequisites in prose. Readable by AI, but not traversable.

A graph database encodes the same prerequisites as typed relationships that are explicit, queryable, and verifiable.

The goal is not to create structure but to make existing structure computationally accessible.

Hilliger et al. (2024); Rowe (2026)
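As a minimal sketch of what "computationally accessible" means, the same structure can be held as typed relationships and queried directly. All module codes, outcome names, and relationship types below are hypothetical illustrations; a production system would use a graph database rather than in-memory tuples:

```python
# Curriculum structure as explicit, typed edges (illustrative names only).
EDGES = [
    ("NUR201_Pharmacology", "REQUIRES", "NUR101_Biosciences"),
    ("NUR201_Pharmacology", "DELIVERS", "LO_SafeMedication"),
    ("LO_SafeMedication", "MAPS_TO", "NMC_Proficiency_4_13"),
    ("OSCE_Year2", "TESTS", "LO_SafeMedication"),
]

def query(rel_type, source=None, target=None):
    """Return edges of a given type, optionally filtered by endpoint."""
    return [
        (s, r, t) for (s, r, t) in EDGES
        if r == rel_type
        and (source is None or s == source)
        and (target is None or t == target)
    ]

# An explicit, verifiable answer to "what does this module require?" --
# no prose interpretation needed.
print(query("REQUIRES", source="NUR201_Pharmacology"))
```

The point of the sketch is that each relationship is a first-class, checkable fact rather than a sentence a reader must interpret.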

Graph database

Graph databases enable formal relationships between curriculum concepts (Abu-Salih & Alotaibi, 2024; Wang et al., 2025)

  • Module A requires Module B
  • Learning outcome maps to NMC proficiency
  • Assessment tests outcome

Graph nodes and edges are based on HERM as an ontological framework (UCISA, 2025), supporting standardised entity types and valid relationships

The graph is the source of truth that generates documentation

Vector vs graph database

Vector database = semantic search across all curriculum materials; finds content about similar topics through statistical similarity

  • Aside: statistical similarity, unlike explicit structure, is the root of concerns around hallucination and accuracy

Query: "Show all module prerequisites two levels deep for clinical prescribing"

  • Vector search: finds documents mentioning prerequisites
  • Graph traversal: follows actual REQUIRES relationships through the graph

Xiao & Yahya (2023)
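The "two levels deep" query above can be sketched as a deterministic traversal over REQUIRES edges. The prerequisite chain below is hypothetical; in Cypher the equivalent would be roughly `MATCH (m)-[:REQUIRES*1..2]->(p)`:

```python
# Hypothetical prerequisite chain for a clinical prescribing module.
REQUIRES = {
    "NUR301_ClinicalPrescribing": ["NUR201_Pharmacology", "NUR202_Pathophysiology"],
    "NUR201_Pharmacology": ["NUR101_Biosciences"],
    "NUR202_Pathophysiology": ["NUR101_Biosciences"],
}

def prerequisites(module, depth):
    """Follow REQUIRES edges up to `depth` levels; return all modules reached."""
    found = set()
    frontier = [module]
    for _ in range(depth):
        frontier = [p for m in frontier for p in REQUIRES.get(m, [])]
        found.update(frontier)
    return found

print(sorted(prerequisites("NUR301_ClinicalPrescribing", depth=2)))
# Every result is an actual REQUIRES edge that was followed,
# not a document that happens to mention the word "prerequisite".
```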

Model Context Protocol

Natural language interface for non-technical staff to access curriculum infrastructure through conversation

  • Open, standardised, permission-based interface (allows fine-grained access control)
  • Access diverse data sources (vector and graph databases, email, local file system, public knowledge sources, e.g. Wikipedia)
  • Enables tool use (web search, read/write from the file system, skills)
  • Connect to any language model via API (local/remote, open-source/commercial)

Anthropic (2024)
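Under the hood, MCP messages are JSON-RPC 2.0. The sketch below shows the rough shape of a tool-invocation request; the tool name `query_curriculum_graph` and its arguments are hypothetical, not part of the protocol itself:

```python
import json

# Approximate shape of an MCP client's tool-invocation message
# (JSON-RPC 2.0 "tools/call"); tool name and arguments are invented
# for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_curriculum_graph",
        "arguments": {
            "query": "prerequisites",
            "module": "NUR301_ClinicalPrescribing",
        },
    },
}
print(json.dumps(request, indent=2))
```

A staff member never sees this message; the language model translates their conversational question into it, which is what makes the interface usable by non-technical staff.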

Use case: Impact analysis

Query: "If we change the Year 2 pharmacology assessment from a written exam to a clinical simulation, show the impact on NMC proficiency coverage across all three fields."

  • NMC proficiencies currently evidenced by that assessment identified
  • Coverage gaps and overlaps with other assessments flagged
  • Dependent modules and placement requirements shown

Current process: Emails to module leads. Heavy reliance on institutional memory and implicit knowledge. Hope nothing gets missed.
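With evidence relationships in a graph, the impact question reduces to set operations over EVIDENCES edges. Assessment names and proficiency codes below are illustrative:

```python
# Hypothetical mapping: assessment -> NMC proficiencies it evidences.
EVIDENCES = {
    "Y2_Pharmacology_WrittenExam": {"NMC_4_13", "NMC_4_14"},
    "Y2_ClinicalSimulation": {"NMC_4_13"},
    "Y3_OSCE": {"NMC_4_15"},
}

def impact_of_changing(assessment):
    """Proficiencies that would lose all evidence if `assessment` changed."""
    covered_elsewhere = set().union(
        *(profs for a, profs in EVIDENCES.items() if a != assessment)
    )
    return EVIDENCES[assessment] - covered_elsewhere

# Which proficiencies depend solely on the written exam?
print(sorted(impact_of_changing("Y2_Pharmacology_WrittenExam")))
```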

Use case: Compliance verification

Query: "Show NMC proficiency coverage across all assessment touchpoints for prescribing competencies. Identify gaps."

  • Complete NMC proficiency mapping in seconds
  • Gaps automatically identified across all three fields
  • Full audit trail, export-ready for submission

Current process: 5 staff, 20 weeks, manual cross-referencing across Worktribe, SharePoint, and OneDrive.
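The gap check itself is a single coverage comparison once the mapping is queryable; the required proficiency list and assessment mapping below are invented for illustration:

```python
# Hypothetical required proficiencies and current assessment coverage.
REQUIRED = {"NMC_4_13", "NMC_4_14", "NMC_4_15", "NMC_4_16"}
EVIDENCES = {
    "Y2_WrittenExam": {"NMC_4_13", "NMC_4_14"},
    "Y3_OSCE": {"NMC_4_15"},
}

covered = set().union(*EVIDENCES.values())
gaps = sorted(REQUIRED - covered)
print("Gaps:", gaps)  # proficiencies with no assessment touchpoint
```

The manual version of this comparison is exactly what currently consumes weeks of cross-referencing across spreadsheets.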

One infrastructure, multiple lenses

  • Programme leaders see dependency chains and structural impacts
  • Compliance officers see NMC proficiency coverage and gaps
  • Quality assurance teams see patterns across programmes and fields
  • Module leads see prerequisite assumptions and alignment
  • Regulatory bodies see proficiency coverage and compliance evidence

Educators retain authority over what the structure should be. The system makes that structure transparent.

When change becomes continuous

Currently: curriculum changes are subject to committee timelines, human bottlenecks, and coordination complexity across teams.

With queryable infrastructure, stakeholders ask questions and the graph provides real-time feedback:

  • Does this still satisfy NMC proficiency requirements?
  • Does it align with institutional assessment policy?
  • Are prerequisite assumptions for dependent modules still valid?

The system doesn't make decisions; it makes implications visible.

Governance that runs on annual review cycles cannot verify a curriculum that changes continuously.

This is not an efficiency argument. It is a governance argument, and the question is whether institutions build this capability or wait for vendors to define it for them.

Rowe, M. (2026). Documentation becomes infrastructure when AI agents are the readers.

Rowe, M., & Lynch, W. (2025). Beyond document management: Graph infrastructure for professional education curricula.

Rowe, M., & Lynch, W. (2025). Context sovereignty for AI-supported learning: A human-centred approach.

Rowe, M., & Lynch, W. (2025). Context engineering and the technical foundations of educational transformation.

Rowe, M. (2025). Beyond text boxes: Exploring a graph-based user interface for AI-supported learning.

Available at michael-rowe.github.io/emergent-scholarship

Thank you

Michael Rowe (mrowe@lincoln.ac.uk)
Wesley Lynch (wesley@snapplify.com)

References (1/2)

Abu-Salih, B., & Alotaibi, S. (2024). A systematic literature review of knowledge graph construction and application in education. Heliyon, 10(3), e25383. https://doi.org/10.1016/j.heliyon.2024.e25383

Anthropic. (2024, November 25). Introducing the Model Context Protocol. https://www.anthropic.com/news/model-context-protocol

Hilliger, I., Miranda, C., Celis, S., & Perez-Sanagustin, M. (2024). Curriculum analytics adoption in higher education: A multiple case study engaging stakeholders in different phases of design. British Journal of Educational Technology, 55(3), 785–801. https://doi.org/10.1111/bjet.13374

Rowe, M. (2026). Documentation becomes infrastructure when AI agents are the readers. https://michael-rowe.github.io/emergent-scholarship/Essays/documentation-as-infrastructure/

References (2/2)

UCISA. (2025). Higher Education Reference Models. https://www.ucisa.ac.uk/groups/enterprise-architecture/herm

Wang, Q., Hou, S., Wan, S., Feng, X., & Feng, H. (2025). Applying Knowledge Graph to Interdisciplinary Higher Education. European Journal of Education, 60(2), e70078. https://doi.org/10.1111/ejed.70078

Xiao, G., & Yahya, A. (2023). A Personalized Learning Path for French Study in Colleges Based on a Big Data Knowledge Map. Scientific Programming, 2023(1), 4359133. https://doi.org/10.1155/2023/4359133