About this essay
- Author: Michael Rowe (ORCID; mrowe@lincoln.ac.uk)
- Affiliation: University of Lincoln
- Created: Dec 05, 2025
- Version: 0.4
- Modified: See GitHub record
- Keywords: AI-forward, artificial intelligence, leadership, organisational infrastructure, risk management
- License: Creative Commons Attribution 4.0 International
- Preprint DOI: N/A
- Peer reviewed: No
Abstract
Higher education institutions face persistent pressure to demonstrate visible engagement with artificial intelligence, often resulting in what we characterise as “innovation theatre” - the performance of transformation without corresponding structural change. This paper presents a diagnostic framework that distinguishes between performative and structural integration through analysis of four operational domains: governance and accountability, resource architecture, learning systems, and boundary setting. Unlike maturity models that prescribe linear progression, this framework enables institutional leaders to assess whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality. The framework emerged from critical analysis of institutional AI responses but evolved toward practical utility for decision-makers operating within genuine constraints. We position this work as practitioner pattern recognition requiring subsequent empirical validation, outline specific validation pathways, and discuss implications for institutional strategy in contexts of technological disruption.
Introduction
The institutional response to artificial intelligence in higher education exhibits a curious pattern. Universities announce AI strategies, establish task forces, launch pilot initiatives, and communicate commitment to innovation. Yet many of these efforts occur within defensive structures that limit genuine experimentation, protect fundamental operations from challenge, and distribute risk asymmetrically across institutional hierarchies. Resources flow toward visible demonstration rather than adaptive capacity. Learning from experiments - particularly failures - rarely influences subsequent decisions. Core assumptions about educational delivery remain insulated from transformation.
This pattern represents more than typical organisational inertia. It constitutes active performance of innovation while maintaining structures that constrain genuine experimentation. We term this phenomenon “innovation theatre” - not to dismiss legitimate institutional constraints, but to name a specific dynamic where stated strategy diverges from structural reality.
The framework presented here emerged from observing this pattern across multiple institutions and evolved through several iterations toward practical utility. Initial development produced a ten-dimension analytical tool examining institutional dynamics in granular detail. However, comprehensiveness created problems for adoption. The framework risked becoming what it critiqued - intellectually sophisticated but practically inert, more useful for external evaluation than internal decision-making.
This paper presents the refined framework: four operational domains where senior institutional leaders make concrete decisions that enable or constrain genuine experimentation. We argue that these domains capture the essential structural levers while remaining operationally actionable. The framework doesn’t prescribe particular strategic positions but enables institutions to assess whether their structures align with their stated intentions.
Theoretical foundations
Innovation theatre as analytical construct
The concept of innovation theatre draws on established organisational theory while applying it specifically to AI integration contexts. DiMaggio and Powell’s (1983) work on institutional isomorphism explains how organisations adopt structures and practices to gain legitimacy rather than efficiency. Meyer and Rowan (1977) distinguish between formal structures that demonstrate compliance with institutional expectations and technical activities that accomplish actual work - what they term “decoupling.”
Innovation theatre represents a specific manifestation of this decoupling in technology adoption contexts. Institutions perform innovation through visible initiatives while maintaining defensive technical structures. The performance satisfies stakeholder expectations for AI engagement. The defensive structures protect existing operations from disruption. Resources flow toward the performance rather than building genuine adaptive capacity.
This differs from simple resistance to change. Innovation theatre actively embraces change rhetoric while constraining structural transformation. It creates visible demonstration of engagement precisely to avoid deeper institutional disruption. The pattern becomes problematic not because defensive positions are inherently wrong - some institutions face legitimate reasons for conservative approaches - but because the gap between performance and structure creates strategic vulnerability.
Institutions performing innovation theatre spend resources satisfying expectations for visible response without building capacity to learn and adapt as circumstances shift. They appear responsive while limiting actual organisational learning. This creates dangerous illusions of preparedness in contexts where technological disruption may require genuine adaptive capacity.
Distinguishing from maturity models
The framework explicitly rejects maturity model logic. Traditional maturity models (Humphrey, 1988; Paulk et al., 1993) assume linear progression through capability levels where higher stages represent objectively superior organisational states. Such models serve useful functions in contexts with clear technical benchmarks and stable environmental conditions.
However, maturity models applied to institutional AI strategy create several problems. First, they privilege particular strategic positions - typically “advanced” or “transformative” - without acknowledging legitimate institutional variation in mission, context, and constraints. A specialised research university and a regional teaching institution face fundamentally different strategic imperatives. Mature responses differ by context.
Second, maturity models obscure the distinction between conscious strategic choice and accidental positioning. An institution scoring “low” on maturity might represent either deliberate conservatism or unintentional theatre. The framework presented here makes this distinction central rather than peripheral.
Third, maturity models struggle to capture the performative-structural gap that characterises innovation theatre. An institution might score high on visible indicators (strategy documents, pilot initiatives, committee structures) while remaining fundamentally defensive in actual operational patterns. Traditional assessments miss this misalignment.
The framework instead provides diagnostic criteria for assessing structural-strategic alignment. It makes visible the organisational patterns that enable or constrain genuine experimentation without prescribing which patterns institutions should adopt.
Strategic positioning and intentionality
We propose three legitimate strategic positions regarding AI integration: incremental, selective, and transformative. These positions represent distinct approaches rather than developmental stages.
Incremental positioning prioritises contained experimentation at operational margins while protecting core institutional structures. This stance proves appropriate for institutions facing significant regulatory constraints, those serving populations requiring stability, or those making conscious strategic choices about technology adoption timing. The incremental position becomes problematic only when institutions believe they’re being transformative while structures remain defensive.
Selective positioning designates specific domains for genuine experimentation while maintaining others as protected. This approach acknowledges that not all institutional functions require equal openness to transformation. It distinguishes between genuine constraints (regulatory requirements, contractual obligations) and institutional preferences. Selective positions require explicit boundary-setting about what’s available for experimentation versus what’s protected.
Transformative positioning makes core institutional assumptions available for challenge through experimentation. This doesn’t mean abandoning all structures but rather creating conditions where fundamental questions about educational delivery, assessment, credentialing, and institutional identity can be tested empirically. Few institutions genuinely operate here; most claiming this position exhibit more limited structural openness.
The framework’s value lies not in advocating particular positions but in revealing whether institutions occupy their chosen position consciously. An institution claiming transformative stance while exhibiting incremental structures experiences strategic drift. An institution deliberately adopting incremental positioning with full awareness of its implications demonstrates strategic coherence even if conservative.
The four-domain framework
The framework organises around four domains where institutional leaders make operational decisions that determine whether genuine experimentation becomes structurally possible. We argue these domains comprehensively capture the essential structural levers while remaining sufficiently bounded for practical utility.
Domain 1: Governance and accountability
This domain examines fundamental questions about decision-making authority and risk distribution. Who decides what experiments get resourced? Who carries consequences when initiatives fail? Who defines success criteria, including ethical standards for evaluating AI deployment? The answers reveal whether institutions create conditions for genuine experimentation or constrain it through governance structures.
Incremental governance patterns concentrate decision rights at senior levels while distributing risk downward to individuals conducting experiments. Leadership approves initiatives but remains insulated from failure consequences. Success criteria are defined centrally, often emphasising risk mitigation and reversibility. Experiments are designed for safe, contained outcomes.
This pattern appears frequently across higher education. Faculty or staff propose AI initiatives. Senior leadership approves them (often enthusiastically). If experiments fail, consequences fall primarily on those who led them - their time wasted, their credibility questioned, their units affected. Leadership faces minimal direct consequences. Next proposals receive similar approval within similarly constrained parameters.
Selective governance patterns create designated zones where accountability becomes genuinely shared for specific experiments. For particular initiatives, executives co-own outcomes with project leads. Success criteria are negotiated within defined boundaries rather than imposed centrally. Risk distribution becomes more proportional, though strategic decisions remain largely protected from experimental challenge.
Transformative governance patterns distribute risk proportionally across hierarchical levels. Those holding decision-making authority carry commensurate consequences for outcomes. Success definition emerges from experimentation processes rather than being predetermined. Governance structures enable bottom-up challenge of strategic assumptions, creating pathways for experiments that question executive-level decisions.
The governance domain matters because institutional positioning on risk fundamentally shapes what experiments become possible. If leadership remains insulated from consequences while pushing risk onto individuals, genuine experimentation becomes structurally unlikely regardless of stated intentions or visible initiatives.
Domain 2: Resource architecture
This domain examines how organisations allocate resources and create capacity for experimentation. What gets funded versus defunded? Where does protected experimental budget exist? What capacity supports learning from failure? The answers reveal whether institutions genuinely reprioritise activities or simply add initiatives while preserving all existing structures.
Incremental resource patterns frame all AI initiatives as additive. Nothing gets defunded. Existing structures remain entirely preserved. Investment is positioned as efficiency gains, low-cost pilots, or additions that don’t threaten current operations. Resources flow toward visible demonstration rather than building learning infrastructure.
This pattern manifests when institutions announce AI initiatives funded through “efficiency savings” or “reallocation within existing budgets” without identifying what actually ends to enable what begins. Everything is additive. The institutional portfolio grows continuously larger without corresponding reduction elsewhere. This proves financially unsustainable and signals that AI integration isn’t genuinely strategic - if it were, institutions would make explicit choices about tradeoffs.
Selective resource patterns involve targeted reallocation within units. Some existing work gets reprioritised or ended. Protected budget exists for designated experiments, though it may be modest. Core institutional structures remain funded, but marginal activities face scrutiny. Resources begin flowing based on learning from previous experiments rather than purely on proposal quality.
Transformative resource patterns make active defunding decisions at the institutional level. Protected experimental budget represents a meaningful percentage of operations. Dedicated capacity exists for learning infrastructure - staff whose role involves capturing insights from experiments and ensuring they influence future decisions. Clear choices are made about what ends to enable what begins, with explicit acknowledgement of tradeoffs.
The resource domain matters because additive innovation that preserves all existing structures differs qualitatively from transformation requiring reallocation. The way organisations fund and defund activities reveals actual priorities more clearly than stated strategies.
Domain 3: Learning systems
This domain explores how organisations generate and use insights from initiatives. How do experiments produce learning? What happens to failed initiatives? How does insight spread across the institution and influence future resource allocation and governance decisions? The answers reveal whether experiments are designed primarily for safe outcomes or genuine learning.
Incremental learning patterns lack systematic infrastructure for capturing insights. Pilots are designed to succeed or fail safely rather than generate challenging learning. Failed experiments disappear quietly. Success stories circulate without rigorous analysis of causation. Documentation is minimal and unsystematic. Learning remains tacit within individuals rather than becoming institutional knowledge.
This pattern appears when institutions can’t name specific insights gained from failed AI experiments or explain how those insights changed subsequent approaches. Experiments occur, produce outcomes, and fade from institutional memory. Success stories demonstrate visible progress but don’t build cumulative understanding of what works, why, and under what conditions.
Selective learning patterns implement post-implementation reviews for designated initiatives. Structured documentation captures what happened. Insights spread within teams or units. Learning influences similar future projects, though primarily at operational rather than strategic levels. Some infrastructure exists to support learning, but it’s not central to institutional decision-making.
Transformative learning patterns create infrastructure treating both success and failure as data sources. Formal mechanisms capture insights systematically. Analysis examines not just whether initiatives worked but why, under what conditions, and with what unexpected consequences. Learning influences institutional policy decisions, resource allocation, and governance structures. Continued funding for new initiatives becomes contingent on demonstrating learning from previous work.
The learning domain matters because experiments only generate institutional benefit if insights get captured and influence future decisions. Without infrastructure treating outcomes as learning opportunities, initiatives produce performance theatre rather than adaptive capacity.
Domain 4: Boundary setting
This domain identifies what’s protected from transformation versus open for experimentation. What can experimentation challenge? What constraints are genuine (regulatory, contractual, technical) versus assumed or preferred? What would it take to make seemingly immovable structures movable? The answers reveal whether institutions create genuine openness or maintain defensive boundaries while performing innovation.
Incremental boundary patterns protect all fundamental structures from experimental challenge. Experiments occur at operational margins - implementation details, efficiency improvements, supplementary services. Core assumptions about educational delivery, assessment, credentialing, and institutional identity remain untouched. Clear boundaries exist around what can be questioned. The pattern often manifests as transformation rhetoric accompanied by protective practice.
This appears when institutions launch AI initiatives that don’t challenge any core structural assumptions. Chatbots answer student questions but don’t question whether question-answering represents optimal support. Automated grading accelerates feedback but doesn’t challenge whether traditional grading models serve learning. AI tools supplement lectures but don’t question whether lectures represent optimal knowledge transmission. All experimentation reinforces existing structures rather than testing alternatives.
Selective boundary patterns designate specific areas where core assumptions become experimentally available. Explicit distinction exists between genuine constraints (regulatory requirements, union agreements, technical limitations) and institutional preferences. Defined zones have permeability - particular aspects of teaching, assessment, or credentialing might be available for challenge while others remain protected. The boundaries are conscious and justified rather than assumed.
Transformative boundary patterns make core institutional assumptions available for challenge through structured experimentation. Regular processes question which constraints are genuine versus maintained by institutional inertia. Experiments can test fundamental structures with defined criteria for continuation or abandonment. Executive-level commitment exists to act on evidence even when it challenges institutional identity or requires significant structural change.
The boundary domain matters because what’s protected versus open for transformation determines the scope of possible change. Institutions often perform openness while protecting fundamental structures, creating innovation theatre at scale.
Domain interdependencies
These four domains don’t operate independently. Changes in one typically require corresponding adjustments in others, creating specific sequencing challenges for institutions seeking to shift positioning.
Governance patterns fundamentally constrain what’s possible in other domains. Institutions can’t genuinely reallocate resources (Domain 2) if governance structures (Domain 1) keep leadership insulated from consequences of defunding decisions. They can’t build learning systems (Domain 3) if governance concentrates success definition at executive levels, making it unsafe to report challenging insights. They can’t expand boundary permeability (Domain 4) if governance structures don’t enable experiments that question strategic assumptions.
Resource architecture enables or limits learning infrastructure. Without dedicated capacity - staff time, not just funding - for capturing and disseminating insights, learning systems (Domain 3) remain aspirational. Institutions claiming commitment to learning while lacking protected resources for learning infrastructure exhibit structural-rhetorical gaps characteristic of innovation theatre.
Learning systems influence boundary permeability. If experiments generate insights that challenge core assumptions but no mechanisms exist to translate those insights into policy reconsideration, boundaries remain functionally impermeable regardless of stated openness. Conversely, expanding boundaries without learning infrastructure creates chaos rather than productive experimentation - lots of change activity without cumulative understanding.
These interdependencies suggest that institutional transformation typically requires coordinated movement across domains rather than isolated initiatives in single areas. The framework helps leaders identify which domains need attention and in what sequence, based on current positioning and intended direction.
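To illustrate how this diagnostic might be operationalised, the sketch below encodes the four domains and three positions as a simple self-assessment structure in Python. The domain names and positions are taken from the framework; the embedded questions are paraphrased from the text, and the data structures, field names, and mismatch logic are illustrative assumptions rather than a validated assessment instrument.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Position(Enum):
    """The three strategic positions named in the framework: distinct stances, not maturity stages."""
    INCREMENTAL = auto()
    SELECTIVE = auto()
    TRANSFORMATIVE = auto()

# The four operational domains, each paired with one diagnostic question paraphrased
# from the framework. The dictionary encoding itself is an illustrative assumption.
DOMAINS = {
    "governance_accountability": "Who decides what gets resourced, and who carries the consequences when it fails?",
    "resource_architecture": "What ends to enable what begins, and where does protected experimental budget sit?",
    "learning_systems": "How do experiments, including failures, produce insights that shape later decisions?",
    "boundary_setting": "Which core assumptions are open to challenge, which are protected, and on what grounds?",
}

@dataclass
class DomainAssessment:
    domain: str          # one of the DOMAINS keys
    observed: Position   # the structural pattern the institution actually exhibits
    evidence: str        # notes supporting that judgement

def diagnose(stated: Position, assessments: list[DomainAssessment]) -> list[str]:
    """Flag domains where observed structure diverges from the stated position (possible strategic drift)."""
    findings = []
    for a in assessments:
        if a.domain not in DOMAINS:
            raise ValueError(f"Unknown domain: {a.domain}")
        if a.observed is not stated:
            findings.append(
                f"{a.domain}: stated {stated.name.lower()}, observed {a.observed.name.lower()} ({a.evidence})"
            )
    return findings

# Example: an institution claiming a transformative stance while risk still flows downward.
gaps = diagnose(
    stated=Position.TRANSFORMATIVE,
    assessments=[
        DomainAssessment("governance_accountability", Position.INCREMENTAL,
                         "leadership approves pilots; failure consequences fall on project leads"),
        DomainAssessment("resource_architecture", Position.SELECTIVE,
                         "some reallocation within units; nothing defunded at institutional level"),
        DomainAssessment("learning_systems", Position.TRANSFORMATIVE,
                         "post-implementation reviews feed resource decisions"),
    ],
)
print("\n".join(gaps))
```

Consistent with the framework’s rejection of maturity model logic, the sketch flags any divergence between stated and observed position rather than scoring positions hierarchically; the output simply names where structural reality departs from stated strategy.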
Methodological considerations
Framework genesis and iteration
The framework emerged through several distinct phases. Initial observation of institutional AI responses identified patterns of visible initiative without structural change. This produced critical analysis focused on exposing what we termed “innovation theatre” - the performance of transformation while maintaining defensive structures.
Early framework development attempted comprehensive coverage through ten dimensions examining risk distribution, power geometry, structural permanence, resource reallocation, reversibility design, uncertainty tolerance, evidence requirements, failure infrastructure, boundary permeability, and timeline orientation. Each dimension included diagnostic questions, theatre indicators, and experimental alternatives.
However, comprehensiveness created adoption problems. Ten dimensions felt overwhelming for practical use. More fundamentally, the critical lens had limited utility. No senior leader would voluntarily use a tool that might label their work as theatre. The framework risked becoming another academic critique - intellectually satisfying but operationally inert.
The breakthrough came from audience reframing. Rather than creating an external evaluation tool, we refocused on decision support for institutional leaders facing genuine constraints and constant pressure to “do something about AI.” This required collapsing dimensions into operationally meaningful domains and shifting from purely critical to diagnostic framing.
The current four-domain structure resulted from analysis of which dimensions captured distinct operational levers versus which examined different manifestations of the same underlying pattern. For example, “risk distribution” and “power geometry” both examined governance dynamics. “Resource reallocation” and “reversibility design” both examined resource architecture. “Evidence requirements” and “failure infrastructure” both examined learning systems. The consolidation preserved analytical coverage while dramatically improving practical utility.
Current evidence basis
The framework currently represents practitioner pattern recognition rather than an empirically validated model. It draws on established organisational theory regarding institutional change, technology adoption, and decoupling between formal structures and technical activities. It synthesises direct observation of AI initiatives across multiple higher education institutions. This provides a legitimate foundation but differs from rigorous empirical validation.
The framework’s analytical categories emerged inductively from examining institutional responses to AI rather than deductively from existing theoretical frameworks. This bottom-up development creates both strength and limitation. Strength: the domains map directly to operational realities institutional leaders recognise. Limitation: without systematic empirical testing, we can’t claim the domains comprehensively capture all relevant structural dynamics.
Several indicators suggest the framework has face validity for practitioners. When the framework has been presented to chief digital officers, chief operating officers, and other senior administrators, they consistently report that the domains resonate with their operational experience. They identify specific examples from their institutions illustrating each pattern. They request access to assessment tools based on the framework. This suggests the framework names dynamics they recognise but previously lacked clear vocabulary to discuss.
However, face validity among practitioners differs from rigorous validation. The framework requires systematic empirical investigation to establish whether it reliably reveals structural patterns, predicts organisational outcomes, or improves decision quality.
Validation pathways
Multiple validation approaches could strengthen the framework’s empirical foundations:
Survey methodology could ask senior technology leaders which factors most influenced the success or failure of specific AI initiatives. Do reported factors map to the four domains, or do systematic patterns emerge outside the framework’s coverage? Do leaders identify additional structural enablers or constraints not captured by current domains? Survey design should avoid confirmation bias by presenting open-ended questions before introducing framework categories.
Case study analysis could examine specific institutional AI initiatives through multiple analytical lenses. Does the framework reveal dynamics that other analytical approaches miss? Can it predict which initiatives will generate genuine learning versus theatre? Do institutions exhibiting specific domain patterns show corresponding differences in outcomes? Case methodology should include both successful and failed initiatives, with careful attention to how success gets defined.
Longitudinal study could track institutions using the framework over time to assess whether it improves decision quality. Do institutions using the framework show greater strategic-structural alignment than comparable peers? Do they demonstrate improved ability to learn from experiments? Longitudinal research requires careful control for confounding factors and explicit theorising about causal mechanisms between framework use and organisational outcomes.
Literature mapping could connect each domain systematically to established research on organisational innovation, institutional change, and technology adoption. This would strengthen theoretical foundations while potentially revealing gaps where domains lack clear grounding in existing research. It might also identify relevant empirical findings from other contexts applicable to higher education AI integration.
Comparative institutional analysis could assess whether institutions at different positions on the domains show predicted differences in characteristics. Do selective institutions demonstrate different risk distributions than incremental ones? Do transformative institutions show different resource allocation patterns? This would test whether domain categories capture meaningful institutional variation.
Each validation pathway presents methodological challenges. Survey responses about past initiatives face retrospective bias. Case studies struggle with generalisability. Longitudinal research requires sustained access and confronts attribution problems. Literature mapping might reveal that existing research uses different conceptual categories, making connections difficult. Comparative analysis faces challenges defining and measuring structural patterns reliably.
Despite these challenges, systematic validation remains essential if the framework is to become more than a useful practitioner heuristic. The framework currently serves as a structured way to think about institutional dynamics. Validation would establish whether it captures those dynamics accurately enough to guide consequential decisions.
Implications and applications
Strategic planning and resource allocation
The framework serves several specific functions for institutional decision-making. When evaluating proposals for AI initiatives, it reveals whether projects are structured for genuine learning or designed to fail safely. Leaders can ask: Where does risk sit in this proposal? Who carries consequences if it doesn’t work? What learning mechanisms exist? What boundaries does it challenge? These questions surface structural patterns that standard business case analysis might miss.
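As a concrete illustration, the following sketch turns these screening questions into a structured proposal review. The questions come from the paragraph above; the record fields, thresholds, and flag wording are illustrative assumptions rather than a prescribed instrument, with each flag mapped back to the relevant domain.

```python
from dataclasses import dataclass

@dataclass
class ProposalReview:
    """Structured answers to the screening questions above (field names are illustrative)."""
    decision_holder: str               # who approves and resources the initiative
    risk_holder: str                   # who carries the consequences if it fails
    learning_mechanism: str            # how insights will be captured and fed back ("" if none)
    boundaries_challenged: list[str]   # core assumptions the experiment actually tests

def structural_flags(review: ProposalReview) -> list[str]:
    """Surface structural patterns a standard business case might miss."""
    flags = []
    if review.risk_holder != review.decision_holder:
        flags.append("risk sits below decision authority: asymmetric risk distribution (governance domain)")
    if not review.learning_mechanism.strip():
        flags.append("no mechanism for capturing or spreading insight (learning systems domain)")
    if not review.boundaries_challenged:
        flags.append("initiative reinforces existing structures without testing any assumption (boundary domain)")
    return flags

# Example: a pilot approved centrally, owned locally, with no learning plan and no boundary challenged.
review = ProposalReview(
    decision_holder="executive board",
    risk_holder="course team",
    learning_mechanism="",
    boundaries_challenged=[],
)
for flag in structural_flags(review):
    print(flag)
```

A screen of this kind does not replace judgement about whether a given pattern is appropriate for a particular institution; it simply makes the structural questions explicit before resources are committed.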
For resource allocation decisions, the framework forces explicit conversation about tradeoffs. Rather than treating all initiatives as additive, it prompts questions about what gets defunded, what capacity needs protecting, what infrastructure supports learning. This shifts discussion from whether to fund particular initiatives toward portfolio-level questions about institutional positioning and structural coherence.
The framework also provides political cover for difficult decisions. When a chief digital officer wants to decline a high-profile initiative or support a risky experiment, the framework offers defensible rationale grounded in organisational dynamics rather than personal judgment. It links decisions to established understanding of institutional change and innovation adoption, making it harder to dismiss concerns as excessive caution or unjustified risk-taking.
Governance and organisational development
For executive teams, the framework creates shared language for distinguishing between types of institutional response to AI. Many leadership discussions about technology strategy lack vocabulary for making structural patterns visible and discussable. The framework provides specific concepts - risk distribution, learning infrastructure, boundary permeability - that enable more precise conversation than generic terms like “innovation” or “transformation.”
It also helps governance structures assess their own positioning. Executive teams can ask whether their governance processes enable or constrain experimentation. Do they concentrate decision rights while distributing risk? Do they require predetermined success criteria or allow emergence? Do they create pathways for challenging their own assumptions? These questions reveal whether governance structures match stated strategic intentions.
For institutional planning cycles, the framework suggests different assessment questions than traditional strategic planning processes. Rather than asking what initiatives to launch, it prompts examination of structural enablers. Do current governance patterns allow genuine experimentation? Does resource architecture enable learning? Are boundaries appropriately permeable for intended positioning? This shifts planning attention from initiative lists toward institutional capacity building.
Limitations and boundaries
The framework has clear boundaries. It doesn’t address every aspect of AI integration. It deliberately focuses on structural patterns that senior leaders can influence through operational decisions. It doesn’t examine pedagogical questions about effective AI use in teaching, technical questions about infrastructure and tools, or detailed implementation questions about specific applications.
This narrow focus represents a conscious design choice rather than an oversight. The framework aims for comprehensive coverage of structural enablers and constraints, not comprehensive coverage of all AI integration topics. If structural dynamics exist that fundamentally shape experimentation possibilities but fall outside these four domains, that would represent a genuine limitation requiring framework expansion.
The framework also cannot determine appropriate strategic positioning for particular institutions. It reveals structural-strategic alignment or misalignment but doesn’t prescribe which position institutions should adopt. A regional teaching institution and a flagship research university face different strategic imperatives. The framework helps each assess whether their structures match their chosen direction, not which direction they should choose.
Additionally, the framework’s utility depends on users’ willingness to engage honestly with potentially uncomfortable realisations about institutional patterns. It can reveal performance-structure gaps, but only if users approach assessment with openness to discovering misalignment. Institutions using the framework defensively - seeking confirmation of existing positioning rather than genuine diagnosis - will generate self-congratulatory results that obscure rather than illuminate structural reality.
Future research directions
Several research directions could strengthen and extend the framework:
Empirical validation studies remain highest priority. The framework needs systematic testing to establish whether it reliably reveals structural patterns, predicts outcomes, or improves decision quality. This requires research designs comparing institutions using the framework against comparable peers, longitudinal tracking of organisational changes, and careful measurement of structural characteristics corresponding to domain categories.
Cross-sector application could explore whether the framework transfers to organisations beyond higher education. Do healthcare systems, government agencies, or corporations exhibit similar patterns of performative versus structural integration? Does the framework require modification for different organisational contexts? Cross-sector research might reveal whether current domains capture universal dynamics or reflect sector-specific characteristics.
Refinement of domain boundaries through empirical investigation could determine whether the current four-domain structure optimally captures relevant dynamics. Do phenomena exist that fall between domains? Do some domains contain empirically distinct subdimensions requiring separation? Should domains be further consolidated? This research would test whether current structure represents natural joints in organisational reality or arbitrary analytical convenience.
Integration with existing frameworks could position this work within broader literature on organisational change, technology adoption, and institutional innovation. How does it relate to Rogers’ diffusion of innovations model? To Kotter’s change management framework? To existing work on academic capitalism and institutional isomorphism? Explicit theoretical integration would strengthen foundations and reveal connections to established research traditions.
Development of interventions based on domain-specific diagnoses could create practical tools for institutions seeking to shift positioning. If assessment reveals that governance patterns constrain experimentation, what specific interventions prove effective? What sequencing works when institutions need movement across multiple domains? This applied research would test whether framework-informed interventions outperform standard change management approaches.
Ethical and equity implications deserve systematic attention. Do different strategic positions create differential impacts on students from particular backgrounds? Do transformative experiments sometimes privilege innovation over stability in ways that harm vulnerable populations? Does focus on structural change obscure questions about purposes and values? Research examining equity implications would ensure the framework doesn’t inadvertently promote change for its own sake while ignoring distributional consequences.
Conclusion
Higher education institutions face genuine pressure to respond to artificial intelligence in contexts of uncertainty, constraint, and competing priorities. Many responses exhibit what we characterise as innovation theatre - visible demonstration of engagement without corresponding structural change. This pattern isn’t simple resistance to transformation. It represents active performance of innovation precisely to avoid deeper institutional disruption, creating strategic vulnerability through resource allocation toward appearance rather than adaptive capacity.
The framework presented here provides diagnostic tools for distinguishing between performative and structural integration through examination of four operational domains where institutional leaders make consequential decisions. It enables assessment of whether organisational structures align with stated strategic intentions, revealing gaps between rhetoric and reality that characterise innovation theatre.
The framework explicitly rejects maturity model logic that privileges particular strategic positions. Incremental, selective, and transformative stances each prove appropriate under different institutional circumstances. The framework’s value lies not in prescribing positions but in enabling conscious choice rather than accidental drift.
Current evidence basis combines established organisational theory with practitioner pattern recognition. The framework requires systematic empirical validation to establish reliability, generalisability, and utility. Multiple validation pathways exist, each presenting methodological challenges but collectively strengthening foundations for consequential use.
The framework serves institutional leaders facing decisions about resource allocation, governance structures, and strategic positioning in technological disruption contexts. It provides analytical vocabulary, diagnostic criteria, and political justification for decisions that might otherwise face dismissal as excessive caution or unjustified risk-taking. It enables honest institutional conversation about what’s being protected, why, and whether those choices match stated strategic intentions.
Most fundamentally, the framework addresses institutional agency in technological change contexts. Institutions choosing positions consciously - whether incremental, selective, or transformative - maintain capacity to adjust as circumstances evolve. Institutions performing innovation theatre while believing they’re being experimental lose both the benefits of genuine experimentation and the stability of deliberately conservative positions.
The question isn’t whether institutions should be “AI-forward” or defensive. It’s whether they occupy their chosen position through conscious strategic choice or arrive there accidentally through drift between stated intentions and structural reality. The framework helps answer that question, creating conditions for genuine institutional learning regardless of adopted strategic stance.
Author note
This framework emerged from practitioner observation and evolved through multiple iterations toward operational utility. It represents a work in progress requiring empirical validation. Comments, critique, and collaboration opportunities are welcome. The framework aims not to provide definitive answers about institutional AI strategy but to enable better questions about structural enablers, constraints, and strategic-rhetorical alignment.