The PhD certifies a person but we've been assessing a document.

Generative AI has broken the inferential chain between a submitted thesis and the person who wrote it, not because students are less honest, but because the mechanism that made the document valid evidence has been structurally disrupted. The response is not better detection, more sophisticated viva questions, or AI-assisted gap analysis of a final PDF. It is to make the process of becoming visible, navigable, and genuinely evidential.

There are two questions circulating in doctoral education right now. The first is immediate and practical: What are we going to do now that students are writing their theses with AI? This is the question facing assessment boards revising policies mid-cycle, supervisors unsure what to ask for in a draft, and students unsure what they’re allowed to do. It is the question that has produced an industry of AI detection tools, academic integrity frameworks, and conference presentations on maintaining standards. Everyone involved can feel that the ground has shifted, and nobody is quite sure where it has shifted to.

The second question is quieter, and asked mainly by those who have sat with the first long enough to notice that it doesn’t go deep enough: What are we going to do when AI can do the research independently? Not merely assist with the write-up. Design the study, generate and analyse the data, interpret the results, situate the contribution within existing literature, and write and submit the paper. This is not a speculative provocation about future systems. It’s a reasonable description of what current AI systems can approximate and what near-future systems will do more reliably. The question of whether students are writing with AI is, from this vantage point, already a quaint concern.

Both questions are real. But they are symptoms of a deeper problem that neither policy revision nor better detection will resolve. Before we can answer what to do, we need to be precise about what we are trying to protect, and that means asking what, exactly, do we mean by “the PhD”?

Four things called a PhD

We use the same word for four distinct things: the degree conferred by the institution, the document submitted as evidence, the process of learning, and the person who emerges from that process. A PhD is awarded, submitted, undertaken, and hired. The conflation is so embedded in academic culture that we don’t even notice it.

But the degree, the thesis, the process, and the person are distinct, and the relationship between them has always been inferential. The institution awards the degree to the person on the basis of the thesis. But the degree is not certifying that the thesis is good. It is certifying that the person has become something different: a researcher with particular capacities for judgement, direction, and original contribution. The thesis was the evidence of the person, not the thing being assessed in its own right.

This distinction locates the real object of the PhD, which is an identity shift in the individual doing the work. The person who submits is not the same person who enrolled. They have developed a different relationship with knowledge, with uncertainty, and with their field. They are able to identify the problems worth working on, navigate complexity without collapsing it, and create contributions that are genuinely theirs.

Understanding this distinction changes the question we are actually trying to answer. The problem is not that AI can produce a thesis. The problem is that we have been using the thesis as a proxy for the person and that proxy has broken down.

Why the thesis worked as evidence

The thesis functioned as valid evidence of becoming because producing it required the process. A competent literature review could not be written without having read widely and developed a genuine sense of the field: who the important thinkers are, where the live debates sit, what the gaps look like from inside the conversation rather than from above it. A methodology chapter could not be constructed without genuine engagement with epistemological choices. An argument could not be sustained across 80,000 words without the intellectual stamina and disciplinary fluency those words were supposed to certify. The friction was the mechanism of development, and the document produced was downstream of it, which is why the document could serve as evidence.

This is no longer reliably true. Generative AI can produce a coherent literature review, a defensible methodology, and a polished argument without any of the engagement those sections were supposed to represent. The document no longer reliably indicates that the development happened. The inferential chain from thesis to person has broken, not through any change in the integrity of doctoral researchers, but because the conditions that made the document reliable evidence no longer hold. The friction that produced the development can now be bypassed, and the resulting document is, on the surface, indistinguishable from one produced through genuine engagement.

This is not a claim about the prevalence of academic misconduct. The vast majority of doctoral researchers are genuinely committed to their work. The PhD is a substantial personal investment, undertaken over years at considerable opportunity cost, and the last thing most doctoral students want is to cheat themselves out of what the degree is supposed to develop. The concern is structural, not moral: the mechanism has broken for everyone, not only for those who might want to exploit the gap.

Nor is this an argument for making the PhD “AI-proof”: constructing a process capable of excluding AI as though its absence were itself the point. That goal is neither achievable nor desirable. AI, used well, makes better research possible: more thorough engagement with literature, more sophisticated analysis, more precise articulation of complex ideas. The argument I’m making is the opposite of AI-exclusion. It is that we need a model of doctoral education that takes AI seriously as a contributor to the developmental process, and that requires us to stop relying on a final document as sufficient evidence of anything.

The viva can’t rescue a broken model

The most common institutional response to this broken inferential chain is that the viva will “catch” students who did not do the work. I think this will sometimes be true, but it depends on a version of the viva that is considerably less common in practice than the argument requires.

In many doctoral traditions, the viva is not primarily an examination. It’s a rite of passage, a performance, and a ritual that marks the transition from candidate to researcher. The substantive examination happened during the thesis review: in the examiner reports, the committee deliberations, and in the decision about whether the work meets the standard for the degree. By the time the student enters the viva room, the outcome has typically been determined in all but the most formal sense. The viva confirms, celebrates, and occasionally requires corrections. It rarely reverses. The student who has submitted a thesis that passed examiner review will, in almost all cases, pass the viva, not because the viva lacks rigour, but because the system has already done the substantive assessment work before the conversation begins.

This matters because it means the viva, in most cases, cannot serve as a safety net for a broken evidentiary model. It wasn’t designed for that function, and retrofitting it to perform adversarial examination of candidates who may not understand their own thesis creates a worse instrument for everyone. An examination designed around the rare bad actor inevitably degrades the experience of the genuine majority. That’s the wrong problem to solve.

The right problem is to recover the evidence that the document has stopped providing, and to do so in a way that serves the process of becoming rather than surveilling it.

The evidence was always in the process

The process of becoming a researcher leaves traces. Drafts written and abandoned. Research questions reformulated after a seminar conversation or a difficult supervision session. Theoretical frameworks tried and found wanting. Connections made between ideas that were not obviously connected: the particular synthesis, arrived at through sustained engagement with a field, that is the actual substance of original contribution. The slow accumulation of a way of seeing that is eventually, irreducibly, and uniquely yours.

This texture of development was always the real evidence of what the PhD produces. It was invisible to the institution by the time of submission, not because it did not exist, but because capturing it across three to five years of intellectual work was practically impossible. And even if it had been captured, making sense of the volume would have exceeded any individual examiner’s capacity. So the institution accepted the downstream document as a reasonable approximation of the upstream process.

But neither of these constraints applies any longer.

Version control systems — the infrastructure used in software development to track every change made to a codebase across its entire history — make longitudinal tracking of intellectual work trivial. Every draft, every revision, every abandoned direction, every moment of decision can be committed, timestamped, and made navigable. The specific platform is less important than the principle, which is that a change-tracked record of how the corpus developed, rather than a single snapshot of where it ended, can reliably show the work of becoming.
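To make the principle concrete, here is a minimal sketch of what such a change-tracked record could look like, using git; the directory, file names, and commit messages are purely illustrative, not a prescribed workflow:

```shell
# A minimal sketch of a change-tracked writing record, using git.
# Everything named here (paths, files, messages) is illustrative only.
set -e
rm -rf /tmp/thesis-demo
mkdir -p /tmp/thesis-demo
cd /tmp/thesis-demo
git init -q
git config user.email "candidate@example.org"
git config user.name "Doctoral Candidate"

# First pass at a framing for the literature review
echo "Framing: cognitive apprenticeship" > literature-review.md
git add literature-review.md
git commit -q -m "First framing of the literature review"

# A later revision that abandons the original framing
echo "Framing: communities of practice" > literature-review.md
git commit -aqm "Abandon cognitive apprenticeship framing after supervision"

# The history, not the final file, is the record of development
git log --oneline
```

Each commit is a timestamped decision point, and commands like `git log` and `git diff` make the sequence navigable years later; the final file alone shows none of this.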

Connected note-taking environments like Obsidian extend this further. The graph of ideas becomes visible; which concepts were linked, how those links evolved, and where synthesis happened that was not present in any single source. A knowledge graph of a researcher’s developing thinking is a different kind of evidence from either a thesis or a viva. It shows the structure of understanding as it formed, including the connections that are genuinely original: the particular combination of sources, observations, and framings that the researcher assembled in a way no one had assembled before.

AI that can navigate this material — identifying where the researcher’s conceptual connections diverge from what already exists in the literature, tracing the arc of an identity shift across years, constructing a narrative of how the project developed — makes originality assessable in a way it has never been before. Not as a property of a final document, but as a quality of a developmental process.

The evidence of becoming was always in the process and now we have the tools to see it.

Becoming a researcher was never a solo act

The process of becoming a researcher was never solely individual. Supervision, peer feedback, seminars, committee review, participant relationships, and reading the literature show that development happens in interaction, not in isolation. The submitted thesis laundered all of that into a single-authored product, making the distributed and relational nature of research invisible. A process model does not. It makes every meaningful contribution to the researcher’s development part of the record.

AI belongs in that record at every node, not as a threat to be managed, but as a participant whose contributions can be tracked, evaluated, and understood as part of the developmental picture.

The most direct evidence is how the researcher uses AI. Across literature, methods, analysis, and writing, the quality of AI-assisted work is not fixed; it is a direct function of the researcher’s skill in directing it. A researcher who constructs a productive inquiry through AI demonstrates something real: they knew what to ask, what to pursue, and what to set aside. Research taste expressed through the direction of AI is developmental evidence, visible in the prompt record and conversation transcripts. The sophistication of that use changes across the arc of the PhD in ways that are themselves meaningful.

The supervisor’s relationship to this record is different in kind. A supervisor with access to the longitudinal process record can engage with the developing trajectory rather than with isolated snapshots: where the thinking changed, where it stalled, where a connection was missed that the student was close to making. AI-assisted analysis of the record enables supervisory engagement at a depth that periodic draft review cannot provide. The supervisor’s guidance — and the AI-assisted analysis that informed it — becomes part of the record, making supervision visible and accountable in ways it currently is not. Supervisory quality is one of the strongest determinants of doctoral outcomes and one of the least systematically evaluated dimensions of the process; that is not a trivial shift.

Originality assessment has always been one of the most contested parts of doctoral evaluation. Any given committee member’s ability to judge whether a contribution is genuinely new depends on their familiarity with the relevant literature, shaped by disciplinary norms that vary across fields. AI navigation of a candidate’s connected process record offers something more systematic: identifying where the researcher’s synthesis diverges from existing work, tracing conceptual connections that appear genuinely novel, flagging the moments of intellectual departure that mark an original contribution.

Research participants are stakeholders in both the research and its outcomes, and their contribution to the researcher’s development has always been invisible in the final document. Making the research relationship genuinely dialogic, where participants are equipped to interrogate the research design, surface assumptions they find problematic, or provide critical commentary on emerging findings, is rarely possible under current models. A researcher who has navigated genuine challenge from those their research affects, and who can show how that engagement shaped their thinking, is demonstrating epistemic accountability — being answerable for how their knowledge was produced — in a way that’s genuinely difficult to capture in a thesis.

All of these contributors — supervisor, committee, participants — are stakeholders with different positions, different knowledge, and different relationships to the work. And in future, all will bring their own AI agents into the community that forms around a PhD candidate. What they share is that their interaction with the researcher is constitutive of the becoming. The current submission model makes very little of this community visible. A process model makes all of it part of the record.

The record itself is the methodological transparency this model requires: AI models used, prompts constructed, conversation transcripts, supervisor responses, participant commentary, committee deliberations; all are logged, all navigable, all contributing to a picture of how the development actually occurred. This is not surveillance infrastructure. It is an honest account of how research has always worked, made visible for the first time.

What the viva becomes

In this model, the viva is not the defence of a document. It is a guided conversation through a process record: an exploration of how the thinking developed, with AI as an institutional participant that has navigated the record and can surface the moments most worth discussing.

Not: here are questions generated by gap analysis of a final thesis submitted as a PDF. But: here is where your research question changed significantly; walk us through what drove that. Here is a conceptual connection you made across bodies of literature that do not usually speak to each other; how did you arrive at it, and what did it open up? Here is the moment your participants pushed back on your framing; how did you respond, and what did it change in how you understood your own project?

These are questions about the becoming. They cannot be answered by someone who was not present for the process, regardless of how fluent the document that resulted from it might be. The record is the evidence. The degree certifies that the process was real, that the identity shift happened, and that the researcher who leaves the room has genuinely become someone different from the person who began.

A note on equity

An objection I commonly hear is that access to this kind of infrastructure would be unevenly distributed: that the model advantages researchers at better-resourced institutions, with more technical fluency and more time.

That is true. But it’s also true of every dimension of doctoral education as it currently exists. A student at a well-resourced institution has access to supervision quality, library infrastructure, seminar culture, and peer networks that are unavailable to a part-time researcher managing full-time employment and caring responsibilities at home. The student who enters a highly-ranked programme arrives with a support infrastructure — financial, social, academic — that most doctoral researchers do not have. Some doctoral students have family support that allows them to complete in three years; others take seven, navigating the same intellectual demands alongside everything else a life contains. These disparities shape outcomes in ways that even the current submission model does not capture or correct for.

Inequity in doctoral education is not a new condition, and it’s not an AI condition. The quality of AI contribution to research — like the quality of supervision, access to participants, or the time available simply to think — varies with circumstances. That’s a reason to address inequity in doctoral education, seriously and structurally. It is not a reason to decline a model that makes the actual process of becoming more visible to every researcher who undertakes it.

What we are actually certifying

We have been awarding a degree that certifies a person by assessing a document. That worked for as long as producing the document required becoming the person. It no longer does, not because doctoral students have changed, but because the mechanism that connected document to person has been structurally disrupted, and the disruption is permanent.

The instinct to respond by protecting the document — better detection, more sophisticated viva questions, AI-assisted gap analysis of the final PDF — is understandable. It reaches for the familiar. But it’s reaching in the wrong direction. The document was always downstream of the development, and making it harder to replicate downstream changes nothing about what happens upstream.

The only response that makes sense is to look directly at the process of transformation. To make the process the submission. To use the tools now available to make that process navigable, meaningful, and genuinely evidential. Not to catch the rare student who wants to game the system, but to honour what the vast majority of doctoral researchers actually do: develop, over years, into something they were not when they began. Someone determined to deceive will always find a way through any system, and designing a PhD around that possibility insults the integrity of everyone else. The goal is not an AI-proof process. The goal is a process that’s a valid representation of the identity shift the degree is supposed to certify.

The evidence was never in the document. It was always in the process: the interactions, the revisions, the connections made, the challenges navigated, the slow and irreversible development of a particular way of seeing and contributing to a field. We discarded that evidence because we had no way to capture it, and the document was a good enough proxy.

The proxy has now failed.

What remains is the institutional will to recognise that what we are certifying is a process of becoming, not a thesis.