From "knowledge about" to "capability with"

Programmatic assessment in a digital context moves evaluation from static essays to dynamic artifacts. By using automated “tests” to verify that a student’s work is accurate, safe, and functional, we can provide immediate feedback and ensure a high standard of clinical and technical competence.

Programmatic assessment

One-sentence definition: Programmatic assessment is a method of evaluation where students create digital “artifacts”—such as data models, clinical tools, or code—which are then verified through a series of automated “tests” to ensure they meet professional and technical standards.

In health professions education, “programmatic assessment” (van der Vleuten et al., 2012) traditionally refers to a longitudinal approach where many low-stakes data points are used to build a high-stakes picture of competence. In a digital curriculum, this concept extends to automated verification. Instead of a teacher manually grading every output, the student creates a digital product that is checked by “test scripts.” These scripts verify that the work is not only “correct” in its content but also “functional” and “safe” in its execution.
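The verification step described above can be sketched as a small test script. Everything here is illustrative: the `bmi` function stands in for a student's submitted artifact, and the three checks mirror the “correct,” “functional,” and “safe” criteria.

```python
# Illustrative sketch: an automated verification script for a hypothetical
# student-built BMI calculator. Function names and thresholds are
# assumptions, not a fixed curriculum specification.

def bmi(weight_kg: float, height_m: float) -> float:
    """Stand-in for a student's submitted artifact."""
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError("weight and height must be positive")
    return round(weight_kg / height_m ** 2, 1)


def test_correct():
    # Content check: the clinical formula gives the expected value.
    assert bmi(70, 1.75) == 22.9


def test_functional():
    # Functional check: the tool returns a usable number.
    assert isinstance(bmi(80, 1.80), float)


def test_safe():
    # Safety check: physiologically impossible input is rejected,
    # not silently turned into a misleading result.
    try:
        bmi(-70, 1.75)
    except ValueError:
        return
    raise AssertionError("negative weight must be rejected")


for check in (test_correct, test_functional, test_safe):
    check()
print("all checks passed")
```

In practice, checks like these would live in a standard test runner such as pytest, so every student's submission is verified against the identical script.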

Why it matters for educators

  • Authentic Digital Practice: As clinical work increasingly involves data management and the use of digital tools, assessment must reflect these realities. Asking a student to build a clinical calculator and “testing” its accuracy is more authentic than asking them to describe how such a calculator works in an essay.
  • Rapid Iterative Learning: Because the “tests” are automated, students receive instant feedback. They can see exactly where their work fails a “safety check,” make a correction, and re-test their work immediately. This mimics the professional “loop” of practice, error, and improvement.
  • Consistency and Scalability: Automated verification provides an objective, unbiased standard that is applied identically to every student. It allows institutions to maintain high standards of quality assurance at a scale that would be impossible through manual review alone.
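The feedback loop in the second bullet can be sketched as a tiny harness that reports each failed check by name. The dosing function, check names, and the deliberate flaw in the “student” submission are all illustrative assumptions.

```python
# Sketch of the practice-error-improvement loop: run named checks against
# a submission and report exactly which ones failed, so the student can
# fix the code and re-test immediately.

from typing import Callable


def run_checks(submission: Callable, checks: dict) -> list:
    """Return the names of failed checks; an empty list means all passed."""
    failures = []
    for name, check in checks.items():
        try:
            check(submission)
        except AssertionError:
            failures.append(name)
    return failures


def accuracy_check(f):
    assert f(10, 2) == 20  # 10 kg at 2 mg/kg should give 20 mg


def safety_check(f):
    # A negative weight must raise an error, not return a dose.
    try:
        f(-10, 2)
    except ValueError:
        return
    raise AssertionError


def student_dose(weight_kg, mg_per_kg):
    # Deliberately flawed submission: it never validates the weight.
    return weight_kg * mg_per_kg


print(run_checks(student_dose, {"accuracy": accuracy_check,
                                "safety": safety_check}))
# → ['safety']
```

The student sees that the “safety” check failed while “accuracy” passed, adds the missing validation, and re-runs the harness; the same loop applies identically to every submission.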

The Educator as “Architect of Verification”

This approach shifts the role of the educator from “marker” to “architect.” The challenge is to define exactly what “success” and “safety” look like in a way that can be programmatically verified. It requires a high level of AI literacy and technical understanding to design assessments that are both pedagogically sound and technically robust, ensuring that the “tests” genuinely measure the intended learning outcomes.
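One way an educator can make “success” programmatically verifiable, sketched here under illustrative assumptions, is to pair every intended learning outcome with a machine-checkable test, so the assessment design itself can be audited for coverage:

```python
# Sketch: "success" defined as an explicit outcome-to-check mapping.
# The outcome wording, the helper, and the reference calculator are all
# illustrative assumptions.

def raises_value_error(f, *args) -> bool:
    try:
        f(*args)
    except ValueError:
        return True
    return False


# Each intended learning outcome is paired with one verifiable check.
OUTCOME_SPEC = {
    "Applies the correct clinical formula":
        lambda f: f(70, 1.75) == 22.9,
    "Rejects physiologically impossible input":
        lambda f: raises_value_error(f, -70, 1.75),
}


def audit(submission) -> dict:
    """Map each stated outcome to a pass/fail verdict."""
    return {outcome: bool(check(submission))
            for outcome, check in OUTCOME_SPEC.items()}


def bmi(weight_kg, height_m):
    # Sound reference submission used to demonstrate the audit.
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError
    return round(weight_kg / height_m ** 2, 1)


print(audit(bmi))  # every outcome maps to True for a sound submission
```

An outcome with no paired check is a gap in the assessment design; a check tied to no outcome is a sign the test may not measure intended learning.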
Sources

  • Van der Vleuten, C. P. M., et al. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205–214.
  • “Automated Grading of Complex Tasks.” (Computer Science Education Research).
  • “Competency-based medical education in the era of big data.” (Rowe, 2025).

Notes

While the term “programmatic assessment” has a specific meaning in medical education (longitudinal evaluation), its use in the context of digital tools emphasizes the programmatic (automated and code-driven) nature of the verification process itself. Both meanings converge on the idea that assessment should be a continuous, data-rich process that supports learning.