Autopsy of Assessment: What Technology Adds and Complicates in Learning
- eliciabullock81
- Nov 2, 2025
- 3 min read
Reconsidering What Counts as Learning
As I continue exploring digital assessment, I have been reflecting on one of the assessments I worked on: a Virtual Autopsy Simulation paired with an EdPuzzle activity. Thinking about what technology adds to, and complicates in, learning and assessment has prompted me to question what counts as evidence of understanding and how digital traces shape our perception of student learning.
The assessment will allow students to engage in an authentic forensic investigation, an experience that would be nearly impossible to replicate in a high-school classroom without technology. The simulation invites them into the procedural reasoning of a medical examiner, while EdPuzzle captures their responses and provides instant feedback. Ideally, this combination will make students' thinking visible and allow me to target misconceptions with timely support. In this sense, I see assessment as technology: a tool that extends how and where learning can happen.
Assessment in a Culture of Surveillance and Quantification
Yet these same affordances sit within what scholars call a culture of surveillance in education (Selwyn, 2019). Every click, replay, and attempt is stored as data, and those traces turn the messy process of learning into neat visualizations and dashboards, a shift known as quantification. While these analytics may help identify misconceptions, they also risk turning students into data points. My goal will be to interpret these patterns ethically, using the insights to guide instruction rather than to police behaviour.
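To make the quantification point concrete, here is a minimal sketch of how a platform might collapse a trace of clicks, replays, and attempts into a few dashboard numbers. The event log, field names, and metrics are my own invention for illustration, not EdPuzzle's actual data model.

```python
# Minimal sketch: rich interaction traces collapsed into dashboard numbers.
# The event log and its fields are hypothetical, not EdPuzzle's data model.

events = [
    {"student": "A", "type": "replay"},
    {"student": "A", "type": "attempt", "correct": False},
    {"student": "A", "type": "attempt", "correct": True},
    {"student": "B", "type": "attempt", "correct": True},
]

def dashboard_summary(log):
    """Reduce every click, replay, and attempt to a handful of metrics."""
    attempts = [e for e in log if e["type"] == "attempt"]
    replays = sum(1 for e in log if e["type"] == "replay")
    accuracy = sum(e["correct"] for e in attempts) / len(attempts) if attempts else None
    return {
        "events_logged": len(log),
        "attempts": len(attempts),
        "replays": replays,
        "accuracy": round(accuracy, 2) if accuracy is not None else None,
    }

print(dashboard_summary(events))
# {'events_logged': 4, 'attempts': 3, 'replays': 1, 'accuracy': 0.67}
```

Notice what survives the reduction: a replay count says nothing about whether a student replayed out of confusion or curiosity, which is precisely the nuance I would need to recover before acting on the dashboard.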
Automation, Gamification, and Quantification
Much of the EdPuzzle activity will use multiple-choice questions with automated feedback. While this automation supports formative assessment, it can also reduce complex reasoning to right-or-wrong binaries. The platform introduces an element of gamification: students can retry the task until they “beat” the 75 percent threshold. This design could motivate persistence, but it might also promote superficial engagement or strategic guessing. The embedded feedback will therefore focus on reflection and reasoning, encouraging students to consider why an answer is correct and where they can find supporting evidence in the simulation.
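As a rough sketch of how that binary works, assume a plain fraction-correct score and a best-attempt pass rule; these are my assumptions for illustration, and EdPuzzle's actual grading details may differ.

```python
# Hypothetical sketch of the retry-until-pass mechanic: the score is a
# plain fraction correct, and the pass rule keeps only the best attempt.
PASS_THRESHOLD = 0.75

def grade_attempt(answers, key):
    """Fraction of answers matching the key for a single attempt."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def passed(score_history):
    """The dashboard records only the binary outcome of the best attempt."""
    return max(score_history) >= PASS_THRESHOLD

# Two very different paths to the same recorded outcome:
reasoned = [grade_attempt(["a", "c", "b", "d"], ["a", "c", "b", "a"])]  # one attempt, 0.75
guessed = [0.25, 0.50, 0.50, 0.75]                                      # four retries
print(passed(reasoned), passed(guessed))  # True True
```

A first-try 75 percent and a fourth-try 75 percent produce the same pass flag, which is exactly the difference the reflection-focused feedback is meant to surface.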
The pass/fail threshold also illustrates quantification: learning represented as a single score. To counter this, I plan to use the assessment formatively, evaluate progress through competencies, and emphasize mastery rather than marks. By framing success around scientific reasoning and ethical awareness, I hope to preserve the nuance that numbers often flatten.
From Data to Design
Looking ahead, I see potential for extending this work creatively. Inspired by Kafai and Burke (2016), who argue that designing games cultivates systems thinking, I could ask students to build their own interactive case scenarios—perhaps a game where players solve a crime using autopsy data. This shift would transform students from consumers of assessment into designers of it, reinforcing that assessment can be playful, participatory, and reflective.
Ultimately, my current thinking is that technology in assessment is both empowering and problematic. It expands what students can experience and makes feedback immediate, yet it also generates data that can be misread or misused. The challenge will be to harness technology to illuminate thinking without surveilling it, to collect evidence of learning while keeping human curiosity and ethical reflection at the centre.
References
Kafai, Y. B., & Burke, Q. (2016). Connected gaming: What making video games can teach us about learning and literacy. MIT Press.
Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.