Seeing the Whole Picture: Combining Feedback Types in Forensics Assessment
- eliciabullock81
- Nov 16
How can teachers truly see the bigger picture of student learning?
In today’s classrooms, relying solely on numerical grades or written feedback often provides an incomplete view. By combining feedback types, teachers can better interpret individual understanding, identify class-wide patterns, and refine assessment design, supporting both student growth and more equitable, data-informed decisions.
In an upcoming summative assessment, students will be asked to create a forensic diorama and submit a corresponding case file. This task follows units on Crime Scene Investigation and the Human Body as Evidence, in which students learn to determine cause, manner, and mechanism of death. The diorama allows students to represent crime scene evidence physically, demonstrating their understanding through a tangible, hands-on medium. The case file, on the other hand, requires students to analyze the evidence, justify their conclusions, and document findings in writing. Finally, a post-assessment conference serves as a built-in validity check, complemented by equity measures such as consistent access to materials and grading based on evidence accuracy rather than artistic skill. By intentionally collecting multiple forms of evidence across different modalities, the assessment generates rich, reliable data that supports a more equitable evaluation of what students truly know and can do.
While numerical and written feedback each have value, they also have limitations when used in isolation. Numerical scores provide a quick summary and allow for easy comparisons, but they do not capture the reasoning behind a student’s choices or the depth of their understanding. For example, a student may receive an 8/10 on “analyzing evidence,” but this number alone does not indicate whether misconceptions exist or which aspects of the analysis were strong. Written feedback, in contrast, offers detailed insights into student thinking, highlighting strengths and areas for improvement. However, qualitative feedback is often subjective, difficult to aggregate, and can vary in clarity and usefulness. Wiggins and Frontier (2022) emphasize that clear rubrics can improve consistency in scoring, while Hashem (2017) suggests that single-point rubrics focus feedback on essential criteria without overwhelming students with unnecessary detail.

In this forensic project, the diorama provides concrete, visible evidence of understanding, while the case file captures students’ analytical and scientific reasoning. Observing the diorama can reveal a student’s grasp of evidence connections even if their written explanation is incomplete, and a well-reasoned case file can clarify understanding even if the physical model is imperfect. Collecting multiple evidence points also reduces the risk of bias in interpretation, as teachers rely on a combination of qualitative and quantitative data rather than a single measure (Montenegro & Jankowski, 2017; Karoub, 2024).
Finally, thoughtful use of assessment data allows teachers to improve both learning and assessments. By combining assessment data, teachers can identify patterns across a class, address gaps in understanding, and adapt instructional strategies. Data-driven decisions help teachers provide targeted support, highlight exceptional performance, and ensure that all students are evaluated equitably. Including a conference also guards against potential misuse of AI in written tasks (Tan et al., 2023), ensuring that assessments reflect authentic learning. The forensic diorama and case file task is designed to exemplify how multiple forms of evidence can enhance evaluation, reduce bias, and guide both teaching and assessment design.
By seeing the whole picture, teachers can better support learning, promote equity, and make informed, data-driven decisions in the classroom.
References:
Hashem, D. (2017, October 24). 6 reasons to try a single-point rubric. Edutopia.
Karoub, J. (2024, April 17). Researchers find lower grades given to students with surnames that come later in alphabetical order. Phys.org.
Montenegro, E., & Jankowski, N. A. (2017). Equity and assessment: Moving towards culturally responsive assessment. University of Illinois and Indiana University, NILOA.
Tan, S., Nguyen, D., & Johnson, M. S. (Executive Producers). (2023, June 28). Suspicion, cheating and bans: A.I. hits America's schools. The Daily [Audio podcast].
Wiggins, G., & Frontier, T. (2022, April 1). How to provide better feedback with rubrics. ASCD.