## Key Ideas

> [!abstract] Core Concepts
>
> - **Invalid inferences from performance**: Question difficulty varies, surface structure confuses, and small samples provide unreliable data
> - **Data rarely used effectively**: Information collected primarily for reporting rather than improving learning
> - **Emotional responses interfere**: Students' emotional reactions to marks prevent effective engagement with feedback

## Definition

**Question Level Analysis**: Recording marks for each exam question in spreadsheets and reviewing the entire exam question by question in class; a practice with significant limitations and minimal learning benefit.

## Connected To

[[Feedback]] | [[Surface and Deep Structure]] | [[Practice]] | [[Formative Assessment]]

---

## Problems with question level analysis

### Invalid performance inferences

Question-level data leads to false conclusions about student understanding. Questions vary in complexity: an 80% failure rate on a Band 6 question may be appropriate rather than a sign of poor teaching. Surface structure can confuse students even when they understand the underlying concept (Chi et al., 1981): a cricket-chirping scenario for data analysis might obscure whether students struggle with statistics or merely with an unfamiliar context. Small sample sizes compound these problems; four marks' worth of trigonometry questions cannot reliably assess understanding of the topic.

### Data collection without action

Teachers spend extensive time recording individual question performance, but this data serves primarily for reporting to leadership rather than improving learning. Despite knowing the results, teachers often don't change their instruction (Wiliam, 2011). Hours of analysis yield minimal impact on teaching and learning.

### Student emotional responses

When students receive marks, their emotional reactions interfere with learning. Low achievers become disheartened and assume they won't understand the feedback.
Students focused on marks pester teachers for additional points rather than learning from mistakes. High achievers zone out because their performance suggests they need no attention. These reactions mean few students enter the optimal mindset to receive and act on feedback (Butler, 1988; Kluger & DeNisi, 1996).

### Single question limitation

Learning requires sustained practice, but question-by-question review provides only one opportunity to address each error. Students may correct their immediate mistake during class review but lack adequate practice to automate the correct approach, and they often make the same mistakes within a week. Reviewing a question once doesn't create the repeated practice needed for learning.

### Time burden

Question level analysis creates substantial administrative overhead. Teachers record individual question marks for entire classes, create spreadsheets and analysis documents, spend class time going through each question systematically, and prepare individual feedback.

## Better alternatives

### Targeted error analysis

Instead of comprehensive question-by-question analysis, teachers identify 2-3 common errors affecting many students, plan specific reteaching for these misconceptions, and provide focused practice on the identified problem areas (Black & Wiliam, 1998).

### Forward-looking feedback

Rather than dwelling on test performance, teachers use errors to inform future teaching, plan additional practice opportunities for difficult concepts, and integrate remediation into ongoing instruction (Hattie & Timperley, 2007).

### Strategic reteaching

Efficient reteaching begins with pattern identification: what errors occurred across multiple students? Teachers then focus on which conceptual misunderstandings need addressing, provide targeted instruction for the identified gaps, and build remediation into future lessons.
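The pattern-identification step above, finding the handful of errors shared by many students rather than tabulating every mark, can be sketched with a minimal script. All data, names, and thresholds here are invented for illustration:

```python
from collections import Counter

# Hypothetical data: each student's list of exam questions answered incorrectly.
errors_by_student = {
    "student_a": ["q3", "q7", "q9"],
    "student_b": ["q3", "q7"],
    "student_c": ["q7", "q9"],
    "student_d": ["q3", "q7", "q12"],
}

def common_errors(errors_by_student, top_n=3, min_share=0.5):
    """Return up to top_n questions missed by at least min_share of students."""
    n_students = len(errors_by_student)
    # Count how many students missed each question.
    counts = Counter(q for errs in errors_by_student.values() for q in errs)
    # Keep only widespread errors worth whole-class reteaching.
    widespread = [(q, c) for q, c in counts.most_common()
                  if c / n_students >= min_share]
    return widespread[:top_n]

print(common_errors(errors_by_student))
# → [('q7', 4), ('q3', 3), ('q9', 2)]
```

Note that q12, missed by only one student, is filtered out: it calls for individual follow-up, not a class-wide response, which is the point of restricting attention to 2-3 widespread errors.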
### Student self-assessment

More effective feedback approaches involve students identifying their own error patterns, explaining their mistakes to themselves, setting goals for improvement areas, and regularly monitoring their progress. This self-directed approach develops metacognitive skills alongside content knowledge.

## References

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. *Assessment in Education: Principles, Policy & Practice*, 5(1), 7-74. https://doi.org/10.1080/0969595980050102

Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. *British Journal of Educational Psychology*, 58(1), 1-14. https://doi.org/10.1111/j.2044-8279.1988.tb00874.x

Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. *Cognitive Science*, 5(2), 121-152. https://doi.org/10.1207/s15516709cog0502_2

Hattie, J., & Timperley, H. (2007). The power of feedback. *Review of Educational Research*, 77(1), 81-112. https://doi.org/10.3102/003465430298487

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. *Psychological Bulletin*, 119(2), 254-284. https://doi.org/10.1037/0033-2909.119.2.254

Wiliam, D. (2011). *Embedded formative assessment*. Solution Tree Press.