## Key Ideas

> [!abstract] Core Concepts
>
> - **Reveal specific misconceptions efficiently**: Multiple-choice questions designed so each wrong answer reveals a different thinking error or misconception
> - **Under 10 seconds response time**: Test a single skill or concept to enable precise diagnosis of student understanding
> - **Wrong answers provide teaching data**: Each incorrect response is designed to reveal a specific misconception without requiring student explanation

## Definition

**Diagnostic Questions**: Multiple-choice questions where each answer choice (correct and incorrect) provides specific information about student thinking and understanding.

## Connected To

[[Formative Assessment]] | [[Misconceptions]] | [[Check For Understanding]] | [[Responsive Teaching]] | [[Turn and Talk]] | [[Culture of Error]]

---

## How diagnostic questions work

Well-designed diagnostic questions are precise instruments for revealing student thinking (Sadler, 1998; Treagust, 1988). Diagnostic assessments using carefully constructed distractors can reveal specific misconceptions more efficiently than open-ended questions (Briggs et al., 2006).

**Each answer reveals different thinking**:

![[DiagnosticQuestion.png|500]]

- **Answer A** suggests the student understands that angles on a straight line must add up to 180° and can identify the relevant angle, but has made an arithmetic error when subtracting 65 from 180
- **Answer B** indicates the student may be mistakenly treating this as an example of vertically opposite angles being equal
- **Answer C** is the correct answer
- **Answer D** implies the student knows that angles on a straight line must add up to 180°, but has included all visible angles in the calculation

This specificity allows targeted teaching rather than generic reteaching of the entire concept (Black & Wiliam, 2009).
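The answer-by-answer analysis above amounts to a lookup from each distractor to the misconception it was designed to reveal. A minimal sketch of that idea, assuming a hypothetical class of responses (the function name, student names, and misconception labels are illustrative, not from any real question bank):

```python
# Illustrative sketch: grouping class responses to one diagnostic question by the
# misconception each distractor was designed to reveal. The diagnosis labels below
# mirror the angle question discussed above; all names are hypothetical examples.

# Each distractor is paired with the thinking error it reveals (Answer C is correct).
DIAGNOSIS = {
    "A": "understands angles on a line sum to 180 degrees, but made an arithmetic slip",
    "B": "mistook the diagram for vertically opposite angles",
    "C": None,  # correct answer: no misconception revealed
    "D": "knows the 180-degree rule, but included all visible angles",
}

def diagnose(responses: dict[str, str]) -> dict[str, list[str]]:
    """Group students by the misconception their answer choice suggests."""
    groups: dict[str, list[str]] = {}
    for student, choice in responses.items():
        label = DIAGNOSIS.get(choice)
        if label is not None:  # skip correct answers
            groups.setdefault(label, []).append(student)
    return groups

responses = {"Ana": "C", "Ben": "B", "Cal": "C", "Dee": "A"}
for misconception, students in diagnose(responses).items():
    print(f"{', '.join(students)}: {misconception}")
```

Because each wrong answer maps to exactly one named thinking error, the teacher gets targeted reteaching groups rather than an undifferentiated "who got it wrong" list.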
## Principles for effective question design

Research on item design shows that well-constructed multiple-choice questions with diagnostic distractors provide reliable assessment data (Haladyna et al., 2002; Treagust, 1988).

Effective diagnostic questions must be clear and unambiguous, permitting only a single interpretation so that confusion over wording does not mask genuine understanding (Haladyna et al., 2002). Each question should test a single skill or concept rather than multiple ideas simultaneously, enabling precise diagnosis of student thinking (Sadler, 1998). Response time offers a useful heuristic: students should be able to answer in under 10 seconds, a sign that the question demands quick recognition of a single concept rather than extended reasoning (Barton, 2018). Each incorrect answer choice should reveal a specific thinking pattern, providing actionable teaching data rather than serving merely as an implausible option (Briggs et al., 2006). Finally, the correct answer must require proper understanding of the concept being assessed; students holding misconceptions should be unable to select it, ensuring valid assessment (Sadler, 1998).

## Strategic usage in lessons

Diagnostic questions serve different purposes at different lesson stages:

- **Before presenting worked examples**, teachers can test prerequisite knowledge to identify gaps before teaching new content and inform decisions about necessary scaffolding.
- **Between guided and independent practice**, diagnostic questions check for misconceptions before releasing students to work independently, preventing practice of incorrect procedures.
- **As exit ticket assessments**, diagnostic questions provide a quick check of lesson understanding to inform next-lesson planning and identify students needing additional support.

## Responding to results

The power of diagnostic questions lies not in the data alone but in the systematic response.
When 80% or more of students answer correctly, the teacher should ask one student who answered correctly to explain their reasoning, then ask a different student to repeat the explanation in their own words, before moving forward with confidence that understanding is secure (Rosenshine, 2012; Wilson et al., 2019). The 80% threshold derives from research on optimal success rates for learning.

When fewer than 80% of students answer correctly, a four-step protocol addresses confusion whilst treating errors as learning opportunities (Barton, 2018; Black & Wiliam, 2009; Kapur, 2008):

1. Students engage in a [[Turn and Talk]] discussion with the prompt "If you disagree, explain why you think you're correct. If you agree, explain why you're both correct."
2. The class revotes to check whether discussion improved understanding.
3. If responses remain below 80% correct, the teacher reteaches the concept, then conducts another vote to confirm understanding.
4. To extend learning, the teacher can ask students to "change one part of the question to make an incorrect answer correct", deepening conceptual understanding.

## Implementation considerations

Students should record their reasoning alongside their answers for later discussion, making their thinking visible to both themselves and the teacher. Used within a [[Culture of Error]], wrong answers become valuable teaching opportunities rather than failures to be avoided. Teachers should use results to inform immediate teaching decisions rather than merely collecting data, and should build banks of effective diagnostic questions for repeated use across years.

Diagnostic questions work best for focused assessment of single concepts. They are unsuitable for topics requiring extended reasoning or integration of multiple skills (Haladyna et al., 2002). Questions where students can guess correctly without genuine understanding fail to diagnose thinking accurately.
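The voting thresholds in the response protocol above reduce to a simple decision rule: compare the share of correct answers to 80%, and branch on whether a revote has already happened. A minimal sketch under those assumptions (the function name, move labels, and vote data are hypothetical):

```python
# Illustrative sketch of the response protocol from "Responding to results":
# check whether at least 80% of the class chose the correct answer and return
# the next teaching move. Names and labels are hypothetical, not a fixed API.
THRESHOLD = 0.8  # the 80% cut-off described in the text

def next_move(choices: list[str], correct: str, after_revote: bool = False) -> str:
    """Decide the next step from one round of voting on a diagnostic question."""
    share_correct = choices.count(correct) / len(choices)
    if share_correct >= THRESHOLD:
        # Secure: one student explains, another restates, then move on.
        return "explain-and-move-on"
    if not after_revote:
        # First miss: Turn and Talk discussion, then revote.
        return "turn-and-talk-then-revote"
    # Still below threshold after discussion: reteach, then vote again.
    return "reteach-and-revote"

print(next_move(["C", "C", "C", "B", "C"], correct="C"))  # 4/5 = 80%, so secure
```

The point of the sketch is that the protocol is deterministic: the same vote data always yields the same next move, which is what makes the response systematic rather than ad hoc.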
Poor distractors that fail to reveal common misconceptions waste the opportunity for precise diagnosis (Treagust, 1988). Without follow-up through responsive teaching, diagnostic questions become data-collection exercises rather than formative assessment tools (Black & Wiliam, 2009).

## References

Barton, C. (2018). *How I wish I'd taught maths: Lessons learned from research, conversations with experts, and 12 years of mistakes*. John Catt Educational.

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. *Educational Assessment, Evaluation and Accountability*, 21(1), 5-31. https://doi.org/10.1007/s11092-008-9068-5

Briggs, D. C., Alonzo, A. C., Schwab, C., & Wilson, M. (2006). Diagnostic assessment with ordered multiple-choice items. *Educational Assessment*, 11(1), 33-63. https://doi.org/10.1207/s15326977ea1101_2

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. *Applied Measurement in Education*, 15(3), 309-333. https://doi.org/10.1207/S15324818AME1503_5

Kapur, M. (2008). Productive failure. *Cognition and Instruction*, 26(3), 379-424. https://doi.org/10.1080/07370000802212669

Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. *American Educator*, 36(1), 12-19.

Sadler, P. M. (1998). Psychometric models of student conceptions in science: Reconciling qualitative studies and distractor-driven assessment instruments. *Journal of Research in Science Teaching*, 35(3), 265-296. https://doi.org/10.1002/(SICI)1098-2736(199803)35:3<265::AID-TEA3>3.0.CO;2-P

Treagust, D. F. (1988). Development and use of diagnostic tests to evaluate students' misconceptions in science. *International Journal of Science Education*, 10(2), 159-169. https://doi.org/10.1080/0950069880100204

Wilson, R. C., Shenhav, A., Straccia, M., & Cohen, J. D. (2019). The eighty five percent rule for optimal learning. *Nature Communications*, 10, 4646. https://doi.org/10.1038/s41467-019-12552-4

---