A practical toolkit for teaching learners to accurately judge what they know. Covers the Dunning-Kruger effect, judgements of learning, desirable difficulties, and a 20-minute calibration intervention for any subject.
A Year 10 learner walks into a biology exam expecting to score above 80% and comes out with 47%. This "calibration gap" between predicted and actual performance affects learners at every level, and Kruger and Dunning (1999) showed that lower-performing learners overestimate by the largest margins. The encouraging finding is that calibration responds to teaching: Hiller, Ihme and Pfeiffer (2020) found that metacognitive training improved monitoring accuracy in undergraduates, with the most poorly calibrated learners improving the most. Knowing what you know is a skill, not a trait. This article offers a time-efficient way for teachers to teach it.
Key Takeaways
The calibration gap (the difference between what learners think they know and what they actually know) is one of the biggest barriers to effective self-regulated learning (Kruger and Dunning, 1999).
Structured calibration training improves both monitoring accuracy and academic performance, with the weakest learners showing the greatest gains (Hiller et al., 2020).
Teachers can implement a 20-minute calibration intervention using prediction, testing, comparison, and reflection in any subject and any key stage.
Judgements of learning made at the item level (question by question) are more accurate predictors of performance than global judgements about overall readiness (Thiede, Anderson and Therriault, 2003).
What Is Calibration and Why Does It Matter?
Calibration is the degree of alignment between a learner's confidence in their knowledge and their actual knowledge. A perfectly calibrated learner who predicts they will score 70% on a test and then scores 70% has zero calibration error. In practice, perfect calibration is rare. What matters is the direction and size of the error.
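For readers who like to see the arithmetic, a minimal Python sketch is below. The function and the example numbers are purely illustrative, not part of any published calibration protocol; the sign of the result shows the direction of the error and its size shows the magnitude.

```python
# Illustrative only: signed calibration error for a single learner.
# A positive value means overconfidence, a negative value underconfidence.

def calibration_error(predicted_score: float, actual_score: float) -> float:
    """Return the signed gap between predicted and actual scores, in percentage points."""
    return predicted_score - actual_score

# The Year 10 learner from the introduction: predicted above 80%, scored 47%.
print(calibration_error(80, 47))   # 33  -> large overconfidence
print(calibration_error(70, 70))   # 0   -> perfectly calibrated
print(calibration_error(55, 68))   # -13 -> underconfidence
```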
Judgements of Learning: How Learners Judge What They Know
Koriat (1997) established the cue-utilisation framework for understanding how people make judgements of learning. His research demonstrated that when people assess how well they know something, they do not directly access the strength of their memory trace. Instead, they rely on cues: how familiar the material feels, how easily it comes to mind, and how fluently they can process it. These cues are often misleading. Material that has been recently read feels familiar and fluent, leading to high confidence, even though recognition familiarity is a poor predictor of recall ability.
Misleading Cues vs. Calibration
Re-reading notes boosts confidence, but this common revision tactic can mislead learners (Dunlosky et al., 2013). Because the material feels familiar, learners conclude they know it and stop revising, yet they struggle to recall it in the exam, because recognising material is not the same as being able to retrieve it (Brown, Roediger & McDaniel, 2014).
Calibration Skill Cycle
Thiede, Anderson and Therriault (2003) found that the timing and grain size of metacognitive judgements affect their accuracy: broad, global judgements are less accurate than specific, item-level ones. For teachers, asking learners how they expect to perform on particular questions is more useful than asking how confident they feel about a topic overall.
Classroom Example: The Prediction Quiz
Before a retrieval practice quiz, give each learner a prediction sheet. Next to each question number, they write their confidence (1-5) that they can answer correctly. They then attempt the quiz. Afterwards, they compare their predictions with their actual scores question by question. The visual gap between the prediction column and the score column is the calibration error made visible. This takes 3 minutes of lesson time and produces immediate metacognitive insight.
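If you want to mirror the prediction sheet in a short script, a sketch might look like the following. The confidence ratings, results and flagging rule are invented for illustration; the classroom version needs nothing more than the paper sheet described above.

```python
# Hypothetical sketch of the prediction-sheet comparison for one learner.
# 'confidence' uses the 1-5 scale; 'correct' records whether each answer was right.

confidence = [5, 4, 2, 5, 3, 1, 4, 5, 2, 3]   # learner's item-level predictions
correct    = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0]   # 1 = correct, 0 = incorrect

for q, (conf, got_it) in enumerate(zip(confidence, correct), start=1):
    # Flag the mismatches: high confidence but wrong, or low confidence but right.
    flag = "check this" if (conf >= 4 and not got_it) or (conf <= 2 and got_it) else ""
    print(f"Q{q}: confidence {conf}, {'correct' if got_it else 'incorrect'} {flag}")
```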
The Dunning-Kruger Effect in the Classroom
The Dunning-Kruger effect is widely misunderstood. Kruger and Dunning (1999) did not simply find that poor performers overrate themselves; they found that the same lack of skill that produces poor performance also removes the ability to judge that performance accurately. For teachers, this double deficit is a useful way to understand why overconfident learners are not being arrogant or lazy.
Jansen, Rafferty and Griffiths (2021), writing in Nature Human Behaviour, built a rational model of the Dunning-Kruger effect using data from around 4,000 learners. Their analysis suggests that the effect is driven largely by task performance itself, with metacognitive sensitivity playing a smaller role, and that when tasks were matched for difficulty the calibration gap disappeared. In other words, how hard the task is shapes how well learners think they know it.
For teachers, this means that low-performing learners are not deliberately overconfident. They genuinely cannot tell the difference between knowing and not knowing because they lack the domain knowledge needed to make that distinction. The solution is not to tell them they are wrong about their confidence. It is to give them structured practice in making and checking predictions, so they develop the metacognitive skill alongside the domain knowledge.
Schleinschok, Eitel and Scheiter (2024), studying physics learners in Frontiers in Education, add support to the "unskilled and unaware" picture: low-performing learners overestimated their abilities before tests and struggled to revise those judgements afterwards. This failure to adjust in response to feedback is what allows poor performance to persist across a whole year.
Classroom Example: The Confidence Calibration Graph
After three quizzes with prediction sheets, help each learner plot their data on a simple graph. The x-axis shows their average prediction (1-5) and the y-axis shows their average score (as a percentage). A perfectly calibrated learner sits on the diagonal line. Most learners will see they sit above the line (overconfident) or below it (underconfident). Showing learners their own calibration pattern, without judgement, is the single most powerful intervention available. The graph does the teaching.
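For departments that prefer a digital version of the graph, a minimal matplotlib sketch is below. It assumes the 1-5 predictions are rescaled to percentages so that the diagonal line is meaningful; that rescaling, and the data, are choices of this sketch rather than part of the classroom routine.

```python
# Illustrative sketch of the confidence calibration graph.
# Average predictions are rescaled from the 1-5 scale to percentages so that
# both axes are comparable and the diagonal marks perfect calibration.
import matplotlib.pyplot as plt

avg_prediction_1to5 = [4.2, 2.8, 3.5]   # one value per learner (hypothetical)
avg_score_percent   = [55, 62, 70]

pred_percent = [(p - 1) / 4 * 100 for p in avg_prediction_1to5]

plt.scatter(pred_percent, avg_score_percent)
plt.plot([0, 100], [0, 100], linestyle="--")  # diagonal = perfect calibration
plt.xlabel("Average prediction (rescaled to %)")
plt.ylabel("Average score (%)")
plt.title("Confidence calibration")
plt.show()
```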
A 20-Minute Calibration Intervention
This four-step protocol can be used in any subject, any key stage, and requires no special resources. It takes 20 minutes the first time and 10 minutes on subsequent uses.
Step 1: Predict (3 minutes). Before a retrieval task, learners make item-level predictions. For a 10-question quiz, they write a confidence rating (1-5) next to each question. For an extended task, they predict their score on a rubric. The key is specificity: "I think I will get question 4 correct because I can remember the formula" is more useful than "I think I know this topic."
Step 2: Test (5-8 minutes). Learners complete the retrieval task under normal conditions. This can be a quiz, a diagram from memory, a set of practice questions, or a written recall task. The format does not matter. What matters is that the task produces a clear, scorable outcome that can be compared to the prediction.
Step 3: Compare (3 minutes). Learners score their work (peer-marking works well here) and then compare their predictions with their actual scores. They highlight any question where their prediction was wrong by 2 or more points. These are their calibration blind spots, the specific areas where their metacognitive monitoring is least accurate.
Step 4: Reflect and Adjust (4 minutes). Learners answer two questions: "Which topics did I think I knew but actually did not?" and "What will I do differently to check my understanding next time?" The first question builds metacognitive knowledge. The second builds metacognitive regulation. Over repeated cycles, learners begin to internalise the prediction-test-compare loop and apply it independently during revision.
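The comparison in Step 3 is simple enough to automate where predictions and scores are already recorded digitally. The sketch below assumes, purely for illustration, that each question's prediction and score share the same scale, and flags any item where the gap is 2 or more points.

```python
# Hypothetical sketch of Step 3: flag calibration blind spots.
# Assumes prediction and score share a scale for each question
# (for example, both recorded as marks out of 5).

predictions = {"Q1": 5, "Q2": 4, "Q3": 2, "Q4": 5, "Q5": 3}
scores      = {"Q1": 5, "Q2": 1, "Q3": 2, "Q4": 2, "Q5": 4}

blind_spots = [q for q in predictions if abs(predictions[q] - scores[q]) >= 2]
print(blind_spots)  # ['Q2', 'Q4'] -> the questions to revisit in Step 4
```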
Hiller, Ihme and Pfeiffer (2020) found that structured judgement training with feedback improved learners' calibration more than repeated testing alone. The training reduced overconfidence, with accuracy stabilising after about four cycles, which suggests that a half-term of regular practice is enough to produce a measurable change.
Classroom Example: Science Department Calibration Tracking
A science department uses prediction quizzes every two weeks across Years 7-11 and tracks each learner's calibration accuracy, the gap between prediction and performance, on a shared spreadsheet. Teachers review the data each half term and identify learners who need targeted metacognitive support (Winne, 2010) rather than simply more content teaching (Bjork, 1994; Dunlosky et al., 2013).
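A department that wants to automate the spreadsheet step could use something like the following pandas sketch. The column names and figures are hypothetical; the point is simply that a persistently positive mean gap flags overconfidence.

```python
# Hypothetical departmental tracking sketch using pandas.
# Column names and data are illustrative; a real spreadsheet layout will differ.
import pandas as pd

records = pd.DataFrame({
    "learner":   ["A", "A", "B", "B"],
    "quiz":      [1, 2, 1, 2],
    "predicted": [80, 75, 50, 60],   # predicted score (%)
    "actual":    [55, 60, 52, 63],   # actual score (%)
})

records["gap"] = records["predicted"] - records["actual"]

# Mean signed gap per learner: large positive values flag persistent overconfidence.
print(records.groupby("learner")["gap"].mean())
```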
Desirable Difficulties and the Calibration Paradox
Bjork (1994) described desirable difficulties: conditions such as spacing, interleaving and retrieval practice that make learning feel harder in the moment but improve long-term retention. The catch is that these difficulties also lower confidence. Learners who space their revision and test themselves feel less sure of the material than learners who simply re-read it, even though they remember more later.
This creates a dangerous feedback loop. A learner tries retrieval practice, feels that it is difficult, concludes she does not know the material, and switches back to re-reading, which feels fluent and produces high (but false) confidence. The illusion of knowing actually punishes effective study strategies and rewards ineffective ones.
Koriat's (1997) cue-utilisation account explains why: judgements of learning lean heavily on processing ease. Re-reading creates fluency, so learners feel confident; retrieval practice reduces fluency, so confidence drops even as learning improves.
Research by Bjork (1994) and Kornell (2009) suggests the most practical response is to explain the paradox to learners directly. When learners understand that a feeling of difficulty is often a sign of learning, they are more likely to persevere, to discount misleading fluency cues, and to stick with effective strategies even when their confidence is low.
Classroom Example: The Difficulty Diary
Ask learners to keep a one-column diary during revision sessions. After each study session, they write one sentence: "This felt [easy/medium/hard]." After the test, they compare their difficulty ratings with their scores for each topic. Over time, a pattern emerges: the topics that felt hardest during revision often produce the highest scores. This experiential evidence is more persuasive than any teacher explanation of the testing effect.
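Where a class records its difficulty diaries digitally, the comparison can be summarised in a few lines of code. The topics and scores below are invented; the pattern they illustrate, higher scores on the topics that felt hard, is the one described above.

```python
# Hypothetical sketch of the difficulty-diary comparison for one learner.
# Group revision topics by how hard they felt and compare with later test scores.
from collections import defaultdict

diary = [("osmosis", "hard", 82), ("respiration", "easy", 58),
         ("enzymes", "hard", 75), ("cells", "medium", 66)]  # (topic, felt difficulty, later score %)

by_difficulty = defaultdict(list)
for topic, felt, score in diary:
    by_difficulty[felt].append(score)

for felt, scores in by_difficulty.items():
    print(felt, sum(scores) / len(scores))  # average score per felt-difficulty band
```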
Calibration Across Key Stages
Calibration instruction should be adapted to the learner's age, but the core loop stays the same: make a prediction, check it against reality, and use the difference to improve the next prediction.
Key Stage 1 (Ages 5-7). Use physical self-assessment with immediate checking. After a phonics activity, learners hold up a smiley face, straight face, or sad face to show their confidence. The teacher then asks them to read the target words. Learners compare their face with their reading performance. The focus is on building the habit of self-assessment, not on accuracy.
Key Stage 2 (Ages 7-11). Introduce the prediction quiz format with simple numerical scales. Learners predict their scores on multiplication table tests, spelling tests, or reading comprehension questions. Track calibration accuracy over a half-term. Introduce the concept of overconfidence explicitly: "Sometimes our brain tricks us into thinking we know something when we actually just recognise it."
Key Stage 3 (Ages 11-14). Before submitting extended work, learners predict the level or grade they expect against the marking rubric. Teachers then compare their own assessment with the learner's prediction and discuss the biggest discrepancies to sharpen self-assessment (Sadler, 1989; Black & Wiliam, 1998).
Key Stages 4 and 5 (Ages 14-18). Learners build personal calibration graphs, plotting prediction accuracy across successive assessments so that their individual calibration bias becomes visible (Lichtenstein et al., 1982). Discuss how calibration links to choice of revision strategy, and connect this to Bjork and Bjork's (1992) research on desirable difficulties.
A meta-analysis by Boud and Falchikov (2024) found that calibration interventions improved monitoring accuracy in problem solving, that building metacognitive knowledge helped learners, and that interventions targeting whole tasks rather than just the timing of judgements worked best. The effect size was small but significant (g = 0.25).
Classroom Example: Year 9 English Calibration Wall
An English department creates a "Calibration Wall" in each Year 9 classroom. After each assessment, learners place a post-it note showing their predicted grade and actual grade. The wall makes calibration patterns visible across the class. A learner who consistently predicts grade 7 but achieves grade 5 can see that pattern physically. The teacher uses the wall to identify learners for targeted calibration support.
The Role of Feedback in Calibration Improvement
Learners need feedback on prediction accuracy, not just task results. Practice alone does not improve calibration (Lichtenstein et al., 1982; Stone, 2000; Yan et al., 2017).
Garcia Conejero et al. (2025) studied university learners assigned to groups that received monetary incentives, metacognitive feedback, or neither. Feedback alone had no measurable effect on accuracy; it helped only when paired with explicit instruction and practice.
This echoes Hiller, Ihme and Pfeiffer (2020): combined approaches improve calibration more than any single ingredient. In practice, teachers should explain why calibration matters, give learners repeated practice at predicting and checking, and then provide feedback on the accuracy of those predictions.
Pintrich (2005) found that global monitoring judgements stayed largely stable across a semester, while local, item-level judgements were more responsive to feedback and more closely related to grades. Calibration practice should therefore target specific questions rather than general confidence.
Classroom Example: The Feedback Sandwich
When returning marked assessments, give learners three minutes before they see their marks. In this time, they write their predicted score. Then they receive the mark. Then they spend three minutes comparing prediction with reality and identifying the specific questions where they were most miscalibrated. This simple "predict-receive-compare" sandwich turns every assessment into a calibration training opportunity at zero additional cost.
Calibration and Formative Assessment
Calibration is not separate from formative assessment. It is formative assessment turned inward. Where formative assessment asks "What does this learner know?", calibration asks "Does this learner know what they know?" Both questions are essential for effective learning, and neither is sufficient alone.
A learner with good domain knowledge but poor calibration will revise the wrong topics, under-prepare for difficult content, and over-prepare for content already mastered. A learner with poor domain knowledge but good calibration will at least direct their limited study time to their actual weaknesses.
Kalender, Marshman and Singh (2024) found that physics learners struggled to assess themselves accurately: lower-performing learners were overconfident and did not adjust their judgements after exams. Their difficulty was not only with physics; they struggled to monitor their own knowledge. The authors argue that monitoring skill and content knowledge develop together, so interventions should target both rather than content alone.
For assessment leads, the implication is to build calibration checks into existing assessment cycles. Summative data points are useful, but formative assessment is where learners can be trained to monitor themselves (Black & Wiliam, 1998). Most schools already collect the data; the opportunity is to use it to develop learners' thinking as well as to track their attainment.
Your Next Lesson
Before your next quiz or test, hand out a prediction sheet. Ask each learner to rate their confidence (1-5) for each question before they attempt it. After the test, give them two minutes to compare predictions with scores. Ask: "Where were you most wrong about your own knowledge?" Do this once a fortnight for a half-term and track the average calibration gap. You will see it narrow. The learners who improve their calibration will also improve their performance, because accurate monitoring drives effective regulation. The illusion of knowing is breakable. It just takes practice.
References
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe and A. P. Shimamura (Eds.), Metacognition: Knowing About Knowing (pp. 185-205). MIT Press.
Boud, D. and Falchikov, N. (2024). Review of interventions targeting monitoring accuracy in problem solving. Educational Psychology Review, 36, 45.
Garcia Conejero, J., Pinilla Lebrero, J. J. and Garcia Gallego, A. (2025). The role of monetary incentives and feedback on how well students calibrate their academic performance. European Journal of Education, 60(1), e12834.
Hiller, S., Ihme, T. A. and Pfeiffer, H. C. (2020). Enhanced monitoring accuracy and test performance: Incremental effects of judgment training over and above repeated testing. Learning and Instruction, 65, 101245.
Jansen, R. A., Rafferty, A. N. and Griffiths, T. L. (2021). A rational model of the Dunning-Kruger effect supports insensitivity to evidence in low performers. Nature Human Behaviour, 5, 756-763.
Kalender, Z. Y., Marshman, E. and Singh, C. (2024). Unskilled and unaware? Differences in metacognitive awareness between high and low-ability students in STEM. Frontiers in Education, 9, 1375638.
Koriat, A. (1997). Monitoring one's own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126(4), 349-370.
Kruger, J. and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
Pintrich, P. R. (2005). Metacognitive monitoring and achievement in higher education classrooms. Journal of Experimental Education.
Schleinschok, K., Eitel, A. and Scheiter, K. (2024). Unskilled and unaware? Differences in metacognitive awareness between high and low-ability students in STEM. Frontiers in Education, 9, 1375638.
Thiede, K. W., Anderson, M. C. M. and Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95(1), 66-73.