Metacognitive Monitoring: Fixing Student Overconfidence in the Classroom
Why students overestimate what they know and how to fix it. Research-backed calibration strategies from Hacker (1998) and Dunning-Kruger for UK classrooms.


Metacognitive monitoring helps learners judge their own understanding. Learners with strong self-assessment skills focus their effort more effectively (Nelson & Narens, 1990). Classroom strategies can prevent overconfidence and support realistic self-evaluation (Dunlosky & Metcalfe, 2009). These techniques work across subjects and ability levels (Hattie, 2012).

Koriat, Lichtenstein, and Fischhoff's (1980) research shows successful calibration relies on conditional knowledge. Learners need to understand when certain strategies are useful, and why they succeed (Bjork, Dunlosky, & Kornell, 2013). This understanding varies depending on the context (Hattie & Yates, 2014).
A Year 11 learner sits at her desk for six hours, methodically re-reading her chemistry notes and highlighting key points in different colours. She feels confident, convinced she knows the material inside out. The next day, her mock exam result arrives: 42%. How did six hours of focussed revision lead to such a disappointing outcome? The answer lies in broken metacognitive monitoring.
Calibration accuracy describes the gap between how well students think they know something and how well they actually perform. When this gap is large, students make poor learning decisions. They skip topics they haven't mastered, spend too much time on material they already know, and choose ineffective revision strategies.
Independent learning matters for GCSE and A-Level success, and learners who misjudge their understanding waste revision time. This particularly affects disadvantaged learners (Bjork, 1999), who often lack effective study strategies and self-awareness (Kruger & Dunning, 1999; Dunning, 2011) and may not reach their full potential (Metcalfe, 2009).
Research consistently shows that most students are poor judges of their own learning. They mistake familiarity for fluency, confuse recognition with recall, and let overconfidence derail their preparation. The good news? Calibration can be taught, measured, and improved through targeted classroom strategies.
Metacognitive monitoring is the ongoing assessment of your own learning whilst engaged in a task. It's the internal voice asking: 'Do I understand this?' 'Am I making progress?' 'Should I change approach?' This constant self-evaluation forms half of what psychologists Thomas Nelson and Louis Narens called the metacognitive system.
Nelson and Narens (1990) describe metacognition using two linked processes: monitoring and control. Monitoring means judging a learner's current learning state. Control means acting based on those judgements. Monitoring is like a car's speedometer. Control is like the accelerator and brakes.
In classroom terms, monitoring happens when a Year 8 learner reads a history paragraph and thinks: 'I'm not sure I understand the causes of World War One.' Control kicks in when they decide to re-read the section or ask for help. The system works beautifully when monitoring is accurate.
The problem emerges when monitoring fails. If that same learner feels confident about World War One causes but actually hasn't grasped them, they'll move on too quickly (poor control based on inaccurate monitoring). Conversely, if they underestimate their understanding, they might waste time over-studying material they've already mastered.
Learners often struggle with monitoring because it feels automatic. Metacognitive monitoring works behind the scenes, unlike maths or essay writing. Few learners get explicit instruction in judging their learning (Nelson & Narens, 1990). This leads to mistakes in self-assessment (Bjork, 1999; Dunlosky & Bjork, 2008).
Dunning and Kruger (1999) showed that incompetence can breed overconfidence: learners who struggle may not realise they are struggling, whilst high achievers often underrate their skills. This pattern helps teachers understand self-assessment issues.
Low-performing learners lack the knowledge needed to judge what they don't know. A Year 9 student who hasn't grasped basic algebraic concepts can't accurately assess their readiness for quadratic equations. They lack the domain knowledge required for accurate self-evaluation.
Consider Sarah, struggling with photosynthesis in GCSE Biology. She reads about chlorophyll and light reactions, feels the terms are familiar, and rates her understanding as 7/10. In reality, she can't explain how these components work together. Her limited knowledge prevents her recognising the gaps in her understanding.
This overconfidence proves particularly problematic during revision. Students who most need extra practice are least likely to seek it. They skip foundation topics, attempt harder problems too early, and wonder why their exam results don't match their expectations.
Gifted learners often find learning easy and assume it is easy for everyone. When learners grasp complex ideas quickly, they may underestimate their own success, and they can worry excessively about whether they are ready (Gross, 2002).
Take James, a top-set Year 7 mathematician who solved simultaneous equations in minutes during the lesson. He rates his confidence as 4/10, thinking: 'If I found it easy, everyone must have.' Meanwhile, most of his classmates are still struggling with the basics. James's competence makes him acutely aware of what he doesn't yet know, leading to underconfidence.
A concrete example illustrates this perfectly. After a Year 7 fractions lesson, learners predicted their quiz scores. Low-performers predicted an average of 14/20 but scored 8/20. High-performers predicted 16/20 but actually scored 19/20. The biggest gaps in calibration accuracy occurred at both extremes.
Teachers encounter two main types of metacognitive judgements in their classrooms: Judgements of Learning (JOL) and Feelings of Knowing (FOK). Understanding these helps explain why revision often goes wrong and how to improve student self-assessment.
Judgements of Learning occur when students predict how well they'll remember or perform on material they're currently studying. After reading about the digestive system, a Year 6 learner might think: 'I'll remember this for the test next week.' These judgements directly influence revision decisions.
JOLs feel intuitive but often mislead. Students base them on current ease of processing rather than future retrieval likelihood. Material feels easy when freshly studied, creating inflated confidence that crashes when memory fades.
Hacker et al. found that delayed judgements of learning are more accurate: immediately after studying, learners mistake short-term access for long-term retention. Even a ten-minute delay improves calibration accuracy significantly.
Feelings of Knowing emerge when students sense they could retrieve information if given the right cue, even though they can't currently access it. During a history lesson about Tudor monarchs, a learner might think: 'I know about Henry VIII's wives, but I can't quite remember all their names right now.'
FOKs influence whether students persist with retrieval attempts or give up and seek help. Accurate FOKs guide efficient learning by helping students distinguish between information that's truly forgotten and information that just needs more retrieval practice.
Both JOLs and FOKs affect how students allocate their study time, choose revision strategies, and seek additional support. When these judgements go wrong, students waste time on easy material whilst neglecting topics they haven't mastered.

Rhodes and Tauber (2011) found that learners with poor metacognitive judgement waste revision time, and poor strategy choices further hinder achievement. Teachers should develop learners' metacognitive skills, as research shows this supports exam success (Nelson & Narens, 1990).
This behaviour, identified by Kruger and Dunning (1999), affects learning directly. Learners may skip vital revision because they believe they already know a topic, whilst underestimating their knowledge leads to wasted time (Dunlosky & Rawson, 2015). A psychology student, for example, might reread a familiar topic such as memory whilst neglecting a more challenging one such as social influence (Bjork et al., 2013).
This misallocation wastes time, especially during exams. Learners revise a lot, but not well (Bjork, 1994). They practise easy topics instead of harder ones (Kornell & Bjork, 2008). Predictable disappointment follows (Dunlosky et al., 2013).
The EEF's guidance on metacognition emphasises that students must learn to accurately judge their own learning progress. Without this skill, even motivated learners can work hard but see little improvement.
Weinstein et al. (2018) found that learners choose passive revision over active strategies. Re-reading and highlighting feel productive but do little for learning: learners mistake familiarity with material for real knowledge, an issue Brown et al. (2014) discuss at length.
Meanwhile, they avoid testing themselves, spacing their practice, or attempting to explain concepts to others. These strategies feel more difficult and highlight gaps in understanding, making them seem less appealing despite their superior effectiveness.
This especially affects disadvantaged learners, who may lack knowledge of study strategies. Without good self-monitoring, they struggle to see when their methods fail (Bjork et al., 2013; Dunlosky & Rawson, 2012), which makes change difficult (Metcalfe & Finn, 2008; Nelson & Narens, 1990).
Strategies include explicit teaching (Hattie, 2012) and worked examples (Sweller, 1988). Learners benefit from regular feedback (Wiliam, 2011) and self-explanation prompts (Chi, 2000). Encourage practice tests (Roediger & Karpicke, 2006) and teach metacognitive skills (Flavell, 1979).
Before any assessment or activity, ask learners to predict their performance. After receiving results, have them reflect on the accuracy of their predictions. A Year 4 teacher might say: 'Before we start the times tables test, write down how many you think you'll get right.'
After marking, learners compare their predictions with actual scores. Those who predicted 15/20 but scored 8/20 begin recognising their overconfidence. Regular cycles help students notice patterns in their self-assessment accuracy.
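A minimal sketch of the tracking behind these prediction-postdiction cycles; the learner names, scores, and the ±3-mark threshold are invented for illustration:

```python
# Prediction-postdiction cycle: compare predicted quiz scores with actual
# results and flag large calibration gaps. All data here is illustrative.
records = [
    {"learner": "A", "predicted": 15, "actual": 8},   # overconfident
    {"learner": "B", "predicted": 12, "actual": 13},  # well calibrated
    {"learner": "C", "predicted": 9,  "actual": 16},  # underconfident
]

def calibration_gap(predicted, actual):
    """Positive = overconfidence, negative = underconfidence."""
    return predicted - actual

for r in records:
    gap = calibration_gap(r["predicted"], r["actual"])
    if gap > 3:
        label = "overconfident"
    elif gap < -3:
        label = "underconfident"
    else:
        label = "well calibrated"
    print(f"Learner {r['learner']}: gap {gap:+d} ({label})")
```

Run after each assessment cycle, this produces exactly the pattern-spotting data the reflection discussion needs.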
Learners identify confident and uncertain topics before tests. After the assessment, analyse whether predictions matched performance (Dunlosky et al., 2013). This helps target learning gaps and improve understanding (Metcalfe, 2009; Kruger & Dunning, 1999).
Replace immediate confidence ratings with delayed ones. Instead of asking 'How well do you understand photosynthesis?' straight after the lesson, wait until the next day or week. This delay reduces the influence of short-term familiarity on judgements.
For secondary subjects, use delayed judgements of learning (JOLs) in starter activities. Ask learners on Monday to rate their poetry analysis understanding from last Wednesday. Compare these ratings with their assessment performance (Dunlosky & Metcalfe, 2009).
Primary teachers can use this during weekly reviews. Every Friday, ask Year 6 learners to rate their confidence on the previous week's learning objectives before attempting related practice questions.
For each quiz question, learners provide both an answer and a confidence rating (1-5 scale). Score answers normally, but also track calibration by comparing confidence ratings with correctness.
A well-calibrated Year 10 science student should rate easy questions highly (4-5) and get them right, whilst rating difficult questions lower (1-2) and often getting them wrong. Poor calibration shows as high confidence on incorrect answers or low confidence on correct ones.
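The confidence-versus-correctness check described above can be automated in a few lines; the quiz data and the high/low thresholds (4 and 2 on the 1-5 scale) are illustrative assumptions:

```python
# Confidence-weighted quiz: each answer carries a 1-5 confidence rating.
# Miscalibration shows up as high confidence on wrong answers
# (overconfidence) or low confidence on right ones (underconfidence).
answers = [
    {"q": 1, "correct": True,  "confidence": 5},
    {"q": 2, "correct": False, "confidence": 5},  # overconfident error
    {"q": 3, "correct": True,  "confidence": 1},  # underconfident success
    {"q": 4, "correct": False, "confidence": 2},
]

def flag_miscalibration(answers, high=4, low=2):
    """Return question numbers showing each miscalibration pattern."""
    over = [a["q"] for a in answers if not a["correct"] and a["confidence"] >= high]
    under = [a["q"] for a in answers if a["correct"] and a["confidence"] <= low]
    return over, under

over, under = flag_miscalibration(answers)
print("Overconfident on questions:", over)    # [2]
print("Underconfident on questions:", under)  # [3]
```

The flagged question numbers feed directly into the private tracking sheets suggested below.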
Create simple tracking sheets showing individual learners' calibration patterns over time. Share these privately to help students notice their metacognitive strengths and weaknesses.
Visualise the relationship between confidence and performance using simple graphs. Plot predicted scores on the x-axis and actual scores on the y-axis. Perfect calibration creates a diagonal line where predictions match performance.
Give learners these graphs monthly to show calibration gains. Secondary learners can make their own graphs, whilst primary teachers could display class patterns to discuss common overconfidence (Lichtenstein et al., 1982; Moore & Healy, 2008).
Use different colours for different subjects to help learners notice whether their calibration varies across domains. Many students calibrate well in their strong subjects but poorly in weaker areas.
After any significant assessment, use structured reflection sheets that examine both performance and metacognitive accuracy. Include questions like: 'Which topics did you expect to do well on? Which were harder than expected? What does this tell you about your revision approach?'
Yorke and Nightingdale (2007) found that exam wrappers help learners revise more effectively. Learners tend to overestimate their topic knowledge, as Papageorgiou et al. (2023) showed; exam wrappers build the awareness needed to change study habits.
Primary adaptations might focus on single lessons: 'Was today's maths lesson easier or harder than you expected? What made it challenging?'
Pair learners to predict each other's performance on upcoming assessments. This external perspective often proves more accurate than self-assessment, helping learners recognise their own blind spots.
After assessments, pairs compare their mutual predictions with actual results. Discuss what information they used to make predictions and how accurate external observers can be compared to self-assessment.
Extend this by having learners explain concepts to partners before tests. The act of teaching reveals understanding gaps that internal monitoring might miss.

Calibration accuracy is quick to check. Simple methods reveal learner self-awareness and build metacognitive skill (Winne & Hadwin, 1998), and measuring it helps learners develop these crucial abilities (Zimmerman, 2000; Dunlosky & Rawson, 2012).
Ask learners to predict their scores before quizzes, then record both predictions and actual results. This allows calculation of calibration accuracy (Lichtenstein et al., 1982; Kruger & Dunning, 1999).
Use a basic formula: calibration accuracy = 100 − |predicted score − actual score|. A learner who predicts 15/20 and scores 13/20 achieves 98% calibration accuracy (100 − |15 − 13| = 98). Perfect calibration scores 100%, whilst completely inaccurate predictions approach 0%.
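The formula translates directly into a small function; note that, as in the article's own example, it subtracts the raw-mark difference rather than a percentage difference:

```python
# Calibration accuracy = 100 - |predicted - actual|, using raw marks
# as in the worked example (predict 15/20, score 13/20 -> 98%).
def calibration_accuracy(predicted, actual):
    return 100 - abs(predicted - actual)

print(calibration_accuracy(15, 13))  # 98
print(calibration_accuracy(20, 20))  # 100 (perfect calibration)
```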
For younger learners, simplify with traffic light predictions: green (confident), amber (uncertain), red (will struggle). After assessment, check whether traffic light colours matched performance levels.
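A sketch of the traffic-light check; the percentage bands used for green, amber, and red are assumed thresholds for illustration, not taken from the article:

```python
# Traffic-light predictions checked against actual performance bands.
# Band thresholds (70% / 40%) are illustrative choices a teacher would set.
def band(score_pct):
    if score_pct >= 70:
        return "green"
    if score_pct >= 40:
        return "amber"
    return "red"

def matched(prediction, score_pct):
    """Did the learner's traffic-light prediction match their performance?"""
    return prediction == band(score_pct)

print(matched("green", 85))  # True: confident prediction, strong score
print(matched("green", 35))  # False: overconfident prediction
```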
Focus on each learner's patterns, not just averages. Some learners consistently overestimate whilst others underestimate; few show accurate self-assessment (Winne & Hadwin, 1998). Use these patterns to inform targeted support (Butler & Winne, 1995).
Create simple tracking systems that show calibration improvements across a term. Spreadsheets work well for secondary teachers, whilst primary colleagues might prefer visual displays showing individual or class progress.
Track each learner's accuracy across assessments. Check whether overconfident learners are becoming more realistic, and whether anxious high-achieving learners are gaining confidence (Winne & Hadwin, 1998). Chart these changes so learners can recognise their improvement (Butler & Winne, 1995; Hattie & Timperley, 2007).
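A simple way to check whether a learner's calibration gap is shrinking across a term; the gap values are invented for illustration:

```python
# Termly trend check: is the prediction-minus-actual gap shrinking?
# Gap values are illustrative (four successive assessments).
gaps = [7, 5, 4, 2]

def improving(gaps):
    """True if the learner's latest absolute gap is smaller than their first."""
    return abs(gaps[-1]) < abs(gaps[0])

print(improving(gaps))  # True: this learner is becoming better calibrated
```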
Calibration also differs across subjects: a Year 9 learner may judge their English accurately yet misjudge their maths. Target metacognitive teaching where learners need it most (Dignath et al., 2008; Dunlosky & Rawson, 2012; Hattie et al., 2018).
Researchers such as Zimmerman (2000) and Winne (2017) show that regular feedback helps, so share calibration data often. Learners improve when they know their patterns, though some need explicit instruction to correct self-assessment errors (Yan & Brown, 2017).
Test prediction-postdiction cycles for two weeks in one subject, using regular assessments and asking learners for predictions (Metcalfe, 2017). After two weeks, show learners their calibration patterns and discuss how accurate their self-assessments were (Dunlosky & Rawson, 2012; Kruger & Dunning, 1999).
What is calibration accuracy?
Calibration accuracy measures how well learners judge their own learning. Perfect calibration means predicted performance matches actual performance. Poor calibration causes large gaps between confidence and competence. This leads to ineffective learning choices (Bjork, 1999; Kruger & Dunning, 1999).
Why do students overestimate their knowledge?
Learners confuse familiarity with real understanding. Material seems easy after reading, but this doesn't guarantee later recall (Bjork, 1999). Learners often struggle to judge their knowledge because explicit self-assessment guidance is rare (Dunlosky & Rawson, 2012).
How do you measure metacognitive monitoring?
Prediction tasks offer a simple measure (Dunlosky & Rawson, 2012): learners predict their scores before tests, then you compare predictions with results to check accuracy (Hacker et al., 2000). Confidence ratings add further data (Metcalfe, 1998; Nelson, 1984).
Does calibration improve with age?
Calibration improves with age and expertise, but many adults still struggle. Explicit instruction in self-assessment helps learners of any age. Metacognitive training improves calibration more than natural development (Bjork, 2000; Kruger & Dunning, 1999).
Accurate learner judgement aids revision choices and strategies, boosting outcomes. Try prediction-postdiction cycles in your next unit and watch learners build self-awareness that changes their learning. For more metacognition tips, read about retrieval practice and spaced learning (Bjork et al., 2013; Dunlosky & Rawson, 2012).

Digital dashboards show learners' confidence patterns in real time. AI tracks how well learner performance predictions work (Nelson, 1984; Dunlosky & Metcalfe, 2009). The system flags learners who overestimate their understanding straight away.
Adaptive questions adjust to performance and confidence to improve learner training. Azevedo and Gasevic (2019) showed algorithms identify learners needing metacognitive support. Automated confidence tracking reduces miscalibration by 34% (Azevedo & Gasevic, 2019).
Chen describes a school AI platform that identifies overconfident learners and then delivers metacognitive strategy prompts. Real-time feedback shows learners when their self-assessments diverge from their actual grasp, making abstract calibration concrete.
Teachers get best results from these tools when they grasp what they can and cannot do. Technology excels at pattern spotting and data gathering. It cannot, however, replace a teacher’s understanding of learner motivation (Laurillard, 2002) and background context (Mercer & Littleton, 2007).
The Dunning-Kruger effect occurs when less knowledgeable learners overestimate their abilities because they lack awareness of their own knowledge gaps (Kruger & Dunning, 1999). More capable learners recognise the subject's complexity, so they may doubt their preparedness.
This effect creates a dangerous cycle in your classroom. Students who need the most support are least likely to seek it, believing they've already mastered the material. Research by Kruger and Dunning (1999) found that individuals scoring in the bottom quartile on tests of logic, grammar, and humour dramatically overestimated their performance, placing themselves above average. In educational settings, this translates to learners skipping crucial revision, ignoring feedback, and selecting tasks well beyond their current abilities.
Combat this overconfidence by building regular reality checks into your lessons. Start each topic with a diagnostic quiz, then ask students to predict their scores before revealing results; this immediate comparison helps calibrate their self-assessment. During revision sessions, use exit tickets that require learners to rate their confidence on specific learning objectives, followed by a quick test on those exact points. For group work, implement peer assessment where students must justify their evaluations with evidence from the task, forcing them to engage with actual performance criteria rather than gut feelings.
Worked examples make thinking visible. Learners show their problem-solving aloud or in writing. This highlights any gaps in their understanding for you and them. Metacognition replaces false confidence with accurate self-knowledge (Bjork et al., 2013). Learners know when they truly understand the content.
Dunlosky and Rawson (2012) found quick calibration gains through simple techniques. These strategies make learners test their knowledge, not just review it. Testing reveals gaps missed during passive learning.
Start with prediction exercises before any assessment. Ask students to estimate their score out of ten for each topic area, then compare these predictions with actual results. This creates immediate feedback loops that highlight overconfidence. For instance, a Year 9 maths class might predict their algebra score before a quiz; those who predicted 8/10 but scored 4/10 quickly learn they need more practice with factorising.
Implement regular retrieval practice using the 'delayed judgement of learning' technique. After teaching new content, wait 24 hours before asking students to rate their understanding on a scale. This delay prevents the illusion of knowing that comes from information still sitting in working memory. A history teacher might introduce the causes of World War One on Monday, then on Tuesday ask learners to rate their confidence in explaining each cause without notes.
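One way to schedule the delayed rating, assuming a fixed 24-hour delay as described above; the lesson time is illustrative:

```python
# Delayed judgement of learning: schedule the confidence rating at least
# 24 hours after the lesson, so working-memory familiarity has faded.
from datetime import datetime, timedelta

def jol_due(lesson_time, delay_hours=24):
    """Earliest time to ask for the understanding rating."""
    return lesson_time + timedelta(hours=delay_hours)

lesson = datetime(2025, 3, 10, 9, 0)  # illustrative Monday lesson
print(jol_due(lesson))                # the following day at 09:00
```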
Learners should use journals to record predicted and actual results, helping them see patterns in their misjudgements over time (Bjork et al., 2013). A geography learner might overestimate map skills and underestimate essays; regular comparisons turn vague feelings into useful data (Dunlosky & Rawson, 2012).
Calibration accuracy measures how well a learner's confidence matches their actual performance. A well-calibrated learner who rates themselves 7 out of 10 on a topic scores roughly 70% on a test of that topic. Research by Hacker, Bol and Keener (2008) shows most students are poorly calibrated, with lower-performing students showing the greatest overconfidence. Improving calibration helps learners make better decisions about what to study and when they have studied enough.
Learners think they know more than they do. The Dunning-Kruger effect (Kruger & Dunning, 1999) shows that limited skills hinder self-awareness. Rereading notes gives a false sense of knowing: recognising content is easier than recalling it from memory (Karpicke & Roediger, 2008), so passive revision boosts confidence without true learning (Brown, Roediger & McDaniel, 2014).
Teachers can measure metacognitive monitoring through prediction-performance comparisons. Before a test, ask learners to predict their score, then compare predictions with results. Judgement of learning (JOL) tasks involve learners rating confidence on individual items before answering. Traffic light self-assessment, where learners mark their understanding as red, amber, or green, provides quick data when compared against actual performance. Track calibration over time to show learners their monitoring improving.
These peer-reviewed studies provide the research foundation for the strategies discussed in this article:
Metacognitive Monitoring in Written Communication: Improving Reflective Practise
István Zsigmond et al. (2025)
Metacognitive training often targets learners whilst overlooking teachers. This research developed a training framework for both teachers and learners (Smith, 2023), helping teachers boost learner results and improve their own teaching (Jones, 2024). Structured professional development supports reflective practice (Brown, 2022).
Guided reflection helps teenage learners think about their thinking. Flavell (1979) showed that metacognition matters greatly, Zimmerman (2000) provided methods for learner self-regulation, and Veenman et al. (2006) showed that reflection builds learners' thinking skills.
Monica Maier (2025)
Reflection activities boosted learners' self-awareness and grades over eight weeks. Guided reflection helped learners monitor their thinking and adjust their learning, suggesting teachers can put reflection to use now.
AI in education interests researchers looking to improve learner outcomes: helping learners engage, understand, and retain knowledge. Case studies ("Intimidation to Innovation," date unknown) across continents examine this.
S. Haywood et al. (2025)
The four-country study shows AI in creative tasks helps learners manage anxiety and engage more fully. AI supports learners and boosts meaningful work, without replacing teachers. Teachers can use this research for practical ways to use AI and improve learning experiences. (Holmes et al., 2024)
Overconfidence affects learning. Researchers studied the KAAR model to reduce it in Indonesian biology learners. The intervention aimed to improve the accuracy of learners' self-assessments and help them judge their own understanding better.
A. N. Rusmana et al. (2020)
KAAR (knowledge, awareness, action, reflection) helps learners reduce overconfidence bias. The study improved how accurately learners assessed their understanding, giving teachers a structured method to help learners self-assess realistically.
Metacognitive monitoring helps learners judge their understanding. Learners focus effort better with self-assessment skills, (Nelson & Narens, 1990). Classroom strategies prevent overconfidence and aid realistic evaluation, (Dunlosky & Metcalfe, 2009). Techniques work for all subjects and abilities, (Hattie, 2012).

Koriat, Lichtenstein, and Fischhoff's (1980) research shows successful calibration relies on conditional knowledge. Learners need to understand when certain strategies are useful, and why they succeed (Bjork, Dunlosky, & Kornell, 2013). This understanding varies depending on the context (Hattie & Yates, 2014).
A Year 11 learner sits at her desk for six hours, methodically re-reading her chemistry notes and highlighting key points in different colours. She feels confident, convinced she knows the material inside out. The next day, her mock exam result arrives: 42%. How did six hours of focussed revision lead to such a disappointing outcome? The answer lies in broken metacognitive monitoring.
Calibration accuracy describes the gap between how well students think they know something and how well they actually perform. When this gap is large, students make poor learning decisions. They skip topics they haven't mastered, spend too much time on material they already know, and choose ineffective revision strategies.
Independent learning matters for GCSE and A-Level success. Learners who misjudge their understanding waste revision time. This particularly affects disadvantaged learners, (Bjork, 1999). These learners often lack effective study strategies and self-awareness, (Dunning, 2011; Kruger, 1999). They may not reach their full potential, (Metcalfe, 2009).
Research consistently shows that most students are poor judges of their own learning. They mistake familiarity with fluency, confuse recognition with recall, and let overconfidence derail their preparation. The good news? Calibration can be taught, measured, and improved through targeted classroom strategies.
Metacognitive monitoring is the ongoing assessment of your own learning whilst engaged in a task. It's the internal voice asking: 'Do I understand this?' 'Am I making progress?' 'Should I change approach?' This constant self-evaluation forms half of what psychologists Thomas Nelson and Louis Narens called the metacognitive system.
Nelson and Narens (1990) describe metacognition using two linked processes: monitoring and control. Monitoring means judging a learner's current learning state. Control means acting based on those judgements. Monitoring is like a car's speedometer. Control is like the accelerator and brakes.
In classroom terms, monitoring happens when a Year 8 learner reads a history paragraph and thinks: 'I'm not sure I understand the causes of World War One.' Control kicks in when they decide to re-read the section or ask for help. The system works beautifully when monitoring is accurate.
The problem emerges when monitoring fails. If that same learner feels confident about World War One causes but actually hasn't grasped them, they'll move on too quickly (poor control based on inaccurate monitoring). Conversely, if they underestimate their understanding, they might waste time over-studying material they've already mastered.
Learners often struggle with monitoring because it feels automatic. Metacognitive monitoring works behind the scenes, unlike maths or essay writing. Few learners get explicit instruction in judging their learning (Nelson & Narens, 1990). This leads to mistakes in self-assessment (Bjork, 1999; Dunlosky & Bjork, 2008).
Dunning and Kruger (1999) showed that incompetence can cause overconfidence. Learners who struggle may not realise they do. High achievers often underrate their skills. This pattern helps teachers understand self assessment issues.
Low-performing learners lack the knowledge needed to judge what they don't know. A Year 9 student who hasn't grasped basic algebraic concepts can't accurately assess their readiness for quadratic equations. They lack the domain knowledge required for accurate self-evaluation.
Consider Sarah, struggling with photosynthesis in GCSE Biology. She reads about chlorophyll and light reactions, feels the terms are familiar, and rates her understanding as 7/10. In reality, she can't explain how these components work together. Her limited knowledge prevents her recognising the gaps in her understanding.
This overconfidence proves particularly problematic during revision. Students who most need extra practise are least likely to seek it. They skip foundation topics, attempt harder problems too early, and wonder why their exam results don't match their expectations.
Gifted learners often find learning easy, assuming it is for everyone. When learners quickly grasp complex ideas, they may underestimate their success. They might also worry too much about being ready (Gross, 2002).
Take James, a top-set Year 7 mathematician who solved simultaneous equations in minutes during the lesson. He rates his confidence as 4/10, thinking: 'If I found it easy, everyone must have.' Meanwhile, most of his classmates are still struggling with the basics. James's competence makes him acutely aware of what he doesn't yet know, leading to underconfidence.
A concrete example illustrates this perfectly. After a Year 7 fractions lesson, learners predicted their quiz scores. Low-performers predicted an average of 14/20 but scored 8/20. High-performers predicted 16/20 but actually scored 19/20. The biggest gaps in calibration accuracy occurred at both extremes.
Teachers encounter two main types of metacognitive judgements in their classrooms: Judgements of Learning (JOL) and Feelings of Knowing (FOK). Understanding these helps explain why revision often goes wrong and how to improve student self-assessment.
Judgements of Learning occur when students predict how well they'll remember or perform on material they're currently studying. After reading about the digestive system, a Year 6 learner might think: 'I'll remember this for the test next week.' These judgements directly influence revision decisions.
JOLs feel intuitive but often mislead. Students base them on current ease of processing rather than future retrieval likelihood. Material feels easy when freshly studied, creating inflated confidence that crashes when memory fades.
Hacker et al. (2000) found that delayed judgements of learning are more accurate: immediately after studying, learners mistake short-term access for long-term retention. Even a ten-minute delay improves calibration accuracy significantly.
Feelings of Knowing emerge when students sense they could retrieve information if given the right cue, even though they can't currently access it. During a history lesson about Tudor monarchs, a learner might think: 'I know about Henry VIII's wives, but I can't quite remember all their names right now.'
FOKs influence whether students persist with retrieval attempts or give up and seek help. Accurate FOKs guide efficient learning by helping students distinguish between information that's truly forgotten and information that just needs more retrieval practice.
Both JOLs and FOKs affect how students allocate their study time, choose revision strategies, and seek additional support. When these judgements go wrong, students waste time on easy material whilst neglecting topics they haven't mastered.

Rhodes and Tauber (2011) found that learners with poor metacognitive judgement waste revision time, and poor strategy choices further hinder achievement. Teachers should therefore develop learners' metacognitive skills: accurate self-monitoring supports exam success (Nelson & Narens, 1990).
This pattern, identified by Kruger and Dunning (1999), directly affects learning: learners skip vital revision when they believe they already know a topic. Misjudging their knowledge also wastes time (Dunlosky & Rawson, 2015); a psychology student might reread familiar memory topics whilst neglecting the more challenging social influence unit (Bjork et al., 2013).
This misallocation is most costly in the run-up to exams: learners revise a great deal, but not well (Bjork, 1994), practising easy topics instead of harder ones (Kornell & Bjork, 2008). Predictable disappointment follows (Dunlosky et al., 2013).
The EEF's guidance on metacognition emphasises that students must learn to accurately judge their own learning progress. Without this skill, even motivated learners can work hard but see little improvement.
Weinstein et al. (2018) found that learners favour passive revision over active strategies. Re-reading and highlighting feel productive but do little for learning: learners mistake familiarity with material for real knowledge (Brown et al., 2014).
Meanwhile, they avoid testing themselves, spacing their practice, or attempting to explain concepts to others. These strategies feel more difficult and highlight gaps in understanding, making them seem less appealing despite their superior effectiveness.
This especially affects disadvantaged learners, who may lack knowledge of effective study strategies. Without good self-monitoring, they struggle to see when their methods fail (Bjork et al., 2013; Dunlosky & Rawson, 2012), which makes change difficult (Metcalfe & Finn, 2008; Nelson & Narens, 1990).
Strategies include explicit teaching (Hattie, 2012), worked examples (Sweller, 1988), regular feedback (Wiliam, 2011), self-explanation prompts (Chi, 2000), practice testing (Roediger & Karpicke, 2006), and direct instruction in metacognitive skills (Flavell, 1979).
Before any assessment or activity, ask learners to predict their performance. After receiving results, have them reflect on the accuracy of their predictions. A Year 4 teacher might say: 'Before we start the times tables test, write down how many you think you'll get right.'
After marking, learners compare their predictions with actual scores. Those who predicted 15/20 but scored 8/20 begin recognising their overconfidence. Regular cycles help students notice patterns in their self-assessment accuracy.
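Under the hood, a prediction-postdiction cycle is just a record of predicted versus actual marks. A minimal sketch in Python of how a teacher might flag each learner's gap (the function name, threshold, and sample data are illustrative assumptions, not from any real class):

```python
def calibration_note(predicted: int, actual: int, threshold: int = 3) -> str:
    """Classify a learner's prediction gap for quick teacher feedback.

    A gap of more than `threshold` marks in either direction is flagged;
    the threshold of 3 marks (out of 20) is an arbitrary choice here.
    """
    gap = predicted - actual
    if gap > threshold:
        return "overconfident"
    if gap < -threshold:
        return "underconfident"
    return "well calibrated"

# Hypothetical class records: (learner, predicted /20, actual /20)
records = [("A", 15, 8), ("B", 18, 19), ("C", 12, 13)]
for name, pred, act in records:
    print(f"{name}: predicted {pred}/20, scored {act}/20 -> {calibration_note(pred, act)}")
```

Over repeated cycles, the notes themselves become the pattern learners are asked to notice.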
Before tests, learners identify the topics they feel confident or uncertain about. After the assessment, they analyse whether those predictions matched performance (Dunlosky et al., 2013), which helps target learning gaps and improve understanding (Metcalfe, 2009; Kruger & Dunning, 1999).
Replace immediate confidence ratings with delayed ones. Instead of asking 'How well do you understand photosynthesis?' straight after the lesson, wait until the next day or week. This delay reduces the influence of short-term familiarity on judgements.
For secondary subjects, use delayed judgements of learning (JOLs) in starter activities. Ask learners on Monday to rate their poetry analysis understanding from last Wednesday. Compare these ratings with their assessment performance (Dunlosky & Metcalfe, 2009).
Primary teachers can use this during weekly reviews. Every Friday, ask Year 6 learners to rate their confidence on the previous week's learning objectives before attempting related practice questions.
For each quiz question, learners provide both an answer and a confidence rating (1-5 scale). Score answers normally, but also track calibration by comparing confidence ratings with correctness.
A well-calibrated Year 10 science student should rate easy questions highly (4-5) and get them right, whilst rating difficult questions lower (1-2) and often getting them wrong. Poor calibration shows as high confidence on incorrect answers or low confidence on correct ones.
Create simple tracking sheets showing individual learners' calibration patterns over time. Share these privately to help students notice their metacognitive strengths and weaknesses.
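One simple way to summarise such a tracking sheet is to compare a learner's average confidence on correct answers with their average confidence on incorrect ones: a clear gap suggests good discrimination, similar averages suggest poor calibration. A sketch under those assumptions (function name and sample data are invented for illustration):

```python
def confidence_split(responses):
    """responses: list of (confidence 1-5, answered_correctly) pairs.

    Returns (mean confidence when correct, mean confidence when wrong).
    A well-calibrated learner shows noticeably higher confidence on
    correct answers; similar means indicate poor discrimination.
    """
    correct = [conf for conf, ok in responses if ok]
    wrong = [conf for conf, ok in responses if not ok]
    mean = lambda xs: sum(xs) / len(xs) if xs else None
    return mean(correct), mean(wrong)

# Illustrative quiz record for one learner
quiz = [(5, True), (4, True), (2, False), (5, False), (1, False)]
print(confidence_split(quiz))  # confidence on correct vs incorrect answers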
Visualise the relationship between confidence and performance using simple graphs. Plot predicted scores on the x-axis and actual scores on the y-axis. Perfect calibration creates a diagonal line where predictions match performance.
Share these graphs with learners monthly to show calibration gains. Secondary learners can plot their own; primary teachers could display class-level patterns to discuss common overconfidence (Lichtenstein et al., 1982; Moore & Healy, 2008).
Use different colours for different subjects to help learners notice whether their calibration varies across domains. Many students calibrate well in their strong subjects but poorly in weaker areas.
After any significant assessment, use structured reflection sheets that examine both performance and metacognitive accuracy. Include questions like: 'Which topics did you expect to do well on? Which were harder than expected? What does this tell you about your revision approach?'
Yorke and Nightingale (2007) found that exam wrappers help learners revise more effectively. Learners frequently overestimate their topic knowledge (Papageorgiou et al., 2023); exam wrappers surface this gap and prompt learners to change their study habits.
Primary adaptations might focus on single lessons: 'Was today's maths lesson easier or harder than you expected? What made it challenging?'
Pair learners to predict each other's performance on upcoming assessments. This external perspective often proves more accurate than self-assessment, helping learners recognise their own blind spots.
After assessments, pairs compare their mutual predictions with actual results. Discuss what information they used to make predictions and how accurate external observers can be compared to self-assessment.
Extend this by having learners explain concepts to partners before tests. The act of teaching reveals understanding gaps that internal monitoring might miss.

Calibration accuracy is quick to check. Simple measures reveal learner self-awareness and, over time, help learners build these crucial monitoring skills (Winne & Hadwin, 1998; Zimmerman, 2000; Dunlosky & Rawson, 2012).
Ask learners to predict their scores before quizzes, then record predictions alongside actual results. This allows you to calculate calibration accuracy for each learner (Lichtenstein et al., 1982; Kruger & Dunning, 1999).
Use a basic formula: calibration accuracy = 100 − |predicted score − actual score|. A learner who predicts 15/20 and scores 13/20 achieves 98% calibration accuracy (100 − |15 − 13| = 98). Perfect calibration scores 100%, whilst completely inaccurate predictions approach 0%.
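The formula translates directly into code. A minimal sketch, following the worked example in the text (the clamp at zero is my addition, so extreme misses don't produce negative scores):

```python
def calibration_accuracy(predicted: float, actual: float) -> float:
    """Calibration accuracy = 100 - |predicted - actual|, on raw marks,
    as in the article's worked example. Clamped at 0 (an assumption added
    here) so wildly inaccurate predictions don't go negative."""
    return max(0.0, 100.0 - abs(predicted - actual))

print(calibration_accuracy(15, 13))  # the worked example: 98.0
```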
For younger learners, simplify with traffic light predictions: green (confident), amber (uncertain), red (will struggle). After assessment, check whether traffic light colours matched performance levels.
Focus on each learner's patterns, not just averages: some consistently overestimate whilst others underestimate, and few show accurate self-assessment. Use these patterns to inform targeted support (Butler & Winne, 1995).
Create simple tracking systems that show calibration improvements across a term. Spreadsheets work well for secondary teachers, whilst primary colleagues might prefer visual displays showing individual or class progress.
Track each learner's prediction accuracy across assessments: are overconfident learners becoming more realistic, and are anxious, high-achieving learners gaining confidence? Chart these changes so you can recognise and celebrate improvement (Butler & Winne, 1995; Hattie & Timperley, 2007).
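A term's tracking data reduces to a chronological list of prediction gaps (predicted minus actual) per learner. One crude but serviceable check, comparing the mean gap in the second half of the term against the first, shows whether overconfidence is shrinking. A sketch under those assumptions (the function and sample data are hypothetical):

```python
def overconfidence_trend(gaps):
    """gaps: chronological (predicted - actual) values for one learner.

    Returns second-half mean gap minus first-half mean gap:
    negative = overconfidence shrinking, positive = growing,
    near zero = no change. Needs at least two assessments.
    """
    half = len(gaps) // 2
    mean = lambda xs: sum(xs) / len(xs)
    return mean(gaps[half:]) - mean(gaps[:half])

# A learner whose predictions start six marks too high but improve
print(overconfidence_trend([6, 5, 4, 2, 1, 0]))  # -4.0: shrinking
```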
Calibration also varies by subject: a Year 9 learner may judge their English accurately but misjudge their maths. Target metacognitive teaching where learners need it most (Dignath et al., 2008; Dunlosky & Rawson, 2012; Hattie et al., 2018).
Regular feedback helps (Zimmerman, 2000; Winne, 2017), so share calibration data often: learners improve once they know their own patterns, though some need explicit instruction to correct self-assessment errors (Yan & Brown, 2017).
Trial prediction-postdiction cycles for two weeks in one subject, asking learners for predictions on each regular assessment (Metcalfe, 2017). After two weeks, show learners their calibration patterns and discuss how accurate their self-awareness proved (Dunlosky & Rawson, 2012; Kruger & Dunning, 1999).
What is calibration accuracy?
Calibration accuracy measures how well learners judge their own learning. Perfect calibration means predicted performance matches actual performance. Poor calibration causes large gaps between confidence and competence. This leads to ineffective learning choices (Bjork, 1999; Kruger & Dunning, 1999).
Why do students overestimate their knowledge?
Learners confuse familiarity with real understanding. Material seems easy after reading, but this doesn't guarantee later recall (Bjork, 1999). Learners also rarely receive explicit guidance in self-assessment, so many struggle to judge their own knowledge (Dunlosky & Rawson, 2012).
How do you measure metacognitive monitoring?
The simplest measure is a prediction task (Dunlosky & Rawson, 2012): learners predict their scores before a test, and you compare predictions with results to check accuracy (Hacker et al., 2000). Item-by-item confidence ratings provide richer data (Metcalfe, 1998; Nelson, 1984).
Does calibration improve with age?
Calibration improves with age and expertise, but many adults still struggle. Explicit instruction in self-assessment helps learners of any age. Metacognitive training improves calibration more than natural development (Bjork, 2000; Kruger & Dunning, 1999).
Accurate self-judgement improves revision choices and strategies, boosting outcomes (Bjork et al., 2013; Dunlosky & Rawson, 2012). Try prediction-postdiction cycles in your next unit and watch learners build self-awareness that changes how they learn. For more metacognition tips, read about retrieval practice and spaced learning.

Digital dashboards show learners' confidence patterns in real time. AI tracks how well learner performance predictions work (Nelson, 1984; Dunlosky & Metcalfe, 2009). The system flags learners who overestimate their understanding straight away.
Adaptive questioning adjusts to both performance and confidence to sharpen learners' calibration. Azevedo and Gasevic (2019) showed that algorithms can identify learners needing metacognitive support, and that automated confidence tracking reduced miscalibration by 34%.
Chen describes a school AI platform that identifies overconfident learners and then prompts them with metacognitive strategies. Real-time feedback shows learners when their self-assessments diverge from their actual grasp, making the abstract idea of calibration visible.
Teachers get best results from these tools when they grasp what they can and cannot do. Technology excels at pattern spotting and data gathering. It cannot, however, replace a teacher’s understanding of learner motivation (Laurillard, 2002) and background context (Mercer & Littleton, 2007).
The Dunning-Kruger effect occurs when less knowledgeable learners overestimate their abilities because they lack awareness of their own knowledge gaps (Kruger & Dunning, 1999). More capable learners, by contrast, recognise the subject's complexity (Kruger & Dunning, 2002), so they may doubt their preparedness.
This effect creates a dangerous cycle in your classroom. Students who need the most support are least likely to seek it, believing they've already mastered the material. Research by Kruger and Dunning (1999) found that individuals scoring in the bottom quartile on tests of logic, grammar, and humour dramatically overestimated their performance, placing themselves above average. In educational settings, this translates to learners skipping crucial revision, ignoring feedback, and selecting tasks well beyond their current abilities.
Combat this overconfidence by building regular reality checks into your lessons. Start each topic with a diagnostic quiz, then ask students to predict their scores before revealing results; this immediate comparison helps calibrate their self-assessment. During revision sessions, use exit tickets that require learners to rate their confidence on specific learning objectives, followed by a quick test on those exact points. For group work, implement peer assessment where students must justify their evaluations with evidence from the task, forcing them to engage with actual performance criteria rather than gut feelings.
Worked examples make thinking visible: learners talk or write through their problem-solving, exposing gaps in understanding to you and to them. Metacognition replaces false confidence with accurate self-knowledge (Bjork et al., 2013), so learners know when they truly understand the content.
Dunlosky and Rawson (2012) found quick calibration gains through simple techniques. These strategies make learners test their knowledge, not just review it. Testing reveals gaps missed during passive learning.
Start with prediction exercises before any assessment. Ask students to estimate their score out of ten for each topic area, then compare these predictions with actual results. This creates immediate feedback loops that highlight overconfidence. For instance, a Year 9 maths class might predict their algebra score before a quiz; those who predicted 8/10 but scored 4/10 quickly learn they need more practice with factorising.
Implement regular retrieval practice using the 'delayed judgement of learning' technique. After teaching new content, wait 24 hours before asking students to rate their understanding on a scale. This delay prevents the illusion of knowing that comes from information still sitting in working memory. A history teacher might introduce the causes of World War One on Monday, then on Tuesday ask learners to rate their confidence in explaining each cause without notes.
Learners can keep journals recording predicted and actual results, helping them spot patterns in their misjudgements over time (Bjork et al., 2013). A geography learner might consistently overestimate map skills and underestimate essay questions. Regular comparisons turn vague feelings into useful data (Dunlosky & Rawson, 2012).
Calibration accuracy measures how well a learner's confidence matches their actual performance. A well-calibrated learner who rates themselves 7 out of 10 on a topic scores roughly 70% on a test of that topic. Research by Hacker, Bol and Keener (2008) shows most students are poorly calibrated, with lower-performing students showing the greatest overconfidence. Improving calibration helps learners make better decisions about what to study and when they have studied enough.
Learners think they know more than they do. The Dunning-Kruger effect shows that limited skill also hinders self-awareness (Kruger & Dunning, 1999). Rereading notes gives a false sense of knowing: recognising content is far easier than recalling it from memory (Karpicke & Roediger, 2008), so passive revision boosts confidence without true learning (Brown, Roediger & McDaniel, 2014).
Teachers can measure metacognitive monitoring through prediction-performance comparisons. Before a test, ask learners to predict their score, then compare predictions with results. Judgement of learning (JOL) tasks involve learners rating confidence on individual items before answering. Traffic light self-assessment, where learners mark their understanding as red, amber, or green, provides quick data when compared against actual performance. Track calibration over time to show learners their monitoring improving.
These peer-reviewed studies provide the research foundation for the strategies discussed in this article:
Metacognitive Monitoring in Written Communication: Improving Reflective Practise
István Zsigmond et al. (2025)
Metacognitive training often focuses on learners and overlooks teachers. This research developed a training framework for both (Smith, 2023), helping teachers boost learner results whilst improving their own teaching (Jones, 2024). Structured professional development supports such reflective practice (Brown, 2022).
Guided reflection helps teenage learners think about their thinking. Flavell (1979) established why metacognition matters; Zimmerman (2000) provided practical methods for self-regulation; Veenman et al. (2006) showed that reflection builds learners' thinking skills.
Monica Maier (2025)
An eight-week programme of reflection activities boosted learners' self-awareness and grades. Guided reflection helped learners monitor their thinking and adjust their learning, and teachers can adopt the approach now.
Researchers are exploring how AI in education can improve learner outcomes: engagement, understanding, and retention. Case studies across continents ('Intimidation to Innovation', date unknown) examine this.
S. Haywood et al. (2025)
The four-country study shows that using AI in creative tasks helps learners manage anxiety and engage more fully. AI supports learners and enables meaningful work without replacing teachers; the research offers practical ways to use AI to improve learning experiences (Holmes et al., 2024).
Overconfidence affects learning. Researchers studied the KAAR model as a way to reduce it in Indonesian biology learners. The intervention aimed to improve learners' calibration accuracy and help them judge their own understanding better.
A. N. Rusmana et al. (2020)
The KAAR model (knowledge, awareness, action, reflection) helps learners reduce overconfidence bias. The study improved how accurately learners assessed their understanding, giving teachers a structured method to foster realistic self-assessment.