Visible Learning: Hattie's Research on What Works

Updated on March 16, 2026

Published: October 26, 2021

Hattie's Visible Learning research explained: which teaching strategies have the biggest impact on pupil outcomes. Evidence-based methods from 300+ million learners.


Main, P (2021, October 26). Visible Learning: A teacher's guide. Retrieved from https://www.structural-learning.com/post/visible-learning-a-teachers-guide

What is Visible Learning?

Visible Learning is an evidence-based approach to teaching developed by education researcher John Hattie. At its core, the idea is simple: learning should be visible, to teachers and to students themselves. This means students must know what they are learning, how to go about learning it, and how to measure their progress along the way. Hattie's work shifts the focus from simply delivering content to evaluating the impact of teaching on student achievement.

Key Takeaways

  1. Visible Learning quantifies teaching impact through effect sizes: Hattie's extensive meta-analyses identify high-impact strategies, with a 0.40 effect size serving as a benchmark for meaningful gains in pupil achievement (Hattie, 2009). This evidence-based approach empowers teachers to prioritise interventions proven to accelerate pupil progress effectively.
  2. Teachers are the most significant influence on pupil achievement: Hattie's research consistently highlights the profound impact of teacher expertise and pedagogical choices on learning outcomes (Hattie, 2012). Effective Visible Learning implementation requires teachers to understand their impact and adapt their practice based on clear evidence of pupil learning.
  3. Clear learning intentions and success criteria are fundamental to visible learning: For pupils to understand what they are learning and how to achieve it, teachers must explicitly articulate learning intentions and corresponding success criteria (Hattie & Yates, 2014). This clarity enables pupils to self-regulate their learning and provides teachers with precise evidence of progress.
  4. Continuous evaluation of teaching strategies is essential for maximising pupil achievement: Visible Learning encourages teachers to become evaluators of their own impact, using evidence to determine which strategies are most effective for their pupils (Hattie, 2009). Tools like the EEF Strategy Recommendation Engine support schools in selecting and assessing the cost-effectiveness of evidence-based approaches.

What does the research say? Hattie's (2009) Visible Learning synthesis analysed 800+ meta-analyses covering 80+ million pupils. The average effect size across all interventions is d = 0.40 (the "hinge point"). Top influences include collective teacher efficacy (d = 1.57), self-reported grades (d = 1.44), teacher credibility (d = 0.90) and feedback (d = 0.70). The key insight: teachers must see learning through pupils' eyes and make the learning process visible.

Based on a meta-analysis of millions of students and thousands of studies, Hattie introduced the concept of effect size, a way to identify which teaching strategies have the greatest impact on learning. His findings offer a clear message: great teaching is not just about planning activities, it's about seeing learning through the eyes of students and helping them become their own teachers.

Visible Learning framework infographic showing what it is, how to implement it, and why it works
The Visible Learning Framework

The Visible Learning model places strong emphasis on:

Circular diagram showing the four-stage Visible Learning feedback cycle with directional arrows

  • Setting clear learning intentions and success criteria
  • Using feedback and assessment to guide progress
  • Encouraging learners to take ownership of their learning journey
  • Positioning teachers not just as facilitators but as activators of learning who monitor progress, adapt instruction, and make teaching decisions based on real-time evidence of what's working.

    Key Principles of Visible Learning:

    • Clarity and Goal-Setting: Students must understand what they're learning and why it matters.
    • Feedback-Informed Practice: Teachers continuously adjust instruction based on assessment evidence.
    • Student Ownership: Learners are active participants who reflect on and take responsibility for their progress.

    Visible Learning assigns teachers an enhanced role as evaluators of their own teaching. According to John Hattie, visible learning and intelligent teaching take place when teachers see learning through the eyes of their students and guide those students to become their own teachers.

    To measure the effect of visible learning, Hattie used 'effect size' as a common statistical yardstick across data from millions of students, comparing the measured impact of many influences on student achievement, e.g. learning strategies, feedback, holidays and class size.

    Visible Learning Effect Sizes

    The research foundation reveals striking patterns across different educational contexts and subjects. Hattie's analysis demonstrated that feedback, for instance, achieves an effect size of 0.7, making it nearly twice as powerful as average teaching practices. Similarly, formative evaluation scores 0.9, whilst collective teacher efficacy - when teachers believe they can positively impact all students - reaches an impressive 1.57. These findings provide teachers with clear priorities for professional development and classroom implementation.

    Pyramid infographic showing John Hattie's hierarchy of teaching strategies by their effect size, from highest impact like collective teacher efficacy to lowest below the 0.4 threshold.
    Teaching Impact Hierarchy

    In practice, Visible Learning strategies transform everyday classroom interactions. Teachers might begin lessons by sharing learning intentions and success criteria, ensuring students understand what they're learning and how they'll know when they've succeeded. During lessons, teachers actively seek evidence of student understanding through questioning techniques and mini-assessments, adjusting their instruction accordingly. Students become partners in this process, learning to self-assess and provide meaningful peer feedback, creating a classroom culture where learning is everyone's responsibility.

    How do teachers implement the visible learning model effectively in their classrooms?

    Teachers implement visible learning by making learning intentions explicit at the start of each lesson and sharing clear success criteria with students. They continuously gather evidence of student understanding through formative assessment and adjust their teaching based on this feedback. The model requires teachers to help students understand where they are in their learning journey and what steps they need to take next.

    John Hattie synthesised over 800 meta-analyses, drawing on some 50,000 studies and 80 million students, to identify what makes student learning most successful. According to the meta-analyses chapter of Visible Learning, the greater the effect size, the more beneficial the approach: anything at or above 0.40 falls within the "Zone of Desired Effects." Hattie contends that schools and teachers should focus their energy on strengthening practice with these approaches. According to John Hattie, visible learners are students who can:

    • Set learning goals;
    • Express what they are learning;
    • Describe the next steps in their learning;
    • Know what to do when they are stuck;
    • See mistakes as opportunities for additional learning;
    • Act on feedback.

    This aligns with Rosenshine's principles of effective instruction, which emphasise the importance of clear guidance and structured support. Students who develop these capabilities show greater self-regulation and become more independent learners.

    Effective questioning techniques play a crucial role in making thinking visible. Teachers can use questioning strategies to probe student understanding and guide them through their learning process. This approach is particularly powerful when combined with thinking routines that make student thought processes explicit.

    Visual tools can also support visible learning by helping students organise and represent their understanding. Graphic organisers and concept maps allow students to see connections between ideas and track their developing knowledge structures.

    The visible learning approach recognises that different students may need different levels of support depending on their needs. Teachers working with students with special educational needs can adapt these strategies to ensure all learners can participate effectively in the learning process.

    Understanding how students process information is essential for implementing visible learning effectively. Teachers need to be aware of working memory limitations and design instruction that supports cognitive processing while making learning visible.

    Student motivation plays a critical role in visible learning success. When students can see their progress and understand their learning goals, they become more invested in the process and take greater ownership of their education.

    Structural Learning

    Visible Learning Impact Auditor

    Select the strategies you currently use. See how your teaching toolkit compares against Hattie's effect size research.

    This tool lets you audit your teaching strategies against Hattie's Visible Learning effect sizes. Select the strategies you use regularly and see their average impact, individual rankings, and whether you are investing time in high-impact or low-impact approaches.

    With over 250 influences on student achievement measured, Hattie's meta-analyses provide the largest evidence base for what works in education. Strategies with an effect size above 0.40 represent roughly a year's worth of progress for a year's input. Below 0.20 and the strategy may not be worth the time invested.

    (Hattie, 2009; Hattie, 2023)

    1. Select the teaching strategies you regularly use from the list.
    2. Review the effect size for each strategy and your overall average.
    3. Identify strategies to prioritise and those to reconsider.

    Your Impact Profile

    About effect sizes: Hattie (2009, 2023) synthesised 1,800+ meta-analyses covering 300 million students. An effect size of 0.40 represents roughly one year's progress. Strategies above this "hinge point" accelerate learning beyond typical growth. Your average reflects the combined impact of your selected strategies.

    The Research Behind Visible Learning

    John Hattie's Visible Learning represents one of the most comprehensive syntheses of educational research ever undertaken, drawing from over 800 meta-analyses encompassing approximately 50,000 studies and 80 million students. This unprecedented scale of analysis allows educators to move beyond individual studies or personal anecdotes to understand which teaching practices genuinely accelerate student achievement. Hattie's work transforms scattered research findings into practical guidance that can directly inform classroom practice.

    The foundation of Visible Learning rests on effect sizes, a statistical measure that quantifies the impact of different educational interventions. Hattie established that an effect size of 0.40 represents the average yearly growth students typically achieve, setting this as the benchmark for determining whether teaching strategies are genuinely effective. Interventions exceeding this threshold demonstrate above-average impact on learning outcomes, whilst those below suggest limited educational value despite potentially consuming significant time and resources.

    For classroom practitioners, this research foundation provides evidence-based guidance for prioritising professional development and instructional strategies. Rather than adopting every new educational trend, teachers can focus their efforts on high-impact practices such as feedback, formative evaluation, and metacognitive strategies, all of which consistently demonstrate substantial effect sizes across diverse educational contexts.

    Understanding Effect Sizes in Education

    Effect sizes provide teachers with a powerful lens for evaluating the true impact of different educational practices on student learning. Unlike traditional research that simply tells us whether something works, effect sizes reveal how much it works, allowing educators to distinguish between marginal gains and transformative strategies. John Hattie's synthesis of over 800 meta-analyses established that an effect size of 0.40 represents the average yearly progress students make, providing a crucial benchmark for assessing teaching interventions.

    Understanding this metric transforms how teachers approach professional development and classroom decision-making. Practices with effect sizes above 0.40 accelerate learning beyond typical progress, whilst those below may actually hinder student achievement. For instance, Hattie's research shows that feedback achieves an effect size of 0.70, indicating substantial impact, whereas ability grouping registers just 0.12, suggesting minimal benefit despite its widespread use in schools.

    In practical terms, teachers can use effect sizes to prioritise their energy and resources. Rather than adopting every new initiative, focus on evidence-based strategies with demonstrated high impact. This might mean investing time in developing quality feedback systems, implementing formative assessment practices, or building strong teacher-student relationships, all of which consistently show effect sizes well above the 0.40 threshold for meaningful educational impact.
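This prioritisation logic can be sketched in a few lines of Python. The effect sizes in the dictionary are the figures quoted in this article; the function and variable names are illustrative, not part of any published tool:

```python
# Effect sizes as quoted in this article (from Hattie's syntheses)
EFFECT_SIZES = {
    "collective teacher efficacy": 1.57,
    "self-reported grades": 1.44,
    "teacher credibility": 0.90,
    "feedback": 0.70,
    "ability grouping": 0.12,
}

HINGE = 0.40  # Hattie's benchmark for average yearly progress


def prioritise(effects, hinge=HINGE):
    """Return strategies at or above the hinge point, highest impact first."""
    above = [name for name, d in effects.items() if d >= hinge]
    return sorted(above, key=lambda name: effects[name], reverse=True)


print(prioritise(EFFECT_SIZES))
# Ability grouping (d = 0.12) falls below the hinge and is filtered out.
```

Running this ranks collective teacher efficacy first and drops ability grouping entirely, mirroring the priority argument made above.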

    Effect Sizes and the d=0.40 Hinge Point

    When John Hattie published Visible Learning in 2009, he synthesised 800 meta-analyses covering more than 50,000 individual studies and roughly 80 million students. To make sense of that volume of research, he used a statistical tool called Cohen's d: a standardised measure of the difference between a treatment group and a control group, expressed in units of standard deviation. A d of 1.0 means the average student in the treatment group outperformed 84 per cent of students in the control group. A d of 0.20 is a small effect; 0.50 is moderate; 0.80 is large (Cohen, 1988).
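A short standard-library Python sketch shows how Cohen's d is computed from two groups and how a given d converts into the "per cent of the control group outperformed" figure cited above. The sample scores are invented for illustration:

```python
from statistics import NormalDist, mean, stdev


def cohens_d(treatment, control):
    """Cohen's d: standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd


def percent_outperformed(d):
    """Share of the control group the average treated student exceeds,
    assuming normally distributed scores."""
    return NormalDist().cdf(d) * 100


print(round(percent_outperformed(1.0)))   # 84 — matches the rule of thumb above
print(round(percent_outperformed(0.40)))  # 66 — the hinge point
```

So a strategy at the d = 0.40 hinge moves the average student past roughly two-thirds of an untreated comparison group.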

    Hattie's key contribution was the hinge point of d=0.40, which he calculated as the average effect size across all the influences he examined. He proposed that teachers should use 0.40 as a baseline: any strategy that produces an effect size below it is delivering less than a typical year's teaching, regardless of how popular or well-resourced that strategy might be. Approaches above the line represent meaningful acceleration of learning. This reframing matters because it shifts the question from "does this work?" to "does this work better than simply being taught?" (Hattie, 2009).

    The mechanics behind a meta-analysis are worth understanding. Researchers calculate an effect size for each individual study, then average those effect sizes across the meta-analysis, weighting by sample size. Hattie then averaged effect sizes across multiple meta-analyses to produce his ranked list. Each layer of aggregation increases the distance between the original classroom data and the final number that appears in a league table. What you see as d=0.60 for a given strategy may represent thousands of different teacher-student interactions, in different countries, measured with different assessments, across different subject areas.
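The within-meta-analysis aggregation step described above amounts to a sample-size-weighted mean. The study tuples below are hypothetical, not values from Hattie's database:

```python
def weighted_mean_effect(studies):
    """Sample-size-weighted mean effect size across studies.

    `studies` is a list of (effect_size, sample_size) pairs; larger
    studies pull the pooled estimate towards their own d.
    """
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n


# Hypothetical studies of one strategy: (effect size, pupils)
studies = [(0.85, 120), (0.55, 400), (0.70, 230)]
print(round(weighted_mean_effect(studies), 2))  # 0.64
```

Hattie's league-table figures sit one layer further up again, averaging such per-meta-analysis means, which is why a single d can summarise thousands of different classrooms.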

    Kraft (2020) raised a specific technical concern: when effect sizes are computed using pre-post gains rather than comparison group differences, the resulting numbers are systematically inflated. Many studies in Hattie's database used pre-post designs, which means the 0.40 hinge point may itself be set too high relative to what rigorous randomised trials would produce. For classroom teachers, this does not invalidate the general ordering of Hattie's influences, but it does mean that interpreting specific d values as precise measurements is unwarranted. The hinge point is better read as a rough filter than as a precise threshold.

    High-Impact Visible Learning Strategies

    The most effective teaching strategies share a common characteristic: they make learning visible to both teachers and students. John Hattie's research identifies several high-impact practices that consistently produce effect sizes above 0.40, indicating substantial improvements in student achievement. These strategies include feedback, formative evaluation, and classroom discussion, all of which create transparent learning processes where progress becomes tangible and measurable.

    Cognitive scientist John Sweller's work on cognitive load theory demonstrates why strategies like worked examples and scaffolding prove so effective. By reducing extraneous mental processing, these approaches allow students to focus on essential learning content. Similarly, Dylan Wiliam's research on formative assessment shows how regular, low-stakes assessment creates feedback loops that guide both teaching decisions and student understanding in real-time.

    Successful implementation requires teachers to become evaluators of their own impact. This means systematically collecting evidence of student learning through methods such as exit tickets, peer discussions, and learning journals. When teachers can clearly see what works and adjust their practice accordingly, student outcomes improve dramatically. The key lies not in perfecting individual techniques, but in developing a responsive teaching approach that adapts to visible evidence of learning.

    Collective Teacher Efficacy and Its Exceptional Effect Size

    In Hattie's updated rankings, collective teacher efficacy sits at the top of the entire list with an effect size of d=1.57, well above any instructional strategy. The construct originates with Albert Bandura's work on self-efficacy, which he extended from the individual level to the collective. Bandura (1997) defined collective efficacy as a group's shared belief in its combined capacity to organise and execute the actions required to produce a given level of attainment. In schools, this means the degree to which the staff as a whole believe that their collective actions can make a measurable difference to every pupil, including those facing disadvantage.

    Jenni Donohoo's research has been particularly influential in translating Bandura's theory into school improvement practice. Donohoo (2017) identified six enabling conditions that build collective teacher efficacy: advanced teacher influence over decisions, goal consensus, teachers' knowledge about one another's work, cohesive staff relationships, responsiveness of leadership to teacher concerns, and consideration of the task at hand. Where these conditions are weak, even technically skilled individual teachers struggle to lift overall outcomes. The culture itself acts as a ceiling on what any one teacher can achieve.

    Why does a belief about collective impact produce such a large measured effect? The mechanism runs through professional behaviour. When staff believe their collective effort will shift outcomes, they set higher expectations for all pupils, they persist with struggling learners rather than attributing failure to factors outside school, and they share responsibility for results rather than retreating into individual classrooms. Pupils experience this as consistent high expectations across every subject and year group, not just in the classes of a few exceptional teachers (Donohoo, Hattie and Eells, 2018).

    For school leaders, this is an argument for investing in collaborative structures before adding new programmes. A school that buys a new literacy intervention but runs departments in isolation is likely to get less return than one that builds shared planning time, lesson study cycles, and a genuine culture of professional trust. The effect size of d=1.57 does not mean that belief alone raises attainment; it means that when a staff team collectively acts on the belief that they can succeed with every cohort, the resulting changes in practice are large enough to show up clearly in outcome data.

    Setting Clear Learning Intentions and Success Criteria

    Learning intentions and success criteria form the cornerstone of effective teaching practice, providing students with a clear roadmap of what they will learn and how they will know they have succeeded. Research by Shirley Clarke demonstrates that when students understand the purpose of their learning and can recognise quality work, achievement increases significantly. Learning intentions should be written in student-friendly language and focus on the skills, knowledge, or understanding students will develop, rather than the activities they will complete.

    Success criteria break down the learning intention into specific, observable behaviours or outcomes that students can use to self-assess their progress. These criteria should be co-constructed with students where possible, as Dylan Wiliam's research shows this increases student ownership and engagement. Effective success criteria are specific, measurable, and directly linked to the learning intention, helping students understand what good work looks like and how to achieve it themselves.

    In practice, display learning intentions and success criteria prominently and refer to them throughout the lesson. Begin by sharing and explaining them, use them during learning activities to guide student self-reflection, and return to them at lesson end for evaluation. This transparent approach transforms learning from a mystery into a clear, achievable process that helps students to take responsibility for their own progress.

    Find the Right Evidence-Based Strategy for Your School

    Answer five questions about your school context and receive personalised EEF strategy recommendations ranked by impact, cost, and evidence strength.

    EEF Strategy Recommendation Engine

    Match your school context to the highest-impact, evidence-based teaching strategies from the EEF Toolkit.

    This tool matches your school context to the most evidence-based teaching strategies from the EEF Teaching and Learning Toolkit. Answer five questions about your priorities, and receive personalised recommendations ranked by expected impact.

    The Education Endowment Foundation (EEF) Teaching and Learning Toolkit synthesises international evidence on 30 teaching approaches, reporting the average months of additional progress each one delivers. Using evidence to guide spending decisions helps schools, particularly those with tight budgets, invest where the research shows the greatest returns.

    (EEF, 2023; Hattie, 2023; Higgins et al., 2014)

    1. Answer five questions about your improvement priorities.
    2. Review the top three strategies ranked by fit for your context.
    3. Download or copy your personalised recommendations to share with colleagues.

    Compare the Cost-Effectiveness of Teaching Strategies

    Enter your budget, select strategies, and instantly see which approaches deliver the most progress per pound spent.

    EEF Cost-Effectiveness Calculator

    Compare the cost-effectiveness of EEF Toolkit strategies against your school budget.

    This calculator compares the cost-effectiveness of EEF Teaching and Learning Toolkit strategies for your specific budget. Enter your funding and number of pupils, select up to five strategies, and see which delivers the most progress per pound spent.

    Schools face pressure to demonstrate value for money, particularly with Pupil Premium and catch-up funding. The EEF Toolkit provides average months of progress for each strategy, but comparing cost-effectiveness across multiple options requires calculation. This tool does that comparison instantly.

    (EEF, 2023; Sharples et al., 2018)

    1. Enter your annual budget and number of eligible pupils.
    2. Select up to 5 strategies you are considering.
    3. Review the comparison chart and download the budget brief for your governors.


    Cost estimates are indicative averages. Actual costs will vary by school context, region, and implementation approach.

    Currency shown in GBP (£). The tool works with any currency; simply enter your budget in your local currency.
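The comparison the calculator performs can be sketched in a few lines of Python. The strategy figures below are invented placeholders, not actual EEF Toolkit values; note that "progress per £1,000 per pupil" reduces to months divided by cost per pupil, because the pupil count cancels out of the ratio:

```python
def progress_per_1000(months, cost_per_pupil):
    """Months of additional progress per pupil for each £1,000 spent per pupil."""
    return months / cost_per_pupil * 1000


# Hypothetical figures for illustration only: (months progress, £ per pupil)
strategies = {
    "Feedback": (6, 80),
    "One-to-one tuition": (5, 700),
    "Small-group tuition": (4, 300),
}

ranked = sorted(
    strategies,
    key=lambda name: progress_per_1000(*strategies[name]),
    reverse=True,
)
for name in ranked:
    months, cost = strategies[name]
    print(f"{name}: {progress_per_1000(months, cost):.1f} months per £1,000 per pupil")
```

Under these placeholder numbers, low-cost feedback training dominates per pound spent even though one-to-one tuition delivers more months in absolute terms, which is exactly the trade-off the tool is designed to surface.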

    Design a Custom Feedback Protocol

    Choose your feedback type, subject, and time constraints to generate a tailored protocol with marking codes, prompt stems, and workload strategies.

    Feedback Protocol Designer

    Design a custom feedback protocol based on Hattie & Timperley's feedback model and EEF evidence.

    This tool designs a custom feedback protocol for your classroom, drawing on Hattie and Timperley's (2007) feedback model and EEF evidence on effective feedback (+6 months of additional progress).

    Feedback is one of the most powerful influences on learning, but its effects are highly variable (Hattie & Timperley, 2007). The EEF's guidance on teacher feedback (2021) identifies that the quality and type of feedback matters more than the quantity. Effective feedback operates at four levels: task, process, self-regulation, and self. Crucially, feedback must be actionable and timely; marking everything in detail is neither necessary nor effective (Elliott et al., 2016).

    Hattie, J. & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112.
    EEF (2021). Teacher Feedback to Improve Pupil Learning. Education Endowment Foundation.
    Elliott, V. et al. (2016). A Marked Improvement? A Review of the Evidence on Written Marking. Education Endowment Foundation.

    1. Select your feedback context (type, subject, key stage).
    2. Indicate your time constraints and class size.
    3. Receive a tailored feedback protocol with marking codes, frequency, and example stems.
    4. Download as a ready-to-use policy document.


          Mindframes: The Ten Teacher Mindsets Behind Visible Learning

          In the decade following the publication of Visible Learning, Hattie shifted his focus from cataloguing what works to examining why effective teachers consistently outperform their peers regardless of the specific strategies they use. The answer, he and Zierer (2018) argued, lay not in technique selection but in a set of underlying beliefs — what they called mindframes — that govern how teachers interpret their role and read evidence of student learning.

          Hattie and Zierer (2018) identified ten mindframes central to high-impact teaching. The most fundamental is that teachers see themselves primarily as evaluators of their own impact: they continuously collect evidence of what students have learned and use it to adjust their practice rather than attributing outcomes to student effort or ability. A second mindframe holds that teaching and learning are forms of error-making and error-detection; classrooms where mistakes are treated as diagnostic information rather than failures produce greater cognitive risk-taking and deeper learning. A third mindframe frames the relationship between teacher and student as a dialogue about learning rather than a transmission of content.

          Additional mindframes include seeing professional collaboration as a core responsibility, not an optional enrichment; believing that all pupils can improve; and using learning intentions and success criteria as planning tools rather than administrative requirements. Hattie and Zierer distinguished mindframes sharply from instructional strategies: a teacher can deploy exit tickets, peer assessment, or worked examples as surface procedures without the underlying mindframe that treats the evidence they generate as personally meaningful feedback on their own teaching.

          The mindframes framework has practical implications for continuing professional development. Training that focuses on new techniques without addressing underlying beliefs about ability, error, and teacher responsibility is less likely to shift classroom practice durably. Research on professional learning communities (Hargreaves and Fullan, 2012) supports this: sustainable improvement in pupil outcomes is associated with schools where collective inquiry into impact data is a cultural norm, not an occasional event. For individual teachers, the most accessible entry point is treating lesson observations, exit tickets, and assessment results as feedback on teaching, not merely feedback about pupils.

          Surface, Deep, and Transfer Learning: A Three-Phase Model

          Hattie and Donoghue (2016) proposed a learning model that resolved a persistent tension in the Visible Learning data: why do some strategies that produce large effect sizes in research trials produce poor results when implemented as whole-class instructional approaches? Their answer was that most teaching strategies are phase-specific — they produce their strongest effects at a particular stage of learning, and deploying them at the wrong phase reduces or eliminates their benefit.

          The model describes three phases. Surface learning involves the initial acquisition and consolidation of facts, skills, and concepts. Pupils at this stage need explicit instruction, direct explanation of what is to be learned, deliberate practice, and feedback oriented to the correctness of specific responses. Strategies with high effect sizes during surface learning include worked examples (d=0.57), direct instruction (d=0.60), and spaced practice (d=0.65). Pushing pupils into collaborative inquiry or self-regulated investigation before they have sufficient surface knowledge to reason with is counterproductive; they lack the domain-specific content on which deeper thinking depends.

          The deep learning phase involves connecting facts and skills into integrated conceptual structures, identifying relationships between ideas, and applying knowledge to unfamiliar problems within the same domain. Strategies most effective at this phase include reciprocal teaching, concept mapping, and elaborative interrogation — techniques that require pupils to construct relationships rather than retrieve isolated items. Hattie and Donoghue noted that the classroom talk and collaborative inquiry strategies often promoted in professional development have their strongest evidence base at the deep phase, which explains why they work well in research with near-expert learners but disappoint when applied to novices encountering new content.

          The third phase, transfer learning, involves the application of conceptual understanding to genuinely novel problems across domains. Transfer is the hardest to achieve and the most valuable. Strategies that support transfer include metacognitive monitoring, problem-solving in varied contexts, and deliberate attention to the conditions under which knowledge applies. For lesson planning, the three-phase model suggests that the same unit of work should contain distinct instructional sequences matched to the phase of learning at each point, rather than applying the same pedagogical approach throughout. Wiliam (2011) reached a complementary conclusion: the key question for a teacher is not "which strategy is best?" but "what does this pupil need at this moment?"

          The Four Levels of Feedback: Hattie and Timperley's Model

          Feedback consistently produces among the largest effect sizes in Hattie's database, yet research also shows that feedback frequently has no effect or even negative effects on learning. Hattie and Timperley (2007) resolved this paradox by distinguishing four levels at which feedback can be directed, arguing that the effectiveness of any feedback act depends critically on which level it addresses and whether that level is appropriate to the learner's current state.

          The first level is task feedback (FT): information about whether a specific answer, product, or performance is correct or incorrect. This is the most common form of feedback in classrooms and the least powerful for generating learning, though it is useful when pupils have fundamental misconceptions that must be corrected before further work can proceed. The second level is process feedback (FP): information about the strategies and procedures used to complete a task. Process feedback helps pupils understand that their approach, not just their answer, is subject to improvement, and it supports the development of transferable skills rather than task-specific performance.

The third level is self-regulation feedback (FR): information that supports pupils in monitoring their own learning, checking their own work, and seeking help effectively. This level has the strongest evidence base for long-term learning gains because it reduces dependence on teacher feedback and builds the metacognitive habits that sustain independent learning. The fourth level is self feedback (FS): comment directed at the learner as a person rather than at their task, process, or regulation. Hattie and Timperley found that this level is the least effective and potentially harmful, particularly when praise for ability replaces information about learning. Dweck's (1999) research on fixed versus growth mindsets converges with this finding: praising pupils as "clever" reduces their willingness to take on challenging tasks and encourages them to attribute difficulty to a lack of ability rather than to insufficient effort.

          The model has a direct implication for written feedback policies. A comment such as "Good work — you clearly understand this" operates at the self level and provides no information the pupil can act on. Reframing it as "You identified the correct pattern. Check whether it holds when the numbers are negative" operates at the process and task levels simultaneously. For pupils who consistently self-correct successfully, moving feedback to the self-regulation level — "You found your own error: what strategy did you use?" — builds the monitoring habits most predictive of long-term achievement.

          Student Expectations and Self-Reported Grades: The Most Powerful Influence

In Hattie's (2009) original meta-analysis, self-reported grades emerged as the single influence with the highest effect size in the entire dataset, at d=1.44. The finding attracted considerable attention and some scepticism, in part because its meaning was not immediately obvious. "Self-reported grades" does not mean allowing pupils to mark their own work uncritically. The construct, drawn from research by Kuncel, Crede and Thomas (2005) and earlier work by Mabe and West (1982), refers to the accuracy with which pupils predict their own performance on an upcoming test or task.

          Pupils whose self-predictions closely match their actual outcomes have, by implication, an accurate internal model of their own current knowledge and skill. This accuracy is itself the product of previous feedback, metacognitive experience, and transparent assessment practices. When pupils receive regular, specific feedback that allows them to calibrate their self-assessments, they develop an internal standard against which to measure new learning. Hattie interpreted the effect size not as evidence that self-assessment is a magic technique but as a demonstration that pupils who know what they know, and know what they do not yet know, are in the optimal position to direct their own learning.

          The practical implication connects closely to learning intentions and success criteria. When teachers share clear criteria in advance and ask pupils to assess their own work against those criteria before receiving teacher feedback, they are cultivating the calibration mechanism that underlies the self-reported grades effect. Andrade and Valtcheva (2009) found that structured self-assessment using rubrics produced significant gains in writing quality compared to unstructured self-evaluation, because the rubric provided an external standard against which to calibrate internal judgements.

          Black and Wiliam (1998) reached a parallel conclusion in their review of formative assessment research: the gains from self- and peer-assessment are most reliable when pupils have been taught to use specific criteria, not simply asked to express opinions about their work. For teachers, the practical question is whether assessment tasks are transparent enough, and feedback specific enough, for pupils to build an accurate model of their own understanding rather than relying solely on teacher evaluation to tell them where they stand.


          Further Reading: Key Research Papers

          These studies provide the research foundation for visible learning and its practical applications in schools.

Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement

          Hattie, J. (2009)

          Hattie's landmark synthesis of 800+ meta-analyses ranked 138 teaching influences by their effect size on student achievement. Feedback (d=0.73), teacher clarity (d=0.75), and formative evaluation (d=0.90) emerged as among the most powerful interventions. The work provides teachers with an evidence hierarchy for deciding where to invest classroom time and energy.

Visible Learning for Teachers: Maximizing Impact on Learning

          Hattie, J. (2012)

          This companion volume translates the meta-analytic findings into practical classroom strategies. Hattie introduces the concept of "know thy impact," arguing that teachers who regularly evaluate their effect on student learning become more effective practitioners. The book provides checklists, lesson planning frameworks, and self-evaluation tools grounded in the original research synthesis.

The Power of Feedback

          Hattie, J. and Timperley, H. (2007)

          This paper presents the feedback model central to visible learning, identifying four levels: task, process, self-regulation, and self. The research demonstrates that feedback about the task and learning process produces the strongest effects, whilst praise directed at the self has minimal impact on achievement. Teachers can use this framework to design feedback that genuinely moves learning forward.

Teachers Make a Difference: What Is the Research Evidence?

          Hattie, J. (2003)

          This earlier paper establishes that teacher quality accounts for approximately 30% of variance in student achievement, making it the most significant school-level factor. The research identifies expert teachers as those who challenge students, set high expectations, and maintain awareness of their impact. These findings laid the groundwork for the visible learning framework that followed.

          Embedding Formative Assessment: Practical Techniques for K-12 Classrooms

          Wiliam, D. (2011)

          Wiliam's work complements Hattie's findings by providing a practical framework for the formative assessment strategies that visible learning identifies as highly effective. The book introduces five key strategies including clarifying learning intentions, engineering classroom discussions, and activating students as instructional resources for each other.

          Written by the Structural Learning Research Team

          Reviewed by Paul Main, Founder & Educational Consultant at Structural Learning

          Criticisms and Methodological Limitations of Visible Learning

          Visible Learning has attracted substantial methodological criticism from educational researchers, and teachers who use Hattie's rankings should understand the main lines of concern. The most fundamental objection is what Slavin (2018) called the "apples and oranges" problem. Hattie's database combines meta-analyses from early childhood education, secondary schooling, higher education, clinical psychology, and sports coaching. Effect sizes from these very different contexts are then averaged as if they were measuring the same thing. A meta-analysis of feedback in medical training and a meta-analysis of feedback in primary literacy lessons both contribute to the same d value, even though the populations, tasks, and assessment instruments are entirely different.

          Simpson (2017) raised concerns about the mathematical aggregation process itself. When you average effect sizes across meta-analyses that used different study selection criteria, different statistical methods, and different definitions of the same construct, the resulting number carries no clear meaning. An effect size is a standardised comparison between two groups: if the groups, the interventions, and the outcomes differ across studies, then the standardisation does not hold. Simpson argued that Hattie's league table of influences creates an illusion of precision; the d values look like measurements, but they reflect the accumulated artefacts of dozens of different research traditions rather than a stable property of any particular teaching strategy.
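Simpson's standardisation point can be illustrated numerically. The figures below are invented for illustration only: the same raw gain produces very different d values depending on the spread of the outcome measure, and can land on either side of the 0.40 hinge point.

```python
# Invented numbers, for illustration only: an intervention raises scores by
# the same 5 raw points in both settings, but the outcome measures differ.
raw_gain = 5.0

# A narrow, researcher-designed test has a small standard deviation;
# a broad national assessment has a much larger one.
d_narrow = raw_gain / 4.0   # SD = 4  -> d = 1.25
d_broad  = raw_gain / 14.0  # SD = 14 -> d is roughly 0.36

# Identical intervention, identical raw gain, yet one d sits well above
# Hattie's 0.40 hinge point and the other falls below it.
print(f"researcher-designed test: d = {d_narrow:.2f}")
print(f"national assessment:      d = {d_broad:.2f}")
```

Because the denominator of an effect size is the spread of the outcome measure, averaging d values across studies that used different instruments mixes properties of the tests with properties of the teaching strategies.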

          Bergeron (2017) examined the ecological validity problem: whether findings from controlled studies can be generalised to ordinary classrooms. Many studies in Hattie's database were conducted under conditions that differ from daily teaching: short intervention windows, volunteer participants, researcher involvement in delivery, and outcomes measured by researcher-designed tests rather than national assessments. A strategy that produces d=0.60 in a six-week university trial with graduate student facilitators may produce a much smaller effect when delivered by a single teacher with 30 pupils across a full academic year. The context in which research is conducted is part of what generates the effect size, not just the strategy itself.

None of these criticisms mean that Visible Learning is without value. The broad ordering of influences, with factors related to teacher cognition, feedback quality, and pupil self-regulation clustered at the top, is consistent with findings from other research traditions, including the Education Endowment Foundation's toolkit and the work of Barak Rosenshine. What the criticisms do mean is that treating a specific d value as a precise prediction of what will happen in your classroom is not warranted. Hattie (2015) acknowledged that effect sizes should be treated as starting points for professional inquiry rather than prescriptions, and the research has most value when it is used to generate questions about practice rather than to rank strategies by number.

          Frequently Asked Questions

          What is Visible Learning and how does it work?

          Visible Learning is an evidence-based teaching approach developed by John Hattie that makes learning visible to both teachers and students. Students must clearly understand what they are learning, how to learn it, and how to measure their progress. The approach focuses on evaluating the impact of teaching on student achievement rather than simply delivering content, with teachers acting as activators of learning who monitor progress and adapt instruction based on real-time evidence.

          How do I implement Visible Learning strategies in my classroom?

          Start each lesson by sharing clear learning intentions and success criteria so students understand what they are learning and how they will know when they have succeeded. During lessons, continuously gather evidence of student understanding through questioning techniques and mini-assessments, then adjust your instruction accordingly. Help students become partners in the learning process by teaching them to self-assess and provide meaningful peer feedback to create a classroom culture where learning is everyone's responsibility.

          What are the main benefits of using Visible Learning in schools?

          Visible Learning transforms students from passive recipients into active partners who set goals, track progress, and seek feedback independently. Teachers benefit from evidence-based guidance on which strategies actually work, with effect sizes showing that feedback achieves 0.7 impact and formative evaluation reaches 0.9. The approach helps teachers see learning through students' eyes and make teaching decisions based on real-time evidence of what's working rather than popular but ineffective interventions.

          What does the 0.4 effect size threshold mean in Visible Learning?

          The 0.4 effect size represents Hattie's "Zone of Desired Effects" where teaching strategies begin to have meaningful impact on student achievement. Any teaching approach with an effect size of 0.4 or greater is considered beneficial, whilst strategies below this threshold may not significantly improve learning outcomes. This threshold helps teachers identify which interventions are worth their time and effort, as many popular teaching methods actually fall below this effective zone.

          How do I know if Visible Learning is working in my classroom?

          Look for students who can clearly articulate what they are learning and why it matters, and who actively seek feedback and set their own learning goals. You should see evidence of students self-assessing their work and providing meaningful peer feedback without constant teacher prompting. Additionally, you will notice your teaching decisions becoming more responsive to student needs as you continuously gather and act upon assessment evidence during lessons.

          What are common mistakes teachers make when implementing Visible Learning?

          Many teachers focus only on sharing learning objectives without teaching students how to use success criteria to self-assess their progress. Another common mistake is gathering assessment evidence but failing to adjust instruction based on what the data reveals about student understanding. Some teachers also assume that simply posting learning intentions on the board constitutes Visible Learning, when the approach actually requires active student participation and ongoing feedback loops throughout the lesson.

          Implementing Effective Feedback Practices

          Effective feedback represents one of the most powerful tools in a teacher's arsenal, with Hattie's research consistently placing it among the top influences on student achievement. However, the quality and timing of feedback matter significantly more than its frequency. Effective feedback focuses on the task, the process, and self-regulation rather than praising the person, helping students understand what they got wrong and how to improve their learning strategies.

          The most impactful feedback addresses three fundamental questions: Where am I going? How am I going? Where to next? This framework, developed through extensive educational research, ensures feedback is both specific and actionable. Teachers should provide feedback that is timely, specific to learning intentions, and connects directly to success criteria. Rather than simply marking work as correct or incorrect, effective feedback identifies patterns in student thinking and guides them towards deeper understanding of the subject matter.

          In classroom practice, this means moving beyond generic praise such as "good work" towards targeted comments like "your use of evidence in paragraph two strengthens your argument, now consider how you might apply this same approach to your conclusion." Peer feedback and self-assessment opportunities also enhance learning outcomes, as students develop metacognitive awareness of their own progress and learning processes.

          Assessment Strategies for Visible Learning

          Effective assessment in Visible Learning classrooms moves beyond traditional testing to become a continuous dialogue between teachers and students about learning progress. Formative assessment strategies, such as exit tickets, learning journals, and peer feedback sessions, provide real-time data that enables teachers to adjust instruction immediately rather than waiting for summative results. This approach aligns with Dylan Wiliam's research on assessment for learning, which demonstrates that frequent, low-stakes feedback can significantly accelerate student achievement.

The key lies in making learning intentions and success criteria transparent from the outset. When students understand exactly what they're working towards and can articulate their own progress, they become active partners in the assessment process. Regular self-assessment activities, where pupils reflect on their understanding and identify next steps, create the metacognitive skills essential for independent learning. This practice supports Hattie's findings that self-reported grades have one of the highest effect sizes on student achievement.

          Practically, teachers can implement simple yet powerful monitoring tools such as traffic light systems for student confidence levels, one-minute summaries at lesson transitions, or structured peer assessment using clear rubrics. The crucial element is ensuring assessment data directly informs subsequent teaching decisions, creating a responsive classroom environment where both successes and misconceptions are addressed promptly and purposefully.


Continuous evaluation of teaching strategies is essential for maximising pupil achievement: Visible Learning encourages teachers to become evaluators of their own impact, using evidence to determine which strategies are most effective for their pupils (Hattie, 2009). Tools like the EEF Strategy Recommendation Engine support schools in selecting and assessing the cost-effectiveness of evidence-based approaches.

        What does the research say? Hattie's (2009) Visible Learning synthesis analysed 800+ meta-analyses covering 80+ million pupils. The average effect size across all interventions is d = 0.40 (the "hinge point"). Top influences include collective teacher efficacy (d = 1.57), self-reported grades (d = 1.44), teacher credibility (d = 0.90) and feedback (d = 0.70). The key insight: teachers must see learning through pupils' eyes and make the learning process visible.

        Based on a meta-analysis of millions of students and thousands of studies, Hattie introduced the concept of effect size, a way to identify which teaching strategies have the greatest impact on learning. His findings offer a clear message: great teaching is not just about planning activities, it's about seeing learning through the eyes of students and helping them become their own teachers.

        The Visible Learning Framework

        The Visible Learning model places strong emphasis on:


        • Setting clear learning intentions and success criteria
        • Using feedback and assessment to guide progress
        • Encouraging learners to take ownership of their learning journey
Teachers are not just facilitators; they are activators of learning who monitor progress, adapt instruction, and make teaching decisions based on real-time evidence of what is working.

          Key Principles of Visible Learning:

• Clarity and Goal-Setting: Students must understand what they're learning and why it matters.
• Feedback-Informed Practice: Teachers continuously adjust instruction based on assessment evidence.
• Student Ownership: Learners are active participants who reflect on and take responsibility for their progress.

Visible Learning gives teachers an enhanced role as evaluators of their own teaching. According to John Hattie, visible learning and intelligent teaching take place when teachers begin to see learning through the eyes of students and guide those students to become their own teachers.

To measure the effect of visible learning, Hattie applied statistical analysis to data from millions of students using 'effect size', comparing the experimental impact of many influences on student achievement, e.g. learning strategies, feedback, holidays and class size.

          Visible Learning Effect Sizes

The research foundation reveals striking patterns across different educational contexts and subjects. Hattie's analysis demonstrated that feedback, for instance, achieves an effect size of 0.7, making it nearly twice as powerful as average teaching practices. Similarly, formative evaluation scores 0.9, whilst collective teacher efficacy (when teachers believe they can positively impact all students) reaches an impressive 1.57. These findings provide teachers with clear priorities for professional development and classroom implementation.

          Teaching Impact Hierarchy

          In practice, Visible Learning strategies transform everyday classroom interactions. Teachers might begin lessons by sharing learning intentions and success criteria, ensuring students understand what they're learning and how they'll know when they've succeeded. During lessons, teachers actively seek evidence of student understanding through questioning techniques and mini-assessments, adjusting their instruction accordingly. Students become partners in this process, learning to self-assess and provide meaningful peer feedback, creating a classroom culture where learning is everyone's responsibility.

          How do teachers implement the visible learning model effectively in their classrooms?

          Teachers implement visible learning by making learning intentions explicit at the start of each lesson and sharing clear success criteria with students. They continuously gather evidence of student understanding through formative assessment and adjust their teaching based on this feedback. The model requires teachers to help students understand where they are in their learning journey and what steps they need to take next.

John Hattie drew on over 68,000 education research projects involving 25 million students to investigate what makes student learning most successful. According to the meta-analyses chapter of Visible Learning, the greater the effect size, the more beneficial the approach: anything at or above 0.4 falls within the "Zone of Desired Effects." Hattie contends that schools and teachers should focus their energy on the approaches with the strongest evidence of enhancing learning. According to John Hattie, visible learners are the students who can:

          • Set learning goals;
          • Express what they are learning;
          • Describe the next steps in their learning;
          • Know what to do when they are stuck;
          • See mistakes as opportunities for additional learning;
          • Take feedback.

          This aligns with Rosenshine's principles of effective instruction, which emphasise the importance of clear guidance and structured support. Students who develop these capabilities show greater self-regulation and become more independent learners.

          Effective questioning techniques play a crucial role in making thinking visible. Teachers can use questioning strategies to probe student understanding and guide them through their learning process. This approach is particularly powerful when combined with thinking routines that make student thought processes explicit.

          Visual tools can also support visible learning by helping students organise and represent their understanding. Graphic organisers and concept maps allow students to see connections between ideas and track their developing knowledge structures.

          The visible learning approach recognises that different students may need different levels of support depending on their needs. Teachers working with students with special educational needs can adapt these strategies to ensure all learners can participate effectively in the learning process.

          Understanding how students process information is essential for implementing visible learning effectively. Teachers need to be aware of working memory limitations and design instruction that supports cognitive processing while making learning visible.

Student motivation plays a critical role in visible learning success. When students can see their progress and understand their learning goals, they become more invested in the process and take greater ownership of their education.


          Visible Learning Impact Auditor


          This tool lets you audit your teaching strategies against Hattie's Visible Learning effect sizes. Select the strategies you use regularly and see their average impact, individual rankings, and whether you are investing time in high-impact or low-impact approaches.

          With over 250 influences on student achievement measured, Hattie's meta-analyses provide the largest evidence base for what works in education. Strategies with an effect size above 0.40 represent roughly a year's worth of progress for a year's input. Below 0.20 and the strategy may not be worth the time invested.

          (Hattie, 2009; Hattie, 2023)

          1. Select the teaching strategies you regularly use from the list.
          2. Review the effect size for each strategy and your overall average.
          3. Identify strategies to prioritise and those to reconsider.
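The audit logic described above can be sketched in a few lines. This is an illustrative sketch, not the tool's actual implementation: the d values are figures quoted elsewhere in this article, and the strategy list and function names are our own.

```python
# A minimal sketch of the audit logic, not the tool's actual implementation.
# The d values are figures quoted in this article; the list is illustrative.
EFFECT_SIZES = {
    "feedback": 0.70,
    "formative evaluation": 0.90,
    "direct instruction": 0.60,
    "worked examples": 0.57,
    "spaced practice": 0.65,
    "ability grouping": 0.12,
}

HINGE_POINT = 0.40  # roughly one year's progress for one year's input

def audit(selected):
    """Return each selected strategy's d, the overall average, and any
    strategies falling below the 0.40 hinge point."""
    chosen = {s: EFFECT_SIZES[s] for s in selected}
    average = sum(chosen.values()) / len(chosen)
    low_impact = [s for s, d in chosen.items() if d < HINGE_POINT]
    return chosen, average, low_impact

chosen, average, low = audit(["feedback", "spaced practice", "ability grouping"])
print(f"average d = {average:.2f}; below hinge point: {low}")
```

Averaging effect sizes this way is only a rough heuristic, for the aggregation reasons discussed in the criticisms section, but it captures the prioritisation step the tool walks teachers through.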


          About effect sizes: Hattie (2009, 2023) synthesised 1,800+ meta-analyses covering 300 million students. An effect size of 0.40 represents roughly one year's progress. Strategies above this "hinge point" accelerate learning beyond typical growth. Your average reflects the combined impact of your selected strategies.

          The Research Behind Visible Learning

John Hattie's Visible Learning represents one of the most comprehensive syntheses of educational research ever undertaken, drawing from over 800 meta-analyses encompassing approximately 50,000 studies and 80 million students. This unprecedented scale of analysis allows educators to move beyond individual studies or personal anecdotes to understand which teaching practices genuinely accelerate student achievement. Hattie's work transforms scattered research findings into practical guidance that can directly inform classroom practice.

          The foundation of Visible Learning rests on effect sizes, a statistical measure that quantifies the impact of different educational interventions. Hattie established that an effect size of 0.40 represents the average yearly growth students typically achieve, setting this as the benchmark for determining whether teaching strategies are genuinely effective. Interventions exceeding this threshold demonstrate above-average impact on learning outcomes, whilst those below suggest limited educational value despite potentially consuming significant time and resources.

          For classroom practitioners, this research foundation provides evidence-based guidance for prioritising professional development and instructional strategies. Rather than adopting every new educational trend, teachers can focus their efforts on high-impact practices such as feedback, formative evaluation, and metacognitive strategies, all of which consistently demonstrate substantial effect sizes across diverse educational contexts.

          Understanding Effect Sizes in Education

          Effect sizes provide teachers with a powerful lens for evaluating the true impact of different educational practices on student learning. Unlike traditional research that simply tells us whether something works, effect sizes reveal how much it works, allowing educators to distinguish between marginal gains and transformative strategies. John Hattie's synthesis of over 800 meta-analyses established that an effect size of 0.40 represents the average yearly progress students make, providing a crucial benchmark for assessing teaching interventions.

Understanding this metric transforms how teachers approach professional development and classroom decision-making. Practices with effect sizes above 0.40 accelerate learning beyond typical progress, whilst those below it deliver less than a typical year's growth; only negative effect sizes suggest a practice actively hinders achievement. For instance, Hattie's research shows that feedback achieves an effect size of around 0.70, indicating substantial impact, whereas ability grouping registers just 0.12, suggesting minimal benefit despite its widespread use in schools.
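The hinge-point logic can be sketched in a few lines of Python. The effect sizes below are the illustrative figures discussed above; a fuller list would come from Hattie's published rankings.

```python
HINGE = 0.40  # average yearly growth in Hattie's synthesis

# Illustrative effect sizes drawn from the discussion above
influences = {
    "feedback": 0.70,
    "ability grouping": 0.12,
    "formative evaluation": 0.90,
}

# Strategies clearing the hinge point accelerate learning
# beyond a typical year's progress
above_hinge = {name: d for name, d in influences.items() if d >= HINGE}
print(above_hinge)  # feedback and formative evaluation remain
```

The filter is deliberately crude: it treats each d value as a stable measurement, which, as the criticisms discussed later in this article show, is itself a simplification.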

          In practical terms, teachers can use effect sizes to prioritise their energy and resources. Rather than adopting every new initiative, focus on evidence-based strategies with demonstrated high impact. This might mean investing time in developing quality feedback systems, implementing formative assessment practices, or building strong teacher-student relationships, all of which consistently show effect sizes well above the 0.40 threshold for meaningful educational impact.

          Effect Sizes and the d=0.40 Hinge Point

          When John Hattie published Visible Learning in 2009, he synthesised 800 meta-analyses covering more than 50,000 individual studies and roughly 80 million students. To make sense of that volume of research, he used a statistical tool called Cohen's d: a standardised measure of the difference between a treatment group and a control group, expressed in units of standard deviation. A d of 1.0 means the average student in the treatment group outperformed 84 per cent of students in the control group. A d of 0.20 is a small effect; 0.50 is moderate; 0.80 is large (Cohen, 1988).
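Both the Cohen's d formula and the 84 per cent claim can be checked with Python's standard library. The test scores here are invented for illustration; only the arithmetic is Cohen's.

```python
import statistics
from statistics import NormalDist

def cohens_d(treatment, control):
    """Standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores for two small classes
print(round(cohens_d([2, 4, 6], [1, 3, 5]), 2))  # 0.5

# A d of 1.0 places the average treated student at the
# 84th percentile of the control distribution
print(round(NormalDist().cdf(1.0), 2))  # 0.84
```

The second calculation assumes normally distributed scores, which is the standard assumption behind the 84 per cent interpretation.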

          Hattie's key contribution was the hinge point of d=0.40, which he calculated as the average effect size across all the influences he examined. He proposed that teachers should use 0.40 as a baseline: any strategy that produces an effect size below it is delivering less than a typical year's teaching, regardless of how popular or well-resourced that strategy might be. Approaches above the line represent meaningful acceleration of learning. This reframing matters because it shifts the question from "does this work?" to "does this work better than simply being taught?" (Hattie, 2009).

          The mechanics behind a meta-analysis are worth understanding. Researchers calculate an effect size for each individual study, then average those effect sizes across the meta-analysis, weighting by sample size. Hattie then averaged effect sizes across multiple meta-analyses to produce his ranked list. Each layer of aggregation increases the distance between the original classroom data and the final number that appears in a league table. What you see as d=0.60 for a given strategy may represent thousands of different teacher-student interactions, in different countries, measured with different assessments, across different subject areas.
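The aggregation step can be sketched as a sample-size-weighted average. Real meta-analyses typically weight by inverse variance rather than raw sample size, and Hattie then averages again across meta-analyses; the study figures below are invented.

```python
def weighted_mean_effect(studies):
    """Average per-study effect sizes, weighted by sample size.

    studies: list of (effect_size, n_students) pairs.
    """
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

# Three hypothetical studies of the same strategy
meta_analysis = [(0.70, 120), (0.45, 300), (0.90, 60)]
print(round(weighted_mean_effect(meta_analysis), 3))  # 0.569
```

Note how the single summary number (0.569) already hides substantial variation across the three studies (0.45 to 0.90); each further layer of averaging hides more.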

          Kraft (2020) raised a specific technical concern: when effect sizes are computed using pre-post gains rather than comparison group differences, the resulting numbers are systematically inflated. Many studies in Hattie's database used pre-post designs, which means the 0.40 hinge point may itself be set too high relative to what rigorous randomised trials would produce. For classroom teachers, this does not invalidate the general ordering of Hattie's influences, but it does mean that interpreting specific d values as precise measurements is unwarranted. The hinge point is better read as a rough filter than as a precise threshold.
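Kraft's point can be demonstrated with hypothetical data. Both groups improve over the year through ordinary teaching, so a pre-post design counts that ordinary growth as part of the "effect", whereas a comparison-group design isolates the intervention.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = statistics.stdev(group_a), statistics.stdev(group_b)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled

pre          = [40, 45, 50, 55, 60]  # intervention group, start of year
post         = [52, 57, 62, 67, 72]  # intervention group, end of year
control_post = [48, 53, 58, 63, 68]  # comparison group, end of year

d_pre_post   = cohens_d(post, pre)           # ~1.52: ordinary growth included
d_vs_control = cohens_d(post, control_post)  # ~0.51: intervention isolated
```

With these made-up scores, the pre-post design reports an effect roughly three times larger than the comparison-group design for the same intervention, which is exactly the inflation Kraft describes.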

          High-Impact Visible Learning Strategies

          The most effective teaching strategies share a common characteristic: they make learning visible to both teachers and students. John Hattie's research identifies several high-impact practices that consistently produce effect sizes above 0.40, indicating substantial improvements in student achievement. These strategies include feedback, formative evaluation, and classroom discussion, all of which create transparent learning processes where progress becomes tangible and measurable.

          Cognitive scientist John Sweller's work on cognitive load theory demonstrates why strategies like worked examples and scaffolding prove so effective. By reducing extraneous mental processing, these approaches allow students to focus on essential learning content. Similarly, Dylan Wiliam's research on formative assessment shows how regular, low-stakes assessment creates feedback loops that guide both teaching decisions and student understanding in real-time.

          Successful implementation requires teachers to become evaluators of their own impact. This means systematically collecting evidence of student learning through methods such as exit tickets, peer discussions, and learning journals. When teachers can clearly see what works and adjust their practice accordingly, student outcomes improve dramatically. The key lies not in perfecting individual techniques, but in developing a responsive teaching approach that adapts to visible evidence of learning.

          Collective Teacher Efficacy and Its Exceptional Effect Size

          In Hattie's updated rankings, collective teacher efficacy sits at the top of the entire list with an effect size of d=1.57, well above any instructional strategy. The construct originates with Albert Bandura's work on self-efficacy, which he extended from the individual level to the collective. Bandura (1997) defined collective efficacy as a group's shared belief in its combined capacity to organise and execute the actions required to produce a given level of attainment. In schools, this means the degree to which the staff as a whole believe that their collective actions can make a measurable difference to every pupil, including those facing disadvantage.

          Jenni Donohoo's research has been particularly influential in translating Bandura's theory into school improvement practice. Donohoo (2017) identified six enabling conditions that build collective teacher efficacy: advanced teacher influence over decisions, goal consensus, teachers' knowledge about one another's work, cohesive staff relationships, responsiveness of leadership to teacher concerns, and consideration of the task at hand. Where these conditions are weak, even technically skilled individual teachers struggle to lift overall outcomes. The culture itself acts as a ceiling on what any one teacher can achieve.

          Why does a belief about collective impact produce such a large measured effect? The mechanism runs through professional behaviour. When staff believe their collective effort will shift outcomes, they set higher expectations for all pupils, they persist with struggling learners rather than attributing failure to factors outside school, and they share responsibility for results rather than retreating into individual classrooms. Pupils experience this as consistent high expectations across every subject and year group, not just in the classes of a few exceptional teachers (Donohoo, Hattie and Eells, 2018).

          For school leaders, this is an argument for investing in collaborative structures before adding new programmes. A school that buys a new literacy intervention but runs departments in isolation is likely to get less return than one that builds shared planning time, lesson study cycles, and a genuine culture of professional trust. The effect size of d=1.57 does not mean that belief alone raises attainment; it means that when a staff team collectively acts on the belief that they can succeed with every cohort, the resulting changes in practice are large enough to show up clearly in outcome data.

          Setting Clear Learning Intentions and Success Criteria

          Learning intentions and success criteria form the cornerstone of effective teaching practice, providing students with a clear roadmap of what they will learn and how they will know they have succeeded. Research by Shirley Clarke demonstrates that when students understand the purpose of their learning and can recognise quality work, achievement increases significantly. Learning intentions should be written in student-friendly language and focus on the skills, knowledge, or understanding students will develop, rather than the activities they will complete.

          Success criteria break down the learning intention into specific, observable behaviours or outcomes that students can use to self-assess their progress. These criteria should be co-constructed with students where possible, as Dylan Wiliam's research shows this increases student ownership and engagement. Effective success criteria are specific, measurable, and directly linked to the learning intention, helping students understand what good work looks like and how to achieve it themselves.

          In practice, display learning intentions and success criteria prominently and refer to them throughout the lesson. Begin by sharing and explaining them, use them during learning activities to guide student self-reflection, and return to them at lesson end for evaluation. This transparent approach transforms learning from a mystery into a clear, achievable process that helps students to take responsibility for their own progress.

          Find the Right Evidence-Based Strategy for Your School

          Answer five questions about your school context and receive personalised EEF strategy recommendations ranked by impact, cost, and evidence strength.

          EEF Strategy Recommendation Engine

          Match your school context to the highest-impact, evidence-based teaching strategies from the EEF Toolkit.

          This tool matches your school context to the most evidence-based teaching strategies from the EEF Teaching and Learning Toolkit. Answer five questions about your priorities, and receive personalised recommendations ranked by expected impact.

          The Education Endowment Foundation (EEF) Teaching and Learning Toolkit synthesises international evidence on 30 teaching approaches, reporting the average months of additional progress each one delivers. Using evidence to guide spending decisions helps schools, particularly those with tight budgets, invest where the research shows the greatest returns.

          (EEF, 2023; Hattie, 2023; Higgins et al., 2014)

          1. Answer five questions about your improvement priorities.
          2. Review the top three strategies ranked by fit for your context.
          3. Download or copy your personalised recommendations to share with colleagues.

          Based on your context, here are the three strategies with the strongest evidence fit. Expand each card for implementation guidance.

          Compare the Cost-Effectiveness of Teaching Strategies

          Enter your budget, select strategies, and instantly see which approaches deliver the most progress per pound spent.

          EEF Cost-Effectiveness Calculator

          Compare the cost-effectiveness of EEF Toolkit strategies against your school budget.

          This calculator compares the cost-effectiveness of EEF Teaching and Learning Toolkit strategies for your specific budget. Enter your funding and number of pupils, select up to five strategies, and see which delivers the most progress per pound spent.

          Schools face pressure to demonstrate value for money, particularly with Pupil Premium and catch-up funding. The EEF Toolkit provides average months of progress for each strategy, but comparing cost-effectiveness across multiple options requires calculation. This tool does that comparison instantly.

          (EEF, 2023; Sharples et al., 2018)

          1. Enter your annual budget and number of eligible pupils.
          2. Select up to 5 strategies you are considering.
          3. Review the comparison chart and download the budget brief for your governors.
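The calculation the tool performs can be sketched directly. The months-of-progress and cost figures below are illustrative placeholders, not the EEF's published values.

```python
def progress_per_1000(months, cost_per_pupil):
    """Months of additional progress per £1,000 spent per pupil."""
    return months / cost_per_pupil * 1000

# Illustrative (not EEF) figures: (months of progress, £ cost per pupil)
strategies = {
    "Feedback": (6, 80),
    "One-to-one tuition": (5, 700),
    "Metacognition and self-regulation": (7, 90),
}

# Rank strategies by value for money, best first
ranked = sorted(strategies,
                key=lambda s: progress_per_1000(*strategies[s]),
                reverse=True)
print(ranked)
```

Even with invented numbers, the pattern is instructive: a high-impact but expensive strategy such as one-to-one tuition can rank below cheaper strategies once cost per pupil is taken into account.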

          Cost estimates are indicative averages. Actual costs will vary by school context, region, and implementation approach.

          Currency shown in GBP (£). The tool works with any currency; simply enter your budget in your local currency.

          Design a Custom Feedback Protocol

          Choose your feedback type, subject, and time constraints to generate a tailored protocol with marking codes, prompt stems, and workload strategies.

          Feedback Protocol Designer

          Design a custom feedback protocol based on Hattie & Timperley's feedback model and EEF evidence.

          Designs a custom feedback protocol for your classroom, drawing on Hattie & Timperley's (2007) feedback model and EEF evidence on effective feedback (+6 months of additional progress).

          Feedback is one of the most powerful influences on learning, but its effects are highly variable (Hattie & Timperley, 2007). The EEF's guidance on teacher feedback (2021) identifies that the quality and type of feedback matters more than the quantity. Effective feedback operates at four levels: task, process, self-regulation, and self. Crucially, feedback must be actionable and timely; marking everything in detail is neither necessary nor effective (Elliott et al., 2016).

Hattie, J. & Timperley, H. (2007). The Power of Feedback; EEF (2021). Teacher Feedback to Improve Pupil Learning; Elliott, V. et al. (2016). A Marked Improvement?

          1. Select your feedback context (type, subject, key stage).
          2. Indicate your time constraints and class size.
          3. Receive a tailored feedback protocol with marking codes, frequency, and example stems.
          4. Download as a ready-to-use policy document.


                Mindframes: The Ten Teacher Mindsets Behind Visible Learning

                In the decade following the publication of Visible Learning, Hattie shifted his focus from cataloguing what works to examining why effective teachers consistently outperform their peers regardless of the specific strategies they use. The answer, he and Zierer (2018) argued, lay not in technique selection but in a set of underlying beliefs — what they called mindframes — that govern how teachers interpret their role and read evidence of student learning.

                Hattie and Zierer (2018) identified ten mindframes central to high-impact teaching. The most fundamental is that teachers see themselves primarily as evaluators of their own impact: they continuously collect evidence of what students have learned and use it to adjust their practice rather than attributing outcomes to student effort or ability. A second mindframe holds that teaching and learning are forms of error-making and error-detection; classrooms where mistakes are treated as diagnostic information rather than failures produce greater cognitive risk-taking and deeper learning. A third mindframe frames the relationship between teacher and student as a dialogue about learning rather than a transmission of content.

                Additional mindframes include seeing professional collaboration as a core responsibility, not an optional enrichment; believing that all pupils can improve; and using learning intentions and success criteria as planning tools rather than administrative requirements. Hattie and Zierer distinguished mindframes sharply from instructional strategies: a teacher can deploy exit tickets, peer assessment, or worked examples as surface procedures without the underlying mindframe that treats the evidence they generate as personally meaningful feedback on their own teaching.

                The mindframes framework has practical implications for continuing professional development. Training that focuses on new techniques without addressing underlying beliefs about ability, error, and teacher responsibility is less likely to shift classroom practice durably. Research on professional learning communities (Hargreaves and Fullan, 2012) supports this: sustainable improvement in pupil outcomes is associated with schools where collective inquiry into impact data is a cultural norm, not an occasional event. For individual teachers, the most accessible entry point is treating lesson observations, exit tickets, and assessment results as feedback on teaching, not merely feedback about pupils.

                Surface, Deep, and Transfer Learning: A Three-Phase Model

                Hattie and Donoghue (2016) proposed a learning model that resolved a persistent tension in the Visible Learning data: why do some strategies that produce large effect sizes in research trials produce poor results when implemented as whole-class instructional approaches? Their answer was that most teaching strategies are phase-specific — they produce their strongest effects at a particular stage of learning, and deploying them at the wrong phase reduces or eliminates their benefit.

                The model describes three phases. Surface learning involves the initial acquisition and consolidation of facts, skills, and concepts. Pupils at this stage need explicit instruction, direct explanation of what is to be learned, deliberate practice, and feedback oriented to the correctness of specific responses. Strategies with high effect sizes during surface learning include worked examples (d=0.57), direct instruction (d=0.60), and spaced practice (d=0.65). Pushing pupils into collaborative inquiry or self-regulated investigation before they have sufficient surface knowledge to reason with is counterproductive; they lack the domain-specific content on which deeper thinking depends.

                The deep learning phase involves connecting facts and skills into integrated conceptual structures, identifying relationships between ideas, and applying knowledge to unfamiliar problems within the same domain. Strategies most effective at this phase include reciprocal teaching, concept mapping, and elaborative interrogation — techniques that require pupils to construct relationships rather than retrieve isolated items. Hattie and Donoghue noted that the classroom talk and collaborative inquiry strategies often promoted in professional development have their strongest evidence base at the deep phase, which explains why they work well in research with near-expert learners but disappoint when applied to novices encountering new content.

                The third phase, transfer learning, involves the application of conceptual understanding to genuinely novel problems across domains. Transfer is the hardest to achieve and the most valuable. Strategies that support transfer include metacognitive monitoring, problem-solving in varied contexts, and deliberate attention to the conditions under which knowledge applies. For lesson planning, the three-phase model suggests that the same unit of work should contain distinct instructional sequences matched to the phase of learning at each point, rather than applying the same pedagogical approach throughout. Wiliam (2011) reached a complementary conclusion: the key question for a teacher is not "which strategy is best?" but "what does this pupil need at this moment?"

                The Four Levels of Feedback: Hattie and Timperley's Model

                Feedback consistently produces among the largest effect sizes in Hattie's database, yet research also shows that feedback frequently has no effect or even negative effects on learning. Hattie and Timperley (2007) resolved this paradox by distinguishing four levels at which feedback can be directed, arguing that the effectiveness of any feedback act depends critically on which level it addresses and whether that level is appropriate to the learner's current state.

                The first level is task feedback (FT): information about whether a specific answer, product, or performance is correct or incorrect. This is the most common form of feedback in classrooms and the least powerful for generating learning, though it is useful when pupils have fundamental misconceptions that must be corrected before further work can proceed. The second level is process feedback (FP): information about the strategies and procedures used to complete a task. Process feedback helps pupils understand that their approach, not just their answer, is subject to improvement, and it supports the development of transferable skills rather than task-specific performance.

The third level is self-regulation feedback (FR): information that supports pupils in monitoring their own learning, checking their own work, and seeking help effectively. This level has the strongest evidence base for long-term learning gains because it reduces dependence on teacher feedback and builds the metacognitive habits that sustain independent learning. The fourth level is self feedback (FS): comment directed at the learner as a person rather than at their task, process, or regulation. Hattie and Timperley found that this level is the least effective and potentially harmful, particularly when praise for ability replaces information about learning. Dweck's (1999) research on fixed versus growth mindsets converges with this finding: praising pupils as "clever" reduces their willingness to take on challenging tasks and encourages them to attribute difficulty to fixed ability rather than to insufficient effort.

                The model has a direct implication for written feedback policies. A comment such as "Good work — you clearly understand this" operates at the self level and provides no information the pupil can act on. Reframing it as "You identified the correct pattern. Check whether it holds when the numbers are negative" operates at the process and task levels simultaneously. For pupils who consistently self-correct successfully, moving feedback to the self-regulation level — "You found your own error: what strategy did you use?" — builds the monitoring habits most predictive of long-term achievement.

                Student Expectations and Self-Reported Grades: The Most Powerful Influence

In Hattie's (2009) original meta-analysis, self-reported grades emerged as the single influence with the highest effect size in the entire dataset, at d=1.44. The finding attracted considerable attention and some scepticism, in part because its meaning was not immediately obvious. The term "self-reported grades" does not mean allowing pupils to mark their own work uncritically. The construct, drawn from research by Kuncel, Credé and Thomas (2005) and earlier work by Mabe and West (1982), refers to the accuracy with which pupils predict their own performance on an upcoming test or task.

                Pupils whose self-predictions closely match their actual outcomes have, by implication, an accurate internal model of their own current knowledge and skill. This accuracy is itself the product of previous feedback, metacognitive experience, and transparent assessment practices. When pupils receive regular, specific feedback that allows them to calibrate their self-assessments, they develop an internal standard against which to measure new learning. Hattie interpreted the effect size not as evidence that self-assessment is a magic technique but as a demonstration that pupils who know what they know, and know what they do not yet know, are in the optimal position to direct their own learning.
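Calibration in this sense can be expressed as the gap between a pupil's predicted and actual scores. The marks below are hypothetical.

```python
def calibration_gap(predicted, actual):
    """Mean absolute gap between self-predicted and actual scores.

    A smaller gap indicates a more accurate internal model of
    one's own current knowledge and skill.
    """
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical predictions made before a test, then actual marks
well_calibrated   = calibration_gap([60, 72, 55], [62, 70, 54])  # small gap
poorly_calibrated = calibration_gap([80, 85, 90], [55, 60, 52])  # large gap
```

On Hattie's interpretation, the aim of transparent assessment practice is to move pupils from the second pattern towards the first, since pupils who predict accurately know what they do and do not yet understand.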

                The practical implication connects closely to learning intentions and success criteria. When teachers share clear criteria in advance and ask pupils to assess their own work against those criteria before receiving teacher feedback, they are cultivating the calibration mechanism that underlies the self-reported grades effect. Andrade and Valtcheva (2009) found that structured self-assessment using rubrics produced significant gains in writing quality compared to unstructured self-evaluation, because the rubric provided an external standard against which to calibrate internal judgements.

                Black and Wiliam (1998) reached a parallel conclusion in their review of formative assessment research: the gains from self- and peer-assessment are most reliable when pupils have been taught to use specific criteria, not simply asked to express opinions about their work. For teachers, the practical question is whether assessment tasks are transparent enough, and feedback specific enough, for pupils to build an accurate model of their own understanding rather than relying solely on teacher evaluation to tell them where they stand.

                Free Resource Pack

                Download this free Complete Teaching Essentials Bundle resource pack for your classroom and staff room. Includes printable posters, desk cards, and CPD materials.


                Teaching Essentials Toolkit

                Key resources for building a robust and effective teaching practice, informed by evidence.

Teaching Essentials Toolkit — 4 resources: Classroom Management, Effective Feedback, Metacognition, Retrieval Practice, Teaching Strategies, CPD Briefing Visual, Quick Reference Guide, Classroom Wall Display, Planning Template, Teacher Development.


                Further Reading: Key Research Papers

                These studies provide the research foundation for visible learning and its practical applications in schools.

Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (6,865 citations)

                Hattie, J. (2009)

                Hattie's landmark synthesis of 800+ meta-analyses ranked 138 teaching influences by their effect size on student achievement. Feedback (d=0.73), teacher clarity (d=0.75), and formative evaluation (d=0.90) emerged as among the most powerful interventions. The work provides teachers with an evidence hierarchy for deciding where to invest classroom time and energy.

Visible Learning for Teachers: Maximizing Impact on Learning (1,147 citations)

                Hattie, J. (2012)

                This companion volume translates the meta-analytic findings into practical classroom strategies. Hattie introduces the concept of "know thy impact," arguing that teachers who regularly evaluate their effect on student learning become more effective practitioners. The book provides checklists, lesson planning frameworks, and self-evaluation tools grounded in the original research synthesis.

The Power of Feedback (6,378 citations)

                Hattie, J. and Timperley, H. (2007)

                This paper presents the feedback model central to visible learning, identifying four levels: task, process, self-regulation, and self. The research demonstrates that feedback about the task and learning process produces the strongest effects, whilst praise directed at the self has minimal impact on achievement. Teachers can use this framework to design feedback that genuinely moves learning forward.

Teachers Make a Difference: What Is the Research Evidence? (1,310 citations)

                Hattie, J. (2003)

                This earlier paper establishes that teacher quality accounts for approximately 30% of variance in student achievement, making it the most significant school-level factor. The research identifies expert teachers as those who challenge students, set high expectations, and maintain awareness of their impact. These findings laid the groundwork for the visible learning framework that followed.

Embedded Formative Assessment

                Wiliam, D. (2011)

                Wiliam's work complements Hattie's findings by providing a practical framework for the formative assessment strategies that visible learning identifies as highly effective. The book introduces five key strategies including clarifying learning intentions, engineering classroom discussions, and activating students as instructional resources for each other.

                Written by the Structural Learning Research Team

                Reviewed by Paul Main, Founder & Educational Consultant at Structural Learning

                Criticisms and Methodological Limitations of Visible Learning

                Visible Learning has attracted substantial methodological criticism from educational researchers, and teachers who use Hattie's rankings should understand the main lines of concern. The most fundamental objection is what Slavin (2018) called the "apples and oranges" problem. Hattie's database combines meta-analyses from early childhood education, secondary schooling, higher education, clinical psychology, and sports coaching. Effect sizes from these very different contexts are then averaged as if they were measuring the same thing. A meta-analysis of feedback in medical training and a meta-analysis of feedback in primary literacy lessons both contribute to the same d value, even though the populations, tasks, and assessment instruments are entirely different.

                Simpson (2017) raised concerns about the mathematical aggregation process itself. When you average effect sizes across meta-analyses that used different study selection criteria, different statistical methods, and different definitions of the same construct, the resulting number carries no clear meaning. An effect size is a standardised comparison between two groups: if the groups, the interventions, and the outcomes differ across studies, then the standardisation does not hold. Simpson argued that Hattie's league table of influences creates an illusion of precision; the d values look like measurements, but they reflect the accumulated artefacts of dozens of different research traditions rather than a stable property of any particular teaching strategy.

                Bergeron (2017) examined the ecological validity problem: whether findings from controlled studies can be generalised to ordinary classrooms. Many studies in Hattie's database were conducted under conditions that differ from daily teaching: short intervention windows, volunteer participants, researcher involvement in delivery, and outcomes measured by researcher-designed tests rather than national assessments. A strategy that produces d=0.60 in a six-week university trial with graduate student facilitators may produce a much smaller effect when delivered by a single teacher with 30 pupils across a full academic year. The context in which research is conducted is part of what generates the effect size, not just the strategy itself.

                None of these criticisms mean that Visible Learning is without value. The broad ordering of influences, with factors related to teacher cognition, feedback quality, and pupil self-regulation clustered at the top, is consistent with findings from other research traditions, including the Education Endowment Foundation's toolkit and the work of Barak Rosenshine. What the criticisms do mean is that treating a specific d value as a precise prediction of what will happen in your classroom is not warranted. Hattie (2015) acknowledged that effect sizes should be treated as starting points for professional inquiry rather than prescriptions, and the research has most value when it is used to generate questions about practice rather than to rank strategies by number.

                Frequently Asked Questions

                What is Visible Learning and how does it work?

                Visible Learning is an evidence-based teaching approach developed by John Hattie that makes learning visible to both teachers and students. Students must clearly understand what they are learning, how to learn it, and how to measure their progress. The approach focuses on evaluating the impact of teaching on student achievement rather than simply delivering content, with teachers acting as activators of learning who monitor progress and adapt instruction based on real-time evidence.

                How do I implement Visible Learning strategies in my classroom?

                Start each lesson by sharing clear learning intentions and success criteria so students understand what they are learning and how they will know when they have succeeded. During lessons, continuously gather evidence of student understanding through questioning techniques and mini-assessments, then adjust your instruction accordingly. Help students become partners in the learning process by teaching them to self-assess and provide meaningful peer feedback to create a classroom culture where learning is everyone's responsibility.

                What are the main benefits of using Visible Learning in schools?

                Visible Learning transforms students from passive recipients into active partners who set goals, track progress, and seek feedback independently. Teachers benefit from evidence-based guidance on which strategies actually work: in Hattie's synthesis, feedback has an effect size of around 0.70 and providing formative evaluation around 0.90, both well above the 0.40 benchmark. The approach helps teachers see learning through students' eyes and make teaching decisions based on real-time evidence of what's working rather than on popular but ineffective interventions.

                What does the 0.4 effect size threshold mean in Visible Learning?

                The 0.4 effect size represents Hattie's "Zone of Desired Effects" where teaching strategies begin to have meaningful impact on student achievement. Any teaching approach with an effect size of 0.4 or greater is considered beneficial, whilst strategies below this threshold may not significantly improve learning outcomes. This threshold helps teachers identify which interventions are worth their time and effort, as many popular teaching methods actually fall below this effective zone.
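                A rough way to see what the hinge point means in practice is to convert an effect size into a percentile shift. Assuming normally distributed outcomes (an illustrative assumption, not a claim from Hattie's work), the sketch below shows where the average pupil in a treated group would sit relative to the untreated distribution.

```python
import math

HINGE_POINT = 0.40  # Hattie's benchmark for the "Zone of Desired Effects"

def percentile_shift(d):
    """Percentile of the average treated pupil within the control
    distribution, assuming normally distributed outcomes."""
    return 0.5 * (1 + math.erf(d / math.sqrt(2))) * 100

print(round(percentile_shift(HINGE_POINT), 1))  # 65.5
print(round(percentile_shift(0.90), 1))         # 81.6
```

On this reading, a strategy at the 0.40 hinge point moves an average pupil from the 50th to roughly the 66th percentile, while formative evaluation at 0.90 would move them to roughly the 82nd, though the criticisms discussed above caution against treating these conversions as precise predictions for any individual classroom.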

                How do I know if Visible Learning is working in my classroom?

                Look for students who can clearly articulate what they are learning and why it matters, and who actively seek feedback and set their own learning goals. You should see evidence of students self-assessing their work and providing meaningful peer feedback without constant teacher prompting. Additionally, you will notice your teaching decisions becoming more responsive to student needs as you continuously gather and act upon assessment evidence during lessons.

                What are common mistakes teachers make when implementing Visible Learning?

                Many teachers focus only on sharing learning objectives without teaching students how to use success criteria to self-assess their progress. Another common mistake is gathering assessment evidence but failing to adjust instruction based on what the data reveals about student understanding. Some teachers also assume that simply posting learning intentions on the board constitutes Visible Learning, when the approach actually requires active student participation and ongoing feedback loops throughout the lesson.

                Implementing Effective Feedback Practices

                Effective feedback represents one of the most powerful tools in a teacher's arsenal, with Hattie's research consistently placing it among the top influences on student achievement. However, the quality and timing of feedback matter significantly more than its frequency. Effective feedback focuses on the task, the process, and self-regulation rather than praising the person, helping students understand what they got wrong and how to improve their learning strategies.

                The most impactful feedback addresses three fundamental questions: Where am I going? How am I going? Where to next? This framework, developed through extensive educational research, ensures feedback is both specific and actionable. Teachers should provide feedback that is timely, specific to learning intentions, and connects directly to success criteria. Rather than simply marking work as correct or incorrect, effective feedback identifies patterns in student thinking and guides them towards deeper understanding of the subject matter.

                In classroom practice, this means moving beyond generic praise such as "good work" towards targeted comments like "your use of evidence in paragraph two strengthens your argument; now consider how you might apply the same approach to your conclusion." Peer feedback and self-assessment opportunities also enhance learning outcomes, as students develop metacognitive awareness of their own progress and learning processes.

                Assessment Strategies for Visible Learning

                Effective assessment in Visible Learning classrooms moves beyond traditional testing to become a continuous dialogue between teachers and students about learning progress. Formative assessment strategies, such as exit tickets, learning journals, and peer feedback sessions, provide real-time data that enables teachers to adjust instruction immediately rather than waiting for summative results. This approach aligns with Dylan Wiliam's research on assessment for learning, which demonstrates that frequent, low-stakes feedback can significantly accelerate student achievement.

                The key lies in making learning intentions and success criteria transparent from the outset. When students understand exactly what they're working towards and can articulate their own progress, they become active partners in the assessment process. Regular self-assessment activities, where pupils reflect on their understanding and identify next steps, create the metacognitive skills essential for independent learning. This practice supports Hattie's findings that self-reported grades have one of the highest effect sizes on student achievement.

                Practically, teachers can implement simple yet powerful monitoring tools such as traffic light systems for student confidence levels, one-minute summaries at lesson transitions, or structured peer assessment using clear rubrics. The crucial element is ensuring assessment data directly informs subsequent teaching decisions, creating a responsive classroom environment where both successes and misconceptions are addressed promptly and purposefully.


