Visible Learning: Hattie's Research on What Works
Hattie's Visible Learning research explained: which teaching strategies have the biggest impact on pupil outcomes. Evidence-based methods from 80+ million learners.


Visible Learning is an evidence-based approach to teaching developed by education researcher John Hattie. At its core, the idea is simple: learning should be visible, to teachers and to students themselves. This means students must know what they are learning, how to go about learning it, and how to measure their progress along the way. Hattie's work shifts the focus from simply delivering content to evaluating the impact of teaching on student achievement.
What does the research say? Hattie's (2009) Visible Learning synthesis analysed 800+ meta-analyses covering 80+ million pupils. The average effect size across all interventions is d = 0.40 (the "hinge point"). Top influences include collective teacher efficacy (d = 1.57), self-reported grades (d = 1.44), teacher credibility (d = 0.90) and feedback (d = 0.70). The key insight: teachers must see learning through pupils' eyes and make the learning process visible.
Based on a meta-analysis of millions of students and thousands of studies, Hattie introduced the concept of effect size, a way to identify which teaching strategies have the greatest impact on learning. His findings offer a clear message: great teaching is not just about planning activities, it's about seeing learning through the eyes of students and helping them become their own teachers.

The Visible Learning model places strong emphasis on the teacher's role: teachers are not just facilitators but activators of learning who monitor progress, adapt instruction, and make teaching decisions based on real-time evidence of what's working.
Key Principles of Visible Learning:
Visible Learning gives teachers an enhanced role as evaluators of their own teaching. According to John Hattie, visible teaching and learning occur when teachers see learning through the eyes of their students and help students become their own teachers.
To measure this, Hattie used effect sizes calculated across millions of students, comparing the measured impact on achievement of a wide range of influences, e.g. learning strategies, feedback, holidays and class size.

The research foundation reveals striking patterns across different educational contexts and subjects. Hattie's analysis demonstrated that feedback, for instance, achieves an effect size of 0.7, making it nearly twice as powerful as average teaching practices. Similarly, formative evaluation scores 0.9, whilst collective teacher efficacy (teachers' shared belief that they can positively impact all students) reaches an impressive 1.57. These findings provide teachers with clear priorities for professional development and classroom implementation.

In practice, Visible Learning strategies transform everyday classroom interactions. Teachers might begin lessons by sharing learning intentions and success criteria, ensuring students understand what they're learning and how they'll know when they've succeeded. During lessons, teachers actively seek evidence of student understanding through questioning techniques and mini-assessments, adjusting their instruction accordingly. Students become partners in this process, learning to self-assess and provide meaningful peer feedback, creating a classroom culture where learning is everyone's responsibility.
Teachers implement visible learning by making learning intentions explicit at the start of each lesson and sharing clear success criteria with students. They continuously gather evidence of student understanding through formative assessment and adjust their teaching based on this feedback. The model requires teachers to help students understand where they are in their learning journey and what steps they need to take next.
John Hattie drew on more than 50,000 studies covering roughly 80 million students to investigate what makes student learning most successful. According to the meta-analyses chapters of Visible Learning, the greater the effect size, the more beneficial the approach: anything at or above 0.4 sits in the "Zone of Desired Effects", and Hattie contends that schools and teachers should focus their energy on approaches in that zone. According to John Hattie, visible learners are students who can articulate what they are learning and why it matters, actively seek feedback, self-assess their work against success criteria, and set their own next learning goals.
This aligns with Rosenshine's principles of effective instruction, which emphasise the importance of clear guidance and structured support. Students who develop these capabilities show greater self-regulation and become more independent learners.
Effective questioning techniques play a crucial role in making thinking visible. Teachers can use questioning strategies to probe student understanding and guide them through their learning process. This approach is particularly powerful when combined with thinking routines that make student thought processes explicit.
Visual tools can also support visible learning by helping students organise and represent their understanding. Graphic organisers and concept maps allow students to see connections between ideas and track their developing knowledge structures.
The visible learning approach recognises that different students may need different levels of support depending on their needs. Teachers working with students with special educational needs can adapt these strategies to ensure all learners can participate effectively in the learning process.
Understanding how students process information is essential for implementing visible learning effectively. Teachers need to be aware of working memory limitations and design instruction that supports cognitive processing while making learning visible.
Student motivation plays a critical role in visible learning success. When students can see their progress and understand their learning goals, they become more invested in the process and take greater ownership of their education.
John Hattie's Visible Learning represents one of the most comprehensive syntheses of educational research ever undertaken, drawing from over 800 meta-analyses encompassing approximately 50,000 studies and 80 million students. This unprecedented scale of analysis allows educators to move beyond individual studies or personal anecdotes to understand which teaching practices genuinely accelerate student achievement. Hattie's work transforms scattered research findings into practical guidance that can directly inform classroom practice.
The foundation of Visible Learning rests on effect sizes, a statistical measure that quantifies the impact of different educational interventions. Hattie established that an effect size of 0.40 represents the average yearly growth students typically achieve, setting this as the benchmark for determining whether teaching strategies are genuinely effective. Interventions exceeding this threshold demonstrate above-average impact on learning outcomes, whilst those below suggest limited educational value despite potentially consuming significant time and resources.
For classroom practitioners, this research foundation provides evidence-based guidance for prioritising professional development and instructional strategies. Rather than adopting every new educational trend, teachers can focus their efforts on high-impact practices such as feedback, formative evaluation, and metacognitive strategies, all of which consistently demonstrate substantial effect sizes across diverse educational contexts.
Effect sizes provide teachers with a powerful lens for evaluating the true impact of different educational practices on student learning. Unlike traditional research that simply tells us whether something works, effect sizes reveal how much it works, allowing educators to distinguish between marginal gains and transformative strategies. John Hattie's synthesis of over 800 meta-analyses established that an effect size of 0.40 represents the average yearly progress students make, providing a crucial benchmark for assessing teaching interventions.
Understanding this metric transforms how teachers approach professional development and classroom decision-making. Practices with effect sizes above 0.40 accelerate learning beyond typical progress, whilst those below may actually hinder student achievement. For instance, Hattie's research shows that feedback achieves an effect size of 0.70, indicating substantial impact, whereas ability grouping registers just 0.12, suggesting minimal benefit despite its widespread use in schools.
In practical terms, teachers can use effect sizes to prioritise their energy and resources. Rather than adopting every new initiative, focus on evidence-based strategies with demonstrated high impact. This might mean investing time in developing quality feedback systems, implementing formative assessment practices, or building strong teacher-student relationships, all of which consistently show effect sizes well above the 0.40 threshold for meaningful educational impact.
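To make the prioritisation step concrete, here is a minimal sketch of the hinge-point filter, using the effect sizes quoted in this article; the dictionary and threshold check are purely illustrative, not part of Hattie's own tooling.

```python
# Toy illustration of filtering influences against the d = 0.40 hinge point.
# Effect sizes are the figures quoted in this article.
influences = {
    "collective teacher efficacy": 1.57,
    "self-reported grades": 1.44,
    "formative evaluation": 0.90,
    "feedback": 0.70,
    "ability grouping": 0.12,
}

HINGE = 0.40  # average yearly growth; the benchmark described above

above_hinge = {name: d for name, d in influences.items() if d >= HINGE}
for name, d in sorted(above_hinge.items(), key=lambda kv: -kv[1]):
    print(f"{name}: d = {d:.2f}")
# ability grouping (d = 0.12) falls below the hinge and is filtered out
```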
When John Hattie published Visible Learning in 2009, he synthesised 800 meta-analyses covering more than 50,000 individual studies and roughly 80 million students. To make sense of that volume of research, he used a statistical tool called Cohen's d: a standardised measure of the difference between a treatment group and a control group, expressed in units of standard deviation. A d of 1.0 means the average student in the treatment group outperformed 84 per cent of students in the control group. A d of 0.20 is a small effect; 0.50 is moderate; 0.80 is large (Cohen, 1988).
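As a worked example of how a single study's d is computed, the sketch below uses invented group statistics and the standard pooled-SD formula; the final line reproduces the 84 per cent interpretation mentioned above under a normality assumption.

```python
from statistics import NormalDist

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference between treatment and control groups,
    using the pooled standard deviation."""
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Invented example: treatment class averages 72, control 65, both SD 10.
d = cohens_d(72, 65, 10, 10, n_t=50, n_c=50)   # d = 0.70

# Under normality, the average treated student sits at this percentile of
# the control distribution; d = 1.0 gives ~84%, matching the text above.
percentile = NormalDist().cdf(d)
print(f"d = {d:.2f}; average treated student at {percentile:.0%} of controls")
```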
Hattie's key contribution was the hinge point of d=0.40, which he calculated as the average effect size across all the influences he examined. He proposed that teachers should use 0.40 as a baseline: any strategy that produces an effect size below it is delivering less than a typical year's teaching, regardless of how popular or well-resourced that strategy might be. Approaches above the line represent meaningful acceleration of learning. This reframing matters because it shifts the question from "does this work?" to "does this work better than simply being taught?" (Hattie, 2009).
The mechanics behind a meta-analysis are worth understanding. Researchers calculate an effect size for each individual study, then average those effect sizes across the meta-analysis, weighting by sample size. Hattie then averaged effect sizes across multiple meta-analyses to produce his ranked list. Each layer of aggregation increases the distance between the original classroom data and the final number that appears in a league table. What you see as d=0.60 for a given strategy may represent thousands of different teacher-student interactions, in different countries, measured with different assessments, across different subject areas.
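The two layers of averaging are easy to see in code. The sketch below, with invented study data, shows a sample-size-weighted mean within each meta-analysis and a simple mean across meta-analyses; real syntheses use more sophisticated weighting, so this illustrates only the aggregation structure.

```python
def weighted_mean(effects, weights):
    """Average effect sizes, weighted (here) by study sample size."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

# Layer 1: within each meta-analysis, average the study-level effect sizes.
meta_a = weighted_mean([0.55, 0.70, 0.48], weights=[120, 300, 80])
meta_b = weighted_mean([0.62, 0.51], weights=[200, 150])

# Layer 2: average across meta-analyses to get the single d that appears
# in the league table for a strategy.
strategy_d = (meta_a + meta_b) / 2
print(f"league-table d = {strategy_d:.2f}")
# One number now stands in for every classroom, country, and test involved.
```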
Kraft (2020) raised a specific technical concern: when effect sizes are computed using pre-post gains rather than comparison group differences, the resulting numbers are systematically inflated. Many studies in Hattie's database used pre-post designs, which means the 0.40 hinge point may itself be set too high relative to what rigorous randomised trials would produce. For classroom teachers, this does not invalidate the general ordering of Hattie's influences, but it does mean that interpreting specific d values as precise measurements is unwarranted. The hinge point is better read as a rough filter than as a precise threshold.
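Kraft's point can be shown with a line of arithmetic. In the sketch below, with invented scores, the pre-post gain bundles a year's ordinary growth into the effect size, while the comparison against a control group isolates only the growth the intervention added.

```python
# Invented data: both groups start at 50; the SD of scores is 10.
pre = 50.0
post_treatment = 58.0   # treated pupils after a year
post_control = 54.0     # untreated pupils after the same year
sd = 10.0

d_pre_post = (post_treatment - pre) / sd             # 0.80: includes maturation
d_vs_control = (post_treatment - post_control) / sd  # 0.40: added growth only

print(f"pre-post d = {d_pre_post:.2f}, control-comparison d = {d_vs_control:.2f}")
```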
The most effective teaching strategies share a common characteristic: they make learning visible to both teachers and students. John Hattie's research identifies several high-impact practices that consistently produce effect sizes above 0.40, indicating substantial improvements in student achievement. These strategies include feedback, formative evaluation, and classroom discussion, all of which create transparent learning processes where progress becomes tangible and measurable.
Cognitive scientist John Sweller's work on cognitive load theory demonstrates why strategies like worked examples and scaffolding prove so effective. By reducing extraneous mental processing, these approaches allow students to focus on essential learning content. Similarly, Dylan Wiliam's research on formative assessment shows how regular, low-stakes assessment creates feedback loops that guide both teaching decisions and student understanding in real-time.
Successful implementation requires teachers to become evaluators of their own impact. This means systematically collecting evidence of student learning through methods such as exit tickets, peer discussions, and learning journals. When teachers can clearly see what works and adjust their practice accordingly, student outcomes improve dramatically. The key lies not in perfecting individual techniques, but in developing a responsive teaching approach that adapts to visible evidence of learning.
In Hattie's updated rankings, collective teacher efficacy sits at the top of the entire list with an effect size of d=1.57, well above any instructional strategy. The construct originates with Albert Bandura's work on self-efficacy, which he extended from the individual level to the collective. Bandura (1997) defined collective efficacy as a group's shared belief in its combined capacity to organise and execute the actions required to produce a given level of attainment. In schools, this means the degree to which the staff as a whole believe that their collective actions can make a measurable difference to every pupil, including those facing disadvantage.
Jenni Donohoo's research has been particularly influential in translating Bandura's theory into school improvement practice. Donohoo (2017) identified six enabling conditions that build collective teacher efficacy: advanced teacher influence over decisions, goal consensus, teachers' knowledge about one another's work, cohesive staff relationships, responsiveness of leadership to teacher concerns, and consideration of the task at hand. Where these conditions are weak, even technically skilled individual teachers struggle to lift overall outcomes. The culture itself acts as a ceiling on what any one teacher can achieve.
Why does a belief about collective impact produce such a large measured effect? The mechanism runs through professional behaviour. When staff believe their collective effort will shift outcomes, they set higher expectations for all pupils, they persist with struggling learners rather than attributing failure to factors outside school, and they share responsibility for results rather than retreating into individual classrooms. Pupils experience this as consistent high expectations across every subject and year group, not just in the classes of a few exceptional teachers (Donohoo, Hattie and Eells, 2018).
For school leaders, this is an argument for investing in collaborative structures before adding new programmes. A school that buys a new literacy intervention but runs departments in isolation is likely to get less return than one that builds shared planning time, lesson study cycles, and a genuine culture of professional trust. The effect size of d=1.57 does not mean that belief alone raises attainment; it means that when a staff team collectively acts on the belief that they can succeed with every cohort, the resulting changes in practice are large enough to show up clearly in outcome data.
Learning intentions and success criteria form the cornerstone of effective teaching practice, providing students with a clear roadmap of what they will learn and how they will know they have succeeded. Research by Shirley Clarke demonstrates that when students understand the purpose of their learning and can recognise quality work, achievement increases significantly. Learning intentions should be written in student-friendly language and focus on the skills, knowledge, or understanding students will develop, rather than the activities they will complete.
Success criteria break down the learning intention into specific, observable behaviours or outcomes that students can use to self-assess their progress. These criteria should be co-constructed with students where possible, as Dylan Wiliam's research shows this increases student ownership and engagement. Effective success criteria are specific, measurable, and directly linked to the learning intention, helping students understand what good work looks like and how to achieve it themselves.
In practice, display learning intentions and success criteria prominently and refer to them throughout the lesson. Begin by sharing and explaining them, use them during learning activities to guide student self-reflection, and return to them at lesson end for evaluation. This transparent approach transforms learning from a mystery into a clear, achievable process that helps students to take responsibility for their own progress.
In the decade following the publication of Visible Learning, Hattie shifted his focus from cataloguing what works to examining why effective teachers consistently outperform their peers regardless of the specific strategies they use. The answer, he and Zierer (2018) argued, lay not in technique selection but in a set of underlying beliefs — what they called mindframes — that govern how teachers interpret their role and read evidence of student learning.
Hattie and Zierer (2018) identified ten mindframes central to high-impact teaching. The most fundamental is that teachers see themselves primarily as evaluators of their own impact: they continuously collect evidence of what students have learned and use it to adjust their practice rather than attributing outcomes to student effort or ability. A second mindframe holds that teaching and learning are forms of error-making and error-detection; classrooms where mistakes are treated as diagnostic information rather than failures produce greater cognitive risk-taking and deeper learning. A third mindframe frames the relationship between teacher and student as a dialogue about learning rather than a transmission of content.
Additional mindframes include seeing professional collaboration as a core responsibility, not an optional enrichment; believing that all pupils can improve; and using learning intentions and success criteria as planning tools rather than administrative requirements. Hattie and Zierer distinguished mindframes sharply from instructional strategies: a teacher can deploy exit tickets, peer assessment, or worked examples as surface procedures without the underlying mindframe that treats the evidence they generate as personally meaningful feedback on their own teaching.
The mindframes framework has practical implications for continuing professional development. Training that focuses on new techniques without addressing underlying beliefs about ability, error, and teacher responsibility is less likely to shift classroom practice durably. Research on professional learning communities (Hargreaves and Fullan, 2012) supports this: sustainable improvement in pupil outcomes is associated with schools where collective inquiry into impact data is a cultural norm, not an occasional event. For individual teachers, the most accessible entry point is treating lesson observations, exit tickets, and assessment results as feedback on teaching, not merely feedback about pupils.
Hattie and Donoghue (2016) proposed a learning model that resolved a persistent tension in the Visible Learning data: why do some strategies that produce large effect sizes in research trials produce poor results when implemented as whole-class instructional approaches? Their answer was that most teaching strategies are phase-specific — they produce their strongest effects at a particular stage of learning, and deploying them at the wrong phase reduces or eliminates their benefit.
The model describes three phases. Surface learning involves the initial acquisition and consolidation of facts, skills, and concepts. Pupils at this stage need explicit instruction, direct explanation of what is to be learned, deliberate practice, and feedback oriented to the correctness of specific responses. Strategies with high effect sizes during surface learning include worked examples (d=0.57), direct instruction (d=0.60), and spaced practice (d=0.65). Pushing pupils into collaborative inquiry or self-regulated investigation before they have sufficient surface knowledge to reason with is counterproductive; they lack the domain-specific content on which deeper thinking depends.
The deep learning phase involves connecting facts and skills into integrated conceptual structures, identifying relationships between ideas, and applying knowledge to unfamiliar problems within the same domain. Strategies most effective at this phase include reciprocal teaching, concept mapping, and elaborative interrogation — techniques that require pupils to construct relationships rather than retrieve isolated items. Hattie and Donoghue noted that the classroom talk and collaborative inquiry strategies often promoted in professional development have their strongest evidence base at the deep phase, which explains why they work well in research with near-expert learners but disappoint when applied to novices encountering new content.
The third phase, transfer learning, involves the application of conceptual understanding to genuinely novel problems across domains. Transfer is the hardest to achieve and the most valuable. Strategies that support transfer include metacognitive monitoring, problem-solving in varied contexts, and deliberate attention to the conditions under which knowledge applies. For lesson planning, the three-phase model suggests that the same unit of work should contain distinct instructional sequences matched to the phase of learning at each point, rather than applying the same pedagogical approach throughout. Wiliam (2011) reached a complementary conclusion: the key question for a teacher is not "which strategy is best?" but "what does this pupil need at this moment?"
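For planning purposes, the phase-to-strategy mapping can be summarised as a simple lookup, sketched below using the strategies and effect sizes quoted in this article; the helper function is a toy planning aid, not part of the model itself.

```python
# Phase-specific strategies from the Hattie and Donoghue (2016) model,
# as quoted above. Purely illustrative.
PHASE_STRATEGIES = {
    "surface": ["worked examples (d=0.57)", "direct instruction (d=0.60)",
                "spaced practice (d=0.65)"],
    "deep": ["reciprocal teaching", "concept mapping",
             "elaborative interrogation"],
    "transfer": ["metacognitive monitoring",
                 "problem-solving in varied contexts"],
}

def strategies_for(phase: str) -> list[str]:
    """Return the strategies with the strongest evidence at this phase."""
    return PHASE_STRATEGIES[phase]

print(strategies_for("surface"))
```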
Feedback consistently produces among the largest effect sizes in Hattie's database, yet research also shows that feedback frequently has no effect or even negative effects on learning. Hattie and Timperley (2007) resolved this paradox by distinguishing four levels at which feedback can be directed, arguing that the effectiveness of any feedback act depends critically on which level it addresses and whether that level is appropriate to the learner's current state.
The first level is task feedback (FT): information about whether a specific answer, product, or performance is correct or incorrect. This is the most common form of feedback in classrooms and the least powerful for generating learning, though it is useful when pupils have fundamental misconceptions that must be corrected before further work can proceed. The second level is process feedback (FP): information about the strategies and procedures used to complete a task. Process feedback helps pupils understand that their approach, not just their answer, is subject to improvement, and it supports the development of transferable skills rather than task-specific performance.
The third level is self-regulation feedback (FR): information that supports pupils in monitoring their own learning, checking their own work, and seeking help effectively. This level has the strongest evidence base for long-term learning gains because it reduces dependence on teacher feedback and builds the metacognitive habits that sustain independent learning. The fourth level is self feedback (FS): comment directed at the learner as a person rather than at their task, process, or regulation. Hattie and Timperley found that this level is the least effective and potentially harmful, particularly when praise for ability replaces information about learning. Dweck's (1999) research on fixed versus growth mindsets converges with this finding: praising pupils as "clever" reduces their willingness to take on challenging tasks and attribute difficulty to insufficient effort.
The model has a direct implication for written feedback policies. A comment such as "Good work — you clearly understand this" operates at the self level and provides no information the pupil can act on. Reframing it as "You identified the correct pattern. Check whether it holds when the numbers are negative" operates at the process and task levels simultaneously. For pupils who consistently self-correct successfully, moving feedback to the self-regulation level — "You found your own error: what strategy did you use?" — builds the monitoring habits most predictive of long-term achievement.
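The four levels can also be treated as a simple classification scheme when auditing written comments. The sketch below encodes the levels with example comments drawn from or modelled on those in this article; deciding which level a real comment operates at remains a professional judgement, not something this toy code automates.

```python
from enum import Enum

class FeedbackLevel(Enum):
    TASK = "FT"             # correctness of a specific answer or product
    PROCESS = "FP"          # strategies and procedures used for the task
    SELF_REGULATION = "FR"  # the pupil's own monitoring and error-checking
    SELF = "FS"             # the learner as a person (least effective)

examples = {
    "Question 3 is incorrect.": FeedbackLevel.TASK,
    "Check whether the pattern holds when the numbers are negative.": FeedbackLevel.PROCESS,
    "You found your own error: what strategy did you use?": FeedbackLevel.SELF_REGULATION,
    "Good work — you clearly understand this.": FeedbackLevel.SELF,
}

for comment, level in examples.items():
    print(f"[{level.value}] {comment}")
```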
In Hattie's (2009) original meta-analysis, self-reported grades emerged as the single influence with the highest effect size in the entire dataset, at d=1.44. The finding attracted considerable attention and some scepticism, in part because its meaning was not immediately obvious. Self-reported grades does not mean allowing pupils to mark their own work uncritically. The construct, drawn from research by Kuncel, Crede and Thomas (2005) and earlier work by Mabe and West (1982), refers to the accuracy with which pupils predict their own performance on an upcoming test or task.
Pupils whose self-predictions closely match their actual outcomes have, by implication, an accurate internal model of their own current knowledge and skill. This accuracy is itself the product of previous feedback, metacognitive experience, and transparent assessment practices. When pupils receive regular, specific feedback that allows them to calibrate their self-assessments, they develop an internal standard against which to measure new learning. Hattie interpreted the effect size not as evidence that self-assessment is a magic technique but as a demonstration that pupils who know what they know, and know what they do not yet know, are in the optimal position to direct their own learning.
The practical implication connects closely to learning intentions and success criteria. When teachers share clear criteria in advance and ask pupils to assess their own work against those criteria before receiving teacher feedback, they are cultivating the calibration mechanism that underlies the self-reported grades effect. Andrade and Valtcheva (2009) found that structured self-assessment using rubrics produced significant gains in writing quality compared to unstructured self-evaluation, because the rubric provided an external standard against which to calibrate internal judgements.
Black and Wiliam (1998) reached a parallel conclusion in their review of formative assessment research: the gains from self- and peer-assessment are most reliable when pupils have been taught to use specific criteria, not simply asked to express opinions about their work. For teachers, the practical question is whether assessment tasks are transparent enough, and feedback specific enough, for pupils to build an accurate model of their own understanding rather than relying solely on teacher evaluation to tell them where they stand.
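Calibration in this sense is measurable. The sketch below, using invented scores (and Python 3.10+ for statistics.correlation), computes how closely a class's self-predicted grades track their actual results; a high correlation and small average miss indicate the accurate internal model the research describes.

```python
from statistics import correlation, mean  # correlation needs Python 3.10+

predicted = [65, 72, 58, 80, 70]  # pupils' self-predicted scores (invented)
actual    = [63, 75, 50, 82, 69]  # their actual test scores (invented)

r = correlation(predicted, actual)                        # calibration accuracy
avg_miss = mean(abs(p - a) for p, a in zip(predicted, actual))

print(f"r = {r:.2f}; average prediction error = {avg_miss:.1f} marks")
```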
These studies provide the research foundation for visible learning and its practical applications in schools.
Hattie, J. (2009) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement.
Hattie's landmark synthesis of 800+ meta-analyses ranked 138 teaching influences by their effect size on student achievement. Feedback (d=0.73), teacher clarity (d=0.75), and formative evaluation (d=0.90) emerged as among the most powerful interventions. The work provides teachers with an evidence hierarchy for deciding where to invest classroom time and energy.
Hattie, J. (2012) Visible Learning for Teachers: Maximizing Impact on Learning.
This companion volume translates the meta-analytic findings into practical classroom strategies. Hattie introduces the concept of "know thy impact," arguing that teachers who regularly evaluate their effect on student learning become more effective practitioners. The book provides checklists, lesson planning frameworks, and self-evaluation tools grounded in the original research synthesis.
Hattie, J. and Timperley, H. (2007) The Power of Feedback.
This paper presents the feedback model central to visible learning, identifying four levels: task, process, self-regulation, and self. The research demonstrates that feedback about the task and learning process produces the strongest effects, whilst praise directed at the self has minimal impact on achievement. Teachers can use this framework to design feedback that genuinely moves learning forward.
Hattie, J. (2003) Teachers Make a Difference: What Is the Research Evidence?
This earlier paper establishes that teacher quality accounts for approximately 30% of variance in student achievement, making it the most significant school-level factor. The research identifies expert teachers as those who challenge students, set high expectations, and maintain awareness of their impact. These findings laid the groundwork for the visible learning framework that followed.
Wiliam, D. (2011) Embedded Formative Assessment.
Wiliam's work complements Hattie's findings by providing a practical framework for the formative assessment strategies that visible learning identifies as highly effective. The book introduces five key strategies including clarifying learning intentions, engineering classroom discussions, and activating students as instructional resources for each other.
Visible Learning has attracted substantial methodological criticism from educational researchers, and teachers who use Hattie's rankings should understand the main lines of concern. The most fundamental objection is what Slavin (2018) called the "apples and oranges" problem. Hattie's database combines meta-analyses from early childhood education, secondary schooling, higher education, clinical psychology, and sports coaching. Effect sizes from these very different contexts are then averaged as if they were measuring the same thing. A meta-analysis of feedback in medical training and a meta-analysis of feedback in primary literacy lessons both contribute to the same d value, even though the populations, tasks, and assessment instruments are entirely different.
Simpson (2017) raised concerns about the mathematical aggregation process itself. When you average effect sizes across meta-analyses that used different study selection criteria, different statistical methods, and different definitions of the same construct, the resulting number carries no clear meaning. An effect size is a standardised comparison between two groups: if the groups, the interventions, and the outcomes differ across studies, then the standardisation does not hold. Simpson argued that Hattie's league table of influences creates an illusion of precision; the d values look like measurements, but they reflect the accumulated artefacts of dozens of different research traditions rather than a stable property of any particular teaching strategy.
Bergeron (2017) examined the ecological validity problem: whether findings from controlled studies can be generalised to ordinary classrooms. Many studies in Hattie's database were conducted under conditions that differ from daily teaching: short intervention windows, volunteer participants, researcher involvement in delivery, and outcomes measured by researcher-designed tests rather than national assessments. A strategy that produces d=0.60 in a six-week university trial with graduate student facilitators may produce a much smaller effect when delivered by a single teacher with 30 pupils across a full academic year. The context in which research is conducted is part of what generates the effect size, not just the strategy itself.
None of these criticisms mean that Visible Learning is without value. The broad ordering of influences, with factors related to teacher cognition, feedback quality, and pupil self-regulation clustered at the top, is consistent with findings from other research traditions, including the Education Endowment Foundation's toolkit and the work of Barak Rosenshine. What the criticisms do mean is that treating a specific d value as a precise prediction of what will happen in your classroom is not warranted. Hattie (2015) acknowledged that effect sizes should be treated as starting points for professional inquiry rather than prescriptions, and the research has most value when it is used to generate questions about practice rather than to rank strategies by number.
Visible Learning is an evidence-based teaching approach developed by John Hattie that makes learning visible to both teachers and students. Students must clearly understand what they are learning, how to learn it, and how to measure their progress. The approach focuses on evaluating the impact of teaching on student achievement rather than simply delivering content, with teachers acting as activators of learning who monitor progress and adapt instruction based on real-time evidence.
Start each lesson by sharing clear learning intentions and success criteria so students understand what they are learning and how they will know when they have succeeded. During lessons, continuously gather evidence of student understanding through questioning techniques and mini-assessments, then adjust your instruction accordingly. Help students become partners in the learning process by teaching them to self-assess and provide meaningful peer feedback to create a classroom culture where learning is everyone's responsibility.
Visible Learning transforms students from passive recipients into active partners who set goals, track progress, and seek feedback independently. Teachers benefit from evidence-based guidance on which strategies actually work, with feedback achieving an effect size of 0.7 and formative evaluation reaching 0.9. The approach helps teachers see learning through students' eyes and make teaching decisions based on real-time evidence of what's working rather than popular but ineffective interventions.
The 0.4 effect size represents Hattie's "Zone of Desired Effects" where teaching strategies begin to have meaningful impact on student achievement. Any teaching approach with an effect size of 0.4 or greater is considered beneficial, whilst strategies below this threshold may not significantly improve learning outcomes. This threshold helps teachers identify which interventions are worth their time and effort, as many popular teaching methods actually fall below this effective zone.
Look for students who can clearly articulate what they are learning and why it matters, and who actively seek feedback and set their own learning goals. You should see evidence of students self-assessing their work and providing meaningful peer feedback without constant teacher prompting. Additionally, you will notice your teaching decisions becoming more responsive to student needs as you continuously gather and act upon assessment evidence during lessons.
Many teachers focus only on sharing learning objectives without teaching students how to use success criteria to self-assess their progress. Another common mistake is gathering assessment evidence but failing to adjust instruction based on what the data reveals about student understanding. Some teachers also assume that simply posting learning intentions on the board constitutes Visible Learning, when the approach actually requires active student participation and ongoing feedback loops throughout the lesson.
Effective feedback represents one of the most powerful tools in a teacher's arsenal, with Hattie's research consistently placing it among the top influences on student achievement. However, the quality and timing of feedback matter significantly more than its frequency. Effective feedback focuses on the task, the process, and self-regulation rather than praising the person, helping students understand what they got wrong and how to improve their learning strategies.
The most impactful feedback addresses three fundamental questions: Where am I going? How am I going? Where to next? This framework, developed through extensive educational research, ensures feedback is both specific and actionable. Teachers should provide feedback that is timely, specific to learning intentions, and connects directly to success criteria. Rather than simply marking work as correct or incorrect, effective feedback identifies patterns in student thinking and guides them towards deeper understanding of the subject matter.
In classroom practice, this means moving beyond generic praise such as "good work" towards targeted comments like "your use of evidence in paragraph two strengthens your argument; now consider how you might apply the same approach to your conclusion". Peer feedback and self-assessment opportunities also enhance learning outcomes, as students develop metacognitive awareness of their own progress and learning processes.
Effective assessment in Visible Learning classrooms moves beyond traditional testing to become a continuous dialogue between teachers and students about learning progress. Formative assessment strategies, such as exit tickets, learning journals, and peer feedback sessions, provide real-time data that enables teachers to adjust instruction immediately rather than waiting for summative results. This approach aligns with Dylan Wiliam's research on assessment for learning, which demonstrates that frequent, low-stakes feedback can significantly accelerate student achievement.
The key lies in making learning intentions and success criteria transparent from the outset. When students understand exactly what they're working towards and can articulate their own progress, they become active partners in the assessment process. Regular self-assessment activities, where pupils reflect on their understanding and identify next steps, create the metacognitive skills essential for independent learning. This practice supports Hattie's findings that self-reported grades have one of the highest effect sizes on student achievement.
Practically, teachers can implement simple yet powerful monitoring tools such as traffic light systems for student confidence levels, one-minute summaries at lesson transitions, or structured peer assessment using clear rubrics. The crucial element is ensuring assessment data directly informs subsequent teaching decisions, creating a responsive classroom environment where both successes and misconceptions are addressed promptly and purposefully.
The third phase, transfer learning, involves the application of conceptual understanding to genuinely novel problems across domains. Transfer is the hardest to achieve and the most valuable. Strategies that support transfer include metacognitive monitoring, problem-solving in varied contexts, and deliberate attention to the conditions under which knowledge applies. For lesson planning, the three-phase model suggests that the same unit of work should contain distinct instructional sequences matched to the phase of learning at each point, rather than applying the same pedagogical approach throughout. Wiliam (2011) reached a complementary conclusion: the key question for a teacher is not "which strategy is best?" but "what does this pupil need at this moment?"
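To make the phase-matching idea concrete, the sketch below pairs each phase with the strategies named above. The dictionary, function name, and planner loop are assumptions of this example rather than any published tool, and d values appear only where the text above quotes them.

```python
# Illustrative sketch only: a lookup pairing each phase of the Hattie and
# Donoghue (2016) model with the strategies the text above associates with it.
# The structure and names are this example's own, not a published framework.

PHASE_STRATEGIES = {
    "surface": ["worked examples (d=0.57)", "direct instruction (d=0.60)",
                "spaced practice (d=0.65)"],
    "deep": ["reciprocal teaching", "concept mapping",
             "elaborative interrogation"],
    "transfer": ["metacognitive monitoring",
                 "problem-solving in varied contexts"],
}

def suggest_strategies(phase: str) -> list[str]:
    """Return the strategies matched to a pupil's current phase of learning."""
    return PHASE_STRATEGIES.get(phase, [])

# Sequencing one unit of work: explicit teaching first, inquiry later.
for phase in ("surface", "deep", "transfer"):
    print(f"{phase}: {', '.join(suggest_strategies(phase))}")
```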
Feedback consistently produces among the largest effect sizes in Hattie's database, yet research also shows that feedback frequently has no effect or even negative effects on learning. Hattie and Timperley (2007) resolved this paradox by distinguishing four levels at which feedback can be directed, arguing that the effectiveness of any feedback act depends critically on which level it addresses and whether that level is appropriate to the learner's current state.
The first level is task feedback (FT): information about whether a specific answer, product, or performance is correct or incorrect. This is the most common form of feedback in classrooms and the least powerful for generating learning, though it is useful when pupils have fundamental misconceptions that must be corrected before further work can proceed. The second level is process feedback (FP): information about the strategies and procedures used to complete a task. Process feedback helps pupils understand that their approach, not just their answer, is subject to improvement, and it supports the development of transferable skills rather than task-specific performance.
The third level is self-regulation feedback (FR): information that supports pupils in monitoring their own learning, checking their own work, and seeking help effectively. This level has the strongest evidence base for long-term learning gains because it reduces dependence on teacher feedback and builds the metacognitive habits that sustain independent learning. The fourth level is self feedback (FS): comment directed at the learner as a person rather than at their task, process, or regulation. Hattie and Timperley found that this level is the least effective and potentially harmful, particularly when praise for ability replaces information about learning. Dweck's (1999) research on fixed versus growth mindsets converges with this finding: praising pupils as "clever" reduces their willingness to take on challenging tasks and encourages them to attribute difficulty to fixed ability rather than to insufficient effort.
The model has a direct implication for written feedback policies. A comment such as "Good work — you clearly understand this" operates at the self level and provides no information the pupil can act on. Reframing it as "You identified the correct pattern. Check whether it holds when the numbers are negative" operates at the process and task levels simultaneously. For pupils who consistently self-correct successfully, moving feedback to the self-regulation level — "You found your own error: what strategy did you use?" — builds the monitoring habits most predictive of long-term achievement.
In Hattie's (2009) original meta-analysis, self-reported grades emerged as the single influence with the highest effect size in the entire dataset, at d=1.44. The finding attracted considerable attention and some scepticism, in part because its meaning was not immediately obvious. "Self-reported grades" does not mean allowing pupils to mark their own work uncritically. The construct, drawn from research by Kuncel, Credé and Thomas (2005) and earlier work by Mabe and West (1982), refers to the accuracy with which pupils predict their own performance on an upcoming test or task.
Pupils whose self-predictions closely match their actual outcomes have, by implication, an accurate internal model of their own current knowledge and skill. This accuracy is itself the product of previous feedback, metacognitive experience, and transparent assessment practices. When pupils receive regular, specific feedback that allows them to calibrate their self-assessments, they develop an internal standard against which to measure new learning. Hattie interpreted the effect size not as evidence that self-assessment is a magic technique but as a demonstration that pupils who know what they know, and know what they do not yet know, are in the optimal position to direct their own learning.
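As a minimal illustration of what calibration means in practice, the sketch below compares invented pupil predictions with invented outcomes; the pupils, scores, and the mean-absolute-gap measure are all assumptions of this example, not the metric used in the underlying studies.

```python
# A minimal sketch (not drawn from the studies above) of how self-report
# accuracy could be quantified: compare each pupil's predicted score with
# the score actually achieved. All names and numbers are invented.

predictions = {"Amira": 72, "Ben": 55, "Chloe": 90}   # pupil forecasts (%)
actual      = {"Amira": 70, "Ben": 68, "Chloe": 88}   # achieved scores (%)

def calibration_gap(pred: dict[str, int], real: dict[str, int]) -> float:
    """Mean absolute difference between forecast and outcome, in points."""
    return sum(abs(pred[p] - real[p]) for p in pred) / len(pred)

print(f"Mean calibration gap: {calibration_gap(predictions, actual):.1f} points")
# Ben under-predicts by 13 points: a sign that he cannot yet "see" his own
# learning and needs more specific feedback against the success criteria.
```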
The practical implication connects closely to learning intentions and success criteria. When teachers share clear criteria in advance and ask pupils to assess their own work against those criteria before receiving teacher feedback, they are cultivating the calibration mechanism that underlies the self-reported grades effect. Andrade and Valtcheva (2009) found that structured self-assessment using rubrics produced significant gains in writing quality compared to unstructured self-evaluation, because the rubric provided an external standard against which to calibrate internal judgements.
Black and Wiliam (1998) reached a parallel conclusion in their review of formative assessment research: the gains from self- and peer-assessment are most reliable when pupils have been taught to use specific criteria, not simply asked to express opinions about their work. For teachers, the practical question is whether assessment tasks are transparent enough, and feedback specific enough, for pupils to build an accurate model of their own understanding rather than relying solely on teacher evaluation to tell them where they stand.
These studies provide the research foundation for visible learning and its practical applications in schools.
Hattie, J. (2009) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement.
Hattie's landmark synthesis of 800+ meta-analyses ranked 138 teaching influences by their effect size on student achievement. Feedback (d=0.73), teacher clarity (d=0.75), and formative evaluation (d=0.90) emerged as among the most powerful interventions. The work provides teachers with an evidence hierarchy for deciding where to invest classroom time and energy.

Hattie, J. (2012) Visible Learning for Teachers: Maximizing Impact on Learning.
This companion volume translates the meta-analytic findings into practical classroom strategies. Hattie introduces the concept of "know thy impact," arguing that teachers who regularly evaluate their effect on student learning become more effective practitioners. The book provides checklists, lesson planning frameworks, and self-evaluation tools grounded in the original research synthesis.

Hattie, J. and Timperley, H. (2007) The Power of Feedback.
This paper presents the feedback model central to visible learning, identifying four levels: task, process, self-regulation, and self. The research demonstrates that feedback about the task and learning process produces the strongest effects, whilst praise directed at the self has minimal impact on achievement. Teachers can use this framework to design feedback that genuinely moves learning forward.

Hattie, J. (2003) Teachers Make a Difference: What Is the Research Evidence?
This earlier paper establishes that teacher quality accounts for approximately 30% of variance in student achievement, making it the most significant school-level factor. The research identifies expert teachers as those who challenge students, set high expectations, and maintain awareness of their impact. These findings laid the groundwork for the visible learning framework that followed.

Wiliam, D. (2011) Embedded Formative Assessment.
Wiliam's work complements Hattie's findings by providing a practical framework for the formative assessment strategies that visible learning identifies as highly effective. The book introduces five key strategies, including clarifying learning intentions, engineering classroom discussions, and activating students as instructional resources for one another.
Visible Learning has attracted substantial methodological criticism from educational researchers, and teachers who use Hattie's rankings should understand the main lines of concern. The most fundamental objection is what Slavin (2018) called the "apples and oranges" problem. Hattie's database combines meta-analyses from early childhood education, secondary schooling, higher education, clinical psychology, and sports coaching. Effect sizes from these very different contexts are then averaged as if they were measuring the same thing. A meta-analysis of feedback in medical training and a meta-analysis of feedback in primary literacy lessons both contribute to the same d value, even though the populations, tasks, and assessment instruments are entirely different.
Simpson (2017) raised concerns about the mathematical aggregation process itself. When you average effect sizes across meta-analyses that used different study selection criteria, different statistical methods, and different definitions of the same construct, the resulting number carries no clear meaning. An effect size is a standardised comparison between two groups: if the groups, the interventions, and the outcomes differ across studies, then the standardisation does not hold. Simpson argued that Hattie's league table of influences creates an illusion of precision; the d values look like measurements, but they reflect the accumulated artefacts of dozens of different research traditions rather than a stable property of any particular teaching strategy.
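To see why, it helps to write down the standard definition of the effect size being averaged (this is ordinary comparative statistics, not anything specific to Hattie's method):

```latex
d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

Because the sample standard deviation sits in the denominator, a study run on a narrow-ability sample (small s) yields a larger d for the same raw gain than a study run on a full school population, so two identical interventions can carry quite different effect sizes before any averaging takes place.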
Bergeron (2017) examined the ecological validity problem: whether findings from controlled studies can be generalised to ordinary classrooms. Many studies in Hattie's database were conducted under conditions that differ from daily teaching: short intervention windows, volunteer participants, researcher involvement in delivery, and outcomes measured by researcher-designed tests rather than national assessments. A strategy that produces d=0.60 in a six-week university trial with graduate student facilitators may produce a much smaller effect when delivered by a single teacher with 30 pupils across a full academic year. The context in which research is conducted is part of what generates the effect size, not just the strategy itself.
None of these criticisms means that Visible Learning is without value. The broad ordering of influences, with factors related to teacher cognition, feedback quality, and pupil self-regulation clustered at the top, is consistent with findings from other research traditions, including the Education Endowment Foundation's toolkit and the work of Barak Rosenshine. What the criticisms do mean is that treating a specific d value as a precise prediction of what will happen in your classroom is not warranted. Hattie (2015) acknowledged that effect sizes should be treated as starting points for professional inquiry rather than prescriptions, and the research has most value when it is used to generate questions about practice rather than to rank strategies by number.
Visible Learning is an evidence-based teaching approach developed by John Hattie that makes learning visible to both teachers and students. Students must clearly understand what they are learning, how to learn it, and how to measure their progress. The approach focuses on evaluating the impact of teaching on student achievement rather than simply delivering content, with teachers acting as activators of learning who monitor progress and adapt instruction based on real-time evidence.
Start each lesson by sharing clear learning intentions and success criteria so students understand what they are learning and how they will know when they have succeeded. During lessons, continuously gather evidence of student understanding through questioning techniques and mini-assessments, then adjust your instruction accordingly. Help students become partners in the learning process by teaching them to self-assess and provide meaningful peer feedback to create a classroom culture where learning is everyone's responsibility.
Visible Learning transforms students from passive recipients into active partners who set goals, track progress, and seek feedback independently. Teachers benefit from evidence-based guidance on which strategies actually work, with effect sizes showing that feedback achieves d = 0.7 and formative evaluation d = 0.9. The approach helps teachers see learning through students' eyes and make teaching decisions based on real-time evidence of what's working rather than popular but ineffective interventions.
The 0.4 effect size represents Hattie's "Zone of Desired Effects" where teaching strategies begin to have meaningful impact on student achievement. Any teaching approach with an effect size of 0.4 or greater is considered beneficial, whilst strategies below this threshold may not significantly improve learning outcomes. This threshold helps teachers identify which interventions are worth their time and effort, as many popular teaching methods actually fall below this effective zone.
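For readers who want to see the arithmetic behind a d value, here is a minimal sketch using invented post-test scores; the groups, numbers, and function are hypothetical, and no real trial would rest on samples this small.

```python
import statistics

# Illustrative only: Cohen's d for two invented groups of post-test scores,
# checked against Hattie's d = 0.40 hinge point.

def cohens_d(treated: list[float], control: list[float]) -> float:
    """Standardised mean difference using a pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    s1, s2 = statistics.stdev(treated), statistics.stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled

treated = [68, 74, 71, 80, 77, 69]   # hypothetical scores after the strategy
control = [66, 71, 69, 76, 73, 67]   # hypothetical scores without it
d = cohens_d(treated, control)
print(f"d = {d:.2f} -> {'above' if d >= 0.40 else 'below'} the hinge point")
# Prints roughly: d = 0.66 -> above the hinge point
```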
Look for students who can clearly articulate what they are learning and why it matters, and who actively seek feedback and set their own learning goals. You should see evidence of students self-assessing their work and providing meaningful peer feedback without constant teacher prompting. Additionally, you will notice your teaching decisions becoming more responsive to student needs as you continuously gather and act upon assessment evidence during lessons.
Many teachers focus only on sharing learning objectives without teaching students how to use success criteria to self-assess their progress. Another common mistake is gathering assessment evidence but failing to adjust instruction based on what the data reveals about student understanding. Some teachers also assume that simply posting learning intentions on the board constitutes Visible Learning, when the approach actually requires active student participation and ongoing feedback loops throughout the lesson.
Effective feedback represents one of the most powerful tools in a teacher's arsenal, with Hattie's research consistently placing it among the top influences on student achievement. However, the quality and timing of feedback matter significantly more than its frequency. Effective feedback focuses on the task, the process, and self-regulation rather than praising the person, helping students understand what they got wrong and how to improve their learning strategies.
The most impactful feedback addresses three fundamental questions: Where am I going? How am I going? Where to next? This framework, set out in Hattie and Timperley's (2007) feedback model, ensures feedback is both specific and actionable. Teachers should provide feedback that is timely, specific to learning intentions, and connected directly to success criteria. Rather than simply marking work as correct or incorrect, effective feedback identifies patterns in student thinking and guides students towards deeper understanding of the subject matter.
In classroom practice, this means moving beyond generic praise such as "good work" towards targeted comments like "your use of evidence in paragraph two strengthens your argument, now consider how you might apply this same approach to your conclusion." Peer feedback and self-assessment opportunities also enhance learning outcomes, as students develop metacognitive awareness of their own progress and learning processes.
Effective assessment in Visible Learning classrooms moves beyond traditional testing to become a continuous dialogue between teachers and students about learning progress. Formative assessment strategies, such as exit tickets, learning journals, and peer feedback sessions, provide real-time data that enables teachers to adjust instruction immediately rather than waiting for summative results. This approach aligns with Dylan Wiliam's research on assessment for learning, which demonstrates that frequent, low-stakes feedback can significantly accelerate student achievement.
The key lies in making learning intentions and success criteria transparent from the outset. When students understand exactly what they're working towards and can articulate their own progress, they become active partners in the assessment process. Regular self-assessment activities, where pupils reflect on their understanding and identify next steps, create the metacognitive skills essential for independent learning. This practice supports Hattie's findings that self-reported grades have one of the highest effect sizes on student achievement.
Practically, teachers can implement simple yet powerful monitoring tools such as traffic light systems for student confidence levels, one-minute summaries at lesson transitions, or structured peer assessment using clear rubrics. The crucial element is ensuring assessment data directly informs subsequent teaching decisions, creating a responsive classroom environment where both successes and misconceptions are addressed promptly and purposefully.
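As one hypothetical example of assessment data directly informing teaching decisions, the sketch below tallies traffic-light exit tickets against each success criterion and flags any criterion where too many pupils are not yet secure; the criteria, responses, and 40% threshold are invented for illustration, not part of any published protocol.

```python
from collections import Counter

# A hypothetical sketch of turning traffic-light exit tickets into a
# reteaching decision. All criteria, colours, and thresholds are invented.

responses = {
    "identify the main clause": ["green", "green", "amber", "green", "red"],
    "punctuate direct speech":  ["red", "amber", "red", "amber", "green"],
}

def needs_reteaching(tickets: dict[str, list[str]],
                     threshold: float = 0.4) -> list[str]:
    """Flag each criterion where more than `threshold` of pupils are not green."""
    flagged = []
    for criterion, colours in tickets.items():
        tally = Counter(colours)
        not_secure = (tally["amber"] + tally["red"]) / len(colours)
        if not_secure > threshold:
            flagged.append(criterion)
    return flagged

print(needs_reteaching(responses))
# ['punctuate direct speech'] -> plan the next lesson's starter around it
```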