Slug: progress-monitoring-cbm-teachers-guide
---
Progress monitoring is the systematic, repeated measurement of student performance over time to determine whether instruction is working. Curriculum-Based Measurement (CBM) is the most rigorously researched tool for doing this: standardised, brief probes drawn from the curriculum a student is expected to master by year's end. Together, they answer the question every teacher already asks informally: is this child making progress, and if not, what do I change?
Stanley Deno (1985) at the University of Minnesota developed CBM as a practical, repeatable measurement system for special education. His insight was that academic performance could be reliably assessed in one to three minutes using curriculum-based probes, and that this data, collected repeatedly over time, would reveal learning trajectories that single test scores never could. Four decades of replication have confirmed that insight.
This guide covers how CBM works, how to read the graphs it produces, and, critically, what to do when the trendline is flat.
---
Key Takeaways
- CBM is brief, standardised, and repeatable: One-to-three-minute probes sampled from the year-end curriculum give you a reliable growth trajectory rather than a single data point.
- The trendline, not the score, drives decisions: A student's rate of improvement over six to eight data points tells you far more than any individual probe result.
- Flat data demands a systematic pivot: When the trendline is not rising toward the goal line, a structured decision protocol, not a gut feeling, should govern your next instructional move.
- Progress monitoring scales across all three MTSS tiers: Benchmark screening three times a year, biweekly monitoring at Tier 2, and weekly monitoring at Tier 3 form a complete data infrastructure for the whole school.
---
What Is Curriculum-Based Measurement?
CBM is distinct from the curriculum-embedded tests most teachers use day to day. A typical unit test checks whether a student has mastered the specific skills taught in the past two weeks. A CBM probe samples the entire curriculum a student is expected to know by the end of the school year. This distinction matters enormously in practice.
Consider a Year 3 reading fluency probe: rather than assessing only the phonics patterns introduced in October, the probe draws from the full range of grade-level text the student will need to read by June. A student who scores low in October and rises steadily over the year is demonstrating genuine learning growth. A student who scores high on unit tests but flat on CBM probes may be mastering taught chunks without building fluency across the broader curriculum.
Shinn (2008) describes this as the difference between a 'skills mastery' approach and a 'curriculum sampling' approach. Skills mastery measures whether a discrete skill is acquired. Curriculum sampling measures general proficiency in the academic domain. CBM is firmly in the curriculum-sampling tradition, which is why it predicts end-of-year outcomes so reliably.
Probes are administered under standardised conditions: same instructions, same timing, same scoring rules every time. This standardisation is what makes repeated measurement meaningful. If the administration varies, the score variance reflects procedure, not learning.
---
Why Progress Monitoring Matters
The Individuals with Disabilities Education Act (IDEA, 2004) requires that IEPs include measurable annual goals and that parents receive regular reports on progress toward those goals. Progress monitoring provides the data infrastructure to meet both requirements. Without systematic measurement, the 'regular reports' parents receive are professional estimates, not evidence.
Beyond compliance, the evidence for progress monitoring as an instructional tool is substantial. Fuchs and Fuchs (1986) conducted a landmark meta-analysis of 21 studies and found that teachers who used CBM data to make instructional decisions produced significantly greater student gains than those who relied on professional judgement alone. The effect size was 0.70, which the authors noted was comparable to one-to-one tutoring effects.
The mechanism is straightforward: when you collect data every week, you see problems emerging within weeks rather than discovering at year's end that a student did not meet their goal. Stecker, Fuchs, and Fuchs (2005) confirmed this in a systematic review of 18 studies, concluding that the achievement benefits of progress monitoring are contingent on teachers actually using the data to alter instruction. Collecting data without changing practice produces no benefit.
Data-Based Individualisation (DBI), developed by the National Center on Intensive Intervention (NCII, 2013), formalises this link. DBI is a five-step process: implement validated intervention, use progress monitoring to assess response, analyse the data, adapt the intervention, and repeat. It is the applied framework built on the foundation Deno and Fuchs established. Teachers working within MTSS and RTI frameworks will recognise DBI as the engine that makes tiered support genuinely responsive rather than procedurally compliant.
---
CBM Tools by Domain
Progress monitoring tools are domain-specific. The reading probe that works for a first-grader is not the same as the one that works for a fourth-grader, and neither translates to maths or writing. The table below summarises the major CBM tools by domain.
Reading
Oral Reading Fluency (ORF) is the most researched CBM measure. The student reads aloud from a grade-level passage for one minute; the examiner records words read correctly per minute (WCPM). WCPM is a robust indicator of overall reading proficiency because fluent reading requires the simultaneous integration of decoding, vocabulary, and comprehension (Good and Kaminski, 2002).
DIBELS (Dynamic Indicators of Basic Early Literacy Skills) and AIMSweb are the dominant ORF platforms in US schools. Both provide benchmarks, standardised passages, and growth norms. EasyCBM, developed at the University of Oregon, offers a free or low-cost alternative with strong psychometric properties. For students in Grades 1 through 6, ORF WCPM benchmarks typically fall in the following ranges:
- Grade 1 (spring): 60–90 WCPM
- Grade 2 (spring): 90–110 WCPM
- Grade 3 (spring): 110–130 WCPM
- Grade 4 (spring): 115–140 WCPM
Students reading more than 10 words per minute below benchmark warrant closer monitoring.
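As a minimal sketch, that screening check can be expressed in code. The benchmark values here are the lower bounds of the illustrative ranges quoted above, not official norms, and the function name is hypothetical:

```python
# Lower bound of the illustrative spring ORF benchmark ranges above (WCPM).
# These are not official norms; substitute your platform's benchmark table.
SPRING_BENCHMARKS_WCPM = {1: 60, 2: 90, 3: 110, 4: 115}

def needs_closer_monitoring(grade: int, wcpm: int, margin: int = 10) -> bool:
    """Flag a student reading more than `margin` WCPM below benchmark."""
    return wcpm < SPRING_BENCHMARKS_WCPM[grade] - margin

print(needs_closer_monitoring(3, 95))   # 95 < 110 - 10, so True
print(needs_closer_monitoring(3, 105))  # within 10 WCPM of benchmark, so False
```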
Maze passages are a valid alternative for students who find oral reading uncomfortable, or for group administration. In a maze passage, every seventh word is replaced with a choice of three words; the student reads silently and selects the correct word for two to three minutes. Maze correlates well with reading comprehension and is particularly useful from Grade 3 upward, where comprehension increasingly separates struggling readers from fluent ones.
Understanding the science of reading provides important context for interpreting ORF data: a student with decoding deficits will show a different CBM profile from one with fluency or vocabulary problems, and the intervention response will differ accordingly.
Maths
CBM maths probes take two forms: computation probes and concepts and applications probes. Computation probes typically last two minutes and contain a mixed set of problems matching grade-level expectations (single-digit addition in Grade 1, multi-digit operations in Grades 3 to 5, fraction computation in Grades 6 to 7). The score is digits correct per minute rather than problems correct, which gives finer-grained measurement.
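A simplified sketch of digits-correct scoring, assuming right-alignment by place value (formal CBM scoring rubrics handle remainders and multi-row work; this is an illustration, not the official rubric):

```python
def digits_correct(student: str, answer: str) -> int:
    """Count the student's digits that match the correct answer by place
    value (right-aligned) -- a simplified digits-correct scoring sketch."""
    # Reverse both strings so position 0 is the ones place for each.
    return sum(1 for sd, ad in zip(student[::-1], answer[::-1]) if sd == ad)

# Correct answer 1372, student wrote 1322: thousands, hundreds, and ones
# digits match, the tens digit does not.
print(digits_correct("1322", "1372"))  # 3
```

Summing this score across all problems attempted in the two-minute probe, then dividing by two, gives digits correct per minute.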
Concepts and applications probes are longer (six to eight minutes) and assess applied reasoning: word problems, measurement, data interpretation, and number sense. These capture a broader picture of mathematical understanding and are better predictors of state assessment performance in the upper elementary grades.
Teachers using cognitive load theory in their maths instruction will find that CBM computation scores often expose exactly where cognitive overload is occurring: a student who scores well on isolated computation but poorly on mixed probes may be struggling with the retrieval and selection demands of a mixed format, not with the underlying operations.
Written Expression
Written expression CBM is less commonly implemented than reading or maths, but it is a valuable tool for students with writing difficulties. A grade-level sentence starter is provided and the student writes for three minutes. Scores are reported as:
- Total Words Written (TWW): a measure of productivity and fluency
- Correct Word Sequences (CWS): adjacent word pairs that are both correctly spelled and syntactically acceptable, providing a measure of writing quality
CWS is the more sensitive measure for tracking growth in students receiving writing intervention. A student who writes many words but with poor syntax will have a high TWW and a low CWS ratio; intervention targeting sentence structure should raise the CWS score while TWW may stay stable or even decrease initially as the student slows down to apply new skills.
Spelling
Spelling CBM uses a dictation format: the examiner reads words aloud and students write them. Scoring counts correct letter sequences (CLS) rather than whole words, which makes the measure sensitive to partial learning. A student who writes 'frend' for 'friend' has most of the letter sequences correct and the score reflects that partial knowledge. This matters for tracking progress in students with dyslexia, where growth can be genuine but slow, and whole-word scoring would understate it.
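The partial-credit idea can be approximated in code. This sketch treats each word as a sequence of adjacent-letter pairs (with boundary markers) and counts the longest in-order match against the target's pairs; it is a rough stand-in for hand scoring, not the official CLS rubric:

```python
def correct_letter_sequences(response: str, target: str) -> int:
    """Approximate correct-letter-sequence (CLS) scoring via an in-order
    match (longest common subsequence) of boundary-padded letter pairs."""
    def pairs(word: str) -> list[str]:
        padded = f"^{word}$"  # boundary markers count as sequence endpoints
        return [padded[i:i + 2] for i in range(len(padded) - 1)]

    r, t = pairs(response), pairs(target)
    # Classic dynamic-programming LCS over the two pair sequences.
    dp = [[0] * (len(t) + 1) for _ in range(len(r) + 1)]
    for i, rp in enumerate(r, 1):
        for j, tp in enumerate(t, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if rp == tp else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(r)][len(t)]

print(correct_letter_sequences("frend", "friend"))   # 5 of the 7 sequences
print(correct_letter_sequences("friend", "friend"))  # 7 of 7
```

The 'frend' example from the text scores 5 of 7 possible sequences, which is exactly the partial knowledge that whole-word scoring would record as zero.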
---
CBM Tools by Domain: Quick Reference
| Domain | Tool / Measure | What It Measures | Probe Duration | Typical Frequency | Primary Use |
| --- | --- | --- | --- | --- | --- |
| Reading | DIBELS ORF / AIMSweb ORF / easyCBM | Words correct per minute | 1 minute | Weekly (Tier 3), biweekly (Tier 2), 3x/year (benchmark) | Benchmark and progress monitoring |
| Reading | DIBELS Maze / AIMSweb Maze | Correct maze selections | 2–3 minutes | Biweekly to monthly | Comprehension screening, group admin |
| Maths | AIMSweb M-COMP / easyCBM Maths | Digits correct per minute | 2 minutes | Weekly to biweekly | Computation fluency monitoring |
| Maths | AIMSweb M-CAP | Concepts and applications score | 6–8 minutes | Monthly | Applied reasoning, state test prediction |
| Written Expression | CBM-WE (AIMSweb / local probes) | Total words written; correct word sequences | 3 minutes | Biweekly to monthly | Writing fluency and quality monitoring |
| Spelling | CBM Spelling (dictation format) | Correct letter sequences | 2 minutes | Biweekly | Spelling growth, partial knowledge tracking |
---
Setting Up a Progress Monitoring System
The first decision is measure selection. Choose the CBM measure that maps most directly to the student's IEP goal. If the goal targets reading fluency, use an ORF measure. If the goal targets maths computation, use a computation probe. Mismatching the measure to the goal is a common error that renders the data uninterpretable.
Establishing a Baseline
Collect a minimum of three data points before setting the goal line. Three points allow you to calculate a median baseline score that is less susceptible to a single bad or good day. Administer probes on three separate days, ideally within a one-week window, and use the median score as your baseline.
Some practitioners prefer a three-point baseline spread over two weeks to capture natural performance variability. For a student whose baseline scores are 42, 67, and 51 WCPM, the median is 51. Starting with a single score of 67 would produce an unrealistically high baseline and a goal that may be beyond reach.
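The baseline calculation is a one-liner with the standard library, using the example scores from the paragraph above:

```python
from statistics import median

# Three probe scores collected on separate days (WCPM)
baseline_probes = [42, 67, 51]

# The median resists a single unusually good or bad day;
# a mean (53.3) would be pulled up by the outlying 67.
baseline = median(baseline_probes)
print(baseline)  # 51
```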
Setting Ambitious Goals
There are two main approaches to goal-setting: using normative growth rates and using end-of-year benchmark targets. Normative growth rates specify expected weekly gains for typical students at each grade level. For oral reading fluency, typical growth rates (from Fuchs, Fuchs, Hamlett, Walz, and Germann, 1993) are approximately:
- Grade 1: 2–3 WCPM per week
- Grade 2: 1.5–2 WCPM per week
- Grade 3: 1–1.5 WCPM per week
- Grade 4: 0.85–1.1 WCPM per week
For students receiving Tier 3 intervention, the NCII (2013) recommends setting growth rate goals that are 1.5 times the typical rate, reflecting the expectation that intensive intervention should accelerate growth. A Grade 3 student with a baseline of 60 WCPM and a 30-week intervention period might have a goal of 60 + (1.5 × 1.25 × 30) = 116 WCPM, which approaches grade-level benchmark.
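The goal arithmetic can be sketched as a small function. The 1.25 WCPM/week rate is the midpoint of the Grade 3 range quoted above, and the 1.5 multiplier follows the NCII recommendation; the function name is illustrative:

```python
def cbm_goal(baseline: float, weekly_rate: float, weeks: int,
             multiplier: float = 1.5) -> float:
    """Goal = baseline + (intensity multiplier x weekly growth rate x weeks).
    multiplier=1.5 reflects the NCII recommendation for Tier 3 intervention."""
    return baseline + multiplier * weekly_rate * weeks

# Grade 3 student, baseline 60 WCPM, 30-week intervention period
goal = cbm_goal(baseline=60, weekly_rate=1.25, weeks=30)
print(round(goal))  # 116 WCPM, matching the worked example above
```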
Setting IEP goals linked to CBM growth rates requires the same careful attention to ambition and reachability described in guidance on writing neurodiversity-affirming IEP goals. Goals should stretch the student meaningfully without being set so high that the data always shows failure.
Graphing Data
Graph every data point as it is collected. The visual display of CBM data is not optional or decorative; it is the mechanism by which you and the student can see the learning trajectory. The graph contains:
- The baseline: Three to four data points plotted to the left of a vertical phase change line
- The goal point: A single point plotted at the goal score on the final week of the monitoring period
- The aimline: A straight line connecting the median baseline to the goal point; this is the expected trajectory
- Data points: Each weekly or biweekly probe score plotted and connected
Most CBM platforms (DIBELS Next, AIMSweb Plus, easyCBM) generate graphs automatically. If you are graphing manually, graph paper or a simple spreadsheet works well. The key is that the graph is visible, updated every time a probe is administered, and used in instructional planning.
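If you are building the graph in a spreadsheet or script, the aimline is just a straight line from the median baseline to the goal point. A minimal sketch (function name is illustrative):

```python
def aimline(baseline: float, goal: float, weeks: int) -> list[float]:
    """Expected score at each weekly probe: a straight line from the
    median baseline (week 0) to the goal point (final week)."""
    slope = (goal - baseline) / weeks
    return [round(baseline + slope * w, 1) for w in range(weeks + 1)]

# Baseline 51 WCPM, goal 90 WCPM over a 13-week monitoring period
line = aimline(baseline=51, goal=90, weeks=13)
print(line[0], line[-1])  # 51.0 90.0
```

Plotting each probe score against the corresponding aimline value gives the two interpretive zones described in the next section.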
---
Reading the CBM Graph
The aimline divides the graph into two interpretive zones. Data points above the aimline mean the student is exceeding the expected trajectory. Data points below the aimline mean the student is falling short. Individual data points above or below the aimline on their own are not meaningful; the pattern across six to eight points is what matters.
The Four-Point Decision Rule
The NCII (2013) recommends the four-point decision rule as the standard guide for data interpretation:
- Four consecutive data points above the aimline: The goal is too easy. Raise the goal.
- Four consecutive data points below the aimline: The intervention is insufficient. Change the intervention.
- Data points scattered above and below with no pattern: Continue the current intervention and collect more data.
- Data points consistently on or near the aimline: The student is on track; maintain the current approach.
This rule prevents teachers from reacting to single data points, which fluctuate for countless reasons (a difficult Monday, a late-night visit to the emergency room, a forgotten breakfast). Only sustained patterns justify a decision.
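The rule is simple enough to express directly. This sketch checks only the most recent four points against their aimline values (the function name and return strings are illustrative, not from any CBM platform):

```python
def four_point_decision(scores: list[float], aim: list[float]) -> str:
    """Apply the four-point decision rule to probe scores paired with
    the aimline value expected at each probe."""
    if len(scores) < 4:
        return "collect more data"
    last4 = list(zip(scores[-4:], aim[-4:]))
    if all(s > a for s, a in last4):
        return "raise the goal"
    if all(s < a for s, a in last4):
        return "change the intervention"
    return "continue and keep monitoring"

# Four consecutive points below a rising aimline
print(four_point_decision([44, 45, 46, 47], [50, 52, 54, 56]))
# change the intervention
```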
When to Raise the Goal
If four data points fall consecutively above the aimline, draw a new vertical phase change line and recalculate. Use the median of the most recent three to four scores as the new baseline, and set a new goal using the same growth rate formula. Do not simply extend the original aimline; doing so would understate the student's capacity and produce a goal that is no longer ambitious.
Sharing this process with students is itself instructional. When a student can see their own growth line rising, understands what the aimline means, and participates in setting a new goal, they develop a specific kind of metacognitive awareness about their own learning trajectory. This connects directly to what the research on formative assessment strategies identifies as self-regulated learning, one of the most powerful instructional moves available.
When Data Is Variable but Trending
A student whose scores bounce between 55 and 80 WCPM over eight weeks may not trigger the four-point rule but also may not be making consistent progress. In these cases, drawing a best-fit trendline (or using the platform's trend calculation) gives a clearer picture. If the trendline slope is positive and at least as steep as the aimline, the intervention is working despite the variability. If the trendline is shallower than the aimline, the four-point rule is probably soon to be triggered.
---
The Friday Data / Tuesday Pivot
This is where most teachers, and most CBM guides, stop. They explain the four-point decision rule, note that an intervention change is required, and leave the teacher to figure out what that change should be. The result is what Stecker, Fuchs, and Fuchs (2005) identified as the central implementation failure of CBM: teachers collect the data, see the flat trendline, and do not know what to do next.
The Friday Data / Tuesday Pivot addresses this gap directly. When you score Friday's probe and see that four consecutive points have fallen below the aimline, you need a systematic protocol to decide what to change by Tuesday. Not a vague aspiration to 'try something different'; a structured diagnostic sequence.
Work through the following checklist in order. Each question is a hypothesis; when you identify a plausible cause, act on it before moving to the next.
The Instructional Pivot Checklist
1. Was the intervention implemented with fidelity?
Check your session logs. Was the intervention delivered the required number of times per week? For the full session duration? By the same person using the same materials? A student receiving a three-times-per-week fluency intervention who actually received it twice per week for three of the past four weeks has not had an adequate exposure to evaluate. Fidelity problems are not failures of the student; they are failures of the delivery system. Fix the delivery before concluding the intervention is ineffective.
Attendance records also matter here. A student who missed six of the past twelve intervention sessions may have a flat trendline not because the intervention is wrong, but because they have not received enough of it. Cross-reference CBM data with attendance logs before making any instructional change.
2. Is the instructional match correct?
Burns (2004) established an instructional hierarchy for academic skills based on error rate and fluency. Reading material at the student's frustration level (below 93% accuracy) produces anxiety and avoidance without producing learning. Material at the independent level (above 97% accuracy) produces practice without challenge or growth. The instructional level (93–97% accuracy) is the zone where learning occurs most efficiently.
If a student's ORF probe shows an accuracy rate below 93%, the intervention passages may be at frustration level. Drop to a lower-level passage set and re-establish baseline before concluding that a different intervention is needed. The same principle applies to maths: a student doing two-digit multiplication CBM probes with below-90% accuracy needs single-digit fluency work first. Scaffolding instruction to the correct instructional level is not accommodation; it is the precondition for growth.
3. Does the cognitive load need reducing?
Flat fluency data in a student who can demonstrate the skill accurately but slowly often signals that the instructional routine is demanding too much from working memory. When a student must simultaneously retrieve, apply, and produce a response, the cognitive load can exceed working memory capacity. The result is slow, effortful performance that does not improve because the system is running at capacity, not building automaticity.
Practical reductions in cognitive load include: breaking multi-step tasks into single-step practice, using partially worked examples before requiring independent production, reducing the number of stimuli on the page, and providing a visual scaffold (a multiplication grid, a phoneme chart) that offloads the retrieval demand so the student can practise the application. Once fluency improves with the scaffold, fade it systematically.
4. Should the modality change?
A student who has received six weeks of auditory-phonological intervention (phoneme blending, segmenting) with a flat ORF trendline may not need more of the same. They may need a visual approach: grapheme-phoneme card sorts, colour-coded word families, or a structured word study programme. The evidence for differentiated instruction suggests that when one pathway is not producing growth, shifting to a complementary pathway (visual to auditory, abstract to concrete, implicit to explicit) can unblock progress.
This is not 'learning styles' theory. It is recognition that different instructional activities recruit different cognitive processes, and a student who is not responding to one approach may respond differently to another that engages the same underlying skill through a different route.
5. Is the intervention frequency sufficient?
For students in Tier 3 with significantly below-grade-level skills, three sessions per week may not be enough. The research base for intensive intervention typically involves four to five sessions per week of 30–45 minutes each (Fuchs and Fuchs, 2007). If a student's programme delivers three 20-minute sessions per week and the trendline is flat, increasing frequency to five 30-minute sessions is a legitimate instructional change, distinct from changing the content of the intervention.
6. Are there confounding variables?
Before concluding that the instruction is the problem, rule out variables outside the classroom. A student who began a new medication in week four of monitoring may show a performance dip that reflects medication adjustment, not instructional failure. A student whose family experienced a significant stressor will show variability in CBM scores. These data points are real and should be noted, but they do not warrant an instructional pivot. Document confounding variables on the graph (a small note or phase change line with annotation) so the data record is interpretable.
When CBM data is flat and you have also been tracking behaviour, the picture may be more complex. Functional behaviour assessment can reveal whether avoidance behaviour is masking skill gaps or whether skill gaps are driving avoidance behaviour. The two problems require different responses.
When data is persistently flat despite systematic pivots, the situation may warrant the kind of deep diagnostic review described in guidance for IEP annual reviews where progress has stalled.
---
Progress Monitoring Within MTSS
CBM functions differently at each tier of the MTSS framework, and the frequency of measurement reflects the intensity of support provided.
Tier 1: Universal Screening
At Tier 1, all students are screened three times per year (autumn, winter, spring) using the same CBM measure. This is benchmark assessment, not progress monitoring in the strict sense, but it serves the same measurement function: it tells you which students are below benchmark and need closer attention.
The cut-point for identifying risk typically falls at the 25th percentile for the grade-level benchmark. Students below this threshold are candidates for Tier 2 support. Students between the 25th and 40th percentiles may need closer monitoring before a formal intervention assignment.
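As a sketch, the sorting logic looks like this. In practice the 25th-percentile cut comes from national grade-level norms, not the local distribution; the class scores here are hypothetical and local percentiles are used purely for illustration:

```python
from statistics import quantiles

# Hypothetical autumn ORF screening scores for one class (WCPM)
scores = [38, 44, 52, 55, 58, 61, 63, 66, 70, 72, 75, 81, 84, 90, 95, 102]

p25 = quantiles(scores, n=4)[0]   # 25th percentile (first quartile cut)
p40 = quantiles(scores, n=10)[3]  # 40th percentile (fourth decile cut)

tier2_candidates = [s for s in scores if s < p25]          # Tier 2 support
watch_list = [s for s in scores if p25 <= s < p40]         # closer monitoring
print(tier2_candidates)  # [38, 44, 52, 55]
```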
Tier 2: Strategic Monitoring
Students receiving Tier 2 intervention should be monitored every two weeks. This cadence produces data points often enough to reveal an emerging pattern, without letting a student sit for months in an ineffective intervention before anyone notices. After approximately six to eight data points (roughly twelve to sixteen weeks of biweekly monitoring), apply the four-point decision rule.
If a Tier 2 student is responding well (trendline meeting or exceeding the aimline), they may be stepped down to Tier 1 with continued benchmark screening. If they are not responding, the decision moves to either intensifying the Tier 2 intervention or escalating to Tier 3.
Tier 3: Intensive Monitoring
Weekly progress monitoring is the standard at Tier 3. Students at this tier are receiving intensive intervention and the decision-making pace should match the intervention intensity. Weekly data points allow the four-point rule to be applied after approximately four weeks, meaning a student who is not responding to Tier 3 intervention can be identified and referred for comprehensive evaluation within six to eight weeks rather than waiting months.
CBM data is a critical component of the eligibility determination process for students referred for special education evaluation. A pattern of inadequate response to well-implemented, intensive intervention, documented through systematic CBM graphs, is the evidential foundation for a learning disability determination under IDEA's Response to Intervention framework. The CBM record you maintain at Tier 3 is, in effect, part of the student's eligibility record. The relationship between Tier 3 data and IEP eligibility decisions makes the quality of that data consequential in a legal as well as instructional sense.
---
Common Implementation Errors
The gap between what CBM can do and what it typically does in schools reflects a predictable set of implementation errors. These are worth naming directly.
Inconsistent administration. Allowing a student extra time, using a different passage format, or permitting corrections during the probe invalidates the score for progress monitoring purposes. Standardisation is the mechanism that makes repeated measurement meaningful. A score collected under non-standard conditions is not comparable to previous scores.
Irregular scheduling. Collecting CBM every three to five weeks rather than weekly or biweekly produces too few data points for the four-point decision rule to function. Detecting four consecutive points below the aimline requires four data points; if those take 20 weeks to collect, the student has spent 20 weeks in an intervention that may not be working.
Graphing delays. Scoring probes but not graphing them for a week negates the formative purpose of the tool. The graph must be updated immediately after scoring so it can be used for planning. A stack of unplotted probes is an archive, not a monitoring system.
Ignoring decision rules. The four-point rule exists because teachers reliably over-interpret random variation and under-react to sustained patterns. 'I know he can do better than this' is not a decision rule. 'Four consecutive points below the aimline' is. Trusting the rule over the intuition is not a failure of professional judgement; it is professional judgement informed by evidence.
Testing but not teaching. Some students receive CBM probes reliably but receive little change in their instruction regardless of the results. The probe scores become an administrative record rather than an instructional tool. This is the pattern Stecker, Fuchs, and Fuchs (2005) identified as the primary failure mode: data collected, not used.
Using CBM as punishment. In some school cultures, CBM probes are associated exclusively with being 'pulled out' or identified as struggling. When students experience the probe as a stressor rather than a growth tool, performance anxiety contaminates the scores. Sharing the graph with the student, celebrating upward trends, and framing each probe as a chance to see how much they have grown reframes the process. Students who monitor their own progress show greater motivation and self-efficacy in classroom research, consistent with findings on retrieval practice and self-regulation.
Failing to share data with students. This is the missed opportunity most often overlooked. The student is the person most motivated to understand their own learning trajectory. A student who knows their baseline, understands their goal, and can read their own graph has a fundamentally different relationship to their intervention than one who simply turns up for sessions.
---
A Cross-Atlantic Perspective on Assessment
American CBM is heavily quantified: standardised probes, normative tables, WCPM targets, four-point rules. This is both its strength and its limitation. The precision of the measurement system produces defensible, replicable data. It can also produce a narrowing of attention to what is measured at the expense of what is not.
Black and Wiliam (1998), in their foundational review of formative assessment research, argued that the quality of feedback depends on the quality of the teacher's understanding of the student, not just the quality of the measurement. Their review of 580 studies found that rich, specific, growth-oriented feedback produced effect sizes of 0.4 to 0.7. They were not describing CBM probes; they were describing the kind of moment-to-moment, observation-based assessment that sits alongside any measurement system.
The UK formative assessment tradition, which Black and Wiliam shaped, leans heavily on expert teacher judgement, professional inference, and qualitative observation. It produces deep contextual understanding of individual learners but can struggle to demonstrate growth in legally defensible terms.
Best practice blends both. Use CBM for its rigour and its legal defensibility. Use formative observation and professional judgement for the context that makes the numbers meaningful. A student's ORF score of 68 WCPM tells you they are below the Grade 3 benchmark. Your observation of the specific words they stumble over, the patterns in their miscues, and the look on their face when they encounter multisyllabic words tells you what to teach next. Neither alone is sufficient.
---
Further Reading
Key Research Papers on Progress Monitoring and CBM
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.
The foundational paper that established CBM as a measurement system. Deno argues that brief, curriculum-referenced probes are more useful for instructional decision-making than conventional standardised tests. Essential reading for understanding why CBM was developed and what problem it was designed to solve.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199–208.
A meta-analysis of 21 controlled studies examining what happens when teachers use CBM data to modify instruction. Teachers using systematic formative evaluation produced significantly larger achievement gains than control-group teachers. The effect size of 0.70 established the empirical case for data-based decision-making.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–819.
A systematic review of 18 CBM implementation studies that identifies the conditions under which progress monitoring produces achievement benefits. The central finding is that data collection alone is insufficient; teachers must use data to change instruction. The paper provides specific protocols for data use that informed the DBI framework.
Burns, M. K. (2004). Empirical analysis of drill ratio research: Refining the instructional level for drill tasks. Remedial and Special Education, 25(3), 167–173.
Burns analyses the evidence base for the frustration/instructional/independent level framework in academic skills, providing the empirical foundation for the instructional match checklist in the Friday Data / Tuesday Pivot protocol. The paper establishes accuracy rate thresholds that are directly applicable to CBM data interpretation and intervention matching decisions.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–74.
A synthesis of 580 studies on formative assessment that remains one of the most cited papers in educational assessment research. Black and Wiliam's finding that feedback quality drives achievement gains provides the theoretical complement to CBM's measurement rigour. Essential for understanding the limits of quantitative progress monitoring and the role of teacher observation alongside it.
---
References
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–74.
Burns, M. K. (2004). Empirical analysis of drill ratio research: Refining the instructional level for drill tasks. Remedial and Special Education, 25(3), 167–173.
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199–208.
Fuchs, L. S., & Fuchs, D. (2007). A model for implementing responsiveness to intervention. Teaching Exceptional Children, 39(5), 14–20.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22(1), 27–48.
Good, R. H., & Kaminski, R. A. (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Institute for the Development of Educational Achievement.
National Center on Intensive Intervention. (2013). Data-based individualization: A framework for intensive intervention. American Institutes for Research.
Shinn, M. R. (2008). Best practices in using curriculum-based measurement in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (pp. 243–262). National Association of School Psychologists.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42(8), 795–819.
---
Pick one student whose progress you are uncertain about and administer three ORF or maths computation probes this week. Establish a median baseline. Set a goal using the grade-level growth rate for your tier. Then graph it. The act of graphing a baseline is the most consequential first step: once the line exists, it makes absence of growth visible, and visible problems get solved.