Dynamic Indicators of Basic Early Literacy Skills
What are DIBELS?
The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) are a set of procedures and measures for assessing the acquisition of early literacy skills from kindergarten through sixth grade. They are designed to be short (one minute) fluency measures used to regularly monitor the development of early literacy and early reading skills.
The DIBELS measures were specifically designed to assess the Big Ideas in Reading. These research-based measures are linked to one another and predictive of later reading proficiency. The measures are also consistent with many of the Common Core State Standards in Reading, especially the Foundational Skills. Combined, the measures form an assessment system of early literacy development that allows educators to readily and reliably determine student progress.
Why use DIBELS?
Teaching with the odds in your favor.
The purpose of the DIBELS Benchmark goals is to provide educators with standards for gauging the progress of all students. The Benchmark goals represent a level of performance for all students to reach in order to be considered on track for becoming a successful reader. The DIBELS goals and cut scores are research-based, criterion-referenced scores. They indicate the probability of achieving subsequent early literacy goals. Benchmark goals for each measure and time period were established using a minimum cut point at which the odds were in favor of a student achieving a future reading goal. So, for a child with a score at or above the benchmark goal at a given point, the probability is high for achieving future goals; the probability of needing additional support in order to achieve future goals is low.
In addition to these goals, DIBELS also includes cut scores indicating when the odds are against achieving subsequent literacy goals. Students with scores at or below these cut points are unlikely to meet subsequent early literacy goals unless additional instructional support is provided.
A unique feature of the DIBELS benchmark decision rules is the inclusion of a zone where a clear prediction is not possible. Students with scores in this category require strategic planning on the part of educators to determine appropriate strategies to support the students to meet subsequent early literacy goals.
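The three-zone decision rule described above can be sketched as a simple classifier. The cut points in the example below are placeholders for illustration only; actual DIBELS goals and cut scores differ by measure, grade, and time of year:

```python
def support_level(score, cut_score, benchmark_goal):
    """Classify a score using DIBELS-style benchmark decision rules.

    Scores at or above the benchmark goal predict later success (Core);
    scores at or below the cut score predict difficulty without added
    support (Intensive); scores in between fall in the zone where no
    clear prediction is possible (Strategic).
    """
    if score >= benchmark_goal:
        return "Core"        # odds in favor of meeting later goals
    if score <= cut_score:
        return "Intensive"   # unlikely to meet later goals without support
    return "Strategic"       # no clear prediction; plan strategically

# Hypothetical cut points, for illustration only
print(support_level(52, cut_score=25, benchmark_goal=47))  # Core
print(support_level(30, cut_score=25, benchmark_goal=47))  # Strategic
print(support_level(20, cut_score=25, benchmark_goal=47))  # Intensive
```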
Teachers can use students' performance at the beginning of the school year to identify those who will most likely require more intensive instruction, reducing the likelihood that a student will struggle with reading at a later time point.
Because the goals and cut scores are based on longitudinal predictive probabilities, they are not set in stone. A score at or above the benchmark indicates a high probability, not a certainty, of achieving subsequent reading goals.
The DIBELS measures were specifically designed to assess the Big Ideas in Reading
| Measure | Measurement Area | DIBELS 6th Edition | DIBELS Next |
|---|---|---|---|
| NWF | Alphabetic Principle and Phonics | ✓ | ✓ |
| ORF | Accuracy and Fluency | ✓ | ✓ |
| WUF | Vocabulary and Oral Language | ✓ | |
Full support of benchmark assessment data for DIBELS 6th Edition and DIBELS Next.
Benchmark testing is the systematic process of screening all students on essential skills predictive of later reading performance. Benchmark testing is one part of a comprehensive assessment system that includes universal screening, progress monitoring, summative assessments and other formal and informal assessments all designed to get the critical information needed to make informed instructional decisions. It is a foundational link between assessment, instruction and goal setting.
The DIBELS assessments have been researched and validated specifically for benchmark testing in kindergarten through sixth grade. We recommend screening all students three times per year with grade-level materials. Research indicates that early identification and early intervention are essential for helping students who are at risk for future reading difficulties, or are currently having reading difficulties. Screening all students, including those who met earlier benchmark goals, also provides a complete data set that is needed to determine if reading instruction is effective with all students at the school or district level. Benchmark data can help answer the following types of questions:
- Is our reading program effective with all students at all grade levels?
- Are there exemplar schools (or classes) in our district on which we can model successful reading instruction?
- What are the strengths of our reading program?
- What areas of our reading program need improvement?
- Did we meet our literacy goals this year?
The testing materials consist of grade-level booklets for each student and a set of display materials. Most testing is done one on one with each student and takes approximately 5-10 minutes per student. Student scores are used to determine how each student is doing in relation to a benchmark goal that is predictive of later reading success. The benchmark goals are criterion-referenced. Each measure has an empirically established goal (or benchmark) that changes across time to ensure students’ skills are developing in a manner predictive of continued progress. The goals are the same for all students learning to read in English. Current research indicates that the goals are equally predictive for native English speakers and for English language learners. DIBELS 6th Edition Goals, DIBELS Next Recommended Goals and DIBELS Next Former Goals are available for download.
Track students' level of performance and rate of improvement by progress monitoring with DIBELS 6th Edition or DIBELS Next.
Progress monitoring is a key component of providing differentiated and individualized reading instruction. Student performance and development of literacy skills should be monitored frequently for all students who are at risk of reading difficulty. The data gathered during progress monitoring can be used in the instructional decision making process.
Benchmark testing with DIBELS can help determine which students are at risk for later reading difficulties. Students who receive supplemental instructional support should be progress monitored. The assessment used to monitor progress should align with the instructional priorities of the supplemental reading instruction. For example, if a student’s area of weakness is identified as fluency with connected text then monitoring with Oral Reading Fluency (ORF) is the best option since ORF measures reading fluency. See our Big Ideas in Beginning Reading pages for information on targeting instruction and the relationship between assessment and instruction.
Progress monitoring materials consist of alternate forms of the Benchmark assessments. The only exception is Letter Naming Fluency (LNF), which should not be used for progress monitoring. LNF differs from the other measures in that it is not aligned with one of the five major skill areas in beginning reading; it is used at benchmark screening because it is a good indicator of risk, but it should not be monitored beyond that.
The progress monitoring probes within a grade are all at approximately the same difficulty level. For example, at each grade, ORF passage #1 is approximately the same reading level as ORF passage #20. The probes should be given in order, since they are arranged to account for small differences in difficulty. Progress monitoring probes should not be used for practice or as instructional materials.
Appropriate level of materials
Typically, the level of assessment used for monitoring should match the student’s instructional level. Progress monitoring can be done with grade-level or out-of-grade-level materials. Testing with the appropriate level of materials will provide the best feedback for planning instruction.
If the student’s benchmark score is in the Strategic Level of Support, then grade-level materials are most likely the appropriate level at which to progress monitor. If the student’s benchmark score is in the Intensive Level of Support, you may want to administer a measure or ORF passage from one grade level below. Continue administering passages and moving down grade levels until you find a level at which you can measure growth.
Frequency and duration
For a student identified as Core (at benchmark/low risk), we recommend screening only during the three benchmark periods. For a student identified as Strategic (below benchmark/some risk) who receives additional instructional support, we recommend progress monitoring 1 to 2 times per month on the measure(s) assessing the skill(s) targeted in the intervention. For a student identified as Intensive (well below benchmark/at risk) who begins receiving additional, intensive instructional support, we recommend progress monitoring 2 to 4 times per month on the measure(s) assessing the skill(s) targeted in the intervention. When monitoring weekly or every other week with ORF, we recommend using one ORF passage. When monitoring once per month with ORF, we recommend using 3 passages and then recording the median score.
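When three ORF passages are given in a single monthly session, the recorded score is the median of the three. A minimal sketch (the scores are made up for illustration):

```python
from statistics import median

def monthly_orf_score(passage_scores):
    """Return the score to record for a monthly ORF check:
    the median of the three passage scores."""
    if len(passage_scores) != 3:
        raise ValueError("monthly ORF monitoring uses exactly 3 passages")
    return median(passage_scores)

# Example: words-correct-per-minute scores on three passages
print(monthly_orf_score([48, 55, 51]))  # records 51
```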
The duration that each student is progress monitored may vary. You can measure growth by plotting the scores on the front of the progress-monitoring booklet and drawing an aimline from the student’s first data point to the target goal. Plot each score on the graph as the student is monitored. If you are using an online data system, it may have a report that does this for you. If a student is above the aimline but hasn't yet reached the end-of-year target goal, you may want to continue monitoring if the student is receiving additional instructional support. However, if the student is consistently scoring above the aimline (e.g., three times or more), you may review whether the student needs to continue to receive additional instructional support as well as progress monitoring.
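The aimline logic above can be sketched numerically. Here the aimline is a linear interpolation from the student's first data point to the end-of-year target goal, and a run of three or more consecutive scores above it prompts a review; the function names, schedule, and scores are hypothetical, chosen only to illustrate the rule:

```python
def aimline(week, first_week, first_score, goal_week, goal_score):
    """Expected score at `week`, interpolating linearly from the first
    data point to the end-of-year target goal."""
    slope = (goal_score - first_score) / (goal_week - first_week)
    return first_score + slope * (week - first_week)

def consecutive_above(weeks, scores, first_week, first_score,
                      goal_week, goal_score, run=3):
    """True if the last `run` scores are all above the aimline,
    suggesting a review of whether additional support and monitoring
    should continue."""
    recent = list(zip(weeks, scores))[-run:]
    return len(recent) == run and all(
        s > aimline(w, first_week, first_score, goal_week, goal_score)
        for w, s in recent
    )

# Hypothetical: first data point of 20 wcpm at week 0, goal of 60 wcpm by week 30
weeks  = [0, 2, 4, 6, 8]
scores = [20, 25, 28, 31, 35]
print(consecutive_above(weeks, scores, 0, 20, 30, 60))  # True
```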
The UO DIBELS Data System provides full data management support for DIBELS 6th Edition and DIBELS Next.
Both DIBELS 6th Edition and DIBELS Next include benchmark testing and progress monitoring 3 times per year for kindergarten through sixth grade. DIBELS testing materials are available as a FREE download.
HiFi Reading is a tablet-based app that facilitates the administration, scoring, and management of DIBELS 6th Edition assessments. Scoring is completed automatically and reports are available immediately after assessment.
Zones of Growth provides educators an easy way to set individualized literacy goals, review growth percentiles, and evaluate students' progress.
District, school and project reports provide immediate feedback for decision making. Class and student reports help identify students who need additional support and monitor response to intervention. Reports can be created and data analyzed as soon as assessments are completed or scores are entered, allowing for timely decision making.
Progress Monitoring in the Data System
Progress monitoring data can be stored in your DIBELS Data System account. Scores can be entered up to one time per week for each measure. You can enter scores for both in-grade and out-of-grade measures. In addition to scores, notes can be entered allowing documentation of instructional changes. Phase lines can also be added to graphs to indicate changes to a reading intervention. The Progress Monitoring Quick Start Guide includes step-by-step instructions on selecting students and entering progress monitoring data.
Progress monitoring assessment materials can be used for summer school. Summer school data can be entered in the DIBELS Data System using the progress monitoring data entry pages. Data should be added to the student’s record for the year they have just completed.
The DIBELS Data System (DDS) is operated by the Center on Teaching and Learning (CTL) at the University of Oregon and has been serving schools across the U.S. and internationally since 2001. The Data System has been used in over 28,000 schools. Learn more about the DDS features.
- Download Testing Materials
- Administration Timeline
- DIBELS 6th Benchmark Goals
- DIBELS Next Recommended Goals
- DIBELS Next Former Goals
- Curriculum Maps for Grades K - 3
- Strategies for Collecting Schoolwide DIBELS Data
Worksheets & Guides
- Instructional Grouping Worksheets
Download instructional grouping worksheets for:
- DIBELS Next Recommended Goals
- DIBELS Next Former Goals
- DIBELS 6th Edition
DDS customers can use the online Instructional Grouping Report.
- DIBELS Next Progress Monitoring Tracking Sheets
- Parent Guide to DIBELS Assessment
- Parent Guide to DIBELS Assessment (Spanish)
The CTL Professional Development Courseware offers high-quality online training courses that can be taken at your own pace and award a certificate of completion.
In-Person Training
CTL has partnered with HILL for Literacy to provide in-person DIBELS training.
History of DIBELS
DIBELS were developed based on measurement procedures for Curriculum-Based Measurement (CBM), which were created by Deno and colleagues through the Institute for Research and Learning Disabilities at the University of Minnesota in the 1970s-80s (e.g., Deno and Mirkin, 1977; Deno, 1985; Deno and Fuchs, 1987; Shinn, 1989). Like CBM, DIBELS were developed to be economical and efficient indicators of a student's progress toward achieving a general outcome.
Although DIBELS materials were initially developed to be linked to the local curriculum like CBM (Kaminski & Good, 1996), current DIBELS measures are generic and draw content from sources other than any specific school's curriculum. The use of generic CBM methodology is typically referred to as General Outcome Measurement (GOM) (Fuchs & Deno, 1994).
Initial research on DIBELS was conducted at the University of Oregon in the late 1980s. Since then, an ongoing series of studies on DIBELS has documented the reliability and validity of the measures as well as their sensitivity to student change. Research on DIBELS 6th Edition and DIBELS Next continues at the University of Oregon's Center on Teaching and Learning (CTL).
DIBELS as Indicators
The role of DIBELS as indicators is described in Kaminski, Cummings, Powell-Smith, and Good (2008) as follows:
DIBELS measures, by design, are indicators of each of the Basic Early Literacy Skills. For example, DIBELS do not measure all possible phonemic awareness skills such as rhyming, alliteration, blending, and segmenting. Instead, the DIBELS measure of phonemic awareness, Phoneme Segmentation Fluency (PSF), is designed to be an indicator of a student's progress toward the long-term phonemic awareness outcome of segmenting words. The notion of DIBELS as indicators is a critical one. It is this feature of DIBELS that distinguishes it from other assessments and puts it in a class of assessments known as General Outcome Measures.
General Outcome Measures (GOMs) like DIBELS differ in meaningful and important ways from other commonly used formative assessment approaches. The most common formative assessment approach that teachers use is assessment of a child's progress in the curriculum, often called mastery measurement. End-of-unit tests in a curriculum are one example of mastery measurement. Teachers teach skills and then test for mastery of the skills just taught. They then teach the next set of skills in the sequence and assess mastery of those skills. Both the type and difficulty of the skills assessed change from test to test; therefore scores from different times in the school year cannot be compared. Mastery-based formative assessment such as end-of-unit tests addresses the question, "has the student learned the content taught?" In contrast, GOMs are designed to answer the question, "is the student learning and making progress toward the long-term goal?"
In much the same way as an individual's temperature or blood pressure can be used to indicate the effectiveness of a medical intervention, GOMs in the area of education can be used to indicate the effectiveness of our teaching. However, the powerful predictive validity of the measures does not mean that their content should become the sole components of our instruction. In other words, unlike mastery based assessment in which it is appropriate to teach the exact skills tested, each DIBELS indicator represents a broader sequence of skills to be taught. (For an example of sequence of skills related to and leading to the goals, please see the Curriculum Maps). DIBELS measures are designed to be brief so that our teaching doesn't have to be.