Dynamic Indicators of Basic Early Literacy Skills
The Dynamic Indicators of Basic Early Literacy Skills® (DIBELS) are a set of procedures and measures for assessing the acquisition of early literacy skills. They are designed to be short (one minute) fluency measures used to regularly monitor the development of early literacy and early reading skills.
What are DIBELS?
DIBELS were developed to measure recognized and empirically validated skills related to reading outcomes. Each measure has been thoroughly researched and demonstrated to be a reliable and valid indicator of early literacy development. When implemented as recommended, the results can be used to evaluate individual student development and to provide grade-level feedback toward validated instructional objectives.
The research-based measures are linked to one another and predictive of later reading proficiency. The measures are also consistent with many of the Common Core State Standards in Reading, especially the Foundational Skills. Combined, the measures form an assessment system of early literacy development that allows educators to readily and reliably determine student progress.
Versions of DIBELS
DIBELS 8th Edition
- Kindergarten - 8th Grade
- Benchmark Screening
- Progress Monitoring
- Short (one minute) Fluency Measures
- Growth Percentiles
- Equated Scores
- Advanced Test Design
- Tablet Scoring
- Dyslexia Screener
- Free Testing Materials
- Kindergarten - 6th Grade
- Benchmark Screening
- Progress Monitoring
- Short (one minute) Fluency Measures
- Growth Percentiles
- Free Testing Materials
Why use DIBELS?
DIBELS 8th Edition represents the culmination of decades of research into supporting students in becoming successful readers. DIBELS uses state-of-the-art, research-based methods for designing and validating curriculum-based measures of reading. As a result, DIBELS is more useful for more students in more grades than ever before.
The purpose of DIBELS is to provide educators with standards for gauging the progress of all students. DIBELS subtests measure critical skills and abilities that are necessary for reading success, and most offer both benchmark and progress-monitoring forms.
Each benchmark subtest has two cut-scores. Students with scores falling below the risk cut-score are identified as at risk for not meeting end-of-year expectations in reading. These students require intensive intervention to get back on track in reading. Students scoring above the benchmark cut-score are identified as at minimal risk for not meeting end-of-year expectations in reading; put more positively, they have a high likelihood of meeting end-of-year learning goals in reading. A unique feature of DIBELS cut-scores is the inclusion of a zone where a clear prediction is not possible. Students with scores in this category are considered at some risk and require strategic planning on the part of educators to determine appropriate strategies to support the students to meet subsequent early literacy goals.
Thus, teachers can use students' performance to identify students who will most benefit from intensive instruction, strategic instruction, and core instruction alone.
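The two-cut-score decision rule described above can be sketched as a simple classifier. The cut-score values in this example are hypothetical placeholders; actual DIBELS benchmark goals vary by measure, grade, and time of year.

```python
# Illustrative sketch of the three-zone cut-score logic. The numbers
# used below are invented for illustration, not actual DIBELS goals.

def classify_risk(score, risk_cutoff, benchmark_cutoff):
    """Map a subtest score to a recommended level of support."""
    if score < risk_cutoff:
        return "intensive"   # at risk: intensive intervention needed
    if score >= benchmark_cutoff:
        return "core"        # minimal risk: core instruction alone
    return "strategic"       # some risk: strategic support

# Example with made-up cut-scores (risk = 15, benchmark = 28):
print(classify_risk(10, 15, 28))  # intensive
print(classify_risk(20, 15, 28))  # strategic
print(classify_risk(30, 15, 28))  # core
```

The middle return handles the zone where a clear prediction is not possible: scores at or above the risk cut-score but below the benchmark cut-score fall into the "some risk" category.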
A new benefit of DIBELS benchmark subtests is that they utilize advanced test design principles to provide teachers with instructionally relevant information on all readers. In addition to identifying students at different levels of risk, the letter-naming fluency (LNF), phoneme segmentation fluency (PSF), nonsense word fluency (NWF), and word reading fluency (WRF) subtests support item analyses that can suggest next instructional steps. This feature is brand new in DIBELS 8th Edition and will undergo additional development to provide teachers with reporting options that reflect this new capacity. This feature increases the utility of DIBELS for all students when combined with regular benchmark testing (i.e., three times a year), making DIBELS a more powerful tool than ever before.
Another purpose of DIBELS is to monitor student progress. Progress monitoring is essential for ensuring that students identified for intensive and strategic support actually benefit from this support as intended. Progress monitoring enables interventionists to change and intensify intervention until the desired pattern of improvement is achieved.
The DIBELS measures were specifically designed to assess the Big Ideas in Reading:

| Measure | Measurement Area | 6th Edition | Next | 8th Edition |
|---|---|---|---|---|
| NWF | Alphabetic Principle and Phonics | ✓ | ✓ | ✓ |
| WRF | Alphabetic Principle and Phonics; Accuracy and Fluency | | | ✓ |
| WUF | Vocabulary and Oral Language | ✓ | | |
Benchmark testing is the systematic process of screening all students on essential skills predictive of later reading performance. Benchmark testing is one part of a comprehensive assessment system that includes universal screening, progress monitoring, summative assessments, and other formal and informal assessments, all designed to provide the critical information needed to make informed instructional decisions. Benchmark assessment is the foundation for the assessment, goal setting, and instruction cycle.
The DIBELS assessments have been researched and validated specifically for benchmark testing. We recommend screening all students three times per year with grade-level materials. Research indicates that early identification and early intervention are essential for helping students who are at risk for, or are currently experiencing, reading difficulties. Screening all students, including those who met earlier benchmark goals, also provides the complete data set needed to determine whether reading instruction is effective with all students at the school or district level.
In addition to identifying students at risk for reading problems including dyslexia, benchmark data can help answer the following types of questions:
- Is our reading program effective with all students at all grade levels?
- Are there exemplar schools (or classes) in our district on which we can model successful reading instruction?
- What are the strengths of our reading program?
- What areas of our reading program need improvement?
- Did we meet our literacy goals this year?
Due to advanced test design features, DIBELS 8th Edition also provides instructionally relevant data for all students. For all measures except ORF, forms progress in difficulty past the cut-score and utilize patterns of items that align to typical instructional goals within a grade. For instance, in first grade, NWF begins with CVC and VC words and after a certain point includes additional spelling patterns typically taught in first grade, including silent-e and consonant blends and digraphs.
The testing materials consist of grade-level scoring booklets (one for each student) and a set of display materials (one for each test administrator). Most testing is done individually with each student and takes approximately 3-8 minutes per student. Student scores are used to determine how each student is doing in relation to a benchmark goal that is predictive of later reading success. The benchmark goals are criterion-referenced. Each measure has an empirically established goal (or benchmark) that changes across time to ensure students' skills are developing in a manner predictive of continued progress.
Progress monitoring is the systematic process of regularly assessing students receiving intervention for improvement over time between benchmark screenings. Progress monitoring is a key component of providing differentiated and individualized reading instruction. Student performance and development of literacy skills should be monitored frequently for all students who are at risk of reading difficulty. The data gathered during progress monitoring can be used in the instructional decision making process.
Benchmark testing with DIBELS can help determine which students are at risk for later reading difficulties. Students who receive supplemental instructional support should be progress monitored. The assessment used to monitor progress should align with the instructional priorities of the supplemental reading instruction. For example, if a student's area of weakness is identified as fluency with connected text, then monitoring with Oral Reading Fluency (ORF) is the best option, since ORF measures reading fluency. See our Big Ideas in Beginning Reading pages for information on targeting instruction and the relationship between assessment and instruction.
Progress monitoring materials consist of alternate forms of the Benchmark assessments. The only exception to this is Letter Naming Fluency (LNF). LNF should not be progress monitored. It is different from the other measures in that it is not aligned with one of the five major skill areas in beginning reading. It is used for benchmark screening because it is a good indicator of risk, but it should not be monitored beyond that.
The progress monitoring probes are all approximately the same difficulty level within the grade they are used. For example, at each grade, ORF passage #1 is approximately the same reading level as ORF passage #20. DIBELS 8th Edition also utilizes equating, meaning that the forms can be given in any order so long as raw scores are converted to equated scores. The equated scores take any differences in average difficulty between forms into account. Progress monitoring probes should not be used for practice or as instructional materials.
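Conceptually, equating adjusts each raw score for the average difficulty of the form administered so that scores from different forms share a common scale. The sketch below uses a deliberately simplified additive adjustment with invented form names and values; DIBELS 8th Edition publishes its own equating tables, which may use more complex transformations.

```python
# Hypothetical raw-to-equated score conversion. Form names and the
# additive adjustments are invented for illustration only; they are not
# the published DIBELS 8th Edition equating values.

EQUATING_ADJUSTMENT = {   # form id -> additive difficulty adjustment
    "ORF-1": 0,    # reference form
    "ORF-2": 3,    # slightly harder form: scores adjusted upward
    "ORF-3": -2,   # slightly easier form: scores adjusted downward
}

def equated_score(raw_score, form_id):
    """Adjust a raw score for the average difficulty of the form used."""
    return raw_score + EQUATING_ADJUSTMENT[form_id]
```

Because equated scores share a common scale, forms can be administered in any order and a student's scores can be graphed against a single aimline.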
Appropriate Level of Materials
Typically, the level of assessment used for monitoring should match the student’s instructional level. Progress monitoring can be done with grade-level or out-of-grade-level materials. Testing with the appropriate level of materials will provide the best feedback for planning instruction.
If the student’s benchmark score is in the Strategic Level of Support, then grade-level materials are most likely the appropriate level at which to progress monitor. If the student’s benchmark score is in the Intensive Level of Support, you may want to administer a measure from one grade level below. You can continue administering measures and moving down grade levels until you find a level that will allow you to measure growth.
Frequency and Duration
For a student identified as Core (at benchmark/low risk), we recommend screening only during the three benchmark periods. For a student identified as Strategic (below benchmark/some risk) who receives additional instructional support, we recommend progress monitoring 1 to 2 times per month on the measure(s) assessing the skill(s) targeted in the intervention. For a student identified as Intensive (well below benchmark/at risk) who begins receiving additional, intensive instructional support, we recommend progress monitoring 2 to 4 times per month on the measure(s) assessing the skill(s) targeted in the intervention. These principles apply most readily in Grades K-3; when monitoring is conducted with ORF alone and in later grades, a less frequent schedule may be advisable. Reading growth is very rapid in Grades K-2 and slows considerably after Grade 3, partly because what develops in the later grades is comprehension and reading a wider range of texts for a wider range of purposes. Because this type of development naturally moves more slowly, a less frequent monitoring schedule in later grades is advised: every 3-4 weeks for intensive students who have moved beyond basic phonics skills and every 5-6 weeks for strategic students.
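The recommended schedule above can be summarized as an interval between probes. The helper below is a hypothetical planning aid, not part of DIBELS or the Data System; it expresses "2-4 times per month" as roughly every 1-2 weeks and "1-2 times per month" as roughly every 2-4 weeks.

```python
# Hypothetical helper summarizing the monitoring schedule described in
# the text as (min, max) weeks between probes.

def monitoring_interval_weeks(support_level, later_grades=False):
    """Return the recommended (min, max) weeks between progress-
    monitoring probes, or None for students at benchmark."""
    if support_level == "core":
        return None  # benchmark screening only, three times per year
    if later_grades:
        # Growth slows after Grade 3, so probes are spaced further apart.
        return (3, 4) if support_level == "intensive" else (5, 6)
    # Grades K-3: intensive = 2-4 times/month, strategic = 1-2 times/month
    return (1, 2) if support_level == "intensive" else (2, 4)
```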
The duration that each student is progress monitored may vary. If a student is above the aimline but hasn't yet reached the end-of-year target goal, you may want to continue monitoring if the student is receiving additional instructional support. However, if the student is consistently scoring above the aimline, you may wish to review whether the student needs to continue to receive additional instructional support as well as progress monitoring.
The UO DIBELS Data System Features
Using a tablet-based app facilitates the administration, scoring, and management of assessments. Scoring is completed automatically and reports are available immediately after assessment. Current tablet-based options are:
- HiFi Reading facilitates the administration, scoring, and management of reading assessments. Current support includes DIBELS 6th Edition and, for research participants, DIBELS 8th Edition.
Zones of Growth provides educators with an easy way to set individualized literacy goals, review growth percentiles, and evaluate students' progress. Current support includes DIBELS 6th Edition and DIBELS Next; support for DIBELS 8th Edition will be added in 2019-20.
District, school, and project reports provide immediate feedback for decision making. Class and student reports help identify students who need additional support and monitor response to intervention. Reports can be created and data analyzed as soon as assessments are completed or scores are entered, allowing for timely instructional decisions.
Progress Monitoring in the Data System
Progress monitoring data can be stored in your DIBELS Data System account. Scores can be entered up to once per week for each measure. In addition to scores, notes can be entered, allowing documentation of instructional changes. Phase lines can also be added to graphs to indicate changes to a reading intervention. The Progress Monitoring Quick Start Guide includes step-by-step instructions on selecting students and entering progress monitoring data.
Progress monitoring assessment materials can be used for summer school. Summer school data can be entered in the DIBELS Data System using the progress monitoring data entry pages. Data should be added to the student’s record for the year they have just completed.
The CTL Professional Development Courseware offers high-quality online training courses that can be taken at your own pace and award a certificate of completion.
History of DIBELS
DIBELS was developed based on Curriculum-Based Measurement (CBM), which was created by Deno and colleagues through the Institute for Research and Learning Disabilities at the University of Minnesota in the 1970s-80s (e.g., Deno and Mirkin, 1977; Deno, 1985; Deno and Fuchs, 1987; Shinn, 1989). Like CBM, DIBELS were developed to be economical and efficient indicators of a student's progress toward achieving a general outcome.
Although DIBELS materials were initially developed to be linked to the local curriculum like CBM (Kaminski & Good, 1996), current DIBELS measures are generic and draw content from sources other than any specific curriculum. The use of generic CBM methodology is typically referred to as General Outcome Measurement (GOM) (Fuchs & Deno, 1994).
Initial research on DIBELS was conducted at the University of Oregon in the late 1980s. Since then, an ongoing series of studies on DIBELS has documented the reliability and validity of the measures as well as their sensitivity to student change. Research on DIBELS continues at the University of Oregon's Center on Teaching and Learning (CTL; Cummings, Park, & Bauer Schaper, 2013; Cummings, Stoolmiller, Baker, Fien, & Kame’enui, 2015; Smolkowski & Cummings, 2016; Stoolmiller, Biancarosa, & Fien, 2013).
DIBELS as Indicators
The role of DIBELS as indicators is described in Kaminski, Cummings, Powell-Smith, and Good (2008) as follows:
DIBELS measures, by design, are indicators of each of the Basic Early Literacy Skills. For example, DIBELS do not measure all possible phonemic awareness skills such as rhyming, alliteration, blending, and segmenting. Instead, the DIBELS measure of phonemic awareness, Phoneme Segmentation Fluency (PSF), is designed to be an indicator of a student's progress toward the long-term phonemic awareness outcome of segmenting words. The notion of DIBELS as indicators is a critical one. It is this feature of DIBELS that distinguishes it from other assessments and puts it in a class of assessments known as General Outcome Measures.
General Outcome Measures (GOMs) like DIBELS differ in meaningful and important ways from other commonly used formative assessment approaches. The most common formative assessment approach that teachers use is assessment of a child's progress in the curriculum, often called mastery measurement. End-of-unit tests in a curriculum are one example of mastery measurement. Teachers teach skills and then test for mastery of the skills just taught. They then teach the next set of skills in the sequence and assess mastery of those skills. Both the type and difficulty of the skills assessed change from test to test; therefore scores from different times in the school year cannot be compared. Mastery-based formative assessment such as end-of-unit tests addresses the question, "has the student learned the content taught?" In contrast, GOMs are designed to answer the question, "is the student learning and making progress toward the long-term goal?"
In much the same way as an individual's temperature or blood pressure can be used to indicate the effectiveness of a medical intervention, GOMs in the area of education can be used to indicate the effectiveness of our teaching. However, the powerful predictive validity of the measures does not mean that their content should become the sole components of our instruction. In other words, unlike mastery-based assessment, in which it is appropriate to teach the exact skills tested, each DIBELS indicator represents a broader sequence of skills to be taught. (For an example of a sequence of skills related to and leading to the goals, please see the Curriculum Maps.) DIBELS measures are designed to be brief so that our teaching doesn't have to be.
DIBELS 8th Edition service costs $1 per student per year! Multiple-year options are available.
DIBELS 6th Edition and DIBELS Next are included in the DDS Standard service that costs $1 per student per year!
Sign Up for a DIBELS Data System Account
The UO DIBELS Data System has been used in over 28,000 schools.