Dynamic Indicators of Basic Early Literacy Skills
The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) are a set of procedures and measures for assessing the acquisition of early literacy skills from kindergarten through sixth grade.
The DIBELS measures were specifically designed to assess the Big Ideas in Reading. They are designed to be short (one minute) fluency measures used to regularly monitor the development of early literacy and early reading skills. These research-based measures are linked to one another and predictive of later reading proficiency. The measures are also consistent with many of the Common Core State Standards in Reading, especially the Foundational Skills. Combined, the measures form an assessment system of early literacy development that allows educators to readily and reliably determine student progress.
History of DIBELS
DIBELS were developed based on measurement procedures for Curriculum-Based Measurement (CBM), which were created by Deno and colleagues through the Institute for Research on Learning Disabilities at the University of Minnesota in the 1970s-80s (e.g., Deno and Mirkin, 1977; Deno, 1985; Deno and Fuchs, 1987; Shinn, 1989). Like CBM, DIBELS were developed to be economical and efficient indicators of a student's progress toward achieving a general outcome.
Although DIBELS materials were initially developed to be linked to the local curriculum like CBM (Kaminski & Good, 1996), current DIBELS measures are generic and draw content from sources other than any specific school's curriculum. The use of generic CBM methodology is typically referred to as General Outcome Measurement (GOM) (Fuchs & Deno, 1994).
Initial research on DIBELS was conducted at the University of Oregon in the late 1980s. Since then, an ongoing series of studies on DIBELS has documented the reliability and validity of the measures as well as their sensitivity to student change. Research on DIBELS 6th Edition and DIBELS Next continues at the University of Oregon's Center on Teaching and Learning (CTL).
DIBELS as Indicators
The role of DIBELS as indicators is described in Kaminski, Cummings, Powell-Smith, and Good (2008) as follows:
DIBELS measures, by design, are indicators of each of the Basic Early Literacy Skills. For example, DIBELS do not measure all possible phonemic awareness skills such as rhyming, alliteration, blending, and segmenting. Instead, the DIBELS measure of phonemic awareness, Phoneme Segmentation Fluency (PSF), is designed to be an indicator of a student's progress toward the long-term phonemic awareness outcome of segmenting words. The notion of DIBELS as indicators is a critical one. It is this feature of DIBELS that distinguishes it from other assessments and puts it in a class of assessments known as General Outcome Measures.
General Outcome Measures (GOMs) like DIBELS differ in meaningful and important ways from other commonly used formative assessment approaches. The most common formative assessment approach that teachers use is assessment of a child's progress in the curriculum, often called mastery measurement. End-of-unit tests in a curriculum are one example of mastery measurement. Teachers teach skills and then test for mastery of the skills just taught. They then teach the next set of skills in the sequence and assess mastery of those skills. Both the type and difficulty of the skills assessed change from test to test; therefore scores from different times in the school year cannot be compared. Mastery-based formative assessment such as end-of-unit tests addresses the question, "has the student learned the content taught?" In contrast, GOMs are designed to answer the question, "is the student learning and making progress toward the long-term goal?"
In much the same way as an individual's temperature or blood pressure can be used to indicate the effectiveness of a medical intervention, GOMs in the area of education can be used to indicate the effectiveness of our teaching. However, the powerful predictive validity of the measures does not mean that their content should become the sole components of our instruction. In other words, unlike mastery-based assessment, in which it is appropriate to teach the exact skills tested, each DIBELS indicator represents a broader sequence of skills to be taught. (For an example of a sequence of skills related to and leading to the goals, please see the Curriculum Maps.) DIBELS measures are designed to be brief so that our teaching doesn't have to be.
Why use DIBELS?
Teaching with the odds in your favor.
The purpose of the DIBELS Benchmark goals is to provide educators with standards for gauging the progress of all students. The Benchmark goals represent a level of performance for all students to reach in order to be considered on track for becoming a successful reader. The DIBELS goals and cut scores are research-based, criterion-referenced scores. They indicate the probability of achieving subsequent early literacy goals. Benchmark goals for each measure and time period were established using a minimum cut point at which the odds were in favor of a student achieving a future reading goal. So, for a child with a score at or above the benchmark goal at a given point, the probability is high for achieving future goals; the probability of needing additional support in order to achieve future goals is low.
In addition to these goals, DIBELS also include cutoff scores that indicate when the odds are against achieving subsequent literacy goals. Students with scores at or below these cutoff points are unlikely to meet subsequent early literacy goals unless additional instructional support is provided.
A unique feature of the DIBELS benchmark decision rules is the inclusion of a zone where a clear prediction is not possible. Students with scores in this category require strategic planning on the part of educators to determine appropriate strategies to support the students to meet subsequent early literacy goals.
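The three-zone decision rule described above can be sketched in code. This is an illustrative sketch only: the function name and the cut scores in the example are hypothetical placeholders, since actual DIBELS benchmark goals and cut points vary by measure and time of year.

```python
def classify_benchmark_zone(score, benchmark_goal, cut_point):
    """Classify a score into one of the three DIBELS benchmark zones.

    benchmark_goal and cut_point are measure- and period-specific;
    the numbers used in the example below are hypothetical, not the
    published DIBELS cut scores.
    """
    if score >= benchmark_goal:
        # Odds are in favor of the student achieving future goals.
        return "at or above benchmark"
    if score <= cut_point:
        # Unlikely to meet subsequent goals without additional support.
        return "below cut point"
    # The zone between the two scores, where no clear prediction is possible.
    return "strategic zone"

# Hypothetical example: benchmark goal of 35, cut point of 20
print(classify_benchmark_zone(40, 35, 20))  # at or above benchmark
print(classify_benchmark_zone(15, 35, 20))  # below cut point
print(classify_benchmark_zone(28, 35, 20))  # strategic zone
```

Scores in the middle zone are exactly the cases where educators must plan strategically, since the longitudinal data do not yield a clear prediction either way.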
Teachers can use students' performance at the beginning of the school year to identify those who will most likely require more intensive instruction, reducing the likelihood that they will become struggling readers at a later time point.
Because the goals and cut scores are based on longitudinal predictive probabilities, they are not set in stone. A score at or above the benchmark indicates a high probability, not a certainty, of achieving subsequent early literacy goals.
The UO DIBELS Data System provides full data management support for DIBELS 6th Edition and DIBELS Next. You can choose the DIBELS Next Recommended Goals or the DIBELS Next Former Goals with composite score.
The Data System tracks and measures progress at the student, class, school, and district levels. Reports can be created immediately after scores are entered, providing immediate feedback and allowing for timely decision making. The DIBELS Data System is operated by the Center on Teaching and Learning (CTL) at the University of Oregon and has been serving schools across the U.S. and internationally since 2001. The Data System has been used in over 15,000 schools.
The CTL Professional Development Courseware offers high-quality online training courses taken at your own pace, and awards a certificate of completion.
In-Person Training
CTL has partnered with HILL for Literacy to provide in-person DIBELS training.
Our research to your classroom
Data management and reporting for DIBELS 6th Edition and DIBELS Next is included in the DDS Standard service that costs $1 per student per year!
Ready to get started? Sign up for an account!
DIBELS 6th Edition testing materials are available as a FREE download.