Research Base of the Recommended Goals

October 2012
An introduction to the research base of the recommended benchmark goals. Several of the fundamental strengths of the recommended goals are highlighted.

Transcript

Narrator: The University of Oregon DIBELS Data System offers new and improved benchmark goals. These new goals more accurately predict students’ future reading performance. They provide teachers and administrators with the tools they need to support all students on the road to becoming healthy readers. In this video, we will cover how the recommended benchmark goals will improve your data-based decision making and why the changes were made.

Kelli D. Cummings, Ph.D., CTL Research Associate: Hi, my name is Kelli Cummings. I’m a research associate at the Center on Teaching and Learning at the University of Oregon. As you look at your reports from the DIBELS Data System, the first thing you will notice is that the recommended benchmark goals are more ambitious. And consequently, you may have more students who are identified as needing additional support in order to reach end-of-year standards. Now, we understand that this can be distressing to teachers and administrators, but it’s important to keep in mind that this change in benchmark status is NOT because your students are performing at a lower level than last year, but rather that the new benchmark goals do a BETTER job of identifying students correctly. This offers an opportunity to provide additional support to ensure that all of your students, at the end of the year, have higher performance levels on comprehensive reading skills.

The former goals actually missed 40% of students who were in need of additional support.

Let's take a look at some results from DIBELS Next benchmark assessments. First, we’ll look at DDS reports using the former benchmark goals, then we’ll look at the exact same data using the recommended benchmark goals.

Here we’re looking at school-wide results using the former benchmark goals.

As you can see, the former goals identified 73% of students as likely to need core instructional support. Let’s look at the same data using the recommended goals. Now we have 40% of students identified as likely to need core instructional support. It’s quite a difference. It’s important to remember that the former goals missed students who were likely to need strategic or intensive support and the recommended goals are accurately identifying those students.

DIBELS assessments are designed to make screening decisions within a problem-solving model. So when a student scores below benchmark, it actually provides an opportunity to engage in proactive, preventive teaching to change outcomes.

Narrator: The recommended goals have a stronger research base for several reasons:

  • First, they were developed using a nationally representative sample of students.
  • Second, they were developed using a consistent criterion for sensitivity.
  • Third, they are linked to an external measure of reading comprehension, rather than the DIBELS Next composite score.

Let’s start with the representative sample.

The former benchmark goals were created by a for-profit company and not at the University of Oregon.

They are based on a narrow sample of students in just two of the nine census regions. This sample came from communities of mostly white students, and only 16% of those students qualified for Free or Reduced-Price Lunch. This is hardly representative of the diversity in schools today.

In contrast, the recommended benchmark goals use a representative sample from diverse communities in all nine census regions to ensure that students from many socio-economic and ethnic backgrounds are represented.

Sensitivity is the percentage of truly at-risk students who are correctly identified as at risk by the screener. The recommended goals are a more sensitive predictor, and they use a consistent criterion.

The former goals used criteria that varied across grades, measures, and times of the school year. These ever-changing criteria identified anywhere from 25% to 81% of students in need of additional support. With the consistent criterion for sensitivity of the recommended goals, 90% of ALL struggling readers are consistently identified for additional support across all measures, grade levels, and time points.
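The sensitivity statistic described above can be illustrated with a short calculation. This is a minimal sketch using made-up student identifiers, not actual DIBELS data or scoring logic; the sets `truly_at_risk` and `flagged` are hypothetical.

```python
def sensitivity(truly_at_risk, flagged):
    """Sensitivity = truly at-risk students correctly flagged / all truly at-risk."""
    true_positives = len(truly_at_risk & flagged)
    return true_positives / len(truly_at_risk)

# Hypothetical example: 10 truly at-risk students (ids 0-9);
# the screener flags 9 of them, plus 2 students who are not at risk.
truly_at_risk = set(range(10))
flagged = set(range(9)) | {12, 15}

print(sensitivity(truly_at_risk, flagged))  # 0.9
```

A consistent 90% sensitivity criterion means this ratio is held at 0.9 across every measure, grade level, and time point, rather than drifting from one benchmark to another.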

By using the recommended goals, teachers have a better understanding of their students’ reading skills, allowing them to more appropriately place students in instructional groups and increase their chances for later success.

The recommended benchmark goals for all measures and time points are linked directly to an external measure, the Stanford Achievement Test, which is aligned with the Common Core standards. This is a standard endorsed by test-development experts.

The former goals do not meet this standard, which may lead teachers to overlook students who truly need reading support.

Having consistently sensitive benchmark goals will help ensure that more students receive the instruction they need to improve their literacy skills. And remember, if you have any questions, we’re here to support you!