Date of Award

8-15-2013

Document Type

Dissertation

Degree Name

Doctor of Education (D.Ed.)

Department

Educational and School Psychology

First Advisor

Timothy J. Runge, Ph.D.

Second Advisor

Joseph F. Kovaleski, D.Ed.

Third Advisor

Mark J. Staszkiewicz, Ed.D.

Fourth Advisor

David Lillenstein, D.Ed.

Abstract

The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to determine whether the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The DIBELS benchmarks are calculated based on performance on a nationally-normed standardized achievement test (Good et al., 2011b). Therefore, they may not accurately represent the standard of performance on a state assessment. The DIBELS benchmarks may produce a high number of false positives or false negatives when used to predict state test scores, so a different criterion that more accurately reflects local expectations may be needed. A locally-generated benchmark expectation, established using a state assessment as the criterion for success, may provide a valid alternative to the DIBELS benchmark. DIBELS Oral Reading Fluency (ORF) data and Pennsylvania System of School Assessment (PSSA; PDE, 2010) scores were collected from two school districts in Pennsylvania. The collected data reflected fall, winter, and spring DIBELS ORF scores for students in grades 3 through 5 as well as their scores on the PSSA. Using logistic regression, locally-generated ORF benchmarks, with PSSA performance as the criterion for a successful outcome, were created for both school districts. Diagnostic accuracy statistics, including sensitivity, specificity, negative predictive power, positive predictive power, overall accuracy percentage, and values for kappa and phi, were calculated for each set of benchmarks. Contrary to the hypothesis, no significant differences were found between the locally-generated benchmarks and the DIBELS benchmarks in PSSA prediction accuracy.
Significant differences between the locally-generated benchmarks and the DIBELS benchmarks were found, however, in levels of sensitivity, specificity, and positive predictive power in both school districts. Given these differences, the use of more than one set of benchmark scores for instructional decision making is recommended. The author also recommends that school psychologists learn how well the nationally-derived DIBELS benchmarks correspond with local expectations to ensure that sound decision-making practices are used when determining how to best meet student needs.
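The diagnostic accuracy statistics named in the abstract are standard measures computed from a 2x2 decision table that crosses the benchmark prediction (at/below benchmark) with the criterion outcome (PSSA proficient/not proficient). A minimal sketch in Python of how these measures are derived; the cell counts below are hypothetical illustrations, not data from the study:

```python
import math

def diagnostic_stats(tp, fp, fn, tn):
    """Diagnostic accuracy statistics from hypothetical 2x2 cell counts.

    tp: below-benchmark students who were not proficient (true positives)
    fp: below-benchmark students who were proficient (false positives)
    fn: at/above-benchmark students who were not proficient (false negatives)
    tn: at/above-benchmark students who were proficient (true negatives)
    """
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # proportion of not-proficient students flagged
    specificity = tn / (tn + fp)          # proportion of proficient students not flagged
    ppv = tp / (tp + fp)                  # positive predictive power
    npv = tn / (tn + fn)                  # negative predictive power
    accuracy = (tp + tn) / n              # overall accuracy percentage (as a proportion)
    # Cohen's kappa: agreement corrected for chance
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - p_exp) / (1 - p_exp)
    # Phi coefficient: correlation for a 2x2 table
    phi = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy,
            "kappa": kappa, "phi": phi}

# Hypothetical counts for one set of benchmarks in one district.
stats = diagnostic_stats(tp=40, fp=10, fn=15, tn=135)
```

In practice, the two sets of benchmarks (nationally-derived and locally-generated) would each yield such a table, and their statistics would then be compared.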
