Friday, June 3, 2016

Promises and Pitfalls of Using NAEP Data

The following post presents research or analyses from outside KASB and is presented for information purposes.  KASB neither endorses nor refutes the conclusions or recommendations contained herein.

“Breaking the Curve:  Promises and Pitfalls of Using NAEP Data to Assess the State Role in Student Achievement”

Matthew Chingos, of the Urban Institute in Washington, D.C., published a study in October 2015 that examined whether NAEP data can appropriately be used to measure the impact of state-level education policy changes.  You can find the study here.  Using student-level data from 2003 through 2013, he found:

  • Kansas ranks 15th in state performance on the NAEP in 2013.  When adjusted for the following student factors, its rank rises to 11th:
    • Gender
    • Race and ethnicity
    • Eligibility for free or reduced-price lunch
    • Limited English proficiency
    • Special education status
    • Age
    • Accommodations on the NAEP exam
    • Amenities in the student’s home (computer, internet, own room, dishwasher, clothes dryer, etc.)
    • Number of books in the home
    • Language spoken at home
    • Family structure (two-parent, single-parent, foster, etc.)
  • Kansas ranks 32nd in change in NAEP state performance from 2003 to 2013.  When adjusted for the factors listed above, its rank rises to 28th.
  • Overall, “similar students vary significant[ly] in their test performance,” meaning the demographic characteristics listed above are only some of the factors that affect student performance.
  • The states where students perform better than their demographic peers are often not the states with high scores overall.
  • “NAEP scores in all 50 states have increased more than would be expected based on demographic shifts between 2003 and 2013.”
  • Any state comparison using NAEP results needs to consider the demographic differences in students across states.  
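To illustrate the kind of adjustment the study performs, here is a minimal sketch; it is not Chingos’s actual model, which uses regression on the full list of factors above, and all states, profiles, and scores in it are invented.  The idea is to compare each student to the national average for demographically similar students, then average those differences by state:

```python
from collections import defaultdict

# Hypothetical (state, demographic profile, NAEP score) student records.
students = [
    ("A", "low-income", 255), ("A", "low-income", 255),
    ("A", "not-low-income", 275),
    ("B", "not-low-income", 285), ("B", "not-low-income", 285),
    ("B", "low-income", 235),
]

# Step 1: national mean score for each demographic profile.
by_profile = defaultdict(list)
for _state, profile, score in students:
    by_profile[profile].append(score)
profile_mean = {p: sum(s) / len(s) for p, s in by_profile.items()}

# Step 2: a state's "adjusted" performance is the average amount by which
# its students out- or under-score demographically similar students
# nationwide; its "raw" performance is the simple average score.
by_state_raw, by_state_resid = defaultdict(list), defaultdict(list)
for state, profile, score in students:
    by_state_raw[state].append(score)
    by_state_resid[state].append(score - profile_mean[profile])
raw = {st: sum(s) / len(s) for st, s in by_state_raw.items()}
adjusted = {st: sum(r) / len(r) for st, r in by_state_resid.items()}
```

In this made-up example, state B has the higher raw average simply because more of its students fall in the higher-scoring group, while state A’s students outperform their demographic peers; so A ranks higher after adjustment.  That is the same mechanism by which Kansas’s rank can rise from 15th to 11th once student characteristics are taken into account.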

In the study, Chingos cited an earlier study that found “63 percent of the variance in achievement was at the student level, 5 percent at the teacher level, 3 percent at the school level, and 2 percent at the district level.”  He goes on to suggest that the share attributable to state and local government would be even smaller.  However, he notes that even if state and federal policies account for a small percentage of the overall variance, that small percentage can translate into large effects when aggregated across all students in a state or across states.

The finding that Kansas’s rank would be higher when adjusted for student demographic characteristics contradicts claims made by some in the past that Kansas has students who are “easier to educate” than those in many other states.

The finding that the states where students perform better than predicted based on demographic characteristics are often not the highest-scoring states overall echoes what KASB found when looking at “Higher Impact” states, as described in this previous blog post.

The author concluded his study with the following:

NAEP scores will become more useful for making comparisons across states if the underlying data include more detailed information on the characteristics of students and their families… Education researchers, policymakers, and practitioners are hungry for information on what works in education… Coupling principles of sound research design with the use of NAEP data is critical to maximizing the positive impact of this vital resource.
