Monday, June 2, 2014

STEM-Focused Schools and Mixed Research Results

Today, EdWeek posted a blog entry entitled "Research on STEM-Focused Schools Shows Mixed Results."  The article presents some interesting results related to the effectiveness of STEM programs, but it also serves as a cautionary tale about how research should be interpreted.

I encourage readers interested in STEM programs to take a look at the article.  However, rather than summarizing the findings, I want to highlight a few "yellow flags" that I noticed.

  • The first article cited was written by a researcher employed by the American Institutes for Research (AIR), a nonprofit company that offers a wide variety of services and products, including technical assistance and evaluation services, to schools, districts, and state departments of education.  Because AIR is a company interested in increasing revenue, there is a greater chance of bias in the approach and presentation of research done by its staff than in research done by universities or other more independent entities.  That is not to say this bias is certainly present in anything presented by AIR staff, but it is something to consider.
  • The first article is also cited as using "so-called 'value-added' methods."  Such methods, which use student learning gains, as measured by test scores, to evaluate the effectiveness of teachers, schools, and/or districts, are controversial at best.  The Institute of Education Sciences indicates that value-added estimates "are likely to be noisy using the amount of data that are typically used in practice," meaning they should not be treated as absolute, foolproof measures of effectiveness.
  • The second article cited also employed value-added methods and found that STEM magnet schools did not significantly improve student achievement in STEM courses, but that STEM charter schools did.  However, almost as an afterthought, the blog post notes that the researcher's "analysis was limited in that he did not have data on science, technology or engineering achievement—just math."  In other words, the researcher used only math scores to evaluate the effectiveness of programs designed to improve learning in science, technology, engineering, and math.
  • The third article cited looked at STEM programs in New York City and concluded that "once the researchers accounted for demographics and prior test scores, most of the STEM-focused schools' advantages disappeared, suggesting that the schools were disproportionately attracting higher-achieving students who were interested in STEM."  In other words, the STEM programs' advantages are discredited because the programs attract a) higher-achieving students, who are more likely to actively choose the programs in which they wish to participate, and b) students who are interested in STEM topics.  Both of these factors mean that the groups being compared are not equal in their existing characteristics, so there is a bias in the "experimental design."  In terms of program design, however, it would seem the STEM programs are doing exactly what they are expected to do: attracting the students who are most likely to succeed in STEM.
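To make the selection-effect point above concrete, here is a minimal sketch using invented, synthetic data (this is not the cited studies' actual model, and the school names, sample sizes, and score scales are all hypothetical).  It shows how a school that simply enrolls higher-achieving students can look far more effective on raw scores than it does once prior achievement is taken into account:

```python
# Illustrative sketch with synthetic data: why "accounting for prior test
# scores" can erase a school's apparent advantage. Assumes both schools
# produce the SAME average learning gain; only their entrants differ.

import random

random.seed(42)

def simulate_students(n, prior_mean):
    # Each student's current score is their prior score plus the same
    # average gain (5 points) plus noise -- neither school teaches
    # more effectively than the other.
    students = []
    for _ in range(n):
        prior = random.gauss(prior_mean, 10)
        current = prior + 5 + random.gauss(0, 3)
        students.append((prior, current))
    return students

# Hypothetical schools: the "STEM" school attracts higher-achieving entrants.
regular = simulate_students(500, prior_mean=70)
stem = simulate_students(500, prior_mean=80)

def mean(xs):
    return sum(xs) / len(xs)

# Raw comparison: difference in average current scores.
raw_gap = mean([c for _, c in stem]) - mean([c for _, c in regular])

# Adjusted comparison: difference in average gains (current minus prior).
adjusted_gap = (mean([c - p for p, c in stem])
                - mean([c - p for p, c in regular]))

print(f"Raw score gap:      {raw_gap:+.1f} points")
print(f"Gain-adjusted gap:  {adjusted_gap:+.1f} points")
```

Run as written, the raw gap is large (the STEM school's students start out roughly ten points ahead) while the gain-adjusted gap is near zero, even though the simulation gave both schools identical effectiveness.  Actual value-added models are far more elaborate (regression adjustment, multiple years of data, demographic controls), but the basic logic of comparing gains rather than levels is the same.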
The "yellow flags" I note above are simply things to consider when looking at this particular research.  I present them because it is important to view any research with a critical eye and to consider the source of the research and how it has been analyzed and presented.  The articles and blog posts we see often give us only a small portion of the overall picture, so readers should never take them as absolute fact.
