Friday, August 3, 2018

On Average, the Average Average isn't as Average as the Average Person Thinks...

In what some are calling the "Post-Fact World," it is critical that we work to be informed consumers of information.  To that end, I thought it might be useful to share some thoughts and information on the notion of "average" amounts and why you need to look at reported averages very closely to make sure you understand what they mean.

First, let's talk about what the word "Average" means.

Merriam-Webster says Average is "a : a single value (such as a mean, mode, or median) that summarizes or represents the general significance of a set of unequal values."

Wikipedia says Average is "a middle or typical number of a list of numbers."

Neither of these definitions is terribly helpful.  Many statistics texts talk about "measures of central tendency," which is what measures typically called "averages" really are getting at.

As Merriam-Webster mentions, the three measures that are often referred to as averages are Mean, Mode, and Median.
  • The Mean is what most of us think of when we hear "Average."  It is calculated by adding up all the values in a list and then dividing by the number of values in that list.  
  • The Mode is the most frequently occurring value in a list of values. 
  • The Median is the middle value in a series of values sorted from lowest to highest or from highest to lowest.  
Let's take the following list of values:
  • 1
  • 2
  • 2
  • 2
  • 3
  • 4
  • 5
  • 5
  • 6
  • 7
  • 7
  • 8
  • 9
The median for the above list would be 5, because that is the 7th number on the list of 13 going from smallest to largest.

The mode for the above list would be 2, because there are three 2's in the list.

The mean for the above list would be 4.69, because 1+2+2+2+3+4+5+5+6+7+7+8+9 = 61 and 61/13 = 4.69.

So, as can be seen, even on a simple list of numbers, different values can be reported for "Average" depending on what statistic is used.
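For anyone who wants to check these numbers themselves, Python's standard library happens to include all three statistics. A minimal sketch using the list above:

```python
from statistics import mean, median, mode

values = [1, 2, 2, 2, 3, 4, 5, 5, 6, 7, 7, 8, 9]

print(median(values))          # 5
print(mode(values))            # 2
print(round(mean(values), 2))  # 4.69
```

Three different "averages" from the same thirteen numbers, just as described above.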

Why do we have these different statistics?  And when should each be used?

The mean and median are the most commonly used statistics for reporting averages.  And the biggest reason to choose the median over the mean is to control for what are called outliers.

In our example list above, the median and mean are still pretty close (5 versus 4.69).  But what if the list looked like this:
  • 1
  • 2
  • 2
  • 2
  • 3
  • 4
  • 5
  • 5
  • 6
  • 7
  • 7
  • 8
  • 9
  • 100
In this case, the median is still 5, but the mean jumps to 11.5 (161/14).  A single value that is very different from the rest of the list is enough to make the mean less meaningful.

People will sometimes "throw out" outliers to avoid producing averages that aren't really representative of the data, so that is something to watch for as well.  Typically this will be done by excluding a certain number of the highest and lowest values in a list.

On the list above, if the two highest and two lowest values were excluded from the calculation (to "control" for outliers), the median would still be 5, and the mean would be 4.9, much closer to the median.
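This kind of trimming is easy to verify in code.  A short sketch using the 14-value list above, dropping the two highest and two lowest values:

```python
from statistics import mean, median

values = [1, 2, 2, 2, 3, 4, 5, 5, 6, 7, 7, 8, 9, 100]

print(median(values))           # 5.0 (the average of the 7th and 8th values)
print(round(mean(values), 2))   # 11.5

# "Throw out" outliers by dropping the two highest and two lowest values.
trimmed = sorted(values)[2:-2]
print(median(trimmed))          # 5.0
print(round(mean(trimmed), 2))  # 4.9
```

Note that with an even number of values, the median is the average of the two middle values, which is why it prints as 5.0 here.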

On the other hand, what if our list looked like this:
  • 1
  • 1
  • 1
  • 1
  • 1
  • 1
  • 1
  • 100
  • 100
  • 1,000
  • 1,000
  • 10,000
  • 10,000
In this case, our median is 1, and our mean is 1,708.23.  If we exclude the two highest and lowest values, then the median is still 1, and the mean would be 245.  Neither "average" really gives us a good feel for what is in the dataset.  In fact, for this data the mode might be the better statistic to use.

Another consideration is the level of aggregation.  In other words, when dealing with groups within groups, how are the statistics calculated?

For example, let's say we are interested in the average income for a state.  We have information on the average by county, so if we want to know the state average, we can just add up the county amounts and divide by the number of counties to get the mean for the state, right?

That would work if each county had the same number of income-earners in it.  But they don't.  So to get an accurate average across the state by income-earner, you would have to know the number of income-earners in each county, and "weight" the average amount for each county based on that number.

Let's use another example.  This time we are going to use grocery stores.  The following list shows the store number, the average sale amount, and the number of customers.
  • 1     $30     42
  • 2     $25     20
  • 3     $10     75
If we want to know the average (mean) sale amount by customer, we can't just add $30 + $25 + $10 and divide by three to get $21.67.  We have to do this:

( ($30 x 42) + ($25 x 20) + ($10 x 75) ) / (42 + 20 + 75) = ($1,260 + $500 + $750) / 137 = $2,510 / 137 = $18.32

If we want to know the average total store sales, on the other hand, we would do this:

( ($30 x 42) + ($25 x 20) + ($10 x 75) ) / 3 = $836.67
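For those who prefer code to arithmetic, both calculations above can be sketched in a few lines of Python:

```python
# (average sale, number of customers) for each store
stores = [(30, 42), (25, 20), (10, 75)]

total_sales = sum(avg * n for avg, n in stores)  # $2,510
total_customers = sum(n for _, n in stores)      # 137

per_customer = total_sales / total_customers     # the weighted mean
per_store = total_sales / len(stores)

print(round(per_customer, 2))  # 18.32
print(round(per_store, 2))     # 836.67
```

The key design point is that the weighted mean multiplies each store's average by its number of customers before dividing, so big stores count for more than small ones.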

By the way, I apologize if I am causing you high school algebra flashbacks, but sometimes working the math out is the best way to ensure you know what you are looking at.

Next, let's talk about rounding.  Best practice suggests that we should do all of our calculations using the most specific data, then the final results can be rounded for reporting.

For example, if I have a list of numbers that each have four decimal places, but I want to report out using just a single decimal place, I should still do all of my calculations using four decimal places and then round my final answer to one decimal place.  You will often see people taking shortcuts and rounding their list of numbers before calculating an average.  The resulting average may not be dramatically wrong, but it will be less accurate than one calculated from the unrounded data.
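A tiny illustration of why rounding should wait until the final step (this uses a payment total rather than the four-decimal example from the text, because the error is easier to see):

```python
payment = 100 / 3  # 33.333... per payment

# Best practice: calculate with full precision, round only the final result.
total_precise = round(payment * 3, 2)

# Shortcut: round each payment first, then total.
total_shortcut = round(round(payment, 2) * 3, 2)

print(total_precise)   # 100.0
print(total_shortcut)  # 99.99
```

A penny may not seem like much, but the same early-rounding errors accumulate in any calculation built on many rounded inputs.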

I've talked about the calculations above without mention of deliberate bias so far, but this has to be considered as well.

In the examples above comparing mean and median, some folks would calculate both and then use whichever one was more consistent with their message.  The same can be said for rounding, throwing out outliers, and level of aggregation.

These are just a few considerations to keep in mind when reading reported averages.  There are others, of course, but the important thing to know is that there are many ways to calculate averages, and when reading or hearing about a particular average, it is a good idea to try and determine how it was calculated and whether there was an ulterior motive in the choice of statistics used.  

Tuesday, May 22, 2018

The Devil's in the Demographics

This scatterplot shows the relationship between the percent eligible for Free or Reduced-Price Lunch within a school and the percent performing at Grade Level or Above on the 2017 KS State Assessments (Reading and Mathematics). 

We all want to be able to compare things.  Humans have a drive to qualify the world around them by determining what is better and what is worse.  The problem is, we often make these judgments without considering all the pertinent facts.

Take school performance, for example.  What if you wanted to make a Report Card that compared schools on state assessment results?  What would you need to take into consideration?

From a statistical standpoint, you would want to find all the things that can reliably predict student outcomes on these state assessments but are not related to school effectiveness.  Then you would want to take these into consideration when comparing one school to another, to make sure you were really seeing differences based on the school's influence rather than on other outside factors.

KASB has spent quite a bit of time with the Kansas State Assessment data for the past few years.  We've compiled it into an online tool, which can be found here.

In the process of examining this data, we have found several demographic characteristics at both the school and district level that are significant predictors of school performance.  In other words, we can reliably predict student outcomes on the Kansas State Assessments based on a variety of factors not directly related to school effectiveness.

Here are some examples of what we found:
  • Larger schools have lower percents of students performing at grade level.
  • Schools in districts with higher percents of white students have higher percents of students performing at grade level and at college and career ready. 
  • Schools in districts with higher percents of Hispanic students have lower percents of students performing at grade level and at college and career ready. 
  • Schools in districts with higher percents of ELL students have lower percents of students performing at grade level and at college and career ready.
  • Schools in districts with higher percents of Migrant students have lower percents of students performing at grade level and at college and career ready.
  • Schools with higher percents of Economically Disadvantaged students, which is measured by the percent of students eligible for Free or Reduced-Price Lunch, have lower percents of students performing at grade level and at college and career ready. 
  • Schools in districts with higher percents of Students with Disabilities have lower percents of students performing at grade level and at college and career ready. 

If all of the above are true, then a rating system that did not take these factors into consideration would be biased towards smaller schools with high percents of white students and low percents of students who are Hispanic, English Language Learners, Migrant, or Economically Disadvantaged, or who have Disabilities.  

This is certainly something to consider when looking at any kind of rating system that claims to present an accurate comparison across schools in Kansas.  

Thursday, May 3, 2018

Kansas Enrollment Projections 2017-18 Enhanced

Earlier this year, KASB released its annual Statewide Enrollment Projections along with district-level projections by grade and free/reduced-price lunch status.  The report can be found here, and the online tool allowing you to select one or more districts can be found here.

Recently, KASB gathered demographics data from KSDE's report card data, and has incorporated this into the enrollment projections to provide state and district-level projections for gender, race/ethnicity, economically disadvantaged students, English Language Learner (ELL) students, migrant students, and students with disabilities.

This data was available for the 2012-13, 2013-14, 2014-15, 2015-16, and 2016-17 school years.  KASB calculated the percent change from one year to the next across these five years, then averaged those changes to determine the 2017-18 values.  For 2018-19 through 2022-23, a five-year rolling average percent change was used to calculate the values.
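The projection method described above can be sketched roughly as follows.  This is a plausible reading of the approach, not KASB's actual calculation, and the numbers are illustrative only (loosely based on the ELL percentages discussed below):

```python
# Hypothetical yearly percentages, 2012-13 through 2016-17.
history = [8.4, 9.1, 9.9, 10.8, 11.8]

def project(series, years):
    """Extend a series by applying the average of the year-over-year
    percent changes in the most recent five values, rolling forward."""
    values = list(series)
    for _ in range(years):
        recent = values[-5:]
        changes = [(b - a) / a for a, b in zip(recent, recent[1:])]
        avg_change = sum(changes) / len(changes)
        values.append(values[-1] * (1 + avg_change))
    return values

# Project six more years, through 2022-23.
projected = project(history, 6)
print([round(v, 1) for v in projected])
```

Because each projected year is folded back into the five-year window, the projection gradually adapts as it rolls forward rather than applying one fixed growth rate.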

It is important to note that these demographic indicators come from the state's assessment and accountability data, and may reflect a slightly different population than the headcount enrollment numbers, based on funding, that are used for the overall enrollment projection counts.

The results at the state level are as follows:


The data suggests there will be no change in the percent breakouts for gender, with just slightly over half of students (51.5%) male.


There is a growing percent of the student population reported as Hispanic, increasing from 17.8% in 2012-13 to 19.5% in 2016-17.  This trend is expected to continue, with the projections indicating the percent will increase to 21.8% by 2022-23.  Students reported as white or black are expected to decrease, but students reported in the remaining race categories, taken collectively, are expected to increase.

Economically Disadvantaged

From 2012-13 to 2016-17, the percent of Kansas students reported as economically disadvantaged decreased from 50.0% to 48.3%.  These projections suggest that by 2022-23, the percent could be down to 45.1%.


The ELL data shows the biggest change across this time period.  From 2012-13 to 2016-17, the percent of students receiving English Language Learner services increased from 8.4% to 11.8%.  If this trend continues, 17.3% of students could be receiving such services by 2022-23.  

However, it is important to note that the reporting and testing requirements have changed in recent years, as has the effectiveness of identifying students needing such services.  So, increases seen in the past few years may not continue throughout the projection period.  


The percent of students identified as migrant continues to be very small - going from 1.8% to 1.9% in the actual data and projected to increase to 2.1% by 2022-23.  

Students with Disabilities

The percent of students with disabilities increased slightly in the actual data, and is projected to continue this gradual increase, going from 14.4% in 2016-17 to 15.5% in 2022-23.

Next year's projections will include these demographic categories, along with any others that KASB can find to include.  KASB also hopes to address the issue of virtual school enrollment numbers, which are reported by KSDE at the district level and can inflate projections for certain districts.

Friday, April 27, 2018

NAEP Limitations and Cautions

Every other year when the NAEP results come out, there is a lot of discussion and debate about the national examination.  Many put a lot of stock in the data it provides, as it is pretty much the only national test with results from every state that can be used to make comparisons of student performance across the country.  However, there are some limitations to the NAEP exam and some cautions that should be noted in terms of how the results are used.

In 2009, researchers from the Buros Center for Testing at the University of Nebraska-Lincoln (my graduate school) and from the Center for Educational Assessment at the University of Massachusetts - Amherst published the Final Report for their Evaluation of the National Assessment of Educational Progress based on a congressional mandate.  The full report can be found here.

In the executive summary, the authors state the following:
Comparing student achievement on NAEP across states is complicated. To appreciate the challenges in making state-by-state comparisons, it is necessary to understand the sampling design adopted by NAEP and its potential impact on the results and their interpretations. In NAEP’s multistage cluster sampling procedure, not all students take the assessment, and those students who do take NAEP respond to a subset of the NAEP items in each content area. While this allows for a broad sampling of items from any one content domain, the extent to which subgroups of students are represented adequately in NAEP’s state samples is of concern.  
As reported in the current evaluation, NAEP’s sampling procedures do not ensure adequate representation of various subgroups (including those defined by race and ethnicity) within some states, putting valid interpretations about subgroup performances within a state and across states at risk. Using NAEP to verify state results regarding the achievement of students with disabilities is also problematic because decisions about inclusion and allowable accommodations are made at the state level. Because states vary in their inclusion rates and in their treatment of accommodations for NAEP, the validity of state-by-state comparisons is debatable. 
Below are the main concerns and recommendations the researchers stated regarding NAEP and appropriate uses of its data.

  • There is not an organized validity framework for the exam, which is needed given the complexity and multiple uses of NAEP.  According to the report, "An organized validity framework takes into account the history of the assessment program, current learning theory, and content-performance expectations from the subject-matter field and related professions. It also addresses contemporary issues in current interpretations and uses of the assessment and anticipates future appropriate and inappropriate uses and consequences of the assessment."
  • Additional studies are warranted if NAEP is to be used to verify state assessment results.  According to the report, "As reported in the current evaluation, there are numerous factors that can jeopardize the validity of interpretations when using NAEP to verify state results. These include differences in content being assessed, differences in standard setting policies and procedures, differences in the definition of the achievement levels, and differences in the representation of the NAEP state samples. Additional alignment studies that evaluate the congruency between the content assessed by NAEP and state content standards and assessment are crucial. The sampling procedures for NAEP should also be studied. Representation of subgroups across states varies considerably as do the inclusion and exclusion rates for students with disabilities, impacting the validity of the use of NAEP results for state-by-state comparisons and for verifying state assessment results."
  • Revise review processes for NAEP technical reports and manuals that facilitate their timely release.   "Currently, release of NAEP technical documentation can be years after results have been released, exceeding what testing programs should tolerate...  There are several reasons for releasing timely technical documentation; primarily, it assists users in understanding appropriate uses and limitations of NAEP scores."
  • Other measures of U.S. students’ educational achievement do not provide strong sources of external validity evidence for NAEP achievement levels. "It is a challenge to gather validity evidence from multiple sources outside a standard-setting study that can be used to evaluate achievement levels. Furthermore, external data are not perfect evaluation evidence due to potential differences in content, sample, and purpose. For example, some tests (like well-known college admissions tests—e.g., the SAT and ACT) involve self-selected samples of college-bound seniors, not a nationally representative sample. In many cases external tests serve purposes that are very different from NAEP. As the differences between what tests purport to do and what they measure increase, the utility of these measures as external evidence decreases." 
  • NAEP should continue to explore methodologies for setting achievement levels. "Stakeholders continue to use achievement levels as one means of interpreting NAEP results. NAEP has engaged in extensive research on standard-setting since 1992 to improve its practice. Some of this research includes the pilot studies done on the new Mapmark method (Schulz and Mitzel, 2005). However, because this new methodology is not widely used, more research on whether it is appropriate for other NAEP subject areas is needed. Although we conclude that the new methodology worked well with the experts involved in the study on the 2005 grade 12 mathematics assessment, the degree to which the method will work with experts from other subject areas cannot be determined from this evaluation." 
  • NAEP should prioritize gathering external validity evidence that evaluates the intended uses and interpretations of its achievement levels. "The validity evidence collected by NAEP from internal and procedural sources suggest that the methodology was implemented as intended and that panelists had a positive experience with the process. However, the reasonableness of the results is a judgmental decision by policymakers who should consider additional sources of information. External validity evidence is an additional source of information to help policymakers make the final policy decisions about NAEP achievement levels. Such evidence may include results from additional standard-setting methods, state university entrance levels at the high school level, and transcript studies that evaluate course performance. The extent to which the sources of evidence may converge is affected by the intended uses and interpretations of NAEP’s achievement levels as articulated in a validity framework."
  • Current NAEP inclusion and participation policies and rates may not provide evidence to support intended uses and interpretations of NAEP. "As mentioned earlier, the intended uses and interpretations of NAEP results should be defined in a validity framework and related to how different types of students and schools are included in the results. Unlike state assessment programs developed for NCLB, all students do not take NAEP. Further, those who take NAEP do not take a full assessment but rather a sample of its content. Thus, those included or excluded can influence the results and any score interpretations. This is particularly true for students with disabilities (SWD) and English language learners (ELL). Decisions about inclusion and accommodations of SWD and ELL are made at the state level... Beyond inclusion policies, participation is also an important consideration. NAEP remains a voluntary assessment for students. Therefore, nonresponse and refusal to participate represent potential threats to the validity of NAEP scores, particularly for grade 12 and private school samples. For example, Chromy (2005) noted that recent student participation rates for grade 12 (74 percent) were considerably lower than grade 4 (94 percent) and grade 8 (92 percent). It is also unclear whether current sampling plans include all potential subgroups of interest within a state, such as students with specific ethnicities, disabilities, varying language proficiencies, and free and reduced-priced lunch program status."
  • Intended users were not familiar with NAEP scale scores and had difficulty distinguishing between achievement levels on NAEP and those that were developed by states for NCLB reporting purposes. "Most participants in our utility studies identified NAEP with state-level results. This represents a communications challenge for the future because of stakeholders’ familiarity with the reporting scales and achievement levels used for their state’s own NCLB assessment. For example, there was confusion among participants between state and NAEP achievement level results. This led to recognition that states’ definitions of Proficient are perhaps different from NAEP’s definition of Proficient. However, the nature of such differences is not readily apparent. Another source of confusion is that NAEP defines three achievement levels (i.e. basic, proficient, and advanced), yet often indirectly reports student performance at four levels (i.e. below basic, basic, proficient, and advanced). No policy definition for the achievement level below basic exists." 
  • Prioritize score reporting and interpretation as an area for research in the NAEP program. "Systematic studies of methods to report NAEP scale scores and achievement levels should be carried out with stakeholder groups prior to their operational use. Although some of this research may include print media, a more critical focus for evaluation is the expanding presence of NAEP on the World Wide Web. Where appropriate, the NAEP elements on the Web should be revised to represent empirical findings about ease of use, stakeholder interests, and accepted Web site development practices. Because NAEP reporting continues to invest in the use of interactive, online tools, the utility of these features must also be assessed." 
It is important to note that this report is almost ten years old, so it is possible that some of the concerns listed above have been addressed.  However, I could not find any research or reports that described changes like the ones suggested above in the years since it was published, and most sources indicate very little has changed about how NAEP is administered, scored, and reported in the past several years. 

Monday, April 23, 2018

Mind the Gap

Last time, we talked about the difference between Kansas and all Public Schools in the U.S. in terms of the NAEP reading and math assessments for 4th and 8th grade.  This analysis looked at the values for all students.

Today, we are going to look at the gap in performance between those who qualify for free or reduced-price lunches under the National School Lunch Program versus those who do not qualify.  For public school students in the nation, this is the closest thing we have to a direct measure of students' socioeconomic status.  The gap between performance for high and low income students indicates how successful the education system is at providing an equitable education for all students.

The comparison group we are using for this analysis is slightly different from the one used last time, based on the available data.  Previously, we were able to include only public school numbers at the national level, but for this analysis we will be looking at the total U.S. average for comparisons.

So, this analysis will examine the question, "Is Kansas doing better at providing equitable education for low and non-low income students than the nation as a whole?"


The chart above shows the average gap between scores for students eligible for free or reduced-price lunch and scores for students not eligible for free or reduced-price lunch.  The orange lines indicate the average for the United States, and the blue lines indicate the averages for Kansas.

The first thing to notice is that across time, the gap has been fairly consistent at around 20-25 points.  The second thing to notice is that over time, the gap for Kansas has been very close to, but just under, the gap for the United States.

From 2015 to 2017, however, the gap for Kansas students taking the 4th grade math exam decreased.  In 2015, the gap for Kansas was closer to the U.S. average than in any previous year, but it started moving away again in 2017.  The pattern was similar for 4th grade reading, though the gap for Kansas was actually higher than the U.S. gap in 2015 before dropping back below it in 2017.

The gaps for 8th grade reading and math declined for Kansas in 2015, but increased again in 2017 to almost the same as the U.S. average.

This data suggests that Kansas Public Schools have historically been about as successful at ensuring equitable education for low income students as the nation has, and that the gap in scores suggests more efforts are needed.  In addition, the gap for 4th grade students in Kansas is decreasing, while the gap for 8th grade students is getting bigger.

Percent at Basic or Above

The chart above shows the average gap between the percent of students eligible for free or reduced-price lunch who scored at Basic or above and the percent of students not eligible for free or reduced-price lunch who scored at Basic or above.  The orange lines indicate the average for the United States, and the blue lines indicate the averages for Kansas.

The patterns seen for these percentages closely follow those described for the scores; Kansas has historically had a smaller gap than the nation, but the two have gotten closer together in recent years.  In 2017, 4th grade results showed a decrease in the gap for Kansas, but 8th grade results showed an increase in the gap for Kansas (after decreases seen from 2013 to 2015).

This data again suggests that the gaps in performance, both for Kansas and for the nation, are too large, and have shown little substantial change over time.  In Kansas, this data suggests more efforts need to be made in the higher grades to see a decrease in the gap, and that the efforts being made in the lower grades need to continue.

Percent at Proficient or Above

The chart above shows the average gap between the percent of students eligible for free or reduced-price lunch who scored at Proficient or above and the percent of students not eligible for free or reduced-price lunch who scored at Proficient or above.  The orange lines indicate the average for the United States, and the blue lines indicate the averages for Kansas.

Though the patterns seen for this metric are very similar to the trends described for the scores and for the percent performing at Basic or above, the differences between Kansas and the U.S. are much smaller for the students performing at Proficient or above.  Further, the gap for 4th grade reading was higher for Kansas than for the nation in 2013 and 2015, but dropped below the national average in 2017.

Summary & Conclusion

Taken all together, these comparisons suggest that:
  • Nationally, efforts are needed to help lower income students perform at the same levels as higher income students.  
  • The differences between how lower and higher income students perform in Kansas is very similar to the differences nationwide.  
  • Historically, the differences between lower and higher income students' performance in Kansas have been lower than the differences nationwide, but recent years have seen Kansas' gap approach, and at times exceed, that for the nation.
  • From 2015 to 2017, in general the gap for Kansas students in 4th grade decreased, and moved further below the national average than in recent years.
  • From 2015 to 2017, in general the gap for Kansas students in 8th grade increased, and moved closer to the national average than in recent years.
In Kansas, we have spent much of the last decade discussing the importance of an education that is both adequate and equitable.  The NAEP data suggests that, whether the education is adequate or not, it is not equitable.  The Kansas legislature and public school system have a responsibility to make efforts to improve equity, but it is important to note that the discrepancies we see in Kansas closely mirror those seen nationwide.  This suggests that national efforts are also needed to improve the education that lower income students are receiving to enable them to perform as well as their higher income peers.  

Tuesday, April 10, 2018

How Kansas Compares to the Nation’s Public Schools on NAEP


The 2017 NAEP results were made available this week, so many are digging through the data to try and make sense of what it might tell us.  This article is another attempt to do just that.

KASB is frequently asked to speak to how Kansas students are doing.  It would make sense to answer this question by comparing Kansas public school students to all public school students in the nation, and to focus not on the scores and percentages themselves, but on how much above or below the nation Kansas students perform. 

Therefore, for this analysis, I am presenting the difference between the Mean Scores, Percent at Basic or Above, and Percent at Proficient or Above on the 4th and 8th grade Reading and Math exams for Kansas public school students and the scores and percentages for all public school students nationwide.

There are student subgroups worth investigating, such as students eligible for free or reduced-price lunch versus those who are not eligible.  Though this article focuses on all students, further analysis may delve into these subgroups.

Mean Scores

The following graph shows the difference in mean scores between Kansas and the nation’s public schools on the four exams.

As can be seen, Kansas students have had consistently higher mean scores than the national average for public school students, with the exception of the 4th grade reading test in 2015.  However, the number of points above the national average had been declining since 2005.  That changed in 2017, when the number of points above the national average increased for all tests except the 8th grade reading exam, which is also the only exam whose margin over the nation had increased in 2015.

This data suggests that though Kansas has been "losing its lead" on the nation for several years, this trend may be reversing as of the 2017 results in terms of mean (average) scores.

Percent at Basic or Above

The following graph shows the difference in the percent of students performing at Basic or above between Kansas and the nation’s public schools on the four exams.

Similar to the mean scores, this data shows that the percent of Kansas public school students performing at Basic or above has been consistently above the national percentage (except for 4th grade reading in 2015), but Kansas’ lead has been declining since at least 2009.

For 4th grade, the number of percentage points above the national percent increased from 2015 to 2017, suggesting a reversal in the trend.  For 8th grade, Kansas’ lead decreased slightly for Math, and more noticeably for Reading.

Percent at Proficient or Above

The following graph shows the difference in the percent of students performing at Proficient or above between Kansas and the nation’s public schools on the four exams.

As with the other two metrics, the percent of Kansas students performing at or above Proficient has been consistently higher than the percent of public school students nationally.  The number of percentage points above the national average has been generally on the decline since 2005.

However, in 2017 the number of percentage points above the national average increased for 4th and 8th grade math and for 4th grade reading, while Kansas' 8th grade reading lead declined only slightly. 


This brief analysis suggests that the last decade has seen a change in the relative performance of Kansas public school students: they have continued to outperform the nation, but their lead has shrunk in each subsequent testing year.  The 2017 data, however, suggest this trend may be reversing, with Kansas students pulling further ahead of their peers in other states than they were in 2015. 

With new funding for Kansas public schools, we can only hope that this new trend continues.

Thursday, September 28, 2017

The Education Commission of the States - Recent Research

I sometimes use the Sticky Note application on my desktop computer.  Not to store all my passwords, of course, but to keep track of interesting articles or studies that I want to look into "when I have time." 

One of my favorite sources for such articles is the Education Commission of the States.  The commission was started in the mid-sixties as a way for scholars and policy makers to share information and ideas across the country.   Today they produce great research and analysis on education policy and education system characteristics in each of the 50 states. 

Below are five studies they have released recently that have been on my "to review more in-depth when I have time" pile.

High School Feedback Reports: Education Commission of the States researched high school feedback reports in all 50 states to produce this comprehensive database of state feedback reports and systems. A state received a "yes" if it has established a system or report that provides data regarding postsecondary enrollment and/or postsecondary performance of high school graduates, broken down at the high school and/or district level.

Key takeaways

  • Forty-two states currently have a system or report that provides data regarding postsecondary enrollment and/or postsecondary performance of high school graduates, broken down at the high school and/or district level.
  • Twenty-four states provide one or more elements of this data broken down by race/ethnicity.
  • Twenty-one states provide one or more elements of this data broken down by income indicators.
  • Thirty-nine states’ high school feedback reports are publicly available.
  • State education agencies produce high school feedback reports or systems in 21 states, state higher education agencies produce them in 13 states, and P20W collaborative agencies or statewide longitudinal data systems produce them in eight states.

Advancing Student Success Through the Arts: This Education Trends report explores research on how the arts bolster the development of deeper learning skills, provides examples of programs that successfully increased access to the arts in education in public schools, and includes state- and local-level policy considerations.

State Innovations for Near Completers: This Promising Practices report overviews the 2017 policy landscape regarding near-completers, reviews three states’ outreach strategies to this population and includes policy considerations for state leaders.

The Civics Education Initiative 2015-17:  This Education Trends report provides an update on state adoption of the Civics Education Initiative, explains the impact of the initiative, looks at how states customized the initiative, and provides examples and opportunities for policymakers to build on civic education policy efforts, such as the Civics Education Initiative.

Outcomes-Based Funding: This Policy Snapshot defines and explores outcomes-based funding, and provides summary information on 2016 and 2017 legislative activities.

If you visit ECS's site, be sure to subscribe so you will receive notices when they release new reports.