Evidence at the Crossroads Pt. 2: Moneyball for Education
This post originally appeared on the William T. Grant Foundation website, as part of the Evidence at the Crossroads series.
By Frederick M. Hess and Bethany Little
Earlier this year, we made the bipartisan case for why and how federal education policymakers need to start playing “Moneyball.” By adopting and adapting the Oakland Athletics’ pioneering approach in baseball of making decisions informed by data—rather than hunches, biases, and “the way we’ve always done things”—we can get better returns on our federal education investments and better outcomes for students.
Specifically, playing Moneyball for Education would mean:
- Collecting better, more useful data and building evidence about how well programs and policies work;
- Using evidence to improve practice and inform policies; and
- Shifting funds toward those things that deliver more promising results.
Sounds easy, right? Not so fast. Like baseball itself, Moneyball is a game of nuance, a quality rarely associated with federal policymaking. Too often, policy is made absent the data that would allow it to be more effective.
Luckily, some important groundwork has been laid to allow Moneyball for Education to begin to take hold. For example, Congress has funded several tiered evidence initiatives that offer a way to reward evidence of performance and to build the body of evidence in the field. These could help shift us toward better policymaking, if the initiatives themselves are used as sources of data, evidence, and models for continuous improvement.
As we grapple with the information emerging from these initiatives, it is important to keep in mind both the possibilities and the limits of the Moneyball approach.
First, we don’t yet have all the advanced metrics needed to play the game really well. By necessity, the coming evaluations of the tiered evidence initiatives in education will focus largely on what is readily measured: reading and math scores and graduation rates. Those results should be valued and used, but our reactions to them should be tempered by acknowledging the limits of the current state of measurement. Simply put, we don’t yet know how to measure some of our most sought-after student outcomes, like critical thinking and collaborative problem solving. That limits what can be known about the true effectiveness of some interventions. Accordingly, just as baseball now benefits from a wealth of recently developed advanced metrics, education needs a broader set of indicators that can drive improved student outcomes. The federal government (via the Institute of Education Sciences) could help, but the entire educational ecosystem, including non-governmental entities, should play a role in identifying the right metrics and developing and refining them over time.
Second, it’s as important to learn from a strikeout as from a home run. In the rush to celebrate winners, important lessons from mistakes, missteps, and disappointments are often overlooked. Accordingly, all evaluation results should be studied closely, including those finding no, mixed, or even negative effects. Learning from failure is one of the most important drivers of continuous improvement and an essential tool for creating a winning team—and better schools. Yet too often the federal government doesn’t make the investment in evaluation necessary to provide insight into how programs could be improved. A small but significant percentage of all program funds should be set aside for the high-quality program evaluations that would put us in a position to learn from all of our at-bats.
Third, there are different expectations of veterans and rookies. While some slack should be given to the young rookie with unproven potential, veterans should be held to higher standards of performance. After all, veterans are expected to keep producing results and often cost more given their track record of success. In other words, just as tiered evidence programs differentiate the available funding based on tiers of supporting evidence, so too should they differentiate the perspective through which they review grantees’ evaluations. Because it is vital that we encourage creative new solutions to persistent problems, we should accept, as part of building the knowledge base, that some dollars will go to promising grants that ultimately don’t pan out. However, when we invest to scale up a proven solution, we should hold it to higher expectations. Merely saying that a lot was learned from scale-up grants is insufficient. If scale-up grants do not produce the desired outcomes, that may have implications for policy design (e.g., perhaps the evidence bar was set too low to merit an investment at scale, or perhaps limited metrics kept us from fully measuring the program’s benefits). The ability to differentiate in this way is one big reason the tiered funding framework holds so much promise.
Fourth, learn to field the best team possible within the salary cap. A baseball general manager would be run out of town if he signed new players to contracts without considering the team’s overall payroll and without analyzing whether a player’s salary reflected his production. Yet in education, we typically focus just on program outcomes without paying much attention to the costs of producing them. It’s time to get smarter about measuring the return on investment through cost-benefit analyses. To do so, more precise, transparent, and common cost accounting rules are ultimately needed, but in the meantime the available cost data should be considered in making sense of the upcoming evaluations.
Fifth, don’t judge the whole season on the basis of one game. There is much we can learn from these evaluations, but we must resist the urge to make overly broad judgments on the basis of a single study. This caution applies somewhat to how we view the effectiveness of discrete interventions, but even more to how we assess the impact of a big federal program that funds states and providers varying widely in their activities and quality. The complexity of federal programs means it is often difficult to determine whether the funding stream itself is effective—or to reach pat conclusions about which programs do or don’t “work.” In other words, as we prepare for the coming season, we should avoid the tendency to oversimplify and instead seek to answer questions like, “What adjustments would bring about better results next time?”
Our paper explains more of our thoughts about how to play Moneyball in the context of federal education policy, including a set of seven specific policy recommendations for Congress and the executive branch. These recommendations, however, do not expect the federal government to play Moneyball by itself. Rather, they form a playbook for how the government can help nurture an ecology of information, institutions, and incentives that will make it easier for everyone involved in education to play this game well.
Now that’s a winning formula.
Frederick M. Hess is director of education policy studies at the American Enterprise Institute, and Bethany Little is an advisor to Results for America and a principal at EducationCounsel.