Interpreting the conflicting evidence on Head Start effectiveness

The evidence on the effectiveness of Head Start is mixed. On the one hand, the recent random assignment study of Head Start found that its test-score effects mostly disappeared by the end of first grade. On the other hand, several well-done long-term studies of Head Start have found significant effects on young adult outcomes many years later.

How can we best interpret such contradictory evidence? One possible interpretation is that the random assignment evidence should be regarded as the “gold standard.” Under this interpretation, the well-done studies that have found long-term effects of Head Start should be regarded as spurious, the product of chance: if many researchers do studies, some are bound simply by chance to find positive effects. In addition, academic journal reviewers may be more enthusiastic about papers that report statistically significant positive effects than about papers that report insignificant results or negative results that don’t make sense. (For an example of this interpretation, see the recent paper in Science by Steve Barnett.)

Another interpretation of this evidence is that the relative effects of Head Start, compared to other preschool and child care options available to potential Head Start participants, have declined over time. Long-term studies of Head Start are by necessity studying the effects of Head Start as it was some years ago. And all studies of any preschool program are studying the effect of that preschool program compared to whatever mix of preschool, child care, parental care, and relative care is utilized by the control or comparison group.

One factor that has changed over time is that the alternatives to Head Start have probably improved in quality. Many states have set up high-quality preschool programs. It is also possible that many private child care and preschool programs have become more educational in focus.

Therefore, the alternative interpretation of the evidence is that Head Start was at one time considerably better than the alternatives available to low-income families, but that today, Head Start on average is not much better than those alternatives. Under this interpretation, the problem is that Head Start has not improved its effectiveness enough to stay superior to its competition. This is a problem because Head Start requires considerable resources.

Finally, it should be noted that under either interpretation of the Head Start research evidence, we are only making statements about the effectiveness of Head Start on average. There is some evidence that some Head Start centers are considerably more effective than average.

Under either interpretation of the evidence, it makes more sense to improve the effectiveness of Head Start than to defund it. The evidence for the effectiveness of some preschool programs, such as many state-funded pre-K programs, is strong. We know that high-quality pre-K programs can work, and can work on a large scale. The question is what reforms in Head Start quality standards, staff training, curriculum, and funding approaches will best increase the average effectiveness of Head Start.
