In my new book, From Preschool to Prosperity, one issue I discuss about Head Start is how results from studies of its effectiveness have changed over time. Earlier studies find evidence of Head Start's effectiveness in both the short run and the long run. (For examples, see studies by Deming, Currie et al., and Ludwig et al.) But the most recent Head Start experiment finds that Head Start's short-term benefits quickly fade. How can these conflicting results be explained?
As mentioned in yesterday’s blog post, one explanation is that the experiment’s fading test score results need not imply a lack of long-run effects on adult outcomes. However, this explanation seems incomplete: previous Head Start studies, such as Deming’s, find some fading of test score effects, but not a decline this drastic or this quick.
One plausible additional explanation is that the quality of the alternatives to Head Start has changed. Studies of Head Start, or of any program, always compare the program to whatever happens to the study’s comparison or control group. Many of the studies that find long-run effects of Head Start examine the program as it operated many years ago, in the 1960s, 1970s, and 1980s. In those periods, the comparison groups would in most cases not have been enrolled in high-quality pre-K programs. In more recent periods, by contrast, children who did not enroll in Head Start are more likely to have been enrolled in a high-quality pre-K program, such as the many state pre-K programs that have grown rapidly over the past 15 years.
There is some evidence for this hypothesis in the recent Head Start experiment. In that experiment, 80 percent of the children in the randomly assigned treatment group ended up enrolled in Head Start. In the randomly assigned control group, about half of the children ended up enrolled in some pre-K program: 14 percent in Head Start and 35 percent in some other pre-K program.
If some of the control group’s pre-K enrollment was in state pre-K programs that yielded higher test score effects than Head Start, this would reduce Head Start’s relative test score advantage. The experiment may not show that Head Start has no effects relative to no preschool, but rather that its effects are not large compared to those of alternative pre-K programs.
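To see how much the control group’s pre-K enrollment can shrink the measured treatment-control contrast, here is a minimal back-of-the-envelope sketch. The enrollment shares (80 percent, 14 percent, 35 percent) are from the experiment as described above; the effect sizes are purely hypothetical illustrations, not estimates from the study.

```python
# Hypothetical effect sizes (in test-score standard deviations) -- these are
# illustrative assumptions, NOT estimates from the Head Start experiment.
hs_effect = 0.20          # assumed effect of Head Start vs. no preschool
other_prek_effect = 0.25  # assumed effect of an alternative pre-K vs. no preschool

# Enrollment shares from the experiment: 80% of the treatment group in Head
# Start; in the control group, 14% in Head Start and 35% in other pre-K.
treat_mean = 0.80 * hs_effect
ctrl_mean = 0.14 * hs_effect + 0.35 * other_prek_effect

measured_contrast = treat_mean - ctrl_mean
print(f"Assumed Head Start effect vs. no preschool: {hs_effect:.3f} SD")
print(f"Measured treatment-control difference:      {measured_contrast:.3f} SD")
```

Under these illustrative numbers, the measured treatment-control difference is only a fraction of the assumed Head Start effect versus no preschool at all, which is the dilution mechanism the text describes.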
If this interpretation is correct, the Head Start experiment should not be read as evidence that preschool is ineffective. Rather, it should be read as meaning that Head Start needs to improve, or at least needed to improve as of 2002-03, to catch up with the quality of its best pre-K alternatives. And there have been attempts to improve Head Start since the 2002-03 period studied in the experiment. In recent years, Head Start has pushed more literacy instruction. In addition, the federal government is evaluating the quality of local Head Start programs and requiring recompetition for Head Start grants where there are local quality problems.
A 2002-03 study that compares one preschool program, Head Start, with other preschool programs does not show that all preschool at all times is ineffective compared to zero preschool. The case for preschool is that quality preschool programs can help children relative to zero preschool or low-quality preschool. The Head Start experiment does not provide strong evidence against that overall case.