Early childhood programs today are backed by credible evidence not only from random assignment experiments, but also from “silver standard” studies with good comparison groups.

My new book, From Preschool to Prosperity, argues that the research evidence for the effectiveness of early childhood programs is broad. Random assignment experiments, such as the Perry Preschool study, are often cited. Random assignment is regarded as the “gold standard” for testing whether a program truly has causal effects on desired outcomes. But early childhood programs also have good evidence of effectiveness from many “silver standard” research studies.

The central challenge in determining true cause-and-effect relationships between programs and outcomes is the unobserved characteristics of program participants. Program participants and non-participants will, in general, differ in both observed and unobserved characteristics. We can statistically control for the observed characteristics, but we cannot do so for the unobserved ones. It is therefore hard to tell whether differences in later outcomes between participants and non-participants are due to the program, or due to pre-existing differences in unobserved characteristics.

For example, more ambitious families may be more likely to enroll their children in preschool, which will bias studies towards finding positive effects of preschool. On the other hand, families may be more likely to enroll children who have more problems, which will bias studies towards finding negative effects of preschool.

Random assignment experiments solve this problem of unobserved characteristics by using a randomization procedure to determine who participates. This ensures that, on average, program participants and non-participants will have the same unobserved characteristics. Differences in later outcomes are then more likely to be due to the program.
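To see this logic in a concrete (if artificial) way, here is a minimal simulation sketch, not from the book and with made-up numbers, in which a hypothetical unobserved “family ambition” variable raises both the chance of preschool enrollment and the child’s later outcome. The naive gap between participants and non-participants overstates the program’s true effect, while a simulated coin-flip assignment recovers it:

```python
# Illustrative sketch only: all variable names and numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved "family ambition": raises both the chance of enrolling in
# preschool and the child's later outcome, independent of the program.
ambition = rng.normal(size=n)

# Assumed true causal effect of preschool on the later outcome.
TRUE_EFFECT = 1.0

# Observational world: more ambitious families are more likely to enroll.
enrolled = rng.random(n) < 1 / (1 + np.exp(-ambition))
outcome = TRUE_EFFECT * enrolled + 2.0 * ambition + rng.normal(size=n)
naive_estimate = outcome[enrolled].mean() - outcome[~enrolled].mean()

# Experimental world: a coin flip decides who gets the program,
# so ambition is balanced across the two groups on average.
assigned = rng.random(n) < 0.5
outcome_rct = TRUE_EFFECT * assigned + 2.0 * ambition + rng.normal(size=n)
rct_estimate = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Naive participant gap:       {naive_estimate:.2f}  (biased upward)")
print(f"Random assignment estimate:  {rct_estimate:.2f}  (close to the truth)")
```

With these made-up numbers, the naive comparison comes out well above the true effect of 1.0, while the randomized comparison lands close to it.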

But in many cases randomization is not possible, is too costly or cumbersome, or is ethically dubious. In these cases, we may find “natural experiments”, in which some feature of how program access is determined makes it likely that participants and non-participants are otherwise similar. These natural experiments provide “silver standard” evidence for the effects of early childhood programs on outcomes.

For early childhood programs, many studies provide such “silver standard” evidence. Studies of both Head Start and North Carolina’s “More at Four” preschool program draw on county-by-county variation in access to the program that is arguably close to random. Studies of the Chicago Child-Parent Center program use geographic variation in program access by neighborhood. Other Head Start studies compare siblings within the same family, one of whom participated in Head Start and one of whom did not, which holds constant family factors affecting later outcomes. Finally, many state pre-K studies compare test scores of children just entering pre-K with those of children who have completed a year of pre-K and are entering kindergarten. The two groups differ only slightly in age; whether they have already had the pre-K year depends on whether their birthdates fall just before or just after the age cut-off for school entry.
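As an illustration of that last design, here is a schematic sketch, with hypothetical data and column names, of the age-cutoff comparison: within a narrow window of birthdates around the entry cut-off, children born just before it have completed a year of pre-K, while children born just after it are only now entering pre-K, so comparing their fall test scores approximates a randomized comparison of similar-age children.

```python
# Schematic sketch with hypothetical data and column names.
import pandas as pd

def age_cutoff_estimate(df: pd.DataFrame, bandwidth_days: int = 60) -> float:
    """Difference in mean fall test scores between children born just before
    the entry cut-off (who completed a year of pre-K) and children born just
    after it (who are only now entering pre-K), within a narrow window.

    Expects columns: 'days_from_cutoff' (negative or zero = born on or before
    the cut-off, hence old enough to have attended pre-K last year) and
    'test_score' (score on a common assessment given in the fall).
    """
    window = df[df["days_from_cutoff"].abs() <= bandwidth_days]
    completed_prek = window[window["days_from_cutoff"] <= 0]
    entering_prek = window[window["days_from_cutoff"] > 0]
    return completed_prek["test_score"].mean() - entering_prek["test_score"].mean()
```

Actual studies typically go further, for example by controlling smoothly for age on either side of the cut-off, but the narrow comparison window captures the basic logic.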

All these studies find evidence that early childhood programs have important effects on child outcomes. Because the participant and non-participant groups seem similar, this evidence is credible.

This evidence is “silver standard” because, for any particular study, we cannot be entirely certain that some subtle difference between participants and non-participants is not biasing results in one direction or another. A perfectly run random assignment experiment would provide better evidence, if such an experiment were readily available. But such random assignment evidence is not always available. And random assignment experiments are not always perfect. For example, almost any random assignment experiment has some sample attrition: we lack follow-up data on some individuals in both the participant and non-participant groups, and this attrition might also bias the evidence in one direction or another.

In the real world of social science or natural science, one study of one program rarely trumps all other studies. Each program is somewhat different, and each study has some imperfections. Rather, in determining the likely effects of a program, we need to see what pattern we find in a variety of more-or-less sound studies. For early childhood programs, the weight of the evidence points to these programs’ effectiveness in improving future adult outcomes for former child participants.

From Preschool to Prosperity is available for free as a PDF, for $0.99 on various e-book platforms, and in hard copy.


Tim Bartik is a senior economist at the Upjohn Institute for Employment Research, a non-profit and non-partisan research organization in Kalamazoo, Michigan. His research specializes in state and local economic development policies and local labor markets.