What data are most needed to support meaningful evaluation of pre-k programs?

Sara Mead links to the recent report from the Early Childhood Data Collaborative. This report provides an overview of states’ progress towards better data systems for early childhood programs.

Such data systems have multiple uses. Economists tend to be bottom-line oriented and think only of evaluating program effects. But data systems are also useful for running programs well day to day and for meeting individual children's needs. They are also useful for describing a program's characteristics and the characteristics of its participants, information that policymakers want. The detailed work done by the Early Childhood Data Collaborative will improve the ability of early childhood data systems to serve these multiple purposes.

However, evaluating program effectiveness is a major purpose of early childhood data systems. We want to know whether a particular program works. We want to see whether a program works better for some children than for others. And we want to see whether a program's effectiveness is related to its characteristics.

From the point of view of evaluating pre-k programs, these recent reports should be supplemented with one quite specific and narrow data need. Specifically, if we want to evaluate the effects of the pre-k programs for 4-year-olds run by many states, the main data need is this: we need to collect test data at entrance to the state pre-k program, and at entrance to kindergarten for former participants in the state pre-k program, using the same tests for both groups. This specific need is not mentioned in the two recent reports (August 2010 and March 2011) by the Early Childhood Data Collaborative.

This extra data collection would allow state pre-k programs to be evaluated using a regression discontinuity design. Under this design, in states with reasonably strict adherence to an age cut-off for pre-k attendance, we can compare the test scores of children who just made the age cut-off the previous year, and therefore completed a year of state pre-k, with quite similar children who just missed the age cut-off the previous year and are just entering the state pre-k program this year. We can identify the effects of a state pre-k program on children's test scores by finding an abrupt jump, or discontinuity, in test scores at the age cut-off, which breaks the otherwise smooth pattern of test scores rising with age.
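For readers who want to see the mechanics, here is a minimal sketch of how such a regression discontinuity estimate might be computed. It assumes a hypothetical file of fall-entry test records with one row per child; the file name, column names, cutoff date, and bandwidth are all illustrative rather than taken from any actual state data system.

```python
# Minimal regression discontinuity sketch (hypothetical data and column names).
import pandas as pd
import statsmodels.formula.api as smf

# Children born on or before the cutoff were old enough to enroll in state
# pre-k last year (and are now entering kindergarten); children born after it
# are just entering state pre-k this fall.
CUTOFF = pd.Timestamp("2010-09-01")  # illustrative age cutoff date

df = pd.read_csv("fall_entry_tests.csv", parse_dates=["birth_date"])

# Running variable: age relative to the cutoff, in days (positive means the
# child made the cutoff last year and so has completed a year of state pre-k).
df["days_from_cutoff"] = (CUTOFF - df["birth_date"]).dt.days
df["treated"] = (df["days_from_cutoff"] >= 0).astype(int)

# Keep children within a bandwidth of the cutoff so the two groups are
# similar in age; in practice the bandwidth would be chosen more carefully.
bandwidth = 180  # days
sample = df[df["days_from_cutoff"].abs() <= bandwidth].copy()

# Local linear regression with separate age slopes on each side of the cutoff.
# The coefficient on `treated` is the estimated jump in test scores at the
# cutoff, i.e., the effect of a year of state pre-k on kindergarten readiness.
model = smf.ols(
    "test_score ~ treated + days_from_cutoff + treated:days_from_cutoff",
    data=sample,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params["treated"], model.bse["treated"])
```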

As pointed out in the recent report by the National Institute for Early Education Research, regression discontinuity evaluation is a reliable and rigorous approach to program evaluation. Some of the best research on state pre-k programs has used this approach, including studies of Oklahoma by William Gormley and his colleagues and studies of multiple states by the National Institute for Early Education Research. Although some states have collected such data from time to time, more systematic evaluation would be possible if such data were collected for all, or at least most, children in a state.

These test score data could cover both “hard skills” and “soft skills”. They should include whatever we regard as the most important outcomes of pre-k participation.

This specific data need deserves emphasis because it no doubt seems like a strange data request that is somewhat inconvenient to implement. It is strange because ordinarily we would not give the same test to children entering pre-k that we give to children entering kindergarten. It is inconvenient because ideally we would want to collect these data as soon as possible after the school year begins. (If the data are collected after a month of school, we are really evaluating the effects of the last 8 months of pre-k plus the first month of kindergarten.) It would be easier to test all participants at the end of pre-k, but then the two groups would not be tested at the same point in the school year, which this analysis requires.

However, this data request, although strange and inconvenient, allows for rigorous evaluation of the effects of state pre-k programs on kindergarten readiness. If these data are combined with individual data on student characteristics, we can tell how the effects of the state pre-k program vary across different student characteristics. If these data are combined with information on program characteristics (e.g., curriculum used, class size, teacher qualifications, teacher salaries), then with data from all sites in the state, we can determine what program characteristics promote greater pre-k program success.
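As a rough illustration of those last two steps, the sketch below extends the regression from the earlier code block, assuming the same `sample` data frame has had student and program fields merged onto it; the column names `free_lunch` and `class_size` are hypothetical placeholders, not fields from any real data system.

```python
# Extends the earlier sketch: `sample` is the bandwidth-restricted data frame
# built above, assumed to also carry merged student and program fields
# (free_lunch, class_size are illustrative names, not real data elements).
import statsmodels.formula.api as smf

model_het = smf.ols(
    "test_score ~ treated * (days_from_cutoff + free_lunch + class_size)",
    data=sample,
).fit(cov_type="HC1")

# The treated:free_lunch and treated:class_size coefficients show how the
# estimated pre-k effect at the cutoff varies with those student and program
# characteristics.
print(model_het.params.filter(like="treated"))
```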

Comprehensive data systems can provide many benefits. However, we can greatly improve evaluation of state pre-k programs with this narrower request: comparable data at pre-k entrance and kindergarten entrance on how children perform on meaningful tests.
