How should we respond to uncertainty?

Although there is good research support for the economic development benefits of early childhood programs, there is also some uncertainty. In particular, we don’t know whether large-scale implementation of an early childhood program will yield the same sizable long-term benefits that the Perry Preschool Program and the Chicago Child-Parent Center program did.

Business incentive programs also carry some uncertainty in expected benefits. Studies differ considerably in their estimates of how business taxes affect business location decisions, and because induced jobs are the denominator, these differences translate into large differences in the estimated cost of business tax incentives per induced job. In addition, there is considerable uncertainty over the effects of business incentives that provide services to business, such as customized job training, as relatively few studies have examined such services.
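
To see why the elasticity estimates matter so much, consider a purely illustrative back-of-envelope sketch in Python. Every number and name below is a hypothetical assumption chosen for illustration; none comes from the studies in question.

    # Purely illustrative: how the assumed tax elasticity of business
    # activity drives the estimated cost per induced job.
    def cost_per_induced_job(elasticity, tax_cut_share, baseline_jobs, revenue_lost):
        # Jobs induced = |elasticity| * percentage tax cut * baseline jobs.
        jobs_induced = abs(elasticity) * tax_cut_share * baseline_jobs
        return revenue_lost / jobs_induced

    # Hypothetical example: a 10% business tax cut costing $100 million
    # in a state economy with 1,000,000 jobs.
    for e in (-0.1, -0.3, -0.6):  # assumed span of elasticities across studies
        print(f"elasticity {e}: ${cost_per_induced_job(e, 0.10, 1_000_000, 1e8):,.0f} per job")
    # elasticity -0.1: $10,000 per job
    # elasticity -0.3: $3,333 per job
    # elasticity -0.6: $1,667 per job

The point is only that the cost per induced job scales inversely with the assumed elasticity, so a sixfold disagreement among studies about the elasticity implies a sixfold disagreement about the cost per job.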

One important aspect of uncertainty is that we don’t know for certain which program designs are best. In most cases, the estimated impacts of past early childhood programs or past business incentive programs tell us only that a particular program had a certain estimated effect; they don’t tell us why. Therefore, we are uncertain about which aspects of program design are most crucial.

How should we respond to such uncertainty in possible effects of early childhood programs and business incentive programs? I explore this question in chapter 6 of Investing in Kids.

Such uncertainty can rationalize inaction: before large-scale implementation of new programs, perhaps we should wait for better evidence, in particular evidence on which program designs are most effective. In the case of early childhood programs, Ron Haskins of the Brookings Institution has argued for this “waiting for better evidence” position; in the case of business incentive programs, Therese McGuire of Northwestern University has argued for it.

But waiting has a potentially heavy cost: we forgo the potential benefits of large-scale implementation in the meantime. In the case of early childhood programs, for example, children are only four once. If we defer large-scale implementation of pre-k programs while awaiting better evidence, some children will never receive the high-quality pre-k education that could have provided them large long-term benefits.

After all, the uncertainty goes in both directions. New pre-k programs or business incentive programs may have even higher benefits than past programs would lead us to expect. Past experience does not perfectly predict the future, but it is the best available predictor. Jens Ludwig and Deborah Phillips have made this argument in the context of preschool programs.

A better alternative is to implement what past research suggests is the best program design, while structuring the programs to learn from experience. We can implement early childhood programs or business incentive programs on a large scale while collecting data on program performance, including the relative performance of different program variations. Such data would allow the programs to be improved over time based on objective evidence.

For example, it would be quite feasible to collect data on the effects of pre-k programs on kindergarten readiness. As studies by Gormley of Oklahoma’s pre-k program, and by NIEER (the National Institute for Early Education Research) of various states’ pre-k programs, have shown, it is possible to get rigorous evidence on how pre-k programs affect student achievement at entrance to kindergarten. This is done by testing children with the same instruments both at entrance to the pre-k program and at entrance to kindergarten. Because a strict birthday cutoff determines who can enroll, children born just before and just after the cutoff are essentially similar except that one group has completed a year of pre-k. With such data, it is possible to provide reliable estimates that separate the effects of the pre-k program from the gains children would have made anyway as they aged. This methodology is a form of what is called regression discontinuity analysis, generally considered the next-best form of program evaluation after random-assignment experiments.
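
For readers who want to see the mechanics, here is a minimal sketch of this kind of regression discontinuity analysis in Python. It is written under stated assumptions: the data file, column names, and bandwidth are all hypothetical, and this is not the actual code used in the Gormley or NIEER studies.

    # A minimal regression discontinuity sketch (hypothetical data and names).
    # Each row is a child tested at fall school entry. Because a strict
    # birthday cutoff determines enrollment, children born just before the
    # cutoff have completed a year of pre-k, while children born just after
    # it are only now entering; otherwise the two groups are similar.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("fall_entry_tests.csv")   # hypothetical file
    # 'score': test score at school entry
    # 'days':  birthdate relative to the cutoff, in days
    #          (negative = old enough to have completed pre-k)
    df["completed_prek"] = (df["days"] < 0).astype(int)

    # Local linear regression near the cutoff, letting the age trend differ
    # on each side; the coefficient on completed_prek estimates the effect
    # of a year of pre-k on achievement at kindergarten entrance.
    bandwidth = 180  # days; an assumed illustrative bandwidth
    near = df[df["days"].abs() <= bandwidth]
    fit = smf.ols("score ~ completed_prek * days", data=near).fit(cov_type="HC1")
    print(fit.params["completed_prek"], fit.bse["completed_prek"])

The key identifying assumption is that children born just on either side of the cutoff are comparable, so any jump in scores at the cutoff reflects the pre-k year itself rather than ordinary aging.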

Thus, it is technically quite feasible to implement early childhood programs and business incentive programs on a large scale while collecting ongoing data that allows these programs to be improved over time. But is it politically feasible? It requires a political culture that is willing to move aggressively forward in implementing a social program, and is also willing to use potentially critical data to improve the program rather than kill it.
