
Using State Administrative Data to Measure Program Performance

Author

Listed:
  • Peter R. Mueser

    (University of Missouri-Columbia and IZA)

  • Kenneth R. Troske

    (University of Kentucky-Lexington and IZA)

  • Alexey Gorislavsky

    (University of Missouri-Columbia)

Abstract
We use administrative data from Missouri to examine the sensitivity of earnings impact estimates for a job training program based on alternative nonexperimental methods. We consider regression adjustment, Mahalanobis distance matching, and various methods using propensity-score matching, examining both cross-sectional estimates and difference-in-difference estimates. Specification tests suggest that the difference-in-difference estimator may provide a better measure of program impact. We find that propensity-score matching is most effective, but the detailed implementation is not of critical importance. Our analyses demonstrate that existing data can be used to obtain useful estimates of program impact. Copyright by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.
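The abstract compares regression adjustment, Mahalanobis distance matching, propensity-score matching, and difference-in-difference contrasts. As a rough illustration only, the sketch below pairs nearest-neighbor propensity-score matching with a difference-in-difference contrast of earnings on simulated data; it is not the authors' implementation, and every variable name and number in it is a hypothetical assumption.

```python
# Minimal sketch of one estimator the paper compares: nearest-neighbor
# propensity-score matching combined with a difference-in-difference
# contrast of earnings. All data below are simulated, not the Missouri data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Simulated covariates, program participation, and pre-/post-program earnings.
x = rng.normal(size=(n, 3))
treat = rng.binomial(1, 1 / (1 + np.exp(-(x @ [0.5, -0.3, 0.2])))).astype(bool)
pre_earn = 20000 + 3000 * x[:, 0] + rng.normal(0, 2000, n)
post_earn = pre_earn + 1000 + 1500 * treat + rng.normal(0, 2000, n)

# 1. Estimate the propensity score P(participation | X) with a logit model.
pscore = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# 2. Match each participant to the nearest non-participant on the score
#    (single nearest neighbor, with replacement).
treated = np.where(treat)[0]
controls = np.where(~treat)[0]
matched = controls[np.abs(pscore[treated, None] - pscore[None, controls]).argmin(axis=1)]

# 3. Difference-in-difference impact: earnings growth of participants minus
#    earnings growth of their matched comparisons.
impact = (post_earn[treated] - pre_earn[treated]).mean() \
    - (post_earn[matched] - pre_earn[matched]).mean()
print(f"Matched difference-in-difference impact estimate: {impact:,.0f}")
```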

Suggested Citation

  • Peter R. Mueser & Kenneth R. Troske & Alexey Gorislavsky, 2007. "Using State Administrative Data to Measure Program Performance," The Review of Economics and Statistics, MIT Press, vol. 89(4), pages 761-783, November.
  • Handle: RePEc:tpr:restat:v:89:y:2007:i:4:p:761-783

    Download full text from publisher

    File URL: http://www.mitpressjournals.org/doi/pdf/10.1162/rest.89.4.761
    File Function: link to full text
    Download Restriction: Access to full text is restricted to subscribers.

    As access to this document is restricted, you may want to look for a different version of it below or search for one.

    References listed on IDEAS

    1. V. Joseph Hotz & Guido W. Imbens & Jacob A. Klerman, 2006. "Evaluating the Differential Effects of Alternative Welfare-to-Work Training Components: A Reanalysis of the California GAIN Program," Journal of Labor Economics, University of Chicago Press, vol. 24(3), pages 521-566, July.
    2. Ashenfelter, Orley C, 1978. "Estimating the Effect of Training Programs on Earnings," The Review of Economics and Statistics, MIT Press, vol. 60(1), pages 47-57, February.
    3. Markus Frölich, 2004. "Finite-Sample Properties of Propensity-Score Matching and Weighting Estimators," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 77-90, February.
    4. James J. Heckman & Jeffrey A. Smith, 1995. "Assessing the Case for Social Experiments," Journal of Economic Perspectives, American Economic Association, vol. 9(2), pages 85-110, Spring.
    5. Keisuke Hirano & Guido W. Imbens & Geert Ridder, 2003. "Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score," Econometrica, Econometric Society, vol. 71(4), pages 1161-1189, July.
    6. Heckman, J.J. & Hotz, V.J., 1988. "Choosing Among Alternative Nonexperimental Methods For Estimating The Impact Of Social Programs: The Case Of Manpower Training," University of Chicago - Economics Research Center 88-12, Chicago - Economics Research Center.
    7. Andrew Dyke & Carolyn J. Heinrich & Peter R. Mueser & Kenneth R. Troske & Kyung-Seong Jeon, 2006. "The Effects of Welfare-to-Work Program Activities on Labor Market Outcomes," Journal of Labor Economics, University of Chicago Press, vol. 24(3), pages 567-608, July.
    8. Peter Hall & Jeff Racine & Qi Li, 2004. "Cross-Validation and the Estimation of Conditional Probability Densities," Journal of the American Statistical Association, American Statistical Association, vol. 99, pages 1015-1026, December.
    9. Jeffrey A. Smith & Petra E. Todd, 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    10. Card, David & Sullivan, Daniel G, 1988. "Measuring the Effect of Subsidized Training Programs on Movements in and out of Employment," Econometrica, Econometric Society, vol. 56(3), pages 497-530, May.
    11. James Heckman & Neil Hohmann & Jeffrey Smith & Michael Khoo, 2000. "Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 115(2), pages 651-694.
    12. Charles F. Manski, 1996. "Learning about Treatment Effects from Experiments with Random Assignment of Treatments," Journal of Human Resources, University of Wisconsin Press, vol. 31(4), pages 709-733.
    13. James Heckman & Hidehiko Ichimura & Jeffrey Smith & Petra Todd, 1998. "Characterizing Selection Bias Using Experimental Data," Econometrica, Econometric Society, vol. 66(5), pages 1017-1098, September.
    14. Lorraine Dearden & Javier Ferri & Costas Meghir, 2002. "The Effect Of School Quality On Educational Attainment And Wages," The Review of Economics and Statistics, MIT Press, vol. 84(1), pages 1-20, February.
    15. Ashenfelter, Orley & Card, David, 1985. "Using the Longitudinal Structure of Earnings to Estimate the Effect of Training Programs," The Review of Economics and Statistics, MIT Press, vol. 67(4), pages 648-660, November.
    16. Ben B. Hansen, 2004. "Full Matching in an Observational Study of Coaching for the SAT," Journal of the American Statistical Association, American Statistical Association, vol. 99, pages 609-618, January.
    17. Zhong Zhao, 2004. "Using Matching to Estimate Treatment Effects: Data Requirements, Matching Metrics, and Monte Carlo Evidence," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 91-107, February.
    18. Heckman, James J & Smith, Jeffrey A, 1999. "The Pre-programme Earnings Dip and the Determinants of Participation in a Social Programme: Implications for Simple Programme Evaluation Strategies," Economic Journal, Royal Economic Society, vol. 109(457), pages 313-348, July.
    19. Alberto Abadie & Guido W. Imbens, 2008. "On the Failure of the Bootstrap for Matching Estimators," Econometrica, Econometric Society, vol. 76(6), pages 1537-1557, November.
    20. Thomas Fraker & Rebecca Maynard, 1987. "The Adequacy of Comparison Group Designs for Evaluations of Employment-Related Programs," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 194-227.
    21. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    22. James J. Heckman & Hidehiko Ichimura & Petra E. Todd, 1997. "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 605-654.
    23. Black, Dan A. & Smith, Jeffrey A., 2004. "How robust is the evidence on the effects of college quality? Evidence from matching," Journal of Econometrics, Elsevier, vol. 121(1-2), pages 99-124.
    24. Burt S. Barnow, 1987. "The Impact of CETA Programs on Earnings: A Review of the Literature," Journal of Human Resources, University of Wisconsin Press, vol. 22(2), pages 157-193.
    25. Rajeev H. Dehejia & Sadek Wahba, 1998. "Causal Effects in Non-Experimental Studies: Re-Evaluating the Evaluation of Training Programs," NBER Working Papers 6586, National Bureau of Economic Research, Inc.
    26. Rajeev H. Dehejia & Sadek Wahba, 2002. "Propensity Score-Matching Methods For Nonexperimental Causal Studies," The Review of Economics and Statistics, MIT Press, vol. 84(1), pages 151-161, February.
    27. Bassi, Laurie J, 1984. "Estimating the Effect of Training Programs with Non-Random Selection," The Review of Economics and Statistics, MIT Press, vol. 66(1), pages 36-43, February.
    28. Heckman, James J. & LaLonde, Robert J. & Smith, Jeffrey A., 1999. "The economics and econometrics of active labor market programs," Handbook of Labor Economics, in: O. Ashenfelter & D. Card (ed.), Handbook of Labor Economics, edition 1, volume 3, chapter 31, pages 1865-2097, Elsevier.
    29. Theresa J. Devine & James J. Heckman, 1996. "The Economics of Eligibility Rules for a Social Program: A Study of the Job Training Partnership Act (JTPA)--A Summary Report," Canadian Journal of Economics, Canadian Economics Association, vol. 29(s1), pages 99-104, April.
    30. Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
    31. Joshua D. Angrist & Jinyong Hahn, 1999. "When to Control for Covariates? Panel-Asymptotic Results for Estimates of Treatment Effects," NBER Technical Working Papers 0241, National Bureau of Economic Research, Inc.
    32. Friedlander, Daniel & Robins, Philip K, 1995. "Evaluating Program Evaluations: New Evidence on Commonly Used Nonexperimental Methods," American Economic Review, American Economic Association, vol. 85(4), pages 923-937, September.
    33. James J. Heckman & Hidehiko Ichimura & Petra Todd, 1998. "Matching As An Econometric Evaluation Estimator," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 65(2), pages 261-294.
    34. Alberto Abadie & Guido W. Imbens, 2006. "Large Sample Properties of Matching Estimators for Average Treatment Effects," Econometrica, Econometric Society, vol. 74(1), pages 235-267, January.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    2. Jeffrey A. Smith & Petra E. Todd, 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    3. Carlos A. Flores & Oscar A. Mitnik, 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," Working Papers 2010-10, University of Miami, Department of Economics.
    4. Lechner, Michael & Wunsch, Conny, 2013. "Sensitivity of matching-based program evaluations to the availability of control variables," Labour Economics, Elsevier, vol. 21(C), pages 111-121.
    5. Andrew M. Jones & Nigel Rice, 2009. "Econometric Evaluation of Health Policies," Health, Econometrics and Data Group (HEDG) Working Papers 09/09, HEDG, c/o Department of Economics, University of York.
    6. Steven Lehrer & Gregory Kordas, 2013. "Matching using semiparametric propensity scores," Empirical Economics, Springer, vol. 44(1), pages 13-45, February.
    7. Jose C. Galdo & Jeffrey Smith & Dan Black, 2008. "Bandwidth Selection and the Estimation of Treatment Effects with Unbalanced Data," Annals of Economics and Statistics, GENES, issue 91-92, pages 189-216.
    8. Huber, Martin & Lechner, Michael & Wunsch, Conny, 2013. "The performance of estimators based on the propensity score," Journal of Econometrics, Elsevier, vol. 175(1), pages 1-21.
    9. Richard Blundell & Monica Costa Dias, 2009. "Alternative Approaches to Evaluation in Empirical Microeconomics," Journal of Human Resources, University of Wisconsin Press, vol. 44(3).
    10. Gueorgui Kambourov & Iourii Manovskii & Miana Plesca, 2020. "Occupational mobility and the returns to training," Canadian Journal of Economics/Revue canadienne d'économique, John Wiley & Sons, vol. 53(1), pages 174-211, February.
    11. V. Joseph Hotz & Guido W. Imbens & Jacob A. Klerman, 2006. "Evaluating the Differential Effects of Alternative Welfare-to-Work Training Components: A Reanalysis of the California GAIN Program," Journal of Labor Economics, University of Chicago Press, vol. 24(3), pages 521-566, July.
    12. Kluve, Jochen & Lehmann, Hartmut & Schmidt, Christoph M., 2008. "Disentangling Treatment Effects of Active Labor Market Policies: The Role of Labor Force Status Sequences," Labour Economics, Elsevier, vol. 15(6), pages 1270-1295, December.
    13. Marco Caliendo & Sabine Kopeinig, 2008. "Some Practical Guidance For The Implementation Of Propensity Score Matching," Journal of Economic Surveys, Wiley Blackwell, vol. 22(1), pages 31-72, February.
    14. Gustavo Canavire-Bacarreza & Luis Castro Peñarrieta & Darwin Ugarte Ontiveros, 2021. "Outliers in Semi-Parametric Estimation of Treatment Effects," Econometrics, MDPI, vol. 9(2), pages 1-32, April.
    15. Heckman, James J. & LaLonde, Robert J. & Smith, Jeffrey A., 1999. "The economics and econometrics of active labor market programs," Handbook of Labor Economics, in: O. Ashenfelter & D. Card (ed.), Handbook of Labor Economics, edition 1, volume 3, chapter 31, pages 1865-2097, Elsevier.
    16. Huber, Martin & Lechner, Michael & Wunsch, Conny, 2010. "How to Control for Many Covariates? Reliable Estimators Based on the Propensity Score," IZA Discussion Papers 5268, Institute of Labor Economics (IZA).
    17. Dettmann, E. & Becker, C. & Schmeißer, C., 2011. "Distance functions for matching in small samples," Computational Statistics & Data Analysis, Elsevier, vol. 55(5), pages 1942-1960, May.
    18. Chabé-Ferret, Sylvain, 2015. "Analysis of the bias of Matching and Difference-in-Difference under alternative earnings and selection processes," Journal of Econometrics, Elsevier, vol. 185(1), pages 110-123.
    19. Jochen Kluve & Boris Augurzky, 2007. "Assessing the performance of matching algorithms when selection into treatment is strong," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 22(3), pages 533-557.
    20. Richard Blundell & Lorraine Dearden & Barbara Sianesi, 2003. "Evaluating the impact of education on earnings in the UK: Models, methods and results from the NCDS," IFS Working Papers W03/20, Institute for Fiscal Studies.

    More about this item

    JEL classification:

    • C14 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Semiparametric and Nonparametric Methods: General
    • C21 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Cross-Sectional Models; Spatial Models; Treatment Effect Models
    • C52 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Model Evaluation, Validation, and Selection


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:tpr:restat:v:89:y:2007:i:4:p:761-783. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Kelly McDougall (email available below). General contact details of provider: https://direct.mit.edu/journals.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.