Sunday, September 27, 2009

Program Evaluation: Karol's Thoughts

Program Evaluation Review

Karol Kryzanowski-Narfason

The Program and Approach

For the following review I have chosen to look at a study of five state pre-kindergarten programs. The evaluation was an effectiveness-based evaluation, in which the researchers asked the primary question: Do pre-K programs improve children's pre-literacy and math learning by kindergarten entry?

The study used a quasi-experimental approach called Regression Discontinuity Design (RDD), a model for evaluating the effects of interventions. In this case, the researchers evaluated the social, emotional, and cognitive development of children who had attended a pre-K program versus those who had not.

The study was conducted because schools and school boards have claimed that pre-K programs foster school readiness. Because of this publicity, both enrollment and funding have increased very rapidly.

The process used:

Pre-K classrooms in five states volunteered to be studied. Data were collected by random sampling of students within the volunteered classrooms, and these children were given standardized tests by a trained tester.

The strengths: Identifying the gaps of previous evaluations, and being mindful not to repeat them, increases the value and usefulness of the current evaluation. The previous evaluations discussed drew from only two states; this evaluation includes five, allowing data to be collected from a larger sample.

This evaluation looked at developmental outcomes such as social, emotional, and cognitive development. In my opinion, this strength demonstrated the relationship between developmental readiness and learning readiness, giving these pre-K children a foot in the door to kindergarten learning.

The approach used in this evaluation is relatively unbiased. One reason is the strict birthday cut-off rule: it makes it possible to compare children who are within days of each other in age, where one has gone through the pre-K program and the other has not (see the sketch below).
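Since the design leans on that birthday cut-off, here is a minimal sketch of how a regression discontinuity estimate works. Everything in it is hypothetical: the data are simulated, the +5-point program effect is an assumption, and the variable names are mine, not the study's. The study's own RDD estimation was certainly more involved, but the core idea is the jump in scores at the cut-off.

```python
# Minimal regression discontinuity sketch (hypothetical data, not the study's figures).
# The "running variable" is a child's age relative to the birthday cut-off:
# children just old enough last year completed pre-K (treated); children just
# too young did not (control).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Days from the birthday cut-off (negative = missed the cut-off, no pre-K yet)
days_from_cutoff = rng.uniform(-180, 180, n)
treated = (days_from_cutoff >= 0).astype(int)   # completed a pre-K year

# Simulated test score: older children score slightly higher anyway (slope),
# plus an assumed +5-point jump at the cut-off from attending pre-K.
score = 100 + 0.02 * days_from_cutoff + 5 * treated + rng.normal(0, 10, n)

# Local linear regression: treatment dummy, running variable, and their
# interaction (allows different slopes on each side of the cut-off).
X = sm.add_constant(np.column_stack([
    treated,
    days_from_cutoff,
    treated * days_from_cutoff,
]))
fit = sm.OLS(score, X).fit()

# The coefficient on the treatment dummy estimates the jump at the cut-off,
# i.e. the program effect for children born near the birthday line.
print(f"Estimated effect at the cut-off: {fit.params[1]:.2f} points")
```

The appeal of this design is exactly what the evaluation exploits: children a few days apart in age should be nearly identical on average, so a discontinuity in outcomes right at the cut-off is hard to attribute to anything but the program.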

Could be viewed as both a weakness and a strength

The programs studied varied in many ways: duration, funding, and teacher education, to name a few. I think the strong variation in the programs themselves could be argued as both a strength and a weakness. I have two schools of thought: 1. certain variations in the programs do not affect results, and 2. the evaluation criteria could have been more specific.

The weaknesses:

The five states involved in the evaluation volunteered, and therefore this evaluation is not based on a random sample.

Money, funding, high stakes, investment, budgeting: however you would like to say it, governments and policy makers want and need to know that their money is being put to good use. I understand accountability, I can appreciate transparency of spending, and I too would like to see results as dollars are increased. However, I am always cautious of the weakness of any evaluation that could have the potential to increase funding, or that may have been state funded in the first place so as to have a political impact.

3 comments:

  1. I think you hit the nail on the head, as they say. One of the key assumptions in such program evaluation designs is that participants are randomly assigned or selected to be studied. A study for which certain groups volunteer violates this basic assumption and renders the study invalid. For readers or users of this evaluation who are not familiar with research methods, this evaluation would do more harm than good. The concept is quite simple, i.e. we must ask ourselves why certain people or groups volunteer for something. Of course, just because the program evaluation method is invalid does not mean that the program is bad or ineffective. It means that the results appear to demonstrate program effectiveness, when in fact the results are simply not known.

    Steven Cofrancesco
    Phoenix Government Examiner
    The Examiner.com
    src4768@gmail.com

  2. Well done, Karol.

    This is an excellent subject choice. You break down the survey well. You have highlighted the appropriate issues both positive and negative. You may comment on what you would take from this evaluation to use in your own work. It is wonderful to see Steven's comments as well.

    Jay

  3. Thanks, Steven and Jay, for your comments. Jay, I would use this evaluation as supporting evidence in future evaluations of a similar nature, to do a compare and contrast. I think this would be interesting. Also, I tried to evoke further thought in those reading my evaluation. What do you think, did this happen?
