Evaluation Terms
When evaluating a program or discussing research, you may come across the following definitions. Review this list to become better acquainted with evaluation terms.
Evaluation, in its most basic definition, is the determination of the worth or condition of a program, policy, or intervention, usually through careful appraisal and study.
Evaluation Types
Process (sometimes referred to as “formative”) evaluations are conducted to increase the efficiency of service delivery, to modify program strategies or underlying assumptions, or to change the method of service delivery.
Outcome (also known as “summative”) evaluations are generally conducted to assess a program's impact on a specific population and to make a judgment about the program's worth.
Methods
Qualitative methods include the use of open-ended interviews, direct observation, and/or written documents to understand how a program has affected its participants (Patton, 1990).
Quantitative methods require the use of standardized measures to classify responses of varied participants into pre-determined categories for generalization about the program's impact on its participants.
Mixed-method approaches to evaluation combine elements of both qualitative and quantitative data collection procedures in hopes of generating “deeper and broader insights” (Greene and Caracelli, 1997) into the effect of a program on its participants.
Research Design
Comparison Groups: To accurately assess the effect of a parent education program, the best method is to compare outcomes between two randomly assigned groups: one that received the program and one that did not. This may not be feasible for many parent educators, but it is important to understand why the equivalence of comparison groups matters for lending credibility to your results.
Randomization: Assigning members to the two groups at random makes the groups equivalent, on average, before your program or intervention begins. You can therefore infer that any difference in outcomes is due to your program rather than to pre-existing differences between the two groups.
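For readers who keep participant rosters electronically, the sketch below illustrates the idea of random assignment. It is an illustration only: the roster of eight hypothetical parents, the fixed random seed, and the even group split are invented for the example, not a prescribed procedure.

```python
# Illustrative sketch only: randomly assigning a hypothetical roster of
# enrolled parents to a program (treatment) group and a comparison
# (control) group. Names and group sizes are invented for illustration.
import random

participants = ["Parent A", "Parent B", "Parent C", "Parent D",
                "Parent E", "Parent F", "Parent G", "Parent H"]

random.seed(42)                 # fixed seed so the example is reproducible
random.shuffle(participants)    # put the roster in random order

half = len(participants) // 2
treatment_group = participants[:half]   # will receive the parent education program
control_group = participants[half:]     # will not receive the program

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```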
Pre/post tests: One commonly used method of assessing program impact is to administer pre- and post-test surveys to program participants. While this method may illustrate some important impacts of the program on its participants, it also has limitations. If the evaluation does not use a comparison group, the outcomes observed cannot be definitively attributed to the curriculum itself: because the only group used to determine the effect is composed of those who participated, any apparent impact may instead reflect characteristics of the participants themselves. Specifically, people who participate in a program voluntarily have already shown a willingness to change their behavior, so any outcomes following participation could be the result of participants' pre-existing attitudes rather than the intervention.
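As a rough illustration of how pre/post data are often summarized, the sketch below computes each participant's change score and a paired t-test. The scores are invented, and scipy is assumed to be available; note that this kind of summary does not address the self-selection problem described above.

```python
# Illustrative sketch only: summarizing pre/post survey scores for a
# single participant group. All scores are invented; a paired t-test
# like this cannot rule out the selection effects noted above.
from scipy import stats

pre_scores  = [3.1, 2.8, 3.5, 2.9, 3.0, 3.4, 2.7, 3.2]   # hypothetical pre-test scores
post_scores = [3.6, 3.1, 3.9, 3.0, 3.5, 3.8, 3.1, 3.4]   # hypothetical post-test scores

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)  # paired t-test

print(f"Mean change: {mean_change:.2f}")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```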
Experimental Designs
One central component of research design is whether the evaluation used an experimental, quasi-experimental, or non-experimental design (i.e., whether changes in outcomes were measured in a comparison group as well as a treatment group).
There are three primary types of research designs:
- Randomized or true experimental designs are the strongest designs for establishing a cause-and-effect relationship, because random assignment is used and the groups involved are therefore considered equivalent. An experimental design randomly assigns participants to a treatment group (those who participate in the education program) and a control group (those who do not). Random assignment allows researchers to determine whether an observed outcome is attributable to the parent education program (the “treatment”) or to some other factor, and this design is considered the strongest method for making such an assessment (a brief sketch of such a comparison follows this list).
- Quasi-experimental designs also involve the comparison of two groups; however, members of the treatment and control groups are not randomly assigned, so the comparability of the two groups is less certain. The ability of these designs to establish a cause-and-effect relationship depends on the degree to which the two groups in the study are equivalent. As a result, it is more difficult to conclude that an outcome is the result of the treatment itself rather than of some other difference between the groups.
- Non-experimental designs use neither a control group nor random assignment. These are usually descriptive studies conducted with a simple survey instrument administered only once. While they are useful in their own right, they are weak at establishing cause-and-effect relationships.
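Because random assignment makes the two groups equivalent on average, the difference in outcomes between them estimates the program's effect. The sketch below, using invented scores and assuming scipy is available, shows one common way such a comparison is made (an independent-samples t-test); it is an illustration under those assumptions, not a prescribed analysis.

```python
# Illustrative sketch only: comparing post-program outcomes between a
# randomly assigned treatment group and control group, as in a true
# experimental design. All scores are invented for illustration.
from scipy import stats

treatment_outcomes = [3.8, 3.5, 4.0, 3.6, 3.9, 3.7]   # hypothetical scores, program group
control_outcomes   = [3.2, 3.4, 3.1, 3.5, 3.0, 3.3]   # hypothetical scores, no-program group

t_stat, p_value = stats.ttest_ind(treatment_outcomes, control_outcomes)

print(f"Treatment mean: {sum(treatment_outcomes) / len(treatment_outcomes):.2f}")
print(f"Control mean:   {sum(control_outcomes) / len(control_outcomes):.2f}")
print(f"Independent-samples t = {t_stat:.2f}, p = {p_value:.3f}")
```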
This work was supported by a joint research and extension program funded by Cornell University Agricultural Experiment Station (Hatch funds) and Cornell Cooperative Extension (Smith Lever funds) received from Cooperative State Research, Education, and Extension Service, U.S. Department of Agriculture. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture.