Laura Sanagorski, University of Florida

Dr. Laura Sanagorski is an Assistant Professor of program evaluation and social marketing in the Agricultural Education and Communication Department and the Center for Landscape Conservation and Ecology at the University of Florida. She focuses on behavior change, developing evaluation strategies and measuring impact with Extension faculty and other professionals who create natural resources and sustainable landscaping programs. Dr. Sanagorski evaluates diverse audiences and programs and researches the link between individual characteristics and the adoption of sustainable practices. She previously served as an environmental horticulture Extension agent.

Contact Information

lsanagorski@ufl.edu 

 

Cheryl L. Peters, Michigan State University Extension

Dr. Cheryl Peters is an Evaluation Specialist with Michigan State University Extension (MSUE). Her efforts are part of the Organizational Development team under the Director’s office, and she works with faculty and staff across all programming areas (agriculture, community development, youth development, health and nutrition). She was formerly a County Extension Director and Educator with MSUE. Before joining Michigan State, Cheryl was on the faculty at Oregon State University as an Extension Evaluation Specialist on grant-funded programs. Current scholarship areas in which Cheryl serves as program evaluator include climate change and sustainable agriculture, new farmer leadership, youth place-based learning, and geriatric education with healthcare professionals.

Michael Lambur, Virginia Cooperative Extension, Virginia Tech

I currently provide leadership for program development (situation analysis, program design, evaluation) as an Associate Director for Virginia Cooperative Extension.  I have worked as an evaluator in Cooperative Extension since 1983 with Michigan Cooperative Extension, Virginia Cooperative Extension, and the eXtension Initiative.  I have extensive experience in all aspects of evaluation of non-formal Extension programs at the local, state, and national levels. 

Contact Information

Email: lamburmt@vt.edu
Office phone: 540/231-1634
 

Interpreting & Reporting Program Evaluation Data

After data are analyzed as planned, it is time to interpret the findings and prepare reports for sharing the program evaluation information. Many options are available for how to display results (e.g., numbers or percentages) and which formats are most effective for different types of program evaluation methods (e.g., focus groups, survey results). Data visualization is a current strategy in program evaluation that uses quality graphics to display results in formats friendly to a variety of media and Internet outlets. One purpose of creating effective reports and using quality graphics is to improve the use of program evaluation results. Strategies for effective report writing include describing the demographic characteristics of the target audiences reached by Extension programs and how outcomes may have varied across groups. Interpreting and reporting program evaluations involves finding effective ways to display program impacts for intended end users and interested stakeholders.
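As an illustration of what a simple data visualization for a report might look like, the sketch below charts hypothetical before-and-after adoption percentages with Python's matplotlib library. The practice names and values are made up for the example; any charting tool your office already uses would work just as well.

```python
# Minimal sketch: a bar chart comparing hypothetical pre- and post-program
# percentages for a written report or web page. Labels and values are
# illustrative only.
import matplotlib.pyplot as plt

practices = ["Soil testing", "Mulching", "Drip irrigation"]
pre_pct = [22, 35, 18]    # percent reporting the practice before the program
post_pct = [48, 61, 40]   # percent reporting the practice after the program

x = range(len(practices))
width = 0.35

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar([i - width / 2 for i in x], pre_pct, width, label="Before program")
ax.bar([i + width / 2 for i in x], post_pct, width, label="After program")

ax.set_xticks(list(x))
ax.set_xticklabels(practices)
ax.set_ylabel("Percent of participants")
ax.set_title("Adoption of sustainable landscaping practices")
ax.legend()

fig.tight_layout()
fig.savefig("practice_adoption.png", dpi=150)  # a format friendly to print and web
```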

Top Frequently Asked Questions for Interpreting & Reporting

What if a program has been terminated early?  How does that affect the evaluation?

How do I analyze pre and post data?

How can I best document program impacts?

Is there one best way to display the results of an evaluation?

Can I compare my evaluation data from last year with my data from this year?

How can I use data to describe participants in my program?

How can I determine what my stakeholders want to know about my program?

How do I report program evaluation results from a focus group?

How do you analyze data from a focus group?

What are the standards of an effective program evaluation?

Should I report the number, percentage, or both number and percentage of participants who achieved a certain result?

Should I include quotes from participants in my impact statement?

How should I report program evaluation results from a survey?

How do I analyze data from a program evaluation survey?

What are the types of evaluation?

Most types of evaluation fall under two main categories: formative (process or quality check) and summative (impact or outcome assessment). Evaluations can also explore efficiency or effectiveness through cost-benefit or return-on-investment analysis. A brief description of evaluation types is posted at http://www.health.state.mn.us/divs/hpcd/chp/hpkit/text/eval_types.htm and a longer discussion appears at http://managementhelp.org/evaluatn/fnl_eval.htm

 

What is outcome-based program evaluation?

An outcome-based program evaluation focuses on determining whether the stated, intended program outcomes are actually occurring; in other words, is the program accomplishing what it set out to accomplish? Outcomes are generally separated into categories such as short term, medium term, and long term. Short-term outcomes often refer to learning, medium-term outcomes to behavior change, and long-term outcomes to changes in conditions. Although initial outcome-based evaluations generally focus on short-term outcomes, it is important to plan for measuring medium- and long-term outcomes and to determine early in the program cycle whether baseline or pre-data should be collected.

 

How do I analyze pre and post data?

Pre- and post-data are collected and analyzed to examine the effect of interventions or programs on processes (e.g., IPM), sites (e.g., fields of crops), or subjects (e.g., freshwater ducks or program participants). Pre- and post-data can be relatively continuous (height of plants to the millimeter), interval (number of trees dying; frequency of a behavior rated on a 5-point Likert scale), or categorical (favorite choice for a career). The analyses you perform will depend upon the type of data you collect and the questions you want the analysis to answer. For this reason, data collection tools and processes are best set up with the analysis already worked out. Conducting a "test run" of data entry and analysis reduces the likelihood of unwanted surprises or wasted data once data collection and analysis begin.

You will want to review the kinds of analyses used for different data types before determining your best options. Because most Extension work involves humans and other living organisms, a great resource is Zar’s Biostatistical Analysis (1).

A common analysis of pre- and post-data, used when you want to know whether participants have changed behavior as a result of a program intervention, is to survey behaviors that participants identify or rate before the program begins and then compare those results with the same survey administered to the same group of participants at or after the end of the program.

If you have a large, representative sample, you may want to run a paired-samples Student's t-test (2). To do this, enter data as "matched pairs" of pre- and post-scores for each individual. If you cannot match the tests, run an independent-samples t-test instead. The database is set up differently for these two types of tests, so refer to the user manual for your statistical package before entering data. The results of a t-test will tell you whether the difference between the pre- and post-test scores is significant. Educators typically seek results with significance levels less than .05.
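As a minimal sketch of how these tests might be run outside of a packaged statistics program, the example below uses Python's SciPy library on hypothetical matched pre- and post-scores; your own statistical package will produce the same kinds of t and p values.

```python
# Minimal sketch: paired and independent t-tests on hypothetical pre/post
# survey scores, assuming SciPy is installed. The scores are illustrative
# only; real data would come from your own survey.
from scipy import stats

# Matched pairs: each position is the same participant before and after.
pre_scores = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
post_scores = [3, 4, 3, 5, 4, 3, 4, 4, 3, 4]

# Paired-samples t-test (use when pre and post responses can be matched).
t_paired, p_paired = stats.ttest_rel(pre_scores, post_scores)
print(f"Paired t = {t_paired:.2f}, p = {p_paired:.3f}")

# Independent-samples t-test (use when responses cannot be matched).
t_ind, p_ind = stats.ttest_ind(pre_scores, post_scores)
print(f"Independent t = {t_ind:.2f}, p = {p_ind:.3f}")

# Educators typically look for p < .05 before calling a change significant.
```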

If your group of participants is small and/or does not necessarily represent the population you are targeting with your intervention, you may simply want to examine and compare the frequencies and mean scores of the pre- and post-data without using statistical tests. Changes in mean scores will tell you whether participants' knowledge has increased or decreased for the group as a whole, though statistical significance cannot be determined without statistical comparison methods.
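A minimal sketch of this descriptive approach, again using hypothetical ratings and only Python's standard library:

```python
# Minimal sketch: comparing group means and frequencies without a
# statistical test, for small or non-representative groups. The 5-point
# ratings below are hypothetical.
from collections import Counter
from statistics import mean

pre_ratings = [2, 3, 2, 4, 3, 2]
post_ratings = [3, 4, 3, 5, 4, 3]

print(f"Pre mean:  {mean(pre_ratings):.2f}")
print(f"Post mean: {mean(post_ratings):.2f}")
print(f"Change in mean: {mean(post_ratings) - mean(pre_ratings):+.2f}")

# Frequency of each rating before and after the program.
print("Pre frequencies: ", dict(sorted(Counter(pre_ratings).items())))
print("Post frequencies:", dict(sorted(Counter(post_ratings).items())))
```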

(1) Zar, J. H. (2009). Biostatistical Analysis, 5th edition. Prentice-Hall, NJ.

(2) Fisher Box, J. (1987). "Guinness, Gosset, Fisher, and Small Samples." Statistical Science, 2(1), 45–52. doi:10.1214/ss/1177013437

What kind of statistical software do I need to analyze quantitative data?

You can use any statistical software to analyze quantitative data. Researchers have their favorites, but any will do. Many people use SPSS or SAS; however, these can be expensive, and you may be able to find cheaper or even free software. If you work or study at a university, you may be able to obtain a discounted license for SPSS or SAS. Some people use Excel for statistical tests. Excel is less efficient for this purpose, but it can be used; you will need to enable the Excel Analysis ToolPak (available under "Options") if you choose this route. Check www.youtube.com for demonstrations of using statistical software to enter and analyze data.

Where can I find a questionnaire to evaluate an activity/program just like mine?

 

There are several possible answers, depending on your situation:

  1. IF you are conducting a standardized or evidence-based program with a valid and reliable evaluation tool, USE IT! (In youth risk/enrichment, see SAMHSA Model Programs, etc.)
  2. IF you are leading a program with educational goals that match a well-tested (valid and reliable) tool used with a similar audience, USE IT! (In youth informal science, see PEAR, etc.)
  3. IF your program is not exactly like any you have seen, ADAPT an existing instrument to fit your goals and audience (don't use a tool that doesn't fit your needs!)
  4. IF none of these options work, write your own or seek out help from someone who can help you write it.

You can also find ideas by reading journal articles dealing with your subject matter or evaluation topic, since authors will mention the surveys they used. You can also see sample surveys for community, youth, and family topics at www.CYFERnet.org.

What are common problems with survey questions?

When surveying, you want to maximize the percentage of your target audience that responds, as well as the accuracy and quality of their responses to the questions being asked.

From the respondent's perspective, survey questions can be confusing if the words or phrasing used make the question unclear. Respondents also find it frustrating when the answer options provided do not include the answer that best fits their situation. Survey questions can be too difficult or boring for respondents, increasing what is termed "respondent fatigue." If this occurs, respondents may give up or put down the survey and never come back to it, which reduces your sample size. Inaccurate responses, or reluctance to answer a question at all, may also occur because of perceived bias or loss of privacy about a personal issue. All of these problems reduce the potential for conducting an analysis with high-quality data.

The list below describes how to AVOID common problems when designing survey questions.

(Adapted from Ellen Taylor-Powell, Questionnaire Design, UWEX, 1998)

  1. Use simple wording
  2. Avoid jargon, abbreviations, and double negatives (“reduce moving less”)
  3. Be specific; ask only one question per written item
  4. Relax grammar to make questions easier to read (“who” vs “for whom”)
  5. Be clear with wording, phrasing, and options / directions provided
  6. Include all necessary information, but not extraneous information
  7. Phrase personal or potentially incriminating questions discreetly, or do not ask them at all
  8. Avoid questions that are too demanding or time consuming
  9. Use mutually exclusive categories
  10. Keep matrices of questions and options simple
  11. When providing a list of choices, strongly consider an “Other” option, so respondents know their unique situation is not overlooked
  12. Avoid making assumptions about your audience or likely responses
  13. Avoid bias
  14. Avoid redundancy