By Lori Palen, Youth Thrive Evaluator
Program evaluation is kind of like going on a road trip.
“Are we there yet?” a stakeholder calls from the backseat.
Hopefully, the evaluator (who’s riding shotgun, because she sometimes gets carsick if she sits in the back) will be able to say, “Almost! Five more miles to Mount Rushmore!”
But sometimes, the group realizes shortly after pulling out of the driveway that South Dakota is too far for a long weekend, and that it’s better to be heading south at this time of year anyway, and then there was a detour somewhere outside of Fayetteville, and…
It probably doesn’t make sense for the evaluator to quote the distance to Mount Rushmore anymore.
Youth Thrive has undergone a number of exciting changes since I was brought on as their evaluator in 2013. These changes have led to conversations about whether and how our evaluation might need to change. Today I’m sharing a few of the factors that went into our decisions, with the hope that this can help others who are experiencing similar organizational or programmatic evolution.
Let’s say that, when your evaluation started, your organization was serving elementary school students. At some point, you realized that middle school students needed your services more, so you switched your target population. If there’s no reason to believe that your efforts will have an impact on elementary school students, then it likely would be a waste of resources to continue collecting data from them.
The same rationale applies to evolution in target outcomes. If your program’s focus changed from building typing skills to building Web programming skills, then it’s no longer appropriate to get out the stopwatch as kids hunt-and-peck their way through a typing drill. Evaluation design should be directly linked to the population and outcomes you’re intending to change, even if that means making mid-course corrections.
Evaluations often measure the same things over time to determine whether and how they change. Before making mid-evaluation changes to data collection instruments, it’s worth considering how those changes will affect your ability to make statements about change, and whether that would be a loss to your program, community, or field.
For example, maybe your health program switched from a focus on physical activity to a focus on nutrition, such that survey questions about daily exercise habits no longer directly correspond with program goals. However, exercise questions may still serve important purposes, such as giving background and context for evaluation results or serving as a source of epidemiological data for the community. Or maybe your evaluator developed an innovative way to measure exercise behavior, and collecting more data will allow them to fully test the measure and then share it for the benefit of other health programs. Continuity is about thinking beyond program effectiveness, and about capitalizing on the opportunities that come with having data from multiple time points.
Every organization needs resources. Organizations typically make promises to resource givers (e.g., grant-making organizations, donors, shareholders) about what will be done with those resources. If you want to keep receiving resources, you’ll want to demonstrate (through evaluation) that you’re making good on your promises. So, if your grant application said that you would increase youths’ typing speeds by 20 words per minute, it may be in your best financial interest to keep up with those typing tests.
Many organizations and programs are only funded for a finite period of time, after which they need to seek out new or renewed funding. If the focus of your organization or program has evolved, your future funding applications may look very different from applications you’ve submitted in the past. Think strategically about the types of data a future funder will want to see described in a funding application, and consider whether you could start collecting those data now. Also, think about potential goals for future funding cycles, and consider whether you can collect data now that will serve as a baseline for evaluating future progress.
“Are we there yet?” a stakeholder calls from beneath their Mickey Mouse ears.
“Almost! This map says one hundred feet to Space Mountain!”
It’s Your Turn!
Have questions? E-mail me at email@example.com.