The program logic model is the standard evaluation framework for most program developers. But do you really use it to its full potential? How does it drive the program’s evaluation? When was the last time you reviewed or updated it? How have the results changed the way the program is delivered? Is this experience documented? How is your program contributing to reducing large-scale social issues like poverty, homelessness, isolation of seniors, and caregiver burden?
These are just a few of the questions I have asked myself over the years as a program director. In my experience, evaluation is one of those activities that can take a back burner to the day-to-day operational issues that tend to consume our time. Yet funders are increasingly expecting you to show the results of your labour, not just for the clients you served, but also for the organization and community. This requires a more complex understanding of how all the layers are connected.
On the other end, as a grant administrator, I have also seen program evaluations that were nothing more than a counting exercise of money spent and clients served, with no understanding of the implications or of the impact on the stated goals. As an outsider, while the efforts were to be lauded, I was not always impressed with the numbers or the outcomes.
The following are a few of the things I have learned along the way about embedding program evaluation into the day-to-day practices of your program planning and development activities.
Do your homework: It is not enough just to think you are doing good work. Do your homework by reviewing the literature regularly to stay on top of emerging trends and innovations in the field. Find out if other similar programs have been evaluated. Talk with those programs and learn from their experiences. It is important to gather all the evidence to inform the development and growth of your program. There are very few truly unique programs out there, so be prepared to find out what works and does not work for your clients, staff and community. In addition, do not just replicate the evaluation; add to the body of knowledge about the program’s critical success factors. Build a plan that is useful to your program, your funder and the wider social service sector. Take the time to plan your evaluation and stick with the plan throughout the process.
Engage evaluation users as part of the process: A participatory approach to evaluation calls for engaging the users of the evaluation as part of the process, whether that is clients, families, staff, Board members or community partners. They can play an important role throughout. Convening a focus group with clients and families, dedicating a staff/team meeting to evaluation, or talking with the Board about their perceptions of the program are all ways to include them. Including clients as co-producers of your program can increase engagement, satisfaction and efficiency.
Engage research and evaluation experts sooner rather than later: While we should be monitoring our program from an operational perspective, it is important to seek out experts to help you stay on top of your results and your action plans for improvement and growth. To set yourself up for success, consider working with an external consultant to support establishing your plan, data collection and analysis. One of my first community-based projects had access to a Research and Evaluation Committee that was a standing committee of the Board. This resource was invaluable in establishing the evaluation plan, vetting the evaluation consultant and interpreting the results. The committee included evaluation experts as well as clients and was instrumental in building credibility with our funders and partners.
Be prepared to focus on evaluating the context, process AND outcomes: While this may seem complicated, done right it can show why your program has been successful (or not), using a variety of data collection methods at multiple levels. The ultimate goal of any evaluation should be to further develop, improve and grow the program. Context evaluation takes into consideration both the internal and external factors that contribute to the success of the program. Because programs do not happen in a vacuum, a context evaluation provides the groundwork for implementation and outcome evaluation. It explains why a program has been implemented the way it has and why certain outcomes have or have not been achieved. Staff changes, funding cuts or the introduction of a new program are all internal organizational factors that might influence the project and/or program. External factors include the social, economic and political climate, such as a change in government, an economic downturn or the closing of a significant employer in the community. Without this information, it is difficult to make informed decisions about next steps or to figure out why you are not meeting your goals.
Process or implementation evaluation focuses on examining the core activities undertaken to achieve program goals. This is often the evaluation that gets the most attention. Process evaluations attempt to identify ways to improve the effectiveness of the program; provide support for maintaining the program over the long term; provide insight into why certain goals are or are not being accomplished; and inform decisions about future plans.
Finally, outcome and impact evaluations assess the short- and long-term results of a program. They also seek to measure the changes brought about by the program at multiple levels – client, program, organization, community and system. In the end, the results of an outcome evaluation should not be used simply to prove the program works or to justify its existence. While it is important to provide evidence that it worked, outcome evaluations should be viewed as a valuable source of information that can promote the program’s development and growth. Therefore, it is important to conduct an outcome evaluation in combination with context and implementation evaluations.
Be your biggest fan and biggest critic: We can sometimes be caught with blinders on, celebrating successes without being open to the issues or problems, or worse, caught up in the negativity trap of dwelling on what is wrong. I prefer to balance my fan and critic roles to maintain both the motivation to improve and the knowledge of what to improve. This dual focus will also help ensure the sustainability of the program. Sometimes we are our own biggest critics, so temper the criticism with a moment of pride in your small successes along the way to greatness. Now we are talking motivation!
Embrace and learn from your failure: What is failure, you say? I was recently at the Leveraging your Strengths conference in Ottawa and had a refreshing conversation with a foundation program manager. She talked about investing in programs and projects that might fail, and that is OK. In today’s hyper-competitive funding environment, this was refreshing because we have been programmed to believe failure is not an option. Yet in the business community, it is understood that not all businesses succeed. The Failure in Social Enterprises report published by SEE Change Magazine further intrigued me; it highlights what failure looks like in social enterprises. Embrace failure, but be clear about what you are trying to achieve and document your journey. The opportunity to fail can be a most rewarding learning opportunity!
Seek honest feedback from your funder: This is not always an easy thing to do. You think you are doing good work, but what does the funder think? At each step of the way, be open to hearing both the positive and the negative from your funder. This information is important to help you improve and be successful throughout the funding cycle. It is also important to have this conversation when a funding application is not successful. I have been baffled by grantees resubmitting an unsuccessful proposal twice without seeking feedback on why it was not successful the first time around. Funder program officers will tell you they prefer you to invest in the relationship to ensure your program’s success and to be good stewards of their funding dollars. Do not think of funders as banks; think of them as investors in your success. In turn, recognize that the funding you have received requires good stewardship of donor, government or corporate dollars.
Be prepared to publish your results widely and support replication: One of the major issues I have with program development is that evaluations are often not published in leading academic journals so that others can learn from the experiences. Therefore, I recommend you build into your plan the opportunity to publish at least one article in a well-known journal or trade magazine. Pick one conference at which to present your experience and findings with other community partners. If possible, work with your funder to be a featured program in material distributed to other community partners. If your program is a wonderful success, be prepared for others to emulate you: foster the telling of your story and share it with others. I personally subscribe to the 1:3:25:video rule of reporting, a modification of the popular Canadian Foundation for Health Improvement Communications Note titled Reader-Friendly Writing – 1:3:25. This includes a 1-page summary for decision makers, a 3-page executive summary and the 25-page full report. I also recommend producing a short 5-minute video outlining your impact on clients, staff, organization and community. It is a great way to make your message more accessible to your multiple audiences.
So, these are the lessons I have learned on the way! What do you think? Am I missing any lessons? Email Bonnie for a free 30-minute consultation about your program evaluation!
Torjman, Are Outcomes the Best Outcome?, 1999
Connor, J., Understanding Measures: Moving from Counting to Accomplishing, 2013
Other Useful Evaluation Resources:
Ontario Centre of Excellence for Child and Youth Mental Health, Program evaluation toolkit – Tools for planning, doing and using evaluation, 2007
Government of Ontario, Program Evaluation Reference & Resource Guide for the Ontario Public Service, 2007
International Development Research Centre, Outcome Mapping: Building learning and reflection into development programs, 2001
W.K. Kellogg Foundation, Evaluation Handbook, 1998
W.K. Kellogg Foundation, Logic Model Development Guide, 2004
The J.W. McConnell Family Foundation, A Developmental Evaluation Primer, 2006
The J.W. McConnell Family Foundation, DE 201: A Practitioner’s Guide to Developmental Evaluation, 2010
Government of Canada, Centre of Excellence for Evaluation (CEE)