
Developing Evaluation Tools for Impact

September 20, 2016

Written By: Delia


This past summer, Artistri Sud undertook an audit and redesign of its evaluation process and tools. Why? We want to be sure we’re achieving our objectives. Our flagship Social Entrepreneurship program is carefully monitored and adapted, sometimes even during implementation, to ensure it meets women’s needs to increase their income and positively impact their families and communities. But could we be doing better? Can we be sure that women leave the year-long program empowered? That they have increased their revenue? And what does it all mean for them?



Artistri Sud recruited McGill MBA student Veronica Michiels Vargas for this important project.


With a background in economics and seven years’ experience as an analyst at the Banco de la República, Colombia’s state-run central bank, Veronica brought a rigorous, business-minded approach to the mandate. Working with volunteer consultants Ariella Orbach, an experienced development and community-building practitioner, and Casey Rosner, who has a background in international development studies and a strong interest in women’s empowerment, Veronica developed a process and a set of tools that are both effective and practical for Artistri Sud. Read her abstract of the work below.


To ensure the success of its programs, Artistri Sud wanted an effective way of measuring the extent to which each program’s objectives (impacts and outputs) had been achieved. The organization also needs measurable results that can be shown to stakeholders as proof that the program is both effective and aligned with their interests, and it wants to be able to present those results in a systematic, orderly and timely way. With a more detailed and structured internal evaluation process, the organization will continue to strengthen artisan women’s capacities to effectively run their own small enterprises, and so generate a sustainable income for themselves, while also increasing stakeholders’ support.


Research and ‘best practices’


My work began with research: I studied how to plan and execute an evaluation process and considered the ‘best practices’ of recognized and experienced organizations. The United Nations considers monitoring and evaluation one of the three pillars of its Results-Based Management approach. The foundation of the evaluation processes of recognized organizations[1] is a well-defined results framework (program logic), essentially a large table that defines the program’s objectives in detail and identifies how an observer would know whether they had been achieved. The results framework defines the questions the organization needs to answer, and points to the most appropriate indicators to measure.

These indicators are the meat of the evaluation: by identifying exactly which indicators we want to study, we can craft evaluation tools to test them. Indicators are what you measure, and through them you can ask and answer questions such as who, how many, how often, and how much[2]. I also found that there are five commonly applied criteria for designing an evaluation process: relevance, effectiveness, efficiency, sustainability and impact[3]. Not all of them apply to every evaluation; our process focused on effectiveness and impact.

What indicates success?

Following the methodology developed by Root Cause[4], I compiled a list of all the indicators that our organization currently tracks, including how they are tracked (the questions and evaluation tool used) and when. This compilation gave us a comprehensive picture of which indicators, given the organization’s program logic, were not carefully tracked (or not tracked at all), and therefore which evaluation tools were needed.
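For readers who like to see the idea concretely: the gap assessment described above amounts to checking each indicator in the inventory for a missing tool or collection point. The short sketch below illustrates this with entirely made-up indicator names (they are not Artistri Sud’s actual indicators):

```python
# Illustrative sketch only: the indicator names, tools, and schedules below
# are invented for the example, not taken from Artistri Sud's framework.
indicators = [
    {"name": "monthly sales revenue", "tool": "baseline questionnaire", "when": "program start"},
    {"name": "number of paying customers", "tool": None, "when": None},
    {"name": "confidence in pricing decisions", "tool": "interview", "when": "mid-program"},
]

# An indicator is a tracking gap if it has no evaluation tool
# or no scheduled point at which it is collected.
gaps = [i["name"] for i in indicators if not i["tool"] or not i["when"]]
print(gaps)  # -> ['number of paying customers']
```

The output lists exactly the indicators for which new evaluation tools would be needed.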

Based on the objectives of the organization, I defined two main types of indicators: indicators related to women’s economic and business situation, and indicators related to women’s empowerment.

For the first type, I redefined the indicators to be specific, measurable, achievable and relevant, as evaluation best practices require.[5] I then developed evaluation tools (questions), drawing on my experience and knowledge in business combined with an extensive review of the training program’s objectives and learnings. I created a baseline questionnaire to gather specific, relevant information about participants’ economic and business situation before the program. This information can be compared with their situation one year after the training, captured by a second questionnaire. With these tools, the organization can determine what changes in the indicators have taken place and measure them accordingly.
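The baseline-versus-follow-up comparison could be computed along these lines. All figures and participant labels are invented for illustration; they are not Artistri Sud data:

```python
# Illustrative sketch only: hypothetical baseline and one-year follow-up
# values for an economic indicator (e.g. monthly income for each participant).
baseline = {"participant_1": 120.0, "participant_2": 95.0}
follow_up = {"participant_1": 180.0, "participant_2": 140.0}

def pct_change(before, after):
    """Percentage change from the baseline value to the follow-up value."""
    return (after - before) / before * 100

# Change in the indicator for each participant over the year.
changes = {p: pct_change(baseline[p], follow_up[p]) for p in baseline}
print(changes)  # participant_1: +50.0%, participant_2: about +47.4%
```

Aggregating such per-participant changes is what lets the organization report, in a systematic way, how much the indicator moved over the course of the program.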

Stimulating Empowerment

For the empowerment aspect of the program, I worked with Casey Rosner, a recent graduate from McGill University’s Institute for the Study of International Development. Casey had been collaborating with Ariella Orbach and Artistri Sud for the past year, working to articulate the program logic and identify the ways that empowerment can figure into the operations of the organization.

Casey developed a set of interview questions to capture the effectiveness of the program in stimulating personal growth and development in Artistri Sud participants. Her questions are framed around the personal narrative approach, which understands the process of empowerment as inherently individual.



A structured and efficient evaluation tool

As part of developing the evaluation tools, we conducted a pilot test. With the help of two graduates of the 2015 ASSET cohort in Chile, we tested 11 questions on 4 participants. This test helped us verify both that the artisans understood the questions and that the instructions (on how to conduct the interviews and record the responses) were clear and helpful. The results showed that 75% of the questions were understandable, and the instructions were clear. Based on these results, and incorporating the interviewer’s feedback, we refined 3 questions, helping to ensure a successful implementation of the evaluation tools.

One of the challenges I faced in developing the evaluation tools was Artistri Sud’s resource constraints: around 90% of the work is done by volunteers, and projects run on a tight budget. So I had to keep in mind that the tools needed to let us collect data on each indicator not only in a structured way, but also in a very efficient one.

Finally, in order to integrate the evaluation with the various programs (entrepreneurship training, coaching, and train-the-trainer), I worked closely with the program teams. These team members were very helpful, and thanks to our complementary backgrounds, we were able to take advantage of each interaction the organization has with participants (every one to two weeks) to collect relevant and precise data. This integration let us promote efficiency while ensuring consistency with the organization’s mission.

I am really proud of the work that we have done, and I am confident that the evaluation tools will help Artistri Sud to continue to fulfill its mission.

[1] United Nations, and National Centre for Sustainability (NCS) & Swinburne University.

[2] According to Civicus, an international alliance dedicated to strengthening citizen action and civil society around the world.

[3] “Handbook on planning, monitoring and evaluating for development results”, United Nations Development Programme

[4] “Building a performance measurement system, using data to accelerate social impact”, Andrew Wolk, Anand Dholakia, and Kelley Kreitz, Root Cause.

[5] “Handbook on planning, monitoring and evaluating for development results”, United Nations Development Programme.


