In our June 11th webinar, we discussed key steps organizations can take to evaluate their own tech projects and to do it with fast adaptation in mind. Joining us with hands-on knowledge were guests Tin Geber of the Engine Room and Michael Taylor of the Land Coalition. You can view a recording of the webinar here.
TABridge lead organizer Allen Gunn began with comments on the nature of monitoring and evaluation itself (noting that even the mention of “M&E” often prompts a similar dread to a mention of the dentist). Too often, he said, NGOs fail to set down success criteria at the start of a tech project. Without clear goals, it’s more difficult to follow a clear path of implementation, much less evaluate progress or adapt so that your goals can remain paramount.
Drawing from the recommendations in our new guide to “Fundamentals for Using Technology” in transparency work, we then presented five steps for designing an evaluation plan from the very start of a project:
Step 1: Identify Your Goals
- Ask: What is the outcome we want?
- Ex.: Eliminate corruption and make government accountable
- How will this tech project help us improve our impact?
- Ex.: This website will make government data accessible to citizens
Step 2: Establish a Baseline
- You won’t know how far you’ve gone if you don’t know where you started.
- Ex.: Currently there is very little coverage in the press about or using our data.
- Ask: How will we track progress?
Step 3: Determine Metrics
- How will you measure success?
- Don’t be afraid of numbers
- Be realistic; measure for yourself, not just for a funder.
- Ex.: In the last year there were 4 articles by investigative journalists using our data. Next year we aim to double that number to 8.
Step 4: Create an Evaluation Plan
- Integrate evaluation activities throughout your tech project’s life cycle.
- Include evaluation as part of your technology project plan.
- Ex.: After each key section of our website is built, the team will debrief and we will trial it with investigative journalists.
Step 5: Implement Your Project and Plan
- Learn as you go and adjust the plan accordingly
- Use evaluations to learn how to be more effective overall
- Be inclusive and transparent about the evaluation process
- Ex.: After user feedback on new sections, we decided to streamline navigation to the data itself.
Webinar guest Tin Geber is a Project Pirate for the Engine Room and has a background in communication studies, web development, and interface design. He works to simplify the relationship between technology, data, and human beings.
The Engine Room just put out a guide for monitoring tech projects, “Measuring Impact On-the-Go.” Geber said NGOs can “do-it-yourself” when it comes to evaluation. There’s no need to relegate monitoring to “parachute consultants” who come and go.
When you use a hands-on, DIY approach to project evaluation, said Geber, you’re investing not only in a more informed design for your overall program, but in future efficiency and effectiveness for your own NGO, since it builds skills in the team. If your work is grounded in a strong theory of change, he added, it helps establish a strong loop between implementation, evaluation and learning “on the go.”
Guest Michael Taylor is Programme Manager for Global Policy at the International Land Coalition (ILC) Secretariat in Rome, Italy. He is also a social anthropologist and a citizen of Botswana. ILC is a global alliance of civil society and intergovernmental organisations working together to promote secure and equitable access to and control over land for women and men.
Taylor described the history of the Land Matrix portal, and the central role that “M&E” has played in its evolution. In the wake of the site’s launch, he said, a range of reactions and critiques flowed in from many sources, including several comments on the accuracy of the site data.
After some “serious soul searching,” the Land Matrix team redid the portal with crucial adjustments: more detailed transparency about the data sets used, and new ways for site users to comment on and supplement the data.
While it might have been more desirable to incorporate these transparency and feedback tools from the start, the project’s adaptations to user response offer a great example of how an open evaluation process can not only improve a tool, but build community. By letting go of sole control of their process, the Land Matrix team achieved more engagement, higher accuracy, and greater shared ownership with their users.
Echoing the lessons from Geber and Taylor, Gunn reminded the group that, in any project, outside users and internal colleagues are constituencies that can provide feedback throughout the process. By showing a new information tool to selected reporters or board members, for instance, you can conduct “incremental” monitoring before, during and even after your tool or site is live online.
Webinar participants from Latin America to the U.S. to Europe asked a number of questions, including how to monitor mobile projects, and where to find other networks focused on NGOs and tech. One additional information source mentioned was National Democratic Institute’s recent report, “Citizen Participation and Technology,” on the role of digital tools in increasing citizen participation and fostering government accountability.
Our new guide, “Fundamentals for Using Technology in Transparency and Accountability Organisations,” includes a chapter on integrating evaluation and learning.
Allen Gunn (@allengunn) is Executive Director of Aspiration (www.aspirationtech.org) in San Francisco, USA, and works to help NGOs, activists, foundations and software developers make more effective use of technology for social change.
Jessica Steimer (@JSteim) is the training and support manager at Aspiration, where she trains and supports community organizations around nonprofit technology best practices, specializing in business processes for nonprofit communications and technology sustainability.
You can view a recording of the webinar here. Also be sure to check out our upcoming #TABridge webinars.
With thanks to Jed Miller for additional event reporting.