Monitoring and evaluating capacity building (CB) remains a challenge for many programmes like ours. There is no single response to this challenge, and there are diverse aspects to be measured. According to LaFond and Brown (2003), “monitoring and evaluation can help answer a range of questions about:
the process of capacity change (how capacity building takes place),
capacity as an intermediate step toward performance (what elements of capacity are needed to ensure adequate performance), and
capacity as an outcome (whether capacity building has improved capacity)”
Also, as Linell (2003) affirms: “good evaluation is systematic, disciplined inquiry. Beyond that, what evaluation is and does varies enormously. An evaluation can entail a site visit by a solo consultant, a phone survey by a student intern, or rigorous scientific research conducted by a team of academics. It can cost a few hundred dollars or several millions.”
The author argues for its value: “Evaluation can illuminate feasibility, implementation issues and/or programme effectiveness at multiple levels including impact and outcomes for participants.” A good M&E approach also enables better learning, and under SFE this was the main purpose of evaluating our activities. In fact, I really think it's time to add the L to M&E: one of the main drivers for investing in M&E should be learning, so that data and analysis inform our experience and help us do it better next time.
For us within SFE, learning was always at the top of our agenda. We held ongoing team discussions to reflect on what was working, what was not, and why. These debates, often grounded in evidence such as written evaluations, informal feedback, new demand for capacity building, or lack of participation in some online communities, guided decisions on how to enhance the operational and strategic management of CB activities. We also compared the effectiveness of different capacity-building interventions so as to sharpen the focus and investment of resources throughout the programme.
As part of the most recent annual reporting exercises, GDNet also asked us to develop a one-pager of lessons learned on the main activities of the project. This was a great help: it enabled us to commit some time to reflecting and to systematizing, in a formal way, the knowledge that emerged through that reflection. In this sense, we have found that reporting can become the ideal opportunity to invest some time in reflecting on what has worked and what has not, and to ensure you make some strategic decisions as a consequence of this reflection. By writing this down you become more aware of both your strengths and your areas for improvement.
Finally, we also decided to invest some resources towards the end of the programme to produce the Lessons Learned paper from which this series of lessons shared at P&I emerged. The paper, our participation in similar initiatives, and the posts we produced here were all ways to share what we think is valuable knowledge with peer organisations and colleagues. We have also invited them to react to what we have produced and to share their own experience. Thus, our learning has improved with the contributions of very interesting and insightful posts from Ravi Murugesan, Antonio Capillo and Alex Ademokun from INASP, Goran Buldioski (Think Tank Fund), Hans Gutbrod (independent consultant), Ricardo Ramirez and Dal Brodhead (independent consultants), and Catherine Fisher (independent consultant). If you have not yet taken a look at these, they are worth a visit!