Bodmando Consulting Group


Mid-Term Evaluation of the BMZ-SSF Project

AWO International is the Workers’ Welfare (Arbeiterwohlfahrt) association focused on development cooperation and humanitarian action. The organisation partners with local NGOs in East Africa, South Asia, Southeast Asia, and Central America.

Case Study: BMZ-SSF-UG-ME-2024
Client Name: AWO International
Country: Uganda
Technical Area: Livelihoods
Year: 2024

What was the client’s problem?

AWO International aims to improve living conditions sustainably across these regions and promote self-reliance. It assists people and communities in creating their own life plans, accessing essential resources and services, and enhancing the social participation of disadvantaged groups, including children and adolescents, women, migrants, indigenous people, and the elderly or ill. AWO International emphasises the strengthening of social structures and civil society organisations, advocating for their involvement and influence within democratic decision-making processes at micro, meso, and macro levels.

AWO has been active in Uganda since 2019, implementing the BMZ-SSF-funded project “Integration, Food Safety and Nutrition” alongside four local partners: the Agency for Accelerated Regional Development (AFARD), Community Volunteer Initiative for Development (COVOID), Uganda Community Based Association for Women and Children Welfare (UCOBAC), and Rural Initiative for Community Empowerment-West Nile (RICE-West Nile). The current 2023-2025 phase covers livelihood development, sustainable agriculture, entrepreneurship, peaceful co-existence, family planning, and climate change. Funded by Germany’s Federal Ministry for Economic Cooperation and Development (BMZ), the entire project spans nine years, divided into three-year phases.

Objectives of the mid-term evaluation

AWO International contracted Bodmando Consulting Group to lead a mid-term evaluation of the BMZ-SSF Project in Uganda. The evaluation had five main goals:

1. Evaluate the project’s performance against the desired results as articulated in the project’s logical frameworks.
2. Evaluate the project’s performance against the current development strategies of Uganda.
3. Evaluate the project’s performance against the donor’s feedback, policy, and requirements for the SSF.
4. Assess the extent to which the project interventions are addressing cross-cutting issues, i.e. gender equality, social inclusion, disability, etc.
5. Provide AWO International with a sound basis for the new project concept for 2026-2028.

What approach and methodology did Bodmando undertake?

The evaluation was cross-sectional by design and used mixed methods, relying on both primary and secondary data sources and targeting the wider population in the refugee and host communities as well as key partners and other stakeholders. Primary data was generated through Key Informant Interviews (KIIs) with the Office of the Prime Minister (OPM), UNHCR and the BMZ partners, Focus Group Discussions (FGDs), and a community survey with refugees and host communities. A case study approach was also part of the evaluation design, allowing the evaluation team to observe, analyse, and assess the effects of the project on the welfare of refugees and host communities. Secondary data was generated from a desk review of relevant internal and external documents.

What value did Bodmando unlock?
Bodmando’s value lies in conducting the mid-term evaluation of AWO International’s project. The evaluation provided critical insights into the project’s progress, effectiveness, and areas for improvement. By identifying challenges and successes, Bodmando contributed to refining the project’s implementation strategy, ensuring that the initiatives in livelihood development, sustainable agriculture, family planning, and climate change adaptation remain impactful and aligned with the community’s needs.


Evaluations in the Global South

The Context of Program Evaluation in the Global South

Developing nations are providing increasing evidence that underscores the necessity for improved evaluation frameworks to ensure the long-term sustainability of South-South cooperation. Nations in the Global South stress the importance of creating, testing, and consistently applying monitoring and evaluation approaches specifically designed for the principles and practices of South-South and triangular cooperation. Presently, there is a significant gap in this area, indicating potential shortcomings in the design, delivery, management, and monitoring and evaluation (M&E) of these initiatives. It is crucial to note that the observed challenges do not suggest inherent issues with this form of cooperation, but rather indicate possible deficiencies in various aspects (United Nations Office for South-South Cooperation, 2018). To fully realize the developmental benefits of South-South and triangular cooperation, especially in reaching excluded and marginalized populations, greater attention must be given to addressing these challenges.

As interest in these cooperation modalities grows, stakeholders are calling for discussions on methodologies to assess the impact of these initiatives. However, numerous technical challenges hinder the evaluation process, such as the absence of a universal definition for South-South and triangular cooperation, the diverse nature of activities and actors involved, and varying perspectives on measuring contributions. Various frameworks have been proposed by stakeholders to tackle these challenges. Examples include the framework detailed by China Agricultural University based on China-United Republic of Tanzania collaboration, the NeST Africa chapter’s framework drawn from extensive multi-stakeholder engagement, and the South-South Technical Cooperation Management Manual published by the Brazilian Cooperation Agency (ABC). Additionally, AMEXCID (Mexico) has outlined a strategy for the institutionalization of an evaluation policy, including pilots to assess management processes, service quality, and project relevance and results. While India lacks an overarching assessment system, the Research and Information System for Developing Countries (RIS) think tank has conducted limited case studies to develop a methodological toolkit and analytical framework for assessing the impact of South-South cooperation.

There is now widespread acknowledgment that program evaluation initiatives have surged in the Global South. However, the evaluation discourse tends to revolve around narrower aspects such as monitoring and auditing, often driven by the requirements of donors or funders. Moreover, the emphasis on evaluating “impact” often leaves program implementers with insufficient information to enhance program performance or understand the underlying mechanisms of program success or failure. This paper explores the gaps and challenges associated with evaluation in the Global South and proposes recommendations to embrace contemporary evaluation approaches that recognize the complexity and context-specificity of international development sectors. It also advocates for intentional efforts by researchers, policymakers, and practitioners to build local capacity for designing and conducting evaluations.
Program evaluation, the process of generating and interpreting information to assess the value and effectiveness of public programs, is a crucial tool for understanding the successes and shortcomings of public health, education, and other social programs. In the Global South’s international development sector, evaluation plays a vital role in discerning what works and why. When appropriately implemented, program and policy evaluation assists policymakers and program planners in identifying development gaps, planning interventions, and evaluating the efficacy of programs and policies. Evaluation also serves as a valuable tool for understanding the distributional impact of development initiatives, providing insights into how programs operate and for whom (Satlaj & Trupti, 2019).

Methodological Bias

Currently, impact evaluations employing experimental design methods are considered the gold standard in the international development sector. However, there is a growing recognition among evaluation scholars and practitioners of the limitations of “impact measurement” itself. Some argue that a program may not be suitable for a randomized controlled trial (RCT) and might benefit more from program improvement techniques such as formative evaluation. Scholars emphasize the need to reconsider “impact measurement” as the sole criterion for evaluating program success. The discourse has also shifted towards acknowledging the complexity of causality, advocating for evaluators to be context-aware and literate in various ways of thinking about causality. Despite this, the dominance of methods like RCTs often hinders the use of complexity approaches, even when they may be more suitable.

Human-Centered Design and Developmental Evaluation

Developmental Evaluation (DE) is a form of program evaluation that informs and refines innovation, including program development (Patton, 2011). Formative and summative evaluations tend to assume a linear trajectory for programs or for changes in knowledge, behavior, and outcomes. In contrast, developmental evaluation responds to the nature of change often seen in complex social systems. DE is currently in use in a number of fields where nonprofits play important roles, from agriculture to human services, international development to the arts, and education to health. Another technique that has gained salience in addressing complexity and innovation is human-centered design (HCD). It shares many parallels with developmental evaluation and attends specifically to user experiences throughout the program design process. More generally, it involves a cyclical process of observation, prototyping, and testing (Bason, 2017). Although human-centered design is seemingly focused on initiation (or program design) and evaluation on assessment after the fact, human-centered design and developmental evaluation share a number of commonalities. Both support rapid-cycle learning among program staff and leadership to bolster learning and innovative program development (Patton, 2010; Patton, McKegg & Wehipeihana, 2015).

Theory-Driven Evaluation

In recent years, theory-driven evaluations have gained traction among evaluators who believe that the purpose of evaluation extends beyond determining whether an intervention works or not. This approach posits that evaluation should seek to understand how and why an intervention is effective.
Theory-driven evaluations rely on a conceptual framework called program theory, which consists of explicit or implicit assumptions about the actions necessary to address a social, educational, or health problem and why those actions will be effective. This approach enhances the evaluation’s ability to explain the change caused by a program, distinguishing between implementation failure and theory failure. Unlike impact evaluations using experimental methods, theory-driven evaluations provide insights on scaling up or replicating programs in different settings by explaining the underlying mechanisms responsible for program success.

Evaluation Capacity Building

Addressing the gaps in