CONTEXT
25 February 2021
by UIA Permanent Secretariat & Ecorys
From the report: Monitoring and evaluation practices: UIA lessons learnt

Considerations for the evaluation of innovation

Introduction


UIA projects are expected to go beyond traditional policies and services – they need to be bold and innovative. Consequently, monitoring and evaluating such projects presents its own set of challenges. First, a distinction needs to be made between the two elements. Interwoven and interconnected, monitoring and evaluation are often confused, and the lines between them blurred. Projects frequently have robust and sophisticated monitoring systems but fall short of actual evaluation. So, what is the difference?

Put very simply, monitoring is the systematic collection of information about programme/project activities to check whether they are on track. It is an ongoing process, ideally starting from day one of implementation, and is usually carried out by the project team. Importantly, monitoring focuses on inputs (resources mobilised), activities (what was done with the resources) and outputs (what was produced in the process). Evaluation, in turn, is a periodic assessment of programme/project activities (usually by external experts) designed to measure their success against established goals and objectives. It is undertaken at defined points in the project cycle (halfway through, at completion, or when moving from one stage of the project to another). In some cases, it may be carried out by internal members of the team, or by a combination of internal and external members. The table below presents some key differences between monitoring and evaluation.

When?
Monitoring: systematic and routine; an ongoing process starting from day one.
Evaluation: periodic; done at certain points during the project.

How?
Monitoring: collecting and analysing the project’s records (regular meetings, interviews, monthly and quarterly reviews, etc.); usually quantitative data.
Evaluation: collecting and analysing data about the project’s potential results; intense data collection, both qualitative and quantitative.

What?
Monitoring: tracking the project’s progress; checking if activities are on track; focuses on inputs, activities and outputs.
Evaluation: understanding and measuring the project’s impact; measuring the project’s success against established goals and objectives; focuses on outcomes, impacts and overall goals.

Who?
Monitoring: usually undertaken by internal members of the team.
Evaluation: often carried out by external members; in some cases, undertaken by internal members of the team or by both internal and external members.
Adapted from Sandesh Adhikari for Public Health Notes (2017), ‘20 Differences between Monitoring and Evaluation’. Available at: www.publichealthnotes.com/difference-monitoring-evaluation/
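To make the distinction concrete, the sketch below shows one possible way of keeping the two apart in a project’s own records: monitoring data on inputs, activities and outputs accumulate continuously, while outcome- and impact-level evaluation questions are only revisited at agreed points. It is a minimal, purely illustrative example in Python; the project, indicator names and figures are invented, not drawn from any UIA project.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRecord:
    """Routine record kept by the project team from day one (hypothetical fields)."""
    period: str        # e.g. "2021-Q1"
    inputs: dict       # resources mobilised
    activities: dict   # what was done with the resources
    outputs: dict      # what was produced in the process

@dataclass
class EvaluationQuestion:
    """Periodic assessment against goals, usually involving external evaluators."""
    question: str      # outcome- or impact-level question
    focus: str         # "outcome" or "impact"
    assessed_at: date  # mid-term, completion, or a stage transition

# Monitoring accumulates continuously, period by period...
monitoring_log = [
    MonitoringRecord(
        period="2021-Q1",
        inputs={"budget_spent_eur": 120000, "staff_days": 140},
        activities={"training_sessions_held": 6},
        outputs={"residents_trained": 90},
    ),
]

# ...while evaluation questions are only revisited at agreed points in the cycle.
evaluation_plan = [
    EvaluationQuestion(
        question="Did participants' employment prospects improve?",
        focus="outcome",
        assessed_at=date(2022, 6, 30),
    ),
]

print(len(monitoring_log), "monitoring records;", len(evaluation_plan), "evaluation questions")
```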

While it is plausible that in some well-established, repeated and highly predictable projects monitoring is sufficient and evaluation may (under certain circumstances) be omitted, no such scenario applies to innovative interventions such as UIA-funded projects. Here, monitoring activities, implementation and outputs without asking bigger questions about the project’s impact is not enough. A well-designed evaluation approach is essential for grasping what happened (or did not happen) as a result of an intervention, how, and why.

Evaluating innovative projects comes with its own set of additional considerations and challenges. There has long been consensus that no ‘magic bullet’ exists in evaluation: no single method can answer all questions or be applied in all types of studies. Typically, evaluations need to use a mixture of methods chosen to suit the particular needs, objectives and circumstances of the implemented interventions. While it would be difficult to single out a specific evaluation method as a ‘one size fits all’ approach to grasping projects’ impact, there are at least three overarching considerations that can guide good practice in both the monitoring and evaluation of innovation:

  1. Limits to what can actually be known and understood about the dynamics and impact of innovative projects within a given timeframe.
  2. The paramount importance of participatory approaches when monitoring and evaluating urban initiatives.
  3. Systems thinking as a basis for conceptualising the character and boundaries of an evaluated object.

The common denominator of these three considerations is the need to look beyond monitoring the progress of project implementation (even though this remains a principal necessity for understanding how an innovation evolves) to understand how an intervention has impacted a broader network of stakeholders. Evaluations should also signal the potential an intervention demonstrates for further impacting communities. The concept of a broader network of stakeholders, one that goes beyond the initially defined beneficiary group, is essential to account for the multiplicity of possible outcomes an innovative intervention might bring about in an interconnected social environment such as a city. It has been highlighted that innovation is, in its essence, unpredictable in terms of which particular activity or intervention will work or prove useful, and who will benefit. Moreover, at the outset of an innovative project it is never known when the benefits (if any) will occur, nor whether the discovery and its application will be as initially intended or quite different in nature.

At the same time, the design and implementation of large-scale innovative interventions require significant planning and well-developed assumptions about possible scenarios. They demand a clear definition of the intervention’s primary beneficiary group and of the logic of the intended impact. It is important, however, that during evaluation this primary group is critically deconstructed, so as to analyse the different impacts a project might exert on individuals and groups depending on their socio-economic markers. Including the three general principles listed above in the design of monitoring and evaluation approaches for UIA projects and other innovative interventions will increase the chances of genuine, trustworthy and meaningful learning.

Limits to evaluating innovation and the importance of learning

Innovation is a complex phenomenon which is difficult to quantify and often involves significant time lags before an impact can be measured. The progress of innovation is uneven rather than continuous, and the payoff is rarely immediate.

’The criteria for success should not be whether the project succeeded or failed in what it was trying to do, but rather should be the extent to which it truly explored something new, identified what can be learned and acted upon these.’ (Perrin, 2002)

One of the challenges in evaluating innovative projects is understanding whether problems and limitations relate to the concept of the intervention itself, or simply arise from inevitable start-up problems that can be worked out in time. Logic models or well-constructed theories of change can help in deciding what forms of impact it is appropriate to look for at a given stage in the project cycle, and what can realistically be evaluated. The timespan required to evaluate social innovation projects fully is much greater than that usually granted by funding agencies. As a result, there is a strong need to indicate how much of the intervention’s impact can actually be grasped and understood immediately after the project’s completion, within the timespan available for project evaluation, and beyond. Designing evaluation questions that are feasible for the time available, and defining the outputs expected at a given moment in the project’s implementation, can help to establish this. Being too ambitious or unrealistic about what knowledge can be produced in relation to a project can backfire at the data collection and analysis stage.

Social innovations (particularly those related to the development of new attitudes, habits and practices) take time. It is therefore advisable to be clear on what is in fact measurable, and on the change that is actually achievable, within a given timeframe. At the same time, small changes can create large and sometimes unanticipated effects. Because the interrelationships between parts and players in a system are difficult to untangle, it is ‘impossible to know for sure how — or whether — one change will “ripple” through to other players or change overall dynamics’. As the authors of the Smart Innovation Guide advised in 2006 (in the context of evaluating European innovation policies), ‘the best that can be hoped for in an evaluation will be to examine some leading indicators, or some early and possibly intermediate impacts, that suggest that longer-term impacts are more rather than less likely’. Put simply, there is only so much that can be known and understood about the dynamics of innovation playing out in the real, macro context within a specific, limited timeframe. Acknowledging what cannot be known and measured is a way forward for creating a realistic evaluation framework.
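One practical way to apply this is to record, for every intended result, when it could realistically become observable, and to compare that against the time available for evaluation. The sketch below is a minimal illustration of this idea in Python; the project, results, proxies and time horizons are hypothetical, not taken from the UIA programme.

```python
from dataclasses import dataclass

@dataclass
class IntendedResult:
    """One intended result of a hypothetical innovative project."""
    name: str
    months_until_observable: int  # earliest point at which the effect could plausibly be seen
    leading_indicator: str        # earlier proxy signal, if any

# Invented results and time horizons, for illustration only
results = [
    IntendedResult("Residents use the new service", 6, "registrations during the pilot"),
    IntendedResult("Changed mobility habits", 36, "share of trips made by bike in surveys"),
    IntendedResult("Lower neighbourhood air pollution", 60, "traffic counts on key streets"),
]

evaluation_window_months = 24  # time available before the final evaluation report

for r in results:
    if r.months_until_observable <= evaluation_window_months:
        print(f"{r.name}: can be assessed within the evaluation window")
    else:
        print(f"{r.name}: cannot yet be assessed - report the leading indicator "
              f"'{r.leading_indicator}' and state this limit openly")
```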

Another aspect of good practice in evaluating innovation is openness and the ability to understand and act on identified failures. Perrin argues that a methodological approach to evaluating innovation must ‘help identify learning and implications from ‘successes’ and ‘failures’’, and ‘be flexible enough to be open to serendipity and unexpected findings, which, particularly with innovations, can represent the key outcomes’. Since the implementation of innovative projects should ultimately lead to learning, two issues become important when approaching the evaluation of such interventions: introducing or fostering an innovation and learning culture at the implementing institutions, and creating learning-oriented evaluation frameworks. These two aspects are interconnected and together create the foundation for a meaningful, critical analysis of impact.

The capacity for introducing or fostering an innovation and learning culture can depend on the institutional setup within a project. It may present a particular challenge in interventions spearheaded by public authorities, as they operate within the context of accountability for public funds and electoral cycles. Neither accountability for public funds nor the prospect of elections favours failure as a possible option in project implementation. This is where tensions or challenges related to true project evaluation can appear. Bold and innovative interventions have, by default, a risk of failure written into them, and this possibility needs to be considered in evaluation. Given the time lag that is often involved before the impact of an innovation becomes apparent, and the often unpredictable pathways associated with this, it is also important to apply caution in defining or declaring failure too quickly. A premature and poorly designed evaluation can cause harm:
‘When a formative or summative evaluation approach is applied to an innovation that is still unfolding, it can squelch the adaptation and creativity that is integral to success’.

Evaluating a project on how well a set of planned activities has been implemented — or predicted outcomes achieved — strongly incentivises implementers to stick to the original plan, regardless of the changes that have affected the environment or interests of stakeholders. In this way, ‘exploration and experimentation, and perhaps even the ability to envision alternative paths, are shut down’. Furthermore, taking evaluation findings as final judgments of an initiative’s impact when that project is still evolving and its full impact remains to be seen can cause project authors (or more importantly, funders) to prematurely abandon supporting such efforts. As such, projects that could be truly transformative in the long run face the risk of being discarded.

‘People who test new solutions to complex problems do not have the luxury of a clear or proven path for achieving their vision. They may know generally where they want to end up, but they may not know the most efficient or effective way to get there, nor do they know exactly how long it will take to arrive.’ (Preskill and Beer, Evaluating Social Innovation)

One practical strategy for creating room for accepting unexpected results and acknowledging failure is maintaining clear, open and powerful communication that the initiative at hand is an innovative project in which failures, the testing of new approaches and changes of course are not only possible, but likely. ‘Failures’ tend to be perceived and treated unfavourably, with possible negative consequences for ‘those judged to have “failed”, even if the attempt was very ambitious’. Few, in particular among public institutions, celebrate the possibility of failure as much as the Danes. The Danish National Centre for Public Sector Innovation goes as far as to congratulate implementers on failures in innovation, through a message conveyed in its guidelines for evaluating innovation: ‘Congratulations! You’ve just become so much smarter! Take what you have learned, tweak your initiative and start over in your innovation process.’ In fact, evaluation theorists warn against programmes or projects which claim to be innovative and report a high rate of ‘success’; these should be viewed with scepticism, as it probably means that what is being attempted is not very ambitious.

Another approach worth exploring is the introduction of accountability for learning, whereby project teams are held accountable for how much has been learnt over time, how they have adapted to new information, and why this adaptation has been important for improving development outcomes. While exploring how change occurs in the world, Green admits that it is much easier to ‘prove’ results by assuming the world is linear, reinforcing the ‘if x, then y’ mindset; in complex systems (such as cities), however, it makes more sense to be accountable for what you have learned and how you have adapted than for the results achieved against an initially developed scenario.

Indeed, creating a learning-oriented evaluation framework goes hand in hand with fostering a learning-oriented culture. This requires stepping back from the specificities of particular techniques and developing core questions that guide the whole evaluation in an open way, in which different outcomes and possibilities are plausible. Here, developing a project theory of change has become a popular method for framing an intervention, and is potentially useful for organising evaluations. Looking specifically at how theories of change can support learning, Valters warns that it is important that interventions do not fall into the trap of creating policy-based evidence rather than evidence-based policy. Such an approach, he argues, requires a focus on searching rather than on validation, moving away from matching theories to donor narratives and towards exploring change through methods that are embedded in local contexts. This is in line with UIA’s understanding of its projects, which are to evolve in ‘dynamic places where changes happen on a larger scale and at a fast pace’. Well-designed theories of change can be operationalised to include and integrate ‘learning objectives into the cycle of project design, implementation, completion, and evaluation’. While it may seem contradictory, a theory of change should expand the range of potential approaches rather than narrow them down. Lastly, evaluation of innovation must be able to ‘get at the exceptions, including unintended consequences’. It is important to acknowledge that research approaches based only on counting and summation, focusing on mean scores, may easily mislead us and blur or hide a project’s actual effects.
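As a rough illustration of what integrating learning objectives into a theory of change could look like in practice, the sketch below (Python) attaches a learning question to each assumed causal link, so that a link that does not hold yields something to learn rather than only a verdict of failure. The chain, assumptions and questions are invented for the example and do not represent a UIA template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    """One assumed causal step in a hypothetical theory of change."""
    from_step: str
    to_step: str
    assumption: str
    learning_question: str
    holds: Optional[bool] = None  # None = not yet assessed

theory_of_change = [
    Link("co-design workshops", "residents trust the platform",
         "taking part in design builds ownership",
         "Which groups took part, who stayed away, and why?"),
    Link("residents trust the platform", "the platform is used for local decisions",
         "trust translates into routine use",
         "Which alternative channels did people keep using instead?"),
]

def learning_agenda(links):
    """Collect the learning questions raised by links that failed or remain untested."""
    return [link.learning_question for link in links if link.holds is not True]

# Hypothetical mid-term assessment: the first link held, the second did not
theory_of_change[0].holds = True
theory_of_change[1].holds = False
print(learning_agenda(theory_of_change))
```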

In the field of evaluating social innovation, conventional evaluation[1] is increasingly judged a poor fit for the uncertain and emergent nature of innovative and complex initiatives. Consequently, attention is being paid to approaches which can provide timely feedback and data that are necessary for supporting adaptation and reorganisation in highly dynamic, multidimensional and interdependent interventions. At the same time, voices critical of common evaluation practices argue that ‘some of the most basic and taken-for-granted practices of evaluators and evaluation commissioners are at odds with innovation in complex systems’. Looking specifically at developmental evaluation, practitioners point out that ‘routines around setting pre-determined evaluation questions, methods, timelines, and deliverables do not fit with the uncertainty and unpredictability of innovation in systems’. To this end, starting developmental evaluation with pre-defined questions and a fully developed theory of change risks ‘missing the point’. Instead, it is recommended that both those who are commissioning evaluations and those who are implementing them begin, for instance, with ‘early stage puzzling questions’, allowing for flexibility in the development of an evaluation approach.

 

[1] While, considering the rich and growing portfolio of evaluation approaches and methodologies available, it might be somewhat misleading to talk of ‘conventional evaluations’, authors argue that, despite this broad spectrum, formative and summative evaluation approaches prevail and can still be considered the norm.

Beyond a participatory approach

Ever since Henri Lefebvre’s famous idea of the ‘right to the city’, the question of equal access to resources offered by urban areas has been on the agenda of researchers, intellectuals, activists and, consequently, policymakers. Yet, with the expansion of cities and new challenges emerging in this process, the issue of equal access to what is available has become entangled in a framework of finite resources. In this context, UIA projects set out to test new and unproven solutions to address multiple and interconnected urban challenges. These challenges are related to employment, migration, demography, water and soil pollution. While some target concrete groups of beneficiaries, the majority set out to tackle problems that ultimately impact all urban-dwellers. The issue at hand here, however, is not that of what will solve the challenges, but rather how the solutions are developed, implemented and evaluated. Who is invited to participate in knowledge creation, and how?

‘Innovation is not neutral – it is about making the world fairer and more sustainable. So, in order to think about how monitoring and evaluation can strengthen innovation, we need to ask ourselves what values are non-negotiable that we can use for judging the contribution of the innovation.’ (Irene Guijt, ‘From responsible innovation to responsible monitoring and evaluation’, conference paper, 2015)

Often included within the framework of human rights and equity, Lefebvre’s original concept in fact called for a renewed access to urban life, one that empowered city-dwellers to shape the city as they desired through the right to participation and active engagement. Following David Harvey’s interpretation, in which the right to the city ‘is far more than the individual liberty to access urban resources’ and is rather the ‘freedom to make and remake our cities’, the framework for desired urban policymaking is now strongly embedded in the need for the participation of the demos in the co-creation of a shared vision for the city of the future.

While a participatory approach to evaluation has, over the years, become commonplace – making it hard to imagine an evaluation in which the beneficiaries of interventions are not interviewed and consulted – true ownership of the evaluation process, shared among the interested stakeholders and the impacted communities, is more difficult to come across. This is even more applicable in the case of evaluations where structural power relations and historically embedded privileges are (at first sight) less relevant, and the evaluated projects are considered blind to the characteristics of specific groups (such as infrastructure that aims to benefit everyone, or climate change actions).

The avant-garde of evaluation practice has gone beyond simply accounting for beneficiary opinions, however, and evaluations are no longer considered value-neutral, technical endeavours. Evaluations have the potential to advance equity; they are ‘not an end but means’. Understood in this way, evaluations can become powerful platforms through which city-dwellers, especially those who are the most disadvantaged, can articulate and translate their individual experiences of the city into contributions to, and the co-shaping of, potential policies.

Specific to innovation, evaluation can play a significant role in advancing ‘responsible innovations’, and questions have been raised about how monitoring and evaluation can responsibly support the management and governance of innovation processes towards a sustainable and equitable future, or contribute to deeper reflexivity and transparent decision-making. Not surprisingly, the notion of equitable evaluation is, first and foremost, embedded in the context of the evaluation of philanthropic programmes. The idea endorses the alignment of practices with an equity approach that uses evaluation as a tool for advancing fairness. In practical terms, this means considering three aspects: the diversity of evaluation teams (beyond ethnic and cultural diversity); the cultural appropriateness and validity of methods; and the ability of evaluation designs to reveal structural- and system-level drivers of inequity. The last aspect seems particularly important for understanding how large-scale interventions play out in the context of modern cities.

Most importantly, however, ensuring equality in evaluation means scrutinising the degree to which those affected by what is being evaluated have the power to shape and own how the evaluation happens. This requires careful, consultative design of the evaluation stage long before the planned activities begin. From the plethora of evaluation approaches, ranging from transformative paradigms (focusing on the experiences of marginalised communities) and collaborative outcome reporting (including community review and consultation on outcomes and conclusions) to empowerment evaluation, the mode best able to ensure equality and a sufficient understanding of the multifaceted impacts of an innovative project on a community should be selected.

A participatory approach to evaluation should also be able to account for gaps in opinion where individuals are not effectively reached by data collection efforts. Making sense of who is ‘missing’, why, and how this influences our understanding of the intervention’s impact is just as important as analysing the voices and opinions successfully collected. ‘Silence’, or the under-representation of certain groups, may arise for different reasons, many of which have to do with power relations and structural injustice. The voices of children and youth, for example, tend to be overlooked in evaluations of projects that do not explicitly target them as core beneficiary groups. Effectively engaging people with disabilities during evaluation may pose an additional set of challenges that should be considered when planning data collection. Furthermore, in strongly patriarchal communities, women may have fewer opportunities to participate in public dialogue about communal issues. These, and other aspects related to individual contexts, should be addressed at the outset of any meaningful evaluation effort.
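A simple, purely illustrative check of this kind can make ‘who is missing’ visible by comparing who responded to the data collection with who actually lives in the area. The sketch below (Python) uses invented groups and figures; any real check would need locally appropriate categories and data.

```python
# Hypothetical shares: who lives in the area versus who was actually reached
# by the data collection. All groups and figures are invented for illustration.
population_share = {
    "women": 0.52,
    "residents under 18": 0.21,
    "residents over 65": 0.18,
    "persons with disabilities": 0.12,
}
respondent_share = {
    "women": 0.44,
    "residents under 18": 0.03,
    "residents over 65": 0.22,
    "persons with disabilities": 0.05,
}

UNDER_REPRESENTATION_THRESHOLD = 0.5  # reached less than half of the expected share

for group, expected in population_share.items():
    reached = respondent_share.get(group, 0.0) / expected
    if reached < UNDER_REPRESENTATION_THRESHOLD:
        print(f"{group}: only {reached:.0%} of the expected share was reached - "
              "ask why before drawing conclusions, and adjust outreach")
```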

Lastly, opening evaluation processes to the consequential participation of multiple groups of stakeholders will require an effective approach to the appropriate management and articulation of sometimes conflicting, and sometimes highly negative, voices and opinions. Failing to convey critical or conflicting opinions in the final evaluation products runs the risk of losing the process’s legitimacy altogether.

Think systems

The final element of our three-segment approach to evaluation deals with how evaluators should perceive the studied object. The concept of systems is useful in visualising and consequently deconstructing the relations, interdependencies, and spheres of influence within which project activities evolve. The imagery of a system goes directly against the notion of linearity – one that has long been criticised in relation to understanding how innovation works. Innovation never occurs alone, but always within the context of structured relationships, networks, infrastructures and a wider social and economic environment. With a history that extends over a century and origins in multiple disciplines, including natural and social sciences, the systems field offers many systems concepts that can be applied to evaluation theory and practice.

‘Treat the systems and complexity field with the respect it deserves. It’s a big field and, like the evaluation field, has diverse methods and methodologies, big unresolved disputes and a history. Do your homework and avoid grabbing hold of simple clichés. (…) While systems approaches help us deal with ambiguous and uncertain situations, the way we understand situations and why they behave the way they do is not magic, it’s not “stuff happens”. Systems and complexity approaches are very disciplined approaches to making sense of how things happen the way they do.’ (Bob Williams)


With some variations, there are three core systems concepts: interrelationships, perspectives and boundaries. The American Evaluation Association adds a fourth element, dynamics, alongside ‘the interrelationships between elements of a system’ and the ‘perspectives from which a situation or system can be understood’. Smith indicated that an interactive model of innovation has emerged, and that ‘linear notions of innovation have been superseded by models which stress interactions between heterogeneous elements of innovation processes’[1]. While evaluators, in general, tend to explore interrelationships, they ‘worry and argue about what’s in the boxes, and tend to ignore the arrows between them’. In contrast, the systems and complexity field tends to focus more on what the arrows mean and less on what is in the boxes. Accounting for multiple perspectives is also fairly common in evaluation, but the question remains whether studies really engage deeply with the consequences of those perspectives for the situations being evaluated. Williams argues that if that were truly the case, then ‘we’d never consider an intervention having a single purpose or single framing’. Similarly, if we commonly reflected on and critiqued boundary choices, we would never allow the values by which an intervention is judged to be determined solely by the programme implementer or the evaluation client. Boundary demarcation, in fact, occupies an important place in the evaluation of any complex system intervention and, in particular, of innovative projects that tackle multiple and interconnected urban challenges. To this end, boundary critique aims for the critical handling of boundary judgments – the ways we ‘delimit’ or ‘contextualise’ an issue of interest. Ulrich argues that, regardless of whether we use systems terminology and methods or not, it is never a bad idea to ask what our boundary judgments are and what they might or ought to be. Boundary judgments delimit what is considered relevant from what is not. Such defining moments are crucial when evaluating innovation, as the ways in which innovation works are, by definition, unexpected.


[1] Smith, K. (2000) ‘Innovation Indicators and the Knowledge Economy: Concepts, Results and Policy Challenges’, p. 16. Keynote address at the Conference on Innovation and Enterprise Creation: Statistics and Indicators, France, 23–24 November. Cited in Perrin, B. (2002).

Conclusions and resources

Advancements in evaluating innovation are uneven, particularly with regard to public institutions. Valuable and thought-provoking voices tend to come from humanitarian and development work, where questions of true ownership of processes, the complexity of human responses, equality and power relations are translated into evaluation approaches. Here, questions about the superiority of one data collection method over another give way to larger issues about the meaning and implications of evaluation itself. The desired shift is towards seeing evidence of the inclusion of different perspectives, rather than consensus, in analysis, and towards moving beyond innovation in data collection methods to approaches for collective sense-making that seek out surprise. At the same time, innovation can take the form of new programmes, products, laws, institutions, ideas, relationships or patterns of interaction, and is often a mix of many of these. Perhaps more importantly, the term also describes the process of generating, testing and adapting these novel solutions, which is inherently exploratory and uncertain.


Bibliography

  1. American Evaluation Association (2018) Principles for Effective Use of Systems Thinking in Evaluation. Available at: www.systemsinevaluation.com/wp-content/uploads/2018/10/SETIG-Principles-FINAL-DRAFT-2018-9-9.pdf
  2. Drucker, P. F. (1998) ‘The Discipline of Innovation’, Harvard Business Review, 76(6), pp. 149–156.
  3. Green, D. (2016) How Change Happens. Oxford: Oxford University Press.
  4. Kusters, C.S.L., Guijt, I., Buizer, N., Brouwers, J.H.A.M., Roefs, M., van Vugt, S.M. and Wigboldus, S.A. (2015) Monitoring and Evaluation for Responsible Innovation: Report of a conference on M&E for systemic change, 19–20 March, the Netherlands. Report CDI-15-103. Wageningen: Centre for Development Innovation, Wageningen UR (University & Research centre).
  5. Louis Lengrand and Associés et al. (2006) SMART INNOVATION: A Practical Guide to Evaluating Innovation Programmes. Brussels: European Commission. Available at: https://ec.europa.eu/growth/content/practical-guide-evaluating-innovation-programmes-0_en
  6. McPherson, A.H. and McDonald, S.M. (2010) ‘Measuring the outcomes and impacts of innovation interventions: assessing the role of additionality’, International Journal of Technology, Policy and Management, 10(1/2), pp. 137–156.
  7. Perrin, B. (2002) ‘How to — and How Not to — Evaluate Innovation’, Evaluation, January 2002.
  8. Preskill, H. and Beer, T. (2012) Evaluating Social Innovation. Center for Evaluation Innovation.
  9. Puttick, R. and Ludlow, J. (2012) Standards of Evidence for Impact Investing. London: Nesta.
  10. The National Centre for Public Sector Innovation (COI) (2016) A Guide to Evaluating Public Sector Innovations.
  11. Technopolis Group & MIOIR (2012) Evaluation of Innovation Activities: Guidance on Methods and Practices. Study funded by the European Commission, Directorate for Regional Policy.
  12. Ulrich, W. (2017) The Concept of Systemic Triangulation: Its Intent and Imagery. Available at: www.wulrich.com/downloads/bimonthly_march2017.pdf
  13. Valters, C. (2015) Theories of Change: Time for a Radical Approach to Learning in Development. London: Overseas Development Institute.

 
