In the overall project design, the evaluation constituted a separate workstream with dedicated partners. It was planned as an independent exercise to be conducted over three years by researchers representing two UK-based academic centres – University College London (UCL) and the University of Oxford.
The study was led by an expert from UCL trained in sociology and anthropology, supported by a small team. Beyond the research team itself, the evaluation benefited from the expertise of an academic Advisory Board, chaired by a representative from the University of Oxford. The board gathered distinguished scholars specialised in issues relevant to the project, such as the reception of asylum seekers, as well as in evaluation itself. It met three times – to discuss the methodology, the interim findings and the draft final report.
That group which, in a sense, had oversight of the research, but no control of it, just advised on it […] It was the one forum which looked at the research exercise from an academic perspective […] Unconflicted by any other issues.
Source: U-RLP project hearing
Both the evaluators and the project management found the Advisory Board to offer important support. With its in-depth, multidisciplinary academic expertise and its independence, the Advisory Board was able not only to provide subject-matter guidance, but also to challenge assumptions (those underlying the theory of change, for example) and expose possible biases.
To have that external body was very helpful, because it really stopped that pro-innovation bias […] We had to work quite carefully to make sure that – you know – the message was constructive and helpful, but actually having the Advisory Board there was good really to help us push back when we needed to.
Source: U-RLP project hearing
It was also in a position to provide guidance to the researchers in a dynamic evaluation context, which required flexibility in the methodology and added nuance to the findings.
While project beneficiaries were involved in the evaluation through a large number of individual interviews, the lead evaluator would ideally have involved representatives of the target group in data collection as community researchers. This was not possible within the project’s short timeframe, however.
The project created cooperation mechanisms that, on the one hand, enabled the implementation of the evaluation and, on the other, fed the evaluation results back into the partnership. The evaluation involved extensive cooperation with all project partners at various stages. They were included in the development of the approach and of the project’s theory of change. They supported the data collection process and were kept informed of the evaluation’s progress. The evaluators, while not involved in the main activities, closely followed the project’s work and participated in its various meetings. Specific meetings were dedicated to discussions of the evaluation itself: one was devoted to an extensive explanation and clarification of the applied approach for project partners, and separate meetings were organised to present and discuss the evaluation’s interim and final results.
The experience of U-RLP’s evaluation shows how important it is to set aside appropriate resources for evaluation in order for it to produce the expected learning. As the project evaluators noted, the aspiration to measure impact is an ambitious one, so it should be allocated a commensurate budget and timeframe.
Ultimately, the value of the project is in what you learn from it. It was an awful lot of value for the people who were affected, but that passes and then the ultimate value is in what you learn, so evaluation shouldn’t be an afterthought in terms of resources, but more central to that.
Source: U-RLP project hearing
If appropriate resources are not available, then expectations of what can realistically be delivered should be managed.
U-RLP’s experience underscores the need for sufficient time to be earmarked in two dimensions in particular – working time for the researchers and time for the evaluation itself, including its preparation. The project’s evaluation can be seen as an example of best practice for various reasons; both its breadth and depth are impressive. This was possible thanks to the in-depth expertise, as well as the strong personal engagement and commitment, of the otherwise rather under-resourced evaluation team, supported by project management.
We found this project absolutely mushroomed. And I had the equivalent of research time, including myself, of one full time employee working on this. To do and to deliver everything that we did only happened because we all put in way, way more time.
Source: U-RLP project hearing
One reason resources proved insufficient was that not all obstacles and difficulties were anticipated from the start. As the lead evaluator observed, had there been more clarity in this respect, she would have budgeted for more research time. This is, thus, an argument for devoting sufficient time to risk analysis at the evaluation design stage.
Apart from working time, the U-RLP experience also shows that the project would have benefited from a preparatory phase. This would have allowed the theory of change (see below for a discussion of how time-consuming this process was) and the research tools to be developed more comfortably, before the implementation of the activities themselves. The lead evaluator estimated that such a phase would require roughly a couple of months, possibly as many as six. Importantly, UIA introduced such a preparatory phase within projects in subsequent funding editions.