The unstoppable rise of the Learning Partner

 

Service Improvement Specialist | @keira_lowther

 

Has anyone else noticed the rise in ‘Learning Partners’? A few years ago, it would have been relatively unusual to find organisations advertising for a ‘Learning Partner’ rather than an evaluator. Now, it is common for major funders and charities large and small to commission this role.

At Dartington, we actively look to work with organisations who want to use their data to inform their decisions and make more impact. This often involves framing research questions to guide data collection, gaining multiple views on what the data mean before we reach an answer, and exploring how the implications can help them improve outcomes for children and young people, however indirectly.

So, it’s perhaps unsurprising that we have been drawn to the concept of the Learning Partner and all that the term suggests: working with organisations to enable learning and improvement, and doing so collaboratively. Having carried out several roles badged in this way, we wanted to reflect, as a team and with peers, on what the role can look like, what skills it requires, and what it can contribute to organisations, and to the sector more broadly.

We identified a number of key questions to consider about the Learning Partner role and spoke to colleagues from the Paul Hamlyn Foundation (Kirsty Gillan-Thomas and Holly Donagh), the Esmée Fairbairn Foundation (Gina Crane) and Renaisi (John Hitchin and Alice Thornton) who were kind enough to explore these issues with us. In this blog, and two follow-ups, we share what we’ve learned, and what we think it means for the sector.

Why the increase in Learning Partner roles?

There has been a change over the last five years in the way funders, delivery organisations and researchers see social interventions; specifically, there is a greater appreciation that these interventions exist within complex systems, which can make traditional efforts to evaluate and attribute impact seem unreliable or inappropriate. Holly and John both saw the rise of Learning Partner roles as a response to this appreciation of complexity: the Learning Partner can be tasked with understanding how an intervention affects, and is affected by, the surrounding system.

Closely related to this, Kirsty from Paul Hamlyn Foundation described discomfort with an approach where a set of evaluative research questions is fixed at the start of a grant period and may become irrelevant as the context changes. To manage this ambiguity, funders can feel that commissioning a Learning Partner signals a more flexible approach which will respond to changes in an intervention and the environment in which it operates.

…because of the amount of change that's happening in terms of the iteration around the objectives… the issues that are coming up in the sector that we're trying to respond to; having a more traditional evaluation, with a very particular set of questions, right at the beginning, …it would be very easy for that to become irrelevant quite quickly. That might be another factor that's encouraging us to move towards having a Learning Partner slightly more commonly than evaluator.

- Kirsty Gillan-Thomas

Another perspective is that evaluations in their traditional form are often lengthy and don’t provide information to support decision-making when it is needed. Colleagues at Renaisi felt that Learning Partnerships are sometimes designed in response to the experience of reading evaluation reports full of detail about how delivery could have been improved – after the project was over. Of course, there is always some lag in learning, but the role of the Learning Partner is often to share information in as close to real-time as possible so that people can use it to inform better decision-making while it’s still relevant.

How is a Learning Partner different from an evaluator?

People see Learning Partners and evaluators as working differently and serving different functions. Gina, at Esmée Fairbairn Foundation, described this in terms of a Learning Partner’s role being to enable and support learning within and across the funded programmes. A good Learning Partner galvanises organisations to work together, making the most of their individual learning and sharing it with others, including the funder, collaborating towards their mutual goals. This contrasts with a more objective evaluation of the quality or impact of a programme.

Another difference Renaisi identified relates to the level of enquiry. Learning Partners are not always working directly with delivery organisations to assess the impact on their users. They might be looking across different projects funded from the same source; these can be heterogeneous in their aims and methods and may not lend themselves to a single outcome evaluation. Other Learning Partner roles are designed to enable collaboration across partners by identifying learning that speaks to shared aims – again, this differs greatly from the work of an evaluator.

We heard a lot about the partnership aspect of the Learning Partner role – specifically the need for the Learning Partner to create the conditions for participants to learn. These conditions include trust and openness, so that people are able to discuss both what they don’t know and what is not working well, in the knowledge that all are working to the same end: successful delivery of the programme. Creating these conditions requires very different skills from those traditionally expected of an evaluator, whose job is to provide an objective appraisal of whether plans have come to fruition.

These differences mean that it is very difficult for an organisation to act as both Learning Partner and evaluator, and that it is essential to be clear about the role from the outset so that the conditions for learning can be created.

“Rich learning”, but what about impact?

Perhaps as a result of this difference, a key risk of investing in Learning Partner support is that the impact of funded activity can be overlooked. Holly from Paul Hamlyn Foundation saw this in projects where Learning Partners ‘narrate’ the learning journey. This becomes a kind of audit of actions: an interesting and helpful resource to guide future implementation, but not one which can tell us what effect the programme has had on those involved in the longer term. Was it helpful? Not helpful? Even harmful?

It feels right to highlight this trade-off. Potential commissioners need to be aware of what a Learning Partner cannot tell them and to think through what kind of learning they need, and when. An emphasis on learning may be right for early-stage or unstable programmes, while evaluation may be needed at other moments. Decisions about the balance of resources between ‘learning’ and ‘evaluation’ should reflect this; without surfacing this question and taking a clear decision, the quality of overall learning is likely to suffer.

Impact of Learning Partners?

For almost any Learning Partner role, there are several groups of stakeholders whose learning needs could dominate – for example, the funder, the delivery organisations, their users, or the wider sector. Each group’s needs could give rise to different questions and, from there, to different learning activities. To guide this, those funding the Learning Partner need to be clear about whose learning is being prioritised in this instance.

Where organisations recognise that evaluation is not always possible or appropriate, the Learning Partner role allows them to provide accountability for resourced activities, gain a better sense of the progress made, and gather information to feed into future decisions. We think that some of the potential dangers can be avoided if enough consideration is given to whether having a Learning Partner is the right approach, who the learning is for, and whether the conditions for success are in place. In our next blog, we will look at what those conditions might be and how the work can be planned.