Rapid Cycle Testing – Inspiration from the US
So far so good
Last month, I met with the COO of a US organisation that has had two Randomised Controlled Trials, both of which showed impact on one of their outcomes – reducing reoffending amongst men recently released from prison. Not only that, but the organisation has highly developed performance management systems, with frontline staff tracking data on individual clients in real time. All of this has helped it expand significantly beyond its traditional New York base.
But not good enough
But the COO was far from content. The agenda for the organisation is clear to him: why isn’t their other outcome – stable employment – showing up as impact in their trials? Even more importantly: how can they adapt their programme to change this? He doesn’t think the answers will come from another large Randomised Controlled Trial (let’s call this RCT1), or even from their diligent performance management. Instead he is now looking to Rapid Cycle Testing: designing, implementing, and testing a few small changes to the programme, in short loops that provide data on whether the adaptations are giving better results than the ‘programme-as-usual’ (let’s call this RCT2! And yes, it is both neat and annoying that the two techniques share an acronym).
And what Brad Dudding of the Center for Employment Opportunities (CEO) said was mirrored by Venture Philanthropy Partners and Project Evident. Staff there have been concerned, for many years, with building organisations that consistently deliver meaningful outcomes. They’ve known that developing the capacity to learn from data is a big part of that – and that both performance management and RCT1s are part of it. But over time they’ve all come to believe there’s a missing piece of the picture.
This position doesn’t come from an abandonment of Randomised Controlled Trials – everyone I spoke to believed they are useful for establishing whether a programme causes outcomes. But they’re frustratingly unhelpful for answering ‘Will this work better than that?’ in a timely way. RCT1s are slow, expensive, and high-stakes for the evaluated organisations. For the rest of us, they provide decontextualised answers that don’t give us many clues about whether ‘this’ will work again in a different context – and contexts are always different. And of course, we know that many evidence-based programmes with good RCT1 results behind them fail to replicate.
Rapid cycle testing
So, what does Rapid Cycle Testing (RCT2) look like? For Brad it looks like testing at least two programme adaptations to reduce attrition and increase participation amongst groups within the programme that are most at risk. These adaptations will be based on feedback from participants and staff, but also from their performance management data, the evidence-base, and experts in peer organisations and academia.
For Isaac Castillo, Director of Outcomes, Assessment and Learning at Venture Philanthropy Partners in Washington DC, it looks like supporting one of his highest-performing grantees to move towards ‘personalising’ the programme for different young people. A school tutoring programme will be using A/B testing principles to see whether they can ‘dial down’ the training hours tutors receive without imperilling outcomes, and whether changing the content of some of the sessions improves outcomes – particularly for groups of children that analysis of existing data showed weren’t doing as well as others.
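The A/B testing logic Isaac describes can be sketched in a few lines of code. This is a hypothetical illustration, not the tutoring programme’s actual analysis: it compares an outcome rate (say, the share of pupils reaching a target score) between the programme-as-usual arm and an adapted arm, using a simple two-proportion z-test. All the numbers are invented.

```python
# A minimal sketch of A/B-testing a programme adaptation, with made-up data.
# "Arm A" = programme-as-usual; "Arm B" = the adaptation (e.g. fewer tutor
# training hours). Every figure below is hypothetical, for illustration only.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z-statistic for the difference between two outcome rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical outcomes: pupils reaching a target score in each arm.
z = two_proportion_z(success_a=72, n_a=100, success_b=68, n_b=100)
print(f"z = {z:.2f}")  # |z| < 1.96: no evidence the adaptation hurt outcomes
```

The statistic itself is the least interesting part. What matters is the discipline around it: frontline staff and leaders sitting down with the result and deciding whether the adaptation should be kept, tweaked, or dropped.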
For other funders we met, and heard about, it’s going to look like supporting their grantees to implement Rapid Cycle Testing. They often need help upfront to articulate their ‘learning agenda’: the one or two questions they want to answer in the next one or two years and then how they’ll resource it (both money and time) within their business plan. This work is, in part, being driven forward by Project Evident, who helped CEO develop a strategic evidence plan to map its evidence building agenda. My colleague Tim Hobbs spent time with Project Evident in Boston, and we look forward to collaborating with them over the coming months and years.
Funders are frequently concerned with helping organisations over the long term – deciding to make changes to your programme to increase impact or reduce cost is relatively easy; seeing it through is notoriously difficult. Rapid Cycle Testing can be the bridge between these stages: implementing many changes at one time is usually overwhelming – so chunk it up into a series of learning questions (‘Can we reduce training time without loss of quality?’, ‘Can we enrol more vulnerable clients without destabilising service delivery?’) and go step by step.
This is change management
The question of resourcing this is crucial; Rapid Cycle Testing is demanding of an organisation. As Brad Dudding says, it all comes down to the frontline staff – implementing and monitoring the adaptations takes up their bandwidth. What can you take off their plates to make this new task feasible? They also need time to discuss the results with each other and senior staff to make sense of what they’re seeing.
Beyond this, the organisation needs to hold itself accountable for deciding what to do with the results – do the data justify making these adaptations a permanent part of the programme? Do they suggest a further tweak is needed? Or should the adaptations be dropped? This whole process involves a lot of change for the frontline – and that requires careful change management.
We know this at the Dartington Service Design Lab. We are midway through a Rapid Cycle Testing process with the Family Nurse Partnership programme. You can find out more about what it looks like for them here, but we are increasingly aware that this is as much a change management initiative as it is about experimentation and learning.
We’re also pushing up against the limits of our experience, and so are all the people I spoke to in the US. When you’re doing something difficult and somewhat unknown, it helps to know that others are doing it too, and we’re sharing our partial roadmaps to our destination: more effective services to improve outcomes of children and young people.
Deputy Director of the Dartington Service Design Lab
Let us know what you think via @DartingtonSDL