Supporting young people into the workplace – how do we know ‘what works’?
Never before has there been so much focus on ‘what works’ in the social sector. We have a growing network of What Works Centres, including the Early Intervention Foundation, the Education Endowment Foundation and the What Works Centre for Wellbeing. This network is underpinned by sets of evidence standards that inform judgements about what counts as robust evidence of efficacy. There are now some 30 databases internationally that list social and educational programmes alongside an official rating of the extent to which they ‘work’. In some cases, this is accompanied by information on the cost of a programme and its financial return to the state: a robust estimate of the savings to the public purse it generates, by diverting participants from the criminal justice system, for example, or by supporting them into work and off unemployment benefits. Programmes seeking to support young people’s transitions into learning and work have been put under the microscope too, and there is a growing body of research focused on achieving the best outcomes, particularly for the most disadvantaged young people.
The growth of the What Works movement is closely connected to the need for financial efficiency: in times when we need to do more with less, it arguably makes sense to invest wisely, and to focus on the programmes with the strongest evidence of positive outcomes.
And there’s an equally strong moral argument. Is it not unethical to subject young people, who may be vulnerable and ‘in need’, to programmes that we don’t know for sure work? The same argument is made in defence of randomised controlled trials – usually considered the highest standard of evaluation – which randomly allocate participants to a group that receives the programme and a group that doesn’t, in order to compare their outcomes. Some raise concerns about the ethics of such trials, but others respond that it is preferable to be sure of a programme’s benefits before widespread delivery.
In many ways, focusing on ‘what works’ sounds both straightforward and obvious. Why would we argue against investing scarce resources in the programmes most likely to make a difference? Surely we should do everything we can to work out whether or not something is effective? And surely it can’t be that complicated to find out? Well, regrettably, it often is.
The first challenge is finding the counterfactual: ‘what would have happened anyway’. Making the case that a programme ‘works’ involves clearly demonstrating that it creates better outcomes for participants than if they had not taken part. We have known for some time that a proportion of young people who are outside the labour market will find their own way into jobs or training, even without support. Finding the counterfactual means showing that a programme helped more young people into jobs or training than would have got there anyway: if, say, 40 per cent of a comparable group of non-participants would have found work on their own, a programme can only claim credit for outcomes above that baseline. This is very difficult, both statistically and philosophically. Can we ever really know what would have happened without us present? And the methodological demands of identifying and working with counterfactuals will stretch the expertise of most delivery organisations.
The second challenge is closely related. To say confidently that a programme ‘works’, we need to know that any positive effect on outcomes was caused by the programme and not by something or someone else. This is referred to as attribution. Young people who are not in employment, education or training will be influenced by a wide range of factors in their lives: family, friends, community and the media, to name but a few. They are also likely to be in contact with a number of services or providers at any one time, in a variety of different ways. It is extremely challenging to be sure that it was a particular programme that had an effect, and not one of these other influences. Randomised controlled trials are one way of attempting to ‘control’ for these other variables, but many argue that it is impossible to control for the complexity of the real world.
The third issue relates to programmes themselves. To say that a programme ‘worked’, it’s necessary to know what it involves: mentoring, work experience, one-to-one support and guidance, or interview skills, for example. If one of the goals of working out ‘what works’ is to scale or replicate effective programmes, then we need to know the exact mix of ingredients so we can get it right again. Many youth development programmes simply don’t work like this, particularly those rooted in youth work. They are far more informal and unstructured, guided by young people’s assets and interests. Their intended outcomes might be quite loosely defined, and the key ‘activity’ within the programme is likely to be a relationship with a trusted adult. This is hard to define, and harder still to replicate. It’s also very hard to know whether the young people participating in a programme are ‘receiving’ the same thing.
And finally, the fourth challenge is how we include the voices of young people. Finding out what works involves measuring set outcomes: the number of young people moving into training or employment, for example, and the proportion sustaining those placements. It may involve using standardised questionnaires to measure young people’s wellbeing at the beginning and end of a programme. It does not necessarily involve asking young people what they think about the difference a programme made in their lives, what they found particularly helpful or challenging, or about their relationship with a key worker or practitioner. A narrow focus on what works won’t uncover unintended outcomes, or explain why they happened. For many youth organisations, it is inconceivable that we could make judgements about programmes without seeking open-ended feedback from young people themselves.
So what’s the answer? Acknowledging the difficulties in working out ‘what works’ does not mean that we should abandon attempts to understand the impact that programmes have on young people’s lives – intended and unintended, positive and negative. But it does mean that we need to think carefully about how we go about it.
First, we need to work out the question we actually want to answer. It’s likely to be more nuanced than simply ‘does this programme work?’ – more like ‘does it achieve its aims? For whom? In what circumstances?’ We should ask whether programmes have the potential to cause harm, and what young people think of the programmes in which they participate. We should ask ourselves what exactly we’re aiming to achieve and why, and what ‘good quality’ looks like. We should also ask who else is providing similar programmes for young people locally.
As we seek to answer these questions, we should invest in proper longitudinal research to understand the difference that programmes make over time, and look at where we can draw on external data to help us track outcomes for young people. We must do more together, rather than being driven by competitive advantage, and be realistic enough to focus on the contribution we might make as part of a broader system rather than obsessing over proving attribution. Fundamentally, we should focus on the most meaningful approaches to impact measurement: those that get us closer to the insights we need to support better learning and work transitions for young people.
Director, Centre for Youth Impact