Providing effective services to children, young people and families in times of uncertainty: measuring impact - not counting widgets

In the midst of the Covid-19 pandemic, we are all too acutely aware that it will leave a lasting impact on our society, our economy and people’s mental health and well-being. Job losses and a recession are inevitable. In taking the unprecedented step of locking down, Governments have accepted this as a necessary response to prevent mass transmission of the virus.

In the scramble to preserve as many services for children and young people as possible, many service providers will be thinking about what is core to their business and must continue, and what is non-core and can, at least in the short term, take a back seat. Continued delivery of frontline services, particularly those which address needs that have not gone away – and indeed may have grown – will obviously be a number one priority. In this context, it may be tempting for organisations to reduce their budgets for evaluating impact. This would be a mistake!

Regardless of sector, effective organisations:

  • Recognise the importance of data-driven decisions;
  • Collect, report and use performance data as a core part of their organisation – not as an add-on;
  • Maintain, and in some cases expand, their collection of performance data during a downturn;
  • Gather both qualitative and quantitative data from service users – parents/carers, children and young people – to understand impact; and
  • Use data to promote service improvement.

So why are data-driven decisions important? The simple answer is that the more decisions are based on robust data, the fewer are taken on the basis of “intuition and other less-powerful guides to decision-making and action.”[1]

As someone who has been involved in evaluation for many years, I’ve often been asked, how much of the total spend on programmes and services should be set aside for evaluation and assessing impact? And what standard of evidence should an organisation aim for?

The answer to the first part is: “Determine what you want to learn from your evaluation and build a budget that can answer those research questions.”[2] This is easier said than done!

On a more practical note, if an organisation is spending less than 10% of its programme/service budget on evaluation and understanding impact, it is probably under-investing in this area. The research in this area suggests that 15%[3] of the total budget for a service or programme should be invested in evaluation. This obviously depends on the size of programme expenditure and the economies of scale that can be achieved in large-scale programme spend.

The cost of evaluation rises rapidly with the standard of evidence an organisation aims for, with Randomised Controlled Trials (RCTs) being the most expensive. Most RCTs cost many millions to undertake and are used to understand the unique contribution of an intervention to the range of outcomes experienced by service users. For most organisations, RCTs are not practical – both the cost and the time lag between starting an RCT and being able to use the evidence make them an impractical choice.

Here at NCB, we’ve been supporting organisations with practical ways to measure impact using Outcomes Based Accountability (OBA). Our earliest projects that incorporated OBA date back to the tail end of the last recession in 2011/12, when we worked with the then OFMDFM (Office of the First Minister and Deputy First Minister) to produce a child poverty outcomes framework[4]. At the time, Northern Ireland was grappling with a range of challenges – for example, in 2012 almost one-quarter (22.6%) of young people aged 16-24 in Northern Ireland were Not in Education, Employment or Training (NEET). In the ensuing years, when the economy picked up, it was all too easy to forget where we had been and the challenges that presented themselves at the time.

However, the Draft Programme for Government (PfG) 2016-21[5] helped to put outcomes firmly in the spotlight. In the years since, many Government departments have got on with it and started to think and work in an outcomes focused way using OBA.

Now that we may be facing the challenges of 2009 again, this should draw all of us back to ask, and answer, the following questions about our programmes and services:

  • How much did we do? i.e. how much activity was delivered and to whom?
  • How well did we do it? i.e. how do we know we have delivered a good quality programme or service?
  • Is anyone better off? i.e. in what ways are people’s lives better as a result of the support we have provided?
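To make the three questions concrete, here is a minimal sketch of how they can be read straight off a small service dataset. The service, field names and figures below are entirely invented for illustration – a real report card would of course draw on your own data and your own quality and outcome measures.

```python
# Hypothetical records for a mentoring service: one entry per young person.
# All names and numbers are invented for illustration only.
records = [
    {"sessions_attended": 10, "sessions_offered": 12, "wellbeing_before": 4, "wellbeing_after": 7},
    {"sessions_attended": 6,  "sessions_offered": 12, "wellbeing_before": 5, "wellbeing_after": 5},
    {"sessions_attended": 11, "sessions_offered": 12, "wellbeing_before": 3, "wellbeing_after": 6},
]

# How much did we do? -- the volume of activity delivered.
how_much = sum(r["sessions_attended"] for r in records)

# How well did we do it? -- here, attendance as a share of sessions offered.
how_well = how_much / sum(r["sessions_offered"] for r in records)

# Is anyone better off? -- the share of participants whose wellbeing score improved.
better_off = sum(r["wellbeing_after"] > r["wellbeing_before"] for r in records) / len(records)

print(f"How much did we do? {how_much} sessions delivered")
print(f"How well did we do it? {how_well:.0%} attendance")
print(f"Is anyone better off? {better_off:.0%} of participants improved")
```

The point of the sketch is that none of the three measures requires advanced analysis – each is a count or a simple percentage over data a service already holds.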

Our experience of working with organisations across the public and community and voluntary sectors has shown that alongside a determination to improve the quality of people’s lives, there are generally high levels of interest in showing this through the use of data. This interest is tempered by concerns best defined in terms of the following statements:

  • This is about data and I’m not a researcher or data analyst;
  • I need to measure ‘soft’ outcomes like self-esteem and self-confidence, and that’s not possible;
  • If my data show my service is not working, I won’t get funding, so why would I collect data that could show it’s not working?
  • Collecting and reporting performance data is a distraction from my core role which is to provide my service or programme;
  • The process of getting to a report card is too long and involves too much time; and
  • I wouldn’t know what to do with the data even if I collected it.

Many of the statements above are genuine concerns that staff hold about moving along the journey towards becoming data-driven. Let me respond to each of those statements in turn:

  • Whilst it is probably true that you need to be a data scientist to work on large data sets – what we term ‘big data’ – you don’t need to be one to collect your own data! Firstly, you don’t need mountains of data – a small amount will do. More importantly, you know your programme best, so you are naturally best placed to collect your own data.
  • Self-esteem and self-confidence and a whole array of other ‘soft data’ can be measured and there are plenty of websites that provide robust tools for doing so (see, for example: https://www.cymh.ca/Modules/MeasuresDatabase/en/)
  • Collecting data that shows a service, or aspects of it, are not working is only the beginning of a journey. In the years we have worked supporting organisations, we’ve never seen a service immediately lose its funding because aspects of it are not working. The whole purpose of OBA is that it should be used as a service improvement tool – the opportunity to learn from mistakes, change and improve what we do has been the basis of many great inventions! 
  • Collecting and reporting performance data should never be a distraction from delivering a service. Commissioners sometimes wrongly treat paying for things like project management, staff supervision and/or the collection and reporting of performance data as somehow secondary to the delivery of services/programmes – when all the evidence points to these being key factors in delivering effective services.
  • A report card is a critical tool for showing the impact of your service. Developing one involves significant fixed costs upfront; however, once it is in place, the ongoing investment required is minimal.
  • For those who are afraid of not knowing what to do with the data once it is collected – ask yourself a simple question: what would you do differently if outcomes really mattered? If waiting times for a counselling service are 8 weeks, ask yourself: is that good enough? What would we like it to be in an ideal world? What could we do differently to reduce it? Solutions don’t need to be rocket science – just offering a better range of appointment times could significantly reduce waits.

As we emerge from the pandemic, using data to measure impact and deliver services effectively will be as important as ever to ensure that children and young people’s life chances are not eroded. Above all else we need to stop counting ‘widgets’ and measure impact!

At NCB we offer a full suite of training and support to bring people through the process of:

  • Deciding what data to collect and how to collect it;
  • Collating, analysing, visualising and reporting data using step-by-step exercises; and
  • Using performance data as a service improvement tool.

Dr Richard Nugent is a senior researcher specialising in evaluation at NCB in Northern Ireland.     

[1] https://sloanreview.mit.edu/article/the-recessions-impact-on-analytics-a...

[2] https://www.linkedin.com/pulse/evaluation-what-percentage-your-budget-sh...

[3] As above

[4] https://www.ncb.org.uk/sites/default/files/field/attachment/19%20child-p...

[5] https://www.northernireland.gov.uk/consultations/draft-programme-governm...