Evaluating projects for people with learning difficulties: when ‘off-the-shelf’ can miss the mark

Jun 19, 2019 | by Naomi Brill

This week’s guest blog is from Naomi Brill, Insight Manager at Leonard Cheshire. She has over four years’ experience of designing inclusive impact measurement tools and processes, and is currently responsible for developing and embedding an organisation-wide framework for measuring reach and impact. It’s the second in our Measuring Wellbeing series that focuses on wellbeing and people with disabilities.

There are a number of benefits to using validated, ‘off-the-shelf’ evaluation tools. However, you face challenges if these tools don’t measure exactly what your programme is aiming to achieve, or if they haven’t been tested with your specific client group. This is especially important when you work with people who have learning disabilities.

For us at Leonard Cheshire – working with a pan-disability community and a diverse group of people in the UK and internationally – it is vital to develop evaluation tools that are fully accessible and inclusive so everyone has an opportunity to contribute to impact measurement, not just those who can use standard tools.

Are your wellbeing questions too general?

Importantly, do the outcomes you are looking to measure originate from your theory of change, and do they actually relate to the intervention you are delivering? Quite often, when off-the-shelf tools are used to measure wellbeing, the questions are too generalised and not specific enough to the intervention. If you are delivering an employment support programme and want to find out what impact it has had on a client’s wellbeing, it doesn’t seem right to ask a question as sensitive – and as unconnected to the programme – as how anxious they felt yesterday.

Bringing tailored support and empathy into evaluation

When faced with a standardised evaluation tool, you should consider the following:

  1. Will your programme be able to provide the support necessary if you receive a negative response, for example if a client reports feeling extremely anxious?
  2. Would you feel comfortable responding to the same question if you were in the same situation?

What if your project’s funder insists that you use their evaluation tool, which includes standardised questions? They may have asked for this because they are looking to collate and compare results against a shared set of data.

My advice is to have a frank conversation with the funder, explaining your concerns and alerting them to the strong possibility that they will receive either meaningless results or none at all. Consider testing the survey with a group of clients to gather evidence on whether it is fit for purpose.

You may discover that the survey is suitable for some clients, but that others do not understand the questions or why they are being asked in the context of your project. Some clients may also struggle with 0-10 scales because their sense of scale and ability to self-reflect, particularly at baseline, are limited.

If resources allow, staff can support clients to complete surveys on a one-to-one basis, which could help overcome some of the challenges around comprehension. However, there is a risk that the staff member may unwittingly influence the results: clients may give a positive answer because they think that is what the staff member wants to hear, and even when training and guidance have been provided, there may be some subjectivity in how questions and answer options are explained.

Remember, you know your clients best. There’s no point in using validated evaluation tools if they simply don’t work for your client group.

Leonard Cheshire was recently contracted by Sport England to develop an inclusive evaluation tool for its Tackling Inactivity in Colleges initiative. Of course, standardised measures work for some, but in this case the concept of ‘feeling that life is worthwhile’ and rating it on a 0-10 scale proved too challenging for the students involved in the test.

Follow this link to the case study, which outlines the process and more of the learning that emerged.

