Jun 19, 2019 | by Naomi Brill

Evaluating projects for people with learning difficulties: when ‘off-the-shelf’ can miss the mark

This week’s guest blog is from Naomi Brill, Insight Manager at Leonard Cheshire. She has over four years’ experience of designing inclusive impact measurement tools and processes, and is currently responsible for developing and embedding an organisation-wide framework for measuring reach and impact. It’s the second in our Measuring Wellbeing series that focuses on wellbeing and people with disabilities.

There are a number of benefits to using validated, ‘off-the-shelf’ evaluation tools. However, you face challenges if these tools don’t measure exactly what your programme is aiming to achieve, or if they haven’t been tested with your specific client group. This is especially important when you work with people who have learning disabilities.

For us at Leonard Cheshire – working with a pan-disability community and a diverse group of people in the UK and internationally – it is vital to develop evaluation tools that are fully accessible and inclusive so everyone has an opportunity to contribute to impact measurement, not just those who can use standard tools.

Are your wellbeing questions too general?

Importantly, do the outcomes you are looking to measure originate from your theory of change, and actually relate to the intervention you are delivering? Quite often when off-the-shelf tools are used to measure wellbeing, the questions are too generalised and not specific enough to your intervention. If you are delivering an employment support programme and want to find out what impact it has had on a client’s wellbeing, it doesn’t seem right to ask a question as sensitive, and as unconnected to the programme, as how anxious they felt yesterday.

Bringing tailored support and empathy into evaluation

When faced with a standardised evaluation tool you should consider the following:  

  1. Will your programme be able to provide the support necessary if you receive a negative response, for example if a client reports feeling extremely anxious?
  2. Would you feel comfortable responding to the same question if you were in the same situation?

What if your project’s funder insists that you use their evaluation tool, which includes standardised questions? They may have asked for this because they are looking to collate and compare results against a shared set of data.

My advice is to have a frank conversation with the funder, explaining your concerns and alerting them to the high possibility that they may receive either meaningless results or none at all. Consider testing the survey with a group of clients to provide evidence that the survey isn’t fit for use.

You may discover that it’s suitable for some clients, but it is clear that others do not understand the questions or why they are being asked in the context of your project. Furthermore, some clients may struggle with 0-10 scales because their sense of scale and ability to self-reflect, particularly at baseline, is limited.

If resources allow, staff can support clients to complete surveys on a one-to-one basis, which could help to overcome some of the challenges around comprehension. However, there is a risk that the staff member may unwittingly influence the results: clients may give a positive answer because they think that is what the staff member wants to hear, and even where training and guidance have been provided, there may be some subjectivity in how staff explain the questions and response options.

Remember, you know your clients best. There’s no point in using validated evaluation tools if they simply don’t work for your client group.

Leonard Cheshire was recently contracted by Sport England to develop an inclusive evaluation tool for their initiative: Tackling Inactivity in Colleges. Of course, standardised measures work for some, but in this case, the concept of ‘feeling that life is worthwhile’ and rating against a 0-10 scale were too challenging for the students involved in the test.

Follow this link to the case study, which outlines the process and more of the learning that emerged.
