Numbers and Measures

Measures

Different Measurement Tools Used
A range of different measurement tools is being used within Social Prescribing, and often multiple tools are used within the same sets of reporting. The measures identified are:
1. PAM – Patient Activation Measure (background source: PAM)
2. ONS4 – ONS4 assesses personal well-being using four measures, which capture three types of well-being: evaluative, eudemonic and affective experience (background source: ONS4)
3. NPS – Net Promoter Score (background source: NPS)
4. Wellbeing Star – The Outcomes Star for adults self-managing health conditions (background source: Wellbeing Star)
5. Smiley Face – Effects of Smiley Face Scales on Visual Processing of Satisfaction Questions in Web Surveys (background source: Smiley Face Scales)
6. SWEMWBS – A 7-item short version of the WEMWBS scale to measure mental wellbeing (background source: SWEMWBS)
7. GPPAQ – General Practice Physical Activity Questionnaire (background source: GPPAQ)
8. GAD-7 – Generalised Anxiety Disorder Assessment (background source: GAD-7)
9. PHQ-9 – Patient Health Questionnaire (background source: PHQ-9)
What was less clear is how and why the different measurement tools are being used. The capture and submission of data to CCG commissioners seemed to be an accepted process, with fairly limited understanding of why the information is collected, who would use it and the types of improvements or decisions they were seeking to make. The paragraphs below offer some consideration of what might usefully be measured, by which roles and using what information.
The core role of the SPLW is to link someone with non-medical support within their own community. In this context, the key things that arguably should be measured are:
How many cases are being handled, to establish some insight into performance
How many times people are successfully referred into non-medical community support (this is the core measure of success, in terms of something that a SPLW can be held to account to enable)
Any qualitative feedback on the SPLW or the process – whether that might be complaints or compliments – in other words, some consideration of the quality of that process and service (the service being the act of making the links). Put simply, was the process helpful and, if so (or if not), why?

However, all of the measurement tools above focus on whether someone is reporting some improved sense of well-being. It is important to recognise that these measures cover all SPLW interactions and hence include cases where people may have been linked with an enormous range of offers, from benefit assessments to befriending services to singing groups, as well as walking groups and more structured physical activity. The purpose of the information collected was not clear to the people interviewed; significant caution is recommended where the information may be considered as evidence to evaluate the “success” of Social Prescribing or the “success” of a specific or broader type of service. The following provides some examples of the potential risks in using the information collected to make evaluation judgements:
There is no process for capturing evidence of attendance at services / activities – it is all anecdotal. It is also arguably counter-productive to mandate that this information be collected – the patients are all vulnerable, mostly anxious and lacking confidence; informing them that the information will be used to judge their improved health is likely to discourage them from participating
Where there is any correlation between anecdotal attendance and reported improved outcomes, there is no evidence these correlations are causal. In reality, a myriad of other factors (diet, other events in someone’s life such as bereavements, relationship breakdown, new birth in the family, changes to finance / employment status etc) are likely to have a greater impact on improved outcomes and there is no process to disaggregate the impact of these more important factors from the impact of either the SPLW support or an activity or group attended
The sample size of the data is often very small and doesn’t represent the whole activity (for example, if ten people report fantastic outcomes but 200 people attended the activity and did not report any feedback, then those ten pieces of positive feedback should be used with significant caution as wider evidence of the impact of the service generally – a short illustrative sketch of this point follows this list)
Is the sample representative of the broader population? The chances are that much of the information may under-represent some specific cultural groups and over-represent others, and hence this must be factored into any evaluation before the information is used to make any judgements or decisions
The data that is captured covers a very short snapshot in time. Where there is interest in longer-term behavioural change, there would need to be the right long-term evaluation. Many of the services interviewed do undertake some follow-up assessments, but again these are at best ad hoc and hence offer no reliable data
The measures are captured across an enormous breadth of service types, including activity, but also befriending, benefits advice and perhaps tenancy advice and support. Given this breadth of scope, it is not clear what the information is reporting on
Where judgements or evaluations are being considered about whether a particular service is delivering better outcomes, there should be caution about assuming that a yoga service, for example, in one town is comparable to another in a different town. Similarly, if a yoga session worked well for someone with a particular set of conditions, it would be dangerous to assume that this would also be beneficial to people with different sets of conditions or risks
The assessment tools used provide for no counter-factual consideration. The measures simply take an assessment and then re-take the same assessment x months later. Where a score has decreased, the assumption is probably that this is bad. However, it may be that other things in that person’s life have been terrible and that, without the support, the decline would have been even greater; with no comparison group, this cannot be distinguished from the service having had little or no effect
Even if many of these statistical considerations can be addressed, it must not be forgotten that very often the “service” or activity is delivered by a charity at no or very little cost. This type of evaluation mechanism is arguably better suited to the commissioning of multi-million-pound contracts than evaluating local charity service provision.
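
To make the sample-size risk above more concrete, the following is a purely illustrative sketch (written in Python; the figures of ten positive responses and 200 attendees are the hypothetical numbers from the bullet above, not real data). It simply shows how wide the range of possible “true” satisfaction remains when 95% of attendees are silent.

```python
# Illustrative only: hypothetical figures from the sample-size example above.
attendees = 200          # people who attended the activity
positive_responses = 10  # people who gave feedback, all of it positive

response_rate = positive_responses / attendees
print(f"Response rate: {response_rate:.0%}")  # 5%

# If every silent attendee had a poor experience, true satisfaction could be
# as low as 10/200; if every silent attendee had a good experience, as high
# as 200/200. The feedback alone cannot distinguish between these extremes.
worst_case = positive_responses / attendees
best_case = 1.0
print(f"True satisfaction could lie anywhere between "
      f"{worst_case:.0%} and {best_case:.0%}")
```

The same reasoning applies to any small, self-selected feedback sample, whichever measurement tool is used to collect it.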

It is certainly true, however, that there is a need for some evaluation or consideration on the value of the Social Prescribing model, where the total resource invested in the current 12-month period may be in excess of £100 million (where overheads, management costs across the 1250 PCNs, contract costs with the SPLW providers and software costs are included).
It is, however, notoriously complex to measure and evidence the success of preventative expenditure. Where there is an acceptance that the role of the SPLW is about prevention (preventing further escalation of a health or care need, for example) rather than treatment (making it go away), then the following provides some areas for consideration:
There is an opportunity, and an emerging appetite, to improve the operational reporting of Social Prescribing, as it currently lacks the rigour necessary to be confident that the process is making the best use of resources. The current approach to reporting will undoubtedly be affected by the sense of impermanence of the service
There may be value locally in capturing information about the accuracy of service information and whether there are service offers available locally that meet people’s needs. This information can then be used locally to better target resources into the types of support and activity for which there is real demand and need, as well as to improve the accuracy of the data provided by providers
The core purpose and hypothesis behind Social Prescribing is that where someone’s health is starting, in part, to be affected by a wider set of personal, behavioural and potentially emotional factors, there is likely to be value in helping them access relevant support from community services. The evidence from the discussions is that patients, managers and SPLWs all believe that this core purpose (linking people to a community service) is deeply valued by patients. Indeed, intuitively this makes sense; people who are starting to report distress or unhappiness are surely better off “doing something about it”, supported by a SPLW to find the right sort of support. Doing something about it may simply be attending befriending sessions, joining a social group, a choir, a walking group or a boxing club
There seems to be an opportunity for more straightforward reporting by patients, simply about whether they found the SPLW process and support valuable and helpful, with an invitation to offer some examples of what was good or what could have been better. This management of quality and customer satisfaction is critical and would seem to be a valuable focus
Where there is a will to test more scientifically whether doing “something” is worthwhile, this should be subject to the right controlled evaluation process. Current approaches to collecting performance measures do not deliver the right evidence to make this judgement
There may also be an interest in going deeper and evaluating whether service a or service b (or c or d etc – the range of services is arguably limitless, given the breadth of service offers available) delivers a demonstrable improvement for conditions v, w (or x or y etc). This should be a set of controlled evaluations and will need to account for the impact that other life changes or events have on someone’s improved health outcomes (as distinct from the impact of the service itself)
Of particular interest may be setting up pilots where individual patients take personal responsibility to track their own health. There is a huge range of tools available to track health indicators through blood tests, VO2 max, weight, calories used and so on, alongside perception measures of general wellbeing and happiness. This data is best positioned as belonging to the patient, and for them to take responsibility for tracking it for their own interest and benefit. This may not provide evidence of the impact of service a, b or c, but it provides an opportunity to test whether people, when given the ability to regularly measure their health objectively, may generally be more willing to change their behaviour. For example, an initiative could give people attending a health check at age 40 the opportunity to monitor their own health compared to statistical averages, track this for a period of perhaps two years and then comment solely on whether being able to track this helped them change their behaviour. Where someone is motivated to address an emerging risk indicator, for example, there is evidence available through general web research to inform them of their choices and options. Furthermore, people could be given the opportunity to search, using open data, for the sorts of dietary and activity sessions that might deliver the recommended support. The pilot would evaluate whether people are more motivated to change behaviour because they have access to tools to track their health and their increased risk of conditions such as cardiovascular disease, diabetes or stroke. A small sketch of what such self-tracked data might look like is given below.
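
As a purely illustrative sketch of this self-tracking idea (not based on any real NHS tool or dataset; the field names, reference range and figures are assumptions made up for the example), the following shows how a patient-held record of a few indicators might be represented and checked against a typical range.

```python
# Illustrative sketch only: the fields, reference range and figures below are
# hypothetical assumptions, not taken from any real NHS tool or dataset.
from dataclasses import dataclass


@dataclass
class MonthlyReading:
    month: str
    weight_kg: float
    resting_heart_rate: int
    wellbeing_score: int  # e.g. a self-rated 0-10 wellbeing question


# A hypothetical reference range a patient might compare themselves against
TYPICAL_RESTING_HEART_RATE = (60, 80)


def flag_emerging_risks(readings: list[MonthlyReading]) -> list[str]:
    """Return simple, patient-facing flags where the latest reading sits
    outside the illustrative reference range."""
    latest = readings[-1]
    low, high = TYPICAL_RESTING_HEART_RATE
    flags = []
    if not low <= latest.resting_heart_rate <= high:
        flags.append(f"{latest.month}: resting heart rate "
                     f"{latest.resting_heart_rate} is outside the typical "
                     f"range {low}-{high}")
    return flags


history = [
    MonthlyReading("2023-01", 82.0, 72, 5),
    MonthlyReading("2023-06", 84.5, 88, 6),
]
print(flag_emerging_risks(history))
```

The point of such a pilot would not be the tool itself, but whether having this kind of ongoing, objective view of one’s own indicators changes behaviour.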

The purpose of this paper is to provide commentary and feedback on the SPLW market and the performance measures being captured. The comments are offered to stimulate debate around what appears to be an emerging consideration of the need for evaluation of SPLW.
Outcomes
According to the HSSF, the following outcomes will be classed as essential:
Ability to provide outcome codes as determined by the NHS Minimum Data Set
Ability to provide coded healthcare and validated wellbeing measures, e.g. ONS4
Ability to create bespoke templates for local need to assess and analyse outcomes, linked to a person’s record.
Ability to understand at a local level what services people are using and where there may be gaps
