What is evaluation and how do you get started?
What is evaluation?
Evaluation can mean many things, but broadly it is an assessment of an intervention, such as social prescribing. It involves examining the effects and impact of an intervention, and can include looking at how these were achieved or could be improved (The Magenta Book, UK Government, 2020).
Two key types of evaluation to consider are:
- Outcome evaluation: What was achieved? Looks at impacts, effects and outcomes.
- Process evaluation: How was it achieved? Looks at factors, enablers and barriers.
Read more about the difference between the two types here.
Discrete evaluation vs continuous evaluation:
- Discrete evaluation: one-off evaluations investigating a specific intervention for a specific purpose, within a set time frame, with specific data gathered.
- Continuous evaluation/monitoring: ongoing evaluation that creates reports and outputs to track progress or impacts over time, supported by a system for data gathering.
Read more about the difference between these two types of evaluation here (page 60).
Find out how to access training on evaluation on our Resources page.
How do you get started with evaluation?
When starting to evaluate, or assessing where you currently are with evaluating a service, it is useful to map out what stage you are at and what already exists. The table below will help you plan the evaluation and the activities around it. This website for people managing evaluation is also very useful.
Stage | Tasks, activities and outputs | Questions to consider |
---|---|---|
Scoping evaluation – taking stock of where you are already | Meet with key stakeholders to map out the current process and the data available. Scope connecting datasets and the functionality of current or possible social prescribing referral systems. Identify local support for data and evaluation within the PCN, ICB, SP service, or public health. | What data is the social prescribing service already collecting? What data is available via the GP IT system? What systems are in place currently that could collect data? What systems could you put in place? Do you have funding for an evaluation? You may want to consider commissioning an external body; guidance and advice on this is on pages 69-70. |
Planning evaluation – what is needed | Map out a theory of change with stakeholders; this will help you consider all the possible impacts to measure. Agree what is most important to evaluate or find out. List the things that need to be developed, e.g. a data template, patient feedback survey, report template, and a process for collection and roles. | What do you want to achieve with the evaluation? Who is the evaluation for? Are different types needed for different purposes? What is the minimum that needs to be collected? How much capacity is there to spend on evaluation, or how will you create capacity? Who will be responsible for what? |
Setting up data collection – how will we do it | Map out processes and roles for collection. Create agreements for sharing data and processes to collate it. Develop templates for EMIS/Joy. | Who can advocate for collection of data and encourage its use? How can this connect to, and be helpful for, other targets or impacts, e.g. around population health, inequalities more broadly, or long-term conditions? How can you encourage high-quality data collection? |
Collecting and collating data | Use quality improvement 'PDSA' (Plan, Do, Study, Act) cycles to test what works and make improvements. Use team meetings for everyone to share how it is going. Run implementation sessions targeted at different staff groups. Run drop-in and support sessions. | At what level is the data being collated: the SP service? The PCN? What forums will discuss progress and be responsible for overcoming challenges? What is the process for overcoming technical challenges? Where can people go for support? |
Analysing data | Develop a template report based on what is important to stakeholders. Support team members to gain skills in analysis and reporting. Ensure data and analysis are checked by multiple people. | How much data do you need before you analyse it? How frequently will you analyse the data and produce the report? Are trends over time important? How often will you update these? |
Presenting and sharing data | Map out the audience and what you want them to do as a result of the presentation. Design the presentation based on what is impactful for the viewer. | What is the most important takeaway from the evaluation? What is the story of the data, and how does this lead logically to action? Where else could you share the information that would be useful? Where can you share learnings? |
Embedding evaluation as business as usual (BAU) | Continue, develop, or plug into existing working groups/team meetings. Hold reflective sessions on what worked well and what didn't, and agree how the challenges can be overcome. Develop improvement projects. | What did you learn from the first evaluation? What data is being collected that isn't being used? What was impactful for stakeholders? Who should lead on different aspects of the process? |