Evaluating Your Pupil Premium Strategy

Published: 14 June 2022
Marc Rowland explores how we can evaluate whether our actions are making a difference for our disadvantaged pupils

Inspired by Scott Fletcher (@SFL2326), here are some reflections to help review the progress made in addressing disadvantage this year, based on the DfE's standard Pupil Premium template.

I realise this is not the most exciting subject matter! But it is important. Fundamentally important for pupils.

Evaluation should focus on effective processes, as well as outcomes. The process is critical for ensuring that disadvantaged pupils are thriving in the classroom and attaining well.

Process evaluation


- Has your ‘intent’ statement created a shared understanding of ambitions, and of the principles that underpin your school's strategy for addressing disadvantage?

- This is something that all school staff should understand. Is there a shared understanding of global objectives for disadvantaged pupils – from governance to pastoral care to the classroom?

- Does everyone in school believe that disadvantaged pupils can thrive in the classroom and attain well? Does everyone buy in?


- Have you accurately identified the needs of the disadvantaged pupils in your school? The impact of disadvantage on learning is a process, not an event, and it is not static. Some families may not be significantly economically disadvantaged but may be exceptionally stretched or time poor, or be impacted by factors outside their control. Disadvantage goes beyond the Pupil Premium label.

- Be wary of negative language about disadvantaged pupils and their families in the challenges section.

- Ensure that challenges are precise. Normanton Junior Academy in Wakefield is a good model, as is Clockhouse Primary School in Havering.

- Key questions to consider – have we effectively assessed:

How does disadvantage impact on pupils' learning in our individual school context?
What are the *controllable* factors impacting on disadvantaged pupils' learning?
What factors are MOST preventing disadvantaged pupils from thriving in the classroom and in wider school life?

Intended outcomes and success criteria

- Well defined success criteria are key to dispassionate impact evaluation.

- Vague success criteria make it easier to claim success. Remember that colleagues involved in the implementation of an approach are not always the best judges of success. Be wary of activity being mistaken for outcomes.

- The intended outcomes should link closely to the challenges pupils face.

- It is important to be mindful of success criteria that are actually activities. For example: ‘Learning Support reports are produced as soon as possible for children that need them. The information in the reports is shared with parents and used to plan in class (and small group) activities and interventions.’ Whilst these reports may well be extremely useful and their consistent use may be an indicator that an approach is being implemented effectively, long term goals should centre on pupils' learning.

Activity

- Activity should link to challenges and intended outcomes / success criteria. The activity section uses the EEF’s tiered model. There is no longer any need to list the individual costs of activities, just the budget associated with each tier.

- Schools should link activities to challenges and ensure that activity is informed by research evidence, to maximise the likelihood that it will be effective. Remember that research evidence can only point us in the right direction: we should use it to inform our decision making, not to justify decisions or apply it superficially. Research evidence does not have to come only from the EEF toolkit.

- Check that the teaching and learning tier focusses on the challenges identified through assessment.

- When evaluating activity, consider the following (based on the work of Thomas Guskey):

Staff acquisition of new knowledge and expertise.
Staff use of new knowledge and expertise.
Organisational support, including not implementing too many things at once.
Impact on pupils as learners.

- Avoid trying to do too many things at once… which can lead to poor implementation and weaker outcomes. Ensure that activity is focussed on helping pupils to be better learners.

Impact evaluation

- High-quality impact evaluation is fundamental to better outcomes for disadvantaged pupils.

- Evaluation is fundamental to continuous improvement and to building a solid evidence base that will enable the plan to impact on disadvantaged pupils. It should not be treated as an optional extra. It is part of good implementation.

- Impact evaluation is about finding out whether activities and strategies have been successful, and why. It is not about proving that strategies and activities have been successful, or finding evidence to justify decision-making. It is important to decouple evaluation from accountability. Trying to prove a strategy has been successful is detrimental to improved outcomes for disadvantaged pupils. Governors should be involved in the design of the impact evaluation framework.

- When evaluating impact:

Focus on whether activity has been successful, and in what circumstances.
Look for evidence of impact on pupil outcomes.
Put in place a robust evaluation framework at the start of the strategy.
Ensure that the evaluation framework is transparent and shared with staff and governors. It is particularly important that staff involved in the implementation of an approach understand how it will be evaluated.
Report on progress against that framework.
Judge success based on outcomes for pupils, not institutions.
Measure success based on outcomes for disadvantaged learners.
Ensure that intended outcomes and success criteria centre on impact on learners, rather than activities.

- When evaluating impact, avoid:

Cherry-picking the data used to evaluate.
Being overly reliant on the reactions of those delivering the approach.
Using vague success criteria like ‘Improve teaching’, ‘improve engagement’ or ‘improve reading for pleasure’.
Mistaking activity – e.g. staff training – for impact.

- Case studies can be used for impact evaluation, but it is important that the pupils featured in case studies are chosen at the start of the approach, rather than retrospectively.

The following resources can support good impact evaluation:

ImpactEd website (https://impacted.org.uk).
EEF ‘DIY Evaluation Guide’, Practical Tools.
Thomas Guskey (2002), ‘Does it make a difference? Evaluating professional development’, Educational Leadership, 59 (6), 45–51.
Robert Coe (2013), ‘Improving education: a triumph of hope over experience’ (inaugural lecture), CEM, Durham University.
Marc Rowland (2021), ‘Addressing educational disadvantage: The Essex Way’, John Catt.

The highest form of accountability is that to our pupils and our families. Disadvantaged pupils need learning to be the best it can be.

Good evaluation is key to this.

Marc Rowland, Unity Research School
