Benchmarking has multiple applications within the school setting – one of which is its role in monitoring pupil performance through assessment and analysis. With exam season in full swing, Chris Beeden, educational data consultant at School Data Managed, considers how schools assess pupil attainment and progress – and how benchmarking is a key resource within this
In the last few years the assessment landscape has changed; the two Michaels – Gove and Wilshaw – changed everything. The first overhauled the national assessments; the second overhauled the inspection framework. Levels and GCSE letter grades were scrapped and new scoring systems introduced, while at Ofsted the 2015 framework stated, ‘In judging outcomes, inspectors will give most weight to pupils’ progress.’ It also stated, ‘Within this, they will give most weight to the progress of pupils currently in the school…’ The consequence of these changes is today’s data mess.
Levels – 3c, 3b, 3a, 4c and so on – gave a feeling of certainty, and people understood progress. A child at 2c in Year 2 who moved to 4b by Year 6 (roughly the national average) was most likely to achieve a C at GCSE. Anything higher represented positive progress; anything lower, negative.
Schools have now changed how they assess, with many having been told, “Thou Shalt Not Use Levels.” Some now record every single ‘skill’, ‘objective’ or ‘KPI’; some record them clumped together. Descriptors such as ‘emerging’, ‘developing’, ‘secured’ and ‘mastered’ are applied to each of these. There can be up to 100 KPIs per subject – don’t mention teacher workload! There are now many different ways schools are assessing.
The key to clarity is, whenever there is a number, to set a benchmark beside it. In a small test, say, a pupil achieves nine out of 15. The benchmarks might be 10 – roughly the national standard (‘expected’, or GCSE Grade 5) – and 12, representing the higher end (‘greater depth’, or GCSE Grade 7). The benchmarks can be adjusted for a pupil’s prior attainment. The school can then convert to a central, ‘no levels’ system as required. Leadership does need a whole-school system, but it must flow from high-impact formative assessments.
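For data managers who want to automate this, the idea above can be sketched in a few lines. This is a minimal, illustrative sketch only: the thresholds (10 for ‘expected’, 12 for ‘greater depth’ on a 15-mark test) are taken from the example in this article, and the function and band names are hypothetical, not part of any official scheme.

```python
def band_for_score(score: int, expected: int = 10, greater_depth: int = 12) -> str:
    """Place a raw test score into a benchmark band.

    Default thresholds follow the article's 15-mark example; adjust them
    per test and, where appropriate, per pupil's prior attainment.
    """
    if score >= greater_depth:
        return "greater depth"   # the higher end (roughly GCSE Grade 7)
    if score >= expected:
        return "expected"        # about national (roughly GCSE Grade 5)
    return "working towards"     # below the national benchmark

# The article's pupil scored nine out of 15 against a benchmark of 10:
print(band_for_score(9))   # -> working towards
print(band_for_score(12))  # -> greater depth
```

Because the thresholds are parameters rather than hard-coded, the same function can be reused across tests of different lengths, which is the point of setting a benchmark beside every number.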
At the summative level you can look at the group: say 46% of the pupils achieved the benchmark score for the test. Again, set that 46% against a benchmark, this time taking into account the level of the assessment and the prior attainment of the pupils.
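The group-level figure is simple arithmetic, and a short sketch makes that concrete. The class scores below and the benchmark of 10 are invented for illustration; only the method (count pupils at or above the benchmark, express as a percentage) is from the article.

```python
def percent_at_benchmark(scores: list[int], benchmark: int) -> int:
    """Percentage of pupils scoring at or above the test benchmark."""
    at_or_above = sum(1 for s in scores if s >= benchmark)
    return round(100 * at_or_above / len(scores))

# Hypothetical class of ten pupils, benchmark of 10 out of 15:
scores = [9, 12, 10, 7, 14, 8, 11, 6, 10, 13]
print(percent_at_benchmark(scores, 10))  # -> 60
```

That 60% is itself just a number until it is set against its own benchmark – one chosen for the level of the assessment and the cohort’s prior attainment, as the article describes.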
The new Ofsted inspection data summary report (IDSR) dashboard uses the percentage reaching ‘expected’ from different starting points. For reading, the national ‘expected’ figure is 17%, 70% or 97% – a different benchmark depending on a pupil’s starting point. This is a far more understandable figure than saying progress is 0.1 (Progress 8 estimates, and the uncertainty around them, are a topic for another time). Schools should be allowed to get back to using attainment to show clearly where a pupil is – with benchmarks alongside to provide the summative information.
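A starting-point benchmark of this kind is essentially a lookup table. The sketch below uses the 17/70/97 reading percentages quoted above; note that which figure corresponds to which prior-attainment band is assumed here purely for illustration, and the function name is hypothetical.

```python
# National % reaching 'expected' in reading, by prior-attainment band.
# Mapping of figure to band is an assumption for this sketch.
READING_EXPECTED = {"low": 17, "middle": 70, "high": 97}

def versus_benchmark(cohort_pct: float, starting_point: str) -> float:
    """Difference between a cohort's % 'expected' and the national benchmark
    for pupils with the same starting point."""
    return cohort_pct - READING_EXPECTED[starting_point]

# A cohort of middle prior attainers where 75% reached 'expected':
print(versus_benchmark(75, "middle"))  # -> 5 (five points above the benchmark)
```

Reporting “five points above the national figure for pupils with the same starting point” is the kind of plain-attainment statement the article argues is more understandable than a 0.1 progress score.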
The key to a successful benchmark is that the terminology is understood by the reader – whether that is a parent, teacher, governor or HMI. This allows you to talk confidently about summative data while working with teacher-led formative assessments.
This article was first written for Education Executive