The week of reckoning is here: the annual phonics screening test is being administered to Year 1 pupils throughout the UK. Ostensibly, Year 1 pupils are being assessed on their phonic knowledge, and this makes sense: Year 1 is a critical year for the development of phonic knowledge; phonics underpins literacy; and grapheme-phoneme knowledge is a key predictor of reading. So far, so good. Yet expectations for success often focus on pass rates, and worried teachers furiously prep the ‘alien words’ not covered in most reading schemes. It can feel as though the pass rate is a reflection of good teaching, and many a Year 1 teacher I know feels that this week delivers a pass/fail mark in the eyes of their SLT and peers. Is the annual phonics test a measure of good teaching?
For an assessment to be meaningful, it has to be reliable and valid. Reliability means that the assessment gives you consistent information. If I weigh a kilo of potatoes at 2pm and weigh them again at 5pm, they should remain the same weight: 1 kilo. My scale is reliable. It shouldn’t matter whether I give the phonic screen on Monday or Wednesday – if it is reliable, it should yield the same results. For an assessment to be valid, it should measure what I say it measures. So, returning to the kitchen scale, a kilo of potatoes should always weigh a kilo. If the scale reads ‘21st June’, that is not a valid measurement: the scale may be telling me something, but not what my potatoes weigh, which I know to be 1 kilo.
If the phonic screen were a reliable measure of good teaching, every child in one teacher’s class should perform at the same level. If it were a valid measure of good teaching, it should correspond with attainment in the other subjects for which that teacher is responsible – maths, art, maybe games. Considered in this way, the phonic screen is clearly neither a reliable nor a valid measure of good teaching. So how should schools approach the results?
Decades of reading research underpin this screen, much of it undertaken by specialists in dyslexia. We now know that pupils who struggle with reading and writing often have difficulties with phonological awareness – the ability to manipulate speech sounds – and that this correlates with difficulty mapping speech sounds to the alphabet (grapheme-phoneme correspondence, or GPC). Blending sounds together for word reading becomes a struggle, and pupils who fail to make progress through the book bands often display poor fluency: they don’t generalise word endings, and they try to blend words sound by sound, a very inefficient strategy indeed. Here the purpose of the phonic screener is totally transparent: early identification of pupils at risk of reading delay because of weak phonological and phonemic skills.
Non-word reading – ‘alien words’, for the purposes of the screener – is a terrific predictor of dyslexia. Specialist assessments use it as a measure because non-words rob a pupil of the ability to compensate with a strong language domain: these words cannot be predicted from a few letters and mapped onto a strong mental dictionary, so they become a measure only of the capacity to blend discrete sounds. This isolates phonological awareness, a subskill required for reading and spelling, and here, again, the purpose of the phonics screener is totally transparent. More than that, however, the phonics screener is both a reliable and a valid measure of an underlying weakness in phonological and phonemic awareness which will affect the acquisition of literacy.
In the language of another timely issue – I’m voting phonic screening in.
But now what?
Thinking beyond the pass rate, what is a classroom teacher meant to do to help a pupil who has failed the phonic screen?
Considered assessment has a specific purpose in the graduated response: to highlight the areas which require support. From considered assessment we plan appropriate intervention, ‘do’ the intervention teaching, and review whether the support met the need of our learner. Notice that this cycle – assess, plan, do, review – follows the SEND Code of Practice, and for good reason. Without a plan of action, all the TA support in the world won’t help Manisha or Benjamin learn to read.
I’d like to ask you this:
Drive for Literacy helps teachers help students to access the curriculum, especially in literacy. Our next blog will answer these questions in more depth – and point you and your school to the resources which will support you. Because while the phonic screener does not measure good teaching, it does start a discussion about what good teachers need: good support.