Assessment: The journey so far…


I often wonder why I’m so interested in assessment. Why I read about it in my holidays and feel perfectly happy tinkering and engineering my ideas about it all the time. It never feels like work. Well today, as I sit on a Greek island somewhere far away, I know why. It sounds too profound, too Hollywood, too idealistic, but to my mind getting assessment right will offer a great gift to our children. The gift will be to know themselves very well indeed; to look inside their own mind and understand themselves; to be deeply honest with themselves; to look towards quality and measure themselves against it; to work towards improvement because they know change is always possible; to see the change in themselves and to understand that improvement is always in their own hands. The right approach to assessment, throughout a child’s school career, has the potential to do this; it has the potential to make our kids strong people.

So, what about this year? How far have we travelled in this direction? Well, I think I can sum up this year as a time when we sought to close the gap, but this time, not just between what children know and need to learn, but at last, as an educational community, between actual learning and all manner of ciphers for learning. In simple terms, these last two years have been about accepting the fact that we’ve spent too much time talking about units of measurement for learning rather than the learning itself. And in that mix, we often misunderstood assessment, or at least saw it only in one light, which was to act as a critical tool to hold ourselves and our teachers to account. I’ve no doubt that many teachers and leaders did recognise this and fought against the tide, but so many didn’t and perhaps still don’t.

Over the last two years, I’ve helped implement Learning Ladders, a new curriculum assessment system, in our school, and supported some other schools to do the same. I consider assessment a pedagogy: whatever we do, we always remind ourselves that when we evaluate a child’s learning, we not only do something with that evaluation that benefits the child further, but also do our best to involve the child in the process, gradually drawing our pupils into evaluating their learning themselves and acting on that evaluation. I will always maintain that the best teachers seek to become useless to their pupils…eventually.

So now, where are we? What have we gained? What do we need to think about next? (I’m thinking both in terms of our assessment system and for assessment in general.)

The gains:

  • A large part of the educational community is taking charge, sharing ideas and practice. For example, look at the work of Beyond Levels and the #LearningFirst conferences. School leaders and teachers are coming together to share ideas and tease out the best ways forward for learners.
  • Assessment has moved away from being associated purely with data and tracking and is becoming increasingly associated with making an impact on pupils’ learning. Hurray!!
  • More teachers are being held accountable through the development of their pedagogical craft, with a view to improving learning (and learning behaviour) over time, rather than being held to account through straightforward numerical data that may or may not accurately reflect a child’s journey towards academic progress. (I’ve just spent six months on secondment with a really challenging group of children; much of the progress they made was in their behaviour and attitude to learning, and in some cases it was certainly not academic. Did I have any effect on that ‘progress over time’? Yes I did! Now they are well set up for next year, when they will take off!)
  • Senior leaders are able to lead teachers more effectively because gaps in pupils’ learning are easier to identify, which results in more productive conversations about pupils’ progress. Our Learning Ladders system has a finely tuned gaps-analysis tool, so that when overviews of progress are looked at, conversations about why certain pupils are or aren’t making progress become very detailed about the aspects of learning in question. The result is really productive conversations about assessment, curriculum planning and pupil progress, rather than those of the past, where progress meetings revolved around levels or sub-levels and the details of pupils’ learning were not always foremost in people’s minds. We’ve discovered that being able to drill down to granular curricular detail makes it much easier to pinpoint issues. Sometimes the issue might be teachers’ confidence in assessment: they’re just not sure how to assess a certain aspect; it might be their first time in that year group and they’re finding their feet. Other times, a teacher might need to refine and focus their planning a little more so they hit gaps in learning, and at other times we might see that a child was absent on the three occasions division and fractions were taught, for example. This depth of conversation just didn’t happen as easily with levels, for so many reasons.
  • School leaders can look at overviews of learning (which all leaders have to), but with Learning Ladders we have purposely not made inflexible benchmarks or narrow progress thresholds for points within the year. Achievements in learning are noted on the system and accumulate through an algorithm into a score, but this is used as a measurement outline. This allows for the overview that school leaders need, because we have a traffic-light score range based on a very general expectation of progress, but the fact that it’s a range means that teachers focus on the learning rather than on getting to a certain score. We have also worked hard to make our assessment ethos mean that everyone understands the difference between ‘being seen to reach a level or a score’ and real progress in learning. These two were often confused under levels. Back then, moving up a level assumed progress in learning, whereas now real progress in learning leads to an increase in the score. This might all seem like playing with words, but this is the whole impetus behind the idea of ‘learning first’… put the learning first and data will follow, but if you put the benchmarks first, it might not.
  • Curriculum and assessment relate to each other in a cause (what do I want to learn?) and effect (what did I learn?) cycle, rather than being loosely associated through summative assessment outcomes. This means that learning intentions are not merely derived from the national curriculum; they are the curriculum. In the past there were two languages, ‘curriculum’ and ‘assessment’, which meant that teachers had to translate the taught curriculum into learning outcomes and then into assessment judgements. Teachers no longer need to bridge the gaps between what is taught and assessment judgements because they are using the same language.
  • Teachers are more able to use assessment as a framework for planning because they are clearer on what children need to learn next and where there are gaps in children’s learning.
  • Teachers are able to access quality learning outcomes through shared learning moderation within our Learning Ladders group, and soon these will be available to all on the system too. This means that the sloppy ‘best fit’ approach has been refined into a much sharper mastery approach for the detailed steps in learning. While I agree with many that the interim frameworks are far too demanding (that was my experience in Year 2 anyway), the Learning Ladders system means that the details required for a mastery curriculum to work well are exemplified. All assessment needs to be underpinned by shared images of quality; this should be at the heart of any decent assessment system.
  • After a year of everyone teaching the new curriculum, teachers are moving from using Learning Ladders as a ‘tick-off tool’ to much more of a support for planning. Yes, we teach more than just the criteria on Learning Ladders, because that is the basis for a broad and balanced curriculum, but that structure and mapping of the curriculum has been invaluable in supporting teachers mapping their way through all the changes. Teachers’ confidence in assessment, and in planning for it, is now on the up!
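
The traffic-light score range mentioned above can be pictured with a small sketch. This is purely illustrative: the real Learning Ladders algorithm is not public, and the function names, scoring rule and thresholds here are my own assumptions, not the system’s.

```python
# Hypothetical sketch of a score-and-range approach: achievements accumulate
# into a score, but the score is read against a broad traffic-light band
# rather than a single benchmark. All names and numbers are invented.

def accumulate_score(achievements, total_criteria):
    """Turn ticked-off achievements into a simple percentage score."""
    return 100 * len(achievements) / total_criteria

def traffic_light(score, expected_range=(40, 60)):
    """Map a score onto a broad band, so that a *range* of progress,
    not one precise number, frames the conversation."""
    low, high = expected_range
    if score < low:
        return "red"      # well below the general expectation
    if score <= high:
        return "amber"    # within the expected range
    return "green"        # beyond the expected range
```

Because the band is deliberately wide, two pupils with different scores can sit in the same band, which keeps the discussion on the learning behind the score rather than on chasing a particular number.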

Areas of development:

  • The DfE interim frameworks don’t seem to reflect the key performance indicators considered appropriate by the rest of the education community. A lot of the guidance that goes with them is vague and open to many different interpretations. This has made teacher assessment more difficult and less reliable, as schools become more reactive to moderators’ requirements than to authentic learning needs. Something isn’t right with those ITAFs! How many teachers have kicked themselves because they know that competent seven-year-old writers have had to be labelled ‘below expected’ because they didn’t use enough commas in a list or possessive apostrophes? This cannot be right.
  • 53% of pupils in the country reached the expected standard in RWM at the end of primary school. Really? Yes, expectations are higher, but pupils and teachers haven’t suddenly been knocked on the head, so come on! Are we saying failure is a sign of success, DfE? Schools need to plough ahead and make assessment work for their pupils; I know it’s hard, but we have to ignore this nonsense and follow our principles on assessment. We’re all in the same rocky boat of moving goalposts and incompetent management of national assessment from above, but we can still get on with doing what we know is right.
  • For some schools, assessment is still a vehicle for accountability much more than it is for learning. Leaders need to look at progress over time in both hard and soft data and ensure this is aligned to authentic learning and not ‘ciphers for learning’. In other words, don’t set up a system that kids you into thinking all is well when it isn’t!
  • Many schools still set children into ability groups and limit children’s learning through this approach. These schools need to trust learners and communities of learners and allow all pupils to reach their very highest potential; ability setting does not allow for this, academically, socially or emotionally. Learning is not all about knowledge and skill acquisition.
  • Lastly, we have spent the past couple of years getting to grips with everything new, but we still need to move assessment more into the hands of pupils. Assessment is not complete unless it engages learners in assessing themselves and moves them more and more towards independence. I think with Learning Ladders we have this in our sights. We have developed pupils’ overviews to summarise learning and show next steps, and these have been very effective; next we need to refine them so they are easier for pupils to use regularly. For me, this is the beauty of Learning Ladders: it is evolving to suit the needs of pupils, teachers…and leaders. This is the right way around, I promise you.
  • As always, I have to add that any assessment system can be used badly if the leaders running it don’t have sound principles on assessment; however, some systems encourage a certain approach that is modelled on the old levels system. No names here, but these should be avoided.

Final thoughts

I’m so optimistic that we can make assessment work for pupils in the UK, but we have to keep nudging the government our way and stand up for teachers in the classroom. Yes, we need to check teachers are doing the best by their pupils and then we need to check that school leaders are doing the best by their school communities, but as Mary Myatt put it so well, this must be through a culture of ‘High challenge and low threat’. The unwelcome consequences of a high threat culture in assessment mean that people then do things more out of fear rather than reasoned and deliberate action. High challenge, low threat always results in the best outcomes for pupils, teachers, leaders…and humans.

Why Assessment for Learning still matters!

I read a tweet this week suggesting that AfL is past its sell-by date. How disappointing! This means that, in their eyes at least, AfL was just another initiative that everyone raved about, said they were ‘implementing’, then slowly forgot about. If this is the case for any practitioner, I can say without a shadow of a doubt they didn’t understand AfL.

In my post Authentic AfL: Check! I discussed Sue Swaffield’s idea of AfL being understood either as just a set of strategies or as a pedagogy. In her view, too many schools think of AfL as simply a range of tools used to improve learning rather than as a pedagogical approach which drives pupils towards learning autonomy. In short, implemented from the former standpoint, AfL strategies are all too often doled out in a ritualistic fashion; pupils comply by being seen to use them (perfunctorily) and in doing so become as passive as ever. This is actually more common than we might all care to think. I’ve seen children more worried about getting success criteria stuck in their books than about whether they actually understand them or know how they could help them – compliance at its worst. As for WALT and WILF, how many times have you heard teachers and pupils talking about them, but not really discussing their content? But it’s all OK: as long as you have your WALT at the top of the page and the WILF on the board, why waste time examining in detail what the quality aspects of WILF really are? It worries me that assessment itself suffers from this problem too. Assessment is often understood only as a means to demonstrate accountability rather than as a fundamental approach to learning.

Ironically, as research has shown (Swaffield 2011, Berger 2014), and as I know anecdotally to be true, when assessment is used to drive learners towards increasing levels of independence, agency and autonomy, learning progresses rapidly too, so that the issue of accountability takes care of itself. This is not true the other way around, however. When assessment is purely driven by the need to improve progress and the ‘amount of learnt content’, it does not automatically produce learners who understand learning, are motivated to learn and become increasingly better learners; often it does the opposite. This is why so many young people can’t wait to leave education; there’s only so much learning to satisfy other people you can do; eventually, it becomes unbearable.

This is why it’s so disappointing that the renaissance AfL has brought to education (with things like comment-only marking, success criteria and in-class teacher, peer and self-assessment) does not seem to have made many people examine what their desired outcomes of education (DOE) are. In my opinion, it was the work of Black and Wiliam (1998) which eventually led to levels being abolished, because their work highlighted just how perverse the system had become and just how far the desire for levels had overtaken the need for pupils to learn well. Data quite literally led many schools to forget what they are there for! The constant pressure to ‘raise standards’, the fear of OfSTED knocking on your door, accountability, and the panic to prove we’re really teaching prevented many people from revisiting, or even understanding in the first place, what their DOE really are.

I’m lucky enough to have got out to see lots of other schools and talk to lots of school leaders over the past couple of years, but doing so has also opened my eyes to just how many school leaders say one thing yet do another. There probably isn’t one of them who wouldn’t say ‘I want these children to be resilient life-long learners’, yet to my mind there are only a handful who really lead on this and make it the heart of their leadership principles. The fact is that if school leaders allow accountability to motivate learning progress per se, without much effect on the learners themselves, then their DOE are really only children who are filled up with learning but not improved as learners and as people. On the other hand, when school leaders understand that assessment should be, as Ron Berger says, a framework for motivation as well as evaluation, assessment really becomes powerful – and improves data too! Shame I feel the need to say that, but there will be those who still don’t see the difference between improving learning and improving learners. Better learners learn more.

Dylan Wiliam talks about decision-driven data instead of data-driven decisions. For me, what he’s really talking about is whether assessment motivates children as learners, teachers as educators and leaders as leaders of education, or whether pupils, teachers and leaders are instead driven by reactions to data. Data and assessment are different things. A sound assessment framework in a school should support teachers in understanding the progression of the learning journey, how to get there, each child’s next steps and what quality outcomes look like. In turn, this should mean the children know this too, because the teacher facilitates the children’s interaction with, agreement on and investment in this information through good teaching. The teacher can then assess the children against that concept of quality each time; teachers involve themselves in dialogue over this and moderate it so it’s really clear what the quality means and looks like. The children can also assess themselves against it and see what they need to do next; they can tell each other and advise each other too. This motivates them because they can see what to do and where they are going, and if you hand children the responsibility to assess themselves as learners on this journey too, through pupil assessment conferences and presentations, assessment really does become the motivator. The more children are enabled to assess themselves, and to show others how they are doing, the better they get at it and the more invested they become in themselves as learners. It becomes their learning, not something done to them.

Like this, the assessment framework drives pupils towards improving themselves as learners, becoming more independent, reflective and self-managing – heading towards that ‘resilient life-long learner’ goal. Teachers know what the children can do, and this translates into data. Like this, data emerges from the system – it doesn’t run the system. The data shows who can do what, and where children, groups and cohorts of children are. It gives leaders a picture of learning across the school at whichever level they need. It forms the basis of professional dialogue about children. It also forms the basis of monitoring and pupil progress – but all this emerges from ongoing, in-class assessment of learning mediated between teacher and pupil, rather than from panicked assessment weeks when teachers suddenly realise they need data; data that often emerges from sets of criteria given a level or score that pupils are shoe-horned into, rather than data that relates directly to what pupils can do and, with a good system, also informs everyone on what they need to do next.

As long as leaders remind themselves that data needs to emerge like this, then assessment has every chance of becoming a framework for learning and motivation as well as evaluation. The Learning Ladders system we are developing at our school, along with a few other Lewisham schools, does this exceptionally well. However, this is in part because the data it produces is understood as an evaluation of learning, a means to get a picture of learning from different angles, and not as a motivator for learning. Tracking and assessment are understood to be different things, with different purposes. If we go back to where I started, assessment is driven by the desire to enable pupils to become increasingly independent and better learners, rather than simply being a means to improve learning. There is a difference. In the end though, a good system like Learning Ladders is only as good as the understanding of the people using it, because assessment is at its best when it is understood as a pedagogy which improves learners, not just learning.

Science Assessment Ladders

Key Stage One Individual Pupil Science Assessment Record with pupil assessment (1)

Lower Key Stage 2 Individual Pupil Science Assessment Record with pupil assessment

Upper Key Stage 2 Individual Pupil Science Assessment Record

These are phase-group science learning ladders for the primary science curriculum. As a precursor to using them, I hopefully assume that teachers are using pupils’ prior science learning as a basis for planning what to teach, rather than grabbing plans from books or online and just teaching; that is very old-hat transmission teaching and kills science! Even better, I hope teachers are allowing children to get excited by a topic, encouraging them to ask questions about it and then using those questions to form the basis of the children’s science investigations.

 

I recommend then using these science learning ladders in the following ways:

 

  • To baseline-assess science knowledge before a topic by, for example, having a ‘what do you know?’ session, or a little quiz-type assessment.
  • To assess science skills and knowledge as children learn – the best way to assess!
  • To summarise learning at intervals if needed (like termly or at the end of the year for reporting to parents).
  • To ensure/monitor phase-group coverage and thus allow for a more fluid, child-led learning cycle.
  • To ensure progression from one year to the next and avoid repetition of learning. Copies should be handed on to the next teacher each year too.
  • For gaps analysis in knowledge so you and the children know where they need to go next. Although, I would say, if you’re providing/allowing a rich child-led enquiry basis for science learning you will find that these relatively simple knowledge objectives are easily learnt. Also please avoid telling the children ‘what they will learn today’ in science as this destroys the ‘finding out’ aspect that is at the core of science learning. Leave sharing the knowledge learning intention until the end – a grand finale! Or better, let them tell you what they learnt and then let them tick it off on the ladder! Magic!
  • For gaps analysis in science skills learning. This will indicate which types of enquiry the children need to do in order to practise specific skills. Then you can choose the children’s questions that fit these enquiries. Remember, there are five types of enquiry the children need to try in order to hone the necessary scientific skills within working scientifically: 1) observing over time; 2) identifying and classifying; 3) pattern seeking; 4) fair testing; 5) research. The type of enquiry depends on the type of question raised. It is important that teachers and children understand this and do not always resort to ‘a fair test’ whenever they do an investigation. A fair test is only one type of investigation and only answers questions which seek to compare variables. In other words, a fair test is one of five different ways to investigate a science question, and teachers need to ensure all five ways are used to answer class questions. More can be read about this in ‘It’s Not Fair’ by Jane Turner, Brenda Keogh and Stuart Naylor. I also have a PowerPoint on it which I will post. Importantly, children don’t just learn skills through osmosis; they need them modelled, and then they need to practise them so they develop and become more systematic. So don’t forget to model them well; show them what it’s like being scientific!
  • Previously, I have posted science assessment ladders that have three columns of assessment, such as working towards, achieving and exceeding. However, as I have thought about this, it doesn’t make sense to try to assess each skill or knowledge criterion like this. For example, ‘to know how the rotation of the earth results in day and night’: you either know this or you don’t, so it makes no sense to say working towards knowing this, achieving knowing this or exceeding knowing this. It’s a case of OSD, ‘obsessive sub-dividing disorder’, if you start doing that for criteria that are either yes or no!
  • The terms ‘working towards’, ‘achieving’ or ‘exceeding’ should apply to the assessment of the child’s whole learning journey through the phase-group learning ladder. However, I would err on the side of caution with ‘exceeding’, as this is a mastery science curriculum and we should think of children going wider rather than further. This means that rather than ticking off the learning ladder and then dipping into the next phase group’s learning, teachers should provide rich opportunities for more able children to be challenged in their scientific thinking and to investigate their own questions at a deeper level (think Bloom’s taxonomy).
  • Importantly, these ladders should also form the basis of frequent phase-group moderation in school, and better still, between schools. Questions at the heart of this should be ones such as, ‘What does it look like if a child is working within Year 3 for science?’ or ‘What does it look like if a child is working towards Year 3?’ Bring your science books and folders and let’s share. However, on the table should also be questions such as: ‘What does “to understand that light is reflected from surfaces” look like?’ or ‘What do we mean by “ask relevant questions and use different types of scientific enquiries to answer them”?’ This means schools are building up a repertoire of exemplary understanding of science assessment; they’re making success criteria for what good science skills and knowledge are themselves, rather than ticking a box and hoping they’re right! Or worse, taking a test and finding out what the children didn’t know when it’s too late. This ‘deep moderation’ approach should also avoid talking judgementally rather than descriptively about learning, as so often happened with levels. Remember those times someone brought ‘their level 3’ and you brought ‘your level 3’ and they were worlds apart? Well, deep moderation on really sharing ‘what makes good’ will avoid this. It might take more time, but if this approach is stuck to, teachers will build their science assessment skills over time. Schools must put the time into this. Cancel those staff meetings about stuff you don’t need and make time for moderating…and please, not just English and Maths.
  • Finally, a nice data tracker can be made to go with these ladders by putting all the class names on one side and then using the year group with ‘working towards’, ‘achieving’…and I suppose ‘exceeding’…but I think I’m going to call it ‘deepening’! The data entry on the sheet for a child might look like ACH 3 for achieving Year 3, WW 3 if they’re working towards, and DPN 3 if they’re a high flyer. At some point I’ll perhaps make one of these sheets. I expect you could also start to talk about expected progress too, so that if a child starts Year 3 as ACH 2, they should end Year 3 as ACH 3. But perhaps that should be left to English and Maths and we shouldn’t let it spoil science! We’ll see. The point is that the ladders show attainment and achievement…or should I say they record it…the children will show it!
  • Have I forgotten anything…? Probably. I might add more to this blog another time!
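
The simple tracker described a couple of bullets above could be mocked up in a few lines. This is only a sketch of the idea, assuming the WW/ACH/DPN codes plus a year number and the ‘start Year 3 as ACH 2, end it as ACH 3’ expected-progress rule; none of the names here is a real feature of any system.

```python
# Illustrative tracker: pupil names down one side, each with a code that
# combines the judgement and year group (e.g. "ACH 3" = achieving Year 3,
# "WW 3" = working towards, "DPN 3" = deepening). All invented for illustration.

JUDGEMENTS = ["WW", "ACH", "DPN"]  # working towards, achieving, deepening

def code(judgement, year):
    """Build a tracker entry such as 'ACH 3'."""
    return f"{judgement} {year}"

def expected_next(entry):
    """One possible expected-progress rule: a pupil ending one year at
    ACH n would be expected to end the next year at ACH n + 1."""
    judgement, year = entry.split()
    return code(judgement, int(year) + 1)

tracker = {
    "Pupil A": code("ACH", 3),
    "Pupil B": code("WW", 3),
    "Pupil C": code("DPN", 3),
}
```

The point of keeping the codes this coarse is the one made above: each entry records whole-journey attainment through the ladder, not a sub-divided score for every criterion.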

 

Any questions or suggestions about these ladders, or about my approach, are greatly appreciated and welcome.