I read a tweet this week suggesting that AfL is past its sell-by date. How disappointing! This means that, in their eyes at least, AfL was just another initiative that everyone raved about, said they were ‘implementing,’ then slowly forgot about. If this is the case for any practitioner, I can say without a shadow of a doubt that they didn’t understand AfL.
In my post Authentic AfL: Check! I discussed Sue Swaffield’s idea of AfL being understood either as just a set of strategies or as a pedagogy. In her view, too many schools think of AfL as simply a range of tools used to improve learning rather than as a pedagogical approach which drives pupils towards learning autonomy. In short, implemented from the former standpoint, AfL strategies are all too often doled out in a ritualistic fashion; pupils comply by being seen to use them (perfunctorily) and in doing so remain as passive as ever. This is more common than we might all care to think. I’ve seen children more worried about getting success criteria stuck in their books than about whether they actually understand them or know how they could help them – compliance at its worst. As for WALT and WILF, how many times have you heard teachers and pupils talking about them, but not really discussing their content? But it’s all OK: as long as you have your WALT at the top of the page and the WILF is on the board, why waste time examining in detail what the quality aspects of the WILF really are? It worries me that assessment itself suffers from this problem too. Assessment is often understood only as a means to demonstrate accountability rather than as a fundamental approach to learning.
Ironically, as research has shown (Swaffield 2011; Berger 2014), and as I know anecdotally to be true, when assessment is used to drive learners towards increasing levels of independence, agency and autonomy, learning progresses rapidly too, so that the issue of accountability takes care of itself. This is not true the other way around, however. When assessment is purely driven by the need to improve progress and the ‘amount of learnt content’, it does not automatically produce learners who understand learning, are motivated to learn and become increasingly better learners; often it does the opposite. This is why so many young people can’t wait to leave education; there’s only so much learning to satisfy other people that you can do; eventually, it becomes unbearable.
This is why it’s so disappointing that the renaissance AfL has brought to education (with things like comment-only marking, success criteria and in-class teacher, peer and self-assessment) does not seem to have made many people examine what their desired outcomes of education (DOE) are. In my opinion, it was the work of Black and Wiliam (1998) which eventually led to levels being abolished, because their work highlighted just how perverse the system had become and just how far the desire for levels had overtaken the need for pupils to learn well. Data quite literally led many schools to forget what they are there for! The constant pressure to ‘raise standards’, the fear of OfSTED knocking on the door, accountability, and the panic to prove we’re really teaching prevented many people from revisiting, or even understanding in the first place, what their DOE really are.
I’ve been lucky enough to get out to see lots of other schools and talk to lots of school leaders over the past couple of years, but it has also opened my eyes to just how many school leaders say one thing, yet do another. There probably isn’t one of them who wouldn’t say ‘I want these children to be resilient life-long learners,’ yet to my mind there are only a handful who really lead on this and make it the heart of their leadership principles. The fact is that if school leaders allow accountability to drive learning progress per se, without much effect on the learners themselves, then their DOE amount to little more than children who are filled up with learning, but not improved as learners and as people. On the other hand, when school leaders understand that assessment should be, as Ron Berger says, a framework for motivation as well as evaluation, assessment really becomes powerful – and improves data too! It’s a shame I feel the need to say that, but there will be those who still don’t see the difference between improving learning and improving learners. Better learners learn more.
Dylan Wiliam talks about decision-driven data instead of data-driven decisions. For me, what he’s really talking about is whether assessment motivates children as learners, teachers as educators and leaders as leaders of education, or whether pupils, teachers and leaders are instead driven by reactions to data. Data and assessment are different things. A sound assessment framework in a school should support teachers in understanding the progression of the learning journey, how to get there, each child’s next steps and what quality outcomes look like. In turn, the children should know this too, because the teacher facilitates their interaction with, agreement on and investment in this information through good teaching. The teacher can then assess the children against that concept of quality each time; teachers involve themselves in dialogue over this and moderate it so it is really clear what quality means and looks like. The children, too, can assess themselves against it and see what they need to do next; they can tell each other and advise each other. This motivates them because they can see what to do and where they are going, and if you also hand children the responsibility to assess themselves as learners on this journey, through pupil assessment conferences and presentations, assessment really does become the motivator. The more children are enabled to assess themselves, and to show others how they are doing, the better they get at it and the more invested they become in themselves as learners. It becomes their learning, not something done to them.
Like this, the assessment framework drives pupils towards improving themselves as learners, becoming more independent, reflective and self-managing – heading towards that ‘resilient life-long learner’ goal. Teachers know what the children can do, and this translates into data. Like this, data emerges from the system – it doesn’t run the system. The data shows who can do what and where children, groups and cohorts are. It gives leaders a picture of learning across the school at whichever level they need. It forms the basis of professional dialogue about children. It also forms the basis of monitoring and pupil progress – but all of this emerges from ongoing, in-class assessment of learning mediated between teacher and pupil, rather than being the result of panicked assessment weeks, when teachers suddenly realise they need data: data that often emerges from sets of criteria given a level or score, into which pupils are shoe-horned, rather than data that relates directly to what pupils can do and, with a good system, also informs everyone about what they need to do next.
As long as leaders remind themselves that data needs to emerge like this, then assessment has every chance of becoming a framework for learning and motivation, as well as evaluation. The Learning Ladders system we are developing at our school, along with a few other Lewisham schools, does this exceptionally well. However, this is in part because the data it produces is understood as an evaluation of learning, a means to get a picture of learning from different angles, and not as a motivator for learning. Tracking and assessment are understood to be different things, with different purposes. To go back to where I started: assessment is driven by the desire to enable pupils to become increasingly independent, better learners, rather than being simply a means to improve learning. There is a difference. In the end, though, a good system like Learning Ladders is only as good as the understanding of the people using it, because assessment is at its best when it is understood as a pedagogy which improves learners, not just learning.