We've heard of Comparative Judgement (CJ), but what is Adaptive Comparative Judgement (ACJ)?
ACJ will probably look the same as any CJ product to the user. However, the 'Adaptive' part of the algorithm gets to work as the session unfolds, usually once all of the scripts have been seen by judges several times.
"The A for 'adaptivity' in ACJ means that the choice of which objects are presented to the judges depends on the outcomes of judgments made so far – the idea being to use judge time efficiently by not presenting objects to compare where the result of the comparison is almost certain." Bramley, T. (2015)
The adaptive element of the ACJ engine used in this assessment uses its algorithm to 'fine-tune' pairings by referring back to previous judgements involving the same scripts made by other judges, rather than simply pairing scripts randomly. In this way, the algorithm can build confidence in the Collaborative Professional Consensus (CPC) rank placement for each script more quickly, avoiding the unnecessary judgements that would result from non-adaptive script pairings. This, in turn, reduces the overall time required to reach a final CPC rank order whilst maintaining a solid level of overall assessment reliability.
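To make the idea concrete, here is a minimal sketch (in Python) of how an adaptive pairer of this general kind might choose the next comparison. It is an illustration under our own assumptions, not the actual AssessProgress engine; the function name, the cap of 20 appearances and the ability figures are all hypothetical. The pairer prefers scripts whose current ability estimates are closest together, i.e. the comparisons whose outcomes are least certain.

```python
import itertools

def choose_next_pair(abilities, times_seen, max_appearances=20):
    """Pick the pair of scripts whose current ability estimates are closest,
    i.e. the comparison whose outcome is least certain, skipping scripts
    that have already appeared the maximum number of times.

    abilities  -- dict of script id -> current ability estimate (logits)
    times_seen -- dict of script id -> number of comparisons so far
    """
    eligible = [s for s in abilities if times_seen[s] < max_appearances]
    best_pair, smallest_gap = None, float("inf")
    for a, b in itertools.combinations(eligible, 2):
        gap = abs(abilities[a] - abilities[b])
        if gap < smallest_gap:
            best_pair, smallest_gap = (a, b), gap
    return best_pair

# Hypothetical provisional estimates after a few rounds of judging
abilities = {"S1": 1.8, "S2": 0.2, "S3": 0.3, "S4": -1.5}
times_seen = {"S1": 4, "S2": 3, "S3": 5, "S4": 2}
print(choose_next_pair(abilities, times_seen))  # -> ('S2', 'S3'): the closest-matched pair
```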
In short, no.
We ensure that there are many opportunities to use ACJ to moderate writing and for your children to build up a writing portfolio. We know that having more schools participate in our National moderation sessions gives you a much greater understanding of your children's relative performance. However, if you wish, you can pick and choose your sessions. Even when there's a joint year group session, you may choose to enter only one of your year groups.
It's all entirely up to you.
If you join the 'Moderation of Primary Writing' scheme, no.
Unlike some CJ engines, ours allocates you a random selection of scripts from across all schools in each session, meaning it's unlikely that you will judge scripts from your own school. After all, you already know what writing looks like in your school, and teachers benefit from observing work from outside their own context - great for moderation and for improving teaching and learning.
The benefit of this is much less bias: for instance, you might unconsciously favour some children's work simply because you know those children.
With ASSESSPROGRESS, the way that scripts are allocated to judges means that you're highly unlikely to be able to work out whose scripts are whose - particularly in a large session.
Imagine being able to say that the CPC judgement report your school receives was made by 20 professionals who don't know your school or your children!
Pressure for remarking, with all its challenges and costs? We don't think so!
We publish the genre of each write that we ask your children to complete, for each year group, in our Calendar of Events.
Sometimes we'll give you more details than just the genre. We think this helps you plan your curriculum to support your classes in the session and keeps to our low-stakes approach to assessment. As we try to ensure (but cannot guarantee) that all schools complete the writing in the same conditions - with the same amount of support - we think this approach is the best way to maintain the integrity of the administration.
We'll send your report to your named contact within a week of the judging finishing.
From 2022, we send links to online assessment pages while we develop our world-class in-app reports for the September 2023 cohort.
One of the most valuable things for teachers and SLT is understanding the relative performance of the children in your school across two year groups. We all know that there are overlaps, but where exactly do they exist? Our combined year group moderation sessions will show you exactly where! We schedule most of our joint sessions towards the end of the school year. This can also help with class allocation if you're a joint year group school. Additionally, with more judges built into a session, the judging goes more quickly.
We publish the stimulus on a Sunday night so that it's ready for you on Monday morning, and we then give schools three days to complete the writing, scan the scripts and upload them. This ensures that everyone is clear about the timescales for the write. Each session is designed to fit into one hour of your everyday teaching, so you can fit it right in - simple.
We then allow one week for judging so that you can factor it into staff meeting time (we love it when schools judge together and then talk about the writing - one hour of quality CPD). We believe this is the best and most supportive way to moderate and formatively assess writing in schools - it also helps if someone brings the cake!
We've documented our belief that children perform differently in different writing genres. By entering two or three further writing sessions each year, you'll see this too and can support your children towards a greater understanding of 'What A Good One Looks Like' - WAGOLL. We front-load sessions in terms 1 and 2 to establish a benchmark, allow everyone some teaching time, and then reassess in terms 5 and 6. For Year 2 and Year 6 we run a mid-year session and then a final session just before statutory data moderation, so that you have a range of independent writing to look at and moderate.
From 2022, we will be using Comparative Judgement to create a Portfolio of Writing for Yr 6 pupils, meeting the DfE's statutory assessment requirements for writing at the end of KS2.
Yes. ACJ can also be used as a quick-check assessment within your own school, giving you a CPC rank order of the quality of writing in your school.
You can also do this alongside the National 'Moderation of Primary Writing' sessions from AssessProgress, giving you not only a more localised view but also, at the same time, a national view of quality writing.
Adaptive Comparative Judgement uses a mathematical algorithm to estimate the probability that script A is better than script B, or vice versa. Essentially, it assesses the consistency of each judge against the other judges - for example, whether judges who choose A over B also choose A over C - to derive the CPC. Misfits occur when one judge consistently disagrees with other judges' decisions when comparing similarly ranked scripts. A high misfit score means that a judge is scoring differently from the others. This isn't always negative and can lead to professional discussions about judges' subject competencies. Such judges can be removed for computational convenience or retained for conceptual purity.
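As an illustration of the underlying idea (our own hedged sketch, not AssessProgress's exact statistics), CJ engines typically rest on a Bradley-Terry / Rasch style model: the probability that script A beats script B depends on the gap between their ability estimates, and a judge's misfit can be summarised as the mean squared standardised residual of their decisions against those expected probabilities.

```python
import math

def p_a_beats_b(ability_a, ability_b):
    """Bradley-Terry / Rasch probability that script A wins the comparison."""
    return 1.0 / (1.0 + math.exp(-(ability_a - ability_b)))

def judge_misfit(decisions, abilities):
    """Mean squared standardised residual for one judge's decisions.

    decisions -- list of (winner_id, loser_id) pairs made by this judge
    abilities -- dict of script id -> ability estimate from the whole session

    Values well above 1 suggest the judge often picks the 'unexpected' winner
    of closely matched pairs; values around 1 are typical.
    """
    total = 0.0
    for winner, loser in decisions:
        p = p_a_beats_b(abilities[winner], abilities[loser])  # expected chance of this outcome
        residual = (1.0 - p) / math.sqrt(p * (1.0 - p))       # standardised residual for the observed win
        total += residual ** 2
    return total / len(decisions)

# Hypothetical session estimates and one judge's decisions
abilities = {"A": 0.9, "B": 0.7, "C": -0.4}
decisions = [("B", "A"), ("A", "C"), ("B", "A")]
print(round(judge_misfit(decisions, abilities), 2))
```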
No. The DfE produces comprehensive criterion-based guidance for end of key stage judgements. You'll need to use this guidance and the exemplars to understand the level of writing attainment in your school; Comparative Judgement cannot do this for you, as it's a different type of assessment. We have heard of schools taking the percentages of teacher assessment from previous years' national writing results and matching them to the reported percentile in sessions. However, this is unhelpful for schools and unreliable, as much depends on the sample size, how generalisable the sample is, and the annual changes to criteria and percentages from the DfE. You can, however, use the pieces of work from each session in your portfolio of evidence for end of key stage moderation, which is good for reducing workload and maintaining a low-stakes approach for your pupils.
It can be just one teacher in one school, but the more, the merrier! The more teachers involved in the judging sessions, the better the constructive discussions between staff about excellent writing. We recommend at least five judges per session per school to reduce the number of judgements each judge needs to make.
If you're judging in your school and not part of the benchmarking sessions, this is how we work out the number of paired scripts each judge needs to view:
Each script is to be judged 20 times.
So:
The number of scripts × 20 = the total number of script appearances; divide by 2 = the number of paired judgements; divide by the number of judges = the number of judgements each judge needs to make.
For example:
45 pupil scripts × 20 = 900 script appearances; divide by 2 = 450 paired judgements; divide by 9 (teachers) = 50 judgements per judge.
Children as writers and judges: 30 pupil scripts × 20 = 600 script appearances; divide by 2 = 300 paired judgements; divide by 30 (all pupils) = approximately 10 judgements per child.
If this is too complicated(!), we have a judge calculator in the App.
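For anyone who wants the arithmetic spelled out, here is a short sketch of the same calculation; the function name is ours and rounding up when the division isn't exact is our assumption, so treat the in-App judge calculator as the authoritative figure.

```python
import math

def judgements_per_judge(num_scripts, num_judges, views_per_script=20):
    """Each script is seen views_per_script times; each paired judgement uses two scripts."""
    total_views = num_scripts * views_per_script  # e.g. 45 * 20 = 900 script appearances
    paired_judgements = total_views // 2          # e.g. 900 / 2 = 450 paired judgements
    return math.ceil(paired_judgements / num_judges)

print(judgements_per_judge(45, 9))    # 50 judgements each for 9 teachers
print(judgements_per_judge(30, 30))   # 10 judgements each when 30 children judge
```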