Frequently Asked Questions

Why ACJ?

We've heard of Comparative Judgement (CJ), but what's Adaptive Comparative Judgement (ACJ)?

ACJ will probably look the same as any CJ product to the user. The 'Adaptive' part of the engine gets to work as the session unfolds, usually after all of the scripts have been seen by the judges a number of times.

"The A for ‘adaptivity’ in ACJ means that the choice of which objects are presented to the judges depends on the outcomes of judgments made so far – the idea being to use judge time efficiently by not presenting objects to compare where the result of the comparison is almost certain."   Bramley, T. (2015)

The adaptive element of the ACJ engine used in this assessment uses its algorithm to 'fine-tune' judgements by referring back to previous pairwise judgements involving the same scripts, made by other judges, rather than simply pairing scripts randomly. In this way, the algorithm can build confidence in the Collaborative Professional Consensus (CPC) rank placement for each script more quickly, avoiding the unnecessary judgements that would result from non-adaptive script pairings. This, in turn, reduces the overall time required to reach a final CPC rank order, whilst maintaining a strong level of overall assessment reliability.
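To give a feel for the general idea, here is a simplified sketch of adaptive pairing in Python. It is not the actual RM Compare engine; the Bradley-Terry style scores and the function names are our own assumptions, purely for illustration. An adaptive step might pick the pair whose outcome is currently least certain:

```python
import itertools
import math

def win_probability(score_a, score_b):
    # Bradley-Terry style probability that script A would beat script B,
    # given current estimated quality scores (on a logit scale).
    return 1.0 / (1.0 + math.exp(score_b - score_a))

def pick_next_pair(scores):
    # Choose the pair whose predicted outcome is closest to a coin toss,
    # so judge time isn't spent on comparisons that are almost certain.
    best_pair, best_uncertainty = None, -1.0
    for a, b in itertools.combinations(scores, 2):
        p = win_probability(scores[a], scores[b])
        uncertainty = 1.0 - abs(p - 0.5) * 2  # 1.0 = coin toss, 0.0 = foregone conclusion
        if uncertainty > best_uncertainty:
            best_pair, best_uncertainty = (a, b), uncertainty
    return best_pair

# Hypothetical scripts with current estimated quality (higher = judged better so far)
scores = {"script_1": 1.8, "script_2": 0.2, "script_3": 0.1, "script_4": -1.5}
print(pick_next_pair(scores))  # scripts 2 and 3 are closest in quality, so least certain
```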

Can we moderate with other schools?

Yes. ACJ has been developed so that you can use the system as a quick-check assessment in your school, giving you a CPC order of the quality of writing in your school. You can also collaborate with other schools - in a MAT, Partnership or Local Authority, for instance - to look at a broader picture of what writing looks like.

You can also do this alongside AssessProgress's national 'Moderation of Primary Writing' sessions (charged separately), enabling you to have not only a more localised view but also a national view of quality writing. Simply let us know the reports that you want and we'll collate everything for you.

The RM Compare system is so flexible that, outside of the AssessProgress national sessions (charged separately), you can collaborate with as many different schools as you wish, as often as you wish. The best thing? All of that is included in your standard RM Compare subscription.

How many judges do we need?

It can be as few as one teacher in one school; however, we think the more the merrier! The more teachers involved in the judging sessions, the better the formative discussions between staff about what great writing looks like. We recommend at least 5 judges per session per school to reduce the number of judgements each judge needs to make.

If you're judging in your own school and not part of the benchmarking sessions, this is how we work out the number of paired scripts each judge needs to view:

Each script needs to be judged 20 times. 


So:

Number of scripts × 20 = total times scripts appear in comparisons; divide by 2 (each comparison involves two scripts) = number of paired judgements; divide by the number of judges = the number of judgements each judge needs to do.


For example:

45 pupil scripts × 20 = 900 script appearances; divide by 2 = 450 paired judgements; divide by 9 (teachers) = each teacher completes 50 judgements.

Children as writers and judges: 30 pupil scripts × 20 = 600 script appearances; divide by 2 = 300 paired judgements; divide by 30 (all pupils) = each child completes 10 judgements.
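If you want to check the arithmetic for your own numbers, the same calculation can be written as a short Python snippet (the function name is ours, purely for illustration):

```python
def judgements_per_judge(num_scripts, num_judges, judgements_per_script=20):
    # Each script appears in judgements_per_script comparisons, and each
    # comparison involves two scripts, so halve the total appearances and
    # share the paired judgements out between the judges.
    total_appearances = num_scripts * judgements_per_script
    paired_judgements = total_appearances // 2
    return paired_judgements / num_judges

print(judgements_per_judge(45, 9))    # 50.0 judgements per teacher
print(judgements_per_judge(30, 30))   # 10.0 judgements per child
```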

Do we benchmark our own children's work?

Not if you join the Moderating Primary Writing scheme.

Unlike some CJ engines, you will judge a randomly allocated selection of scripts from across the whole session, meaning it's less likely that you will judge scripts from your own school. After all, you already know what writing looks like in your school, and teachers really benefit from seeing work from outside their own context - great for moderation and for improving teaching and learning.

The benefit of this is that there is much less judgement bias. For instance, if you judge your own school's work, you may be biased towards some children's work simply because you know the child. With RM Compare, the way scripts are allocated to judges means you're highly unlikely to be able to work out whose scripts are whose - particularly in a large session.

Imagine being able to say that the CPC judgement report your school receives will have been made by 20 professionals, who don't know your school or your children!

Pressure for remarking, with all the challenging issues and high cost that brings? We don't think so!

Can we find out who's 'WT', 'WA' or 'EXC' at the end of Y6?

No. The DfE produces a comprehensive criteria-based assessment framework for end of key stage judgements. You'll need to use this guidance and the exemplars to understand the level of writing attainment in your school; Comparative Judgement cannot do this for you, as it's a different type of assessment. We have heard of schools taking the percentages of teacher assessment from previous years' national writing results and matching them to the reported percentiles in sessions. However, this is unhelpful for schools and unreliable, as much depends on the sample size, how generalisable the sample is, and annual changes to the DfE criteria. You can, however, use the pieces of work from each session in your portfolio of evidence for end of key stage moderation, which is a good thing for reducing workload and maintaining a low-stakes approach for your pupils.

Can I tell if I'm judging well / like other judges?

Adaptive Comparative Judgement works by using a mathematical algorithm to understand the likelihood of judges agreeing that script A is better than script B, or vice versa. It essentially measures the consistency of each judge against the other judges - for example, choosing A over B and always A over C - in order to derive the CPC. Misfits occur when one judge consistently disagrees with the judgements that other judges make when comparing similarly ranked scripts. A high misfit score means that a judge decides differently from the others. This isn't always negative and can lead to professional discussions about what qualities judges look for in subject competencies. Such judges can be removed for computational convenience or remain for conceptual purity.
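As a rough illustration of the idea, here is a generic sketch of a misfit-style statistic in Python. It is not RM Compare's actual calculation; the simple Bradley-Terry style model and the names are our own assumptions. A judge who regularly picks against the consensus on similar scripts ends up with a higher score:

```python
import math

def win_probability(score_a, score_b):
    # Probability, under a simple Bradley-Terry style model, that A beats B.
    return 1.0 / (1.0 + math.exp(score_b - score_a))

def judge_misfit(decisions, scores):
    # decisions: list of (winner, loser) pairs chosen by one judge.
    # scores:    current consensus quality estimates for each script.
    # A residual near 0 means the judge agreed with a near-certain outcome;
    # near 1 means they picked against a strong consensus.
    residuals = []
    for winner, loser in decisions:
        p = win_probability(scores[winner], scores[loser])
        residuals.append((1.0 - p) ** 2)
    return sum(residuals) / len(residuals)

scores = {"A": 2.0, "B": 0.0, "C": -2.0}
consistent_judge = [("A", "B"), ("A", "C"), ("B", "C")]
contrary_judge = [("B", "A"), ("C", "A"), ("C", "B")]
print(round(judge_misfit(consistent_judge, scores), 3))  # low  -> fits the consensus
print(round(judge_misfit(contrary_judge, scores), 3))    # high -> potential misfit
```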