There's a data protection aspect to this standardised grades saga. GDPR (Article 22) gives individuals the right not to be subject to decisions based solely on automated processing where those decisions significantly affect them.
In their DPIA – https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/909372/6666_Privacy_Impact_Statement_-_Grading_2020.pdf – Ofqual argue this doesn't apply because a) the CAGs and rank orders used as inputs are human-generated, and b) exam board staff are meant to check the outcomes. Two quotes from that DPIA on the latter point:
1. "the exam boards will review the final outcomes before signing off their awards. The review by exam boards will involve checking the outcomes from the model for individual subjects... including, where appropriate, considering the outcomes for individual centres"
2. "exam board staff will interrogate any instances where outputs appear anomalous or they consider merit further scrutiny. Responsible officers will decide whether to sign off the awards and may choose to reject or carry out further scrutiny."
On justification a): I am not a data protection lawyer, but perhaps one could explain to me how human-generated inputs into an automated model stop it from being an automated model?
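To make that concrete, here's a deliberately simplified sketch – not Ofqual's actual algorithm; the function name, inputs and allocation rule are all hypothetical – of how human-generated inputs can feed a grading pipeline whose decision step is still entirely automated:

```python
# A deliberately simplified sketch – NOT Ofqual's actual algorithm.
# The point: the rank order is human-generated, but the grade each
# individual receives is decided entirely by the code that follows.

def standardise(rank_order, historical_distribution):
    """Allocate grades down a human-supplied rank order so that the
    centre's overall results match its historical grade distribution."""
    n = len(rank_order)
    grades = {}
    cutoff = 0
    for grade, share in historical_distribution:
        quota = round(share * n)
        for student in rank_order[cutoff:cutoff + quota]:
            grades[student] = grade
        cutoff += quota
    # Rounding leftovers get the lowest grade in the distribution.
    for student in rank_order[cutoff:]:
        grades[student] = historical_distribution[-1][0]
    return grades

# Human-generated input: teachers ranked these students by ability...
rank_order = ["Asha", "Ben", "Chloe", "Dev"]
# ...and the centre's past results supply the target distribution.
historical = [("A", 0.25), ("B", 0.50), ("C", 0.25)]

# No human reviews any individual's outcome at this step.
print(standardise(rank_order, historical))
# {'Asha': 'A', 'Ben': 'B', 'Chloe': 'B', 'Dev': 'C'}
```

Whatever the provenance of the inputs, the decision about each individual's grade here is made solely by the code – which is precisely the Article 22 question.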
On justification b): is there any evidence that this interrogation of outputs actually happened – that exam boards declined to sign off awards in certain cases, or carried out further scrutiny? Given the results that came out, if this never happened, that would seem to undermine the justification.