After today, the UK is likely to witness Europe’s greatest collective exercise of the Art. 22 #GDPR right not to be subject to fully automated decision-making that significantly affects individuals. A semi-forgotten right is ready to show what it can do for young people’s future.
As this is generating some interest, let me elaborate a bit on the legal position as I see it. The right not to be subject to a decision based solely on automated processing is not a new creation of the #GDPR. A very similar formulation has existed in English law since 1998.
Traditionally, this right has hardly been exercised, partly because of its complexity and partly because its most obvious uses (challenging employment or financial decisions) were explicitly excluded by the exemption applicable to contracts.
But the world has changed a lot since the 90s and the increased role of technologies like #AI in our lives is paving the way for this right to become a hugely important element of the regulatory framework governing the use of personal data. We’re about to test that.
A surreal consequence of the #COVID19 pandemic has been the replacement of the 69-year-old 🇬🇧 A Levels exam-based system with an untested and creative system in which algorithmic techniques play a key role in processing the vast amount of data available to decide #AlevelResults.
On the face of it, this is precisely the type of situation for which this legal right was created in the first place: to mitigate the potentially negative and unfair consequences of relying on technology alone to make these life determining decisions.
In the particular case of the #AlevelResults, the applicability of this right rests on the question of whether the decision was “based solely on automated processing”. There is no question that this qualifies as “automated processing” but was the decision “solely” based on it?
Like most things in law, this is debatable given that teachers and schools provided valuable input, but what is significant is that unlike with the exam-based A Levels, the decision on the grades awarded appears to have been entirely generated by an algorithm.
This is understandable given the sheer amount of data and the impossibility and unfairness of a human being making the decision to award the appropriate grades to hundreds of thousands of people. But the bottom line is that this right exists and provides for human intervention.
So if any student were to exercise this right with respect to their #AlevelResults, @ofqual (as the controller responsible for developing the standardisation process) would be compelled to take a new decision that is not based solely on automated processing.
PS. A relevant legal technicality is that in fact @ofqual may not be the controller (or at least the sole controller), as the decisions on #AlevelResults are actually made by the exam boards, even if in this particular case the responsibility for the algorithm lies with Ofqual.
So in practice, the right to seek human intervention may need to be exercised against the specific exam board with responsibility for each individual subject at each school.
Not the most straightforward situation, but what is??