The private sector is normalising #FacialRecognition. I'd be fascinated to see their Data Protection Impact Assessment (DPIA).

There’s a case that the risk of always-on mass-surveillance facial recognition can never be mitigated enough. If so, it would need prior consultation with the regulator before processing starts (GDPR Article 36) https://twitter.com/mattburgess1/status/1336949271187259392
A narrow DPIA may well not have examined the wider and longer-term impacts, incl. excess reuse of private facial recognition data and infrastructure by law enforcement (see Amazon Ring for an example).

You can’t just reuse your CCTV risk assessment for facial recognition. It is NOT the same thing.
First, risks linked to the datasets used for matching faces (bias, mislabelling, general quality); then risks in the matching itself (configuration, accuracy, analysis);
then reuse of raw data, correlation info and alert / incident reports, incl. a huge pot of false negatives and positives (the MIT Gender Shades study found error rates of up to ~35% for darker-skinned women; rough maths on that pot below); then sharing with cloud hosts / others and their reuse now and over time.
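To make the "huge pot" concrete, here is a back-of-the-envelope sketch of the base-rate problem. Every number in it is an assumption chosen purely for illustration, not a figure from this thread or from any cited study:

```python
# Illustrative base-rate arithmetic only -- all figures below are assumptions,
# not measurements from any deployed system or study.

daily_faces = 50_000           # assumed footfall past an always-on camera
watchlist_prevalence = 0.0001  # assumed: 1 in 10,000 passers-by is genuinely on the watchlist
false_positive_rate = 0.01     # assumed: the matcher wrongly flags 1% of non-matches
true_positive_rate = 0.90      # assumed: the matcher catches 90% of genuine matches

genuine_matches = daily_faces * watchlist_prevalence                  # ~5 people/day
true_alerts = genuine_matches * true_positive_rate                    # ~4-5 alerts/day
false_alerts = (daily_faces - genuine_matches) * false_positive_rate  # ~500 alerts/day

precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts per day:   {true_alerts:.0f}")
print(f"False alerts per day:  {false_alerts:.0f}")
print(f"Share of alerts that are real matches: {precision:.1%}")  # roughly 1% under these assumptions
```

Under those assumed numbers, roughly 99% of alerts point at the wrong person, and that pile of false positives is itself personal data that gets stored, reviewed and potentially reused.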
More mundanely, what’s the lawful basis? A legitimate (security) interest should be a non-starter given the lack of transparency, the lack of necessity to achieve the aim, and poor performance meaning it is usually not fit for purpose. All before we get into the medium/long-term human rights risk.
Not all of this may be pertinent here, but it is a creeping societal risk. Too much rights infringement has been normalised by boiling us like frogs.
E.g. the rollout at airports, helped by airlines. Voluntary, with speed through security as the carrot and increasing difficulty opting out as the stick. Until it’s suddenly compulsory, with an illusion of public support.
For me this whole thing is intrinsically linked to the wider Facial Recognition and AI debates.

They are intimately linked in the context of providing raw data, teaching machines, and testing profitable use cases.
These are other recent strands, with a key theme: vested interests can get very unhappy about scrutiny and challenge.

School surveillance #EdTech software, including proctoring and sentiment analysis to spot violent or suicidal tendencies. We love testing things out on kids https://twitter.com/trialbytruth/status/1301388312847020037
Reuse of data gathered in schools to enable predictive policing is a plainly stated government aim https://twitter.com/rachelcoldicutt/status/1317756020509298688
Sorting people (face, voice, gait, indicators of race, maybe a criminal nose shape) might start with benign intent, but by what yardstick? Who monitors for individual and collective harm now and over time, and how would you set a faulty record straight? https://twitter.com/ipvideo/status/1336339395507576832?s=20
But back to the top with the @mattburgess1 piece (please do read). Shared Facewatch watchlists are getting very popular. But what due diligence was done, what is the lawful basis, and what do the risk and reuse assessments for 3rd parties (and their 3rd parties) look like?
To finish, another one from that thread about school data reuse in the context of the Government’s National Data Strategy.

How many parents foresaw school attendance and other pupil data used this way? Who would have known how to object if they did? https://twitter.com/rachelcoldicutt/status/1317748471785574400?s=20
Pertinent: EDPB 'Guidelines 3/2019 on processing of personal data through video devices', v2.0, January 2020.

When you identify people using images, that is special category (biometric) data. Signs in shops don't equal consent.

Odia Kagan write-up: https://foxrothschild.com/publications/video-surveillance-under-gdpr-edpb-issues-final-guidance/
EDPB guidelines: https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_201903_video_devices.pdf