Disclaimer: Tweeting not to lose a mental thread.

My talk about #SustainableRiskTriage for @opensecsummit will now be 8th March. Can’t say I’m not glad of the extra time.

The web of mental hooks and connecting threads, especially in the context of AI, is my morning alarm rn 1/13
Today’s wake up call was about a unit of risk assessment. What is the ‘entity’ to which one attaches that?

For 3rd party risk I use supplier + product/service. For #DataProtection Impact Assessment it’s dept/supplier + service/product + processing purpose. But for AI? 2/13
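Those composite "units of assessment" behave like compound keys. A minimal sketch (class and field names are mine, purely illustrative, not from any standard) of why the supplier alone is too coarse a key:

```python
from dataclasses import dataclass

# Hypothetical sketch: a unit of risk assessment as a composite key.
# Frozen dataclasses are hashable, so they can key a risk register.

@dataclass(frozen=True)
class ThirdPartyUnit:
    supplier: str
    product_or_service: str

@dataclass(frozen=True)
class DPIAUnit:
    dept_or_supplier: str
    service_or_product: str
    processing_purpose: str

# One supplier, two distinct units -> two distinct risk profiles can attach.
a = ThirdPartyUnit("AcmeCorp", "Licensing")
b = ThirdPartyUnit("AcmeCorp", "Data analytics")
assert a != b
```

The point of the sketch: equality (and therefore the attached risk profile) is defined by the whole tuple, not by the supplier name.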
Thought process:

Accountability 1: Can remediation be attached (is there a budget and someone empowered to spend it)?

Consistency: Can we attach a single risk profile? A supplier may provide multiple services with different risk profiles. e.g. Licensing vs Data analytics 3/13
Control: What is the boundary for procedural / technical and protective / detective controls we can implement?

Contracts: At what level can we reasonably differentiate contractual requirements and liability? To what can you attach an SLA or continuous monitoring? 4/13
Regulation: What is the unit of assessment and/or typical level of scrutiny specified in regulator-defined assessments and in historical investigations? You are likely to worsen any situation if your only risk view is in the operational vulnerability weeds. 5/13
Maturity 1: If no-one has a reliable and maintained list of suppliers, sites, systems, applications, processing purposes and associated projects, products, services and data sets, that assessment-target definition may well need to wait. 6/13
Maturity 2: How well defined and structured is the split between change and run? If there is only change (e.g. software dev in supply chain), is there any acknowledged accountability for risk after things ship?

If the answers are “Not very” and “No”, to whom do you report risk? 7/13
Which brings me back to Accountability.

A2: Is there a RACI? Does anyone know or care who is responsible, accountable, consulted and informed?

A3: Does the change/run split reflect that? Is there good demarcation across downstream and upstream supply chains? 8/13
Governance: Is there a robust, transparent, and prompt mechanism to deal with exceptions? Is there an escalation route to the board? For vulnerabilities or for internal / external concerns?

“No” to those last few and refining your risk framework is not your biggest concern. 9/13
For AI: going right back to the top, assuming there is someone who cares and a meaningful governance structure to deliver outputs into, what is that unit of assessment?

It is not “An AI”

It may be Processing Purpose + Data Set, but what definition of data set? 10/13
Adding layers to the taxonomy, depending on scale and business model:

Dept/supplier + product/service/project + processing purpose + data set.

What it should rarely be, if you want the output to be useful:

“AI”/App/Site/Server/Device 11/13
Starting with “AI” or “Vendor” or “Website” or “Server” or “Device” will kill this.

Either too huge to meaningfully distill a treatable risk profile, or too divorced from intent and usage: the things that generate most of the risk to both organisations and data subjects. 12/13
Anyway, that’s what I woke up to my brain doing. How are you 😏? /end
You can follow @TrialByTruth.