Why we are so bad at predicting who will (and will not) become involved in terrorism and violent extremism.
(Warning – this thread uses technical terms)
1/
Aside from obvious practical/procedural issues (e.g. poor data, poor info sharing), in the abstract we can think in terms of 3 key reasons why we are so bad at predicting future involvement (& non-involvement) in this violence
1: Equifinality
2: Nonlinearity
3: Low Base Rate
2/
We are not equivalent to cannonballs, for which we can calculate a trajectory with data on initial velocity, angle of launch, firing height & little else. Human behaviors are driven by the confluence of an essentially limitless number of psych, social, econ, etc. ‘variables’
3/
One person may be driven by a combo of revenge, peer pressure, marginalization etc., another may be driven by discrimination, ideology, pursuit of purpose, etc. The key concept is equifinality, which is simply the notion that there are many causal paths to the same outcome
4/
Yet, the concept of equifinality alone does not come close to doing justice to the intricacies of causality, with this violence also governed by broader principles of complexity
5/
Social scientists have historically tended to assume that causes & effects interact in a linear fashion, i.e. their relationships can be plotted as a straight line. This vastly distorts our understanding, as the relationship between ‘variables’ is often disproportionate.
6/
This includes tipping points (e.g., grievances driving involvement in violence only after a certain point), feedback loops (e.g., a desire for revenge provoking involvement, which then provokes a greater desire for revenge), butterfly effects, etc.
7/
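To make those dynamics a little more concrete, here is a minimal Python sketch. The numbers and functions are purely illustrative assumptions, not drawn from any data or model of real behavior:

```python
# Illustrative sketch of two nonlinear dynamics: a tipping point, where
# grievance only translates into mobilisation past a threshold, and a
# feedback loop, where acting on revenge amplifies the desire that drove it.

def mobilisation(grievance: float, threshold: float = 0.7) -> float:
    """Tipping point: nothing below the threshold, a sharp jump above it."""
    return 0.0 if grievance < threshold else grievance

def revenge_loop(desire: float, steps: int = 5, gain: float = 1.4) -> list[float]:
    """Feedback loop: each round of involvement feeds back into a stronger desire."""
    history = [desire]
    for _ in range(steps):
        desire = min(1.0, desire * gain)  # amplification, capped at 1.0
        history.append(desire)
    return history

print(mobilisation(0.69), mobilisation(0.71))  # 0.0 vs 0.71: a disproportionate response
print(revenge_loop(0.2))                       # escalation, not a straight line
```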
On the low base rate issue, we need to consider sensitivity (essentially our ability to predict involvement) & specificity (our ability to predict non-involvement). Marc Sageman’s ‘Understanding Terror Networks’ provides more details about these terms for those interested
8/
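For those who prefer the two terms spelled out, a minimal sketch with hypothetical counts (the numbers are made up for demonstration only):

```python
# Sensitivity = true positives / actual positives (how many future
# participants we catch). Specificity = true negatives / actual
# negatives (how many non-participants we correctly clear).

def sensitivity(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    return true_neg / (true_neg + false_pos)

# Hypothetical screening result: 90 of 100 participants flagged,
# 950 of 1,000 non-participants correctly cleared.
print(sensitivity(true_pos=90, false_neg=10))   # 0.9
print(specificity(true_neg=950, false_pos=50))  # 0.95
```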
There is a trade-off between these metrics, i.e. their scores are inversely related. To enhance sensitivity, analysts can simply lower the inclusion threshold and predict more future participants, but this tends to undermine the specificity score. The opposite is also true.
9/
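A toy illustration of that trade-off, using made-up risk scores and thresholds (assumptions for demonstration only, not any real screening tool):

```python
# Lowering the inclusion threshold flags more people, which raises
# sensitivity, but it also flags more non-participants, which lowers
# specificity. Raising the threshold does the opposite.

participants     = [0.9, 0.7, 0.5, 0.4]        # hypothetical risk scores
non_participants = [0.6, 0.5, 0.3, 0.2, 0.1]

def metrics(threshold: float) -> tuple[float, float]:
    sens = sum(s >= threshold for s in participants) / len(participants)
    spec = sum(s < threshold for s in non_participants) / len(non_participants)
    return sens, spec

for t in (0.8, 0.5, 0.3):
    sens, spec = metrics(t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
# threshold=0.8: sensitivity=0.25, specificity=1.00
# threshold=0.5: sensitivity=0.75, specificity=0.60
# threshold=0.3: sensitivity=1.00, specificity=0.40
```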
To demonstrate the issue, Sageman imagines an approach that achieves sensitivity & specificity scores of 100 & 99 percent for a population of 1 million. If this pop contains 100 actual participants, the approach would correctly predict all 100 (i.e. 100 percent sensitivity)
10/
But, it would also incorrectly identify an extra 10,000 innocent people (i.e. 99 percent specificity). Despite its near perfection in percentage terms, the probability of those identified actually being involved in violence is less than one percent!
11/
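The arithmetic behind that result, written out with the same hypothetical numbers as the example above:

```python
# Population of 1,000,000 containing 100 actual participants; a tool
# with 100% sensitivity and 99% specificity.

population   = 1_000_000
participants = 100
sensitivity  = 1.00
specificity  = 0.99

true_positives  = int(participants * sensitivity)                        # 100 caught
false_positives = int((population - participants) * (1 - specificity))   # 9,999 (~10,000) wrongly flagged
flagged = true_positives + false_positives

# Positive predictive value: the chance that a flagged person is
# actually a future participant.
ppv = true_positives / flagged
print(flagged, round(ppv * 100, 2))  # 10099 flagged, ~0.99% — less than one percent
```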
I should mention that I am writing a book covering this and loads more material. But I've been working on it for a couple of years already, and at the current pace it may take another five.
So, I wouldn't hold your breath
12/ End