
These stories often cover academic studies that are based on one particular #Bot detection tool: Botometer.
But: @OuzhouAdi and I just published a paper on the tool and why we should be very careful with studies that are based on its classifications. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0241045 2/6
A recent study stated that "bots generated spikes of conversations around real-world political events". And while the general data is being shared, neither the Botometer scores nor the manually validated accounts are. 3/6
https://firstmonday.org/ojs/index.php/fm/article/view/11431
While it is unclear why the authors have decided not to share the accounts, @FlorianGallwitz has been tracking so-called "social bots" for a while and, as far as I know, has yet to find one. 4/6
More generally, this study is far from alone. Many studies have been published using Botometer, and they often receive media attention as well. The study above, for example, resulted in coverage by @nytimes 5/6 https://www.nytimes.com/live/2020/2020-election-misinformation-distortions#twitter-bots-poised-to-spread-disinformation-before-election
This is a problem. Not because bots don't exist, but because both academics and journalists regularly overstate their relevance. As @kreissdaniel and @shannimcg just highlighted: alarmism is misplaced. For misinformation as well as bots. 6/6 https://slate.com/technology/2020/10/misinformation-social-media-election-research-fear.html
Postscriptum: I think studying misinfo as well as bots is highly important and relevant. I teach a class on misinfo and run the Misinfo Working Group at @BKCHarvard. But alarmism helps no one.