At the beginning of 2020 I was tired of the 'AI ethics' discourse. But by the end of the year, I'm feeling inspired and awed by the bravery, integrity, skill and wisdom of those who've taken meaningful political action against computationally-mediated exploitation and oppression.
The conversation has moved from chin-stroking, industry-friendly discussion towards meaningful action, including worker organising, regulation, litigation, and building alternative structures. But this move from ethics to praxis inevitably creates fault lines between strategies.
Should we work to hold systems to account 'from the inside'? Legislate and enforce regulation from outside? Resist them from the ground up? Build alternative socio-technical systems aligned with counterpower?
It's easy to dismiss the efforts of others as too naive or too cynical, too radical or not radical enough. Having been peripherally involved with various shades of progressive politics over the years, I've found myself on both sides of such arguments.
Each approach has its shortcomings, and the power to pursue them is not equally distributed; it typically requires navigating hostile institutional structures (in industry, academia, and elsewhere) that favour the already-privileged, and discourage critical work.
But configured in the right way, and embedded in broad-church political movements, these different strategies can be mutually supporting. Tech worker walkouts might stop a particular AI contract, but allying with broader campaigns could lead to a general ban (e.g. on facial recognition technology).
Likewise, even if current regulation isn't strong enough to stop harmful uses of tech outright, robust enforcement might put enough barriers in place to give activists a fighting chance to organise against them, or help alternative structures (e.g. platform coops) to flourish.
External audits of discriminatory algorithms by researchers and investigative journalists alone might not convince companies to abandon them, but could spark litigation that would force them to.
Legal limits on data collection and AI development will not lead to data justice on their own, but they might substantially limit the potential damage authoritarian fascists, domestic and foreign, can do with such technologies when they gain power.
To borrow loosely from the late Erik Olin Wright, we need to combine multiple strategic logics: neutralising harmful technologies in ways that better enable us to transcend the structures they support; transcending them in ways that help neutralise their harms (https://www.jacobinmag.com/2015/12/erik-olin-wright-real-utopias-anticapitalism-democracy/).