I have seen a few people on here suggesting that researchers who study the harms of AI should smile more in the wake of the exciting @DeepMind breakthrough.
I used to encounter this a lot in the tech for ‘social good’ space.
It seems to me that there are three camps. 1, People with the big ideas about how data is going to make everything better. They fundraise on the blue-sky potential and are uncomfortable or unwilling to talk about politics, risk, and harm.
2, People who study and comment on the actual effects of those technologies, the realities of politics and power, and the complexities of doing anything meaningful with tech.
3, People who try to embrace the positives, navigate the harms, and seek realistic improvements in a society with new capabilities.
I think we should all aim to be in group 3, but most of us either gravitate towards group 1 or have been forced into group 2. The thing is, group 1 often has the power. And that power is further enabled when they shirk the responsibilities of critique and caution.
They move fast because they don’t care about breaking things.
So it rankles me when I see people in group 1 take digs at people who dedicate their lives to being in group 2. Group 2 is essential, undervalued, and often chock-full of people who directly experience the harms of tech.
Here’s to nuance, collective migration to group 3, and equitable distribution of resources between those who have unfettered enthusiasm about tech and those who are worried that it will perpetuate broken systems of prejudice and inequality.
You can follow @alixtrot.