Ethics has not historically been an explicit part of AI research. @SilverJacket says “Many kinds of researchers...encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science.” (2/n)
I’d be curious to hear from STS scholars, historians or philosophers about how disciplines like medicine, biology, psychology etc. built durable ethical frameworks, and how that compares to what is happening in AI (yeah that’s probably a book or several—sorry!) (3/n)
Because AI research is commercialized so quickly, the decisions made by institutions (academic/commercial) have real consequences. It’s reckless to think of AI research as somehow divorced from its commercial applications. (4/n)
In @SilverJacket's article, @rao2z points out that accepting problematic studies at industry conferences makes it harder to push back against harmful applications down the line. Important point bc it shows that research is the first link in a long, winding value chain. (5/n)
Anticipating the human impact of technologies (esp those that try to emulate human abilities) is *the actual job* of historians, ethicists, sociologists, anthropologists and others who have been studying the impact of technology on humans for decades (if not centuries). (6/n)
Related: Given the origins of the Internet, the idea that a CS graduate student would be surprised by the presence of NSA recruiters at an AI/ML conference is mind-blowing. (7/n)
More to the point, framing AI risk solely in apocalyptic or militaristic terms neglects the lived experience of real people, usually the most vulnerable, who have already been, and continue to be, harmed by algorithmic bias. (8/n)
The biggest barriers to establishing ethical norms for AI are more human than technical. Business models, power structures, the complexity of the technologies, and, of course, human nature all play a part. (9/n)
In this context, it's troubling that there is no mention of Timnit Gebru's firing from Google and how she has been treated since. It's perhaps the biggest story in AI ethics rn, as it touches on so many essential issues for AI ethics in the research community. (10/n)
At the same time, research is a critical part of tech ethics, but it isn’t the whole story. Many people in industry, academia and the policy world are working on principles, policies, practices (and yes, tech) focused on research ethics and education, bias remediation... (11/n)
...interpretability, auditable algorithms, governance and legislation, among other things. There is really good work being done (yes, some ethics-washing too, though I would argue that, in *some* cases, lack of public communication ≠ lack of care and action). (12/n)
So, where do we go from here? Not gonna lie; that's a lot more tweet-threading than you or I are up for rn. But a few quick things...(13/n)
1. AI/ML/tech is complex as hell, but that doesn't mean tech is off the hook. 2. Think of AI/tech ethics throughout the lifecycle, from research paper to launched product. 3. Business models matter; some are more tractable than others. (14/n)
4. Don't hire people to do the work to find the problems if you don't want them to find the problems. Game out those outcomes FIRST or ask for help doing so. May not be pretty, but will be helpful. (15/n)
5. Much of the contentiousness about AI & tech ethics is bound up in people's self-conception. It's part of what makes this so hard: it feels *personal*. But it's not about me or you; it's about choosing to value and support the safety of people you may never know. (/fin)