There's an underappreciated element to these stories that we really need to talk about.

I want to focus not on the institutional racism and misogyny (more qualified people have taken it apart better than I can), but on the important links between AI safety and infosec. [1/10] https://twitter.com/washingtonpost/status/1341871724497956867
Basically, we need to start thinking about AI safety in the same way we're thinking about traditional vulnerability research, and protect the people working in this area just like we're protecting our security teams. Yes, it's different work... [2/10]
but it's not unprecedented for security to incorporate new sub-fields. Security UX, anti-abuse protection, and privacy weren't on our radar a decade ago, but many of us now work on them because they deal with real user harm. AI safety is not so different. [3/10]
The AI safety community reminds me in important ways of the nascent information security scene of the 90s: it's developing an area that is almost certain to gain immense societal importance; it produces highly technical, fascinating research with major implications; [4/10]
as far as companies are concerned, the work doesn't contribute in obvious ways to their bottom line because it brings the risks of technology to light; predictably, the work is often led by people with different backgrounds than those building the systems it investigates. [5/10]
We're also starting to see backlash against this work that is eerily similar to what infosec researchers faced back in the day: attempts to strong-arm researchers, dispute the merits of their work, put a spin on the results, or claim that things are fine, actually. [6/10]
It took the security community over a decade to drag the industry, kicking and screaming, into a world where security research is seen as a crucial service for developers. There are still major companies with strong cultures of disrupting security work on their software. [7/10]
Now, imagine the chilling effect that a few "resignations" of key security researchers from large companies would have had on our community back in the day. Those researchers are, by and large, running the security programs of the platforms that handle most of your data today. [8/10]
In many instances in the past we rallied behind the misfits who found software flaws, most of whom were young white men. The difference now is that much of the work is spearheaded by people from underrepresented backgrounds, often at significant personal cost. [9/10]
Are we going to shield and support our fellow researchers who speak up about systemic problems in technology? Or is this a time when we suddenly keep quiet?

What we do now shows what our values really are.

[end]
Thread by @arturjanc.