algorithmic transparency is certainly an important research area, but let's dig into why there's a lot more work to do.
🧵 (1/7) https://twitter.com/brianchristian/status/1356681364620251136
current large production systems (e.g. search and recommendation, advertising), many of which use neural networks as components, operate through an assemblage of different models, features, network architectures, and objective functions. (2/7)
they also involve classic, non-adaptive algorithms and human decisions cobbled together by an organization. focusing algorithmic transparency on a single network may be a good start, but the problem is a lot messier. (3/7)
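to make the "assemblage" concrete, here's a minimal, hypothetical sketch: every name, model, weight, and rule below is invented for illustration, but the shape — multiple learned models with different objectives, a hand-tuned blend, and non-adaptive business rules — mirrors the kind of pipeline described above.

```python
import random
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    is_promoted: bool = False

class StubModel:
    """Stand-in for a separately trained neural network."""
    def predict(self, user, item):
        return random.random()  # placeholder score in [0, 1)

click_model = StubModel()       # e.g. trained on click logs
engagement_model = StubModel()  # e.g. trained on dwell time

BLOCKLIST = {"item-banned"}     # human-curated policy decision

def rank_items(user, candidates):
    # classic, non-learned filtering step
    pool = [c for c in candidates if c.id not in BLOCKLIST]
    scored = []
    for item in pool:
        # several models, each optimizing a different objective
        p_click = click_model.predict(user, item)
        p_dwell = engagement_model.predict(user, item)
        # hand-tuned blend: weights chosen by an org, revised continually
        score = 0.7 * p_click + 0.3 * p_dwell
        # non-adaptive business rule layered on top
        if item.is_promoted:
            score *= 1.2
        scored.append((score, item))
    return [item for _, item in sorted(scored, key=lambda t: t[0], reverse=True)]

# usage: rank_items("user-1", [Item("a"), Item("b", is_promoted=True)])
```

note that the blend weights, blocklist, and promotion multiplier are exactly the parts a transparency analysis of any single network would miss.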
moreover, all of these elements are constantly shifting, with modules swapped in and out by the day, hour, or minute. any hypothesis about how the system behaves or what it optimizes for, even if accurate in the moment, is ephemeral. (4/7)
explaining the behavior of an algorithm requires understanding the context in which it's deployed. i could reverse-engineer the neural network powering a social media recommendation system, but that wouldn't tell me much without understanding how people might use or exploit it. (5/7)
so, while the technical work of algorithmic transparency is important, it should be part of a broader perspective on how these systems are engineered, produced, and used. having a nice technical model of a dog's vision system won't, by itself, tell me whether it will bite me. (6/7)
fortunately, there is abundant literature on this from the social sciences, which have been studying these questions for much longer and with more nuance. (7/7)