yes, algorithms carry values and those values can promote ignorance, intolerance, and harm.
consider a supervised learning algorithm where the instances are people. if it uses empirical risk minimization, errors for individuals from well-represented groups are emphasized and errors for underrepresented groups are deemphasized. that's a value. https://arxiv.org/abs/1806.08010
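here's a minimal sketch of that weighting (made-up losses and a hypothetical 90/10 group split, just numpy): averaging the loss over everyone lets the majority group dominate the number being minimized.

```python
import numpy as np

rng = np.random.default_rng(0)
n_majority, n_minority = 900, 100                       # hypothetical 90/10 split
losses_majority = rng.uniform(0.0, 0.2, n_majority)     # model fits this group well
losses_minority = rng.uniform(0.5, 1.0, n_minority)     # model fits this group poorly

# empirical risk = plain average over all instances, so each group's errors
# count in proportion to its size
empirical_risk = np.mean(np.concatenate([losses_majority, losses_minority]))

print(f"overall empirical risk: {empirical_risk:.3f}")  # looks "fine"
print(f"majority risk: {losses_majority.mean():.3f}, minority risk: {losses_minority.mean():.3f}")
```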
consider the same scenario but with an interactive algorithm (e.g. mab, rl). an exploration policy can result in a higher experimentation rate on people in underrepresented groups because there is more uncertainty about those individuals, devaluing their autonomy. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2846909
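a rough sketch of the intuition with a ucb-style bonus (the counts are invented): the group with fewer prior observations gets a larger uncertainty bonus, so an exploration policy keeps experimenting on them.

```python
import math

def ucb_bonus(total_steps, group_count):
    # standard UCB1-style exploration bonus: shrinks as a group is observed more
    return math.sqrt(2 * math.log(total_steps) / group_count)

total_steps = 10_000
counts = {"well_represented": 9_000, "underrepresented": 1_000}
for group, n in counts.items():
    print(group, round(ucb_bonus(total_steps, n), 4))
# the underrepresented group's bonus is ~3x larger, pulling experimentation toward it
```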
consider an nlp system that claims to solve a task but assumes an english character set and tokenization. its representation pedestalizes one language and effectively dismisses others as irrelevant or, perhaps, "future work". https://thegradient.pub/the-benderrule-on-naming-the-languages-we-study-and-why-it-matters/
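a toy illustration of that assumption (the sentences are just examples): whitespace tokenization "works" for english and silently degrades for languages that don't mark word boundaries with spaces.

```python
def naive_tokenize(text):
    # assumes words are separated by whitespace -- true for english, not universally
    return text.split()

print(naive_tokenize("the cat sat on the mat"))   # 6 tokens, as intended
print(naive_tokenize("猫坐在垫子上"))              # 1 "token": the whole sentence
```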
consider a predictive typing system that assumes text comes from a single language or dialect. such a system erases the voices of those who code-mesh and ignores the reasons why they might be doing so. https://arxiv.org/abs/2005.14050
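a hypothetical sketch of how the single-language assumption gets baked in (the vocabulary and threshold are stand-ins): anything that doesn't pass an english-only check is filtered out before the model ever sees it.

```python
ENGLISH_VOCAB = {"i", "am", "going", "to", "the", "store", "later"}  # stand-in vocabulary

def keep_for_training(sentence, threshold=0.8):
    # keep a sentence only if "enough" of its tokens look like the expected language
    tokens = sentence.lower().split()
    in_vocab = sum(t in ENGLISH_VOCAB for t in tokens)
    return in_vocab / len(tokens) >= threshold

print(keep_for_training("i am going to the store later"))       # True
print(keep_for_training("i am going to the tienda más tarde"))  # False: code-meshed voice dropped
```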
consider a linear model used to make hiring decisions. the functional form carries assumptions about the relationship between features and decisions. if the assumption is valid for dominant groups and not for underrepresented groups, then it is effectively devaluing the latter.
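a small sketch with synthetic data: one linear model is fit to everyone, the dominant group's feature-outcome relationship really is linear, the other group's is not, and the errors land on the latter.

```python
import numpy as np

rng = np.random.default_rng(1)
x_a = rng.uniform(0, 10, 500)                  # dominant group
y_a = 2.0 * x_a + rng.normal(0, 1, 500)        # genuinely linear relationship
x_b = rng.uniform(0, 10, 50)                   # underrepresented group
y_b = 0.3 * x_b ** 2 + rng.normal(0, 1, 50)    # nonlinear relationship

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])
slope, intercept = np.polyfit(x, y, 1)         # one linear model for everyone

err_a = np.mean((y_a - (slope * x_a + intercept)) ** 2)
err_b = np.mean((y_b - (slope * x_b + intercept)) ** 2)
print(f"mse on dominant group: {err_a:.2f}, on underrepresented group: {err_b:.2f}")
```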
consider an algorithm for online dating. there are strong assumptions in terms of what constitutes a "good match": messages, meetups, marriages, etc. the reward definition here carries a tremendous value judgment about what is an appropriate relationship. https://dl.acm.org/doi/abs/10.1145/3274342
consider an algorithm designed for a production recommendation system. using behavioral metrics carries strong assumptions about who is doing the engaging, how their interfaces operate, and what "good" looks like. all others are "out of scope" or "an engineering problem".
consider a music recommendation system. user engagement alone as a reward ignores whether a recording was intended to be presented in a specific context or has cultural value to a community. this is often dismissed as an implementation detail. https://www.journal.radicallibrarianship.org/index.php/journal/article/view/38
consider rewards and objectives more generally. who decides on a reward definition? what alternative definitions were excluded? if we're dealing with multiple objectives, do i consider some values to optimize and others as constraints? what are the values behind that decision?
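a tiny sketch of how that framing choice plays out (the candidate numbers are invented): the same two measures pick different "best" options depending on whether harm is folded into a weighted objective or treated as a hard constraint.

```python
candidates = {
    "a": {"engagement": 0.90, "harm": 0.40},
    "b": {"engagement": 0.70, "harm": 0.10},
    "c": {"engagement": 0.50, "harm": 0.05},
}

# framing 1: scalarize, implicitly trading harm against engagement
best_weighted = max(candidates,
                    key=lambda k: candidates[k]["engagement"] - 0.5 * candidates[k]["harm"])

# framing 2: harm is a constraint, not something to trade away
feasible = {k: v for k, v in candidates.items() if v["harm"] <= 0.15}
best_constrained = max(feasible, key=lambda k: feasible[k]["engagement"])

print(best_weighted, best_constrained)  # "a" vs "b": different answers from the same numbers
```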
even more generally, the framing of a problem carries values and projects what is important and what is not. this is perhaps as insidious as the curatorial decisions about training data/environments that have traditionally resulted in algorithmic bias.
one way forward is to ask who is affected by these decisions and then find a way to improve participation from all stakeholders and experts, not just algorithm designers. as @timnitGebru has said, "nothing about us without us".
but, if i consider an algorithm--or, more generally, a process--that silences or terminates those voices, then i have to ask which values are being promoted and which are not.