Let's talk about this article!

Confusions over the agential status of AI are a major problem with the discourse, so I think the topic is important. Unfortunately, engaging the topic as "mythbusting" undersells the importance, and misses the core issues at stake. https://twitter.com/djleufer/status/1288423374625153025
These "mythbusting" articles seem to be doing two jobs: 1) attacking the hype and misleading claims often made in AI journalism, and 2) attacking popular misconceptions about AI. Both jobs are worthwhile! But they shouldn't be confused with each other.
Misleading headlines are not the source of AI misconceptions. Writing better headlines won't clear up our misconceptions.

Bad journalism headlines are not an AI-only problem. They are driven more by attention dynamics on social media than by the conceptual commitments of the field.
To be clear: AI headlines are bad, and AI journalists should do better. To this extent, the recommendations and tools in the article are welcome.

My point is that headline hype is an extremely superficial problem, and has little to do with popular "myths" on AI.
But the article purports to be correcting our misconceptions on AI agency as such, not just correcting sloppy headlines.

Their central worry is that ascribing agency to AI "hides human agency".
They link the worry about AI agency to another worry about the opacity of AI systems, quoting from this article on the ethical problems of treating AI like a "black box".

https://medium.com/@szymielewicz/black-boxed-politics-cebc0d5a54ad
Treating AI systems as opaque (and unexplainable!) is definitely an ethical problem! But it's not the same problem as ascribing agency to AI. The black box article is worried about the explainability of deep learning systems, not casual ascriptions of agency.
So we face issues of opacity even where we don't ascribe agency to AI.

The authors argue that ascribing agency to AI will make these problems worse, and "effectively mask the human agency behind certain processes".
The authors clarify that no AI system "pulls its predictions and outputs out of thin air", but is always "the result of multiple human decisions".

They recommend that we understand AI systems as parts of "sociotechnical ensembles", imbued with human decision making.
I strongly agree that AI systems are parts of sociotechnical ensembles!

But the authors assume that sociotechnical ensembles consist exclusively of human agents. Notice how quickly their whole discussion falls apart once this assumption is taken seriously. https://twitter.com/eripsa/status/1054495104575160321
Because after all: humans are also part of sociotechnical ensembles! Every human decision is the result of embedded experience, community support, and social infrastructure; ascribing agency to an individual human also masks the sociotechnical network making that agency possible.
When we ascribe agency to a person (or a group, nation, corporation, etc...) we're reducing sprawling sociotechnical systems to an individuated *thing*. Ascribing agency to anything involves this masking operation. Again, this is not unique to AI.
Thinking in terms of ensembles, in dynamic system-theoretic terms, helps us avoid the misconceptions inherent to individualized, compartmentalized static ontologies.

This virtue is lost when we imagine sociotechnical ensembles as simply consisting of human agents + artifacts.
The article attacks the "myth" of AI agency with an uncritical appeal to the myth of human agency.

It's like proving the tooth fairy isn't real by reference to the Bible.
If the article were serious about addressing our misconceptions, it would more critically engage the way "agency" is situated within sociotechnical ensembles. If humans can be both agents and parts of ensembles, why can't AI? If AI can't, why can humans?
Unfortunately, the discussion of legal personhood at the end doesn't get any better. It mocks claims of AI "inventing something new" when the AI makes only relatively minor improvements on existing designs.

How many human patents would this criticism call into question?
Our laws on personhood are not grounded in the image of "sociotechnical ensembles" proposed in the article. But nowhere is this disconnect seriously confronted.

The article endorses an opaque, individualized agency for humans, but denies it for machines. #fairplayformachines
I think #humansupremacy positions like this are fundamentally hypocritical, but I admit there's an argument to make for them.

This article doesn't make the argument. It merely trades one popular myth for another.
On my view, AI systems are agents in virtue of being parts of sociotechnical ensembles. More importantly, I don't think agency is necessarily an opaque or obscuring notion. We can have system-theoretic accounts of agency and responsibility grounded in networks of action.
But taking a systems-theoretic account seriously means also questioning the ways we take for granted humans (and corps, groups, etc) as individualized agents. Humans are also embedded in sociotechnical systems, with and alongside our AI. We're part of the same systems.
You can follow @eripsa.