Are there any actual examples where an anthropomorphic robot has caused any real harm through deception in virtue of its human-likeness?
Or is this an entirely hypothetical sci-fi scenario that has so many AI ethicists shaking in their genuinely real human boots?
For what it's worth, I have nothing against contemplating sci-fi scenarios!
The trouble is when these sci-fi scenarios are mistaken for genuine concerns. https://twitter.com/eripsa/status/1344323304891568129
My view rn is that the relevance and ethical import of anthropomorphic robots is context-sensitive. Human-likeness might carry special weight in specific domains (e.g., sexbots), but it's hard to say anything more general than that.
Despite this ethical ambiguity, I think anthropomorphic robots remain a popular (and headline-grabbing) topic among AI ethicists because there is something we find offensive about "playing god/human". It hurts our pride; it mocks our vanity.
I think the human supremacist rhetoric we see from Joanna, Frank, Abeba, and many others in #AIEthics is ultimately grounded in a reactionary defense of the "human" in response to these apparent offenses to our sensibility.
I think this analysis rather nicely explains the leap to reactionary rhetoric like "robots should be slaves". The ethical concerns are informed by a fundamentally conservative ideology. https://twitter.com/eripsa/status/1288520029131218945
But whether or not you agree with my analysis, I think the important thing is that the #AIEthics community reflect on the attention and ink we give to anthropomorphic robots, relative to the ethical risks and harms they actually pose.
In this vein, it is worth asking to what extent the focus on anthropomorphic robots among AI ethics and policy experts serves to distract from and derail conversations about more pressing ethical concerns.