The lack of awareness of AI ethics issues by AI practitioners has been an ongoing source of very real problems. On the other hand, I have yet to hear of any harm caused by making AI practitioners think about the implications of their work.
Awareness of human consequences is a necessity in all scientific & engineering disciplines. It's even more important in fields that are "high leverage", where a very small team consisting entirely of engineers can make a big impact -- like CS, and in particular AI.
If your work has "impact", then by definition it is changing the world. You must then ask *how* the world is changing -- in which direction does your impact point? Who benefits and who loses out? Technological impact always has a moral direction.
I should add, the need for ethics awareness arises from the *applications* of AI. If your work is very theoretical, it generally does not have any materialized impact, and its potential impact could go in any direction.
Likewise with regulations -- they should target *applications*, not research or technology in an abstract sense.
You can follow @fchollet.