Heard in an AI-in-arms-control discussion: "AI should do what it's expected to do." Sounds so simple. Not so. AI-specific concerns aside, no software system operating in a complex, uncontrolled environment (i.e. the 'real world') ever just 'does what's expected.'
There is really no software engineering methodology for this. You can't fully specify the use cases and inputs, because the inputs are potentially 'any situation that can happen in the world, anticipated or otherwise'.
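To make that concrete, here's a minimal sketch (the enum and its values are hypothetical, not any real system's interface): a specification is necessarily a closed enumeration of anticipated conditions, but the world's inputs aren't drawn from that enumeration.

```python
from enum import Enum

class Weather(Enum):
    """Hypothetical slice of an operational design domain."""
    CLEAR = "clear"
    RAIN = "rain"
    SNOW = "snow"

# The conditions the system was specified, designed, and tested for.
ANTICIPATED = {Weather.CLEAR, Weather.RAIN}

def input_in_spec(weather: Weather) -> bool:
    # A closed check like this can only recognize conditions the
    # spec's authors thought of. Smoke, dust storms, sensor glare,
    # or a situation nobody thought to name never even make it into
    # the enum: the open world isn't sampled from your type system.
    return weather in ANTICIPATED
```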
This is the problem autonomous driving has, because it's another open-world computing application. Except that roads are at least somewhat controlled, and getting from A to B without colliding with anything is probably a lot easier than the problem of selecting targets.
So autonomous driving companies have resorted to testing as exhaustively as possible, in an attempt to determine whether self-driving cars operate at an acceptable level of safety. Waymo did 20 million miles. How do you do the same for your automated targeting system?
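For scale, a back-of-envelope sketch using standard Poisson zero-failure reasoning (along the lines of the well-known RAND driving-to-safety estimate) of how many failure-free test miles you'd need to claim, with 95% confidence, that a system's failure rate is below a given target:

```python
import math

def miles_for_zero_failure_demo(target_rate_per_mile: float,
                                confidence: float = 0.95) -> float:
    """Failure-free test miles needed to bound the failure rate.

    Model failures as a Poisson process. Observing zero failures
    over n miles gives a one-sided upper confidence bound on the
    per-mile rate of -ln(1 - confidence) / n; solve for n.
    """
    alpha = 1.0 - confidence
    return -math.log(alpha) / target_rate_per_mile

# Example: demonstrate a fatality rate below 1 per 100 million miles
# (roughly the US human-driver average) with 95% confidence.
print(f"{miles_for_zero_failure_demo(1e-8):,.0f}")  # ~300 million miles
```

On that arithmetic, even 20 million real-world miles falls an order of magnitude short for driving.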
Access to real roads is much more readily available than access to real combat situations (the Forever War aside).

And simulated is not real - see M.L. Cummings' great paper here: http://hal.pratt.duke.edu/sites/hal.pratt.duke.edu/files/u39/2020-min.pdf
Regardless of all this, whether or not software systems work 'as expected' is only a small part of the puzzle. @EmeryJohnR's paper, which discusses the problems inherent in making ethics into simply a computational hoop to jump through, is required reading: https://twitter.com/EmeryJohnR/status/1314610250842927105