Alright. As promised, I want to summarize a few of our key findings from this new report on trust in human-machine teaming. This is a fascinating, complex topic, and we only touch the tip of the iceberg here. But here are some of the most interesting issues we address 1/14 https://twitter.com/CSETGeorgetown/status/1361692511476023298
Perhaps one of the most interesting things we found in that research is that despite a consensus that trust is key to effective human-machine teaming, only 18/789 research components related to autonomy and 11/287 research components related to AI mention the word “trust.” 3/
This curious finding pushed us to dig further, so in the Trusted Partners report we discuss possible explanations for this gap. 1) Technology has outpaced research on trust in HMT, w/ much of what we know coming from studies on automation & expert systems, not AI/ML. 4/
2) Trust is an abstract, hard-to-measure concept. So defense researchers tend to focus on technology-centric solutions that 'build trust into the system' by making AI more transparent, explainable, reliable, etc.--in other words, enhancing system features related to trust. 5/
So while the word 'trust' itself is not frequently mentioned by DoD S&T research on autonomy & AI, clearly, DoD scientists are working on trust--through reliability, assurance, transparency, explainability, robustness, etc. 6/
Now, there is no doubt that systems engineering/tech-centric approaches to cultivating trust in human-machine teams are necessary for building trustworthy AI systems that can be deployed alongside humans in high-risk settings, including combat. But they may not be sufficient. 7/
Thing is, human-machine teaming is a relationship. And each element--the human, the machine, and the interactions between them--is important. Tech-centric approaches focus predominantly on the machine. But a deep understanding of the human element is also critical. 8/
Human trust is affected by many factors--cognitive, emotional, demographic, situational, prior experiences, etc. The larger societal structures & organizational cultures where ppl work also condition trust. Just think about the differences between military services, even units. 9/
All of these factors and differences have important implications for trust in human-machine teams that tech-centric approaches to 'building trust in' can't fully address. But paying closer attention to them can strengthen systems engineering & build better AI, & better HMT. 10/
We offer a few directions for additional/future defense research:
-Research and experimentation under operational conditions,
-Collaborative research with allied countries,
-Research on trust and various aspects of transparency, 11/
-Research on the intersection of explainability and reliability,
-Research on trust and cognitive workloads,
-Research on trust and uncertainty, and
-Research on trust, reliability, and robustness. 12/
Overall, insights from different disciplines and research on human attitudes toward technology and the interactions and interdependencies between humans and technology can strengthen and refine systems engineering approaches to building trustworthy AI systems. 13/
Finally, a lot of very smart people helped @tinahuang__ @HsjChahal & me make this report better, and we are very grateful for their insights: @hlntnr @timhwang @LarryLewis_ @flaggster73 Erin Chiou, Jon Bansemer, & Igor Mikolic-Torreira. Any faults are mine alone of course. 14/14
You can follow @RitaKonaev.