Watched many “FSD” beta videos the past few days & my general conclusion, based on the driver commentary, is that the only unacceptable driving event is an accident.

“It works!” so long as they’re able to stop it from crashing itself. That’s not self-driving, it’s #autonowashing
Related, via a study of trust in automation: "The mere fact that the system erred did not automatically lead to a loss in perceived performance." A potentially dangerous "miss" was viewed poorly, but a "false alarm" (erratic behavior with no immediate risk) was not viewed unfavorably.
The point is, both are system errors and reasonable causes for concern.

Yet what we as humans perceive as a performance concern is highly relative and, I would hypothesize, highly influenced by other beliefs we hold about the system (e.g. "testing for the greater good").
You can follow @lizadixon.