ilaria liccardi & I just posted a new draft paper arguing that cloud-reliant, always-listening devices often break a bunch of laws when they accidentally record the owner, & anyone else in earshot, due to a false positive (think "election" for "alexa") https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3781867
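(for anyone who wants intuition for how a wake-word false positive happens, here's a minimal python sketch of a permissive keyword spotter. to be clear: this is not any vendor's actual engine-- the phoneme strings & the 0.6 threshold are illustrative assumptions. spotters are tuned so a real wake word is rarely missed, & the cost of that tuning is waking on near-matches & streaming the surrounding audio to the cloud)

    # minimal sketch of a permissive wake-word spotter (illustrative only;
    # not any vendor's engine -- phoneme strings & threshold are assumptions)
    from difflib import SequenceMatcher

    WAKE = ["AH", "L", "EH", "K", "S", "AH"]  # "alexa", roughly phonemized

    # hypothetical phoneme output from an on-device recognizer
    heard = {
        "election": ["IH", "L", "EH", "K", "SH", "AH", "N"],
        "good morning": ["G", "UH", "D", "M", "AO", "R", "N", "IH", "NG"],
    }

    # tuned low so real wake words are rarely missed; the tradeoff is
    # waking on near-matches & shipping the audio to the cloud
    THRESHOLD = 0.6

    for phrase, phonemes in heard.items():
        score = SequenceMatcher(None, WAKE, phonemes).ratio()
        action = "RECORD & UPLOAD" if score >= THRESHOLD else "ignore"
        print(f"{phrase!r}: score={score:.2f} -> {action}")

run it & "election" scores 0.62, clearing the (assumed) threshold, while the unrelated phrase scores 0.00 & is ignored-- that missed-wake vs. false-wake tradeoff is the design choice that produces the accidental recordings we're talking about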
ilaria conducted a study with jose juan dominguez veiga on the privacy preferences & practices of voice assistant users. 1 key finding: people will prioritize the privacy preferences of those they're close to, but not so much the preferences of other people
at the same time, we're increasingly surrounded by smartphones, speakers, & other devices that can accidentally record us-- recordings that are sent to the companies behind the voice assistants, & used for various purposes, including but not limited to advertising
under the federal wiretap act, companies need to get specific & actual consent to intercept oral communications, which isn't happening when these devices record accidentally & send the recording to the amazon/google/etc cloud (a problem of which the companies are well aware)
this is esp. true for bystanders--meaning 1 party consent via privacy policy shouldn't suffice for accidental recordings of anyone you encounter--but we don't think accepting vague boilerplate should constitute consent for secret, accidental recordings of the device owner either
the companies that operate the software also aren't getting coppa-required verifiable parental consent for the kids they record, & their collection &/or use of accidental recordings may contradict their privacy claims, which in some cases could be deceptive &/or unfair trade practices
wiretapping statutes (& udap claims, to a certain degree) also depend on a judge or regulator's interpretation of a reasonable expectation of privacy-- a test that warps privacy protections when it doesn't adequately consider resignation to violations & lack of alternatives
the potential cases we're talking about would rest on highly fact-specific inquiries--the wording of specific marketing claims, the technical configuration of particular devices, & the contexts in which they're used will make for a big range of scenarios. ymmv
but the problem of false positives by voice-activated devices is ultimately one more failure of consent-centric privacy governance, & points to why policymakers should eschew such approaches going forward.
resignation, the ever-growing spread of voice-activated devices, & the impossibility of avoiding accidental recording by a device you're not aware of also illustrate why future laws should avoid hinging privacy protections on expectation tests.
huge thank you to @McKennaCyberLaw, @paulohm, @meganmcgraham, & the participants of PLSC 2020 for the tremendously helpful comments-- we'd love feedback from anyone with the time & inclination to give it, pls send any thoughts/fears/hopes/dreams our way! https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3781867