AWDL is Apple's mesh networking protocol, a low-level, device-to-device wireless system that underpins tools like AirDrop. It is implemented in the iOS kernel, a high-privilege, high-risk zone in iPhone and iPad internals.

1/
A researcher at Google's Project Zero, @i41nbeer, found a vulnerability in AWDL that allowed him to wirelessly infect iOS devices, which could then go on to wirelessly spread the infection to any iOS devices they came into contact with.

https://googleprojectzero.blogspot.com/2020/12/an-ios-zero-click-radio-proximity.html

2/
Beer developed the exploit virtually single-handedly over six months and confidentially disclosed its details to Apple, which issued patches for it earlier this year. Now that the patch has had time to propagate, Beer has released a detailed, formal account of his work.

4/
The 30,000-word technical paper is heavy reading, but if you want inspiration to delve into it, try the accompanying 14-second video, which is one of the most remarkable (and alarming) infosec clips I've ever seen.

5/
As far as anyone knows, this was never exploited in the wild. In his @arstechnica coverage of the exploit, @dangoodin001 drops the other shoe: "If a single person could do all of this in six months, just think what a better-resourced hacking team is capable of."

6/
It's a theme that Beer himself explores in a Twitter thread, in which he describes the tradeoffs in protocols like AWDL, whose ease of use was critical in private messaging by Hong Kong protesters last year.

https://twitter.com/i41nbeer/status/1333884906515161089

7/
But whose "large and complex attack surface [exposed] to everyone in radio proximity" creates a security nightmare if there are any bugs at all in the code, "and unfortunately the quality of the AWDL code was at times fairly poor and seemingly untested."

8/
It's a sobering reminder that companies can't fully audit their own products. Even companies with sterling security track records like Apple slip up and miss really, really, REALLY important stuff.

9/
This is at the heart of why independent security research must be protected - at a moment in which it is under assault, as outdated laws like the Computer Fraud and Abuse Act are used to punish researchers who go public with their work.

10/
Dominant companies - including Google and Apple - have taken the position that security disclosures should be subject to a corporate veto (in other words, that companies should be able to decide when their critics can make truthful disclosures about their mistakes).

11/
When the W3C introduced EME, it created the first-ever standardized browser component whose security defects could be suppressed under laws like the CFAA and Section 1201 of the DMCA.

12/
In his thread, Beer rightfully praises both Apple and Google for having bug bounty programs that serve as a carrot to entice security researchers into disclosing to the company first and giving it time to patch before going public.

15/
(And he calls on Apple to award him a bounty that he can donate to charity, which, with corporate charitable matching, would come out to $500K. This is a no-brainer that Apple should totally do).

16/
But as laudable as the bug-bounty carrot is, let us not forget that the companies still jealously guard the stick: the right to seek fines and even prison time for security researchers who decide that they don't trust the companies to act on disclosures.

17/
That may sound reasonable to you - after all, it's reckless to just blurt out the truth about an exploitable bug before it's been patched. But companies are really good at convincing themselves that serious bugs aren't serious and just sitting on them.

18/
When that happens, security researchers have to make a tough call: do they keep mum and hope that no one else replicates their findings and starts to attack users, or do they go public so that people can stop using dangerously defective products?

19/
Bug bounties are great - essential, even. But for so long as companies get to decide who can tell the truth about the defects in their products, bug bounties won't be enough. The best, most diligent security teams can make dumb mistakes that create real risk.

22/
Your right to know whether you are at risk should not be subject to a corporate whim. The First Amendment - and free speech protections encoded in many other legal systems - provides a high degree of protection for truthful utterances.

23/
The novel and dangerous idea that corporations should have a veto over the truth about their mistakes is completely irreconcilable with these free speech norms and laws.

eof/
You can follow @doctorow.