Next up at #enigma2021, Alex Gaynor from @LazyFishBarrel (satirical security company) will be talking about "QUANTIFYING MEMORY UNSAFETY AND REACTIONS TO IT"
https://www.usenix.org/conference/enigma2021/presentation/gaynor
Look for places where there are a lot of security issues being handled one-off rather than fixing the underlying issue
We tried to fix credential phishing mostly by telling people to be smarter, rather than fixing the root cause: attackers being able to use phished credentials.
2-factor auth just ... fixes the problem.
We believe memory unsafety is one of these root causes. We just keep playing whack a mole rather than ... fixing it.
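A minimal sketch (my own illustration, not from the talk) of what "fixing the root cause" looks like for memory unsafety: in a memory-safe language, an out-of-bounds access is a deterministic, checked error rather than silent memory corruption, so the whole bug class the whack-a-mole patches chase simply doesn't compile into existence.

```rust
fn main() {
    let buf = [1u8, 2, 3, 4];

    // Checked access: reading past the end yields None,
    // not the contents of adjacent memory.
    assert_eq!(buf.get(7), None);

    // Direct indexing like buf[7] would panic at runtime
    // with a bounds-check error instead of becoming a CVE.
    assert_eq!(buf.get(2), Some(&3));
}
```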
Not every language works for everything [*cough*garbage collectors*cough*]
Most of these languages have an "unsafe" keyword. That makes things unsafe, but at least you know where the risk is, and it's rarely used.
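To make the point concrete, here's a hedged sketch (my example, not the speaker's) of Rust's `unsafe` keyword: the compiler checks everything outside the marked block, and auditors can grep for the rare spots where the programmer takes over the safety guarantees.

```rust
fn main() {
    let v = vec![10u32, 20, 30];

    // Safe code: bounds-checked automatically, no annotation needed.
    assert_eq!(v[1], 20);

    // `unsafe` marks the one place where the programmer, not the
    // compiler, is responsible for memory safety. The risk is
    // localized and easy to find in review.
    let second = unsafe { *v.get_unchecked(1) };
    assert_eq!(second, 20);
}
```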
Case studies: yes, memory safety bugs really are being exploited against large numbers of people (e.g. the Chinese government targeting iOS users visiting Uighur websites, Heartbleed, WannaCry, the WhatsApp 0-day)
Can we make this impractical or impossible?
The stages of grief:
1. Denial “Programming in memory unsafe languages does not cause an increased rate of vulnerabilities.”
Refute with data [in diagram]
2. Anger “Yes, code in memory unsafe languages can have bugs. But if you were a better programmer, you wouldn’t have this problem.”
The engineers at Google/Apple/Linux/Mozilla/etc. are good! But the vulns show up everywhere regardless, and they increase with code size
https://how.complexsystems.fail/
3. Bargaining “Ok, yes, memory unsafety is a problem. But surely we can address it with static analysis and fuzzing and sandboxing and mitigations and red-teaming.”
This isn't wrong! It's just not sufficient! These teams are already using these techniques! A *lot*.
4. Depression “Memory unsafety is a problem… but oh my god we have a trillion lines of C/C++, we can never rewrite all of it, everything is hopeless.”
Work smarter, not harder: focus on the high-leverage places.
5. Acceptance. Ask how, not if.
* build coalitions
* use a memory-safe language that's a good fit
* make it possible to use memory-safe languages for new codebases
* find the highest-leverage attack surfaces in existing code and target that
* use language as a factor when assessing security
Incremental progress is possible
* Python Cryptographic Authority
* Rust-For-Linux
* Firefox
* Librsvg
Your project can be next!
[end of talk]