"A large scale analysis of hundreds of in-memory cache clusters at Twitter" by @1a1a11a et al is a must-read for anybody using in-memory caches in their architectures ( https://www.usenix.org/system/files/osdi20-yang.pdf). Some highlights:
"... a cache with a low miss ratio most of the time, but sometimes a high miss ratio is less useful than a cache with a slightly higher but stable miss ratio." Yes! If your goal is to reduce load on the backend, worst-case miss ratio matters most.
"Moreover, cache maintenance and failures become a major source of disruption for caches with extremely low miss ratios. The combination of these factors indicate there’s typically a limit to how much cache can reduce read traffic ..."
Soft TTL (section 4.4.1) is a fairly common idea, one that deserves more attention. It's a good tradeoff for a lot of human-facing use-cases.
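To make the idea concrete, here's a minimal sketch of soft TTL (my illustration, not the paper's or Twemcache's API; class and parameter names are hypothetical). Each entry has a soft deadline, after which the cache serves the stale value while refreshing it, and a hard deadline, after which it's a true miss. Human-facing reads tolerate slight staleness, so the backend sees smoother, mostly-background load.

```python
import time

class SoftTTLCache:
    """Hypothetical soft-TTL cache sketch. `fetch` loads a value from the backend."""

    def __init__(self, soft_ttl, hard_ttl, fetch):
        self.soft_ttl = soft_ttl    # after this age: serve stale, refresh
        self.hard_ttl = hard_ttl    # after this age: treat as a miss
        self.fetch = fetch
        self.store = {}             # key -> (value, write_time)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is not None:
            value, written = entry
            age = now - written
            if age < self.soft_ttl:
                return value        # fresh hit
            if age < self.hard_ttl:
                # Stale-but-usable: serve the old value and refresh.
                # (A real cache would refresh asynchronously.)
                self.store[key] = (self.fetch(key), now)
                return value
        # Hard miss: fetch synchronously on the request path.
        value = self.fetch(key)
        self.store[key] = (value, now)
        return value
```

The key property: between the soft and hard deadlines, the client never waits on the backend, so backend load becomes refresh traffic rather than latency-critical misses.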
"Measuring all Twemcache workloads, we observe majority of the cache workloads still follow Zipfian distribution." Good data on this, because it's been somewhat contentious. Obviously still very workload (and likely industry) dependent.
It's hard to summarize Section 6.4 comparing LRU and FIFO eviction strategies for these workloads. Maybe the best summary is that it doesn't seem to matter (for these workloads), unless you're trying to push the cache very hard indeed.
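You can get a feel for this with a toy replay (my sketch, not the paper's methodology): run the same Zipfian key stream against equal-sized LRU and FIFO caches and compare miss ratios. All names and parameters here are illustrative.

```python
import random
from collections import OrderedDict, deque

def zipf_trace(n_keys, n_requests, alpha=1.0, seed=42):
    """Sample a request stream with Zipfian key popularity."""
    rng = random.Random(seed)
    weights = [1.0 / (k + 1) ** alpha for k in range(n_keys)]
    return rng.choices(range(n_keys), weights=weights, k=n_requests)

def miss_ratio_lru(trace, capacity):
    cache, misses = OrderedDict(), 0
    for key in trace:
        if key in cache:
            cache.move_to_end(key)          # refresh recency on hit
        else:
            misses += 1
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict least recently used
    return misses / len(trace)

def miss_ratio_fifo(trace, capacity):
    cache, order, misses = set(), deque(), 0
    for key in trace:
        if key not in cache:
            misses += 1
            cache.add(key)
            order.append(key)
            if len(cache) > capacity:
                cache.discard(order.popleft())  # evict oldest insertion
    return misses / len(trace)
```

With a moderately sized cache on a Zipfian trace, the two miss ratios land close together, which matches the paper's observation that the choice mostly stops mattering until the cache is under real pressure.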
"... database traffic is often already filtered by caches, and has the most skewed portion removed via cache hits" Again, not news, but a great reminder that a high-hit-rate cache near the top makes the workload to lower layers harder to process (per unit).
Anyway, this paper is absolutely worth checking out if you're building these kinds of systems. Remember, caches introduce modes, and modes are bad for distributed systems.
You can follow @MarcJBrooker.