It's @jtopper here again with a quick re:Invent update. Some S3 improvements were announced overnight.
Incidentally, Facebook reminded me today that I was in Vegas this time last year. I like to call this face 'straight off a long-haul flight into "America's Playground"'. I don't imagine the author who coined that term intended to imply that it's full of toddlers, but here we are.
Anyway, S3. There are some changes to replication. You've been able to replicate data between S3 buckets for a while now, either within the same region or across regions. Changes launched last night let you replicate from one source bucket to multiple destination buckets at once.
You can also now perform two-way replication between a pair of buckets, essentially allowing you to keep two buckets in sync regardless of which one you write to. I can see this making life easier for companies who run out of a pair of regions.
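Here's a minimal boto3 sketch of what a multi-destination configuration looks like. The bucket names and the IAM role ARN are placeholders, and versioning has to be enabled on every bucket involved.

```python
import boto3

s3 = boto3.client("s3")

# One source bucket, two destination buckets, one configuration.
# All names and ARNs below are placeholders.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "to-eu-west-2",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-dest-bucket-eu"},
            },
            {
                "ID": "to-us-east-1",
                "Priority": 2,
                "Status": "Enabled",
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-dest-bucket-us"},
            },
        ],
    },
)
```

Two-way replication is the same mechanism applied in both directions: put a mirror-image configuration on the second bucket, pointing back at the first.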
S3 is already 99.999999999% durable (that's not me taking the piss, by the way, that's the actual number), so you wouldn't be using these replication features to solve a durability problem. They're more useful for availability or proximity considerations.
If you're using Server Side Encryption for S3 (and if you're not, you'll get a "needs improvement" finding in a Well-Architected review), heavy workloads generate a lot of KMS requests, which can get expensive.
As of today, you can configure S3 to use an S3 Bucket Key: S3 makes a single KMS request to create a bucket-level key, then derives per-object keys from that one bucket key, reducing traffic to KMS and making things cheaper. Presumably there's a performance improvement too.
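Enabling it is a small change to a bucket's default encryption settings. A quick boto3 sketch, with the bucket name and KMS key ARN as placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Make SSE-KMS the bucket default and turn on the S3 Bucket Key,
# so heavy workloads generate far fewer KMS requests.
# The bucket name and key ARN are placeholders.
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:eu-west-2:123456789012:key/example-key-id",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```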
And the final S3 update. I've been saving the best til last. This is a fundamental change to S3 that's just kinda being handwaved away in a minor update announcement: S3 now delivers read-after-write consistency everywhere.
Ok, that's a bit computer-sciency, so here's what it means: when you write or modify an object in S3, any subsequent read gets exactly that version back.
Now some of you are reading this thinking "well yes, isn't that how S3 works?". NO! Up until yesterday, S3 was *eventually consistent*: if you updated or deleted an object, subsequent reads weren't guaranteed to reflect that change. (New objects did get read-after-write consistency, with caveats, but overwrites and deletes didn't.) This is a hugely common misunderstanding.
The general view is "S3 just behaves like a filesystem", because on the surface that's sort of how it looks. But the lack of consistency guarantees meant you couldn't safely build applications that rely on that behaviour.
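Now you can. To make it concrete, here's the pattern that's safe as of today (bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Overwrite an object, then read it straight back.
# With strong read-after-write consistency, the GET is guaranteed
# to return exactly the bytes just written; under the old
# eventual-consistency model it could have returned the previous version.
s3.put_object(Bucket="my-bucket", Key="report.csv", Body=b"version 2")
response = s3.get_object(Bucket="my-bucket", Key="report.csv")
assert response["Body"].read() == b"version 2"
```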
I could use this as an opportunity to tell you that hiring a team like The Scale Factory to help you design and build, or review your architecture would help you avoid making these sorts of mistakes, but what is this, a marketing account?
Instead, I'm going to call out that what AWS have released here is a *huge* engineering effort, designed to make life easier for everyone who works with S3, without any change to performance, availability, isolation, or cost.
Just like that, your assumptions about S3 as a filesystem are now a little bit more accurate in ways that really matter. We're working with one client right now who've been experiencing pain caused by exactly this issue. And now, literally overnight, it's gone.
This is why we use the cloud: teams you've never met, working hard to solve difficult engineering problems you might not even know you had, whose benefits you get without changing a line of code.