I did a thing and accidentally served on every major security PC in the 2020 cycle. I didn't mean to, but it happened anyway.
On the bright side, I'm in an informed position to discuss review culture at the Top 4, specifically the recent criticism of reviewing at @acm_ccs.

Anecdotally, I didn't observe any difference between the majors in 2020. As always, there were many insightful reviews that made me think differently about a submission. And, as always, I butted heads with the usual suspects I butt heads with at every other conference. ; )
The numbers tell a different story, though. Here are the average review scores for each conference (weighted by the number of reviews per deadline, using HotCRP statistics); a quick sketch of the weighting follows the numbers:
1) #Oakland20: 2.91
2) #Sec20: 2.67
3) #NDSS20: 2.50
4) #CCS20: 2.42
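In case "weighted by reviews per deadline" is unclear, here's a minimal sketch of that kind of weighting, assuming each deadline's mean score counts in proportion to its review volume. The per-deadline numbers are hypothetical placeholders (chosen only so the toy output lands near the reported #CCS20 figure), not the actual HotCRP statistics.

```python
# Minimal sketch: combine per-deadline mean scores into one conference-wide
# average, weighting each deadline by its review count.
# The per-deadline numbers are hypothetical placeholders, not real HotCRP stats.

def weighted_mean_score(deadlines):
    """deadlines: list of (mean_score, num_reviews) tuples, one per deadline."""
    total_reviews = sum(n for _, n in deadlines)
    return sum(mean * n for mean, n in deadlines) / total_reviews

# Hypothetical two-deadline venue; values chosen only so the toy output
# lands near the reported #CCS20 figure.
ccs_like = [(2.50, 900), (2.35, 1100)]
print(round(weighted_mean_score(ccs_like), 2))  # -> 2.42
```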
On average, your 2020 submission's composite score was expected to be about half a point higher at #Oakland20, generally viewed as a slightly more prestigious venue with a lower acceptance rate, than at #CCS20...
Although it's unlikely that these scoring differences are uniformly distributed across all subcommunities, something is clearly going on here, especially given the highly similar themes and identities of Oakland and CCS.
So, especially among the concerned systems folks, what can we do to combat this trend as rank-and-file reviewers?
My answer has been to commit to unwavering positivity in my own approach to paper reviewing -- my average score was +0.84 at #Sec20, +0.69 at #NDSS20, +0.58 at #CCS20, and +0.15 at #Oakland20, where scores were already higher...
I also stick my neck out and champion papers regardless of other reviewers' scores (especially when I know the area well). If I think the work is good, then we're at least going to have a *long* conversation about it, and that often leads to positive outcomes like major revisions (MRs).
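For the curious, here's a minimal sketch of one way such a per-conference delta can be computed (a reviewer's mean score minus the conference-wide mean); that reading and the scores below are illustrative assumptions, not HotCRP data.

```python
# One way such a delta can be computed: a reviewer's mean review score minus
# the conference-wide mean score. The interpretation and the scores below are
# illustrative assumptions, not HotCRP data.

def score_delta(my_scores, conference_mean):
    """Mean of my review scores minus the conference-wide mean score."""
    return sum(my_scores) / len(my_scores) - conference_mean

# Hypothetical pile of six reviews at a venue whose conference-wide mean is 2.67.
my_scores = [4, 3, 4, 3, 3.5, 3]
print(round(score_delta(my_scores, 2.67), 2))  # -> 0.75 (illustrative only)
```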
It's a smallish sample, but individual intervention seems to be working. Across 86 reviews, here's how my pile's acceptance rate compared to the conference-wide acceptance rate (a toy version of the comparison follows the numbers):
1) #NDSS20: 43% vs. 17%
2) #Sec20: 35% vs. TBD%
3) #Oakland20: 17% vs. 12.4%
4) #CCS20: 18% vs. ~14%
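And here's the toy version of that pile-vs-conference comparison; the per-paper outcomes are made up for illustration, and only the conference-wide rate comes from the numbers above.

```python
# Toy comparison: acceptance rate within a review pile vs. the published
# conference-wide rate. The per-paper outcomes are made up; only the
# conference-wide rate (17% for #NDSS20) comes from the numbers above.

def acceptance_rate(outcomes):
    """outcomes: list of booleans, True if the paper was ultimately accepted."""
    return sum(outcomes) / len(outcomes)

pile = [True, False, True, False, False, True, False]  # hypothetical 7-paper pile
print(f"pile: {acceptance_rate(pile):.0%} vs. conference: 17%")  # -> pile: 43% vs. conference: 17%
```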
Anyhow, I know many in the systems community are concerned about disparities in review culture across the different conferences. I hope this helps! We're all in this together; I'd love to hear about effective interventions that others have come up with at the rank-and-file level.
Erratum: I forgot to adjust for the different scoring systems in my initial ranking. Here's a more accurate version based on each mean score's distance from the venue's minimum positive recommendation (e.g., Weak Accept, Major Revision); a sketch of the normalization follows the numbers:
1) #Sec20: -0.33
2) #NDSS20: -0.50
3) #Oakland20: -1.09
4) #CCS20: -1.58
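If you want to reproduce the normalization, here's a rough sketch. The numeric threshold positions are illustrative stand-ins for wherever Major Revision or Weak Accept sits on each venue's scale, chosen so the toy output lines up with the figures above; treat them as assumptions rather than the exact HotCRP settings.

```python
# Sketch of the normalization in the erratum: express each venue's mean review
# score as its distance from that venue's minimum positive recommendation
# (e.g., Weak Accept, Major Revision). The threshold positions below are
# illustrative stand-ins, chosen so the toy output matches the figures above.

def distance_from_min_positive(mean_score, min_positive_score):
    """Negative values mean the average review sits below the venue's
    lowest positive recommendation on its own scale."""
    return mean_score - min_positive_score

venues = {
    "#Sec20":     (2.67, 3.0),  # (mean score, assumed position of Major Revision)
    "#NDSS20":    (2.50, 3.0),
    "#Oakland20": (2.91, 4.0),  # (mean score, assumed position of Weak Accept)
    "#CCS20":     (2.42, 4.0),
}
for venue, (mean, threshold) in venues.items():
    print(venue, round(distance_from_min_positive(mean, threshold), 2))
```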
So the Oakland v. CCS comparison is still apt, but really we see a huge split in positivity between USENIX/NDSS and Oakland/CCS, which fits many of our expectations given the leadership on review culture that we’ve seen from those conferences in recent years.