Having been a reviewer for USENIX Security, CCS, PETS, and ACSAC, I’ve observed both good and bad reviewing practices. We’re grappling with massive growth in security conferences, and this is causing the review process to fail in a number of ways. Some things to consider:
1. Appoint associate chairs, below the TPC chairs, who are responsible for helping to shepherd reviews to ensure quality.
2. Ensure that reviewer expertise is high enough for each submitted paper and get additional reviews if not.
3. Cultivate an expectation of healthy discussion of a paper. If reviewers don’t have time to discuss, then ask them to cut back on their commitments so they can participate fully.
4. If you have rebuttals, require a rebuttal response from each reviewer.
5. Have one of the reviewers summarize the PC discussion of the paper and provide overall feedback on why the final decision was reached.
6. Regularly audit reviews and reviewer engagement, and discuss frankly with reviewers where improvement is needed in their service. Let those doing well know they are appreciated.
7. Foster positivity. This is cultural and takes a lot of hard work.
8. Consider higher acceptance rates or alternative venues such as a “letters” publication (without a conference talk) to ensure good work does get published.
It’s also worth asking whether the tools we use give us what we need to do the above well.