Not gonna do any model meta stuff since 1) who cares, and 2) everyone's forecasts are *pretty* similar. But just one stray comment on this interesting read from the Economist team on how they think their forecast (Biden 96%) may be overconfident:
https://statmodeling.stat.columbia.edu/2020/11/02/so-whats-with-that-claim-that-biden-has-a-96-chance-of-winning-some-thoughts-with-josh-miller/
The comment is: having a few elections under your belt helps a *lot*. No matter how much you test things in the lab, there are some things you'll only learn by watching your forecast react to real data in real time. (I'm sure this applies to lots of other stuff too.)