I presented bits of this at AMS 2020 in Boston back in January: it's all about being deliberate with the choice of baseline when verifying forecasts, illustrated here by incorporating environmental information into the verification of tornado warnings. (2/8)
We often talk about how tornado warnings in certain regions or at certain times of day have lower (or higher) skill because the near-storm environment presents a more difficult (or more "classic") forecasting challenge. (3/8)
But it's straightforward to use the skill score format to actually quantify that kind of statement by using an environmental climo value as the baseline. (The simple example in this work is the CAPE-bulk shear parameter space, but it generalizes well.) (4/8)
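For concreteness, here's a minimal sketch of what that looks like, assuming the generic skill-score form SS = (A - A_ref) / (A_perf - A_ref) with a POD-style accuracy measure. The bin edges, the `env_climo_pod` table, and the function names are made up for illustration only and are not taken from the paper.

```python
import numpy as np

def skill_score(accuracy, baseline_accuracy, perfect_accuracy=1.0):
    """Generic skill score: where the observed accuracy sits between the
    baseline and a perfect forecast. 0 = no better than the baseline,
    1 = perfect, negative = worse than the baseline."""
    return (accuracy - baseline_accuracy) / (perfect_accuracy - baseline_accuracy)

# Hypothetical climatology of warning accuracy (e.g., POD) binned by the
# near-storm environment: rows = MLCAPE bins, columns = 0-6 km bulk shear bins.
# All values are invented for this sketch.
cape_bins = np.array([0, 500, 1500, 3000])    # J/kg bin edges
shear_bins = np.array([0, 20, 35, 60])        # kt bin edges
env_climo_pod = np.array([
    [0.30, 0.40, 0.50],   # low CAPE
    [0.45, 0.60, 0.70],   # moderate CAPE
    [0.55, 0.70, 0.80],   # high CAPE
])

def environmental_baseline(mlcape, bulk_shear):
    """Look up the climatological accuracy for a given environment."""
    i = np.clip(np.digitize(mlcape, cape_bins) - 1, 0, env_climo_pod.shape[0] - 1)
    j = np.clip(np.digitize(bulk_shear, shear_bins) - 1, 0, env_climo_pod.shape[1] - 1)
    return env_climo_pod[i, j]
```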
So we get a little more nuance: "This event had poorer warning skill than usual, but skill was still higher than we would have expected given the near-storm environment" is a verifiable statement we can make in this framework. (5/8)
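Continuing the sketch above with hypothetical numbers, that statement becomes a pair of skill scores: the same event accuracy comes out negative against the overall baseline but positive against the environment-specific one.

```python
# Hypothetical high-shear, low-CAPE event: observed POD for the event,
# an overall "usual" baseline, and the environment-specific baseline.
event_pod = 0.55
overall_baseline = 0.70  # skill over all environments (made up)
env_baseline = environmental_baseline(mlcape=400, bulk_shear=45)  # 0.50 here

print(skill_score(event_pod, overall_baseline))  # -0.5: poorer than usual
print(skill_score(event_pod, env_baseline))      # +0.1: better than expected
                                                 # given this environment
```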
This opens the door to more nuanced takes on the non-classic environments that have been getting more of a spotlight recently, such as the high-shear, low-CAPE environments of the Southeast or the complicated environments of tornadoes associated with tropical cyclones. (6/8)
To be clear: the intent here is not to add a new set of indices that forecasters have to contend with. The Environmental Skill Scores discussed in the paper are there to facilitate ad hoc discussion and verification of events. (7/8)
And, fundamentally, this paper only uses tornado warnings as an illustrative example. That central question - "compared to what?" - should come into play in any verification context. Choice of baseline is important! (8/8)