Well, we had one great last hurrah with the May 11th to 13th snowfall, and we’ll doubtless have more snowfalls at the higher elevations from now to mid-June, but with the Eldora SNOTEL essentially showing the snowpack hitting 0” at that comparatively low elevation, I think it’s time to wrap up my forecasts for the season.  As always, this season recap will consist of two parts.  First, I’ll do an overview of the season as a whole, and second, I’ll look at the results of my “analysis” of all of the retrospective discussions to try to learn each model’s strengths and weaknesses and improve next year’s forecasts. 

Season Recap:

This was an average year in the Front Range, but with near-record-breaking snowfall in other parts of our state and gigantic, record-breaking snow years in Utah and California, average just doesn’t feel anywhere close to average in comparison.  I just feel fortunate that I was able to experience a bit of the record-breaking year with a day at Sunlight, five days in Utah, and two days in California.  But I digress; back to our patrol zone.

November started strong.  December was rough, however, until it got close to Christmas; then we had a great Christmas and New Year’s.  We remained slightly above average in January and early February.  Late February and March were letdowns, and I was especially disappointed that the large spring storms we often get didn’t hit us in March.  Fortunately, even without any cut-off lows, April still produced a lot of snow.  And then we even got a big late dump in mid-May.  So, an average year overall.  And oh, it was really windy a lot – average again.  Below is the Lake Eldora SNOTEL chart showing this year (black) and average (green):

Which Model Was the Winner:

Well, with the end of the year, it’s time to figure out which models did the best in predicting snowfall in our patrol zone.  Let me start with the usual caveat that my analysis is rather subjective – or as I might put it, this analysis is science like nutrition is science.  So take this with a large grain of salt. 

I did more than fifteen retrospective discussions covering each of five models (i.e., the American, Canadian, European, UK Met, and WRF Models), with over thirty data points on each of the American, Canadian, and European Models, so I’ll only include those five models in this analysis.  I’m not including the models for which I have fewer than ten retrospective discussions (i.e., the NAM and RDPS). 

Accuracy – Of the five models for which I feel I have sufficient data, the one that was most often dead on or close to dead on was the Canadian Model (48% of the time), followed by the UK Met Model (38%), then the American Model (34%), then the WRF Model (22%), and finally the European Model (21%).

Mode – When the models were off, I categorized each miss as way too low, too low, too high, or way too high.  While the Canadian, UK Met, and American Models were more likely to be correct than to fall into any other category, the European Model was more often too low than it was correct, and the WRF Model was more often way too high than it was correct. 

Median – Looking at the median prediction, the American, European, and UK Met Models tended to underpredict storms by a bit, the WRF Model tended to overpredict by a bit, and the Canadian Model was evenly balanced.

Outlier predictions – While the Canadian Model is the obvious winner thus far, the picture changes a bit when you look at how often the models were way off in their predictions (either way too high or way too low).  The European Model was way off only 21% of the time (and never once this season way overpredicted a storm), the UK Met Model was way off only 25% of the time (and likewise never way overpredicted a storm), the American Model was way off 32% of the time, the Canadian Model 33% of the time, and the WRF Model 39% of the time.
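
For the spreadsheet-inclined, here’s a minimal sketch of the bookkeeping behind these stats.  The model names are real, but the category lists below are made-up placeholders rather than my actual season tallies:

```python
from collections import Counter
from statistics import median

# Each entry is one call from a storm retrospective; these lists are
# placeholder data, not my real season tallies.
CATEGORIES = ["way too low", "too low", "correct", "too high", "way too high"]
SCORE = {"way too low": -2, "too low": -1, "correct": 0,
         "too high": 1, "way too high": 2}

results = {
    "Canadian": ["correct", "too low", "correct", "way too high", "correct"],
    "European": ["too low", "way too low", "correct", "too low", "too low"],
}

for model, calls in results.items():
    tally = Counter(calls)
    n = len(calls)
    accuracy = tally["correct"] / n                               # dead on or close
    mode = max(CATEGORIES, key=lambda c: tally[c])                # most common outcome
    bias = median(SCORE[c] for c in calls)                        # <0 under-, >0 overpredicts
    way_off = (tally["way too low"] + tally["way too high"]) / n  # outlier rate
    print(f"{model}: {accuracy:.0%} correct, mode = {mode}, "
          f"median bias = {bias:+}, {way_off:.0%} way off")
```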

Conclusion – Looking at the whole picture, I’d say this year the overall best model was the Canadian Model (with the caveat that while it was the most balanced and the most correct, it was still wildly off fairly often).  The second best model was the UK Met (with the caveat that it tended to underpredict snow).  I’d put the American Model in third place, the European Model in fourth, and the WRF Model in fifth place.

For all you meteorology nerds, I know this is heresy.  But after years of keeping track, while the European Model may be the best model overall worldwide, it is definitely not the best model for predicting snow in our patrol zone.  From my analyses, in 2020, the WRF Model came in first and the Canadian Model came in second.  In 2021, the WRF Model came in first and the Canadian and American Models tied for second.  In 2022, the Canadian Model came in first and the European Model came in second.  Now, in 2023, the Canadian Model came in first and the UK Met Model came in second.  It’ll be interesting to see the results next year, but for now, I’ll have some Canadian bacon tomorrow morning while humming O Canada to celebrate our northerly neighbors’ ability to predict our snow.

Cheers everyone.  And barring another June-uary like last year, I’ll restart these forecasts sometime in the late fall.  KBO.

-Jordan (Monday 5/15/23 evening)