As the season winds down, I think it’s time to wrap up the forecasts for the year.  Below, I’ll summarize the season first, and then review how the models performed.

Season Summary:

What a great snow year it has been for our patrol zone!  It certainly didn’t start out that way; it was almost a tale of two ski seasons.  For the first several months, it seemed like we were just getting 1- or 2-inch storms over and over without any meaningful accumulation.  We had the typical January and February lull, and were sadly well below average in overall snowfall.  But then, starting around mid-March, it steadily dumped on our patrol zone.  So much great powder skiing in March, April, and the first half of May.

Statistically, April is the best snowfall month for our patrol zone, and March the second best.  (Appendix B in my book Hunting Powder has all the stats if you’re curious.)  However, it’s worth noting that this year we had a much better March and April than average, and the average for those months is already good.

Here is the year summary from the Lake Eldora SNOTEL.  Notice how we went from below average through early March to generally above average from mid-March onward.  Some of that time, we were significantly above average.

As I mentioned in an earlier post, all of this bodes well for summer skiing, as spring snowfall has a large impact on summer permanent snowfield / glacier size, while winter snowfall does not.  Link here, if you’re curious to read more about the key study on this, done just north of our patrol zone: a46A282 349..354 (pdx.edu).

Also, it’s worth noting that our patrol zone was one of the few winners in Colorado this year.  The region east of the Continental Divide (from roughly Hidden Valley in the north to Echo Mountain in the south, including our patrol zone) got hammered with snow.  The Continental Divide region (roughly Cameron Pass, Berthoud Pass, and Loveland Pass down to Monarch Pass) and the Tenmile / Mosquito Ranges did about average.  But the rest of the state was below average.  We were lucky.

I hope everyone enjoyed this year, which by the end of the season had turned into a great powder year.  I know I did.  And don’t forget, we’re still likely to see another storm or two in late May or June that brings a bit more powder to the higher elevations.

How did the models do?

So, how did the various models do this year in terms of predicting snow in our patrol zone?  I’ve been putting a retrospective discussion in all year, complimenting or criticizing various models for their snow predictions, just so I could tabulate it all at the end of the year and learn from it.  So, I’ve now added up my critiques and compliments, compared them to the number of times I referenced each model, and here are my thoughts.  First, a gigantic caveat: while the following is much better than just giving my gut reaction, it is still a highly subjective analysis.  It’s just doing the math on my subjective assessments in each retrospective discussion.
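
For the curious, the tabulation itself is simple bookkeeping.  Here’s a minimal Python sketch of it; the rating categories mirror the ones I use below, but the example entries are made up, not my actual season notes:

```python
from collections import Counter

# Rating categories used in the retrospective discussions.
CATEGORIES = [
    "way too pessimistic",
    "too pessimistic",
    "spot-on",
    "too optimistic",
    "way too optimistic",
]

# Each entry is (model, rating) from one storm retrospective.
# These rows are made-up examples, not the season's actual data.
retrospectives = [
    ("WRF", "spot-on"),
    ("WRF", "too optimistic"),
    ("American", "too pessimistic"),
    ("Canadian", "spot-on"),
    ("European", "way too pessimistic"),
]

def tally(entries):
    """Count ratings per model, plus how often each model was referenced."""
    ratings = {}            # model -> Counter of rating categories
    references = Counter()  # model -> number of retrospectives it appeared in
    for model, rating in entries:
        ratings.setdefault(model, Counter())[rating] += 1
        references[model] += 1
    return ratings, references

ratings, references = tally(retrospectives)
for model, counts in ratings.items():
    breakdown = ", ".join(f"{cat}: {counts[cat]}" for cat in CATEGORIES if counts[cat])
    print(f"{model} ({references[model]} mentions): {breakdown}")
```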

The winner this year was, drum-roll please …

The WRF Model.  Again.  The WRF Model hit it spot-on more times than any other model (twelve times), and although it was too optimistic a number of times, it was rarely too pessimistic, even more rarely way too pessimistic, and (actually really surprisingly) it was never once way too optimistic.

Overall, the American and Canadian models probably tied for second place.  Both tended to predict less than what we actually received, but both were spot-on nine times.  The Canadian Model was arguably just slightly better, but they were very close.

None of the other models did terribly well.  The European Model was far too pessimistic, and the UK Met, NAM, and RDPS Models were all over the map.  Interestingly, while most models tended to underpredict storms overall, the RDPS Model tended to slightly overpredict them.
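
If you’re wondering how you might put a number on that under- vs. over-prediction tendency, one simple option is a signed bias score over the rating categories.  This is just an illustrative sketch with a made-up weighting and made-up counts, not the actual scoring behind the statements above:

```python
from collections import Counter

# Signed weight per rating category (a hypothetical weighting, not a standard
# metric): positive leans optimistic, negative leans pessimistic, and the
# "way" ratings count double.
WEIGHTS = {
    "way too pessimistic": -2,
    "too pessimistic": -1,
    "spot-on": 0,
    "too optimistic": 1,
    "way too optimistic": 2,
}

def bias(ratings: Counter) -> float:
    """Average signed rating across one model's storm retrospectives."""
    total = sum(ratings.values())
    return sum(WEIGHTS[cat] * n for cat, n in ratings.items()) / total

# Made-up example shaped like the RDPS result: slightly overpredicts.
rdps_like = Counter({"spot-on": 4, "too optimistic": 3, "too pessimistic": 2})
print(f"bias {bias(rdps_like):+.2f}")  # +0.11 -> leans slightly optimistic
```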

Two big surprises (again, like last year) from running the numbers.

First, according to published analyses of model performance over North America, there is a clear pecking order among the four major medium-range models: the European Model first, the UK Met Model second, the American Model third, and the Canadian Model fourth.  This certainly is not true for our patrol zone.  If anything, the order might be exactly reversed when looking at each model’s ability to predict snowfall in our patrol zone.

Second, statistically one would expect short-range models (e.g., WRF, NAM, and RDPS) to outperform medium-range models (e.g., American, Canadian, European, and UK Met), but like last year, other than the WRF Model, that didn’t happen.  Perhaps I’m wasting my time looking at the NAM and RDPS Model forecasts, and should stick to the big four medium-range models, plus of course the WRF Model.

Regardless, this will be very helpful in improving my forecasts next year, and I’m glad to see that the pattern this year was somewhat similar to the pattern last year.

Thanks for reading everyone, and I look forward to seeing some of you at our upcoming happy hour, our (sometimes) annual patrol Mt. Russel ski in early June, and if not, at the OEC refresher in the fall.

-Jordan (Monday May 17)