
Total snowfall map for Western Washington issued Nov. 14, 2025, for Thanksgiving weekend. (The forecast didn't pan out.)
We talked earlier this week about the big snow during Thanksgiving week of 1985.
There were a few hours a couple weeks ago when at least one forecast model was ready for a repeat. Its predicted snowfall map issued on Nov. 14 was showing 6-12 inches around the Puget Sound area over the heart of Thanksgiving weekend — still 11-15 days away at the time.
I'm posting this now because the forecast obviously wasn't going to pan out, and you didn't hear much about it at the time (I hope) because it was well beyond the limits of accepted forecast accuracy. It would have been irresponsible to publicize such an impactful forecast and put that much trust in a prediction so far in advance.
Sure enough, the model gave up on the idea as quickly as it had come up with it. And while forecasts of colder weather (which really haven't panned out much either) remained stubbornly in the long range, lowland snow was never again a serious consideration. Gone in a flash.
MORE TO EXPLORE:
- ‘I have to get the baby’: A Vancouver mother’s sacrifice during the Northwest’s deadliest tornado
- The Thanksgiving deluge of 1990: Record floods and a sinking I-90 Bridge
- Holiday chaos: Looking back 40 years on Seattle’s epic Thanksgiving snowstorm and arctic blast
But how do these forecast models get these grandiose ideas to begin with? And what are we doing to make forecasting better?
Forecast models work by attempting to do what is, at least as of now, technologically impossible: know what's happening with every single air parcel on the planet. Then, using what we know about atmospheric physics and dynamics, we run calculations "modeling" what those air parcels will do next and what weather they will create in doing so.
But we don’t have the observational power to know what every parcel of air is doing right now, and we don’t have the computing power to run the calculations for every molecule even if we DID know all that was happening.
So we do the best we can with the tools we have: satellites, radar, weather balloons, ground observations, even pilot reports all try to tell the models as much as we can measure about what's happening now. Newer weather satellites do an excellent job these days of filling in observational gaps, but it's still not perfect.
Step 2 is then taking all that data and running the forecast calculations over the planet. But again, we can't calculate for every point on Earth; it's just too much data to crunch. So we lay out grid points spaced equally apart, run the calculations at each of those spots, and then attempt to "fill in the gaps" between the grid points with assumptions.
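Here's a minimal sketch of that "fill in the gaps" idea in Python. The grid spacing, the temperature values, and the use of simple linear interpolation are all my own illustrative choices; real models use far more sophisticated physics and interpolation schemes.

```python
import numpy as np

# Hypothetical temperatures (deg F) computed at grid points spaced 13 km apart
grid_km = np.array([0, 13, 26, 39, 52])
temps_f = np.array([38.0, 36.5, 34.0, 35.5, 37.0])

# Your neighborhood sits between grid points, so the model has to assume
# (here, by simple linear interpolation) what the value is in the gap.
neighborhood_km = 20.0
estimate = np.interp(neighborhood_km, grid_km, temps_f)
print(f"Assumed temperature at km {neighborhood_km}: {estimate:.1f} F")
```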

But what happens when we assume wrong? It introduces a "model error." Those errors get magnified because, as the forecast goes further out in time, the model is basing each new step on increasingly erroneous information. This is the classic "butterfly effect": a tiny error in the initial data or in an assumption at one point can grow exponentially as the forecast moves forward in time.
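To see that error growth in action, here's a small Python sketch using the classic Lorenz '63 toy equations (the simplified system that gave the butterfly effect its name), not any real forecast model. Two runs start with initial data differing by one part in a million and end up wildly different.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz '63 toy system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

# Two "forecasts" whose starting data differ by one part in a million
run_a = np.array([1.0, 1.0, 1.0])
run_b = np.array([1.000001, 1.0, 1.0])

for step in range(1, 2501):
    run_a, run_b = lorenz_step(run_a), lorenz_step(run_b)
    if step % 500 == 0:
        print(f"step {step:4d}  max difference = {abs(run_a - run_b).max():.4f}")
```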
This is why forecast models degrade in accuracy as they go farther out in time, and how a 12-15 day forecast suddenly shows Snowmageddon for Seattle. That model made an incorrect assumption whose chain reaction of cascading errors led to a spectacularly wild and incorrect forecast! (Sorry, snow fans!)
But meteorologists are constantly getting better at figuring out how to accurately fill in those gaps. Research has led to better atmospheric calculations, and speedier computers allow us to crunch more data in less time. That lets us run models with more grid points and more often, shrinking the gaps we have to assume for and in turn lessening (but not eliminating) the potential for errors in those assumptions.

If you hear about model "resolutions," that refers to how closely spaced the grid points are. Models decades ago, with less computing power available, had to run with grid points 20+ miles apart. These days we can get global models down to about 6-7 miles per grid point, and closer-in regional models down to about 1-2 miles apart. HUGE progress! The UW runs a model at 0.8 miles (1.33 km) grid spacing!
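As a rough back-of-the-envelope illustration (my numbers, not any specific model's), here's how quickly the number of grid points, and therefore the computing work, balloons as the spacing shrinks:

```python
# Rough count of horizontal grid points needed to cover a
# 1,000 x 1,000 mile box at different grid spacings.
region_miles = 1000

for spacing_miles in [20, 7, 2, 0.8]:
    points = (region_miles / spacing_miles) ** 2
    print(f"{spacing_miles:>4} mile spacing -> about {points:,.0f} grid points")
```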
ENSEMBLE MODELS: THE WAY TO ‘MIND THE GAPS’
The basic model you see a lot of the time on your Facebook feed (including the one I pasted here) is a "deterministic" model. That's just one model ingesting the "now" data, running its calculations and assumptions based on its base programming, and giving you one output. It's like sending one movie critic in to watch a film that, for every 60 seconds of footage, only shows 45, and then getting that single review. You have to assume what happened in each gap.
However, in recent years research meteorologists have developed new tools called "ensemble" models. They still deal with the same limitations on knowing what's happening now and having to assume what happens in between the grid points.
But an ensemble is a group of model runs that each take a slightly different tack on assuming what's in between the grids. The European ("Euro"/ECMWF) model runs about 50 versions in its ensemble; the American GFS model runs 30; the Canadians run 20 on their model. This is like sending several film critics into the theater and then getting a consensus of what they thought happened in the 15-second gaps. (Unless it was a Quentin Tarantino or Christopher Nolan movie, in which case you pretty much never had a chance at consensus.)
If a vast majority of ensemble runs come up with similar answers, that's a good sign you're doing well at knowing what's happening "between the grids," and your confidence in the forecast increases.
On the other hand, if you get wildly different answers, then your confidence decreases.
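Carrying the earlier Lorenz toy example a step further, here's a sketch of that confidence check: run many copies from slightly tweaked starting data and measure how far apart the answers spread. Tight clustering means higher confidence; a wide spread means lower. Again, this is a toy system, not a real ensemble.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz '63 toy system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(42)
# 50 "ensemble members" starting from slightly perturbed initial data
members = np.array([1.0, 1.0, 1.0]) + rng.normal(0, 1e-4, size=(50, 3))

for step in range(1, 2001):
    members = np.array([lorenz_step(m) for m in members])
    if step in (100, 500, 1000, 2000):
        # A big spread among members means low confidence at that lead time
        print(f"step {step:4d}  spread in x: {members[:, 0].std():.4f}")
```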

The example here is a Euro ensemble showing 50 runs of predicted snowfall over the next 15 days. Note that run No. 45 shows 3.3 inches of snow in Seattle, while 49 of the other runs forecast roughly zero. But what if that 3.3-inch run were the only model run you had? This is how, when a deterministic model screams "HEY LOOK AT ALL THIS SNOW SEATTLE!!!!", you can go to the ensemble models, see very little support for it, and tell the deterministic model to go fly a kite. ("REALLY? IN A SNOWSTORM? OHHH….")
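In code terms, that "go fly a kite" check is just counting members. Here's a toy sketch with invented snowfall amounts roughly matching the example above (one 3.3-inch outlier, the rest near zero):

```python
# Invented ensemble snowfall forecasts (inches) for Seattle:
# 49 members near zero plus the single 3.3-inch outlier.
snow_inches = [0.0] * 49 + [3.3]

threshold = 1.0  # inches
support = sum(s >= threshold for s in snow_inches) / len(snow_inches)

print(f"Members forecasting {threshold}+ inches of snow: {support:.0%}")
# -> 2%: very little support, so that lone deterministic run can go fly a kite.
```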
In the social-media scream-for-clicks world, with most model runs readily available online, it's easy for someone to cherry-pick a singular scary model run that has no support and begin the scare-a-thon. Just… find your trusted sources and stick to them! 🙂
Machine learning models are helping take the next step because they can crunch numbers even faster. Some of the initial AI models are showing promise: instead of taking the time to run all the physics math on the data that's occurring now, they can look at our entire history of weather observations and calculate what similar setups have done in the past. Those calculations can be done in a fraction of the time, lending hope for models that update on much faster cycles.
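Here's a very loose sketch of that "what did similar setups do before" idea, using a nearest-neighbor lookup over made-up historical patterns. Real AI weather models are trained neural networks and far more sophisticated, so treat this purely as an illustration of pattern-matching being cheaper than running the full physics.

```python
import numpy as np

# Made-up "history": each row is a simplified past setup
# (pressure anomaly, temperature anomaly) and the snow that followed.
past_setups = np.array([[-8.0, -5.0],
                        [ 3.0,  4.0],
                        [-6.0, -4.0],
                        [ 1.0,  2.0]])
past_snow_in = np.array([6.0, 0.0, 3.0, 0.0])

# Today's (made-up) setup
today = np.array([-7.0, -4.5])

# Find the most similar past setups and average what they produced
dist = np.linalg.norm(past_setups - today, axis=1)
nearest_two = np.argsort(dist)[:2]
print(f"Analog-style guess: {past_snow_in[nearest_two].mean():.1f} inches of snow")
```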
Weather forecasting may never be perfect in our lifetimes, but the future is bright, unlike what that Euro model thought the ground would look like this Thanksgiving weekend…