Why are meteorologists apprehensive about ensemble forecasts?

A colleague in my worldwide meteorological network made me aware of a CALMet conference in Melbourne, i.e. one dealing with meteorological education and training. Through the website you can access the program with more or less extensive abstracts. I have no doubt that most presentations were relevant and interesting, but what surprised me was that a search for the keywords “probability” or “ensemble” gave no hits. “Uncertainty” came up in only one (1) presentation, no. 36, “To communicate forecast uncertainty by visualized product” by Jen-Wei Liu and Kuo-Chen Lu from the Central Weather Bureau in Taiwan.

This made me once again ponder the question of why meteorologists are still apprehensive about ensemble systems (ENS) and probability forecasting.


1. Ensemble forecasting brings statistics into weather forecasting

Since the start of weather forecasting as we know it (in the 1860s), there has always been a rivalry between physical-dynamic-synoptic and statistical methods. Edward Lorenz’s famous experiment, in which he discovered the “butterfly effect”, was part of a project in the late 1950s to find out if statistical methods could be as effective in weather forecasting as numerical techniques. The answer at the time was not clear-cut, but during the 1960s numerical weather prediction (NWP) made much larger advances than the statistical approaches. Statistical methods were thereafter only used to calibrate NWP, in what became known as MOS (model output statistics).

Over a lunch at ECMWF, on one of his annual visits in the 1990s, Edward Lorenz told us a parable he had heard from the renowned Norwegian meteorologist Arnt Eliassen:

All the world’s birds wanted to compete over who could fly the highest. They all set off ascending, but one after the other they had to drop out. Finally, only the great golden eagle was left. But when he too had to stop and turn back, a little sparrow that had been hiding in his feathers came out and managed to beat the eagle by a meter or two. The eagle, Eliassen had told Lorenz (who told us), is the dynamic NWP; the sparrow is the statistical MOS.

To some extent the MOS can deal with uncertainties, but only in a limited way, since it is based on a deterministic forecast. It can estimate the general uncertainty at a certain forecast range, but it cannot distinguish between more and less predictable flow patterns. That ability to distinguish is the strength, the core value, of the ENS.

But ensemble forecasts are essentially statistical, probabilistic, and meteorological education has always avoided venturing into this domain, except for those who wanted to become climatologists, which in the old days was looked down upon. The ideal has been a physical-dynamic “Newtonian” approach, in which perfect or almost perfect forecasts were seen as possible, if only the meteorological community got enough money to purchase better computers.

Indeed, it has paid off; the predictability range has increased by about one day per decade. Our five-day deterministic forecasts today are as good and detailed as the two-day forecasts were in the 1980s. But the demands and expectations of the public have also increased. Even if, a few decades from now, we can make more accurate and detailed seven-day forecasts, there will still be questions about their reliability. The problem of uncertainty estimation will always be with us.


2. The ensemble system is a Bayesian system

But even among those meteorologists who are used to statistics, there is another problem. I became aware of it when I traveled on behalf of ECMWF to different Member States. A frequent question was: “How can you compute probabilities from those 50 members when you are not sure that they are all equally likely?”

My answer then was that we did not know! We did not know the likelihood of each member, and we did not even know whether they were all equally likely (probably they were not). But the verification statistics were good, and they would not have been so good if our assumption had been utterly wrong.



A typical “postage stamp map” from the ECMWF system. These 50 forecasts are not a priori equally likely, but since we do not know the probability of each of them we have to apply Laplace’s “principle of insufficient reason” and assume that they are equally likely – an assumption which makes the system Bayesian. Image courtesy of ECMWF.
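In practice, assigning equal weights makes the probability of any event simply the fraction of members in which it occurs. A minimal sketch of this idea (the member values below are invented for illustration, not real ECMWF output):

```python
import random

random.seed(1)
# 50 hypothetical ensemble members: 24-hour precipitation in mm
# (invented numbers for illustration only)
members = [max(0.0, random.gauss(1.5, 2.0)) for _ in range(50)]

def event_probability(members, event):
    """Equal weight for every member, per Laplace's principle of
    insufficient reason: P(event) = count of members / ensemble size."""
    return sum(1 for m in members if event(m)) / len(members)

# Probability of at least 1 mm of precipitation
p_rain = event_probability(members, lambda x: x >= 1.0)
print(f"P(precip >= 1 mm) = {p_rain:.2f}")
```

Note that with 50 members the probabilities are necessarily quantized in steps of 2%, one reason why calibration against observations is still needed.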

Only later was I made aware that my answer was the same one Pierre-Simon Laplace had given two centuries earlier, when he was developing what is today known as “Bayesian statistics”: we do not know, but we make a qualified guess and see how it works out. Bayesian statistics, in contrast to traditional “frequentist” statistics, acknowledges the usefulness of subjective probabilities, degrees of belief. Laplace’s answer, which I unknowingly resorted to during my ECMWF days, is known as his “principle of indifference”.

So part of the apprehension about ensemble forecasting cannot be attributed to ignorance, conservatism or “Newtonianism”, but has its basis in a long-standing feud between “Bayesian” and “frequentist” statisticians. A Bayesian can look at the sky and say “there is a 20% risk of rain”, whereas a frequentist would not dare to say that unless he had a diary showing that rain had occurred in 34 cases out of 170 with a similar sky, wind and pressure.
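The two attitudes can be contrasted in a few lines. The 34-of-170 figures come from the diary example above; the uniform prior is my own illustrative choice, giving Laplace’s classic “rule of succession” as the Bayesian answer:

```python
# Frequentist: the relative frequency recorded in the diary
rain_cases, total_cases = 34, 170
freq_estimate = rain_cases / total_cases  # 34/170 = 0.20

# Bayesian: start from a uniform prior over the rain probability
# (a Beta(1, 1) distribution) and update it with the same diary
# counts; the posterior mean is Laplace's "rule of succession".
bayes_estimate = (rain_cases + 1) / (total_cases + 2)  # 35/172

print(f"frequentist: {freq_estimate:.3f}")
print(f"Bayesian posterior mean: {bayes_estimate:.3f}")
```

With this much data the two estimates almost coincide; the philosophical difference is that the Bayesian could have quoted a probability even before opening the diary.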

In recent years the gulf between “frequentists” and “Bayesians” has narrowed. Also, the calibration of the ENS data “à la MOS” has “washed away” much of the Bayesian character and provided a more “frequentist” forecast product.


3. What is left for the forecaster?

Bayesian methods should not be alien to experienced weather forecasters. Since weather forecasting started in the 1860s there has been a strong Bayesian element in the routines – perhaps not described as such, but nevertheless this is how forecasters worked before NWP. Who else but an experienced forecaster could look at the sky and give a probability estimate of rain? If the forecaster had a weather map to look at, the estimate would be even more accurate. Verification studies in the pre-NWP days of the 1950s showed that forecasters had a good “intuitive” grasp of probabilities.

But with the advent of deterministic NWP, the “unconscious” Bayesianism among weather forecasters gradually evaporated. The NWP could state very confidently that in 72 hours’ time it would be +20.7 °C, WSW 8.3 m/s, with 12.4 mm of rain within the following six hours.

Anybody could read that information; you didn’t need to be a meteorologist. But you needed to be a meteorologist to have an opinion about the quality of the forecast: Would it perhaps be cooler? The wind weaker? How likely was the rain?



There are currently more weather forecasters around than at any time before, in particular in the private sector, where advising customers in their decision making is an important task. (Photo from a training course at Meteo Group, Wageningen. Used with permission from Robert Muerau.)

The risk was always that this forecast, even against the odds, would verify. So wasn’t it most tactical simply to accept the NWP? After all, if the forecast was wrong, the meteorologist had something to put the blame on. Some meteorologists took this easy road, but most tried to use their experience, knowledge of the models and meteorological know-how to make sensible modifications to the NWP, including an assessment of the forecast’s reliability. If the last NWP runs had been “jumpy”, or there were large divergences among the available models, this was taken as a sign of unreliability.

The “problem” for the weather forecasters was that with the arrival of the ENS they were deprived of even this chance to show their skill. The “problem” with a meteogram from the ENS, compared to a more traditional deterministic one from an NWP model, was that “anybody” could read the ENS meteogram! You didn’t need to be a meteorologist, not even a mathematically educated scientist. Einstein’s famous “grandmother” could read the weather forecast and understand its reliability!


“You do not really understand something unless you can explain it to your grandmother.” – Albert Einstein


So what is left for the meteorologist?

I will stop here, because this text is already long enough. But the question above is really what educational and training seminars, conferences and workshops should focus more on. I am personally convinced that meteorologists have a role to play.

My conviction is based on my experiences from the hydrological forecasting community, in particular the existence of this site. Is there any corresponding “Mepex”?

My conviction is also based on my experience as a forecaster myself, of how the general public (and not a few scientists) need help relating uncertainty information to their decision making.

My conviction is finally based on the lesson of history that new tools always make traditional craftsmen more effective and prosperous – provided they are clever enough to see the new opportunities. Else they will miss the bus . . .


PS. To their credit it must be mentioned that EuMetCal is developing training resources for probabilistic forecasting.
All images from Thinkstock unless otherwise stated.
Original article posted on HEPEX, 26 September 2017 (link)