Pre-, post-processing or both?

Do you think it is better to pre-process the meteorological forecasts, to post-process the hydrological forecasts or to do both? Why?

Following this blog post about future directions for post-processing research, this challenge was raised in a comment by James Brown:

« Putting aside the choice of technique, I think there are some more fundamental questions about how to use hydrologic post-processing operationally. For example, under what circumstances does it make sense to separate between the meteorological and hydrologic uncertainties and model them separately (pre- and post- processing) versus lump them together? »

Existing comparative studies

 

Vintage processing machine (from www.torange.us, labeled with non-commercial reuse)

I have only ever seen three papers aimed specifically at a «pre vs. post» comparison. The first is a case study on two Korean catchments by Kang et al. (2010). The authors show that, for the catchments under study, post-processing is much more effective than pre-processing; in fact, their results indicate that the influence of pre-processing alone is very small compared to that of post-processing alone. The conclusions of Roulin and Vannitsem (2015) are similar. In another comparative study, covering 10 French catchments, Zalachori et al. (2012) conclude that «statistical corrections made to precipitation forecasts can lose their effect when propagated through the hydrological model». They also show that post-processing streamflow forecasts is quite effective at improving the Ranked Probability Score and PIT histograms.
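For readers less familiar with these verification tools, here is a minimal sketch, in Python with synthetic data of my own invention (nothing here comes from the studies above), of how a PIT histogram can be computed for an ensemble streamflow forecast; a flat histogram indicates a reliable ensemble:

import numpy as np
import matplotlib.pyplot as plt

def pit_values(ensemble, observations):
    """Probability Integral Transform: the empirical CDF of each ensemble
    forecast evaluated at the corresponding observation."""
    # ensemble: (n_forecasts, n_members), observations: (n_forecasts,)
    return np.mean(ensemble <= observations[:, None], axis=1)

# Synthetic example only: observation and members share the same predictive
# distribution, so the resulting ensemble is reliable by construction.
rng = np.random.default_rng(42)
signal = rng.gamma(shape=2.0, scale=50.0, size=500)             # predictable part
obs = signal + rng.normal(0.0, 20.0, size=500)                  # "observed" streamflow
ens = signal[:, None] + rng.normal(0.0, 20.0, size=(500, 50))   # ensemble members

pit = pit_values(ens, obs)
plt.hist(pit, bins=10, density=True, edgecolor="black")
plt.axhline(1.0, linestyle="--")   # flat histogram at density 1 = reliable ensemble
plt.xlabel("PIT value")
plt.ylabel("Density")
plt.show()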

 

Verkade et al. (2013) did not perform a comparative study but rather investigated the influence of pre-processing the meteorological forecasts for streamflow forecasting. Their conclusions point in the same direction: « the improvements in precipitation and temperature do not translate proportionally into the streamflow forecasts ».

Of course, there are additional uncertainties arising from the hydrological model itself, which pre-processing cannot account for. Still, I find those results a bit surprising. I have often heard « garbage in, garbage out ». Yet it seems that whether or not you pre-process meteorological forecasts before feeding them into the hydrological model is of very little consequence (in terms of the final streamflow forecasts, I mean).

Why?

Beyond this well-known uncertainty attributable to the hydrological model, it mostly appears to me that the hydrological model might be robust, in the sense that it is indifferent to small variations in the meteorological input data. I find it similar to the problem of differentiating forecasts according to their economic value: sometimes, although system B produces forecasts of better quality (in terms of agreement with the observations) than system A, the difference is not significant enough to change a person’s (or an organization’s) course of action. So the quality improves, but not the value (see Murphy, 1993).
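To make this quality-versus-value distinction concrete, here is a toy sketch under a simple cost-loss decision model, with made-up numbers and a threshold rule of my own choosing rather than anything taken from Murphy (1993): system B has a better Brier score than system A, yet both lead the user to exactly the same decisions, and therefore to the same expense and the same value.

import numpy as np

def brier_score(prob, event):
    return np.mean((prob - event) ** 2)

def mean_expense(prob, event, cost_loss_ratio):
    """Mean expense (in units of the potential loss) for a user who pays the
    protection cost whenever the forecast probability reaches their cost/loss ratio."""
    protect = prob >= cost_loss_ratio
    cost = np.where(protect, cost_loss_ratio, 0.0)       # cost of taking action
    loss = np.where(~protect & (event == 1), 1.0, 0.0)   # loss when caught unprotected
    return np.mean(cost + loss)

# Made-up forecasts: system B is sharper than system A (better Brier score),
# but both cross the user's decision threshold on exactly the same occasions.
rng = np.random.default_rng(0)
event = rng.binomial(1, 0.3, size=1000)
prob_a = np.where(event == 1, 0.60, 0.30) + rng.uniform(-0.05, 0.05, 1000)
prob_b = np.where(event == 1, 0.70, 0.20) + rng.uniform(-0.05, 0.05, 1000)

alpha = 0.5  # the user's cost/loss ratio
print("Brier A:", brier_score(prob_a, event), " Brier B:", brier_score(prob_b, event))
print("Expense A:", mean_expense(prob_a, event, alpha),
      " Expense B:", mean_expense(prob_b, event, alpha))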

Also, neither Kang et al. (2010), Verkade et al. (2013), nor Zalachori et al. (2012) combined pre- and post-processing with data assimilation. I suspect it could make a difference. As mentioned in a previous post, there are probably interactions between data assimilation and post-processing. The recent study by Roulin and Vannitsem (2015) also supports this idea. I am under the impression that if the hydrological state of the catchment and the associated uncertainty could be estimated properly, then maybe those small improvements in the meteorological forcings would be better translated further down the chain.

In my opinion, HEPEX needs more case studies comparing pre- and post-processing, involving:


  • More catchments in different hydroclimatic regimes,
  • Various pre- and post-processing methods,
  • Different atmospheric/hydrologic models,
  • Pairing with data assimilation.

 

What is the answer?

I don’t know the answer to the question « pre-, post-, or both? ». Do you?

I see benefits and drawbacks in both pre- and post-processing. Using only post-processing is simpler and certainly relevant from a research point of view. However, hydrologists also need reliable precipitation and temperature forecasts as such, not only the final streamflow forecasts. So pre-processing is also important, although it is more complicated and apparently sometimes ineffective at improving the streamflow forecasts.

Do we need both? Is there a “correct” answer to this question?

References

Kang T.-H., Kim Y.-O. and Hong I.-P. (2010) Comparison of pre- and post-processors for ensemble streamflow prediction, Atmospheric Science Letters, 11, 153-159.

Murphy A.H. (1993) What is a good forecast? An Essay on the Nature of Goodness in Weather Forecasting, Weather and Forecasting, 8, 281-293.

Roulin E. and Vannitsem S. (2015) Post-processing of medium-range probabilistic hydrological forecasting: impact of forcing, initial conditions and model errors, Hydrological Processes, 29, 1434-1449.

Verkade J.S., Brown J.D., Reggiani P. and Weerts A.H. (2013) Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales, Journal of Hydrology, 501, 73-91.

Zalachori I., Ramos M.-H., Garçon R., Mathevet T. and Gailhard J. (2012) Statistical processing of forecasts for hydrological ensemble prediction: a comparative study of different bias correction strategies, Advances in Science and Research, 8, 135-141.

 

2 comments

  • These are interesting results… Just to confirm: when you talk about pre-processing, you’re talking about bias correction of numerical weather predictions. Would you consider downscaling a form of pre-processing?

    I think there are a couple of ways of running this:

    1. Calibrate/simulate with station observations… Forecast with Raw NWP
    2. Calibrate/simulate with station observations… Forecast with NWP bias corrected/downscaled to observations
    3. Calibrate on reanalysis… Force with raw NWP

    And then, to any/all of the above, you could apply error correction (e.g., ARMA) as post-processing… Similarly, for #3 (and possibly the others), you could translate simulated streamflow percentiles into observed percentiles (see the sketch below).

    In the language of the above, what did the studies you mention compare?
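    For concreteness, here is a minimal sketch of that last idea, i.e. a simple empirical quantile mapping of simulated streamflow onto the observed distribution, in Python and with hypothetical array names (not taken from any of the studies discussed above):

    import numpy as np

    def quantile_map(sim_values, sim_climatology, obs_climatology):
        """Map each simulated value to the observed value that has the same
        empirical percentile (simple empirical quantile mapping)."""
        sorted_sim = np.sort(sim_climatology)
        percentiles = np.searchsorted(sorted_sim, sim_values) / len(sorted_sim)
        return np.quantile(obs_climatology, np.clip(percentiles, 0.0, 1.0))

    # Hypothetical data: the model systematically overestimates streamflow
    rng = np.random.default_rng(1)
    obs_clim = rng.gamma(2.0, 40.0, size=3000)    # archive of observed flows
    sim_clim = 1.3 * obs_clim + 10.0              # corresponding simulated flows
    raw_forecast = np.array([25.0, 60.0, 150.0])  # raw simulated forecast values
    print(quantile_map(raw_forecast, sim_clim, obs_clim))  # bias-corrected values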

  • Marie-Amélie Boucher

    Thank you for your comment!

    Yes, to me “pre-processing” refers to bias and dispersion correction of numerical weather predictions. I think there are complicated issues related to downscaling, especially when it is coupled with bias and dispersion correction. Whether or not we classify downscaling as part of pre-processing, it is necessary to implement some means of preserving the spatial covariability of precipitation and temperature during the pre-processing. So downscaling and pre-processing are definitely related. However, since my own research so far only includes post-processing, I have never considered the very basic downscaling method I use as “real” pre-processing.
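    One common way of doing this, which I am adding here only as an illustration, is a Schaake-shuffle-style reordering: after each site is corrected marginally, the ensemble members are reordered so that their rank structure matches a historical template, which restores a plausible inter-site covariability. A minimal sketch, assuming simple NumPy arrays:

    import numpy as np

    def schaake_shuffle(corrected, template):
        """Reorder corrected ensemble members at each site so that their rank
        structure matches the historical template (Schaake-shuffle-style)."""
        # corrected, template: arrays of shape (n_members, n_sites)
        shuffled = np.empty_like(corrected)
        for site in range(corrected.shape[1]):
            ranks = np.argsort(np.argsort(template[:, site]))    # rank of each template member
            shuffled[:, site] = np.sort(corrected[:, site])[ranks]
        return shuffled

    # Hypothetical example: 20 members, 3 neighbouring precipitation sites
    rng = np.random.default_rng(2)
    template = rng.gamma(2.0, 5.0, size=(20, 1)) + rng.normal(0.0, 1.0, size=(20, 3))
    corrected = rng.gamma(2.0, 4.0, size=(20, 3))   # marginally corrected but spatially scrambled
    print(np.corrcoef(schaake_shuffle(corrected, template), rowvar=False)[0, 1])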

    The studies I mention compare:

    – Verkade et al. (2013): #2 with #1

    – Kang et al. (2010) + Zalachori et al. (2012): #2 with #1 and #4, which is “Calibrate/simulate with station observations… Forecast with Raw NWP and then bias + dispersion correct the resulting streamflow forecasts”

    – Roulin and Vannitsem (2015): #2 with #5, which is “Calibrate/simulate with station observations… Force with reforecasts and then bias + dispersion correct the resulting streamflow forecasts”
