Theme: Model Evaluation and Benchmarking
OzEWEX promotes the use of observations to evaluate and compare biophysical models and data products describing energy and water cycle components and related variables. Models of particular interest include those of importance to the Australian research community or the Bureau of Meteorology (BoM): the CSIRO Atmosphere Biosphere Land Exchange (CABLE) model, the Australian Water Resources Assessment (AWRA) system, and the UK Met Office land surface models (MOSES and JULES). However, other existing or newly developed models will also be included where they are of interest, in particular where they are anticipated to overcome known model weaknesses.
The Theme promotes building on past experiments and on existing infrastructure, code and data to develop new opportunities for model evaluation and benchmarking. These will be focused through further development of the Protocol for the Analysis of Land Surface Models (PALS), a web-based infrastructure for model evaluation and benchmarking that is being developed with resources from CECSS and TERN under the auspices of GEWEX and the International Land Model Benchmarking project (ILAMB). The Theme will build on experience and technologies developed through WIRADA, including the AWRA benchmarking infrastructure, the gridded evapotranspiration product Inter-Comparison and Evaluation experiment (ET-ICE) and the near-real-time blended precipitation product comparison. It may also build on related inter-comparison experiments, e.g. the REgional Carbon Cycle Assessment and Processes (RECCAP) initiative. It will seek to reuse the infrastructure used in these experiments, but will also make the observation data publicly accessible to the OzEWEX community where possible.
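To illustrate the benchmarking concept that underlies PALS and ILAMB, the sketch below compares a land surface model's latent heat flux estimate against a simple empirical benchmark, here a linear regression of the observed flux on a single meteorological driver; a physically based model is expected to out-perform such a benchmark before it can be said to add value beyond its forcing. All data, variable names and coefficients in this sketch are synthetic assumptions for illustration, not outputs of PALS or of any model named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical half-hourly flux-tower data: downward shortwave radiation
# (W m-2), observed latent heat flux, and a model's estimate of that flux.
swdown = rng.uniform(0.0, 1000.0, size=2000)
qle_obs = 0.30 * swdown + rng.normal(0.0, 25.0, size=2000)
qle_model = 0.28 * swdown + rng.normal(0.0, 40.0, size=2000)

# Empirical benchmark: a linear regression of the observed flux on one
# meteorological driver, fitted to the same observations.
slope, intercept = np.polyfit(swdown, qle_obs, deg=1)
qle_benchmark = slope * swdown + intercept

def rmse(estimate: np.ndarray, observed: np.ndarray) -> float:
    """Root-mean-square error of an estimate against observations."""
    return float(np.sqrt(np.mean((estimate - observed) ** 2)))

# The model demonstrates added value only if it beats the benchmark.
print(f"model RMSE:     {rmse(qle_model, qle_obs):6.1f} W m-2")
print(f"benchmark RMSE: {rmse(qle_benchmark, qle_obs):6.1f} W m-2")
```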
This Theme encourages the development of appropriate specifications for the inter-comparison and evaluation of estimates and predictions. It addresses questions such as: What observations should be used in model evaluation? How should differences between estimates and observations be interpreted? How does performance vary regionally across Australia, and what does this imply for the driving processes? How do evaluation metrics translate into estimation uncertainty? The Theme will use the infrastructure to carry out and publish new evaluation and benchmarking experiments based on estimates or predictions provided by WG members.
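To make the questions about metrics concrete, the following minimal sketch computes several skill scores commonly used in this kind of evaluation: bias, root-mean-square error, correlation and Nash-Sutcliffe efficiency. The function name and the runoff numbers are hypothetical; the metrics themselves are standard, and the Theme's actual metric specifications would emerge from the experiments described above.

```python
import numpy as np

def evaluation_metrics(estimate: np.ndarray, observed: np.ndarray) -> dict:
    """Standard skill scores for comparing model estimates to observations."""
    residual = estimate - observed
    return {
        "bias": float(np.mean(residual)),
        "rmse": float(np.sqrt(np.mean(residual ** 2))),
        "correlation": float(np.corrcoef(estimate, observed)[0, 1]),
        # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
        # estimate is no better than the observed mean, and negative
        # values mean it is worse than the observed mean.
        "nse": float(1.0 - np.sum(residual ** 2)
                     / np.sum((observed - np.mean(observed)) ** 2)),
    }

# Hypothetical monthly runoff (mm) for a single gauged catchment.
observed = np.array([12.0, 30.5, 8.2, 55.1, 20.0, 3.4])
estimate = np.array([10.1, 28.0, 11.5, 60.3, 18.2, 5.0])
print(evaluation_metrics(estimate, observed))
```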