Co-leads: Birgit Hassler (DLR) and Forrest Hoffman (ORNL)
This task team will focus on designing and integrating systematic and comprehensive model evaluation tools into the CMIP project.
The goal of CMIP is to better understand past, present, and future climate changes in a multi-model context. An important prerequisite for providing reliable climate information using climate and Earth system models is to understand their capabilities and limitations. It is therefore essential to evaluate the models systematically and comprehensively with the best available observations and reanalysis data.
A full integration of routine benchmarking and evaluation of the models into the CMIP publication workflow has not yet been achieved, and new challenges stemming from models with higher resolution and enhanced complexity need to be tackled. These challenges are both technical (e.g., memory limits, increasingly unstructured and regional grids) and scientific, in particular the need to develop innovative diagnostics, including support for machine learning-based analysis of CMIP simulations.
Aim & Objectives
The aim of the Model Benchmarking TT is to provide a systematic and rapid performance assessment of the models expected to participate in CMIP7, using a set of new and informative diagnostics and performance metrics, ideally delivered along with the model output and documentation.
The goal is to fully integrate the evaluation tools into the CMIP publication workflow and to publish their diagnostic outputs alongside the model output on the ESGF, ideally displayed through an easily accessible website.
Main objective: to pave the way for enhancing existing community evaluation tools that enable systematic and rapid performance assessment of models, to address new challenges such as higher resolution, unstructured grids, and enhanced complexity, and to create a framework in which these tools are applied optimally and their diagnostic output is published alongside the CMIP7 model output.
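To illustrate the kind of performance metric such evaluation tools typically compute, here is a minimal sketch of an area-weighted root-mean-square error of a model field against a reference (observational or reanalysis) dataset. The function name and the toy data are hypothetical; community tools compute such metrics over many variables, regions, and seasons.

```python
import numpy as np

def area_weighted_rmse(model, obs, lat):
    """Area-weighted RMSE between a model field and a reference field
    on a regular latitude-longitude grid.

    model, obs : 2-D arrays of shape (nlat, nlon)
    lat        : 1-D array of latitudes in degrees
    """
    # cos(latitude) weights approximate relative grid-cell area
    # on a regular lat-lon grid
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model)
    sq_err = (model - obs) ** 2
    return float(np.sqrt(np.sum(weights * sq_err) / np.sum(weights)))

# Toy example: synthetic near-surface temperature fields on a 3x4 grid
lat = np.array([-60.0, 0.0, 60.0])
obs = np.full((3, 4), 288.0)
model = obs + 1.0  # uniform 1 K warm bias

print(area_weighted_rmse(model, obs, lat))  # uniform bias -> RMSE = 1.0
```

A benchmarking framework would run many such metrics routinely for every published simulation and summarize them, for example in portrait plots comparing models against each other and against observational uncertainty.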
Early objectives will be:
- Ensuring that all necessary information is available for all data produced by the different simulations (in collaboration with the Data Request TT).
- Ensuring that the data can be accessed relatively easily by the evaluation tools (in collaboration with the Data Access TT).
- Working on a framework that allows quick simulation access and evaluation.
The TT will also coordinate with the following WCRP activities:
- Climate and Cryosphere (CliC)
- Climate and Ocean Variability, Predictability and Change (CLIVAR)
- Lighthouse Activity Explaining and Predicting Earth System Change (EPESC)
Climate Model Benchmarking members
| Name | Term | Role | Affiliation | Country |
|---|---|---|---|---|
| Rebecca Beadling | 2022- | Member | Temple University | USA |
| Ed Blockley | 2022- | Member | UK Met Office | UK |
| Jared Lewis | 2022- | Member | Climate Resource Pty Ltd | Australia |
| Jianhua Lu | 2022- | Member | SYSU & SML | China |
| Luke Madaus | 2022- | Member | Jupiter Intelligence, Inc. | USA |
| Elizaveta Malinina | 2022- | Member | Environment Canada | Canada |
| Wilfried Pokam Mba | 2022- | Member | University of Yaoundé I | Cameroon |
| Enrico Scoccimarro | 2022- | Member | CMCC Foundation | Italy |
| Ranjini Swaminathan | 2022- | Member | University of Reading | UK |
The open call for members closed in October 2022; the call text remains available.