Climate Model Benchmarking

Co-leads: Birgit Hassler, DLR and Forrest Hoffman, ORNL

This task team will focus on designing systematic and comprehensive model evaluation tools and integrating them into the CMIP project.

The goal of CMIP is to better understand past, present, and future climate changes in a multi-model context. An important prerequisite for providing reliable climate information using climate and Earth system models is to understand their capabilities and limitations. It is therefore essential to evaluate the models systematically and comprehensively with the best available observations and reanalysis data.

Challenge

A full integration of routine benchmarking and evaluation of the models into the CMIP publication workflow has not yet been achieved, and new challenges stemming from models with higher resolution and enhanced complexity need to be tackled. These challenges are both technical (e.g., memory limits, increasingly unstructured and regional grids) and scientific, in particular the need to develop innovative diagnostics, including support for machine learning-based analysis of CMIP simulations.
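To make the memory challenge concrete, evaluation tools increasingly rely on lazy, chunked computation so that a high-resolution field never has to fit into memory at once. The sketch below illustrates this pattern with xarray and dask (both must be installed); the array sizes, chunking, and variable name are illustrative assumptions, not the task team's prescribed workflow.

```python
# Minimal sketch: lazy, chunked evaluation with xarray + dask, so a
# high-resolution field is processed in blocks rather than loaded whole.
# The synthetic array stands in for a (hypothetical) CMIP file that would
# normally be opened with, e.g., xr.open_dataset(path, chunks={"time": 12}).
import numpy as np
import xarray as xr

tas = xr.DataArray(
    np.random.rand(120, 180, 360).astype("float32"),  # 10 years, 1-degree grid
    dims=("time", "lat", "lon"),
    coords={"lat": np.linspace(-89.5, 89.5, 180),
            "lon": np.arange(0.5, 360.5, 1.0)},
    name="tas",
).chunk({"time": 12})  # one-year blocks; only a few blocks are in memory at once

# Area-weighted global mean; nothing is computed until .compute() is called
weights = np.cos(np.deg2rad(tas.lat))
global_mean = tas.weighted(weights).mean(dim=("lat", "lon")).compute()
print(global_mean.values[:5])
```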

Aim & Objectives

The aim of the Model Benchmarking TT is to provide a systematic and rapid performance assessment of the models expected to participate in CMIP7, using a set of new and informative diagnostics and performance metrics, ideally delivered alongside the model output and documentation.
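As an illustration of the kind of performance metric involved, the sketch below computes an area-weighted RMSE of a model climatology against a reference climatology. The grids, values, and units are hypothetical placeholders; established community packages such as ESMValTool or the PCMDI Metrics Package implement far richer metric suites.

```python
# Minimal sketch of one common performance metric: the area-weighted RMSE
# of a model climatology against an observational reference on a shared grid.
# All names, grids, and values here are illustrative placeholders.
import numpy as np
import xarray as xr

def weighted_rmse(model: xr.DataArray, ref: xr.DataArray) -> float:
    """Area-weighted RMSE on a regular lat-lon grid (fields already co-located)."""
    weights = np.cos(np.deg2rad(model.lat))
    return float(np.sqrt(((model - ref) ** 2).weighted(weights).mean(dim=("lat", "lon"))))

# Hypothetical near-surface temperature climatologies on a 2-degree grid
coords = {"lat": np.linspace(-89, 89, 90), "lon": np.arange(0.0, 360.0, 2.0)}
model = xr.DataArray(288 + np.random.randn(90, 180), dims=("lat", "lon"), coords=coords)
obs = xr.DataArray(288 + np.random.randn(90, 180), dims=("lat", "lon"), coords=coords)

print(f"Area-weighted RMSE: {weighted_rmse(model, obs):.2f} K")
```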

The goal is to fully integrate the evaluation tools into the CMIP publication workflow and to publish their diagnostic outputs alongside the model output on the ESGF, ideally displayed through an easily accessible website.

Main objective: to pave the way for enhancing existing community evaluation tools that enable systematic and rapid performance assessment of models, to address new challenges such as higher resolution, unstructured grids, and enhanced complexity, and to create a framework in which these tools are applied optimally and their diagnostic output is published alongside the CMIP7 model output.

Early objectives will be:

  1. Ensuring that all information needed for evaluation is available for all data produced by the different simulations (in collaboration with the Data Request TT).
  2. Ensuring that the data can be accessed easily by candidate evaluation tools (in collaboration with the Data Access TT).
  3. Working on a framework that allows quick simulation access and evaluation (one common access pattern is sketched after this list).
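For objective 3, one access pattern already common in the community is catalog-based discovery and lazy loading, for example via intake-esm. The sketch below queries the public Pangeo CMIP6 catalog; the catalog URL and query values are examples of what such a framework could build on, not a decision of the task team.

```python
# Minimal sketch: catalog-based CMIP data discovery with intake-esm
# (requires the intake-esm plugin). The catalog URL points to the public
# Pangeo CMIP6 holdings; the query values are illustrative.
import intake

cat = intake.open_esm_datastore(
    "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
)

# Find monthly near-surface air temperature from the historical experiment
subset = cat.search(
    variable_id="tas",
    table_id="Amon",
    experiment_id="historical",
    member_id="r1i1p1f1",
)
print(subset.df[["source_id", "institution_id"]].drop_duplicates().head())

# Lazily open the matching datasets as a dict of xarray Datasets
dsets = subset.to_dataset_dict()
```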

The TT will also coordinate with the following WCRP activities:

  • Climate and Cryosphere (CliC)
  • Climate and Ocean Variability, Predictability and Change (CLIVAR)
  • Lighthouse Activity Explaining and Predicting Earth System Change (EPESC)

Members

Climate Model Benchmarking members

Name | Since | Role | Affiliation | Country
Birgit Hassler | 2022- | Co-lead | DLR | Germany
Forrest Hoffman | 2022- | Co-lead | ORNL | USA
Rebecca Beadling | 2022- | Member | Temple University | USA
Ed Blockley | 2022- | Member | UK Met Office | UK
Jiwoo Lee | 2022- | Member | PCMDI/LLNL | USA
Valerio Lembo | 2022- | Member | ISAC | Italy
Jared Lewis | 2022- | Member | Climate Resource Pty Ltd | Australia
Jianhua Lu | 2022- | Member | SYSU & SML | China
Luke Madaus | 2022- | Member | Jupiter Intelligence, Inc. | USA
Elizaveta Malinina | 2022- | Member | Environment Canada | Canada
Brian Medeiros | 2022- | Member | NCAR | USA
Wilfried Pokam Mba | 2022- | Member | University of Yaoundé I | Cameroon
Enrico Scoccimarro | 2022- | Member | CMCC Foundation | Italy
Ranjini Swaminathan | 2022- | Member | University of Reading | UK

Activities

Tools gallery

The Model Benchmarking Task Team has compiled detailed information about a number of model evaluation and benchmarking tools. The gallery of tools, and a form for submitting new tools, are available on the CMIP website.

Membership calls

The open call for members closed in October 2022; the call text remains available on the CMIP website.
