Climate Model Benchmarking

Co-leads: Birgit Hassler (DLR) and Forrest Hoffman (ORNL)

This task team will focus on designing systematic and comprehensive model evaluation tools and integrating them into the CMIP project.

The goal of CMIP is to better understand past, present, and future climate changes in a multi-model context. An important prerequisite for providing reliable climate information using climate and Earth system models is to understand their capabilities and limitations. It is therefore essential to evaluate the models systematically and comprehensively with the best available observations and reanalysis data.

Challenge

A full integration of routine benchmarking and evaluation of the models into the CMIP publication workflow has not yet been achieved, and new challenges stemming from models with higher resolution and enhanced complexity need to be tackled. These challenges are both technical (e.g., memory limits, increasingly unstructured and regional grids) and scientific, in particular the need to develop innovative diagnostics, including support for machine learning-based analysis of CMIP simulations.

Aim & Objectives

The aim of the Model Benchmarking TT is to provide a systematic and rapid performance assessment of the models expected to participate in CMIP7, using a set of new and informative diagnostics and performance metrics, ideally delivered alongside the model output and documentation.

The goal is to fully integrate the evaluation tools into the CMIP publication workflow and to publish their diagnostic outputs alongside the model output on the ESGF, ideally displayed through an easily accessible website.

Main objective: to pave the way for enhancing existing community evaluation tools that facilitate the systematic and rapid performance assessment of models while addressing new challenges such as higher resolution, unstructured grids, and enhanced complexity, and to create a framework in which these tools are applied optimally and their diagnostic output is published alongside the CMIP7 model output.
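As a purely illustrative example of the kind of performance metric such an assessment relies on, the sketch below computes an area-weighted root-mean-square error between a model field and a reference field on a regular latitude-longitude grid. The function name and the synthetic data are assumptions made for this illustration; they are not part of any CMIP evaluation tool.

```python
import numpy as np

def area_weighted_rmse(model, reference, lat):
    """Area-weighted RMSE between two fields on a regular (lat, lon) grid.

    Grid cells are weighted by cos(latitude) so that high-latitude cells,
    which cover less area, contribute proportionally less to the score.
    """
    weights = np.broadcast_to(np.cos(np.deg2rad(lat))[:, None], model.shape)
    return float(np.sqrt(np.average((model - reference) ** 2, weights=weights)))

# Synthetic 2-degree global fields standing in for model output and observations.
lat = np.arange(-89.0, 90.0, 2.0)
lon = np.arange(0.0, 360.0, 2.0)
reference = 288.0 + 30.0 * np.cos(np.deg2rad(lat))[:, None] + 0.0 * lon[None, :]
model = reference + 0.5 * np.sin(np.deg2rad(lon))[None, :]   # add a small zonal bias

print(f"Area-weighted RMSE: {area_weighted_rmse(model, reference, lat):.3f} K")
```

Existing community evaluation packages compute far richer sets of diagnostics, but each ultimately reduces a model-observation comparison to well-defined numbers such as this one.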

The TT’s updated objectives (December 2024) are:

  1. Devise and provide guidance and context for how the diagnostics are used and implemented within the AR7 Fast Track.
  2. Finalise the diagnostic selection for the AR7 Fast Track framework and the data request opportunity.
  3. Set up an interim steering panel to oversee the development of the Rapid Evaluation Framework (REF).
  4. Oversee the validation and testing of the implemented framework for the AR7 Fast Track, and provide context for the appropriate interpretation of the metric results.
  5. Oversee the launch of the AR7 Fast Track REF for use by the community.
  6. Liaise with, and gather requirements from, the data request and model documentation task teams to ensure compatibility with the REF.
  7. Liaise with the WIP and ESGF on any quality assurance and quality control requirements, including representation on the joint WIP-ESGF QAQC Task Team.
  8. Produce a preparedness review of the REF for CMIP7 (beyond the AR7 Fast Track, incorporating the needs of next-generation models).
  9. Track community suggestions for additional diagnostic metrics and functions for the REF.
  10. Develop a vision for future model benchmarking tools and methods.

The TT will also coordinate with the following WCRP activities:

  • Climate and Cryosphere (CliC)
  • Climate and Ocean Variability, Predictability and Change (CLIVAR)
  • Lighthouse Activity Explaining and Predicting Earth System Change (EPESC)

Members


Name | Since | Role | Affiliation | Country
Birgit Hassler | 2022- | Co-lead | DLR | Germany
Forrest Hoffman | 2022- | Co-lead | ORNL | USA
Rebecca Beadling | 2022- | Member | Temple University | USA
Ed Blockley | 2022- | Member | UK Met Office | UK
Jiwoo Lee | 2022- | Member | PCMDI/LLNL | USA
Valerio Lembo | 2022- | Member | ISAC | Italy
Jared Lewis | 2022- | Member | Climate Resource Pty Ltd | Australia
Jianhua Lu | 2022- | Member | SYSU & SML | China
Luke Madaus | 2022- | Member | Jupiter Intelligence, Inc. | USA
Elizaveta Malinina | 2022- | Member | Environment Canada | Canada
Brian Medeiros | 2022- | Member | NCAR | USA
Wilfried Pokam Mba | 2022- | Member | University of Yaoundé I | Cameroon
Enrico Scoccimarro | 2022- | Member | CMCC Foundation | Italy
Ranjini Swaminathan | 2022- | Member | University of Reading | UK

Activities

Tools gallery

The Model Benchmarking Task Team have compiled detailed information about a number of model evaluation and benchmarking tools. You can view the gallery of tools here. You can submit a tool to the gallery here.

Rapid Evaluation Framework

The Task Team have developed a Rapid Evaluation Framework (REF). The framework is designed to be open source and modular, with an immediate focus on delivering a rapid evaluation of the AR7 simulations, and it is envisaged to build upon existing community evaluation packages. The framework will improve the availability of, and global access to, evaluated simulations, and will support community efforts to reduce the carbon footprint.
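To illustrate what a modular design might mean in practice, the hypothetical sketch below defines a minimal plug-in interface in which each diagnostic consumes model and reference data and returns named metric values, and a small driver runs every registered diagnostic. The class and function names are invented for this example and do not reflect the actual REF code base.

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterable

import numpy as np


class Diagnostic(ABC):
    """Hypothetical plug-in interface: turn model/reference data into named metrics."""

    name: str

    @abstractmethod
    def run(self, model: np.ndarray, reference: np.ndarray) -> Dict[str, float]:
        ...


class GlobalMeanBias(Diagnostic):
    """Example diagnostic: global-mean difference between model and reference."""

    name = "global_mean_bias"

    def run(self, model: np.ndarray, reference: np.ndarray) -> Dict[str, float]:
        return {"bias": float(np.mean(model - reference))}


def evaluate(diagnostics: Iterable[Diagnostic],
             model: np.ndarray, reference: np.ndarray) -> Dict[str, Dict[str, float]]:
    """Run every registered diagnostic; results could be published alongside the data."""
    return {d.name: d.run(model, reference) for d in diagnostics}


# Toy usage with synthetic fields standing in for ESGF-hosted output and observations.
rng = np.random.default_rng(0)
reference = rng.normal(288.0, 5.0, size=(90, 180))
model = reference + 0.3
print(evaluate([GlobalMeanBias()], model, reference))
```

Because each diagnostic is self-contained in a structure like this, new metrics can be added, tested, and versioned independently of the driver, which is one way a modular framework can keep pace with community suggestions for additional diagnostics.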

Membership calls

A call for new members opened in December 2024; the deadline for applications is 09:00 UTC on Monday 13 January 2025. Further information and the application form can be found here.

An earlier open call for members closed in October 2022; the call text is available here.
