Institut Pierre-Simon Laplace, CNRS/UPMC, Paris, France
Grenville Lister
Science and Technology Facilities Council, Abingdon, UK
Silvia Mocavero
Centro Euro-Mediterraneo sui Cambiamenti Climatici (CMCC) Foundation, Lecce, Italy
Seth Underwood
Engility Inc., Dover, NJ, USA
NOAA/Geophysical Fluid Dynamics Laboratory, Princeton, NJ, USA
Garrett Wright
Engility Inc., Dover, NJ, USA
NOAA/Geophysical Fluid Dynamics Laboratory, Princeton, NJ, USA
Abstract. A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and by natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O- and/or memory-bound. Such weak-scaling, I/O-bound, and memory-bound multi-physics codes present particular challenges to computational performance.
Traditional metrics of computational efficiency, such as performance counters and scaling curves, do not tell us enough about the real sustained performance of climate models on different machines. They also do not provide a satisfactory basis for comparative information across models.
We introduce a set of metrics that can be used to study the computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and of the underlying parallel programming model. We show how these metrics can be used to measure the performance actually attained by Earth system models on different machines, and to identify the most fruitful areas of research and development for performance engineering.
We present results for these measures for a diverse suite of models from several modeling centers, and propose them as the basis for CPMIP, a computational performance model intercomparison project (MIP).
Climate models are among the most computationally expensive scientific applications in the world. We present a set of measures of computational performance, independent of the underlying hardware and of model formulation, that can be used to compare models. They are easy to collect and reflect performance actually achieved in practice. We are preparing a systematic effort to collect these metrics for the world's climate models during CMIP6, the sixth phase of the Coupled Model Intercomparison Project.