General Circulation Models (GCMs) of Earth’s climate provide robust simulations of large-scale average climatic variables, such as end-of-century global average temperature, under various future greenhouse gas emissions scenarios. Each GCM is an imperfect representation of the climate system, reflecting the strengths and weaknesses of different approaches to simulating historically observed climate. Averaging the simulations of a set of GCMs in a multi-model ensemble yields a best-estimate prediction across the range of approximate representations of the climate system, providing critical information for the formulation of climate mitigation policy.
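The multi-model ensemble average described above can be sketched in a few lines. This is an illustrative example only: the model names and projection values are hypothetical, and real ensembles average gridded fields rather than scalars.

```python
import numpy as np

# Hypothetical end-of-century global temperature anomalies (degrees C)
# simulated by four GCMs under one emissions scenario (illustrative values).
gcm_projections = {
    "model_a": 2.1,
    "model_b": 2.8,
    "model_c": 2.4,
    "model_d": 3.0,
}

values = np.array(list(gcm_projections.values()))

# The multi-model ensemble mean is the unweighted average across models;
# the inter-model spread gives a crude measure of structural uncertainty.
ensemble_mean = values.mean()
ensemble_spread = values.std()

print(f"ensemble mean:   {ensemble_mean:.2f} C")
print(f"ensemble spread: {ensemble_spread:.2f} C")
```

In practice each model's output would be regridded to a common grid before averaging, and weighted ensemble schemes are also used; the unweighted mean shown here is the simplest baseline.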

Adaptation to the most severe impacts of climate change urgently requires reliable information about near-term, local-scale changes in the statistics of extreme, rather than average, climate to inform decision-making in government and business. A wide range of spatial downscaling, bias correction and other statistical post-processing methods have been developed to derive such information from GCM simulations. Prediction in the context of adaptive decision-making requires a tailored, task-specific approach to the choice of GCM post-processing steps, and to the selection of a method for combining the outputs of multiple GCMs into a multi-model ensemble.
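As a concrete illustration of one such post-processing step, the sketch below implements empirical quantile mapping, a common bias-correction technique: each future model value is assigned its quantile within the model's historical distribution and then mapped to the corresponding quantile of the observed distribution. The function name and the synthetic data are assumptions for illustration, not part of this project's pipeline.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: correct model output so its historical
    distribution matches observations, then apply the same correction to
    future simulations. Illustrative sketch only."""
    # Quantile of each future value within the model's historical distribution.
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Map those quantiles onto the observed historical distribution.
    return np.quantile(obs_hist, quantiles)

# Synthetic example: the "model" runs 1.5 C too warm on average.
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 2.0, size=1000)      # "observed" historical temperatures
model = obs + 1.5                           # biased model over the same period
future = rng.normal(18.0, 2.0, size=1000)   # biased future simulation

corrected = quantile_map(model, obs, future)
# The mapping should remove roughly the 1.5 C warm bias from the future run.
print(f"raw future mean:       {future.mean():.2f} C")
print(f"corrected future mean: {corrected.mean():.2f} C")
```

Operational bias-correction methods add refinements (seasonal windows, trend preservation, parametric tails), but the quantile-matching idea shown here is the core of the approach.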

Optimal selection of GCM post-processing and ensemble methods for a specific climate risk prediction application requires multiple data-intensive experiments. The repetitive and parametrizable nature of these experiments makes them strong candidates for automation. This project is investigating approaches to such automation, drawing on data-oriented architectures and software engineering practice.
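The parametrizable experiments described above can be framed as a grid over experimental factors. The sketch below shows the idea under assumed factor names (the model identifiers, method names and `run_experiment` placeholder are hypothetical, not this project's actual design).

```python
from itertools import product

# Hypothetical experiment grid: each combination of GCM, bias-correction
# method and ensemble strategy defines one reproducible experiment.
GCMS = ["model_a", "model_b", "model_c"]
BIAS_CORRECTIONS = ["none", "quantile_mapping"]
ENSEMBLE_METHODS = ["unweighted_mean", "skill_weighted_mean"]

def run_experiment(gcm, bias_correction, ensemble_method):
    """Placeholder for one data-intensive experiment; a real pipeline would
    fetch data, apply post-processing and score the resulting predictions."""
    return {
        "gcm": gcm,
        "bias_correction": bias_correction,
        "ensemble_method": ensemble_method,
    }

# Automation then amounts to enumerating and dispatching the grid,
# which a workflow engine can parallelize and cache.
experiments = [run_experiment(*combo)
               for combo in product(GCMS, BIAS_CORRECTIONS, ENSEMBLE_METHODS)]
print(f"{len(experiments)} experiments generated")
```

Expressing the experiment space declaratively like this is what makes a data-oriented architecture attractive: the same grid definition can drive execution, caching and comparison of results.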