A mass- and energy-conserving framework for using machine learning to speed computations: a photochemistry example
Large air quality models and large climate models simulate the physical and chemical properties of the ocean, land surface, and/or atmosphere to predict atmospheric composition, energy balance, and the future of our planet. All of these models employ some form of operator splitting, also called the method of fractional steps, in their structure, which enables each physical or chemical process to be simulated in a separate operator or module within the overall model. In this structure, each module calculates property changes over a fixed period of time; that is, property values are passed into the module, which computes how they evolve over that period and then returns the updated values, in round-robin fashion among the various modules of the model. Some of these modules consume the vast majority of the computer resources used by the entire model, so increasing their computational efficiency can improve the model's overall performance, enable more realistic physical or chemical representations within the module, or both. Recent efforts have attempted to replace the most time-consuming modules with ones that use machine learning tools to memorize their input–output relationships. One shortcoming of some of the original modules, and of their machine-learned replacements, is a lack of adherence to conservation principles that are essential to model performance. In this work, we derive a mathematical framework for machine-learned replacements that conserves properties (such as mass, atoms, or energy) to machine precision. This framework can be used to develop machine-learned operator replacements in environmental models.
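One way a machine-learned operator replacement can be made exactly conserving is to project the surrogate's raw output back onto the conservation constraint after each step. The sketch below illustrates this idea with a minimum-norm linear correction; it is a hypothetical example for intuition, not the paper's actual derivation, and the conservation matrix, species values, and the stand-in surrogate are invented for illustration.

```python
import numpy as np

# Hypothetical setup: 3 chemical species whose conserved quantities
# (e.g. counts of two atom types) are given by A @ x, where rows of A
# are atom types and columns are species. Values are illustrative only.
A = np.array([
    [1.0, 2.0, 0.0],   # atoms of type 1 per molecule of each species
    [0.0, 1.0, 2.0],   # atoms of type 2 per molecule of each species
])

def surrogate_step(x):
    """Stand-in for a machine-learned operator: advances species
    concentrations x over one fixed time step, with no built-in
    conservation guarantee (arbitrary drift for demonstration)."""
    return x * np.array([0.9, 1.05, 1.1]) + 0.01

def conserving_step(x):
    """Apply the surrogate, then add the minimum-norm correction that
    restores the constraint A @ y == A @ x exactly."""
    y = surrogate_step(x)
    residual = A @ x - A @ y                       # conservation violation
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return y + correction

x0 = np.array([1.0, 0.5, 2.0])
x1 = conserving_step(x0)
# Conserved totals now match to machine precision even though the raw
# surrogate output violated them.
print(np.allclose(A @ x0, A @ x1, rtol=0.0, atol=1e-12))  # → True
```

In an operator-splitting loop, such a corrected surrogate would simply replace the original module's call, so conservation holds at every fractional step rather than only approximately over the whole simulation.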
Published in: Geoscientific Model Development, 10.5194/gmd-13-4435-2020, Copernicus