TY - JOUR
T1 - From calibration to parameter learning: Harnessing the scaling effects of big data in geoscientific modeling
AU - Tsai, Wen-Ping
AU - Feng, Dapeng
AU - Pan, Ming
AU - Beck, Hylke
AU - Lawson, Kathryn
AU - Yang, Yuan
AU - Liu, Jiangtao
AU - Shen, Chaopeng
N1 - Generated from Scopus record by KAUST IRTS on 2023-02-14
PY - 2021/12/1
Y1 - 2021/12/1
N2 - The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially-varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples learning from soil moisture and streamflow, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
AB - The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially-varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples learning from soil moisture and streamflow, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
UR - https://www.nature.com/articles/s41467-021-26107-z
UR - http://www.scopus.com/inward/record.url?scp=85117361272&partnerID=8YFLogxK
U2 - 10.1038/s41467-021-26107-z
DO - 10.1038/s41467-021-26107-z
M3 - Article
SN - 2041-1723
VL - 12
JO - Nature Communications
JF - Nature Communications
IS - 1
ER -