Neural Manifold Operators for Learning the Evolution of Physical Dynamics [paper]
We propose Neural Manifold Operator (NMO), an operator learning paradigm that learns an intrinsic-dimension representation of the underlying operator.
Intrinsic dimension representation: NMO captures the intrinsic dynamics of a physical system by learning an invariant subspace of the underlying infinite-dimensional operator at its intrinsic dimension.
Generic operator learning paradigm: NMO is a generic operator learning paradigm that admits various network implementations, including Multi-Layer Perceptrons, Convolutional Neural Networks, and Transformers.
Benefits in multi-disciplinary areas: NMO achieves state-of-the-art performance in several real-world and equation-governed scenarios (Fig. 2), spanning mathematics, physics, chemistry, and earth science.
Efficiency and Accuracy: Through intrinsic dimension projection, NMO significantly reduces the number of training parameters while improving generalization capability and physical consistency.
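The projection idea can be sketched with a toy proper-orthogonal-decomposition (POD) baseline: project snapshots onto a low-dimensional subspace, fit the evolution map there, and lift predictions back to the full space. This is a hedged illustration only, not the NMO architecture described in the paper; the synthetic data and all names below (`A`, `P`, `V`, `A_hat`) are invented for the example.

```python
import numpy as np

# Toy illustration (NOT the NMO implementation): learn dynamics in a
# low-dimensional subspace and lift predictions back to the full space.
rng = np.random.default_rng(0)

# Synthetic data: a 100-dim observed field driven by 3 latent modes.
d_full, d_int, T = 100, 3, 200
A = np.array([[0.99, -0.10, 0.00],        # latent evolution operator
              [0.10,  0.99, 0.00],
              [0.00,  0.00, 0.95]])
P = rng.standard_normal((d_full, d_int))  # latent -> observed lift
Z = np.empty((T, d_int))
z = rng.standard_normal(d_int)
for t in range(T):
    z = A @ z
    Z[t] = z
X = Z @ P.T                               # snapshots, shape (T, d_full)

# 1) Recover the low-dim subspace from snapshots (POD / truncated SVD).
V = np.linalg.svd(X, full_matrices=False)[2][:d_int].T  # (d_full, d_int)

# 2) Fit a linear one-step evolution map in reduced coordinates.
Zr = X @ V
A_hat, *_ = np.linalg.lstsq(Zr[:-2], Zr[1:-1], rcond=None)

# 3) Predict the held-out last snapshot from the one before it.
x_pred = (X[-2] @ V) @ A_hat @ V.T
rel_err = np.linalg.norm(x_pred - X[-1]) / np.linalg.norm(X[-1])
```

Because the toy data truly lies on a 3-dimensional linear subspace with linear dynamics, the reduced model reconstructs the held-out snapshot almost exactly; learning a nonlinear operator in the reduced space is the harder problem the paper addresses.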
Figure 2. The experiment scenarios of NMO.
State-of-the-art performance. We compare our model with nine baseline models in seven scenarios. NMO achieves state-of-the-art performance on both statistical and physical metrics, with a 23.35% average improvement across three real-world and four equation-governed scenarios spanning a wide range of disciplines.
The best physical performance. NMO achieves the best performance on physical metrics without any scenario-specific inductive bias.
Figure 3. Left: Relative mass error at each time step, with visualized predictions of each model, on the Shallow-Water equations scenario. Mid: Turbulence energy spectrum on the Rayleigh-Bénard convection scenario. Right: Average absolute divergence at each time step, and RMSE as a function of prediction step, for each model on the Rayleigh-Bénard convection scenario.
Accuracy and efficiency. Our model achieves the best balance among training speed, parameter count, and predictive performance.
Figure 4. Training time and RMSE rankings of the compared models on the SEVIR and Navier-Stokes equations scenarios.
Our experiments show that the intrinsic dimension computed by our paradigm is the optimal dimensional representation of the underlying operator. We validate this across various scenarios, further demonstrating the effectiveness of our algorithm.
Figure 5. Prediction performance as a function of the dimension of the time-evolution operator. The dotted lines mark the intrinsic dimension (ID) computed by our algorithm for each scenario.
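The paper computes the ID with its own algorithm; as a generic, hedged illustration of estimating intrinsic dimension from samples, here is the classic Two-NN estimator, which uses only the ratio of each point's two nearest-neighbor distances. The synthetic data and all variable names are invented for this sketch and are not taken from the paper.

```python
import numpy as np

# Generic Two-NN intrinsic-dimension estimate (not the paper's algorithm).
rng = np.random.default_rng(1)

# Sample points from a 2-dim subspace embedded in 10-dim ambient space.
N, d_true, d_amb = 2000, 2, 10
Y = rng.standard_normal((N, d_true)) @ rng.standard_normal((d_true, d_amb))

# Squared pairwise distances via the Gram-matrix identity
# ||y_i - y_j||^2 = ||y_i||^2 + ||y_j||^2 - 2 <y_i, y_j>.
sq = (Y ** 2).sum(axis=1)
D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T, 0.0)
np.fill_diagonal(D2, np.inf)  # ignore self-distance
D2.sort(axis=1)

# Ratio mu = r2 / r1 of each point's two nearest-neighbor distances,
# then the maximum-likelihood estimate  id = N / sum(log mu_i).
mu = np.sqrt(D2[:, 1] / D2[:, 0])
id_hat = N / np.log(mu).sum()
```

On this toy data the estimate lands close to the true value of 2 even though the ambient dimension is 10, which is the kind of gap between ambient and intrinsic dimension that Figure 5 probes.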
If our paper or code is helpful to your research, please cite our paper:
@inproceedings{Wu2024NMO,
author = {Wu, Hao and Weng, Kangyu and Zhou, Shuyi and Huang, Xiaomeng and Xiong, Wei},
title = {Neural Manifold Operators for Learning the Evolution of Physical Dynamics},
year = {2024},
isbn = {9798400704901},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3637528.3671779},
doi = {10.1145/3637528.3671779},
booktitle = {Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3356--3366},
numpages = {11},
series = {KDD '24}
}
If you have any questions about our paper or code, please contact Hao Wu (wuhao2022@mail.ustc.edu.cn), Wei Xiong (xiongw21@mails.tsinghua.edu.cn; wei.xiong@yale.edu), Xiaomeng Huang (hxm@tsinghua.edu.cn) or any author of this paper.


