Can we use machine learning (ML) to predict the evolution of complex, chaotic systems? Recent work has shown that the answer is conditionally affirmative, provided we use some additional "help" in the form of a random bath and observers, as defined in reservoir computing (RC) [1]. What about using other "standard" ML methods to forecast the future of complex systems? It has been shown that the long short-term memory (LSTM) method can work for general spatiotemporal evolution of the Kuramoto type [2]. We focused on the following question: under what circumstances can ML predict the spatiotemporal structures that emerge in complex evolution involving both nonlinearity and some form of stochasticity? To address this question we used two extreme phenomena: turbulent chimeras and stochastic branching. The former generates partially coherent structures in highly nonlinear oscillators interacting through short- or long-range coupling, while the latter appears in wave propagation through weakly disordered media. Examples of the former include biological networks, SQUIDs (superconducting quantum interference devices), and coupled lasers, while examples of the latter include geophysical waves, electronic motion on a graphene surface, and other similar wave-propagation configurations. We applied and compared three ML methods, namely LSTM, RC, and standard feed-forward neural networks (FNNs), to these two extreme spatiotemporal phenomena dominated by coherence (chimeras) and stochasticity (branching), respectively [3]. To increase the predictability of the methods we augmented LSTM (and FNNs) with observers: specifically, we assigned one LSTM network to each system node except for "observer" nodes, which provide continual "ground truth" measurements as input; we refer to this method as "Observer LSTM" (OLSTM), sketched after the references below. We found that even a small number of observers greatly improves the data-driven (model-free) long-term forecasting capability of the LSTM networks and provides a framework for a consistent comparison between the RC and LSTM methods. We find that RC requires smaller training datasets than OLSTMs, but the latter requires fewer observers. Both methods are benchmarked against feed-forward neural networks, also trained to make predictions with observers (OFNNs).

[1] Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett and E. Ott, Reservoir observers: Model-free inference of unmeasured variables in chaotic systems, Chaos 27, 041102 (2017); J. Pathak, B. Hunt, M. Girvan, Z. Lu and E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Phys. Rev. Lett. 120, 024102 (2018).
[2] P. R. Vlachas, W. Byeon, Z. Y. Wan, T. P. Sapsis and P. Koumoutsakos, Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks, Proc. R. Soc. A 474, 20170844 (2018).
[3] G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis and E. Kaxiras, Machine learning with observers predicts complex spatiotemporal evolution, arXiv:1807.10758 (2018).

Host: Avadh Saxena
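As a rough illustration of the per-node observer wiring described in the abstract, the following Python sketch (using PyTorch) assigns one small LSTM to each non-observer node, which takes that node's own recent history together with the continually measured observer values as input. The sizes, observer placement, and class names here are assumptions for illustration only; the actual architecture of Ref. [3] may differ.

```python
# Minimal, illustrative sketch of the "Observer LSTM" (OLSTM) idea described above.
# Sizes, observer placement, and wiring are hypothetical.
import torch
import torch.nn as nn

class NodeLSTM(nn.Module):
    """One small LSTM assigned to a single non-observer node: it sees a short
    history of its own value together with the current observer measurements."""
    def __init__(self, n_observers, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1 + n_observers,
                            hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 1)

    def forward(self, node_history, observer_history):
        # node_history:     (batch, time, 1)      past values of this node
        # observer_history: (batch, time, n_obs)  "ground truth" observer inputs
        x = torch.cat([node_history, observer_history], dim=-1)
        out, _ = self.lstm(x)
        return self.readout(out[:, -1, :])        # predicted next value of this node

# One OLSTM per non-observer node; observer nodes are measured, never predicted.
n_nodes = 64                                      # hypothetical system size
observer_ids = [0, 16, 32, 48]                    # hypothetical observer placement
models = {i: NodeLSTM(n_observers=len(observer_ids))
          for i in range(n_nodes) if i not in observer_ids}
```

In a free-running forecast under this scheme, only the observer nodes would be fed measured values at each step, while every other node would be fed back its own previous predictions.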