Time and Date: 14:40 - 16:20 on 12th June 2018
Chair: Rossella Arcucci
|334|| Data assimilation in a nonlinear time-delayed dynamical system with Lagrangian optimization [abstract]
Abstract: When the heat released by a flame is sufficiently in phase with the acoustic pressure, a self-excited thermoacoustic oscillation can arise. These nonlinear oscillations are one of the biggest challenges faced in the design of safe and reliable gas turbines and rocket motors. In the worst-case scenario, uncontrolled thermoacoustic oscillations can shake an engine apart. Reduced-order thermoacoustic models, which are nonlinear and time-delayed, can only qualitatively predict thermoacoustic oscillations. To make reduced-order models quantitatively predictive, we develop a data assimilation framework for state estimation. We numerically estimate the most likely nonlinear state of a Galerkin-discretized time-delayed model of a prototypical combustor. Data assimilation optimally blends observations with previous estimates of the system's state (the background) to produce optimal initial conditions. A cost functional is defined to measure (i) the statistical distance between the model output and the measurements from experiments; and (ii) the distance between the model's initial conditions and the background knowledge. Its minimum corresponds to the optimal state, which is computed by Lagrangian optimization with the aid of adjoint equations. We study the influence of the number of Galerkin modes with which the model is discretized; these modes are the natural acoustic modes of the duct. We show that decomposing the measured pressure signal into a finite number of modes is an effective way to enhance the state estimation, especially when highly nonlinear modal interactions occur in the assimilation window. This work represents the first application of data assimilation to nonlinear thermoacoustics, which opens new possibilities for real-time calibration of reduced-order models with experimental measurements.
|Tullio Traverso and Luca Magri|
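The variational state estimation described in the abstract can be sketched in a few lines: a cost functional penalizes the misfit to both the background and the observations, and its minimizer is the optimal initial condition. The toy linear model and all names below are illustrative assumptions, not the authors' code; the paper works with a nonlinear time-delayed Galerkin model and computes gradients with adjoint equations rather than the finite differences used here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4                      # number of modes (toy size, assumed)
A = -0.1 * np.eye(n)       # placeholder linear "model" for the sketch

def model_trajectory(x0, n_steps=20, dt=0.05):
    """Forward-Euler integration of the toy linear model."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * A @ xs[-1])
    return np.array(xs)

background = np.ones(n)                            # prior state estimate
truth = background + 0.3 * rng.standard_normal(n)  # synthetic "true" state
obs = model_trajectory(truth)[::5] + 0.01 * rng.standard_normal((5, n))

B_inv = np.eye(n)          # inverse background covariance (assumed identity)
R_inv = 100.0 * np.eye(n)  # inverse observation covariance (assumed)

def cost(x0):
    """Background misfit plus observation misfit along the trajectory."""
    xs = model_trajectory(x0)[::5]
    jb = 0.5 * (x0 - background) @ B_inv @ (x0 - background)
    jo = 0.5 * sum((x - y) @ R_inv @ (x - y) for x, y in zip(xs, obs))
    return jb + jo

# Minimize the cost functional; gradients here come from finite
# differences, whereas the paper obtains them from adjoint equations.
result = minimize(cost, background)
print(cost(result.x) <= cost(background))  # analysis improves on background
```

The analysis (minimizer) always achieves a cost no larger than the background, since the optimizer starts from the background state.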
|97|| Machine learning to approximate solutions of ordinary differential equations: Neural networks vs. linear regressors [abstract]
Abstract: We discuss surrogate models based on machine learning as approximations to the solution of an ordinary differential equation. Neural networks and a multivariate linear regressor are assessed for this application. Both show satisfactory performance for the considered case study of a damped perturbed harmonic oscillator. The interface of the surrogate model is designed to work like a solver of an ordinary differential equation, or, more generally, a simulation unit. Computational demand and accuracy in terms of local and global error are discussed. Parameter studies are performed to assess the sensitivity of the method and to tune its performance.
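A minimal sketch of one way such a surrogate can be built (an assumed illustration, not the authors' code): train a multivariate linear regressor to reproduce one solver step of a damped harmonic oscillator, then iterate it like an ODE solver and measure the global error against the reference integrator.

```python
import numpy as np

# Damped harmonic oscillator x'' + 2*zeta*w*x' + w^2*x = 0 as a
# first-order system; parameter values are illustrative.
w, zeta, dt = 2.0, 0.1, 0.01
A = np.array([[0.0, 1.0], [-w**2, -2*zeta*w]])

def step(s):
    """One RK4 step of the reference ODE solver."""
    k1 = A @ s
    k2 = A @ (s + 0.5*dt*k1)
    k3 = A @ (s + 0.5*dt*k2)
    k4 = A @ (s + dt*k3)
    return s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Generate training data (state at t, state at t + dt) from the solver.
states = [np.array([1.0, 0.0])]
for _ in range(2000):
    states.append(step(states[-1]))
X = np.array(states[:-1])
Y = np.array(states[1:])

# Multivariate linear regressor: Y ≈ X @ M, fitted by least squares.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Use the surrogate like a solver: roll it forward and compare.
s_true, s_surr = X[0], X[0]
for _ in range(500):
    s_true, s_surr = step(s_true), s_surr @ M
err = np.max(np.abs(s_true - s_surr))  # global error after 500 steps
print(err)
```

Because the oscillator's flow map is linear, the linear regressor recovers the one-step map almost exactly; for nonlinear systems the comparison with a neural network surrogate becomes meaningful.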
|130|| Kernel Methods for Discrete-Time Linear Equations [abstract]
Abstract: Methods from learning theory are used in the state space of linear dynamical and control systems in order to estimate the system matrices and some relevant quantities such as the topological entropy. The approach is illustrated via a series of numerical examples.
|Boumediene Hamzi and Fritz Colonius|
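The linear-kernel special case of this idea reduces to ordinary least squares, and it already shows how the system matrix and the topological entropy can be estimated from a trajectory. The sketch below is an illustration under that assumption, not the authors' kernel implementation.

```python
import numpy as np

# Discrete-time linear system x_{k+1} = A x_k with one expanding
# eigenvalue (1.5); the matrix is an illustrative assumption.
rng = np.random.default_rng(1)
A_true = np.array([[1.5, 0.2], [0.0, 0.5]])

traj = [rng.standard_normal(2)]
for _ in range(20):
    traj.append(A_true @ traj[-1])
traj = np.array(traj)

# Estimate A by least squares on the pairs (x_k, x_{k+1}):
# x_{k+1}^T = x_k^T A^T, so solve X @ M = Y with M = A^T.
X, Y = traj[:-1], traj[1:]
A_est = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Topological entropy of a linear system: sum of log|lambda| over
# eigenvalues with modulus greater than one.
eigs = np.linalg.eigvals(A_est)
entropy = sum(np.log(abs(l)) for l in eigs if abs(l) > 1)
print(entropy)  # close to log(1.5) for this system
```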
|150|| Data-driven inference of the ordinary differential equation representation of a chaotic dynamical model using data assimilation [abstract]
Abstract: Recent progress in machine learning has shown how to forecast and, to some extent, learn the dynamics of a model from its output, relying in particular on neural networks and deep learning techniques. We will show how the same goal can be directly achieved using data assimilation techniques without relying on machine-learning software libraries, with a view to high-dimensional models. The dynamics of a model are learned from observations of its output, and an ordinary differential equation (ODE) representation of this model is inferred using a recursive nonlinear regression. Because the method is embedded in a Bayesian data assimilation framework, it can learn from partial and noisy observations of a state trajectory of the physical model. Moreover, a space-wise local representation of the ODE system is introduced and is key to dealing with high-dimensional models. The method is illustrated on several chaotic discrete and continuous models of various dimensions, with or without noisy observations, with the goal of identifying or improving the model dynamics, building a surrogate or reduced model, or producing forecasts from mere observations of the physical model. It has recently been suggested that neural network architectures could be interpreted as dynamical systems. Reciprocally, we show that our ODE representations are reminiscent of deep learning architectures. Furthermore, numerical analysis considerations on stability shed light on the assets and limitations of the method.
|Marc Bocquet, Julien Brajard, Alberto Carrassi and Laurent Bertino|
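One concrete (assumed) instance of inferring an ODE representation by regression is a polynomial least-squares fit, illustrated below on the Lorenz-63 model. The paper's method goes further by embedding the regression in a Bayesian data assimilation framework, so it can work with partial and noisy observations rather than the exact derivatives used here.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8/3):
    """Lorenz-63 vector field with the classical chaotic parameters."""
    x, y, z = s
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

# Generate a trajectory and the corresponding exact time derivatives.
dt = 0.001
s = np.array([1.0, 1.0, 1.0])
states, derivs = [], []
for _ in range(5000):
    d = lorenz(s)
    states.append(s)
    derivs.append(d)
    s = s + dt * d          # forward Euler, for data generation only
S, D = np.array(states), np.array(derivs)

def library(S):
    """Candidate terms: monomials up to degree 2 in (x, y, z)."""
    x, y, z = S.T
    ones = np.ones_like(x)
    return np.column_stack([ones, x, y, z, x*x, x*y, x*z, y*y, y*z, z*z])

# Fit dx/dt ≈ Θ(x) ξ by least squares: each column of Xi gives the
# inferred ODE for one state component.
Theta = library(S)
Xi, *_ = np.linalg.lstsq(Theta, D, rcond=None)

# The coefficient of y in dx/dt should be close to sigma = 10.
print(Xi[2, 0])
```

Since the true Lorenz dynamics lie exactly in the polynomial library, the regression recovers the governing equations; with noisy or partial data, the Bayesian treatment described in the abstract becomes essential.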