# A Bayesian Nonlinear Reduced Order Modeling Using Variational AutoEncoders


## Abstract


## 1. Introduction

## 2. Some General Notations and Concepts

- u is an observable function;
- $p\left(u\right)$ is the evidence of the function u, also called its marginal likelihood;
- $\widehat{\alpha}$ is a multi-dimensional latent (unobservable) random variable depending on u. The dimension of $\widehat{\alpha}$ depends on the characteristics and the features of the function u;
- $p\left(\widehat{\alpha}\right)$ is a given prior probability of the unobservable variable before any value of the function u is observed;
- $p\left(u\mid\widehat{\alpha}\right)$ is a Gaussian probability distribution from which we derive a log-likelihood cost function used to compute an output function $\mu $ that best resembles the observable function u;
- $p\left(\widehat{\alpha}\mid u\right)$ is the posterior, i.e., the true probability of the unobservable random variable given the observed evidence.
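These quantities are related by Bayes' theorem: the posterior is the likelihood times the prior, normalized by the evidence,$$p\left(\widehat{\alpha}\mid u\right)=\frac{p\left(u\mid\widehat{\alpha}\right)\,p\left(\widehat{\alpha}\right)}{p\left(u\right)},\qquad p\left(u\right)=\int p\left(u\mid\widehat{\alpha}\right)\,p\left(\widehat{\alpha}\right)\,d\widehat{\alpha}.$$The intractability of this marginalization integral over the latent variable is what motivates the approximate inference methods discussed next.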

## 3. Bayesian Inference

#### 3.1. The Empirical Bayesian Method

#### 3.2. The Variational Bayesian Method

## 4. Nonlinear Data Compression by Variational Autoencoders

- $\widehat{\alpha}\sim q\left(\widehat{\alpha}\mid u\right)$, sampled as $\widehat{\alpha}=h\left(u\right)=m\left(u\right)+\mathrm{exp}\left(\frac{1}{2}\delta \left(u\right)\right)z$, where $m\left(u\right)$ and $\delta \left(u\right)=\mathrm{log}\left({s}^{2}\left(u\right)\right)$ are, respectively, the parametrized mean and logarithm of the variance of the random latent variable $\widehat{\alpha}$, and $z\sim N(0,{I}_{n})$ is a random sample of a standard multivariate normal distribution of dimension n.
- This formulation is usually adopted during the training of the VAE because it restricts the backward differentiation of the deep neural network to deterministic quantities that depend on the parameters $\mathsf{\Phi}$ of the encoder. The coordinates of z are independent and identically distributed following a standard normal distribution. This formulation is known as the reparametrization trick.
- The Kullback–Leibler divergence used is the relative entropy between a multivariate normal distribution and the standard normal distribution. The latent cost function in VAEs is given by [40]:$${D}_{KL}\left(N\left({({m}_{1},\dots ,{m}_{n})}^{T},\mathrm{diag}({s}_{1}^{2},\dots ,{s}_{n}^{2})\right)\,\middle\|\,N(0,{I}_{n})\right)=\frac{1}{2}\sum _{i=1}^{n}\left({s}_{i}^{2}+{m}_{i}^{2}-1-\mathrm{ln}\left({s}_{i}^{2}\right)\right).$$
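The two ingredients above, the reparametrization trick and the closed-form latent KL cost, can be sketched in NumPy. This is a minimal illustration with assumed function names; in the actual VAE, the mean $m(u)$ and log-variance $\delta(u)$ are produced by a deep encoder network, which is not shown here.

```python
import numpy as np

def reparametrize(m, log_var, rng):
    """Sample alpha = m + exp(0.5 * log_var) * z with z ~ N(0, I_n).

    m and log_var are arrays of shape (n,), standing in for the encoder
    outputs m(u) and delta(u). The randomness is isolated in z, so backward
    differentiation only touches the deterministic quantities m and log_var.
    """
    z = rng.standard_normal(m.shape)
    return m + np.exp(0.5 * log_var) * z

def kl_to_standard_normal(m, log_var):
    """Closed-form D_KL(N(m, diag(s^2)) || N(0, I_n)) with s^2 = exp(log_var)."""
    s2 = np.exp(log_var)
    return 0.5 * np.sum(s2 + m**2 - 1.0 - log_var)
```

Note that the KL term vanishes when the posterior equals the prior (m = 0, s² = 1), as expected from the formula above.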

## 5. Formulation of the Bayesian Nonlinear Reduced Order Modeling for the Unsteady and Incompressible Navier–Stokes Equations Based on the Chorin Projection Method

#### 5.1. Recall of the Chorin Projection Technique

#### 5.2. Framework of the Bayesian Nonlinear Reduced Order Modeling

#### Our Proposed Algorithm

- Sample from the latent posterior distribution given the velocity at time instant ${t}_{i-1}$ advanced with the momentum equations without the pressure gradient term, and decode the sampled random variable using g in order to obtain a sample of the intermediate velocity field as follows:$$\begin{array}{c}{\left(\tilde{u}\left({t}_{i}\right),\tilde{\frac{P}{\rho}}\left({t}_{i}\right)\right)}^{*}\hfill \\ =g\left(\widehat{\alpha}{\left({t}_{i}\right)}^{*}\sim q\left[\widehat{\alpha}{\left({t}_{i}\right)}^{*}|\left(\tilde{u}\left({t}_{i-1}\right)+\Delta t(\nu \Delta \tilde{u}\left({t}_{i-1}\right)-(\tilde{u}\left({t}_{i-1}\right)\cdot \nabla )\tilde{u}\left({t}_{i-1}\right)),\tilde{\frac{P}{\rho}}\left({t}_{i-1}\right)\right)\right]\right)\hfill \end{array}$$
- Compute the pressure field at time instant ${t}_{i}$ following the Poisson Equation (6) with Neumann boundary conditions:$$\Delta \frac{{P}^{\prime}}{\rho}\left({t}_{i}\right)=\frac{1}{\Delta t}\nabla \cdot \tilde{u}{\left({t}_{i}\right)}^{*},$$
- Enforce the following consistency requirement, which will be useful afterwards:$$\left(\tilde{u}{\left({t}_{i}\right)}^{*},\tilde{\frac{P}{\rho}}\left({t}_{i}\right)\right)=g\circ h\left(\tilde{u}{\left({t}_{i}\right)}^{*},\frac{{P}^{\prime}}{\rho}\left({t}_{i}\right)\right).$$In other words, we simply compress the pressure field at time instant ${t}_{i}$ in the reduced manifold given by the decoder mapping g.
- Sample from the latent posterior distribution given the intermediate velocity at time instant ${t}_{i}$ corrected with the projection equation and decode the sampled random variable using g in order to obtain a sample of the velocity field at time instant ${t}_{i}$ as follows:$$\left(\tilde{u}\left({t}_{i}\right),\tilde{\frac{P}{\rho}}\left({t}_{i}\right)\right)=g\left(\widehat{\alpha}\left({t}_{i}\right)\sim q\left[\widehat{\alpha}\left({t}_{i}\right)|\left(\tilde{u}{\left({t}_{i}\right)}^{*}-\Delta t\nabla \tilde{\frac{P}{\rho}}\left({t}_{i}\right),\tilde{\frac{P}{\rho}}\left({t}_{i}\right)\right)\right]\right)$$
- Repeat the above four steps at each time instant until the end of the solution time interval.
- Repeat the above procedure for the desired number of samples of the incompressible and unsteady solution of the Navier–Stokes equations.
- Aggregate the ensemble of unsteady solutions obtained from the previous steps and compute statistics of this ensemble, such as the mean and the standard deviation.
- Compute a confidence interval of the ROM mean prediction for each time instant.
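The last two steps, aggregation and confidence-interval computation, can be sketched as follows. This is a minimal NumPy illustration; the function name and the array shapes are assumptions, and the 3-sigma band matches the confidence interval used in the numerical experiments below.

```python
import numpy as np

def aggregate_samples(samples):
    """Aggregate an ensemble of unsteady ROM solution samples.

    samples: array of shape (n_samples, n_times, n_dofs), one row per
    Navier-Stokes solution sample produced by the Bayesian nonlinear ROM.
    Returns the mean prediction m(t, x), the standard deviation, and the
    lower/upper 3-sigma confidence bounds at each time instant and DOF.
    """
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    return mean, std, mean - 3.0 * std, mean + 3.0 * std
```

For instance, with samples taking the values 0 and 2 at a given time and location, the mean prediction is 1 and the 3-sigma interval is [-2, 4].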

**Remark**

**1.**

#### 5.3. Consistency of the Approximation

## 6. Numerical Experiments

#### 6.1. Flow Solver

#### 6.2. Application to a 2D Karman Vortex Street Flow

#### 6.2.1. Training Phase of the VAE

#### 6.2.2. POD for the Linear Reduced Order Modeling

#### 6.2.3. Comparison between the Nonlinear Bayesian ROM and the POD-Galerkin ROM

#### 6.2.4. Results for Mesh n°1

#### 6.2.5. Results for Mesh n°2

#### 6.2.6. Results for Mesh n°3

#### 6.2.7. Discussion

- We showed the sharpness of the confidence interval of the ROM mean prediction with respect to the high-fidelity solution for relatively coarse structured grids of size $80\times 80$ and $68\times 68$. The half-width of this confidence interval is three times the standard deviation of the solution samples.
- We showed that, for a very coarse structured mesh, typically of size $48\times 48$, the confidence interval of the ROM mean prediction did not always contain the high-fidelity solution with respect to time: at some time instants, the high-fidelity solution fell outside the confidence interval. This limitation is related to the adaptive time step in the Chorin time-explicit numerical scheme, which became large in the case of mesh n°3. This time step is around $0.1$ s, whereas it was around $0.05$ s and $0.06$ s, respectively, for mesh n°1 and mesh n°2. The time advance of the velocity field was biased by the large time step in the mesh n°3 case. This deviation with respect to the high-fidelity solution is not taken into account within the random latent space of the VAE, because the neural network learned the high-fidelity solutions (obtained with a very small adaptive time step, around $0.001$ s, from the Yales2 solver) projected on coarse structured grids.
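The coverage behavior discussed above can be checked numerically. A minimal sketch (the function name and array shapes are assumptions) that computes the fraction of time instants at which the high-fidelity signal lies inside the k-sigma band around the ROM mean:

```python
import numpy as np

def coverage_fraction(hf, mean, std, k=3.0):
    """Fraction of time instants at which the high-fidelity solution hf
    lies inside the k-sigma confidence interval around the ROM mean.

    hf, mean, std: arrays of shape (n_times,) at a fixed spatial location.
    A fraction below 1 indicates the interval misses the high-fidelity
    solution at some time instants, as observed for mesh n°3.
    """
    inside = np.abs(hf - mean) <= k * std
    return inside.mean()
```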

#### 6.3. Application to a 3D Flow in an Aeronautical Injection System

#### 6.3.1. Training Phase of the VAE

#### 6.3.2. Results

## 7. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

- Swischuk, R.; Mainini, L.; Peherstorfer, B.; Willcox, K. Projection-based model reduction: Formulations for physics-based machine learning. Comput. Fluids
**2019**, 179, 704–717. [Google Scholar] [CrossRef] - Ghattas, O.; Willcox, K. Learning physics-based models from data: Perspectives from inverse problems and model reduction. Acta Numer.
**2021**, 30, 445–554. [Google Scholar] [CrossRef] - Parish, E.J.; Rizzi, F. On the impact of dimensionally-consistent and physics-based inner products for POD-Galerkin and least-squares model reduction of compressible flows. arXiv
**2022**, arXiv:2203.16492. [Google Scholar] [CrossRef] - Qian, E.; Kramer, B.; Peherstorfer, B.; Willcox, K. Lift & learn: Physics-informed machine learning for large-scale nonlinear dynamical systems. Phys. D Nonlinear Phenom.
**2020**, 406, 132401. [Google Scholar] - Li, X.; Zhang, W. Physics-informed deep learning model in wind turbine response prediction. Renew. Energy
**2022**, 185, 932–944. [Google Scholar] [CrossRef] - Zhang, R.; Liu, Y.; Sun, H. Physics-informed multi-LSTM networks for metamodeling of nonlinear structures. Comput. Methods Appl. Mech. Eng.
**2020**, 369, 113226. [Google Scholar] [CrossRef] - Akkari, N.; Casenave, F.; Daniel, T.; Ryckelynck, D. Data-Targeted Prior Distribution for Variational AutoEncoder. Fluids
**2021**, 6, 343. [Google Scholar] [CrossRef] - Salvador, M.; Dede, L.; Manzoni, A. Non intrusive reduced order modeling of parametrized PDEs by kernel POD and neural networks. Comput. Math. Appl.
**2021**, 104, 1–13. [Google Scholar] [CrossRef] - Kwok, J.T.; Tsang, I.W. The pre-image problem in kernel methods. IEEE Trans. Neural Netw.
**2004**, 15, 1517–1525. [Google Scholar] [CrossRef][Green Version] - Holmes, P.; Lumley, J.; Berkooz, G.; Rowley, C. Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 2nd ed.; Cambridge University Press: Cambridge, UK, 2012; pp. 358–377. [Google Scholar]
- Sirovich, L. Turbulence and the dynamics of coherent structures. Part III: Dynamics and scaling. Q. Appl. Math.
**1987**, 45, 583–590. [Google Scholar] [CrossRef][Green Version] - Iollo, A.; Lanteri, S.; Désidéri, J.A. Stability properties of POD–Galerkin approximations for the compressible Navier–Stokes equations. Theor. Comput. Fluid Dyn.
**2000**, 13, 377–396. [Google Scholar] [CrossRef] - Sirisup, S.; Karniadakis, G.E. A spectral viscosity method for correcting the long-term behavior of POD models. J. Comput. Phys.
**2004**, 194, 92–116. [Google Scholar] [CrossRef] - Akkari, N.; Casenave, F.; Moureau, V. Time Stable Reduced Order Modeling by an Enhanced Reduced Order Basis of the Turbulent and Incompressible 3D Navier–Stokes Equations. Math. Comput. Appl.
**2019**, 24, 45. [Google Scholar] [CrossRef][Green Version] - Reyes, R.; Codina, R. Projection-based reduced order models for flow problems: A variational multiscale approach. Comput. Methods Appl. Mech. Eng.
**2020**, 363, 112844. [Google Scholar] [CrossRef] - Akkari, N.; Hamdouni, A.; Erwan, L.; Jazar, M. On the sensitivity of the POD technique for a parameterized quasi-nonlinear parabolic equation. Adv. Model. Simul. Eng. Sci.
**2014**, 1, 14. [Google Scholar] [CrossRef][Green Version] - Grimberg, S.; Farhat, C.; Youkilis, N. On the stability of projection-based model order reduction for convection-dominated laminar and turbulent flows. J. Comput. Phys.
**2020**, 419, 109681. [Google Scholar] [CrossRef] - Noack, B.R.; Papas, P.; Monkewitz, P.A. The need for a pressure-term representation in empirical Galerkin models of incompressible shear flows. J. Fluid Mech.
**2005**, 523, 339–365. [Google Scholar] [CrossRef][Green Version] - Stabile, G.; Rozza, G. Finite volume POD-Galerkin stabilised reduced order methods for the parametrised incompressible Navier–Stokes equations. Comput. Fluids
**2018**, 173, 273–284. [Google Scholar] [CrossRef][Green Version] - Baiges, J.; Codina, R.; Idelsohn, S. Reduced-order subscales for POD models. Comput. Methods Appl. Mech. Eng.
**2015**, 291, 173–196. [Google Scholar] [CrossRef][Green Version] - Guo, M.; McQuarrie, S.A.; Willcox, K.E. Bayesian operator inference for data-driven reduced-order modeling. arXiv
**2022**, arXiv:2204.10829. [Google Scholar] [CrossRef] - Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 1 September 2022).
- Kashima, K. Nonlinear model reduction by deep autoencoder of noise response data. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 5750–5755. [Google Scholar] [CrossRef]
- Hartman, D.; Mestha, L.K. A deep learning framework for model reduction of dynamical systems. In Proceedings of the 2017 IEEE Conference on Control Technology and Applications (CCTA), Maui, HI, USA, 27–30 August 2017; pp. 1917–1922. [Google Scholar] [CrossRef]
- Lee, K.; Carlberg, K. Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. J. Comput. Phys.
**2020**, 404, 108973. [Google Scholar] [CrossRef][Green Version] - Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv
**2013**, arXiv:1312.6114. [Google Scholar] [CrossRef] - Chorin, A.J.; Marsden, J.E. A Mathematical Introduction to Fluid Mechanics; Springer Science+Business Media: New York, NY, USA, 1990; Volume 3. [Google Scholar]
- Fauque, J.; Ramière, I.; Ryckelynck, D. Hybrid hyper-reduced modeling for contact mechanics problems. Int. J. Numer. Methods Eng.
**2018**, 115, 117–139. [Google Scholar] [CrossRef] - Santo, N.D.; Manzoni, A. Hyper-reduced order models for parametrized unsteady Navier–Stokes equations on domains with variable shape. Adv. Comput. Math.
**2019**, 45, 2463–2501. [Google Scholar] [CrossRef] - Ryckelynck, D.; Lampoh, K.; Quilicy, S. Hyper-reduced predictions for lifetime assessment of elasto-plastic structures. Meccanica
**2016**, 51, 309–317. [Google Scholar] [CrossRef] - Amsallem, D.; Zahr, M.J.; Farhat, C. Nonlinear model order reduction based on local reduced-order bases. Int. J. Numer. Methods Eng.
**2012**, 92, 891–916. [Google Scholar] [CrossRef] - Mignolet, M.P.; Soize, C. Stochastic reduced order models for uncertain geometrically nonlinear dynamical systems. Comput. Methods Appl. Mech. Eng.
**2008**, 197, 3951–3963. [Google Scholar] [CrossRef][Green Version] - Partaourides, H.; Chatzis, S.P. Asymmetric deep generative models. Neurocomputing
**2017**, 241, 90–96. [Google Scholar] [CrossRef] - Rezende, D.J.; Mohamed, S. Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015. [Google Scholar] [CrossRef]
- Caterini, A.L.; Doucet, A.; Sejdinovic, D. Hamiltonian Variational Auto-Encoder. arXiv
**2018**. [Google Scholar] [CrossRef] - Joyce, J. Bayes’ Theorem. In The Stanford Encyclopedia of Philosophy, Fall 2021 ed.; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2021. [Google Scholar]
- Casella, G. An Introduction to Empirical Bayes Data Analysis. Am. Stat.
**1985**, 39, 83–87. [Google Scholar] - Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational Inference: A Review for Statisticians. J. Am. Stat. Assoc.
**2017**, 112, 859–877. [Google Scholar] [CrossRef][Green Version] - Kullback, S. Information Theory and Statistics; Courier Corporation: North Chelmsford, MA, USA, 1997. [Google Scholar]
- Tran, V.H. Copula Variational Bayes inference via information geometry. arXiv
**2018**, arXiv:1803.10998. [Google Scholar] [CrossRef] - Payne, L. Uniqueness and continuous dependence criteria for the Navier–Stokes equations. Rocky Mt. J. Math.
**1972**, 2, 641–660. [Google Scholar] [CrossRef] - Moureau, V.; Domingo, P.; Vervisch, L. Design of a massively parallel CFD code for complex geometries. Comptes Rendus Mécanique
**2011**, 339, 141–148. [Google Scholar] [CrossRef] - Moureau, V.; Domingo, P.; Vervisch, L. From Large-Eddy Simulation to Direct Numerical Simulation of a lean premixed swirl flame: Filtered laminar flame-PDF modeling. Combust. Flame
**2011**, 158, 1340–1357. [Google Scholar] [CrossRef] - Malandain, M.; Maheu, N.; Moureau, V. Optimization of the deflated conjugate gradient algorithm for the solving of elliptic equations on massively parallel machines. J. Comput. Phys.
**2013**, 238, 32–47. [Google Scholar] [CrossRef] - Nicolaides, R.A. Deflation of conjugate gradients with applications to boundary value problems. SIAM J. Numer. Anal.
**1987**, 24, 355–365. [Google Scholar] [CrossRef] - Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv
**2014**, arXiv:1412.6980. [Google Scholar] [CrossRef] - Bergmann, M.; Cordier, L. Contrôle optimal par réduction de modèle POD et méthode à région de confiance du sillage laminaire d’un cylindre circulaire. Mech. Ind.
**2007**, 8, 111–118. [Google Scholar] [CrossRef] - Lourier, J.M.; Stöhr, M.; Noll, B.; Werner, S.; Fiolitakis, A. Scale Adaptive Simulation of a thermoacoustic instability in a partially premixed lean swirl combustor. Combust. Flame
**2017**, 183, 343–357. [Google Scholar] [CrossRef][Green Version] - Franzelli, B.; Riber, E.; Gicquel, L.Y.; Poinsot, T. Large Eddy Simulation of combustion instabilities in a lean partially premixed swirled flame. Combust. Flame
**2012**, 159, 621–637. [Google Scholar] [CrossRef]

**Figure 1.** On the top left, the high-fidelity mesh; on the top right, mesh n°1; on the bottom left, mesh n°2; on the bottom right, mesh n°3.

**Figure 7.** Mesh n°1: The first two components of the log-variance parametrized by the VAE encoder.

**Figure 8.** Mesh n°1: High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at eight different spatial locations. The training data correspond to 15 s of simulation time starting from 0 s. An extrapolation is performed from 15 s until 22.64 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of three samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 9.** Mesh n°1: High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at six different spatial locations. The training data correspond to 15 s of simulation time starting from 0 s. An extrapolation is performed from 15 s until 22.64 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of five samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 10.** Mesh n°1: On the top, the high-fidelity horizontal velocity fields in m/s by the solver Yales2. In the middle, the POD-Galerkin horizontal velocity fields in m/s. On the bottom, the horizontal velocity fields in m/s by the mean prediction of the Bayesian nonlinear ROM.

**Figure 12.** Mesh n°2: The first two components of the log-variance parametrized by the VAE encoder.

**Figure 13.** Mesh n°2: High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at eight different spatial locations. The training data correspond to 15 s of simulation time starting from 0 s. An extrapolation is performed from 15 s until 22.64 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of three samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 14.** Mesh n°2: High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at eight different spatial locations. The training data correspond to 15 s of simulation time starting from 0 s. An extrapolation is performed from 15 s until 22.64 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of five samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 16.** Mesh n°3: The first two components of the log-variance parametrized by the VAE encoder.

**Figure 17.** Mesh n°3: High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at eight different spatial locations. The training data correspond to 15 s of simulation time starting from 0 s. An extrapolation is performed from 15 s until 22.64 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of three samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 18.** Mesh n°3: High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at eight different spatial locations. The training data correspond to 15 s of simulation time starting from 0 s. An extrapolation is performed from 15 s until 22.64 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of five samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 19.** On the left, the high-fidelity mesh; on the right, the coarse Cartesian mesh for the Bayesian ROM, of size $60\times 28\times 28$.

**Figure 21.** A 2D plane of the finite element projection of the instantaneous velocity shown in Figure 20, on a Cartesian mesh of size $60\times 28\times 28$.

**Figure 24.** High-fidelity Yales2 horizontal velocity projected on the Cartesian mesh in brown and the horizontal component of the mean prediction of the Bayesian ROM in dark blue, at three spatial points during 0.005 s starting from 0 s. The light blue zones are those contained within three standard deviations around the mean prediction of the ROM, denoted $m(t,x)$. The mean prediction is obtained using an aggregation of ten samples of the Navier–Stokes solution by the Bayesian nonlinear ROM.

**Figure 25.** On the top, a 2D plane of the high-fidelity horizontal velocity field in m/s by the solver Yales2 at 0.005 s. On the bottom, a 2D plane of the horizontal velocity field in m/s by the mean prediction of the Bayesian nonlinear ROM at 0.005 s.

| Mesh n°1 | Mesh n°2 | Mesh n°3 |
|---|---|---|
| Cartesian of size $80\times 80$ | Cartesian of size $68\times 68$ | Cartesian of size $48\times 48$ |


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Akkari, N.; Casenave, F.; Hachem, E.; Ryckelynck, D.
A Bayesian Nonlinear Reduced Order Modeling Using Variational AutoEncoders. *Fluids* **2022**, *7*, 334.
https://doi.org/10.3390/fluids7100334
