Robust model reduction by \(L^{1}\)-norm minimization and approximation via dictionaries: application to nonlinear hyperbolic problems
Advanced Modeling and Simulation in Engineering Sciences volume 3, Article number: 1 (2016)
Abstract
We propose a novel model reduction approach for the approximation of nonlinear hyperbolic equations in the scalar and the system cases. The approach relies on an offline computation of a dictionary of solutions together with an online \(L^1\)-norm minimization of the residual. It is shown why this is a natural framework for hyperbolic problems and tested on nonlinear problems such as Burgers’ equation and the one-dimensional Euler equations involving shocks and discontinuities. Efficient algorithms are presented for the computation of the \(L^1\)-norm minimizer, both in the cases of linear and nonlinear residuals. Results indicate that the method has the potential to be accurate while involving only very few modes, generating physically acceptable, oscillation-free solutions.
Background
Many engineering applications require the ability to simulate the behavior of a physical system in real-time. This requirement holds in particular when a full parametric exploration of the behavior of the system is sought. In aerodynamics, such an exploration can be done to compute the flow around an aircraft for varying boundary conditions or to design its shape to maximize lift and minimize drag. Uncertainty quantification also requires a large number of simulations with varying parameters, whether to propagate uncertainties by means of a Monte-Carlo method or to calibrate input parameters by a Markov chain technique. A third important application is flow control.
When such a large number of simulations is required, the cost of one simulation is critical to the application at hand. This cost can be lowered by using sophisticated computer science techniques such as parallelization but such techniques are usually not enough to allow full parametric exploration, especially when computational resources are limited.
Alternatively, model reduction techniques can alleviate the cost of such repeated simulations with limited computational resources [1–4]. Model reduction is directly based on the underlying high-dimensional model (HDM) that results from a standard finite element, finite volume or finite difference formulation. In the present paper, partial differential equations (PDEs) of the following type are considered:
L is a differential operator (for example the Laplacian or the divergence of a flux), and B is a boundary operator. In this paper, we are particularly interested in the case where the solution \(U(x,t)\in {\mathbb R}^p\) is a scalar or a vector and L is the divergence of a flux F. Three examples will be considered, in increasing order of complexity:

- Burgers’ equation, for which \(U=u\) is scalar:

  - its unsteady version,

    $$\begin{aligned} \dfrac{\partial u}{\partial t}+\dfrac{\partial }{\partial x}\left( \frac{1}{2}u^2\right) =0, \quad u(x,0)=u_0(x) \end{aligned}$$

    with periodic boundary conditions;

  - its steady version, with weak Dirichlet boundary conditions.

- The one-dimensional compressible Euler equations, for which

  $$\begin{aligned} U=(\rho , \rho u, E), \quad F(U) = \left( \rho u, \rho u^2+p, u(E+p)\right) \end{aligned}$$

  and the perfect gas equation of state holds:

  $$\begin{aligned} p=(\gamma -1)\left( E-\frac{1}{2}\rho u^2\right) . \end{aligned}$$

  Here \(\rho \) denotes the density, u the velocity, p the pressure and E the energy.

- An example of a steady flow through a nozzle.
After discretization in space, the solution is denoted as \(\mathbf {u}(t)\in \mathbb {R}^{Np}\). The PDE is here parametrized by a parameter vector \(\varvec{\mu }\in \mathbb {R}^m\) that allows changes in the operator L, the boundary operator B or the initial conditions. For simplicity and without loss of generality, this parametric dependency will be omitted in the next paragraphs.
Instead of allowing arbitrary values of the solution degrees of freedom \(\mathbf {u}\), model reduction restricts the solution to lie in a subspace of the underlying high-dimensional space. This subspace is spanned by an optimized reduced basis that is determined in a training phase. Thus, a large number of degrees of freedom (say millions) are represented by only a small number of coefficients in the expansion of the full solution in terms of the reduced basis vectors, leading to important computational savings. Two important questions arise at this point: (1) how can an optimal reduced basis be constructed? and (2) how can the evolution of the reduced coefficients be computed in a stable fashion?
A popular method for choosing an “optimal” basis is Proper Orthogonal Decomposition (POD), first introduced as a tool for the analysis of flows by Lumley [5] and then extended and popularized by Sirovich [6]. The idea behind POD is to collect a few snapshots of the solution and then compute the best approximation of these snapshots in terms of a small number of reduced basis vectors. Mathematically speaking, if \(\mathbf {u}_i(t_l)\in \mathbb {R}^p\) denotes the value of the discrete solution \(\mathbf {u}\) at grid point \(\mathbf {x}_i,~i=1,\ldots ,N\) and at time \(t_l,~l=1,\ldots ,N_t\), POD constructs M orthogonal functions \(\varvec{\phi }_\ell \in \big [L^2({\mathbb R}^d)\big ]^p\) such that the following functional is minimized:
where \({\varvec{\phi }}_{\ell i}\in \mathbb {R}^p\) denotes the value of \(\varvec{\phi }\) at \(\mathbf {x}_i\). \(\Vert ~\cdot ~\Vert \) denotes here the Euclidean norm in \({\mathbb R}^p\), and \(\langle ~\cdot ~,~\cdot ~\rangle \) is the \(L^2\) scalar product. A minimum of the functional \(\mathcal {J}\) can be analytically computed by Singular Value Decomposition and the reduced basis vectors \(\varvec{\phi }_{\ell }\) are the left singular vectors of the snapshots matrix
Denoting by \(\{\lambda _\ell \}_{\ell =1}^{N_t}\) the positive eigenvalues of \(\mathbf {S}^T\mathbf {S}\) sorted in decreasing order, the error associated with the minimum of the functional is
In the continuous case, the functions \(\mathbf {\varvec{\phi }}_{\ell }(\mathbf {x})\in \mathbb {R}^p\) are the solutions of the Fredholm eigenvalue problem
where \(\mathbf {R}(\mathbf {x},\mathbf {x}') = \mathbf {u}(\mathbf {x})\mathbf {u}(\mathbf {x}')^T\).
In both the discrete and continuous cases, the basis dimension M depends on how fast the eigenvalues \(\lambda _\ell \) decay. Given a tolerance \(\epsilon \ll 1\), M is selected as the smallest dimension such that the following relative truncation error is smaller than \(\epsilon \),

$$\begin{aligned} \frac{\sum _{\ell =M+1}^{N_t}\lambda _\ell }{\sum _{\ell =1}^{N_t}\lambda _\ell } . \end{aligned}$$
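As an illustration of the discrete construction above, here is a minimal NumPy sketch of the POD basis computation and of the truncation criterion; the function name, the tolerance value and the snapshot layout (one snapshot per column) are illustrative choices, not taken from the paper.

```python
import numpy as np

def pod_basis(S, eps=1e-4):
    """Compute a POD basis from a snapshot matrix S (one snapshot per column).

    The basis vectors are the left singular vectors of S; M is the smallest
    dimension for which the relative truncation error, based on the
    eigenvalues lambda_l = sigma_l**2 of S^T S, falls below eps.
    """
    Phi, sigma, _ = np.linalg.svd(S, full_matrices=False)
    lam = sigma**2                                # eigenvalues, sorted decreasingly
    rel_err = 1.0 - np.cumsum(lam) / np.sum(lam)  # truncation error for M = 1, 2, ...
    below = np.nonzero(rel_err < eps)[0]
    M = int(below[0]) + 1 if below.size else lam.size
    return Phi[:, :M], lam
```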
In general, one expects the eigenvalues \(\lambda _\ell \) to decrease very rapidly to 0. When this is the case, only the most energetic modes need to be retained in the decomposition. Unfortunately, the eigenvalues \(\lambda _\ell \) do not always converge rapidly to zero. This is demonstrated by the following simple counterexample, in which a scalar advection problem defined on \(\Omega = [0,1[\) is considered:
with the boundary condition
and the initial condition
The solution is given by a traveling discontinuity
Considering grids \(x_i=i/N\), \(i=0, \ldots , N\) for a varying number of grid points N and snapshots collected at times \(t_k=k\Delta t\), with \(\Delta t=1/N\), a series of POD bases is constructed numerically. For each grid size N, the eigenvalues \(\lambda _\ell (N)\) are reported in Fig. 1. One can observe that the ratio \(\lambda _\ell (N)/\lambda _1(N)\) behaves like \(1/\ell \). This illustrates that it is not possible to select only a few dominant modes, due to the slow decay of the POD eigenvalues. This example also illustrates why most of the work on model reduction has focused on regular problems and, for fluids, on incompressible flows, see e.g. among many others [7–9]. For compressible (but regular) flows, one of the early works is [10]; one may also mention [11] for compressible turbulent flows and [3, 12, 13] for the case of linearized compressible inviscid flows.
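The slow decay reported in Fig. 1 can be reproduced in a few lines; the traveling discontinuity is assumed here to be a unit step, \(u(x,t)=1\) for \(x<t\) and 0 otherwise, since the exact initial condition is not restated above, and N is an arbitrary illustrative grid size.

```python
import numpy as np

# Assumed form of the traveling discontinuity: u(x, t) = 1 for x < t, 0 otherwise
# (the precise initial condition of the counterexample is not restated here).
N = 200
x = np.arange(N + 1) / N                               # grid x_i = i/N, i = 0, ..., N
t = np.arange(N + 1) / N                               # snapshot times t_k = k*dt, dt = 1/N
S = np.array([(x < tk).astype(float) for tk in t]).T   # one snapshot per column

lam = np.linalg.svd(S, compute_uv=False) ** 2
print(lam[:10] / lam[0])   # only algebraic decay: no small truncation dimension M exists
```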
Concerning compressible fluids, there is another difficulty. In problem (4), one needs a norm. In the case of incompressible flows, a natural norm is related to the kinetic energy. For compressible materials, however, one needs to take into account the density, the velocity and the energy, i.e. the thermodynamics. A simple \(L^2\)-norm cannot be used because, for dimensional reasons, these variables cannot be combined in a quadratic manner. Only a non-dimensionalization of the variables can alleviate this issue [11, 12].

The natural equivalent of the \(L^2\)-norm is, however, related to the entropy, which is not quadratic: if a minimization problem can be set up, its solution is nontrivial. These arguments were raised in [10], and an energy-based norm was developed in [13] for linearized compressible flows.
To circumvent those issues, an approach based on a dictionary of solutions [14] is developed in this work as an alternative to using a truncated reduced basis based on POD. The elements of this dictionary are solutions \(\mathbf {u}(t_l;\varvec{\mu }_j)\) computed for varying values of the time \(t_l\) and of the parameter \(\varvec{\mu }_j\in \mathbb {R}^m\). Selecting appropriate parameter samples \(\varvec{\mu }_j\in \mathcal {D} \subset \mathbb {R}^m\) is a crucial step that can affect the accuracy of the reduced-order model in the parameter domain. Greedy sampling procedures have been developed when error estimates are known [7, 9, 15–17]. In this work, we do not elaborate much on this point; we focus instead on showing that such a method can actually work. The strategy for choosing the “best” \(\varvec{\mu }\) in this context will be the topic of further research.
In addition to choosing an appropriate dictionary \(\mathcal {D}\), selecting an approach for computing a reduced solution based on that dictionary is also crucial. For self-adjoint systems, Galerkin projection is a natural approach, but there is no such motivation for nonlinear compressible flows. Instead, strategies based on the minimization of the residual arising from the reduced approximation have been successfully developed for compressible flows in [1, 2, 11]. These approaches rely on a minimization of the residual in the \(L^2\) sense. In the present work, this minimization problem is extended to the more general minimization of an \(L^q\)-norm, with emphasis on \(q=1\). For nonlinear systems, an additional step, hyper-reduction, is required to ensure an efficient solution of the reduced system [11, 18]. Hyper-reduction is not considered in this work but will be the subject of follow-up work.
Methods
This section is organized as follows. Motivations for using the \(L^1\) norm in the case of hyperbolic systems are presented first; we show that the choice \(q=1\) is very closely linked to the concept of weak solutions of hyperbolic problems. Then, the proposed model reduction approach is developed in both the steady and unsteady cases. Finally, the proposed procedure is applied to the model reduction of several steady and unsteady systems, and conclusions are given at the end.
Motivation for the \(L^1\)-norm
In solving minimization problems, it is quite usual to minimize some residual with respect to the \(L^q\) norm for a suitable q. The choice \(q=2\) is very common because it amounts to minimizing in a least-squares sense, and many efficient algorithms are available. In the case of the hyperbolic problems we are concerned with here, this is still a convenient choice (after proper non-dimensionalization, as mentioned above), but it might not be the most natural one. For example, in [19, 20] it is shown, at least experimentally, that the numerical solution has an excellent non-oscillatory behavior, without doing anything explicitly other than minimizing the \(L^1\) norm of the PDE residual. In fact, this observation was our original motivation for choosing the \(L^1\) norm, since we are interested in preserving the non-oscillatory nature of the solution. In this section, we further justify the choice of the \(L^1\) norm applied to the residual, and show that it is closely related to the weak formulation of the problem. The discussion here is formal.
Let us consider the problem
defined on \(\Omega \subset {\mathbb R}^d\), \(t> 0\). The steady problem can be treated in exactly the same manner. We assume that the solution U belongs to \({\mathbb R}^p\), so that \(F=(F_1, \ldots , F_p)\). The weak form of the problem is: for any \(\varphi \in \left[ C^1(\Omega )\right] ^p\) with compact support, we have:
Integrating by parts yields
If we restrict ourselves to the set of test functions \(\left\{ \varphi \in \left[ C^1(\Omega )\right] ^p, ||\varphi ||_{\infty }\le 1\right\} \), U is a solution if:
Let us now recall the definition of the total variation
and the definition of the bounded variation of a function \(g\in L^1({\mathbb R}^n)\):
We see that if in addition \(g\in C^1({\mathbb R}^n)\), \(TV(g)=\int _{{\mathbb R}^n}||\nabla g||dx=||\nabla g||_{L^1({\mathbb R}^n)}.\)
Thanks to this definition, we see that if we define the space-time flux \(\mathcal {F}=(U,F)\), U is a weak solution if and only if the total variation of \(\mathcal {F}\) vanishes, \( TV\big ( \mathcal {F}\big ) =0.\)
Before going further, let us mention the following classical result that will be useful. Consider a strictly increasing sequence \(\{x_i\}_{i\in {\mathbb Z}}\) in \({\mathbb R}\) and define \(x_{i+1/2}=\frac{x_i+x_{i+1}}{2}\). We assume that \({\mathbb R}=\cup _{i\in {\mathbb Z}}[x_{i-1/2},x_{i+1/2}[\) and consider the piecewise constant function g defined by \(g(x)=g_i\) for \(x\in [x_{i-1/2},x_{i+1/2}[\), for any \(i\in {\mathbb Z}\). Then

$$\begin{aligned} TV(g)=\sum _{i\in {\mathbb Z}}\left| g_{i+1}-g_i\right| , \end{aligned}$$

i.e. the total variation of a piecewise constant function is the sum of the absolute values of its jumps.
Now, assume that, instead of the exact solution, we have some approximation procedure, say \(\mathcal {L}(\mathbf {u}^n, \mathbf {u}^{n+1})\), that enables us to compute \(\mathbf {u}^{n+1}\approx U(~.~, t_{n+1})\) from \(\mathbf {u}^{n}\approx U(~.~, t_n)\).
For instance, assume that we have a finite volume method and \(d=1\): for any grid point \(i\in \{1,\ldots ,N\}\),
A way to evaluate \(\mathbf {u}^{n+1}\) is to minimize the total variation, i.e.
Clearly, if \(\mathcal {I}\) is equal to the set of grid points, the solution is given by
and nothing new is gained.
When \(\mathcal {I}\) is not equal to the set of degrees of freedom, something new happens. It is precisely this idea, or ideas related to it, that we intend to exploit.
In the remainder of this paper, this idea is exploited in the case of model reduction, for which \(\mathcal {I}\) is not equal to the set of grid points, and the TV semi-norm is slightly modified in order to guarantee that a unique solution to the minimization problem exists and that the minimization problem is as easy as possible to solve.
Formulation
High-dimensional model
Without loss of generality, the case of the classical finite volume method is considered to define the High Dimensional Model (HDM). A computational domain \(\Omega \subset \mathbb {R}^d\) is considered, and in most of this paper, \(\Omega \subset {\mathbb R}\), that is \(d=1\). Starting from a subdivision \(\cdots < x_j <x_{j+1}< \cdots \), we construct control volumes \(K_{j}=[x_{j-1/2},x_{j+1/2}[\), \(j\in {\mathbb Z}\) where
A finite volume semi-discrete formulation of (1) writes
where \(\mathbf {f}_{j+1/2}\) is a consistent numerical flux. In all applications, we consider Roe’s flux and a first-order scheme. We assume either compactly supported initial conditions or periodic initial conditions.
In (8a), \(\mathbf {u}_j\) stands for an approximation of the average of the solution in the cell \(K_j\),
The time stepping is done in a standard way, for instance by explicit Euler time stepping.
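As an illustration, here is a minimal sketch of such a high-dimensional model for the periodic Burgers’ case, with a Roe-type flux, a first-order scheme and explicit Euler time stepping; the grid size, CFL number, initial condition and function names are illustrative, not taken from the authors’ implementation.

```python
import numpy as np

def roe_flux_burgers(ul, ur):
    """First-order Roe flux for Burgers' equation, f(u) = u**2 / 2."""
    a = 0.5 * (ul + ur)                       # Roe-averaged wave speed
    return 0.25 * (ul**2 + ur**2) - 0.5 * np.abs(a) * (ur - ul)

def hdm_step(u, dt, dx):
    """One explicit Euler step of the first-order finite volume scheme,
    with periodic boundary conditions."""
    ul, ur = u, np.roll(u, -1)                # left/right states at interfaces j+1/2
    f = roe_flux_burgers(ul, ur)              # f[j] ~ f_{j+1/2}
    return u - dt / dx * (f - np.roll(f, 1))  # u_j - dt/dx (f_{j+1/2} - f_{j-1/2})

# Example run with illustrative values (N cells on [0, 2*pi], CFL-limited step)
N, L = 400, 2 * np.pi
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = 1.0 + 0.5 * np.sin(x)                     # placeholder initial condition
dt = 0.4 * dx / np.max(np.abs(u))
for _ in range(200):
    u = hdm_step(u, dt, dx)
```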
Model reduction by residual minimization over a dictionary
Steady problems
Two approaches are available to solve for a steady state associated with problem (1). The first one is to use a homotopy approach with pseudo-time stepping, resulting in the solution of an unsteady problem whose limit solution is the desired steady state. The procedure described in Sect. “Unsteady problems” can, in principle, be applied to this case. The second approach is a direct solution of the steady-state problem. The discretized steady-state problem writes

where \(\mathbf {r}(\cdot ,\cdot )\) is usually a nonlinear function of its arguments, referred to as the residual. This set of nonlinear equations is typically solved by the Newton-Raphson method. This second approach is followed in this work for steady problems.
The parameter vector \(\varvec{\mu }\in \mathcal {P}\subset \mathbb {R}^m\) can, for instance, parametrize the boundary conditions associated with the steady-state problem. The parametric domain of interest \(\mathcal {P}\) is assumed here to be a bounded set of \(\mathbb {R}^m\).
The solution manifold \(\mathcal {M} = \left\{ \mathbf {u}(\varvec{\mu })~\text {s.t}~ \varvec{\mu }\in \mathcal {P}\subset \mathbb {R}^m\right\} \) is assumed to be of small dimension. This manifold \(\mathcal {M}\) belongs to \(L^\infty ({\mathbb R}^d)\cap BV({\mathbb R}^d)\), and thus can be locally described by some mapping \(\theta : \mathcal {P}\mapsto L^\infty ({\mathbb R}^d)\cap BV({\mathbb R}^d)\). To approximate this mapping, we consider a family of r parameters in \(\mathcal {P}\), \(\{\varvec{\mu }_\ell \}_{\ell =1}^r\), and compute the associated solutions \(\left\{ \mathbf {u}(\varvec{\mu }_\ell )\right\} _{\ell =1}^r\) of (8).
The steady-state \(\mathbf {u}(\varvec{\mu })\) is then approximated as a linear combination of the precomputed dictionary elements \(\mathcal {D}\) as
For a new value of the parameters \(\varvec{\mu }\in \mathcal {P}\), the reduced coordinates \(\left\{ \alpha _\ell (\varvec{\mu })\right\} _{\ell =1}^r\) are then computed as the solution of the minimization problem
In this paper we consider for J the \(L^1\)-norm \(J(\mathbf {r},\varvec{\beta })=\Vert \mathbf {r}\Vert _1\) or its regularized variant, \(J(\mathbf {r},\varvec{\beta })=\Vert \mathbf {r}\Vert _1+\eta \Vert \varvec{\beta }\Vert _1\) with \(\eta >0\).
In order to minimize J when \(\mathbf {r}\) is a linear function of \(\varvec{\beta }\), the Linear Programming (LP) approach is considered, involving the solution of an optimization problem with \(2m+r\) variables and 3m constraints.
When \(\mathbf {r}\) is a nonlinear function of \(\varvec{\beta }\), a Gauss-Newton-like procedure can be used in combination with the LP approach. Uniqueness of the solution can be guaranteed by taking the regularization parameter \(\eta >0\); for this reason, the unregularized case is not considered further.
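To make the LP reformulation concrete, here is a minimal sketch of the \(L^1\) minimization of a linear residual \(\mathbf {r}(\varvec{\beta })=\mathbf {A}\varvec{\beta }-\mathbf {b}\), including the regularization term, using the standard slack-variable reformulation and SciPy’s generic LP solver. The function name is ours, the solver choice is illustrative, and the variable/constraint bookkeeping differs in detail from the count given above; this is a sketch of the idea, not the authors’ implementation.

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b, eta=0.0):
    """Solve min_beta ||A beta - b||_1 + eta * ||beta||_1 as a linear program.

    Slack variables: t >= |A beta - b| and s >= |beta| (componentwise);
    minimize sum(t) + eta * sum(s).
    """
    N, r = A.shape
    # Unknowns: [beta (r), t (N), s (r)]
    c = np.concatenate([np.zeros(r), np.ones(N), eta * np.ones(r)])
    A_ub = np.block([
        [ A,         -np.eye(N),        np.zeros((N, r))],   #  (A beta - b) <= t
        [-A,         -np.eye(N),        np.zeros((N, r))],   # -(A beta - b) <= t
        [ np.eye(r),  np.zeros((r, N)), -np.eye(r)      ],   #   beta <= s
        [-np.eye(r),  np.zeros((r, N)), -np.eye(r)      ],   #  -beta <= s
    ])
    b_ub = np.concatenate([b, -b, np.zeros(2 * r)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * r + [(0, None)] * (N + r))
    return res.x[:r]
```

In the nonlinear case, the same routine can be called on the linearized residual inside a Gauss-Newton-type loop.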
Remark
-
Decreasing the dimensionality of the solution space from N to r is not enough to gain computational speedup when the system to be solved is nonlinear. An additional level of approximation, hyper-reduction, is necessary.
-
A careful selection of the parameter samples \(\left\{ \varvec{\mu }_\ell \right\} _{\ell =1}^r\) is necessary in order to generate a reduced-order model that is accurate in the entire parameter domain \(\mathcal {P}\). Greedy sampling techniques, associated with a posteriori error estimates, have been successfully used to construct reduced models that are robust and accurate in a parameter domain \(\mathcal {P}\). These techniques are not considered in this paper but will also be the focus of future work.
Unsteady problems
For simplicity, in the remainder of this section, we assume that only the initial condition \(\mathbf {u}^0(\varvec{\mu })\) depends on a parameter vector \(\varvec{\mu }\in \mathcal {P}\subset \mathbb {R}^m\). Again, the family of solutions \(\mathbf {u}(\varvec{\mu })\) of the Cauchy problem (8) is then conjectured to belong to a low dimensional manifold \(\mathcal {M}\) when the initial condition is parametrized in (8b).
To approximate this mapping, we consider a family of r parameters in \(\mathcal {P}\), \(\{\varvec{\mu }_\ell \}_{\ell =1}^r\), and compute the associated solutions of (8) for respective initial conditions \(\mathbf {u}^0(\varvec{\mu }_\ell )\), \(\ell =1, \ldots , r\).
Once these solutions are computed, we propose to approximate, for any parameter \(\varvec{\mu }\in \mathcal {P}\) the solution \(\{\mathbf {u}^n(\varvec{\mu })\}_{n=0}^{N_t}\) associated with an initial condition \(\mathbf {u}^0(\varvec{\mu })\) by approximating it as
with the following procedure:
-
1.
Initialization: determine the reduced coefficients \(\{\alpha ^0_\ell (\varvec{\mu })\}_{\ell =1}^r\) as:
$$\begin{aligned} \varvec{\alpha }^0(\varvec{\mu }):=\left( \alpha _1^0(\varvec{\mu }), \ldots , \alpha _r^0(\varvec{\mu })\right) = \mathop {{{\mathrm{\arg \!\min }}}}\limits _{\varvec{\beta }\,=\, \left( \beta _1,\ldots ,\beta _r\right) }J\left( \sum _{\ell =1}^r \beta _\ell \mathbf {u}^0({\varvec{\mu }_\ell }),\varvec{\beta }\right) , \end{aligned}$$

for a given choice of the functional \(J(\mathbf {u},\varvec{\beta })\).
-
2.
Assume that \(\varvec{\alpha }^n(\varvec{\mu }) = (\alpha _1^n(\varvec{\mu }), \ldots , \alpha _r^n(\varvec{\mu }))\) is known; determine \(\varvec{\alpha }^{n+1} = (\alpha _1^{n+1}, \ldots , \alpha _r^{n+1})\) such that:
$$\begin{aligned}&{\varvec{\alpha }}^{n+1}(\varvec{\mu })= \mathop {{{\mathrm{\arg \!\min }}}}\limits _{\varvec{\beta }\,=\,(\beta _1,\ldots ,\beta _r)}J\Bigg ( \sum _{\ell =1}^r \beta _\ell \mathbf {u}^{n+1}({\varvec{\mu }_\ell })-\mathbf {w}^n(\varvec{\mu })-\dfrac{\Delta t}{\Delta x} \bigg (\mathbf {f}_{1/2}(\mathbf {w}^n)-\mathbf {f}_{-1/2}(\mathbf {w}^n)\bigg ),\varvec{\beta }\Bigg ) \end{aligned}$$

where

$$\begin{aligned} \mathbf {w}^n(\varvec{\mu })=\sum _{\ell =1}^r \alpha ^n_\ell (\varvec{\mu }) \mathbf {u}^{n}({\varvec{\mu }_\ell }). \end{aligned}$$
We see that the second step can be written as: find \(\varvec{\alpha }^{n+1}(\varvec{\mu })\) that minimizes
where the matrix \(\mathbf {A}^{n+1}\) can be written by blocks as
and \(\mathbf {b}^n\) depends on known data.
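Putting the initialization and the time-marching step together, the online stage for an explicit scheme can be sketched as follows. This is a sketch under several assumptions: periodic boundary conditions as in the Burgers’ example, a fixed time step, a sign convention that follows the standard explicit finite volume update, and the availability of an interface-flux routine and of an \(L^1\) minimizer such as the ones sketched above; all names are illustrative.

```python
import numpy as np

def reduced_step(alpha_n, U_dict_n, U_dict_np1, flux, dt, dx, l1_minimize):
    """One online step of the dictionary-based reduction of an explicit scheme.

    U_dict_n, U_dict_np1 : arrays of shape (N, r) whose columns are the
        dictionary snapshots u^n(mu_l) and u^{n+1}(mu_l).
    flux(u) : interface fluxes f_{j+1/2}, e.g.
        lambda u: roe_flux_burgers(u, np.roll(u, -1)) with periodic boundaries.
    l1_minimize(A, b) : returns argmin_beta ||A beta - b||_1.
    """
    w = U_dict_n @ alpha_n                    # w^n(mu) = sum_l alpha_l^n u^n(mu_l)
    f = flux(w)
    b = w - dt / dx * (f - np.roll(f, 1))     # explicit update of w^n (known data b^n)
    return l1_minimize(U_dict_np1, b)         # minimize || A^{n+1} beta - b^n ||_1

def reduced_solve(alpha0, U_dict, flux, dt, dx, l1_minimize):
    """March the reduced coefficients over all stored time levels.
    U_dict has shape (N_t + 1, N, r); alpha0 comes from the initialization step."""
    alphas = [alpha0]
    for n in range(U_dict.shape[0] - 1):
        alphas.append(reduced_step(alphas[-1], U_dict[n], U_dict[n + 1],
                                   flux, dt, dx, l1_minimize))
    return np.array(alphas)
```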
A few immediate remarks can be made.
Remark
-
In the case of a linear flux, Problem (1) is linear. If \(\mathcal {S}_t\) is the mapping between the initial condition \(u_0\) and the solution at time t, we have \(\mathcal {S}_t(u+v)=\mathcal {S}_t(u)+\mathcal {S}_t(v)\). This means the exact solution of the Cauchy problem with \(U_0=\sum _\ell \alpha _\ell ^0 U_0(\varvec{\mu }_\ell )\) is \(\mathcal {S}_t(U_0)=\sum _\ell \alpha _\ell ^0 \mathcal {S}_t(U({\varvec{\mu }_\ell ,0}))\). In the case of a linear scheme, minimizing the functional J should result in \(\varvec{\alpha }^n=\varvec{\alpha }^0\) for any \(n\ge 0\).
-
In the case of an explicit background scheme, the choice of the numerical flux, the way high order is reached, and the choice of time stepping have no influence on the overall procedure: any sub-time step would be treated similarly. In this paper, we have chosen a first-order method with explicit Euler time stepping for unsteady problems.
-
In the case of an implicit scheme, a Newton-like procedure can be applied to minimize the functional as in [11]. At each time step, the procedure is then identical to the one described above for the steady case.
Results and discussion
Model reduction of unsteady problems
Unsteady Burgers’ equation
We consider here the system (7) in \(\Omega =[0,2\pi ]\) with periodic boundary conditions and the initial conditions parameterized by
where \(\mu \in [0,1]\). In this setting, the solution develops a shock that moves with the velocity \(\sigma _\mu =0.55 \mu \). A dictionary \(\mathcal {D}\) is constructed by sampling the parameters \(\{0,0.2,0.4,0.6,1.0\}\) (\(r=5\)), and the solution is sought for the predictive case \(\mu ^\star =0.5\). A shock appears at \(t=1\). We display the solutions obtained by the \(L^1\)-norm LP minimization procedure for \(t=\tfrac{\pi }{4}<1\), \(t=\tfrac{\pi }{2}\) and \(t=\pi \) in Figs. 2 and 3.
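The offline stage of this experiment can be sketched as follows, reusing the hdm_step routine from the finite volume sketch above; the initial-condition function u0(x, mu) is a placeholder, since its exact parametrized form is not restated here, and a fixed time step is assumed.

```python
import numpy as np

def build_dictionary(u0, mus, x, dt, dx, n_steps, hdm_step):
    """Offline stage: run the HDM for each sampled parameter and store every
    time level. Returns U_dict of shape (n_steps + 1, N, r) with
    U_dict[n, :, l] = u^n(mu_l). u0(x, mu) is the parametrized initial condition."""
    U_dict = np.empty((n_steps + 1, x.size, len(mus)))
    for l, mu in enumerate(mus):
        u = u0(x, mu)
        U_dict[0, :, l] = u
        for n in range(n_steps):
            u = hdm_step(u, dt, dx)
            U_dict[n + 1, :, l] = u
    return U_dict
```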
After the shock, the \(L^1\)-norm-type solutions are all close to each other and the shock is rather well reproduced with, however, an artifact that develops for longer times, as seen at \(t=\pi \). Nevertheless, the \(L^1\)-norm-type solutions are within the bounds of the “exact” solution, and no large oscillation develops.
In a second set of numerical experiments, we consider the influence of the sampling parameter set included in the dictionary \(\mathcal {D}\). We consider two dictionaries, \(\mathcal {D}_1=\{0.4,0.45, 0.55, 0.6\}\) and \(\mathcal {D}_0=\{0,0.2,0.4,0.45, 0.55, 0.6,1.0\}\), for the same target value \(\mu ^\star =0.5\). These choices amount to selecting samples close to the target value 0.5 while varying the elements of the dictionary that are not close to 0.5 (see Fig. 4).

We see that refining the dictionary has a positive influence, as the target solution is much closer to the dictionary elements. This is confirmed by additional experiments (not reported here) where the samples of \(\mu \) used to generate the dictionary were more numerous and closer to 0.5. The \(L^1\)-norm-type solutions are, however, unaffected by the presence of these “outliers” in the dictionary.
Euler equations
The one-dimensional Euler equations are considered on \(\Omega = [0,1]\)
for which \(U=(\rho ,\rho u,E)^T\) and the pressure is given by
with \(\gamma =1.4\).
This problem is parametrized by the initial conditions \(U_0(x;\mu )\). To define the parametrized initial conditions of the problem, the Lax and Sod cases are first introduced as follows.
The state \(U_{\text {Sod}}(x)\) is defined by the primitive physical variables:

and \(U_{\text {Lax}}(x)\) is defined by
The Sod condition presents a fan, followed by a contact and a shock. For the density and the pressure, the solution behaves monotonically, and the contact is moderate. The Lax solution has a very different behavior and the contact is much stronger. This is depicted in Fig. 5 where the two solutions are shown for \(t=0.16\).
The initial conditions are parametrized for \(\mu \in [0,1]\) as

and the conservative initial variables \(U_0(x;\mu )\) are constructed from \(V_0(x;\mu )\).
In the subsequent numerical experiments, two strategies are exploited to construct, from the dictionary \(\mathcal {D}\), the approximation \(\mathbf {u}^n(\mu )\) of the solution at each time step n:
-
Either we reconstruct together the discretized density vector \(\varvec{\rho }\), momentum \(\mathbf {m}=\varvec{\rho }\mathbf {u}\) and energy \(\mathbf {E}\), i.e. the state variable at time \(t_n\), using only one coefficient vector \(\varvec{\alpha }^n=(\alpha ^n_1,\ldots ,\alpha ^n_r)\):

$$\begin{aligned} \mathbf {u}^n=\begin{pmatrix} \varvec{\rho }^{n}\\ \mathbf {m}^n\\ \mathbf {E}^n\end{pmatrix} \approx \sum _{j=1}^r \alpha _j^n \mathbf {u}^n(\mu _j). \end{aligned}$$ (13)

Here the \(\{\alpha _j^n\}_{j=1}^r\) are obtained by minimizing J on the density components of the state, because the density enables one to detect fans, contact discontinuities and shocks, unlike the pressure and the velocity, which are constant across contact waves. In doing so, we expect to control possible numerical oscillations better than with the other physical variables. Similar arguments could be applied to the other conserved variables as well.
-
Alternatively, we reconstruct each conserved variable separately,

$$\begin{aligned} \varvec{\rho }^n\approx \sum _{j=1}^r \alpha _{j}^n \varvec{\rho }^n(\mu _j), \quad \mathbf {m}^n\approx \sum _{j=1}^r \alpha _{j}^n \mathbf {m}^n(\mu _j), \quad \mathbf {E}^n\approx \sum _{j=1}^r \alpha _{j}^n \mathbf {E}^n(\mu _j), \end{aligned}$$ (14)

where the minimization procedures are carried out independently on each conserved variable (a sketch contrasting both strategies is given right after this list).
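To make the two reconstruction strategies concrete, the following sketch contrasts them; the argument names and shapes, and the helper l1_minimize (the LP routine sketched earlier), are illustrative assumptions, not the authors’ implementation.

```python
def euler_reconstruct(dicts, rhs, l1_minimize, single_vector=False):
    """Reconstruct (rho, m, E) from per-variable dictionaries.

    dicts, rhs : tuples (rho_dict, m_dict, E_dict) of numpy arrays of shape
        (N, r) and the corresponding right-hand sides of shape (N,).
    l1_minimize(A, b) : L1 residual minimizer, e.g. the LP routine above.
    single_vector : if True, strategy (13) (one coefficient vector fitted on
        the density only); otherwise strategy (14) (independent fits).
    """
    if single_vector:
        alpha = l1_minimize(dicts[0], rhs[0])          # fit on the density only
        return tuple(D @ alpha for D in dicts)
    return tuple(D @ l1_minimize(D, b) for D, b in zip(dicts, rhs))
```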
In order to test these approaches, the PDE is discretized by finite volumes using a discretization resulting in \(Np=3000\) dofs. The parameter samples \(\mathcal {D}=\{0.0, 0.2,0.4,0.5,0.8,1\}\) are considered together with the target \(\mu ^\star =0.6\). The results using the first strategy, see Eq. (13), are displayed in Fig. 6, and those using the second strategy, see Eq. (14), are reported in Fig. 7.
From both figures, we can see that the overall structure of the solutions is correct. Nevertheless, some differences can be highlighted. From Fig. 6, we can observe that the density predictions, apart from an undershoot at the shock, are well reproduced. However, we cannot recover the correct values of the initial velocity (see the left boundary), because there is no reason to believe that the coefficients \(\varvec{\alpha }\), evaluated from the density only, will also be correct for the momentum. A careful observation of the pressure plot reveals the same behavior, which is not satisfactory. For the same reason, if any other single variable is used for a global approximation of all the conservative variables, there is no reason why better qualitative results should be obtained.

This problem does not occur with the second reconstruction strategy (14): the correct initial values are recovered. Some slight discrepancies remain in the velocity, between the contact and the shock.
In order to obtain these results, we had to face the following issue. Take the momentum, for example. For at least half of the mesh points, its value is 0, and for the other half, its value is a constant. Hence, the matrix \(\mathbf {A}\) used in the minimization procedure and built on the momentum dictionary has rank 2 only. The same is true for the other variables, while we are looking for r coefficients. Several approaches can be followed to address this issue. The first one relies on a Gram-Schmidt orthogonalization of the solutions prior to their use as a basis for the solution. The second approach, followed here, consists in perturbing the matrices involved in the procedure infinitesimally and randomly, so that \(A_{ij}\) is replaced by \(A_{ij}+\varepsilon _{ij}\). The distribution of \(\varepsilon _{ij}\) is uniform. This has the effect of giving the maximum possible rank to the perturbed matrix. Since \(\varepsilon _{ij}\) should depend on the variable, we have chosen
where \(L_{\text {ref}}\) is the difference between the maximum and the minimum, over the dictionary, of the considered variable. Choosing instead the same \(\varepsilon _{ij}\) for all variables has the effect of increasing the amplitude of the oscillations after the shock.
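A minimal sketch of this fix, assuming \(\varepsilon _{ij}=\delta \,L_{\text {ref}}\,\eta _{ij}\) with \(\eta _{ij}\) uniformly distributed and \(\delta \) a tiny illustrative constant (the exact amplitude formula used by the authors is not reproduced above):

```python
import numpy as np

def perturb_dictionary_matrix(A, delta=1e-10, rng=None):
    """Break the rank deficiency of a dictionary matrix by an infinitesimal,
    uniformly distributed random perturbation scaled per variable.

    L_ref is the difference between the maximum and the minimum of the
    considered variable over the dictionary; delta is an illustrative
    amplitude, not the value used by the authors."""
    rng = np.random.default_rng() if rng is None else rng
    L_ref = A.max() - A.min()
    eps = delta * L_ref * rng.uniform(-0.5, 0.5, size=A.shape)
    return A + eps
```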
All this being said, the solution using three distinct sets of coefficients obtained independently is of significantly better quality than the one using only one expansion.
Model reduction of steady problems
Nozzle flow
To illustrate the ability of the reduced model, we consider the nozzle flow numerical experiment. The PDE is
where
and A is the area of the nozzle. Depending on the boundary conditions, we can have a fully smooth flow or a flow with a steady discontinuity. We illustrate the method on a case where a discontinuity exists (see Fig. 8). All the other variables behave in the same manner. The experiment has been conducted on the density with the target parameter \(\mu =1.5\).
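For reference, a common form of the quasi-one-dimensional nozzle flow equations is recalled below; since the system is not reproduced above, this particular form is an assumption, not necessarily the exact one used by the authors:

$$\begin{aligned} \dfrac{\partial }{\partial t}\begin{pmatrix}\rho A\\ \rho u A\\ E A\end{pmatrix} +\dfrac{\partial }{\partial x}\begin{pmatrix}\rho u A\\ (\rho u^2+p) A\\ u(E+p) A\end{pmatrix} =\begin{pmatrix}0\\ p\,\dfrac{\partial A}{\partial x}\\ 0\end{pmatrix}, \end{aligned}$$

with A(x) the prescribed nozzle area and the same perfect gas equation of state as before.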
Conclusions
A novel model reduction method that relies on a dictionary approach is developed and tested on several steady and unsteady hyperbolic problems. All of the problems tested are parametrized and their solutions exhibit discontinuities in parts of the spatial domain, leading to solutions with very distinct behaviors, such as different wave speeds and shock locations, which makes them challenging to reduce using classical projection-based model reduction techniques. To address this challenge, the proposed approach is based on a dictionary of solutions coupled with a functional minimization. The analysis and numerical experiments conducted in this work show that the proposed approach is robust (at least for one-dimensional problems) and performs best when the functional is of \(L^1\)-norm type. As an extension of this work, other related minimization techniques which are less CPU intensive will be considered.

Current work includes a multidimensional fluid case, an error estimate, the storage of the dictionary and an application of hyper-reduction to the dictionary framework.
Abbreviations
- HDM: High dimensional model
- PDE: Partial differential equations
- POD: Proper orthogonal decomposition
- LP: Linear programming
- TV: Total variation
- BV: Bounded variation
References
LeGresley PA, Alonso JJ. Airfoil design optimization using reduced order models based on proper orthogonal decomposition. In: AIAA Paper 2000–2545 Fluids 2000 Conference and Exhibit, Denver, CO. 2000. pp. 1–14.
Bui-Thanh T, Willcox K, Ghattas O. Parametric reduced-order models for probabilistic analysis of unsteady aerodynamic applications. AIAA J. 2008;46(10):2520–9.
Amsallem D, Cortial J, Farhat C. Toward real-time computational-fluid-dynamics-based aeroelastic computations using a database of reduced-order information. AIAA J. 2010;48(9):2029–37.
Amsallem D, Zahr MJ, Choi Y, Farhat C. Design optimization using hyper-reduced-order models. Struct Multidiscip Optim. 2014:1–22.
Lumley JL. The structure of inhomogeneous turbulent flows. In: Iaglom AM, Tatarski VI, editors. Atmospheric turbulence and Radio wave propagation. Moscow: Nauka; 1967. p. 221–7.
Sirovich L. Turbulence and the dynamics of coherent structures. Part I: coherent structures. Q Appl Math. 1987;45(3):561–71.
Maday Y, Rønquist EM. A reduced-basis element method. J Sci Comput. 2002;17(1–4):447–59.
Willcox K. Unsteady flow sensing and estimation via the gappy proper orthogonal decomposition. Comput Fluids. 2006;35(2):208–26.
Veroy K, Patera AT. Certified real-time solution of the parametrized steady incompressible Navier-Stokes equations: rigorous reduced-basis a posteriori error bounds. Int J Numer Methods Fluids. 2005;47:773–88.
Rowley CW, Colonius T, Murray RM. Model reduction for compressible flow using POD and Galerkin projection. Phys D Nonlinear Phenom. 2004;189(1–2):115–29.
Carlberg K, Farhat C, Cortial J, Amsallem D. The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J Comput Phys. 2013;242:623–47.
Amsallem D, Farhat C. Interpolation method for adapting reduced-order models and application to aeroelasticity. AIAA J. 2008;46(7):1803–13.
Barone MF, Kalashnikova I, Segalman DJ, Thornquist H. Stable Galerkin reduced order models for linearized compressible flows. J Comput Phys. 2009;228:1932–46.
Maday Y, Stamm B. Locally adaptive greedy approximations for anisotropic parameter reduced basis spaces. SIAM J Sci Comput. 2013;35(6):2417–41.
Prud’homme C, Rovas DV, Veroy K, Machiels L, Maday Y, Patera AT, Turinici G. Reliable real-time solution of parametrized partial differential equations: reduced-basis output bound methods. J Fluids Eng. 2002;124:70–80.
Barrault M, Maday Y, Nguyen NC, Patera AT. An “empirical interpolation” method: Application to efficient reduced-basis discretization of partial differential equations. C R Math Acad Sci Paris. 2004;339(9):667–72.
Paul-Dubois-Taine A, Amsallem D. An adaptive and efficient greedy procedure for the optimal training of parametric reduced-order models. Int J Numer Methods Eng. 2014:1–31.
Ryckelynck D. A priori hyperreduction: an adaptive approach. J Comput Phys. 2005;202:346–66.
Guermond JL, Popov B. \(L^1\)-approximation of stationary Hamilton-Jacobi equations. SIAM J Numer Anal. 2008;47(1):339–62.
Guermond JL, Marpeau F, Popov B. A fast algorithm for solving first-order PDEs by \(L^1\) minimization. Commun Math Sci. 2008;6(1):199–216.
Acknowledgements
The first author has been funded in part by the MECASIF Project (2013–2017), funded by the French “Fonds Unique Interministériel”, and by SNF Grant # 200021_153604 of the Swiss National Foundation. The second author would like to acknowledge partial support by the Army Research Laboratory through the Army High Performance Computing Research Center under Cooperative Agreement W911NF-07-2-0027, and partial support by the Office of Naval Research under Grant No. N00014-11-1-0707. The third author has been supported in part by SNF Grant # 200021_153604 of the Swiss National Foundation. This document does not necessarily reflect the position of these institutions, and no official endorsement should be inferred. Rémi Abgrall would also like to thank Y. Maday, LJLL, Université Pierre et Marie Curie, for several very insightful conversations.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Abgrall, R., Amsallem, D. & Crisovan, R. Robust model reduction by \(L^{1}\)-norm minimization and approximation via dictionaries: application to nonlinear hyperbolic problems. Adv. Model. and Simul. in Eng. Sci. 3, 1 (2016). https://doi.org/10.1186/s40323-015-0055-3