
1 Introduction

In this paper we investigate the solution of large-scale saddle point systems arising in control problems for coupled partial differential equations (PDEs). The starting points are recent publications concerning the boundary feedback stabilization of non-coupled flows like the linear Stokes flow in [3] and the non-linear Navier-Stokes flow in [1]. The analytic approach for this feedback stabilization is given by Raymond in, e.g., [13].

Using the projection idea proposed by Heinkenschloss et al. [8], Benner et al. [1, 3] show that the solution of certain saddle point systems is the key ingredient to ensure that the numerical solution lies on the correct solution manifold, i.e., the space of discretely divergence-free velocity fields, without performing an explicit projection.

Applying these ideas to a coupled flow problem, namely the Navier-Stokes equations combined with a diffusion-convection equation, leads to saddle point systems with a more complicated block structure [2]. Solving these systems efficiently requires appropriate preconditioners. This paper investigates an efficient iterative solution strategy based on preconditioned Krylov subspace methods within the framework derived in [3]. Here we consider the full feedback system for the coupled multi-field flow problem, whereas in [3] only the linear Stokes case was treated, without coupling to another field equation. Moreover, this paper complements [2]: there, we focused on the convergence of the Newton-ADI method for solving the algebraic Riccati equation that determines the stabilizing feedback control for the coupled system, and the saddle point problems in the innermost step of the Newton-ADI iteration were solved by sparse direct methods; here, we study preconditioned iterative solvers for this step.

The remainder of this paper is organized as follows. Section 2 briefly recalls the feedback stabilization approach for multi-field flow problems from [2] that leads to large-scale saddle point systems. Afterwards, we discuss properties of these saddle point systems to derive a suitable preconditioner in Sect. 3. Section 4 shows numerical results before we conclude the paper and give a short outlook to further investigations in Sect. 5.

2 Derivation of Saddle Point Systems

The derivation of the block structured saddle point systems in [2] starts with the linearized coupled flow problem defined for \(t\in [0,\infty )\) and \(\varvec{x}\in \varOmega \subset \mathbb {R}^{2}\). The linearized Navier-Stokes equations that describe, up to first order, the difference between actual and desired velocity and pressure are given as

$$\begin{aligned} \left. \begin{aligned} \frac{\partial }{\partial t}\varvec{z}-\frac{1}{\text {Re}}\varDelta \varvec{z}+ (\varvec{w}\cdot \nabla )\varvec{z}+(\varvec{z}\cdot \nabla )\varvec{w}+\nabla p&=\varvec{f}_{l},\\ \mathop {\mathrm {div}}{\varvec{z}}&=0, \end{aligned} \right. \quad \text {on }[0,\infty )\times \varOmega . \end{aligned}$$
(1)

They are then coupled via the velocity field \(\varvec{z}(t,\varvec{x})\) with the linearized diffusion-convection equation

$$\begin{aligned} \frac{\partial }{\partial t}c_{\varvec{z}}-\frac{1}{\text {Re}\text {Sc}}\varDelta c_{\varvec{z}}+(\varvec{w}\cdot \nabla )c_{\varvec{z}}+(\varvec{z}\cdot \nabla )c_{\varvec{w}}=0, \quad \text {on }[0,\infty )\times \varOmega \end{aligned}$$
(2)

that describes the concentration of a reactive species denoted by \(c_{\varvec{z}}(t,\varvec{x})\). The stationary linearization points \(\varvec{w}(\varvec{x})\) for the velocity and \(c_{\varvec{w}}(\varvec{x})\) for the concentration are assumed to be given. The equations are scaled with the Reynolds number \(\text {Re}\) and the Schmidt number \(\text {Sc}\). Using mixed Taylor-Hood finite elements [9] for the velocity and pressure in Eq. (1) as well as linear ansatz functions for the concentration in Eq. (2), we end up with a system of discrete differential-algebraic equations (DAE) that can be written as the control system:

$$\begin{aligned} \begin{bmatrix} M_{\mathbf {z}}&0&0\\ 0&0&0\\ 0&0&M_{\mathbf {c}} \end{bmatrix}\frac{d}{dt} \begin{bmatrix} \mathbf {z}\\\mathbf {p}\\\mathbf {c}\end{bmatrix}&= \begin{bmatrix} A_{\mathbf {z}}&G&0\\ G^{T}&0&0\\ -R&0&A_{\mathbf {c}} \end{bmatrix}~ \begin{bmatrix} \mathbf {z}\\\mathbf {p}\\\mathbf {c}\end{bmatrix}+ \begin{bmatrix} B_{\mathbf {z}}\\0\\0 \end{bmatrix}~ \mathbf {u},\end{aligned}$$
(3a)
$$\begin{aligned} \mathbf {y}&= \begin{bmatrix} 0&0&C_{\mathbf {c}} \end{bmatrix}~ \begin{bmatrix} \mathbf {z}\\\mathbf {p}\\\mathbf {c}\end{bmatrix} \end{aligned}$$
(3b)

with the first block row for velocity (of dimension \(n_{\mathbf {z}}\)), the second row for pressure (of dimension \(n_{\mathbf {p}}\)), and the third row for concentration (of dimension \(n_{\mathbf {c}}\)) [2].

The matrix pencil

$$ \Bigg ( \underbrace{ \begin{bmatrix} A_{\mathbf {z}}&G&0\\ G^{T}&0&0\\ -R&0&A_{\mathbf {c}} \end{bmatrix}}_{\mathbf {A}}, \underbrace{ \begin{bmatrix} M_{\mathbf {z}}&0&0\\ 0&0&0\\ 0&0&M_{\mathbf {c}} \end{bmatrix}}_{\mathbf {M}}\Bigg ) $$

is of dimension \(n\times n\) with \(n=n_{\mathbf {z}}+n_{\mathbf {c}}+n_{\mathbf {p}}\) and has \(2n_{\mathbf {p}}\) infinite eigenvalues [5].

In [2], a linear-quadratic regulator (LQR) approach is applied to system (3) for determining the stabilizing control function \(\mathbf {u}\). The solution of this LQR problem is a linear feedback control \(\mathbf {u}(t) = \mathbf {K}(\mathbf {z}(t),\mathbf {p}(t),\mathbf {c}(t))\), determined via the solution of an algebraic Riccati equation (ARE) defined on the subspace of discretely divergence-free vector fields. The resulting ARE is then solved using a Newton-ADI algorithm. This method yields a threefold nested iteration. In the innermost loop, saddle point systems of the form

$$\begin{aligned} \underbrace{\begin{bmatrix} \;A_{\mathbf {z}}^{T}+q_{i}M_{\mathbf {z}}\;&G&-R^{T} \\ G^{T}&0&0\\ 0&0&\;A_{\mathbf {c}}^{T}+q_{i}M_{\mathbf {c}}\;\\ \end{bmatrix}}_{=\mathbf {A}^{T}+q_{i}\mathbf {M}=:\mathbf {F}_{i}}~ \underbrace{\begin{bmatrix} \varLambda _{\mathbf {z}}\\ \varLambda _{\mathbf {p}}\\ \varLambda _{\mathbf {c}} \end{bmatrix} }_\mathbf{\varLambda }&= \underbrace{\begin{bmatrix} \tilde{Y}_{\mathbf {z}}\\0\\\tilde{Y}_{\mathbf {c}} \end{bmatrix}. }_{\mathbf {Y}} \end{aligned}$$
(4)

have to be solved for certain ADI shifts \(q_{i}\in {\mathbb {C}}^{-}\) and a block right hand side \(\mathbf {Y}\). The whole nested iteration is given in [2, Algorithm 1] and is omitted here due to space constraints.
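
For illustration, a minimal MATLAB sketch of assembling \(\mathbf {F}_{i}=\mathbf {A}^{T}+q_{i}\mathbf {M}\) for one shift from the blocks in (3a) is given below; the variable names and explicit block layout are assumptions matching the notation above, not code from [2].

```matlab
% Assemble F_i = A' + q*M for a single ADI shift q (sketch).
% Az, Mz, Ac, Mc, G, R are the blocks from (3a); nz, np, nc their dimensions.
Fq = [Az' + q*Mz,     G,              -R';
      G',             sparse(np,np),   sparse(np,nc);
      sparse(nc,nz),  sparse(nc,np),   Ac' + q*Mc];
```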

3 Preconditioned Iterative Solvers for Block Structured Saddle Point Systems

The use of direct solvers for (4) is only suitable for moderate problem sizes and two-dimensional problems. Although iterative methods can handle much larger systems, their performance deteriorates as the mesh size decreases. To avoid this, a suitable preconditioner \(\mathbf {P}_{i}\in {\mathbb {C}}^{n\times n}\) is introduced such that the left-preconditioned system

$$\begin{aligned} \mathbf {P}_{i}^{-1}\mathbf {F}_{i}\mathbf {\varLambda }=\mathbf {P}_{i}^{-1}\mathbf {Y} \end{aligned}$$

is solved instead of (4) (see [7, 16]). Before we derive a suitable preconditioner \(\mathbf {P}_{i}\), we need to describe the properties of the saddle point system and their influence on the chosen preconditioner.

3.1 Properties

The matrices \(M_{\mathbf {z}}, M_{\mathbf {c}}\) are symmetric and positive definite, \(G, R\) are of full rank, and the ADI shift \(q_{i}\in {\mathbb {C}}^{-}\) is contained in the convex hull of the finite spectrum of \((\mathbf {A},\mathbf {M})\). The shifted system matrix \(\mathbf {F}_{i}\) is indefinite for all \(q_{i}\in {\mathbb {C}}^{-}\). Due to the different \(q_{i}\), the matrix \(\mathbf {F}_{i}\) changes in each ADI step and, therefore, the preconditioner has to be adapted in each ADI step as well. Nevertheless, for the remainder of this section we assume a fixed ADI shift \(q_{i}=q\) and omit the index \(i\) where it is clear from the context.

3.2 Derivation of Block Preconditioner

Adapting the ideas from [3, Sect. 3.2] we consider

$$\begin{aligned} \mathbf {F}=\begin{bmatrix} F_{\mathbf {z}}&G&-R^{T}\\ G^{T}&0&0\\ 0&0&F_{\mathbf {c}}\\ \end{bmatrix}= \begin{bmatrix} \mathbf {F}_{NSE}&-\tilde{R}^{T}\\ 0&F_{\mathbf {c}}\end{bmatrix} \quad \text { with }\quad \begin{matrix} F_{\mathbf {z}}:=A^{T}_{\mathbf {z}}+q M_{\mathbf {z}},\\ F_{\mathbf {c}}:=A^{T}_{\mathbf {c}}+q M_{\mathbf {c}},\\ \tilde{R}:= \begin{bmatrix} R & 0 \end{bmatrix}, \end{matrix} \end{aligned}$$
(5)

and \(\mathbf {F}_{{NSE}}\) denotes the saddle point matrix for the non-coupled Navier-Stokes flow as it is used in [1]. Using the preconditioner \(\mathbf {P}_{{NSE}}\) from [3], we define a block preconditioner for GMRES [17] applied to the block structured saddle point system (5) as follows:

$$\begin{aligned} \mathbf {P}&= \begin{bmatrix} \mathbf {P}_{NSE}&-\tilde{R}^{T}\\ 0&P_{\mathbf {c}}\end{bmatrix}= \begin{bmatrix} P_{\mathbf {z}}&0&-R^{T}\\ G^{T}&-P_{SC}&0\\ 0&0&P_{\mathbf {c}}\end{bmatrix}\\ \Rightarrow \mathbf {P}^{-1}&= \begin{bmatrix} P_{\mathbf {z}}^{-1}&0&P_{\mathbf {z}}^{-1}R^{T}P_{\mathbf {c}}^{-1}\\ \;P_{SC}^{-1}G^{T}P_{\mathbf {z}}^{-1}\;&\;-P_{SC}^{-1}\;&\;P_{SC}^{-1}G^{T}P_{\mathbf {z}}^{-1}R^{T}P_{\mathbf {c}}^{-1}\;\\ 0&0&P_{\mathbf {c}}^{-1}\\ \end{bmatrix}. \end{aligned}$$

In contrast to the preconditioner derived in [3], we cannot achieve a block lower triangular matrix due to the coupling matrix \(R\). Applying \(\mathbf {P}^{-1}\) to \(\mathbf {F}\) yields

$$\begin{aligned}&\mathbf {P}^{-1}\mathbf {F}= \nonumber \\&\Bigg [ \begin{array}{ccc} P_{\mathbf {z}}^{-1}F_{\mathbf {z}}&P_{\mathbf {z}}^{-1}G&-P_{\mathbf {z}}^{-1}R^{T}+P_{\mathbf {z}}^{-1}R^{T}P_{\mathbf {c}}^{-1}F_{\mathbf {c}}\\ \;P_{SC}^{-1}G^{T}P_{\mathbf {z}}^{-1}F_{\mathbf {z}}-P_{SC}^{-1}G^{T}\;&\;P_{SC}^{-1}G^{T}P_{\mathbf {z}}^{-1}G\;&\;-P_{SC}^{-1}G^{T}P_{\mathbf {z}}^{-1}R^{T}+P_{SC}^{-1}G^{T}P_{\mathbf {z}}^{-1}R^{T}P_{\mathbf {c}}^{-1}F_{\mathbf {c}}\;\\ 0&0&P_{\mathbf {c}}^{-1}F_{\mathbf {c}}\end{array}\Bigg ] \end{aligned}$$
(6)

If one assumes \(P_{\mathbf {z}}=F_{\mathbf {z}}, P_{\mathbf {c}}=F_{\mathbf {c}}\), and \(P_{SC}=G^{T}F_{\mathbf {z}}^{-1}G\) as ideal approximations in (6), this leads to

$$\begin{aligned} \begin{bmatrix} I_{\mathbf {z}}&F_{\mathbf {z}}^{-1}G&-F_{\mathbf {z}}^{-1}R^{T}+F_{\mathbf {z}}^{-1}R^{T}\\ \;P_{SC}^{-1}G^{T}-P_{SC}^{-1}G^{T}\;&\;P_{SC}^{-1}G^{T}F_{\mathbf {z}}^{-1}G\;&\;-P_{SC}^{-1}G^{T}F_{\mathbf {z}}^{-1}R^{T}+P_{SC}^{-1}G^{T}F_{\mathbf {z}}^{-1}R^{T}\;\\ 0&0&I_{\mathbf {c}}\\ \end{bmatrix}\\ = \begin{bmatrix} I_{\mathbf {z}}&*&0\\ 0&I_{\mathbf {p}}&0\\ 0&0&I_{\mathbf {c}} \end{bmatrix} \end{aligned}$$

and a Krylov method such as GMRES would converge within at most two steps, since the preconditioned matrix differs from the identity only by the nilpotent off-diagonal block \(*\). The goal is to find good approximations for \(P_{\mathbf {z}},P_{\mathbf {c}}\), and \(P_{SC}\) that can be applied cheaply and still cluster the eigenvalues in a suitable way such that our iterative solver shows fast convergence [7]. Instead of calculating the inverse \(\mathbf {P}^{-1}\) to apply the preconditioner \(\mathbf {P}\), we consider the solution of a linear system

$$\begin{aligned} \begin{bmatrix} P_{\mathbf {z}}&0&-R^{T}\\ G^{T}&-P_{SC}&0\\ 0&0&P_{\mathbf {c}}\end{bmatrix}~ \begin{bmatrix} x_{\mathbf {z}}\\x_{\mathbf {p}}\\x_{\mathbf {c}} \end{bmatrix}= \begin{bmatrix} b_{\mathbf {z}}\\b_{\mathbf {p}}\\b_{\mathbf {c}} \end{bmatrix} \end{aligned}$$
(7)

that can be solved in three steps:

$$\begin{aligned}&\text {Step I:}\quad&x_{\mathbf {c}}&=P_{\mathbf {c}}^{-1}b_{\mathbf {c}},\end{aligned}$$
(8a)
$$\begin{aligned}&\text {Step II:}\quad&x_{\mathbf {z}}&=P_{\mathbf {z}}^{-1}(R^{T}x_{\mathbf {c}}+b_{\mathbf {z}}),\end{aligned}$$
(8b)
$$\begin{aligned}&\text {Step III:}\quad&x_{\mathbf {p}}&=P_{SC}^{-1}(G^{T}x_{\mathbf {z}}-b_{\mathbf {p}}). \end{aligned}$$
(8c)
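
To make these steps concrete, the following minimal MATLAB sketch applies \(\mathbf {P}^{-1}\) to a vector via (8a)–(8c); it can be passed to GMRES as a function handle. All names are placeholders, and PSC_solve stands for the application of \(P_{SC}^{-1}\) derived in Subsect. 3.3.

```matlab
function x = apply_prec(b, G, R, Pz_solve, Pc_solve, PSC_solve, nz, np)
% Apply the block preconditioner P^{-1} to b via steps (8a)-(8c) (sketch).
% Pz_solve, Pc_solve, PSC_solve are handles applying the (approximate)
% inverses of P_z, P_c, and P_SC.
  bz = b(1:nz);  bp = b(nz+1:nz+np);  bc = b(nz+np+1:end);
  xc = Pc_solve(bc);               % Step I   (8a)
  xz = Pz_solve(R'*xc + bz);       % Step II  (8b): coupling R enters as mat-vec
  xp = PSC_solve(G'*xz - bp);      % Step III (8c)
  x  = [xz; xp; xc];
end
```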

In conclusion, the coupling matrix \(R\) only enters via a matrix-vector multiplication. In steps I and II, one needs to solve with the shifted concentration and velocity system matrices as defined in (5). For both steps, an algebraic multigrid (AMG) method can be used, as described below. But first, we discuss the more challenging step III.

3.3 Approximation Methods

Schur Complement Approximation. \(P_{SC}\) is an approximation of the Navier-Stokes Schur complement \(SC:=G^{T}F_{\mathbf {z}}^{-1}G\in \mathbb {R}^{n_{\mathbf {p}}\times n_{\mathbf {p}}}\). Unfortunately, \(SC\) is a dense matrix involving the inverse of \(F_{\mathbf {z}}\). To avoid forming this matrix explicitly, we follow the approach in [3, 18] and use a slightly modified variant of the least squares commutator approach described in [7, Sect. 8.2]. Namely, we consider the shifted Oseen operator in the velocity space

$$\begin{aligned} \mathcal {F}_{\varvec{z}}=-\frac{1}{\text {Re}}\nabla ^{2}+\varvec{w}\cdot \nabla +q\mathcal {I}. \end{aligned}$$

Note that it is common practice to omit the reaction term \((\varvec{z}\cdot \nabla )\varvec{w}\) of the linearized Navier-Stokes equations when deriving preconditioners [7, Sect. 8]. Similar to [7, Sect. 8.2] and [6], we suppose that there exists an analogous operator on the pressure space defined as

$$\begin{aligned} \mathcal {F}_{p}=(-\frac{1}{\text {Re}}\nabla ^{2}+\varvec{w}\cdot \nabla +q\mathcal {I})_{p}. \end{aligned}$$

The least squares commutator of the shifted Oseen operator with the gradient operator is defined as

$$\begin{aligned} \mathcal {E}=\mathcal {F}_{\varvec{z}}\nabla -\nabla \mathcal {F}_{p} \end{aligned}$$

and is supposed to become small in some sense [7]. Using the discrete versions of the operators, we end up with

$$\begin{aligned} E=(M_{\mathbf {z}}^{-1}F_{\mathbf {z}})M_{\mathbf {z}}^{-1}G-M_{\mathbf {z}}^{-1}G(M^{-1}_{\mathbf {p}}F_{\mathbf {p}}) \end{aligned}$$

with \(M_{\mathbf {p}}\) the mass matrix and \(F_{\mathbf {p}}=A^{T}_{\mathbf {p}}+qM_{\mathbf {p}}\) the shifted system matrix, both defined on the pressure space. Premultiplying this by \(G^{T}F_{\mathbf {z}}^{-1}M_{\mathbf {z}}\) and postmultiplying by \(F^{-1}_{\mathbf {p}}M_{\mathbf {p}}\) yields [3]

$$\begin{aligned} G^{T}M_{\mathbf {z}}^{-1}GF^{-1}_{\mathbf {p}}M_{\mathbf {p}}\approx G^{T}F_{\mathbf {z}}^{-1}G=SC. \end{aligned}$$
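
Spelled out, with \(E\approx 0\), the two multiplications read

$$\begin{aligned} G^{T}F_{\mathbf {z}}^{-1}M_{\mathbf {z}}\bigl [(M_{\mathbf {z}}^{-1}F_{\mathbf {z}})M_{\mathbf {z}}^{-1}G\bigr ]F^{-1}_{\mathbf {p}}M_{\mathbf {p}}&=G^{T}M_{\mathbf {z}}^{-1}G\,F^{-1}_{\mathbf {p}}M_{\mathbf {p}},\\ G^{T}F_{\mathbf {z}}^{-1}M_{\mathbf {z}}\bigl [M_{\mathbf {z}}^{-1}G(M^{-1}_{\mathbf {p}}F_{\mathbf {p}})\bigr ]F^{-1}_{\mathbf {p}}M_{\mathbf {p}}&=G^{T}F_{\mathbf {z}}^{-1}G, \end{aligned}$$

so that equating the two right-hand sides gives the approximation above.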

The large and dense matrix \(G^{T}M_{\mathbf {z}}^{-1}G\) cannot be used explicitly, but it is shown in [7, Sect. 5.5.1] that, for an inf-sup stable discretization and an inflow-outflow problem [7, Sect. 8.2] as considered in this paper, this matrix is spectrally equivalent to the Laplacian \(S_{\mathbf {p}}\) defined on the pressure space. Finally, we obtain

$$\begin{aligned} P_{SC}\approx S_{\mathbf {p}}F_{\mathbf {p}}^{-1}M_{\mathbf {p}}\quad \Rightarrow \quad P_{SC}^{-1}\approx M_{\mathbf {p}}^{-1}F_{\mathbf {p}}S_{\mathbf {p}}^{-1}. \end{aligned}$$

In [4] the authors use a similar approach for the Navier-Stokes equations. In summary, the application of \(P_{SC}^{-1}\) requires solving with \(S_{\mathbf {p}}\) (step IIIa), multiplying with \(F_{\mathbf {p}}\) (step IIIb), and solving with \(M_{\mathbf {p}}\) (step IIIc). Step IIIa can be handled by an AMG method, similar to steps I and II.
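
A corresponding MATLAB sketch of applying \(P_{SC}^{-1}\) is given below, with placeholder names; the direct solves with \(S_{\mathbf {p}}\) and \(M_{\mathbf {p}}\) shown here may be replaced by the AMG method and the Chebyshev semi-iteration discussed next.

```matlab
% Apply P_SC^{-1} = M_p^{-1} F_p S_p^{-1} to a vector v (sketch).
% Sp, Fp, Mp: pressure Laplacian, shifted pressure matrix, pressure mass matrix.
PSC_solve = @(v) Mp \ (Fp * (Sp \ v));   % IIIa: solve S_p, IIIb: apply F_p, IIIc: solve M_p
```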

Algebraic Multigrid. As described above, the steps I (8a), II (8b), and IIIa are solved using an AMG method [14]. Due to the possibly complex ADI shifts \(q\) in (8a) and (8b), we use the AGMG package developed by the group of Y. Notay [10–12]. In all three cases we use the MATLAB®-based implementation to solve systems of the form

$$\begin{aligned} Fx=b \end{aligned}$$

with a sparse matrix \(F\in \{F_{\mathbf {z}},F_{\mathbf {c}},S_{\mathbf {p}}\}\). Details about the parameters used for the function agmg are discussed in Subsect. 4.2. For more details about the methods used internally and the calling syntax, we refer the reader to [11]. Although AGMG can handle complex arithmetic, it then needs significantly more steps to converge to the desired tolerance. Additionally, we note that agmg acts as a non-linear operation, such that one should in principle use a flexible iterative method, e.g., FGMRES [15]. However, our numerical experiments did not show any drawbacks when using a standard GMRES implementation.
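
As an illustration only, a basic call replacing a direct solve by AGMG might look as follows; the gmres-like argument order (restart, tolerance, maximum iterations) is our reading of the interface documented in [11] and should be treated as an assumption.

```matlab
% Hedged sketch: solve F*x = b (F in {F_z, F_c, S_p}) with AGMG [11]
% instead of a sparse direct solver.
tol   = 1e-10;                    % accuracy as used in Subsect. 4.2
maxit = 100;                      % illustrative iteration limit
x = agmg(F, b, [], tol, maxit);   % see [11] for the full parameter list
```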

Chebyshev-Semi-Iteration. Although the solution of step IIIc with the symmetric positive definite mass matrix \(M_{\mathbf {p}}\) is relatively cheap, it can still be accelerated by using the Chebyshev semi-iteration as described, e.g., in [18]. Numerical tests showed that only \(4{-}6\) steps are needed to obtain a result that is accurate enough for the preconditioner; the resulting speedup is reported in Subsect. 4.2.
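
A minimal MATLAB sketch of such a Chebyshev semi-iteration for the diagonally scaled mass matrix is given below. The eigenvalue bounds lmin and lmax for \(\mathrm {diag}(M_{\mathbf {p}})^{-1}M_{\mathbf {p}}\) are assumed to be known (for standard element pairs such bounds are available in the literature, cf. [18]); the routine is a sketch, not the implementation used for the experiments.

```matlab
function x = cheb_mass(Mp, b, k, lmin, lmax)
% Chebyshev semi-iteration for Mp*x = b with Jacobi (diagonal) scaling (sketch).
% lmin, lmax bound the eigenvalues of diag(Mp)\Mp; k is the number of steps
% (4-6 were sufficient in our tests, cf. Subsect. 4.2).
  dM    = full(diag(Mp));
  theta = (lmax + lmin)/2;  delta = (lmax - lmin)/2;  sigma = theta/delta;
  rho   = 1/sigma;
  x = zeros(size(b));
  r = (b - Mp*x)./dM;              % scaled residual D^{-1}(b - Mp*x)
  d = r/theta;
  for j = 1:k
    x = x + d;
    r = r - (Mp*d)./dM;
    rhonew = 1/(2*sigma - rho);
    d = rhonew*rho*d + (2*rhonew/delta)*r;
    rho = rhonew;
  end
end
```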

The next section depicts selected results to show the performance of the preconditioned iterative method.

4 Numerical Examples

To test the efficiency of the preconditioned iterative method, the same data and configurations as in [2] are used. After refining the initial triangulation of the reactor model in Fig. 1, we end up with the variable dimensions depicted in Table 1b. Furthermore, we define five parameter sets for different combinations of Reynolds and Schmidt numbers as shown in Table 1a. We use the MATLAB implementation of GMRES [17] to solve the saddle point systems (4) for selected ADI shifts \(q_{i}\) that appear during the Newton-ADI iteration. Each \(q_{i}\) is used for three ADI steps with four right hand sides each time; the number of GMRES steps and the CPU times are measured and arithmetically averaged. The preconditioner \(\mathbf {P}\) is evaluated as a MATLAB function handle that solves the linear system (7) using the steps (8). The GMRES tolerance is set to \(10^{-10}\) to ensure the same convergence of the ADI iteration that a direct solve would imply [3]. Although a few complex ADI shifts \(q_{i}\) appear for each parameter set during the Newton-ADI process, the figures only show the real parts of \(q_{i}\).
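
A sketch of the corresponding GMRES call with the preconditioner supplied as a function handle is shown below; all names are placeholders and refer to the sketches in Sect. 3.

```matlab
% Unrestarted GMRES for one ADI shift and one right hand side y (sketch).
tol = 1e-10;  maxit = 200;
prec = @(v) apply_prec(v, G, R, Pz_solve, Pc_solve, PSC_solve, nz, np);
[x, flag, relres, iter] = gmres(Fq, y, [], tol, maxit, prec);
```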

Fig. 1. Initial triangulation of the reactor model with coordinates and boundary conditions [2].

Table 1. Test parameter settings.
Fig. 2. Average number of GMRES steps for a representative selection of ADI shifts from the Newton-ADI iteration for the configuration sets in Table 1a.

All computations were executed in MATLAB R2012a on a 64-bit server with two Intel® Xeon® X5650 CPUs @ 2.67 GHz (12 cores in total, 6 per CPU) and 48 GB of main memory.

4.1 Influence of ADI Shifts and Reynolds and Schmidt Numbers

The influence of varying the Reynolds and Schmidt numbers as given in Table 1 is depicted in Fig. 2. To obtain the best approximations for the preconditioning steps (8a)–(8c), a direct solver is used to solve with \(F_{\mathbf {z}},F_{\mathbf {c}},\) and \(S_{\mathbf {p}}\). It can be observed that for ADI shifts \(-10^{5}<\mathop {\mathrm{Re}}\,(q_{i})<-10^{1}\), between 20 and 25 GMRES steps are needed. As soon as the absolute value of \(q_{i}\) gets smaller than 10, the number of steps increases. This is natural behavior, because the influence of the mass matrices \(M_{\mathbf {z}}\) and \(M_{\mathbf {p}}\) vanishes. Nevertheless, GMRES converges within 40–80 steps for all parameter configurations. An empirical test that sets \(q_{i}=-10\) for all \(|q_{i}|<10\) during the Newton-ADI process showed similar ADI convergence behavior as the original shift selection, without the drawback of higher GMRES cost for certain shifts. In summary, the derived preconditioner is robust with respect to different Reynolds and Schmidt numbers, as well as different ADI shifts.

4.2 Approximations Using AMG and Chebyshev-Semi-Iteration

As described in Subsect. 3.3, the different preconditioning steps should be solved by an easy-to-evaluate approximation that is accurate enough to ensure the convergence of GMRES, but avoids sparse direct factorizations of the large-scale matrices. We replaced the direct solver by its approximation step by step and depict the results in Fig. 3. First, we use the MATLAB-based function agmg [11] to solve with \(F_{\mathbf {z}}\) and \(F_{\mathbf {c}}\) in (8b) and (8a) with an accuracy of \(10^{-10}\). Depending on the ADI shift used, the function agmg needed 1–30 steps. Thus, the times to solve the whole saddle point system with the same number of GMRES steps increased slightly compared to the direct solver. Second, we approximately solved with \(S_{\mathbf {p}}\) in step IIIa using agmg as a preconditioner. This was sufficient to achieve the GMRES accuracy and, furthermore, decreased the time. Finally, we applied a Chebyshev semi-iteration to approximately solve with \(M_{\mathbf {p}}\) in step IIIc. The resulting speedup brought the times below those of the direct solver in each step without any loss of accuracy in GMRES. Due to the problems with complex ADI shifts in agmg addressed above, we restrict our comparison in Fig. 3 to a selection of real ADI shifts. The selection has been performed such that the span of all ADI shifts appearing in the entire Newton-ADI process is covered; where those shifts clustered, we chose only one representative per cluster.

Fig. 3. Average time to solve Eq. (4) with GMRES for a representative selection of real ADI shifts from the Newton-ADI iteration for different approximations of the preconditioning steps (8).

Finally, it should be noted that the suggested preconditioned GMRES method for the considered class of saddle point problems would show its full strength, in comparison to a direct solver, for finer discretizations leading to larger dimensions, and in particular for 3D problems. This will be addressed in future work.

5 Conclusions and Outlook

We have recalled the derivation of the block structured saddle point systems that arise within the Riccati-based feedback stabilization approach for coupled flow problems, which avoids any explicit projection [2]. We were able to extend the results from [3], developed for the uncoupled Stokes flow, to the coupled flow described by the incompressible Navier-Stokes equations and a diffusion-convection equation. To this end, the least-squares commutator approach in [7] has been modified to approximate the shifted Navier-Stokes Schur complement. Exploiting the block structure of the arising preconditioner guarantees a fast evaluation within GMRES. Each of the blocks can be approximated either by an AMG method or by a Chebyshev semi-iteration. Several numerical experiments showed that the derived preconditioning method is able to solve the arising saddle point systems efficiently, independently of the different parameter settings. Only the use of complex ADI shifts during the Newton-ADI process is not yet optimally covered by this approach and will be investigated in the future.