Abstract
We present a hybridization technique for summation-by-parts finite difference methods with weak enforcement of interface and boundary conditions for second-order, linear elliptic partial differential equations. The method is based on techniques from the hybridized discontinuous Galerkin literature, where local and global problems are defined for the volume and trace grid points, respectively. Using a Schur complement technique, the volume points can be eliminated, which drastically reduces the system size. We derive both the local and global problems, and show that the resulting linear systems are symmetric positive definite. The theoretical stability results are confirmed with numerical experiments, as is the accuracy of the method.
Data availability
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Notes
The free parameter in the \(2p=6\) operator from Strand [28] is taken to be \(x_1=0.70127127127127\). This choice of free parameter is necessary for the values of the Borrowing Lemma given in Virta and Mattsson [30] to hold; the Borrowing Lemma is discussed in “Proof of Theorem 1 (Symmetric Positive Definiteness of the Local Problem)” of the appendix.
Simulations were run with Julia 1.5.3.
References
Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: a fresh approach to numerical computing. SIAM Rev. 59(1), 65–98 (2017). https://doi.org/10.1137/141000671
Carpenter, M.H., Gottlieb, D., Abarbanel, S.: Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: methodology and application to high-order compact schemes. J. Comput. Phys. 111(2), 220–236 (1994). https://doi.org/10.1006/jcph.1994.1057
Carpenter, M.H., Nordström, J., Gottlieb, D.: A stable and conservative interface treatment of arbitrary spatial accuracy. J. Comput. Phys. 148(2), 341–365 (1999). https://doi.org/10.1006/jcph.1998.6114
Chen, Y., Davis, T.A., Hager, W.W., Rajamanickam, S.: Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw. 35(3), 22:1–22:14 (2008). https://doi.org/10.1145/1391989.1391995
Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. Society for Industrial and Applied Mathematics, Philadelphia (2002). https://doi.org/10.1137/1.9780898719208
Cockburn, B., Gopalakrishnan, J., Lazarov, R.: Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems. SIAM J. Numer. Anal. 47(2), 1319–1365 (2009). https://doi.org/10.1137/070706616
Davis, T.A.: Direct Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, Philadelphia (2006). https://doi.org/10.1137/1.9780898718881
Erickson, B.A., Day, S.M.: Bimaterial effects in an earthquake cycle model using rate-and-state friction. J. Geophys. Res. Solid Earth 121, 2480–2506 (2016). https://doi.org/10.1002/2015JB012470
Erickson, B.A., Dunham, E.M.: An efficient numerical method for earthquake cycles in heterogeneous media: alternating subbasin and surface-rupturing events on faults crossing a sedimentary basin. J. Geophys. Res. Solid Earth 119(4), 3290–3316 (2014). https://doi.org/10.1002/2013JB010614
Gassner, G.: A skew-symmetric discontinuous Galerkin spectral element discretization and its relation to SBP-SAT finite difference methods. SIAM J. Sci. Comput. 35(3), A1233–A1253 (2013). https://doi.org/10.1137/120890144
George, A.: Nested dissection of a regular finite element mesh. SIAM J. Numer. Anal. 10(2), 345–363 (1973). https://doi.org/10.1137/0710032
Guyan, R.J.: Reduction of stiffness and mass matrices. AIAA J. 3(2), 380 (1965). https://doi.org/10.2514/3.2874
Karlstrom, L., Dunham, E.M.: Excitation and resonance of acoustic-gravity waves in a column of stratified, bubbly magma. J. Fluid Mech. 797, 431–470 (2016). https://doi.org/10.1017/jfm.2016.257
Kozdon, J.E., Dunham, E.M., Nordström, J.: Interaction of waves with frictional interfaces using summation-by-parts difference operators: weak enforcement of nonlinear boundary conditions. J. Sci. Comput. 50, 341–367 (2012). https://doi.org/10.1007/s10915-011-9485-3
Kozdon, J.E., Wilcox, L.C.: Stable coupling of nonconforming, high-order finite difference methods. SIAM J. Sci. Comput. 38(2), A923–A952 (2016). https://doi.org/10.1137/15M1022823
Kreiss, H., Scherer, G.: Finite element and finite difference methods for hyperbolic partial differential equations. In: Mathematical Aspects of Finite Elements in Partial Differential Equations; Proceedings of the Symposium, Madison, WI, pp. 195–212 (1974). https://doi.org/10.1016/b978-0-12-208350-1.50012-1
Kreiss, H., Scherer, G.: On the Existence of Energy Estimates for Difference Approximations for Hyperbolic Systems. Technical report, Department of Scientific Computing, Uppsala University (1977)
Lotto, G.C., Dunham, E.M.: High-order finite difference modeling of tsunami generation in a compressible ocean from offshore earthquakes. Comput. Geosci. 19(2), 327–340 (2015). https://doi.org/10.1007/s10596-015-9472-0
Mattsson, K.: Summation by parts operators for finite difference approximations of second-derivatives with variable coefficients. J. Sci. Comput. 51(3), 650–682 (2012). https://doi.org/10.1007/s10915-011-9525-z
Mattsson, K., Carpenter, M.H.: Stable and accurate interpolation operators for high-order multiblock finite difference methods. SIAM J. Sci. Comput. 32(4), 2298–2320 (2010). https://doi.org/10.1137/090750068
Mattsson, K., Ham, F., Iaccarino, G.: Stable boundary treatment for the wave equation on second-order form. J. Sci. Comput. 41(3), 366–383 (2009). https://doi.org/10.1007/s10915-009-9305-1
Mattsson, K., Nordström, J.: Summation by parts operators for finite difference approximations of second derivatives. J. Comput. Phys. 199(2), 503–540 (2004). https://doi.org/10.1016/j.jcp.2004.03.001
Mattsson, K., Parisi, F.: Stable and accurate second-order formulation of the shifted wave equation. Commun. Comput. Phys. 7(1), 103 (2010). https://doi.org/10.4208/cicp.2009.08.135
Nissen, A., Kreiss, G., Gerritsen, M.: High order stable finite difference methods for the Schrödinger equation. J. Sci. Comput. 55(1), 173–199 (2013). https://doi.org/10.1007/s10915-012-9628-1
Nordström, J., Carpenter, M.H.: High-order finite difference methods, multidimensional linear problems, and curvilinear coordinates. J. Comput. Phys. 173(1), 149–174 (2001). https://doi.org/10.1006/jcph.2001.6864
Roache, P.: Verification and Validation in Computational Science and Engineering, 1st edn. Hermosa Publishers, Albuquerque (1998)
Ruggiu, A.A., Weinerfelt, P., Nordström, J.: A new multigrid formulation for high order finite difference methods on summation-by-parts form. J. Comput. Phys. 359, 216–238 (2018). https://doi.org/10.1016/j.jcp.2018.01.011
Strand, B.: Summation by parts for finite difference approximations for \(d/dx\). J. Comput. Phys. 110(1), 47–67 (1994). https://doi.org/10.1006/jcph.1994.1005
Thomée, V.: From finite differences to finite elements: a short history of numerical analysis of partial differential equations. In: Numerical Analysis: Historical Developments in the 20th Century, pp. 361–414. Elsevier, Amsterdam (2001). https://doi.org/10.1016/S0377-0427(00)00507-0
Virta, K., Mattsson, K.: Acoustic wave propagation in complicated geometries and heterogeneous media. J. Sci. Comput. 61(1), 90–118 (2014). https://doi.org/10.1007/s10915-014-9817-1
Wang, S., Virta, K., Kreiss, G.: High order finite difference methods for the wave equation with non-conforming grid interfaces. J. Sci. Comput. 68(3), 1002–1028 (2016). https://doi.org/10.1007/s10915-016-0165-1
Funding
J.E.K. was supported by National Science Foundation Award EAR-1547596. B.A.E. was supported by National Science Foundation Awards EAR-1547603 and EAR-1916992. L.C.W. has no support to declare for this work. The SCEC contribution number for this article is 10992.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Code availability
The Julia-based computer codes that support the findings of this study are available in the GitHub repository https://github.com/bfam/HybridSBP.
Additional information
The views expressed in this document are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Approved for public release; distribution unlimited.
Proofs of Key Results
To simplify the presentation, the proofs of the key results in the paper are collected here in the appendix.
1.1 Proof of Theorem 1 (Symmetric Positive Definiteness of the Local Problem)
Here we provide conditions that ensure that the local problem is symmetric positive definite. To do this we need a few auxiliary lemmas.
First we assume that the operators \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) are compatible with the first derivative (volume) operator \({\varvec{{D}}}\) in the sense of Mattsson [19, Definition 2.4]:
Assumption 1
(Remainder assumption) The matrices \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) satisfy the following remainder equalities:
where \({\tilde{\varvec{R}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{R}}}_{ss}^{\left( c_{ss}\right) }\) are symmetric positive semidefinite matrices and that
The assumption on the nullspace was not a part of the original assumption from Mattsson [19], but is reasonable for a consistent approximation of the second derivative. The operators used in Sect. 5 satisfy the Remainder Assumption [19].
We also utilize the following lemma from Virta and Mattsson [30, Lemma 3] which relates the \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) to boundary derivative operators \({\varvec{{d}}}_{0}\) and \({\varvec{{d}}}_{N}\):
Lemma 1
(Borrowing Lemma) The matrices \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) satisfy the following borrowing equalities:
Here \({\tilde{\varvec{\mathcal {A}}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{\mathcal {A}}}}_{ss}^{\left( c_{ss}\right) }\) are symmetric positive semidefinite matrices and the parameter \(\beta \) depends on the order of the operators but is independent of N. The diagonal matrices \({\varvec{{\mathcal {C}}}}_{rr}^{0:}\), \({\varvec{{\mathcal {C}}}}_{rr}^{N:}\), \({\varvec{{\mathcal {C}}}}_{ss}^{:0}\), and \({\varvec{{\mathcal {C}}}}_{ss}^{:N}\) have nonzero elements:
where l is a parameter that depends on the order of the scheme and the notation \({\{\cdot \}}_{ij}\) denotes that the grid function inside the bracket is evaluated at grid point i, j.
The values of \(\beta \) and l used in the Borrowing Lemma (Lemma 1) for the operators used in this work are given in Table 4.
We additionally make the following linearity assumption (which the operators we use satisfy) concerning the operators' dependence on the variable coefficients, and an assumption concerning the symmetric positive definiteness of the variable coefficient matrix at each grid point.
Assumption 2
The matrices \({\tilde{\varvec{A}}}_{rr}^{(c_{rr})}\) and \({\tilde{\varvec{A}}}_{ss}^{(c_{ss})}\) depend linearly on the coefficient grid functions \(c_{rr}\) and \(c_{ss}\) so that they can be decomposed as
where \(\delta \) is a grid function.
Assumption 3
At every grid point the grid functions \(c_{rr}\), \(c_{ss}\), and \(c_{rs} = c_{sr}\) satisfy
which implies that the matrix
is symmetric positive definite with eigenvalues
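As a numerical sketch of Assumption 3 (using NumPy; the coefficient values here are arbitrary placeholders, not taken from the paper), the pointwise \(2\times 2\) coefficient matrix is symmetric positive definite exactly when both of its closed-form eigenvalues are positive:

```python
import numpy as np

# Hypothetical sample coefficients at one grid point; Assumption 3
# requires the 2x2 coefficient matrix to be SPD at every grid point.
c_rr, c_ss, c_rs = 2.0, 1.5, 0.5
C = np.array([[c_rr, c_rs],
              [c_rs, c_ss]])  # symmetric since c_rs = c_sr

# Closed-form eigenvalues of a symmetric 2x2 matrix:
mean = (c_rr + c_ss) / 2
disc = np.sqrt(((c_rr - c_ss) / 2) ** 2 + c_rs ** 2)
psi_min, psi_max = mean - disc, mean + disc

# SPD holds iff both eigenvalues are positive.
assert psi_min > 0 and psi_max > 0
assert np.allclose(np.linalg.eigvalsh(C), [psi_min, psi_max])
```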
We now state the following lemma which allows us to separate \({\tilde{\varvec{A}}}\) into three symmetric positive definite matrices by peeling off \(\psi _{\min }\) at every grid point.
Lemma 2
The matrix \({\tilde{\varvec{A}}}\), defined by (12b), can be written in the form
where \({\tilde{\varvec{\mathcal {A}}}}\), \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\), and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) are symmetric positive semidefinite matrices. Here \(\psi _{\min }\) is the grid function defined by (23). Furthermore the nullspace of \({\tilde{\varvec{\mathcal {A}}}}\) is \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\), where \({\tilde{\varvec{1}}}\) is the vector of ones.
Proof
By Assumption 2 we can write
The matrix
is clearly symmetric by construction. To show that the matrix is positive semidefinite we note that
Here we have defined the vectors \({\tilde{\varvec{u}}}_{r} = \left( {\varvec{{I}}} \otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}}\) and \({\tilde{\varvec{u}}}_{s} = \left( {\varvec{{D}}} \otimes {\varvec{{I}}}\right) {\tilde{\varvec{u}}}\). Inequalities (24) and (25) follow from the Remainder Assumption and equality (26) follows from (2) and the symmetry assumption (\(c_{rs} = c_{sr}\)). Using relationships (24)–(26) we have that
where the notation \({\left\{ \cdot \right\} }_{i,j}\) denotes that the grid function inside the brackets is evaluated at grid point i, j. The \(2\times 2\) matrix in (27) is the shift of the matrix C by its minimum eigenvalue, and thus by Assumption 3 it is symmetric positive semidefinite. It then follows that each term in the summation is non-negative and the matrix \({\tilde{\varvec{\mathcal {A}}}}\) is symmetric positive semidefinite.
The matrices \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\) and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) are clearly symmetric by construction, with positive semidefiniteness following from the positivity of \(\psi _{\min }\) and the Remainder Assumption.
We now show that \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\). For the right-hand side of (27) to be zero it is required that \({\left( u_{r}\right) }_{i,j} = {\left( u_{s}\right) }_{i,j} = 0\) for all i, j. The only way for this to happen is if \({\tilde{\varvec{u}}} = \alpha {\tilde{\varvec{1}}}\) for some constant \(\alpha \). Thus we have shown that \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) \subseteq \mathrm{span}\{{\tilde{\varvec{1}}}\}\). To show equality we note that by Assumption 1 and the structure of \({\tilde{\varvec{A}}}_{rs}^{(C_{rs})}\) and \({\tilde{\varvec{A}}}_{sr}^{(C_{sr})}\) given in (2), the constant vector \({\tilde{\varvec{1}}} \in \mathrm{null}({\tilde{\varvec{\mathcal {A}}}})\). Together the above two results imply that \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\). \(\square \)
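The "peeling off" step used in the lemma can be illustrated numerically: shifting a symmetric positive definite matrix by its minimum eigenvalue leaves a symmetric positive semidefinite remainder, which is what makes each term in the quadratic form non-negative. A minimal sketch with NumPy (the matrix entries are arbitrary):

```python
import numpy as np

# An arbitrary SPD 2x2 coefficient matrix at one grid point.
C = np.array([[2.0, 0.5],
              [0.5, 1.5]])
psi_min = np.linalg.eigvalsh(C).min()

# C - psi_min * I is symmetric positive semidefinite: its smallest
# eigenvalue is exactly zero up to rounding.
shifted = C - psi_min * np.eye(2)
eigs = np.linalg.eigvalsh(shifted)
assert abs(eigs.min()) < 1e-12
assert eigs.max() >= 0
```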
We now state the following lemma concerning \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\) and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\), which combines the Remainder Assumption and the Borrowing Lemma to provide terms that can be used to bound the indefinite terms in the local operator \({\tilde{\varvec{M}}}\).
Lemma 3
The matrices \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\) and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) satisfy the following inequalities:
with \(\alpha = \min \left\{ {\left\{ {\varvec{{H}}}\right\} }_{00}, {\left\{ {\varvec{{H}}}\right\} }_{NN}\right\} / h\), i.e., the unscaled corner value in the H-matrix, and the (boundary) derivative vectors are defined as
The diagonal matrices \({\tilde{\varvec{\varPsi }}}_{\min }^{0:}\), \({\tilde{\varvec{\varPsi }}}_{\min }^{N:}\), \({\tilde{\varvec{\varPsi }}}_{\min }^{:0}\), and \({\tilde{\varvec{\varPsi }}}_{\min }^{:N}\) are defined by (22) using \(\psi _{\min }\).
Proof
We will prove the relationship for \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\); the proof for \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) is analogous. First we note that by the Borrowing Lemma it immediately follows that
Additionally by the Remainder Assumption it follows that
since each term of the summation is non-negative, the last inequality follows by dropping all but the \(j=0\) and \(j=N\) terms of the summation. The result follows immediately by averaging (28) and (29). \(\square \)
We can now prove Theorem 1 on the symmetric positive definiteness of \({\tilde{\varvec{M}}}\) as defined by (12a).
Proof
The structure of (12a) directly implies that \({\tilde{\varvec{M}}}\) is symmetric; in the remainder of the proof it is shown that \({\tilde{\varvec{M}}}\) is also positive definite.
We begin by recalling the definitions of \({\tilde{\varvec{C}}}_{k}\) and \({\varvec{{F}}}_{k}\) in (12) which allows us to write
Now considering the \({\tilde{\varvec{M}}}\) weighted inner product we have that
Here we have used Lemma 2 to split \({\tilde{\varvec{A}}}\).
If \({\varvec{{\tau }}}_{k} > 0\) then it follows for all \({\tilde{\varvec{u}}}\) that
Additionally, if \({\tilde{\varvec{u}}} = c {\tilde{\varvec{1}}}\) for some constant \(c \ne 0\) then it is a strict inequality since
Since by Lemma 2 the matrix \({\tilde{\varvec{A}}}\) is symmetric positive semidefinite with \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}({\tilde{\varvec{1}}})\), this implies that the matrix
that is, the matrix is positive definite. To complete the proof, all that remains is to show that the remaining matrix in (31) is positive semidefinite, namely
Considering the quantity \({\tilde{\varvec{u}}}^{T} \left( {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} + {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\right) {\tilde{\varvec{u}}}\) and using Lemma 3 we can write:
Now considering the \(k=1\) term of the last summation in (31) we have
We now need to use the positive term related to face 1 of (32) to bound the negative contribution from (33). Doing this subtraction for face 1 then gives:
In the above calculation we have used the fact that \({\varvec{{H}}}\), \({\varvec{{\tau }}}_{1}\), \({\varvec{{C}}}_{rr}^{0:}\), and \({\varvec{{C}}}_{rs}^{0:}\) are diagonal as well as made the following definitions:
The eigenvalues of the matrix in (34) are:
The first eigenvalue \(\mu _{1}\) is clearly positive and \(\mu _{2}\) will be positive if:
With such a definition of \({\varvec{{\tau }}}_{1}\) all the terms in (34) are positive and thus for face 1 the terms in (31) are positive. An identical argument holds for the other faces if:
and thus \({\tilde{\varvec{M}}}\) is positive definite since \({\tilde{\varvec{u}}}^{T}{\tilde{\varvec{M}}}{\tilde{\varvec{u}}} > 0\) for all \({\tilde{\varvec{u}}} \ne {\tilde{\varvec{0}}}\). \(\square \)
1.2 Proof of Theorem 2 (Positive Definiteness of the Local Problem with Neumann Boundary Conditions)
Here we prove Theorem 2 on the symmetric positive definiteness of \({\tilde{\varvec{M}}}\) with Neumann boundary conditions.
Proof
We begin by considering
where we define the modified surface matrices \({\tilde{\varvec{\mathcal {C}}}}_{k}\) to be
if face k is a Neumann boundary and \({\tilde{\varvec{\mathcal {C}}}}_{k} = {\tilde{\varvec{C}}}_{k}\) otherwise; see the definition of the modified \({\tilde{\varvec{M}}}\) with Neumann boundary conditions (16) and (30). In the proof of Theorem 1 it was shown that terms of the form of (36) combine with \({\tilde{\varvec{A}}}\) in a way that is non-negative if the \({\varvec{{\tau }}}_{k}\) satisfy (35); see (32) and following. Thus \({\tilde{\varvec{u}}}^{T}{\tilde{\varvec{M}}}{\tilde{\varvec{u}}} \ge 0\) for all \({\tilde{\varvec{u}}}\). The inequality is strict for \({\tilde{\varvec{u}}} \ne {\tilde{\varvec{0}}}\) as long as one face is Dirichlet; the argument is the same as that made in the proof of Theorem 1. \(\square \)
1.3 Proof of Theorem 3 and Corollary 2 (Positive Definiteness of the Global Problem)
Proof of Theorem 3
Without loss of generality, we consider a two-block mesh with Dirichlet boundary conditions and a single face \(f \in \mathcal {F}_{I}\), and assume that it is connected to face \(k^{+}\) of block \(B^{+}\) and face \(k^{-}\) of block \(B^{-}\). Solving for \(\lambda _{f}\) in the global coupling Eq. (18) in terms of \({\tilde{\varvec{u}}}_{B^{+}}\) and \({\tilde{\varvec{u}}}_{B^{-}}\) gives
Plugging this expression into the local problem (12), gives
Here \({\tilde{\varvec{q}}}_{B^{\pm } {\setminus } f}\) denotes \({\tilde{\varvec{q}}}_{B^{\pm }}\) (see (12d)) with the term dependent on \({\tilde{\varvec{u}}}\) associated with face f removed. Using (30) which relates \({\tilde{\varvec{C}}}_{f,B^{\pm }}\) to \({\varvec{{F}}}_{f,B^{\pm }}\) we have that
Plugging this into (37) and rewriting the two equations as a single system gives:
where we have defined the following matrices:
The matrix \({\varvec{{\mathbb {A}}}}\) is block diagonal, and each of the blocks was shown in the proof of Theorem 1 to be symmetric positive semidefinite. Thus, if \({\varvec{{\mathbb {T}}}}\) is symmetric positive semidefinite, then the whole system is symmetric positive semidefinite. Since \({\varvec{{\tau }}}_{f,B^{\pm }}\) are diagonal, the eigenvalues of \({\varvec{{\mathbb {T}}}}\) are the same as the eigenvalues of the \(2\times 2\) systems
for each \(j = 0\) to \(N_{f}\) (number of points on the face). The eigenvalues of \({\varvec{{\mathbb {T}}}}^{j}\) are
which shows that \({\varvec{{\mathbb {T}}}}^{j}\), and hence \({\varvec{{\mathbb {T}}}}\), is positive semidefinite as long as \(\tau _{f,B^{\pm }}^{j} > 0\).
An identical argument holds for each interface \(f \in \mathcal {F}_{I}\); thus the interface treatment guarantees that the global system of equations is symmetric positive semidefinite. Positive definiteness follows as long as one face of the mesh is a Dirichlet boundary: only the constant state over the entire domain is in \(\mathrm{null}({\tilde{\varvec{A}}}_{B})\) for all \(B\in \mathcal {B}\), and this state is removed whenever some face of the mesh has a Dirichlet boundary condition; see the proof of Theorem 1. \(\square \)
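The reduction used above — a matrix that decouples into \(2\times 2\) blocks under a symmetric reordering has, as its eigenvalues, the union of the blocks' eigenvalues — can be sketched numerically (NumPy; the block entries below are arbitrary placeholders, not the paper's actual penalty values):

```python
import numpy as np

# Hypothetical per-point 2x2 interface blocks; in the paper the
# entries depend on the penalty parameters tau at each face point.
blocks = [np.array([[2.0, -1.0], [-1.0, 1.5]]),
          np.array([[3.0, -0.5], [-0.5, 2.0]])]

# Assemble the block diagonal matrix T from the 2x2 blocks.
n = 2 * len(blocks)
T = np.zeros((n, n))
for j, B in enumerate(blocks):
    T[2 * j:2 * j + 2, 2 * j:2 * j + 2] = B

# The spectrum of T is the union of the blocks' spectra, so T is
# positive definite iff every 2x2 block is.
block_eigs = np.sort(np.concatenate([np.linalg.eigvalsh(B) for B in blocks]))
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), block_eigs)
assert block_eigs.min() > 0
```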
Proof of Corollary 2
Begin by noting that
By Theorem 3 and the structure of \(\bar{{\varvec{{D}}}}\), the block diagonal center matrix is symmetric positive definite. Since the outer two matrices are transposes of one another, it immediately follows that the global system matrix is symmetric positive definite.
Since the global system matrix and \(\bar{{\varvec{{M}}}}\) are symmetric positive definite, symmetric positive definiteness of the Schur complement of the \(\bar{{\varvec{{M}}}}\) block follows directly from the decomposition
\(\square \)
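The Schur-complement property used in Corollary 2 — eliminating the volume unknowns from a symmetric positive definite block system leaves a smaller symmetric positive definite system for the trace unknowns — can be checked on synthetic data (a NumPy sketch; the block names mirror the text but the matrices are random, not the paper's operators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random SPD global matrix with block structure [[M, F], [F^T, D]].
n_vol, n_trace = 8, 3
n = n_vol + n_trace
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)  # SPD by construction

M = A[:n_vol, :n_vol]   # volume (local) block, SPD
F = A[:n_vol, n_vol:]   # volume-trace coupling block
D = A[n_vol:, n_vol:]   # trace block

# Schur complement of the M block: the reduced trace system is SPD
# and much smaller than the full system.
S = D - F.T @ np.linalg.solve(M, F)
assert np.allclose(S, S.T)
assert np.linalg.eigvalsh(S).min() > 0
```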
Cite this article
Kozdon, J.E., Erickson, B.A. & Wilcox, L.C. Hybridized Summation-by-Parts Finite Difference Methods. J Sci Comput 87, 85 (2021). https://doi.org/10.1007/s10915-021-01448-5