Hybridized Summation-by-Parts Finite Difference Methods

  • Published in: Journal of Scientific Computing

Abstract

We present a hybridization technique for summation-by-parts finite difference methods with weak enforcement of interface and boundary conditions for second order, linear elliptic partial differential equations. The method is based on techniques from the hybridized discontinuous Galerkin literature where local and global problems are defined for the volume and trace grid points, respectively. By using a Schur complement technique the volume points can be eliminated, which drastically reduces the system size. We derive both the local and global problems, and show that the resulting linear systems are symmetric positive definite. The theoretical stability results are confirmed with numerical experiments as is the accuracy of the method.
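The Schur complement elimination of the volume points can be illustrated on a small block system. The following is a minimal numerical sketch with generic symmetric positive definite blocks; the names `A`, `B`, `D` and the sizes are illustrative only, not the paper's notation or its Julia codes.

```python
import numpy as np

# Hypothetical SPD block system [[A, B], [B^T, D]] [u; lam] = [b_vol; b_tr],
# with "volume" unknowns u and "trace" unknowns lam (illustrative names).
rng = np.random.default_rng(0)
n_vol, n_tr = 8, 3
n = n_vol + n_tr
M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)  # symmetric positive definite by construction

A = M[:n_vol, :n_vol]   # volume-volume block
B = M[:n_vol, n_vol:]   # volume-trace coupling
D = M[n_vol:, n_vol:]   # trace-trace block
b_vol = rng.standard_normal(n_vol)
b_tr = rng.standard_normal(n_tr)

# Schur complement: eliminate the volume unknowns, leaving a much smaller
# system for the trace unknowns only.
S = D - B.T @ np.linalg.solve(A, B)
lam = np.linalg.solve(S, b_tr - B.T @ np.linalg.solve(A, b_vol))
# Local back-substitution recovers the volume unknowns.
u = np.linalg.solve(A, b_vol - B @ lam)

# The reduced solve reproduces the monolithic solution.
x = np.linalg.solve(M, np.concatenate([b_vol, b_tr]))
assert np.allclose(np.concatenate([u, lam]), x)
```

Since the Schur complement of a symmetric positive definite matrix is itself symmetric positive definite, the reduced trace system can be factored with a Cholesky factorization, which is the practical payoff of the elimination.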

Data availability

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

Notes

  1. The free parameter in the \(2p=6\) operator from Strand [28] is taken to be \(x_1=0.70127127127127\). This choice of free parameter is necessary for the values of the Borrowing Lemma given in Virta and Mattsson [30] to hold; the Borrowing Lemma is discussed in “Proof of Theorem 1 (Symmetric Positive Definiteness of the Local Problem)” of the appendix.

  2. Simulations were run with Julia 1.5.3.

References

  1. Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: a fresh approach to numerical computing. SIAM Rev. 59(1), 65–98 (2017). https://doi.org/10.1137/141000671

  2. Carpenter, M.H., Gottlieb, D., Abarbanel, S.: Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: methodology and application to high-order compact schemes. J. Comput. Phys. 111(2), 220–236 (1994). https://doi.org/10.1006/jcph.1994.1057

  3. Carpenter, M.H., Nordström, J., Gottlieb, D.: A stable and conservative interface treatment of arbitrary spatial accuracy. J. Comput. Phys. 148(2), 341–365 (1999). https://doi.org/10.1006/jcph.1998.6114

  4. Chen, Y., Davis, T.A., Hager, W.W., Rajamanickam, S.: Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw. 35(3), 22:1-22:14 (2008). https://doi.org/10.1145/1391989.1391995

  5. Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. Society for Industrial and Applied Mathematics, Philadelphia (2002). https://doi.org/10.1137/1.9780898719208

  6. Cockburn, B., Gopalakrishnan, J., Lazarov, R.: Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems. SIAM J. Numer. Anal. 47(2), 1319–1365 (2009). https://doi.org/10.1137/070706616

  7. Davis, T.A.: Direct Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, Philadelphia (2006). https://doi.org/10.1137/1.9780898718881

  8. Erickson, B.A., Day, S.M.: Bimaterial effects in an earthquake cycle model using rate-and-state friction. J. Geophys. Res. Solid Earth 121, 2480–2506 (2016). https://doi.org/10.1002/2015JB012470

  9. Erickson, B.A., Dunham, E.M.: An efficient numerical method for earthquake cycles in heterogeneous media: alternating subbasin and surface-rupturing events on faults crossing a sedimentary basin. J. Geophys. Res. Solid Earth 119(4), 3290–3316 (2014). https://doi.org/10.1002/2013JB010614

  10. Gassner, G.: A skew-symmetric discontinuous Galerkin spectral element discretization and its relation to SBP-SAT finite difference methods. SIAM J. Sci. Comput. 35(3), A1233–A1253 (2013). https://doi.org/10.1137/120890144

  11. George, A.: Nested dissection of a regular finite element mesh. SIAM J. Numer. Anal. 10(2), 345–363 (1973). https://doi.org/10.1137/0710032

  12. Guyan, R.J.: Reduction of stiffness and mass matrices. AIAA J. 3(2), 380 (1965). https://doi.org/10.2514/3.2874

  13. Karlstrom, L., Dunham, E.M.: Excitation and resonance of acoustic-gravity waves in a column of stratified, bubbly magma. J. Fluid Mech. 797, 431–470 (2016). https://doi.org/10.1017/jfm.2016.257

  14. Kozdon, J.E., Dunham, E.M., Nordström, J.: Interaction of waves with frictional interfaces using summation-by-parts difference operators: weak enforcement of nonlinear boundary conditions. J. Sci. Comput. 50, 341–367 (2012). https://doi.org/10.1007/s10915-011-9485-3

  15. Kozdon, J.E., Wilcox, L.C.: Stable coupling of nonconforming, high-order finite difference methods. SIAM J. Sci. Comput. 38(2), A923–A952 (2016). https://doi.org/10.1137/15M1022823

  16. Kreiss, H., Scherer, G.: Finite element and finite difference methods for hyperbolic partial differential equations. In: Mathematical Aspects of Finite Elements in Partial Differential Equations; Proceedings of the Symposium, Madison, WI, pp. 195–212 (1974). https://doi.org/10.1016/b978-0-12-208350-1.50012-1

  17. Kreiss, H., Scherer, G.: On the Existence of Energy Estimates for Difference Approximations for Hyperbolic Systems. Technical report, Department of Scientific Computing, Uppsala University (1977)

  18. Lotto, G.C., Dunham, E.M.: High-order finite difference modeling of tsunami generation in a compressible ocean from offshore earthquakes. Comput. Geosci. 19(2), 327–340 (2015). https://doi.org/10.1007/s10596-015-9472-0

  19. Mattsson, K.: Summation by parts operators for finite difference approximations of second-derivatives with variable coefficients. J. Sci. Comput. 51(3), 650–682 (2012). https://doi.org/10.1007/s10915-011-9525-z

  20. Mattsson, K., Carpenter, M.H.: Stable and accurate interpolation operators for high-order multiblock finite difference methods. SIAM J. Sci. Comput. 32(4), 2298–2320 (2010). https://doi.org/10.1137/090750068

  21. Mattsson, K., Ham, F., Iaccarino, G.: Stable boundary treatment for the wave equation on second-order form. J. Sci. Comput. 41(3), 366–383 (2009). https://doi.org/10.1007/s10915-009-9305-1

  22. Mattsson, K., Nordström, J.: Summation by parts operators for finite difference approximations of second derivatives. J. Comput. Phys. 199(2), 503–540 (2004). https://doi.org/10.1016/j.jcp.2004.03.001

  23. Mattsson, K., Parisi, F.: Stable and accurate second-order formulation of the shifted wave equation. Commun. Comput. Phys. 7(1), 103 (2010). https://doi.org/10.4208/cicp.2009.08.135

  24. Nissen, A., Kreiss, G., Gerritsen, M.: High order stable finite difference methods for the Schrödinger equation. J. Sci. Comput. 55(1), 173–199 (2013). https://doi.org/10.1007/s10915-012-9628-1

  25. Nordström, J., Carpenter, M.H.: High-order finite difference methods, multidimensional linear problems, and curvilinear coordinates. J. Comput. Phys. 173(1), 149–174 (2001). https://doi.org/10.1006/jcph.2001.6864

  26. Roache, P.: Verification and Validation in Computational Science and Engineering, 1st edn. Hermosa Publishers, Albuquerque (1998)

  27. Ruggiu, A.A., Weinerfelt, P., Nordström, J.: A new multigrid formulation for high order finite difference methods on summation-by-parts form. J. Comput. Phys. 359, 216–238 (2018). https://doi.org/10.1016/j.jcp.2018.01.011

  28. Strand, B.: Summation by parts for finite difference approximations for \(d/dx\). J. Comput. Phys. 110(1), 47–67 (1994). https://doi.org/10.1006/jcph.1994.1005

  29. Thomée, V.: From finite differences to finite elements: a short history of numerical analysis of partial differential equations. In: Numerical Analysis: Historical Developments in the 20th Century, pp. 361–414. Elsevier, Amsterdam (2001). https://doi.org/10.1016/S0377-0427(00)00507-0

  30. Virta, K., Mattsson, K.: Acoustic wave propagation in complicated geometries and heterogeneous media. J. Sci. Comput. 61(1), 90–118 (2014). https://doi.org/10.1007/s10915-014-9817-1

  31. Wang, S., Virta, K., Kreiss, G.: High order finite difference methods for the wave equation with non-conforming grid interfaces. J. Sci. Comput. 68(3), 1002–1028 (2016). https://doi.org/10.1007/s10915-016-0165-1

Funding

J.E.K. was supported by National Science Foundation Award EAR-1547596. B.A.E. was supported by National Science Foundation Awards EAR-1547603 and EAR-1916992. L.C.W. has no support to declare for this work. The SCEC contribution number for this article is 10992.

Author information

Corresponding author

Correspondence to Jeremy E. Kozdon.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Code availability

The Julia-based computer codes that support the findings of this study are available in the GitHub repository https://github.com/bfam/HybridSBP.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The views expressed in this document are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Approved for public release; distribution unlimited.

Proofs of Key Results

To simplify the presentation, the proofs of the key results in the paper are collected here in the appendix.

1.1 Proof of Theorem 1 (Symmetric Positive Definiteness of the Local Problem)

Here we provide conditions that ensure that the local problem is symmetric positive definite. To do this we need a few auxiliary lemmas.

First we assume that the operators \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) are compatible with the first derivative (volume) operator \({\varvec{{D}}}\) in the sense of Mattsson [19, Definition 2.4]:

Assumption 1

(Remainder assumption) The matrices \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) satisfy the following remainder equalities:

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }&= \left( {\varvec{{I}}} \otimes {\varvec{{D}}}^{T}\right) {\tilde{\varvec{C}}}_{rr} \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) \left( {\varvec{{I}}} \otimes {\varvec{{D}}}\right) + {\tilde{\varvec{R}}}_{rr}^{\left( c_{rr}\right) },\\ {\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }&= \left( {\varvec{{D}}}^{T} \otimes {\varvec{{I}}}\right) {\tilde{\varvec{C}}}_{ss} \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) \left( {\varvec{{D}}} \otimes {\varvec{{I}}}\right) + {\tilde{\varvec{R}}}_{ss}^{\left( c_{ss}\right) }, \end{aligned} \end{aligned}$$

where \({\tilde{\varvec{R}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{R}}}_{ss}^{\left( c_{ss}\right) }\) are symmetric positive semidefinite matrices and that

$$\begin{aligned} {\tilde{\varvec{1}}} \in \mathrm{null}\left( {\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\right) , \qquad {\tilde{\varvec{1}}} \in \mathrm{null}\left( {\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\right) . \end{aligned}$$

The assumption on the nullspace was not part of the original assumption from Mattsson [19], but it is reasonable for a consistent approximation of the second derivative. The operators used in Sect. 5 satisfy the Remainder Assumption [19].

We also utilize the following lemma from Virta and Mattsson [30, Lemma 3] which relates the \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) to boundary derivative operators \({\varvec{{d}}}_{0}\) and \({\varvec{{d}}}_{N}\):

Lemma 1

(Borrowing Lemma) The matrices \({\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }\) satisfy the following borrowing equalities:

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{A}}}_{rr}^{\left( c_{rr}\right) }&= h\beta \left( {\varvec{{I}}} \otimes {\varvec{{d}}}_{0}\right) {\varvec{{H}}}{\varvec{{\mathcal {C}}}}_{rr}^{0:} \left( {\varvec{{I}}} \otimes {\varvec{{d}}}_{0}^{T}\right) \\&\quad + h\beta \left( {\varvec{{I}}} \otimes {\varvec{{d}}}_{N}\right) {\varvec{{H}}}{\varvec{{\mathcal {C}}}}_{rr}^{N:} \left( {\varvec{{I}}} \otimes {\varvec{{d}}}_{N}^{T}\right) +{\tilde{\varvec{\mathcal {A}}}}_{rr}^{\left( c_{rr}\right) },\\ {\tilde{\varvec{A}}}_{ss}^{\left( c_{ss}\right) }&= h\beta \left( {\varvec{{d}}}_{0} \otimes {\varvec{{I}}}\right) {\varvec{{H}}}{\varvec{{\mathcal {C}}}}_{ss}^{:0} \left( {\varvec{{d}}}_{0}^{T} \otimes {\varvec{{I}}}\right) \\&\quad + h\beta \left( {\varvec{{d}}}_{N} \otimes {\varvec{{I}}}\right) {\varvec{{H}}}{\varvec{{\mathcal {C}}}}_{ss}^{:N} \left( {\varvec{{d}}}_{N}^{T} \otimes {\varvec{{I}}}\right) +{\tilde{\varvec{\mathcal {A}}}}_{ss}^{\left( c_{ss}\right) }. \end{aligned} \end{aligned}$$

Here \({\tilde{\varvec{\mathcal {A}}}}_{rr}^{\left( c_{rr}\right) }\) and \({\tilde{\varvec{\mathcal {A}}}}_{ss}^{\left( c_{ss}\right) }\) are symmetric positive semidefinite matrices and the parameter \(\beta \) depends on the order of the operators but is independent of N. The diagonal matrices \({\varvec{{\mathcal {C}}}}_{rr}^{0:}\), \({\varvec{{\mathcal {C}}}}_{rr}^{N:}\), \({\varvec{{\mathcal {C}}}}_{ss}^{:0}\), and \({\varvec{{\mathcal {C}}}}_{ss}^{:N}\) have nonzero elements:

$$\begin{aligned} \begin{aligned} {\left[ {\varvec{{\mathcal {C}}}}_{rr}^{0:}\right] }_{jj}&= \min _{k=0,\dots ,l} {\left\{ c_{rr}\right\} }_{kj},\quad&{\left[ {\varvec{{\mathcal {C}}}}_{rr}^{N:}\right] }_{jj}&= \min _{k=N-l,\dots ,N} {\left\{ c_{rr}\right\} }_{kj},\\ {\left[ {\varvec{{\mathcal {C}}}}_{ss}^{:0}\right] }_{ii}&= \min _{k=0,\dots ,l} {\left\{ c_{ss}\right\} }_{ik},&{\left[ {\varvec{{\mathcal {C}}}}_{ss}^{:N}\right] }_{ii}&= \min _{k=N-l,\dots ,N} {\left\{ c_{ss}\right\} }_{ik}, \end{aligned} \end{aligned}$$
(22)

where l is a parameter that depends on the order of the scheme and the notation \({\{\cdot \}}_{ij}\) denotes that the grid function inside the bracket is evaluated at grid point ij.
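The boundary coefficient minima in (22) amount to column-wise (or row-wise) minima of the coefficient grid function over the first and last \(l+1\) grid lines. The following sketch uses illustrative values of N, l, and a simple monotone coefficient; it is not the paper's code.

```python
import numpy as np

# Illustrative grid and Borrowing Lemma parameter (not from Table 4).
N, l = 8, 2
r = np.linspace(0.0, 1.0, N + 1)
# Coefficient grid function c_rr(i, j) = 1 + r_i + r_j on the (N+1)^2 grid.
c_rr = 1.0 + np.add.outer(r, r)

# Diagonal of C_rr^{0:}: for each j, minimum of c_rr over k = 0, ..., l.
C0 = c_rr[: l + 1, :].min(axis=0)
# Diagonal of C_rr^{N:}: for each j, minimum of c_rr over k = N-l, ..., N.
CN = c_rr[N - l :, :].min(axis=0)

# Since this c_rr increases away from the r = 0 face, the minima sit on
# the first retained grid line in each range.
assert np.allclose(C0, c_rr[0, :])
assert np.allclose(CN, c_rr[N - l, :])
```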

The values of \(\beta \) and l used in the Borrowing Lemma (Lemma 1) for the operators used in this work are given in Table 4.

Table 4 Borrowing Lemma parameters l and \(\beta \) for operators used in this work [30, Table 1]

We additionally make the following linearity assumption (which the operators we use satisfy) concerning the operators' dependence on the variable coefficients, as well as an assumption concerning the symmetric positive definiteness of the variable coefficient matrix at each grid point.

Assumption 2

The matrices \({\tilde{\varvec{A}}}_{rr}^{(c_{rr})}\) and \({\tilde{\varvec{A}}}_{ss}^{(c_{ss})}\) depend linearly on the coefficient grid functions \(c_{rr}\) and \(c_{ss}\) so that they can be decomposed as

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{A}}}_{rr}^{(c_{rr})}&= {\tilde{\varvec{A}}}_{rr}^{(c_{rr}-\delta )} + {\tilde{\varvec{A}}}_{rr}^{(\delta )},\\ {\tilde{\varvec{A}}}_{ss}^{(c_{ss})}&= {\tilde{\varvec{A}}}_{ss}^{(c_{ss}-\delta )} + {\tilde{\varvec{A}}}_{ss}^{(\delta )}, \end{aligned} \end{aligned}$$

where \(\delta \) is a grid function.

Assumption 3

At every grid point the grid functions \(c_{rr}\), \(c_{ss}\), and \(c_{rs} = c_{sr}\) satisfy

$$\begin{aligned} c_{rr}> 0,\qquad c_{ss}> 0,\qquad c_{rr} c_{ss} > c_{rs}^2 \end{aligned}$$

which implies that the matrix

$$\begin{aligned} C = \begin{bmatrix} c_{rr} &{}\quad c_{rs}\\ c_{rs} &{}\quad c_{ss} \end{bmatrix} \end{aligned}$$

is symmetric positive definite with eigenvalues

$$\begin{aligned} \begin{aligned} \psi _{\max } = \frac{1}{2}\left( c_{rr} + c_{ss} + \sqrt{{\left( c_{rr}-c_{ss}\right) }^{2} + 4c_{rs}^{2}}\right) ,\\ \psi _{\min } = \frac{1}{2}\left( c_{rr} + c_{ss} - \sqrt{{\left( c_{rr}-c_{ss}\right) }^{2} + 4c_{rs}^{2}}\right) . \end{aligned} \end{aligned}$$
(23)
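The eigenvalues in (23) follow from the quadratic formula applied to the characteristic polynomial of C. A quick numerical check against a direct eigendecomposition, with illustrative coefficient values satisfying Assumption 3, is:

```python
import numpy as np

# Illustrative coefficient values satisfying Assumption 3.
c_rr, c_ss, c_rs = 2.0, 1.5, 0.8
assert c_rr > 0 and c_ss > 0 and c_rr * c_ss > c_rs**2

# Closed-form eigenvalues (23).
disc = np.sqrt((c_rr - c_ss) ** 2 + 4 * c_rs**2)
psi_max = 0.5 * (c_rr + c_ss + disc)
psi_min = 0.5 * (c_rr + c_ss - disc)

# Compare with a direct symmetric eigendecomposition (ascending order).
C = np.array([[c_rr, c_rs], [c_rs, c_ss]])
lo, hi = np.linalg.eigvalsh(C)
assert np.isclose(psi_min, lo) and np.isclose(psi_max, hi)
assert psi_min > 0  # C is symmetric positive definite under Assumption 3
```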

We now state the following lemma, which allows us to separate \({\tilde{\varvec{A}}}\) into three symmetric positive semidefinite matrices by peeling off \(\psi _{\min }\) at every grid point.

Lemma 2

The matrix \({\tilde{\varvec{A}}}\), defined by (12b), can be written in the form

$$\begin{aligned} {\tilde{\varvec{A}}} = {\tilde{\varvec{\mathcal {A}}}} + {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} + {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}, \end{aligned}$$

where \({\tilde{\varvec{\mathcal {A}}}}\), \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\), and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) are symmetric positive semidefinite matrices. Here \(\psi _{\min }\) is the grid function defined by (23). Furthermore the nullspace of \({\tilde{\varvec{\mathcal {A}}}}\) is \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\), where \({\tilde{\varvec{1}}}\) is the vector of ones.

Proof

By Assumption 2 we can write

$$\begin{aligned} {\tilde{\varvec{A}}}&= {\tilde{\varvec{A}}}^{\left( c_{rr}-\psi _{\min }\right) }_{rr} + {\tilde{\varvec{A}}}^{\left( c_{ss}-\psi _{\min }\right) }_{ss} + {\tilde{\varvec{A}}}^{\left( c_{rs}\right) }_{rs} + {\tilde{\varvec{A}}}^{\left( c_{sr}\right) }_{sr} + {\tilde{\varvec{A}}}^{\left( \psi _{\min }\right) }_{rr} + {\tilde{\varvec{A}}}^{\left( \psi _{\min }\right) }_{ss}. \end{aligned}$$

The matrix

$$\begin{aligned} {\tilde{\varvec{\mathcal {A}}}}&= {\tilde{\varvec{A}}}^{\left( c_{rr}-\psi _{\min }\right) }_{rr} + {\tilde{\varvec{A}}}^{\left( c_{ss}-\psi _{\min }\right) }_{ss} + {\tilde{\varvec{A}}}^{\left( c_{rs}\right) }_{rs} + {\tilde{\varvec{A}}}^{\left( c_{sr}\right) }_{sr} \end{aligned}$$

is clearly symmetric by construction. To show that the matrix is positive semidefinite we note that

$$\begin{aligned} {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}^{\left( c_{rr}-\psi _{\min }\right) }_{rr}{\tilde{\varvec{u}}}&\ge {\tilde{\varvec{u}}}_{r}^{T} \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) \left( {\tilde{\varvec{C}}}_{rr}-{\tilde{\varvec{\psi }}}_{\min }\right) {\tilde{\varvec{u}}}_{r}, \end{aligned}$$
(24)
$$\begin{aligned} {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}^{\left( c_{ss}-\psi _{\min }\right) }_{ss}{\tilde{\varvec{u}}}&\ge {\tilde{\varvec{u}}}_{s}^{T} \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) \left( {\tilde{\varvec{C}}}_{ss}-{\tilde{\varvec{\psi }}}_{\min }\right) {\tilde{\varvec{u}}}_{s}, \end{aligned}$$
(25)
$$\begin{aligned} {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}^{\left( c_{rs}\right) }_{rs}{\tilde{\varvec{u}}}&= {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}^{\left( c_{sr}\right) }_{sr}{\tilde{\varvec{u}}} ={\tilde{\varvec{u}}}_{r}^{T} \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) {\tilde{\varvec{C}}}_{rs} {\tilde{\varvec{u}}}_{s}. \end{aligned}$$
(26)

Here we have defined the vectors \({\tilde{\varvec{u}}}_{r} = \left( {\varvec{{I}}} \otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}}\) and \({\tilde{\varvec{u}}}_{s} = \left( {\varvec{{D}}} \otimes {\varvec{{I}}}\right) {\tilde{\varvec{u}}}\). Inequalities (24) and (25) follow from the Remainder Assumption and equality (26) follows from (2) and the symmetry assumption (\(c_{rs} = c_{sr}\)). Using relationships (24)–(26) we have that

$$\begin{aligned} {\tilde{\varvec{u}}}^{T}{\tilde{\varvec{\mathcal {A}}}}{\tilde{\varvec{u}}}&\ge \sum _{i=0}^{N} \sum _{j=0}^{N} {\left\{ \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) \right\} }_{ij} {\left\{ \begin{bmatrix} u_{r}\\ u_{s} \end{bmatrix}^{T} \begin{bmatrix} c_{rr} - \psi _{\min } &{}\quad c_{rs}\\ c_{rs} &{}\quad c_{ss} - \psi _{\min } \end{bmatrix} \begin{bmatrix} u_{r}\\ u_{s} \end{bmatrix} \right\} }_{i,j}, \end{aligned}$$
(27)

where the notation \({\left\{ \cdot \right\} }_{i,j}\) denotes that the grid function inside the brackets is evaluated at grid point ij. The \(2\times 2\) matrix in (27) is the shift of the matrix C by its minimum eigenvalue, thus by Assumption 3 is symmetric positive semidefinite. It then follows that each term in the summation is non-negative and the matrix \({\tilde{\varvec{\mathcal {A}}}}\) is symmetric positive semidefinite.
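The key step above, shifting the pointwise coefficient matrix C by its minimum eigenvalue, can be checked numerically: the shifted matrix remains symmetric positive semidefinite and becomes exactly singular. The coefficient values below are illustrative.

```python
import numpy as np

# Illustrative pointwise coefficient matrix satisfying Assumption 3.
c_rr, c_ss, c_rs = 2.0, 1.5, 0.8
C = np.array([[c_rr, c_rs], [c_rs, c_ss]])
psi_min = np.linalg.eigvalsh(C)[0]  # smallest eigenvalue, as in (23)

# Shift C by psi_min: the result is symmetric positive SEMIdefinite,
# with psi_min "peeled off" into a zero eigenvalue, as used in (27).
C_shift = C - psi_min * np.eye(2)
eigs = np.linalg.eigvalsh(C_shift)
assert np.all(eigs >= -1e-12)    # positive semidefinite
assert np.isclose(eigs[0], 0.0)  # exactly singular after the shift
```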

The matrices \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\) and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) are clearly symmetric by construction, with positive semidefiniteness following from the positivity of \(\psi _{\min }\) and the Remainder Assumption.

We now show that \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\). For the right-hand side of (27) to be zero it is required that \({\left( u_{r}\right) }_{i,j} = {\left( u_{s}\right) }_{i,j} = 0\) for all ij. The only way for this to happen is if \({\tilde{\varvec{u}}} = \alpha {\tilde{\varvec{1}}}\) for some constant \(\alpha \). Thus we have shown that \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) \subseteq \mathrm{span}\{{\tilde{\varvec{1}}}\}\). To show equality we note that by Assumption 1 and the structure of \({\tilde{\varvec{A}}}_{rs}^{(C_{rs})}\) and \({\tilde{\varvec{A}}}_{sr}^{(C_{sr})}\) given in (2), the constant vector \({\tilde{\varvec{1}}} \in \mathrm{null}({\tilde{\varvec{\mathcal {A}}}})\). Together the above two results imply that \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\). \(\square \)

We now state the following lemma concerning \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\) and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\), which combines the Remainder Assumption and the Borrowing Lemma to provide terms that can be used to bound indefinite terms in the local operator \({\tilde{\varvec{M}}}\).

Lemma 3

The matrices \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\) and \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) satisfy the following inequalities:

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} {\tilde{\varvec{u}}}&\ge \frac{1}{2}\left[ h\beta {\left( {\varvec{{v}}}_{r}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{v}}}_{r}^{0:} + h\beta {\left( {\varvec{{v}}}_{r}^{N:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{N:} {\varvec{{v}}}_{r}^{N:} \right] \\&\quad + \frac{1}{2}\left[ h \alpha {\left( {\varvec{{w}}}_{r}^{:0}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:0} {\varvec{{w}}}_{r}^{:0} + h \alpha {\left( {\varvec{{w}}}_{r}^{:N}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:N} {\varvec{{w}}}_{r}^{:N} \right] ,\\ {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })} {\tilde{\varvec{u}}}&\ge \frac{1}{2}\left[ h \alpha {\left( {\varvec{{w}}}_{s}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{w}}}_{s}^{0:} + h \alpha {\left( {\varvec{{w}}}_{s}^{N:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{N:} {\varvec{{w}}}_{s}^{N:} \right] \\&\quad + \frac{1}{2}\left[ h\beta {\left( {\varvec{{v}}}_{s}^{:0}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:0} {\varvec{{v}}}_{s}^{:0} + h\beta {\left( {\varvec{{v}}}_{s}^{:N}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:N} {\varvec{{v}}}_{s}^{:N} \right] , \end{aligned} \end{aligned}$$

with \(\alpha = \min \left\{ {\left\{ {\varvec{{H}}}\right\} }_{00}, {\left\{ {\varvec{{H}}}\right\} }_{NN}\right\} / h\), i.e., the unscaled corner value in the H-matrix, and the (boundary) derivative vectors are defined as

$$\begin{aligned} \begin{aligned} {\varvec{{v}}}_{r}^{0:}&= \left( {\varvec{{I}}} \otimes {\varvec{{d}}}_{0}^{T}\right) {\tilde{\varvec{u}}},\quad&{\varvec{{v}}}_{r}^{N:}&= \left( {\varvec{{I}}} \otimes {\varvec{{d}}}_{N}^{T}\right) {\tilde{\varvec{u}}},\\ {\varvec{{w}}}_{r}^{:0}&= \left( {\varvec{{e}}}_{0}^{T} \otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}},&{\varvec{{w}}}_{r}^{:N}&= \left( {\varvec{{e}}}_{N}^{T} \otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}},\\ {\varvec{{w}}}_{s}^{0:}&= \left( {\varvec{{D}}} \otimes {\varvec{{e}}}_{0}^{T}\right) {\tilde{\varvec{u}}},&{\varvec{{w}}}_{s}^{N:}&= \left( {\varvec{{D}}} \otimes {\varvec{{e}}}_{N}^{T}\right) {\tilde{\varvec{u}}},\\ {\varvec{{v}}}_{s}^{:0}&= \left( {\varvec{{d}}}_{0}^{T} \otimes {\varvec{{I}}}\right) {\tilde{\varvec{u}}},&{\varvec{{v}}}_{s}^{:N}&= \left( {\varvec{{d}}}_{N}^{T} \otimes {\varvec{{I}}}\right) {\tilde{\varvec{u}}}. \end{aligned} \end{aligned}$$

The diagonal matrices \({\tilde{\varvec{\varPsi }}}_{\min }^{0:}\), \({\tilde{\varvec{\varPsi }}}_{\min }^{N:}\), \({\tilde{\varvec{\varPsi }}}_{\min }^{:0}\), and \({\tilde{\varvec{\varPsi }}}_{\min }^{:N}\) are defined by (22) using \(\psi _{\min }\).
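The Kronecker-product structure of the volume derivative vectors (e.g. \({\tilde{\varvec{u}}}_{r} = \left( {\varvec{{I}}} \otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}}\)) can be sketched on a small grid. In the sketch below a simple second-order finite difference matrix stands in for an SBP first-derivative operator, and the ordering convention (the r index varying fastest) is an assumption for illustration only.

```python
import numpy as np

# Stand-in first-derivative operator D on N+1 points: second-order
# centered in the interior, first-order one-sided at the boundaries
# (exact for linear functions, which is all this sketch needs).
N = 6
h = 1.0 / N
D = np.zeros((N + 1, N + 1))
D[0, :2] = [-1.0, 1.0]
D[-1, -2:] = [-1.0, 1.0]
for i in range(1, N):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= h
I = np.eye(N + 1)

# Grid function u(r, s) = r + 2s, stored with the r index varying fastest.
r = np.linspace(0.0, 1.0, N + 1)
s = np.linspace(0.0, 1.0, N + 1)
u = (r[None, :] + 2.0 * s[:, None]).ravel()

u_r = np.kron(I, D) @ u  # derivative in the r direction
u_s = np.kron(D, I) @ u  # derivative in the s direction
assert np.allclose(u_r, 1.0)
assert np.allclose(u_s, 2.0)
```

With this convention, \({\varvec{{I}}} \otimes {\varvec{{D}}}\) is block diagonal and differentiates along grid lines of constant s, while \({\varvec{{D}}} \otimes {\varvec{{I}}}\) couples across blocks and differentiates along grid lines of constant r.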

Proof

We will prove the relationship for \({\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })}\); the proof for \({\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\) is analogous. First we note that by the Borrowing Lemma it immediately follows that

$$\begin{aligned} {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} {\tilde{\varvec{u}}} \ge h\beta {\left( {\varvec{{v}}}_{r}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{v}}}_{r}^{0:} + h\beta {\left( {\varvec{{v}}}_{r}^{N:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{N:} {\varvec{{v}}}_{r}^{N:}. \end{aligned}$$
(28)

Additionally by the Remainder Assumption it follows that

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{u}}}^{T} {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} {\tilde{\varvec{u}}}&\ge {\tilde{\varvec{u}}}^{T}\left( {\varvec{{I}}}\otimes {\varvec{{D}}}^{T}\right) \left( {\varvec{{H}}} \otimes {\varvec{{H}}}\right) {\tilde{\varvec{\varPsi }}}_{\min } \left( {\varvec{{I}}}\otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}}\\&= \sum _{j=0}^{N} {\left\{ {\varvec{{H}}}\right\} }_{jj} {\tilde{\varvec{u}}}^{T}\left( {\varvec{{e}}}_{j}\otimes {\varvec{{D}}}^{T}\right) {\varvec{{H}}}{\tilde{\varvec{\varPsi }}}_{\min }^{:j} \left( {\varvec{{e}}}_{j}^{T}\otimes {\varvec{{D}}}\right) {\tilde{\varvec{u}}}\\&\ge \alpha h {\left( {\varvec{{w}}}_{r}^{:0}\right) }^{T} {\varvec{{H}}}{\tilde{\varvec{\varPsi }}}_{\min }^{:0} {\varvec{{w}}}_{r}^{:0} + \alpha h {\left( {\varvec{{w}}}_{r}^{:N}\right) }^{T} {\varvec{{H}}}{\tilde{\varvec{\varPsi }}}_{\min }^{:N} {\varvec{{w}}}_{r}^{:N}; \end{aligned} \end{aligned}$$
(29)

since each term of the summation is nonnegative, the last inequality follows by dropping all but the \(j=0\) and \(j=N\) terms of the summation. The result follows immediately by averaging (28) and (29). \(\square \)

We can now prove Theorem 1 on the symmetric positive definiteness of \({\tilde{\varvec{M}}}\) as defined by (12a).

Proof

The structure of (12a) directly implies that \({\tilde{\varvec{M}}}\) is symmetric; in the remainder of the proof we show that \({\tilde{\varvec{M}}}\) is also positive definite.

We begin by recalling the definitions of \({\tilde{\varvec{C}}}_{k}\) and \({\varvec{{F}}}_{k}\) in (12) which allows us to write

$$\begin{aligned} {\tilde{\varvec{C}}}_{k} = {\varvec{{F}}}_{k} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1}{\varvec{{F}}}_{k}^{T} - {\varvec{{G}}}_{k}^{T}{\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1}{\varvec{{G}}}_{k}. \end{aligned}$$
(30)

Now considering the \({\tilde{\varvec{M}}}\) weighted inner product we have that

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{u}}}^{T}{\tilde{\varvec{M}}}{\tilde{\varvec{u}}}&= {\tilde{\varvec{u}}}^{T} \left( {\tilde{\varvec{A}}} + \sum _{k=1}^{4}{\tilde{\varvec{C}}}_{k}\right) {\tilde{\varvec{u}}}\\&= {\tilde{\varvec{u}}}^{T} \left( {\tilde{\varvec{\mathcal {A}}}} + \sum _{k=1}^{4} {\varvec{{F}}}_{k} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1} {\varvec{{F}}}_{k}^{T} \right) {\tilde{\varvec{u}}}\\&\quad +\, {\tilde{\varvec{u}}}^{T} \left( {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} + {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })} - \sum _{k=1}^{4} {\varvec{{G}}}_{k}^{T}{\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1}{\varvec{{G}}}_{k} \right) {\tilde{\varvec{u}}}. \end{aligned} \end{aligned}$$
(31)

Here we have used Lemma 2 to split \({\tilde{\varvec{A}}}\).

If \({\varvec{{\tau }}}_{k} > 0\) then it follows for all \({\tilde{\varvec{u}}}\) that

$$\begin{aligned} {\tilde{\varvec{u}}}^{T} {\varvec{{F}}}_{k} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1} {\varvec{{F}}}_{k}^{T}{\tilde{\varvec{u}}} \ge 0. \end{aligned}$$

Additionally, if \({\tilde{\varvec{u}}} = c {\tilde{\varvec{1}}}\) for some constant \(c \ne 0\) then it is a strict inequality since

$$\begin{aligned} {\varvec{{F}}}_{k}^{T} {\tilde{\varvec{1}}} = -{\varvec{{H}}} {\varvec{{\tau }}}_{k}{\varvec{{1}}} \ne {\varvec{{0}}}. \end{aligned}$$

Since by Lemma 2 the matrix \({\tilde{\varvec{\mathcal {A}}}}\) is symmetric positive semidefinite with \(\mathrm{null}({\tilde{\varvec{\mathcal {A}}}}) = \mathrm{span}\{{\tilde{\varvec{1}}}\}\), this implies that the matrix

$$\begin{aligned} {\tilde{\varvec{\mathcal {A}}}} + \sum _{k=1}^{4} {\varvec{{F}}}_{k} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1} {\varvec{{F}}}_{k}^{T} \succ 0, \end{aligned}$$

that is, the matrix is positive definite. To complete the proof all that remains is to show that the remaining matrix in (31) is positive semidefinite, namely

$$\begin{aligned} {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} + {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })} - \sum _{k=1}^{4} {\varvec{{G}}}_{k}^{T}{\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1}{\varvec{{G}}}_{k} \succeq 0. \end{aligned}$$

Considering the quantity \({\tilde{\varvec{u}}}^{T} \left( {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} + {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })}\right) {\tilde{\varvec{u}}}\) and using Lemma 3 we can write:

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{u}}}^{T}&\left( {\tilde{\varvec{A}}}_{rr}^{(\psi _{\min })} + {\tilde{\varvec{A}}}_{ss}^{(\psi _{\min })} \right) {\tilde{\varvec{u}}}\\&\quad \ge \frac{1}{2} \left( h\beta {\left( {\varvec{{v}}}_{r}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{v}}}_{r}^{0:} + h \alpha {\left( {\varvec{{w}}}_{s}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{w}}}_{s}^{0:} \right) \\&\qquad + \frac{1}{2} \left( h\beta {\left( {\varvec{{v}}}_{r}^{N:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{N:} {\varvec{{v}}}_{r}^{N:} + h \alpha {\left( {\varvec{{w}}}_{s}^{N:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{N:} {\varvec{{w}}}_{s}^{N:} \right) \\&\qquad + \frac{1}{2} \left( h\alpha {\left( {\varvec{{w}}}_{r}^{:0}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:0} {\varvec{{w}}}_{r}^{:0} + h \beta {\left( {\varvec{{v}}}_{s}^{:0}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:0} {\varvec{{v}}}_{s}^{:0} \right) \\&\qquad + \frac{1}{2} \left( h\alpha {\left( {\varvec{{w}}}_{r}^{:N}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:N} {\varvec{{w}}}_{r}^{:N} + h \beta {\left( {\varvec{{v}}}_{s}^{:N}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{:N} {\varvec{{v}}}_{s}^{:N} \right) . \end{aligned} \end{aligned}$$
(32)

Now considering the \(k=1\) term of the last summation in (31) we have

$$\begin{aligned} {\tilde{\varvec{u}}}^{T} {\varvec{{G}}}_{1}^{T}{\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{1}^{-1}{\varvec{{G}}}_{1} {\tilde{\varvec{u}}} = {\left( {\varvec{{C}}}_{rr}^{0:}{\varvec{{v}}}_{r}^{0:} + {\varvec{{C}}}_{rs}^{0:} {\varvec{{w}}}_{s}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\tau }}}_{1}^{-1} \left( {\varvec{{C}}}_{rr}^{0:}{\varvec{{v}}}_{r}^{0:} + {\varvec{{C}}}_{rs}^{0:} {\varvec{{w}}}_{s}^{0:}\right) . \end{aligned}$$
(33)

We now use the positive term related to face 1 of (32) to bound the negative contribution from (33). Performing this subtraction for face 1 gives:

$$\begin{aligned} \begin{aligned}&\frac{1}{2} \left( h\beta {\left( {\varvec{{v}}}_{r}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{v}}}_{r}^{0:} + h \alpha {\left( {\varvec{{w}}}_{s}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\varPsi }}}_{\min }^{0:} {\varvec{{w}}}_{s}^{0:} \right) \\&\quad - {\left( {\varvec{{C}}}_{rr}^{0:}{\varvec{{v}}}_{r}^{0:} + {\varvec{{C}}}_{rs}^{0:} {\varvec{{w}}}_{s}^{0:}\right) }^{T} {\varvec{{H}}}{\varvec{{\tau }}}_{1}^{-1} \left( {\varvec{{C}}}_{rr}^{0:}{\varvec{{v}}}_{r}^{0:} + {\varvec{{C}}}_{rs}^{0:} {\varvec{{w}}}_{s}^{0:}\right) \\&= \begin{bmatrix} {\varvec{{\hat{v}}}}_{r}^{0:}\\ {\varvec{{\hat{w}}}}_{s}^{0:} \end{bmatrix}^{T} \left( {\varvec{{I}}}_{2\times 2} \otimes {\varvec{{H}}}\right) \begin{bmatrix} {\varvec{{I}}} - {\left( {\varvec{{\hat{C}}}}_{rr}^{0:}\right) }^{2} {\varvec{{\tau }}}_{1}^{-1} &{} -{\varvec{{\hat{C}}}}_{rr}^{0:}{\varvec{{\tau }}}_{1}^{-1}{\varvec{{\hat{C}}}}_{rs}^{0:}\\ -{\varvec{{\hat{C}}}}_{rs}^{0:}{\varvec{{\tau }}}_{1}^{-1}{\varvec{{\hat{C}}}}_{rr}^{0:}&{} {\varvec{{I}}} - {\left( {\varvec{{\hat{C}}}}_{rs}^{0:}\right) }^{2} {\varvec{{\tau }}}_{1}^{-1} \end{bmatrix} \begin{bmatrix} {\varvec{{\hat{v}}}}_{r}^{0:}\\ {\varvec{{\hat{w}}}}_{s}^{0:} \end{bmatrix} \\ {}&= \sum _{j=0}^{N} H_{s}^{j} \begin{bmatrix} \hat{v}_{r}^{0j}\\ \hat{w}_{s}^{0j} \end{bmatrix}^{T} \begin{bmatrix} 1 - \frac{{\left( \hat{C}_{rr}^{0j}\right) }^{2}}{\tau ^{j}_{1}} &{} -\frac{\hat{C}_{rr}^{0j}\hat{C}_{rs}^{0j}}{\tau ^{j}_{1}}\\ -\frac{\hat{C}_{rs}^{0j}\hat{C}_{rr}^{0j}}{\tau ^{j}_{1}}&{} 1 - \frac{{\left( \hat{C}_{rs}^{0j}\right) }^{2}}{\tau ^{j}_{1}} \end{bmatrix} \begin{bmatrix} \hat{v}_{r}^{0j}\\ \hat{w}_{s}^{0j} \end{bmatrix}. \end{aligned} \end{aligned}$$
(34)

In the above calculation we used the fact that \({\varvec{{H}}}\), \({\varvec{{\tau }}}_{1}\), \({\varvec{{C}}}_{rr}^{0:}\), and \({\varvec{{C}}}_{rs}^{0:}\) are diagonal, and we made the following definitions:

$$\begin{aligned} \begin{aligned} \hat{v}_{r}^{0j}&= v_{r}^{0j} \sqrt{\frac{1}{2}h\beta \varPsi _{\min }^{0j}},&\quad \hat{C}_{rr}^{0j}&= C_{rr}^{0j}\sqrt{\frac{2}{h\beta \varPsi _{\min }^{0j}} },\\ \hat{w}_{s}^{0j}&= w_{s}^{0j} \sqrt{\frac{1}{2}h\alpha \varPsi _{\min }^{0j}},&\quad \hat{C}_{rs}^{0j}&= C_{rs}^{0j}\sqrt{\frac{2}{h\alpha \varPsi _{\min }^{0j}} }. \end{aligned} \end{aligned}$$

The eigenvalues of the matrix in (34) are:

$$\begin{aligned} \mu _{1} = 1,\quad \mu _{2} = 1 - \frac{{\left( \hat{C}_{rr}^{0j}\right) }^{2} + {\left( \hat{C}_{rs}^{0j}\right) }^{2}}{\tau ^{j}_{1}}. \end{aligned}$$

The first eigenvalue \(\mu _{1}\) is clearly positive, and \(\mu _{2}\) will be positive if:

$$\begin{aligned} \tau _{1}^{j} > {\left( \hat{C}_{rr}^{0j}\right) }^2 + {\left( \hat{C}_{rs}^{0j}\right) }^2 = \frac{2{\left( C_{rr}^{0j}\right) }^2}{h\beta \varPsi _{\min }^{0j}} + \frac{2{\left( C_{rs}^{0j}\right) }^2}{h\alpha \varPsi _{\min }^{0j}}. \end{aligned}$$
(35a)
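The condition (35a) can be checked numerically. The following Python sketch (the scaled coefficient values \(\hat{C}_{rr}^{0j}\), \(\hat{C}_{rs}^{0j}\) and the penalty are hypothetical) verifies that the \(2\times 2\) block in (34) has eigenvalues \(\mu_1 = 1\) and \(\mu_2 = 1 - ((\hat{C}_{rr}^{0j})^2 + (\hat{C}_{rs}^{0j})^2)/\tau_1^j\), and is positive definite once the penalty exceeds the bound:

```python
import math

# Hypothetical scaled (hat) coefficients at a single face grid point j.
C_rr_hat, C_rs_hat = 0.8, 0.3

# Penalty chosen to satisfy (35a): tau > C_rr_hat^2 + C_rs_hat^2.
tau = 1.1 * (C_rr_hat**2 + C_rs_hat**2)

# The symmetric 2x2 block appearing in (34).
a11 = 1.0 - C_rr_hat**2 / tau
a12 = -C_rr_hat * C_rs_hat / tau
a22 = 1.0 - C_rs_hat**2 / tau

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
tr, det = a11 + a22, a11 * a22 - a12**2
disc = math.sqrt(tr**2 - 4.0 * det)
mu1, mu2 = (tr + disc) / 2.0, (tr - disc) / 2.0

# The matrix is I minus a rank-one term, so mu1 = 1 exactly and
# mu2 = 1 - (C_rr_hat^2 + C_rs_hat^2)/tau.
assert abs(mu1 - 1.0) < 1e-12
assert abs(mu2 - (1.0 - (C_rr_hat**2 + C_rs_hat**2) / tau)) < 1e-12
assert mu2 > 0.0  # positive definite since tau satisfies (35a)
```

Because the block is the identity minus a rank-one matrix \(\tau^{-1}\varvec{v}\varvec{v}^{T}\) with \(\varvec{v} = (\hat{C}_{rr}^{0j}, \hat{C}_{rs}^{0j})^{T}\), the eigenvalue \(\mu_1 = 1\) is exact for any direction orthogonal to \(\varvec{v}\).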

With \({\varvec{{\tau }}}_{1}\) chosen in this way, all the terms in (34) are non-negative, and thus the face 1 contribution to (31) is positive semidefinite. An identical argument holds for the other faces if:

$$\begin{aligned} \tau _{2}^{j}&> \frac{2{\left( C_{rr}^{Nj}\right) }^2}{h\beta \varPsi _{\min }^{Nj}} + \frac{2{\left( C_{rs}^{Nj}\right) }^2}{h\alpha \varPsi _{\min }^{Nj}}, \end{aligned}$$
(35b)
$$\begin{aligned} \tau _{3}^{i}&> \frac{2{\left( C_{rs}^{i0}\right) }^2}{h\alpha \varPsi _{\min }^{i0}} + \frac{2{\left( C_{ss}^{i0}\right) }^2}{h\beta \varPsi _{\min }^{i0}}, \end{aligned}$$
(35c)
$$\begin{aligned} \tau _{4}^{i}&> \frac{2{\left( C_{rs}^{iN}\right) }^2}{h\alpha \varPsi _{\min }^{iN}} + \frac{2{\left( C_{ss}^{iN}\right) }^2}{h\beta \varPsi _{\min }^{iN}}, \end{aligned}$$
(35d)

and thus \({\tilde{\varvec{M}}}\) is positive definite since \({\tilde{\varvec{u}}}^{T}{\tilde{\varvec{M}}}{\tilde{\varvec{u}}} > 0\) for all \({\tilde{\varvec{u}}} \ne {\tilde{\varvec{0}}}\). \(\square \)
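In an implementation, the bounds (35a)–(35d) give a pointwise recipe for choosing admissible penalty parameters. The Python sketch below is one way this could be coded (the function name, the coefficient values, and the multiplicative safety factor are all hypothetical); it evaluates the face 1 bound (35a) at a single grid point:

```python
def min_penalty(C_a, C_b, h, alpha, beta, psi_min, safety=1.01):
    """Pointwise penalty satisfying the strict bound (35).

    C_a and C_b are the two metric coefficients entering the face bound,
    weighted by beta and alpha respectively; e.g. for face 1 (35a) these
    are C_rr^{0j} and C_rs^{0j}.  The strict inequality is enforced with
    a multiplicative safety factor slightly larger than one.
    """
    return safety * (2.0 * C_a**2 / (h * beta * psi_min)
                     + 2.0 * C_b**2 / (h * alpha * psi_min))

# Hypothetical coefficient values at one point of face 1.
tau1 = min_penalty(C_a=1.2, C_b=0.4, h=0.05, alpha=1.0, beta=1.0, psi_min=0.36)

# The returned penalty strictly exceeds the right-hand side of (35a).
assert tau1 > 2 * 1.2**2 / (0.05 * 0.36) + 2 * 0.4**2 / (0.05 * 0.36)
```

Note that the bound scales like \(1/h\), so the penalties, and with them the conditioning of the local problems, grow under mesh refinement.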

1.2 Proof of Theorem 2 (Positive Definiteness of the Local Problem with Neumann Boundary Conditions)

Here we prove Theorem 2 on the symmetric positive definiteness of \({\tilde{\varvec{M}}}\) with Neumann boundary conditions.

Proof

We begin by considering

$$\begin{aligned} {\tilde{\varvec{u}}}^{T}{\tilde{\varvec{M}}}{\tilde{\varvec{u}}}&= {\tilde{\varvec{u}}}^{T} \left( {\tilde{\varvec{A}}} + {\tilde{\varvec{\mathcal {C}}}}_{1} + {\tilde{\varvec{\mathcal {C}}}}_{2} + {\tilde{\varvec{\mathcal {C}}}}_{3} + {\tilde{\varvec{\mathcal {C}}}}_{4}\right) {\tilde{\varvec{u}}}, \end{aligned}$$

where we define the modified surface matrices \({\tilde{\varvec{\mathcal {C}}}}_{k}\) to be

$$\begin{aligned} {\tilde{\varvec{\mathcal {C}}}}_{k}&= {\tilde{\varvec{C}}}_{k} - {\varvec{{F}}}_{k} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1} {\varvec{{F}}}_{k}^{T} = - {\varvec{{G}}}_{k}^{T} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{k}^{-1} {\varvec{{G}}}_{k}, \end{aligned}$$
(36)

if face k is a Neumann boundary and \({\tilde{\varvec{\mathcal {C}}}}_{k} = {\tilde{\varvec{C}}}_{k}\) otherwise; see the definition of the modified \({\tilde{\varvec{M}}}\) with Neumann boundary conditions (16) and (30). In the proof of Theorem 1 it was shown that terms of the form (36) combine with \({\tilde{\varvec{A}}}\) in a way that is non-negative if \({\varvec{{\tau }}}_{k}\) satisfies (35); see (32) and the discussion following it. Thus \({\tilde{\varvec{u}}}^{T}{\tilde{\varvec{M}}}{\tilde{\varvec{u}}} \ge 0\) for all \({\tilde{\varvec{u}}}\). The inequality is strict for \({\tilde{\varvec{u}}} \ne {\tilde{\varvec{0}}}\) as long as at least one face is a Dirichlet boundary; the argument is the same as that made in the proof of Theorem 1. \(\square \)

1.3 Proof of Theorem 3 and Corollary 2 (Positive Definiteness of the Global Problem)

Proof of Theorem 3

Without loss of generality, we consider a two-block mesh with Dirichlet boundary conditions and a single interface \(f \in \mathcal {F}_{I}\), and assume that the interface connects face \(k^{+}\) of block \(B^{+}\) to face \(k^{-}\) of block \(B^{-}\). Solving for \({\varvec{{\lambda }}}_{f}\) in the global coupling Eq. (18) in terms of \({\tilde{\varvec{u}}}_{B^{+}}\) and \({\tilde{\varvec{u}}}_{B^{-}}\) gives

$$\begin{aligned} {\varvec{{\lambda }}}_{f} = {\varvec{{D}}}_{f}^{-1} \left( \frac{1}{2}{\varvec{{H}}}\left( {\varvec{{\tau }}}_{f, B^{+}} - {\varvec{{\tau }}}_{f, B^{-}} \right) {\varvec{{\delta }}}_{f} -{\varvec{{F}}}_{f,B^{+}}^{T} {\tilde{\varvec{u}}}_{B^{+}} -{\varvec{{F}}}_{f,B^{-}}^{T} {\tilde{\varvec{u}}}_{B^{-}} \right) . \end{aligned}$$

Plugging this expression into the local problem (12) gives

$$\begin{aligned} \begin{aligned}&\left( {\tilde{\varvec{A}}}_{B^{+}} - {\varvec{{F}}}_{f,B^{+}} {\varvec{{D}}}_{f}^{-1} {\varvec{{F}}}_{f,B^{+}}^{T} + \sum _{k=1}^{4} {\tilde{\varvec{C}}}_{k,B^{+}} \right) {\tilde{\varvec{u}}}_{B^{+}}\\&\quad - {\varvec{{F}}}_{f,B^{+}} {\varvec{{D}}}_{f}^{-1} {\varvec{{F}}}_{f,B^{-}}^{T}{\tilde{\varvec{u}}}_{B^{-}} = {\tilde{\varvec{q}}}_{B^{+} {\setminus } f},\\&\left( {\tilde{\varvec{A}}}_{B^{-}} - {\varvec{{F}}}_{f,B^{-}} {\varvec{{D}}}_{f}^{-1} {\varvec{{F}}}_{f,B^{-}}^{T} + \sum _{k=1}^{4} {\tilde{\varvec{C}}}_{k,B^{-}} \right) {\tilde{\varvec{u}}}_{B^{-}}\\&\quad - {\varvec{{F}}}_{f,B^{-}} {\varvec{{D}}}_{f}^{-1} {\varvec{{F}}}_{f,B^{+}}^{T}{\tilde{\varvec{u}}}_{B^{+}} = {\tilde{\varvec{q}}}_{B^{-} {\setminus } f}. \end{aligned} \end{aligned}$$
(37)

Here \({\tilde{\varvec{q}}}_{B^{\pm } {\setminus } f}\) denotes \({\tilde{\varvec{q}}}_{B^{\pm }}\) (see (12d)) with the term associated with face f (which depends on \({\varvec{{\lambda }}}_{f}\)) removed. Using (30), which relates \({\tilde{\varvec{C}}}_{f,B^{\pm }}\) to \({\varvec{{F}}}_{f,B^{\pm }}\), we have that

$$\begin{aligned} \begin{aligned} {\tilde{\varvec{C}}}_{f,B^{\pm }} - {\varvec{{F}}}_{f,B^{\pm }} {\varvec{{D}}}_{f}^{-1} {\varvec{{F}}}_{f,B^{\pm }}^{T}&= {\varvec{{F}}}_{f,B^{\pm }} {\varvec{{H}}}^{-1} \left( {\varvec{{\tau }}}_{f,B^{\pm }}^{-1} - {\left( {\varvec{{\tau }}}_{f,B^{+}} + {\varvec{{\tau }}}_{f,B^{-}} \right) }^{-1} \right) {\varvec{{F}}}_{f,B^{\pm }}^{T}\\&\quad -{\varvec{{G}}}_{k^{\pm },B^{\pm }}^{T} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{f,B^{\pm }}^{-1}{\varvec{{G}}}_{k^{\pm },B^{\pm }}. \end{aligned} \end{aligned}$$

Plugging this into (37), and rewriting the two equations as a single system, gives:

$$\begin{aligned} \left( {\varvec{{\mathbb {A}}}} + {\varvec{{\mathbb {F}}}}\; {\varvec{{\mathbb {T}}}}\; {\varvec{{\mathbb {F}}}}^{T} \right) \begin{bmatrix} {\tilde{\varvec{u}}}_{B^{+}}\\ {\tilde{\varvec{u}}}_{B^{-}} \end{bmatrix} = \begin{bmatrix} {\tilde{\varvec{q}}}_{B^{+}{\setminus } f}\\ {\tilde{\varvec{q}}}_{B^{-}{\setminus } f} \end{bmatrix}, \end{aligned}$$

where we have defined the following matrices:

$$\begin{aligned} {\varvec{{\mathbb {F}}}}&= \begin{bmatrix} {\varvec{{H}}}^{1/2} {\varvec{{F}}}_{f,B^{+}} &{} {\varvec{{0}}}\\ {\varvec{{0}}} &{} {\varvec{{H}}}^{1/2} {\varvec{{F}}}_{f,B^{-}} \end{bmatrix},\\ {\varvec{{\mathbb {T}}}}&= \begin{bmatrix} {\varvec{{\tau }}}_{f,B^{+}}^{-1} - {\left( {\varvec{{\tau }}}_{f,B^{+}} + {\varvec{{\tau }}}_{f,B^{-}} \right) }^{-1}&{} -{\left( {\varvec{{\tau }}}_{f,B^{+}} + {\varvec{{\tau }}}_{f,B^{-}} \right) }^{-1}\\ -{\left( {\varvec{{\tau }}}_{f,B^{+}} + {\varvec{{\tau }}}_{f,B^{-}} \right) }^{-1}&{} {\varvec{{\tau }}}_{f,B^{-}}^{-1} - {\left( {\varvec{{\tau }}}_{f,B^{+}} + {\varvec{{\tau }}}_{f,B^{-}} \right) }^{-1} \end{bmatrix}, \\ {\varvec{{\mathbb {A}}}}&= \begin{bmatrix} \mathbb {A}^{+} &{} {\varvec{{0}}}\\ {\varvec{{0}}} &{} \mathbb {A}^{-} \end{bmatrix},\\ {\varvec{{\mathbb {A}^{\pm }}}}&= {\tilde{\varvec{A}}}_{B^{\pm }} -{\varvec{{G}}}_{k^{\pm },B^{\pm }}^{T} {\varvec{{H}}}^{-1}{\varvec{{\tau }}}_{f,B^{\pm }}^{-1}{\varvec{{G}}}_{k^{\pm },B^{\pm }} + \sum _{\begin{array}{c} k=1\\ k\ne k^{\pm } \end{array}}^{4} {\tilde{\varvec{C}}}_{k,B^{\pm }}. \end{aligned}$$

The matrix \({\varvec{{\mathbb {A}}}}\) is block diagonal, and each of the blocks was shown in the proof of Theorem 1 to be symmetric positive semidefinite. Thus, if \({\varvec{{\mathbb {T}}}}\) is symmetric positive semidefinite, then the whole system is symmetric positive semidefinite. Since the \({\varvec{{\tau }}}_{f,B^{\pm }}\) are diagonal, the eigenvalues of \({\varvec{{\mathbb {T}}}}\) are the same as the eigenvalues of the \(2\times 2\) systems

$$\begin{aligned} {\varvec{{\mathbb {T}}}}^{j}&= \begin{bmatrix} \frac{1}{\tau ^{j}_{f,B^{+}}} - \frac{1}{ \tau ^{j}_{f,B^{+}} + \tau ^{j}_{f,B^{-}} }&{} -\frac{1}{ \tau ^{j}_{f,B^{+}} + \tau ^{j}_{f,B^{-}} }\\ -\frac{1}{ \tau ^{j}_{f,B^{+}} + \tau ^{j}_{f,B^{-}} }&{} \frac{1}{\tau ^{j}_{f,B^{-}}} - \frac{1}{ \tau ^{j}_{f,B^{+}} + \tau ^{j}_{f,B^{-}} } \end{bmatrix}\\&= \frac{1}{ \tau ^{j}_{f,B^{+}} + \tau ^{j}_{f,B^{-}} } \begin{bmatrix} \frac{\tau ^{j}_{f,B^{-}}}{\tau ^{j}_{f,B^{+}}} &{} -1 \\ -1 &{} \frac{\tau ^{j}_{f,B^{+}}}{\tau ^{j}_{f,B^{-}}} \end{bmatrix}, \end{aligned}$$

for each \(j = 0\) to \(N_{f}\) (number of points on the face). The eigenvalues of \({\varvec{{\mathbb {T}}}}^{j}\) are

$$\begin{aligned} \mu _{1}&= 0,&\mu _{2}&= \frac{{\left( \tau ^{j}_{f,B^{+}}\right) }^2 + {\left( \tau ^{j}_{f,B^{-}}\right) }^2}{\tau ^{j}_{f,B^{+}}\,\tau ^{j}_{f,B^{-}} \left( \tau ^{j}_{f,B^{+}} + \tau ^{j}_{f,B^{-}}\right) }, \end{aligned}$$

which shows that each \({\varvec{{\mathbb {T}}}}^{j}\), and hence \({\varvec{{\mathbb {T}}}}\), is positive semidefinite as long as \(\tau _{f,B^{\pm }}^{j} > 0\).
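These eigenvalues can be confirmed numerically. The following Python sketch (the penalty values \(\tau^{j}_{f,B^{+}}\) and \(\tau^{j}_{f,B^{-}}\) are hypothetical) assembles \({\varvec{\mathbb{T}}}^{j}\) and checks that it has one zero eigenvalue and one positive eigenvalue:

```python
import math

tau_p, tau_m = 3.0, 5.0  # hypothetical penalties tau^j_{f,B^+}, tau^j_{f,B^-}

s = tau_p + tau_m
# The 2x2 interface block T^j.
t11 = 1.0 / tau_p - 1.0 / s
t12 = -1.0 / s
t22 = 1.0 / tau_m - 1.0 / s

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
tr, det = t11 + t22, t11 * t22 - t12**2
disc = math.sqrt(max(tr**2 - 4.0 * det, 0.0))
mu1, mu2 = (tr - disc) / 2.0, (tr + disc) / 2.0

# One eigenvalue vanishes (T^j is singular); the other is positive and
# equals (tau_p^2 + tau_m^2) / (tau_p * tau_m * (tau_p + tau_m)).
assert abs(mu1) < 1e-12
assert abs(mu2 - (tau_p**2 + tau_m**2) / (tau_p * tau_m * s)) < 1e-12
assert mu2 > 0.0
```

The zero eigenvalue reflects the fact that \({\varvec{\mathbb{T}}}^{j}\) is singular (its determinant vanishes identically), which is why only positive semidefiniteness, not definiteness, is obtained from the interface terms alone.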

An identical argument holds for each interface \(f \in \mathcal {F}_{I}\); thus the interface treatment guarantees that the global system of equations is symmetric positive semidefinite. Positive definiteness follows as long as at least one face of the mesh has a Dirichlet boundary condition, since only the constant state over the entire domain lies in \(\mathrm{null}({\tilde{\varvec{A}}}_{B})\) for all \(B\in \mathcal {B}\), and this null space is removed whenever some face of the mesh has a Dirichlet boundary condition; see the proof of Theorem 1. \(\square \)

Proof of Corollary 2

Begin by noting that

$$\begin{aligned} \begin{bmatrix} \bar{{\varvec{{M}}}} &{}\quad \bar{{\varvec{{F}}}}\\ \bar{{\varvec{{F}}}}^{T} &{}\quad \bar{{\varvec{{D}}}} \end{bmatrix} = \begin{bmatrix} \bar{{\varvec{{I}}}} &{}\quad \bar{{\varvec{{F}}}}\bar{{\varvec{{D}}}}^{-1}\\ \bar{{\varvec{{0}}}} &{}\quad \bar{{\varvec{{I}}}} \end{bmatrix} \begin{bmatrix} \bar{{\varvec{{M}}}} - \bar{{\varvec{{F}}}} \bar{{\varvec{{D}}}}^{-1}\bar{{\varvec{{F}}}}^{T} &{}\quad \bar{{\varvec{{0}}}}\\ \bar{{\varvec{{0}}}} &{}\quad \bar{{\varvec{{D}}}} \end{bmatrix} \begin{bmatrix} \bar{{\varvec{{I}}}} &{}\quad \bar{{\varvec{{0}}}}\\ \bar{{\varvec{{D}}}}^{-1}\bar{{\varvec{{F}}}}^{T} &{}\quad \bar{{\varvec{{I}}}} \end{bmatrix}. \end{aligned}$$

By Theorem 3 and the structure of \(\bar{{\varvec{{D}}}}\), the block diagonal center matrix is symmetric positive definite. Since the outer two matrices are nonsingular and are transposes of one another, the factorization is a congruence transformation, and it immediately follows that the global system matrix is symmetric positive definite.

Since the global system matrix and \(\bar{{\varvec{{M}}}}\) are symmetric positive definite, symmetric positive definiteness of the Schur complement of the \(\bar{{\varvec{{M}}}}\) block follows directly from the decomposition

$$\begin{aligned} \begin{bmatrix} \bar{{\varvec{{M}}}} &{}\quad \bar{{\varvec{{F}}}}\\ \bar{{\varvec{{F}}}}^{T} &{}\quad \bar{{\varvec{{D}}}} \end{bmatrix} = \begin{bmatrix} \bar{{\varvec{{I}}}} &{}\quad \bar{{\varvec{{0}}}}\\ \bar{{\varvec{{F}}}}^{T}\bar{{\varvec{{M}}}}^{-1} &{}\quad \bar{{\varvec{{I}}}} \end{bmatrix} \begin{bmatrix} \bar{{\varvec{{M}}}} &{}\quad \bar{{\varvec{{0}}}}\\ \bar{{\varvec{{0}}}} &{}\quad \bar{{\varvec{{D}}}} - \bar{{\varvec{{F}}}}^{T} \bar{{\varvec{{M}}}}^{-1}\bar{{\varvec{{F}}}} \end{bmatrix} \begin{bmatrix} \bar{{\varvec{{I}}}} &{}\quad \bar{{\varvec{{M}}}}^{-1}\bar{{\varvec{{F}}}}\\ \bar{{\varvec{{0}}}} &{}\quad \bar{{\varvec{{I}}}} \end{bmatrix}. \end{aligned}$$

\(\square \)
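Both factorizations above are congruence transformations, which preserve definiteness. A small Python sketch (with hypothetical \(2\times 2\) and \(1\times 1\) blocks standing in for \(\bar{{\varvec{M}}}\) and \(\bar{{\varvec{D}}}\)) illustrates the conclusion: the full block matrix is SPD by Sylvester's criterion, and the Schur complement of the \(\bar{{\varvec{M}}}\) block is positive:

```python
# Hypothetical SPD blocks: M is 2x2, D is 1x1, and F is the 2x1 coupling.
M = [[4.0, 1.0], [1.0, 3.0]]
D = [[2.0]]
F = [[1.0], [0.5]]

# Schur complement of the M block:  S = D - F^T M^{-1} F  (scalar here).
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / detM, -M[0][1] / detM],
        [-M[1][0] / detM, M[0][0] / detM]]
MinvF = [Minv[0][0] * F[0][0] + Minv[0][1] * F[1][0],
         Minv[1][0] * F[0][0] + Minv[1][1] * F[1][0]]
S = D[0][0] - (F[0][0] * MinvF[0] + F[1][0] * MinvF[1])

# Leading principal minors of the full 3x3 block matrix [[M, F], [F^T, D]].
A = [[4.0, 1.0, 1.0],
     [1.0, 3.0, 0.5],
     [1.0, 0.5, 2.0]]
m1 = A[0][0]
m2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
m3 = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
      - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
      + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

# Sylvester's criterion: all leading minors positive, so the block
# matrix is SPD; the Schur complement is then positive as well, and in
# this scalar case equals det(A)/det(M).
assert m1 > 0 and m2 > 0 and m3 > 0
assert S > 0
assert abs(S - m3 / detM) < 1e-12
```

In the hybridized method it is this Schur complement system, posed only on the trace unknowns, that is actually solved, which is the source of the reduction in system size noted in the abstract.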


Cite this article

Kozdon, J.E., Erickson, B.A. & Wilcox, L.C. Hybridized Summation-by-Parts Finite Difference Methods. J Sci Comput 87, 85 (2021). https://doi.org/10.1007/s10915-021-01448-5
