
On the Structural Theorem of Persistent Homology

Discrete & Computational Geometry

But the power of homology is seldom of much efficacy, except in those happy dispositions where it is almost superfluous.

with apologies to Edward Gibbon.

Abstract

We study the categorical framework for the computation of persistent homology, without reliance on a particular computational algorithm. The computation of persistent homology is commonly summarized as a matrix theorem, which we call the Matrix Structural Theorem. Any of the various algorithms for computing persistent homology constitutes a constructive proof of the Matrix Structural Theorem. We show that the Matrix Structural Theorem is equivalent to the Krull–Schmidt property of the category of filtered chain complexes. We separately establish the Krull–Schmidt property by abstract categorical methods, yielding a novel nonconstructive proof of the Matrix Structural Theorem. These results provide the foundation for an alternate categorical framework for decomposition in persistent homology, bypassing the usual persistence vector spaces and quiver representations.




References

  1. Adámek, J., Herrlich, H., Strecker, G.E.: Abstract and Concrete Categories: The Joy of Cats. Online Edition (2004). http://katmat.math.uni-bremen.de/acc

  2. Alperin, J.L., Bell, R.B.: Groups and Representations. Graduate Texts in Mathematics, vol. 162. Springer, New York (1995)


  3. Atiyah, M.F.: On the Krull–Schmidt theorem with application to sheaves. Bull. Soc. Math. France 84, 307–317 (1956). http://www.numdam.org/item?id=BSMF_1956__84__307_0

  4. Awodey, S.: Category Theory. Oxford Logic Guides, vol. 52, 2nd edn. Oxford University Press, Oxford (2010)


  5. Boissonnat, J.-D., Chazal, F., Yvinec, M.: Geometric and Topological Inference, online notes dated January 17, 2017. http://geometrica.saclay.inria.fr/team/Fred.Chazal/papers/CGLcourseNotes/main.pdf

  6. Carlsson, G.: Topology and data. Bull. Am. Math. Soc. 46, 255–308 (2009) http://www.ams.org/journals/bull/2009-46-02/S0273-0979-09-01249-X/

  7. Carlsson, G.: Topological pattern recognition for point cloud data. Acta Numer. 23, 289–368 (2014). http://math.stanford.edu/~gunnar/actanumericathree.pdf

  8. Carlsson, G., de Silva, V.: Zigzag persistence. Found. Comput. Math. 10(4), 367–405 (2010). arXiv:0812.0197

  9. Carlsson, G., de Silva, V., Morozov, D.: Zigzag persistent homology and real-valued functions. In: Proceedings of the 25th Annual Symposium on Computational Geometry (SoCG’09). ACM, New York (2009). http://www.mrzv.org/publications/zigzags/socg09/

  10. Cohen-Steiner, D., Edelsbrunner, H., Harer, J.: Stability of persistence diagrams. Discrete Comput. Geom. 37(1), 103–120 (2007)


  11. Cohen-Steiner, D., Edelsbrunner, H., Harer, J., Morozov, D.: Persistent homology for kernels, images, and cokernels. In: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’09), pp. 1011–1020. SIAM, Philadelphia (2009). http://www.mrzv.org/publications/kic/full-soda09/

  12. Cohen-Steiner, D., Edelsbrunner, H., Morozov, D.: Vines and vineyards by updating persistence in linear time. In: Proceedings of the 22nd Annual Symposium on Computational Geometry (SCG’06), pp. 119–126. ACM, New York (2006). https://pdfs.semanticscholar.org/69c7/12bb113bd06b6c5d6d71f5f3a24f5b243336.pdf

  13. de Silva, V., Morozov, D., Vejdemo-Johansson, M.: Dualities in persistent (co)homology. Inverse Probl. 27(12), (2011). arXiv:1107.5665v1 [math.AT]

  14. Edelsbrunner, H.: CPS296.1: Computational Topology, Duke University Course Notes (2006). https://www.cs.duke.edu/courses/fall06/cps296.1/

  15. Edelsbrunner, H., Harer, J.L.: Computational Topology: An Introduction. American Mathematical Society, Providence (2009)


  16. Edelsbrunner, H., Letscher, D., Zomorodian, A.: Topological persistence and simplification. Discrete Comput. Geom. 28(4), 511–533 (2002). https://users.cs.duke.edu/~edels/Papers/2002-J-04-TopologicalPersistence.pdf

  17. Freyd, P.J.: Abelian Categories. Harper’s Series in Modern Mathematics. Harper and Row, New York (1964). Reprints in Theory and Applications of Categories, vol. 3 (2003). ftp://ftp.sam.math.ethz.ch/EMIS/journals/TAC/reprints/articles/3/tr3.pdf

  18. Geck, M.: An Introduction to Algebraic Geometry and Algebraic Groups. Oxford Graduate Texts in Mathematics, vol. 20. Oxford University Press, Oxford (2003)


  19. Ghrist, R.: Barcodes: the persistent topology of data. Bull. Am. Math. Soc. 45(1), 61–75 (2008). http://www.ams.org/journals/bull/2008-45-01/S0273-0979-07-01191-3/

  20. Hatcher, A.: Algebraic Topology. Cambridge University Press, Cambridge (2001). https://www.math.cornell.edu/~hatcher/AT/ATpage.html

  21. Krause, H.: Krull–Schmidt categories and projective covers. Expo. Math. 33(4), 535–549 (2015)


  22. Liu, S.: Auslander–Reiten theory in a Krull–Schmidt category. São Paulo J. Math. Sci. 4(3), 425–472 (2010). https://www.ime.usp.br/~spjm/articlepdf/432.pdf

  23. Lusztig, G.: Bruhat decomposition and applications (2010). arXiv:1006.5004

  24. MacLane, S.: Categories for the Working Mathematician. Graduate Texts in Mathematics, vol. 5. Springer, New York (1971)


  25. Maria, C., Oudot, S.Y.: Zigzag persistence via reflections and transpositions. In: Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’15). SIAM, Philadelphia (2015). https://hal.inria.fr/hal-01091949

  26. Maria, C., Oudot, S.: Computing zigzag persistent cohomology (2016). arXiv:1608.06039

  27. Miyachi, J.: Derived categories with applications to representations of algebras. http://www.u-gakugei.ac.jp/~miyachi/papers/ChibaSemi.pdf

  28. Oudot, S.Y.: Persistence Theory: From Quiver Representations to Data Analysis. Mathematical Surveys and Monographs, vol. 209. American Mathematical Society, Providence (2015)


  29. Pavlichenko, A., Segert, J., Meehan, K.: On kernels and cokernels in persistent homology (in preparation)

  30. Schiffler, R.: Quiver Representations. CMS Books in Mathematics. Springer, Berlin (2014)


  31. Skraba, P., Vejdemo-Johansson, M.: Parallel & scalable zig-zag persistent homology (2012)

  32. Stacks Project Authors: Homological Algebra, The Stacks Project. http://stacks.math.columbia.edu/download/homology.pdf

  33. Weinberger, S.: What is...persistent homology? Notices Am. Math. Soc. 58(1), 36–39 (2011). http://www.ams.org/notices/201101/rtx110100036p.pdf

  34. Zomorodian, A.J.: Topology for Computing. Cambridge Monographs on Applied and Computational Mathematics, vol. 16. Cambridge University Press, Cambridge (2005)


  35. Zomorodian, A., Carlsson, G.: Computing persistent homology. Discrete Comput. Geom. 33(2), 249–274 (2005)


  36. Zomorodian, A., Carlsson, G.: Localized homology. Comput. Geom. 41(3), 126–148 (2008)



Author information


Corresponding author

Correspondence to Jan Segert.

Additional information

Editor in Charge: Kenneth Clarkson

Appendices

Appendix A: Bruhat Uniqueness Lemma

Here we establish the uniqueness of the persistence canonical form \(\underline{D}\) appearing in the Matrix Structural Theorem 1.4, as well as in the ungraded version Theorem 1.2. Our result generalizes the uniqueness statement for the usual Bruhat factorization of an invertible matrix [2, 18].

It is convenient to make the following definitions. We call an (upper) triangular matrix U unitriangular if it is unipotent, meaning that each diagonal entry is 1. We call a matrix M quasi-monomial if each row has at most one nonzero entry and each column has at most one nonzero entry. We remark that a unitriangular matrix is always square, but a quasi-monomial matrix need not be square. The key to proving uniqueness is:

Lemma A.1

Suppose \(M_1\,U = V \,M_2 \), where \(M_1\) and \(M_2\) are quasi-monomial and U and V are unitriangular. Then \(M_2=M_1\).

In the following proof, the term row-pivot denotes a matrix entry that is the leftmost nonzero entry in its row, and column-pivot denotes a matrix entry that is the bottommost nonzero entry in its column.

Proof

The first half of the proof consists of showing that every nonzero entry of \(M_2\) is also an entry of \(M_1\). A nonzero entry of the quasi-monomial matrix \(M_2\) is a column-pivot. Similarly a nonzero entry of the quasi-monomial matrix \(M_1\) is a row-pivot. It now suffices to show that a column-pivot of \(M_2\) is a row-pivot of \(M_1\). Since V is unitriangular, \(VM_2\) has the same column-pivots as \(M_2\). Similarly since U is unitriangular, \(M_1U\) has the same row-pivots as \(M_1\). It now suffices to prove that a column-pivot of \(S = VM_2\) is a row-pivot of \(S = M_1U\). Suppose to the contrary that some column-pivot of S is not a row-pivot of S. Let x be the leftmost such column-pivot. Since x is not a row-pivot, there exists a row-pivot y to the left of x in the same row. If y were a column-pivot of \(S = VM_2\), then it would be a column-pivot of \(M_2\). But the quasi-monomial matrix \(M_2\) cannot have two nonzero entries y and x in the same row. So y is not a column-pivot of S, and there exists a column-pivot z below y in the same column. If z were a row-pivot of \(S = M_1U\), then it would be a row-pivot of \(M_1\). But the quasi-monomial matrix \(M_1\) cannot have two nonzero entries z and y in the same column. So z is a column-pivot of S that is not a row-pivot of S, and z is to the left of (and below) x. This is a contradiction, because x is the leftmost such column-pivot.

The second half of the proof consists of showing that every nonzero entry of \(M_1\) is also an entry of \(M_2\). This is analogous to the first half, and we omit the details. The two matrices then have the same nonzero entries, so they must also have the same zero entries. Since all the entries of the two matrices are the same, we have proved \(M_2=M_1\). \(\square \)
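
The row-pivot and column-pivot notions used in this proof are easy to pin down mechanically. A minimal Python sketch (function names are ours, not the paper's), representing a matrix as a list of rows:

```python
def row_pivot_col(M, i):
    """Column index of the row-pivot of row i (the leftmost nonzero
    entry in that row), or None if the row is zero."""
    for j in range(len(M[i])):
        if M[i][j] != 0:
            return j
    return None

def column_pivot_row(M, j):
    """Row index of the column-pivot of column j (the bottommost nonzero
    entry in that column), or None if the column is zero."""
    for i in reversed(range(len(M))):
        if M[i][j] != 0:
            return i
    return None
```

For a quasi-monomial matrix such as [[0, 5], [3, 0]], every nonzero entry is simultaneously a row-pivot and a column-pivot, which is the fact driving the first half of the proof.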

Recall that a matrix M is Boolean if every non-zero entry is 1. An almost-Jordan differential matrix \({\underline{D}}\) is Boolean and quasi-monomial.

Proposition A.2

Suppose \(P_1A=BP_2\) where \(P_1\) and \(P_2\) are Boolean quasi-monomial and A and B are invertible triangular. Then \(P_2=P_1\).

Proof

Factor \(A = T_1U\) as the product of an invertible diagonal matrix \(T_1\) and a unitriangular matrix U. Factor \(B = VT_2\) as the product of a unitriangular matrix V and an invertible diagonal matrix \(T_2\). Then \((P_1 T_1)U = V(T_2 P_2)\), with \((P_1 T_1)\) and \((T_2 P_2)\) quasi-monomial. Lemma A.1 then gives \(P_1 T_1 = T_2 P_2\). Since the quasi-monomial matrices \(P_1\) and \(P_2\) are Boolean, the conclusion follows. \(\square \)

We remark that any permutation matrix P is Boolean and quasi-monomial, so Proposition A.2 generalizes the standard uniqueness result for Bruhat factorization of an invertible matrix [2, 18].
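
The three matrix classes in play in this appendix (unitriangular, quasi-monomial, Boolean) can be checked mechanically; a sketch with our own predicate names, again over lists of rows:

```python
def is_unitriangular(U):
    """Square, upper triangular, with every diagonal entry equal to 1."""
    n = len(U)
    if any(len(row) != n for row in U):
        return False  # must be square
    below_diag_zero = all(U[i][j] == 0 for i in range(n) for j in range(i))
    return below_diag_zero and all(U[i][i] == 1 for i in range(n))

def is_quasi_monomial(M):
    """At most one nonzero entry in each row and in each column."""
    row_counts = [sum(1 for x in row if x != 0) for row in M]
    col_counts = [sum(1 for row in M if row[j] != 0) for j in range(len(M[0]))]
    return all(c <= 1 for c in row_counts + col_counts)

def is_boolean(M):
    """Every entry is 0 or 1."""
    return all(x in (0, 1) for row in M for x in row)
```

In particular, every permutation matrix passes both `is_boolean` and `is_quasi_monomial`, matching the remark above.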

The uniqueness of the persistence canonical form \(\underline{D}\) appearing in Theorem 1.2 and in the Matrix Structural Theorem 1.4 now follows easily:

Corollary A.3

Suppose D is a differential matrix and \(B_1\) and \(B_2\) are invertible triangular matrices. If both differential matrices \({\underline{D}}_1 = B_1^{-1} D B_1\) and \({\underline{D}_2} = B_2^{-1} D B_2\) are almost-Jordan, then \({\underline{D}}_2 ={\underline{D}}_1\).

Proof

\({\underline{D}_1} (B_1^{-1} B_2) = (B_1^{-1} B_2) {\underline{D}_2}\), and the result follows from Proposition A.2. \(\square \)

Appendix B: Constructively Proving the Matrix Structural Theorem

1.1 B.1 Linear Algebra of Reduction

In this section we discuss column-reduction of a matrix \(M:{\mathbb F}^m \rightarrow {\mathbb F}^n\), including its application to describing the kernel and image of the matrix. Column-reduction of a differential matrix D is a standard tool in the computation of persistent homology, where it is usually just called reduction [12, 16, 35, 36]. We prefer the more precise terminology in order to maintain the distinction with row-reduction, since both are used for Bruhat factorization [2, 18, 23].

As in Appendix A, the term column-pivot denotes a matrix entry that is the bottommost nonzero entry in its column. A matrix R is said to be column-reduced if each row has at most one column-pivot.

Definition B.1

A column-reduction of a matrix M is an invertible triangular matrix V such that \(R = MV\) is column-reduced.

A column-reduction V exists for any matrix M, but is not unique in general. Column-reduction algorithms used for persistent homology [13, 16, 35, 36] usually prioritize computational efficiency. For our computational examples, we will use a column-reduction algorithm that is popular for Bruhat factorization [2, 18]. This algorithm is easy to implement, but is not very efficient computationally. The algorithm starts at the leftmost column of M and proceeds rightward by successive columns as follows:

  • If the current column is zero, do nothing.

  • If the current column is nonzero, add an appropriate multiple of the current column to each column to the right in order to zero the entries to the right of the column-pivot (in the same row).

Stop if the current column is the rightmost column, otherwise proceed to the column immediately to the right and repeat. By design, the resulting matrix R has the property that any column-pivot has only zeros to the right of it (in the same row). So a row of R cannot contain more than one column-pivot, implying that R is column-reduced. The invertible triangular column-reduction matrix V is constructed by performing the same column operations on the identity matrix I, where I has the same number of columns as M.
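
The loop above can be sketched directly in Python over exact rationals (names are ours; the two branches mirror the two bullet cases, and the column operations are recorded in V):

```python
from fractions import Fraction

def column_pivot_row(col):
    """Row index of the bottommost nonzero entry of a column, or None."""
    for i in reversed(range(len(col))):
        if col[i] != 0:
            return i
    return None

def column_reduce(M):
    """Left-to-right column reduction. Returns (R, V) with R = M V
    column-reduced and V invertible unitriangular, over the rationals."""
    m, n = len(M), len(M[0])
    R = [[Fraction(x) for x in row] for row in M]
    V = [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]
    for j in range(n):
        p = column_pivot_row([R[i][j] for i in range(m)])
        if p is None:
            continue  # zero column: do nothing
        for k in range(j + 1, n):
            c = R[p][k] / R[p][j]  # multiple that zeroes row p of column k
            if c == 0:
                continue
            for i in range(m):
                R[i][k] -= c * R[i][j]  # column operation on R
            for i in range(n):
                V[i][k] -= c * V[i][j]  # same operation on the identity
    return R, V
```

For instance, `column_reduce([[1, 2], [1, 2]])` returns R = [[1, 0], [1, 0]] and V = [[1, -2], [0, 1]].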

We briefly discuss some linear-algebraic properties of column-reduction. A column-reduction easily yields a basis for the kernel of a matrix as well as a basis for the image. By contrast, Gaussian elimination easily yields a basis for the image of a matrix, but requires additional back-substitution to produce a basis for the kernel. Column-reduction algorithms are therefore a convenient alternative to Gaussian elimination for matrix computations in general, and this fact seems to be underappreciated. We use a variant of the usual adapted basis for a filtered vector space, disregarding the ordering of basis elements. We will say that a basis of a finite-dimensional vector space X is almost-adapted to a subspace \(Y \subseteq X\) if Y is spanned by the set of basis elements that are contained in Y. Definition B.1 yields:

Corollary B.2

Let \(V:{\mathbb F}^m \rightarrow {\mathbb F}^m\) be a column-reduction of a matrix \(M:{\mathbb F}^m \rightarrow {\mathbb F}^n\). Then:

  1. 1.

    The nonzero columns of the column-reduced matrix \(R = MV\) are a basis of \({\mathrm{im}}\,M\).

  2. 2.

    The columns of the invertible triangular matrix V are a basis of \({\mathbb F}^m\), and this basis is almost-adapted to \(\ker M\).

Proof

  1. 1.

    The nonzero columns of R span \({\mathrm{im}}\,M\). The nonzero columns of R are linearly independent because R is column-reduced.

  2. 2.

    The columns of V are a basis of \({\mathbb F}^m\) because V is invertible. This basis is almost-adapted to \(\ker M\) because the nonzero columns of \(R = MV\) are linearly independent.

\(\square \)
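
Corollary B.2 reads off both bases from a factorization \(R = MV\); a sketch under the same list-of-rows convention (helper names are ours):

```python
def image_basis(R):
    """Nonzero columns of the column-reduced matrix R (Corollary B.2(1))."""
    cols = [[row[j] for row in R] for j in range(len(R[0]))]
    return [c for c in cols if any(x != 0 for x in c)]

def kernel_basis(R, V):
    """Columns of V matching the zero columns of R (Corollary B.2(2))."""
    return [[row[j] for row in V]
            for j in range(len(R[0]))
            if all(row[j] == 0 for row in R)]
```

With R = [[1, 0], [1, 0]] and V = [[1, -2], [0, 1]] (a column-reduction of M = [[1, 2], [1, 2]]), `image_basis` returns the single column [1, 1] and `kernel_basis` returns the single column [-2, 1], which M indeed sends to zero.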

Example B.3

We compute in detail a column-reduction of the matrix \(M:{\mathbb Q}^4 \rightarrow {\mathbb Q}^3\), which is presented below with a column augmentation by the identity matrix I.

[matrix display omitted]

The result of the computation is a factorization \(R = MV\), where R is column-reduced and V is invertible triangular (and unipotent). We describe each step of the computation:

  1. 1.

    The first column of M is nonzero, so it has a column-pivot. At the next processing step, boldface the column-pivot for clarity, and add an appropriate multiple of the first column to each column to the right in order to zero the entries to the right of the column-pivot (in the same row).

  2. 2.

    At this point the second column is zero, so it requires no processing step.

  3. 3.

    At this point the third column is nonzero, so it has a column-pivot. At the next processing step, boldface the column pivot, and add an appropriate multiple of the third column to each column to the right in order to zero the entries to the right of the column-pivot (in the same row).

  4. 4.

    At this point the fourth column is zero, so it requires no processing step.

Columns 1 and 3 of R are the nonzero columns, so they are a basis of \({\mathrm{im}}\,M \subseteq {\mathbb Q}^3\). The four columns of V are a basis of \({\mathbb Q}^4\) that is almost-adapted to \(\ker M\). Columns 2 and 4 of V correspond to the zero columns of R, so they are a basis of \(\ker M\subseteq {\mathbb Q}^4\).

1.2 B.2 Matrix Structural Theorem via Reduction

The standard algorithm of persistent homology [16, 35, 36] starts with a differential matrix D and constructs a matrix B satisfying the conditions of:

Theorem 1.2

(Ungraded Matrix Structural Theorem) Any differential matrix D factors as \(D = B {\underline{D}} B^{-1}\) where \({\underline{D}}\) is an almost-Jordan differential matrix and B is a triangular matrix.

The matrix formulation of the standard algorithm constructs a matrix \(B = \widehat{V}\) from a column-reduction V of a differential matrix D, as discussed in [5, 13] for \({\mathbb F} = {\mathbb Z}/2 {\mathbb Z}\). Since \(R = DV\) is column-reduced, there exists at most one nonzero column of R that has its column-pivot in row k. Here \(1 \le k \le m\) where m is the number of rows of the square matrix D. \(\widehat{V}\) is constructed one column at a time using the following rule:

  • If there exists a nonzero column of R that has its column-pivot in row k, then column k of \(\widehat{V}\) is equal to this column of R.

  • If there does not exist a nonzero column of R that has its column-pivot in row k, then column k of \(\widehat{V}\) is equal to column k of V.

The matrix \(\widehat{V}\) is invertible triangular, because each column is nonzero and has its column-pivot on the diagonal.
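
The two-case rule above admits a short sketch (hypothetical name `modified_reduction`; V and the column-reduced R are lists of rows, with R square alongside the square differential D):

```python
def modified_reduction(V, R):
    """Column k of the result is the nonzero column of R whose
    column-pivot lies in row k, if such a column exists; otherwise
    it is column k of V."""
    Vhat = [row[:] for row in V]
    for j in range(len(R[0])):
        # find the column-pivot row of column j of R, if any
        piv = next((i for i in reversed(range(len(R))) if R[i][j] != 0), None)
        if piv is not None:
            for i in range(len(V)):
                Vhat[i][piv] = R[i][j]  # install column j of R as column piv
    return Vhat
```

Each installed column has its column-pivot on the diagonal, which is why \(\widehat{V}\) remains invertible triangular.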

Unlike its progenitor V, the matrix \(\widehat{V}\) contains all of the nonzero columns of R. We introduce the “pivot matrix” of R to encode the combinatorial data needed to recover R. For any column-reduced matrix M, the pivot matrix of M is constructed by replacing every column-pivot of M with 1, and every other nonzero entry of M with 0. It follows that the pivot matrix is Boolean and quasi-monomial (Appendix A).
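
The pivot-matrix construction is a one-pass scan over the columns; a sketch (our own function name):

```python
def pivot_matrix(R):
    """Replace every column-pivot of R with 1 and every other entry
    with 0; for column-reduced R the result is Boolean quasi-monomial."""
    m, n = len(R), len(R[0])
    P = [[0] * n for _ in range(m)]
    for j in range(n):
        for i in reversed(range(m)):  # scan column j bottom-up
            if R[i][j] != 0:
                P[i][j] = 1           # bottommost nonzero entry: the pivot
                break
    return P
```

When R is column-reduced, each row of `pivot_matrix(R)` also carries at most one 1, since a row of R contains at most one column-pivot.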

Lemma B.4

Let \(\underline{D}\) be the pivot matrix of R. Then \(\widehat{V} \underline{D} = R\).

Proof

Supposing column k of R is nonzero, let j be the row number of the unique nonzero entry in column k of \(\underline{D}\). Then by construction, column j of \(\widehat{V}\) is equal to column k of R. \(\square \)

But like its progenitor V, the matrix \(\widehat{V}\) is a column-reduction of the differential D:

Lemma B.5

\(D \widehat{V} = R\).

Proof

The triangular matrix V is a column-reduction of D, as per Definition B.1. We now show that the triangular matrix \(\widehat{V}\) is also a column-reduction of D. Recall that \(\widehat{V}\) is constructed as a modification of V, by replacing a (possibly empty) subset of the columns of V with columns of R. Every column of R is in \(\ker D\), because \(D R = D^2 V = 0\). So \(\widehat{R} := D \widehat{V}\) is constructed as a modification of \(R = D V\), by replacing a (possibly empty) subset of the nonzero columns of R by zero columns. This particular modification preserves the column-reduced property, so \(\widehat{R} = D \widehat{V}\) is column-reduced. It follows that \(\widehat{V}\) is a column-reduction of D, as per Definition B.1.

Since \(\widehat{V}\) and V are column-reductions of the same matrix D, we see from Corollary B.2 that \(\widehat{R}\) and R have the same number (namely \({\mathrm{rank}}\,D\)) of nonzero columns. It follows that in the construction of \(\widehat{R}\) as a modification of R, none of the nonzero columns of R can be replaced by zero columns. This establishes the equality \(\widehat{R} = R\), and the conclusion \(D \widehat{V} = R\) follows. \(\square \)

We can now complete the constructive proof of Theorem 1.2:

Proof

Letting \(B = \widehat{V}\), Lemmas B.4 and B.5 give \(B {\underline{D}} = R = D B\). The pivot matrix \({\underline{D}} =B^{-1} D B\) is a differential matrix, since it is conjugate to the differential matrix D. It only remains to check that the differential matrix \({\underline{D}}\) is almost-Jordan. This requires constructing a permutation matrix P such that \(P^{-1} {\underline{D}} P\) is Jordan. Since the differential matrix \({\underline{D}}\) is furthermore Boolean and quasi-monomial, P can be constructed by the procedure previously explained in Example 4.4. \(\square \)
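
As a concrete sanity check of the factorization, here is a hypothetical 3 × 3 differential (not taken from the paper) together with matrices of the kind the standard algorithm produces; we verify \(B \underline{D} = D B\), which is equivalent to \(D = B \underline{D} B^{-1}\):

```python
def matmul(A, B):
    """Naive matrix product over lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Hypothetical example data for illustration only:
D    = [[0, 1, 1], [0, 0, 0], [0, 0, 0]]   # differential: D squared is 0
B    = [[1, 0, 0], [0, 1, -1], [0, 0, 1]]  # invertible triangular
Dbar = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # Boolean quasi-monomial pivot matrix

assert matmul(D, D) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
assert matmul(B, Dbar) == matmul(D, B)     # B Dbar = D B
```

Checking \(B \underline{D} = D B\) avoids computing \(B^{-1}\) explicitly and is how one would test such a factorization in practice.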

An immediate corollary of the proof is the important and generally known fact that the almost-Jordan differential \(\underline{D}\) can be easily constructed as the pivot matrix of the column-reduced matrix \(R = DV\). Furthermore, while R may depend on the choice of column-reduction V, Corollary A.3 guarantees that the pivot matrix \(\underline{D}\) is an invariant of D (which we call the persistence canonical form of D in Sect. 1.3) independent of the choice of column-reduction V. The matrix \(\underline{D}\) contains all of the data for the multiplicities of the summands in a decomposition, and this is independent of the choice of decomposition because of the Krull–Schmidt property. The data required to compute a particular decomposition is conveniently encoded in the matrix \(\widehat{V}\), which is a column-reduction with additional special properties. These points are illustrated in the example at the end of the section. In the language of the standard framework, one says that the “barcodes” are contained in \(\underline{D}\), and the “creators and destroyers” of persistent homology are contained in \(\widehat{V}\).

We also note that the invertible triangular matrix \(B = \widehat{V}\) produced by the standard algorithm is not in general normalized (see the discussion in Sect. 1.3 following Theorem 1.2). But it is easy to construct a diagonal matrix T such that the invertible triangular matrix \(B = \widehat{V} T\) is normalized. This will also be illustrated in the example at the end of the section.

The graded case is an easy modification:

Theorem 1.4

(Matrix Structural Theorem) Any block-superdiagonal differential matrix D factors as \(D = B {\underline{D}} B^{-1}\) where \({\underline{D}}\) is a block-superdiagonal almost-Jordan differential matrix and B is a block-diagonal triangular matrix.

Proof

Let D be a block-superdiagonal differential matrix. Then the invertible triangular column-reduction matrix V produced by a reduction algorithm, such as [16, 35, 36] or our Appendix B.1, is block-diagonal. If V is block-diagonal, then so is the invertible triangular matrix \(B = \widehat{V}\) constructed by the standard algorithm from V and \(R = D V\). \(\square \)

The following example of a standard algorithm computation illustrates both block-structure and normalization.

Example B.6

We work with the block-superdiagonal differential \(D:{\mathbb Q}^7 \rightarrow {\mathbb Q}^7\) of Example 1.5, which is presented below with a column augmentation by the identity matrix I. The identity matrix is block-diagonal with respect to the grading structure inherited from D. We first compute a column-reduction of D:

[matrix display omitted]

The result of the computation is a factorization \(R = DV\), where R is column-reduced and V is invertible triangular (and unipotent). The intervening steps are omitted for brevity. The block-superdiagonal almost-Jordan differential \(\underline{D}\) is now easily computed as the pivot matrix of R, by setting every column-pivot to 1 and every other nonzero entry to 0:

[matrix display omitted]

The barcode invariants can be computed from the matrix \(\underline{D}\) and the filtration levels of the basis elements.

Proceeding to compute a particular decomposition as in Example 4.4, we use the standard algorithm to construct \(\widehat{V}\) as a modification of V. Each nonzero column of R replaces the column of V that has its column-pivot in the same row. Then \(\widehat{V}\) inherits the block-diagonal structure of V:

[matrix display omitted]

We list the columns of \(\widehat{V}\) that are equal to columns of R; this data is also encoded by the nonzero entries of the pivot matrix \(\underline{D}\):

  • Column 2 of \(\widehat{V}\) is equal to column 4 of R; row 2 column 4 of \(\underline{D}\) has entry 1.

  • Column 3 of \(\widehat{V}\) is equal to column 5 of R; row 3 column 5 of \(\underline{D}\) has entry 1.

  • Column 6 of \(\widehat{V}\) is equal to column 7 of R; row 6 column 7 of \(\underline{D}\) has entry 1.

Each of the remaining columns of \(\widehat{V}\) is equal to the corresponding column of V. One may now check by matrix multiplication that \({\widehat{V}^{-1}} D \, \widehat{V}={\underline{D}}\), where \({\underline{D}}\) is the pivot matrix of R as above.

The invertible triangular matrix \(\widehat{V}\) is not normalized: column 6 of \(\widehat{V}\) corresponds to a zero column of \({\underline{D}}\), but its diagonal entry is not equal to 1. We can normalize by scalar multiplication of the appropriate columns. Let T be the diagonal matrix with 1 in the first five diagonal entries and \(-1\) in the last two. Then the invertible triangular matrix \(B = {\widehat{V}}T\) is normalized, and this is the matrix that appears in Example 1.5. Note that \(B^{-1} D B = {\underline{D}} = {\widehat{V}}^{-1} D\, {\widehat{V}}\) by Corollary A.3.


Cite this article

Meehan, K., Pavlichenko, A. & Segert, J. On the Structural Theorem of Persistent Homology. Discrete Comput Geom 62, 945–989 (2019). https://doi.org/10.1007/s00454-018-0042-9

