Constrained Graph Partitioning via Matrix Differential Equations
A novel algorithmic approach is presented for the problem of partitioning a connected undirected weighted graph under constraints such as cardinality or membership requirements, or must-link and cannot-link constraints. Such constrained problems can be NP-...
Randomized Subspace Iteration: Analysis of Canonical Angles and Unitarily Invariant Norms
This paper analyzes the randomized subspace iteration for the computation of low-rank approximations. We present three different kinds of bounds. First, we derive both bounds for the canonical angles between the exact and the approximate singular ...
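As a point of reference (not the paper's analysis), the randomized subspace iteration studied here follows the standard template: sketch the range of $A$ with a Gaussian test matrix, alternate products with $A$ and $A^T$ with re-orthonormalization, and compare the resulting subspace with the exact singular subspace through canonical angles. A minimal sketch in Python; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def randomized_subspace_iteration(A, rank, n_iter=2, oversample=10, seed=None):
    """Orthonormal basis Z approximating the dominant left singular subspace of A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = rng.standard_normal((n, k))      # Gaussian test matrix
    Z, _ = np.linalg.qr(A @ Omega)           # initial range estimate
    for _ in range(n_iter):                  # subspace (power) iterations
        Z, _ = np.linalg.qr(A.T @ Z)         # re-orthonormalize to limit loss of accuracy
        Z, _ = np.linalg.qr(A @ Z)
    return Z

# Usage: compare with the exact dominant singular subspace via canonical angles.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -6, 50)) @ rng.standard_normal((50, 50))
Z = randomized_subspace_iteration(A, rank=5, n_iter=2, seed=1)
U = np.linalg.svd(A, full_matrices=False)[0][:, :5]
cosines = np.linalg.svd(U.T @ Z, compute_uv=False)   # cosines of the canonical angles
print(np.arccos(np.clip(cosines, -1.0, 1.0)))        # canonical angles (radians); should be tiny
```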
Effective Resistance Preserving Directed Graph Symmetrization
This work presents a new method for the symmetrization of directed graphs that constructs an undirected graph whose pairwise effective resistances match those of a given directed graph. Consequently, a graph metric, the square root of effective resistance, is ...
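For an undirected graph, the pairwise effective resistances referred to here can be read off the Moore--Penrose pseudoinverse of the graph Laplacian, $R_{ij} = L^+_{ii} + L^+_{jj} - 2L^+_{ij}$; the paper's contribution is a symmetrization that reproduces these quantities for a directed graph. A small sketch of the undirected computation only (not the symmetrization itself):

```python
import numpy as np

def effective_resistances(W):
    """Pairwise effective resistances of a connected undirected graph with weight matrix W."""
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    Lp = np.linalg.pinv(L)                    # Moore--Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp   # R_ij = Lp_ii + Lp_jj - 2 Lp_ij

# Path graph on 3 nodes with unit weights: the two edges act as resistors in series.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(effective_resistances(W)[0, 2])         # -> 2.0
```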
A Class of Efficient Locally Constructed Preconditioners Based on Coarse Spaces
In this paper we present a class of robust and fully algebraic two-level preconditioners for symmetric positive definite (SPD) matrices. We introduce the notion of algebraic local symmetric positive semidefinite splitting of an SPD matrix and we give a ...
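For orientation, a generic two-level preconditioner combines additive local solves on (overlapping) index blocks with a coarse correction built from a coarse space; the paper's algebraic local SPSD splitting is a specific, more robust way of choosing these ingredients. The sketch below is a plain additive Schwarz construction with a piecewise-constant coarse space on a 1D Laplacian, purely for illustration; it is not the preconditioner proposed in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# 1D Laplacian as a simple SPD test matrix.
n = 64
A = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1]).tocsc()
Ad = A.toarray()

# Overlapping local subdomains plus a piecewise-constant coarse space.
blocks = [np.arange(i, min(i + 20, n)) for i in range(0, n, 16)]
local_inv = [np.linalg.inv(Ad[np.ix_(idx, idx)]) for idx in blocks]
R0 = np.zeros((len(blocks), n))
for k, idx in enumerate(blocks):
    R0[k, idx] = 1.0                          # coarse space: one constant vector per subdomain
coarse_inv = np.linalg.inv(R0 @ Ad @ R0.T)

def apply_prec(r):
    z = R0.T @ (coarse_inv @ (R0 @ r))        # second-level (coarse) correction
    for idx, Ai in zip(blocks, local_inv):    # first-level additive local solves
        z[idx] += Ai @ r[idx]
    return z

M = LinearOperator((n, n), matvec=apply_prec)
b = np.ones(n)
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))        # info == 0 signals convergence
```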
Concentration of the Frobenius Norm of Generalized Matrix Inverses
In many applications it is useful to replace the Moore--Penrose pseudoinverse (MPP) by a different generalized inverse with more favorable properties. We may want, for example, to have many zero entries, but without giving up too much of the stability of ...
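One way to see the trade-off mentioned here: for a full column rank $A$, every left inverse can be written as the MPP plus a term supported on the left null space of $A$, and any nonzero perturbation of that kind strictly increases the Frobenius norm. A small numerical check (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))               # full column rank (generically)
X_mpp = np.linalg.pinv(A)                     # Moore--Penrose pseudoinverse: X_mpp @ A = I

# Any X = X_mpp + N with N @ A = 0 is another generalized (left) inverse of A.
U = np.linalg.svd(A)[0]
null_left = U[:, 3:]                          # basis of the left null space of A
N = rng.standard_normal((3, 5)) @ null_left.T
X_other = X_mpp + N

print(np.allclose(X_other @ A, np.eye(3)))    # True: still a left inverse
print(np.linalg.norm(X_mpp, 'fro') < np.linalg.norm(X_other, 'fro'))   # True: MPP is Frobenius-minimal
```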
Convergence of Some Iterative Methods for Symmetric Saddle Point Linear Systems
We consider the iterative solution of linear systems with a symmetric saddle point system matrix. We address a family of solution techniques that exploit the knowledge of a preconditioner (or approximate solution procedure) both for the top left block of ...
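A standard instance of this setting, shown only for orientation, is $K = \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}$ with SPD $A$, solved by MINRES with a block-diagonal preconditioner assembled from $A$ and the Schur complement $S = BA^{-1}B^T$. The sketch uses exact inverses of both blocks, which is the idealized case; the paper's family of methods covers approximate versions of both.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n, m = 30, 10
G = rng.standard_normal((n, n)); A = G @ G.T + n * np.eye(n)      # SPD (1,1) block
B = rng.standard_normal((m, n))                                    # full-rank constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]])                    # symmetric saddle point matrix

S = B @ np.linalg.solve(A, B.T)              # Schur complement (SPD)
Ainv, Sinv = np.linalg.inv(A), np.linalg.inv(S)

def apply_prec(v):                           # block-diagonal preconditioner diag(A, S)^{-1}
    return np.concatenate([Ainv @ v[:n], Sinv @ v[n:]])

P = LinearOperator((n + m, n + m), matvec=apply_prec)
b = rng.standard_normal(n + m)
x, info = minres(K, b, M=P)
print(info, np.linalg.norm(K @ x - b))       # info == 0 on convergence
```

With the exact blocks as above, preconditioned MINRES converges in at most three iterations (Murphy, Golub, and Wathen); the interest is in what happens when the blocks are only applied approximately.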
On the Best Approximation of the Hierarchical Matrix Product
The multiplication of matrices is an important arithmetic operation in computational mathematics. In the context of hierarchical matrices, this operation can be realized by the multiplication of structured blockwise low-rank matrices, resulting in an almost ...
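The kernel operation behind such blockwise low-rank products is recompression: a sum of low-rank terms, kept in factored form, is truncated back to a target rank via thin QR factorizations and a small SVD, which attains the best approximation in every unitarily invariant norm. A self-contained sketch of that step (not the full hierarchical algorithm):

```python
import numpy as np

def truncate_lowrank(U, V, k):
    """Best rank-k approximation of U @ V.T, returned in factored form."""
    Qu, Ru = np.linalg.qr(U)                 # U = Qu Ru
    Qv, Rv = np.linalg.qr(V)                 # V = Qv Rv
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)      # SVD of the small core
    Uk = (Qu @ W[:, :k]) * s[:k]             # absorb the kept singular values into the left factor
    Vk = Qv @ Zt[:k].T
    return Uk, Vk

rng = np.random.default_rng(0)
m, n, r = 60, 40, 8
# Sum of two rank-r products, as accumulated in blockwise (hierarchical) multiplication.
U = np.hstack([rng.standard_normal((m, r)), rng.standard_normal((m, r))])
V = np.hstack([rng.standard_normal((n, r)), rng.standard_normal((n, r))])
Uk, Vk = truncate_lowrank(U, V, k=8)

C = U @ V.T
sigma = np.linalg.svd(C, compute_uv=False)
err = np.linalg.norm(C - Uk @ Vk.T, 2)
print(np.isclose(err, sigma[8]))             # True: the truncation attains the Eckart--Young optimum
```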
The Polynomial Eigenvalue Problem is Well Conditioned for Random Inputs
We compute the exact expected value of the squared condition number for the polynomial eigenvalue problem, when the input matrices have entries coming from the standard complex Gaussian distribution, showing that in general this problem is quite well ...
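For concreteness, a quadratic polynomial eigenvalue problem $P(\lambda)x = (A_0 + \lambda A_1 + \lambda^2 A_2)x = 0$ with complex Gaussian coefficient matrices, as in the paper's random model, can be solved through a companion linearization and a generalized eigensolver. The linearization chosen below is a standard one and is not prescribed by the paper:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5
A0, A1, A2 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization: (lam*B - C) [x; lam*x] = [0; P(lam) x].
Z, I = np.zeros((n, n), dtype=complex), np.eye(n, dtype=complex)
C = np.block([[Z, I], [-A0, -A1]])
B = np.block([[I, Z], [Z, A2]])
lam = eig(C, B, right=False)                 # generalized eigenvalues of (C, B)

# Check: P(lam) should be (numerically) singular at each computed eigenvalue.
resid = [np.linalg.svd(A0 + l * A1 + l**2 * A2, compute_uv=False)[-1] for l in lam]
print(max(resid))                            # smallest singular value of P(lam_i); tiny up to conditioning
```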
Necessary and Sufficient Conditions for Noiseless Sparse Recovery via Convex Quadratic Splines
The problem of exact recovery of an individual sparse vector using the Basis Pursuit (BP) model is considered. A differentiable Huber loss function (a convex quadratic spline) is used to replace the $\ell_1$-norm in the BP model. Using the theory of duality ...
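The Huber function in question is the convex quadratic spline $\rho_\mu(t) = t^2/(2\mu)$ for $|t| \le \mu$ and $|t| - \mu/2$ otherwise, summed over the entries; it is differentiable, and its gradient takes values in $[-1, 1]$ like the $\ell_1$ subdifferential. A small sketch of the function and its gradient (the exact parameterization may differ from the paper's):

```python
import numpy as np

def huber(x, mu=0.1):
    """Huber loss: quadratic near zero, linear in the tails; a smooth surrogate for the l1-norm."""
    ax = np.abs(x)
    return np.where(ax <= mu, x**2 / (2 * mu), ax - mu / 2).sum()

def huber_grad(x, mu=0.1):
    return np.clip(x / mu, -1.0, 1.0)        # gradient lies in [-1, 1]

x = np.array([-2.0, -0.05, 0.0, 0.03, 1.5])
print(huber(x, mu=0.1), np.abs(x).sum())     # Huber value approaches the l1-norm as mu -> 0
print(huber_grad(x, mu=0.1))                 # matches sign(x) outside the quadratic region
```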
Stable Computation of Generalized Matrix Functions via Polynomial Interpolation
Generalized matrix functions (GMFs) extend the concept of a matrix function to rectangular matrices via the singular value decomposition. Several applications involving directed graphs, Hamiltonian dynamical systems, and optimization problems with low-rank ...
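The object being computed is $f^\diamond(A) := U f(\Sigma) V^T$, with $f$ applied to the nonzero singular values of $A = U \Sigma V^T$. The paper's point is to evaluate this stably without a full SVD; the SVD-based evaluation below is only the naive reference baseline:

```python
import numpy as np

def generalized_matrix_function(A, f):
    """Reference (SVD-based) evaluation of the generalized matrix function f^{diamond}(A)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    fs = np.where(s > 0, f(s), 0.0)          # f acts on the nonzero singular values only
    return (U * fs) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
# Sanity checks: f(t) = t reproduces A; f(t) = 1/t gives the transpose of the pseudoinverse.
print(np.allclose(generalized_matrix_function(A, lambda t: t), A))
print(np.allclose(generalized_matrix_function(A, lambda t: 1 / t), np.linalg.pinv(A).T))
```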
Euclidean-Norm Error Bounds for SYMMLQ and CG
For consistent positive definite and semidefinite systems $Ax_\star=b$, we use the Gauss--Radau approach of Golub and Meurant (1997) to obtain an upper bound on the error $\|x_\star-x_k^L\|_2$ for SYMMLQ iterates, assuming exact arithmetic. Such a bound, computable ...
LSLQ: An Iterative Method for Linear Least-Squares with an Error Minimization Property
We propose an iterative method named LSLQ for solving linear least-squares problems of any shape. The method is based on the Golub and Kahan (1965) process, where the dominant cost consists in products with the linear operator and its transpose. In the rank-...
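The Golub--Kahan process referred to here builds orthonormal bases $U_{k+1}$, $V_k$ and a lower bidiagonal $B_k$ satisfying $AV_k = U_{k+1}B_k$, touching the operator only through products with $A$ and $A^T$. A minimal sketch of the bidiagonalization itself (LSLQ then solves a small subproblem over this basis; that part is omitted):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub--Kahan bidiagonalization: A V_k = U_{k+1} B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    v_prev, beta = np.zeros(n), 0.0
    for j in range(k):
        w = A.T @ U[:, j] - beta * v_prev
        alpha = np.linalg.norm(w); V[:, j] = w / alpha
        p = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(p); U[:, j + 1] = p / beta
        B[j, j], B[j + 1, j] = alpha, beta
        v_prev = V[:, j]
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8)); b = rng.standard_normal(20)
U, B, V = golub_kahan(A, b, k=5)
print(np.allclose(A @ V, U @ B))                                     # fundamental recurrence
print(np.allclose(V.T @ V, np.eye(5)), np.allclose(U.T @ U, np.eye(6)))   # orthonormal bases
```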
High Order Bellman Equations and Weakly Chained Diagonally Dominant Tensors
We introduce high order Bellman equations, extending classical Bellman equations to the tensor setting. We introduce weakly chained diagonally dominant (w.c.d.d.) tensors and show that a sufficient condition for the existence and uniqueness of a positive ...
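For reference, the matrix notion being extended here: a matrix is weakly chained diagonally dominant if it is weakly diagonally dominant and from every row there is a walk in its sparsity graph to a strictly diagonally dominant row. A small check of this matrix-level property (our sketch, not the tensor generalization):

```python
import numpy as np
from collections import deque

def is_wcdd(A, tol=1e-12):
    """Weak chained diagonal dominance of a square matrix."""
    n = A.shape[0]
    slack = np.abs(np.diag(A)) - (np.abs(A).sum(axis=1) - np.abs(np.diag(A)))
    if np.any(slack < -tol):
        return False                          # some row is not even weakly dominant
    strict = slack > tol
    reach = strict.copy()                     # rows that can reach a strictly dominant row
    queue = deque(np.flatnonzero(strict))
    while queue:                              # BFS backwards along nonzero entries
        j = queue.popleft()
        for i in range(n):
            if not reach[i] and i != j and abs(A[i, j]) > tol:
                reach[i] = True
                queue.append(i)
    return bool(reach.all())

L = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])               # middle row only weakly dominant, but chains to row 0
S = np.array([[ 1., -1.],
              [-1.,  1.]])                    # weakly dominant everywhere, no strictly dominant row
print(is_wcdd(L), is_wcdd(S))                 # True False
```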
Low-Rank Matrix Approximations Do Not Need a Singular Value Gap
Low-rank approximations to a real matrix $\mathbf{A}$ can be computed from $\mathbf{Z}\mathbf{Z}^T\mathbf{A}$, where $\mathbf{Z}$ is a matrix with orthonormal columns, and the accuracy of the approximation can be estimated from some norm of $\mathbf{A}-\mathbf{Z}\mathbf{Z}^T\mathbf{A}$ ...
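A quick numerical illustration of the setting, assuming nothing beyond an orthonormal $\mathbf{Z}$ from a plain randomized range finder: even when the singular values decay smoothly with no gap after the target rank, the error $\|\mathbf{A}-\mathbf{Z}\mathbf{Z}^T\mathbf{A}\|_2$ stays within a modest factor of the optimal rank-$k$ error.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 60, 10
# Construct A with smoothly decaying singular values 1, 1/2, 1/3, ... (no gap at index k).
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.arange(1, n + 1)
A = (U * s) @ V.T

Z, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))   # orthonormal Z from a range finder
err = np.linalg.norm(A - Z @ (Z.T @ A), 2)
opt = s[k]                                              # best possible rank-k error (spectral norm)
print(err, opt, err / opt)                              # modest ratio despite the absent gap
```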
Max-Balanced Hungarian Scalings
A Hungarian scaling is a diagonal scaling of a matrix that is typically applied along with a permutation to a sparse linear system before calling a direct or iterative solver. A matrix that has been Hungarian scaled and reordered has all entries of modulus ...
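The combinatorial half of a Hungarian scaling is the maximum product transversal: the permutation maximizing $\prod_i |a_{i\sigma(i)}|$, which is an assignment problem on $-\log|a_{ij}|$. The sketch below computes only that permutation with SciPy; the scaling factors themselves come from exponentials of the optimal dual variables of this assignment problem, which `linear_sum_assignment` does not return.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((5, 5))) + 0.1          # dense test matrix with nonzero entries

# Maximum product transversal: minimize sum of -log|a_ij| over permutations.
row, col = linear_sum_assignment(-np.log(np.abs(A)))
print(col)                                             # matched column for each row
print(np.prod(np.abs(A[row, col])))                    # maximized product of matched moduli

# A Hungarian scaling would scale rows and columns by exponentials of the optimal duals,
# after which every entry has modulus at most 1 and matched entries have modulus exactly 1.
```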
The General Matrix Pencil Completion Problem: A Minimal Case
In this paper we resolve a classical, long-standing open general matrix pencil completion problem in a minimal case. The problem consists of describing the possible Kronecker invariants of a pencil with a prescribed subpencil, and the restriction we ...