Article

A Globally Convergent Iterative Method for Matrix Sign Function and Its Application for Determining the Eigenvalues of a Matrix Pencil

1 Department of Mathematics, Thapar Institute of Engineering and Technology, Patiala 147004, India
2 School of Commerce, SVKM's Narsee Monjee Institute of Management Studies, Chandigarh 160014, India
3 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(4), 481; https://doi.org/10.3390/sym16040481
Submission received: 11 March 2024 / Revised: 8 April 2024 / Accepted: 10 April 2024 / Published: 16 April 2024

Abstract

In this research article, we propose a new matrix iterative method with a convergence order of five for computing the sign of a complex matrix by examining the different patterns and symmetry of existing methods. The convergence of the method is analyzed on a global scale, and attraction basins are demonstrated. In addition, the asymptotic stability of the scheme is explored. Then, an algorithm for determining the generalized eigenvalues of regular matrix pencils is developed using the matrix sign computation. We performed a series of numerical experiments using numerous matrices to confirm the usefulness and superiority of the proposed method.

1. Introduction

Over the past several years, the efficient computation of matrix functions has emerged as a prominent area of research. Symmetry plays a significant role in the study of matrix functions, particularly in understanding their properties, behavior, and computational methods. Symmetry principles are widely used in physics and engineering to analyze and solve problems involving matrix functions. For instance, in quantum mechanics, symmetric matrices often arise in the context of Hamiltonians, and efficient computation of matrix functions is crucial for simulating quantum systems and predicting their behavior. Various iterative schemes have been suggested for computing matrix functions, including methods for the evaluation of the matrix sign function. The scalar sign function, defined for $z \in \mathbb{C}$ with $\operatorname{Re}(z) \neq 0$ as
$$\operatorname{sign}(z) = \begin{cases} 1, & \operatorname{Re}(z) > 0, \\ -1, & \operatorname{Re}(z) < 0, \end{cases} \qquad (1)$$
is extended to form the matrix sign function. Roberts [1] introduced the Cauchy integral representation of the matrix sign function in 1980, given by
$$\operatorname{sign}(A) = \frac{2}{\pi}\, A \int_{0}^{\infty} (t^2 I + A^2)^{-1}\, dt, \qquad (2)$$
which can be applied in model reduction and in solving algebraic Riccati and Lyapunov equations. Recall the definition of the matrix sign function $\operatorname{sign}(A)$ via the Jordan canonical form [2]: if $A = PJP^{-1}$ is the Jordan decomposition of $A$ with $J = \operatorname{diag}(J_1, J_2)$, then
$$\operatorname{sign}(A) = P \begin{pmatrix} -I_p & 0 \\ 0 & I_{n-p} \end{pmatrix} P^{-1}, \qquad (3)$$
where $A \in \mathbb{C}^{n \times n}$ has no eigenvalue on the imaginary axis, $P$ is a nonsingular matrix, and $p$ and $n-p$ are the sizes of the Jordan blocks $J_1$ and $J_2$ corresponding to the eigenvalues lying in the left half-plane $(\mathbb{C}^-)$ and right half-plane $(\mathbb{C}^+)$, respectively.
In the pursuit of evaluating $\operatorname{sign}(A)$ as a solution of
$$W^2 - I = 0, \qquad (4)$$
Roberts applied Newton's method ($NM$), resulting in the following iterative formula:
$$W_{l+1} = \frac{1}{2}\left(W_l + W_l^{-1}\right), \quad l = 0, 1, 2, \ldots, \qquad (5)$$
with the initial matrix $W_0 = A$ and convergence of order two.
An alternative is the accelerated Newton's method ($ANM$), obtained by introducing a norm-scaling parameter into (5):
$$W_0 = A, \quad \mu_l = \sqrt{\frac{\|W_l^{-1}\|}{\|W_l\|}}, \quad W_{l+1} = \frac{1}{2}\left(\mu_l W_l + \mu_l^{-1} W_l^{-1}\right). \qquad (6)$$
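For concreteness, a minimal Mathematica sketch of iterations (5) and (6) follows. The function names NewtonSign and ANMSign, the default tolerance, and the use of the residual $\|W_l^2 - I\|_\infty$ as a stopping rule are our illustrative choices, not part of [1].

(* Sketch of Newton's method (5); assumes a numerical square matrix A with no purely imaginary eigenvalues. *)
NewtonSign[A_?SquareMatrixQ, tol_: 10.^-10, maxIter_: 100] :=
  Module[{W = A, id = IdentityMatrix[Length[A]], l = 0},
    While[l < maxIter && Norm[W.W - id, Infinity] > tol,
      W = (W + Inverse[W])/2; l++];   (* Newton step (5) *)
    W]

(* Accelerated variant (6): rescale by the norm-scaling parameter before each step. *)
ANMSign[A_?SquareMatrixQ, tol_: 10.^-10, maxIter_: 100] :=
  Module[{W = A, Winv, mu, id = IdentityMatrix[Length[A]], l = 0},
    While[l < maxIter && Norm[W.W - id, Infinity] > tol,
      Winv = Inverse[W];
      mu = Sqrt[Norm[Winv, Infinity]/Norm[W, Infinity]];
      W = (mu W + Winv/mu)/2; l++];   (* scaled Newton step (6) *)
    W]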
To improve the convergence, acceleration, and scaling of iteration (5), Kenney and Laub [3,4] suggested a family of matrix iterations by employing the Padé approximant to the function
$$H(x) = (1 - x)^{-1/2}, \qquad (7)$$
where $x = 1 - z^2$, taking into consideration
$$\operatorname{sign}(z) = z(1 - x)^{-1/2}. \qquad (8)$$
Let the $(r,s)$-Padé approximant to $H(x)$ with $r + s \geq 1$ be denoted by $\frac{P_{r,s}(x)}{Q_{r,s}(x)}$; then
$$w_{l+1} = \frac{w_l\, P_{r,s}(1 - w_l^2)}{Q_{r,s}(1 - w_l^2)} =: \psi_{2r+1,2s} \qquad (9)$$
converges to $\pm 1$ if $r \geq s - 1$, with convergence rate $r + s + 1$. So, the $(r,s)$-Padé approximant iteration ($PA_{r,s}$) for computing $\operatorname{sign}(A)$ is
$$W_{l+1} = W_l\, [P_{r,s}(I - W_l^2)]\, [Q_{r,s}(I - W_l^2)]^{-1}. \qquad (10)$$
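As an illustration, scalar members of the family (9) can be generated with Mathematica's built-in PadeApproximant; the helper name padeIteration is ours.

(* Sketch: build the scalar Kenney-Laub iteration (9) for given r, s by applying the
   (r,s)-Padé approximant of H(x) = (1 - x)^(-1/2) at x = 1 - w^2. *)
padeIteration[r_Integer, s_Integer, w_] := Module[{x},
  Simplify[w (PadeApproximant[(1 - x)^(-1/2), {x, 0, {r, s}}] /. x -> 1 - w^2)]]

padeIteration[1, 0, w]   (* -> w (3 - w^2)/2, the Newton-Schulz map (11) *)
padeIteration[0, 1, w]   (* -> 2 w/(1 + w^2), the member psi_{1,2} *)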
These iterations were later investigated by Gomilko et al. [5]. Other popular iterative approaches for $\operatorname{sign}(A)$, such as the Newton–Schulz method ($NSM$) [6]
$$W_{l+1} = \frac{1}{2} W_l (3I - W_l^2), \qquad (11)$$
and Halley's method ($HM$) [2]
$$W_{l+1} = [I + 3W_l^2]\, [W_l(3I + W_l^2)]^{-1}, \qquad (12)$$
are either members of the Padé family (10) or of its reciprocal family.
A recent fifth-order iterative method [7] (denoted Lotfi's method ($LM$)) for $\operatorname{sign}(A)$ is given by
$$W_{l+1} = [W_l(18I - 20W_l^2 - 30W_l^4)]\, [5I + 15W_l^2 - 45W_l^4 - 7W_l^6]^{-1}. \qquad (13)$$
For more recent literature on the matrix sign function, see [8,9,10].
Motivated by the construction of efficient iterative methods like those discussed above, and to overcome some of the drawbacks of the existing iterative methods, here we aim to propose a globally convergent iterative method for the approximation of $\operatorname{sign}(A)$. The matrix sign function has several applications in scientific computing (see, e.g., [11,12]). Other applications include solving various matrix equations [13], solving stochastic differential equations [14], and many more.
In this article, we present a fifth-order iterative scheme for the evaluation of $\operatorname{sign}(A)$. The outline of the rest of the article is as follows. In Section 2, the new iterative scheme for evaluating the matrix sign function is constructed. In Section 3, basins of attraction confirm the global convergence of the developed method; the convergence analysis establishing the fifth order of convergence is then discussed, along with a stability analysis. The proposed method is extended to determine the generalized eigenvalues of a regular matrix pencil in Section 4. Computational complexity is examined in Section 5. Numerical examples are provided to evaluate the performance and demonstrate the numerical precision of the suggested method in Section 6. The final conclusion is presented in Section 7.

2. A Novel Iterative Method

In this section, we develop an iterative method for the evaluation of $\operatorname{sign}(A)$. The primary goal is to construct the scheme by examining the factorization patterns and symmetry of existing schemes rather than by using a non-linear equation solver [13]. We aim to design an iterative formula that does not coincide with the Padé family members (9) or their reciprocals.
If we assume the eigenvalues to converge to $1$ or $-1$, and we aim at a higher convergence order of five, then the iterative formula must satisfy the equation
$$\frac{w_{l+1} - 1}{w_{l+1} + 1} = \frac{(w_l - 1)^5}{(w_l + 1)^5}. \qquad (14)$$
If we solve Equation (14) for $w_{l+1}$, then we get
$$w_{l+1} = \frac{w_l(5 + 10w_l^2 + w_l^4)}{1 + 10w_l^2 + 5w_l^4}. \qquad (15)$$
Nevertheless, the formula above is not a novel one, as it is a member of the Padé family (9). So, we modify (14) so that the resulting formula behaves better in terms of convergence:
$$\frac{w_{l+1} - 1}{w_{l+1} + 1} = \frac{(w_l - 1)^5 (4 - w_l)}{(w_l + 1)^5 (4 + w_l)}, \qquad (16)$$
provided that $\left|\frac{4 - w_l}{4 + w_l}\right| < 1$. Solving (16) for $w_{l+1}$, we get
$$w_{l+1} = \frac{w_l(21 + 50w_l^2 + 9w_l^4)}{4 + 45w_l^2 + 30w_l^4 + w_l^6}, \qquad (17)$$
which contains a polynomial of degree 5 in the numerator and a polynomial of degree 6 in the denominator. From (9), it is clear that the Padé family has a unique member with a numerator of degree $2r+1$ and a denominator of degree $2s$ for each $r, s \geq 0$. Setting $2r + 1 = 5$ and $2s = 6$ gives $r = 2$ and $s = 3$, i.e., $PA_{2,3}$, which has order $2 + 3 + 1 = 6$, different from the order 5 that we achieve. So, formula (17) is not a member of the family (9) or of its reciprocal family, and our proposed method ($PM$) is therefore as follows:
$$W_{l+1} = [W_l(21I + 50W_l^2 + 9W_l^4)]\, [4I + 45W_l^2 + 30W_l^4 + W_l^6]^{-1}, \qquad (18)$$
where $W_0$ is an appropriate initial guess, which we discuss in Section 3.
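A minimal Mathematica sketch of (18) is given below; the name PMSign, the default tolerance, and the use of the stopping rule (45) with the $l_\infty$ norm are our illustrative choices. The matrix powers are reused so that each iteration performs a small, fixed number of matrix–matrix multiplications and one inversion, in line with the cost model of Section 5.

(* Sketch of the proposed method PM (18); assumes a numerical square matrix A
   with no purely imaginary eigenvalues. *)
PMSign[A_?SquareMatrixQ, tol_: 10.^-10, maxIter_: 100] :=
  Module[{W = A, W2, W4, W6, id = IdentityMatrix[Length[A]], l = 0},
    W2 = W.W;
    While[l < maxIter && Norm[W2 - id, Infinity] > tol,
      W4 = W2.W2; W6 = W4.W2;
      W = (W.(21 id + 50 W2 + 9 W4)).Inverse[4 id + 45 W2 + 30 W4 + W6];
      W2 = W.W; l++];   (* W2 is reused by the stopping test *)
    W]

(* Usage: S = PMSign[RandomReal[{-5, 5}, {100, 100}]]; then S.S is close to IdentityMatrix[100]. *)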
Remark 1.
Convergence can be slow if $\|W_l\| \gg 1$ for some l. In that situation, it is advisable to introduce the scaling parameter $\mu_l$ [15], given as
$$\mu_l = \begin{cases} \sqrt{\dfrac{\|W_l^{-1}\|}{\|W_l\|}} & \text{(norm scaling)}, \\[1mm] \sqrt{\dfrac{\rho(W_l^{-1})}{\rho(W_l)}} & \text{(spectral scaling)}, \\[1mm] |\det(W_l)|^{-1/n} & \text{(determinant scaling)}, \end{cases} \qquad (19)$$
to speed up the convergence [16] of the scheme.
Hence, we also present the accelerated proposed method ($APM$), the accelerated version of (18), given by
$$W_0 = A, \quad \mu_l = \text{scaling parameter computed by (19)}, \quad W_{l+1} = [\mu_l W_l(21I + 50\mu_l^2 W_l^2 + 9\mu_l^4 W_l^4)]\, [4I + 45\mu_l^2 W_l^2 + 30\mu_l^4 W_l^4 + \mu_l^6 W_l^6]^{-1}, \qquad (20)$$
where $\lim_{l \to \infty} \mu_l = 1$ and $\lim_{l \to \infty} W_l = S = \operatorname{sign}(A)$.
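The three choices in (19) and one accelerated step of (20) can be sketched as follows; the helper names scaleMu and APMStep are ours, and the identity $\rho(W^{-1}) = 1/\min_i |\lambda_i(W)|$ is used to avoid inverting $W_l$ for the spectral-scaling parameter. Note that (20) is simply (18) applied to the rescaled iterate $\mu_l W_l$, which is why $\mu_l \to 1$ leaves the limit $S$ unchanged.

(* Sketch of the scaling parameters (19). *)
scaleMu[W_, "norm"] := Sqrt[Norm[Inverse[W], Infinity]/Norm[W, Infinity]]
scaleMu[W_, "spectral"] := Sqrt[1/(Min[Abs[Eigenvalues[W]]] Max[Abs[Eigenvalues[W]]])]
scaleMu[W_, "determinant"] := Abs[Det[W]]^(-1/Length[W])

(* One accelerated step (20): apply the PM map (18) to mu*W. *)
APMStep[W_, mu_] :=
  Module[{V = mu W, V2, V4, V6, id = IdentityMatrix[Length[W]]},
    V2 = V.V; V4 = V2.V2; V6 = V4.V2;
    (V.(21 id + 50 V2 + 9 V4)).Inverse[4 id + 45 V2 + 30 V4 + V6]]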

3. Convergence and Stability Analysis

In this section, we discuss the convergence analysis to ensure that the sequence generated by (18) converges to $\operatorname{sign}(A)$ for a matrix $A \in \mathbb{C}^{n \times n}$. We draw the basins of attraction to confirm the global convergence of the proposed scheme and compare it with existing schemes for solving $w^2 - 1 = 0$ [17].
We consider the region $R = [-2, 2] \times [-2, 2] \subset \mathbb{C}$, discretized by a $256 \times 256$ grid; the equation has two simple roots, $-1$ and $1$. Convergence was determined by the stopping criterion $|f(w_l)| \leq 10^{-2}$. The exact locations of the roots are shown by two white dots within the basins. The dark blue color (left side of the basins) represents the convergence region of the root $w = -1$, and the light blue color (right side of the basins) represents the convergence region of the root $w = 1$.
Figure 1 illustrates the basins for $NM$ (5), $NSM$ (11), $HM$ (12), $PA_{r,s}$ (10), and the proposed method $PM$ (18). It is evident from Figure 1b,d that the methods $NSM$ and $PA_{3,1}$ exhibit local convergence, while the other methods exhibit global convergence. Moreover, Figure 1e,f illustrate broader and lighter basins of attraction, owing to the higher convergence order of $PA_{2,2}$ and $PM$, as compared to the basins of $NM$ and $HM$ shown in Figure 1a,c. One can notice the symmetry in the convergence region of each method: the convergence regions are symmetric about the imaginary axis.
After achieving global convergence of the proposed method, it is important to examine certain critical factors related to matrix iterations, including convergence rate and numerical stability.

3.1. Convergence Analysis

Theorem 1.
Suppose that the initial matrix $W_0 = A \in \mathbb{C}^{n \times n}$ has no eigenvalue on the imaginary axis. Then the sequence of matrices $\{W_l\}_{l=0}^{\infty}$ generated by (18) converges to $\operatorname{sign}(A) = S$.
Proof. 
Let $W = PJP^{-1}$ be the Jordan decomposition of a matrix $W \in \mathbb{C}^{n \times n}$ and consider
$$R(W) = P R(J) P^{-1}, \qquad (21)$$
where $R$ is the rational function associated with (18). Consequently, an eigenvalue $\lambda$ of $W_l$ is mapped to the eigenvalue $R(\lambda)$ of $W_{l+1}$. Also, $R$ satisfies the following properties:
  • $\operatorname{sign}(R(w)) = \operatorname{sign}(w)$ for all $w \in \mathbb{C}$;
  • $w_{l+1} = R(w_l)$ with $w_0 = w$ converges to $\operatorname{sign}(w)$, where $\operatorname{Re}(w) \neq 0$.
To achieve this objective, consider $A = P\Lambda P^{-1}$ as the Jordan decomposition of A, which can be written as [2]
$$P^{-1} A P = \Lambda = \begin{pmatrix} C & 0 \\ 0 & N \end{pmatrix}, \qquad (22)$$
where $P$ is a nonsingular matrix, while the Jordan blocks $C$ and $N$ are associated with the eigenvalues in $\mathbb{C}^-$ and $\mathbb{C}^+$, respectively. Let the principal diagonals of the blocks $C$ and $N$ carry the values $\lambda_1, \lambda_2, \ldots, \lambda_p$ and $\lambda_{p+1}, \lambda_{p+2}, \ldots, \lambda_n$, respectively. Then
$$\operatorname{sign}(A) = P \begin{pmatrix} -I_p & 0 \\ 0 & I_{n-p} \end{pmatrix} P^{-1}. \qquad (23)$$
Therefore,
$$\operatorname{sign}(\Lambda) = \operatorname{sign}(P^{-1} A P) = P^{-1} \operatorname{sign}(A)\, P = \operatorname{diag}\big(\operatorname{sign}(\lambda_1), \ldots, \operatorname{sign}(\lambda_p), \operatorname{sign}(\lambda_{p+1}), \ldots, \operatorname{sign}(\lambda_n)\big). \qquad (24)$$
Now, define the sequence $D_l = P^{-1} W_l P$ with $D_0 = P^{-1} A P$. Then, from (18), we get
$$D_{l+1} = [D_l(21I + 50D_l^2 + 9D_l^4)]\, [4I + 45D_l^2 + 30D_l^4 + D_l^6]^{-1}. \qquad (25)$$
By mathematical induction, every $D_l$, $l = 1, 2, 3, \ldots$, is diagonal whenever $D_0$ is diagonal. The case in which $D_0$ is not diagonal is treated by the argument given at the end of the proof.
Now, (25) can be written for the scalar case $f(w) = w^2 - 1 = 0$ as follows:
$$d_{l+1}^i = \frac{d_l^i\big(21 + 50(d_l^i)^2 + 9(d_l^i)^4\big)}{4 + 45(d_l^i)^2 + 30(d_l^i)^4 + (d_l^i)^6}, \qquad (26)$$
where $d_l^i = (D_l)_{i,i}$ and $1 \leq i \leq n$. Furthermore, $\operatorname{sign}(\lambda_i) = s_i$ with $s_i = \pm 1$, and therefore, from (26),
$$\frac{d_{l+1}^i - s_i}{d_{l+1}^i + s_i} = \left(\frac{d_l^i - s_i}{d_l^i + s_i}\right)^{5} \frac{4s_i - d_l^i}{4s_i + d_l^i}. \qquad (27)$$
The factor $\frac{4s_i - d_l^i}{4s_i + d_l^i}$ is bounded for $1 \leq i \leq n$ and does not affect the convergence. Furthermore, by selecting the initial matrix $W_0 = A$ so that $\left|\frac{d_0^i - s_i}{d_0^i + s_i}\right| < 1$, we get
$$\lim_{l \to \infty} \left|\frac{d_{l+1}^i - s_i}{d_{l+1}^i + s_i}\right| = 0, \qquad (28)$$
implying that $\lim_{l \to \infty} d_l^i = s_i = \operatorname{sign}(\lambda_i)$ and hence $\lim_{l \to \infty} D_l = \operatorname{sign}(\Lambda)$.
In the case when $D_0$ is not a diagonal matrix, it is essential to investigate the relation between the eigenvalues of the $W_l$'s, which was briefly explained at the start of the proof:
$$\lambda_{l+1}^i = \frac{\lambda_l^i\big(21 + 50(\lambda_l^i)^2 + 9(\lambda_l^i)^4\big)}{4 + 45(\lambda_l^i)^2 + 30(\lambda_l^i)^4 + (\lambda_l^i)^6}. \qquad (29)$$
In a comparable manner, it is clear from (29) that the $\lambda^i$'s converge to $s_i = \pm 1$, i.e.,
$$\lim_{l \to \infty} \left|\frac{\lambda_{l+1}^i - s_i}{\lambda_{l+1}^i + s_i}\right| = 0. \qquad (30)$$
As a result, it is easy to conclude that
$$\lim_{l \to \infty} W_l = P \left(\lim_{l \to \infty} D_l\right) P^{-1} = P \operatorname{sign}(\Lambda)\, P^{-1} = \operatorname{sign}(A), \qquad (31)$$
which finishes the proof.    □
Theorem 2.
Under the assumptions of Theorem 1, the matrix sequence $\{W_l\}_{l=0}^{\infty}$ generated by (18) converges to $\operatorname{sign}(A) = S$ with rate five.
Proof. 
Since $W_l$ is a rational function of $A$ for every $l \geq 0$ and $A$ commutes with $S$, each $W_l$ also commutes with $S$. Taking into consideration the substitution
$$B_l = 4I + 45W_l^2 + 30W_l^4 + W_l^6, \qquad (32)$$
we get
$$\begin{aligned} W_{l+1} - S &= [W_l(21I + 50W_l^2 + 9W_l^4)]\, B_l^{-1} - S \\ &= [21W_l + 50W_l^3 + 9W_l^5 - SB_l]\, B_l^{-1} \\ &= [21W_l + 50W_l^3 + 9W_l^5 - 4S - 45SW_l^2 - 30SW_l^4 - SW_l^6]\, B_l^{-1} \\ &= [-4S^5 + 21S^4W_l - 45S^3W_l^2 + 50S^2W_l^3 - 30SW_l^4 + 9W_l^5 - SW_l^6]\, B_l^{-1} \\ &= (W_l - S)^5 (4I - SW_l)\, B_l^{-1}. \end{aligned} \qquad (33)$$
Applying any matrix norm to both sides of (33), we obtain
$$\|W_{l+1} - S\| \leq \|4I - SW_l\|\, \|B_l^{-1}\|\, \|W_l - S\|^5. \qquad (34)$$
This completes the proof.    □
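The fifth-order behavior predicted by (34) can also be checked empirically. The sketch below is ours; it reuses the PMSign and APMStep helpers defined in Section 2, assumes the sampled matrix has no eigenvalues too close to the imaginary axis, and uses 50-digit arithmetic so that the ratio $\|W_{l+1} - S\|/\|W_l - S\|^5$ is observable over a few iterations before the error reaches the working precision.

(* Empirical check of the fifth-order estimate (34) on a random 10 x 10 matrix;
   the printed ratios should remain bounded as W_l -> S. *)
SeedRandom[7];
A = RandomReal[{-2, 2}, {10, 10}, WorkingPrecision -> 50];
S = PMSign[A, 10^-45];                 (* high-accuracy reference for sign(A) *)
W = A;
Do[Wnew = APMStep[W, 1];               (* one PM step, mu = 1 *)
   Print[N[Norm[Wnew - S, Infinity]/Norm[W - S, Infinity]^5, 5]];
   W = Wnew, {4}]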

3.2. Stability Issues

Theorem 3.
Under the assumptions of Theorem 1, the sequence of matrix iterates $\{W_l\}_{l=0}^{\infty}$ generated by (18) is asymptotically stable.
Proof. 
The iterate obtained from (18) is a function of A because $W_0 = A$, and hence it commutes with A. Let $\Delta_l$ be the perturbation introduced at the l-th iterate of (18). Then
$$\widetilde{W}_l = W_l + \Delta_l. \qquad (35)$$
In keeping with a first-order error analysis, we assume $(\Delta_l)^j \approx 0$ for $j \geq 2$; this customary simplification is valid as long as $\Delta_l$ is small enough. Suppose also that $W_l \approx \operatorname{sign}(A) = S$ for sufficiently large l. Then we get
$$\begin{aligned} \widetilde{W}_{l+1} &= [\widetilde{W}_l(21I + 50\widetilde{W}_l^2 + 9\widetilde{W}_l^4)]\, [4I + 45\widetilde{W}_l^2 + 30\widetilde{W}_l^4 + \widetilde{W}_l^6]^{-1} \\ &= [(W_l + \Delta_l)(21I + 50(W_l + \Delta_l)^2 + 9(W_l + \Delta_l)^4)]\, [4I + 45(W_l + \Delta_l)^2 + 30(W_l + \Delta_l)^4 + (W_l + \Delta_l)^6]^{-1} \\ &\approx [80S + 148\Delta_l + 68S\Delta_l S]\, [80I + 108S\Delta_l + 108\Delta_l S]^{-1} \\ &\approx \left[S + \tfrac{37}{20}\Delta_l + \tfrac{17}{20}S\Delta_l S\right] \left[I - \tfrac{27}{20}S\Delta_l - \tfrac{27}{20}\Delta_l S\right] \\ &\approx S + \tfrac{1}{2}\Delta_l - \tfrac{1}{2}S\Delta_l S. \end{aligned} \qquad (36)$$
Here, the identity [18]
$$(B + C)^{-1} = B^{-1} - B^{-1} C B^{-1} + O(\|C\|^2) \qquad (37)$$
has been applied, where B is any nonsingular matrix and C is any matrix. Also, setting $\Delta_{l+1} = \widetilde{W}_{l+1} - W_{l+1}$, we get
$$\Delta_{l+1} \approx \frac{1}{2}\Delta_l - \frac{1}{2}S\Delta_l S. \qquad (38)$$
At this point, it is possible to deduce that $\Delta_{l+1}$ is bounded, i.e.,
$$\|\Delta_{l+1}\| \leq \frac{1}{2}\|\Delta_0 - S\Delta_0 S\|, \qquad (39)$$
which completes the proof.    □

4. Extension to Determine Generalized Eigenvalues of a Matrix Pencil

Let $A, B \in \mathbb{C}^{n \times n}$. Then $\lambda \in \mathbb{C}$ is said to be a generalized eigenvalue of the matrix pencil $A - \lambda B$ if there exists a non-zero vector $x \in \mathbb{C}^n$ such that
$$Ax = \lambda Bx. \qquad (40)$$
Here, x is a generalized eigenvector corresponding to λ, and (40) is referred to as a generalized eigenvalue problem. It can be solved by finding the zeros of the characteristic polynomial $\det(A - \lambda B)$.
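For reference, Mathematica can solve (40) directly: Eigenvalues[{A, B}] returns the generalized eigenvalues of the pencil, including infinite ones when B is singular, which is useful as a check on the sign-based algorithm developed below. A tiny sketch:

(* Generalized eigenvalues of a 2 x 2 pencil; here det(A - lambda B) = 3 (2 - lambda),
   so one eigenvalue is finite and one is infinite. *)
A = {{2., 1.}, {0., 3.}};
B = {{1., 0.}, {0., 0.}};   (* singular B *)
Eigenvalues[{A, B}]         (* one eigenvalue 2., one Infinity *)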
The characteristic polynomial and the eigenproblem first appeared in the solution of complex problems in physics; for recent coverage of the topic, see [19]. When more generality is needed, the generalized eigenvalue problem takes the place of the classical approach [20].
Problem (40) arises in several fields of numerical linear algebra. However, the dimensions of the matrices A and B are typically quite large, which further complicates the situation. Many approaches have been proposed for solving (40), either by seeking the necessary eigenvectors in a smaller subspace or a series of matrix pencils [21,22,23], or by using matrix decompositions [24,25,26]. Our interest is in employing the matrix sign function: certain matrix sign computations based on the given pencil yield the required decomposition and hence solve problem (40).
To solve problem (40) for regular matrix pencils using the matrix sign function, we apply a rank-revealing QR ($RRQR$) decomposition [27] after computing the signs of two matrices that depend on the given pencil.
Theorem 4.
Let $A, B \in \mathbb{C}^{n \times n}$ be such that $A - \lambda B$ is a regular matrix pencil, and let $U_1$ and $U_2$ be defined as
$$U_1 = \frac{I - S_1}{2}, \quad U_2 = \frac{I - S_2}{2}, \qquad (41)$$
where $S_1$ and $S_2$ are the sign functions given by
$$S_1 = \operatorname{sign}\big((A - B)^{-1}(A + B)\big), \quad S_2 = \operatorname{sign}\big((A + B)(A - B)^{-1}\big). \qquad (42)$$
If $U_1 \Pi_1 = Q_1 R_1$ and $U_2 \Pi_2 = Q_2 R_2$ are $RRQR$ decompositions of $U_1$ and $U_2$ such that $Q_2^* A Q_1$ and $Q_2^* B Q_1$ are upper triangular, then
$$\lambda_i = \frac{\alpha_{ii}}{\beta_{ii}}, \qquad (43)$$
where the $\lambda_i$'s are the eigenvalues of $A - \lambda B$, and $\alpha_{ii}$ and $\beta_{ii}$ are the diagonal entries of $Q_2^* A Q_1$ and $Q_2^* B Q_1$, respectively, for $i = 1, 2, \ldots, n$. Moreover, if B is non-invertible, then $\beta_{ii} = 0$ for some i, which implies that some eigenvalues of $A - \lambda B$ are infinite.
Let τ denote the tolerance value and $max$ the maximum number of iterations. Then, in accordance with the proposed method (18) and Theorem 4, we propose Algorithm 1 to find the eigenvalues of a regular matrix pencil $A - \lambda B$.
Algorithm 1 Evaluating generalized eigenvalues of $A - \lambda B$
Require: $A$, $B$, $max$, $\tau$
Ensure: $S_1$, $S_2$, $\lambda_i$'s
1: Initialize $W_0 = (A - B)^{-1}(A + B)$.
2: while $l \leq max$ and $\|W_l^2 - I\|_* > \tau$ do
3:     Apply method (18).
4:     return $W = \operatorname{sign}((A - B)^{-1}(A + B)) = S_1$.
5: end while
6: Construct the $RRQR$ decomposition of $U_1 = \frac{I - S_1}{2}$ as follows: $U_1 \Pi_1 = Q_1 R_1$.
7: Initialize $W_0 = (A + B)(A - B)^{-1}$.
8: Repeat steps 2–5 to return $W = \operatorname{sign}((A + B)(A - B)^{-1}) = S_2$.
9: Construct the $RRQR$ decomposition of $U_2 = \frac{I - S_2}{2}$ as follows: $U_2 \Pi_2 = Q_2 R_2$.
10: Compute the two matrices $Q_2^* A Q_1 = [\alpha_{ij}]$ and $Q_2^* B Q_1 = [\beta_{ij}]$, where $1 \leq i, j \leq n$.
11: if $\alpha_{ij} = \beta_{ij} = 0$ for all $i > j$ then
12:     for $i = 1 : n$ do
13:         $\lambda_i = \alpha_{ii} / \beta_{ii}$.
14:     end for
15: end if
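A compact Mathematica transcription of Algorithm 1 is sketched below. It reuses PMSign from Section 2; the name GeneralizedEigsPM is ours. We use QRDecomposition with the Pivoting -> True option as the RRQR step (in Mathematica's convention this returns {q, r, p} with m.p == ConjugateTranspose[q].r, so Q = ConjugateTranspose[q]); any other rank-revealing QR routine could be substituted. This is an illustrative sketch for regular pencils with $A - B$ invertible, not a robust implementation.

(* Sketch of Algorithm 1: generalized eigenvalues of A - lambda B via matrix sign
   computations and a pivoted (rank-revealing) QR decomposition. *)
GeneralizedEigsPM[A_?SquareMatrixQ, B_?SquareMatrixQ, tol_: 10.^-10] :=
  Module[{n = Length[A], S1, S2, q1, q2, r1, r2, p1, p2, Q1, alpha, beta},
    S1 = PMSign[Inverse[A - B].(A + B), tol];        (* steps 1-5 *)
    S2 = PMSign[(A + B).Inverse[A - B], tol];        (* steps 7-8 *)
    {q1, r1, p1} = QRDecomposition[(IdentityMatrix[n] - S1)/2, Pivoting -> True];  (* step 6 *)
    {q2, r2, p2} = QRDecomposition[(IdentityMatrix[n] - S2)/2, Pivoting -> True];  (* step 9 *)
    Q1 = ConjugateTranspose[q1];
    alpha = q2.A.Q1; beta = q2.B.Q1;                 (* step 10: q2 equals Q2^* *)
    Table[If[Abs[beta[[i, i]]] > tol,                (* steps 11-15; tiny beta[[i,i]] *)
        alpha[[i, i]]/beta[[i, i]], Infinity],       (* signals an infinite eigenvalue *)
      {i, n}]]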

5. Computational Complexity

The computational efficiency index ($CEI$) [28] of an iterative method can be defined as follows:
$$CEI = p^{1/C}. \qquad (44)$$
Here, p denotes the convergence order and C denotes the computational cost per iteration.
The exponent of matrix multiplication achievable in current programming packages is approximately 2.373 [29]. For a matrix of size n, we therefore take the cost of one matrix–matrix multiplication to be $n^{2.373}$ and the cost of one matrix inversion to be $\frac{2n^3}{3}$. Additionally, we do not take into account the per-iteration cost of norm computation (in $ANM$ (6)) or of additions and subtractions. The $CEI$'s of the methods $NM$ (5), $ANM$ (6), $HM$ (12), and $PM$ (18) for computing $\operatorname{sign}(A)$ are given in Table 1.
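For illustration, the indices of Table 1 can be evaluated at a given size; the snippet below is ours and simply codes (44) with the stated cost model ($ANM$ is omitted here).

(* Evaluate the efficiency index (44) with cost n^2.373 per multiplication
   and 2 n^3/3 per inversion. *)
mm[n_] := n^2.373; inv[n_] := 2. n^3/3;
cei[p_, cost_] := p^(1/cost);
With[{n = 100.},
  {cei[2, mm[n] + inv[n]],      (* NM *)
   cei[3, 3 mm[n] + inv[n]],    (* HM *)
   cei[5, 5 mm[n] + inv[n]]}]   (* PM *)

Since all the indices are extremely close to 1, it is often more convenient to compare $\log CEI = (\log p)/C$, which preserves the ordering.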
Figure 2 presents a comparison of the efficiencies of several approaches, demonstrating the greater efficiency index of the suggested method.

6. Numerical Aspects

In this section, we examine a variety of examples to evaluate our proposed method by comparing it with various existing techniques. For this study, we assume the following stopping criterion:
$$\|W_l^2 - I\|_* \leq \tau, \qquad (45)$$
where τ stands for the tolerance value and $\|\cdot\|_*$ is a suitable matrix norm. The $l_\infty$ and $l_2$ norms are used for real and complex input matrices, respectively [14].
For the sake of comparison, we only consider methods that exhibit global convergence; methods with merely local convergence are not included. The methods compared are $NM$ (5), $ANM$ (6), $HM$ (12), $PA_{2,2}$ ((10) with $r = s = 2$), $LM$ (13), $PM$ (18), and $APM$ (20) with spectral scaling (19). All simulations were done in Mathematica on a machine with the following specifications: Windows 11 Home, Intel(R) Core(TM) i7-1255U CPU running at 1.70 GHz, 16.0 GB RAM (64-bit operating system).
Example 1.
Here, we perform the eigenvalue clustering of a random real $500 \times 500$ matrix given as
SeedRandom[100]; A = RandomReal[{-5,5},{500,500}];
The outcomes are shown in Figure 3, using the stopping criterion (45) with the $l_\infty$ norm and $\tau = 10^{-4}$. The eigenvalues of $W_0$ are scattered at the initial stage, as shown in Figure 3a; proceeding through the next iterates $W_i$, shown in Figure 3b–g for $i = 1, 2, \ldots, 6$, respectively, using $PM$ (18), the eigenvalues converge towards $1$ or $-1$. Finally, in Figure 3h, all eigenvalues have converged to $1$ or $-1$ (with respect to the stopping criterion), implying that the sequence of matrices $\{W_l\}_{l=0}^{\infty}$ converges to $\operatorname{sign}(A)$. The result of Theorem 1 is therefore confirmed.
Example 2.
In this example, the convergence of several methods for computing $\operatorname{sign}(A)$ is compared, where A is a random $1000 \times 1000$ complex square matrix given as follows:
SeedRandom[101];
A = RandomComplex[{-1 - 1.5 I, 1 + 1.5 I}, {1000, 1000}]
The comparison is shown in Figure 4, with $\tau = 10^{-4}$ and the $l_2$ norm; it reports the absolute error against the number of iterations and demonstrates the consistent behavior of both $PM$ and $APM$.
Example 3.
This example presents a comparison of the convergence of several methods for computing $\operatorname{sign}(A)$, where A is a random $2000 \times 2000$ real square matrix given as
 
SeedRandom[111];
A = RandomReal[{-25, 25}, {2000, 2000}];
The comparison is presented in Figure 5, with $\tau = 10^{-4}$ and the $l_\infty$ norm; it reports the absolute error against the number of iterations and demonstrates the consistent behavior of both $PM$ and $APM$ for evaluating $\operatorname{sign}(A)$.
Example 4.
Here, we examine the convergence of various methods with regard to the iteration number and the CPU timings for evaluating the sign function of ten random matrices:
 
SeedRandom[121]; number = 10; tolerance = 10^(-5);
Table[A[j] = RandomComplex[{-3 - 2 I, 3 + 2 I}, {50j, 50j}];, {j, number}];
The comparative analyses based on iteration number and CPU time for matrices of various sizes are presented in Table 2 and Table 3, respectively, using $\tau = 10^{-5}$ and the $l_2$-norm. The new methods $PM$ and $APM$ reduce the average iteration count, while $PM$ also yields the lowest average CPU time. The outcomes demonstrate that the suggested method $PM$ is significantly better than other fifth-order methods such as $PA_{2,2}$ and $LM$.
Now, in Examples 5 and 6, we evaluate the generalized eigenvalues of matrix pencils using the matrix sign function; we use Algorithm 1 for the different methods by replacing (18) in Step 3 with the corresponding existing method.
Example 5.
Let $A - \lambda B$ be a matrix pencil, where $A, B \in \mathbb{R}^{100 \times 100}$ are taken from [30]: the nonzero entries of A are $\frac{99}{100}, \frac{1}{100}, \frac{98}{100}, \ldots, \frac{1}{100}, \frac{1}{100}$, and the only nonzero block of B is the identity block $I_{20}$ of size 20.
The eigenvalues of the given pencil $A - \lambda B$ are $\lambda_i = \frac{i-1}{100}$ for $i = 1, 2, \ldots, 20$, and the remaining eigenvalues are infinite. We verify these eigenvalues using the proposed method and Algorithm 1. We compute the matrices $Q_2^* A Q_1 = [\alpha_{ij}]_{100 \times 100}$ and $Q_2^* B Q_1 = [\beta_{ij}]_{100 \times 100}$ and then calculate the $\lambda_i$'s from Algorithm 1. The outcomes are presented in Table 4 for $i = 1, 2, \ldots, 20$.
Moreover, $\beta_{ii} = 0$ for $21 \leq i \leq 100$, which implies that $\lambda_i = \infty$ for $21 \leq i \leq 100$. Hence, this example verifies the results given in Theorem 4.
Example 6 (Bounded finline dielectric waveguide problem).
The generalized eigenvalue problem (40) arises in the finite element analysis for finding the propagating modes of a rectangular waveguide. We consider three pairs of matrices A and B: (BFW62A, BFW62B), (BFW398A, BFW398B), and (BFW782A, BFW782B) from the Matrix Market [31].
Table 5 shows the outcomes in terms of the iteration numbers $l_1$ and $l_2$ required for computing $S_1 = \operatorname{sign}((A - B)^{-1}(A + B))$ and $S_2 = \operatorname{sign}((A + B)(A - B)^{-1})$, with the corresponding absolute errors $\epsilon_{l_1} = \|W_{l_1}^2 - I\|$ and $\epsilon_{l_2} = \|W_{l_2}^2 - I\|$, using the stopping criterion (45) with $\tau = 10^{-10}$. Also, the total computational time (in seconds) for computing $S_1$ and $S_2$ is compared in Figure 6. The outcomes demonstrate that the suggested method performs better with respect to both errors and timings.

7. Conclusions

This paper has discussed an iterative approach for evaluating the sign of a matrix. The proposed iterative method attains a convergence order of five, showcasing its efficiency in evaluating the sign of complex matrices. Through a comprehensive global convergence analysis and the demonstration of asymptotic stability, we have established the robustness and reliability of the proposed approach. An algorithm has then been designed for determining the generalized eigenvalues of a regular matrix pencil utilizing the matrix sign computation. Furthermore, extensive numerical experiments performed on matrices of different sizes have validated the effectiveness and superiority of our method over existing techniques.

Author Contributions

Conceptualization, M.K.; methodology, M.K.; software, P.S.; validation, M.K. and V.S.; formal analysis, L.J.; investigation, P.S.; resources, L.J.; data curation, L.J.; writing—original draft preparation, P.S.; writing—review and editing, L.J.; visualization, V.S.; supervision, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to sincerely thank the editor and anonymous reviewers for their constructive and valuable suggestions that have helped in improving the quality of the manuscript.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Roberts, J.D. Linear model reduction and solution of the algebraic Riccati equation by use of the sign function. Int. J. Control 1980, 32, 677–687.
2. Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008.
3. Kenney, C.; Laub, A.J. Rational iterative methods for the matrix sign function. SIAM J. Matrix Anal. Appl. 1991, 12, 273–291.
4. Kenney, C.S.; Laub, A.J. The matrix sign function. IEEE Trans. Autom. Control 1995, 40, 1330–1348.
5. Gomilko, O.; Greco, F.; Ziȩtak, K. A Padé family of iterations for the matrix sign function and related problems. Numer. Linear Algebra Appl. 2012, 19, 585–605.
6. Schulz, G. Iterative Berechnung der reziproken Matrix. Z. Angew. Math. Mech. 1933, 13, 57–59.
7. Momenzadeh, M.; Lotfi, T. New iterative methods for finding matrix sign function: Derivation and application. Comput. Appl. Math. 2019, 38, 53.
8. Ullah, M.Z.; Alaslani, S.M.; Mallawi, F.O.; Ahmad, F.; Shateyi, S.; Asma, M. A fast and efficient Newton-type iterative scheme to find the sign of a matrix. AIMS Math. 2023, 8, 19264–19274.
9. Wang, X.; Cao, Y. A numerically stable high-order Chebyshev-Halley type multipoint iterative method for calculating matrix sign function. AIMS Math. 2023, 8, 12456–12471.
10. Liu, T.; Ullah, M.Z.; Alshahrani, K.M.A.; Shateyi, S. From fractal behavior of iteration methods to an efficient solver for the sign of a matrix. Fractal Fract. 2023, 7, 32.
11. Benner, P.; Quintana-Ortí, E.S. Solving stable generalized Lyapunov equations with the matrix sign function. Numer. Algorithms 1999, 20, 75–100.
12. Howland, J.L. The sign matrix and the separation of matrix eigenvalues. Linear Algebra Appl. 1983, 49, 221–232.
13. Soleymani, F.; Kumar, A. A fourth-order method for computing the sign function of a matrix with application in the Yang–Baxter-like matrix equation. Comput. Appl. Math. 2019, 38, 64.
14. Soheili, A.R.; Toutounian, F.; Soleymani, F. A fast convergent numerical method for matrix sign function with application in SDEs. J. Comput. Appl. Math. 2015, 282, 167–178.
15. Kenney, C.; Laub, A.J. On scaling Newton's method for polar decomposition and the matrix sign function. SIAM J. Matrix Anal. Appl. 1992, 13, 688–706.
16. Soleymani, F.; Stanimirović, P.S.; Stojanović, I. A novel iterative method for polar decomposition and matrix sign function. Discrete Dyn. Nat. Soc. 2015, 2015, 649423.
17. Iannazzo, B. Numerical Solution of Certain Nonlinear Matrix Equations. Ph.D. Thesis, Dipartimento di Matematica, Università di Pisa, Pisa, Italy, 2007.
18. Henderson, H.V.; Searle, S.R. On deriving the inverse of a sum of matrices. SIAM Rev. 1981, 23, 53–60.
19. Jäntschi, L. Eigenproblem Basics and Algorithms. Symmetry 2023, 15, 2046.
20. Fischer, M.; Kostrzewa, B.; Ostmeyer, J.; Ottnad, K.; Ueding, M.; Urbach, C. On the generalised eigenvalue method and its relation to Prony and generalised pencil of function methods. Eur. Phys. J. A 2020, 56, 206.
21. Ruhe, A. Rational Krylov sequence methods for eigenvalue computation. Linear Algebra Appl. 1984, 58, 391–405.
22. Gallivan, K.; Grimme, E.; Van Dooren, P. Asymptotic waveform evaluation via a Lanczos method. Appl. Math. Lett. 1994, 7, 75–80.
23. Gallivan, K.; Grimme, E.; Van Dooren, P. A rational Lanczos algorithm for model reduction. Numer. Algorithms 1996, 12, 33–63.
24. Moler, C.B.; Stewart, G.W. An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal. 1973, 10, 241–256.
25. Bai, Z.; Demmel, J. Using the matrix sign function to compute invariant subspaces. SIAM J. Matrix Anal. Appl. 1998, 19, 205–225.
26. Sun, X.; Quintana-Ortí, E. Spectral division methods for block generalized Schur decompositions. Math. Comput. 2004, 73, 1827–1847.
27. Quintana-Ortí, G.; Sun, X.; Bischof, C.H. A BLAS-3 version of the QR factorization with column pivoting. SIAM J. Sci. Comput. 1998, 19, 1486–1494.
28. Soleymani, F.; Khaksar Haghani, F.; Shateyi, S. Several numerical methods for computing unitary polar factor of a matrix. Adv. Differ. Equ. 2016, 2016, 4.
29. Le Gall, F. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, Kobe, Japan, 23–25 July 2014; ACM: New York, NY, USA, 2014; pp. 296–303.
30. Sakurai, T.; Sugiura, H. A projection method for generalized eigenvalue problems using numerical integration. J. Comput. Appl. Math. 2003, 159, 119–128.
31. Matrix Market. Available online: https://math.nist.gov/MatrixMarket/ (accessed on 15 January 2024).
Figure 1. Attraction basins of various methods for the equation $w^2 - 1 = 0$.
Figure 2. Comparative analysis of efficiency indices for different methods.
Figure 3. Eigenvalue clustering of the $500 \times 500$ matrix at every iterate in Example 1.
Figure 4. Convergence history of different methods for Example 2.
Figure 5. Convergence history of different methods for Example 3.
Figure 6. Comparison of CPU time for various methods to compute $S_1$ and $S_2$ in Example 6.
Table 1. $CEI$'s of various methods along with their convergence orders.

Method | Convergence Order | $CEI$
$NM$   | 2                 | $2^{1/(n^{2.373} + 2n^3/3)}$
$ANM$  | $\sqrt{3} + 1$    | $(\sqrt{3} + 1)^{1/(n^{2.373} + 2n^3/3)}$
$HM$   | 3                 | $3^{1/(3n^{2.373} + 2n^3/3)}$
$PM$   | 5                 | $5^{1/(5n^{2.373} + 2n^3/3)}$
Table 2. Comparative analysis of the iteration number in Example 4.

Matrix | $NM$ | $ANM$ | $HM$ | $LM$ | $PA_{2,2}$ | $PM$ | $APM$
$A_{50 \times 50}$ | 12 | 11 | 8 | 6 | 6 | 5 | 4
$A_{100 \times 100}$ | 15 | 13 | 9 | 7 | 7 | 6 | 5
$A_{150 \times 150}$ | 15 | 15 | 10 | 8 | 7 | 6 | 5
$A_{200 \times 200}$ | 15 | 15 | 9 | 8 | 7 | 6 | 5
$A_{250 \times 250}$ | 15 | 17 | 10 | 8 | 7 | 6 | 5
$A_{300 \times 300}$ | 14 | 15 | 9 | 7 | 6 | 6 | 5
$A_{350 \times 350}$ | 15 | 16 | 10 | 8 | 7 | 6 | 5
$A_{400 \times 400}$ | 17 | 19 | 11 | 10 | 7 | 7 | 5
$A_{450 \times 450}$ | 14 | 15 | 9 | 7 | 6 | 6 | 5
$A_{500 \times 500}$ | 18 | 21 | 12 | 9 | 8 | 7 | 6
Mean | 15 | 15.7 | 9.7 | 7.8 | 6.8 | 6.1 | 5
Table 3. Comparative analysis of the CPU time (in seconds) in Example 4.

Matrix | $NM$ | $ANM$ | $HM$ | $LM$ | $PA_{2,2}$ | $PM$ | $APM$
$A_{50 \times 50}$ | 0.03 | 0.02 | 0.01 | 0.01 | 0.02 | 0.01 | 0.01
$A_{100 \times 100}$ | 0.07 | 0.15 | 0.05 | 0.05 | 0.09 | 0.05 | 0.10
$A_{150 \times 150}$ | 0.16 | 0.23 | 0.12 | 0.15 | 0.21 | 0.10 | 0.24
$A_{200 \times 200}$ | 0.29 | 0.40 | 0.19 | 0.26 | 0.34 | 0.18 | 0.40
$A_{250 \times 250}$ | 0.47 | 0.67 | 0.32 | 0.39 | 0.53 | 0.25 | 0.60
$A_{300 \times 300}$ | 0.58 | 0.87 | 0.44 | 0.55 | 0.63 | 0.44 | 1.02
$A_{350 \times 350}$ | 0.82 | 1.29 | 0.70 | 0.88 | 1.21 | 0.63 | 1.44
$A_{400 \times 400}$ | 1.20 | 2.04 | 1.46 | 1.78 | 1.62 | 0.99 | 1.70
$A_{450 \times 450}$ | 1.31 | 2.06 | 1.08 | 1.40 | 1.87 | 1.16 | 2.29
$A_{500 \times 500}$ | 2.06 | 3.58 | 1.91 | 2.32 | 3.07 | 1.60 | 3.32
Mean | 0.70 | 1.13 | 0.63 | 0.78 | 0.96 | 0.54 | 1.11
Table 4. Eigenvalues ($\lambda_i$'s) of the matrix pencil $A - \lambda B$ in Example 5.

$i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
$\alpha_{ii}$ | 0.189762 | 0.18 | −0.17 | 0.16 | 0.15 | −0.14 | 0.13 | −0.12 | −0.11 | 0.1
$\beta_{ii}$ | 0.99875 | 1 | −1 | 1 | 1 | −1 | 1 | −1 | −1 | 1
$\lambda_i$ | 0.19 | 0.18 | 0.17 | 0.16 | 0.15 | 0.14 | 0.13 | 0.12 | 0.11 | 0.1

$i$ | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
$\alpha_{ii}$ | −0.09 | 0.08 | −0.07 | 0.05 | −0.03 | 0.02 | 0.01 | 0 | −0.06 | 0.04
$\beta_{ii}$ | −1 | 1 | −1 | 1 | −1 | 1 | 1 | −1 | −1 | 1
$\lambda_i$ | 0.09 | 0.08 | 0.07 | 0.05 | 0.03 | 0.02 | 0.01 | 0 | 0.06 | 0.04
Table 5. Comparison of the results obtained from various methods in Example 6.

Matrices | Method | $l_1$ | $\epsilon_{l_1}$ | $l_2$ | $\epsilon_{l_2}$
BFW62A, BFW62B | $NM$ | 2 | $5.79607 \times 10^{-11}$ | 2 | $5.79607 \times 10^{-11}$
 | $HM$ | 2 | $4.44089 \times 10^{-16}$ | 2 | $4.44089 \times 10^{-16}$
 | $PA_{2,2}$ | 1 | $1.65201 \times 10^{-13}$ | 1 | $1.64757 \times 10^{-13}$
 | $LM$ | 1 | $9.80327 \times 10^{-13}$ | 1 | $9.81437 \times 10^{-13}$
 | $PM$ | 1 | $9.90319 \times 10^{-14}$ | 1 | $1.00364 \times 10^{-13}$
BFW398A, BFW398B | $NM$ | 2 | $6.18883 \times 10^{-12}$ | 2 | $6.18883 \times 10^{-12}$
 | $HM$ | 2 | $4.44089 \times 10^{-16}$ | 2 | $4.44089 \times 10^{-16}$
 | $PA_{2,2}$ | 1 | $1.50990 \times 10^{-14}$ | 1 | $1.50990 \times 10^{-14}$
 | $LM$ | 1 | $9.08162 \times 10^{-14}$ | 1 | $9.23706 \times 10^{-14}$
 | $PM$ | 1 | $1.11022 \times 10^{-14}$ | 1 | $1.11022 \times 10^{-14}$
BFW782A, BFW782B | $NM$ | 2 | $5.82201 \times 10^{-13}$ | 2 | $5.82201 \times 10^{-13}$
 | $HM$ | 2 | $4.44089 \times 10^{-16}$ | 2 | $4.44089 \times 10^{-16}$
 | $PA_{2,2}$ | 1 | $6.43929 \times 10^{-15}$ | 1 | $3.99680 \times 10^{-15}$
 | $LM$ | 1 | $7.99361 \times 10^{-15}$ | 1 | $7.99361 \times 10^{-15}$
 | $PM$ | 1 | $5.32907 \times 10^{-16}$ | 1 | $5.32907 \times 10^{-16}$
