Mean-Square Performance of Adaptive Filter Algorithms in Nonstationary Environments
Abstract: Employing a recently introduced unified adaptive filter theory, we show how the performance of a large number of important adaptive filter algorithms can be predicted within a general framework in a nonstationary environment. This approach is based on energy conservation arguments and does not need to assume a Gaussian or white distribution for the regressors. The general performance analysis can be used to evaluate the mean-square performance of the Least Mean Squares (LMS) algorithm, its normalized version (NLMS), the family of Affine Projection Algorithms (APA), the Recursive Least Squares (RLS), the Data-Reusing LMS (DR-LMS), its normalized version (NDR-LMS), the Block Least Mean Squares (BLMS), the Block Normalized LMS (BNLMS), the Transform Domain Adaptive Filters (TDAF) and the Subband Adaptive Filters (SAF) in a nonstationary environment. We also establish the general expressions for the steady-state excess mean square error in this environment for all these adaptive algorithms. Finally, we demonstrate through simulations that these results are useful in predicting adaptive filter performance.

Keywords: Adaptive filter, general framework, energy conservation, mean-square performance, nonstationary environment.
I. INTRODUCTION

Performance analysis of adaptive filtering algorithms in nonstationary environments has been, and still is, an area of active research [1], [2], [3]. When the input signal properties vary with time, adaptive filters are able to track these variations. The aim of tracking performance analysis is to characterize this tracking ability in nonstationary environments. In this area, many contributions focus on a particular algorithm, making more or less restrictive assumptions on the input signal. For example, in [4], [5] the transient performance of the LMS algorithm in nonstationary environments was presented. The former uses a random-walk model for the variations in the optimal weight vector, while the latter assumes deterministic variations in the optimal weight vector. The steady-state performance of this algorithm in a nonstationary environment for white input is presented in [6]. The tracking performance analysis of the signed regressor LMS algorithm can be found in [7], [8], [9]. Also, the steady-state and tracking analysis of this algorithm without the explicit use of the independence assumptions is presented in [10]. Obviously, a more general analysis encompassing as many different algorithms as possible as special cases, while at the same time making as few restrictive assumptions as possible, is highly desirable. In [11], a unified approach to steady-state and tracking analysis of LMS, NLMS, and some adaptive filters with a nonlinearity in the error is presented. Their approach was based on the energy-conservation relation
Mohammad Shams Esfand Abadi and John Håkon Husøy are with the University of Stavanger, Department of Electrical and Computer Engineering, N-4036 Stavanger, Norway. Email: John.H.Husoy@uis.no.
which was originally derived in [12] and [13]. Also, in [14], a unified approach to steady-state performance analysis of a family of affine projection and data-reusing adaptive filter algorithms in stationary environments, without using the independence assumptions, was presented based on a theory of averaging analysis. An important recent contribution is the tracking analysis of the Affine Projection Algorithm (APA) [15]. That analysis is based on an energy conservation argument and uses the energy relation. Also, the transient and steady-state analysis of data-reusing adaptive algorithms in stationary environments is presented in [16] based on the weighted energy relation, but the performance of these algorithms in a nonstationary environment is not presented there. A general performance analysis of adaptive filters is given in [17] and [18], but again, this analysis was performed in stationary environments. We have shown previously [19] that the least mean squares (LMS), the normalized LMS (NLMS), the affine projection algorithm [3], the recursive least squares (RLS), the transform domain adaptive filters (TDAF) [20] and the subband adaptive filters (SAF) [21], [22], [23] can be derived through parameter selections in the generic filter vector update equations presented in [19]. In this paper we extend these generic update equations to show that other adaptive filter algorithms, such as the binormalized data-reusing LMS (BNDR-LMS) [24], the NLMS with orthogonal correction factors (NLMS-OCF) [25], the data-reusing adaptive algorithms such as the data-reusing LMS (DR-LMS) [26] and the normalized data-reusing LMS (NDR-LMS) [27], and the block adaptive algorithms such as the block LMS (BLMS) and the block NLMS (BNLMS) [2], are also established through parameter selections in these generic update equations. Accordingly, a general formalism for the mean-square performance analysis of adaptive filters in a nonstationary environment is presented. The strategy of the analysis is based on energy conservation arguments and does not need to assume a Gaussian or white distribution for the regressors [3]. In particular, we derive the general expressions for the steady-state mean square error in a nonstationary environment for all the adaptive filter algorithms covered by the generic update equations. We have organized our paper as follows: In the following section we briefly present and extend the generic adaptive filter update equations of [19] forming the basis of our analysis. In the next section, the general mean square performance analysis of adaptive filters in a nonstationary environment and the general expression for the steady-state mean square error in this environment are established. We conclude the paper by showing a comprehensive set of simulations supporting the
validity of our results.

Throughout the paper, the following notation is adopted:

‖·‖        Euclidean norm of a vector.
‖t‖²_Σ     Σ-weighted Euclidean norm of a column vector t, defined as ‖t‖²_Σ = tᵀΣt.
vec(T)     Creates an M² × 1 column vector t by stacking the columns of the M × M matrix T.
vec(t)     Creates an M × M matrix T from the M² × 1 column vector t.
A ⊗ B      Kronecker product of matrices A and B.
Tr(·)      Trace of a matrix.
(·)ᵀ       Transpose of a vector or a matrix.
diag{· · ·}  Diagonal matrix of its entries {· · ·}.
E{·}       Expectation operator.
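Since the vec(·) operator and the Kronecker product carry most of the algebra in Section III, the key identity vec(PΣQ) = (Qᵀ ⊗ P)vec(Σ) [29] is worth checking numerically. The following is a throwaway sketch assuming NumPy; note that the reshape must be column-major (order="F") to match the column-stacking definition of vec(·) above.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
P = rng.standard_normal((M, M))
S = rng.standard_normal((M, M))           # plays the role of Sigma
Qm = rng.standard_normal((M, M))

vec = lambda T: T.reshape(-1, order="F")  # stack columns, as defined above
lhs = vec(P @ S @ Qm)
rhs = np.kron(Qm.T, P) @ vec(S)
print(np.allclose(lhs, rhs))              # True
```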
II. THE GENERIC ADAPTIVE FILTER UPDATE EQUATIONS AND ADAPTIVE FILTER ALGORITHMS

In Figure 1 we show the prototypical adaptive filter setup, where x(n), d(n) and e(n) are the input, desired and output error signals, respectively. h(n) is the M × 1 column vector of filter coefficients at time n.

Fig. 1. The prototypical adaptive filter setup.

From [19], the generic filter vector update equation can be stated as

h(n + 1) = h(n) + μC(n)X(n)e(n),    (1)

where μ is the step size and

e(n) = d(n) − Xᵀ(n)h(n)    (2)

is the output error vector. The matrix X(n) is the M × K input signal matrix defined as

X(n) = [x(nL), x(nL − D), . . . , x(nL − (K − 1)D)],    (3)

where x(nL) = [x(nL), x(nL − 1), . . . , x(nL − M + 1)]ᵀ, and d(n) is a K × 1 vector of desired signal samples defined as

d(n) = [d(nL), d(nL − D), . . . , d(nL − (K − 1)D)]ᵀ.    (4)

The parameter K is a positive integer (usually, but not necessarily, K ≤ M), L is the block length¹, and D is a positive integer parameter (D ≥ 1) that can increase the separation, and consequently reduce the correlation, among the regressors in X(n)². The desired signal vector arises from the data model

d(n) = Xᵀ(n)hₜ(n) + v(n),    (5)

where v(n) is the measurement noise, assumed to be zero mean, white, Gaussian, and independent of the input signal matrix X(n), and hₜ(n) is the true time-variant unknown column vector. We assume that hₜ(n) varies according to the random walk model [1], [2], [3]

hₜ(n + 1) = hₜ(n) + q(n).    (6)

In Eq. 6, q(n) is an independent and identically distributed sequence with autocorrelation matrix Q = E{q(n)qᵀ(n)}, independent of the other sequences. The matrix C(n) is an M × M invertible matrix called the preconditioner. Selecting C(n) as an approximate inverse of the autocorrelation matrix, we can improve the convergence speed dramatically relative to the case when no preconditioner is employed [19]. One strategy for selecting C(n) is to use the regularized inverse of the estimated autocorrelation matrix as a preconditioner. In this case, using the matrix inversion lemma, we can write

C(n)X(n) = X(n)W(n),    (7)

where W(n) is a K × K invertible matrix called the weighting matrix [17], [18]. For more details please refer to [19]. From this one might argue that in some cases, a suitable alternative form of the generic adaptive filter of Eq. 1 can be stated as

h(n + 1) = h(n) + μX(n)W(n)e(n).    (8)
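To make the generic recursion concrete, here is a minimal Python sketch of Eqs. 1-4, assuming NumPy; the helper names are our own, and no particular choice of C(n) is implied.

```python
import numpy as np

def regressor_matrix(x, n, M, K, L=1, D=1):
    """Eq. 3: the M x K matrix whose k-th column is
    x(nL - kD) = [x(nL - kD), ..., x(nL - kD - M + 1)]^T.
    Assumes nL - (K-1)D - M + 1 >= 0 so all indices are valid."""
    cols = [x[n*L - k*D - M + 1 : n*L - k*D + 1][::-1] for k in range(K)]
    return np.column_stack(cols)

def generic_update(h, X, d, mu, C):
    """Eqs. 1 and 2: e(n) = d(n) - X^T(n) h(n),
    h(n+1) = h(n) + mu C(n) X(n) e(n)."""
    e = d - X.T @ h
    return h + mu * C @ X @ e
```

For LMS, for instance, C is the identity matrix and K = L = D = 1, so X(n) reduces to the single regressor x(n); the Eq. 8 form is obtained analogously as h + mu * X @ W @ e.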
We are now in the position to make specific choices for the preconditioner matrix C(n) or the weighting matrix W(n), as well as for the parameters K, L, and D. Different adaptive filter algorithms can now be seen as specific instantiations of the generic adaptive filter update equations (Eq. 1 and Eq. 8). These algorithms are the least mean squares (LMS), the normalized LMS (NLMS), the ε-NLMS³, the family of affine projection algorithms (APA) such as the standard version of APA and the regularized APA (R-APA) [28], the binormalized data-reusing LMS (BNDR-LMS) [24], the NLMS with orthogonal correction factors (NLMS-OCF) [25], the data-reusing adaptive algorithms such as the data-reusing LMS (DR-LMS) and the normalized DR-LMS (NDR-LMS) [27], the recursive least squares (RLS)⁴, the transform domain adaptive filter (TDAF) algorithms⁵ [20], and the subband adaptive filters (SAF)⁶. The particular choices and their corresponding algorithms are summarized in Table I. It is interesting to note that the adaptive filter algorithms in [21], [22], [23], while derived from different points of view, are the same [18]. Selecting the parameters in the generic adaptive filter according to Table I for the SAF and setting ε = 0 results in Eq. 8 of [22].
¹ Setting L = 1 we get sample-by-sample algorithms, whereas selecting L > 1 results in block-based algorithms in which chunks of L samples are input to the algorithm for each coefficient update.
² The choice D ≥ 1 and L = 1 corresponds to the NLMS-OCF adaptive filter algorithm [25].
³ ε is the regularization parameter and I is the identity matrix.
⁴ The signal matrix used for RLS has the same structure as X(n), but with horizontal dimension exceeding the vertical dimension M; 0 < λ ≤ 1 is the forgetting factor of the exponentially weighted window.
⁵ The matrix T is an M × M orthogonal transform matrix.
⁶ F is the K × L matrix whose columns are the unit pulse responses of an L-channel orthogonal perfect reconstruction critically sampled filter bank system. In this case, L is the number of subbands and K is the length of the channel filters of the analysis filter bank.
TABLE I
THE MOST COMMON FAMILIES OF ADAPTIVE FILTER ALGORITHMS CAN BE DESCRIBED THROUGH EITHER h(n + 1) = h(n) + μC(n)X(n)e(n) OR h(n + 1) = h(n) + μX(n)W(n)e(n).
Algorithm  | K      | L      | D      | C(n) or W(n)
LMS        | K = 1  | L = 1  | D = 1  | C(n) = I
NLMS       | K = 1  | L = 1  | D = 1  | W(n) = [1/‖x(n)‖²]·I
ε-NLMS     | K = 1  | L = 1  | D = 1  | W(n) = [1/(ε + ‖x(n)‖²)]·I
APA        | K ≤ M  | L = 1  | D = 1  | W(n) = (Xᵀ(n)X(n))⁻¹
BNDR-LMS   | K = 2  | L = 1  | D = 1  | W(n) = (Xᵀ(n)X(n))⁻¹
NLMS-OCF   | K ≤ M  | L = 1  | D ≥ 1  | W(n) = (Xᵀ(n)X(n))⁻¹
R-APA      | K ≤ M  | L = 1  | D = 1  | W(n) = (εI + Xᵀ(n)X(n))⁻¹ or C(n) = (εI + X(n)Xᵀ(n))⁻¹
DR-LMS     | K ≥ 1  | L = 1  | D = 1  | C(n) = I
NDR-LMS    | K ≥ 1  | L = 1  | D = 1  | W(n) = diag{1/‖x(n)‖², . . . , 1/‖x(n − K + 1)‖²}
BLMS       | K = L  | L > 1  | D = 1  | C(n) = I
BNLMS      | K = L  | L > 1  | D = 1  | W(n) = diag{1/‖x(nL)‖², . . . , 1/‖x(nL − K + 1)‖²}
RLS        | K = 1  | L = 1  | D = 1  | C(n) = [∑ᵢ₌₀ⁿ λⁿ⁻ⁱ x(i)xᵀ(i)]⁻¹ (exp. weighted window)
TDAF       | K = 1  | L = 1  | D = 1  | C(n) = T·{diag[TᵀX(n)Xᵀ(n)T]}⁻¹·Tᵀ or C(n) = T·{diag[Tᵀ(∑ᵢ₌₀ⁿ λⁿ⁻ⁱ x(i)xᵀ(i))T]}⁻¹·Tᵀ
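As an illustration of Table I, a few of the C(n)/W(n) choices can be written directly as functions to be used with the generic updates. This is a sketch under our own naming conventions; the small regularizer guarding against division by zero in the norm-based rows is our addition.

```python
import numpy as np

def C_lms(X):                       # LMS row: C(n) = I
    return np.eye(X.shape[0])

def W_eps_nlms(X, eps=1e-8):        # (eps-)NLMS row: W(n) = [1/(eps + ||x(n)||^2)] I
    x = X[:, 0]
    return np.array([[1.0 / (eps + x @ x)]])

def W_apa(X):                       # APA / BNDR-LMS / NLMS-OCF rows: (X^T X)^{-1}
    return np.linalg.inv(X.T @ X)

def W_rapa(X, eps=1e-4):            # R-APA row: (eps I + X^T X)^{-1}
    return np.linalg.inv(eps * np.eye(X.shape[1]) + X.T @ X)

def W_ndr_lms(X, eps=1e-8):         # NDR-LMS row: diag{1/||x(n - k)||^2}
    return np.diag([1.0 / (eps + X[:, k] @ X[:, k]) for k in range(X.shape[1])])
```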
III. GENERAL MEAN-SQUARE PERFORMANCE ANALYSIS IN NONSTATIONARY ENVIRONMENTS

In this section, based on the generic update equations, we present the general mean-square performance analysis and develop the general expression for the steady-state excess mean square error (EMSE) in a nonstationary environment.

A. General mean-square performance analysis of adaptive filter algorithms in nonstationary environment based on Eq. 1

In the mean-square performance analysis, we need to study the time evolution of E{‖h̃(n)‖²_Σ}, where Σ is any Hermitian and positive-definite matrix⁷, and h̃(n) is the weight-error vector, defined as

h̃(n) = hₜ(n) − h(n).    (9)

⁷ When Σ = I, the Mean Square Deviation (MSD) expression is established, and when Σ = R, where R = E{x(n)xᵀ(n)} is the autocorrelation matrix of the input signal, the Excess Mean Square Error (EMSE) expression is established.

From Eq. 9, the generic weight-error vector update equation based on Eq. 1 can be stated as

h̃(n + 1) = h̃(n) + q(n) − μC(n)X(n)e(n).    (10)

From Eq. 2 and Eq. 5, the output estimation error vector e(n) can be represented as

e(n) = Xᵀ(n)h̃(n) + v(n).    (11)

Substituting Eq. 11 into Eq. 10, we obtain

h̃(n + 1) = h̃(n) + q(n) − μC(n)X(n)(Xᵀ(n)h̃(n) + v(n)).    (12)

Now, taking the Σ-weighted norm of both sides of Eq. 12,

‖h̃(n + 1)‖²_Σ = ‖h̃(n)‖²_Σ′ + ‖q(n)‖²_Σ + μ²vᵀ(n)X̄_Σ(n)v(n) + {Some Cross Terms},    (13)

where

Σ′ = Σ − μΣC(n)X(n)Xᵀ(n) − μX(n)Xᵀ(n)Cᵀ(n)Σ + μ²X(n)X̄_Σ(n)Xᵀ(n)    (14)

and

X̄_Σ(n) = Xᵀ(n)Cᵀ(n)ΣC(n)X(n).    (15)

Taking the expectation of both sides of Eq. 13, the cross terms vanish, since v(n) and q(n) are zero mean and independent of the other quantities:

E{‖h̃(n + 1)‖²_Σ} = E{‖h̃(n)‖²_Σ′} + E{‖q(n)‖²_Σ} + μ²E{vᵀ(n)X̄_Σ(n)v(n)}.    (16)

This gives the time evolution of the weight-error variance. The expectation E{‖h̃(n)‖²_Σ′} is difficult to calculate because of the dependency of Σ′ on C(n) and X(n), and of h̃(n) on prior regressors. To solve this problem we use the following independence assumptions [15]:
1) The matrix sequence X(n) is independent and identically distributed. This assumption guarantees that h̃(n) is independent of both Σ′ and X(n).
2) h̃(n) is independent of C(n)X(n)Xᵀ(n).
Using these independence assumptions, the final result is

E{‖h̃(n + 1)‖²_Σ} = E{‖h̃(n)‖²_Σ′} + E{‖q(n)‖²_Σ} + μ²E{vᵀ(n)X̄_Σ(n)v(n)},    (17)

where now

Σ′ = Σ − μΣE{C(n)X(n)Xᵀ(n)} − μE{X(n)Xᵀ(n)Cᵀ(n)}Σ + μ²E{X(n)X̄_Σ(n)Xᵀ(n)}.    (18)
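Before taking expectations, the weighted-energy expansion of Eqs. 12-15 can be checked numerically: with q(n) = 0 and v(n) = 0 the cross terms disappear and Eq. 13 collapses to ‖h̃(n + 1)‖²_Σ = ‖h̃(n)‖²_Σ′. The following throwaway sketch (random data; the dimensions, step size and Σ are arbitrary choices of ours) verifies this to machine precision.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, mu = 8, 3, 0.2
X = rng.standard_normal((M, K))            # X(n)
C = rng.standard_normal((M, M))            # C(n)
S = rng.standard_normal((M, M))
S = S @ S.T + M * np.eye(M)                # Sigma: symmetric positive definite
ht = rng.standard_normal(M)                # weight-error vector h~(n)

h_next = ht - mu * C @ X @ (X.T @ ht)      # Eq. 12 with q(n) = v(n) = 0
Xbar = X.T @ C.T @ S @ C @ X               # Eq. 15
S_p = (S - mu * S @ C @ X @ X.T
         - mu * X @ X.T @ C.T @ S
         + mu**2 * X @ Xbar @ X.T)         # Eq. 14
print(np.isclose(h_next @ S @ h_next,      # ||h~(n+1)||^2_Sigma
                 ht @ S_p @ ht))           # ||h~(n)||^2_Sigma'  -> True
```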
Looking at the last term on the right hand side of Eq. 17, we obtain

E{vᵀ(n)X̄_Σ(n)v(n)} = E{Tr(v(n)vᵀ(n)X̄_Σ(n))} = Tr(E{v(n)vᵀ(n)}E{X̄_Σ(n)}).    (19)

Since E{v(n)vᵀ(n)} = σᵥ²I, where σᵥ² is the variance of the measurement noise, and E{‖q(n)‖²_Σ} = Tr(QΣ), Eq. 17 can be stated as

E{‖h̃(n + 1)‖²_Σ} = E{‖h̃(n)‖²_Σ′} + μ²σᵥ²Tr(E{X̄_Σ(n)}) + Tr(QΣ).    (20)

Applying the vec(·) operator [29] to both sides of Eq. 18 yields

vec(Σ′) = vec(Σ) − μvec(ΣE{C(n)X(n)Xᵀ(n)}) − μvec(E{X(n)Xᵀ(n)Cᵀ(n)}Σ) + μ²vec(E{X(n)X̄_Σ(n)Xᵀ(n)}).    (21)

Since, in general, vec(PΣQ) = (Qᵀ ⊗ P)vec(Σ) [29], we find that Eq. 21 can be written as

σ′ = σ − μ(E{X(n)Xᵀ(n)Cᵀ(n)} ⊗ I)σ − μ(I ⊗ E{X(n)Xᵀ(n)Cᵀ(n)})σ + μ²(E{(X(n)Xᵀ(n)Cᵀ(n)) ⊗ (X(n)Xᵀ(n)Cᵀ(n))})σ,    (22)

where σ′ = vec(Σ′) and σ = vec(Σ). With the definition of the M² × M² matrix G,

G = I ⊗ I − μE{X(n)Xᵀ(n)Cᵀ(n)} ⊗ I − μI ⊗ E{X(n)Xᵀ(n)Cᵀ(n)} + μ²E{(X(n)Xᵀ(n)Cᵀ(n)) ⊗ (X(n)Xᵀ(n)Cᵀ(n))},    (23)

Eq. 22 can be stated as

σ′ = Gσ.    (24)

The second term on the right hand side of Eq. 20 can be written as

Tr(E{X̄_Σ(n)}) = Tr(E{C(n)X(n)Xᵀ(n)Cᵀ(n)}Σ).    (25)

Defining γ through

γ = vec(E{C(n)X(n)Xᵀ(n)Cᵀ(n)}),    (26)

we have

Tr(E{C(n)X(n)Xᵀ(n)Cᵀ(n)}Σ) = γᵀσ.    (27)

With the above considerations, and writing ‖h̃(n)‖²_σ for ‖h̃(n)‖²_Σ with σ = vec(Σ), the recursion of Eq. 20 can now be stated as

E{‖h̃(n + 1)‖²_σ} = E{‖h̃(n)‖²_Gσ} + μ²σᵥ²γᵀσ + Tr(QΣ).    (28)

From this recursion, we will be able to evaluate the steady-state excess mean square error (EMSE). When n goes to infinity, we obtain

E{‖h̃(∞)‖²_σ} = E{‖h̃(∞)‖²_Gσ} + μ²σᵥ²γᵀσ + Tr(QΣ),    (29)

and therefore

E{‖h̃(∞)‖²_(I−G)σ} = μ²σᵥ²γᵀσ + Tr(QΣ).    (30)

If σ is chosen such that (I − G)σ = vec(I) or (I − G)σ = vec(R) = r, the steady-state MSD and EMSE expressions in a nonstationary environment are established, respectively. Doing this, the final results are

EMSE = μ²σᵥ²γᵀ(I − G)⁻¹r + Tr(Q vec((I − G)⁻¹r))    (31)

and

MSD = μ²σᵥ²γᵀ(I − G)⁻¹vec(I) + Tr(Q vec((I − G)⁻¹vec(I))).    (32)

Also, from Eq. 11 we know that, for K = 1, e(n) = xᵀ(n)h̃(n) + v(n). Therefore, the steady-state MSE is given by

MSE = EMSE + σᵥ².    (33)

From the general expression (Eq. 31), we will be able to predict the steady-state performance of the LMS, ε-NLMS, R-APA, BLMS, DR-LMS, RLS, and transform domain adaptive filter algorithms in a nonstationary environment.

B. General mean-square performance analysis of adaptive filter algorithms in nonstationary environment based on Eq. 8

Following the same approach as in the previous subsection for the generic update equation (Eq. 8), the steady-state EMSE is given by

EMSE = μ²σᵥ²γᵀ(I − Z)⁻¹r + Tr(Q vec((I − Z)⁻¹r)),    (34)

where

Z = I ⊗ I − μE{X(n)W(n)Xᵀ(n)} ⊗ I − μI ⊗ E{X(n)W(n)Xᵀ(n)} + μ²E{(X(n)W(n)Xᵀ(n)) ⊗ (X(n)W(n)Xᵀ(n))}    (35)

and now

γ = vec(E{X(n)W²(n)Xᵀ(n)}).    (36)

Also, the mean square coefficient deviation (MSD) in the steady state is obtained from

MSD = μ²σᵥ²γᵀ(I − Z)⁻¹vec(I) + Tr(Q vec((I − Z)⁻¹vec(I))),    (37)

and the steady-state MSE is again given by Eq. 33. From the general expression (Eq. 34), we will be able to predict the steady-state performance of the LMS, ε-NLMS, APA, R-APA, BNDR-LMS, NLMS-OCF, BLMS, BNLMS, DR-LMS, NDR-LMS, and subband adaptive filter algorithms in a nonstationary environment.
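To illustrate how Eqs. 34-36 are applied, the sketch below evaluates the theoretical steady-state MSE of the ε-NLMS algorithm (Table I: K = 1, W(n) = [1/(ε + ‖x(n)‖²)]I) for an AR(1) input. All numerical values are example choices of ours, and estimating the required moments by sample averaging over regressors is a pragmatic substitute for closed-form expectations.

```python
import numpy as np

rng = np.random.default_rng(0)
M, mu, eps = 8, 0.1, 1e-4
alpha, sig_v2 = 0.9, 1e-3
Q = 0.0025 * sig_v2 * np.eye(M)               # random-walk covariance, Eq. 6

# AR(1) input; the regressors x(n) are sliding length-M windows.
N = 20000
u = np.zeros(N)
for n in range(1, N):
    u[n] = alpha * u[n - 1] + rng.standard_normal()

A = np.zeros((M, M))                          # E{X(n) W(n) X^T(n)}
B = np.zeros((M * M, M * M))                  # E{(X W X^T) kron (X W X^T)}
Gm = np.zeros((M, M))                         # E{X(n) W^2(n) X^T(n)}
R = np.zeros((M, M))                          # E{x(n) x^T(n)}
for n in range(M, N):
    x = u[n - M + 1:n + 1][::-1]
    W = 1.0 / (eps + x @ x)                   # scalar W(n) for K = 1
    xxT = np.outer(x, x)
    A += W * xxT; Gm += W * W * xxT; R += xxT
    B += np.kron(W * xxT, W * xxT)
cnt = N - M
A, B, Gm, R = A / cnt, B / cnt, Gm / cnt, R / cnt

I2 = np.eye(M * M)
Z = I2 - mu * np.kron(A, np.eye(M)) - mu * np.kron(np.eye(M), A) + mu**2 * B  # Eq. 35
gamma = Gm.reshape(-1, order="F")             # Eq. 36: vec(E{X W^2 X^T})
r = R.reshape(-1, order="F")                  # vec(R)
s = np.linalg.solve(I2 - Z, r)                # (I - Z)^{-1} r
emse = mu**2 * sig_v2 * (gamma @ s) \
       + np.trace(Q @ s.reshape(M, M, order="F"))        # Eq. 34
print("theoretical steady-state MSE =",
      10 * np.log10(emse + sig_v2), "dB")                # Eq. 33
```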
IV. SIMULATION RESULTS

We justify the theoretical results presented in this paper by several computer simulations in a system identification setup. The unknown system has 8 taps and is selected at random. The input signal x(n) is a first order autoregressive (AR(1)) signal generated according to

x(n) = αx(n − 1) + w(n),    (38)

where w(n) can be either a zero mean white Gaussian signal or a zero mean uniformly distributed random sequence between −1 and 1. For the Gaussian case, α is set to 0.9; as a result, a highly colored Gaussian signal is generated. For the uniform case, α is set to 0.5. The measurement noise v(n), with σᵥ² = 10⁻³, was added to the noise-free desired signal generated through d(n) = hₜᵀ(n)x(n). The unknown channel changes according to Eq. 6. We assumed an independent and identically distributed sequence for q(n) with autocorrelation matrix Q = σ_q²I, where σ_q² = 0.0025σᵥ². The adaptive filter and the unknown channel are assumed to have the same number of taps. For the TDAF algorithm, an 8-point Discrete Cosine Transform (DCT) was employed as the orthogonal transform. The filter bank used in the subband adaptive filters was the four subband Extended Lapped Transform (ELT) [30]. In all the simulations, the actually observed steady-state MSE values are obtained by averaging over 500 steady-state samples from 500 independent realizations for each step-size value for a given algorithm.

Figs. 2 and 3 show the steady-state MSE curves of the NDR-LMS adaptive algorithm as a function of the step-size in a nonstationary environment for both colored Gaussian and uniform input signals with K = 2. The theoretical results are calculated according to Eq. 33 and Eq. 34. As we can see, there is a global minimum of the steady-state MSE in the nonstationary environment. The theoretical results are in good agreement with the simulation results. The agreement is better for small values of the step-size for both input signals; for large values of the step-size, some deviation between simulated and theoretical values is observed, but the results are still useful.

Figs. 4 and 5 show the steady-state MSE curves of the TDAF algorithm as a function of the step-size in a nonstationary environment. Fig. 4 shows the results for colored Gaussian input. The theoretical results have been obtained through Eq. 31 and Eq. 33. Again, there is an optimal value of the step-size that minimizes the MSE in the nonstationary environment; this can also be seen in Fig. 5 for the colored uniform input signal. Good agreement between simulated and theoretical values, especially for small step-sizes, is again observed.

Figs. 6 and 7 show the steady-state MSE curves of the subband adaptive filter algorithm as a function of the step-size in a nonstationary environment for both colored Gaussian and uniform input signals. In both simulations, the number of subbands is set to 4. The theoretical results are calculated according to Eq. 33 and Eq. 34. The results are in good agreement with the simulation results and, as before, there is an optimal value of the step-size that minimizes the MSE in the nonstationary environment. Compared with the other simulations, the theoretical values do not agree with the simulated values quite as well as before. Still, reasonable agreement is observed, especially for large values of the step-size, for both colored Gaussian and uniform input signals.
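The setup just described is straightforward to reproduce in code. The sketch below runs the ε-NLMS algorithm (one of the algorithms covered by Eq. 8) on the colored Gaussian input and reports the measured steady-state MSE; for brevity it averages 500 steady-state samples of a single realization rather than 500 independent realizations, and the step size is an example choice of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
M, mu, eps = 8, 0.2, 1e-4
alpha, sig_v2 = 0.9, 1e-3
sig_q = np.sqrt(0.0025 * sig_v2)

N, n_avg = 20000, 500
ht = rng.standard_normal(M)                   # random 8-tap unknown system
h = np.zeros(M)                               # adaptive filter
u = np.zeros(N)
for n in range(1, N):
    u[n] = alpha * u[n - 1] + rng.standard_normal()   # Eq. 38, Gaussian case

sq_err = []
for n in range(M, N):
    x = u[n - M + 1:n + 1][::-1]
    d = ht @ x + np.sqrt(sig_v2) * rng.standard_normal()
    e = d - h @ x
    h = h + mu * e * x / (eps + x @ x)        # eps-NLMS update
    ht = ht + sig_q * rng.standard_normal(M)  # random-walk drift, Eq. 6
    sq_err.append(e * e)

print("measured steady-state MSE =",
      10 * np.log10(np.mean(sq_err[-n_avg:])), "dB")
```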
Fig. 2. Steady-state MSE of the normalized data-reusing LMS (NDR-LMS) algorithm as a function of the step-size with K = 2 in a nonstationary environment for colored Gaussian input (Input: Gaussian AR(1), α = 0.9).

Fig. 3. Steady-state MSE of the normalized data-reusing LMS (NDR-LMS) algorithm as a function of the step-size with K = 2 in a nonstationary environment for colored uniform input (Input: Uniform AR(1), α = 0.5).

Fig. 4. Steady-state MSE of the TDAF algorithm as a function of the step-size in a nonstationary environment for colored Gaussian input (Input: Gaussian AR(1), α = 0.9).

Fig. 5. Steady-state MSE of the TDAF algorithm as a function of the step-size in a nonstationary environment for colored uniform input (Input: Uniform AR(1), α = 0.5).

Fig. 6. Steady-state MSE of the SAF algorithm as a function of the step-size in a nonstationary environment for colored Gaussian input (Input: Gaussian AR(1), α = 0.9).

Fig. 7. Steady-state MSE of the SAF algorithm as a function of the step-size in a nonstationary environment for colored uniform input (Input: Uniform AR(1), α = 0.5).
V. SUMMARY AND CONCLUSION

In this paper we have presented a general framework for the mean square performance analysis of adaptive filter algorithms in nonstationary environments, based on the generic adaptive filter update equations presented in [19]. Through the general expressions and selection of the parameters according to Table I, the steady-state EMSE of the LMS, NLMS, ε-NLMS, the family of APA (R-APA, BNDR-LMS, NLMS-OCF), the data-reusing (DR-LMS, NDR-LMS), RLS, the transform domain,
the block adaptive filters (BLMS, BNLMS), and the subband adaptive filter algorithms was predicted in the nonstationary environment. We demonstrated the usefulness of the general performance results for the NDR-LMS, the transform domain, and the subband adaptive filter algorithms.

REFERENCES
[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[2] S. Haykin, Adaptive Filter Theory, 4th ed. NJ: Prentice-Hall, 2002.
[3] A. H. Sayed, Fundamentals of Adaptive Filtering. Wiley, 2003.
[4] B. Widrow, J. M. McCool, M. Larimore, and C. R. Johnson, "Stationary and nonstationary learning characteristics of the LMS adaptive filter," in Proc. IEEE, 1976, pp. 1151-1162.
[5] N. J. Bershad, P. Feintuch, A. Reed, and B. Fisher, "Tracking characteristics of the LMS adaptive line enhancer: Response to a linear chirp signal in noise," IEEE Trans. Acoust., Speech, Signal Processing, vol. 28, pp. 504-516, 1980.
[6] S. Marcos and O. Macchi, "Tracking capability of the least mean square algorithm: Application to an asynchronous echo canceller," IEEE Trans. Acoust., Speech, Signal Processing, vol. 35, pp. 1570-1578, 1987.
[7] E. Eweda, "Analysis and design of a signed regressor LMS algorithm for stationary and nonstationary adaptive filtering with correlated Gaussian data," IEEE Trans. Circuits Syst., vol. 37, pp. 1367-1374, Nov. 1990.
[8] ——, "Optimum step size of the sign algorithm for nonstationary adaptive filtering," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1897-1901, 1990.
[9] ——, "Comparison of RLS, LMS, and sign algorithms for tracking randomly time-varying channels," IEEE Trans. Signal Processing, vol. 42, pp. 2937-2944, 1994.
[10] N. R. Yousef and A. H. Sayed, "Steady-state and tracking analyses of the sign algorithm without the explicit use of the independence assumption," IEEE Signal Processing Letters, vol. 7, pp. 307-309, 2000.
[11] ——, "A unified approach to the steady-state and tracking analyses of adaptive filters," IEEE Trans. Signal Processing, vol. 49, pp. 314-324, 2001.
[12] A. H. Sayed and M. Rupp, "A time-domain feedback analysis of adaptive algorithms via the small gain theorem," in Proc. SPIE, vol. 2563, 1995, pp. 458-469.
[13] M. Rupp and A. H. Sayed, "A time-domain feedback analysis of filtered-error adaptive gradient algorithms," IEEE Trans. Signal Processing, vol. 44, pp. 1428-1439, 1996.
[14] M. S. E. Abadi and A. M. Far, "A unified approach to steady-state performance analysis of adaptive filters without using the independence assumptions," Signal Processing, vol. 87, pp. 1642-1654, 2007.
[15] H.-C. Shin and A. H. Sayed, "Mean-square performance of a family of affine projection algorithms," IEEE Trans. Signal Processing, vol. 52, pp. 90-102, Jan. 2004.
[16] H.-C. Shin, W. J. Song, and A. H. Sayed, "Mean-square performance of data-reusing adaptive algorithms," IEEE Signal Processing Letters, vol. 12, pp. 851-854, Dec. 2005.
[17] J. H. Husøy and M. S. E. Abadi, "A common framework for transient analysis of adaptive filters," in Proc. 12th IEEE Mediterranean Electrotechnical Conference, Dubrovnik, Croatia, May 2004, pp. 265-268.
[18] ——, "Transient analysis of adaptive filters using a general framework," Automatika, Journal for Control, Measurement, Electronics, Computing and Communications, vol. 45, pp. 121-127, 2004.
[19] J. H. Husøy, "A streamlined approach to adaptive filters," in Proc. EUSIPCO, Firenze, Italy, Sept. 2006, published online by EURASIP at http://www.arehna.di.uoa.gr/Eusipco2006/papers/1568981236.pdf.
[20] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 2nd ed. Kluwer, 2002.
[21] S. S. Pradhan and V. E. Reddy, "A new approach to subband adaptive filtering," IEEE Trans. Signal Processing, vol. 47, pp. 655-664, 1999.
[22] M. de Courville and P. Duhamel, "Adaptive filtering in subbands using a weighted criterion," IEEE Trans. Signal Processing, vol. 46, pp. 2359-2371, 1998.
[23] K. A. Lee and W. S. Gan, "Improving convergence of the NLMS algorithm using constrained subband updates," IEEE Signal Processing Letters, vol. 11, pp. 736-739, 2004.
[24] J. Apolinario, M. L. R. de Campos, and P. S. R. Diniz, "Convergence analysis of the binormalized data-reusing LMS algorithm," IEEE Trans. Signal Processing, vol. 48, pp. 3235-3242, Nov. 2000.
[25] S. G. Sankaran and A. A. L. Beex, "Normalized LMS algorithm with orthogonal correction factors," in Proc. Asilomar Conf. on Signals, Systems, and Computers, 1997, pp. 1670-1673.
[26] B. A. Schnaufer and W. K. Jenkins, "New data-reusing LMS algorithms for improved convergence," in Proc. Asilomar Conf., Pacific Grove, CA, May 1993, pp. 1584-1588.
[27] R. A. Soni, W. K. Jenkins, and K. A. Gallivan, "Acceleration of normalized adaptive filtering data-reusing methods using the Tchebyshev and conjugate gradient methods," in Proc. Int. Symp. Circuits Systems, 1998, pp. 309-312.
[28] S. L. Gay and J. Benesty, Acoustic Signal Processing for Telecommunication. Boston, MA: Kluwer, 2000.
[29] T. K. Moon and W. C. Stirling, Mathematical Methods and Algorithms for Signal Processing. Upper Saddle River: Prentice Hall, 2000.
[30] H. Malvar, Signal Processing with Lapped Transforms. Artech House, 1992.
Mohammad Shams Esfand Abadi was born in Tehran, Iran, on September 18, 1978. He received the B.S. degree in electrical engineering from Mazandaran University, Mazandaran, Iran, and the M.S. degree in electrical engineering from Tarbiat Modares University, Tehran, Iran, in 2000 and 2002, respectively, and the Ph.D. degree in biomedical engineering from Tarbiat Modares University, Tehran, Iran, in 2007. Since 2004 he has been with the Department of Electrical Engineering, Shahid Rajaee University, Tehran, Iran. During the fall of 2003, the spring of 2005, and again in the spring of 2007, he was a visiting scholar with the Signal Processing Group at the University of Stavanger, Norway. His research interests include digital filter theory and adaptive signal processing algorithms.
John Håkon Husøy received the M.Sc. and Ph.D. degrees in electrical engineering from the Norwegian University of Science and Technology. In his early career he was involved in hardware and software development in various positions in several companies in Canada and Norway. Since 1992 he has been a Professor with the Department of Electrical and Computer Engineering, University of Stavanger, Norway. His research interests include adaptive algorithms, digital filtering, signal representations, image compression, bioelectrical signal processing, and image analysis.