A Recursive Least-Squares Algorithm for the Identification of Trilinear Forms
Figure 1. Computational complexity of the proposed RLS-TF algorithm, as compared to the conventional RLS and NLMS algorithms, as a function of L1; the other dimensions are set to L2 = 8 and L3 = 4: (a) number of multiplications per iteration and (b) number of additions per iteration.
Figure 2. The components of the third-order system used in simulations: (a) h1 is the first impulse response (of length L1 = 64) from the G.168 Recommendation [26]; (b) h2 is a randomly generated impulse response (of length L2 = 8), with Gaussian distribution; (c) the impulse response h3 (of length L3 = 4), with the coefficients computed as h3,l3 = 0.5^(l3−1), for l3 = 1, 2, ..., L3; and (d) the global impulse response (of length L = L1 L2 L3 = 2048), obtained based on (17) as h = h3 ⊗ h2 ⊗ h1.
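To make the construction in Figure 2d concrete, a minimal NumPy sketch is given below. The G.168 and Gaussian component responses are replaced by random stand-ins (only h3 follows the exact rule from the caption), so the snippet illustrates the Kronecker structure of (17) rather than reproducing the simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Component lengths used in the simulations: L = L1 * L2 * L3 = 2048.
L1, L2, L3 = 64, 8, 4

# Stand-ins for the component impulse responses (h1 is a G.168 echo path
# and h2 is Gaussian in the paper; random vectors are used here for brevity).
h1 = rng.standard_normal(L1)
h2 = rng.standard_normal(L2)

# h3 follows the rule from the caption: h3[l3] = 0.5**(l3 - 1), l3 = 1..L3.
h3 = 0.5 ** np.arange(L3)

# Global impulse response, Equation (17): h = h3 (kron) h2 (kron) h1.
h = np.kron(h3, np.kron(h2, h1))
print(h.shape)  # (2048,)
```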
Figure 3. Normalized projection misalignment (NPM) evaluated based on (55)–(57), in dB, for the identification of the individual impulse responses from Figure 2a–c, using the RLS-TF algorithm with different values of the forgetting factors λi = 1 − 1/(K Li), i = 1, 2, 3 (varying the value of K): (a) NPM[h1, ĥ1(n)], (b) NPM[h2, ĥ2(n)], and (c) NPM[h3, ĥ3(n)].
Figure 4. Normalized misalignment (NM) evaluated based on (58), in dB, for the identification of the global impulse response h (of length L = 2048) from Figure 2d, using the RLS-TF algorithm with different values of the forgetting factors λi = 1 − 1/(K Li), i = 1, 2, 3 (varying the value of K).
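For reference, the two performance measures plotted in Figures 3–7 can be sketched as follows. The NPM is the projection-based measure of Morgan et al. [28] and the NM is the usual normalized misalignment; this is a plain NumPy sketch of those standard definitions, and (55)–(58) in the paper remain the authoritative forms.

```python
import numpy as np

def npm_db(h, h_hat):
    """Normalized projection misalignment, in dB (cf. [28] and (55)-(57)).
    Insensitive to the scaling ambiguity of the individual estimates."""
    proj = (h @ h_hat) / (h_hat @ h_hat) * h_hat   # projection of h onto h_hat
    return 20 * np.log10(np.linalg.norm(h - proj) / np.linalg.norm(h))

def nm_db(h, h_hat):
    """Normalized misalignment of the global impulse response, in dB (cf. (58))."""
    return 20 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```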
Figure 5. Normalized projection misalignment (NPM) evaluated based on (55)–(57), in dB, for the identification of the individual impulse responses from Figure 2a–c, using the NLMS-TF algorithm [17] (with different normalized step-sizes α1 = α2 = α3 = α) and the RLS-TF algorithm (with the forgetting factors λi = 1 − 1/(K Li), i = 1, 2, 3, where K = 20): (a) NPM[h1, ĥ1(n)], (b) NPM[h2, ĥ2(n)], and (c) NPM[h3, ĥ3(n)].
Figure 6. Normalized misalignment (NM) evaluated based on (58), in dB, for the identification of the global impulse response h (of length L = 2048) from Figure 2d, using the NLMS-TF algorithm [17] (with different normalized step-sizes α1 = α2 = α3 = α) and the RLS-TF algorithm (with the forgetting factors λi = 1 − 1/(K Li), i = 1, 2, 3, where K = 20).
Figure 7. Normalized misalignment (NM) evaluated based on (58), in dB, for the identification of the global impulse response h (of length L = 2048) from Figure 2d, using the conventional RLS algorithm (with different values of the forgetting factor λ = 1 − 1/(KL), varying the value of K) and the RLS-TF algorithm (with the forgetting factors λi = 1 − 1/(K Li), i = 1, 2, 3, where K = 20).
Abstract
1. Introduction
2. Third-Order Tensors
3. RLS Algorithm for Trilinear Forms
4. Simulation Results
5. Conclusions and Future Works
Author Contributions
Funding
Conflicts of Interest
References
1. Comon, P. Tensors: A brief introduction. IEEE Signal Process. Mag. 2014, 31, 44–53.
2. Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, H.A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
3. Vervliet, N.; Debals, O.; Sorber, L.; De Lathauwer, L. Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Process. Mag. 2014, 31, 71–79.
4. Boussé, M.; Debals, O.; De Lathauwer, L. A tensor-based method for large-scale blind source separation using segmentation. IEEE Trans. Signal Process. 2017, 65, 346–358.
5. Sidiropoulos, N.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582.
6. Da Costa, M.N.; Favier, G.; Romano, J.M.T. Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers. Signal Process. 2018, 145, 304–316.
7. Ribeiro, L.N.; de Almeida, A.L.; Mota, J.C.M. Separable linearly constrained minimum variance beamformers. Signal Process. 2019, 158, 15–25.
8. De Lathauwer, L. Signal Processing Based on Multilinear Algebra. Ph.D. Thesis, Katholieke Universiteit Leuven, Leuven, Belgium, 1997.
9. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
10. Benesty, J.; Paleologu, C.; Ciochină, S. On the identification of bilinear forms with the Wiener filter. IEEE Signal Process. Lett. 2017, 24, 653–657.
11. Paleologu, C.; Benesty, J.; Ciochină, S. Adaptive filtering for the identification of bilinear forms. Digit. Signal Process. 2018, 75, 153–167.
12. Elisei-Iliescu, C.; Stanciu, C.; Paleologu, C.; Benesty, J.; Anghel, C.; Ciochină, S. Efficient recursive least-squares algorithms for the identification of bilinear forms. Digit. Signal Process. 2018, 83, 280–296.
13. Dogariu, L.-M.; Ciochină, S.; Paleologu, C.; Benesty, J. A connection between the Kalman filter and an optimized LMS algorithm for bilinear forms. Algorithms 2018, 11, 211.
14. Ribeiro, L.N.; de Almeida, A.L.F.; Mota, J.C.M. Identification of separable systems using trilinear filtering. In Proceedings of the 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, 13–16 December 2015; pp. 189–192.
15. Rupp, M.; Schwarz, S. A tensor LMS algorithm. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, QLD, Australia, 19–24 April 2015; pp. 3347–3351.
16. Dogariu, L.-M.; Ciochină, S.; Benesty, J.; Paleologu, C. An iterative Wiener filter for the identification of trilinear forms. In Proceedings of the 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 88–93.
17. Dogariu, L.-M.; Ciochină, S.; Benesty, J.; Paleologu, C. System identification based on tensor decompositions: A trilinear approach. Symmetry 2019, 11, 556.
18. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002.
19. Parathai, P.; Tengtrairat, N.; Woo, W.L.; Gao, B. Single-channel signal separation using spectral basis correlation with sparse nonnegative tensor factorization. Circuits Syst. Signal Process. 2019, 38, 5786–5816.
20. Woo, W.L.; Dlay, S.S.; Al-Tmeme, A.; Gao, B. Reverberant signal separation using optimized complex sparse nonnegative tensor deconvolution on spectral covariance matrix. Digit. Signal Process. 2018, 83, 9–23.
21. Gao, B.; Lu, P.; Woo, W.L.; Tian, G.Y.; Zhu, Y.; Johnston, M. Variational Bayes sub-group adaptive sparse component extraction for diagnostic imaging system. IEEE Trans. Ind. Electron. 2018, 65, 8142–8152.
22. Kiers, H.A.L. Towards a standardized notation and terminology in multiway analysis. J. Chemom. 2000, 14, 105–122.
23. Kroonenberg, P. Applied Multiway Data Analysis; Wiley: Hoboken, NJ, USA, 2008.
24. Van Loan, C.F. The ubiquitous Kronecker product. J. Comput. Appl. Math. 2000, 123, 85–100.
25. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999.
26. Digital Network Echo Cancellers; ITU-T Recommendation G.168; ITU: Geneva, Switzerland, 2002.
27. Gay, S.L.; Benesty, J. (Eds.) Acoustic Signal Processing for Telecommunication; Kluwer Academic Publishers: Boston, MA, USA, 2000.
28. Morgan, D.R.; Benesty, J.; Sondhi, M.M. On the evaluation of estimated impulse responses. IEEE Signal Process. Lett. 1998, 5, 174–176.
29. Ciochină, S.; Paleologu, C.; Benesty, J.; Enescu, A.A. On the influence of the forgetting factor of the RLS adaptive filter in system identification. In Proceedings of the 2009 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 9–10 July 2009; pp. 205–208.
30. Paleologu, C.; Benesty, J.; Ciochină, S. Linear system identification based on a Kronecker product decomposition. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 1793–1808.
31. Elisei-Iliescu, C.; Paleologu, C.; Benesty, J.; Ciochină, S. A recursive least-squares algorithm based on the nearest Kronecker product decomposition. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 4843–4847.
32. Elisei-Iliescu, C.; Paleologu, C.; Benesty, J.; Stanciu, C.; Anghel, C.; Ciochină, S. Recursive least-squares algorithms for the identification of low-rank systems. IEEE/ACM Trans. Audio Speech Lang. Process. 2019, 27, 903–918.
33. Benesty, J.; Rey, H.; Rey Vega, L.; Tressens, S. A non-parametric VSS NLMS algorithm. IEEE Signal Process. Lett. 2006, 13, 581–584.
34. Paleologu, C.; Benesty, J.; Ciochină, S. A robust variable forgetting factor recursive least-squares algorithm for system identification. IEEE Signal Process. Lett. 2008, 15, 597–600.
35. Jouppi, N.P.; Young, C.; Patil, N.; Patterson, D.; Agrawal, G.; Bajwa, R.; Bates, S.; Bhatia, S.; Boden, N.; Borchers, A.; et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, ON, Canada, 24–28 June 2017; pp. 1–12.
RLS-TF algorithm (summary):
Initialization:
- Set the three initial values based on (52)–(54)
For each time index n:
- Compute the three quantities given by (27)–(29)
- Compute the three quantities given by (30)–(32)
- Evaluate the error signal based on (33)
- Compute the three quantities given by (49)–(51)
- Update the three quantities in (43)–(45)
- Update the three quantities in (46)–(48)
- Evaluate the expression in (22)
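Because the symbols in the summary above were lost in extraction, the sketch below only illustrates one plausible organization of a trilinear RLS recursion: three small RLS sub-filters of lengths L1, L2, L3 that share a common error signal, each driven by an equivalent input vector obtained by contracting the input tensor with the other two current estimates. The function name, the initialization, and the exact update order are assumptions for illustration; Equations (22)–(54) of the paper define the actual RLS-TF algorithm.

```python
import numpy as np

def rls_tf_sketch(X_seq, d_seq, L1, L2, L3, K=20, delta=1e-2):
    """Schematic trilinear RLS recursion: three coupled RLS sub-filters of
    lengths L1, L2, L3, each with forgetting factor lambda_i = 1 - 1/(K*L_i).
    X_seq[n] is the L1 x L2 x L3 input tensor at time n; d_seq[n] is the
    desired signal. Names and update order are illustrative only."""
    lam = [1.0 - 1.0 / (K * L) for L in (L1, L2, L3)]
    # Nonzero initial estimates (all-zero estimates would zero out the
    # equivalent input vectors) and regularized inverse correlation matrices.
    h = [np.full(L1, 1.0 / L1), np.full(L2, 1.0 / L2), np.full(L3, 1.0 / L3)]
    P = [np.eye(L) / delta for L in (L1, L2, L3)]

    for X, d in zip(X_seq, d_seq):
        # Equivalent input vectors: contract the input tensor with the other
        # two current estimates (one vector per sub-filter).
        x = [np.einsum('abc,b,c->a', X, h[1], h[2]),
             np.einsum('abc,a,c->b', X, h[0], h[2]),
             np.einsum('abc,a,b->c', X, h[0], h[1])]
        e = d - h[0] @ x[0]          # a priori error, shared by the sub-filters
        for i in range(3):           # standard RLS update of each sub-filter
            Px = P[i] @ x[i]
            k = Px / (lam[i] + x[i] @ Px)        # gain vector
            h[i] = h[i] + k * e
            P[i] = (P[i] - np.outer(k, Px)) / lam[i]
    return h
```

With the simulation dimensions (L1, L2, L3) = (64, 8, 4), each iteration of such a scheme manipulates matrices no larger than 64 × 64, whereas a conventional RLS filter acting on the global impulse response would need a 2048 × 2048 inverse correlation matrix.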
Computational complexity (operations per iteration):

| Algorithms | × | + | ÷ |
| --- | --- | --- | --- |
| RLS | | | 1 |
| RLS-TF | | | 3 |
| NLMS | | | 1 |
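The multiplication and addition counts in the table did not survive extraction; only the divisions column remains. As a rough guide to the gap plotted in Figure 1, the snippet below compares assumed standard asymptotic orders per iteration (not the paper's exact counts): the conventional RLS operates on the full length L = L1 L2 L3, the RLS-TF algorithm runs three short RLS filters, and NLMS scales linearly with L.

```python
# Rough per-iteration operation orders (assumed standard orders, not exact counts).
L1, L2, L3 = 64, 8, 4
L = L1 * L2 * L3                      # 2048

order_rls = L**2                      # conventional RLS: O(L^2)
order_rls_tf = L1**2 + L2**2 + L3**2  # RLS-TF: O(L1^2 + L2^2 + L3^2)
order_nlms = 2 * L                    # NLMS: O(L)

print(order_rls, order_rls_tf, order_nlms)  # 4194304 4176 4096
```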