The research of Kronecker product-based measurement matrix of compressive sensing
EURASIP Journal on Wireless Communications and Networking volume 2013, Article number: 161 (2013)
Abstract
The theory of compressive sensing is briefly introduced, and some construction methods for the measurement matrix are derived. A measurement matrix based on the Kronecker product is devised, and its validity is proved mathematically. Numerical simulations on 2-D images verify that the proposed measurement matrix performs better in storage space, construction time, and image reconstruction quality than the matrices commonly used in compressive sensing. This novel measurement matrix offers great potential for the hardware implementation of compressive sensing for images and other high-dimensional signals.
1 Introduction
The compressive sensing (CS) theory in signal processing provides a new approach to data acquisition that goes beyond the classical Nyquist sampling theorem. Considering a signal that is sparse in some basis (often a wavelet-based transform), the basic idea of compressive sensing is to project the high-dimensional signal onto a measurement matrix that is incoherent with the sparsifying basis, resulting in a low-dimensional sequence of measurements. Then, with a relatively small number of appropriately designed projection measurements, the underlying signal may be recovered exactly. In contrast to the common framework of collecting as much data as possible at first and then discarding the redundant data by digital compression techniques, CS seeks to minimize the collection of redundant data in the acquisition step. Because of this special advantage of compressive sensing, many data compression and reconstruction methods based on CS have been researched.
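In the standard CS formulation, a length-N signal x that is K-sparse in some basis Ψ (x = Ψs, with only K ≪ N nonzero entries in s) is acquired through M ≪ N linear projections:

$$
y = \Phi x = \Phi \Psi s, \qquad \Phi \in \mathbb{R}^{M \times N}, \quad y \in \mathbb{R}^{M},
$$

and the sparse vector s (and hence x) is recovered from y by a sparse reconstruction algorithm such as orthogonal matching pursuit (OMP), provided the measurement matrix Φ is sufficiently incoherent with Ψ.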
The feasibility of an embedded hardware implementation of the measurement matrix is the key to realizing practical applications of compressed sensing. If this key problem can be solved, the compressed sensing framework, which aims at reducing the number of measured values as much as possible, can achieve a reliable signal sampling process in a system with relatively low hardware complexity. The construction of the sampling measurement matrix needs to solve the following problems [1, 2]:
1. Reduce the number of measured values as much as possible while meeting the required reconstruction precision.
2. Design an observation matrix that can be implemented in embedded hardware.
3. Design an observation matrix with universal adaptability.
The signal reconstruction precision of the commonly used random matrices is higher than that of deterministic matrices [3], but the inherent randomness of random matrices leads to a computationally complex construction process and limits their hardware implementation. They are therefore difficult to implement in embedded hardware.
Although the reconstruction accuracy of deterministic matrices is not as high as that of random matrices, deterministic matrices can be constructed quickly and are easy to implement in hardware, so they represent the future trend in measurement matrix design. In this research, we consider two aspects, namely the reconstruction precision of the measurement matrix and its structural complexity, and we construct a measurement matrix based on the Kronecker product that is easy to implement in hardware and has high reconstruction precision.
2 The measurement matrix based on Kronecker product
2.1 Basic theory of Kronecker product
The Kronecker product, named after the mathematician Leopold Kronecker, is a special form of the tensor product. It defines an operation between two matrices of arbitrary size. The details are as follows:
Let A be an m × n matrix and B a p × q matrix; then the Kronecker product A ⊗ B is an mp × nq block matrix, which can be written out as follows.
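Writing A = (a_ij) for the entries of A, the block form is the standard definition of the Kronecker product:

$$
A \otimes B =
\begin{bmatrix}
a_{11}B & a_{12}B & \cdots & a_{1n}B \\
a_{21}B & a_{22}B & \cdots & a_{2n}B \\
\vdots  & \vdots  & \ddots & \vdots  \\
a_{m1}B & a_{m2}B & \cdots & a_{mn}B
\end{bmatrix}.
$$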
The Kronecker product of matrices has many important properties, such as bilinearity and associativity. This research relies on the following property of the Kronecker product:
Let (α1, α2, ⋯, αn) be n linearly independent m-dimensional column vectors and (b1, b2, ⋯, bq) be q linearly independent p-dimensional column vectors. Then the vectors αi ⊗ bj (i = 1, 2, ⋯, n; j = 1, 2, ⋯, q) are linearly independent. Conversely, if the αi ⊗ bj are linearly independent, then (α1, α2, ⋯, αn) and (b1, b2, ⋯, bq) are both linearly independent. The proof is as follows.
Let A = (α1, α2, ⋯, αn) and B = (b1, b2, ⋯, bq). Since the columns of A and of B are linearly independent, Rank A = n and Rank B = q, and by the rank property of the Kronecker product, Rank (A ⊗ B) = Rank A × Rank B, so we can get Rank (A ⊗ B) = n × q = nq. As A ⊗ B is an mp × nq matrix, its nq column vectors, which are exactly the vectors αi ⊗ bj, are therefore linearly independent. On the other hand, if the αi ⊗ bj are linearly independent, then all the column vectors of A ⊗ B are linearly independent, and nq = Rank (A ⊗ B) = Rank A × Rank B. Since Rank A ≤ n and Rank B ≤ q, we can finally deduce that Rank A = n and Rank B = q, which means (α1, α2, ⋯, αn) and (b1, b2, ⋯, bq) are both linearly independent.
By the above property, the high-dimensional vectors obtained through the Kronecker product remain linearly independent, just like the low-dimensional orthogonal basis vectors. To obtain final high-dimensional vectors that are still linearly independent, we only need to choose appropriate orthogonal basis vectors [4].
It can be seen that the number of Kronecker product operations is determined by the dimension n of the orthogonal basis vectors and the dimension N of the high-dimensional column vectors that are required [5]. To go from n to N, several Kronecker product operations are usually needed, with n smaller than N.
The number of Kronecker product operations can be expressed as k = log_n N. Taking the commonly used Lena256.bmp image as an example, the dimension of an image column vector is N = 256. If we pick a two-dimensional orthogonal basis (n = 2) as the low-dimensional basis, the number of Kronecker products needed is k = log_n N = log_2 256 = 8. That is, the construction process needs only eight Kronecker product operations, so this method is simpler than the commonly used measurement matrix construction methods and requires relatively little computation [6, 7].
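To make the counting concrete, the following Python/NumPy sketch builds a 256 × 256 orthogonal matrix from a two-dimensional orthogonal basis with exactly k = 8 Kronecker product operations. The 2 × 2 normalized Hadamard-type seed is an illustrative assumption; the paper does not prescribe a particular seed matrix.

```python
import math
import numpy as np

# Assumed 2 x 2 orthogonal seed basis (normalized Hadamard-type matrix).
U2 = np.array([[1.0,  1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)

n, N = 2, 256                      # low dimension n and target dimension N
k = int(round(math.log(N, n)))     # k = log_n N = log_2 256 = 8 operations

U = np.array([[1.0]])              # 1 x 1 starting point
for _ in range(k):                 # eight Kronecker products: 1 -> 2 -> ... -> 256
    U = np.kron(U, U2)

print(U.shape)                               # (256, 256)
print(np.allclose(U @ U.T, np.eye(N)))       # orthogonality is preserved: True
```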
2.2 Construction method of measurement matrix based on Kronecker product
The process of constructing the measurement matrix based on the Kronecker product of an orthogonal matrix is as follows:
1. Choose the orthogonal basis vectors. The basis vectors are n-dimensional, and together they form the matrix U, which has the property of an orthogonal matrix, U × Uᵀ = E, where E is the identity matrix.
2. Calculate the number of Kronecker product operations required. From the dimension n of the orthogonal basis vectors and the dimension N of the high-dimensional column vectors that we need, the number of Kronecker product operations is k = log_n N.
3. After k Kronecker product operations, we obtain the N-order matrix X_k, which preserves the properties of the basis matrix. According to the QR decomposition of matrix analysis theory, X_k can be decomposed into an orthogonal matrix and an upper triangular matrix; the orthogonal matrix is what we need.
4. Decide M according to the actually required compression ratio so as to form an M × N measurement matrix: choose M row vectors from the N × N orthogonal matrix (denoted U) obtained in step 3, which gives the measurement matrix based on the Kronecker product of the orthogonal matrix. It can be represented as Φ = U(c, :), where c is the set of indices of the M rows selected from the matrix U and Φ is the final measurement matrix [8].
Finally, we apply orthogonal normalization to the matrix so that it approximately satisfies the restricted isometry property (RIP).
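The following Python/NumPy sketch summarizes steps 1 to 4 above under a few stated assumptions: the orthogonal seed is taken as the Q factor of a small Gaussian matrix (Section 3 mentions a Gaussian orthogonal basis but does not fix its size), and the M rows are selected at random, since the rule for choosing the index set c is not specified. It is an illustration of the procedure rather than the authors' exact implementation.

```python
import numpy as np

def kron_measurement_matrix(M, N, seed_dim=2, rng=None):
    """Sketch of the Kronecker-product measurement matrix construction:
    (1) orthogonal seed, (2) k = log_n N Kronecker products giving X_k,
    (3) QR decomposition of X_k, (4) keep M of the N rows and normalize."""
    rng = np.random.default_rng() if rng is None else rng

    # Step 1: low-dimensional orthogonal seed U0 (assumed here: the Q factor
    # of a random Gaussian seed matrix, so that U0 @ U0.T = E).
    U0, _ = np.linalg.qr(rng.standard_normal((seed_dim, seed_dim)))

    # Step 2: k = log_n N Kronecker products grow the seed to an N x N matrix X_k.
    k = int(round(np.log(N) / np.log(seed_dim)))
    Xk = np.array([[1.0]])
    for _ in range(k):
        Xk = np.kron(Xk, U0)

    # Step 3: QR decomposition of X_k; the orthogonal factor Q is kept.
    Q, _ = np.linalg.qr(Xk)

    # Step 4: pick M of the N rows (index set c) to form the M x N matrix Phi,
    # then normalize the rows so that the RIP condition holds approximately.
    c = rng.choice(N, size=M, replace=False)
    Phi = Q[c, :]
    Phi = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
    return Phi

Phi = kron_measurement_matrix(M=150, N=256)   # compression ratio used in Section 3.1
print(Phi.shape)                              # (150, 256)
```

Any other orthogonal seed or deterministic row-selection rule could be substituted here; only the orthogonality of the seed matters for the linear-independence argument of Section 2.1.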
3 Simulation results and analysis
In order to analyze the performance and characteristics of the Kronecker product measurement matrix, we apply it in a 2-D image acquisition simulation experiment and compare it with the commonly used Gaussian random matrix, Fourier matrix, Bernoulli matrix, Toeplitz matrix, polynomial matrix, and rotation matrix.
The simulation environment is an ordinary PC running Matlab 7.0, and the 2-D image signal is the 256 × 256 Lena.bmp. The image sparsifying algorithm is the wavelet transform [9], and the image reconstruction algorithm is the OMP algorithm.
The reference objects are divided into two categories: the random matrices, comprising the Gaussian random matrix, Fourier matrix, and Bernoulli matrix; and the deterministic matrices, comprising the Toeplitz matrix, polynomial matrix, and rotation matrix.
3.1 Simulation experiments when the compression ratio is constant
With the compression ratio fixed at 0.6 (M = 150, N = 256), the image sparsifying algorithm being the wavelet transform, and the image reconstruction algorithm being OMP, we use the various measurement matrices to process the 2-D image signal Lena.bmp. The peak signal-to-noise ratio (PSNR) of the reconstructed image [10], the measurement matrix construction time, and the image reconstruction time are shown in Table 1.
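For reference, the sketch below shows how the measurement step and the PSNR figure of merit reported in Table 1 can be computed. The wavelet sparsifying transform and the OMP reconstruction are omitted (any standard implementation can be plugged in), and the measurement matrix and image are stand-ins, so this is only an illustration of the y = Φx projection and the PSNR definition, not the authors' evaluation code.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, the quality measure used in Table 1."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

N, M = 256, 150                                # compression ratio M/N of about 0.6
rng = np.random.default_rng(0)

# Stand-in M x N measurement matrix (the Kronecker-product matrix of Section 2.2
# would be used here in the actual experiment).
Phi = np.linalg.qr(rng.standard_normal((N, N)))[0][:M, :]

image = rng.integers(0, 256, size=(N, N)).astype(float)   # stand-in for Lena.bmp
Y = Phi @ image                                # column-wise measurements, 150 x 256
print(Y.shape)

# After wavelet sparsification and OMP reconstruction of each column,
# psnr(image, reconstructed) gives the dB values compared in Table 1.
```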
According to Table 1, when the target image, image sparsifying algorithm, reconstruction algorithm, and other conditions are all the same and the compression ratio is fixed, the Kronecker product matrix based on an orthogonal basis performs better than the commonly used deterministic measurement matrices in terms of the PSNR of the reconstructed image (4 to 5 dB higher) [11]. The difference is relatively small when compared with the random measurement matrices, which have an impressive reconstruction effect [12]; the PSNR gap is only 1 to 2 dB. In terms of PSNR, the measurement matrix based on the Kronecker product is therefore significantly better than the commonly used deterministic measurement matrices and quite close to the random matrices.
Under the same conditions, Table 1 also shows that the time consumed by the Kronecker product matrix is quite close to that of the deterministic matrices and only about half that of the random measurement matrices [13].
In general, the measurement matrix based on the Kronecker product of an orthogonal matrix achieves nearly the same image reconstruction accuracy as the random measurement matrices and is as fast as the deterministic measurement matrices.
3.2 The simulation results under different compression ratios
Under the condition that the sparsifying algorithm is the wavelet transform and the reconstruction algorithm is OMP, simulations of each measurement matrix under different compression ratios were carried out to reconstruct the 2-D image Lena.bmp. The experimental data for the 256 × 256 Lena.bmp 2-D image are shown in Table 2.
It can be seen from Table 2 that, with the increase of the compression ratio, the image reconstruction precision of every measurement matrix improves [14], as reflected in the increasing PSNR of the reconstructed image. The trends and the comparison of image reconstruction accuracy between the matrices under different compression ratios are shown in Figure 1.
According to Figure 1, across the different compression ratios, the PSNR of the image reconstructed with the Gaussian random measurement matrix is better than that of the other measurement matrices. However, because this matrix is generated randomly according to a certain distribution, its construction requires a large amount of computation and a call to a special function library, so it is not easy to apply in an embedded hardware implementation [15]. The Fourier matrix and Bernoulli matrix are also random matrices, similar to the Gaussian random matrix; although they perform well in image reconstruction, they likewise pose difficulties for embedded hardware implementation [16]. The polynomial matrix, which belongs to the deterministic matrices, needs only a small number of parameters for its construction in embedded hardware, but the simulation results show that its reconstruction effect is not ideal. The Toeplitz matrix and rotation matrix also belong to the deterministic matrices and have the advantages of short construction time, a small amount of computation, and little need for RAM space, which means easy embedded hardware implementation; but compared with the random matrices, there is an obvious gap in the PSNR of the reconstructed image.
The matrix based on the Kronecker product is formed from a Gaussian orthogonal basis through the Kronecker product operation, followed by QR decomposition and orthogonal normalization, which makes the matrix perform better [17]. It can be seen from the PSNR of the reconstructed image that its performance approaches that of the random measurement matrices and is better than that of the common deterministic measurement matrices. The amount of computation needed to construct the matrix is small, so it is easy to embed in hardware.
Overall, the Kronecker product measurement matrix based on an orthogonal matrix performs well not only in accuracy but also in time consumed, which means this novel measurement matrix offers great potential for the hardware implementation of compressive sensing for images and other high-dimensional signals.
4 Conclusions
In this research, we use the basic properties of the Kronecker product operation and a low-dimensional orthogonal basis to obtain higher-dimensional column vectors, and we construct the Kronecker product measurement matrix based on the orthogonal basis.
We have proved that the columns of the novel matrix are linearly independent and that the matrix occupies less storage space and requires less calculation. It also has the advantage of shorter construction time, and at the same time it performs well in terms of image reconstruction quality. Therefore, the Kronecker product measurement matrix based on an orthogonal basis that we have constructed is suitable for embedded hardware implementation and offers great potential for the hardware implementation of compressive sensing for images and other high-dimensional signals.
References
Tropp J, Gilbert A: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theor. 2007, 53: 4655-4666.
Xu L, Liang Q: Compressive sensing in radar sensor networks using pulse compression waveforms. In IEEE International Conference on Communications. Ottawa; 2012:794-798.
Xu L, Liang Q, Cheng X, Chen D: Compressive sensing in distributed radar sensor networks using pulse compression waveforms. EURASIP J. Wirel. Commun. Netw. 2013, 2013: 36. 10.1186/1687-1499-2013-36
Liang Q: Compressive sensing for radar sensor networks. In IEEE Global Telecommunications conference (Globecom 2010). Miami; 2010:1-5.
Wu J, Wang W, Liang Q, Wu X, Zhang B: Compressive sensing-based signal compression and recovery in UWB wireless communication system. Wiley Wireless Comm. Mobile Comput. 2012. 10.1002/wcm.2228
Wu J, Wang W, Liang Q, Wu X, Zhang B: Compressive sensing based data encryption system with application to sense-through-wall UWB noise radar. Wiley Secur Commun. Netw. 2013. 10.1002/sec.670
Wu J, Liang Q, Cheng X, Chen D, Narayanan RM: Amplitude based compressive sensing for UWB noise radar signal. In IEEE Globecom (Workshop on Radar and Sonar Networks). Anaheim; 2012:1430-1434.
Wu J, Liang Q, Kwan C: A novel and comprehensive compressive sensing-based system for data compression. In IEEE Globecom (Workshop on Radar and Sonar Networks). Anaheim; 2012:1420-1425.
Xu L, Liang Q, Zhang B, Wu X: Compressive Sensing in Radar Sensor Networks for Target RCS Value Estimation. In IEEE Globecom (Workshop on Radar and Sonar Networks). Anaheim; 2012:1410-1415.
Liang Q, Wu J, Cheng X, Chen D, Liang J: Sparsity and compressive sensing of sense-through-foliage radar signals. In IEEE International Conference on Communications. Ottawa; 2012:6376-6380.
Chen J, Liang Q, Paden J, Gogineni P: Compressive sensing analysis of synthetic aperture radar raw data. In IEEE International Conference on Communications. Ottawa; 2012:6362-6366.
Chen J, Liang Q: Rate distortion performance analysis of compressive sensing. In IEEE Global telecommunications Conference (Globecom 2011). Houston; 2011:1-5.
Liang Q: Compressive sensing for synthetic aperture radar in fast-time and slow-time domains. In Conference on Signals Systems and Computers (ASILOMAR). Pacific Grove; 2011:1479-1483.
Kirachaiwanich D, Liang Q: Compressive sensing: to compress or not to compress. In Conference on Signals Systems and Computers (ASILOMAR). Pacific Grove; 2011:809-813.
Candès EJ, Wakin MB, Boyd SP: Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14: 877-905. 10.1007/s00041-008-9045-x
Do TT, Lu G, Nguyen N, Tran TD: Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In 42nd Asilomar Conference on Signals, Systems and Computers. Pacific Grove; 2008:581-587.
Jung A, Taubock G, Hlawatsch F: Compressive spectral estimation for nonstationary random processes. In IEEE International Conference on Acoustics, Speech and Signal Processing. Taipei; 2009:3029-3032.
Acknowledgment
This research was supported by the Tianjin Younger Natural Science Foundation (12JCQNJC00400) and the National Science Foundation of China (61271411).
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zhang, B., Tong, X., Wang, W. et al. The research of Kronecker product-based measurement matrix of compressive sensing. J Wireless Com Network 2013, 161 (2013). https://doi.org/10.1186/1687-1499-2013-161