An improved sparsity estimation variable step-size matching pursuit algorithm

Zhang Ruoyu  Zhao Honglin

(Communication Research Center, Harbin Institute of Technology, Harbin 150080, China)

Abstract: To improve the reconstruction performance of greedy algorithms for sparse signals, an improved greedy algorithm, called sparsity estimation variable step-size matching pursuit, is proposed. Compared with state-of-the-art greedy algorithms, the proposed algorithm incorporates the restricted isometry property and a variable step size, which are utilized for sparsity estimation and for reducing the reconstruction time, respectively. Based on the sparsity estimation, an initial value comprising the sparsity level and the support set is computed at the beginning of the reconstruction, which provides preliminary sparsity information for signal reconstruction. Then, the residual and correlation are calculated according to this initial value, and the support set is refined at the next iteration by means of the variable step size and backtracking. Finally, the correct support set is obtained when the halting condition is reached, and the original signal is reconstructed accurately. The simulation results demonstrate that the proposed algorithm improves the recovery performance and considerably outperforms existing algorithms in terms of running time for sparse signal reconstruction.

Key words:compressed sensing; sparse signal reconstruction; matching pursuit; sparsity estimation

Compressed sensing (CS)[1-2], which makes it possible to sample analog signals at far lower rates than the Nyquist rate, has attracted tremendous interest over the past few years. CS comprises three procedures: sparse representation, incoherent linear measurement and signal reconstruction. The reconstruction algorithm aims to recover the original sparse signal from the measurements and is one of the most important components of CS.

Various efforts have been made to reconstruct sparse signals with reliable accuracy and robustness to noise. l1-minimization and greedy pursuit algorithms are the two major families of recovery algorithms. l1-minimization algorithms, such as basis pursuit (BP)[3] and its optimized variant, gradient projection[4], solve a convex minimization problem. These convex relaxation algorithms need the minimal number of measurements but suffer from high computational complexity. Greedy pursuit algorithms, which are iterative, identify the support set of the target signal and construct an approximate signal from the chosen supports until a halting condition is met. Orthogonal matching pursuit (OMP)[5] is the archetypal greedy algorithm, and its computational complexity is significantly lower than that of l1-minimization algorithms. Regularized OMP[6] and stagewise OMP[7] are classical improvements of OMP that additionally refine the selected column atoms of the measurement matrix. Subspace pursuit (SP)[8] and compressive sampling matching pursuit (CoSaMP)[9], which also refine the columns selected in each iteration, are based on the idea of backtracking: the reconstruction performance is improved by further selecting the projection coefficients in the subspace spanned by the previously chosen columns. SP and CoSaMP require the sparsity K as a priori knowledge, which may not be available in practical applications. To overcome this drawback, the sparsity adaptive matching pursuit (SAMP)[10] algorithm was proposed, which reconstructs signals without knowing the sparsity by dividing the recovery process into several stages with a fixed step size. Another sparsity adaptive approach, based on a matching test[11], obtains an initial estimate for the iteration.
Furthermore, a sparsity adaptive algorithm[12] utilizing the restricted isometry property (RIP) and distributed compressed sensing was proposed for wideband compressive spectrum sensing. However, a fixed step size leads to either a long reconstruction time or inaccurate recovery. VSSAMP[13] and MSAMP[14] both reduce the reconstruction time for speech signals by changing the step size. The RIP itself was studied in detail in Ref.[15], where its implications for compressed sensing were established.

In this paper, we propose a new greedy algorithm called sparsity estimation variable step-size matching pursuit (SEVSMP). The proposed algorithm estimates the sparsity of the target signal by utilizing RIP. Then, the estimated sparsity level and the support set are leveraged as the initial value of the iteration, which provides preliminary sparsity information for signal reconstruction. In addition, the variable step size is chosen according to the residual in the proposed algorithm SEVSMP, which is more flexible compared with the fixed step size in the SAMP algorithm. The simulation results demonstrate that the proposed algorithm can reduce reconstruction time dramatically and recover signals effectively.

1 Compressed Sensing

Suppose that signal x is an N-length vector and Ψ=[φ1, φ2, …, φN] is the sparse basis, where φi is an N-length column vector (i=1, 2, …, N). The signal x is said to be K-sparse on the basis Ψ if it satisfies

x=Ψα

(1)

‖α‖0=K

(2)

where α∈RN×1 is a sparse vector and K≪N; ‖·‖0 denotes the number of nonzero elements. Then, the signal x is equivalently represented by the vector α under the linear transform Ψ. Based on compressed sensing, the signal x can be reconstructed from measurements as follows:

y=Φx

(3)

where y∈RM×1 is the measurement vector and Φ∈RM×N is the measurement matrix where M<N. According to Eq.(3), we collect the M-dimensional non-adaptive linear measurement signal y, which contains the whole information of the N-dimensional signal x.

Substituting Eq.(1) into Eq.(3), we can obtain

y=ΦΨα=ACSα

(4)

where ACS=ΦΨ.

The reconstruction problem of signal x is transformed into recovering α from measurements y. As long as α is obtained, we can use Eq.(1) to obtain the original signal x. A sufficient condition for exact recovery is that matrix ACS satisfies the RIP condition[1].
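As a concrete illustration of the measurement model in Eqs.(1), (3) and (4), the following sketch (assuming, for simplicity, the identity sparse basis Ψ=I so that ACS=Φ, and borrowing the dimensions of the simulation section; all variable names are illustrative) builds a K-sparse signal and its measurement vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the simulation section: M = 100 measurements, N = 256 samples.
N, M, K = 256, 100, 30

# K-sparse vector alpha: K nonzero entries at random positions (Eqs.(1)-(2)).
alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
alpha[support] = rng.standard_normal(K)

# Gaussian measurement matrix Phi with i.i.d. N(0, 1/M) entries.
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

# Eq.(3)/(4): the M-dimensional non-adaptive linear measurement.
y = Phi @ alpha
```

Reconstruction then amounts to recovering alpha (and hence x) from the 100 entries of y.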

Definition 1 Matrix ACS∈RM×N is said to satisfy the RIP condition with restricted isometry constant δK (the smallest number for which the bound holds), if

(1-δK)‖v‖2² ≤ ‖ACSv‖2² ≤ (1+δK)‖v‖2²

(5)

holds for any K-sparse vector v∈RN×1 with ‖v‖0≤K.
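Computing δK exactly is intractable, but inequality (5) can be spot-checked numerically. The following Monte-Carlo sketch (an empirical estimate only, not a certificate; the dimensions and trial count are illustrative assumptions) samples random K-sparse vectors and records the smallest δ consistent with them:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 100, 256, 10
A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # plays the role of A_CS

# Sample random K-sparse vectors v and record ||A v||_2^2 / ||v||_2^2;
# by Eq.(5), a valid delta_K must bracket every ratio within [1-delta, 1+delta].
ratios = []
for _ in range(200):
    v = np.zeros(N)
    idx = rng.choice(N, size=K, replace=False)
    v[idx] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(A @ v) ** 2 / np.linalg.norm(v) ** 2)
ratios = np.array(ratios)

# Empirical lower bound on delta_K implied by the sampled vectors.
delta_hat = max(ratios.max() - 1.0, 1.0 - ratios.min())
```

For Gaussian matrices the ratios concentrate around 1, which is why such matrices satisfy the RIP with high probability.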

Now we need to reconstruct the unknown signal x from the measurement y. A natural formulation of the recovery problem is within an l0 norm minimization framework, which seeks a solution to the problem:

min‖α‖0  s.t. y=ACSα

(6)

Unfortunately, the above problem is NP-hard. One way to avoid using this computationally intractable formulation is to consider an l1-minimization problem:

min‖α‖1 s.t. y=ACSα

(7)

For the above two problems, as previously mentioned, there are various kinds of reconstruction methods to solve them. The proposed algorithm will be depicted elaborately in the next section.
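For reference, the baseline greedy solver OMP discussed above can be sketched as follows (a simplified illustration, not the reference implementation of Ref.[5]; the stopping tolerance and instance dimensions are assumptions):

```python
import numpy as np

def omp(A, y, K, tol=1e-6):
    """Orthogonal matching pursuit: greedily recover a K-sparse x from y = A x."""
    M, N = A.shape
    r = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        # Identify the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Orthogonal projection of y onto the span of the chosen columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(N)
    x[support] = coef
    return x

# Usage on a small synthetic instance.
rng = np.random.default_rng(2)
M, N, K = 80, 128, 4
A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x_hat = omp(A, A @ x_true, K)
```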

2 SEVSMP Algorithm

The SEVSMP algorithm is described in this section. SAMP adopts a stagewise approach that searches for the real support set stage by stage, where the stage transition is determined by the residual between the measurement signal and its projection onto the space spanned by the support set. The iteration processes of OMP and ROMP are governed by the sparsity, whereas that of SAMP is governed by the step size because the sparsity is unknown. The running time of the reconstruction becomes too long if the step size is too small, while wrong supports appear if the step size is too large. Since the step size plays a significant role in both running time and reconstruction performance, it is important to choose an appropriate step size.

2.1 Sparsity estimation

The SAMP algorithm needs an initialization of the finalist size in the first stage. On the one hand, the recovery may be inaccurate if the initial finalist size is large; on the other hand, if the finalist size starts from 1, the reconstruction time becomes extremely long. The most important modification in the proposed algorithm is therefore the sparsity estimation. The main idea is to conduct a matching test to obtain a support set whose cardinality K0 is less than the real sparsity K; if K0>K, the subsequent iteration cannot operate. Let Γ denote the real support of signal x with |Γ|=K, and let ΦΓ denote the submatrix formed by the columns of Φ whose indices are in Γ. Moreover, let g=ΦTy, where gi stands for the i-th element of g. Then the set Γ0 consists of the indices corresponding to the K0 largest absolute values of gi, with |Γ0|=K0. The following proposition can then be stated.

Proposition 1 Suppose that Φ satisfies the restricted isometry property with parameters K and δK. If K0≥K, then

‖(ΦΓ0)Ty‖2 ≥ ((1-δK)/√(1+δK))‖y‖2

(8)

Proof Select the K0 maximum indices of |gi| (1≤iN), which form the set Γ0. Since K0≥K and Γ0 collects the largest correlations, we obtain

‖(ΦΓ0)Ty‖2 ≥ ‖(ΦΓ)Ty‖2

(9)

Furthermore, since y=ΦΓxΓ, we have

‖(ΦΓ)Ty‖2 = ‖(ΦΓ)TΦΓxΓ‖2

(10)

Denote λ((ΦΓ)TΦΓ) as an eigenvalue of matrix (ΦΓ)TΦΓ. According to the definition of RIP, we have 1-δK ≤ λ((ΦΓ)TΦΓ) ≤ 1+δK. Therefore, we can obtain the following expression:

‖(ΦΓ)TΦΓxΓ‖2 ≥ (1-δK)‖x‖2

(11)

On the other hand, utilizing the property of RIP, ‖y‖2² = ‖ΦΓxΓ‖2² ≤ (1+δK)‖x‖2², so that

‖x‖2 ≥ ‖y‖2/√(1+δK)

(12)

Combining inequalities (9) to (12), the following formula can be obtained:

‖(ΦΓ0)Ty‖2 ≥ ((1-δK)/√(1+δK))‖y‖2

(13)

The proof is completed.

Considering the contrapositive of Proposition 1, if

‖(ΦΓ0)Ty‖2 < ((1-δK)/√(1+δK))‖y‖2

(14)

then K0<K.

According to Proposition 1, once a set satisfying inequality (14) is found, the sparsity estimate K0 can be obtained. The concrete steps of the sparsity estimation are as follows: let K0=1; while inequality (14) holds, set K0=K0+1; stop as soon as inequality (14) is no longer satisfied. At the same time, Γ0 serves as the estimate of the true support set Γ.
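The estimation loop can be sketched as follows (the helper name `estimate_sparsity` is illustrative; δK=0.26 follows the initialization of Algorithm 1, and the threshold is the right-hand side of inequality (14)):

```python
import numpy as np

def estimate_sparsity(Phi, y, delta_K=0.26):
    """Increase K0 until inequality (14) no longer holds (Sec. 2.1)."""
    M, N = Phi.shape
    g = np.abs(Phi.T @ y)
    order = np.argsort(g)[::-1]            # indices sorted by |g_i|, descending
    thresh = (1.0 - delta_K) / np.sqrt(1.0 + delta_K) * np.linalg.norm(y)
    K0 = 1
    # While (14) holds, K0 < K by Proposition 1, so keep enlarging the set.
    while K0 < M and np.linalg.norm(Phi[:, order[:K0]].T @ y) < thresh:
        K0 += 1
    return K0, order[:K0]                  # estimated sparsity and support set

# Usage: estimate the sparsity of a K = 20 sparse signal.
rng = np.random.default_rng(3)
M, N, K = 100, 256, 20
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
K0, Gamma0 = estimate_sparsity(Phi, Phi @ x)
```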

2.2 Algorithm description

From the above description of sparsity estimation, the initial sparsity and the support set can be obtained. Fig.1 shows the conceptual diagram of SEVSMP. In this diagram, Ck denotes the candidate set and Fk corresponds to the finalist set. Both of them are adaptive. That is to say, sets Ck and Fk change with the iterations so that the algorithm can refine atoms and choose the final support.

The variable step size is incorporated according to the following criterion: the step size declines as the residual becomes small. This modification effectively reduces the running time. The recovery process is divided into several stages, each of which contains several iterations; the concrete steps are shown in the pseudocode. The function max() returns the indices corresponding to the largest magnitude values of its argument. The algorithm halts when the norm of the residual ‖r‖2 falls below a threshold ε. The parameter η, whose value is determined by extensive empirical experiments, controls the step size of each iteration.

Fig.1 Conceptual diagram of the SEVSMP

Algorithm 1 SEVSMP

Input: Φ, y, ε, and η.

Output: A K-sparse approximation of the original signal.

Initialization: K0=1; δK=0.26; Γ0=∅.

Sparsity estimation:

repeat

g=ΦTy;

Γ0={K0 indices corresponding to the largest magnitude values in g};

if ‖(ΦΓ0)Ty‖2 < ((1-δK)/√(1+δK))‖y‖2

K0=K0+1;

end

until ‖(ΦΓ0)Ty‖2 < ((1-δK)/√(1+δK))‖y‖2 is violated.

calculate the initial residual resti=y-ΦΓ0(ΦΓ0)†y;

Subspace backtracking:

Ω=K0; k=1; F0=Γ0; r0=resti;

repeat

Sk={Ω indices corresponding to the largest magnitude values in ΦTrk-1}; (preliminary test)

Ck=Fk-1∪Sk;

F={Ω indices corresponding to the largest magnitude values in (ΦCk)†y}; (final test)

r=y-ΦF(ΦF)†y;

if ‖r‖2≥‖rk-1‖2

if ‖r‖2>η·‖y‖2

s=⌈K0/2⌉;

else

s=⌈K0/4⌉;

end

Ω=Ω+s;

else

Fk=F; rk=r; k=k+1;

end

until halting condition ‖r‖2<ε is reached.
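Algorithm 1 can be turned into a runnable sketch (with some assumptions made explicit: the preliminary set Sk is taken as the Ω largest correlations with the residual, as in SAMP; projections use least squares in place of the pseudoinverse; and `max_iter` is a safeguard added for the sketch, not part of the pseudocode):

```python
import numpy as np
from math import ceil

def sevsmp(Phi, y, eps=0.1, eta=0.15, delta_K=0.26, max_iter=200):
    """Sparsity estimation variable step-size matching pursuit (sketch)."""
    M, N = Phi.shape
    norm_y = np.linalg.norm(y)
    thresh = (1.0 - delta_K) / np.sqrt(1.0 + delta_K) * norm_y

    def project(cols):
        # Least-squares projection of y onto the chosen columns.
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        return coef, y - Phi[:, cols] @ coef

    # --- Sparsity estimation (Sec. 2.1) ---
    order = np.argsort(np.abs(Phi.T @ y))[::-1]
    K0 = 1
    while K0 < M and np.linalg.norm(Phi[:, order[:K0]].T @ y) < thresh:
        K0 += 1
    F = list(order[:K0])
    _, r = project(F)

    # --- Subspace backtracking with variable step size (Sec. 2.2) ---
    Omega = K0
    for _ in range(max_iter):
        if np.linalg.norm(r) < eps:        # halting condition
            break
        # Preliminary test: Omega strongest correlations with the residual.
        S = np.argsort(np.abs(Phi.T @ r))[::-1][:Omega]
        C = list(set(F) | set(S))
        coef_C, _ = project(C)
        # Final test (backtracking): keep the Omega largest coefficients.
        F_new = [C[i] for i in np.argsort(np.abs(coef_C))[::-1][:Omega]]
        coef_new, r_new = project(F_new)
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            # Residual stalled: enlarge the support with a variable step.
            s = ceil(K0 / 2) if np.linalg.norm(r_new) > eta * norm_y else ceil(K0 / 4)
            Omega = min(Omega + s, M)
        else:
            F, r = F_new, r_new

    x = np.zeros(N)
    coef, _ = project(F)
    x[F] = coef
    return x

# Usage: recover a binary sparse signal, as in the simulation section.
rng = np.random.default_rng(4)
M, N, K = 100, 256, 10
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = 1.0
x_hat = sevsmp(Phi, Phi @ x_true)
```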

2.3 Complexity analysis

As a matter of fact, SEVSMP has two main procedures: the sparsity estimation and the subspace backtracking iteration. At the sparsity estimation stage, the dominant cost is the correlation g=ΦTy, which requires (2M-1)N operations; selecting the K0 largest entries and evaluating the test of inequality (14) contribute only lower-order terms. At the subspace tracking stage, the cost of the k-th iteration is dominated by the correlation with the residual and the least-squares projection, which is of order O(MN). In total, the whole cost of Algorithm 1 is O(MN). More importantly, SEVSMP needs fewer iterations than SAMP. The reason is that, at the beginning of the algorithm, a large step size is used to approach the real sparsity; as the residual decreases, the step size declines according to the extent of the residual, with a lower limit of 1. As a result, SEVSMP recovers the original signal in much less time.

3 Simulation Results

In this section, we use numerical simulations to illustrate that SEVSMP is a more powerful algorithm for sparse signal reconstruction than other greedy algorithms. The measurement matrix Φ is generated randomly with entries drawn i.i.d. from the Gaussian distribution N(0, 1/M), and its dimensions are M=100, N=256. OMP, ROMP and SAMP are compared with the proposed algorithm. The experiments are conducted on a Pentium(R) Dual-Core CPU at 2.6 GHz, using Matlab R2011b (7.13.0.5664).

3.1 Experiment 1

In fact, the restricted isometry constant δK is difficult to calculate exactly because of the randomness of the measurement matrix; it is only known that δK ranges between 0 and 1 and is typically small. This experiment is implemented to illustrate the effectiveness of the sparsity estimation. The sparsity is K=30. For this fixed K, the experiment is repeated 50 times for each value of δK.

According to Fig.2, a smaller δK leads to greater fluctuation of the sparsity estimate, and the estimate exceeds the real sparsity in some runs. Conversely, the sparsity estimate is steady when δK is large. In the extreme case, the sparsity estimate remains 1 when δK is large enough; Algorithm 1 then degenerates into SAMP, which shows that SAMP is a special case of Algorithm 1. It is clear that δK=0.26 is a proper initial value.

Fig.2 Sparsity estimation of different δK for a fixed sparsity

3.2 Experiment 2

The variable step size is another feature of the proposed algorithm. To illustrate the practical behavior of the variable step size, a typical run is shown in Fig.3, with the sparsity set to K=30. The step size of SAMP is fixed at 1, so it takes 30 stage iterations to reach the final sparsity. Owing to the variable step size, SEVSMP needs only 19 iterations, a reduction of more than one third compared with SAMP. In SEVSMP, the step size of the first iteration is 3; it then decreases to 2 until the 10th iteration and finally to 1 as the estimate approaches the real sparsity. Thus, a different step size is used to approach the real support set in each stage.

Fig.3 Size of support set with variable step sizes between SEVSMP and SAMP

3.3 Experiment 3

The major interest of this experiment is the reconstruction success rate of the four algorithms. For each K, 500 simulations are conducted to calculate the probability of exact reconstruction for the different algorithms. The halting condition ε is set to 0.1 in both SAMP and SEVSMP, and the threshold is η=0.15.

Figs.4(a) and (b) demonstrate that for the binary sparse signal and the Gaussian sparse signal, the performance of the proposed algorithm is close to that of SAMP and superior to those of OMP and ROMP. As can be seen, for the binary sparse signal, SEVSMP requires fewer measurements for a fixed sparsity. For the Gaussian sparse signal, SEVSMP is slightly inferior to SAMP. Hence, the proposed algorithm performs better at reconstructing binary sparse signals.

(a)

(b)
Fig.4 Probability of reconstruction of different algorithms.  (a) Using binary sparse signal; (b) Using Gaussian sparse signal

3.4 Experiment 4

This experiment investigates the execution time among different sparsities using these four algorithms. We use the binary sparse signal as the original signal. For each value of K, the experiment is repeated 500 times and the average running time of each K is calculated.

From Fig.5(a), we can see that the running time of SEVSMP is much less than that of SAMP; SEVSMP takes less than half the time of SAMP. Owing to the sparsity estimation in the proposed algorithm, a proper initial sparsity and support set are obtained before pursuing the real support, and together with the variable step size, this reduces the running time significantly. When K<40, the execution time increases with the sparsity. The execution time remains unchanged when K>40 because the two algorithms cannot precisely reconstruct the target signal at that point.

(a)

(b)
Fig.5 Binary sparse signal reconstruction of different algorithms. (a) Running time; (b) MSE

The mean square error (MSE) of the reconstruction using the binary sparse signal is shown in Fig.5(b). The MSE of the reconstruction is defined as

MSE = (1/N)‖x̂-x‖2²

(15)

where x̂ is the reconstructed signal.
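Assuming the standard per-sample form of the MSE in Eq.(15) (the normalization by N is an assumption of this sketch), the metric can be computed as:

```python
import numpy as np

def reconstruction_mse(x, x_hat):
    """Eq.(15): mean square error between original and reconstructed signals."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return float(np.mean((x_hat - x) ** 2))
```

A perfect reconstruction gives an MSE of exactly zero.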

From Fig.5(b), we can see that the MSE of SEVSMP is the smallest among the four algorithms when K<35. When K>35, the reconstruction MSE of the proposed algorithm converges and remains steady, similarly to the other algorithms.

4 Conclusion

In this paper, a novel greedy algorithm, SEVSMP, is proposed for sparse signal reconstruction. As its name suggests, the proposed algorithm is characterized mainly by sparsity estimation and a variable step size; in addition, prior knowledge of the sparsity is unnecessary. The simulation results show that, owing to these two features, the reconstruction time of the proposed algorithm is considerably lower than that of SAMP, while its reconstruction performance is comparable to or slightly better than that of SAMP. The proposed algorithm is thus well suited to reconstructing sparse signals.

References:

[1]Donoho D L. Compressed sensing [J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306. DOI:10.1109/tit.2006.871582.

[2]Li Y, Sha X J, Wang K, et al. The weight-block compressed sensing and its application to image reconstruction [C]//2nd International Conference on Instrumentation, Measurement, Computer, Communication and Control. Hangzhou, China, 2012: 723-727.

[3]Candes E J, Tao T. Decoding by linear programming [J]. IEEE Transactions on Information Theory, 2005, 51(12):4203-4215. DOI:10.1109/tit.2005.858979.

[4]Figueiredo M, Nowak R D, Wright S J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems [J]. IEEE Journal of Selected Topics in Signal Processing, 2007, 1(4): 586-597. DOI:10.1109/jstsp.2007.910281.

[5]Tropp J A, Gilbert A C. Signal recovery from random measurements via orthogonal matching pursuit [J]. IEEE Transactions on Information Theory, 2007, 53(12): 4655-4666. DOI:10.1109/tit.2007.909108.

[6]Needell D, Vershynin R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit [J]. IEEE Journal of Selected Topics in Signal Processing, 2010, 4(2): 310-316. DOI:10.1109/jstsp.2010.2042412.

[7]Donoho D L, Tsaig Y, Drori I, et al. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit[J]. IEEE Transactions on Information Theory, 2012, 58(2): 1094-1121. DOI:10.1109/tit.2011.2173241.

[8]Dai W, Milenkovic O. Subspace pursuit for compressive sensing signal reconstruction [J]. IEEE Transactions on Information Theory, 2009, 55(5): 2230-2249. DOI:10.1109/tit.2009.2016006.

[9]Needell D, Tropp J A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples [J]. Applied and Computational Harmonic Analysis, 2009, 26(3):301-321. DOI:10.1016/j.acha.2008.07.002.

[10]Do T T, Lu G, Nguyen N, et al. Sparsity adaptive matching pursuit algorithm for practical compressed sensing [C]//Asilomar Conference on Signals, Systems and Computers. Pacific Grove, USA, 2008: 581-587.

[11]Yang C, Feng W, Feng H, et al. A sparsity adaptive subspace pursuit algorithm for compressive sampling [J]. Chinese Journal of Electronics, 2010, 38(8): 1914-1917. (in Chinese)

[12]Zhao Z J, Hu J W. A sparsity adaptive algorithm for wideband compressive spectrum sensing [J]. Telecommunications Science, 2014, 3(1): 100-104. (in Chinese)

[13]Yu Z. Variable step-size compressed sensing-based sparsity adaptive matching pursuit algorithm for speech reconstruction [C]//IEEE Chinese Control Conference. Nanjing, China, 2014: 7344-7349.

[14]Zhu Y W, Zhao Y J, Sun B. A modified sparsity adaptive matching pursuit algorithm [J]. Signal Processing, 2012, 28(1): 80-86.

[15]Candès E J. The restricted isometry property and its implications for compressed sensing [J]. Comptes Rendus Mathematique, 2008, 346(9/10): 589-592. DOI:10.1016/j.crma.2008.03.014.


Received:2015-09-16.

Biographies:Zhang Ruoyu(1992—), male, graduate; Zhao Honglin(corresponding author), male, doctor, professor, hlzhao@hit.edu.cn.

Foundation item:The National Basic Research Program of China (973 Program) (No.2013CB329003).

Citation:Zhang Ruoyu, Zhao Honglin. An improved sparsity estimation variable step-size matching pursuit algorithm[J].Journal of Southeast University (English Edition),2016,32(2):164-169.doi:10.3969/j.issn.1003-7985.2016.02.006.
