Hindawi Publishing Corporation Mathematical Problems in Engineering Volume 2014, Article ID 586014, 8 pages http://dx.doi.org/10.1155/2014/586014

Research Article

The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector

Wu Qidi, Li Yibing, Lin Yun, and Yang Xiaodong

College of Information and Communication Engineering, Harbin Engineering University, Harbin, China Correspondence should be addressed to Lin Yun; linyun@hrbeu.edu.cn

Received 24 August 2013; Revised 7 January 2014; Accepted 19 January 2014; Published 4 March 2014 Academic Editor: Yi-Hung Liu

Copyright © 2014 Wu Qidi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Due to the limited accuracy of conventional image restoration methods, this paper presents a nonlocal sparse reconstruction algorithm based on similarity measurement. To improve the quality of the restored results, we propose two schemes, one for dictionary learning and one for sparse coding. For dictionary learning, we measure the similarity between patches of the degraded image by constructing a Shearlet feature vector, classify the patches into clusters according to this similarity, and train a cluster dictionary for each class; cascading these yields the universal dictionary. For sparse coding, we propose a novel objective function with a coding-residual term, which suppresses the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter for the optimization under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure present in nonlocal patches and of coding-residual suppression, the proposed method outperforms conventional methods both in visual perception and in PSNR.

1. Introduction

A reasonable representation is the basis of many tasks in image processing. Here, "reasonable" means expressing the important information in an image with few coefficients, which is called sparse representation [1]. We always want to express a signal in a cost-effective way to reduce the processing cost, and sparse representation satisfies this requirement exactly. The wavelet transform is the landmark work and is near-optimal for one-dimensional signals, but its limited directionality weakens its ability to represent high-dimensional signals. To solve this problem, Donoho and others proposed multiscale geometric analysis, which mainly comprises the Ridgelet, Curvelet [2], Bandlet, and Contourlet [3] transforms, and applied it to image restoration with good effect.

As for sparse representation in image restoration, there are two main approaches: multiscale geometric analysis (MGA) and dictionary learning. Within MGA, Donoho proposed the pioneering wavelet threshold-shrinkage method. After that, Yin designed a non-Gaussian bivariate shrinkage function [4] under the MAP criterion, which achieved better performance. In 2006, to overcome the limited directionality of wavelets, da Cunha et al. proposed the NSCT transform [5, 6], which was applied to restoration within a Bayesian framework. However, these conventional restoration models may not be accurate enough when the image is seriously degraded, because their fixed overcomplete dictionaries are not sufficient to represent the abundant structures in natural images. So a new concept called dictionary learning was proposed [7-9], along with many sparse coding methods such as MP [10], OMP [11], and GP [12]. Further, Aharon proposed K-SVD [13, 14], which updates the dictionary by applying an SVD decomposition to the residual signal, and applied it to high-accuracy image restoration.

The goal of this paper is to investigate sparse reconstruction algorithms for image restoration within the dictionary-learning framework. We propose a nonlocal block sparse reconstruction scheme based on the Shearlet feature vector, with which we measure the similarity between image patches and classify them for learning cluster dictionaries. We then propose a new objective function for sparse coding to achieve high-accuracy image restoration.

2. Image Restoration with Dictionary Learning

Suppose that the observation signal y, the original signal x, and the noise n satisfy

y = x + n. (1)

According to the sparse representation theory in [1], sparse coding amounts to solving the optimization problem in (2):

α = arg min_α ||α||_0   s.t. ||y − Dα||_2^2 ≤ ε, (2)

where D is the sparse dictionary of x and α is the coding vector. We can then recover the original signal by x = Dα. Here ε is a constant tied to the noise standard deviation, ε = (Cσ)^2.

For general restoration algorithms based on sparse theory, the dictionary D is fixed. A new dictionary-learning design called K-SVD was therefore proposed in [13] and has achieved good results in image restoration. In K-SVD, sparse coding and dictionary learning are carried out alternately, and the optimization task in (2) can be changed to

(α_x, D) = arg min_{α,D} { ||y − Dα||_2^2 + λ||α||_0 },   x = Dα_x. (3)

Elad et al. divided the image into many patches so as not to increase the dictionary dimension and obtained the block-based sparse coding at (i, j) with

α_{i,j} = arg min_α { ||y_{i,j} − Dα||_2^2 + λ||α||_0 }, (4)

where y_{i,j} denotes the patch centered at (i, j). The patch size is 7 × 7, and the patches are obtained by sliding a square window over the image. The restored image is then obtained by averaging the overlapping patches.
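As a hedged illustration of this block-based pipeline, the following sketch extracts every 7 × 7 patch with a stride-1 sliding window and rebuilds the image by averaging the overlaps. The function names and the pure-numpy approach are ours, not the paper's implementation.

```python
import numpy as np

def extract_patches(img, size=7):
    """Slide a size x size window over the image with stride 1 and
    return each patch flattened into a row vector."""
    H, W = img.shape
    return np.stack([img[i:i + size, j:j + size].ravel()
                     for i in range(H - size + 1)
                     for j in range(W - size + 1)])

def average_patches(patches, shape, size=7):
    """Place each (restored) patch back at its location and average
    the overlapping contributions, as in the block-based scheme."""
    H, W = shape
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    k = 0
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            acc[i:i + size, j:j + size] += patches[k].reshape(size, size)
            cnt[i:i + size, j:j + size] += 1
            k += 1
    return acc / cnt
```

Extracting the patches of an unmodified image and averaging them back reproduces the image exactly, which is a convenient sanity check for the overlap bookkeeping.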

Generally, K-SVD learns the dictionary from randomly selected training examples, so the learned atoms are weak at representing certain local structures in the image, which makes the coding insufficiently sparse. This paper therefore makes two contributions. First, we propose a clustering-based dictionary-learning scheme that trains a subdictionary for each cluster and then produces the universal dictionary from them. Second, we propose a novel objective function for sparse coding, which adds a coding-residual term to the traditional formulation. The details of the two contributions are introduced below.

3. A Novel Scheme for Image Restoration

3.1. Nonsubsampled Shearlet Transform. The nonsubsampled Shearlet [15] is a multiscale geometric analysis method advanced by Lim and Kutyniok. It consists of a Laplacian pyramid and directional filter banks and is multidirectional and shift-invariant. Owing to its parabolic scaling principle, the nonsubsampled Shearlet performs excellently at capturing geometric features and singularities in high dimensions, which makes it widely used in image restoration.

3.2. Similarity Measurement. Given the many advantages of the Shearlet, we propose a similarity measurement method. We implement the Shearlet transform over L levels and construct a feature vector for each pixel as

V = ( sv^{1,1}, ..., sv^{1,θ_1}, ..., sv^{L,1}, ..., sv^{L,θ_L} )^T. (5)

Here, θ_l is the number of subbands at the lth level and sv^{l,θ} is the Shearlet coefficient. Owing to anisotropy, the coefficient of a signal is large when the local geometric structure is similar to the basis function and small otherwise, whereas noise is isotropic, so its coefficients are uniform across the basis functions. For these two reasons, the vector in (5) has good antinoise ability, and we use it to measure the similarity between two patches. Supposing the patch size is 5 × 5, we compute the vector in (5) for all 25 pixels. Then we construct the subvectors V_{sub,k} generated by the kth dimension of V:

V_{sub,k} = ( V_1(k), V_2(k), ..., V_25(k) )^T,

where V_i(k) denotes the kth element of V_i (i = 1, ..., 25 is the index of the pixel within the patch). Supposing V^1_{sub,k} and V^2_{sub,k} come from the two patches to be measured, we compute the similarity index S_i as follows:

S_i = exp( − ||V^1_{sub,k} − V^2_{sub,k}||_2^2 / v_k ), (6)

where v_k is called the Shearlet wave coefficient, v_k = γσ_k (σ_k is the standard deviation of the coefficients in the kth subband). With the formulation in (6), we can measure the similarity between two patches better than with the Euclidean distance of gray values in the spatial domain.
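A minimal sketch of this similarity index follows, assuming the per-pixel Shearlet feature vectors of both patches have already been arranged into the subvectors V_{sub,k}. How the per-subband indices are combined into a single score is our assumption (a product across subbands); the names and the gamma default are illustrative.

```python
import numpy as np

def similarity_index(V1, V2, sigma_k, gamma=3.0):
    """Per-subband similarity exp(-||V1_sub,k - V2_sub,k||^2 / v_k),
    with v_k = gamma * sigma_k, combined by a product over subbands
    (the aggregation rule across k is an assumption).
    V1, V2: (K, 25) arrays whose k-th row is V_sub,k for each patch;
    sigma_k: (K,) per-subband coefficient standard deviations."""
    v = gamma * np.asarray(sigma_k, dtype=float)       # v_k = gamma * sigma_k
    d2 = np.sum((np.asarray(V1) - np.asarray(V2)) ** 2, axis=1)
    s_k = np.exp(-d2 / v)                              # one index per subband
    return float(np.prod(s_k))                         # combine subbands
```

Identical patches score 1.0, and the score decays toward 0 as the Shearlet features diverge, with subbands of larger coefficient spread (larger v_k) penalized less, which is the anti-noise behavior the text describes.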

3.3. Nonlocal Dictionary Learning

3.3.1. Nonlocal Dictionary Learning. The restoration algorithm consists of two main parts: dictionary learning and sparse coding. In this paper we improve the algorithm with the similarity measurement method of Section 3.2. Unlike conventional methods, we do not simply select training patches from separate example images; instead, we select nonlocally similar patches from the degraded image that we want to restore. We then cluster these patches to train cluster subdictionaries, which makes the universal dictionary better adapted to local geometric features than global K-SVD. Moreover, the training data within a cluster are similar to one another, so there is no need to produce an overcomplete dictionary; each subdictionary size is

Figure 1: Restoration performance comparison on house (σ = 15). Panels: (a) noisy; (b) original; (d) SSBM3D; (e) MK-SVD; (f) proposed.

half the size of the corresponding cluster. We now present a concrete description of the learning scheme as follows.

Step 1. Implement the Shearlet transform on the image Y.

Step 2. Construct the Shearlet feature vector according to formula (5) (the patch size is 5 × 5).

Step 3. Calculate the standard deviation of the kth subband, σ_k = ( (1/N) Σ_{i=1}^{N} (y_i − ȳ)^2 )^{1/2}, and obtain the wave coefficient v_k; here y_i and ȳ are the Shearlet coefficients and their mean value, and N is the total number of coefficients in the kth subband.

Step 4. Cluster the patches by the method in [16] using the index S_i in formula (6), producing the clusters Ω_i (i = 1, 2, ..., K is the cluster indicator).

Step 5. Take the patches in Ω_i as the training data and learn the cluster subdictionary with K-SVD.

Step 6. By cascading all the cluster dictionaries, we can get the universal dictionary for the whole image.
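The six steps above can be sketched as follows. This is a minimal stand-in: plain k-means does the clustering, and a truncated SVD replaces K-SVD in Step 5, since a full K-SVD implementation would exceed a short example. All function names are illustrative.

```python
import numpy as np

def kmeans_labels(X, K, iters=20, seed=0):
    """Plain k-means (Step 4 stand-in): assign each patch to a cluster."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(lab == k):
                C[k] = X[lab == k].mean(0)
    return lab

def learn_universal_dictionary(patches, K=3):
    """Cluster the patches, learn one subdictionary per cluster with a
    truncated SVD (K-SVD stand-in, Step 5), sized at half the cluster,
    and cascade the subdictionaries (Step 6)."""
    lab = kmeans_labels(patches, K)
    subs = []
    for k in range(K):
        Xk = patches[lab == k]
        if len(Xk) == 0:
            continue
        n_atoms = max(1, len(Xk) // 2)          # half the cluster size
        U, _, _ = np.linalg.svd(Xk.T, full_matrices=False)
        subs.append(U[:, :min(n_atoms, U.shape[1])])
    return np.hstack(subs)                       # universal dictionary
```

Each subdictionary's atoms (left singular vectors) are unit-norm and tuned to the structure of one cluster, and the horizontal stack plays the role of the cascaded universal dictionary.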

Because of the Shearlet's ability to capture local geometric features, the patches within a set are highly similar in geometric structure, which gives the atoms of each subdictionary strong adaptability to local structure; they can therefore represent the local geometric features sufficiently. Cascading all the subdictionaries, the universal dictionary can represent all kinds of features across the whole image.

We now give a simple example of the advantage of clustering-based dictionary learning, using the following set of training examples:

[100, 200, 0]^T, [110·randn(1), 120·randn(1), 0]^T,
[100·randn(1), 80·randn(1), 0]^T, [0, 150·randn(1), 200·randn(1)]^T.

Here, "randn(1)" produces a Gaussian random variable. We take the above training data to learn a dictionary with two atoms, using the proposed algorithm and K-SVD, respectively, with the maximum K-SVD iteration number set to 10. The learned dictionaries are shown in Table 1.

As can be seen in Table 1, the atoms produced by the proposed algorithm have a structure more similar to the training data.

3.3.2. Sparse Coding. To improve the coding accuracy, we propose a novel optimization called coding-residual

Figure 2: Restoration performance comparison on Barbara (σ = 15). Panels: (a) noisy; (b) original; (c) NSCT; (d) SSBM3D; (e) MK-SVD; (f) proposed.

Table 1: Learned dictionary with different methods.

Method               Learned dictionary
Proposed algorithm   [ −0.5630 −0.0459 ; −0.7995 0.2835 ; −0.2096 −0.9579 ]
K-SVD                [ −0.5375 0.0000 ; −0.8432 0.3294 ; 0.0000 −0.9442 ]

suppression, which is more effective than plain ℓ1-norm sparse coding. When the dictionary is given, the ℓ1-norm sparse coding of the original signal can be written as

α_x = arg min_α { ||x − Dα||_2^2 + λ||α||_1 }. (9)

In the restoration problem we can only observe y, so the sparse coding is rewritten as

α_y = arg min_α { ||y − Dα||_2^2 + λ||α||_1 }. (10)

Comparing (9) and (10), we see that to reconstruct x with high accuracy, the coding α_y should be as close as possible to the true sparse coding α_x. We therefore introduce the coding residual α_s as follows:

α_s = α_y − α_x. (11)

Then we change the objective function (10) into the residual-suppression form:

α_y = arg min_α { ||y − Dα||_2^2 + λ Σ_k ||α_k||_1 + γ Σ_k ||α_s||_1 },   α_s = α_k − η_i, (12)

where k is the index of the kth patch. In practice we cannot obtain the true sparse coding, so η_i serves as a good estimate of it and is calculated as a weighted average of the sparse codings of the patches in the same cluster; λ and γ are the regularization parameters. If the kth patch belongs to Ω_i, we calculate η_i as

η_i = Σ_p w_{i,p} α_{i,p},   w_{i,p} = c · exp{ −||x_i − x_{i,p}||_2^2 }, (13)

where α_{i,p} is the sparse coding of the pth patch x_{i,p} in Ω_i and c is the normalization parameter.
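A short sketch of this weighted-average estimate η_i follows, assuming the cluster's patches and their sparse codes are already available as arrays; the names are ours, and the unit-bandwidth Gaussian weight matches the reconstructed form above only up to a scale.

```python
import numpy as np

def cluster_estimate(codes, patches, ref_patch):
    """Weighted average of the sparse codes in one cluster: the weight
    decays with the squared distance of each patch to the reference
    patch, and the normalization plays the role of c."""
    w = np.exp(-np.sum((np.asarray(patches) - ref_patch) ** 2, axis=1))
    w = w / w.sum()                      # normalize: weights sum to one
    return w @ np.asarray(codes)         # eta = sum_p w_p * alpha_{i,p}
```

When all patches in the cluster are equally close to the reference, the estimate reduces to the plain mean of their codes.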

The second term in the optimization (12) enforces local sparsity, so that only part of the dictionary's atoms are selected. In our scheme, however, each patch in a given cluster selects a single subdictionary, which

Figure 3: Restoration performance comparison on Bridge (σ = 20). Panels: (a) noisy; (b) original; (c) NSCT; (d) SSBM3D; (e) MK-SVD; (f) proposed.

means that the coding over the other subdictionaries is zero. Our scheme therefore guarantees sparsity naturally; that is, we can remove the second term of (12) and rewrite it as follows:

α_y = arg min_α { ||y − Dα||_2^2 + γ Σ_k ||α_k − η_i||_1 }. (14)

With (14), the optimization is compacted to only the coding-residual constraint, which means that the coding obtained from the observed signal by (14) is close to that obtained from the original signal by (9).

3.3.3. Scheme Summary. The whole scheme consists of a two-level iterative algorithm; we present its brief steps as follows.

Initialization: set the initial image X = Y (Y is the degraded image), let D^(0) be the complete DCT dictionary, and calculate the initial sparse coding of each patch with any pursuit algorithm.

Outer Loop:

(a) on the OLth iteration: learn the dictionary D^(OL) from X with the algorithm proposed in Section 3.3.1 (L is the maximum iteration number, OL = 1, 2, ..., L);

(b) compute the estimate set {η_i}_{i=1}^{K} for all the clusters under D^(OL).

(c) Inner Loop:

(i) for each patch, obtain its coding α_y by (14), which can be solved with the method in [17];

(ii) repeat (i) until all the patches are processed;

(d) estimate all the restored patches by x̂ = D^(OL) α_y;

(e) set OL = OL + 1 and repeat until OL > L.
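The inner loop (step (c)) can be sketched with an iterative-shrinkage step in the spirit of [17]: shrinking the residual α − η toward zero, rather than α itself, is exactly the coding-residual suppression of (14). The step size, γ value, and function names below are illustrative, and the data term carries a conventional 1/2 factor.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inner_loop(y_patches, D, eta, gamma=0.1, iters=50):
    """Per patch, approximately solve
        min_a (1/2)||y - D a||^2 + gamma * ||a - eta||_1
    by gradient steps on the data term plus shrinkage of the residual."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2            # 1/L, L = ||D||_2^2
    A = np.tile(eta, (len(y_patches), 1)).astype(float)
    for _ in range(iters):
        for k, y in enumerate(y_patches):
            g = D.T @ (D @ A[k] - y)                  # gradient of data term
            z = A[k] - step * g                       # gradient step
            A[k] = eta + soft(z - eta, step * gamma)  # shrink the residual
    return A @ D.T                                    # restored patches x = D a
```

With an orthonormal dictionary the iteration reaches its fixed point quickly and shrinks each coefficient toward η by γ, which is the behavior the residual-suppression term is meant to produce.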

On the one hand, η_i approaches the true sparse coding more and more closely, which makes the residual suppression more effective. On the other hand, by alternating sparse coding and dictionary learning, the coding and the dictionary are both improved and promote each other.

3.3.4. Parameter Selection. In conventional models, the regularization parameter is generally a constant. To make (14) more effective, we derive a self-adaptive regularization parameter γ under the Bayesian framework.

For notational convenience, we take δ = α_k − η_i as the coding-residual vector. Under the Bayesian framework, the MAP estimate of δ can be computed as

δ = arg max_δ { log p(δ | y) } = arg max_δ { log p(y | δ) + log p(δ) }. (15)

Figure 4: Restoration performance comparison on Butterfly (σ = 20). Panels: (a) noisy; (b) original; (c) NSCT; (d) SSBM3D; (e) MK-SVD; (f) proposed.

The likelihood p(y | δ) is

p(y | δ) = p(y | α, η) = (1 / (√(2π) σ_n)) exp( −||y − Dα||_2^2 / (2σ_n^2) ), (16)

where σ_n is the standard deviation of the noise.

As for p(δ), from some statistical experiments we obtain an empirical model as an i.i.d. Laplacian distribution:

p(δ) = Π_k Π_i (1 / (2σ_{k,i})) exp( −|δ_k(i)| / σ_{k,i} ). (17)

Here, δ_k(i) is the ith element of the residual vector of the kth patch and σ_{k,i} is the standard deviation of δ_k(i). Combining (15)-(17), we obtain the following:

α_y = arg min_α { (1/(2σ_n^2)) ||y − Dα||_2^2 + Σ_k Σ_i |δ_k(i)| / σ_{k,i} }. (18)

For a given η_i (α_k = η_i + δ_k), the optimal sparse coding α_y can be obtained as follows:

α_y = arg min_α { ||y − Dα||_2^2 + Σ_k Σ_i (2σ_n^2 / σ_{k,i}) |δ_k(i)| }. (19)

Comparing with the regularization in (14), we can set γ to the following self-adaptive form:

γ_{k,i} = 2σ_n^2 / σ_{k,i}. (20)
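The self-adaptive parameter derived above is a one-liner in code; the small epsilon guard for near-zero residual deviations is our addition, not part of the derivation.

```python
import numpy as np

def adaptive_gamma(sigma_n, sigma_delta, eps=1e-8):
    """Self-adaptive regularization gamma_{k,i} = 2*sigma_n^2 / sigma_{k,i}:
    larger noise tightens the residual penalty, while elements whose
    residuals naturally vary more (larger sigma_{k,i}) are penalized less.
    eps guards against division by a near-zero deviation (our addition)."""
    return 2.0 * sigma_n ** 2 / (np.asarray(sigma_delta, dtype=float) + eps)
```

In the inner loop, each coefficient of each patch then gets its own threshold instead of one global constant.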

4. Experimental Results and Analysis

To verify the performance of the proposed algorithm, we present comparative restoration experiments on image denoising. The compared algorithms are the NSCT method of [6], SSBM3D of [18], MK-SVD of [14], and the proposed algorithm. The noisy images are generated by adding Gaussian noise with different standard deviations (σ = 15, 20). All test images are 256 × 256, and the parameters are set as follows.

γ = 3, l = 3 (mentioned in Section 3.2); K = 50 (mentioned in Section 3.3.1); p = 5 (mentioned in Section 3.3.2).

For an objective evaluation of the restored images, we take the PSNR as the indicator for the different algorithms, and we show part of the experimental results to assess the denoising performance visually. Supposing that x and x̂ are the original

Table 2: The PSNR (dB) indicator with a = 15.

Noisy NSCT BM3D K-SVD Proposed

House 24.89 33.11 34.67 33.92 34.22

Barbara 24.63 30.49 32.33 31.40 31.97

Peppers 24.61 31.51 32.70 31.77 32.16

Bridge 24.70 27.71 28.83 28.55 28.62

Butterfly 24.63 29.17 30.64 29.89 30.18

Table 3: The PSNR (dB) indicator with a = 20.

Noisy NSCT BM3D K-SVD Proposed

House 22.10 31.78 33.76 32.67 32.89

Barbara 22.05 28.95 30.68 29.75 30.19

Peppers 22.09 30.12 31.36 30.46 31.12

Bridge 22.13 26.41 27.25 26.87 27.02

Butterfly 21.11 27.65 29.46 28.32 28.96

image and the restored image, respectively, the definition of PSNR is

PSNR = 10 log_10 ( 255^2 / ( (1/N) ||x − x̂||_2^2 ) ),

where N is the total number of pixels.
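The PSNR indicator above is straightforward to compute; the sketch below assumes 8-bit images (peak value 255) stored as numpy arrays.

```python
import numpy as np

def psnr(x, x_hat, peak=255.0):
    """PSNR in dB: 10 * log10(peak^2 / MSE), where MSE is the mean
    squared error between the original and restored images."""
    mse = np.mean((np.asarray(x, dtype=float)
                   - np.asarray(x_hat, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 16 gray levels on an 8-bit image, for instance, yields a PSNR of roughly 24 dB, which is in the range of the noisy inputs in Tables 2 and 3.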

Figures 1 and 2 show the restoration performance with standard deviation σ = 15, while Figures 3 and 4 show the performance with σ = 20. For the other images we report only the PSNR values, in Tables 2 and 3.

From the local detail in each group we can draw some conclusions. Although all four methods accomplish the denoising task, there are differences among them. For the NSCT method, the ability to capture structure does not adapt to different images because of its fixed dictionary; in addition, some scratches appear in its results that do not exist in the original image, and its PSNR is the lowest of the four methods. BM3D shows an advantage in PSNR over the other methods, but its results are excessively smoothed, which loses much detail texture. Because its dictionary is learned from random example patches taken from different images, the K-SVD algorithm generates some visual artifacts and cannot recover local geometric features sufficiently, so its results rank only above NSCT. The proposed method shows an advantage in both restoration quality and PSNR; although its PSNR is lower than BM3D's, its ability to recover detail is stronger than that of the other three methods.

5. Conclusion

Since abundant geometric information and self-similarity are important properties to exploit in sparse-representation-based image restoration, this paper proposed a novel scheme that restores the image by nonlocal sparse reconstruction with similarity measurement. We cluster the patches of the degraded image by similarity measurement with the Shearlet feature vector, which is good at capturing local geometric structure, and then use the clusters to train cluster dictionaries and to obtain good estimates of the true sparse coding of the original image. We also derived the regularization parameter under the Bayesian framework. Through cluster dictionary learning and coding-residual suppression, the proposed scheme shows advantages in both visual quality and PSNR over leading denoising methods. Beyond image denoising, the proposed model can be extended to other restoration tasks such as deblurring and superresolution.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61201237 and 61301095), the Natural Science Foundation of Heilongjiang Province of China (no. QC2012C069), and the Fundamental Research Funds for the Central Universities (no. HEUCFZ1129). This paper is also funded by the Defense Preresearch Foundation Project of Shipbuilding Industry Science (no. 10J3.1.6).

References

[1] A. M. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol. 51, no. 1, pp. 34-81, 2009.

[2] G. G. Bhutada, R. S. Anand, and S. C. Saxena, "Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform," Digital Signal Processing, vol. 21, no. 1, pp. 118-130, 2011.

[3] L. Shang, P.-G. Su, and T. Liu, "Denoising MMW image using the combination method of contourlet and KSC shrinkage," Neurocomputing, vol. 83, no. 15, pp. 229-233, 2012.

[4] S. Yin, L. Cao, Y. Ling, and G. Jin, "Image denoising with anisotropic bivariate shrinkage," Signal Processing, vol. 91, no. 8, pp. 2078-2090, 2011.

[5] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: theory, design, and applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089-3101, 2006.

[6] J. Jia and L. Chen, "Using normal inverse Gaussian model for image denoising in NSCT domain," Acta Electronica Sinica, vol. 39, no. 7, pp. 1563-1568, 2011.

[7] R. Rubinstein, A. M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE, vol. 98, no. 6, pp. 1045-1057, 2010.

[8] Y. He, T. Gan, W. Chen, and H. Wang, "Multi-stage image denoising based on correlation coefficient matching and sparse dictionary pruning," Signal Processing, vol. 92, no. 1, pp. 139-149, 2012.

[9] K. Labusch, E. Barth, and T. Martinetz, "Soft-competitive learning of sparse codes and its application to image reconstruction," Neurocomputing, vol. 74, no. 9, pp. 1418-1428, 2011.

[10] Z. Xue, D. Han, and J. Tian, "Fast and robust reconstruction approach for sparse fluorescence tomography based on adaptive matching pursuit," in Proceedings of the IEEE Conference on Asia Communications and Photonics, pp. 1-6, November 2011.

[11] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310-316, 2010.

[12] T. Blumensath and M. E. Davies, "Stagewise weak gradient pursuits," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4333-4346, 2009.

[13] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, 2006.

[14] J. Mairal, G. Sapiro, and M. Elad, "Learning multiscale sparse representations for image and video restoration," Multiscale Modeling and Simulation, vol. 7, no. 1, pp. 214-241, 2008.

[15] W.-Q. Lim, "The discrete shearlet transform: a new directional transform and compactly supported shearlet frames," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166-1180, 2010.

[16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881-892, 2002.

[17] I. Daubechies, M. Defrise, and C. de Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413-1457, 2004.

[18] M. Poderico, S. Parrilli, G. Poggi, and L. Verdoliva, "Sigmoid shrinkage for BM3D denoising algorithm," in Proceedings of the IEEE International Workshop on Multimedia Signal Processing (MMSP '10), pp. 423-426, October 2010.
