International Scholarly Research Network ISRN Computational Mathematics Volume 2012, Article ID 982792, 12 pages doi:10.5402/2012/982792

Research Article

Nonconvex Compressed Sampling of Natural Images and Applications to Compressed MR Imaging

Wenze Shao,1 Haisong Deng,2 and Zhihui Wei3

1 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210046, China

2 School of Mathematics and Statistics, Nanjing Audit University, Nanjing 211815, China

3 School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, China Correspondence should be addressed to Wenze Shao, shaowenze1010@yahoo.com.cn

Received 25 July 2011; Accepted 5 September 2011 Academic Editors: K. T. Miura and E. Weber

Copyright © 2012 Wenze Shao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Several compressed imaging reconstruction algorithms have been proposed for natural and MR images. In essence, however, most of them aim at the good reconstruction of edges in the images. In this paper, a nonconvex compressed sampling approach is proposed for structure-preserving image reconstruction, imposing sparseness regularization on both strong edges and oscillating textures in images. The proposed approach can yield high-quality reconstruction when images are sampled at ratios far below the Nyquist rate, owing to its exploitation of a kind of approximate ℓ0 seminorms. Numerous experiments are performed on natural images and MR images. Compared with several existing algorithms, the proposed approach is more efficient and robust, not only yielding higher signal to noise ratios but also reconstructing images of better visual quality.

1. Introduction

In the past several decades, image compression [1, 2] and superresolution [3, 4] have been the primary techniques for alleviating the storage/transmission burden in image acquisition. As for image compression, however, it is known that the compress-then-decompress scheme is not economical [5]. Though superresolution is capable of economically reconstructing high-resolution images to subpixel precision from multiple low-resolution images of a similar view, subpixel shifts have to be estimated in advance. Unfortunately, accurate motion estimation is not an easy job for superresolution, possibly resulting in a compromise of image quality (e.g., spatial resolution, signal to noise ratio (SNR)). Recently, a novel sampling theory, called compressed sensing or compressive sampling (CS) [5-9], asserts that one can reconstruct signals from far fewer samples or measurements than traditional sampling methods use. The emergence of CS has offered a great opportunity to economically acquire signals or images even when the sampling ratio is significantly below the Nyquist rate. In fact, CS has become one of the hottest research topics in the field of signal processing. Though there have been many relevant results on the encoding and decoding of sparse signals, our focus in this paper is mainly on the compressed sampling of natural images and its applications to magnetic resonance imaging (MRI) reconstruction.

Recently, several algorithms have been proposed for compressed imaging reconstruction (e.g., [6, 11-17]). In essence, each of them solves, either directly or asymptotically, a minimization problem involving a single ℓ1 norm, total variation (TV), or their combination, and the differences in their reconstruction quality are anticipated to be minor. In this paper, a nonconvex CS approach is proposed for structure-preserving image reconstruction, imposing sparseness regularization on both strong edges and oscillating textures in images. Even when images are sampled at ratios far below the Nyquist rate, the proposed approach still yields much higher quality reconstruction than the aforementioned methods, owing to its use of a kind of approximate ℓ0 seminorms. Numerous experiments performed on test natural images and MR images show that images reconstructed by the proposed approach not only have higher SNR values but also exhibit better visual quality even as the sampling ratio becomes much lower.

The paper is organized as follows. In Section 2, a review of compressed sampling signal and image reconstruction is given, including the basic theory of compressed sampling and reconstruction algorithms for compressed imaging. The proposed nonconvex compressed sampling approach is described in Section 3, solved by the half-quadratic regularization, primal-dual, and operator-splitting methods. Section 4 provides numerous experiments on test natural images and ordinary MR images and compares the proposed method with several existing algorithms in terms of signal to noise ratios, relative errors, and visual effects. Finally, the paper is concluded in Section 5.

2. Review of Compressed Sampling Signal and Image Reconstruction

The success of CS theory rests on two fundamental principles, namely, sparsity and incoherence, or the restricted isometry property (RIP). Sparsity says that a signal should be sparse itself or have a sparse representation in some transform domain, and incoherence implies that a sampling/measurement matrix should have an extremely dense representation in that transform domain [9]. Consider for the moment the CS problem y = Φx, where x ∈ C^N is a sparse signal, Φ ∈ C^{M×N} is a measurement matrix, and y ∈ C^M is the measurement vector. Suppose the sparsity of x is K; then signal reconstruction can be recast as the ℓ0-problem: min_x {||x||_0 : y = Φx}. Since this problem is NP-hard, it is more realistic to solve the computationally tractable ℓp-problem (0 < p ≤ 1): min_x {||x||_p : y = Φx}. For this problem, a sufficient condition for exact reconstruction has been provided in [18]: if Φ satisfies the inequality δ_{aK} + b δ_{(a+1)K} < b − 1 (b > 1, a = b^{p/(2−p)}), then the unique minimizer of the ℓp-problem is exactly x. In real applications, due to the finite precision of sensing devices, measurements are inevitably corrupted by at least a small amount of noise. Hence, the constraint y = Φx must be relaxed, resulting in either the problem

min_x { ||x||_p : ||y − Φx||_2 ≤ σ },  (1)

or its Lagrange version

min_x { γ ||x||_p + (1/2) ||y − Φx||_2^2 },  (2)

where σ and γ are positive parameters. For the noisy compressed sensing problem, it is shown in [19] that the solution to problem (1) exhibits a reconstruction error on the order of C·σ, with the constant C depending only on the RIP constant of Φ. Moreover, a reconstruction error bound is also provided in [19] for signals that are not sparse but merely compressible.
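For p = 1, problem (2) can be solved by simple first-order schemes of the iterative splitting-and-thresholding type [21]. A minimal sketch follows; the measurement matrix, sparse signal, and parameter values below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, gamma, n_iter=500):
    # Minimize gamma*||x||_1 + 0.5*||y - Phi x||_2^2 by alternating a
    # gradient step on the data term with soft-thresholding.
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2      # step size 1/||Phi||^2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x + tau * Phi.T @ (y - Phi @ x), tau * gamma)
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100)) / np.sqrt(30)   # random measurement matrix
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]               # K = 3 sparse signal
y = Phi @ x_true + 0.01 * rng.standard_normal(30)    # noisy measurements
x_hat = ista(Phi, y, gamma=0.05)
```

With M = 30 measurements of an N = 100 signal of sparsity K = 3, the recovered x_hat should be close to x_true up to the small bias introduced by the ℓ1 penalty.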

In the literature, numerous computational methods have been proposed to solve the ℓp-problem, particularly in the case p = 1, and its relaxations for sparse solutions. One representative algorithm is the interior-point method, such as the primal log-barrier approach for CS [20]. Compared with interior-point methods, however, gradient methods are generally more competitive on CS problems with very sparse solutions, for example, iterative splitting and thresholding [21], fixed-point continuation (FPC) [10], and gradient projection for sparse reconstruction (GPSR) [22]. Methods for ℓp (0 < p < 1) nonconvex minimization [23, 24], for example, iteratively reweighted least squares (IRLS) [23], do not always find global minima and are also slower. Besides, a class of Bayesian CS approaches has been proposed, based on sparse Bayesian learning, to solve the ℓ0-problem [25, 26]; in essence, these correspond to nonconvex methods just like IRLS.
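The IRLS idea can be sketched as follows: each iteration solves a weighted least-squares problem whose weights are taken from the previous iterate (FOCUSS-style), with a smoothing parameter gradually decreased. The dimensions, the value of p, and the ε-schedule below are illustrative assumptions:

```python
import numpy as np

def irls_lp(Phi, y, p=0.5, n_iter=30):
    # Approximately solve min ||x||_p^p subject to y = Phi x (0 < p < 1)
    # by iteratively reweighted least squares.
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)   # minimum-norm start
    eps = 1.0
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (1.0 - p / 2.0)     # smoothed |x_i|^(2-p) weights
        # Weighted minimum-norm solution of y = Phi x.
        x = w * (Phi.T @ np.linalg.solve(Phi @ (w[:, None] * Phi.T), y))
        eps = max(eps / 10.0, 1e-6)               # gradually sharpen the weights
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 25]] = [1.0, -2.0]                     # K = 2 sparse signal
x_hat = irls_lp(Phi, Phi @ x_true)
```

Each update satisfies the constraint y = Φx by construction; the reweighting drives most entries toward zero, which is the sparsifying (and, for p < 1, nonconvex) part of the method.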

CS theory originally emerged from the community of MR imaging, where Candes et al. [6] proposed to reconstruct piecewise constant Logan-Shepp phantoms based on TV and theoretically proved exact signal reconstruction from incomplete frequency information under the assumption that the image has a sparse representation. Afterwards, researchers proposed to reconstruct images using the wavelet transform, since it is also a good sparse representation for piecewise constant images (e.g., [14, 15, 18, 22-26]). However, images, for example, natural images and MR images, are seldom piecewise constant but commonly piecewise smooth. A single TV regularization or wavelet-ℓp (0 < p ≤ 1) seminorm is not a good sparse representation for piecewise smooth images, thereby leading to avoidable mistakes on those images. For example, the iterative reweighted methods and Bayesian hierarchical methods have behaved aggressively in terms of sparsification [16].

Recently, several papers have focused specifically on compressed imaging reconstruction of natural and MR images (e.g., [6, 11-17]). In essence, however, most of them aim at good reconstruction of edges through minimizing a combination of TV and wavelet-ℓ1 regularization. Besides, their reconstruction quality is anticipated to be similar in terms of SNR and visual quality. Actually, images usually exhibit morphological diversity, meaning that several types of geometric structures coexist in an image, for example, strong edges, oscillating textures, and so on. Hence, more careful sparseness regularization is required for faithful image reconstruction.

3. Nonconvex Compressed Sampling for Structure-Preserving Image Reconstruction

3.1. Sparseness Modeling for Nonconvex Compressed Sampling. We consider the following image CS problem:

y = Φ_PDFT u + z,  (3)

where u is an original image, y is a measurement vector, z is random noise or deterministic unknown error, and Φ_PDFT ∈ C^{M×N} is a partial discrete Fourier measurement matrix, such as a uniformly random selection of M rows from an N × N discrete Fourier transform [6]. The sampling ratio is defined as M/N (M ≪ N). The original strategy is the TV-based convex optimization proposed by Candes and his Caltech team, obtaining perfect reconstruction on the

piecewise constant Logan-Shepp phantom and many other similar test phantoms in medical imaging [6]. As mentioned above, other researchers [11, 13, 16, 17] proposed to minimize TV plus a wavelet-ℓ1 norm to improve the reconstruction quality of natural and MR images.
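A partial discrete Fourier measurement operator of this kind can be implemented by keeping a random subset of 2-D DFT coefficients. A sketch with an orthonormal FFT follows; the image size and sampling ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                                   # illustrative image side length
u = rng.random((n, n))                   # stand-in for an image

# Uniformly random selection of M out of n*n Fourier coefficients.
M = int(0.25 * n * n)                    # sampling ratio M/N = 25%
idx = rng.choice(n * n, size=M, replace=False)

def phi_pdft(u):
    # Forward operator: orthonormal 2-D DFT, then keep the selected coefficients.
    return np.fft.fft2(u, norm="ortho").ravel()[idx]

def phi_pdft_adj(y):
    # Adjoint: zero-fill the unselected frequencies, then inverse DFT.
    full = np.zeros(n * n, dtype=complex)
    full[idx] = y
    return np.fft.ifft2(full.reshape(n, n), norm="ortho")

y = phi_pdft(u)
```

Because the orthonormal DFT is unitary and the operator merely selects rows, Φ_PDFT Φ_PDFTᴴ = I, which makes the forward/adjoint pair easy to verify numerically.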

Our idea originates from the fact that images exhibit morphological diversity [27, 28]. It is believed that sparseness regularization should be imposed on each morphological component of an image. In practice, strong edges and oscillating textures are of particular interest here. It has come to light that TV regularization is quite a good candidate for sparseness modeling of strong edges in piecewise constant images [6, 28]. To get more out of the linear measurements, for example, oscillating textures, proper sparseness modeling should also be imposed on those texture components. To make this idea possible, the local DCT is adopted to specifically capture the oscillating textures and, of course, other image details. Since natural images and MR images are often rich in textures and fine details, their local DCT coefficients tend to be approximately sparse, thus satisfying the fundamental principle of sparsity underlying CS. Furthermore, images are usually sampled at ratios far below the Nyquist rate in compressed imaging. Hence, it is more appropriate to recast image reconstruction as the minimization of the following variational functional:

J(u) = γ1 ||Ψ_LDCT u||_0 + γ2 TV(u) + (1/2) ||y − Φ_PDFT u||_2^2,  (4)

where TV(u) is the total variation of u, defined as TV(u) = Σ_{k,l} f(∇_{k,l} u), f(·) = ||·||_2, ∇_{k,l} u = (∇_1 u_{k,l}, ∇_2 u_{k,l}); Ψ_LDCT stands for the matrix of the local DCT, whose window width is denoted by s; γ1 and γ2 are positive regularization parameters, prescribing the importance of the solution having a small ℓ0 seminorm in the local DCT domain versus having a small TV seminorm in the spatial domain. However, it is not easy to solve the functional (4) efficiently, since the related ℓ0-problem is NP-hard.
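The isotropic TV seminorm just defined can be computed with forward differences (replicating the last row/column at the boundary); the test images below are illustrative assumptions:

```python
import numpy as np

def tv(u):
    # Isotropic total variation with forward differences:
    # TV(u) = sum over pixels of || (grad_1 u, grad_2 u) ||_2.
    d1 = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    d2 = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    return np.sqrt(d1 ** 2 + d2 ** 2).sum()

flat = np.ones((8, 8))          # constant image: TV = 0
step = np.zeros((8, 8))
step[:, 4:] = 1.0               # one vertical edge: one unit jump per row, TV = 8
```

On the piecewise constant step image, TV equals the total edge length times the jump height, which is why it serves as a sparseness measure for strong edges.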

To solve (4), our strategy differs from the ℓp (0 < p < 1) relaxations, which are usually solved by iteratively reweighted least squares [14, 18, 23, 29]. In this paper, a kind of approximate ℓ0 seminorms is proposed. Suppose x ∈ C^N is a sparse signal with sparsity K; then the ℓ0 seminorm of x can be approximated by

appEll-0(x; p) = lim_{ε→0+} Σ_{i=1}^{N} |x_i|^p / (|x_i|^p + ε),  (5)

where 1 ≤ p ≤ 2. In practice, however, ε acts as a regularization parameter, resulting in a practicable approximate ℓ0 seminorm, that is, appEll-0(x; p, ε). Then, formula (4) can be relaxed as follows:

J(u; p, ε) = γ1 appEll-0(Ψ_LDCT u; p, ε) + γ2 TV(u) + (1/2) ||y − Φ_PDFT u||_2^2.  (6)
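The behavior of approximation (5) can be checked numerically: for small ε, each nonzero entry of x contributes approximately 1 to the sum and each zero entry contributes 0, so the value approaches ||x||_0 (the test vector is an assumption):

```python
import numpy as np

def app_ell0(x, p=2.0, eps=1e-8):
    # Approximate l0 seminorm: sum_i |x_i|^p / (|x_i|^p + eps).
    a = np.abs(x) ** p
    return (a / (a + eps)).sum()

x = np.array([3.0, 0.0, 0.0, -2.0, 0.0])      # two nonzero entries
print(app_ell0(x))                            # close to 2, the l0 seminorm of x
```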

It will be demonstrated in the following section that a larger p makes formula (6) more efficient and robust for the reconstruction of texture-rich natural images and ordinary MR images, particularly when the images are highly undersampled.

3.2. Implementation Using Primal-Dual and Operator-Splitting Methods. We now come to the issue of numerically implementing formula (6). To ease the computation, rewrite formula (6) in the equivalent form

J(x; p, ε) = γ1 appEll-0(x; p, ε) + γ2 TV(Ψ_LDCT^{-1} x) + (1/2) ||y − Θx||_2^2,  (7)

where Θ = Φ_PDFT Ψ_LDCT^{-1}, Ψ_LDCT^{-1} represents the inverse local DCT, and u = Ψ_LDCT^{-1} x. Nevertheless, it is difficult to solve formula (7) directly, since appEll-0(x; p, ε) is nonconvex and TV(Ψ_LDCT^{-1} x) is nonsmooth. Thanks to the half-quadratic regularization method [30], the minimization of (7) can be translated into a computationally more tractable form, that is, x^(k+1) = argmin_x {J(x; b^(k), p, ε)}, where

J(x; b^(k), p, ε) = γ1 Σ_i (b_i^(k) · |x_i|^2) + γ2 TV(Ψ_LDCT^{-1} x) + (1/2) ||y − Θx||_2^2,  (8)

b_i^(k) = (εp/2) |x_i^(k)|^{p−2} / (|x_i^(k)|^p + ε)^2.  (9)
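Up to the reconstruction of (9) given here, b is the standard half-quadratic weight φ'(t)/(2t) for the scalar function φ(t) = t^p/(t^p + ε), so the quadratic surrogate b·t^2 is tangent to φ at the current iterate; this can be checked numerically (the values of p, ε, and t are assumptions):

```python
import numpy as np

p, eps = 1.5, 0.1          # assumed parameter values for illustration

def phi(t):
    # Per-entry term of the approximate l0 seminorm.
    return t ** p / (t ** p + eps)

def hq_weight(t):
    # Half-quadratic weight b = phi'(t) / (2 t), which evaluates to
    # (eps * p / 2) * t^(p-2) / (t^p + eps)^2.
    return 0.5 * eps * p * t ** (p - 2) / (t ** p + eps) ** 2

# Tangency check: phi'(t) should equal 2 * b(t) * t.
t, h = 0.7, 1e-6
num_deriv = (phi(t + h) - phi(t - h)) / (2 * h)
print(abs(num_deriv - 2 * hq_weight(t) * t))   # should be tiny
```

Note that the weight decays for large |x_i|, so strong coefficients are penalized less and less, which is exactly the sparsifying behavior of the approximate ℓ0 term.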

Given the current estimate b^(k), the remaining problem is to solve formula (8). Borrowing the idea of primal-dual methods [11, 31], x* is the optimal solution of (8) if and only if there exists an auxiliary variable ζ* such that

∇_{k,l} Ψ_LDCT^{-1} x* ∈ ∂f*(ζ*_{k,l}),  (10)

0 ∈ ∂J(x*; b^(k), p, ε),  (11)

where ζ* = (ζ*_{k,l}), ζ*_{k,l} ∈ C^2, f* is the convex conjugate of f, and the subdifferential ∂J(x*; b^(k), p, ε) is of the form γ1 ∂_x((x*)^T W^(k) x*) + γ2 Ψ_LDCT^{-T} Σ_{k,l} ∇*_{k,l}(ζ*_{k,l}) + ∂_x (1/2)||y − Θx*||_2^2, where ∇*_{k,l} is the adjoint operator of ∇_{k,l} and W^(k) is a diagonal matrix with the elements b_i^(k) on the diagonal and zeros elsewhere. In the following, we apply the operator-splitting method to (10) and (11). The splitting form of (10) is given as

0 ∈ κ ∂f*(ζ*_{k,l}) + ζ*_{k,l} − ζ̄_{k,l},  (12)

ζ̄_{k,l} = ζ_{k,l} + κ ∇_{k,l} Ψ_LDCT^{-1} x*,  (13)

and that of (11) corresponds to

0 ∈ τ γ1 ∂_x((x*)^T W^(k) x*) + x* − x̄,  (14)

x̄ = x − τ (γ2 Ψ_LDCT^{-T} Σ_{k,l} ∇*_{k,l}(ζ*_{k,l}) + ∂_x (1/2)||y − Θx||_2^2),  (15)

Figure 1: Test natural images of size 256 × 256.

Figure 2: Reconstruction of the natural image (Figure 1(d)) from partial Fourier data (SR = 21.59%) by FPC [10], TVCMRI [11], and HQRCSparET, respectively. Values on SNR and relative error are listed in Table 3.

where κ and τ are positive auxiliary scalars. Once again, (12) and (14) can be computed by the primal-dual method, yielding

ζ*_{k,l} = argmin_ζ { κ f*(ζ) + (1/2) |ζ − ζ̄_{k,l}|^2 },  (16)

ζ*_{k,l} = ζ̄_{k,l} / max{|ζ̄_{k,l}|, 1},  (17)

x* = argmin_x { τ γ1 x^T W^(k) x + (1/2) ||x − x̄||_2^2 },  (18)

x* = (I + 2τγ1 W^(k))^{-1} · x̄.  (19)

It can be proved [11] that both (16) and (18) have the closed-form solutions (17) and (19), yielding ζ* = (ζ*_{k,l}) and x*, respectively.
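A small sketch of the two closed-form updates, assuming the projection form ζ* = ζ̄/max{|ζ̄|, 1} for (17) and the entrywise division x* = (I + 2τγ1 W^(k))^{-1} x̄ for (19); the numerical values are illustrative:

```python
import numpy as np

def dual_update(zeta_bar):
    # Closed form of (16): project the 2-vector onto the unit l2 ball,
    # zeta* = zeta_bar / max(|zeta_bar|, 1).
    return zeta_bar / max(np.linalg.norm(zeta_bar), 1.0)

def primal_update(x_bar, b, tau, gamma1):
    # Closed form of (18): since W^(k) is diagonal with entries b_i,
    # x* = (I + 2*tau*gamma1*W)^(-1) x_bar is an entrywise division.
    return x_bar / (1.0 + 2.0 * tau * gamma1 * b)

zeta = dual_update(np.array([3.0, 4.0]))      # norm 5 -> projected to norm 1
x_star = primal_update(np.array([2.0, -4.0]),
                       b=np.array([0.5, 2.0]), tau=1.6, gamma1=0.1)
```

Both updates are separable, which is what makes each outer iteration cheap: the cost is dominated by the transforms, not by the proximal steps.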


Figure 3: Reconstruction of the natural image (Figure 1(f)) from partial Fourier data (SR = 21.59%) by FPC [10], TVCMRI [11], and HQRCSparET, respectively. Values on SNR and relative error are listed in Table 3.


Figure 4: Test MR images of size 256 × 256.

From the above discussion, the iterative reconstruction scheme for formula (6) is described by formulas (9), (13), (15), (18), and (19). When initial estimates ζ^(0), x^(0) of ζ* and x* are given, b^(0) can be estimated using (9), ζ̄^(1), x̄^(1) can be computed using (13) and (15), and ζ^(0), x^(0) can subsequently be updated using (18) and (19), obtaining ζ^(1), x^(1). For simplicity, the algorithm is called HQRCSparET (half-quadratic regularized CS based on sparseness modeling of edges and textures) and is specified in detail as follows. Notice that HQRCSparET incorporates the continuation idea of FPC [10] and hence is an iterate-then-continue scheme.

Algorithm (HQRCSparET)

Input. An image u, a measurement matrix Φ_PDFT, initial points ζ^(0), x^(0), regularization parameters ε, γ1, and γ2, auxiliary scalars κ and τ, maximum iteration times (MITs), initial iteration time k = 0, the window width s, and a continuation factor ρ.

Output. Reconstructed image u*.

(1) Initialize. γ1^(1) = max{ρ ||Θ^T y||_∞, 2(εp)^{-1} γ1}, γ2^(1) = γ1^(1) γ2/γ1, k = 1.

(2) Iterate. Estimate b^(k−1) using (9), ζ̄^(k), x̄^(k) using formulas (13) and (15), and ζ^(k), x^(k) using formulas (18) and (19).

Figure 5: MRI reconstruction of Figures 4(d), 4(e), and 4(f) from partial Fourier data (SR = 38.56%) by CTVCS [6], TVCMRI [11], and HQRCSparET, respectively. Values on SNR and relative error are listed in Table 4.

If the acceptance tests on x^(k), ζ^(k) (i.e., their relative errors reach 1e−4) are not passed, repeat the iteration and set k = k + 1.

(3) Continuation. If the acceptance tests on x^(k), ζ^(k) are passed but the stopping criterion (i.e., γ1^(k) = γ1) does not hold, reduce γ1^(k) and γ2^(k) by the factor ρ, and go to step (2).

(4) Check. If the stopping criterion holds, terminate with x* = x^(k) and u* = Ψ_LDCT^{-1} x*.
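The iterate-then-continue structure of HQRCSparET can be sketched schematically. The inner update below is a placeholder (a gradient step on a toy quadratic) standing in for the actual updates (9), (13), (15), (18), and (19); only the control flow mirrors the algorithm:

```python
import numpy as np

def hqrcsparet_skeleton(gamma_init, gamma_target, rho=2/3, tol=1e-4, max_inner=100):
    # Schematic of the continuation loop: run inner iterations at a fixed
    # gamma until the relative change is small, then shrink gamma by rho.
    gamma = gamma_init
    x = np.array([10.0])                 # toy state; the real solver holds x, zeta
    while True:
        for _ in range(max_inner):
            # Placeholder inner update: gradient step on 0.5*gamma*x^2.
            x_new = x - 0.5 * gamma * x
            done = np.abs(x_new - x).max() <= tol * max(np.abs(x).max(), 1.0)
            x = x_new
            if done:
                break
        if gamma <= gamma_target:        # stopping criterion
            return x, gamma
        gamma = max(rho * gamma, gamma_target)   # continuation step

x_final, gamma_final = hqrcsparet_skeleton(gamma_init=1.0, gamma_target=0.05)
```

Starting from a large regularization weight and shrinking it geometrically is the same continuation device used by FPC [10]; it keeps the early subproblems easy and well conditioned.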

4. Numerical Examples

4.1. General Description. In this section, compressed imaging reconstruction is applied to both natural and MR images using the proposed approach and several other recent algorithms. In each test, the observed data y is synthesized

as y = Φ_PDFT u + z, where u is the original image, Φ_PDFT is a partial DFT measurement matrix as designed in [11], and z is Gaussian white noise with zero mean and standard deviation 0.01. Note that many other measurement matrices [6, 16, 32] are also applicable within our algorithmic framework. All experiments are performed in MATLAB v7.0 running on a Toshiba laptop with an Intel Core Duo at 2 GHz and 2 GB of memory. On average, our unoptimized implementation takes about 4-5 minutes to process a 256 × 256 test image.

In the proposed HQRCSparET, ζ^(0) and x^(0) are simply initialized to zero, though the algorithm is not sensitive to the starting points; the parameter γ1 is set to 0.35e−3 and γ2 to 1e−3; the scalars κ and τ are both set to 1.6; MIT is set to 100; ε is set to 0.1, ρ is set to 2/3, and p is set to 2 unless specified otherwise. The choice of s is clarified in each specific experiment. For fair comparison, all tuning parameters of the other methods are adjusted manually to achieve the best reconstruction

Figure 6: MRI reconstruction of Figures 4(d), 4(e), and 4(f) from partial Fourier data (SR = 21.59%) by CTVCS [6], TVCMRI [11], and HQRCSparET, respectively. Values on SNR and relative error are listed in Table 5.

quality. The signal to noise ratio (SNR) and relative error (Rel. Err.) are used to evaluate the performance of the different methods.
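The two evaluation metrics can be computed as follows. The paper does not spell out its exact SNR formula, so the 10·log10 energy-ratio definition below is an assumption; the toy signal is illustrative:

```python
import numpy as np

def snr_db(u, u_hat):
    # SNR in dB: 10 * log10( ||u||^2 / ||u - u_hat||^2 ), a common definition.
    return 10.0 * np.log10(np.sum(u ** 2) / np.sum((u - u_hat) ** 2))

def rel_err(u, u_hat):
    # Relative error: ||u - u_hat||_2 / ||u||_2.
    return np.linalg.norm(u - u_hat) / np.linalg.norm(u)

u = np.array([3.0, 4.0])            # ||u|| = 5
u_hat = u + np.array([0.3, 0.4])    # error norm 0.5
print(snr_db(u, u_hat), rel_err(u, u_hat))   # approximately 20 dB and 0.1
```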

4.2. Performance on Natural Image Reconstruction. The six natural images shown in Figure 1 are used to test the performance of the proposed HQRCSparET, FPC [10], and TVCMRI [11]. The images are all of size 256 × 256. Observe that the images contain a large amount of texture and fine detail, especially the latter three. Partial Fourier data are obtained from each natural image based on the observation model in Section 4.1, and the original image u is reconstructed using FPC, TVCMRI, and HQRCSparET, respectively.

Table 1 shows experimental results on the six images at a sampling ratio (SR) of 38.56%, providing the values of SNR, relative error, and iteration counts. Here, s is chosen as 16, and p is set to 1 for the moment. From Table 1, it is clearly observed that the SNR values of the reconstructed images from HQRCSparET are higher than those from FPC and TVCMRI. In terms of relative errors, HQRCSparET also performs better than FPC and TVCMRI. A larger p, however, makes the CS strategy (6) more efficient for reconstructing texture-rich images. Let p be equal to 2 and apply HQRCSparET to Figures 1(d)-1(f). The experimental results on these three texture-rich images, shown in Table 2, manifest the efficiency of formula (6) even more clearly. Take Figure 1(e), for example: about 6 dB is gained by HQRCSparET over TVCMRI, and about 11 dB over FPC. However, it is also observed that more iterations are required, implying that HQRCSparET converges more slowly as p becomes larger.

HQRCSparET is also more robust than FPC and TVCMRI as the sampling ratio becomes lower. Table 3 shows experimental results at SR = 21.59%, obtained by FPC, TVCMRI, and HQRCSparET, respectively. Here, s is chosen

Table 1: Results on natural images in Figure 1 by three methods (SR = 38.56%, s = 16, p = 1).

Image no.     Method        SNR (dB)   Rel. err.   Iter
Figure 1(a)   FPC [10]      27.8091    1.89%       56
              TVCMRI [11]   31.1099    0.30%       87
              HQRCSparET    31.6049    0.28%       59
Figure 1(b)   FPC [10]      27.3302    1.36%       55
              TVCMRI [11]   30.4733    0.26%       83
              HQRCSparET    31.1738    0.23%       62
Figure 1(c)   FPC [10]      23.8318    2.13%       55
              TVCMRI [11]   30.6934    0.16%       86
              HQRCSparET    32.3210    0.14%       60
Figure 1(d)   FPC [10]      21.7186    3.06%       55
              TVCMRI [11]   26.9904    0.33%       80
              HQRCSparET    29.6475    0.23%       62
Figure 1(e)   FPC [10]      25.3691    2.02%       56
              TVCMRI [11]   30.3776    0.21%       82
              HQRCSparET    34.7217    0.15%       59
Figure 1(f)   FPC [10]      29.6160    1.10%       55
              TVCMRI [11]   33.9065    0.31%       79
              HQRCSparET    42.2842    0.12%       63

Table 2: Experimental results on texture-rich natural images in Figure 1 (SR = 38.56%, s = 16, p = 2).

Image no.     Method        SNR (dB)   Rel. err.   Iter
Figure 1(d)   HQRCSparET    30.1561    0.20%       67
Figure 1(e)   HQRCSparET    36.2824    0.14%       67
Figure 1(f)   HQRCSparET    46.4262    0.057%      65

Table 3: Results on natural images in Figure 1 by three methods (SR = 21.59%, s = 8, p = 2).

Image no.     Method        SNR (dB)   Rel. err.   Iter
Figure 1(a)   FPC [10]      18.5257    5.51%       53
              TVCMRI [11]   21.1391    0.71%       62
              HQRCSparET    21.6413    0.71%       65
Figure 1(b)   FPC [10]      20.0902    3.14%       53
              TVCMRI [11]   22.3289    0.60%       65
              HQRCSparET    23.8168    0.45%       64
Figure 1(c)   FPC [10]      16.5973    4.89%       53
              TVCMRI [11]   18.3677    0.73%       62
              HQRCSparET    20.7946    0.48%       65
Figure 1(d)   FPC [10]      13.8952    7.54%       53
              TVCMRI [11]   15.4621    1.74%       67
              HQRCSparET    18.4731    1.05%       64
Figure 1(e)   FPC [10]      17.6930    4.88%       53
              TVCMRI [11]   20.5017    0.64%       64
              HQRCSparET    24.8633    0.46%       64
Figure 1(f)   FPC [10]      19.8630    3.38%       53
              TVCMRI [11]   22.2937    0.98%       67
              HQRCSparET    39.8884    0.18%       62

Figure 7: MRI reconstruction of Figures 4(d), 4(e), and 4(f) from partial Fourier data (SR = 12.91%) by CTVCS [6], TVCMRI [11], and HQRCSparET, respectively. Values on SNR and relative error are listed in Table 6.

as 8. Reconstructed images of Figures 1(d) and 1(f) are shown in Figures 2 and 3, respectively, showing that HQRCSparET is capable of structure-preserving image reconstruction. Take Figure 1(d), for example. On the one hand, the TV norm in formula (6) removes the obvious ringing artifacts that occur in Figure 2(a); on the other hand, the approximate ℓ0 norm in formula (6) simultaneously preserves the textures and removes the slight ringing artifacts that occur in Figure 2(b). That is why HQRCSparET yields higher SNR values and smaller relative errors than FPC and TVCMRI in Tables 2 and 3. Similar results are also obtained at SR = 12.91%. Take the texture-rich image of Figure 1(f), for instance: the SNR value produced by HQRCSparET is 15.6560 dB, which is higher than the 12.9269 dB of FPC and the 13.8814 dB of TVCMRI. Moreover, it should be mentioned that the parameters of FPC and TVCMRI have to be adjusted accordingly to achieve the best reconstruction quality, whereas those of HQRCSparET are chosen the same as in Section 4.1.

4.3. Applications to Compressed MR Imaging. In the literature, several papers have focused on the reconstruction of MR images using TV and perhaps additional wavelet-ℓ1 constraints (e.g., [6, 11, 13, 16, 17]). In this subsection, the proposed HQRCSparET is applied to compressed MRI and compared with Candes's TV-constrained CS (CTVCS) [6] and TVCMRI [11].

The six MR images shown in Figure 4 are used to test the performance of the above three methods. All the images are of size 256 × 256. Notice that ordinary MR images are rich in textures and fine details and also of rather low contrast. From the experience of the tests on natural images in Section 4.2, HQRCSparET is expected to perform much better than CTVCS and TVCMRI. Partial Fourier data are sampled from each MR image using the observation model in Section 4.1. Table 4 shows experimental results on the six MR images at a sampling ratio of 38.56%, using CTVCS, TVCMRI, and HQRCSparET, respectively. The reconstructed MR images of Figures 4(d), 4(e), and 4(f) are shown in Figure 5. In this

Table 4: Results on MR images in Figure 4 by three methods (SR = 38.56%, s = 8, p = 2).

Image no.     Method        SNR (dB)   Rel. err.   Iter
Figure 4(a)   CTVCS [6]     33.0915    0.32%       58
              TVCMRI [11]   33.3962    0.31%       81
              HQRCSparET    48.8687    0.72        62
Figure 4(b)   CTVCS [6]     36.5660    0.27%       56
              TVCMRI [11]   38.5121    0.22%       84
              HQRCSparET    53.1667    0.52        63
Figure 4(c)   CTVCS [6]     33.9407    0.40%       57
              TVCMRI [11]   35.0528    0.36%       80
              HQRCSparET    54.3319    0.37        62
Figure 4(d)   CTVCS [6]     33.7718    0.38%       58
              TVCMRI [11]   34.3157    0.35%       79
              HQRCSparET    37.1377    0.27%       62
Figure 4(e)   CTVCS [6]     27.5838    0.91%       55
              TVCMRI [11]   28.4253    0.79%       81
              HQRCSparET    29.8415    0.78%       64
Figure 4(f)   CTVCS [6]     30.8622    0.74%       54
              TVCMRI [11]   32.5762    0.61%       76
              HQRCSparET    35.3367    0.53%       64

Table 5: Results on MR images in Figure 4 by three methods (SR = 21.59%, s = 8, p = 2).

Image no.     Method        SNR (dB)   Rel. err.   Iter
Figure 4(a)   CTVCS [6]     21.3881    1.23%       57
              TVCMRI [11]   21.9085    1.05%       71
              HQRCSparET    24.2058    0.85%       64
Figure 4(b)   CTVCS [6]     24.4943    0.91%       56
              TVCMRI [11]   25.1557    0.81%       70
              HQRCSparET    28.5230    0.67%       66
Figure 4(c)   CTVCS [6]     23.6254    1.20%       57
              TVCMRI [11]   23.9760    1.16%       71
              HQRCSparET    29.4106    0.67%       65
Figure 4(d)   CTVCS [6]     19.5002    1.64%       57
              TVCMRI [11]   19.9529    1.57%       70
              HQRCSparET    22.5328    1.08%       64
Figure 4(e)   CTVCS [6]     17.8911    2.76%       55
              TVCMRI [11]   18.0497    2.57%       57
              HQRCSparET    19.5605    2.01%       65
Figure 4(f)   CTVCS [6]     19.0326    2.60%       54
              TVCMRI [11]   20.0130    2.49%       56
              HQRCSparET    21.4833    2.05%       66

subsection, s is chosen as 8. From the SNR and relative error values in Table 4, it is concluded that HQRCSparET performs considerably better than CTVCS and TVCMRI, just as in the case of texture-rich natural image reconstruction. From Figure 5, it is observed that HQRCSparET gives a faithful reconstruction at the sufficient SR of 38.56%, and there is no obvious difference from the other two reconstructed images either.

In addition, Table 5 shows experimental results on the MR images in Figure 4 at SR = 21.59%, obtained, respectively,

by CTVCS, TVCMRI, and HQRCSparET, and Table 6 shows the results at SR = 12.91%. Reconstructed images of Figures 4(d), 4(e), and 4(f) are shown in Figures 6 and 7, respectively. On the one hand, it is found from Tables 5 and 6 that HQRCSparET still performs better than CTVCS and TVCMRI. Take Figure 4(d), for instance. At SR = 21.59%, about 3 dB is gained by HQRCSparET over CTVCS and about 2.5 dB over TVCMRI; at SR = 12.91%, about 3 dB is gained over CTVCS and about 2 dB over TVCMRI. On the other hand, from

Table 6: Results on MR images in Figure 4 by three methods (SR = 12.91%, s = 8, p = 2).

Image no.     Method        SNR (dB)   Rel. err.   Iter
Figure 4(a)   CTVCS [6]     14.9796    3.19%       57
              TVCMRI [11]   15.4836    2.48%       58
              HQRCSparET    17.2671    1.94%       65
Figure 4(b)   CTVCS [6]     18.0811    2.72%       56
              TVCMRI [11]   19.8468    1.52%       58
              HQRCSparET    21.6602    1.26%       66
Figure 4(c)   CTVCS [6]     14.8695    7.92%       58
              TVCMRI [11]   17.7077    2.99%       56
              HQRCSparET    19.2339    2.23%       66
Figure 4(d)   CTVCS [6]     12.1082    4.80%       58
              TVCMRI [11]   12.8954    3.70%       58
              HQRCSparET    15.0052    2.94%       65
Figure 4(e)   CTVCS [6]     12.6082    5.60%       55
              TVCMRI [11]   13.8043    4.35%       58
              HQRCSparET    14.6152    4.04%       65
Figure 4(f)   CTVCS [6]     12.5622    5.75%       54
              TVCMRI [11]   15.0856    4.28%       55
              HQRCSparET    16.0054    4.07%       66

Figures 6 and 7, it is observed that HQRCSparET still yields higher quality reconstruction when the images are highly undersampled. As for CTVCS, more and more staircase artifacts appear as the sampling ratio is reduced; as for TVCMRI, obvious staircase artifacts also exist in the reconstructed MR images, and the image contrast becomes lower as the sampling ratio is reduced.

5. Conclusions

In this paper, a novel CS strategy is proposed for image reconstruction based on separate sparseness modeling of strong edges and oscillating textures. Numerous experiments demonstrate that TV and perhaps additional wavelet-ℓ1 regularization are not enough to reconstruct faithful images. Unlike existing methods, the proposed approach is capable of structure-preserving image reconstruction. It is particularly efficient and robust for the reconstruction of texture-rich natural and ordinary MR images, even when the images are highly undersampled, because it exploits a kind of approximate ℓ0 seminorms. However, our approach consumes more CPU time per iteration than TVCMRI, due to the superlinear computational complexity of the local DCT. Therefore, an ongoing research direction is a computationally efficient sparse representation to model oscillating textures and fine details.

Acknowledgments

The authors would like to thank Professor Yizhong Ma for his kind and constructive suggestions on improving this paper. H. Deng's work is supported by the Talent Introduction Project (NSRC11009) of Nanjing Audit University, and Z. Wei's work is supported in part by the National High

Technology Research and Development Plan of China under Grant 2007AA12E100 and by the Natural Science Foundation of China under Grants 60802039 and 60672074.

References

[1] A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG 2000 still image compression standard," IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 36-58, 2001.

[2] T. Acharya and P. S. Tsai, JPEG 2000 Standard for Image Compression, John Wiley & Sons, Hoboken, NJ, USA, 2005.

[3] A. K. Katsaggelos, R. Molina, and J. Mateos, Super-Resolution of Images and Videos, Morgan and Claypool, 2007.

[4] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21-36, 2003.

[5] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.

[6] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.

[7] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.

[8] E.J. Candes, J. K. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207-1223, 2006.

[9] E. Candes and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Problems, vol. 23, no. 3, article no. 008, pp. 969-985, 2007.

[10] E. Hale, W. Yin, and Y. Zhang, "A fixed-point continuation method for L1-regularized minimization with applications to compressed sensing," Tech. Rep., Rice University, 2007.

[11] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, "An efficient algorithm for compressed MR imaging using total variation and wavelets," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1-8, June 2008.

[12] K. T. Block, M. Uecker, and J. Frahm, "Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint," Magnetic Resonance in Medicine, vol. 57, no. 6, pp. 1086-1098, 2007.

[13] M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: the application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182-1195, 2007.

[14] J. C. Ye, S. Tak, Y. Han, and H. W. Park, "Projection reconstruction MR imaging using FOCUSS," Magnetic Resonance in Medicine, vol. 57, no. 4, pp. 764-775, 2007.

[15] H. Jung, J. C. Ye, and E. Kim, "Improved k-t BLAST and k-t SENSE using FOCUSS," Physics in Medicine and Biology, vol. 52, no. 11, article 018, pp. 3201-3226, 2007.

[16] M. Seeger and H. Nickisch, "Compressed sensing and Bayesian experimental design," in Proceedings of the 25th International Conference on Machine Learning, pp. 912-919, July 2008.

[17] J. Yang, Y. Zhang, and W. Yin, "A fast TVL1-L2 minimization algorithm for signal reconstruction from partial Fourier data," Tech. Rep., Rice University, 2009.

[18] R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.

[19] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589-592, 2008.

[20] S. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, "A method for large-scale L1-regularized least squares problems with applications in signal processing and statistics," Tech. Rep., Stanford University, 2007.

[21] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413-1457, 2004.

[22] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE Journal on Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586-597, 2007.

[23] I. Daubechies, R. DeVore, M. Fornasier, and S. Gunturk, "Iteratively Re-weighted Least Squares minimization: proof of faster than linear rate for sparse recovery," In Proceedings of the 42nd Annual Conference on Information Sciences and Systems, pp. 26-29, 2008.

[24] E. J. Candes, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted l1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877-905, 2008.

[25] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346-2356, 2008.

[26] S. D. Babacan, R. Molina, and A. K. Katsaggelos, "Bayesian compressive sensing using Laplace priors," IEEE Transactions on Image Processing, vol. 19, no. 1, Article ID 5256324, pp. 53-63, 2010.

[27] J. L. Starck, M. Elad, and D. L. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1570-1582, 2005.

[28] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, vol. 22 of University Lecture Series, American Mathematical Society, 2001.

[29] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869-3872, April 2008.

[30] G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, Springer, New York, NY, USA, 2000.

[31] E. Esser, X. Zhang, and T. Chan, "A general framework for a class of first order primal-dual algorithms for TV minimization," Tech. Rep., UCLA, 2009.

[32] E. J. Candes and M. B. Wakin, "An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008.