Hindawi Publishing Corporation Mathematical Problems in Engineering Volume 2014, Article ID 206926, 12 pages http://dx.doi.org/10.1155/2014/206926

Research Article

Bound Alternative Direction Optimization for Image Deblurring

Xiangrong Zeng1,2

1 College of Information System and Management, National University of Defense Technology, Changsha 410073, China

2 Instituto Superior Tecnico, Universidade de Lisboa, 1049001 Lisboa, Portugal

Correspondence should be addressed to Xiangrong Zeng; zengxrong@gmail.com Received 8 March 2014; Revised 19 July 2014; Accepted 20 July 2014; Published 13 August 2014 Academic Editor: Fatih Yaman

Copyright © 2014 Xiangrong Zeng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes a new method, the bound alternative direction method (BADM), to address ℓ_p (p ∈ (0, 1)) minimization problems in image deblurring. The approach first obtains a bound unconstrained problem by bounding the ℓ_p regularizer with a novel majorizer and then, via variable splitting, reformulates the bound unconstrained problem as a constrained one, which is addressed by an augmented Lagrangian method. The proposed algorithm in effect combines the reweighted ℓ_1 minimization method with the alternating direction method of multipliers (ADMM), thereby extending the application of ADMM to ℓ_p minimization problems. The conducted experiments demonstrate the superiority of the proposed algorithm for synthesis ℓ_p minimization over state-of-the-art algorithms for synthesis ℓ_1 minimization on image deblurring.

1. Introduction

The mission of image deblurring is to restore an original image x from a noisy blurred observation y ∈ R^m modeled as

y = Ax + n, (1)

where x ∈ R^n (an M × N image stacked into an (n = MN)-vector in lexicographic order), A ∈ R^{m×n} is the matrix representation of a convolution operator, and n ∈ R^m is Gaussian white noise. This imaging inverse problem has recently inspired a considerable amount of research ([1-11] and references therein), in which the optimization problems fall into two varieties, the synthesis formulation and the analysis formulation [12], detailed below.

1.1. Formulations and Algorithms. In the synthesis formulation, the unknown image x is represented as x = Ws, where W is a wavelet frame or a redundant dictionary and s is the vector of sparse coefficients, usually estimated via a sparsity-promoting regularizer, yielding

s* = arg min_s (1/2)||AWs − y||² + λφ(s), (2)

where λ is the positive regularization parameter and φ is the regularizer, typically the ℓ_1 norm. The deblurred image is then obtained by x* = Ws*.

In the analysis formulation, as opposed to (2), the cost function is minimized with respect to x:

x* = arg min_x (1/2)||Ax − y||² + λφ(Tx), (3)

where T is a sparsifying transform (such as a wavelet transform or finite differences) and φ(Tx) analyzes the image x itself, rather than the coefficients s that φ(s) in (2) works on. If T = W is a wavelet frame, then usually φ(Tx) = ||Tx||_1, where ||v||_1 = Σ_i |v_i|; if T is a matrix representing finite differences in the horizontal and vertical directions, then φ(Tx) is the discrete anisotropic or isotropic total variation [8, 9].

In the past several decades, many algorithms have been proposed to solve (2) and (3). Among these, the iterative shrinkage/thresholding (IST, [2]) algorithm can be considered the standard one. However, IST tends to be slow, in particular when A in (1) is poorly conditioned. To overcome this problem, several fast variants of IST have been proposed, among them TwIST [13], FISTA [14], and SpaRSA [15]. TwIST is a two-step version of IST in which each iterate depends on the two previous iterates rather than only on the previous one (as in IST); FISTA is a nonsmooth variant of Nesterov's optimal gradient-based algorithm for smooth convex problems [16, 17]; and SpaRSA adopts more aggressive choices of step size in each iteration. These three algorithms, which use only the gradient information of the data-fitting term, have been shown to clearly outperform IST. Other efficient algorithms, using second-order information of the data-fitting term, have also been proposed. A notable representative is the split augmented Lagrangian shrinkage algorithm (SALSA) [10, 11], which was found to be faster than FISTA, TwIST, and SpaRSA when the matrix inversion (A^T A + μI)^{-1} (where μ is a positive parameter) can be computed efficiently (e.g., if A is a circulant matrix, the inversion can be carried out efficiently by FFT). SALSA is in fact an instance of the alternating direction method of multipliers (ADMM) [18, 19], which has a close relationship [20] with the Bregman iterations [21-24]; among the latter, the split Bregman method (SBM) [22] has recently been applied frequently to imaging inverse problems [25-27].

1.2. ℓ_p Minimization. Convergence of the above algorithms is guaranteed, benefiting from the fact that the regularizer φ in (2) and (3) is usually convex. In this paper, a nonconvex (and nonsmooth) regularizer, the ℓ_p (p ∈ (0, 1)) norm, is adopted, since the ℓ_p norm is able to find sparser solutions than the ℓ_1 norm, as demonstrated in many studies [28-32]. This leads to the ℓ_p minimization problem of the synthesis formulation:

min_s (1/2)||AWs − y||² + λ||s||_p^p (4)

and of the analysis formulation:

min_x (1/2)||Ax − y||² + λ||Tx||_p^p, (5)

where ||v||_p^p = Σ_i |v_i|^p.

Next, a brief survey of the literature on ℓ_p minimization is given. Recently, remarkable attention has been paid to the ℓ_p minimization problem

min_x ||x||_p^p s.t. Ax = y, (6)

where p ∈ (0, 1). Sufficient conditions for (6) have been established in [29, 33-35]. Many efficient reweighted minimization methods address (6) by iteratively solving

x^{k+1} = arg min_x f(x) + ψ^{k+1}(x), (7)

where x^{k+1} is the estimated signal at the (k+1)th iteration; f is the data-fitting term, which preserves consistency with the observation, for instance, f(x) = (1/2)||Ax − y||² for Gaussian noise; and ψ^{k+1}(x) is the regularization term at the (k+1)th iteration, which encourages desirable properties of the target x, such as sparsity. Most reweighted minimization methods (see [2, 28, 30, 31, 34, 36, 37] and references therein) focus on reweighted ℓ_1 (IRL1) and reweighted ℓ_2 (IRL2) minimization, whose regularizers are ψ^{k+1}(x) = Σ_i w_i^{k+1}|x_i| and Σ_i w_i^{k+1}|x_i|², respectively, where x_i is the ith component of x and w_i^k is the weight of x_i at the kth iteration, a function of the previously estimated component x_i^k. In IRL1, usually w_i^k = (|x_i^k| + ε)^{q−1}, while w_i^k = (|x_i^k|² + ε)^{q/2−1} in IRL2, where q ∈ (0, 1]. Notice that q is always set to 1 (e.g., IRL1: [28] and IRL2: [2]) or p (e.g., IRL1: [34, 36, 38] and IRL2: [30, 31, 38]), or even 1/2 [32]. It is worth noting that the above weights are separable, even though there also exist reweighted minimization methods [39, 40] with inseparable weights.

1.3. Proposed Approach. In this paper, the ℓ_p minimization problem is used for image deblurring. The resulting problem cannot be solved efficiently by the ℓ_p minimization methods of Section 1.2, since they usually rely on slow iterations (such as the generic IST algorithm), and in image deblurring the matrix-vector products are computationally expensive. Considering this, the ADMM is used here to tackle the resulting ℓ_p minimization problem, because, as stated in Section 1.1, image deblurring problems can be handled efficiently by the ADMM. The proposed approach first bounds the unconstrained ℓ_p problem via a variant of the majorizer proposed in [4, 7], obtaining a bound unconstrained problem; then reformulates it as a constrained problem via variable splitting [41, 42]; and finally attacks the resulting constrained problem with an augmented Lagrangian method (ALM) [43]. If the majorizer of [4, 7] were used directly to bound the original unconstrained ℓ_p problem, any component of the estimate that became zero during the iterations would be stuck at zero forever, which very likely prevents convergence to a minimizer. To overcome this problem, a variant of this majorizer is proposed by adding a small positive parameter that shrinks in each iteration. The resulting bound unconstrained problem is reformulated as a constrained one through variable splitting, which splits the original variable into a pair of variables serving as the arguments of the data-fitting term and the ℓ_p regularizer, respectively, under the constraint that the two variables be equal. This constrained problem is then attacked by an ALM, yielding a Lagrangian function in the two variables introduced by the splitting.
The Lagrangian function is then alternately minimized with respect to the two variables, leading to the bound alternative direction method (BADM), which is equivalent to the combination of the reweighted ℓ_1 minimization method and the ADMM and extends the application of ADMM to ℓ_p minimization problems.

The proposed BADM has only O(n log n) cost per iteration when solving the synthesis ℓ_p formulation with a normalized Parseval frame. Experiments on a set of benchmark problems show that BADM for (4) is favorably competitive with the state-of-the-art algorithms FISTA [14], SALSA [11], and SBM [22] for (2).

1.4. Terminology and Notation. In this section, some useful elements of convex analysis are given. Let H be a real Hilbert space equipped with the inner product ⟨·, ·⟩ and associated norm ||·||. Let f : H → [−∞, +∞] be a function, and let Γ₀ denote the class of all lower semicontinuous convex functions that are not identically +∞ and never take the value −∞. The proximity operator of f is defined as

Prox_{τf}(v) = arg min_x (τf(x) + (1/2)||x − v||²), (8)

where τ is positive. If f ∈ Γ₀, then Prox_{τf} is unique [44]; if f(x) = ι_C(x), the indicator function of a nonempty closed convex set C, then Prox_{τf}(v) becomes the projection of v onto C, so (8) is a generalization of the projection operator; and if f is the ℓ_1 norm, then (8) becomes the well-known soft thresholding:

soft(v, τ) = sign(v) ⊙ max{|v| − τ, 0}, (9)

where ⊙ is the componentwise multiplication between two vectors of the same dimension and sign(·) is the sign function.
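The soft-thresholding operator (9) is one line in code; here is a minimal sketch (not from the paper) that also allows a vector of per-component thresholds, as used later with the weight vectors:

```python
import numpy as np

def soft(v, tau):
    """Componentwise soft thresholding: sign(v) * max(|v| - tau, 0).

    tau may be a scalar or a vector of per-component thresholds;
    a component with tau = +inf is mapped to exactly zero.
    """
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([3.0, -0.5, 1.2])
print(soft(v, 1.0))  # componentwise: 2, 0, 0.2
```

Note that an infinite threshold annihilates a component, which is exactly how the case x̄_i = 0 of the majorizer below behaves.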

2. Bound Alternative Direction Optimization

Consider the ℓ_p regularized unconstrained model

min_x F(x) := f(x) + λ||x||_p^p, (10)

where p ∈ (0, 1) and f : R^n → R is a smooth function with L-Lipschitz-continuous gradient that is bounded below. It is unsuitable to directly apply existing proximal splitting algorithms (such as IST, FISTA, TwIST, and SpaRSA, discussed in the Introduction) to solve (10), since, for p ∈ (0, 1), the nonconvex nature of ||x||_p^p blocks the use of the proximity operator (see (8)), which is well defined only for functions in Γ₀. To overcome this problem, a bound optimization approach is considered in this paper.

2.1. Bound Optimization

2.1.1. Bounding F(x) via the Majorizer Proposed in [7]. Since ||x||_p^p (p ∈ (0, 1)) ∉ Γ₀, it is reasonable to bound F(x) through

G(x, x̄) = f(x) + Φ(x, x̄), where Φ(x, x̄) = λ Σ_i φ(x_i, x̄_i), (11)

with

φ(x, x̄) = Q(x̄)|x| + (1 − p)|x̄|^p, if x̄ ≠ 0;
           +∞,                      if x̄ = 0, x ≠ 0;
           0,                       if x̄ = 0, x = 0, (12)

where Q(x̄) = p|x̄|^{p−1}. It is easy to see that F(x) ≤ G(x, x̄) with equality for x = x̄, since λ||x||_p^p ≤ Φ(x, x̄) with equality for x = x̄. One benefit of this bound is the following lemma.
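The scalar majorization underlying the bound, |x|^p ≤ p|x̄|^{p−1}|x| + (1 − p)|x̄|^p for x̄ ≠ 0 with equality at x = x̄, follows from the concavity of t ↦ t^p on t ≥ 0 and can be checked numerically; the values of p and x̄ below are arbitrary illustrations, not from the paper:

```python
import numpy as np

# Check |x|**p <= p*|xbar|**(p-1)*|x| + (1-p)*|xbar|**p over a grid,
# with equality at x = xbar (up to floating-point rounding).
p, xbar = 0.5, 2.0
xs = np.linspace(-5, 5, 1001)
phi = p * abs(xbar) ** (p - 1) * np.abs(xs) + (1 - p) * abs(xbar) ** p
gap = phi - np.abs(xs) ** p          # should be >= 0 everywhere
i = int(np.argmin(np.abs(xs - xbar)))  # grid point closest to xbar
print(gap.min(), gap[i])             # nonnegative min; ~0 at x = xbar
```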

Lemma 1. Given x̄, Φ(x, x̄) belongs to Γ₀ with respect to x.

Proof. Given x̄_i for i = 1, …, n: if x̄_i ≠ 0, then φ(x_i, x̄_i) is an affine function of |x_i| and thus φ(x_i, x̄_i) ∈ Γ₀, since |x_i| ∈ Γ₀; if x̄_i = 0, then φ(x_i, x̄_i) equals 0 if x_i = 0 and +∞ otherwise, so again φ(x_i, x̄_i) ∈ Γ₀. Therefore, Φ(x, x̄) = λ Σ_i φ(x_i, x̄_i) ∈ Γ₀. □

Therefore, the proximity operator of Φ(x, x̄) for given x̄ is obtained as

Prox_{Φ(·,x̄)}(v) = arg min_x (Φ(x, x̄) + (1/2)||x − v||²) = soft(v, ω(x̄)), (13)

where ω(x̄) = [ω_1(x̄_1), …, ω_n(x̄_n)]^T with

ω_i(x̄_i) = λQ(x̄_i), if x̄_i ≠ 0;
            +∞,      if x̄_i = 0. (14)

Notice that soft(x, +∞) = 0 for any x.

From the above, a closed-form solution of the proximity operator of Φ(x, x̄) is available after the bound operation. In many of the iterative minimization algorithms discussed in the Introduction, the proximity operator of the regularization term is the key computation. For Φ(x, x̄), a natural choice is to set x̄ to the previous estimate x^k and obtain the current estimate x^{k+1} by computing the proximity operator of Φ(x, x^k); that is,

x^{k+1} = Prox_{Φ(·,x^k)}(v^k) = soft(v^k, ω(x^k)), (15)

where v^k is a temporary variable that differs from algorithm to algorithm. According to (13) with (14), if a component of x^k becomes zero, then the corresponding component of x^{k+1} is set to zero and will be stuck at zero forever, which may prevent convergence to a local minimizer, let alone a global one. To overcome this shortcoming, a new bound method is proposed below.

2.1.2. Proposed Bound Method. The proposed method bounds F(x) through

G_ε(x, x̄) = f(x) + Φ_ε(x, x̄), (16)

where Φ_ε(x, x̄) = λ Σ_i φ_ε(x_i, x̄_i) with

φ_ε(x, x̄) = Q_ε(x̄)|x| + (1 − p)|x̄|^p, (17)

where Q_ε(x̄) = p(|x̄| + ε)^{p−1} and ε is a small positive parameter. It is clear that F(x) ≤ G_ε(x, x̄) with equality for x = x̄ = 0 (where 0 is the vector of all zeros) or for x = x̄ with ε = 0. Φ_ε(x, x̄) has the following property.
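A small sketch of the ε-smoothed weights λQ_ε(·) that replace (14) (the parameter values below are illustrative, not from the paper): because ε > 0, components currently at zero receive a large but finite threshold, so they are not frozen at zero forever.

```python
import numpy as np

def omega_eps(xbar, lam, p, eps):
    """Weights lam * Q_eps(xbar_i) = lam * p * (|xbar_i| + eps)**(p - 1).

    Unlike the eps = 0 majorizer, components with xbar_i = 0 get a
    large but *finite* weight, so they may re-enter the support.
    """
    return lam * p * (np.abs(xbar) + eps) ** (p - 1)

xbar = np.array([0.0, 0.1, 2.0])
w = omega_eps(xbar, lam=1.0, p=0.5, eps=1e-3)
print(w)  # finite everywhere; largest weight at the zero component
```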

Lemma 2. Φ_ε(x, x̄) belongs to Γ₀ with respect to x.

Proof. For any x̄_i, i = 1, …, n, Q_ε(x̄_i) > 0; thus φ_ε(x_i, x̄_i) is an affine function of |x_i| and hence φ_ε(x_i, x̄_i) ∈ Γ₀. Therefore, Φ_ε(x, x̄) = λ Σ_i φ_ε(x_i, x̄_i) ∈ Γ₀. □

(1) Set k = 0, and choose an arbitrary x⁰, ε⁰ > 0, and γ > 1.
(2) repeat
(3) x^{k+1} = arg min_x G_{ε^k}(x, x^k)
(4) ε^{k+1} = ε^k/γ
(5) k ← k + 1
(6) until some stopping criterion is satisfied.

Algorithm 1: IBO.

(1) Set k = 0, and choose β > 0, γ > 1, ε⁰ > 0, a⁰, and an arbitrary x⁰.
(2) repeat
(3) (v^{k+1}, x^{k+1}) ∈ arg min_{v,x} L(v, x; x^k, a^k, β, ε^k)
(4) a^{k+1} = a^k − β(v^{k+1} − x^{k+1})
(5) ε^{k+1} = ε^k/γ
(6) k ← k + 1
(7) until some stopping criterion is satisfied.

Algorithm 2: BALM.

(1) Set k = 0, and choose β > 0, γ > 1, ε⁰ > 0, d⁰, and an arbitrary x⁰.
(2) repeat
(3) (v^{k+1}, x^{k+1}) ∈ arg min_{v,x} f(v) + Φ_{ε^k}(x, x^k) + (β/2)||v − x − d^k||²
(4) d^{k+1} = d^k − (v^{k+1} − x^{k+1})
(5) ε^{k+1} = ε^k/γ
(6) k ← k + 1
(7) until some stopping criterion is satisfied.

Algorithm 3: Variant of BALM.

(1) Set k = 0, and choose β > 0, γ > 1, ε⁰ > 0, d⁰, and an arbitrary x⁰.
(2) repeat
(3) v^{k+1} = arg min_v f(v) + (β/2)||v − x^k − d^k||²
(4) x^{k+1} = arg min_x Φ_{ε^k}(x, x^k) + (β/2)||v^{k+1} − x − d^k||²
(5) d^{k+1} = d^k − (v^{k+1} − x^{k+1})
(6) ε^{k+1} = ε^k/γ
(7) k ← k + 1
(8) until some stopping criterion is satisfied.

Algorithm 4: BADM.

(1) Set k = 0, and choose β > 0, γ > 1, ε⁰ > 0, d⁰, and an arbitrary s⁰.
(2) repeat
(3) v^{k+1} = [(AW)^T AW + βI]^{-1}[(AW)^T y + β(s^k + d^k)]
(4) s^{k+1} = soft(v^{k+1} − d^k, ω_{ε^k}(s^k)/β)
(5) d^{k+1} = d^k − (v^{k+1} − s^{k+1})
(6) ε^{k+1} = ε^k/γ
(7) k ← k + 1
(8) until some stopping criterion is satisfied.

Algorithm 5: BADM for (4).
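A minimal numerical sketch of Algorithm 5, under simplifying assumptions not made in the paper: W = I (so AW = A) and A a one-dimensional periodic convolution, which lets Step 3 be computed by pointwise division in the Fourier domain. The blur kernel, test signal, and parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
h = np.zeros(n); h[:5] = 1.0 / 5.0           # toy 5-tap uniform blur kernel
H = np.fft.fft(h)                             # circulant A <-> diag(H) in Fourier
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0, 5, 8)   # sparse signal
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + 0.01 * rng.normal(size=n)

lam, p, beta, gamma, eps = 0.01, 0.5, 0.1, 1.5, 1e-2  # illustrative parameters
s = np.zeros(n); d = np.zeros(n)
Aty = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(y)))
for k in range(100):
    # Step 3: v = (A^T A + beta I)^{-1} (A^T y + beta (s + d)), solved by FFT
    rhs = Aty + beta * (s + d)
    v = np.real(np.fft.ifft(np.fft.fft(rhs) / (np.abs(H) ** 2 + beta)))
    # Step 4: soft thresholding with the eps-smoothed weights of (17)
    w = lam * p * (np.abs(s) + eps) ** (p - 1)
    u = v - d
    s = np.sign(u) * np.maximum(np.abs(u) - w / beta, 0.0)
    # Steps 5-6: dual update and eps decay
    d = d - (v - s)
    eps = eps / gamma

mse = np.mean((s - x_true) ** 2)
print(mse)  # should be far below the energy of x_true itself
```

This is a sketch of the iteration structure, not a reproduction of the paper's 2-D experiments; in particular, β, λ, and γ were not tuned.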

Figure 1: Test images: (a) Cameraman; (b) Mandril; (c) Lena.

Figure 2: Deblurred Cameraman images by BADM (deblurred from (a), (b), and (c)).

Therefore, a closed-form solution of the proximity operator of Φ_ε(x, x̄) can be obtained:

Prox_{Φ_ε(·,x̄)}(v) = arg min_x (Φ_ε(x, x̄) + (1/2)||x − v||²) = soft(v, ω_ε(x̄)), (18)

where ω_ε(x̄) = [λQ_ε(x̄_1), …, λQ_ε(x̄_n)]^T.

2.1.3. Iterative Bound Optimization (IBO). It is now possible to propose the IBO framework of Algorithm 1.

A key observation is that the sequence {ε^k} generated by the IBO approaches zero as k → ∞, so that (10) can be solved by the IBO, which iteratively solves a sequence of problems

min_x F_{ε^k}(x) := f(x) + λ Σ_i p(|x_i^k| + ε^k)^{p−1}|x_i|, (19)

Figure 3: Deblurred Mandril images by BADM (deblurred from (a), (b), and (c)).

where x_i^k is the ith component of x^k. Any accumulation point of the sequence {x^k} generated by the IBO is a first-order stationary point of (19), as guaranteed by the following theorem.

Theorem 3. Let the sequence {x^k} be generated by the IBO above, and suppose that x* is an accumulation point of {x^k}; then x* is a first-order stationary point of (19).

Proof. IBO is actually a specific case of the reweighted ℓ_α (α = 1 or 2) method proposed in [45], so this theorem is a specific case of Theorem 3.1 in [45]. □

2.2. Bound Alternative Direction Method. Consider the unconstrained optimization problem corresponding to Step 3 of the IBO,

min_x G_ε(x, x̄) := f(x) + Φ_ε(x, x̄). (20)

Using the technique of variable splitting, (20) becomes a constrained optimization problem:

min_{v,x} f(v) + Φ_ε(x, x̄) s.t. v = x. (21)

The rationale behind variable splitting is that it may be easier to solve (21) than to solve (20). The augmented Lagrangian function of (21) is

L(v, x; x̄, a, β, ε) = f(v) + Φ_ε(x, x̄) + a^T(v − x) + (β/2)||v − x||², (22)

where a is the vector of Lagrange multipliers and β is the penalty parameter. According to the augmented Lagrangian method (ALM) [43], (21) can be solved by repeating the following iterative process until some stopping criterion is satisfied: minimize (22) with respect to v and x, with a fixed and x̄ set to the previous estimate of x, and then update a, yielding a variant of ALM called bound ALM (BALM) (Algorithm 2).

Notice that the terms added to G_ε(x, x̄) in the definition of L(v, x; x̄, a, β, ε) (see (22)) can be reformulated as a single quadratic term, leading to a variant of BALM (Algorithm 3).

In Algorithm 3, d^k corresponds to a^k/β, the two parameters used in BALM. It is usually difficult to obtain v^{k+1} and x^{k+1} simultaneously in Step 3. To overcome this difficulty, the technique of nonlinear block Gauss-Seidel (NLBGS) [46] is used: the objective function of Step 3 is minimized alternately with respect to v and x, keeping the other variable fixed,

Figure 4: Deblurred Lena images by BADM (deblurred from (a), (b), and (c)).

Figure 5: Evolutions in experiment (I) (Cameraman): (a) objective function and (b) MSE, for FISTA, SALSA, SBM, and BADM.

Figure 6: Evolutions in experiment (II) (Cameraman): (a) objective function and (b) MSE, for FISTA, SALSA, SBM, and BADM.

Figure 7: Evolutions in experiment (III) (Cameraman): (a) objective function and (b) MSE, for FISTA, SALSA, SBM, and BADM.

yielding the bound alternative direction method (BADM) (Algorithm 4).

In Algorithm 4, Step 3 is equivalent to Prox_{f/β}(x^k + d^k), while Step 4 (see (13) and (9)) is equivalent to

Prox_{Φ_{ε^k}(·,x^k)/β}(v^{k+1} − d^k) = soft(v^{k+1} − d^k, ω_{ε^k}(x^k)/β). (23)

Moreover, since the objective function of (10) is nonconvex for p ∈ (0, 1), BADM cannot be guaranteed to converge to a global optimum. Nevertheless, as stated in Theorem 3, the proposed algorithm obtains a stationary point, which in practice is always a good-quality deblurred image.

3. Image Deblurring Using the Synthesis ℓ_p Formulation and BADM

In this section, the synthesis ℓ_p formulation (see (4)) and the BADM are applied to image deblurring. Note that using the analysis ℓ_p formulation can be naturally extended

Figure 8: Performance of BADM over p on deblurring the corrupted Cameraman images: (a) time; (b) MSE; and (c) ISNR (for UNI, GAU, and PSF).

Table 1: Results of deblurring the Cameraman images.

            Time (seconds)        Iterations       MSE                       ISNR
Algorithm   UNI    GAU    PSF     UNI  GAU  PSF    UNI     GAU     PSF       UNI   GAU   PSF
FISTA       16.52  11.01  5.02    150  98   47     102.71  217.85  112.32    7.19  2.67  4.39
SALSA       0.80   2.17   1.72    4    8    6      90.40   214.41  109.25    7.80  2.75  4.52
SBM         0.95   2.53   1.50    4    8    6      90.40   214.36  109.25    7.80  2.75  4.52
BADM        1.44   3.28   1.55    4    9    6      85.42   212.29  105.18    8.05  2.78  4.66

Table 2: Results of deblurring the Mandril images.

            Time (seconds)        Iterations       MSE                       ISNR
Algorithm   UNI    GAU    PSF     UNI  GAU  PSF    UNI     GAU     PSF       UNI   GAU   PSF
FISTA       56.82  49.11  17.22   110  94   33     119.23  288.45  142.12    5.50  1.42  2.19
SALSA       3.41   5.41   3.63    4    8    5      119.12  255.30  82.98     5.59  1.95  4.51
SBM         3.06   4.51   2.81    4    8    5      119.63  255.30  82.98     5.59  1.95  4.51
BADM        4.26   9.34   3.60    4    8    5      117.24  246.12  81.15     5.69  2.11  4.60

Table 3: Results of deblurring the Lena images.

            Time (seconds)        Iterations       MSE                       ISNR
Algorithm   UNI    GAU    PSF     UNI  GAU  PSF    UNI     GAU     PSF       UNI   GAU   PSF
FISTA       53.13  40.8   21.41   98   81   42     43.89   82.84   53.90     6.01  2.82  2.17
SALSA       2.91   4.95   4.09    4    6    6      41.15   72.42   34.32     6.15  3.34  4.14
SBM         2.52   4.07   3.42    4    6    6      41.15   72.42   34.32     6.15  3.34  4.14
BADM        4.22   5.21   6.66    4    5    6      38.06   69.10   34.10     6.47  3.59  4.18

and will be left as future work. Therefore, here f(s) = (1/2)||AWs − y||², and Φ_{ε^k}(s, s^k) is the proposed majorizer of λ||s||_p^p at the kth iteration; inserting these into the BADM yields Algorithm 5, where Step 4 is derived from (18). Assume that A represents a (periodic) convolution and that W is a normalized Parseval frame (i.e., WW^T = I, where I is the identity matrix, though possibly W^T W ≠ I). According to the Sherman-Morrison-Woodbury matrix inversion formula,

[(AW)^T AW + βI]^{-1} = (1/β)[I − W^T A^T (AA^T + βI)^{-1} AW]. (24)

Moreover, under the periodic boundary condition, AA^T is block circulant, so (AA^T + βI)^{-1} can be diagonalized by the two-dimensional discrete Fourier transform (DFT). Let F = A^T (AA^T + βI)^{-1} A; then F is equivalent to a filter in the Fourier domain, and the cost of products by F using FFT algorithms is O(n log n) [10]. Thus,

v^{k+1} = (1/β)(I − W^T F W) u^k, (25)

where u^k = (AW)^T y + β(s^k + d^k). As in the computational complexity analysis in [10], the cost of computing v^{k+1} is O(n log n). Moreover, computing s^{k+1} is a soft thresholding with O(n) cost, and computing d^{k+1} also has O(n) cost. Therefore, each iteration of the BADM for (4) has O(n log n) cost.
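The claim that products by F reduce to an O(n log n) Fourier-domain filter can be illustrated in one dimension (the kernel and β below are arbitrary illustrations): for circulant A with kernel h, F = A^T(AA^T + βI)^{-1}A acts as the pointwise filter |H|²/(|H|² + β), where H is the DFT of h.

```python
import numpy as np

n, beta = 64, 0.5
h = np.zeros(n); h[:3] = [0.5, 0.3, 0.2]   # illustrative blur kernel
H = np.fft.fft(h)
u = np.random.default_rng(1).normal(size=n)

# Fast path: apply F as a pointwise Fourier filter, O(n log n)
Fu_fft = np.real(np.fft.ifft((np.abs(H) ** 2 / (np.abs(H) ** 2 + beta))
                             * np.fft.fft(u)))

# Slow dense check: build the circulant A explicitly and form F
A = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])
F = A.T @ np.linalg.solve(A @ A.T + beta * np.eye(n), A)
print(np.allclose(F @ u, Fu_fft))  # -> True
```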

4. Experiments

In this section, the proposed BADM for (4) is compared with the state-of-the-art algorithms FISTA [14], SALSA [11], and SBM [22] for (2) (from now on, BADM refers to the algorithm for (4), while FISTA, SALSA, and SBM refer to algorithms for (2), unless otherwise specified). Consider a low-frequency image (Cameraman), a high-frequency image (Mandril), and an image with both low and high frequencies (Lena) (see Figure 1), of size 512 × 512 pixels, corrupted by the following three benchmark cases [5, 11]: (I) uniform blur of size 9 × 9 and noise variance σ² = 0.56² (termed UNI); (II) Gaussian blur kernel with σ² = 8 (termed GAU); (III) point spread function h_{ij} = 1/(1 + i² + j²) for i, j = −7, …, 7, with σ² = 8 (termed PSF). All the experiments were performed using MATLAB on a 64-bit Windows 7 PC with an Intel Core i7 3.07 GHz processor and 6.0 GB of RAM. To measure the performance of the different algorithms, four metrics are employed (where x is the original image and y and x^k are the observed image and the estimated image at the kth iteration, resp.): (a) consumed CPU time (Time); (b) number of iterations (Iterations); (c) mean square error (MSE = ||x − x^k||²/n); and (d) improvement in SNR (ISNR = 10 log₁₀(||x − y||²/||x − x^k||²)). The stopping criterion is ||x^{k+1} − x^k||/||x^{k+1}|| below a given tolerance, with p = 0.9; λ, as well as the other necessary parameters of each algorithm, is hand tuned in each experiment for the best ISNR, namely, λ = 0.0075 for experiment (I), 0.09 for (II), and 0.25 for (III), with β = λ/10 in (I), (II), and (III).
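As a sketch, the MSE and ISNR metrics defined above can be written directly (the toy arrays below are illustrative, not experimental data):

```python
import numpy as np

def mse(x, xk):
    """Mean square error ||x - xk||^2 / n for an n-pixel image."""
    return np.sum((x - xk) ** 2) / x.size

def isnr(x, y, xk):
    """Improvement in SNR (dB) of estimate xk over observation y."""
    return 10 * np.log10(np.sum((x - y) ** 2) / np.sum((x - xk) ** 2))

x = np.array([1.0, 2.0, 3.0])
y = x + 0.5    # toy "observed" image
xk = x + 0.05  # toy estimate, 10x closer to x
print(mse(x, xk), isnr(x, y, xk))  # residual energy reduced ~100x, so ~20 dB
```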

The results of the four metrics for the Cameraman, Mandril, and Lena images are listed in Tables 1, 2, and 3, respectively, and the deblurred images by BADM are shown in Figures 2, 3, and 4, respectively. The evolutions of the objective function and the MSE for the different algorithms in experiments (I), (II), and (III) (to avoid redundancy, only for the Cameraman images) are shown in Figures 5, 6, and 7, respectively. Moreover, the results of time, MSE, and ISNR over p are shown in Figure 8. From these results, it is clear that BADM outperforms FISTA, SALSA, and SBM in terms of MSE and ISNR.

5. Conclusions

This paper has proposed a new bound alternative direction method (BADM) for ℓ_p (p ∈ (0, 1)) minimization

problems in image deblurring. To solve the unconstrained ℓ_p optimization problem, the idea of BADM is to first bound the ℓ_p regularizer to obtain a bound unconstrained problem, which is then reformulated into a constrained one by variable splitting. The resulting constrained problem is further addressed by an augmented Lagrangian method, more specifically, the alternating direction method of multipliers (ADMM). Therefore, the BADM is an extension of the ADMM to ℓ_p (p ∈ (0, 1)) minimization problems. Experiments on a set of image deblurring problems have shown that the BADM for the synthesis ℓ_p formulation is favorably competitive with the state-of-the-art algorithms for the synthesis ℓ_1 formulation.

In future work, the BADM will be applied to the analysis ℓ_p formulation and to other applications such as inpainting and magnetic resonance imaging.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the China Scholarship Council (CSC: 2010611017). Xiangrong Zeng would like to thank the anonymous reviewers who have helped to improve the quality of this paper.

References

[1] C. Chaux, P. L. Combettes, J.-C. Pesquet, and V. R. Wajs, "A variational formulation for frame-based inverse problems," Inverse Problems, vol. 23, article 1495, no. 4, 2007.

[2] I. Daubechies, M. Defrise, and C. de Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413-1457, 2004.

[3] M. Elad, B. Matalon, and M. Zibulevsky, "Image denoising with shrinkage and redundant representations," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1924-1931, June 2006.

[4] M. A. T. Figueiredo and R. D. Nowak, "A bound optimization approach to wavelet-based image deconvolution," in Proceedings of the IEEE International Conference on Image Processing (ICIP 05), vol. 2, pp. 782-785, September 2005.

[5] M. A. T. Figueiredo and R. D. Nowak, "An EM algorithm for wavelet-based image restoration," IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 906-916, 2003.

[6] J. M. Bioucas-Dias, "Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 937-951, 2006.

[7] M. A. T. Figueiredo, J. M. Bioucas-Dias, and R. D. Nowak, "Majorization-minimization algorithms for wavelet-based image restoration," IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2980-2991, 2007.

[8] T. Chan, S. Esedoglu, F. Park, and A. Yip, "Recent developments in total variation image restoration," in Mathematical Models of Computer Vision, vol. 17, 2005.

[9] J.-F. Cai, B. Dong, S. Osher, and Z. Shen, "Image restoration: total variation, wavelet frames, and beyond," Journal of the American Mathematical Society, vol. 25, no. 4, pp. 1033-1089, 2012.

[10] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681-695, 2011.

[11] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, "Fast image recovery using variable splitting and constrained optimization," IEEE Transactions on Image Processing, vol. 19, no. 9, pp. 2345-2356, 2010.

[12] M. Elad, P. Milanfar, and R. Rubinstein, "Analysis versus synthesis in signal priors," Inverse Problems, vol. 23, no. 3, pp. 947-968, 2007.

[13] J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Transactions on Image Processing, vol. 16, no. 12, pp. 2992-3004, 2007.

[14] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.

[15] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, "Sparse reconstruction by separable approximation," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479-2493, 2009.

[16] Y. Nesterov, Introductory Lectures on Convex Optimization, 2004.

[17] Y. Nesterov, "A method of solving a convex programming problem with convergence rate O(1/k²)," Soviet Mathematics Doklady, vol. 27, no. 2, pp. 372-376, 1983.

[18] J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators," Mathematical Programming, vol. 55, no. 1-3, pp. 293-318,1992.

[19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, 2010.

[20] S. Setzer, "Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage," in Scale Space and Variational Methods in Computer Vision, vol. 5567 of Lecture Notes in Computer Science, pp. 464-476, Springer, Berlin, Germany, 2009.

[21] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143-168, 2008.

[22] T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323-343, 2009.

[23] S. Osher, Y. Mao, B. Dong, and W. Yin, "Fast linearized Bregman iteration for compressive sensing and sparse denoising," Communications in Mathematical Sciences, vol. 8, no. 1, pp. 93-111, 2010.

[24] A. Langer and M. Fornasier, "Analysis of the adaptive iterative bregman algorithm," preprint, 2010.

[25] H. Zhang, L. Cheng, and J. Li, "Reweighted minimization model for MR image reconstruction with split Bregman method," Science China: Information Sciences, vol. 55, no. 9, pp. 2109-2118, 2012.

[26] J. Cai, S. Osher, and Z. Shen, "Split bregman methods and frame based image restoration," Multiscale Modeling and Simulation, vol. 8, no. 2, pp. 337-369, 2009.

[27] S. Setzer, G. Steidl, and T. Teuber, "Deblurring Poissonian images by split Bregman techniques," Journal ofVisual Communication and Image Representation, vol. 21, no. 3, pp. 193-199, 2010.

[28] E. Candes, M. Wakin, and S. Boyd, "Enhancing sparsity by reweighted l1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 877-905, 2008.

[29] R. Chartrand, "Exact reconstruction of sparse signals via non-convex minimization," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 707-710, 2007.

[30] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3869-3872, Las Vegas, Nev, USA, April 2008.

[31] R. Saab, R. Chartrand, and O. Yilmaz, "Stable sparse approximations via nonconvex optimization," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 3885-3888, April 2008.

[32] Z. Xu, X. Chang, F. Xu, and H. Zhang, "L1/2 regularization: a thresholding representation theory and a fast solver," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 7, pp. 1013-1027, 2012.

[33] R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol. 24, no. 3, Article ID 035020, 14 pages, 2008.

[34] S. Foucart and M. Lai, "Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q ≤ 1," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 395-407.

[35] Q. Sun, "Recovery of sparsest signals via lq-minimization," Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 329-341, 2012.

[36] X. Chen and W. Zhou, "Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization," Tech. Rep., The Hong Kong Polytechnic University.

[37] D. Needell, "Noisy signal recovery via iterative reweighted l1-minimization," in Proceedings of the 43rd Asilomar Conference on Signals, Systems and Computers, pp. 113-117, Pacific Grove, Calif, USA, November 2009.

[38] I. Daubechies, R. DeVore, M. Fornasier, and C. S. Güntürk, "Iteratively reweighted least squares minimization for sparse recovery," Communications on Pure and Applied Mathematics, vol. 63, no. 1, pp. 1-38, 2010.

[39] D. Wipf and S. Nagarajan, "Iterative reweighted l1 and l2 methods for finding sparse solutions," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 317-329, 2010.

[40] M. A. Khajehnejad, W. Xu, A. S. Avestimehr, and B. Hassibi, "Improved sparse recovery thresholds with two-step reweighted l1 minimization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '10), pp. 1603-1607, Austin, Tex, USA, June 2010.

[41] R. Courant, "Variational methods for the solution of problems of equilibrium and vibrations," Bulletin of the American Mathematical Society, vol. 49, no. 1, pp. 1-23,1943.

[42] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248-272, 2008.

[43] J. Nocedal and S. Wright, Numerical Optimization, Springer, 1999.

[44] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, New York, NY, USA, 2011.

[45] Z. Lu, "Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming," Mathematical Programming, pp. 1-31, 2012.

[46] L. Grippo and M. Sciandrone, "On the convergence of the block nonlinear Gauss-Seidel method under convex constraints," Operations Research Letters, vol. 26, no. 3, pp. 127-136, 2000.
