Computational Visual Media DOI 10.1007/s41095-016-0036-6

Vol. 2, No. 1, March 2016, ???-???

Research Article

High-resolution images based on directional fusion of gradient

Liqiong Wu1, Yepeng Liu1, Brekhna1, Ning Liu1, and Caiming Zhang1 (✉)

© The Author(s) 2016. This article is published with open access at Springerlink.com

Abstract This paper proposes a novel method for image magnification that exploits the property that the intensity of an image varies very quickly along the direction of the gradient. It aims to maintain sharp edges and clear details. The proposed method first calculates the gradient of the low-resolution image by fitting a surface with quadratic polynomial precision. Then, bicubic interpolation is used to obtain initial gradients of the high-resolution (HR) image. The initial gradients are readjusted to find the constrained gradients of the HR image, according to spatial correlations between gradients within a local window. To generate an HR image with high precision, a linear surface weighted by the projection length in the gradient direction is constructed. Each pixel in the HR image is determined by this linear surface. Experimental results demonstrate that our method visually improves the quality of the magnified image; in particular, it avoids jagged edges and blurring during magnification.

Keywords high-resolution (HR); image magnification; directional fusion; gradient direction

1 Introduction

The aim of image magnification is to estimate the unknown pixel values of a high-resolution (HR) version of an image from groups of pixels in a corresponding low-resolution (LR) image [1]. As a basic operation in image processing, image magnification has great significance for applications in many fields, such as computer vision, computer animation, and medical imaging [2]. With the

1 School of Computer Science and Technology, Shandong University, Jinan 250101, China. E-mail: L. Wu, wuliqiong.june@gmail.com; C. Zhang, czhang@sdu.edu.cn (✉).

Manuscript received: 2015-11-30; accepted: 2015-12-09

rapid development of visualization and virtual reality, image magnification has been widely applied in diverse settings, such as high-definition television, digital media technology, and image processing software. However, image magnification methods face great challenges because of the increasing demand for robust technology across these applications. In recent years, although many researchers have proposed a variety of methods for image magnification, there is not yet a unified method suitable for all image types. Given the differing characteristics of different types of images, it is still hard to achieve low computational time while maintaining edges and detailed texture during magnification. Based on this analysis, this paper focuses on generating an HR image that maintains the edge sharpness and structural details of a single LR image by means of the directional fusion of image gradients.

1.1 Traditional methods

Traditional methods, including nearest neighbor, bilinear [3], bicubic [4, 5], and Lanczos resampling [6], are widely applied in commercial software for image processing. The main advantages of such conventional methods are that they are easy to understand, simple to implement, and fast to compute. However, these methods have limitations: applying a single unified mathematical model everywhere causes loss of high-frequency information at edges. Thus, conventional methods are likely to introduce jagged edges and blurred details at significant transitions in an image, such as edges and texture details.
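For concreteness, bilinear interpolation — the representative linear kernel — can be sketched in a few lines of NumPy. This toy version places the known samples on even coordinates, which gives a (2m − 1) × (2n − 1) grid rather than the 2m × 2n grid used later in the paper:

```python
import numpy as np

def bilinear_x2(lr):
    """Naive 2x bilinear upsampling: known pixels land on even
    coordinates, the remaining pixels are averages of their neighbors."""
    m, n = lr.shape
    hr = np.zeros((2 * m - 1, 2 * n - 1))
    hr[0::2, 0::2] = lr                              # copy known samples
    hr[0::2, 1::2] = (lr[:, :-1] + lr[:, 1:]) / 2    # horizontal midpoints
    hr[1::2, 0::2] = (lr[:-1, :] + lr[1:, :]) / 2    # vertical midpoints
    hr[1::2, 1::2] = (lr[:-1, :-1] + lr[:-1, 1:] +
                      lr[1:, :-1] + lr[1:, 1:]) / 4  # cell centers
    return hr

lr = np.zeros((4, 4)); lr[:, 2:] = 255.0             # sharp vertical edge
hr = bilinear_x2(lr)
print(hr[0])   # [0, 0, 0, 127.5, 255, 255, 255]: the edge is smeared
```

The half-intensity column at the former edge location is exactly the blurring at significant transitions described above.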

1.2 Advanced methods

Studies have shown that human eyes are more sensitive to the edges of an image, which transmit most of its information, so images with good quality edges help to clearly describe the boundaries and outlines of objects. Edges that contain important information are thus of great significance in image magnification. Various edge-directed methods have been proposed in recent years, most of which take advantage of edge information to overcome the shortcomings of conventional methods, e.g., Refs. [7–13].

The edge-guided interpolation method put forward by Li and Orchard [10] is based on image covariance: it exploits local covariance coefficients estimated from the pixel values of the LR image to calculate the covariance coefficients of the HR image, utilizing the geometric duality between LR and HR images, and these covariance coefficients are then used to perform interpolation. Zhang and Wu [12] present a nonlinear interpolation method that estimates each missing pixel in two mutually orthogonal directions and fuses the two estimates with a minimum mean square error technique.

Zhang et al. [8] propose a method based on a combination of quadratic polynomials to construct a fitted surface for a given image by reversing the sampling process, with the edges of the image acting as a constraint that gives the fitted surface better approximation accuracy. Fan et al. [14] present a robust and efficient high-resolution detail-preserving algorithm based on a least-squares formulation. A gradient-guided image interpolation method is presented in Ref. [9], which assumes that the variation in pixel values is constant along the edge. The method is simple to implement and has good edge retention, but it leads to a wide edge transition zone because of the diffusion of the HR image gradients, so it is not suitable for magnifying images with complicated textures and detail.

Corresponding patches between low- and high-resolution images from a database can be used with machine learning based techniques or sampling methods to achieve interpolation [15-20].

Traditional methods often introduce artifacts such as jagged edges and blurred details during magnification. Edge-based methods, in turn, tend to generate artifacts at small-scale edge structures and in complicated texture details. Learning-based techniques are complex and time-consuming, and their outcome depends on the training data. Because of these issues, this paper proposes a novel

method to produce an HR image based on the directional fusion of gradients.

2 Related work

In this study, we use a degradation model that assumes the LR image is directly down-sampled from the HR image, rather than produced with Gaussian smoothing. Since the proposed method is partly based on CSF [8] and GGI [9], this section briefly introduces both methods.
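Under this assumption (2× zoom, 0-based indexing), the degradation model is a single slicing operation:

```python
import numpy as np

# Degradation model assumed in this paper: the LR image is a direct
# down-sampling of the HR image (no Gaussian pre-filtering), so every
# LR pixel coincides exactly with one HR pixel.
hr = np.arange(64, dtype=float).reshape(8, 8)
lr = hr[0::2, 0::2]                  # keep every second row and column
print(lr.shape, lr[1, 1] == hr[2, 2])   # (4, 4) True
```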

2.1 Quadratic surface fitting constrained by edges (CSF)

In CSF, image data is supposed to be sampled from an original scene that can be approximated by piecewise polynomials [8]. The fitted surface is constructed by reversing the image sampling process, using edge information as a constraint; this makes the surface a good approximation to the original scene, with quadratic polynomial precision. Assume that P_{i,j} is a pixel of an N × N image sampled from the original scene F(x, y) over a unit square, so that

P_{i,j} = ∫∫_{Ω_{i,j}} w(x, y) F(x, y) dx dy     (1)

where Ω_{i,j} is the unit square centered at (i, j) and w(x, y) is a weight function, set to 1.

In the region [i − 1.5, i + 1.5] × [j − 1.5, j + 1.5], let u = x − i, v = y − j (see Fig. 1). The fitted surface f_{i,j}(x, y) approximating F(x, y) is defined as

f_{i,j}(x, y) = a_1 u^2 + a_2 uv + a_3 v^2 + a_4 u + a_5 v + a_6     (2)

where a_1, a_2, a_3, a_4, a_5, and a_6 are coefficients to be determined. The unknown coefficients are determined by a least-squares method constrained by edge information [8]. Since a good quality surface helps to produce high precision interpolation, we

Fig. 1 Constructing the surface.


will later make use of the constructed surface to interpolate gradients.
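As a sketch of this construction — under the simplifying assumption that the edge constraints of Ref. [8] are dropped — the quadratic surface can be fitted to a 3 × 3 neighborhood by ordinary least squares; the linear coefficients a_4 and a_5 then give the gradient at the patch center:

```python
import numpy as np

def fit_quadratic(patch3x3):
    """Least-squares fit of f(u,v) = a1*u^2 + a2*u*v + a3*v^2
    + a4*u + a5*v + a6 to a 3x3 neighborhood, u, v in {-1, 0, 1}.
    Plain fit only; Ref. [8] additionally constrains it by edges."""
    u, v = np.meshgrid([-1, 0, 1], [-1, 0, 1], indexing="ij")
    u, v = u.ravel().astype(float), v.ravel().astype(float)
    A = np.stack([u * u, u * v, v * v, u, v, np.ones(9)], axis=1)
    coefs, *_ = np.linalg.lstsq(A, patch3x3.ravel(), rcond=None)
    return coefs                         # (a1, ..., a6)

# On a plane with unit slope in both u and v, the gradient read off at
# the patch center (u = v = 0) is (a4, a5), matching Eq. (6).
patch = np.array([[1., 2., 3.],
                  [2., 3., 4.],
                  [3., 4., 5.]])
coefs = fit_quadratic(patch)
gx, gy = coefs[3], coefs[4]              # both ~1.0 up to floating point
```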

2.2 Gradient-guided interpolation (GGI)

In order to eliminate jagged edges, a gradient-guided interpolation method is proposed in Ref. [9], based on the idea that the variation in pixel values is constant along the edge direction. GGI uses a Sobel kernel to calculate gradients of the LR image, adopts bicubic interpolation to determine the gradients of the HR image, and then applies gradient diffusion. Finally, the unknown HR pixels P_{i,j} to be interpolated are divided into three categories according to the known LR pixels P_{x,y} in the neighborhood N_{i,j}, and each is estimated as

P_{i,j} = Σ_{P_{x,y} ∈ N_{i,j}} w_{xy} P_{x,y}     (3)

That is, P_{i,j} is estimated by summing the neighborhood pixels weighted by w_{xy}, where a shorter distance carries greater weight. Let d_{xy} denote the distance between P_{x,y} and P_{i,j} projected along the gradient direction at P_{i,j}. Then

w_{xy} = S e^{−α d_{xy}}     (4)

where α = 0.2 controls the decrease of the exponential and S is a normalizing factor defined as

S = 1 / Σ_{P_{x,y} ∈ N_{i,j}} e^{−α d_{xy}}     (5)

Although the method of Ref. [9] provides good quality interpolation at edges by significantly decreasing jagged edges, it can cause loss of detail in non-edge regions in some cases. In particular, it is unsuitable for image areas containing complex details and abundant texture.
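A minimal sketch of this projection-based weighting follows. The distance of each neighbor is taken as the length of its offset projected onto the unit gradient direction, and the weights are normalized to sum to 1; the exact exponent used in Ref. [9] may differ slightly from this assumption.

```python
import numpy as np

def ggi_weights(offsets, grad, alpha=0.2):
    """GGI-style weights: shorter projected distance along the
    gradient direction at the pixel being estimated -> larger weight."""
    g = np.asarray(grad, dtype=float)
    g = g / np.linalg.norm(g)
    d = np.abs(np.asarray(offsets) @ g)     # projected distances d_xy
    w = np.exp(-alpha * d)
    return w / w.sum()                      # weights sum to 1

offsets = np.array([[-1., 1.], [1., 1.], [-1., -1.], [1., -1.]])
w = ggi_weights(offsets, grad=(1.0, 0.0))   # gradient along +x
print(w)           # all 0.25: every diagonal projects to the same length
w2 = ggi_weights(offsets, grad=(1.0, 1.0))  # diagonal gradient
# Neighbors lying along the edge (zero projection) now get more weight.
```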

3 High-resolution image based on directional fusion of gradient

In this section, a new magnification method is put forward based on fusion along the gradient direction, exploiting the property that pixel values change very quickly in the gradient direction. From the analysis above, maintaining the sharpness of edges and the clarity of detailed textures is the key task in image magnification, since most of the information in an image is transmitted by its edges and detail textures. Our method first finds approximate gradients of the LR image, then calculates those of the HR image. We estimate the gray values of the unknown pixels in the HR image using a linear approximation of the neighboring pixels. For simplicity of discussion, we mainly focus on enlargement by a factor of 2, producing an HR image of size 2m × 2n from an LR image of size m × n. The general information flow of the proposed method is shown in Fig. 2.

3.1 Calculating the gradients of the HR image

In order to compute the LR gradients with high accuracy, our method adopts Eq. (2) to compute the LR gradient for each P_{i,j}. The gradient vector of the LR image is defined as g = (g_x, g_y), where

g_x = 2a_1 u + a_2 v + a_4
g_y = a_2 u + 2a_3 v + a_5     (6)

Thus, evaluating at the pixel center u = v = 0, for each P_{i,j} we get the LR gradients g_x = a_4, g_y = a_5. The LR gradients are then used to calculate the initial HR gradients, denoted (G_X, G_Y), by bicubically interpolating the LR gradients.

3.2 Diffusing the gradients of the HR image

The GGI method [9] utilizes gradient information in order to maintain the sharpness of edges. However, the spatial distribution of gradients is not considered effectively during diffusion: the norm of the gradient takes a local maximum in the gradient direction [21]. Directly replacing the gradient at a central pixel by the mean over some region may cause the gradient direction to change in an inappropriate way in detail-rich portions, which may

Fig. 2 Flowchart of the method.


result in distortion of details.

Therefore, we take account of the spatial correlation between gradient directions to improve the diffusion of the gradients. Diffusion handles the gradient components in the horizontal (G_X) and vertical (G_Y) directions. A local window of size 5 × 5 with P_{i,j} as the central pixel (see Fig. 3) is used to adjust the gradient direction. Our method adjusts the gradient vector of the central pixel using the average value of those gradients whose direction falls within a certain range relative to that of the central pixel.

By considering the spatial correlations between gradient directions, our method can approximate HR gradients that not only maintain the sharpness of edges, but also better retain the structure of textures and details. Let β_{xy} denote the angle between the gradient directions at P_{i,j} and P_{x,y}, and let k denote the number of pixels in the window satisfying β_{xy} < α, with α = 45°. Then

G_X = (1/k) Σ_{β_{xy} < 45°} G_{X_{x,y}},  G_Y = (1/k) Σ_{β_{xy} < 45°} G_{Y_{x,y}}     (7)

After this diffusion of the initial HR gradients (G_X, G_Y), we obtain the adjusted HR gradients, which are used to calculate the gray values of the HR pixels.
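A minimal sketch of this directional averaging — gradients within the 5 × 5 window contribute only when their direction deviates from the central one by less than 45° — not the authors' exact implementation:

```python
import numpy as np

def diffuse_gradient(GX, GY, i, j, alpha_deg=45.0):
    """Average the gradients in the 5x5 window around (i, j) whose
    direction lies within alpha_deg of the central gradient."""
    gc = np.array([GX[i, j], GY[i, j]])
    total, k = np.zeros(2), 0
    for x in range(i - 2, i + 3):
        for y in range(j - 2, j + 3):
            g = np.array([GX[x, y], GY[x, y]])
            denom = np.linalg.norm(gc) * np.linalg.norm(g)
            if denom == 0:
                continue                      # skip flat pixels
            cosb = np.clip(np.dot(gc, g) / denom, -1.0, 1.0)
            if np.degrees(np.arccos(cosb)) < alpha_deg:
                total += g
                k += 1
    return total / k if k else gc

# A field whose gradients all point along +x: the slightly misaligned
# central gradient is pulled toward the dominant neighborhood direction.
GX = np.ones((5, 5)); GY = np.zeros((5, 5))
GY[2, 2] = 0.5
gx, gy = diffuse_gradient(GX, GY, 2, 2)   # -> (1.0, 0.02)
```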

3.3 Estimation of HR image

In this section, we give the strategy for calculating the unknown pixels of the HR image. In Section 2.2 we noted that the weighted averaging of the GGI method [9] has only the precision of a constant. In comparison with GGI, our method

Fig. 3 Diffusion of gradient. The blue dots P_{x,y} stand for neighboring HR pixels of P_{i,j}; a blue arrow represents the gradient direction at P_{x,y}, while the red arrow indicates the gradient direction at P_{i,j}. β_{xy} is the angle between the gradient directions at P_{i,j} and P_{x,y}. The dashed area defines the range of angles for which the gradient direction of P_{x,y} is positively correlated with that of P_{i,j}.

provides higher, linear polynomial precision by constructing a linear surface to approximate the intensity of the HR image, and it performs well in maintaining the details of the image. Depending on the known pixels in the neighborhood window centered on the unknown pixel (see Fig. 4(b)), the unknown pixels of the HR image may be divided into three categories:

(1) black: I(2n − 1, 2m − 1)_H;
(2) blue: I(2n, 2m)_H;
(3) pink: I(2n − 1, 2m)_H and I(2n, 2m − 1)_H,

where n = 1, …, N and m = 1, …, M. The estimation of the unknown pixels in the HR image is therefore achieved in three steps.
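In 0-based NumPy indexing, this pixel layout amounts to four interleaved sub-grids of the HR image; copying the LR pixels into place is one assignment, and the three categories of unknowns are the remaining sub-grids:

```python
import numpy as np

# I_H(2n-1, 2m-1) = I_L(n, m) (1-based) becomes hr[0::2, 0::2] = lr.
lr = np.arange(12, dtype=float).reshape(3, 4)
hr = np.full((6, 8), np.nan)
hr[0::2, 0::2] = lr                  # black: known immediately

blue = hr[1::2, 1::2]                # I(2n, 2m): diagonal neighbors known
pink_h = hr[0::2, 1::2]              # I(2n-1, 2m)
pink_v = hr[1::2, 0::2]              # I(2n, 2m-1)
print(np.isnan(blue).all())          # True: still to be estimated
```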

Step 1:

In this step, we assign the values of LR pixels to the corresponding HR pixels. For an LR image I_L of size N × M enlarged to give an HR image of size 2N × 2M, we have I(2n − 1, 2m − 1)_H = I(n, m)_L, where n = 1, …, N and m = 1, …, M. I(n, m)_L and I(2n − 1, 2m − 1)_H are the solid black dots shown in Fig. 4(a) and Fig. 4(b), respectively.

Step 2:

In this step, we use the four neighboring black pixels to calculate each central pixel P_{i,j} (the blue dots in Fig. 5(a)) satisfying P_{i,j} ∈ I(2n, 2m)_H. In order to obtain P_{i,j} precisely, we construct a linear surface to approximate the image data via directional fusion of gradients. Within the neighborhood window N_{i,j} centered on P_{i,j}, our method constructs a linear surface f_H using a linear polynomial:

f_H(x, y) = a·x + b·y + c     (8)

Fig. 4 Degradation model. (a) Pixels of the LR image. (b) Pixels of the HR image. The solid black dots in (a) represent pixels of the LR image. The dots in (b) are pixels of the HR image: the black dots are the known HR pixels I(2n − 1, 2m − 1)_H, directly determined by the corresponding LR pixels; the blue dots represent I(2n, 2m)_H; and the pink dots represent I(2n − 1, 2m)_H or I(2n, 2m − 1)_H.


Fig. 5 Three cases for constructing the linear surface f_H. (a) shows the linear surface constructed in Step 2; (b) and (c) show the linear surfaces constructed in Step 3. Black dots are known pixels of the HR image taken from the corresponding LR pixels, and blue dots are unknown HR pixels calculated in Step 2.

where a, b, and c are unknown coefficients to be found.

We determine the unknown coefficients a, b, and c in Eq. (8) by a least-squares method, weighted by the gradients and the values of the pixels in the neighborhood window:

G(a, b, c) = Σ_{P_{x,y} ∈ N_{i,j}} w_{xy} (a·x + b·y + c − P_{x,y})^2     (9)

where N_{i,j} represents the neighboring pixels P_{x,y} of the central pixel P_{i,j}, satisfying (x, y) ∈ {(−1, 1), (1, 1), (−1, −1), (1, −1)}. The weight w_{xy} is calculated as in Eq. (4) (see Fig. 6(a)). Minimizing Eq. (9) requires

∂G/∂a = 0     (10)
∂G/∂b = 0     (11)
∂G/∂c = 0     (12)

Solving this system and substituting the coefficients (a, b, c) into Eq. (8) gives the approximate pixel value; since the central pixel lies at (x, y) = (0, 0), P_{i,j} = c.
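The weighted least-squares solve of Step 2 can be sketched as follows; the helper name is ours, and the weights (here simply 1) would in practice come from Eq. (4):

```python
import numpy as np

def fit_linear_surface(offsets, values, weights):
    """Weighted least squares for Eq. (9): minimize
    sum_xy w_xy * (a*x + b*y + c - P_xy)^2.
    Scaling rows by sqrt(w) turns this into an ordinary lstsq problem."""
    x, y = offsets[:, 0], offsets[:, 1]
    A = np.stack([x, y, np.ones_like(x)], axis=1)
    sw = np.sqrt(weights)
    abc, *_ = np.linalg.lstsq(sw[:, None] * A, sw * values, rcond=None)
    return abc                       # (a, b, c); the estimate is P_ij = c

# Step 2 window: the four diagonal black neighbors of a blue pixel.
offsets = np.array([[-1., 1.], [1., 1.], [-1., -1.], [1., -1.]])
vals = np.array([10., 20., 10., 20.])       # a ramp in the x direction
abc = fit_linear_surface(offsets, vals, np.ones(4))
# Because the unknown pixel sits at the window origin (0, 0),
# its estimate is just the constant term c (~15.0 here).
```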

Fig. 6 Weighting. (a) shows the case solved in Step 2; (b) and (c) are the two situations determined in Step 3 using the results of Step 2. Black dots are known pixels of the HR image, and blue dots are unknown HR pixels. P_{x,y} stands for a neighboring pixel of P_{i,j}; the arrow indicates the gradient direction at the central pixel P_{i,j}.

Step 3:

In this step, we use the results of Step 1 and Step 2 to estimate the remaining unknown HR pixels (the pink dots in Fig. 4(b), i.e., P_{i,j} ∈ {I(2n − 1, 2m)_H, I(2n, 2m − 1)_H}). The gray value of the central pixel P_{i,j} is calculated using the same procedure as in Step 2: we use Eq. (8) to construct a linear surface (see Figs. 5(b) and 5(c)), constrained by Eq. (9), where now (x, y) ∈ {(−1, 0), (0, 1), (1, 0), (0, −1)}. The weight w_{xy} is again calculated from Eq. (4) (see Figs. 6(b) and 6(c)).

Finally, the pixels located on the image boundary are calculated by averaging the existing neighboring pixels, instead of by constructing a surface.

4 Results and discussion

In order to verify the effectiveness of the proposed method, we have carried out many experiments with different kinds of images, including natural images, medical images, and synthetic images. The results of our experiments demonstrate that the proposed method can obtain better quality image magnification, especially at edges and in detail-rich areas. To demonstrate the advantages of our proposed method, we compare magnification results with several methods, including bicubic interpolation (Bicubic) [4], cubic surface fitting with edges as constraints (CSF) [8], the new edge-directed interpolation method (NEDI) [10], and gradient-guided interpolation (GGI) [9]. We now analyze the experimental results in detail.

In the experiment, we carried out tests with different types of images by magnifying LR images of size 256 × 256 to obtain HR images of size 512 × 512. Figures 7 and 8 show the magnified images, with local windows containing edges and details extracted from the HR images marked. Comparing the corresponding regions of the boat image in Fig. 7, we can see that our method deals better with the edge portions of an image, while other methods introduce jagged edges or blurring artifacts near edges. It is also clear from Fig. 8 that the Bicubic [4] and CSF [8] methods tend to introduce blurring artifacts: see the moustache of the baboon. NEDI [10] produces zigzags that are particularly


Fig. 7 Results of magnifying the boat image: (a) ground truth; (b) Bicubic; (c) CSF; (d) NEDI; (e) GGI; (f) ours.

Fig. 8 Results of magnifying the baboon image: (a) ground truth; (b) Bicubic; (c) CSF; (d) NEDI; (e) GGI; (f) ours.


evident, while GGI [9] causes loss of detail in the area of the moustache. Our method leads to better visual quality than other methods.

We also conducted experiments with MRI images of a brain segmented into four classes by the MICO (multiplicative intrinsic component optimization) segmentation algorithm [22]. Although MICO provides high accuracy segmentation, rough edges remain due to limitations of the segmentation method. Figures 9(a)-9(f) show the Bicubic, CSF, NEDI, GGI, and our results from top to bottom. The magnification results shown in Fig. 9 illustrate that our method can deal well with a segmented image containing severe zigzags, effectively retaining sharp edges while avoiding jagged artifacts during magnification.

For synthetic images, Fig. 10, Fig. 11, and Fig. 12 show the map of gray values at edge portions after applying several methods mentioned above. It is clear that our method is able to maintain the sharpest edges with less blur: other methods produce fuzzy data around the edges which results in blurring artifacts.

In order to evaluate the quality of the magnification results, we use three objective measures based on explicit numerical criteria [23]: peak signal to noise ratio (PSNR), structural similarity (SSIM), and percentage edge error (PEE). PSNR measures the disparity between the magnified image and the ground truth image, and is defined as

PSNR = 10 log_{10}(255^2 / MSE)     (13)

where the mean square error (MSE) between the magnified image I and the ground truth image S, both of size m × n, is

MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i, j) − S(i, j)]^2     (14)
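A direct implementation of PSNR for 8-bit images (assuming the standard peak value of 255):

```python
import numpy as np

def psnr(magnified, ground_truth, peak=255.0):
    """PSNR from the MSE between two equal-sized images."""
    diff = magnified.astype(float) - ground_truth.astype(float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
img = ref.copy()
img[0, 0] = 16.0                 # a single wrong pixel: MSE = 16
print(round(psnr(img, ref), 2))  # 36.09 dB
```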

SSIM measures the similarity of the structural information between the magnified image and the ground truth image [24]. It is related to quality as perceived by the human visual system (HVS), and is given by

SSIM = [(2μ_s μ_I + C_1)(2σ_s σ_I + C_2)(σ_{sI} + C_3)] / [(μ_s^2 + μ_I^2 + C_1)(σ_s^2 + σ_I^2 + C_2)(σ_s σ_I + C_3)]     (15)

where μ_s and μ_I denote the mean values of the ground truth image and the magnified image respectively, σ_s^2 and σ_I^2 the corresponding variances, σ_{sI} the covariance of the two images, and C_1, C_2, C_3 are small constants that stabilize the division.
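As a sketch, the single-window (global) form of SSIM can be computed as below, using the common constants C_1 = (0.01 × 255)^2 and C_2 = (0.03 × 255)^2 and the standard choice C_3 = C_2/2, which collapses the three factors of Eq. (15) into two [24]; practical implementations evaluate this over local windows and average:

```python
import numpy as np

def ssim_global(s, i, C1=6.5025, C2=58.5225):
    """Single-window SSIM in the collapsed two-term form (C3 = C2/2)."""
    s, i = s.astype(float), i.astype(float)
    mu_s, mu_i = s.mean(), i.mean()
    cov = ((s - mu_s) * (i - mu_i)).mean()
    num = (2 * mu_s * mu_i + C1) * (2 * cov + C2)
    den = (mu_s**2 + mu_i**2 + C1) * (s.var() + i.var() + C2)
    return num / den

a = np.tile(np.arange(8.0), (8, 1))     # horizontal ramp image
print(ssim_global(a, a))                # identical images -> 1.0
print(ssim_global(a, a[:, ::-1]) < 1)   # reversed ramp is less similar
```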

For the images shown in Fig. 13, values of PSNR and SSIM are listed in Table 1 and Table 2, respectively. It is clear that our proposed method performs well in most cases, giving the highest values for PSNR and SSIM.

In addition, the percentage edge error (PEE) [25] was used to measure perceptual errors. PEE is well suited to assessing image magnification when the major artifact is blurring: it measures the closeness of details in the interpolated image to those in the ground truth image. In image interpolation, a positive value of PEE generally means that the magnified image is over-smoothed, with likely loss of details; thus, a method with smaller PEE better avoids blurring artifacts. PEE is defined by

PEE = (ES_S − ES_I) / ES_S × 100%     (16)

where ES_S denotes the edge strength of the ground truth image and ES_I that of the magnified image. The edge strength ES is defined as

ES = Σ_{i=1}^{m} Σ_{j=1}^{n} EI(i, j)     (17)

where EI(i,j) denotes the edge intensity value of the image.
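PEE itself is a one-liner once an edge-intensity map EI is chosen; the Sobel gradient magnitude used below is our assumption, since the text does not fix the operator:

```python
import numpy as np

def edge_strength(img):
    """ES: total edge intensity, with EI taken as the Sobel magnitude."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    m, n = img.shape
    gx = np.zeros((m - 2, n - 2))
    gy = np.zeros((m - 2, n - 2))
    for a in range(3):                 # 'valid' correlation, no deps
        for b in range(3):
            win = img[a:a + m - 2, b:b + n - 2]
            gx += kx[a, b] * win
            gy += ky[a, b] * win
    return np.hypot(gx, gy).sum()

def pee(ground_truth, magnified):
    """Positive when the magnified image has weaker edges (blurring)."""
    es_s = edge_strength(ground_truth)
    return 100.0 * (es_s - edge_strength(magnified)) / es_s

gt = np.zeros((8, 8))
gt[:, 4:] = 255.0                      # a sharp vertical edge
weak = 0.8 * gt                        # degraded result: edges 20% weaker
print(round(pee(gt, weak), 1))         # 20.0 -> detail has been lost
```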

The PEE values for each interpolation method are shown in Table 3. It is clear that the PEE value for the proposed method is very low compared with the values for other techniques, so structural edges are better preserved and less blurring is produced in our method.

The analysis of the experimental results above shows that the proposed method achieves a good balance between edge-preservation and blurring, performing especially well on synthetic images and segmented medical images. The major drawback of this method lies in the limitation of using the gradients only in horizontal and vertical directions, making it hard to get accurate gradient values for images with very low contrast. Our future work will consider how to calculate gradients in more directions, and use a surface of high accuracy to approximate the image data. We hope to develop a method for magnification that can maintain edges and detailed texture perfectly with low computational time.

5 Conclusions

This paper presents a novel method of producing an

Fig. 9 Enlarged image of a brain. (A) and (B) are segmented brain images produced by MICO. Images (a), (b), and (c) are results of enlarging a specified area of (A). Images (d), (e), and (f) are the results of enlarging a specified area of (B).

HR image by making use of gradient information. It maintains sharpness of edges and clear details in an image. Our proposed method first obtains LR image gradient values by fitting a surface with quadratic


polynomial precision, then the method adopts a bicubic method to get initial values of the HR image gradients. It then adjusts the gradients according to the spatial correlation in the gradient direction


Fig. 10 Magnification of vertical edges: (a) original image and gray value; (b) ours; (c) Bicubic; (d) CSF; (e) NEDI; (f) GGI.

to constrain the gradients of the HR image. Finally it estimates the missing pixels using a linear surface weighted by neighboring LR pixels. Experimental results demonstrate that our proposed method can achieve good quality image enlargement, avoiding jagged artifacts that arise by direct interpolation; it preserves sharp edges by gradient fusion.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable suggestions that greatly improved the paper. This project was supported by the National Natural Science Foundation of

China (Nos. 61332015, 61373078, 61572292, and 61272430), and National Research Foundation for the Doctoral Program of Higher Education of China (No. 20110131130004).

References

[1] Siu, W.-C.; Hung, K.-W. Review of image interpolation and super-resolution. In: Proceedings of Asia-Pacific Signal & Information Processing Association Annual Summit and Conference, 1-10, 2012.

[2] Gonzalez, R. C.; Woods, R. E. Digital Image Processing, 3rd edn. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 2006.


Fig. 11 Magnification of horizontal edges: (a) original image and gray value; (b) ours; (c) Bicubic; (d) CSF; (e) NEDI; (f) GGI.

[3] Franke, R. Scattered data interpolation: Tests of some methods. Mathematics of Computation Vol. 38, No. 157, 181-200, 1982.

[4] Keys, R. G. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech and Signal Processing Vol. 29, No. 6, 1153-1160, 1981.

[5] Park, S. K.; Schowengerdt, R. A. Image reconstruction by parametric cubic convolution. Computer Vision, Graphics, and Image Processing Vol. 23, No. 3, 258-272, 1983.

[6] Duchon, C. E. Lanczos filtering in one and two dimensions. Journal of Applied Meteorology Vol. 18, No. 8, 1016-1022, 1979.

[7] Allebach, J.; Wong, P. W. Edge-directed interpolation. In: Proceedings of International Conference on Image Processing, Vol. 3, 707-710, 1996.

[8] Zhang, C.; Zhang, X.; Li, X.; Cheng, F. Cubic surface fitting to image with edges as constraints. In: Proceedings of the 20th IEEE International Conference on Image Processing, 1046-1050, 2013.

[9] Jing, G.; Choi, Y.-K.; Wang, J.; Wang, W. Gradient guided image interpolation. In: Proceedings of IEEE International Conference on Image Processing, 1822-1826, 2014.

[10] Li, X.; Orchard, M. T. New edge-directed interpolation. IEEE Transactions on Image Processing Vol. 10, No. 10, 1521-1527, 2001.

[11] Tam, W.-S.; Kok, C.-W.; Siu, W.-C. Modified edge-directed interpolation for images. Journal of Electronic Imaging Vol. 19, No. 1, 013011, 2010.

[12] Zhang, D.; Wu, X. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Transactions on Image Processing Vol. 15, No. 8, 2226-2238, 2006.



Fig. 12 Magnification of diagonal edges: (a) original image and gray value; (b) ours; (c) Bicubic; (d) CSF; (e) NEDI; (f) GGI.


Fig. 13 Test images. Top row, left to right: cameraman, baboon, boat, goldhill, lake. Bottom row: peppers, couple, Lena, crowd, medical.

[13] Zhang, L.; Zhang, C.; Zhou, Y.; Li, X. Surface interpolation to image with edge preserving. In: Proceedings of the 22nd International Conference on Pattern Recognition, 1055-1060, 2014.

[14] Fan, H.; Peng, Q.; Yu, Y. A robust high-resolution details preserving denoising algorithm for meshes. Science China Information Sciences Vol. 56, No. 9, 1-12, 2013.

[15] Chang, H.; Yeung, D.-Y.; Xiong, Y. Super-resolution through neighbor embedding. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, I, 2004.

[16] Dong, W.; Zhang, L.; Lukac, R.; Shi, G. Sparse representation based image interpolation with nonlocal autoregressive modeling. IEEE Transactions on Image Processing Vol. 22, No. 4, 1382-1394, 2013.

[17] Freeman, W. T.; Jones, T. R.; Pasztor, E. C. Example-based super-resolution. IEEE Computer Graphics and Applications Vol. 22, No. 2, 56-65, 2002.

Table 1 Values of PSNR

Image      Bicubic  CSF    NEDI   GGI    Ours
Cameraman  30.37    30.09  33.94  34.27  35.45
Baboon     20.91    20.88  22.79  22.31  22.62
Boat       25.61    25.53  28.79  28.54  28.84
Goldhill   25.96    25.89  28.33  28.13  28.42
Lake       24.18    24.10  27.41  26.77  27.72
Peppers    27.35    27.27  30.44  30.49  30.66
Couple     23.99    23.91  26.82  26.65  26.89
Lena       26.90    26.80  29.37  30.11  30.38
Crowd      24.89    24.83  27.86  27.75  28.25
Medical    24.51    24.72  26.05  25.99  26.39

Table 2 Values of SSIM

Image      Bicubic  CSF    NEDI   GGI    Ours
Cameraman  0.943    0.941  0.891  0.944  0.965
Baboon     0.511    0.515  0.662  0.627  0.649
Boat       0.769    0.770  0.853  0.847  0.854
Goldhill   0.654    0.655  0.773  0.775  0.782
Lake       0.708    0.708  0.800  0.803  0.808
Peppers    0.754    0.754  0.819  0.809  0.821
Couple     0.664    0.666  0.785  0.775  0.786
Lena       0.780    0.782  0.834  0.841  0.849
Crowd      0.786    0.787  0.874  0.868  0.882
Medical    0.732    0.776  0.858  0.847  0.865

Table 3 Values of PEE as percentages

Image      Bicubic  CSF    NEDI   GGI    Ours
Cameraman  25.79    19.89  23.41  15.26  10.73
Baboon     13.62    9.44   11.37  -3.40  -6.82
Boat       23.67    16.12  18.52  12.33  8.14
Goldhill   19.24    17.95  16.92  13.08  11.32
Lake       21.74    14.75  19.02  10.23  7.93
Peppers    30.57    25.31  28.94  17.78  14.43
Couple     23.67    16.03  17.95  9.86   6.39
Lena       18.83    16.45  17.65  9.25   7.82
Crowd      15.76    10.85  14.16  6.79   4.28
Medical    27.51    25.02  20.43  14.77  9.61

[18] Sun, J.; Sun, J.; Xu, Z.; Shum, H.-Y. Image superresolution using gradient profile prior. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1-8, 2008.

[19] Wu, W.; Liu, Z.; He, X. Learning-based super resolution using kernel partial least squares. Image and Vision Computing Vol. 29, No. 6, 394-406, 2011.

[20] Yang, J.; Wright, J.; Huang, T. S.; Ma, Y. Image super-resolution via sparse representation. IEEE Transactions on Image Processing Vol. 19, No. 11, 2861-2873, 2010.

[21] Ohtake, Y.; Suzuki, H. Edge detection based multi-material interface extraction on industrial CT volumes. Science China Information Sciences Vol. 56, No. 9, 1-9, 2013.

[22] Li, C.; Gore, J. C.; Davatzikos, C. Multiplicative intrinsic component optimization (MICO) for MRI bias field estimation and tissue segmentation. Magnetic Resonance Imaging Vol. 32, No. 7, 913-923, 2014.

[23] Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In: Proceedings of the 20th International Conference on Pattern Recognition, 2366-2369, 2010.

[24] Wang, Z.; Bovik, A. C.; Sheikh, H. R.; Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing Vol. 13, No. 4, 600-612, 2004.

[25] Al-Fohoum, A. S.; Reza, A. M. Combined edge crispiness and statistical differencing for deblocking JPEG compressed images. IEEE Transactions on Image Processing Vol. 10, No. 9, 1288-1298, 2001.

Liqiong Wu received her B.S. degree in computer science and technology from Shandong University, Jinan, China, in 2014. Currently, she is a master student in the School of Computer Science and Technology, Shandong University, Jinan, China. Her research interests include computer graphics and image

processing.

Yepeng Liu received his B.S. degree in computer science and technology from Shandong University, Jinan, China, in 2014. He is currently pursuing the Ph.D. degree in the School of Computer Science and Technology, Shandong University, Jinan, China. His research interests include computer graphics, image processing, and geometry processing.


Brekhna received her B.S. degree in computer science and technology from the University of Peshawar, Pakistan, in 2010. She received her M.S. degree in computer science and technology from Comsats Institute of Technology, Islamabad, Pakistan, in 2013. Currently, she is a Ph.D. candidate in the School of Computer Science and Technology, Shandong University, Jinan, China. Her research interests include image processing, computer graphics, and machine learning.


Ning Liu received her B.S. degree in library science from Wuhan University. Since 2002, she has been an associate research librarian in the School of Computer Science and Technology, Shandong University, Jinan, China.

Caiming Zhang is a professor and doctoral supervisor in the School of Computer Science and Technology at Shandong University. He received his B.S. and master's degrees in computer science from Shandong University in 1982 and 1984, respectively, and his Ph.D. degree in computer science from the Tokyo Institute of Technology, Japan, in 1994. From 1997 to 2000, Dr. Zhang held a visiting position at the

University of Kentucky, USA. His research interests include CAGD, CG, information visualization, and medical image processing.

Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www. editorialmanager.com/cvmj.
