
Rahman et al. EURASIP Journal on Image and Video Processing (2016) 2016:35 DOI 10.1186/s13640-016-0138-1

RESEARCH Open Access

An adaptive gamma correction for image enhancement

Shanto Rahman1*, Md Mostafijur Rahman1, M. Abdullah-Al-Wadud2, Golam Dastegir Al-Quaderi3 and Mohammad Shoyaib1

Abstract

Due to the limitations of image-capturing devices or the presence of a non-ideal environment, the quality of digital images may get degraded. In spite of much advancement in imaging science, captured images do not always fulfill users' expectations of clear and soothing views. Most of the existing methods mainly focus on either global or local enhancement that might not be suitable for all types of images. These methods do not consider the nature of the image, whereas different types of degraded images may demand different types of treatments. Hence, we classify images into several classes based on the statistical information of the respective images. Afterwards, an adaptive gamma correction (AGC) is proposed to appropriately enhance the contrast of the image where the parameters of AGC are set dynamically based on the image information. Extensive experiments along with qualitative and quantitative evaluations show that the performance of AGC is better than other state-of-the-art techniques.

Keywords: Contrast enhancement, Gamma correction, Image classification

1 Introduction

Since digital cameras have become inexpensive, people have been capturing a large number of images in everyday life. These images are often affected by atmospheric changes [1], the poor quality of the image-capturing devices, the lack of operator expertise, etc. In many cases, these images might demand enhancement for making them more acceptable to the common people. Furthermore, image enhancement is needed because of its wide range of application in areas such as atmospheric sciences [2], astrophotography [3], medical image processing [4], satellite image analysis [5], texture analysis and synthesis [6], remote sensing [7], digital photography, surveillance [8], and video processing applications [9].

Enhancement covers different aspects of image correction such as saturation, sharpness, denoising, tonal adjustment, tonal balance, and contrast correction/enhancement. This paper mainly focuses on contrast enhancement for different types of images. The existing contrast enhancement techniques can be categorized into three groups: global, local, and hybrid techniques.

Correspondence: bit0321@Mt.du.acbd

1 Institute of Information Technology, University of Dhaka, Dhaka-1000, Bangladesh

Full list of author information is available at the end of the article

In global enhancement techniques, each pixel of an image is transformed following a single transformation function. However, different parts of the image might demand different types of enhancement, and thus global techniques may create over-enhancement and/or under-enhancement problems at some parts of the image [10]. To solve this problem, local enhancement techniques are proposed where transformation of an image pixel depends on the neighboring pixels' information. Hence, it lacks global brightness information and may result in local artifacts [11]. Moreover, the computational complexities of these methods are large as compared to that of global enhancement techniques. Hybrid enhancement techniques comprise of both global and local enhancement techniques. Here, the transformation considers both neighboring pixels' and global image information [12]. However, the parameter(s) controlling the contributions of the local and global transformations to the final output needs to be tuned differently for different images. Hence, a trade-off must be made in choosing the type of the enhancement technique. In this work, we focus on deriving a global enhancement technique which is computationally less complex and, at the same time, suitable for a large variety of images.


© 2016 The Author(s). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

A very common observation for most of the available techniques is that any single technique may not perform well across different images due to their differing characteristics. Figure 1 presents two visually unpleasant images on which two renowned global image enhancement techniques, i.e., histogram equalization (HE) [13] and adaptive gamma correction with weighting distribution (AGCWD) [14], have been applied. The results show that HE produces a better result for the "bean" image but not for the "girl" image, while AGCWD produces a better result for the "girl" image but not for the "bean" image. Hence, to overcome this problem with a single technique, image characteristics should be analyzed first, and based on these characteristics, images need to be separated into classes. An enhancement technique should then transform the images appropriately according to the class they belong to.

To handle different types of images, Tsai et al. [15] classified images into six groups and applied enhancement techniques for the respective groups of images. However, the predefined values used in the classification may not work for all the cases, whereas an adaptive classification method based on the statistical information is expected to work well in most of the cases.

To mitigate these problems, we propose a global technique named as adaptive gamma correction (AGC), which requires less computation and enhances each type of image according to its characteristics. To this end, the main contributions of our work are as follows:

• We propose an automatic image classification technique based on the statistical information of an image.

• For enhancing the contrast of each class of the images, we develop a modified gamma correction technique where the parameters are dynamically set, resulting in quite different transformation functions for different classes of images and requiring less time.

Fig. 1 Enhancement by different methods (top to bottom: "bean," "girl"). a Original. b HE. c AGCWD

Experimental results show that the dynamic parameters are set well to produce the expected improvement of the images.

The rest of this paper is organized as follows. Section 2 presents an overview of the existing works. Section 3 presents our proposed solution. Section 4 provides demonstration of the efficacy of AGC and lists the experimental results to illustrate the performance of AGC as compared to other existing methods. Finally, Section 5 concludes the findings.

2 Literature review

To enhance the contrast of an image, various image enhancement techniques have been proposed [10,16-19]. Histogram equalization (HE) is such a widely used technique [13]. However, HE does not always give satisfactory results since it might cause over-enhancement for frequent gray levels and loss of contrast for less frequent ones [18]. In order to mitigate over-enhancement problems, brightness preserving bi-histogram equalization (BBHE) [19], dualistic sub-image histogram equalization (DSIHE) [20], and minimum mean brightness error bi-histogram equalization (MMBEBHE) [21] have been proposed, which partition a histogram before applying the HE. BBHE partitions the histogram based on the image mean whereas DSIHE uses image median to partition. MMBEBHE recursively partitions the image histogram into multiple groups based on mean brightness error (MBE). In this technique, however, desired improvement may not always be achieved, and the difference between input and output image is minimal [18]. Moreover, because of recursive calculation of MBE, the computational complexity is very large as compared to other techniques [22].

A combination of BBHE and DSIHE is the recursively separated and weighted histogram equalization (RSWHE) [18], which preserves the brightness and enhances the contrast of an image. The core idea of this algorithm is to break down a histogram into two or more parts and apply weights in the form of a normalized power law function for modifying the sub-histograms. Finally, it performs histogram equalization on each of the weighted histograms. However, statistical information of the image may be lost after the transformation, deteriorating the quality of the image [14].

Some other methods are also proposed ranging from traditional gamma correction to more complex methods utilizing depth image histogram [23], pixel contextual information [11], etc., for analyzing image context and pipelining of different stages [24] to speed up the process.

Celik and Tjahjadi propose contextual and variational contrast (CVC) [11], where inter-pixel contextual information is used and the enhancement is performed using a smoothed 2D target histogram. As a result, the computational complexity of this technique becomes very large. Adaptive gamma correction with weighting distribution (AGCWD) [14] derives a hybrid histogram modification function combining traditional gamma correction and histogram equalization. Although AGCWD enhances the contrast while preserving the overall brightness of an image, it may not give the desired results when an input image lacks bright pixels, since the highest intensity in the output image is bounded by the maximum intensity of the input image [25]; that is, the highest enhanced intensity will never exceed the maximum intensity of the input image.

Coltuc et al. propose exact histogram specification (EHS) [16] based on the strict ordering of the pixels of an image. It guarantees that the histogram will be uniform [26] after enhancement. It thus increases the contrast of the image ignoring insignificant errors. However, EHS uses Gaussian model which is not appropriate for most of the natural images [27]. Tsai and Yeh introduce an appropriate piecewise linear transformation (APLT) function for color images by analyzing the contents of an image [28]. APLT may cause over-enhancement and loss of image details in some cases, when an image contains homogeneous background [29].

Celik and Tjahjadi have recently proposed an adaptive image equalization algorithm [30] where the input histogram is first transformed into a Gaussian mixture model and then the intersection points of Gaussian components are used to partition the dynamic range of the image. This technique may not enhance very low illuminated images [31].

The layered difference representation (LDR) proposed by Lee et al. [32] divides an image into multiple layers, derives a transformation function for each layer, and aggregates them into a final desired transformation function. Here, all pixels are considered equally though foreground pixels have more importance than background pixels [33]. Histogram modification framework (HMF) [34] handles these types of problems by reducing the contributions of large smooth areas which often correspond to the background regions. Thus, it enhances object details by degrading background details, and hence this method may not suffice if we want to see the background detail [35].

In order to enhance different parts of an image in different ways, the bilateral Bezier curve (BBC) method [36] partitions the image histogram into dark and bright regions, creates transformation curves separately for each segment, and merges these two curves to get the final mapping. However, BBC often generates significant distortions in the image due to brightening and over-enhancement [37].

In general, most of the contrast enhancement techniques fail to produce satisfactory results for diversified images such as dark, low-contrast, bright, mostly dark, high-contrast, mostly bright ones. To get rid of this problem, Tsai et al. [15] propose a decision tree-based contrast enhancement technique, which first classifies images into six groups, and then applies a piecewise linear transformation for each group of the images. However, the classification is performed using manually defined thresholds which may not always fit to enhance different types of images properly.

From the above discussion, it is evident that the available techniques for enhancing the contrast of an image might not be applied for all types of images. A technique producing good results for some images may fail on some other images. To solve this problem, we propose a computationally simple method utilizing an automatic image classification mechanism along with a suitable enhancement method for each of the image classes.

3 Proposed method

The main objective of the proposed technique is to transform an image into a visually pleasing one by maximizing the detail information. This is done by increasing the contrast and brightness without incurring any visual artifact. To achieve this, we propose an adaptive gamma correction (AGC) method which dynamically determines an intensity transformation function according to the characteristics of the input image. The proposed AGC consists of several steps, as presented in Fig. 2. The details of each step are described in the following subsections.

3.1 Color transformation

Several color models [13], such as red-green-blue (RGB), Lab, HSV, and YUV, are available in the image processing domain. However, images are usually available in the RGB color space, where the three channels are highly correlated; hence, intensity transformations done in the RGB space are likely to change the color of the image. For AGC, we adopt the HSV color space, which separates the color and brightness information of an image into hue (H), saturation (S), and value (V) channels. The HSV color model provides a number of advantages, such as a good capacity for representing colors for human perception and the ability to separate color information completely from the brightness (or lightness) information [28, 38, 39]. Hence, enhancing the V channel does not change the original color of a pixel.
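To illustrate this property, the following minimal sketch (using Python's standard `colorsys` module; the pixel values and the 0.8 exponent are arbitrary illustrative choices, not AGC's adaptive settings) enhances only the V channel and confirms that hue and saturation are unchanged:

```python
import colorsys

# A dim orange pixel in normalized RGB.
r, g, b = 0.40, 0.20, 0.10

# Separate color (H, S) from brightness (V).
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# Enhance only the V channel (an exponent < 1 brightens).
v_enh = v ** 0.8

# Convert back: hue and saturation are untouched.
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v_enh)
h2, s2, v2 = colorsys.rgb_to_hsv(r2, g2, b2)

print(round(h2 - h, 6), round(s2 - s, 6))  # color preserved, only V changed
```

The same round trip done directly in RGB (raising each channel to a power) would shift the hue of saturated pixels, which is exactly what the HSV decomposition avoids.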

3.2 Image classification

Fig. 2 Functional block diagram of the proposed technique

Every image has its own characteristics, and the enhancement should be done based on them. To appropriately handle different images, the proposed AGC first classifies an input image I into either the low-contrast class g1 or the high- (or moderate-) contrast class g2, depending on the available contrast of the image, using Eq. (1).

$$g(I) = \begin{cases} g_1, & D \le \frac{1}{\tau} \\ g_2, & \text{otherwise} \end{cases} \tag{1}$$

where D = diff((μ + 2σ), (μ − 2σ)) = 4σ, and τ is a parameter used for defining the contrast of an image. σ and μ are the standard deviation and the mean of the image intensity, respectively.

Equation (1) classifies an image as a low-contrast one when most of the pixel intensities of that image are clustered within a small range (cf. Fig. 3). The criterion in Eq. (1) is chosen guided by Chebyshev's inequality, which states that at least 75 % of the values of any distribution stay within 2σ of its mean on both sides [40].

This leads to the simpler form of the criterion for an image to be classified as a low-contrast one: 4σ ≤ 1/τ. From our experience, we have found that τ = 3 is a suitable choice for characterizing the contrasts of different images.

Again, depending on the brightness of the image, different image intensities should be modified differently. Hence, we divide each of the g1 and g2 classes into two sub-classes, bright and dark, based on whether the image mean intensity μ > 0.5 or not. Thus, AGC makes use of the four classes shown in Fig. 4.
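The two-step classification described above can be sketched as follows (a hypothetical helper of our own naming, assuming intensities normalized to [0, 1] and the suggested τ = 3):

```python
import numpy as np

def classify(image, tau=3.0):
    """Classify per Eq. (1): low contrast if D = 4*sigma <= 1/tau,
    then split by mean brightness (mu > 0.5 means bright)."""
    mu = float(np.mean(image))
    sigma = float(np.std(image))
    d = 4.0 * sigma  # D = diff((mu + 2*sigma), (mu - 2*sigma))
    contrast = "low" if d <= 1.0 / tau else "high"
    brightness = "bright" if mu > 0.5 else "dark"
    return contrast, brightness

# Intensities clustered in a narrow dark band -> low-contrast dark.
dark_flat = 0.2 + np.linspace(0.0, 0.02, 64).reshape(8, 8)
print(classify(dark_flat))  # ('low', 'dark')

# Intensities spread over the full dynamic range -> high contrast.
spread = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(classify(spread))
```

Because only the global mean and standard deviation are needed, the classification step costs a single pass over the image.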

3.3 Intensity transformation

The transformation function of the proposed AGC is based on the traditional gamma correction given by

$$I_{out} = c\,(I_{in})^{\gamma} \tag{2}$$

where I_in and I_out are the input and output image intensities, respectively, and c and γ are two parameters that control the shape of the transformation curve. In contrast to traditional gamma correction, AGC sets the values of γ and c automatically using image information, making it an adaptive method. In the following subsections, we describe the procedure of setting these two parameters for different classes of images.
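For reference, the traditional transformation of Eq. (2) with fixed, hand-picked parameters looks like this (a minimal sketch on normalized intensities; the parameter values are arbitrary, not AGC's adaptive settings):

```python
import numpy as np

def gamma_correct(i_in, c=1.0, gamma=1.0):
    """Eq. (2): I_out = c * I_in ** gamma for intensities in [0, 1]."""
    return c * np.power(i_in, gamma)

x = np.array([0.0, 0.25, 0.5, 1.0])
print(gamma_correct(x, gamma=0.5))  # gamma < 1 brightens mid-tones
print(gamma_correct(x, gamma=2.0))  # gamma > 1 darkens mid-tones
```

The endpoints 0 and 1 are fixed for c = 1; only the shape of the curve between them changes with γ, which is why the choice of γ controls how mid-range intensities are redistributed.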

3.3.1 Enhancement of low-contrast image

According to the classification done in Eq. (1), the images falling into group g1 have poor contrast. A low σ implies that most of the pixels have similar intensities, so the pixel values should be scattered over a wider range to enhance the contrast.

In gamma correction, γ controls the slope of the transformation function. The higher the value of γ, the steeper the transformation curve becomes; and the steeper the curve, the more the corresponding intensities are spread, causing a greater increase of contrast. In AGC, we achieve this for low-contrast images by choosing the value of γ calculated by

$$\gamma = -\log_{2}(\sigma) \tag{3}$$

Figure 5 demonstrates a plot of γ with respect to σ using the above formula, which shows a decreasing curve. Note that σ is small in the g1 class. Hence, large γ values will be obtained, which will cause a large increase of contrast, as expected.

Fig. 4 Image classification: low-contrast and moderate- or high-contrast classes, each divided into dark and bright sub-classes

In traditional gamma correction, c is used for brightening or darkening the output image intensities. However, in AGC, we allow c to have more influence on the transformation. The proposed AGC uses different values of c for different images depending on the nature of the respective image, according to

$$c = \frac{1}{1 + \text{Heaviside}(0.5 - \mu) \times (k - 1)} \tag{4}$$

where k is defined by

$$k = I_{in}^{\gamma} + \left(1 - I_{in}^{\gamma}\right) \times \mu^{\gamma} \tag{5}$$

and the Heaviside function [41] is given by

$$\text{Heaviside}(x) = \begin{cases} 0, & x \le 0 \\ 1, & x > 0 \end{cases} \tag{6}$$

Such choices of γ and c enable AGC to handle bright and dark images in the g1 class in different and appropriate manners. The effectiveness of the proposed transformation function is described in the following subsections.
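Equations (3)–(6) can be combined into a single parameter-setting step, sketched below (the function name is ours; note that k in Eq. (5), and hence c, depends on the pixel intensity, so c is computed per pixel). For a dark pixel exactly at the image mean, c · I_in^γ comes out very close to 0.5:

```python
import numpy as np

def agc_params(i_in, mu, sigma):
    """Per-pixel gamma (Eq. 3) and c (Eqs. 4-6) for a low-contrast image."""
    gamma = -np.log2(sigma)                                  # Eq. (3)
    k = i_in ** gamma + (1.0 - i_in ** gamma) * mu ** gamma  # Eq. (5)
    heaviside = 1.0 if (0.5 - mu) > 0 else 0.0               # Eq. (6)
    c = 1.0 / (1.0 + heaviside * (k - 1.0))                  # Eq. (4)
    return gamma, c

# Bright low-contrast image (mu > 0.5): Heaviside = 0, so c = 1.
g, c = agc_params(np.array([0.7]), mu=0.7, sigma=0.05)
print(g, c)  # gamma = -log2(0.05), c = [1.]

# Dark low-contrast image: c = 1/k, so a pixel at the mean maps near 0.5.
g, c = agc_params(np.array([0.3]), mu=0.3, sigma=0.05)
print(c * 0.3 ** g)  # close to [0.5]
```

For bright images the Heaviside term vanishes and Eq. (2) reduces to a plain power law, while for dark images c = 1/k reproduces the rational transform of Eq. (8).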

3.3.1.1 Bright images in g1

For low-contrast bright images (μ > 0.5), the major concern is to increase the contrast for better distinguishability of the image details that are made up of high intensities. Hence, in AGC, according to Eq. (4), c becomes 1 for such images, and the transformation function becomes

$$I_{out} = (I_{in})^{\gamma} \tag{7}$$

For increasing the contrast in this type of images, the transformation curve should spread the bright intensities out over a wider range of darker intensities. To achieve this, AGC needs γ to be larger than 1, which is assured by Lemma 1.

Lemma 1 For low-contrast images, γ remains greater than 1.

Proof For low-contrast images, the minimum value of γ in AGC will be γ_min = −log₂(σ_max) = −log₂(1/(4τ)). For a choice of τ = 3, we get γ_min = −log₂(0.0833) = 3.585 > 1. □

The lower curves in Fig. 6 represent the transformation effects for low-contrast bright images. We get different curves with different slopes depending on the value of σ. A lower σ produces a higher spread of intensities, resulting in a greater increase of contrast.

3.3.1.2 Dark images in g1

Most of the intensities of an image in this class are clustered in a small range of dark gray levels around the image mean. For increasing the contrast of such images, the transformation curve needs to spread the dark intensities out toward the higher intensities. This requires a transformation curve that lies above the line I_out = I_in. The transformation function is also desired to spread the "clustered" intensities more than the other intensities.

For a dark image (μ < 0.5) with low contrast, Eqs. (4) and (5) are used and the final transformation function becomes

$$I_{out} = \frac{I_{in}^{\gamma}}{I_{in}^{\gamma} + \left(1 - I_{in}^{\gamma}\right) \times \mu^{\gamma}} \tag{8}$$

Figure 6 shows that the transformation functions produced by AGC for low-contrast dark images indeed fall above the line I_out = I_in. Again, the steepness of the curves is greater for the lower-contrast (i.e., low σ) images, as desired. More interestingly, the steep portion of the curve moves with the value of μ. This ensures that the intensities around μ are spread more in the output image. Such a behavior of the transformation is very much expected, since most of the intensities fall around μ in this class of images.
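Equation (8) can be sketched directly (our own naming; intensities assumed normalized to [0, 1]); applying it to values clustered around a dark mean shows the expected spreading:

```python
import numpy as np

def enhance_dark_low_contrast(i_in, mu, sigma):
    """Eq. (8): I_out = I^g / (I^g + (1 - I^g) * mu^g), with g = -log2(sigma)."""
    g = -np.log2(sigma)  # Eq. (3)
    p = np.power(i_in, g)
    return p / (p + (1.0 - p) * mu ** g)

mu, sigma = 0.2, 0.05
x = np.array([0.15, 0.20, 0.25])  # intensities clustered around the mean
y = enhance_dark_low_contrast(x, mu, sigma)
print(y.round(3))
print(float(np.ptp(y)) > float(np.ptp(x)))  # True: the cluster is spread out
```

A pixel at the mean lands near the middle of the output range, and the narrow 0.1-wide input cluster is stretched across roughly half of the dynamic range, which is the behavior described for the steep portion of the curve.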

Figure 7 presents two low-contrast dark and bright images and their histograms along with the corresponding transformation curves as well as the enhanced images and histograms after applying AGC. In the input image histograms, most of the intensities are accumulated within a very limited range. After applying the AGC, the intensities are distributed over wider ranges.

3.3.2 Enhancement of high- or moderate-contrast image

An image falls into the g2 class when its intensities are appreciably scattered over the available dynamic range. Brightness adjustment is usually more important than contrast enhancement for such images. In this case, I_out and c are calculated as in Eqs. (2) and (4). γ is now calculated differently, using Eq. (9), so as not to stretch the contrast much.

Fig. 6 Transformation curves for images with low contrast

Fig. 7 Low-contrast images. a Original dark ("bean"), b enhanced by AGC along with transformation curve of a. c Original bright ("cup"), d enhanced by AGC along with transformation curve of c

$$\gamma = \exp\left[\frac{1 - (\mu + \sigma)}{2}\right] \tag{9}$$

Lemma 2 confirms that γ falls within a small range around 1 for this class of images, as desired, ensuring not much change in contrast.

Lemma 2 For high- or moderate-contrast images, γ ∈ [0.90, 1.65].

Proof The minimum value of γ is found when (μ + σ) takes its maximum possible value of max(μ + √(μ − μ²)), since for μ ∈ [0, 1] we have σ² ≤ μ − μ². The maximum of (μ + σ) is thus (1 + √2)/2 = 1.2071, giving the minimum value γ = exp[(1 − 1.2071)/2] = 0.9016278. The maximum value of γ occurs when (μ + σ) = 0, giving γ = exp(1/2) = √e. Hence, 0.9016278 ≤ γ ≤ √e = 1.64872. □

We now discuss the effect of γ on dark and bright images.

3.3.2.1 Dark images in g2

For images with μ < 0.5, we have (μ + σ) < 1, since both μ and σ are less than (or equal to) 0.5, which implies γ > 1.

Figure 8 presents the transformation curves for different values of μ and σ. Here, we see that the transformation curves pass above the linear curve I_out = I_in, transforming the dark pixels into brighter ones. Note also that the lower the mean of the input image, the sharper the increase in darker pixel values (steeper curves in Fig. 8). This increases the visibility of dark images.

For dark images with a comparatively larger mean (μ ≈ 0.5 but μ < 0.5), the transformation curves are very close to the linear curve, i.e., not much change is made to the intensities.

3.3.2.2 Bright images in g2

For this class of images, I_out, c, and γ are calculated using Eqs. (2), (4), and (9), respectively. In this case, images have good quality with respect to brightness and contrast, and the main target is to preserve the image quality. Figure 9 shows the transformation curves for different values of μ and σ. Here, the curves lie very close to the line I_out = I_in, causing little change in contrast and ensuring not much change of intensities, as expected.

Note that for the maximally scattered image, i.e., for σ = σ_max = 1/2 and μ = 1/2 (when half of the image pixels are at zero intensity and the other half at the maximum intensity 1), we need not change the image: it has the maximum contrast and is already enhanced. Here, we need a linear transformation curve, and Eq. (9) exactly produces γ = 1, meeting the requirement.
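The behavior described above, together with the Lemma 2 bounds, can be checked numerically with a few lines (a sketch; the helper name is ours):

```python
import math

def gamma_high_contrast(mu, sigma):
    """Eq. (9): gamma = exp((1 - (mu + sigma)) / 2)."""
    return math.exp((1.0 - (mu + sigma)) / 2.0)

# Maximally scattered image: half the pixels at 0, half at 1.
print(gamma_high_contrast(0.5, 0.5))  # 1.0 -> identity transform

# Lemma 2 bounds: gamma stays within [0.9016..., sqrt(e)].
print(gamma_high_contrast((1 + 2 ** -0.5) / 2, 1 / (2 * 2 ** 0.5)))  # ~0.9016
print(gamma_high_contrast(0.0, 0.0))  # sqrt(e), about 1.6487
```

The middle call evaluates Eq. (9) at the (μ, σ) pair that maximizes μ + σ, reproducing the lower bound derived in the proof of Lemma 2.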

Figure 10 presents two moderate- or high-contrast images and their histograms along with the corresponding transformation curves as well as the enhanced images and histograms after applying AGC. Upon the application of AGC, the gray levels of the images are distributed over wider ranges in the histograms as desired.

4 Experimental results

In this paper, we have three major concerns: (i) appropriate classification along with (ii) transformation of an image for acceptable contrast enhancement with (iii) low computational complexity. We compare the performance of the proposed AGC with some other state-of-the-art techniques, namely HE [13], EHS [16], CVC [11], LDR [32], HMF [34], RSWHE [18], and AGCWD [14]. The comparison is performed in both qualitative and quantitative manners.

4.1 Qualitative assessment

For qualitative assessment, we first consider a few representative images from each of the four proposed classes (shown in Fig. 11). Besides these, we also consider some other images used in [42] and [13].

The "bean" and "cat" images belong to the dark low-contrast class (in a scale of 255: bean, D = 69.05, μ = 38.73; cat, D = 44.17, μ = 82.95) and contain most of the pixels in the dark region. The main challenge of these images is to remove the haziness and increase the brightness without creating any artifact. To demonstrate the superiority of AGC, let us consider a portion from the "cat" image (i.e., the red rectangular marked region in Fig. 11) that represents a plain wall (where intensities vary from 88 to 99). HE creates abnormal effects because the intensities of this region vary from 161 to 255 in the output image, which creates local artifacts. Over-enhancement is also observed in EHS. In contrast, following Eqs. (3), (4), (5), and (8), AGC transforms the intensities to the range [202, 255]. As a result, a visually soothing area is created on the wall in the transformed image. A similar effect is also observed in the "bean" image for different methods. A large contrast is desired to clearly visualize the black dots on the white beans. HE and EHS are almost successful in this case. However, CVC highly deteriorates the image quality due to the creation of local artifacts. AGCWD does not create any significant effect on the input images because this method is unable to increase the brightness beyond the highest intensity (the highest intensity for "cat" is 104 and for "bean" is 83). RSWHE slightly equalizes the original image in order to preserve the original brightness but does not improve the image quality. LDR largely increases the brightness, whereas HMF fails to increase it sufficiently. In contrast, due to the proposed classification along with the proper transformation function, the results of AGC are quite acceptable.

Fig. 8 Transformation curves for high- or moderate-contrast dark images

Fig. 9 Transformation curves for high- or moderate-contrast bright images

Fig. 10 Moderate- or high-contrast images. a Original dark ("fountain"), b enhanced by AGC along with transformation curve of a. c Original bright ("lenna"), d enhanced by AGC along with transformation curve of c

The "cup" image of Fig. 11 is also of low contrast, but most of the pixels of this image are too bright (in a scale of 255, D = 81.70, μ = 204.86). The most frequent intensity of the cup image is 255, which is also the maximum intensity of this image, whereas the minimum intensity is 111. In the case of HE and EHS, some of the pixels become too dark, which is not desirable. CVC, LDR, and HMF fail to extract detail information in a few cases. AGCWD transforms most of the intensities into a white range ([128, 255]) and makes the image brighter than expected. On the other hand, AGC appropriately transforms (using Eqs. (3), (4), (5), and (7)) a large number of the brighter pixels into darker ones, enhancing the contrast. The proposed AGC performs better than the others because one pixel does not directly influence the transformation of its neighboring pixels; rather, global information such as the image mean and standard deviation influences the transformation of AGC.

The "cave" and "woman" images represent the dark moderate-/high-contrast class (in a scale of 255: cave, D = 261.22, μ = 61.99; woman, D = 210.73, μ = 85.22). This is because the pixels of these images are scattered over the whole histogram and, among those, more than half of the pixels are in the dark region. The main challenge of the "cave" and "woman" images is to increase the brightness without creating any artifact, especially in the light regions of the images. In the "woman" image, 75 % of the pixel intensities are lower than 100, and we need to preserve the effect of the lights. HE and EHS over-enhance the image because intensity values above 100 are dramatically changed, creating local artifacts around the lights. LDR, HMF, and CVC produce almost identical results,

Fig. 11 Enhancement results of different methods (top to bottom: "cat," "bean," "cup," "woman," "cave," and "rose"). a Original. b HE. c EHS. d CVC. e LDR. f HMF. g RSWHE. h AGCWD. i Proposed

and the outputs are mostly dark. RSWHE largely deteriorates the original image. Although AGCWD produces a comparatively better result than the other existing methods, its outputs lack the desired brightness. The proposed method (AGC) increases the brightness of the input images and also keeps good contrast using Eqs. (2), (4), and (9). In the "cave" image, 85 % of the intensities are lower than 75, but the maximum intensity is 255. As with the "woman" image, similar effects are also observed for the "cave" image. AGC produces better contrast by effectively classifying and transforming the pixel intensities using Eqs. (2), (4), and (9).

The "rose" image has good contrast and soothing brightness (in a scale of 255, D = 265.51, μ = 218.79). We choose this image to examine the outcomes of different methods on a good-quality image. In this image, the minimum and maximum intensities are 11 and 255, respectively, and more than 60 % of the intensities are exactly 255 (mostly coming from the white background). HE and EHS create annoying artifacts that are strongly visible on the leaves and the petals. HE transforms most of the bright pixels into darker ones (e.g., 254 into 101, 200 into 53); as a result, the petals of the rose become dark. Similarly, EHS and RSWHE transform pixels of intensity 255 to lower values and create dark shades in the background. CVC, LDR, and HMF make the image darker, whereas RSWHE and AGCWD increase the brightness. In all of these cases, the output image loses its original contrast. However, AGC does not affect most of the intensities because of the linear nature of its transformation curve

(the transformation curve is shown in Fig. 9 using Eqs. (2), (4), and (9)). Hence, visual information of the image is preserved and slightly enhanced.

Although we have discussed our four different cases with example images (e.g., Fig. 11), several other images from [42] and [13] are also considered to show the robustness of AGC, as shown in Fig. 12. These results also advocate the superiority of AGC. For example, in the case of the "girl" image, HE makes the thin necklace visible, but there are too many artifacts on the hair. LDR, HMF, and CVC produce good results on the hair, but they do not keep the original color of the face, and the brightness is not increased. EHS enhances the image with too many artifacts on the hair. Although AGCWD produces a comparatively better result than the other existing methods, its output lacks the desired brightness. On the other hand, AGC increases the brightness of the input image and also keeps the desired color contrast. For "dark-ocean," the main challenges are to increase the brightness while keeping the sunlight ray and river stream arcs clearly visible. Figure 12 ("room") presents a raw image which is almost dark. In this case, too, AGC increases the contrast and brightness better than the other techniques. The "white-rose" image in Fig. 12 contains pattern noise [13]. Here, our method yields a better image than the other methods without amplifying the noise much.

4.2 Quantitative measurement

Enhancement or improvement of the visual quality of an image is a subjective matter because its judgment varies from person to person. Through quantitative measurements, we can establish numerical justifications. However,


Fig. 12 Enhancement results of different methods (top-down: "girl," "dark-ocean," "fountain," "building," "room," and "white-rose"). a Original. b HE. c EHS. d CVC. e LDR. f HMF. g RSWHE. h AGCWD. i Proposed

Table 1 Root-mean-square contrast for the images used in Fig. 11

Image name Original HE CVC LDR HMF RSWHE AGCWD AGC

Bean 0.07 0.29 0.27 0.28 0.17 0.08 0.09 0.32

Cat 0.04 0.22 0.22 0.25 0.14 0.06 0.05 0.24

Cup 0.08 0.19 0.18 0.19 0.16 0.17 0.10 0.19

Cave 0.22 0.24 0.26 0.23 0.24 0.26 0.27 0.28

Woman 0.21 0.27 0.26 0.25 0.25 0.28 0.27 0.29

Rose 0.31 0.29 0.32 0.31 0.32 0.21 0.29 0.30

quantitative evaluation of contrast enhancement is not an easy task due to the lack of any universally accepted criterion. Here, we assess the performance of enhancement techniques using three quality metrics, namely root-mean-square (rms) contrast, execution time (ET), and discrete entropy (DE).

4.2.1 Root-mean-square (rms) contrast

A common way to define the contrast of an image is to measure the rms contrast [43, 44], defined by Eq. (10); a larger rms value represents better image contrast.

\[ C_{\mathrm{rms}} = \sqrt{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left(\bar{I}-I_{ij}\right)^{2}} \tag{10} \]

where M and N are the image dimensions, I_{ij} is the normalized intensity at pixel (i, j), and \bar{I} is the mean normalized intensity.
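With intensities normalized to [0, 1], the rms contrast of Eq. (10) is simply the population standard deviation of the normalized image. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def rms_contrast(image):
    """Root-mean-square contrast (Eq. (10)) of an 8-bit grayscale image.

    Intensities are normalized to [0, 1], so the result is the population
    standard deviation of the normalized image, comparable with Tables 1-2.
    """
    I = image.astype(np.float64) / 255.0
    return float(np.sqrt(np.mean((I - I.mean()) ** 2)))

print(rms_contrast(np.full((4, 4), 128, dtype=np.uint8)))      # 0.0: flat image
print(rms_contrast(np.array([[0, 255], [0, 255]], np.uint8)))  # 0.5: half black, half white
```

A perfectly flat image has zero rms contrast, while an image split evenly between 0 and 255 attains 0.5, consistent with the range of values reported in Tables 1 and 2.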

Fig. 13 Execution time (log10 scale) of the compared enhancement methods for the "bean," "cat," "cup," and "cave" images

Tables 1 and 2 present the rms contrast produced by the different methods. In Table 1, we use the images presented in Fig. 11. Again, to show the robustness of the proposed method, in Table 2, we consider a collection of 4000 images (1000 for each group: low-contrast bright (G1) and dark (G2) images, and moderate-/high-contrast bright (G3) and dark (G4) images). We select the images from Gonzalez et al. [13], Extended Yale B [45], Caltech-UCSD birds 200 [46], Corel [47], Caltech 256 [48], and Outex-TC-00034 [49] datasets. From these two tables, it is clear that the proposed method provides larger rms values than the other methods. Though HE produces the highest rms in three groups (shown in Table 2), such results are due to the over-enhancement of the

Table 2 Root-mean-square contrast calculated for the test images of four groups

images. This aside, AGC performs better than the other state-of-the-art techniques due to its better classification and appropriate transformation function.

4.2.2 Execution time

The execution time is an important metric in image processing to measure the running time of an algorithm. Figure 13 presents the execution time needed to run each algorithm, which shows that CVC requires more execution time than all the other methods because it considers neighboring pixel information. LDR, EHS, AGCWD, RSWHE, and HMF also take comparatively long execution times. The execution time of AGC is always the lowest among the compared methods, even lower than that of HE. This is because its transformation function considers only the current pixel's intensity, the image mean, and the standard deviation. Hence, AGC can be easily adopted in real-time applications such as surveillance cameras, webcams, and other cameras due to its low computational cost.
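Execution time of the kind reported in Fig. 13 can be estimated with a simple wall-clock harness. The sketch below is a generic timing helper under our own naming; the compared enhancement algorithms and the paper's exact measurement protocol are not reproduced here.

```python
import time
import numpy as np

def mean_runtime(method, image, repeats=5):
    """Average wall-clock time (seconds) of one call to `method` on `image`.

    A generic harness in the spirit of Fig. 13; `method` stands for any
    enhancement function taking and returning an image array.
    """
    start = time.perf_counter()
    for _ in range(repeats):
        method(image)
    return (time.perf_counter() - start) / repeats

# Hypothetical example: time an identity "enhancement" on a 640x480 image.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(f"{mean_runtime(lambda x: x.copy(), img):.6f} s per image")
```

Averaging over several repeats smooths out scheduler jitter, which matters when comparing fast per-pixel methods such as AGC and HE.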

4.2.3 Discrete entropy

Entropy is a measure of the uncertainty of a random variable. The more random the nature of the pixel intensities, the more entropy the image has [50]. Low entropy indicates low image contrast. Tables 3 and 4 illustrate the DE

Techniques  G1     G2     G3     G4     Average
Original    0.052  0.041  0.216  0.215  0.131
AGC         0.252  0.255  0.298  0.291  0.274
HE          0.283  0.279  0.294  0.285  0.285
AGCWD       0.072  0.085  0.245  0.295  0.174
LDR         0.190  0.201  0.245  0.257  0.223
RSWHE       0.130  0.082  0.231  0.176  0.155
WAHE        0.137  0.129  0.243  0.247  0.189
CVC         0.191  0.206  0.248  0.262  0.227

Table 3 Discrete entropy for the images used in Fig. 11

Image name  HE    HMF   LDR   RSWHE  AGCWD  CVC   AGC
Bean        5.64  5.58  5.63  5.02   5.17   5.55  5.65
Cat         5.17  5.18  5.19  4.97   4.85   5.18  5.19
Cup         6.29  6.34  6.36  6.21   5.98   6.34  6.36
Cave        7.55  7.16  6.13  6.30   6.80   7.15  7.81
Woman       7.9   7.68  7.62  7.14   7.61   7.78  7.96
Rose        3.33  4.06  3.86  3.10   3.70   3.53  3.77

Table 4 Discrete entropy calculated for the test images of four

groups

Techniques G1 G2 G3 G4 Average

Original 4.82 4.80 6.98 7.02 5.91

AGC 4.78 4.76 6.90 6.87 5.83

HE 4.69 4.72 6.74 6.79 5.74

AGCWD 4.52 4.62 6.55 6.72 5.60

LDR 4.74 4.73 6.88 6.94 5.82

RSWHE 4.51 4.23 5.90 4.45 4.77

WAHE 4.73 4.77 6.89 6.86 5.81

CVC 4.75 4.76 6.87 6.85 5.80

values achieved by the different methods. Table 3 shows that AGC achieves larger entropy in most cases, performing best among the compared methods. Table 4 presents the DE values calculated for the 4000 images mentioned earlier, from which we can also conclude that AGC performs better than the other state-of-the-art methods.
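The DE values above follow the standard discrete entropy H = Σ_k p(k) log2(1/p(k)) over the normalized intensity histogram [50]. A minimal sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def discrete_entropy(image):
    """Discrete entropy H = sum_k p(k) * log2(1/p(k)) of an 8-bit image.

    p(k) is the normalized histogram count of gray level k; levels with
    zero probability are skipped, as they contribute nothing to H.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(np.sum(p * np.log2(1.0 / p)))

print(discrete_entropy(np.zeros((4, 4), dtype=np.uint8)))  # 0.0: constant image
print(discrete_entropy(np.arange(256, dtype=np.uint8)))    # 8.0: uniform histogram
```

A constant image carries no information (H = 0), while using all 256 levels equally attains the maximum of 8 bits; the values in Tables 3 and 4 fall between these extremes.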

5 Conclusions

In this paper, we have proposed a simple, efficient, and effective technique for contrast enhancement, called adaptive gamma correction (AGC). This method generates visually pleasing enhancement for different types of images. Unlike most existing methods, AGC dynamically sets the values of its different parameters. Performance comparisons with other state-of-the-art enhancement algorithms show that AGC achieves the most satisfactory contrast enhancement under different illumination conditions. As AGC has low time complexity, it can be incorporated in application areas such as digital photography, video processing, and other consumer electronics applications.

Endnote

1 For better understanding, we use the [0-255] scale instead of the [0-1] scale.

Acknowledgements

We are really grateful to the anonymous reviewers for the corrections and useful suggestions that have substantially improved the paper. Further, we would like to thank M. G. Rabbani for discussion on some statistical issues. We would also like to thank C. Lee, Chul Lee, and C.S. Kim for providing their source codes and A.R. Rivera and A. Seal for providing a few test images.

Authors' contributions

SR, MR, AW, GA and MS have contributed in designing, developing, and analyzing the methodology, performing the experimentation, and writing and modifying the manuscript. All the authors have read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author details

1Institute of Information Technology, University of Dhaka, Dhaka-1000, Dhaka, Bangladesh. 2Department of Software Engineering, King Saud University, Riyadh, Saudi Arabia. 3Department of Physics, University of Dhaka, Dhaka-1000, Dhaka, Bangladesh.

Received: 30 December 2015 Accepted: 29 September 2016

Published online: 18 October 2016

References

1. S-C Huang, B-H Chen, W-J Wang, Visibility restoration of single hazy images captured in real-world weather conditions. Circ. Syst. Video Technol. IEEE Trans. 24(10), 1814-1824 (2014)

2. GP Ellrod, Advances in the detection and analysis of fog at night using goes multispectral infrared imagery. Weather Forecast. 10(3), 606-619 (1995)

3. S Bedi, R Khandelwal, Various image enhancement techniques—a critical review. Int. J. Adv. Res. Comput. Commun. Eng. 2(3), 1605-1609 (2013)

4. M Zikos, E Kaldoudi, S Orphanoudakis, Medical image processing. Stud. Health Technol. Inf. 43(Pt B), 465-469 (1997)

5. M Neteler, H Mitasova, in Open Source GIS: A GRASS GIS Approach. Satellite image processing (Springer, New York, 2002), pp. 207-262

6. AA Efros, WT Freeman, in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Image quilting for texture synthesis and transfer (ACM, Los Angeles, 2001), pp. 341-346

7. JR Jensen, K Lulla, Introductory digital image processing: a remote sensing perspective. Geocarto International. 2(1), 65 (1987)

8. MA Renkis, Video surveillance sharing system and method. US Patent 8,842,179 (2014)

9. F Arman, A Hsu, M-Y Chiu, in Proceedings of the First ACM International Conference on Multimedia. Image processing on compressed data for large video databases (ACM, Anaheim, 1993), pp. 267-272

10. H Cheng, X Shi, A simple and effective histogram equalization approach to image enhancement. Digital Signal Process. 14(2), 158-170 (2004)

11. T Celik, T Tjahjadi, Contextual and variational contrast enhancement. Image Process. IEEE Trans. 20(12), 3431-3441 (2011)

12. A Ross, A Jain, J Reisman, A hybrid fingerprint matcher. Pattern Recog. 36(7), 1661-1673 (2003)

13. RC Gonzalez, RE Woods, Digital Image Processing, 3rd edn. (Pearson/Prentice Hall, Upper Saddle River, 2008)

14. S-C Huang, F-C Cheng, Y-S Chiu, Efficient contrast enhancement using adaptive gamma correction with weighting distribution. Image Process. IEEE Trans. 22(3), 1032-1041 (2013)

15. C-M Tsai, Z-M Yeh, Y-F Wang, Decision tree-based contrast enhancement for various color images. Mach. Vis. Appl. 22(1), 21-37 (2011)

16. D Coltuc, P Bolon, J-M Chassery, Exact histogram specification. Image Process. IEEE Trans. 15(5), 1143-1152 (2006)

17. K Hussain, S Rahman, S Khaled, M Abdullah-Al-Wadud, M Shoyaib, in Software, Knowledge, Information Management and Applications (SKIMA), 2014 8th International Conference On. Dark image enhancement by locally transformed histogram (IEEE, Dhaka, 2014), pp. 1-7

18. M Kim, MG Chung, Recursively separated and weighted histogram equalization for brightness preservation and contrast enhancement. Consum. Electron. IEEE Trans. 54(3), 1389-1397 (2008)

19. Y-T Kim, Contrast enhancement using brightness preserving bi-histogram equalization. Consum. Electron. IEEE Trans. 43(1), 1-8 (1997)

20. Y Wang, Q Chen, B Zhang, Image enhancement based on equal area dualistic sub-image histogram equalization method. Consum. Electron. IEEE Trans. 45(1), 68-75 (1999)

21. S-D Chen, AR Ramli, Minimum mean brightness error bi-histogram equalization in contrast enhancement. Consum. Electron. IEEE Trans. 49(4), 1310-1319 (2003)

22. S-D Chen, AR Ramli, Preserving brightness in histogram equalization based contrast enhancement techniques. Digital Signal Process. 14(5), 413-428 (2004)

23. S-W Jung, Image contrast enhancement using color and depth histograms. Signal Process. Lett. IEEE. 21 (4), 382-385 (2014)

24. S-C Huang, W-C Chen, A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement. Image Process. IEEE Trans. 23(10), 4426-4437 (2014)

25. S Rahman, MM Rahman, K Hussain, SM Khaled, M Shoyaib, in Computer and Information Technology (ICCIT), 2014 17th International Conference On. Image enhancement in spatial domain: a comprehensive study (IEEE, Dhaka, 2014), pp. 368-373

26. D Sen, SK Pal, Automatic exact histogram specification for contrast enhancement and visual system based quantitative evaluation. Image Process. IEEE Trans. 20(5), 1211-1220 (2011)

27. Y Wan, D Shi, Joint exact histogram specification and image enhancement through the wavelet transform. Image Process. IEEE Trans. 16(9), 2245-2250 (2007)

28. C-M Tsai, Z-M Yeh, Contrast enhancement by automatic and parameter-free piecewise linear transformation for color images. Consum. Electron. IEEE Trans. 54(2), 213-219 (2008)

29. S-H Yun, JH Kim, S Kim, in Consumer Electronics (ICCE), 2011 IEEE International Conference On. Contrast enhancement using a weighted histogram equalization (IEEE, Las Vegas, 2011), pp. 203-204

30. T Celik, T Tjahjadi, Automatic image equalization and contrast enhancement using Gaussian mixture modeling. Image Process. IEEE Trans. 21(1), 145-156 (2012)

31. G Phadke, R Velmurgan, in Applications of Computer Vision (WACV), 2013 IEEE Workshop On. Illumination invariant mean-shift tracking (IEEE, Florida, 2013), pp. 407-412

32. C Lee, C Lee, C-S Kim, Contrast enhancement based on layered difference representation of 2D histograms. Image Process. IEEE Trans. 22(12), 5372-5384 (2013)

33. J-T Lee, C Lee, J-Y Sim, C-S Kim, in Image Processing (ICIP), 2014 IEEE International Conference On. Depth-guided adaptive contrast enhancement using 2D histograms (IEEE, Paris, 2014), pp. 4527-4531

34. T Arici, S Dikbas, Y Altunbasak, A histogram modification framework and its application for image contrast enhancement. Image Process. IEEE Trans. 18(9), 1921-1935 (2009)

35. C Lee, C Lee, Y-Y Lee, C-S Kim, Power-constrained contrast enhancement for emissive displays based on histogram equalization. Image Process. IEEE Trans. 21(1), 80-93 (2012)

36. F-C Cheng, S-C Huang, Efficient histogram modification using bilateral Bezier curve for the contrast enhancement. Display Technol. J. 9(1), 44-50 (2013)

37. EF Arriaga-Garcia, RE Sanchez-Yanez, J Ruiz-Pinales, M de Guadalupe Garcia-Hernandez, Adaptive sigmoid function bihistogram equalization for image contrast enhancement. J. Electron. Imaging. 24(5), 053009 (2015)

38. H-D Cheng, X Jiang, Y Sun, J Wang, Color image segmentation: advances and prospects. Pattern Recog. 34(12), 2259-2281 (2001)

39. NA Ibraheem, MM Hasan, RZ Khan, PK Mishra, Understanding color models: a review. ARPN J. Sci. Technol. 2(3), 265-75 (2012)

40. JG Saw, MC Yang, TC Mo, Chebyshev inequality with estimated mean and variance. Am. Stat. 38(2), 130-132 (1984)

41. KF Riley, MP Hobson, SJ Bence, Mathematical Methods for Physics and Engineering: a Comprehensive Guide (Cambridge University Press, Cambridge, 2006)

42. AR Rivera, B Ryu, O Chae, Content-aware dark image enhancement through channel division. Image Process. IEEE Trans. 21(9), 3967-3980 (2012)

43. E Peli, Contrast in complex images. JOSA A. 7(10), 2032-2040 (1990)

44. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. Image Process. IEEE Trans. 13(4), 600-612 (2004)

45. KC Lee, J Ho, D Kriegman, Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 684-698 (2005)

46. P Welinder, S Branson, T Mita, C Wah, F Schroff, S Belongie, P Perona, Caltech-UCSD birds 200. Tech. Report CNS-TR-2010-001, California Institute of Technology (2010)

47. P Duygulu, K Barnard, JF de Freitas, DA Forsyth, in Computer Vision-ECCV 2002. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary (Springer, Copenhagen, 2002), pp. 97-112

48. G Griffin, A Holub, P Perona, Caltech-256 object category dataset. Technical Report CNS-TR-2007-001, California Institute of Technology (2007). http://authors.library.caltech.edu/7694/

49. T Ojala, T Mäenpää, M Pietikäinen, J Viertola, J Kyllönen, S Huovinen, in Pattern Recognition, 2002. Proceedings. 16th International Conference On. Outex-new framework for empirical evaluation of texture analysis algorithms, vol. 1 (IEEE, Quebec, 2002), pp. 701-706

50. S Gull, J Skilling, Maximum entropy method in image processing. Commun. Radar Signal Process. IEE Proc. F. 131(6), 646-659 (1984)
