Hindawi Publishing Corporation EURASIP Journal on Advances in Signal Processing Volume 2007, Article ID 36105, 22 pages doi:10.1155/2007/36105

Research Article

Logarithmic Adaptive Neighborhood Image Processing (LANIP): Introduction, Connections to Human Brightness Perception, and Application Issues

J.-C. Pinoli and J. Debayle

Ecole Nationale Supérieure des Mines de Saint-Etienne, Centre Ingénierie et Santé (CIS), Laboratoire LPMG, UMR CNRS 5148, 158 cours Fauriel, 42023 Saint-Etienne Cedex 2, France

Received 29 November 2005; Revised 23 August 2006; Accepted 26 August 2006

Recommended by Javier Portilla

A new framework for image representation, processing, and analysis is introduced and exposed through practical applications. The proposed approach is called logarithmic adaptive neighborhood image processing (LANIP) since it is based on the logarithmic image processing (LIP) and the general adaptive neighborhood image processing (GANIP) approaches, which allow several intensity and spatial properties of human brightness perception to be mathematically modeled, operationalized, and computationally implemented. The LANIP approach is mathematically, computationally, and practically relevant and is particularly connected to several human visual laws and characteristics such as: intensity range inversion, saturation characteristic, Weber's and Fechner's laws, psychophysical contrast, spatial adaptivity, multiscale adaptivity, and the morphological symmetry property. The LANIP approach is finally exposed in several areas: image multiscale decomposition, image restoration, image segmentation, and image enhancement, through biomedical, materials, and visual imaging applications.

Copyright © 2007 J.-C. Pinoli and J. Debayle. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

In its broad acceptation [1], the notion of processing an image involves the transformation of that image from one form into another. The result may be a new image or may take the form of an abstraction, parametrization, or a decision. Thus, image processing is a large and interdisciplinary field which deals with images. Within the scope of the present article, the term image will refer to a continuous or discrete (including the digital form) two-dimensional distribution of light intensity [2, 3], considered either in its physical or in its psychophysical form.

1.1. Fundamental requirements for an image processing framework

In developing image processing techniques, Stockham [1] has noted that it is of central importance that an image processing framework be physically consistent with the nature of the images, and that the mathematical rules and structures be compatible with the information to be processed. Jain [4] has clearly shown the interest and power of mathematics for image representation and processing. Granrath [5] has recognized the important role of human visual laws and models in image processing. He also highlighted the symbiotic relationship between the study of image processing and that of the human visual system. Marr [6] has pointed out that, to develop an effective computer vision technique, the following three points must be considered: (1) what are the particular operations to be used and why? (2) how can the images be represented? and (3) what implementation structure can be used? Myers [7] has also pointed out that there is no reason to persist with the classical linear operations if, via abstract analysis, more easily tractable or more physically consistent abstract versions of mathematical operations can be created for image and signal processing. Moreover, Schreiber [8] has argued that image processing is an application field, not a fundamental science, while Gonzalez and Wintz [9] have insisted on the value of image processing techniques in a variety of practical problems.

Therefore, it can be inferred from the above brief discussion, and more generally from a careful reading of the specialized literature, that an image processing framework must satisfy the following four main fundamental requirements [10]: (1) it is based on a physically (and/or psychophysically) relevant image formation model, (2) its mathematical structures and operations are both powerful and consistent with the physical nature of the images, that is, with the image formation and combination laws, (3) its operations are computationally effective, or at least tractable, and (4) it is practically fruitful in the sense that it enables successful applications to be developed in real situations.

1.2. The important role of human vision in image processing

The important role that human visual perception shall or should play in image processing has been recognized for a long time [5, 6]. Although numerous papers have been published on the modeling and operationalization of different visual laws and characteristics, it must be noticed that, in practice, the integration of visually based computer modules in image processing software and artificial vision systems still remains relatively poor [11]. Indeed, there exists a large gap between, on the one hand, the strong ability of human vision to perform difficult perceptual tasks, such as pattern recognition or image correlation, and, on the other hand, the weakness of even sophisticated algorithms to successfully address such problems [12]. The first reason is that computer vision remains a hard problem, since the human visual system is complex and still remains partially unknown or not known well enough [12]. The second reason is that a lot of human visual treatments are not available in operational and implementational forms allowing their processing to be performed by a computer.

Most conventional processors take little account of the influence of human visual psychology [11, 13], and more generally of the human visual perception theories [14]. For example, computer vision uses very little, and almost nothing, of the Gestalt theory [15, 16] results, mainly due to the fact that this theory is essentially qualitative and thus does not offer quantitative laws that allow mathematical operations to be defined and computationally implemented [17]. Although some authors have addressed the difficult problem of defining unified frameworks that involve and integrate several laws, characteristics, and models of human visual perception (e.g., [6, 18]), the way towards an efficient unification, allowing a full understanding of visual processing, will be long and hard. The purpose of the present paper is to contribute to progress in that direction.

1.3. The logarithmic image processing (LIP) approach

In the mid-1980s, an original mathematical approach called logarithmic image processing (LIP) was introduced by Jourlin and Pinoli [19, 20] as a framework for the representation and processing of nonlinear images valued in a bounded intensity range. A complete mathematical theory has been developed [21-23]. It consists of an ordered algebraic and functional framework, which provides a set of special operations: addition, subtraction, multiplication, differentiation, integration, convolution, and so on, for the processing of bounded-range intensity images.

The LIP theory was proved to be not only mathematically well defined, but also physically and psychophysically well justified (see [22] for a survey of the LIP physical and psychophysical connections). Indeed, it is consistent with the transmittance image formation laws [20, 24], the multiplicative reflectance image formation model, the multiplicative transmittance image formation model [10, 25, 26], and with several laws and characteristics of human brightness perception [22, 27, 28]. The LIP framework has been compared with other abstract-linear-mathematical-based approaches showing several theoretical and practical advantages [10].

1.4. The general adaptive neighborhood image processing (GANIP) approach

The image processing techniques using spatially invariant transformations, with fixed operational windows, give efficient and compact computing structures, with the conventional separation between data and operations. However, these operators have several strong drawbacks, such as removing significant details, changing the detailed parts of large objects, and creating artificial patterns [29].

Alternative approaches towards context-dependent processing have been proposed with the introduction of spatially-adaptive operators, where the adaptive concept results from the spatial adjustment of the window [30-33]. A spatially-adaptive image processing approach implies that operators will no longer be spatially invariant, but must vary over the whole image with adaptive windows, taking locally into account the image context.

An original approach, called general adaptive neighborhood image processing (GANIP), has been recently introduced by Debayle and Pinoli [34-36]. This approach allows the building of multiscale and spatially adaptive image processing transforms using context-dependent intrinsic operational windows. With the help of a specified analyzing criterion and general linear image processing (GLIP) frameworks [10, 25, 37, 38], such transforms both perform a more significant spatial analysis, taking intrinsically into account the local radiometric, morphological or geometrical characteristics of the image, and are consistent with the physical and/or physiological settings of the image to be processed [39-41].

1.5. The proposed LANIP (LIP + GANIP) framework

The general adaptive neighborhood image processing (GANIP) approach is here specifically introduced and applied together with the logarithmic image processing (LIP) framework. The so-called logarithmic adaptive neighborhood image processing (LANIP = LIP + GANIP) will be shown to be consistent with several human visual laws and characteristics, such as intensity range inversion, the saturation characteristic, Weber's and Fechner's laws, the psychophysical contrast, spatial adaptivity, the morphological symmetry property, and multiscale analysis. Combining LIP and GANIP, the proposed logarithmic adaptive neighborhood image processing (LANIP) framework satisfies the four main fundamental requirements that are needed for a relevant human perception-based framework (Section 1.1): (1) it is based on visual laws and characteristics, (2) its mathematical structures and operations are both powerful and consistent with the psychophysical nature of the visual images, (3) its operations are computationally effective, and (4) it is practically fruitful in the sense that it enables successful applications to be developed in real situations.

1.6. Applications of the LANIP framework to biomedical, materials, and visual imaging

In this paper, practical application examples of the LANIP approach, more particularly in the context of biomedical, materials, and visual imaging, are exposed in the fields of image multiscale decomposition, image restoration, image segmentation, and image enhancement, successively. In order to evaluate the proposed approach, a comparative study between the LANIP-based and the classical operators is also proposed. The results are achieved on a brain image, the "Lena" visual image, a metallurgic grain image, a human endothelial cornea image, and a human retina vessel image.

1.7. Outline of the paper

First, the logarithmic image processing (LIP) framework is surveyed through its initial motivation and goal, its mathematical fundamentals, and the addressed application issues. A detailed summary of its connections with human brightness perception is exposed. Secondly, the general adaptive neighborhood image processing (GANIP) approach is introduced and its connections to human brightness perception are discussed. Then, the logarithmic adaptive neighborhood image processing (LANIP) is introduced by combining the LIP framework with the GANIP framework. Next, practical application examples are illustrated on biomedical, materials, and visual images. Finally, in the concluding part, the main characteristics of the LANIP approach are summarized and the objectives of future work are briefly exposed.

2. LIP: LOGARITHMIC IMAGE PROCESSING

2.1. Initial motivation and goal

The logarithmic image processing (LIP) approach was originally developed by Jourlin and Pinoli and formally published in the mid-1980s [19, 20, 23] for the representation and processing of images valued in a bounded intensity range. They started by examining the following (apparently) simple problem: how to add two images? They argued that the direct usual addition of the intensities of two images is not a satisfactory solution in several physical settings, such as images formed by transmitted light [42-44] or the human brightness perception system [45-47], and in many practical cases of digital images [48, 49]. For the first two non-(classically)-linear physical image settings, the major reason is that the usual addition + and scalar multiplication × operations are not consistent with their combination and amplification physical laws. Regarding digital images, the problem lies in the fact that a direct usual sum of two intensity values may be out of the bounded range where such images are valued, resulting in an unwanted out-of-range value. From a mathematical point of view, the initial aim [19, 20, 23] of developing the LIP approach was to define an additive operation closed in the bounded real number range [0, M), which is mathematically well defined, and also physically consistent with concrete physical (including psychophysical) and/or practical image settings [21, 22]. Then, the focus was to introduce an abstract ordered linear topological and functional framework.

2.2. Mathematical fundamentals

In developing the LIP approach, Jourlin and Pinoli [19, 20, 23] first argued that, from a mathematical point of view, most of the classical or less classical image processing techniques have borrowed their basic tools from functional analysis. They further argued that these tools realize their full efficiency when they are put into a well-defined algebraic structure, most of the time of a vectorial nature. In the LIP approach, images are represented by mappings, called gray tone functions and denoted by $f, g, \ldots$, defined on the spatial domain $D$ and valued in the positive real number range $[0, M)$, called the gray tone range. The elements of $[0, M)$ are called gray tones. Thereafter, the set constituted by those gray tone functions, extended to the real number interval $(-\infty, M)$ and structured with a vector addition, denoted by $\overset{\triangle}{+}$, a scalar multiplication, denoted by $\overset{\triangle}{\times}$, and the opposite, denoted by $\overset{\triangle}{-}$, defines a vector space, denoted by $S$ [23].

Those LIP basic operations are directly defined as follows [25, 26]:

$$\forall f, g \in S, \quad f \overset{\triangle}{+} g = f + g - \frac{fg}{M},$$
$$\forall f \in S, \ \forall \lambda \in \mathbb{R}, \quad \lambda \overset{\triangle}{\times} f = M - M\left(1 - \frac{f}{M}\right)^{\lambda}, \tag{1}$$
$$\forall f \in S, \quad \overset{\triangle}{-} f = \frac{-Mf}{M - f}.$$

The opposite operation allows the difference between two gray tone functions $f$ and $g$, denoted by $f \overset{\triangle}{-} g$, to be defined:

$$f \overset{\triangle}{-} g = M\,\frac{f - g}{M - g}. \tag{2}$$
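For illustration, here is a minimal numerical sketch of these operations; the helper names are ours, the gray tone arrays are assumed to be already valued in [0, M) with M = 256, and no claim is made that this matches any reference LIP implementation.

```python
import numpy as np

M = 256.0  # upper bound of the gray tone range [0, M)

def lip_add(f, g):
    # LIP addition (1): f (+) g = f + g - f*g/M
    return f + g - (f * g) / M

def lip_scalar_mul(lam, f):
    # LIP scalar multiplication (1): lam (x) f = M - M*(1 - f/M)**lam
    return M - M * (1.0 - f / M) ** lam

def lip_opposite(f):
    # LIP opposite (1): (-) f = -M*f / (M - f)
    return -M * f / (M - f)

def lip_sub(f, g):
    # LIP subtraction (2): f (-) g = M*(f - g)/(M - g)
    return M * (f - g) / (M - g)

# Small check: the LIP sum of a gray tone and its opposite is the neutral element 0
f = np.array([10.0, 100.0, 200.0])
print(lip_add(f, lip_opposite(f)))                     # [0. 0. 0.]
print(lip_sub(np.array([200.0]), np.array([180.0])))   # stays inside [0, M)
```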

In addition to abstract linear algebra, ordered set theory plays a fundamental role within the LIP approach. Indeed, it has been shown [23] that this set of extended gray tone functions is an ordered vector space with the classical order relation $\geq$. The modulus of a gray tone function $f$ is then denoted by $|f|_{\triangle}$ and defined by

$$|f|_{\triangle} = \begin{cases} f & \text{if } f \geq 0, \\ \overset{\triangle}{-} f & \text{if } f < 0. \end{cases} \tag{3}$$

In fact, it has been proved [23] that the order relation induces the topological structuring of the LIP framework. Such a result is of fundamental importance and explains why the modulus notion is of great interest, since it gives a mathematical meaning to the physical "magnitude" within the LIP approach. However, the LIP mathematical framework is not only an ordered vectorial structure. Algebraic and functional extensions have been developed allowing powerful concepts and notions operating on special classes of gray tone functions to be defined: metrics, norms, scalar product, differentiation, integration, convolution [50], correlation [51], and also the specific LIP Fourier and LIP wavelet transformations [52]. These fundamental aspects of the LIP approach will not be exposed in the present article and the interested reader is invited to refer to the published articles and thesis for detailed developments [23, 25, 51, 53].

2.3. Application issues

During the last twenty years, successful application examples have been reported in a number of image processing areas, for example, background removal [24, 54], illumination correction [55, 56], image enhancement (dynamic range and sharpness modification) [57-62], image 3D reconstruction and visualization [63, 64], contrast estimation [27, 28, 65, 66], image restoration and filtering [54, 56, 67, 68], edge detection and image segmentation [65, 69-72], image multiscale decomposition [73, 74], image data compression [75, 76], and color image representation and processing [77-79].

2.4. LIP versus general linear image processing (GLIP)

From an image processing point of view, the LIP framework appears as an abstract-linear-mathematical-based approach and belongs to the so-called general linear image processing (GLIP) family [10]. Therefore, it has been compared with the classical linear image processing (CLIP) approach (e.g., see [9, 49]), the multiplicative homomorphic image processing (MHIP) approach [1, 80, 81], the log-ratio image processing (LRIP) approach [82-84], and the unified model for human brightness perception [18], showing its mathematical, physical, computational, and practical characteristics and advantages [10, 25, 27, 58, 67, 85]. Interested readers are invited to refer to [10] for a detailed report of the comparative study of the MHIP, LRIP, and LIP approach.

2.5. Connections of the LIP model with several properties of the human brightness perception

The LIP approach has been proved [20,25, 28, 58, 59, 67, 86] to be consistent with several properties of the human visual system, because it satisfies the brightness range inversion and the saturation characteristic, Weber's and Fechner's laws, and the psychophysical contrast notion. This section aims at surveying these connections.

2.5.1. Intensity functions

In the context of human brightness perception, a gray tone function $f(x, y)$ corresponds to an incident light intensity function $F(x, y)$ through the following relationship [20]:

$$f(x, y) = M\left(1 - \frac{F(x, y)}{F_{\max}}\right), \tag{4}$$

where $F_{\max}$ is the saturating light intensity level, called the "upper threshold" [45] or the "glare limit" [48] of the human visual system. Thus, a gray tone function $f(x, y)$ corresponds to an intensity function $F(x, y)$ valued in the positive bounded real number range $(0, F_{\max}]$.

In fact, the definition (4) of a gray tone function in the context of human brightness perception is nothing else than a normalized intensity function in an inverted value range. Indeed, contrary to the classical convention, the gray range is inverted in the LIP model, since the gray tone 0 designates the total whiteness, while the real number M represents the absolute blackness. This will now be justified and explained.
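A small sketch of the correspondence (4) and of the range inversion it entails; M and the saturating intensity F_MAX below are illustrative values of our own choosing.

```python
import numpy as np

M = 256.0       # gray tone range [0, M)
F_MAX = 1000.0  # illustrative "glare limit" (saturating intensity), an assumption

def intensity_to_graytone(F):
    # Relationship (4): f = M * (1 - F / F_max); low intensity -> gray tone near M (black)
    return M * (1.0 - F / F_MAX)

def graytone_to_intensity(f):
    # Inverse of (4): F = F_max * (1 - f / M)
    return F_MAX * (1.0 - f / M)

# The range inversion: the brightest admissible intensity maps to gray tone 0
print(intensity_to_graytone(F_MAX))   # 0.0 (total whiteness)
print(intensity_to_graytone(1e-9))    # close to M (near-complete darkness)
```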

2.5.2. The intensity range inversion

The limits of the interval [0, M) have to be interpreted as follows: the value 0 represents the "upper threshold" or "glare limit" (i.e., the saturating light intensity level Fmax), whilst the value M corresponds to the physical complete darkness (i.e., the light intensity level 0). Indeed, it is known [87, 88] that the eyes are sensitive to a few quanta of light photons. The brightness range inversion of the gray tone range [0, M) (0 and M representing, respectively, the white and black values) has been first justified in the setting of transmitted light imaging processes: the value 0 corresponds to the total transparency and logically appears as the neutral element for the mathematical addition [19, 20]. This brightness range inversion also appears valid in the context of the human visual perception. Indeed, Baylor et al. [89, 90] have shown through physiological experiments on monkeys that, in complete darkness, the bioelectrical intensity delivered by the retinal stage of the eyes is equal to a mean constant value. They have also established that the increase of the incident light intensity produces a decrease (and not an increase) of this bioelectrical intensity. Such a property physically justifies the brightness range inversion in the LIP model in the context of human visual perception.

2.5.3. The saturation characteristic

Many of the LIP model's mathematical operations are stable in the positive gray tone range [0, M), which corresponds to the light intensity range (0, Fmax] (see Section 2.5.1). This important boundedness property allows it to be argued that the LIP model is consistent with the saturation characteristic of the human visual system, as also noted by Brailean et al. [27, 67, 85]. Indeed, it is known [45, 46] that the human eyes, beyond a certain limit (i.e., the "upper threshold", denoted Fmax in the present article), cannot recognize any further increase in the incident light intensity.

2.5.4. Weber's law

The response to light intensity of the human visual system has been known to be nonlinear since the middle of the 19th century, when the psychophysicist Weber [91] established his visual law. He argued that human visual detection depends on the ratio, rather than the difference, between the light intensity values $F$ and $F + \Delta F$, where $\Delta F$ is the so-called "just noticeable difference," which is the amount of light necessary to add to a visual test field of intensity value $F$ such that it can be discriminated from a reference field of intensity value $F$ [45, 46]. Weber's law is expressed as

$$\frac{\Delta F}{F} = W, \tag{5}$$

where $W$ is a constant called Weber's constant [45, 92].

It has been shown [20, 67, 86] that the LIP subtraction is consistent with Weber's law. Indeed, choosing two light intensity values $F$ and $G$, the difference of their corresponding gray tones $f$ and $g$, using the definitions (2) and (4), is given by

$$g \overset{\triangle}{-} f = M\,\frac{F - G}{F}. \tag{6}$$

If the light intensity values $F$ and $G$ are just noticeable, that is, $G = F + \Delta F$, then the gray tone difference, denoted $\Delta_{\triangle} f$, yields

$$\Delta_{\triangle} f = -M\,\frac{\Delta F}{F} = -MW. \tag{7}$$

Thus, the constancy is established (the minus sign coming from the brightness range inversion in the LIP model). However, the value of Weber's constant depends on the size of the detection target [93, 94], holds only for light intensity values larger than a certain level [95], and is known to be context-dependent [96]. Although other researchers have criticized or rejected Weber's law (see Krueger's article [96] and related open commentaries for a detailed discussion), this does not limit the interest of the LIP approach in the context of human visual perception. Indeed, the consistency between the LIP subtraction and Weber's law established in (6), (7) means that in whatever situation Weber's law holds, the LIP subtraction expresses it. In fact, the LIP model is consistent with a less restrictive visual law than Weber's law, known as Fechner's law.
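As a small numerical check of this consistency (a sketch under the relationship (4), with illustrative values of M, F_MAX, and W chosen by us), the LIP difference of the gray tones of two just-noticeable intensities is indeed the constant -MW, independently of F:

```python
M, F_MAX, W = 256.0, 1000.0, 0.02   # illustrative values (W is Weber's constant)

def graytone(F):
    return M * (1.0 - F / F_MAX)     # relationship (4)

def lip_sub(f, g):
    return M * (f - g) / (M - g)     # LIP subtraction (2)

for F in [1.0, 10.0, 100.0, 500.0]:
    G = F + W * F                    # just noticeable increment: Delta F = W * F
    print(lip_sub(graytone(G), graytone(F)))   # always -M*W = -5.12 here
```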

2.5.5. Fechner's law

A few years after Weber, Fechner [97] explained the nonlinearity of human visual perception as follows [45, 96]: in order to produce incremental arithmetic steps in sensation, the light intensity must grow geometrically. He proposed the following relationship between the light intensity $F$ (stimulus) and the brightness $B$ (sensation):

$$\Delta B = k\,\frac{\Delta F}{F}, \tag{8}$$

where $\Delta F$ is the increment of light that produces the increment $\Delta B$ of sensation (brightness) and $k$ is a constant. The Fechner law can then be expressed as

$$B = k \ln\left(\frac{F}{F_{\min}}\right), \tag{9}$$

where $k$ is an arbitrary constant and $F_{\min}$ is the "absolute threshold" [96] of the human eye, which is known to be very close to the physical complete darkness [87, 88]. Fechner's law can be equivalently expressed as

$$B = k \ln\left(\frac{F}{F_{\max}}\right) + k \ln\left(\frac{F_{\max}}{F_{\min}}\right), \tag{10}$$

where $F_{\max}$ is the "upper threshold" (or "glare limit") of the human eye.

In this article, the relationships (8) and (9) or (10) will be called the discrete Fechner law and the continuous Fechner law, respectively. It has been shown [19, 22, 67] that the LIP subtraction is consistent with the discrete Fechner law. This result is easy to obtain since formulas (8) and (9) are equivalent. The LIP framework is also consistent with the continuous Fechner law [59].

In fact, the Fechner approach was an attempt to find a classical linear range for the brightness (light intensity sensation) with which the usual operations "+" and "x" can be used, whilst the LIP model defines specific operations acting directly on the light intensity function (stimulus) through the gray tone function notion.
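For completeness, the step from the discrete form (8) to the continuous form (9) is the usual integration of the Weber-Fechner relation, with the integration constant fixed by requiring zero brightness at the absolute threshold $F_{\min}$ (a standard textbook derivation, sketched here):

$$\mathrm{d}B = k\,\frac{\mathrm{d}F}{F} \;\Longrightarrow\; B(F) = \int_{F_{\min}}^{F} k\,\frac{\mathrm{d}F'}{F'} = k \ln\left(\frac{F}{F_{\min}}\right), \qquad B(F_{\min}) = 0.$$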

2.5.6. The psychophysical contrast

Using the LIP subtraction operation (2) and the modulus notion (3), Jourlin et al. [28] have proposed in the discrete case (i.e., with the spatial domain $D$ being a nonempty discrete set in the Euclidean space $\mathbb{R}^2$) a definition of the contrast between the gray tones $f$ and $g$ of two neighboring points:

$$C(f, g) = \max(f, g) \overset{\triangle}{-} \min(f, g). \tag{11}$$

An equivalent definition [25, 51, 86], which allows subsequent mathematical operations to become more tractable, is given by

$$C(f, g) = |f \overset{\triangle}{-} g|_{\triangle}, \tag{12}$$

where $|\cdot|_{\triangle}$ is the positive gray tone valued mapping, called modulus in Section 2.2 and defined by (3).

It was shown [28, 86] that definition (12) is consistent with Weber's and discrete Fechner's laws. Indeed, the contrast between the gray tones f and g of two neighboring points is

$$C(f, g) = |f \overset{\triangle}{-} g|_{\triangle} = M\,\frac{|f - g|}{M - \min(f, g)} = M\,\frac{|G - F|}{\max(F, G)}, \tag{13}$$

where $G$ and $F$ are the corresponding intensity values given by (using (4)):

$$G = F_{\max}\left(1 - \frac{g}{M}\right), \qquad F = F_{\max}\left(1 - \frac{f}{M}\right). \tag{14}$$

For two just noticeable intensity values F and G, the consistency with Weber's law is then shown by using the formulas (7), (8):

$$C(f, g) = M\,\frac{\Delta F}{F} = MW. \tag{15}$$

In fact, it has been proved [86] that the LIP contrast definition coincides with the classical psychophysical definition [45, 55, 90], since selecting the first equality in (15) yields

$$C(f, g) = M\,\frac{|\Delta F|}{F}, \tag{16}$$

which is a less restrictive equation than (15), and is related to the discrete Fechner law instead of Weber's law. Therefore, the LIP subtraction and modulus notion are closely linked to the classical psychophysical definition of contrast, which expresses the (geometric, and not arithmetic) difference in intensity between two objects observed by a human observer. Starting with the psychophysically and mathematically well-justified definition (12), Jourlin et al. [28] have shown that the LIP model permits the introduction of the contrast definition in the discrete case. Pinoli [86] has extended this work to the continuous case.
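As an illustration, a small sketch of the LIP contrast (11)-(13) between two neighboring gray tones; the helper names are ours and gray tones are assumed to lie in [0, M) with M = 256.

```python
M = 256.0

def lip_sub(f, g):
    return M * (f - g) / (M - g)          # LIP subtraction (2)

def lip_contrast(f, g):
    # (11)-(12): contrast = LIP difference between the larger and the smaller gray tone
    return lip_sub(max(f, g), min(f, g))

# Equivalently, by (13): M * |f - g| / (M - min(f, g))
f, g = 200.0, 180.0
print(lip_contrast(f, g), M * abs(f - g) / (M - min(f, g)))   # both ~67.37
```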

2.5.7. Other visual laws and recently reported works

The classical literature describing the human brightness response to stimulus intensity includes many uncorrelated results, due to the different viewpoints of researchers from different scientific disciplines (e.g., physiology, neurology, psychology, psychophysics, optics, engineering, and computer science). Several human visual laws have been reported and studied, for example, Weber's law [45, 91, 95], Fechner's law [45, 97, 98], the DeVries-Rose law [95, 99-102], Stevens' law [45, 103-105], and the Naka-Rushton law [106-108] (see Krueger's article [96] and the related open commentaries for a detailed study in the field of psychophysics, and Xie and Stockham's article [18] for a discussion in terms of image processing in the context of human vision). Some authors have tried to relate some of these human visual laws, for example, see [109, 110]. Other authors have proposed modified or unified human visual laws, for example, see [18, 111, 112].

Recently reported modern works [113-116] suggest that, instead of a logarithmic scaling, the visual response to a stimulus intensity takes the form of a kind of sigmoidal curve: a parabola near the origin and approximately a logarithmic function for higher values of the input. Therefore, it can only be argued that the LIP approach is consistent with Weber's and Fechner's laws, and thus appears as a powerful and tractable algebraic, mathematical, and computational framework for image processing in the context of human visual perception under the logarithmic assumption.

Image representation in the domain of local frequencies is appropriate and has strong statistical grounds [117-119] and remarkable biological counterparts [120-122]. The Weber-Fechner-Stevens (and other) luminance nonlinearities are a particular (zero frequency) case of the more general nonlinear wavelet-like behavior [120, 121, 123, 124].

Nevertheless, the mathematical introduction of a wavelet transformation within a function vector space is based on an integral operation and thus on an addition operation (+ becomes $\overset{\triangle}{+}$ within the LIP framework). This is of key importance since, from a mathematical point of view, the setup of the additive operation is the starting point for the definition of the Fourier and wavelet transformations. Therefore, the LIP framework enables logarithmic wavelet transformations [52], whose behavior is adapted to the human visual system, to be defined.

3. GANIP: GENERAL ADAPTIVE NEIGHBORHOOD IMAGE PROCESSING

In the so-called general adaptive neighborhood image processing (GANIP) approach, which has been recently introduced [34, 41], a set of general adaptive neighborhoods (GANs set) is identified for each point in the image to be analyzed. A GAN is a subset of the spatial domain D constituted by connected points whose measurement values, in relation to a selected criterion (such as luminance, contrast, thickness, curvature, etc.), fit within a specified homogeneity tolerance. Then, for each point to be processed, its associated GANs set is used as a set of adaptive operational windows for the considered transformation. This allows operators for image processing and analysis to be defined that are adaptive to the spatial structures, intensities, and analyzing scales of the studied image [34].

3.1. General adaptive neighborhood (GAN) paradigm

In adaptive neighborhood image processing (ANIP) [125, 126], a set of adaptive neighborhoods (ANs set) is defined for each point within the image. Their spatial extent depends on the local characteristics of the image where the seed point is situated. Then, for each point to be processed, its associated ANs set is used as spatially adaptive operational windows of the considered transformation.

The AN paradigm can be spread over a more general case, in order to consider the radiometric, morphological, or geometrical characteristics of the image, allowing a more consistent spatial analysis to be addressed and operators consistent with the physical and/or physiological settings of the image to be developed. In the so-called general adaptive neighborhood image processing (GANIP) [34, 35, 41], local neighborhoods are identified in the image to be analyzed as sets of connected points within a specified homogeneity tolerance, in relation with a selected analyzing criterion such as luminance, contrast, orientation, thickness, curvature, and so forth; see [35]. They are called general for two main reasons. First, the addition of a radiometric, morphological, or geometrical criterion in the definition of the usual AN sets allows a more significant spatial analysis to be performed. Second, both image and criterion mappings are represented in general linear image processing (GLIP) frameworks [10, 25, 37, 38], using concepts and structures coming from abstract linear algebra, in order to include situations in which signals or images are combined by processes other than the usual vector addition [10]. Consequently, operators based on such intensity-based image processing frameworks should be consistent with the physical and/or physiological settings of the images to be processed. For instance, the logarithmic image processing (LIP) framework (Section 2), with its vector addition $\overset{\triangle}{+}$ and its scalar multiplication $\overset{\triangle}{\times}$, has been proved to be consistent with the transmittance image formation model, the multiplicative reflectance image formation model, the multiplicative transmittance image formation model, and with several laws and characteristics of human brightness perception.

In this paper, GANIP-based operators will be specifically introduced and applied together with the LIP framework, because of its superiority among the GLIP frameworks (Section 2.4).

3.2. General adaptive neighborhoods (GANs) sets

The space of criterion mappings, defined on the spatial domain $D$ and valued in a real number interval $E$, is represented in a GLIP framework, denoted by $\mathcal{C}$, structured by its vectorial operations $\oplus$, $\ominus$, and $\otimes$.

For each point $x \in D$ and for an image $f$, the general adaptive neighborhoods (GANs), denoted by $V^h_{m_0}(x)$, are defined as subsets of $D$. They are built upon a criterion mapping $h \in \mathcal{C}$ (based on a local measurement related to $f$, such as luminance, contrast, thickness, ...), in relation with a homogeneity tolerance $m_0$ belonging to the positive intensity value range $E_+ = \{t \in E \mid t > 0\}$.

More precisely, the GAN $V^h_{m_0}(x)$ is a subset of $D$ which fulfills the two following conditions:

(i) its points have a criterion measurement value close to that of the seed (the point $x$ to be processed):

$$\forall y \in V^h_{m_0}(x), \quad |h(y) \ominus h(x)|_E \leq m_0; \tag{17}$$

(ii) it is a path-connected set [127] (according to the usual Euclidean topology on $D \subseteq \mathbb{R}^2$),

where $|\cdot|_E$ is the vector modulus defined as in (3).

In this way, for a point $x$, a range of tolerance $m_0$ is first computed around $h(x)$: $[h(x) \ominus m_0, h(x) \oplus m_0]$. Secondly, the inverse map of this interval gives the subset $\{y \in D \mid h(y) \in [h(x) \ominus m_0, h(x) \oplus m_0]\}$ of the spatial domain $D$. Finally, the path-connected component holding $x$ provides its GAN set $V^h_{m_0}(x)$.

The general adaptive neighborhoods (GANs) are then defined as

$$\forall (m_0, h, x) \in E_+ \times \mathcal{C} \times D, \quad V^h_{m_0}(x) = C_{h^{-1}([h(x) \ominus m_0,\ h(x) \oplus m_0])}(x), \tag{18}$$

where $C_X(x)$ denotes the path-connected component [127] (according to the usual Euclidean topology on $D \subseteq \mathbb{R}^2$) of $X \subseteq D$ containing $x$.

Figure 1: One-dimensional computation of a general adaptive neighborhood set $V^h_{m_{\triangle}}(x)$ using the LIP framework. For a point $x$, a range of tolerance $m_{\triangle}$ is first computed around $h(x)$. Secondly, the inverse map of this interval gives a subset of the 1D spatial domain. Finally, the path-connected component holding $x$ provides its logarithmic adaptive neighborhood (LAN = LIP + GAN) set $V^h_{m_{\triangle}}(x)$.

Figure 1 gives a visual impression, on a 1D example, of the computation of a GAN set defined in the LIP framework, that is to say with $\overset{\triangle}{+}$ and $\overset{\triangle}{-}$ as the GLIP vectorial operations.
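To make this construction concrete, here is a minimal sketch of the three-step computation for the CLIP case (usual + and -, so the tolerance interval is simply [h(x) - m0, h(x) + m0]), with an 8-connectivity assumption; the function name and the toy image are our own.

```python
import numpy as np
from scipy.ndimage import label

def gan_set(h, x, m0):
    """GAN of seed point x = (row, col) for criterion image h and tolerance m0,
    in the CLIP case (usual + and -)."""
    # Step 1-2: points whose criterion value lies within the tolerance interval
    mask = np.abs(h - h[x]) <= m0
    # Step 3: path-connected components of the thresholded set (8-connectivity)
    labels, _ = label(mask, structure=np.ones((3, 3), dtype=int))
    # Keep only the component that contains the seed point
    return labels == labels[x]

# Toy example: a bright square on a dark background
h = np.zeros((64, 64))
h[16:48, 16:48] = 200.0
gan = gan_set(h, (32, 32), m0=10)
print(gan.sum())  # number of points in the GAN of the central point
```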

Figure 2 illustrates the GAN set of a point $x$ on an electrophoresis gel image $f$, provided by the software Micromorph, computed with the luminance criterion $h_1$ (that is to say with $h_1 = f$) or the contrast (in the sense of [28, 86]) criterion $h_2$ (defined by (30)). In practice, the choice of the appropriate criterion results from the specific kind of the addressed imaging situation.

These GAN sets satisfy several properties such as reflexivity, increasing with respect to $m_0$ (nesting property), equality between iso-valued neighboring points, addition invariance, and multiplication compatibility [34, 35].

To illustrate the nesting property, the GAN sets of four initial points are computed on the "Lena" image (Figure 3) with the luminance acting as analyzing criterion and different values of the homogeneity tolerance m. These GANs are built within the classical linear image processing (CLIP) framework, that is to say with the usual operations + and —.

Figure 3 shows that the GAN sets are, through the analyzing criterion and the homogeneity tolerance, nested and spatially adaptive relating to the local structures of the studied image, allowing an efficient multiscale analysis to be performed.

3.3. Connections of the GANIP framework with human brightness perception

The purpose of this section is to discuss the connections of the GANIP approach to human brightness perception, namely, the spatial adaptivity, the multiscale adaptivity, and the morphological symmetry property which are known to be spatial abilities of the human visual system.

Figure 2: Original electrophoresis gel image (a), its luminance criterion $h_1$ (b), its contrast criterion $h_2$ (c), and a seed point $x$ (d). The adaptive neighborhood set of the seed point highlighted in (d) is shown in (e) and (f), respectively, according to the tolerances $m = 10$ and $m = 30$, in relation to the luminance criterion (b) or to the contrast criterion (c).

Figure 3: Nesting of the GAN sets of four seed points (b), using the luminance criterion (a) and different homogeneity tolerances m = 5, 10, 15, 20, and 25, encoded by the color table (c). The GANs are nested with respect to m. Following the color table (c), a GAN set defined with a given homogeneity tolerance m may be represented by several tinges of the color associated with its seed point. For instance, the GAN set of the point highlighted in the hair of Lena for m = 25 is represented by all the points colored with all tinges of yellow.

3.3.1. Spatial adaptivity

Generally, images exhibit a strong spatial variability [128], since they are composed of different textures, homogeneous areas, patterns, sharp edges, and small features. The importance of having a spatially adaptive framework is shown by the failure of stationary approaches to correctly model images, especially when dealing with inverse problems such as denoising or deblurring. However, taking into account the space-varying characteristics of a natural scene is a difficult task, since it requires additional parameters to be defined. The GANIP approach has been built to be spatially adaptive by means of an analyzing criterion $h$ that can be selected, for example, as the luminance or the contrast of the image to be studied. Therefore, it can be argued that the GANIP approach is closely related to the visual spatial adaptivity, which is known to be an ability of the human visual system.

3.3.2. Multiscale adaptivity

A multiscale image representation, such as pyramids [129], wavelet decomposition [130], or isotropic scale-space [131], generally takes into account analyzing scales which are global and a priori defined, that is to say based on extrinsic scales. This kind of multiscale analysis possesses a main drawback, since a priori knowledge related to the features of the studied image is consequently required. On the contrary, an intrinsic multiscale representation such as anisotropic scale-space [132] takes advantage of scales which are self-determined by the local image structures. Such an intrinsic decomposition does not need any a priori information and is consequently much more adapted to vision problems: the image itself determines the analyzing scales. The GANIP framework is an intrinsic multiscale approach, that is, adaptive with respect to the analyzing scales. Indeed, the different structures of an image, seen at specific scales, fit with the GANs $V^h_{m_0}(x)$ with respect to the homogeneity tolerance $m_0$, without any a priori information about the image. A more specific study comparing extrinsic and intrinsic approaches is exposed in [34].

The advantage of GANIP, contrary to most multiscale decompositions, is that the analyzing scales are automatically determined by the image itself. In this way, a GANIP multiscale decomposition, such as that proposed in [40], preserves significant details while simplifying the image along scales, which is suitable for segmentation.

3.3.3. Morphological symmetry property

Visually meaningful features are often geometrical, for example, edges, regions, objects [13]. According to the Gestalt theory [15, 133], "grouping" is the main process of human visual perception [16, 17, 134, 135]. That means that whenever points or groups of points (curves, patterns, etc.) have one or several characteristics in common, they get grouped and form a new larger visual object called a gestalt [17]. These grouping processes are known as grouping laws (in fact, rules or heuristics are more suited terms than laws): proximity, good continuation, closure, and so forth, and symmetry, which is indeed an interesting property used by the human visual system for pattern analysis [13, 136-139]. In the GANIP framework, the GAN adaptive structuring elements used for the morphological analysis of an image are chosen to be autoreflected (21) or symmetric (see Remark 2), according to the analyzing criterion h. This symmetry condition is more adapted to image analysis for topological and visual reasons [36]. It is important to note that this symmetry property is of a morphological nature and not only of a geometrical nature (i.e., a simple mirror symmetry [13]), which suits the way the human visual system performs a local "geodesic" (and not Euclidean) shape analysis [140, 141].

4. LANIP: LOGARITHMIC ADAPTIVE NEIGHBORHOOD IMAGE PROCESSING

The so-called logarithmic adaptive neighborhood image processing (LANIP) is a combination of the GANIP and LIP frameworks. In this way, the GANs are specifically introduced in the LIP context with its $\overset{\triangle}{+}$, $\overset{\triangle}{-}$, and $\overset{\triangle}{\times}$ vectorial operations. Therefore, LANIP-based mean, rank (Section 4.1), or morphological (Section 4.2) operators will be defined with those logarithmic adaptive neighborhoods (LANs) $V^h_{m_{\triangle}}(x)$ as operational windows.

4.1. LANIP-based mean and rank filtering

Mean and rank filtering are simple, intuitive, and easy to implement methods for spatially smoothing images, that is, reducing the amount of intensity variation between one pixel and the next. They are often used to reduce noise effects in images [9, 142].

The idea of mean filtering is simply to replace the gray tone of every point in an image with the mean ("average") gray tone of its neighbors, including itself. This has the effect of eliminating point values which are unrepresentative of their surroundings. Mean filtering is usually thought of as a convolution filter. Like other convolutions, it is based on an operational window, which represents the shape and size of the neighborhood to be slid across the image when calculating the mean. Often an isotropic operational window is used, such as a disk of radius 1, although larger operational windows (e.g., a disk of radius 2 or more) can be used for more severe smoothing. (Note that a small operational window can be applied more than once in order to produce a similar, but not identical, effect to a single pass with a large operational window.)

Rank filters in image processing sort (rank) the gray tones in some neighborhood of every point in ascending order, and replace the gray tone of the seed point by the value of a given rank k in the sorted list of gray tones. When performing the well-known median filtering [142], each point to be processed is replaced by the median value of all points in the selected neighborhood. The median value k of a population (the set of points in a neighborhood) is that value for which half of the population has smaller values than k, and the other half has larger values.

So, the LANIP-based mean and rank filters are introduced by substituting the classical isotropic neighborhoods, generally used for this kind of filtering, with the (anisotropic) logarithmic adaptive neighborhoods (LANs).
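As an illustration, a deliberately naive sketch of such an adaptive median filter, written with the usual difference rather than the LIP one for brevity, and with the adaptive neighborhood recomputed for every pixel; the actual LANIP operators would use the LIP operations and the implementation strategy of Section 4.3. The helper names are ours.

```python
import numpy as np
from scipy.ndimage import label

def lan_median_filter(f, m0):
    """Naive adaptive median filter: for every pixel, the operational window is
    its adaptive neighborhood built on the luminance criterion (h = f)."""
    out = np.empty_like(f, dtype=float)
    for idx in np.ndindex(f.shape):
        mask = np.abs(f - f[idx]) <= m0                          # homogeneity tolerance
        labels, _ = label(mask, structure=np.ones((3, 3), dtype=int))
        neighborhood = labels == labels[idx]                     # connected component of the seed
        out[idx] = np.median(f[neighborhood])                    # rank (median) over the neighborhood
    return out
```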

4.2. LANIP-based morphological filtering

The origin of mathematical morphology (MM) stems from the study of the geometry of porous media [143]. The mathematical analysis is based on set theory, integral geometry, and lattice algebra. Its development has been characterized by a cross-fertilization between applications, methodologies, theories, and algorithms. It leads to several processing tools aimed at image filtering, image segmentation and classification, image measurements, pattern recognition, and texture analysis [144].

The proposed LANIP-based mathematical morphology approach is introduced by using the LAN sets to define adaptive structuring elements. In the present paper, only flat MM (i.e., with structuring elements as subsets of $\mathbb{R}^2$) is considered, though the approach is not restricted to this case and can also address the general case of functional MM (i.e., with functional structuring elements) [35].

The space of images from $D$ into $\mathbb{R}$, denoted by $I$, is provided with the partial ordering relation $\leq$ defined in terms of the usual ordering relation $\leq$ of real numbers:

$$\forall (f, g) \in I^2, \quad f \leq g \iff (\forall x \in D, \ f(x) \leq g(x)). \tag{19}$$

Thus, the partially ordered set $(I, \leq)$, still denoted $I$, is a complete lattice [145].

Figure 4: Representation of a logarithmic adaptive neighborhood (LAN) structuring element $R^h_{m_{\triangle}}(x)$.

4.2.1. LAN structuring elements

To get the morphological duality (adjunction) between erosion and dilation, reflected (or transposed) structuring elements (SEs) [145], whose definition is mentioned below, shall be used. The reflected subset of $A(x) \subseteq D$, element of a collection $\{A(z)\}_{z \in D}$, is defined as

$$\check{A}(x) = \{z \in D \mid x \in A(z)\}. \tag{20}$$

The notion of autoreflectedness is then defined as follows [145]: the subset $A(x) \subseteq D$, element of a collection $\{A(z)\}_{z \in D}$, is autoreflected if and only if

$$\check{A}(x) = A(x), \tag{21}$$

that is to say: $\forall (x, y) \in D^2, \ x \in A(y) \iff y \in A(x)$.

Remark 1. The term autoreflectedness is used instead of symmetry, which is generally applied in the literature [145], so as to avoid confusion with geometrical symmetry. Indeed, an autoreflected subset $A(x) \subseteq D$ belonging to a collection $\{A(z)\}_{z \in D}$ is generally not symmetric with respect to the point $x$.

Spatially adaptive mathematical morphology using adaptive SEs which do not satisfy the autoreflectedness condition (21) has been formally proposed in [146] and practically used in image processing [147, 148]. Nevertheless, while autoreflectedness is restrictive from a strict mathematical point of view, it is relevant for topological, visual, morphological, and practical reasons [36]. From this point, autoreflected adaptive structuring elements are considered in this paper. Therefore, as the LAN sets $V^h_{m_{\triangle}}(x)$ are not autoreflected [34], it is necessary to introduce adaptive structuring elements (ASEs), denoted by $\{R^h_{m_{\triangle}}(x)\}_{x \in D}$. They are defined while satisfying the GAN paradigm and the autoreflectedness condition:

$$\forall (m_{\triangle}, h, x) \in E_+ \times \mathcal{C} \times D, \quad R^h_{m_{\triangle}}(x) = \bigcup \left\{ V^h_{m_{\triangle}}(z) \mid x \in V^h_{m_{\triangle}}(z) \right\}. \tag{22}$$

Those adaptive SEs are anisotropic and self-defined with respect to the criterion mapping $h$. They satisfy several properties such as symmetry, reflexivity, increasing and geometric nesting with regard to $m_{\triangle}$, and translation invariance and multiplication compatibility with regard to $h$ [34].

Figure 5: Example of adaptive $R^h_{m_{\triangle}}$ and nonadaptive $B_r$ structuring elements, with three values both for the homogeneity tolerance parameter $m_{\triangle}$ and for the disk radius $r$. The shapes of $B_r(x_1)$ and $B_r(x_2)$ are identical, and $\{B_r(x)\}_r$ is a family of homothetic sets for each point $x \in D$. On the contrary, the shapes of $R^h_{m_{\triangle}}(x_3)$ and $R^h_{m_{\triangle}}(x_4)$ are dissimilar, and $\{R^h_{m_{\triangle}}(x)\}_{m_{\triangle}}$ is not a family of homothetic sets.

Figure 5 compares the shape of usual SEs $B_r(x)$, as disks of radius $r \in \mathbb{R}_+$, and adaptive SEs $R^h_{m_{\triangle}}(x)$, as sets self-defined with respect to the criterion mapping $h$ and the homogeneity tolerance $m_{\triangle} \in E_+$.

Remark 2. Autoreflectedness is argued to be more adapted to image analysis for both topological and morphological reasons. In fact, it allows a morphologically symmetric neighborhood system $R^h_{m_{\triangle}}(x)$ to be defined at each point $x$ belonging to $D$. Topologically, it means that if $x$ is in the neighborhood of $y$ at level $m_{\triangle}$ ($x \in R^h_{m_{\triangle}}(y)$), then $y$ is as close to $x$ as $x$ is close to $y$ ($y \in R^h_{m_{\triangle}}(x)$). In terms of metric, this is a required condition to define a distance function $d$, starting from all the $R^h_{m_{\triangle}}(\cdot)$, satisfying the symmetry axiom $d(x, y) = d(y, x)$ [149]. Indeed, symmetry is needed to introduce a nondegenerate topological metric space (the authors are currently working on topological approaches with respect to the GAN paradigm).

The next step consists in defining the adaptive elementary operators of mathematical morphology, with the aim of building adaptive filters.

4.2.2. LAN elementary morphological operators

The elementary dual operators of adaptive dilation and erosion are defined according to the flat ASEs $R^h_{m_{\triangle}}(x)$. The formal definitions are given as follows: for all $(m_{\triangle}, h, f, x) \in E_+ \times \mathcal{C} \times I \times D$,

$$D^h_{m_{\triangle}}(f)(x) = \sup_{w \in R^h_{m_{\triangle}}(x)} f(w),$$
$$E^h_{m_{\triangle}}(f)(x) = \inf_{w \in R^h_{m_{\triangle}}(x)} f(w).$$

Next, lattice theory allows the most elementary (adaptive) morphological filters to be defined [145]. More precisely, the adaptive closing and opening are, respectively, defined as follows: for all $(m_{\triangle}, h, f, x) \in E_+ \times \mathcal{C} \times I \times D$,

$$C^h_{m_{\triangle}}(f)(x) = E^h_{m_{\triangle}} \circ D^h_{m_{\triangle}}(f)(x),$$
$$O^h_{m_{\triangle}}(f)(x) = D^h_{m_{\triangle}} \circ E^h_{m_{\triangle}}(f)(x).$$

Moreover, with the "luminance" criterion ($h = f$), the adaptive dilation and erosion satisfy the connectedness [150] condition, which is of great morphological importance:

$$\forall m_{\triangle} \in E_+, \quad D^f_{m_{\triangle}}(f) \text{ and } E^f_{m_{\triangle}}(f) \text{ are connected operators.}$$

Remark 3. An operator $\psi : I \to I$ is connected if and only if, for all $f \in I$ and all neighboring points $(x, y)$, $f(x) = f(y) \Rightarrow \psi(f)(x) = \psi(f)(y)$.

This property is an overwhelming advantage compared to the usual operators, which fail to satisfy this connectedness condition. Besides, it allows several connected operators to be defined, built by composition or combination with the supremum and the infimum [150] of these adaptive elementary morphological operators, such as the adaptive closings and openings. Thus, the operators $OC^h_{m_{\triangle}} = O^h_{m_{\triangle}} C^h_{m_{\triangle}}$ and $CO^h_{m_{\triangle}} = C^h_{m_{\triangle}} O^h_{m_{\triangle}}$, called adaptive opening-closing and adaptive closing-opening, respectively, are (adaptive) morphological filters [151] and, in addition, connected operators with the luminance criterion.
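A minimal sketch of these adaptive dilation and erosion operators (written with the usual difference for brevity, luminance criterion, 8-connectivity, and our own helper names). It exploits the fact that $w \in R^h_{m}(x)$ if and only if some seed $z$ has both $x$ and $w$ in its GAN $V^h_{m}(z)$, so the sup/inf over $R^h_{m}(x)$ can be accumulated seed by seed:

```python
import numpy as np
from scipy.ndimage import label

STRUCT = np.ones((3, 3), dtype=int)  # 8-connectivity

def gan(h, seed, m):
    """GAN of 'seed' for criterion image h and tolerance m (usual +/- case)."""
    mask = np.abs(h - h[seed]) <= m
    labels, _ = label(mask, structure=STRUCT)
    return labels == labels[seed]

def adaptive_dilation_erosion(f, m):
    """Naive adaptive dilation/erosion with the autoreflected ASEs R(x),
    using the luminance criterion (h = f)."""
    dil = np.full(f.shape, -np.inf)
    ero = np.full(f.shape, np.inf)
    for z in np.ndindex(f.shape):
        v = gan(f, z, m)                         # V(z)
        dil[v] = np.maximum(dil[v], f[v].max())  # contributes to D(f)(x) for all x in V(z)
        ero[v] = np.minimum(ero[v], f[v].min())  # contributes to E(f)(x) for all x in V(z)
    return dil, ero
```

The adaptive closing and opening then follow by composing these two passes (erosion of the dilation, and dilation of the erosion, respectively).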

4.2.3. LAN sequential morphological operators

The families of adaptive morphological filters $\{O^h_{m_{\triangle}}\}_{m_{\triangle} > 0}$ and $\{C^h_{m_{\triangle}}\}_{m_{\triangle} > 0}$ are generally not ordered collections. Nevertheless, such families, practically fruitful in multiscale image decomposition, are built by naturally reiterating the adaptive dilation or erosion. Explicitly, the adaptive sequential dilation, erosion, closing, and opening are, respectively, defined as follows: for all $(m_{\triangle}, p, h, f, x) \in E_+ \times \mathbb{N} \times \mathcal{C} \times I \times D$,

$$D^h_{m_{\triangle},p}(f)(x) = \underbrace{D^h_{m_{\triangle}} \circ \cdots \circ D^h_{m_{\triangle}}}_{p \text{ times}}(f)(x),$$
$$E^h_{m_{\triangle},p}(f)(x) = \underbrace{E^h_{m_{\triangle}} \circ \cdots \circ E^h_{m_{\triangle}}}_{p \text{ times}}(f)(x),$$
$$C^h_{m_{\triangle},p}(f)(x) = E^h_{m_{\triangle},p} \circ D^h_{m_{\triangle},p}(f)(x),$$
$$O^h_{m_{\triangle},p}(f)(x) = D^h_{m_{\triangle},p} \circ E^h_{m_{\triangle},p}(f)(x).$$

The morphological duality of $D^h_{m_{\triangle},p}$ and $E^h_{m_{\triangle},p}$ provides, among other things, the two sequential morphological filters $C^h_{m_{\triangle},p}$ and $O^h_{m_{\triangle},p}$.

Moreover, these last ones generate ordered collections of operators: for all $(m_{\triangle}, h) \in E_+ \times \mathcal{C}$,

(1) $\{O^h_{m_{\triangle},p}\}_{p \geq 0}$ is a decreasing sequence,

(2) $\{C^h_{m_{\triangle},p}\}_{p \geq 0}$ is an increasing sequence.

Such filters will be used in a real application situation (Section 5.1).

4.3. Implementation issues

From a computational point of view, the algorithms of the proposed LANIP-based operators are built in two steps. First, the LAN sets are computed and stored in random access memory (RAM). Some properties [36] are used to save memory and reduce computation time. Second, the operators are run on the stored LAN sets. In this way, the LAN sets are computed only once, even for advanced operators such as composed morphological filters.

Compared to the classical transformations where the operational windows are fixed-shape and fixed-size for all the points of the image, the computation of the LANs sets, which depends on several characteristics such as the selected criterion or the considered GLIP framework, increases the running time of those adaptive operators.
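As a sketch of this two-step strategy (the class name and the dictionary-based cache are our own; the memory-saving properties of [36] are not exploited here):

```python
import numpy as np
from scipy.ndimage import label

STRUCT = np.ones((3, 3), dtype=int)

class LanCache:
    """Compute each adaptive neighborhood once and reuse it for several operators."""
    def __init__(self, h, m):
        self.h, self.m = h, m
        self._store = {}

    def lan(self, seed):
        if seed not in self._store:
            mask = np.abs(self.h - self.h[seed]) <= self.m
            labels, _ = label(mask, structure=STRUCT)
            self._store[seed] = labels == labels[seed]
        return self._store[seed]

def lan_filter(f, cache, reducer):
    # Run any pointwise reducer (mean, median, ...) over the cached neighborhoods
    out = np.empty_like(f, dtype=float)
    for idx in np.ndindex(f.shape):
        out[idx] = reducer(f[cache.lan(idx)])
    return out

# Usage: two filters sharing the same stored neighborhood sets
# cache    = LanCache(h=f, m=10.0)
# mean_img = lan_filter(f, cache, np.mean)
# med_img  = lan_filter(f, cache, np.median)
```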

5. APPLICATION ISSUES FOCUSED ON BIOMEDICAL, MATERIALS, AND VISUAL IMAGING

LANIP-based processes are now exposed and applied in the field of image multiscale decomposition, image restoration, image segmentation and image enhancement on practical application examples more particularly focused on biomedical, materials, and visual imaging.

The detection of metallurgic grain boundaries, endothelial corneal cells, cerebrovascular accidents (CVA) and vascular network of a human retina are investigated, successively.

5.1. Image multiscale decomposition

This application addresses the detection of cerebrovascular accidents (CVA). A stroke or cerebrovascular accident occurs when the blood supply to the brain is disturbed in some way. As a result, brain cells are starved of oxygen, causing some cells to die and leaving other cells damaged. A multiscale representation of a brain image $f$ is built with an LANIP-based decomposition process using adaptive sequential openings $O^f_{m_{\triangle},p}$ (Figure 6), with the LAN structuring elements defined by the criterion mapping $h = f$ and the homogeneity tolerance $m_{\triangle} = 7$. Several levels of decomposition are exposed: $p = 1, 2, 4, 6, 8$, and $10$ (see Section 4.2.3). The main aim of this multiscale process is to highlight the stroke area, in order to help the neurologist in diagnosing the kind of stroke, and/or to allow a robust segmentation to be performed.

These results show the advantages of spatial adaptivity and intrinsic multiscale analysis of the LANIP-based operators. Moreover, the detection of the stroke area seems to be reachable at level p = 10, while accurately preserving its spatial and intensity characteristics which are needed for a robust segmentation.

Figure 6: Detection of cerebrovascular accidents. A decomposition process (b)-(g) is achieved with the LAN-based morphological sequential openings $O^f_{7,p}$, for $p = 1, 2, 4, 6, 8$, and $10$, applied on the original image (a). The detection of the stroke area seems to be reachable at level $p = 10$.

5.2. Image restoration

Most of the time, image filtering is a necessary step in image preprocessing, such as restoration, presegmentation, enhancement, sharpening, brightness correction, and so forth. The LANIP filtering allows such transformations to be defined. This section addresses the image restoration area with a concrete application example in visual image denoising.

The adaptive filters using the elementary LANs work well if the processed images are noise free or only slightly corrupted [35, 41].

In the presence of impulse noise, such as salt-and-pepper noise or uniformly distributed noise, the LANs need to be combined so as to provide efficient filtering operators. Indeed, the elementary LAN of a point corrupted by such noise generally does not fit the "true" region to which it belongs, for any homogeneity tolerance value $m_{\triangle}$.

Consequently, a specific kind of LAN, called the combined logarithmic adaptive neighborhood (C-LAN) and denoted by $Z^f_{m_{\triangle}}(\cdot)$, is introduced so as to enable images corrupted by such a kind of noise to be restored. C-LANs are built by combination (i.e., with the set union) of the LANs $V^f_{m_{\triangle}}(y)$, $y \in D$, using the luminance criterion. Explicitly, the C-LANs are defined as follows: for all $(m_{\triangle}, f, x) \in E_+ \times I \times D$,

$$Z^f_{m_{\triangle}}(x) = \begin{cases} \bigcup_{y \in B_1(x)} V^f_{m_{\triangle}}(y) & \text{if } A\big(V^f_{m_{\triangle}}(x)\big) < \pi t^2, \\ V^f_{m_{\triangle}}(x) & \text{otherwise,} \end{cases}$$

where $B_1(x)$ refers to the disk of radius 1 (due to the punctual spatial characterization of the noise) centered at $x$, and $A(X)$ to the area of $X$.

The parameter $t$, which acts like the radius of an equivalent disk, has been visually tuned to 0.6. A specific study should be led in order to find an automatic way of picking this parameter (probably linked to the percentage of damaged points in the image). These C-LANs allow the "true" neighborhood of a corrupted point $x$ to be detected, with the help of the area value of its LAN $V^f_{m_{\triangle}}(x)$.

The basic example of Figure 7 illustrates the ability of the C-LANs to represent the expected neighborhood of a corrupted point.
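A sketch of the C-LAN construction, written with the usual difference and the luminance criterion; the area test, the radius-1 disk (the point and its 4-neighbors), and the default t = 0.6 follow the text above, while the helper names and the toy image are ours.

```python
import numpy as np
from scipy.ndimage import label

STRUCT = np.ones((3, 3), dtype=int)  # 8-connectivity for the path-connected component

def lan(f, seed, m):
    mask = np.abs(f - f[seed]) <= m
    labels, _ = label(mask, structure=STRUCT)
    return labels == labels[seed]

def clan(f, x, m, t=0.6):
    """Combined LAN Z(x): if the LAN of x is too small (area below pi*t**2,
    i.e. x is likely an impulse-noise point), take the union of the LANs of the
    points of the radius-1 disk around x; otherwise keep the LAN of x."""
    vx = lan(f, x, m)
    if vx.sum() >= np.pi * t ** 2:
        return vx
    union = np.zeros_like(vx)
    for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        rr, cc = x[0] + dr, x[1] + dc
        if 0 <= rr < f.shape[0] and 0 <= cc < f.shape[1]:
            union |= lan(f, (rr, cc), m)
    return union

# The Figure 7 situation: an isolated dark pixel inside a bright rectangle
f = np.zeros((32, 32))
f[8:24, 8:24] = 255.0
f[16, 16] = 0.0
print(lan(f, (16, 16), m=20).sum())    # 1: the LAN of the noisy pixel is the pixel itself
print(clan(f, (16, 16), m=20).sum())   # the C-LAN covers the surrounding rectangle
```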

Remark 4. It is possible to introduce other combined LANs relating to the kind of noise [35].

Several operators based on these C-LANs can be introduced. Figure 8 illustrates a restoration process using median filtering. The classical median filter $Med_r$, using a disk of radius $r$, and the adaptive filters using the LANs ($V\text{-}Med^f_{20}(\cdot)$) and the C-LANs ($Z\text{-}Med^f_{20}(\cdot)$), with the luminance criterion and the parameter value 20 as homogeneity tolerance, are applied on the "Lena" image $g$, which is corrupted by a uniformly distributed impulse noise.

The results show the necessity of combining the LANs so as to perform a significant filtering. In addition, the median filter using the C-LANs supplies a better result than the usual median filter using an isotropic disk.

(a) Original image f with a black noisy point x located inside the white rectangle; (b) the LAN V^f_{m_A}(x) of the corrupted point x is the point itself; (c) the C-LAN Z^f_{m_A}(x) of the corrupted point x is the white rectangle.

Figure 7: Image (a) contains a black point x (gray tone 0) inside the white rectangle (gray tone 255). The point x visually appears as corrupted by an impulse noise. For m_A < 255 and π t² > 1, the LAN V^f_{m_A}(x) of the noisy point x is the point itself (b), while its C-LAN Z^f_{m_A}(x) is the whole rectangle (c). In this way, adaptive median filtering in the presence of impulse noise should be more accurate using the C-LANs.

(a) Clean image f; (b) noisy image g; (c) Med_1(g); (d) V-Med_{20}(g); (e) Z-Med_{20}(g).

Figure 8: Image restoration. Image (a) is corrupted with a 10% uniformly distributed impulse noise (b). Median filtering is used to filter the noisy image: usual filtered image (c) with a disk of radius 1 as operational window, adaptive median filtered image (d) with the LANs V_{20}(·), and adaptive median filtered image (e) with the C-LANs Z_{20}(·). The most efficient denoising is supplied by the C-LAN filter.


5.3. Image segmentation

The segmentation of an intensity image can be defined as its partition (in fact, the partition of the spatial domain D) into different connected regions, according to a homogeneity condition [142].

In this paper, the segmentation process is based on a morphological transformation called the watershed [152] and on a LANIP-based decomposition process. It will be illustrated on boundary detection both in a human corneal endothelial image and in a metallurgical grain image.

5.3.1. Human corneal endothelium

The cornea is the transparent surface in front of the eye. Ex vivo controls are done by optical microscopy on corneal cells before grafting. This image acquisition system gives gray-tone images (a part is shown in Figure 9(a)), which are segmented, for example by the SAMBA software [153], into regions representing cells. These regions are used to compute statistics in order to quantify the corneal quality before transplantation.

The authors proposed a LANIP-based approach to segment the cornea cells (Figure 9(d)). The process is achieved by the alternating morphological closing-opening using the LAN sets, followed by a watershed transformation, denoted by W.

(c) W(CO_4(f)); (d) W(CO^f_{m_A}(f)).

Figure 9: Segmentation of human endothelial cornea cells (a). The process achieved by the LANIP-based morphological approach (d) provides better results (from the point of view of ophthalmologists) than the SAMBA software (b) or the corresponding classical morphological approach (c).

The result is compared with the corresponding usual approach (Figure 9(c)). Another comparison is proposed with the SAMBA software, whose process consists of thresholding, filtering, and skeletonization (Figure 9(b)) [153]. The parameters m_A and r of the adaptive and classical morphological filters have been tuned to visually provide the best possible segmentation.

The detection process achieved by the LANIP-based morphological approach provides better results (from the point of view of ophthalmologists) than the SAMBA software or the corresponding classical morphological approach. These results highlight the spatial adaptivity of the LANIP-based operators, in contrast with the usual morphological ones. A more specific study should be conducted on this promising cell detection process.

5.3.2. Metallurgic grain boundaries detection

A real example in the field of image segmentation is presented here (Figure 10) on a metallurgical grain boundary image, in the presence of a locally small change in scene illumination. The goal of the application is to detect the boundaries of the grains. Several methods addressing this problem have already been proposed. For example, Chazallon and Pinoli [154] proposed an efficient approach based on the residues of alternating sequential filters. Nevertheless, the method still has a few drawbacks for complex images: its inability to remove some artifacts and to preserve disconnected grain boundaries. On the whole, the published methods most of the time need advanced processes and metallographically pertinent and tractable a priori knowledge, requiring expert intervention.

In this application example, a simple segmentation method resulting from two steps is proposed:

(1) a decomposition process, through nonadaptive and LANIP-based closing-openings, is applied on the original image,

(2) the watershed transformation is then computed on these segmentation functions.

This approach does not require a gradient operator. Indeed, since the crest lines of the original image fit with the narrow grain boundaries, the watershed transformation, denoted by W, is directly computed on the filtered images (processed with closing-openings) in order to avoid the over-segmentation seen in Figure 10(e).

A comparison between the LANIP-based approach and the corresponding classical one is performed through the filtering process.
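The following is a minimal sketch of the nonadaptive baseline of this two-step method: a closing-opening with a fixed disk of radius r smooths the grain interiors, and the watershed is then computed directly on the filtered image (no gradient), its crest lines giving the grain boundaries. The adaptive LANIP variant replaces the fixed disk by the LAN structuring elements and is not reproduced here; function and parameter names (and the image path) are illustrative.

```python
import numpy as np
from skimage import io
from skimage.morphology import disk, closing, opening
from skimage.segmentation import watershed

def closing_opening_watershed(image, radius):
    se = disk(radius)                                   # fixed (nonadaptive) structuring element
    filtered = opening(closing(image, se), se)          # closing-opening CO_r(f)
    labels = watershed(filtered, watershed_line=True)   # flood from the regional minima of CO_r(f)
    boundaries = labels == 0                            # watershed (crest) lines = grain boundaries
    return filtered, boundaries

# Example usage on a gray-tone grain image (path is illustrative):
# f = io.imread("grains.png", as_gray=True)
# filtered, boundaries = closing_opening_watershed(f, radius=2)
```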

(a) Original f; (b) CO_1(f); (c) CO_2(f); (d) CO_3(f); (e) W(f); (f) W(CO_1(f)); (g) W(CO_2(f)); (h) W(CO_3(f)); (i)-(k) adaptive closing-openings; (l) W(CO^f_{10}(f)); (m) W(CO^f_{20}(f)); (n) W(CO^f_{30}(f)).

Figure 10: Pyramidal segmentation of a real metallurgical grain boundary image (a). First, the original image is decomposed using classical (b)-(d) and adaptive LANIP-based (i)-(k) closing-openings. Second, the watershed transformation, denoted by W, is computed on the decomposed images, achieving the images (f)-(h) and (l)-(n) for the nonadaptive and LANIP approaches, respectively. The original image is decomposed so as to avoid an over-segmentation (e). The adaptive approach provides a well-accepted segmentation, reached for the homogeneity tolerance m_A = 23.

Note that CO_r (resp., CO^f_{m_A}) represents the usual closing-opening using the disk of radius r centered at x, denoted by B_r(x), as the usual SE (resp., the adaptive closing-opening using the ASEs {R_{m_A}(x)}_{x ∈ D}, computed with the luminance criterion).

For each approach, three parameter values have been fixed: r = 1, 2, 3 for the radius of the usual SEs, and m_A = 10, 20, 30 for the homogeneity tolerance of the adaptive SEs.

The LANIP approach clearly overcomes the usual nonadaptive one, achieving a much better segmentation of the original image, with the visually expected result reached for m_A = 23. Indeed, these connected LAN filters do not damage the grain boundaries and smooth the image well inside the grains, owing to the spatial adaptivity of the GANIP approach. The uneven illumination conditions are robustly addressed by the LIP approach, showing its connections with the psychophysical settings of the image, contrary to the usual frameworks. Consequently, the combination of the GANIP and the LIP approaches, that is to say LANIP, is needed to robustly address this application.

5.4. Image enhancement

Image enhancement is the improvement of image quality [9, 142], desired, for example, for visual inspection or for machine analysis. Physiological experiments have shown that very small changes in luminance are recognized by the human visual system in regions of continuous gray tones, but not seen at all in regions of some discontinuities [1]. Therefore, a design goal for image enhancement is often to smooth images in the more uniform regions while preserving edges. On the other hand, it has also been shown that somewhat degraded images with enhancement of certain features, for example edges, can simplify image interpretation both for a human observer and for machine recognition [1]. A second design goal, therefore, is image sharpening [142].

In this paper, the considered image enhancement technique is an edge sharpening process: the approach is similar to unsharp masking [155] type enhancement, where a high-pass portion is added to the original image. The contrast enhancement process is realized through the toggle contrast [144], whose operator K_r is defined as follows: for all (f, x, r) ∈ I × D × R+,

K_r(f)(x) =
    D_r(f)(x)   if D_r(f)(x) - f(x) < f(x) - E_r(f)(x),
    E_r(f)(x)   otherwise,    (29)

where D_r and E_r denote the classical dilation and erosion, respectively, using a disk of radius r as structuring element.

This (nonadaptive) toggle contrast will be compared with the adaptive LIP toggle contrast, using a "contrast" criterion. This transformation requires a "contrast" definition, which is introduced in the digital setting of the LIP framework [28] (see [86] for the continuous setting): the LIP contrast at a point x ∈ D of an image f ∈ I, denoted by C(f)(x), is defined with the help of the gray values of its neighbors included in a disk V(x) of radius 1, centered at x:

C(f)(x) = (1/#V(x)) ⊗ Σ^Δ_{y ∈ V(x)} ( max(f(x), f(y)) ⊖ min(f(x), f(y)) ),

where Σ^Δ, ⊗, and ⊖ denote the sum, the scalar multiplication, and the subtraction in the LIP sense, respectively, and # denotes the cardinal symbol.

Consequently, the so-called adaptive toggle LIP contrast is the transformation K^{C(f)}_{m_A}, where C(f) and m_A represent the criterion mapping and the homogeneity tolerance, respectively, within the LIP framework (required for the LANs definition): for all (f, x, m_A) ∈ I × D × E+,

K^{C(f)}_{m_A}(f)(x) =
    D^{C(f)}_{m_A}(f)(x)   if D^{C(f)}_{m_A}(f)(x) - f(x) < f(x) - E^{C(f)}_{m_A}(f)(x),
    E^{C(f)}_{m_A}(f)(x)   otherwise,

where D^{C(f)}_{m_A} and E^{C(f)}_{m_A} denote the adaptive dilation and adaptive erosion, respectively, using ASEs computed on the criterion mapping C(f) with the homogeneity tolerance m_A.

Figure 11 exposes an illustration example of image enhancement through the usual and adaptive toggle contrasts. The process is applied on a real image acquired on the retina of a human eye.

This image enhancement example confirms that the LANIP operators are more effective than the corresponding classical ones. Indeed, the adaptive toggle LIP contrast performs a locally accurate image enhancement, taking into account the notion of contrast within the spatial structures of the image. Consequently, only the transitions are sharpened, while the homogeneous regions are preserved. On the contrary, the usual toggle contrast enhances the image in a uniform way. Thus, the spatial zones around transitions are rapidly damaged as soon as the filtering becomes too strong.
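The following is a minimal sketch of the two nonadaptive ingredients just defined: the classical toggle contrast K_r and the LIP contrast map C(f). It assumes 8-bit images, the standard LIP operations on the gray-tone range [0, M) with M = 256 (a ⊖ b = M(a − b)/(M − b), a ⊕ b = a + b − ab/M, λ ⊗ a = M − M(1 − a/M)^λ), a square window approximating the disk B_r, the 4-neighborhood approximating the unit disk V(x), and wrap-around handling of the image borders. The adaptive (LAN-based) version is not reproduced here; names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

M = 256.0  # upper bound of the LIP gray-tone range (assumed)

def toggle_contrast(f, radius):
    """Classical toggle contrast K_r: pick the dilation or the erosion, whichever is closer."""
    size = 2 * radius + 1                                  # square window approximating B_r
    d = grey_dilation(f, size=(size, size)).astype(float)
    e = grey_erosion(f, size=(size, size)).astype(float)
    return np.where(d - f < f - e, d, e)

def lip_contrast(f):
    """LIP contrast map C(f): LIP average of the LIP differences with the 4-neighbors."""
    f = f.astype(float)
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]            # neighbors in the unit disk V(x)
    acc = np.zeros_like(f)                                 # accumulated LIP sum
    for dy, dx in shifts:
        g = np.roll(np.roll(f, dy, axis=0), dx, axis=1)    # borders wrap around (simplification)
        hi, lo = np.maximum(f, g), np.minimum(f, g)
        diff = M * (hi - lo) / (M - lo)                    # LIP subtraction hi ⊖ lo
        acc = acc + diff - acc * diff / M                  # LIP addition acc ⊕ diff
    lam = 1.0 / len(shifts)
    return M - M * (1.0 - acc / M) ** lam                  # LIP scalar multiplication (1/#V(x)) ⊗ acc
```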

6. CONCLUSION AND FUTURE WORKS

In this paper, the logarithmic adaptive neighborhood image processing (LANIP) framework has been presented and applied in several image processing areas: image multiscale decomposition, image restoration, image segmentation, and image enhancement. The LANIP approach is a combination of the logarithmic image processing (LIP) [21] and the general adaptive neighborhood image processing (GANIP) [34] frameworks: LANIP = LIP + GANIP. The consistency of the LANIP framework with human brightness perception has been shown through its connections with several visual laws and characteristics: intensity range inversion, saturation characteristic, Weber's and Fechner's laws, psychophysical contrast, spatial adaptivity, multiscale adaptivity, and the morphological symmetry property. The application issues, focused on biomedical, materials, and visual imaging, have enabled the practical relevance and the visual consistency of the LANIP approach to be illustrated at the same time on real application examples. The LANIP approach is technically built upon an analyzing criterion based on a local measurement such as the luminance or the contrast, as presented and illustrated in Sections 3 and 5. However, as noted in this paper, other analyzing criterion mappings than luminance and contrast may be used, such as (local measurements of) orientation, thickness, curvature, shape, and so on [35]. Therefore, analyzing maps associated with the image(s) to be studied are or may be available for each criterion, allowing their detailed multiscale adaptive nonlinear radiometric, geometric, morphological, or textural representation, processing, and analysis to be performed.

(a) Original image f; (b) LIP contrast C(f); (c)-(f) usual toggle contrast K_r(f) for r = 1, 5, 10, 20; (g)-(j) adaptive toggle LIP contrast K^{C(f)}_{m_A}(f) for increasing homogeneity tolerances.

Figure 11: Image enhancement through the toggle contrast process. The operator is applied on a real image (a) acquired on the retina of a human eye. The enhancement is achieved with the usual toggle contrast (c)-(f) and the LANIP-based toggle LIP contrast (g)-(j). Using the usual toggle contrast, the edges are disconnected as soon as the filtering becomes too strong. On the contrary, such structures are preserved and sharpened with the LANIP filters.

The authors are currently working on these aspects, particularly on four of them (orientation, thickness, curvature, and shape) that are closely connected to human brightness perception in accordance with the Gestalt theory, especially concerning the visual grouping process [15, 137]. Moreover, the authors wish to investigate the statistical perspectives of the LANIP-based filters.

ACKNOWLEDGMENTS

The authors wish to thank Professors P. Gain and F. G. Barral from the University Hospital Center of Saint-Etienne, France, who have kindly provided the human corneal endothelial images (Figures 9(a), 9(b)) and the brain image damaged by a cerebrovascular accident (Figure 6(a)).

REFERENCES

[1] T. G. Stockham Jr., "Image processing in the context of a visual model," Proceedings of the IEEE, vol. 60, no. 7, pp. 828-842, 1972.

[2] B. R. Hunt, "Digital image processing," Proceedings of the IEEE, vol. 63, no. 4, pp. 693-708, 1975.

[3] D. H. Kelly, "Image processing experiments," Journal of the Optical Society of America, vol. 51, no. 10, pp. 1095-1101, 1961.

[4] A. K. Jain, "Advances in mathematical models for image processing," Proceedings of the IEEE, vol. 69, no. 5, pp. 502-528, 1981.

[5] D. J. Granrath, "The role of human visual models in image processing," Proceedings of the IEEE, vol. 69, no. 5, pp. 552-561, 1981.

[6] D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, W. H. Freeman, San Fransisco, Calif, USA, 1982.

[7] D. G. Myers, Digital Signal Processing Efficient Convolution and Fourier Transform Technique, Prentice-Hall, Upper Saddle River, NJ, USA, 1990.

[8] W. F. Schreiber, Fundamentals of Electronic Imaging Systems: Some Aspects of Image Processing, Springer, Berlin, Germany, 2nd edition, 1991.

[9] R. C. Gonzalez and P. Wintz, Digital Image Processing, Addison-Wesley, Reading, Mass, USA, 1987.

[10] J.-C. Pinoli, "A general comparative study of the multiplicative homomorphic, log-ratio and logarithmic image processing approaches," Signal Processing, vol. 58, no. 1, pp. 11-45, 1997.

[11] J. Shen, "On the foundations of vision modeling. I. Weber's law and Weberized TV restoration," Physica D: Nonlinear Phenomena, vol. 175, no. 3-4, pp. 241-251, 2003.

[12] M. Shah, "Guest introduction: the changing shape of computer vision in the twenty-first century," International Journal of Computer Vision, vol. 50, no. 2, pp. 103-110, 2002.

[13] J. Shen, "On the foundations of vision modeling. II. Mining of mirror symmetry of 2-D shapes," Journal of Visual Communication and Image Representation, vol. 16, no. 3, pp. 250-270, 2005.

[14] S. E. Palmer, "Theoretical approaches to vision," in Vision Science: Photons to Phenomenology, pp. 45-92, MIT Press, Cambridge, Mass, USA, 1999.

[15] M. Wertheimer, "Laws of organization in perceptual forms," in A Sourcebook of Gestalt Psychology, pp. 71-88, Hartcourt Brace, San Diego, Calif, USA, 1938.

[16] I. Kovacs, "Gestalten of today: early processing of visual contours and surfaces," Behavioural Brain Research, vol. 82, no. 1, pp. 1-11, 1996.

[17] A. Desolneux, L. Moisan, and J.-M. Morel, "Computational gestalts and perception thresholds," Journal of Physiology-Paris, vol. 97, no. 2-3, pp. 311-324, 2003.

[18] Z. Xie and T. G. Stockham Jr., "Toward the unification of three visual laws and two visual models in brightness perception," IEEE Transactions on Systems, Man and Cybernetics, vol. 19, no. 2, pp. 379-387, 1989.

[19] M. Jourlin and J.-C. Pinoli, "Logarithmic image processing," Acta Stereologica, vol. 6, pp. 651-656, 1987.

[20] M. Jourlin and J.-C. Pinoli, "A model for logarithmic image processing," Journal of Microscopy, vol. 149, pp. 21-35, 1988.

[21] M. Jourlin and J.-C. Pinoli, "Logarithmic image processing: the mathematical and physical framework for the representation and processing of transmitted images," Advances in Imaging and Electron Physics, vol. 115, pp. 129-196, 2001.

[22] J.-C. Pinoli, "The logarithmic image processing model: connections with human brightness perception and contrast estimators," Journal of Mathematical Imaging and Vision, vol. 7, no. 4, pp. 341-358, 1997.

[23] J.-C. Pinoli, Contribution à la modélisation, au traitement et a l'analyse d'image, Ph.D. thesis, Department of Mathematics, University of Saint-Etienne, Saint-Etienne, France, February 1987.

[24] F. Mayet, J.-C. Pinoli, and M. Jourlin, "Justifications physiques et applications du modele LIP pour le traitement des images obtenues en lumiere transmise," Traitement du Signal, vol. 13, pp. 251-262, 1996.

[25] G. Deng, Image and signal processing using the logarithmic image processing model, Ph.D. thesis, Department of Electronic Engineering, La Trobe University, Melbourne, Australia, 1993.

[26] G. Deng and L. W. Cahill, "Image modelling and processing using the logarithmic image processing model," in Proceedings of the IEEE Workshop on Visual Signal Processing and Communications, pp. 61-64, Melbourne, Australia, September 1993.

[27] J. C. Brailean, D. Little, M. L. Giger, C.-T. Chen, and B. J. Sullivan, "Application of the EM algorithm to radiographic images," Medical Physics, vol. 19, no. 5, pp. 1175-1182, 1992.

[28] M. Jourlin, J.-C. Pinoli, and R. Zeboudj, "Contrast definition and contour detection for logarithmic images," Journal of Microscopy, vol. 156, pp. 33-40, 1989.

[29] G. R. Arce and R. E. Foster, "Detail-preserving ranked-order based filters for image processing," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 1, pp. 83-98, 1989.

[30] M. Nagao and T. Matsuyama, "Edge preserving smoothing," Computer Graphics and Image Processing, vol. 9, no. 4, pp. 394-407, 1979.

[31] W.-J. Song and W. A. Pearlman, "Restoration of noisy images with adaptive windowing and nonlinear filtering," in Visual Communications and Image Processing, vol. 707 of Proceedings ofSPIE, pp. 198-206, Cambridge, Mass, USA, 1986.

[32] P. Salembier, "Structuring element adaptation for morphological filters," Journal of Visual Communication and Image Representation, vol. 3, no. 2, pp. 115-136, 1992.

[33] R. C. Vogt, "A spatially variant, locally adaptive, background normalization operator," in Mathematical Morphology and Its Applications to Image Processing, J. Serra and P. Soille, Eds., pp. 45-52, Kluwer Academic, Dordrecht, The Netherlands, 1994.

[34] J. Debayle and J.-C. Pinoli, "General adaptive neighborhood image processing: Part I: introduction and theoretical aspects," Journal of Mathematical Imaging and Vision, vol. 25, no. 2, pp. 245-266, 2006.

[35] J. Debayle, General adaptive neighborhood image processing, Ph.D. thesis, Ecole Nationale Supérieure des Mines, Saint-Etienne, France, November 2005.

[36] J. Debayle and J.-C. Pinoli, "Spatially adaptive morphological image filtering using intrinsic structuring elements," Image Analysis and Stereology, vol. 24, no. 3, pp. 145-158, 2005.

[37] A. V. Oppenheim, "Generalized superposition," Information and Control, vol. 11, no. 5-6, pp. 528-536, 1967.

[38] A. V. Oppenheim, "Superposition in a class of nonlinear systems," Tech. Rep., Research Laboratory of Electronics, MIT, Cambridge, Mass, USA, 1965.

[39] J. Debayle and J.-C. Pinoli, "Adaptive-neighborhood mathematical morphology and its applications to image filtering and segmentation," in Proceedings of the 9th European Congress on Stereology and Image Analysis (ECSIA '05), vol. 2, pp. 123-130, Zakopane, Poland, May 2005.

[40] J. Debayle and J.-C. Pinoli, "Multiscale image filtering and segmentation by means of adaptive neighborhood mathematical morphology," in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 3, pp. 537540, Genova, Italy, September 2005.

[41] J. Debayle and J.-C. Pinoli, "General adaptive neighborhood image processing: Part II: practical application examples," Journal of Mathematical Imaging and Vision, vol. 25, no. 2, pp. 267-284, 2006.

[42] J. C. Dainty and R. Shaw, Image Science, Academic Press, New York, NY, USA, 1974.

[43] M. Born and E. Wolf, Principle of Optics, Cambridge University Press, New York, NY, USA, 1999.

[44] P. W. Atkins, Physical Chemistry, Oxford University Press, Oxford, UK, 5th edition, 1994.

[45] I. E. Gordon, Theories of Visual Perception, Psychology Press, New York, NY, USA, 2004.

[46] R. Watt, Understanding Vision, Academic Press, San Diego, Calif, USA, 1991.

[47] G. Mather, Foundations of Perception, Psychology Press, New York, NY, USA, 2006.

[48] A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1989.

[49] W. K. Pratt, Digital Image Processing, John Wiley & Sons, New York, NY, USA, 2nd edition, 1991.

[50] J. M. Palomares, J. González, and E. Ros, "Designing a fast convolution under the LIP paradigm applied to edge detection," in Proceedings of the 3rd International Conference on Advances in Pattern Recognition (ICAPR '05), S. Singh, M. Singh, C. Apte, and P. Perner, Eds., vol. 3687 of Lecture Notes in Computer Science, pp. 560-569, Bayh, UK, August 2005.

[51] J.-C. Pinoli, "Metrics, scalar product and correlation adapted to logarithmic images," Acta Stereologica, vol. 11, pp. 157-168, 1992.

[52] G. Courbebaisse, F. Trunde, and M. Jourlin, "Wavelet transform and LIP model," Image Analysis and Stereology, vol. 21, pp. 121-125, 2002.

[53] G. Deng and L. W. Cahill, "The logarithmic image processing model and its applications," in Proceedings of the 27th Asilo-mar Conference on Signals, Systems and Computers (ACSSC '93), vol. 2, pp. 1047-1051, Pacific Grove, Calif, USA, November 1993.

[54] Q.-Z. Wu and B.-S. Jeng, "Background subtraction based on logarithmic intensities," Pattern Recognition Letters, vol. 23, no. 13, pp. 1529-1536, 2002.

[55] C. Bron, P. Gremillet, D. Launey, et al., "Three-dimensional electron microscopy of entire cells," Journal of Microscopy, vol. 157, part 1,pp. 115-126, 1990.

[56] N. Hautiere, R. Labayrade, and D. Aubert, "Detection of visibility conditions through use of on board cameras," in Proceedings of IEEE Intelligent Vehicles Symposium (IVS '05), pp. 193-198, Las Vegas, Nev, USA, June 2005.

[57] G. Deng and L. W. Cahill, "Multiscale image enhancement using the logarithmic image processing model," Electronics Letters, vol. 29, no. 9, pp. 803-804, 1993.

[58] G. Deng, L. W. Cahill, and G. R. Tobin, "A study of logarithmic image processing model and its application to image enhancement," IEEE Transactions on Image Processing, vol. 4, no. 4, pp. 506-512, 1995.

[59] M. Jourlin and J.-C. Pinoli, "Image dynamic range enhancement and stabilization in the context of the logarithmic image processing model," Signal Processing, vol. 41, no. 2, pp. 225-237, 1995.

[60] G. Ramponi, "A cubic unsharp masking technique for contrast enhancement," Signal Processing, vol. 67, no. 2, pp. 211— 222, 1998.

[61] D.-C. Chang and W.-R. Wu, "Image contrast enhancement based on a histogram transformation of local standard deviation," IEEE Transactions on Medical Imaging, vol. 17, no. 4, pp. 518-531, 1998.

[62] N. Mishra, P. S. Kumar, R. Chandrakanth, and R. Ramachandran, "Image enhancement using logarithmic image processing model," IETE Journal of Research, vol. 46, no. 5, pp. 309-313, 2000.

[63] P. Corcuff, P. Gremillet, M. Jourlin, Y. Duvault, F. Leroy, and J. L. Leveque, "3D reconstruction of human air by confocal microscopy," Journal of the Society of Cosmetic Chemists, vol. 44, pp. 1-12, 1993.

[64] P. Gremillet, M. Jourlin, and J.-C. Pinoli, "LIP-model-based three-dimensionnal reconstruction and visualisation of HIV infected entire cells," Journal of Microscopy, vol. 174, no. 1, pp. 31-38, 1994.

[65] G. Deng and L. W. Cahill, "Contrast edge detection using the logarithmic image processing model," in Proceedings of the International Conference on Signal Processing, pp. 792-796, Beijing, China, October 1993.

[66] B. Roux and R. M. Faure, "Recognition and quantification of clinker phases by image analysis," Acta Stereologica, vol. 11, pp. 149-154, 1992.

[67] J. C. Brailean, B. J. Sullivan, C.-T. Chen, and M. L. Giger, "Evaluating the EM algorithm for image processing using a human visual fidelity criterion," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '91), vol. 4, pp. 2957-2960, Toronto, Ont, Canada, May 1991.

[68] G. Deng and L. W. Cahill, "A novel nonlinear image filtering algorithm using the logarithmic image processing model," in Proceedings of the 8th IEEE Workshop on Image and Multidimensional Signal Processing, pp. 61-64, Cannes, France, September 1993.

[69] M. Jourlin and N. Montard, "A logarithmic version of the top-hat transform in connection with the Asplund distance," Acta Stereologica, vol. 16, no. 3, pp. 201-208, 1998.

[70] B. Roux, Mise au point d'une methode d'analyse d'images qui reconnaît et quantifie les phases de clinker, Ph.D. thesis, University of Saint-Etienne, Saint-Etienne, France, 1993.

[71] D. Comaniciu, "LIP-based edge block detector for classified vector quantization," Revue Roumaine des Sciences Techniques Serie Electrotechnique et Energetique, vol. 41, no. 1, pp. 89-102, 1996.

[72] Z. J. Hou and G. W. Wei, "A new approach to edge detection," Pattern Recognition, vol. 35, no. 7, pp. 1559-1570, 2002.

[73] G. Deng and L. W. Cahill, "The contrast pyramid using the logarithmic image processing model," in Proceedings of the 2nd International Conference on Simulation and Modelling, pp. 75-82, Melbourne, Australia, July 1993.

[74] V. H. Metzler, T. M. Lehmann, and T. Aach, "Morphological multiscale shape analysis of light micrographs," in Nonlinear Image Processing XI, vol. 3961 of Proceedings of SPIE, pp. 227-238, San Jose, Calif, USA, January 2000.

[75] G. Deng and L. W. Cahill, "Generating sketch image for very low bit rate image communication," in Proceedings of the 1st IEEE Australian and New Zealand Conference on Intelligent Information Systems (ANZIIS '93), pp. 407-411, Perth, Australia, December 1993.

[76] G. Deng and L. W. Cahill, "Low-bit-rate image coding using sketch image and JBIG," in Still-Image Compression, vol. 2418 of Proceedings of SPIE, pp. 212-220, San Jose, Calif, USA, February 1995.

[77] F. Luthon, A. Caplier, and M. Lievin, "Spatiotemporal MRF approach to video segmentation: application to motion detection and lip segmentation," Signal Processing, vol. 76, no. 1, pp. 61-80, 1999.

[78] M. Lievin and F. Luthon, "Nonlinear color space and spatiotemporal MRF for hierarchical segmentation of face features in video," IEEE Transactions on Image Processing, vol. 13, no. 1,pp. 63-71,2004.


V. Patrascu and V. Buzuloiu, "Color image enhancement in the framework of logarithmic models," in Proceedings of the 8th IEEE International Conference on Telecommunications, vol. 1, pp. 199-204, Bucharest, Romania, June 2001.

A. V. Oppenheim, R. W. Schafer, and T. G. Stockham Jr., "Nonlinear filtering of multiplied and convolved signals," Proceedings of the IEEE, vol. 56, no. 8, pp. 1264-1291, 1968.

A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1975.

E. Harouche, S. Peleg, H. Shvaytser, and L. S. Davis, "Noisy image restoration by cost function minimization," Pattern Recognition Letters, vol. 3, no. 1, pp. 65-69, 1985.

H. Shvayster and S. Peleg, "Pictures as elements in a vector space," in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR '83), pp. 442-446, Washington, DC, USA, June 1983.

H. Shvayster and S. Peleg, "Inversion of picture operators," Pattern Recognition Letters, vol. 5, no. 1, pp. 49-61, 1987.

J. C. Brailean, D. Little, M. L. Giger, C.-T. Chen, and B. J. Sullivan, "Quantitative performance evaluation of the EM algorithm applied to radiographic images," in Biomedical Image Processing II, vol. 1450 of Proceedings of SPIE, pp. 40-46, San Diego, Calif, USA, February 1991.

J.-C. Pinoli, "A contrast definition for logarithmic images in the continuous setting," Acta Stereologica, vol. 10, pp. 85-96, 1991.

M. H. Pirenne, Vision and the Eye, Associated Book, New York, NY, USA, 2nd edition, 1967.

P. Zuidema, J. J. Koenderink, and M. A. Bouman, "A mechanistic approach to threshold behavior of the visual system," IEEE Transactions on Systems, Man and Cybernetics, vol. 13, no. 5, pp. 923-934, 1983.

D. A. Baylor, T. D. Lamb, and K.-W. Yau, "Responses of retinal rods to single photons," Journal of Physiology, vol. 288, no. 1, pp. 613-634, 1979.

D. A. Baylor, B. J. Nunn, and J. L. Schnapf, "The photocurrent, noise and spectral sensitivity of rods of the monkey Macaca fascicularis," Journal of Physiology, vol. 357, no. 1, pp. 575-607, 1984.

E. H. Weber, "Der tastsinn und das gemeingefiihl," in Handworterbuch der Physiologie, E. Wagner, Ed., vol. 3, pp. 481-588, Friedrich Vieweg & Sohn, Braunschweig, Germany, 1846.

S. S. Stevens, Handbook of Experimental Psychology, John Wiley & Sons, New York, NY, USA, 1951. H. R. Blackwell, "Contrast thresholds of the human eye," Journal of the Optical Society of America, vol. 36, pp. 624-643, 1946.

E. S. Lamar, S. Hecht, S. Shlaer, and D. Hendlay, "Size, shape, and contrast in detection of targets by daylight vision," Journal of the Optical Society of America, vol. 37, no. 7, pp. 531-545, 1947.

G. Buchsbaum, "An analytical derivation of visual nonlin-earity," IEEE Transactions on Biomedical Engineering, vol. 27, no. 5, pp. 237-242, 1980.

L. E. Krueger, "Reconciling Fechner and Stevens: toward a unified psychophysical law," Behavioral and Brain Sciences, vol. 12, no. 2, pp. 251-320, 1989.

G. T. Fechner, Elements of Psychophysics. Vol. 1, Holt, Rine-hart & Winston, New York, NY, USA, 1960, English translation by H. E. Adler.

M. G. F. Fuortes, "Initiation of impulses in visual cells of Limulus," Journal of Physiology, vol. 148, no. 1, pp. 14-28, 1959.

H. De Vries, "The quantum character of light and its bearing upon threshold of vision, the differential sensitivity and visual acuity of the eye," Physica, vol. 10, no. 7, pp. 553-564, 1943.

A. Rose, "The sensitivity performance of the human eye on an absolute scale," Journal of the Optical Society ofAmerica, vol. 38, no. 2, pp. 196-208, 1948.

A. Rose, Vision: Human and Electronic, Plenum Press, New York, NY, USA, 1973.

Y. Y. Zeevi and S. S. Mangoubi, "Noise suppression in pho-toreceptors and its relevance to incremental intensity thresholds," Journal of the Optical Society of America, vol. 68, no. 12, pp. 1772-1776, 1978.

S. S. Stevens, "On the psychophysical law," Psychological Review, vol. 64, no. 3, pp. 153-181, 1957. S. S. Stevens and E. H. Galanter, "Ratio scales and category scales for a dozen perceptual continua," Journal ofExperimen-tal Psychology, vol. 54, no. 6, pp. 377-411, 1957. S. S. Stevens, "Concerning the psychophysical power law," Quarterly Journal of Experimental Psychology, vol. 16, no. 4, pp. 383-385, 1964.

K. I. Naka and W. A. Rushton, "S-potentials from luminosity units in the retina of fish (Cyprinidae)," Journal of Physiology, vol. 185, no. 3, pp. 587-599, 1966.

R. A. Normann and F. S. Werblin, "Control of retinal sensitivity. I. Light and dark adaptation of vertebrate rods and cones," Journal of General Physiology, vol. 63, no. 1, pp. 37-61, 1974.

D. C. Hood, M. A. Finkelstein, and E. Buckingham, "Psy-chophysical tests of models of the response function," Vision Research, vol. 19, no. 4, pp. 401-406, 1979. G. Ekman, "Is the power law a special case of Fechner's law?" Perceptual and Motor Skills, vol. 19, p. 730, 1964. T. O. Kvalseth, "Is Fechner's logarithmic law a special case of Stevens' power law?" Perceptual and Motor Skills, vol. 52, pp. 617-618, 1981.

W. J. McGill and J. P. Goldberg, "A study of the near-miss involving Weber's law and pure intensity discrimination," Perceptual Psychophysics, vol. 4, pp. 105-109, 1968. V. Graf, J. C. Baird, andG. Glesman, "An empirical test of two psychophysical models," Acta Psychologica, vol. 38, no. 1, pp. 59-72, 1974.

D. J. Heeger, "Normalization of cell responses in cat striate cortex," VisualNeuroscience, vol. 9, no. 2, pp. 181-197, 1992. O. Schwartz and E. P. Simoncelli, "Natural signal statistics and sensory gain control," Nature Neuroscience, vol. 4, no. 8, pp. 819-825, 2001.

P. Ledda, L. P. Santos, and A. Chalmers, "A local model of eye adaptation for high dynamic range images," in Proceedings of the 3rd International Conference on Computer Graphics, Virtual Reality, Visualization and Interaction in Africa, pp. 151-160, Stellenbosch, South Africa, November 2004.

J. Malo, I. Epifanio, R. Navarro, and E. P. Simoncelli, "Nonlinear image representation for efficient perceptual coding," IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 68-80, 2006.

A. B. Watson, "DCT quantization matrices visually optimized for individual images," in Human Vision, Visual


Processing, and Digital Display IV, vol. 1913 of Proceedings of SPIE, pp. 202-216, San Jose, Calif, USA, February 1993. E. P. Simoncelli and B. A. Olshausen, "Natural image statistics and neural representation," Annual Review of Neuroscience, vol. 24, pp. 1193-1216, 2001. A. Hyvarinen, J. Karhunen, and E. Oja, Independent Component Analysis, John Wiley & Sons, New York, NY, USA, 2001.

A. B. Watson, "Efficiency of a model human image code," Journal of the Optical Society of America A, vol. 4, no. 12, pp. 2401-2417, 1987.

B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, no. 6583, pp. 607-609, 1996.

J. Malo, A. M. Pons, A. Felipe, and J. M. Artigas, "Characterization of the human visual system threshold performance by a weighting function in the Gabor domain," Journal of Modern Optics, vol. 44, no. 1, pp. 127-148, 1997. D. J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," Journal of the Optical Society of America A, vol. 4, no. 12, pp. 2379-2394, 1987.

A. B. Watson and J. A. Solomon, "Model of visual contrast gain control and pattern masking," Journal of the Optical Society of America A, vol. 14, no. 9, pp. 2379-2391, 1997. R. B. Paranjape, R. M. Rangayyan, and W. M. Morrow, "Adaptive neighbourhood mean and median image filtering," Journal of Electronic Imaging, vol. 3, no. 4, pp. 360-367, 1994. R. M. Rangayyan, M. Ciuc, and F. Faghih, "Adaptive-neighborhood filtering of images corrupted by signal-dependent noise," Applied Optics, vol. 37, no. 20, pp. 44774487, 1998.

J. R. Munkres, Topology, Prentice-Hall, Englewood Cliffs, NJ, USA, 2nd edition, 2000.

A. Jalobeanu, L. Blanc-Feraud, and J. Zerubia, "An adaptive Gaussian model for satellite image deblurring," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 613-621, 2004. J. Goutsias and H. J. A. M. Heijmans, "Nonlinear multiresolution signal decomposition schemes. Part I: morphological pyramids," IEEE Transactions on Image Processing, vol. 9, no. 11, pp. 1862-1876, 2000.

S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674693, 1989.

T. Lindeberg, "Scale-space theory: a basic tool for analysing structures at different scales," Journal of Applied Statistics, vol. 21, no. 2, pp. 225-270, 1994.

P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, 1990. G. Kanisza, Organization in Vision, Holt, Rinehart and Winston, New York, NY, USA, 1979.

S. E. Palmer, J. L. Brooks, and R. Nelson, "When does grouping happen?" Acta Psychologica, vol. 114, no. 3, pp. 311-330, 2003.

R. Hess and D. Field, "Integration of contours: new insights," Trends in Cognitive Sciences, vol. 3, no. 12, pp. 480-486, 1999. S. C. Dakin and A. M. Herbert, "The spatial region of integration for visual symmetry detection," Proceedings of the Royal Society B: Biological Sciences, vol. 265, no. 1397, pp. 659-664, 1998.

[137] S. C. Dakin and R. F. Hess, "The spatial mechanisms mediating symmetry perception," Vision Research, vol. 37, no. 20, pp. 2915-2930, 1997.

[138] S. C. Dakin and R. J. Watt, "Detection of bilateral symmetry using spatial filters," Spatial Vision, vol. 8, no. 4, pp. 393-413, 1994.

[139] V. Di Gesu and C. Valenti, "Symmetry operators in computer vision," Vistas in Astronomy, vol. 40, no. 4, pp. 461-468, 1996.

[140] H. Rom and G. Medioni, "Hierarchical decomposition and axial shape description," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 973-981, 1993.

[141] B. B. Kimia, "On the role of medial geometry in human vision," Journal of Physiology-Paris, vol. 97, no. 2-3, pp. 155190, 2003.

[142] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, Reading, Mass, USA, 1992.

[143] G. Matheron, Elements pour une Théorie des Milieux Poreux, Masson, Paris, France, 1967.

[144] P. Soille, Morphological Image Analysis. Principles and Applications, Springer, New York, NY, USA, 2003.

[145] J. Serra, Image Analysis and Mathematical Morphology: Vol. 2: Theoretical Advances, Academic Press, London, UK, 1988.

[146] M. Charif-Chefchaouni and D. Schonfeld, "Spatially-variant mathematical morphology," in Proceedings of IEEE International Conference Image Processing (ICIP '94), vol. 2, pp. 555559, Austin, Tex, USA, November 1994.

[147] R. Lerallut, E. Decenciere, and F. Meyer, "Image filtering using morphological amoebas," in Proceedings of the 7th International Symposium on Mathematical Morphology (ISMM '05), C. Ronse, L. Najman, and E. Decenciere, Eds., pp. 1322, Paris, France, April 2005.

[148] O. Cuisenaire, "Locally adaptable mathematical morphology," in Proceedings of IEEE International Conference on Image Processing (ICIP '05), vol. 2, pp. 125-128, Genova, Italy, September 2005.

[149] E. Cech, Topological Spaces, John Wiley & Sons, Prague, Czechoslovakia, 1966.

[150] J. Serra and P. Salembier, "Connected operators and pyramids," in Image Algebra and Morphological Image Processing IV, vol. 2030 of Proceedings of SPIE, pp. 65-76, San Diego, Calif, USA, July 1993.

[151] G. Matheron, "Filters and lattices," in Image Analysis and Mathematical Morphology. Volume 2 : Theoretical Advances, pp. 115-140, Academic Press, London, UK, 1988.

[152] S. Beucher and C. Lantuejoul, "Use of watersheds in contour detection," in Proceedings of International Workshop on Image Processing, Real-Time Edge and Motion Detection/Estimation, pp. 17-21, Rennes, France, September 1979.

[153] P. Gain, G. Thuret, L. Kodjikian, et al., "Automated tri-image analysis of stored corneal endothelium," British Journal of Ophthalmology, vol. 86, no. 7, pp. 801-808, 2002.

[154] L. Chazallon and J.-C. Pinoli, "An automatic morphological method for aluminium grain segmentation in complex grey level images," Acta Stereologica, vol. 16, no. 2, pp. 119-130, 1997.

[155] G. Ramponi, N. Strobel, S. K. Mitra, and T.-H. Yu, "Nonlinear unsharp masking methods for image contrast enhancement," Journal of Electronic Imaging, vol. 5, no. 3, pp. 353-366, 1996.

J.-C. Pinoli received the Master's, Ph.D., and D.Sc. (Habilitation à Diriger des Recherches) degrees in applied mathematics in 1983, 1985, and 1992, respectively. From 1985 to 1989, he was a Member of the Optoelectronics Department of the Angenieux (Thales) company, Saint-Héand, France, where he pioneered research in the field of digital imaging and artificial vision. In 1990, he joined the Corporate Research Center of the Pechiney company, Voreppe, France, as a Member of the Computational Technologies Department in charge of the imaging activities. Since 2001, he has been a Full Professor at the French graduate school "Ecole Nationale Supérieure des Mines de Saint-Etienne." He leads the Image Processing and Pattern Analysis Group within the Engineering and Health Research Center and the LPMG Laboratory, UMR CNRS 5148. His research interests and teaching include image processing, image analysis, mathematical morphology, and computer vision.

J. Debayle received his Ph.D. degree in image, vision, and signal from the French graduate school "Ecole Nationale Superieure des Mines de Saint-Etienne" and the University of Saint-Etienne, France, in 2005. He is currently a Postdoctoral Scientist at the French National Institute for Research in Computer Science and Control (INRIA). His research interests include adaptive image processing and analysis.