
Sensors 2010, 10, 4206-4237; doi:10.3390/s100504206


ISSN 1424-8220 www.mdpi.com/journal/sensors


Generating One Biometric Feature from Another: Faces from Fingerprints

Necla Ozkaya 1,* and Seref Sagiroglu 2

1 Computer Engineering Department, Engineering Faculty, Erciyes University, 38039 Kayseri, Turkey

2 Computer Engineering Department, Engineering Faculty, Gazi University, 06570 Ankara, Turkey; E-Mail: ss@gazi.edu.tr

* Author to whom correspondence should be addressed; E-Mail: neclaozkaya@erciyes.edu.tr.

Received: 20 January 2010; in revised form: 4 March 2010 / Accepted: 22 March 2010 / Published: 28 April 2010

Abstract: This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships among fingerprints and faces and also to model and to improve the existence of the relationships. The proposed system is the first study that generates all parts of the face, including eyebrows, eyes, nose, mouth, ears and face border, from only fingerprints. It is also unique and different from similar studies recently presented in the literature, with some superior features. The parameter settings of the system were achieved with the help of the Taguchi experimental design technique. The performance and accuracy of the system have been evaluated with the 10-fold cross validation technique using qualitative evaluation metrics in addition to the expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics for illustrating the qualitative properties of the proposed methods as well as a quantitative evaluation of their performances. Experimental results have shown that one biometric feature can be determined from another. These results have once more indicated that there is a strong relationship between fingerprints and faces.

Keywords: biometrics; fingerprint; face; artificial neural network; intelligent system; Taguchi

1. Introduction

Biometrics has become a more and more important solution to overcome the vulnerabilities of security systems for people, companies, corporations, institutions and governments. Person identification systems based on biometrics were primarily used in limited applications requiring high-security tasks, like criminal identification and police work, in the beginning; more recently they have been used in a wide range of applications including information security, law enforcement, surveillance, forensics, smart cards, access control, etc. because of their reliability, performance and accuracy in identification and verification processes [1-4]. When the biometric literature was reviewed, it was found that there was extensive literature on fingerprint identification and face recognition. The researchers mostly focused on designing more accurate, hybrid, robust and fast systems with high accuracy by developing more effective and efficient techniques, architectures, approaches, sensors and algorithms or their hybrid combinations [1,2].

Generating a biometric feature from another is a challenging research topic. Generating face characteristics from only fingerprints is an especially interesting and attractive idea for applications. It is thought that this might be used in many security applications. This challenging topic of generating face parts from only fingerprints has recently been introduced for the first time by the authors in a series of papers [5-13]. The relationships among biometric features of the faces and fingerprints (Fs&Fs) were experimentally shown in various studies covering the generation of:

• face borders [5],

• face contours, including face border and ears [6],

• face models, including eyebrows, eyes and mouth [7],

• inner face masks, including eyes, nose and mouth [8],

• face parts, including eyes, nose, mouth and ears [9],

• face models, including eyes, nose, mouth, ears and face border [10],

• face parts, including eyebrows, eyes, nose, mouth and ears [11],

• only eyes [12],

• face parts, including eyebrows, eyes and nose [13],

• face features, including eyes, nose and mouth [14] and

• face shapes, including eyes, mouth and face border [15].

In these studies, face parts are predicted from only fingerprints without any need of face information or images. The studies have experimentally demonstrated that there are close relationships among faces and fingerprints.

Although various feature sets of faces and fingerprints, different parameter settings and reference points were used to achieve the tasks with high accuracy from only fingerprints, obtaining the face parts, including the inner face parts with eyebrows and the face borders with ears, has not been studied up to now. In order to achieve the generation task automatically with high accuracy, a complete system was developed. This system combines all the other recent studies introduced in the literature and provides more complex and specific solutions for generating whole face features from fingerprints. In order to improve the performance of the proposed study, the Taguchi experimental design technique was also used to determine the best parameters of the artificial neural network (ANN) models used in this generation. In order to evaluate and demonstrate the results more precisely, the 10-fold cross validation technique with both quantitative (objective) evaluation metrics and expanded qualitative (subjective) evaluation metrics was used. So the performance and accuracy were demonstrated in a more reliable way with a limited database in comparison to the previous studies.

The paper is organized as follows. Section 2 reviews the background information on biometrics, automatic fingerprint identification and verification systems (AFIVSs), and face recognition systems (FRSs). Section 3 briefly introduces ANNs. Section 4 presents the motivations of this study as well as investigates the previous works about relationships among fingerprints and faces. Section 5 describes the evaluation methods. Section 6 presents the novelty of the proposed system including basic notations, definitions and various steps of the present method, the intelligent biometric feature prediction system (IBFPS). The experiments, including numerical and graphical results of IBFPS, are depicted in Section 7. Finally, the proposed work is concluded and discussed in Section 8.

2. Background of Biometric Systems

Biometric features covering physical or behavioral characteristics including fingerprint, face, ear, hand geometry, voice, retina, iris recognition, etc. are peculiar to the individual, reliable as far as not being easily transferable, and invariant during the lifetime [1]. Typical biometric systems include enrollment, identification, verification, recognition, screening or classification processes. The steps in system tasks are as follows: biometric data acquisition, feature extraction, registration, matching, making a decision and evaluation. Biometric data were obtained from people with the help of a camera-like device for the faces and a fingerprint scanner for the fingerprints, etc. In general, after the data acquisition processes, the digital representation of the biometric data of the people was obtained on the digital platform. Feature extraction processes were applied to this digital form of the biometric features and the feature sets were registered to the biometric system database. When a user wants to authenticate him/herself to the system, a fresh biometric feature is acquired, the same feature extraction algorithm is applied, and the extracted feature set is compared to the template in the database. If these feature sets of the input and the template biometric features are sufficiently similar according to the matching criteria, the user's final decision is taken and the user is authenticated at the end of the matching process [3,14].

Data acquisition, verification, identification and screening phases are the main types of biometric-based systems [4]. The types are summarized as:

Type I: The biometric data acquisition phase is the first step of the other three phases. Enrollment, classification and recording of the biometric features are achieved in this phase.

Type II: The verification phase is the most commonly used biometric system mode in social life, like person identification systems in physical access control, computer network logon or electronic data security [2,4]. In that phase an individual's identity is usually claimed via a user name, an identification number, a magnetic card, a smart card, etc. At the end of the verification phase, the submitted claim of the identity is either rejected or accepted [1].

Type III: The identification phase is commonly used in applications requiring high-security tasks like criminal identification and police work. In that phase, the system tries to recognize an individual's identity using just his or her biometric feature. The system fails if the person is an undefined person in the system database. In that case, the output of the system is a combination list of identities and the scores indicating the similarity among two biometric features [15]. According to some predefined rules about similarity measures, the system decision is produced in this phase.

Type IV: The screening phase is like the identification phase. The result of the determination of whether a person belongs to a watch list of identities or not is displayed in this phase. Security at airports, public events and other surveillance applications are some of the screening examples [4,16].

A typical biometric system is given in Figure 1. The processes in the system are achieved according to the arrows illustrated in the figure depending on the application status.

Figure 1. A typical biometric system.

These sorts of biometric recognition systems make people, systems or information safer by reducing fraud and leading to user convenience [4]. Two of the most popular biometric features used in biometric-based authentication systems are fingerprints and faces. Fingerprint-based biometric systems are called AFIVSs and face-based biometric systems are called FRSs.

Fingerprints are unique patterns on the surface of the fingers. Fingerprints represent people with high accuracy because they form a natural identity throughout life, one that is not forgotten anywhere and cannot be easily lost. They have been reliably and widely used to identify people for a century due to their uniqueness, immutability and reliability [17].

In AFIVSs, the ridge-valley structure of the fingerprint pattern, the core and delta points called singular points, and the end points and bifurcations called minutiae are used for identifying an individual. These structures are given in Figure 2. Many approaches to AFIVSs have been presented in the literature [1,2,15,17-30]. The AFIVSs might be broadly classified as minutiae-based, correlation-based and image-based systems [18]. A good survey about these systems was given in reference [1]. The minutiae-based approaches rely on comparisons of the similarities and differences of the local ridge attributes and their relationships to make a personal identification [19-21]. They attempt to align two sets of minutiae from two fingerprints and count the total number of matched minutiae [4]. If the minutiae and their parameters are computed relative to the singular points, which are highly stable and rotation, translation and scale invariant, the minutiae will then become rotation, translation and scale invariant as well [15,22-24]. Core points are the points where the innermost ridge loops are at their steepest. Delta points are the points from which three patterns deviate [23,25,26]. The general methods to detect the singular points are Poincaré-based [27], intersection-based [23] or filter-based [28] methods.

Figure 2. Ridge-valley structure and features of a fingerprint.

The main steps of the operations in the minutiae-based AFIVSs are summarized as: selecting the image area; detecting the singular points; enhancing, improving and thinning the fingerprint image; extracting the minutiae points and calculating their parameters; eliminating the false minutiae sets; properly representing the fingerprint images with their feature sets; recording the feature sets into a database; matching the feature sets; and testing and evaluating the system [29]. The steps and their results are given in Figure 3, respectively. Although the performance of the minutiae-based techniques relies on the accuracy of all these steps, the feature extraction and the use of sophisticated matching techniques to compare two minutiae sets are often more effective on the performance.

Global patterns of the ridges and valleys are compared to determine if the two fingerprints are aligned in the correlation-based AFIVSs. The template and query fingerprint images are spatially correlated to estimate the degree of similarity between them. The performance of correlation-based techniques is affected by non-linear distortions and noise in the image. In general, it has been observed that minutiae-based techniques perform better than correlation-based ones [30]. In the image-based approaches, the decision is made using features that are directly extracted from the raw image, which might be the only viable choice when image quality is too low to allow reliable minutiae extraction [18].
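As an illustration of the spatial correlation idea described above, the following minimal sketch scores two equally sized fingerprint image patches with a zero-mean normalized cross-correlation. The `ncc` function and the toy 3x3 patterns are illustrative assumptions, not the implementation of any cited system; a real correlation-based AFIVS must also handle alignment, non-linear distortion and noise.

```python
import numpy as np

def ncc(template, query):
    """Zero-mean normalized cross-correlation between two equally
    sized image patches; 1.0 means perfectly matching patterns."""
    t = template - template.mean()
    q = query - query.mean()
    denom = np.sqrt((t * t).sum() * (q * q).sum())
    return float((t * q).sum() / denom) if denom else 0.0

# Toy binary ridge patterns (assumed values, for illustration only).
a = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
score_same = ncc(a, a)        # identical patterns give a score of 1.0
score_diff = ncc(a, 1.0 - a)  # an inverted pattern gives a negative score
```

In practice the query patch would be correlated at many offsets and rotations of the template, and the maximum score taken as the similarity measure.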

Figure 3. Main operational steps of minutiae-based AFIVSs [29].

Step 1: Input fingerprint image.
Step 2: The image area and the singular points.
Step 3: Enhanced and improved image.
Step 4: Thinned image.
Step 5: The matching area and the fingerprint feature sets.
Step 6: Matching scores and the decision (enroll, identify, verify or screen).
Step 7: Test and evaluation.

Faces are probably the most highly accepted and user-friendly characteristics in the field of biometrics. Face recognition is an attractive and active research area with several applications ranging from static to dynamic [19]. In general, a FRS consists of three main steps covering detection of the faces in a complicated background, extraction of the features from the face regions and localization of the faces, and finally the recognition tasks [31]. The steps used in face processing in the fingerprint-to-face task are illustrated in Figure 4.

The face recognition process is really complex and difficult due to numerous factors affecting the appearance of an individual's facial features such as 3D pose, facial expression, hair style, make-up, etc. In addition to these varying factors, lighting, background, scale, noise and face occlusion, and many other possible factors make these tasks even more challenging [31]. The most popular approaches to face recognition are based either on the location and shape of the facial attributes, including eyes, eyebrows, nose, lips and chin and their spatial relationships, or on the overall analysis of the face image representing a face as a weighted combination of a number of canonical faces [4,32]. Many effective and robust methods for face recognition have also been proposed [2,19,31-35]. The methods are categorized in four groups as follows [34]: human knowledge of what constitutes a typical face was encoded in the knowledge-based methods. Structural features that exist even when the pose, viewpoint or lighting conditions vary were sought to locate faces in the feature-invariant methods. Several standard patterns of a face were used to describe the face as a whole or the facial features separately in the template-matching-based methods. Finally, the appearance-based methods operate directly on images or appearances of the face objects and process the images as two-dimensional holistic patterns.

Figure 4. Main processes of face processing for the fingerprint-to-face task system.

Step 1: Capture the image and detect the faces in a complicated background.
Step 2: Extract the features from the face regions to generate the template.
Step 3: Compare the inputs with templates and declare matches.

As explained earlier, processing fingerprints and faces are really difficult, complex and time consuming tasks. Many approaches, techniques and algorithms have been used for face recognition, fingerprint recognition and their sub-steps. It is very clear from these explanations that generating faces from fingerprints is a really more difficult task. The tasks to be achieved in this article, covering faces, fingerprints, pre- and post-processing of them, applying many methods, implementing them in training and test procedures, analyzing them with different metrics, and representing the outputs on a visual platform, have made the prediction task even more difficult.

3. Artificial Neural Networks

ANNs are biologically inspired intelligent techniques to solve many problems [36-40]. Learning, generalization, low data requirements, fast computation, ease of implementation and software and hardware availability have made ANNs very attractive for many applications [36]. There has been growing research interest in security and recognition applications based on intelligent techniques and especially ANNs, which are also very popular in biometric-based applications [5-13,29,34,35,37-40]. The multilayer perceptron (MLP) is one of the most popular ANN architectures and can be trained with various learning algorithms. Because an MLP structure can be trained by many learning algorithms, it has been successfully applied to a variety of problems in the literature [36].

The MLP structure consists of three layers: input, hidden and output layers. One or more hidden layers might be used. The neurons in the input layer can be treated as buffers and distribute the input signal to the neurons in the hidden layer. The output of each neuron in the hidden layer is obtained from the sum of the multiplication of all input signals and the weights that follow these input signals. The sum is passed through a function, which can be a simple threshold function, a hyperbolic tangent or a sigmoid function. The outputs of the neurons in the other layers are calculated in the same way. The weights are adapted with the help of a learning algorithm according to the errors occurring in the calculation. The errors can be computed by subtracting the ANN outputs from the desired outputs. MLPs might be trained with many different learning algorithms [36]. A general form of the MLP is given in Figure 5.
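The forward pass described above can be sketched as follows. This is a minimal illustration of a single-hidden-layer MLP with sigmoid activations, not the authors' trained models; the function name, the toy dimensions and the random weights are assumptions for illustration.

```python
import numpy as np

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a single-hidden-layer MLP.

    Each hidden neuron sums its weighted inputs plus a bias and
    applies a sigmoid activation; the output layer works the same way.
    """
    h = 1.0 / (1.0 + np.exp(-(w_hidden @ x + b_hidden)))  # hidden activations
    y = 1.0 / (1.0 + np.exp(-(w_out @ h + b_out)))        # output activations
    return y

# Toy dimensions (assumed): 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
w_h, b_h = rng.normal(size=(4, 3)), np.zeros(4)
w_o, b_o = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 0.25])
d = np.array([1.0, 0.0])                 # desired output
o = mlp_forward(x, w_h, b_h, w_o, b_o)
error = d - o                            # error = desired minus ANN output
```

A learning algorithm such as the conjugate gradient method used in this study would then adapt the weights to reduce this error.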

Figure 5. General form of the MLP, with an input layer, a hidden layer and an output layer.

In this study, an MLP-based model structure having a single hidden layer was used to model the relationships and to generate the faces. The MLP models were trained with the conjugate gradient algorithm, updating weight and bias values according to the conjugate gradient with Powell-Beale restarts (CGB) [41].

4. Motivation of the Proposed Approach

It is especially difficult to believe that there is a relationship between biometric features because of their characteristics, such as their uniqueness. This research was difficult and challenging. As an initial step, biological and physiological evidence regarding the relationships among biometric features to support this study was investigated. The evidence and observations given below help us to believe that it is worth investigating the relationship among fingerprints and faces. These are given below:

1. It is known that the phenotype of the biological organism is uniquely determined by the interaction of a specific genotype and a specific environment [42]. Physical appearances of faces and fingerprints are also a part of an individual's phenotype. In the case of fingerprints, the genes determine the general characteristics of the pattern [42]. In dermatoglyphics studies, the maximum genetic difference between fingerprints has been found among individuals of different races. Unrelated persons of the same race have very little genetic similarity in their fingerprints, parent and child have some genetic similarity as they share half of the genes, siblings have more similarity, and the maximum genetic similarity is observed in identical twins, which is the closest genetic relationship [43].

2. Some of the scientists in biometrics have focused on analyzing the similarities in fingerprint minutiae patterns of identical twin fingers [42]. They confirmed that identical twin fingerprints have a large class correlation. In addition to this class correlation, correlation based on other genetic attributes of the fingerprint such as ridge count, ridge width, ridge separation and ridge depth was also found to be significant in identical twins [42].

3. In the case of faces, the situation is very similar to the circumstances of fingerprints. The maximum genetic similarity is observed in identical twins, which is the closest genetic relationship [43].

4. A number of studies have especially focused on analyzing the significant correlation among faces and fingerprints of identical twins [42,44-46]. The large correlation among biometrics of identical twins was repeatedly indicated in the literature by declaring that identical twins would cause vulnerability problems in security applications [47]. The similarity measure of identical twin fingerprints is reported as 95% [47]. The reasons for this high degree of similarity were explained in some studies as follows:

• Identical twins have exactly identical DNA except for the generally undetectable micro mutations that begin as soon as the cell starts dividing [46].

• Fingerprints of identical twins start their development from the same DNA, so they show considerable genetic similarity [48].

The similarity among biometric features of identical twins is given in Figure 6. Fingerprints of identical twins and the fingerprint of another person are given in Figure 7 [46]. The high degree of similarity in the fingerprints and faces of identical twins is demonstrated in Figure 8.

5. Previous Work on Relationships among Fingerprints and Faces

In the light of the explanations in the previous section, identical twins have strong similarities in both fingerprints and faces. Increasing and decreasing directions of these similarities are also the same among the people. Consequently, this similarity supports the idea that there might be some relationships among fingerprints and faces. The results reported by the authors have also experimentally shown that relationships among fingerprints and faces exist [5-13].

Figure 6. Different biometric features of identical twins [45]. (a) Retina, (b) Iris, (c) Fingerprint and (d) Palm print.

Figure 8. Fingerprints and faces for identical twins.

In the studies [5-13], relationships among fingerprint and face parts were investigated and various face parts were predicted from just fingerprints, step by step, from simple to complex. At the beginning of the process, the authors tried to generate only face borders [5], only eyes [13] and face contours [6] from just fingerprints. In further steps of the process, the ANN structures were improved, trained and tested to predict static face parts [7,8,12]. After these studies, the ANN structures used in the prediction process were advanced owing to the experience of the authors, and more complex face parts could be generated with high accuracy [9-11]. Finally, this study introduces for the first time the most complex representation of the relationships among fingerprints and faces. The studies [5-13] presented the experimental results on different platforms such as a traditional evaluation platform, a numerical evaluation platform and finally a visual evaluation platform. However, it should be noted that because of the limited data sets covering 120 people in those studies, 10-fold cross-validation should be applied to illustrate the performance of the system. Randomly selected train-test data sets are no longer appropriate to characterize the performance of the system. They can lead to errors in evaluating the performance of the system by causing imperfect comments on the results. In the 10-fold cross validation process, the database was randomly divided into 10 different data group sets covering 90% of all data in the training set and the remaining 10% in the test set for each fold. The proposed system was trained and tested with these ten different training-test data sets. After ten different trainings, 10 test processes were then followed. Accuracy and performance of the ANN models for each fold were computed according to the appropriate evaluation metrics covering expanded quantitative and qualitative metrics.
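The fold construction described above can be sketched as follows. This is a minimal illustration assuming a simple random partition of person indices; `ten_fold_splits` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def ten_fold_splits(n_people, seed=0):
    """Randomly partition person indices into 10 folds; each fold
    uses 90% of the data for training and the other 10% for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_people)
    folds = np.array_split(idx, 10)
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        yield train, test

# With a 120-person database (as in the paper), each fold tests 12 people
# and trains on the remaining 108.
splits = list(ten_fold_splits(120))
```

Averaging the evaluation metrics over the ten folds gives a more reliable performance estimate than a single random train-test split.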

The ANN structures of the previous studies were designed and reconfigured with randomly selected or experimentally obtained parameters. It is well known that finding appropriate parameters depending on the application is very difficult. It takes time, and suitable parameters are established with the help of trial and error. To do it systematically, as mentioned before, this study also presents obtaining the best ANN parameters, like the number of layers, the number of inputs, the training algorithms and the activation functions, with the help of the Taguchi experimental design technique.
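To show how an orthogonal-array design reduces the number of parameter trials, the sketch below applies a Taguchi L4 array to three hypothetical two-level ANN factors. The factor names and levels are assumptions for illustration; the paper's actual design space and chosen array are not reproduced here.

```python
# Hypothetical two-level factors for an ANN configuration.
factors = {
    "hidden_neurons": [10, 20],
    "activation": ["tanh", "sigmoid"],
    "train_algorithm": ["CGB", "LM"],
}

# Taguchi L4 orthogonal array: 4 balanced trials instead of 2**3 = 8
# full-factorial runs; each column is a factor, each entry a level index.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def trials(factors, array):
    """Map each orthogonal-array row to a concrete parameter setting."""
    names = list(factors)
    for row in array:
        yield {name: factors[name][level] for name, level in zip(names, row)}

experiments = list(trials(factors, L4))
# Each level of each factor appears in exactly half of the trials, so the
# main effect of every factor can be estimated from the trial scores.
```

After running the four trials, the best level of each factor is picked by comparing the average performance of the trials that used it.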

In the previous studies [5-13], the performance and accuracy of the proposed model were evaluated by quantitative metrics and/or human assessment presented in a graphical form. In this paper, both the quantitative (i.e., objective) measures carried out automatically by computers, expanding the metrics available in the literature, and the qualitative (subjective) evaluation perceived by observation were taken into account. The next section describes these quantitative and qualitative evaluation metrics.

6. Evaluation Metrics

Generating accurate face features from fingerprints without having any information about the faces is successfully achieved and introduced in this study. It needs to be emphasized that evaluating the results was an important, critical and difficult part of this study. There were no established criteria to elaborate the results precisely. Therefore, the success and reliability of the proposed system in achieving face parts from only fingerprints must be clearly illustrated with proper metrics.

The traditional metrics of an ordinary biometric system, like the FMR-FNMR representation and the ROC curve, are no longer appropriate to characterize the performance of the system, because the proposed system is not an ordinary biometric-based recognition system. In this study, additional test procedures and performance metrics covering a combination of quantitative and qualitative measures are introduced for better evaluations. The details of these metrics are explained in the following subsections.

6.1. Quantitative Evaluation Metrics

These metrics are briefly introduced in the following subsections.

6.1.1. FMR-FNMR Curve and the ROC Curve

FMR-FNMR and ROC curves are commonly used as evaluation metrics for biometric-based recognition systems. The curves and the determination procedure were detailed in [1]. The null (H0) and alternate (H1) hypotheses for the biometric verification problem and the associated decisions according to these hypotheses are given in Table 1 and Table 2, respectively. If "T" is the stored biometric template of a person and "I" is the acquired input of a biometric feature, the hypotheses for biometric verification are written as H0: I ≠ T, the input and template do not come from the same person, and H1: I = T, the input and template come from the same person.

Table 1. The null and the alternate hypotheses for the biometric verification.

Formulas     Definition
H0: I ≠ T    Input and template are not from the same person
H1: I = T    Input and template are from the same person

Table 2. Decision types.

Formulas     Definition
D0: I ≠ T    A person is not the same person as claimed
D1: I = T    A person is the same person as claimed

In general, two types of errors are encountered in a typical biometric verification system: mistaking biometric measurements from two different fingers as being from the same finger (false match) and mistaking two biometric measurements from the same finger as being from two different fingers (false non-match). These errors are given in Table 3 as Type I and Type II, respectively. The verification involves matching T and I using a similarity measure s(T, I). If the matching score s(T, I) is less than the system threshold t, then decide D0, else decide D1. To evaluate the system, one must collect the scores generated from a number of fingerprint pairs from the same finger (the distribution p(s | H1 = true) of such scores is traditionally called the genuine distribution), and the scores generated from a number of fingerprint pairs from different fingers (the distribution p(s | H0 = true) of such scores is traditionally called the impostor distribution). FMR is the probability of Type I error and could be defined as the percentage of impostor pairs whose matching score is greater than or equal to t, and FNMR is the probability of Type II error and could be defined as the percentage of genuine pairs whose matching score is less than t.

Table 3. Two types of errors in a typical biometric system.

Error Type         Formula                                              Definition
Type I (FMR):      FMR = P(D1 | H0 = true) = ∫_t^1 p(s | H0 = true) ds  False match rate (D1 is decided when H0 is true).
Type II (FNMR):    FNMR = P(D0 | H1 = true) = ∫_0^t p(s | H1 = true) ds False non-match rate (D0 is decided when H1 is true).

Between FMR and FNMR there is a strict trade-off. If t is decreased to make the system more tolerant with respect to input variations and noise, then FMR increases; vice versa, if t is raised to make the system more secure, then FNMR increases accordingly. So the system performance was reported at all operating points (threshold t) in ROC curves by plotting FNMR as a function of FMR [1].
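The threshold sweep described above can be sketched as follows. This is a minimal illustration with made-up score lists; the `fmr_fnmr` helper is an assumption for illustration, not part of any standard library.

```python
import numpy as np

def fmr_fnmr(genuine, impostor, thresholds):
    """FMR(t): fraction of impostor scores >= t (Type I error).
       FNMR(t): fraction of genuine scores < t (Type II error)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    return fmr, fnmr

genuine = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-finger scores (toy values)
impostor = [0.2, 0.4, 0.3, 0.5, 0.1]    # different-finger scores (toy values)
ts = np.linspace(0.0, 1.0, 101)
fmr, fnmr = fmr_fnmr(genuine, impostor, ts)
# Plotting fnmr against fmr over all thresholds t yields the ROC curve.
```

Note the trade-off in the arrays themselves: as t grows, `fmr` can only decrease while `fnmr` can only increase.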

6.1.2. Mean Squared Error (MSE) and Sum Squared Error (SSE)

MSE and SSE are metrics that quantify the amount by which an estimator differs from the true value of the quantity being estimated. These metrics were used for evaluation of the performance and accuracy of the systems investigating the relationships among fingerprints and faces in the literature [5-13]. MSE measures the average of the square of the error. SSE is the sum of the squared prediction errors in a standard regression model. In general, the less the SSE, the better the model performs in its estimation. MSE and SSE are given in Equations (1) and (2), respectively. In the equations, n is the number of test people, O_i is the output of the system and D_i is the desired value of O_i.

MSE = (1/n) Σ_{i=1}^{n} (D_i − O_i)²    (1)

SSE = Σ_{i=1}^{n} (D_i − O_i)²    (2)

6.1.3. Absolute Percentage Error (APE) and Mean APE (MAPE)

APE is a measure of accuracy in a fitted time series value. It usually expresses accuracy as a percentage [50]. APE is also commonly used as an evaluation metric in the similar studies aiming to investigate the relationships among fingerprints and faces in the literature [5-13]. These metrics are given in Equations (3) and (4). In the equations, n is the number of test people, O_i is the output of the system and D_i is the desired value of O_i:

APE = Σ_{i=1}^{n} (|D_i − O_i| / D_i)    (3)

MAPE = (1/n) Σ_{i=1}^{n} (|D_i − O_i| / D_i)    (4)

6.1.4. Mean Absolute Error (MAE)

MAE is a quantity used to measure how close generations or predictions are to the eventual outcomes. This metric was first used in this study; it should be noted that this metric was linked appropriately to the application proposed in this paper. As the name suggests, MAE is an average of the absolute errors. In the proposed study it is calculated as the average of the absolute errors per coordinate of the feature sets of the faces. The formulation of MAE is given in Equation (5). In the equation, O_i is the output of the ANN, D_i is the desired value of O_i and e_i = D_i - O_i.

MAE = \frac{1}{n} \sum_{i=1}^{n} |D_i - O_i| = \frac{1}{n} \sum_{i=1}^{n} |e_i|    (5)
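The five quantitative metrics of Equations (1)-(5) can be computed in a few lines. The sketch below uses invented numbers for three hypothetical test people; the names d and o follow the D_i and O_i of the equations:

```python
import numpy as np

def error_metrics(desired, outputs):
    """Evaluate Equations (1)-(5) for desired values D_i and system outputs O_i."""
    d = np.asarray(desired, dtype=float)
    o = np.asarray(outputs, dtype=float)
    e = d - o                                          # e_i = D_i - O_i
    return {
        "MSE": float(np.mean(e ** 2)),                 # Equation (1)
        "SSE": float(np.sum(e ** 2)),                  # Equation (2)
        "APE": float(np.sum(np.abs(e) / np.abs(d))),   # Equation (3)
        "MAPE": float(np.mean(np.abs(e) / np.abs(d))), # Equation (4)
        "MAE": float(np.mean(np.abs(e))),              # Equation (5)
    }

# Invented numbers, three "test people": errors e = [-0.1, 0.2, -0.4]
m = error_metrics([1.0, 2.0, 4.0], [1.1, 1.8, 4.4])
```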

6.2. Qualitative Evaluation Metrics

In previous studies [5-13], quantitative evaluation platforms were prepared to help the researchers determine whether the obtained results are similar to their desired values or not. In this study, in addition to that, a qualitative analysis was carried out in order to determine whether the obtained results are similar to their desired values, how close the results are to their desired values, and how accurately the system performs the task. Although the quantitative metrics indicate the system performance clearly in a numerical manner, they do not provide any information about the perceived visual quality of the results. Accordingly, a psychophysical experiment was designed and carried out as described below.

The aim of this qualitative analysis was to determine whether the system produces imagery with the highest perceived quality as judged by human observers. The qualitative assessment method applied in this study is explained below.

In order to obtain an objective qualitative assessment of the results, a standard psychophysical rank-ordering paradigm [51,52] was adapted for our study. Essentially, this paradigm consisted of presenting the participants with the results and asking each participant to rank each of those results based on its "quality" by assigning it a numerical value. Specifically, in this study the test results for each fold were presented to the participants, and each participant graded the results with a numerical value from 1 to 5. The meanings of the numerical values are given below:

1: the results are very different from the desired values; the system failed.

2: the results are a bit similar to the desired values, but the system cannot be accepted as successful.

3: the results are similar to the desired values; the system success is average.

4: the results are very similar to the desired values; the system is above average.

5: the results are nearly the same as or the same as the desired values; the system is very successful.

Before starting the experiments, each participant was asked to read standardized instructions explaining the task clearly. All participants were allowed to ask questions regarding the task before beginning the experiments. At the beginning of each trial, the twelve results of a fold of the 10-fold cross validation were simultaneously displayed. At the end of each checking process, the participant gave a mark for the test results of that fold. In this part of the evaluation, each participant checked all test results of the 10-fold cross validation, covering 120 test people, and gave a mark for each fold to evaluate whether face prediction was successfully achieved or not.

7. The Proposed System: Intelligent Biometric Feature Prediction System (IBFPS)

In order to achieve the prediction task, the proposed system called IBFPS was developed and implemented. In this study, the new approach successfully generates total face features containing all of the face parts, including eyebrows, eyes, nose and mouth, and the face contours, including face border and ears, from only fingerprints, without having any information about faces. In addition, the relationships among fingerprints and faces (Fs&Fs) are also analyzed and discussed in more detail with the help of different evaluation criteria.

Assume that this relationship among faces and fingerprints can be mathematically represented as:

y = H(x) (6)

where y is a vector indicating the feature set of the face model and its parameters obtained from a person, x is a vector representing the feature set of the fingerprint acquired from the same person, and H(.) is a highly nonlinear system mapping x onto y. In this study, H(.) is approximated by a model that captures the relationship among Fs&Fs with the help of ANN models.

The proposed system is based on an MLP-ANN model whose best parameters were found with the help of the Taguchi experimental design technique [53-55]. MLPs were trained with the binary input vectors and the corresponding output vectors at different parameter levels, based on Mean Square Errors (MSEs) and Absolute Percentage Errors (APEs).

In order to determine the best parameters of the MLP-ANN structure, an L16 (8^1 x 2^3) Taguchi experiment was designed. Taguchi design factors and factor levels are given in Table 4. Training algorithms, the number of layers, the number of inputs and the transfer functions were the main Taguchi design factors, with 8, 2, 2 and 2 considered as their factor levels, respectively.

The MLP-ANN training algorithms considered and used in this work were Powell-Beale conjugate gradient back propagation (CGB), Fletcher-Powell conjugate gradient (CGF), Polak-Ribiere conjugate gradient (CGP), Gradient Descent (GD), Gradient Descent with adaptive learning coefficients (GDA), One Step Secant (OSS), GDA with momentum and adaptive learning coefficients (GDAM) and scaled conjugate gradient (SCG) [56].

In this study, the numbers of layers were set to 3 and 4, and the numbers of inputs were 200 and 300. Hyperbolic Tangent (HT) and Sigmoid Function (SF) activation functions were considered and used in the MLP-ANN structures.

In the Taguchi design, the best parameters of the MLP-ANNs were determined according to MSEs. Main effect plots were taken into consideration while analyzing the effects of parameters on the response factor. These plots help to understand and to compare the changes in the level means and to indicate the influence of the effective factors more precisely. According to these plots, the training algorithms had the largest main effect on MSE. The number of layers in the MLP-ANN structure and the transfer functions were also considerably effective. MSEs were not mainly affected by the number of inputs. Finally, considering the main effect plots, it can clearly be said that MSEs will get smaller if the parameter settings given in Table 5 are followed.

Table 4. Taguchi design factors and factor levels.

Factors                 Levels
                        1     2     3     4     5     6     7     8
Training Algorithms     CGB   CGF   CGP   GD    GDA   OSS   GDAM  SCG
Number of Layers        3     4
Number of Inputs        200   300
Transfer Functions      HT    SF

Table 5. Results for ANN parameter analysis.

                        Parameter Settings
Factors                 Means   S/N    Optimum Design
Training Algorithms     CGB     CGB    CGB
Numbers of Layers       3       3      3
Numbers of Inputs       300     300    300
Transfer Functions      SF      SF     SF
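The main-effects analysis behind Table 5 can be sketched as follows: for each factor, average the response (MSE) over all runs at each factor level, and pick the level with the smallest level mean. The run results below are hypothetical placeholders, not the study's actual L16 measurements:

```python
import numpy as np

# Hypothetical runs of the L16 (8^1 x 2^3) design: settings -> observed MSE.
# These four rows are placeholders, not the study's actual measurements.
runs = [
    ({"alg": "CGB", "layers": 3, "inputs": 300, "tf": "SF"}, 0.00038),
    ({"alg": "CGB", "layers": 4, "inputs": 200, "tf": "HT"}, 0.00051),
    ({"alg": "GD",  "layers": 3, "inputs": 300, "tf": "SF"}, 0.00060),
    ({"alg": "GD",  "layers": 4, "inputs": 200, "tf": "HT"}, 0.00067),
]

def main_effects(runs, factor):
    """Level means of one factor: average the response over all runs per level."""
    by_level = {}
    for settings, mse in runs:
        by_level.setdefault(settings[factor], []).append(mse)
    return {level: float(np.mean(v)) for level, v in by_level.items()}

effects = main_effects(runs, "layers")
best_layers = min(effects, key=effects.get)  # level with the smallest mean MSE
```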

After the ANN structure and its training parameters were determined, the training processes were started by applying the fingerprint and face feature sets of the people to the system as inputs and outputs, respectively. The sizes of the input and output vectors were 300 and 176, respectively. The system performs the training processes with these feature sets according to the learning algorithm and the ANN parameters obtained from the Taguchi design method. Even though the feature sets of both Fs&Fs were required in training, only fingerprint feature sets were used in testing. It should be emphasized that the fingerprints used in testing were totally unknown biometric data to the system. The outputs of the system for the unknown test data indicate the accuracy of the system. The success and reliability of the system must be clearly shown by evaluating the ANN outputs against the proper metrics in achieving face parts from fingerprints. The block diagram of the MLP-ANN used in this work is given in Figure 9.
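A minimal sketch of the selected network shape is given below. Only the input size (300), output size (176) and the sigmoid transfer function come from the paper; the hidden-layer width and the random initialization are assumptions for illustration, and no training step is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes shaped like the selected network: 300 fingerprint features in,
# 176 face-model coordinates out (88 reference points x 2 coordinates).
# The hidden-layer width (64) is an assumption; the paper does not report it.
sizes = [300, 64, 176]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def sigmoid(x):
    # The SF transfer function selected by the Taguchi analysis.
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """One forward pass through the MLP: fingerprint features -> face coordinates."""
    for w, b in zip(weights, biases):
        x = sigmoid(x @ w + b)
    return x

face = forward(rng.random(300))  # 176 predicted face-model coordinates
```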

According to the best parameters obtained from the Taguchi method, the MLP-ANN models were trained with a conjugate gradient algorithm that updates weight and bias values according to conjugate gradient back propagation with Powell-Beale restarts (CGB) [56]. Conjugate gradient algorithms (CGAs) execute a very effective search in the conjugate gradient direction. Generally, a learning rate is used to determine the length of the step size. For all CGAs, the search direction is periodically reset to the negative of the gradient. The standard reset point occurs when the number of iterations is equal to the number of network parameters (weights and biases), but there are other reset methods that can improve the efficiency of training [57]. One such reset method was proposed by Powell [41], based on an earlier version proposed by Beale [58].

Figure 9. The block diagram of the MLP-ANN structure.

In principle, feed-forward neural networks for non-linear system identification can use all CGAs. In the first iteration, the CGAs start out by searching in the steepest descent direction, as given in Equation (7):

p_0 = -g_0    (7)

In the equation, p_0 and g_0 are the search vector and the gradient, respectively. Consider x_k to be the estimate of the minimum at the start of the k-th iteration. The k-th iteration then consists of the computation of a search vector p_k, from which the new estimate x_{k+1} is obtained, as given in Equation (8):

x_{k+1} = x_k + a_k p_k    (8)

In the equation, a_k is determined from previous knowledge based upon the theory of the method, or is obtained by a line search. The next search direction is determined so that it is conjugate to the previous search directions. Combining the new steepest descent direction with the previous search direction is the general way of determining the new search direction, as given in Equation (9). In the equation, \beta_k is a positive scalar, and the various versions of conjugate gradient are distinguished by the manner in which the constant \beta_k is computed [59]:

p_k = -g_k + \beta_k p_{k-1}    (9)

Periodically resetting the search direction to the negative of the gradient improves the CGAs, but a restarting method that does not abandon the second derivative information is needed, and the Powell-Beale procedure is effective in this respect. According to the Powell-Beale technique, the algorithm restarts if there is very little orthogonality left between the current gradient and the previous gradient. This is tested with the inequality given in Equation (10). If this condition is satisfied, the search direction is reset to the negative of the gradient:

|g_{k-1}^T g_k| \geq 0.2 \|g_k\|^2    (10)
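The update rules of Equations (7)-(10) can be combined into a small working sketch. The Fletcher-Reeves formula for \beta_k and the two-dimensional quadratic test problem are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def cg_powell_beale(A, b, x0, iters=50):
    """Minimize f(x) = 0.5 x'Ax - b'x by conjugate gradients with the
    Powell-Beale restart test of Equation (10)."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                        # gradient of the quadratic
    p = -g                               # Equation (7): initial steepest descent
    for _ in range(iters):
        alpha = -(g @ p) / (p @ A @ p)   # exact line search (quadratic case)
        x = x + alpha * p                # Equation (8)
        g_new = A @ x - b
        if abs(g @ g_new) >= 0.2 * (g_new @ g_new):
            p = -g_new                   # Equation (10): restart to -g
        else:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves choice of beta_k
            p = -g_new + beta * p              # Equation (9)
        g = g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x

# Small test problem: the minimizer solves A x = b, i.e., x = [0.2, 0.4].
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = cg_powell_beale(A, b, np.zeros(2))
```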

The inputs and outputs of the system were digital representations of the fingerprints and faces of the people, respectively. The feature vectors of the fingerprints, obtained from a commercially available software development kit, contain the local and global feature sets of the fingerprints, including singularities, minutiae points and their parameters [60]. A detailed explanation of the feature extracting algorithms and extensive information about the fingerprint feature sets and their storage format are given in reference [60]. These discriminative data represent the people with high accuracy. The outputs were the feature vectors of the faces, obtained from a feature-based face feature extraction algorithm that was borrowed from Cox et al. [61] and fundamentally modified and adapted to this application. Increasing the number of the reference points from 35 to 88 helped to represent the faces more accurately and sensitively. Face feature sets were also shaped from Cartesian coordinates of the face model reference points, not distances or average measures as given in reference [61]. It was also observed that the feature sets contain enough information about the faces to reconstruct them with high accuracy. The face reference points on the template, on the face image of a person from our database, and the reconstruction of the face model from the reference points are given in Figure 10.
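Since the 176-element output vector is exactly the Cartesian coordinates of the 88 reference points flattened out, the mapping between the two representations is a simple reshape; the helper names below are ours, not from the paper's software:

```python
import numpy as np

N_POINTS = 88   # face-model reference points; 2 * 88 = 176 ANN outputs

def points_to_vector(points):
    """(88, 2) array of (x, y) reference points -> length-176 feature vector."""
    points = np.asarray(points, dtype=float)
    assert points.shape == (N_POINTS, 2)
    return points.reshape(-1)

def vector_to_points(vec):
    """Inverse mapping, used when re-constructing the face model for display."""
    vec = np.asarray(vec, dtype=float)
    assert vec.shape == (2 * N_POINTS,)
    return vec.reshape(N_POINTS, 2)

pts = np.random.default_rng(2).random((N_POINTS, 2))
vec = points_to_vector(pts)
assert np.array_equal(vector_to_points(vec), pts)   # lossless round trip
```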

Figure 10. Face reference points (a) on the template, (b) on a real face image from the database, (c) re-construction of the face model from the reference points.

A flexible design environment for face model re-construction, converting the ANN outputs and/or the desired outputs to visual face models, was also included in the software developed. Indeed, it basically transforms the reference points of the face models into lines. The software is capable of plotting the actual and/or calculated values of the same face in the same platform or in different platforms. It also illustrates the ANN results on the real face images. So, the face model reconstruction handles an important task for the system by creating two different visual evaluation platforms. This re-construction process enables users to carry out the qualitative evaluation processes easily, efficiently and automatically with the support of the developed graphical interface.

At the beginning of the experiment, an enrolment procedure was followed for collecting the biometric data from the people. This enrolment procedure stores the fingerprint and face biometrics of individuals in the biometric system database. During this process a real multimodal database belonging to 120 persons was established. Ten fingerprints of each individual were scanned with a fingerprint scanner, and 10 face images taken from different angles were also captured from each person using a digital camera. A set of examples including the fingerprints and faces of an individual are given in Figure 11 and Figure 12, respectively. Only one frontal face image and one fingerprint belonging to the right-hand index finger of each person were used in this study.

Figure 11. Ten fingerprint images of an individual from our database (from "1" to "10", from left to right, respectively).

Figure 12. Face images captured from different angles from an individual.

The software developed achieves all the tasks of the system, from the enrolment step to the evaluation step, completely. It is expected that generating faces from fingerprints without having any a priori knowledge about faces will attract considerable attention in the science and technology of biometrics, security and industrial applications.

As mentioned earlier, evaluating this system is very critical, since it is a pioneering study claiming to generate the facial parts, including the inner face parts with eyebrows and the face contour with ears, from only fingerprints. So, the success and reliability of the system must be clearly depicted. For that reason, the test processes in this article were mainly divided into two main parts: qualitative and quantitative evaluation platforms.

8. Experim ental Results

In order to carry out the experiments effectively, automatically and easily, a software platform covering Figures 3, 4 and 5 was developed.

In order to generate faces from only fingerprints, the following experimental steps were performed:

1. Read fingerprints and faces from the database.

2. Obtain the feature sets of fingerprints and faces.

3. Establish a network configuration for training.

4. Find optimum parameters with the help of the Taguchi method.

5. Train the network with the selected parameters.

6. Save the results for further use.

7. Test the system against the proper evaluation metrics.

8. Test the system performance based on the 10-fold cross validation technique.

9. Investigate whether the quantitative (objective) evaluation results are consistent with the qualitative (subjective) evaluations based on human perceptual assessment.
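Step 8, the 10-fold split of the 120-person database into 12 test people per fold, can be sketched as follows; the seeded shuffle is an assumption, since the paper does not describe how the folds were drawn:

```python
import numpy as np

def ten_fold_indices(n_people, n_folds=10, seed=0):
    """Split n_people into disjoint test folds (120 people -> 12 per fold)."""
    order = np.random.default_rng(seed).permutation(n_people)
    return np.array_split(order, n_folds)

folds = ten_fold_indices(120)
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(120), test_idx)
    # Train the MLP on the fingerprint/face pairs of train_idx, then generate
    # faces for the fingerprints of the 12 unseen people in test_idx.
    assert len(train_idx) == 108 and len(test_idx) == 12
```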

Previous experiments on predicting faces from fingerprints [5-13] have shown that the relationship between fingerprints and faces can be modeled with high accuracy. In the current experiments, an automatic and intelligent system based on artificial neural networks is designed to generate the faces of people from their fingerprints only. The proposed study has some advantages over the previous studies in the literature. These features are given below:

1. All face parts, including eyebrows, eyes, nose, mouth, face border and ears, were successfully predicted in this study for the first time.

2. The optimal parameters of the ANN model were determined with the help of the Taguchi experimental design technique.

3. A qualitative evaluation procedure was followed in addition to the quantitative evaluation procedure, with some extra quantitative metrics.

4. The 10-fold cross validation technique was applied to analyze and to evaluate the performance and the accuracy of the system more precisely.

Producing face models as close as possible to the real ones is the most critical part of the system in this study. In order to evaluate the performance of the developed system effectively, the test experiments were mainly focused on the two evaluation platforms, qualitative and quantitative, and a 10-fold cross-validation method was followed, as mentioned earlier. The results of the system were tested against these evaluation metrics.

The FMR&FNMR and ROC curve representations are given in Figure 13. In the figure, ROC curves are plotted for each fold separately, but the FMR&FNMR representation curve is drawn using only the average value of all folds for better comparison.

Figure 13. Test results for different representations (TPR: True Positive Rate, FPR: False Positive Rate). (a) FMR&FNMR representation; (b) ROC curves.


As can be seen in Figure 13, the proposed system performs the tasks with high similarity to the desired values. According to the numerical results given in Table 6, the proposed system was also found to be very successful.

The APE, MAE and MAPE values belonging to all test results for each fold of the 10-fold cross validation are demonstrated in Figure 14. Averages of all APEs, MAEs and MAPEs are given in Figure 15.

Figure 14. Results for APEs, MAEs and MAPEs for each fold. (a) APEs of generated faces for each fold; (b) MAEs of generated faces for each fold; (c) MAPEs of generated faces for each fold.


Figure 15. Averages of APEs, MAEs and MAPEs. (a) Averages of APE values of generated faces for each fold; (b) Averages of MAPE and MAE values of generated faces for each fold.


Table 6. Numerical results for comparison.

        Maximum    Mean       Minimum
APE     9.60953    7.68515    6.44791
MSE     0.00067    0.00053    0.00038
SSE     1.40740    1.12700    0.79380
MAE     0.01905    0.01718    0.01482
MAPE    0.05460    0.04367    0.03664

For a more realistic and comprehensive evaluation, all test results at each fold are illustrated in Figure 16 together with the desired values, as used in the qualitative assessment method. Dark and light lines in the figure represent the desired and the generated face features, respectively. The numbers of rank orders in the 10-fold cross validation with 20 participants, as the results of the qualitative assessment method, are given in Table 7.

Figure 16. Results for 10 different test data sets. (a) The first fold of the 10-fold cross validation; (b) the second fold; (c) the third fold; (d) the fourth fold; (e) the fifth fold; (f) the sixth fold; (g) the seventh fold; (h) the eighth fold; (i) the ninth fold; (j) the tenth fold.

Table 7. Number of rank orders in 10-fold cross validation with 20 participants.

No of 10-folds   Rank Levels
                 1    2    3    4    5
The first        0    0    0    4    16
The second       0    0    2    11   7
The third        0    0    6    4    10
The fourth       0    1    3    5    11
The fifth        0    1    2    8    9
The sixth        0    3    5    10   2
The seventh      0    0    2    7    11
The eighth       0    0    4    6    10
The ninth        0    0    5    10   5
The tenth        0    0    0    6    14
Total            0    5    29   71   95

All observers who participated in our qualitative assessment method had normal (20/20) or corrected-to-normal acuity, normal color vision, and no history of ocular pathologies. In the qualitative assessment method, each of the participants assigned a numerical value of 1, 2, 3, 4 or 5 to the results of each fold. Thus, the system results were assigned 200 values in total (ten values per participant). In order to carry out a meaningful quantitative analysis, the rank frequency, that is, the number of times a rank value was assigned (i.e., the number of all the ones, twos, threes, fours and fives for the results), was taken as the operational definition of perceived result quality for each fold. For each condition, the rank frequency was summed across the 10 folds, which resulted in the summed rank frequency (refer to the line "Total" in Table 7). From Table 7, it is clear that the proposed system was assigned the highest number of fives across all folds of the 10-fold cross validation technique. According to the means of the qualitative assessment method, the proposed system produced high quality results that were perceived to have the highest marks. A comparison between the folds of the 10-fold cross validation technique can also be made using Table 7. According to Table 7, the first fold of the system was perceived to have the highest marks; the tenth fold produced imagery that was assigned the second highest number of fives (i.e., essentially perceived as 'second best'); and the seventh fold produced imagery that was assigned the third highest number of fives (i.e., essentially perceived as 'third best').

The Total line of the table indicates the sum of the marks for all the test results. It actually shows the overall system performance from the subjective point of view. According to the total values, 47.5% of the marks were 5, meaning the participants thought that "the results are nearly the same as the desired values, the system is very successful"; 35.5% of the marks were 4, meaning "the results are very similar to the desired values, the system is above average"; 14.5% of the marks were 3, meaning "the results are similar to the desired values, the system success is average"; and 2.5% of the marks were 2, meaning "the results are a bit similar to the desired values, but the system cannot be accepted as successful". None of the participants gave a 1, so none of them thought that the system failed.
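The percentages quoted above follow directly from the "Total" line of Table 7 (200 marks in total); a quick check:

```python
# Rank totals from the "Total" line of Table 7: 200 marks from 20 participants
# grading 10 folds each.
totals = {1: 0, 2: 5, 3: 29, 4: 71, 5: 95}
n_marks = sum(totals.values())                          # 200
percent = {rank: 100.0 * count / n_marks for rank, count in totals.items()}
# percent -> {1: 0.0, 2: 2.5, 3: 14.5, 4: 35.5, 5: 47.5}
```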

All the results obtained from the two different evaluation platforms for each fold of the 10-fold cross-validation technique have strongly demonstrated and clearly confirmed that there is a close relationship among faces and fingerprints. Based on the results reported in this article in various forms, it can be clearly and confidently declared that the proposed face model generation system is very successful in achieving face parts from only fingerprints. The system presented in this paper is a complete system combining all the other recent works introduced in [5-13], and it provides a more complex and distinguished solution for generating the face parts. To the best of our knowledge, investigating relationships among fingerprints and face features including all the face parts has not been studied in the literature so far. It is also the first study that was evaluated with the 10-fold cross validation technique using qualitative evaluation metrics in addition to the quantitative evaluation metrics. The Taguchi experimental design technique was also used to obtain the best ANN parameters for better performance. Extensive experimental results have shown once more that the proposed system yields superior performance and is capable of efficiently generating the face masks from only fingerprints.

9. Conclusions and Future Work

Predicting complete face features with high accuracy just from fingerprints is the principal objective of this paper. In this study a novel approach was developed, used and introduced to successfully achieve this aim. In the proposed study, the relationships among fingerprint and face biometrics were established, and an unknown biometric feature was predicted with high accuracy from a known biometric feature. The results of the two main validation tests proved that the proposed system is very successful in automatically generating the faces from only fingerprints. This study is an improved version of our previous work.

In future research, investigations will be conducted to enhance the face generation process. In addition, a larger multimodal database including Fs&Fs, with international participants, will be collected to investigate the proposed approach. Even though an unknown biometric feature can be obtained from a known biometric feature, the generated feature cannot yet represent faces as they appear in real face pictures. This initial study might help to create new concepts, research areas, and especially new applications in the field of biometrics.

Compared with the results given in the literature, determining the best parameter settings by the Taguchi experimental design technique has improved the results significantly. In addition, it should be noted that predicting more face parts from fingerprints reduced the prediction performance of the system.

For a more objective comparison, the performance and accuracy of the system have been evaluated with the 10-fold cross validation technique using qualitative evaluation metrics in addition to the expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics, illustrating the qualitative properties of the proposed methods as well as providing a quantitative evaluation of their performances.

Acknowledgements

The work in this paper is supported by the Erciyes University Scientific Research Projects (EUBAP) Fund under the project code FBD-09-841.

References and Notes

1. Maio, D.; Maltoni, D.; Jain, A.K.; Prabhakar, S. Handbook of Fingerprint Recognition; Springer-Verlag: New York, NY, USA, 2003.

2. Jain, L.C.; Halici, U.; Hayashi, I.; Lee, S.B.; Tsutsui, S. Intelligent Biometric Techniques in Fingerprint and Face Recognition; CRC Press: New York, NY, USA, 1999.

3. Jain, A.K.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 4-19.

4. Jain, A.K.; Ross, A.; Pankanti, S. Biometrics: a tool for information security. IEEE Trans. Inf. Forensics Security 2006, 1, 125-143.

5. Ozkaya, N.; Sagiroglu, S. Intelligent Face Border Generation System from Fingerprints. Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2008) in IEEE World Congress on Computational Intelligence (WCCI 2008), Hong Kong, China, 1-6 June 2008.

6. Sagiroglu, S.; Ozkaya, N. An Intelligent Automatic Face Contour Prediction System, Advances in Artificial Intelligence. In Lecture Notes in Computer Science (LNCS); Proceedings of the 21st Canadian Conference on Artificial Intelligence (AI 2008), Windsor, Ontario, Canada, 28-30 May 2008; Springer: Berlin/Heidelberg, Germany; Volume 5032, pp. 246-258.

7. Sagiroglu, S.; Ozkaya, N. An Intelligent Automatic Face Model Prediction System. Proceedings of the International Conference on Multivariate Statistical Modelling & High Dimensional Data Mining (HDM 2008), Kayseri, Turkey, 19-23 June 2008.

8. Ozkaya, N.; Sagiroglu, S. Intelligent Face Mask Prediction System. Proceedings of the International Joint Conference on Neural Networks (IJCNN 2008) in IEEE World Congress on Computational Intelligence (WCCI 2008), Hong Kong, China, 1-6 June 2008.

9. Ozkaya, N.; Sagiroglu, S. Translating the Fingerprints to the Faces: A New Approach. Proceedings of the IEEE 16th Signal Processing, Communication and Applications Conference (SIU 2008), Ankara, Turkey, 20-22 April 2008.

10. Sagiroglu, S.; Ozkaya, N. Artificial Neural Network Based Automatic Face Model Generation System from Only One Fingerprint. In Lecture Notes in Computer Science (LNCS), Proceedings of the Third International Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR), Paris, France, 2-4 July 2008; Springer: Heidelberg, Germany; pp. 305-316.

11. Ozkaya, N.; Sagiroglu, S. Face Recognition from Fingerprints. J. Fac. Eng. Archit. Gazi Univ. 2008, 23, 785-794.

12. Sagiroglu, S.; Ozkaya, N. An Intelligent and Automatic Eye Generation System from Only Fingerprints. Proceedings of the Information Security and Cryptology Conference with International Participation, METU Culture and Convention Center, Ankara, Turkey, 23-25 December 2008; pp. 230-238.

13. Sagiroglu, S.; Ozkaya, N. Artificial Neural Network Based Automatic Face Parts Prediction System from Only Fingerprints. Proceedings of the IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms and Applications, Nashville, TN, USA, 30 March-2 April 2009.

14. Sagiroglu, S.; Ozkaya, N. An Intelligent Face Features Generation System from Fingerprints. Turk. J. Elect. Engineer. Comput. Sci. 2009, 17, 183-203.

15. Sagiroglu, S.; Ozkaya, N. An Intelligent and Automatic Face Shape Prediction System from Fingerprints. Intelligent Automation and Soft Computing 2010, in press.

16. Jain, A.K.; Pankanti, S.; Prabhakar, S.; Hong, L.; Ross, A.; Wayman, J.L. Biometrics: A Grand Challenge. Proceedings of the International Conference on Pattern Recognition, Cambridge, UK, August 2004; Volume II, pp. 935-942.

17. Kovacs-Vajna, Z M . A fingerprint verification system based on triangular m atching and dynam ic time warping. IEEE Trans. Pattern Anal. Mach. Intell. 2000,22, 1266-1276.

18. Lum ini, A ..Nanni, L.Tw o-class Fingerprintmateher Patt.Recog. 2006,39. 714-716.

19. H ong L .Jain, A . integrating feces and fegerprints forpersonal ientification. IEEE Trans. Patt. Anal. Mach. Int. 1998,20, 1295-1307.

20. Jain, A K .; H ong, L .; Bolle, R. O n-line fingerprint- verification. IEEE Trans. Patt. Anal. Mach. Int. 1997,79,302-314.

21. Zhou, J.; Gu, J. M odeling orientation fields of fingerprints with rational com plex functions. Patt. Recog. 2004,37, 389-391.

22. H sieh, C T.; Lu, Z .Y .; Li, T C .; M ei, K C . An Effective M ethod To Extract Fingerprint Singular Poin^t, Proceedings of the Fourth Int. Conf./Exhibition on High Performance Computing in the Asia-Pacific Region ,Beijing, China, 2000; pp. 696-699.

23. Rämö, P.; Tico, M .; Onnia, V .; Saarinen, J. Optimized singular point detection algorithm for fingerprint images. Proceeding of Int. Conf. on Image Processing, Thessaloniki, Greece, October 7-10, 2001, pp. 242-245 (2001)

24. Zhang, Q . and Yan, H . Fingerprint clarification based on extraction and analysis of singularities and pseudo ridges. Pattern Recogn. 2004,11, 2233-2243.

25. W ang, X .; Li, J.; N iu, Y . D effnition and extraction of stable points from fingerprint im ages. Pattern Recogn. 2007,40, 1804-1815.

26. Li, J.; Yau, W .Y .; W ang, H . Combining singular points andorientationim age inform ation for fingerprint classficaton. Pattern Recogn .2008,41, 353-366.

27. Kawagoe, M.; Tojo, A. Fingerprint pattern classification. Pattern Recogn. 1984, 17, 295-303.

28. Nilsson, K.; Bigun, J. Localization of corresponding points in fingerprints by complex filtering. Pattern Recogn. Lett. 2003, 24, 2135-2144.

29. Ozkaya, N.; Sagiroglu, S.; Wani, A. An intelligent automatic fingerprint recognition system design. Proceedings of the 5th International Conference on Machine Learning and Applications, Orlando, FL, USA, 2006; pp. 231-238.

30. Ross, A.; Jain, A.K.; Reisman, J. A hybrid fingerprint matcher. Pattern Recogn. 2003, 36, 1661-1673.

31. Cevikalp, H.; Neamtu, M.; Wilkes, M.; Barkana, A. Discriminative common vectors for face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 4-13.

32. Li, S.Z.; Jain, A.K. Handbook of Face Recognition; Springer-Verlag: New York, NY, USA, 2004.

33. Bouchaffra, D.; Amira, A. Structural hidden Markov models for biometrics: fusion of face and fingerprint. Pattern Recogn. 2008, 41, 852-867.

34. Yang, M.H.; Kriegman, D.J.; Ahuja, N. Detecting faces in images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 34-58.

35. Zhao, W.; Chellappa, R.; Phillips, P.J.; Rosenfeld, A. Face recognition: a literature survey. ACM Comput. Surv. 2003, 35, 399-459.

36. Haykin, S. Neural Networks: A Comprehensive Foundation; Macmillan College Publishing Company: New York, NY, USA, 1994.

37. Guven, A. Artificial Neural Network Based Diagnosis of Some of the Eye Diseases Using Ocular Electrophysiological Signals. Ph.D. Thesis, Erciyes University, Kayseri, Turkey, 2006.

38. Sagar, V.K.; Beng, K.J.A. Hybrid Fuzzy Logic and Neural Network Model for Fingerprint Minutiae Extraction. Proceedings of the International Conference on Neural Networks, Washington, DC, USA, 1999; Volume 5, pp. 3255-3259.

39. Nagaty, K.A. Fingerprints classification using artificial neural networks: a combined structural and statistical approach. Neural Netw. 2001, 14, 1293-1305.

40. Maio, D.; Maltoni, D. Neural network based minutiae filtering in fingerprints. Proceedings of the 14th International Conference on Pattern Recognition, Australia, 1998; pp. 1654-1658.

41. Powell, M.J.D. Restart procedures for the conjugate gradient method. Math. Program. 1977, 12, 241-254.

42. Jain, A.; Prabhakar, S.; Pankanti, S. On the similarity of identical twin fingerprints. Pattern Recogn. 2002, 35, 2653-2663.

43. Cummins, H.; Midlo, C. Fingerprints, Palms and Soles: An Introduction to Dermatoglyphics; Dover Publications Inc.: New York, NY, USA, 1961.

44. Youssif, A.A.A.; Chowdhury, M.U.; Ray, S.; Nafaa, H.Y. Fingerprint Recognition System Using Hybrid Matching Techniques. Proceedings of the 6th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2007), Melbourne, Australia, 2007; pp. 1086-1089.

45. Kong, A.; Zhang, D.; Lu, G. A study of identical twins' palmprints for personal verification. Pattern Recogn. 2006, 39, 2149-2156.

46. Jain, A.; Prabhakar, S.; Pankanti, S. Twin Test: On Discriminability of Fingerprints. In Lecture Notes in Computer Science; Springer: Berlin, Germany, 2001; pp. 211-217.

47. Costello, D. Families: the perfect deception: identical twins. Wall Street Journal; cited in Handbook of Fingerprint Recognition; Springer: New York, NY, USA, 2003; p. 26.

48. Bodmer, W.; McKie, R. The Book of Man: The Quest to Discover Our Genetic Heritage; Viking Press: Toronto, ON, Canada, 1994.

49. Cox, I.J.; Ghosn, J.; Yianilos, P.N. Feature-based face recognition using mixture distance. Comput. Vision Pattern Recogn. 1996, 10, 209-216.

50. Novobilski, A.; Kamangar, F.A. Absolute percent error based fitness functions for evolving forecast models. Proceedings of the FLAIRS Conference, Key West, FL, USA, 2001; pp. 591-595.

51. Engen, T. Psychophysics: Scaling Methods. In Experimental Psychology, Sensation and Perception; Kling, J.W., Riggs, L.A., Eds.; Holt, Rinehart and Winston Inc.: New York, NY, USA, 1972; Volume 1, pp. 47-86.

52. Falmagne, J.C. Psychophysical measurement and theory. In Handbook of Perception and Human Performance, Sensory Processes and Perception; Boff, K.R., Kaufman, L., Thomas, J.P., Eds.; John Wiley & Sons: New York, NY, USA, 1986; Volume 1, pp. 1-1–1-64.

53. Wu, Y.; Wu, A. Taguchi Methods for Robust Design; American Society of Mechanical Engineers (ASME): New York, NY, USA, 2000.

54. Phadke, M.S. Quality Engineering Using Robust Design; Prentice-Hall: Englewood Cliffs, NJ, USA, 1989.

55. Wang, H.T.; Liu, Z.J.; Chen, S.X.; Yang, J.P. Application of Taguchi method to robust design of BLDC motor performance. IEEE Trans. Magn. 1999, 35, 3700-3702.

56. The MathWorks, Accelerating the Pace of Engineering and Science. Available online: http://www.mathworks.com/ (accessed in 2008).

57. Neural Network Toolbox Documentation; The MathWorks. Available online: http://www.mathworks.com/ (accessed in 2008).

58. Beale, E.M.L. A derivation of conjugate gradients. In Numerical Methods for Nonlinear Optimisation; Lootsma, F.A., Ed.; Academic Press: London, UK, 1972.

59. Shaheed, M.H. Performance analysis of 4 types of conjugate gradient algorithms in the nonlinear dynamic modelling of a TRMS using feedforward neural networks. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 2004; pp. 5985-5991.

60. Biometrical and Artificial Intelligence Technologies. Available online: http://www.neurotechnologija.com/vf_sdk.html (accessed in 2008).

© 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).