International Journal of Electrical and Computer Engineering (IJECE)
Vol. 7, No. 5, October 2017, pp. 2530~2536
ISSN: 2088-8708, DOI: 10.11591/ijece.v7i5.pp2530-2536
Journal homepage: http://iaesjournal.com/online/index.php/IJECE
A New Approach of Iris Detection and Recognition
Rubel Biswas, Jia Uddin, Md. Junayed Hasan
Department of Computer Science and Engineering, BRAC University, Bangladesh
Article Info ABSTRACT
Article history:
Received Feb 26, 2017
Revised Jun 17, 2017
Accepted Sep 11, 2017
This paper proposes an iris recognition and detection model for measuring
e-security. The proposed model consists of the following blocks:
segmentation and normalization, feature encoding and feature extraction, and
classification. In the first phase, histogram equalization and Canny edge detection
are used for object detection, and then the Hough transform is utilized to
detect the center of the pupil of an iris. In the second phase, Daugman's
rubber sheet model and a Log-Gabor filter are used for normalization and
encoding, a GNS (Global Neighborhood Structure) map is used as the feature
extraction method, and finally the extracted GNS features are fed to an SVM
(Support Vector Machine) for training and testing. For our tested dataset,
experimental results demonstrate 92% accuracy for the real portion and 86%
accuracy for the imaginary portion for both eyes. In addition, our proposed model
outperforms two other conventional methods, exhibiting higher accuracy.
Keywords:
Daugman’s rubber sheet
DNS (Dominant Neighborhood
Structure)
E-Security
GNS (Global Neighborhood
Structure)
Iris recognition
Copyright © 2017 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Jia Uddin, Assistant Professor
Department of Computer Science and Engineering,
BRAC University,
66 Mohakhali, Dhaka 1212, Bangladesh.
Email: jia.uddin@bracu.ac.bd
1. INTRODUCTION
Biometric recognition is a reliable way to authenticate the identity of an individual. For
biometric authentication, several stable physical characteristics, such as fingerprints, voice, hand
geometry, handwriting, the retina, and the iris, are used. Most of these characteristics require
physical contact with a sensing device or some special actions [1]. To overcome barriers such as
physical contact, automated recognition techniques offer a less invasive alternative. Iris recognition
is one of them [13].
Figure 1. Details of the portions of the human eye
This noninvasive verification technique for identifying individuals is practical
because the pattern of the human iris is unique, distinctive, and stable throughout adult life [1]. Figure 1
illustrates a generalized overview of the human eye, showing the position of each of its portions.
From the human eye, we can obtain the pattern of the iris, which is distinctive and reliable for human
identification. A number of researchers have previously worked on iris recognition. In most
conventional methods, researchers worked on either iris localization or iris pattern recognition. For iris
localization, most researchers use Daugman's iris recognition model [2]. In Daugman's model, a 256-byte
code is generated by quantizing the local phase angle based on the filtered image. Taking a
quite different approach, the Wildes system relied on normalized correlation (goodness-of-match
values) and Fisher's linear discriminant for pattern identification [3]. Jiali Cui et al. [4] proposed an iris
detection algorithm based on principal component analysis. HyungGu Lee et al. utilized binary features
for iris detection [5].
In most of these methods, except Daugman's model, the feature vector is not the main element of the
detection technique. A feature-vector approach is more reliable for numerically representing the character
of an object, for use in statistical procedures, and for expressing characteristics as fixed-dimensional
vectors. Besides this, selecting the optimal features is also a vital criterion.
In our proposed model, we utilize Daugman's model. After feature encoding, DNS and GNS
maps [6, 7] are used to reduce the size of the feature vector and to extract the feature vectors of our tested
iris dataset. For machine learning in the recognition phase, a single-class support vector machine has been
used.
In addition, we have compared the performance of our proposed approach, under the same
environment, with two other approaches: one proposed by Li M. et al. [8] and another by K. Sathiyaraja
et al. [9]. Recent performance comparisons in the area of iris recognition and detection depend on
how far accuracy, efficiency, and scalability can be increased [15].
Iris detection and recognition is an important pre-processing step in automatic systems, and a
well-designed technique can improve the accuracy of collecting clear iris images and marking noise areas
[16]. The paper is organized as follows: Section 2 describes our proposed model in depth,
Section 3 presents the experiments carried out and their analysis, and Section 4
concludes the paper.
2. RESEARCH METHOD
Figure 2 presents a detailed implementation of our research model, in which we highlight the major
portions by drawing separate blocks.
Figure 2. The full steps of the iris detection and recognition model
2.1. Segmentation
At the beginning of processing the input image, some steps are required to ensure better
performance from the system. We use histogram equalization to adjust the image intensities and
enhance contrast, and we apply an image adjustment to improve edge detection. For
the edge detection part, we use the Canny Edge Detection Algorithm (CEDA) [10], a multi-stage
algorithm that detects a wide range of edges in an image. Smoothing the image with a Gaussian
filter removes noise; the algorithm then computes the intensity gradients of the image and applies
non-maximum suppression to suppress spurious edge responses. As shown in the Segmentation portion (A)
of Figure 2, the Hough transform is used for pupil and boundary selection, and the midpoint algorithm is
used for sclera/iris detection.
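As an illustration of this preprocessing stage, the following minimal Python sketch uses OpenCV; the file name, adjustment factors, and Canny thresholds are assumptions chosen for illustration, since the paper does not state its exact parameter values.

import cv2

# Load an eye image in grayscale (file name is illustrative).
eye = cv2.imread("casia_eye.png", cv2.IMREAD_GRAYSCALE)

# Histogram equalization to enhance contrast before edge detection.
equalized = cv2.equalizeHist(eye)

# Simple intensity adjustment (illustrative contrast scaling).
adjusted = cv2.convertScaleAbs(equalized, alpha=1.2, beta=0)

# Canny edge detection; thresholds are illustrative, not the paper's values.
edge_map = cv2.Canny(adjusted, threshold1=50, threshold2=150)

cv2.imwrite("edge_map.png", edge_map)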
For iris detection, we implemented an automatic segmentation process with the help of two
algorithms. Initially, we check for corneal reflections: we take the complement of the given image and
remove the bright reflection points. After that, we compute the first derivatives of the intensity values of
the image and threshold the result to generate an edge map. The parameters of the circles (center
coordinates and radius) are then evaluated by voting in Hough space. To detect the pupil, we bias the
first derivative in the vertical direction. Finally, we draw the outer circle by doubling the pupil radius.
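A minimal sketch of this pupil localization step with OpenCV is shown below; the reflection threshold, the inpainting step, and the HoughCircles parameters are assumptions chosen for illustration, not values reported in the paper.

import cv2
import numpy as np

eye = cv2.imread("casia_eye.png", cv2.IMREAD_GRAYSCALE)

# Suppress bright corneal reflection points before circle detection.
_, reflections = cv2.threshold(eye, 230, 255, cv2.THRESH_BINARY)
eye_clean = cv2.inpaint(eye, reflections, 5, cv2.INPAINT_TELEA)

# Vote for circle parameters (center coordinates and radius) in Hough space.
circles = cv2.HoughCircles(eye_clean, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=150, param2=30, minRadius=20, maxRadius=80)

if circles is not None:
    cx, cy, r_pupil = np.round(circles[0, 0]).astype(int)
    r_iris = 2 * r_pupil   # outer circle drawn by doubling the pupil radius
    print("pupil center:", (cx, cy), "pupil radius:", r_pupil, "iris radius:", r_iris)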
2.2. Normalization
Daugman's model [2] is used to normalize the segmented iris regions. We ensure that all
normalized images have the same resolution, with the center of the pupil taken as the reference point, and
pass radial vectors through the iris region. The number of data points selected along each radial line is
known as the radial resolution, while the number of radial lines going around the iris region is defined as
the angular resolution [14].
Because the pupil is not concentric with the iris, a remapping formula is needed to rescale points
depending on the angle around the circle. The formula is

r' = \sqrt{\alpha}\,\beta \pm \sqrt{\alpha\beta^{2} - \alpha + r_{I}^{2}}   (1)

with

\alpha = O_{x}^{2} + O_{y}^{2}   (2)

\beta = \cos\!\left(\pi - \arctan\!\left(\frac{O_{y}}{O_{x}}\right) - \theta\right)   (3)

Here, (O_x, O_y) is the displacement of the pupil center relative to the iris center, r' is the distance
between the pupil boundary and the iris boundary along the direction \theta, \theta is the angle at which the
boundary points are taken, and r_I is the radius of the iris [14].
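A direct transcription of Eqs. (1)-(3) in Python might look like the sketch below; numpy's arctan2 is used in place of arctan(O_y/O_x) for quadrant robustness, which is an implementation choice rather than something stated in the paper.

import numpy as np

def boundary_distance(ox, oy, r_iris, theta):
    """Distance r' from the pupil center to the iris boundary along angle theta, per Eqs. (1)-(3)."""
    alpha = ox ** 2 + oy ** 2
    beta = np.cos(np.pi - np.arctan2(oy, ox) - theta)
    # The +/- in Eq. (1) selects which intersection is taken; the + root is used here.
    return np.sqrt(alpha) * beta + np.sqrt(alpha * beta ** 2 - alpha + r_iris ** 2)

# Example: pupil center displaced by (3, 2) pixels, iris radius 80, angle 45 degrees.
print(boundary_distance(3.0, 2.0, 80.0, np.pi / 4))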
Figure 3. Rectangular sheet formation from the circular area of Daugman's rubber sheet model
The function first gives the iris region a 'doughnut' form based on the angle. From this
'doughnut'-shaped iris region we construct a 2D array whose horizontal dimension is the angular resolution
and whose vertical dimension is the radial resolution [1]. Figure 3 visualizes how the rectangular sheet is
produced from the circular region of Daugman's rubber sheet model.
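The sketch below unwraps the iris annulus into such a 2D array. For simplicity it assumes the pupil and iris circles are concentric; a full implementation would replace the fixed outer radius with the angle-dependent distance from Eqs. (1)-(3). The resolutions and circle parameters are illustrative.

import numpy as np
import cv2

def rubber_sheet(image, cx, cy, r_pupil, r_iris, radial_res=20, angular_res=240):
    """Unwrap the annular iris region into a (radial_res x angular_res) rectangular sheet."""
    sheet = np.zeros((radial_res, angular_res), dtype=image.dtype)
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    for j, theta in enumerate(thetas):
        for i in range(radial_res):
            r = i / (radial_res - 1)                     # r in [0, 1]
            radius = (1 - r) * r_pupil + r * r_iris      # pupil edge -> iris edge
            x = int(round(cx + radius * np.cos(theta)))
            y = int(round(cy + radius * np.sin(theta)))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                sheet[i, j] = image[y, x]
    return sheet

# Example usage with pupil parameters found during segmentation (illustrative values).
eye = cv2.imread("casia_eye.png", cv2.IMREAD_GRAYSCALE)
normalized = rubber_sheet(eye, cx=160, cy=140, r_pupil=40, r_iris=80)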
2.3. Feature Encoding
The template matrix is sized by doubling the number of columns of the normalized iris
image while keeping the same number of rows, because both the real and the imaginary values
participate in our template. The normalized iris pattern is then convolved with 1D Log-Gabor wavelets: 1D
signals are first generated from the 2D normalized iris pattern, and the Log-Gabor filter is applied to those
1D signals. In the Log-Gabor equation we used the following values: f0 is set to 18, which corresponds to a
scale-4 Gabor wavelet, and from experiments we set the ratio of sigma to frequency to 0.5.
G(f) = \exp\!\left(\frac{-\left(\log(f/f_{0})\right)^{2}}{2\left(\log(\sigma/f_{0})\right)^{2}}\right)   (4)
where G is the Log-Gabor filter response and f0 and σ are the parameters of the filter, with f0 giving the
center frequency [12]. The Log-Gabor filter returns a matrix of complex-valued elements with the
size of the normalized iris image. Two new matrices are then created from the real and imaginary
parts of the returned matrix, and their raw data are converted to a pseudo-polar coordinate
system. The values of the real-part matrix and the imaginary-part matrix are then converted into binary
values. Finally, by merging these two matrices we obtain the template of a person. The data are arranged so
that the odd columns contain the real-part values and the even columns hold the imaginary-part values. We
also calculated absolute values. Figure 4 summarizes the phase quantization process, visualizing how the
real and imaginary responses of the image are collected after applying the Log-Gabor filter.
Figure 4. Phase quantization process of feature encoding
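A minimal sketch of this encoding step is given below, following a common Masek-style implementation: the paper states f0 = 18, which is read here as a minimum wavelength of 18 pixels (f0 = 1/18), with σ/f0 = 0.5 as in the text; this interpretation and the row-wise FFT filtering are assumptions.

import numpy as np

def log_gabor_1d(n, wavelength=18, sigma_on_f=0.5):
    """Frequency response of a 1D Log-Gabor filter (Eq. (4)), with zero DC component."""
    f = np.fft.fftfreq(n)               # normalized frequencies
    f0 = 1.0 / wavelength               # center frequency
    g = np.zeros(n)
    pos = f > 0
    g[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    return g

def encode(normalized_sheet):
    """Filter each row of the rubber-sheet image and quantize the phase into sign bits."""
    rows, cols = normalized_sheet.shape
    g = log_gabor_1d(cols)
    template = np.zeros((rows, 2 * cols), dtype=np.uint8)       # columns doubled
    for i in range(rows):
        response = np.fft.ifft(np.fft.fft(normalized_sheet[i].astype(float)) * g)
        template[i, 0::2] = (response.real > 0).astype(np.uint8)  # odd columns: real bit
        template[i, 1::2] = (response.imag > 0).astype(np.uint8)  # even columns: imaginary bit
    return template

# 'normalized' from the previous sketch can be passed in; a random placeholder works too.
template = encode(np.random.rand(20, 240))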
2.4. Texture Feature Extraction
For feature extraction, we construct the DNS [6] maps from the Gabor-filtered image obtained in the
feature encoding phase, according to the statistical parameters of Table 1, which lists the exact parameters
of the GNS and DNS maps. We then calculate GNS feature vectors by averaging the DNS [6, 7] values of
the filtered image. To select the most significant features from the GNS map, which exhibits spatial
textures, we take only the features lying on concentric circles of various radii around the center of
the map. In our experiment, 8 and 16 features from the first two innermost circles and 24 uniformly spaced
angular features from each of the other 8 circles are used to construct the feature vector. Therefore, the
feature vector has 216 dimensions (8+16+8×24). Figure 6 exhibits the steps of GNS and DNS map
extraction from the Gabor-filtered image.
Table 1. Statistical parameters of DNS and GNS maps used in the experiment
Parameters Values
Searching window size 21x21
Neighbor window 13x13
Number of central pixels 144
Size of DNS map 21x21
Size of GNS map 21x21
Gap between two central pixels 5
Total number of dimensions of feature vector 216
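The concentric-circle sampling that yields the 216-dimensional vector can be sketched as below. The construction of the DNS/GNS map itself follows [6, 7] and is not reproduced here, so the 21x21 map is taken as given; the circle radii are illustrative.

import numpy as np

def circle_features(gns_map, radius, n_samples):
    """Sample a GNS map at n_samples uniformly spaced angles on a circle of the given radius."""
    cy, cx = gns_map.shape[0] // 2, gns_map.shape[1] // 2
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, gns_map.shape[0] - 1)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, gns_map.shape[1] - 1)
    return gns_map[ys, xs]

def gns_feature_vector(gns_map):
    """216 features: 8 + 16 from the two innermost circles, 24 from each of 8 outer circles."""
    samples_per_circle = [8, 16] + [24] * 8
    radii = np.linspace(1, gns_map.shape[0] // 2, len(samples_per_circle))  # illustrative radii
    parts = [circle_features(gns_map, r, n) for r, n in zip(radii, samples_per_circle)]
    return np.concatenate(parts)                    # 8 + 16 + 8*24 = 216 values

# Example with a placeholder 21x21 GNS map (in the real pipeline this comes from DNS averaging).
vector = gns_feature_vector(np.random.rand(21, 21))
assert vector.shape == (216,)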
2.5. Training
For training and testing we utilize a single-class SVM with a Gaussian radial basis kernel function [6, 7].
The Gaussian radial basis kernel function is represented as

k(SV_{i}, SV_{j}) = \exp\!\left(-\frac{\left\| SV_{i} - SV_{j} \right\|^{2}}{2\sigma^{2}}\right)   (5)

where k(SV_i, SV_j) is the kernel function, SV_i and SV_j are the input feature vectors, and σ is a
user-set parameter that determines the effective width of the kernel [14].
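A minimal scikit-learn sketch of this training step is given below. The paper specifies a single-class SVM with the Gaussian kernel of Eq. (5); here scikit-learn's OneClassSVM is used as one possible realization, with gamma = 1/(2σ²) mapping to the kernel width of Eq. (5). The feature arrays, σ, and nu are placeholders.

import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder data: one 216-dimensional GNS feature vector per training/test image.
X_train = np.random.rand(35, 216)    # e.g. the 70% training portion (illustrative)
X_test = np.random.rand(15, 216)     # the remaining 30% (illustrative)

sigma = 1.0                                            # user-set kernel width, as in Eq. (5)
model = OneClassSVM(kernel="rbf", gamma=1.0 / (2 * sigma ** 2), nu=0.1)
model.fit(X_train)

# +1 means a test vector is accepted as the enrolled subject, -1 means it is rejected.
predictions = model.predict(X_test)
print("acceptance rate:", np.mean(predictions == 1))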
3. RESULTS AND ANALYSIS
To validate the proposed model, the CASIA-Iris-Interval V3.0 dataset [11] has been used; the same
dataset is used for the comparison. This dataset consists of six subsets, from which we considered 200
images in total, using 70% of the data for training and 30% for testing for each eye of each individual.
Figure 5. The output blocks of the whole process to obtain the GNS for the right eye
Figure 5 demonstrates the output blocks of each step of our proposed model for the right eye, from
detection to GNS. After detecting the eye, we eliminate the reflection and then detect the inner
circle. From the rubber sheet image of the eye, we obtain the imaginary and real responses. Then we compute the
GNS map and, from that result, obtain the real, imaginary, and absolute results. For the iris detection phase,
we have considered the real and imaginary values.
Table 2. Experimental results for training and testing for both eyes with their real and imaginary values
Eye        Considered Value   Train/person   Test/person   Recognized out of 50 persons   Percentage of Accuracy
Left       Real               70%            30%           44                             88%
Left       Imaginary          70%            30%           40                             80%
Right      Real               70%            30%           43                             86%
Right      Imaginary          70%            30%           39                             78%
Combined   Real               70%            30%           46                             92%
Combined   Imaginary          70%            30%           43                             86%
According to Table 2, our proposed model gives 92% accuracy for combined real-portion
recognition and 86% accuracy for the combined imaginary portion. In every case the real part exhibits
accuracy of at least 86%, while the imaginary part exhibits at least 78%, which amounts to a good level of
accuracy. We have compared our proposed model with two other conventional approaches: one by
Li M. et al. [17] and the other by K. Sathiyaraja et al. [18].
Figure 6. Comparison of our model, algorithm 1, and algorithm 2 for the real and imaginary values of both eyes
Figure 6 shows that, for the combined real and imaginary parts, our proposed model performs better
than the two other stated models: Li M. et al. [17] as algorithm 1 and K. Sathiyaraja et al. [18] as
algorithm 2. For the real portion, algorithms 1 and 2 give 86% and 84%, respectively, while ours gives
92%, which is a significant improvement. For the imaginary portion, our model gives 86% recognition
accuracy, while algorithms 1 and 2 give 82% and 81%.
4. CONCLUSION
In our proposed model, we have used the Hough transform for segmentation and Daugman's rubber
sheet model for normalization. To reduce the size of the feature vector we have used GNS and DNS
mapping, which noticeably improves the experimental results. Overall, we obtained 92% accuracy for the
real portion and 86% accuracy for the imaginary portion for both eyes, along with a reduced feature vector
of 216 dimensions.
REFERENCES
[1] R. P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
[2] J. Daugman, "How iris recognition works," Proceedings of the 2002 International Conference on Image Processing, vol. 1, 2002.
[3] R. P. Wildes, J. C. Asmuth, et al., "A System for Automated Iris Recognition," Proceedings of the Second IEEE Workshop on Applications of Computer Vision, 1994, pp. 121-128.
[4] J. Cui, Y. Wang, J. Huang, T. Tan, and Z. Sun, "An Iris Image Synthesis Method Based on PCA and Super-resolution," Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), 2004.
[5] A. Yuniarti, "Classification and numbering of dental radiographs for an automated human identification system," TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 10, no. 1, pp. 137-146, 2012.
[6] F. M. Khellah, "Texture Classification Using Dominant Neighborhood Structure," IEEE Transactions on Image Processing, vol. 20, no. 11, pp. 3270-3279, 2011.
[7] J. Uddin, M. Kang, D. V. Nguyen, and J.-M. Kim, "Reliable fault classification of induction motors using texture feature extraction and a multiclass support vector machine," Mathematical Problems in Engineering, vol. 2014, Article ID 814593, 9 pages, 2014.
[8] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient Iris Recognition by Characterizing Key Local Variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739, 2004.
[9] I. K. G. D. Putra and E. Erdiawan, "High performance palmprint identification system based on two dimensional gabor," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 8, no. 3, pp. 309-318, 2010.
[10] "The CASIA iris image database," [Online]. Available: http://biometrics.idealtest.org.
[11] D. J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," Journal of the Optical Society of America A, 1987, pp. 2379-2394.
[12] I. Naseem, A. Aleem, R. Togneri, and M. Bennamoun, "Iris recognition using class-specific dictionaries," Computers & Electrical Engineering, 2016.
[13] A. M. Anwar, "An Iris detection and recognition system to measure the performance of E-security," Dissertation, BRAC University, 2016.
[14] G. Indrawan, S. Akbar, and B. Sitohang, "Fingerprint Direct-Access Strategy Using Local-Star-Structure-based Discriminator Features: A Comparison Study," International Journal of Electrical and Computer Engineering (IJECE), vol. 4, no. 5, pp. 817-830, October 2014.
[15] S. Saparudin, S. Akbar, and G. Sulong, "Segmentation of Fingerprint Image Based on Gradient Magnitude and Coherence," International Journal of Electrical and Computer Engineering (IJECE), vol. 5, no. 5, pp. 834-849, October 2015.