Lip Recognition Based on the DTW Algorithm




Presented by:
Piyush Mittal (211CS2281)
Information Security
Department of Computer Science and Engineering
National Institute of Technology, Rourkela




Overview
French criminologist Edmond Locard first recommended the use of lip prints for criminal identification in 1932.
Lip prints are impressions of human lips left on objects such as drinking glasses, cigarettes, drink containers, and aluminium foils.
The study of human lips as a means of personal identification was started in the 1970s by two Japanese scientists, Yasuo Tsuchihashi and Kazuo Suzuki.
The uniqueness of lip prints makes cheiloscopy especially effective when evidence such as lipstick blot marks, cups, glasses, or even envelopes is discovered at a crime scene.




Overview
Similar to fingerprint patterns, lip prints have the following particular properties: permanence, indestructibility, and uniqueness.
Lip prints are genotypically determined and are therefore unique and stable throughout a human being's life.
Additionally, lip prints are not only unique to an individual but also offer the potential for recognizing an individual's gender.
Lip imprints can be captured with special police materials (paper, special cream, and magnetic powder). The obtained imprint pictures are then scanned.




FEATURE EXTRACTION




1. Image normalization
1.1 Detection of lip area




Detection of the lip area consists of several steps:
  - In the first step, normalization of the image histogram is carried out.
  - Then, pixels whose value is greater than the accepted threshold (180) are set to white.
  - Next, a median filter with a 7×7 mask is used to blur the image.
  - In the last step, binarization is performed according to the following formula:


    I_BIN(x, y) = 1 − round( 0.516 · I(x, y) / I_AVG )

where:
  I(x, y) – value of the pixel at coordinates (x, y) before binarization,
  I_AVG – average value of all image pixels before binarization,
  I_BIN(x, y) – value of the pixel at coordinates (x, y) after binarization.

The value of 0.516 in the formula was experimentally determined.
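A minimal sketch of these steps in Python is given below, assuming the input is a grayscale NumPy array; the histogram normalization is approximated by a simple contrast stretch, and the function name detect_lip_area is hypothetical.

```python
import numpy as np
from scipy.ndimage import median_filter  # assumed available for the 7x7 median blur

def detect_lip_area(gray):
    """Sketch of the lip-area normalization steps (gray: 2-D uint8 array)."""
    img = gray.astype(np.float64)

    # Step 1: histogram normalization (a simple contrast stretch stands in here).
    img = 255.0 * (img - img.min()) / max(img.max() - img.min(), 1.0)

    # Step 2: pixels brighter than the accepted threshold (180) become white.
    img[img > 180] = 255.0

    # Step 3: 7x7 median filter to blur the image.
    img = median_filter(img, size=7)

    # Step 4: binarization, I_BIN = 1 - round(0.516 * I / I_AVG).
    i_avg = img.mean()
    i_bin = 1.0 - np.round(0.516 * img / i_avg)
    return np.clip(i_bin, 0, 1).astype(np.uint8)   # 1 = dark (lip) pixel, 0 = background
```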



1.2 Separation of Upper and Lower Lip
Separation is determined by a curve that runs through the centre of the space between the lips. The designated curve divides the lip print into an upper and a lower lip.




1.3 Lip Print Rotation
The curve obtained in the previous stage is then approximated by a straight line (Fig. 3a). From the resulting straight-line equation, a rotation angle with respect to the X axis can be determined. This allows obtaining a separation line that is parallel to the Cartesian OX axis. The rotated lip print image is shown in Fig. 3b. A sketch of this step follows.
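A brief illustration of the rotation step, assuming the separation curve is available as pixel coordinates; align_lip_print is a hypothetical helper, and any image-rotation routine could be substituted for scipy.ndimage.rotate.

```python
import numpy as np
from scipy.ndimage import rotate  # any image-rotation routine would work here

def align_lip_print(image, curve_x, curve_y):
    """Fit a straight line to the separation curve (given as pixel coordinates)
    and rotate the lip print so that the line becomes parallel to the OX axis."""
    slope, _intercept = np.polyfit(curve_x, curve_y, deg=1)  # y = slope * x + intercept
    angle_deg = np.degrees(np.arctan(slope))                 # rotation angle towards the X axis
    # Rotate by the measured angle (the sign depends on the image coordinate convention).
    return rotate(image, angle_deg, reshape=False, order=0)
```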




Based on the data obtained in steps (1)-(3), we get a lip print image that is rotated and divided into the upper and lower lip (Fig. 4).




2. Lip pattern extraction



2.1 Lip pattern smoothing
This process aims to improve the quality of the lines forming the lip pattern. The 5×5 smoothing masks are depicted in Fig. 5.




The procedure is repeated for all masks depicted in Fig. 5, and the mask with the largest cumulative sum is ultimately selected. For the mask selected in the previous step, the average value of the pixels lying on the elements of the mask is calculated and copied to the central point of the analyzed source image. The effect of image smoothing inside the region of interest is shown in Fig. 6; a sketch of the procedure is given below.
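Because the transcript does not reproduce the masks from Fig. 5, the directional masks in this sketch are illustrative placeholders (horizontal, vertical, and the two diagonals); only the selection-and-averaging rule follows the text.

```python
import numpy as np

# Illustrative 5x5 directional masks (placeholders for the masks shown in Fig. 5):
# each mask is a list of (row, col) offsets inside the 5x5 window.
MASKS = [
    [(2, c) for c in range(5)],       # horizontal line through the centre
    [(r, 2) for r in range(5)],       # vertical line through the centre
    [(i, i) for i in range(5)],       # one diagonal direction
    [(i, 4 - i) for i in range(5)],   # the other diagonal direction
]

def smooth_lip_pattern(img):
    """For every pixel, pick the mask with the largest cumulative sum of the
    pixels under it and write their average to the central point."""
    src = img.astype(np.float64)
    out = src.copy()
    padded = np.pad(src, 2, mode='edge')
    h, w = src.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 5, x:x + 5]
            sums = [sum(window[r, c] for r, c in m) for m in MASKS]
            best = MASKS[int(np.argmax(sums))]
            out[y, x] = np.mean([window[r, c] for r, c in best])
    return out.astype(img.dtype)
```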


2.2 Top-hat transformation

The purpose of this procedure is to emphasize the lines of the lip pattern and separate them from the background. To increase the effectiveness of the algorithm, the transformation is applied twice using different mask sizes: 2×2 to highlight thin lines (up to 3 pixels) and 6×6 to highlight thick lines (more than 3 pixels). The results of the top-hat transformation are depicted in Fig. 7.
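A minimal sketch using OpenCV's morphological top-hat; the white top-hat is assumed here (i.e., pattern lines brighter than their surroundings), and the kernel shapes are taken directly from the mask sizes above.

```python
import cv2

def lip_tophat(gray):
    """Apply the top-hat transform twice: a 2x2 kernel for thin lines (up to 3 px)
    and a 6x6 kernel for thick lines (more than 3 px)."""
    k_thin = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
    k_thick = cv2.getStructuringElement(cv2.MORPH_RECT, (6, 6))
    thin = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, k_thin)
    thick = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, k_thick)
    return thin, thick
```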
2.3 Binarization




This procedure is applied, according to the formula below, to both images resulting from the top-hat transformation. For the thin lines the binarization threshold was set to t = 15, while for the thick lines it was set to t = 100.

    I_BIN(x, y) = 1  for I(x, y) > t
                  0  for I(x, y) <= t
  where:
    I(x, y) – value of the pixel at coordinates (x, y) before binarization,
    t – binarization threshold,
    I_BIN(x, y) – value of the pixel at coordinates (x, y) after binarization.
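The thresholding itself is a one-liner; binarize is a hypothetical helper shown only to make the two thresholds explicit.

```python
import numpy as np

def binarize(img, t):
    """I_BIN(x, y) = 1 where I(x, y) > t, otherwise 0."""
    return (img > t).astype(np.uint8)

# Thresholds reported in the text: t = 15 for thin lines, t = 100 for thick lines.
# thin_bin  = binarize(thin, 15)
# thick_bin = binarize(thick, 100)
```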




The effect of the lip print image binarization is
shown in Fig. 8.




In the last stage, the sub-images for the thin and thick lines are combined into a single image, and the obtained global image is then denoised. For the noise reduction, appropriate 7×7 masks have been designed; they are depicted in Fig. 9.




For each of the masks, the number of black pixels in the highlighted area of the mask is counted. If the number of black pixels is less than 5, the central pixel of the mask is set to white.

Additionally, the 11×11-pixel area around the central point of the mask is searched. If there are fewer than 11 black pixels inside the defined area, the value of the central point of the mask is set to white. An example of the noise reduction is shown in Fig. 10; a sketch of both rules follows.
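Since the exact 7×7 masks from Fig. 9 are not reproduced here, this sketch approximates the "highlighted area" by the full 7×7 neighbourhood and implements the two counting rules from the text.

```python
import numpy as np

def denoise(binary):
    """binary: 2-D array with 1 = black (pattern) pixel, 0 = white background."""
    out = binary.copy()
    p7 = np.pad(binary, 3)    # for the 7x7 neighbourhood
    p11 = np.pad(binary, 5)   # for the 11x11 neighbourhood
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0:
                continue
            # Rule 1: fewer than 5 black pixels in the (approximated) 7x7 mask area.
            if p7[y:y + 7, x:x + 7].sum() < 5:
                out[y, x] = 0
                continue
            # Rule 2: fewer than 11 black pixels in the 11x11 area around the centre.
            if p11[y:y + 11, x:x + 11].sum() < 11:
                out[y, x] = 0
    return out
```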




3. Feature extraction
The feature extraction algorithm is carried out for both the upper and lower lip. The process relies on determining the vertical, horizontal, and diagonal projections of the lip pattern image. Exemplary projections of the lip print image pixels onto the appropriate axes are presented in Fig. 11.

Projections are one-dimensional vectors represented in the form of specialized histograms. Each projection shows the number of black pixels lying along the appropriate direction: horizontal, vertical, and oblique at 45° and 135° angles.
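A compact way to compute the four projection histograms from a binary lip-pattern image (1 = black pixel); which diagonal corresponds to 45° versus 135° depends on the image coordinate convention.

```python
import numpy as np

def projections(binary):
    """Return the four projection histograms of a binary lip-pattern image;
    diagonals are grouped by constant (x + y) and (x - y)."""
    h, w = binary.shape
    ys, xs = np.nonzero(binary)
    horizontal = np.bincount(ys, minlength=h)                        # black pixels per row
    vertical = np.bincount(xs, minlength=w)                          # black pixels per column
    diag_a = np.bincount(xs + ys, minlength=h + w - 1)               # one oblique direction
    diag_b = np.bincount(xs - ys + (h - 1), minlength=h + w - 1)     # the other oblique direction
    return horizontal, vertical, diag_a, diag_b
```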




THE DTW METHOD
Given two sequences Q = {q1, …, qn} and U = {u1, …, um} to be compared, a matrix D of size n×m is built in the first stage. It allows the two sequences Q and U to be aligned. The matrix element D(i, j) contains the distance between the points qi and uj, so D(i, j) = d(qi, uj).

In this study, the Euclidean distance was applied.

On the basis of the elements D(i, j), the so-called sequence matching cost has to be determined. The lower the matching cost, the more similar the sequences Q and U are.
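Building the matrix D for two projection histograms; since the sequence elements are scalar bin counts, the Euclidean distance reduces to an absolute difference (distance_matrix is a hypothetical helper).

```python
import numpy as np

def distance_matrix(Q, U):
    """D(i, j) = d(q_i, u_j); for scalar projection bins the Euclidean distance
    is simply |q_i - u_j|."""
    Q = np.asarray(Q, dtype=float)
    U = np.asarray(U, dtype=float)
    return np.abs(Q[:, None] - U[None, :])   # shape (n, m)
```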




In the next stage, the warping path W is determined. The path W consists of a subset of the elements of the matrix D, which defines a mapping between the sequences Q and U.

The warping path can be written as:

    W = w1, w2, ..., wl,    max(n, m) ≤ l ≤ n + m − 1

The element wh of the path W is defined as:

    wh = D(i, j),    h = 1, ..., l,    i = 1, ..., n,    j = 1, ..., m




A correctly determined path W has to fulfill a few conditions:

  - The first element of the sequence Q must be matched to the first element of the sequence U:
        w1 = w(1, 1) = D(1, 1)
  - The last element of the sequence Q must be matched to the last element of the sequence U:
        wl = w(n, m) = D(n, m)
  - Successive assignments in the path cannot involve elements of the sequences that are more than one time instant apart:
        it − it−1 ≤ 1   and   jt − jt−1 ≤ 1
  - Points of the warping path W must be arranged monotonically in time:
        it − it−1 ≥ 0   and   jt − jt−1 ≥ 0




The D matrix together with the warping path for
two sample sequences is shown in Fig. 12.




The elements wh of the path W can be found very efficiently using dynamic programming. Determination of the path W starts from the upper right corner of the populated matrix D. In the first step i = n and j = m, so wl = D(n, m). The next cell coordinates in the matrix D are then determined from the path-selection formula:
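The original slide shows this formula only as an image that is not reproduced here; the standard DTW recurrence, assumed in the sketch below, accumulates costs as C(i, j) = D(i, j) + min(C(i−1, j−1), C(i−1, j), C(i, j−1)) and backtracks by always moving to the neighbouring cell with the smallest cumulative cost.

```python
import numpy as np

def dtw(Q, U):
    """Build the cumulative-cost matrix with the standard DTW recurrence, then
    backtrack the warping path W from D(n, m) down to D(1, 1).
    Returns the path (as 1-based index pairs) and the total matching cost."""
    D = np.abs(np.subtract.outer(np.asarray(Q, float), np.asarray(U, float)))
    n, m = D.shape
    C = np.full((n + 1, m + 1), np.inf)   # 1-indexed cumulative costs, padded border
    C[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            C[i, j] = D[i - 1, j - 1] + min(C[i - 1, j - 1], C[i - 1, j], C[i, j - 1])

    # Backtracking starts from the upper-right corner: w_l = D(n, m).
    i, j = n, m
    path = [(i, j)]
    while (i, j) != (1, 1):
        steps = [(C[i - 1, j - 1], i - 1, j - 1),
                 (C[i - 1, j],     i - 1, j),
                 (C[i, j - 1],     i,     j - 1)]
        _, i, j = min(steps)              # neighbouring cell with minimal cumulative cost
        path.append((i, j))
    path.reverse()
    return path, C[n, m]
```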




Now, on the basis of all elements w1, w2, …, wl of the path W, the total (cumulative) matching cost γ can be calculated:
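The formula itself appears only as an image in the original slide; a common definition, assumed here, sums the local distances collected along the warping path (sometimes divided by the path length l to normalize for sequence length):

    γ(Q, U) = w1 + w2 + … + wl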




Comparison of the lip print projections was done using the following algorithm:

1. Matching of the horizontal, vertical, and oblique (45° and 135°) projections of the tested and template lip prints using the DTW algorithm (separately for the upper and lower lip).

2. Computation of the matching costs of all corresponding projections by means of the cost formula above, and averaging of the result (a sketch is given below).
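Putting the two steps together; compare_lip_prints is a hypothetical wrapper that reuses the dtw() sketch above and averages the matching costs of the corresponding projections.

```python
def compare_lip_prints(test_projections, template_projections):
    """test_projections / template_projections: matching lists of projection
    histograms (horizontal, vertical, 45 and 135 degrees, for the upper and
    lower lip). A lower score means a better match."""
    costs = [dtw(q, u)[1] for q, u in zip(test_projections, template_projections)]
    return sum(costs) / len(costs)
```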




DTW paths for the projections of two different sample lip prints are shown in Fig. 13.




CONCLUSIONS AND FUTURE WORK
The results obtained by the proposed method are good and indicate the possibility of using this approach in forensic identification systems.

In future studies, further improvement of lip print image quality will also be performed. It is also planned to compare a larger number of projections generated for different angles.

Additionally, studies are planned in which only part of the lip print will be analyzed.




REFERENCES
• L. Smacki, K. Wrobel, and P. Porwik, "Lip Print Recognition Based on DTW Algorithm," Department of Computer Systems, University of Silesia, Katowice, Poland, 2011.

• E. J. Keogh and M. J. Pazzani, "Derivative Dynamic Time Warping," Proc. First SIAM International Conference on Data Mining, Chicago, USA, 2001, pp. 1-11.




Any Suggestions?




For more information, visit www.piyushmittal.in
