1
Lexical Analysis
• Basic Concepts & Regular Expressions
• What does a Lexical Analyzer do?
• How does it Work?
• Formalizing Token Definition & Recognition
• Reviewing Finite Automata Concepts
• Non-Deterministic and Deterministic FA
• Conversion Process
• Regular Expressions to NFA
• NFA to DFA
• Relating NFAs/DFAs /Conversion to Lexical Analysis
2
Lexical Analyzer in Perspective
[Diagram: source program → lexical analyzer; lexical analyzer ⇄ parser (token / get next token); both boxes access the symbol table]
Important Issue:
• What are Responsibilities of each Box ?
• Focus on Lexical Analyzer and Parser.
3
Lexical Analyzer in Perspective
• LEXICAL ANALYZER
• Scan Input
• Remove WS, NL, …
• Identify Tokens
• Create Symbol Table
• Insert Tokens into ST
• Generate Errors
• Send Tokens to Parser
• PARSER
• Perform Syntax Analysis
• Actions Dictated by Token Order
• Update Symbol Table Entries
• Create Abstract Rep. of Source
• Generate Errors
• And More…. (We’ll see later)
4
What Factors Have Influenced the
Functional Division of Labor ?
• Separation of Lexical Analysis From Parsing Presents
a Simpler Conceptual Model
• A parser embodying the conventions for comments and white space is
significantly more complex than one that can assume comments and
white space have already been removed by the lexical analyzer.
• Separation Increases Compiler Efficiency
• Specialized buffering techniques for reading input characters and
processing tokens…
• Separation Promotes Portability.
• Input alphabet peculiarities and other device-specific anomalies can be
restricted to the lexical analyzer.
5
Introducing Basic Terminology
• What are Major Terms for Lexical Analysis?
• TOKEN
• A pair consisting of a token name and an optional attribute value.
• A particular keyword, or a sequence of input characters denoting an
identifier.
• PATTERN
• A description of a form that the lexemes of a token may take.
• For keywords, the pattern is just a sequence of characters that
form keywords.
• LEXEME
• The actual sequence of characters that matches a pattern and is
classified by a token
6
Introducing Basic Terminology
Token     Sample Lexemes          Informal Description of Pattern
const     const                   const
if        if                      the characters i, f
relation  <, <=, =, <>, >, >=     < or <= or = or <> or > or >=
id        pi, count, D2           letter followed by letters and digits
num       3.1416, 0, 6.02E23      any numeric constant
literal   “core dumped”           any characters between “ and “ except “

A token classifies a pattern. Actual values are critical. Info is :
1. Stored in symbol table
2. Returned to parser
7
Attributes for Tokens
• When more than one lexeme can match a pattern, a lexical
analyzer must provide the compiler additional information
about the lexeme that matched.
• Information about an identifier (its lexeme, its type, and the location
at which it was first found) is kept in the symbol table.
• The appropriate attribute value for an identifier is a pointer
to the symbol table entry for that identifier.
8
Attributes for Tokens
Tokens influence parsing decisions;
The attributes influence the translation of tokens.
Example: E = M * C ** 2
<id, pointer to symbol-table entry for E>
<assign_op, >
<id, pointer to symbol-table entry for M>
<mult_op, >
<id, pointer to symbol-table entry for C>
<exp_op, >
<num, integer value 2>
9
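The token/attribute stream above can be produced by a small scanner. Below is a minimal Python sketch (the names `tokenize` and `spec`, and the index-based symbol table, are our own illustration, not the slides' design) that turns E = M * C ** 2 into <token, attribute> pairs, with identifiers carrying a "pointer" (here, an index) into the symbol table:

```python
import re

def tokenize(source):
    symtab = []                       # symbol table: identifier lexemes
    spec = [('id', r'[A-Za-z_]\w*'),  # identifier
            ('num', r'\d+'),          # integer constant
            ('exp_op', r'\*\*'),      # ** must be tried before *
            ('mult_op', r'\*'),
            ('assign_op', r'=')]
    out, pos = [], 0
    while pos < len(source):
        if source[pos].isspace():
            pos += 1                  # whitespace yields no token
            continue
        for name, pattern in spec:
            m = re.match(pattern, source[pos:])
            if m:
                lexeme = m.group()
                if name == 'id':
                    if lexeme not in symtab:
                        symtab.append(lexeme)
                    attr = symtab.index(lexeme)   # "pointer" to ST entry
                elif name == 'num':
                    attr = int(lexeme)            # integer value
                else:
                    attr = None                   # operators carry no attribute
                out.append((name, attr))
                pos += len(lexeme)
                break
        else:
            raise ValueError(f'lexical error at position {pos}')
    return out, symtab

tokens, symtab = tokenize('E = M * C ** 2')
```

The seven pairs match the slide, with E, M, C entered into the symbol table in order of first appearance.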
Handling Lexical Errors
• It is hard for the lexical analyzer to tell, without the aid of other
components, that there is a source-code error.
• If the string fi is encountered for the first time in a C
program, the lexical analyzer cannot tell whether fi is a misspelling
of the keyword if or an undeclared identifier.
• Probably the parser in this case will be able to handle this.
• Error Handling is very localized, with Respect to Input Source
• For example: whil ( x = 0 ) do
generates no lexical errors in PASCAL
10
Handling Lexical Errors
• In what Situations do Errors Occur?
• Lexical analyzer is unable to proceed because none of the
patterns for tokens matches a prefix of remaining input.
• Panic mode Recovery
• Delete successive characters from the remaining input until
the analyzer can find a well-formed token.
• May confuse the parser – creating syntax error
• Possible error recovery actions:
• Deleting or Inserting Input Characters
• Replacing or Transposing Characters
11
Buffer Pairs
• The lexical analyzer may need to look ahead several characters
beyond the lexeme for a pattern before a match can be
announced.
• Use a function ungetc to push look-ahead characters back
into the input stream.
• A large amount of time can be consumed moving characters.
Special Buffering Technique:
Use a buffer divided into two N-character halves
N = number of characters in one disk block
One system read command fills a half with N characters
Fewer than N characters read => eof
12
Buffer Pairs (2)
• Two pointers, lexeme_beginning and forward, into the input buffer are
maintained.
• The string of characters between the pointers is the current lexeme.
• Initially both pointers point to the first character of the next lexeme to be
found. The forward pointer scans ahead until a match for a pattern is
found.
• Once the next lexeme is determined, the forward pointer is set to the
character at its right end.
• After the lexeme is processed, both pointers are set to the character
immediately past the lexeme.
[Buffer diagram: E = M * C * * 2 eof, with lexeme_beginning at the start of the current lexeme and forward scanning ahead]
Comments and white space can be treated as patterns that yield no token.
13
Code to advance forward pointer
Pitfalls:
1. This buffering scheme works quite well most of the time,
but the amount of lookahead is limited.
2. Limited lookahead makes it impossible to recognize tokens
in situations where the distance the forward pointer must
travel is more than the length of the buffer.
14
if forward at end of first half then begin
    reload second half;
    forward := forward + 1;
end
else if forward at end of second half then begin
    reload first half;
    move forward to beginning of first half;
end
else forward := forward + 1;
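The reload logic above can be sketched in Python. This is an illustrative sketch, not the slides' exact scheme: the names `BufferPair` and `getchar` are ours, Python strings stand in for the two N-character halves, and the other half is reloaded whenever the forward pointer crosses a half boundary.

```python
import io

EOF = ''

class BufferPair:
    """Two N-character buffer halves with a forward pointer (sketch)."""
    def __init__(self, stream, n):
        self.stream, self.n = stream, n
        self.buf = [stream.read(n), '']   # first half preloaded
        self.half, self.i = 0, 0          # forward pointer: (half, offset)

    def getchar(self):
        cur = self.buf[self.half]
        if self.i >= len(cur):
            return EOF                    # fewer than N chars read => eof
        ch = cur[self.i]
        if self.i + 1 == self.n:          # forward at end of current half:
            other = 1 - self.half
            self.buf[other] = self.stream.read(self.n)  # reload other half
            self.half, self.i = other, 0  # move forward to its beginning
        else:
            self.i += 1
        return ch

chars = []
bp = BufferPair(io.StringIO('abcdefgh'), n=3)
while (c := bp.getchar()) != EOF:
    chars.append(c)
```

With n = 3 the eight-character input is delivered one character at a time across three reloads, which is the behavior the pseudocode describes.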
Specification of Tokens
15
Regular expressions are an important notation for specifying
lexeme patterns
An alphabet is a finite set of symbols.
• Typical examples of symbols are letters, digits, and punctuation.
• The set {0, 1} is the binary alphabet.
A string over an alphabet is a finite sequence of symbols drawn from that
alphabet.
• The length of a string s is denoted |s|
• Empty string is denoted by ε
Prefix: ban, banana, ε, etc. are prefixes of banana
Suffix: nana, banana, ε, etc. are suffixes of banana
Kleene closure of a language L, denoted by L*.
• L*: concatenation of L zero or more times
• L0: concatenation of L zero times
• L+: concatenation of L one or more times
Kleene closure
L* denotes “zero or more concatenations of” L
16
Example
Let: L = { a, b, c, ..., z }
D = { 0, 1, 2, ..., 9 }
D+ = “The set of strings with one or more digits”
L ∪ D = “The set of all letters and digits (alphanumeric characters)”
LD = “The set of strings consisting of a letter followed by a digit”
L* = “The set of all strings of letters, including ε, the empty string”
( L ∪ D )* = “Sequences of zero or more letters and digits”
L ( ( L ∪ D )* ) = “Set of strings that start with a letter, followed by zero or
more letters and digits.”
17
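The operations on this slide can be checked directly with Python sets. In this sketch, two-symbol alphabets stand in for the full letter and digit sets, and the infinite closure L* is truncated to strings of length at most 2:

```python
# Stand-in alphabets (subsets of the L and D on the slide).
L = {'a', 'b'}
D = {'0', '1'}

# L ∪ D : all letters and digits
union = L | D

# LD : a letter followed by a digit
concat = {l + d for l in L for d in D}

# L* truncated to length <= 2: empty string, one letter, two letters
closure_le2 = {''} | L | {x + y for x in L for y in L}
```

Each expression mirrors the set description in quotes above it; only the truncation of L* is an artifact of finite computation.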
Rules for specifying Regular
Expressions
Regular expressions over alphabet Σ
1. ε is a regular expression that denotes {ε}.
2. If a is a symbol (i.e., if a ∈ Σ), then a is a regular expression
that denotes {a}.
3. Suppose r and s are regular expressions denoting the
languages L(r) and L(s). Then
a) (r) | (s) is a regular expression denoting L(r) ∪ L(s).
b) (r)(s) is a regular expression denoting L(r)L(s).
c) (r)* is a regular expression denoting (L(r))*.
d) (r) is a regular expression denoting L(r).
18
How to “Parse” Regular Expressions
• Precedence:
• * has highest precedence.
• Concatenation has middle precedence.
• | has lowest precedence.
• Use parentheses to override these rules.
• Examples:
• a b* = a (b*)
• If you want (a b)* you must use parentheses.
• a | b c = a | (b c)
• If you want (a | b) c you must use parentheses.
• Concatenation and | are associative.
• (a b) c = a (b c) = a b c
• (a | b) | c = a | (b | c) = a | b | c
• Example:
• b d | e f * | g a = (b d) | (e (f *)) | (g a)
19
Example
• Let Σ = {a, b}
• The regular expression a | b denotes the set {a, b}
• The regular expression (a|b)(a|b) denotes {aa, ab, ba, bb}
• The regular expression a* denotes the set of all strings of
zero or more a’s, i.e., {ε, a, aa, aaa, …}
• The regular expression (a|b)* denotes the set containing
zero or more instances of an a or b.
• The regular expression a|a*b denotes the set containing
the string a and all strings consisting of zero or more a’s
followed by one b.
20
Regular Definition
• If Σ is an alphabet of basic symbols, then a regular
definition is a sequence of definitions of the form:
d1 → r1
d2 → r2
……..
dn → rn
where
• Each di is a new symbol such that di ∉ Σ and di ≠ dj for
j < i
• Each ri is a regular expression over Σ ∪ {d1, d2, …, di-1}
21
Regular Definition
22
Addition Notation / Shorthand
23
UnsignedNumber: 1240, 39.45, 6.33E15, or 1.578E-41
digit → 0 | 1 | 2 | … | 9
digits → digit digit*
optional_fraction → . digits | ε
optional_exponent → ( E ( + | - | ε ) digits ) | ε
num → digits optional_fraction optional_exponent
Shorthand:
digit → 0 | 1 | 2 | … | 9
digits → digit+
optional_fraction → ( . digits )?
optional_exponent → ( E ( + | - )? digits )?
num → digits optional_fraction optional_exponent
24
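The shorthand definition of num maps almost symbol-for-symbol onto a Python regular expression. In this sketch, `\d+` plays the role of digits, and the `?` operators are exactly the optional fraction and exponent:

```python
import re

# num → digits ( . digits )? ( E ( + | - )? digits )?
num = re.compile(r'\d+(\.\d+)?(E[+-]?\d+)?$')

# The slide's four sample lexemes should match; a bare fraction or
# exponent should not, since digits is required up front.
samples = ['1240', '39.45', '6.33E15', '1.578E-41', '.5', 'E10']
matches = [bool(num.match(s)) for s in samples]
```

The `$` anchor makes this a whole-string test; a scanner would instead take the longest prefix match.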
Token Recognition
How can we use concepts developed so far to assist in
recognizing tokens of a source language ?
Assume Following Tokens:
if, then, else, relop, id, num
Given Tokens, What are Patterns ?
if → if
then → then
else → else
relop → < | <= | > | >= | = | <>
id → letter ( letter | digit )*
num → digit+ ( . digit+ )? ( E ( + | - )? digit+ )?
Grammar:
stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε
expr → term relop term | term
term → id | num
26
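The patterns above can be combined into a small recognizer. This Python sketch is our own arrangement, not the slides' implementation: keywords are caught by matching the id pattern and then checking the lexeme against a keyword set, and the relop alternatives are ordered longest-first (<= before <) so the longest match wins.

```python
import re

token_spec = [
    ('num',   r'\d+(\.\d+)?(E[+-]?\d+)?'),
    ('id',    r'[A-Za-z][A-Za-z0-9]*'),
    ('relop', r'<=|<>|>=|<|>|='),        # longest alternatives first
    ('ws',    r'[ \t\n]+'),
]
keywords = {'if', 'then', 'else'}

def tokens(src):
    out, pos = [], 0
    while pos < len(src):
        for name, pat in token_spec:
            m = re.match(pat, src[pos:])
            if m:
                lexeme = m.group()
                if name == 'id' and lexeme in keywords:
                    out.append((lexeme, lexeme))   # keyword token
                elif name != 'ws':                 # ws yields no token
                    out.append((name, lexeme))
                pos += len(lexeme)
                break
        else:
            raise ValueError(f'lexical error at position {pos}')
    return out

result = tokens('if x <= 3.14 then y else z')
```

Note how `if` comes out as the keyword token rather than an id, and `<=` is one relop rather than `<` followed by `=`.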
What Else Does Lexical Analyzer Do?
Scan away blanks, new lines, tabs
Can we Define Tokens For These?
blank → blank
tab → tab
newline → newline
delim → blank | tab | newline
ws → delim+
In these cases no token is returned to the parser
27
Overall
Regular Expression   Token   Attribute-Value
ws                   -       -
if                   if      -
then                 then    -
else                 else    -
id                   id      pointer to table entry
num                  num     exact value
<                    relop   LT
<=                   relop   LE
=                    relop   EQ
<>                   relop   NE
>                    relop   GT
>=                   relop   GE
Note: Each token has a unique token identifier to define the category of lexemes
28
Constructing Transition Diagrams for
Tokens
• Transition Diagrams (TD) are used to represent the tokens
• As characters are read, the relevant TDs are used to attempt to
match lexeme to a pattern
• Each TD has:
• States : Represented by Circles
• Actions : Represented by Arrows between states
• Start State : Beginning of a pattern (Arrowhead)
• Final State(s) : End of pattern (Concentric Circles)
• Edges: arrows connecting the states
• Each TD is deterministic (we assume) - no need to choose
between two different actions!
29
Example TDs
[Transition diagram for > and >=: from start state 0, '>' leads to state 6; from 6, '=' leads to state 7 and RTN(GE); any other character leads to state 8 (marked *) and RTN(GT).]
We’ve accepted “>” and have read one extra char that must be
unread.
30
Example : All RELOPs
[Transition diagram, start state 0:
on '<' to state 1; from 1: '=' → state 2, return(relop, LE); '>' → state 3, return(relop, NE); other → state 4 (*), return(relop, LT)
on '=' to state 5, return(relop, EQ)
on '>' to state 6; from 6: '=' → state 7, return(relop, GE); other → state 8 (*), return(relop, GT)]
31
Example TDs : id and delim
id :
delim :
start delim
28
other
3029
delim
*
return( get_token(), install_id())
start letter
9
other
1110
letter or digit
*
Either returns ptr or “0” if reserved
32
Example TDs : Unsigned #s
[Three transition diagrams, each accepting state executing return(num, install_num()):
1. States 12–19: digit+, then '.' digit+, then E (+|-)? digit+, then other (*): numbers with fraction and exponent
2. States 20–24: digit+, then '.' digit+, then other (*): numbers with a fraction only
3. States 25–27: digit+, then other (*): simple integers]
Questions: Is ordering important for unsigned #s ?
Why are there no TDs for then, else, if ?
33
33
QUESTION :
What would the transition diagram
(TD) for strings containing each
vowel, in their strict lexicographical
order, look like?
34
Answer
cons → B | C | D | F | G | H | J | … | N | P | … | T | V | … | Z
string → cons* A cons* E cons* I cons* O cons* U cons*
[Transition diagram: from start, loop on cons; advance on A, then E, then I, then O, then U, with a cons* loop between each vowel; 'other' after the U segment leads to accept, while any out-of-order character leads to error.]
Note: The error path is taken if the character is other than
a cons or the next vowel in the lexicographical order.
35
Capturing Multiple Tokens
Capturing keyword “begin”:
[TD: start → b → e → g → i → n → WS → accept]
Capturing variable names:
[TD: start → A, loop on AN, then WS → accept]
WS – white space; A – alphabetic; AN – alphanumeric
What if both need to happen at the same time?
36
Capturing Multiple Tokens
[Combined TD: from start, 'b' begins the keyword path b e g i n WS; A−b (any alphabetic except b) begins the identifier path; any AN that breaks the keyword spelling drops into the identifier path, which loops on AN until WS.]
WS – white space; A – alphabetic; AN – alphanumeric
The machine is much more complicated – just for these two tokens!
37
37
Finite State Automata (FSAs)
• “Finite State Machines”, “Finite Automata”, “FA”
• A recognizer for a language is a program that takes
as input a string x and answers “yes” if x is a
sentence of the language and “no” otherwise.
• The regular expression is compiled into a
recognizer by constructing a generalized transition
diagram called a finite automaton.
• Each state is labeled with a state name
• Directed edges, labeled with symbols
• Two types
• Deterministic (DFA)
• Non-deterministic (NFA)
38
Nondeterministic Finite Automata
A nondeterministic finite automaton (NFA) is a
mathematical model that consists of
1. A set of states S
2. A set of input symbols Σ
3. A transition function that maps state/symbol
pairs to a set of states
4. A special state s0 called the start state
5. A set of states F (subset of S) of final states
INPUT: string
OUTPUT: yes or no
39
Example – NFA : (a|b)*abb
S = { 0, 1, 2, 3 }
s0 = 0
F = { 3 }
Σ = { a, b }
[NFA diagram: start state 0 with self-loops on a and b; 0 →a→ 1 →b→ 2 →b→ 3 (accepting).]

Transition Table:
state   a          b
0       { 0, 1 }   { 0 }
1       --         { 2 }
2       --         { 3 }

ε (null) moves are possible: i →ε j (switch state but do not
use any input symbol)
40
How Does An NFA Work ?
[Same NFA: start state 0 with self-loops on a and b; 0 →a→ 1 →b→ 2 →b→ 3.]
• Given an input string, we trace moves
• If no more input & in final state, ACCEPT
EXAMPLE:
Input: ababb
One trace:
move(0, a) = 1
move(1, b) = 2
move(2, a) = ? (undefined)
REJECT !
-OR- another trace:
move(0, a) = 0
move(0, b) = 0
move(0, a) = 1
move(1, b) = 2
move(2, b) = 3
ACCEPT !
41
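The two traces above are what a simulation does implicitly by tracking a set of states: it follows all paths at once, so one accepting path is enough. A Python sketch for this NFA, with `delta` transcribing the transition table (this NFA has no ε-moves, so ε-closure is omitted):

```python
# Transition table for the NFA of (a|b)*abb.
delta = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'b'): {3},
}
start, finals = {0}, {3}

def nfa_accepts(s):
    """Track the set of all states reachable on the input read so far."""
    states = start
    for ch in s:
        states = {t for q in states for t in delta.get((q, ch), set())}
    return bool(states & finals)   # accept if any path reached a final state

results = [nfa_accepts(x) for x in ['abb', 'ababb', 'ab', 'aabb']]
```

On `ababb` the state set ends containing 3, so the string is accepted even though the rejecting trace above also exists.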
Handling Undefined Transitions
We can handle undefined transitions by defining one more
state, a “death” state, and routing all previously
undefined transitions to this death state.
[Augmented NFA: the NFA for (a|b)*abb plus a death state 4; the undefined transitions (1 on a, 2 on a, 3 on a or b) go to state 4, which loops on a, b.]
42
Other Concepts
[NFA diagram: start state 0 with self-loops on a and b; 0 →a→ 1 →b→ 2 →b→ 3.]
Not all paths may result in acceptance.
aabb is accepted along the path : 0 → 0 → 1 → 2 → 3
BUT… it is not accepted along another valid path:
0 → 0 → 0 → 0 → 0
An NFA accepts a string if at least one path leads to a final state.
43
Deterministic Finite Automata
A DFA is an NFA with the following restrictions:
• ε moves are not allowed
• For every state s ∈ S, there is one and only one path from s
for every input symbol a ∈ Σ.
Since transition tables don’t have any alternative options, DFAs
are easily simulated via an algorithm.

s := s0
c := nextchar;
while c ≠ eof do
    s := move(s, c);
    c := nextchar;
end;
if s is in F then return “yes”
else return “no”
44
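The simulation loop above transcribes directly into Python. The `move` table below is the standard minimal DFA for (a|b)*abb, written out by hand as a sketch (states numbered 0–3, with 3 final):

```python
# move(s, c) as a dictionary; exactly one successor per (state, symbol).
move = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 3,
    (3, 'a'): 1, (3, 'b'): 0,
}

def dfa_accepts(s, s0=0, finals=frozenset({3})):
    state = s0
    for c in s:                  # c := nextchar; s := move(s, c)
        state = move[(state, c)]
    return state in finals       # if s is in F then "yes" else "no"

results = [dfa_accepts(x) for x in ['abb', 'aababb', 'abab']]
```

Unlike the NFA simulation, only a single current state is tracked, so the loop body is one table lookup per input character.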
Example – DFA : (a|b)*abb
Recall the original NFA:
[NFA diagram: start state 0 with self-loops on a and b; 0 →a→ 1 →b→ 2 →b→ 3.]
The equivalent DFA:
[DFA diagram: 0 →a→ 1, 0 →b→ 0; 1 →a→ 1, 1 →b→ 2; 2 →a→ 1, 2 →b→ 3; 3 →a→ 1, 3 →b→ 0; state 3 accepting.]
What language is accepted?
45
Relation between RE, NFA and DFA
1. There is an algorithm for converting any RE into an NFA.
2. There is an algorithm for converting any NFA to a DFA.
3. There is an algorithm for converting any DFA to a RE.
These facts tell us that REs, NFAs and DFAs have equivalent
expressive power.
All three describe the class of regular languages.
46
NFA vs DFA
• An NFA may be simulated by an algorithm, once the NFA is constructed
from the RE
• The algorithm's run time is proportional to |N| * |x|, where |N| is the
number of states and |x| is the length of the input
• Alternatively, we can construct a DFA from the NFA and use it to
recognize input
• The space requirement of a DFA can be large. The RE
(a|b)*a(a|b)(a|b)…(a|b) [n-1 copies of (a|b) at the end] has no DFA
with fewer than 2^n states. Fortunately, such REs do not occur
often in practice

        space required   time to simulate
NFA     O(|r|)           O(|r| * |x|)
DFA     O(2^|r|)         O(|x|)

where |r| is the length of the regular expression.
47
Thank You
Any Questions?
48

More Related Content

PPT
1.Role lexical Analyzer
PPT
One Dimensional Array
PPTX
Register transfer language
PPTX
Specification-of-tokens
PPTX
Role-of-lexical-analysis
PPTX
Type checking compiler construction Chapter #6
PPTX
Lexical Analysis - Compiler Design
PPT
Abstract data types
1.Role lexical Analyzer
One Dimensional Array
Register transfer language
Specification-of-tokens
Role-of-lexical-analysis
Type checking compiler construction Chapter #6
Lexical Analysis - Compiler Design
Abstract data types

What's hot (20)

PPT
Compiler Design Unit 1
PPTX
Types of Parser
PPT
Intermediate code generation (Compiler Design)
PPTX
PDF
Algorithms Lecture 1: Introduction to Algorithms
PDF
itft-Decision making and branching in java
PDF
Applications of stack
PPTX
Binary Tree Traversal
PPT
Assemblers: Ch03
PPTX
Quadratic probing
PPTX
Syntax Analysis in Compiler Design
PPTX
Syntax-Directed Translation into Three Address Code
PPT
Linked list
PPTX
Hashing
PPT
Operators in C++
PPTX
Regular Expression in Compiler design
PPTX
Input-Buffering
PPTX
Linker and Loader
PPSX
Type conversion
PPTX
Recognition-of-tokens
Compiler Design Unit 1
Types of Parser
Intermediate code generation (Compiler Design)
Algorithms Lecture 1: Introduction to Algorithms
itft-Decision making and branching in java
Applications of stack
Binary Tree Traversal
Assemblers: Ch03
Quadratic probing
Syntax Analysis in Compiler Design
Syntax-Directed Translation into Three Address Code
Linked list
Hashing
Operators in C++
Regular Expression in Compiler design
Input-Buffering
Linker and Loader
Type conversion
Recognition-of-tokens
Ad

Viewers also liked (20)

PPT
Compiler Design - Introduction to Compiler
PDF
Lecture 01 introduction to compiler
PPT
Lexical analyzer
PPTX
Fog computing ( foggy cloud)
PDF
Lecture4 lexical analysis2
ODP
About Tokens and Lexemes
PPT
Lecture 05 syntax analysis 2
PPT
Lex (lexical analyzer)
PPTX
Single linked list
PDF
Lexical
PDF
Lecture3 lexical analysis
PPT
Lecture 13 intermediate code generation 2.pptx
PPTX
Cognitive radio network_MS_defense_presentation
PPTX
Cs419 lec8 top-down parsing
PPT
Compiler Construction
KEY
Introduction to Parse
PPTX
Distributed contention based mac protocol for cognitive radio
PPTX
Lecture 15 run timeenvironment_2
PPTX
Top down parsing(sid) (1)
PDF
07 top-down-parsing
Compiler Design - Introduction to Compiler
Lecture 01 introduction to compiler
Lexical analyzer
Fog computing ( foggy cloud)
Lecture4 lexical analysis2
About Tokens and Lexemes
Lecture 05 syntax analysis 2
Lex (lexical analyzer)
Single linked list
Lexical
Lecture3 lexical analysis
Lecture 13 intermediate code generation 2.pptx
Cognitive radio network_MS_defense_presentation
Cs419 lec8 top-down parsing
Compiler Construction
Introduction to Parse
Distributed contention based mac protocol for cognitive radio
Lecture 15 run timeenvironment_2
Top down parsing(sid) (1)
07 top-down-parsing
Ad

Similar to Lecture 02 lexical analysis (20)

PPT
atc 3rd module compiler and automata.ppt
PPTX
04LexicalAnalysissnsnjmsjsjmsbdjjdnd.pptx
PPT
Lecturer-05 lex anylser (1).pptrjyghsgst
PPT
52232.-Compiler-Design-Lexical-Analysis.ppt
PDF
Lexical analysis Compiler design pdf to read
PDF
Lexical analysis compiler design to read and study
PDF
Lexical Analysis - Compiler design
PDF
3a. Context Free Grammar.pdf
PPT
02. Chapter 3 - Lexical Analysis NLP.ppt
PPT
Compiler Design ug semLexical Analysis.ppt
PDF
Lexicalanalyzer
PDF
Lexicalanalyzer
PPT
7645347.ppt
PPTX
Unitiv 111206005201-phpapp01
PPTX
Compiler Lexical Analyzer to analyze lexemes.pptx
PPT
Lecture 1 - Lexical Analysis.ppt
PPT
Chapter-2-lexical-analyser and its property lecture note.ppt
PPTX
Lexical Analyser PPTs for Third Lease Computer Sc. and Engineering
PPT
Lexical Analysis
atc 3rd module compiler and automata.ppt
04LexicalAnalysissnsnjmsjsjmsbdjjdnd.pptx
Lecturer-05 lex anylser (1).pptrjyghsgst
52232.-Compiler-Design-Lexical-Analysis.ppt
Lexical analysis Compiler design pdf to read
Lexical analysis compiler design to read and study
Lexical Analysis - Compiler design
3a. Context Free Grammar.pdf
02. Chapter 3 - Lexical Analysis NLP.ppt
Compiler Design ug semLexical Analysis.ppt
Lexicalanalyzer
Lexicalanalyzer
7645347.ppt
Unitiv 111206005201-phpapp01
Compiler Lexical Analyzer to analyze lexemes.pptx
Lecture 1 - Lexical Analysis.ppt
Chapter-2-lexical-analyser and its property lecture note.ppt
Lexical Analyser PPTs for Third Lease Computer Sc. and Engineering
Lexical Analysis

More from Iffat Anjum (20)

PPT
Lecture 16 17 code-generation
PPTX
Lecture 14 run time environment
PPTX
Lecture 12 intermediate code generation
PPTX
Lecture 11 semantic analysis 2
PPTX
Lecture 09 syntax analysis 05
PPTX
Lecture 10 semantic analysis 01
PPTX
Lecture 07 08 syntax analysis-4
PPT
Lecture 06 syntax analysis 3
PPT
Lecture 03 lexical analysis
PPT
Lecture 04 syntax analysis
PPT
On qo s provisioning in context aware wireless sensor networks for healthcare
PPT
Data link control
PPT
Pnp mac preemptive slot allocation and non preemptive transmission for provid...
PPT
Qo s based mac protocol for medical wireless body area sensor networks
PPT
A reinforcement learning based routing protocol with qo s support for biomedi...
PPT
Data centric multiobjective qo s-aware routing protocol (dm-qos) for body are...
PPTX
Quality of service aware mac protocol for body sensor networks
PPT
Library system
PPTX
Multicastingand multicast routing protocols
PPT
Fpga(field programmable gate array)
Lecture 16 17 code-generation
Lecture 14 run time environment
Lecture 12 intermediate code generation
Lecture 11 semantic analysis 2
Lecture 09 syntax analysis 05
Lecture 10 semantic analysis 01
Lecture 07 08 syntax analysis-4
Lecture 06 syntax analysis 3
Lecture 03 lexical analysis
Lecture 04 syntax analysis
On qo s provisioning in context aware wireless sensor networks for healthcare
Data link control
Pnp mac preemptive slot allocation and non preemptive transmission for provid...
Qo s based mac protocol for medical wireless body area sensor networks
A reinforcement learning based routing protocol with qo s support for biomedi...
Data centric multiobjective qo s-aware routing protocol (dm-qos) for body are...
Quality of service aware mac protocol for body sensor networks
Library system
Multicastingand multicast routing protocols
Fpga(field programmable gate array)

Recently uploaded (20)

PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PDF
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PDF
Business Ethics Teaching Materials for college
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PDF
Anesthesia in Laparoscopic Surgery in India
PDF
Insiders guide to clinical Medicine.pdf
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
Basic Mud Logging Guide for educational purpose
PDF
VCE English Exam - Section C Student Revision Booklet
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PDF
RMMM.pdf make it easy to upload and study
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PDF
01-Introduction-to-Information-Management.pdf
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PDF
TR - Agricultural Crops Production NC III.pdf
O5-L3 Freight Transport Ops (International) V1.pdf
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
Renaissance Architecture: A Journey from Faith to Humanism
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
Business Ethics Teaching Materials for college
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
Anesthesia in Laparoscopic Surgery in India
Insiders guide to clinical Medicine.pdf
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Basic Mud Logging Guide for educational purpose
VCE English Exam - Section C Student Revision Booklet
Abdominal Access Techniques with Prof. Dr. R K Mishra
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
RMMM.pdf make it easy to upload and study
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
01-Introduction-to-Information-Management.pdf
STATICS OF THE RIGID BODIES Hibbelers.pdf
human mycosis Human fungal infections are called human mycosis..pptx
TR - Agricultural Crops Production NC III.pdf

Lecture 02 lexical analysis

  • 1. 1
  • 2. Lexical Analysis • Basic Concepts & Regular Expressions • What does a Lexical Analyzer do? • How does it Work? • Formalizing Token Definition & Recognition • Reviewing Finite Automata Concepts • Non-Deterministic and Deterministic FA • Conversion Process • Regular Expressions to NFA • NFA to DFA • Relating NFAs/DFAs /Conversion to Lexical Analysis 2
  • 3. Lexical Analyzer in Perspective lexical analyzer parser symbol table source program token get next token Important Issue: • What are Responsibilities of each Box ? • Focus on Lexical Analyzer and Parser. 3
  • 4. Lexical Analyzer in Perspective • LEXICAL ANALYZER • Scan Input • Remove WS, NL, … • Identify Tokens • Create Symbol Table • Insert Tokens into ST • Generate Errors • Send Tokens to Parser • PARSER • Perform Syntax Analysis • Actions Dictated by Token Order • Update Symbol Table Entries • Create Abstract Rep. of Source • Generate Errors • And More…. (We’ll see later) 4
  • 5. What Factors Have Influenced the Functional Division of Labor ? • Separation of Lexical Analysis From Parsing Presents a Simpler Conceptual Model • A parser embodying the conventions for comments and white space is significantly more complex that one that can assume comments and white space have already been removed by lexical analyzer. • Separation Increases Compiler Efficiency • Specialized buffering techniques for reading input characters and processing tokens… • Separation Promotes Portability. • Input alphabet peculiarities and other device-specific anomalies can be restricted to the lexical analyzer. 5
  • 6. Introducing Basic Terminology • What are Major Terms for Lexical Analysis? • TOKEN • A pair consisting of a token name and an optional attribute value. • A particular keyword, or a sequence of input characters denoting identifier. • PATTERN • A description of a form that the lexemes of a token may take. • For keywords, the pattern is just a sequence of characters that form keywords. • LEXEME • Actual sequence of characters that matches pattern and is classified by a token 6
  • 7. Introducing Basic Terminology Token Sample Lexemes Informal Description of Pattern const if relation id num literal const if <, <=, =, < >, >, >= pi, count, D2 3.1416, 0, 6.02E23 “core dumped” const characters of i, f < or <= or = or < > or >= or > letter followed by letters and digits any numeric constant any characters between “ and “ except “ Classifies Pattern Actual values are critical. Info is : 1. Stored in symbol table 2. Returned to parser 7
  • 8. Attributes for Tokens • When more than one lexeme can match a pattern, a lexical analyzer must provide the compiler additional information about that lexeme matched. • In formation about identifiers, its lexeme, type and location at which it was first found is kept in symbol table. • The appropriate attribute value for an identifier is a pointer to the symbol table entry for that identifier. 8
  • 9. Attributes for Tokens Tokens influence parsing decision; The attributes influence the translation of tokens. Example: E = M * C ** 2 <id, pointer to symbol-table entry for E> <assign_op, > <id, pointer to symbol-table entry for M> <mult_op, > <id, pointer to symbol-table entry for C> <exp_op, > <num, integer value 2> 9
  • 10. Handling Lexical Errors • Its hard for lexical analyzer without the aid of other components, that there is a source-code error. • If the statement fi is encountered for the first time in a C program it can not tell whether fi is misspelling of if statement or a undeclared literal. • Probably the parser in this case will be able to handle this. • Error Handling is very localized, with Respect to Input Source • For example: whil ( x = 0 ) do generates no lexical errors in PASCAL 10
  • 11. Handling Lexical Errors • In what Situations do Errors Occur? • Lexical analyzer is unable to proceed because none of the patterns for tokens matches a prefix of remaining input. • Panic mode Recovery • Delete successive characters from the remaining input until the analyzer can find a well-formed token. • May confuse the parser – creating syntax error • Possible error recovery actions: • Deleting or Inserting Input Characters • Replacing or Transposing Characters 11
  • 12. Buffer Pairs • Lexical analyzer needs to look ahead several characters beyond the lexeme for a pattern before a match can be announced. • Use a function ungetc to push look-ahead characters back into the input stream. • Large amount of time can be consumed moving characters. Special Buffering Technique Use a buffer divided into two N-character halves N = Number of characters on one disk block One system command read N characters Fewer than N character => eof 12
  • 13. Buffer Pairs (2) • Two pointers lexeme beginning and forward to the input buffer are maintained. • The string of characters between the pointers is the current lexeme. • Initially both pointers point to first character of the next lexeme to be found. Forward pointer scans ahead until a match for a pattern is found • Once the next lexeme is determined, the forward pointer is set to the character at its right end. • After the lexeme is processed both pointers are set to the character immediately past the lexeme Lexeme_beginning forward Comments and white space can be treated as patterns that yield no token M=E eof2**C* 13
  • 14. Code to advance forward pointer 1. This buffering scheme works quite well most of the time but with it amount of lookahead is limited. 2. Limited lookahead makes it impossible to recognize tokens in situations where the distance, forward pointer must travel is more than the length of buffer. Pitfalls: 14 if forward at the end of first half then begin reload second half ; forward : = forward + 1; end else if forward at end of second half then begin reload first half ; move forward to beginning of first half end else forward : = forward + 1;
  • 15. Specification of Tokens 15 Regular expressions are an important notation for specifying lexeme patterns An alphabet is a finite set of symbols. • Typical example of symbols are letters, digits and punctuation etc. • The set {0, 1} is the binary alphabet. A string over an alphabet is a finite sequence of symbols drawn from that alphabet. • The length is string s is denoted as |s| • Empty string is denoted by ε Prefix: ban, banana, ε, etc are the prefixes of banana Suffix: nana, banana, ε, etc are suffixes of banana Kleene or closure of a language L, denoted by L*. • L*: concatenation of L zero or more times • L0: concatenation of L zero times • L+: concatenation of L one or more times
  • 16. Kleene closure L* denotes “zero or more concatenations of” L 16
  • 17. Example Let: L = { a, b, c, ..., z } D = { 0, 1, 2, ..., 9 } D+ = “The set of strings with one or more digits” L  D = “The set of all letters and digits (alphanumeric characters)” LD = “The set of strings consisting of a letter followed by a digit” L* = “The set of all strings of letters, including , the empty string” ( L  D )* = “Sequences of zero or more letters and digits” L ( ( L  D )* ) = “Set of strings that start with a letter, followed by zero or more letters and digits.” 17
  • 18. Rules for specifying Regular Expressions Regular expressions over alphabet  1.  is a regular expression that denotes {}. 2. If a is a symbol (i.e., if a ), then a is a regular expression that denotes {a}. 3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then a) (r) | (s) is a regular expression denoting L(r) U L(s). b) (r)(s) is a regular expression denoting L(r)L(s). c) (r)* is a regular expression denoting (L(r))*. d) (r) is a regular expression denoting L(r). 18
• 19. How to “Parse” Regular Expressions
• Precedence:
• * has highest precedence.
• Concatenation has middle precedence.
• | has lowest precedence.
• Use parentheses to override these rules.
• Examples:
• a b* = a (b*). If you want (a b)* you must use parentheses.
• a | b c = a | (b c). If you want (a | b) c you must use parentheses.
• Concatenation and | are associative.
• (a b) c = a (b c) = a b c
• (a | b) | c = a | (b | c) = a | b | c
• Example: b d | e f * | g a = (b d) | (e (f *)) | (g a)
19
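These precedence rules carry over to Python's `re` syntax, so we can check them directly (a small sketch using `re.fullmatch`, which returns a match object on success and None otherwise):

```python
import re

# '*' binds tightest, then concatenation, then '|', so a b* means a (b*)
assert re.fullmatch(r"ab*", "abbb")        # one a followed by b's
assert not re.fullmatch(r"ab*", "ababab")  # matching that needs (ab)*
assert re.fullmatch(r"(ab)*", "ababab")

# a | b c parses as a | (b c), not (a | b) c
assert re.fullmatch(r"a|bc", "bc")
assert not re.fullmatch(r"a|bc", "ac")     # (a|b)c would accept "ac"
assert re.fullmatch(r"(a|b)c", "ac")
```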
• 20. Example
• Let Σ = {a, b}
• The regular expression a | b denotes the set {a, b}
• The regular expression (a|b)(a|b) denotes {aa, ab, ba, bb}
• The regular expression a* denotes the set of all strings of zero or more a’s, i.e., {ε, a, aa, aaa, … }
• The regular expression (a|b)* denotes the set containing zero or more instances of an a or b.
• The regular expression a|a*b denotes the set containing the string a and all strings consisting of zero or more a’s followed by one b.
20
• 21. Regular Definition
• If Σ is an alphabet of basic symbols, then a regular definition is a sequence of the following form:
d1 → r1
d2 → r2
……..
dn → rn
where
• Each di is a new symbol such that di ∉ Σ and di ≠ dj for j < i
• Each ri is a regular expression over Σ ∪ {d1, d2, …, di-1}
21
• 23. Additional Notation / Shorthand 23
• 24. UnsignedNumber
1240, 39.45, 6.33E15, or 1.578E-41
digit → 0 | 1 | 2 | … | 9
digits → digit digit*
optional_fraction → . digits | ε
optional_exponent → ( E ( + | - | ε ) digits ) | ε
num → digits optional_fraction optional_exponent
Shorthand:
digit → 0 | 1 | 2 | … | 9
digits → digit+
optional_fraction → ( . digits )?
optional_exponent → ( E ( + | - )? digits )?
num → digits optional_fraction optional_exponent
24
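The shorthand definition translates almost symbol-for-symbol into a Python regex. The name `num_re` is ours, and we use `re.VERBOSE` so each piece can be annotated with the rule it transcribes:

```python
import re

# Transcription of the shorthand regular definition for unsigned numbers
num_re = re.compile(r"""
    [0-9]+                 # digits            -> digit+
    (?: \. [0-9]+ )?       # optional_fraction -> ( . digits )?
    (?: E [+-]? [0-9]+ )?  # optional_exponent -> ( E (+|-)? digits )?
""", re.VERBOSE)

# The four example lexemes from the slide all match in full
for lexeme in ["1240", "39.45", "6.33E15", "1.578E-41"]:
    assert num_re.fullmatch(lexeme)
assert not num_re.fullmatch(".5")   # a fraction alone: digits are required first
```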
• 25. Token Recognition
How can we use the concepts developed so far to assist in recognizing tokens of a source language?
Assume Following Tokens: if, then, else, relop, id, num
Given Tokens, What are Patterns?
if → if
then → then
else → else
relop → < | <= | > | >= | = | <>
id → letter ( letter | digit )*
num → digit+ ( . digit+ )? ( E ( + | - )? digit+ )?
Grammar:
stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε
expr → term relop term | term
term → id | num
26
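A toy tokenizer for exactly this token set can be sketched with Python regexes. The pattern layout and the keywords-before-id trick are our choices, not part of the slides:

```python
import re

# Token patterns from the slide; keywords are listed before id so that
# "if" is not captured as an identifier. Inner groups are non-capturing
# so that m.lastgroup always names the alternative that matched.
TOKEN_RE = re.compile(r"""
    (?P<ws>\s+)
  | (?P<keyword>\b(?:if|then|else)\b)
  | (?P<relop><=|<>|<|>=|>|=)
  | (?P<num>[0-9]+(?:\.[0-9]+)?(?:E[+-]?[0-9]+)?)
  | (?P<id>[A-Za-z][A-Za-z0-9]*)
""", re.VERBOSE)

def tokenize(text):
    """Return (token-name, lexeme) pairs; whitespace yields no token."""
    tokens = []
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "ws":
            name = m.group() if m.lastgroup == "keyword" else m.lastgroup
            tokens.append((name, m.group()))
    return tokens
```

For example, `tokenize("if x1 <= 42 then y")` yields the token stream the parser would receive, with `x1` as an id and `<=` as a relop.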
• 26. What Else Does Lexical Analyzer Do?
Scan away blanks, new lines, tabs
Can we Define Tokens For These?
blank → blank
tab → tab
newline → newline
delim → blank | tab | newline
ws → delim+
In these cases no token is returned to the parser.
27
• 27. Overall
Lexeme   | Token | Attribute value
ws       | -     | -
if       | if    | -
then     | then  | -
else     | else  | -
any id   | id    | pointer to table entry
any num  | num   | exact value
<        | relop | LT
<=       | relop | LE
=        | relop | EQ
<>       | relop | NE
>        | relop | GT
>=       | relop | GE
Note: Each token has a unique token identifier to define category of lexemes
28
  • 28. Constructing Transition Diagrams for Tokens • Transition Diagrams (TD) are used to represent the tokens • As characters are read, the relevant TDs are used to attempt to match lexeme to a pattern • Each TD has: • States : Represented by Circles • Actions : Represented by Arrows between states • Start State : Beginning of a pattern (Arrowhead) • Final State(s) : End of pattern (Concentric Circles) • Edges: arrows connecting the states • Each TD is Deterministic (assume) - No need to choose between 2 different actions ! 29
• 29. Example TDs
> and >=:
[Transition diagram: from start state 0, '>' leads to state 6; from 6, '=' leads to accepting state 7, RTN(GE); any other character leads to accepting state 8 (*), RTN(GT).]
*: We’ve accepted “>” and have read one extra char that must be unread.
30
• 30. Example : All RELOPs
[Transition diagram, from start state 0:
'<' → state 1; from 1: '=' → state 2, return(relop, LE); '>' → state 3, return(relop, NE); other → state 4 (*), return(relop, LT)
'=' → state 5, return(relop, EQ)
'>' → state 6; from 6: '=' → state 7, return(relop, GE); other → state 8 (*), return(relop, GT)]
31
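The RELOP diagram transcribes directly into code: each state becomes a branch, and starred states simply do not consume the extra character they peeked at. The function name and the (token, next_index) return convention below are our own:

```python
def relop_td(s, i=0):
    """Walk the RELOP transition diagram from position i in s.
    Returns ((token, attribute), next_index) or None; starred states
    leave the extra lookahead character unconsumed (the 'retract')."""
    if i >= len(s):
        return None
    if s[i] == '<':
        if s[i+1:i+2] == '=':
            return (("relop", "LE"), i + 2)
        if s[i+1:i+2] == '>':
            return (("relop", "NE"), i + 2)
        return (("relop", "LT"), i + 1)   # starred state: retract
    if s[i] == '=':
        return (("relop", "EQ"), i + 1)
    if s[i] == '>':
        if s[i+1:i+2] == '=':
            return (("relop", "GE"), i + 2)
        return (("relop", "GT"), i + 1)   # starred state: retract
    return None
```

For example, `relop_td("<a")` returns the LT token with `next_index` 1, so the 'a' that was read as lookahead is still available to the next diagram.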
• 31. Example TDs : id and delim
id:
[Transition diagram: start → state 9; on letter → state 10, which loops on letter or digit; on other → state 11 (*), return(get_token(), install_id())]
Either returns ptr or “0” if reserved
delim:
[Transition diagram: start → state 28; on delim → state 29, which loops on delim; on other → state 30 (*)]
32
• 32. Example TDs : Unsigned #s
[Three transition diagrams:
states 12-19: digit+ . digit+ E (+|-)? digit+ (number with fraction and exponent)
states 20-24: digit+ . digit+ (number with fraction only)
states 25-27: digit+ (integer)
Accepting states marked * retract one character; each returns (num, install_num())]
Questions:
Is ordering important for unsigned #s?
Why are there no TDs for then, else, if?
33
  • 33. QUESTION : What would the transition diagram (TD) for strings containing each vowel, in their strict lexicographical order, look like? 34
• 34. Answer
cons → B | C | D | F | G | H | J | … | N | P | … | T | V | … | Z
string → cons* A cons* E cons* I cons* O cons* U cons*
[Transition diagram: six states chained by edges labeled A, E, I, O, U, each state looping on cons; the last state accepts; any other character leads to an error state.]
Note: The error path is taken if the character is other than a cons or the vowel in the lexicographic order.
35
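As a sanity check, the definition of `string` above maps straight onto a Python regex; the character class standing in for cons and the test words are our choices:

```python
import re

# cons: uppercase consonants; the ranges deliberately skip A, E, I, O, U
cons = r"[B-DF-HJ-NP-TV-Z]"
vowel_order = re.compile(f"{cons}*A{cons}*E{cons}*I{cons}*O{cons}*U{cons}*")

assert vowel_order.fullmatch("FACETIOUS")    # A, E, I, O, U appear in order
assert not vowel_order.fullmatch("SEQUOIA")  # vowels out of order
```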
• 35. Capturing Multiple Tokens
Capturing keyword “begin”:
[TD: start → b → e → g → i → n → WS]
Capturing variable names:
[TD: start → A, looping on AN, then WS]
WS – white space, A – alphabetic, AN – alphanumeric
What if both need to happen at the same time?
36
• 36. Capturing Multiple Tokens
[Combined TD: the b-e-g-i-n keyword path and the identifier path are merged; the first edge splits into 'b' versus A-b (any alphabetic other than b), and any character that breaks the spelling of “begin” drops into the identifier loop on AN.]
WS – white space, A – alphabetic, AN – alphanumeric
Machine is much more complicated – just for these two tokens!
37
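In practice, a common way to avoid this blown-up combined machine is to run the plain identifier diagram and then decide keyword versus identifier by looking the finished lexeme up in a table. A minimal sketch (the function name and two-token setup are ours):

```python
import string

def scan_word(s):
    """Run the identifier transition diagram on the front of s, then
    classify the lexeme by table lookup instead of a combined machine."""
    if not s or s[0] not in string.ascii_letters:
        return None                       # diagram fails in the start state
    i = 1
    while i < len(s) and (s[i] in string.ascii_letters or s[i].isdigit()):
        i += 1                            # loop on alphanumerics
    lexeme = s[:i]
    # the "keyword table" here has a single entry
    return ("begin", lexeme) if lexeme == "begin" else ("id", lexeme)
```

Note how `begin1` falls out naturally as an identifier: the diagram consumes the whole lexeme before the lookup happens.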
  • 37. Finite State Automata (FSAs) • “Finite State Machines”, “Finite Automata”, “FA” • A recognizer for a language is a program that takes as input a string x and answers “yes” if x is a sentence of the language and “no” otherwise. • The regular expression is compiled into a recognizer by constructing a generalized transition diagram called a finite automaton. • Each state is labeled with a state name • Directed edges, labeled with symbols • Two types • Deterministic (DFA) • Non-deterministic (NFA) 38
• 38. Nondeterministic Finite Automata
A nondeterministic finite automaton (NFA) is a mathematical model that consists of
1. A set of states S
2. A set of input symbols Σ
3. A transition function that maps state/symbol pairs to a set of states
4. A special state s0 called the start state
5. A set of states F (subset of S) of final states
INPUT: string
OUTPUT: yes or no
39
• 39. Example – NFA : (a|b)*abb
S = { 0, 1, 2, 3 }, s0 = 0, F = { 3 }, Σ = { a, b }
[Transition diagram: state 0 loops on a and b; 0 → 1 on a, 1 → 2 on b, 2 → 3 on b]
Transition Table:
state | a        | b
0     | { 0, 1 } | { 0 }
1     | --       | { 2 }
2     | --       | { 3 }
ε (null) moves are possible: an edge i → j labeled ε switches state but does not use any input symbol.
40
• 40. How Does An NFA Work ?
[NFA for (a|b)*abb]
• Given an input string, we trace moves
• If no more input & in final state, ACCEPT
EXAMPLE: Input: ababb
One trace: move(0, a) = 1; move(1, b) = 2; move(2, a) = ? (undefined) REJECT !
-OR-
Another trace: move(0, a) = 0; move(0, b) = 0; move(0, a) = 1; move(1, b) = 2; move(2, b) = 3 ACCEPT !
41
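The trace idea generalizes to tracking the whole set of states reachable on the input read so far, which is how an NFA is simulated deterministically. A sketch for this NFA (the dictionary encoding of the transition table is our choice):

```python
# Transition table of the NFA for (a|b)*abb, as a dictionary
NFA = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
    (2, 'b'): {3},
}
START, FINAL = {0}, {3}

def nfa_accepts(s):
    """Track every state reachable on the input read so far; accept if
    any final state is reachable once the input is exhausted."""
    states = set(START)
    for c in s:
        states = set().union(*(NFA.get((q, c), set()) for q in states))
    return bool(states & FINAL)
```

On input ababb the state sets evolve as {0}, {0,1}, {0,2}, {0,1}, {0,2}, {0,3}; the final set contains 3, so the string is accepted even though some individual traces die.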
• 41. Handling Undefined Transitions
We can handle undefined transitions by defining one more state, a “death” state, and routing all previously undefined transitions to this death state.
[TD: the NFA for (a|b)*abb extended with death state 4; every undefined transition, and both a and b from state 4 itself, lead to 4.]
42
• 42. Other Concepts
[NFA for (a|b)*abb]
Not all paths may result in acceptance.
aabb is accepted along path : 0  0  1  2  3
BUT… it is not accepted along the equally valid path: 0  0  0  0  0
A string is accepted if at least one path leads to a final state.
43
• 43. Deterministic Finite Automata
A DFA is an NFA with the following restrictions:
• ε moves are not allowed
• For every state s ∈ S, there is one and only one path from s for every input symbol a ∈ Σ.
Since transition tables don’t have any alternative options, DFAs are easily simulated via an algorithm:
s := s0
c := nextchar;
while c ≠ eof do
    s := move(s, c);
    c := nextchar;
end;
if s is in F then return “yes” else return “no”
44
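This loop transcribes almost line-for-line into Python. The dictionary `move` table for (a|b)*abb and returning "no" on a missing transition are our additions:

```python
# A standard DFA for (a|b)*abb, encoded as a transition dictionary
DFA = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 3,
    (3, 'a'): 1, (3, 'b'): 0,
}

def dfa_run(dfa, start, final, text):
    """Transcription of the simulation loop: one table lookup per character."""
    s = start                          # s := s0
    for c in text:                     # while c != eof do
        s = dfa.get((s, c))            # s := move(s, c)
        if s is None:                  # no transition defined: reject
            return "no"
    return "yes" if s in final else "no"
```

Unlike the NFA simulation, there is never a set of states to maintain: each character costs exactly one lookup.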
• 44. Example – DFA : (a|b)*abb
Recall the original NFA:
[NFA: state 0 loops on a and b; 0 → 1 on a, 1 → 2 on b, 2 → 3 on b]
The equivalent DFA:
[DFA: 0 → 1 on a, 0 → 0 on b; 1 → 1 on a, 1 → 2 on b; 2 → 1 on a, 2 → 3 on b; 3 → 1 on a, 3 → 0 on b]
What Language is Accepted?
45
  • 45. Relation between RE, NFA and DFA 1. There is an algorithm for converting any RE into an NFA. 2. There is an algorithm for converting any NFA to a DFA. 3. There is an algorithm for converting any DFA to a RE. These facts tell us that REs, NFAs and DFAs have equivalent expressive power. All three describe the class of regular languages. 46
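Fact 2, the NFA-to-DFA conversion, is the subset construction: each DFA state is a set of NFA states. A sketch for NFAs without ε-moves (the function name and dictionary encoding are ours):

```python
def subset_construction(nfa, start, finals, alphabet):
    """NFA-to-DFA subset construction (ε-moves are not handled in this
    sketch): each DFA state is a frozenset of NFA states."""
    start_set = frozenset(start)
    dfa, seen, worklist = {}, {start_set}, [start_set]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            # union of the NFA moves out of every state in S
            T = frozenset(q2 for q in S for q2 in nfa.get((q, a), ()))
            dfa[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    # a DFA state is final if it contains any NFA final state
    dfa_finals = {S for S in seen if S & finals}
    return dfa, start_set, dfa_finals
```

Applied to the NFA for (a|b)*abb, this produces exactly four DFA states, matching the four-state DFA shown earlier; in the worst case, though, the number of subsets can grow exponentially, which is the 2^n blow-up discussed next.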
• 46. NFA vs DFA
• An NFA may be simulated by an algorithm when the NFA is constructed from the RE
• Algorithm run time is proportional to |N| * |x|, where |N| is the number of states and |x| is the length of the input
• Alternatively, we can construct a DFA from the NFA and use it to recognize input
• The space requirement of a DFA can be large. The RE (a+b)*a(a+b)(a+b)…(a+b) [with n-1 copies of (a+b) at the end] has no DFA with fewer than 2^n states. Fortunately, such REs do not occur often in practice.

     | space required | time to simulate
NFA  | O(|r|)         | O(|r| * |x|)
DFA  | O(2^|r|)       | O(|x|)
where |r| is the length of the regular expression.
47