Information Theory
MSU, MARAWI CITY
EEE DEPARTMENT
Shannon Theorem
“If the rate of information from the
source does not exceed the capacity of
a communication channel, then there
exists a coding technique such that the
information can be transmitted over the
channel with an arbitrarily small
frequency of errors, despite the
presence of noise.”
Three Basic Concepts:
 Measure of source information
 Information capacity of a channel
 Coding for information transfer
 Information Source
 Measuring Information
 Entropy
 Source Coding
 Designing Codes
Information Source
 Four characteristics of an information source
 The number of symbols, n
 The symbols, S1, S2, …, Sn
 The probability of occurrence of each symbol,
P(S1), P(S2), …, P(Sn)
 The correlation between successive symbols
 Memoryless source: a source in which each symbol is
independent of the previous symbols
 A message: a stream of symbols from the
sender to the receiver
Examples …
 Ex. 1.: A source that sends binary
information (streams of 0s and 1s)
with each symbol having equal
probability and no correlation can be
modeled as a memoryless source
 n = 2
 Symbols: 0 and 1
 Probabilities: P(0) = ½ and P(1) = ½
Measuring Information
 To measure the information contained in a
message
 How much information does a message
carry from the sender to the receiver?
 Examples
 Ex.2.: Imagine a person sitting in a room.
Looking out the window, she can clearly see
that the sun is shining. If at this moment she
receives a call from a neighbor saying “It is now
daytime”, does this message contain any
information?
 Ex. 3. : A person has bought a lottery ticket. A
friend calls to tell her that she has won first
prize. Does this message contain any
information?
Examples …
 Ex. 2. It does not; the message contains no
information, because she is already
certain that it is daytime.
 Ex. 3. It does. The message contains a lot of
information, because the probability of winning
first prize is very small
 Conclusion
 The information content of a message is
inversely proportional to the probability of the
occurrence of that message.
 If a message is very probable, it contains
very little information. If it is very improbable,
it contains a lot of information
Symbol Information
 To measure the information contained in a
message, we first need to measure the information
contained in each symbol
 I(S) = log2 (1/P(S)) bits
 The bit as a unit of information is different from the bit
(binary digit) used to represent a 0 or 1
 Examples
 Ex.5. Find the information content of each symbol
when the source is binary (sending only 0 or 1 with
equal probability)
 Ex. 6. Find the information content of each symbol
when the source is sending four symbols with prob.
P(S1) = 1/8, P(S2) = 1/8, P(S3) = ¼ ; and P(S4) =
1/2
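As a quick numerical check of the formula above, here is a minimal Python sketch (not from the original slides; the function name symbol_information is my own) applied to the probabilities of Ex. 5 and Ex. 6:

```python
import math

def symbol_information(p):
    """Information content of a symbol with probability p, in bits."""
    return math.log2(1 / p)

# Ex. 5: equal-probability binary source
print(symbol_information(1/2))          # 1.0 bit for 0, and likewise for 1

# Ex. 6: four symbols with P(S1) = 1/8, P(S2) = 1/8, P(S3) = 1/4, P(S4) = 1/2
for name, p in [("S1", 1/8), ("S2", 1/8), ("S3", 1/4), ("S4", 1/2)]:
    print(name, symbol_information(p))  # 3, 3, 2 and 1 bits
```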
Examples …
 Ex. 5.
 P(0) = P(1) = ½ , the information content of
each symbol is
 I(0) = log2 (1/P(0)) = log2 [2] = 1 bit
 I(1) = log2 (1/P(1)) = log2 [2] = 1 bit
 Ex. 6.
 I(S1) = log2 (1/P(S1)) = log2 [8] = 3 bits
 I(S2) = log2 (1/P(S2)) = log2 [8] = 3 bits
 I(S3) = log2 (1/P(S3)) = log2 [4] = 2 bits
 I(S4) = log2 (1/P(S4)) = log2 [2] = 1 bit
Examples …
 Ex.6.
 The symbols S1 and S2 are the least probable;
at the receiver each carries more
information (3 bits) than S3 or S4. The
symbol S3 is less probable than S4, so S3
carries more information than S4
 Defining the relationships:
 If P(Si) = P(Sj), then I(Si) = I(Sj)
 If P(Si) < P(Sj), then I(Si) > I(Sj)
 If P(Si) = 1, then I(Si) = 0
Message Information
 If the message comes from a memoryless
source, each symbol is independent and the
probability of receiving a message with
symbols Si, Sj, Sk, … (where i, j, and k can
be the same) is:
 P(message) = P(Si)P(Sj)P(Sk) …
 Then the information content carried by the
message is
 I(message) = log2 (1/P(message)) = log2 (1/[P(Si) P(Sj) P(Sk) …])
 I(message) = log2 (1/P(Si)) + log2 (1/P(Sj)) + log2 (1/P(Sk)) + …
 I(message) = I(Si) + I(Sj) + I(Sk) + …
Example …
 Ex. 7.
 An equal-probability binary source
sends an 8-bit message. What is the
amount of information received?
 The information content of the message is
 I(message) = I(first bit) + I(second bit) +
… + I(eighth bit) = 8 bits
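For a memoryless source the total information is just the sum over symbols, so Ex. 7 can be checked with a short sketch (the helper name message_information is my own, not from the slides):

```python
import math

def message_information(symbol_probs):
    """Information of a message from a memoryless source:
    the sum of the information of its individual symbols."""
    return sum(math.log2(1 / p) for p in symbol_probs)

# Ex. 7: an 8-bit message from an equal-probability binary source
print(message_information([1/2] * 8))   # 8.0 bits
```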
Entropy
 Entropy (H) of the source
 The average amount of information
per symbol of the source
 H(Source) = P(S1)xI(S1) + P(S2)xI(S2) + …
+ P(Sn)xI(Sn)
 Example
 What is the entropy of an equal-probability
binary source?
 H(Source) = P(0)xI(0) + P(1)xI(1) = 0.5x1
+ 0.5x1 = 1 bit
 1 bit per symbol
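A minimal sketch of the entropy formula, applied to the probability lists used in the examples (the helper name entropy is my own):

```python
import math

def entropy(probs):
    """Entropy of a source: average information per symbol, in bits."""
    return sum(p * math.log2(1 / p) for p in probs)

print(entropy([1/2, 1/2]))              # 1.0 bit  (equal-probability binary source)
print(entropy([1/8, 1/8, 1/4, 1/2]))    # 1.75 bits (the source of Ex. 6)
```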
Maximum Entropy
 For a particular source with n symbols,
maximum entropy is achieved only if all
the probabilities are the same. The value of
this maximum is
 Hmax(Source) = Σi P(Si) log2 (1/P(Si)) = n x (1/n) x log2 n = log2 n
 In other words, the entropy of every source
has an upper limit defined by
 H(Source) ≤ log2 n
Example …
 What is the maximum entropy of a
binary source?
 Hmax = log2 2 = 1 bit
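The upper limit can be checked numerically; the sketch below (my own, not from the slides) compares an equal-probability binary source with a skewed one:

```python
import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs)

n = 2
print(entropy([1/n] * n))        # 1.0 bit: the maximum, log2(2)
print(entropy([0.1, 0.9]))       # roughly 0.47 bits: below the log2(n) limit
```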
Source Coding
 To send a message from a source to a
destination, a symbol is normally coded
into a sequence of binary digits.
 The result is called a code word
 A code is a mapping from a set of symbols
into a set of code words.
 For example, the ASCII code is a mapping of a set
of 128 symbols into a set of 7-bit code
words
 A ………………………..> 1000001
 B …………………………> 1000010
 Set of symbols ….> Set of binary streams
Fixed- and Variable-Length Code
 A code can be designed with all the
code words the same length (fixed-
length code) or with different lengths
(variable length code)
 Examples
 A code with fixed-length code words:
 S1 -> 00; S2 -> 01; S3 -> 10; S4 -> 11
 A code with variable-length code words:
 S1 -> 0; S2 -> 10; S3 -> 11; S4 -> 110
Distinct Codes
 Each code word is different from every
other code word
 Example
 S1 -> 0; S2 -> 10; S3 -> 11; S4 -> 110
 Uniquely Decodable Codes
 A distinct code is uniquely decodable if each
code word can still be recognized when inserted
between other code words, i.e., any received
stream decodes in only one way.
 Example
 Not uniquely decodable
 S1 -> 0; S2 -> 1; S3 -> 00; S4 -> 10 because
 0010 -> S3 S4 or S3 S2 S1 or S1 S1 S4
Instantaneous Codes
 A uniquely decodable (but not instantaneous) code
 S1 -> 0; S2 -> 01; S3 -> 011; S4 -> 0111
 A 0 uniquely defines the beginning of a code
word
 A uniquely decodable code is
instantaneously decodable if no code
word is the prefix of any other code
word
Examples …
 A code word and its prefixes (note that each
code word is also a prefix of itself)
 S -> 01001 ; prefixes: 0, 01, 010, 0100, 01001
 A uniquely decodable code that is instantaneously
decodable
 S1 -> 0; S2 -> 10; S3 -> 110; S4 -> 111
 When the receiver receives a 0, it immediately
knows that it is S1; no other code word starts with a
0. When the receiver receives a 10, it immediately
knows that it is S2; no other code word starts with
10, and so on
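The prefix condition is easy to test in code; below is a minimal sketch (the function name is_instantaneous is my own) applied to the two codes discussed on these slides:

```python
def is_instantaneous(codewords):
    """A code is instantaneous (prefix-free) if no code word
    is a prefix of any other code word."""
    return not any(
        a != b and b.startswith(a)
        for a in codewords for b in codewords
    )

print(is_instantaneous(["0", "10", "110", "111"]))   # True: prefix-free
print(is_instantaneous(["0", "01", "011", "0111"]))  # False: uniquely decodable, not instantaneous
```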
Relationship between different
types of coding
Instantaneous codes ⊂ Uniquely decodable codes ⊂ Distinct codes ⊂ All codes
Code …
 Average code length
 L = L(S1)xP(S1) + L(S2)xP(S2) + …, where L(Si) is the length of the code word for Si
 Example
 Find the average length of the following
code:
 S1 -> 0; S2 -> 10; S3 -> 110; S4 -> 111
 P(S1) = ½, P(S2) = ¼; P(S3) = 1/8; P(S4) =
1/8
 Solution
 L = 1 x ½ + 2 x ¼ + 3 x 1/8 + 3 x 1/8 = 1¾ bits
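The average-length calculation can be written as a one-line helper; this is a sketch with my own function name, using the code and probabilities of the example above:

```python
def average_length(codewords, probs):
    """Average code length: sum of (code word length x probability)."""
    return sum(len(c) * p for c, p in zip(codewords, probs))

print(average_length(["0", "10", "110", "111"], [1/2, 1/4, 1/8, 1/8]))  # 1.75 bits
```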
Code …
 Code efficiency
 η (code efficiency) is defined as the entropy of
the source divided by the average length
of the code
 Example
 Find the efficiency of the following code:
 S1 ->0; S2->10; S3 -> 110; S4 -> 111
 P(S1) = ½, P(S2) = ¼; P(S3) = 1/8; P(S4) = 1/8
 Solution
 η = H(source)/L x 100%
 H(source) = (1/2) log2 (2) + (1/4) log2 (4) + (1/8) log2 (8) + (1/8) log2 (8) = 1¾ bits
 L = 1¾ bits
 η = (1¾ / 1¾) x 100% = 100%
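The efficiency calculation combines the entropy and average-length formulas; a minimal sketch (the helper name efficiency is my own, not from the slides) for the example above:

```python
import math

def efficiency(codewords, probs):
    """Code efficiency: source entropy divided by average code length."""
    H = sum(p * math.log2(1 / p) for p in probs)
    L = sum(len(c) * p for c, p in zip(codewords, probs))
    return H / L

print(f"{efficiency(['0', '10', '110', '111'], [1/2, 1/4, 1/8, 1/8]):.0%}")  # 100%
```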
Designing Codes
 Two examples of instantaneous codes
 Shannon – Fano code
 Huffman code
 Shannon – Fano code
 An instantaneous variable-length encoding method in
which the more probable symbols are given shorter
code words and the less probable symbols are given longer code
words
 Design builds a binary tree (top-down
construction) following the steps below:
 1. List the symbols in descending order of probability
 2. Divide the list into two equal (or nearly equal)
probability sublists. Assign 0 to the first sublist and 1
to the second
 3. Repeat step 2 for each sublist until no further
division is possible
Example of Shannon – Fano
Encoding
 Find the Shannon – Fano code words for
the following source
 P(S1) = 0.3 ; P(S2) = 0.2 ; P(S3) = 0.15 ; P(S4)
= 0.1 ; P(S5) = 0.1 ; P(S6) = 0.05 ; P(S7) =
0.05 ; P(S8) = 0.05
 Solution
 Because each code word is assigned a leaf of
the tree, no code word is the prefix of any
other; the code is instantaneous. The average
length and the efficiency of this code are:
 H(source) = 2.7
 L = 2.75
 η = 98%
Example of Shannon – Fano
Encoding
(Tree construction: the probability-ordered list is split repeatedly into nearly equal halves, with 0 assigned to the first half and 1 to the second.)
Symbol:      S1    S2    S3    S4    S5    S6    S7    S8
Probability: 0.30  0.20  0.15  0.10  0.10  0.05  0.05  0.05
Code word:   00    01    100   101   1100  1101  1110  1111
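The top-down procedure can be sketched recursively as below (my own implementation, not from the slides). When a list cannot be split into exactly equal halves the tie-breaking choice matters, so the resulting code words may differ from the table above, although the average length for this source still comes out to 2.75 bits:

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability) pairs.
    Returns a dict mapping each symbol name to its code word."""
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in group)
        running, best_i, best_diff = 0.0, 1, float("inf")
        for i in range(1, len(group)):          # find the most balanced split point
            running += group[i - 1][1]
            diff = abs(running - (total - running))
            if diff < best_diff:
                best_i, best_diff = i, diff
        split(group[:best_i], prefix + "0")     # first sublist gets 0
        split(group[best_i:], prefix + "1")     # second sublist gets 1

    ordered = sorted(symbols, key=lambda s: s[1], reverse=True)
    split(ordered, "")
    return codes

source = [("S1", 0.30), ("S2", 0.20), ("S3", 0.15), ("S4", 0.10),
          ("S5", 0.10), ("S6", 0.05), ("S7", 0.05), ("S8", 0.05)]
print(shannon_fano(source))
```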
Huffman Encoding
 An instantaneous variable-length
encoding method in which the more
probable symbols are given shorter
code words and the less probable symbols are
given longer code words
 Design builds a binary tree (bottom
up construction):
 1. Combine the two least probable symbols
into a single node whose probability is their sum
 2. Repeat step 1 until no further
combination is possible
Example Huffman encoding
 Find the Huffman code words for the
following source
 P(S1) = 0.3 ; P(S2) = 0.2 ; P(S3) = 0.15 ; P(S4)
= 0.1 ; P(S5) = 0.1 ; P(S6) = 0.05 ; P(S7) =
0.05 ; P(S8) = 0.05
 Solution
 Because each code word is assigned a leaf of
the tree, no code word is the prefix of any
other; the code is instantaneous. The average
length and the efficiency of this code are:
 H(source) = 2.70 ; L = 2.75 ; η = 98%
Example Huffman encoding
(Huffman tree: built by repeatedly merging the two least probable nodes, with 0/1 labels on the branches; the intermediate node probabilities are 0.10, 0.15, 0.20, 0.30, 0.40, 0.60 and 1.00.)
Symbol:      S1    S2    S3    S4    S5    S6    S7     S8
Probability: 0.30  0.20  0.15  0.10  0.10  0.05  0.05   0.05
Code word:   00    10    010   110   111   0110  01110  01111
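A compact bottom-up Huffman sketch using Python's heapq (my own code, not from the slides). Because ties between equal probabilities can be broken in different ways, the individual code words may not match the table above, but the average length is the same 2.75 bits for this source:

```python
import heapq
import itertools

def huffman(symbols):
    """symbols: list of (name, probability) pairs. Returns {name: code word}.
    Repeatedly merges the two least probable nodes until one root remains."""
    counter = itertools.count()               # tie-breaker so heap entries always compare
    heap = [(p, next(counter), {name: ""}) for name, p in symbols]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, codes0 = heapq.heappop(heap)   # least probable node -> branch 0
        p1, _, codes1 = heapq.heappop(heap)   # next least probable node -> branch 1
        merged = {name: "0" + c for name, c in codes0.items()}
        merged.update({name: "1" + c for name, c in codes1.items()})
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

source = [("S1", 0.30), ("S2", 0.20), ("S3", 0.15), ("S4", 0.10),
          ("S5", 0.10), ("S6", 0.05), ("S7", 0.05), ("S8", 0.05)]
codes = huffman(source)
print(codes)
print(sum(len(codes[name]) * p for name, p in source))   # average length: 2.75 bits
```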