Fundamentals of Digital
Communications and Data Transmission
14th March 2014
Prof. Asodariya Bhavesh
SSASIT, Surat
Outline
• What is communication?
• Analog to Digital Conversion (A/D)
• Source Coding
• Channel Encoding
• Modulation Techniques
• Modulation Techniques (Part II)
What is Communication?
• Communication is transferring data reliably
from one point to another
– Data could be: voice, video, codes etc…
• It is important to receive the same
information that was sent from the
transmitter.
• Communication system
– A system that allows transfer of information reliably
Information Source → Transmitter → Channel → Receiver → Information Sink
Block Diagram of a typical communication system
• Information Source
– The source of data
• Data could be: human voice, data storage device CD,
video etc..
– Data types:
• Discrete: Finite set of outcomes “Digital”
• Continuous : Infinite set of outcomes “Analog”
• Transmitter
– Converts the source data into a suitable form for
transmission through signal processing
– Data form depends on the channel
• Channel:
– The physical medium used to send the signal
– The medium through which the signal propagates until it arrives at the receiver
– Physical Mediums (Channels):
• Wired : twisted pairs, coaxial cable, fiber optics
• Wireless: Air, vacuum and water
– Each physical channel has a certain limited range of frequencies (fmin to fmax), which is called the channel bandwidth
– Physical channels have another important
limitation which is the NOISE
• Channel:
• Noise is an undesired random signal that corrupts the original signal and degrades it
• Noise sources:
» Electronic equipment in the communication system
» Thermal noise
» Atmospheric electromagnetic noise (interference with other signals being transmitted in the same channel)
– Another channel limitation is attenuation
• Attenuation weakens the signal strength as it travels over the transmission medium
• Attenuation increases as frequency increases
– One last important limitation is delay distortion
• Mainly in wired transmission
• Delays the transmitted signals  violates the reliability of the communication system
• Receiver
– Extracting the message/code in the received signal
• Example
– Speech signal at transmitter is converted into electromagnetic
waves to travel over the channel
– Once the electromagnetic waves are received properly, the receiver converts them back into speech form
– Information Sink
• The final stage
• The user
Effect of Noise On a transmitted signal
Digital Communication System
• Data of a digital format “i.e binary numbers”
Information Source → A/D Converter → Source Encoder → Channel Encoder → Modulator → Channel → Demodulator → Channel Decoder → Source Decoder → D/A Converter → Information Sink
• Information source
– Analog Data: Microphone, speech signal, image,
video etc…
– Discrete (Digital) Data: keyboard, binary numbers,
hex numbers, etc…
• Analog to Digital Converter (A/D)
– Sampling:
• Converting the continuous-time signal into a discrete-time signal
– Quantization:
• Converting the amplitude of the analog signal to a discrete value
– Coding:
• Assigning a binary code to each quantized amplitude level
• Source encoder
– Represent the transmitted data more efficiently
and remove redundant information
• How? “write Vs. rite”
• Speech signals frequency and human ear “20 kHz”
– Two types of encoding:
– Lossless data compression (encoding)
• Data can be recovered without any missing information
– Lossy data compression (encoding)
• Smaller size of data
• Data removed in encoding can not be recovered again
• Channel encoder:
– To control the noise and to detect and correct the errors that can occur in the transmitted data due to the noise.
• Modulator:
– Represent the data in a form to make it
compatible with the channel
• Carrier signal “high frequency signal”
• Demodulator:
– Removes the carrier signal and reverses the process of the Modulator
• Channel decoder:
– Detects and corrects the errors in the signal
gained from the channel
• Source decoder:
– Decompresses the data into its original format.
• Digital to Analog Converter:
– Reverses the operation of the A/D
– Needs techniques and knowledge about sampling,
quantization, and coding methods.
• Information Sink
– The User
Why should we use digital communication?
• Ease of regeneration
– Pulses “ 0 , 1”
– Easy to use repeaters
• Noise immunity
– Better noise handling when using repeaters that regenerate the original signal
– Easy to differentiate between the values “either 0 or 1”
• Ease of Transmission
– Less errors
– Faster !
– Better productivity
Why should we use digital communication?
• Ease of multiplexing
– Transmitting several signals simultaneously
• Use of modern technology
– Less cost !
• Ease of encryption
– Security and privacy guarantee
– Handles most of the encryption techniques
Disadvantage !
• The major disadvantage of digital transmission
is that it requires a greater transmission
bandwidth or channel bandwidth to
communicate the same information in digital
format as compared to analog format.
• Another disadvantage of digital transmission is
that digital detection requires system
synchronization, whereas analog signals
generally have no such requirement.
Chapter 2: Analog to Digital
Conversion (A/D)
14th March 2014
Prof. Asodariya Bhavesh
SSASIT, Surat
Digital Communication System
Information Source → A/D Converter → Source Encoder → Channel Encoder → Modulator → Channel → Demodulator → Channel Decoder → Source Decoder → D/A Converter → Information Sink
2.1 Basic Concepts in Signals
• A/D is the process of converting an analog
signal to digital signal, in order to transmit it
through a digital communication system.
• Electric Signals can be represented either in
Time domain or frequency domain.
– Time domain, i.e., v(t) = 2 sin(2π·1000t + 45°)
– We can get the value of that signal at any time (t) by substituting in the v(t) equation.
[Figure: the same signal shown in the time domain (amplitude vs. time in seconds) and in the frequency domain (amplitude vs. frequency in Hz)]
Converting an Analog Signal to a Discrete
Signal (A/D)
• Can be done through three basic steps:
1- Sampling
2- Quantization
3- Coding
Sampling
• Process of converting the continuous time
signal to a discrete time signal.
• Sampling is done by taking “Samples” at
specific times spaced regularly.
– V(t) is an analog signal
– V(nTs) is the sampled signal
• Ts = a positive real number that represents the spacing of the sampling times
• n = sample number (an integer)
Sampling
Original Analog Signal
“Before Sampling”
Sampled Analog Signal
“After Sampling”
Sampling
• The smaller the Ts value, the more closely the sampled signal resembles the original signal.
• Note that we have lost some values of the original signal, the parts between successive samples.
• Can we recover these values? And How?
• Can we go back from the discrete signal to
the original continuous signal?
Sampling Theorem
• A bandlimited signal having no spectral components above fmax (Hz) can be determined uniquely by values sampled at uniform intervals of Ts seconds, where Ts ≤ 1/(2·fmax)
• An analog signal can be reconstructed from a sampled signal without any loss of information if and only if:
– The signal is band limited
– The sampling frequency is at least twice the signal bandwidth
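The relation Ts ≤ 1/(2·fmax) can be tried out numerically. Below is a minimal Python sketch (our own illustration, not from the slides) that picks a sampling rate above the Nyquist rate 2·fmax for an assumed fmax and evaluates the sampled signal v(nTs):

```python
# Minimal sketch (assumed example): sampling a 1 kHz sine at a rate chosen
# from the sampling theorem, Ts <= 1 / (2 * fmax).
import numpy as np

fmax = 1000.0                  # highest frequency component in Hz (assumed)
fs = 2 * fmax * 4              # sample comfortably above the Nyquist rate 2*fmax
Ts = 1 / fs                    # sampling interval in seconds

n = np.arange(0, 100)          # sample indices
v_sampled = 2 * np.sin(2 * np.pi * 1000 * n * Ts + np.deg2rad(45))  # v(nTs)

print(f"Nyquist rate = {2 * fmax} Hz, chosen fs = {fs} Hz, Ts = {Ts:.2e} s")
```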
Quantization
• Quantization is the process of approximating a continuous range of values (a very large, effectively infinite set of possible values) by a relatively small set of discrete values.
• Continuous range  infinite set of values
• Discrete range  finite set of values
Quantization
• Dynamic range of a signal
– The difference between the highest and lowest values the signal can take.
Quantization
• In the quantization process, the dynamic range of a signal is divided into L amplitude levels denoted by mk, where k = 1, 2, 3, …, L
• L is an integer power of 2: L = 2^K
• K is the number of bits needed to represent the amplitude level.
• For example:
– If we divide the dynamic range into 8 levels, L = 8 = 2^3
– We need 3 bits to represent each level.
Quantization
• Example:
– Suppose we have an analog signal with values between [0, 10]. If we divide the signal into four levels, we have
• m1  [ 0, 2.5 ]
• m2  [ 2.5, 5 ]
• m3  [ 5 , 7.5]
• m4  [ 7.5, 10]
Quantization
• For every level, we assign a value for the signal
if it falls within the same level.
Q[v(t)] =
M1 = 1.25 if the signal is in m1
M2 = 3.75 if the signal is in m2
M3 = 6.25 if the signal is in m3
M4 = 8.75 if the signal is in m4
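As a rough sketch of this four-level quantizer (our own code; the function name quantize4 is ours), mapping each sample in [0, 10] to the representative value of its level:

```python
# Four-level quantizer for samples in the dynamic range [0, 10],
# mapping each sample to the mid-point of its level (1.25, 3.75, 6.25, 8.75).
def quantize4(v):
    if v < 2.5:
        return 1.25    # level m1 = [0, 2.5)
    elif v < 5.0:
        return 3.75    # level m2 = [2.5, 5)
    elif v < 7.5:
        return 6.25    # level m3 = [5, 7.5)
    else:
        return 8.75    # level m4 = [7.5, 10]

print([quantize4(v) for v in (0.3, 4.2, 7.7, 9.9)])   # -> [1.25, 3.75, 8.75, 8.75]
```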
Quantization
Original Analog Signal
“Before Quantization”
Quantized Analog Signal
“After Quantization”
Quantization
Original Discrete Signal
“Before Quantization”
Quantized Discrete Signal
“After Quantization”
Quantization
• The more quantization levels we take the
smaller the error between the original and
quantized signal.
• Quantization step: Δ = Dynamic Range / No. of quantization levels = (Smax − Smin) / L
• The smaller the Δ, the smaller the error.
Coding
• Assigning a binary code to each quantization
level.
• For example, if we have quantized a signal into
16 levels, the coding process is done as follows:
Step Code Step Code Step Code Step Code
0 0000 4 0100 8 1000 12 1100
1 0001 5 0101 9 1001 13 1101
2 0010 6 0110 10 1010 14 1110
3 0011 7 0111 11 1011 15 1111
Coding
• The binary codes are represented as pulses
• Pulse means 1
• No pulse means 0
• After the coding process, the signal is ready to be transmitted through the channel, thereby completing the A/D conversion of the analog signal.
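Putting the three steps together, here is a hedged end-to-end sketch of the A/D chain (our own illustration, assuming a uniform quantizer over the dynamic range [Smin, Smax] with L = 2^K levels):

```python
# Illustrative A/D chain (sample -> quantize -> code), assuming a uniform
# quantizer over the signal's dynamic range [Smin, Smax] with L = 2**K levels.
import numpy as np

def adc(signal, Smin, Smax, K):
    """Sample values -> K-bit binary codes via a uniform L = 2**K level quantizer."""
    L = 2 ** K
    delta = (Smax - Smin) / L                       # quantization step
    idx = np.clip(((signal - Smin) / delta).astype(int), 0, L - 1)
    return [format(int(i), f"0{K}b") for i in idx]  # K-bit code per sample

t = np.arange(0, 1e-3, 1 / 8000.0)                  # sample at 8 kHz for 1 ms
v = 5 + 5 * np.sin(2 * np.pi * 1000 * t)            # "analog" samples in [0, 10]
print(adc(v, Smin=0.0, Smax=10.0, K=4)[:4])         # first four 4-bit codes
```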
Chapter 3: Source Coding
14th March 2014
Prof. Asodariya Bhavesh
SSASIT, Surat
3.1 Measure of Information
• What is the definition of “Information” ?
• News, text data, images, videos, sound etc..
• In Information Theory
– Information is linked with the element of surprise or
uncertainty
– In terms of probability
– Information
• The more probable an event is to occur, the less information is conveyed by its occurrence.
• The less probable an event is to occur, the more information we get when it occurs.
Example 1:
• The rush hour in Kuwait is between 7.00 am –
8.00 am
– A person leaving his home to work at 7.30 will
NOT be surprised about the traffic jam  almost
no information is gained here
– A person leaving his home to work at 7.30 will BE
surprised if THERE IS NO traffic jam:
– He will start asking people / family / friends
– Unusual experience
– Gaining more information
Example 2
• The weather temperature in Kuwait in the summer season is usually above 30°
• It is known from the historical weather data that rain in summer is very rare.
– A person who lives in Kuwait will not be surprised by
this fact about the weather
– A person who lived in Kuwait will BE SURPRISED if it
rains during summer, therefore asking about the
phenomena. Therefore gaining more knowledge
“information”
How can we measure information?
• Measure of Information
– Given a digital source with N possible outcomes “messages”, the information sent from the digital source when the jth message is transmitted is given by the following equation:
Ij = log2(1 / pj)  [bits]
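As a quick sketch (ours), the measure Ij = log2(1/pj) can be evaluated directly:

```python
# Self-information of an outcome with probability p, in bits: I = log2(1/p).
from math import log2

def info_bits(p):
    return log2(1 / p)

print(info_bits(0.25))   # 2.0 bits, as in Example 1 below
```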
Example 1
• Find the information content of a message
that takes on one of four possible outcomes
equally likely
• Solution
The probability of each outcome is P = 1/4 = 0.25
Therefore,
I = log2(1/0.25) = log(1/0.25) / log(2) = 2 bits
Example 2
• Suppose we have a digital source that
generates binary bits. The probability that it
generates “0” is 0.25, while the probability
that it generates “1” is 0.75. Calculate the
amount of information conveyed by every bit.
Example 2 (Solution)
• For the binary “0”: I = log2(1/0.25) = 2 bits
• For the binary “1”: I = log2(1/0.75) = 0.42 bits
• Information conveyed by the “0” is more than the information conveyed by the “1”
Example 3:
• A discrete source generates a sequence of ( n )
bits. How many possible messages can we
receive from this source?
• Assuming all the messages are equally likely to
occur, how much information is conveyed by
each message?
Example 3 (solution):
• The source generates a sequence of n bits,
each bit takes one of two possible values
– a discrete source generates either “0” or “1”
• Therefore:
– We have 2^n possible outcomes, each with probability 1/2^n
• The information conveyed by each outcome:
I = log2(1 / (1/2^n)) = log(2^n) / log(2) = n·log(2) / log(2) = n bits
3.3 Entropy
• The entropy of a discrete source S is the
average amount of information ( or
uncertainty ) associated with that source.
• m = number of possible outcomes
• Pj = probability of the jth message
H(s) = Σ (j = 1 to m) pj · log2(1 / pj)  [bits]
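A small sketch (ours) that evaluates this sum for a list of outcome probabilities:

```python
# Entropy of a discrete source: H = sum over j of p_j * log2(1 / p_j), in bits.
from math import log2

def entropy(probs):
    return sum(p * log2(1 / p) for p in probs if p > 0)

# Card-deck probabilities used in Example 4 later in this chapter.
print(round(entropy([12/52, 8/52, 32/52]), 3))   # -> 1.335 bits
```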
Importance of Entropy
• Entropy is considered one of the most
important quantities in information theory.
• There are two types of source coding:
– Lossless coding “lossless data compression”
– Lossy coding “lossy data compression”
• Entropy is the threshold quantity that
separates lossy from lossless data
compression.
Example 4
• Consider an experiment of selecting a card at
random from a cards deck of 52 cards. Suppose
we’re interested in the following events:
– Getting a picture, with probability of 12/52
– Getting a number less than 3, with probability of 8/52
– Getting a number between 3 and 10, with a probability of 32/52
• Calculate the Entropy of this random experiment.
Example 4 (solution) :
• The entropy is given by:
H(s) = Σ (j = 1 to 3) pj · log2(1 / pj)  [bits]
• Therefore,
H(s) = (12/52)·log2(52/12) + (8/52)·log2(52/8) + (32/52)·log2(52/32) = 1.335 bits
Source Coding Theorem
• First discovered by Claude Shannon.
• Source coding theorem
“A discrete source with entropy rate H can be
encoded with arbitrarily small error probability at
any rate L bits per source output as long as L > H”
Where
H = Entropy rate
L = codeword length
If we encode the source with L > H  only a trivial (arbitrarily small) amount of errors
If we encode the source with L < H  we are certain that errors will occur
3.4 Lossless data compression
• Data compression
– Encoding information in a smaller size than its original size
• Like ZIP files (WinZIP), RAR files (WinRAR),TAR files etc..
• Data compression:
– Lossless: the compressed data are an exact copy of the original data
– Lossy: the compressed data may be different than the original data
• Lossless data compression techniques:
– Huffman coding algorithm
– Lempel-Ziv Source coding algorithm
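As a hedged illustration of the first listed technique, here is a compact Huffman-coding sketch (our own simplified version, not the algorithm as presented in any particular text):

```python
# Minimal Huffman coding sketch: build a prefix code from symbol probabilities.
import heapq

def huffman(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> bit string."""
    # Heap items: (probability, tie-breaker, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)          # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's, then merge.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

print(huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'} (average length equals the entropy here)
```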
Chapter 4: Channel Encoding
14th March 2014
Prof. Asodariya Bhavesh
SSASIT, Surat
Overview
• Channel encoding definition and importance
• Error Handling techniques
• Error Detection techniques
• Error Correction techniques
Channel Encoding - Definition
• In digital communication systems an optimum
system might be defined as one that
minimizes the probability of bit error.
• Errors occur in the transmitted signal due to transmission over a non-ideal channel
– Noise exists in channels
– Noise signals corrupt the transmitted data
Channel Encoding - Importance
• Channel encoding
– Techniques used to protect the transmitted signal
from the noise effect
• Two basic approaches of channel encoding
– Automatic Repeat Request (ARQ)
– Forward Error Correction (FEC)
Automatic Repeat Request (ARQ)
• Whenever the receiver detects an error in the
transmitted block of data, it requests the
transmitter to send the block again to
overcome the error.
• The request continues (“repeats”) until the block is received correctly
• ARQ is used in two-way communication
systems
– Transmitter  Receiver
Automatic Repeat Request (ARQ)
• Advantages:
– Error detection is simple and requires much simpler decoding equipment than the other techniques
• Disadvantages:
– If we have a channel with a high error rate, the information must be re-sent very frequently.
– This results in sending less new information, thus producing a less efficient system
Forward Error Correction (FEC)
• The transmitted data are encoded so that the
receiver can detect AND correct any errors.
• Commonly known as Channel Encoding
• Can be used in both two-way and one-way transmission.
• FEC is the most common technique used in digital communication because of its improved performance in correcting errors.
Forward Error Correction (FEC)
• Improved performance because:
– It introduces redundancy in the transmitted data
in a controlled way
– Noise averaging: the receiver can average out the noise over long periods of time.
Error Control Coding
• There are two basic categories for error
control coding
– Block codes
– Tree Codes
• Block Codes:
– A block of k bits is mapped into a block of n bits
Block of k bits → Block of n bits
Error Control Coding
• Tree codes are also known as codes with memory; in this type of code, the encoder operates on the incoming message sequence continuously in a serial manner.
• Protecting data from noise can be done
through:
– Error Detection
– Error Correction
Error Control Coding
• Error Detection
– We basically check if we have an error in the
received data or not.
• There are many techniques for the detection
stage
• Parity Check
• Cyclic Redundancy Check (CRC)
Error Control Coding
• Error Correction
– If we have detected an error “or more” in the
received data and we can correct them, then we
proceed in the correction phase
• There are many techniques for error
correction as well:
• Repetition Code
• Hamming Code
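To make the idea concrete, here is a tiny sketch (ours) of a repetition code, the simplest of the error-correcting codes named above: every bit is sent three times and decoded by majority vote.

```python
# Rate-1/3 repetition code: send every bit three times, decode by majority vote.
def rep3_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        trio = coded[i:i + 3]
        out.append(1 if sum(trio) >= 2 else 0)   # majority vote
    return out

msg = [1, 0, 1, 1]
tx = rep3_encode(msg)            # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
tx[4] = 1                        # flip one bit to simulate channel noise
print(rep3_decode(tx) == msg)    # True: the single error is corrected
```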
Error Detection Techniques
• Parity Check
– Very simple technique used to detect errors
• In Parity check, a parity bit is added to the
data block
– Assume a data block of size k bits
– Adding a parity bit will result in a block of size k+1
bits
• The value of the parity bit depends on the
number of “1”s in the k bits data block
Parity Check
• Suppose we want to make the number of 1’s in the transmitted data block even; in this case the value of the parity bit depends on the number of 1’s in the original data
– if we transmit a message = 1010111
• k = 7 bits
– Adding a parity bit so that the number of 1’s is even
• The message would be: 10101111
• k+1 = 8 bits
• At the receiver, if one bit changes its value, then an error can be detected
Example - 1
• At the transmitter, we need to send the
message M= 1011100.
– We need to make the number of one’s odd
• Transmitter:
– k=7 bits , M =1011100
– k+1=8 bits , M’=10111001
• Receiver:
– If we receive M’ = 10111001  no error is detected
– If we receive M’ = 10111000  an error is detected
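A short sketch (ours) of the odd-parity scheme used in this example: append a parity bit so that the total number of 1’s is odd, then check the count at the receiver.

```python
# Odd parity: the appended bit makes the total number of 1's odd.
def add_odd_parity(bits):
    parity = 1 if sum(bits) % 2 == 0 else 0
    return bits + [parity]

def check_odd_parity(block):
    return sum(block) % 2 == 1        # True -> no error detected

M = [1, 0, 1, 1, 1, 0, 0]             # message from the example, k = 7 bits
Mp = add_odd_parity(M)                # -> 10111001, k+1 = 8 bits
print(Mp, check_odd_parity(Mp))       # [1, 0, 1, 1, 1, 0, 0, 1] True

Mp[3] ^= 1                            # flip one bit to simulate an error
print(check_odd_parity(Mp))           # False -> error detected
```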
Parity Check
• If an odd number of errors occurred, then the
error still can be detected “assuming a parity
bit that makes an odd number of 1’s”
• Disadvantage:
– If an even number of errors occurred, then the error can NOT be detected “assuming a parity bit that makes an odd number of 1’s”
Chapter 5:
Modulation Techniques
14th March 2014
Prof. Asodariya Bhavesh
SSASIT, Surat
Introduction
• After encoding the binary data, the data is
now ready to be transmitted through the
physical channel
• In order to transmit the data in the physical
channel we must convert the data back to an
electrical signal
– Convert it back to an analog form
• This process is called modulation
Modulation - Definition
• Modulation is the process of changing a
parameter of a signal using another signal.
• The most commonly used signal type is the
sinusoidal signal that has the form of :
• V(t) = A sin(ωt + θ)
• A : amplitude of the signal
• ω : radian frequency
• θ : phase shift
Modulation
• In modulation process, we need to use two
types of signals:
– Information, message or transmitted signal
– Carrier signal
• Let’s assume the carrier signal is of a sinusoidal type of the form x(t) = A sin(ωt + θ)
• Modulation is letting the message signal change one of the carrier signal parameters
Modulation
• If we let the carrier signal amplitude change in accordance with the message signal, then we call the process amplitude modulation
• If we let the carrier signal frequency change in accordance with the message signal, then we call this process frequency modulation
Modulation Types AM, FM, PAM
Digital Data Transmission
• There are two types of Digital Data
Transmission:
1) Base-Band data transmission
– Uses low frequency carrier signal to transmit the data
2) Band-Pass data transmission
– Uses high frequency carrier signal to transmit the data
Base-Band Data Transmission
• Base-Band data transmission = Line coding
• The binary data is converted into an electrical signal in order to transmit it over the channel
• Binary data are represented using amplitudes for the 1’s and 0’s
• We will present some of the common base-band signaling techniques used to transmit the information
Line Coding Techniques
• Non-Return to Zero (NRZ)
• Unipolar Return to Zero (Unipolar-RZ)
• Bi-Polar Return to Zero (Bi-polar RZ)
• Return to Zero Alternate Mark Inversion (RZ-AMI)
• Non-Return to Zero – Mark (NRZ-Mark)
• Manchester coding (Biphase)
Non-Return to Zero (NRZ)
• The “1” is represented by some level
• The “0” is represented by the opposite level
• The term non-return to zero means the signal switches from one level to another without taking the zero value at any time during transmission.
NRZ - Example
• We want to transmit m=1011010
Unipolar Return to Zero (Unipolar RZ)
• Binary “1” is represented by a pulse that is half the bit width
• Binary “0” is represented by the absence of the pulse
Unipolar RZ - Example
• We want to transmit m=1011010
Bipolar Return to Zero (Bipolar RZ)
• Binary “1” is represented by a pulse that is half the bit width
• Binary “0” is represented by a pulse that is half the bit width but with the opposite sign
Bipolar RZ - Example
• We want to transmit m=1011010
Return to Zero Alternate Mark
Inversion (RZ-AMI)
• Binary “1” is represented by a pulse
alternating in sign
• Binary “0” is represented with the absence of
the pulse
RZ-AMI - Example
• We want to transmit m=1011010
Non-Return to Zero – Mark (NRZ-Mark)
• Also known as differential encoding
• Binary “1” is represented by a change in the level
– High to low
– Low to high
• Binary “0” is represented by no change in the level
NRZ-Mark - Example
• We want to transmit m=1011010
Manchester coding (Biphase)
• Binary “1” is represented by a positive pulse of half the bit width followed by a negative pulse
• Binary “0” is represented by a negative pulse of half the bit width followed by a positive pulse
Manchester coding - Example
• We want to transmit m=1011010
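All of the examples above use m = 1011010. As a rough sketch (ours), the following maps that bit sequence to amplitude levels for two of the schemes, NRZ and Manchester, using two half-bit samples per bit:

```python
# Illustrative line coding of m = 1011010 (two half-bit samples per bit).
# NRZ: '1' -> +1 for the whole bit, '0' -> -1 for the whole bit.
# Manchester: '1' -> +1 then -1 within the bit, '0' -> -1 then +1.
def nrz(bits):
    return [lvl for b in bits for lvl in ((+1, +1) if b == 1 else (-1, -1))]

def manchester(bits):
    return [lvl for b in bits for lvl in ((+1, -1) if b == 1 else (-1, +1))]

m = [1, 0, 1, 1, 0, 1, 0]
print("NRZ:       ", nrz(m))
print("Manchester:", manchester(m))
```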
Transmission
• Transmission bandwidth: the transmission bandwidth of a communication system is the band of frequencies allowed for signal transmission; in other words, it is the band of frequencies that we are allowed to use to transmit the data.
Bit Rate
• Bit Rate : is the number of bits transferred
between devices per second
• If each bit is represented by a pulse of width Tb, then the bit rate is
Rb = 1 / Tb  bits/sec
Example – Bit rate calculation
• Suppose that we have a binary data source
that generates bits. Each bit is represented by
a pulse of width Tb = 0.1 mSec
• Calculate the bit rate for the source
• Solution
Rb = 1 / Tb = 1 / (0.1×10^-3) = 10000 bits/sec
Example – Bit rate calculation
• Suppose we have an image frame of size
200x200 pixels. Each pixel is represented by
three primary colors red, green and blue
(RGB). Each one of these colors is represented
by 8 bits, if we transmit 1000 frames in 5
seconds what is the bit rate for this image?
Example – Bit rate calculation
• We have a total size of 200x200 = 40000 pixels
• Each pixel has three colors, RGB that each of them has 8
bits.
– 3 x 8 = 24 bits ( for each pixel with RGB)
• Therefore, for the whole image we have a total size of 24
x 40000 = 960000 bits
• Since we have 1000 frames in 5 seconds, then the total
number of bits transmitted will be 1000 x 960000 =
960000000 bits in 5 seconds
• Bit rate = 960000000 / 5 = 192000000 bits/second
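The same arithmetic as a quick sketch (ours), reproducing the 192,000,000 bits/second figure:

```python
# Bit-rate calculation for the 200x200 RGB image example.
pixels = 200 * 200                            # 40000 pixels per frame
bits_per_pixel = 3 * 8                        # R, G, B at 8 bits each
bits_per_frame = pixels * bits_per_pixel      # 960000 bits
total_bits = 1000 * bits_per_frame            # 960000000 bits in 5 seconds
bit_rate = total_bits / 5
print(bit_rate)                               # 192000000.0 bits/second
```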
Baud rate (Symbol rate)
• The number of symbols transmitted per second
through the communication channel.
• The symbol rate is related to the bit rate by the following equation:
Rs = Rb / N
• Rb = bit rate
• Rs = symbol rate
• N = number of bits per symbol
Baud rate (Symbol rate)
• We usually use symbols to transmit data when the
transmission bandwidth is limited
• For example, if we need to transmit data at a high rate, the bit duration Tb becomes very small; to overcome this problem we take a group of more than one bit, say 2, therefore:
– Pulse width Tb  bandwidth f0 = 1/Tb
– Pulse width 2Tb  f = 1/(2Tb) = (1/2)·f0
– Pulse width 4Tb  f = 1/(4Tb) = (1/4)·f0
Baud rate (Symbol rate)
• We notice that by transmitting symbols rather
than bits we can reduce the spectrum of the
transmitted signal.
• Hence, we can use symbol transmission rather
than bit transmission when the transmission
bandwidth is limited
Example
• A binary data source transmits binary data with a bit duration of 1 µsec. Suppose we want to transmit symbols rather than bits, where each symbol is represented by four bits. What is the symbol rate?
• Each bit is represented by a pulse of duration 1 µsec, hence the bit rate is
Rb = 1 / (1×10^-6) = 1000000 bits/sec
Example (Continue)
• Therefore, the symbol rate will be
Rs = Rb / N = 1000000 / 4 = 250000 symbols/sec
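A short numerical check of this example (our own sketch):

```python
# Symbol (baud) rate from the bit rate: Rs = Rb / N.
Tb = 1e-6                  # bit duration of 1 microsecond
Rb = 1 / Tb                # 1,000,000 bits/sec
N = 4                      # bits per symbol
Rs = Rb / N
print(Rs)                  # 250000.0 symbols/sec
```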
Chapter 6:
Modulation Techniques (Part II)
14th March 2014
Prof. Asodariya Bhavesh
SSASIT, Surat
Introduction
• Bandpass data transmission
• Amplitude Shift Keying (ASK)
• Phase Shift Keying (PSK)
• Frequency Shift Keying (FSK)
• Multilevel Signaling (M-ary Modulation)
Bandpass Data Transmission
• In communication, we use modulation for several
reasons in particular:
– To transmit the message signal through the
communication channel efficiently.
– To transmit several signals at the same time over a
communication link through the process of
multiplexing or multiple access.
– To simplify the design of the electronic systems used
to transmit the message.
– By using modulation we can easily transmit data with low loss
Bandpass Digital Transmission
• Digital modulation is the process by which
digital symbols are transformed into wave-
forms that are compatible with the
characteristics of the channel.
• The following are the general steps used by
the modulator to transmit data
– 1. Accept incoming digital data
– 2. Group the data into symbols
– 3. Use these symbols to set or change the phase, frequency or
amplitude of the reference carrier signal appropriately.
Bandpass Modulation Techniques
• Amplitude Shift Keying (ASK)
• Phase Shift Keying (PSK)
• Frequency Shift Keying (FSK)
• Multilevel Signaling (M-ary Modulation)
• M-ary Amplitude Modulation
• M-ary Phase Shift Keying (M-ary PSK)
• M-ary Frequency Shift Keying (M-ary FSK)
• Quadrature Amplitude Modulation (QAM)
Amplitude Shift Keying (ASK)
• In ASK the binary data modulates the
amplitude of the carrier signal
Phase Shift Keying (PSK)
• In PSK the binary data modulates the phase of
the carrier signal
Frequency Shift Keying (FSK)
• In FSK the binary data modulates the
frequency of the carrier signal
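A compact sketch (ours, with assumed carrier frequency, sample rate and frequency offsets) that generates ASK, PSK and FSK waveforms for a short bit sequence:

```python
# Illustrative ASK / PSK / FSK waveform generation for a few bits.
import numpy as np

fc, fs, Tb = 5_000.0, 100_000.0, 1e-3      # carrier, sample rate, bit duration (assumed)
t_bit = np.arange(0, Tb, 1 / fs)           # time samples within one bit
bits = [1, 0, 1, 1, 0]

def ask(b):   # amplitude keyed: carrier on for '1', off for '0'
    return (1.0 if b else 0.0) * np.sin(2 * np.pi * fc * t_bit)

def psk(b):   # phase keyed: 0 or 180 degree carrier phase
    return np.sin(2 * np.pi * fc * t_bit + (0 if b else np.pi))

def fsk(b):   # frequency keyed: two different carrier frequencies
    f = fc + (1_000.0 if b else -1_000.0)
    return np.sin(2 * np.pi * f * t_bit)

ask_wave = np.concatenate([ask(b) for b in bits])
psk_wave = np.concatenate([psk(b) for b in bits])
fsk_wave = np.concatenate([fsk(b) for b in bits])
print(ask_wave.shape, psk_wave.shape, fsk_wave.shape)   # (500,) each
```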
Modulation Types – 4 Level ASK,
FSK, PSK
Constellation Diagram
BPSK
Constellation Diagram
QPSK
Constellation Diagram
16-QAM