CHANNEL EQUALISATION
           BY-   AJIT KUMAR PANDA
                 POONAN SAHOO
                 SAYANTAN DAS
                 SURAJ CHOUDHURY
THREATS IN DIGITAL
COMMUNICATION

 • There are four main threats in the process of
 digital communication:

 Inter Symbol Interference (ISI)

 Multipath Propagation

 Co-channel Interference

 Presence of noise in the channel
INTER SYMBOL INTERFERENCE:
 Inter Symbol Interference
    in Digital Transmission

 Inter-symbol interference
    (ISI) arises when the
    channel is dispersive: each
    received pulse is spread
    out and overlaps adjacent
    pulses, so the transmitted
    symbols interfere with one
    another.

   This makes it difficult to
    recover the original data
    from a single channel
    sample.
CO-CHANNEL INTERFERENCE:
 Co-channel Interference (CCI) and
  Adjacent Channel Interference
  (ACI) occur in communication
  systems because multiple-access
  techniques share the medium in
  space, frequency or time.

 CCI occurs in cellular radio and
  dual-polarized microwave radio,
  where the allocated channel
  frequencies are reused in different
  cells for efficient spectrum
  utilization.
MULTI-PATH PROPAGATION:
   Within telecommunication channels, multiple propagation paths commonly occur.
    In practical terms this is equivalent to transmitting the same signal through a
    number of separate channels, each having a different attenuation and delay.
   Consider an open-air radio transmission channel that has three propagation
    paths, as illustrated in Fig. 1.2. These could be
     -     Direct
     -     Earth bound
     -     Sky bound
   Fig. 1.2b describes how a receiver picks up the transmitted data. The direct
    signal is received first, whilst the earth-bound and sky-bound signals are
    delayed. All three signals are attenuated, with the sky path suffering the most.
   Multipath interference between consecutively transmitted signals takes place
    if one signal is received whilst the previous signal is still being detected.
    In Fig. 1.2 this occurs if the symbol transmission rate is greater than 1/τ,
    where τ represents the transmission delay. Because bandwidth-efficient systems
    use high data rates, multipath interference commonly occurs.
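
To make the three-path picture concrete, here is a minimal simulation sketch of delayed, attenuated copies of one symbol stream adding up at the receiver. The gains and sample delays are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical three-path channel: direct, earth-bound, sky-bound.
# (attenuation, delay in samples) -- illustrative, not measured values.
paths = [(1.0, 0), (0.6, 3), (0.3, 7)]

symbols = np.random.choice([-1.0, 1.0], size=20)  # BPSK symbol stream

received = np.zeros(len(symbols) + max(d for _, d in paths))
for gain, delay in paths:
    received[delay:delay + len(symbols)] += gain * symbols

# If the symbol rate exceeds 1/tau (tau = largest path delay), the
# delayed copies overlap consecutive symbols: multipath interference.
print(received[:10])
```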
EQUALIZER
 Equalization is the process of removing ISI and noise effects
   introduced by the channel
 It is located at the receiver end of the channel
 It is an inverse filter placed at the front end of the receiver
 The transfer function of the equalizer is the inverse of the
   transfer function of the channel (illustrated below)
 Equalization is an iterative process of reducing the mean-square
   error, i.e. the difference between the desired response and the
   output of the filter used in the equalizer
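
As a hedged illustration of the inverse-filter idea: for a toy FIR channel (coefficients assumed here, not from the slides), a truncated series expansion of 1/H(z) cascaded with the channel is close to a pure delta, i.e. the ISI is removed:

```python
import numpy as np

# Toy channel H(z) = 1 + 0.5 z^-1 (illustrative coefficients).
h = np.array([1.0, 0.5])

# Truncated series expansion of the inverse: 1/H(z) = sum (-0.5)^n z^-n.
n = np.arange(16)
h_inv = (-0.5) ** n

# Channel followed by equalizer should be close to a unit impulse.
combined = np.convolve(h, h_inv)
print(np.round(combined[:5], 4))  # ~ [1, 0, 0, 0, 0]
```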
TYPES OF EQUALIZERS:

 Equalizers are of two types:

                 EQUALIZERS ─┬─ LINEAR
                             └─ NON-LINEAR

 Linear equalizers aim at reducing ISI in linear channels
  using algorithms such as Least Mean Square (LMS), Recursive
  Least Squares (RLS) and normalized LMS
 Non-linear equalizers equalize non-linear channels. They
  mainly use Neural Network (NN) and Multilayer Perceptron
  (MLP) based algorithms for equalization
Linear Adaptive Filters:

 An adaptive filter is a computational device that attempts to
  model the relationship between two signals in real time, in an
  iterative manner
 The output is compared to the desired signal and the parameters
  of the adaptive filter are adjusted accordingly, which is why it
  is known as a self-designing filter.
Applications of Adaptive Filters:
Identification
 Used to provide a linear model of an unknown plant

 Parameters
       u = input of adaptive filter = input to plant
       y = output of adaptive filter
       d = desired response = output of plant
       e = d − y = estimation error
 Applications:
       System identification
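
A minimal sketch of this identification setup, assuming an FIR plant and using the LMS update that later slides introduce; the plant taps, step size and signal lengths are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

plant = np.array([0.8, -0.4, 0.2])   # unknown plant (assumed FIR here)
w = np.zeros(3)                      # adaptive filter tap weights
mu = 0.05                            # step size

u = rng.standard_normal(2000)        # common input to plant and filter
for k in range(3, len(u)):
    u_vec = u[k-3:k][::-1]           # most recent samples first
    d = plant @ u_vec                # desired response = plant output
    y = w @ u_vec                    # adaptive filter output
    e = d - y                        # estimation error e = d - y
    w += mu * e * u_vec              # LMS update (introduced later)

print(np.round(w, 3))                # converges toward the plant taps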
Applications of Adaptive Filters:
Inverse Modeling
     Used to provide an inverse model of an unknown plant

 Parameters
        u = input of adaptive filter = output of plant
        y = output of adaptive filter
        d = desired response = delayed system input
        e = d − y = estimation error
 Applications:
       Channel Equalization
The channel equalization
model:
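
The model figure is not reproduced in this extract. A minimal simulation sketch of the usual setup — symbols passed through a dispersive FIR channel plus noise, with an adaptive equalizer trained against a delayed copy of the transmitted sequence — might look like this (the channel taps, equalizer length, delay and step size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

h = np.array([1.0, 0.5, 0.25])           # assumed dispersive channel taps
t = rng.choice([-1.0, 1.0], size=5000)   # transmitted BPSK symbols t(k)
r = np.convolve(t, h)[:len(t)]           # channel output
r += 0.05 * rng.standard_normal(len(t))  # additive channel noise

M, delay, mu = 11, 6, 0.01               # equalizer taps, training delay, step size
w = np.zeros(M)
for k in range(M - 1, len(t)):
    x = r[k-M+1:k+1][::-1]               # equalizer input vector (newest first)
    e = t[k - delay] - w @ x             # error vs. delayed training symbol
    w += mu * e * x                      # LMS update

y = np.sign(np.convolve(r, w)[:len(t)])  # equalized decisions, y(k) ~ t(k - delay)
errors = int(np.sum(y[M:] != t[M - delay:len(t) - delay]))
print("symbol errors after training:", errors)
```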
Stochastic Gradient
 Approach:
    The most commonly used class of adaptive filters
    The cost function is defined as the mean-squared error
          the difference between the filter output and the desired response
    Based on the method of steepest descent
        Move down the error surface toward its minimum
        Requires the gradient of the error surface to be known
    The most popular adaptation algorithm is LMS
        Derived from steepest descent
        Does not require the gradient to be known: it is estimated at every iteration
    Least-Mean-Square (LMS) Algorithm

 (update value of tap-weight vector) =
     (old value of tap-weight vector)
     + (learning-rate parameter) × (tap-input vector) × (error signal)

 i.e.  w(n+1) = w(n) + µ · u(n) · e*(n)
LMS algorithm
•   Introduced by Widrow & Hoff in 1959
•   Simple: no matrix calculations are involved in the adaptation
•   Belongs to the family of stochastic gradient algorithms
•   An approximation of the steepest-descent method
•   Based on the MMSE (Minimum Mean Square Error) criterion
•   The adaptive process involves two important signals:
•       1.) The filtering process, producing the output signal
•       2.) The desired signal (training sequence)
•   Adaptive process: recursive adjustment of the filter tap
    weights
Least-Mean-Square (LMS)
Algorithm continued....
 The LMS algorithm involves two basic processes that are
  followed in adaptive equalization:
    Training: adapting the filter to a known training sequence
    Tracking: keeping track of the changing characteristics of
     the channel.
LMS Algorithm Steps:
 Filter output            z(n) = Σ_{k=0}^{M−1} u(n−k) · w_k*(n)

 Estimation error         e(n) = d(n) − z(n)

 Tap-weight adaptation    w_k(n+1) = w_k(n) + µ · u(n−k) · e*(n)
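
A direct transcription of these three steps into code, assuming complex-valued data and the notation above; this is a sketch of one iteration, not a reference implementation:

```python
import numpy as np

def lms_step(w, u_vec, d, mu):
    """One LMS iteration following the three steps above.

    w     : current tap-weight vector, shape (M,)
    u_vec : tap-input vector [u(n), u(n-1), ..., u(n-M+1)]
    d     : desired response d(n)
    mu    : step-size parameter
    """
    z = np.vdot(w, u_vec)             # filter output z(n) = sum u(n-k) w_k*(n)
    e = d - z                         # estimation error e(n) = d(n) - z(n)
    w = w + mu * u_vec * np.conj(e)   # tap-weight adaptation
    return w, z, e
```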
Derivation of the LMS MSE
expression:
 Error: e(n) = x(n) − x′(n), where x′(n) = Ʃᵢ wᵢ·s(n−i) is the filter output
 Squared error: E = (x(n) − x′(n))²
 Using the minimum mean-square-error criterion, we differentiate the
  expression with respect to the weights:
        dE/dw = d/dw ((x(n) − x′(n))²)
 Applying the chain rule and substituting x′(n), we get
      dE/dw = 2(x(n) − x′(n)) · d/dw (x(n) − Ʃ w·s(n−i))
      dE/dw = −2·e(n)·s(n−i)
 From this we can derive an update equation for every new sample n
  using steepest descent (moving against the gradient):
          w(n+1) = w(n) − u·(dE/dw)
     so   w(n+1) = w(n) + 2·u·e(n)·s(n−i)
          for i = 0, 1, 2, 3, ...
Stability of LMS:
 The LMS algorithm is convergent in the mean square if and only
   if the step-size parameter satisfies

                     0 < µ < 1/λmax

  Here λmax is the largest eigenvalue of the correlation matrix of
  the input data.

 A more practical test for stability (sketched below) is

                     0 < µ < 1/(input signal power)

 The value of the step size is a trade-off between fast
  convergence and low steady-state misadjustment.
    Larger values of the step size
           increase the adaptation rate (faster adaptation)
           increase the residual mean-squared error
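
A quick way to apply the practical test, assuming the tap-input power is estimated as the number of taps times the mean input power (a common white-input approximation; the tap count M is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(10000)        # example input sequence
M = 8                                 # assumed number of filter taps

# Tap-input power estimated as M * E[u^2] (white-input approximation).
tap_input_power = M * np.mean(u ** 2)
mu_max = 1.0 / tap_input_power
print(f"choose 0 < mu < {mu_max:.4f}")
```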
LMS-Pros & cons:

 LMS – Advantages:
    • Simplicity of implementation
    • Does not neglect the noise, unlike the zero-forcing equalizer
    • Stable and robust performance under different signal
      conditions

 LMS – Disadvantages:
     Slow convergence

     Requires a training sequence as reference, thus reducing the
      usable communication bandwidth.
NLMS-Normalised LMS
algorithm
 Is intended to provide better performance than LMS, whose
   convergence is slow
 Uses normalization to provide a variable step size: the step
   size ‘u’ is divided by the instantaneous signal power, thus
   providing more stability and faster convergence.
 Is equivalent to running the LMS recursion with the step size
   renormalized for each new sample of inputs:
    w(n+1) = w(n) + (1/(xᵀ(n)·x(n))) · e(n) · x(n)
 The step-size value for the input vector is calculated as
    µ(n) = 1/(xᵀ(n)·x(n))
 The filter tap weights are then updated in preparation for the
   next iteration:
    w(n+1) = w(n) + 2·µ(n) · e(n) · x(n)
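
A sketch of one NLMS iteration following the update above; the `eps` guard against division by zero is an addition for numerical safety, not part of the slide's equations:

```python
import numpy as np

def nlms_step(w, x, d, mu=1.0, eps=1e-8):
    """One NLMS iteration: the LMS update with the step size
    normalised by the instantaneous input power x^T x."""
    e = d - w @ x                          # estimation error
    w = w + (mu / (x @ x + eps)) * e * x   # normalised update
    return w, e
```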
Results for LMS algorithm:

 Convergence is faster with an increased step size.
  (Plot not reproduced; noise level = 30 dB.)
Results for NLMS algorithm:

 Convergence is faster with the NLMS algorithm.
  (Plot not reproduced.)
 It provides a more stable output.
NON-LINEAR CHANNEL
EQUALISATION
Need For Non-Linear
Equalizer:
 Linear equalizers do not perform well on channels having deep
  spectral nulls in the passband.

 To compensate for the distortion, a linear equalizer places too
  much gain in the vicinity of the spectral nulls, thereby
  enhancing the noise at those frequencies.

 BER is better with a non-linear channel equalizer.

 Linear equalizer: an inverse problem
  Non-linear equalizer: a pattern-classification problem
Non-Linear Channel
Equalizer:
 t_k denotes a sequence of T-spaced complex symbols of a BPSK
  constellation, where 1/T denotes the symbol rate and k denotes
  the discrete time index.
 A widely used model for a linear dispersive channel is an FIR
  filter whose output at the kth instant is given by

         a_k = Σ_{i=0}^{Nh−1} h_i · t_{k−i}

 (Figure: schematic diagram of a non-linear wireless digital
  communication system with channel equalizer)
Continued…
   where
            h_i denotes the FIR filter weights
            Nh denotes the FIR order.

   Considering the channel to be non-linear, the NL block introduces
    the channel non-linearity at the filter output.

   The transmitted signal t_k, after passing through the non-linear
    channel and being corrupted by additive noise, arrives at the
    receiver; the received signal at the kth time instant is denoted r_k.

   The purpose of the equalizer attached at the receiver front end is
    to recover the transmitted sequence t_k or its delayed version
    t_{k−d}, where d is the propagation delay associated with the
    physical channel.
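
A hedged sketch of this channel model: the slides do not specify the form of the NL block, so a cubic memoryless non-linearity (a common assumption in the equalization literature) is used here, and the FIR weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

h = np.array([0.35, 0.85, 0.35])        # illustrative FIR channel weights h_i
t = rng.choice([-1.0, 1.0], size=1000)  # BPSK symbols t_k

a = np.convolve(t, h)[:len(t)]          # linear FIR output a_k = sum h_i t_{k-i}

# Hypothetical memoryless non-linearity standing in for the NL block.
b = a + 0.2 * a ** 3

r = b + 0.1 * rng.standard_normal(len(t))  # received signal r_k with noise
```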
Neural Network:
 The field started in the 1800s as an
  effort to describe how the human
  mind performs.

 The idea was applied to
  computational models such as
  Turing’s B-type machines and the
  perceptron.

 A neural network is a massively
  parallel distributed processor made
  up of simple processing units, which
  has a natural propensity for storing
  experiential knowledge and making
  it available for use.
Continued:
   Today, in its general form, a neural network is
    a machine that is either built from electronic
    components or simulated in software on
    a digital computer.

   To achieve good performance, neural
    networks employ a massive interconnection of
    simple computing cells referred to as
    “neurons” or “processing units”.

   The procedure used to perform the learning is
    called a learning algorithm; its function is to
    modify the synaptic weights of the network in
    an orderly fashion so as to attain a desired
    design objective. McCulloch and Pitts developed
    early neural-network models of computing machines.

Artificial Neural Network:
 Artificial Neural Networks (ANNs)
  have become a powerful tool for
  many complex applications, including
  function approximation, non-linear
  system identification, motor
  control, pattern recognition, adaptive
  channel equalization and
  optimization.

 An ANN is capable of performing
  non-linear mapping between the
  input and output spaces due to its
  large parallel interconnection
  between different layers and its
  non-linear processing characteristics.
Continued:
 An artificial neuron basically consists of a
  computing element that forms the
  weighted sum of the input signals and the
  connecting weights. The weighted sum is
  added to a bias, called the threshold, and
  the resultant signal is passed through a
  non-linear activation function. Common
  activation functions are the sigmoid and the
  hyperbolic tangent.
 Each neuron is associated with three
  adjustable parameters that can be learned:
  the connecting weights, the bias
  and the slope of the non-linear function.
 From the structural point of view, a NN may
  be single-layer or multilayer.
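
A minimal sketch of such a neuron, using tanh as the activation; the slope parameter mentioned above is fixed at 1 here for simplicity, and the input and weight values are illustrative:

```python
import numpy as np

def neuron(s, w, bias):
    """Weighted sum of inputs plus bias, passed through a non-linear
    activation (tanh here; the sigmoid is the other common choice)."""
    return np.tanh(w @ s + bias)

s = np.array([0.5, -1.0, 0.25])   # example input signals
w = np.array([0.8, 0.1, -0.4])    # connecting weights
print(neuron(s, w, bias=0.2))
```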
Multi-layer Perceptron:
 The perceptron, a single-level
  connection of McCulloch-Pitts
  neurons, is called a single-layer
  feed-forward network.
 Such a network is capable of
  linearly separating the input
  vectors into pattern classes by
  a hyperplane. Many such
  perceptrons can be connected in
  layers.
 In an MLP network, the
  input signal propagates through
  the network in a forward
  direction, on a layer-by-layer
  basis. This network has been
  applied successfully to solve
  diverse problems.
Continued…
 An MLP is generally trained using the popular error back-
  propagation algorithm.
 The scheme of an MLP using four layers is shown. s_i
  represents the inputs s1, s2, ….., sn to the network, and y_k
  represents the output of the final layer of the neural
  network.
 The connecting weights between the input and the first
  hidden layer, the first and second hidden layers, and the
  second hidden layer and the output layer are represented by
  W_i, W_ji and W_kj respectively.
 The final output of the MLP may be expressed as
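
The expression itself is missing from the extracted slide. With the weight notation above and a non-linear activation φ(·), the standard form for a network with two hidden layers would be (a reconstruction, not the slide's own equation; the input-layer weights are written here with two indices W_il rather than the slide's W_i):

```latex
y_k \;=\; \varphi\!\Big( \sum_{j} W_{kj}\,
      \varphi\!\Big( \sum_{i} W_{ji}\,
      \varphi\!\Big( \sum_{l} W_{il}\, s_l \Big) \Big) \Big)
```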
