BACK PROPAGATION ALGORITHMS AND NEURAL NETWORK IMPLEMENTATION
Multi-Layer Perceptron (MLP), Back Propagation Algorithm, AND/OR/XOR Implementation
MULTI-LAYER PERCEPTRON (MLP): OVERVIEW
- Structure:
- Consists of input, hidden, and output layers.
- Fully connected neurons.
- Solves complex, non-linear problems.
- Common activation functions:
- Sigmoid: Output in range (0, 1).
- ReLU: Outputs max(0, x); passes positive values through and zeroes out negatives.
- Training involves forward and backward propagation.
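A minimal sketch of the two activation functions named above (the function names and test values are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged, zeroes out negatives.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))  # approximately [0.119 0.5 0.881]
print(relu(x))     # [0. 0. 2.]
```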
MLP ARCHITECTURE
- Input Layer:
- Accepts raw data.
- Number of neurons = number of input features.
- Hidden Layers:
- Perform feature extraction.
- Multiple layers for deep learning.
MLP ARCHITECTURE
- Output Layer:
- Final predictions.
- Number of neurons = Number of output classes.
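To make the layer sizing concrete, here is a hedged NumPy sketch of a forward pass through a fully connected MLP; the sizes (4 features, 8 hidden units, 3 classes) are example values, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 4, 8, 3  # illustrative sizes

# One weight matrix and bias vector per fully connected layer.
W1, b1 = rng.normal(size=(n_features, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_classes)), np.zeros(n_classes)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU
    return h @ W2 + b2                # output layer: one raw score per class

x = rng.normal(size=n_features)       # one sample with n_features values
print(forward(x).shape)               # (3,) -> one score per output class
```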
BACK PROPAGATION ALGORITHM: STEPS
1. **Forward Pass**:
- Compute outputs for each layer.
- Use activation functions for non-linearity.
2. **Error Calculation**:
   - Compute the error, e.g. squared error E = ½ (Target − Output)².
3. **Backward Pass**:
- Calculate gradients of E w.r.t weights using chain rule.
4. **Weight Update**:
   - Use gradient descent: w = w − η · ∂E/∂w
- Repeat until error converges.
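A compact sketch of all four steps for a tiny 2-2-1 sigmoid network trained with squared error (the layer sizes, learning rate, and function name are assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)  # hidden layer
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)  # output layer
eta = 0.5                                      # learning rate (example value)

def train_step(x, target):
    global W1, b1, W2, b2
    # 1. Forward pass: compute each layer's output.
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # 2. Error calculation: E = 0.5 * (target - y)^2.
    E = 0.5 * np.sum((target - y) ** 2)
    # 3. Backward pass: chain rule gives dE/dw for every weight.
    dy = (y - target) * y * (1 - y)   # dE/d(pre-activation) at the output
    dh = (dy @ W2.T) * h * (1 - h)    # propagated back to the hidden layer
    # 4. Weight update: w <- w - eta * dE/dw (gradient descent).
    W2 -= eta * np.outer(h, dy); b2 -= eta * dy
    W1 -= eta * np.outer(x, dh); b1 -= eta * dh
    return E

x, target = np.array([0.0, 1.0]), np.array([1.0])
for _ in range(3):
    print(train_step(x, target))  # error should shrink step by step
```

Repeating train_step over the training set until E stops decreasing is the "repeat until error converges" loop from the list above.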
STOCHASTIC GRADIENT DESCENT (SGD)
- Optimization method used in back propagation.
- Updates weights after processing each training example.
- Steps:
1. Shuffle training data.
2. Process one sample at a time.
3. Update weights using gradient of loss function.
- Advantages: Fast updates, suitable for large datasets.
- Drawback: High variance, which can lead to noisy convergence.
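A self-contained sketch of the three SGD steps on a single sigmoid neuron learning the OR gate (dataset, epoch count, and learning rate are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: the OR gate (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 1.0])

w, b, eta = rng.normal(size=2), 0.0, 0.5

for epoch in range(5000):
    order = rng.permutation(len(X))       # 1. shuffle the training data
    for i in order:                       # 2. process one sample at a time
        y = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))
        grad = (y - T[i]) * y * (1 - y)   # gradient of squared error
        w -= eta * grad * X[i]            # 3. update weights immediately
        b -= eta * grad

print((1 / (1 + np.exp(-(X @ w + b)))).round(2))  # approaches [0, 1, 1, 1]
```

Because the weights move after every single sample, individual updates are noisy, but each one is cheap; that is the fast-but-high-variance trade-off noted above.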
NEURAL NETWORK IMPLEMENTATION: AND GATE
- **Problem:**
  - Input: binary pairs (each value 0 or 1).
- Output: 1 if all inputs are 1; otherwise 0.
- **Weights and Bias:**
  - Adjusted such that the weighted sum > threshold only when both inputs are 1.
- **Example:**
- Input: [0, 0] -> Output: 0
- Input: [1, 1] -> Output: 1
- Linearly separable problem.
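One possible single-neuron realisation of the AND gate; the specific weights and threshold below are an illustrative choice, not taken from the slides:

```python
def and_gate(x1, x2):
    # Weights w1 = w2 = 1 with threshold 1.5: the weighted sum
    # exceeds the threshold only when both inputs are 1.
    return 1 if (1 * x1 + 1 * x2) > 1.5 else 0

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", and_gate(*pair))  # 0, 0, 0, 1
```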
NEURAL NETWORK IMPLEMENTATION: OR GATE
- **Problem:**
  - Input: binary pairs (each value 0 or 1).
- Output: 1 if any input is 1; otherwise 0.
- **Weights and Bias:**
  - Adjusted such that the weighted sum > threshold if either input is 1.
- **Example:**
- Input: [0, 0] -> Output: 0
- Input: [1, 0] -> Output: 1
- Linearly separable problem.
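The OR gate needs only a lower threshold than AND; again, the concrete numbers are one illustrative choice:

```python
def or_gate(x1, x2):
    # Same structure as the AND neuron, but the threshold drops to 0.5,
    # so a single active input is enough to fire.
    return 1 if (1 * x1 + 1 * x2) > 0.5 else 0

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", or_gate(*pair))  # 0, 1, 1, 1
```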
NEURAL NETWORK IMPLEMENTATION: XOR GATE
- **Problem:**
  - Input: binary pairs (each value 0 or 1).
- Output: 1 if inputs are different; otherwise 0.
- **Weights and Bias:**
  - Not linearly separable; a hidden layer is required to separate the data points.
- **Example:**
- Input: [0, 1] -> Output: 1
- Input: [1, 1] -> Output: 0
- Non-linear problem solved by MLP.
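A classical hand-built two-layer solution makes the hidden layer's role concrete: XOR = AND(OR, NAND). The weights below are one well-known construction, sketched here for illustration:

```python
def step(z):
    return 1 if z > 0 else 0

def xor_gate(x1, x2):
    # Hidden layer: one OR-like neuron, one NAND-like neuron.
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is 1
    h2 = step(-x1 - x2 + 1.5)   # fires unless both inputs are 1
    # Output layer: AND of the two hidden neurons.
    return step(h1 + h2 - 1.5)

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", xor_gate(*pair))  # 0, 1, 1, 0
```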
EXAMPLE
(Worked back propagation examples were presented as figures on the original slides; the images are not transcribed here.)
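As a hand-checkable stand-in for those figures, here is a single back propagation step on one sigmoid neuron; every number is chosen purely for illustration:

```python
import math

x, w, b, target, eta = 1.0, 0.5, 0.0, 1.0, 0.1  # illustrative values

# Forward pass.
z = w * x + b                 # 0.5
y = 1 / (1 + math.exp(-z))    # sigmoid(0.5) ~= 0.6225

# Error: E = 0.5 * (target - y)^2 ~= 0.0713
E = 0.5 * (target - y) ** 2

# Backward pass: dE/dw = (y - target) * y * (1 - y) * x ~= -0.0887
grad_w = (y - target) * y * (1 - y) * x

# Weight update: w <- w - eta * dE/dw ~= 0.5089
w = w - eta * grad_w
print(round(y, 4), round(E, 4), round(w, 4))
```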
SUMMARY
- MLPs use back propagation for learning.
- Key steps: forward pass, error calculation, backward pass, weight update.
- SGD is an efficient optimization technique.
- AND and OR gates are linearly separable, solvable by simple networks.
- XOR is not linearly separable; it is solved by MLPs with hidden layers.
