1. BACK PROPAGATION ALGORITHMS AND NEURAL NETWORK IMPLEMENTATION
Multi-Layer Perceptron (MLP), Back Propagation Algorithm, AND/OR/XOR Implementation
2. MULTI-LAYER PERCEPTRON (MLP): OVERVIEW
- Structure:
- Consists of input, hidden, and output layers.
- Fully connected neurons.
- Solves complex, non-linear problems.
- Common activation functions:
- Sigmoid: Output in range (0, 1).
- ReLU: Output is max(0, x), introducing non-linearity.
- Training involves forward and backward propagation.
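As a quick illustration (not from the slides), the two activation functions above can be written in a few lines of Python with NumPy; the test values are arbitrary:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Keeps positive values, zeroes out negatives: max(0, x)
    return np.maximum(0.0, x)

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # approx [0.12, 0.5, 0.88]
print(relu(np.array([-2.0, 0.0, 2.0])))     # [0.  0.  2.]
```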
3. MLP ARCHITECTURE
- Input Layer:
- Accepts raw data.
- Number of neurons = number of input features.
- Hidden Layers:
- Perform feature extraction.
- Multiple layers for deep learning.
4. MLP ARCHITECTURE
- Output Layer:
- Final predictions.
- Number of neurons = Number of output classes.
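To make the layer structure concrete, here is a minimal sketch of a forward pass through an MLP with 2 input neurons, 3 hidden neurons, and 1 output neuron. The layer sizes, random weights, and sigmoid activations are illustrative assumptions, not values from the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# 2 input neurons -> 3 hidden neurons -> 1 output neuron (sizes chosen for illustration)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # output-layer weights and biases

def forward(x):
    h = sigmoid(W1 @ x + b1)   # hidden layer: feature extraction
    y = sigmoid(W2 @ h + b2)   # output layer: final prediction
    return y

print(forward(np.array([0.0, 1.0])))   # raw input enters at the input layer
```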
6. BACK PROPAGATION ALGORITHM: STEPS
1. **Forward Pass**:
- Compute outputs for each layer.
- Use activation functions for non-linearity.
2. **Error Calculation**:
- Compute the error, e.g. squared error E = ½ (Target - Output)².
3. **Backward Pass**:
- Calculate gradients of E w.r.t weights using chain rule.
4. **Weight Update**:
- Use gradient descent: w = w - η · ∂E/∂w, where η is the learning rate.
- Repeat until error converges.
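Below is a hedged sketch of one back propagation iteration for a tiny 2-2-1 network with sigmoid activations and squared error, walking through the four steps above. The learning rate, input, target, and random initial weights are all illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

eta = 0.5                                # learning rate η (illustrative value)
x = np.array([0.0, 1.0])                 # one training input (assumed)
t = np.array([1.0])                      # its target output (assumed)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)    # hidden layer (2 neurons)
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)    # output layer (1 neuron)

# 1. Forward pass
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)

# 2. Error calculation (squared error)
E = 0.5 * np.sum((t - y) ** 2)
print("error before update:", E)

# 3. Backward pass: gradients of E w.r.t. the weights via the chain rule
delta_out = (y - t) * y * (1 - y)              # error signal at the output layer
delta_hid = (W2.T @ delta_out) * h * (1 - h)   # error signal at the hidden layer

# 4. Weight update: w = w - η * ∂E/∂w
W2 -= eta * np.outer(delta_out, h)
b2 -= eta * delta_out
W1 -= eta * np.outer(delta_hid, x)
b1 -= eta * delta_hid
```

Repeating these four steps over the training data drives the error down until it converges.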
7. STOCHASTIC GRADIENT DESCENT (SGD)
- Optimization method used with back propagation: back propagation supplies the gradients, SGD applies the weight updates.
- Updates weights after processing each training example.
- Steps:
1. Shuffle training data.
2. Process one sample at a time.
3. Update weights using gradient of loss function.
- Advantages: Fast updates, suitable for large datasets.
- Drawback: High variance, which can lead to noisy convergence.
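The following sketch shows the shape of the SGD loop on a toy linear model; the dataset, learning rate, and epoch count are placeholders chosen for illustration. The point is the structure: shuffle, take one sample, update immediately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (placeholder): targets follow y = 2*x1 + 3*x2
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0])

w = np.zeros(2)     # model weights to be learned
eta = 0.05          # learning rate

for epoch in range(20):
    order = rng.permutation(len(X))      # 1. shuffle the training data
    for i in order:                      # 2. process one sample at a time
        pred = w @ X[i]
        grad = (pred - y[i]) * X[i]      # gradient of 0.5 * (pred - y)^2 w.r.t. w
        w -= eta * grad                  # 3. update weights immediately (high variance)

print(w)   # approaches [2. 3.]
```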
8. NEURAL NETWORK IMPLEMENTATION: AND GATE
- **Problem:**
- Input: pairs of binary values (0 or 1).
- Output: 1 if all inputs are 1; otherwise 0.
- **Weights and Bias:**
- Adjusted such that weighted sum > threshold only when both inputs are 1.
- **Example:**
- Input: [0, 0] -> Output: 0
- Input: [1, 1] -> Output: 1
- Linearly separable problem.
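One concrete (and non-unique) choice of weights and bias that implements the AND gate with a single threshold neuron is sketched below; the values w = [1, 1], b = -1.5 are an assumption, since any setting with the same decision boundary works:

```python
import numpy as np

def and_gate(x1, x2):
    # Weighted sum exceeds the threshold (folded into the bias) only when both inputs are 1
    w, b = np.array([1.0, 1.0]), -1.5
    return int(np.dot(w, [x1, x2]) + b > 0)

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", and_gate(*pair))   # 1 only for (1, 1)
```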
9. NEURAL NETWORK IMPLEMENTATION: OR GATE
- **Problem:**
- Input: pairs of binary values (0 or 1).
- Output: 1 if any input is 1; otherwise 0.
- **Weights and Bias:**
- Adjusted such that weighted sum > threshold if either input is 1.
- **Example:**
- Input: [0, 0] -> Output: 0
- Input: [1, 0] -> Output: 1
- Linearly separable problem.
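The OR gate needs only a different bias on the same single neuron; again the specific values are an illustrative assumption:

```python
import numpy as np

def or_gate(x1, x2):
    # Same neuron as for AND; only the bias changes, so the sum exceeds 0 if either input is 1
    w, b = np.array([1.0, 1.0]), -0.5
    return int(np.dot(w, [x1, x2]) + b > 0)

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", or_gate(*pair))    # 0 only for (0, 0)
```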
10. NEURAL NETWORK IMPLEMENTATION: XOR GATE
- **Problem:**
- Input: pairs of binary values (0 or 1).
- Output: 1 if inputs are different; otherwise 0.
- **Weights and Bias:**
- Requires hidden layer to separate data points.
- **Example:**
- Input: [0, 1] -> Output: 1
- Input: [1, 1] -> Output: 0
- Non-linear problem solved by MLP.
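Because XOR is not linearly separable, a hidden layer is needed. The sketch below hand-wires one possible solution (one hidden neuron computing OR, another computing NAND, combined by an AND output neuron); a trained MLP would learn its own weights, so these values are purely illustrative:

```python
import numpy as np

def step(z):
    # Hard threshold activation: 1 if the weighted sum is positive, else 0
    return (z > 0).astype(float)

def xor_gate(x1, x2):
    x = np.array([x1, x2], dtype=float)
    # Hidden layer: first neuron fires for OR(x1, x2), second for NAND(x1, x2)
    W1 = np.array([[ 1.0,  1.0],
                   [-1.0, -1.0]])
    b1 = np.array([-0.5, 1.5])
    h = step(W1 @ x + b1)
    # Output layer: AND of the two hidden neurons gives XOR
    W2, b2 = np.array([1.0, 1.0]), -1.5
    return int(W2 @ h + b2 > 0)

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", xor_gate(*pair))   # 1 only when the inputs differ
```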
17. SUMMARY
- MLPs use back propagation for learning.
- Key steps: Forward pass, error calculation, backward pass, weight update.
- SGD is an efficient optimization technique.
- AND and OR gates are linearly separable and solvable by simple networks.
- XOR is not linearly separable and requires an MLP with a hidden layer.