2. Introduction
AI solves complex problems using systematic approaches.
Different techniques are employed depending on the nature of the problem (search, learning, optimization, etc.).
3. Search Algorithms (Overview)
Uninformed Search (Blind Search): No prior knowledge.
◦ BFS: Explores all levels before moving deeper.
◦ DFS: Explores depth-first, may not find the shortest path.
Informed Search (Heuristic Search): Uses problem-specific knowledge.
◦ A* Search: Uses path cost + heuristic.
◦ Greedy Best-First Search: Selects the node that appears closest to the goal according to a heuristic.
4. Search Algorithms
AI problems are often framed as state-space search problems, in which an agent explores possible
states to find an optimal solution. Search algorithms fall into two main categories:
Uninformed (Blind) Search
These algorithms do not use any domain-specific knowledge and systematically explore the search
space.
(a) Breadth-First Search (BFS)
Explores all neighbors of a node before going deeper.
Uses a queue (FIFO) for exploration.
Guarantees the shortest path in an unweighted graph.
Time & Space Complexity: O(b^d), where b is the branching factor and d is the depth.
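A minimal Python sketch of BFS, assuming the graph is an adjacency dict mapping each node to its neighbors (the representation and names are illustrative, not fixed by these notes):

    from collections import deque

    def bfs(graph, start, goal):
        # graph: dict mapping each node to an iterable of neighbors
        frontier = deque([start])       # FIFO queue
        parents = {start: None}         # also serves as the visited set
        while frontier:
            node = frontier.popleft()
            if node == goal:
                # reconstruct the shortest (fewest-edges) path
                path = []
                while node is not None:
                    path.append(node)
                    node = parents[node]
                return path[::-1]
            for neighbor in graph[node]:
                if neighbor not in parents:
                    parents[neighbor] = node
                    frontier.append(neighbor)
        return None  # goal unreachable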
(b) Depth-First Search (DFS)
Explores as deep as possible before backtracking.
Uses a stack (LIFO) for exploration.
Not guaranteed to find the shortest path.
Time Complexity: O(b^d), but space complexity is lower than BFS (O(d)).
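For comparison, a minimal iterative DFS sketch over the same assumed adjacency-dict representation; it returns some path to the goal, not necessarily the shortest:

    def dfs(graph, start, goal):
        # LIFO stack of (node, path-so-far) pairs
        stack = [(start, [start])]
        visited = set()
        while stack:
            node, path = stack.pop()
            if node == goal:
                return path            # a path, not necessarily the shortest
            if node in visited:
                continue
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    stack.append((neighbor, path + [neighbor]))
        return None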
(c) Depth-Limited Search (DLS)
A variant of DFS with a depth limit.
Prevents excessive memory usage but may fail if the goal is beyond the depth limit.
(d) Iterative Deepening DFS (IDDFS)
Repeatedly runs DFS with increasing depth limits.
Combines benefits of DFS (low memory) and BFS (shortest path guarantee).
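A short sketch combining both ideas: a recursive depth-limited search (DLS) plus an IDDFS driver that raises the limit until the goal is found (the graph representation and max_depth cap are illustrative assumptions):

    def depth_limited_search(graph, node, goal, limit, path):
        # DLS: recursive DFS that refuses to go deeper than `limit`
        if node == goal:
            return path
        if limit == 0:
            return None
        for neighbor in graph[node]:
            if neighbor not in path:   # avoid cycles along the current path
                result = depth_limited_search(graph, neighbor, goal, limit - 1, path + [neighbor])
                if result is not None:
                    return result
        return None

    def iddfs(graph, start, goal, max_depth=50):
        # IDDFS: run DLS with limits 0, 1, 2, ... until the goal is found
        for limit in range(max_depth + 1):
            result = depth_limited_search(graph, start, goal, limit, [start])
            if result is not None:
                return result
        return None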
6. Informed (Heuristic) Search
These algorithms use heuristic functions (estimates of how close a state is to the goal) to
improve efficiency.
(a) Greedy Best-First Search
Selects the node that appears closest to the goal based on a heuristic function h(n).
May get stuck in local optima.
Example: Navigation apps use distance-based heuristics.
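A greedy best-first sketch, assuming the caller supplies a heuristic function h(n) and the same adjacency-dict graph as above; the frontier is ordered by h alone:

    import heapq

    def greedy_best_first(graph, start, goal, h):
        # frontier ordered only by the heuristic h(n), ignoring path cost
        frontier = [(h(start), start, [start])]
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
        return None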
(b) A* Search
Uses both path cost and heuristic information:
f(n) = g(n) + h(n)
where:
◦ g(n) = cost from the start node to node n.
◦ h(n) = estimated cost from node n to the goal.
Guaranteed to find the optimal path if h(n) is admissible (i.e., never overestimates the true cost).
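A compact A* sketch, assuming a weighted graph where graph[node] yields (neighbor, step_cost) pairs and h is an admissible heuristic supplied by the caller:

    import heapq

    def a_star(graph, start, goal, h):
        # priority queue ordered by f(n) = g(n) + h(n)
        frontier = [(h(start), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g                 # optimal if h never overestimates
            for neighbor, step_cost in graph[node]:
                new_g = g + step_cost
                if new_g < best_g.get(neighbor, float("inf")):
                    best_g[neighbor] = new_g
                    heapq.heappush(frontier,
                                   (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
        return None, float("inf")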
(c) Hill Climbing
Starts at an arbitrary solution and iteratively moves towards a better solution.
Can get stuck in local maxima.
Used in robot path planning and optimization.
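A generic hill-climbing sketch; the neighbors and score functions are assumed to be supplied by the problem at hand (illustrative names):

    def hill_climbing(initial, neighbors, score):
        # Greedily move to the best neighbor until no neighbor improves the score.
        current = initial
        while True:
            candidates = list(neighbors(current))
            if not candidates:
                return current
            best = max(candidates, key=score)
            if score(best) <= score(current):
                return current          # local maximum (or plateau) reached
            current = best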
(d) Simulated Annealing
Similar to hill climbing but sometimes accepts worse solutions to escape local optima.
Inspired by the annealing (controlled cooling) of metals in metallurgy.
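A simulated-annealing sketch for maximization; random_neighbor, score, and the geometric cooling schedule are illustrative assumptions:

    import math, random

    def simulated_annealing(initial, random_neighbor, score,
                            temp=1.0, cooling=0.995, min_temp=1e-3):
        current, current_score = initial, score(initial)
        best, best_score = current, current_score
        while temp > min_temp:
            candidate = random_neighbor(current)
            delta = score(candidate) - current_score
            # Always accept improvements; accept worse moves with probability e^(delta/temp)
            if delta > 0 or random.random() < math.exp(delta / temp):
                current, current_score = candidate, current_score + delta
            if current_score > best_score:
                best, best_score = current, current_score
            temp *= cooling
        return best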
8. Constraint Satisfaction Problems (CSPs)
Problems defined by variables, domains, and constraints.
Techniques:
◦ Backtracking: Systematically explores possibilities.
◦ Constraint Propagation: Reduces search space using rules.
◦ Forward Checking: Detects early constraint violations.
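A small backtracking sketch with forward checking, assuming the CSP is given as a variable list, a domain dict, and a pairwise consistency function (all names illustrative):

    def backtracking_search(variables, domains, consistent, assignment=None):
        # variables: list of variable names
        # domains: dict mapping each variable to a list of candidate values
        # consistent(var1, val1, var2, val2): True if the pair satisfies all constraints
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if all(consistent(var, value, other, assignment[other]) for other in assignment):
                # Forward checking: prune values of unassigned variables that conflict
                pruned = {
                    other: [val for val in domains[other] if consistent(var, value, other, val)]
                    for other in variables if other not in assignment and other != var
                }
                if all(pruned[other] for other in pruned):   # no domain wiped out
                    result = backtracking_search(variables, {**domains, **pruned}, consistent,
                                                 {**assignment, var: value})
                    if result is not None:
                        return result
        return None

For map coloring, for instance, consistent would return False whenever two adjacent regions receive the same color.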
10. Machine Learning-Based Techniques
AI can learn patterns and improve decisions.
Categories:
◦ Supervised Learning (Classification & Regression)
◦ Unsupervised Learning (Clustering & Pattern Discovery)
◦ Reinforcement Learning (Reward-based learning)
11. Evolutionary Algorithms
Inspired by nature and evolution.
Genetic Algorithms: Mutation, crossover, selection.
Particle Swarm Optimization: Mimics swarm intelligence.
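A bare-bones genetic-algorithm sketch over bit-string individuals with tournament selection, single-point crossover, and bit-flip mutation; the fitness function and all parameter values are illustrative assumptions:

    import random

    def genetic_algorithm(fitness, length=20, pop_size=50, generations=100, mutation_rate=0.01):
        # individuals are bit strings represented as lists of 0/1
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            def select():
                # tournament selection of size 2
                a, b = random.sample(population, 2)
                return a if fitness(a) >= fitness(b) else b
            next_gen = []
            while len(next_gen) < pop_size:
                p1, p2 = select(), select()
                point = random.randint(1, length - 1)        # single-point crossover
                child = p1[:point] + p2[point:]
                # bit-flip mutation
                child = [bit ^ 1 if random.random() < mutation_rate else bit for bit in child]
                next_gen.append(child)
            population = next_gen
        return max(population, key=fitness)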
12. Game Theory & Adversarial Search
Used in multi-agent competitive environments.
Minimax Algorithm: Selects the best move assuming optimal opponent play.
Alpha-Beta Pruning: Optimizes Minimax by pruning branches that cannot affect the final decision.
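A minimax sketch with alpha-beta pruning, written against an assumed game interface (is_terminal, evaluate, moves, apply); these method names are illustrative, not a fixed API:

    import math

    def minimax(state, game, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
        # game.is_terminal / evaluate / moves / apply form the assumed game interface
        if depth == 0 or game.is_terminal(state):
            return game.evaluate(state)
        if maximizing:
            value = -math.inf
            for move in game.moves(state):
                value = max(value, minimax(game.apply(state, move), game, depth - 1,
                                           alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break              # beta cutoff: the opponent will avoid this branch
            return value
        else:
            value = math.inf
            for move in game.moves(state):
                value = min(value, minimax(game.apply(state, move), game, depth - 1,
                                           alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break              # alpha cutoff
            return value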
13. Planning Algorithms
Used in robotics & automated agents.
STRIPS: Represents actions by preconditions and effects (add/delete lists).
Partial-Order Planning: Orders actions only where necessary rather than committing to a strict sequence.
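A toy illustration of a STRIPS-style action with preconditions, an add list, and a delete list, and of applying it to a state; all predicates and names are made up for illustration:

    # A STRIPS-style action: applicable when its preconditions hold in the state,
    # and applying it adds and deletes propositions.
    action_move = {
        "name": "move(robot, roomA, roomB)",
        "preconditions": {"at(robot, roomA)", "connected(roomA, roomB)"},
        "add": {"at(robot, roomB)"},
        "delete": {"at(robot, roomA)"},
    }

    def apply_action(state, action):
        if not action["preconditions"] <= state:   # preconditions must be a subset of the state
            return None
        return (state - action["delete"]) | action["add"]

    state = {"at(robot, roomA)", "connected(roomA, roomB)"}
    print(apply_action(state, action_move))   # {'connected(roomA, roomB)', 'at(robot, roomB)'}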
14. Natural Language Processing (NLP)
AI techniques to process human language.
Techniques:
◦ Rule-Based Parsing
◦ Hidden Markov Models (HMM)
◦ Transformer Models (GPT, BERT)
15. Heuristic & Metaheuristic Techniques
Approximate solutions for complex problems.
Simulated Annealing: Occasionally accepts worse moves to escape local optima.
Tabu Search: Keeps a tabu list of recently visited solutions to avoid cycling.
Ant Colony Optimization: Mimics the pheromone-trail foraging behavior of ants.
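A small tabu-search sketch, assuming solutions can be compared with == so they can sit in a tabu list; the neighbors and score functions and the tenure value are illustrative assumptions:

    from collections import deque

    def tabu_search(initial, neighbors, score, iterations=200, tenure=10):
        current = best = initial
        tabu = deque([initial], maxlen=tenure)    # recently visited solutions
        for _ in range(iterations):
            candidates = [n for n in neighbors(current) if n not in tabu]
            if not candidates:
                break
            current = max(candidates, key=score)  # best non-tabu neighbor, even if worse
            tabu.append(current)
            if score(current) > score(best):
                best = current
        return best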