2. UNIT II- SEARCHING STRATEGIES
Search algorithms – Informed search – Uninformed search – Heuristic
search strategies – heuristic functions. Local search and
optimization problems – local search in continuous space – search
with non-deterministic actions – search in partially observable
environments – online search agents and unknown environments.
3. Introduction
What are Search Algorithms?
– Definition: Procedures to navigate through problem spaces to find solutions.
– Importance in AI: Fundamental for problem-solving and decision-making.
Types of Search Algorithms
Uninformed (Blind) Search
– Characteristics: No additional information about states beyond the problem
definition.
– Examples: Breadth-First Search (BFS), Depth-First Search (DFS)
Informed (Heuristic) Search
– Characteristics: Uses problem-specific knowledge to find solutions more
efficiently.
– Examples: A*, Greedy Best-First Search
4. Uninformed Search Algorithms
Breadth-First Search (BFS)
Description: Explores all nodes at the present depth level before
moving on to nodes at the next depth level.
Applications: Shortest path in unweighted graphs.
Depth-First Search (DFS)
Description: Explores as far as possible along each branch before
backtracking.
Applications: Solving puzzles, pathfinding in mazes.
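To make the two expansion orders concrete, here is a minimal sketch of both traversals over a small adjacency-list graph (the graph and node names are illustrative, not from the slides). The only difference between the two functions is the frontier: a FIFO queue for BFS, a LIFO stack for DFS.

```python
from collections import deque

graph = {  # illustrative adjacency list
    'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'],
    'D': [], 'E': ['F'], 'F': [],
}

def bfs(start, goal):
    """Expand the shallowest unexplored node first (FIFO frontier)."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                      # fewest edges from start
        for nbr in graph[path[-1]]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])

def dfs(start, goal):
    """Expand the deepest unexplored node first (LIFO frontier)."""
    frontier, visited = [[start]], {start}
    while frontier:
        path = frontier.pop()
        if path[-1] == goal:
            return path                      # some path, not necessarily shortest
        for nbr in graph[path[-1]]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])

print(bfs('A', 'F'))   # ['A', 'C', 'F']
print(dfs('A', 'F'))   # also ['A', 'C', 'F'] here, but with no shortest-path guarantee
```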
5. BFS vs. DFS
Comparison
BFS: complete; optimal for unit step costs; time O(b^d); space O(b^d), where b is the branching factor and d the depth of the shallowest goal.
DFS: not complete in infinite-depth spaces; not optimal; time O(b^m); space O(bm), where m is the maximum depth.
Advantages & Disadvantages
BFS: Guaranteed to find the shortest path, but high memory usage.
DFS: Low memory usage, but may not find the shortest path.
6. Informed Search Algorithms
• A* Search
– Description: Combines the cost to reach the node and the estimated cost
to reach the goal (f(n) = g(n) + h(n)).
– Applications: Pathfinding in games, network routing.
• Greedy Best-First Search
– Description: Expands the node that appears to be closest to the goal.
– Applications: A faster, less memory-intensive alternative to A* when optimality is not required.
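As a sketch of how f(n) = g(n) + h(n) drives the search, the following A* implementation orders a priority queue by f; the weighted graph and heuristic values are illustrative assumptions. Ordering the queue by h(n) alone (ignoring g) would turn the same loop into greedy best-first search.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, step_cost), ...]}; h: heuristic estimates."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                # pushing (h[nbr], ...) instead would give greedy best-first
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# Illustrative weighted graph with an admissible heuristic (assumed values)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)],
         'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (4, ['S', 'A', 'B', 'G'])
```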
7. Heuristics in Search Algorithms
Heuristic Functions
Definition: Estimates the cost from the current node to the goal.
Importance: Guides the search process, improves efficiency.
Examples of Heuristics
Manhattan Distance: Used in grid-based pathfinding.
Euclidean Distance: Used in 2D/3D space pathfinding.
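Both example heuristics reduce to one-line functions over coordinates; a minimal sketch (the sample points are illustrative):

```python
import math

def manhattan(p, q):
    """Grid movement in 4 directions: |dx| + |dy|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance in 2D space."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(manhattan((0, 0), (3, 4)))   # 7
print(euclidean((0, 0), (3, 4)))   # 5.0
```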
8. Advanced Search Algorithms
Iterative Deepening Search (IDS)
Description: Combines the benefits of BFS and DFS by running a
depth-limited DFS with progressively larger limits (see the sketch after this slide).
Applications: Game tree exploration.
Bidirectional Search
Description: Runs two simultaneous searches, one forward from the initial
state and one backward from the goal.
Applications: Network routing, game AI.
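A minimal IDS sketch, assuming the same adjacency-list graph format as the BFS/DFS example above; bidirectional search is omitted for brevity:

```python
def depth_limited_dfs(graph, node, goal, limit, path):
    """DFS that refuses to descend more than `limit` edges."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for nbr in graph[node]:
        if nbr not in path:                  # avoid cycles on the current path
            found = depth_limited_dfs(graph, nbr, goal, limit - 1, path + [nbr])
            if found:
                return found
    return None

def ids(graph, start, goal, max_depth=25):
    """Run depth-limited DFS with limits 0, 1, 2, ... so that, like BFS,
    the first solution found lies at the shallowest possible depth."""
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(graph, start, goal, limit, [start])
        if result:
            return result
    return None
```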
9. Applications of Search Algorithms
• Game AI
– Examples: Chess, Go, puzzle games.
• Robotics
– Examples: Path planning, obstacle avoidance.
• Natural Language Processing
– Examples: Parsing, machine translation.
10. Challenges and Limitations
Scalability
Issues with large problem spaces.
Heuristic Design
Importance of good heuristics for efficiency.
Memory Usage
High memory requirements for some algorithms.
11. Informed Search Algorithms
Uses problem-specific knowledge to find solutions more efficiently.
Examples:
A* Search
Greedy Best-First Search
A* Search
Combines the cost to reach the node and the estimated cost to reach the goal
(f(n) = g(n) + h(n)).
Applications:
Pathfinding in games, network routing.
Advantages:
Completeness, optimality, efficiency.
Disadvantages:
Can be memory intensive.
12. Greedy Best-First Search
Description:
Expands the node that appears to be closest to the goal.
Applications:
A faster, less memory-intensive alternative to A* when optimality is not required.
Advantages:
Faster than A* in some cases.
Disadvantages:
Not always optimal.
13. Heuristics in Informed Search
Heuristic Functions:
Definition: Estimates the cost from the current node to the goal.
Examples: Manhattan Distance, Euclidean Distance.
Importance:
Guides the search process, improves efficiency.
Comparison of Informed and Uninformed Search
Table of Differences:
Information usage: uninformed search uses only the problem definition; informed search also exploits heuristic knowledge.
Efficiency: uninformed search may expand many irrelevant nodes; informed search prunes the space using h(n).
Applications: uninformed search suits small or poorly understood state spaces; informed search suits large spaces with good heuristics.
Complexity: both are exponential in the worst case, but good heuristics sharply reduce the effective complexity.
14. Applications of Search Algorithms
Game AI:
Examples: Chess, Go, puzzle games.
Robotics:
Examples: Path planning, obstacle avoidance.
Natural Language Processing:
Examples: Parsing, machine translation.
15. Introduction
Definition:
Heuristic Search: Search strategies that utilize heuristic functions to
guide the search.
Heuristic Functions: Functions that estimate the cost to reach the goal
from a given state.
Importance: Improves search efficiency and effectiveness.
Types of Heuristic Search Strategies
Overview:
A* Search
Greedy Best-First Search
Hill Climbing
Simulated Annealing
16. A* Search Algorithm
Description:
Combines the cost to reach the node (g(n)) and the estimated cost to
the goal (h(n)) to form f(n) = g(n) + h(n).
Applications:
Pathfinding in games, network routing.
Advantages:
Completeness, optimality.
Disadvantages:
Can be memory intensive.
17. Greedy Best-First Search
Description:
Expands the node that appears to be closest to the goal based
on heuristic h(n).
Applications:
A faster, less memory-intensive alternative to A* when optimality is not required.
Advantages:
Faster in some cases.
Disadvantages:
Not always optimal.
18. Hill Climbing
Description:
– Iteratively moves to the neighbor with the highest value, attempting
to find a peak.
Applications:
– Optimization problems.
Advantages:
– Simple to implement.
Disadvantages:
– Can get stuck in local maxima.
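A minimal steepest-ascent sketch over a one-dimensional objective (the function and step size are illustrative assumptions). It stops as soon as no neighbor improves f, which is exactly how it gets stuck on a local maximum.

```python
def hill_climb(f, x, step=0.01):
    """Steepest ascent over the two neighbors x - step and x + step."""
    while True:
        best = max((x, x - step, x + step), key=f)
        if best == x:          # no neighbor improves f: a peak (possibly local)
            return x
        x = best

# Illustrative objective with a single peak at x = 2
print(round(hill_climb(lambda x: -(x - 2) ** 2, x=0.0), 2))   # 2.0
```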
19. Simulated Annealing
Description:
Inspired by the annealing process in metallurgy, it allows
occasional moves to worse states to escape local maxima.
Applications:
Optimization problems where the solution space is complex.
Advantages:
Avoids local maxima better than hill climbing.
Disadvantages:
Requires careful tuning of parameters.
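A minimal sketch of the acceptance rule described above, on an illustrative one-dimensional objective with one local and one global peak; the cooling schedule and step range are assumed parameters and would need tuning in practice.

```python
import math, random

def simulated_annealing(f, x, temp=1.0, cooling=0.995, steps=5000):
    """Accept downhill moves with probability exp(delta / T), so the search
    can escape local maxima while T is still high."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = f(candidate) - f(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        if f(x) > f(best):
            best = x
        temp *= cooling          # cooling: gradually refuse downhill moves
    return best

# Illustrative objective: local peak near x = -1.57, global peak near x = 0.52
f = lambda x: math.sin(3 * x) - 0.1 * x * x
print(round(simulated_annealing(f, x=-1.0), 2))   # usually near 0.52
```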
20. Heuristic Functions
Functions that estimate the cost from the current state to the goal.
Importance:
Guide the search, reduce the search space.
Examples:
Manhattan Distance: Used in grid-based pathfinding.
Euclidean Distance: Used in 2D/3D space pathfinding.
Designing Heuristic Functions
Principles:
Admissibility: Heuristic never overestimates the cost.
Consistency (Monotonicity): Heuristic satisfies the triangle inequality.
Examples:
Straight-line distance in navigation problems.
Number of misplaced tiles in puzzle problems.
21. Effective Heuristics
Characteristics:
Accuracy: Closer approximation to the actual cost.
Efficiency: Low computational overhead.
Trade-offs:
Balancing accuracy and computational cost.
Applications of Heuristic Search
Game AI:
Examples: Chess, Go, puzzle games.
Robotics:
Examples: Path planning, obstacle avoidance.
Natural Language Processing:
Examples: Parsing, machine translation.
22. Challenges and Limitations
Scalability:
Issues with large problem spaces.
Heuristic Design:
Importance of good heuristics for efficiency.
Memory Usage:
High memory requirements for some algorithms.
23. Local Search and Optimization Problems
Local Search: A method for solving optimization problems by iteratively improving a candidate solution.
Optimization Problems: Problems where the goal is to find the best
solution from a set of possible solutions.
Importance: Enhances efficiency in finding optimal or near-optimal solutions.
24. Optimization Problems
Definition:
Problems that seek the best solution from a set of feasible solutions.
Examples:
Traveling Salesman Problem, Vehicle Routing Problem, Scheduling.
Formulating Optimization Problems
Components:
Objective Function: The function to be maximized or minimized.
Constraints: Conditions that the solution must satisfy.
Example:
Formulation of the Traveling Salesman Problem.
25. Applications of Local Search
Industry Applications:
Logistics: Vehicle routing, warehouse management.
Engineering: Design optimization, resource allocation.
Computer Science: Machine learning, data mining.
Challenges and Limitations
Local Search:
Stuck in local maxima, high computational cost.
Optimization Problems:
Scalability, handling multiple objectives.
26. Local Search in Continuous Space
Definition:
Optimization techniques in continuous domains.
Applications:
Engineering design, robotics, machine learning.
27. Types of Local Search Algorithms in
Continuous Space
Overview:
Gradient Descent
Simulated Annealing
Genetic Algorithms
Particle Swarm Optimization
28. Gradient Descent
Description:
Iteratively moves in the direction of the steepest decrease of a cost
function.
Applications:
Machine learning, neural networks.
Advantages:
Efficient for differentiable functions.
Disadvantages:
Can get stuck in local minima.
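A minimal sketch, assuming a differentiable one-dimensional cost whose gradient we can supply directly:

```python
def gradient_descent(grad, x, lr=0.1, iters=100):
    """Repeatedly step against the gradient of the cost function."""
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

# Illustrative cost f(x) = (x - 3)^2, whose gradient is 2(x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), x=0.0)
print(round(minimum, 4))   # ~3.0
```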
29. Simulated Annealing in Continuous Space
Description:
Allows occasional moves to worse states to escape local minima.
Applications:
Complex optimization problems.
Advantages:
Effective at finding global optima.
Disadvantages:
Requires careful parameter tuning.
30. Genetic Algorithms
Description:
Uses operations like mutation, crossover, and selection.
Applications:
Optimization, machine learning.
Advantages:
Good at exploring large, complex spaces.
Disadvantages:
Computationally expensive.
31. Particle Swarm Optimization
Description:
Simulates social behavior of birds flocking or fish
schooling.
Applications:
Optimization problems in engineering and computer
science.
Advantages:
Effective for multi-dimensional problems.
Disadvantages:
Requires tuning of multiple parameters.
32. Search with Non-Deterministic Actions in NLP
Definition:
Handling uncertainty in actions within NLP tasks.
Challenges:
Ambiguity, variability in language.
Non-Deterministic Actions in NLP
Examples:
Parsing sentences with multiple valid interpretations.
Machine translation with various possible outputs.
33. Techniques for Non-Deterministic Search
Overview:
Probabilistic Parsing
Beam Search
Reinforcement Learning
Probabilistic Parsing
Description:
Assigns probabilities to different parse trees.
Applications:
Syntax analysis, machine translation.
Advantages:
Can handle ambiguity effectively.
Disadvantages:
Computationally intensive.
34. Beam Search
Description:
Keeps track of the top k most probable sequences.
Applications:
Machine translation, speech recognition.
Advantages:
Balances search breadth and depth.
Disadvantages:
May miss the global optimum.
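A minimal decoding sketch. To stay self-contained it assumes a toy model in which each position's token probabilities are independent of the prefix; a real decoder would rescore candidates with a language model at every step.

```python
import math

def beam_search(step_probs, beam_width=2):
    """step_probs: one {token: prob} dict per position (illustrative stand-in
    for a model). Keep only the beam_width best partial sequences by
    log-probability after each step."""
    beams = [([], 0.0)]                      # (sequence, log-probability)
    for probs in step_probs:
        candidates = [(seq + [tok], score + math.log(p))
                      for seq, score in beams
                      for tok, p in probs.items()]
        # prune: everything outside the top `beam_width` is discarded,
        # which is why the global optimum can be missed
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

steps = [{'the': 0.6, 'a': 0.4},
         {'cat': 0.5, 'dog': 0.5},
         {'sat': 0.7, 'ran': 0.3}]
print(beam_search(steps))                    # ['the', 'cat', 'sat']
```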
35. Reinforcement Learning for NLP
Description:
Agents learn to make decisions by receiving rewards or
penalties.
Applications:
Dialogue systems, text generation.
Advantages:
Can learn complex behaviors.
Disadvantages:
Requires a lot of training data.
36. Challenges and Limitations
Local Search in Continuous Space:
Local minima, parameter tuning.
Non-Deterministic Search in NLP:
Ambiguity, computational complexity.
37. Definition of Partially Observable Environments
Partially Observable Environments: These are environments where the agent
does not have access to the complete state information of the environment. The
agent must make decisions based on incomplete or uncertain information.
Characteristics:
Limited Sensing: The agent can only observe a subset of the entire
environment.
Uncertainty: There is ambiguity in the information received by the agent,
leading to uncertainty about the actual state of the environment.
Example: A robot navigating a building with incomplete map information, or a
game player making moves without knowing the opponent’s strategy.
38. Importance in AI
Real-World Relevance: Many real-world problems involve incomplete
information, making partially observable environments a critical area of study
in AI.
Robotics: Robots often operate in environments where they cannot perceive
everything (e.g., occluded objects, unknown terrains).
Autonomous Vehicles: Self-driving cars must navigate and make decisions
with limited sensor data.
Healthcare: Medical diagnosis often relies on incomplete patient data,
requiring probabilistic reasoning to make accurate diagnoses.
Games and Strategy: Games like poker involve hidden information,
requiring strategic decision-making based on partial information.
39. Challenges Addressed
Handling Uncertainty:
Developing methods to make reliable decisions under uncertainty.
Adaptive Decision-Making: Creating systems that can adapt to new
information and changing environments.
Probabilistic Models: Using probabilistic models to predict and infer missing
information, improving decision-making accuracy.
40. Key Characteristics
Limited Sensing: Only a subset of the environment is observable.
Uncertainty: Ambiguity in the observations leads to uncertainty about the
true state.
Examples:
Robot Navigation: A robot navigating a building without a complete map.
Gaming: A poker player making decisions without seeing the opponent's
cards.
41. Types of Partially Observable Problems
Examples:
Robot Navigation in Unknown Terrain: Robots must navigate
without a complete map.
Playing Games with Hidden Information: Games like poker where
not all cards are visible.
Medical Diagnosis with Incomplete Patient Data: Diagnosing
diseases with limited symptoms and test results.
42. Search Strategies in Partially Observable
Environments
Overview:
Belief State Representation: Maintaining a set of possible states the
system could be in.
Contingency Planning: Planning for various possible scenarios and
outcomes.
Probabilistic Reasoning: Using probabilities to represent uncertainty and
make decisions.
43. Belief State Representation
Description:
Represents all possible states the system could be in, given the available
information.
Applications:
Robot Localization: Determining a robot's position within a known
map.
Tracking Problems: Following an object with uncertain movements.
Advantages:
Comprehensive representation of uncertainty.
Disadvantages:
Computationally expensive to maintain and update.
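A minimal sketch in the predict-then-update style: the action step takes the union of all states a non-deterministic action could produce, and the observation step filters out states inconsistent with the percept. The corridor world and the `results` and `observe` functions are illustrative assumptions.

```python
def update_belief(belief, action, percept, results, observe):
    """belief: set of possible states.
    results(s, a): set of states a non-deterministic action may lead to.
    observe(s): the percept the agent would receive in state s."""
    predicted = set().union(*(results(s, action) for s in belief))
    return {s for s in predicted if observe(s) == percept}

# Illustrative 1-D corridor: moving 'right' may slip and stay in place,
# and the agent only senses whether it has reached the wall at position 3.
states = range(4)
results = lambda s, a: {s, min(s + 1, 3)} if a == 'right' else {s}
observe = lambda s: 'wall' if s == 3 else 'open'

belief = set(states)                       # initially: could be anywhere
belief = update_belief(belief, 'right', 'open', results, observe)
print(sorted(belief))                      # [0, 1, 2] — definitely not at the wall
```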
44. Contingency Planning
Description:
Planning for various possible scenarios and outcomes
Applications:
Autonomous vehicles navigating in unpredictable environments
Advantages:
Robustness to unexpected events
Disadvantages:
Complex planning process
45. Probabilistic Reasoning
Description:
Using probabilities to represent uncertainty and make decisions
Techniques:
Bayesian Networks, Markov Decision Processes (MDPs), Partially
Observable MDPs (POMDPs)
Applications:
Medical diagnosis, speech recognition
Advantages:
Handles uncertainty well
Disadvantages:
Requires accurate probability models
46. Markov Decision Processes (MDPs)
Description:
Framework for modeling decision making with probabilistic
transitions
Components:
States, Actions, Transition Model, Reward Function
Applications:
Robotics, automated control systems
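One standard way to solve an MDP is value iteration, which repeatedly applies the Bellman update until the value estimates converge. A minimal sketch with an illustrative two-state chain:

```python
def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
    """T(s, a): list of (probability, next_state); R(s): immediate reward.
    Iterate the Bellman update until values stop changing."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: R(s) + gamma * max(sum(p * V[s2] for p, s2 in T(s, a))
                                       for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Illustrative two-state chain: 'stay' is safe, 'go' may reach the good state
states = ['low', 'high']
actions = ['stay', 'go']
R = lambda s: 1.0 if s == 'high' else 0.0
T = lambda s, a: ([(1.0, s)] if a == 'stay'
                  else [(0.8, 'high'), (0.2, 'low')])
print({s: round(v, 2) for s, v in value_iteration(states, actions, T, R).items()})
```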
47. Partially Observable MDPs (POMDPs)
Description:
Extends MDPs to handle partial observability
Components:
States, Actions, Transition Model, Observation Model, Reward
Function
Applications:
Autonomous navigation, strategic game playing
Advantages:
Provides a rigorous framework for decision making under
uncertainty
Disadvantages:
Computationally challenging
49. Online Search Agents and Unknown Environments
Online Search Agents: Systems designed to explore and solve problems in real time.
Unknown Environments: Situations where the agent has incomplete or no prior
knowledge about the surroundings.
Importance: Essential in AI for robotics, navigation, and web searches.
What are Online Search Agents?
Agents that make decisions based on current knowledge and adapt as they
explore their environment.
Key Characteristics:
Real-time Decision Making: Responds to the environment dynamically.
Incremental Exploration: Gathers information as it navigates.
Adaptation: Adjusts strategies based on new information.
50. Examples:
Autonomous Robots: Robots navigating through unknown terrains.
Web Crawlers: Programs indexing web pages.
Understanding Unknown Environments
Definition: Environments where the agent has no prior map or
knowledge.
Challenges:
Exploration: Discovering new areas.
Decision Making: Making choices with incomplete
information.
Uncertainty: Dealing with unknown obstacles and dynamic
changes.
51. Types of Search Strategies
Offline Search vs. Online Search:
Offline Search: Pre-planned search strategies based on complete knowledge.
Online Search: Real-time adaptation based on partial and evolving knowledge.
Search Algorithms:
Breadth-First Search: Explores all nodes at the present depth before moving on
to the next level.
Depth-First Search: Explores as far as possible along a branch before
backtracking.
A* Algorithm: Combines actual path cost with a heuristic estimate to find optimal solutions.
Comparison:
Offline Search: Static, requires full knowledge.
Online Search: Dynamic, adapts to new information.
52. Online Search Techniques
Real-Time A* (RTA*): A* search adapted to real-time constraints.
Learning Real-Time A* (LRTA*): Learns from previous searches to improve future
performance.
Real-Time Techniques:
Greedy Best-First Search: Chooses paths that appear most promising based on current
knowledge.
D* Algorithm: Incrementally updates paths as new information is obtained.
Benefits:
Adaptability to new information.
Effective in dynamic and unknown environments.
Limitations:
May not always find the optimal path.
Dependent on the efficiency of the algorithm.
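A minimal LRTA*-style sketch: the agent moves one step at a time, revising its stored heuristic estimate H(s) as it goes, which is what lets repeated visits (including mid-run backtracking) improve performance. The graph and initial h values are illustrative assumptions.

```python
def lrta_star(graph, h, start, goal, max_steps=100):
    """graph: {node: [(neighbor, cost), ...]}; h: initial heuristic estimates.
    The agent physically moves one step at a time, learning as it explores."""
    H = dict(h)                              # learned cost-to-goal estimates
    s, trail = start, [start]
    for _ in range(max_steps):
        if s == goal:
            return trail
        # estimated cost of stepping to each neighbor, then on to the goal
        scores = {nbr: c + H[nbr] for nbr, c in graph[s]}
        H[s] = min(scores.values())          # learning step: revise H(s)
        s = min(scores, key=scores.get)      # move to the most promising neighbor
        trail.append(s)
    return trail                             # ran out of steps

graph = {'S': [('A', 1), ('B', 1)], 'A': [('S', 1)],
         'B': [('S', 1), ('G', 1)], 'G': []}
h = {'S': 2, 'A': 1, 'B': 1, 'G': 0}
print(lrta_star(graph, h, 'S', 'G'))   # ['S', 'A', 'S', 'B', 'G']: one dead end,
                                       # then H is revised and the agent reaches G
```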
53. Agent Behavior in Unknown Environments
Exploration Strategies:
Exploration vs. Exploitation: Balancing between discovering new areas
and optimizing known paths.
Behavioral Models: Random Walk, Grid-Based Search.
Adaptation:
Reactive Agents: Simple, responds to immediate changes.
Deliberative Agents: Plans future actions based on expected outcomes.
Learning:
Reinforcement Learning: Agents learn from interactions to improve
decision-making.