COURSE CODE : 23AD3401
COURSE NAME : Artificial Intelligence
YEAR : II
SEMESTER : III
DEPARTMENT OF AI&DS
UNIT II- SEARCHING STRATEGIES
Search algorithms – Informed Search – Uninformed Search – Heuristic
search strategies – heuristic functions. Local search and
optimization problems – local search in continuous space – search
with non-deterministic actions – search in partially observable
environments – online search agents and unknown environments.
Introduction
What are Search Algorithms?
– Definition: Procedures to navigate through problem spaces to find solutions.
– Importance in AI: Fundamental for problem-solving and decision-making.
Types of Search Algorithms
Uninformed (Blind) Search
– Characteristics: No additional information about states beyond the problem
definition.
– Examples: Breadth-First Search (BFS), Depth-First Search (DFS)
Informed (Heuristic) Search
– Characteristics: Uses problem-specific knowledge to find solutions more
efficiently.
– Examples: A*, Greedy Best-First Search
Uninformed Search Algorithms
Breadth-First Search (BFS)
Description: Explores all nodes at the present depth level before
moving on to nodes at the next depth level.
Applications: Shortest path in unweighted graphs.
Depth-First Search (DFS)
Description: Explores as far as possible along each branch before
backtracking.
Applications: Solving puzzles, pathfinding in mazes.
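The BFS idea can be made concrete in a few lines of Python. This is a minimal sketch over an adjacency-list graph; the example graph and node names at the bottom are invented purely for illustration.
```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return a shortest path from start to goal in an unweighted graph."""
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                          # goal unreachable

# Hypothetical example graph:
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs_shortest_path(graph, 'A', 'D'))   # ['A', 'B', 'D']
```
Because BFS expands nodes level by level, the first path that reaches the goal is guaranteed to be a shortest one in an unweighted graph.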
BFS vs. DFS
Comparison
BFS: Completeness, optimality, time complexity, space complexity.
DFS: Completeness, optimality, time complexity, space complexity.
Advantages & Disadvantages
BFS: Guaranteed to find the shortest path, but high memory usage.
DFS: Low memory usage, but may not find the shortest path.
Informed Search Algorithms
• A* Search
– Description: Combines the cost to reach the node and the estimated cost
to reach the goal (f(n) = g(n) + h(n)).
– Applications: Pathfinding in games, network routing.
• Greedy Best-First Search
– Description: Expands the node that appears to be closest to the goal.
– Applications: Simplified versions of A*, less memory-intensive.
Heuristics in Search Algorithms
Heuristic Functions
Definition: Estimates the cost from the current node to the goal.
Importance: Guides the search process, improves efficiency.
Examples of Heuristics
Manhattan Distance: Used in grid-based pathfinding.
Euclidean Distance: Used in 2D/3D space pathfinding.
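Both heuristics reduce to simple coordinate arithmetic; the sketch below is a minimal illustration, and the sample points are made up.
```python
import math

def manhattan(a, b):
    # Sum of absolute coordinate differences: suits 4-connected grid movement.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Straight-line distance: suits free movement in 2D/3D space.
    return math.dist(a, b)

print(manhattan((1, 2), (4, 6)))   # 7
print(euclidean((1, 2), (4, 6)))   # 5.0
```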
Advanced Search Algorithms
Iterative Deepening Search (IDS)
Description: Combines the benefits of BFS and DFS by progressively
deepening the search.
Applications: Game tree exploration.
Bidirectional Search
Description: Runs two simultaneous searches, one forward from the initial
state and one backward from the goal.
Applications: Network routing, game AI.
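A minimal sketch of the Iterative Deepening Search described above, assuming the same adjacency-list graph style as the earlier BFS example.
```python
def depth_limited(graph, node, goal, limit, path):
    """DFS that stops expanding below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbor in graph.get(node, []):
        if neighbor not in path:   # avoid cycles along the current path
            found = depth_limited(graph, neighbor, goal, limit - 1, path + [neighbor])
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    # Repeated depth-limited DFS: BFS-like optimality on unit-cost problems,
    # DFS-like memory usage.
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result:
            return result
    return None
```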
Applications of Search Algorithms
• Game AI
– Examples: Chess, Go, puzzle games.
• Robotics
– Examples: Path planning, obstacle avoidance.
• Natural Language Processing
– Examples: Parsing, machine translation.
Challenges and Limitations
Scalability
Issues with large problem spaces.
Heuristic Design
Importance of good heuristics for efficiency.
Memory Usage
High memory requirements for some algorithms.
Informed Search Algorithms
Uses problem-specific knowledge to find solutions more efficiently.
Examples:
A* Search
Greedy Best-First Search
A* Search
Combines the cost to reach the node and the estimated cost to reach the goal
(f(n) = g(n) + h(n)).
Applications:
Pathfinding in games, network routing.
Advantages:
Completeness, optimality, efficiency.
Disadvantages:
Can be memory intensive.
Greedy Best-First Search
Description:
Expands the node that appears to be closest to the goal.
Applications:
Simplified versions of A*, less memory-intensive.
Advantages:
Faster than A* in some cases.
Disadvantages:
Not always optimal.
Heuristics in Informed Search
Heuristic Functions:
Definition: Estimates the cost from the current node to the goal.
Examples: Manhattan Distance, Euclidean Distance.
Importance:
Guides the search process, improves efficiency.
Comparison of Informed and Uninformed Search
Table of Differences:
Criteria: Information usage, efficiency, applications, complexity.
Applications of Search Algorithms
Game AI:
Examples: Chess, Go, puzzle games.
Robotics:
Examples: Path planning, obstacle avoidance.
Natural Language Processing:
Examples: Parsing, machine translation.
Introduction
Definition:
 Heuristic Search: Search strategies that utilize heuristic functions to
guide the search.
 Heuristic Functions: Functions that estimate the cost to reach the goal
from a given state.
 Importance: Improves search efficiency and effectiveness.
Types of Heuristic Search Strategies
Overview:
A* Search
Greedy Best-First Search
Hill Climbing
Simulated Annealing
A* Search Algorithm
Description:
Combines the cost to reach the node (g(n)) and the estimated cost to
the goal (h(n)) to form f(n) = g(n) + h(n).
Applications:
Pathfinding in games, network routing.
Advantages:
Completeness, optimality.
Disadvantages:
Can be memory intensive.
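A minimal A* sketch built around f(n) = g(n) + h(n). The neighbors, cost, and h callables are placeholders the problem must supply; they are not part of the original material.
```python
import heapq, itertools

def a_star(start, goal, neighbors, cost, h):
    """neighbors(n) yields successors; cost(n, m) is the step cost; h(n) estimates n -> goal."""
    tie = itertools.count()                       # tie-breaker so nodes are never compared
    frontier = [(h(start), next(tie), start, [start], 0)]   # (f, tie, node, path, g)
    best_g = {start: 0}
    while frontier:
        f, _, node, path, g = heapq.heappop(frontier)
        if node == goal:
            return path, g                        # path and its cost
        for nxt in neighbors(node):
            new_g = g + cost(node, nxt)
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), next(tie), nxt, path + [nxt], new_g))
    return None, float('inf')
```
With an admissible h, the first time the goal is popped from the priority queue the returned path is optimal.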
Greedy Best-First Search
Description:
Expands the node that appears to be closest to the goal based
on heuristic h(n).
Applications:
Simplified versions of A*, less memory-intensive.
Advantages:
Faster in some cases.
Disadvantages:
Not always optimal.
Hill Climbing
Description:
– Iteratively moves to the neighbor with the highest value, attempting
to find a peak.
Applications:
– Optimization problems.
Advantages:
– Simple to implement.
Disadvantages:
– Can get stuck in local maxima.
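A minimal hill-climbing sketch; the neighbors and value functions are problem-specific placeholders, not fixed interfaces from the slides.
```python
def hill_climb(start, neighbors, value, max_steps=1000):
    """Greedy local search: move to the best neighbor until no neighbor improves."""
    current = start
    for _ in range(max_steps):
        candidates = list(neighbors(current))
        if not candidates:
            break
        best = max(candidates, key=value)
        if value(best) <= value(current):   # local maximum (or plateau) reached
            break
        current = best
    return current
```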
Simulated Annealing
Description:
Inspired by the annealing process in metallurgy, it allows
occasional moves to worse states to escape local maxima.
Applications:
Optimization problems where the solution space is complex.
Advantages:
Avoids local maxima better than hill climbing.
Disadvantages:
Requires careful tuning of parameters.
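A minimal simulated-annealing sketch, maximizing to match the hill-climbing framing above. The starting temperature and cooling rate shown are illustrative defaults that would need tuning for a real problem.
```python
import math, random

def simulated_annealing(start, random_neighbor, value, t0=1.0, cooling=0.995, steps=10_000):
    """Accept worse moves with probability exp(delta / T); T decays each step."""
    current, temp = start, t0
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate               # accept improvement, or a worse move by chance
        temp *= cooling                       # geometric cooling schedule
    return current
```
Early on (high temperature) many downhill moves are accepted, which lets the search escape local maxima; as the temperature falls it behaves more and more like hill climbing.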
Heuristic Functions
Functions that estimate the cost from the current state to the goal.
Importance:
Guide the search, reduce the search space.
Examples:
Manhattan Distance: Used in grid-based pathfinding.
Euclidean Distance: Used in 2D/3D space pathfinding.
Designing Heuristic Functions
Principles:
Admissibility: Heuristic never overestimates the cost.
Consistency (Monotonicity): Heuristic satisfies the triangle inequality.
Examples:
Straight-line distance in navigation problems.
Number of misplaced tiles in puzzle problems.
Effective Heuristics
Characteristics:
Accuracy: Closer approximation to the actual cost.
Efficiency: Low computational overhead.
Trade-offs:
Balancing accuracy and computational cost.
Applications of Heuristic Search
Game AI:
Examples: Chess, Go, puzzle games.
Robotics:
Examples: Path planning, obstacle avoidance.
Natural Language Processing:
Examples: Parsing, machine translation.
Challenges and Limitations
Scalability:
Issues with large problem spaces.
Heuristic Design:
Importance of good heuristics for efficiency.
Memory Usage:
High memory requirements for some algorithms.
Local Search:
A method for solving optimization problems by iteratively improving the
solution.
Optimization Problems: Problems where the goal is to find the best
solution from a set of possible solutions.
Importance: Enhances efficiency in finding optimal or near-optimal solutions.
Optimization Problems
Definition:
Problems that seek the best solution from a set of feasible solutions.
Examples:
Traveling Salesman Problem, Vehicle Routing Problem, Scheduling.
Formulating Optimization Problems
Components:
Objective Function: The function to be maximized or minimized.
Constraints: Conditions that the solution must satisfy.
Example:
Formulation of the Traveling Salesman Problem.
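As a toy illustration of such a formulation, the sketch below encodes the TSP objective (total closed-tour length) over permutations of cities. The coordinates are invented, and brute force stands in for a real solver only because the instance is tiny; local search replaces it as the number of cities grows.
```python
import itertools, math

cities = {'A': (0, 0), 'B': (1, 5), 'C': (4, 3), 'D': (6, 1)}   # hypothetical coordinates

def tour_length(order):
    # Objective function: sum of edge lengths around the closed tour.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

best = min(itertools.permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))
```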
Applications of Local Search
Industry Applications:
Logistics: Vehicle routing, warehouse management.
Engineering: Design optimization, resource allocation.
Computer Science: Machine learning, data mining.
Challenges and Limitations
Local Search:
Stuck in local maxima, high computational cost.
Optimization Problems:
Scalability, handling multiple objectives.
Local Search in Continuous Space
Definition:
Optimization techniques in continuous domains.
Applications:
Engineering design, robotics, machine learning.
Types of Local Search Algorithms in
Continuous Space
Overview:
Gradient Descent
Simulated Annealing
Genetic Algorithms
Particle Swarm Optimization
Gradient Descent
Description:
Iteratively moves in the direction of the steepest decrease of a cost
function.
Applications:
Machine learning, neural networks.
Advantages:
Efficient for differentiable functions.
Disadvantages:
Can get stuck in local minima.
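A minimal gradient-descent sketch; the learning rate and the example cost function are illustrative choices.
```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of a differentiable cost function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)          # move toward lower cost
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))   # converges toward 3.0
```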
Simulated Annealing in Continuous Space
Description:
Allows occasional moves to worse states to escape local minima.
Applications:
Complex optimization problems.
Advantages:
Effective at finding global optima.
Disadvantages:
Requires careful parameter tuning.
Genetic Algorithms
Description:
Uses operations like mutation, crossover, and selection.
Applications:
Optimization, machine learning.
Advantages:
Good at exploring large, complex spaces.
Disadvantages:
Computationally expensive.
Particle Swarm Optimization
Description:
Simulates social behavior of birds flocking or fish
schooling.
Applications:
Optimization problems in engineering and computer
science.
Advantages:
Effective for multi-dimensional problems.
Disadvantages:
Requires tuning of multiple parameters.
Search with Non-Deterministic Actions in NLP
Definition:
Handling uncertainty in actions within NLP tasks.
Challenges:
Ambiguity, variability in language.
Non-Deterministic Actions in NLP
Examples:
Parsing sentences with multiple valid interpretations.
Machine translation with various possible outputs.
Techniques for Non-Deterministic Search
Overview:
Probabilistic Parsing
Beam Search
Reinforcement Learning
Probabilistic Parsing
Assigns probabilities to different parse trees.
Applications:
Syntax analysis, machine translation.
Advantages:
Can handle ambiguity effectively.
Disadvantages:
Computationally intensive.
Beam Search
Description:
Keeps track of the top k most probable sequences.
Applications:
Machine translation, speech recognition.
Advantages:
Balances search breadth and depth.
Disadvantages:
May miss the global optimum.
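A minimal beam-search sketch over token sequences; here expand and score are placeholders for a model's proposal and scoring functions, and k is the beam width.
```python
def beam_search(start, expand, score, k=3, max_len=10):
    """expand(seq) yields possible next tokens; score(seq) ranks partial sequences."""
    beam = [[start]]
    for _ in range(max_len):
        candidates = [seq + [tok] for seq in beam for tok in expand(seq)]
        if not candidates:
            break
        # Keep only the k highest-scoring partial sequences.
        beam = sorted(candidates, key=score, reverse=True)[:k]
    return max(beam, key=score)
```
Because everything outside the top k is discarded at each step, the globally best sequence can be pruned away, which is the limitation noted above.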
Reinforcement Learning for NLP
Description:
Agents learn to make decisions by receiving rewards or
penalties.
Applications:
Dialogue systems, text generation.
Advantages:
Can learn complex behaviors.
Disadvantages:
Requires a lot of training data.
Challenges and Limitations
Local Search in Continuous Space:
Local minima, parameter tuning.
Non-Deterministic Search in NLP:
Ambiguity, computational complexity.
Definition of Partially Observable Environments
Partially Observable Environments: These are environments where the agent
does not have access to the complete state information of the environment. The
agent must make decisions based on incomplete or uncertain information.
Characteristics:
 Limited Sensing: The agent can only observe a subset of the entire
environment.
 Uncertainty: There is ambiguity in the information received by the agent,
leading to uncertainty about the actual state of the environment.
 Example: A robot navigating a building with incomplete map information, or a
game player making moves without knowing the opponent’s strategy.
Importance in AI
Real-World Relevance: Many real-world problems involve incomplete
information, making partially observable environments a critical area of study
in AI.
Robotics: Robots often operate in environments where they cannot perceive
everything (e.g., occluded objects, unknown terrains).
Autonomous Vehicles: Self-driving cars must navigate and make decisions
with limited sensor data.
Healthcare: Medical diagnosis often relies on incomplete patient data,
requiring probabilistic reasoning to make accurate diagnoses.
Games and Strategy: Games like poker or chess have elements that are not
fully observable, requiring strategic decision-making based on partial
information.
Challenges Addressed
Handling Uncertainty:
Developing methods to make reliable decisions under uncertainty.
Adaptive Decision-Making: Creating systems that can adapt to new
information and changing environments.
Probabilistic Models: Using probabilistic models to predict and infer missing
information, improving decision-making accuracy.
Key Characteristics:
Limited Sensing: Only a subset of the environment is observable.
Uncertainty: Ambiguity in the observations leads to uncertainty about the true state.
Examples:
Robot Navigation: A robot navigating a building without a complete map.
Gaming: A poker player making decisions without seeing the opponent's cards.
Types of Partially Observable Problems
Examples:
Robot Navigation in Unknown Terrain: Robots must navigate
without a complete map.
Playing Games with Hidden Information: Games like poker where
not all cards are visible.
Medical Diagnosis with Incomplete Patient Data: Diagnosing
diseases with limited symptoms and test results.
Search Strategies in Partially Observable
Environments
Overview:
Belief State Representation: Maintaining a set of possible states the
system could be in.
Contingency Planning: Planning for various possible scenarios and
outcomes.
Probabilistic Reasoning: Using probabilities to represent uncertainty and
make decisions.
Belief State Representation
Description:
Represents all possible states the system could be in, given the available
information.
Applications:
Robot Localization: Determining a robot's position within a known
map.
Tracking Problems: Following an object with uncertain movements.
Advantages:
Comprehensive representation of uncertainty.
Disadvantages:
Computationally expensive to maintain and update.
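A minimal sketch of one belief-state update (predict with the chosen action, then filter by the observation). The transition and consistent functions are assumed problem-specific interfaces, not part of the original slides.
```python
def update_belief(belief, action, observation, transition, consistent):
    """belief: set of possible states.
    transition(s, a) -> set of successor states; consistent(s, obs) -> bool."""
    predicted = set()
    for state in belief:
        predicted |= transition(state, action)            # predict: states reachable after acting
    return {s for s in predicted if consistent(s, observation)}   # filter by what was observed
```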
Contingency Planning
Description:
Planning for various possible scenarios and outcomes
Applications:
Autonomous vehicles navigating in unpredictable environments
Advantages:
Robustness to unexpected events
Disadvantages:
Complex planning process
Probabilistic Reasoning
Description:
Using probabilities to represent uncertainty and make decisions
Techniques:
Bayesian Networks, Markov Decision Processes (MDPs), Partially
Observable MDPs (POMDPs)
Applications:
Medical diagnosis, speech recognition
Advantages:
Handles uncertainty well
Disadvantages:
Requires accurate probability models
Markov Decision Processes (MDPs)
Description:
Framework for modeling decision making with probabilistic
transitions
Components:
States, Actions, Transition Model, Reward Function
Applications:
Robotics, automated control systems
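As one concrete way to solve such an MDP, here is a minimal value-iteration sketch over a toy dictionary encoding; the encoding itself is an assumption made for illustration.
```python
def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
    """T[(s, a)] is a list of (prob, next_state); R[s] is the reward in state s."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q_values = [sum(p * (R[s] + gamma * V[s2]) for p, s2 in T[(s, a)])
                        for a in actions]
            new_v = max(q_values)                 # Bellman backup
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:                           # values have converged
            return V
```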
Partially Observable MDPs (POMDPs)
Description:
Extends MDPs to handle partial observability
Components:
States, Actions, Transition Model, Observation Model, Reward
Function
Applications:
Autonomous navigation, strategic game playing
Advantages:
Provides a rigorous framework for decision making under
uncertainty
Disadvantages:
Computationally challenging
Future Directions
Research Areas:
Improved algorithms, real-time decision making, better sensor integration
Potential Impact:
Enhanced capabilities in AI applications
Online Search Agents: Systems designed to explore and solve problems in real time.
Unknown Environments: Situations where the agent has incomplete or no prior
knowledge about the surroundings.
Importance: Essential in AI for robotics, navigation, and web searches.
What are Online Search Agents?
Agents that make decisions based on current knowledge and adapt as they
explore their environment.
Key Characteristics:
 Real-time Decision Making: Responds to the environment dynamically.
 Incremental Exploration: Gathers information as it navigates.
 Adaptation: Adjusts strategies based on new information.
Examples:
Autonomous Robots: Robots navigating through unknown terrains.
Web Crawlers: Programs indexing web pages.
Understanding Unknown Environments
Definition: Environments where the agent has no prior map or
knowledge.
Challenges:
 Exploration: Discovering new areas.
 Decision Making: Making choices with incomplete
information.
 Uncertainty: Dealing with unknown obstacles and dynamic
changes.
Types of Search Strategies
Offline Search vs. Online Search:
Offline Search: Pre-planned search strategies based on complete knowledge.
Online Search: Real-time adaptation based on partial and evolving knowledge.
Search Algorithms:
Breadth-First Search: Explores all nodes at the present depth before moving on
to the next level.
Depth-First Search: Explores as far as possible along a branch before
backtracking.
A* Algorithm: Combines pathfinding and graph traversal for optimal solutions.
Comparison:
Offline Search: Static, requires full knowledge.
Online Search: Dynamic, adapts to new information.
Online Search Techniques
Real-Time A* (RTA*): A* search with real-time constraints.
Learning Real-Time A* (LRTA*): Learns from previous searches to improve future
performance.
Real-Time Techniques:
Greedy Best-First Search: Chooses paths that appear most promising based on current
knowledge.
D* Algorithm: Incrementally updates paths as new information is obtained.
Benefits:
Adaptability to new information.
Effective in dynamic and unknown environments.
Limitations:
May not always find the optimal path.
Dependent on the efficiency of the algorithm.
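A minimal LRTA*-flavoured sketch of a single online step: the agent raises its learned heuristic estimate for the current state and moves to the most promising neighbor. The neighbors, cost, and base-heuristic h0 callables are assumed interfaces for illustration.
```python
def lrta_step(state, neighbors, cost, h_table, h0):
    """One LRTA*-style move: update h(state), then move to the lowest-cost neighbor."""
    def h(s):
        return h_table.get(s, h0(s))          # learned estimate if present, else base heuristic
    best_next = min(neighbors(state), key=lambda s: cost(state, s) + h(s))
    # Learning step: the current state is at least as far from the goal as
    # the best step cost plus the neighbor's estimate.
    h_table[state] = max(h(state), cost(state, best_next) + h(best_next))
    return best_next
```
Calling this repeatedly from the agent's current state interleaves acting and learning, which is what makes the method suitable for unknown environments.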
Agent Behavior in Unknown Environments
Exploration Strategies:
Exploration vs. Exploitation: Balancing between discovering new areas
and optimizing known paths.
Behavioral Models: Random Walk, Grid-Based Search.
Adaptation:
Reactive Agents: Simple; respond to immediate changes.
Deliberative Agents: Plan future actions based on expected outcomes.
Learning:
Reinforcement Learning: Agents learn from interactions to improve
decision-making.
