-Dr. Mehak Saini
ARTIFICIAL
INTELLIGENCE
& MACHINE
LEARNING
(IT-351)
Contents
1 Syllabus
2 What is Artificial Intelligence?
3 AI v/s ML v/s DL
4 Evolution of AI
5 Can Machines Think?
6 Machine Learning: Overview of Different Tasks
7 Physical Symbol System Hypothesis
8 AI Techniques
1 SYLLABUS
2 What is Artificial Intelligence?
Artificial Intelligence (AI) is the use of computer systems to simulate human mental processes, such as interpreting and generating language.
Artificial Intelligence (AI) is the branch of computer science that creates intelligent machines that can think and act like humans.
It is one of the most revolutionary technologies, and people are fascinated by its ability to relate to their daily lives.
AI enables machines to think, learn, and adapt in order to enhance and automate tasks across industries.
"What is AI?"
Intelligence:
"The ability to learn, understand, and think." (Oxford Dictionary)
In the context of AI, it extends to machines being able to mimic
human cognitive functions.
Artificial Intelligence (AI):
The study of how to make computers perform tasks that usually
require human intelligence.
Involves programming computers to process information, make
decisions, and learn from experience.
What is AI?
Approaches to Intelligence
1. Thinking Humanly: Cognitive Modeling
Compares AI reasoning steps to human thinking.
Involves testable theories from cognitive science.
2. Thinking Rationally: Laws of Thought
Originated from Aristotle's attempt to define "right thinking."
Uses formal logic to represent reasoning.
Obstacles: informal knowledge representation, computational complexity.
3. Acting Humanly: The Turing Test
Alan Turing's 1950 concept to assess machine intelligence.
Requires AI to exhibit human-like behavior.
Suggests key AI components: knowledge, reasoning, language understanding, learning.
4. Acting Rationally: Rational Agents
AI acts to achieve goals based on given beliefs.
More flexible and general than "laws of thought."
Suitable for scientific, human-independent development.
Definitions of AI
Thinking | Learning | Reasoning | Acting
Father of AI: John McCarthy (1956) defined AI as "the science and engineering of making intelligent machines."
Haugeland (1985): "The exciting new effort to make computers think... machines with minds."
Charniak & McDermott (1985): "The study of mental faculties through computational models."
Rich & Knight (1991): "Making computers do things at which, at the moment, people are better."
Bellman (1978): "[The automation of] activities associated with human thinking, like decision-making, problem-solving."
Winston (1992): "The study of computations that make it possible to perceive, reason, and act."
Partridge (1991): "A collection of algorithms that are adequate approximations of intractable problems."
Samuel's Self-Learning Program (1959): the first self-learning program, a checkers-playing AI, illustrating that machines could learn from experience.
FEATURES of Artificial Intelligence
Ability to learn − AI systems can improve their performance over time by learning from data and past experiences.
Logical decision making − AI systems are fed large amounts of data to understand and recognize patterns for analysis and decision making.
Adaptability − AI systems can adjust and adapt to changes in data.
Efficient automation − AI can efficiently execute repetitive tasks and processes.
Versatility − AI can be applied to a wide variety of tasks across fields such as business, automotive, and healthcare.
examples of Artificial Intelligence
Speech Recognition: understanding and processing human speech (e.g., Siri, Alexa).
Image Recognition: identifying objects, faces, or patterns in images.
Intuition and Inferencing: making decisions or predictions based on incomplete data.
Learning New Skills: adapting and improving performance over time (e.g., self-driving cars).
Decision Making: analyzing data to make informed choices.
Abstract Thinking: problem-solving in complex, abstract domains.
Task Domains of Artificial Intelligence
Mundane Tasks: Perception (vision, speech); Natural Language (understanding, generation, translation); Commonsense Reasoning; Robot Control.
Formal Tasks: Games (chess, backgammon, checkers, Go); Mathematics (geometry, logic, integral calculus, proving properties of programs).
Expert Tasks: Engineering (design, fault finding, manufacturing planning); Scientific Analysis; Medical Diagnosis; Financial Analysis.
Key Topics in Artificial Intelligence
Core AI Techniques:
Pattern Matching: identifying patterns in data for recognition and decision-making.
Logic Representation: using formal logic to represent knowledge and reason about problems.
Symbolic Processing: manipulating symbols to represent real-world entities and concepts.
Numeric Processing: handling numerical data for computations and analyses.
Problem Solving: techniques for finding solutions to complex issues.
Heuristic Search: exploring possible solutions using intelligent trial and error.
Advanced AI Methods:
Natural Language Processing (NLP): understanding and generating human language.
Knowledge Representation: structuring information to make it usable for AI applications.
Expert Systems: emulating the decision-making ability of human experts.
Robotics: designing intelligent machines that can perform physical tasks.
Machine Learning and AI Subfields:
Neural Networks: simulating human brain processes to recognize patterns and make decisions.
Learning: adapting behavior based on past experiences.
Planning: strategically determining actions to achieve specific goals.
Semantic Networks: connecting concepts in a network to represent knowledge.
Clustering and Classification: grouping and categorizing data based on similarities.
Regression: predicting outcomes based on input data.
Control: directing the behavior of machines or systems.
3 Artificial Intelligence (AI) v/s Machine Learning (ML) v/s Deep Learning (DL)
Introduction to Artificial Intelligence (AI) and Machine Learning (ML)
“The Rise of Intelligent Machines: The Evolutionary
Tale of AI”
4
The Birth of AI: The Dawn of a New Era (1950s-1960s)
•Imagine a time when computers were room-
sized behemoths, and the idea of machines that
could think like humans was the stuff of science
fiction. In the 1950s, visionary thinkers like Alan
Turing began to ask a bold question: “Can
machines think?” This was the spark that ignited
the field of Artificial Intelligence.
•The early days of AI were filled with excitement
and ambition. Researchers developed the first AI
programs, like the “General Problem Solver,”
which could solve puzzles and logical problems.
But these early systems were limited—they could
only do what they were explicitly programmed
to do. The dream of creating truly intelligent
machines seemed distant, constrained by the
primitive technology of the time.
The Machine Learning Revolution: Teaching Machines to Learn (1980s-1990s)
•The Machine Learning (ML) revolution of the 1980s and 1990s was a game-changer in the
world of artificial intelligence. Instead of programming computers with strict rules,
scientists began teaching them to learn from data—much like how humans learn from
experience. But what if machines could figure things out on their own, just by
recognizing patterns in the data they see? Could machines really get better at tasks just
by learning from mistakes and successes?
•Imagine teaching a computer to play chess—not by programming every move but by
letting it learn from millions of games. By the 1990s, this approach allowed ML-powered
systems to outperform traditional AI. For instance, ML-based spam filters became highly
effective at blocking unwanted emails by recognizing patterns.
•ML also revolutionized credit card fraud detection by identifying unusual transaction
patterns in real-time, significantly reducing fraudulent activities. In the business world,
ML began enhancing sales forecasting, enabling companies to predict future sales trends
with greater accuracy. These early successes highlighted ML’s potential to solve complex
problems, laying the groundwork for today’s advanced AI.
Deep Learning: Unlocking the Power of Data (2010s)
As we entered the 2010s, the AI landscape was transformed by a breakthrough
known as Deep Learning. If Machine Learning was like teaching a computer to
learn from data, Deep Learning was like giving it a brain with billions of neurons,
capable of processing vast amounts of information in ways that mimicked
human learning.
Computer Vision emerged as a powerful tool that allowed machines to interpret
visual data—recognizing objects, faces, and even entire scenes with incredible
accuracy. Powered by advances in Deep Learning, Computer Vision transformed
industries like security, healthcare, and retail.
Think about the facial recognition systems that unlock your phone or the AI
algorithms that help doctors diagnose diseases from medical scans—these are all
products of Computer Vision technology.
Natural Language Processing: Teaching Machines to Understand Us (2010s)
•But understanding images and playing games was just the beginning.
The real challenge was teaching machines to understand and
communicate with us in our own language. This is the goal of Natural
Language Processing (NLP), a field of AI dedicated to bridging the gap
between human language and machine understanding.
•NLP had been around for decades, but it wasn’t until the 2010s, with
the advent of models like BERT and GPT, that machines started to truly
grasp the nuances of language. These models could understand
context, interpret meaning, and even generate human-like text.
Suddenly, talking to a machine didn’t feel so different from talking to a
human, and applications like virtual assistants, chatbots, and translation
services began to thrive.
Autonomous Systems: Machines on the Move (2010s-Present)
•As AI continued to evolve, it wasn’t long before machines began to move on
their own. Autonomous systems, like self-driving cars and drones, represent
one of the most exciting frontiers of AI. These systems combine AI with
sensors, cameras, and real-time data processing to navigate the world
without human intervention.
•The development of autonomous systems posed unique challenges—how do
you teach a car to make split-second decisions on a busy road? How do you
ensure a drone can safely deliver a package? By integrating AI with advanced
sensors and sophisticated algorithms, engineers have created machines that
can operate independently, transforming industries from transportation to
logistics.
Generative AI: Unleashing Creativity (2020s-Present)
•As AI reached new heights, it began to explore the realm of
creativity—a domain once thought to be uniquely human.
Generative AI models, like GPT-3, have the ability to create original
content, from writing articles to composing music and generating
artwork.
•Generative AI has opened up a world of possibilities in art, design,
entertainment, and beyond. It’s like having a creative partner that
can brainstorm ideas, draft content, or even paint a picture. But
with this power comes responsibility—ensuring that AI-generated
content is used ethically and transparently is a challenge that the
AI community continues to tackle.
💡Final Thoughts
•As we’ve journeyed through the evolution of Artificial Intelligence,
it’s clear that what was once a distant dream has now become an
integral part of our daily lives. From early algorithms to today’s
intelligent systems, AI has transformed the way we live, work, and
interact with the world. But this is just the beginning. As AI
continues to advance, the possibilities are limitless, opening doors
to innovations we can only imagine. The future of AI is not just
about technology—it’s about redefining what’s possible.
Can Machines
Think?
5
Ever since the possibility of building
intelligent machines arose, there have
been raging debates on whether
machine intelligence is possible or not.
All kinds of arguments have been put
forth both for and against the possibility.
It was perhaps to put an end to these
arguments that Alan Turing (1950)
proposed his famous imitation game,
which we now call the Turing Test.
Turing Test Setup
The Turing Test, proposed by Alan Turing in 1950, is a method to evaluate a machine's
ability to exhibit intelligent behavior indistinguishable from that of a human. The setup
involves three participants:
Human Interrogator (Judge): A person who poses questions to both a human and a
machine.
Human Participant: A person who responds to the interrogator's questions.
Machine: A computer or AI program designed to generate responses.
The interrogator communicates with both the human and the machine through a text-
based interface, ensuring that physical presence or voice cues do not give away the
participants' identities. The test typically involves a series of conversations where the
interrogator asks various questions and tries to distinguish between the human and the
machine.
Experimental Setup and Findings of the Turing Test
Findings of the Turing Test
Early Results: Early attempts at creating machines to pass the Turing Test demonstrated
limited success. Early programs like ELIZA (1966) could mimic simple conversation
patterns, but their lack of understanding was evident with more complex queries.
Modern Applications: Recent advancements in AI, particularly Natural Language
Processing (NLP) models like GPT-3, have come closer to passing the Turing Test by
generating human-like responses in diverse contexts. However, these models still
struggle with deep understanding, common sense, and contextually nuanced
conversations.
Overall Findings: The Turing Test has become a benchmark for testing machine
intelligence. While many AI programs can handle simple conversational tasks
convincingly, true understanding, self-awareness, and context management are still
challenges that prevent machines from fully passing the test.
However, it is quite obvious that relying on human impressions
based on interaction in natural language is not the best way of
determining whether a machine is intelligent or not.
With more and more machines becoming good at generating text
rivalling that produced by humans, a need is being felt for something
that delves deeper and tests whether the machine is actually
reasoning when answering questions.
Hector Levesque and colleagues have proposed a new test of
intelligence which they call the Winograd Schema Challenge, after
Terry Winograd who first suggested it (Levesque et al., 2012;
Levesque, 2017). The idea is that the test cannot be passed merely by
exploiting statistical patterns in large text corpora or by looking up the
internet; answering correctly requires common sense knowledge about the world.
The following is the example attributed to Winograd (1972).
The town councillors refused to give the angry demonstrators a permit because they feared violence. Who
feared violence?
(a) The town councillors
(b) The angry demonstrators
The town councillors refused to give the angry demonstrators a permit because they advocated violence.
Who advocated violence?
(a) The town councillors
(b) The angry demonstrators
In both cases, two options are given to the subject, who has to choose one of the two.
The authors report that the Winograd Schema Test was preceded by a pronoun disambiguation test in a
single sentence, with examples chosen from naturally occurring text. Only those programs that did well in
the first test were allowed to advance to the Winograd Schema Test.
The important thing is that such problems can be solved only if the subject is well versed with sufficient
common sense knowledge about the world and also the structure of language.
A question one might ask is why should a test of intelligence be language based?
Could there be other indicators of intelligence?
AARON, the drawing artist created by Harold Cohen (1928–2016) .
DALL-E, Midjourney, and Stable Diffusion: text-to-image AI systems (Cohen, 2016).
Turing Test for musical intelligence proposed by Erik Belgum and colleagues (Belgum et al., 1989).
EMI (Experiments in Musical Intelligence) : a computer program that composes music in the style of
various classical composers (Cope, 2004)
IBM’s Watson: a spectacular win in the quiz show Jeopardy! (2011).
IBM’s chef Watson: a computationally creative computer that can automatically design and discover
culinary recipes that are flavorful, healthy, and novel.
DeepMind’s AlphaGo: beat the reigning world champion Lee Sedol in the game of Go (Silver et al., 2016).
IBM’s Deep Blue program beat the then world champion Garry Kasparov in the game of chess in 1997.
In the summer of 1956, John McCarthy and Marvin Minsky had organized the Dartmouth Conference with
the following stated goal – ‘The study is to proceed on the basis of the conjecture that every aspect of
learning or any other feature of intelligence can in principle be so precisely described that a machine can
be made to simulate it’ (McCorduck, 2004).
While the Turing Test serves as a measure of how closely machines can imitate human behavior, the Chinese Room
Experiment critiques the idea that such imitation equates to real understanding.
Chinese Room Experiment: Setup and Findings
Chinese Room Setup
Proposed by philosopher John Searle in 1980, the Chinese Room Experiment
challenges the notion of computers possessing true understanding or consciousness.
The setup involves:
Room: A closed room containing a person (who does not understand Chinese) and
a set of instructions.
Person (Human Processor): Inside the room, the person follows a comprehensive
set of syntactic rules (instructions) for manipulating Chinese symbols.
Input and Output: Chinese characters (input) are passed into the room, and the
person uses the rule book to arrange symbols and produce the appropriate output
(responses in Chinese) without understanding their meaning.
The person inside the room can generate responses that seem coherent to Chinese-
speaking individuals outside, even though they have no understanding of the
language.
Findings of the Chinese Room Experiment
Understanding vs. Processing: The experiment demonstrates that while a machine
(or a human using rule-based instructions) can simulate an understanding of
language, it does not imply that it actually understands the content. This supports
Searle's argument that computers process symbols syntactically without any
semantic comprehension.
Implications for AI: The experiment raises questions about the nature of artificial
intelligence. Even if a machine passes the Turing Test, it does not mean the
machine has a true "mind" or consciousness. It suggests that AI may excel at
simulating human-like responses without possessing genuine understanding or
intentionality.
In summary, while the Turing Test serves as a measure
of how closely machines can imitate human behavior,
the Chinese Room Experiment critiques the idea that
such imitation equates to real understanding.
Both have significantly influenced the philosophical
and technical discussions around AI and the nature of
machine intelligence.
6 Machine Learning: Overview of Different Tasks
what is Machine learning?
Definition 1 (an older/informal one):
Arthur Samuel (1959) described Machine Learning as:
"the field of study that gives computers the ability to learn without being explicitly programmed."
Definition 2 (a modern one):
Tom Mitchell (1998):
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
Example: playing checkers.
E = the experience of playing many games of checkers.
T = the task of playing checkers.
P = the probability that the program will win the next game.
DATA REPRESENTATION IN Machine learning
Experience in the form of raw data is a source of learning in many applications (human
knowledge in linguistic form is an additional learning source).
This raw data is pre-processed w.r.t. the class of tasks and this leads to an information-
system that represents the knowledge in the raw data (input to the learning system shown
in the following Figure).
Figure: A block diagram
representation of a
learning machine.
The information-system data may be stored in a data warehouse, which provides
integrated, consistent, and cleaned data to machine learning algorithms.
However, for many applications, instead of analysing data accessed online from data
warehouses, data may be available in a flat file, which is simply a data table.
Here, the information system is a form of data table D, as shown below, where each row
represents a measurement/observation and each column denotes the value of an attribute of
the information system for all measurements/observations.
Table: A Template for Data Table
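As a minimal sketch of such a flat-file data table (the attribute names and values here are invented for illustration, not taken from the slides), one could write:

```python
# A toy flat-file data table: each row is one observation, each column
# one attribute. The attribute names and values are invented examples.
attributes = ["height_cm", "weight_kg", "label"]
data_table = [
    [160, 55, "small"],
    [175, 70, "medium"],
    [190, 95, "large"],
]

def column(table, attrs, name):
    """Collect the value of one attribute across all observations."""
    idx = attrs.index(name)
    return [row[idx] for row in table]

print(column(data_table, attributes, "weight_kg"))  # [55, 70, 95]
```

A learning algorithm would then treat each row as one measurement and each column as one attribute, exactly as in the data table D described above.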
SUPERVISED Machine learning
Algorithm learns to map input data (features) to known output data (labels).
Training data is labeled: each data point has a corresponding output label that the algorithm is
trying to predict.
Tasks:
(i) Classification: the output variable is a categorical variable (response is qualitative)
(ii) Regression: the output variable is a continuous variable (response is quantitative)
Example: Classification task of predicting fruits
Image classification: Identify objects or features in images, such as classifying whether an
image contains a cat or a dog.
Sentiment analysis: Determine the sentiment or opinion expressed in text data, such as
classifying whether a product review is positive or negative.
Fraud detection: Identify fraudulent transactions in financial data, such as predicting
whether a credit card transaction is likely to be fraudulent.
Predictive maintenance: Predict when maintenance is needed for equipment based on
sensor data, such as predicting when a machine is likely to fail.
Personalized recommendations: Recommend products or content to users based on their
past behavior and preferences, such as suggesting movies or TV shows to watch.
APPLICATIONS OF SUPERVISED Machine learning
CHALLENGES IN SUPERVISED Machine learning
Insufficient or biased data: Lack of data or biased data can lead to poor model performance or
inaccurate predictions.
Overfitting: When the model is too complex, it can fit the training data too closely, leading to
poor generalization performance on new data.
Feature engineering: Choosing and engineering the right set of features can be a time-
consuming and subjective process that requires domain expertise.
Model selection: Choosing the right model and hyperparameters for a given problem can be
challenging and require extensive experimentation and tuning.
Interpretability: Understanding why a model makes certain predictions or decisions can be
difficult, especially for complex models like deep neural networks.
SUPERVISED/ DIRECTED Machine learning
Classification
Definition: Classification is a supervised learning task where the objective is to categorize input
data into predefined labels or classes.
How it Works: The model learns from labeled training data, which consists of input-output
pairs. The goal is to learn the mapping between input features and target labels to classify new,
unseen data correctly.
Examples:
Email spam detection (spam or not spam).
Image recognition (identifying objects like cats, dogs, cars, etc.).
Sentiment analysis (positive, negative, or neutral sentiments in text).
Common Algorithms: Decision Trees, Support Vector Machines (SVM), Neural Networks, Naive
Bayes, K-Nearest Neighbors (KNN), Random Forests.
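The classification idea above can be sketched with a tiny K-Nearest Neighbors classifier written from scratch; the feature vectors and labels below are invented for illustration:

```python
import math
from collections import Counter

# Labeled training data: (feature vector, class label) pairs.
# The features and labels are made up for illustration.
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((4.0, 4.2), "dog"),
    ((4.5, 4.0), "dog"),
]

def knn_classify(x, train, k=3):
    """Predict x's label by majority vote among its k nearest neighbours."""
    nearest = sorted(train, key=lambda pair: math.dist(x, pair[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_classify((1.1, 0.9), train))  # cat
```

The model "learns" simply by storing the labeled examples; new points are classified by the labels of their closest neighbours in feature space.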
Regression
Definition: Regression is a supervised learning task aimed at predicting a continuous
numerical value based on input features.
How it Works: The model learns the relationship between input variables and the continuous
output variable by finding the best-fitting function (often a line or curve) to minimize
prediction error.
Examples:
Predicting house prices based on features like size, location, and number of bedrooms.
Forecasting stock prices.
Estimating a person's age from their photograph.
Common Algorithms: Linear Regression, Polynomial Regression, Support Vector Regression
(SVR), Decision Trees (for regression), Neural Networks (e.g., feedforward networks).
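A minimal illustration of regression is ordinary least squares for a single input feature, fit with the closed-form slope/intercept formulas; the (x, y) pairs below are invented (think house size against price):

```python
# Ordinary least squares for one input feature, using the closed-form
# slope and intercept formulas. The data points are invented examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # 1.99 0.05
```

The fitted line y ≈ 1.99x + 0.05 is the "best-fitting function" mentioned above: predictions for new inputs are read off the line.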
UNSUPERVISED Machine learning
Algorithm learns patterns and structures in the input data (features) without being given explicit
labels / targets.
Training data is unlabeled.
Tasks:
(i) Clustering: group similar data points together
(ii) Dimensionality reduction: the output represents the input with reduced dimensions
Example: Clustering task of grouping fruits
APPLICATIONS OF UNSUPERVISED Machine learning
Anomaly detection: Identify unusual or rare events or patterns in data, such as detecting
fraudulent transactions.
Clustering: Group similar data points together based on similarity or distance, such as
clustering customers based on their purchasing behavior.
Dimensionality reduction: Reduce the number of features or variables in high-
dimensional data while preserving as much information as possible, such as reducing the
number of dimensions in an image or text dataset.
Topic modeling: Discover underlying topics or themes in a collection of documents or text
data, such as identifying topics in customer reviews or news articles.
CHALLENGES IN UNSUPERVISED Machine learning
Evaluation: There are no clear evaluation metrics for unsupervised learning, making it
difficult to objectively compare different models or algorithms.
Interpretability: Understanding the meaning or interpretation of the patterns or
clusters discovered by unsupervised learning can be challenging, especially for
complex models or high-dimensional data.
Scalability: Some unsupervised learning algorithms can be computationally expensive
and difficult to scale to large datasets or high-dimensional data.
Determining the number of clusters: Determining the optimal number of clusters in a
clustering algorithm can be challenging and subjective.
Data preprocessing: Preprocessing and cleaning the data can be time-consuming and
require domain expertise, especially for complex or unstructured data like images or
text.
UNSUPERVISED/ UNDIRECTED Machine learning
CLUSTERING
Definition: Clustering is an unsupervised learning task where the goal is to group a
set of data points into clusters based on their similarities.
How it Works: The algorithm finds natural groupings in the data by measuring
similarity (e.g., distance) between data points. It assigns points to clusters such that
points in the same cluster are more similar to each other than to those in other
clusters.
Examples:
Customer segmentation in marketing (grouping customers based on purchasing
behavior).
Document categorization (organizing news articles into topics).
Image segmentation (grouping pixels in an image into regions).
Common Algorithms: K-Means, Hierarchical Clustering, DBSCAN (Density-Based
Spatial Clustering of Applications with Noise), Gaussian Mixture Models.
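The assignment/update loop at the heart of K-Means can be sketched as follows; the points and the fixed initial centroids are invented for illustration (a real implementation would also choose initial centroids and check for convergence):

```python
import math

# A bare-bones K-Means on 2-D points with fixed initial centroids.
# All coordinates below are invented examples.
points = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8.5, 9.5)]

def kmeans(points, centroids, iters=10):
    """Alternate assignment and update steps for a fixed number of iterations."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)), key=lambda j: math.dist(p, centroids[j]))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else cen
            for cl, cen in zip(clusters, centroids)
        ]
    return centroids, clusters

centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print(sorted(len(c) for c in clusters))  # [3, 3]
```

Note that no labels are ever supplied: the two groups emerge purely from the similarity (distance) structure of the data, which is what makes this unsupervised.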
Reinforcement Learning (RL)
7 Physical Symbol System HYPOTHESIS (PSSH): THE UNDERLYING ASSUMPTION
The Physical Symbol System Hypothesis is a theory in AI proposed by
researchers Allen Newell and Herbert A. Simon in 1976.
It states that a physical symbol system—a system that uses symbols to
represent knowledge and manipulate those symbols to produce actions—has
the necessary and sufficient means for intelligent behavior.
What Is a Physical Symbol System?
It is a system that consists of symbols (like words, numbers, or other
representations) and a set of rules (operations) for manipulating those
symbols.
Computers are an example of physical symbol systems. They use symbols
(data) and follow rules (programs) to process information.
Key Points of the Hypothesis:
Necessary for Intelligence: The hypothesis claims that to
exhibit intelligent behavior (like problem-solving or reasoning),
a system must operate using symbols. Essentially, human-like
intelligence requires symbols to represent objects, actions, and
ideas.
Sufficient for Intelligence: If a system has the ability to store,
manipulate, and use symbols according to certain rules, it can
achieve any level of intelligence. This means that a properly
programmed computer, in theory, could display any form of
human-like intelligence.
Why Is It Important in AI?
The Physical Symbol System Hypothesis laid the foundation for symbolic AI, which
includes methods like logic, rule-based systems, and expert systems. These
approaches rely on representing knowledge in symbols (like facts, rules, and
concepts) and using algorithms to manipulate them.
Example
Chess-Playing AI: In a chess game, pieces (symbols) represent different roles
(knight, rook, etc.), and the rules of chess define how these symbols can be
manipulated (moved). A computer playing chess uses these symbols and rules to
make intelligent moves, embodying the physical symbol system hypothesis.
Thus, the hypothesis suggests that the use of symbols and rules to process
information is the key to creating an intelligent system, much like how humans
think and reason.
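As a hedged illustration of the hypothesis, a toy physical symbol system can be built from string symbols and forward-chaining rules; the specific facts and rules below are invented, not taken from the slides:

```python
# A toy physical symbol system: facts are plain string symbols and
# rules derive new symbols from sets of existing ones. The facts and
# rules here are invented examples.
rules = [
    ({"raining"}, "ground_wet"),
    ({"ground_wet"}, "slippery"),
]

def forward_chain(facts, rules):
    """Apply rules until no new symbol can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"raining"}, rules)))
# ['ground_wet', 'raining', 'slippery']
```

Nothing here "understands" rain or wetness; the apparently intelligent inference is produced entirely by rule-governed symbol manipulation, which is exactly what the hypothesis claims is sufficient.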
AI TECHNIQUES
8
What is an AI Technique?
Exploring the Basics of AI Techniques and Knowledge Representation
Artificial Intelligence Problems
AI problems cover a wide range but share one key
aspect: they are hard.
Techniques are needed to solve these problems.
Question: Are there specific techniques suited for
various AI problems?
AI Techniques Require Knowledge
AI techniques rely heavily on knowledge.
Key Properties of Knowledge:
(i) Voluminous: Contains a large amount of information.
(ii) Hard to characterize: Not easy to define accurately.
(iii) Constantly changing: Needs frequent updates.
(iv) Organized for use: Different from raw data as it’s structured for
practical applications.
Defining an AI Technique
An AI technique is a method that uses knowledge,
represented in such a way that:
It captures generalizations (groups similar situations
together).
It can be understood by people who provide the knowledge.
Essential Properties of an AI Technique
Easily modifiable: Can be updated to correct errors or
reflect changes.
Versatile: Usable in many situations even if not perfectly
accurate.
Narrow down possibilities: Helps overcome information
overload by narrowing options.
AI Techniques and Problem Independence
AI techniques are designed to be independent of the
problems they solve.
They can sometimes solve non-AI problems as well.
Objective: Characterize AI techniques to make them
as problem-independent as possible.
Summary of AI Techniques
AI techniques are knowledge-based methods designed
to solve complex problems.
Knowledge in AI must be well-structured, adaptable,
and provide generalizations.
The goal is to make AI techniques flexible and usable
across various problem domains.
AI techniques are problem independent.
In order to try to characterize AI techniques in as problem-independent a way as
possible, let's look at two very different problems and a series of approaches for
solving each of them.
Characterizing AI Techniques
with Examples
To understand AI techniques, it's helpful to apply
them to different types of problems.
By exploring how AI can handle both simple (Tic-
Tac-Toe) and complex (questioning) tasks, we gain
insight into the flexibility and problem-solving
capacity of AI methods.
The aim is to characterize AI techniques as being as
problem-independent as possible.
What Are We Trying to Infer?
For Tic-Tac-Toe:
Understand how AI uses strategies, rules, and predictions to solve structured,
game-like problems.
Demonstrates how knowledge can be represented and used to achieve an
optimal outcome in a well-defined scenario.
For Questioning:
Explore how AI handles unstructured, dynamic, and less predictable scenarios
(like natural language questioning).
Shows the challenges of knowledge representation, reasoning, and adaptability
in complex situations.
Main Objective:
To identify how AI techniques adapt to various problem types and whether
these techniques can be generalized for broader application.
A Comparison of Four Programs to Play Tic-Tac-Toe
TIC-TAC-TOE Programs: From Simple to Advanced AI Techniques
Overview of the Tic-Tac-Toe Programs
Four programs, increasing in complexity and intelligence:
Program 1: Simple Table Lookup
Program 2: Rule-Based Strategy
Program 2': Magic Square Technique
Program 3: Minimax Algorithm
Data Structures:
Board: The game board is represented as a list (vector) with nine elements. Each
element corresponds to one of the nine positions on the board, numbered 1 to 9.
Values:
(i) 0 means the position is empty.
(ii) 1 means the position contains an "X".
(iii) 2 means the position contains an "O".
Movetable: A large list (vector) with 19,683 elements (since each of the nine board
positions can hold 0, 1, or 2, there are 3^9 = 19,683 possible board configurations).
Each entry in this list represents a possible game state and what the board should
look like after a move.
Program 1: Simple Table Lookup
How the Program Works?
Convert the Board: The program views the current board as a
"ternary" (base three) number and converts it to a regular
decimal number.
Use the Movetable: The decimal number is used to find the
corresponding entry in the Movetable, which tells the program
the next board state after making a move.
Update the Board: The board is updated based on this
information.
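The board-to-number conversion and table lookup can be sketched in Python (the `movetable` contents are hypothetical; only the indexing scheme comes from the description above):

```python
def board_to_index(board):
    """Treat the 9-element board (0 = empty, 1 = X, 2 = O) as a
    base-three number, most significant digit first."""
    index = 0
    for cell in board:
        index = index * 3 + cell
    return index

def next_board(board, movetable):
    """Look up the successor position in the precomputed movetable."""
    return movetable[board_to_index(board)]
```

An empty board maps to index 0, and a board with a lone "X" in square 1 maps to 3^8 = 6561, so every one of the 19,683 positions gets a distinct table slot.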
Efficiency:
The program is fast because it directly looks up the next move using a pre-
calculated list (the Movetable).
Limitations:
Despite its efficiency, the program has several disadvantages:
(i) Memory Intensive:
Storing all possible board positions and moves requires a lot of memory.
(ii) Manual Work: It requires a lot of manual effort to fill in all possible moves
into the Movetable.
(iii) Error-Prone: Manually determining the correct moves for every possible
board state can lead to mistakes.
(iv) Lacks Flexibility: If you wanted to extend the game (e.g., to a 3D version),
you would need to start over, and the number of possible board states would
be enormous (3^27, over seven trillion), making it impractical to store.
Conclusion
This approach does not meet the qualities of a good AI
technique, which should be flexible, generalizable, and
easy to modify or extend. Hence, it is suggested to explore
better methods for representing and solving the game.
This program is an improvement over the first one,
aiming to be more flexible and use less memory.
Data Structures:
Board: Similar to Program 1, it uses a nine-element list
(vector) to represent the Tic-Tac-Toe board. However, it
now stores:
(i) 2 for a blank square.
(ii) 3 for an "X".
(iii) 5 for an "O".
Turn: An integer that tracks which move of the game is
being played (1 for the first move, 9 for the last).
Program 2: Rule-Based Strategy
The Algorithm:
The main algorithm uses three smaller subroutines (mini-programs):
1. Make2: Checks if the center of the board is blank. If it is, the program places an "O" there. If not, it selects one of the blank non-corner squares (2, 4, 6, or 8).
2. Posswin(p): Checks whether the given player (p) can win on their next move by examining all rows, columns, and diagonals of the board. It multiplies the values of the squares in each line. If the product is 18 (3 × 3 × 2), player "X" can win, since the values indicate two "X"s (3) and one empty square (2); similarly, a product of 50 (5 × 5 × 2) indicates that player "O" can win. This function helps the program both win and block the opponent's potential win.
3. Go(n): Makes a move in the specified square (n), updating the board for the next turn and incrementing the turn counter.
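A rough Python sketch of Posswin using the 3/5/2 encoding above (the ten-element board with index 0 unused, and returning 0 for "no winning square", are my assumptions):

```python
# Rows, columns, and diagonals as triples of square numbers 1-9.
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
         (1, 4, 7), (2, 5, 8), (3, 6, 9),
         (1, 5, 9), (3, 5, 7)]

def posswin(board, p):
    """board: ten-element list (index 0 unused); 2 = blank, 3 = X, 5 = O.
    Return the square where player p (3 or 5) can win next move, or 0."""
    target = p * p * 2  # 18 for X, 50 for O
    for line in LINES:
        a, b, c = line
        if board[a] * board[b] * board[c] == target:
            # Two squares hold p and one is blank: the blank square wins.
            for sq in line:
                if board[sq] == 2:
                    return sq
    return 0
```

The product test works because 3, 5, and 2 are distinct primes, so each product value identifies the line's contents unambiguously.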
Playing Strategy:
The algorithm has a pre-defined strategy for each move:
Turn 1: Makes the first move in the upper left corner.
Turn 2: If the center is blank, it occupies it; otherwise, it chooses the upper left
corner.
Turn 3: If the bottom right corner is blank, it occupies it; otherwise, it chooses the
center.
Turn 4 to Turn 9: The program checks if it can win or block the opponent using
the Posswin function. If not possible, it makes strategic moves like occupying
empty squares or setting up a "fork" (a move that creates multiple win
opportunities).
Efficiency: This program is not as fast as the first one because it checks multiple
conditions before making a move. However, it uses much less memory since it
doesn't store every possible game state like the first program.
Clarity: The strategy is easier to understand and modify if needed. However, the
entire strategy is still manually programmed, meaning if there are bugs, the
program won't play well.
Limitations: The program's knowledge is specific to regular 2D Tic-Tac-Toe and
can't be generalized to a different game format, such as a 3D version.
Conclusion
While this program improves in terms of memory usage and
strategy understanding, it still has limitations. It cannot adapt or
generalize its strategy to different versions of Tic-Tac-Toe or
other games, which is a key aspect of more advanced AI
techniques.
Program 2': Magic Square
This program is very similar to Program 2 but changes how the Tic-Tac-Toe
board is represented.
Board Representation:
The board is still a nine-element list (vector), but the positions are numbered according to a magic square:
8 1 6
3 5 7
4 9 2
This arrangement forms a magic square: the sum of the numbers in every row, column, and diagonal is 15.
How It Works?
1. When checking if a player has a winning combination, the program uses the properties of the magic square. It keeps a list of all the squares each player has occupied.
2. For each pair of squares occupied by a player, it calculates the difference between 15 and the sum of those two squares. Example: if a player occupies squares 8 and 3, their sum is 11; the difference from 15 is 4, so if the player can also occupy square 4, they win.
3. The program ignores pairs whose difference does not point to a usable square (the difference is negative, greater than 9, or the square is already taken).
4. This method simplifies the win-checking process by reducing the number of checks needed. If a player can occupy the square indicated by the difference, they have a winning move.
Efficiency: This program is more efficient than Program 2 because it uses
the unique properties of the magic square to limit the number of moves it
needs to check for a win.
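The difference-from-15 test can be sketched in Python (the set-based bookkeeping and the function name are my assumptions):

```python
def winning_square(occupied, taken):
    """occupied: magic-square numbers held by one player.
    taken: all numbers currently held by either player.
    Return a free square completing 15 with some pair, else None."""
    squares = sorted(occupied)
    for i in range(len(squares)):
        for j in range(i + 1, len(squares)):
            need = 15 - (squares[i] + squares[j])
            # Ignore differences that are not playable squares.
            if 1 <= need <= 9 and need not in taken:
                return need
    return None
```

With squares 8 and 3 occupied (sum 11), the function returns 4, matching the example above. No separate collinearity check is needed: in the 3×3 magic square, every triple of distinct numbers from 1 to 9 summing to 15 is one of the eight lines.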
Why Is It Interesting? The program highlights how different ways of
representing the same problem (in this case, the board layout) can greatly
impact the efficiency of a solution. Using a magic square makes it quicker
to identify potential winning moves.
Human vs. Computer Approach: Humans and computers approach this problem
differently: humans often scan the board visually, while computers process
numbers systematically. The magic square approach might seem less intuitive
to humans but is quite efficient for computers.
Key Takeaway
The main idea is that choosing the right way to represent a problem
(like using a magic square) can make the AI more efficient at solving
it. This is an important concept in AI programming, as it
demonstrates how certain representations can simplify complex
checks and decisions.
Program 3: Minimax Algorithm
Program 3 uses a method called the minimax algorithm, which is a standard
technique in AI for decision-making. Unlike the previous programs that relied
on predefined strategies or simple checks, this program evaluates future
moves to make the best decision.
Data Structures:
BoardPosition: This structure contains:
1. A nine-element list representing the board.
2. A list of all possible board positions that could result from the next move.
3. A number representing an estimate of how likely each board position is to lead to a win for the player.
The Algorithm (Minimax):
The program looks ahead at possible future moves to decide the best one. It does this in
the following steps:
Check for Immediate Win: It checks if any of the potential moves can immediately lead
to a win. If so, that move is given the highest possible rating.
Simulate Opponent's Moves: If there is no immediate win, the program considers all
possible moves the opponent could make next. For each of these moves, it recursively
evaluates how unfavorable these moves would be for itself (the player). The worst
possible outcome is assigned to the current move under consideration.
Choose the Best Move: After evaluating all possible moves, the program selects the one
with the highest rating (i.e., the one that maximizes its chances of winning while
minimizing the opponent’s chances).
This process of evaluating future moves is called the minimax procedure. The algorithm
aims to maximize the player’s chance of winning while assuming the opponent will also
play their best to minimize this chance.
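The procedure can be sketched in Python for the 0/1/2 board encoding used by Program 1 (the ±1/0 scoring and helper names are my assumptions; player 1 is the maximizer):

```python
# Squares are indexed 0-8; 0 = empty, 1 = X (maximizer), 2 = O (minimizer).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 1 or 2 if that player has three in a row, else 0."""
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def minimax(board, player):
    """Return (score, move) with score from player 1's perspective:
    +1 a forced win for 1, -1 a forced win for 2, 0 a draw.
    `player` is whoever moves next."""
    w = winner(board)
    if w == 1:
        return 1, None
    if w == 2:
        return -1, None
    moves = [i for i, v in enumerate(board) if v == 0]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                       # try the move
        score, _ = minimax(board, 3 - player)   # opponent replies optimally
        board[m] = 0                            # undo
        if best is None or \
           (player == 1 and score > best[0]) or \
           (player == 2 and score < best[0]):
            best = (score, m)
    return best
```

From the position X X _ / O O _ / _ _ _ with player 1 to move, the search returns (1, 2): completing the top row wins immediately. From an empty board it returns score 0, reflecting that Tic-Tac-Toe is a draw under perfect play.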
Comments on the Program
Time-Intensive: This program takes much more time to run than the previous
programs because it explores a "tree" of all possible moves and outcomes before
making a decision. This means it has to consider many different sequences of
moves to find the best path.
More Flexible: Unlike the previous programs, this approach can handle more
complex games than Tic-Tac-Toe. The exhaustive, pre-defined strategies in earlier
programs would fail in such scenarios, but minimax can be extended to larger,
more complicated games.
Augmentation: The minimax algorithm can be improved with additional
knowledge about the game. For example, it might consider only a subset of
possible moves, reducing the time required to make decisions.
AI Technique: This is a true example of an AI technique. For small problems like Tic-
Tac-Toe, it may seem less efficient than earlier methods, but it can be adapted for
complex situations where predefined strategies would not work.
Key Takeaway
Program 3 uses the minimax algorithm, which is a fundamental concept in AI. By
evaluating possible future outcomes, the program makes decisions that maximize
the chance of success, showing how advanced AI can plan multiple steps ahead.
Although this approach is more computationally intensive, it is far more adaptable
and powerful for a wide range of games and problems.
Summary and Comparison
Table: Tic-Tac-Toe Program Comparison
Introduction to Question
Answering in AI
Involves reading an English text and
answering questions about it in English.
Different from structured tasks like Tic-
Tac-Toe; the problem is harder to define
formally.
Example input text: "Russia massed
troops on the Czech border."
Requires models to interpret the text
and provide contextually correct
answers.
How AI Models Influence Answers
Example input text: "Russia massed troops on the Czech border."
Dialogue 1 (conservative model):
Q: Why did Russia do this?
A: To take political control of Czechoslovakia.
Q: What should the United States do?
A: Intervene militarily.
Dialogue 2 (liberal model):
Q: Why did Russia do this?
A: To increase political influence over Czechoslovakia.
Q: What should the United States do?
A: Denounce the action in the UN.
Key Point: The AI's answers change based on the model's perspective.
Defining "Correct" Answers in AI: The Challenge of Correct Answers
Defining a "correct" answer is subjective and relies on the model's
beliefs.
AI answers are computed based on predefined models, making
the idea of a "correct" response complex.
Example Text: "Mary went shopping for a new coat. She found a
red one..."
Sample Questions:
What did Mary go shopping for?
What did Mary find that she liked?
Key Takeaways
Question answering in AI is more complex than structured problem-
solving.
Answers depend heavily on the AI model used.
This exploration shows how AI techniques need to adapt to different
types of problems and the subjectivity involved.
Program 1 – Pattern Matching for Question Answering
(Literal Text Matching)
Data Structures: Uses predefined templates and patterns to match
questions to text fragments.
Algorithm:
1. Matches question patterns to the input text.
2. Substitutes terms to generate responses.
Example: Q1: What did Mary go shopping for? → "A new coat."
Comments: Simple and fragile; fails when questions deviate from the expected forms.
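A toy illustration of this fragility in Python (the template set and matching scheme are invented for illustration, not the original program's):

```python
import re

TEXT = "Mary went shopping for a new coat. She found a red one."

# One hypothetical template: a question form paired with a text
# pattern whose capture group is the answer.
TEMPLATES = [
    (r"What did (\w+) go shopping for\?",
     "{0} went shopping for (a [\\w ]+)\\."),
]

def answer(question, text=TEXT):
    for q_pat, t_pat in TEMPLATES:
        m = re.fullmatch(q_pat, question)
        if m:
            hit = re.search(t_pat.format(m.group(1)), text)
            if hit:
                return hit.group(1)
    return None  # question deviates from the expected form
```

Asking "What did Mary go shopping for?" returns "a new coat", but the equivalent "What was Mary shopping for?" returns nothing, because no literal template matches: the program manipulates text without understanding it.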
Program 2 – Structured Text Representation (Structured Text for
Better Understanding)
Data Structures:
1. EnglishKnow: Basic understanding of grammar and semantics.
2. StructuredText: Extracts essential information from the input text.
Algorithm: Converts input and questions into structured forms to
identify correct answers.
Example: Q2: What did Mary find that she liked? → "A red coat."
Comments: More effective than Program 1; still limited by the
predefined structure.
Program 3 – World Knowledge Integration (Incorporating World
Knowledge)
Data Structures:
1. WorldModel: Incorporates background knowledge about common scenarios.
2. IntegratedText: Combines the text with related world knowledge.
Algorithm: Uses scripts (predefined sequences of events) to understand the text
deeply.
Example: Q3: Did Mary buy anything? → "She bought a red coat."
Comments: Most powerful of the three but still lacks general inference capabilities.
Evolution of AI Techniques: From Simple to Complex AI
Program 1: Basic pattern matching – limited and fragile.
Program 2: Structured text with semantic understanding – more
flexible.
Program 3: Uses world knowledge – deeper understanding, but not
fully general.
Key Takeaway: Effective AI techniques integrate knowledge, search,
and abstraction to handle complex queries.
Three Key AI Techniques
1. Search: A method for solving problems where no direct approach exists; it also provides a framework into which any available direct techniques can be embedded.
2. Use of Knowledge: Solves complex problems by exploiting the structure and relationships among the objects involved.
3. Abstraction: Separates important features from unimportant details so that the process is not overwhelmed.
Programs that use these techniques are less fragile and can handle large, complex problems.
These techniques help define what an AI technique is, though a precise definition is challenging.
REFERENCES
Artificial Intelligence by Elaine Rich, K. Knight, McGraw-Hill
Search Methods in Artificial Intelligence by Deepak Khemani
https://www.analyticsvidhya.com/blog/2021/06/machine-learning-vs-artificial-intelligence-vs-deep-learning/
https://www.geeksforgeeks.org/difference-between-artificial-intelligence-vs-machine-learning-vs-deep-learning/
https://www.linkedin.com/pulse/rise-intelligent-machines-evolutionary-tale-ai-jeevitha-d-s-ixa4c/
https://www.linkedin.com/posts/mario-balladares-52263b22_ai-regression-classification-activity-7090701246453440513-1OTw/
https://nptel.ac.in/courses/106106247
  • 29. Can Machines Think? 5 Ever since the possibility of building intelligent machines arose, there have been raging debates on whether machine intelligence is possible or not. All kinds of arguments have been put forth both for and against the possibility. It was perhaps to put an end to these arguments that Alan Turing (1950) proposed his famous imitation game, which we now call the Turing Test.
  • 30. Turing Test Setup The Turing Test, proposed by Alan Turing in 1950, is a method to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. The setup involves three participants: Human Interrogator (Judge): A person who poses questions to both a human and a machine. Human Participant: A person who responds to the interrogator's questions. Machine: A computer or AI program designed to generate responses. The interrogator communicates with both the human and the machine through a text-based interface, ensuring that physical presence or voice cues do not give away the participants' identities. The test typically involves a series of conversations in which the interrogator asks various questions and tries to distinguish between the human and the machine. Experimental Setup and Findings of the Turing Test
  • 31. Findings of the Turing Test Early Results: Early attempts at creating machines to pass the Turing Test demonstrated limited success. Early programs like ELIZA (1966) could mimic simple conversation patterns, but their lack of understanding was evident with more complex queries. Modern Applications: Recent advancements in AI, particularly Natural Language Processing (NLP) models like GPT-3, have come closer to passing the Turing Test by generating human-like responses in diverse contexts. However, these models still struggle with deep understanding, common sense, and contextually nuanced conversations. Overall Findings: The Turing Test has become a benchmark for testing machine intelligence. While many AI programs can handle simple conversational tasks convincingly, true understanding, self-awareness, and context management are still challenges that prevent machines from fully passing the test.
  • 32. However, it is quite obvious that relying on human impressions based on interaction in natural language is not the best way of determining whether a machine is intelligent or not. With more and more machines becoming good at generating text rivalling that produced by humans, a need is being felt for something that delves deeper and tests whether the machine is actually reasoning when answering questions. Hector Levesque and colleagues have proposed a new test of intelligence which they call the Winograd Schema Challenge, after Terry Winograd who first suggested it (Levesque et al., 2012; Levesque, 2017). The idea is that the test cannot be answered by having a large language model or access to the internet but would need common sense knowledge about the world.
  • 33. The following is the example attributed to Winograd (1972). The town councillors refused to give the angry demonstrators a permit because they feared violence. Who feared violence? (a) The town councillors (b) The angry demonstrators The town councillors refused to give the angry demonstrators a permit because they advocated violence. Who advocated violence? (a) The town councillors (b) The angry demonstrators In both cases, two options are given to the subject, who has to choose one of the two. The authors report that the Winograd Schema Test was preceded by a pronoun disambiguation test on single sentences, with examples chosen from naturally occurring text. Only those programs that did well in the first test were allowed to advance to the Winograd Schema Test. The important thing is that such problems can be solved only if the subject is well versed in sufficient common sense knowledge about the world and also the structure of language.
  • 34. A question one might ask is: why should a test of intelligence be language-based? Could there be other indicators of intelligence? AARON, the drawing artist created by Harold Cohen (1928–2016). DALL-E, Midjourney, and Stable Diffusion: text-to-image AI systems (Cohen, 2016). A Turing Test for musical intelligence proposed by Erik Belgum and colleagues (Belgum et al., 1989). EMI (Experiments in Musical Intelligence): a computer program that composes music in the style of various classical composers (Cope, 2004). IBM's Watson and its spectacular win in the game of Jeopardy! (2011). IBM's Chef Watson: a computationally creative system that can automatically design and discover culinary recipes that are flavorful, healthy, and novel. DeepMind's AlphaGo: beat the reigning world champion Lee Sedol in the oriental game of Go (Silver et al., 2016). IBM's Deep Blue program beat the then world champion Garry Kasparov in the game of chess in 1997. In the summer of 1956, John McCarthy and Marvin Minsky organized the Dartmouth Conference with the following stated goal: 'The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it' (McCorduck, 2004). While the Turing Test serves as a measure of how closely machines can imitate human behavior, the Chinese Room Experiment critiques the idea that such imitation equates to real understanding.
  • 35. Chinese Room Experiment: Setup and Findings Chinese Room Setup Proposed by philosopher John Searle in 1980, the Chinese Room Experiment challenges the notion of computers possessing true understanding or consciousness. The setup involves: Room: A closed room containing a person (who does not understand Chinese) and a set of instructions. Person (Human Processor): Inside the room, the person follows a comprehensive set of syntactic rules (instructions) for manipulating Chinese symbols. Input and Output: Chinese characters (input) are passed into the room, and the person uses the rule book to arrange symbols and produce the appropriate output (responses in Chinese) without understanding their meaning. The person inside the room can generate responses that seem coherent to Chinese- speaking individuals outside, even though they have no understanding of the language.
  • 36. Findings of the Chinese Room Experiment Understanding vs. Processing: The experiment demonstrates that while a machine (or a human using rule-based instructions) can simulate an understanding of language, it does not imply that it actually understands the content. This supports Searle's argument that computers process symbols syntactically without any semantic comprehension. Implications for AI: The experiment raises questions about the nature of artificial intelligence. Even if a machine passes the Turing Test, it does not mean the machine has a true "mind" or consciousness. It suggests that AI may excel at simulating human-like responses without possessing genuine understanding or intentionality.
  • 37. In summary, while the Turing Test serves as a measure of how closely machines can imitate human behavior, the Chinese Room Experiment critiques the idea that such imitation equates to real understanding. Both have significantly influenced the philosophical and technical discussions around AI and the nature of machine intelligence.
  • 39. What is Machine Learning? Definition 1 (an older/informal one): Arthur Samuel (in 1959) described Machine Learning as: “the field of study that gives computers the ability to learn without being explicitly programmed.”
  • 40. Tom Mitchell (in 1998): “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” Example: playing checkers. E = the experience of playing many games of checkers. T = the task of playing checkers. P = the probability that the program will win the next game. Definition 2 (a modern one):
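Mitchell's E/T/P framing can be made concrete with a toy learner (a hypothetical sketch, not from the slides): the task T is labeling fruit by weight, the experience E is the set of labeled examples seen so far, and the performance measure P is accuracy on a fixed test set. Accuracy improves as E grows.

```python
def nearest_label(train, x):
    """Predict the label of the training example whose weight is closest to x."""
    return min(train, key=lambda ex: abs(ex[0] - x))[1]

def accuracy(train, test):
    """P: fraction of test points labeled correctly given experience `train` (E)."""
    correct = sum(1 for x, y in test if nearest_label(train, x) == y)
    return correct / len(test)

# Invented data: apples weigh ~150 g, melons ~1000 g.
data = [(140, "apple"), (160, "apple"), (950, "melon"), (1050, "melon"),
        (155, "apple"), (990, "melon")]
test = [(150, "apple"), (1000, "melon"), (145, "apple"), (980, "melon")]

acc_small = accuracy(data[:2], test)   # little experience: 0.5
acc_large = accuracy(data, test)       # more experience: 1.0
```

With only two apple examples the learner predicts "apple" for everything; with more experience its measured performance improves, exactly the E/T/P pattern Mitchell describes.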
  • 44. DATA REPRESENTATION IN Machine Learning Experience in the form of raw data is a source of learning in many applications (human knowledge in linguistic form is an additional learning source). This raw data is pre-processed w.r.t. the class of tasks, and this leads to an information system that represents the knowledge in the raw data (input to the learning system shown in the following Figure). Figure: A block diagram representation of a learning machine. The information system data may be stored in a data warehouse, which provides integrated, consistent and cleaned data to machine learning algorithms. However, for many applications, instead of analysing data accessed online from data warehouses, data may be available in a flat file, which is simply a data table.
  • 45. Here, the information system is a form of data table D, as shown below, where each row represents a measurement/ observation and each column denotes the value of an attribute of the information system for all measurements/ observations. Table: A Template for Data Table
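Since the table template itself is not reproduced here, a minimal sketch of such a flat-file data table in plain Python may help; the attribute names and values are invented for illustration.

```python
# Each row (dict) is one observation; each key is one attribute of the
# information system, exactly as in the data table D described above.
table = [
    {"weight_g": 150, "color": "red",    "label": "apple"},
    {"weight_g": 120, "color": "yellow", "label": "banana"},
    {"weight_g": 160, "color": "green",  "label": "apple"},
]

attributes = list(table[0].keys())            # the column headers
weights = [row["weight_g"] for row in table]  # one attribute column as a vector
```

In practice such a table would be loaded from a CSV file or a warehouse query, but the row/column structure is the same.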
  • 47. SUPERVISED Machine learning Algorithm learns to map input data (features) to known output data (labels). Training data is labeled: each data point has a corresponding output label that the algorithm is trying to predict. Tasks: (i) Classification: the output variable is a categorical variable (response is qualitative) (ii) Regression: the output variable is a continuous variable (response is quantitative) Example: Classification task of predicting fruits
  • 48. Image classification: Identify objects or features in images, such as classifying whether an image contains a cat or a dog. Sentiment analysis: Determine the sentiment or opinion expressed in text data, such as classifying whether a product review is positive or negative. Fraud detection: Identify fraudulent transactions in financial data, such as predicting whether a credit card transaction is likely to be fraudulent. Predictive maintenance: Predict when maintenance is needed for equipment based on sensor data, such as predicting when a machine is likely to fail. Personalized recommendations: Recommend products or content to users based on their past behavior and preferences, such as suggesting movies or TV shows to watch. APPLICATIONS OF SUPERVISED Machine learning
  • 49. CHALLENGES IN SUPERVISED Machine learning Insufficient or biased data: Lack of data or biased data can lead to poor model performance or inaccurate predictions. Overfitting: When the model is too complex, it can fit the training data too closely, leading to poor generalization performance on new data. Feature engineering: Choosing and engineering the right set of features can be a time-consuming and subjective process that requires domain expertise. Model selection: Choosing the right model and hyperparameters for a given problem can be challenging and require extensive experimentation and tuning. Interpretability: Understanding why a model makes certain predictions or decisions can be difficult, especially for complex models like deep neural networks.
  • 52. Classification Definition: Classification is a supervised learning task where the objective is to categorize input data into predefined labels or classes. How it Works: The model learns from labeled training data, which consists of input-output pairs. The goal is to learn the mapping between input features and target labels to classify new, unseen data correctly. Examples: Email spam detection (spam or not spam). Image recognition (identifying objects like cats, dogs, cars, etc.). Sentiment analysis (positive, negative, or neutral sentiments in text). Common Algorithms: Decision Trees, Support Vector Machines (SVM), Neural Networks, Naive Bayes, K-Nearest Neighbors (KNN), Random Forests.
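As a concrete illustration of one algorithm from the list above, here is a minimal K-Nearest Neighbors classifier in plain Python. The single feature (fruit weight in grams) and the data are invented; real applications would use richer feature vectors and a library implementation.

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify x by majority vote among the k nearest training points (1-D feature)."""
    nearest = sorted(train, key=lambda ex: abs(ex[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Labeled training data: (weight_g, label) pairs.
train = [(150, "apple"), (170, "apple"), (130, "apple"),
         (950, "melon"), (1000, "melon"), (1100, "melon")]

knn_predict(train, 160)   # 'apple'
knn_predict(train, 990)   # 'melon'
```

The design choice here is the simplest possible: KNN has no training phase at all, it just memorizes the labeled examples, which makes it a good first illustration of the input-to-label mapping that defines classification.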
  • 53. Regression Definition: Regression is a supervised learning task aimed at predicting a continuous numerical value based on input features. How it Works: The model learns the relationship between input variables and the continuous output variable by finding the best-fitting function (often a line or curve) to minimize prediction error. Examples: Predicting house prices based on features like size, location, and number of bedrooms. Forecasting stock prices. Estimating a person's age from their photograph. Common Algorithms: Linear Regression, Polynomial Regression, Support Vector Regression (SVR), Decision Trees (for regression), Neural Networks (e.g., feedforward networks).
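The "best-fitting function" idea can be sketched with ordinary least squares in plain Python. The data below is synthetic and noise-free (generated from y = 2x + 1), so the fitted coefficients recover the generating line exactly.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on 1-D data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]       # exactly y = 2x + 1
a, b = fit_line(xs, ys)     # a = 2.0, b = 1.0
```

With noisy real-world data the fit minimizes squared prediction error rather than recovering an exact line, which is precisely the "minimize prediction error" goal stated above.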
  • 54. UNSUPERVISED Machine learning Algorithm learns patterns and structures in the input data (features) without being given explicit labels / targets. Training data is unlabeled. Tasks: (i) Clustering: group similar data points together (ii) Dimensionality reduction: the output represents the input with reduced dimensions Example: Clustering task of grouping fruits
  • 55. APPLICATIONS OF UNSUPERVISED Machine learning Anomaly detection: Identify unusual or rare events or patterns in data, such as detecting fraudulent transactions. Clustering: Group similar data points together based on similarity or distance, such as clustering customers based on their purchasing behavior. Dimensionality reduction: Reduce the number of features or variables in high-dimensional data while preserving as much information as possible, such as reducing the number of dimensions in an image or text dataset. Topic modeling: Discover underlying topics or themes in a collection of documents or text data, such as identifying topics in customer reviews or news articles.
  • 56. CHALLENGES IN UNSUPERVISED Machine learning Evaluation: There are no clear evaluation metrics for unsupervised learning, making it difficult to objectively compare different models or algorithms. Interpretability: Understanding the meaning or interpretation of the patterns or clusters discovered by unsupervised learning can be challenging, especially for complex models or high-dimensional data. Scalability: Some unsupervised learning algorithms can be computationally expensive and difficult to scale to large datasets or high-dimensional data. Determining the number of clusters: Determining the optimal number of clusters in a clustering algorithm can be challenging and subjective. Data preprocessing: Preprocessing and cleaning the data can be time-consuming and require domain expertise, especially for complex or unstructured data like images or text.
  • 58. CLUSTERING Definition: Clustering is an unsupervised learning task where the goal is to group a set of data points into clusters based on their similarities. How it Works: The algorithm finds natural groupings in the data by measuring similarity (e.g., distance) between data points. It assigns points to clusters such that points in the same cluster are more similar to each other than to those in other clusters. Examples: Customer segmentation in marketing (grouping customers based on purchasing behavior). Document categorization (organizing news articles into topics). Image segmentation (grouping pixels in an image into regions). Common Algorithms: K-Means, Hierarchical Clustering, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), Gaussian Mixture Models.
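A minimal sketch of the K-Means idea (Lloyd's algorithm) on one-dimensional points, with fixed initial centers so the run is deterministic; real uses would rely on a library implementation with multiple random restarts.

```python
def kmeans(points, centers, iters=10):
    """Lloyd's algorithm on 1-D points; k is the number of initial centers."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]   # two well-separated groups
centers, clusters = kmeans(points, centers=[0.0, 5.0])
# centers converge to [1.0, 10.0]
```

Note that nothing in the data is labeled: the algorithm discovers the two groups purely from distances, which is what distinguishes this from the supervised tasks earlier.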
  • 60. Physical Symbol System HYPOTHESIS (PSSH): THE UNDERLYING ASSUMPTION 7
  • 61. The Physical Symbol System Hypothesis is a theory in AI proposed by researchers Allen Newell and Herbert A. Simon in 1976. It states that a physical symbol system—a system that uses symbols to represent knowledge and manipulate those symbols to produce actions—has the necessary and sufficient means for intelligent behavior. What Is a Physical Symbol System? It is a system that consists of symbols (like words, numbers, or other representations) and a set of rules (operations) for manipulating those symbols. Computers are an example of physical symbol systems. They use symbols (data) and follow rules (programs) to process information.
  • 62. Key Points of the Hypothesis: Necessary for Intelligence: The hypothesis claims that to exhibit intelligent behavior (like problem-solving or reasoning), a system must operate using symbols. Essentially, human-like intelligence requires symbols to represent objects, actions, and ideas. Sufficient for Intelligence: If a system has the ability to store, manipulate, and use symbols according to certain rules, it can achieve any level of intelligence. This means that a properly programmed computer, in theory, could display any form of human-like intelligence.
  • 63. Why Is It Important in AI? The Physical Symbol System Hypothesis laid the foundation for symbolic AI, which includes methods like logic, rule-based systems, and expert systems. These approaches rely on representing knowledge in symbols (like facts, rules, and concepts) and using algorithms to manipulate them. Example Chess-Playing AI: In a chess game, pieces (symbols) represent different roles (knight, rook, etc.), and the rules of chess define how these symbols can be manipulated (moved). A computer playing chess uses these symbols and rules to make intelligent moves, embodying the physical symbol system hypothesis. Thus, the hypothesis suggests that the use of symbols and rules to process information is the key to creating an intelligent system, much like how humans think and reason.
  • 65. What is an AI Technique? Exploring the Basics of AI Techniques and Knowledge Representation
  • 66. Artificial Intelligence Problems AI problems cover a wide range but share one key aspect: they are hard. Techniques are needed to solve these problems. Question: Are there specific techniques suited for various AI problems?
  • 67. AI Techniques Require Knowledge AI techniques rely heavily on knowledge. Key Properties of Knowledge: (i) Voluminous: Contains a large amount of information. (ii) Hard to characterize: Not easy to define accurately. (iii) Constantly changing: Needs frequent updates. (iv) Organized for use: Different from raw data as it’s structured for practical applications.
  • 68. Defining an AI Technique An AI technique is a method that uses knowledge, represented in such a way that: It captures generalizations (groups similar situations together). It can be understood by people who provide the knowledge.
  • 69. Essential Properties of an AI Technique Easily modifiable: Can be updated to correct errors or reflect changes. Versatile: Usable in many situations even if not perfectly accurate. Narrows down possibilities: Helps cope with information overload by pruning the options that must be considered.
  • 70. AI Techniques and Problem Independence AI techniques are designed to be independent of the problems they solve. They can sometimes solve non-AI problems as well. Objective: Characterize AI techniques to make them as problem-independent as possible.
  • 71. Summary of AI Techniques AI techniques are knowledge-based methods designed to solve complex problems. Knowledge in AI must be well-structured, adaptable, and provide generalizations. The goal is to make AI techniques flexible and usable across various problem domains.
  • 72. AI techniques are problem-independent. In order to characterize AI techniques in as problem-independent a way as possible, let's look at two very different problems and a series of approaches for solving each of them.
  • 74. To understand AI techniques, it's helpful to apply them to different types of problems. By exploring how AI can handle both simple (Tic-Tac-Toe) and complex (questioning) tasks, we gain insight into the flexibility and problem-solving capacity of AI methods. The aim is to characterize AI techniques as being as problem-independent as possible.
  • 75. What Are We Trying to Infer? For Tic-Tac-Toe: Understand how AI uses strategies, rules, and predictions to solve structured, game-like problems. Demonstrates how knowledge can be represented and used to achieve an optimal outcome in a well-defined scenario. For Questioning: Explore how AI handles unstructured, dynamic, and less predictable scenarios (like natural language questioning). Shows the challenges of knowledge representation, reasoning, and adaptability in complex situations. Main Objective: To identify how AI techniques adapt to various problem types and whether these techniques can be generalized for broader application.
  • 76. A Comparison of Four Programs to Play Tic-Tac-Toe TIC-TAC-TOE Programs: From Simple to Advanced AI Techniques
  • 77. Overview of the Tic-Tac-Toe Programs Four programs, increasing in complexity and intelligence: Program 1: Simple Table Lookup; Program 2: Rule-Based Strategy; Program 2′: Magic Square Technique; Program 3: Minimax Algorithm
  • 78. Data Structures: Board: The game board is represented as a list (vector) with nine elements. Each element corresponds to one of the nine positions on the board, numbered 1 to 9. Values: (i) 0 means the position is empty. (ii) 1 means the position contains an "X". (iii) 2 means the position contains an "O". Movetable: A large list (vector) with 19,683 elements (since each of the nine board positions can be 0, 1, or 2, giving 3^9 = 19,683 possible board configurations). Each entry in this list represents a possible game state and what the board should look like after a move. Program 1: Simple Table Lookup
  • 79. How the Program Works? Convert the Board: The program views the current board as a "ternary" (base three) number and converts it to a regular decimal number. Use the Movetable: The decimal number is used to find the corresponding entry in the Movetable, which tells the program the next board state after making a move. Update the Board: The board is updated based on this information.
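The board-to-index conversion can be sketched as follows (a hypothetical reconstruction of the scheme described above, not the book's actual code):

```python
def board_to_index(board):
    """Treat the nine squares (0 = empty, 1 = X, 2 = O) as a base-3 number.

    The resulting integer (0 .. 3**9 - 1) indexes the 19,683-entry movetable.
    """
    index = 0
    for square in board:        # the first square is the most significant digit
        index = index * 3 + square
    return index

empty = [0] * 9
x_in_center = [0, 0, 0, 0, 1, 0, 0, 0, 0]

board_to_index(empty)        # 0
board_to_index(x_in_center)  # 1 * 3**4 = 81
```

The lookup itself would then just be `movetable[board_to_index(board)]`, which is why Program 1 is so fast and so memory-hungry: all the intelligence lives in that pre-computed table.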
  • 80. Efficiency: The program is fast because it directly looks up the next move using a pre-calculated list (the Movetable). Limitations: Despite its efficiency, the program has several disadvantages: (i) Memory Intensive: Storing all possible board positions and moves requires a lot of memory. (ii) Manual Work: It requires a lot of manual effort to fill in all possible moves into the Movetable. (iii) Error-Prone: Manually determining the correct moves for every possible board state can lead to mistakes. (iv) Lacks Flexibility: If you wanted to extend the game (e.g., to a 3D version), you would need to start over, and the number of possible board states would be enormous (3^27, roughly 7.6 trillion), making it impractical to store.
  • 81. Conclusion This approach does not meet the qualities of a good AI technique, which should be flexible, generalizable, and easy to modify or extend. Hence, it is suggested to explore better methods for representing and solving the game.
  • 82. This program is an improvement over the first one, aiming to be more flexible and use less memory. Data Structures: Board: Similar to Program 1, it uses a nine-element list (vector) to represent the Tic-Tac-Toe board. However, it now stores: (i) 2 for a blank square. (ii) 3 for an "X". (iii) 5 for an "O". Turn: An integer that tracks which move of the game is being played (1 for the first move, 9 for the last). Program 2: Rule-Based Strategy
  • 83. The Algorithm: The main algorithm uses three smaller subroutines (mini-programs): 1. Make2: Checks if the center of the board is blank. If it is, the program places an "O" there. If not, it selects one of the blank squares in non-corner positions (like squares 2, 4, 6, or 8). 2. Posswin(p): Checks if the current player (p) can win on their next move. It does this by examining all rows, columns, and diagonals of the board. It multiplies the values of the squares in a line. If the product is 18 (3 × 3 × 2), it means player "X" can win, since the values indicate two "X"s (3) and one empty space (2). Similarly, if the product is 50 (5 × 5 × 2), it indicates that player "O" can win. This function helps the program both win and block the opponent's potential win. 3. Go(n): Makes a move in the specified square (n). This updates the board for the next turn and increments the turn counter.
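The Posswin idea follows directly from the encoding above (blank = 2, X = 3, O = 5). The sketch below is an illustrative reconstruction, not the original program; it uses 0-based square indices.

```python
# All eight winning lines of the board, as indices 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def posswin(board, p):
    """Return the square where player p ('X' or 'O') can win next move, else None.

    A line whose values multiply to 18 (3*3*2) holds two X's and a blank;
    a product of 50 (5*5*2) holds two O's and a blank.
    """
    target = 18 if p == "X" else 50
    for line in LINES:
        if board[line[0]] * board[line[1]] * board[line[2]] == target:
            for i in line:
                if board[i] == 2:   # the blank square completes the line
                    return i
    return None

# X occupies squares 0 and 1, everything else blank: X wins by taking square 2.
board = [3, 3, 2, 2, 2, 2, 2, 2, 2]
posswin(board, "X")   # 2
```

The prime-valued encoding is the clever part: a single multiplication per line replaces several equality tests, because 18 and 50 can only arise from exactly those square combinations.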
  • 84. Playing Strategy: The algorithm has a pre-defined strategy for each move: Turn 1: Makes the first move in the upper left corner. Turn 2: If the center is blank, it occupies it; otherwise, it chooses the upper left corner. Turn 3: If the bottom right corner is blank, it occupies it; otherwise, it chooses the center. Turn 4 to Turn 9: The program checks if it can win or block the opponent using the Posswin function. If not possible, it makes strategic moves like occupying empty squares or setting up a "fork" (a move that creates multiple win opportunities).
  • 85. Efficiency: This program is not as fast as the first one because it checks multiple conditions before making a move. However, it uses much less memory since it doesn't store every possible game state like the first program. Clarity: The strategy is easier to understand and modify if needed. However, the entire strategy is still manually programmed, meaning that if there are bugs, the program won't play well. Limitations: The program's knowledge is specific to regular 2D Tic-Tac-Toe and can't be generalized to a different game format, such as a 3D version.
  • 86. Conclusion While this program improves in terms of memory usage and strategy understanding, it still has limitations. It cannot adapt or generalize its strategy to different versions of Tic-Tac-Toe or other games, which is a key aspect of more advanced AI techniques.
  • 87. Program 2': Magic Square This program is very similar to Program 2 but changes how the Tic-Tac-Toe board is represented. Board Representation: The board is still a nine-element list (vector), but the positions are numbered 1 to 9 in a different arrangement, one that forms a magic square: the sum of the numbers in every row, column, and diagonal is 15.
  • 88. How It Works? When checking if a player has a winning combination, the program uses the properties of the magic square. 1. It keeps a list of all the squares that each player has occupied. 2. For each pair of squares occupied by a player, it calculates the difference between 15 and the sum of those two squares. Example: If a player occupies squares 8 and 3, their sum is 11. The difference from 15 is 4, which means that if the player also occupies square 4, they win. 3. The program ignores pairs whose difference is not meaningful (negative, greater than 9, or involving non-collinear squares). 4. This method simplifies the win-checking process since it reduces the number of checks needed. If a player can occupy the square indicated by this difference, they have a winning move.
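A sketch of the magic-square win check; the particular magic-square layout shown in the comment is one standard arrangement, assumed here for illustration.

```python
from itertools import combinations

# One standard 3x3 magic square (every row, column, and diagonal sums to 15):
#   8 1 6
#   3 5 7
#   4 9 2

def winning_square(occupied, free):
    """Return a magic-square number that completes 15 with some occupied pair.

    occupied: magic-square numbers the player already holds;
    free: numbers still available on the board.
    """
    for a, b in combinations(occupied, 2):
        need = 15 - (a + b)
        if 1 <= need <= 9 and need in free:
            return need
    return None

# Player holds 8 and 3; 15 - (8 + 3) = 4 completes a line.
winning_square({8, 3}, free={2, 4, 6, 9})   # 4
```

The trick works because every set of three distinct numbers from 1 to 9 that sums to 15 corresponds to a line of the magic square, so arithmetic alone identifies winning moves.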
  • 89. Efficiency: This program is more efficient than Program 2 because it uses the unique properties of the magic square to limit the number of moves it needs to check for a win. Why Is It Interesting? The program highlights how different ways of representing the same problem (in this case, the board layout) can greatly impact the efficiency of a solution. Using a magic square makes it quicker to identify potential winning moves. Human vs. Computer Approach: The comments point out a difference between how humans and computers solve problems. Humans often scan the board visually, while computers process numbers systematically. The magic square approach might seem less intuitive to humans but is quite efficient for computers.
  • 90. Key Takeaway The main idea is that choosing the right way to represent a problem (like using a magic square) can make the AI more efficient at solving it. This is an important concept in AI programming, as it demonstrates how certain representations can simplify complex checks and decisions.
  • 91. Program 3: Minimax Algorithm Program 3 uses a method called the minimax algorithm, which is a standard technique in AI for decision-making. Unlike the previous programs that relied on predefined strategies or simple checks, this program evaluates future moves to make the best decision. Data Structures: BoardPosition: This structure contains: 1. A nine-element list representing the board. 2. A list of all possible board positions that could result from the next move. 3. A number representing an estimate of how likely each board position is to lead to a win for the player.
  • 92. The Algorithm (Minimax): The program looks ahead at possible future moves to decide the best one. It does this in the following steps: Check for Immediate Win: It checks if any of the potential moves can immediately lead to a win. If so, that move is given the highest possible rating. Simulate Opponent's Moves: If there is no immediate win, the program considers all possible moves the opponent could make next. For each of these moves, it recursively evaluates how unfavorable these moves would be for itself (the player). The worst possible outcome is assigned to the current move under consideration. Choose the Best Move: After evaluating all possible moves, the program selects the one with the highest rating (i.e., the one that maximizes its chances of winning while minimizing the opponent’s chances). This process of evaluating future moves is called the minimax procedure. The algorithm aims to maximize the player’s chance of winning while assuming the opponent will also play their best to minimize this chance.
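The steps above can be sketched as a compact, unoptimized minimax for Tic-Tac-Toe (an illustrative reconstruction, not the original program). X is the maximizing player, O the minimizing one.

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; score is +1/0/-1 for X win/draw/O win."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, s in enumerate(board) if s == " "]
    if not moves:
        return 0, None              # board full: draw
    best = None
    for m in moves:
        board[m] = player           # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "              # ...then undo it
        if best is None or (player == "X" and score > best[0]) \
                        or (player == "O" and score < best[0]):
            best = (score, m)
    return best

# X to move with two in a row on top: minimax finds the immediate win at square 2.
board = ["X", "X", " ", "O", "O", " ", " ", " ", " "]
minimax(board, "X")   # (1, 2)
```

Exploring the full game tree like this is feasible for Tic-Tac-Toe; for larger games the same procedure is combined with depth limits, evaluation functions, and pruning.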
  • 93. Comments on the Program Time-Intensive: This program takes much more time to run than the previous programs because it explores a "tree" of all possible moves and outcomes before making a decision. This means it has to consider many different sequences of moves to find the best path. More Flexible: Unlike the previous programs, this approach can handle more complex games than Tic-Tac-Toe. The exhaustive, pre-defined strategies in earlier programs would fail in such scenarios, but minimax can be extended to larger, more complicated games. Augmentation: The minimax algorithm can be improved with additional knowledge about the game. For example, it might consider only a subset of possible moves, reducing the time required to make decisions. AI Technique: This is a true example of an AI technique. For small problems like Tic-Tac-Toe, it may seem less efficient than earlier methods, but it can be adapted for complex situations where predefined strategies would not work.
  • 94. Key Takeaway Program 3 uses the minimax algorithm, which is a fundamental concept in AI. By evaluating possible future outcomes, the program makes decisions that maximize the chance of success, showing how advanced AI can plan multiple steps ahead. Although this approach is more computationally intensive, it is far more adaptable and powerful for a wide range of games and problems.
  • 95. Summary and Comparison Table: Tic-Tac-Toe Program Comparison
  • 96. Introduction to Question Answering in AI: Involves reading an English text and answering questions about it in English. Unlike structured tasks such as Tic-Tac-Toe, the problem is hard to define formally. Example input text: "Russia massed troops on the Czech border." The model must interpret the text and provide contextually correct answers.
  • 97. How AI Models Influence Answers: Example input text: "Russia massed troops on the Czech border."
  Dialogue 1 (conservative model): Q: Why did Russia do this? A: To take political control of Czechoslovakia. Q: What should the United States do? A: Intervene militarily.
  Dialogue 2 (liberal model): Q: Why did Russia do this? A: To increase political influence over Czechoslovakia. Q: What should the United States do? A: Denounce the action in the UN.
  Key Point: The AI's answers change based on the model's perspective.
  • 98. Defining "Correct" Answers in AI: The Challenge of Correct Answers Defining a "correct" answer is subjective and relies on the model's beliefs. AI answers are computed based on predefined models, making the idea of a "correct" response complex. Example Text: "Mary went shopping for a new coat. She found a red one..." Sample Questions: What did Mary go shopping for? What did Mary find that she liked?
  • 99. Key Takeaways Question answering in AI is more complex than structured problem-solving. Answers depend heavily on the AI model used. This exploration shows how AI techniques must adapt to different types of problems and to the subjectivity involved.
  • 100. Program 1 – Pattern Matching for Question Answering (Literal Text Matching) Data Structures: Predefined templates and patterns that match questions to text fragments. Algorithm: 1. Matches question patterns against the input text. 2. Substitutes matched terms to generate responses. Example: Q1: What did Mary go shopping for? → "A new coat." Comments: Simple and fragile; fails when questions deviate from the expected forms.
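Program 1's literal matching can be mocked up with regular expressions (a toy sketch; the template below is invented for the slides' coat example, not taken from the actual program in Rich & Knight):

```python
import re

TEXT = "Mary went shopping for a new coat. She found a red one she really liked."

# Question template -> text template. {0} is the name captured from the question,
# substituted into the text pattern to locate the answer fragment.
TEMPLATES = [
    (r"What did (\w+) go shopping for\?", r"{0} went shopping for ([a-z ]+)\."),
]

def answer(question):
    for q_pat, t_pat in TEMPLATES:
        q = re.match(q_pat, question)
        if q:
            t = re.search(t_pat.format(q.group(1)), TEXT)
            if t:
                return t.group(1)
    return "I don't know."        # any unanticipated phrasing falls through

print(answer("What did Mary go shopping for?"))   # → a new coat
print(answer("Did Mary buy anything?"))           # → I don't know.
```

The second query shows the fragility the slide mentions: a question with no matching template gets no answer, even though the text clearly supports one.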
  • 101. Program 2 – Structured Text Representation (Structured Text for Better Understanding) Data Structures: 1. EnglishKnow: basic knowledge of English grammar and semantics. 2. StructuredText: the essential information extracted from the input text. Algorithm: Converts the input text and questions into structured forms, then matches them to identify correct answers. Example: Q2: What did Mary find that she liked? → "A red coat." Comments: More effective than Program 1, but still limited by the predefined structure.
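One way to picture Program 2's StructuredText is as a list of event frames with filled slots, so questions become slot lookups rather than string matches (the frame and slot names below are illustrative, not the book's):

```python
# A possible structured form of the coat story: each sentence becomes an
# event frame whose slots (agent, object, attributes) are explicit.
structured_text = [
    {"event": "shopping", "agent": "Mary",
     "goal": {"object": "coat", "state": "new"}},
    {"event": "find", "agent": "Mary",
     "object": {"object": "coat", "color": "red"}, "attitude": "liked"},
]

def what_did_agent_find(frames, agent):
    """Answer Q2 by looking up the 'find' frame for the given agent."""
    for f in frames:
        if f["event"] == "find" and f["agent"] == agent:
            o = f["object"]
            return f"a {o['color']} {o['object']}"
    return None

print(what_did_agent_find(structured_text, "Mary"))   # → a red coat
```

Because the representation already resolves "a red one" to a coat, the answer is "a red coat" even though those exact words never appear in the text, which is what the literal matcher of Program 1 cannot do.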
  • 102. Program 3 – World Knowledge Integration (Incorporating World Knowledge) Data Structures: 1. WorldModel: background knowledge about common scenarios. 2. IntegratedText: the input text combined with related world knowledge. Algorithm: Uses scripts (predefined sequences of stereotyped events) to understand the text more deeply. Example: Q3: Did Mary buy anything? → "She bought a red coat." Comments: The most powerful of the three, but it still lacks general inference capabilities.
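Program 3's script idea can be caricatured in a few lines: a stereotyped shopping script supplies default events the text never states outright (the script and the inference rule here are invented for illustration):

```python
# Default sequence of events in a stereotyped shopping episode.
SHOPPING_SCRIPT = ["enter store", "look for item", "find item",
                   "like item", "buy item", "leave store"]

def did_buy(observed_events):
    """Infer whether a purchase happened.

    If the story explicitly mentions buying, the answer is yes. Otherwise,
    reaching "like item" with no contrary event lets the script's default
    path fill in "buy item" -- the kind of inference Program 3 makes when
    asked "Did Mary buy anything?".
    """
    if "buy item" in observed_events:
        return True
    return "like item" in observed_events and "reject item" not in observed_events

story = ["enter store", "find item", "like item"]   # events stated in Mary's text
print(did_buy(story))   # → True
```

The answer "yes, she bought the coat" comes from the script's default path, not from the text itself; this is also why the approach fails outside the scenarios its scripts cover.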
  • 103. Evolution of AI Techniques: From Simple to Complex AI Program 1: Basic pattern matching – limited and fragile. Program 2: Structured text with semantic understanding – more flexible. Program 3: Uses world knowledge – deeper understanding, but not fully general. Key Takeaway: Effective AI techniques integrate knowledge, search, and abstraction to handle complex queries.
  • 104. Three Key AI Techniques: 1. Search: A method for solving problems where no direct approach exists; it provides a framework into which any available direct techniques can be embedded. 2. Use of Knowledge: Solves complex problems by exploiting the structure of and relationships among the objects involved. 3. Abstraction: Separates important features from unimportant details to avoid overwhelming the process. Programs that use these techniques are less fragile and can handle large, complex problems. Together they help characterize what an AI technique is, though a precise definition remains elusive.
  • 105. REFERENCES
  Artificial Intelligence by Elaine Rich and K. Knight, McGraw-Hill
  Search Methods in Artificial Intelligence by Deepak Khemani
  https://www.analyticsvidhya.com/blog/2021/06/machine-learning-vs-artificial-intelligence-vs-deep-learning/
  https://www.geeksforgeeks.org/difference-between-artificial-intelligence-vs-machine-learning-vs-deep-learning/
  https://www.linkedin.com/pulse/rise-intelligent-machines-evolutionary-tale-ai-jeevitha-d-s-ixa4c/
  https://www.linkedin.com/posts/mario-balladares-52263b22_ai-regression-classification-activity-7090701246453440513-1OTw/
  https://nptel.ac.in/courses/106106247