Artificial Intelligence
Lecture 1
Mr Blessings Ngwira (Lecturer in ICT)
Department of Information and Communication Technology
MSc, BSc, CCNA, HCIA, ESEFA ERP, ITIL V4
AI Definition
AI refers to the simulation of human intelligence by a system or a machine.
The goal of AI is to develop a machine that can think like humans and
mimic human behaviors, including perceiving, reasoning, learning,
planning, predicting, and so on.
Intelligence is one of the main characteristics that distinguishes human
beings from animals.
Turing Test
“Can machines think?” Alan Turing posed this question in his famous paper
“Computing Machinery and Intelligence.”
He believed that to answer this question, we need to define what thinking
is. However, it is difficult to define thinking clearly, because thinking is a
subjective behavior.
Turing then introduced an indirect method to verify whether a machine
can think, the Turing test, which examines a machine’s ability to show
intelligence indistinguishable from that of human beings. A machine that
succeeds in the test is qualified to be labeled as artificial intelligence (AI).
Turing Test
The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence.
A computer passes the test if a human interrogator, after posing some
written questions, cannot tell whether the written responses come from a
person or from a computer.
Turing Test
The computer would need to possess the following capabilities to pass this
test:
natural language processing to enable it to communicate successfully in
English;
knowledge representation to store what it knows or hears;
automated reasoning to use the stored information to answer questions and
to draw new conclusions; and
machine learning to adapt to new circumstances and to detect and
extrapolate patterns.
Total Turing Test
It includes a video signal so that the interrogator can test the
subject’s perceptual abilities, as well as the opportunity for the interrogator to
pass physical objects “through the hatch.”
To pass the total Turing Test, the computer will need:
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
These six disciplines compose most of AI, and Turing deserves credit for
designing a test that remains relevant 60 years later.
Foundations of AI
The field of Artificial Intelligence (AI) is built upon several key disciplines and
foundational concepts, which together enable the creation of intelligent
systems. These foundations are essential for understanding, designing, and
implementing AI systems.
AI is inherently interdisciplinary, drawing from and contributing to various
fields. This cross-pollination of ideas and techniques has been crucial in
driving advancements in AI, leading to more robust, intelligent, and
capable systems. Understanding these foundational disciplines is essential
for anyone looking to delve into AI research or application development.
Foundations of AI
1. Mathematics
• Statistics and Probability: Essential for making inferences from data, modeling uncertainty,
and learning from data. Techniques include Bayesian inference, hypothesis testing, and
regression analysis.
• Linear Algebra: Used in many AI algorithms, particularly in machine learning and neural
networks. Concepts such as vectors, matrices, and tensor operations are fundamental.
• Calculus: Necessary for optimization algorithms, which are at the core of many machine
learning techniques. Gradient descent, for example, relies heavily on calculus (a short sketch follows this list).
• Discrete Mathematics: Includes logic, set theory, combinatorics, and graph theory, which
are crucial for understanding and designing algorithms.
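To make the calculus point concrete, here is a minimal gradient descent sketch in Python. The function, starting point, and learning rate are invented for illustration; real machine learning applies the same idea to loss functions over millions of parameters.

# Minimal gradient descent: minimize f(x) = (x - 3)^2.
# The derivative f'(x) = 2 * (x - 3) points "uphill", so we step against it.
def df(x):
    return 2 * (x - 3)

x = 0.0                 # arbitrary starting point
learning_rate = 0.1     # step size: too large diverges, too small crawls
for step in range(50):
    x -= learning_rate * df(x)
print(round(x, 4))      # converges toward the minimizer x = 3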
Foundations of AI
2. Computer Science
• Algorithms and Data Structures: Fundamental for designing efficient AI programs.
Examples include search algorithms (e.g., the A* algorithm), sorting algorithms, and data
structures like trees, graphs, and hash tables (an A* sketch follows this list).
• Programming Languages: Languages such as Python, Java, and Lisp are commonly used
in AI development due to their capabilities in handling complex data and algorithms.
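Below is a compact sketch of the A* algorithm mentioned above, searching a small hand-made grid (the grid, start, and goal are invented inputs). A* expands nodes in order of cost-so-far plus a heuristic estimate of the remaining distance, kept in a priority queue.

import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (wall); returns path length or None."""
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)   # cheapest estimated total first
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # shortest path length around the wall: 6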
Foundations of AI
• Software Engineering: Principles and practices for designing, developing,
testing, and maintaining reliable and scalable AI systems.
Foundations of AI
3. Psychology
• Cognitive Science: Understanding how humans think, learn, and perceive
helps in modeling intelligent behavior in machines.
• Behavioral Psychology: Insights into human behavior and learning
processes inform the design of learning algorithms and human-computer
interaction.
Foundations of AI
4. Neuroscience
• Neural Networks: Inspired by the structure and functioning of the human
brain. Understanding how neurons and synapses work can guide the
development of artificial neural networks (a toy neuron sketch follows this list).
• Brain Imaging and Mapping: Techniques like fMRI and EEG provide insights
into brain activity, which can be used to improve AI models that mimic
human cognition.
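As a toy illustration of the neuron analogy (all weights and inputs below are arbitrary), an artificial neuron computes a weighted sum of its inputs, loosely analogous to synaptic strengths, and passes it through an activation function, loosely analogous to a firing threshold:

import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid squashes the output into (0, 1)

print(neuron([0.5, 0.2], weights=[0.8, -0.4], bias=0.1))  # about 0.60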
Foundations of AI
5. Linguistics
• Natural Language Processing (NLP): Understanding and processing human
languages involves syntax, semantics, and pragmatics. Techniques such as
tokenization, parsing, and sentiment analysis are crucial (a short sketch follows this list).
• Computational Linguistics: Combines computer science and linguistics to
model and analyze human language using algorithms.
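A minimal sketch of two of the techniques named above, tokenization and lexicon-based sentiment analysis. The word lists are invented toys; real systems learn sentiment from large annotated corpora.

import re

POSITIVE = {"good", "great", "excellent"}   # toy lexicon, purely illustrative
NEGATIVE = {"bad", "poor", "terrible"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Score = positive hits minus negative hits over the tokens."""
    tokens = tokenize(text)
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

print(tokenize("The service was great!"))            # ['the', 'service', 'was', 'great']
print(sentiment("Great food but terrible service"))  # 0: one positive, one negative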
Foundations of AI
6. Philosophy
• Epistemology: Study of knowledge, its nature, and limitations. Important for
understanding the theoretical limits of what AI can achieve.
• Ethics: Addresses the moral implications of AI, including issues like bias,
fairness, privacy, and the impact of AI on society.
• Logic: Formal reasoning and logic are used in knowledge representation
and automated reasoning.
Foundations of AI
7. Control Theory and Cybernetics
• Control Systems: Understanding how to design systems that can regulate
themselves, such as robots and autonomous vehicles.
• Feedback Mechanisms: Key for designing adaptive systems that can learn
from their environment and improve over time.
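A minimal sketch of a feedback loop: a proportional controller nudging a simulated room temperature toward a setpoint. The plant model and gain are invented; the point is that the control action is computed from the error fed back from the environment.

setpoint = 21.0   # desired temperature (degrees Celsius)
temp = 15.0       # measured temperature of the toy "plant"
gain = 0.5        # proportional gain Kp

for step in range(10):
    error = setpoint - temp            # feedback: compare goal to measurement
    heater_power = gain * error        # control action proportional to error
    temp += 0.8 * heater_power - 0.1   # toy plant: heating minus heat loss
    print(f"step {step}: temp = {temp:.2f}")
# The temperature settles just below the setpoint: a pure proportional
# controller leaves a small steady-state error, which PID integral terms remove.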
History of AI
The beginning of modern AI research can be traced back to John
McCarthy, who coined the term “artificial intelligence” (AI) at a
conference at Dartmouth College in 1956. This symbolized the birth of AI
as a scientific field.
Progress in the following years was astonishing. Many scientists and
researchers focused on automated reasoning and applied AI to proving
mathematical theorems and solving algebraic problems.
History of AI
One of the most famous examples is the Logic Theorist, a computer program
written by Allen Newell, Herbert A. Simon, and Cliff Shaw, which proved 38 of
the first 52 theorems in “Principia Mathematica”.
These successes made many AI pioneers wildly optimistic and
underpinned the belief that fully intelligent machines would be built in the
near future.
A dose of reality (1966–1973)
The following statement by Herbert Simon in 1957 is often quoted:
“It is not my aim to surprise or shock you—but the simplest way I can summarize
is to say that there are now in the world machines that think, that learn and
that create. Moreover their ability to do these things is going to increase
rapidly until, in a visible future, the range of problems they can handle will
be coextensive with the range to which the human mind has been applied.”
Simon also made more concrete predictions: that within 10 years a computer
would be chess champion, and a significant mathematical theorem would be
proved by machine.
History of AI
However, researchers soon realized that there was still a long way to go before
the end goal of human-equivalent intelligence in machines could come true.
Many nontrivial problems could not be handled by the logic-based
programs. Another challenge was the lack of computational
resources to compute more and more complicated problems.
As a result, organizations and funders stopped supporting these
under-delivering AI projects.
Expert Systems
In the 1970s and 1980s, AI research shifted its focus towards
knowledge-based systems and expert systems. These systems aimed to
capture and utilize human expertise in specific domains. Knowledge-based
systems employed explicit rules and representations to solve complex
problems, while expert systems incorporated expert knowledge to provide
specialized recommendations and decision-making. These systems utilized
large databases of knowledge and employed inference engines to reason
and derive conclusions.
Expert Systems
An expert system distills a series of basic rules from expert knowledge to
help non-experts make specific decisions.
A famous example is the MYCIN program, which was used in
medical diagnosis: MYCIN could diagnose blood infections.
With about 450 rules, MYCIN was able to perform as well as some experts,
and considerably better than junior doctors.
Expert systems derived logic rules from expert knowledge to
solve real-world problems for the first time. The core of AI research during
this period was the knowledge that made machines “smarter.”
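A minimal sketch of the rule-based idea behind such systems: a forward-chaining inference engine keeps firing IF-THEN rules until no new facts can be derived. The two rules and the facts below are invented toys, far simpler than MYCIN's roughly 450 rules.

# Rules are (set of required facts, fact to conclude); all content is invented.
rules = [
    ({"fever", "positive_blood_culture"}, "suspect_bacteremia"),
    ({"suspect_bacteremia", "gram_negative"}, "recommend_antibiotic_A"),
]
facts = {"fever", "positive_blood_culture", "gram_negative"}

changed = True
while changed:                          # fire rules until a fixed point
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)       # the engine derives a new fact
            changed = True
print(facts)  # now includes 'suspect_bacteremia' and 'recommend_antibiotic_A'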
Neural networks (1986–present)
In the 1980s, AI research witnessed a resurgence of interest in connectionism, a field
focused on the study of artificial neural networks. Connectionism aimed to simulate the
behavior of biological neural networks and their learning capabilities. Neural networks
were inspired by the structure and functioning of the human brain. These networks
consisted of interconnected nodes, or “neurons,” which processed and transmitted
information
Connectionism and the Rise of Neural Networks
Neural Networks
During this period, researchers such as Geoffrey Hinton, often called the
“Godfather of Deep Learning”, made significant contributions to the
development of neural networks. One notable breakthrough was the
introduction of the backpropagation algorithm in the 1980s, which
revolutionized the training of neural networks and allowed them to learn
from examples and adjust their internal weights to make accurate
predictions.
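A minimal backpropagation sketch on the classic XOR task (the network size, learning rate, and iteration count are illustrative choices): the error gradient is pushed backwards through the layers via the chain rule, and each weight is nudged downhill.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # 4 hidden -> 1 output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)      # forward pass: output layer
    # Backward pass: the chain rule propagates the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out         # adjust internal weights downhill
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())         # typically approaches [0, 1, 1, 0]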
Deep Learning
This breakthrough paved the way for the rise of deep learning, a subfield of
AI that focuses on training neural networks with multiple layers to extract
complex patterns and representations from data.
Deep learning, fueled by the availability of large-scale datasets and
advancements in computational power, has led to remarkable progress in
various AI applications. Neural networks have achieved impressive results in
image recognition, natural language processing, and speech recognition
tasks. The success of deep learning has propelled AI into new frontiers and
has opened up exciting possibilities for solving complex problems and
advancing AI technologies.
AI adopts the scientific method
(1987–present)
In terms of methodology, AI has finally come firmly under the scientific
method. To be accepted, hypotheses must be subjected to rigorous
empirical experiments, and the results must be analyzed statistically for their
importance (Cohen, 1995). It is now possible to replicate experiments by
using shared repositories of test data and code.
The emergence of intelligent agents
(1995–present)
The work of Allen Newell, John Laird, and Paul Rosenbloom on SOAR
(Newell, 1990; Laird et al., 1987) is the best-known example
of a complete agent architecture. One of the most important environments
for intelligent agents is the Internet.
AI systems have become so common in Web-based applications that the
“-bot” suffix has entered everyday language.
Moreover, AI technologies underlie many Internet tools, such as search
engines, recommender systems, and Web site aggregators.
The availability of very large data sets
(2001–present)
Throughout the 60-year history of computer science, the emphasis has
been on the algorithm as the main subject of study. But some recent work
in AI suggests that for many problems, it makes more sense to worry about
the data and be less picky about what algorithm to apply.
This is true because of the increasing availability of very large data sources:
for example, trillions of words of English and billions of images from the Web
(Kilgarriff and Grefenstette, 2006); or billions of base pairs of genomic
sequences (Collins et al., 2003).
State of the art
What can AI do today? A concise answer is difficult because there are so
many activities in so many subfields. Here we sample a few applications.
Robotic vehicles:
A driverless robotic car named STANLEY sped through the rough terrain of
the Mojave Desert at 22 mph, finishing the 132-mile course first to win the
2005 DARPA Grand Challenge.
STANLEY is a Volkswagen Touareg outfitted with cameras, radar,
and laser rangefinders to sense the environment and onboard software to
command the steering, braking, and acceleration (Thrun, 2006).
State of the art
The following year CMU’s BOSS won the Urban Challenge, safely driving in
traffic through the streets of a closed Air Force base, obeying traffic rules
and avoiding pedestrians and other vehicles.
Speech recognition: A traveler calling United Airlines to book a flight can
have the entire conversation guided by an automated speech recognition
and dialog management system.
State of the art
Autonomous planning and scheduling: A hundred million miles from Earth,
NASA’s Remote Agent program became the first onboard autonomous
planning program to control the scheduling of operations for a spacecraft
(Jonsson et al., 2000).
REMOTE AGENT generated plans from high-level goals specified from the
ground and monitored the execution of those plans—detecting,
diagnosing, and recovering from problems as they occurred.
Successor program MAPGEN (Al-Chang et al., 2004) plans the daily
operations for NASA’s Mars Exploration Rovers, and MEXAR2 (Cesta et al.,
2007) did mission planning—both logistics and science planning—for the
European Space Agency’s Mars Express mission in 2008.
State of the art
Game playing:
IBM’s DEEP BLUE became the first computer program to defeat the
world champion in a chess match when it bested Garry Kasparov by a score
of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997).
Newsweek magazine described the match as “The brain’s last stand.” The
value of IBM’s stock increased by $18 billion. Human champions studied
Kasparov’s loss and were able to draw a few matches in subsequent years,
but the most recent human-computer matches have been won
convincingly by the computer.
State of the art
Spam fighting: Each day, learning algorithms classify over a billion messages
as spam, saving the recipient from having to waste time deleting what, for
many users, could comprise 80% or 90% of all messages, if not classified
away by algorithms. Because the spammers are continually updating their
tactics, it is difficult for a static programmed approach to keep up, and
learning algorithms work best (Sahami et al., 1998; Goodman and
Heckerman, 2004).
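A minimal sketch of the learning approach described above, as a tiny naive Bayes classifier. The six training messages are invented; the Bayesian filters cited learn from vastly larger collections. It compares, per class, the smoothed log-probability of the words in a message.

import math
from collections import Counter

spam = ["win money now", "cheap money offer", "win a prize now"]   # toy data
ham = ["meeting schedule today", "project status update", "lunch today"]

spam_counts = Counter(w for m in spam for w in m.split())
ham_counts = Counter(w for m in ham for w in m.split())
vocab = set(spam_counts) | set(ham_counts)

def log_prob(counts, words):
    # Laplace smoothing keeps unseen words from zeroing out the probability.
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def is_spam(message):
    words = message.split()
    # Equal numbers of spam and ham examples here, so the class priors cancel.
    return log_prob(spam_counts, words) > log_prob(ham_counts, words)

print(is_spam("win cheap money"))        # True
print(is_spam("project meeting today"))  # False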
State of the art
Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces
deployed a Dynamic Analysis and Replanning Tool, DART (Cross and
Walker, 1994), to do automated logistics planning and scheduling for
transportation. This involved up to 50,000 vehicles, cargo, and people at a
time, and had to account for starting points, destinations, routes, and
conflict resolution among all parameters.
The AI planning techniques generated in hours a plan that would have
taken weeks with older methods. The Defense Advanced Research
Project Agency (DARPA) stated that this single application more than paid
back DARPA’s 30-year investment in AI.
State of the art
Robotics: The iRobot Corporation has sold over two million Roomba robotic
vacuum cleaners for home use. The company also deploys the more
rugged PackBot to Iraq and Afghanistan, where it is used to handle
hazardous materials, clear explosives, and identify the location of snipers.
State of the art
Machine Translation: A computer program automatically translates from
Arabic to English, allowing an English speaker to see the headline “Ardogan
Confirms That Turkey Would Not Accept Any Pressure, Urging Them to
Recognize Cyprus.” The program uses a statistical model built from
examples of Arabic-to-English translations and from examples of
English text totaling two trillion words (Brants et al., 2007).
None of the computer scientists on the team speak Arabic, but they do
understand statistics and machine learning algorithms.
Recent Developments
In recent years, AI has witnessed rapid advancements in areas such as
reinforcement learning, generative models, and explainable AI.
Reinforcement learning focuses on training AI agents to make sequential
decisions by interacting with an environment and receiving feedback.
Generative models, such as generative adversarial networks (GANs), are
capable of generating realistic and creative outputs, such as images or
text. Explainable AI aims to develop AI systems that can provide
transparent and interpretable explanations for their decisions and actions,
promoting trust and accountability.
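To make the reinforcement learning idea concrete, here is a minimal Q-learning sketch on an invented five-state corridor (the states, reward, and hyperparameters are all illustrative): the agent acts, receives feedback from the environment, and updates its action-value estimates.

import random

random.seed(0)
N = 5                                  # corridor states 0..4; reward at state 4
Q = [[0.0, 0.0] for _ in range(N)]     # Q[state][action]; 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: move Q(s, a) toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([q.index(max(q)) for q in Q])    # learned policy: 1 (right) in states 0-3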
Recent Developments
As AI continues to evolve, researchers are actively exploring ethical
considerations, such as fairness, accountability, transparency, and privacy.
These concerns are crucial in ensuring that AI technologies are developed
and deployed in a responsible and beneficial manner.
