ARTIFICIAL INTELLIGENCE (AI)
WHAT IS AI?
 Artificial Intelligence (AI) refers to the development of
computer systems capable of performing tasks that typically
require human intelligence.
 The ultimate goal of AI is to create machines that can
emulate human cognitive capabilities and carry out diverse
tasks with enhanced efficiency and precision. The field of AI
holds the potential to revolutionize many aspects of our daily lives.
TYPES OF AI
 ANI (Artificial Narrow Intelligence): ANI, also known as Weak AI, refers to AI systems that are
designed and trained for a specific task or a narrow set of tasks. These systems operate within a limited
context and cannot perform tasks beyond their predefined scope. Examples include virtual personal
assistants like Siri, recommendation systems, and chatbots.
 AGI (Artificial General Intelligence): AGI, also known as Strong AI or Human-Level AI, refers to
AI systems that possess human-like cognitive abilities, including the ability to understand, learn, and
apply knowledge across different domains. AGI can perform any intellectual task that a human can do.
Achieving AGI remains a long-term goal in AI research and development.
 ASI (Artificial Super Intelligence): ASI refers to AI systems that surpass human intelligence in every
aspect. These hypothetical systems would be capable of outperforming humans in virtually every
cognitive task and could potentially lead to significant societal impacts. ASI is a concept often
discussed in speculative discussions about the future of AI, and it remains purely theoretical at present.
 Generative AI: Generative AI refers to AI systems that can generate new content, such as images, text,
music, or videos, that is original and not directly copied from existing data. Generative AI often utilizes
techniques such as deep learning and generative adversarial networks (GANs) to produce realistic and
creative outputs. Generative AI has applications in various fields, including art, design, and content
creation.
MACHINE LEARNING (ML) AND ITS TYPES
 Supervised Learning: Algorithms learn from labeled data, with input-output pairs provided. The model
generalizes patterns and makes predictions on unseen data. Examples: Classification (identifying whether an
email is spam or not spam); Regression (predicting the price of a house based on its features).
 Unsupervised Learning: Algorithms learn from unlabeled data to discover patterns or structures in the data.
The model identifies hidden patterns or groupings without explicit guidance. Examples: Clustering (grouping
customers based on their purchasing behavior); Anomaly detection (detecting fraudulent transactions).
 Semi-supervised Learning: Combines supervised and unsupervised learning, using a small amount of labeled
data with a large amount of unlabeled data. The model learns from the labeled data and the structure of the
unlabeled data. Example: Text classification, training a model with a small set of labeled news articles and a
large collection of unlabeled ones.
 Reinforcement Learning: Agents learn to make decisions by interacting with an environment. The model
receives feedback in the form of rewards or penalties, guiding it towards better decision-making. Examples:
Game playing (training a model to play chess by rewarding successful moves and penalizing mistakes);
Robotics (teaching a robot to navigate a maze by rewarding it for finding the correct path).
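To make supervised learning concrete, here is a minimal sketch: fitting a linear model on a tiny, made-up housing dataset (the sizes, room counts, and prices below are invented for illustration) with NumPy's least-squares solver.

```python
import numpy as np

# Hypothetical labeled data: each row is (size in m^2, number of rooms),
# and each target is a price in thousands. Input-output pairs are what
# makes this "supervised" learning.
X = np.array([[50.0, 2], [80.0, 3], [120.0, 4], [60.0, 2], [100.0, 3]])
y = np.array([150.0, 240.0, 360.0, 180.0, 300.0])

# Add a bias column and solve the least-squares problem.
X_b = np.hstack([X, np.ones((X.shape[0], 1))])
weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)

# Predict the price of an unseen house: the model generalizes to new data.
new_house = np.array([90.0, 3, 1.0])
predicted_price = new_house @ weights  # → 270.0 (the toy data follow price = 3 * size)
```

The same pattern, labeled examples in, a function that predicts on unseen inputs out, underlies classification as well; only the target type changes.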
DATA
 Data refers to raw facts, observations, or measurements,
typically collected and stored in a structured or unstructured format. It
can be numbers, text, images, sounds, or any other form of
information. In its raw form, data lacks context and meaning;
when processed and analyzed, however, it can yield valuable
insights that support decision-making, problem-solving, and the
understanding of phenomena in fields such as science, business,
healthcare, and more.
DATA LABELING METHODS
 Manual Labeling: Human annotators assign labels or tags to data points based on predefined criteria.
 Observing Behaviors: Behaviors or actions of subjects are observed and recorded in real-world scenarios.
 Semi-supervised Learning: Combines elements of supervised and unsupervised learning; some data points are
labeled, while others are not.
 Active Learning: Iteratively selects the most informative data points for labeling, typically using machine
learning models.
 Crowdsourcing: Outsourcing the labeling task to a large group of online workers, or "crowd," through
platforms like Amazon Mechanical Turk.
 Weak Supervision: Uses heuristics, rules, or noisy labels to automatically label large amounts of data.
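Weak supervision can be sketched in a few lines (the keyword rule and the messages below are invented for illustration): a simple heuristic acts as a noisy labeling function, assigning spam/ham labels to text no human has annotated.

```python
# Weak supervision sketch: a heuristic rule serves as a noisy labeling
# function, so large amounts of unlabeled text get labels automatically.
SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}

def weak_label(message: str) -> str:
    """Label a message 'spam' if any heuristic keyword appears, else 'ham'."""
    words = set(message.lower().split())
    return "spam" if words & SPAM_KEYWORDS else "ham"

unlabeled = [
    "You are a winner claim your free prize now",
    "Meeting moved to 3pm tomorrow",
    "URGENT respond to claim your reward",
]
labels = [weak_label(m) for m in unlabeled]  # → ['spam', 'ham', 'spam']
```

The labels are noisy by design; the point is that a rule written once can label data at a scale manual annotation cannot match, and a model trained on them can smooth over individual mistakes.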
DATA USE AND MISUSE
Uses of Data in AI:
 Training machine learning models
 Improving accuracy and efficiency
 Personalizing user experiences
 Enhancing decision-making processes
 Identifying patterns and trends
 Automating tasks and processes
 Predicting future outcomes
 Conducting research and analysis
 Generating insights for businesses
 Creating recommendation systems
 Enabling targeted advertising
 Facilitating medical diagnosis and treatment
 Improving cybersecurity measures
 Optimizing resource allocation
Misuses of Data in AI:
 Biased data leading to discriminatory outcomes
 Unauthorized data collection
 Data breaches and privacy violations
 Manipulation of data for political or commercial gain
 Data falsification or tampering
 Lack of transparency in data usage
 Excessive surveillance and monitoring
 Exploitation of sensitive personal information
 Monetization of user data without consent
 Creation of echo chambers or filter bubbles
 Tracking and profiling individuals without their knowledge
 Unauthorized sharing of medical data
 Creation of deepfake content for malicious purposes
 Discriminatory profiling based on protected characteristics
COMPARING AI, ML, DS, STAT, AND MATHS
 Mathematics (Maths): Provides the foundational tools and language for modeling complex systems and
developing algorithms. Applications: used in all of the other fields for modeling, optimization, and algorithm
development.
 Statistics (Stat): Deals with collecting, analyzing, interpreting, and presenting data; provides methods for
making inferences and predictions from data samples. Applications: hypothesis testing, regression analysis,
clustering, classification, and inference.
 Machine Learning (ML): Builds systems that can learn from data and improve over time without being
explicitly programmed. Applications: pattern recognition, prediction, decision-making, and automation across
many domains.
 Data Science (DS): Integrates statistics, machine learning, and domain expertise to extract insights and
knowledge from structured and unstructured data. Applications: data cleaning, exploratory data analysis,
feature engineering, model building, and deployment.
 Artificial Intelligence (AI): Aims to create systems capable of performing tasks that typically require human
intelligence, with machine learning as a key component. Applications: natural language processing, computer
vision, robotics, expert systems, and other areas of automation and decision support.
MACHINE LEARNING VS DEEP LEARNING
 Architecture: ML typically uses simpler algorithms and models; DL uses artificial neural networks with
multiple layers.
 Feature Engineering: ML requires manual feature engineering; DL automatically learns features from raw data.
 Data Requirement: ML works with moderate to large datasets; DL needs large datasets, often with high
dimensionality.
 Performance: ML may not perform as well on complex tasks; DL excels at complex tasks like image and
speech recognition.
 Interpretability: ML models are generally more interpretable; DL models are often considered "black boxes."
 Training Time: ML trains faster; DL has longer training times, especially with complex models.
 Hardware Dependency: ML is less hardware intensive; DL often requires powerful hardware (GPUs/TPUs)
for training.
 Examples: ML: linear regression, decision trees, SVMs, k-NN. DL: Convolutional Neural Networks (CNNs),
RNNs, Transformers.
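The feature-engineering contrast above can be illustrated with a small sketch (the signal values and chosen statistics are invented for illustration): in classical ML, a human decides which summary features to extract from raw data before any learning happens.

```python
import numpy as np

# Manual feature engineering, as classical ML typically requires:
# a human picks which statistics to extract from each raw signal.
def engineer_features(signal: np.ndarray) -> np.ndarray:
    """Reduce a raw signal to hand-picked features: mean, std, peak value."""
    return np.array([signal.mean(), signal.std(), signal.max()])

raw_signals = [
    np.array([0.1, 0.2, 0.1, 0.3]),
    np.array([2.0, 2.5, 1.5, 3.0]),
]
feature_matrix = np.vstack([engineer_features(s) for s in raw_signals])
# feature_matrix has one row of engineered features per raw signal.

# A deep network would instead consume the raw signals directly and
# learn its own internal features during training.
```

The quality of a classical model often hinges on how well these hand-picked features capture the task, which is why domain expertise matters so much on the ML side of the table.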
THE INTERNET HAS BEEN CRUCIAL TO AI'S
PROGRESS BECAUSE
 Lots of Data: The internet gives AI tons of information to learn from.
 Helps Label Data: People online can help mark data for AI to learn from.
 Teamwork: AI experts from everywhere can work together online.
 Big Computers: The internet lets AI use powerful computers from far away.
 Fast Processing: AI can quickly understand and respond to information online.
 Ready-Made Models: Online, there are already-made AI models for developers to use.
 Easy Sharing: The internet helps put AI tools where people can easily get them.
WORKFLOW OF
A MACHINE
LEARNING
PROJECT
1. Problem Definition: Define the problem you're trying to solve and determine whether it's suitable for
machine learning.
2. Data Collection: Gather relevant data from various sources, ensuring it's clean, relevant, and in the right
format.
3. Data Preprocessing: Clean the data by handling missing values and outliers, and encode categorical
variables if necessary.
4. Exploratory Data Analysis (EDA): Understand the data through visualizations and statistical summaries to
gain insights.
5. Feature Engineering: Create new features or transform existing ones to enhance the predictive power of
the model.
6. Model Selection: Choose appropriate machine learning algorithms based on the problem type and data
characteristics.
7. Model Training: Train the selected models on the training data and fine-tune hyperparameters to improve
performance.
8. Model Evaluation: Evaluate the models using appropriate metrics and cross-validation to assess
performance accurately.
9. Model Deployment: Deploy the trained model into production, ensuring scalability, reliability, and security.
10. Monitoring & Maintenance: Continuously monitor the deployed model's performance and update it as
needed with new data or improvements.
11. Documentation: Document the entire process, including data sources, preprocessing steps, and modeling
decisions, for future reference.
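The stages above can be sketched as a chain of functions passing a shared state along (the stage bodies and the toy dataset are placeholders, not a real project):

```python
# Sketch of an ML project workflow as a chain of stages, each taking and
# returning a state dict. Every function body here is a toy placeholder.
def collect_data(state):
    # Stage 2: gather raw (x, y) pairs; one row has a missing value.
    state["data"] = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (None, 5.0)]
    return state

def preprocess(state):
    # Stage 3: handle missing values by dropping incomplete rows.
    state["data"] = [(x, y) for x, y in state["data"] if x is not None]
    return state

def train(state):
    # Stage 7 (toy "model"): average slope y/x across the cleaned rows.
    state["model"] = sum(y / x for x, y in state["data"]) / len(state["data"])
    return state

def evaluate(state):
    # Stage 8: mean absolute error of the toy model on its own data.
    m = state["model"]
    errors = [abs(m * x - y) for x, y in state["data"]]
    state["mae"] = sum(errors) / len(errors)
    return state

state = {}
for stage in (collect_data, preprocess, train, evaluate):
    state = stage(state)
```

Real projects swap each placeholder for serious tooling, but the shape, stages that each consume and enrich a shared project state, stays the same.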
LIMITATIONS OF MACHINE LEARNING
1. Data Quality Matters: Machine learning needs good data. If the data is bad or
biased, the results will be too.
2. Learning Too Much: Sometimes, ML models learn too much from the data they're
given, making them too specific and unable to handle new situations.
3. Hard to Understand: ML models can be like black boxes, making it tough to
understand how they make decisions.
4. Big Data, Big Problem: Handling lots of data or complex data can be really hard
for ML algorithms.
5. Correlation ≠ Causation: ML can find patterns in data, but it's not great at
understanding why things happen, just that they do.
6. Tricked Easily: ML models can be fooled by small changes in data, leading to
wrong predictions.
7. Need Experts: ML often needs people who know a lot about the field to choose the
right data and set up the model correctly.
8. Fairness and Bias: ML can make biased decisions based on biased data, which can
be unfair and even illegal.
9. Privacy Matters: ML often uses sensitive data, so keeping it safe and private is a
big concern.
10. Learning is Hard: Many ML models can't keep learning after they're trained, so
they struggle to adapt to new situations.
WORKFLOW
OF A DATA
SCIENCE
PROJECT
1. Problem Definition: Clearly define the problem you aim to solve and its significance, ensuring alignment
with stakeholders' goals.
2. Data Acquisition: Gather relevant data from diverse sources, ensuring it's comprehensive, accurate, and
legally obtained.
3. Data Exploration: Explore the data to understand its structure, quality, and relationships through
visualizations and summaries.
4. Data Preprocessing: Cleanse the data by handling missing values, outliers, and inconsistencies, ensuring
it's ready for analysis.
5. Feature Engineering: Create new features or transform existing ones to improve model performance or
enhance insights.
6. Model Development: Develop statistical or machine learning models tailored to the problem, selecting
appropriate algorithms.
7. Model Evaluation: Assess model performance using relevant metrics, cross-validation, and comparison
against baselines or benchmarks.
8. Model Interpretation: Interpret model predictions or findings, understanding the factors influencing
outcomes and their implications.
9. Visualization: Communicate insights effectively through visualizations, helping stakeholders understand
complex findings intuitively.
10. Deployment: Deploy the model or analytical solution, ensuring it integrates seamlessly into existing
systems or workflows.
11. Monitoring & Maintenance: Continuously monitor model performance and data quality, updating models
as needed to maintain effectiveness.
12. Documentation: Document the entire project, including data sources, methodologies, findings, and
recommendations for future reference.
CPU VS GPU
 Purpose: CPU: general-purpose computation. GPU: specialized for parallel processing.
 Architecture: CPU: typically fewer cores, optimized for serial processing. GPU: many cores optimized for
parallel processing.
 Core Count: CPU: usually fewer cores (4 to 32). GPU: many cores (hundreds to thousands).
 Clock Speed: CPU: higher clock speeds (typically 3 to 5 GHz). GPU: lower per-core clock speeds (typically
1 to 2 GHz).
 Memory: CPU: smaller, faster caches. GPU: larger memory bandwidth, slower access time.
 Power Consumption: CPU: lower. GPU: higher.
 Flexibility: CPU: versatile, suitable for a wide range of tasks. GPU: specialized for graphics rendering, but
adaptable to other parallel tasks.
 Cost: CPU: generally more expensive per core. GPU: can be more cost-effective for parallel tasks.
 Machine Learning: CPU: slower for deep learning tasks due to its serial nature. GPU: highly efficient for
parallelized deep learning; widely used in neural network training and inference.
LIMITATIONS OF
ARTIFICIAL
INTELLIGENCE
 Lack of Creativity: AI struggles with tasks requiring creativity, intuition, or emotional understanding.
 Data Dependency: AI heavily relies on data for training and decision-making, which can lead to biased or
inaccurate results.
 Ethical Concerns: AI systems may perpetuate societal biases present in training data, posing ethical
challenges.
 Interpretability & Explainability: Many AI models are considered "black boxes," making it difficult to
interpret or explain their decisions.
 Limited Generalization: AI models may struggle to generalize knowledge to new or unseen scenarios,
leading to errors.
 Resource Intensiveness: Developing and training AI models requires significant computational resources
and data.
 Vulnerability to Adversarial Attacks: AI systems can be manipulated by adversaries to produce incorrect
outputs.
 Lack of Common Sense Understanding: AI often lacks a nuanced grasp of common-sense reasoning.
 Human Dependence for Oversight: AI systems may require human supervision to ensure safe and ethical
operation.
 Regulatory and Legal Challenges: Legal frameworks governing AI use are often lacking or insufficient.
COMPUTER VISION
 Computer vision is a field of artificial intelligence (AI) that enables machines to interpret and understand the
visual world. It involves developing algorithms and techniques to extract meaningful information from images or
videos. This information can range from simple tasks like object detection and recognition to more complex tasks
such as image segmentation, scene understanding, and even action recognition.
 Computer vision finds applications in various industries, including healthcare, automotive, agriculture, retail,
security, and more. Some common applications include facial recognition, autonomous vehicles, medical image
analysis, augmented reality, and quality inspection in manufacturing.
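As a small taste of what computer-vision algorithms do, here is a minimal sketch (the image is synthetic and the convolution is written by hand for clarity): detecting a vertical edge with a Sobel-style filter in NumPy.

```python
import numpy as np

# Synthetic 6x6 grayscale image: dark left half, bright right half,
# so there is a vertical edge between columns 2 and 3.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Sobel kernel that responds to horizontal intensity changes,
# i.e. vertical edges.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def convolve2d(img, kernel):
    """Naive 'valid' sliding-window filter (cross-correlation, as is
    conventional in vision libraries)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

edges = convolve2d(image, sobel_x)
# The strongest responses line up with the boundary between the halves.
edge_column = int(np.argmax(np.abs(edges).sum(axis=0)))
```

Classical pipelines stack many such hand-designed filters; convolutional neural networks instead learn the filter weights from data, which is exactly the bridge from this section to deep learning.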
COMPUTER VISION VS DEEP LEARNING
 Definition: CV: a field of study focused on enabling computers to interpret and understand visual
information from the real world. DL: a subset of machine learning in which artificial neural networks with
multiple layers (deep architectures) learn representations of data.
 Core Techniques: CV: image processing, feature extraction, object detection, image classification,
segmentation, and recognition. DL: neural networks, including convolutional neural networks (CNNs),
recurrent neural networks (RNNs), and deep belief networks (DBNs).
 Application Areas: CV: medical imaging, autonomous vehicles, surveillance, augmented reality, robotics,
satellite imagery analysis. DL: natural language processing (NLP), speech recognition, recommendation
systems, gaming, financial forecasting, healthcare diagnostics.
 Data Requirements: CV: requires labeled datasets for training and often relies on pre-defined features or
engineered representations. DL: needs large amounts of labeled data, but can automatically learn features
and representations from raw data.
 Performance: CV: highly dependent on the quality of feature extraction and engineering, often requiring
domain expertise. DL: can achieve state-of-the-art performance on many tasks when trained on large
datasets with sufficient computational resources.
 Interpretability: CV: generally more interpretable, as feature extraction and processing steps are explicit
and well-defined. DL: can be less interpretable due to the complex, hierarchical representations learned
automatically from data, often called "black box" models.
 Flexibility: CV: less flexible in adapting to new tasks without significant changes to feature extraction and
processing pipelines. DL: more flexible in adapting to new tasks and domains, since models can learn
relevant features directly from data.
 Computational Cost: CV: typically less computationally expensive, especially for simpler tasks and smaller
datasets. DL: can be computationally expensive, requiring powerful hardware (GPUs/TPUs) and large
amounts of data for training complex models.
GENERATIVE ADVERSARIAL NETWORKS
 Generative Adversarial Networks (GANs) in AI are an exciting and powerful class of machine learning models used for generating new data
samples that resemble a given dataset. GANs consist of two neural networks, namely the generator and the discriminator, which are trained
simultaneously through adversarial training.
 The generator network takes random noise as input and tries to generate synthetic data samples, such as images or text, that are indistinguishable
from real data. The discriminator network, on the other hand, tries to distinguish between real and fake data samples.
 During training, the generator aims to produce samples that are so realistic that the discriminator cannot differentiate them from real samples, while
the discriminator aims to become more accurate in distinguishing real from fake samples. This adversarial process drives both networks to improve
over time until the generator produces high-quality synthetic data.
 GANs have been used in various applications, including image generation, style transfer, super-resolution, image-to-image translation, and even
generating synthetic human faces. However, they also present challenges such as training instability and mode collapse, where the generator
produces limited diversity in its outputs. Nonetheless, GANs continue to be an active area of research in the AI community, with ongoing efforts to
improve their stability, diversity, and applicability.
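The adversarial loop can be sketched in a few lines. This is a deliberately tiny toy, not a practical GAN: a one-parameter generator shifts Gaussian noise, a logistic-regression discriminator scores scalars, and both are updated with hand-derived gradients on 1-D data.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0          # generator parameter: fake sample = noise + theta
w, b = 0.0, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(500):
    real = rng.normal(4.0, 1.0, batch)          # real data ~ N(4, 1)
    fake = rng.normal(0.0, 1.0, batch) + theta  # generator output

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real from fake.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), pushing fake samples
    # toward what the discriminator currently calls "real".
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

# theta should drift toward 4, the mean of the real data, as the
# two players push against each other.
```

Even this toy exhibits the training quirks mentioned above: the two updates tug in opposite directions, so the parameters oscillate around the equilibrium rather than settling cleanly, a miniature version of the instability seen in full-scale GANs.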
HAPPY
LEARNING!

More Related Content

PPTX
Introduction-to-Artificial Intelligence.pptx.pptx
PPTX
TensorFlow Event presentation08-12-2024.pptx
PPTX
Ai & ML workshop-1.pptx ppt presentation
PDF
AI-ML in Business: Unlocking Opportunities and Navigating Challenges
PPSX
Artificial intelligence
PPTX
Machine learning _new.pptx for a presentation
PPTX
What is Artificial Intelligence and Machine Learning (1).pptx
PDF
PPT1: Introduction to Artificial Intelligence, AI Applications and Advantages...
Introduction-to-Artificial Intelligence.pptx.pptx
TensorFlow Event presentation08-12-2024.pptx
Ai & ML workshop-1.pptx ppt presentation
AI-ML in Business: Unlocking Opportunities and Navigating Challenges
Artificial intelligence
Machine learning _new.pptx for a presentation
What is Artificial Intelligence and Machine Learning (1).pptx
PPT1: Introduction to Artificial Intelligence, AI Applications and Advantages...

Similar to Artificial intelligence ( AI ) | Guide (20)

PPTX
Lecture 2 - Types of AI and its different purposes
PDF
Introduction to Artificial Intelligence.pdf
PDF
Machine Learning Fundamentals.pdf - jntu
PDF
Artificial Intelligence PowerPoint Presentation Slide Template Complete Deck
PPTX
AI Ml Introduction with images and examples.pptx
PPTX
AI hype or reality
PDF
Day 1 wazz up ai
PPT
Recent trends in Artificial intelligence and Machine learning
PPTX
chapter-1-1.pptxmsdsdvsdvnsdvndsvdvds i.
PPTX
artificialintelligencedata driven analytics23.pptx
PPTX
Lecture 1.pptxgggggggggggggggggggggggggggggggggggggggggggg
PDF
Machine Learning Deep Learning AI and Data Science
PDF
Exploring AI as tools in your career.pdf
PDF
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
PPTX
machine learning introduction notes foRr
PDF
Artificial Intelligence Machine Learning Deep Learning PPT PowerPoint Present...
PDF
Intro to machine learning
PPTX
AI & Machine Learning – The Future of Technology
PPTX
Internship - Python - AI ML.pptx
PPTX
Internship - Python - AI ML.pptx
Lecture 2 - Types of AI and its different purposes
Introduction to Artificial Intelligence.pdf
Machine Learning Fundamentals.pdf - jntu
Artificial Intelligence PowerPoint Presentation Slide Template Complete Deck
AI Ml Introduction with images and examples.pptx
AI hype or reality
Day 1 wazz up ai
Recent trends in Artificial intelligence and Machine learning
chapter-1-1.pptxmsdsdvsdvnsdvndsvdvds i.
artificialintelligencedata driven analytics23.pptx
Lecture 1.pptxgggggggggggggggggggggggggggggggggggggggggggg
Machine Learning Deep Learning AI and Data Science
Exploring AI as tools in your career.pdf
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
machine learning introduction notes foRr
Artificial Intelligence Machine Learning Deep Learning PPT PowerPoint Present...
Intro to machine learning
AI & Machine Learning – The Future of Technology
Internship - Python - AI ML.pptx
Internship - Python - AI ML.pptx
Ad

More from To Sum It Up (20)

PPTX
Django Framework Interview Guide - Part 1
PPTX
On Page SEO (Search Engine Optimization)
PDF
Prompt Engineering | Beginner's Guide - For You
PPTX
Natural Language Processing (NLP) | Basics
PPTX
It's Machine Learning Basics -- For You!
PPTX
Polymorphism in Python
PDF
DSA Question Bank
PDF
Web API - Overview
PDF
CSS Overview
PDF
HTML Overview
PDF
EM Algorithm
PDF
User story mapping
PDF
User stories
PDF
Problem solving using computers - Unit 1 - Study material
PDF
Problem solving using computers - Chapter 1
PDF
Quality Circle | Case Study on Self Esteem | Team Opus Geeks.pdf
PPTX
Multimedia Content and Content Acquisition
PPTX
PHP Arrays_Introduction
PDF
System Calls - Introduction
PDF
Leadership
Django Framework Interview Guide - Part 1
On Page SEO (Search Engine Optimization)
Prompt Engineering | Beginner's Guide - For You
Natural Language Processing (NLP) | Basics
It's Machine Learning Basics -- For You!
Polymorphism in Python
DSA Question Bank
Web API - Overview
CSS Overview
HTML Overview
EM Algorithm
User story mapping
User stories
Problem solving using computers - Unit 1 - Study material
Problem solving using computers - Chapter 1
Quality Circle | Case Study on Self Esteem | Team Opus Geeks.pdf
Multimedia Content and Content Acquisition
PHP Arrays_Introduction
System Calls - Introduction
Leadership
Ad

Recently uploaded (20)

PDF
KodekX | Application Modernization Development
PDF
Machine learning based COVID-19 study performance prediction
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
PDF
Network Security Unit 5.pdf for BCA BBA.
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PPTX
MYSQL Presentation for SQL database connectivity
PPTX
Big Data Technologies - Introduction.pptx
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PPTX
sap open course for s4hana steps from ECC to s4
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Empathic Computing: Creating Shared Understanding
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
KodekX | Application Modernization Development
Machine learning based COVID-19 study performance prediction
Per capita expenditure prediction using model stacking based on satellite ima...
Network Security Unit 5.pdf for BCA BBA.
Mobile App Security Testing_ A Comprehensive Guide.pdf
Dropbox Q2 2025 Financial Results & Investor Presentation
Spectral efficient network and resource selection model in 5G networks
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
MYSQL Presentation for SQL database connectivity
Big Data Technologies - Introduction.pptx
Chapter 3 Spatial Domain Image Processing.pdf
Agricultural_Statistics_at_a_Glance_2022_0.pdf
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
20250228 LYD VKU AI Blended-Learning.pptx
“AI and Expert System Decision Support & Business Intelligence Systems”
sap open course for s4hana steps from ECC to s4
Digital-Transformation-Roadmap-for-Companies.pptx
Empathic Computing: Creating Shared Understanding
Reach Out and Touch Someone: Haptics and Empathic Computing

Artificial intelligence ( AI ) | Guide

  • 2. WHAT IS AI?  Artificial Intelligence (AI) refers to the development of computer systems of performing tasks that require human intelligence.  The ultimate goal of AI is to create machines that can emulate capabilities and carry out diverse tasks, with enhanced efficiency and precision. The field of AI holds potential to revolutionize aspects of our daily lives.
  • 3. TYPES OF AI  ANI (Artificial Narrow Intelligence): ANI, also known as Weak AI, refers to AI systems that are designed and trained for a specific task or a narrow set of tasks. These systems operate within a limited context and cannot perform tasks beyond their predefined scope. Examples include virtual personal assistants like Siri, recommendation systems, and chatbots.  AGI (Artificial General Intelligence): AGI, also known as Strong AI or Human-Level AI, refers to AI systems that possess human-like cognitive abilities, including the ability to understand, learn, and apply knowledge across different domains. AGI can perform any intellectual task that a human can do. Achieving AGI remains a long-term goal in AI research and development.
  • 4. TYPES OF AI  ASI (Artificial Super Intelligence): ASI refers to AI systems that surpass human intelligence in every aspect. These hypothetical systems would be capable of outperforming humans in virtually every cognitive task and could potentially lead to significant societal impacts. ASI is a concept often discussed in speculative discussions about the future of AI, and it remains purely theoretical at present.  Generative AI: Generative AI refers to AI systems that can generate new content, such as images, text, music, or videos, that is original and not directly copied from existing data. Generative AI often utilizes techniques such as deep learning and generative adversarial networks (GANs) to produce realistic and creative outputs. Generative AI has applications in various fields, including art, design, and content creation.
  • 5. MACHINE LEARNING (ML) AND ITS TYPES Type of Machine Learning Description Example Supervised Learning Algorithms learn from labeled data, with input-output pairs provided. The model generalizes patterns and makes predictions on unseen data. Classification: Identifying whether an email is spam or not spam. Regression: Predicting the price of a house based on its features. Unsupervised Learning Algorithms learn from unlabeled data to discover patterns or structures in the data. The model identifies hidden patterns or groupings without explicit guidance. Clustering: Grouping customers based on their purchasing behavior. Anomaly detection: Detecting fraudulent transactions. Semi-supervised Learning Combines supervised and unsupervised learning, using a small amount of labeled data with a large amount of unlabeled data. The model learns from the labeled data and the structure of the unlabeled data. Text classification: Training a model to classify news articles with a small set of labeled articles and a large collection of unlabeled articles. Reinforcement Learning Agents learn to make decisions by interacting with an environment. The model receives feedback in the form of rewards or penalties, guiding it towards better decision-making. Game playing: Training a model to play chess by rewarding successful moves and penalizing mistakes. Robotics: Teaching a robot to navigate a maze by rewarding it for finding the correct path.
  • 6. DATA  Data refers to raw facts, observations, or measurements that are typically collected and stored in a structured or unstructured format. It can be anything from numbers, text, images, sounds, or any other form of information. In its raw form, data lacks context and meaning. However, when processed and analyzed, data can provide valuable insights and information that can be used for decision-making, problem-solving, or understanding various phenomena in different fields such as science, business, healthcare, and more.
  • 7. DATA LABELING METHODS Data Labeling Method Description Manual Labeling Human annotators assign labels or tags to data points based on predefined criteria. Observing Behaviors Behaviors or actions of subjects are observed and recorded in real-world scenarios. Semi-supervised Learning Combines elements of supervised and unsupervised learning; some data points are labeled, while others are not. Active Learning Iteratively selects the most informative data points for labeling, typically using machine learning models. Crowdsourcing Outsourcing the labeling task to a large group of online workers or "crowd" through platforms like Amazon Mechanical Turk. Weak Supervision Utilizes heuristics, rules, or noisy labels to automatically label large amounts of data.
  • 8. DATA USE - MISUSE Use of Data in AI Misuse of Data in AI Training machine learning models Biased data leading to discriminatory outcomes Improving accuracy and efficiency Unauthorized data collection Personalizing user experiences Data breaches and privacy violations Enhancing decision-making processes Manipulation of data for political or commercial gain Identifying patterns and trends Data falsification or tampering Automating tasks and processes Lack of transparency in data usage Predicting future outcomes Excessive surveillance and monitoring Conducting research and analysis Exploitation of sensitive personal information Generating insights for businesses Monetization of user data without consent Creating recommendation systems Creation of echo chambers or filter bubbles Enabling targeted advertising Tracking and profiling individuals without their knowledge Facilitating medical diagnosis and treatment Unauthorized sharing of medical data Improving cybersecurity measures Creation of deepfake content for malicious purposes Optimizing resource allocation Discriminatory profiling based on protected characteristics
  • 9. COMPARING AI, ML, DS, STAT, AND MATHS
 Mathematics (Maths). Focus: provides the foundational tools and language for modeling complex systems and developing algorithms. Applications: used across all of these fields for modeling, optimization, and algorithm development.
 Statistics (Stat). Focus: collecting, analyzing, interpreting, and presenting data; provides methods for making inferences and predictions from data samples. Applications: hypothesis testing, regression analysis, clustering, classification, and inference.
 Machine Learning (ML). Focus: building systems that learn from data and improve over time without being explicitly programmed. Applications: pattern recognition, prediction, decision-making, and automation across many domains.
 Data Science (DS). Focus: integrates statistics, machine learning, and domain expertise to extract insights and knowledge from structured and unstructured data. Applications: data cleaning, exploratory data analysis, feature engineering, model building, and deployment.
 Artificial Intelligence (AI). Focus: creating systems capable of performing tasks that typically require human intelligence, with machine learning as a key component. Applications: natural language processing, computer vision, robotics, expert systems, and other areas of automation and decision support.
  • 10. MACHINE LEARNING VS DEEP LEARNING
 Architecture: ML typically uses simpler algorithms and models; DL uses artificial neural networks with multiple layers.
 Feature Engineering: ML requires manual feature engineering; DL automatically learns features from raw data.
 Data Requirement: ML works with moderate to large datasets; DL needs large, often high-dimensional datasets.
 Performance: ML may not perform as well on complex tasks; DL excels at complex tasks like image and speech recognition.
 Interpretability: ML models are generally more interpretable; DL models are often considered "black boxes".
 Training Time: ML trains faster; DL has longer training times, especially for complex models.
 Hardware Dependency: ML is less hardware-intensive; DL often requires powerful hardware (GPUs/TPUs) for training.
 Examples: ML includes linear regression, decision trees, SVMs, and k-NN; DL includes Convolutional Neural Networks (CNNs), RNNs, and Transformers.
  • 11. WHY THE INTERNET HAS BEEN SO IMPORTANT FOR ADVANCING AI
 Lots of data: the internet gives AI vast amounts of information to learn from.
 Data labeling: people online can help annotate data for AI to learn from.
 Collaboration: AI experts from everywhere can work together online.
 Remote computing: the internet lets AI use powerful computers from far away.
 Fast processing: AI can quickly understand and respond to information online.
 Ready-made models: pre-trained AI models are available online for developers to reuse.
 Easy distribution: the internet puts AI tools where people can easily get them.
  • 12. WORKFLOW OF A MACHINE LEARNING PROJECT
 1. Problem Definition: Define the problem you are trying to solve and determine whether it is suitable for machine learning.
 2. Data Collection: Gather relevant data from various sources, ensuring it is clean, relevant, and in the right format.
 3. Data Preprocessing: Clean the data by handling missing values and outliers, and encode categorical variables if necessary.
 4. Exploratory Data Analysis (EDA): Understand the data through visualizations and statistical summaries to gain insights.
 5. Feature Engineering: Create new features or transform existing ones to enhance the predictive power of the model.
 6. Model Selection: Choose appropriate machine learning algorithms based on the problem type and data characteristics.
 7. Model Training: Train the selected models on the training data and fine-tune hyperparameters to improve performance.
 8. Model Evaluation: Evaluate the models using appropriate metrics and cross-validation to assess performance accurately.
 9. Model Deployment: Deploy the trained model into production, ensuring scalability, reliability, and security.
 10. Monitoring & Maintenance: Continuously monitor the deployed model's performance and update it as needed with new data or improvements.
 11. Documentation: Document the entire process, including data sources, preprocessing steps,
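The stages above can be compressed into a deliberately tiny end-to-end sketch. Everything here is synthetic and illustrative: the "dataset" is generated, the "model" is a one-feature threshold placed at the midpoint of the two class means, and the evaluation is plain held-out accuracy.

```python
import random
import statistics

random.seed(0)

# Data collection: synthetic 1-D measurements for two classes (illustrative)
data = [(random.gauss(3, 1), 0) for _ in range(40)] + \
       [(random.gauss(7, 1), 1) for _ in range(40)]
random.shuffle(data)

# Preprocessing: drop invalid rows (none here, kept to show the stage)
data = [(x, y) for x, y in data if x is not None]

# Train/test split for honest evaluation
train, test = data[:60], data[60:]

# Model training: threshold at the midpoint of the two class means
mean0 = statistics.mean(x for x, y in train if y == 0)
mean1 = statistics.mean(x for x, y in train if y == 1)
threshold = (mean0 + mean1) / 2

# Model evaluation on held-out data
predict = lambda x: 1 if x > threshold else 0
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"threshold={threshold:.2f}, test accuracy={accuracy:.2f}")
```

A real project replaces each stage with far heavier machinery, but the shape of the pipeline (collect, clean, split, fit, evaluate) stays the same.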
  • 13. LIMITATIONS OF MACHINE LEARNING
 1. Data Quality Matters: Machine learning needs good data. If the data is bad or biased, the results will be too.
 2. Learning Too Much (Overfitting): ML models can learn the training data too closely, becoming too specific to it and unable to handle new situations.
 3. Hard to Understand: ML models can be like black boxes, making it tough to understand how they make decisions.
 4. Big Data, Big Problem: Handling very large or complex data can be hard for ML algorithms.
 5. Correlation ≠ Causation: ML can find patterns in data, but it is not good at understanding why things happen, just that they do.
 6. Tricked Easily: ML models can be fooled by small changes in data, leading to wrong predictions.
 7. Need Experts: ML often needs domain experts to choose the right data and set up the model correctly.
 8. Fairness and Bias: ML can make biased decisions based on biased data, which can be unfair and even illegal.
 9. Privacy Matters: ML often uses sensitive data, so keeping it safe and private is a big concern.
 10. Learning Is Hard: Many ML models cannot keep learning after they are trained, so they struggle to adapt to new situations.
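The "learning too much" limitation is easy to demonstrate. The toy experiment below uses a 1-nearest-neighbour classifier, which memorizes its training set perfectly (100% training accuracy by construction), yet does much worse on fresh data when the labels are noisy. The data is synthetic and the 30% noise rate is an arbitrary choice for illustration.

```python
import random

random.seed(1)

def noisy_point():
    x = random.random()
    y = int(x > 0.5)           # true rule
    if random.random() < 0.3:  # 30% label noise
        y = 1 - y
    return x, y

train = [noisy_point() for _ in range(50)]
test = [noisy_point() for _ in range(50)]

def predict_1nn(x, memory):
    # 1-nearest-neighbour: pure memorization of the training set
    return min(memory, key=lambda p: abs(p[0] - x))[1]

train_acc = sum(predict_1nn(x, train) == y for x, y in train) / len(train)
test_acc = sum(predict_1nn(x, train) == y for x, y in test) / len(test)
print(f"train accuracy={train_acc:.2f}, test accuracy={test_acc:.2f}")
```

The gap between the two numbers is the overfitting: the model has memorized the noise in the training labels rather than the underlying rule.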
  • 14. WORKFLOW OF A DATA SCIENCE PROJECT
 1. Problem Definition: Clearly define the problem you aim to solve and its significance, ensuring alignment with stakeholders' goals.
 2. Data Acquisition: Gather relevant data from diverse sources, ensuring it is comprehensive, accurate, and legally obtained.
 3. Data Exploration: Explore the data to understand its structure, quality, and relationships through visualizations and summaries.
 4. Data Preprocessing: Cleanse the data by handling missing values, outliers, and inconsistencies, ensuring it is ready for analysis.
 5. Feature Engineering: Create new features or transform existing ones to improve model performance or enhance insights.
 6. Model Development: Develop statistical or machine learning models tailored to the problem, selecting appropriate algorithms.
 7. Model Evaluation: Assess model performance using relevant metrics and cross-validation, comparing against baselines or benchmarks.
 8. Model Interpretation: Interpret model predictions or findings, understanding the factors influencing outcomes and their implications.
 9. Visualization: Communicate insights effectively through visualizations, helping stakeholders understand complex findings intuitively.
 10. Deployment: Deploy the model or analytical solution, ensuring it integrates seamlessly into existing systems or workflows.
 11. Monitoring & Maintenance: Continuously monitor model performance and data quality, updating models as needed to maintain effectiveness.
 12. Documentation: Document the entire project, including data sources, methodologies, findings, and recommendations for future reference.
  • 15. CPU VS GPU
 Purpose: CPU is built for general-purpose computation; GPU is specialized for parallel processing.
 Architecture: CPU has fewer cores optimized for serial processing; GPU has many cores optimized for parallel processing.
 Core Count: CPU usually has 4 to 32 cores; GPU has hundreds to thousands of cores.
 Clock Speed: CPU cores run at higher clock speeds (typically 3-5 GHz); GPU cores run at lower clock speeds (typically 1-2 GHz).
 Memory: CPU typically has smaller, faster caches; GPU has larger memory bandwidth but slower access times.
 Power Consumption: CPU is lower; GPU is higher.
 Flexibility: CPU is versatile and suits a wide range of tasks; GPU is specialized for graphics rendering but adaptable to other parallel workloads.
 Cost: CPU is generally more expensive per core; GPU can be more cost-effective for parallel tasks.
 Machine Learning: CPU is slower for deep learning because of its serial nature; GPU is highly efficient for parallelized deep learning and widely used for neural network training and inference.
  • 16. LIMITATIONS OF ARTIFICIAL INTELLIGENCE
 Lack of Creativity: AI struggles with tasks requiring creativity, intuition, or emotional understanding.
 Data Dependency: AI relies heavily on data for training and decision-making, which can lead to biased or inaccurate results.
 Ethical Concerns: AI systems may perpetuate societal biases present in training data, posing ethical challenges.
 Interpretability & Explainability: Many AI models are "black boxes", making their decisions difficult to interpret or explain.
 Limited Generalization: AI models may struggle to generalize knowledge to new or unseen scenarios, leading to errors.
 Resource Intensiveness: Developing and training AI models requires significant computational resources and data.
 Vulnerability to Adversarial Attacks: AI systems can be manipulated by adversaries into producing incorrect outputs.
 Lack of Common Sense: AI often lacks a nuanced understanding of common-sense reasoning.
 Human Oversight: AI systems may require human supervision to ensure safe and ethical operation.
 Regulatory and Legal Challenges: Legal frameworks governing AI use are often lacking or insufficient.
  • 17. COMPUTER VISION
 Computer vision is a field of artificial intelligence (AI) that enables machines to interpret and understand the visual world. It involves developing algorithms and techniques to extract meaningful information from images or videos, ranging from object detection and recognition to more complex tasks such as image segmentation, scene understanding, and action recognition.
 Computer vision is applied across industries including healthcare, automotive, agriculture, retail, and security. Common applications include facial recognition, autonomous vehicles, medical image analysis, augmented reality, and quality inspection in manufacturing.
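A core building block behind many of these tasks is convolution with a small kernel. The sketch below applies a Sobel-style vertical-edge kernel to a tiny hand-written 5x5 "image" (the intensity values are invented for illustration); the output responds strongly only where intensity jumps, which is how classical feature extraction finds edges.

```python
# Feature extraction sketch: a 3x3 vertical-edge kernel slid over a tiny
# grayscale image (intensities 0-9 are illustrative, not real pixel data).

image = [
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
]
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]  # Sobel-style vertical-edge detector

def convolve(img, k):
    # "Valid"-mode cross-correlation, the convention most CV libraries
    # actually implement under the name "convolution"
    out = []
    for i in range(len(img) - 2):
        row = []
        for j in range(len(img[0]) - 2):
            row.append(sum(img[i + u][j + v] * k[u][v]
                           for u in range(3) for v in range(3)))
        out.append(row)
    return out

response = convolve(image, kernel)
print(response)  # strong responses only where the intensity jumps
```

Deep learning replaces this hand-chosen kernel with many kernels whose weights are learned from data, but the sliding-window operation is the same.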
  • 18. COMPUTER VISION VS DEEP LEARNING
 Definition: Computer vision is a field of study focused on enabling computers to interpret and understand visual information from the real world; deep learning is a subset of machine learning in which artificial neural networks with multiple layers (deep architectures) learn representations of data.
 Core Techniques: CV uses image processing, feature extraction, object detection, image classification, segmentation, and recognition; DL uses neural networks, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and deep belief networks (DBNs).
 Application Areas: CV covers medical imaging, autonomous vehicles, surveillance, augmented reality, robotics, and satellite imagery analysis; DL covers natural language processing (NLP), speech recognition, recommendation systems, gaming, financial forecasting, and healthcare diagnostics.
 Data Requirements: CV requires labeled datasets for training and often relies on predefined or engineered features; DL needs large amounts of labeled data but can learn features and representations automatically from raw data.
 Performance: CV performance depends heavily on the quality of feature extraction and engineering, often requiring domain expertise; DL can achieve state-of-the-art performance on many tasks when trained on large datasets with sufficient computational resources.
 Interpretability: CV pipelines are generally more interpretable because their feature extraction and processing steps are explicit and well defined; DL models are less interpretable because of the complex, hierarchical representations they learn automatically, and are often called "black-box" models.
 Flexibility: CV is less flexible in adapting to new tasks without significant changes to its feature extraction and processing pipelines; DL adapts more readily to new tasks and domains because models learn relevant features directly from data.
 Computational Cost: CV is typically less computationally expensive, especially for simpler tasks and smaller datasets; DL can be expensive, requiring powerful hardware (GPUs/TPUs) and large amounts of data to train complex models.
  • 19. GENERATIVE ADVERSARIAL NETWORKS
 Generative Adversarial Networks (GANs) are a powerful class of machine learning models used to generate new data samples that resemble a given dataset. A GAN consists of two neural networks, a generator and a discriminator, trained simultaneously through adversarial training.
 The generator takes random noise as input and tries to produce synthetic samples, such as images or text, that are indistinguishable from real data. The discriminator, in turn, tries to distinguish real samples from fake ones.
 During training, the generator aims to produce samples so realistic that the discriminator cannot tell them from real data, while the discriminator aims to become more accurate at separating real from fake. This adversarial process drives both networks to improve until the generator produces high-quality synthetic data.
 GANs have been used for image generation, style transfer, super-resolution, image-to-image translation, and even synthesizing human faces. They also present challenges such as training instability and mode collapse, where the generator produces outputs with limited diversity. GANs remain an active area of research, with ongoing work to improve their stability, diversity, and applicability.
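The adversarial loop can be sketched at toy scale. The code below is deliberately not a real GAN: the "generator" is a single shift parameter on 1-D noise, the "discriminator" is a logistic unit, and the gradients are derived by hand, all on an invented dataset centred on 4.0. It exists only to show the alternating discriminator-ascent / generator-ascent structure described above.

```python
import math
import random

random.seed(0)
REAL_MEAN = 4.0          # the "dataset": noisy samples centred on 4.0 (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, b = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + b)
mu = 0.0                 # generator G(z) = z + mu shifts noise toward the data
lr = 0.05

for _ in range(2000):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = random.gauss(0.0, 0.5) + mu

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake), to fool the discriminator
    d_fake = sigmoid(w * fake + b)
    mu += lr * (1 - d_fake) * w

print(f"generator shift mu={mu:.2f} (real data is centred on {REAL_MEAN})")
```

With this adversarial pressure the generator's shift drifts toward the real mean, even though it never sees a real sample directly, only the discriminator's reaction to its fakes. Practical GANs replace both players with deep networks trained by automatic differentiation.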