IoT – Applications and Machine Learning
Organized by the PG & Research Department of Computer Applications
Organizing Secretary: Dr. R. Gunavathi, Head & Associate Professor, PG & Research Department of Computer Applications
By Sushama, Assistant Professor (CSE), JECRC University
“Learning is any process by which a system
improves performance from experience.”
- Herbert Simon
Definition by Tom Mitchell (1998):
Machine Learning is the study of algorithms that
• improve their performance P
• at some task T
• with experience E.
A well-defined learning task is given by <P, T, E>.
[Diagram: Traditional programming feeds Data and a Program into a Computer to produce Output; Machine Learning feeds Data and Output into a Computer to produce a Program.]
ML is used when:
• Human expertise does not exist (navigating on Mars)
• Humans can’t explain their expertise (speech
recognition)
• Models must be customized (personalized medicine)
• Models are based on huge amounts of data
(genomics)
Learning isn’t always useful:
• There is no need to “learn” to calculate payroll
Slide credit: Geoffrey Hinton
• Recognizing patterns:
– Facial identities or facial expressions
– Handwritten or spoken words
– Medical images
• Generating patterns:
– Generating images or motion sequences
• Recognizing anomalies:
– Unusual credit card transactions
– Unusual patterns of sensor readings in a nuclear
power plant
• Prediction:
– Future stock prices or currency exchange rates
• Web search
• Computational biology
• Finance
• E-commerce
• Space exploration
• Robotics
• Information extraction
• Social networks
• Debugging software
• [Your favorite area]
“Machine Learning: Field of study that gives
computers the ability to learn without being
explicitly programmed.” – Arthur Samuel (1959)
Improve on task T, with respect to
performance metric P, based on experience E
T: Playing checkers
P: Percentage of games won against an arbitrary opponent
E: Playing practice games against itself
T: Recognizing hand-written words
P: Percentage of words correctly classified
E: Database of human-labeled images of handwritten words
T: Driving on four-lane highways using vision sensors
P: Average distance traveled before a human-judged error
E: A sequence of images and steering commands recorded while
observing a human driver.
T: Categorize email messages as spam or legitimate.
P: Percentage of email messages correctly classified.
E: Database of emails, some with human-given labels
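As a concrete rendering of the last <T, P, E> triple, here is a minimal sketch with scikit-learn. The four emails and their labels are invented for illustration; a real E would be a large labeled corpus.

```python
# A minimal sketch of the spam task as <T, P, E>, using scikit-learn.
# The emails and labels are invented; a real E would be a large corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

emails = ["win a free prize now", "meeting agenda attached",
          "cheap pills online", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                      # E: emails with human-given labels

X = CountVectorizer().fit_transform(emails)
model = MultinomialNB().fit(X, labels)     # T: categorize spam vs. legitimate

preds = model.predict(X)
print(accuracy_score(labels, preds))       # P: fraction correctly classified
```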
• Nevada made it legal for
autonomous cars to drive
on roads in June 2011
• As of 2013, four states
(Nevada, Florida, California,
and Michigan) have
legalized autonomous cars
Penn’s Autonomous Car (Ben Franklin Racing Team)
[Slide: Stanley, Stanford’s autonomous car – laser terrain mapping, learning from human drivers, adaptive vision, path planning. Images and movies taken from Sebastian Thrun’s multimedia website.]
[Diagram: deep learning feature hierarchy – pixels → edges → object parts (combinations of edges) → object models]
• Trained on 4 classes (cars, faces, motorbikes, airplanes).
• Second layer: shared features and object-specific features.
• Third layer: more specific features.
[Farabet et al., ICML 2012; PAMI 2013]
[Figure: input images; samples from feedforward inference (control); samples from full posterior inference. Generating posterior samples from faces by “filling in” experiments (cf. Lee and Mumford, 2003); combines bottom-up and top-down inference.]
Slide credit: Andrew Ng
A Typical Speech Recognition System
ML is used to predict phone states from the sound spectrogram. Deep learning gives state-of-the-art results:

# Hidden Layers     1     2     4     8     10    12
Word Error Rate %   16.0  12.8  11.4  10.9  11.0  11.1

Baseline GMM performance: 15.4% word error rate
[Zeiler et al., “On rectified linear units for speech recognition,” ICASSP 2013]
Slide credit: Li Deng, Microsoft
• Supervised (inductive) learning
– Given: training data + desired outputs (labels)
• Unsupervised learning
– Given: training data (without desired outputs)
• Semi-supervised learning
– Given: training data + a few desired outputs
• Reinforcement learning
– Rewards from sequence of actions
• Given (x1, y1), (x2, y2), ..., (xn, yn)
• Learn a function f(x) to predict y given x
– y is real-valued == regression
[Plot: September Arctic sea ice extent (10^6 sq km) vs. year, 1970–2020]
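A minimal regression sketch: fit f(x) = w*x + b by least squares. The data points below are invented stand-ins, not the actual sea-ice record.

```python
# Least-squares line fit on invented (year, extent) points.
import numpy as np

years = np.array([1980.0, 1990.0, 2000.0, 2010.0])
extent = np.array([7.8, 6.2, 6.3, 4.9])     # 10^6 sq km, illustrative

w, b = np.polyfit(years, extent, deg=1)     # fit f(x) = w*x + b
print(w * 2020 + b)                         # predicted extent for 2020
```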
• Given (x1, y1), (x2, y2), ..., (xn, yn)
• Learn a function f(x) to predict y given x
– y is categorical == classification
[Plot: breast cancer (malignant/benign) – y = 1 (malignant) or 0 (benign) vs. tumor size]
Projected onto the tumor-size axis alone, the learned classifier becomes a decision threshold:
[Plot: tumor size axis with a learned threshold – predict benign below it, predict malignant above it]
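A minimal classification sketch: logistic regression learns such a threshold from labeled tumor sizes. The sizes and labels below are invented for illustration.

```python
# Logistic regression on a single feature (tumor size); invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression

sizes = np.array([[1.0], [1.5], [2.0], [3.0], [3.5], [4.0]])
labels = np.array([0, 0, 0, 1, 1, 1])        # 0 = benign, 1 = malignant

clf = LogisticRegression().fit(sizes, labels)
print(clf.predict([[2.8]]))                  # class for a new tumor size
print(clf.predict_proba([[2.8]]))            # and its probability
```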
[Plot: tumor size vs. age for the same task]
• x can be multi-dimensional
– Each dimension corresponds to an attribute, e.g., clump thickness, uniformity of cell size, uniformity of cell shape, …
• Given x1, x2, ..., xn (without labels)
• Output hidden structure behind the x’s
– E.g., clustering
[Figure: individuals × genes matrix]
Genomics application: group individuals by genetic similarity
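A minimal clustering sketch with k-means; random stand-in data take the place of a real individuals-by-genes matrix.

```python
# k-means clustering on random stand-in data (100 individuals, 20 features).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))               # stand-in genotype matrix

groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(groups[:10])                           # cluster label per individual
```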
Other clustering applications:
• Organize computing clusters
• Social network analysis
• Astronomical data analysis
• Market segmentation
Image credit: NASA/JPL-Caltech/E. Churchwell (Univ. of Wisconsin, Madison)
• Independent component analysis – separate a
combined signal into its original sources
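A minimal ICA sketch: mix two synthetic source signals, then recover them with scikit-learn’s FastICA. The signals and mixing matrix are invented for illustration.

```python
# Mix two synthetic sources, then separate them with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two sources
mixing = np.array([[1.0, 0.5], [0.5, 1.0]])
observed = sources @ mixing.T                            # combined signals

recovered = FastICA(n_components=2, random_state=0).fit_transform(observed)
print(recovered.shape)                                   # (2000, 2) estimates
```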
• Given a sequence of states and actions with (delayed) rewards, output a policy
– A policy is a mapping from states to actions that tells you what to do in a given state
• Examples:
– Credit assignment problem
– Game playing
– Robot in a maze
– Balance a pole on your hand
Agent and environment interact at discrete time steps t = 0, 1, 2, …
• Agent observes state at step t: s_t ∈ S
• produces action at step t: a_t ∈ A(s_t)
• gets resulting reward r_{t+1}
• and resulting next state s_{t+1}
[Diagram: … s_t, a_t → r_{t+1}, s_{t+1}, a_{t+1} → r_{t+2}, s_{t+2}, a_{t+2} → r_{t+3}, s_{t+3}, a_{t+3} → …]
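As one concrete sketch of this interaction loop, here is minimal tabular Q-learning (one standard way to learn a policy from rewards; the corridor environment and hyperparameters below are invented for illustration).

```python
# Tabular Q-learning on an invented 1-D corridor: states 0..4,
# reward 1 for reaching state 4.
import random

n_states, actions = 5, [-1, +1]                 # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1               # illustrative hyperparameters

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # update toward reward plus discounted best next value
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: best action in each state (should be "move right")
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states)])
```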
• Learn policy from user demonstrations
• Choose the training experience
• Choose exactly what is to be learned
– i.e. the target function
• Choose how to represent the target
function
• Choose a learning algorithm to infer the
target function from the experience
[Diagram: Environment/Experience → Learner → Knowledge → Performance Element, with training data feeding the learner and testing data the performance element.]
• We generally assume that the training and test
examples are independently drawn from the
same overall distribution of data
– We call this “i.i.d” which stands for “independent and
identically distributed”
• If examples are not independent, collective classification is required
• If the test distribution is different, transfer learning is required
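A minimal sketch of the i.i.d. assumption in practice: train and test sets drawn from the same data by a random split (the iris dataset here is purely illustrative).

```python
# Random train/test split: same distribution, disjoint samples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(len(X_train), len(X_test))
```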
• Tens of thousands of machine learning algorithms
– Hundreds of new ones every year
• Every ML algorithm has three
components:
– Representation
– Optimization
– Evaluation
• Numerical functions
– Linear regression
– Neural networks
– Support vector machines
• Symbolic functions
– Decision trees
– Rules in propositional logic
– Rules in first-order predicate logic
• Instance-based functions
– Nearest-neighbor
– Case-based
• Probabilistic Graphical Models
– Naïve Bayes
– Bayesian networks
– Hidden-Markov Models (HMMs)
– Probabilistic Context Free Grammars
(PCFGs)
– Markov networks
• Gradient descent (a minimal sketch follows this list)
– Perceptron
– Backpropagation
• Dynamic Programming
– HMM Learning
– PCFG Learning
• Divide and Conquer
– Decision tree induction
– Rule learning
• Evolutionary Computation
– Genetic Algorithms (GAs)
– Genetic Programming (GP)
– Neuro-evolution
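To make the first optimization family concrete, here is a minimal gradient-descent sketch; the toy data and learning rate are invented for illustration.

```python
# Fit w, b for f(x) = w*x + b by minimizing mean squared error.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # underlying line: y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05                    # illustrative learning rate

for step in range(2000):
    err = (w * x + b) - y                    # prediction error
    w -= lr * 2 * np.mean(err * x)           # gradient step for w
    b -= lr * 2 * np.mean(err)               # gradient step for b

print(w, b)                                  # approaches 2.0 and 1.0
```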
• Accuracy
• Precision and recall
• Squared error
• Likelihood
• Posterior probability
• Cost / Utility
• Margin
• Entropy
• K-L divergence, etc. (a few of these are computed in the sketch below)
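A minimal sketch computing a few of the criteria above with scikit-learn; the labels are invented.

```python
# Accuracy, precision/recall, and squared error on invented labels.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, mean_squared_error)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

print(accuracy_score(y_true, y_pred))        # fraction correct
print(precision_score(y_true, y_pred))       # predicted positives correct
print(recall_score(y_true, y_pred))          # true positives recovered
print(mean_squared_error(y_true, y_pred))    # squared-error view
```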
• Understand the domain, prior knowledge, and goals
• Data integration, selection, cleaning, pre-processing, etc.
• Learn models
• Interpret results
• Consolidate and deploy discovered knowledge
• Loop back as needed (a minimal sketch of the middle steps follows)
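A minimal sketch of the pre-process, learn, and evaluate steps of this loop as a scikit-learn pipeline; the dataset and model choices are purely illustrative.

```python
# Pre-process -> learn -> evaluate as one pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)               # learn a model
print(model.score(X_te, y_te))      # evaluate / interpret results
```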
• Learning can be viewed as using direct or
indirect experience to approximate a chosen
target function.
• Function approximation can be viewed as a
search through a space of hypotheses
(representations of functions) for one that best
fits a set of training data.
• Different learning methods assume different
hypothesis spaces (representation languages)
and/or employ different search techniques.
• 1950s
– Samuel’s checker player
– Selfridge’s Pandemonium
• 1960s:
– Neural networks: Perceptron
– Pattern recognition
– Learning in the limit theory
– Minsky and Papert prove limitations of Perceptron
• 1970s:
– Symbolic concept induction
– Winston’s arch learner
– Expert systems and the knowledge acquisition
bottleneck
– Quinlan’s ID3
– Michalski’s AQ and soybean diagnosis
– Scientific discovery with BACON
– Mathematical discovery with AM
• 1980s:
– Advanced decision tree and rule learning
– Explanation-based Learning (EBL)
– Learning and planning and problem solving
– Utility problem
– Analogy
– Cognitive architectures
– Resurgence of neural networks (connectionism,
backpropagation)
– Valiant’s PAC Learning Theory
– Focus on experimental methodology
• 1990s
– Data mining
– Adaptive software agents and web applications
– Text learning
– Reinforcement learning (RL)
– Inductive Logic Programming (ILP)
– Ensembles: Bagging, Boosting, and Stacking
– Bayes Net learning
• 2000s
– Support vector machines & kernel methods
– Graphical models
– Statistical relational learning
– Transfer learning
– Sequence labeling
– Collective classification and structured outputs
– Computer Systems Applications (Compilers, Debugging, Graphics, Security)
– E-mail management
– Personalized assistants that learn
– Learning in robotics and vision
• 2010s
– Deep learning systems
– Learning for big data
– Bayesian methods
– Multi-task & lifelong learning
– Applications to vision, speech, social networks, learning to read, etc.
– ???
• Supervised learning
– Decision tree induction
– Linear regression
– Logistic regression
– Support vector machines & kernel methods
– Model ensembles
– Bayesian learning
– Neural networks & deep learning
– Learning theory
• Unsupervised learning
– Clustering
– Dimensionality reduction
• Reinforcement learning
– Temporal difference learning
– Q learning
• Evaluation
• Applications