Data Mining
Course Overview
About the course – Administrivia
- Instructor: George Kollios, gkollios@cs.bu.edu
  MCS 288, Mon 2:30-4:00PM and Tue 10:25-11:55AM
- Home page: http://www.cs.bu.edu/fac/gkollios/dm07
  Check frequently! Syllabus, schedule, assignments, announcements…
Grading
- Programming projects (3): 35%
- Homework sets (3): 15%
- Midterm: 20%
- Final: 30%
Data Mining Overview
- Data warehouses and OLAP (On-Line Analytical Processing)
- Association Rule Mining
- Clustering: hierarchical and partition approaches
- Classification: decision trees and Bayesian classifiers
- Sequential Pattern Mining
- Advanced topics: graph mining, privacy-preserving data mining, outlier detection, spatial data mining
What is Data Mining?
Data mining is:
1. The efficient discovery of previously unknown, valid, potentially useful, understandable patterns in large datasets
2. The analysis of (often large) observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner
Overview of terms
- Data: a set of facts (items) D, usually stored in a database
- Pattern: an expression E in a language L that describes a subset of the facts in D
- Attribute: a field in an item i in D
- Interestingness: a function I_{D,L} that maps an expression E in L into a measure space M
Overview of terms
- The data mining task: for a given dataset D, language of facts L, interestingness function I_{D,L}, and threshold c, efficiently find the expressions E such that I_{D,L}(E) > c.
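As a toy sketch of this task (all names and the frequency-based interestingness measure below are illustrative, not from the slides): enumerate candidate expressions, score each with the interestingness function, and keep those scoring above c.

```python
# Generic mining loop: keep every pattern E with I(E) > c.
# The dataset, candidate patterns, and support measure are made-up examples.

def mine(dataset, candidate_patterns, interestingness, c):
    """Return every pattern E such that interestingness(E, dataset) > c."""
    return [e for e in candidate_patterns if interestingness(e, dataset) > c]

# Toy instance: patterns are single items, interestingness is relative frequency.
transactions = [{"bread", "milk"}, {"bread"}, {"milk", "beer"}, {"bread", "milk"}]
items = ["bread", "milk", "beer"]

def support(item, data):
    return sum(item in t for t in data) / len(data)

frequent = mine(transactions, items, support, c=0.5)  # ['bread', 'milk']
```

Real algorithms avoid enumerating all candidate expressions explicitly; the "efficiently" in the task statement is where most of the course's techniques come in.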
Knowledge Discovery
Examples of Large Datasets
- Government: IRS, NGA, …
- Large corporations
  - WALMART: 20M transactions per day
  - MOBIL: 100 TB geological databases
  - AT&T: 300M calls per day
  - Credit card companies
- Scientific
  - NASA, EOS project: 50 GB per hour
  - Environmental datasets
Examples of Data Mining Applications
1. Fraud detection: credit cards, phone cards
2. Marketing: customer targeting
3. Data Warehousing: Walmart
4. Astronomy
5. Molecular biology
How Data Mining is used
1. Identify the problem
2. Use data mining techniques to transform the
data into information
3. Act on the information
4. Measure the results
The Data Mining Process
1. Understand the domain
2. Create a dataset:
 Select the interesting attributes
 Data cleaning and preprocessing
3. Choose the data mining task and the specific
algorithm
4. Interpret the results, and possibly return to step 2
Origins of Data Mining
- Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems
- Must address:
  - Enormity of data
  - High dimensionality of data
  - Heterogeneous, distributed nature of data
[Diagram: data mining at the intersection of AI/machine learning, statistics, and database systems]
Data Mining Tasks
1. Classification: learning a function that maps an item into one of a set of predefined classes
2. Regression: learning a function that maps an item to a real value
3. Clustering: identifying a set of groups of similar items
4. Dependencies and associations: identifying significant dependencies between data attributes
5. Summarization: finding a compact description of the dataset or a subset of the dataset
Data Mining Methods
1. Decision tree classifiers: used for modeling and classification
2. Association rules: used to find associations between sets of attributes
3. Sequential patterns: used to find temporal associations in time series
4. Hierarchical clustering: used to group customers, web users, etc.
Why Data Preprocessing?
- Data in the real world is dirty
  - incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  - noisy: containing errors or outliers
  - inconsistent: containing discrepancies in codes or names
- No quality data, no quality mining results!
  - Quality decisions must be based on quality data
  - A data warehouse needs consistent integration of quality data
- Required for both OLAP and Data Mining!
Why can Data be Incomplete?
- Attributes of interest are not available (e.g., customer information for sales transaction data)
- Data were not considered important at the time of the transactions, so they were not recorded!
- Data not recorded because of misunderstanding or malfunctions
- Data may have been recorded and later deleted!
- Missing/unknown values for some data
Data Cleaning
- Data cleaning tasks:
  - Fill in missing values
  - Identify outliers and smooth out noisy data
  - Correct inconsistent data
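The three tasks above can be illustrated in a few lines of plain Python; the records, fill strategies, and outlier rule below are made-up examples, not prescriptions from the course.

```python
# A small, dependency-free illustration of the three cleaning tasks.
# Records and rules are invented for illustration.
from statistics import median

records = [{"age": 25,   "city": "NY"},
           {"age": None, "city": "ny"},
           {"age": 31,   "city": "LA"},
           {"age": 29,   "city": None},
           {"age": 1200, "city": "LA"}]   # 1200 looks like a data-entry error

# 1. Fill in missing values (numeric: median of known values; categorical: a default).
known_ages = [r["age"] for r in records if r["age"] is not None]
for r in records:
    if r["age"] is None:
        r["age"] = median(known_ages)
    if r["city"] is None:
        r["city"] = "UNKNOWN"

# 2. Identify outliers with a crude rule: more than 3x the median age.
outliers = [r for r in records if r["age"] > 3 * median(known_ages)]

# 3. Correct inconsistent codes ("ny" vs "NY") by normalizing case.
for r in records:
    r["city"] = r["city"].upper()
```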
Classification: Definition
- Given a collection of records (the training set)
  - Each record contains a set of attributes; one of the attributes is the class.
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
  - A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
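A minimal sketch of this protocol, with a deliberately trivial stand-in "model" (majority-class prediction) just to show the split/learn/validate flow; the records and the 70/30 split ratio are assumptions.

```python
# Split labeled data, "learn" on the training set, measure accuracy on the
# held-out test set. The records and the majority-class model are illustrative.
import random

records = [({"income": 125}, "No"),  ({"income": 100}, "No"),
           ({"income": 70},  "No"),  ({"income": 95},  "Yes"),
           ({"income": 85},  "Yes"), ({"income": 60},  "No")]

random.seed(0)
random.shuffle(records)
split = int(0.7 * len(records))
train, test = records[:split], records[split:]

# "Learn" a model: predict the majority class of the training set.
labels = [y for _, y in train]
majority = max(set(labels), key=labels.count)

# Evaluate on the held-out test set.
accuracy = sum(majority == y for _, y in test) / len(test)
```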
Classification Example

Training Set:
Tid  Home Owner  Marital Status  Taxable Income  Default
1    Yes         Single          125K            No
2    No          Married         100K            No
3    No          Single          70K             No
4    Yes         Married         120K            No
5    No          Divorced        95K             Yes
6    No          Married         60K             No
7    Yes         Divorced        220K            No
8    No          Single          85K             Yes
9    No          Married         75K             No
10   No          Single          90K             Yes

Test Set:
Home Owner  Marital Status  Taxable Income  Default
No          Single          75K             ?
Yes         Married         50K             ?
No          Married         150K            ?
Yes         Divorced        90K             ?
No          Single          40K             ?
No          Married         80K             ?

Learn a classifier (the model) from the training set; apply it to the test set.
Example of a Decision Tree

Training data: the ten records above (Home Owner, Marital Status, Taxable Income, Default).

Model (decision tree); the splitting attributes are HO (Home Owner), MarSt (Marital Status), and TaxInc (Taxable Income):

HO = Yes → NO
HO = No:
  MarSt = Married → NO
  MarSt = Single or Divorced:
    TaxInc < 80K → NO
    TaxInc > 80K → YES
Another Example of a Decision Tree

The same ten training records also fit this tree:

MarSt = Married → NO
MarSt = Single or Divorced:
  HO = Yes → NO
  HO = No:
    TaxInc < 80K → NO
    TaxInc > 80K → YES

There could be more than one tree that fits the same data!
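To make the point concrete, the two trees can be written as plain predicates and checked against the ten training records; both classify every record correctly.

```python
# The ten training records from the example, as (HO, MarSt, TaxInc in K, Default).
train = [("Yes", "Single", 125, "No"),   ("No", "Married", 100, "No"),
         ("No", "Single", 70, "No"),     ("Yes", "Married", 120, "No"),
         ("No", "Divorced", 95, "Yes"),  ("No", "Married", 60, "No"),
         ("Yes", "Divorced", 220, "No"), ("No", "Single", 85, "Yes"),
         ("No", "Married", 75, "No"),    ("No", "Single", 90, "Yes")]

def tree1(ho, marst, taxinc):
    # First tree: split on HO, then MarSt, then TaxInc.
    if ho == "Yes":
        return "No"
    if marst == "Married":
        return "No"
    return "Yes" if taxinc > 80 else "No"

def tree2(ho, marst, taxinc):
    # Second tree: split on MarSt first, then HO, then TaxInc.
    if marst == "Married":
        return "No"
    if ho == "Yes":
        return "No"
    return "Yes" if taxinc > 80 else "No"

# Both trees reproduce every label in the training data.
assert all(tree1(h, m, t) == y for h, m, t, y in train)
assert all(tree2(h, m, t) == y for h, m, t, y in train)
```

They can still disagree on unseen records in principle; choosing among trees that fit the data equally well is part of what a learning algorithm must do.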
Classification: Application 1
- Direct Marketing
  - Goal: reduce the cost of mailing by targeting the set of consumers likely to buy a new cell-phone product.
  - Approach:
    - Use the data for a similar product introduced before.
    - We know which customers decided to buy and which decided otherwise. This {buy, don't buy} decision forms the class attribute.
    - Collect various demographic, lifestyle, and company-interaction related information about all such customers (type of business, where they stay, how much they earn, etc.).
    - Use this information as input attributes to learn a classifier model.

From [Berry & Linoff] Data Mining Techniques, 1997
Classification: Application 2
- Fraud Detection
  - Goal: predict fraudulent cases in credit card transactions.
  - Approach:
    - Use credit card transactions and the information on the account holder as attributes.
    - When does a customer buy, what do they buy, how often do they pay on time, etc.
    - Label past transactions as fraud or fair. This forms the class attribute.
    - Learn a model for the class of the transactions.
    - Use this model to detect fraud by observing credit card transactions on an account.
Clustering Definition
- Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that
  - data points in one cluster are more similar to one another, and
  - data points in separate clusters are less similar to one another.
- Similarity measures:
  - Euclidean distance if attributes are continuous
  - Other problem-specific measures
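A minimal sketch of Euclidean-distance clustering: a single assignment step of a k-means-style algorithm. The points and initial centers are assumed for illustration; real algorithms iterate and are covered later in the course.

```python
# Assign each point to its nearest center under Euclidean distance.
from math import dist   # Euclidean distance between two points (Python 3.8+)

points = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.1, 4.9)]
centers = [(1.0, 1.0), (5.0, 5.0)]   # assumed initial cluster centers

clusters = {0: [], 1: []}
for p in points:
    nearest = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
    clusters[nearest].append(p)
```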
Illustrating Clustering
[Figure: Euclidean-distance-based clustering in 3-D space; intracluster distances are minimized, intercluster distances are maximized.]
Clustering: Application 1
- Market Segmentation
  - Goal: subdivide a market into distinct subsets of customers, where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix.
  - Approach:
    - Collect different attributes of customers based on their geographical and lifestyle-related information.
    - Find clusters of similar customers.
    - Measure the clustering quality by observing the buying patterns of customers in the same cluster vs. those from different clusters.
Clustering: Application 2
- Document Clustering
  - Goal: find groups of documents that are similar to each other based on the important terms appearing in them.
  - Approach: identify frequently occurring terms in each document; form a similarity measure based on the frequencies of different terms; use it to cluster.
  - Gain: information retrieval can utilize the clusters to relate a new document or search term to clustered documents.
Illustrating Document Clustering
- Clustering points: 3204 articles of the Los Angeles Times.
- Similarity measure: how many words the documents have in common (after some word filtering).

Category       Total Articles  Correctly Placed
Financial      555             364
Foreign        341             260
National       273             36
Metro          943             746
Sports         738             573
Entertainment  354             278
Association Rule Discovery: Definition
- Given a set of records, each of which contains some number of items from a given collection, produce dependency rules that predict the occurrence of an item based on the occurrences of other items.

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Rules discovered:
{Milk} --> {Coke}
{Diaper, Milk} --> {Beer}
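Rule quality is usually measured by support and confidence; here is a quick check of the two discovered rules against the five transactions above. (The thresholds that make a rule count as "discovered" are algorithm parameters covered later in the course.)

```python
# Support and confidence computed directly from the five transactions.
transactions = [{"Bread", "Coke", "Milk"},
                {"Beer", "Bread"},
                {"Beer", "Coke", "Diaper", "Milk"},
                {"Beer", "Bread", "Diaper", "Milk"},
                {"Coke", "Diaper", "Milk"}]

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

conf_milk_coke = confidence({"Milk"}, {"Coke"})          # 3/4 = 0.75
conf_dm_beer = confidence({"Diaper", "Milk"}, {"Beer"})  # 2/3
```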
Association Rule Discovery: Application 1
- Marketing and Sales Promotion
  - Let the rule discovered be {Bagels, … } --> {Potato Chips}
  - Potato Chips as consequent: can be used to determine what should be done to boost its sales.
  - Bagels in the antecedent: can be used to see which products would be affected if the store discontinues selling bagels.
  - Bagels in the antecedent and Potato Chips in the consequent: can be used to see what products should be sold with bagels to promote the sale of potato chips!
Data Compression
[Diagram: lossless compression lets the original data be reconstructed exactly from the compressed data; lossy compression recovers only an approximation of the original data.]
Numerosity Reduction: Reduce the Volume of Data
- Parametric methods
  - Assume the data fits some model; estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
- Non-parametric methods
  - Do not assume models
  - Major families: histograms, clustering, sampling
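A parametric method in miniature: fit a least-squares line to a toy dataset and keep only the two model parameters, discarding the points. The data values are invented for illustration.

```python
# Parametric numerosity reduction: summarize the dataset by (slope, intercept).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # roughly y = 2x

# Ordinary least squares for a single predictor.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The whole dataset is now summarized by two numbers.
model = (slope, intercept)
```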
Clustering
- Partition the data set into clusters, and model it by one representative from each cluster
- Can be very effective if the data is clustered, but not if the data is "smeared"
- There are many choices of clustering definitions and clustering algorithms; more later!
Sampling
- Allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data
- Choose a representative subset of the data
  - Simple random sampling may perform very poorly in the presence of skew
- Develop adaptive sampling methods
  - Stratified sampling:
    - Approximate the percentage of each class (or subpopulation of interest) in the overall database
    - Used in conjunction with skewed data
- Sampling may not reduce database I/Os (which happen a page at a time).
Sampling
[Figure: simple random sampling of raw data vs. a cluster/stratified sample.]
- The number of samples drawn from each cluster/stratum is proportional to its size.
- Thus the sample represents the data better, and outliers are avoided.