Data Processing
1. Objectives
2. Why Is Data Dirty?
3. Why Is Data Preprocessing Important?
4. Major Tasks in Data Processing
5. Forms of Data Processing
6. Data Cleaning
7. Missing Data
8. Noisy Data
9. Simple Discretization Methods: Binning
10. Cluster Analysis
11. Regression
12. Data Integration
13. Data Transformation
14. Data Reduction Strategies
15. Similarity and Dissimilarity
15.1. Similarity/Dissimilarity for Simple Attributes
15.2. Euclidean Distance
15.3. Minkowski Distance
15.4. Mahalanobis Distance
15.5. Common Properties of a Distance
15.6. Common Properties of a Similarity
15.7. Similarity Between Binary Vectors
15.8. Cosine Similarity
15.9. Extended Jaccard Coefficient (Tanimoto)
15.10. Correlation
1. Objectives
Real-world data is dirty. It is typically:
• Incomplete:
o Lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data:
 e.g., occupation=“”
• Noisy:
o Containing errors or outliers
 e.g., Salary=“-10”
• Inconsistent:
o Containing discrepancies in codes or names
 e.g., Age=“42” Birthday=“03/07/1997”
 e.g., Was rating “1,2,3”, now rating “A, B, C”
 e.g., discrepancy between duplicate records
2. Why Is Data Dirty?
• Incomplete data comes from
o “Not applicable” data values at collection time
o Different considerations between the time the data was
collected and the time it is analyzed
o Human, hardware, or software problems
• Noisy data comes from the process of data
o Collection
o Entry
o Transmission
• Inconsistent data comes from
o Different data sources
o Functional dependency violation
3. Why Is Data Preprocessing Important?
• No quality data, no quality mining results!
 Quality decisions must be based on quality data
o e.g., duplicate or missing data may cause
incorrect or even misleading statistics.
o A data warehouse needs consistent integration
of quality data
• “Data extraction, cleaning, and transformation comprise the
majority of the work of building a data warehouse.” —Bill
Inmon
4. Major Tasks in Data Processing
• Data cleaning
 Fill in missing values, smooth noisy data, identify
or remove outliers, and resolve inconsistencies
• Data integration
 Integration of multiple databases, data cubes, or
files
• Data transformation
 Normalization and aggregation
• Data reduction
 Obtains a reduced representation that is smaller in
volume but produces the same or similar analytical results
• Data discretization
 Part of data reduction but with particular
importance, especially for numerical data
5. Forms of Data Processing:
(Figure omitted: the forms of data preprocessing — data cleaning, data integration, data transformation, and data reduction.)
6. Data Cleaning
• Importance
o “Data cleaning is one of the three biggest problems in
data warehousing”—Ralph Kimball
o “Data cleaning is the number one problem in data
warehousing”—DCI survey
• Data cleaning tasks
o Fill in missing values
o Identify outliers and smooth out noisy data
o Correct inconsistent data
o Resolve redundancy caused by data integration
7. Missing Data
• Data is not always available
 E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data
• Missing data may be due to
 Equipment malfunction
 Inconsistent with other recorded data and thus deleted
 Data not entered due to misunderstanding
 Certain data may not be considered important at the
time of entry
 History or changes of the data were not registered
• Missing data may need to be inferred.
• How to Handle Missing Data?
o Ignore the tuple: usually done when the class label is
missing (assuming the task is classification); not
effective when the percentage of missing values per
attribute varies considerably.
o Fill in the missing value manually: tedious +
infeasible?
o Fill it in automatically (see the code sketch after this list) with
 A global constant: e.g., “unknown”, a new
class?!
 the attribute mean
 the attribute mean for all samples belonging
to the same class: smarter
 the most probable value: inference-based
such as Bayesian formula or decision tree
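A minimal sketch of the automatic fill-in strategies above, assuming pandas is available; the toy income values and class labels (`cls`) are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical tuples: 'income' has missing values, 'cls' is the class label.
df = pd.DataFrame({
    "income": [50.0, np.nan, 70.0, np.nan, 40.0, 90.0],
    "cls":    ["A",  "A",    "B",  "B",    "A",  "B"],
})

# Attribute mean: replace each missing value with the overall mean.
by_mean = df["income"].fillna(df["income"].mean())

# Attribute mean per class: replace with the mean of samples in the same class.
by_class_mean = df.groupby("cls")["income"].transform(lambda s: s.fillna(s.mean()))

print(by_mean.tolist())        # [50.0, 62.5, 70.0, 62.5, 40.0, 90.0]
print(by_class_mean.tolist())  # [50.0, 45.0, 70.0, 80.0, 40.0, 90.0]
```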
8. Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
o faulty data collection instruments
o data entry problems
o data transmission problems
o technology limitation
o inconsistent naming conventions
• Other data problems that require data cleaning
o duplicate records
o incomplete data
o inconsistent data
• How to Handle Noisy Data?
o Binning method:
 first sort data and partition into (equi-depth)
bins
 then one can smooth by bin means, smooth
by bin median, smooth by bin boundaries,
etc.
o Clustering
 detect and remove outliers
o Combined computer and human inspection
 detect suspicious values and check by
human (e.g., deal with possible outliers)
o Regression
 smooth by fitting the data into regression
functions
9. Simple Discretization Methods: Binning
• Equal-width (distance) partitioning:
o Divides the range into N intervals of equal size:
uniform grid
o If A and B are the lowest and highest values of the
attribute, the width of the intervals will be W = (B − A)/N.
o The most straightforward approach, but outliers may
dominate the presentation
o Skewed data is not handled well.
• Equal-depth (frequency) partitioning:
o Divides the range into N intervals, each containing
approximately the same number of samples
o Good data scaling
o Managing categorical attributes can be tricky.
• Binning methods
o They smooth a sorted data value by consulting its
“neighborhood”, that is, the values around it.
o The sorted values are partitioned into a number of
buckets or bins.
o Smoothing by bin means: Each value in the bin is
replaced by the mean value of the bin.
o Smoothing by bin medians: Each value in the bin is
replaced by the bin median.
o Smoothing by bin boundaries: The minimum and maximum
values in a bin are identified as the bin boundaries, and
each bin value is replaced by the closest boundary value.
• Example: Binning Methods for Data Smoothing
o Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24,
25, 26, 28, 29, 34
o Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
o Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
o Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
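A minimal sketch of equal-depth binning with mean and boundary smoothing, in plain Python; the prices and bin depth are taken from the example above, and the bin means are rounded to match the example's values:

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted
depth = 4

# Partition into equi-depth bins of `depth` values each.
bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# Smoothing by bin means: every value becomes its bin's (rounded) mean.
by_means = [[round(sum(b) / len(b)) for _ in b] for b in bins]

# Smoothing by bin boundaries: every value snaps to the nearer of min/max.
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
             for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```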
10. Cluster Analysis
(Figure omitted: data points grouped into clusters; values falling outside all clusters can be treated as outliers and removed, as noted in Section 8.)
11. Regression
(Figure omitted: a linear fit y = x + 1 through the data; an observed value Y1 at X1 is smoothed to the fitted value Y1′.)
12. Data Integration
• Data integration:
o Combines data from multiple sources into a coherent
store
• Schema integration
o Integrate metadata from different sources
o Entity identification problem: identify real-world
entities from multiple data sources, e.g., A.cust-id ≡
B.cust-#
• Detecting and resolving data value conflicts
o For the same real-world entity, attribute values from
different sources are different
o Possible reasons: different representations, different
scales, e.g., metric vs. British units
• Handling Redundancy in Data Integration
o Redundant data often occur when integrating
multiple databases
 The same attribute may have different names in
different databases
 One attribute may be a “derived” attribute in
another table, e.g., annual revenue
o Redundant attributes may be detected by correlation
analysis (see the sketch after this list)
o Careful integration of the data from multiple sources
may help reduce/avoid redundancies and
inconsistencies and improve mining speed and quality
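A minimal sketch of correlation-based redundancy detection, assuming pandas; the column names and the 0.9 threshold are illustrative choices, not values from these notes:

```python
import pandas as pd

# Hypothetical integrated table where annual_revenue is derived
# from monthly_revenue (annual = 12 * monthly).
df = pd.DataFrame({
    "monthly_revenue": [10, 12, 9, 15, 11],
    "annual_revenue":  [120, 144, 108, 180, 132],
    "num_employees":   [3, 8, 2, 1, 9],
})

corr = df.corr()  # pairwise Pearson correlations between attributes

# Flag attribute pairs whose absolute correlation exceeds a chosen threshold.
threshold = 0.9
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > threshold:
            print(f"{a} and {b} look redundant (r = {corr.loc[a, b]:.2f})")
```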
13. Data Transformation
• Smoothing: remove noise from data
• Aggregation: summarization, data cube construction
• Generalization: concept hierarchy climbing
• Normalization: scaled to fall within a small, specified
range
o Min-max normalization:
$v' = \frac{v - min_A}{max_A - min_A}(new\_max_A - new\_min_A) + new\_min_A$
o Z-score normalization:
$v' = \frac{v - mean_A}{stand\_dev_A}$
o Normalization by decimal scaling:
$v' = \frac{v}{10^j}$, where $j$ is the smallest integer such that $\max(|v'|) < 1$
• Attribute/feature construction
o New attributes constructed from the given ones
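A minimal sketch of the three normalizations with NumPy; the sample values and the [0, 1] target range are illustrative:

```python
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])

# Min-max normalization to a new range [new_min, new_max].
new_min, new_max = 0.0, 1.0
v_minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# Z-score normalization: zero mean, unit standard deviation.
v_zscore = (v - v.mean()) / v.std()

# Decimal scaling: smallest j such that all |v'| < 1.
j = 0
while np.abs(v).max() / 10 ** j >= 1:
    j += 1
v_decimal = v / 10 ** j

print(v_minmax)   # [0.    0.125 0.25  0.5   1.   ]
print(v_zscore)
print(v_decimal)  # [0.02 0.03 0.04 0.06 0.1 ]
```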
14. Data Reduction Strategies
• A data warehouse may store terabytes of data
o Complex data analysis/mining may take a very long
time to run on the complete data set
• Data reduction
o Obtain a reduced representation of the data set that is
much smaller in volume yet produces the same (or
almost the same) analytical results
• Data reduction strategies
o Data cube aggregation
o Dimensionality reduction—remove unimportant
attributes
o Data Compression
o Numerosity reduction—fit data into models
o Discretization and concept hierarchy generation
15. Similarity and Dissimilarity
• Similarity
o Numerical measure of how alike two data objects
are.
o Is higher when objects are more alike.
o Often falls in the range [0,1]
• Dissimilarity
o Numerical measure of how different two data
objects are
o Lower when objects are more alike
o Minimum dissimilarity is often 0
o Upper limit varies
• Proximity refers to a similarity or dissimilarity
15.1. Similarity/Dissimilarity for Simple Attributes
• p and q are the attribute values for two data objects.
(Table omitted: similarity and dissimilarity definitions for each simple attribute type — nominal, ordinal, and interval/ratio.)
15.2. Euclidean Distance
$$dist(p, q) = \sqrt{\sum_{k=1}^{n} (p_k - q_k)^2}$$

where n is the number of dimensions (attributes) and $p_k$ and $q_k$ are,
respectively, the kth attributes (components) of data objects p and
q.
Example: the four two-dimensional points below and their Euclidean
distance matrix.

point  x  y
p1     0  2
p2     2  0
p3     3  1
p4     5  1

       p1     p2     p3     p4
p1     0      2.828  3.162  5.099
p2     2.828  0      1.414  3.162
p3     3.162  1.414  0      2
p4     5.099  3.162  2      0
15.3. Minkowski Distance
• Minkowski Distance is a generalization of Euclidean
Distance:
$$dist(p, q) = \left( \sum_{k=1}^{n} |p_k - q_k|^r \right)^{1/r}$$
where r is a parameter, n is the number of dimensions
(attributes), and $p_k$ and $q_k$ are, respectively, the kth attributes
(components) of data objects p and q.
• Minkowski Distance: Examples
o r = 1. City block (Manhattan, taxicab, L1 norm)
distance.
 A common example of this is the Hamming
distance, which is just the number of bits that are
different between two binary vectors
o r = 2. Euclidean distance
o r → ∞. “supremum” (Lmax norm, L∞ norm) distance
 This is the maximum difference between any
component of the vectors
o Do not confuse r with n, i.e., all these distances are
defined for all numbers of dimensions.
For the same four points as in the Euclidean example, the L1 (r = 1),
L2 (r = 2), and L∞ (r → ∞) distance matrices are:

L1     p1  p2  p3  p4
p1     0   4   4   6
p2     4   0   2   4
p3     4   2   0   2
p4     6   4   2   0

L2     p1     p2     p3     p4
p1     0      2.828  3.162  5.099
p2     2.828  0      1.414  3.162
p3     3.162  1.414  0      2
p4     5.099  3.162  2      0

L∞     p1  p2  p3  p4
p1     0   2   3   5
p2     2   0   1   3
p3     3   1   0   2
p4     5   3   2   0
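A minimal NumPy sketch that reproduces the three matrices above; `minkowski_matrix` is a hypothetical helper, not a library function:

```python
import numpy as np

pts = np.array([[0, 2], [2, 0], [3, 1], [5, 1]], dtype=float)  # p1..p4

def minkowski_matrix(points, r):
    """Pairwise Minkowski distances; r = np.inf gives the supremum distance."""
    diffs = np.abs(points[:, None, :] - points[None, :, :])  # (n, n, dims)
    if np.isinf(r):
        return diffs.max(axis=-1)
    return (diffs ** r).sum(axis=-1) ** (1.0 / r)

for r in (1, 2, np.inf):
    print(f"r = {r}")
    print(np.round(minkowski_matrix(pts, r), 3))
```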
15.4. Mahalanobis Distance
$$mahalanobis(p, q) = (p - q)\, \Sigma^{-1} (p - q)^T$$

where Σ is the covariance matrix of the input data X.

If X is a column vector with n scalar random-variable components,
and $\mu_k$ is the expected value of the kth element of X, i.e., $\mu_k =
E(X_k)$, then the covariance matrix is defined as:
$$\Sigma = E\big[(X - E[X])\,(X - E[X])^T\big] =
\begin{pmatrix}
E[(X_1-\mu_1)(X_1-\mu_1)] & E[(X_1-\mu_1)(X_2-\mu_2)] & \cdots & E[(X_1-\mu_1)(X_n-\mu_n)] \\
E[(X_2-\mu_2)(X_1-\mu_1)] & E[(X_2-\mu_2)(X_2-\mu_2)] & \cdots & E[(X_2-\mu_2)(X_n-\mu_n)] \\
\vdots & \vdots & \ddots & \vdots \\
E[(X_n-\mu_n)(X_1-\mu_1)] & E[(X_n-\mu_n)(X_2-\mu_2)] & \cdots & E[(X_n-\mu_n)(X_n-\mu_n)]
\end{pmatrix}$$

The (i, j) element of $\Sigma$ is the covariance between $X_i$ and $X_j$.
(Figure omitted: for the two red points shown in the original scatter plot, the Euclidean distance is 14.7 while the Mahalanobis distance is 6.)
• If the covariance matrix is the identity matrix, the
Mahalanobis distance reduces to the Euclidean distance. If
the covariance matrix is diagonal, the resulting distance
measure is called the normalized Euclidean distance.
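A minimal NumPy sketch of the Mahalanobis distance; the correlated toy data exists only to estimate a covariance matrix, and the function returns the quadratic form exactly as in the formula above (many texts report its square root):

```python
import numpy as np

# Hypothetical correlated 2-D data, used only to estimate a covariance matrix.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 3.0], [3.0, 4.0]], size=500)

cov = np.cov(X, rowvar=False)  # covariance matrix of the input data X

def mahalanobis(p, q, cov_inv):
    """(p - q) @ Sigma^{-1} @ (p - q)^T, per the formula above."""
    d = p - q
    return d @ cov_inv @ d

p, q = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
print(mahalanobis(p, q, np.linalg.inv(cov)))  # small along the correlated axis
print(mahalanobis(p, q, np.eye(2)))           # identity covariance: reduces to
                                              # the squared Euclidean distance
```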
15.5. Common Properties of a Distance
• Distances, such as the Euclidean distance, have some
well-known properties.
1. d(p, q) ≥ 0 for all p and q and d(p, q) = 0 only if
p = q. (Positive definiteness)
2. d(p, q) = d(q, p) for all p and q. (Symmetry)
3. d(p, r) ≤ d(p, q) + d(q, r) for all points p, q, and r.
(Triangle Inequality)
where d(p, q) is the distance (dissimilarity) between points (data
objects), p and q.
• A distance that satisfies these properties is called a metric.
15.6. Common Properties of a Similarity
• Similarities also have some well-known properties.
1. s(p, q) = 1 (or maximum similarity) only if p = q.
2. s(p, q) = s(q, p) for all p and q. (Symmetry)
where s(p, q) is the similarity between points (data objects),
p and q.
15.7. Similarity Between Binary Vectors
• A common situation is that objects, p and q, have only binary
attributes
• Compute similarities using the following quantities
M01 = the number of attributes where p was 0 and q was 1
M10 = the number of attributes where p was 1 and q was 0
M00 = the number of attributes where p was 0 and q was 0
M11 = the number of attributes where p was 1 and q was 1
• Simple Matching and Jaccard Coefficients
SMC = number of matches / number of attributes
= (M11 + M00) / (M01 + M10 + M11 + M00)
J = number of 11 matches / number of not-both-zero
attribute values
= (M11) / (M01 + M10 + M11)
• SMC versus Jaccard: Example
p = 1 0 0 0 0 0 0 0 0 0
q = 0 0 0 0 0 0 1 0 0 1
M01 = 2 (the number of attributes where p was 0 and q was 1)
M10 = 1 (the number of attributes where p was 1 and q was 0)
M00 = 7 (the number of attributes where p was 0 and q was 0)
M11 = 0 (the number of attributes where p was 1 and q was 1)
SMC = (M11 + M00)/(M01 + M10 + M11 + M00) = (0+7) /
(2+1+0+7) = 0.7
J = (M11) / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
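A minimal plain-Python sketch reproducing the SMC and Jaccard computation above:

```python
p = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]

# Count the four match/mismatch types across attributes.
m01 = sum(1 for a, b in zip(p, q) if a == 0 and b == 1)
m10 = sum(1 for a, b in zip(p, q) if a == 1 and b == 0)
m00 = sum(1 for a, b in zip(p, q) if a == 0 and b == 0)
m11 = sum(1 for a, b in zip(p, q) if a == 1 and b == 1)

smc = (m11 + m00) / (m01 + m10 + m11 + m00)  # 7/10 = 0.7
jaccard = m11 / (m01 + m10 + m11)            # 0/3  = 0.0
print(smc, jaccard)
```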
15.8. Cosine Similarity
• If d1 and d2 are two document vectors, then
cos( d1, d2 ) = (d1 • d2) / ||d1|| ||d2|| ,
where • indicates the vector dot product and ||d|| is the length
of vector d.
• Example:
d1 = 3 2 0 5 0 0 0 2 0 0
d2 = 1 0 0 0 0 0 0 1 0 2
d1 • d2= 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 +
0*0 + 0*2 = 5
||d1|| = (3*3+2*2+0*0+5*5+0*0+0*0+0*0+2*2+0*0+0*0)^0.5 =
(42)^0.5 = 6.481
||d2|| = (1*1+0*0+0*0+0*0+0*0+0*0+0*0+1*1+0*0+2*2)^0.5 =
(6)^0.5 = 2.449
cos( d1, d2 ) = 5 / (6.481 × 2.449) = 0.3150
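A minimal NumPy sketch verifying the cosine example:

```python
import numpy as np

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 4))  # 0.315
```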
15.9. Extended Jaccard Coefficient (Tanimoto)
• Variation of Jaccard for continuous or count attributes
o Reduces to Jaccard for binary attributes
$$T(p, q) = \frac{p \cdot q}{\|p\|^2 + \|q\|^2 - p \cdot q}$$
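A minimal NumPy sketch of the extended Jaccard coefficient; `tanimoto` is a hypothetical helper, and the first pair of vectors reuses the cosine example:

```python
import numpy as np

def tanimoto(p, q):
    """Extended Jaccard: p.q / (||p||^2 + ||q||^2 - p.q)."""
    dot = p @ q
    return dot / (p @ p + q @ q - dot)

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)
print(tanimoto(d1, d2))  # 5 / (42 + 6 - 5) = 5/43 ≈ 0.116

# For binary vectors it reduces to the Jaccard coefficient:
p = np.array([1, 0, 1, 1, 0], dtype=float)
q = np.array([1, 1, 0, 1, 0], dtype=float)
print(tanimoto(p, q))    # M11=2, M01=1, M10=1 -> 2/4 = 0.5
```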
15.10. Correlation
• Correlation measures the linear relationship between objects
• To compute correlation, we standardize data objects, p and q,
and then take their dot product
$$p'_k = \frac{p_k - mean(p)}{std(p)}, \qquad q'_k = \frac{q_k - mean(q)}{std(q)}$$
$$correlation(p, q) = p' \cdot q'$$
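A minimal NumPy sketch; note that the dot product of standardized vectors must be averaged over the n attributes (or n − 1 for sample statistics) to land in [−1, 1], a normalization the compact formula above leaves implicit:

```python
import numpy as np

p = np.array([3.0, 6.0, 0.0, 3.0, 6.0])
q = np.array([1.0, 2.0, 0.0, 1.0, 2.0])  # q = p/3: perfectly linearly related

# Standardize, then take the dot product averaged over the n attributes.
p_std = (p - p.mean()) / p.std()
q_std = (q - q.mean()) / q.std()
print(p_std @ q_std / len(p))   # 1.0

print(np.corrcoef(p, q)[0, 1])  # same result from NumPy directly
```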