1
Data Preprocessing
— Unit 2 —
Unit 2: Data Preprocessing
• ETL in Data Warehousing:
• Data Integration and Transformation, Data Cleaning,
• Data Reduction:
• Data Cube Aggregation,
• Dimensionality Reduction,
• Data Discretization and Concept Hierarchy Generation,
• Data Compression.
2
Data Quality: Why Preprocess the Data?
Measures for data quality: A multidimensional view
• Accuracy: correct or wrong, accurate or not
• Completeness: not recorded, unavailable, …
• Consistency: some modified but some not, dangling, …
• Timeliness: timely update?
• Believability: how much can the data be trusted to be correct?
• Interpretability: how easily the data can be understood?
3
Major Tasks in Data Preprocessing
• Data cleaning
• Fill in missing values, smooth noisy data, identify or remove outliers, and
resolve inconsistencies, etc.
• Data integration
• Integration of multiple databases, data cubes, or files
• Data reduction
• Dimensionality reduction
• Numerosity reduction
• Data compression
• Data transformation and data discretization
• Normalization
• Concept hierarchy generation
4
Data Cleaning
• Data in the Real World Is Dirty: Lots of potentially incorrect data, e.g., from
faulty instruments, human or computer error, transmission errors, etc.
• incomplete: lacking attribute values, lacking certain attributes of interest,
or containing only aggregate data
• e.g., Occupation=“ ” (missing data)
• noisy: containing noise, errors, or outliers
• e.g., Salary=“−10” (an error)
• inconsistent: containing discrepancies in codes or names, e.g.,
• Age=“42”, Birthday=“03/07/2010”
• Was rating “1, 2, 3”, now rating “A, B, C”
• discrepancy between duplicate records
• Intentional (e.g., disguised missing data)
• Jan. 1 as everyone’s birthday?
5
Incomplete (Missing) Data
•Data is not always available
•E.g., many tuples have no recorded value for several
attributes, such as customer income in sales data.
•Missing data may be due to
•equipment malfunction.
•data that were inconsistent with other recorded data and thus deleted.
•data not entered due to misunderstanding.
•certain data not being considered important at the time
of entry.
•history or changes of the data not being registered.
6
How to Handle Missing Data?
• Ignore the tuple: usually done when class label is missing (when doing
classification)
• not effective when the % of missing values per attribute varies
considerably.
• Fill in the missing value manually: tedious + infeasible?
• this approach is time consuming and may not be feasible given a large
data set with many missing values.
• Fill it in automatically with
• a global constant : e.g., “unknown”, a new class?!
• the attribute mean or median
• the attribute mean for all samples belonging to the same class: smarter
• the most probable value: inference-based such as Bayesian formula or
decision tree induction.
7
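A minimal pandas sketch of the automatic fill-in strategies above; the column names (class, income) are illustrative assumptions, not from the slides:

```python
import pandas as pd

# Toy data with missing income values (illustrative column names).
df = pd.DataFrame({
    "class":  ["A", "A", "B", "B", "B"],
    "income": [30000, None, 52000, None, 61000],
})

# Global constant: mark missing values with a placeholder value.
df["income_const"] = df["income"].fillna(-1)

# Attribute mean / median over all tuples.
df["income_mean"] = df["income"].fillna(df["income"].mean())
df["income_median"] = df["income"].fillna(df["income"].median())

# Class-conditional mean: fill with the mean of tuples in the same class.
df["income_class_mean"] = df["income"].fillna(
    df.groupby("class")["income"].transform("mean")
)

# Ignoring the tuple: simply drop rows with the missing attribute.
df_dropped = df.dropna(subset=["income"])
print(df)
```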
8
Noisy Data
•Noise: random error or variance in a measured variable
•Incorrect attribute values may be due to
•faulty data collection instruments
•data entry problems
•data transmission problems
•technology limitation
•inconsistency in naming convention
•Other data problems which require data cleaning
•duplicate records
•incomplete data
•inconsistent data
How to Handle Noisy Data?
Binning
•first sort data and partition into
(equal-frequency) bins
•then one can
• smooth by bin means: each value in a
bin is replaced by the mean value of the
bin.
• smooth by bin median: each bin value is
replaced by the bin median.
9
• smooth by bin boundaries: minimum and maximum values in a
given bin are identified as the bin boundaries.
Each bin value is then replaced by the closest boundary value.
etc.
How to Handle Noisy Data?
•Regression
•Data smoothing can also be done by regression, a
technique that conforms data values to a function.
•Linear regression involves finding the “best” line to fit
two attributes (or variables) so that one attribute can be
used to predict the other.
•Multiple linear regression is an extension of linear
regression, where more than two attributes are involved
and the data are fit to a multidimensional surface.
10
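A small sketch of smoothing by linear regression using NumPy's least-squares fit; the data values below are made up for illustration:

```python
import numpy as np

# Illustrative noisy measurements of attribute y against attribute x.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3, 11.9, 14.2, 15.8])

# Linear regression: find the "best" line y ≈ w*x + b by least squares.
w, b = np.polyfit(x, y, deg=1)

# Smoothing: conform the data values to the fitted function.
y_smoothed = w * x + b
print(w, b, y_smoothed)
```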
How to Handle Noisy Data?
•Clustering or Outlier analysis:
• Outliers may be detected by clustering, for example, where similar
values are organized into groups, or “clusters.” Intuitively, values
that fall outside of the set of clusters may be considered outliers
11
How to Handle Noisy Data?
•Combined computer and human inspection
• detect suspicious values and check by human (e.g., deal with
possible outliers)
12
Data Cleaning as a Process
Data discrepancies (illogical/unsuitable data): how do they happen?
• The first step in data cleaning as a process is discrepancy
detection.
• Discrepancies can be caused by several factors, including
• poorly designed data entry forms that have many optional
fields,
• human error in data entry,
• deliberate errors (e.g., respondents not wanting to divulge
information about themselves), and
• data decay (e.g., outdated addresses).
• Discrepancies may also arise from inconsistent data
representations and inconsistent use of codes.
• Other sources of discrepancies include errors in instrumentation
devices that record data and system errors.
13
Data Cleaning as a Process
Data discrepancy detection
• Use metadata (e.g., domain, range, dependency, distribution)
• Check field overloading
• Field overloading is another error source that typically results when
developers squeeze new attribute definitions into unused (bit)
portions of already defined attributes (e.g., an unused bit of an
attribute that has a value range that uses only, say, 31 out of 32 bits).
• Check uniqueness rule, consecutive rule and null rule.
• Use commercial tools
• Data scrubbing: use simple domain knowledge (e.g., postal code,
spell-check) to detect errors and make corrections
• Data auditing: analyze the data to discover rules and relationships and to
detect violators (e.g., correlation and clustering to find outliers)
14
Data Cleaning as a Process
•Data migration and integration
•Data migration tools: allow transformations to be
specified
•ETL (Extraction/Transformation/Loading) tools: allow
users to specify transformations through a graphical user
interface
•Integration of the two processes
•Iterative and interactive (e.g., Potter’s Wheel)
15
Data Integration
• Data Warehousing/OLAP/Data mining often requires data
integration—the merging of data from multiple data stores.
• Data integration: Combines data from multiple sources into a
coherent store.
• Careful integration can help reduce and avoid redundancies
and inconsistencies in the resulting data set.
• This can help improve the accuracy and speed of the
subsequent data mining process.
16
Data Integration
•The semantic heterogeneity and structure of data pose great
challenges in data integration.
•How can we match schema and objects from different
sources?
• This is the essence of the entity identification problem,
•Are any attributes correlated?
• correlation tests for numeric and nominal data.
17
Entity identification problem in Data Integration
• It is likely that your data analysis task will involve data
integration, which combines data from multiple sources into a
coherent data store, as in data warehousing.
• These sources may include multiple databases, data cubes, or flat files.
• There are a number of issues to consider during data
integration. Schema integration and object matching can be
tricky.
• How can equivalent real-world entities from multiple data
sources be matched up?
• This is referred to as the entity identification problem.
18
Entity identification problem in Data Integration
• Identify real world entities from multiple data sources,
• e.g., Bill Clinton = William Clinton || customer_id =
cust_number ??
• Detecting and resolving data value conflicts
• For the same real world entity, attribute values from different
sources are different
• Possible reasons: different representations, different scales,
• e.g., Metric vs. British units
19
Redundancy and Correlation Analysis
in Data Integration
• Redundant data often occur when integrating multiple databases
• Object identification: The same attribute or object may have different
names in different databases
• Derivable data: One attribute may be a “derived” attribute in another
table, e.g., annual revenue
• Redundant attributes may be detected by correlation analysis and
covariance analysis
• Careful integration of the data from multiple sources may help reduce/avoid
redundancies and inconsistencies and improve mining speed and quality
20
Correlation Analysis (Nominal Data)
• For nominal data, a correlation relationship between two attributes, A and B, can
be discovered by a χ2 (chi-square) test.
• Suppose A has c distinct values, namely a1,a2,...ac.
• B has r distinct values, namely b1, b2,...br.
• The data tuples described by A and B can be shown as a contingency table, with
the c values of A making up the columns and the r values of B making up the
rows.
• Let (Ai , Bj ) denote the joint event that attribute A takes on value ai and attribute
B takes on value bj, that is, where (A = ai, B = bj).
• Each and every possible (Ai, Bj) joint event has its own cell (or slot) in the table.
21
Correlation Analysis (Nominal Data)
• χ2 (chi-square) test: the χ2 value is computed as
χ2 = Σi=1..c Σj=1..r (oij − eij)2 / eij
• where oij is the observed frequency (i.e., actual count) of the joint event (Ai , Bj ) and eij
is the expected frequency of (Ai , Bj ), which can be computed as
eij = count(A = ai) × count(B = bj) / n
• where n is the number of data tuples, count(A = ai ) is the number of tuples having value ai
for A, and count(B = bj) is the number of tuples having value bj for B.
• i.e., χ2 = Σ (Observed − Expected)2 / Expected
• The χ2 statistic tests the hypothesis that A and B are independent, that is,
there is no correlation between them.
22
Example: χ2 (chi-square) test
• Suppose that a group of 1500 people was surveyed.
• The gender of each person was noted.
• Each person was polled as to whether his or her preferred type of reading material
was fiction or nonfiction.
• Thus, we have two attributes, gender and preferred_reading.
The expected frequency for the cell (male, fiction) is
e = count(male) × count(fiction) / n = (300 × 450) / 1500 = 90.
Chi-Square Calculation: An Example
χ2 (chi-square) calculation (the numbers in parentheses are the expected counts, calculated
from the data distribution in the two categories):
χ2 = (250 − 90)2/90 + (50 − 210)2/210 + (200 − 360)2/360 + (1000 − 840)2/840
   = 284.44 + 121.90 + 71.11 + 30.48 = 507.93
24
• For this 2 × 2 table, the degrees of freedom are (2 − 1)(2 − 1) = 1.
• For 1 degree of freedom, the χ 2 value needed to reject the hypothesis at
the 0.001 significance level is 10.828 (taken from the table of upper
percentage points of the χ2 distribution, typically available from any
textbook on statistics).
• Since our computed value is above this, we can reject the hypothesis that
gender and preferred reading are independent and conclude that the two
attributes are (strongly) correlated for the given group of people.
Correlation Analysis (Numeric Data)
• For numeric attributes, we can evaluate the correlation between two attributes, A and
B, by computing the correlation coefficient (also known as Pearson’s product
moment coefficient, named after its inventor, Karl Pearson)
• Correlation coefficient (also called Pearson’s product moment coefficient):
rA,B = Σi (ai − Ā)(bi − B̄) / (n σA σB) = (Σ(ai bi) − n Ā B̄) / (n σA σB)
where n is the number of tuples, Ā and B̄ are the respective means of A and B, σA and
σB are the respective standard deviations of A and B, and Σ(ai bi) is the sum of the AB
cross-product.
• If rA,B > 0, A and B are positively correlated (A’s values increase as B’s do). The
higher the value, the stronger the correlation.
• rA,B = 0: no linear correlation; rA,B < 0: negatively correlated
25
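A quick NumPy check of rA,B on two illustrative numeric attributes (the values are made up):

```python
import numpy as np

# Illustrative numeric attributes A and B.
A = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
B = np.array([1.0, 3.0, 2.5, 6.0, 7.5])

# Pearson's product moment coefficient, computed from the definition above.
n = len(A)
r = (np.sum(A * B) - n * A.mean() * B.mean()) / (n * A.std() * B.std())

# The same value via the built-in correlation matrix.
print(r, np.corrcoef(A, B)[0, 1])
```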
Covariance (Numeric Data)
26
• In probability theory and statistics, correlation and covariance are two
similar measures for assessing how much two attributes change together.
• Consider two numeric attributes A and B, and a set of n observations
{(a1,b1),...,(an,bn)}.
• The mean values of A and B, respectively, are also known as the expected
values of A and B, that is,
E(A) = Ā = (1/n) Σi ai and E(B) = B̄ = (1/n) Σi bi
• The covariance between A and B is defined as
Cov(A, B) = E[(A − Ā)(B − B̄)] = (1/n) Σi (ai − Ā)(bi − B̄)
Covariance (Numeric Data)
27
• Correlation coefficient:
rA,B = Cov(A, B) / (σA σB)
where σA and σB are the standard deviations of A and B,
respectively.
• It can also be shown that
Cov(A, B) = E(A · B) − Ā B̄
This equation may simplify calculations.
Positive covariance: If Cov(A,B)> 0, then A and B both tend to be larger than their
expected values.
Negative covariance: If Cov(A,B)< 0 then if A is larger than its expected value, B is
likely to be smaller than its expected value.
Independence: if A and B are independent, then Cov(A,B) = 0, but the converse is not true:
Some pairs of random variables may have a covariance of 0 but are not independent.
Only under some additional assumptions (e.g., the data follow multivariate normal
distributions) does a covariance of 0 imply independence
Co-Variance: An Example
Consider a table presenting a
simplified example of stock prices
observed at five time points for
AllElectronics and HighTech, a
high-tech company.
If the stocks are affected by the
same industry trends, will their
prices rise or fall together?
Expected values:
Therefore, given the positive covariance we can say that stock prices for both
companies rise together.
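A sketch of the covariance computation; the five price pairs below are assumed values matching the textbook-style table for this example (the table itself is not in the extracted slides):

```python
import numpy as np

# Assumed stock prices at five time points (illustrative values).
all_electronics = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
high_tech       = np.array([20.0, 10.0, 14.0, 5.0, 5.0])

mean_a = all_electronics.mean()   # E(A) = 4.0
mean_b = high_tech.mean()         # E(B) = 10.8

# Cov(A, B) = E(A*B) - E(A)*E(B)
cov_ab = np.mean(all_electronics * high_tech) - mean_a * mean_b
print(cov_ab)   # 7.0 > 0: the two prices tend to rise together
```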
Data Reduction
Data Reduction Strategies
• Data reduction: Obtain a reduced representation of the data set that is much
smaller in volume but yet produces the same (or almost the same) analytical
results
• Why data reduction? — A database/data warehouse may store terabytes of data.
Complex data analysis may take a very long time to run on the complete data set.
• Data reduction strategies
• Dimensionality reduction, e.g., remove unimportant attributes
• Wavelet transforms
• Principal Components Analysis (PCA)
• Feature subset selection, feature creation
• Numerosity reduction (some simply call it: Data Reduction)
• Regression and Log-Linear Models
• Histograms, clustering, sampling
• Data cube aggregation
• Data compression
30
31
Data Reduction 1:
Dimensionality Reduction
• Curse of dimensionality
• When dimensionality increases, data becomes increasingly sparse
• Density and distance between points, which is critical to clustering, outlier analysis,
becomes less meaningful
• The possible combinations of subspaces will grow exponentially
• Dimensionality reduction
• Avoid the curse of dimensionality
• Help eliminate irrelevant features and reduce noise
• Reduce time and space required in data mining
• Allow easier visualization
• Dimensionality reduction techniques
• Wavelet transforms
• Principal Component Analysis
• Supervised and nonlinear techniques (e.g., feature selection)
32
Data Reduction Strategies
• Data reduction strategies include
• dimensionality reduction,
• numerosity reduction, and
• data compression.
• Dimensionality reduction is the process of reducing the number of random
variables or attributes under consideration.
• Dimensionality reduction methods include wavelet transforms and principal components
analysis which transform or project the original data onto a smaller space.
33
Data Reduction Strategies
• Numerosity reduction techniques replace the original data volume by
alternative, smaller forms of data representation.
• These techniques may be parametric or non- parametric.
•For parametric methods, a model is used to estimate the
data, so that typically only the data parameters need to be
stored, instead of the actual data. (Outliers may also be
stored.)
• Regression and log-linear models are examples.
•Nonparametric methods for storing reduced
representations of the data include
• histograms,
• clustering,
• sampling, and
• data cube aggregation.
34
Data Reduction Strategies
• In data compression, transformations are applied so as to obtain a reduced or
“compressed” representation of the original data.
• If the original data can be reconstructed from the compressed data without any information
loss, the data reduction is called lossless.
• If, instead, we can reconstruct only an approximation of the original data, then the data
reduction is called lossy.
• There are several lossless algorithms for string compression; however, they typically allow
only limited data manipulation.
• Dimensionality reduction and numerosity reduction techniques can also be
considered forms of data compression.
35
[Figure: two sine waves, the two sine waves with added noise, and the corresponding frequency-domain view]
• Fourier transform
• Wavelet transform
https://www.youtube.com/watch?v=ZnmvUCtUAEE&t=331s
https://www.youtube.com/watch?v=QX1-xGVFqmw&t=206s
36
What Is Wavelet Transform?
• The discrete wavelet transform (DWT) is a linear signal processing technique
that, when applied to a data vector X, transforms it to a numerically different
vector, X′, of wavelet coefficients.
• The two vectors are of the same length.
• When applying this technique to data reduction, we consider each tuple as an n-
dimensional data vector, that is, X = (x1,x2,...,xn), depicting n measurements
made on the tuple from n database attributes.
Question. How can this technique be useful for data reduction if the wavelet
transformed data are of the same length as the original data?
37
Wavelet Transformation
• The DWT is closely related to the Discrete Fourier Transform (DFT), a signal
processing technique involving sines and cosines.
• In general, however, the DWT achieves better lossy compression.
• That is, if the same number of coefficients is retained for a DWT and a DFT of
a given data vector, the DWT version will provide a more accurate
approximation of the original data.
• Hence, for an equivalent approximation, the DWT requires less space than the
DFT.
• Unlike the DFT, wavelets are quite localized in space, contributing to the
conservation of local detail.
• There is only one DFT, yet there are several families of DWTs.
• Popular wavelet transforms include the Haar-2, Daubechies-4, and
Daubechies-6.
38
Wavelet Transformation
The general procedure for applying a discrete wavelet transform uses a
hierarchical pyramid algorithm that halves the data at each iteration, resulting in
fast computational speed. The method is as follows:
39
Wavelet Transformation
DWT Method:
40
Wavelet Decomposition
• Wavelets: A math tool for space-efficient hierarchical decomposition
of functions
• S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S’ = [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]
• Compression: many small detail coefficients can be replaced by 0’s,
and only the significant coefficients are retained
41
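A minimal sketch of the hierarchical (pyramid) Haar decomposition that produces the coefficients above: at each level, keep the pairwise averages and the pairwise half-differences, then recurse on the averages.

```python
def haar_dwt(values):
    """Un-normalized Haar decomposition: [overall average, detail coefficients ...]."""
    data = list(values)
    details = []
    while len(data) > 1:
        averages = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
        diffs = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
        details = diffs + details   # finer-level details go to the right
        data = averages
    return data + details

print(haar_dwt([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```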
Haar Wavelet Coefficients
[Figure: Haar wavelet decomposition of the original frequency distribution 2, 2, 0, 2, 3, 5, 4, 4 into the
coefficients 2.75, −1.25, 0.5, 0, 0, −1, −1, 0, showing the coefficient “supports” and the hierarchical
decomposition structure (a.k.a. “error tree”)]
42
Why Wavelet Transform?
• Use hat-shape filters
• Emphasize region where points cluster
• Suppress weaker information in their boundaries
• Effective removal of outliers
• Insensitive to noise, insensitive to input order
• Multi-resolution
• Detect arbitrary shaped clusters at different scales
• Efficient
• Complexity O(N)
• Only applicable to low dimensional data
Principal Component Analysis (PCA)
• Find a projection that captures the largest amount of variation in data
• The original data are projected onto a much smaller space, resulting in
dimensionality reduction. We find the eigenvectors of the covariance matrix,
and these eigenvectors define the new space
43
[Figure: data points in the (x1, x2) plane with the first principal component axis e]
Principal Component Analysis (Steps)
• Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors (principal
components) that can be best used to represent data
• Normalize input data: Each attribute falls within the same range
• Compute k orthonormal (unit) vectors, i.e., principal components
• Each input data (vector) is a linear combination of the k principal component
vectors
• The principal components are sorted in order of decreasing “significance” or
strength
• Since the components are sorted, the size of the data can be reduced by
eliminating the weak components, i.e., those with low variance (i.e., using the
strongest principal components, it is possible to reconstruct a good
approximation of the original data)
• Works for numeric data only
44
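A brief scikit-learn sketch of the PCA steps above (the data are illustrative; scikit-learn centers the data internally and sorts components by decreasing explained variance):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative numeric data: 6 tuples, 3 attributes.
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.9],
              [2.2, 2.9, 0.8],
              [1.9, 2.2, 0.9],
              [3.1, 3.0, 0.4],
              [2.3, 2.7, 0.7]])

# Normalize input so each attribute falls within a comparable range.
X_scaled = StandardScaler().fit_transform(X)

# Keep the k = 2 strongest principal components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X_scaled)

print(pca.explained_variance_ratio_)  # "significance" of each component
print(X_reduced.shape)                # (6, 2): reduced from 3 to 2 dimensions
```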
Attribute Subset Selection
• Another way to reduce dimensionality of data
• Redundant attributes
• Duplicate much or all of the information contained in one or more
other attributes
• E.g., purchase price of a product and the amount of sales tax paid
• Irrelevant attributes
• Contain no information that is useful for the data mining task at
hand
• E.g., students' ID is often irrelevant to the task of predicting
students' GPA
45
Attribute Subset Selection
“How can we find a ‘good’ subset of the original attributes?”
• For n attributes, there are 2^n possible subsets.
• An exhaustive search for the optimal subset of attributes can be prohibitively
expensive, especially as n and the number of data classes increase.
• Therefore, heuristic methods that explore a reduced search space are commonly
used for attribute subset selection.
46
Heuristic Search in Attribute Selection
• There are 2^d possible attribute combinations of d attributes
• Typical heuristic attribute selection methods:
• Best single attribute under the attribute independence
assumption: choose by significance tests
• Best step-wise feature selection:
• The best single attribute is picked first
• Then the next best attribute conditioned on the first, ...
• Step-wise attribute elimination:
• Repeatedly eliminate the worst attribute
• Best combined attribute selection and elimination
• Optimal branch and bound:
• Use attribute elimination and backtracking
47
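A sketch of best step-wise (forward) feature selection using scikit-learn's SequentialFeatureSelector; the estimator and data set are illustrative choices, not prescribed by the slides:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Forward selection: start empty, greedily add the next best attribute.
selector = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=3),
    n_features_to_select=2,
    direction="forward",      # use "backward" for step-wise elimination
    cv=5,
)
selector.fit(X, y)
print(selector.get_support())  # boolean mask of the selected attributes
```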
48
Attribute Creation (Feature Generation)
• Create new attributes (features) that can capture the important
information in a data set more effectively than the original ones
• Three general methodologies
• Attribute extraction
• Domain-specific
• Mapping data to new space (see: data reduction)
• E.g., Fourier transformation, wavelet transformation, manifold approaches (not covered)
• Attribute construction
• Combining features (see: discriminative frequent patterns in Chapter 7)
• Data discretization
Data Reduction 2:
Numerosity Reduction
• Reduce data volume by choosing alternative, smaller forms of
data representation
• Parametric methods (e.g., regression)
• Assume the data fits some model, estimate model parameters,
store only the parameters, and discard the data (except
possible outliers)
• Ex.: Log-linear models—obtain the value at a point in m-D
space as the product of appropriate marginal subspaces
• Non-parametric methods
• Do not assume models
• Major families: histograms, clustering, sampling, …
49
Parametric Data Reduction:
Regression and Log-Linear Models
• Linear regression
• Data modeled to fit a straight line
• Often uses the least-square method to fit the line
• Multiple regression
• Allows a response variable Y to be modeled as a linear
function of multidimensional feature vector
• Log-linear model
• Approximates discrete multidimensional probability
distributions
50
Regression Analysis
• Regression analysis: A collective name for
techniques for the modeling and analysis of
numerical data consisting of values of a
dependent variable (also called response
variable or measurement) and of one or more
independent variables (aka. explanatory
variables or predictors)
• The parameters are estimated so as to give a
"best fit" of the data
• Most commonly the best fit is evaluated by using
the least squares method, but other criteria have
also been used
• Used for prediction (including
forecasting of time-series data),
inference, hypothesis testing, and
modeling of causal relationships
51
[Figure: scatter plot of y versus x with the fitted regression line y = x + 1; for a data point (X1, Y1), the
fitted value Y1′ on the line is obtained by the least squares method]
Regression Analysis and
Log-Linear Models
• Linear regression: Y = w X + b
• Two regression coefficients, w(slope) and b(intercept), specify the line and are
to be estimated by using the data at hand
• Using the least squares criterion to the known values of Y1, Y2, …, X1, X2, ….
• Multiple regression: Y = b0 + b1 X1 + b2 X2
• Multiple linear regression is an extension of (simple) linear regression, which
allows a response variable, y, to be modeled as a linear function of two or more
predictor variables.
53
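A small scikit-learn sketch of multiple linear regression, Y = b0 + b1·X1 + b2·X2, fitted by least squares; the data are synthetic, generated only for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: response y generated from two predictors plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 2))          # columns: X1, X2
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 50)

model = LinearRegression().fit(X, y)
print(model.intercept_)   # ≈ b0 = 1.0
print(model.coef_)        # ≈ [b1, b2] = [2.0, -0.5]

# Parametric reduction: keep only the parameters and predict from them.
print(model.predict([[3.0, 4.0]]))
```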
Regress Analysis and
Log-Linear Models
Log-linear models:
• Log-linear models approximate discrete multidimensional probability distributions.
• Given a set of tuples in n dimensions (e.g., described by n attributes), we can consider
each tuple as a point in an n-dimensional space.
• Log-linear models can be used to estimate the probability of each point in a
multidimensional space for a set of discretized attributes, based on a smaller subset of
dimensional combinations.
• This allows a higher-dimensional data space to be constructed from lower-dimensional
spaces.
• Log-linear models are therefore also useful for:
• dimensionality reduction: since the lower-dimensional points together typically
occupy less space than the original data points
• data smoothing: since aggregate estimates in the lower-dimensional space are less
subject to sampling variations than the estimates in the higher-dimensional space.
54
Histogram Analysis
• Divide data into buckets and store
average (sum) for each bucket
• Partitioning rules:
• Equal-width: equal bucket
range
• Equal-frequency (or equal-
depth)
55
[Figure: equal-width histogram; x-axis bucket values 10,000–90,000, y-axis counts from 0 to 40]
Histogram Analysis
• Example: The following data are a list of AllElectronics prices for commonly sold items (rounded to the nearest dollar).
• The numbers have been sorted: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18,
18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30.
56
Histogram Analysis
58
“How are the buckets determined and the attribute values partitioned?” There are
several partitioning rules, including the following:
• Equal-width: In an equal-width histogram, the width of each bucket range is
uniform (e.g., the width of $10 for the buckets)
• Equal-frequency (or equal-depth): In an equal-frequency histogram, the buckets
are created so that, roughly, the frequency of each bucket is constant (i.e., each
bucket contains roughly the same number of contiguous data samples).
• Histograms are highly effective at approximating both sparse and dense data, as
well as highly skewed and uniform data.
• The histograms described before for single attributes can be extended for multiple
attributes.
• Multidimensional histograms can capture dependencies between attributes.
• These histograms have been found effective in approximating data with up to five
attributes.
• More studies are needed regarding the effectiveness of multidimensional
histograms for high dimensionalities.
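A short NumPy/pandas sketch of equal-width and equal-frequency bucketing on the price list shown earlier:

```python
import numpy as np
import pandas as pd

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15,
          15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20,
          20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

# Equal-width buckets of width $10.
counts, edges = np.histogram(prices, bins=[0, 10, 20, 30])
print(counts, edges)   # count of prices per bucket

# Equal-frequency (equal-depth) buckets: roughly the same count per bucket.
equal_depth = pd.qcut(pd.Series(prices), q=3)
print(equal_depth.value_counts().sort_index())
```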
Clustering
• Clustering techniques consider data tuples as objects.
• They partition the objects into groups, or clusters, so that objects within a cluster are
“similar” to one another and “dissimilar” to objects in other clusters.
• Similarity is commonly defined in terms of how “close” the objects are in space,
based on a distance function.
• The “quality” of a cluster may be represented by its diameter, the maximum
distance between any two objects in the cluster.
• Centroid distance is an alternative measure of cluster quality and is defined as the
average distance of each cluster object from the cluster centroid (denoting the
“average object,” or average point in space for the cluster).
• In data reduction, the cluster representations of the data are used to replace the
actual data.
59
Sampling
• Sampling: obtaining a small sample s to represent the whole data set
N
• Allow a mining algorithm to run in complexity that is potentially sub-
linear to the size of the data
• Key principle: Choose a representative subset of the data
• Simple random sampling may have very poor performance in the
presence of skew
• Develop adaptive sampling methods, e.g., stratified sampling:
• Note: Sampling may not reduce database I/Os (page at a time)
60
Types of Sampling
• Simple random sampling
• There is an equal probability of selecting any particular item
• Sampling without replacement
• Once an object is selected, it is removed from the population
• Sampling with replacement
• A selected object is not removed from the population
• Stratified sampling:
• Partition the data set, and draw samples from each partition
(proportionally, i.e., approximately the same percentage of the
data)
• Used in conjunction with skewed data
61
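A brief pandas sketch of the sampling variants above; the dataframe and its skewed strata are illustrative:

```python
import pandas as pd

# Illustrative data set with skewed strata.
df = pd.DataFrame({
    "region": ["north"] * 80 + ["south"] * 15 + ["west"] * 5,
    "sales":  range(100),
})

# Simple random sample without replacement (SRSWOR), s = 10 tuples.
srswor = df.sample(n=10, replace=False, random_state=1)

# Simple random sample with replacement (SRSWR).
srswr = df.sample(n=10, replace=True, random_state=1)

# Stratified sample: draw ~20% from each region so small strata are still represented.
stratified = df.groupby("region").sample(frac=0.2, random_state=1)
print(stratified["region"].value_counts())
```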
62
Sampling: With or without Replacement
[Figure: SRSWOR and SRSWR samples drawn from the raw data]
Sampling: Cluster or
Stratified Sampling
63
[Figure: raw data (left) and the corresponding cluster/stratified sample (right)]
64
Data Cube Aggregation
• The lowest level of a data cube (base cuboid)
• The aggregated data for an individual entity of interest
• E.g., a customer in a phone calling data warehouse
• Multiple levels of aggregation in data cubes
• Further reduce the size of data to deal with
• Reference appropriate levels
• Use the smallest representation which is enough to solve the
task
• Queries regarding aggregated information should be answered
using data cube, when possible
65
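A small pandas sketch of rolling up a base level (daily sales per branch) to a coarser aggregation level; the column names and values are illustrative assumptions:

```python
import pandas as pd

daily_sales = pd.DataFrame({
    "branch": ["A", "A", "A", "B", "B", "B"],
    "date":   pd.to_datetime(["2009-01-05", "2009-02-10", "2010-03-15",
                              "2009-01-20", "2010-02-25", "2010-03-30"]),
    "amount": [400, 350, 500, 200, 300, 250],
})

# Roll up from the daily base level to (branch, year) totals.
daily_sales["year"] = daily_sales["date"].dt.year
yearly = daily_sales.groupby(["branch", "year"], as_index=False)["amount"].sum()
print(yearly)
```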
Data Reduction 3: Data Compression
• String compression
• There are extensive theories and well-tuned algorithms
• Typically lossless, but only limited manipulation is possible
without expansion
• Audio/video compression
• Typically lossy compression, with progressive refinement
• Sometimes small fragments of signal can be reconstructed without
reconstructing the whole
• Time sequences are not audio
• Typically short and vary slowly with time
• Dimensionality and numerosity reduction may also be considered
as forms of data compression
66
Data Compression
[Figure: original data reduced to compressed data; lossless compression allows the original data to be
reconstructed exactly, whereas lossy compression yields only approximated data]
Data Transformation
68
Data Transformation
• The data are transformed or consolidated into forms appropriate for mining.
• A function maps the entire set of values of a given attribute to a new set of
replacement values such that each old value can be identified with one of the new values.
Strategies for data transformation include the following:
1. Smoothing: which works to remove noise from the data. Techniques include
binning, regression, and clustering.
2. Attribute construction (or feature construction): new attributes are constructed
and added from the given set of attributes to help the mining process.
• For example, an age attribute can be constructed from a date_of_birth attribute.
3. Aggregation: Summary or aggregation operations are applied to the data.
• For example, the daily sales data may be aggregated to compute monthly
and annual total amounts.
• This step is typically used in constructing a data cube for data analysis at
multiple abstraction levels.
69
Data Transformation
• The data are transformed or consolidated into forms appropriate for mining.
• A function maps the entire set of values of a given attribute to a new set of
replacement values such that each old value can be identified with one of the new values.
Strategies for data transformation include the following:
4. Normalization: the attribute data are scaled so as to fall within a smaller range,
such as −1.0 to 1.0, or 0.0 to 1.0.
5. Discretization: the raw values of a numeric attribute (e.g., age) are replaced by
• interval labels (e.g., 0–10, 11–20, etc.) or
• conceptual labels (e.g., youth, adult, senior).
6. Concept hierarchy generation for nominal data: attributes such as street can
be generalized to higher-level concepts, like city or country.
Data Transformation by Normalization
70
• Normalizing the data attempts to give all attributes an equal weight.
• Normalization is particularly useful for classification algorithms involving
neural networks or distance measurements such as nearest-neighbor
classification and clustering.
• For distance-based methods, normalization helps prevent attributes with initially
large ranges (e.g., income) from outweighing attributes with initially smaller
ranges (e.g., age or some binary attributes).
• There are many methods for data normalization:
• min-max normalization,
• z-score normalization, and
• normalization by decimal scaling.
• For our discussion,
let A be a numeric attribute with n observed values, v1,v2,...,vn.
Min-max normalization
71
EXAMPLE: Suppose that the minimum and maximum values for the attribute income
are $12,000 and $98,000, respectively. We would like to map income to the range
[0.0,1.0]. By min-max normalization, a value of $73,600 for income is transformed to a
VALUE = ?
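A worked answer, assuming the usual min-max formula v′ = (v − minA)/(maxA − minA) × (new_maxA − new_minA) + new_minA:
v′ = (73,600 − 12,000)/(98,000 − 12,000) × (1.0 − 0.0) + 0.0 = 61,600/86,000 ≈ 0.716.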
z-score normalization
(or zero-mean normalization)
72
EXAMPLE: Suppose that the mean and standard deviation of the values for
the attribute income are $54,000 and $16,000, respectively. With z-score normalization,
a value of $73,600 for income is transformed to a VALUE= ?
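A worked answer, assuming the usual z-score formula v′ = (v − Ā)/σA:
v′ = (73,600 − 54,000)/16,000 = 19,600/16,000 = 1.225.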
Normalization by decimal scaling
73
EXAMPLE: Suppose that the recorded values of A range from −986 to 917.
The maximum absolute value of A is 986. To normalize by decimal
scaling, we therefore divide each value by 1000 (i.e., j = 3) so that −986
normalizes to −0.986 and 917 normalizes to 0.917.
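A compact sketch applying all three normalizations to the income and decimal-scaling examples above:

```python
def min_max(v, old_min, old_max, new_min=0.0, new_max=1.0):
    # Min-max normalization to [new_min, new_max].
    return (v - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

def z_score(v, mean, std):
    # Zero-mean normalization.
    return (v - mean) / std

def decimal_scaling(v, j):
    # Divide by 10^j so that the maximum absolute value is below 1.
    return v / (10 ** j)

print(min_max(73_600, 12_000, 98_000))                    # ≈ 0.716
print(z_score(73_600, 54_000, 16_000))                    # 1.225
print(decimal_scaling(-986, 3), decimal_scaling(917, 3))  # -0.986, 0.917
```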
Types of Attributes
Discretization
• Discretization: Divide the range of a continuous attribute into intervals
• Interval labels can then be used to replace actual data values
• Reduce data size by discretization
• Supervised vs. unsupervised
• Split (top-down) vs. merge (bottom-up)
• Discretization can be performed recursively on an attribute
• Prepare for further analysis, e.g., classification
75
76
Data Discretization Methods
• Typical methods: All the methods can be applied recursively
• Binning
• Top-down split, unsupervised
• Histogram analysis
• Top-down split, unsupervised
• Clustering analysis (unsupervised, top-down split or bottom-up
merge)
• Decision-tree analysis (supervised, top-down split)
• Correlation (e.g., χ2) analysis (unsupervised, bottom-up merge)
Simple Discretization: Binning
• Equal-width (distance) partitioning
• Divides the range into N intervals of equal size: uniform grid
• if A and B are the lowest and highest values of the attribute, the width of intervals
will be: W = (B − A)/N.
• The most straightforward, but outliers may dominate presentation
• Skewed data is not handled well
• Equal-depth (frequency) partitioning
• Divides the range into N intervals, each containing approximately same number
of samples
• Good data scaling
• Managing categorical attributes can be tricky
77
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
78
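A short sketch reproducing the equal-depth partitioning and the two smoothings above:

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]

# Partition the sorted data into 3 equal-frequency bins.
n_bins = 3
size = len(prices) // n_bins
bins = [prices[i * size:(i + 1) * size] for i in range(n_bins)]

# Smoothing by bin means.
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: replace each value by the closest boundary.
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
             for b in bins]

print(bins)       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```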
79
Discretization Without Using Class Labels
(Binning vs. Clustering)
[Figure: the same data discretized by equal interval width (binning), equal frequency (binning), and
K-means clustering; K-means clustering leads to better results]
80
Discretization by
Classification & Correlation Analysis
• Classification (e.g., decision tree analysis)
• Supervised: Given class labels, e.g., cancerous vs. benign
• Using entropy to determine split point (discretization point)
• Top-down, recursive split
• Details to be covered in Chapter 7
• Correlation analysis (e.g., Chi-merge: χ2-based discretization)
• Supervised: use class information
• Bottom-up merge: find the best neighboring intervals (those having similar
distributions of classes, i.e., low χ2 values) to merge
• Merge performed recursively, until a predefined stopping condition
Concept Hierarchy Generation
• Concept hierarchy organizes concepts (i.e., attribute values) hierarchically and
is usually associated with each dimension in a data warehouse
• Concept hierarchies facilitate drilling and rolling in data warehouses to view
data in multiple granularity
• Concept hierarchy formation: Recursively reduce the data by collecting and
replacing low level concepts (such as numeric values for age) by higher level
concepts (such as youth, adult, or senior)
• Concept hierarchies can be explicitly specified by domain experts and/or data
warehouse designers
• Concept hierarchy can be automatically formed for both numeric and nominal
data. For numeric data, use discretization methods shown.
81
Concept Hierarchy Generation
for Nominal Data
• Specification of a partial/total ordering of attributes explicitly at
the schema level by users or experts
• street < city < state < country
• Specification of a hierarchy for a set of values by explicit data
grouping
• {Urbana, Champaign, Chicago} < Illinois
• Specification of only a partial set of attributes
• E.g., only street < city, not others
• Automatic generation of hierarchies (or attribute levels) by the
analysis of the number of distinct values
• E.g., for a set of attributes: {street, city, state, country}
82
Automatic Concept Hierarchy Generation
• Some hierarchies can be automatically generated based on the analysis of the
number of distinct values per attribute in the data set
• The attribute with the most distinct values is placed at the lowest level of
the hierarchy
• Exceptions, e.g., weekday, month, quarter, year
83
[Figure: generated hierarchy, from most general to most specific — country (15 distinct values),
province_or_state (365 distinct values), city (3,567 distinct values), street (674,339 distinct values)]
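A tiny sketch of ordering attributes by their number of distinct values, with the fewest distinct values placed at the most general level; the dataframe and its values are illustrative:

```python
import pandas as pd

locations = pd.DataFrame({
    "country":           ["Canada", "Canada", "Canada", "USA", "USA", "USA"],
    "province_or_state": ["BC", "BC", "Ontario", "Illinois", "Illinois", "New York"],
    "city":              ["Vancouver", "Vancouver", "Toronto", "Chicago", "Urbana", "New York"],
    "street":            ["Main St", "Oak St", "King St", "Lake St", "Green St", "5th Ave"],
})

# Fewer distinct values => higher (more general) level in the hierarchy.
hierarchy = locations.nunique().sort_values()
print(hierarchy)   # country < province_or_state < city < street
```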
Summary
• Data quality: accuracy, completeness, consistency, timeliness, believability,
interpretability
• Data cleaning: e.g. missing/noisy values, outliers
• Data integration from multiple sources:
• Entity identification problem
• Remove redundancies
• Detect inconsistencies
• Data reduction
• Dimensionality reduction
• Numerosity reduction
• Data compression
• Data transformation and data discretization
• Normalization
• Concept hierarchy generation
84
Thank You
More Related Content

PPTX
CST 466 exam help data mining mod2.pptx
PPTX
Data preprocessing
PPT
Chapter 2 Cond (1).ppt
PDF
Cs501 data preprocessingdw
PPT
Data preprocessing
PPT
Data Preprocessing_17924109858fc09abd41bc880e540c13.ppt
PPT
Unit 3-2.ppt
PPTX
Data Preprocessing
CST 466 exam help data mining mod2.pptx
Data preprocessing
Chapter 2 Cond (1).ppt
Cs501 data preprocessingdw
Data preprocessing
Data Preprocessing_17924109858fc09abd41bc880e540c13.ppt
Unit 3-2.ppt
Data Preprocessing

Similar to Data Preprocessing in Data Mining Lecture Slide (20)

PPT
Data Preprocessing
PPT
03 preprocessing
PPT
Data Preprocessing in Pharmaceutical.ppt
PPT
03Preprocessing.ppt Processing in Computer Science
PPT
Data Mining: Concepts and Techniques (3rd ed.) - Chapter 3 preprocessing
PDF
Preprocessing Step in Data Cleaning - Data Mining
PPT
data Preprocessing different techniques summarized
PPTX
03Preprocessing_20160222datamining5678.pptx
PDF
12.Data processing and concepts.pdf
PDF
data processing.pdf
PPT
Upstate CSCI 525 Data Mining Chapter 3
PPT
preprocessing so that u can you these thing in your daily lifeppt
PPT
chapter 3 - Preprocessing data mining ppt
PPT
DATA PREPROCESSING NOTES ABOUT DATA MINING AND DATA
PPT
03tahapanpengolahanPreprocessingdata.ppt
PPT
Preprocessing steps in Data mining steps
PPT
03Preprocessing_DataMining_Conce ddd.ppt
PPT
03PreprocessindARA AJAJAJJAJAJAJJJAg.ppt
PPT
Data Preprocessing in research methodology
PPT
Preprocessing of data mining process.ppt
Data Preprocessing
03 preprocessing
Data Preprocessing in Pharmaceutical.ppt
03Preprocessing.ppt Processing in Computer Science
Data Mining: Concepts and Techniques (3rd ed.) - Chapter 3 preprocessing
Preprocessing Step in Data Cleaning - Data Mining
data Preprocessing different techniques summarized
03Preprocessing_20160222datamining5678.pptx
12.Data processing and concepts.pdf
data processing.pdf
Upstate CSCI 525 Data Mining Chapter 3
preprocessing so that u can you these thing in your daily lifeppt
chapter 3 - Preprocessing data mining ppt
DATA PREPROCESSING NOTES ABOUT DATA MINING AND DATA
03tahapanpengolahanPreprocessingdata.ppt
Preprocessing steps in Data mining steps
03Preprocessing_DataMining_Conce ddd.ppt
03PreprocessindARA AJAJAJJAJAJAJJJAg.ppt
Data Preprocessing in research methodology
Preprocessing of data mining process.ppt
Ad

More from Nehal668249 (7)

PDF
Chapter 14 slides Distributed System Presentation
PDF
Chapter 15 slides Distributed System Presentation
PDF
Distributed System Presentation Chapter 2
PDF
Distributed System Presentation Chapter 1
PDF
Architectual Models Distributed System Presentation
PDF
Distributed System Introduction Presentation
PDF
Overview of Data Warehousing and Data Mining Lecture Slide
Chapter 14 slides Distributed System Presentation
Chapter 15 slides Distributed System Presentation
Distributed System Presentation Chapter 2
Distributed System Presentation Chapter 1
Architectual Models Distributed System Presentation
Distributed System Introduction Presentation
Overview of Data Warehousing and Data Mining Lecture Slide
Ad

Recently uploaded (20)

PDF
Computing-Curriculum for Schools in Ghana
PDF
VCE English Exam - Section C Student Revision Booklet
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
Cell Types and Its function , kingdom of life
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PDF
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PPTX
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
PDF
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
PDF
Anesthesia in Laparoscopic Surgery in India
PDF
Classroom Observation Tools for Teachers
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PPTX
Pharma ospi slides which help in ospi learning
PPTX
master seminar digital applications in india
PPTX
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
Computing-Curriculum for Schools in Ghana
VCE English Exam - Section C Student Revision Booklet
Microbial diseases, their pathogenesis and prophylaxis
Cell Types and Its function , kingdom of life
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
102 student loan defaulters named and shamed – Is someone you know on the list?
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
Anesthesia in Laparoscopic Surgery in India
Classroom Observation Tools for Teachers
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Pharma ospi slides which help in ospi learning
master seminar digital applications in india
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
Pharmacology of Heart Failure /Pharmacotherapy of CHF
O5-L3 Freight Transport Ops (International) V1.pdf

Data Preprocessing in Data Mining Lecture Slide

  • 2. Unit 2: Data Preprocessing • ETL in Data Warehousing: • Data Integration and Transformation, Data Cleaning, • Data Reduction: • Data Cube Aggregation, • Dimensionality Reduction, • Data Discretization and Concept Hierarchy Generation, • Data Compression. 2 2
  • 3. Data Quality: Why Preprocess the Data? Measures for data quality: A multidimensional view • Accuracy: correct or wrong, accurate or not • Completeness: not recorded, unavailable, … • Consistency: some modified but some not, dangling, … • Timeliness: timely update? • Believability: how trustable the data are correct? • Interpretability: how easily the data can be understood? 3
  • 4. Major Tasks in Data Preprocessing • Data cleaning • Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies, etc. • Data integration • Integration of multiple databases, data cubes, or files • Data reduction • Dimensionality reduction • Numerosity reduction • Data compression • Data transformation and data discretization • Normalization • Concept hierarchy generation 4
  • 5. Data Cleaning • Data in the Real World Is Dirty: Lots of potentially incorrect data, e.g., instrument faulty, human or computer error, transmission error, etc. • incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data • e.g., Occupation=“ ” (missing data) • noisy: containing noise, errors, or outliers • e.g., Salary=“−10” (an error) • inconsistent: containing discrepancies in codes or names, e.g., • Age=“42”, Birthday=“03/07/2010” • Was rating “1, 2, 3”, now rating “A, B, C” • discrepancy between duplicate records • Intentional (e.g., disguised missing data) • Jan. 1 as everyone’s birthday? 5
  • 6. Incomplete (Missing) Data •Data is not always available •E.g., many tuples have no recorded value for several attributes, such as customer income in sales data. •Missing data may be due to •equipment malfunction. •inconsistent with other recorded data and thus deleted. •data not entered due to misunderstanding. •certain data may not be considered important at the time of entry. •not register history or changes of the data. 6
  • 7. How to Handle Missing Data? • Ignore the tuple: usually done when class label is missing (when doing classification) • not effective when the % of missing values per attribute varies considerably. • Fill in the missing value manually: tedious + infeasible? • this approach is time consuming and may not be feasible given a large data set with many missing values. • Fill in it automatically with • a global constant : e.g., “unknown”, a new class?! • the attribute mean or median • the attribute mean for all samples belonging to the same class: smarter • the most probable value: inference-based such as Bayesian formula or decision tree induction. 7
  • 8. 8 Noisy Data •Noise: random error or variance in a measured variable •Incorrect attribute values may be due to •faulty data collection instruments •data entry problems •data transmission problems •technology limitation •inconsistency in naming convention •Other data problems which require data cleaning •duplicate records •incomplete data •inconsistent data
  • 9. How to Handle Noisy Data? Binning •first sort data and partition into (equal-frequency) bins •then one can • smooth by bin means: each value in a bin is replaced by the mean value of the bin. • smooth by bin median: each bin value is replaced by the bin median. 9 • smooth by bin boundaries: minimum and maximum values in a given bin are identified as the bin boundaries. Each bin value is then replaced by the closest boundary value. etc.
  • 10. How to Handle Noisy Data? •Regression •Data smoothing can also be done by regression, a technique that con- forms data values to a function. •Linear regression involves finding the “best” line to fit two attributes (or variables) so that one attribute can be used to predict the other. •Multiple linear regression is an extension of linear regression, where more than two attributes are involved and the data are fit to a multidimensional surface. 10
  • 11. How to Handle Noisy Data? •Clustering or Outlier analysis: • Outliers may be detected by clustering, for example, where similar values are organized into groups, or “clusters.” Intuitively, values that fall outside of the set of clusters may be considered outliers 11
  • 12. How to Handle Noisy Data? •Combined computer and human inspection • detect suspicious values and check by human (e.g., deal with possible outliers) 12
  • 13. Data Cleaning as a Process Data discrepancy (illogical/unsuitable data ): how happen? • The first step in data cleaning as a process is discrepancy detection. • Discrepancies can be caused by several factors, including • poorly designed data entry forms that have many optional fields, • human error in data entry, • deliberate errors (e.g., respondents not wanting to divulge information about themselves), and • data decay (e.g., outdated addresses). • Discrepancies may also arise from inconsistent data representations and inconsistent use of codes. • Other sources of discrepancies include errors in instrumentation devices that record data and system errors. 13
  • 14. Data Cleaning as a Process Data discrepancy detection • Use metadata (e.g., domain, range, dependency, distribution) • Check field overloading • Field overloading is another error source that typically results when developers squeeze new attribute definitions into unused (bit) portions of already defined attributes (e.g., an unused bit of an attribute that has a value range that uses only, say, 31 out of 32 bits). • Check uniqueness rule, consecutive rule and null rule. • Use commercial tools • Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect errors and make corrections • Data auditing: by analyzing data to discover rules and relationship to detect violators (e.g., correlation and clustering to find outliers) 14
  • 15. Data Cleaning as a Process •Data migration and integration •Data migration tools: allow transformations to be specified •ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface •Integration of the two processes •Iterative and interactive (e.g., Potter’s Wheels) 15
  • 16. Data Integration • Data Warehousing/OLAP/Data mining often requires data integration—the merging of data from multiple data stores. • Data integration: Combines data from multiple sources into a coherent store. • Careful integration can help reduce and avoid redundancies and inconsistencies in the resulting data set. • This can help improve the accuracy and speed of the subsequent data mining process. 16 16
  • 17. Data Integration •The semantic heterogeneity and structure of data pose great challenges in data integration. •How can we match schema and objects from different sources? • This is the essence of the entity identification problem, •Are any attributes correlated? • correlation tests for numeric and nominal data. 17 17
  • 18. Entity identification problem in Data Integration • It is likely that your data analysis task will involve data integration, which combines data from multiple sources into a coherent data store, as in data warehousing. • These sources may include multiple databases, data cubes, or flat files. • There are a number of issues to consider during data integration. Schema integration and object matching can be tricky. • How can equivalent real-world entities from multiple data sources be matched up? • This is referred to as the entity identification problem. 18 18
  • 19. Entity identification problem in Data Integration • Identify real world entities from multiple data sources, • e.g., Bill Clinton = William Clinton || customer_id = cust_number ?? • Detecting and resolving data value conflicts • For the same real world entity, attribute values from different sources are different • Possible reasons: different representations, different scales, • e.g., Metric vs. British units 19 19
  • 20. Redundancy and Correlation Analysis in Data Integration • Redundant data occur often when integration of multiple databases • Object identification: The same attribute or object may have different names in different databases • Derivable data: One attribute may be a “derived” attribute in another table, e.g., annual revenue • Redundant attributes may be able to be detected by correlation analysis and covariance analysis • Careful integration of the data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality 20 20
  • 21. Correlation Analysis (Nominal Data) • For nominal data, a correlation relationship between two attributes, A and B, can be discovered by a χ2 (chi-square) test. • Suppose A has c distinct values, namely a1,a2,...ac. • B has r distinct values, namely b1, b2,...br. • The data tuples described by A and B can be shown as a contingency table, with the c values of A making up the columns and the r values of B making up the rows. • Let (Ai , Bj ) denote the joint event that attribute A takes on value ai and attribute B takes on value bj, that is, where (A = ai, B = bj). • Each and every possible (Ai, Bj) joint event has its own cell (or slot) in the table. 21
  • 22. Correlation Analysis (Nominal Data) • Χ2 (chi-square) test: χ2 value is computed as • where oij is the observed frequency (i.e., actual count) of the joint event (Ai , Bj ) and eij is the expected frequency of (Ai , Bj ), which can be computed as • where n is the number of data tuples, count (A = ai ) is the number of tuples having value ai for A, and count(B = bj) is the number of tuples having value bj for B. • i.e. • The χ2 statistic tests the hypothesis that A and B are independent, that is, there is no correlation between them.    Expected Expected Observed 2 2 ) (  22
  • 23. Example: χ2 (chi-square) test • Suppose that a group of 1500 people was surveyed. • The gender of each person was noted. • Each person was polled as to whether his or her preferred type of reading material was fiction or nonfiction. • Thus, we have two attributes, gender and preferred_reading. expected frequency for the cell (male, fiction) is
  • 24. Chi-Square Calculation: An Example Χ2 (chi-square) calculation (numbers in parenthesis are expected counts calculated based on the data distribution in the two categories) 93 . 507 840 ) 840 1000 ( 360 ) 360 200 ( 210 ) 210 50 ( 90 ) 90 250 ( 2 2 2 2 2           24 • For this 2 × 2 table, the degrees of freedom are (2 − 1)(2 − 1) = 1. • For 1 degree of freedom, the χ 2 value needed to reject the hypothesis at the 0.001 significance level is 10.828 (taken from the table of upper percentage points of the χ2 distribution, typically available from any textbook on statistics). • Since our computed value is above this, we can reject the hypothesis that gender and preferred reading are independent and conclude that the two attributes are (strongly) correlated for the given group of people.
  • 25. Correlation Analysis (Numeric Data) • For numeric attributes, we can evaluate the correlation between two attributes, A and B, by computing the correlation coefficient (also known as Pearson’s product moment coefficient, named after its inventor, Karl Pearson) • Correlation coefficient (also called Pearson’s product moment coefficient) where n is the number of tuples, and are the respective means of A and B, σA and σB are the respective standard deviation of A and B, and Σ(aibi) is the sum of the AB cross-product. • If rA,B > 0, A and B are positively correlated (A’s values increase as B’s). The higher, the stronger correlation. • rA,B = 0: independent; rAB < 0: negatively correlated A 25 B
  • 26. Covariance (Numeric Data) 26 • In probability theory and statistics, correlation and covariance are two similar measures for assessing how much two attributes change together. • Consider two numeric attributes A and B, and a set of n observations {(a1,b1),...,(an,bn)}. • The mean values of A and B, respectively, are also known as the expected values on A and B, that is • The covariance between A and B is defined as
  • 27. Covariance (Numeric Data) 27 • Correlation coefficient where σA and σB are the standard deviations of A and B, respectively. • It can also be shown that This equation may simplify calculations. Positive covariance: If Cov(A,B)> 0, then A and B both tend to be larger than their expected values. Negative covariance: If Cov(A,B)< 0 then if A is larger than its expected value, B is likely to be smaller than its expected value. Independence: Cov(A,B) = 0 but the converse is not true: Some pairs of random variables may have a covariance of 0 but are not independent. Only under some additional assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence
  • 28. Co-Variance: An Example Consider Table, which presents a simplified example of stock prices observed at five time points for AllElectronics and HighTech, a hightech company. If the stocks are affected by the same industry trends, will their prices rise or fall together? Expected values: Therefore, given the positive covariance we can say that stock prices for both companies rise together.
  • 30. Data Reduction Strategies • Data reduction: obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results • Why data reduction? — A database/data warehouse may store terabytes of data; complex data analysis may take a very long time to run on the complete data set. • Data reduction strategies • Dimensionality reduction, e.g., remove unimportant attributes • Wavelet transforms • Principal Components Analysis (PCA) • Feature subset selection, feature creation • Numerosity reduction (some simply call it: data reduction) • Regression and log-linear models • Histograms, clustering, sampling • Data cube aggregation • Data compression
  • 31. Data Reduction 1: Dimensionality Reduction • Curse of dimensionality • When dimensionality increases, data becomes increasingly sparse • Density and the distances between points, which are critical to clustering and outlier analysis, become less meaningful • The number of possible combinations of subspaces grows exponentially • Dimensionality reduction • Avoid the curse of dimensionality • Help eliminate irrelevant features and reduce noise • Reduce the time and space required in data mining • Allow easier visualization • Dimensionality reduction techniques • Wavelet transforms • Principal Component Analysis • Supervised and nonlinear techniques (e.g., feature selection)
  • 32. 32 Data Reduction Strategies • Data reduction strategies include • dimensionality reduction, • numerosity reduction, and • data compression. • Dimensionality reduction is the process of reducing the number of random variables or attributes under consideration. • Dimensionality reduction methods include wavelet transforms and principal components analysis which transform or project the original data onto a smaller space.
  • 33. Data Reduction Strategies • Numerosity reduction techniques replace the original data volume by alternative, smaller forms of data representation. • These techniques may be parametric or nonparametric. • For parametric methods, a model is used to estimate the data, so that typically only the model parameters need to be stored, instead of the actual data. (Outliers may also be stored.) • Regression and log-linear models are examples. • Nonparametric methods for storing reduced representations of the data include • histograms, • clustering, • sampling, and • data cube aggregation.
  • 34. 34 Data Reduction Strategies • In data compression, transformations are applied so as to obtain a reduced or “compressed” representation of the original data. • If the original data can be reconstructed from the compressed data without any information loss, the data reduction is called lossless. • If, instead, we can reconstruct only an approximation of the original data, then the data reduction is called lossy. • There are several lossless algorithms for string compression; however, they typically allow only limited data manipulation. • Dimensionality reduction and numerosity reduction techniques can also be considered forms of data compression.
  • 35. (Figure: two sine waves, and the same two sine waves with added noise, compared in the time and frequency domains to motivate the Fourier transform vs. the wavelet transform.) • Videos: https://www.youtube.com/watch?v=ZnmvUCtUAEE&t=331s and https://www.youtube.com/watch?v=QX1-xGVFqmw&t=206s
  • 36. What Is Wavelet Transform? • The discrete wavelet transform (DWT) is a linear signal processing technique that, when applied to a data vector X, transforms it to a numerically different vector, X′, of wavelet coefficients. • The two vectors are of the same length. • When applying this technique to data reduction, we consider each tuple as an n-dimensional data vector, that is, X = (x1, x2, ..., xn), depicting n measurements made on the tuple from n database attributes. • Question: how can this technique be useful for data reduction if the wavelet-transformed data are of the same length as the original data?
  • 37. 37 Wavelet Transformation • The DWT is closely related to the Discrete Fourier Transform (DFT), a signal processing technique involving sines and cosines. • In general, however, the DWT achieves better lossy compression. • That is, if the same number of coefficients is retained for a DWT and a DFT of a given data vector, the DWT version will provide a more accurate approximation of the original data. • Hence, for an equivalent approximation, the DWT requires less space than the DFT. • Unlike the DFT, wavelets are quite localized in space, contributing to the conservation of local detail. • There is only one DFT, yet there are several families of DWTs. • Popular wavelet transforms include the Haar-2, Daubechies-4, and Daubechies-6.
  • 38. Wavelet Transformation • The general procedure for applying a discrete wavelet transform uses a hierarchical pyramid algorithm that halves the data at each iteration, resulting in fast computational speed. The method is as follows: • 1. The length, L, of the input data vector must be an integer power of 2 (the vector is padded with zeros if necessary). • 2. Each transform applies two functions: one performs smoothing (e.g., a sum or weighted average), the other a weighted difference, which brings out the detail. • 3. The two functions are applied to pairs of data points, producing two data sets of length L/2. • 4. The two functions are applied recursively to the data sets obtained in the previous iteration, until the resulting data sets are of length 2. • 5. Selected values from the data sets obtained in these iterations are designated the wavelet coefficients of the transformed data.
  • 40. Wavelet Decomposition • Wavelets: a math tool for the space-efficient hierarchical decomposition of functions • S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S′ = [2¾, −1¼, ½, 0, 0, −1, −1, 0] (i.e., [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]) • Compression: many small detail coefficients can be replaced by 0's, and only the significant coefficients are retained
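To make the averaging-and-differencing procedure concrete, here is a minimal Python sketch (unnormalized Haar convention: pairwise averages plus half-differences; other texts apply different scaling factors) that reproduces the decomposition of S above.

```python
import numpy as np

def haar_decompose(signal):
    """Unnormalized Haar wavelet decomposition via the pyramid algorithm.
    The input length must be a power of 2."""
    data = np.asarray(signal, dtype=float)
    details = []                                 # detail coefficients, finest level first
    while len(data) > 1:
        pairs = data.reshape(-1, 2)
        averages = pairs.mean(axis=1)            # smoothing: pairwise averages
        diffs = (pairs[:, 0] - pairs[:, 1]) / 2  # detail: half the pairwise difference
        details.append(diffs)
        data = averages                          # recurse on the smoothed signal
    # overall average first, then details from coarsest to finest
    return np.concatenate([data] + details[::-1])

S = [2, 2, 0, 2, 3, 5, 4, 4]
print(haar_decompose(S))   # [ 2.75 -1.25  0.5   0.    0.   -1.   -1.    0. ]
```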
  • 41. Haar Wavelet Coefficients • (Figure: the hierarchical decomposition structure, a.k.a. the "error tree", for the original data [2, 2, 0, 2, 3, 5, 4, 4]; each Haar coefficient (2.75, −1.25, 0.5, 0, 0, −1, −1, 0) is shown with its "support", i.e., the +/− sign pattern over the original data positions it influences.)
  • 42. Why Wavelet Transform? • Uses hat-shaped filters • Emphasizes regions where points cluster • Suppresses weaker information at their boundaries • Effective removal of outliers • Insensitive to noise and to input order • Multi-resolution • Detects arbitrarily shaped clusters at different scales • Efficient • Complexity O(N) • Only applicable to low-dimensional data
  • 43. Principal Component Analysis (PCA) • Find a projection that captures the largest amount of variation in the data • The original data are projected onto a much smaller space, resulting in dimensionality reduction. We find the eigenvectors of the covariance matrix, and these eigenvectors define the new space. • (Figure: data points in the x1–x2 plane with the first principal component direction e.)
  • 44. Principal Component Analysis (Steps) • Given N data vectors from n-dimensions, find k ≤ n orthogonal vectors (principal components) that can be best used to represent data • Normalize input data: Each attribute falls within the same range • Compute k orthonormal (unit) vectors, i.e., principal components • Each input data (vector) is a linear combination of the k principal component vectors • The principal components are sorted in order of decreasing “significance” or strength • Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (i.e., using the strongest principal components, it is possible to reconstruct a good approximation of the original data) • Works for numeric data only 44
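A minimal NumPy sketch of these steps (variable names are illustrative; an eigendecomposition of the covariance matrix stands in for the more numerically robust SVD that most libraries use internally):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (N samples x n attributes) onto the
    k strongest principal components."""
    # 1. Normalize/center the input data
    X_centered = X - X.mean(axis=0)
    # 2. Covariance matrix of the attributes (n x n)
    cov = np.cov(X_centered, rowvar=False)
    # 3. Orthonormal eigenvectors = principal components
    eigvals, eigvecs = np.linalg.eigh(cov)
    # 4. Sort components by decreasing "significance" (variance)
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:k]]
    # 5. Keep only the k strongest components
    return X_centered @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # illustrative numeric data
print(pca_reduce(X, 2).shape)      # (100, 2)
```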
  • 45. Attribute Subset Selection • Another way to reduce dimensionality of data • Redundant attributes • Duplicate much or all of the information contained in one or more other attributes • E.g., purchase price of a product and the amount of sales tax paid • Irrelevant attributes • Contain no information that is useful for the data mining task at hand • E.g., students' ID is often irrelevant to the task of predicting students' GPA 45
  • 46. Attribute Subset Selection • "How can we find a 'good' subset of the original attributes?" • For n attributes, there are 2^n possible subsets. • An exhaustive search for the optimal subset of attributes can be prohibitively expensive, especially as n and the number of data classes increase. • Therefore, heuristic methods that explore a reduced search space are commonly used for attribute subset selection.
  • 47. Heuristic Search in Attribute Selection • There are 2^d possible attribute combinations of d attributes • Typical heuristic attribute selection methods: • Best single attribute under the attribute-independence assumption: choose by significance tests • Best step-wise feature selection: • The best single attribute is picked first • Then the next best attribute conditioned on the first, ... • Step-wise attribute elimination: • Repeatedly eliminate the worst attribute • Best combined attribute selection and elimination • Optimal branch and bound: • Use attribute elimination and backtracking
  • 48. 48 Attribute Creation (Feature Generation) • Create new attributes (features) that can capture the important information in a data set more effectively than the original ones • Three general methodologies • Attribute extraction • Domain-specific • Mapping data to new space (see: data reduction) • E.g., Fourier transformation, wavelet transformation, manifold approaches (not covered) • Attribute construction • Combining features (see: discriminative frequent patterns in Chapter 7) • Data discretization
  • 49. Data Reduction 2: Numerosity Reduction • Reduce data volume by choosing alternative, smaller forms of data representation • Parametric methods (e.g., regression) • Assume the data fit some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers) • Ex.: log-linear models: obtain the value at a point in m-D space as the product of values on appropriate marginal subspaces • Non-parametric methods • Do not assume models • Major families: histograms, clustering, sampling, ...
  • 50. Parametric Data Reduction: Regression and Log-Linear Models • Linear regression • Data are modeled to fit a straight line • Often uses the least-squares method to fit the line • Multiple regression • Allows a response variable Y to be modeled as a linear function of a multidimensional feature vector • Log-linear model • Approximates discrete multidimensional probability distributions
  • 51. Regression Analysis • Regression analysis: a collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (a.k.a. explanatory variables or predictors) • The parameters are estimated so as to give a "best fit" of the data • Most commonly the best fit is evaluated by using the least-squares method, but other criteria have also been used • Used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships • (Figure: scatter plot of (x, y) points with the fitted line y = x + 1; for a point X1, Y1 is the observed value and Y1′ the fitted value.)
  • 53. Regression Analysis and Log-Linear Models • Linear regression: Y = wX + b • Two regression coefficients, w (slope) and b (intercept), specify the line and are estimated from the data at hand • by applying the least-squares criterion to the known values of Y1, Y2, ..., X1, X2, ... (see the sketch below) • Multiple regression: Y = b0 + b1 X1 + b2 X2 • Multiple linear regression is an extension of (simple) linear regression, which allows a response variable, y, to be modeled as a linear function of two or more predictor variables.
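A minimal sketch of the closed-form least-squares estimates for simple linear regression. The data values below are illustrative (assumed), chosen so that the fitted line comes out as roughly y = x + 1, as in the earlier figure.

```python
import numpy as np

def fit_line(x, y):
    """Closed-form least-squares estimates for Y = w*X + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()  # slope
    b = y.mean() - w * x.mean()                                                # intercept
    return w, b

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])       # illustrative observations
w, b = fit_line(x, y)
print(round(w, 2), round(b, 2))               # 0.99 1.05, i.e., roughly y = x + 1
```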
  • 54. Regression Analysis and Log-Linear Models • Log-linear models: • Log-linear models approximate discrete multidimensional probability distributions. • Given a set of tuples in n dimensions (e.g., described by n attributes), we can consider each tuple as a point in an n-dimensional space. • Log-linear models can be used to estimate the probability of each point in a multidimensional space for a set of discretized attributes, based on a smaller subset of dimensional combinations. • This allows a higher-dimensional data space to be constructed from lower-dimensional spaces. • Log-linear models are therefore also useful for: • dimensionality reduction: the lower-dimensional points together typically occupy less space than the original data points • data smoothing: aggregate estimates in the lower-dimensional space are less subject to sampling variation than the estimates in the higher-dimensional space.
  • 55. Histogram Analysis • Divide data into buckets and store the average (or sum) for each bucket • Partitioning rules: • Equal-width: equal bucket range • Equal-frequency (or equal-depth) • (Figure: example equal-width histogram of item prices, with price buckets from $10,000 to $90,000 on the x-axis and counts from 0 to 40 on the y-axis.)
  • 56. Histogram Analysis • Example: The following data are a list of AllElectronics prices for commonly sold items (rounded to the nearest dollar). • The numbers have been sorted: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30. 56
  • 58. Histogram Analysis 58 “How are the buckets determined and the attribute values partitioned?” There are several partitioning rules, including the following: • Equal-width: In an equal-width histogram, the width of each bucket range is uniform (e.g., the width of $10 for the buckets) • Equal-frequency (or equal-depth): In an equal-frequency histogram, the buckets are created so that, roughly, the frequency of each bucket is constant (i.e., each bucket contains roughly the same number of contiguous data samples). • Histograms are highly effective at approximating both sparse and dense data, as well as highly skewed and uniform data. • The histograms described before for single attributes can be extended for multiple attributes. • Multidimensional histograms can capture dependencies between attributes. • These histograms have been found effective in approximating data with up to five attributes. • More studies are needed regarding the effectiveness of multidimensional histograms for high dimensionalities.
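A small NumPy sketch contrasting the two partitioning rules on the AllElectronics price list above (the bucket edges are one possible choice, used here for illustration):

```python
import numpy as np

prices = np.array([1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
                   15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18,
                   20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21,
                   25, 25, 25, 25, 25, 28, 28, 30, 30, 30])

# Equal-width buckets: three $10-wide buckets covering the range 1..30
counts, edges = np.histogram(prices, bins=[1, 11, 21, 31])
print(counts)     # [13 25 14] -> prices per $10-wide bucket

# Equal-frequency (equal-depth) buckets: edges at the quartiles, so each
# bucket holds roughly the same number of values
edges_eq = np.quantile(prices, [0.0, 0.25, 0.5, 0.75, 1.0])
print(edges_eq)   # [ 1.  11.5 18.  21.  30. ]
```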
  • 59. Clustering • Clustering techniques consider data tuples as objects. • They partition the objects into groups, or clusters, so that objects within a cluster are “similar” to one another and “dissimilar” to objects in other clusters. • Similarity is commonly defined in terms of how “close” the objects are in space, based on a distance function. • The “quality” of a cluster may be represented by its diameter, the maximum distance between any two objects in the cluster. • Centroid distance is an alternative measure of cluster quality and is defined as the average distance of each cluster object from the cluster centroid (denoting the “average object,” or average point in space for the cluster). • In data reduction, the cluster representations of the data are used to replace the actual data. 59
  • 60. Sampling • Sampling: obtaining a small sample s to represent the whole data set N • Allows a mining algorithm to run with complexity that is potentially sub-linear in the size of the data • Key principle: choose a representative subset of the data • Simple random sampling may perform very poorly in the presence of skew • Hence adaptive sampling methods are used, e.g., stratified sampling • Note: sampling may not reduce database I/Os (data is read a page at a time)
  • 61. Types of Sampling • Simple random sampling • There is an equal probability of selecting any particular item • Sampling without replacement • Once an object is selected, it is removed from the population • Sampling with replacement • A selected object is not removed from the population • Stratified sampling: • Partition the data set, and draw samples from each partition (proportionally, i.e., approximately the same percentage of the data) • Used in conjunction with skewed data 61
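A minimal pandas sketch contrasting these sampling schemes (the DataFrame, column names, segment labels, and proportions are all illustrative; GroupBy.sample requires pandas 1.1 or later):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "customer_id": np.arange(1000),
    "segment": rng.choice(["youth", "adult", "senior"], size=1000, p=[0.2, 0.5, 0.3]),
})

# Simple random sampling without replacement (SRSWOR)
srswor = df.sample(n=100, replace=False, random_state=42)

# Simple random sampling with replacement (SRSWR)
srswr = df.sample(n=100, replace=True, random_state=42)

# Stratified sampling: draw ~10% from each segment so skewed strata are represented
stratified = df.groupby("segment").sample(frac=0.10, random_state=42)
print(stratified["segment"].value_counts())   # roughly 10% of each stratum
```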
  • 62. Sampling: With or without Replacement • (Figure: items drawn from the raw data to form samples with and without replacement.)
  • 63. Sampling: Cluster or Stratified Sampling • (Figure: the raw data partitioned into clusters/strata, and the resulting cluster/stratified sample drawn from each partition.)
  • 64. Data Cube Aggregation • The lowest level of a data cube (base cuboid) • The aggregated data for an individual entity of interest • E.g., a customer in a phone-call data warehouse • Multiple levels of aggregation in data cubes • Further reduce the size of the data to deal with • Reference appropriate levels • Use the smallest representation that is sufficient to solve the task • Queries regarding aggregated information should be answered using the data cube, when possible
  • 65. Data Reduction 3: Data Compression • String compression • There are extensive theories and well-tuned algorithms • Typically lossless, but only limited manipulation is possible without expansion • Audio/video compression • Typically lossy compression, with progressive refinement • Sometimes small fragments of the signal can be reconstructed without reconstructing the whole • Time sequences (unlike audio) are typically short and vary slowly with time • Dimensionality and numerosity reduction may also be considered forms of data compression
  • 66. Data Compression • (Figure: the original data is transformed into compressed data; lossless compression reconstructs the original data exactly, while lossy compression recovers only an approximation of the original data.)
  • 68. Data Transformation • The data are transformed or consolidated into forms appropriate for mining. • A function maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values. Strategies for data transformation include the following: 1. Smoothing: works to remove noise from the data. Techniques include binning, regression, and clustering. 2. Attribute construction (or feature construction): new attributes are constructed and added from the given set of attributes to help the mining process. • For example, from a date_of_birth attribute, an age attribute can be constructed. 3. Aggregation: summary or aggregation operations are applied to the data. • For example, daily sales data may be aggregated to compute monthly and annual totals. • This step is typically used in constructing a data cube for data analysis at multiple abstraction levels.
  • 69. Data Transformation (continued) • Strategies for data transformation, continued: 4. Normalization: the attribute data are scaled so as to fall within a smaller range, such as −1.0 to 1.0, or 0.0 to 1.0. 5. Discretization: the raw values of a numeric attribute (e.g., age) are replaced by • interval labels (e.g., 0–10, 11–20, etc.) or • conceptual labels (e.g., youth, adult, senior). 6. Concept hierarchy generation for nominal data: attributes such as street can be generalized to higher-level concepts, like city or country.
  • 70. Data Transformation by Normalization 70 • Normalizing the data attempts to give all attributes an equal weight. • Normalization is particularly useful for classification algorithms involving neural networks or distance measurements such as nearest-neighbor classification and clustering. • For distance-based methods, normalization helps prevent attributes with initially large ranges (e.g., income) from outweighing attributes with initially smaller ranges (e.g., age or some binary attributes). • There are many methods for data normalization: • min-max normalization, • z-score normalization, and • normalization by decimal scaling. • For our discussion, let A be a numeric attribute with n observed values, v1,v2,...,vn.
  • 71. Min-max normalization • Min-max normalization performs a linear transformation on the original data, mapping a value v of attribute A to v′ in the new range [new_min_A, new_max_A]:
  $$v' = \frac{v - min_A}{max_A - min_A}\,(new\_max_A - new\_min_A) + new\_min_A$$
  • EXAMPLE: Suppose that the minimum and maximum values for the attribute income are $12,000 and $98,000, respectively. We would like to map income to the range [0.0, 1.0]. By min-max normalization, what value does $73,600 for income transform to? (Worked out below.)
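Plugging the example numbers into the min-max formula above gives the answer:
$$v' = \frac{73{,}600 - 12{,}000}{98{,}000 - 12{,}000}\,(1.0 - 0.0) + 0.0 = \frac{61{,}600}{86{,}000} \approx 0.716$$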
  • 72. z-score normalization (or zero-mean normalization) • In z-score normalization, the values of attribute A are normalized based on the mean ($\bar{A}$) and standard deviation (σ_A) of A:
  $$v' = \frac{v - \bar{A}}{\sigma_A}$$
  • EXAMPLE: Suppose that the mean and standard deviation of the values for the attribute income are $54,000 and $16,000, respectively. With z-score normalization, what value does $73,600 for income transform to? (Worked out below.)
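Plugging the example numbers into the z-score formula above gives the answer:
$$v' = \frac{73{,}600 - 54{,}000}{16{,}000} = \frac{19{,}600}{16{,}000} = 1.225$$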
  • 73. Normalization by decimal scaling • Normalization by decimal scaling moves the decimal point of the values of attribute A; the number of points moved depends on the maximum absolute value of A. A value v is normalized to v′ = v / 10^j, where j is the smallest integer such that max(|v′|) < 1. • EXAMPLE: Suppose that the recorded values of A range from −986 to 917. The maximum absolute value of A is 986. To normalize by decimal scaling, we therefore divide each value by 1,000 (i.e., j = 3), so that −986 normalizes to −0.986 and 917 normalizes to 0.917.
  • 75. Discretization • Discretization: Divide the range of a continuous attribute into intervals • Interval labels can then be used to replace actual data values • Reduce data size by discretization • Supervised vs. unsupervised • Split (top-down) vs. merge (bottom-up) • Discretization can be performed recursively on an attribute • Prepare for further analysis, e.g., classification 75
  • 76. Data Discretization Methods • Typical methods (all can be applied recursively): • Binning • Top-down split, unsupervised • Histogram analysis • Top-down split, unsupervised • Clustering analysis (unsupervised, top-down split or bottom-up merge) • Decision-tree analysis (supervised, top-down split) • Correlation (e.g., χ²) analysis (supervised, bottom-up merge)
  • 77. Simple Discretization: Binning • Equal-width (distance) partitioning • Divides the range into N intervals of equal size: a uniform grid • If A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B − A)/N • The most straightforward approach, but outliers may dominate the presentation • Skewed data is not handled well • Equal-depth (frequency) partitioning • Divides the range into N intervals, each containing approximately the same number of samples • Good data scaling • Managing categorical attributes can be tricky
  • 78. Binning Methods for Data Smoothing Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34 * Partition into equal-frequency (equi-depth) bins: - Bin 1: 4, 8, 9, 15 - Bin 2: 21, 21, 24, 25 - Bin 3: 26, 28, 29, 34 * Smoothing by bin means: - Bin 1: 9, 9, 9, 9 - Bin 2: 23, 23, 23, 23 - Bin 3: 29, 29, 29, 29 * Smoothing by bin boundaries: - Bin 1: 4, 4, 4, 15 - Bin 2: 21, 21, 25, 25 - Bin 3: 26, 26, 26, 34 78
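A small NumPy sketch that reproduces the smoothing above (bin means are rounded to the nearest dollar, as on the slide):

```python
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = prices.reshape(3, 4)          # equal-frequency bins of 4 sorted values each

# Smoothing by bin means (rounded to the nearest dollar)
by_means = np.repeat(np.round(bins.mean(axis=1)).astype(int), 4).reshape(3, 4)
print(by_means)      # [[ 9  9  9  9] [23 23 23 23] [29 29 29 29]]

# Smoothing by bin boundaries: replace each value by the closer of its bin's min/max
lo = bins.min(axis=1, keepdims=True)
hi = bins.max(axis=1, keepdims=True)
by_bounds = np.where(bins - lo <= hi - bins, lo, hi)
print(by_bounds)     # [[ 4  4  4 15] [21 21 25 25] [26 26 26 34]]
```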
  • 79. Discretization Without Using Class Labels (Binning vs. Clustering) • (Figure: the same data discretized by equal interval width (binning), by equal frequency (binning), and by K-means clustering; K-means clustering leads to better results.)
  • 80. Discretization by Classification & Correlation Analysis • Classification (e.g., decision-tree analysis) • Supervised: class labels are given, e.g., cancerous vs. benign • Uses entropy to determine the split point (discretization point) • Top-down, recursive split • Details to be covered in Chapter 7 • Correlation analysis (e.g., ChiMerge: χ²-based discretization) • Supervised: uses class information • Bottom-up merge: find the best neighboring intervals (those having similar distributions of classes, i.e., low χ² values) to merge • Merging is performed recursively, until a predefined stopping condition is met
  • 81. Concept Hierarchy Generation • A concept hierarchy organizes concepts (i.e., attribute values) hierarchically and is usually associated with each dimension in a data warehouse • Concept hierarchies facilitate drill-down and roll-up in data warehouses, to view data at multiple granularities • Concept hierarchy formation: recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) with higher-level concepts (such as youth, adult, or senior) • Concept hierarchies can be explicitly specified by domain experts and/or data warehouse designers • Concept hierarchies can also be automatically formed for both numeric and nominal data; for numeric data, use the discretization methods shown earlier
  • 82. Concept Hierarchy Generation for Nominal Data • Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts • street < city < state < country • Specification of a hierarchy for a set of values by explicit data grouping • {Urbana, Champaign, Chicago} < Illinois • Specification of only a partial set of attributes • E.g., only street < city, not others • Automatic generation of hierarchies (or attribute levels) by the analysis of the number of distinct values • E.g., for a set of attributes: {street, city, state, country} 82
  • 83. Automatic Concept Hierarchy Generation • Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set • The attribute with the most distinct values is placed at the lowest level of the hierarchy • Exceptions, e.g., weekday, month, quarter, year • Example hierarchy generated from distinct-value counts: country (15 distinct values) at the top, then province_or_state (365 distinct values), then city (3,567 distinct values), with street (674,339 distinct values) at the bottom
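A minimal pandas sketch of this heuristic (the column names and rows are illustrative, assumed for the example): count the distinct values of each attribute and place the attribute with the fewest distinct values at the top of the hierarchy.

```python
import pandas as pd

# Illustrative location data (assumed values)
df = pd.DataFrame({
    "country": ["USA", "USA", "USA", "USA", "Canada"],
    "province_or_state": ["Illinois", "Illinois", "Illinois", "New York", "Ontario"],
    "city": ["Chicago", "Chicago", "Urbana", "New York", "Toronto"],
    "street": ["Main St", "5th Ave", "Green St", "Broadway", "King St"],
})

# Fewest distinct values -> highest level of the concept hierarchy
distinct_counts = df.nunique().sort_values()
hierarchy = " < ".join(reversed(distinct_counts.index.tolist()))
print(distinct_counts.to_dict())   # {'country': 2, 'province_or_state': 3, 'city': 4, 'street': 5}
print(hierarchy)                   # street < city < province_or_state < country
```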
  • 84. Summary • Data quality: accuracy, completeness, consistency, timeliness, believability, interpretability • Data cleaning: e.g. missing/noisy values, outliers • Data integration from multiple sources: • Entity identification problem • Remove redundancies • Detect inconsistencies • Data reduction • Dimensionality reduction • Numerosity reduction • Data compression • Data transformation and data discretization • Normalization • Concept hierarchy generation 84