Lecture 1
Dr. Fawad Hussain
GIK Institute
Fall 2015
Data Warehousing and Mining
(CS437)
Some lectures in this course have been partially adapted from lecture series by Stephen A. Brobst, Chief Technology Officer at Teradata and
professor at MIT.
General Course Description
Data Warehousing
What is the motivation behind Data Warehousing and Mining?
Advanced Indexing, Query Processing and Optimization.
Building Data Warehouses.
Data Cubes, OLAP, De-Normalization, etc.
Data Mining Techniques
Regression
Clustering
Decision Trees
Other Information
Office Hours (Pasted on the office door)
Office: G03 (FCSE)
Course TA (Mr. Bilal)
Text Books (Optional)
Introduction to Data Mining; Tan, Steinbach & Kumar.
Data Mining: Concepts and Techniques by Jiawei Han and
Micheline Kamber; Morgan Kaufmann Publishers, 2nd Edition,
March 2006, ISBN 1-55860-901-6.
Building a Data Warehouse for Decision Support by Vidette
Poe.
Fundamentals of Database Systems by Elmasri and Navathe;
Addison-Wesley, 5th Edition, 2007.
Grading Plan
Component       %    Tentative Number(s)
Midterm Exam    25   01
Quizzes         10   06
Project         20   02
Final Exam      45   01
Tentative Schedule
Lecture 1
Introduction and Overview
Why this Course?
The world is changing (actually, it has changed): either change or be
left behind.
Missing opportunities or going in the wrong direction has
prevented us from growing.
What is the right direction?
Harnessing the data, in a knowledge-driven economy.
The Need
Knowledge is power, Intelligence is
absolute power!
“Drowning in data and starving for
information”
Data Processing Steps
DATA → INFORMATION → INTELLIGENCE → POWER ($)
End goal?
Historical Overview
1960
Master Files & Reports
1965
Lots of Master files!
1970
Direct Access Memory & DBMS
1975
Online high performance transaction processing
1980
PCs and 4GL Technology (MIS/DSS)
Post 1990
Data Warehousing and Data Mining
Crises of Credibility
What is the financial health of our company?
Different reports give different answers: -10%? +10%? ??
Why a Data Warehouse?
Data recording and storage is growing.
History is an excellent predictor of the future.
Gives total view of the organization.
Intelligent decision-support is required for decision-making.
Why Data Warehouse?
Size of Data Sets are going up ↑.
Cost of data storage is coming down ↓.
The amount of data the average business collects and stores
is doubling every year.
Total hardware and software cost to store and manage 1
Mbyte of data
1990: ~ $15
2002: ~ ¢15 (Down 100 times)
By 2007: < ¢1 (Down 150 times)
Why Data Warehouse?
A Few Examples
WalMart: 24TB
France Telecom: ~ 100TB
CERN: Up to 20 PB by 2006
Stanford Linear Accelerator Center (SLAC): 500TB
Businesses demand Intelligence (BI).
Complex questions from integrated data.
“Intelligent Enterprise”
List of all items that were sold last month?
List of all items purchased by X?
The total sales of the last month grouped by branch?
How many sales transactions occurred during the month of
January?
DBMS Approach
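Questions like the four above are ordinary SQL aggregates against the operational schema. A minimal sketch using Python's sqlite3, with hypothetical table and column names:

```python
import sqlite3

# Hypothetical operational table; names and rows are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (item TEXT, branch TEXT, amount REAL, sale_dt TEXT)")
con.executemany("INSERT INTO sales VALUES (?,?,?,?)", [
    ("soap",  "north", 5.0, "2015-01-10"),
    ("bread", "north", 2.0, "2015-01-15"),
    ("soap",  "south", 5.0, "2015-02-01"),
])

# Total sales grouped by branch: a typical DBMS-style question.
rows = con.execute(
    "SELECT branch, SUM(amount) FROM sales GROUP BY branch ORDER BY branch"
).fetchall()
print(rows)  # [('north', 7.0), ('south', 5.0)]

# How many sales transactions occurred during January?
jan = con.execute(
    "SELECT COUNT(*) FROM sales WHERE sale_dt BETWEEN '2015-01-01' AND '2015-01-31'"
).fetchone()[0]
print(jan)  # 2
```

The point of the contrast that follows: questions like these can be written as SQL; the "intelligent enterprise" questions below cannot.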
Which items sell together? Which items to stock?
Where and how to place the items? What
discounts to offer?
How best to target customers to increase sales at
a branch?
Which customers are most likely to respond to
my next promotional campaign, and why?
Intelligent Enterprise
What is a Data Warehouse?
A complete repository of historical corporate data extracted
from transaction systems that is available for ad-hoc access by
knowledge workers.
What is Data Mining?
“There are things that we know that we know…
there are things that we know that we don’t know…
there are things that we don’t know we don’t know.”
Donald Rumsfeld
Former US Secretary of Defense
What is Data Mining?
Tell me something that I should know.
When you don't know what you should know,
how do you write the SQL?
You can't!!
What is Data Mining?
Knowledge Discovery in Databases (KDD).
Data mining digs out valuable, non-trivial information from large,
multidimensional, apparently unrelated databases (data sets).
It’s the integration of business knowledge, people, information,
algorithms, statistics and computing technology.
Discovering useful hidden patterns and relationships in data.
HUGE VOLUME: THERE IS WAY TOO MUCH
DATA, AND IT IS GROWING!
Data is collected much faster than it can be processed or
managed. NASA's Earth Observation System (EOS) alone will
collect 15 petabytes by 2007
(15,000,000,000,000,000 bytes).
• Much of which won't be used - ever!
• Much of which won't be seen - ever!
• Why not?
There's so much volume that the usefulness of some of it will never
be discovered.
SOLUTION: Reduce the volume and/or raise the
information content by structuring, querying, filtering,
summarizing, aggregating, mining...
Requires solving fundamentally new problems:
1. developing algorithms and systems to mine large, massive
and high-dimensional data sets;
2. developing algorithms and systems to mine new types of
data (images, music, videos);
3. developing algorithms, protocols, and other infrastructure
to mine distributed data;
4. improving the ease of use of data mining systems; and
5. developing appropriate privacy and security techniques
for data mining.
Future of Data Mining
10 Hottest Jobs of year 2025 (TIME Magazine, 22 May 2000)
10 emerging areas of technology (MIT's Magazine of Technology Review, Jan/Feb 2001)
Data Mining
Data mining draws on many disciplines: machine learning, database technology, statistics, visualization, information science, and others.
Logical and Physical Database Design
Data Mining is one step of Knowledge
Discovery in Databases (KDD)
Raw Data → Preprocessing (extraction, transformation, cleansing, validation) → Clean Data → Data Mining (identify patterns, create models) → Interpretation/Evaluation (visualization, feature extraction, analysis) → Knowledge ($)
Information Evolution in a Data
Warehouse Environment
Workloads evolve in stages: primarily batch; then an increase in ad hoc queries; analytical modeling grows; continuous update and time-sensitive queries become important; finally, event-based triggering takes hold.
Batch → Ad Hoc → Analytics → Continuous Update/Short Queries → Event-Based Triggering
STAGE 1: REPORT. WHAT happened?
STAGE 2: ANALYZE. WHY did it happen?
STAGE 3: PREDICT. What WILL happen?
STAGE 4: OPERATIONALIZE. What IS happening?
STAGE 5: ACTIVATE. What do you WANT to happen?
Normalization and Denormalization
Normalization
A relational database relates subsets of a dataset to each other.
A dataset is a set of tables (a schema, in Oracle terms).
A table defines the structure and contains the row and column data for each
subset.
Tables are related to each other by linking them on items and values
common to two tables.
Normalization optimizes record keeping for insertion, deletion
and update (in addition to selection, of course).
De-normalization
Why denormalize?
When to denormalize
How to denormalize
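The normalized design described above can be sketched in a few lines of sqlite (table and column names are illustrative): related facts live in separate tables linked by a common key, so an update touches exactly one row while a selection pays for a join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Normalized design: customer details stored once, referenced by key.
con.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
            "cust_id INTEGER REFERENCES customer)")
con.execute("INSERT INTO customer VALUES (1, 'Alice')")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(10, 1), (11, 1)])

# An update touches exactly one row...
con.execute("UPDATE customer SET name = 'Alicia' WHERE cust_id = 1")

# ...but selection needs a join to see both tables.
rows = con.execute(
    "SELECT o.order_id, c.name FROM orders o "
    "JOIN customer c ON o.cust_id = c.cust_id ORDER BY o.order_id"
).fetchall()
print(rows)  # [(10, 'Alicia'), (11, 'Alicia')]
```

Denormalization, discussed next, deliberately trades that single-row update property away to avoid the join at query time.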
Why De-normalization?
Do you have performance problems?
If not, then you shouldn’t be studying this course!
The root cause of 99% of database performance problems is
poorly written SQL code,
usually the result of a poorly optimized underlying structure.
Do you have disk storage problems?
Consider separating large, less-used datasets from frequently used
datasets.
When to Denormalize?
Denormalization sometimes implies the undoing of some of the
steps of Normalization
Denormalization is not necessarily the reverse of the steps of
Normalization.
Denormalization does not imply complete removal of specific
Normal Form levels.
Denormalization results in duplication.
It is quite possible that table structure is much too granular or possibly even
incompatible with structure imposed by applications.
Denormalization usually involves merging multiple
transactional tables, or multiple static tables, into single
tables.
When to Denormalize?
Look for one-to-one relationships.
These may be unnecessary if the required removal of null values
causes costly joins. Disk space is cheap. Complex SQL join statements
can destroy performance.
Do you have many-to-many join resolution entities? Are they all
necessary? Are they all used by applications?
When constructing SQL joins, are you finding many tables in the
join, scattered throughout the entity relationship diagram?
When searching for static data items, such as customer details, are
you querying a single table or multiple tables?
A single table is much more efficient than multiple tables.
How to Denormalize?
Common Forms of Denormalization
Pre-join de-normalization.
Column replication or movement.
Pre-aggregation.
Considerations in Assessing
De-normalization
Performance implications
Storage implications
Ease-of-use implications
Maintenance implications
Most commonly missed/disregarded.
Pre-join Denormalization
Take tables which are frequently joined and “glue” them together
into a single table.
Avoids performance impact of the frequent joins.
Typically increases storage requirements.
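One way to "glue" the tables together is simply to materialize the join once. A sketch in sqlite (the d_sales_detail name anticipates the example used later in these slides; rows are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (sales_id INTEGER, store_id INTEGER, sales_dt TEXT)")
con.execute("CREATE TABLE sales_detail (tx_id INTEGER, sales_id INTEGER, "
            "item_id INTEGER, item_qty INTEGER, sale_amt REAL)")
con.execute("INSERT INTO sales VALUES (1, 7, '1999-11-30')")
con.executemany("INSERT INTO sales_detail VALUES (?,?,?,?,?)",
                [(100, 1, 5, 2, 9.99), (101, 1, 6, 1, 4.50)])

# Materialize the join once, so later queries skip it.
con.execute("""
    CREATE TABLE d_sales_detail AS
    SELECT d.tx_id, d.sales_id, s.store_id, s.sales_dt,
           d.item_id, d.item_qty, d.sale_amt
    FROM sales s JOIN sales_detail d ON s.sales_id = d.sales_id
""")
n = con.execute("SELECT COUNT(*) FROM d_sales_detail").fetchone()[0]
print(n)  # 2: one row per detail line, with the header columns repeated
```

Note the storage cost is visible even here: the header columns (store_id, sales_dt) now appear once per detail row instead of once per sale.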
Pre-join Denormalization
A simplified retail example...
Before denormalization:
sales header (1): sale_id, store_id, sale_dt, …
sales detail (m): tx_id, sale_id, item_id, …, item_qty, sale$
Pre-join Denormalization
A simplified retail example...
After denormalization, a single table:
tx_id, sale_id, store_id, sale_dt, item_id, …, item_qty, $
Points to Ponder
Which Normal Form is being violated?
Will there be maintenance issues?
Pre-join Denormalization
Storage implications...
Assume 1:3 record count ratio between sales header and detail.
Assume 1 billion sales (3 billion sales detail).
Assume 8 byte sales_id.
Assume 30 byte header and 40 byte detail records.
Which businesses will be most hurt, in terms of storage capacity, by
this form of denormalization?
Pre-join Denormalization
Storage implications...
Before denormalization: 150 GB raw data.
After denormalization: 186 GB raw data.
Net result is 24% increase in raw data size for the database.
Pre-join may actually result in space saving, if many concurrent queries are
demanding frequent joins on the joined tables! HOW?
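The arithmetic behind those before/after figures, using the assumptions from the previous slide:

```python
# Storage arithmetic for the pre-join example.
header_rows, detail_rows = 1_000_000_000, 3_000_000_000
header_bytes, detail_bytes = 30, 40
key_bytes = 8  # sales_id, already present in both header and detail

before = header_rows * header_bytes + detail_rows * detail_bytes  # 30 GB + 120 GB

# After pre-join: each detail row also carries the header columns
# (the header record minus the join key it already stores).
after = detail_rows * (detail_bytes + header_bytes - key_bytes)   # 3B rows x 62 bytes

print(before / 1e9, after / 1e9)               # 150.0 186.0 (GB)
print(round(100 * (after - before) / before))  # 24 (% increase)
```

This also answers the "which businesses are most hurt" question: the penalty grows with the detail-to-header row ratio and with the width of the header record.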
Pre-join Denormalization
Sample Query:
What was my total $ volume between Thanksgiving and Christmas in
1999?
Pre-join Denormalization
Before de-normalization:
select sum(sales_detail.sale_amt)
from sales, sales_detail
where sales.sales_id = sales_detail.sales_id
  and sales.sales_dt between '1999-11-26' and '1999-12-25';
Pre-join Denormalization
After de-normalization:
select sum(d_sales_detail.sale_amt)
from d_sales_detail
where d_sales_detail.sales_dt between '1999-11-26' and '1999-12-25';
No join operation performed.
How to compare performance?
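One sanity check before comparing performance: confirm the two forms return the same answer. A runnable sqlite sketch with illustrative rows (two sales, one inside the date window):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (sales_id INTEGER, sales_dt TEXT)")
con.execute("CREATE TABLE sales_detail (sales_id INTEGER, sale_amt REAL)")
con.executemany("INSERT INTO sales VALUES (?,?)",
                [(1, '1999-11-30'), (2, '1999-10-01')])
con.executemany("INSERT INTO sales_detail VALUES (?,?)",
                [(1, 10.0), (1, 5.0), (2, 99.0)])
con.execute("""
    CREATE TABLE d_sales_detail AS
    SELECT d.sales_id, s.sales_dt, d.sale_amt
    FROM sales s JOIN sales_detail d ON s.sales_id = d.sales_id
""")

joined = con.execute("""
    SELECT SUM(sales_detail.sale_amt) FROM sales, sales_detail
    WHERE sales.sales_id = sales_detail.sales_id
      AND sales.sales_dt BETWEEN '1999-11-26' AND '1999-12-25'
""").fetchone()[0]
flat = con.execute("""
    SELECT SUM(sale_amt) FROM d_sales_detail
    WHERE sales_dt BETWEEN '1999-11-26' AND '1999-12-25'
""").fetchone()[0]
print(joined, flat)  # 15.0 15.0 -- same answer, but no join in the second form
```

At warehouse scale, the comparison then comes down to the cost of the join versus the cost of scanning the wider denormalized rows.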
Pre-join Denormalization
But consider the question...
How many sales (transactions) did I make between Thanksgiving and
Christmas in 1999?
Pre-join Denormalization
Before denormalization:
select count(*)
from sales
where sales.sales_dt between '1999-11-26' and '1999-12-25';
After denormalization:
select count(distinct d_sales_detail.sales_id)
from d_sales_detail
where d_sales_detail.sales_dt between '1999-11-26' and '1999-12-25';
Which query will perform better?
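The denormalized form has to deduplicate the repeated header columns, which is exactly where the cost comes from. A quick check with illustrative rows (one sale with three detail lines in the window):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# d_sales_detail repeats sales_id/sales_dt once per detail line.
con.execute("CREATE TABLE d_sales_detail (sales_id INTEGER, sales_dt TEXT)")
con.executemany("INSERT INTO d_sales_detail VALUES (?,?)",
                [(1, '1999-11-30'), (1, '1999-11-30'), (1, '1999-11-30'),
                 (2, '1999-10-01')])

# count(*) on the header table would count each sale once; on the
# flattened table we must count DISTINCT sale ids instead.
n = con.execute("""
    SELECT COUNT(DISTINCT sales_id) FROM d_sales_detail
    WHERE sales_dt BETWEEN '1999-11-26' AND '1999-12-25'
""").fetchone()[0]
print(n)  # 1 sale, even though it has three detail rows
```

The count(*) on the narrow header table just scans and counts; count(distinct) on the wide table must scan three times as many rows and then deduplicate them.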
Pre-join Denormalization
Performance implications...
Performance penalty for count distinct (forces sort) can be quite large.
May be worth the 30 GB overhead of keeping the sales header records if this is a common
query structure, because both ease-of-use and performance will be enhanced (at
some cost in storage).
Considerations in Assessing
De-normalization
Performance implications
Storage implications
Ease-of-use implications
Maintenance implications
Most commonly missed/disregarded.
Column Replication or Movement
Take columns that are frequently accessed via large scale joins and
replicate (or move) them into detail table(s) to avoid join
operation.
Avoids performance impact of the frequent joins.
Increases storage requirements for database.
Possible to “move” frequently accessed column to detail instead of
replicating it.
Note: This technique is no different from a limited form of the pre-join denormalization described previously.
Before: Table_1 (ColA, ColB) and Table_2 (ColA, ColC, ColD, …, ColZ)
After: Table_1' (ColA, ColB, ColC) and Table_2 (ColA, ColC, ColD, …, ColZ), with ColC replicated into Table_1'
Column Replication or Movement
Health Care DW Example: Take member_id from claim header
and move it to claim detail.
Result: An extra ten bytes per row on claim line table allows
avoiding join to claim header table on some (many?) queries.
Which normal form does this technique violate?
Column Replication or Movement
Beware of the results of de-normalization:
Assuming a 100 byte record before the denormalization, all scans
through the claim line detail will now take 10% longer than
previously.
A significant percentage of queries must get benefit from access to
the denormalized column in order to justify movement into the
claim line table.
Need to quantify both cost and benefit of each denormalization
decision.
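A back-of-the-envelope way to quantify that trade-off (the 10% scan penalty follows from the 10-byte column on a 100-byte record above; the join-saving fraction is an illustrative assumption, not a figure from the slides):

```python
# Rough cost/benefit for replicating a 10-byte column into a
# 100-byte detail record. Figures are illustrative.
scan_penalty = 10 / 100   # every full scan now reads 10% more data
join_saving = 0.40        # assumed fraction of query cost a skipped join saves

def net_cost(p):
    # Net change in total query cost when a fraction p of queries benefit:
    # non-benefiting queries pay the scan penalty; benefiting queries
    # save the join cost but still pay the penalty.
    return (1 - p) * scan_penalty - p * (join_saving - scan_penalty)

# Break-even fraction of queries that must benefit to pay for the column.
break_even = scan_penalty / join_saving
print(round(break_even, 2))  # 0.25 -> at least 25% of queries must benefit
```

The same two inputs, measured rather than assumed, drive the real decision: the wider the row gets and the cheaper the avoided join, the more queries must benefit to justify the replication.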
Column Replication or Movement
May want to replicate columns in order to facilitate co-location of commonly joined
tables.
Before denormalization:
A three table join requires re-distribution of significant amounts of data to answer many
important questions related to customer transaction behavior.
Cust Table: Customer_Id, Customer_Nm, Address, Ph, …
Acct Table: Account_Id, Customer_Id, Balance$, Open_Dt, … (1 customer : m accounts)
Trx Table: Tx_Id, Account_Id, Tx$, Tx_Dt, Location_Id, … (1 account : m transactions)