Data Science &
Big Data Analytics
Discovering, Analyzing, Visualizing
and Presenting Data
EMC Education Services
WILEY
Data Science & Big Data Analytics: Discovering, Analyzing,
Visualizing and Presenting Data
Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com
Copyright © 2015 by John Wiley & Sons, Inc., Indianapolis,
Indiana
Published simultaneously in Canada
ISBN: 978-1-118-87613-8
ISBN: 978-1-118-87622-0 (ebk)
ISBN: 978-1-118-87605-3 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: The publisher and the
author make no representations or warranties with respect to the
accuracy or completeness of
the contents of this work and specifically disclaim all
warranties, including without limitation warranties of fitness for
a particular purpose. No warranty may be
created or extended by sales or promotional materials. The
advice and strategies contained herein may not be suitable for
every situation. This work is sold with
the understanding that the publisher is not engaged in rendering
legal, accounting, or other professional services. If professional
assistance is required, the
services of a competent professional person should be sought.
Neither the publisher nor the author shall be liable for damages
arising herefrom. The fact that an
organization or Web site is referred to in this work as a citation
and/or a potential source of further information does not mean
that the author or the publisher
endorses the information the organization or website may
provide or recommendations it may make. Further, readers
should be aware that Internet websites
listed in this work may have changed or disappeared between
when this work was written and when it is read.
For general information on our other products and services
please contact our Customer Care Department within the United
States at (877) 762-2974, outside the
United States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and
by print-on-demand. Some material included with standard print
versions of this book may not be
included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number: 2014946681
Trademarks: Wiley and the Wiley logo are trademarks or
registered trademarks of John Wiley & Sons, Inc. and/or its
affiliates, in the United States and other coun-
tries, and may not be used without written permission. All other
trademarks are the property of their respective owners. John
Wiley & Sons, Inc. is not associated
with any product or vendor mentioned in this book.
Credits
Executive Editor
Carol Long
Project Editor
Kelly Talbot
Production Manager
Kathleen Wisor
Copy Editor
Karen Gill
Manager of Content Development
and Assembly
Mary Beth Wakefield
Marketing Director
David Mayhew
Marketing Manager
Carrie Sherrill
Professional Technology and Strategy Director
Barry Pruett
Business Manager
Amy Knies
Associate Publisher
Jim Minatel
Project Coordinator, Cover
Patrick Redmond
Proofreader
Nancy Carrasco
Indexer
Johnna Van Hoose Dinse
Cover Designer
Mallesh Gurram
About the Key Contributors
David Dietrich heads the data science education team within
EMC Education Services, where he leads the
curriculum, strategy and course development related to Big Data
Analytics and Data Science. He co-au-
thored the first course in EMC's Data Science curriculum, two
additional EMC courses focused on teaching
leaders and executives about Big Data and data science, and is a
contributing author and editor of this
book. He has filed 14 patents in the areas of data science, data
privacy, and cloud computing.
David has been an advisor to several universities looking to develop academic programs related to data analytics, and has been a frequent speaker at conferences and industry events. He also has been a guest lecturer at universi-
ties in the Boston area. His work has been featured in major
publications including Forbes, Harvard Business Review, and
the
2014 Massachusetts Big Data Report, commissioned by
Governor Deval Patrick.
Involved with analytics and technology for nearly 20 years, David has worked with many Fortune 500 companies over his career, holding multiple roles involving analytics, including managing analytics and operations teams, delivering analytic consulting engagements, managing a line of analytical software products for regulating the US banking industry, and developing Software-as-a-Service and BI-as-a-Service offerings. Additionally, David collaborated with the U.S. Federal Reserve in developing predictive models for monitoring mortgage portfolios.
Barry Heller is an advisory technical education consultant at EMC Education Services. Barry is a course developer and curriculum advisor in the emerging technology areas of Big Data and data science. Prior to his current role, Barry was a consultant research scientist leading numerous analytical initiatives within EMC's Total Customer Experience organization. Early in his EMC career, he managed the statistical engineering group as well as led the data warehousing efforts in an Enterprise Resource Planning (ERP) implementation. Prior to joining EMC, Barry held managerial and analytical roles in reliability engineering functions at medical diagnostic and technology companies. During his career, he has applied his quantitative skill set to a myriad of business applications in the Customer Service, Engineering, Manufacturing, Sales/Marketing, Finance, and Legal arenas. Underscoring the importance of strong executive stakeholder engagement, many of his successes have resulted from not only focusing on the technical details of an analysis, but also on the decisions that result from the analysis. Barry earned a B.S. in Computational Mathematics from the Rochester Institute of Technology and an M.A. in Mathematics from the State University of New York (SUNY) New Paltz.
Beibei Yang is a Technical Education Consultant of EMC Education Services, responsible for developing several open courses at EMC related to Data Science and Big Data Analytics. Beibei has seven years of experience in the IT industry. Prior to EMC she worked as a software engineer, systems manager, and network manager for a Fortune 500 company where she introduced new technologies to improve efficiency and encourage collaboration. Beibei has published papers at prestigious conferences and has filed multiple patents. She received her Ph.D. in computer science from the University of Massachusetts Lowell. She has a passion toward natural language processing and data mining, especially using various tools and techniques to find hidden patterns and tell stories with data.
Data Science and Big Data Analytics is an exciting domain
where the potential of digital information is
maximized for making intelligent business decisions. We
believe that this is an area that will attract a lot of
talented students and professionals in the short, mid, and long
term.
Acknowledgments
EMC Education Services embarked on learning this subject with
the intent to develop an "open" curriculum and
certification. It was a challenging journey at the time as not
many understood what it would take to be a true
data scientist. After initial research (and struggle), we were able
to define what was needed and attract very
talented professionals to work on the project. The course, "Data
Science and Big Data Analytics," has become
well accepted across academia and the industry.
Led by EMC Education Services, this book is the result of
efforts and contributions from a number of key EMC
organizations and supported by the office of the CTO, IT,
Global Services, and Engineering. Many sincere
thanks to many key contributors and subject matter experts
David Dietrich, Barry Heller, and Beibei Yang
for their work developing content and graphics for the chapters.
A special thanks to subject matter experts
John Cardente and Ganesh Rajaratnam for their active
involvement reviewing multiple book chapters and
providing valuable feedback throughout the project.
We are also grateful to the following experts from EMC and
Pivotal for their support in reviewing and improving
the content in this book:
Aidan O'Brien Joe Kambourakis
Alexander Nunes Joe Milardo
Bryan Miletich John Sopka
Dan Baskette Kathryn Stiles
Daniel Mepham Ken Taylor
Dave Reiner Lanette Wells
Deborah Stokes Michael Hancock
Ellis Kriesberg Michael Vander Donk
Frank Coleman Narayanan Krishnakumar
Hisham Arafat Richard Moore
Ira Schild Ron Glick
Jack Harwood Stephen Maloney
Jim McGroddy Steve Todd
Jody Goncalves Suresh Thankappan
Joe Dery Tom McGowan
We also thank Ira Schild and Shane Goodrich for coordinating
this project, Mallesh Gurram for the cover design, Chris Conroy
and Rob Bradley for graphics, and the publisher, John Wiley
and Sons, for timely support in bringing this book to the
industry.
Nancy Gessler
Director, Education Services, EMC Corporation
Alok Shrivastava
Sr. Director, Education Services, EMC Corporation
Contents

Introduction

Chapter 1 • Introduction to Big Data Analytics
  1.1 Big Data Overview
    1.1.1 Data Structures
    1.1.2 Analyst Perspective on Data Repositories
  1.2 State of the Practice in Analytics
    1.2.1 BI Versus Data Science
    1.2.2 Current Analytical Architecture
    1.2.3 Drivers of Big Data
    1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
  1.3 Key Roles for the New Big Data Ecosystem
  1.4 Examples of Big Data Analytics
  Summary
  Exercises
  Bibliography

Chapter 2 • Data Analytics Lifecycle
  2.1 Data Analytics Lifecycle Overview
    2.1.1 Key Roles for a Successful Analytics Project
    2.1.2 Background and Overview of Data Analytics Lifecycle
  2.2 Phase 1: Discovery
    2.2.1 Learning the Business Domain
    2.2.2 Resources
    2.2.3 Framing the Problem
    2.2.4 Identifying Key Stakeholders
    2.2.5 Interviewing the Analytics Sponsor
    2.2.6 Developing Initial Hypotheses
    2.2.7 Identifying Potential Data Sources
  2.3 Phase 2: Data Preparation
    2.3.1 Preparing the Analytic Sandbox
    2.3.2 Performing ETLT
    2.3.3 Learning About the Data
    2.3.4 Data Conditioning
    2.3.5 Survey and Visualize
    2.3.6 Common Tools for the Data Preparation Phase
  2.4 Phase 3: Model Planning
    2.4.1 Data Exploration and Variable Selection
    2.4.2 Model Selection
    2.4.3 Common Tools for the Model Planning Phase
  2.5 Phase 4: Model Building
    2.5.1 Common Tools for the Model Building Phase
  2.6 Phase 5: Communicate Results
  2.7 Phase 6: Operationalize
  2.8 Case Study: Global Innovation Network and Analysis (GINA)
    2.8.1 Phase 1: Discovery
    2.8.2 Phase 2: Data Preparation
    2.8.3 Phase 3: Model Planning
    2.8.4 Phase 4: Model Building
    2.8.5 Phase 5: Communicate Results
    2.8.6 Phase 6: Operationalize
  Summary
  Exercises
  Bibliography

Chapter 3 • Review of Basic Data Analytic Methods Using R
  3.1 Introduction to R
    3.1.1 R Graphical User Interfaces
    3.1.2 Data Import and Export
    3.1.3 Attribute and Data Types
    3.1.4 Descriptive Statistics
  3.2 Exploratory Data Analysis
    3.2.1 Visualization Before Analysis
    3.2.2 Dirty Data
    3.2.3 Visualizing a Single Variable
    3.2.4 Examining Multiple Variables
    3.2.5 Data Exploration Versus Presentation
  3.3 Statistical Methods for Evaluation
    3.3.1 Hypothesis Testing
    3.3.2 Difference of Means
    3.3.3 Wilcoxon Rank-Sum Test
    3.3.4 Type I and Type II Errors
    3.3.5 Power and Sample Size
    3.3.6 ANOVA
  Summary
  Exercises
  Bibliography

Chapter 4 • Advanced Analytical Theory and Methods: Clustering
  4.1 Overview of Clustering
  4.2 K-means
    4.2.1 Use Cases
    4.2.2 Overview of the Method
    4.2.3 Determining the Number of Clusters
    4.2.4 Diagnostics
    4.2.5 Reasons to Choose and Cautions
  4.3 Additional Algorithms
  Summary
  Exercises
  Bibliography

Chapter 5 • Advanced Analytical Theory and Methods: Association Rules
  5.1 Overview
  5.2 Apriori Algorithm
  5.3 Evaluation of Candidate Rules
  5.4 Applications of Association Rules
  5.5 An Example: Transactions in a Grocery Store
    5.5.1 The Groceries Dataset
    5.5.2 Frequent Itemset Generation
    5.5.3 Rule Generation and Visualization
  5.6 Validation and Testing
  5.7 Diagnostics
  Summary
  Exercises
  Bibliography

Chapter 6 • Advanced Analytical Theory and Methods: Regression
  6.1 Linear Regression
    6.1.1 Use Cases
    6.1.2 Model Description
    6.1.3 Diagnostics
  6.2 Logistic Regression
    6.2.1 Use Cases
    6.2.2 Model Description
    6.2.3 Diagnostics
  6.3 Reasons to Choose and Cautions
  6.4 Additional Regression Models
  Summary
  Exercises

Chapter 7 • Advanced Analytical Theory and Methods: Classification
  7.1 Decision Trees
    7.1.1 Overview of a Decision Tree
    7.1.2 The General Algorithm
    7.1.3 Decision Tree Algorithms
    7.1.4 Evaluating a Decision Tree
    7.1.5 Decision Trees in R
  7.2 Naïve Bayes
    7.2.1 Bayes' Theorem
    7.2.2 Naïve Bayes Classifier
    7.2.3 Smoothing
    7.2.4 Diagnostics
    7.2.5 Naïve Bayes in R
  7.3 Diagnostics of Classifiers
  7.4 Additional Classification Methods
  Summary
  Exercises
  Bibliography

Chapter 8 • Advanced Analytical Theory and Methods: Time Series Analysis
  8.1 Overview of Time Series Analysis
    8.1.1 Box-Jenkins Methodology
  8.2 ARIMA Model
    8.2.1 Autocorrelation Function (ACF)
    8.2.2 Autoregressive Models
    8.2.3 Moving Average Models
    8.2.4 ARMA and ARIMA Models
    8.2.5 Building and Evaluating an ARIMA Model
    8.2.6 Reasons to Choose and Cautions
  8.3 Additional Methods
  Summary
  Exercises

Chapter 9 • Advanced Analytical Theory and Methods: Text Analysis
  9.1 Text Analysis Steps
  9.2 A Text Analysis Example
  9.3 Collecting Raw Text
  9.4 Representing Text
  9.5 Term Frequency-Inverse Document Frequency (TFIDF)
  9.6 Categorizing Documents by Topics
  9.7 Determining Sentiments
  9.8 Gaining Insights
  Summary
  Exercises
  Bibliography

Chapter 10 • Advanced Analytics-Technology and Tools: MapReduce and Hadoop
  10.1 Analytics for Unstructured Data
    10.1.1 Use Cases
    10.1.2 MapReduce
    10.1.3 Apache Hadoop
  10.2 The Hadoop Ecosystem
    10.2.1 Pig
    10.2.2 Hive
    10.2.3 HBase
    10.2.4 Mahout
  10.3 NoSQL
  Summary
  Exercises
  Bibliography

Chapter 11 • Advanced Analytics-Technology and Tools: In-Database Analytics
  11.1 SQL Essentials
    11.1.1 Joins
    11.1.2 Set Operations
    11.1.3 Grouping Extensions
  11.2 In-Database Text Analysis
  11.3 Advanced SQL
    11.3.1 Window Functions
    11.3.2 User-Defined Functions and Aggregates
    11.3.3 Ordered Aggregates
    11.3.4 MADlib
  Summary
  Exercises
  Bibliography

Chapter 12 • The Endgame, or Putting It All Together
  12.1 Communicating and Operationalizing an Analytics Project
  12.2 Creating the Final Deliverables
    12.2.1 Developing Core Material for Multiple Audiences
    12.2.2 Project Goals
    12.2.3 Main Findings
    12.2.4 Approach
    12.2.5 Model Description
    12.2.6 Key Points Supported with Data
    12.2.7 Model Details
    12.2.8 Recommendations
    12.2.9 Additional Tips on Final Presentation
    12.2.10 Providing Technical Specifications and Code
  12.3 Data Visualization Basics
    12.3.1 Key Points Supported with Data
    12.3.2 Evolution of a Graph
    12.3.3 Common Representation Methods
    12.3.4 How to Clean Up a Graphic
    12.3.5 Additional Considerations
  Summary
  Exercises
  References and Further Reading
  Bibliography

Index
Foreword
Technological advances and the associated changes in practical
daily life have produced a rapidly expanding
"parallel universe" of new content, new data, and new
information sources all around us. Regardless of how one
defines it, the phenomenon of Big Data is ever more present,
ever more pervasive, and ever more important. There
is enormous value potential in Big Data: innovative insights,
improved understanding of problems, and countless
opportunities to predict-and even to shape-the future. Data
Science is the principal means to discover and
tap that potential. Data Science provides ways to deal with and
benefit from Big Data: to see patterns, to discover
relationships, and to make sense of stunningly varied images
and information.
Not everyone has studied statistical analysis at a deep level.
People with advanced degrees in applied math-
ematics are not a commodity. Relatively few organizations have
committed resources to large collections of data
gathered primarily for the purpose of exploratory analysis. And
yet, while applying the practices of Data Science
to Big Data is a valuable differentiating strategy at present, it
will be a standard core competency in the not so
distant future.
How does an organization operationalize quickly to take
advantage of this trend? We've created this book for
that exact purpose.
EMC Education Services has been listening to the industry and
organizations, observing the multi-faceted
transformation of the technology landscape, and doing direct
research in order to create curriculum and con-
tent to help individuals and organizations transform themselves.
For the domain of Data Science and Big Data
Analytics, our educational strategy balances three things:
people-especially in the context of data science teams,
processes-such as the analytic lifecycle approach presented in
this book, and tools and technologies-in this case
with the emphasis on proven analytic tools.
So let us help you capitalize on this new "parallel universe" that
surrounds us. We invite you to learn about
Data Science and Big Data Analytics through this book and
hope it significantly accelerates your efforts in the
transformational process.
Introduction
Big Data is creating significant new opportunities for
organizations to derive new value and create competitive
advantage from their most valuable asset: information. For
businesses, Big Data helps drive efficiency, quality, and
personalized products and services, producing improved levels
of customer satisfaction and profit. For scientific
efforts, Big Data analytics enable new avenues of investigation
with potentially richer results and deeper insights
than previously available. In many cases, Big Data analytics
integrate structured and unstructured data with real-
time feeds and queries, opening new paths to innovation and
insight.
This book provides a practitioner's approach to some of the key
techniques and tools used in Big Data analytics.
Knowledge of these methods will help people become active
contributors to Big Data analytics projects. The book's
content is designed to assist multiple stakeholders: business and
data analysts looking to add Big Data analytics
skills to their portfolio; database professionals and managers of
business intelligence, analytics, or Big Data groups
looking to enrich their analytic skills; and college graduates
investigating data science as a career field.
The content is structured in twelve chapters. The first chapter
introduces the reader to the domain of Big Data,
the drivers for advanced analytics, and the role of the data
scientist. The second chapter presents an analytic project
lifecycle designed for the particular characteristics and
challenges of hypothesis-driven analysis with Big Data.
Chapter 3 examines fundamental statistical techniques in the
context of the open source R analytic software
environment. This chapter also highlights the importance of
exploratory data analysis via visualizations and reviews
the key notions of hypothesis development and testing.
Chapters 4 through 9 discuss a range of advanced analytical
methods, including clustering, classification,
regression analysis, time series and text analysis.
Chapters 10 and 11 focus on specific technologies and tools that support advanced analytics with Big Data. In particular, the MapReduce paradigm and its instantiation in the Hadoop ecosystem, as well as advanced topics in SQL and in-database text analytics, form the focus of these chapters.
Chapter 12 provides guidance on operationalizing Big Data
analytics projects. This chapter focuses on creating the final deliverables, converting an analytics project to an
ongoing asset of an organization's operation, and
creating clear, useful visual outputs based on the data.
EMC Academic Alliance
University and college faculties are invited to join the Academic Alliance program to access unique "open" curriculum-based education on the following topics:
• Data Science and Big Data Analytics
• Information Storage and Management
• Cloud Infrastructure and Services
• Backup Recovery Systems and Architecture
The program provides faculty with course resources to prepare students for opportunities that exist in today's evolving IT industry at no cost. For more information, visit http://education.EMC.com/academicalliance.
EMC Proven Professional Certification
EMC Proven Professional is a leading education and
certification program in the IT industry, providing compre-
hensive coverage of information storage technologies,
virtualization, cloud computing, data science/ Big Data
analytics, and more.
Being proven means investing in yourself and formally
validating your expertise.
This book prepares you for Data Science Associate (EMCDSA) certification. Visit http://education.EMC.com for details.
INTRODUCTION TO BIG DATA ANALYTICS
Much has been written about Big Data and the need for advanced analytics within industry, academia, and government. Availability of new data sources and the rise of more complex analytical opportunities have created a need to rethink existing data architectures to enable analytics that take advantage of Big Data. In addition, significant debate exists about what Big Data is and what kinds of skills are required to make best use of it. This chapter explains several key concepts to clarify what is meant by Big Data, why advanced analytics are needed, how Data Science differs from Business Intelligence (BI), and what new roles are needed for the new Big Data ecosystem.
1.1 Big Data Overview
Data is created constantly, and at an ever-increasing rate. Mobile phones, social media, imaging technologies to determine a medical diagnosis-all these and more create new data that must be stored somewhere for some purpose. Devices and sensors automatically generate diagnostic information that needs to be stored and processed in real time. Merely keeping up with this huge influx of data is difficult, but substantially more challenging is analyzing vast amounts of it, especially when it does not conform to traditional notions of data structure, to identify meaningful patterns and extract useful information. These challenges of the data deluge present the opportunity to transform business, government, science, and everyday life.

Several industries have led the way in developing their ability to gather and exploit data:
• Credit card companies monitor every purchase their customers make and can identify fraudulent purchases with a high degree of accuracy using rules derived by processing billions of transactions.
• Mobile phone companies analyze subscribers' calling patterns to determine, for example, whether a caller's frequent contacts are on a rival network. If that rival network is offering an attractive promotion that might cause the subscriber to defect, the mobile phone company can proactively offer the subscriber an incentive to remain in her contract.
• For companies such as LinkedIn and Facebook, data itself is their primary product. The valuations of these companies are heavily derived from the data they gather and host, which contains more and more intrinsic value as the data grows.
Three attributes stand out as defining Big Data characteristics:
• Huge volume of data: Rather than thousands or millions of rows, Big Data can be billions of rows and millions of columns.
• Complexity of data types and structures: Big Data reflects the variety of new data sources, formats, and structures, including digital traces being left on the web and other digital repositories for subsequent analysis.
• Speed of new data creation and growth: Big Data can describe high velocity data, with rapid data ingestion and near real time analysis.
Although the volume of Big Data tends to attract the most attention, generally the variety and velocity of the data provide a more apt definition of Big Data. (Big Data is sometimes described as having 3 Vs: volume, variety, and velocity.) Due to its size or structure, Big Data cannot be efficiently analyzed using only traditional databases or methods. Big Data problems require new tools and technologies to store, manage, and realize the business benefit. These new tools and technologies enable creation, manipulation, and management of large datasets and the storage environments that house them. Another definition of Big Data comes from the McKinsey Global report from 2011:
Big Data is data whose scale, distribution, diversity, and/or timeliness require the use of new technical architectures and analytics to enable insights that unlock new sources of business value.
McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and Productivity [1]
McKinsey's definition of Big Data implies that organizations will need new data architectures and analytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into the new role of the data scientist, which will be discussed in Section 1.3. Figure 1-1 highlights several sources of the Big Data deluge.
FIGURE 1-1 What's driving the data deluge (mobile sensors, smart grids, social media, geophysical exploration, video surveillance, medical imaging, video rendering, gene sequencing)
The rate of data creation is accelerating, driven by many of the
items in Figure 1-1.
Social media and genetic sequencing are among the fastest-
growing sources of Big Data and examples
of untraditional sources of data being used for analysis.
For example, in 2012 Facebook users posted 700 status updates per second worldwide, which can be leveraged to deduce latent interests or political views of users and show relevant ads. For instance, an update in which a woman changes her relationship status from "single" to "engaged" would trigger ads on bridal dresses, wedding planning, or name-changing services.

Facebook can also construct social graphs to analyze which users are connected to each other as an interconnected network. In March 2013, Facebook released a new feature called "Graph Search," enabling users and developers to search social graphs for people with similar interests, hobbies, and shared locations.
Another example comes from genomics. Genetic sequencing and human genome mapping provide a detailed understanding of genetic makeup and lineage. The health care industry is looking toward these advances to help predict which illnesses a person is likely to get in his lifetime and take steps to avoid these maladies or reduce their impact through the use of personalized medicine and treatment. Such tests also highlight typical responses to different medications and pharmaceutical drugs, heightening risk awareness of specific drug treatments.

While data has grown, the cost to perform this work has fallen dramatically. The cost to sequence one human genome has fallen from $100 million in 2001 to $10,000 in 2011, and the cost continues to drop. Now, websites such as 23andme (Figure 1-2) offer genotyping for less than $100. Although genotyping analyzes only a fraction of a genome and does not provide as much granularity as genetic sequencing, it does point to the fact that data and complex analysis is becoming more prevalent and less expensive to deploy.
FIGURE 1-2 Examples of what can be learned through genotyping, from 23andme.com
As illustrated by the examples of social media and genetic sequencing, individuals and organizations both derive benefits from analysis of ever-larger and more complex data sets that require increasingly powerful analytical capabilities.
1.1.1 Data Structures
Big data can come in multiple forms, including structured and non-structured data such as financial data, text files, multimedia files, and genetic mappings. Contrary to much of the traditional data analysis performed by organizations, most of the Big Data is unstructured or semi-structured in nature, which requires different techniques and tools to process and analyze. [2] Distributed computing environments and massively parallel processing (MPP) architectures that enable parallelized data ingest and analysis are the preferred approach to process such complex data.

With this in mind, this section takes a closer look at data structures.
Figure 1-3 shows four types of data structures, with 80-90% of future data growth coming from non-structured data types. [2] Though different, the four are commonly mixed. For example, a classic Relational Database Management System (RDBMS) may store call logs for a software support call center. The RDBMS may store characteristics of the support calls as typical structured data, with attributes such as time stamps, machine type, problem type, and operating system. In addition, the system will likely have unstructured, quasi- or semi-structured data, such as free-form call log information taken from an e-mail ticket of the problem, customer chat history, or transcript of a phone call describing the technical problem and the solution or audio file of the phone call conversation. Many insights could be extracted from the unstructured, quasi- or semi-structured data in the call center data.
FIGURE 1-3 Big Data growth is increasingly unstructured
Although analyzing structured data tends to be the most familiar
technique, a different technique is
required to meet the challenges to analyze semi-structured data
(shown as XML), quasi-structured (shown
as a clickstream), and unstructured data.
Here are examples of how each of the four main types of data
structures may look.
• Structured data: Data containing a defined data type, format, and structure (that is, transaction data, online analytical processing [OLAP] data cubes, traditional RDBMS, CSV files, and even simple spreadsheets). See Figure 1-4.
FIGURE 1-4 Example of structured data (Summer Food Service Program: number of sites, peak participation, meals served, and federal expenditures by fiscal year)
• Semi-structured data: Textual data files with a discernible pattern that enables parsing (such as Extensible Markup Language [XML] data files that are self-describing and defined by an XML schema). See Figure 1-5; a short R sketch after this list shows how structured and semi-structured files like these might be loaded.
• Quasi-structured data: Textual data with erratic data formats that can be formatted with effort, tools, and time (for instance, web clickstream data that may contain inconsistencies in data values and formats). See Figure 1-6.
• Unstructured data: Data that has no inherent structure, which may include text documents, PDFs, images, and video. See Figure 1-7.
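To make the first two categories concrete, the following R sketch (R is the analysis environment used in Chapter 3 and beyond) shows how structured and semi-structured files might be read for analysis. This is a minimal illustration rather than code from the book; the file names, the columns, and the use of the XML package are assumptions for the example.

# Minimal sketch (illustrative, not from the book): reading structured and
# semi-structured data into R. File names and columns are hypothetical.

# Structured data: fixed rows and columns map directly to a data frame.
meals <- read.csv("meals_served.csv", header = TRUE, stringsAsFactors = FALSE)
str(meals)       # inspect the inferred column names and types
summary(meals)   # basic descriptive statistics for each column

# Semi-structured data: XML is self-describing, so the structure is
# discovered by parsing the tags rather than assumed up front.
library(XML)                      # install.packages("XML") if needed
doc   <- xmlParse("catalog.xml")  # build a tree from the tagged text
items <- xmlToDataFrame(doc)      # flatten uniform child records into a data frame
head(items)

Unstructured and quasi-structured sources, by contrast, typically require custom parsing before they fit into a table like this, as the clickstream discussion that follows illustrates.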
Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the following example. A user attends the EMC World conference and subsequently runs a Google search online to find information related to EMC and Data Science. This would produce a URL such as https://www.google.com/#q=EMC+data+science and a list of results, such as in the first graphic of Figure 1-6.
FIGURE 1-5 Example of semi-structured data (the HTML source behind a web page, viewed through the browser's View Source tool)
After doing this search, the user may choose the second link, to read more about the headline "Data Scientist - EMC Education, Training, and Certification." This brings the user to an emc.com site focused on this topic and a new URL, https://education.emc.com/guest/campaign/data_science.aspx, that displays the page shown as (2) in Figure 1-6. Arriving at this site, the user may decide to click to learn more about the process of becoming certified in data science. The user chooses a link toward the top of the page on Certifications, bringing the user to a new URL: https://education.emc.com/guest/certification/framework/stf/data_science.aspx, which is (3) in Figure 1-6.
Visiting these three websites adds three URLs to the log files monitoring the user's computer or network use. These three URLs are:

https://www.google.com/#q=EMC+data+science
https://education.emc.com/guest/campaign/data_science.aspx
https://education.emc.com/guest/certification/framework/stf/data_science.aspx
FIGURE 1-6 Example of EMC Data Science search results (the three pages visited during the search)
FIGURE 1-7 Example of unstructured data: video about
Antarctica expedition [3]
This set of three URLs reflects the websites and actions taken to find Data Science information related to EMC. Together, this comprises a clickstream that can be parsed and mined by data scientists to discover usage patterns and uncover relationships among clicks and areas of interest on a website or group of sites.
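As a minimal sketch of what that parsing might look like (an illustration, not code from the book), the following R snippet treats the three URLs above as a tiny stand-in for a web log, splits each entry into host and path with regular expressions, and tallies visits per site. A real clickstream would be read from server log files and would require considerably more cleanup.

# Minimal sketch (illustrative, not from the book): parsing a tiny clickstream.
# The three URLs from the example stand in for lines of a web log.
clicks <- c(
  "https://www.google.com/#q=EMC+data+science",
  "https://education.emc.com/guest/campaign/data_science.aspx",
  "https://education.emc.com/guest/certification/framework/stf/data_science.aspx"
)

no_scheme <- sub("^https?://", "", clicks)   # drop the scheme prefix
host <- sub("/.*$", "", no_scheme)           # text before the first "/" is the host
path <- sub("^[^/]*", "", no_scheme)         # everything after the host is the path

clickstream <- data.frame(host = host, path = path, stringsAsFactors = FALSE)
print(clickstream)

# A first, crude view of areas of interest: visits per site.
table(clickstream$host)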
The four data types described in this chapter are sometimes generalized into two groups: structured and unstructured data. Big Data describes new kinds of data with which most organizations may not be used to working. With this in mind, the next section discusses common technology architectures from the standpoint of someone wanting to analyze Big Data.
1.1.2 Analyst Perspective on Data Repositories
The introduction of spreadsheets enabled business users to create simple logic on data structured in rows and columns and create their own analyses of business problems. Database administrator training is not required to create spreadsheets: They can be set up to do many things quickly and independently of information technology (IT) groups. Spreadsheets are easy to share, and end users have control over the logic involved. However, their proliferation can result in "many versions of the truth." In other words, it can be challenging to determine if a particular user has the most relevant version of a spreadsheet, with the most current data and logic in it. Moreover, if a laptop is lost or a file becomes corrupted, the data and logic within the spreadsheet could be lost. This is an ongoing challenge because spreadsheet programs such as Microsoft Excel still run on many computers worldwide. With the proliferation of data islands (or spreadmarts), the need to centralize the data is more pressing than ever.
As data needs grew, so did more scalable data warehousing solutions. These technologies enabled data to be managed centrally, providing benefits of security, failover, and a single repository where users could rely on getting an "official" source of data for financial reporting or other mission-critical tasks. This structure also enabled the creation of OLAP cubes and BI analytical tools, which provided quick access to a set of dimensions within an RDBMS. More advanced features enabled performance of in-depth analytical techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs) are critical for reporting and BI tasks and solve many of the problems that proliferating spreadsheets introduce, such as which of multiple versions of a spreadsheet is correct. EDWs, combined with a good BI strategy, provide direct data feeds from sources that are centrally managed, backed up, and secured.
Despite the benefits of EDWs and BI, these systems tend to restrict the flexibility needed to perform robust or exploratory data analysis. With the EDW model, data is managed and controlled by IT groups and database administrators (DBAs), and data analysts must depend on IT for access and changes to the data schemas. This imposes longer lead times for analysts to get data; most of the time is spent waiting for approvals rather than starting meaningful work. Additionally, many times the EDW rules restrict analysts from building datasets. Consequently, it is common for additional systems to emerge containing critical data for constructing analytic data sets, managed locally by power users. IT groups generally dislike the existence of data sources outside of their control because, unlike an EDW, these data sets are not managed, secured, or backed up. From an analyst perspective, EDW and BI solve problems related to data accuracy and availability. However, EDW and BI introduce new problems related to flexibility and agility, which were less pronounced when dealing with spreadsheets.
A solution to this problem is the analytic sandbox, which attempts to resolve the conflict for analysts and data scientists with EDW and more formally managed corporate data. In this model, the IT group may still manage the analytic sandboxes, but they will be purposefully designed to enable robust analytics, while being centrally managed and secured. These sandboxes, often referred to as workspaces, are designed to enable teams to explore many datasets in a controlled fashion and are not typically used for enterprise-level financial reporting and sales dashboards.
Many times, analytic sandboxes enable high-performance computing using in-database processing: the analytics occur within the database itself. The idea is that performance of the analysis will be better if the analytics are run in the database itself, rather than bringing the data to an analytical tool that resides somewhere else. In-database analytics, discussed further in Chapter 11, "Advanced Analytics - Technology and Tools: In-Database Analytics," creates relationships to multiple data sources within an organization and saves time spent creating these data feeds on an individual basis. In-database processing for deep analytics enables faster turnaround time for developing and executing new analytic models, while reducing, though not eliminating, the cost associated with data stored in local, "shadow" file systems. In addition, rather than the typical structured data in the EDW, analytic sandboxes can house a greater variety of data, such as raw data, textual data, and other kinds of unstructured data, without interfering with critical production databases. Table 1-1 summarizes the characteristics of the data repositories mentioned in this section.
TABLE 1-1 Types of Data Repositories, from an Analyst Perspective

Spreadsheets and data marts ("spreadmarts"): Spreadsheets and low-volume databases for record keeping. Analyst depends on data extracts.

Data Warehouses: Centralized data containers in a purpose-built space. Supports BI and reporting, but restricts robust analyses. Analyst dependent on IT and DBAs for data access and schema changes. Analysts must spend significant time to get aggregated and disaggregated data extracts from multiple sources.

Analytic Sandbox (workspaces): Data assets gathered from multiple sources and technologies for analysis. Enables flexible, high-performance analysis in a nonproduction environment; can leverage in-database processing. Reduces costs and risks associated with data replication into "shadow" file systems. "Analyst owned" rather than "DBA owned."
There are several things to consider with Big Data Analytics projects to ensure the approach fits with the desired goals. Due to the characteristics of Big Data, these projects lend themselves to decision support for high-value, strategic decision making with high processing complexity. The analytic techniques used in this context need to be iterative and flexible, due to the high volume of data and its complexity. Performing rapid and complex analysis requires high-throughput network connections and a consideration for the acceptable amount of latency. For instance, developing a real-time product recommender for a website imposes greater system demands than developing a near-real-time recommender, which may still provide acceptable performance, have slightly greater latency, and may be cheaper to deploy. These considerations require a different approach to thinking about analytics challenges, which will be explored further in the next section.
1.2 State of the Practice in Analytics
Current business problems provide many opportunities for organizations to become more analytical and data driven, as shown in Table 1-2.
TABLE 1-2 Business Drivers for Advanced Analytics

Business Driver: Examples
Optimize business operations: Sales, pricing, profitability, efficiency
Identify business risk: Customer churn, fraud, default
Predict new business opportunities: Upsell, cross-sell, best new customer prospects
Comply with laws or regulatory requirements: Anti-Money Laundering, Fair Lending, Basel II-III, Sarbanes-Oxley (SOX)
Table 1-2 outlines four categories of common business problems
that organizations contend with where
they have an opportunity to leverage advanced analytics to
create competitive advantage. Rather than only
performing standard reporting on these areas, organizations can
apply advanced analytical techniques
to optimize processes and derive more value from these common
tasks. The first three examples do not
represent new problems. Organizations have been trying to
reduce customer churn, increase sales, and
cross-sell customers for many years. What is new is the
opportunity to fuse advanced analytical techniques
with Big Data to produce more impactful analyses for these
traditional problems. The last example por-
trays emerging regulatory requirements. Many compliance and
regulatory laws have been in existence for
decades, but additional requirements are added every year,
which represent additional complexity and
data requirements for organizations. Laws related to anti-money
laundering (AML) and fraud prevention
require advanced analytical techniques to comply with and
manage properly.
1.2.1 BI Versus Data Science
The four business drivers shown in Table 1-2 require a variety of analytical techniques to address them properly. Although much is written generally about analytics, it is important to distinguish between BI and Data Science. As shown in Figure 1-8, there are several ways to compare these groups of analytical techniques.
One way to evaluate the type of analysis being performed is to examine the time horizon and the kind of analytical approaches being used. BI tends to provide reports, dashboards, and queries on business questions for the current period or in the past. BI systems make it easy to answer questions related to quarter-to-date revenue, progress toward quarterly targets, and understand how much of a given product was sold in a prior quarter or year. These questions tend to be closed-ended and explain current or past behavior, typically by aggregating historical data and grouping it in some way. BI provides hindsight and some insight and generally answers questions related to "when" and "where" events occurred.
By comparison, Data Science tends to use disaggregated data in
a more forward-looking, exploratory
way, focusing on analyzing the present and enabling informed
decisions about the future. Rather than
aggregating historical data to look at how many of a given
product sold in the previous quarter, a team
may employ Data Science techniques such as time series
analysis, further discussed in Chapter 8, "Advanced
Analytical Theory and Methods: Time Series Analysis," to
forecast future product sales and revenue more
accurately than extending a simple trend line. In addition, Data
Science tends to be more exploratory in
nature and may use scenario optimization to deal with more
open-ended questions. This approach provides
insight into current activity and foresight into future events,
while generally focusing on questions related
to "how" and "why" events occur.
Where BI problems tend to require highly structured data organized in rows and columns for accurate reporting, Data Science projects tend to use many types of data sources, including large or unconventional datasets. Depending on an organization's goals, it may choose to embark on a BI project if it is doing reporting, creating dashboards, or performing simple visualizations, or it may choose Data Science projects if it needs to do a more sophisticated analysis with disaggregated or varied datasets.
FIGURE 1-8 Comparing BI with Data Science. The figure positions the two along an analytical-approach axis (explanatory to exploratory) and a time axis (past to future). Business Intelligence typically relies on standard and ad hoc reporting, dashboards, alerts, queries, and details on demand over structured data from traditional sources and manageable datasets, addressing questions such as "What happened last quarter?", "How many units sold?", and "Where is the problem? In which situations?" Predictive Analytics and Data Mining (Data Science) typically use optimization, predictive modeling, forecasting, and statistical analysis over structured and unstructured data from many types of sources and very large datasets, addressing questions such as "What if...?", "What's the optimal scenario for our business?", and "What will happen next? What if these trends continue? Why is this happening?"

1.2.2 Current Analytical Architecture
As described earlier, Data Science projects need workspaces that are purpose-built for experimenting with data, with flexible and agile data architectures. Most organizations still have data warehouses that provide excellent support for traditional reporting and simple data analysis activities but unfortunately have a more difficult time supporting more robust analyses. This section examines a typical analytical data architecture that may exist within an organization.
Figure 1-9 shows a typical data architecture and several of the challenges it presents to data scientists and others trying to do advanced analytics. This section examines the data flow to the Data Scientist and how this individual fits into the process of getting data to analyze on projects.
FIGURE 1-9 Typical analytic architecture
1. For data sources to be loaded into the data warehouse, data needs to be well understood, structured, and normalized with the appropriate data type definitions. Although this kind of centralization enables security, backup, and failover of highly critical data, it also means that data typically must go through significant preprocessing and checkpoints before it can enter this sort of controlled environment, which does not lend itself to data exploration and iterative analytics.
2. As a result of this level of control on the EDW, additional local systems may emerge in the form of departmental warehouses and local data marts that business users create to accommodate their need for flexible analysis. These local data marts may not have the same constraints for security and structure as the main EDW and allow users to do some level of more in-depth analysis. However, these one-off systems reside in isolation, often are not synchronized or integrated with other data stores, and may not be backed up.
3. Once in the data warehouse, data is read by additional applications across the enterprise for BI and reporting purposes. These are high-priority operational processes getting critical data feeds from the data warehouses and repositories.
4. At the end of this workflow, analysts get data provisioned for their downstream analytics. Because users generally are not allowed to run custom or intensive analytics on production databases, analysts create data extracts from the EDW to analyze data offline in R or other local analytical tools. Many times these tools are limited to in-memory analytics on desktops analyzing samples of data, rather than the entire population of a dataset. Because these analyses are based on data extracts, they reside in a separate location, and the results of the analysis, and any insights on the quality of the data or anomalies, rarely are fed back into the main data repository.
Because new data sources slowly accumulate in the EDW due to the rigorous validation and data structuring process, data is slow to move into the EDW, and the data schema is slow to change.
Departmental data warehouses may have been originally designed for a specific purpose and set of business needs, but over time evolved to house more and more data, some of which may be forced into existing schemas to enable BI and the creation of OLAP cubes for analysis and reporting. Although the EDW achieves the objective of reporting and sometimes the creation of dashboards, EDWs generally limit the ability of analysts to iterate on the data in a separate nonproduction environment where they can conduct in-depth analytics or perform analysis on unstructured data.
The typical data architectures just described are designed for
storing and processing mission-critical
data, supporting enterprise applications, and enabling corporate
reporting activities. Although reports and
dashboards are still important for organizations, most
traditional data architectures inhibit data exploration
and more sophisticated analysis. Moreover, traditional data
architectures have several additional implica-
tions for data scientists.
• High-value data is hard to reach and leverage, and predictive analytics and data mining activities are last in line for data. Because the EDWs are designed for central data management and reporting, those wanting data for analysis are generally prioritized after operational processes.
• Data moves in batches from EDW to local analytical tools. This workflow means that data scientists are limited to performing in-memory analytics (such as with R, SAS, SPSS, or Excel), which will restrict the size of the datasets they can use. As such, analysis may be subject to constraints of sampling, which can skew model accuracy; a short sketch after this list illustrates the effect.
• Data Science projects will remain isolated and ad hoc, rather than centrally managed. The implication of this isolation is that the organization can never harness the power of advanced analytics in a scalable way, and Data Science projects will exist as nonstandard initiatives, which are frequently not aligned with corporate business goals or strategy.
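The R sketch below, with invented numbers, shows the sampling constraint in miniature: a rare event occurring at a 0.5% rate is estimated well from the full data but can drift noticeably when only a small in-memory sample is analyzed.

# Toy illustration of how sampling can skew an estimate (invented data)
set.seed(42)
full_population <- rbinom(5e6, size = 1, prob = 0.005)   # rare event, 0.5% rate

mean(full_population)            # estimate from the full data: close to 0.005

in_memory_sample <- sample(full_population, 2000)
mean(in_memory_sample)           # estimate from a small sample: can drift noticeably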
All these symptoms of the traditional data architecture result in a slow "time-to-insight" and lower business impact than could be achieved if the data were more readily accessible and supported by an environment that promoted advanced analytics. As stated earlier, one solution to this problem is to introduce analytic sandboxes to enable data scientists to perform advanced analytics in a controlled and sanctioned way. Meanwhile, the current Data Warehousing solutions continue offering reporting and BI services to support management and mission-critical operations.
1.2.3 Drivers of Big Data
To better understand the market drivers related to Big Data, it is
helpful to first understand some past
history of data stores and the kinds of repositories and tools to
manage these data stores.
As shown in Figure 1-10, in the 1990s the volume of
information was often measured in terabytes.
Most organizations analyzed structured data in rows and
columns and used relational databases and data
warehouses to manage large stores of enterprise information.
The following decade saw a proliferation of
different kinds of data sources-mainly productivity and
publishing tools such as content management
repositories and networked attached storage systems-to manage
this kind of information, and the data
began to increase in size and started to be measured at petabyte
scales. In the 2010s, the information that
organizations try to manage has broadened to include many
other kinds of data. In this era, everyone
and everything is leaving a digital footprint. Figure 1-10 shows
a summary perspective on sources of Big
Data generated by new applications and the scale and growth
rate of the data. These applications, which
generate data volumes that can be measured in exabyte scale,
provide opportunities for new analytics and
driving new value for organizations. The data now comes from
multiple sources, such as these:
• Medical information, such as genomic sequencing and diagnostic imaging
• Photos and video footage uploaded to the World Wide Web
• Video surveillance, such as the thousands of video cameras spread across a city
• Mobile devices, which provide geospatial location data of the users, as well as metadata about text messages, phone calls, and application usage on smartphones
• Smart devices, which provide sensor-based collection of information from smart electric grids, smart buildings, and many other public and industry infrastructures
• Nontraditional IT devices, including the use of radio-frequency identification (RFID) readers, GPS navigation systems, and seismic processing
FIGURE 1-10 Data evolution and the rise of Big Data sources. The figure traces three eras: the 1990s (RDBMS and data warehouses), when information was measured in terabytes (1 TB = 1,000 GB); the 2000s (content and digital asset management), measured in petabytes (1 PB = 1,000 TB); and the 2010s (NoSQL and key-value stores), which will be measured in exabytes (1 EB = 1,000 PB).
The Big Data trend is generating an enormous amount of information from many new sources. This data deluge requires advanced analytics and new market players to take advantage of these opportunities and new market dynamics, which will be discussed in the following section.
1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics
Organizations and data collectors are realizing that the data they can gather from individuals contains intrinsic value and, as a result, a new economy is emerging. As this new digital economy continues to evolve, the market sees the introduction of data vendors and data cleaners that use crowdsourcing (such as Mechanical Turk and GalaxyZoo) to test the outcomes of machine learning techniques. Other vendors offer added value by repackaging open source tools in a simpler way and bringing the tools to market. Vendors such as Cloudera, Hortonworks, and Pivotal have provided this value-add for the open source framework Hadoop.
As the new ecosystem takes shape, there are four main groups of players within this interconnected web. These are shown in Figure 1-11.
• Data devices [shown in the (1) section of Figure 1-11] and the "Sensornet" gather data from multiple locations and continuously generate new data about this data. For each gigabyte of new data created, an additional petabyte of data is created about that data. [2]
  • For example, consider someone playing an online video game through a PC, game console, or smartphone. In this case, the video game provider captures data about the skill and levels attained by the player. Intelligent systems monitor and log how and when the user plays the game. As a consequence, the game provider can fine-tune the difficulty of the game, suggest other related games that would most likely interest the user, and offer additional equipment and enhancements for the character based on the user's age, gender, and interests. This information may get stored locally or uploaded to the game provider's cloud to analyze the gaming habits and opportunities for upsell and cross-sell, and identify archetypical profiles of specific kinds of users.
  • Smartphones provide another rich source of data. In addition to messaging and basic phone usage, they store and transmit data about Internet usage, SMS usage, and real-time location. This metadata can be used for analyzing traffic patterns by scanning the density of smartphones in locations to track the speed of cars or the relative traffic congestion on busy roads. In this way, GPS devices in cars can give drivers real-time updates and offer alternative routes to avoid traffic delays.
  • Retail shopping loyalty cards record not just the amount an individual spends, but the locations of stores that person visits, the kinds of products purchased, the stores where goods are purchased most often, and the combinations of products purchased together. Collecting this data provides insights into shopping and travel habits and the likelihood of successful advertisement targeting for certain types of retail promotions.
• Data collectors [the blue ovals, identified as (2) within Figure 1-11] include sample entities that collect data from the device and users.
  • Data results from a cable TV provider tracking the shows a person watches, which TV channels someone will and will not pay for to watch on demand, and the prices someone is willing to pay for premium TV content
  • Retail stores tracking the path a customer takes through their store while pushing a shopping cart with an RFID chip so they can gauge which products get the most foot traffic using geospatial data collected from the RFID chips
• Data aggregators (the dark gray ovals in Figure 1-11, marked as (3)) make sense of the data collected from the various entities from the "SensorNet" or the "Internet of Things." These organizations compile data from the devices and usage patterns collected by government agencies, retail stores, and websites. In turn, they can choose to transform and package the data as products to sell to list brokers, who may want to generate marketing lists of people who may be good targets for specific ad campaigns.
• Data users and buyers are denoted by (4) in Figure 1-11. These groups directly benefit from the data collected and aggregated by others within the data value chain.
  • Retail banks, acting as a data buyer, may want to know which customers have the highest likelihood to apply for a second mortgage or a home equity line of credit. To provide input for this analysis, retail banks may purchase data from a data aggregator. This kind of data may include demographic information about people living in specific locations; people who appear to have a specific level of debt, yet still have solid credit scores (or other characteristics such as paying bills on time and having savings accounts) that can be used to infer creditworthiness; and those who are searching the web for information about paying off debts or doing home remodeling projects. Obtaining data from these various sources and aggregators will enable a more targeted marketing campaign, which would have been more challenging before Big Data due to the lack of information or high-performing technologies.
  • Using technologies such as Hadoop to perform natural language processing on unstructured, textual data from social media websites, users can gauge the reaction to events such as presidential campaigns. People may, for example, want to determine public sentiments toward a candidate by analyzing related blogs and online comments. Similarly, data users may want to track and prepare for natural disasters by identifying which areas a hurricane affects first and how it moves, based on which geographic areas are tweeting about it or discussing it via social media.
FIGURE 1-11 Emerging Big Data ecosystem. The diagram connects data devices (such as video games, credit card readers, computers, RFID readers, video, and medical imaging), data collectors, data aggregators, and data users/buyers (such as law enforcement, media, delivery services, and private investigators/lawyers).
As illustrated by this emerging Big Data ecosystem, the kinds of data and the related market dynamics vary greatly. These datasets can include sensor data, text, structured datasets, and social media. With this in mind, it is worth recalling that these datasets will not work well within traditional EDWs, which were architected to streamline reporting and dashboards and be centrally managed. Instead, Big Data problems and projects require different approaches to succeed.
Analysts need to partner with IT and DBAs to get the data they need within an analytic sandbox. A typical analytical sandbox contains raw data, aggregated data, and data with multiple kinds of structure. The sandbox enables robust exploration of data and requires a savvy user to leverage and take advantage of data in the sandbox environment.
1.3 Key Roles for the New Big Data Ecosystem
As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have emerged to curate, store, produce, clean, and transact data. In addition, the need for applying more advanced analytical techniques to increasingly complex business problems has driven the emergence of new roles, new technology platforms, and new analytical methods. This section explores the new roles that address these needs, and subsequent chapters explore some of the analytical methods and technology platforms.
The Big Data ecosystem demands three categories of roles, as shown in Figure 1-12. These roles were described in the McKinsey Global Institute study on Big Data, from May 2011 [1].
FIGURE 1-12 Key roles of the new Big Data ecosystem. The three roles are Deep Analytical Talent (data scientists), with a projected U.S. talent gap of 140,000 to 190,000; Data Savvy Professionals, with a projected U.S. talent gap of 1.5 million; and Technology and Data Enablers. Note: Figures represent the projected talent gap in the United States in 2018, as shown in the May 2011 McKinsey article "Big Data: The Next Frontier for Innovation, Competition, and Productivity."
The first group, Deep Analytical Talent, is technically savvy, with strong analytical skills. Members possess a combination of skills to handle raw, unstructured data and to apply complex analytical techniques at massive scales. This group has advanced training in quantitative disciplines, such as mathematics, statistics, and machine learning. To do their jobs, members need access to a robust analytic sandbox or workspace where they can perform large-scale analytical data experiments. Examples of current professions fitting into this group include statisticians, economists, mathematicians, and the new role of the Data Scientist.
The McKinsey study forecasts that by the year 2018, the United
States will have a talent gap of 140,000-
190,000 people with deep analytical talent. This does not
represent the number of people needed with
deep analytical talent; rather, this range represents the
difference between what will be available in the
workforce compared with what will be needed. In addition,
these estimates only reflect forecasted talent
shortages in the United States; the number would be much
larger on a global basis.
The second group-Data Savvy Professionals-has less technical
depth but has a basic knowledge of
statistics or machine learning and can define key questions that
can be answered using advanced analytics.
These people tend to have a base knowledge of working with
data, or an appreciation for some of the work
being performed by data scientists and others with deep
analytical talent. Examples of data savvy profes-
sionals include financial analysts, market research analysts, life
scientists, operations managers, and business
and functional managers.
The McKinsey study forecasts the projected U.S. talent gap for
this group to be 1.5 million people by
the year 2018. At a high level, this means for every Data
Scientist profile needed, the gap will be ten times
as large for Data Savvy Professionals. Moving toward becoming
a data savvy professional is a critical step
in broadening the perspective of managers, directors, and
leaders, as this provides an idea of the kinds of
questions that can be solved with data.
The third category of people mentioned in the study is
Technology and Data Enablers. This group
represents people providing technical expertise to support
analytical projects, such as provisioning and
administrating analytical sandboxes, and managing large-scale
data architectures that enable widespread
analytics within companies and other organizations. This role
requires skills related to computer engineering,
programming, and database administration.
These three groups must work together closely to solve complex
Big Data challenges. Most organizations
are familiar with people in the latter two groups mentioned, but
the first group, Deep Analytical Talent,
tends to be the newest role for most and the least understood.
For simplicity, this discussion focuses on
the emerging role of the Data Scientist. It describes the kinds of
activities that role performs and provides
a more detailed view of the skills needed to fulfill that role.
There are three recurring sets of activities that data scientists perform:
• Reframe business challenges as analytics challenges. Specifically, this is a skill to diagnose business problems, consider the core of a given problem, and determine which kinds of candidate analytical methods can be applied to solve it. This concept is explored further in Chapter 2, "Data Analytics Lifecycle."
• Design, implement, and deploy statistical models and data mining techniques on Big Data. This set of activities is mainly what people think about when they consider the role of the Data Scientist: namely, applying complex or advanced analytical methods to a variety of business problems using data. Chapter 3 through Chapter 11 of this book introduces the reader to many of the most popular analytical techniques and tools in this area.
• Develop insights that lead to actionable recommendations. It is critical to note that applying advanced methods to data problems does not necessarily drive new business value. Instead, it is important to learn how to draw insights out of the data and communicate them effectively. Chapter 12, "The Endgame, or Putting It All Together," has a brief overview of techniques for doing this.
Data scientists are generally thought of as having five main sets of skills and behavioral characteristics, as shown in Figure 1-13:
• Quantitative skill: such as mathematics or statistics
• Technical aptitude: namely, software engineering, machine learning, and programming skills
• Skeptical mind-set and critical thinking: It is important that data scientists can examine their work critically rather than in a one-sided way.
• Curious and creative: Data scientists are passionate about data and finding creative ways to solve problems and portray information.
• Communicative and collaborative: Data scientists must be able to articulate the business value in a clear way and collaboratively work with other groups, including project sponsors and key stakeholders.
FIGURE 1-13 Profile of a Data Scientist: quantitative, technical, skeptical, curious and creative, communicative and collaborative
Data scientists are generally comfortable using this blend of
skills to acquire, manage, analyze, and
visualize data and tell compelling stories about it. The next
section includes examples of what Data Science
teams have created to drive new value or innovation with Big
Data.
1.4 Examples of Big Data Analytics
After describing the emerging Big Data ecosystem and new
roles needed to support its growth, this section
provides three examples of Big Data Analytics in different
areas: retail, IT infrastructure, and social media.
As mentioned earlier, Big Data presents many opportunities to improve sales and marketing analytics. An example of this is the U.S. retailer Target. Charles Duhigg's book The Power of Habit [4] discusses how Target used Big Data and advanced analytical methods to drive new revenue. After analyzing consumer-purchasing behavior, Target's statisticians determined that the retailer made a great deal of money from three main life-event situations.
• Marriage, when people tend to buy many new products
• Divorce, when people buy new products and change their spending habits
• Pregnancy, when people have many new things to buy and have an urgency to buy them
Target determined that the most lucrative of these life-events is the third situation: pregnancy. Using data collected from shoppers, Target was able to identify this fact and predict which of its shoppers were pregnant. In one case, Target knew a female shopper was pregnant even before her family knew [5]. This kind of knowledge allowed Target to offer specific coupons and incentives to their pregnant shoppers. In fact, Target could not only determine if a shopper was pregnant, but in which month of pregnancy a shopper may be. This enabled Target to manage its inventory, knowing that there would be demand for specific products and it would likely vary by month over the coming nine- to ten-month cycles.
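The passage above describes the outcome, not Target's actual model. As a hypothetical sketch of the general technique, the R code below fits a logistic regression relating invented purchase signals to a known outcome label and then scores shoppers by predicted propensity.

# Hypothetical propensity-scoring sketch; this is not Target's model.
# The purchase signals and outcome are simulated for illustration only.
set.seed(1)
n <- 1000
shoppers <- data.frame(
  unscented_lotion = rbinom(n, 1, 0.2),
  supplements      = rbinom(n, 1, 0.3))

# Simulate an outcome label that depends on the two signals
p <- plogis(-3 + 1.5 * shoppers$unscented_lotion + 1.2 * shoppers$supplements)
shoppers$outcome <- rbinom(n, 1, p)

# Fit a logistic regression and score each shopper
fit <- glm(outcome ~ unscented_lotion + supplements,
           data = shoppers, family = binomial)
shoppers$score <- predict(fit, type = "response")

head(shoppers[order(-shoppers$score), ])

Shoppers with the highest scores would be the candidates for targeted coupons and incentives.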
Hadoop [6] represents another example of Big Data innovation on the IT infrastructure. Apache Hadoop is an open source framework that allows companies to process vast amounts of information in a highly parallelized way. Hadoop represents a specific implementation of the MapReduce paradigm and was designed by Doug Cutting and Mike Cafarella in 2005 to use data with varying structures. It is an ideal technical framework for many Big Data projects, which rely on large or unwieldy datasets with unconventional data structures. One of the main benefits of Hadoop is that it employs a distributed file system, meaning it can use a distributed cluster of servers and commodity hardware to process large amounts of data. Some of the most common examples of Hadoop implementations are in the social media space, where Hadoop can manage transactions, give textual updates, and develop social graphs among millions of users. Twitter and Facebook generate massive amounts of unstructured data and use Hadoop and its ecosystem of tools to manage this high volume. Hadoop and its ecosystem are covered in Chapter 10, "Advanced Analytics - Technology and Tools: MapReduce and Hadoop."
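The sketch below is not Hadoop code; it uses base R's Map() and Reduce() on a single machine to illustrate the MapReduce paradigm itself: a map step that emits (word, 1) pairs per document and a reduce step that sums the counts per word.

# Single-machine illustration of the MapReduce paradigm (not Hadoop itself)
docs <- c("big data big analytics", "data science and big data")

# Map step: for each document, emit (word, 1) pairs
map_step <- Map(function(doc) {
  words <- strsplit(doc, " ")[[1]]
  setNames(rep(1, length(words)), words)
}, docs)

# Reduce step: sum the counts for each word across all documents
reduce_step <- Reduce(function(acc, pairs) {
  for (w in names(pairs)) {
    acc[w] <- if (w %in% names(acc)) acc[w] + pairs[w] else pairs[w]
  }
  acc
}, map_step, numeric(0))

print(reduce_step)

In Hadoop, the same map and reduce logic would be distributed across a cluster, with the framework handling the shuffle of intermediate pairs between the two steps.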
Finally, social media represents a tremendous opportunity to leverage social and professional interactions to derive new insights. LinkedIn exemplifies a company in which data itself is the product. Early on, LinkedIn founder Reid Hoffman saw the opportunity to create a social network for working professionals. As of 2014, LinkedIn has more than 250 million user accounts and has added many additional features and data-related products, such as recruiting, job seeker tools, advertising, and InMaps, which show a social graph of a user's professional network. Figure 1-14 is an example of an InMap visualization that enables a LinkedIn user to get a broader view of the interconnectedness of his contacts and understand how he knows most of them.
FIGURE 1-14 Data visualization of a user's social network using InMaps
Summary
Big Data comes from myriad sources, including social media, sensors, the Internet of Things, video surveillance, and many sources of data that may not have been considered data even a few years ago. As businesses struggle to keep up with changing market requirements, some companies are finding creative ways to apply Big Data to their growing business needs and increasingly complex problems. As organizations evolve their processes and see the opportunities that Big Data can provide, they try to move beyond traditional BI activities, such as using data to populate reports and dashboards, and move toward Data Science-driven projects that attempt to answer more open-ended and complex questions.
However, exploiting the opportunities that Big Data presents requires new data architectures, including analytic sandboxes, new ways of working, and people with new skill sets. These drivers are causing organizations to set up analytic sandboxes and build Data Science teams. Although some organizations are fortunate to have data scientists, most are not, because there is a growing talent gap that makes finding and hiring data scientists in a timely manner difficult. Still, organizations such as those in web retail, health care, genomics, new IT infrastructures, and social media are beginning to take advantage of Big Data and apply it in creative and novel ways.
Exercises
1. What are the three characteristics of Big Data, and what are the main considerations in processing Big Data?
2. What is an analytic sandbox, and why is it important?
3. Explain the differences between BI and Data Science.
4. Describe the challenges of the current analytical architecture for data scientists.
5. What are the key skill sets and behavioral characteristics of a data scientist?
Bibliography
[1] J. Manyika et al., "Big Data: The Next Frontier for Innovation, Competition, and Productivity," McKinsey Global Institute, 2011.
[2] J. Gantz and D. Reinsel, "The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East," IDC, 2013.
[3] http://www.willisresilience.com/emc-datalab [Online].
[4] C. Duhigg, The Power of Habit: Why We Do What We Do in Life and Business, New York: Random House, 2012.
[5] K. Hill, "How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did," Forbes, February 2012.
[6] http://hadoop.apache.org [Online].
DATA ANALYTICS LIFECYCLE
Data science projects differ from most traditional Business Intelligence projects and many data analysis projects in that data science projects are more exploratory in nature. For this reason, it is critical to have a process to govern them and ensure that the participants are thorough and rigorous in their approach, yet not so rigid that the process impedes exploration.
Many problems that appear huge and daunting at first can be
broken down into smaller pieces or
actionable phases that can be more easily addressed. Having a
good process ensures a comprehensive and
repeatable method for conducting analysis. In addition, it helps
focus time and energy early in the process
to get a clear grasp of the business problem to be solved.
A common mistake made in data science projects is rushing into data collection and analysis, which precludes spending sufficient time to plan and scope the amount of work involved, understanding requirements, or even framing the business problem properly.
Consequently, participants may discover mid-stream
that the project sponsors are actually trying to achieve an
objective that may not match the available data,
or they are attempting to address an interest that differs from
what has been explicitly communicated.
When this happens, the project may need to revert to the initial
phases of the process for a proper discovery
phase, or the project may be canceled.
Creating and documenting a process helps demonstrate rigor, which provides additional credibility to the project when the data science team shares its findings. A well-defined process also offers a common framework for others to adopt, so the methods and analysis can be repeated in the future or as new members join a team.
2.1 Data Analytics Lifecycle Overview
The Data Analytics Lifecycle is designed specifically for Big Data problems and data science projects. The lifecycle has six phases, and project work can occur in several phases at once. For most phases in the lifecycle, the movement can be either forward or backward. This iterative depiction of the lifecycle is intended to more closely portray a real project, in which aspects of the project move forward and may return to earlier stages as new information is uncovered and team members learn more about various stages of the project. This enables participants to move iteratively through the process and drive toward operationalizing the project work.
2.1.1 Key Roles for a Successful Analytics Project
In recent years, substantial attention has been placed on the emerging role of the data scientist. In October 2012, Harvard Business Review featured an article titled "Data Scientist: The Sexiest Job of the 21st Century" [1], in which experts D.J. Patil and Tom Davenport described the new role and how to find and hire data scientists. More and more conferences are held annually focusing on innovation in the areas of Data Science and topics dealing with Big Data. Despite this strong focus on the emerging role of the data scientist specifically, there are actually seven key roles that need to be fulfilled for a high-functioning data science team to execute analytic projects successfully.
Figure 2-1 depicts the various roles and key stakeholders of an analytics project. Each plays a critical part in a successful analytics project. Although seven roles are listed, fewer or more people can accomplish the work depending on the scope of the project, the organizational structure, and the skills of the participants. For example, on a small, versatile team, these seven roles may be fulfilled by only 3 people, but a very large project may require 20 or more people. The seven roles follow.
FIGURE 2-1 Key roles for a successful analytics project
• Business User: Someone who understands the domain area and usually benefits from the results. This person can consult and advise the project team on the context of the project, the value of the results, and how the outputs will be operationalized. Usually a business analyst, line manager, or deep subject matter expert in the project domain fulfills this role.
• Project Sponsor: Responsible for the genesis of the project. Provides the impetus and requirements for the project and defines the core business problem. Generally provides the funding and gauges the degree of value from the final outputs of the working team. This person sets the priorities for the project and clarifies the desired outputs.
• Project Manager: Ensures that key milestones and objectives are met on time and at the expected quality.
• Business Intelligence Analyst: Provides business domain expertise based on a deep understanding of the data, key performance indicators (KPIs), key metrics, and business intelligence from a reporting perspective. Business Intelligence Analysts generally create dashboards and reports and have knowledge of the data feeds and sources.
• Database Administrator (DBA): Provisions and configures the database environment to support the analytics needs of the working team. These responsibilities may include providing access to key databases or tables and ensuring the appropriate security levels are in place related to the data repositories.
• Data Engineer: Leverages deep technical skills to assist with tuning SQL queries for data management and data extraction, and provides support for data ingestion into the analytic sandbox, which was discussed in Chapter 1, "Introduction to Big Data Analytics." Whereas the DBA sets up and configures the databases to be used, the data engineer executes the actual data extractions and performs substantial data manipulation to facilitate the analytics. The data engineer works closely with the data scientist to help shape data in the right ways for analyses.
• Data Scientist: Provides subject matter expertise for analytical techniques, data modeling, and applying valid analytical techniques to given business problems. Ensures overall analytics objectives are met. Designs and executes analytical methods and approaches with the data available to the project.
Although most of these roles are not new, the last two roles, data engineer and data scientist, have become popular and in high demand [2] as interest in Big Data has grown.
2.1.2 Background and Overview of Data Analytics Lifecycle
The Data Analytics Lifecycle defines analytics process best practices spanning discovery to project completion. The lifecycle draws from established methods in the realm of data analytics and decision science. This synthesis was developed after gathering input from data scientists and consulting established approaches that provided input on pieces of the process. Several of the processes that were consulted include these:
• Scientific method [3], in use for centuries, still provides a solid framework for thinking about and deconstructing problems into their principal parts. One of the most valuable ideas of the scientific method relates to forming hypotheses and finding ways to test ideas.
• CRISP-DM [4] provides useful input on ways to frame analytics problems and is a popular approach for data mining.
• Tom Davenport's DELTA framework [5]: The DELTA framework offers an approach for data analytics projects, including the context of the organization's skills, datasets, and leadership engagement.
• Doug Hubbard's Applied Information Economics (AIE) approach [6]: AIE provides a framework for measuring intangibles and provides guidance on developing decision models, calibrating expert estimates, and deriving the expected value of information.
• "MAD Skills" by Cohen et al. [7] offers input for several of the techniques mentioned in Phases 2-4 that focus on model planning, execution, and key findings.
Figure 2-2 presents an overview of the Data Analytics Lifecycle that includes six phases. Teams commonly learn new things in a phase that cause them to go back and refine the work done in prior phases based on new insights and information that have been uncovered. For this reason, Figure 2-2 is shown as a cycle. The circular arrows convey iterative movement between phases until the team members have sufficient information to move to the next phase. The callouts include sample questions to ask to help guide whether each of the team members has enough information and has made enough progress to move to the next phase of the process. Note that these phases do not represent formal stage gates; rather, they serve as criteria to help test whether it makes sense to stay in the current phase or move to the next.
FIGURE 2-2 Overview of Data Analytics Lifecycle. The callouts pose sample questions for moving between phases, such as "Do I have enough information to draft an analytic plan and share it for peer review?", "Do I have enough good quality data to start building the model?", "Do I have a good idea about the type of model to try? Can I refine the analytic plan?", and "Is the model robust enough? Have we failed for sure?"
Here is a brief overview of the main phases of the Data Analytics Lifecycle:
• Phase 1-Discovery: In Phase 1, the team learns the business domain, including relevant history such as whether the organization or business unit has attempted similar projects in the past from which they can learn. The team assesses the resources available to support the project in terms of people, technology, time, and data. Important activities in this phase include framing the business problem as an analytics challenge that can be addressed in subsequent phases and formulating initial hypotheses (IHs) to test and begin learning the data.
• Phase 2-Data preparation: Phase 2 requires the presence of an analytic sandbox, in which the team can work with data and perform analytics for the duration of the project. The team needs to execute extract, load, and transform (ELT) or extract, transform and load (ETL) to get data into the sandbox. The ELT and ETL are sometimes abbreviated as ETLT. Data should be transformed in the ETLT process so the team can work with it and analyze it. In this phase, the team also needs to familiarize itself with the data thoroughly and take steps to condition the data (Section 2.3.4).
• Phase 3-Model planning: Phase 3 is model planning, where the team determines the methods, techniques, and workflow it intends to follow for the subsequent model building phase. The team explores the data to learn about the relationships between variables and subsequently selects key variables and the most suitable models.
• Phase 4-Model building: In Phase 4, the team develops datasets for testing, training, and production purposes (a short sketch after this list illustrates conditioning data and creating a training/testing split). In addition, in this phase the team builds and executes models based on the work done in the model planning phase. The team also considers whether its existing tools will suffice for running the models, or if it will need a more robust environment for executing models and workflows (for example, fast hardware and parallel processing, if applicable).
• Phase 5-Communicate results: In Phase 5, the team, in collaboration with major stakeholders, determines if the results of the project are a success or a failure based on the criteria developed in Phase 1. The team should identify key findings, quantify the business value, and develop a narrative to summarize and convey findings to stakeholders.
• Phase 6-Operationalize: In Phase 6, the team delivers final reports, briefings, code, and technical documents. In addition, the team may run a pilot project to implement the models in a production environment.
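As a minimal illustration of two of these phases, the R sketch below conditions a raw extract (Phase 2) and then splits it into training and testing sets (Phase 4). The file raw_transactions.csv and its columns are hypothetical.

# Hypothetical sketch of Phase 2-style conditioning and a Phase 4-style split.
# The file name and column names are assumptions for illustration.
raw <- read.csv("raw_transactions.csv", stringsAsFactors = FALSE)

# Phase 2: drop records with missing amounts and standardize the date field
clean <- raw[!is.na(raw$amount), ]
clean$txn_date <- as.Date(clean$txn_date, format = "%Y-%m-%d")

# Phase 4: hold out 30% of the rows for testing the model
set.seed(123)
test_idx <- sample(nrow(clean), size = round(0.3 * nrow(clean)))
training_set <- clean[-test_idx, ]
testing_set  <- clean[test_idx, ]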
Once team members have run models and produced findings, it
is critical to frame these results in a
way that is tailored to the audience that engaged the team.
Moreover, it is critical to frame the results of
the work in a manner that demonstrates clear value. If the team
performs a technically accurate analysis
but fails to translate the results into a language that resonates
with the audience, people will not see the
value, and much of the time and effort on the project will have
been wasted.
The rest of the chapter is organized as follows. Sections 2.2-2.7
discuss in detail how each of the six
phases works, and Section 2.8 shows a case study of
incorporating the Data Analytics Lifecycle in a real-
world data science project.
2.2 Phase 1: Discovery
The first phase of the Data Analytics Lifecycle involves discovery (Figure 2-3). In this phase, the data science team must learn and investigate the problem, develop context and understanding, and learn about the data sources needed and available for the project. In addition, the team formulates initial hypotheses that can later be tested with data.
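As a toy example of testing an initial hypothesis with data, the sketch below examines the invented IH that customers who received a promotion spend more per visit; a two-sample t-test gives a first read on whether the observed difference exceeds what chance alone would explain.

# Toy test of an initial hypothesis (IH); the spend figures are invented.
set.seed(7)
promo_spend    <- rnorm(200, mean = 54, sd = 12)   # customers who got the promotion
no_promo_spend <- rnorm(200, mean = 50, sd = 12)   # customers who did not

# IH: promotion recipients spend more per visit
t.test(promo_spend, no_promo_spend, alternative = "greater")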
2.2.1 Learning the Business Domain
Understanding the domain area of the problem is essential. In
many cases, data scientists will have deep
computational and quantitative knowledge that can be broadly
applied across many disciplines. An example
of this role would be someone with an advanced degree in
applied mathematics or statistics.
These data scientists have deep knowledge of the methods,
techniques, and ways for applying heuris-
tics to a variety of business and conceptual problems. Others in
this area may have deep knowledge of a
domain area, coupled with quantitative expertise. An example of
this would be someone with a Ph.D. in
life sciences. This person would have deep knowledge of a field
of study, such as oceanography, biology,
or genetics, with some depth of quantitative knowledge.
At this early stage in the process, the team needs to determine how much business or domain knowledge the data scientist needs to develop models in Phases 3 and 4. The earlier the team can make this assessment the better, because the decision helps dictate the resources needed for the project team and ensures the team has the right balance of domain knowledge and technical expertise.
FIGURE 2-3 Discovery phase. The callout asks, "Do I have enough information to draft an analytic plan and share it for peer review?"

2.2.2 Resources
As part of the discovery phase, the team needs to assess the resources available to support the project. In this context, resources include technology, tools, systems, data, and people.
During this scoping, consider the available tools and technology the team will be using and the types of systems needed for later phases to operationalize the models. In addition, try to evaluate the level of analytical sophistication within the organization and gaps that may exist related to tools, technology, and skills. For instance, for the model being developed to have longevity in an organization, consider what types of skills and roles will be required that may not exist today. For the project to have long-term success, what types of skills and roles will be needed for the recipients of the model being developed? Does the requisite level of expertise exist within the organization today, or will it need to be cultivated? Answering these questions will influence the techniques the team selects and the kind of implementation the team chooses to pursue in subsequent phases of the Data Analytics Lifecycle.
In addition to the skills and computing resources, it is advisable
to take inventory of the types of data
available to the team for the project. Consider if the data
available is sufficient to support the project's
goals. The team will need to determine whether it must collect
additional data, purchase it from outside
sources, or transform existing data. Often, projects are started
looking only at the data available. When
the data is less than hoped for, the size and scope of the project
is reduced to work within the constraints
of the existing data.
An alternative approach is to consider the long-term goals of
this kind of project, without being con-
strained by the current data. The team can then consider what
data is needed to reach the long-term goals
and which pieces of this multistep journey can be achieved
today with the existing data. Considering
longer-term goals along with short-term goals enables teams to
pursue more ambitious projects and treat
a project as the first step of a more strategic initiative, rather
than as a standalone initiative. It is critical
to view projects as part of a longer-term journey, especially if
executing projects in an organization that
is new to Data Science and may not have embarked on the
optimum datasets to support robust analyses
up to this point.
Ensure the project team has the right mix of domain experts,
customers, analytic talent, and project
management to be effective. In addition, evaluate how much
time is needed and if the team has the right
breadth and depth of skills.
After taking inventory of the tools, technology, data, and
people, consider if the team has sufficient
resources to succeed on this project, or if additional resources
are needed. Negotiating for resources at the
outset of the project, while scoping the goals, objectives, and
feasibility, is generally more useful than later
in the process and ensures sufficient time to execute it properly.
Project managers and key stakeholders have
better success negotiating for the right resources at this stage
rather than later once the project is underway.
2.2.3 Framing the Problem
Framing the problem well is critical to the success of the
project. Framing is the process of stating the
analytics problem to be solved. At this point, it is a best
practice to write down the problem statement
and share it with the key stakeholders. Each team member may
hear slightly different things related to
the needs and the problem and have somewhat different ideas of
possible solutions. For these reasons, it
is crucial to state the analytics problem, as well as why and to
whom it is important. Essentially, the team
needs to clearly articulate the current situation and its main
challenges.
As part of this activity, it is important to identify the main
objectives of the project, identify what needs
to be achieved in business terms, and identify what needs to be
done to meet the needs. Additionally,
consider the objectives and the success criteria for the project.
What is the team attempting to achieve by
doing the project, and what will be considered "good enough" as
an outcome of the project? This is critical
to document and share with the project team and key
stakeholders. It is best practice to share the statement
of goals and success criteria with the team and confirm
alignment with the project sponsor's expectations.
Perhaps equally important is to establish failure criteria. Most
people doing projects prefer only to think
of the success criteria and what the conditions will look like
when the participants are successful. However,
this is almost taking a best-case scenario approach, assuming
that everything will proceed as planned
and the project team will reach its goals. However, no matter
how well planned, it is almost impossible to
plan for everything that will emerge in a project. The failure
criteria will guide the team in understanding
when it is best to stop trying or settle for the results that have
been gleaned from the data. Many times
people will continue to perform analyses past the point when
any meaningful insights can be drawn from
the data. Establishing criteria for both success and failure helps
the participants avoid unproductive effort
and remain aligned with the project sponsors.
2.2.4 Identifying Key Stakeholders
Another important step is to identify the key stakeholders and
their interests in the project. During
these discussions, the team can identify the success criteria, key
risks, and stakeholders, which should
include anyone who will benefit from the project or will be
significantly impacted by the project. When
interviewing stakeholders, learn about the domain area and any
relevant history from similar analytics
projects. For example, the team may identify the results each
stakeholder wants from the project and the
criteria it will use to judge the success of the project.
Keep in mind that the analytics project is being initiated for a
reason. It is critical to articulate the pain
points as clearly as possible to address them and be aware of
areas to pursue or avoid as the team gets
further into the analytical process. Depending on the number of
stakeholders and participants, the team
may consider outlining the type of activity and participation
expected from each stakeholder and partici-
pant. This will set clear expectations with the participants and
avoid delays later when, for example, the
team may feel it needs to wait for approval from someone who
views himself as an adviser rather than an
approver of the work product.
2.2.5 Interviewing the Analytics Sponsor
The team should plan to collaborate with the stakeholders to
clarify and frame the analytics problem. At the
outset, project sponsors may have a predetermined solution that
may not necessarily realize the desired
outcome. In these cases, the team must use its knowledge and
expertise to identify the true underlying
problem and appropriate solution.
For instance, suppose in the early phase of a project, the team is
told to create a recommender system
for the business and that the way to do this is by speaking with
three people and integrating the product
recommender into a legacy corporate system. Although this may
be a valid approach, it is important to test
the assumptions and develop a clear understanding of the
problem. The data science team typically may
have a more objective understanding of the problem set than the
stakeholders, who may be suggesting
solutions to a given problem. Therefore, the team can probe
deeper into the context and domain to clearly
define the problem and propose possible paths from the problem
to a desired outcome. In essence, the
data science team can take a more objective approach, as the
stakeholders may have developed biases
over time, based on their experience. Also, what may have been
true in the past may no longer be a valid
working assumption. One possible way to circumvent this issue
is for the project sponsor to focus on clearly
defining the requirements, while the other members of the data
science team focus on the methods needed
to achieve the goals.
When interviewing the main stakeholders, the team needs to
take time to thoroughly interview the
project sponsor, who tends to be the one funding the project or
providing the high-level requirements.
This person understands the problem and usually has an idea of
a potential working solution. It is critical
to thoroughly understand the sponsor's perspective to guide the team in getting started on the project. Here are some tips for interviewing project sponsors:
• Prepare for the interview; draft questions, and review with colleagues.
• Use open-ended questions; avoid asking leading questions.
• Probe for details and pose follow-up questions.
• Avoid filling every silence in the conversation; give the other person time to think.
• Let the sponsors express their ideas and ask clarifying questions, such as "Why? Is that correct? Is this idea on target? Is there anything else?"
• Use active listening techniques; repeat back what was heard to make sure the team heard it correctly, or reframe what was said.
• Try to avoid expressing the team's opinions, which can introduce bias; instead, focus on listening.
• Be mindful of the body language of the interviewers and stakeholders; use eye contact where appropriate, and be attentive.
• Minimize distractions.
• Document what the team heard, and review it with the sponsors.
Following is a brief list of common questions that are helpful to ask during the discovery phase when interviewing the project sponsor. The responses will begin to shape the scope of the project and give the team an idea of the goals and objectives of the project.
• What business problem is the team trying to solve?
• What is the desired outcome of the project?
• What data sources are available?
• What industry issues may impact the analysis?
• What timelines need to be considered?
• Who could provide insight into the project?
• Who has final decision-making authority on the project?
• How will the focus and scope of the problem change if the following dimensions change:
• Time: Analyzing 1 year or 10 years' worth of data?
• People: Assess impact of changes in resources on project timeline.
• Risk: Conservative to aggressive
• Resources: None to unlimited (tools, technology, systems)
• Size and attributes of data: Including internal and external data sources
2.2.6 Developing Initial Hypotheses
Developing a set of initial hypotheses (IHs) is a key facet of the discovery phase.
This step involves forming ideas that the team
can test with data. Generally, it is best to come up with a few
primary hypotheses to test and then be
creative about developing several more. These IHs form the
basis of the analytical tests the team will use
in later phases and serve as the foundation for the findings in
Phase 5. Hypothesis testing from a statisti-
cal perspective is covered in greater detail in Chapter 3,
"Review of Basic Data Analytic Methods Using R."
In this way, the team can compare its answers with the outcome
of an experiment or test to generate
additional possible solutions to problems. As a result, the team
will have a much richer set of observations
to choose from and more choices for agreeing upon the most
impactful conclusions from a project.
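As a minimal sketch of how an IH becomes testable later in the lifecycle, consider a hypothetical hypothesis that customers reached by a new campaign spend more per month; the R code below uses illustrative placeholder values, and the statistical details of such tests are covered in Chapter 3.

# Hypothetical IH: customers reached by the new campaign spend more per month.
# The two vectors are illustrative placeholders standing in for data gathered
# and prepared in later phases.
spend_campaign <- c(105, 98, 120, 111, 93, 130)
spend_control  <- c(92, 88, 101, 95, 90, 99)
t.test(spend_campaign, spend_control, alternative = "greater")   # one-sided two-sample t-test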
Another part of this process involves gathering and assessing
hypotheses from stakeholders and domain
experts who may have their own perspective on what the
problem is, what the solution should be, and how
to arrive at a solution. These stakeholders would know the
domain area well and can offer suggestions on
ideas to test as the team formulates hypotheses during this
phase. The team will likely collect many ideas
that may illuminate the operating assumptions of the
stakeholders. These ideas will also give the team
opportunities to expand the project scope into adjacent spaces
where it makes sense or design experiments
in a meaningful way to address the most important interests of
the stakeholders. As part of this exercise,
it can be useful to obtain and explore some initial data to inform
discussions with stakeholders during the
hypothesis-forming stage.
2.2.7 Identifying Potential Data Sources
As part of the discovery phase, identify the kinds of data the
team will need to solve the problem. Consider
the volume, type, and time span of the data needed to test the
hypotheses. Ensure that the team can access
more than simply aggregated data. In most cases, the team will
need the raw data to avoid introducing
bias for the downstream analysis. Recalling the characteristics
of Big Data from Chapter 1, assess the main
characteristics of the data, with regard to its volume, variety,
and velocity of change. A thorough diagno-
sis of the data situation will influence the kinds of tools and
techniques to use in Phases 2-4 of the Data
Analytics Lifecycle. In addition, performing data exploration in
this phase will help the team determine
the amount of data needed, such as the amount of historical data
to pull from existing systems and the
data structure. Develop an idea of the scope of the data needed,
and validate that idea with the domain
experts on the project.
The team should perform five main activities during this step of
the discovery phase:
o Identify data sources: Make a list of candidate data sources
the team may need to test the initial
hypotheses outlined in this phase. Make an inventory of the
datasets currently available and those
that can be purchased or otherwise acquired for the tests the
team wants to perform.
o Capture aggregate data sources: This is for previewing the
data and providing high-level under-
standing. It enables the team to gain a quick overview of the
data and perform further exploration on
specific areas. It also points the team to possible areas of
interest within the data.
o Review the raw data: Obtain preliminary data from initial data
feeds. Begin understanding the
interdependencies among the data attributes, and become
familiar with the content of the data, its
quality, and its limitations.
• Evaluate the data structures and tools needed: The data type
and structure dictate which tools the
team can use to analyze the data. This evaluation gets the team
thinking about which technologies
may be good candidates for the project and how to start getting
access to these tools.
• Scope the sort of data infrastructure needed for this type of
problem: In addition to the tools
needed, the data influences the kind of infrastructure that's
required, such as disk storage and net-
work capacity.
Unlike many traditional stage-gate processes, in which the team
can advance only when specific criteria
are met, the Data Analytics Lifecycle is intended to accommodate more ambiguity. This more closely reflects how data science projects work in real-life situations. For each phase of the process, it is recommended to pass certain checkpoints as a way of gauging whether the team is ready to move to the next phase of
the Data Analytics Lifecycle.
The team can move to the next phase when it has enough
information to draft an analytics plan and
share it for peer review. Although a peer review of the plan may
not actually be required by the project,
creating the plan is a good test of the team's grasp of the business problem and the team's approach to addressing it. Creating the analytic plan also requires a clear
understanding of the domain area, the
problem to be solved, and scoping of the data sources to be
used. Developing success criteria early in the
project clarifies the problem definition and helps the team when
it comes time to make choices about
the analytical methods being used in later phases.
2.3 Phase 2: Data Preparation
The second phase of the Data Analytics Lifecycle involves data
preparation, which includes the steps to
explore, preprocess, and condition data prior to modeling and analysis. In this phase, the team needs to create a robust environment in which it can explore the data that is separate from a production environment. Usually, this is done by preparing an analytics sandbox. To get the data into the sandbox, the team needs to perform ETLT, by a combination of extracting, transforming, and loading data into the sandbox. Once the data is in the sandbox, the team needs to learn about the data and become familiar with it. Understanding the data in detail is critical to the success of the project. The team also must decide how to condition and transform data to get it into a format to facilitate subsequent analysis. The team may perform data visualiza-
tions to help team members understand the data, including its
trends, outliers, and relationships among
data variables. Each of these steps of the data preparation phase
is discussed throughout this section.
Data preparation tends to be the most labor-intensive step in the analytics lifecycle. In fact, it is common for teams to spend at least 50% of a data science project's time in this critical phase. If the team cannot obtain enough data of sufficient quality, it may be unable to perform the subsequent steps in the lifecycle process.
Figure 2-4 shows an overview of the Data Analytics Lifecycle for Phase 2. The data preparation phase is generally the most iterative and the one that teams tend to underestimate most often. This is because most teams and leaders are anxious to begin analyzing the data, testing hypotheses, and getting answers to some of the questions posed in Phase 1. Many tend to jump into Phase 3 or Phase 4 to begin rapidly developing models and algorithms without spending the time to prepare the
data for modeling. Consequently, teams
come to realize the data they are working with does not allow
them to execute the models they want, and
they end up back in Phase 2 anyway.
FIGURE 2-4 Data preparation phase ("Do I have enough good quality data to start building the model?")
2.3.1 Preparing the Analytic Sandbox
The first subphase of data preparation requires the team to obtain an analytic sandbox (also commonly referred to as a workspace), in which the team can explore the data without interfering with live production databases. Consider an example in which the team needs to work with a company's financial data. The team should access a copy of the financial data from the analytic sandbox rather than interacting with the production version of the organization's main database, because that will be tightly controlled and needed for financial reporting.
When developing the analytic sandbox, it is a best practice to collect all kinds of data there, as team members need access to high volumes and varieties of data for a Big Data analytics project. This can include
everything from summary-level aggregated data, structured
data, raw data feeds, and unstructured text
data from call logs or web logs, depending on the kind of
analysis the team plans to undertake.
This expansive approach for attracting data of all kinds differs
considerably from the approach advocated
by many information technology (IT) organizations. Many IT
groups provide access to only a particular sub-
segment of the data for a specific purpose. Often, the mindset of
the IT group is to provide the minimum
amount of data required to allow the team to achieve its
objectives. Conversely, the data science team
wants access to everything. From its perspective, more data is
better, as oftentimes data science projects
are a mixture of purpose-driven analyses and experimental
approaches to test a variety of ideas. In this
context, it can be challenging for a data science team if it has to
request access to each and every dataset
and attribute one at a time. Because of these differing views on
data access and use, it is critical for the data
science team to collaborate with IT, make clear what it is trying
to accomplish, and align goals.
During these discussions, the data science team needs to give IT
a justification to develop an analyt-
ics sandbox, which is separate from the traditional IT-governed
data warehouses within an organization.
Successfully and amicably balancing the needs of both the data
science team and IT requires a positive
working relationship between multiple groups and data owners.
The payoff is great. The analytic sandbox
enables organizations to undertake more ambitious data science
projects and move beyond doing tradi-
tional data analysis and Business Intelligence to perform more
robust and advanced predictive analytics.
Expect the sandbox to be large. It may contain raw data,
aggregated data, and other data types that are
less commonly used in organizations. Sandbox size can vary
greatly depending on the project. A good rule
is to plan for the sandbox to be at least 5-10 times the size of
the original data sets, partly because copies of
the data may be created that serve as specific tables or data
stores for specific kinds of analysis in the project.
Although the concept of an analytics sandbox is relatively new,
companies are making progress in this
area and are finding ways to offer sandboxes and workspaces
where teams can access data sets and work
in a way that is acceptable to both the data science teams and
the IT groups.
2.3.2 Performing ETLT
As the team looks to begin data transformations, make sure the
analytics sandbox has ample bandwidth
and reliable network connections to the underlying data sources
to enable uninterrupted read and write.
In ETL, users perform extract, transform, load processes to
extract data from a datastore, perform data
transformations, and load the data back into the datastore.
However, the analytic sandbox approach differs
slightly; it advocates extract, load, and then transform. In this
case, the data is extracted in its raw form and
loaded into the datastore, where analysts can choose to
transform the data into a new state or leave it in
its original, raw condition. The reason for this approach is that
there is significant value in preserving the
raw data and including it in the sandbox before any
transformations take place.
For instance, consider an analysis for fraud detection on credit
card usage. Many times, outliers in this
data population can represent higher-risk transactions that may
be indicative of fraudulent credit card
activity. Using ETL, these outliers may be inadvertently filtered
out or transformed and cleaned before being
loaded into the datastore. In this case, the very data that would
be needed to evaluate instances of fraudu-
lent activity would be inadvertently cleansed, preventing the
kind of analysis that a team would want to do.
Following the ELT approach gives the team access to clean data
to analyze after the data has been loaded
into the database and gives access to the data in its original
form for finding hidden nuances in the data.
This approach is part of the reason that the analytic sandbox can
quickly grow large. The team may want
clean data and aggregated data and may need to keep a copy of
the original data to compare against or
2.3 Phase 2: Data Preparation
look for hidden patterns that may have existed in the data before
the cleaning stage. This process can be
summarized as ETLT to reflect the fact that a team may choose
to perform ETL in one case and ELT in another.
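As a minimal sketch of the ELT side of ETLT in R (the file, directory, and column names are hypothetical), the raw extract is landed untouched and transformed copies are derived alongside it:

# Land the raw extract and preserve it before any cleaning.
raw_txn <- read.csv("sandbox/raw/credit_card_txns.csv", stringsAsFactors = FALSE)
saveRDS(raw_txn, "sandbox/raw/credit_card_txns.rds")

# Derive transformed copies for general analysis; the raw table stays available.
clean_txn <- subset(raw_txn, !is.na(amount) & amount >= 0)
agg_txn   <- aggregate(amount ~ customer_id, data = clean_txn, FUN = sum)

# The outliers removed from clean_txn remain in raw_txn, which is exactly
# what a fraud analysis would need to examine.

This is part of why the sandbox grows: the raw, cleaned, and aggregated copies all coexist.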
Depending on the size and number of the data sources, the team
may need to consider how to paral-
lelize the movement of the datasets into the sandbox. For this
purpose, moving large amounts of data is
sometimes referred to as Big ETL. The data movement can be
parallelized by technologies such as Hadoop
or MapReduce, which will be explained in greater detail in
Chapter 10, "Advanced Analytics-Technology
and Tools: MapReduce and Hadoop." At this point, keep in
mind that these technologies can be used to
perform parallel data ingest and introduce a huge number of
files or datasets in parallel in a very short
period of time. Hadoop can be useful for data loading as well as
for data analysis in subsequent phases.
Prior to moving the data into the analytic sandbox, determine
the transformations that need to be
performed on the data. Part of this phase involves assessing data
quality and structuring the data sets
properly so they can be used for robust analysis in subsequent
phases. In addition, it is important to con-
sider which data the team will have access to and which new
data attributes will need to be derived in the
data to enable analysis.
As part of the ETLT step, it is advisable to make an inventory
of the data and compare the data currently
available with datasets the team needs. Performing this sort of
gap analysis provides a framework for
understanding which datasets the team can take advantage of
today and where the team needs to initiate
projects for data collection or access to new datasets currently
unavailable. A component of this subphase
involves extracting data from the available sources and
determining data connections for raw data, online
transaction processing (OLTP) databases, online analytical
processing (OLAP) cubes, or other data feeds.
Application programming interface (API) is an increasingly
popular way to access a data source [8]. Many
websites and social network applications now provide APIs that
offer access to data to support a project
or supplement the datasets with which a team is working. For
example, connecting to the Twitter API can
enable a team to download millions of tweets to perform a
project for sentiment analysis on a product, a
company, or an idea. Much of the Twitter data is publicly
available and can augment other data sets used
on the project.
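A minimal, hedged sketch of pulling API data into R follows; the endpoint, query parameters, and token are hypothetical, and a real social network API would impose its own authentication scheme and rate limits:

library(httr)
library(jsonlite)

# Hypothetical REST endpoint and bearer token; substitute the provider's
# documented URL and authentication details.
resp <- GET("https://api.example.com/v1/posts",
            query = list(q = "productX", count = 100),
            add_headers(Authorization = paste("Bearer", Sys.getenv("API_TOKEN"))))

posts <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
str(posts)   # inspect the structure before adding the feed to the sandbox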
2.3.3 Learning About the Data
A critical aspect of a data science project is to become familiar
with the data itself. Spending time to learn the
nuances of the datasets provides context to understand what
constitutes a reasonable value and expected
output versus what is a surprising finding. In addition, it is
important to catalog the data sources that the
team has access to and identify additional data sources that the
team can leverage but perhaps does not
have access to today. Some of the activities in this step may
overlap with the initial investigation of the
datasets that occur in the discovery phase. Doing this activity
accomplishes several goals.
o Clarifies the data that the data science team has access to at
the start of the project
o Highlights gaps by identifying datasets within an organization
that the team may find useful but may
not be accessible to the team today. As a consequence, this
activity can trigger a project to begin
building relationships with the data owners and finding ways to
share data in appropriate ways. In
addition, this activity may provide an impetus to begin
collecting new data that benefits the organi-
zation or a specific long-term project.
o Identifies datasets outside the organization that may be useful
to obtain, through open APis, data
sharing, or purchasing data to supplement already existing
datasets
Table 2-1 demonstrates one way to organize this type of data
inventory.
TABLE 2-1 Sample Dataset Inventory
Dataset | Data Available and Accessible | Data Available, but not Accessible | Data to Collect | Data to Obtain from Third Party Sources
Products shipped | • | | |
Product Financials | | • | |
Product Call Center Data | | • | |
Live Product Feedback Surveys | | | • |
Product Sentiment from Social Media | | | | •
2.3.4 Data Conditioning
Data conditioning refers to the process of cleaning data, normalizing datasets, and performing transformations on the data. A critical step within the Data Analytics Lifecycle, data conditioning can involve many complex steps to join or merge data sets or otherwise get datasets into a state that enables analysis in further phases. Data conditioning is often viewed as a preprocessing step for the data analysis because it involves many operations on the dataset before developing models to process or analyze the data. This implies that the data-conditioning step is performed only by IT, the data owners, a DBA, or a data engineer. However, it is also important to involve the data scientist in this step because many decisions are made in the data conditioning phase that affect subsequent analysis. Part of this phase involves deciding which aspects of particular datasets will be useful to analyze in later steps. Because teams begin forming ideas in this phase about which data to keep and which data to transform or discard, it is important to involve multiple team members in these decisions. Leaving such decisions to a single person may cause teams to return to this phase to retrieve data that may have been discarded.
As with the previous example of deciding which data to keep as
it relates to fraud detection on credit
card usage, it is critical to be thoughtful about which data the
team chooses to keep and which data will
be discarded. This can have far-reaching consequences that will
cause the team to retrace previous steps
if the team discards too much of the data at too early a point in
this process. Typically, data science teams
would rather keep more data than too little data for the analysis.
Additional questions and considerations
for the data conditioning step include these.
• What are the data sources? What are the target fields (for
example, columns of the tables)?
• How clean is the data?
o How consistent are the contents and files? Determine to what
degree the data contains missing or
inconsistent values and if the data contains values deviating
from normal.
o Assess the consistency of the data types. For instance, if the
team expects certain data to be numeric,
confirm it is numeric or if it is a mixture of alphanumeric
strings and text.
o Review the content of data columns or other inputs, and check
to ensure they make sense. For
instance, if the project involves analyzing income levels,
preview the data to confirm that the income
values are positive or if it is acceptable to have zeros or
negative values.
o Look for any evidence of systematic error. Examples include
data feeds from sensors or other data
sources breaking without anyone noticing, which causes invalid,
incorrect, or missing data values. In
addition, review the data to gauge if the definition of the data is
the same over all measurements. In
some cases, a data column is repurposed, or the column stops
being populated, without this change
being annotated or without others being notified.
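Checks like the ones above can be scripted early in the phase; here is a minimal R sketch, with a hypothetical file and column names, that surveys missing values, data types, and out-of-range income values:

raw <- read.csv("sandbox/customer_extract.csv", stringsAsFactors = FALSE)

str(raw)                                    # confirm expected data types
colSums(is.na(raw))                         # missing values per column
summary(raw$income)                         # range check on income
sum(raw$income < 0, na.rm = TRUE)           # negative incomes: acceptable or systematic error?

# Detect alphanumeric values hiding in a column expected to be numeric.
income_num <- suppressWarnings(as.numeric(raw$income))
sum(is.na(income_num)) - sum(is.na(raw$income))   # rows that failed numeric conversion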
2.3.5 Survey and Visualize
After the team has collected and obtained at least some of the
datasets needed for the subsequent
analysis, a useful step is to leverage data visualization tools to
gain an overview of the data. Seeing high-level
patterns in the data enables one to understand characteristics
about the data very quickly. One example
is using data visualization to examine data quality, such as
whether the data contains many unexpected
values or other indicators of dirty data. (Dirty data will be
discussed further in Chapter 3.) Another example
is skewness, such as if the majority of the data is heavily
shifted toward one value or end of a continuum.
Shneiderman [9] is well known for his mantra for visual data
analysis of "overview first, zoom and filter,
then details-on-demand." This is a pragmatic approach to visual
data analysis. It enables the user to find
areas of interest, zoom and filter to find more detailed
information about a particular area of the data, and
then find the detailed data behind a particular area. This
approach provides a high-level view of the data
and a great deal of information about a given dataset in a
relatively short period of time.
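As a minimal illustration of the "overview first" step, a few lines of base R graphics can surface skewness, outliers, and shifts over time before any detailed analysis; the dataset and column names are hypothetical:

surveyed <- read.csv("sandbox/customer_extract.csv", stringsAsFactors = FALSE)

hist(surveyed$order_value, breaks = 50,
     main = "Distribution of order value", xlab = "Order value")   # skewness at a glance
boxplot(order_value ~ region, data = surveyed,
        main = "Order value by region")                            # outliers and range by group
plot(as.Date(surveyed$order_date), surveyed$order_value,
     xlab = "Order date", ylab = "Order value")                    # did a calculation change mid-collection?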
When pursuing this approach with a data visualization tool or
statistical package, the following guide-
lines and considerations are recommended.
o Review data to ensure that calculations remained consistent
within columns or across tables for a
given data field. For instance, did customer lifetime value
change at some point in the middle of data
collection? Or if working with financials, did the interest
calculation change from simple to com-
pound at the end of the year?
o Does the data distribution stay consistent over all the data? If
not, what kinds of actions should be
taken to address this problem?
o Assess the granularity of the data, the range of values, and the
level of aggregation of the data.
o Does the data represent the population of interest? For
marketing data, if the project is focused on
targeting customers of child-rearing age, does the data represent
that, or is it full of senior citizens
and teenagers?
o For time-related variables, are the measurements daily,
weekly, monthly? Is that good enough? Is
time measured in seconds everywhere? Or is it in milliseconds
in some places? Determine the level of
granularity of the data needed for the analysis, and assess
whether the current level of timestamps
on the data meets that need.
• Is the data standardized/normalized? Are the scales
consistent? If not, how consistent or irregular is
the data?
• For geospatial datasets, are state or country abbreviations
consistent across the data? Are personal
names normalized? English units? Metric units?
These are typical considerations that should be part of the
thought process as the team evaluates the
data sets that are obtained for the project. Becoming deeply
knowledgeable about the data will be critical
when it comes time to construct and run models later in the
process.
2.3.6 Common Tools for the Data Preparation Phase
Several tools are commonly used for this phase:
• Hadoop [10] can perform massively parallel ingest and custom analysis for web traffic parsing, GPS location analytics, genomic analysis, and combining of massive unstructured data feeds from multiple sources.
• Alpine Miner [11] provides a graphical user interface (GUI) for creating analytic workflows, including data manipulations and a series of analytic events such as staged data-mining techniques (for example, first select the top 100 customers, and then run descriptive statistics and clustering) on PostgreSQL and other Big Data sources.
• Open Refine (formerly called Google Refine) [12] is "a free, open source, powerful tool for working with messy data." It is a popular GUI-based tool for performing data transformations, and it's one of the most robust free tools currently available.
• Similar to Open Refine, Data Wrangler [13] is an interactive tool for data cleaning and transformation.
Wrangler was developed at Stanford University and can be used
to perform many transformations on
a given dataset. In addition, data transformation outputs can be
put into Java or Python. The advan-
tage of this feature is that a subset of the data can be
manipulated in Wrangler via its GUI, and then
the same operations can be written out as Java or Python code
to be executed against the full, larger
dataset offline in a local analytic sandbox.
For Phase 2, the team needs assistance from IT, DBAs, or
whoever controls the Enterprise Data Warehouse
(EDW) for data sources the data science team would like to use.
2.4 Phase 3: Model Planning
In Phase 3, the data science team identifies candidate models to apply to the data for clustering, classifying, or finding relationships in the data depending on the goal of the project, as shown in Figure 2-5. It is during this phase that the team refers to the hypotheses developed in Phase 1, when they first became acquainted with the data and began understanding the business problems or domain area. These hypotheses help the team frame the analytics to execute in Phase 4 and select the right methods to achieve its objectives.
Some of the activities to consider in this phase include the
following:
• Assess the structure of the datasets. The structure of the data
sets is one factor that dictates the tools
and analytical techniques for the next phase. Depending on
whether the team plans to analyze tex-
tual data or transactional data, for example, different tools and
approaches are required.
• Ensure that the analytical techniques enable the team to meet
the business objectives and accept or
reject the working hypotheses.
• Determine if the situation warrants a single model or a series of techniques as part of a larger analytic workflow. A few example models include association rules (Chapter 5, "Advanced Analytical Theory and Methods: Association Rules") and logistic regression (Chapter 6, "Advanced Analytical Theory and Methods: Regression"). Other tools, such as Alpine Miner, enable users to set up a series of steps and analyses and can serve as a front-end user interface (UI) for manipulating Big Data sources in PostgreSQL.
FIGURE 2-5 Model planning phase ("Do I have a good idea about the type of model to try? Can I refine the analytic plan?")
In addition to the considerations just listed, it is useful to research and understand how other analysts generally approach a specific kind of problem. Given the kind of data and resources that are available, evaluate whether similar, existing approaches will work or if the team will need to create something new. Many
times teams can get ideas from analogous problems that other
people have solved in different industry
verticals or domain areas. Table 2-2 summarizes the results of
an exercise of this type, involving several
domain areas and the types of models previously used in a classification type of problem after conducting research on churn models in multiple industry verticals.
Performing this sort of diligence gives the team
ideas of how others have solved similar problems and presents
the team with a list of candidate models to
try as part of the model planning phase.
TABLE 2-2 Research on Model Planning in Industry Verticals
Market Sector | Analytic Techniques/Methods Used
Consumer Packaged Goods | Multiple linear regression, automatic relevance determination (ARD), and decision tree
Retail Banking | Multiple regression
Retail Business | Logistic regression, ARD, decision tree
Wireless Telecom | Neural network, decision tree, hierarchical neurofuzzy systems, rule evolver, logistic regression
2.4.1 Data Exploration and Variable Selection
Although some data exploration takes place in the data preparation phase, those activities focus mainly on data hygiene and on assessing the quality of the data itself. In Phase 3, the objective of the data exploration is to understand the relationships among the variables to inform selection of the variables and methods and to understand the problem domain. As with earlier phases of the Data Analytics Lifecycle, it is important to spend time and focus attention on this preparatory work to make the subsequent phases of model selection and execution easier and more efficient. A common way to conduct this step involves using tools to perform data visualizations. Approaching the data exploration in this way aids the team in previewing the data and assessing relationships between variables at a high level.
In many cases, stakeholders and subject matter experts have
instincts and hunches about what the data
science team should be considering and analyzing. Likely, this group had some hypothesis that led to the genesis of the project. Often, stakeholders have a good grasp of the problem and domain, although they may not be aware of the subtleties within the data or the model needed to accept or reject a hypothesis. Other times, stakeholders may be correct, but for the wrong reasons (for instance, they may be correct about a correlation that exists but infer an incorrect reason for the correlation). Meanwhile, data scientists
have to approach problems with an unbiased mind-set and be
ready to question all assumptions.
As the team begins to question the incoming assumptions and test initial ideas of the project sponsors
and stakeholders, it needs to consider the inputs and data that
will be needed, and then it must examine
whether these inputs are actually correlated with the outcomes
that the team plans to predict or analyze.
Some methods and types of models will handle correlated
variables better than others. Depending on what
the team is attempting to solve, it may need to consider an
alternate method, reduce the number of data
inputs, or transform the inputs to allow the team to use the best
method for a given business problem.
Some of these techniques will be explored further in Chapter 3
and Chapter 6.
The key to this approach is to aim for capturing the most
essential predictors and variables rather than
considering every possible variable that people think may
influence the outcome. Approaching the prob-
lem in this manner requires iterations and testing to identify the
most essential variables for the intended
analyses. The team should plan to test a range of variables to
include in the model and then focus on the
most important and influential variables.
If the team plans to run regression analyses, identify the
candidate predictors and outcome variables
of the model. Plan to create variables that determine outcomes
but demonstrate a strong relationship to
the outcome rather than to the other input variables. This
includes remaining vigilant for problems such
as serial correlation, multicollinearity, and other typical data
modeling challenges that interfere with the
validity of these models. Sometimes these issues can be avoided
simply by looking at ways to reframe a
given problem. In addition, sometimes determining correlation
is all that is needed ("black box prediction"),
and in other cases, the objective of the project is to understand
the causal relationship better. In the latter
case, the team wants the model to have explanatory power and
needs to forecast or stress test the model
under a variety of situations and with different datasets.
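A minimal R sketch of this kind of predictor screening follows; the data frame, column names, and outcome are hypothetical, and a fuller treatment of regression diagnostics appears in Chapter 6:

d <- read.csv("sandbox/model_inputs.csv")

# Pairwise correlations among candidate predictors: large values hint at multicollinearity.
round(cor(d[, c("tenure", "monthly_spend", "support_calls")], use = "complete.obs"), 2)

# Quick first-pass linear fit to see which inputs show a relationship to the outcome.
fit <- lm(monthly_margin ~ tenure + monthly_spend + support_calls, data = d)
summary(fit)

# If the car package is installed, car::vif(fit) flags variance inflation from collinear inputs.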
2.4.2 Model Selection
In the model selection subphase, the team's main goal is to
choose an analytical technique, or a short list
of candidate techniques, based on the end goal of the project.
For the context of this book, a model is
discussed in general terms. In this case, a model simply refers
to an abstraction from reality. One observes
events happening in a real-world situation or with live data and
attempts to construct models that emulate
this behavior with a set of rules and conditions. In the case of
machine learning and data mining, these
rules and conditions are grouped into several general sets of
techniques, such as classification, association
rules, and clustering. When reviewing this list of types of
potential models, the team can winnow down the
list to several viable models to try to address a given problem.
More details on matching the right models
to common types of business problems are provided in Chapter
3 and Chapter 4, "Advanced Analytical
Theory and Methods: Clustering."
An additional consideration in this area for dealing with Big
Data involves determining if the team will
be using techniques that are best suited for structured data,
unstructured data, or a hybrid approach. For
instance, the team can leverage MapReduce to analyze
unstructured data, as highlighted in Chapter 10.
Lastly, the team should take care to identify and document the
modeling assumptions it is making as it
chooses and constructs preliminary models.
Typically, teams create the initial models using a statistical
software package such as R, SAS, or Matlab.
Although these tools are designed for data mining and machine
learning algorithms, they may have limi-
tations when applying the models to very large datasets, as is
common with Big Data. As such, the team
may consider redesigning these algorithms to run in the
database itself during the pilot phase mentioned
in Phase 6.
The team can move to the model building phase once it has a
good idea about the type of model to try
and the team has gained enough knowledge to refine the
analytics plan. Advancing from this phase requires
a general methodology for the analytical model, a solid
understanding of the variables and techniques to
use, and a description or diagram of the analytic workflow.
2.4.3 Common Tools for the Model Planning Phase
Many tools are available to assist in this phase. Here are several
of the more common ones:
• R [14] has a complete set of modeling capabilities and provides a good environment for building interpretive models with high-quality code. In addition, it has the ability to interface with databases via an ODBC connection and execute statistical tests and analyses against Big Data via an open source connection. These two factors make R well suited to performing statistical tests and analytics on Big Data. As of this writing, R contains nearly 5,000 packages for data analysis and graphical representation. New packages are posted frequently, and many companies are providing value-add services for R (such as training, instruction, and best practices), as well as packaging it in ways to make it easier to use and more robust. This phenomenon is similar to what happened with Linux in the 1990s, when companies appeared to package and make Linux easier for companies to consume and deploy. Use R with file extracts for offline analysis and optimal performance, and use RODBC connections for dynamic queries and faster development; a brief sketch of both access patterns follows this list.
• SQL Analysis services [15] can perform in-database analytics of common data mining functions, involved aggregations, and basic predictive models.
• SAS/ACCESS [16] provides integration between SAS and the analytics sandbox via multiple data connectors such as ODBC, JDBC, and OLE DB. SAS itself is generally used on file extracts, but with SAS/ACCESS, users can connect to relational databases (such as Oracle or Teradata) and data warehouse appliances (such as Greenplum or Aster), files, and enterprise applications (such as SAP and Salesforce.com).
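The sketch below illustrates the two R access patterns mentioned in the first bullet, a file extract for offline work and an RODBC connection for dynamic queries; the DSN, table, and file names are hypothetical:

library(RODBC)

# Dynamic query against the analytic sandbox via an ODBC DSN (hypothetical name).
ch <- odbcConnect("analytics_sandbox")
regional_sales <- sqlQuery(ch, "SELECT region, SUM(revenue) AS revenue
                                FROM sales_fact GROUP BY region")
odbcClose(ch)

# File extract for offline analysis and optimal performance.
offline_sales <- read.csv("extracts/sales_fact_extract.csv")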
2.5 Phase 4: Model Building
In Phase 4, the data science team needs to develop data sets for
training, testing, and production purposes.
These data sets enable the data scientist to develop the
analytical model and train it ("training data"), while holding aside some of the data ("hold-out data" or "test data") for testing the model. (These topics are addressed in more detail in Chapter 3.) During this process, it is critical to ensure that the training and test datasets are sufficiently robust for the model and analytical techniques. A simple way to think of these datasets is to view the training dataset for conducting the initial experiments and the test sets for validating an approach once the initial experiments and models have been run.
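A minimal R sketch of such a split follows; the file and column names are hypothetical, and the 70/30 proportion is just one common choice:

set.seed(42)                                             # make the split reproducible
d <- read.csv("sandbox/model_inputs.csv")

train_idx <- sample(seq_len(nrow(d)), size = floor(0.7 * nrow(d)))
train <- d[train_idx, ]                                  # used to fit (train) the model
test  <- d[-train_idx, ]                                 # held out to score (test) the model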
In the model building phase, shown in Figure 2-6, an analytical model is developed and fit on the training data and evaluated (scored) against the test data. The phases of model planning and model building can overlap quite a bit, and in practice one can iterate back and forth between the two phases for a while before settling on a final model.
Although the modeling techniques and logic required to develop models can be highly complex, the actual duration of this phase can be short compared to the time spent preparing the data and defining the approaches. In general, plan to spend more time preparing and learning the data (Phases 1-2) and crafting a presentation of the findings (Phase 5). Phases 3 and 4 tend
to move more quickly, although they are more
complex from a conceptual standpoint.
As part of this phase, the data science team needs to execute the models defined in Phase 3. During this phase, users run models from analytical software packages, such as R or SAS, on file extracts and small data sets for testing purposes. On a small scale, assess the validity of the model and its results. For instance, determine if the model accounts for most of the data and has robust predictive power. At this point, refine the models to optimize the results, such as by modifying variable inputs or reducing correlated variables where appropriate. In Phase 3, the team may have had some knowledge of correlated variables or problematic data attributes, which will be confirmed or denied once the models are actually executed. When immersed in the details of constructing models and transforming data, many small decisions are often made about the data and the approach for the modeling. These details can be easily forgotten once the project is completed. Therefore, it is vital to record the results and logic of the model during this phase. In addition, one must take care to record any operating assumptions that were made in the modeling process regarding the data or the context.
FIGURE 2-6 Model building phase ("Is the model robust enough? Have we failed for sure?")
Creating robust models that are suitable to a specific situation
requires thoughtful consideration to
ensure the models being developed ultimately meet the
objectives outlined in Phase 1. Questions to con-
sider include these:
• Does the model appear valid and accurate on the test data?
• Does the model output/behavior make sense to the domain
experts? That is, does it appear as if the
model is giving answers that make sense in this context?
• Do the parameter values of the fitted model make sense in the
context of the domain?
• Is the model sufficiently accurate to meet the goal?
• Does the model avoid intolerable mistakes? Depending on
context, false positives may be more seri-
ous or less serious than false negatives, for instance. (False
positives and false negatives are discussed
further in Chapter 3 and Chapter 7, "Advanced Analytical
Theory and Methods: Classification.")
• Are more data or more inputs needed? Do any of the inputs
need to be transformed or eliminated?
• Will the kind of model chosen support the runtime
requirements?
• Is a different form of the model required to address the
business problem? If so, go back to the model
planning phase and revise the modeling approach.
Once the data science team can evaluate whether the model is sufficiently robust to solve the problem or whether the team has failed, it can move to the next phase in the Data Analytics Lifecycle.
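As a small, hedged illustration of checking validity and accuracy on the test data, the sketch below assumes the hypothetical train and test data frames from the earlier split and a binary churned outcome:

# Fit a simple classifier on the training data.
fit <- glm(churned ~ tenure + monthly_spend + support_calls,
           data = train, family = binomial)

# Score the hold-out data and compare predictions to actual outcomes.
pred <- ifelse(predict(fit, newdata = test, type = "response") > 0.5, 1, 0)
table(actual = test$churned, predicted = pred)   # confusion matrix: inspect false positives/negatives
mean(pred == test$churned)                       # simple accuracy on the test data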
2.5.1 Common Tools for the Model Building Phase
There are many tools available to assist in this phase, focused primarily on statistical analysis or data mining software. Common tools in this space include, but are not limited to, the following:
• Commercial Tools:
• SAS Enterprise Miner [17] allows users to run predictive and descriptive models based on large volumes of data from across the enterprise. It interoperates with other large data stores, has many partnerships, and is built for enterprise-level computing and analytics.
• SPSS Modeler [18] (provided by IBM and now called IBM SPSS Modeler) offers methods to explore and analyze data through a GUI.
• Matlab [19] provides a high-level language for performing a variety of data analytics, algorithms, and data exploration.
• Alpine Miner [11] provides a GUI front end for users to develop analytic workflows and interact with Big Data tools and platforms on the back end.
• STATISTICA [20] and Mathematica [21] are also popular and well-regarded data mining and analytics tools.
• Free or Open Source tools:
• R and PL/R [14]: R was described earlier in the model planning phase, and PL/R is a procedural language for PostgreSQL with R. Using this approach means that R commands can be executed in-database. This technique provides higher performance and is more scalable than running R in memory.
• Octave [22], a free software programming language for computational modeling, has some of the functionality of Matlab. Because it is freely available, Octave is used in major universities when teaching machine learning.
• WEKA [23] is a free data mining software package with an analytic workbench. The functions created in WEKA can be executed within Java code.
• Python is a programming language that provides toolkits for machine learning and analysis, such as scikit-learn, numpy, scipy, pandas, and related data visualization using matplotlib.
• SQL in-database implementations, such as MADlib [24], provide an alternative to in-memory desktop analytical tools. MADlib provides an open-source machine learning library of algorithms that can be executed in-database, for PostgreSQL or Greenplum.
2.6 Phase 5: Communicate Results
After executing the model, the team needs to compare the
outcomes of the modeling to the criteria estab-
lished for success and failure. In Phase 5, shown in Figure 2-7,
the team considers how best to articulate
the findings and outcomes to the various team members and
stakeholders, taking into account caveats,
assumptions, and any limitations of the results. Because the presentation is often circulated within an organization, it is critical to articulate the results properly and
position the findings in a way that is appro-
priate for the audience.
FIGURE 2-7 Communicate results phase
As part of Phase 5, the team needs to determine if it succeeded
or failed in its objectives. Many times
people do not want to admit to failing, but in this instance
failure should not be considered as a true
failure, but rather as a failure of the data to accept or reject a
given hypothesis adequately. This concept
can be counterintuitive for those who have been told their whole careers not to fail. However, the key is
to remember that the team must be rigorous enough with the
data to determine whether it will prove or
disprove the hypotheses outlined in Phase 1 (discovery).
Sometimes teams have only done a superficial
analysis, which is not robust enough to accept or reject a
hypothesis. Other times, teams perform very robust
analysis and are searching for ways to show results, even when
results may not be there. It is important
to strike a balance between these two extremes when it comes to
analyzing data and being pragmatic in
terms of showing real-world results.
When conducting this assessment, determine if the results are
statistically significant and valid. If they are, identify the aspects of the results that stand out and may provide salient findings when it comes time
to communicate them. If the results are not valid, think about
adjustments that can be made to refine and
iterate on the model to make it valid. During this step, assess the results and identify which data points
may have been surprising and which were in line with the
hypotheses that were developed in Phase 1.
Comparing the actual results to the ideas formulated early on produces additional ideas and insights that
would have been missed if the team had not taken time to
formulate initial hypotheses early in the process.
By this time, the team should have determined which model or
models address the analytical challenge
in the most appropriate way. In addition, the team should have
ideas of some of the findings as a result of the
project. The best practice in this phase is to record all the
findings and then select the three most significant
ones that can be shared with the stakeholders. In addition, the
team needs to reflect on the implications
of these findings and measure the business value. Depending on
what emerged as a result of the model,
the team may need to spend time quantifying the business
impact of the results to help prepare for the
presentation and demonstrate the value of the findings. Doug Hubbard's work [6] offers insights on how to assess intangibles in business and quantify the value of seemingly unmeasurable things.
Now that the team has run the model, completed a thorough
discovery phase, and learned a great deal
about the datasets, reflect on the project and consider what
obstacles were in the project and what can be
improved in the future. Make recommendations for future work
or improvements to existing processes, and
consider what each of the team members and stakeholders needs to fulfill her responsibilities. For instance,
sponsors must champion the project. Stakeholders must
understand how the model affects their processes.
(For example, if the team has created a model to predict
customer churn, the Marketing team must under-
stand how to use the churn model predictions in planning their
interventions.) Production engineers need
to operationalize the work that has been done. In addition, this
is the phase to underscore the business
benefits of the work and begin making the case to implement
the logic into a live production environment.
As a result of this phase, the team will have documented the key
findings and major insights derived
from the analysis. The deliverable of this phase will be the most
visible portion of the process to the outside
stakeholders and sponsors, so take care to clearly articulate the
results, methodology, and business value
of the findings. More details will be provided about data
visualization tools and references in Chapter 12,
"The Endgame, or Putting It All Together."
2.7 Phase 6: Operationalize
In the final phase, the team communicates the benefits of the
project more broadly and sets up a pilot
project to deploy the work in a controlled way before
broadening the work to a full enterprise or ecosystem
of users. In Phase 4, the team scored the model in the analytics
sandbox. Phase 6, shown in Figure 2-8,
represents the first time that most analytics teams approach deploying the new analytical methods or
models in a production environment. Rather than deploying
these models immediately on a wide-scale
2 .7 Phase 6 : Operationalize
basis, the risk can be managed more effectively and the team
can learn by undertaking a small scope, pilot
deployment before a wide-scale rollout. This approach enables
the team to learn about the performance
and related constraints of the model in a production
environment on a small scale and make adjustments
before a full deployment. During the pilot project, the team may
need to consider executing the algorithm
in the database rather than with in-memory tools such as R
because the run time is significantly faster and
more efficient than running in-memory, especially on larger
datasets.
FIGURE 2-8 Model operationalize phase
While scoping the effort involved in conducting a pilot project,
consider running the model in a
production environment for a discrete set of products or a single
line of business, which tests the model
in a live setting. This allows the team to learn from the
deployment and make any needed adjustments
before launching the model across the enterprise. Be aware that
this phase can bring in a new set of team members, usually the engineers responsible for the production environment, who have a new set of issues and concerns beyond those of the core project team. This
technical group needs to ensure that
running the model fits smoothly into the production environment and that the model can be integrated
into related business processes.
Part of the operationalizing phase includes creating a mechanism for performing ongoing monitoring of model accuracy and, if accuracy degrades, finding ways to retrain the model. If feasible, design alerts
for when the model is operating "out-of-bounds." This includes
situations when the inputs are beyond the
range that the model was trained on, which may cause the
outputs of the model to be inaccurate or invalid.
If this begins to happen regularly, the model needs to be
retrained on new data.
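A minimal sketch of such an out-of-bounds check in R follows; the variable names and training ranges are hypothetical and would normally be saved when the model is trained:

# Ranges observed in the training data, saved when the model was built.
train_range <- list(tenure = c(0, 120), monthly_spend = c(5, 400))

flag_out_of_bounds <- function(new_data, ranges) {
  sapply(names(ranges), function(v) {
    any(new_data[[v]] < ranges[[v]][1] | new_data[[v]] > ranges[[v]][2], na.rm = TRUE)
  })
}

# Hypothetical new scoring batch; a TRUE flag would trigger an alert and, if
# it recurs, retraining on new data.
new_batch <- data.frame(tenure = c(10, 300), monthly_spend = c(50, 60))
flag_out_of_bounds(new_batch, train_range)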
Often, analytical projects yield new insights about a business, a
problem, or an idea that people may
have taken at face value or thought was impossible to explore.
Four main deliverables can be created to
meet the needs of most stakeholders. This approach for
developing the four deliverables is discussed in
greater detail in Chapter 12.
Figure 2-9 portrays the key outputs for each of the main
stakeholders of an analytics project and what
they usually expect at the conclusion of a project.
• Business User typically tries to determine the benefits and
implications of the findings to the
business.
• Project Sponsor typically asks questions related to the business impact of the project, the risks and return on investment (ROI), and the way the project can be evangelized within the organization (and
beyond).
• Project Manager needs to determine if the project was
completed on time and within budget and
how well the goals were met.
• Business Intelligence Analyst needs to know if the reports
and dashboards he manages will be
impacted and need to change.
• Data Engineer and Database Administrator (DBA) typically
need to share their code from the ana-
lytics project and create a technical document on how to
implement it.
• Data Scientist needs to share the code and explain the model
to her peers, managers, and other
stakeholders.
Although these seven roles represent many interests within a
project, these interests usually overlap,
and most of them can be met with four main deliverables.
• Presentation for project sponsors: This contains high-level
takeaways for executive level stakeholders, with a few key messages to aid their decision-making process. Focus on clean, easy visuals for the presenter to explain and for the viewer to grasp.
• Presentation for analysts, which describes business process
changes and reporting changes. Fellow
data scientists will want the details and are comfortable with
technical graphs (such as Receiver
Operating Characteristic [ROC] curves, density plots, and
histograms shown in Chapter 3 and Chapter 7).
• Code for technical people.
• Technical specifications of implementing the code.
As a general rule, the more executive the audience, the more
succinct the presentation needs to be.
Most executive sponsors attend many briefings in the course of
a day or a week. Ensure that the presenta-
tion gets to the point quickly and frames the results in terms of
value to the sponsor's organization. For
instance, if the team is working with a bank to analyze cases of
credit card fraud, highlight the frequency
of fraud, the number of cases in the past month or year, and the
cost or revenue impact to the bank
(or focus on the reverse-how much more revenue the bank
could gain if it addresses the fraud problem).
This demonstrates the business impact better than deep dives on
the methodology. The presentation
needs to include supporting information about analytical
methodology and data sources, but generally
only as supporting detail or to ensure the audience has
confidence in the approach that was taken to
analyze the data.
FIGURE 2-9 Key outputs from a successful analytics project (code, technical specs, presentation for analysts, and presentation for project sponsors)
When presenting to other audiences with more quantitative
backgrounds, focus more time on the
methodology and findings. In these instances, the team can be more expansive in describing the out-
comes, methodology, and analytical experiment with a peer
group. This audience will be more interested
in the techniques, especially if the team developed a new way
of processing or analyzing data that can be
reused in the future or applied to similar problems. In addition,
use imagery or data visualization when
possible. Although it may take more time to develop imagery,
people tend to remember mental pictures to
demonstrate a point more than long lists of bullets [25]. Data
visualization and presentations are discussed
further in Chapter 12.
2.8 Case Study: Global Innovation Network
and Analysis (GINA)
EMC's Global Innovation Network and Analytics (GINA) team
is a group of senior technologists located in
centers of excellence (COEs) around the world. This team's
charter is to engage employees across global
COEs to drive innovation, research, and university partnerships.
In 2012, a newly hired director wanted to
improve these activities and provide a mechanism to track and
analyze the related information. In addition,
this team wanted to create more robust mechanisms for
capturing the results of its informal conversations
with other thought leaders within EMC, in academia, or in other
organizations, which could later be mined
for insights.
The GINA team thought its approach would provide a means to
share ideas globally and increase
knowledge sharing among GINA members who may be
separated geographically. It planned to create a
data repository containing both structured and unstructured data
to accomplish three main goals.
o Store formal and informal data.
o Track research from global technologists.
o Mine the data for patterns and insights to improve the team's
operations and strategy.
The GINA case study provides an example of how a team
applied the Data Analytics Lifecycle to analyze innovation data at EMC. Innovation is typically a difficult
concept to measure, and this team wanted to look
for ways to use advanced analytical methods to identify key
innovators within the company.
2.8.1 Phase 1: Discovery
In the GINA project's discovery phase, the team began
identifying data sources. Although GINA was a
group of technologists skilled in many different aspects of
engineering, it had some data and ideas about
what it wanted to explore but lacked a formal team that could
perform these analytics. After consulting
with various experts including Tom Davenport, a noted expert
in analytics at Babson College, and Peter
Gloor, an expert in collective intelligence and creator of COIN
(Collaborative Innovation Networks) at MIT,
the team decided to crowdsource the work by seeking volunteers
within EMC.
Here is a list of how the various roles on the working team were
fulfilled.
o Business User, Project Sponsor, Project Manager: Vice
President from Office of the CTO
o Business Intelligence Analyst: Representatives from IT
o Data Engineer and Database Administrator (DBA):
Representatives from IT
o Data Scientist: Distinguished Engineer, who also developed
the social graphs shown in the GINA
case study
The project sponsor's approach was to leverage social media and
blogging [26] to accelerate the col-
lection of innovation and research data worldwide and to
motivate teams of "volunteer" data scientists
at worldwide locations. Given that he lacked a formal team, he
needed to be resourceful about finding
people who were both capable and willing to volunteer their
time to work on interesting problems. Data
scientists tend to be passionate about data, and the project
sponsor was able to tap into this passion of
highly talented people to accomplish challenging work in a
creative way.
The data for the project fell into two main categories. The first
category represented five years of idea
submissions from EMC's internal innovation contests, known as
the Innovation Road map (formerly called the
Innovation Showcase). The Innovation Road map is a formal,
organic innovation process whereby employees
from around the globe submit ideas that are then vetted and
judged. The best ideas are selected for further
incubation. As a result, the data is a mix of structured data,
such as idea counts, submission dates, inventor
names, and unstructured content, such as the textual
descriptions of the ideas themselves.
The second category of data encompassed minutes and notes
representing innovation and research
activity from around the world. This also represented a mix of
structured and unstructured data. The
structured data included attributes such as dates, names, and
geographic locations. The unstructured
documents contained the "who, what, when, and where"
information that represents rich data about
knowledge growth and transfer within the company. This type
of information is often stored in business
silos that have little to no visibility across disparate research
teams.
The 10 main IHs that the GINA team developed were as
follows:
o IH1: Innovation activity in different geographic regions can
be mapped to corporate strategic
directions.
o IH2: The length of time it takes to deliver ideas decreases
when global knowledge transfer occurs as
part of the idea delivery process.
o IH3: Innovators who participate in global knowledge transfer
deliver ideas more quickly than those
who do not.
o IH4: An idea submission can be analyzed and evaluated for
the likelihood of receiving funding.
o IH5: Knowledge discovery and growth for a particular topic
can be measured and compared across
geographic regions.
o IH6: Knowledge transfer activity can identify research-
specific boundary spanners in disparate
regions.
o IH7: Strategic corporate themes can be mapped to geographic
regions.
o IH8: Frequent knowledge expansion and transfer events
reduce the time it takes to generate a corpo-
rate asset from an idea.
o IH9: Lineage maps can reveal when knowledge expansion and
transfer did not (or has not) resulted in
a corporate asset.
o IH10: Emerging research topics can be classified and mapped
to specific ideators, innovators, bound-
ary spanners, and assets.
The GINA IHs can be grouped into two categories:
o Descriptive analytics of what is currently happening to spark
further creativity, collaboration, and
asset generation
o Predictive analytics to advise executive management of where
it should be investing in the future
2.8.2 Phase 2: Data Preparation
The team partnered with its IT department to set up a new
analytics sandbox to store and experiment on
the data. During the data exploration exercise, the data
scientists and data engineers began to notice that
certain data needed conditioning and normalization. In addition,
the team realized that several missing
data sets were critical to testing some of the analytic
hypotheses.
As the team explored the data, it quickly realized that if it did
not have data of sufficient quality or could
not get good quality data, it would not be able to perform the
subsequent steps in the lifecycle process.
As a result, it was important to determine what level of data
quality and cleanliness was sufficient for the
project being undertaken. In the case of the GINA, the team
discovered that many of the names of the
researchers and people interacting with the universities were
misspelled or had leading and trailing spaces
in the datastore. Seemingly small problems such as these in the
data had to be addressed in this phase to
enable better analysis and data aggregation in subsequent
phases.
2.8.3 Phase 3: Model Planning
In the GINA project, for much of the dataset, it seemed feasible
to use social network analysis techniques to
look at the networks of innovators within EMC. In other cases, it
was difficult to come up with appropriate
ways to test hypotheses due to the lack of data. In one case
(IH9), the team made a decision to initiate a
longitudinal study to begin tracking data points over time
regarding people developing new intellectual
property. This data collection would enable the team to test the
following two ideas in the future:
o IH8: Frequent knowledge expansion and transfer events reduce the amount of time it takes to
generate a corporate asset from an idea.
o IH9: Lineage maps can reveal when knowledge expansion and
transfer did not (or has not) resulted
in a corporate asset.
For the longitudinal study being proposed, the team needed to
establish goal criteria for the study.
Specifically, it needed to determine the end goal of a successful
idea that had traversed the entire journey.
The parameters related to the scope of the study included the
following considerations:
o Identify the right milestones to achieve this goal.
o Trace how people move ideas from each milestone toward the
goal.
o Once this is done, trace ideas that die, and trace others that
reach the goal. Compare the journeys of
ideas that make it and those that do not.
o Compare the times and the outcomes using a few different
methods (depending on how the data is
collected and assembled). These could be as simple as t-tests or
perhaps involve different types of
classification algorithms.
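For instance, once the longitudinal data exists, a comparison of delivery times could be sketched in R as follows. The delivery times below are hypothetical and serve only to show how a simple two-sample t-test might be applied.

# hypothetical delivery times (in days) for ideas with and without
# global knowledge transfer events along the way
with_transfer    <- c(120, 95, 150, 110, 88, 130, 105)
without_transfer <- c(180, 160, 210, 175, 190, 155, 200)

# two-sample t-test: are ideas with knowledge transfer delivered faster?
t.test(with_transfer, without_transfer, alternative = "less")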
2.8.4 Phase 4: Model Building
In Phase 4, the GINA team employed several analytical
methods. This included work by the data scientist
using Natural Language Processing (NLP) techniques on the textual descriptions of the Innovation Roadmap
ideas. In addition, he conducted social network analysis using
Rand RStudio, and then he developed social
graphs and visualizations of the network of communications
related to innovation using R's ggplot2
package. Examples of this work are shown in Figures 2-10 and
2-11.
FIGURE 2-10 Social graph [27] visualization of idea submitters and finalists
FIGURE 2-11 Social graph visualization of top innovation influencers (legend: top betweenness ranks)
Figure 2-10 shows social graphs that portray the relationships
between idea submitters within GINA.
Each color represents an innovator from a different country.
The large dots with red circles around them
represent hubs. A hub represents a person with high
connectivity and a high "betweenness" score. The
cluster in Figure 2-11 contains geographic variety, which is
critical to prove the hypothesis about geo-
graphic boundary spanners. One person in this graph has an
unusually high score when compared to the
rest of the nodes in the graph. The data scientist identified this
person and ran a query against his name
within the analytic sandbox. These actions yielded the following
information about this research scientist
(from the social graph), which illustrated how influential he was
within his business unit and across many
other areas of the company worldwide:
o In 2011, he attended the ACM SIGMOD conference, which is
a top-tier conference on large-scale data
management problems and databases.
o He visited employees in France who are part of the business
unit for EMC's content management
teams within Documentum (now part of the Information
Intelligence Group, or IIG).
o He presented his thoughts on the SIGMOD conference at a
virtual brown bag session attended by
three employees in Russia, one employee in Cairo, one
employee in Ireland, one employee in India,
three employees in the United States, and one employee in
Israel.
o In 2012, he attended the SDM 2012 conference in California.
o On the same trip he visited innovators and researchers at EMC
federated companies, Pivotal and
VMware.
o Later on that trip he stood before an internal council of
technology leaders and introduced two of his
researchers to dozens of corporate innovators and researchers.
This finding suggests that at least part of the initial hypothesis
is correct; the data can identify innovators
who span different geographies and business units. The team
used Tableau software for data visualization
and exploration and used the Pivotal Greenplum database as the
main data repository and analytics engine.
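Although the team's actual code is not reproduced here, the following minimal sketch shows how a betweenness-based view of such a network could be produced in R with the igraph package; the edge list is hypothetical and stands in for the real communication data.

library(igraph)

# hypothetical edge list: which innovators exchanged ideas with which others
edges <- data.frame(
  from = c("A", "A", "B", "C", "C", "D", "E"),
  to   = c("B", "C", "C", "D", "E", "E", "F")
)
g <- graph_from_data_frame(edges, directed = FALSE)

# high betweenness scores flag potential hubs or boundary spanners
sort(betweenness(g), decreasing = TRUE)

# quick visualization of the social graph
plot(g, vertex.size = 25)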
2.8.5 Phase 5: Communicate Results
In Phase 5, the team found several ways to cull results of the
analysis and identify the most impactful
and relevant findings. This project was considered successful in
identifying boundary spanners and
hidden innovators. As a result, the CTO office launched
longitudinal studies to begin data collection efforts
and track innovation results over longer periods of time. The
GINA project promoted knowledge sharing
related to innovation and researchers spanning multiple areas
within the company and outside of it. GINA
also enabled EMC to cultivate additional intellectual property
that led to additional research topics and
provided opportunities to forge relationships with universities
for joint academic research in the fields of
Data Science and Big Data. In addition, the project was
accomplished with a limited budget, leveraging a
volunteer force of highly skilled and distinguished engineers
and data scientists.
One of the key findings from the project is that there was a
disproportionately high density of innova-
tors in Cork, Ireland. Each year, EMC hosts an innovation
contest, open to employees to submit innovation
ideas that would drive new value for the company. When
looking at the data in 2011, 15% of the finalists
and 15% of the winners were from Ireland. These are unusually
high numbers, given the relative size of the
Cork COE compared to other larger centers in other parts of the
world. After further research, it was learned
that the COE in Cork, Ireland had received focused training in
innovation from an external consultant, which
was proving effective. The Cork COE came up with more
innovation ideas, and better ones, than it had in
the past, and it was making larger contributions to innovation at
EMC. It would have been difficult, if not
impossible, to identify this cluster of innovators through
traditional methods or even anecdotal, word-of-
mouth feedback. Applying social network analysis enabled the
team to find a pocket of people within EMC
who were making disproportionately strong contributions. These
findings were shared internally through
presentations and conferences and promoted through social
media and blogs.
2.8.6 Phase 6: Operationalize
Running analytics against a sandbox filled with notes, minutes, and presentations from innovation activities yielded great insights into EMC's innovation culture. Key findings from the project include these:
• The CTO office and GINA need more data in the future,
including a marketing initiative to convince
people to inform the global community on their
innovation/research activities.
• Some of the data is sensitive, and the team needs to consider
security and privacy related to the data, such as who can run the models and see the results.
• In addition to running models, a parallel initiative needs to be
created to improve basic Business
Intelligence activities, such as dashboards, reporting, and
queries on research activities worldwide.
• A mechanism is needed to continually reevaluate the model
after deployment. Assessing the ben-
efits is one of the main goals of this stage, as is defining a
process to retrain the model as needed.
In addition to the actions and findings listed, the team
demonstrated how analytics can drive new
insights in projects that are traditionally difficult to measure
and quantify. This project informed investment
decisions in university research projects by the CTO office and
identified hidden, high-value innovators.
In addition, the CTO office developed tools to help submitters
improve ideas using topic modeling as part
of new recommender systems to help idea submitters find
similar ideas and refine their proposals for new
intellectual property.
Table 2-3 outlines an analytics plan for the GINA case study example. Although this project shows only
three findings, there were many more. For instance, perhaps the
biggest overarching result from this project
is that it demonstrated, in a concrete way, that analytics can
drive new insights in projects that deal with
topics that may seem difficult to measure, such as innovation.
TABLE 2-3 Analytic Plan from the EMC GINA Project

Discovery (Business Problem Framed): Tracking global knowledge growth, ensuring effective knowledge transfer, and quickly converting it into corporate assets. Executing on these three elements should accelerate innovation.

Initial Hypotheses: An increase in geographic knowledge transfer improves the speed of idea delivery.

Data: Five years of innovation idea submissions and history; six months of textual notes from global innovation and research activities.

Model Planning (Analytic Technique): Social network analysis, social graphs, clustering, and regression analysis.

Result and Key Findings:
1. Identified hidden, high-value innovators and found ways to share their knowledge
2. Informed investment decisions in university research projects
3. Created tools to help submitters improve ideas with idea recommender systems
Innovation is an idea that every company wants to promote, but it can be difficult to measure innovation or identify ways to increase innovation. This project explored this issue from the standpoint of evaluating informal social networks to identify boundary spanners and influential people within innovation sub-networks. In essence, this project took a seemingly nebulous problem and applied advanced analytical methods to tease out answers using an objective, fact-based approach.
Another outcome from the project included the need to supplement analytics with a separate datastore for Business Intelligence reporting, accessible to search innovation/research initiatives. Aside from supporting decision making, this will provide a mechanism to be informed on discussions and research happening worldwide among team members in disparate locations. Finally, it highlighted the value that can be gleaned through data and subsequent analysis. Therefore, the need was identified to start formal marketing programs to convince people to submit (or inform) the global community on their innovation/research activities. The knowledge sharing was critical. Without it, GINA would not have been able to perform the analysis and identify the hidden innovators within the company.
Summary
This chapter described the Data Analytics Lifecycle, which is an approach to managing and executing analytical projects. This approach describes the process in six phases.
1. Discovery
2. Data preparation
3. Model planning
4. Model building
5. Communicate results
6. Operationalize
Through these steps, data science teams can identify problems and perform rigorous investigation of the datasets needed for in-depth analysis. As stated in the chapter, although much is written about the analytical methods, the bulk of the time spent on these kinds of projects is spent in preparation, namely in Phases 1 and 2 (discovery and data preparation). In addition, this chapter discussed the seven roles needed for a data science team. It is critical that organizations recognize that Data Science is a team effort, and a balance of skills is needed to be successful in tackling Big Data projects and other complex projects involving data analytics.
Exercises
1. In which phase would the team expect to invest most of the project time? Why? Where would the team expect to spend the least time?
2. What are the benefits of doing a pilot program before a full-scale rollout of a new analytical methodology? Discuss this in the context of the mini case study.
3. What kinds of tools would be used in the following phases, and for which kinds of use scenarios?
a. Phase 2: Data preparation
b. Phase 4: Model building
Bibliography
[1] T. H. Davenport and D. J. Patil, "Data Scientist: The Sexiest Job of the 21st Century," Harvard Business Review, October 2012.
[2] J. Manyika, M. Chiu, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A. H. Byers, "Big Data: The Next Frontier for Innovation, Competition, and Productivity," McKinsey Global Institute, 2011.
[3] "Scientific Method" [Online]. Available: http://en.wikipedia.org/wiki/Scientific_method.
[4] "CRISP-DM" [Online]. Available: http://en.wikipedia.org/wiki/Cross_Industry_Standard_Process_for_Data_Mining.
[5] T. H. Davenport, J. G. Harris, and R. Morison, Analytics at Work: Smarter Decisions, Better Results, 2010, Harvard Business Review Press.
[6] D. W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, 2010, Hoboken, NJ: John Wiley & Sons.
[7] J. Cohen, B. Dolan, M. Dunlap, J. M. Hellerstein, and C. Welton, MAD Skills: New Analysis Practices for Big Data, Watertown, MA, 2009.
[8] "List of APIs" [Online]. Available: http://www.programmableweb.com/apis.
[9] B. Shneiderman [Online]. Available: http://www.ifp.illinois.edu/nabhcs/abstracts/shneiderman.html.
[10] "Hadoop" [Online]. Available: http://hadoop.apache.org.
[11] "Alpine Miner" [Online]. Available: http://alpinenow.com.
[12] "OpenRefine" [Online]. Available: http://openrefine.org.
[13] "Data Wrangler" [Online]. Available: http://vis.stanford.edu/wrangler/.
[14] "CRAN" [Online]. Available: http://cran.us.r-project.org.
[15] "SQL" [Online]. Available: http://en.wikipedia.org/wiki/SQL.
[16] "SAS/ACCESS" [Online]. Available: http://www.sas.com/en_us/software/data-management/access.htm.
[17] "SAS Enterprise Miner" [Online]. Available: http://www.sas.com/en_us/software/analytics/enterprise-miner.html.
[18] "SPSS Modeler" [Online]. Available: http://www-03.ibm.com/software/products/en/category/business-analytics.
[19] "Matlab" [Online]. Available: http://www.mathworks.com/products/matlab/.
[20] "Statistica" [Online]. Available: https://www.statsoft.com.
[21] "Mathematica" [Online]. Available: http://www.wolfram.com/mathematica/.
[22] "Octave" [Online]. Available: https://www.gnu.org/software/octave/.
[23] "WEKA" [Online]. Available: http://www.cs.waikato.ac.nz/ml/weka/.
[24] "MADlib" [Online]. Available: http://madlib.net.
[25] K. L. Higbee, Your Memory: How It Works and How to Improve It, New York: Marlowe & Company, 1996.
[26] S. Todd, "Data Science and Big Data Curriculum" [Online]. Available: http://stevetodd.typepad.com/my_weblog/data-science-and-big-data-curriculum/.
[27] T. H. Davenport and D. J. Patil, "Data Scientist: The Sexiest Job of the 21st Century," Harvard Business Review, October 2012.
REVIEW OF BASIC DATA ANALYTIC METHODS USING R
The previous chapter presented the six phases of the Data
Analytics Lifecycle.
• Phase 1: Discovery
• Phase 2: Data Preparation
• Phase 3: Model Planning
• Phase 4: Model Building
• Phase 5: Communicate Results
• Phase 6: Operationalize
The first three phases involve various aspects of data exploration. In general, the success of a data
analysis project requires a deep understanding of the data. It
also requires a toolbox for mining and pre-
senting the data. These activities include the study of the data in
terms of basic statistical measures and
creation of graphs and plots to visualize and identify
relationships and patterns. Several free or commercial
tools are available for exploring, conditioning, modeling, and
presenting data. Because of its popularity and
versatility, the open-source programming language R is used to
illustrate many of the presented analytical
tasks and models in this book.
This chapter introduces the basic functionality of the R
programming language and environment. The
first section gives an overview of how to use R to acquire, parse, and filter the data as well as how to obtain
and filter the data as well as how to obtain
some basic descriptive statistics on a dataset. The second
section examines using R to perform exploratory
data analysis tasks using visualization. The final section
focuses on statistical inference, such as hypothesis
testing and analysis of variance in R.
3.1 Introduction to R
R is a programming language and software framework for
statistical analysis and graphics. Available for use
under the GNU General Public License [1], R software and
installation instructions can be obtained via the
Comprehensive R Archive Network [2]. This section
provides an overview of the basic functionality of R.
In later chapters, this foundation in R is utilized to demonstrate
many of the presented analytical techniques.
Before delving into specific operations and functions of R later
in this chapter, it is important to under-
stand the flow of a basic R script to address an analytical
problem. The following R code illustrates a typical
analytical situation in which a dataset is imported, the contents
of the dataset are examined, and some
model building tasks are executed. Although the reader may
not yet be familiar with the R syntax,
the code can be followed by reading the embedded comments,
denoted by #. In the following scenario,
the annual sales in U.S. dollars for 10,000 retail customers have
been provided in the form of a comma-
separated-value (CSV) file. The read.csv() function is used to
import the CSV file. This dataset is stored
to the R variable sales using the assignment operator <- .
# import a CSV file of the total annual sales for each customer
sales <- read.csv("c:/data/yearly_sales.csv")

# examine the imported dataset
head(sales)
summary(sales)

# plot num_of_orders vs. sales
plot(sales$num_of_orders, sales$sales_total,
     main="Number of Orders vs. Sales")

# perform a statistical analysis (fit a linear regression model)
results <- lm(sales$sales_total ~ sales$num_of_orders)
summary(results)

# perform some diagnostics on the fitted model
# plot histogram of the residuals
hist(results$residuals, breaks = 800)
In this example, the data file is imported using the read.csv()
function. Once the file has been
imported, it is useful to examine the contents to ensure that the
data was loaded properly as well as to become
familiar with the data. In the example, the head() function, by
default, displays the first six records of sales.
# examine the imported dataset
head(sales)
  cust_id sales_total num_of_orders gender
1  100001      800.64             3      F
2  100002      217.53             3      F
3  100003       74.58             2      M
4  100004      498.60             3      M
5  100005      723.11             4      F
6  100006       69.43             2      F
The summary() function provides some descriptive statistics, such as the mean and median, for each data column. Additionally, the minimum and maximum values as well as the 1st and 3rd quartiles are provided. Because the gender column contains two possible characters, an "F" (female) or "M" (male), the summary() function provides the count of each character's occurrence.
summary(sales)
    cust_id        sales_total      num_of_orders    gender
 Min.   :100001   Min.   :  30.02   Min.   : 1.000   F:5035
 1st Qu.:102501   1st Qu.:  80.29   1st Qu.: 2.000   M:4965
 Median :105001   Median : 151.65   Median : 2.000
 Mean   :105001   Mean   : 249.46   Mean   : 2.428
 3rd Qu.:107500   3rd Qu.: 295.50   3rd Qu.: 3.000
 Max.   :110000   Max.   :7606.09   Max.   :22.000
Plotting a dataset's contents can provide information about the
relationships between the vari-
ous columns. In this example, the plot() function generates a scatterplot of the number of orders (sales$num_of_orders) against the annual sales (sales$sales_total). The $ is used to refer-
ence a specific column in the dataset sales. The resulting plot is
shown in Figure 3-1.
# plot num_of_orders vs. sales
plot(sales$num_of_orders, sales$sales_total,
     main="Number of Orders vs. Sales")
FIGURE 3-1 Graphically examining the data (scatterplot of Number of Orders vs. Total Sales: sales$num_of_orders versus sales$sales_total)
Each point corresponds to the number of orders and the total
sales for each customer. The plot indicates
that the annual sales are proportional to the number of orders
placed. Although the observed relationship
between these two variables is not purely linear, the analyst decided to apply linear regression using the lm() function as a first step in the modeling process.
results <- lm(sales$sales_total ~ sales$num_of_orders)
results

Call:
lm(formula = sales$sales_total ~ sales$num_of_orders)

Coefficients:
        (Intercept)  sales$num_of_orders
             -154.1                166.2
The resulting intercept and slope values are -154.1 and 166.2,
respectively, for the fitted linear equation.
However, results stores considerably more information that can
be examined with the summary() function. Details on the contents of results are examined by applying the attributes() function.
Because regression analysis is presented in more detail later in
the book, the reader should not overly focus
on interpreting the following output.
summary(results)
Call:
lm(formula = sales$sales_total ~ sales$num_of_orders)

Residuals:
    Min      1Q  Median      3Q     Max
 -666.5  -125.5   -26.7    86.6  4103.4

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)
(Intercept)          -154.128      4.129  -37.33   <2e-16 ***
sales$num_of_orders   166.221      1.462  113.66   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 210.1 on 9998 degrees of freedom
Multiple R-squared:  0.5637,    Adjusted R-squared:  0.5636
The summary() function is an example of a generic function. A generic function is a group of functions sharing the same name but behaving differently depending on the number and the type of arguments they receive. Utilized previously, plot() is another example of a generic function; the plot is determined by the passed variables. Generic functions are used throughout this chapter and the book. In the final portion of the example, the following R code uses the generic function hist() to generate a histogram (Figure 3-2) of the residuals stored in results. The function call illustrates that optional parameter values can be passed. In this case, the number of breaks is specified to observe the large residuals.
# perform some diagnostics on the fitted model
# plot histogram of the residuals
hist(results$residuals, breaks = 800)
FIGURE 3-2 Evidence of large residuals (histogram of results$residuals)
This simple example illustrates a few of the basic model
planning and building tasks that may occur in Phases 3 and 4 of the Data Analytics Lifecycle. Throughout this chapter, it is useful to envision how the presented R functionality will be used in a more comprehensive
analysis.
3.1.1 R Graphical User Interfaces
R software uses a command-line interface (CLI) that is similar to the BASH shell in Linux or the interactive versions of scripting languages such as Python. UNIX and Linux users can enter command R at the terminal prompt to use the CLI. For Windows installations, R comes with RGui.exe, which provides a basic graphical user interface (GUI). However, to improve the ease of writing, executing, and debugging R code, several additional GUIs have been written for R. Popular GUIs include the R commander [3], Rattle [4], and RStudio [5]. This section presents a brief overview of RStudio, which was used to build the R examples in this book.
Figure 3-3 provides a screenshot of the previous R code
example executed in RStudio.
FIGURE 3-3 RStudio GUI (showing the Scripts, Workspace, Plots, and Console panes)
The four highlighted window panes follow.
• Scripts: Serves as an area to write and save R code
• Workspace: Lists the datasets and variables in the R environment
• Plots: Displays the plots generated by the R code and provides a straightforward mechanism to export the plots
• Console: Provides a history of the executed R code and the output
Additionally, the console pane can be used to obtain help
information on R. Figure 3-4 illustrates that
by entering ?lm at the console prompt, the help details of the lm() function are provided on the right. Alternatively, help(lm) could have been entered at the console prompt.
Functions such as edit() and fix() allow the user to update the contents of an R variable. Alternatively, such changes can be implemented with RStudio by selecting the appropriate variable from the workspace pane.
R allows one to save the workspace environment, including variables and loaded libraries, into an .Rdata file using the save.image() function. An existing .Rdata file can be loaded with the load() function. Tools such as RStudio prompt the user for whether the developer wants to save the workspace contents prior to exiting the GUI.
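A minimal sketch of this save-and-restore workflow follows; the file path is arbitrary.

# save all variables and objects in the current workspace to a file
save.image("c:/data/my_workspace.RData")

# in a later session, restore everything that was saved
load("c:/data/my_workspace.RData")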
The reader is encouraged to install R and a preferred GUI to try
out the R examples provided in the book
and utilize the help functionality to access more details about
the discussed topics.
FIGURE 3-4 Accessing help in RStudio (the help pane displays the lm() documentation, "Fitting Linear Models")
3.1.2 Data Import and Export
In the annual retail sales example, the dataset was imported into
R using the read.csv() function as
in the following code.
sales <- read.csv("c:/data/yearly_sales.csv")
R uses a forward slash (/) as the separator character in the directory and file paths. This convention makes script files somewhat more portable at the expense of some initial confusion on the part of Windows users, who may be accustomed to using a backslash (\) as a separator. To simplify the import of multiple files with long path names, the setwd() function can be used to set the working directory for the subsequent import and export operations, as shown in the following R code.
setwd("c:/data/")
sales <- read.csv("yearly_sales.csv")
Other import functions include read.table() and read.delim(), which are intended to import other common file types such as TXT. These functions can also be used to import the yearly_sales.csv file, as the following code illustrates.
sales_table <- read.table("yearly_sales.csv", header=TRUE, sep=",")
sales_delim <- read.delim("yearly_sales.csv", sep=",")
The main difference between these import functions is the default values. For example, the read.delim() function expects the column separator to be a tab ("\t"). In the event that the numerical data
in a data file uses a comma for the decimal, R also provides two additional functions, read.csv2() and read.delim2(), to import such data. Table 3-1 includes the expected defaults for headers, column separators, and decimal point notations.
TABLE 3-1 Import Function Defaults

Function        Headers   Separator   Decimal Point
read.table()    FALSE     ""          "."
read.csv()      TRUE      ","         "."
read.csv2()     TRUE      ";"         ","
read.delim()    TRUE      "\t"        "."
read.delim2()   TRUE      "\t"        ","
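For example, a file exported from a European locale, with semicolons separating the columns and commas marking the decimal point, could be imported as shown next; the file name is hypothetical.

# hypothetical European-format file: semicolon separators, comma decimals
sales_eu <- read.csv2("yearly_sales_eu.csv")

# the equivalent call spelled out with read.table()
sales_eu <- read.table("yearly_sales_eu.csv", header=TRUE, sep=";", dec=",")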
The analogous R functions such as write.table(), write.csv(), and write.csv2() enable exporting of R datasets to an external file. For example, the following R code adds an additional column to the sales dataset and exports the modified dataset to an external file.
# add a column for the average sales per order
sales$per_order <- sales$sales_total/sales$num_of_orders

# export data as tab-delimited without the row names
write.table(sales, "sales_modified.txt", sep="\t", row.names=FALSE)
Sometimes it is necessary to read data from a database management system (DBMS). R packages such as DBI [6] and RODBC [7] are available for this purpose. These packages provide database interfaces for communication between R and DBMSs such as MySQL, Oracle, SQL Server, PostgreSQL, and Pivotal Greenplum. The following R code demonstrates how to install the RODBC package with the install.packages() function. The library() function loads the package into the R workspace. Finally, a connector (conn) is initialized for connecting to a Pivotal Greenplum database training2 via open database connectivity (ODBC) with user user. The training2 database must be defined either in the /etc/ODBC.ini configuration file or using the Administrative Tools under the Windows Control Panel.
install.packages("RODBC")
library(RODBC)
conn <- odbcConnect("training2", uid="user", pwd="password")
The connector needs to be present to submit a SQL query to an ODBC database by using the sqlQuery() function from the RODBC package. The following R code retrieves specific columns from the housing table in which household income (hinc) is greater than $1,000,000.
housing_data <- sqlQuery(conn, "select serialno, state, persons, rooms
                                from housing
                                where hinc > 1000000")
head(housing_data)  # the returned rows list serialno, state, persons, and rooms
Although plots can be saved using the RStudio GUI, plots can
also be saved using R code by specifying the appropriate graphic devices. Using the jpeg() function, the following R code creates a new JPEG file, adds a histogram plot to the file, and then closes the file. Such techniques are useful when automating standard reports. Other functions, such as png(), bmp(), pdf(), and postscript(), are available in R to save plots in the desired format.
jpeg(file="c:/data/sales_hist.jpeg")  # create a new jpeg file
hist(sales$num_of_orders)             # export histogram to jpeg
dev.off()                             # shut off the graphic device
More information on data imports and exports can be found at http://cran.r-project.org/doc/manuals/r-release/R-data.html, such as how to import datasets from statistical software packages including Minitab, SAS, and SPSS.
3.1.3 Attribute and Data Types
In the earlier example, the sales variable contained a record for each customer. Several characteristics, such as total annual sales, number of orders, and gender, were provided for each customer. In general,
these characteristics or attributes provide the qualitative and
quantitative measures for each item or subject
of interest. Attributes can be categorized into four types:
nominal, ordinal, interval, and ratio (NOIR) [8].
Table 3-2 distinguishes these four attribute types and shows the operations they support. Nominal and ordinal attributes are considered categorical attributes, whereas
interval and ratio attributes are considered
numeric attributes.
TABLE 3-2 NOIR Attribute Types

Nominal (Categorical/Qualitative)
Definition: The values represent labels that distinguish one from another.
Examples: ZIP codes, nationality, street names, gender, employee ID numbers, TRUE or FALSE
Operations: =, ≠

Ordinal (Categorical/Qualitative)
Definition: Attributes imply a sequence.
Examples: Quality of diamonds, academic grades, magnitude of earthquakes
Operations: =, ≠, <, ≤, >, ≥

Interval (Numeric/Quantitative)
Definition: The difference between two values is meaningful.
Examples: Temperature in Celsius or Fahrenheit, calendar dates, latitudes
Operations: =, ≠, <, ≤, >, ≥, +, -

Ratio (Numeric/Quantitative)
Definition: Both the difference and the ratio of two values are meaningful.
Examples: Age, temperature in Kelvin, counts, length, weight
Operations: =, ≠, <, ≤, >, ≥, +, -, ×, ÷
Data of one attribute type may be converted to another. For example, the quality of diamonds {Fair,
example, the qual it yof diamonds {Fair,
Good, Very Good, Premium, Ideal} is considered ordinal but
can be converted to nominal {Good, Excellent}
with a defined mapping. Similarly, a ratio attribute like Age can
be converted into an ordinal attribute such
as {Infant, Adolescent, Adult, Senior}. Understanding the
attribute types in a given dataset is important
to ensure that the appropriate descriptive statistics and analytic
methods are applied and properly inter-
preted. For example, the mean and standard deviation of U.S.
postal ZIP codes are not very meaningful or
appropriate. Proper handling of categorical variables will be
addressed in subsequent chapters. Also, it is
useful to consider these attribute types during the following
discussion on R data types.
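As an illustration, the following sketch uses R's cut() function to convert a hypothetical vector of ages (a ratio attribute) into an ordered factor; the break points and labels are arbitrary choices, not a standard mapping.

# hypothetical ages converted from a ratio attribute to an ordinal attribute
age <- c(2, 15, 35, 70, 41, 8, 67)

age_group <- cut(age,
                 breaks = c(0, 3, 19, 64, Inf),
                 labels = c("Infant", "Adolescent", "Adult", "Senior"),
                 ordered_result = TRUE)
age_group   # returns an ordered factor with four levels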
Numeric, Character, and Logical Data Types
Like other programming languages, R supports the use of
numeric, character, and logical (Boolean) values.
Examples of such variables are given in the following R code.
i <- 1               # create a numeric variable
sport <- "football"  # create a character variable
flag <- TRUE         # create a logical variable
R provides several functions, such as class() and typeof(), to examine the characteristics of a given variable. The class() function represents the abstract class of an object. The typeof() function determines the way an object is stored in memory.
Although i appears to be an integer, i is internally
stored using double precision. To improve the readability of the
code segments in this section, the inline
R comments are used to explain the code or to provide the
returned values.
class(i)       # returns "numeric"
typeof(i)      # returns "double"
class(sport)   # returns "character"
typeof(sport)  # returns "character"
class(flag)    # returns "logical"
typeof(flag)   # returns "logical"
Additional R functions exist that can test the variables and
coerce a variable into a specific type. The
following R code illustrates how to test if i is an integer using
the is.integer() function and to coerce i into a new integer variable, j, using the as.integer() function.
Similar functions can be applied
for double, character, and logical types.
is.integer(i)       # returns FALSE
j <- as.integer(i)  # coerces contents of i into an integer
is.integer(j)       # returns TRUE
The application of the length() function reveals that the created variables each have a length of 1. One might have expected the returned length of sport to have been 8 for each of the characters in the string "football". However, these three variables are actually one-element vectors.
length(i)      # returns 1
length(flag)   # returns 1
length(sport)  # returns 1 (not 8 for "football")
Vectors
Vectors are a basic building block for data in R. As seen
previously, simple R variables are actually vectors.
A vector can only consist of values in the same class. The tests
for vectors can be conducted using the
is.vector() function.
is.vector(i)      # returns TRUE
is.vector(flag)   # returns TRUE
is.vector(sport)  # returns TRUE
R provides functionality that enables the easy creation and
manipulation of vectors. The following R
code illustrates how a vector can be created using the combine
function, c(), or the colon operator, :,
to build a vector from the sequence of integers from 1 to 5.
Furthermore, the code shows how the values
of an existing vector can be easily modified or accessed. The
code, related to the z vector, indicates how
logical comparisons can be built to extract certain elements of a
given vector.
u <- c("red", "yellow", "blue")  # create a vector "red" "yellow" "blue"
u                                # returns "red" "yellow" "blue"
u[1]                             # returns "red" (1st element in u)

v <- 1:5                         # create a vector 1 2 3 4 5
v                                # returns 1 2 3 4 5
sum(v)                           # returns 15

w <- v * 2                       # create a vector 2 4 6 8 10
w                                # returns 2 4 6 8 10
w[3]                             # returns 6 (the 3rd element of w)

z <- v + w                       # sums two vectors element by element
z                                # returns 3 6 9 12 15
z > 8                            # returns FALSE FALSE TRUE TRUE TRUE
z[z > 8]                         # returns 9 12 15
z[z > 8 | z < 5]                 # returns 3 9 12 15 ("|" denotes "or")
Sometimes it is necessary to initialize a vector of a specific
length and then populate the content of
the vector later. The vector() function, by default, creates a logical vector. A vector of a different type can be specified by using the mode parameter. The vector c, an integer vector of length 0, may be useful when the number of elements is not initially known and the new elements will later be added to the end of the vector as the values become available.
a <- vector(length=3)            # create a logical vector of length 3
a                                # returns FALSE FALSE FALSE

b <- vector(mode="numeric", 3)   # create a numeric vector of length 3
typeof(b)                        # returns "double"
b[2] <- 3.1                      # assign 3.1 to the 2nd element
b                                # returns 0.0 3.1 0.0

c <- vector(mode="integer", 0)   # create an integer vector of length 0
c                                # returns integer(0)
length(c)                        # returns 0
Although vectors may appear to be analogous to arrays of one
dimension, they are technically dimen-
sionless, as seen in the following R code. The concept of arrays
and matrices is addressed in the following
discussion.
length(b)  # returns 3
dim(b)     # returns NULL (an undefined value)
Arrays and Matrices
The array() function can be used to restructure a vector as an
array. For example, the following R code
builds a three-dimensional array to hold the quarterly sales for
three regions over a two-year period and
then assigns the sales amount of $158,000 to the second region
for the first quarter of the first year.
# the dimensions are 3 regions, 4 quarters, and 2 years
quarterly_sales <- array(0, dim=c(3,4,2))
quarterly_sales[2,1,1] <- 158000
quarterly_sales

, , 1

       [,1] [,2] [,3] [,4]
[1,]      0    0    0    0
[2,] 158000    0    0    0
[3,]      0    0    0    0

, , 2

     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0
A two-dimensional array is known as a matrix. The following
code initializes a matrix to hold the quar-
terly sales for the three regions. The parameters nrow and ncol define the number of rows and columns, respectively, for the sales_matrix.
sales_matrix <- matrix(0, nrow = 3, ncol = 4)
sales_matrix

     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0
R provides the standard matrix operations such as addition,
subtraction, and multiplication, as well
as the transpose function t() and the inverse matrix function matrix.inverse() included in the matrixcalc package. The following R code builds a 3 x 3
matrix, M, and multiplies it by its inverse to
obtain the identity matrix.
library(matrixcalc)
M <- matrix(c(1,3,3,5,0,4,3,3,3), nrow = 3, ncol = 3)  # build a 3x3 matrix
M %*% matrix.inverse(M)                                # multiply M by inverse(M)

     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    0    1    0
[3,]    0    0    1
Data Frames
Similar to the concept of matrices, data frames provide a structure for storing and accessing several variables of possibly different data types. In fact, as the is.data.frame() function indicates, a data frame was created by the read.csv() function at the beginning of the chapter.
# import a CSV file of the total annual sales for each customer
sales <- read.csv("c:/data/yearly_sales.csv")
is.data.frame(sales)  # returns TRUE
As seen earlier, the variables stored in the data frame can be
easily accessed using the $ notation. The
following R code illustrates that in this example, each variable
is a vector with the exception of gender, which was, by a read.csv() default, imported as a factor. Discussed in detail later in this section, a factor denotes a categorical variable, typically with a few finite levels such as "F" and "M" in the case of gender.
length(sales$num_of_orders)     # returns 10000 (number of customers)
is.vector(sales$cust_id)        # returns TRUE
is.vector(sales$sales_total)    # returns TRUE
is.vector(sales$num_of_orders)  # returns TRUE
is.vector(sales$gender)         # returns FALSE
is.factor(sales$gender)         # returns TRUE
Because of their flexibility to handle many data types, data frames are the preferred input format for many of the modeling functions available in R. The following use of the str() function provides the structure of the sales data frame. This function identifies the integer and numeric (double) data types, the factor variables and levels, as well as the first few values
for each variable.
str(sales)  # display structure of the data frame object
'data.frame': 10000 obs. of 4 variables:
 $ cust_id      : int 100001 100002 100003 100004 100005 100006 ...
 $ sales_total  : num 800.6 217.5 74.6 498.6 723.1 ...
 $ num_of_orders: int 3 3 2 3 4 2 2 2 2 2 ...
 $ gender       : Factor w/ 2 levels "F","M": 1 1 2 2 1 1 2 2 1 2 ...
In the simplest sense, data frames are lists of variables of the
same length. A subset of the data frame
can be retrieved through subsetting operators. R's subsetting operators are powerful in that they allow one to express complex operations in a succinct fashion and
easily retrieve a subset of the dataset.
# extract the fourth column of the sales data frame
sales[,4]
# extract the gender column of the sales data frame
sales$gender
# retrieve the first two rows of the data frame
sales[1:2,]
# retrieve the first, third, and fourth columns
sales[,c(1,3,4)]
# retrieve both the cust_id and the sales_total columns
sales[,c("cust_id", "sales_total")]
# retrieve all the records whose gender is female
sales[sales$gender=="F",]
The following R code shows that the class of the sales variable
is a data frame. However, the type of
the sales variable is a list. A list is a collection of objects that
can be of various types, including other lists.
class(sales)
"data. frame"
typeof(sales)
"list"
Lists
Lists can contain any type of objects, including other lists.
Using the vector v and the matrix M created in
earlier examples, the following R code creates assortment, a list
of different object types.
# build an assorted list of a string, a numeric, a list, a vector,
# and a matrix
housing <- list("own", "rent")
assortment <- list("football", 7.5, housing, v, M)
assortment

[[1]]
[1] "football"

[[2]]
[1] 7.5

[[3]]
[[3]][[1]]
[1] "own"

[[3]][[2]]
[1] "rent"

[[4]]
[1] 1 2 3 4 5

[[5]]
     [,1] [,2] [,3]
[1,]    1    5    3
[2,]    3    0    3
[3,]    3    4    3
In displaying the contents of assortment, the use of the double brackets, [[]], is of particular importance. As the following R code illustrates, the use of the single set of brackets only accesses an item in the list, not its content.

# examine the fifth object, M, in the list
class(assortment[5])       # returns "list"
length(assortment[5])      # returns 1
class(assortment[[5]])     # returns "matrix"
length(assortment[[5]])    # returns 9 (for the 3x3 matrix)
As presented earlier in the data frame discussion, the str() function offers details about the structure of a list.

str(assortment)
List of 5
 $ : chr "football"
 $ : num 7.5
 $ :List of 2
  ..$ : chr "own"
  ..$ : chr "rent"
 $ : int [1:5] 1 2 3 4 5
 $ : num [1:3, 1:3] 1 3 3 5 0 4 3 3 3
Factors
Factors were briefly introduced during the discussion of the gender variable in the data frame sales. In this case, gender could assume one of two levels: F or M. Factors can be ordered or not ordered. In the case of gender, the levels are not ordered.

class(sales$gender)          # returns "factor"
is.ordered(sales$gender)     # returns FALSE
Included with the ggplot2 package, the diamonds data frame
contains three ordered factors.
Examining the cut factor, there are five levels in order of
improving cut: Fair, Good, Very Good, Premium,
and Ideal. Thus, sales$gender contains nominal data, and
diamonds$cut contains ordinal data.
head(sales$gender)    # display first six values and the levels
[1] F F M M F F
Levels: F M

library(ggplot2)
data(diamonds)        # load the data frame into the R workspace
str(diamonds)
'data.frame':  53940 obs. of 10 variables:
 $ carat  : num  0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 ...
 $ cut    : Ord.factor w/ 5 levels "Fair"<"Good"<..: 5 4 2 4 2 3 ...
 $ color  : Ord.factor w/ 7 levels "D"<"E"<"F"<"G"<..: 2 2 2 6 7 7 ...
 $ clarity: Ord.factor w/ 8 levels "I1"<"SI2"<"SI1"<..: 2 3 5 4 2 ...
 $ depth  : num  61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 ...
 $ table  : num  55 61 65 58 58 57 57 55 61 61 ...
 $ price  : int  326 326 327 334 335 336 336 337 337 338 ...
 $ x      : num  3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
 $ y      : num  3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
 $ z      : num  2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...
head(diamonds$cut)    # display first six values and the levels
[1] Ideal     Premium   Good      Premium   Good      Very Good
Levels: Fair < Good < Very Good < Premium < Ideal
Suppose it is decided to categorize sales$sales_total into three groups (small, medium, and big) according to the amount of the sales with the following code. These groupings are the basis for the new ordinal factor, spender, with levels {small, medium, big}.
# build an empty character vector of the same length as sales
sales_group <- vector(mode="character",
                      length=length(sales$sales_total))

# group the customers according to the sales amount
sales_group[sales$sales_total<100] <- "small"
sales_group[sales$sales_total>=100 & sales$sales_total<500] <- "medium"
sales_group[sales$sales_total>=500] <- "big"

# create and add the ordered factor to the sales data frame
spender <- factor(sales_group, levels=c("small", "medium", "big"),
                  ordered = TRUE)
sales <- cbind(sales, spender)

str(sales$spender)
 Ord.factor w/ 3 levels "small"<"medium"<..: 3 2 1 2 3 1 1 1 2 1 ...

head(sales$spender)
[1] big    medium small  medium big    small
Levels: small < medium < big
The cbind() function is used to combine variables column-wise. The rbind() function is used to combine datasets row-wise, as illustrated in the short sketch that follows. The use of factors is important in several R statistical modeling functions, such as analysis of variance, aov(), presented later in this chapter, and the use of contingency tables, discussed next.
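To make the column-wise versus row-wise distinction concrete, the following minimal sketch uses two small hypothetical data frames (df1 and df2 are illustrative names, not part of the sales example).

# hypothetical data frames used only to illustrate cbind() and rbind()
df1 <- data.frame(id = 1:3, value = c(10, 20, 30))
df2 <- data.frame(id = 4:5, value = c(40, 50))

rbind(df1, df2)                            # stacks the rows: 5 rows, 2 columns
cbind(df1, flag = c(TRUE, FALSE, TRUE))    # appends a column to df1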
Contingency Tables
In R, table refers to a class of objects used to store the observed
counts across the factors for a given dataset.
Such a table is commonly referred to as a contingency table and
is the basis for performing a statistical
test on the independence of the factors used to build the table.
The following R code builds a contingency
table based on the sales$gender and sales$spender factors.
# build a contingency table based on the gender and spender factors
sales_table <- table(sales$gender, sales$spender)
sales_table
    small medium big
  F  1726   2746 563
  M  1656   2723 586

class(sales_table)     # returns "table"
typeof(sales_table)    # returns "integer"
dim(sales_table)       # returns 2 3

# performs a chi-squared test
summary(sales_table)
Number of cases in table: 10000
Number of factors: 2
Test for independence of all factors:
        Chisq = 1.516, df = 2, p-value = 0.4686
Based on the observed counts in the table, the summary() function performs a chi-squared test on the independence of the two factors. Because the reported p-value is greater than 0.05, the assumed independence of the two factors is not rejected. Hypothesis testing and p-values are covered in more detail later in this chapter. Next, applying descriptive statistics in R is examined.
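For readers who prefer an explicit test function, the same chi-squared test can also be run with base R's chisq.test(); the sketch below assumes the sales_table object created above is still in the workspace.

# illustrative alternative: chisq.test() on the same contingency table reports
# the same chi-squared statistic, degrees of freedom, and p-value as summary()
chisq.test(sales_table)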
3.1.4 Descriptive Statistics
It has already been shown that the summary() function provides several descriptive statistics, such as the mean and median, about a variable such as the sales data frame. The results now include the counts for the three levels of the spender variable based on the earlier examples involving factors.
summary(sales)
    cust_id        sales_total      num_of_orders    gender   spender    
 Min.   :100001   Min.   :  30.02   Min.   : 1.000   F:5035   small :3382
 1st Qu.:102501   1st Qu.:  80.29   1st Qu.: 2.000   M:4965   medium:5469
 Median :105001   Median : 151.65   Median : 2.000            big   :1149
 Mean   :105001   Mean   : 249.46   Mean   : 2.428                       
 3rd Qu.:107500   3rd Qu.: 295.50   3rd Qu.: 3.000                       
 Max.   :110000   Max.   :7606.09   Max.   :22.000                       

The following code provides some common R functions that include descriptive statistics. In parentheses, the comments describe the functions.
# to simplify the function calls, assign
x <- sales$sales_total
y <- sales$num_of_orders

cor(x,y)       # returns 0.7508015 (correlation)
cov(x,y)       # returns 345.2111 (covariance)
IQR(x)         # returns 215.21 (interquartile range)
mean(x)        # returns 249.4557 (mean)
median(x)      # returns 151.65 (median)
range(x)       # returns 30.02 7606.09 (min max)
sd(x)          # returns 319.0508 (std. dev.)
var(x)         # returns 101793.4 (variance)
The IQR() function provides the difference between the third and the first quartiles, as the short check below illustrates. The other functions are fairly self-explanatory by their names. The reader is encouraged to review the available help files for acceptable inputs and possible options.
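As an illustrative check (not from the original text), the interquartile range can be reproduced from the 25th and 75th percentiles returned by quantile().

# IQR(x) equals the difference between the third and first quartiles
quantile(sales$sales_total, 0.75) - quantile(sales$sales_total, 0.25)
IQR(sales$sales_total)    # same value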
The function apply() is useful when the same function is to be applied to several variables in a data frame. For example, the following R code calculates the standard deviation for the first three variables in sales. In the code, setting MARGIN=2 specifies that the sd() function is applied over the columns. Other functions, such as lapply() and sapply(), apply a function to a list or vector; a brief sketch follows the apply() example, and readers can refer to the R help files to learn more about how to use these functions.

apply(sales[,c(1:3)], MARGIN=2, FUN=sd)
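The following minimal sketch (illustrative, not from the original text) applies the same sd() function with lapply() and sapply(); lapply() returns a list, while sapply() simplifies the result to a named vector.

lapply(sales[, 1:3], sd)    # returns a list: one standard deviation per column
sapply(sales[, 1:3], sd)    # returns the same values as a named numeric vector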
Additional descriptive statistics can be applied with user-defined functions. The following R code defines a function, my_range(), to compute the difference between the maximum and minimum values returned by the range() function. In general, user-defined functions are useful for any task or operation that needs to be frequently repeated. More information on user-defined functions is available by entering help("function") in the console.

# build a function to provide the difference between
# the maximum and the minimum values
my_range <- function(v) {range(v)[2] - range(v)[1]}
my_range(x)
3.2 Exploratory Data Analysis
So far, this chapter has addressed importing and exporting data in R, basic data types and operations, and generating descriptive statistics. Functions such as summary() can help analysts easily get an idea of the magnitude and range of the data, but other aspects such as linear relationships and distributions are more difficult to see from descriptive statistics. For example, the following code shows a summary view of a data frame data with two columns x and y. The output shows the range of x and y, but it's not clear what the relationship may be between these two variables.
summary(data)
       x                   y          
 Min.   :-1.90481    Min.   :  ...    
 1st Qu.:-0.66321    1st Qu.:  ...    
 Median : 0.09167    Median :  ...    
 Mean   : 0.02522    Mean   :  ...    
 3rd Qu.: 0.65414    3rd Qu.:  ...    
 Max.   :  ...       Max.   :  ...    
A useful way to detect patterns and anomalies in the data is through exploratory data analysis with visualization. Visualization gives a succinct, holistic view of the data that may be difficult to grasp from the numbers and summaries alone. Variables x and y of the data frame data can instead be visualized in a scatterplot (Figure 3-5), which easily depicts the relationship between two variables. An important facet of the initial data exploration, visualization assesses data cleanliness and suggests potentially important relationships in the data prior to the model planning and building phases.
FIGURE 3-5 A scatterplot can easily show if x and y share a relation
The code to generate data as well as Figure 3-5 is shown next.
x <- rnorm(50)
y <- x + rnorm(50, mean=0, sd=0.5)
data <- as.data.frame(cbind(x, y))

summary(data)

library(ggplot2)
ggplot(data, aes(x=x, y=y)) +
  geom_point(size=2) +
  ggtitle("Scatterplot of X and Y") +
  theme(axis.text=element_text(size=12),
        axis.title = element_text(size=14),
        plot.title = element_text(size=20, face="bold"))
Exploratory data analysis [9] is a data analysis approach to reveal the important characteristics of a dataset, mainly through visualization. This section discusses how to use some basic visualization techniques and the plotting feature in R to perform exploratory data analysis.

3.2.1 Visualization Before Analysis
To illustrate the importance of visualizing data, consider Anscombe's quartet. Anscombe's quartet consists of four datasets, as shown in Figure 3-6. It was constructed by statistician Francis Anscombe [10] in 1973 to demonstrate the importance of graphs in statistical analyses.
       #1             #2             #3             #4
   x      y       x      y       x      y       x      y
   4    4.26      4    3.10      4    5.39      8    5.25
   5    5.68      5    4.74      5    5.73      8    5.56
   6    7.24      6    6.13      6    6.08      8    5.76
   7    4.82      7    7.26      7    6.42      8    6.58
   8    6.95      8    8.14      8    6.77      8    6.89
   9    8.81      9    8.77      9    7.11      8    7.04
  10    8.04     10    9.14     10    7.46      8    7.71
  11    8.33     11    9.26     11    7.81      8    7.91
  12   10.84     12    9.13     12    8.15      8    8.47
  13    7.58     13    8.74     13   12.74      8    8.84
  14    9.96     14    8.10     14    8.84     19   12.50

FIGURE 3-6 Anscombe's quartet
The four datasets in Anscombe's quartet have nearly identical statistical properties, as shown in Table 3-3.

TABLE 3-3 Statistical Properties of Anscombe's Quartet

Statistical Property            Value
Mean of x                       9
Variance of x                   11
Mean of y                       7.50 (to 2 decimal places)
Variance of y                   4.12 or 4.13 (to 2 decimal places)
Correlation between x and y     0.816
Linear regression line          y = 3.00 + 0.50x (to 2 decimal places)
Based on the nearly identical statistical properties across each dataset, one might conclude that these four datasets are quite similar. However, the scatterplots in Figure 3-7 tell a different story. Each dataset is plotted as a scatterplot, and the fitted lines are the result of applying linear regression models. The estimated regression line fits Dataset 1 reasonably well. Dataset 2 is definitely nonlinear. Dataset 3 exhibits a linear trend, with one apparent outlier at x = 13. For Dataset 4, the regression line fits the dataset quite well. However, with only points at two x values, it is not possible to determine that the linearity assumption is proper.
FIGURE 3-7 Anscombe's quartet visualized as scatterplots
The R code for generating Figure 3-7 is shown next. It requires the R package ggplot2 [11], which can be installed simply by running the command install.packages("ggplot2"). The anscombe dataset for the plot is included in the standard R distribution. Enter data() for a list of datasets included in the R base distribution. Enter data(DatasetName) to make a dataset available in the current workspace.

In the code that follows, variable levels is created using the gl() function, which generates factors of four levels (1, 2, 3, and 4), each repeating 11 times. Variable mydata is created using the with(data, expression) function, which evaluates an expression in an environment constructed from data. In this example, the data is the anscombe dataset, which includes eight attributes: x1, x2, x3, x4, y1, y2, y3, and y4. The expression part in the code creates a data frame from the anscombe dataset, and it only includes three attributes: x, y, and the group each data point belongs to (mygroup).
install.packages(''ggplot2") # not required if package has been
installed
data (anscombe)
anscombe
x1 x2 x3 x4
1 10 10 10 8
2 8 8 8 8
13 13 13
4 9 9
5 11 11 11
6 14 14 14 8
7 6 6 6 8
8 4 4 4 19
9 12 12 12 8
10 7 7 7 8
11 5 5 5 8
nrow(anscombe)
[1] 11
It load the anscombe dataset into the current 'iOrkspace
y1 y2 y3 y4
8. O·l 9.14 7.-16 6.58
6.95 8.14 6.77 5.76
7.58 8.74 12.74 7.71
8.81 8.77 7.11 8.84
8.33 9.26 7.81 8.·±7
9. 9G 8.10 8.34 7.04
7.24 6.13 6. •J8 5.25
·l. 26 3.10 5. 3 9 12.50
10. 8•1 9.13 8.15 5.56
4.82 7.26 6.-12 7.91
5.68 4.74 5.73 6.89
It number of rows
# generates levels to indicate which group each data point belongs to
levels <- gl(4, nrow(anscombe))
levels
 [1] 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3
[34] 4 4 4 4 4 4 4 4 4 4 4
Levels: 1 2 3 4
# Group anscombe into a data frame
mydata <- with(anscombe, data.frame(x=c(x1,x2,x3,x4),
                                    y=c(y1,y2,y3,y4),
                                    mygroup=levels))
mydata
    x    y mygroup
1  10 8.04       1
2   8 6.95       1
3  13 7.58       1
4   9 8.81       1
...
42  8 5.56       4
43  8 7.91       4
44  8 6.89       4
# Make scatterplots using the ggplot2 package
library(ggplot2)
theme_set(theme_bw())    # set plot color theme

# create the four plots of Figure 3-7
ggplot(mydata, aes(x, y)) +
  geom_point(size=4) +
  geom_smooth(method="lm", fill=NA, fullrange=TRUE) +
  facet_wrap(~mygroup)
3.2.2 Dirty Data
This section addresses how dirty data can be detected in the data exploration phase with visualizations. In general, analysts should look for anomalies, verify the data with domain knowledge, and decide the most appropriate approach to clean the data.
Consider a scenario in which a bank is conducting data analyses
of its account holders to gauge customer
retention. Figure 3-8 shows the age distribution of the account
holders.
FIGURE 3-8 Age distribution of bank account holders
If the age data is in a vector called age, the graph can be created with the following R script:

hist(age, breaks=100, main="Age Distribution of Account Holders",
     xlab="Age", ylab="Frequency", col="gray")
The figure shows that the median age of the account holders is around 40. A few accounts with account holder age less than 10 are unusual but plausible. These could be custodial accounts or college savings accounts set up by the parents of young children. These accounts should be retained for future analyses.
However, the left side of the graph shows a huge spike of customers who are zero years old or have negative ages. This is likely to be evidence of missing data. One possible explanation is that the null age values could have been replaced by 0 or negative values during the data input. Such an occurrence may be caused by entering age in a text box that only allows numbers and does not accept empty values. Or it might be caused by transferring data among several systems that have different definitions for null values (such as NULL, NA, 0, -1, or -2). Therefore, data cleansing needs to be performed over the accounts with abnormal age values. Analysts should take a closer look at the records to decide if the missing data should be eliminated or if an appropriate age value can be determined using other available information for each of the accounts.
In R, the is.na() function provides tests for missing values. The following example creates a vector x where the fourth value is not available (NA). The is.na() function returns TRUE at each NA value and FALSE otherwise.

x <- c(1, 2, 3, NA, 4)
is.na(x)
[1] FALSE FALSE FALSE  TRUE FALSE
Some arithmetic functions, such as mean(), applied to data containing missing values can yield an NA result. To prevent this, set the na.rm parameter to TRUE to remove the missing value during the function's execution.

mean(x)
[1] NA
mean(x, na.rm=TRUE)
[1] 2.5
The na.exclude() function returns the object with incomplete cases removed.

DF <- data.frame(x = c(1, 2, 3), y = c(10, 20, NA))
DF
  x  y
1 1 10
2 2 20
3 3 NA

DF1 <- na.exclude(DF)
DF1
  x  y
1 1 10
2 2 20
Account holders older than 100 may be due to bad data caused by typos. Another possibility is that these accounts may have been passed down to the heirs of the original account holders without being updated. In this case, one needs to further examine the data and conduct data cleansing if necessary. The dirty data could be simply removed or filtered out with an age threshold for future analyses, as sketched below. If removing records is not an option, the analysts can look for patterns within the data and develop a set of heuristics to attack the problem of dirty data. For example, wrong age values could be replaced with an approximation based on the nearest neighbor: the record that is the most similar to the record in question based on analyzing the differences in all the other variables besides age.
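A minimal sketch of the simpler, threshold-based option follows; it uses the age vector introduced earlier, and the threshold of 100 is an assumed cutoff chosen only for illustration.

# keep only ages that pass a basic plausibility check
valid <- !is.na(age) & age > 0 & age <= 100
age_clean <- age[valid]    # drops missing, zero, negative, or >100 ages
# if age is a column of a larger data frame, the same logical vector
# can be used to subset its rows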
Figure 3-9 presents another example of dirty data. The distribution shown here corresponds to the age of mortgages in a bank's home loan portfolio. The mortgage age is calculated by subtracting the origination date of the loan from the current date. The vertical axis corresponds to the number of mortgages at each mortgage age.
FIGURE 3-9 Distribution of mortgage in years since origination from a bank's home loan portfolio
If the data is in a vector called mortgage, Figure 3-9 can be produced by the following R script.

hist(mortgage, breaks=10, xlab="Mortgage Age", col="gray",
     main="Portfolio Distribution, Years Since Origination")
Figure 3-9 shows that the loans are no more than 10 years old, and these 10-year-old loans have a disproportionate frequency compared to the rest of the population. One possible explanation is that the 10-year-old loans do not only include loans originated 10 years ago, but also those originated earlier than that. In other words, the 10 on the x-axis actually means ≥ 10. This sometimes happens when data is ported from one system to another or because the data provider decided, for some reason, not to distinguish loans that are more than 10 years old. Analysts need to study the data further and decide the most appropriate way to perform data cleansing.
Data analysts should perform sanity checks against domain knowledge and decide if the dirty data needs to be eliminated. Consider the task to find out the probability of mortgage loan default. If the past observations suggest that most defaults occur before about the 4th year and 10-year-old mortgages rarely default, it may be safe to eliminate the dirty data and assume that the defaulted loans are less than 10 years old. For other analyses, it may become necessary to track down the source and find out the true origination dates.
Dirty data can occur due to acts of omission. In the sales data used at the beginning of this chapter, it was seen that the minimum number of orders was 1 and the minimum annual sales amount was $30.02. Thus, there is a strong possibility that the provided dataset did not include the sales data on all customers, just the customers who purchased something during the past year.
3.2.3 Visualizing a Single Variable
Using visual representations of data is a hallmark of exploratory
data analyses: letting the data speak to
its audience rather than imposing an interpretation on the data a
priori. Sections 3.2.3 and 3.2.4 examine
ways of displaying data to help explain the underlying
distributions of a single variable or the relationships
of two or more variables.
R has many functions available to examine a single variable. Some of these functions are listed in Table 3-4.
TABLE 3-4 Example Functions for Visualizing a Single Variable

Function               Purpose
plot(data)             Scatterplot where x is the index and y is the value; suitable for low-volume data
barplot(data)          Barplot with vertical or horizontal bars
dotchart(data)         Cleveland dot plot [12]
hist(data)             Histogram
plot(density(data))    Density plot (a continuous histogram)
stem(data)             Stem-and-leaf plot
rug(data)              Add a rug representation (1-d plot) of the data to an existing plot

Dotchart and Barplot
Dotchart and barplot portray continuous values with labels from a discrete variable. A dotchart can be created in R with the function dotchart(x, label=...), where x is a numeric vector and label is a vector of categorical labels for x. A barplot can be created with the barplot(height) function, where height represents a vector or matrix. Figure 3-10 shows (a) a dotchart and (b) a barplot based on the mtcars dataset, which includes the fuel consumption and 10 aspects of automobile design and performance of 32 automobiles. This dataset comes with the standard R distribution.
The plots in Figure 3-10 can be produced with the following R code.

data(mtcars)
dotchart(mtcars$mpg, labels=row.names(mtcars), cex=.7,
         main="Miles Per Gallon (MPG) of Car Models",
         xlab="MPG")
barplot(table(mtcars$cyl), main="Distribution of Car Cylinder Counts",
        xlab="Number of Cylinders")
Histogram and Density Plot
Figure 3-11(a) includes a histogram of household income. The histogram shows a clear concentration of low household incomes on the left and the long tail of the higher incomes on the right.
FIGURE 3-10 (a) Dotchart on the miles per gallon of cars and (b) Barplot on the distribution of car cylinder counts
FIGURE 3-11 (a) Histogram and (b) Density plot of household income
Figure 3-11(b) shows a density plot of the logarithm of household income values, which emphasizes the distribution. The income distribution is concentrated in the center portion of the graph. The code to generate the two plots in Figure 3-11 is provided next. The rug() function creates a one-dimensional density plot on the bottom of the graph to emphasize the distribution of the observations.
# randomly generate 4000 observations from the log normal distribution
income <- rlnorm(4000, meanlog = 4, sdlog = 0.7)
summary(income)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  4.301  33.720  54.970  70.320  88.800 659.800
income <- 1000*income
summary(income)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   4301   33720   54970   70320   88800  659800

# plot the histogram
hist(income, breaks=500, xlab="Income", main="Histogram of Income")

# density plot
plot(density(log10(income), adjust=0.5),
     main="Distribution of Income (log10 scale)")

# add rug to the density plot
rug(log10(income))
In the data preparation phase of the Data Analytics Lifecycle, the data range and distribution can be obtained. If the data is skewed, viewing the logarithm of the data (if it's all positive) can help detect structures that might otherwise be overlooked in a graph with a regular, nonlogarithmic scale.

When preparing the data, one should look for signs of dirty data, as explained in the previous section. Examining if the data is unimodal or multimodal will give an idea of how many distinct populations with different behavior patterns might be mixed into the overall population. Many modeling techniques assume that the data follows a normal distribution. Therefore, it is important to know if the available dataset can match that assumption before applying any of those modeling techniques.
Consider a density plot of diamond prices (in USD). Figure 3-12(a) contains two density plots for premium and ideal cuts of diamonds. The group of premium cuts is shown in red, and the group of ideal cuts is shown in blue. The range of diamond prices is wide, in this case ranging from around $300 to almost $20,000. Extreme values are typical of monetary data such as income, customer value, tax liabilities, and bank account sizes.

Figure 3-12(b) shows more detail of the diamond prices than Figure 3-12(a) by taking the logarithm. The two humps in the premium cut represent two distinct groups of diamond prices: One group centers around log10(price) = 2.9 (where the price is about $794), and the other centers around log10(price) = 3.7 (where the price is about $5,012). The ideal cut contains three humps, centering around 2.9, 3.3, and 3.7 respectively.
The R script to generate the plots in Figure 3-12 is shown next.
The diamonds dataset comes with
the ggplot2 package.
library("ggplot2")
data(diamonds) # load the diamonds dataset from ggplot2
# Only keep the premium and ideal cuts of diamonds
niceDiamonds <- diamonds[diamonds$cut=="Premium" |
                         diamonds$cut=="Ideal",]

summary(niceDiamonds$cut)
     Fair      Good Very Good   Premium     Ideal
        0         0         0     13791     21551

# plot density plot of diamond prices
ggplot(niceDiamonds, aes(x=price, fill=cut)) +
  geom_density(alpha = .3, color=NA)

# plot density plot of the log10 of diamond prices
ggplot(niceDiamonds, aes(x=log10(price), fill=cut)) +
  geom_density(alpha = .3, color=NA)
As an alternative to ggplot2, the lattice package provides a function called densityplot() for making simple density plots, as in the sketch below.
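A minimal lattice sketch follows; it assumes the lattice package is installed and that the niceDiamonds data frame from the previous example is still available.

library(lattice)
# one density curve per cut, without plotting the individual points
densityplot(~log10(price), data = niceDiamonds, groups = cut,
            plot.points = FALSE, auto.key = TRUE)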
FIGURE 3-12 Density plots of (a) diamond prices and (b) the logarithm of diamond prices
3.2.4 Examining Multiple Variables
A scatterplot (shown previously in Figure 3-1 and Figure 3-5) is a simple and widely used visualization for finding the relationship among multiple variables. A scatterplot can represent data with up to five variables using x-axis, y-axis, size, color, and shape. But usually only two to four variables are portrayed in a scatterplot to minimize confusion. When examining a scatterplot, one needs to pay close attention
to the possible relationship between the variables. If the functional relationship between the variables is somewhat pronounced, the data may roughly lie along a straight line, a parabola, or an exponential curve. If variable y is related exponentially to x, then the plot of x versus log(y) is approximately linear. If the plot looks more like a cluster without a pattern, the corresponding variables may have a weak relationship.

The scatterplot in Figure 3-13 portrays the relationship of two variables: x and y. The red line shown on the graph is the fitted line from the linear regression. Linear regression will be revisited in Chapter 6, "Advanced Analytical Theory and Methods: Regression." Figure 3-13 shows that the regression line does not fit the data well. This is a case in which linear regression cannot model the relationship between the variables. Alternative methods such as the loess() function can be used to fit a nonlinear line to the data. The blue curve shown on the graph represents the LOESS curve, which fits the data better than linear regression.
FIGURE 3-13 Examining two variables with regression
The R code to produce Figure 3-13 is as follows. The runif(75, 0, 10) generates 75 numbers between 0 and 10 with random deviates, and the numbers conform to the uniform distribution. The rnorm(75, 0, 20) generates 75 numbers that conform to the normal distribution, with the mean equal to 0 and the standard deviation equal to 20. The points() function is a generic function that draws a sequence of points at the specified coordinates. Parameter type="l" tells the function to draw a solid line. The col parameter sets the color of the line, where 2 represents the red color and 4 represents the blue color.
# 75 numbers between 0 and 10 of uniform distribution
x <- runif(75, 0, 10)
x <- sort(x)
y <- 200 + x^3 - 10 * x^2 + x + rnorm(75, 0, 20)

lr <- lm(y ~ x)           # linear regression
poly <- loess(y ~ x)      # LOESS
fit <- predict(poly)      # fit a nonlinear line

plot(x, y)

# draw the fitted line for the linear regression
points(x, lr$coefficients[1] + lr$coefficients[2] * x,
       type = "l", col = 2)

# draw the fitted line from LOESS
points(x, fit, type = "l", col = 4)
Dotchart and Barplot
Dotchart and barplot from the previous section can visualize multiple variables. Both of them use color as an additional dimension for visualizing the data.

For the same mtcars dataset, Figure 3-14 shows a dotchart that groups vehicle cylinders at the y-axis and uses colors to distinguish different cylinders. The vehicles are sorted according to their MPG values. The code to generate Figure 3-14 is shown next.
FIGURE 3-14 Dotplot to visualize multiple variables
# sort by mpg
cars <- mtcars[order(mtcars$mpg),]

# grouping variable must be a factor
cars$cyl <- factor(cars$cyl)

cars$color[cars$cyl==4] <- "red"
cars$color[cars$cyl==6] <- "blue"
cars$color[cars$cyl==8] <- "darkgreen"

dotchart(cars$mpg, labels=row.names(cars), cex=.7, groups=cars$cyl,
         main="Miles Per Gallon (MPG) of Car Models\nGrouped by Cylinder",
         xlab="Miles Per Gallon", color=cars$color, gcolor="black")
The barplot in Figure 3-15 visualizes the distribution of car cylinder counts and number of gears. The x-axis represents the number of cylinders, and the color represents the number of gears. The code to generate Figure 3-15 is shown next.
FIGURE 3-15 Barplot to visualize multiple variables
counts <- table(mtcars$gear, mtcars$cyl)
barplot(counts, main="Distribution of Car Cylinder Counts and Gears",
        xlab="Number of Cylinders", ylab="Counts",
        col=c("#0000FFFF", "#0080FFFF", "#00FFFFFF"),
        legend = rownames(counts), beside=TRUE,
        args.legend = list(x="top", title="Number of Gears"))
Box-and-Whisker Plot
Box-and-whisker plots show the distribution of a continuous variable for each value of a discrete variable. The box-and-whisker plot in Figure 3-16 visualizes mean household incomes as a function of region in the United States. The first digit of the U.S. postal ("ZIP") code corresponds to a geographical region in the United States. In Figure 3-16, each data point corresponds to the mean household income from a particular zip code. The horizontal axis represents the first digit of a zip code, ranging from 0 to 9, where 0 corresponds to the northeast region of the United States (such as Maine, Vermont, and Massachusetts), and 9 corresponds to the southwest region (such as California and Hawaii). The vertical axis represents the logarithm of mean household incomes. The logarithm is taken to better visualize the distribution of the mean household incomes.
FIGURE 3-16 A box-and-whisker plot of mean household income and geographical region
In this figure, the scatterplot is displayed beneath the box-and-whisker plot, with some jittering for the overlap points so that each line of points widens into a strip. The "box" of the box-and-whisker shows the range that contains the central 50% of the data, and the line inside the box is the location of the median value. The upper and lower hinges of the boxes correspond to the first and third quartiles of the data. The upper whisker extends from the hinge to the highest value that is within 1.5 * IQR of the hinge. The lower whisker extends from the hinge to the lowest value within 1.5 * IQR of the hinge. IQR is the inter-quartile range, as discussed in Section 3.1.4, and the short sketch below shows how the hinge and whisker limits can be computed directly. The points outside the whiskers can be considered possible outliers.
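The following sketch (illustrative, not from the original text) computes the hinges and the 1.5 * IQR whisker limits for a generic numeric vector v.

v <- rnorm(1000)                  # placeholder data for the illustration
q <- quantile(v, c(0.25, 0.75))   # lower and upper hinges (first and third quartiles)
iqr <- q[2] - q[1]                # inter-quartile range
lower_limit <- q[1] - 1.5 * iqr   # whiskers cannot extend below this value
upper_limit <- q[2] + 1.5 * iqr   # whiskers cannot extend above this value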
The graph shows how household income varies by region. The highest median incomes are in region 0 and region 9. Region 0 is slightly higher, but the boxes for the two regions overlap enough that the difference between the two regions probably is not significant. The lowest household incomes tend to be in region 7, which includes states such as Louisiana, Arkansas, and Oklahoma.

Assuming a data frame called DF contains two columns (MeanHouseholdIncome and Zip1), the following R script uses the ggplot2 library [11] to plot a graph that is similar to Figure 3-16.
library(ggplot2)
# plot the jittered scatterplot w/ boxplot
# color-code points with zip codes
# the outlier.size=0 prevents the boxplot from plotting the outlier
ggplot(data=DF, aes(x=as.factor(Zip1), y=log10(MeanHouseholdIncome))) +
  geom_point(aes(color=factor(Zip1)), alpha=0.2, position="jitter") +
  geom_boxplot(outlier.size=0, alpha=0.1) +
  guides(colour=FALSE) +
  ggtitle("Mean Household Income by Zip Code")
Alternatively, one can create a simple box-and-whisker plot with the boxplot() function provided by the R base package, as in the sketch that follows.
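A minimal base R sketch, assuming the same DF data frame with MeanHouseholdIncome and Zip1 columns:

boxplot(log10(MeanHouseholdIncome) ~ Zip1, data = DF,
        xlab = "First Digit of Zip Code",
        ylab = "log10(Mean Household Income)")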
Hexbinplot for Large Datasets
This chapter has shown that the scatterplot is a popular visualization that can represent data containing one or more variables. But one should be careful about using it on high-volume data. If there is too much data, the structure of the data may become difficult to see in a scatterplot. Consider a case to compare the logarithm of household income against the years of education, as shown in Figure 3-17. The cluster in the scatterplot on the left (a) suggests a somewhat linear relationship of the two variables. However, one cannot really see the structure of how the data is distributed inside the cluster. This is a Big Data type of problem. Millions or billions of data points would require different approaches for exploration, visualization, and analysis.
FIGURE 3-17 (a) Scatterplot and (b) Hexbinplot of household income against years of education
Although color and transparency can be used in a scatterplot to address this issue, a hexbinplot is sometimes a better alternative. A hexbinplot combines the ideas of scatterplot and histogram. Similar to a scatterplot, a hexbinplot visualizes data in the x-axis and y-axis. Data is placed into hexbins, and the third dimension uses shading to represent the concentration of data in each hexbin.

In Figure 3-17(b), the same data is plotted using a hexbinplot. The hexbinplot shows that the data is more densely clustered in a streak that runs through the center of the cluster, roughly along the regression line. The biggest concentration is around 12 years of education, extending to about 15 years.
In Figure 3-17, note the outlier data at MeanEducation=0. These data points may correspond to some missing data that needs further cleansing.

Assuming the two variables MeanHouseholdIncome and MeanEducation are from a data frame named zcta, the scatterplot of Figure 3-17(a) is plotted by the following R code.

# plot the data points
plot(log10(MeanHouseholdIncome) ~ MeanEducation, data=zcta)

# add a straight fitted line of the linear regression
abline(lm(log10(MeanHouseholdIncome) ~ MeanEducation, data=zcta), col='red')
Using the zcta data frame, the hexbinplot of Figure 3-17(b) is plotted by the following R code. Running the code requires the use of the hexbin package, which can be installed by running install.packages("hexbin").

library(hexbin)
# "g" adds the grid, "r" adds the regression line
# sqrt transform on the count gives more dynamic range to the shading
# inv provides the inverse transformation function of trans
hexbinplot(log10(MeanHouseholdIncome) ~ MeanEducation, data=zcta,
           trans = sqrt, inv = function(x) x^2, type=c("g", "r"))
Scatterplot Matrix
A scatterplot matrix shows many scatterplots in a compact,
side-by-side fashion. The scatterplot matrix,
therefore, can visually represent multiple attributes of a dataset
to explore their relationships, magnify
differences, and disclose hidden patterns.
Fisher's iris dataset [13] includes the measurements in centimeters of the sepal length, sepal width, petal length, and petal width for 50 flowers from each of three species of iris. The three species are setosa, versicolor, and virginica. The iris dataset comes with the standard R distribution.
In Figure 3-18, all the variables of Fisher's iris dataset (sepal
length, sepal width, petal length, and
petal width) are compared in a scatterplot matrix. The three
different colors represent three species of iris
flowers. The scatterplot matrix in Figure 3-18 allows its viewers
to compare the differences across the iris
species for any pairs of attributes.
FIGURE 3-18 Scatterplot matrix of Fisher's [13] iris dataset
Consider the scatterplot from the first row and third column of Figure 3-18, where sepal length is compared against petal length. The horizontal axis is the petal length, and the vertical axis is the sepal length. The scatterplot shows that versicolor and virginica share similar sepal and petal lengths, although the latter has longer petals. The petal lengths of all setosa are about the same, and the petal lengths are remarkably shorter than those of the other two species. The scatterplot shows that for versicolor and virginica, sepal length grows linearly with the petal length.
The R code for generating the scatterplot matrix is provided next.

# define the colors
colors <- c("red", "green", "blue")

# draw the plot matrix
pairs(iris[1:4], main = "Fisher's Iris Dataset",
      pch = 21, bg = colors[unclass(iris$Species)])

# set graphical parameter to clip plotting to the figure region
par(xpd = TRUE)

# add legend
legend(0.2, 0.02, horiz = TRUE, as.vector(unique(iris$Species)),
       fill = colors, bty = "n")
The vector colors defines the color scheme for the plot. It could be changed to something like colors <- c("gray50", "white", "black") to make the scatterplots grayscale.
Analyzing a Variable over Time
Visualizing a variable over time is the same as visualizing any pair of variables, but in this case the goal is to identify time-specific patterns.

Figure 3-19 plots the monthly total numbers of international airline passengers (in thousands) from January 1949 to December 1960. Enter plot(AirPassengers) in the R console to obtain a similar graph, as shown below. The plot shows that, for each year, a large peak occurs mid-year around July and August, and a small peak happens around the end of the year, possibly due to the holidays. Such a phenomenon is referred to as a seasonality effect.
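The built-in AirPassengers time series ships with base R, so the plot can be reproduced with two lines (a minimal sketch of the command mentioned above).

data(AirPassengers)    # monthly totals of international airline passengers, 1949-1960
plot(AirPassengers)    # plotted as a time series; seasonality appears as yearly peaks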
FIGURE 3-19 Airline passenger counts from 1949 to 1960
Additionally, the overall trend is that the number of air passengers steadily increased from 1949 to 1960. Chapter 8, "Advanced Analytical Theory and Methods: Time Series Analysis," discusses the analysis of such datasets in greater detail.
3.2.5 Data Exploration Versus Presentation
Using visualization for data exploration is different from presenting results to stakeholders. Not every type of plot is suitable for all audiences. Most of the plots presented earlier try to detail the data as clearly as possible for data scientists to identify structures and relationships. These graphs are more technical in nature and are better suited to technical audiences such as data scientists. Nontechnical stakeholders, however, generally prefer simple, clear graphics that focus on the message rather than the data.
Figure 3-20 shows the density plot on the distribution of account values from a bank. The data has been converted to the log10 scale. The plot includes a rug on the bottom to show the distribution of the variable. This graph is more suitable for data scientists and business analysts because it provides information that can be relevant to the downstream analysis. The graph shows that the transformed account values follow an approximate normal distribution, in the range from $100 to $10,000,000. The median account value is approximately $30,000 (10^4.5), with the majority of the accounts between $1,000 (10^3) and $1,000,000 (10^6).
FIGURE 3-20 Density plots are better to show to data scientists
Density plots are fairly technical, and they contain so much information that they would be difficult to explain to less technical stakeholders. For example, it would be challenging to explain why the account values are in the log10 scale, and such information is not relevant to stakeholders. The same message can be conveyed by partitioning the data into log-like bins and presenting it as a histogram. As can be seen in Figure 3-21, the bulk of the accounts are in the $1,000 to $1,000,000 range, with the peak concentration in the $10-50K range, extending to $500K. This portrayal gives the stakeholders a better sense of the customer base than the density plot shown in Figure 3-20.
Note that the bin sizes should be carefully chosen to avoid distortion of the data. In this example, the bins in Figure 3-21 are chosen based on observations from the density plot in Figure 3-20. Without the density plot, the peak concentration might be just due to the somewhat arbitrary appearing choices for the bin sizes.

This simple example addresses the different needs of two groups of audience: analysts and stakeholders. Chapter 12, "The Endgame, or Putting It All Together," further discusses the best practices of delivering presentations to these two groups.
Following is the R code to generate the plots in Figure 3-20 and
Figure 3-21.
# Generate random log normal income data
income = rlnorm(5000, meanlog=log(40000), sdlog=log(5))

# Part I: Create the density plot
plot(density(log10(income), adjust=0.5),
     main="Distribution of Account Values (log10 scale)")
# Add rug to the density plot
rug(log10(income))

# Part II: Create the histogram
# define the breaks that delimit the log-like bins
breaks = c(0, 1000, 5000, 10000, 50000, 100000, 5e5, 1e6, 2e7)
# assign each income value to a bin
bins = cut(income, breaks, include.lowest=T,
           labels = c("< 1K", "1-5K", "5-10K", "10-50K",
                      "50-100K", "100-500K", "500K-1M", "> 1M"))
# plot the bins
plot(bins, main = "Distribution of Account Values",
     xlab = "Account value ($ USD)",
     ylab = "Number of Accounts", col="blue")
FIGURE 3-21 Histograms are better to show to stakeholders
3.3 Statistical Methods for Evaluation
Visualization is useful for data exploration and presentation, but statistics is crucial because it applies throughout the entire Data Analytics Lifecycle. Statistical techniques are used during the initial data exploration and data preparation, model building, evaluation of the final models, and assessment of how the new models improve the situation when deployed in the field. In particular, statistics can help answer the following questions for data analytics:
• Model Building and Planning
  • What are the best input variables for the model?
  • Can the model predict the outcome given the input?
• Model Evaluation
  • Is the model accurate?
  • Does the model perform better than an obvious guess?
  • Does the model perform better than another candidate model?
• Model Deployment
  • Is the prediction sound?
  • Does the model have the desired effect (such as reducing the cost)?

This section discusses some useful statistical tools that may answer these questions.
3.3.1 Hypothesis Testing
When comparing populations, such as testing or evaluating the difference of the means from two samples of data (Figure 3-22), a common technique to assess the difference or the significance of the difference is hypothesis testing.

FIGURE 3-22 Distributions of two samples of data
The basic concept of hypothesis testing is to form an assertion and test it with data. When performing hypothesis tests, the common assumption is that there is no difference between two samples. This assumption is used as the default position for building the test or conducting a scientific experiment. Statisticians refer to this as the null hypothesis (H0). The alternative hypothesis (HA) is that there is a difference between two samples. For example, if the task is to identify the effect of drug A compared to drug B on patients, the null hypothesis and alternative hypothesis would be this.

• H0: Drug A and drug B have the same effect on patients.
• HA: Drug A has a greater effect than drug B on patients.

If the task is to identify whether advertising Campaign C is effective on reducing customer churn, the null hypothesis and alternative hypothesis would be as follows.

• H0: Campaign C does not reduce customer churn better than the current campaign method.
• HA: Campaign C does reduce customer churn better than the current campaign.
It is important to state the null hypothesis and alternative hypothesis, because misstating them is likely to undermine the subsequent steps of the hypothesis testing process. A hypothesis test leads to either rejecting the null hypothesis in favor of the alternative or not rejecting the null hypothesis.

Table 3-5 includes some examples of null and alternative hypotheses that should be answered during the analytic lifecycle.
TABLE 3-5 Example Null Hypotheses and Alternative Hypotheses

Application            Null Hypothesis                               Alternative Hypothesis
Accuracy Forecast      Model X does not predict better than the      Model X predicts better than the
                       existing model.                               existing model.
Recommendation         Algorithm Y does not produce better           Algorithm Y produces better
Engine                 recommendations than the current              recommendations than the current
                       algorithm being used.                         algorithm being used.
Regression Modeling    This variable does not affect the outcome     This variable affects the outcome
                       because its coefficient is zero.              because its coefficient is not zero.
Once a model is built over the training data, it needs to be evaluated over the testing data to see if the proposed model predicts better than the existing model currently being used. The null hypothesis is that the proposed model does not predict better than the existing model. The alternative hypothesis is that the proposed model indeed predicts better than the existing model. In accuracy forecast, the null model could be that the sales of the next month are the same as the prior month. The hypothesis test needs to evaluate if the proposed model provides a better prediction. Take a recommendation engine as an example. The null hypothesis could be that the new algorithm does not produce better recommendations than the current algorithm being deployed. The alternative hypothesis is that the new algorithm produces better recommendations than the old algorithm.
When evaluating a model, sometimes it needs to be determined if a given input variable improves the model. In regression analysis (Chapter 6), for example, this is the same as asking if the regression coefficient for a variable is zero. The null hypothesis is that the coefficient is zero, which means the variable does not have an impact on the outcome. The alternative hypothesis is that the coefficient is nonzero, which means the variable does have an impact on the outcome.
A common hypothesis test is to compare the means of two populations. Two such hypothesis tests are discussed in Section 3.3.2.
3.3.2 Difference of Means
Hypothesis testing is a common approach to draw inferences on whether or not the two populations, denoted pop1 and pop2, are different from each other. This section provides two hypothesis tests to compare the means of the respective populations based on samples randomly drawn from each population. Specifically, the two hypothesis tests in this section consider the following null and alternative hypotheses.

• H0: μ1 = μ2
• HA: μ1 ≠ μ2

The μ1 and μ2 denote the population means of pop1 and pop2, respectively.
The basic testing approach is to compare the observed sample means, X̄1 and X̄2, corresponding to each population. If the values of X̄1 and X̄2 are approximately equal to each other, the distributions of X̄1 and X̄2 overlap substantially (Figure 3-23), and the null hypothesis is supported. A large observed difference between the sample means indicates that the null hypothesis should be rejected. Formally, the difference in means can be tested using Student's t-test or the Welch's t-test.
FIGURE 3-23 Overlap of the two distributions is large if X̄1 ≈ X̄2
Student's t-test
Student's t-test assumes that distributions of the two populations have equal but unknown variances. Suppose n1 and n2 samples are randomly and independently selected from two populations, pop1 and pop2, respectively. If each population is normally distributed with the same mean (μ1 = μ2) and with the same variance, then T (the t-statistic), given in Equation 3-1, follows a t-distribution with n1 + n2 − 2 degrees of freedom (df).

$$ T = \frac{\bar{X}_1 - \bar{X}_2}{S_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}} \quad \text{where} \quad S_p^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2} \tag{3-1} $$
The shape of the t-distribution is similar to the normal distribution. In fact, as the degrees of freedom approaches 30 or more, the t-distribution is nearly identical to the normal distribution. Because the numerator of T is the difference of the sample means, if the observed value of T is far enough from zero such that the probability of observing such a value of T is unlikely, one would reject the null hypothesis that the population means are equal. Thus, for a small probability, say α = 0.05, T* is determined such that P(|T| ≥ T*) = 0.05. After the samples are collected and the observed value of T is calculated according to Equation 3-1, the null hypothesis (μ1 = μ2) is rejected if |T| ≥ T*.

In hypothesis testing, in general, the small probability, α, is known as the significance level of the test. The significance level of the test is the probability of rejecting the null hypothesis, when the null hypothesis is actually TRUE. In other words, for α = 0.05, if the means from the two populations are truly equal, then in repeated random sampling, the observed magnitude of T would only exceed T* 5% of the time.
In the following R code example, 10 observations are randomly selected from one normally distributed population and 20 observations from another, and they are assigned to the variables x and y. The two populations have a mean of 100 and 105, respectively, and a standard deviation equal to 5. Student's t-test is then conducted to determine if the obtained random samples support the rejection of the null hypothesis.
# generate random observations from the two populations
x <- rnorm(10, mean=100, sd=5)   # normal distribution centered at 100
y <- rnorm(20, mean=105, sd=5)   # normal distribution centered at 105

t.test(x, y, var.equal=TRUE)     # run the Student's t-test

        Two Sample t-test

data:  x and y
t = -1.7828, df = 28, p-value = 0.08547
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.1611557  0.4271393
sample estimates:
mean of x mean of y
 102.2136  105.0806
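As an illustrative cross-check (not part of the original text), the t-statistic of Equation 3-1 can be computed directly from the same samples; the result matches the t value reported by t.test().

# pooled two-sample t-statistic computed by hand from x and y
n1 <- length(x); n2 <- length(y)
sp2 <- ((n1 - 1)*var(x) + (n2 - 1)*var(y)) / (n1 + n2 - 2)   # pooled variance
(mean(x) - mean(y)) / sqrt(sp2 * (1/n1 + 1/n2))              # same value as t above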
From the R output, the observed value of T is t = -1.7828. The negative sign is due to the fact that the sample mean of x is less than the sample mean of y. Using the qt() function in R, a T value of 2.0484 corresponds to a 0.05 significance level.

# obtain t value for a two-sided test at a 0.05 significance level
qt(p=0.05/2, df=28, lower.tail=FALSE)
[1] 2.048407
Because the magnitude of the observed T statistic is less than the T value corresponding to the 0.05 significance level (|-1.7828| < 2.0484), the null hypothesis is not rejected. Because the alternative hypothesis is that the means are not equal (μ1 ≠ μ2), the possibilities of both μ1 > μ2 and μ1 < μ2 need to be considered. This form of Student's t-test is known as a two-sided hypothesis test, and it is necessary for the sum of the probabilities under both tails of the t-distribution to equal the significance level. It is customary to evenly divide the significance level between both tails. So, p = 0.05/2 = 0.025 was used in the qt() function to obtain the appropriate t-value.
To simplify the comparison of the t-test results to the significance level, the R output includes a quantity known as the p-value. In the preceding example, the p-value is 0.08547, which is the sum of P(T ≤ -1.7828) and P(T ≥ 1.7828). Figure 3-24 illustrates the t-statistic for the area under the tail of a t-distribution. The -t and t are the observed values of the t-statistic. In the R output, t = 1.7828. The left shaded area corresponds to the P(T ≤ -1.7828), and the right shaded area corresponds to the P(T ≥ 1.7828).
FIGURE 3-24 Area under the tails (shaded) of a Student's t-distribution
In the R output, for a significance level of 0.05, the null hypothesis would not be rejected, because a T value of magnitude 1.7828 or greater would occur with probability greater than 0.05. However, based on the p-value, if the significance level were chosen to be 0.10 instead of 0.05, the null hypothesis would be rejected. In general, the p-value offers the probability of observing such a sample result given that the null hypothesis is TRUE.
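As a quick check, the reported two-sided p-value can be reproduced directly from the observed t statistic and its degrees of freedom using the pt() distribution function in base R; the values below are taken from the t-test output shown earlier.

# two-sided p-value from the observed t statistic (t = -1.7828, df = 28)
2 * pt(-abs(-1.7828), df=28)    # sums both tail areas; approximately 0.085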
A key assumption in using Student's t-test is that the population variances are equal. In the previous example, the t.test() function call includes var.equal=TRUE to specify that equality of the variances should be assumed. If that assumption is not appropriate, then Welch's t-test should be used.
Welch's t-test

When the equal population variance assumption is not justified in performing Student's t-test for the difference of means, Welch's t-test [14] can be used based on T expressed in Equation 3-2.

T = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}    (3-2)

where \bar{X}_i, S_i^2, and n_i correspond to the i-th sample mean, sample variance, and sample size. Notice that Welch's t-test uses the sample variance (S_i^2) for each population instead of the pooled sample variance.
In Welch's test, under the remaining assumptions of random samples from two normal populations with the same mean, the distribution of T is approximated by the t-distribution. The following R code performs the Welch's t-test on the same set of data analyzed in the earlier Student's t-test example.
t.test(x, y, var.equal=FALSE)    # run the Welch's t-test

        Welch Two Sample t-test

data:  x and y
t = -1.6596, df = 15.118, p-value = 0.1176
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.546629  0.812663
sample estimates:
mean of x mean of y
 102.2136  105.0806
In this particular example of using Welch's t-test, the p-value is
0.1176, which is greater than the p-value
of 0.08547 observed in the Student's t-test example. In this
case, the null hypothesis would not be rejected
at a 0.10 or 0.05 significance level.
It should be noted that the degrees of freedom calculation is not
as straightforward as in the Student's
t-test. In fact, the degrees of freedom calculation often results
in a non-integer value, as in this example.
The degrees of freedom for Welch's t-test is defined in Equation
3-3.
df = \frac{\left( \frac{S_1^2}{n_1} + \frac{S_2^2}{n_2} \right)^2}{\frac{\left( S_1^2 / n_1 \right)^2}{n_1 - 1} + \frac{\left( S_2^2 / n_2 \right)^2}{n_2 - 1}}    (3-3)
In both the Student's and Welch's t-test examples, the R output
provides 95% confidence intervals on
the difference of the means. In both examples, the confidence
intervals straddle zero. Regardless of the
result of the hypothesis test, the confidence interval provides an
interval estimate of the difference of the
population means, not just a point estimate.
A confidence interval is an interval estimate of a population parameter or characteristic based on sample data. A confidence interval is used to indicate the uncertainty of a point estimate. If x̄ is the estimate of some unknown population mean μ, the confidence interval provides an idea of how close x̄ is to the unknown μ. For example, a 95% confidence interval for a population mean straddles the TRUE, but unknown, mean 95% of the time. Consider Figure 3-25 as an example. Assume the confidence level is 95%. If the task is to estimate the mean of an unknown value μ in a normal distribution with known standard deviation σ and the estimate based on n observations is x̄, then the interval x̄ ± 2σ/√n straddles the unknown value of μ with about a 95% chance. If one takes 100 different samples and computes the 95% confidence interval for the mean, 95 of the 100 confidence intervals will be expected to straddle the population mean μ.
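The coverage interpretation can be illustrated with a short simulation sketch. The population mean of 100, standard deviation of 5, and sample size of 25 below are arbitrary illustration values, and the interval uses the exact 1.96 multiplier rather than the rounded value of 2.

# simulation sketch: roughly 95 of 100 intervals should straddle the true mean
set.seed(123)                          # for a reproducible illustration
mu <- 100; sigma <- 5; n <- 25         # hypothetical population and sample size
covered <- replicate(100, {
  x <- rnorm(n, mean=mu, sd=sigma)
  lower <- mean(x) - 1.96 * sigma / sqrt(n)
  upper <- mean(x) + 1.96 * sigma / sqrt(n)
  (lower <= mu) & (mu <= upper)        # TRUE if this interval straddles mu
})
sum(covered)                           # typically close to 95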
FIGURE 3-25 A 95% confidence interval straddling the unknown population mean μ
Confidence intervals appear again in Section 3.3.6 on ANOVA. Returning to the discussion of hypothesis testing, a key assumption in both the Student's and Welch's t-test is that the relevant population attribute is normally distributed. For non-normally distributed data, it is sometimes possible to transform the collected data to approximate a normal distribution. For example, taking the logarithm of a dataset can often transform skewed data to a dataset that is at least symmetric around its mean. However, if such transformations are ineffective, there are tests like the Wilcoxon rank-sum test that can be applied to see if two population distributions are different.
3.3.3 Wilcoxon Rank-Sum Test

A t-test represents a parametric test in that it makes assumptions about the population distributions from which the samples are drawn. If the populations cannot be assumed or transformed to follow a normal distribution, a nonparametric test can be used. The Wilcoxon rank-sum test [15] is a nonparametric hypothesis test that checks whether two populations are identically distributed. Assuming the two populations are identically distributed, one would expect that the ordering of any sampled observations would be evenly intermixed among themselves. For example, in ordering the observations, one would not expect to see a large number of observations from one population grouped together, especially at the beginning or the end of the ordering.
Let the two populations again be pop1 and pop2, with independently random samples of size n_1 and n_2, respectively. The total number of observations is then N = n_1 + n_2. The first step of the Wilcoxon test is to rank the set of observations from the two groups as if they came from one large group. The smallest observation receives a rank of 1, the second smallest observation receives a rank of 2, and so on, with the largest observation being assigned the rank of N. Ties among the observations receive a rank equal to the average of the ranks they span. The test uses ranks instead of numerical outcomes to avoid specific assumptions about the shape of the distribution.

After ranking all the observations, the assigned ranks are summed for at least one population's sample. If the distribution of pop1 is shifted to the right of the other distribution, the rank-sum corresponding to pop1's sample should be larger than the rank-sum of pop2. The Wilcoxon rank-sum test determines the
significance of the observed rank-sums. The following R code performs the test on the same dataset used for the previous t-test.

wilcox.test(x, y, conf.int=TRUE)
The wilcox.test() function ranks the observations, determines the respective rank-sums corresponding to each population's sample, and then determines the probability of rank-sums of such magnitude being observed assuming that the population distributions are identical. In this example, the probability is given by the p-value of 0.04903. Thus, the null hypothesis would be rejected at a 0.05 significance level. The reader is cautioned against interpreting that one hypothesis test is clearly better than another test based solely on the examples given in this section.
Because the Wilcoxon test does not assume anything about the
population distribution, it is generally
considered more robust than the t-test. In other words, there are
fewer assumptions to violate. However,
when it is reasonable to assume that the data is normally
distributed, Student's or Welch's t-test is an
appropriate hypothesis test to consider.
3.3.4 Type I and Type II Errors

A hypothesis test may result in two types of errors, depending on whether the test accepts or rejects the null hypothesis. These two errors are known as type I and type II errors.

• A type I error is the rejection of the null hypothesis when the null hypothesis is TRUE. The probability of the type I error is denoted by the Greek letter α.
• A type II error is the acceptance of a null hypothesis when the null hypothesis is FALSE. The probability of the type II error is denoted by the Greek letter β.

Table 3-6 lists the four possible states of a hypothesis test, including the two types of errors.
TABLE 3-6 Type I and Type II Error

                    H0 is true          H0 is false
H0 is accepted      Correct outcome     Type II error
H0 is rejected      Type I error        Correct outcome
The significance level, as mentioned in the Student's t-test discussion, is equivalent to the type I error. For a significance level such as α = 0.05, if the null hypothesis (μ1 = μ2) is TRUE, there is a 5% chance that the observed T value based on the sample data will be large enough to reject the null hypothesis. By selecting an appropriate significance level, the probability of committing a type I error can be defined before any data is collected or analyzed.

The probability of committing a type II error is somewhat more difficult to determine. If two population means are truly not equal, the probability of committing a type II error will depend on how far apart the means truly are. To reduce the probability of a type II error to a reasonable level, it is often necessary to increase the sample size. This topic is addressed in the next section.
3.3.5 Power and Sample Size

The power of a test is the probability of correctly rejecting the null hypothesis. It is denoted by 1 - β, where β is the probability of a type II error. Because the power of a test improves as the sample size increases, power is used to determine the necessary sample size. In the difference of means, the power of a hypothesis test depends on the true difference of the population means. In other words, for a fixed significance level, a larger sample size is required to detect a smaller difference in the means. In general, the magnitude of the difference is known as the effect size. As the sample size becomes larger, it is easier to detect a given effect size, δ, as illustrated in Figure 3-26.
FIGURE 3-26 A larger sample size better identifies a fixed effect size (left panel: moderate sample size; right panel: larger sample size)
With a large enough sample size, almost any effect size can appear statistically significant. However, a very small effect size may be useless in a practical sense. It is important to consider an appropriate effect size for the problem at hand.
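Base R's power.t.test() function ties these ideas together by solving for whichever quantity is left unspecified. The sketch below asks for the per-group sample size needed to detect an illustrative effect size; the difference of 5, standard deviation of 5, and 80% power are assumed values, not figures from the text.

# per-group sample size to detect a difference of 5 between two means
# (sd = 5, two-sided test at the 0.05 significance level, 80% power)
power.t.test(delta=5, sd=5, sig.level=0.05, power=0.80)

Smaller assumed differences (effect sizes) drive the required sample size upward, which is the relationship illustrated in Figure 3-26.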
3.3.6 ANOVA

The hypothesis tests presented in the previous sections are good for analyzing means between two populations. But what if there are more than two populations? Consider an example of testing the impact of
nutrition and exercise on 60 candidates between age 18 and 50. The candidates are randomly split into six groups, each assigned with a different weight loss strategy, and the goal is to determine which strategy is the most effective.

• Group 1 only eats junk food.
• Group 2 only eats healthy food.
• Group 3 eats junk food and does cardio exercise every other day.
• Group 4 eats healthy food and does cardio exercise every other day.
• Group 5 eats junk food and does both cardio and strength training every other day.
• Group 6 eats healthy food and does both cardio and strength training every other day.
Multiple t-tests could be applied to each pair of weight loss strategies. In this example, the weight loss of Group 1 is compared with the weight loss of Group 2, 3, 4, 5, or 6. Similarly, the weight loss of Group 2 is compared with that of the next four groups. Therefore, a total of 15 t-tests would be performed.

However, multiple t-tests may not perform well on several populations for two reasons. First, because the number of t-tests increases as the number of groups increases, analysis using the multiple t-tests becomes cognitively more difficult. Second, by doing a greater number of analyses, the probability of committing at least one type I error somewhere in the analysis greatly increases.
Analysis of Variance (ANOVA) is designed to address these issues. ANOVA is a generalization of the hypothesis testing of the difference of two population means. ANOVA tests if any of the population means differ from the other population means. The null hypothesis of ANOVA is that all the population means are equal. The alternative hypothesis is that at least one pair of the population means is not equal. In other words,

• H_0: μ_1 = μ_2 = ... = μ_n
• H_A: μ_i ≠ μ_j for at least one pair of i, j
As seen in Section 3.3.2, "Difference of Means," each population is assumed to be normally distributed with the same variance.

The first thing to calculate for the ANOVA is the test statistic. Essentially, the goal is to test whether the clusters formed by each population are more tightly grouped than the spread across all the populations.

Let the total number of populations be k. The total number of samples N is randomly split into the k groups. The number of samples in the i-th group is denoted as n_i, and the mean of the group is x̄_i, where i ∈ [1, k]. The mean of all the samples is denoted as x̄_0.
The between-groups mean sum of squares, S_B^2, is an estimate of the between-groups variance. It measures how the population means vary with respect to the grand mean, or the mean spread across all the populations. Formally, this is presented as shown in Equation 3-4.

S_B^2 = \frac{1}{k-1} \sum_{i=1}^{k} n_i \left( \bar{x}_i - \bar{x}_0 \right)^2    (3-4)
The within-group mean sum of squares, S_W^2, is an estimate of the within-group variance. It quantifies the spread of values within groups. Formally, this is presented as shown in Equation 3-5.
S_W^2 = \frac{1}{N-k} \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left( x_{ij} - \bar{x}_i \right)^2    (3-5)
If S_B^2 is much larger than S_W^2, then some of the population means are different from each other.

The F-test statistic is defined as the ratio of the between-groups mean sum of squares and the within-group mean sum of squares. Formally, this is presented as shown in Equation 3-6.

F = \frac{S_B^2}{S_W^2}    (3-6)
The F-test statistic in ANOVA can be thought of as a measure of how different the means are relative to the variability within each group. The larger the observed F-test statistic, the greater the likelihood that the differences between the means are due to something other than chance alone. The F-test statistic is used to test the hypothesis that the observed effects are not due to chance, that is, whether the means are significantly different from one another.
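To connect these formulas to R, the following sketch computes S_B^2, S_W^2, and F directly from Equations 3-4 through 3-6. The three groups and their means and standard deviations are hypothetical values chosen only for illustration.

# sketch: compute the ANOVA F-test statistic directly from Equations 3-4 to 3-6
set.seed(1)
g1 <- rnorm(20, mean=10, sd=2)    # three hypothetical groups
g2 <- rnorm(20, mean=11, sd=2)
g3 <- rnorm(20, mean=12, sd=2)
groups <- list(g1, g2, g3)

k  <- length(groups)                      # number of groups
N  <- sum(sapply(groups, length))         # total number of samples
grand_mean <- mean(unlist(groups))

# between-groups mean sum of squares (Equation 3-4)
SB2 <- sum(sapply(groups, function(g) length(g) * (mean(g) - grand_mean)^2)) / (k - 1)

# within-group mean sum of squares (Equation 3-5)
SW2 <- sum(sapply(groups, function(g) sum((g - mean(g))^2))) / (N - k)

F_stat <- SB2 / SW2                       # Equation 3-6
F_stat

Fitting the equivalent model with aov() would report the same F value for these groups; the retail example that follows shows the aov() workflow on a larger simulated dataset.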
Consider an example in which every customer who visits a retail website gets one of two promotional offers or gets no promotion at all. The goal is to see if making the promotional offers makes a difference. ANOVA could be used, and the null hypothesis is that neither promotion makes a difference. The code that follows randomly generates a total of 500 observations of purchase sizes on three different offer options.

offers <- sample(c("offer1", "offer2", "nopromo"), size=500, replace=T)

# Simulated 500 observations of purchase sizes on the 3 offer options
purchasesize <- ifelse(offers=="offer1", rnorm(500, mean=80, sd=30),
                  ifelse(offers=="offer2", rnorm(500, mean=85, sd=30),
                         rnorm(500, mean=40, sd=30)))

# create a data frame of offer option and purchase size
offertest <- data.frame(offer=as.factor(offers),
                        purchase_amt=purchasesize)
The summary of the offertest data frame shows that 170 offer1, 161 offer2, and 169 nopromo (no promotion) offers have been made. It also shows the range of purchase size (purchase_amt) for each of the three offer options.

# display a summary of offertest where offer="offer1"
summary(offertest[offertest$offer=="offer1",])
     offer         purchase_amt
 nopromo:  0     Min.   :  4.521
 offer1 :170     1st Qu.: 58.158
 offer2 :  0     Median : 76.944
                 Mean   : 81.936
                 3rd Qu.:104.959
                 Max.   :130.507
# display a summary of offertest where offer="offer2"
summary(offertest[offertest$offer=="offer2",])
     offer         purchase_amt
 nopromo:  0     Min.   : 14.04
 offer1 :  0     1st Qu.: 69.46
 offer2 :161     Median : 90.20
                 Mean   : 89.09
                 3rd Qu.:107.48
                 Max.   :154.33
# display a summary of offertest where offer="nopromo"
summary(offertest[offertest$offer=="nopromo",])
     offer         purchase_amt
 nopromo:169     Min.   :-27.00
 offer1 :  0     1st Qu.: 20.22
 offer2 :  0     Median : 42.44
                 Mean   : 40.97
                 3rd Qu.: 58.96
                 Max.   :164.04
The aov() function performs the ANOVA on purchase size and offer options.

# fit ANOVA test
model <- aov(purchase_amt ~ offers, data=offertest)

The summary() function shows a summary of the model. The degrees of freedom for offers is 2, which corresponds to the k - 1 in the denominator of Equation 3-4. The degrees of freedom for residuals is 497, which corresponds to the N - k in the denominator of Equation 3-5.
summary(model)
             Df Sum Sq Mean Sq F value Pr(>F)
offers        2 225222  112611   130.6 <2e-16 ***
Residuals   497 428470     862
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The output also includes S_B^2 (112,611), S_W^2 (862), the F-test statistic (130.6), and the p-value (< 2e-16). The F-test statistic is much greater than 1 with a p-value much less than 1. Thus, the null hypothesis that the means are equal should be rejected.
However, the result does not show whether offer1 is different from offer2, which requires additional tests. The TukeyHSD() function implements Tukey's Honest Significant Difference (HSD) on all pair-wise tests for difference of means.
TukeyHSD(model)
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = purchase_amt ~ offers, data = offertest)

$offers
                    diff        lwr      upr     p adj
offer1-nopromo 40.961437 33.4638483 48.45903 0.0000000
offer2-nopromo 48.120286 40.5189446 55.72163 0.0000000
offer2-offer1   7.158849 -0.4315769 14.74928 0.0692895
The result includes p-values of pair-wise comparisons of the three offer options. The p-values for offer1-nopromo and offer2-nopromo are equal to 0, smaller than the significance level of 0.05. This suggests that both offer1 and offer2 are significantly different from nopromo. The p-value of 0.0692895 for offer2 against offer1 is greater than the significance level of 0.05. This suggests that offer2 is not significantly different from offer1.

Because only the influence of one factor (offers) was analyzed, the presented ANOVA is known as one-way ANOVA. If the goal is to analyze two factors, such as offers and day of week, that would be a two-way ANOVA [16]. If the goal is to model more than one outcome variable, then multivariate ANOVA (or MANOVA) could be used.
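As a rough illustration of the two-way case, the following sketch adds a hypothetical day_of_week factor to the offertest data frame; this variable is invented here for illustration and is not part of the data generated earlier.

# sketch of a two-way ANOVA with a hypothetical day-of-week factor
offertest$day_of_week <- factor(sample(c("weekday", "weekend"),
                                       size=nrow(offertest), replace=TRUE))

# main effects of offer and day of week plus their interaction
model2 <- aov(purchase_amt ~ offer * day_of_week, data=offertest)
summary(model2)

Because the simulated purchase sizes do not depend on the invented day_of_week variable, its effect would typically appear insignificant; the sketch only shows how a second factor enters the model formula.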
Summary
R is a popular package and programming language for data exploration, analytics, and visualization. As an introduction to R, this chapter covers the R GUI, data I/O, attribute and data types, and descriptive statistics. This chapter also discusses how to use R to perform exploratory data analysis, including the discovery of dirty data, visualization of one or more variables, and customization of visualization for different audiences. Finally, the chapter introduces some basic statistical methods. The first statistical method presented in the chapter is hypothesis testing. The Student's t-test and Welch's t-test are included as two example hypothesis tests designed for testing the difference of means. Other statistical methods and tools presented in this chapter include confidence intervals, the Wilcoxon rank-sum test, type I and II errors, effect size, and ANOVA.
Exercises
1. How many levels does fdata contain in the following R code?

   data = c(1,2,2,3,1,2,3,3,1,2,3,3,1)
   fdata = factor(data)

2. Two vectors, v1 and v2, are created with the following R code:

   v1 <- 1:5
   v2 <- 6:2

   What are the results of cbind(v1, v2) and rbind(v1, v2)?

3. What R command(s) would you use to remove null values from a dataset?

4. What R command can be used to install an additional R package?

5. What R function is used to encode a vector as a category?
6. What is a rug plot used for in a density plot?

7. An online retailer wants to study the purchase behaviors of its customers. Figure 3-27 shows the density plot of the purchase sizes (in dollars). What would be your recommendation to enhance the plot to detect more structures that otherwise might be missed?
FIGURE 3-27 Density plot of purchase size (x-axis: purchase size in dollars; y-axis: density)
8. How many sections does a box-and-whisker plot divide the data into? What are these sections?

9. What attributes are correlated according to Figure 3-18? How would you describe their relationships?

10. What function can be used to fit a nonlinear line to the data?

11. If a graph of data is skewed and all the data is positive, what mathematical technique may be used to help detect structures that might otherwise be overlooked?

12. What is a type I error? What is a type II error? Is one always more serious than the other? Why?

13. Suppose everyone who visits a retail website gets one promotional offer or no promotion at all. We want to see if making a promotional offer makes a difference. What statistical method would you recommend for this analysis?

14. You are analyzing two normally distributed populations, and your null hypothesis is that the mean μ_1 of the first population is equal to the mean μ_2 of the second. Assume the significance level is set at 0.05. If the observed p-value is 4.33e-05, what will be your decision regarding the null hypothesis?
Bibliography
[1] The R Project for Statistical Computing, "R Licenses." [Online]. Available: http://www.r-project.org/Licenses/. [Accessed 10 December 2013].

[2] The R Project for Statistical Computing, "The Comprehensive R Archive Network." [Online]. Available: http://cran.r-project.org/. [Accessed 10 December 2013].

[3] J. Fox and M. Bouchet-Valat, "The R Commander: A Basic-Statistics GUI for R," CRAN. [Online]. Available: http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/. [Accessed 11 December 2013].

[4] G. Williams, M. V. Culp, E. Cox, A. Nolan, D. White, D. Medri, and A. Waljee, "Rattle: Graphical User Interface for Data Mining in R," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/rattle/index.html. [Accessed 12 December 2013].

[5] RStudio, "RStudio IDE." [Online]. Available: http://www.rstudio.com/ide/. [Accessed 11 December 2013].

[6] R Special Interest Group on Databases (R-SIG-DB), "DBI: R Database Interface," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/DBI/index.html. [Accessed 13 December 2013].

[7] B. Ripley, "RODBC: ODBC Database Access," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/RODBC/index.html. [Accessed 13 December 2013].

[8] S. S. Stevens, "On the Theory of Scales of Measurement," Science, vol. 103, no. 2684, pp. 677-680, 1946.

[9] D. C. Hoaglin, F. Mosteller, and J. W. Tukey, Understanding Robust and Exploratory Data Analysis, New York: Wiley, 1983.

[10] F. J. Anscombe, "Graphs in Statistical Analysis," The American Statistician, vol. 27, no. 1, pp. 17-21, 1973.

[11] H. Wickham, "ggplot2," 2013. [Online]. Available: http://ggplot2.org/. [Accessed 8 January 2014].

[12] W. S. Cleveland, Visualizing Data, Lafayette, IN: Hobart Press, 1993.

[13] R. A. Fisher, "The Use of Multiple Measurements in Taxonomic Problems," Annals of Eugenics, vol. 7, no. 2, pp. 179-188, 1936.

[14] B. L. Welch, "The Generalization of 'Student's' Problem When Several Different Population Variances Are Involved," Biometrika, vol. 34, no. 1-2, pp. 28-35, 1947.

[15] F. Wilcoxon, "Individual Comparisons by Ranking Methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.

[16] J. J. Faraway, "Practical Regression and Anova Using R," July 2002. [Online]. Available: http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf. [Accessed 22 January 2014].
ADVANCED ANALYTICAL THEORY AND METHODS:
CLUSTERING
Building upon the introduction to R presented in Chapter 3, "Review of Basic Data Analytic Methods Using R," Chapter 4, "Advanced Analytical Theory and Methods: Clustering" through Chapter 9, "Advanced Analytical Theory and Methods: Text Analysis" describe several commonly used analytical methods that may be considered for the Model Planning and Execution phases (Phases 3 and 4) of the Data Analytics Lifecycle. This chapter considers clustering techniques and algorithms.
4.1 Overview of Clustering

In general, clustering is the use of unsupervised techniques for grouping similar objects. In machine learning, unsupervised refers to the problem of finding hidden structure within unlabeled data. Clustering techniques are unsupervised in the sense that the data scientist does not determine, in advance, the labels to apply to the clusters. The structure of the data describes the objects of interest and determines how best to group the objects. For example, based on customers' personal income, it is straightforward to divide the customers into three groups depending on arbitrarily selected values. The customers could be divided into three groups as follows:

• Earn less than $10,000
• Earn between $10,000 and $99,999
• Earn $100,000 or more

In this case, the income levels were chosen somewhat subjectively based on easy-to-communicate points of delineation. However, such groupings do not indicate a natural affinity of the customers within each group. In other words, there is no inherent reason to believe that the customer making $90,000 will behave any differently than the customer making $110,000. As additional dimensions are introduced by adding more variables about the customers, the task of finding meaningful groupings becomes more complex. For instance, suppose variables such as age, years of education, household size, and annual purchase expenditures were considered along with the personal income variable. What are the naturally occurring groupings of customers? This is the type of question that clustering analysis can help answer.
Clustering is a method often used for exploratory analysis of the data. In clustering, there are no predictions made. Rather, clustering methods find the similarities between objects according to the object attributes and group the similar objects into clusters. Clustering techniques are utilized in marketing, economics, and various branches of science. A popular clustering method is k-means.
4.2 K-means

Given a collection of objects each with n measurable attributes, k-means [1] is an analytical technique that, for a chosen value of k, identifies k clusters of objects based on the objects' proximity to the center of the k groups. The center is determined as the arithmetic average (mean) of each cluster's n-dimensional vector of attributes. This section describes the algorithm to determine the k means as well as how best to apply this technique to several use cases. Figure 4-1 illustrates three clusters of objects with two attributes. Each object in the dataset is represented by a small dot color-coded to the closest large dot, the mean of the cluster.
FIGURE 4-1 Possible k-means clusters for k=3
4.2.1 Use Cases

Clustering is often used as a lead-in to classification. Once the clusters are identified, labels can be applied to each cluster to classify each group based on its characteristics. Classification is covered in more detail in Chapter 7, "Advanced Analytical Theory and Methods: Classification." Clustering is primarily an exploratory technique to discover hidden structures of the data, possibly as a prelude to more focused analysis or decision processes. Some specific applications of k-means are image processing, medical, and customer segmentation.
Image Processing

Video is one example of the growing volumes of unstructured data being collected. Within each frame of a video, k-means analysis can be used to identify objects in the video. For each frame, the task is to determine which pixels are most similar to each other. The attributes of each pixel can include brightness, color, and location, the x and y coordinates in the frame. With security video images, for example, successive frames are examined to identify any changes to the clusters. These newly identified clusters may indicate unauthorized access to a facility.
Medical

Patient attributes such as age, height, weight, systolic and diastolic blood pressures, cholesterol level, and other attributes can identify naturally occurring clusters. These clusters could be used to target individuals for specific preventive measures or clinical trial participation. Clustering, in general, is useful in biology for the classification of plants and animals as well as in the field of human genetics.
Customer Segmentation

Marketing and sales groups use k-means to better identify customers who have similar behaviors and spending patterns. For example, a wireless provider may look at the following customer attributes: monthly bill, number of text messages, data volume consumed, minutes used during various daily periods, and years as a customer. The wireless company could then look at the naturally occurring clusters and consider tactics to increase sales or reduce the customer churn rate, the proportion of customers who end their relationship with a particular company.
4.2.2 Overview of the Method

To illustrate the method to find k clusters from a collection of M objects with n attributes, the two-dimensional case (n = 2) is examined. It is much easier to visualize the k-means method in two dimensions. Later in the chapter, the two-dimensional scenario is generalized to handle any number of attributes.

Because each object in this example has two attributes, it is useful to consider each object corresponding to the point (x_i, y_i), where x and y denote the two attributes and i = 1, 2, ..., M. For a given cluster of m points (m ≤ M), the point that corresponds to the cluster's mean is called a centroid. In mathematics, a centroid refers to a point that corresponds to the center of mass for an object.

The k-means algorithm to find k clusters can be described in the following four steps.

1. Choose the value of k and the k initial guesses for the centroids.

   In this example, k = 3, and the initial centroids are indicated by the points shaded in red, green, and blue in Figure 4-2.
FIGURE 4-2 Initial starting points for the centroids
2. Compute the distance from each data point (x_i, y_i) to each centroid. Assign each point to the closest centroid. This association defines the first k clusters.

   In two dimensions, the distance, d, between any two points, (x_1, y_1) and (x_2, y_2), in the Cartesian plane is typically expressed by using the Euclidean distance measure provided in Equation 4-1.

   d = \sqrt{\left( x_1 - x_2 \right)^2 + \left( y_1 - y_2 \right)^2}    (4-1)

   In Figure 4-3, the points closest to a centroid are shaded the corresponding color.

FIGURE 4-3 Points are assigned to the closest centroid
3. Compute the centroid, the center of mass, of each newly defined cluster from Step 2.

   In Figure 4-4, the computed centroids in Step 3 are the lightly shaded points of the corresponding color. In two dimensions, the centroid (x_c, y_c) of the m points in a k-means cluster is calculated as follows in Equation 4-2.

   \left( x_c, y_c \right) = \left( \frac{1}{m} \sum_{i=1}^{m} x_i, \; \frac{1}{m} \sum_{i=1}^{m} y_i \right)    (4-2)

   Thus, (x_c, y_c) is the ordered pair of the arithmetic means of the coordinates of the m points in the cluster. In this step, a centroid is computed for each of the k clusters.

4. Repeat Steps 2 and 3 until the algorithm converges to an answer.
   a. Assign each point to the closest centroid computed in Step 3.

   b. Compute the centroid of newly defined clusters.

   c. Repeat until the algorithm reaches the final answer.

   Convergence is reached when the computed centroids do not change or the centroids and the assigned points oscillate back and forth from one iteration to the next. The latter case can occur when there are one or more points that are equal distances from the computed centroid.

FIGURE 4-4 Compute the mean of each cluster
To generalize the prior algorithm to n dimensions, suppose there are M objects, where each object is described by n attributes or property values (p_1, p_2, ..., p_n). Then object i is described by (p_{i1}, p_{i2}, ..., p_{in}) for i = 1, 2, ..., M. In other words, there is a matrix with M rows corresponding to the M objects and n columns to store the attribute values. To expand the earlier process to find the k clusters from two dimensions to n dimensions, the following equations provide the formulas for calculating the distances and the locations of the centroids for n ≥ 1.

For a given point, p_i, at (p_{i1}, p_{i2}, ..., p_{in}) and a centroid, q, located at (q_1, q_2, ..., q_n), the distance, d, between p_i and q is expressed as shown in Equation 4-3.

d\left( p_i, q \right) = \sqrt{ \sum_{j=1}^{n} \left( p_{ij} - q_j \right)^2 }    (4-3)

The centroid, q, of a cluster of m points, (p_{i1}, p_{i2}, ..., p_{in}), is calculated as shown in Equation 4-4.

q = \left( \frac{1}{m} \sum_{i=1}^{m} p_{i1}, \; \frac{1}{m} \sum_{i=1}^{m} p_{i2}, \; \ldots, \; \frac{1}{m} \sum_{i=1}^{m} p_{in} \right)    (4-4)
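To make Equations 4-3 and 4-4 concrete, the following sketch computes a centroid and a point-to-centroid distance in R for a small, hypothetical cluster of points with n = 3 attributes; the values are arbitrary.

# sketch: centroid (Equation 4-4) and Euclidean distance (Equation 4-3)
pts <- rbind(c(1, 2, 3),
             c(2, 1, 4),
             c(3, 3, 3))            # m = 3 hypothetical points, n = 3 attributes
centroid <- colMeans(pts)           # Equation 4-4: attribute-wise means

# Equation 4-3: distance from the first point to the centroid
d <- sqrt(sum((pts[1, ] - centroid)^2))
d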
4.2.3 Determining the Number of Clusters

With the preceding algorithm, k clusters can be identified in a given dataset, but what value of k should be selected? The value of k can be chosen based on a reasonable guess or some predefined requirement. However, even then, it would be good to know how much better or worse having k clusters versus k - 1 or k + 1 clusters would be in explaining the structure of the data. Next, a heuristic using the Within Sum of Squares (WSS) metric is examined to determine a reasonably optimal value of k. Using the distance function given in Equation 4-3, WSS is defined as shown in Equation 4-5.

WSS = \sum_{i=1}^{M} d\left( p_i, q^{(i)} \right)^2 = \sum_{i=1}^{M} \sum_{j=1}^{n} \left( p_{ij} - q_j^{(i)} \right)^2    (4-5)
In other words, WSS is the sum of the squares of the distances between each data point and the closest centroid. The term q^{(i)} indicates the closest centroid that is associated with the i-th point. If the points are relatively close to their respective centroids, the WSS is relatively small. Thus, if k + 1 clusters do not greatly reduce the value of WSS from the case with only k clusters, there may be little benefit to adding another cluster.
Using R to Perform a K-means Analysis
To illustrate how to use the WSS to determine an appropriate
number, k, of clusters, the following example
uses R to perform a k-means analysis. The task is to group 620
high school seniors based on their grades
in three subject areas: English, mathematics, and science. The
grades are averaged over their high school
career and assume values from 0 to 100. The following R code
establishes the necessary R libraries and
imports the CSV file containing the grades.
library (plyr)
library(ggplot2)
library(cluster)
library(lattice)
library(graphics)
library(grid)
library(gridExtra)
#import the student grades
grade_input =
as.data.frame(read.csv("c:/data/grades_km_input.csv"))
The following R code formats the grades for processing. The data file contains four columns. The first column holds a student identification (ID) number, and the other three columns are for the grades in the three subject areas. Because the student ID is not used in the clustering analysis, it is excluded from the k-means input matrix, kmdata.

kmdata_orig = as.matrix(grade_input[, c("Student", "English", "Math", "Science")])
kmdata <- kmdata_orig[,2:4]
kmdata[1:10,]
      English Math Science
 [1,]      99   96      97
 [2,]      99   96      97
 [3,]      98   97      97
 [4,]      95  100      95
 [5,]      95   96      96
 [6,]      96   97      96
 [7,]     100   96      97
 [8,]      95   98      98
 [9,]      98   96      96
[10,]      99   99      95
To determine an appropriate value for k, the k-means algorithm is used to identify clusters for k = 1, 2, ..., 15. For each value of k, the WSS is calculated. If an additional cluster provides a better partitioning of the data points, the WSS should be markedly smaller than without the additional cluster.

The following R code loops through several k-means analyses for the number of centroids, k, varying from 1 to 15. For each k, the option nstart=25 specifies that the k-means algorithm will be repeated 25 times, each starting with k random initial centroids. The corresponding value of WSS for each k-means analysis is stored in the wss vector.

wss <- numeric(15)
for (k in 1:15) wss[k] <- sum(kmeans(kmdata, centers=k, nstart=25)$withinss)

Using the basic R plot function, each WSS is plotted against the respective number of centroids, 1 through 15. This plot is provided in Figure 4-5.

plot(1:15, wss, type="b", xlab="Number of Clusters", ylab="Within Sum of Squares")
FIGURE 4-5 WSS of the student grade data (x-axis: number of clusters; y-axis: Within Sum of Squares)
As can be seen, the WSS is greatly reduced when k increases from one to two. Another substantial reduction in WSS occurs at k = 3. However, the improvement in WSS is fairly linear for k > 3. Therefore, the k-means analysis will be conducted for k = 3. The process of identifying the appropriate value of k is referred to as finding the "elbow" of the WSS curve.

km <- kmeans(kmdata, 3, nstart=25)
km

K-means clustering with 3 clusters of sizes 158, 218, 244
Cluster means:
English Math Science
1 97.21519 93.37342 94.86076
2 73.22018 64.62844 65.84862
3 85.84426 79.68033 81.50820
Clustering vector:
  [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...
[601] 3 2 2 3 1 1 3 3 3 2 2 3 2
Within cluster sum of squares by cluster:
[1]  6692.589 34806.339 22984.131
 (between_SS / total_SS =  76.5 %)

Available components:
[1] "cluster"      "centers"      "totss"        "withinss"     "tot.withinss"
[6] "betweenss"    "size"         "iter"         "ifault"
The displayed contents of the variable km include the following:

• The location of the cluster means
• A clustering vector that defines the membership of each student to a corresponding cluster 1, 2, or 3
• The WSS of each cluster
• A list of all the available k-means components

The reader can find details on these components and using k-means in R by employing the help facility.

The reader may have wondered whether the k-means results stored in km are equivalent to the WSS results obtained earlier in generating the plot in Figure 4-5. The following check verifies that the results are indeed equivalent.

c( wss[3], sum(km$withinss) )
[1] 64483.06 64483.06
In determining the value of k, the data scientist should visualize the data and assigned clusters. In the following code, the ggplot2 package is used to visualize the identified student clusters and centroids.

# prepare the student data and clustering results for plotting
df <- as.data.frame(kmdata_orig[,2:4])
df$cluster <- factor(km$cluster)
centers <- as.data.frame(km$centers)

g1 <- ggplot(data=df, aes(x=English, y=Math, color=cluster)) +
  geom_point() + theme(legend.position="right") +
  geom_point(data=centers,
             aes(x=English, y=Math, color=as.factor(c(1,2,3))),
             size=10, alpha=.3, show_guide=FALSE)

g2 = ggplot(data=df, aes(x=English, y=Science, color=cluster)) +
  geom_point() +
  geom_point(data=centers,
             aes(x=English, y=Science, color=as.factor(c(1,2,3))),
             size=10, alpha=.3, show_guide=FALSE)

g3 = ggplot(data=df, aes(x=Math, y=Science, color=cluster)) +
  geom_point() +
  geom_point(data=centers,
             aes(x=Math, y=Science, color=as.factor(c(1,2,3))),
             size=10, alpha=.3, show_guide=FALSE)

tmp <- ggplot_gtable(ggplot_build(g1))
grid.arrange(arrangeGrob(g1 + theme(legend.position="none"),
                         g2 + theme(legend.position="none"),
                         g3 + theme(legend.position="none"),
                         main="High School Student Cluster Analysis",
                         ncol=1))
The resulting plots are provided in Figure 4-6. The large circles represent the location of the cluster means provided earlier in the display of the km contents. The small dots represent the students corresponding to the appropriate cluster by assigned color: red, blue, or green. In general, the plots indicate the three clusters of students: the top academic students (red), the academically challenged students (green), and the other students (blue) who fall somewhere between those two groups. The plots also highlight which students may excel in one or two subject areas but struggle in other areas.
FIGURE 4-6 Plots of the identified student clusters (panels: Math vs. English, Science vs. English, and Science vs. Math, under the title "High School Student Cluster Analysis")
Assigning labels to the identified clusters is useful to communicate the results of an analysis. In a marketing context, it is common to label a group of customers as frequent shoppers or big spenders. Such designations are especially useful when communicating the clustering results to business users or executives. It is better to describe the marketing plan for big spenders rather than Cluster #1.
4.2.4 Diagnostics

The heuristic using WSS can provide at least several possible k values to consider. When the number of attributes is relatively small, a common approach to further refine the choice of k is to plot the data to determine how distinct the identified clusters are from each other. In general, the following questions should be considered.

• Are the clusters well separated from each other?
• Do any of the clusters have only a few points?
• Do any of the centroids appear to be too close to each other?

In the first case, ideally the plot would look like the one shown in Figure 4-7, when n = 2. The clusters are well defined, with considerable space between the four identified clusters. However, in other cases, such as Figure 4-8, the clusters may be close to each other, and the distinction may not be so obvious.
FIGURE 4-7 Example of distinct clusters
In such cases, it is important to apply some judgment on whether anything different will result by using more clusters. For example, Figure 4-9 uses six clusters to describe the same dataset as used in Figure 4-8. If using more clusters does not better distinguish the groups, it is almost certainly better to go with fewer clusters.
FIGURE 4-8 Example of less obvious clusters
FIGURE 4-9 Six clusters applied to the points from Figure 4-8
4.2.5 Reasons to Choose and Cautions

K-means is a simple and straightforward method for defining clusters. Once clusters and their associated centroids are identified, it is easy to assign new objects (for example, new customers) to a cluster based on the object's distance from the closest centroid. Because the method is unsupervised, using k-means helps to eliminate subjectivity from the analysis.

Although k-means is considered an unsupervised method, there are still several decisions that the practitioner must make:

• What object attributes should be included in the analysis?
• What unit of measure (for example, miles or kilometers) should be used for each attribute?
• Do the attributes need to be rescaled so that one attribute does not have a disproportionate effect on the results?
• What other considerations might apply?
Object Attributes
Regarding which object attributes (for example, age and
income) to use in the analysis, it is important
to understand what attributes will be known at the time a new
object will be assigned to a cluster. For
example, information on existing customers' satisfaction or
purchase frequency may be available, but such
information may not be available for potential customers.
The Data Scientist may have a choice of a dozen or more
attributes to use in the clustering analysis.
Whenever possible and based on the data, it is best to reduce the
number of attributes to the extent pos-
sible. Too many attributes can minimize the impact of the most
important variables. Also, the use of several
similar attributes can place too much importance on one type of
attribute. For example, if five attributes
related to personal wealth are included in a clustering analysis,
the wealth attributes dominate the analysis
and possibly mask the importance of other attributes, such as
age.
When dealing with the problem of too many attributes, one
useful approach is to identify any highly
correlated attributes and use only one or two of the correlated
attributes in the clustering analysis. As
illustrated in Figure 4-10, a scatterplot matrix, as introduced in
Chapter 3, is a useful tool to visualize the
pair-wise relationships between the attributes.
The strongest relationship is observed to be between Attribute3 and Attribute7. If the value of one of these two attributes is known, it appears that the value of the other attribute is known with near certainty. Other linear relationships are also identified in the plot. For example, consider the plot of Attribute2 against Attribute3. If the value of Attribute2 is known, there is still a wide range of possible values for Attribute3. Thus, greater consideration must be given prior to dropping one of these attributes from the clustering analysis.
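A correlation matrix offers a quick numerical complement to the scatterplot matrix. In the sketch below, attrdata is a hypothetical data frame standing in for the seven attributes; with randomly generated data the flagged list will usually be empty, but on real data it highlights candidate attributes to consider dropping.

# sketch: flag highly correlated attribute pairs before clustering
attrdata <- as.data.frame(matrix(rnorm(700), ncol=7))   # hypothetical attributes
names(attrdata) <- paste0("Attribute", 1:7)

cor_matrix <- round(cor(attrdata), 2)                   # pair-wise correlations
cor_matrix

# pairs with an absolute correlation above an arbitrary 0.9 threshold
which(abs(cor_matrix) > 0.9 & upper.tri(cor_matrix), arr.ind=TRUE)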
Another option to reduce the number of attributes is to combine
several attributes into one measure.
For example, instead of using two attribute variables, one for
Debt and one for Assets, a Debt to Asset ra tio
could be used. This option also addresses the problem when the
magnitude of an attribute is not of real
interest, but the relative magnitude is a more important
measure.
FIGURE 4-10 Scatterplot matrix for seven attributes
Units of Measure

From a computational perspective, the k-means algorithm is somewhat indifferent to the units of measure for a given attribute (for example, meters or centimeters for a patient's height). However, the algorithm will identify different clusters depending on the choice of the units of measure. For example, suppose that k-means is used to cluster patients based on age in years and height in centimeters. For k=2, Figure 4-11 illustrates the two clusters that would be determined for a given dataset.
FIGURE 4-11 Clusters with height expressed in centimeters (x-axis: age in years; y-axis: height in centimeters)
But if the height was rescaled from centimeters to meters by
dividing by 100, the resulting clusters
would be slightly different, as illustrated in Figure 4-12.
FIGURE 4-12 Clusters with height expressed in meters (x-axis: age in years; y-axis: height in meters)
When the height is expressed in meters, the magnitude of the ages dominates the distance calculation between two points. The height attribute contributes only as much as the square of the difference between the maximum height and the minimum height, or (2.0 - 0)² = 4, to the radicand, the number under the square root symbol in the distance formula given in Equation 4-3. Age can contribute as much as (80 - 0)² = 6,400 to the radicand when measuring the distance.
Rescaling

Attributes that are expressed in dollars are common in clustering analyses and can differ in magnitude from the other attributes. For example, if personal income is expressed in dollars and age is expressed in years, the income attribute, often exceeding $10,000, can easily dominate the distance calculation with ages typically less than 100 years.

Although some adjustments could be made by expressing the income in thousands of dollars (for example, 10 for $10,000), a more straightforward method is to divide each attribute by the attribute's standard deviation. The resulting attributes will each have a standard deviation equal to 1 and will be without units. Returning to the age and height example, the standard deviations are 23.1 years and 36.4 cm, respectively. Dividing each attribute value by the appropriate standard deviation and performing the k-means analysis yields the result shown in Figure 4-13.
FIGURE 4-13 Clusters with rescaled attributes (x-axis: age, rescaled; y-axis: height, rescaled)
With the rescaled attributes for age and height, the borders of the resulting clusters now fall somewhere between the two earlier clustering analyses. Such an occurrence is not surprising based on the magnitudes of the attributes of the previous clustering attempts. Some practitioners also subtract the means of the attributes to center the attributes around zero. However, this step is unnecessary because the distance formula is only sensitive to the scale of the attribute, not its location.
In many statistical analyses, it is common to transform typically skewed data, such as income, with long tails by taking the logarithm of the data. Such a transformation can also be applied in k-means, but the Data Scientist needs to be aware of what effect this transformation will have. For example, if log10 of income expressed in dollars is used, the practitioner is essentially stating that, from a clustering perspective, $1,000 is as close to $10,000 as $10,000 is to $100,000 (because log10 1,000 = 3, log10 10,000 = 4, and log10 100,000 = 5). In many cases, the skewness of the data may be the reason to perform the clustering analysis in the first place.
Additional Considerations

The k-means algorithm is sensitive to the starting positions of the initial centroid. Thus, it is important to rerun the k-means analysis several times for a particular value of k to ensure the cluster results provide the overall minimum WSS. As seen earlier, this task is accomplished in R by using the nstart option in the kmeans() function call.
This chapter presented the use of the Euclidean distance function to assign the points to the closest centroids. Other possible function choices include the cosine similarity and the Manhattan distance functions. The cosine similarity function is often chosen to compare two documents based on the frequency of each word that appears in each of the documents [2]. For two points, p and q, at (p_1, p_2, ..., p_n) and (q_1, q_2, ..., q_n), respectively, the Manhattan distance, d_1, between p and q is expressed as shown in Equation 4-6.

d_1\left( p, q \right) = \sum_{j=1}^{n} \left| p_j - q_j \right|    (4-6)

The Manhattan distance function is analogous to the distance traveled by a car in a city, where the streets are laid out in a rectangular grid (such as city blocks). In Euclidean distance, the measurement is made in a straight line. Using Equation 4-6, the distance from (1, 1) to (4, 5) would be |1 - 4| + |1 - 5| = 7. From an optimization perspective, if there is a need to use the Manhattan distance for a clustering analysis, the median is a better choice for the centroid than use of the mean [2].
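The worked distance can be reproduced with base R's dist() function, which supports both measures through its method argument.

# Manhattan distance from (1, 1) to (4, 5): |1 - 4| + |1 - 5| = 7
dist(rbind(c(1, 1), c(4, 5)), method="manhattan")

# for comparison, the straight-line Euclidean distance: sqrt(9 + 16) = 5
dist(rbind(c(1, 1), c(4, 5)), method="euclidean")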
K-means clustering is applicable to objects that can be described by attributes that are numerical with a meaningful distance measure. From Chapter 3, interval and ratio attribute types can certainly be used. However, k-means does not handle categorical variables well. For example, suppose a clustering analysis is to be conducted on new car sales. Among other attributes, such as the sale price, the color of the car is considered important. Although one could assign numerical values to the color, such as red = 1, yellow = 2, and green = 3, it is not useful to consider that yellow is as close to red as yellow is to green from a clustering perspective. In such cases, it may be necessary to use an alternative clustering methodology. Such methods are described in the next section.
4.3 Additional Algorithms

The k-means clustering method is easily applied to numeric data where the concept of distance can naturally be applied. However, it may be necessary or desirable to use an alternative clustering algorithm. As discussed at the end of the previous section, k-means does not handle categorical data. In such cases, k-modes [3] is a commonly used method for clustering categorical data based on the number of differences in the respective components of the attributes. For example, if each object has four attributes, the distance from (a, b, e, d) to (d, d, d, d) is 3. In R, the function kmodes() is implemented in the klaR package.
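The categorical distance described above is a simple mismatch count. The following sketch reproduces the (a, b, e, d) versus (d, d, d, d) example with a small helper function; it illustrates the distance only and does not call the klaR package itself.

# sketch: the mismatch (simple matching) distance that k-modes is based on
mismatch_distance <- function(a, b) sum(a != b)

mismatch_distance(c("a", "b", "e", "d"),
                  c("d", "d", "d", "d"))    # 3 attributes differ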
Because k-means and k-modes divide the entire dataset into distinct groups, both approaches are considered partitioning methods. A third partitioning method is known as Partitioning around Medoids (PAM) [4]. In general, a medoid is a representative object in a set of objects. In clustering, the medoids are the objects in each cluster that minimize the sum of the distances from the medoid to the other objects in the cluster. The advantage of using PAM is that the "center" of each cluster is an actual object in the dataset. PAM is implemented in R by the pam() function included in the cluster R package. The fpc R package includes a function pamk(), which uses the pam() function to find the optimal value for k.
Other clustering methods include hierarchical agglomerative clustering and density clustering methods. In hierarchical agglomerative clustering, each object is initially placed in its own cluster. The clusters are then combined with the most similar cluster. This process is repeated until one cluster, which includes all the objects, exists. The R stats package includes the hclust() function for performing hierarchical agglomerative clustering. In density-based clustering methods, the clusters are identified by the concentration of points. The fpc R package includes a function, dbscan(), to perform density-based clustering analysis. Density-based clustering can be useful to identify irregularly shaped clusters.
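As a minimal sketch of the hierarchical approach, the following code applies hclust() to the ruspini dataset that ships with the cluster package (the same dataset used in Exercise 3 below); the choice of four clusters when cutting the tree is illustrative.

# sketch: hierarchical agglomerative clustering on the ruspini dataset
library(cluster)                  # provides the ruspini dataset
data(ruspini)

hc <- hclust(dist(ruspini))       # agglomerate using Euclidean distances
plot(hc)                          # dendrogram of the merge sequence

clusters <- cutree(hc, k=4)       # cut the tree into four clusters
table(clusters)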
Summary

Clustering analysis groups similar objects based on the objects' attributes. Clustering is applied in areas such as marketing, economics, biology, and medicine. This chapter presented a detailed explanation of the k-means algorithm and its implementation in R. To use k-means properly, it is important to do the following:

• Properly scale the attribute values to prevent certain attributes from dominating the other attributes.
• Ensure that the concept of distance between the assigned values within an attribute is meaningful.
• Choose the number of clusters, k, such that the sum of the Within Sum of Squares (WSS) of the distances is reasonably minimized. A plot such as the example in Figure 4-5 can be helpful in this respect.

If k-means does not appear to be an appropriate clustering technique for a given dataset, then alternative techniques such as k-modes or PAM should be considered.

Once the clusters are identified, it is often useful to label these clusters in some descriptive way. Especially when dealing with upper management, these labels are useful to easily communicate the findings of the clustering analysis. In clustering, the labels are not preassigned to each object. The labels are subjectively assigned after the clusters have been identified. Chapter 7 considers several methods to perform the classification of objects with predetermined labels. Clustering can be used with other analytical techniques, such as regression. Linear regression and logistic regression are covered in Chapter 6, "Advanced Analytical Theory and Methods: Regression."
Exercises
1. Using the age and height clustering example in section 4.2.5,
algebraically illustrate the impact on the
measured distance when the height is expressed in meters rather
than centimeters. Explain why different
clusters will result depending on the choice of units for the
patient's height.
2. Compare and contrast five clustering algorithms, assigned by
the instructor or selected by the student.
3. Using the ruspini dataset provided with the cluster package in R, perform a k-means analysis. Document the findings and justify the choice of k. Hint: use data(ruspini) to load the dataset into the R workspace.
Bibliography
[1] J. MacQueen, "Some Methods for Classification and Analysis of Multivariate Observations," in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, 1967.
[2] P.-N. Tan, V. Kumar, and M. Steinbach, Introduction to Data Mining, Upper Saddle River, NJ: Pearson, 2013.
[3] Z. Huang, "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining," 1997. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134.83&rep=rep1&type=pdf. [Accessed 13 March 2014].
[4] L. Kaufman and P. J. Rousseeuw, "Partitioning Around Medoids (Program PAM)," in Finding Groups in Data: An Introduction to Cluster Analysis, Hoboken, NJ: John Wiley & Sons, Inc., 2008, pp. 68-125, Chapter 2.
ADVANCED ANALYTICAL THEORY AND METHODS: ASSOCIATION RULES
This chapter discusses an unsupervised learning method called
association rules. This is a descriptive, not
predictive, method often used to discover interesting
relationships hidden in a large dataset. The disclosed
relationships can be represented as rules or frequent itemsets.
Association rules are commonly used for
mining transactions in databases.
Here are some possible questions that association rules can
answer:
• Which products tend to be purchased together?
• Of those customers who are similar to this person, what
products do they tend to buy?
• Of those customers who have purchased this product, what
other similar products do they tend to
view or purchase?
5.1 Overview
Figure 5-1 shows the general logic behind association rules. Given a large collection of transactions (depicted as three stacks of receipts in the figure), in which each transaction consists of one or more items, association rules go through the items being purchased to see what items are frequently bought together and to discover a list of rules that describe the purchasing behavior. The goal with association rules is to discover interesting relationships among the items. (The relationship occurs too frequently to be random and is meaningful from a business perspective, which may or may not be obvious.) The relationships that are interesting depend both on the business context and the nature of the algorithm being used for the discovery.
[Figure 5-1 shows stacks of transaction receipts on the left and the discovered rules on the right: Cereal → Milk (90%), Bread → Milk (40%), Milk → Cereal (23%), Milk → Apples (10%), Wine → Diapers (2%).]
FIGURE 5-1 The general logic behind association rules
Each of the uncovered rules is in the form X → Y, meaning that when item X is observed, item Y is also observed. In this case, the left-hand side (LHS) of the rule is X, and the right-hand side (RHS) of the rule is Y.
Using association rules, patterns can be discovered from the data that allow the association rule algorithms to disclose rules of related product purchases. The uncovered rules are listed on the right side of
Figure 5-1. The first three rules suggest that when cereal is
purchased, 90% of the time milk is purchased
also. When bread is purchased, 40% of the time milk is
purchased also. When milk is purchased, 23% of
the time cereal is also purchased.
In the example of a retail store, association rules are used over transactions that consist of one or more items. In fact, because of their popularity in mining customer transactions, association rules are sometimes referred to as market basket analysis. Each transaction can be viewed as the shopping basket of a customer that contains one or more items. This is also known as an itemset. The term itemset refers to a collection of items or individual entities that contain some kind of relationship. This could be a set of retail items purchased together in one transaction, a set of hyperlinks clicked on by one user in a single session, or a set of tasks done in one day. An itemset containing k items is called a k-itemset. This chapter uses curly braces like {item 1, item 2, ..., item k} to denote a k-itemset. Computation of the association rules is typically based on itemsets.
The research of association rules started as early as the 1960s.
Early research by Hajek et al. [1] intro-
duced many of the key concepts and approaches of association
rule learning, but it focused on the
mathematical representation rather than the algorithm. The
framework of association rule learning was
brought into the database community by Agrawal et al. [2] in
the early 1990s for discovering regularities
between products in a large database of customer transactions
recorded by point-of-sale systems in
supermarkets. In later years, it expanded to web contexts, such
as mining path traversal patterns [3] and
usage patterns [4] to facilitate organization of web pages.
This chapter chooses Apriori as the main focus of the discussion of association rules. Apriori [5] is one of the earliest and the most fundamental algorithms for generating association rules. It pioneered the use of support for pruning the itemsets and controlling the exponential growth of candidate itemsets. Shorter candidate itemsets, which are known to be frequent itemsets, are combined and pruned to generate longer frequent itemsets. This approach eliminates the need for all possible itemsets to be enumerated within the algorithm, since the number of all possible itemsets can become exponentially large.
One major component of Apriori is support. Given an itemset L, the support [2] of L is the percentage of transactions that contain L. For example, if 80% of all transactions contain itemset {bread}, then the support of {bread} is 0.8. Similarly, if 60% of all transactions contain itemset {bread, butter}, then the support of {bread, butter} is 0.6.
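A small base-R sketch of this calculation is shown below; the toy transactions are invented so that the supports match the 0.8 and 0.6 values in the example above.

# Minimal sketch of computing support from a list of toy transactions
transactions <- list(
  c("bread", "butter", "milk"),
  c("bread", "butter"),
  c("bread", "butter", "jam"),
  c("bread"),
  c("milk")
)

support <- function(itemset, trans) {
  mean(sapply(trans, function(t) all(itemset %in% t)))
}

support("bread", transactions)               # 0.8: {bread} appears in 4 of 5 transactions
support(c("bread", "butter"), transactions)  # 0.6: {bread, butter} appears in 3 of 5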
A frequent itemset has items that appear together often enough. The term "often enough" is formally defined with a minimum support criterion. If the minimum support is set at 0.5, any itemset can be considered a frequent itemset if at least 50% of the transactions contain this itemset. In other words, the support of a frequent itemset should be greater than or equal to the minimum support. For the previous example, both {bread} and {bread, butter} are considered frequent itemsets at the minimum support of 0.5.
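As a hedged sketch of how a minimum support criterion is applied in practice, the example below uses the arules package (an assumption here, not something introduced by this passage) to list the frequent itemsets in the same toy transactions at a 0.5 minimum support.

# Minimal sketch: frequent itemsets at minimum support 0.5 with the arules package
library(arules)

baskets <- list(
  c("bread", "butter", "milk"),
  c("bread", "butter"),
  c("bread", "butter", "jam"),
  c("bread"),
  c("milk")
)
trans <- as(baskets, "transactions")

freq <- apriori(trans,
                parameter = list(support = 0.5, target = "frequent itemsets"))
inspect(freq)   # {bread}, {butter}, and {bread, butter} meet the 0.5 minimum support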
  • 29. especially when it does not conform to traditional notions of data structure, to identify meaningful patterns and extract useful information. These challenges of the data deluge present the opportunity to transform business, government, science, and everyday life. Several industries have led the way in developing their ability to gather and exploit data: • Credit ca rd companies monitor every purchase their customers make and can identify fraudulent purchases with a high degree of accuracy using rules derived by processing billions of transactions. • Mobi le phone companies analyze subscribers' calling patterns to determine, for example, whether a caller's frequent contacts are on a rival network. If that rival network is offeri ng an attractive promo- tion t hat might cause the subscriber to defect, the mobile phone company can proactively offer the subscriber an incentive to remai n in her contract. • For compan ies such as Linked In and Facebook, data itself is their primary product. The valuations of these compan ies are heavi ly derived from the data they gather and host, which contains more and more intrinsic va lue as the data grows. Three attributes stand out as defining Big Data characteristics: • Huge volume of data: Rather than thousands or millions of rows, Big Data can be billions of rows and millions of columns. • Complexity of data t ypes and st ructures: Big Data reflects
  • 30. the variety of new data sources, forma ts, and structures, including digital traces being left on the web and other digital repositories for subse- quent analysis. • Speed of new dat a crea tion and growt h: Big Data can describe high velocity data, with rapid data ingestion and near real time analysis. Although the vol ume of Big Data tends to attract the most attention, genera lly the variety and veloc- ity of the data provide a more apt defi nition of Big Data. (Big Data is sometimes described as havi ng 3 Vs: volu me, vari ety, and velocity.) Due to its size or structure, Big Data cannot be efficiently analyzed using on ly traditional databases or methods. Big Data problems req uire new tools and tech nologies to store, manage, and realize the business benefit. These new tools and technologies enable creation, manipulation, and 1.1 Big Data Overview management of large datasets and t he storage environments that house them. Another definition of Big Data comes from the McKi nsey Global report from 2011: Big Data is data whose s cale, dis tribution, diversity, and/ or timeliness require th e use of new technical architectures and analytics to e nable insights that unlock ne w sources of business value.
• 31. McKinsey & Co.; Big Data: The Next Frontier for Innovation, Competition, and Productivity [1]
McKinsey's definition of Big Data implies that organizations will need new data architectures and analytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into the new role of the data scientist, which will be discussed in Section 1.3. Figure 1-1 highlights several sources of the Big Data deluge.
FIGURE 1-1 What's driving the data deluge (sources shown include mobile sensors, smart grids, social media, geophysical exploration, video surveillance, medical imaging, video rendering, and gene sequencing)
• 32. The rate of data creation is accelerating, driven by many of the items in Figure 1-1. Social media and genetic sequencing are among the fastest-growing sources of Big Data and examples of untraditional sources of data being used for analysis. For example, in 2012 Facebook users posted 700 status updates per second worldwide, which can be leveraged to deduce latent interests or political views of users and show relevant ads. For instance, an update in which a woman changes her relationship status from "single" to "engaged" would trigger ads on bridal dresses, wedding planning, or name-changing services. Facebook can also construct social graphs to analyze which users are connected to each other as an interconnected network. In March 2013, Facebook released a new feature called "Graph Search," enabling users and developers to search social graphs for people with similar interests, hobbies, and shared locations.
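The social graph idea can be made concrete at toy scale. The sketch below is not from the book; it assumes the igraph package and invented user names, and simply shows how connection data of this kind can be represented and queried as a graph in R.

# Minimal sketch of a social graph, assuming the igraph package and hypothetical users
library(igraph)

friendships <- data.frame(
  from = c("ana", "ana",   "bob",   "carla", "dave"),
  to   = c("bob", "carla", "carla", "dave",  "erin")
)

g <- graph_from_data_frame(friendships, directed = FALSE)

degree(g)             # number of direct connections for each user
neighbors(g, "carla") # users directly connected to "carla"

At production scale the same questions are asked of billions of connections, which is what pushes this kind of analysis beyond single-machine tools.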
• 33. Another example comes from genomics. Genetic sequencing and human genome mapping provide a detailed understanding of genetic makeup and lineage. The health care industry is looking toward these advances to help predict which illnesses a person is likely to get in his lifetime and take steps to avoid these maladies or reduce their impact through the use of personalized medicine and treatment. Such tests also highlight typical responses to different medications and pharmaceutical drugs, heightening risk awareness of specific drug treatments. While data has grown, the cost to perform this work has fallen dramatically. The cost to sequence one human genome has fallen from $100 million in 2001 to $10,000 in 2011, and the cost continues to drop. Now, websites such as 23andme (Figure 1-2) offer genotyping for less than $100. Although genotyping analyzes only a fraction of a genome and does not provide as much granularity as genetic sequencing, it does point to the fact that data and complex analysis are becoming more prevalent and less expensive to deploy.
• 34. FIGURE 1-2 Examples of what can be learned through genotyping, from 23andme.com (the screenshots highlight "23 pairs of chromosomes. One unique you."; ancestry composition by world region; finding relatives across continents or across the street; and building a family tree)
As illustrated by the examples of social media and genetic sequencing, individuals and organizations both derive benefits from analysis of ever-larger and more complex data sets that require increasingly powerful analytical capabilities.
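To put the sequencing cost figures just cited in perspective, the implied average annual rate of decline can be worked out in a couple of lines of R. This is a back-of-the-envelope calculation, not a statement from the original text.

# Implied annual decline in genome sequencing cost, using the figures cited above
cost_2001 <- 100e6   # $100 million in 2001
cost_2011 <- 10e3    # $10,000 in 2011
years     <- 2011 - 2001

annual_factor  <- (cost_2011 / cost_2001)^(1 / years)
annual_decline <- 1 - annual_factor
round(annual_decline, 2)  # about 0.60, i.e., roughly a 60% drop per year on average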
• 35. 1.1.1 Data Structures
Big data can come in multiple forms, including structured and nonstructured data such as financial data, text files, multimedia files, and genetic mappings. Contrary to much of the traditional data analysis performed by organizations, most of the Big Data is unstructured or semi-structured in nature, which requires different techniques and tools to process and analyze. [2] Distributed computing environments and massively parallel processing (MPP) architectures that enable parallelized data ingest and analysis are the preferred approach to process such complex data. With this in mind, this section takes a closer look at data structures. Figure 1-3 shows four types of data structures, with 80-90% of future data growth coming from nonstructured data types. [2] Though different, the four are commonly mixed. For example, a classic Relational Database Management System (RDBMS) may store call logs for a software support call center. The RDBMS may store characteristics of the support calls as typical structured data, with attributes such as time stamps, machine type, problem type, and operating system. In addition, the system will likely have unstructured, quasi-, or semi-structured data, such as free-form call log information taken from an e-mail ticket of the problem, customer chat history, a transcript of a phone call describing the technical problem and the solution, or an audio file of the phone call conversation. Many insights could be extracted from the unstructured, quasi-, and semi-structured data in the call center data.
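To make the call center example concrete, the sketch below shows what a single support-call record might look like in R. The field names and values are invented for illustration and are not taken from the book.

# Hypothetical support-call record: structured attributes alongside free-form text
call_log <- data.frame(
  time_stamp       = as.POSIXct("2014-06-01 09:42:00"),
  machine_type     = "laptop",
  problem_type     = "boot failure",
  operating_system = "Windows 7",
  # unstructured, free-form notes taken from the e-mail ticket or chat history
  call_notes       = "Customer reports blue screen after update; resolved by rolling back the display driver.",
  stringsAsFactors = FALSE
)

str(call_log)  # the first four columns parse cleanly; call_notes requires text analytics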
• 36. FIGURE 1-3 Big Data growth is increasingly unstructured (the chart, titled "Big Data Characteristics: Data Structures," contrasts structured data with semi-structured, quasi-structured, and unstructured data, with most future growth in the latter categories)
Although analyzing structured data tends to be the most familiar technique, a different technique is required to meet the challenges of analyzing semi-structured data (shown as XML), quasi-structured data (shown as a clickstream), and unstructured data. Here are examples of how each of the four main types of data structures may look.
• 37. o Structured data: Data containing a defined data type, format, and structure (that is, transaction data, online analytical processing [OLAP] data cubes, traditional RDBMS, CSV files, and even simple spreadsheets). See Figure 1-4.
• 38. FIGURE 1-4 Example of structured data (a Summer Food Service Program table, data as of August 01, 2011, with columns for Fiscal Year, Number of Sites in thousands, Peak July Participation in thousands, Meals Served in millions, and Total Federal Expenditures in millions of dollars, covering fiscal years 1969 through 1990)
o Semi-structured data: Textual data files with a discernible pattern that enables parsing (such as Extensible Markup Language [XML] data files that are self-describing and defined by an XML schema). See Figure 1-5.
o Quasi-structured data: Textual data with erratic data formats that can be formatted with effort, tools, and time (for instance, web clickstream data that may contain inconsistencies in data values and formats). See Figure 1-6.
o Unstructured data: Data that has no inherent structure, which may include text documents, PDFs, images, and video. See Figure 1-7.
Quasi-structured data is a common phenomenon that bears closer scrutiny. Consider the following example. A user attends the EMC World conference and subsequently runs a Google search online to find information related to EMC and Data Science. This would produce a URL such as https://www.google.com/#q=EMC+data+science and a list of results, such as in the first graphic of Figure 1-6.
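As a small illustration of what "formatted with effort, tools, and time" means for quasi-structured data, the sketch below pulls structured fields out of logged search URLs like the one above. The parsing logic is illustrative only and assumes this particular URL pattern; it is not a general-purpose clickstream parser.

# Minimal sketch: extracting structured fields from logged URLs (quasi-structured text)
urls <- c(
  "https://www.google.com/#q=EMC+data+science",
  "https://education.emc.com/guest/campaign/data_science.aspx"
)

host  <- sub("^https?://([^/]+)/.*$", "\\1", urls)   # site visited
query <- ifelse(grepl("#q=", urls, fixed = TRUE),
                sub("^.*#q=", "", urls),             # search terms, if present
                NA_character_)

clickstream <- data.frame(host = host, query = query, stringsAsFactors = FALSE)
clickstream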
• 39. FIGURE 1-5 Example of semi-structured data (an excerpt of the self-describing HTML/XML source behind an EMC web page, with markup such as <title> and <link> tags surrounding the content)
• 40. After doing this search, the user may choose the second link, to read more about the headline "Data
  • 41. Scientist- EM( Educa tion, Training, and Certification." This brings the user to an erne . com site focu sed on this topic and a new URL, h t t p s : I / e d ucation . e rne . com/ guest / campa i gn / data_ science INTRODUCTION TO BIG DATA ANALYTICS 1 . aspx, that displays the page shown as (2) in Figure 1-6. Arrivi ng at this site, the user may decide to click to learn more about the process of becoming certified in data science. The user chooses a link to ward the top of the page on Certifications, bringing the user to a new URL: ht tps : I I education. erne. com/ guest / certifica tion / framewo rk / stf / data_science . aspx, which is (3) in Figure 1-6. Visiting these three websites adds three URLs to the log files monitoring the user's computer or network use. These three URLs are: https: // www.google . com/# q=EMC+data+ s cience https: // education . emc.com/ guest / campaign/ data science . aspx https : // education . emc . com/ guest / certification/ framework / stf / data_ science . aspx - - ...... - .._.. ............. _ O.Uk*-andi'IO..~T~ · OIC~ o ---·- t..._ ·-- . -- ·-A-- ------·----- .. -,.. _ , _____ ....
  • 42. 0.. ldHIWI • DtC (Ot.aiiOI. l....,... and~ 0 --- -~-~· 1 .. ....... _ .. _....._. __ , ___ -~-·-· · ~----"' .. ~_.,.. ..... - :c ~::...~ and Cenbbcrt 0 t-e •·,-'""""... '•'-""'•• ..,...._ _ ... --...... ~ .... __ .... .....,.,_.... ... ,...._~· - ~O•Uik~R........, A0.1t-~~_,...h", • £MC O --------.. ... .- . '"" ..._. ______ , ______ ...., - - ···-.. ... -~--.-- .... https:/ /www.google.com/#q 3 ------ --- ,_ __ ---- ~-:::.::.::·--===-=-== .. ------·---------·------..---::=--.....::..-..=- .:.-.=-....... -- ·------·--- -·---·--·---·~--·-· -----------·--·--., ______ ... ___ ____ _ -·------- ---·-______ , _______ _ - -------~ · --· ----- >l __ _ __ , , _ _ _ ... , ------., :::... :: FiGURE 1-6 Example of EMC Data Science search results
  • 43. 1.1 Big Data Overview FIGURE 1-7 Example of unstructured data: video about Antarctica expedition [3] This set of three URLs reflects the websites and actions taken to find Data Science inform ation related to EMC. Together, this comprises a clicksrream that can be parsed and mined by data scientists to discover usage patterns and uncover relation ships among clicks and areas of interest on a website or group of sites. The four data types described in this chapter are sometimes generalized into two groups: structured and unstructu red data. Big Data describes new kinds of data with which most organizations may not be used to working. With this in mind, the next section discusses common technology arch itectures from the standpoint of someone wanting to analyze Big Data. 1.1.2 Analyst Perspective on Data Repositories The introduction of spreadsheets enabled business users to crea te simple logic on data structured in rows and columns and create their own analyses of business problems. Database administrator training is not requ ired to create spreadsheets: They can be set up to do many things qu ickly and independently of information technology (IT) groups. Spreadsheets are easy to share, and end users have control over the logic involved. However, their proliferation can result in "many
  • 44. versions of the t ruth." In other words, it can be challenging to determine if a particular user has the most relevant version of a spreadsheet, with the most current data and logic in it. Moreover, if a laptop is lost or a file becomes corrupted, the data and logic within the spreadsheet could be lost. This is an ongoing challenge because spreadsheet programs such as Microsoft Excel still run on many computers worldwide. With the proliferation of data islands (or spread marts), the need to centralize the data is more pressing than ever. As data needs grew, so did mo re scalable data warehousing solutions. These technologies enabled data to be managed centrally, providing benefits of security, failover, and a single repository where users INTRODUCTION TO BIG DATA ANALYTICS could rely on getting an "official" source of data for finan cial reporting or other mission-critical tasks. This structure also enabled the creation ofOLAP cubes and 81 analytical tools, which provided quick access to a set of dimensions within an RD8MS. More advanced features enabled performance of in-depth analytical techniques such as regressions and neural networks. Enterprise Data Warehouses (EDWs) are critica l for reporting and 81 tasks and solve many of the problems that proliferating spreadsheets introduce, such as which of multiple versions of a spreadsheet is correct. EDWs-
  • 45. and a good 81 strategy-provide direct data feeds from sources that are centrally managed, backed up, and secured. Despite the benefits of EDWs and 81, these systems tend to restri ct the flexibility needed to perform robust or exploratory data analysis. With the EDW model, data is managed and controlled by IT groups and database administrators (D8As), and data analysts must depend on IT for access and changes to the data schemas. This imposes longer lead ti mes for analysts to get data; most of the time is spent waiting for approvals rather than starting meaningful work. Additionally, many times the EDW rul es restrict analysts from building datasets. Consequently, it is com mon for additional systems to emerge containing critical data for constructing analytic data sets, managed locally by power users. IT groups generally dislike exis- tence of data sources outside of their control because, unlike an EDW, these data sets are not managed, secured, or backed up. From an analyst perspective, EDW and 81 solve problems related to data accuracy and availabi lity. However, EDW and 81 introduce new problems related to flexibility and agil ity, which were less pronounced when dealing with spreads heets. A solution to this problem is the analytic sandbox, which attempts to resolve the conflict for analysts and data scientists with EDW and more formally managed corporate data. In this model, the IT group may still
  • 46. manage the analytic sandboxes, but they will be purposefully designed to enable robust analytics, while being centrally managed and secured. These sandboxes, often referred to as workspaces, are designed to enable teams to explore many datasets in a controlled fashion and are not typically used for enterprise- level financial reporting and sales dashboards. Many times, analytic sa ndboxes enable high-performance computing using in-database processing- the analytics occur within the database itself. The idea is that performance of the analysis will be better if the analytics are run in the database itself, rather than bringing the data to an analytical tool that resides somewhere else. In-database analytics, discussed further in Chapter 11, "Advanced Analytics- Technology and Tools: In-Database Analytics." creates relationships to multiple data sources within an organization and saves time spent creating these data feeds on an individual basis. In-database processing for deep analytics enables faster turnaround time for developing and executing new analytic models, while reducing, though not eli minating, the cost associated with data stored in local, "shadow" file systems. In addition, rather than the typical structured data in the EDW, analytic sandboxes ca n house a greater variety of data, such as raw data, textual data, and other kinds of unstructured data, without interfering with critical production databases. Table 1-1 summarizes the characteristics of the data repositories mentioned in this section.
  • 47. TABLE 1-1 Types of Data Repositories, from an Analyst Perspective Data Repository Characteristics Spreadsheets and data marts ("spreadmarts") Spreadsheets and low-volume databases for record keeping Analyst depends on data extracts. Data Warehouses Analytic Sandbox (works paces) 1.2 State of the Practice in Analytics Centralized data containers in a purpose-built space Suppo rt s Bl and reporting, but restri cts robust analyses Ana lyst d ependent o n IT and DBAs for data access and schema changes Ana lysts must spend significant t ime to g et aggregat ed and d isaggre- gated data extracts f rom multiple sources.
  • 48. Data assets gathered f rom multiple sources and technologies fo r ana lysis Enables fl exible, high-performance ana lysis in a nonproduction environ- ment; can leverage in-d atabase processing Reduces costs and risks associated w ith data replication into "shadow" file systems "Analyst owned" rather t han "DBA owned" There are several things to consider with Big Data Analytics projects to ensure the approach fits w ith the desired goals. Due to the characteristics of Big Data, these projects le nd them selves to decision su p- port for high-value, strategic decision making w ith high processing complexi t y. The analytic techniques used in this context need to be iterative and fl exible, due to the high volume of data and its complexity. Performing rapid and complex analysis requires high throughput network con nections and a consideration for the acceptable amount of late ncy. For instance, developing a real- t ime product recommender for a website imposes greater syst em demands than developing a near· real·time recommender, which may still pro vide acceptable p erform ance, have sl ight ly greater
  • 49. latency, and may be cheaper to deploy. These considerations requi re a different approach to thinking about analytics challenges, which will be explored further in the next section. 1.2 State of the Practice in Analytics Current business problems provide many opportunities for organizations to become more analytical and data dri ven, as shown in Table 1 ·2. TABLE 1-2 Business Drivers for Advanced Analytics Business Driver Examples Optimize business operations Identify business ri sk Predict new business opportunities Comply w ith laws or regu latory requirements Sales, pricing, profitability, efficiency Customer churn, fraud, default Upsell, cross-sell, best new customer prospects Anti-Money Laundering, Fa ir Lending, Basel II-III, Sarbanes- Oxley(SOX)
  • 50. INTRODUCTION TO BIG DATA ANALYTICS Table 1-2 outlines four categories of common business problems that organizations contend with where they have an opportunity to leverage advanced analytics to create competitive advantage. Rather than only performing standard reporting on these areas, organizations can apply advanced analytical techniques to optimize processes and derive more value from these common tasks. The first three examples do not represent new problems. Organizations have been trying to reduce customer churn, increase sales, and cross-sell customers for many years. What is new is the opportunity to fuse advanced analytical techniques with Big Data to produce more impactful analyses for these traditional problems. The last example por- trays emerging regulatory requirements. Many compliance and regulatory laws have been in existence for decades, but additional requirements are added every year, which represent additional complexity and data requirements for organizations. Laws related to anti-money laundering (AML) and fraud prevention require advanced analytical techniques to comply with and manage properly. 1.2.1 81 Versus Data Science The four business drivers shown in Table 1-2 require a variety of analytical techniques to address them prop- erly. Although much is written generally about analytics, it is important to distinguish between Bland Data Science. As shown in Figure 1-8, there are several ways to compare these groups of analytical techniques. One way to evaluate the type of analysis being performed is to
  • 51. examine the time horizon and the kind of analytical approaches being used. Bl tends to provide reports, dashboards, and queries on business questions for the current period or in the past. Bl systems make it easy to answer questions related to quarter-to-date revenue, progress toward quarterly targets, and understand how much of a given product was sold in a prior quarter or year. These questions tend to be closed-ended and explain current or past behavior, typically by aggregating historical data and grouping it in some way. 81 provides hindsight and some insight and generally answers questions related to "when" and "where" events occurred. By comparison, Data Science tends to use disaggregated data in a more forward-looking, exploratory way, focusing on analyzing the present and enabling informed decisions about the future. Rather than aggregating historical data to look at how many of a given product sold in the previous quarter, a team may employ Data Science techniques such as time series analysis, further discussed in Chapter 8, "Advanced Analytical Theory and Methods: Time Series Analysis," to forecast future product sales and revenue more accurately than extending a simple trend line. In addition, Data Science tends to be more exploratory in nature and may use scenario optimization to deal with more open-ended questions. This approach provides insight into current activity and foresight into future events, while generally focusing on questions related to "how" and "why" events occur. Where 81 problems tend to require highly structured data organized in rows and columns for accurate reporting, Data Science projects tend to use many types of data sources, including large or unconventional
  • 52. datasets. Depending on an organization's goals, it may choose to embark on a 81 project if it is doing reporting, creating dashboards, or performing simple visualizations, or it may choose Data Science projects if it needs to do a more sophisticated analysis with disaggregated or varied datasets. Exploratory Analytical Approach Explanatory I , .. -- ---, 1 Busin ess 1 1 Inte lligence 1 , .... _____ .., Past fiGUR E 1 ·8 Comparing 81 with Data Science 1.2.2 Current Analytical Architecture 1 .2 State ofthe Practice In Analytlcs Predictive Analytics and Data Mini ng (Data Sci ence) Typical • Optimization. predictive modo lin£ Techniques forocastlnC. statlatlcal analysis
  • 53. and • Structured/unstructured data. many Data Types types of sources, very Ioree datasata Common Questions Typical Techniques and Data Types Tim e Common Questions • What II ... ? • What's tho optlmaltconarlo tor our bualnoss? • What wtll happen next? What II these trend$ continuo? Why Is this happonlnt? Busi ness Intelligence • Standard and ad hoc reportlnc. dashboards. alerts, queries, details on demand • Structured data. traditional sourcoa. manac:eable datasets • What happened lut quarter? • How many units sold?
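The contrast drawn in this section between extending a simple trend line and applying a time series method can be sketched in a few lines of R. The data below are simulated purely for illustration, and Holt-Winters exponential smoothing is used only because it is built into base R; Chapter 8 discusses time series modeling in more depth. This snippet is not from the book.

# Simulated monthly sales with trend and seasonality (illustrative data only)
set.seed(1)
month_index <- 1:48
sales <- ts(100 + 2 * month_index + 15 * sin(2 * pi * month_index / 12) + rnorm(48, sd = 5),
            frequency = 12, start = c(2010, 1))

# Simple approach: extend a straight trend line into the next year
trend_fit      <- lm(as.numeric(sales) ~ month_index)
trend_forecast <- predict(trend_fit, newdata = data.frame(month_index = 49:60))

# Time series approach: a model that also captures the monthly seasonal pattern
hw_fit      <- HoltWinters(sales)
hw_forecast <- predict(hw_fit, n.ahead = 12)

round(trend_forecast[1:3])  # straight-line extrapolation of the trend
round(hw_forecast[1:3])     # forecast that follows the seasonal pattern as well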
  • 54. • Whore Is the problem? In whic h situations? Future As described earlier, Data Science projects need workspaces that are purpose-built for experimenting with data, with flexible and agile data architectures. Most organizations still have data warehouses that provide excellent support for traditional reporting and simple data analysis activities but unfortunately have a more difficult time supporting more robust analyses. This section examines a typical analytical data architecture that may exist within an organization. Figure 1-9 shows a typical data architecture and several of the challenges it presents to data scientists and others trying to do advanced analytics. This section examines the data flow to the Data Scientist and how this individual tits into the process of getting data to analyze on proj ects. INTRODUCTION TO BIG DATA ANALYTICS FIGURE 1-9 Typical analytic architecture i..l ,_, It An alysts
  • 55. Dashboards Reports Al erts 1. For data sources to be loaded into the data wa rehouse, data needs to be well understood, structured, and normalized with the appropriate data type defini t ions. Although th is kind of centralization enabl es security, backup, and fai lover of highly critical data, it also means that data typically must go through significant preprocessing and checkpoints before it can enter this sort of controll ed environment, which does not lend itself to data exploration and iterative analytic s. 2. As a result of t his level of control on the EDW, add itional local systems may emerge in the form of departmental wa rehou ses and loca l data marts t hat business users create to accommodate thei r need for flexible analysis. These local data marts may not have the same constraints for secu- ri ty and structu re as the main EDW and allow users to do some level of more in-depth analysis. However, these one-off systems reside in isolation, often are not synchronized or integrated with other data stores, and may not be backed up. 3. Once in the data warehouse, data is read by additional applications across the enterprise for Bl and reporting purposes. These are high-priority operational processes getting critical data feeds from the data warehouses and repositories.
  • 56. 4. At the end of this workfl ow, analysts get data provisioned for their downstream ana lytics. Because users generally are not allowed to run custom or intensive analytics on production databases, analysts create data extracts from the EDW to analyze data offline in R or other local analytical tools. Many times the se tools are lim ited to in- memory analytics on desktops analyz- ing sa mples of data, rath er than the entire population of a dataset. Because the se analyses are based on data extracts, they reside in a separate location, and the results of the analysis-and any insights on the quality of the data or anomalies- rarely are fed back into the main data repository. Because new data sources slowly accum ulate in the EDW due to the rigorous validation and data struct uring process, data is slow to move into the EDW, and the data schema is slow to change. 1.2 State of the Practice in Analytics Departmental data warehouses may have been originally designed for a specific purpose and set of business needs, but over time evolved to house more and more data, some of which may be forced into existing schemas to enable Bland the creation of OLAP cubes for analysis and reporting. Although the EDW achieves the objective of reporting and sometimes the creation of dashboards, EDWs generally limit the ability of analysts to iterate on the data in a separate nonproduction environment where they can conduct in-depth
  • 57. analytics or perform analysis on unstructured data. The typical data architectures just described are designed for storing and processing mission-critical data, supporting enterprise applications, and enabling corporate reporting activities. Although reports and dashboards are still important for organizations, most traditional data architectures inhibit data exploration and more sophisticated analysis. Moreover, traditional data architectures have several additional implica- tions for data scientists. o High-value data is hard to reach and leverage, and predictive analytics and data mining activities are last in line for data. Because the EDWs are designed for central data management and reporting, those wanting data for analysis are generally prioritized after operational processes. o Data moves in batches from EDW to local analytical tools. This workflow means that data scientists are limited to performing in-memory analytics (such as with R, SAS, SPSS, or Excel), which will restrict the size of the data sets they can use. As such, analysis may be subject to constraints of sampling, which can skew model accuracy. o Data Science projects will remain isolated and ad hoc, rather than centrally managed. The implica- tion of this isolation is that the organization can never harness the power of advanced analytics in a scalable way, and Data Science projects will exist as nonstandard initiatives, which are frequently not aligned with corporate business goals or strategy. All these symptoms of the traditional data architecture result in
  • 58. a slow "time-to-insight" and lower business impact than could be achieved if the data were more readily accessible and supported by an envi- ronment that promoted advanced analytics. As stated earlier, one solution to this problem is to introduce analytic sandboxes to enable data scientists to perform advanced analytics in a controlled and sanctioned way. Meanwhile, the current Data Warehousing solutions continue offering reporting and Bl services to support management and mission-critical operations. 1.2.3 Drivers of Big Data To better understand the market drivers related to Big Data, it is helpful to first understand some past history of data stores and the kinds of repositories and tools to manage these data stores. As shown in Figure 1-10, in the 1990s the volume of information was often measured in terabytes. Most organizations analyzed structured data in rows and columns and used relational databases and data warehouses to manage large stores of enterprise information. The following decade saw a proliferation of different kinds of data sources-mainly productivity and publishing tools such as content management repositories and networked attached storage systems-to manage this kind of information, and the data began to increase in size and started to be measured at petabyte scales. In the 2010s, the information that organizations try to manage has broadened to include many other kinds of data. In this era, everyone and everything is leaving a digital footprint. Figure 1-10 shows a summary perspective on sources of Big Data generated by new applications and the scale and growth rate of the data. These applications, which generate data volumes that can be measured in exabyte scale,
  • 59. provide opportunities for new analytics and driving new value for organizations. The data now comes from multiple sources, such as these: INTRODUCTION TO BIG DATA ANALYTICS • Medical information, such as genomic sequencing and diag nostic imagi ng • Photos and video footage uploaded to the World Wide Web • Video surveillance, such as the thousands of video ca meras spread across a city • Mobile devices, which provide geospatiallocation data of the users, as well as metadata about text messages, phone calls, and application usage on smart phones • Smart devices, which provide sensor-based collection of information from smart electric grids, smart bu ildings, and many other public and ind ustry infrastructures • Nontraditional IT devices, including the use of radio-freq uency identifica tion (RFID) reader s, GPS navigation systems, and seismic processing MEASURED IN MEASURED IN WILL BE MEASURED IN TERABYTES PET A BYTES EXABYTES lTB • 1.000GB lPB • l .OOOTB lEB l .OOOPB IIEII You(D
  • 60. .... ~ .. ·, A n '' . ~ I b ~ ~ ~ SMS w: '-----" ORACLE = 1.9905 20005 201.05 ( RDBMS & DATA (CONTENT & DIGITAL ASSET (NO-SQL & KEY VALUE) WAREHOUSE) MANAGEMENT) FIGURE 1-10 Data evolution and the rise of Big Data sources Th e Big Data t rend is ge nerating an enorm ous amount of information from many new sources. This data deluge requires advanced analytics and new market players to take adva ntage of these opportunities and new market dynamics, which wi ll be discussed in the following section. 1.2.4 Emerging Big Data Ecosystem and a New Approach to Analytics Organ izations and data collectors are realizing that the data they ca n gath er from individuals contain s intrinsic value and, as a result, a new economy is emerging. As this new digital economy continues to
  • 61. 1.2 State of the Practice in Analytics evol ve, the market sees the introduction of data vendors and data cl eaners that use crowdsourcing (such as Mechanica l Turk and Ga laxyZoo) to test the outcomes of machine learning techniques. Other vendors offer added va lue by repackaging open source tools in a simpler way and bri nging the tools to market. Vendors such as Cloudera, Hortonworks, and Pivotal have provid ed thi s value-add for the open source framework Hadoop. As the new ecosystem takes shape, there are four main groups of playe rs within this interconnected web. These are shown in Figure 1-11. • Data devices [shown in the (1) section of Figure 1-1 1] and the "Sensornet" gat her data from multiple locations and continuously generate new data about th is data. For each gigabyte of new data cre- ated, an additional petabyte of data is created about that data. [2) • For example, consider someone playing an online video game through a PC, game console, or smartphone. In this case, the video game provider captures data about the skill and levels attained by the playe r. Intelligent systems monitor and log how and when the user plays the game. As a consequence, the game provider can fine -tune the difficulty of the game,
  • 62. suggest other related games that would most likely interest the user, and offer add itional equipment and enhancements for the character based on the user's age, gender, and interests. Th is information may get stored loca lly or uploaded to the game provider's cloud to analyze t he gaming habits and opportunities for ups ell and cross-sell, and identify archetypica l profiles of specific kinds of users. • Smartphones provide another rich source of data . In add ition to messag ing and basic phone usage, they store and transmit data about Internet usage, SMS usage, and real-time location. This metadata can be used for analyzing traffic patterns by sca nning the density of smart- phones in locations to track the speed of cars or the relative traffi c congestion on busy roads. In t his way, GPS devices in ca rs can give drivers real- time updates an d offer altern ative routes to avoid traffic delays . • Retail shopping loyalty cards record not just the amo unt an individual spends, but the loca- tions of stores that person visits, the kind s of products purchased, the stores where goods are purchased most ofte n, and the combinations of prod ucts purchased together. Collecting this data provides insights into shopping and travel habits and the likelihood of successful advertiseme nt targeting for certa in types of retail promotions. • Data collectors [the blue ovals, identified as (2) within Figure 1-1 1] incl ude sa mple entities that col lect data from the dev ice and users.
  • 63. • Data resul ts from a cable TV provider tracking the shows a person wa tches, which TV channels someone wi ll and will not pay for to watch on demand, and t he prices someone is will ing to pay fo r premiu m TV content • Retail stores tracking the path a customer takes through their store w hile pushing a shop- ping cart with an RFID chip so they can gauge which products get the most foot traffic using geospatial data co llected from t he RFID chips • Data aggregators (the dark gray ovals in Figure 1-11, marked as (3)) make sense of the data co llected from the various entities from the "Senso rN et" or the "Internet ofThings." These org anizatio ns compile data from the devices an d usage pattern s collected by government agencies, retail stores, INT RODUCTION TO BIG DATA ANALYTIC S and websites. ln turn, t hey can choose to transform and package the data as products to sell to list brokers, who may want to generate marketing lists of people who may be good targets for specific ad campaigns. • Data users and buyers are denoted by (4) in Figu re 1-11. These groups directly benefit from t he data collected and aggregated by others within the data value chain. • Retai l ba nks, acting as a data buyer, may want to know which customers have the hig hest
  • 64. likelihood to apply for a second mortgage or a home eq uity line of credit. To provide inpu t for this analysis, retai l banks may purchase data from a data aggregator. This kind of data may include demograp hic information about people living in specific locations; people who appear to have a specific level of debt, yet still have solid credit scores (or other characteris- tics such as paying bil ls on time and having savings accounts) that can be used to infer cred it worthiness; and those who are sea rching the web for information about paying off debts or doing home remodeling projects. Obtaining data from these various sources and aggrega- tors will enable a more targeted marketing campaign, which would have been more chal- lenging before Big Data due to the lack of information or high- performing technologies. • Using technologies such as Hadoop to perform natural language processing on unstructured, textual data from social media websites, users can gauge the reaction to events such as presidential campaigns. People may, for example, want to determine public sentiments toward a candidate by analyzing related blogs and online comments. Similarl y, data users may want to track and prepare for natural disasters by identifying which areas a hurricane affects fi rst and how it moves, based on which geographic areas are tweeting about it or discussing it via social med ia. r:t Data .::J Devices {'[I t Ptto...r r.r..., l UC)(.K VlOLU l !I ill UO. AI''
  • 65. (,.MI CfitUII CAfW COtPl!UR RfAO(H ~ .~ Iff [) llOfO MfOICAI IMoC'oi"G Law EniCHCefllefll Data Users/ Buyers 0 Media FIGURE 1-11 Emerging Big Data ecosystem Do live!)' So Mea 'I If,. [Ill AN [ Privato Investigators / lawyors 1.3 Key Roles for the New Big Data Ecosyst e m
  • 66. As il lustrated by this emerging Big Data ecosystem, the kinds of data and the related market dynamics vary greatly. These data sets ca n include sensor data, text, structured datasets, and social med ia . With this in mind, it is worth recall ing that these data sets will not work wel l within trad itional EDWs, which were architected to streamline reporting and dashboards and be centrally managed.lnstead, Big Data problems and projects require different approaches to succeed. Analysts need to partner with IT and DBAs to get the data they need within an analytic sandbox. A typical analytical sandbox contains raw data, agg regated data, and data with mu ltiple kinds of structure. The sandbox enables robust exploration of data and requires a savvy user to leverage and take advantage of data in the sandbox environment. 1.3 Key Roles for the New Big Data Ecosystem As explained in the context of the Big Data ecosystem in Section 1.2.4, new players have emerged to curate, store, produce, clean, and transact data. In addition, the need for applying more advanced ana lytica l tech- niques to increasing ly complex business problems has driven the emergence of new roles, new technology platforms, and new analytical methods. This section explores the new roles that address these needs, and subsequent chapters explore some of the analytica l methods and technology platforms. The Big Data ecosystem demands three ca tegories of roles, as shown in Figure 1-12. These roles were
  • 67. described in the McKinsey Global study on Big Data, from May 2011 [1]. Three Key Roles of The New Data Ecosystem Role Deep Analytical Talent Data Savvy Professionals Technology and Data Enablers Data Scientists .. Projected U.S. tal ent gap: 1.40 ,000 to 1.90,000 .. Projected U.S. talent gap: 1..5 million Note: RcuresaboYe m~ • projected talent CDP In US In 201.8. as ihown In McKinsey May 2011 article "81& Data: l he Nut rront* t ot Innovation. Competition. and Product~ FIGURE 1-12 Key roles of the new Big Data ecosystem The first group- Deep Analytical Talent- is technically savvy, with strong analytical skills. Members pos- sess a combi nation of skills to handle raw, unstructured data and to apply complex analytical techniques at INTRODUCTION TO BIG DATA ANALYTICS
  • 68. massive scales. This group has advanced training in quantitative disciplines, such as mathematics, statistics, and machine learning. To do their jobs, members need access to a robust analytic sandbox or workspace where they can perform large-scale analytical data experiments. Examples of current professions fitting into this group include statisticians, economists, mathematicians, and the new role of the Data Scientist. The McKinsey study forecasts that by the year 2018, the United States will have a talent gap of 140,000- 190,000 people with deep analytical talent. This does not represent the number of people needed with deep analytical talent; rather, this range represents the difference between what will be available in the workforce compared with what will be needed. In addition, these estimates only reflect forecasted talent shortages in the United States; the number would be much larger on a global basis. The second group-Data Savvy Professionals-has less technical depth but has a basic knowledge of statistics or machine learning and can define key questions that can be answered using advanced analytics. These people tend to have a base knowledge of working with data, or an appreciation for some of the work being performed by data scientists and others with deep analytical talent. Examples of data savvy profes- sionals include financial analysts, market research analysts, life scientists, operations managers, and business and functional managers. The McKinsey study forecasts the projected U.S. talent gap for this group to be 1.5 million people by the year 2018. At a high level, this means for every Data
  • 69. Scientist profile needed, the gap will be ten times as large for Data Savvy Professionals. Moving toward becoming a data savvy professional is a critical step in broadening the perspective of managers, directors, and leaders, as this provides an idea of the kinds of questions that can be solved with data. The third category of people mentioned in the study is Technology and Data Enablers. This group represents people providing technical expertise to support analytical projects, such as provisioning and administrating analytical sandboxes, and managing large-scale data architectures that enable widespread analytics within companies and other organizations. This role requires skills related to computer engineering, programming, and database administration. These three groups must work together closely to solve complex Big Data challenges. Most organizations are familiar with people in the latter two groups mentioned, but the first group, Deep Analytical Talent, tends to be the newest role for most and the least understood. For simplicity, this discussion focuses on the emerging role of the Data Scientist. It describes the kinds of activities that role performs and provides a more detailed view of the skills needed to fulfill that role. There are three recurring sets of activities that data scientists perform: o Reframe business challenges as analytics challenges. Specifically, this is a skill to diagnose busi- ness problems, consider the core of a given problem, and determine which kinds of candidate analyt- ical methods can be applied to solve it. This concept is explored further in Chapter 2, "Data Analytics
  • 70. lifecycle." o Design, implement, and deploy statistical models and data mining techniques on Big Data. This set of activities is mainly what people think about when they consider the role of the Data Scientist: 1.3 Key Roles for the New Big Data Ecosystem namely, applying complex or advanced ana lytical methods to a variety of busi ness problems using data. Chapter 3 t hrough Chapter 11 of this book introd uces the reader to many of the most popular analytical techniques and tools in this area. • Develop insights that lead to actionable recommendations. It is critical to note that applying advanced methods to data problems does not necessarily drive new business va lue. Instead, it is important to learn how to draw insights out of the data and communicate them effectively. Chapter 12, "The Endgame, or Putting It All Together;' has a brief overview of techniques for doing this. Data scientists are generally thoug ht of as having fi ve mai n sets of skills and behaviora l characteristics, as shown in Figure 1-13: • Quantitative skill: such as mathematics or statistics • Technical aptitude: namely, software engineering, machine learning, and programming skills
  • 71. • Skeptical mind-set and critica l thin king: It is important that data scientists can examine their work critica lly rather than in a one-sided way. • Curious and creative: Data scientists are passionate about data and finding creative ways to solve problems and portray information. • Communicative and collaborative: Data scie ntists must be able to articulate the business val ue in a clear way and collaboratively work with other groups, including project sponsors and key stakeholders. Quantitative Technical Skeptical Curious and Creative Communlcativr and CDDaborati~ fiGURE 1 Profile of a Data Scientist INTRODUCTION TO BIG DATA ANALYTICS Data scientists are generally comfortable using this blend of skills to acquire, manage, analyze, and
  • 72. visualize data and tell compelling stories about it. The next section includes examples of what Data Science teams have created to drive new value or innovation with Big Data. 1.4 Examples of Big Data Analytics After describing the emerging Big Data ecosystem and new roles needed to support its growth, this section provides three examples of Big Data Analytics in different areas: retail, IT infrastructure, and social media. As mentioned earlier, Big Data presents many opportunities to improve sa les and marketing ana lytics. An example of this is the U.S. retailer Target. Cha rles Duhigg's book The Power of Habit [4] discusses how Target used Big Data and advanced analytical methods to drive new revenue. After analyzing consumer- purchasing behavior, Target's statisticians determin ed that the retailer made a great deal of money from three main life-event situations. • Marriage, when people tend to buy many new products • Divorce, when people buy new products and change their spending habits • Pregnancy, when people have many new things to buy and have an urgency to buy t hem Target determined that the most lucrative of these life-events is the thi rd situation: pregnancy. Using
  • 73. data collected from shoppers, Ta rget was able to identify this fac t and predict which of its shoppers were pregnant. In one case, Target knew a female shopper was pregnant even before her family knew [5]. This kind of knowledge allowed Target to offer specifi c coupons and incentives to thei r pregnant shoppers. In fact, Target could not only determine if a shopper was pregnant, but in which month of pregnancy a shop- per may be. This enabled Target to manage its inventory, knowi ng that there would be demand for specific products and it wou ld likely vary by month over the com ing nine- to ten- month cycl es. Hadoop [6] represents another example of Big Data innovation on the IT infra structure. Apache Hadoop is an open source framework that allows companies to process vast amounts of information in a highly paral- lelized way. Hadoop represents a specific implementation of t he MapReduce paradigm and was designed by Doug Cutting and Mike Cafa rel la in 2005 to use data with varying structu res. It is an ideal technical framework for many Big Data projects, which rely on large or unwieldy data set s with unconventiona l data structures. One of the main benefits of Hadoop is that it employs a distributed file system, meaning it can use a distributed cluster of servers and commodity hardware to process larg e amounts of data. Some of the most co mmon examples of Hadoop imp lementations are in the social med ia space, where Hadoop ca n manage transactions, give textual updates, and develop
  • 74. social graphs among millions of users. Twitter and Facebook generate massive amounts of unstructured data and use Hadoop and its ecosystem of tools to manage this hig h volu me. Hadoop and its ecosystem are covered in Chapter 10, "Adva nced Ana lytics- Technology and Tools: MapReduce and Hadoop." Finally, social media represents a tremendous opportunity to leverage social and professional interac- tions to derive new insights. Linked In exemplifies a company in which data itself is the product. Early on, Linkedln founder Reid Hoffman saw the opportunity to create a social network for working professionals. Exercises As of 2014, Linkedln has more than 250 million user accounts and has added many additional features and data-related products, such as recruiting, job seeker too ls, advertising, and lnMa ps, whic h show a social graph of a user's professional network. Figure 1-14 is an example of an In Map visualization that enables a Linked In user to get a broader view of the interconnectedness of his contacts and understand how he knows most of them . fiGURE 1-14 Data visualization of a user's social network using lnMaps Summary Big Data comes from myriad sources, including social media,
  • 75. sensors, the Internet ofThings, video surveil- lance, and many sources of data that may not have been considered data even a few years ago. As businesses struggle to keep up with changing market requirements, some companies are finding creative ways to apply Big Data to their growing business needs and increasing ly complex problems. As organizations evolve their processes and see the opportunities that Big Data can provide, they try to move beyond t raditional Bl activities, such as using data to populate reports and dashboards, and move toward Data Science- driven projects that attempt to answer more open-ended and complex questions. However, exploiting the opportunities that Big Data presents requires new data architectures, includ - ing analytic sandboxes, new ways of working, and people with new skill sets. These drivers are causing organizations to set up analytic sandboxes and build Data Science teams. Although some organizations are fortunate to have data scientists, most are not, because there is a growing talent gap that makes finding and hi ring data scientists in a timely man ner difficult. Still, organizations such as those in web retail, health care, genomics, new IT infrast ructures, and social media are beginning to take advantage of Big Data and apply it in creati ve and novel ways. Exercises 1. What are the three characteristics of Big Data, and what are the main considerations in processing Big
  • 76. Data? 2 . What is an analytic sa ndbox, and why is it important? 3. Explain the differences between Bland Data Science. 4 . Describe the challenges of the current analytical architecture for data scientists. 5. What are the key skill sets and behavioral characteristics of a data scientist? INTRODUCTION TO BIG DATA ANALYTICS Bibliography [1] C. B. B. D. Manyika, "Big Data: The Next Frontier for Innovation, Competition, and Productivity," McKinsey Globa l Institute, 2011 . [2] D. R. John Ga ntz, "The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East," IDC, 2013. [3] http: I l www. willisresilience . coml emc-data l ab [Online]. [4] C. Duhigg, The Power of Habit: Why We Do What We Do in Life and Business, New York: Random House, 2012. [5] K. Hil l, "How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did," Forb es, February 2012. [6] http: I l hadoop. apache . org [Online].
  • 77. DATA ANALYTICS LIFECYCLE Data science projects differ from most traditional Business Intelligence projects and many data ana lysis projects in that data science projects are more exploratory in nature. For t his reason, it is critical to have a process to govern them and ensure t hat the participants are thorough and rigorous in their approach, yet not so rigid that the process impedes exploration. Many problems that appear huge and daunting at first can be broken down into smaller pieces or actionable phases that can be more easily addressed. Having a good process ensures a comprehensive and repeatable method for conducting analysis. In addition, it helps focus time and energy early in the process to get a clear grasp of the business problem to be solved. A common mistake made in data science projects is rushing into data collection and analysis, wh ich precludes spending sufficient time to plan and scope the amount of work involved, understanding requ ire- ments, or even framing the business problem properly. Consequently, participants may discover mid-stream that the project sponsors are actually trying to achieve an objective that may not match the available data, or they are attempting to address an interest that differs from
  • 78. what has been explicitly communicated. When this happens, the project may need to revert to the initial phases of the process for a proper discovery phase, or the project may be canceled. Creating and documenting a process helps demonstrate rigor, which provides additional credibility to the project when the data science team shares its findings. A well-defi ned process also offers a com- mon framework for others to adopt, so the methods and analysis can be repeated in the future or as new members join a team. 2.1 Data Analytics Lifecycle Overview The Data Analytics Lifecycle is designed specifica lly for Big Data problems and data science projects. The lifecycle has six phases, and project work can occur in several phases at once. For most phases in the life- cycle, the movement can be either forward or backward. This iterative depiction of the lifecycle is intended to more closely portray a real project, in which aspects of the project move forward and may return to earlier stages as new information is uncovered and team members learn more about various stages of the project. This enables participants to move iteratively through the process and drive toward operationa l- izing the project work. 2.1.1 Key Roles for a Successful Analytics Project In recent years, substantial attention has been placed on the emerging role of the data scientist. In October 2012, Harvard Business Review featured an article titled "Data
• 79. Scientist: The Sexiest Job of the 21st Century" [1], in which experts DJ Patil and Tom Davenport described the new role and how to find and hire data scientists. More and more conferences are held annually focusing on innovation in the areas of Data Science and topics dealing with Big Data. Despite this strong focus on the emerging role of the data scientist specifically, there are actually seven key roles that need to be fulfilled for a high-functioning data science team to execute analytic projects successfully. Figure 2-1 depicts the various roles and key stakeholders of an analytics project. Each plays a critical part in a successful analytics project. Although seven roles are listed, fewer or more people can accomplish the work depending on the scope of the project, the organizational structure, and the skills of the participants. For example, on a small, versatile team, these seven roles may be fulfilled by only 3 people, but a very large project may require 20 or more people. The seven roles follow. FIGURE 2-1 Key roles for a successful analytics project • Business User: Someone who understands the domain area and usually benefits from the results. This person can consult and advise the project team on the context of the project, the value of the
• 80. results, and how the outputs will be operationalized. Usually a business analyst, line manager, or deep subject matter expert in the project domain fulfills this role. • Project Sponsor: Responsible for the genesis of the project. Provides the impetus and requirements for the project and defines the core business problem. Generally provides the funding and gauges the degree of value from the final outputs of the working team. This person sets the priorities for the project and clarifies the desired outputs. • Project Manager: Ensures that key milestones and objectives are met on time and at the expected quality. • Business Intelligence Analyst: Provides business domain expertise based on a deep understanding of the data, key performance indicators (KPIs), key metrics, and business intelligence from a reporting perspective. Business Intelligence Analysts generally create dashboards and reports and have knowledge of the data feeds and sources. • Database Administrator (DBA): Provisions and configures the database environment to support the analytics needs of the working team. These responsibilities may include providing access to key databases or tables and ensuring the appropriate security levels are in place related to the data repositories. • Data Engineer: Leverages deep technical skills to assist with tuning SQL queries for data management and data extraction, and provides support for data
• 81. ingestion into the analytic sandbox, which was discussed in Chapter 1, "Introduction to Big Data Analytics." Whereas the DBA sets up and configures the databases to be used, the data engineer executes the actual data extractions and performs substantial data manipulation to facilitate the analytics. The data engineer works closely with the data scientist to help shape data in the right ways for analyses. • Data Scientist: Provides subject matter expertise for analytical techniques, data modeling, and applying valid analytical techniques to given business problems. Ensures overall analytics objectives are met. Designs and executes analytical methods and approaches with the data available to the project. Although most of these roles are not new, the last two roles (data engineer and data scientist) have become popular and in high demand [2] as interest in Big Data has grown. 2.1.2 Background and Overview of Data Analytics Lifecycle The Data Analytics Lifecycle defines analytics process best practices spanning discovery to project completion. The lifecycle draws from established methods in the realm of data analytics and decision science. This synthesis was developed after gathering input from data scientists and consulting established approaches that provided input on pieces of the process. Several of the processes that were
• 82. consulted include these: • Scientific method [3], in use for centuries, still provides a solid framework for thinking about and deconstructing problems into their principal parts. One of the most valuable ideas of the scientific method relates to forming hypotheses and finding ways to test ideas. • CRISP-DM [4] provides useful input on ways to frame analytics problems and is a popular approach for data mining. • Tom Davenport's DELTA framework [5]: The DELTA framework offers an approach for data analytics projects, including the context of the organization's skills, datasets, and leadership engagement. • Doug Hubbard's Applied Information Economics (AIE) approach [6]: AIE provides a framework for measuring intangibles and provides guidance on developing decision models, calibrating expert estimates, and deriving the expected value of information. • "MAD Skills" by Cohen et al. [7] offers input for several of the techniques mentioned in Phases 2-4 that focus on model planning, execution, and key findings. Figure 2-2 presents an overview of the Data Analytics Lifecycle that includes six phases. Teams commonly learn new things in a phase that cause them to go back and refine the work done in prior phases based on new insights and information that have been uncovered. For this reason, Figure 2-2 is shown as a cycle. The circular arrows convey iterative movement between phases until the team members have sufficient
• 83. information to move to the next phase. The callouts include sample questions to ask to help guide whether each of the team members has enough information and has made enough progress to move to the next phase of the process. Note that these phases do not represent formal stage gates; rather, they serve as criteria to help test whether it makes sense to stay in the current phase or move to the next. FIGURE 2-2 Overview of Data Analytics Lifecycle (callouts: "Do I have enough information to draft an analytic plan and share for peer review?"; "Do I have enough good quality data to start building the model?"; "Do I have a good idea about the type of model to try? Can I refine the analytic plan?"; "Is the model robust enough? Have we failed for sure?") Here is a brief overview of the main phases of the Data Analytics Lifecycle: • Phase 1-Discovery: In Phase 1, the team learns the business domain, including relevant history such as whether the organization or business unit has attempted similar projects in the past from which they can learn. The team assesses the resources available to support the project in terms of people, technology, time, and data. Important activities in this phase include framing the business problem as an analytics challenge that can be addressed in subsequent phases and formulating initial hypotheses (IHs) to test and begin learning the data. • Phase 2-Data preparation: Phase 2 requires the presence of an analytic sandbox, in which the team can work with data and perform analytics for the duration of the project. The team needs to execute extract, load, and transform (ELT) or extract, transform and load (ETL) to get data into the sandbox. The ELT and ETL are sometimes abbreviated as ETLT. Data should be transformed in the ETLT process so the team can work with it and analyze it. In this phase, the team also needs to familiarize itself with the data thoroughly and take steps to condition the data (Section 2.3.4).
• 85. • Phase 3-Model planning: Phase 3 is model planning, where the team determines the methods, techniques, and workflow it intends to follow for the subsequent model building phase. The team explores the data to learn about the relationships between variables and subsequently selects key variables and the most suitable models. • Phase 4-Model building: In Phase 4, the team develops datasets for testing, training, and production purposes. In addition, in this phase the team builds and executes models based on the work done in the model planning phase. The team also considers whether its existing tools will suffice for running the models, or if it will need a more robust environment for executing models and workflows (for example, fast hardware and parallel processing, if applicable). • Phase 5-Communicate results: In Phase 5, the team, in collaboration with major stakeholders, determines if the results of the project are a success or a failure based on the criteria developed in Phase 1. The team should identify key findings, quantify the business value, and develop a narrative to summarize and convey findings to stakeholders. • Phase 6-Operationalize: In Phase 6, the team delivers final reports, briefings, code, and technical documents. In addition, the team may run a pilot project to implement the models in a production environment. Once team members have run models and produced findings, it
• 86. is critical to frame these results in a way that is tailored to the audience that engaged the team. Moreover, it is critical to frame the results of the work in a manner that demonstrates clear value. If the team performs a technically accurate analysis but fails to translate the results into a language that resonates with the audience, people will not see the value, and much of the time and effort on the project will have been wasted. The rest of the chapter is organized as follows. Sections 2.2-2.7 discuss in detail how each of the six phases works, and Section 2.8 shows a case study of incorporating the Data Analytics Lifecycle in a real-world data science project. 2.2 Phase 1: Discovery The first phase of the Data Analytics Lifecycle involves discovery (Figure 2-3). In this phase, the data science team must learn and investigate the problem, develop context and understanding, and learn about the data sources needed and available for the project. In addition, the team formulates initial hypotheses that can later be tested with data. 2.2.1 Learning the Business Domain Understanding the domain area of the problem is essential. In many cases, data scientists will have deep computational and quantitative knowledge that can be broadly applied across many disciplines. An example of this role would be someone with an advanced degree in
• 87. applied mathematics or statistics. These data scientists have deep knowledge of the methods, techniques, and ways for applying heuristics to a variety of business and conceptual problems. Others in this area may have deep knowledge of a domain area, coupled with quantitative expertise. An example of this would be someone with a Ph.D. in life sciences. This person would have deep knowledge of a field of study, such as oceanography, biology, or genetics, with some depth of quantitative knowledge. At this early stage in the process, the team needs to determine how much business or domain knowledge the data scientist needs to develop models in Phases 3 and 4. The earlier the team can make this assessment the better, because the decision helps dictate the resources needed for the project team and ensures the team has the right balance of domain knowledge and technical expertise. FIGURE 2-3 Discovery phase (callout: "Do I have enough information to draft an analytic plan and share for peer review?") 2.2.2 Resources
  • 88. As part of t he discovery phase, the team needs to assess the resources ava ila ble to support the proj ect. In this context, resources include technology, tools, system s, data, and people. During this scoping, consider the available tools and technology t he team will be using and the types of systems needed for later phases to operat ionalize the models. In add itio n, try to evaluate the level of analytica l sophisticat ion within the orga nization and gaps that may exist related to tools, technology, and skills. For instance, for th e model being developed to have longevity in an organization, consider what types of skills and roles will be re qui red that may not exist today. For the proj ect to have long-term success, DATA ANALYTIC$ LIFECVCLE what types of skills and roles will be needed for the recipients of the model being developed? Does the requisite level of expertise exist within the organization today, or will it need to be cultivated? Answering these questions will influence the techniques the team selects and the kind of implementation the team chooses to pursue in subsequent phases of the Data Analytics lifecycle. In addition to the skills and computing resources, it is advisable
  • 89. to take inventory of the types of data available to the team for the project. Consider if the data available is sufficient to support the project's goals. The team will need to determine whether it must collect additional data, purchase it from outside sources, or transform existing data. Often, projects are started looking only at the data available. When the data is less than hoped for, the size and scope of the project is reduced to work within the constraints of the existing data. An alternative approach is to consider the long-term goals of this kind of project, without being con- strained by the current data. The team can then consider what data is needed to reach the long-term goals and which pieces of this multistep journey can be achieved today with the existing data. Considering longer-term goals along with short-term goals enables teams to pursue more ambitious projects and treat a project as the first step of a more strategic initiative, rather than as a standalone initiative. It is critical to view projects as part of a longer-term journey, especially if executing projects in an organization that is new to Data Science and may not have embarked on the optimum datasets to support robust analyses up to this point. Ensure the project team has the right mix of domain experts, customers, analytic talent, and project management to be effective. In addition, evaluate how much time is needed and if the team has the right breadth and depth of skills. After taking inventory of the tools, technology, data, and people, consider if the team has sufficient resources to succeed on this project, or if additional resources
  • 90. are needed. Negotiating for resources at the outset of the project, while seeping the goals, objectives, and feasibility, is generally more useful than later in the process and ensures sufficient time to execute it properly. Project managers and key stakeholders have better success negotiating for the right resources at this stage rather than later once the project is underway. 2.2.3 Framing the Problem Framing the problem well is critical to the success of the project. Framing is the process of stating the analytics problem to be solved. At this point, it is a best practice to write down the problem statement and share it with the key stakeholders. Each team member may hear slightly different things related to the needs and the problem and have somewhat different ideas of possible solutions. For these reasons, it is crucial to state the analytics problem, as well as why and to whom it is important. Essentially, the team needs to clearly articulate the current situation and its main challenges. As part of this activity, it is important to identify the main objectives of the project, identify what needs to be achieved in business terms, and identify what needs to be done to meet the needs. Additionally, consider the objectives and the success criteria for the project. What is the team attempting to achieve by doing the project, and what will be considered "good enough" as an outcome of the project? This is critical to document and share with the project team and key stakeholders. It is best practice to share the statement of goals and success criteria with the team and confirm alignment with the project sponsor's expectations. Perhaps equally important is to establish failure criteria. Most
  • 91. people doing projects prefer only to think of the success criteria and what the conditions will look like when the participants are successful. However, this is almost taking a best-case scenario approach, assuming that everything will proceed as planned 2.2 Phase 1: Discovery and the project team will reach its goals. However, no matter how well planned, it is almost impossible to plan for everything that will emerge in a project. The failure criteria will guide the team in understanding when it is best to stop trying or settle for the results that have been gleaned from the data. Many times people will continue to perform analyses past the point when any meaningful insights can be drawn from the data. Establishing criteria for both success and failure helps the participants avoid unproductive effort and remain aligned with the project sponsors 2.2.4 Identifying Key Stakeholders Another important step is to identify the key stakeholders and their interests in the project. During these discussions, the team can identify the success criteria, key risks, and stakeholders, which should include anyone who will benefit from the project or will be significantly impacted by the project. When interviewing stakeholders, learn about the domain area and any relevant history from similar analytics projects. For example, the team may identify the results each stakeholder wants from the project and the criteria it will use to judge the success of the project. Keep in mind that the analytics project is being initiated for a
  • 92. reason. It is critical to articulate the pain points as clearly as possible to address them and be aware of areas to pursue or avoid as the team gets further into the analytical process. Depending on the number of stakeholders and participants, the team may consider outlining the type of activity and participation expected from each stakeholder and partici- pant. This will set clear expectations with the participants and avoid delays later when, for example, the team may feel it needs to wait for approval from someone who views himself as an adviser rather than an approver of the work product. 2.2.5 Interviewing the Analytics Sponsor The team should plan to collaborate with the stakeholders to clarify and frame the analytics problem. At the outset, project sponsors may have a predetermined solution that may not necessarily realize the desired outcome. In these cases, the team must use its knowledge and expertise to identify the true underlying problem and appropriate solution. For instance, suppose in the early phase of a project, the team is told to create a recommender system for the business and that the way to do this is by speaking with three people and integrating the product recommender into a legacy corporate system. Although this may be a valid approach, it is important to test the assumptions and develop a clear understanding of the problem. The data science team typically may have a more objective understanding of the problem set than the stakeholders, who may be suggesting solutions to a given problem. Therefore, the team can probe deeper into the context and domain to clearly define the problem and propose possible paths from the problem to a desired outcome. In essence, the
  • 93. data science team can take a more objective approach, as the stakeholders may have developed biases over time, based on their experience. Also, what may have been true in the past may no longer be a valid working assumption. One possible way to circumvent this issue is for the project sponsor to focus on clearly defining the requirements, while the other members of the data science team focus on the methods needed to achieve the goals. When interviewing the main stakeholders, the team needs to take time to thoroughly interview the project sponsor, who tends to be the one funding the project or providing the high-level requirements. This person understands the problem and usually has an idea of a potential working solution. It is critical DATA ANALYTIC S LIFE CYCLE to thoroughly understand t he sponsor's perspective to guide the team in getting started on the proj ect. Here are some ti ps for interviewing project sponsors: • Prepare for the interview; draft questio ns, and review with coll eagues. • Use open-ended questi ons; avoid asking lead ing questions. • Probe for details and pose foll ow-up questions. • Avoid filling every silence in t he co nversation; give the other person time to think.
  • 94. • Let the sponsors express t hei r ideas and ask clarifying questions, such as "Why? Is that correct? Is t his idea on target? Is there anything else?" • Use active listening techniques; repeat back what was heard to make sure t he team heard it correctly, or reframe what was sa id. • Try to avoid expressing the team's opinions, which can introduce bias; instead, focus on listening. • Be mindfu l of the body language of the interviewers and sta keholders; use eye contact where appro- priate, and be attentive. • Mi nimize distractions. • Document what t he team heard, and review it with the sponsors. Following is a brief list of common questions that are helpful to ask during the discovery phase when interviewi ng t he project sponsor. The responses wi ll begin to shape the scope of the projec t and give the team an idea of the goals and objectives of the project. • What busi ness problem is t he team trying to solve? • What is t he desired outcome of the proj ect? • What data sources are available? • What industry issues may impact t he analysis?
  • 95. • What timelines need to be considered? • Who could provide insight into the project? • Who has final decision-making authority on the project? • How wi ll t he focus and scope of t he problem change if the following dimensions change: • Time: Analyzing 1 year or 10 years' worth of data? • People: Assess impact of changes in resources on project timelin e. • Risk: Conservative to aggressive • Resources: None to unlimited (tools, technology, systems) • Size and attributes of data: Includi ng internal and external data sou rces 2.2 Phase 1: Discovery 2.2.6 Developing Initial Hypotheses Developing a set of IHs is a key facet of the discovery phase. This step involves forming ideas that the team can test with data. Generally, it is best to come up with a few primary hypotheses to test and then be creative about developing several more. These IHs form the basis of the analytical tests the team will use in later phases and serve as the foundation for the findings in Phase 5. Hypothesis testing from a statisti- cal perspective is covered in greater detail in Chapter 3, "Review of Basic Data Analytic Methods Using R."
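Hypothesis testing itself is treated formally in Chapter 3; as a small preview of how an initial hypothesis formed during discovery might eventually be checked against data, the R sketch below compares two groups with a t-test. The data frame, column names, group labels, and the hypothesis itself are hypothetical placeholders, not part of the book's case study.

# Minimal sketch: checking an initial hypothesis (IH) in R.
# IH (hypothetical): customers contacted by the retention team spend more
# per month than customers who were not contacted.
set.seed(42)                               # reproducible example data
spend <- data.frame(
  monthly_spend = c(rnorm(100, mean = 220, sd = 40),
                    rnorm(100, mean = 205, sd = 40)),
  contacted     = rep(c("yes", "no"), each = 100)
)
# Two-sample t-test of the IH; a small p-value would support the hypothesis.
result <- t.test(monthly_spend ~ contacted, data = spend)
print(result$p.value)

In practice the team would formulate several such IHs during discovery and defer the actual testing to Phases 3 and 4, once conditioned data is available in the sandbox.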
  • 96. In this way, the team can compare its answers with the outcome of an experiment or test to generate additional possible solutions to problems. As a result, the team will have a much richer set of observations to choose from and more choices for agreeing upon the most impactful conclusions from a project. Another part of this process involves gathering and assessing hypotheses from stakeholders and domain experts who may have their own perspective on what the problem is, what the solution should be, and how to arrive at a solution. These stakeholders would know the domain area well and can offer suggestions on ideas to test as the team formulates hypotheses during this phase. The team will likely collect many ideas that may illuminate the operating assumptions of the stakeholders. These ideas will also give the team opportunities to expand the project scope into adjacent spaces where it makes sense or design experiments in a meaningful way to address the most important interests of the stakeholders. As part of this exercise, it can be useful to obtain and explore some initial data to inform discussions with stakeholders during the hypothesis-forming stage. 2.2.7 Identifying Potential Data Sources As part of the discovery phase, identify the kinds of data the team will need to solve the problem. Consider the volume, type, and time span of the data needed to test the hypotheses. Ensure that the team can access more than simply aggregated data. In most cases, the team will need the raw data to avoid introducing bias for the downstream analysis. Recalling the characteristics of Big Data from Chapter 1, assess the main characteristics of the data, with regard to its volume, variety,
  • 97. and velocity of change. A thorough diagno- sis of the data situation will influence the kinds of tools and techniques to use in Phases 2-4 of the Data Analytics lifecycle.ln addition, performing data exploration in this phase will help the team determine the amount of data needed, such as the amount of historical data to pull from existing systems and the data structure. Develop an idea of the scope of the data needed, and validate that idea with the domain experts on the project. The team should perform five main activities during this step of the discovery phase: o Identify data sources: Make a list of candidate data sources the team may need to test the initial hypotheses outlined in this phase. Make an inventory of the datasets currently available and those that can be purchased or otherwise acquired for the tests the team wants to perform. o Capture aggregate data sources: This is for previewing the data and providing high-level under- standing. It enables the team to gain a quick overview of the data and perform further exploration on specific areas. It also points the team to possible areas of interest within the data. o Review the raw data: Obtain preliminary data from initial data feeds. Begin understanding the interdependencies among the data attributes, and become familiar with the content of the data, its quality, and its limitations.
  • 98. DATA ANALYTICS LIFEC YCLE • Evaluate the data structures and tools needed: The data type and structure dictate which tools the team can use to analyze the data. This evaluation gets the team thinking about which technologies may be good candidates for the project and how to start getting access to these tools. • Scope the sort of data infrastructure needed for this type of problem: In addition to the tools needed, the data influences the kind of infrastructure that 's required, such as disk storage and net- work capacity. Unlike many traditional stage-gate processes, in which the team can advance only when specific criteria are met, the Data Ana lytics Lifecycle is intended to accommodate more ambiguity. This more closely reflects how data science projects work in real-life situati ons. For each phase of the process, it is recomm ended to pass certain checkpoints as a way of gauging whether the team is ready to move to t he next phase of the Data Analytics Lifecycle. The team can move to the next phase when it has enough information to draft an analytics plan and share it for peer review. Although a peer review of the plan may not actually be required by the project, creating t he plan is a good test of the team's grasp of the busin ess problem and the tea m's approach to add ressing it. Creating the analytic plan also requires a clear
  • 99. understanding of the domain area, the problem to be solved, and scoping of the data sources to be used. Developing success criteria early in the project clarifies the problem definition and helps the team when it comes time to make choices about the analytical methods being used in later phases. 2.3 Phase 2: Data Preparation The second phase of the Data Analytics Lifecycle involves data preparation, which includes the steps to explore, preprocess, and condition data prior to model ing and analysis. In this phase, the team needs to create a robust environment in which it can explore the data that is separate from a production environment. Usua lly, this is done by preparing an ana lytics sandbox. To get the data into the sandbox, the team needs to perform ETLT, by a combination of extracting, transforming, and load ing data into the sandbox. Once the data is in the sa ndbox, the team needs to learn about the data and become familiar with it. Understandi ng the data in detail is critical to t he success of the proj ect. The team also must decide how to condition and transform data to get it into a format to facilitate subsequent analys is. The tea m may perform data visua liza- tions to help team members understand the data, including its trends, outliers, and relationships among data variables. Each of these steps of the data preparation phase is discussed throughout this section. Data preparation tends to be the most labor-intensive step in the
  • 100. analytics lifecycle.ln fact, it is common for teams to spend at least SOo/o of a data science project's time in this critical phase. if the team cannot obtain enough data of sufficient quality, it may be unable to perform the subsequent steps in the lifecycle process. Figu re 2-4 shows an overview of the Data Analytics Lifecycle for Phase 2. The data preparation phase is generally the most iterative and the one that teams tend to undere stimate most often. This is because most teams and leaders are anxious to begin analyzing the data, testing hypotheses, and getting answers to some of the questions posed in Phase 1. Many tend to jump into Phase 3 or Phase 4 to begin rapid ly developing models and algorithms without spendi ng the time to prepare the data for modeling. Consequently, teams come to realize the data they are working with does not allow them to execute the models they want, and they end up back in Phase 2 anyway. Frc;uRE 2 Data preparation phase 2.3.1 Preparing the Analytic Sandbox 2.3 Phase 2: Data Preparation Do I have enough good quality dat a to start building the model?
  • 101. The firs t subphase of data preparation requires the team to obtain an analytic sandbox (also commonly referred to as a wo rkspace), in which the tea m ca n explore the data without interfering with live produc- tion databa ses. Consider an exa mple in which the team needs to work with a company's fin ancial data. The team should access a copy of the fin ancial data from the analytic sand box rather than interacting with the product ion version of t he organization's ma in database, because that will be tight ly controlled and needed for fi nancial reporting. When developi ng the analytic sandbox, it is a best practice to collect all kinds of data there, as tea m mem bers need access to high volumes and varieties of data for a Big Data analytics project. This ca n include DATA ANALYTICS LIFECYCLE everything from summary-level aggregated data, structured data, raw data feeds, and unstructured text data from call logs or web logs, depending on the kind of analysis the team plans to undertake. This expansive approach for attracting data of all kind differs considerably from the approach advocated by many information technology (IT) organizations. Many IT groups provide access to only a particular sub- segment of the data for a specific purpose. Often, the mindset of the IT group is to provide the minimum amount of data required to allow the team to achieve its objectives. Conversely, the data science team
  • 102. wants access to everything. From its perspective, more data is better, as oftentimes data science projects are a mixture of purpose-driven analyses and experimental approaches to test a variety of ideas. In this context, it can be challenging for a data science team if it has to request access to each and every dataset and attribute one at a time. Because of these differing views on data access and use, it is critical for the data science team to collaborate with IT, make clear what it is trying to accomplish, and align goals. During these discussions, the data science team needs to give IT a justification to develop an analyt- ics sandbox, which is separate from the traditional IT-governed data warehouses within an organization. Successfully and amicably balancing the needs of both the data science team and IT requires a positive working relationship between multiple groups and data owners. The payoff is great. The analytic sandbox enables organizations to undertake more ambitious data science projects and move beyond doing tradi- tional data analysis and Business Intelligence to perform more robust and advanced predictive analytics. Expect the sandbox to be large.lt may contain raw data, aggregated data, and other data types that are less commonly used in organizations. Sandbox size can vary greatly depending on the project. A good rule is to plan for the sandbox to be at least 5-10 times the size of the original data sets, partly because copies of the data may be created that serve as specific tables or data stores for specific kinds of analysis in the project. Although the concept of an analytics sandbox is relatively new, companies are making progress in this area and are finding ways to offer sandboxes and workspaces
  • 103. where teams can access data sets and work in a way that is acceptable to both the data science teams and the IT groups. 2.3.2 Performing ETLT As the team looks to begin data transformations, make sure the analytics sandbox has ample bandwidth and reliable network connections to the underlying data sources to enable uninterrupted read and write. In ETL, users perform extract, transform, load processes to extract data from a datastore, perform data transformations, and load the data back into the datastore. However, the analytic sandbox approach differs slightly; it advocates extract, load, and then transform.ln this case, the data is extracted in its raw form and loaded into the datastore, where analysts can choose to transform the data into a new state or leave it in its original, raw condition. The reason for this approach is that there is significant value in preserving the raw data and including it in the sandbox before any transformations take place. For instance, consider an analysis for fraud detection on credit card usage. Many times, outliers in this data population can represent higher-risk transactions that may be indicative of fraudulent credit card activity. Using ETL, these outliers may be inadvertently filtered out or transformed and cleaned before being loaded into the datastore.ln this case, the very data that would be needed to evaluate instances of fraudu- lent activity would be inadvertently cleansed, preventing the kind of analysis that a team would want to do. Following the ELT approach gives the team access to clean data to analyze after the data has been loaded into the database and gives access to the data in its original
  • 104. form for finding hidden nuances in the data. This approach is part of the reason that the analytic sandbox can quickly grow large. The team may want clean data and aggregated data and may need to keep a copy of the original data to compare against or 2.3 Phase 2: Data Preparation look for hidden patterns that may have existed in the data before the cleaning stage. This process can be summarized as ETLT to reflect the fact that a team may choose to perform ETL in one case and ELT in another. Depending on the size and number of the data sources, the team may need to consider how to paral- lelize the movement of the datasets into the sandbox. For this purpose, moving large amounts of data is sometimes referred to as Big ETL. The data movement can be parallelized by technologies such as Hadoop or MapReduce, which will be explained in greater detail in Chapter 10, "Advanced Analytics-Technology and Tools: MapReduce and Hadoop." At this point, keep in mind that these technologies can be used to perform parallel data ingest and introduce a huge number of files or datasets in parallel in a very short period of time. Hadoop can be useful for data loading as well as for data analysis in subsequent phases. Prior to moving the data into the analytic sandbox, determine the transformations that need to be performed on the data. Part of this phase involves assessing data quality and structuring the data sets properly so they can be used for robust analysis in subsequent phases. In addition, it is important to con-
  • 105. sider which data the team will have access to and which new data attributes will need to be derived in the data to enable analysis. As part of the ETLT step, it is advisable to make an inventory of the data and compare the data currently available with datasets the team needs. Performing this sort of gap analysis provides a framework for understanding which datasets the team can take advantage of today and where the team needs to initiate projects for data collection or access to new datasets currently unavailable. A component of this sub phase involves extracting data from the available sources and determining data connections for raw data, online transaction processing (OLTP) databases, online analytical processing (OLAP) cubes, or other data feeds. Application programming interface (API) is an increasingly popular way to access a data source [8]. Many websites and social network applications now provide APis that offer access to data to support a project or supplement the datasets with which a team is working. For example, connecting to the Twitter API can enable a team to download millions of tweets to perform a project for sentiment analysis on a product, a company, or an idea. Much of the Twitter data is publicly available and can augment other data sets used on the project. 2.3.3 Learning About the Data A critical aspect of a data science project is to become familiar with the data itself. Spending time to learn the nuances of the datasets provides context to understand what constitutes a reasonable value and expected output versus what is a surprising finding. In addition, it is important to catalog the data sources that the
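To make the ELT idea from Section 2.3.2 concrete, the sketch below loads a raw extract into the sandbox untouched and then derives a cleaned copy as a separate object, so that outliers (such as the potentially fraudulent credit card transactions discussed earlier) remain available in the original data. The file name, column names, and filtering rule are assumptions for illustration only.

# Minimal ELT sketch in R (hypothetical file and column names).
# 1. Extract + Load: read the raw extract and keep it exactly as received.
raw_txn <- read.csv("transactions_raw.csv", stringsAsFactors = FALSE)
# 2. Transform later, into a separate object; never overwrite raw_txn.
clean_txn <- raw_txn
clean_txn$amount <- as.numeric(clean_txn$amount)      # enforce a numeric type
clean_txn <- clean_txn[!is.na(clean_txn$amount), ]    # drop rows that were missing or failed conversion
# Outliers are trimmed only from the cleaned copy; the raw table still holds
# them for any later fraud analysis.
threshold <- quantile(clean_txn$amount, 0.99, na.rm = TRUE)
clean_txn <- clean_txn[clean_txn$amount <= threshold, ]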
  • 106. team has access to and identify additional data sources that the team can leverage but perhaps does not have access to today. Some of the activities in this step may overlap with the initial investigation of the datasets that occur in the discovery phase. Doing this activity accomplishes several goals. o Clarifies the data that the data science team has access to at the start of the project o Highlights gaps by identifying datasets within an organization that the team may find useful but may not be accessible to the team today. As a consequence, this activity can trigger a project to begin building relationships with the data owners and finding ways to share data in appropriate ways. In addition, this activity may provide an impetus to begin collecting new data that benefits the organi- zation or a specific long-term project. o Identifies datasets outside the organization that may be useful to obtain, through open APis, data sharing, or purchasing data to supplement already existing datasets DATA ANALYTICS LIFECYCLE Table 2-1 demonstrates one way to organize this type of data inventory. TABLE: 1 Sample Dataset Inventory Data Available Data to Obtain
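One lightweight way to keep an inventory of this kind under version control with the rest of the project code is a small R data frame. The dataset names and category assignments below are illustrative examples in the spirit of Table 2-1, not a transcription of it.

# Hypothetical dataset inventory, mirroring the structure of Table 2-1.
inventory <- data.frame(
  dataset = c("Products shipped", "Product financials",
              "Product call center data", "Live product feedback surveys",
              "Product sentiment from social media"),
  status  = c("Available and accessible", "Available, but not accessible",
              "Available, but not accessible", "To collect",
              "To obtain from third-party sources"),
  stringsAsFactors = FALSE
)
# Quick view of which datasets still require action before modeling can begin.
subset(inventory, status != "Available and accessible")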
  • 107. and Data Available, but Data to from Third Dataset Accessible not Accessible Collect Party Sources Products shipped e Product Fina ncials • Product Ca ll Center • Data Live Product Feedback Surveys Product Sentiment from Social Media 2.3.4 Data Conditioning • • Data conditioning refers to the process of cleaning data, normalizing datasets, and perform ing trans- formations on the data. A critical step with in the Data Ana lytics Lifecycle, data conditioning can involve many complex steps to join or merge data sets or otherwise get datasets into a state that enables analysis in further phases. Data conditioning is often viewed as a preprocessing step for the data analysis because it involves many operations on the dataset before developing models to process or analyze the data. This implies that the data-conditioning step is performed only by IT, the data owners, a DBA, or a data eng ineer.
  • 108. However, it is also important to involve the data scientist in this step because many decisions are made in the data conditioning phase that affect subsequent analysis. Part of this phase involves decidi ng which aspects of particular datasets will be useful to analyze in later steps. Because teams begin forming ideas in this phase about which data to keep and which data to transform or discard, it is important to involve mu ltiple team members in these decisions. Leaving such decisions to a single person may cause teams to return to this phase to retrieve data that may have been discarded. As with the previous example of deciding which data to keep as it relates to fraud detection on credit card usage, it is critical to be thoughtful about which data the team chooses to keep and which data will be discarded. This can have far-reaching consequences that will cause the team to retrace previous steps if th e team discards too much of the data at too early a point in this process. Typically, data science teams would rather keep more data than too little data for the analysis. Additional questions and considerations for the data conditioning step include these. • What are the data sources? What are the target fields (for example, columns of the tables)? • How clean is the data? 2.3 Phase 2: Data Preparation
  • 109. o How consistent are the contents and files? Determine to what degree the data contains missing or inconsistent values and ifthe data contains values deviating from normal. o Assess the consistency of the data types. For instance, if the team expects certain data to be numeric, confirm it is numeric or if it is a mixture of alphanumeric strings and text. o Review the content of data columns or other inputs, and check to ensure they make sense. For instance, if the project involves analyzing income levels, preview the data to confirm that the income values are positive or if it is acceptable to have zeros or negative values. o Look for any evidence of systematic error. Examples include data feeds from sensors or other data sources breaking without anyone noticing, which causes invalid, incorrect, or missing data values. In addition, review the data to gauge if the definition ofthe data is the same over all measurements. In some cases, a data column is repurposed, or the column stops being populated, without this change being annotated or without others being notified. 2.3.5 Survey and Visualize After the team has collected and obtained at least some of the datasets needed for the subsequent analysis, a useful step is to leverage data visualization tools to gain an overview of the data. Seeing high-level patterns in the data enables one to understand characteristics about the data very quickly. One example is using data visualization to examine data quality, such as
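Many of these survey questions, together with the data conditioning questions in Section 2.3.4, can be answered with a handful of one-liners in R or a similar statistical package. The sketch below assumes a hypothetical customer extract; the checks, not the names, are the point.

# Assumed sandbox extract with hypothetical columns: income, signup_date, state.
customers <- read.csv("customers_extract.csv", stringsAsFactors = FALSE)
summary(customers)                      # ranges, quartiles, and NA counts per column
sapply(customers, class)                # confirm expected data types (numeric vs. text)
colSums(is.na(customers))               # extent of missing values by column
sum(customers$income < 0, na.rm = TRUE) # income should not be negative
table(customers$state)                  # inconsistent codes ("CA" vs. "Calif.") show up here
hist(customers$income, breaks = 50,     # quick look at skewness and unexpected values
     main = "Income distribution", xlab = "Income")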
  • 110. whether the data contains many unexpected values or other indicators of dirty data. (Dirty data will be discussed further in Chapter 3.) Another example is skewness, such as if the majority of the data is heavily shifted toward one value or end of a continuum. Shneiderman [9] is well known for his mantra for visual data analysis of "overview first, zoom and filter, then details-on-demand." This is a pragmatic approach to visual data analysis. It enables the user to find areas of interest, zoom and filter to find more detailed information about a particular area of the data, and then find the detailed data behind a particular area. This approach provides a high-level view of the data and a great deal of information about a given dataset in a relatively short period of time. When pursuing this approach with a data visualization tool or statistical package, the following guide- lines and considerations are recommended. o Review data to ensure that calculations remained consistent within columns or across tables for a given data field. For instance, did customer lifetime value change at some point in the middle of data collection? Or if working with financials, did the interest calculation change from simple to com- pound at the end of the year? o Does the data distribution stay consistent over all the data? If not, what kinds of actions should be taken to address this problem? o Assess the granularity of the data, the range of values, and the level of aggregation of the data.
  • 111. o Does the data represent the population of interest? For marketing data, if the project is focused on targeting customers of child-rearing age, does the data represent that, or is it full of senior citizens and teenagers? o For time-related variables, are the measurements daily, weekly, monthly? Is that good enough? Is time measured in seconds everywhere? Or is it in milliseconds in some places? Determine the level of granularity of the data needed for the analysis, and assess whether the current level of timestamps on the data meets that need. DATA ANALYTICS LIFECYCLE • Is the data standardized/ normalized? Are the scales consistent? If not, how consistent or irregular is the data? • For geospatial datasets, are state or country abbreviations consistent across the data? Are personal names normalized? English units? Metric units? These are typical considerations that should be part of the thought process as the team evaluates the data sets that are obtained for the project. Becoming deeply knowledgeable about the data will be critica l when it comes time to construct and run models later in the process. 2.3.6 Common Tools for the Data Preparation Phase Several tools are commonly used for this phase:
  • 112. • Hadoop [10] ca n perform massively para llel ingest and custom analysis for web traffic parsing, GPS location ana lytics, genomic analysis, and combining of massive unstructured data fe eds from mul- tiple sources. • Alpine Mi ner [11 ] provides a graphical user interface (GUI) for creating analytic work flows, includi ng data manipu lations and a series of analytic events such as staged data-mining techniques (for exam- ple, first select the top 100 customers, and then run descriptive statistics and clustering) on Postgres SQL and other Big Data sources. • Open Refine (formerly ca lled Google Refine) [12] is "a free, open source, powerful tool for working with messy data." It is a popular GUI-based tool for performing data transformations, and it's one of the most robust free tools cu rrentl y available. • Simi lar to Open Refin e, Data Wrangler [13] is an interactive tool for data clean ing and transformation. Wrangler was developed at Stanford University and can be used to perform many transformations on a given dataset. In addition, data transformation outputs can be put into Java or Python. The advan- tage of this feature is that a subset of the data can be manipulated in Wrangler via its GUI, and then the same operations can be written out as Java or Python code to be executed against the full, larger dataset offline in a local analytic sandbox. For Phase 2, the team needs assistance from IT, DBAs, or whoever controls the Enterprise Data Warehouse (EDW) for data sources the data science team would like to use.
  • 113. 2.4 Phase 3: Model Planning In Phase 3, the data science team identifi es candidate models to apply to the data for clustering, cla ssifying, or findin g relationships in the data depending on the goa l of the project, as shown in Fig ure 2-5. It is during this phase that the team refers to the hypotheses developed in Phase 1, when they first became acquainted with the data and understanding the business problems or domain area. These hypotheses help the team fram e the analytics to execute in Phase 4 and select the right methods to achieve its objectives. Some of the activities to consider in this phase include the following: • Assess the structure of the datasets. The structure of the data sets is one factor that dictates the tools and analytical techniques for the next phase. Depending on whether the team plans to analyze tex- tual data or transactional data, for example, different tools and approaches are required. • Ensure that the analytical techniques enable the team to meet the business objectives and accept or reject the working hypotheses. 2 .4 Phase 3: Model Planning • Determine if the situation warrants a single model or a series of techn iques as part of a larger ana lytic workflow. A few example models include association rules (Chapter 5, "Advanced Ana lytical Theory and Methods: Association Rules") and logistic regression
  • 114. (Chapter 6, "Adva nced Analytical Theory and Methods: Regression"). Other tools, such as Alpine Miner, enable users to set up a series of steps and analyses and can serve as a front·end user interface (UI) for manipulating Big Data sources in PostgreSQL. FIGURE 2· 5 Model planning phase Do I have a good Idea about the type of model to try? Can I refine the analytic plan? In addition to the considerations just listed, it is useful to research and understand how other ana lysts generally approach a specific kind of problem. Given the kind of data and resources that are available, eva lu- ate whether similar, existing approaches will work or if the team will need to create something new. Many times teams can get ideas from analogous problems that other people have solved in different industry verticals or domain areas. Table 2-2 summarizes the results of an exercise of this type, involving several doma in areas and the types of models previously used in a classification type of problem after conducting research on chu rn models in multiple industry verti ca ls. Performing this sort of diligence gives the team DATAANALYT ICS LI FECYCLE
  • 115. ideas of how others have solved similar problems and presents the team with a list of candidate models to try as part of the model planning phase. TABLE 2-2 Research on Model Planning in industry Verticals Market Sector Analytic Techniques/Methods Used Consumer Packaged Goods Retail Banking Reta il Business Wireless Telecom Multiple linear regression, automatic relevance determination (ARD). and decision tree Multiple regression Logistic regression, ARD, decision tree Neural network, decision tree, hierarchical neurofuzzy systems, rule evolver, logistic regression 2.4.1 Data Exploration and Variable Selection Although some data exploration takes place in t he data preparation phase, those activities focus mainly on data hygiene and on assessing the quality of the data itself. In Phase 3, the objective of the data exploration
  • 116. is to understand the relationships among the variables to inform selection of the variables and methods and to understand the prob lem domain. As with earlier phases of t he Data Analytics Lifecycle, it is impor- tant to spend t ime and focus attention on this preparatory work to make t he subsequent phases of model selection and execution easier and more efficient. A common way to cond uct this step involves using tools to perform data visualizations. Approaching the data exploration in this way aids t he team in previewing the data and assessing relationsh ips between varia bles at a high level. In many cases, stakeholders and subject matter experts have instincts and hunches about what the data science team should be considering and ana lyzing. Likely, this group had some hypothesis that led to the genesis of the proj ect. Often, stakeholders have a good grasp of the problem and domain, although they may not be aware of t he subtleties within the data or the model needed to accept or reject a hypothesi s. Oth er times, sta keholders may be correct, bu t for the wrong reasons (for insta nce, they may be correct about a correlation that exists but infer an incorrect reason for the corr elation). Meanwhile, data scientists
  • 117. have to approach problems with an unbiased mind-set and be ready to question all assumptions. As the team begi ns to question the incoming assumptions and test initial ideas of the projec t sponsors and stakeholders, it needs to consider the inputs and data that will be needed, and then it must examine whether these inputs are actually correlated with the outcomes that the team plans to predict or analyze. Some methods and types of models will hand le correlated variables better than others. Depending on what the team is attempting to solve, it may need to consider an alternate method, reduce the number of data inputs, or transform the inputs to allow the team to use the best method for a given business problem. Some of these techniques will be explored further in Chapter 3 and Chapter 6. The key to this approach is to aim for capturing the most essential predictors and variables rather than considering every possible variable that people think may influence the outcome. Approachi ng the prob- lem in this manner requires iterations and testing to identify the most essential variables for the intended analyses. The team should plan to test a range of variables to include in the model and then focus on the most important and influential variab les.
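For the kind of variable exploration described in Section 2.4.1, a correlation matrix and a scatterplot matrix are common first steps before settling on predictors, since they surface highly correlated inputs early. The extract and variable names below are hypothetical.

# Hypothetical model-planning extract with several candidate predictors.
d <- read.csv("model_planning_extract.csv", stringsAsFactors = FALSE)
num_vars <- d[, sapply(d, is.numeric)]                   # keep only numeric candidates
round(cor(num_vars, use = "pairwise.complete.obs"), 2)   # flag strongly correlated inputs
pairs(num_vars)                                          # visual check of pairwise relationships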
  • 118. 2.4 Phase 3: Model Planning If the team plans to run regression analyses, identify the candidate predictors and outcome variables of the model. Plan to create variables that determine outcomes but demonstrate a strong relationship to the outcome rather than to the other input variables. This includes remaining vigilant for problems such as serial correlation, multicollinearity, and other typical data modeling challenges that interfere with the validity of these models. Sometimes these issues can be avoided simply by looking at ways to reframe a given problem. In addition, sometimes determining correlation is all that is needed ("black box prediction"), and in other cases, the objective of the project is to understand the causal relationship better. In the latter case, the team wants the model to have explanatory power and needs to forecast or stress test the model under a variety of situations and with different datasets. 2.4.2 Model Selection In the model selection subphase, the team's main goal is to choose an analytical technique, or a short list of candidate techniques, based on the end goal of the project. For the context of this book, a model is discussed in general terms. In this case, a model simply refers to an abstraction from reality. One observes events happening in a real-world situation or with live data and attempts to construct models that emulate this behavior with a set of rules and conditions. In the case of machine learning and data mining, these rules and conditions are grouped into several general sets of techniques, such as classification, association
  • 119. rules, and clustering. When reviewing this list of types of potential models, the team can winnow down the list to several viable models to try to address a given problem. More details on matching the right models to common types of business problems are provided in Chapter 3 and Chapter 4, "Advanced Analytical Theory and Methods: Clustering." An additional consideration in this area for dealing with Big Data involves determining if the team will be using techniques that are best suited for structured data, unstructured data, or a hybrid approach. For instance, the team can leverage MapReduce to analyze unstructured data, as highlighted in Chapter 10. Lastly, the team should take care to identify and document the modeling assumptions it is making as it chooses and constructs preliminary models. Typically, teams create the initial models using a statistical software package such as R, SAS, or Matlab. Although these tools are designed for data mining and machine learning algorithms, they may have limi- tations when applying the models to very large datasets, as is common with Big Data. As such, the team may consider redesigning these algorithms to run in the database itself during the pilot phase mentioned in Phase 6. The team can move to the model building phase once it has a good idea about the type of model to try and the team has gained enough knowledge to refine the analytics plan. Advancing from this phase requires a general methodology for the analytical model, a solid understanding of the variables and techniques to use, and a description or diagram of the analytic workflow.
  • 120. 2.4.3 Common Tools for the Model Planning Phase Many tools are available to assist in this phase. Here are several of the more common ones: o R [14] has a complete set of modeling capabilities and provides a good environment for building interpretive models with high-quality code.ln addition, it has the ability to interface with databases via an ODBC connection and execute statistical tests and analyses against Big Data via an open source connection. These two factors makeR well suited to performing statistical tests and analyt- ics on Big Data. As of this writing, R contains nearly 5,000 packages for data analysis and graphical representation. New packages are posted frequently, and many companies are providing value-add D ATA ANALVTICS LIFECVCLE services for R (such as train ing, instruction, and best practices), as well as packaging it in ways to make it easier to use and more robust. This phenomenon is si milar to what happened with Linux in the late 1980s and ea rl y 1990s, when companies appeared to package and ma ke Linux easier for companies to consume and deploy. UseR with fi le extracts for offline ana lysis and optimal performance, and use RODBC connections for dynamic queri es and faster development. • SQL Analysis services [1 5] ca n perform in-database analytics of common data mining fun ctions, involved aggregations, and basic predicti ve models.
• 121. • SAS/ACCESS [16] provides integration between SAS and the analytics sandbox via multiple data connectors such as ODBC, JDBC, and OLE DB. SAS itself is generally used on file extracts, but with SAS/ACCESS, users can connect to relational databases (such as Oracle or Teradata) and data warehouse appliances (such as Greenplum or Aster), files, and enterprise applications (such as SAP and Salesforce.com).
2.5 Phase 4: Model Building
In Phase 4, the data science team needs to develop datasets for training, testing, and production purposes. These datasets enable the data scientist to develop the analytical model and train it ("training data"), while holding aside some of the data ("hold-out data" or "test data") for testing the model. (These topics are addressed in more detail in Chapter 3.) During this process, it is critical to ensure that the training and test datasets are sufficiently robust for the model and analytical techniques. A simple way to think of these datasets is to view the training dataset for conducting the initial experiments and the test sets for validating an approach once the initial experiments and models have been run.
In the model building phase, shown in Figure 2-6, an analytical model is developed and fit on the training data and evaluated (scored) against the test data. The phases of model planning and model building can overlap quite a bit, and in practice one can iterate back and forth between the two phases for a while
• 122. before settling on a final model. Although the modeling techniques and logic required to develop models can be highly complex, the actual duration of this phase can be short compared to the time spent preparing the data and defining the approaches. In general, plan to spend more time preparing and learning the data (Phases 1-2) and crafting a presentation of the findings (Phase 5). Phases 3 and 4 tend to move more quickly, although they are more complex from a conceptual standpoint.
As part of this phase, the data science team needs to execute the models defined in Phase 3. During this phase, users run models from analytical software packages, such as R or SAS, on file extracts and small datasets for testing purposes. On a small scale, assess the validity of the model and its results. For instance, determine if the model accounts for most of the data and has robust predictive power. At this point, refine the models to optimize the results, such as by modifying variable inputs or reducing correlated variables where appropriate. In Phase 3, the team may have had some knowledge of correlated variables or problematic data attributes, which will be confirmed or denied once the models are actually executed.
When immersed in the details of constructing models and transforming data, many small decisions are often made about the data and the approach for the modeling. These details can be easily forgotten once the project is completed. Therefore, it is vital to record the results and logic of the model during this phase. In addition, one must take care to record any operating
• 123. assumptions that were made in the modeling process regarding the data or the context.
FIGURE 2-6 Model building phase (the diagram's decision questions: "Is the model robust enough?" and "Have we failed for sure?")
Creating robust models that are suitable to a specific situation requires thoughtful consideration to ensure the models being developed ultimately meet the objectives outlined in Phase 1. Questions to consider include these:
• Does the model appear valid and accurate on the test data?
• Does the model output/behavior make sense to the domain experts? That is, does it appear as if the model is giving answers that make sense in this context?
• Do the parameter values of the fitted model make sense in the context of the domain?
• Is the model sufficiently accurate to meet the goal?
• Does the model avoid intolerable mistakes? Depending on context, false positives may be more serious or less serious than false negatives, for instance. (False positives and false negatives are discussed
• 124. further in Chapter 3 and Chapter 7, "Advanced Analytical Theory and Methods: Classification.")
• Are more data or more inputs needed? Do any of the inputs need to be transformed or eliminated?
• Will the kind of model chosen support the runtime requirements?
• Is a different form of the model required to address the business problem? If so, go back to the model planning phase and revise the modeling approach.
Once the data science team can evaluate either if the model is sufficiently robust to solve the problem or if the team has failed, it can move to the next phase in the Data Analytics Lifecycle.
2.5.1 Common Tools for the Model Building Phase
There are many tools available to assist in this phase, focused primarily on statistical analysis or data mining software. Common tools in this space include, but are not limited to, the following:
• Commercial Tools:
• SAS Enterprise Miner [17] allows users to run predictive and descriptive models based on large volumes of data from across the enterprise. It interoperates with other large data stores, has many partnerships, and is built for enterprise-level computing
• 125. and analytics.
• SPSS Modeler [18] (provided by IBM and now called IBM SPSS Modeler) offers methods to explore and analyze data through a GUI.
• Matlab [19] provides a high-level language for performing a variety of data analytics, algorithms, and data exploration.
• Alpine Miner [11] provides a GUI front end for users to develop analytic workflows and interact with Big Data tools and platforms on the back end.
• STATISTICA [20] and Mathematica [21] are also popular and well-regarded data mining and analytics tools.
• Free or Open Source tools:
• R and PL/R [14]: R was described earlier in the model planning phase, and PL/R is a procedural language for PostgreSQL with R. Using this approach means that R commands can be executed in database. This technique provides higher performance and is more scalable than running R in memory.
• Octave [22], a free software programming language for computational modeling, has some of the functionality of Matlab. Because it is freely available, Octave is used in major universities when teaching machine learning.
• WEKA [23] is a free data mining software package with an analytic workbench. The functions
• 126. created in WEKA can be executed within Java code.
• Python is a programming language that provides toolkits for machine learning and analysis, such as scikit-learn, numpy, scipy, pandas, and related data visualization using matplotlib.
• SQL in-database implementations, such as MADlib [24], provide an alternative to in-memory desktop analytical tools. MADlib provides an open-source machine learning library of algorithms that can be executed in-database, for PostgreSQL or Greenplum.
2.6 Phase 5: Communicate Results
After executing the model, the team needs to compare the outcomes of the modeling to the criteria established for success and failure. In Phase 5, shown in Figure 2-7, the team considers how best to articulate the findings and outcomes to the various team members and stakeholders, taking into account caveats, assumptions, and any limitations of the results. Because the presentation is often circulated within an organization, it is critical to articulate the results properly and position the findings in a way that is appropriate for the audience.
• 127. FIGURE 2-7 Communicate results phase
As part of Phase 5, the team needs to determine if it succeeded or failed in its objectives. Many times people do not want to admit to failing, but in this instance failure should not be considered as a true failure, but rather as a failure of the data to accept or reject a given hypothesis adequately. This concept can be counterintuitive for those who have been told their whole careers not to fail. However, the key is to remember that the team must be rigorous enough with the data to determine whether it will prove or disprove the hypotheses outlined in Phase 1 (discovery). Sometimes teams have only done a superficial analysis, which is not robust enough to accept or reject a hypothesis. Other times, teams perform very robust analysis and are searching for ways to show results, even when results may not be there. It is important to strike a balance between these two extremes when it comes to analyzing data and being pragmatic in terms of showing real-world results. When conducting this assessment, determine if the results are
• 128. statistically significant and valid. If they are, identify the aspects of the results that stand out and may provide salient findings when it comes time to communicate them. If the results are not valid, think about adjustments that can be made to refine and iterate on the model to make it valid. During this step, assess the results and identify which data points may have been surprising and which were in line with the hypotheses that were developed in Phase 1. Comparing the actual results to the ideas formulated early on produces additional ideas and insights that would have been missed if the team had not taken time to formulate initial hypotheses early in the process.
By this time, the team should have determined which model or models address the analytical challenge in the most appropriate way. In addition, the team should have ideas of some of the findings as a result of the project. The best practice in this phase is to record all the findings and then select the three most significant ones that can be shared with the stakeholders. In addition, the team needs to reflect on the implications of these findings and measure the business value. Depending on what emerged as a result of the model, the team may need to spend time quantifying the business impact of the results to help prepare for the presentation and demonstrate the value of the findings. Doug Hubbard's work [6] offers insights on how to assess intangibles in business and quantify the value of seemingly unmeasurable things.
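Returning to the earlier point about checking whether results are statistically significant, the following minimal R sketch shows one way such a check might look. The fitted model object (results, a linear regression as in the R examples later in this book) and the 0.05 cutoff are assumptions chosen for illustration rather than requirements from the text.
# Minimal sketch (assumptions: 'results' is a fitted lm model and a 0.05
# significance threshold is acceptable for this project).
coefs <- summary(results)$coefficients      # estimates, std. errors, p-values
p_vals <- coefs[, "Pr(>|t|)"]               # p-value for each coefficient
significant <- p_vals < 0.05                # flag coefficients below the chosen level
print(data.frame(estimate = coefs[, "Estimate"], p_value = p_vals, significant = significant))
summary(results)$r.squared                  # simple sanity check on overall fit
A quick review such as this helps separate salient, defensible findings from results that need further refinement before they are presented.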
  • 129. Now that the team has run the model, completed a thorough discovery phase, and learned a great deal about the datasets, reflect on the project and consider what obstacles were in the project and what can be improved in the future. Make recommendations for future work or improvements to existing processes, and consider what each of the team members and sta keholders needs to fulfi ll her responsibil ities. For instance, sponsors must champion the project. Stakeholders must understand how the model affects their processes. (For example, if the team has created a model to predict customer churn, the Marketing team must under- stand how to use the churn model predictions in planning their interventions.) Production eng ineers need to operationalize the work that has been done. In addition, this is the phase to underscore the business benefits of the work and begin making the case to implement the logic into a live production environment. As a result of this phase, the team will have documented the key findings and major insights derived from the analysis. The deliverable of this phase will be the most visible portion of the process to the outside stakeholders and sponsors, so take care to clearly articulate the results, methodology, and business value of the findings. More details will be provided about data visualization tools and references in Chapter 12, "The Endgame, or Putting It All Together." 2.7 Phase 6: Operationalize In the final phase, the team communicates the benefits of the project more broadly and sets up a pilot
• 130. project to deploy the work in a controlled way before broadening the work to a full enterprise or ecosystem of users. In Phase 4, the team scored the model in the analytics sandbox. Phase 6, shown in Figure 2-8, represents the first time that most analytics teams approach deploying the new analytical methods or models in a production environment. Rather than deploying these models immediately on a wide-scale basis, the risk can be managed more effectively and the team can learn by undertaking a small scope, pilot deployment before a wide-scale rollout. This approach enables the team to learn about the performance and related constraints of the model in a production environment on a small scale and make adjustments before a full deployment. During the pilot project, the team may need to consider executing the algorithm in the database rather than with in-memory tools such as R because the run time is significantly faster and more efficient than running in-memory, especially on larger datasets.
FIGURE 2-8 Model operationalize phase
While scoping the effort involved in conducting a pilot project, consider running the model in a production environment for a discrete set of products or a single line of business, which tests the model
• 131. in a live setting. This allows the team to learn from the deployment and make any needed adjustments before launching the model across the enterprise. Be aware that this phase can bring in a new set of team members, usually the engineers responsible for the production environment, who have a new set of issues and concerns beyond those of the core project team. This technical group needs to ensure that running the model fits smoothly into the production environment and that the model can be integrated into related business processes.
Part of the operationalizing phase includes creating a mechanism for performing ongoing monitoring of model accuracy and, if accuracy degrades, finding ways to retrain the model. If feasible, design alerts for when the model is operating "out-of-bounds." This includes situations when the inputs are beyond the range that the model was trained on, which may cause the outputs of the model to be inaccurate or invalid. If this begins to happen regularly, the model needs to be retrained on new data.
Often, analytical projects yield new insights about a business, a problem, or an idea that people may have taken at face value or thought was impossible to explore. Four main deliverables can be created to
• 132. meet the needs of most stakeholders. This approach for developing the four deliverables is discussed in greater detail in Chapter 12. Figure 2-9 portrays the key outputs for each of the main stakeholders of an analytics project and what they usually expect at the conclusion of a project.
• Business User typically tries to determine the benefits and implications of the findings to the business.
• Project Sponsor typically asks questions related to the business impact of the project, the risks and return on investment (ROI), and the way the project can be evangelized within the organization (and beyond).
• Project Manager needs to determine if the project was completed on time and within budget and how well the goals were met.
• Business Intelligence Analyst needs to know if the reports and dashboards he manages will be impacted and need to change.
• Data Engineer and Database Administrator (DBA) typically need to share their code from the analytics project and create a technical document on how to implement it.
• Data Scientist needs to share the code and explain the model to her peers, managers, and other stakeholders.
• 133. Although these seven roles represent many interests within a project, these interests usually overlap, and most of them can be met with four main deliverables.
• Presentation for project sponsors: This contains high-level takeaways for executive-level stakeholders, with a few key messages to aid their decision-making process. Focus on clean, easy visuals for the presenter to explain and for the viewer to grasp.
• Presentation for analysts, which describes business process changes and reporting changes. Fellow data scientists will want the details and are comfortable with technical graphs (such as Receiver Operating Characteristic [ROC] curves, density plots, and histograms shown in Chapter 3 and Chapter 7).
• Code for technical people.
• Technical specifications of implementing the code.
As a general rule, the more executive the audience, the more succinct the presentation needs to be. Most executive sponsors attend many briefings in the course of a day or a week. Ensure that the presentation gets to the point quickly and frames the results in terms of value to the sponsor's organization. For instance, if the team is working with a bank to analyze cases of credit card fraud, highlight the frequency of fraud, the number of cases in the past month or year, and the cost or revenue impact to the bank
• 134. (or focus on the reverse: how much more revenue the bank could gain if it addresses the fraud problem). This demonstrates the business impact better than deep dives on the methodology. The presentation needs to include supporting information about analytical methodology and data sources, but generally only as supporting detail or to ensure the audience has confidence in the approach that was taken to analyze the data.
FIGURE 2-9 Key outputs from a successful analytics project (the four deliverables shown are the presentation for project sponsors, the presentation for analysts, the code, and the technical specs)
When presenting to other audiences with more quantitative backgrounds, focus more time on the methodology and findings. In these instances, the team can be more expansive in describing the outcomes, methodology, and analytical experiment with a peer group. This audience will be more interested in the techniques, especially if the team developed a new way of processing or analyzing data that can be reused in the future or applied to similar problems. In addition, use imagery or data visualization when possible. Although it may take more time to develop imagery, people tend to remember mental pictures to
  • 135. demonstrate a point more than long lists of bullets [25]. Data visualization and presentations are discussed further in Chapter 12. 2.8 Case Study: Global Innovation Network and Analysis (GINA) EMC's Global Innovation Network and Analytics (GINA) team is a group of senior technologists located in centers of excellence (COEs) around the world. This team's charter is to engage employees across global COEs to drive innovation, research, and university partnerships. In 2012, a newly hired director wanted to DATA ANALYTICS LIFECYCLE improve these activities and provide a mechanism to track and analyze the related information. In addition, this team wanted to create more robust mechanisms for capturing the results of its informal conversations with other thought leaders within EMC, in academia, or in other organizations, which could later be mined for insights. The GINA team thought its approach would provide a means to share ideas globally and increase knowledge sharing among GINA members who may be separated geographically. It planned to create a data repository containing both structured and unstructured data to accomplish three main goals. o Store formal and informal data.
  • 136. o Track research from global technologists. o Mine the data for patterns and insights to improve the team's operations and strategy. The GINA case study provides an example of how a team applied the Data Analytics Ufecycle to analyze innovation data at EMC.Innovation is typically a difficult concept to measure, and this team wanted to look for ways to use advanced analytical methods to identify key innovators within the company. 2.8.1 Phase 1: Discovery In the GINA project's discovery phase, the team began identifying data sources. Although GINA was a group of technologists skilled in many different aspects of engineering, it had some data and ideas about what it wanted to explore but lacked a formal team that could perform these analytics. After consulting with various experts including Tom Davenport, a noted expert in analytics at Babson College, and Peter Gloor, an expert in collective intelligence and creator of CoiN (Collaborative Innovation Networks) at MIT, the team decided to crowdsource the work by seeking volunteers within EMC. Here is a list of how the various roles on the working team were fulfilled. o Business User, Project Sponsor, Project Manager: Vice President from Office of the CTO o Business Intelligence Analyst: Representatives from IT o Data Engineer and Database Administrator (DBA): Representatives from IT
  • 137. o Data Scientist: Distinguished Engineer, who also developed the social graphs shown in the GINA case study The project sponsor's approach was to leverage social media and blogging [26] to accelerate the col- lection of innovation and research data worldwide and to motivate teams of "volunteer" data scientists at worldwide locations. Given that he lacked a formal team, he needed to be resourceful about finding people who were both capable and willing to volunteer their time to work on interesting problems. Data scientists tend to be passionate about data, and the project sponsor was able to tap into this passion of highly talented people to accomplish challenging work in a creative way. The data for the project fell into two main categories. The first category represented five years of idea submissions from EMC's internal innovation contests, known as the Innovation Road map (formerly called the Innovation Showcase). The Innovation Road map is a formal, organic innovation process whereby employees from around the globe submit ideas that are then vetted and judged. The best ideas are selected for further incubation. As a result, the data is a mix of structured data, such as idea counts, submission dates, inventor names, and unstructured content, such as the textual descriptions of the ideas themselves. 2.8 Case Study: Global Innovation Network and Analysis (GINA)
• 138. The second category of data encompassed minutes and notes representing innovation and research activity from around the world. This also represented a mix of structured and unstructured data. The structured data included attributes such as dates, names, and geographic locations. The unstructured documents contained the "who, what, when, and where" information that represents rich data about knowledge growth and transfer within the company. This type of information is often stored in business silos that have little to no visibility across disparate research teams.
The 10 main IHs that the GINA team developed were as follows:
o IH1: Innovation activity in different geographic regions can be mapped to corporate strategic directions.
o IH2: The length of time it takes to deliver ideas decreases when global knowledge transfer occurs as part of the idea delivery process.
o IH3: Innovators who participate in global knowledge transfer deliver ideas more quickly than those who do not.
o IH4: An idea submission can be analyzed and evaluated for the likelihood of receiving funding.
o IH5: Knowledge discovery and growth for a particular topic can be measured and compared across geographic regions.
o IH6: Knowledge transfer activity can identify research-
  • 139. specific boundary spanners in disparate regions. o IH7: Strategic corporate themes can be mapped to geographic regions. o IHS: Frequent knowledge expansion and transfer events reduce the time it takes to generate a corpo- rate asset from an idea. o IH9: Lineage maps can reveal when knowledge expansion and transfer did not (or has not) resulted in a corporate asset. o IH1 0: Emerging research topics can be classified and mapped to specific ideators, innovators, bound- ary spanners, and assets. The GINA (IHs) can be grouped into two categories: o Descriptive analytics of what is currently happening to spark further creativity, collaboration, and asset generation o Predictive analytics to advise executive management of where it should be investing in the future 2.8.2 Phase 2: Data Preparation The team partnered with its IT department to set up a new analytics sandbox to store and experiment on the data. During the data exploration exercise, the data scientists and data engineers began to notice that certain data needed conditioning and normalization. In addition, the team realized that several missing data sets were critical to testing some of the analytic hypotheses.
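The conditioning and normalization noted above can be illustrated with a small R sketch of the kind of cleanup such a team might apply. The data frame gina_notes and the column researcher_name are hypothetical stand-ins for the project's datastore, not objects from the actual GINA work.
# Minimal conditioning sketch (assumptions: 'gina_notes' is a data frame of
# collected notes and 'researcher_name' is one of its text columns).
gina_notes$researcher_name <- trimws(gina_notes$researcher_name)   # drop leading/trailing spaces
gina_notes$researcher_name <- tolower(gina_notes$researcher_name)  # normalize case for matching
# Review the distinct names to spot obvious misspellings before aggregating
sort(unique(gina_notes$researcher_name))
Small cleanup steps like these make it possible to aggregate records reliably in the later phases.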
  • 140. As the team explored the data, it quickly realized that if it did not have data of sufficient quality or could not get good quality data, it would not be able to perform the subsequent steps in the lifecycle process. As a result, it was important to determine what level of data quality and cleanliness was sufficient for the DATA ANALYTICS LIFECYCLE project being undertaken. In the case of the GINA, the team discovered that many of the names of the researchers and people interacting with the universities were misspelled or had leading and trailing spaces in the datastore. Seemingly small problems such as these in the data had to be addressed in this phase to enable better analysis and data aggregation in subsequent phases. 2.8.3 Phase 3: Model Planning In the GINA project, for much of the dataset, it seemed feasible to use social network analysis techniques to look at the networks of innovators within EMC.In other cases, it was difficult to come up with appropriate ways to test hypotheses due to the lack of data. In one case (IH9), the team made a decision to initiate a longitudinal study to begin tracking data points over time regarding people developing new intellectual property. This data collection would enable the team to test the following two ideas in the future: o IHS: Frequent knowledge expansion and transfer events reduce the amount oftime it takes to generate a corporate asset from an idea.
  • 141. o IH9: Lineage maps can reveal when knowledge expansion and transfer did not (or has not} result(ed) in a corporate asset. For the longitudinal study being proposed, the team needed to establish goal criteria for the study. Specifically, it needed to determine the end goal of a successful idea that had traversed the entire journey. The parameters related to the scope of the study included the following considerations: o Identify the right milestones to achieve this goal. o Trace how people move ideas from each milestone toward the goal. o Once this is done, trace ideas that die, and trace others that reach the goal. Compare the journeys of ideas that make it and those that do not. o Compare the times and the outcomes using a few different methods (depending on how the data is collected and assembled). These could be as simple as t-tests or perhaps involve different types of classification algorithms. 2.8.4 Phase 4: Model Building In Phase 4, the GINA team employed several analytical methods. This included work by the data scientist using Natural Language Processing (NLP} techniques on the textual descriptions of the Innovation Road map ideas. In addition, he conducted social network analysis using Rand RStudio, and then he developed social graphs and visualizations of the network of communications related to innovation using R's ggplot2
• 142. package. Examples of this work are shown in Figures 2-10 and 2-11.
FIGURE 2-10 Social graph [27] visualization of idea submitters and finalists
• 143. FIGURE 2-11 Social graph visualization of top innovation influencers (the legend lists the top betweenness ranks of 578, 511, 341, 171, and 138)
Figure 2-10 shows social graphs that portray the relationships between idea submitters within GINA. Each color represents an innovator from a different country. The large dots with red circles around them represent hubs. A hub represents a person with high connectivity and a high "betweenness" score. The cluster in Figure 2-11 contains geographic variety, which is critical to prove the hypothesis about geographic boundary spanners. One person in this graph has an unusually high score when compared to the rest of the nodes in the graph. The data scientist identified this person and ran a query against his name within the analytic sandbox. These actions yielded the following information about this research scientist (from the social graph), which illustrated how influential he was within his business unit and across many other areas of the company worldwide:
o In 2011, he attended the ACM SIGMOD conference, which is a top-tier conference on large-scale data management problems and databases.
o He visited employees in France who are part of the business unit for EMC's content management teams within Documentum (now part of the Information
  • 144. Intelligence Group, or II G). o He presented his thoughts on the SIGMOD conference at a virtual brown bag session attended by three employees in Russia, one employee in Cairo, one employee in Ireland, one employee in India, three employees in the United States, and one employee in Israel. o In 2012, he attended the SDM 2012 conference in California. o On the same trip he visited innovators and researchers at EMC federated companies, Pivotal and VMware. o Later on that trip he stood before an internal council of technology leaders and introduced two of his researchers to dozens of corporate innovators and researchers. This finding suggests that at least part of the initial hypothesis is correct; the data can identify innovators who span different geographies and business units. The team used Tableau software for data visualization and exploration and used the Pivotal Green plum database as the main data repository and analytics engine. 2.8.5 Phase 5: Communicate Results In Phase 5, the team found several ways to cull results of the analysis and identify the most impactful and relevant findings. This project was considered successful in identifying boundary spanners and hidden innovators. As a result, the CTO office launched longitudinal studies to begin data collection efforts and track innovation results over longer periods of time. The GINA project promoted knowledge sharing related to innovation and researchers spanning multiple areas
  • 145. within the company and outside of it. GINA also enabled EMC to cultivate additional intellectual property that led to additional research topics and provided opportunities to forge relationships with universities for joint academic research in the fields of Data Science and Big Data. In addition, the project was accomplished with a limited budget, leveraging a volunteer force of highly skilled and distinguished engineers and data scientists. One of the key findings from the project is that there was a disproportionately high density of innova- tors in Cork, Ireland. Each year, EMC hosts an innovation contest, open to employees to submit innovation ideas that would drive new value for the company. When looking at the data in 2011, 15% of the finalists and 15% of the winners were from Ireland. These are unusually high numbers, given the relative size of the Cork COE compared to other larger centers in other parts of the world. After further research, it was learned that the COE in Cork, Ireland had received focused training in innovation from an external consultant, which 2.8 Case Study: Global Innovation Network and Analysis (GINA) was proving effective. The Cork COE came up with more innovation ideas, and better ones, than it had in the past, and it was making larger contributions to innovation at EMC. It would have been difficu lt, if not impossible, to identify this cluster of innovators through traditional methods or even anecdotal, word-of-
• 146. mouth feedback. Applying social network analysis enabled the team to find a pocket of people within EMC who were making disproportionately strong contributions. These findings were shared internally through presentations and conferences and promoted through social media and blogs.
2.8.6 Phase 6: Operationalize
Running analytics against a sandbox filled with notes, minutes, and presentations from innovation activities yielded great insights into EMC's innovation culture. Key findings from the project include these:
• The CTO office and GINA need more data in the future, including a marketing initiative to convince people to inform the global community on their innovation/research activities.
• Some of the data is sensitive, and the team needs to consider security and privacy related to the data, such as who can run the models and see the results.
• In addition to running models, a parallel initiative needs to be created to improve basic Business Intelligence activities, such as dashboards, reporting, and queries on research activities worldwide.
• A mechanism is needed to continually reevaluate the model after deployment. Assessing the benefits is one of the main goals of this stage, as is defining a process to retrain the model as needed (a small sketch of such a check follows this list).
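To make the kind of post-deployment check described in Phase 6 concrete, here is a minimal R sketch of an out-of-bounds monitor. The object names (train_data, new_inputs) and the 5% threshold are illustrative assumptions, not part of the GINA project itself.
# Minimal monitoring sketch (assumptions: 'train_data' holds the inputs the
# model was trained on and 'new_inputs' holds incoming production inputs).
train_range <- range(train_data$num_of_orders)
# Flag inputs that fall outside the range seen during training ("out-of-bounds")
out_of_bounds <- new_inputs$num_of_orders < train_range[1] |
                 new_inputs$num_of_orders > train_range[2]
if (mean(out_of_bounds) > 0.05) {
  # More than 5% of recent inputs fall outside the training range; in a real
  # deployment this could raise an alert and queue the model for retraining.
  warning("Model inputs drifting out of the training range; consider retraining.")
}
Checks of this kind give the team an objective trigger for deciding when retraining is warranted.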
• 147. In addition to the actions and findings listed, the team demonstrated how analytics can drive new insights in projects that are traditionally difficult to measure and quantify. This project informed investment decisions in university research projects by the CTO office and identified hidden, high-value innovators. In addition, the CTO office developed tools to help submitters improve ideas using topic modeling as part of new recommender systems to help idea submitters find similar ideas and refine their proposals for new intellectual property. Table 2-3 outlines an analytics plan for the GINA case study example. Although this project shows only three findings, there were many more. For instance, perhaps the biggest overarching result from this project is that it demonstrated, in a concrete way, that analytics can drive new insights in projects that deal with topics that may seem difficult to measure, such as innovation.
TABLE 2-3 Analytic Plan from the EMC GINA Project
Components of Analytic Plan | GINA Case Study
Discovery: Business Problem Framed | Tracking global knowledge growth, ensuring effective knowledge transfer, and quickly converting it into corporate assets. Executing on these three elements should accelerate innovation.
Discovery: Initial Hypotheses | An increase in geographic knowledge transfer improves the speed of idea delivery.
Data | Five years of innovation idea submissions and history; six months of textual notes from global innovation and research activities
Model Planning: Analytic Technique | Social network analysis, social graphs, clustering, and regression analysis
Result and Key Findings | 1. Identified hidden, high-value innovators and found ways to share their knowledge. 2. Informed investment decisions in university research projects. 3. Created tools to help submitters improve ideas with idea recommender systems.
Innovation is an idea that every company wants to promote, but it can be difficult to measure innovation or identify ways to increase innovation. This project explored this issue from the standpoint of evaluating informal social networks to identify boundary spanners and influential people within innovation sub-networks. In essence, this project took a seemingly nebulous problem and applied advanced analytical methods to tease out answers using an objective, fact-based approach.
Another outcome from the project included the need to supplement analytics with a separate datastore for Business Intelligence reporting, accessible to search innovation/research initiatives. Aside from supporting decision making, this will provide a mechanism to be informed on discussions and research happening worldwide among team members in disparate locations. Finally, it highlighted the value that can be gleaned through data and subsequent analysis. Therefore, the need was identified to start formal marketing programs to convince people to submit (or inform) the global community on their innovation/research activities. The knowledge sharing was critical. Without it, GINA would not have been able to perform the analysis and identify the hidden innovators within the company.
Summary
This chapter described the Data Analytics Lifecycle, which is an approach to managing and executing analytical projects. This approach describes the process in six phases.
1. Discovery
2. Data preparation
3. Model planning
4. Model building
5. Communicate results
6. Operationalize
  • 150. be inform ed on discussions and research happening world wide among team members in disparate locations. Fina lly, it highlighted the value that ca n be g leaned t hrough data and subsequent analysis. Therefore, the need was identified to start f orm al marketi ng program s to convince people to su bmit (or inform) the global commun ity on their innovation/ research activities. The knowledge sharing was critical. Without it, GINA would not have been able t o perfo rm the analysis and identify the hidden innovators w ithin the company. Summary This chapter described the Dat a Analytics Lifecycle, which is an approach to manag ing and exec uting analytical project s. This appro ach describes the process in six phases. 1. Discovery 2. Data preparation 3. Model planning 4. Model building 5. Co mmun ica te results 6 . Operationali ze
  • 151. Through these steps, data science teams can identify problems and perform rig orous investigati on of the dat asets needed for in-d epth analysi s. As stated in the chapt er, although much is w ri tten about the analytical met hods, t he bulk of t he t ime spent on these kinds of project s is spent in preparation-namely, Bibliography in Phases 1 and 2 (discovery and data preparation). In addition, this chapter discussed the seven roles needed for a data science team.lt is critical that organizations recognize that Data Science is a team effort, and a balance of skills is needed to be successful in tackling Big Data projects and other complex projects involving data analytics. Exercises 1 . In which phase would the team expect to invest most of the project time? Why? Where would the team expect to spend the least time? 2 . What are the benefits of doing a pilot program before a full - scale rollout of a new analytical method- ology? Discuss this in the context of the mini case study. 3. What kinds of tools would be used in the following phases,
• 152. and for which kinds of use scenarios?
a. Phase 2: Data preparation
b. Phase 4: Model building
Bibliography
[1] T. H. Davenport and D. J. Patil, "Data Scientist: The Sexiest Job of the 21st Century," Harvard Business Review, October 2012.
[2] J. Manyika, M. Chiu, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A. H. Byers, "Big Data: The Next Frontier for Innovation, Competition, and Productivity," McKinsey Global Institute, 2011.
[3] "Scientific Method" [Online]. Available: http://en.wikipedia.org/wiki/Scientific_method.
[4] "CRISP-DM" [Online]. Available: http://en.wikipedia.org/wiki/Cross_Industry_Standard_Process_for_Data_Mining.
[5] T. H. Davenport, J. G. Harris, and R. Morison, Analytics at Work: Smarter Decisions, Better Results, 2010, Harvard Business Review Press.
[6] D. W. Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, 2010, Hoboken, NJ: John Wiley & Sons.
[7] J. Cohen, B. Dolan, M. Dunlap, J. M. Hellerstein, and C. Welton, MAD Skills: New Analysis Practices for Big Data, Watertown, MA, 2009.
[8] "List of APIs" [Online]. Available: http://www.programmableweb.com/apis.
[9] B. Shneiderman [Online]. Available: http://www.ifp.illinois.edu/nabhcs/abstracts/shneiderman.html.
[10] "Hadoop" [Online]. Available: http://hadoop.apache.org.
[11] "Alpine Miner" [Online]. Available: http://alpinenow.com.
[12] "OpenRefine" [Online]. Available: http://openrefine.org.
[13] "Data Wrangler" [Online]. Available: http://vis.stanford.edu/wrangler/.
[14] "CRAN" [Online]. Available: http://cran.us.r-project.org.
[15] "SQL" [Online]. Available: http://en.wikipedia.org/wiki/SQL.
[16] "SAS/ACCESS" [Online]. Available: http://www.sas.com/en_us/software/data-management/access.htm.
  • 153. [8] " List of APis" [Online]. Available: http : // www. programmableweb . com/apis. [9] B. Shneiderman [Online]. Available: http : //www . ifp . illinois . edu/nabhcs/ abstracts/shneiderman.html. [10) "Hadoop" [Online]. Available: http : //hadoop . apache . org. [11] "Alpine Miner" [Online]. Available: http : // alpinenow . com. [12] "OpenRefine" [Online]. Available: http : // open refine . org. [13] "Data Wrang ler" [Online). Avai lable: http : //vis . stanford . edu/wrangler / . [14] "CR AN" [Online). Available: http: // cran . us . r-project . org. [15] "SQL" [Online]. Available: h tt p: //en . wikipedia . org/wiki/SQL. [16) "SAS/ACCESS" [Onl ine]. Available: http : //www . sas . com/en_ us/software/ data-management/access . htm. DATA ANALYTICS LIFECYCLE [17] "SAS Enterprise Miner" [Online]. Available: http: I /www. sas. com/ en_us/ software/ analytics/enterprise-miner.html. [18] "SPSS Modeler" [Online]. Available: http: I /www- 03. ibm. com/ software/products/ en/category/business-analytics. [19] "Matlab" [Online]. Available: http: I /www. mathworks.
  • 154. com/products/matlab/. [20] "Statistica" [Online]. Available: https: I /www. statsoft. com. [21] "Mathematica" [Online]. Available: http: I /www. wolfram. com/mathematical. [22] "Octave" [Online]. Available: https: I /www. gnu. erg/software/octave/. [23] "WEKA" [Online]. Available: http: I /www. cs. waikato. ac. nz/ml/weka/. [24] "MADiib" [Online]. Available: http: I /madl ib. net. [25] K. L. Higbee, Your Memory-How It Works and How to Improve It, New York: Marlowe & Company, 1996. [26] S. Todd, "Data Science and Big Data Curriculum" [Online]. Available: http: I I steve todd .typepad.com/my_weblog/data-science-and-big-data- curriculum/. [27] T. H Davenport and D. J. Patil, "Data Scientist: The Sexiest Job of the 21st Century," Harvard Business Review, October 2012. REVIEW OF BASIC DATA ANALYTIC METHODS USING R The previous chapter presented the six phases of the Data Analytics Lifecycle. • Phase 1: Discovery • Phase 2: Data Preparation
  • 155. • Phase 3: Model Planning • Phase 4: Model Building • Phase 5: Communicate Results • Phase 6: Operationalize The first three phases involve various aspects of data exploratio n. In general, the success of a da ta analysis project requires a deep understanding of the data. It also requires a toolbox for mining and pre- senting the data. These activities include the study of the data in terms of basic statistical measures and creation of graphs and plots to visualize and identify relationships and patterns. Severa l free or commercial tools are available for exploring, conditioning, modeling, and presenting data. Because of its popularity and versatility, the open-source programming language R is used to illustrate many of the presented analytical tasks and models in this book. This chapter introduces the basic functionality of the R programming language and environment. The first section gives an overview of how to useR to acquire, parse, and filter the data as well as how to obtain some basic descriptive statistics on a dataset. The second section examines using R to perform exploratory data analysis tasks using visua lization. The final section focuses on statistical inference, such as hypothesis
  • 156. testing and analysis of variance in R. 3.1 Introduction toR R is a programming language and software framework for statistical analysis and graphics. Available for use under the GNU General Public License [1], R software and installation instructions can be obtained via the Comprehensive R Archive and Network [2]. This section provides an overview of the basic functionality of R. In later chapters, this foundation in R is utilized to demonstrate many of the presented analytical techniques. Before delving into specific operations and functions of R later in this chapter, it is important to under- stand the now of a basic R script to address an analytical problem. The following R code illustrates a typical analytical situation in which a dataset is imported, the contents of the dataset are examined, and some modeling building tasks are executed. Although the reader may not yet be familiar with the R syntax, the code can be followed by reading the embedded comments, denoted by #. In the following scenario, the annual sales in U.S. dollars for 10,000 retail customers have been provided in the form of a comma- separated-value (CS V) file. The read . csv () function is used to import the CSV file. This dataset is stored to the R variable sales using the assignment operator <- . H imp< rt a CSV file of theo tot'l.- annual sa es for each customcl· sales <- read . csv( "c:/data/yearly_sales.csv") # €'>:amine Lhe imported dataset head(sales)
  • 157. summary (sales) # plot num_of_orders vs. sales plot(sales$num_of_orders,sales$sales_total, main .. "Number of Orders vs. Sales") # perform a statistical analysis (fit a linear regression model) results <- lm(sales$sales_total - sales$num_of_orders) summary(results) # perform some diagnostics on the fitted model # plot histogram of the residuals hist(results$residuals, breaks .. 800) 3.1 Introduction toR In this example, the data file is imported using the read. csv () function. Once the file has been imported, it is useful to examine the contents to ensure that the data was loaded properly as well as to become familiar with the data. In the example, the head ( ) function, by default, displays the first six records of sales. # examine the imported dataset head(sales) cust id sales total num of orders gender - - - - 100001 800.64 2 100002 217.53
  • 158. 100003 74.58 2 t·l 4 100004 ·198. 60 t•l 5 100005 723.11 4 F 6 100006 69.43 2 F The summary () function provides some descriptive statistics, such as the mean and median, for each data column. Additionally, the minimum and maximum values as well as the 1st and 3rd quartiles are provided. Because the gender column contains two possible characters, an "F" (female) or "M" (male), the summary () function provides the count of each character's occurrence. summary(sales) cust id i"lin. :100001 1st Qu . : 1 o 2 5o 1 l>ledian :105001 !>lean :105001 3rd Qu. :107500 llax. : 110 0 0 0 sales total !'-lin. 30.02 1 s t Qu . : 8 0 . 2 9 r'ledian : 151.65 t•lean 24 9. 46 3 rd Qu. : 2 9 5 . 50 t•lax. :7606.09 num of orde1·s gender !'-lin. 1.000 F:5035 1st Qu.: 2.000 !·l: 4965
• 159. FIGURE 3-3 RStudio GUI (screenshot showing the four RStudio panes, Scripts, Workspace, Plots, and Console, with the previous example's code, the histogram of results$residuals, and the console output visible)
  • 160. I I i i i 8 8 0 0 0 ~ C/) 0 0 Q> 0 • 8 iii N I I C/) • 0 5 10 15 20 sales$num_ of_ orders FtGURE 3-1 Graphically examining th e data Each point corresponds to the number of orders and the total sales for each customer. The plot indicates that the annual sales are proportional to the number of orders placed. Although the observed relationship between these two variables is not purely linear, the ana lyst decided to apply linear regression using the lm () function as a first step in the model ing process. r esul t s <- lm(sal es$sa l es_total - sales$num_of _o r de rs) r e s ults ca.l: lm formu.a sa.c Ssales ~ota. sales$num_of_orders Coefti 1en· In· er ep • sa:essnum o f orders The resulting intercept and slope values are -154.1 and 166.2, respectively, for the fitted linear equation. However, results stores considerably more information that can be examined with the summary () fun ction. Details on the contents of results are examined by applying the at t ributes () fun ction.
• 161. (Screenshot for Figure 3-4: the RStudio console after entering ?lm, with the help page for lm, "Fitting Linear Models," displayed in the Help pane, including its Description, Usage, and Arguments sections.)
  • 162. The summary () function is an example of a generic function. A generic function is a group of fu nc- tions sharing the same name but behaving differently depending on the number and the type of arguments they receive. Utilized previously, plot () is another example of a generic function; the plot is determi ned by the passed variables. Generic functions are used throughout this chapter and t he book. In t he final portion of the example, the foll owing R code uses the generic function hist () to ge nerate a histogram (Figure 3-2) of t he re siduals stored in results. The function ca ll illustrates that optional parameter values can be passed. In this case, the number of breaks is specified to observe the large residua ls. ~ pert H. some d13gnosLics or. the htted m .. de. # plot hist >gnm f the residu, ls his t (r esults $res idua l s, breaks= 8 00) Histogra m of resultsSresid uals 0 I() >-u c 0 .. 0 :J <:T ~ 0 u. I() 0 0 1000 2000 3000 resuttsSres1duals
  • 163. FIGURE 3-2 Evidence of large residuals 4000 This simple example illustrates a few of the basic model planning and bu ilding tasks that may occur in Phases 3 and 4 of the Data Analytics Lifecycl e. Throughout this chapter, it is useful to envision how the presented R fun ctionality will be used in a more comprehensive analysis. 3.1.1 R Graphical User Interfaces R software uses a command-line interface (CLI) that is similar to the BASH shell in Li nux or the interactive versions of scripting languages such as Python. UNIX and Linux users can enter command Rat the termina l prompt to use the CU. For Windows installations, R comes with RGui.exe, which provides a basic graphica l user interface (GU I). However, to im prove the ease of writing, executing, and debugging R code, several additional GUis have been written for R. Popular GUis include the R commander [3]. Ra ttle [4], and RStudio [5). This section presents a brief overview of RStudio, w hich was used to build the R examples in th is book. Figure 3-3 provides a screenshot of the previous R code example executed in RStudio. RE V IEW O F BASIC D ATA ANALYTIC M ETHODS USING R ._ CNtl. .. .. t 1 ules .. r t -.d.uv
  • 164. " ... tw..o u l n ' 1 • -...1 • n • • "' 6llu,-...,.1.,_uh,1.u.,· !~uln • - - ....._.. -....y .. tJ - ..... O....ft• f .:1- .-------, ulu 10000 OM. ol " " whbhs Scripts rn~o~hs ... .. , ... ... ... ... .•. ... ... ... plot u1ts~of-orcl9ors,u1es ules_tot•l, utn-·~ ~ cwo...s '"· s..ln . ' ruulu t"ltSIIIU •. ' to' 1 ' Jo flt I T 1• uln ,u lu_utul ulu~,..._ot_orct«'s 11 luu - .... ,..,.,... ""'- ; :- •toMtt· 0 { a. ..... j Workspace '" lU
  • 165. Ul •Ill r t f " ~ I hht ruuhs1t"tst~•h. br u11 • 100 r Histogram or resu lts$ reslduals ·-'" •• • iu~-~.~ ... ~------:=::::::::::::::::~ "s-uy(resulu) c•H: l•(f or-..1& - ulu lulu_uul .. saln~of_orcMf's) ... , .... ls : tUn tQ lllt'dhn 1Q ~b · 6M. 1 - US, •U .7 t6.6 .&10), <1 lsttuu Ud. Crrw 1 " ' ',.,. k(,.Jtl) omuupt) . ,,,, u, •.ut -Jr.n ~ •. ,, •·• ul•s~oLoro.-' u6. Ul 1. 41 61 uJ.M <-lt- 16 ••• sttnU . codts : o '• •• ' o. oot •••• 0.01 ••• o.os •, • 0.1 • • 1 l:utdlu• 1 sund¥d uror: 110.1 on tnt CS•vr"' or fr~ ttyttph I:•S.,¥-.1: 0 , $6 17, .lodJw~t f'd l•squ.vl'd: 0 . )6)7 f" • Uathttc : t . ltl••Ool on I wrd 9Ht Of", P•VA1U41: ot l . h•16 ,. #pf'rfor• ~- diA~ltn Of' Uw ttlud _,.., Jo ?lot hhtOQrM ot tt,. re~lduh ~ Mn(ruwh~k'n lcN.th, .,.,,,, · tOO) FIGURE 3-3 RStudio GUI .:1 Console
  • 166. j The fou r highlighted window panes follow. ~ ~ j ~ ~ • Scripts: Serves as an area to write and saveR code 1000 • Workspace: Lists the datasets and variables in the R environment Plots 2000 3000 • Plot s: Displays the plots generated by the R code and provides a straightfor ward mechanism to export the plots • Co nsole: Provides a history of the executed R code and the output Additionally, the console pane can be used to obtain help information on R. Figure 3-4 illustrates that by entering ? lm at the console prompt, the help details of the lm ( ) function are provided on the right.
  • 167. Alternatively, help ( lm ) could have been entered at the console prompt. Functions such as edit () and fix () allow the user to update the contents of an R variable. Alternatively, such changes can be implemented with RStudio by selecting the appropriate variable from the workspace pan e. R allows one to save the work space environment, includ ing variables and loaded li brari es, into an . Rdata fil e using the save . image () function. An existing . Rdata file can be loaded using the load . image () function . Tools such as RS tud io prompt the user for whether the developer wants to save the workspace connects prior to exiting the GUI. The reader is encouraged to install Rand a preferred GUI to try out the R examples provided in the book and utilize the help functionality to access more details about the discussed topics. ., .. t tt • • ........... , 41 1ft f Ot l.a.l:?' ~~ t" u lu N r e...cl.csv 4111 , • .,.1yJ•1n.u " ... ... . .. hud ulu s~salu . ,. ,. .. . ._. . "' .. , plot ultJ l......_.,....ot'MrS ,Uh.tSnles.toul. .. ,,..-~of arden'''~· ~·u· , ..
  • 168. >01 ... >07 , .. , .. 110 • f • .I H-It t II t>t .II It~ r--r;~l~ ~) ru.ulu 1• uluSulu,.ICKil ulus~of .. crd~ ru~ln Ul "r•·f dl 1 nt,,d~l U7 •ph•t hi t< t t~ '"' ••I lU hht ru11hs Srut duah, bru~s • 100 1 u• --- ,;.·,;·iah .... ~ •• ;;;.;::.:---------=========----' un: lo(f or-.la .. ultsSul•s .. tO'CAI .. uohsJtuil,..of_or~s ) luto.uh : Min IQ .. fltiM )Q '""' ~tM. , · US. S • lt.7 M , t 4110), 41 CCHff lc.l..r:ts: Uttaau St d. lrror t "' ' '"' "'(•I t I> (fnurcept) -1~.UI <I.Ut - J 1.JJ c2t-l6 ••· u1e~ lnwa..ot_orws 1641.211 1."62 UI.M -.l t - 16 ... stvnu. cCIIOH.: o ···-· 0.001 •••• 0.01 ••• o.o ·.• o. t • • 1 • ntdvll nln<t¥0 tf'ror: 210.1 on t9N detJ't.S of frt~ tt~~lt tple •-•qwwtd: 0. MJ7. AdJW1ttO •·squAred: o. ~617 r -su thttc : l.2t2 ... o.& on 1 1t10 tttl cw , .,._., ,..,.: c 2.21-16 ) • .,..,.,0,.. ·- ctl~tt( 01'1 tr.. fltf'G ..,., • • plot ,.,,uq• of ttw rnldv• h
  • 169. .. "'ht(r~whs~~sl..,.h, lilr'Uh • toO) ... ,. • I - FIGURE 3-4 Accessing help in Rstudio -- , - 3.1.2 Data Import and Export 3.1 Introduction toR J U .:J- ulu .. ...... o.ttl01 • i - ;:, ~· ru~o~lu J ,.. .... ,~ """ "' ...... .z.. ll f"'*"t 1--Ut M-" '" Fitting Linear Models .:J o .. c:ttptSon ~-u&t411 kllftl•f"'Idek I CM ... vt .. IIC:WfYWft9'*1- UIQ!It1UIIIIIII~II-I .nd 1Nfyt4vlt-•(~ • ·~~ . _.,_...clfbab'UMn) uqrcrw:a!• , 41 c..•, , ..... ,,,, .,.l111ht•, ft • . • etlc.tl,
  • 170. _,bOd • • q" · · .o., .. T~. • .. ~. y .. r;u..,r, ql" • ru:z. •~h:.ct • TWI, cMtr uu • II"J:.I. , o thu, . .. ) Argument1 ', f QI&l} & J ..... lf'lot.,Kt~tl&•••t· t• liii.•(OfiDMII'IIfunbece«cldlotNIWIII ttymtdcestiiCI¥IbOnol I ::::::.~.:. ~:-:=',::7'-=:::.7::~·~101CIU I ...... )t ...... tlw""'*''"tlltfNdee fnalbn:l.ncl&:.• tt.~ ... tll ... hm u ... tr_::~, ft;....,ll l tYJIIUI1CIIt.,....,....,.kwaiiiii'O~•uhcl II'I~'IIIKtorl~l ....... fl .......... lObeuMO~UMtc,.,poctst II'I__......U.tl....,_,teMI<MCI•t!lotiftaltJI'X-tsl SPIOII4 c.r-- ..:.or•-...aor 1--'UL ....,C~IM1Il ...... d•l4~..qll vt:~t•~• ~ng , .. r .... . -,,, ...,.,...,.,.,.,,., ... ...,., .. YMO S..lban.tan .=.J In the annual retail sales example, the dataset was imported into R using the read . csv () function as in the following code. sales <- read . csv("c : /data/yearly_ sales . csv" ) R uses a forward slash {!) as the separator character in the d irectory and file path s. This convention makes script Iiies somewhat more portable at the expense of some initial confusion on the part of Windows users, w ho may be accustomed to using a backslash () as a
  • 171. separator. To simpl ify the import of multiple Iiies w ith long path names, the setwd () func tion can be used to set the working d irectory for the su bsequent impo rt and export o perations, as show n in the fo llow ing R cod e. setwd ( "c: / data / ") sales < - read.csv ( "yearly_sales . csv " ) Other import functions include read. table ( l and read . de lim () ,which are intended to import other common fil e typ es such as TXT. These function s can also be used to import the yearly _ sales . csv fil e, as the following code illustrates. sales_table <- read .table ( "yearly_sales . csv " , header=TRUE, sep="," ) sales_delim <- read . delim ( "yearly_ sales . csv", sep=",") Th e ma in difference between these import fun ctions is the d efault values. For example, t he read . de lim () function expect s the column separator t o be a tab(" t"). ln the event tha t the numerical data REVIEW OF BASIC DATA ANALYTIC METHODS USING R in a data file uses a comma for the decimal, R also provides two additional functions-read . csv2 () and
  • 172. read . del im2 ()-to import such data. Table 3-1 includes the expected defaults for headers, column separators, and decimal point notations. TABLE 3-1 Import Function Defaults Function Headers Separator Decimal Point r ead. t abl e () FALSE r ead. csv () TRUE r ead. csv2 ( ) TRUE "·" . read . d e lim () TRUE "t " r ead. d elirn2 () TRUE "t " .. . The analogous R functions such as write . table () ,write . csv () ,and write . csv 2 () enable exporting of R datasets to an external fi le. For example, the following R code adds an additional column to the sales dataset and exports the modified dataset to an external file. t; ?dd 1 ,... ... l,;.f'1Il t .. • h.;;. _'b:!l JUt :i ~ ~ j sales$per_order < - sa l es$sales_total/sales$num_of _orders # exp 1 L d1ta 1s ''ll' 1.-.. <r u 1 "' L!1< llL t n• t '1'.' name,, write . t a ble(sales ," sa l es_modified .txt ", sep= "t ", row. names=FALSE Sometimes it is necessary to re ad data from a database
management system (DBMS). R packages such as DBI [6] and RODBC [7] are available for this purpose. These packages provide database interfaces for communication between R and DBMSs such as MySQL, Oracle, SQL Server, PostgreSQL, and Pivotal Greenplum. The following R code demonstrates how to install the RODBC package with the install.packages() function. The library() function loads the package into the R workspace. Finally, a connector (conn) is initialized for connecting to a Pivotal Greenplum database training2 via open database connectivity (ODBC) with user user. The training2 database must be defined either in the /etc/ODBC.ini configuration file or using the Administrative Tools under the Windows Control Panel.

install.packages("RODBC")
library(RODBC)
conn <- odbcConnect("training2", uid="user", pwd="password")

The connector needs to be present to submit a SQL query to an ODBC database by using the sqlQuery() function from the RODBC package. The following R code retrieves specific columns from the housing table in which household income (hinc) is greater than $1,000,000.

housing_data <- sqlQuery(conn, "select serialno, state, persons, rooms
                                from housing where hinc > 1000000")
head(housing_data)

Although plots can be saved using the RStudio GUI, plots can also be saved using R code by specifying the appropriate graphic devices. Using the jpeg() function, the following R code creates a new JPEG file, adds a histogram plot to the file, and then closes the file. Such techniques are useful when automating
standard reports. Other functions, such as png(), bmp(), pdf(), and postscript(), are available in R to save plots in the desired format.

jpeg(file="c:/data/sales_hist.jpeg")   # create a new jpeg file
hist(sales$num_of_orders)              # export histogram to jpeg
dev.off()                              # shut off the graphic device

More information on data imports and exports can be found at http://cran.r-project.org/doc/manuals/r-release/R-data.html, such as how to import datasets from statistical software packages including Minitab, SAS, and SPSS.

3.1.3 Attribute and Data Types

In the earlier example, the sales variable contained a record for each customer. Several characteristics, such as total annual sales, number of orders, and gender, were provided for each customer. In general, these characteristics or attributes provide the qualitative and quantitative measures for each item or subject of interest. Attributes can be categorized into four types: nominal, ordinal, interval, and ratio (NOIR) [8]. Table 3-2 distinguishes these four attribute types and shows the operations they support. Nominal and ordinal attributes are considered categorical attributes, whereas interval and ratio attributes are considered
numeric attributes.

TABLE 3-2 NOIR Attribute Types

             Categorical (Qualitative)                        Numeric (Quantitative)
             Nominal                   Ordinal                Interval                  Ratio
Definition   The values represent      Attributes imply       The difference between    Both the difference and
             labels that distinguish   a sequence.            two values is             the ratio of two values
             one from another.                                meaningful.               are meaningful.
Examples     ZIP codes, nationality,   Quality of diamonds,   Temperature in Celsius    Age, temperature in
             street names, gender,     academic grades,       or Fahrenheit, calendar   Kelvin, counts, length,
             employee ID numbers,      magnitude of           dates, latitudes          weight
             TRUE or FALSE             earthquakes
Operations   =, ≠                      =, ≠, <, ≤, >, ≥       =, ≠, <, ≤, >, ≥,         =, ≠, <, ≤, >, ≥,
                                                              +, -                      +, -, ×, ÷
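To make the mapping to R concrete, the short sketch below is not part of the original sales example; it shows one common way each NOIR type is represented in R: nominal values as factors, ordinal values as ordered factors, and interval and ratio values as numeric vectors. The variable names and values are illustrative only.

# nominal: unordered labels
gender  <- factor(c("F", "M", "F"))
# ordinal: labels with a defined sequence
quality <- factor(c("Good", "Fair", "Ideal"),
                  levels = c("Fair", "Good", "Ideal"), ordered = TRUE)
# interval: differences are meaningful, ratios are not
temps_c <- c(20.5, 25.0, 30.5)
# ratio: both differences and ratios are meaningful
age     <- c(25, 50, 75)
age / 2          # a meaningful ratio for a ratio attribute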
Data of one attribute type may be converted to another. For example, the quality of diamonds {Fair, Good, Very Good, Premium, Ideal} is considered ordinal but can be converted to nominal {Good, Excellent} with a defined mapping. Similarly, a ratio attribute like Age can be converted into an ordinal attribute such as {Infant, Adolescent, Adult, Senior}. Understanding the attribute types in a given dataset is important to ensure that the appropriate descriptive statistics and analytic methods are applied and properly interpreted. For example, the mean and standard deviation of U.S. postal ZIP codes are not very meaningful or appropriate. Proper handling of categorical variables will be addressed in subsequent chapters. Also, it is useful to consider these attribute types during the following discussion on R data types.

Numeric, Character, and Logical Data Types

Like other programming languages, R supports the use of numeric, character, and logical (Boolean) values. Examples of such variables are given in the following R code.

i     <- 1            # create a numeric variable
sport <- "football"   # create a character variable
flag  <- TRUE         # create a logical variable

R provides several functions, such as class() and typeof(), to
examine the characteristics of a given variable. The class() function represents the abstract class of an object. The typeof() function determines the way an object is stored in memory. Although i appears to be an integer, i is internally stored using double precision. To improve the readability of the code segments in this section, the inline R comments are used to explain the code or to provide the returned values.

class(i)       # returns "numeric"
typeof(i)      # returns "double"
class(sport)   # returns "character"
typeof(sport)  # returns "character"
class(flag)    # returns "logical"
typeof(flag)   # returns "logical"

Additional R functions exist that can test the variables and coerce a variable into a specific type. The following R code illustrates how to test if i is an integer using the is.integer() function and to coerce i into a new integer variable, j, using the as.integer() function. Similar functions can be applied for double, character, and logical types.

is.integer(i)        # returns FALSE
j <- as.integer(i)   # coerces contents of i into an integer
is.integer(j)        # returns TRUE

The application of the length() function reveals that the created
variables each have a length of 1. One might have expected the returned length of sport to have been 8 for each of the characters in the string "football". However, these three variables are actually one-element vectors.

length(i)       # returns 1
length(flag)    # returns 1
length(sport)   # returns 1 (not 8 for "football")

Vectors

Vectors are a basic building block for data in R. As seen previously, simple R variables are actually vectors. A vector can only consist of values in the same class. The tests for vectors can be conducted using the is.vector() function.

is.vector(i)       # returns TRUE
is.vector(flag)    # returns TRUE
is.vector(sport)   # returns TRUE

R provides functionality that enables the easy creation and manipulation of vectors. The following R
code illustrates how a vector can be created using the combine function, c(), or the colon operator, :, to build a vector from the sequence of integers from 1 to 5. Furthermore, the code shows how the values of an existing vector can be easily modified or accessed. The code, related to the z vector, indicates how logical comparisons can be built to extract certain elements of a given vector.

u <- c("red", "yellow", "blue")   # create a vector "red" "yellow" "blue"
u                                 # returns "red" "yellow" "blue"
u[1]                              # returns "red" (1st element in u)

v <- 1:5                          # create a vector 1 2 3 4 5
v                                 # returns 1 2 3 4 5
sum(v)                            # returns 15

w <- v * 2                        # create a vector 2 4 6 8 10
w                                 # returns 2 4 6 8 10
w[3]                              # returns 6 (the 3rd element of w)

z <- v + w                        # sums two vectors element by element
z                                 # returns 3 6 9 12 15
z > 8                    # returns FALSE FALSE TRUE TRUE TRUE
z[z > 8]                 # returns 9 12 15
z[z > 8 | z < 5]         # returns 3 9 12 15 ("|" denotes "or")

Sometimes it is necessary to initialize a vector of a specific length and then populate the content of the vector later. The vector() function, by default, creates a logical vector. A vector of a different type can be specified by using the mode parameter. The vector c, an integer vector of length 0, may be useful when the number of elements is not initially known and the new elements will later be added to the end of the vector as the values become available.

a <- vector(length=3)              # create a logical vector of length 3
a                                  # returns FALSE FALSE FALSE

b <- vector(mode="numeric", 3)     # create a numeric vector of length 3
typeof(b)                          # returns "double"
b[2] <- 3.1                        # assign 3.1 to the 2nd element
b                                  # returns 0.0 3.1 0.0

c <- vector(mode="integer", 0)     # create an integer vector of length 0
c                                  # returns integer(0)
length(c)                          # returns 0

Although vectors may appear to be analogous to arrays of one dimension, they are technically dimensionless, as seen in the following R code. The concept of arrays
and matrices is addressed in the following discussion.

length(b)   # returns 3
dim(b)      # returns NULL (an undefined value)

Arrays and Matrices

The array() function can be used to restructure a vector as an array. For example, the following R code builds a three-dimensional array to hold the quarterly sales for three regions over a two-year period and then assigns the sales amount of $158,000 to the second region for the first quarter of the first year.

# the dimensions are 3 regions, 4 quarters, and 2 years
quarterly_sales <- array(0, dim=c(3,4,2))
quarterly_sales[2,1,1] <- 158000
quarterly_sales

, , 1

       [,1] [,2] [,3] [,4]
[1,]      0    0    0    0
[2,] 158000    0    0    0
[3,]      0    0    0    0

, , 2

     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0

A two-dimensional array is known as a matrix. The following code initializes a matrix to hold the quarterly sales for the three regions. The parameters nrow and ncol define the number of rows and columns, respectively, for the sales_matrix.

sales_matrix <- matrix(0, nrow = 3, ncol = 4)
sales_matrix

     [,1] [,2] [,3] [,4]
[1,]    0    0    0    0
[2,]    0    0    0    0
[3,]    0    0    0    0
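As a small aside not in the original example, dimension names can make such a matrix easier to read and index. The region and quarter labels below are illustrative assumptions, matching the layout of the example above (rows as regions, columns as quarters).

quarterly_sales_2d <- matrix(0, nrow = 3, ncol = 4)
dimnames(quarterly_sales_2d) <- list(c("region1", "region2", "region3"),
                                     c("Q1", "Q2", "Q3", "Q4"))
quarterly_sales_2d["region2", "Q1"] <- 158000   # assign by name instead of numeric index
quarterly_sales_2d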
R provides the standard matrix operations such as addition, subtraction, and multiplication, as well as the transpose function t() and the inverse matrix function matrix.inverse() included in the matrixcalc package. The following R code builds a 3x3 matrix, M, and multiplies it by its inverse to obtain the identity matrix.

library(matrixcalc)
M <- matrix(c(1,3,3,5,0,4,3,3,3), nrow = 3, ncol = 3)   # build a 3x3 matrix
  • 185. r.import a CSV :ile of the total annual sales :or each customer s ales < - read . csv ( "c : / data / ye arly_s a l es . c sv" ) i s .da t a . f r ame (sal e s ) ~ t·eturns TRUE As seen earlier, the variables stored in the data frame can be easily accessed using the $ notation. The following R code illustrates that in this example, each variable is a vector with the exception of gende r , which wa s, by a read . csv () default, imported as a factor. Discussed in detail later in this section, a fa ctor denotes a categorical variable, typically with a few finite levels such as "F" and "M " in the case of gender. l e ngth(sal es$num_o f _o r ders) returns 10000 (number of customers) i s . v ector(sales$cust id) returns TRUE - is . v ector(sales$ sales_total) returns TRUE i s .vector(sales$num_of_ orders ) returns TRUE is . v ector (sales$gender) returns FALSE is . factor(s a les$gender ) ~ returns TRUE Because of their fl exibility to handle many data types, data frames are the preferred input format for many ofthe modeling functions available in R. The foll owing use of the s t r () functio n provides the structure of the sal es data frame. This fun ction identifi es the integer and numeric (double) data types, the factor variables and levels, as well as the first few values for each variable. str (sal es) # display structure of the data frame object
  • 186. 'data.ft·ame': 10000 obs . of 4 vanables : $ CUSt id int 100001 100002 100003 100004 100005 100006 . .. $ sales total num 800 . 6 217.5 74.6 498 . 6 723 . 1 $ num of orders : int 3 3 2 3 4 2 2 2 2 2 . .. - $ gender Factor w/ 2 le,·els UfU I "f'-1" : 1 l 2 2 1 1 2 2 1 2 .. . In the simplest sense, data frames are lists of variables of the same length. A subset of the data frame can be re trieved through subsetting operators. R's subsetting operators are powerful in t hat they allow one to express complex operations in a succinct fa shion and easily retrieve a subset of the dataset. '! extract the fourth column of the sales data frame sal es [, 4] H extract the gender column of the sales data frame REVIEW OF BASIC DATA ANALYTIC METHODS USING R sales$gender # retrieve the first two rows of the data frame sales[l:2,] # retrieve the first, third, and fourth columns sales[,c(l,3,4)] l! retrieve both the cust_id and the sales_total columns sales[,c("cust_id", "sales_total")] # retrieve all the records whose gender is female
  • 187. sales[sales$gender=="F",] The following R code shows that the class of the sales variable is a data frame. However, the type of the sales variable is a list. A list is a collection of objects that can be of various types, including other lists. class(sales) "data. frame" typeof(sales) "list" Lists Lists can contain any type of objects, including other lists. Using the vector v and the matrix M created in earlier examples, the following R code creates assortment, a list of different object types. # build an assorted list of a string, a numeric, a list, a vector, # and a matrix housing<- list("own", "rent") assortment <- list("football", 7.5, housing, v, M) assortment [ [1)] [1) "football" [ (2]) [1) 7. 5 [ (3])
[[3]][[1]]
[1] "own"

[[3]][[2]]
[1] "rent"

[[4]]
[1] 1 2 3 4 5

[[5]]
     [,1] [,2] [,3]
[1,]    1    5    3
[2,]    3    0    3
[3,]    3    4    3
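Not shown in the original example, list elements can also be given names, which often reads more clearly than referring to numeric positions. The named_assortment object below is an illustrative addition that reuses the housing list defined above.

named_assortment <- list(sport = "football", rating = 7.5, housing = housing)
named_assortment$rating          # returns 7.5
named_assortment[["housing"]]    # returns the housing list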
In displaying the contents of assortment, the use of the double brackets, [[]], is of particular importance. As the following R code illustrates, the use of the single set of brackets only accesses an item in the list, not its content.

# examine the fifth object, M, in the list
class(assortment[5])      # returns "list"
length(assortment[5])     # returns 1
class(assortment[[5]])    # returns "matrix"
length(assortment[[5]])   # returns 9 (for the 3x3 matrix)

As presented earlier in the data frame discussion, the str() function offers details about the structure of a list.

str(assortment)
List of 5
 $ : chr "football"
 $ : num 7.5
 $ :List of 2
  ..$ : chr "own"
  ..$ : chr "rent"
 $ : int [1:5] 1 2 3 4 5
 $ : num [1:3, 1:3] 1 3 3 5 0 4 3 3 3

Factors

Factors were briefly introduced during the discussion of the gender variable in the data frame sales. In this case, gender could assume one of two levels: F or M. Factors can be ordered or not ordered. In the case of gender, the levels are not ordered.
  • 190. Factors can be ordered or not ordered. In the case of gender, the levels are not ordered. class(sales$gender) is.ordered(sales$gender) # returns "factor" # returns FALSE Included with the ggplot2 package, the diamonds data frame contains three ordered factors. Examining the cut factor, there are five levels in order of improving cut: Fair, Good, Very Good, Premium, and Ideal. Thus, sales$gender contains nominal data, and diamonds$cut contains ordinal data. head(sales$gender) F F l-1 1'-1 F F Levels: F l1 library(ggplot2) data(diamonds) # display first six values and the levels # load the data frame into the R workspace REVIEW OF BASIC DATA ANALYTIC METHODS USING R str(diamonds) 'data.frame': 53940 obs. of 10 variables:
  • 191. $ carat $ cut $ color $ clarity: $ depth $ table $ price $ X $ y $ z num 0.23 0.21 0.23 0.29 0.31 0.24 0.24 0.26 0.22 ... Ord.factor w/ 5 levels "Fair"c"Good"c .. : 5 4 2 4 2 3 ... Ord.factor w/ 7 levels "D"c"E"c"F"c"G"c .. : 2 2 2 6 7 7 Ord.factor w/ 8 levels "I1"c"SI2"c"SI1"< .. : 2 3 5 4 2 num 61.5 59.8 56.9 62.4 63.3 62.8 62.3 61.9 65.1 59.4 num 55 61 65 58 58 57 57 55 61 61 ... int 326 326 327 334 335 336 336 337 337 338 num 3.95 3.89 4.05 4.2 4.34 3.94 3.95 4.07 3.87 4 ...
 $ y      : num  3.98 3.84 4.07 4.23 4.35 3.96 3.98 4.11 3.78 4.05 ...
 $ z      : num  2.43 2.31 2.31 2.63 2.75 2.48 2.47 2.53 2.49 2.39 ...

head(diamonds$cut)   # display first six values and the levels
[1] Ideal     Premium   Good      Premium   Good      Very Good
Levels: Fair < Good < Very Good < Premium < Ideal

Suppose it is decided to categorize sales$sales_total into three groups (small, medium, and big) according to the amount of the sales with the following code. These groupings are the basis for the new ordinal factor, spender, with levels {small, medium, big}.

# build an empty character vector of the same length as sales
sales_group <- vector(mode="character",
                      length=length(sales$sales_total))

# group the customers according to the sales amount
sales_group[sales$sales_total<100] <- "small"
sales_group[sales$sales_total>=100 & sales$sales_total<500] <- "medium"
sales_group[sales$sales_total>=500] <- "big"

# create and add the ordered factor to the sales data frame
spender <- factor(sales_group, levels=c("small", "medium", "big"),
                  ordered = TRUE)
sales <- cbind(sales, spender)

str(sales$spender)
 Ord.factor w/ 3 levels "small"<"medium"<..: 3 2 1 2 3 1 1 1 2 1 ...

head(sales$spender)
[1] big    medium small  medium big    small
Levels: small < medium < big

The cbind() function is used to combine variables column-wise. The rbind() function is used to combine datasets row-wise. The use of factors is important in several R statistical modeling functions, such as analysis of variance, aov(), presented later in this chapter, and the use of contingency tables, discussed next.

Contingency Tables

In R, table refers to a class of objects used to store the observed counts across the factors for a given dataset. Such a table is commonly referred to as a contingency table and is the basis for performing a statistical test on the independence of the factors used to build the table. The following R code builds a contingency table based on the sales$gender and sales$spender factors.

# build a contingency table based on the gender and spender factors
sales_table <- table(sales$gender, sales$spender)
sales_table

    small medium  big
  F  1726   2746  563
  M  1656   2723  586

class(sales_table)    # returns "table"
typeof(sales_table)   # returns "integer"
dim(sales_table)      # returns 2 3

# performs a chi-squared test
summary(sales_table)
Number of cases in table: 10000
Number of factors: 2
Test for independence of all factors:
        Chisq = 1.516, df = 2, p-value = 0.4686

Based on the observed counts in the table, the summary() function performs a chi-squared test on the independence of the two factors. Because the reported p-value is greater than 0.05, the assumed independence of the two factors is not rejected. Hypothesis testing and p-values are covered in more detail later in this chapter.
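Beyond raw counts, the same contingency table can be expressed as proportions or extended with marginal totals. The following short sketch is an addition to the original example and uses only base R functions.

prop.table(sales_table, margin=1)   # spending distribution within each gender
prop.table(sales_table)             # proportions of the overall total
addmargins(sales_table)             # append row and column totals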
Next, applying descriptive statistics in R is examined.

3.1.4 Descriptive Statistics

It has already been shown that the summary() function provides several descriptive statistics, such as the mean and median, about a variable such as the sales data frame. The results now include the counts for the three levels of the spender variable based on the earlier examples involving factors.

summary(sales)
    cust_id        sales_total      num_of_orders    gender   spender
 Min.   :100001   Min.   :  30.02   Min.   : 1.000   F:5035   small :3382
 1st Qu.:102501   1st Qu.:  80.29   1st Qu.: 2.000   M:4965   medium:5469
 Median :105001   Median : 151.65   Median : 2.000            big   :1149
 Mean   :105001   Mean   : 249.46   Mean   : 2.428
 3rd Qu.:107500   3rd Qu.: 295.50   3rd Qu.: 3.000
 Max.   :110000   Max.   :7606.09   Max.   :22.000

The following code provides some common R functions that include descriptive statistics. In parentheses, the comments describe the functions.
  • 196. cov(x ,y l II returns 345.~111 (covarianc<?) IQR(x) ff :retu::n~ :ns.:n (int.erquartile range) mean(x) ~ returns 249.4';'i7 mean) ,. median(x) returns 151.65 (me:lianl range (x ) " te:urns 30.0::: 7 06 . 09 min rna:-:) sd(x) ., returns '-l9.0508 ::;:... :1. :ie·:. var(x) retur::s ~C17r.l.~ ··ari:'l!!~"",... The IQR () function provides the difference between the third and the first qua rti les. The other fu nc- tions are fairly self-explanatory by their names. The reader is encouraged to review the available help files for acceptable inputs and possible options. The function apply () is useful when the same function is to be applied to several variables in a data frame. For example, the following R code calculates the standard deviation for the first three variables in sales. In the code, setting MARGIN=2 specifies that the sd () fun ction is applied over the columns. Other fun ctions, such as lappl y () and sappl y (), apply a fun ction to a list or vector. Readers can refer to the R help fil es to learn how to use these functions. apply (sales[,c (l : 3) ], MARGIN=2, FUN=Sd )
Additional descriptive statistics can be applied with user-defined functions. The following R code defines a function, my_range(), to compute the difference between the maximum and minimum values returned by the range() function. In general, user-defined functions are useful for any task or operation that needs to be frequently repeated. More information on user-defined functions is available by entering help("function") in the console.

# build a function to provide the difference between
# the maximum and the minimum values
my_range <- function(v) {range(v)[2] - range(v)[1]}
my_range(x)

3.2 Exploratory Data Analysis

So far, this chapter has addressed importing and exporting data in R, basic data types and operations, and generating descriptive statistics. Functions such as summary() can help analysts easily get an idea of the magnitude and range of the data, but other aspects such as linear relationships and distributions are more difficult to see from descriptive statistics. For example, the following code shows a summary view of a data frame data with two columns x and y. The output shows the range of x and y, but it's not clear what the relationship may be between these two variables.
summary(data)
       x                   y
 Min.   :-1.90481    Min.   : ...
 1st Qu.:-0.66321    1st Qu.: ...
 Median : 0.0...     Median : ...
 Mean   : 0.0...     Mean   : ...
 3rd Qu.: 0.65414    3rd Qu.: ...
 Max.   : ...        Max.   : ...

A useful way to detect patterns and anomalies in the data is through exploratory data analysis with visualization. Visualization gives a succinct, holistic view of the data that may be difficult to grasp from the numbers and summaries alone. Variables x and y of the data frame data can instead be visualized in a scatterplot (Figure 3-5), which easily depicts the relationship between two variables. An important facet of the initial data exploration, visualization assesses data cleanliness and suggests potentially important relationships in the data prior to the model planning and building phases.
FIGURE 3-5 A scatterplot can easily show if x and y share a relation

The code to generate data as well as Figure 3-5 is shown next.

x <- rnorm(50)
y <- x + rnorm(50, mean=0, sd=0.5)
data <- as.data.frame(cbind(x, y))
summary(data)

library(ggplot2)
ggplot(data, aes(x=x, y=y)) +
  geom_point(size=2) +
  ggtitle("Scatterplot of X and Y") +
  theme(axis.text=element_text(size=12),
        axis.title = element_text(size=14),
  • 201. 12 10.84 12 9. 13 12 8. 15 8 8.47 13 7. 58 13 8.74 13 12.74 8 8.84 14 9.96 14 8. 10 14 8. 84 19 12.50 fiGURE 3-6 Anscom be's quartet The four data sets in Anscom be's quartet have nearly identical statistical properties, as shown in Table 3-3. TABLE 3-3 Statistical Properties of Anscombe's Quartet Statistical Property Value Meanof x 9 Variance of y 11 M ean ofY 7.50 (to 2 decimal points) 3.2 Exploratory Data Analysis Variance of Y 4.12 or4.13 (to 2 decimal points) Correla tions between x andy 0.816 Linear regression line y = 3.00 + O.SOx (to 2 decimal points) Based on the nearly identica l statistical properties across each dataset, one might conclude that these four datasets are quite similar. However, the scatterplots in Figure 3-7 tell a different story. Each dataset is
  • 202. plotted as a scatterplot, and the fitted lines are the result of applying linear regression models. The estimated regression line fits Dataset 1 reasonably well. Dataset 2 is definitely nonlinear. Dataset 3 exhibits a linear trend, with one apparent outlier at x = 13. For Dataset 4, the regression line fits the dataset quite well. However, with only points at two x value s, it is not possible to determine that the linearity assumption is proper. 12 • • 12 :t 5 I I ~I • 3 • • • • 10 •
FIGURE 3-7 Anscombe's quartet visualized as scatterplots

The R code for generating Figure 3-7 is shown next. It requires the R package ggplot2 [11], which can
  • 204. be installed simply by running the command install . p ackages ( "ggp lot2" ) . The anscombe REVIEW OF BASIC DATA ANALYTIC METHODS USING R dataset for the plot is included in the standard R distribution. Enter data ( ) for a list of datasets included in the R base distribution. Enter data ( Da tase tName) to make a dataset available in the current workspace. In the code that follows, variable levels is created using the gl (} function, which generates factors offour levels (1, 2, 3, and 4), each repeating 11 times. Variable myda ta is created using the with (data, expression) function, which evaluates an expression in an environment con- structed from da ta.ln this example, the data is the anscombe dataset, which includes eight attributes: xl, x2, x3, x4, yl, y2, y3, and y4. The expression part in the code creates a data frame from the anscombe dataset, and it only includes three attributes: x, y, and the group each data point belongs to (mygroup). install.packages(''ggplot2") # not required if package has been installed data (anscombe) anscombe x1 x2 x3 x4 1 10 10 10 8
  • 205. 2 8 8 8 8 13 13 13 4 9 9 5 11 11 11 6 14 14 14 8 7 6 6 6 8 8 4 4 4 19 9 12 12 12 8 10 7 7 7 8 11 5 5 5 8 nrow(anscombe) [1] 11 It load the anscombe dataset into the current 'iOrkspace y1 y2 y3 y4 8. O·l 9.14 7.-16 6.58 6.95 8.14 6.77 5.76 7.58 8.74 12.74 7.71 8.81 8.77 7.11 8.84 8.33 9.26 7.81 8.·±7
  • 206. 9. 9G 8.10 8.34 7.04 7.24 6.13 6. •J8 5.25 ·l. 26 3.10 5. 3 9 12.50 10. 8•1 9.13 8.15 5.56 4.82 7.26 6.-12 7.91 5.68 4.74 5.73 6.89 It number of rows # generates levels to indicate which group each data point belongs to levels<- gl(4, nrow(anscombe)) levels [1] 1 1 1 1 1 1 1 1 l l l 2 .; 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 [ 34] 4 4 4 4 4 •l ·l 4 4 4 4 Levels: 1 2 3 4 # Group anscombe into a data frame mydata <- with(anscombe, data.frame(x=c(xl,x2,x3,x4), y=c(yl,y2,y3,y4), mygroup=levels)) mydata X y mygroup 10 8.04
  • 207. 2 8 6.95 13 7.58 4 9 8.81 ... 1 1 ... 4 4, B 'i.S6 -l3 8 7 0 l 4 44 B 6 . 89 4 A Ma~f -~atterp "tF ~siny th ruplot~ package library (ggplot2 ) therne_set (therne_bw ()) - s L rlot color :~erne ggplot (rnydata, aes (x, y )) + geom_point (size=4 ) + j11 ' 1 7 geom_srnooth (rnethod="l rn ", fill=NA, f ullrange=TRUE ) + facet_wrap (-rnygroup ) 3.2.2 Dirty Data 3.2 Exploratory Data Analysis This section addresses how dirt y data ca n be detected in th e data expl oration phase with visual izations. In general, analysts should look for anomalies, verify the data with domain knowledge, and decide the most appropriate approach to clean the data.
  • 208. Consider a scenario in which a bank is conducting data analyses of its account holders to gauge customer retention. Figure 3-8 shows the age distribution of the account holders. 0 0 -... >. ~ ~ u ~ ~ c QJ :J 0" QJ u: 8 -J ~ 0 -' Age FIGURE 3-8 Age distribution o f bank account holders If the age data is in a vector called age, t he graph can be created with the following R script: h ist(age, b r eaks=l OO , main= "Age Distributi on of Account Holders ",
  • 209. xlab="Age", ylab="Frequency", col ="gray" ) The figure shows that the median age of the account holders is around 40. A few accounts with account holder age less than 10 are unusual but plausible. These could be custodial accounts or college savings accou nts set up by the parents of young children. These accounts should be retained for future analyses. REVIEW OF BASIC DATA ANALYTIC METHODS USING R However, the left side of the graph shows a huge spike of customers who are zero years old or have negative ages. This is likely to be evidence of missing data. One possible explanation is that the null age values could have been replaced by 0 or negative values during the data input. Such an occurrence may be caused by entering age in a text box that only allows numbers and does not accept empty values. Or it might be caused by transferring data among several systems that have different definitions for null values (such as NULL, NA, 0, -1, or-2). Therefore, data cleansing needs to be performed over the accounts with abnormal age values. Analysts should take a closer look at the records to decide if the missing data should be eliminated or if an appropriate age value can be determined using other available information for each of the accounts. In R, the is . na (} function provides tests for missing values. The following example creates a vector x where the fourth value is not available (NA). The is . na ( } function returns TRUE at each NA value and FALSE otherwise.
  • 210. X<- c(l, 2, 3, NA, 4) is.na(x) [1) FALSE FALSE FALSE TRUE FALSE Some arithmetic functions, such as mean ( } , applied to data containing missing values can yield an NA result. To prevent this, set the na. rm parameter to TRUE to remove the missing value during the function's execution. mean(x) [1) NA mean(x, na.rm=TRUE) [1) 2. 5 The na. exclude (} function returns the object with incomplete cases removed. DF <- data.frame(x = c(l, 2, 3), y = c(lO, 20, NA)) DF X y 1 1 10 2 2 20 3 3 NA DFl <- na.exclude(DF) DFl X y
  • 211. 1 1 10 2 2 20 Account holders older than 100 may be due to bad data caused by typos. Another possibility is that these accounts may have been passed down to the heirs of the original account holders without being updated. In this case, one needs to further examine the data and conduct data cleansing if necessary. The dirty data could be simply removed or filtered out with an age threshold for future analyses. If removing records is not an option, the analysts can look for patterns within the data and develop a set of heuristics to attack the problem of dirty data. For example, wrong age values could be replaced with approximation based on the nearest neighbor-the record that is the most similar to the record in question based on analyzing the differences in all the other variables besides age. 3.2 Exploratory Data Analysis Figure 3-9 presents another example of dirty data. The distribution shown here corresponds to the age of mortgages in a bank's home loan portfolio. The mortgage age is calculated by subtracting the orig ina- tion date of the loan from the current date. The vertical axis corresponds to the number of mortgages at each mortgage age. Portfolio Distributio n, Years Since Origination 0 0
  • 212. "' ~ 0 0 ~ 0 >-u c 0 (X) Qj 0 ::J 0 cY <D ~ u. 0 0 ..,. 0 0 "' 0 I 0 2 6 8 10 Mortgage Age FIGURE 3-9 Distribution of mortgage in years since origination from a bank's home loan portfolio If the data is in a vector called mortgage, Figure 3-9 can be
  • 213. produced by the following R script. hist (mortgage, breaks=lO, xlab='Mortgage Age ", col= "gray•, main="Portfolio Distribution, Years Since Origination" ) Figure 3-9 shows that the loans are no more than 10 years old, and these 10-year-old loans have a disproportionate frequency compared to the res t of the population. One possible explanation is that the 10-year-old loans do not only include loans originated 10 years ago, but also those originated earlier than that. In other words, the 10 in the x-axis actually means"<! 10. This sometimes happens when data is ported from one system to another or because the data provider decided, for some reason, not to distinguish loans that are more than 10 years old. Analysts need to study the data further and decide the most appropriate way to perform data cleansing. Data analysts shou ld perform san ity checks against domain knowledge and decide if the dirty data needs to be eliminated. Consider the task to find out the probability of mortga ge loan default. If the past observations suggest that most defaults occur before about the 4th year and 10-year-old mortgages rarely default, it may be safe to eliminate the dirt y data and assu me that the defaulted loans are less than 10 years old. For other ana lyses, it may become necessary to track down the source and find out the true origination dates. Dirty data can occur due to acts of omission.ln the sales data used at the beginning of this chapter,
  • 214. it was seen that the minimum number of orders was 1 and the minimum annual sales amount was $30.02. Thus, there is a strong possibility that the provided dataset did not include the sales data on all customers, just the customers who purchased something during the past year. REVIEW OF BASIC DATA ANALYTIC METHODS USI NG R 3.2.3 Visualizing a Single Variable Using visual representations of data is a hallmark of exploratory data analyses: letting the data speak to its audience rather than imposing an interpretation on the data a priori. Sections 3.2.3 and 3.2.4 examine ways of displaying data to help explain the underlying distributions of a single variable or the relationships of two or more variables. R has many fu nctions avai lable to examine a single variable. Some of t hese func t ions are listed in Table 3-4. TABLE 3-4 Example Functions for Visualizing a Single Variable Function Purpose p l o t (data ) barp lot (data )
  • 215. dotchart ( data ) hist (data ) plot(density (data )) s tem (data) rug (data ) Dotchart and Barplot Scatterplot where x is the index andy is the value; suitable for low-volume data Barplot with vertical or horizontal bars Cleveland dot plot [12) Histog ram Density p lot (a continuous histogram) Stem-and -leaf plot Add a ru g representa t ion (1-d plot) of t he data to an existing plot Dotcha rt and barplot portray continuous values with labels from a discrete variable. A dotchart can be created in R with the functio n dot cha rt ( x , lab e l= ... ) , where x is a numeric vector and l a bel is a vector of cat egorical labels for x. A barplot can be created with the barplot (h e igh t ) fun ction,
  • 216. w here h eigh t represent s a vector or matrix. Figure 3-10 shows (a) a dotchart and (b) a barplot based o n the mtcars dataset, which includes t he f uel consumption and 10 aspects of automobile design and performance of 32 automobiles. This dataset comes with the standard R distribution. The plots in Figure 3-10 can be produced with the following R code. data (mtcars ) dotchart (mtcars$mpg,labels=row . names (mtcars ) ,cex=.7, ma in= "Mi les Per Gallon (MPG ) of Car Models", xlab = "MPG" ) barplot (tabl e (mtcars$cyl ) , main ="Distribu:ion of Car Cyl inder Counts", x lab= "Number of Cylinders" ) Histogram and Density Plot Figure 3-ll(a) includ es a histogram of household income. The histogram shows a clear concentration of low household incom es on the left and the long tail of t he higher incomes on t he right. Volvo U2f Uastreb Bota Ferran [)n) Ford Panttra L Lotus Europa
  • 217. Pot3che 91 • -2 F'"1X1·9 Ponbac Frebrd ComaroZ28 AJ.ICJavtlw1 Dodge Chalenger Toyota Corona Toyota Carob Hondt CNC F,.1 128 Chrysltr ~nal L11cok1 Cont11tnt.l C.d .. c Fleotwoo4 o Utrc •54Slt Utrc 4SOSL Utrc.c.SQSE. Utrc2!0( Were 280 Wtre230 llerc 2•00 Ousttr 360 Vttant Homtt SportabOU1 Homtl' Drrve Datsun 7 10 Uazdo R.X' Wag Uazda RXI 10 l.llles Per Gallon (t.IPG) of Cor Models 0 0
  • 218. 0 0 0 0 0 0 0 15 20 2S 30 UPG (a) 3.2 Explorat ory Dat a Analysi s Distribution of Car Cylinder counts ~ 0 ~ 0 ~ co "' ....
  • 219. 0 D 6 8 Ntmler ol Cylinders (b) FIGURE 3-10 (a) Dotchart on the miles per gallon of cars and (b) Barplot on the distribution of car cylind er counts Histogram of Income Distribution of Income (log10 scale) 0 " .... 0 N "' 0 "' 0 i'; 0 ?;- 00 c: "' 0 .. v; :> c: CT 0 .,
  • 220. "' ., "' 0 0 u: 0 "' N 0 ~ N 0 0 0 0 Oe• OO 1e+05 2e+05 3e+05 4e•05 5e • 05 4 0 4 5 50 55 Income N = 4000 BandYidlh = 0 02069 FIGURE 3·11 (a) Histogram and (b) Density plot of household income REVIEW OF BASIC DATA ANALYTIC METHODS USING R Figure 3-11 (b) shows a density plot of the logarithm of household income values, which emphasizes the distribution. The income distribution is concentrated in the center portion of the graph. The code to generate the two plots in Figure 3-11 is provided next. The rug ( } function creates a one-dimensional density plot on the bottom of the graph to emphasize the distribution of the observation. # randomly generate 4000 observations from the log normal distribution
  • 221. income<- rlnorm(4000, meanlog = 4, sdlog = 0.7) summary (income) Min. 1st Qu. t.Jedian t>!ean 3rd Qu. f.!ax. 4.301 33.720 54.970 70.320 88.800 659.800 income <- lOOO*income summary (income) Min. 1st Qu. f.!edian f.!ean 3rd Qu. 1!ax. 4301 33720 54970 70320 88800 659800 # plot the histogram hist(income, breaks=SOO, xlab="Income", main="Histogram of Income") # density plot plot(density(loglO(income), adjust=O.S), main="Distribution of Income (loglO scale)") # add rug to the density plot rug(loglO(income)) In the data preparation phase of the Data Analytics Lifecycle, the data range and distribution can be obtained. If the data is skewed, viewing the logarithm of the data (if it's all positive) can help detect struc- tures that might otherwise be overlooked in a graph with a regular, nonlogarithmic scale. When preparing the data, one should look for signs of dirty data, as explained in the previous section. Examining if the data is unimodal or multi modal will give an idea of how many distinct populations with different behavior patterns might be mixed into the overall
  • 222. population. Many modeling techniques assume that the data follows a normal distribution. Therefore, it is important to know if the available dataset can match that assumption before applying any of those modeling techniques. Consider a density plot of diamond prices (in USD). Figure 3- 12(a) contains two density plots for pre- mium and ideal cuts of diamonds. The group of premium cuts is shown in red, and the group of ideal cuts is shown in blue. The range of diamond prices is wide-in this case ranging from around $300 to almost $20,000. Extreme values are typical of monetary data such as income, customer value, tax liabilities, and bank account sizes. Figure 3-12(b) shows more detail of the diamond prices than Figure 3-12(a) by taking the logarithm. The two humps in the premium cut represent two distinct groups of diamond prices: One group centers around log 10 price= 2.9 (where the price is about $794), and the other centers around log 10 price= 3.7 (where the price is about $5,012). The ideal cut contains three humps, centering around 2.9, 3.3, and 3.7 respectively. The R script to generate the plots in Figure 3-12 is shown next. The diamonds dataset comes with the ggplot2 package.
  • 223. library("ggplot2") data(diamonds) # load the diamonds dataset from ggplot2 # Only keep the premium and ideal cuts of diamonds .. ' 3t ' ~ o; ; '. " It ' n iceDiamonds < - diamonds [diamonds$cut=="P r emi um" I diamonds$c ut== " Ide a l •, I s ummary(niceDiamonds$cut ) 0 0 Pr m1u 137<ll !:! a .. . lSSl # plot density plot of diamond prices ggplo t( niceDiamonds, ae s(x=price , fill =cut) ) + g eom_density(alpha = .3, col or= NA ) # plot density plot of the loglO of diamond prices
  • 224. ggpl ot (niceDiamonds , aes (x =logl O(price) , f il l =c ut ) ) + geom_density (alpha = . 3 , color=NA) 3.2 Exploratory Data Analysis As an alternative to ggplot2 , the lattice package provides a function ca lled densityplot () for making simple d ensity plot s . ~ ;; c .. " 0 0 1~300 pri ce (a) togtO(prtc e) (b) cut Premium Jcsu• FIGURE 3-12 Density plot s of (a) d iamond prices and (b) t he logarit hm of diamond p rices 3.2.4 Examining Multiple Variables A scatterp lot (shown previo usly in Fig ure 3-1 and Fig ure 3-5) is a simple and w idely used visualizatio n
  • 225. fo r fin din g the relationshi p among multiple va ri ables. A sca tterpl o t ca n represent data with up to fi ve variables using x-axi s, y-axis, size, color, and shape. But usually only t wo to four variables are portrayed in a scatterplot to minimize confusion. When examining a scatterplot, one needs to pay close at tention REVIEW OF BASIC DATA ANALYTIC METHODS USING R to the possible relationship between the vari ables. If the functiona l relationship between the variables is somewhat pronounced, the data may roughly lie along a straight line, a parabola, or an exponential curve. If variable y is related exponentially to x , then the plot of x versus log (y) is approximately linea r. If the plot looks more like a cluster without a pattern, the corresponding variables may have a weak relationship. The scatterplot in Figu re 3-13 portrays the relationship of two variables: x and y . The red line shown on the graph is the fitted li ne from the linear regression. Linear regression wi ll be revisited in Chapter 6, "Advanced Analytical Theory and Methods: Reg ression." Figure 3-13 shows that the regression line does not fit the data well. This is a case in which linear regression cannot model the relationship between the vari ables. Altern ative methods such as the l oess () functio n ca n be used to fit a nonlinear line to the
  • 226. data. The blue curve shown on the graph represents the LOESS curve, which fits the data better than linear regression. 0 0 N 0 .,., 0 0 0 .,., 0 0 0 2 4 FIGURE 3-13 Examining two variables with regression 0 0 o o 0 0 0 0 0 0 6 8 10
  • 227. The R code to produce Figure 3-13 is as foll ows. The runi f ( 7 5, 0 , 1 0) ge nerates 75 numbers between 0 to 10 with random deviates, and the numbers conform to the uniform distrib ution. The r norm ( 7 5 , o , 2 o) generates 75 numbers that conform to the normal distribu tion, with the mean eq ual to 0 and the standard deviation equal to 20. The poi n ts () function is a generic function that draws a sequence of points at the specified coordinates. Parameter type=" 1" tells the function to draw a solid line. The col parameter sets the color of the line, where 2 represents the red color and 4 represents the blue co lor. 7 numbers ben:een x c- runif(75 , 0 , 1 0) x c- sort(x) ~nd 10 ~f unifor~ distribution y c - 200 + xA 3 - 10 * x A2 + x + rnorm(75, 0 , 20) lr c- lm(y - x) poly c- loess( y - x) ::a_l_ .eqL .;~l .. L E5~
  • 228. 4 6 8 fit < - predict (pol y) fit a nonlinear lice plot (x, y ) # draw the fitted li~e for the l i near regression points (x, lr$coeffic i e nts [l ] + lr$coeffic i ents [2 ] • x , type= " 1 ", col = 2 ) po ints (x, fit, type = "1" , col = 4 ) Dotchart and Barplot 3.2 Exploratory Data Analysis Dotchart and barplot from the previous section can visualize multiple variables. Both of them use color as an additional dimension for visualizing the data. For the same mtcars dataset, Figure 3-14 shows a dotchart that groups vehicle cyl inders at they-axis and uses colors to distinguish different cylinders. The vehicles are sorted accordi ng to their MPG values. The code to generate Figure 3-14 is shown next. Mil es Per Gallon (MPG) of Car Models Grouped by Cy linder To yota Corolla 0
  • 229. Fl8t 128 0 Lo tus Eu ropa 0 Honda Civic 0 F1at X1-9 0 Porscl1e 914- 2 0 t.terc2400 0 Mere 230 0 Datsun 710 0 Toyota Corona 0 Volvo 142E 0 Hornet 4 Drive 0 Ma zd a RX4 Wag 0 Mazda RX4 0 Ferrari Dine 0 I.! ere 280 0 Valiant 0 I.! ere 280C 0 Pontiac Fire bird 0 Hornet Sporta bout 0 Mer e 450SL 0 Mere 450SE 0 Ford Pantera L 0 Dodge Challenger 0 AMC Javeun 0 Mere 450SLC 0 1.1aserati Bora 0 Chrysler Imperial 0 Duster 360 0 Camaro Z28 0 Lincoln Continental 0 Cadillac Fleet wood 0 I I I I
  • 230. 10 15 20 25 30 Miles Per Gallon FIGURE 3-14 Dotplot to visualize multip le varia bles REVI EW OF BASIC DATA ANALYTIC METHODS USING R ;: sor- bJ' mpg cars <- mtcars[or der {mtcars$mpg ) , ) h grouping variab l e must be a factol cars$cyl < - f actor {cars$cyl ) cars$col or[car s$cyl ==4) <- "red " cars$color[ cars$cyl== 6) < - "blue • ca r s$color[cars $cyl==8) < - "darkgreen • dotchart {cars$mp g, labels=row.names{cars ) , cex - . 7, group s= cars$cyl, main =" Mi les Per Gallon {MPG ) of Car Mode l s nGr ouped by Cylinder• , xl ab ="Mil es Per Gal l on•, co l or=cars$color, gcolor="bl a c k") The barplot in Figure 3-15 visualizes the distri bution of car cyli nder counts and num ber of gears. The x-axis represents the number of cylinders, and the color represents the number of gears. The code to genera te Figure 3-15 is shown next. ~
  • 231. ~ CD ~ :J 0 u <D "" N 0 4 Distribution of Car Cylinder Counts and Gears Number of Gears • 3 !;! 4 D 5 6 Number of Cylinders FIGURE 3-15 Barplot to visualize multiple variables count s <- t a ble {mtcars$gear , mtcars$cyl ) barpl o t (counts, ma in= "Di s tributi on o f Ca r Cylinder Coun ts and Gears • ,
  • 232. x l ab ="Number of Cylinders • , ylab="Co unts •, col=c ( " #OOO OFFFF" , "# 008 0 FFFF", " #OOF FFFFF") , legend = rownames (c ounts ) , beside- TRUE, args. l egend = list (x= "top", title= "Number of Gears" )) 8 3 .2 Exploratory Data Analysis Box-and-Whisker Plot Box-and-whisker plots show the distribution of a continuous variable for each value o f a discrete variable. The box-and-whisker plot in Figure 3-16 visualizes mean household incomes as a fun ction of region in th e United States. The first digit of the U.S. postal ("ZIP") code corresponds to a geographical reg ion in the Un ited States. In Figure 3-16, each data point corresponds to the mean househo ld income from a particular zip code. The horizontal axis represents t he fi rst digit of a zip code, ranging from 0 to 9, where 0 corresponds t o t he northeast reg ion ofthe United States (such as Maine, Verm ont, and Massachusetts), and 9 corresponds to t he southwest region (such as Ca lifornia and Hawa ii). The vertical axis rep resents the logarithm of mean household incomes. Th e loga rithm is take n to bet t er v isualize the d istr ibution
  • 233. of th e mean household incomes. .., E 0 u c ;:; 0 .c '" "' ::J 0 J: so- iii ~ 5- • '" :::!! 0 Cl 2 Mean Household Income by Zip Code ' ' ' 2 5 6 Zlp1 FIGURE 3-16 A box-and-whisker plot of mean household income and geographical region ' 8 9
  • 234. In this figure, the scatterplot is displayed beneath the box-and- whisker plot, with some jittering fo r t he overl ap po ints so that each line of points widens into a strip. The "box" of the box-and-whisker shows t he range that contains the central 50% of the data, and the line inside the box is the location of the median value. The upper and lower hinges of the boxes correspond to the first and third quartiles of the data. The upper whisker extends f rom the hinge to t he highest value that is within 1.5 * IQR of the hinge. The lower whisker extends from the hinge to the lowes t value w ithin 1.5 * IQR of the hinge. IQR is the inter-qua rt ile range, as discussed in Section 3.1.4. The points outside the wh iskers can be considered possible outliers. REVIEW OF BAS IC DATA ANALYTIC M ETHODS USING R j 8 c C! '0 </) 0 0 0 ! :;) 8 0 </) I <i I c IB ::!i
  • 235. 0 0. 0 C! .2 " 0 0 The gra ph shows how household income varies by reg ion. The hig hest median incomes are in region 0 and region 9. Region 0 is slig htly higher, but the boxes for the two regions overlap enough that the dif- ference between the two reg ions probably is not significant. The lowest household incomes tend to be in region 7, which includes states such as Louisiana, Arka nsas, and Oklahoma. Assuming a data frame called DF contains two columns (MeanHousehol din come and Zipl), the following R script uses the ggplot2 1ibrary [11 ] to plot a graph that is simi lar to Figure 3-16. library (ggplot2 ) plot the jittered scat-erplot w/ boxplot H color -code points with z1p codes h th~ outlier . s.ze pr~vents the boxplot from p:c-•inq •h~ uutlier ggplot (data=DF, aes (x=as . factor (Zipl ) , y=loglO(MeanHouseholdincome) )) + geom_point(aes(color=factor (Zipl )) , alpha= 0 .2, pos it i on="j itter") + geom_boxpl ot(outlier .size=O, alpha=O . l ) + guides(colour=FALSE) +
  • 236. ggtitle ( "Mean Hous ehold Income by Zip Code") Alternatively, one can create a simple box-and-whisker plot with the boxplot () fun ction provided by the R base package. Hexbinplot for Large Data sets This chapter ha s shown t hat scat terplot as a popular visualization can visualize data containing one or more variables. But one should be ca reful about using it on high-volume data. lf there is too much data, the structure of the data may become difficult to see in a scatterplot. Consider a case to compare the logarithm of household income aga inst the years of ed ucation, as shown in Figure 3-17. The cluster in the scatterplot on the left (a) suggests a somewhat linear relationship of the two variables. However, one cannot rea lly see the structure of how the data is distributed inside the cluster. This is a Big Data type of problem. Mi llions or billions of data points would require different approaches for exploration, visualization, and analysis. g] Counts 71&8 6328 I ··- SS22 4 77 1 0 407S 0 " 34 8 ~ 1&15 0 f u .J
  • 237. 316 0 1640 141 8 a ~ 1051 0 739 r 432 .... 279 132 39 0 1 10 .. 5 10 15 ~'•W~.Eduauon MeanEduca1ion (a) (b) FIGURE 3-17 (a) Scatterplot and (b) Hexbinplot o f household incom e against y ears of education 3.2 Exploratory Data Analysis Although color and transparency can be used in a scatterplot to address this issue, a hexbinplot is sometimes a better alternative. A hexbinplot combines the ideas of scatterplot and histogram. Similar to
  • 238. a scatterplot, a hexbinplot visualizes data in the x-axis andy- axis. Data is placed into hex bins, and the third dimension uses shading to represent the concentration of data in each hexbin. In Figure 3-17(b), the same data is plotted using a hexbinplot. The hexbinplot shows that the data is more densely clustered in a streak that runs through the center of the cluster, roughly along the regression line. The biggest concentration is around 12 years of education, extending to about 15 years. In Figure 3-17, note the outlier data at MeanEducation=O. These data points may correspond to some missing data that needs further cleansing. Assuming the two variables MeanHouseholdincome and MeanEduca tion are from a data frame named zeta, the scatterplot of Figure 3-17(a) is plotted by the following R code. # plot the data points plot(loglO(MeanHouseholdincome) - MeanEducation, data=zcta) # add a straight fitted line of the linear regression abline(lm(loglO(MeanHouseholdincome) - MeanEducation, data=zcta), col='red') Using the zeta data frame, the hexbinplot of Figure 3-17(b) is plotted by the following R code. Running the code requires the use of the hexbin package, which can be installed by running ins tall .packages ( "hexbin"). library(hexbin) # "g" adds the grid, "r" adds the regression line
  • 239. # sqrt transform on the count gives more dynamic range to the shading # inv provides the inverse transformation function of trans hexbinplot(loglO(MeanHouseholdincome) - MeanEducation, data=zcta, trans= sqrt, inv = function(x) x ... 2, type=c( 11 g 11 , 11 r 11 )) Scatterplot Matrix A scatterplot matrix shows many scatterplots in a compact, side-by-side fashion. The scatterplot matrix, therefore, can visually represent multiple attributes of a dataset to explore their relationships, magnify differences, and disclose hidden patterns. Fisher's iris dataset [13] includes the measurements in centimeters ofthe sepal length, sepal width, petal length, and petal width for 50 flowers from three species of iris. The three species are setosa, versicolor, and virginica. The iris dataset comes with the standard R distribution. In Figure 3-18, all the variables of Fisher's iris dataset (sepal length, sepal width, petal length, and petal width) are compared in a scatterplot matrix. The three different colors represent three species of iris flowers. The scatterplot matrix in Figure 3-18 allows its viewers to compare the differences across the iris species for any pairs of attributes. REVIEW O F BA SIC DATA A N A LYT IC M ETHODS USIN G R
  • 240. Q .., 0 "' "' Q Sepal. length .. ;~·: .. • • " . • ··~til!~-= ,. . . . . . ., ~ .. ·.t.* •. H 55 65 7 5 Fisher's Iris Dataset 20 25 30 35 •o ••• I ~-t' ...... .. ;.;.:.·· • • •• • Sepal. Width • • • - ~· • =:t • • . ... .. ~· .....
  • 241. Petal. length 12 3<567 • setosa D verstcolor • virgimca FIGURE 3·18 Scatterplot matrix of Fisher's {13] iris dataset 0510152025 ..... ..... f· w.4"t~ ,. .• . "' .... "' .. "' 0&<4 "' 14 11 '" 10>1 '19 "' ~ .. 11t I ll f.· .. . ~-· ... . . . _ic )9 I Petal. Width
  • 242. Consider the scatterplot from the first row and third col umn of Figure 3-18, where sepal length is com- pared against petal length. The horizontal axis is the peta l length, and t he vertical axis is t he sepa l length. The scatterplot shows that versicolor and virginica share similar sepal and petal lengths, although the latter has longer peta ls. The petal lengths of all setosa are about the sa me, and the petal lengths are remarkably shorter than the other two species. The scatterplot shows t hat for versicolor and virgin ica, sepal length grows linea rly with the petal length. The R code for generating the scatterplot mat ri x is provided next. I; define the colors colors<- C( 11 red 11 J 11 green 11 , 11 blue•') ~ draw the plot ma:rix pairs(iris[l : 4], main= "Fisher ' s Iris Datase t•, pch = 21, bg = colors[unclass ( iris$Species)] = ~Qr qrdp~ica: pa~a~ :e~· - cl~!' p!ot - 1~9 :c :te ~1gure ~~a1o~ par (xpd = TRUE ) " ada l<"go::d legend ( 0.2, 0 . 02, horiz = TRUE, as.vector (unique ( iris$Species )) , fil l = colors, bty = "n" )
  • 243. 3.2 Explorat ory Data Analysis The vector colors defines th e colo r sc heme for the plot. It could be changed to something li ke colors<- c("gray50", "white" , " black " } to makethescatterplotsgrayscale. Analyzing a Variable over Tim e Visua lizing a variable over time is the same as visualizing any pair of vari ables, but in this case the goal is to identify time-specific patterns. Figure 3-19 plots the mon thly total numbers of international airline passengers (in thousands) from January 1940 to December 1960. Enter plot (AirPassengers} in the R console to obta in a similar graph. The plot shows that, for each year, a la rge peak occurs mid-year around July and August, and a sma ll peak happens around t he end of the year, possibly due to the holidays. Such a phenomenon is referr ed to as a seasonality effect. 0 0 CD 0 0 II'> "' Q;
  • 244. 0 0> c 0 ., .... "' "' "' 0 Q, 0 < (") 0 0 N 0 ~ 1950 1952 1954 1956 1958 1960 Tune FIGURE 3-19 Airline passenger counts from 1949 to 1960 Additionally, the overall trend is that the number of air passengers steadily increased from 1949 to 1960. Chapter 8, "Advanced Analytica l Theory and Methods: Time Series Analysis," discusses the analysis of such data sets in greater detail. 3.2.5 Data Exploration Versus Presentation Using visualization for data exploration is different from presenting results to stakeholders. Not every type of plot is suitable for all audiences. Most of the plots presented earli er try to detail the data as clearly as pos- sible for data scientists to identify structures and relationships. These graphs are more technical in nature and are better suited to technical audiences such as data
Nontechnical stakeholders, however, generally prefer simple, clear graphics that focus on the message rather than the data.

Figure 3-20 shows the density plot of the distribution of account values from a bank. The data has been converted to the log10 scale. The plot includes a rug on the bottom to show the distribution of the variable. This graph is more suitable for data scientists and business analysts because it provides information that can be relevant to the downstream analysis. The graph shows that the transformed account values follow an approximately normal distribution, in the range from $100 to $10,000,000. The median account value is approximately $30,000 (10^4.5), with the majority of the accounts between $1,000 (10^3) and $1,000,000 (10^6).

FIGURE 3-20 Density plots are better to show to data scientists (distribution of account values on the log10 scale; N = 5000, bandwidth = 0.05759)

Density plots are fairly technical, and they contain so much information that they would be difficult to explain to less technical stakeholders. For example, it would be challenging to explain why the account values are in the log10 scale, and such information is not relevant to stakeholders. The same message can be conveyed by partitioning the data into log-like bins and presenting it as a histogram.
As can be seen in Figure 3-21, the bulk of the accounts are in the $1,000–$1,000,000 range, with the peak concentration in the $10–50K range, extending to $500K. This portrayal gives the stakeholders a better sense of the customer base than the density plot shown in Figure 3-20. Note that the bin sizes should be carefully chosen to avoid distortion of the data. In this example, the bins in Figure 3-21 are chosen based on observations from the density plot in Figure 3-20. Without the density plot, the peak concentration might appear to be just an artifact of the somewhat arbitrary choices for the bin sizes.

This simple example addresses the different needs of two groups of audience: analysts and stakeholders. Chapter 12, "The Endgame, or Putting It All Together," further discusses the best practices of delivering presentations to these two groups.

Following is the R code to generate the plots in Figure 3-20 and Figure 3-21.

# Generate random log normal income data
income = rlnorm(5000, meanlog=log(40000), sdlog=log(5))

# Part I: Create the density plot
plot(density(log10(income), adjust=0.5),
     main="Distribution of Account Values (log10 scale)")
# Add rug to the density plot
rug(log10(income))

# Part II: make the histogram-like bar plot of the binned account values
breaks = c(0, 1000, 5000, 10000, 50000, 100000, 5e5, 1e6, 2e7)
bins = cut(income, breaks, include.lowest=T,
           labels = c("< 1K", "1-5K", "5-10K", "10-50K",
                      "50-100K", "100-500K", "500K-1M", "> 1M"))
plot(bins, main = "Distribution of Account Values",
     xlab = "Account value ($ USD)",
     ylab = "Number of Accounts", col="blue")

FIGURE 3-21 Histograms are better to show to stakeholders (number of accounts by account value bin, from "< 1K" to "> 1M")

3.3 Statistical Methods for Evaluation

Visualization is useful for data exploration and presentation, but statistics is crucial because it may exist throughout the entire Data Analytics Lifecycle. Statistical techniques are used during the initial data exploration and data preparation, model building, evaluation of the final models, and assessment of how the new models improve the situation when deployed in the field. In particular, statistics can help answer the following questions for data analytics:
• Model Building and Planning
  • What are the best input variables for the model?
  • Can the model predict the outcome given the input?

• Model Evaluation
  • Is the model accurate?
  • Does the model perform better than an obvious guess?
  • Does the model perform better than another candidate model?

• Model Deployment
  • Is the prediction sound?
  • Does the model have the desired effect (such as reducing the cost)?

This section discusses some useful statistical tools that may answer these questions.

3.3.1 Hypothesis Testing

When comparing populations, such as testing or evaluating the difference of the means from two samples of data (Figure 3-22), a common technique to assess the difference or the significance of the difference is hypothesis testing.

FIGURE 3-22 Distributions of two samples of data

The basic concept of hypothesis testing is to form an assertion and test it with data. When performing hypothesis tests, the common assumption is that there is no difference between two samples. This assumption is used as the default position for building the test or conducting a scientific experiment. Statisticians refer to this as the null hypothesis (H0). The alternative hypothesis (HA) is that there is a difference between two samples.
For example, if the task is to identify the effect of drug A compared to drug B on patients, the null hypothesis and alternative hypothesis would be this.

• H0: Drug A and drug B have the same effect on patients.
• HA: Drug A has a greater effect than drug B on patients.

If the task is to identify whether advertising Campaign C is effective in reducing customer churn, the null hypothesis and alternative hypothesis would be as follows.

• H0: Campaign C does not reduce customer churn better than the current campaign method.
• HA: Campaign C does reduce customer churn better than the current campaign.

It is important to state the null hypothesis and alternative hypothesis, because misstating them is likely to undermine the subsequent steps of the hypothesis testing process. A hypothesis test leads to either rejecting the null hypothesis in favor of the alternative or not rejecting the null hypothesis. Table 3-5 includes some examples of null and alternative hypotheses that should be answered during the analytic lifecycle.

TABLE 3-5 Example Null Hypotheses and Alternative Hypotheses

Application: Accuracy Forecast
  Null Hypothesis: Model X does not predict better than the existing model.
  Alternative Hypothesis: Model X predicts better than the existing model.

Application: Recommendation Engine
  Null Hypothesis: Algorithm Y does not produce better recommendations than the current algorithm being used.
  Alternative Hypothesis: Algorithm Y produces better recommendations than the current algorithm being used.

Application: Regression Modeling
  Null Hypothesis: This variable does not affect the outcome because its coefficient is zero.
  Alternative Hypothesis: This variable affects the outcome because its coefficient is not zero.

Once a model is built over the training data, it needs to be evaluated over the testing data to see if the proposed model predicts better than the existing model currently being used. The null hypothesis is that the proposed model does not predict better than the existing model. The alternative hypothesis is that the proposed model indeed predicts better than the existing model. In accuracy forecast, the null model could be that the sales of the next month are the same as the prior month. The hypothesis test needs to evaluate if the proposed model provides a better prediction.

Take a recommendation engine as an example. The null hypothesis could be that the new algorithm does not produce better recommendations than the current algorithm being deployed. The alternative hypothesis is that the new algorithm produces better recommendations than the old algorithm.

When evaluating a model, sometimes it needs to be determined if a given input variable improves the model.
In regression analysis (Chapter 6), for example, this is the same as asking if the regression coefficient for a variable is zero. The null hypothesis is that the coefficient is zero, which means the variable does not have an impact on the outcome. The alternative hypothesis is that the coefficient is nonzero, which means the variable does have an impact on the outcome.

A common hypothesis test is to compare the means of two populations. Two such hypothesis tests are discussed in Section 3.3.2.

3.3.2 Difference of Means

Hypothesis testing is a common approach to draw inferences on whether or not two populations, denoted pop1 and pop2, are different from each other. This section provides two hypothesis tests to compare the means of the respective populations based on samples randomly drawn from each population. Specifically, the two hypothesis tests in this section consider the following null and alternative hypotheses.

• H0: μ1 = μ2
• HA: μ1 ≠ μ2

The μ1 and μ2 denote the population means of pop1 and pop2, respectively.

The basic testing approach is to compare the observed sample means, X̄1 and X̄2, corresponding to each population. If the values of X̄1 and X̄2 are approximately equal to each other, the distributions of X̄1 and X̄2 overlap substantially (Figure 3-23), and the null hypothesis is supported. A large observed difference between the sample means indicates that the null hypothesis should be rejected. Formally, the difference in means can be tested using Student's t-test or Welch's t-test.

FIGURE 3-23 Overlap of the two distributions is large if X̄1 ≈ X̄2

Student's t-test

Student's t-test assumes that the distributions of the two populations have equal but unknown variances. Suppose n1 and n2 samples are randomly and independently selected from two populations, pop1 and pop2, respectively.
If each population is normally distributed with the same mean (μ1 = μ2) and with the same variance, then T (the t-statistic), given in Equation 3-1, follows a t-distribution with n1 + n2 − 2 degrees of freedom (df).

$$ T = \frac{\bar{X}_1 - \bar{X}_2}{S_p\sqrt{\tfrac{1}{n_1} + \tfrac{1}{n_2}}} \qquad \text{where} \qquad S_p^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2} \tag{3-1} $$

Here S1² and S2² denote the two sample variances, and Sp² is the pooled sample variance.

The shape of the t-distribution is similar to the normal distribution. In fact, as the degrees of freedom approach 30 or more, the t-distribution is nearly identical to the normal distribution. Because the numerator of T is the difference of the sample means, if the observed value of T is far enough from zero such that the probability of observing such a value of T is unlikely, one would reject the null hypothesis that the population means are equal.
Thus, for a small probability, say α = 0.05, T* is determined such that P(|T| ≥ T*) = 0.05. After the samples are collected and the observed value of T is calculated according to Equation 3-1, the null hypothesis (μ1 = μ2) is rejected if |T| ≥ T*.

In hypothesis testing, in general, the small probability, α, is known as the significance level of the test. The significance level of the test is the probability of rejecting the null hypothesis when the null hypothesis is actually TRUE. In other words, for α = 0.05, if the means from the two populations are truly equal, then in repeated random sampling, the observed magnitude of T would exceed T* only 5% of the time.

In the following R code example, 10 and 20 observations are randomly selected from two normally distributed populations and assigned to the variables x and y, respectively. The two populations have a mean of 100 and 105, respectively, and a standard deviation equal to 5. Student's t-test is then conducted to determine if the obtained random samples support the rejection of the null hypothesis.

# generate random observations from the two populations
x <- rnorm(10, mean=100, sd=5)   # normal distribution centered at 100
y <- rnorm(20, mean=105, sd=5)   # normal distribution centered at 105

t.test(x, y, var.equal=TRUE)     # run the Student's t-test

        Two Sample t-test

data:  x and y
t = -1.7828, df = 28, p-value = 0.08547
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.1611557  0.4271393
sample estimates:
mean of x mean of y
 102.2136  105.0806

From the R output, the observed value of T is t = −1.7828. The negative sign is due to the fact that the sample mean of x is less than the sample mean of y. Using the qt() function in R, a T value of 2.0484 corresponds to a 0.05 significance level.

# obtain t value for a two-sided test at a 0.05 significance level
qt(p=0.05/2, df=28, lower.tail= FALSE)
2.048407

Because the magnitude of the observed T statistic is less than the T value corresponding to the 0.05 significance level (|−1.7828| < 2.0484), the null hypothesis is not rejected.
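The reported t statistic and p-value can also be reproduced directly from Equation 3-1, which can help demystify the t.test() output. The following is a minimal sketch that reuses the x and y samples generated above; nothing else is assumed.

# pooled sample variance and t statistic from Equation 3-1
n1 <- length(x); n2 <- length(y)
sp2 <- ((n1 - 1)*var(x) + (n2 - 1)*var(y)) / (n1 + n2 - 2)
t_obs <- (mean(x) - mean(y)) / sqrt(sp2 * (1/n1 + 1/n2))

# two-sided p-value: the probability of a |T| at least this large when H0 is true
df <- n1 + n2 - 2
p_value <- 2 * pt(-abs(t_obs), df)

c(t_obs, p_value)   # should match the t and p-value reported by t.test()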
Because the alternative hypothesis is that the means are not equal (μ1 ≠ μ2), the possibilities of both μ1 > μ2 and μ1 < μ2 need to be considered. This form of Student's t-test is known as a two-sided hypothesis test, and it is necessary for the sum of the probabilities under both tails of the t-distribution to equal the significance level. It is customary to evenly divide the significance level between both tails. So, p = 0.05/2 = 0.025 was used in the qt() function to obtain the appropriate t-value.

To simplify the comparison of the t-test results to the significance level, the R output includes a quantity known as the p-value. In the preceding example, the p-value is 0.08547, which is the sum of P(T ≤ −1.7828) and P(T ≥ 1.7828). Figure 3-24 illustrates the t-statistic for the areas under the tails of a t-distribution. The −t and t are the observed values of the t-statistic. In the R output, t = 1.7828. The left shaded area corresponds to P(T ≤ −1.7828), and the right shaded area corresponds to P(T ≥ 1.7828).

FIGURE 3-24 Area under the tails (shaded) of a Student's t-distribution

In the R output, for a significance level of 0.05, the null hypothesis would not be rejected because a T value of magnitude 1.7828 or greater would occur with higher probability than 0.05. However, based on the p-value, if the significance level were chosen to be 0.10 instead of 0.05, the null hypothesis would be rejected. In general, the p-value offers the probability of observing such a sample result given that the null hypothesis is TRUE.

A key assumption in using Student's t-test is that the population variances are equal. In the previous example, the t.test() function call includes var.equal=TRUE to specify that equality of the variances should be assumed. If that assumption is not appropriate, then Welch's t-test should be used.

Welch's t-test

When the equal population variance assumption is not justified in performing Student's t-test for the difference of means, Welch's t-test [14] can be used based on T expressed in Equation 3-2.

$$ T = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\tfrac{S_1^2}{n_1} + \tfrac{S_2^2}{n_2}}} \tag{3-2} $$

where X̄i, Si², and ni correspond to the i-th sample mean, sample variance, and sample size. Notice that Welch's t-test uses the sample variance (Si²) for each population instead of the pooled sample variance. In Welch's test, under the remaining assumptions of random samples from two normal populations with the same mean, the distribution of T is approximated by the t-distribution. The following R code performs the Welch's t-test on the same set of data analyzed in the earlier Student's t-test example.
t.test(x, y, var.equal=FALSE)    # run the Welch's t-test

        Welch Two Sample t-test

data:  x and y
t = -1.6596, df = 15.118, p-value = 0.1176
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -6.546629  0.812663
sample estimates:
mean of x mean of y
 102.2136  105.0806

In this particular example of using Welch's t-test, the p-value is 0.1176, which is greater than the p-value of 0.08547 observed in the Student's t-test example. In this case, the null hypothesis would not be rejected at a 0.10 or 0.05 significance level.

It should be noted that the degrees of freedom calculation is not as straightforward as in the Student's t-test. In fact, the degrees of freedom calculation often results in a non-integer value, as in this example. The degrees of freedom for Welch's t-test is defined in Equation 3-3.

$$ df = \frac{\left(\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}\right)^{2}}{\dfrac{\left(S_1^2/n_1\right)^{2}}{n_1 - 1} + \dfrac{\left(S_2^2/n_2\right)^{2}}{n_2 - 1}} \tag{3-3} $$
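Equations 3-2 and 3-3 can be evaluated directly to confirm the Welch output. This is only a sketch and reuses the same x and y samples from the earlier example.

# Welch's t statistic (Equation 3-2)
v1 <- var(x)/length(x)
v2 <- var(y)/length(y)
t_welch <- (mean(x) - mean(y)) / sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom (Equation 3-3); usually a non-integer
df_welch <- (v1 + v2)^2 / (v1^2/(length(x) - 1) + v2^2/(length(y) - 1))

c(t_welch, df_welch)   # compare with the t and df reported by t.test(x, y, var.equal=FALSE)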
In both the Student's and Welch's t-test examples, the R output provides 95% confidence intervals on the difference of the means. In both examples, the confidence intervals straddle zero. Regardless of the result of the hypothesis test, the confidence interval provides an interval estimate of the difference of the population means, not just a point estimate.

A confidence interval is an interval estimate of a population parameter or characteristic based on sample data. A confidence interval is used to indicate the uncertainty of a point estimate. If x̄ is the estimate of some unknown population mean μ, the confidence interval provides an idea of how close x̄ is to the unknown μ. For example, a 95% confidence interval for a population mean straddles the TRUE, but unknown, mean 95% of the time. Consider Figure 3-25 as an example. Assume the confidence level is 95%. If the task is to estimate the mean of an unknown value μ in a normal distribution with known standard deviation σ and the estimate based on n observations is x̄, then the interval x̄ ± 1.96σ/√n straddles the unknown value of μ with about a 95% chance. If one takes 100 different samples and computes the 95% confidence interval for the mean, 95 of the 100 confidence intervals will be expected to straddle the population mean μ.

FIGURE 3-25 A 95% confidence interval straddling the unknown population mean μ

Confidence intervals appear again in Section 3.3.6 on ANOVA. Returning to the discussion of hypothesis testing, a key assumption in both the Student's and Welch's t-test is that the relevant population attribute is normally distributed. For non-normally distributed data, it is sometimes possible to transform the collected data to approximate a normal distribution. For example, taking the logarithm of a dataset can often transform skewed data to a dataset that is at least symmetric around its mean. However, if such transformations are ineffective, there are tests like the Wilcoxon rank-sum test that can be applied to see if two population distributions are different.

3.3.3 Wilcoxon Rank-Sum Test

A t-test represents a parametric test in that it makes assumptions about the population distributions from which the samples are drawn. If the populations cannot be assumed or transformed to follow a normal distribution, a nonparametric test can be used. The Wilcoxon rank-sum test [15] is a nonparametric hypothesis test that checks whether two populations are identically distributed. Assuming the two populations are identically distributed, one would expect that the ordering of any sampled observations would be evenly intermixed among themselves. For example, in ordering the observations, one would not expect to see a large number of observations from one population grouped together, especially at the beginning or the end of the ordering.

Let the two populations again be pop1 and pop2, with independently random samples of size n1 and n2 respectively. The total number of observations is then N = n1 + n2. The first step of the Wilcoxon test is to rank the set of observations from the two groups as if they came from one large group. The smallest observation receives a rank of 1, the second smallest observation receives a rank of 2, and so on, with the largest observation being assigned the rank of N. Ties among the observations receive a rank equal to the average of the ranks they span. The test uses ranks instead of numerical outcomes to avoid specific assumptions about the shape of the distribution.
After ranking all the observations, the assigned ranks are summed for at least one population's sample. If the distribution of pop1 is shifted to the right of the other distribution, the rank-sum corresponding to pop1's sample should be larger than the rank-sum of pop2. The Wilcoxon rank-sum test determines the significance of the observed rank-sums. The following R code performs the test on the same dataset used for the previous t-test.

wilcox.test(x, y, conf.int = TRUE)

The wilcox.test() function ranks the observations, determines the respective rank-sums corresponding to each population's sample, and then determines the probability of rank-sums of such magnitude being observed assuming that the population distributions are identical. In this example, the probability is given by the p-value of 0.04903. Thus, the null hypothesis would be rejected at a 0.05 significance level.
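The rank-sum itself is easy to compute by hand, which makes the mechanics of the test more concrete. A minimal sketch, again reusing the x and y samples:

# rank all observations from both samples as one combined group
combined <- c(x, y)
r <- rank(combined)          # ties receive the average of the ranks they span

# rank-sum for the sample drawn from the first population (here, x)
rank_sum_x <- sum(r[1:length(x)])
rank_sum_x

# wilcox.test() reports W, which is this rank-sum minus its smallest possible value
rank_sum_x - length(x) * (length(x) + 1) / 2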
The reader is cautioned against interpreting that one hypothesis test is clearly better than another test based solely on the examples given in this section. Because the Wilcoxon test does not assume anything about the population distribution, it is generally considered more robust than the t-test. In other words, there are fewer assumptions to violate. However, when it is reasonable to assume that the data is normally distributed, Student's or Welch's t-test is an appropriate hypothesis test to consider.

3.3.4 Type I and Type II Errors

A hypothesis test may result in two types of errors, depending on whether the test accepts or rejects the null hypothesis. These two errors are known as type I and type II errors.

• A type I error is the rejection of the null hypothesis when the null hypothesis is TRUE. The probability of the type I error is denoted by the Greek letter α.
• A type II error is the acceptance of a null hypothesis when the null hypothesis is FALSE. The probability of the type II error is denoted by the Greek letter β.

Table 3-6 lists the four possible states of a hypothesis test, including the two types of errors.

TABLE 3-6 Type I and Type II Error

                     H0 is true          H0 is false
H0 is accepted       Correct outcome     Type II error
H0 is rejected       Type I error        Correct outcome

The significance level, as mentioned in the Student's t-test discussion, is equivalent to the type I error. For a significance level such as α = 0.05, if the null hypothesis (μ1 = μ2) is TRUE, there is a 5% chance that the observed T value based on the sample data will be large enough to reject the null hypothesis. By selecting an appropriate significance level, the probability of committing a type I error can be defined before any data is collected or analyzed.

The probability of committing a type II error is somewhat more difficult to determine. If two population means are truly not equal, the probability of committing a type II error will depend on how far apart the means truly are. To reduce the probability of a type II error to a reasonable level, it is often necessary to increase the sample size. This topic is addressed in the next section.
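The claim that the significance level equals the type I error rate can be checked with a small simulation: when the null hypothesis is true, roughly α of repeated tests should reject it. The sketch below is illustrative only; the sample sizes, seed, and number of repetitions are arbitrary choices.

set.seed(123)                     # arbitrary seed for reproducibility
alpha <- 0.05

# both samples come from the same population, so H0 is true by construction
reject <- replicate(10000, {
  a <- rnorm(10, mean=100, sd=5)
  b <- rnorm(10, mean=100, sd=5)
  t.test(a, b, var.equal=TRUE)$p.value < alpha
})

mean(reject)    # proportion of false rejections; should be close to 0.05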
3.3.5 Power and Sample Size

The power of a test is the probability of correctly rejecting the null hypothesis. It is denoted by 1 − β, where β is the probability of a type II error. Because the power of a test improves as the sample size increases, power is used to determine the necessary sample size. In the difference of means, the power of a hypothesis test depends on the true difference of the population means. In other words, for a fixed significance level, a larger sample size is required to detect a smaller difference in the means. In general, the magnitude of the difference is known as the effect size. As the sample size becomes larger, it is easier to detect a given effect size, δ, as illustrated in Figure 3-26.

FIGURE 3-26 A larger sample size better identifies a fixed effect size

With a large enough sample size, almost any effect size can appear statistically significant. However, a very small effect size may be useless in a practical sense. It is important to consider an appropriate effect size for the problem at hand.
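Base R's power.t.test() function performs exactly this kind of calculation: given an effect size (delta), a standard deviation, and a significance level, it returns the sample size needed to reach a desired power, or the power achieved for a given sample size. The numbers below are illustrative assumptions, not values taken from the text.

# sample size per group needed to detect a difference of 2 units
# with sd = 5, a 0.05 significance level, and 80% power
power.t.test(delta = 2, sd = 5, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")

# conversely, the power achieved with n = 30 per group for the same effect size
power.t.test(n = 30, delta = 2, sd = 5, sig.level = 0.05,
             type = "two.sample", alternative = "two.sided")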
3.3.6 ANOVA

The hypothesis tests presented in the previous sections are good for analyzing means between two populations. But what if there are more than two populations? Consider an example of testing the impact of nutrition and exercise on 60 candidates between age 18 and 50. The candidates are randomly split into six groups, each assigned a different weight loss strategy, and the goal is to determine which strategy is the most effective.

• Group 1 only eats junk food.
• Group 2 only eats healthy food.
• Group 3 eats junk food and does cardio exercise every other day.
• Group 4 eats healthy food and does cardio exercise every other day.
• Group 5 eats junk food and does both cardio and strength training every other day.
• Group 6 eats healthy food and does both cardio and strength training every other day.

Multiple t-tests could be applied to each pair of weight loss strategies. In this example, the weight loss of Group 1 is compared with the weight loss of Group 2, 3, 4, 5, or 6. Similarly, the weight loss of Group 2 is compared with that of the next four groups. Therefore, a total of 15 t-tests would be performed.
However, multiple t-tests may not perform well on several populations for two reasons. First, because the number of t-tests increases as the number of groups increases, analysis using the multiple t-tests becomes cognitively more difficult. Second, by doing a greater number of analyses, the probability of committing at least one type I error somewhere in the analysis greatly increases.

Analysis of Variance (ANOVA) is designed to address these issues. ANOVA is a generalization of the hypothesis testing of the difference of two population means. ANOVA tests if any of the population means differ from the other population means. The null hypothesis of ANOVA is that all the population means are equal. The alternative hypothesis is that at least one pair of the population means is not equal. In other words,

• H0: μ1 = μ2 = ··· = μn
• HA: μi ≠ μj for at least one pair of i, j

As seen in Section 3.3.2, "Difference of Means," each population is assumed to be normally distributed with the same variance.

The first thing to calculate for the ANOVA is the test statistic. Essentially, the goal is to test whether the clusters formed by each population are more tightly grouped than the spread across all the populations.

Let the total number of populations be k. The total number of samples N is randomly split into the k groups. The number of samples in the i-th group is denoted as ni, and the mean of the group is x̄i where i ∈ [1, k]. The mean of all the samples is denoted as x̄0.

The between-groups mean sum of squares, S_B², is an estimate of the between-groups variance. It measures how the population means vary with respect to the grand mean, or the mean spread across all the populations. Formally, this is presented as shown in Equation 3-4.

$$ S_B^2 = \frac{1}{k-1} \sum_{i=1}^{k} n_i \left(\bar{x}_i - \bar{x}_0\right)^2 \tag{3-4} $$

The within-group mean sum of squares, S_W², is an estimate of the within-group variance. It quantifies the spread of values within groups. Formally, this is presented as shown in Equation 3-5.

$$ S_W^2 = \frac{1}{n-k} \sum_{i=1}^{k} \sum_{j=1}^{n_i} \left(x_{ij} - \bar{x}_i\right)^2 \tag{3-5} $$

If S_B² is much larger than S_W², then some of the population means are different from each other.

The F-test statistic is defined as the ratio of the between-groups mean sum of squares and the within-group mean sum of squares. Formally, this is presented as shown in Equation 3-6.

$$ F = \frac{S_B^2}{S_W^2} \tag{3-6} $$

The F-test statistic in ANOVA can be thought of as a measure of how different the means are relative to the variability within each group. The larger the observed F-test statistic, the greater the likelihood that the differences between the means are due to something other than chance alone. The F-test statistic is used to test the hypothesis that the observed effects are not due to chance, that is, that the means are significantly different from one another.

Consider an example in which every customer who visits a retail website gets one of two promotional offers or gets no promotion at all. The goal is to see if making the promotional offers makes a difference. ANOVA could be used, and the null hypothesis is that neither promotion makes a difference. The code that follows randomly generates a total of 500 observations of purchase sizes on three different offer options.

offers <- sample(c("offer1", "offer2", "nopromo"), size=500, replace=T)
# Simulated 500 observations of purchase sizes on the 3 offer options
purchasesize <- ifelse(offers=="offer1", rnorm(500, mean=50, sd=30),
                ifelse(offers=="offer2", rnorm(500, mean=55, sd=30),
                                         rnorm(500, mean=40, sd=30)))

# create a data frame of offer option and purchase size
offertest <- data.frame(offer=as.factor(offers),
                        purchase_amt=purchasesize)

The summary of the offertest data frame shows that 170 offer1, 161 offer2, and 169 nopromo (no promotion) offers have been made. It also shows the range of purchase size (purchase_amt) for each of the three offer options.

# display a summary of offertest where offer="offer1"
summary(offertest[offertest$offer=="offer1",])
     offer       purchase_amt
 nopromo:  0    Min.   :  4.521
 offer1 :170    1st Qu.: 58.158
 offer2 :  0    Median : 76.944
                Mean   : 81.936
                3rd Qu.:104.959
                Max.   :130.507

# display a summary of offertest where offer="offer2"
summary(offertest[offertest$offer=="offer2",])
     offer       purchase_amt
 nopromo:  0    Min.   : 14.04
 offer1 :  0    1st Qu.: 69.46
 offer2 :161    Median : 90.20
                Mean   : 89.09
                3rd Qu.:107.48
                Max.   :154.33

# display a summary of offertest where offer="nopromo"
summary(offertest[offertest$offer=="nopromo",])
     offer       purchase_amt
 nopromo:169    Min.   :-27.00
 offer1 :  0    1st Qu.: 20.22
 offer2 :  0    Median : 42.44
                Mean   : 40.97
                3rd Qu.: 58.96
                Max.   :164.04

The aov() function performs the ANOVA on purchase size and offer options.

# fit ANOVA test
model <- aov(purchase_amt ~ offers, data=offertest)

The summary() function shows a summary of the model. The degrees of freedom for offers is 2, which corresponds to the k − 1 in the denominator of Equation 3-4. The degrees of freedom for residuals is 497, which corresponds to the n − k in the denominator of Equation 3-5.

summary(model)
             Df Sum Sq Mean Sq F value Pr(>F)
offers        2 225222  112611   130.6 <2e-16 ***
Residuals   497 428470     862
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The output also includes S_B² (112,611), S_W² (862), the F-test statistic (130.6), and the p-value (< 2e-16). The F-test statistic is much greater than 1 with a p-value much less than 1. Thus, the null hypothesis that the means are equal should be rejected.
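As a check on the aov() output, the quantities in Equations 3-4 through 3-6 can be computed directly from the offertest data frame. This is a sketch only and reuses the simulated data from above.

k <- nlevels(offertest$offer)            # number of groups
n <- nrow(offertest)                     # total number of observations
grand_mean <- mean(offertest$purchase_amt)

group_n    <- tapply(offertest$purchase_amt, offertest$offer, length)
group_mean <- tapply(offertest$purchase_amt, offertest$offer, mean)
group_var  <- tapply(offertest$purchase_amt, offertest$offer, var)

# between-groups mean sum of squares (Equation 3-4)
SB2 <- sum(group_n * (group_mean - grand_mean)^2) / (k - 1)

# within-group mean sum of squares (Equation 3-5)
SW2 <- sum((group_n - 1) * group_var) / (n - k)

# F-test statistic (Equation 3-6); compare with the value reported by summary(model)
SB2 / SW2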
However, the result does not show whether offer1 is different from offer2, which requires additional tests. The TukeyHSD() function implements Tukey's Honest Significant Difference (HSD) on all pair-wise tests for difference of means.

TukeyHSD(model)
  Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = purchase_amt ~ offers, data = offertest)

$offers
                    diff        lwr      upr     p adj
offer1-nopromo 40.961437 33.4638483 48.45903 0.0000000
offer2-nopromo 48.120286 40.5189446 55.72163 0.0000000
offer2-offer1   7.158849 -0.4315769 14.74928 0.0692895

The result includes p-values of pair-wise comparisons of the three offer options. The p-values for offer1-nopromo and offer2-nopromo are equal to 0, smaller than the significance level 0.05. This suggests that both offer1 and offer2 are significantly different from nopromo. A p-value of 0.0692895 for offer2 against offer1 is greater than the significance level 0.05. This suggests that offer2 is not significantly different from offer1.

Because only the influence of one factor (offers) was examined, the presented ANOVA is known as one-way ANOVA. If the goal is to analyze two factors, such as offers and day of week, that would be a two-way ANOVA [16]. If the goal is to model more than one outcome variable, then multivariate ANOVA (or MANOVA) could be used.

Summary

R is a popular package and programming language for data exploration, analytics, and visualization. As an introduction to R, this chapter covers the R GUI, data I/O, attribute and data types, and descriptive statistics. This chapter also discusses how to use R to perform exploratory data analysis, including the discovery of dirty data, visualization of one or more variables, and customization of visualization for different audiences. Finally, the chapter introduces some basic statistical methods. The first statistical method presented in the chapter is hypothesis testing. The Student's t-test and Welch's t-test are included as two example hypothesis tests designed for testing the difference of means. Other statistical methods and tools presented in this chapter include confidence intervals, the Wilcoxon rank-sum test, type I and II errors, effect size, and ANOVA.

Exercises

1. How many levels does fdata contain in the following R code?

   data = c(1,2,2,3,1,2,3,3,1,2,3,3,1)
   fdata = factor(data)
2. Two vectors, v1 and v2, are created with the following R code:

   v1 <- 1:5
   v2 <- 6:2

   What are the results of cbind(v1, v2) and rbind(v1, v2)?

3. What R command(s) would you use to remove null values from a dataset?

4. What R command can be used to install an additional R package?

5. What R function is used to encode a vector as a category?

6. What is a rug plot used for in a density plot?

7. An online retailer wants to study the purchase behaviors of its customers. Figure 3-27 shows the density plot of the purchase sizes (in dollars). What would be your recommendation to enhance the plot to detect more structures that otherwise might be missed?

FIGURE 3-27 Density plot of purchase size (density against purchase size in dollars, 0 to 10,000)

8. How many sections does a box-and-whisker plot divide the data into? What are these sections?

9. What attributes are correlated according to Figure 3-18? How would you describe their relationships?

10. What function can be used to fit a nonlinear line to the data?

11. If a graph of data is skewed and all the data is positive, what mathematical technique may be used to help detect structures that might otherwise be overlooked?

12. What is a type I error? What is a type II error? Is one always more serious than the other? Why?

13. Suppose everyone who visits a retail website gets one promotional offer or no promotion at all. We want to see if making a promotional offer makes a difference. What statistical method would you recommend for this analysis?
14. You are analyzing two normally distributed populations, and your null hypothesis is that the mean μ1 of the first population is equal to the mean μ2 of the second. Assume the significance level is set at 0.05. If the observed p-value is 4.33e-05, what will be your decision regarding the null hypothesis?

Bibliography

[1] The R Project for Statistical Computing, "R Licenses." [Online]. Available: http://www.r-project.org/Licenses/. [Accessed 10 December 2013].
[2] The R Project for Statistical Computing, "The Comprehensive R Archive Network." [Online]. Available: http://cran.r-project.org/. [Accessed 10 December 2013].
[3] J. Fox and M. Bouchet-Valat, "The R Commander: A Basic-Statistics GUI for R," CRAN. [Online]. Available: http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/. [Accessed 11 December 2013].
[4] G. Williams, M. V. Culp, E. Cox, A. Nolan, D. White, D. Medri, and A. Waljee, "Rattle: Graphical User Interface for Data Mining in R," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/rattle/index.html. [Accessed 12 December 2013].
[5] RStudio, "RStudio IDE." [Online]. Available: http://www.rstudio.com/ide/. [Accessed 11 December 2013].
[6] R Special Interest Group on Databases (R-SIG-DB), "DBI: R Database Interface," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/DBI/index.html. [Accessed 13 December 2013].
[7] B. Ripley, "RODBC: ODBC Database Access," CRAN. [Online]. Available: http://cran.r-project.org/web/packages/RODBC/index.html. [Accessed 13 December 2013].
[8] S. S. Stevens, "On the Theory of Scales of Measurement," Science, vol. 103, no. 2684, pp. 677-680, 1946.
[9] D. C. Hoaglin, F. Mosteller, and J. W. Tukey, Understanding Robust and Exploratory Data Analysis, New York: Wiley, 1983.
[10] F. J. Anscombe, "Graphs in Statistical Analysis," The American Statistician, vol. 27, no. 1, pp. 17-21, 1973.
[11] H. Wickham, "ggplot2," 2013. [Online]. Available: http://ggplot2.org/. [Accessed 8 January 2014].
[12] W. S. Cleveland, Visualizing Data, Lafayette, IN: Hobart Press, 1993.
[13] R. A. Fisher, "The Use of Multiple Measurements in Taxonomic Problems," Annals of Eugenics, vol. 7, no. 2, pp. 179-188, 1936.
[14] B. L. Welch, "The Generalization of 'Student's' Problem When Several Different Population Variances Are Involved," Biometrika, vol. 34, no. 1-2, pp. 28-35, 1947.
[15] F. Wilcoxon, "Individual Comparisons by Ranking Methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.
[16] J. J. Faraway, "Practical Regression and Anova Using R," July 2002. [Online]. Available: http://cran.r-project.org/doc/contrib/Faraway-PRA.pdf. [Accessed 22 January 2014].

ADVANCED ANALYTICAL THEORY AND METHODS: CLUSTERING

Building upon the introduction to R presented in Chapter 3, "Review of Basic Data Analytic Methods Using R," Chapter 4, "Advanced Analytical Theory and Methods: Clustering," through Chapter 9, "Advanced Analytical Theory and Methods: Text Analysis," describe several commonly used analytical methods that may be considered for the Model Planning and Execution phases (Phases 3 and 4) of the Data Analytics Lifecycle.
  • 282. This chapter considers clustering techniques and algorithms. 4.1 Overview of Clustering In general, clustering is the use of unsupervised tech niques for grouping sim ilar objects. In mach ine learning, unsupervi sed refers to the problem of finding hidden structure withi n unlabeled data. Clustering techniques are unsupervised in the sense that the data scientist does not determine, in advance, the labels to apply to the clusters. The structure of the data describes the objects of interest and determines how best to group the object s. For example, based on customers' personal income, it is straightforward to divide the customers into three groups depending on arbitrarily selected values. The customers could be divided into three groups as follows: • Earn less than $10,000 • Earn between 510,000 and $99,999 • Earn $100,000 or more In this case, the income leve ls were chosen somewhat subjectively based on easy-to-commun icate points of de lineation. However, such groupings do not indicate a natural affinity of the customers within each group. In other word s, there is no inherent reason to believe that the customer making $90,000 will behave any differently than the customer making 5110,000. As additional dimensions are introduced by
  • 283. adding more variables about the customers, the ta sk of finding meaningful groupings becomes more complex. For instance, suppose variables such as age, years of education, household size, and annual purchase expenditures were considered along with the personal income variable. What are t he natural occurring groupings of customers? This is the type of question that clustering analysis can help answer. Clustering is a method often used for exploratory ana lysis of the data. In clustering, there are no pre- dictions made. Rather, clustering methods find the sim ilarities between objec ts according to the object attributes and group the simi lar objects into clusters. Clustering techniques are utilized in marke ti ng, economics, and various branches of science. A popular clustering method is k-means. 4.2 K-means Given a collection of objects each with n measurable attributes, k-means [1] is an analytical technique that, for a chose n value of k, identifies k clusters of obj ects based on the objects' proximity to the center of the k groups. The center is determined as the arithmetic average (mean) of each cluster's n-dimensiona l vector of attributes. This section describes the algorithm to determ ine the k means as well as how best to apply this technique to several use cases. Figure 4-1 illustrates three clusters of objects with two attributes. Each object in the dataset is represented by a small dot color-coded to the closest large dot, the mean of the cluster.
FIGURE 4-1 Possible k-means clusters for k = 3 (each object is a small dot color-coded to the closest large dot, the mean of its cluster)

4.2.1 Use Cases

Clustering is often used as a lead-in to classification. Once the clusters are identified, labels can be applied to each cluster to classify each group based on its characteristics. Classification is covered in more detail in Chapter 7, "Advanced Analytical Theory and Methods: Classification." Clustering is primarily an exploratory technique to discover hidden structures of the data, possibly as a prelude to more focused analysis or decision processes. Some specific applications of k-means are image processing, medical, and customer segmentation.

Image Processing

Video is one example of the growing volumes of unstructured data being collected. Within each frame of a video, k-means analysis can be used to identify objects in the video. For each frame, the task is to determine which pixels are most similar to each other. The attributes of each pixel can include brightness, color, and location, the x and y coordinates in the frame. With security video images, for example, successive frames are examined to identify any changes to the clusters. These newly identified clusters may indicate unauthorized access to a facility.

Medical

Patient attributes such as age, height, weight, systolic and diastolic blood pressures, cholesterol level, and other attributes can identify naturally occurring clusters. These clusters could be used to target individuals for specific preventive measures or clinical trial participation. Clustering, in general, is useful in biology for the classification of plants and animals as well as in the field of human genetics.

Customer Segmentation

Marketing and sales groups use k-means to better identify customers who have similar behaviors and spending patterns. For example, a wireless provider may look at the following customer attributes: monthly bill, number of text messages, data volume consumed, minutes used during various daily periods, and years as a customer. The wireless company could then look at the naturally occurring clusters and consider tactics to increase sales or reduce the customer churn rate, the proportion of customers who end their relationship with a particular company.
4.2.2 Overview of the Method

To illustrate the method to find k clusters from a collection of M objects with n attributes, the two-dimensional case (n = 2) is examined. It is much easier to visualize the k-means method in two dimensions. Later in the chapter, the two-dimension scenario is generalized to handle any number of attributes.

Because each object in this example has two attributes, it is useful to consider each object as corresponding to the point (xi, yi), where x and y denote the two attributes and i = 1, 2, ..., M. For a given cluster of m points (m ≤ M), the point that corresponds to the cluster's mean is called a centroid. In mathematics, a centroid refers to a point that corresponds to the center of mass for an object.

The k-means algorithm to find k clusters can be described in the following four steps.

1. Choose the value of k and the k initial guesses for the centroids. In this example, k = 3, and the initial centroids are indicated by the points shaded in red, green, and blue in Figure 4-2.

FIGURE 4-2 Initial starting points for the centroids

2. Compute the distance from each data point (xi, yi) to each centroid. Assign each point to the closest centroid. This association defines the first k clusters.

   In two dimensions, the distance, d, between any two points, (x1, y1) and (x2, y2), in the Cartesian plane is typically expressed by using the Euclidean distance measure provided in Equation 4-1.

   $$ d(x_1, y_1, x_2, y_2) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \tag{4-1} $$

   In Figure 4-3, the points closest to a centroid are shaded the corresponding color.

FIGURE 4-3 Points are assigned to the closest centroid

3. Compute the centroid, the center of mass, of each newly defined cluster from Step 2. In Figure 4-4, the computed centroids in Step 3 are the lightly shaded points of the corresponding color. In two dimensions, the centroid (xc, yc) of the m points in a k-means cluster is calculated as follows in Equation 4-2.

   $$ (x_c, y_c) = \left( \frac{\sum_{i=1}^{m} x_i}{m}, \; \frac{\sum_{i=1}^{m} y_i}{m} \right) \tag{4-2} $$

   Thus, (xc, yc) is the ordered pair of the arithmetic means of the coordinates of the m points in the cluster. In this step, a centroid is computed for each of the k clusters.

4. Repeat Steps 2 and 3 until the algorithm converges to an answer.
   a. Assign each point to the closest centroid computed in Step 3.
   b. Compute the centroid of the newly defined clusters.
   c. Repeat until the algorithm reaches the final answer.

Convergence is reached when the computed centroids do not change or the centroids and the assigned points oscillate back and forth from one iteration to the next. The latter case can occur when there are one or more points that are equal distances from the computed centroid.

FIGURE 4-4 Compute the mean of each cluster

To generalize the prior algorithm to n dimensions, suppose there are M objects, where each object is described by n attributes or property values (p1, p2, ..., pn). Then object i is described by (pi1, pi2, ..., pin) for i = 1, 2, ..., M. In other words, there is a matrix with M rows corresponding to the M objects and n columns to store the attribute values. To expand the earlier process to find the k clusters from two dimensions to n dimensions, the following equations provide the formulas for calculating the distances and the locations of the centroids for n ≥ 1.

For a given point, pi, at (pi1, pi2, ..., pin) and a centroid, q, located at (q1, q2, ..., qn), the distance, d, between pi and q is expressed as shown in Equation 4-3.

$$ d(p_i, q) = \sqrt{\sum_{j=1}^{n} \left(p_{ij} - q_j\right)^2} \tag{4-3} $$

The centroid, q, of a cluster of m points, (pi1, pi2, ..., pin), is calculated as shown in Equation 4-4.

$$ q = \left( \frac{\sum_{i=1}^{m} p_{i1}}{m}, \; \frac{\sum_{i=1}^{m} p_{i2}}{m}, \; \ldots, \; \frac{\sum_{i=1}^{m} p_{in}}{m} \right) \tag{4-4} $$
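To make Equations 4-3 and 4-4 concrete, the following sketch performs a single assignment-and-update iteration of the algorithm on a small random matrix. It is a toy illustration on assumed data, not a replacement for R's built-in kmeans() function used later in this section.

set.seed(1)                               # arbitrary seed
M <- 60; n <- 2; k <- 3
pts <- matrix(rnorm(M * n), ncol = n)     # M objects with n attributes
centroids <- pts[sample(M, k), ]          # k initial guesses drawn from the data

# Step 2: assign each point to the closest centroid (distances from Equation 4-3)
d2 <- sapply(1:k, function(j)
  rowSums((pts - matrix(centroids[j, ], M, n, byrow=TRUE))^2))
assignment <- apply(d2, 1, which.min)

# Step 3: recompute each centroid as the mean of its assigned points (Equation 4-4)
new_centroids <- t(sapply(1:k, function(j)
  colMeans(pts[assignment == j, , drop=FALSE])))

# Step 4 would repeat the two steps above until the assignments stop changing
new_centroids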
4.2.3 Determining the Number of Clusters

With the preceding algorithm, k clusters can be identified in a given dataset, but what value of k should be selected? The value of k can be chosen based on a reasonable guess or some predefined requirement. However, even then, it would be good to know how much better or worse having k clusters versus k − 1 or k + 1 clusters would be in explaining the structure of the data. Next, a heuristic using the Within Sum of Squares (WSS) metric is examined to determine a reasonably optimal value of k. Using the distance function given in Equation 4-3, WSS is defined as shown in Equation 4-5.

$$ WSS = \sum_{i=1}^{M} d\!\left(p_i, q^{(i)}\right)^2 = \sum_{i=1}^{M} \sum_{j=1}^{n} \left(p_{ij} - q_j^{(i)}\right)^2 \tag{4-5} $$

In other words, WSS is the sum of the squares of the distances between each data point and the closest centroid. The term q(i) indicates the closest centroid that is associated with the i-th point. If the points are relatively close to their respective centroids, the WSS is relatively small. Thus, if k + 1 clusters do not greatly reduce the value of WSS from the case with only k clusters, there may be little benefit to adding another cluster.

Using R to Perform a K-means Analysis

To illustrate how to use the WSS to determine an appropriate number, k, of clusters, the following example uses R to perform a k-means analysis. The task is to group 620 high school seniors based on their grades in three subject areas: English, mathematics, and science. The grades are averaged over their high school career and assume values from 0 to 100. The following R code establishes the necessary R libraries and imports the CSV file containing the grades.
library(plyr)
library(ggplot2)
library(cluster)
library(lattice)
library(graphics)
library(grid)
library(gridExtra)

# import the student grades
grade_input = as.data.frame(read.csv("c:/data/grades_km_input.csv"))

The following R code formats the grades for processing. The data file contains four columns. The first column holds a student identification (ID) number, and the other three columns are for the grades in the three subject areas. Because the student ID is not used in the clustering analysis, it is excluded from the k-means input matrix, kmdata.

kmdata_orig = as.matrix(grade_input[, c("Student", "English", "Math", "Science")])
kmdata <- kmdata_orig[, 2:4]
kmdata[1:10,]
      English Math Science
 [1,]      99   96      97
 [2,]      99   96      97
 [3,]      98   97      97
 [4,]      95  100      95
 [5,]      95   96      96
 [6,]      96   97      96
 [7,]     100   96      97
 [8,]      95   98      98
 [9,]      98   96      96
[10,]      99   99      95

To determine an appropriate value for k, the k-means algorithm is used to identify clusters for k = 1, 2, ..., 15. For each value of k, the WSS is calculated. If an additional cluster provides a better partitioning of the data points, the WSS should be markedly smaller than without the additional cluster.

The following R code loops through several k-means analyses for the number of centroids, k, varying from 1 to 15. For each k, the option nstart=25 specifies that the k-means algorithm will be repeated 25 times, each starting with k random initial centroids. The corresponding value of WSS for each k-means analysis is stored in the wss vector.

wss <- numeric(15)
for (k in 1:15) wss[k] <- sum(kmeans(kmdata, centers=k, nstart=25)$withinss)

Using the basic R plot function, each WSS is plotted against the respective number of centroids, 1 through 15. This plot is provided in Figure 4-5.

plot(1:15, wss, type="b", xlab="Number of Clusters", ylab="Within Sum of Squares")

FIGURE 4-5 WSS of the student grade data (Within Sum of Squares against the number of clusters, 1 through 15)

As can be seen, the WSS is greatly reduced when k increases from one to two. Another substantial reduction in WSS occurs at k = 3. However, the improvement in WSS is fairly linear for k > 3. Therefore, the k-means analysis will be conducted for k = 3. The process of identifying the appropriate value of k is referred to as finding the "elbow" of the WSS curve.
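Equation 4-5 can also be evaluated directly for a single value of k to confirm what kmeans() reports. The short sketch below reuses kmdata from above; the object names are otherwise arbitrary.

# WSS computed from Equation 4-5 for k = 3
fit <- kmeans(kmdata, centers=3, nstart=25)
assigned_center <- fit$centers[fit$cluster, ]    # centroid q(i) for each point
wss_manual <- sum((kmdata - assigned_center)^2)

c(wss_manual, fit$tot.withinss)                  # the two values should agree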
km = kmeans(kmdata, 3, nstart=25)
km
K-means clustering with 3 clusters of sizes 158, 218, 244

Cluster means:
   English     Math  Science
1 97.21519 93.37342 94.86076
2 73.22018 64.62844 65.84862
3 85.84426 79.68033 81.50820

Clustering vector:
  [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ...
  [cluster assignments for all 620 students; full vector omitted here]

Within cluster sum of squares by cluster:
[1]  6692.589 34806.339 22984.131
 (between_SS / total_SS =  76.5 %)

Available components:

[1] "cluster"      "centers"      "totss"        "withinss"     "tot.withinss"
[6] "betweenss"    "size"         "iter"         "ifault"
The displayed contents of the variable km include the following:

• The location of the cluster means
• A clustering vector that defines the membership of each student to a corresponding cluster 1, 2, or 3
• The WSS of each cluster
• A list of all the available k-means components

The reader can find details on these components and on using k-means in R by employing the help facility.

The reader may have wondered whether the k-means results stored in km are equivalent to the WSS results obtained earlier in generating the plot in Figure 4-5. The following check verifies that the results are indeed equivalent.

c( wss[3], sum(km$withinss) )
[1] 64483.06 64483.06

In determining the value of k, the data scientist should visualize the data and assigned clusters. In the following code, the ggplot2 package is used to visualize the identified student clusters and centroids.

# prepare the student data and clustering results for plotting
df = as.data.frame(kmdata_orig[, 2:4])
df$cluster = factor(km$cluster)
centers = as.data.frame(km$centers)

g1 = ggplot(data=df, aes(x=English, y=Math, color=cluster)) +
  geom_point() + theme(legend.position="right") +
  geom_point(data=centers,
             aes(x=English, y=Math, color=as.factor(c(1,2,3))),
             size=10, alpha=.3, show_guide=FALSE)

g2 = ggplot(data=df, aes(x=English, y=Science, color=cluster)) +
  geom_point() +
  geom_point(data=centers,
             aes(x=English, y=Science, color=as.factor(c(1,2,3))),
             size=10, alpha=.3, show_guide=FALSE)

g3 = ggplot(data=df, aes(x=Math, y=Science, color=cluster)) +
  geom_point() +
  geom_point(data=centers,
             aes(x=Math, y=Science, color=as.factor(c(1,2,3))),
             size=10, alpha=.3, show_guide=FALSE)

tmp = ggplot_gtable(ggplot_build(g1))

grid.arrange(arrangeGrob(g1 + theme(legend.position="none"),
                         g2 + theme(legend.position="none"),
                         g3 + theme(legend.position="none"),
                         main="High School Student Cluster Analysis",
                         ncol=1))
The resulting plots are provided in Figure 4-6. The large circles represent the location of the cluster means provided earlier in the display of the km contents. The small dots represent the students corresponding to the appropriate cluster by assigned color: red, blue, or green. In general, the plots indicate the three clusters of students: the top academic students (red), the academically challenged students (green), and the other students (blue) who fall somewhere between those two groups. The plots also highlight which students may excel in one or two subject areas but struggle in other areas.

FIGURE 4-6 Plots of the identified student clusters ("High School Student Cluster Analysis": three panels showing Math versus English, Science versus English, and Science versus Math, with points colored by cluster)

Assigning labels to the identified clusters is useful to communicate the results of an analysis. In a marketing context, it is common to label a group of customers as frequent shoppers or big spenders. Such designations are especially useful when communicating the clustering results to business users or executives. It is better to describe the marketing plan for big spenders rather than Cluster #1.
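Attaching such labels can be done directly on the clustering vector. The following is a minimal sketch; the label text is illustrative, and the cluster-to-label mapping should be checked against the cluster means (km$centers) before it is used, because the numbering of clusters is arbitrary from run to run.

# map cluster numbers to descriptive names; the order must match km$centers
cluster_labels <- c("top academic students",
                    "academically challenged students",
                    "middle-of-the-road students")
df$segment <- factor(km$cluster, levels=c(1, 2, 3), labels=cluster_labels)

table(df$segment)    # number of students in each named segment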
• Do any of the centroids appear to be too close to each other?

In the first case, ideally the plot would look like the one shown in Figure 4-7, when n = 2. The clusters are well defined, with considerable space between the four identified clusters. However, in other cases, such as Figure 4-8, the clusters may be close to each other, and the distinction may not be so obvious.

FIGURE 4-7 Example of distinct clusters
In such cases, it is important to apply some judgment on whether anything different will result by using more clusters. For example, Figure 4-9 uses six clusters to describe the same dataset as used in Figure 4-8. If using more clusters does not better distinguish the groups, it is almost certainly better to go with fewer clusters.

FIGURE 4-8 Example of less obvious clusters

FIGURE 4-9 Six clusters applied to the points from Figure 4-8
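To try this kind of visual diagnostic on data of one's own, a minimal sketch along the following lines may help; the simulated points and the two choices of k below are purely illustrative and are not the dataset behind Figures 4-8 and 4-9.

# Illustrative sketch: simulate two overlapping groups of points, cluster them
# with two different values of k, and plot the assignments to judge separation.
set.seed(1234)
x <- c(rnorm(75, mean = 3, sd = 1.5), rnorm(75, mean = 6, sd = 1.5))
y <- c(rnorm(75, mean = 4, sd = 1.5), rnorm(75, mean = 6, sd = 1.5))
pts <- cbind(x, y)

par(mfrow = c(1, 2))
for (k in c(3, 6)) {
  fit <- kmeans(pts, centers = k, nstart = 25)
  plot(pts, col = fit$cluster, pch = 16,
       main = paste0("k = ", k, ", WSS = ", round(fit$tot.withinss, 1)))
  points(fit$centers, pch = 8, cex = 2)   # mark the centroids
}

If the larger value of k does not produce visibly more distinct groups than the smaller one, the smaller k is usually the better choice.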
4.2.5 Reasons to Choose and Cautions

K-means is a simple and straightforward method for defining clusters. Once clusters and their associated centroids are identified, it is easy to assign new objects (for example, new customers) to a cluster based on the object's distance from the closest centroid. Because the method is unsupervised, using k-means helps to eliminate subjectivity from the analysis.

Although k-means is considered an unsupervised method, there are still several decisions that the practitioner must make:

• What object attributes should be included in the analysis?
• What unit of measure (for example, miles or kilometers) should be used for each attribute?
• Do the attributes need to be rescaled so that one attribute does not have a disproportionate effect on the results?
• What other considerations might apply?

Object Attributes

Regarding which object attributes (for example, age and income) to use in the analysis, it is important to understand what attributes will be known at the time a new object will be assigned to a cluster. For example, information on existing customers' satisfaction or purchase frequency may be available, but such information may not be available for potential customers. The data scientist may have a choice of a dozen or more
attributes to use in the clustering analysis. Whenever the data permits, it is best to reduce the number of attributes. Too many attributes can dilute the impact of the most important variables. Also, the use of several similar attributes can place too much importance on one type of attribute. For example, if five attributes related to personal wealth are included in a clustering analysis, the wealth attributes dominate the analysis and possibly mask the importance of other attributes, such as age.

When dealing with the problem of too many attributes, one useful approach is to identify any highly correlated attributes and use only one or two of the correlated attributes in the clustering analysis. As illustrated in Figure 4-10, a scatterplot matrix, as introduced in Chapter 3, is a useful tool to visualize the pair-wise relationships between the attributes. The strongest relationship is observed to be between Attribute3 and Attribute7. If the value of one of these two attributes is known, it appears that the value of the other attribute is known with near certainty. Other linear relationships are also identified in the plot. For example, consider the plot of Attribute2 against Attribute3. If the value of Attribute2 is known, there is still a wide range of possible values for Attribute3. Thus, greater consideration must be given prior to dropping one of these attributes from the clustering analysis.
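In practice, such redundancy can be checked numerically as well as visually. The sketch below is illustrative only: the data frame attr_df and the forced relationship between Attribute3 and Attribute7 are made up to mimic the situation described above.

# Illustrative check for highly correlated attributes before clustering
set.seed(123)
attr_df <- as.data.frame(matrix(rnorm(700), ncol = 7,
                                dimnames = list(NULL, paste0("Attribute", 1:7))))
attr_df$Attribute7 <- attr_df$Attribute3 + rnorm(100, sd = 0.05)  # force a near-perfect relationship

round(cor(attr_df), 2)   # correlations near +1 or -1 flag redundant attribute pairs
pairs(attr_df)           # scatterplot matrix, comparable to Figure 4-10

Attributes whose correlation is close to 1 in absolute value are natural candidates to drop or to combine into a single measure, as discussed next.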
Another option to reduce the number of attributes is to combine several attributes into one measure. For example, instead of using two attribute variables, one for Debt and one for Assets, a Debt to Asset ratio could be used. This option also addresses the problem when the magnitude of an attribute is not of real interest, but the relative magnitude is a more important measure.

FIGURE 4-10 Scatterplot matrix for seven attributes

Units of Measure

From a computational perspective, the k-means algorithm is somewhat indifferent to the units of measure for a given attribute (for example, meters or centimeters for a patient's height). However, the algorithm will identify different clusters depending on the choice of the units of measure. For example, suppose that k-means is used to cluster patients based on age in years and height in centimeters. For k = 2, Figure 4-11 illustrates the two clusters that would be determined for a given dataset.
FIGURE 4-11 Clusters with height expressed in centimeters

But if the height was rescaled from centimeters to meters by dividing by 100, the resulting clusters would be slightly different, as illustrated in Figure 4-12.
FIGURE 4-12 Clusters with height expressed in meters

When the height is expressed in meters, the magnitude of the ages dominates the distance calculation between two points. The height attribute can contribute at most the square of the difference between the maximum and minimum heights, or (2.0 - 0)^2 = 4, to the radicand, the number under the square root symbol in the distance formula given in Equation 4-3. Age can contribute as much as (80 - 0)^2 = 6,400 to the radicand when measuring the distance.

Rescaling

Attributes that are expressed in dollars are common in clustering analyses and can differ in magnitude from the other attributes. For example, if personal income is expressed in dollars and age is expressed in years, the income attribute, often exceeding $10,000, can easily dominate the distance calculation with ages typically less than 100 years.
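A small numeric illustration makes the point; the two hypothetical customers below are not from the text, and only the units change between the two distance calculations.

# Two hypothetical customers described by (age in years, income in dollars)
a <- c(age = 25, income = 50000)
b <- c(age = 65, income = 52000)
m <- rbind(a, b)

dist(m)                                      # about 2000.4: the $2,000 income gap dominates
m_thousands <- sweep(m, 2, c(1, 1000), "/")  # express income in thousands of dollars instead
dist(m_thousands)                            # about 40.0: now the 40-year age gap dominates

Neither choice of units is inherently right, which is exactly why a unit-free rescaling, described next, is often preferred.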
Although some adjustments could be made by expressing the income in thousands of dollars (for example, 10 for $10,000), a more straightforward method is to divide each attribute by the attribute's standard deviation. The resulting attributes will each have a standard deviation equal to 1 and will be without units. Returning to the age and height example, the standard deviations are 23.1 years and 36.4 cm, respectively. Dividing each attribute value by the appropriate standard deviation and performing the k-means analysis yields the result shown in Figure 4-13.

FIGURE 4-13 Clusters with rescaled attributes

With the rescaled attributes for age and height, the borders of the resulting clusters now fall somewhere between the two earlier clustering analyses. Such an occurrence is not surprising based on the magnitudes of the attributes of the previous clustering attempts. Some practitioners also subtract the means of the attributes to center the attributes around zero. However, this step is unnecessary because the distance formula is only sensitive to the scale of the attribute, not its location.
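In R, this rescaling can be performed with the scale() function before calling kmeans(). The sketch below uses simulated age and height values; the data frame name patients and the numbers are hypothetical, not the dataset behind Figures 4-11 through 4-13.

# Hypothetical sketch: divide each attribute by its standard deviation, then cluster
set.seed(42)
patients <- data.frame(age    = runif(100, 1, 80),     # years
                       height = rnorm(100, 150, 30))   # centimeters

# center = FALSE leaves the means alone; only the scale matters for the distances
rescaled <- scale(patients, center = FALSE, scale = apply(patients, 2, sd))
km <- kmeans(rescaled, centers = 2, nstart = 25)  # nstart reruns k-means from several random starts

plot(patients$age, patients$height, col = km$cluster, pch = 16,
     xlab = "age (years)", ylab = "height (cm)")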
In many statistical analyses, it is common to transform typically skewed data, such as income, with long tails by taking the logarithm of the data. Such a transformation can also be applied in k-means, but the data scientist needs to be aware of what effect this transformation will have. For example, if log10 of income expressed in dollars is used, the practitioner is essentially stating that, from a clustering perspective, $1,000 is as close to $10,000 as $10,000 is to $100,000 (because log10 1,000 = 3, log10 10,000 = 4, and log10 100,000 = 5). In many cases, the skewness of the data may be the reason to perform the clustering analysis in the first place.

Additional Considerations

The k-means algorithm is sensitive to the starting positions of the initial centroids. Thus, it is important to rerun the k-means analysis several times for a particular value of k to ensure the cluster results provide the overall minimum WSS. As seen earlier, this task is accomplished in R by using the nstart option in the kmeans() function call.
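The following short sketch ties these two points together: it clusters a skewed, simulated income attribute on both the dollar and the log10 scale, rerunning k-means from 25 random starting positions via nstart. The values are made up purely for illustration.

# Illustrative sketch: clustering a skewed attribute on the raw and log10 scales
set.seed(7)
income <- c(rlnorm(180, meanlog = 10, sdlog = 0.5),  # long right tail of "ordinary" incomes
            rlnorm(20,  meanlog = 12, sdlog = 0.3))  # a smaller high-income group
hist(income, breaks = 40)                   # strongly skewed on the dollar scale

km_raw <- kmeans(income,        centers = 2, nstart = 25)
km_log <- kmeans(log10(income), centers = 2, nstart = 25)
table(km_raw$cluster, km_log$cluster)       # the two scalings may assign some observations differently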
This chapter presented the use of the Euclidean distance function to assign the points to the closest centroids. Other possible function choices include the cosine similarity and the Manhattan distance functions. The cosine similarity function is often chosen to compare two documents based on the frequency of each word that appears in each of the documents [2]. For two points, p and q, at (p_1, p_2, ..., p_n) and (q_1, q_2, ..., q_n), respectively, the Manhattan distance, d_1, between p and q is expressed as shown in Equation 4-6.

d_1(p, q) = \sum_{i=1}^{n} |p_i - q_i|     (4-6)

The Manhattan distance function is analogous to the distance traveled by a car in a city, where the streets are laid out in a rectangular grid (such as city blocks). In Euclidean distance, the measurement is made in a straight line. Using Equation 4-6, the distance from (1, 1) to (4, 5) would be |1 - 4| + |1 - 5| = 7.
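This small check reproduces the worked example in base R; dist() computes the same quantity when its method argument is set to "manhattan".

# Manhattan distance between (1, 1) and (4, 5), matching the worked example above
p <- c(1, 1)
q <- c(4, 5)
sum(abs(p - q))                           # 7
dist(rbind(p, q), method = "manhattan")   # 7
dist(rbind(p, q))                         # 5, the straight-line (Euclidean) distance, for comparison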
From an optimization perspective, if there is a need to use the Manhattan distance for a clustering analysis, the median is a better choice for the centroid than use of the mean [2].

K-means clustering is applicable to objects that can be described by attributes that are numerical with a meaningful distance measure. From Chapter 3, interval and ratio attribute types can certainly be used. However, k-means does not handle categorical variables well. For example, suppose a clustering analysis is to be conducted on new car sales. Among other attributes, such as the sale price, the color of the car is considered important. Although one could assign numerical values to the color, such as red = 1, yellow = 2, and green = 3, it is not useful to consider that yellow is as close to red as yellow is to green from a clustering perspective. In such cases, it may be necessary to use an alternative clustering methodology. Such methods are described in the next section.

4.3 Additional Algorithms

The k-means clustering method is easily applied to numeric data where the concept of distance can naturally be applied. However, it may be necessary or desirable to use an alternative clustering algorithm. As discussed at the end of the previous section, k-means does not handle categorical data. In such cases, k-modes [3] is a commonly used method for clustering categorical data based on the number of differences in the respective
components of the attributes. For example, if each object has four attributes, the distance from (a, b, e, d) to (d, d, d, d) is 3. In R, the kmodes() function is implemented in the klaR package.

Because k-means and k-modes divide the entire dataset into distinct groups, both approaches are considered partitioning methods. A third partitioning method is known as Partitioning Around Medoids (PAM) [4]. In general, a medoid is a representative object in a set of objects. In clustering, the medoids are the objects in each cluster that minimize the sum of the distances from the medoid to the other objects in the cluster. The advantage of using PAM is that the "center" of each cluster is an actual object in the dataset. PAM is implemented in R by the pam() function included in the cluster R package. The fpc R package includes a function, pamk(), which uses the pam() function to find the optimal value for k.

Other clustering methods include hierarchical agglomerative clustering and density-based clustering methods. In hierarchical agglomerative clustering, each object is initially placed in its own cluster. The clusters are then combined with the most similar cluster. This process is repeated until one cluster, which includes all the objects, exists. The R stats package includes the hclust() function for performing hierarchical agglomerative clustering.
In density-based clustering methods, the clusters are identified by the concentration of points. The fpc R package includes a function, dbscan(), to perform density-based clustering analysis. Density-based clustering can be useful to identify irregularly shaped clusters.
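The calls below show roughly how these alternative functions might be exercised. This is only a sketch: the packages must be installed separately, and the toy categorical data frame as well as the parameter values (the number of modes, the choice of k, and the dbscan eps radius) are arbitrary illustrations rather than recommendations.

# Illustrative calls to the alternative clustering functions named above
library(klaR)      # kmodes() for categorical data
library(cluster)   # pam() and the ruspini demo dataset
library(fpc)       # pamk() and dbscan()

data(ruspini)      # small numeric dataset shipped with the cluster package

set.seed(1)
cars_df <- data.frame(color = sample(c("red", "yellow", "green"), 50, replace = TRUE),
                      type  = sample(c("sedan", "suv"), 50, replace = TRUE))
kmodes(cars_df, modes = 3)            # k-modes on purely categorical attributes

pam(ruspini, k = 4)                   # PAM: each cluster "center" is an actual observation
pamk(ruspini)$nc                      # pamk() suggests a number of clusters

hc <- hclust(dist(ruspini))           # hierarchical agglomerative clustering
cutree(hc, k = 4)                     # cut the resulting tree into four groups

dbscan(ruspini, eps = 20, MinPts = 5) # density-based clustering; eps chosen arbitrarily here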
Summary

Clustering analysis groups similar objects based on the objects' attributes. Clustering is applied in areas such as marketing, economics, biology, and medicine. This chapter presented a detailed explanation of the k-means algorithm and its implementation in R. To use k-means properly, it is important to do the following:

• Properly scale the attribute values to prevent certain attributes from dominating the other attributes.
• Ensure that the concept of distance between the assigned values within an attribute is meaningful.
• Choose the number of clusters, k, such that the sum of the Within Sum of Squares (WSS) of the distances is reasonably minimized. A plot such as the example in Figure 4-5 can be helpful in this respect.

If k-means does not appear to be an appropriate clustering technique for a given dataset, then alternative techniques such as k-modes or PAM should be considered.

Once the clusters are identified, it is often useful to label these clusters in some descriptive way. Especially when dealing with upper management, these labels are useful to easily communicate the findings of the clustering analysis. In clustering, the labels are not preassigned to each object. The labels are subjectively assigned after the clusters have been identified. Chapter 7 considers several methods to perform the classification of objects with predetermined labels. Clustering can be used with other analytical techniques, such as regression. Linear regression and logistic regression are covered in Chapter 6, "Advanced Analytical Theory and Methods: Regression."

Exercises

1. Using the age and height clustering example in section 4.2.5, algebraically illustrate the impact on the measured distance when the height is expressed in meters rather than centimeters. Explain why different clusters will result depending on the choice of units for the patient's height.
2. Compare and contrast five clustering algorithms, assigned by the instructor or selected by the student.
3. Using the ruspini dataset provided with the cluster package in R, perform a k-means analysis. Document the findings and justify the choice of k. Hint: use data(ruspini) to load the dataset into the R workspace.

Bibliography
[1] J. MacQueen, "Some Methods for Classification and Analysis of Multivariate Observations," in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, 1967.
[2] P.-N. Tan, V. Kumar, and M. Steinbach, Introduction to Data Mining, Upper Saddle River, NJ: Pearson, 2013.
[3] Z. Huang, "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining," 1997. [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134.83&rep=rep1&type=pdf. [Accessed 13 March 2014].
[4] L. Kaufman and P. J. Rousseeuw, "Partitioning Around Medoids (Program PAM)," in Finding Groups in Data: An Introduction to Cluster Analysis, Hoboken, NJ: John Wiley & Sons, Inc., 2008, pp. 68-125 (Chapter 2).

ADVANCED ANALYTICAL THEORY AND METHODS: ASSOCIATION RULES

This chapter discusses an unsupervised learning method called association rules. This is a descriptive, not predictive, method often used to discover interesting relationships hidden in a large dataset. The disclosed relationships can be represented as rules or frequent itemsets.
Association rules are commonly used for mining transactions in databases. Here are some possible questions that association rules can answer:

• Which products tend to be purchased together?
• Of those customers who are similar to this person, what products do they tend to buy?
• Of those customers who have purchased this product, what other similar products do they tend to view or purchase?

5.1 Overview

Figure 5-1 shows the general logic behind association rules. Given a large collection of transactions (depicted as three stacks of receipts in the figure), in which each transaction consists of one or more items, association rules go through the items being purchased to see what items are frequently bought together and to discover a list of rules that describe the purchasing behavior. The goal with association rules is to discover interesting relationships among the items. (The relationship occurs too frequently to be random and is meaningful from a business perspective, which may or may not be obvious.) The relationships that are interesting depend both on the business context and the nature of the algorithm being used for the discovery.

FIGURE 5-1 The general logic behind association rules (the rules shown are Cereal → Milk (90%), Bread → Milk (40%), Milk → Cereal (23%), Milk → Apples (10%), and Wine → Diapers (2%))
Each of the uncovered rules is in the form X → Y, meaning that when item X is observed, item Y is also observed. In this case, the left-hand side (LHS) of the rule is X, and the right-hand side (RHS) of the rule is Y. Using association rules, patterns can be discovered from the data that allow the association rule algorithms to disclose rules of related product purchases. The uncovered rules are listed on the right side of
Figure 5-1. The first three rules suggest that when cereal is purchased, 90% of the time milk is purchased also. When bread is purchased, 40% of the time milk is purchased also. When milk is purchased, 23% of the time cereal is also purchased.

In the example of a retail store, association rules are used over transactions that consist of one or more items. In fact, because of their popularity in mining customer transactions, association rules are sometimes referred to as market basket analysis. Each transaction can be viewed as the shopping basket of a customer that contains one or more items. This is also known as an itemset. The term itemset refers to a collection of items or individual entities that contain some kind of relationship. This could be a set of retail items purchased together in one transaction, a set of hyperlinks clicked on by one user in a single session, or a set of tasks done in one day. An itemset containing k items is called a k-itemset. This chapter uses curly braces like {item 1, item 2, ... item k} to denote a k-itemset. Computation of the association rules is typically based on itemsets.

The research of association rules started as early as the 1960s. Early research by Hajek et al. [1] introduced many of the key concepts and approaches of association rule learning, but it focused on the
mathematical representation rather than the algorithm. The framework of association rule learning was brought into the database community by Agrawal et al. [2] in the early 1990s for discovering regularities between products in a large database of customer transactions recorded by point-of-sale systems in supermarkets. In later years, it expanded to web contexts, such as mining path traversal patterns [3] and usage patterns [4] to facilitate organization of web pages.

This chapter chooses Apriori as the main focus of the discussion of association rules. Apriori [5] is one of the earliest and the most fundamental algorithms for generating association rules. It pioneered the use of support for pruning the itemsets and controlling the exponential growth of candidate itemsets. Shorter candidate itemsets, which are known to be frequent itemsets, are combined and pruned to generate longer frequent itemsets. This approach eliminates the need for all possible itemsets to be enumerated within the algorithm, since the number of all possible itemsets can become exponentially large.

One major component of Apriori is support. Given an itemset L, the support [2] of L is the percentage of transactions that contain L. For example, if 80% of all transactions contain itemset {bread}, then the support of {bread} is 0.8. Similarly, if 60% of all transactions contain itemset {bread, butter}, then the support of {bread, butter} is 0.6. A frequent itemset has items that appear together often enough. The term "often enough" is formally
defined with a minimum support criterion. If the minimum support is set at 0.5, any itemset can be considered a frequent itemset if at least 50% of the transactions contain this itemset. In other words, the support of a frequent itemset should be greater than or equal to the minimum support. For the previous example, both {bread} and {bread, butter} are considered frequent itemsets at this minimum support.
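As a rough preview of how support can be computed in R, the sketch below uses the arules package (a common choice for association rule mining, although it has not been introduced at this point in the text). The five toy transactions are invented so that {bread} has support 0.8 and {bread, butter} has support 0.6, matching the example above.

# Minimal sketch with the arules package and five made-up transactions
library(arules)

baskets <- list(c("bread", "butter", "milk"),
                c("bread", "butter"),
                c("bread", "butter"),
                c("bread"),
                c("milk"))
trans <- as(baskets, "transactions")

itemFrequency(trans)[c("bread", "butter")]   # supports: bread 0.8, butter 0.6

# Frequent itemsets at a minimum support of 0.5
freq <- apriori(trans, parameter = list(support = 0.5, target = "frequent itemsets"))
inspect(freq)   # includes {bread} with support 0.8 and {bread, butter} with support 0.6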