1
Tutorial Content
Overview of Python Libraries for Data Scientists
Reading Data; Selecting and Filtering the Data; Data manipulation,
sorting, grouping, rearranging
Plotting the data
Descriptive statistics
Inferential statistics
2
Python Libraries for Data Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn
Visualization libraries
• matplotlib
• Seaborn
and many more …
All these libraries are
installed on the SCC
3
Python Libraries for Data Science
NumPy:
 introduces objects for multidimensional arrays and matrices, as well as
functions that make it easy to perform advanced mathematical and statistical
operations on those objects
 provides vectorization of mathematical operations on arrays and matrices
which significantly improves the performance
 many other Python libraries are built on NumPy
Link: http://www.numpy.org/
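As a small illustration (a sketch, not part of the original slides; the array values are invented), vectorized NumPy operations replace explicit Python loops:
import numpy as np
m = np.array([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 matrix
v = np.array([10.0, 20.0])               # a vector
print(m * 2)           # element-wise multiplication, no Python loop
print(m @ v)           # matrix-vector product -> [ 50. 110.]
print(m.mean(axis=0))  # column means -> [2. 3.]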
4
Python Libraries for Data Science
SciPy:
 collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics and more
 part of SciPy Stack
 built on NumPy
Link: https://www.scipy.org/scipylib/
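For example (a sketch, not from the original slides; the two samples are invented), scipy.stats can run a two-sample t-test:
from scipy import stats
a = [5.1, 4.9, 6.2, 5.7, 5.5]
b = [4.2, 4.8, 4.4, 4.9, 4.5]
t_stat, p_value = stats.ttest_ind(a, b)   # two-sample t-test
print(t_stat, p_value)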
5
Python Libraries for Data Science
Pandas:
 adds data structures and tools designed to work with table-like data (similar
to Series and Data Frames in R)
 provides tools for data manipulation: reshaping, merging, sorting, slicing,
aggregation etc.
 allows handling missing data
Link: http://pandas.pydata.org/
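A minimal sketch (values invented) of building and manipulating a small data frame by hand:
import pandas as pd
df_demo = pd.DataFrame({'name': ['Ann', 'Bob', 'Cid'],
                        'salary': [120000, 95000, 87000]})
print(df_demo.sort_values(by='salary'))  # sort rows by the salary column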
6
Link: http://scikit-learn.org/
Python Libraries for Data Science
SciKit-Learn:
 provides machine learning algorithms: classification, regression, clustering,
model validation etc.
 built on NumPy, SciPy and matplotlib
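A minimal sketch of the scikit-learn workflow (the data here is synthetic, invented for illustration):
import numpy as np
from sklearn.linear_model import LinearRegression
X = np.array([[1], [2], [3], [4]])    # one feature, four observations
y = np.array([2.1, 3.9, 6.2, 7.8])    # target values
model = LinearRegression().fit(X, y)  # fit the regression model
print(model.predict([[5]]))           # prediction for a new observation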
7
matplotlib:
 Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats
 a set of functionalities similar to those of MATLAB
 line plots, scatter plots, barcharts, histograms, pie charts etc.
 relatively low-level; some effort needed to create advanced visualizations
Link: https://matplotlib.org/
Python Libraries for Data Science
8
Seaborn:
 based on matplotlib
 provides high level interface for drawing attractive statistical graphics
 Similar (in style) to the popular ggplot2 library in R
Link: https://seaborn.pydata.org/
Python Libraries for Data Science
9
Start Jupyter notebook
# On the Shared Computing Cluster
[scc1 ~] jupyter notebook
10
In [ ]:
Loading Python Libraries
#Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns
Press Shift+Enter to execute a Jupyter cell
11
In [ ]:
Reading data using pandas
#Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
There are a number of pandas commands to read other data formats:
pd.read_excel('myfile.xlsx',sheet_name='Sheet1', index_col=None,
na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5','df')
Note: The above commands have many optional arguments to fine-tune the data import process.
12
In [3]:
Exploring data frames
#List first 5 records
df.head()
Out[3]:
13
Hands-on exercises
Try to read the first 10, 20, 50 records;
Can you guess how to view the last few records? Hint: look at the tail() method
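A possible solution sketch (assuming the df loaded above):
df.head(10)   # first 10 records
df.head(50)   # first 50 records
df.tail()     # last 5 records (default)
df.tail(3)    # last 3 records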
14
Data Frame data types
Pandas Type / Native Python Type / Description
object / string: The most general dtype. Assigned to a column if the column has
mixed types (numbers and strings).
int64 / int: Integer numbers. 64 refers to the number of bits allocated to hold
the value.
float64 / float: Numbers with decimals. If a column contains both numbers and
NaNs (see below), pandas defaults to float64, in case a missing value has a
decimal part.
datetime64, timedelta[ns] / no native equivalent (but see the datetime module in
Python's standard library): Values meant to hold time data. Look into these for
time series experiments.
15
In [4]:
Data Frame data types
#Check a particular column type
df['salary'].dtype
Out[4]: dtype('int64')
In [5]: #Check types for all the columns
df.dtypes
Out[5]: rank          object
        discipline    object
        phd            int64
        service        int64
        sex           object
        salary         int64
        dtype: object
16
Data Frames attributes
Python objects have attributes and methods.
df.attribute description
dtypes list the types of the columns
columns list the column names
axes list the row labels and column names
ndim number of dimensions
size number of elements
shape return a tuple representing the dimensionality
values numpy representation of the data
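For example, applied to the Salaries data frame loaded earlier (a sketch):
df.shape     # (number of rows, number of columns)
df.ndim      # 2 for a data frame
df.size      # total number of elements
df.columns   # column names
df.dtypes    # column types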
17
Hands-on exercises
Find how many records this data frame has;
How many elements are there?
What are the column names?
What types of columns do we have in this data frame?
18
Data Frames methods
df.method() description
head( [n] ), tail( [n] ) first/last n rows
describe() generate descriptive statistics (for numeric columns only)
max(), min() return max/min values for all numeric columns
mean(), median() return mean/median values for all numeric columns
std() standard deviation
sample([n]) returns a random sample of the data frame
dropna() drop all the records with missing values
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)
19
Hands-on exercises
Give the summary for the numeric columns in the dataset
Calculate standard deviation for all numeric columns;
What are the mean values of the first 50 records in the dataset? Hint: use the
head() method to subset the first 50 records and then calculate the mean
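A possible solution sketch (numeric_only=True is needed in recent pandas versions to skip the string columns):
df.describe()                        # summary of the numeric columns
df.std(numeric_only=True)            # standard deviation of numeric columns
df.head(50).mean(numeric_only=True)  # mean values of the first 50 records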
20
Selecting a column in a Data Frame
Method 1: Subset the data frame using the column name:
df['sex']
Method 2: Use the column name as an attribute:
df.sex
Note: there is an attribute rank for pandas data frames, so to select a column named
"rank" we should use method 1.
21
Hands-on exercises
Calculate the basic statistics for the salary column;
Find how many values the salary column has (use the count() method);
Calculate the average salary;
22
Data Frames groupby method
Using "group by" method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to dplyr() function in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])
In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()
23
Data Frames groupby method
Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
Note: If single brackets are used to specify the column (e.g. salary), then the output is a pandas Series object.
When double brackets are used, the output is a DataFrame.
24
Data Frames groupby method
groupby performance notes:
- no grouping/splitting occurs until it's needed. Creating the groupby object
only verifies that you have passed a valid mapping
- by default the group keys are sorted during the groupby operation. You may
want to pass sort=False for potential speedup:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby(['rank'], sort=False)[['salary']].mean()
25
Data Frame: filtering
To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example, if we want to subset the rows in which the salary
value is greater than $120K:
In [ ]: #Subset the rows with salary greater than $120K:
df_sub = df[ df['salary'] > 120000 ]
In [ ]: #Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
Any Boolean operator can be used to subset the data:
> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;
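Conditions can also be combined with & (and) and | (or); note that each condition must be wrapped in parentheses. A sketch, not from the original slides:
#Select female professors with salary above $120K:
df_fs = df[(df['sex'] == 'Female') & (df['salary'] > 120000)]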
26
Data Frames: Slicing
There are a number of ways to subset the Data Frame:
• one or more columns
• one or more rows
• a subset of rows and columns
Rows and columns can be selected by their position or label
27
Data Frames: Slicing
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):
In [ ]: #Select column salary:
df['salary']
When we need to select more than one column and/or want the output to be a
DataFrame, we should use double brackets:
In [ ]: #Select columns rank and salary:
df[['rank','salary']]
28
Data Frames: Selecting rows
If we need to select a range of rows, we can specify the range using ":"
In [ ]: #Select rows by their position:
df[10:20]
Notice that the first row has position 0, and the last value in the range is omitted:
so for the 0:10 range, the first 10 rows are returned, with positions starting at 0
and ending at 9
29
Data Frames: method loc
If we need to select a range of rows using their labels, we can use the loc method:
In [ ]: #Select rows by their labels:
df_sub.loc[10:20, ['rank', 'sex', 'salary']]
30
Data Frames: method iloc
If we need to select a range of rows and/or columns using their positions, we can
use the iloc method:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20,[0, 3, 4, 5]]
31
Data Frames: method iloc (summary)
df.iloc[0] # First row of a data frame
df.iloc[i] #(i+1)th row
df.iloc[-1] # Last row
df.iloc[:, 0] # First column
df.iloc[:, -1] # Last column
df.iloc[0:7] #First 7 rows
df.iloc[:, 0:2] #First 2 columns
df.iloc[1:3, 0:2] #Second through third rows and first 2 columns
df.iloc[[0,5], [1,3]] #1st and 6th rows and 2nd and 4th columns
32
Data Frames: Sorting
We can sort the data by a value in a column. By default the sorting occurs in
ascending order and a new data frame is returned.
In [ ]: # Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values( by ='service')
df_sorted.head()
Out[ ]:
33
Data Frames: Sorting
We can sort the data using 2 or more columns:
In [ ]: df_sorted = df.sort_values( by =['service', 'salary'], ascending = [True, False])
df_sorted.head(10)
Out[ ]:
34
Missing Values
Missing values are marked as NaN
In [ ]: # Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")
In [ ]: # Select the rows that have at least one missing value
flights[flights.isnull().any(axis=1)].head()
Out[ ]:
35
Missing Values
There are a number of methods to deal with missing values in the data frame:
df.method() description
dropna() Drop missing observations
dropna(how='all') Drop observations where all cells are NA
dropna(axis=1, how='all') Drop a column if all of its values are missing
dropna(thresh = 5) Drop rows that contain fewer than 5 non-missing values
fillna(0) Replace missing values with zeros
isnull() returns True if the value is missing
notnull() Returns True for non-missing values
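For example, applied to the flights data frame read above (a sketch):
flights.dropna()           # drop rows with any missing value
flights.dropna(thresh=5)   # keep rows with at least 5 non-missing values
flights.fillna(0)          # replace missing values with zeros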
36
Missing Values
• When summing the data, missing values will be treated as zero
• If all values are missing, the sum will be equal to NaN
• cumsum() and cumprod() methods ignore missing values but preserve them in
the resulting arrays
• Missing values in GroupBy method are excluded (just like in R)
• Many descriptive statistics methods have a skipna option to control whether missing
data should be excluded. It is set to True by default (unlike in R)
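A small sketch of this behavior on a toy Series (values invented):
import numpy as np
import pandas as pd
s = pd.Series([1.0, np.nan, 3.0])
print(s.sum())               # 4.0 - the NaN is skipped
print(s.cumsum())            # [1.0, NaN, 4.0] - NaN preserved in the result
print(s.mean(skipna=False))  # nan - missing data included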
37
Missing Values
• How do we handle missing data in our dataset?
• There are many ways to handle missing data. Before looking at the methods, we start
from scratch by importing the libraries and reading the dataset.
• Dataset: https://github.com/JangirSumit/data_science/blob/master/18th%20May
%20Assignments/case%20study%201/SalaryGender.csv
• At the beginning of every script, we need to import the libraries:
import pandas as pd
import numpy as np

dataset = pd.read_csv("SalaryGender.csv")
print(dataset.head())
38
Missing Values
• How do we handle missing data in our dataset?
Checking the dimensions of the dataset:
dataset.shape
Checking for missing values:
print(dataset.isnull().sum())
39
Missing Values
• How do we handle missing data in our dataset?
Drop it if it is not in use (mostly rows)
Excluding observations with missing data is the next easiest approach.
However, you run the risk of losing some critical data points as a result.
You can do this by using the pandas dropna() function to remove the rows (or columns) with
missing values.
Rather than eliminating all missing values from all columns, use your domain knowledge or seek the help
of a domain expert to selectively remove the rows/columns with missing values that aren't relevant to the
machine learning problem.
Pros: after removing the missing data, the model becomes more robust.
Cons: loss of data, which may be important too. If there is a lot of missing data, the model
will not perform well.
#Delete rows with missing values
dataset.dropna(inplace=True)
print(dataset.isnull().sum())
40
Missing Values
• How do we handle missing data in our dataset?
Imputation by median:
Another imputation technique, which addresses the outlier problem of mean imputation, is to use the
median value. The median ignores the influence of outliers and fills in the middle value of the sorted
column.
Cons: works only with numerical data and does not preserve the covariance between the independent variables.
#Median imputation for missing values
dataset["Age"] = dataset["Age"].replace(np.nan, dataset["Age"].median())
print(dataset["Age"][:10])
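Equivalently (a sketch, assuming the same SalaryGender dataset), pandas' own fillna() can be used:
dataset["Age"] = dataset["Age"].fillna(dataset["Age"].median())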
41
Missing Values
• How do we handle missing data in our dataset?
Imputation by the most frequent value (mode):
This method can be applied to categorical variables with a finite set of values. To impute, you use the
most common value. It works when the available alternatives are nominal category values such as
True/False or conditions such as normal/abnormal, and also for ordinal categorical factors such
as educational attainment: pre-primary, primary, secondary, high school, graduation, and so on.
Unfortunately, because this method ignores feature correlations, there is a danger of biasing the data.
If the category values aren't balanced, you are more likely to introduce bias into the data
(the class imbalance problem).
Pros: works with all formats of data.
Cons: covariance between independent features cannot be preserved.
#Mode imputation for missing values
import statistics
dataset["Age"] = dataset["Age"].replace(np.nan, statistics.mode(dataset["Age"]))
print(dataset["Age"][:10])
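Note that statistics.mode() can behave unexpectedly when NaN values are present; a pandas-native alternative (a sketch) uses Series.mode(), which skips NaN by default:
dataset["Age"] = dataset["Age"].fillna(dataset["Age"].mode()[0])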
42
Aggregation Functions in Pandas
Aggregation - computing a summary statistic for each group, e.g.
• compute group sums or means
• compute group sizes/counts
Common aggregation functions:
min, max
count, sum, prod
mean, median, mode, mad
std, var
43
Aggregation Functions in Pandas
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])
Out[ ]:
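agg() also accepts a dictionary mapping columns to statistics, so different columns can receive different summaries (a sketch):
flights.agg({'dep_delay': ['min', 'max'], 'arr_delay': 'mean'})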
44
Basic Descriptive Statistics
df.method() description
describe Basic statistics (count, mean, std, min, quantiles, max)
min, max Minimum and maximum values
mean, median, mode Arithmetic average, median and mode
var, std Variance and standard deviation
sem Standard error of mean
skew Sample skewness
kurt kurtosis
45
Handling Duplicate Values and Outliers in a dataset
While working on a real-world dataset, we might come across very messy data that involves a lot of duplicate values.
Such records do not add any value or information to a model and would rather slow down the processing.
So it is better to remove duplicates before feeding the data to the model. The following method can be used to check for
duplicate values in pandas.
To check for duplicates, we use the duplicated() method in pandas.
If df is the DataFrame, then df.duplicated() will check whether an entire row is repeated anywhere in the dataframe.
46
Handling Duplicate Values and Outliers in a dataset
# to check for duplicate values in a particular column
df.duplicated('column1')
# to check for duplicate values in some specific columns
df.duplicated(['column1', 'column2', 'column3'])
# to count the number of duplicate values
df.duplicated().sum()
Once we have identified duplicates in the dataset, it is time to remove them. To delete duplicates, we use the
drop_duplicates() method in pandas.
# Dropping duplicates
df.drop_duplicates()
# to delete duplicates from a particular column
df.drop_duplicates('column1')
# to delete duplicates from some specific columns
df.drop_duplicates(['column1', 'column2', 'column3'])
47
Handling Duplicate Values and Outliers in a dataset
An argument "keep" can also be used with drop_duplicates: keep='first' keeps the first record and deletes the other
duplicates, keep='last' keeps the last record and deletes the rest, and keep=False deletes all the duplicate records.
Note: do not forget to pass the argument inplace=True to save the changes made to the dataframe.
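For example (the column name here is hypothetical):
# Keep the last occurrence of each duplicate and save the change in place
df.drop_duplicates('column1', keep='last', inplace=True)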
Handling Outliers
Outliers are values in a dataset that are atypical and different from the majority of the datapoints, but they may or may not
be false. Outliers may occur due to the natural variability of the data or due to machine or human errors.
Detecting and treating outliers is crucial in any machine learning project. However, it is not always required to
delete or remove outliers; it depends on the problem statement and what we are trying to achieve with the model.
For example, in problems related to anomaly detection, fraud detection, etc., outliers play a major role: it is precisely
the outliers that need to be tracked in such scenarios. The type of algorithm used also decides to what extent
outliers affect the model. Weight-based algorithms like linear regression, logistic regression, AdaBoost, and
other deep learning techniques are strongly affected by outliers, whereas tree-based algorithms like decision trees and
random forests are not affected by outliers as much.
48
Handling Duplicate Values and Outliers in a dataset
Detecting outliers
Outliers can be detected using the following methods:
1. Boxplots: creating a boxplot is a smart way to detect whether the dataset has outliers.
[Figure: a boxplot, with data points beyond the whiskers marked as outliers]
Data points lying outside the whiskers are outliers. The lower whisker is at
Q1 - 1.5*IQR and the upper whisker is at Q3 + 1.5*IQR, where Q1, Q3,
and IQR are the 1st quartile (25th percentile), the 3rd quartile (75th percentile),
and the interquartile range (Q3 - Q1), respectively.
This is also known as the IQR proximity rule.
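A sketch of the IQR proximity rule applied to the salary column of the Salaries data frame:
q1 = df['salary'].quantile(0.25)
q3 = df['salary'].quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = df[(df['salary'] < lower) | (df['salary'] > upper)]
df.boxplot(column='salary')   # visual check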
49
Handling Duplicate Values and Outliers in a dataset
Detecting outliers
2. Using Z-scores: according to the 68-95-99.7 rule, for normally distributed data 99.7% of the data lies
within 3 standard deviations of the mean.
So, if a point lies more than 3 standard deviations from the mean, it is considered an outlier.
For this we can calculate the z-scores of the data points and use a threshold of 3:
if the z-score of any point is greater than 3 or less than -3, it is an outlier.
Note that this rule is only valid for normal distributions.
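A sketch of the same rule in code (again assuming the Salaries data frame):
z = (df['salary'] - df['salary'].mean()) / df['salary'].std()
salary_outliers = df[z.abs() > 3]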
50
Graphics to explore the data
To show graphs within a Python notebook, include the inline directive:
In [ ]: %matplotlib inline
The Seaborn package is built on matplotlib but provides a high-level
interface for drawing attractive statistical graphics, similar to the ggplot2
library in R. It specifically targets statistical data visualization
51
Graphics
sns.method() description
distplot histogram
barplot estimate of central tendency for a numeric variable
violinplot similar to boxplot, also shows the probability density of the data
jointplot scatterplot with marginal distributions
regplot regression plot
pairplot pairwise relationships in a dataset
boxplot boxplot
swarmplot categorical scatterplot
factorplot general categorical plot
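For example, a seaborn boxplot of salary per professor rank (a sketch using the Salaries data frame loaded earlier):
import seaborn as sns
sns.boxplot(x='rank', y='salary', data=df)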
52
Plotting with pandas:
Line Plots
Bar Plots
Histograms and Density Plots
Scatter or Point Plots
53
Plotting with pandas
Pandas uses the plot() method to create diagrams.
We can use pyplot, a submodule of the matplotlib library, to show the diagram on the
screen.
Example:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot()
plt.show()
The examples on this page use a CSV file called 'data.csv'.
54
Plotting with pandas
Scatter Plot
Specify that you want a scatter plot with the kind argument:
kind = 'scatter'
A scatter plot needs an x- and a y-axis.
In the example below we will use "Duration" for the x-axis and "Calories" for the y-axis.
Include the x and y arguments like this:
x = 'Duration', y = 'Calories'
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot(kind = 'scatter', x = 'Duration', y = 'Calories')
plt.show()
55
Plotting with pandas
Histogram
Use the kind argument to specify that you want a histogram:
kind = 'hist'
A histogram needs only one column.
A histogram shows us the frequency of each interval, e.g. how many workouts lasted between 50 and 60
minutes?
In the example below we will use the "Duration" column to create the histogram:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df['Duration'].plot(kind='hist')
plt.show()
56
Plotting with pandas
Line Plots - show the relationship between the variables; here we represent the
relationship between the duration and calories
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot()
plt.show()
57
Plotting with pandas
#bar plot
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
df.plot(kind='bar')
plt.show()