Statistics Lab
Rodolfo Metulini
IMT Institute for Advanced Studies, Lucca, Italy

Lesson 2 - Application to the Central Limit Theorem - 14.01.2014
Introduction

Modern statistics was built and developed around the normal
distribution.
A common saying in academia is that, if the empirical distribution is
normal (or approximately normal), everything works well. Whether this
holds depends mainly on the sample size.
That said, it is important to understand under which circumstances we
can state that the distribution is normal.
Two founding statistical theorems are helpful here: the Central Limit
Theorem and the Law of Large Numbers.
The Law of Large Numbers (LLN)

Suppose we have a random variable X with expected value
E(X) = µ.
We extract n observations from X (say x = {x1, x2, ..., xn}).
If we define X̂n = Σi xi / n = (x1 + x2 + ... + xn)/n, the LLN states
that, for n −→ ∞, X̂n −→ µ.
The Central Limit Theorem (CLT)
Suppose we have a random variable X with expected value
E(X) = µ and variance V(X) = σ².
We extract n observations from X (say x = {x1, x2, ..., xn}).
Let's define X̂n = Σi xi / n = (x1 + x2 + ... + xn)/n.
X̂n is distributed with expected value µ and variance σ²/n.
For n −→ ∞ (in practice, n > 30),
X̂n ∼ N(µ, σ²/n), whatever the distribution of X may be.
N.B. If X is normally distributed, then X̂n ∼ N(µ, σ²/n) even if
n < 30.
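As a quick justification of the stated mean and variance, a one-line derivation (a sketch in LaTeX notation, assuming the n draws are independent and identically distributed):

E[\hat{X}_n] = \frac{1}{n} \sum_{i=1}^{n} E[x_i] = \frac{n\mu}{n} = \mu,
\qquad
Var(\hat{X}_n) = \frac{1}{n^2} \sum_{i=1}^{n} Var(x_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.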
CLT: Empiricals

To better understand the CLT, it is recommended to examine the
theorem empirically and step by step, introducing some new commands
of the R programming language along the way.
In the first part, we will show how to draw and visualize a sample
of random numbers from a distribution.
Then, we will examine the mean and standard deviation of the
sample, and finally the distribution of the sample means.
Drawing random numbers - 1
We already introduced the use of the letters d, p and q in relation
to the various distributions (e.g. normal, uniform, exponential). A
reminder of their use follows:
d is for density: it is used to find values of the probability
density function.
p is for probability: it is used to find the probability that the
random variable lies to the left of a given number.
q is for quantile: it is used to find the quantiles of a given
distribution.
There is a fourth letter, namely r, used to draw random numbers
from a distribution. For example, runif and rexp would be used to
draw random numbers from the uniform and exponential
distributions, respectively.
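As a quick illustration of the four prefixes for the normal, uniform and exponential distributions (a minimal sketch; the specific arguments are chosen only for the example):

dnorm(100, mean = 100, sd = 10)    # density of N(100, 10^2) at x = 100
pnorm(110, mean = 100, sd = 10)    # P(X <= 110), about 0.84
qnorm(0.975, mean = 100, sd = 10)  # 97.5% quantile, about 119.6
runif(3, min = 0, max = 1)         # three uniform random numbers on (0, 1)
rexp(3, rate = 1)                  # three exponential random numbers with rate 1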
Drawing random numbers - 2
Let's use the rnorm command to draw 500 numbers at random from
a normal distribution having mean 100 and standard deviation (sd)
10.
> x = rnorm(500, mean = 100, sd = 10)
The result, typing x in the R console, is a list of 500 numbers
extracted at random from a normal distribution with mean 100 and
sd 10.
When you examine the numbers stored in the variable x, there is
a sense that you are pulling random numbers that are clumped
about a mean of 100. However, a histogram of this selection
provides a different picture of the data stored.
> hist(x, prob = TRUE)
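If you want to obtain the same 500 numbers every time you run the code (useful for reproducing the figures), you can set the random seed first; a minimal sketch, where the seed value 123 is arbitrary:

set.seed(123)                          # arbitrary seed, for reproducibility
x <- rnorm(500, mean = 100, sd = 10)   # 500 draws from N(100, 10^2)
mean(x); sd(x)                         # should be close to 100 and 10
hist(x, prob = TRUE)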
Drawing random numbers - Comments

Several comments are in order regarding the histogram in the
figure.
1. The histogram is approximately normal in shape.
2. The balance point of the histogram appears to be located
near 100, suggesting that the random numbers were drawn
from a distribution having mean 100.
3. Almost all of the values are within 3 increments of 10 from
the mean, suggesting that random numbers were drawn from
a normal distribution having standard deviation 10.
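These three visual impressions can also be checked numerically; a short sketch, assuming x still holds the 500 draws from above:

mean(x)                       # balance point, close to 100
sd(x)                         # spread, close to 10
mean(abs(x - 100) <= 3 * 10)  # fraction within 3 sd of the mean, close to 0.997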
Drawing random numbers - a new drawing
Let's try the experiment again, drawing a new set of 500 random
numbers from the normal distribution having mean 100 and
standard deviation 10:
> x = rnorm(500, mean = 100, sd = 10)
> hist(x, prob = TRUE, ylim = c(0, 0.04))
Take a look at the histogram... It is different from the first one;
however, it shares some common traits: (1) it appears normal in
shape; (2) it appears to be balanced around 100; (3) all values
appear to occur within 3 increments of 10 of the mean.
This is strong evidence that the random numbers have been
drawn from a normal distribution having mean 100 and sd 10. We
can support this claim by superimposing a normal density
curve:
> curve(dnorm(x, mean = 100, sd = 10), 70, 130, add = TRUE, lwd = 2, col = "red")
The curve command
The curve command is new. Some comments on its use
follow (a small example appears after the list):
1. In its simplest form, the syntax curve(f(x), from =, to =)
draws the function defined by f(x) on the interval (from, to).
Our function is dnorm(x, mean = 100, sd = 10). The curve
command sketches this function of x on the interval
(from, to).
2. The notation from = and to = may be omitted if the
arguments are passed to the curve command in the proper order:
function first, value of from second, value of to third. That is
what we have done.
3. If the argument add is set to TRUE, then the curve is added
to the existing figure. If the argument is omitted (or FALSE),
then a new plot is drawn, erasing the previous graph.
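For instance, the same density could be drawn on its own, with the named arguments written out explicitly (a sketch; the interval (70, 130) is simply the one used above):

curve(dnorm(x, mean = 100, sd = 10), from = 70, to = 130,
      lwd = 2, col = "red")  # add is omitted, so a new plot is drawn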
The distribution of X̂n (sample mean)
In our previous example we drew 500 random numbers from a
normal distribution with mean 100 and standard deviation 10. This
leads to ONE sample of n = 500. Now the question is: what is
the mean of our sample?
> mean(x)
[1] 100.14132
If we take another sample of 500 random numbers from the SAME
distribution, we get a new sample with a different mean.
> x = rnorm(500, mean = 100, sd = 10)
> mean(x)
[1] 100.07884
What happens if we draw a sample several times?
Producing a vector of sample means
We will repeatedly sample from the normal distribution. Each of
the 500 samples will select 5 random numbers (instead of 500)
from the normal distribution having mean 100 and sd 10. We will then
find the mean of each of those samples.
We begin by declaring the mean and the standard deviation. Then,
we declare the sample size.
> mu = 100; sigma = 10
> n = 5
We need some place to store the means of the samples. We initialize
a vector xbar to initially contain 500 zeros.
> xbar = rep(0, 500)
Producing a vector of sample means - the for loop
It is easy to draw a sample of size n = 5 from the normal
distribution having mean µ = 100 and standard deviation σ = 10.
We simply issue the command
rnorm(n, mean = mu, sd = sigma).
To find the mean of this result, we simply wrap it with mean():
mean(rnorm(n, mean = mu, sd = sigma)).
The final step is to store this result in the vector xbar. Then we
must repeat this same process an additional 499 times. This
requires the use of a for loop.
> for (i in 1:500) { xbar[i] = mean(rnorm(n, mean = mu, sd = sigma)) }
The for loop
The i in for (i in 1:500) is called the index of the for loop.
The index i is first set equal to 1, then the body of the for
loop is executed. On the next iteration, i is set equal to 2 and
the body of the loop is executed again. The loop continues in
this manner, incrementing by 1, finally setting the index i to
500. After the last iteration, the for loop terminates.
In the body of the for loop, we have
xbar[i] = mean(rnorm(n, mean = mu, sd = sigma)). This draws a
sample of size 5 from the normal distribution, calculates the
mean of the sample, and stores the result in xbar[i].
When the for loop completes its 500 iterations, the vector xbar
contains the means of 500 samples of size 5 drawn from the
normal distribution having µ = 100 and σ = 10.
> hist(xbar, prob = TRUE, breaks = 12, xlim = c(70, 130), ylim = c(0, 0.1))
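The same vector of sample means can be produced more compactly with replicate(), which evaluates an expression a given number of times and collects the results; a sketch equivalent to the loop above, assuming mu, sigma and n are already defined:

xbar <- replicate(500, mean(rnorm(n, mean = mu, sd = sigma)))  # 500 sample means of size n
hist(xbar, prob = TRUE, breaks = 12, xlim = c(70, 130), ylim = c(0, 0.1))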
Distribution of X̂n - observations
1. The previous histograms described the shape of 500
randomly selected numbers; here, the histogram
describes the distribution of 500 different sample means, each
of which was found by selecting n = 5 random numbers from the
normal distribution.
2. The distribution of xbar appears normal in shape. This is so
even though the sample size is relatively small (n = 5).
3. It appears that the balance point occurs near 100. This can
be checked with the following command:
> mean(xbar)
That is the mean of the sample means, which is almost equal to
the mean of the drawn random numbers.
4. The distribution of the sample means appears to be narrower
than the distribution of the random numbers themselves (a numerical
check is sketched below).
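The CLT quantifies point 4: the standard deviation of the sample means should be close to σ/√n. A quick check, assuming xbar holds the 500 sample means of size n = 5 and sigma = 10:

sd(xbar)         # observed spread of the sample means
sigma / sqrt(n)  # theoretical value, 10 / sqrt(5), about 4.47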
Increasing the sample size
Let's repeat the last experiment, but this time let's draw samples
of size n = 10 from the same distribution (µ = 100, σ = 10):
> mu = 100; sigma = 10
> n = 10
> xbar = rep(0, 500)
> for (i in 1:500) { xbar[i] = mean(rnorm(n, mean = mu, sd = sigma)) }
> hist(xbar, prob = TRUE, breaks = 12, xlim = c(70, 130), ylim = c(0, 0.1))
The histogram produced is even narrower than the one obtained with
n = 5.
Key Ideas

1. When we select samples from a normal distribution, the
distribution of the sample means is also normal in shape.
2. The mean of the distribution of the sample means appears to be
the same as the mean of the random numbers
(the parent population): compare the balance points of the histograms.
3. By increasing the size of our samples, the histograms
become narrower. In fact, we would expect a more accurate
estimate of the mean of the parent population if we take the
mean from a larger sample (see the sketch after this list).
4. Imagine drawing sample means from samples of size n = ∞. The
histogram would be entirely concentrated (P = 1) at Xbar = µ.
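Point 3 can be made concrete by repeating the simulation for several sample sizes and comparing the spread of the resulting sample means; a sketch, where the chosen sample sizes are just examples:

mu <- 100; sigma <- 10
for (n in c(5, 10, 30, 100)) {
  xbar <- replicate(500, mean(rnorm(n, mean = mu, sd = sigma)))
  # the observed spread shrinks roughly like sigma / sqrt(n)
  cat("n =", n, " sd(xbar) =", round(sd(xbar), 2),
      " sigma/sqrt(n) =", round(sigma / sqrt(n), 2), "\n")
}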
Summarise

We finish by restating the CLT:
1. If you draw samples from a normal distribution, then the
distribution of the sample means is also normal.
2. The mean of the distribution of the sample means is identical
to the mean of the parent population.
3. The larger the sample size that is drawn, the narrower the
spread of the distribution of the sample means.
Homework
Experiment 1: Draw the Xbar histogram for n = 1000. What is
the shape of the histogram?
Experiment 2: Repeat the full experiment, drawing random
numbers and sample means from (1) a uniform and (2) a
Poisson distribution. Is the histogram of Xbar normal in shape for
n = 5 and for n = 30?
Experiment 3: Repeat the full experiment using real data instead
of random numbers. (HINT: select samples of size n = 5
from the real data, without using rnorm.)
Recommended: try to evaluate the agreement of the sample mean
histogram with the normal distribution by means of the QQ-plot and
the Shapiro-Wilk test (a sketch of the relevant commands follows).
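As a starting point for the recommended check, a sketch that assumes xbar holds a vector of sample means (for instance from Experiment 2, built with runif or rpois instead of rnorm):

qqnorm(xbar)        # sample quantiles of xbar against normal quantiles
qqline(xbar)        # points close to the line suggest normality
shapiro.test(xbar)  # a large p-value is consistent with normality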
Application to the Law of Large Numbers

Experiment: toss a coin 100 times.
This experiment amounts to repeating 100 times a random draw from a
Bernoulli distribution with parameter p = 0.5.
We expect to get heads (value = 1) 50 times and tails
(value = 0) 50 times, if the coin is fair.
But, in practice, this does not happen: repeating the experiment, we
obtain a distribution centered at 50, but spread out.
Let's define X̂n as the mean number of heads
across n repetitions of the experiment. For n −→ ∞, X̂n −→ 50.
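A small simulation of this experiment (a sketch; the number of repetitions, 1000, is arbitrary): each repetition tosses a fair coin 100 times and records the number of heads, and the running mean of those counts settles around 50.

heads <- replicate(1000, sum(rbinom(100, size = 1, prob = 0.5)))  # heads in each run of 100 tosses
hist(heads, prob = TRUE)                  # centered at 50, but spread out
running_mean <- cumsum(heads) / seq_along(heads)
plot(running_mean, type = "l", xlab = "number of experiments n",
     ylab = "mean number of heads")
abline(h = 50, col = "red")               # the LLN limit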
