General Chemistry
Lecture 6: Graphing
Henry R. Kang (1/2010)
Outline
• Logarithm and Exponent
 ◦ Definitions and rules
• Graphing
 ◦ Graphing rules
 ◦ Line drawing
• Linear Regression
 ◦ Derivation of the equations
 ◦ Goodness of data fitting
 ◦ Draw the least-square line
Logarithm and Exponent
Definition of Exponent
• An exponent expresses a quantity as a power of a base value.
 ◦ Y = B^X, where B is the base and X is the power.
 ◦ Any positive number greater than 1 can be used as the base.
 ◦ The power X can be a simple number or a more complex expression, such as X = a/4 or X = (a+b)/2.
 ◦ Commonly used bases are 10 and e = 2.7182818… (e is an irrational number).
 ◦ Computers use the binary system, whose base is 2.
Examples of Exponent
• Positive exponents
 ◦ B^1 = B; B^2 = B×B; B^3 = B×B×B
 ◦ B^n = B×B×···×B (B multiplied by itself n times)
 ◦ 2^1 = 2; 2^2 = 4; 2^3 = 8
 ◦ 8^1 = 8; 8^2 = 64; 8^3 = 512
 ◦ 10^1 = 10; 10^2 = 100; 10^3 = 1000; 10^n = 10···0 (a 1 followed by n zeros)
• Negative exponents
 ◦ B^–1 = 1/B; B^–2 = 1/(B×B); B^–3 = 1/(B×B×B)
 ◦ B^–n = 1/(B×B×···×B) (the inverse of B multiplied by itself n times)
 ◦ 2^–1 = 1/2; 2^–2 = 1/4; 2^–3 = 1/8
 ◦ 8^–1 = 1/8; 8^–2 = 1/64; 8^–3 = 1/512
 ◦ 10^–1 = 1/10 = 0.1; 10^–2 = 1/100 = 0.01; 10^–3 = 1/1000 = 0.001
 ◦ 10^–n = 1/10···0 = 0.0···01 (n–1 zeros between the decimal point and the 1)
• Zero exponent
 ◦ B^0 = 1; 2^0 = 1; e^0 = 1; 8^0 = 1; 10^0 = 1
Rules of Exponent
• Multiplication
 ◦ B^a × B^b = B^(a+b)
 ◦ 2^a × 2^b = 2^(a+b); 2^4 × 2^2 = 2^6 = 64
 ◦ 10^a × 10^b = 10^(a+b); 10^4 × 10^7 = 10^11
• Division
 ◦ B^a ÷ B^b = B^(a–b)
 ◦ 2^a ÷ 2^b = 2^(a–b); 2^4 ÷ 2^7 = 2^–3 = 1/8
 ◦ 10^a ÷ 10^b = 10^(a–b); 10^4 ÷ 10^7 = 10^–3 = 0.001
• Power of a power
 ◦ (B^a)^b = B^(a×b)
 ◦ (2^a)^b = 2^(a×b); (2^3)^2 = 2^6 = 64
 ◦ (10^a)^b = 10^(a×b); (10^4)^7 = 10^28
• Fractional exponent (root)
 ◦ B^(1/a) = ᵃ√B (the a-th root of B)
 ◦ 2^(1/a) = ᵃ√2; 2^(1/3) = ³√2
 ◦ 10^(1/a) = ᵃ√10; 10^(1/2) = √10
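The following short Python sketch (added for illustration, not part of the original slides) checks these rules numerically; the specific values mirror the examples above.

```python
# Minimal numerical check of the exponent rules on this slide.

# Multiplication: B^a * B^b = B^(a+b)
assert 2**4 * 2**2 == 2**6 == 64
assert 10**4 * 10**7 == 10**11

# Division: B^a / B^b = B^(a-b)
assert 2**4 / 2**7 == 2**-3 == 1/8
assert abs(10**4 / 10**7 - 0.001) < 1e-15

# Power of a power: (B^a)^b = B^(a*b)
assert (2**3)**2 == 2**6 == 64
assert (10**4)**7 == 10**28

# Fractional exponent: B^(1/a) is the a-th root of B
assert abs((2**(1/3))**3 - 2) < 1e-12    # cube root of 2
assert abs((10**(1/2))**2 - 10) < 1e-12  # square root of 10
```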
Definition of Logarithm
• The logarithm is the inverse of the exponent.
• If Y = B^X, then log_B(Y) = X.
 ◦ The two most popular bases are 10 and e.
• The common (or Briggsian) logarithm uses the base value 10.
 ◦ log10(Y) = X, usually written log Y = X.
 ◦ Some values of the common logarithm:
  - log(1) = 0; log(10) = 1; log(100) = 2; etc.
• The natural (or hyperbolic) logarithm uses the base value e = 2.7182818…
 ◦ loge(Y) = X, usually written ln Y = X.
• Common and natural logarithms can be inter-converted:
 ◦ ln Y = 2.30259 × log Y, or log Y = ln Y / 2.30259.
Rules of Logarithms
• The following rules apply to both common and natural logarithms:
• log(A×B) = log A + log B
 ◦ log(5×7) = log(5) + log(7) = 0.698970004 + 0.84509804 = 1.544068044
 ◦ log(35) = 1.544068044
• log(A/B) = log A – log B
 ◦ log(5/7) = log(5) – log(7) = 0.698970004 – 0.84509804 = –0.146128036
 ◦ log(5/7) = log(0.714285714) = –0.146128036
• log(A^n) = n log A
 ◦ log(7^3) = 3 log(7) = 2.53529412
 ◦ log(7^3) = log(343) = 2.53529412
• log(A^–n) = –n log A
• log(A^(1/n)) = (1/n) log A
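A similar Python sketch (added for illustration) checks the logarithm rules and the ln/log conversion with the standard math module.

```python
import math

# Minimal check of the logarithm rules and the ln/log conversion.
log5, log7 = math.log10(5), math.log10(7)

# Product rule: log(A*B) = log A + log B
assert math.isclose(math.log10(35), log5 + log7)

# Quotient rule: log(A/B) = log A - log B
assert math.isclose(math.log10(5/7), log5 - log7)

# Power rule: log(A^n) = n log A
assert math.isclose(math.log10(7**3), 3 * log7)

# Conversion: ln Y = ln(10) * log Y, where ln 10 = 2.30259...
y = 42.0
assert math.isclose(math.log(y), math.log(10) * math.log10(y))
```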
Graphing
Advantages of Graphing
• A graph or figure is a very powerful means of delivering information.
 ◦ The information is very compactly represented.
 ◦ The relationship between parameters is clearly shown.
 ◦ The general characteristics of the parameters can be derived.
• A picture is worth a thousand words.
Graphing Rules – Label & Size
• A graph should be neatly presented, easily readable, and properly titled.
 ◦ Each axis should be clearly labeled with
  - the name of the parameter and
  - its unit.
 ◦ Scales should be selected so that the actual graph covers at least 50% of the available space.
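For readers plotting by computer rather than on graph paper, a minimal matplotlib sketch that follows these labeling and sizing rules might look as follows; the pressure and temperature values are invented for the illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical pressure-vs-temperature data, invented for this illustration.
temperature_c = [0, 10, 20, 30, 40, 50, 60, 70, 80]
pressure_mmH2O = [150, 300, 460, 610, 760, 920, 1070, 1230, 1380]

fig, ax = plt.subplots()
ax.plot(temperature_c, pressure_mmH2O, "o")  # plot data points as symbols

# Label each axis with the parameter name and its unit; give the graph a title.
ax.set_xlabel("Temperature (°C)")
ax.set_ylabel("Pressure (mm H2O)")
ax.set_title("Pressure vs. Temperature")

# Choose axis limits so the data fill most of the plotting area.
ax.set_xlim(0, 80)
ax.set_ylim(0, 1400)

plt.show()
```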
Example – Correct Size
[Figure: Pressure (mm H2O), 0–1400, vs. Temperature (°C), 0–80; the plotted data fill most of the available graph area.]
Example – Incorrect Size
[Figure: Pressure (mm H2O) vs. Temperature (°C) on the same 0–1400 and 0–80 axes; an example of a graph that occupies too little of the available space.]
Graphing Rules - Axis
• An axis scale does not need to start at zero.
 ◦ This avoids clustering the data points in a narrow range.
 ◦ Exception: when the line must be extrapolated to the x-axis or y-axis intercept.
Data Cluster - Incorrect
[Figure: Volume (mL), 0–10, vs. Temperature (°C), 0–20; the data points are clustered in a narrow temperature range.]
Expand the Scale
[Figure: the same Volume (mL) vs. Temperature (°C) data replotted with the temperature axis expanded to 16–19 °C, so the points spread across the graph.]
Extrapolation
[Figure: Y vs. X with the x-axis extended from –10 to 20 so that the line can be extrapolated to its intercept.]
Graphing Rules - Divisions
• The scale on the graph paper should have divisions that are easily "divided by the eye."
 ◦ Use 1, 2, 5, or 10.
 ◦ Not 3, 6, 7, or 11.
[Figure: sample axis scales with divisions of 2, 5, and 10 (easy to read by eye), contrasted with divisions of 3 (0, 3, 6, 9, 12), which are not.]
Graphing Rules – Table of Data
• A table of data may be provided. (This rule is not always obeyed.)
 ◦ No individual coordinates of data points should appear on the graph itself.
Example
[Figure: Volume (mL), 0–10, vs. Temperature (°C), 16–19, plotted from the data table below; one point is annotated with its coordinates (17.0, 6.58).]

 T (°C)   V (mL)
 17.0     6.58
 16.8     5.76
 17.8     7.71
 18.5     8.84
 18.2     8.47
 17.5     7.04
 16.5     5.25
Graphing Rules - Resolution
• Ideally, one should be able to read all significant figures of a data point from its position on the graph paper.
 ◦ Often, the data carry more significant figures than the resolution of the graph paper can show.
Graphing Rules – Data Symbols
• Each data point should have a circle (or another shape, such as a square or triangle) drawn around it.
• If there is more than one data set, each set should use its own shape (or symbol) for its data points.
 ◦ Different colors may also be used for different data sets, if a color display or printer is available.
Example
[Figure: Y vs. X for three data sets (Series1, Series2, Series3), each plotted with its own symbol.]
Graphing Rules - Drawing
• The curve or straight line is drawn smoothly through the points rather than by connecting the dots (piece-wise linearization).
 ◦ The curve or line should represent the best average of the data.
  - Roughly equal numbers of points should lie above and below the curve or line.
  - The curve or line does not have to touch any of the data points.
 ◦ Use a clear straightedge for drawing lines.
 ◦ Use French curves for drawing curves.
Incorrect Way of Line Drawing
[Figure: Volume (mL) vs. Temperature (°C), 16–19 °C; the data points are connected dot-to-dot instead of with a single smooth line.]
Correct Way of Line Drawing
[Figure: the same Volume (mL) vs. Temperature (°C) data with one smooth straight line drawn through the points.]
Graphing Rules – Data Points
• Data points should not lie on the axis line.
 ◦ This rule is not always obeyed.
 ◦ Still plot the point if it happens to fall on the axis line.
Linear Regression
Linear Regression - Equation
• Linear regression is used to find the straight line that best fits the data.
• The general equation of a line is
 ◦ Y = mX + b, where
 ◦ X is the independent variable,
 ◦ Y is the dependent variable,
 ◦ m is the slope of the line, and
 ◦ b is the y-axis intercept.
Linear Regression - Deviation
• Let yn be the observed values
• and ŷn be the values calculated from the linear equation.
• The deviation is
 ◦ dn = ŷn – yn (n = 1, 2, 3, …, N)
  - N is the number of data points.
• The best result is obtained by minimizing the deviations (that is, the sum of the squared deviations):
 ◦ ∑dn² = (ŷ1 – y1)² + (ŷ2 – y2)² + … + (ŷN – yN)²
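As an illustrative Python sketch (not from the slides), the deviations and their sum of squares for a trial line can be computed as follows; the data and the trial m and b are taken from the worked example later in this lecture.

```python
# Sketch: sum of squared deviations for a trial line y = m*x + b.
x = [0.0, 10.0, 20.0]
y = [20.0, 22.0, 23.0]          # observed values y_n
m, b = 0.150, 20.2              # trial slope and intercept (from the worked example)

y_hat = [m * xn + b for xn in x]                    # calculated values y_hat_n
deviations = [yh - yn for yh, yn in zip(y_hat, y)]  # d_n = y_hat_n - y_n
sse = sum(d**2 for d in deviations)                 # sum of d_n^2

print(deviations)   # approximately [0.2, -0.3, 0.2] (up to floating-point rounding)
print(sse)          # approximately 0.17
```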
Linear Regression – Minimize Deviation
• Calculating ŷn from the linear equation, we have
 ◦ ŷn = m xn + b
• The deviation becomes
 ◦ dn = ŷn – yn = m xn + b – yn
• The square of the deviation is
 ◦ dn² = (m xn + b – yn)² = m²xn² + b² + yn² + 2mbxn – 2mxnyn – 2byn
• The overall deviation is
 ◦ ∑dn² = ∑(m xn + b – yn)²
• ∑dn² can be minimized by taking the partial derivatives with respect to m and b, respectively.
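A short symbolic sketch of this minimization, using SymPy (added for illustration, not part of the slides), with the three data points from the worked example that follows:

```python
import sympy as sp

# Minimize the sum of squared deviations for y = m*x + b by setting the
# partial derivatives with respect to m and b to zero.
m, b = sp.symbols("m b")
data = [(0.0, 20.0), (10.0, 22.0), (20.0, 23.0)]   # data from the worked example

sse = sum((m * x + b - y)**2 for x, y in data)     # overall deviation, sum of d_n^2

solution = sp.solve([sp.diff(sse, m), sp.diff(sse, b)], [m, b])
print(solution)   # {m: 0.150..., b: 20.166...}
```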
Linear Regression - Formulas
• Minimize ∑dn² by taking the partial derivatives with respect to m (slope) and b (intercept):
 ◦ ∂(∑dn²)/∂m = ∑(2mxn² + 2bxn – 2xnyn) = 0
 ◦ ∂(∑dn²)/∂b = ∑(2b + 2mxn – 2yn) = 0
• This set of equations can be solved for m and b:

 m = [N(∑xnyn) – (∑xn)(∑yn)] / [N(∑xn²) – (∑xn)²]

 b = [(∑xn²)(∑yn) – (∑xn)(∑xnyn)] / [N(∑xn²) – (∑xn)²]

 where N is the number of data points.
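These closed-form formulas translate directly into a small Python function; this sketch and the name least_squares_fit are illustrative additions, not part of the original slides.

```python
# Sketch: least-squares slope and intercept from the closed-form formulas above.
def least_squares_fit(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi**2 for xi in x)

    denom = n * sum_x2 - sum_x**2                  # N(∑x²) – (∑x)²
    m = (n * sum_xy - sum_x * sum_y) / denom       # slope
    b = (sum_x2 * sum_y - sum_x * sum_xy) / denom  # intercept
    return m, b
```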
Linear Regression - Example

 n     x (°C)   y (liter)   xy       x²       y²
 1     0.0      20.0        0.0      0.0      400.
 2     10.0     22.0        220.     100.     484.
 3     20.0     23.0        460.     400.     529.
 Sum   30.0     65.0        680.     500.     1413
       (∑xn)    (∑yn)       (∑xnyn)  (∑xn²)   (∑yn²)

Slope:
 m = [N(∑xnyn) – (∑xn)(∑yn)] / [N(∑xn²) – (∑xn)²]
   = (3×680. – 30.0×65.0) / (3×500. – 30.0²)
   = (2040. – 1950.) / (1500. – 900.)
   = 90.0 / 600.
   = 0.150

Intercept:
 b = [(∑xn²)(∑yn) – (∑xn)(∑xnyn)] / [N(∑xn²) – (∑xn)²]
   = (500.×65.0 – 30.0×680.) / (3×500. – 30.0²)
   = 12100. / 600.
   = 20.2

Correlation coefficient:
 r = [N(∑xnyn) – (∑xn)(∑yn)] / {[N(∑xn²) – (∑xn)²]^(1/2) [N(∑yn²) – (∑yn)²]^(1/2)}
   = (3×680. – 30.0×65.0) / [(3×500. – 30.0²)^(1/2) (3×1413 – 65.0²)^(1/2)]
   = 90.0 / [(600.)^(1/2) (14)^(1/2)]
   = 0.982

 r² = 0.964
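The worked example can be checked numerically with a short, self-contained Python sketch (an illustrative addition):

```python
import math

# Reproduce the worked example: slope m, intercept b, and correlation r.
x = [0.0, 10.0, 20.0]
y = [20.0, 22.0, 23.0]
n = len(x)

sx, sy = sum(x), sum(y)                        # ∑x = 30.0, ∑y = 65.0
sxy = sum(xi * yi for xi, yi in zip(x, y))     # ∑xy = 680.
sx2 = sum(xi**2 for xi in x)                   # ∑x² = 500.
sy2 = sum(yi**2 for yi in y)                   # ∑y² = 1413

m = (n * sxy - sx * sy) / (n * sx2 - sx**2)    # 90.0 / 600. = 0.150
b = (sx2 * sy - sx * sxy) / (n * sx2 - sx**2)  # 12100. / 600. ≈ 20.2
r = (n * sxy - sx * sy) / (math.sqrt(n * sx2 - sx**2) * math.sqrt(n * sy2 - sy**2))

print(m, round(b, 1), round(r, 3), round(r**2, 3))   # 0.15 20.2 0.982 0.964
```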
Linear Regression - Goodness
• The "goodness" of the data fitting is expressed by the regression coefficient r².
• If r² = 1, perfect fit
• If r² > 0.95, excellent fit
• If r² > 0.90, good fit
• If r² > 0.80, reasonable fit
• If r² = 0, completely unrelated

 r = [N(∑xnyn) – (∑xn)(∑yn)] / {[N(∑xn²) – (∑xn)²]^(1/2) [N(∑yn²) – (∑yn)²]^(1/2)}
Draw the Least-Square Line
• Once the slope m and intercept b have been calculated from a set of data (x and y), the best line can be drawn to fit the data.
• A line, y = mx + b, is defined by two points.
• The two points requiring the least computation are found as follows (see the sketch below):
 ◦ Set x = 0; then y = b, giving the first point (0, b).
 ◦ Set y = 0; then x = –b/m, giving the second point (–b/m, 0).
 ◦ Put these two points on the graph, then draw a straight line connecting them.
 ◦ This line is the least-square line that fits the given data set (x and y) best, with minimum error between the calculated and measured y values.
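A minimal Python sketch of this two-point construction, using the slope and intercept from the worked example (an illustrative addition):

```python
# Find the two easiest points for drawing the least-squares line y = m*x + b.
m, b = 0.150, 20.2          # values from the worked example (b rounded)

point1 = (0.0, b)           # set x = 0  ->  y = b
point2 = (-b / m, 0.0)      # set y = 0  ->  x = -b/m

print(point1)   # (0.0, 20.2)
print(point2)   # (-134.66..., 0.0)
```

Note that in this example the x-intercept falls far outside the measured temperature range; when that happens, any two well-separated x values within or near the data range can be substituted to locate two points on the same line.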