Foundations of Factor Analysis
Second Edition
Statistics in the Social and Behavioral Sciences Series
Aims and scope
Large and complex datasets are becoming prevalent in the social and behavioral
sciences, and statistical methods are crucial for the analysis and interpretation of such
data. This series aims to capture new developments in statistical methodology with par-
ticular relevance to applications in the social and behavioral sciences. It seeks to promote
appropriate use of statistical, econometric and psychometric methods in these applied
sciences by publishing a broad range of reference works, textbooks and handbooks.
The scope of the series is wide, including applications of statistical methodology in
sociology, psychology, economics, education, marketing research, political science,
criminology, public policy, demography, survey methodology and official statistics. The
titles included in the series are designed to appeal to applied statisticians, as well as
students, researchers and practitioners from the above disciplines. The inclusion of real
examples and case studies is therefore essential.
Published Titles
Analysis of Multivariate Social Science Data, Second Edition
David J. Bartholomew, Fiona Steele, Irini Moustaki, and Jane I. Galbraith
Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition
Jeff Gill
Foundations of Factor Analysis, Second Edition
Stanley A. Mulaik
Linear Causal Modeling with Structural Equations
Stanley A. Mulaik
Multiple Correspondence Analysis and Related Methods
Michael Greenacre and Jorg Blasius
Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences
Brian S. Everitt
Statistical Test Theory for the Behavioral Sciences
Dato N. M. de Gruijter and Leo J. Th. van der Kamp
Series Editors
A. Colin Cameron, University of California, Davis, USA
Andrew Gelman, Columbia University, USA
J. Scott Long, Indiana University, USA
Sophia Rabe-Hesketh, University of California, Berkeley, USA
Anders Skrondal, Norwegian Institute of Public Health, Norway
Chapman & Hall/CRC
Stanley A. Mulaik
Foundations of Factor Analysis
Second Edition
Statistics in the Social and Behavioral Sciences Series
Chapman & Hall/CRC
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2010 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Version Date: 20110725
International Standard Book Number-13: 978-1-4200-9981-2 (eBook - PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com
(http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents
Preface to the Second Edition............................................................................ xiii
Preface to the First Edition................................................................................. xix
1 Introduction.....................................................................................................1
1.1 Factor Analysis and Structural Theories .............................................1
1.2 Brief History of Factor Analysis as a Linear Model...........................3
1.3 Example of Factor Analysis.................................................................12
2 Mathematical Foundations for Factor Analysis .....................................17
2.1 Introduction...........................................................................................17
2.2 Scalar Algebra........................................................................................17
2.2.1 Fundamental Laws of Scalar Algebra ..................................18
2.2.1.1 Rules of Signs............................................................18
2.2.1.2 Rules for Exponents.................................................19
2.2.1.3 Solving Simple Equations.......................................19
2.3 Vectors ....................................................................................................20
2.3.1 n-Tuples as Vectors..................................................................22
2.3.1.1 Equality of Vectors....................................................22
2.3.2 Scalars and Vectors..................................................................23
2.3.3 Multiplying a Vector by a Scalar...........................................23
2.3.4 Addition of Vectors.................................................................24
2.3.5 Scalar Product of Vectors .......................................................24
2.3.6 Distance between Vectors ......................................................25
2.3.7 Length of a Vector ...................................................................26
2.3.8 Another Definition for Scalar Multiplication......................27
2.3.9 Cosine of the Angle between Vectors...................................27
2.3.10 Projection of a Vector onto Another Vector .........................29
2.3.11 Types of Special Vectors .........................................................30
2.3.12 Linear Combinations ..............................................................31
2.3.13 Linear Independence..............................................................32
2.3.14 Basis Vectors.............................................................................32
2.4 Matrix Algebra ......................................................................................32
2.4.1 Definition of a Matrix .............................................................32
2.4.2 Matrix Operations...................................................................33
2.4.2.1 Equality......................................................................34
2.4.2.2 Multiplication by a Scalar .......................................34
2.4.2.3 Addition.....................................................................34
2.4.2.4 Subtraction ................................................................35
2.4.2.5 Matrix Multiplication ..............................................35
2.4.3 Identity Matrix.........................................................................37
2.4.4 Scalar Matrix............................................................................38
2.4.5 Diagonal Matrix ......................................................................39
2.4.6 Upper and Lower Triangular Matrices................................39
2.4.7 Null Matrix...............................................................................40
2.4.8 Transpose Matrix.....................................................................40
2.4.9 Symmetric Matrices ................................................................41
2.4.10 Matrix Inverse..........................................................................41
2.4.11 Orthogonal Matrices...............................................................42
2.4.12 Trace of a Matrix......................................................................43
2.4.13 Invariance of Traces under Cyclic Permutations................43
2.5 Determinants .........................................................................................44
2.5.1 Minors of a Matrix ..................................................................46
2.5.2 Rank of a Matrix......................................................................47
2.5.3 Cofactors of a Matrix ..............................................................47
2.5.4 Expanding a Determinant by Cofactors ..............................48
2.5.5 Adjoint Matrix .........................................................................48
2.5.6 Important Properties of Determinants.................................49
2.5.7 Simultaneous Linear Equations............................................50
2.6 Treatment of Variables as Vectors.......................................................51
2.6.1 Variables in Finite Populations .............................................51
2.6.2 Variables in Infinite Populations...........................................53
2.6.3 Random Vectors of Random Variables.................................56
2.7 Maxima and Minima of Functions.....................................................58
2.7.1 Slope as the Indicator of a Maximum or Minimum...........59
2.7.2 Index for Slope.........................................................................59
2.7.3 Derivative of a Function.........................................................60
2.7.4 Derivative of a Constant ........................................................62
2.7.5 Derivative of Other Functions...............................................62
2.7.6 Partial Differentiation.............................................................64
2.7.7 Maxima and Minima of Functions of Several Variables....65
2.7.8 Constrained Maxima and Minima .......................................67
3 Composite Variables and Linear Transformations................................69
3.1 Introduction...........................................................................................69
3.1.1 Means and Variances of Variables ........................................69
3.1.1.1 Correlation and Causation......................................71
3.2 Composite Variables.............................................................................72
3.3 Unweighted Composite Variables......................................................73
3.3.1 Mean of an Unweighted Composite ....................................73
3.3.2 Variance of an Unweighted Composite...............................73
3.3.3 Covariance and Correlation between
Two Composites......................................................................77
3.3.4 Correlation of an Unweighted Composite
with a Single Variable.............................................................78
3.3.5 Correlation between Two Unweighted
Composites.................................................................................80
3.3.6 Summary Concerning Unweighted Composites..................83
3.4 Differentially Weighted Composites..................................................83
3.4.1 Correlation between a Differentially Weighted
Composite and Another Variable ...........................................83
3.4.2 Correlation between Two Differentially
Weighted Composites...............................................................84
3.5 Matrix Equations...................................................................................84
3.5.1 Random Vectors, Mean Vectors, Variance–Covariance
Matrices, and Correlation Matrices ........................................84
3.5.2 Sample Equations......................................................................86
3.5.3 Composite Variables in Matrix Equations.............................88
3.5.4 Linear Transformations............................................................89
3.5.5 Some Special, Useful Linear Transformations ......................91
4 Multiple and Partial Correlations.............................................................93
4.1 Multiple Regression and Correlation.................................................93
4.1.1 Minimizing the Expected Squared Difference between
a Composite Variable and an External Variable ...................93
*4.1.2 Deriving the Regression Weight Matrix for
Multivariate Multiple Regression...........................................95
4.1.3 Matrix Equations for Multivariate Multiple Regression.....97
4.1.4 Squared Multiple Correlations................................................98
4.1.5 Correlations between Actual and Predicted Criteria...........99
4.2 Partial Correlations.............................................................................100
4.3 Determinantal Formulas.................................................................... 102
4.3.1 Multiple-Correlation Coefficient........................................... 103
4.3.2 Formulas for Partial Correlations ......................................... 104
4.4 Multiple Correlation in Terms of Partial Correlation .................... 104
4.4.1 Matrix of Image Regression Weights.................................... 105
4.4.2 Meaning of Multiple Correlation.......................................... 107
4.4.3 Yule’s Equation for the Error of Estimate............................ 109
4.4.4 Conclusions.............................................................................. 110
5 Multivariate Normal Distribution.......................................................... 113
5.1 Introduction......................................................................................... 113
5.2 Univariate Normal Density Function .............................................. 113
5.3 Multivariate Normal Distribution.................................................... 114
5.3.1 Bivariate Normal Distribution .............................................. 115
5.3.2 Properties of the Multivariate Normal Distribution.......... 116
*5.4 Maximum-Likelihood Estimation..................................................... 118
5.4.1 Notion of Likelihood .............................................................. 118
5.4.2 Sample Likelihood .................................................................. 119
5.4.3 Maximum-Likelihood Estimates .......................................... 119
5.4.4 Multivariate Case.................................................................... 124
5.4.4.1 Distribution of ȳ and S............................................128
6 Fundamental Equations of Factor Analysis ..........................................129
6.1 Analysis of a Variable into Components .........................................129
6.1.1 Components of Variance........................................................ 132
6.1.2 Variance of a Variable in Terms of Its Factors .....................133
6.1.3 Correlation between Two Variables
in Terms of Their Factors........................................................134
6.2 Use of Matrix Notation in Factor Analysis......................................135
6.2.1 Fundamental Equation of Factor Analysis..........................135
6.2.2 Fundamental Theorem of Factor Analysis ..........................136
6.2.3 Factor-Pattern and Factor-Structure Matrices..................... 137
7 Methods of Factor Extraction ................................................................... 139
7.1 Rationale for Finding Factors and Factor Loadings....................... 139
7.1.1 General Computing Algorithm
for Finding Factors..................................................................140
7.2 Diagonal Method of Factoring.......................................................... 145
7.3 Centroid Method of Factoring .......................................................... 147
7.4 Principal-Axes Methods..................................................................... 147
7.4.1 Hotelling’s Iterative Method ................................................. 151
7.4.2 Further Properties of Eigenvectors and
Eigenvalues..............................................................................154
7.4.3 Maximization of Quadratic Forms for Points
on the Unit Sphere ..................................................................156
7.4.4 Diagonalizing the R Matrix into Its Eigenvalues ...............158
7.4.5 Jacobi Method.......................................................................... 159
7.4.6 Powers of Square Symmetric Matrices ................................164
7.4.7 Factor-Loading Matrix from Eigenvalues
and Eigenvectors..................................................................... 165
8 Common-Factor Analysis.......................................................................... 167
8.1 Preliminary Considerations............................................................... 167
8.1.1 Designing a Factor Analytic Study....................................... 168
8.2 First Stages in the Factor Analysis.................................................... 169
8.2.1 Concept of Minimum Rank................................................... 170
8.2.2 Systematic Lower-Bound Estimates
of Communalities.................................................................... 175
8.2.3 Congruence Transformations................................................ 176
8.2.4 Sylvester’s Law of Inertia ...................................................... 176
8.2.5 Eigenvector Transformations ................................................177
8.2.6 Guttman’s Lower Bounds for Minimum Rank...................177
8.2.7 Preliminary Theorems for Guttman’s Bounds.................... 178
8.2.8 Proof of the First Lower Bound........................................... 181
8.2.9 Proof of the Third Lower Bound......................................... 181
8.2.10 Proof of the Second Lower Bound......................................184
8.2.11 Heuristic Rules of Thumb for the Number of Factors .....185
8.2.11.1 Kaiser’s Eigenvalues-Greater-
Than-One Rule...................................................... 186
8.2.11.2 Cattell’s Scree Criterion....................................... 186
8.2.11.3 Parallel Analysis................................................... 188
8.3 Fitting the Common-Factor Model to a Correlation Matrix......... 192
8.3.1 Least-Squares Estimation of the Exploratory
Common-Factor Model........................................................ 193
8.3.2 Assessing Fit .......................................................................... 197
8.3.3 Example of Least-Squares Common-Factor
Analysis................................................................................... 197
8.3.4 Maximum-Likelihood Estimation of the
Exploratory Common-Factor Model ..................................199
*8.3.4.1 Maximum-Likelihood Estimation Obtained
Using Calculus......................................................202
8.3.5 Maximum-Likelihood Estimates ........................................206
*8.3.6 Fletcher–Powell Algorithm..................................................207
*8.3.7 Applying the Fletcher–Powell Algorithm to
Maximum-Likelihood Exploratory Factor Analysis.........210
8.3.8 Testing the Goodness of Fit of the
Maximum-Likelihood Estimates ........................................212
8.3.9 Optimality of Maximum-Likelihood Estimators.............. 214
8.3.10 Example of Maximum-Likelihood
Factor Analysis ......................................................................215
9 Other Models of Factor Analysis............................................................. 217
9.1 Introduction......................................................................................... 217
9.2 Component Analysis.......................................................................... 217
9.2.1 Principal-Components Analysis ......................................... 219
9.2.2 Selecting Fewer Components than Variables....................220
9.2.3 Determining the Reliability of Principal Components....222
9.2.4 Principal Components of True Components.....................224
9.2.5 Weighted Principal Components........................................226
9.3 Image Analysis....................................................................................230
9.3.1 Partial-Image Analysis .........................................................231
9.3.2 Image Analysis and Common-Factor Analysis ................237
9.3.3 Partial-Image Analysis as Approximation of
Common-Factor Analysis.....................................................244
9.4 Canonical-Factor Analysis.................................................................245
9.4.1 Relation to Image Analysis..................................................249
9.4.2 Kaiser’s Rule for the Number of Harris Factors...............253
9.4.3 Quickie, Single-Pass Approximation for
Common-Factor Analysis.....................................................253
9.5 Problem of Doublet Factors...............................................................253
9.5.1 Butler’s Descriptive-Factor-Analysis Solution .................254
9.5.2 Model That Includes Doublets Explicitly..........................258
9.6 Metric Invariance Properties.............................................................262
9.7 Image-Factor Analysis........................................................................263
9.7.1 Testing Image Factors for Significance...............................264
9.8 Psychometric Inference in Factor Analysis .....................................265
9.8.1 Alpha Factor Analysis ..........................................................270
9.8.2 Communality in a Universe of Tests ..................................271
9.8.3 Consequences for Factor Analysis...................................... 274
10 Factor Rotation ............................................................................................275
10.1 Introduction.......................................................................................275
10.2 Thurstone’s Concept of a Simple Structure................................... 276
10.2.1 Implementing the Simple-Structure Concept .................280
10.2.2 Question of Correlated Factors .........................................282
10.3 Oblique Graphical Rotation ............................................................286
11 Orthogonal Analytic Rotation.................................................................301
11.1 Introduction .......................................................................................301
11.2 Quartimax Criterion .........................................................................302
11.3 Varimax Criterion.............................................................................. 310
11.4 Transvarimax Methods..................................................................... 312
11.4.1 Parsimax ............................................................................... 313
11.5 Simultaneous Orthogonal Varimax and Parsimax....................... 315
11.5.1 Gradient Projection Algorithm..........................................323
12 Oblique Analytic Rotation .......................................................................325
12.1 General ...............................................................................................325
12.1.1 Distinctness of the Criteria in Oblique Rotation.............325
12.2 Oblimin Family .................................................................................326
12.2.1 Direct Oblimin by Planar Rotations .................................328
12.3 Harris–Kaiser Oblique Transformations .......................................332
12.4 Weighted Oblique Rotation.............................................................336
12.5 Oblique Procrustean Transformations...........................................341
12.5.1 Promax Oblique Rotation ..................................................342
12.5.2 Rotation to a Factor-Pattern Matrix Approximating
a Given Target Matrix.........................................................343
12.5.3 Promaj...................................................................................343
12.5.4 Promin ..................................................................................345
12.6 Gradient-Projection-Algorithm Synthesis.....................................348
12.6.1 Gradient-Projection Algorithm .........................................348
12.6.2 Jennrich’s Use of the GPA..................................................351
12.6.2.1 Gradient-Projection Algorithm ........................353
12.6.2.2 Quartimin............................................................353
12.6.2.3 Oblimin Rotation................................................354
12.6.2.4 Least-Squares Rotation to a Target Matrix .....357
12.6.2.5 Least-Squares Rotation to a Partially
Specified Target Pattern Matrix........................357
12.6.3 Simplimax ............................................................................357
12.7 Rotating Using Component Loss Functions .................................360
12.8 Conclusions........................................................................................366
13 Factor Scores and Factor Indeterminacy ................................................369
13.1 Introduction.......................................................................................369
13.2 Scores on Component Variables ..................................................... 370
13.2.1 Component Scores in Canonical-Component
Analysis and Image Analysis............................................373
13.2.1.1 Canonical-Component Analysis.......................373
13.2.1.2 Image Analysis.................................................... 374
13.3 Indeterminacy of Common-Factor Scores.....................................375
13.3.1 Geometry of Correlational Indeterminacy......................377
13.4 Further History of Factor Indeterminacy......................................380
13.4.1 Factor Indeterminacy from 1970 to 1980..........................384
13.4.1.1 “Infinite Domain” Position ...............................392
13.4.2 Researchers with Well-Defined Concepts
of Their Domains ................................................................395
13.4.2.1 Factor Indeterminacy from 1980 to 2000.........397
13.5 Other Estimators of Common Factors ...........................................399
13.5.1 Least Squares .......................................................................400
13.5.2 Bartlett’s Method.................................................................401
13.5.3 Evaluation of Estimation Methods...................................403
14 Factorial Invariance....................................................................................405
14.1 Introduction.......................................................................................405
14.2 Invariance under Selection of Variables ........................................405
14.3 Invariance under Selection of Experimental Populations ..........408
14.3.1 Effect of Univariate Selection ............................................408
14.3.2 Multivariate Case................................................................ 412
14.3.3 Factorial Invariance in Different Experimental
Populations.......................................................................... 414
14.3.4 Effects of Selection on Component Analysis................... 418
14.4 Comparing Factors across Populations ......................................... 419
14.4.1 Preliminary Requirements for Comparing
Factor Analyses ...................................................................420
14.4.2 Inappropriate Comparisons of Factors............................421
14.4.3 Comparing Factors from Component Analyses.............422
14.4.4 Contrasting Experimental Populations
across Factors.......................................................................423
14.4.5 Limitations on Factorial Invariance..................................424
15 Confirmatory Factor Analysis..................................................................427
15.1 Introduction.......................................................................................427
15.1.1 Abduction, Deduction, and Induction...........................428
15.1.2 Science as the Knowledge of Objects .............................429
15.1.3 Objects as Invariants in the Perceptual Field................431
15.1.4 Implications for Factor Analysis.....................................433
15.2 Example of Confirmatory Factor Analysis....................................434
15.3 Mathematics of Confirmatory Factor Analysis.............................440
15.3.1 Specifying Hypotheses.....................................................440
15.3.2 Identification......................................................................441
15.3.3 Determining Whether Parameters and
Models Are Identified ......................................................444
15.3.4 Identification of Metrics...................................................450
15.3.5 Discrepancy Functions .....................................................452
15.3.6 Estimation by Minimizing Discrepancy Functions......454
15.3.7 Derivatives of Elements of Matrices...............................454
15.3.8 Maximum-Likelihood Estimation in
Confirmatory Factor Analysis .........................................457
15.3.9 Least-Squares Estimation................................................. 461
15.3.10 Generalized Least-Squares Estimation ..........................463
15.3.11 Implementing the Quasi-Newton Algorithm ...............463
15.3.12 Avoiding Improper Solutions..........................................465
15.3.13 Statistical Tests...................................................................466
15.3.14 What to Do When Chi-Square Is Significant.................467
15.3.15 Approximate Fit Indices...................................................469
15.4 Designing Confirmatory Factor Analysis Models........................473
15.4.1 Restricted versus Unrestricted Models..........................473
15.4.2 Use for Unrestricted Model .............................................475
15.4.3 Measurement Model......................................................... 476
15.4.4 Four-Step Procedure for Evaluating a Model ...............477
15.5 Some Other Applications.................................................................477
15.5.1 Faceted Classification Designs........................................477
15.5.2 Multirater–Multioccasion Studies ..................................478
15.5.3 Multitrait–Multimethod Covariance Matrices..............483
15.6 Conclusion .........................................................................................489
References ...........................................................................................................493
Author Index.......................................................................................................505
Subject Index ......................................................................................................509
Preface to the Second Edition
This is a book for those who want or need to get to the bottom of things.
It is about the foundations of factor analysis. It is for those who are not content
with accepting on faith the many equations and procedures that constitute
factor analysis but want to know where these equations and procedures came
from. They want to know the assumptions underlying these equations and
procedures so that they can evaluate them for themselves and decide where
and when they would be appropriate. They want to see how it was done, so
they might know how to add modifications or produce new results.
The fact that a major aspect of factor analysis and structural equation
modeling is mathematical means that getting to their foundations is going to
require dealing with some mathematics. Now, compared to the mathematics
needed to fully grasp modern physics, the mathematics of factor analysis and
structural equation modeling is, I am happy to say, relatively easy to learn
and not much beyond a sound course in algebra and certainly not beyond a
course in differential calculus, which is often the first course in mathematics
for science and engineering majors in a university. It is true that factor analy-
sis relies heavily on concepts and techniques of linear algebra and matrix
algebra. But these are topics that can be taught as a part of learning about
factor analysis. Where differential calculus comes into the picture is in those
situations where one seeks to maximize or minimize some algebraic expres-
sion. Taking derivatives of algebraic expressions is an analytic process, and
these are algebraic in nature. While best learned in a course on calculus, one
can still be shown the derivatives needed to solve a particular optimization
problem. Given that the algebra of the derivation of the solution is shown
step by step, a reader may still be able to follow the argument leading to the
result. That, then, is the way this book has been written: I teach the math-
ematics needed as it is needed to understand the derivation of an equation or
procedure in factor analysis and structural equation modeling.
This text may be used at the postgraduate level as a first-semester course
in advanced correlational methods. It will find use in psychology, sociology,
education, marketing, and organizational behavior departments, especially
in their quantitative method programs. Other ancillary sciences may also
find this book useful. It can also be used as a reference for explanations of
various options in commercial computer programs for performing factor
analysis and structural equation modeling.
There is a logical progression to the chapters in this text, reflecting the
hierarchical structure to the mathematical concepts to be covered. First, in
Chapter 2 one needs to learn the basic mathematics, principally linear algebra
and matrix algebra and the elements of differential calculus. Then one needs
to learn about composite variables and their means, variances, covariances,
and correlation in terms of means, variances, and covariances among their
component variables. Then one builds on that to deal with multiple and
partial correlations, which are special forms of composite variables. This is
accomplished in Chapter 3.
Differential calculus will first be encountered in Chapter 4 in demon-
strating where the estimates of the regression weights come from, for this
involves finding the weights of a linear combination of predictor variables
that has either the maximum correlation with the criterion or the minimum
expected difference with the criterion.
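For readers who want a concrete preview, here is a minimal numerical sketch of the result of that minimization, the normal equations, written in Python with NumPy; the data and variable names are illustrative, not from the text.

```python
import numpy as np

# Sketch: the weights minimizing the expected squared difference
# between the composite w'X and a criterion y satisfy the normal
# equations R_xx w = r_xy (here in correlation metric).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))                    # three predictors
y = X @ np.array([0.5, 0.3, -0.2]) + rng.standard_normal(500)

R_xx = np.corrcoef(X, rowvar=False)                  # predictor correlations
r_xy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(3)])
w = np.linalg.solve(R_xx, r_xy)                      # regression weights
R2 = w @ r_xy                                        # squared multiple correlation
```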
In Chapter 5 one uses the concepts of composite variables and multiple
and partial regression to understand the basic properties of the multivariate
normal distribution. Matrix algebra becomes essential to simplify notations
that often involve hundreds of variables and their interrelations.
By this point, in Chapter 6, one is ready to deal with factor analysis, which
is an extension of regression theory wherein the predictor variables are now
unmeasured, hypothetical latent variables and the dependent variables are
damental theorem of factor analysis, and introduce the basic terminology
of factor analysis. In Chapter 7, we consider how common factors may be
extracted. We show that the methods used build on concepts of regression
and partial correlation. We first look at a general algorithm for extracting
factors proposed by Guttman. We then consider the diagonal and the cen-
troid methods of factoring, both of which are of historical interest. Next we
encounter eigenvectors and eigenvalues. Eigenvectors contain coefficients
that are weights used to combine additively observed variables or their
common parts into variables that have maximum variances (the eigenval-
ues), because they will be variables that account for most of the information
among the original variables in any one dimension.
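The following short sketch (mine, not the book's) shows this numerically: the eigenvectors of a correlation matrix supply the combining weights, and the eigenvalues are the maximized variances.

```python
import numpy as np

# Sketch: the maximum-variance composites have eigenvector weights,
# and the maximized variances are the eigenvalues of R.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
eigenvalues, eigenvectors = np.linalg.eigh(R)   # returned in ascending order
order = np.argsort(eigenvalues)[::-1]           # sort descending
gamma, A = eigenvalues[order], eigenvectors[:, order]
# The first column of A weights the variables into the composite that
# accounts for the most variance (gamma[0]) in any one dimension.
```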
To perform a common-factor analysis one must first have initial estimates
of the communalities and unique variances. Lower-bound estimates are fre-
quently used. These bounds and their effect on the number of factors to retain
are discussed in Chapter 8. The unique variances are either subtracted from
the diagonal of the correlation matrix, or the diagonal matrix of the reciprocals
of their square roots is pre- and postmultiplied with the correlation matrix;
the eigenvectors and eigenvalues of the resulting matrix are then
obtained. Different methods use different estimates of unique variances. The
formulas for the eigenvectors and eigenvalues of a correlation matrix, say,
are obtained by using differential calculus to solve the maximization prob-
lem of finding the weights of a linear combination of the variables that has
the maximum variance under the constraint that the sum of the squares of
the weights adds to unity. These will give rise to the factors, and the common
factors will in turn be basis vectors of a common factor space, meaning that
the observed variables are in turn linear combinations of the factors.
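As a hedged illustration of the two rescalings just described, the sketch below uses squared multiple correlations as the customary lower-bound communality estimates; the matrix R is arbitrary.

```python
import numpy as np

# Sketch of the two rescalings described above, with squared multiple
# correlations (SMCs) as lower-bound communality estimates.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # communality estimates
u2 = 1.0 - smc                                # unique-variance estimates

R_reduced = R - np.diag(u2)                   # (a) subtract from the diagonal
D = np.diag(1.0 / np.sqrt(u2))
R_scaled = D @ R @ D                          # (b) pre- and postmultiply

gamma, A = np.linalg.eigh(R_reduced)          # eigenvalues, ascending
L = A[:, [-1]] * np.sqrt(gamma[-1])           # loadings on one retained factor
```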
Maximum-likelihood factor analysis, the equations for which were ulti-
mately solved by Karl Jöreskog (1967), building on the work of precursor
statisticians, also requires differential calculus to solve the maximization
problem involved. Furthermore, the maximum-likelihood estimates of the
model parameters cannot be solved for directly by any analytic algebraic
procedure. The solution has to be obtained
numerically and iteratively. Jöreskog (1967) used a then new computer algo-
rithm for nonlinear optimization, the Fletcher–Powell algorithm. We will
explain how this works to obtain the maximum-likelihood solution for the
exploratory factor-analysis model.
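Fletcher–Powell belongs to the quasi-Newton family; the sketch below conveys the idea with a modern relative (BFGS, via SciPy) applied to the maximum-likelihood discrepancy for a hypothetical one-factor model. It illustrates the numerical approach, not the book's own algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: for loadings L and unique variances psi2, the ML discrepancy is
# F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p, minimized numerically
# (Fletcher-Powell in 1967; BFGS here). S below is a made-up sample matrix.
S = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
p, m = 3, 1

def discrepancy(theta):
    L = theta[:p * m].reshape(p, m)
    psi2 = np.exp(theta[p * m:])          # keep unique variances positive
    Sigma = L @ L.T + np.diag(psi2)
    _, logdet = np.linalg.slogdet(Sigma)
    return (logdet + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - p)

theta0 = np.concatenate([0.5 * np.ones(p * m), np.log(0.5 * np.ones(p))])
fit = minimize(discrepancy, theta0, method="BFGS")
```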
Chapter 9 examines several variants of the factor-analysis model: princi-
pal components, weighted principal components, image analysis, canonical
factor analysis, descriptive factor analysis, and alpha factor analysis.
In Chapters 10 (simple structure and graphical rotation), 11 (orthogonal
analytic rotation), and 12 (oblique analytic rotation) we consider factor rota-
tion. Rotation of factors to simple structures will concern transformations of
the common-factor variables into a new set of variables that have a simpler
relationship with the observed variables. But there are numerous math-
ematical criteria of what constitutes a simple structure solution. All of these
involve finding the solution for a transformation matrix for transforming the
initial “unrotated solution” to one that maximizes or minimizes a mathe-
matical expression constituting the criterion for simple structures. Thus, dif-
ferential calculus is again involved in finding the algorithms for conducting
a rotation of factors, and the solution is obtained numerically and iteratively
using a computer.
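To make the flavor of such an algorithm concrete, here is a compact sketch of one widely used orthogonal criterion, varimax, in its standard SVD-based iteration; it is one instance of the class of procedures described above, not a derivation from this book.

```python
import numpy as np

# Minimal varimax sketch: iteratively find the orthogonal T maximizing
# the variance of the squared loadings in each column of L @ T.
def varimax(L, tol=1e-8, max_iter=500):
    p, m = L.shape
    T = np.eye(m)
    d_old = 0.0
    for _ in range(max_iter):
        B = L @ T
        G = L.T @ (B ** 3 - B @ np.diag((B ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(G)
        T = U @ Vt                  # nearest orthogonal matrix to G
        d = s.sum()
        if d < d_old * (1 + tol):   # criterion no longer improving
            break
        d_old = d
    return L @ T, T
```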
Chapter 13 addresses whether or not it is possible to obtain scores on the
latent common factors. It turns out that solutions for these scores are not
unique, even though they are optimal. This is the factor-indeterminacy
problem, and it concerns more than getting scores on the factors: there
may be more than one interpretation for a common factor that fits the data
equally well.
Chapter 14 deals with factorial invariance. What solutions for the factors
will reveal the same factors even if we use different sets of observed vari-
ables? What coefficients are invariant in a factor-analytic model under restric-
tion of range? Building on ideas from regression, the solution is effectively
algebraic.
While much of the first 14 chapters is essentially unchanged from the first
edition, developments that have taken place since 1972, when the first edition
was published, have been updated and revised.
I have changed the notation to adopt a notation popularized by Karl
Jöreskog for the common-factor model. I now write the model equation as
Y = ΛX + ΨE instead of Z = FX + UV, and the equation of the fundamental
theorem as R_YY = ΛΦ_XXΛ′ + Ψ² instead of R_ZZ = FC_XXF′ + U².
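A short numerical check of the fundamental theorem in this notation, with made-up values for Λ and Φ_XX, may help fix the symbols:

```python
import numpy as np

# Sketch checking R_YY = L Phi L' + Psi^2 for hypothetical matrices
# in Joreskog's notation (L stands in for Lambda, Psi2 for Psi^2).
L = np.array([[0.8, 0.0],
              [0.7, 0.0],
              [0.0, 0.6],
              [0.0, 0.9]])                       # factor-pattern matrix
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])                     # factor correlations
Psi2 = np.diag(1.0 - np.diag(L @ Phi @ L.T))     # unique variances
R_YY = L @ Phi @ L.T + Psi2
assert np.allclose(np.diag(R_YY), 1.0)           # unit-variance variables
```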
I have added a new Chapter 5 on the multivariate normal distribution
and its general properties along with the concept of maximum-likelihood
estimation based on it. This will increase by one the chapter numbers for
subsequent chapters corresponding to those in the first edition. However,
Chapter 12, on procrustean rotation in the first edition, has been dropped,
and this subject has been briefly described in the new Chapter 12 on oblique
rotation. Chapters 13 and 14 deal with factor scores and indeterminacy, and
factorial invariance under restriction of range, respectively.
Other changes and additions are as follows. I am critical of several of the
methods that are commonly used to determine the number of factors to retain
because they are not based on sound statistical or mathematical theory. I have
now directed some of these criticisms toward some methods in renumbered
Chapter 8. However, since then I have also realized that, in most studies, a
major problem with determining the number of factors concerns the presence
of doublet variance: variance uncorrelated with n − 2 of the observed variables
and shared between just the remaining two. Doublet correlations contribute to the
communalities, but lead to an overestimate of the number of overdetermined common
factors. But the common-factor model totally ignores the possibility of dou-
blets, while they are everywhere in our empirical studies. Both unique factor
variance and doublet variance should be separated from the overdetermined
common-factor variance. I have since rediscovered that a clever but heuristic
solution for doing this, ignored by most factor-analytic texts, appeared in a
paper I cited but did not understand sufficiently to describe in the first edition.
This paper was by John Butler (1968), and he named his method “descriptive
factor analysis.” I now cover this more completely (along with a new method of
my own I call “doublet factor analysis”) in Chapter 9 as well as other methods
of factor analysis. It provides an objective way to determine the number of
overdetermined common factors to retain; those who use the eigenvalues-greater-than-1.00
rule of principal components will find the smaller number of factors it retains
more to their liking.
I show in Chapter 9 that Kaiser's formula (1963) for the principal-axes
factor structure matrix for an image analysis, Λ_r = SA_r[(γ_i − 1)²γ_i⁻¹]^(1/2), is not
the correct one to use, because this represents the "covariances" between the
"image" components of the variables and the underlying "image factors."
The proper factor structure matrix that represents the correlations between
the unit-variance "observed" variables and the unit-variance image factors is
none other than the weighted principal component solution: Λ_r = SA_r[γ_i]^(1/2).
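A small numerical sketch contrasting the two structure matrices, under the standard image-analysis definitions S² = [diag(R⁻¹)]⁻¹, with γ_i and A the eigenvalues and eigenvectors of S⁻¹RS⁻¹ (my illustration, with an arbitrary R):

```python
import numpy as np

# Sketch contrasting Kaiser's image structure matrix with the
# weighted principal component solution discussed above.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
S = np.diag(np.sqrt(1.0 / np.diag(np.linalg.inv(R))))
S_inv = np.linalg.inv(S)
gamma, A = np.linalg.eigh(S_inv @ R @ S_inv)
order = np.argsort(gamma)[::-1]
gamma, A = gamma[order], A[:, order]
keep = gamma > 1.0                    # retain factors with gamma_i > 1
g, A_r = gamma[keep], A[:, keep]

kaiser = S @ A_r @ np.diag(np.sqrt((g - 1.0) ** 2 / g))  # image covariances
weighted_pc = S @ A_r @ np.diag(np.sqrt(g))              # correlations
```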
In the 1990s, several new approaches to oblique rotation to simple struc-
ture were published, and these and earlier methods of rotation were in turn
integrated by Robert Jennrich (2001, 2002, 2004, 2006) around a simple core
computing algorithm, the “gradient projection algorithm,” which seeks
the transformation matrix for simultaneously transforming all the factors.
I have therefore completely rewritten Chapter 12 on analytic oblique rotation
on the basis of this new, simpler algorithm, and I show examples of its use.
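The core of Jennrich's algorithm is easy to state; the sketch below is a minimal orthogonal version (a paraphrase of the published algorithm, not code from this book), where Q and dQ are assumed to compute a rotation criterion and its gradient with respect to the rotated loadings.

```python
import numpy as np

# Minimal orthogonal gradient projection sketch (after Jennrich, 2001):
# minimize Q(A @ T) over orthogonal T by projected gradient steps.
def gpa_orthogonal(A, Q, dQ, max_iter=500, tol=1e-6):
    T = np.eye(A.shape[1])
    alpha = 1.0
    f = Q(A @ T)
    for _ in range(max_iter):
        G = A.T @ dQ(A @ T)               # gradient of f with respect to T
        M = T.T @ G
        Gp = G - T @ (M + M.T) / 2.0      # project onto the tangent space
        s = np.linalg.norm(Gp)
        if s < tol:
            break
        alpha *= 2.0
        for _ in range(30):               # step-halving line search
            U, _, Vt = np.linalg.svd(T - alpha * Gp)
            T_new = U @ Vt                # project back onto rotations
            f_new = Q(A @ T_new)
            if f_new < f - 0.5 * s ** 2 * alpha:
                break
            alpha /= 2.0
        T, f = T_new, f_new
    return A @ T, T
```

A quartimin or varimax criterion plugs in through Q and dQ, which is exactly the modularity that lets one core algorithm serve the whole family of rotation methods.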
In the 1970s, factor score indeterminacy was further developed by several
authors, and in 1993 many of them published additional exchanges on the
subject. A discussion of these developments has now been included in an
expanded Chapter 13 on factor scores.
Factor analysis was also extended to confirmatory factor analysis and
structural equation modeling by Jöreskog (1969, 1973, 1974, 1975). As a con-
sequence, these later methodologies diverted researchers from pursuing
exclusively exploratory studies to pursuing hypothesis-testing studies.
Confirmatory factor analysis is now best treated separately in a text on struc-
tural equation modeling, as a special case of that method, but I have rewrit-
ten a chapter on confirmatory factor analysis for this edition. Exploratory
factor analysis still remains a useful technique in many circumstances as
an abductive, exploratory technique, justifying its study today, although its
limitations are now better understood.
I wish to thank Robert Jennrich for his help in understanding the gra-
dient projection algorithm. Jim Steiger was very helpful in clarifying Peter
Schönemann’s papers on factor indeterminacy, and I owe him a debt of
gratitude.
I wish to thank all those who, over the past 30 years, have encouraged me
to revise this text, specifically, Jim Steiger, Michael Browne, Abigail Panter,
Ed Rigdon, Rod McDonald, Bob Cudeck, Ed Loveland, Randy Engle, Larry
James, Susan Embretson, and Andy Smith. Without their encouragement,
I might have abandoned the project. I owe Henry Kaiser, now deceased, an
unrepayable debt for his unselfish help in steering me into a career in factor
analysis and in getting me the postdoctoral fellowship at the University of
North Carolina. This not only made the writing of the first edition of this
book possible, but it also placed me in the intellectually nourishing environ-
ment of leading factor analysts, without which I could never have gained the
required knowledge to write the first edition or its current sequel.
I also acknowledge the loving support my wife Jane has given me through
all these years as I labored on this book into the wee hours of the night.
Stanley A. Mulaik
Preface to the First Edition
When I was nine years old, I dismantled the family alarm clock. With gears,
springs, and screws—all novel mechanisms to me—scattered across the
kitchen table, I realized that I had learned two things: how to take the clock
apart and what was inside it. But this knowledge did not reveal the mysteries
of the all-important third and fourth steps in the process: putting the clock
back together again and understanding the theory of clocks in general. The
disemboweled clock ended up in the trash that night. A common experience,
perhaps, but nonetheless revealing.
There are some psychologists today who report a similar experience in
connection with their use of factor analysis in the study of human abilities or
personality. Very likely, when first introduced to factor analysis they had the
impression that it would allow them to analyze human behavior into its com-
ponents and thereby facilitate their activities of formulating structural theo-
ries about human behavior. But after conducting a half-dozen factor analyses
they discovered that, in spite of the plethora of factors produced and the piles
of computer output stacked up in the corners of their offices, they did not
know very much more about the organization of human behavior than they
knew before factor analyzing. Perhaps after factor analyzing they would
admit to appreciating more fully the complexity of human behavior, but in
terms of achieving a coherent, organized conception of human behavior, they
would claim that factor analysis had failed to live up to their expectations.
With that kind of negative appraisal being commonly given to the tech-
nique of factor analysis, one might ask why the author insisted on writing a
textbook about the subject. Actually, I think the case against factor analysis
is not quite as grim as depicted above. Just as my youthful experience with
the alarm clock yielded me fresh but limited knowledge about the works
inside the clock, factor analysis has provided psychologists with at least fresh
but limited knowledge about the components, although not necessarily the
organization, of human psychological processes.
If some psychologists are disillusioned by factor analysis’ failure to
provide them with satisfactory explanations of human behavior, the fault
probably lies not with the model of factor analysis itself but with the mindless
application of it; many of the early proponents of the method encouraged this
mindless application by their extravagant claims for the method’s efficacy in
discovering, as if by magic, underlying structures. Consequently, rather than
use scientific intuition and already-available knowledge about the properties
of variables under study to construct theories about the nature of relation-
ships among the variables and formulating these theories as factor-analytic
models to be tested against empirical data, many researchers have randomly
picked variables representing a domain to be studied, intercorrelated the
variables, and then factor-analyzed them expecting that the theoretically
important variables of the domain would be revealed by the analysis. Even
Thurstone (1947, pp. 55–56), who considered exploration of a new domain
a legitimate application of factor-analytic methods, cautioned that explora-
tion with factor analysis required carefully chosen variables and that results
from using the method were only provisional in suggesting ideas for further
research. Factor analysis is not a method for discovering full-blown struc-
tural theories about a domain. Experience has justified Thurstone’s cau-
tion for, more often than one would like to admit, the factors obtained from
purely exploratory studies have been difficult to integrate with other theory,
if they have been interpretable at all. The more substantial contributions of
factor analysis have been made when researchers postulated the existence
of certain factors, carefully selected variables from the domain that would
isolate their existence, and then proceeded to factor-analyze in such a way as
to reveal these factors as clearly as possible. In other words, factor analysis
has been more profitably used when the researcher knew what he or she was
looking for.
Several recent developments, discussed in this book, have made it pos-
sible for researchers to use factor-analytic methods in a hypothesis-testing
manner.
For example, in the context of traditional factor-analytic methodology,
using procrustean transformations (discussed in Chapter 12), a researcher
can rotate an arbitrary factor-pattern matrix to approximate a hypothetical
factor-pattern matrix as much as possible. The researcher can then examine
the goodness of fit of the rotated pattern matrix to the hypothetical pattern
matrix to evaluate his or her hypothesis.
In another recent methodological development (discussed in Chapter 15)
it is possible for the researcher to formulate a factor-analytic model for a set
of variables to whatever degree of completeness he or she may desire, leav-
ing the unspecified parameters of the model to be estimated in such a way
as to optimize goodness of fit of the hypothetical model to the data, and
then to test the overall model for goodness of fit against the data. The latter
approach to factor analysis, known as confirmatory factor analysis or analy-
sis of covariance structures, represents a radical departure from the tradi-
tional methods of performing factor analysis and may eventually become
the predominant method of using the factor-analytic model.
The objective of this book, as its title Foundations of Factor Analysis suggests,
is to provide the reader with the mathematical rationale for factor-analytic
procedures. It is thus designed as a text for students of the behavioral or
social sciences at the graduate level. The author assumes that the typical stu-
dent who uses this text will have had an introductory course in calculus, so
that he or she will be familiar with ordinary differentiation, partial differen-
tiation, and the maximization and minimization of functions using calculus.
There will be practically no reference to integral calculus, and many of the
sections will be comprehensible with only a good grounding in matrix
algebra, which is provided in Chapter 2. Many of the mathematical concepts
required to understand a particular factor-analytic procedure are introduced
along with the procedure.
The emphasis of this book is algebraic rather than statistical. The key con-
cept is that (random) variables may be treated as vectors in a unitary vec-
tor space in which the scalar product of vectors is a defined operation. The
empirical relationships between the variables are represented either by the
distances or the cosines of the angles between the corresponding vectors.
The factors of the factor-analytic model are basis vectors of the unitary vector
space of observed variables. The task of a factor-analytic study is to find a set
of basis vectors with optimal properties from which the vectors correspond-
ing to the observed variables can be derived as linear combinations.
Chapter 1 provides an introduction to the role factor-analytic models can
play in the formulation of structural theories, with a brief review of the his-
tory of factor analysis. Chapter 2 provides a mathematical review of concepts
of algebra and calculus and an introduction to vector spaces and matrix
algebra. Chapter 3 introduces the reader to properties of linear composite
variables and the representation of these properties by using matrix algebra.
Chapter 4 considers the problem of finding a particular linear combination
of a set of random variables that is minimally distant from some external
random variable in the vector space containing these variables. This discus-
sion introduces the concepts of multiple correlation and partial correlation;
the chapter ends with a discussion on how image theory clarifies the mean-
ing of multiple correlation. Multiple- and partial-correlational methods are
seen as essential, later on, to understanding methods of extracting factors.
Chapter 5 plunges into the theory of factor analysis proper with a dis-
cussion on the fundamental equations of common-factor analysis. Chapter 6
discusses methods of extracting factors; it begins with a discussion on a
general algorithm for extracting factors and concludes with a discussion on
methods for obtaining principal-axes factors by finding the eigenvectors and
eigenvalues of a correlation matrix.
Chapter 7 considers the model of common-factor analysis in greater
detail with a discussion on (1) the importance of overdetermining factors
by the proper selection of variables, (2) the inequalities regarding the lower
bounds to the communalities of variables, and (3) the fitting of the unre-
stricted common-factor model to a correlation matrix by least-squares and
maximum-likelihood estimation.
Chapter 8 discusses factor-analytic models other than the common-factor
model, such as component analysis, image analysis, image factor analysis,
and alpha factor analysis. Chapter 9 introduces the reader to the topic of factor
rotation to simple structure using graphical rotational methods. This discus-
sion is followed by a discussion on analytic methods of orthogonal rotation in
Chapter 10 and of analytic methods of oblique rotation in Chapter 11.
Chapters 12 and 13 consider methods of procrustean transformation of
factors and the meaning of factorial indeterminacy in common-factor
analysis in connection with the problem of estimating the common factors,
respectively. Chapter 14 deals with the topic of factorial invariance of the
common-factor model over sampling of different variables and over selec-
tion of different populations. The conclusion this chapter draws is that the
factor-pattern matrix is the only invariant of a common-factor-analysis model
under conditions of varied selection of a population.
Chapter 15 takes up the new developments of confirmatory factor analy-
sis and analysis of covariance structures, which allow the researcher to test
hypotheses using the model of common-factor analysis. Finally, Chapter 16
shows how various concepts from factor analysis can be applied to methods
of multivariate analysis, with a discussion on multiple correlations, step-
down regression onto a set of several variables, canonical correlations, and
multiple discriminant analysis.
Some readers may miss discussions in this book of recent offshoots of
factor-analytic theory such as 3-mode factor analysis, nonlinear factor analy-
sis, and nonmetric factor analysis. At one point of planning, I felt such top-
ics might be included, but as the book developed, I decided that if all topics
pertinent to factor analysis were to be included, the book might never be
finished. And so this book is confined to the more classic methods of factor
analysis.
I am deeply indebted to the many researchers upon whose published
works I have relied in developing the substance of this book. I have tried to
give credit wherever it was due, but in the event of an oversight I claim no
credit for any development made previously by another author.
I especially wish to acknowledge the influence of my onetime teacher,
Dr. Calvin W. Taylor of the University of Utah, who first introduced me to
the topic of factor analysis while I was studying for my PhD at the University
of Utah, and who also later involved me in his factor-analytic research as a
research associate. I further wish to acknowledge his role as the primary
contributing cause in the chain of events that led to the writing of this book,
for it was while substituting for him, at his request, as the instructor of his
factor-analysis course at the University of Utah in 1965 and 1966 that I first
conceived of writing this book. I also wish to express my gratitude to him for
generously allowing me to use his complete set of issues of Psychometrika and
other factor-analytic literature, which I relied upon extensively in the writing
of the first half of this book.
I am also grateful to Dr. Henry F. Kaiser who, through correspondence with
me during the early phases of my writing, widened my horizons and helped
and encouraged me to gain a better understanding of factor analysis.
To Dr. Lyle V. Jones of the L. L. Thurstone Psychometric Laboratory,
University of North Carolina, I wish to express my heartfelt appreciation for
his continued encouragement and support while I was preparing the manu-
script of this book. I am also grateful to him for reading earlier versions of
the manuscript and for making useful editorial suggestions that I have tried
to incorporate in the text. However, I accept full responsibility for the final
form that this book has taken.
I am indebted to the University of Chicago Press for granting me permis-
sion to reprint Tables 10.10, 15.2, and 15.8 from Harry H. Harman’s Modern
Factor Analysis, first edition, 1960. I am also indebted to Chester W. Harris,
managing editor of Psychometrika, and to the following authors for granting
me permission to reprint tables taken from their articles that appeared in
Psychometrika: Henry F. Kaiser, Karl G. Jöreskog, R. Darrell Bock and Rolf
E. Bargmann, R. I. Jennrich, and P. F. Sampson. I am especially grateful to
Lyle V. Jones and Joseph M. Wepman for granting me permission to reprint
a table from their article, “Dimensions of language performance,” which
appeared in the Journal of Speech and Hearing Research, September, 1961.
I am also pleased to acknowledge the contribution of my colleagues,
Dr. Elliot Cramer, Dr. John Mellinger, Dr. Norman Cliff, and Dr. Mark
Appelbaum, who in conversations with me at one time or another helped me
to gain increased insights into various points of factor-analytic theory, which
were useful when writing this book. In acknowledging this contribution,
however, I take full responsibility for all that appears in this book.
I am also grateful for the helpful criticism of earlier versions of the manu-
script of this book, which were given to me by my students in my factor-
analysis classes at the University of Utah and at the University of North
Carolina.
I also wish to express my gratitude to the following secretaries who at one
time or another struggled with the preparation of portions of this manuscript:
Elaine Stewart, Judy Nelson, Ellen Levine, Margot Wasson, Judy Schenck,
Jane Pierce, Judy Schoenberg, Betsy Schopler, and Bess Autry. Much of their
help would not have been possible, however, without the support given to
me by the L. L. Thurstone Psychometric Laboratory under the auspices of
Public Health Service research grant No. M-10006 from the National Institute
of Mental Health, and National Science Foundation Science Development
grant No. GU 2059, for which I am ever grateful.
Stanley A. Mulaik
1
Introduction
1.1 Factor Analysis and Structural Theories
By a structural theory we shall mean a theory that regards a phenomenon as
an aggregate of elemental components interrelated in a lawful way. An excel-
lent example of a structural theory is the theory of chemical compounds:
Chemical substances are lawful compositions of the atomic elements, with
the laws governing the compositions based on the manner in which the
electron orbits of different atoms interact when the atoms are combined in
molecules.
Structural theories occur in other sciences as well. In linguistics, for exam-
ple, structural descriptions of language analyze speech into phonemes or
morphemes. The aim of structural linguistics is to formulate laws govern-
ing the combination of morphemes in a particular language. Biology has a
structural theory, which takes, as its elemental components, the individual
cells of the organism and organizes them into a hierarchy of tissues, organs,
and systems. In the study of the inheritance of characters, modern geneticists
regard the manifest characteristics of an organism (phenotype) as a function
of the particular combination of genes (genotype) in the chromosomes of the
cells of the organism.
Structural theories occur in psychology as well. At the most fundamental
level a psychologist may regard behaviors as ordered aggregates of cellular
responses of the organism. However, psychologists still have considerable
difficulty in formulating detailed structural theories of behavior because
many of the physical components necessary for such theories have not been
identified and understood. But this does not make structural theories impos-
sible in psychology. The history of other sciences shows that scientists can
understand the abstract features of a structure long before they know the
physical basis for this structure. For example, the history of chemistry indi-
cates that chemists could formulate principles regarding the effects of mixing
compounds in certain amounts long before the atomic and molecular aspects
of matter were understood. Gregor Mendel stated the fundamental laws of
inheritance before biologists had associated the chromosomes of the cell
with inheritance. In psychology, Isaac Newton, in 1704, published a simple
mathematical model of the visual effects of mixing different hues, but nearly
a hundred years elapsed before Thomas Young postulated the existence of
three types of color receptors in the retina to account for the relationships
described in Newton’s model. And only a half-century later did physiologist
Helmholtz actually give a physiological basis to Young’s theory. Other physi-
ological theories subsequently followed. Much of psychological theory today
still operates at the level of stating relationships among stimulus conditions
and gross behavioral responses.
One of the most difficult problems of formulating a structural theory
involves discovering the rules that govern the composition of the aggregates
of components. The task is much easier if the scientist can show that the
physical structure he is concerned with is isomorphic to a known mathe-
matical structure. Then, he can use the many known theorems of the math-
ematical structure to make predictions about the properties of the physical
structure. In this regard, George Miller (1964) suggests that psychologists
have used the structure of euclidean space more than any other mathemati-
cal structure to represent structural relationships of psychological processes.
He cites, for example, how Isaac Newton’s (1704) model for representing the
effects of mixing different hues involved taking the hues of the spectrum in
their natural order and arranging them as points appropriately around the
circumference of a circle. The effects of color mixtures could be determined
by proportionally weighting the points of the hues in the mixture accord-
ing to their contribution to the mixture and finding the center of gravity of
the resulting points. The closer this center of gravity approached the center
of the color circle, the more the resulting color would appear gray. In addi-
tion, Miller cites Schlosberg’s (1954) representation of perceived emotional
similarities among facial expressions by a two-dimensional graph with one
dimension interpreted as pleasantness versus unpleasantness and the other
as rejection versus attention, and Osgood’s (1952) analysis of the compo-
nents of meaning of words into three primary components: (1) evaluation,
(2) power, and (3) activity.
Realizing that spatial representations have great power to suggest the exis-
tence of important psychological mechanisms, psychologists have developed
techniques, such as metric and nonmetric factor analysis and metric and non-
metric multidimensional scaling, to create, systematically, spatial representa-
tions from empirical measurements. All four of these techniques represent
objects of interest (e.g., psychological “variables” or stimulus “objects”) as
points in a multidimensional space. The points are so arranged with respect
to one another in the space as to reflect relationships of similarity among the
corresponding objects (variables) as given by empirical data on these objects.
Although a discussion of the full range of techniques using spatial repre-
sentations of relationships found in data would be of considerable interest,
we shall confine ourselves, in this book, to an in-depth examination of the
methods of factor analysis. The reason for this is that the methodology of
factor analysis is historically much more fully developed than, say, that
of multidimensional scaling; as a consequence, prescriptions for the ways
of doing factor analysis are much more established than they are for these
other techniques. Furthermore, factor analysis, as a technique, dovetails very
nicely with such classic topics in statistics as correlation, regression, and
multivariate analysis, which are also well developed. No doubt, as the gains
in the development of multidimensional scaling, especially the nonmetric
versions of it, become consolidated, there will be authors who will write text-
books about this area as well. In the meantime, the interested reader can
consult Torgerson’s (1958) textbook on metric multidimensional scaling for
an account of that technique.
1.2 Brief History of Factor Analysis as a Linear Model
The history of factor analysis can be traced back into the latter half of the
nineteenth century to the efforts of the British scientist Francis Galton (1869,
1889) and other scientists to discover the principles of the inheritance of
manifest characters (Mulaik, 1985, 1987). Unlike Gregor Mendel (1866), who
is today considered the founder of modern genetics, Galton did not try to
discover these principles chiefly through breeding experiments using sim-
ple, discrete characters of organisms with short maturation cycles; rather, he
concerned himself with human traits such as body height, physical strength,
and intelligence, which today are not believed to be simple in their genetic
determination. The general question asked by Galton was: To what extent are
individual differences in these traits inherited and by what mechanism? To
be able to answer this question, Galton had to have some way of quantifying
the relationships of traits of parents to traits of offspring. Galton’s solution
to this problem was the method of regression. Galton noticed that, when
he took the heights of sons and plotted them against the heights of their
fathers, he obtained a scatter of points indicating an imperfect relationship.
Nevertheless, taller fathers tended strongly to have on the average taller sons
than shorter fathers. Initially, Galton believed that the average height of sons
of fathers of a given height would be the same as the height of the fathers, but
instead the average was closer to the average height of the population of sons
as a whole. In other words, the average height of sons “regressed” toward
the average height in the population and away from the more extreme height
of their fathers. Galton believed this implied a principle of inheritance and
labeled it “regression toward the mean,” although today we regard the
regression phenomenon as a statistical artifact associated with the linear-
regression model. In addition, Galton discovered that he could fit a straight
line, called the regression line, with positive slope very nicely through the
average heights of sons whose fathers had a specified height. Upon consulta-
tion with the mathematician Karl Pearson, Galton learned that he could use
a linear equation to relate the heights of fathers to heights of sons (cf. Pearson
and Lee, 1903):
Y = a + bX + E    (1.1)
Here
Y is the height of a son
a the intercept with the Y-axis of the regression line passing through the
averages of sons with fathers of fixed height
b the slope of the regression line
X the height of the father
E an error of prediction
As a measure of the strength of relationship, Pearson used the ratio of the
variance of the predicted variable, Ŷ=a+bX, to the variance of Y.
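To make this ratio concrete, the following short Python sketch (an illustration with invented data, not Galton's actual measurements) simulates father and son heights, fits the regression line, and confirms that var(Ŷ)/var(Y) equals the squared product-moment correlation:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(68.0, 2.5, size=1000)             # fathers' heights (inches)
    y = 34.0 + 0.5 * x + rng.normal(0.0, 2.0, 1000)  # sons' heights, with error E

    b, a = np.polyfit(x, y, 1)           # slope b and intercept a of Equation 1.1
    y_hat = a + b * x                    # the predicted component Y-hat
    print(y_hat.var() / y.var())         # Pearson's variance ratio
    print(np.corrcoef(x, y)[0, 1] ** 2)  # identical: the squared correlation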
Pearson, who was an accomplished mathematician and an articulate writer,
recognized that the mathematics underlying Galton’s regression method had
already been worked out nearly 70 years earlier by Gauss and other mathema-
ticians in connection with determining the “true” orbits of planets from obser-
vations of these orbits containing error of observation. Subsequently, because
the residual variate E appeared to be normally distributed in the prediction of
height, Pearson identified the “error of observation” of Gauss’s theory of least-
squares estimation with the “error of prediction” of Equation 1.1 and treated
the predicted component Ŷ=a+bX as an estimate of an average value. This ini-
tially amounted to supposing that the average heights of sons would be given
in terms of their fathers’ heights by the equation Y=a+bX (without the error
term), if nature and environment did not somehow interfere haphazardly in
modifying the predicted value. Although Pearson subsequently founded the
field of biometry on such an exploitation of Gauss’s least-squares theory of
error, modern geneticists now realize that heredity can also contribute to the E
term in Equation 1.1 and that an uncritical application of least-squares theory
in the study of the inheritance of characters can be grossly misleading.
Intrigued with the mathematical problems implicit in Galton’s program to
metricize biology, anthropology, and psychology, Pearson became Galton’s
junior colleague in this endeavor and contributed enormously as a math-
ematical innovator (Pearson, 1895). After his work on the mathematics of
regression, Pearson concerned himself with finding an index for indicating
the type and degree of relationship between metric variables (Pearson, 1909).
This resulted in what we know today as the product-moment correlation
coefficient, given by the formula
ρXY = E[(X − E(X))(Y − E(Y))] / √{E[(X − E(X))²]E[(Y − E(Y))²]}    (1.2)
where
E[] is the expected-value operator
X and Y are two random variables
This index takes on values between −1 and +1, with 0 indicating no rela-
tionship. A deeper meaning for this coefficient will be given later when we
consider that it represents the cosine of the angle between two vectors, each
standing for a different variable.
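A minimal Python check (illustrative only; the data are invented) computes the sample analogue of Equation 1.2 and verifies that it equals the cosine of the angle between the two centered variable vectors:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = 0.6 * x + rng.normal(size=200)

    xd, yd = x - x.mean(), y - y.mean()  # deviation (centered) scores
    r = (xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd))  # Equation 1.2, sample form
    cosine = (xd @ yd) / (np.linalg.norm(xd) * np.linalg.norm(yd))
    print(r, cosine)                     # the two values agree exactly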
In and of itself the product-moment correlation coefficient is descriptive
only in that it shows the existence of a relationship between variables with-
out showing the source of this relationship, which may be causal or coin-
cidental in nature. When the researcher obtains a nonzero value for the
product-moment correlation coefficient, he must supply an explanation for
the relationship between the related variables. This usually involves finding
that one variable is the cause of the other or that some third variable (and
maybe others) is a common cause of them both. In any case, interpretations
of correlations are greatly facilitated if the researcher already has a struc-
tural model on which to base his interpretations concerning the common
component producing a correlation.
To illustrate, some of the early applications of the correlation coefficient
in genetics led nowhere in terms of furthering the theory of inheritance
because explanations given to nonzero correlation coefficients were fre-
quently tautological, amounting to little more than saying that a relation-
ship existed because such-and-such relationship-causing factor was present
in the variables exhibiting the relationship. However, when Mendel’s theory
of inheritance (Mendel, 1865) was rediscovered during the last decade of the
nineteenth century, researchers of hereditary relationships had available to
them a structural mechanism for understanding how characters in parents
were transmitted to offspring. Working with a trait that could be measured
quantitatively, a geneticist could hypothesize a model of the behavior of the
genes involved and from this model draw conclusions about, say, the cor-
relation between relatives for the trait in the population of persons. Thus,
product-moment correlation became not only an exploratory, descriptive
index but an index useful in hypothesis testing. R. A. Fisher (1918) and Sewall
Wright (1921) are credited with formulating the methodology for using cor-
relation in testing Mendelian hypotheses.
With the development of the product-moment correlation coefficient, other
related developments followed: In 1897, G. U. Yule published his classic
paper on multiple and partial correlation. The idea of multiple correlation
was this: Suppose one has p variables, X1, X2,…, Xp, and wishes to find that
linear combination
X̂1 = β2X2 + ⋯ + βpXp    (1.3)
of the variables X2,…, Xp, which is maximally correlated with X1. The prob-
lem is to find the weights β2,…, βp that make the linear combination X̂1
maximally correlated with X1. After Yule’s paper was published, multiple
correlation became quite useful in prediction problems and turned out to
be systematically related but not exactly equivalent to Gauss’s solution for
linear least-squares estimation of a variable, using information obtained on
several independent variables observed at certain preselected values. In any
case, with multiple correlation, the researcher can consider several compo-
nents (on which he had direct measurements) as accounting for the variabil-
ity in a given variable.
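As an illustrative sketch of Yule's idea (the variables and weights below are invented), the weights β2,…, βp of Equation 1.3 can be found by least squares, after which the multiple correlation is simply the correlation between X1 and the composite X̂1:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 3))        # predictors X2, X3, X4
    x1 = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.8, size=500)

    Xc = X - X.mean(axis=0)              # centering removes the intercept
    beta, *_ = np.linalg.lstsq(Xc, x1 - x1.mean(), rcond=None)
    x1_hat = Xc @ beta                   # the composite of Equation 1.3
    print(np.corrcoef(x1, x1_hat)[0, 1]) # the multiple correlation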
At this point, the stage was set for the development of factor analysis. By
the time 1900 had arrived, researchers had obtained product-moment cor-
relations on many variables such as physical measurements of organisms,
intellectual-performance measures, and physical-performance measures.
With variables showing relationships with many other variables, the need
existed to formulate structural models to account for these relationships. In
1901, Pearson published a paper on lines and planes of closest fit to systems
of points in space, which formed the basis for what we now call the prin-
cipal-axes method of factoring. However, the first common-factor-analysis
model is attributed to Spearman (1904). Spearman intercorrelated the test
scores of 36 boys on topics such as classics, French, English, mathematics,
discrimination of tones, and musical talent. Spearman had a theory, primar-
ily attributed by him to Francis Galton and Herbert Spencer, that the abilities
involved in taking each of these six tests were a general ability, common to
all the tests, and a specific ability, specific to each test. Mathematically, this
amounts to the equation
Yj = ajG + ψj    (1.4)
where
Yj is the jth manifest variable (e.g., test score in mathematics)
aj is a weight indicating the degree to which the latent general-ability vari-
able G participates in Yj
ψj is an ability variable uncorrelated with G and specific to Yj
Without loss of generality, one can assume that E(Yj)=E(ψj)=E(G)=0, for all j,
implying that all variables have zero means. Then saying that ψj is specific to
Yj amounts to saying that ψj does not covary with another manifest variable
Yk, so that E(Yk ψj)=0, with the consequence that E(ψj ψk)=0 (implying that
different specific variables do not covary). Thus the covariances between dif-
ferent variables are due only to the general-ability variable, that is,
E(YjYk) = E[(ajG + ψj)(akG + ψk)]
        = E(ajakG² + akGψj + ajGψk + ψjψk)
        = ajakE(G²)    (1.5)
From covariance to correlation is a simple step. Assuming in Equation 1.5
that E(G²)=1 (the variance of G is equal to 1), we then can derive the correla-
tion between Yj and Yk:
ρjk = ajak    (1.6)
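A brief simulation (illustrative only; the loadings are invented) shows Equation 1.6 at work: generating four manifest variables from one general factor and uncorrelated specifics, as in Equation 1.4, reproduces sample correlations close to the products ajak:

    import numpy as np

    rng = np.random.default_rng(3)
    a = np.array([0.9, 0.8, 0.7, 0.6])   # general-ability loadings aj
    n = 200_000
    G = rng.normal(size=n)               # latent general factor, variance 1
    psi = rng.normal(size=(4, n)) * np.sqrt(1 - a[:, None] ** 2)
    Y = a[:, None] * G + psi             # Equation 1.4 for four variables

    R = np.corrcoef(Y)                   # rows of Y are the variables
    print(R[0, 1], a[0] * a[1])          # both approximately .72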
Spearman noticed that the pattern of correlation coefficients obtained among
the six intellectual-test variables in his study was consistent with the model
of a single common variable and several specific variables.
For the remainder of his life Spearman championed the doctrine of one
general ability and many specific abilities, although evidence increasingly
accumulated which disconfirmed such a simple model for all intellectual-
performance variables. Other British psychologists contemporary with
Spearman either disagreed with his interpretation of general ability or modi-
fied his “two-factor” (general and specific factors) theory to include “group
factors,” corresponding to ability variables not general to all intellectual
variables but general to subgroups of them. The former case was exemplified
by Godfrey H. Thomson (cf. Thomson, 1956), who asserted that the mind
does not consist of a part which participates in all mental performance and a
large number of particular parts which are specific to each performance.
Rather, the mind is a relatively undifferentiated collection of many tiny parts.
Any intellectual performance that we may consider involves only a “sample”
of all these many tiny parts. Two performance measures are intercorrelated
because they sample overlapping sets of these tiny parts. And Thomson was
able to show how data consistent with Spearman’s model were consistent
with his sampling-theory model also.
Another problem with G is that one tends to define it mathematically as
“the common factor common to all the variables in the set” rather than in
terms of something external to the mathematics, for example “rule inferring
ability.” This is further exacerbated by the fact that to know what something
is, you need to know what it is not. If all variables in a set have G in com-
mon, you have no instance for which it does not apply, and G easily becomes
mathematical G by default. If there were other variables that were not due to
G in the set, this would narrow the possibilities as to what G is in the world
by indicating what it is not pertinent to. This problem has subtly bedeviled
the theory of G throughout its history.
On the other hand, psychologists such as Cyril Burt and Philip E. Vernon
took the view that in addition to a general ability (general intelligence), there
were less general abilities such as verbal–numerical–educational ability and
practical–mechanical–spatial–physical ability, and even less general abilities
such as verbal comprehension, reading, spelling, vocabulary, drawing, hand-
writing, and mastery of various subjects (cf. Vernon, 1961). In other words,
the mind was organized into a hierarchy of abilities running from the most
general to the most specific. Their model of test scores can be represented
mathematically much like the equation for multiple correlation as
Yj = ajG + bj1G1 + ⋯ + bjsGs + cj1H1 + ⋯ + cjtHt + ψj
where
Yj is a manifest intellectual-performance variable
aj, bj1,…, bjs, cj1,…, cjt are the weights
G is the latent general-ability variable
G1,…, Gs are the major group factors
H1,…, Ht are the minor group factors
ψj is a specific-ability variable for the jth variable
Correlations between two observed variables, Yj and Yk, would depend upon
having not only the general-ability variable in common but group-factor
variables in common as well.
By the time all these developments in the theory of intellectual abilities
had occurred, the 1930s had arrived, and the center of new developments in
this theory (and indirectly of new developments in the methodology of com-
mon-factor analysis) had shifted to the United States where L. L. Thurstone
at the University of Chicago developed his theory and method of multiple-
factor analysis. By this time, the latent-ability variables had come to be called
“factors” owing to a usage of Spearman (1927).
Thurstone differed from the British psychologists over the idea that there
was a general-ability factor and that the mind was hierarchically organized.
For him, there were major group factors but no general factor. These major
group factors he termed the primary mental abilities. That he did not cater
to the idea of a hierarchical organization for the primary mental abilities
was most likely because of his commitment to a principle of parsimony; this
caused him to search for factors which related to the observed variables in
such a way that each factor pertained as much as possible to one nonover-
lapping subset of the observed variables. Sets of common factors displaying
this property, Thurstone said, had a “simple structure.” To obtain an opti-
mal simple structure, Thurstone had to consider common-factor variables
that were intercorrelated. And in the case of factor analyses of intellectual-
performance tests, Thurstone discovered that usually his common factors
were all positively intercorrelated with one another. This fact was consid-
erably reassuring to the British psychologists who believed that by relying
on his simple-structure concept Thurstone had only hidden the existence of
a general-ability factor, which they felt was evidenced by the correlations
among his factors.
Perhaps one reason why Thurstone’s simple-structure approach to factor
analysis became so popular—not just in the United States but in recent years
in England and other countries as well—was because simple-structure solu-
tions could be defined in terms of more-or-less objective properties which
computers could readily identify and the factors so obtained were easy to
interpret. It seemed by the late 1950s when the first large-scale electronic
computers were entering universities that all the drudgery could be taken
out of factor-analytic computations and that the researcher could let the com-
puter do most of his work for him. Little wonder, then, that not much thought
was given to whether theoretically hierarchical solutions were preferable
to simple-structure solutions, especially when hierarchical solutions did
not seem to be blindly obtainable. And believing that factor analysis could
automatically and blindly find the key latent variables in a domain, what
researchers would want hierarchical solutions which might be more difficult
to interpret than simple-structure solutions?
The 1950s and early 1960s might be described as the era of blind factor
analysis. In this period, factor analysis was frequently applied agnostically,
as regards structural theory, to all sorts of data, from personality-rating vari-
ables, Rorschach-test-scoring variables, physiological variables, semantic
differential variables, and biographical-information variables (in psychol-
ogy), to characteristics of mining districts (in mineralogy), characteristics
of cities (in city planning), characteristics of arrowheads (in anthropology),
characteristics of wasps (in zoology), variables in the stock market (in eco-
nomics), and aromatic-activity variables (in chemistry), to name just a few
applications. In all these applications the hope was that factor analysis could
bring order and meaning to the many relationships between variables.
Whether blind factor analyses often succeeded in providing meaningful
explanations for the relationships among variables is a debatable question.
In the case of Rorschach-test-score variables (Cooley and Lohnes, 1962) there
is little question that blind factor analysis failed to provide a manifestly
meaningful account of the structure underlying the score variables. Again,
factor analyses of personality trait-rating variables have not yielded factors
universally regarded by psychologists as explanatory constructs of human
behavior (cf. Mulaik, 1964; Mischel, 1968). Rather, the factors obtained in
personality trait-rating studies represent confoundings of intrarater pro-
cesses (semantic relationships among trait words) with intraratee processes
(psychological and physiological relationships within the persons rated). In
the case of factor-analytic studies of biographical inventory items, the chief
benefit has been in terms of classifying inventory items into clusters of
similar content, but as yet no theory as to life histories has emerged from
such studies. Still, blind factor analyses have served classification purposes
quite well in psychology and other fields, but these successes should not be
interpreted as generally providing advances in structural theories as well.
In the first 60 years of the history of factor analysis, factor-analytic meth-
odologists developed heuristic algebraic solutions and corresponding algo-
rithms for performing factor analyses. Many of these methods were designed
to facilitate the finding of approximate solutions using mechanical hand
calculators. Harman (1960) credits Cyril Burt with formulating the centroid
method, but Thurstone (1947) gave it its name and developed it more fully as
an approximation to the computationally more challenging principal axes,
the eigenvector–eigenvalue solution put forth by Hotelling (1933). Until the
development of electronic computers, the centroid method was a simple and
straightforward solution that closely approximated the principal axes solu-
tion. But in the 1960s, computers came on line as the government poured
billions into the development of computers for decryption work and into the
mathematics of nuclear physics in developing nuclear weapons. Out of the lat-
ter came fast computer algorithms for finding eigenvectors and eigenvalues.
Subsequently, factor analysts discovered the computer, and the eigenvector
and eigenvalue routines and began programming them to obtain principal
axes solutions, which rapidly became the standard approach. Nevertheless
most of the procedures initially used were still based on least-squares
methods, for the statistically more sophisticated method of maximum-
likelihood estimation was still both mathematically and computationally
challenging.
Throughout the history of factor analysis there were statisticians who
sought to develop a more rigorous statistical theory for factor analysis.
In 1940, Lawley (1940) made a major breakthrough with the development
of equations for the maximum-likelihood estimation of factor loadings
(assuming multivariate normality for the variables), and he followed up
this work with other papers (1942, 1943, 1949) that sketched a framework
for statistical testing in factor analysis. The problem was, to use these
methods you needed maximum-likelihood estimates of the factor load-
ings. Lawley’s computational recommendations for finding solutions were
not practical for more than a few variables. So, factor analysts continued to
use the centroid method and to regard any factor loading less than .30 as
“nonsignificant.”
In the 1950s, Rao (1955) developed an iterative computer program for
obtaining maximum-likelihood estimates, but this was later shown not to
converge. Howe (1955) showed that the maximum-likelihood estimates of
Lawley (1949) could be derived mathematically without making any distri-
butional assumptions at all by simply seeking to minimize the determinant
of the matrix of partial correlations among residual variables after partial-
ling out common factors from the original variables. Brown (1961) noted that
the same idea was put forth on intuitive grounds by Thurstone in 1953. Howe
also provided a far more efficient Gauss–Seidel algorithm for computing the
solution. Unfortunately, this was ignored or unknown. In the meantime,
Harman and Jones (1966) presented their Gauss–Seidel minres method of
least-squares estimation, which rapidly converged and yielded close approx-
imations to the maximum-likelihood estimates.
The major breakthrough mathematically, statistically, and computa-
tionally in maximum-likelihood exploratory factor analysis, was made
by Karl Jöreskog (1967), then a new PhD in mathematical statistics from
the University of Uppsala in Sweden. He applied a then recently devel-
oped numerical algorithm of Fletcher and Powell (1963) to the maximum-
likelihood estimation of the full set of parameters of the common-factor
model. The algorithm was quite rapid in convergence. Jöreskog’s algorithm
has been the basis for maximum-likelihood estimation in most commer-
cial computer programs ever since. However, the algorithm was not always
well integrated with other computing methods in some major commercial
programs, so that such programs report principal-components eigenvalues
rather than those of the weighted reduced correlation matrix of the common-
factor model provided by Jöreskog's method, which he used in initially
determining the number of factors to retain.
Recognizing that more emphasis should be placed on the testing of hypoth-
eses in factor-analytic studies, factor analysts in the latter half of the 1960s
began increasingly to concern themselves with the methodology of hypoth-
esis testing in factor analysis. The first efforts in this regard, using what are
known as procrustean transformations, trace their beginnings to a paper by
Mosier (1939) that appeared in Psychometrika nearly two decades earlier. The
techniques of procrustean transformations seek to transform (by a linear
transformation) the obtained factor-pattern matrix (containing regression
coefficients for the observed variables regressed onto the latent, underlying
factors) to be as much as possible like a hypothetical factor-pattern matrix
constructed according to some structural hypothesis pertaining to the vari-
ables studied. When the transformed factor-pattern matrix is obtained it is
tested for its degree of fit to the hypothetical factor-pattern matrix. For exam-
ple, Guilford (1967) used procrustean techniques to isolate factors predicted
by his three-faceted model of the intellect. However, hypothesis testing with
procrustean transformations has been displaced in favor of confirmatory
factor analysis since the 1970s, because the latter is able to assess how well
the model reproduces the sample covariance matrix.
Toward the end of the 1960s, Bock and Bargmann (1966) and Jöreskog
(1969a) considered hypothesis testing from the point of view of fitting a
hypothetical model to the data. In these approaches the researcher speci-
fies, ahead of time, various parameters of the common-factor-analysis model
relating manifest variables to hypothetical latent variables according to a
structural theory pertaining to the manifest variables. The resulting model
is then used to generate a hypothetical covariance matrix for the manifest
variables that is tested for goodness of fit to a corresponding empirical cova-
riance matrix (with unspecified parameters of the factor-analysis model
adjusted to make the fit to the empirical covariance matrix as good as pos-
sible). These approaches to factor analysis have had the effect of encourag-
ing researchers to have greater concern with substantive, structural theories
before assembling collections of variables and implementing the factor-
analytic methods.
We will treat confirmatory factor analysis in Chapter 15, although it is
better treated as a special case of structural equation modeling, which would
be best dealt in a separate book. The factor analysis we primarily treat in this
book is exploratory factor analysis, which may be regarded as an “abductive,”
“hypothesis-generating” methodology rather than a “hypothesis-testing”
methodology. With the development of structural equation modeling,
researchers have come to see traditional factor analysis as a methodology
to be used, among other methods, at the outset of a research program, to
formulate hypotheses about latent variables and their relation to observed
variables. Furthermore it is now regarded as just one of several approaches
to formulating such hypotheses, although it has general applications any
time one believes that a set of observed variables are dependent upon a set
of latent common factors.
1.3 Example of Factor Analysis
At this point, to help the reader gain a more concrete appreciation of what is
obtained in a factor analysis, it may help to consider a small factor-analytic
study conducted by the author in connection with a research project designed
to predict the reactions of soldiers to combat stress. The researchers had the
theory that an individual soldier’s reaction to combat stress would be a func-
tion of the degree to which he responded emotionally to the potential dan-
ger of a combat situation and the degree to which he nevertheless felt he
could successfully cope with the situation. It was felt that, realistically, com-
bat situations should arouse strong feelings of anxiety for the possible dan-
gers involved. But ideally these feelings of anxiety should serve as internal
stimuli for coping behaviors which would in turn provide the soldier with
a sense of optimism in being able to deal with the situation. Soldiers who
respond pessimistically to strong feelings of fear or anxiety were expected
to have the greatest difficulties in managing the stress of combat. Soldiers
who showed little appreciation of the dangers of combat were also expected
to be unprepared for the strong anxiety they would likely feel in a real com-
bat situation. They would have difficulties in managing the stress of combat,
especially if they had past histories devoid of successful encounters with
stressful situations.
To implement research on this theory, it was necessary to obtain measures
of a soldier’s emotional concern for the danger of a combat situation and of
his degree of optimism in being able to cope with the situation. To obtain
these measures, 14 seven-point adjectival rating scales were constructed, half
of which were selected to measure the degree of emotional concern for threat,
and half of which were selected to measure the degree of optimism in coping
with the situation. However, when these adjectival scales were selected, the
researchers were not completely certain to what extent these scales actually
measured two distinct dimensions of the kind intended.
Thus, the researchers decided to conduct an experiment to isolate the
common-meaning dimensions among these 14 scales. Two hundred and
twenty-five soldiers in basic training were asked to rate the meaning of “firing
my rifle in combat” using the 14 adjectival scales, with ratings being obtained
from each soldier on five separate occasions over a period of 2 months.
Intercorrelations among the 14 scales were then obtained by summing the
cross products over the 225 soldiers and five occasions. (Intercorrelations
were obtained in this way because the researchers felt that, although on any
one occasion various soldiers might differ in their conceptions of “firing
my rifle in combat” and on different occasions an individual soldier might
have different conceptions, the major determinants of covariation among the
adjectival scales would still be conventional-meaning dimensions common
to the scales.) The matrix of intercorrelations, illustrated in Table 1.1, was then
subjected to image factor analysis (cf. Jöreskog, 1962), which is a relatively
TABLE 1.1
Intercorrelations among 14 Scales
1 1.00 Frightening
2 .20 1.00 Useful
3 .65 −.26 1.00 Nerve-shaking
4 −.26 .74 −.32 1.00 Hopeful
5 .71 −.27 .70 −.32 1.00 Terrifying
6 −.25 .64 −.30 .68 −.31 1.00 Controllable
7 .64 −.30 .73 −.34 .74 −.34 1.00 Upsetting
8 −.40 .39 −.44 .40 −.47 .39 −.53 1.00 Painless
9 −.13 .24 −.11 .24 −.16 .26 −.17 .27 1.00 Exciting
10 −.45 .36 −.49 .42 −.53 .39 −.53 .58 .32 1.00 Nondepressing
11 .59 −.26 .63 −.31 .32 −.32 .69 −.45 −.15 −.51 1.00 Disturbing
12 −.30 .69 −.35 .75 −.36 .68 −.39 .44 .28 .48 −.38 1.00 Successful
13 −.36 .35 −.50 .43 −.42 .38 −.52 .45 .20 .49 −.45 .46 1.00 Settling (vs. unsettling)
14 −.35 .62 −.36 .65 −.38 .62 −.46 .50 .36 .51 −.45 .67 .49 1.00 Bearable
accurate but simple-to-compute approximation of common-factor analysis.
Four orthogonal factors were retained, and the matrix of “loadings” associ-
ated with the “unrotated factors” is given in Table 1.2. The coefficients in this
matrix are correlations of the observed variables with the common factors.
However, the unrotated factors of Table 1.2 are not readily interpretable, and
they do not in this form appear to correspond to the two expected-meaning
dimensions used in selecting the 14 scales. At this point it was decided, after
TABLE 1.2
Unrotated Factors
1 2 3 4
1 Frightening .73 .35 .01 −.15
2 Useful −.56 .60 −.10 .12
3 Nerve-shaking .78 .31 −.05 −.12
4 Hopeful −.62 .60 −.09 .13
5 Terrifying .78 .34 .43 .00
6 Controllable −.59 .52 −.06 .08
7 Upsetting .84 .29 −.08 .00
8 Painless −.65 .07 .03 −.33
9 Exciting −.29 .21 −.02 −.35
10 Nondepressing −.70 .04 .03 −.36
11 Disturbing .70 .13 −.59 −.05
12 Successful −.67 .54 −.04 .05
13 Settling −.63 .09 .08 −.16
14 Bearable −.69 .43 .04 −.12
some experimentation, to rotate only the first two factors and to retain the
latter two unrotated factors as “difficult to interpret” factors. Rotation of
the first two factors was done using Kaiser’s normalized Varimax method
(cf. Kaiser, 1958). The resulting rotated matrix is given in Table 1.3.
The meaning of “rotation” may not be clear to the reader. Therefore let us
consider the plot in Figure 1.1 of the 14 variables, using for their coordinates
the loadings on the variables on the first two unrotated factors. Here we see
that the coordinate axes do not correspond to variables that would be clearly
definable by their association with the variables. On the other hand, note the
cluster of points in the upper right-hand quadrant (variables 1, 3, 5, and 7)
and the cluster of points in the upper left-hand quadrant (variables 2, 4, 6,
12, and 14). It would seem that one could rotate the coordinate axes so as to
have them pass near these clusters. As a matter of fact, this is what has been
done to obtain the rotated coordinates in Table 1.3, which are plotted in
Figure 1.2.
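For readers who wish to experiment, the following Python sketch implements a common SVD-based algorithm for the raw varimax criterion (an illustration only; the study itself used Kaiser's normalized Varimax, which first rescales the rows of the loading matrix to unit length):

    import numpy as np

    def varimax(L, tol=1e-8, max_iter=500):
        """Rotate a loading matrix L (variables x factors) by raw varimax."""
        p, k = L.shape
        T = np.eye(k)                    # accumulated orthogonal rotation
        d_old = 0.0
        for _ in range(max_iter):
            A = L @ T
            # gradient of the varimax criterion with respect to the rotation
            B = L.T @ (A ** 3 - A * (A ** 2).sum(axis=0) / p)
            U, s, Vt = np.linalg.svd(B)
            T = U @ Vt
            if s.sum() < d_old * (1.0 + tol):   # criterion stopped improving
                break
            d_old = s.sum()
        return L @ T, T

    # For example, rotating the loadings of the first four variables on the
    # first two unrotated factors of Table 1.2:
    L = np.array([[.73, .35], [-.56, .60], [.78, .31], [-.62, .60]])
    rotated, T = varimax(L)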
Rotated factor 1 appears almost exclusively associated with variables 1, 3, 5,
7, and 11, which were picked as measures of a fear response, whereas rotated
factor 2 appears most closely associated with variables 2, 4, 6, 12, and 14,
which were picked as measures of optimism regarding outcome. Although
variables 8, 10, and 13 appear now to be consistent in their relationships to
these two dimensions, they are not unambiguous measures of either factor.
Variable 9 appears to be a poor measure of these two dimensions.
Some factor analysts at this point might prefer to relax the requirement
that the obtained factors be orthogonal to one another. They would, in this
case, most likely construct a unit-length vector collinear with variable 7 to
TABLE 1.3
Rotated Factors
1 2
1 Frightening .83 −.10
2 Useful −.11 .85
3 Nerve-shaking .84 −.16
4 Hopeful −.17 .87
5 Terrifying .86 −.14
6 Controllable −.18 .80
7 Upsetting .89 −.22
8 Painless −.50 .44
9 Exciting −.12 .34
10 Nondepressing −.57 .43
11 Disturbing .67 −.29
12 Successful −.24 .85
13 Settling −.48 .43
14 Bearable −.33 .76
FIGURE 1.1
Plot of 14 variables on unrotated factors 1 and 2.
represent an “oblique” factor 1 and another unit-length vector collinear with
variable 12 to represent an “oblique” factor 2. The resulting oblique factors
would be negatively correlated with one another and would be interpreted
as dimensions that are slightly negatively correlated. Such “oblique” factors
are drawn in Figure 1.2 as arrows from the origin.
In conclusion, factor analysis has isolated two dimensions among the 14
scales which appear to correspond to dimensions expected to be present
when the 14 scales were chosen for study. Factor analysis has also shown
that some of the 14 scales (variables 8, 9, 10, and 13) are not unambiguous
measures of the intended dimensions. These scales can be discarded in con-
structing a final set of scales for measuring the intended dimensions. Factor
analysis has also revealed the presence of additional, unexpected dimen-
sions among the scales. Although it is possible to hazard guesses as to the
meaning of these additional dimensions (represented by factors 3 and 4),
such guessing is not strongly recommended. There is considerable likeli-
hood that the interpretation of these dimensions will be spurious. This is not
to say that factor analysis cannot, at times, discover something unexpected
but interpretable. It is just that in the present data the two additional dimen-
sions are so poorly determined from the variables as to be interpretable only
with a considerable risk of error.
This example of factor analysis represents the traditional, exploratory use of
factor analysis where the researcher has some idea of what he will encounter
but nevertheless allows the method freedom to find unexpected dimensions
(or factors).
FIGURE 1.2
Plot of 14 variables on rotated factors 1 and 2.
2
Mathematical Foundations
for Factor Analysis
2.1 Introduction
Ideally, one begins a study of factor analysis with a mathematical background
of up to a year of calculus. This is not to say that factor analysis requires
an extensive knowledge of calculus, because calculus is used in only a few
instances, such as in finding weights to assign to a set of independent vari-
ables to maximize or minimize the variance of a resulting linear combination.
Or it will be used in finding a transformation matrix that minimizes, say,
a criterion for simple structure in factor rotation. But having calculus in one’s
background provides sufficient exposure to working with mathematical con-
cepts so that one will have overcome reacting to a mathematical subject such
as factor analysis as though it were an esoteric subject comprehensible only
to select initiates to its mysteries. One will have learned those subjects such
as trigonometry, college algebra, and analytical geometry upon which factor
analysis draws heavily.
In practice, however, the author recognizes that many students who now
undertake a study of factor analysis come from the behavioral, social, and
biological sciences, where mathematics is not greatly stressed. Consequently,
in this chapter the author attempts to provide a brief introduction to those
mathematical topics from modern algebra, trigonometry, analytic geometry,
and calculus that will be necessary in the study of factor analysis. Moreover,
the author also provides a background in operations with vectors and matrix
algebra, which even students who have just had first-year calculus will more
than likely find new. In this regard most students will find themselves
on even ground if they have had up to college algebra, because operations with
vectors and matrix algebra are extensions of algebra using a new notation.
2.2 Scalar Algebra
By the term “scalar algebra,” we refer to the ordinary algebra applicable to
the real numbers with which the reader should already be familiar. The use
of this term distinguishes ordinary algebra from operations with vectors
and matrix algebra, which we will take up shortly.
2.2.1 Fundamental Laws of Scalar Algebra
The following laws govern the basic operations of scalar algebra such as
addition, subtraction, multiplication, and division:
1. Closure law for addition: a + b is a unique real number.
2. Commutative law for addition: a + b=b + a.
3. Associative law for addition: (a + b) + c=a + (b + c).
4. Closure law for multiplication: ab is a unique real number.
5. Commutative law for multiplication: ab=ba.
6. Associative law for multiplication: a(bc)=(ab)c.
7. Identity law for addition: There exists a number 0 such that
a + 0 = 0 + a = a.
8. Inverse law for addition: a + (−a)=(−a) + a=0.
9. Identity law for multiplication: There exists a number 1 such that
a1 = 1a = a.
10. Inverse law for multiplication: a(1/a) = (1/a)a = 1, for a ≠ 0.
11. Distributive law: a(b + c)=ab + ac.
The above laws are sufficient for dealing with the real-number system.
However, the special properties of zero should be pointed out:
a0 = 0
a/0 is undefined
0/0 is indeterminate
2.2.1.1 Rules of Signs
The rule of signs for multiplication is given as
a(−b) = −(ab) and (−a)(−b) = +(ab)
The rule of signs for division is given as
(−a)/b = a/(−b) and (−a)/(−b) = a/b
The rule of signs for removing parentheses is given as
−(a − b) = −a + b and −(a + b) = −a − b
2.2.1.2 Rules for Exponents
If n is a positive integer, then x^n will stand for

xx … x with n factors

If x^n = a, then x is known as the nth root of a. The following rules govern
the use of exponents:
1. x^a x^b = x^{a+b}.
2. (x^a)^b = x^{ab}.
3. (xy)^a = x^a y^a.
4. (x/y)^a = x^a/y^a.
5. x^a/x^b = x^{a−b}.
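These rules are easy to verify with concrete numbers. The following Python sketch (an illustration added here, not part of the original text) checks each rule; the values are powers of two so that every floating-point operation is exact:

    # Illustrative check of the five exponent rules above.
    # Powers of two are used so all floating-point results are exact.
    x, y = 4.0, 2.0
    a, b = 4, 2

    assert x**a * x**b == x**(a + b)     # rule 1
    assert (x**a)**b == x**(a * b)       # rule 2
    assert (x * y)**a == x**a * y**a     # rule 3
    assert (x / y)**a == x**a / y**a     # rule 4
    assert x**a / x**b == x**(a - b)     # rule 5
    print("all five exponent rules hold for these values")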
2.2.1.3 Solving Simple Equations
Let x stand for an unknown quantity, and let a, b, c, and d stand for known
quantities. Then, given the following equation
ax + b = cx + d

the unknown quantity x can be found by applying operations to both sides
of the equation until only an x remains on one side of the equation and the
known quantities on the other side. That is,

ax − cx + b = d   (subtract cx from both sides)
ax − cx = d − b   (subtract b from both sides)
(a − c)x = d − b   (by reversing the distributive law)
x = (d − b)/(a − c)   (by dividing both sides by a − c)
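The same steps translate directly into a short Python function. This sketch (an illustration added here, not part of the original text; the function name is arbitrary) returns x for given known quantities:

    # Illustrative sketch: solve ax + b = cx + d for x, following the
    # derivation above; assumes a != c so a unique solution exists.
    def solve_linear(a, b, c, d):
        if a == c:
            raise ValueError("a equals c: no unique solution")
        return (d - b) / (a - c)

    # Example: 5x + 2 = 3x + 10 gives x = (10 - 2)/(5 - 3) = 4
    print(solve_linear(5, 2, 3, 10))  # 4.0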
2.3 Vectors
It may be of some help to those who have not had much exposure to the
concepts of modern algebra to learn that one of the essential aims of modern
algebra is to classify mathematical systems—of which scalar algebra is an
example—according to the abstract rules which govern these systems. To
illustrate, a very simple mathematical system is a group. A group is a set of
elements and a single operation for combining them (which in some cases is
addition and in some others multiplication) behaving according to the fol-
lowing properties:
1. If a and b are elements of the group, then a+b and b+a are also
elements of the group, although not necessarily the same element
(closure).
2. If a, b, and c are elements of the group, then
(a + b) + c = a + (b + c)  (associative law)
3. There is an element 0 in the group such that for every a in the
group
a + 0 = 0 + a = a  (identity law)
4. For each a in the group, there is an element (−a) in the group such
that
a + (−a) = (−a) + a = 0  (inverse law)
One should realize that the nature of the elements in the group has no bear-
ing upon the fact that the elements constitute a group. These elements may
be integers, real numbers, vectors, matrices, or positions of an equilateral tri-
angle; it makes no difference what they are, as long as under the operator (+)
they behave according to the properties of a group. Obviously, a group is a
far simpler system than the system which scalar algebra exemplifies. Only
4 laws govern the group, whereas 11 laws are needed to govern the system
exemplified by scalar algebra, which, by the way, is known to mathemati-
cians as a “field.”
Among the various abstract mathematical systems, the system known as a
“vector space” is the most important for factor analysis. (Note. One should not
let the term “space” unduly influence one’s concept of a vector space. A vec-
tor space need have no geometric connotations but may be treated entirely as
an abstract system. Geometric representations of vectors are only particular
examples of a vector space.) A vector space consists of two sets of mathemati-
cal objects—a set V of “vectors” and a set R of elements of a “field” (such as
the scalars of scalar algebra)—together with two operations for combining
them. The first operation is known as “addition of vectors” and has the
following properties such that for every u, v, w in the set V of vectors:
1. u + v is also a uniquely defined vector in V.
2. u + (v + w)=(u + v) + w.
3. u + v=v + u.
4. There exists a vector 0 in V such that u + 0=0 + u=u.
5. For each vector u in V there exists a unique vector −u such that
u + (−u)=(−u) + u=0
(One may observe that under addition of vectors the set V of vectors
is a group.)
The second operation governs the combination of the elements of
the field R of scalars with the elements of the set V of vectors and is
known as “scalar multiplication.” Scalar multiplication has the fol-
lowing properties such that for all elements a, b from the field R of
scalars and all vectors u, v from the set V of vectors:
6. au is a vector in V.
7. a(u + v)=au + av.
8. (a + b)u=au + bu.
9. a(bu)=(ab)u.
10. 1u=u; 0u=0.
In introducing the idea of a vector space as an abstract mathematical system,
we have so far deliberately avoided considering what the objects known as
vectors might be. Our purpose in doing so has been to have the reader real-
ize at the outset that a vector space is an abstract system that may be found
in connection with various kinds of mathematical objects. For example, the
vectors of a vector space may be identified with the elements of any field
such as the set of real numbers. In such a case, the addition of vectors corre-
sponds to the addition of elements in the field. In another example, the vec-
tors of a vector space may be identified with n-tuples, which are ordered sets
of real numbers. (In the upcoming discussion we will develop the properties
of vectors more fully in connection with vectors represented by n-tuples.)
Vectors may also be identified with the unidimensional random variables of
mathematical statistics. This fact has important implications for multivariate
statistics and factor analysis, and structural equation modeling, in particu-
lar, because it means one may use the vector concept to unify the treatment
of variables in both finite and infinite populations. The reader should note
at this point that the key idea in this book is that, in any linear analysis
of variables of the behavioral, social, or biological sciences, the variables
may be treated as if they are vectors in a linear vector space. The concrete
representation of these variables may differ in various contexts (i.e., may be
n-tuples or random variables), but they may always be considered vectors.
2.3.1 n-Tuples as Vectors
In a vector space of n-tuples, by a vector we shall mean a point in n-dimen-
sional space designated by an ordered set of numbers known as an n-tuple
which are the coordinates of the point. For example, (1,3,2) is a 3-tuple which
represents a vector in three-dimensional space which is 1 unit from the ori-
gin (of the coordinate system) in the direction along the x-axis, 3 units from
the origin in the direction of the y-axis, and 2 units from the origin in the
direction of the z-axis. Note that in a set of coordinates the numbers appear
in special positions to indicate the distance one must go along each of the ref-
erence axes to arrive at the point. In graphically portraying a vector, we shall
use the convention of drawing an arrow from the origin of the coordinate
system to the point. For example, the vector (1,3,2) is illustrated in Figure 2.1.
In a vector space of n-tuples we will be concerned with certain operations
to be applied to the n-tuples which define the vectors. These operations will
define addition, subtraction, and multiplication in the vector space of n-tuples.
As a notational shorthand to allow us to forgo writing the coordinate num-
bers in full when we wish to express the equations in vector notation, we will
designate individual vectors by lowercase boldface letters. For example, let a
stand for the vector (1,3,2).
2.3.1.1 Equality of Vectors
Two vectors are equal if they have the same coordinates.
FIGURE 2.1
Graphical representation of vector (1,3,2) in three-dimensional space.
For example, if a=(1,2,4) and b=(1,2,4), then a=b. A necessary condition
that two vectors are equal is that they have the same number of coordinates.
For example, if
a=(1,2,3,4) and b=(1,2,3)
then a ≠ b. In fact, when two vectors have different numbers of coordinates,
they refer to a different order of n-tuples and cannot be compared or added.
2.3.2 Scalars and Vectors
Vectors compose a system complete in themselves. But they are of a differ-
ent order from the algebra we normally deal with when using real numbers.
Sometimes we introduce real numbers into the system of vectors. When we
do this, we call the real numbers “scalars.” In our notational scheme, we dis-
tinguish scalars from vectors by writing the scalars in italics and the vectors
in lowercase boldface characters. Thus, the scalar a is not the same as the vector a.
In vector notation, we will most often consider vectors in the abstract. Then,
rather than using actual numbers to stand for the coordinates of the vec-
tors, we will use scalar quantities expressed algebraically. For example, let a
general vector a in five-dimensional space be written as
a = (a1, a2, a3, a4, a5)

In this example ai stands for a coordinate, each being distinguished from
others by a different subscript. Whenever possible, algebraic expressions for the
coordinates of vectors should take the same character as the character standing
for the vector itself. This will not always be done, however.
2.3.3 Multiplying a Vector by a Scalar
Let a be a vector such that

a = (a1, a2, …, an)

and λ a scalar; then the operation

λa = c

produces another vector c such that

c = (λa1, λa2, …, λan)
In other words, multiplying a vector by a scalar produces another vector that
has for components the components of the first vector each multiplied by the
scalar. To cite a numerical example, let a=(1,3,4,5); then
2a = (2 × 1, 2 × 3, 2 × 4, 2 × 5) = (2,6,8,10) = c
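As an added illustration (not part of the original text), the operation is one line of Python; the function name is arbitrary:

    # Illustrative sketch: multiply each component of an n-tuple by a scalar.
    def scalar_multiply(lam, a):
        return tuple(lam * a_i for a_i in a)

    print(scalar_multiply(2, (1, 3, 4, 5)))  # (2, 6, 8, 10), as above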
2.3.4 Addition of Vectors
If a=(1,3,2) and b=(2,1,5), then their sum, denoted as a+b is another vector
c such that
c = a + b = ((1 + 2),(3 + 1),(2 + 5)) = (3,4,7)
When we add two vectors together, we add their corresponding coordinates
together to obtain a new vector.
The addition of vectors is found in physics in connection with the analysis
of forces acting upon a body where vector addition leads to the “resultant”
by the well-known “parallelogram law.” This law states that if two vectors
are added together, then lines drawn from the points of these vectors to the
point of the vector produced by the addition will make a parallelogram with
the original vectors. In Figure 2.2, we show the result of adding two vectors
(1,3) and (3,2) together.
If more than two vectors are added together, the result is still another vec-
tor. In some factor-analytic procedures, this vector is known as a “centroid,”
because it tends to be at the center of the group of vectors added together.
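A short Python sketch (an illustration added here, not part of the original text) makes the componentwise rule explicit:

    # Illustrative sketch: componentwise addition of n-tuples.
    def add(a, b):
        # Only vectors of the same order (number of coordinates) can be added.
        assert len(a) == len(b), "vectors must have the same number of coordinates"
        return tuple(a_i + b_i for a_i, b_i in zip(a, b))

    print(add((1, 3, 2), (2, 1, 5)))         # (3, 4, 7), as above
    print(add((1, 3), (3, 2)))               # (4, 5), the parallelogram example
    print(add(add((1, 3), (3, 2)), (2, 1)))  # (6, 6): a sum of three vectors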
2.3.5 Scalar Product of Vectors
In some vector spaces, in addition to the two operations of addition of vec-
tors and scalar multiplication, a third operation known as the scalar product
of two vectors (written xy for each pair of vectors x and y) is defined which
associates a scalar with each pair of vectors. This operation has the follow-
ing abstract properties, given that x, y, and z are arbitrary vectors and a an
arbitrary scalar:
FIGURE 2.2
Graphical representation of sum of vectors (1,3) and (3,2) by vector (4,5) in two-dimensional
space, illustrating the parallelogram law for addition of vectors.
11. xy=yx=a scalar.
12. x(y + z)=xy + xz.
13. x(ay)=a(xy).
14. xx ≥ 0; xx=0 implies x=0.
When the scalar product is defined in a vector space, the vector space is
known as a “unitary vector space.” In a unitary vector space, it is possible
to establish the length of a vector as well as the cosine of the angle between
pairs of vectors. Factor analysis as well as other multivariate linear analyses
is concerned exclusively with unitary vector spaces.
We will now consider the definition of the scalar product for a vector
space of n-tuples. Let a be the vector (a1,a2,…,an) and b the vector (b1,b2,…,bn);
then the scalar product of a and b, written ab, is the sum of the products of
corresponding components of the vectors, that is,

ab = a1b1 + a2b2 + ⋯ + anbn  (2.1)
To use a simple numerical example, let a=(1,2,5) and b=(3,3,4); then ab=29.
As a further note on notation, consider that an expression such as Equation
2.1, containing a series of terms to be added together which differ only in
their subscripts, can be shortened by using the summational notation. For
example, Equation 2.1 can be rewritten as
ab = ∑_{i=1}^{n} aibi  (2.2)
As an explanation of this notation, the expression aibi on the right-hand
side of the sigma sign stands for a general term in the series of terms, as in
Equation 2.1, to be added. The subscript i in the term aibi stands for a general
subscript. The expression ∑_{i=1}^{n} indicates that one must add together a series
of subscripted terms. The expression “i=1” underneath the sigma sign indi-
cates which subscript is pertinent to the summation governed by this sum-
mation sign—in the present example the subscript i is pertinent—as well as
the first value in the series of terms that the subscript will take. The expres-
sion n above the sigma sign indicates the highest value that the subscript will
take. Thus we are to add together all terms in the series with the subscript i
ranging in value from 1 to n.
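Summational notation corresponds directly to a loop over the subscript i. The following Python sketch (an illustration added here, not in the original text) computes Equation 2.2:

    # Illustrative sketch: the scalar product as the sum over i = 1, ..., n
    # of the products of corresponding components.
    def scalar_product(a, b):
        assert len(a) == len(b), "vectors must have the same number of coordinates"
        return sum(a_i * b_i for a_i, b_i in zip(a, b))

    print(scalar_product((1, 2, 5), (3, 3, 4)))  # 29, as in the text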
2.3.6 Distance between Vectors
In high school geometry we learn from the Pythagorean theorem that the
square on the hypotenuse of a right triangle is equal to the sum of the squares
on the other two sides. This theorem forms the basis for finding the distance
between two vector points. We shall define the distance between two vectors
a and b, written
|a − b| = [∑_{i=1}^{n} (ai − bi)^2]^{1/2}
where a and b are both vectors with n components. This means that we find
the sum of squared differences between the corresponding components of
the two vectors and then take the square root of that. In Figure 2.3, we have
diagrammed the geometric equivalent of this formula.
|a − b| = [(a1 − b1)^2 + (a2 − b2)^2]^{1/2}  (2.3)
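As an added illustration (not part of the original text), the distance formula in Python:

    # Illustrative sketch: distance between two vectors as the square root
    # of the sum of squared differences of corresponding components.
    def distance(a, b):
        return sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b)) ** 0.5

    print(distance((1, 3), (4, 7)))  # 5.0: the differences form a 3-4-5 right triangle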
2.3.7 Length of a Vector
Using Equation 2.3, we can find an equation for the length of a vector, that is,
the distance of its point from the origin of the coordinate system. If we define
the “zero vector” as
0 = (0, 0, …, 0)
FIGURE 2.3
Graphical illustration of application of the Pythagorean theorem to the determination of the
distance between two two-dimensional vectors a and b.
Then

|a| = |a − 0| = [∑_{i=1}^{n} (ai − 0)^2]^{1/2} = [∑_{i=1}^{n} ai^2]^{1/2}  (2.4)
The length of a vector a, denoted |a|, is the square root of the sum of the
squares of its components. (Note. Do not confuse |a| with |A| which is a
determinant.)
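As an added illustration (not part of the original text), Equation 2.4 in Python:

    # Illustrative sketch: the length |a| of a vector as the square root
    # of the sum of the squares of its components.
    def length(a):
        return sum(a_i ** 2 for a_i in a) ** 0.5

    print(length((3, 4)))  # 5.0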
2.3.8 Another Definition for Scalar Multiplication
Another way of expressing scalar multiplication of vectors is given by the
formula
ab = |a||b| cos θ  (2.5)
where θ is the angle between the vectors. In other words, the scalar product
of one vector times another vector is equivalent to the product of the lengths
of the two vectors times the cosine of the angle between them.
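Solving Equation 2.5 for cos θ gives cos θ = ab/(|a||b|), which the next section interprets geometrically. A self-contained Python sketch (an illustration added here, not in the original text):

    # Illustrative sketch: cos(theta) = ab / (|a| |b|), obtained by solving
    # Equation 2.5 for the cosine of the angle between two vectors.
    def cosine(a, b):
        ab = sum(x * y for x, y in zip(a, b))      # scalar product ab
        len_a = sum(x ** 2 for x in a) ** 0.5      # |a|
        len_b = sum(y ** 2 for y in b) ** 0.5      # |b|
        return ab / (len_a * len_b)

    print(cosine((1, 0), (1, 1)))  # about 0.7071, the cosine of a 45 degree angle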
2.3.9 Cosine of the Angle between Vectors
Since we have raised the concept of the cosine of the angle between vec-
tors, we should consider the meaning of the cosine function as well as other
important trigonometric functions.
In Figure 2.4, there is a right triangle where θ is
the value of the angle between the base of the tri-
angle and the hypotenuse. If we designate the length
of the side opposite the angle θ as a, the length of the
base as b, and the length of the hypotenuse as c, the
ratio of these sides to one another gives the following
trigonometric functions:
tan θ = a/b    sin θ = a/c    cos θ = b/c
FIGURE 2.4
A right triangle with angle θ between base and hypotenuse.
Another Random Document on
Scribd Without Any Related Topics
844
848
852
nne sede þe king so dere,
lcome beo þu here.
nu, Berild, swiþe,
make him ful bliþe.
whan þu farst to woȝe,
him þine gloue.
nt þu hauest to wyue,
i he schal þe dryue;
Cutberdes fairhede
schal þe neure wel spede.”
844
848
o seyde þe king so dere,
Wel come be he here.
o nov, byryld, swyþe,
n mak him glad and blyþe.
an þou farest awowen,
ak hym þine glouen.
er þou hauest Mynt to wyue,
wey he schal þe dryue.”
o gap in MS. . . . .
. . . . . . . . ]
844
848
852
þo seide þe kyng wel dere,
welcome þe þou here.
o, beryld, wel swyþe,
t make hym wel blyþe,
nt when þou farest to wowen,
c him þine glouen.
er þou hast munt to wyue,
wey he shal þe dryue;
r godmodes feyrhede
halt þou no wer spede.”
hristmas feast a giant appears.
It was at Cristesmasse,
Neiþer more ne lasse,
[No gap in MS. . . . .
. . . . . . . ]
856
yt was at Cristesmesse,
aþer more ne lesse.
e king hym makede a feste,
yt hyse knyctes beste.
856
t wes at cristesmasse,
ouþer more ne lasse.
e kyng made feste,
his knyhtes beste.
iant’s challenge.
nt proclaims a challenge.
860
864
cam in at none,
eaunt suþe sone,
med fram paynyme,
seide þes ryme:—
e stille, sire kyng,
herkne þis tyþyng.
buþ paens ariued,
mo þane fiue.
beoþ on þe sonde,
, vpon þi londe.
860
864
er com ate none,
geaunt swiþe sone,
rmed of paynime,
nd seyde in hys rime,
Syte, knytes, by þe king,
nd lusteþ to my tydyng.
ere beþ paynyms aryued,
el mo þanne fyue.
y þe se stronde,
yng, on þine londe.
860
864
er com in at none,
geaunt suyþe sone,
armed of paynyme,
nt seide þise ryme:—
Site, kyng, bi kynge,
nt herkne my tidynge
er bueþ paynes aryue,
el more þen fyue.
er beþ vpon honde,
yng, in þine londe.
an will fight any three in the land,
868
of hem wile fiȝte
n þre kniȝtes.
868
ne þer of wille ich fyȝte
ȝen þi þre knyctes.
868
n þer of wol fyhte
ȝeynes þre knyhtes.
bat to determine who shall possess the land.
872
oþer þre slen vre,
s lond beo ȝoure;
vre on ouercomeþ ȝour þreo,
s lond schal vre beo.
oreȝe be þe fiȝtinge,
an þe liȝt of daye springe.”
872
yf þat houre felle þyne þre,
þis lond schal vre be;
yf þyne þre fellen houre,
þys lond þanne be ȝyure.
o morwe schal be þe fyȝtyng,
t þe sonne op rysyng.”
872
ef oure þre sleh oure on,
e shulen of ore londe gon;
ef vre on sleh oure þre,
þis lond shal vre be.
morewe shal be þe fyhtynge,
þe sonne vpspringe.”
Berild and Alrid accept it.
urston names Cutberd (Godmod), Harild and Berild as the three defenders.
876
880
nne sede þe kyng þurston,
berd schal beo þat on;
d schal beo þat oþer;
þridde, Alrid, his broþer.
hi beoþ þe strengeste,
of armes þe beste.
e what schal vs to rede?
wene we beþ alle dede.”
876
880
o seyde þe king þurston,
Cubert he schal be þat on,
yld chyld þat oþer,
e þrydde, byryld, hyse broþer.
ye þre beþ þe strengeste,
nd ín armes þe beste.
t wat schal do to rede?
h wene we ben alle dede.”
876
880
þo seyde þe kyng þurston,
odmod shal be þat on;
eryld shal be þat oþer;
e þridde, Aþyld, is broþer.
r hue bueþ strongeste,
nt in armes þe beste.
h, wat shal vs to rede?
wene we bueþ dede.”
884
utberd sat at borde,
sede þes wordes:—
884
ubert set on borde,
nd seyde þis worde:—
884
odmod set at borde,
nt seide þeose wordes:—
says that it were shame for three Christians to fight against one pagan, and offers to fight alone.
888
892
e king, hit nis no riȝte,
wiþ þre to fiȝte;
n one hunde,
cristen men to fonde.
, ischal al one,
ute more ymone,
mi swerd wel eþe
ge hem þre to deþe.”
892
Syre kyȝeking, hyt no ryȝcte,
n wiþ þre to fyȝcte.
o gap in MS. . . . .
. . . . . . . . ]
t wille ich alone,
ith outen mannes mone,
id my swerd wel heþe
ringen hem alle to deþe.”
888
892
ire kyng, nis no ryhte,
n wiþ þre fyhte,
ȝeynes one hounde,
re cristene to founde.
h, kyng, y shal alone,
iþ-oute more ymone,
ip my suerd ful eþe
ringen hem alle to deþe.”
rations for the combat.
himself,
896
e kyng aros amoreȝe,
hadde muchel sorȝe;
Cutberd ros of bedde,
armes he him schredde.
n his brunie gan on caste,
lacede hit wel faste,
896
e kyng ros a morwe,
nd hadde meche sorwe.
ubert ros of bedde;
yt armes he hym schredde.
ys brenye on he caste,
acede hyt wel faste.
896
e kyng aros amorewe;
e hade muche sorewe.
odmod ros of bedde;
iþ armes he him shredde.
s brunye he on caste,
t knutte hit wel faste,
e king,
900
904
cam to þe kinge,
is vp risinge.
g,” he sede, “cum to fel[de],
to bihelde
we fiȝte schulle,
togare go wulle.”
900
904
e cam biforn þe godeking,
t hyse op rysyng.
e seyde, “king, com to felde,
e for to by helde,
ou we scholen fyȝte
nd to gydere hus dyȝcte.”
900
904
nt com him to þe kynge,
his vp rysynge.
kyng,” quoþ he, “com to felde,
e forte byhelde,
ou we shule flyten
nt to gedere smiten.”
h him rides to the combat.
908
at prime tide,
unnen ut ride,
funden on a grene,
eaunt suþe kene,
feren him biside,
e deþ to abide.
908
yȝt at prime tyde,
e gonne hem out ryde.
e founden in a grene,
geant swyþe kene,
rmed with swerd by side,
e day for to abyde.
908
riht at prime tide,
y gonnen out to ryde.
y fonnden in a grene,
geaunt swyþe kene,
s feren him biside,
at day forto abyde.
ight begins.
strikes so hard, that the giant asks for a breathing spell,
912
916
eilke bataille
berd gan assaille.
ȝaf dentes inoȝe;
kniȝtes felle iswoȝe.
dent he gan wiþdraȝe,
hi were neȝ aslaȝe.
912
916
ubert him gan asayle;
olde he nawt fayle.
e keyte duntes ynowe;
e geant fel hy swowe.
ys feren gonnen hem wyt drawe,
o here mayster wa slawe.
912
[leaf 88, back]
916
odmod hem gon asaylen;
olde he nout faylen.
e ȝef duntes ynowe;
e payen fel y swowe.
s feren gonnen hem wiþ drawe,
r huere maister wes neh slawe.
s he has never before experienced such blows, save at the hand of King Murry.
920
924
sede, “kniȝtes, nu ȝe reste
while, ef ȝou leste.”
ede, “hi neure nadde
niȝte dentes so harde.
gap in MS. . . . .
. . . . . . . ]
was of hornes kunne,
n in suddenne.”
920
924
e seyden, “knyct þo reste
wile ȝyf þe luste.
e neuere ne hente
f man KH3 so harde dunte,
ute of þe king Mory,
at was so swyþe stordy.
e was of hornes kinne;
e slowe hym in sodenne.”
KH.3 MS. adds ‘nes honde’ underdotted
as a mistake.
920
924
e seide, “knyht, þou reste
whyle, ȝef þe leste.
ne heuede ner of monnes hond
o harde duntes in non lond,
ote of þe kyng Murry,
t wes swiþe sturdy.
e wes of hornes kenne;
sloh him in sudenne.”
enraged,
orn him gan to agrise,
his blod arise.
uberd gan agrise,
nd hys blod aryse.
Godmod him gon agryse,
nt his blod aryse.
ews the fight.
928
him saȝ he stonde
driuen him of londe,
þat his fader sloȝ.
im his swerd he droȝ.
928
y for hym he sey stonde
at drof hym out of londe,
nd hys fader aquelde.
e smot hym honder schelde.
928
yforen him he seh stonde
at drof him out of londe,
nt fader his a-quelde;
e smot him vnder shelde.
looks on his ring, then smites the giant through the heart.
932
936
okede on his rynge,
þoȝte on Rymenhilde.
smot him þureȝ þe herte,
sore him gan to smerte.
paens þat er were so sturne,
unne awei vrne.
932
936
e lokede on hys gode ringe,
nd þoute on reymyld þe ȝonge.
yd gode dunt ate furste,
e smot hym to þe herte.
e hondes gonnen at erne
to þe schypes sterne.
932
936
e lokede on is rynge,
nt þohte o rymenild þe ȝynge.
id god suerd at þe furste,
e smot him þourh þe huerte.
e payns bigonne to fleon,
nt to huere shype teon.
kills the Giant.
ans flee to their ship.
n and his compaynye
ne after hem wel swiþe hiȝe,
gap in MS. . . . .
. . . . . . .
. . . . . . .
. . . . . . . ]
940
o schip he wolden ȝerne,
nd cubert hem gan werne,
nd seyde, “kyng, so þou haue reste,
ep nou forþ ofi þi beste,
nd sle we þyse hounden,
ere we henne founden.”
ship hue wolden erne;
odmod hem con werne.
o gap in MS. . .
. . . . . . . .
. . . . . . . .
. . . . . . . . ]
g’s sons are slain, but Cutberd annihilates the pagan host,
gap in MS. . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . . ]
944
948
e houndes hye of laucte,
n strokes hye þere kaute.
aste aȝen hye stode,
ȝen duntes gode.
elp nawht here wonder;
ubert hem broute al honder.
944
948
e kynges sones tweyne
e paiens slowe beyne.
o wes Godmod swyþe wo,
nt þe payens he smot so,
t in a lutel stounde
e paiens hy felle to grounde.
sloȝen alle þe hundes,
i here schipes funde.
e schedde of here blode,
nd makede hem al wode.
odmod ant is men
owe þe payenes eueruchen.
Thurston’s two sons are slain.
nging his father’s death.
952
956
eþe he hem alle broȝte;
fader deþ wel dere hi boȝte.
lle þe kynges kniȝtes,
scapede þer no wiȝte.
e his sones tweie
re him he saȝ deie.
952
956
o deþe he hem browte,
ys fader deþ he bowten.
f al þe kinges rowe,
er nas bute fewe slawe.
ote hys sones tweye
y fore he sey deye.
952
s fader deþ ant ys lond
wrek godmod wiþ his hond.
o gap in MS. . .
. . . . . . . .
. . . . . . . .
. . . . . . . . ]
g mourns.
960
king bigan to grete,
teres for to lete.
eiden hem in bare,
burden hem ful ȝare.
gap in MS. . . . .
. . . . . . . ]
960
e king bi gan to grete,
nd teres for to lete.
en leyden hem on bere,
nd ledde hem wel þere
to holy kyrke,
o man scholde werke.
960
e kyng wiþ reuþful chere
tte leggen is sones on bere,
nt bringen hom to halle;
uche sorewe hue maden alle.
a chirche of lym ant ston
e buriede hem wiþ ryche won.
Thurston offers Horn his kingdom.
e following section—through line 986—has been rearranged by the transcriber. Line numbers
ow the original alignment of the three texts.
964
e king com in to halle,
ong his kniȝtes alle.
Þ 964
e king cam hom to halle,
Among þe kniyctes alle.
964
Þe kyng lette forþ calle
se knyhtes alle,
s to make Horn (Cutberd) his heir, and to give him his daughter Reynild. Cutberd declines, but offers
in the king’s service.
968
972
rn,” he sede, “i seie þe,
as i schal rede þe.
ȝen beþ mine heirs,
þu art kniȝt of muchel pris,
of grete strengþe,
fair o bodie lengþe.
engne þu schalt welde,
to spuse helde
nild, mi doȝter,
sitteþ on þe lofte.”
968
972
Do, cubert,” he seyde,
As ich þe wolle rede.
ede beþ myn heyres,
nd þou þe boneyres,
nd of grete strengþe,
wete and fayr of lengþe.
i reaume þou schalt helde,
nd to spuse welde
ermenyl, my douter,
at syt in boure softe.”
[966]
[973]
976
980
nt seide, “godmod, ȝef þou nere,
le ded we were,
o gap in MS. . . . . ]
ou art boþe god ant feyr;
er y make þe myn heyr;
r my sones bueþ yflawe,
nt ybroht of lyfdawe.
ohter ich habbe one;
ys non so feyr of blod ant bone.
H5(Ermenild, þat feyre may,
ryht so eny someres day,)
re wolle ich ȝeue þe,
nt her kyng shalt þou be.”
KH.5 This line was at first left out by the
scribe, and then written in the margin of
the MS.
976
980
984
O sire king, wiþ wronge
olte ihc hit vnderfonge.
oȝter þat ȝe me bede,
er rengne for to lede.
more ihc schal þe serue,
kyng, or þu sterue.
orwe schal wende
eue ȝeres ende.
ne hit is wente,
king, ȝef me mi rente.
anne i þi doȝter ȝerne,
schaltu me hire werne.”
976
980
984
e seyde, “king, wit wronge
cholde ich hire honder fonge,
ng þat þou me bede,
nd þy reaume lede.
t more ich wile þe serue,
nd fro sorwe þe berwe.
y sorwe hyt schal wende
er þis seue ȝeres hende.
nd wanne he beþ wente,
yng, ȝyf þou me my rente.
an ich þi douter herne,
e schalt þou hire me werne.”
984
e seyde, “more ichul þe serue,
yng, er þen þou sterue.
hen y þy dohter ȝerne,
eo ne shal me noþyng werne.”
even years he does not communicate with Rymenhild.
988
992
berd wonede þere
e seue ȝere,
gap in MS. . . . .
. . . . . . . . ]
to Rymenild he ne sente,
him self ne wente.
enild was in Westernesse,
wel muchel sorinesse.
H 988
992
orn child wonede þere
fulle sixe yere.
Þe seuenþe, þat cam þe nexte
fter þe sexte, KH4
o reymyld he ne wende,
e to hyre sende.
eymyld was in westnesse,
yd michel sorwenesse.
H.4 MS. adds ‘yeres hende’ underdotted
s a mistake.
988
992
godmod wonede þere
lle six ȝere;
o gap in MS. . .
. . . . . . . . ]
nt þe seueþe ȝer bygon;
rymynyld, sonde ne sende he non.
menyld wes in westnesse,
iþ muchel sorewenesse.
g sues for Rymenhild.
ues for Rymenhild.
996
1000
king þer gan ariue
wolde hire haue to wyue.
n he was wiþ þe king,
at ilke wedding.
aies were schorte,
Riminhild ne dorste
n in none wise.
rit he dude deuise;
996
1000
kyng þer was aryuede
at wolde hyre habbe to wyue.
t sone ware þe kynges
f hyre weddinges.
e dawes weren schorte,
nd reymyld ne dorste
ette in none wise.
writ he dede deuise;
996
1000
kyng þer wes aryue,
nt wolde hyre han to wyue.
one were þe kynges,
þat weddynge.
e dayes were so sherte,
nt rymenild ne derste
tten on none wyse.
wryt hue dude deuyse;
rites a letter to Horn.
1004
1008
f hit dude write,
horn ne luuede noȝt lite.
sende hire sonde
uereche londe,
eche horn, þe kniȝt,
me him finde miȝte.
1004
1008
yol hyt dide write,
at horn ne louede nawt lite.
nd to eueryche londe,
or horn hym was so longe,
fter horn þe knycte,
or þat he ne Myȝte.
1004
1008
þulf hit dude wryte,
t horn ne louede nout lyte.
ue sende hire sonde
to eueruche londe,
sechen horn knyhte,
he so er me myhte.
meets Rymenhild’s messenger.
hile hunting, meets a page, who says that he is seeking Horn,
1012
1016
n noȝt þer of ne herde,
o dai þat he ferde
wude for to schete,
aue he gan imete.
n seden, “Leue fere,
sechestu here?”
ȝt, if beo þi wille,
ai þe sone telle.
che fram biweste,
n of westernesse,
1012
1016
orn þer of ne þoute,
yl, on a day þat he ferde
o wode for to seche,
page he gan mete.
e seyde, “leue fere,
at sekest þou here?”
Knyt, feyr of felle,”
wat þe page, “y wole þe telle.
h seke fram westnesse,
orn, knyt of estnesse,
1012
[leaf 89]
1016
orn þer of nout herde,
, o day þat he ferde
wode forte shete,
page he gan mete.
orn seide, “leue fere,
het dest þou nou here?”
Sire, in lutel spelle
may þe sone telle.
h seche from westnesse,
orn, knyht, of estnesse,
Rymenhild is to marry King Mody of Reynes, on Sunday.
1020
1024
a Maiden Rymenhild
for him gan wexe wild.
ng hire wile wedde,
bringe to his bedde,
Modi of Reynes,
of hornes enemis.
habbe walke wide
e se side,
1020
1024
or þe mayde reymyld,
at for hym ney waxeþ wild.
kyng hire schal wedde,
soneday to bedde,
yng mody of reny,
at was hornes enemy.
h haue walked wide
y þe se syde.
1020
1024
or rymenild, þat feyre may,
oreweþ for him nyht ant day.
kyng hire shal wedde,
sonneday to bedde,
yng Mody of reynis,
t is hornes enimis.
h habbe walked wyde
y þe see side.
ssenger laments that he cannot find Horn.
1032
gap in MS. . . . .
. . . . . . . ]
he no war ifunde,
awai þe stunde.
away þe while,
wurþ Rymenild bigiled.”
n iherde wiþ his ires,
spak wiþ bidere tires,
1028
1032
h neuere myȝt of reche
hit no londisse speche.
s he nower founde,
weylawey þe stounde.
eymyld worþ by gile,
eylawey þe wile.”
orn hyt herde with eren,
nd wep with blody teren.
1028
1032
e mihte ich him neuer cleche,
iþ nones kunnes speche,
e may ich of him here
londe fer no nere.
eylawey þe while,
m may hente gyle.”
Horn hit herde wiþ earen,
nt spec wiþ wete tearen,
closes his identity, and sends word to Rymenhild that he will come Sunday before ‘prime.’
1036
1040
1044
aue, wel þe bitide,
n stondep þe biside.
n to hure þu turne,
seie þat heo ne murne,
schal beo þer bitime,
neday bi pryme.”
knaue was wel bliþe,
hiȝede aȝen bliue.
e bigan to þroȝe
er hire woȝe.
1036
1040
So wel þe, grom, by tide,
orn stant by þy syde.
ȝen to reymyld turne,
nd sey þat he ne morne.
h schal ben þer by tyime,
soneday by prime.”
e page was blyþe,
nd schepede wel swyþe.
o gap in MS. . . . .
. . . . . . . . ]
1036
1040
So wel, grom, þe bitide,
orn stond by þi syde,
ȝeyn to rymenild turne,
t sey þat hue ne murne.
shal be þer bi time,
sonneday er prime.”
e page wes wel blyþe
t shipede wel suyþe.
o gap in MS. . .
. . . . . . . . ]
messenger on his return journey is drowned.
ssenger is drowned, and Rymenhild looks for him in vain.
1048
knaue þer gan adrinke;
enhild hit miȝte of þinke.
enhild vndude þe dure pin
e hus þer heo was in,
gap in MS. . . . .
. . . . . . . ]
1048
e se hym gan to drenche;
eymyld hyt Myȝt of þinche.
e se hym gan op þrowe,
onder hire boures wowe.
eymyld gan dore vn pynne,
f boure þat he was ynne,
1048
e see him gon adrynke;
t rymenil may of þinke.
e [see] him con ded þrowe
nder hire chambre wowe.
menild lokede wide
y þe see syde,
ild grieves when she finds the drowned messenger.
1052
1056
oke wiþ hire iȝe,
eo oȝt of horn isiȝe.
ond heo þe knaue adrent
he hadde for horn isent,
þat scholde horn bringe;
fingres he gan wringe.
1052
1056
nd lokede forþ riȝcte
fter horn þe knyte.
o fond hye hire sonde
renched by þe stronde,
at scholde horn bringe;
yre fingres hye gan wringe.
1052
1056
ef heo seȝe horn come,
þer tidynge of eny gome.
o fond hue hire sonde
dronque by þe stronde,
at shulde horn brynge;
re hondes gon hue wrynge.
asks King Thurston’s aid.
closes his identity to King Thurston
1060
1064
orn cam to þurston þe kyng,
tolde him þis tiþing.
he was iknowe
Rimenh[ild] was hise oȝe,
is gode kenne,
king of suddenne,
hu he sloȝ in felde
his fader quelde,
1060
orn cam to þurston þe kinge,
nd telde hym hys tydinge.
o he was by cnowe
at reymyld was his owe.
o gap in MS. . . . .
. . . . . . . . ]
1060
1064
Horn com to þurston þe kynge,
nt tolde him þes tidynge.
nt þo he was biknowe,
at rymenild wes ys owe,
nt of his gode kenne,
e kyng of sudenne,
nt hou he sloh afelde
m þat is fader aquelde,
s his pay and also aid to win Rymenhild.
1068
seide, “king þe wise,
me mi seruise.
enhild help me winne;
þu noȝt ne linne,
1068
e seyde, “kyng so wise,
eld me my seruyse.
eymyld me help to winne;
at þou ich nowt ne lynne,
1068
nt seide, “kyng so wyse,
eld me my seruice.
menild, help me to wynne,
wyþe þat þou ne blynne,
mises that Athulf shall marry Thurston’s daughter.
1072
ischal do to spuse
oȝter wel to huse.
schal to spuse haue
f, mi gode felaȝe,
kniȝt mid þe beste,
þe treweste.”
1072
nd hy schal to house
y douter do wel spuse.
e schal to spuse haue
yol, My trewe felawe,
e hys knyt wyt þe beste,
nd on of þe treweste.”
1072
nt y shal do to house
y dohter wel to spouse,
r hue shal to spouse haue
þulf, my gode felawe.
e is knyht mid þe beste,
t on of þe treweste.”
g consents.
1076
king sede so stille,
rn, haue nu þi wille.”
1076
o seyde þe kyng so stille,
Horn, do þine wille.”
1076
e kyng seide so stille,
orn, do al þi wille.”
ies men, and sets sail.
1080
1084
dude writes sende
yrlonde,
r kniȝtes liȝte,
e men to fiȝte.
orn come inoȝe,
to schupe droȝe.
n dude him in þe weie,
a god Galeie.
im gan to blowe
litel þroȝe.
H
1080
1084
orn sente hys sonde
In to eueryche londe,
After men to fyȝte,
yrische men so wyȝte,
o hym were come hy nowe,
at in to schipe drowe.
orn tok hys preye.
nd dude him in hys weye.
o gap in MS. . . . .
. . . . . . . . ]
1080
1084
e sende þo by sonde,
end al is londe,
ter knyhtes to fyhte,
t were men so lyhte.
him come ynowe,
t in to shipe drowe.
Horn dude him in þe weye,
a gret galeye.
e wynd bigon to blowe
a lutel þrowe.
arrives at the latest possible moment.
es after the bells for the wedding have been rung.
1088
1092
1096
e bigan to posse
in to Westernesse.
trike seil and maste,
Ankere gunne caste,
ny day was sprunge
r belle irunge.
word bigan to springe
Rymenhilde weddinge.
n was in þe watere;
miȝte he come no latere.
1088
1092
1096
ere scyp gan forþ seyle,
e wynd hym nolde fayle.
e striken seyl of maste,
nd anker he gonne kaste.
e soneday was hy sp[ronge],
nd þe messe hy songe,
f reymylde þe ȝonge,
nd of mody þe kinge;
nd horn was in watere;
yȝt he come no latere.
1088
1092
1096
e see bi-gan wiþ ship to gon,
westnesse hem brohte anon.
ue striken seyl of maste,
nt ancre gonnen caste.
atynes were yronge
t þe masse ysonge,
rymenild þe ȝynge
t of Mody þe kynge,
nt horn wes in watere;
e mihte he come no latere.
es his ship, and comes to land.
1100
et his schup stonde,
ȝede to londe.
folk he dude abide
er wude side.
1100
e let scyp stonde,
nd ȝede hym op to londe.
ys folc he dide abyde
onder þe wode syde.
1100
e let is ship stonde,
nt com him vp to londe.
s folk he made abyde
nder a wode syde.
meets a Palmer.
ts forth alone, and meets a palmer,
1104
[n] him ȝede alone,
he sprunge of stone.
almere he þar mette,
faire hine grette.
mere, þu schalt me telle
f þine spelle.”
gap in MS. . . . .
. . . . . . . ]
1104
1108
e wende forþ alone,
o he were spronge of stone.
palmere he mette;
yt worde he hym grette,
Palmere, þou schalt me telle,”
e seyde, “on þine spelle,
o brouke þou þi croune,
i comest þou fram toune?”
[leaf 89, back]
1104
1108
Horn eode forh al one,
o he sprong of þe stone.
n palmere he y-mette,
t wiþ wordes hyne grette,
palmere, þou shalt me telle,”
e seyde, “of þine spelle,
o brouke þou þi croune,
hy comest þou from toune?”
s him of the wedding
1112
sede vpon his tale,
ome fram o brudale,
was at o wedding
Maide Rymenhild.
gap in MS. . . . .
. . . . . . . ]
1112
e palmere seyde on hys tale,
Hy com fram on bridale.
h com fram brode hylde
f Mayden reymylde.
am honder chyrche wowe,
e gan louerd owe,
1112
nt he seide on is tale,
come from a brudale,
om brudale wylde
maide remenylde.
o gap in MS. . .
. . . . . . . . ]
Rymenhild’s grief.
1116
1120
miȝte heo adriȝe
heo ne weop wiþ iȝe.
sede þat ‘heo nolde
ispused wiþ golde;
hadde on husebonde,
he were vt of londe.’
1116
1120
e miyȝte hye hyt dreye
at hye wep wyt eye.
e seyde þat ‘hye nolde
e spoused Myd golde;
ye hadde hosebonde,
ey be nere nawt in londe.’
1116
1120
e mihte hue nout dreȝe
t hue ne wep wiþ eȝe.
ue seide, ‘þat hue nolde
e spoused wiþ golde;
ue hade hosebonde
ah he were out of londe.’
1124
1128
in strong halle,
nne castel walle,
iwas atte ȝate;
de hi me in late.
i ihote hadde
ure þat me hire ladde.
i igan glide;
deol inolde abide.
bride wepeþ sore,
þat is muche deole!”
1124
1128
ody Myd strencþe hyre hadde,
nd in to toure ladde,
to a stronge halle,
hit inne kastel walle.
er ich was attegate;
oste ich nawt in rake.
wey ich gan glyde;
e deþ ich nolde abyde.
er worþ a rewlich dole,
er þe bryd wepeþ sore.”
1128
h wes in þe halle,
iþ-inne þe castel walle.
o gap in MS. . .
. . . . . . . .
. . . . . . . .
. . . . . . . . ]
wey y gon glide;
e dole y nolde abyde.
er worþ a dole reuly;
e brude wepeþ bitterly.”
exchanges clothes with the Palmer.
anges clothes with the palmer,
Welcome to our website – the perfect destination for book lovers and
knowledge seekers. We believe that every book holds a new world,
offering opportunities for learning, discovery, and personal growth.
That’s why we are dedicated to bringing you a diverse collection of
books, ranging from classic literature and specialized publications to
self-development guides and children's books.
More than just a book-buying platform, we strive to be a bridge
connecting you with timeless cultural and intellectual values. With an
elegant, user-friendly interface and a smart search system, you can
quickly find the books that best suit your interests. Additionally,
our special promotions and home delivery services help you save time
and fully enjoy the joy of reading.
Join us on a journey of knowledge exploration, passion nurturing, and
personal growth every day!
ebookbell.com

More Related Content

PDF
Handbook of Empirical Economics and Finance STATISTICS Textbooks and Monograp...
PDF
Time Series Modeling Computation And Inference West Mike Prado
PDF
Applied Statistical Inference with MINITAB Sally Lesik
PDF
Handbook of Empirical Economics and Finance STATISTICS Textbooks and Monograp...
PDF
Bayesian Ideas And Data Analysis An Introduction For Scientists And Statistic...
PDF
Applied Statistical Inference with MINITAB Sally Lesik
PDF
Confirmatory Factor Analysis J Micah Roos Shawn Bauldry
PDF
M4D-v0.4.pdf
Handbook of Empirical Economics and Finance STATISTICS Textbooks and Monograp...
Time Series Modeling Computation And Inference West Mike Prado
Applied Statistical Inference with MINITAB Sally Lesik
Handbook of Empirical Economics and Finance STATISTICS Textbooks and Monograp...
Bayesian Ideas And Data Analysis An Introduction For Scientists And Statistic...
Applied Statistical Inference with MINITAB Sally Lesik
Confirmatory Factor Analysis J Micah Roos Shawn Bauldry
M4D-v0.4.pdf

Similar to Foundations Of Factor Analysis Second Edition 2nd Edition Stanley A Mulaik (20)

PDF
Introduction to Mathematical Modeling and Chaotic Dynamics 1st Edition Ranjit...
PDF
Applied Statistics in Business and Economics, 7e ISE 7th Edition David Doane
PDF
Exploratory Data Analysis With Matlab Second Edition Chapman Hall Crc Compute...
PDF
Advances in Business Statistics, Methods and Data Collection 1st Edition Ger ...
PDF
Multilevel Structural Equation Modeling Bruno Castanho Silva Constantin Manue...
PDF
Mathematical Models and Methods for Real World Systems 1st Edition K.M. Furati
PDF
Confidence Intervals In Generalized Regression Models 1st Edition Esa Uusipaikka
PDF
Data Clustering Algorithms and Applications First Edition Charu C. Aggarwal
PDF
Statistics in the 21st Century Ed 1st Edition Martin A. Tanner
PDF
Think_Stats.pdf
PDF
Gaussian Markov random fields theory and applications 1st Edition Havard Rue
DOCX
ffirs.indd 24316PM12112014 Page iData Scienc.docx
PDF
Functional Data Analysis With R 1st Edition Ciprian M Crainiceanu Jeff Goldsm...
PDF
Handbook of Finite Fields 1st Edition Gary L. Mullen
DOCX
ffirs.indd 24316PM12112014 Page iData Scienc.docx
PDF
Stochastic Processes An Introduction 2nd Edition Peter Watts Jones
PDF
Applied Statistics in Business and Economics 5th Edition David Doane
PDF
Multiple Time Series Models 1st Edition Patrick T. Brandt
PDF
Multiple Time Series Models 1st Edition Patrick T Brandt John T Williams
PDF
Stochastic Processes An Introduction 2nd Edition Peter Watts Jones
Introduction to Mathematical Modeling and Chaotic Dynamics 1st Edition Ranjit...
Applied Statistics in Business and Economics, 7e ISE 7th Edition David Doane
Exploratory Data Analysis With Matlab Second Edition Chapman Hall Crc Compute...
Advances in Business Statistics, Methods and Data Collection 1st Edition Ger ...
Multilevel Structural Equation Modeling Bruno Castanho Silva Constantin Manue...
Mathematical Models and Methods for Real World Systems 1st Edition K.M. Furati
Confidence Intervals In Generalized Regression Models 1st Edition Esa Uusipaikka
Data Clustering Algorithms and Applications First Edition Charu C. Aggarwal
Statistics in the 21st Century Ed 1st Edition Martin A. Tanner
Think_Stats.pdf
Gaussian Markov random fields theory and applications 1st Edition Havard Rue
ffirs.indd 24316PM12112014 Page iData Scienc.docx
Functional Data Analysis With R 1st Edition Ciprian M Crainiceanu Jeff Goldsm...
Handbook of Finite Fields 1st Edition Gary L. Mullen
ffirs.indd 24316PM12112014 Page iData Scienc.docx
Stochastic Processes An Introduction 2nd Edition Peter Watts Jones
Applied Statistics in Business and Economics 5th Edition David Doane
Multiple Time Series Models 1st Edition Patrick T. Brandt
Multiple Time Series Models 1st Edition Patrick T Brandt John T Williams
Stochastic Processes An Introduction 2nd Edition Peter Watts Jones
Ad

Recently uploaded (20)

PPTX
Pharma ospi slides which help in ospi learning
PDF
01-Introduction-to-Information-Management.pdf
DOC
Soft-furnishing-By-Architect-A.F.M.Mohiuddin-Akhand.doc
PPTX
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
PDF
Anesthesia in Laparoscopic Surgery in India
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PDF
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PDF
Weekly quiz Compilation Jan -July 25.pdf
PDF
Microbial disease of the cardiovascular and lymphatic systems
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PPTX
GDM (1) (1).pptx small presentation for students
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PPTX
202450812 BayCHI UCSC-SV 20250812 v17.pptx
PPTX
Cell Types and Its function , kingdom of life
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PDF
Chinmaya Tiranga quiz Grand Finale.pdf
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
O7-L3 Supply Chain Operations - ICLT Program
Pharma ospi slides which help in ospi learning
01-Introduction-to-Information-Management.pdf
Soft-furnishing-By-Architect-A.F.M.Mohiuddin-Akhand.doc
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
Anesthesia in Laparoscopic Surgery in India
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
human mycosis Human fungal infections are called human mycosis..pptx
Weekly quiz Compilation Jan -July 25.pdf
Microbial disease of the cardiovascular and lymphatic systems
Final Presentation General Medicine 03-08-2024.pptx
GDM (1) (1).pptx small presentation for students
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
202450812 BayCHI UCSC-SV 20250812 v17.pptx
Cell Types and Its function , kingdom of life
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
Chinmaya Tiranga quiz Grand Finale.pdf
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
O7-L3 Supply Chain Operations - ICLT Program
Ad

Foundations Of Factor Analysis Second Edition 2nd Edition Stanley A Mulaik

  • 1. Foundations Of Factor Analysis Second Edition 2nd Edition Stanley A Mulaik download https://ebookbell.com/product/foundations-of-factor-analysis- second-edition-2nd-edition-stanley-a-mulaik-4765512 Explore and download more ebooks at ebookbell.com
  • 2. Here are some recommended products that we believe you will be interested in. You can click the link to download. Conceptual Foundations Of Human Factors Measurement 1st Edition David Meister Author https://ebookbell.com/product/conceptual-foundations-of-human-factors- measurement-1st-edition-david-meister-author-11966846 Country Sector And Company Factors In Global Equity Portfolios 1st Edition Peter J B Hopkins https://ebookbell.com/product/country-sector-and-company-factors-in- global-equity-portfolios-1st-edition-peter-j-b-hopkins-1779218 Foundations Of Scalable Systems Designing Distributed Architectures 1st Edition Ian Gorton https://ebookbell.com/product/foundations-of-scalable-systems- designing-distributed-architectures-1st-edition-ian-gorton-44887562 Foundations Of Software Science And Computation Structures 25th International Conference Fossacs 2022 Held As Part Of The European Joint Conferences On Theory And Practice Of Software Etaps 2022 Munich Germany April 27 2022 Proceedings Patricia Bouyer https://ebookbell.com/product/foundations-of-software-science-and- computation-structures-25th-international-conference- fossacs-2022-held-as-part-of-the-european-joint-conferences-on-theory- and-practice-of-software-etaps-2022-munich-germany- april-27-2022-proceedings-patricia-bouyer-44887776
  • 3. Foundations Of Software Science And Computation Structures 24th International Conference Stefan Kiefer https://ebookbell.com/product/foundations-of-software-science-and- computation-structures-24th-international-conference-stefan- kiefer-44887782 Foundations Of Marketing 9th William M Pride O C Ferrell https://ebookbell.com/product/foundations-of-marketing-9th-william-m- pride-o-c-ferrell-44954530 Foundations Of Rural Public Health In America Joseph N Inungu https://ebookbell.com/product/foundations-of-rural-public-health-in- america-joseph-n-inungu-44963066 Foundations Of Marketing 9e 9th Edition William M Pride O C Ferrell https://ebookbell.com/product/foundations-of-marketing-9e-9th-edition- william-m-pride-o-c-ferrell-44975342 Foundations Of Molecular Quantum Electrodynamics R Guy Woolley https://ebookbell.com/product/foundations-of-molecular-quantum- electrodynamics-r-guy-woolley-45936564
  • 5. Foundations of Factor Analysis Second Edition © 2010 by Taylor & Francis Group, LLC
  • 6. Statistics in the Social and Behavioral Sciences Series Aims and scope Large and complex datasets are becoming prevalent in the social and behavioral sciences and statistical methods are crucial for the analysis and interpretation of such data. This series aims to capture new developments in statistical methodology with par- ticular relevance to applications in the social and behavioral sciences. It seeks to promote appropriate use of statistical, econometric and psychometric methods in these applied sciences by publishing a broad range of reference works, textbooks and handbooks. The scope of the series is wide, including applications of statistical methodology in sociology, psychology, economics, education, marketing research, political science, criminology, public policy, demography, survey methodology and official statistics. The titles included in the series are designed to appeal to applied statisticians, as well as students, researchers and practitioners from the above disciplines. The inclusion of real examples and case studies is therefore essential. Published Titles Analysis of Multivariate Social Science Data, Second Edition David J. Bartholomew, Fiona Steele, Irini Moustaki, and Jane I. Galbraith Bayesian Methods: A Social and Behavioral Sciences Approach, Second Edition Jeff Gill Foundations of Factor Analysis, Second Edition Stanley A. Mulaik Linear Causal Modeling with Structural Equations Stanley A. Mulaik Multiple Correspondence Analysis and Related Methods Michael Greenacre and Jorg Blasius Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences Brian S. Everitt Statistical Test Theory for the Behavioral Sciences Dato N. M. de Gruijter and Leo J. Th. van der Kamp A. Colin Cameron University of California, Davis, USA Andrew Gelman Columbia University, USA J. Scott Long Indiana University, USA Sophia Rabe-Hesketh University of California, Berkeley, USA Series Editors Chapman & Hall/CRC Anders Skrondal Norwegian Institute of Public Health, Norway © 2010 by Taylor & Francis Group, LLC Downloaded by [University of California - San Diego (CDL)] at 00:07 15 September 2014
  • 7. Stanley A. Mulaik Foundations of Factor Analysis Second Edition Statistics in the Social and Behavioral Sciences Series Chapman & Hall/CRC © 2010 by Taylor & Francis Group, LLC Downloaded by [University of California - San Diego (CDL)] at 00:07 15 September 2014
  • 8. CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2010 by Taylor & Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S. Government works Version Date: 20110725 International Standard Book Number-13: 978-1-4200-9981-2 (eBook - PDF) This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit- ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, please access www.copyright. com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com © 2010 by Taylor & Francis Group, LLC Downloaded by [University of California - San Diego (CDL)] at 00:07 15 September 2014
Contents

Preface to the Second Edition
Preface to the First Edition

1 Introduction
  1.1 Factor Analysis and Structural Theories
  1.2 Brief History of Factor Analysis as a Linear Model
  1.3 Example of Factor Analysis

2 Mathematical Foundations for Factor Analysis
  2.1 Introduction
  2.2 Scalar Algebra
    2.2.1 Fundamental Laws of Scalar Algebra
      2.2.1.1 Rules of Signs
      2.2.1.2 Rules for Exponents
      2.2.1.3 Solving Simple Equations
  2.3 Vectors
    2.3.1 n-Tuples as Vectors
      2.3.1.1 Equality of Vectors
    2.3.2 Scalars and Vectors
    2.3.3 Multiplying a Vector by a Scalar
    2.3.4 Addition of Vectors
    2.3.5 Scalar Product of Vectors
    2.3.6 Distance between Vectors
    2.3.7 Length of a Vector
    2.3.8 Another Definition for Scalar Multiplication
    2.3.9 Cosine of the Angle between Vectors
    2.3.10 Projection of a Vector onto Another Vector
    2.3.11 Types of Special Vectors
    2.3.12 Linear Combinations
    2.3.13 Linear Independence
    2.3.14 Basis Vectors
  2.4 Matrix Algebra
    2.4.1 Definition of a Matrix
    2.4.2 Matrix Operations
      2.4.2.1 Equality
      2.4.2.2 Multiplication by a Scalar
      2.4.2.3 Addition
      2.4.2.4 Subtraction
      2.4.2.5 Matrix Multiplication
    2.4.3 Identity Matrix
    2.4.4 Scalar Matrix
    2.4.5 Diagonal Matrix
    2.4.6 Upper and Lower Triangular Matrices
    2.4.7 Null Matrix
    2.4.8 Transpose Matrix
    2.4.9 Symmetric Matrices
    2.4.10 Matrix Inverse
    2.4.11 Orthogonal Matrices
    2.4.12 Trace of a Matrix
    2.4.13 Invariance of Traces under Cyclic Permutations
  2.5 Determinants
    2.5.1 Minors of a Matrix
    2.5.2 Rank of a Matrix
    2.5.3 Cofactors of a Matrix
    2.5.4 Expanding a Determinant by Cofactors
    2.5.5 Adjoint Matrix
    2.5.6 Important Properties of Determinants
    2.5.7 Simultaneous Linear Equations
  2.6 Treatment of Variables as Vectors
    2.6.1 Variables in Finite Populations
    2.6.2 Variables in Infinite Populations
    2.6.3 Random Vectors of Random Variables
  2.7 Maxima and Minima of Functions
    2.7.1 Slope as the Indicator of a Maximum or Minimum
    2.7.2 Index for Slope
    2.7.3 Derivative of a Function
    2.7.4 Derivative of a Constant
    2.7.5 Derivative of Other Functions
    2.7.6 Partial Differentiation
    2.7.7 Maxima and Minima of Functions of Several Variables
    2.7.8 Constrained Maxima and Minima

3 Composite Variables and Linear Transformations
  3.1 Introduction
    3.1.1 Means and Variances of Variables
      3.1.1.1 Correlation and Causation
  3.2 Composite Variables
  3.3 Unweighted Composite Variables
    3.3.1 Mean of an Unweighted Composite
    3.3.2 Variance of an Unweighted Composite
    3.3.3 Covariance and Correlation between Two Composites
    3.3.4 Correlation of an Unweighted Composite with a Single Variable
    3.3.5 Correlation between Two Unweighted Composites
    3.3.6 Summary Concerning Unweighted Composites
  3.4 Differentially Weighted Composites
    3.4.1 Correlation between a Differentially Weighted Composite and Another Variable
    3.4.2 Correlation between Two Differentially Weighted Composites
  3.5 Matrix Equations
    3.5.1 Random Vectors, Mean Vectors, Variance–Covariance Matrices, and Correlation Matrices
    3.5.2 Sample Equations
    3.5.3 Composite Variables in Matrix Equations
    3.5.4 Linear Transformations
    3.5.5 Some Special, Useful Linear Transformations

4 Multiple and Partial Correlations
  4.1 Multiple Regression and Correlation
    4.1.1 Minimizing the Expected Squared Difference between a Composite Variable and an External Variable
    *4.1.2 Deriving the Regression Weight Matrix for Multivariate Multiple Regression
    4.1.3 Matrix Equations for Multivariate Multiple Regression
    4.1.4 Squared Multiple Correlations
    4.1.5 Correlations between Actual and Predicted Criteria
  4.2 Partial Correlations
  4.3 Determinantal Formulas
    4.3.1 Multiple-Correlation Coefficient
    4.3.2 Formulas for Partial Correlations
  4.4 Multiple Correlation in Terms of Partial Correlation
    4.4.1 Matrix of Image Regression Weights
    4.4.2 Meaning of Multiple Correlation
    4.4.3 Yule's Equation for the Error of Estimate
    4.4.4 Conclusions

5 Multivariate Normal Distribution
  5.1 Introduction
  5.2 Univariate Normal Density Function
  5.3 Multivariate Normal Distribution
    5.3.1 Bivariate Normal Distribution
    5.3.2 Properties of the Multivariate Normal Distribution
  *5.4 Maximum-Likelihood Estimation
    5.4.1 Notion of Likelihood
    5.4.2 Sample Likelihood
    5.4.3 Maximum-Likelihood Estimates
    5.4.4 Multivariate Case
      5.4.4.1 Distribution of ȳ and S

6 Fundamental Equations of Factor Analysis
  6.1 Analysis of a Variable into Components
    6.1.1 Components of Variance
    6.1.2 Variance of a Variable in Terms of Its Factors
    6.1.3 Correlation between Two Variables in Terms of Their Factors
  6.2 Use of Matrix Notation in Factor Analysis
    6.2.1 Fundamental Equation of Factor Analysis
    6.2.2 Fundamental Theorem of Factor Analysis
    6.2.3 Factor-Pattern and Factor-Structure Matrices

7 Methods of Factor Extraction
  7.1 Rationale for Finding Factors and Factor Loadings
    7.1.1 General Computing Algorithm for Finding Factors
  7.2 Diagonal Method of Factoring
  7.3 Centroid Method of Factoring
  7.4 Principal-Axes Methods
    7.4.1 Hotelling's Iterative Method
    7.4.2 Further Properties of Eigenvectors and Eigenvalues
    7.4.3 Maximization of Quadratic Forms for Points on the Unit Sphere
    7.4.4 Diagonalizing the R Matrix into Its Eigenvalues
    7.4.5 Jacobi Method
    7.4.6 Powers of Square Symmetric Matrices
    7.4.7 Factor-Loading Matrix from Eigenvalues and Eigenvectors

8 Common-Factor Analysis
  8.1 Preliminary Considerations
    8.1.1 Designing a Factor Analytic Study
  8.2 First Stages in the Factor Analysis
    8.2.1 Concept of Minimum Rank
    8.2.2 Systematic Lower-Bound Estimates of Communalities
    8.2.3 Congruence Transformations
    8.2.4 Sylvester's Law of Inertia
    8.2.5 Eigenvector Transformations
    8.2.6 Guttman's Lower Bounds for Minimum Rank
    8.2.7 Preliminary Theorems for Guttman's Bounds
    8.2.8 Proof of the First Lower Bound
    8.2.9 Proof of the Third Lower Bound
    8.2.10 Proof of the Second Lower Bound
    8.2.11 Heuristic Rules of Thumb for the Number of Factors
      8.2.11.1 Kaiser's Eigenvalues-Greater-Than-One Rule
      8.2.11.2 Cattell's Scree Criterion
      8.2.11.3 Parallel Analysis
  8.3 Fitting the Common-Factor Model to a Correlation Matrix
    8.3.1 Least-Squares Estimation of the Exploratory Common-Factor Model
    8.3.2 Assessing Fit
    8.3.3 Example of Least-Squares Common-Factor Analysis
    8.3.4 Maximum-Likelihood Estimation of the Exploratory Common-Factor Model
      *8.3.4.1 Maximum-Likelihood Estimation Obtained Using Calculus
    8.3.5 Maximum-Likelihood Estimates
    *8.3.6 Fletcher–Powell Algorithm
    *8.3.7 Applying the Fletcher–Powell Algorithm to Maximum-Likelihood Exploratory Factor Analysis
    8.3.8 Testing the Goodness of Fit of the Maximum-Likelihood Estimates
    8.3.9 Optimality of Maximum-Likelihood Estimators
    8.3.10 Example of Maximum-Likelihood Factor Analysis

9 Other Models of Factor Analysis
  9.1 Introduction
  9.2 Component Analysis
    9.2.1 Principal-Components Analysis
    9.2.2 Selecting Fewer Components than Variables
    9.2.3 Determining the Reliability of Principal Components
    9.2.4 Principal Components of True Components
    9.2.5 Weighted Principal Components
  9.3 Image Analysis
    9.3.1 Partial-Image Analysis
    9.3.2 Image Analysis and Common-Factor Analysis
    9.3.3 Partial-Image Analysis as Approximation of Common-Factor Analysis
  9.4 Canonical-Factor Analysis
    9.4.1 Relation to Image Analysis
    9.4.2 Kaiser's Rule for the Number of Harris Factors
    9.4.3 Quickie, Single-Pass Approximation for Common-Factor Analysis
  9.5 Problem of Doublet Factors
    9.5.1 Butler's Descriptive-Factor-Analysis Solution
    9.5.2 Model That Includes Doublets Explicitly
  9.6 Metric Invariance Properties
  9.7 Image-Factor Analysis
    9.7.1 Testing Image Factors for Significance
  9.8 Psychometric Inference in Factor Analysis
    9.8.1 Alpha Factor Analysis
    9.8.2 Communality in a Universe of Tests
    9.8.3 Consequences for Factor Analysis

10 Factor Rotation
  10.1 Introduction
  10.2 Thurstone's Concept of a Simple Structure
    10.2.1 Implementing the Simple-Structure Concept
    10.2.2 Question of Correlated Factors
  10.3 Oblique Graphical Rotation

11 Orthogonal Analytic Rotation
  11.1 Introduction
  11.2 Quartimax Criterion
  11.3 Varimax Criterion
  11.4 Transvarimax Methods
    11.4.1 Parsimax
  11.5 Simultaneous Orthogonal Varimax and Parsimax
    11.5.1 Gradient Projection Algorithm

12 Oblique Analytic Rotation
  12.1 General
    12.1.1 Distinctness of the Criteria in Oblique Rotation
  12.2 Oblimin Family
    12.2.1 Direct Oblimin by Planar Rotations
  12.3 Harris–Kaiser Oblique Transformations
  12.4 Weighted Oblique Rotation
  12.5 Oblique Procrustean Transformations
    12.5.1 Promax Oblique Rotation
    12.5.2 Rotation to a Factor-Pattern Matrix Approximating a Given Target Matrix
    12.5.3 Promaj
    12.5.4 Promin
  12.6 Gradient-Projection-Algorithm Synthesis
    12.6.1 Gradient-Projection Algorithm
    12.6.2 Jennrich's Use of the GPA
      12.6.2.1 Gradient-Projection Algorithm
      12.6.2.2 Quartimin
      12.6.2.3 Oblimin Rotation
      12.6.2.4 Least-Squares Rotation to a Target Matrix
      12.6.2.5 Least-Squares Rotation to a Partially Specified Target Pattern Matrix
    12.6.3 Simplimax
  12.7 Rotating Using Component Loss Functions
  12.8 Conclusions

13 Factor Scores and Factor Indeterminacy
  13.1 Introduction
  13.2 Scores on Component Variables
    13.2.1 Component Scores in Canonical-Component Analysis and Image Analysis
      13.2.1.1 Canonical-Component Analysis
      13.2.1.2 Image Analysis
  13.3 Indeterminacy of Common-Factor Scores
    13.3.1 Geometry of Correlational Indeterminacy
  13.4 Further History of Factor Indeterminacy
    13.4.1 Factor Indeterminacy from 1970 to 1980
      13.4.1.1 "Infinite Domain" Position
    13.4.2 Researchers with Well-Defined Concepts of Their Domains
      13.4.2.1 Factor Indeterminacy from 1980 to 2000
  13.5 Other Estimators of Common Factors
    13.5.1 Least Squares
    13.5.2 Bartlett's Method
    13.5.3 Evaluation of Estimation Methods

14 Factorial Invariance
  14.1 Introduction
  14.2 Invariance under Selection of Variables
  14.3 Invariance under Selection of Experimental Populations
    14.3.1 Effect of Univariate Selection
    14.3.2 Multivariate Case
    14.3.3 Factorial Invariance in Different Experimental Populations
    14.3.4 Effects of Selection on Component Analysis
  14.4 Comparing Factors across Populations
    14.4.1 Preliminary Requirements for Comparing Factor Analyses
    14.4.2 Inappropriate Comparisons of Factors
    14.4.3 Comparing Factors from Component Analyses
    14.4.4 Contrasting Experimental Populations across Factors
    14.4.5 Limitations on Factorial Invariance

15 Confirmatory Factor Analysis
  15.1 Introduction
    15.1.1 Abduction, Deduction, and Induction
    15.1.2 Science as the Knowledge of Objects
    15.1.3 Objects as Invariants in the Perceptual Field
    15.1.4 Implications for Factor Analysis
  15.2 Example of Confirmatory Factor Analysis
  15.3 Mathematics of Confirmatory Factor Analysis
    15.3.1 Specifying Hypotheses
    15.3.2 Identification
    15.3.3 Determining Whether Parameters and Models Are Identified
    15.3.4 Identification of Metrics
    15.3.5 Discrepancy Functions
    15.3.6 Estimation by Minimizing Discrepancy Functions
    15.3.7 Derivatives of Elements of Matrices
    15.3.8 Maximum-Likelihood Estimation in Confirmatory Factor Analysis
    15.3.9 Least-Squares Estimation
    15.3.10 Generalized Least-Squares Estimation
    15.3.11 Implementing the Quasi-Newton Algorithm
    15.3.12 Avoiding Improper Solutions
    15.3.13 Statistical Tests
    15.3.14 What to Do When Chi-Square Is Significant
    15.3.15 Approximate Fit Indices
  15.4 Designing Confirmatory Factor Analysis Models
    15.4.1 Restricted versus Unrestricted Models
    15.4.2 Use for Unrestricted Model
    15.4.3 Measurement Model
    15.4.4 Four-Step Procedure for Evaluating a Model
  15.5 Some Other Applications
    15.5.1 Faceted Classification Designs
    15.5.2 Multirater–Multioccasion Studies
    15.5.3 Multitrait–Multimethod Covariance Matrices
  15.6 Conclusion

References
Author Index
Subject Index
Preface to the Second Edition

This is a book for those who want or need to get to the bottom of things. It is about the foundations of factor analysis. It is for those who are not content with accepting on faith the many equations and procedures that constitute factor analysis but want to know where these equations and procedures came from. They want to know the assumptions underlying these equations and procedures so that they can evaluate them for themselves and decide where and when they would be appropriate. They want to see how it was done, so they might know how to add modifications or produce new results.

The fact that a major aspect of factor analysis and structural equation modeling is mathematical means that getting to their foundations is going to require dealing with some mathematics. Now, compared to the mathematics needed to fully grasp modern physics, the mathematics of factor analysis and structural equation modeling is, I am happy to say, relatively easy to learn and not much beyond a sound course in algebra and certainly not beyond a course in differential calculus, which is often the first course in mathematics for science and engineering majors in a university. It is true that factor analysis relies heavily on concepts and techniques of linear algebra and matrix algebra. But these are topics that can be taught as a part of learning about factor analysis. Where differential calculus comes into the picture is in those situations where one seeks to maximize or minimize some algebraic expression. Taking derivatives of algebraic expressions is an analytic process, and the results are themselves algebraic in nature. While best learned in a course on calculus, one can still be shown the derivatives needed to solve a particular optimization problem. Given that the algebra of the derivation of the solution is shown step by step, a reader may still be able to follow the argument leading to the result. That, then, is the way this book has been written: I teach the mathematics needed as it is needed to understand the derivation of an equation or procedure in factor analysis and structural equation modeling.

This text may be used at the postgraduate level as a first-semester course in advanced correlational methods. It will find use in psychology, sociology, education, marketing, and organizational behavior departments, especially in their quantitative method programs. Other ancillary sciences may also find this book useful. It can also be used as a reference for explanations of various options in commercial computer programs for performing factor analysis and structural equation modeling.

There is a logical progression to the chapters in this text, reflecting the hierarchical structure of the mathematical concepts to be covered. First, in Chapter 2 one needs to learn the basic mathematics, principally linear algebra and matrix algebra and the elements of differential calculus. Then one needs to learn about composite variables and their means, variances, covariances, and correlations in terms of the means, variances, and covariances of their component variables. Then one builds on that to deal with multiple and partial correlations, which are special forms of composite variables. This is accomplished in Chapter 3.

Differential calculus is first encountered in Chapter 4, in demonstrating where the estimates of the regression weights come from, for this involves finding the weights of a linear combination of predictor variables that has either the maximum correlation with the criterion or the minimum expected squared difference from the criterion. In Chapter 5 one uses the concepts of composite variables and multiple and partial regression to understand the basic properties of the multivariate normal distribution. Matrix algebra becomes essential to simplify notation that often involves hundreds of variables and their interrelations.

By this point, in Chapter 6 one is ready to deal with factor analysis, which is an extension of regression theory wherein the predictor variables are now unmeasured, hypothetical latent variables, and the dependent variables are observed variables. Here we describe the fundamental equation and fundamental theorem of factor analysis, and introduce the basic terminology of factor analysis. In Chapter 7, we consider how common factors may be extracted. We show that the methods used build on concepts of regression and partial correlation. We first look at a general algorithm for extracting factors proposed by Guttman. We then consider the diagonal and the centroid methods of factoring, both of which are of historical interest. Next we encounter eigenvectors and eigenvalues. Eigenvectors contain coefficients that are weights used to combine additively the observed variables or their common parts into variables that have maximum variances (the eigenvalues), because they will be variables that account for most of the information among the original variables in any one dimension.

To perform a common-factor analysis one must first have initial estimates of the communalities and unique variances. Lower-bound estimates are frequently used. These bounds and their effect on the number of factors to retain are discussed in Chapter 8. The unique variances are either subtracted from the diagonal of the correlation matrix, or the diagonal matrix of the reciprocals of their square roots is pre- and postmultiplied times the correlation matrix, and then the eigenvectors and eigenvalues of the resulting matrix are obtained. Different methods use different estimates of unique variances. The formulas for the eigenvectors and eigenvalues of a correlation matrix, say, are obtained by using differential calculus to solve the maximization problem of finding the weights of a linear combination of the variables that has the maximum variance under the constraint that the sum of the squares of the weights adds to unity. These will give rise to the factors, and the common factors will in turn be basis vectors of a common-factor space, meaning that the observed variables are in turn linear combinations of the factors.
Maximum-likelihood factor analysis, the equations for which were ultimately solved by Karl Jöreskog (1967), building on the work of precursor statisticians, also requires differential calculus to solve the maximization problem involved. Furthermore, the computational solution for the maximum-likelihood estimates of the model parameters cannot be obtained directly by any algebraic analytic procedure. The solution has to be obtained numerically and iteratively. Jöreskog (1967) used a then-new computer algorithm for nonlinear optimization, the Fletcher–Powell algorithm. We will explain how this works to obtain the maximum-likelihood solution for the exploratory factor-analysis model.

Chapter 9 examines several variants of the factor-analysis model: principal components, weighted principal components, image analysis, canonical factor analysis, descriptive factor analysis, and alpha factor analysis.

In Chapters 10 (simple structure and graphical rotation), 11 (orthogonal analytic rotation), and 12 (oblique analytic rotation) we consider factor rotation. Rotation of factors to simple structure will concern transformations of the common-factor variables into a new set of variables that have a simpler relationship with the observed variables. But there are numerous mathematical criteria of what constitutes a simple-structure solution. All of these involve finding the solution for a transformation matrix for transforming the initial "unrotated" solution to one that maximizes or minimizes a mathematical expression constituting the criterion for simple structure. Thus, differential calculus is again involved in finding the algorithms for conducting a rotation of factors, and the solution is obtained numerically and iteratively using a computer.

Chapter 13 addresses whether or not it is possible to obtain scores on the latent common factors. It turns out that solutions for these scores are not unique, even though they are optimal. This is the factor-indeterminacy problem, and it concerns more than getting scores on the factors: there may be more than one interpretation for a common factor that fits the data equally well.

Chapter 14 deals with factorial invariance. What solutions for the factors will reveal the same factors even if we use different sets of observed variables? What coefficients are invariant in a factor-analytic model under restriction of range? Building on ideas from regression, the solution is effectively algebraic.

While much of the first 14 chapters is essentially unchanged from the first edition, developments that have taken place since 1972, when the first edition was published, have been updated and revised. I have changed the notation to adopt a notation popularized by Karl Jöreskog for the common-factor model. I now write the model equation as Y = ΛX + ΨE instead of Z = FX + UV, and the equation of the fundamental theorem as R_YY = ΛΦ_XXΛ′ + Ψ² instead of R_ZZ = FC_XXF′ + U². I have added a new Chapter 5 on the multivariate normal distribution and its general properties, along with the concept of maximum-likelihood estimation based on it. This will increase by one the chapter numbers for subsequent chapters corresponding to those in the first edition. However, Chapter 12, on procrustean rotation in the first edition, has been dropped, and this subject has been briefly described in the new Chapter 12 on oblique rotation. Chapters 13 and 14 deal with factor scores and indeterminacy, and factorial invariance under restriction of range, respectively.

Other changes and additions are as follows. I am critical of several of the methods that are commonly used to determine the number of factors to retain because they are not based on sound statistical or mathematical theory. I have now directed some of these criticisms toward some methods in the renumbered Chapter 8. However, since then I have also realized that, in most studies, a major problem with determining the number of factors concerns the presence of doublet variance, which is uncorrelated with n − 2 of the observed variables and correlated between just the two of them. Doublet correlations contribute to the communalities but lead to an overestimate of the number of overdetermined common factors. Yet the common-factor model totally ignores the possibility of doublets, while they are everywhere in our empirical studies. Both unique-factor variance and doublet variance should be separated from the overdetermined common-factor variance. I have since rediscovered that a clever but heuristic solution for doing this, ignored by most factor-analytic texts, appeared in a paper I cited but did not understand sufficiently to describe in the first edition. This paper was by John Butler (1968), and he named his method "descriptive factor analysis." I now cover this more completely (along with a new method of my own I call "doublet factor analysis") in Chapter 9, as well as other methods of factor analysis. It provides an objective way to determine the number of overdetermined common factors to retain; those who use the eigenvalues-greater-than-1.00 rule of principal components will find the smaller number of factors it retains more to their liking.

I show in Chapter 9 that Kaiser's formula (1963) for the principal-axes factor-structure matrix of an image analysis, Λ_r = SA_r[(γ_i − 1)²/γ_i]^{1/2}, is not the correct one to use, because it represents the "covariances" between the "image" components of the variables and the underlying "image factors." The proper factor-structure matrix, representing the correlations between the unit-variance "observed" variables and the unit-variance image factors, is none other than the weighted principal-component solution: Λ_r = SA_r[γ_i]^{1/2}.

In the 1990s, several new approaches to oblique rotation to simple structure were published, and these and earlier methods of rotation were in turn integrated by Robert Jennrich (2001, 2002, 2004, 2006) around a simple core computing algorithm, the "gradient projection algorithm," which seeks the transformation matrix for simultaneously transforming all the factors. I have therefore completely rewritten Chapter 12 on analytic oblique rotation on the basis of this new, simpler algorithm, and I show examples of its use.

In the 1970s, factor score indeterminacy was further developed by several authors, and in 1993 many of them published additional exchanges on the subject. A discussion of these developments has now been included in an expanded Chapter 13 on factor scores.
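To make the two structure matrices just mentioned concrete, here is a minimal numerical sketch in Python. It is my own illustration with made-up numbers, not code from the book: it builds a correlation matrix R from the fundamental theorem R_YY = ΛΦ_XXΛ′ + Ψ² and then computes both matrices, assuming (as I read the Harris setup the book develops in Chapter 9) that S² = [diag(R⁻¹)]⁻¹ and that A and the γ_i come from the eigendecomposition of S⁻¹RS⁻¹.

import numpy as np

# (1) Hypothetical 4-variable, 2-factor model; all numbers are made up.
Lam = np.array([[0.8, 0.0],
                [0.7, 0.0],
                [0.0, 0.6],
                [0.0, 0.9]])          # factor-pattern matrix (Lambda)
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])          # factor correlations (Phi)
h2 = np.diag(Lam @ Phi @ Lam.T)       # communalities
Psi2 = np.diag(1.0 - h2)              # unique variances, so diag(R) = 1
R = Lam @ Phi @ Lam.T + Psi2          # fundamental theorem: R = Lam Phi Lam' + Psi^2

# (2) Image-analysis scalings and eigenstructure (Harris setup, assumed here).
s = np.sqrt(1.0 / np.diag(np.linalg.inv(R)))   # anti-image standard deviations
S, Sinv = np.diag(s), np.diag(1.0 / s)
gamma, A = np.linalg.eigh(Sinv @ R @ Sinv)
order = np.argsort(gamma)[::-1]                # roots from largest to smallest
gamma, A = gamma[order], A[:, order]
Ar, gr = A[:, :2], gamma[:2]                   # retain the two largest roots

# Kaiser's image formula (covariances of images with image factors)...
kaiser = S @ Ar @ np.diag(np.sqrt((gr - 1.0) ** 2 / gr))
# ...versus the weighted principal-component solution (correlations of the
# unit-variance observed variables with the unit-variance image factors).
weighted_pc = S @ Ar @ np.diag(np.sqrt(gr))

print(np.round(kaiser, 3))
print(np.round(weighted_pc, 3))

The printout is only meant to show that the two matrices differ by their diagonal weighting, [(γ_i − 1)²/γ_i]^{1/2} versus [γ_i]^{1/2}; the substantive argument for preferring the second is given in Chapter 9.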
Factor analysis was also extended to confirmatory factor analysis and structural equation modeling by Jöreskog (1969, 1973, 1974, 1975). As a consequence, these later methodologies diverted researchers from pursuing exclusively exploratory studies to pursuing hypothesis-testing studies. Confirmatory factor analysis is now best treated separately in a text on structural equation modeling, as a special case of that method, but I have rewritten a chapter on confirmatory factor analysis for this edition. Exploratory factor analysis still remains a useful technique in many circumstances as an abductive, exploratory technique, justifying its study today, although its limitations are now better understood.

I wish to thank Robert Jennrich for his help in understanding the gradient projection algorithm. Jim Steiger was very helpful in clarifying Peter Schönemann's papers on factor indeterminacy, and I owe him a debt of gratitude. I wish to thank all those who, over the past 30 years, have encouraged me to revise this text, specifically, Jim Steiger, Michael Browne, Abigail Panter, Ed Rigdon, Rod McDonald, Bob Cudeck, Ed Loveland, Randy Engle, Larry James, Susan Embretson, and Andy Smith. Without their encouragement, I might have abandoned the project. I owe Henry Kaiser, now deceased, an unrepayable debt for his unselfish help in steering me into a career in factor analysis and in getting me the postdoctoral fellowship at the University of North Carolina. This not only made the writing of the first edition of this book possible, but it also placed me in the intellectually nourishing environment of leading factor analysts, without which I could never have gained the required knowledge to write the first edition or its current sequel. I also acknowledge the loving support my wife Jane has given me through all these years as I labored on this book into the wee hours of the night.

Stanley A. Mulaik
Preface to the First Edition

When I was nine years old, I dismantled the family alarm clock. With gears, springs, and screws—all novel mechanisms to me—scattered across the kitchen table, I realized that I had learned two things: how to take the clock apart and what was inside it. But this knowledge did not reveal the mysteries of the all-important third and fourth steps in the process: putting the clock back together again and understanding the theory of clocks in general. The disemboweled clock ended up in the trash that night. A common experience, perhaps, but nonetheless revealing.

There are some psychologists today who report a similar experience in connection with their use of factor analysis in the study of human abilities or personality. Very likely, when first introduced to factor analysis they had the impression that it would allow them to analyze human behavior into its components and thereby facilitate their activities of formulating structural theories about human behavior. But after conducting a half-dozen factor analyses they discovered that, in spite of the plethora of factors produced and the piles of computer output stacked up in the corners of their offices, they did not know very much more about the organization of human behavior than they knew before factor analyzing. Perhaps after factor analyzing they would admit to appreciating more fully the complexity of human behavior, but in terms of achieving a coherent, organized conception of human behavior, they would claim that factor analysis had failed to live up to their expectations.

With that kind of negative appraisal being commonly given to the technique of factor analysis, one might ask why the author insisted on writing a textbook about the subject. Actually, I think the case against factor analysis is not quite as grim as depicted above. Just as my youthful experience with the alarm clock yielded me fresh but limited knowledge about the works inside the clock, factor analysis has provided psychologists with at least fresh but limited knowledge about the components, although not necessarily the organization, of human psychological processes. If some psychologists are disillusioned by factor analysis' failure to provide them with satisfactory explanations of human behavior, the fault probably lies not with the model of factor analysis itself but with the mindless application of it; many of the early proponents of the method encouraged this mindless application by their extravagant claims for the method's efficacy in discovering, as if by magic, underlying structures. Consequently, rather than use scientific intuition and already-available knowledge about the properties of variables under study to construct theories about the nature of relationships among the variables and formulate these theories as factor-analytic models to be tested against empirical data, many researchers have randomly picked variables representing a domain to be studied, intercorrelated the variables, and then factor-analyzed them, expecting that the theoretically important variables of the domain would be revealed by the analysis. Even Thurstone (1947, pp. 55–56), who considered exploration of a new domain a legitimate application of factor-analytic methods, cautioned that exploration with factor analysis required carefully chosen variables and that results from using the method were only provisional in suggesting ideas for further research. Factor analysis is not a method for discovering full-blown structural theories about a domain. Experience has justified Thurstone's caution for, more often than one would like to admit, the factors obtained from purely exploratory studies have been difficult to integrate with other theory, if they have been interpretable at all. The more substantial contributions of factor analysis have been made when researchers postulated the existence of certain factors, carefully selected variables from the domain that would isolate their existence, and then proceeded to factor-analyze in such a way as to reveal these factors as clearly as possible. In other words, factor analysis has been more profitably used when the researcher knew what he or she was looking for.

Several recent developments, discussed in this book, have made it possible for researchers to use factor-analytic methods in a hypothesis-testing manner. For example, in the context of traditional factor-analytic methodology, using procrustean transformations (discussed in Chapter 12), a researcher can rotate an arbitrary factor-pattern matrix to approximate a hypothetical factor-pattern matrix as much as possible. The researcher can then examine the goodness of fit of the rotated pattern matrix to the hypothetical pattern matrix to evaluate his or her hypothesis.

In another recent methodological development (discussed in Chapter 15) it is possible for the researcher to formulate a factor-analytic model for a set of variables to whatever degree of completeness he or she may desire, leaving the unspecified parameters of the model to be estimated in such a way as to optimize the goodness of fit of the hypothetical model to the data, and then to test the overall model for goodness of fit against the data. The latter approach to factor analysis, known as confirmatory factor analysis or analysis of covariance structures, represents a radical departure from the traditional methods of performing factor analysis and may eventually become the predominant method of using the factor-analytic model.

The objective of this book, as its title Foundations of Factor Analysis suggests, is to provide the reader with the mathematical rationale for factor-analytic procedures. It is thus designed as a text for students of the behavioral or social sciences at the graduate level. The author assumes that the typical student who uses this text will have had an introductory course in calculus, so that he or she will be familiar with ordinary differentiation, partial differentiation, and the maximization and minimization of functions using calculus. There will be practically no reference to integral calculus, and many of the sections will be comprehensible with only a good grounding in matrix algebra, which is provided in Chapter 2. Many of the mathematical concepts required to understand a particular factor-analytic procedure are introduced along with the procedure.
The emphasis of this book is algebraic rather than statistical. The key concept is that (random) variables may be treated as vectors in a unitary vector space in which the scalar product of vectors is a defined operation. The empirical relationships between the variables are represented either by the distances or by the cosines of the angles between the corresponding vectors. The factors of the factor-analytic model are basis vectors of the unitary vector space of observed variables. The task of a factor-analytic study is to find a set of basis vectors with optimal properties from which the vectors corresponding to the observed variables can be derived as linear combinations.

Chapter 1 provides an introduction to the role factor-analytic models can play in the formulation of structural theories, with a brief review of the history of factor analysis. Chapter 2 provides a mathematical review of concepts of algebra and calculus and an introduction to vector spaces and matrix algebra. Chapter 3 introduces the reader to properties of linear composite variables and the representation of these properties by using matrix algebra. Chapter 4 considers the problem of finding a particular linear combination of a set of random variables that is minimally distant from some external random variable in the vector space containing these variables. This discussion introduces the concepts of multiple correlation and partial correlation; the chapter ends with a discussion on how image theory clarifies the meaning of multiple correlation. Multiple- and partial-correlational methods are seen as essential, later on, to understanding methods of extracting factors.

Chapter 5 plunges into the theory of factor analysis proper with a discussion on the fundamental equations of common-factor analysis. Chapter 6 discusses methods of extracting factors; it begins with a discussion on a general algorithm for extracting factors and concludes with a discussion on methods for obtaining principal-axes factors by finding the eigenvectors and eigenvalues of a correlation matrix.

Chapter 7 considers the model of common-factor analysis in greater detail with a discussion on (1) the importance of overdetermining factors by the proper selection of variables, (2) the inequalities regarding the lower bounds to the communalities of variables, and (3) the fitting of the unrestricted common-factor model to a correlation matrix by least-squares and maximum-likelihood estimation. Chapter 8 discusses factor-analytic models other than the common-factor model, such as component analysis, image analysis, image factor analysis, and alpha factor analysis. Chapter 9 introduces the reader to the topic of factor rotation to simple structure using graphical rotational methods. This discussion is followed by a discussion on analytic methods of orthogonal rotation in Chapter 10 and of analytic methods of oblique rotation in Chapter 11.

Chapters 12 and 13 consider methods of procrustean transformation of factors and the meaning of factorial indeterminacy in common-factor analysis in connection with the problem of estimating the common factors, respectively. Chapter 14 deals with the topic of factorial invariance of the common-factor model over sampling of different variables and over selection of different populations. The conclusion this chapter draws is that the factor-pattern matrix is the only invariant of a common-factor-analysis model under conditions of varied selection of a population.

Chapter 15 takes up the new developments of confirmatory factor analysis and analysis of covariance structures, which allow the researcher to test hypotheses using the model of common-factor analysis. Finally, Chapter 16 shows how various concepts from factor analysis can be applied to methods of multivariate analysis, with a discussion on multiple correlations, step-down regression onto a set of several variables, canonical correlations, and multiple discriminant analysis.

Some readers may miss discussions in this book of recent offshoots of factor-analytic theory such as 3-mode factor analysis, nonlinear factor analysis, and nonmetric factor analysis. At one point of planning, I felt such topics might be included, but as the book developed, I decided that if all topics pertinent to factor analysis were to be included, the book might never be finished. And so this book is confined to the more classic methods of factor analysis.

I am deeply indebted to the many researchers upon whose published works I have relied in developing the substance of this book. I have tried to give credit wherever it was due, but in the event of an oversight I claim no credit for any development made previously by another author. I especially wish to acknowledge the influence of my onetime teacher, Dr. Calvin W. Taylor of the University of Utah, who first introduced me to the topic of factor analysis while I was studying for my PhD at the University of Utah, and who also later involved me in his factor-analytic research as a research associate. I further wish to acknowledge his role as the primary contributing cause in the chain of events that led to the writing of this book, for it was while substituting for him, at his request, as the instructor of his factor-analysis course at the University of Utah in 1965 and 1966 that I first conceived of writing this book. I also wish to express my gratitude to him for generously allowing me to use his complete set of issues of Psychometrika and other factor-analytic literature, which I relied upon extensively in the writing of the first half of this book.

I am also grateful to Dr. Henry F. Kaiser who, through correspondence with me during the early phases of my writing, widened my horizons and helped and encouraged me to gain a better understanding of factor analysis. To Dr. Lyle V. Jones of the L. L. Thurstone Psychometric Laboratory, University of North Carolina, I wish to express my heartfelt appreciation for his continued encouragement and support while I was preparing the manuscript of this book. I am also grateful to him for reading earlier versions of the manuscript and for making useful editorial suggestions that I have tried to incorporate in the text. However, I accept full responsibility for the final form that this book has taken.
I am indebted to the University of Chicago Press for granting me permission to reprint Tables 10.10, 15.2, and 15.8 from Harry H. Harman's Modern Factor Analysis, first edition, 1960. I am also indebted to Chester W. Harris, managing editor of Psychometrika, and to the following authors for granting me permission to reprint tables taken from their articles that appeared in Psychometrika: Henry F. Kaiser, Karl G. Jöreskog, R. Darrell Bock and Rolf E. Bargmann, R. I. Jennrich, and P. F. Sampson. I am especially grateful to Lyle V. Jones and Joseph M. Wepman for granting me permission to reprint a table from their article, "Dimensions of language performance," which appeared in the Journal of Speech and Hearing Research, September, 1961.

I am also pleased to acknowledge the contribution of my colleagues, Dr. Elliot Cramer, Dr. John Mellinger, Dr. Norman Cliff, and Dr. Mark Appelbaum, who in conversations with me at one time or another helped me to gain increased insights into various points of factor-analytic theory, which were useful when writing this book. In acknowledging this contribution, however, I take full responsibility for all that appears in this book.

I am also grateful for the helpful criticism of earlier versions of the manuscript of this book, which were given to me by my students in my factor-analysis classes at the University of Utah and at the University of North Carolina. I also wish to express my gratitude to the following secretaries who at one time or another struggled with the preparation of portions of this manuscript: Elaine Stewart, Judy Nelson, Ellen Levine, Margot Wasson, Judy Schenck, Jane Pierce, Judy Schoenberg, Betsy Schopler, and Bess Autry. Much of their help would not have been possible, however, without the support given to me by the L. L. Thurstone Psychometric Laboratory under the auspices of Public Health Service research grant No. M-10006 from the National Institute of Mental Health, and National Science Foundation Science Development grant No. GU 2059, for which I am ever grateful.

Stanley A. Mulaik
  • 28. 1 1 Introduction 1.1 Factor Analysis and Structural Theories By a structural theory we shall mean a theory that regards a phenomenon as an aggregate of elemental components interrelated in a lawful way. An excel- lent example of a structural theory is the theory of chemical compounds: Chemical substances are lawful compositions of the atomic elements, with the laws governing the compositions based on the manner in which the electron orbits of different atoms interact when the atoms are combined in molecules. Structural theories occur in other sciences as well. In linguistics, for exam- ple, structural descriptions of language analyze speech into phonemes or morphemes. The aim of structural linguistics is to formulate laws govern- ing the combination of morphemes in a particular language. Biology has a structural theory, which takes, as its elemental components, the individual cells of the organism and organizes them into a hierarchy of tissues, organs, and systems. In the study of the inheritance of characters, modern geneticists regard the manifest characteristics of an organism (phenotype) as a function of the particular combination of genes (genotype) in the chromosomes of the cells of the organism. Structural theories occur in psychology as well. At the most fundamental level a psychologist may regard behaviors as ordered aggregates of cellular responses of the organism. However, psychologists still have considerable difficulty in formulating detailed structural theories of behavior because many of the physical components necessary for such theories have not been identified and understood. But this does not make structural theories impos- sible in psychology. The history of other sciences shows that scientists can understand the abstract features of a structure long before they know the physical basis for this structure. For example, the history of chemistry indi- cates that chemists could formulate principles regarding the effects of mixing compounds in certain amounts long before the atomic and molecular aspects of matter were understood. Gregor Mendel stated the fundamental laws of inheritance before biologists had associated the chromosomes of the cell with inheritance. In psychology, Isaac Newton, in 1704, published a simple mathematical model of the visual effects of mixing different hues, but nearly a hundred years elapsed before Thomas Young postulated the existence of © 2010 by Taylor & Francis Group, LLC
three types of color receptors in the retina to account for the relationships described in Newton's model. And only a half-century later did the physiologist Helmholtz actually give a physiological basis to Young's theory. Other physiological theories subsequently followed. Much of psychological theory today still operates at the level of stating relationships among stimulus conditions and gross behavioral responses.
One of the most difficult problems of formulating a structural theory involves discovering the rules that govern the composition of the aggregates of components. The task is much easier if the scientist can show that the physical structure he is concerned with is isomorphic to a known mathematical structure. Then, he can use the many known theorems of the mathematical structure to make predictions about the properties of the physical structure. In this regard, George Miller (1964) suggests that psychologists have used the structure of euclidean space more than any other mathematical structure to represent structural relationships of psychological processes. He cites, for example, how Isaac Newton's (1704) model for representing the effects of mixing different hues involved taking the hues of the spectrum in their natural order and arranging them as points appropriately around the circumference of a circle. The effects of color mixtures could be determined by proportionally weighting the points of the hues in the mixture according to their contribution to the mixture and finding the center of gravity of the resulting points. The closer this center of gravity approached the center of the color circle, the more the resulting color would appear gray. In addition, Miller cites Schlosberg's (1954) representation of perceived emotional similarities among facial expressions by a two-dimensional graph with one dimension interpreted as pleasantness versus unpleasantness and the other as rejection versus attention, and Osgood's (1952) analysis of the components of meaning of words into three primary components: (1) evaluation, (2) power, and (3) activity.
Realizing that spatial representations have great power to suggest the existence of important psychological mechanisms, psychologists have developed techniques, such as metric and nonmetric factor analysis and metric and nonmetric multidimensional scaling, to create, systematically, spatial representations from empirical measurements. All four of these techniques represent objects of interest (e.g., psychological "variables" or stimulus "objects") as points in a multidimensional space. The points are so arranged with respect to one another in the space as to reflect relationships of similarity among the corresponding objects (variables) as given by empirical data on these objects.
Although a discussion of the full range of techniques using spatial representations of relationships found in data would be of considerable interest, we shall confine ourselves, in this book, to an in-depth examination of the methods of factor analysis. The reason for this is that the methodology of factor analysis is historically much more fully developed than, say, that of multidimensional scaling; as a consequence, prescriptions for the ways
of doing factor analysis are much more established than they are for these other techniques. Furthermore, factor analysis, as a technique, dovetails very nicely with such classic topics in statistics as correlation, regression, and multivariate analysis, which are also well developed. No doubt, as the gains in the development of multidimensional scaling, especially the nonmetric versions of it, become consolidated, there will be authors who will write textbooks about this area as well. In the meantime, the interested reader can consult Torgerson's (1958) textbook on metric multidimensional scaling for an account of that technique.

1.2 Brief History of Factor Analysis as a Linear Model

The history of factor analysis can be traced back into the latter half of the nineteenth century to the efforts of the British scientist Francis Galton (1869, 1889) and other scientists to discover the principles of the inheritance of manifest characters (Mulaik, 1985, 1987). Unlike Gregor Mendel (1866), who is today considered the founder of modern genetics, Galton did not try to discover these principles chiefly through breeding experiments using simple, discrete characters of organisms with short maturation cycles; rather, he concerned himself with human traits such as body height, physical strength, and intelligence, which today are not believed to be simple in their genetic determination. The general question asked by Galton was: To what extent are individual differences in these traits inherited and by what mechanism? To be able to answer this question, Galton had to have some way of quantifying the relationships of traits of parents to traits of offspring. Galton's solution to this problem was the method of regression. Galton noticed that, when he took the heights of sons and plotted them against the heights of their fathers, he obtained a scatter of points indicating an imperfect relationship. Nevertheless, taller fathers tended strongly to have on the average taller sons than shorter fathers. Initially, Galton believed that the average height of sons of fathers of a given height would be the same as the height of the fathers, but instead the average was closer to the average height of the population of sons as a whole. In other words, the average height of sons "regressed" toward the average height in the population and away from the more extreme height of their fathers. Galton believed this implied a principle of inheritance and labeled it "regression toward the mean," although today we regard the regression phenomenon as a statistical artifact associated with the linear-regression model. In addition, Galton discovered that he could fit a straight line, called the regression line, with positive slope very nicely through the average heights of sons whose fathers had a specified height. Upon consultation with the mathematician Karl Pearson, Galton learned that he could use
a linear equation to relate the heights of fathers to heights of sons (cf. Pearson and Lee, 1903):

$$Y = a + bX + E \quad (1.1)$$

Here
Y is the height of a son
a is the intercept with the Y-axis of the regression line passing through the averages of sons with fathers of fixed height
b is the slope of the regression line
X is the height of the father
E is an error of prediction

As a measure of the strength of relationship, Pearson used the ratio of the variance of the predicted variable, $\hat{Y} = a + bX$, to the variance of Y.
Pearson, who was an accomplished mathematician and an articulate writer, recognized that the mathematics underlying Galton's regression method had already been worked out nearly 70 years earlier by Gauss and other mathematicians in connection with determining the "true" orbits of planets from observations of these orbits containing error of observation. Subsequently, because the residual variate E appeared to be normally distributed in the prediction of height, Pearson identified the "error of observation" of Gauss's theory of least-squares estimation with the "error of prediction" of Equation 1.1 and treated the predicted component $\hat{Y} = a + bX$ as an estimate of an average value. This initially amounted to supposing that the average heights of sons would be given in terms of their fathers' heights by the equation $Y = a + bX$ (without the error term), if nature and environment did not somehow interfere haphazardly in modifying the predicted value. Although Pearson subsequently founded the field of biometry on such an exploitation of Gauss's least-squares theory of error, modern geneticists now realize that heredity can also contribute to the E term in Equation 1.1 and that an uncritical application of least-squares theory in the study of the inheritance of characters can be grossly misleading.
Intrigued with the mathematical problems implicit in Galton's program to metricize biology, anthropology, and psychology, Pearson became Galton's junior colleague in this endeavor and contributed enormously as a mathematical innovator (Pearson, 1895). After his work on the mathematics of regression, Pearson concerned himself with finding an index for indicating the type and degree of relationship between metric variables (Pearson, 1909). This resulted in what we know today as the product-moment correlation coefficient, given by the formula

$$\rho_{XY} = \frac{E[(X - E(X))(Y - E(Y))]}{\sqrt{E[(X - E(X))^2]}\,\sqrt{E[(Y - E(Y))^2]}} \quad (1.2)$$

where E[·] is the expected-value operator and X and Y are two random variables.
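To make Equations 1.1 and 1.2 concrete, here is a minimal sketch in Python (not part of the original text; the height values are invented for illustration):

```python
import numpy as np

# Invented father/son heights in inches, for illustration only
fathers = np.array([64.0, 66.0, 68.0, 70.0, 72.0, 74.0])
sons = np.array([66.1, 66.9, 68.4, 69.2, 70.3, 71.0])

# Least-squares slope b and intercept a of Y = a + bX + E (Equation 1.1)
b = np.cov(fathers, sons, ddof=1)[0, 1] / np.var(fathers, ddof=1)
a = sons.mean() - b * fathers.mean()

# Product-moment correlation written out from Equation 1.2
dx = fathers - fathers.mean()
dy = sons - sons.mean()
rho = (dx * dy).mean() / np.sqrt((dx**2).mean() * (dy**2).mean())

print(a, b, rho)  # b < 1: predicted heights "regress" toward the mean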
This index takes on values between −1 and +1, with 0 indicating no relationship. A deeper meaning for this coefficient will be given later when we consider that it represents the cosine of the angle between two vectors, each standing for a different variable.
In and of itself the product-moment correlation coefficient is descriptive only in that it shows the existence of a relationship between variables without showing the source of this relationship, which may be causal or coincidental in nature. When the researcher obtains a nonzero value for the product-moment correlation coefficient, he must supply an explanation for the relationship between the related variables. This usually involves finding that one variable is the cause of the other or that some third variable (and maybe others) is a common cause of them both. In any case, interpretations of correlations are greatly facilitated if the researcher already has a structural model on which to base his interpretations concerning the common component producing a correlation.
To illustrate, some of the early applications of the correlation coefficient in genetics led nowhere in terms of furthering the theory of inheritance because explanations given to nonzero correlation coefficients were frequently tautological, amounting to little more than saying that a relationship existed because such-and-such relationship-causing factor was present in the variables exhibiting the relationship. However, when Mendel's theory of inheritance (Mendel, 1865) was rediscovered during the last decade of the nineteenth century, researchers of hereditary relationships had available to them a structural mechanism for understanding how characters in parents were transmitted to offspring. Working with a trait that could be measured quantitatively, a geneticist could hypothesize a model of the behavior of the genes involved and from this model draw conclusions about, say, the correlation between relatives for the trait in the population of persons. Thus, product-moment correlation became not only an exploratory, descriptive index but an index useful in hypothesis testing. R. A. Fisher (1918) and Sewell Wright (1921) are credited with formulating the methodology for using correlation in testing Mendelian hypotheses.
With the development of the product-moment correlation coefficient, other related developments followed: In 1897, G. U. Yule published his classic paper on multiple and partial correlation. The idea of multiple correlation was this: Suppose one has p variables, X1, X2, …, Xp, and wishes to find that linear combination

$$\hat{X}_1 = \beta_2 X_2 + \cdots + \beta_p X_p \quad (1.3)$$

of the variables X2, …, Xp, which is maximally correlated with X1. The problem is to find the weights β2, …, βp that make the linear combination X̂1 maximally correlated with X1. After Yule's paper was published, multiple correlation became quite useful in prediction problems and turned out to be systematically related but not exactly equivalent to Gauss's solution for linear least-squares estimation of a variable, using information obtained on several independent variables observed at certain preselected values.
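A sketch of Equation 1.3 follows (the data are hypothetical); the weights that maximize the correlation of X̂1 with X1 are the ordinary least-squares regression weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))  # columns stand for X2 and X3 (illustrative)
x1 = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Center the variables, then solve for beta_2, ..., beta_p by least squares;
# X1_hat = beta_2 X2 + ... + beta_p Xp is then maximally correlated with X1.
Xc = X - X.mean(axis=0)
x1c = x1 - x1.mean()
beta, *_ = np.linalg.lstsq(Xc, x1c, rcond=None)

x1_hat = Xc @ beta
R = np.corrcoef(x1c, x1_hat)[0, 1]  # the multiple correlation
print(beta, R)
```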
In any case, with multiple correlation, the researcher can consider several components (on which he had direct measurements) as accounting for the variability in a given variable.
At this point, the stage was set for the development of factor analysis. By the time 1900 had arrived, researchers had obtained product-moment correlations on many variables such as physical measurements of organisms, intellectual-performance measures, and physical-performance measures. With variables showing relationships with many other variables, the need existed to formulate structural models to account for these relationships. In 1901, Pearson published a paper on lines and planes of closest fit to systems of points in space, which formed the basis for what we now call the principal-axes method of factoring. However, the first common-factor-analysis model is attributed to Spearman (1904). Spearman intercorrelated the test scores of 36 boys on topics such as classics, French, English, mathematics, discrimination of tones, and musical talent. Spearman had a theory, primarily attributed by him to Francis Galton and Herbert Spencer, that the abilities involved in taking each of these six tests were a general ability, common to all the tests, and a specific ability, specific to each test. Mathematically, this amounts to the equation

$$Y_j = a_j G + \psi_j \quad (1.4)$$

where
Yj is the jth manifest variable (e.g., test score in mathematics)
aj is a weight indicating the degree to which the latent general-ability variable G participates in Yj
ψj is an ability variable uncorrelated with G and specific to Yj

Without loss of generality, one can assume that E(Yj) = E(ψj) = E(G) = 0, for all j, implying that all variables have zero means. Then saying that ψj is specific to Yj amounts to saying that ψj does not covary with another manifest variable Yk, so that E(Yk ψj) = 0, with the consequence that E(ψj ψk) = 0 (implying that different specific variables do not covary). Thus the covariances between different variables are due only to the general-ability variable, that is,

$$E(Y_j Y_k) = E[(a_j G + \psi_j)(a_k G + \psi_k)] = E(a_j a_k G^2 + a_k G \psi_j + a_j G \psi_k + \psi_j \psi_k) = a_j a_k E(G^2) \quad (1.5)$$

From covariance to correlation is a simple step. Assuming in Equation 1.5 that $E(G^2) = 1$ (the variance of G is equal to 1), we then can derive the correlation between Yj and Yk:

$$\rho_{jk} = a_j a_k \quad (1.6)$$
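A brief sketch (not in the original text) of what Equations 1.4 through 1.6 imply, using invented loadings aj for six tests:

```python
import numpy as np

# Hypothetical general-ability loadings a_j for six tests
a = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])

# Under Spearman's model, every off-diagonal correlation is
# rho_jk = a_j * a_k (Equation 1.6); the diagonal is 1 by definition.
R = np.outer(a, a)
np.fill_diagonal(R, 1.0)

# One testable consequence: "tetrad differences" vanish when a single
# common factor holds, e.g. r_12 * r_34 - r_13 * r_24 = 0.
print(R[0, 1] * R[2, 3] - R[0, 2] * R[1, 3])  # 0.0 up to rounding
```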
Spearman noticed that the pattern of correlation coefficients obtained among the six intellectual-test variables in his study was consistent with the model of a single common variable and several specific variables.
For the remainder of his life Spearman championed the doctrine of one general ability and many specific abilities, although evidence increasingly accumulated which disconfirmed such a simple model for all intellectual-performance variables. Other British psychologists contemporary with Spearman either disagreed with his interpretation of general ability or modified his "two-factor" (general and specific factors) theory to include "group factors," corresponding to ability variables not general to all intellectual variables but general to subgroups of them. The former case was exemplified by Godfrey H. Thomson (cf. Thomson, 1956), who asserted that the mind does not consist of a part which participates in all mental performance and a large number of particular parts which are specific to each performance. Rather, the mind is a relatively undifferentiated collection of many tiny parts. Any intellectual performance that we may consider involves only a "sample" of all these many tiny parts. Two performance measures are intercorrelated because they sample overlapping sets of these tiny parts. And Thomson was able to show how data consistent with Spearman's model were consistent with his sampling-theory model also.
Another problem with G is that one tends to define it mathematically as "the factor common to all the variables in the set" rather than in terms of something external to the mathematics, for example, "rule-inferring ability." This is further exacerbated by the fact that to know what something is, you need to know what it is not. If all variables in a set have G in common, you have no instance for which it does not apply, and G easily becomes mathematical G by default. If there were other variables in the set that were not due to G, this would narrow the possibilities as to what G is in the world by indicating what it is not pertinent to. This problem has subtly bedeviled the theory of G throughout its history.
On the other hand, psychologists such as Cyril Burt and Philip E. Vernon took the view that in addition to a general ability (general intelligence), there were less general abilities such as verbal–numerical–educational ability and practical–mechanical–spatial–physical ability, and even less general abilities such as verbal comprehension, reading, spelling, vocabulary, drawing, handwriting, and mastery of various subjects (cf. Vernon, 1961). In other words, the mind was organized into a hierarchy of abilities running from the most general to the most specific. Their model of test scores can be represented mathematically much like the equation for multiple correlation as

$$Y_j = a_j G + b_{j1} G_1 + \cdots + b_{js} G_s + c_{j1} H_1 + \cdots + c_{jt} H_t + \psi_j$$

where
Yj is a manifest intellectual-performance variable
aj, bj1, …, bjs, cj1, …, cjt are the weights
G is the latent general-ability variable
G1, …, Gs are the major group factors
H1, …, Ht are the minor group factors
ψj is a specific-ability variable for the jth variable

Correlations between two observed variables, Yj and Yk, would depend upon having not only the general-ability variable in common but group-factor variables in common as well.
By the time all these developments in the theory of intellectual abilities had occurred, the 1930s had arrived, and the center of new developments in this theory (and indirectly of new developments in the methodology of common-factor analysis) had shifted to the United States, where L. L. Thurstone at the University of Chicago developed his theory and method of multiple-factor analysis. By this time, the latent-ability variables had come to be called "factors" owing to a usage of Spearman (1927). Thurstone differed from the British psychologists over the idea that there was a general-ability factor and that the mind was hierarchically organized. For him, there were major group factors but no general factor. These major group factors he termed the primary mental abilities. That he did not cater to the idea of a hierarchical organization for the primary mental abilities was most likely because of his commitment to a principle of parsimony; this caused him to search for factors which related to the observed variables in such a way that each factor pertained as much as possible to one nonoverlapping subset of the observed variables. Sets of common factors displaying this property, Thurstone said, had a "simple structure." To obtain an optimal simple structure, Thurstone had to consider common-factor variables that were intercorrelated. And in the case of factor analyses of intellectual-performance tests, Thurstone discovered that usually his common factors were all positively intercorrelated with one another. This fact was considerably reassuring to the British psychologists who believed that by relying on his simple-structure concept Thurstone had only hidden the existence of a general-ability factor, which they felt was evidenced by the correlations among his factors.
Perhaps one reason why Thurstone's simple-structure approach to factor analysis became so popular—not just in the United States but in recent years in England and other countries as well—was because simple-structure solutions could be defined in terms of more-or-less objective properties which computers could readily identify and the factors so obtained were easy to interpret. It seemed by the late 1950s, when the first large-scale electronic computers were entering universities, that all the drudgery could be taken out of factor-analytic computations and that the researcher could let the computer do most of his work for him. Little wonder, then, that not much thought was given to whether theoretically hierarchical solutions were preferable to simple-structure solutions, especially when hierarchical solutions did not seem to be blindly obtainable. And believing that factor analysis could automatically and blindly find the key latent variables in a domain, what
researchers would want hierarchical solutions which might be more difficult to interpret than simple-structure solutions?
The 1950s and early 1960s might be described as the era of blind factor analysis. In this period, factor analysis was frequently applied agnostically, as regards structural theory, to all sorts of data, from personality-rating variables, Rorschach-test-scoring variables, physiological variables, semantic-differential variables, and biographical-information variables (in psychology), to characteristics of mining districts (in mineralogy), characteristics of cities (in city planning), characteristics of arrowheads (in anthropology), characteristics of wasps (in zoology), variables in the stock market (in economics), and aromatic-activity variables (in chemistry), to name just a few applications. In all these applications the hope was that factor analysis could bring order and meaning to the many relationships between variables.
Whether blind factor analyses often succeeded in providing meaningful explanations for the relationships among variables is a debatable question. In the case of Rorschach-test-score variables (Cooley and Lohnes, 1962) there is little question that blind factor analysis failed to provide a manifestly meaningful account of the structure underlying the score variables. Again, factor analyses of personality trait-rating variables have not yielded factors universally regarded by psychologists as explanatory constructs of human behavior (cf. Mulaik, 1964; Mischel, 1968). Rather, the factors obtained in personality trait-rating studies represent confoundings of intrarater processes (semantic relationships among trait words) with intraratee processes (psychological and physiological relationships within the persons rated). In the case of factor-analytic studies of biographical inventory items, the chief benefit has been in terms of classifying inventory items into clusters of similar content, but as yet no theory as to life histories has emerged from such studies. Still, blind factor analyses have served classification purposes quite well in psychology and other fields, but these successes should not be interpreted as generally providing advances in structural theories as well.
In the first 60 years of the history of factor analysis, factor-analytic methodologists developed heuristic algebraic solutions and corresponding algorithms for performing factor analyses. Many of these methods were designed to facilitate the finding of approximate solutions using mechanical hand calculators. Harman (1960) credits Cyril Burt with formulating the centroid method, but Thurstone (1947) gave it its name and developed it more fully as an approximation to the computationally more challenging principal axes, the eigenvector–eigenvalue solution put forth by Hotelling (1933). Until the development of electronic computers, the centroid method was a simple and straightforward solution that closely approximated the principal-axes solution. But in the 1960s, computers came on line as the government poured billions into the development of computers for decryption work and into the mathematics of nuclear physics in developing nuclear weapons. Out of the latter came fast computer algorithms for finding eigenvectors and eigenvalues. Subsequently, factor analysts discovered the computer and the eigenvector
and eigenvalue routines and began programming them to obtain principal-axes solutions, which rapidly became the standard approach. Nevertheless, most of the procedures initially used were still based on least-squares methods, for the statistically more sophisticated method of maximum-likelihood estimation was still both mathematically and computationally challenging.
Throughout the history of factor analysis there were statisticians who sought to develop a more rigorous statistical theory for factor analysis. In 1940, Lawley (1940) made a major breakthrough with the development of equations for the maximum-likelihood estimation of factor loadings (assuming multivariate normality for the variables), and he followed up this work with other papers (1942, 1943, 1949) that sketched a framework for statistical testing in factor analysis. The problem was, to use these methods you needed maximum-likelihood estimates of the factor loadings, and Lawley's computational recommendations for finding solutions were not practical for more than a few variables. So factor analysts continued to use the centroid method and to regard any factor loading less than .30 as "nonsignificant."
In the 1950s, Rao (1955) developed an iterative computer program for obtaining maximum-likelihood estimates, but this was later shown not to converge. Howe (1955) showed that the maximum-likelihood estimates of Lawley (1949) could be derived mathematically without making any distributional assumptions at all by simply seeking to minimize the determinant of the matrix of partial correlations among residual variables after partialling out common factors from the original variables. Brown (1961) noted that the same idea was put forth on intuitive grounds by Thurstone in 1953. Howe also provided a far more efficient Gauss–Seidel algorithm for computing the solution. Unfortunately, this was ignored or unknown. In the meantime, Harman and Jones (1966) presented their Gauss–Seidel minres method of least-squares estimation, which rapidly converged and yielded close approximations to the maximum-likelihood estimates.
The major breakthrough mathematically, statistically, and computationally in maximum-likelihood exploratory factor analysis was made by Karl Jöreskog (1967), then a new PhD in mathematical statistics from the University of Uppsala in Sweden. He applied a then recently developed numerical algorithm of Fletcher and Powell (1963) to the maximum-likelihood estimation of the full set of parameters of the common-factor model. The algorithm was quite rapid in convergence. Jöreskog's algorithm has been the basis for maximum-likelihood estimation in most commercial computer programs ever since. However, the program was not always well integrated with other computing methods in some major commercial programs, so that such programs report principal-components eigenvalues rather than the eigenvalues of the weighted reduced correlation matrix of the common-factor model provided by Jöreskog's method, which Jöreskog used in initially determining the number of factors to retain.
Recognizing that more emphasis should be placed on the testing of hypotheses in factor-analytic studies, factor analysts in the latter half of the 1960s began increasingly to concern themselves with the methodology of hypothesis testing in factor analysis. The first efforts in this regard, using what are known as procrustean transformations, trace their beginnings to a paper by Mosier (1939) that appeared in Psychometrika nearly two decades earlier. The techniques of procrustean transformations seek to transform (by a linear transformation) the obtained factor-pattern matrix (containing regression coefficients for the observed variables regressed onto the latent, underlying factors) to be as much as possible like a hypothetical factor-pattern matrix constructed according to some structural hypothesis pertaining to the variables studied. When the transformed factor-pattern matrix is obtained, it is tested for its degree of fit to the hypothetical factor-pattern matrix. For example, Guilford (1967) used procrustean techniques to isolate factors predicted by his three-faceted model of the intellect. However, hypothesis testing with procrustean transformations has been displaced in favor of confirmatory factor analysis since the 1970s, because the latter is able to assess how well the model reproduces the sample covariance matrix.
Toward the end of the 1960s, Bock and Bargmann (1966) and Jöreskog (1969a) considered hypothesis testing from the point of view of fitting a hypothetical model to the data. In these approaches the researcher specifies, ahead of time, various parameters of the common-factor-analysis model relating manifest variables to hypothetical latent variables according to a structural theory pertaining to the manifest variables. The resulting model is then used to generate a hypothetical covariance matrix for the manifest variables that is tested for goodness of fit to a corresponding empirical covariance matrix (with unspecified parameters of the factor-analysis model adjusted to make the fit to the empirical covariance matrix as good as possible). These approaches to factor analysis have had the effect of encouraging researchers to have greater concern with substantive, structural theories before assembling collections of variables and implementing the factor-analytic methods. We will treat confirmatory factor analysis in Chapter 15, although it is better treated as a special case of structural equation modeling, which would be best dealt with in a separate book.
The factor analysis we primarily treat in this book is exploratory factor analysis, which may be regarded as an "abductive," "hypothesis-generating" methodology rather than a "hypothesis-testing" methodology. With the development of structural equation modeling, researchers have come to see traditional factor analysis as a methodology to be used, among other methods, at the outset of a research program, to formulate hypotheses about latent variables and their relation to observed variables. Furthermore, it is now regarded as just one of several approaches to formulating such hypotheses, although it has general applications any time one believes that a set of observed variables is dependent upon a set of latent common factors.
1.3 Example of Factor Analysis

At this point, to help the reader gain a more concrete appreciation of what is obtained in a factor analysis, it may help to consider a small factor-analytic study conducted by the author in connection with a research project designed to predict the reactions of soldiers to combat stress. The researchers had the theory that an individual soldier's reaction to combat stress would be a function of the degree to which he responded emotionally to the potential danger of a combat situation and the degree to which he nevertheless felt he could successfully cope with the situation. It was felt that, realistically, combat situations should arouse strong feelings of anxiety for the possible dangers involved. But ideally these feelings of anxiety should serve as internal stimuli for coping behaviors which would in turn provide the soldier with a sense of optimism in being able to deal with the situation. Soldiers who respond pessimistically to strong feelings of fear or anxiety were expected to have the greatest difficulties in managing the stress of combat. Soldiers who showed little appreciation of the dangers of combat were also expected to be unprepared for the strong anxiety they would likely feel in a real combat situation. They would have difficulties in managing the stress of combat, especially if they had past histories devoid of successful encounters with stressful situations.
To implement research on this theory, it was necessary to obtain measures of a soldier's emotional concern for the danger of a combat situation and of his degree of optimism in being able to cope with the situation. To obtain these measures, 14 seven-point adjectival rating scales were constructed, half of which were selected to measure the degree of emotional concern for threat, and half of which were selected to measure the degree of optimism in coping with the situation. However, when these adjectival scales were selected, the researchers were not completely certain to what extent these scales actually measured two distinct dimensions of the kind intended. Thus, the researchers decided to conduct an experiment to isolate the common-meaning dimensions among these 14 scales.
Two hundred and twenty-five soldiers in basic training were asked to rate the meaning of "firing my rifle in combat" using the 14 adjectival scales, with ratings being obtained from each soldier on five separate occasions over a period of 2 months. Intercorrelations among the 14 scales were then obtained by summing the cross products over the 225 soldiers and five occasions. (Intercorrelations were obtained in this way because the researchers felt that, although on any one occasion various soldiers might differ in their conceptions of "firing my rifle in combat" and on different occasions an individual soldier might have different conceptions, the major determinants of covariation among the adjectival scales would still be conventional-meaning dimensions common to the scales.) The matrix of intercorrelations, illustrated in Table 1.1, was then subjected to image factor analysis (cf. Jöreskog, 1962), which is a relatively
accurate but simple-to-compute approximation of common-factor analysis. Four orthogonal factors were retained, and the matrix of "loadings" associated with the "unrotated factors" is given in Table 1.2. The coefficients in this matrix are correlations of the observed variables with the common factors.

TABLE 1.1
Intercorrelations among 14 Scales

 1   1.00                                                                              Frightening
 2    .20  1.00                                                                        Useful
 3    .65  −.26  1.00                                                                  Nerve-shaking
 4   −.26   .74  −.32  1.00                                                            Hopeful
 5    .71  −.27   .70  −.32  1.00                                                      Terrifying
 6   −.25   .64  −.30   .68  −.31  1.00                                                Controllable
 7    .64  −.30   .73  −.34   .74  −.34  1.00                                          Upsetting
 8   −.40   .39  −.44   .40  −.47   .39  −.53  1.00                                    Painless
 9   −.13   .24  −.11   .24  −.16   .26  −.17   .27  1.00                              Exciting
10   −.45   .36  −.49   .42  −.53   .39  −.53   .58   .32  1.00                        Nondepressing
11    .59  −.26   .63  −.31   .32  −.32   .69  −.45  −.15  −.51  1.00                  Disturbing
12   −.30   .69  −.35   .75  −.36   .68  −.39   .44   .28   .48  −.38  1.00            Successful
13   −.36   .35  −.50   .43  −.42   .38  −.52   .45   .20   .49  −.45   .46  1.00      Settling (vs. unsettling)
14   −.35   .62  −.36   .65  −.38   .62  −.46   .50   .36   .51  −.45   .67   .49  1.00  Bearable

TABLE 1.2
Unrotated Factors

                      1     2     3     4
 1  Frightening      .73   .35   .01  −.15
 2  Useful          −.56   .60  −.10   .12
 3  Nerve-shaking    .78   .31  −.05  −.12
 4  Hopeful         −.62   .60  −.09   .13
 5  Terrifying       .78   .34   .43   .00
 6  Controllable    −.59   .52  −.06   .08
 7  Upsetting        .84   .29  −.08   .00
 8  Painless        −.65   .07   .03  −.33
 9  Exciting        −.29   .21  −.02  −.35
10  Nondepressing   −.70   .04   .03  −.36
11  Disturbing       .70   .13  −.59  −.05
12  Successful      −.67   .54  −.04   .05
13  Settling        −.63   .09   .08  −.16
14  Bearable        −.69   .43   .04  −.12

However, the unrotated factors of Table 1.2 are not readily interpretable, and they do not in this form appear to correspond to the two expected-meaning dimensions used in selecting the 14 scales. At this point it was decided, after
some experimentation, to rotate only the first two factors and to retain the latter two unrotated factors as "difficult to interpret" factors. Rotation of the first two factors was done using Kaiser's normalized Varimax method (cf. Kaiser, 1958). The resulting rotated matrix is given in Table 1.3.

TABLE 1.3
Rotated Factors

                      1     2
 1  Frightening      .83  −.10
 2  Useful          −.11   .85
 3  Nerve-shaking    .84  −.16
 4  Hopeful         −.17   .87
 5  Terrifying       .86  −.14
 6  Controllable    −.18   .80
 7  Upsetting        .89  −.22
 8  Painless        −.50   .44
 9  Exciting        −.12   .34
10  Nondepressing   −.57   .43
11  Disturbing       .67  −.29
12  Successful      −.24   .85
13  Settling        −.48   .43
14  Bearable        −.33   .76

The meaning of "rotation" may not be clear to the reader. Therefore let us consider the plot in Figure 1.1 of the 14 variables, using for their coordinates the loadings of the variables on the first two unrotated factors. Here we see that the coordinate axes do not correspond to variables that would be clearly definable by their association with the variables. On the other hand, note the cluster of points in the upper right-hand quadrant (variables 1, 3, 5, and 7) and the cluster of points in the upper left-hand quadrant (variables 2, 4, 6, 12, and 14). It would seem that one could rotate the coordinate axes so as to have them pass near these clusters. As a matter of fact, this is what has been done to obtain the rotated coordinates in Table 1.3, which are plotted in Figure 1.2. Rotated factor 1 appears almost exclusively associated with variables 1, 3, 5, 7, and 11, which were picked as measures of a fear response, whereas rotated factor 2 appears most closely associated with variables 2, 4, 6, 12, and 14, which were picked as measures of optimism regarding outcome. Although variables 8, 10, and 13 appear now to be consistent in their relationships to these two dimensions, they are not unambiguous measures of either factor. Variable 9 appears to be a poor measure of these two dimensions.

FIGURE 1.1 Plot of 14 variables on unrotated factors 1 and 2.
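For readers curious about the computation behind such a rotation, the following is a compact sketch (not from the original text) of a varimax rotation; it implements the raw, unnormalized varimax criterion, whereas Kaiser's (1958) normalized method used for Table 1.3 additionally rescales the rows of the loading matrix before rotating:

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    """Orthogonal rotation maximizing the (raw) varimax criterion.

    L is a p x m loading matrix; returns the rotated loadings and the
    m x m orthogonal transformation matrix T.
    """
    p, m = L.shape
    T = np.eye(m)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ T  # current rotated loadings
        # SVD step of the classic varimax iteration
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p)
        )
        T = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):  # no further improvement
            break
        d = d_new
    return L @ T, T

# Illustrative use on four of the unrotated loadings from Table 1.2
L = np.array([[ .73,  .35],
              [-.56,  .60],
              [ .78,  .31],
              [-.62,  .60]])
rotated, T = varimax(L)
print(np.round(rotated, 2))
```

Applied to all 14 rows of the first two columns of Table 1.2, a raw varimax of this kind should land near, though not exactly on, the normalized solution reported in Table 1.3.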
Some factor analysts at this point might prefer to relax the requirement that the obtained factors be orthogonal to one another. They would, in this case, most likely construct a unit-length vector collinear with variable 7 to represent an "oblique" factor 1 and another unit-length vector collinear with variable 12 to represent an "oblique" factor 2. The resulting oblique factors would be negatively correlated with one another and would be interpreted as dimensions that are slightly negatively correlated. Such "oblique" factors are drawn in Figure 1.2 as arrows from the origin.
In conclusion, factor analysis has isolated two dimensions among the 14 scales which appear to correspond to dimensions expected to be present when the 14 scales were chosen for study. Factor analysis has also shown that some of the 14 scales (variables 8, 9, 10, and 13) are not unambiguous measures of the intended dimensions. These scales can be discarded in constructing a final set of scales for measuring the intended dimensions. Factor analysis has also revealed the presence of additional, unexpected dimensions among the scales. Although it is possible to hazard guesses as to the meaning of these additional dimensions (represented by factors 3 and 4), such guessing is not strongly recommended. There is considerable likelihood that the interpretation of these dimensions will be spurious. This is not
to say that factor analysis cannot, at times, discover something unexpected but interpretable. It is just that in the present data the two additional dimensions are so poorly determined from the variables as to be interpretable only with a considerable risk of error.
This example of factor analysis represents the traditional, exploratory use of factor analysis where the researcher has some idea of what he will encounter but nevertheless allows the method freedom to find unexpected dimensions (or factors).

FIGURE 1.2 Plot of 14 variables on rotated factors 1 and 2.
2 Mathematical Foundations for Factor Analysis

2.1 Introduction

Ideally, one begins a study of factor analysis with a mathematical background of up to a year of calculus. This is not to say that factor analysis requires an extensive knowledge of calculus, because calculus is used in only a few instances, such as in finding weights to assign to a set of independent variables to maximize or minimize the variance of a resulting linear combination. Or it will be used in finding a transformation matrix that minimizes, say, a criterion for simple structure in factor rotation. But having calculus in one's background provides sufficient exposure to working with mathematical concepts so that one will have overcome reacting to a mathematical subject such as factor analysis as though it were an esoteric subject comprehensible only to select initiates to its mysteries. One will have learned those subjects such as trigonometry, college algebra, and analytical geometry upon which factor analysis draws heavily.
In practice, however, the author recognizes that many students who now undertake a study of factor analysis come from the behavioral, social, and biological sciences, where mathematics is not greatly stressed. Consequently, in this chapter the author attempts to provide a brief introduction to those mathematical topics from modern algebra, trigonometry, analytic geometry, and calculus that will be necessary in the study of factor analysis. Moreover, the author also provides a background in operations with vectors and matrix algebra, which even students who have just had first-year calculus will more than likely find new. In this regard most students will find themselves on even ground if they have had up to college algebra, because operations with vectors and matrix algebra are extensions of algebra using a new notation.

2.2 Scalar Algebra

By the term "scalar algebra," we refer to the ordinary algebra applicable to the real numbers with which the reader should already be familiar. The use
of this term distinguishes ordinary algebra from operations with vectors and matrix algebra, which we will take up shortly.

2.2.1 Fundamental Laws of Scalar Algebra

The following laws govern the basic operations of scalar algebra such as addition, subtraction, multiplication, and division:

1. Closure law for addition: a + b is a unique real number.
2. Commutative law for addition: a + b = b + a.
3. Associative law for addition: (a + b) + c = a + (b + c).
4. Closure law for multiplication: ab is a unique real number.
5. Commutative law for multiplication: ab = ba.
6. Associative law for multiplication: a(bc) = (ab)c.
7. Identity law for addition: There exists a number 0 such that a + 0 = 0 + a = a.
8. Inverse law for addition: a + (−a) = (−a) + a = 0.
9. Identity law for multiplication: There exists a number 1 such that a1 = 1a = a.
10. Inverse law for multiplication: $a\frac{1}{a} = \frac{1}{a}a = 1$.
11. Distributive law: a(b + c) = ab + ac.

The above laws are sufficient for dealing with the real-number system. However, the special properties of zero should be pointed out:

$$a \cdot 0 = 0 \qquad \frac{a}{0} \text{ is undefined} \qquad \frac{0}{0} \text{ is indeterminate}$$

2.2.1.1 Rules of Signs

The rule of signs for multiplication is given as

$$a(-b) = -(ab) \quad \text{and} \quad (-a)(-b) = +(ab)$$
The rule of signs for division is given as

$$\frac{-a}{b} = \frac{a}{-b} = -\frac{a}{b} \quad \text{and} \quad \frac{-a}{-b} = \frac{a}{b}$$

The rule of signs for removing parentheses is given as

$$-(a - b) = -a + b \quad \text{and} \quad -(a + b) = -a - b$$

2.2.1.2 Rules for Exponents

If n is a positive integer, then $x^n$ will stand for $x \cdot x \cdots x$ with n terms. If $x^n = a$, then x is known as the nth root of a. The following rules govern the use of exponents:

1. $x^a x^b = x^{a+b}$.
2. $(x^a)^b = x^{ab}$.
3. $(xy)^a = x^a y^a$.
4. $\left(\frac{x}{y}\right)^a = \frac{x^a}{y^a}$.
5. $\frac{x^a}{x^b} = x^{a-b}$.

2.2.1.3 Solving Simple Equations

Let x stand for an unknown quantity, and let a, b, c, and d stand for known quantities. Then, given the following equation

$$ax + b = cx + d$$

the unknown quantity x can be found by applying operations to both sides of the equation until only an x remains on one side of the equation and the known quantities on the other side. That is,

$$ax - cx + b = d \quad \text{(subtract } cx \text{ from both sides)}$$
$$ax - cx = d - b \quad \text{(subtract } b \text{ from both sides)}$$
$$(a - c)x = d - b \quad \text{(by reversing the distributive law)}$$
$$x = \frac{d - b}{a - c} \quad \text{(by dividing both sides by } a - c\text{)}$$
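The same elimination can be checked symbolically; the following sketch (not in the original text) assumes the SymPy library is available:

```python
from sympy import Eq, solve, symbols

a, b, c, d, x = symbols('a b c d x')

# Solve a*x + b = c*x + d for x; this reproduces x = (d - b)/(a - c)
solution = solve(Eq(a*x + b, c*x + d), x)
print(solution)  # [(d - b)/(a - c)]
```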
2.3 Vectors

It may be of some help to those who have not had much exposure to the concepts of modern algebra to learn that one of the essential aims of modern algebra is to classify mathematical systems—of which scalar algebra is an example—according to the abstract rules which govern these systems. To illustrate, a very simple mathematical system is a group. A group is a set of elements and a single operation for combining them (which in some cases is addition and in some others multiplication) behaving according to the following properties:

1. If a and b are elements of the group, then a + b and b + a are also elements of the group, although not necessarily the same element (closure).
2. If a, b, and c are elements of the group, then

$$(a + b) + c = a + (b + c) \quad \text{(associative law)}$$

3. There is an element 0 in the group such that for every a in the group

$$a + 0 = 0 + a = a \quad \text{(identity law)}$$

4. For each a in the group, there is an element (−a) in the group such that

$$a + (-a) = (-a) + a = 0 \quad \text{(inverse law)}$$

One should realize that the nature of the elements in the group has no bearing upon the fact that the elements constitute a group. These elements may be integers, real numbers, vectors, matrices, or positions of an equilateral triangle; it makes no difference what they are, as long as under the operator (+) they behave according to the properties of a group. Obviously, a group is a far simpler system than the system which scalar algebra exemplifies. Only 4 laws govern the group, whereas 11 laws are needed to govern the system exemplified by scalar algebra, which, by the way, is known to mathematicians as a "field."
Among the various abstract mathematical systems, the system known as a "vector space" is the most important for factor analysis. (Note. One should not let the term "space" unduly influence one's concept of a vector space. A vector space need have no geometric connotations but may be treated entirely as an abstract system. Geometric representations of vectors are only particular examples of a vector space.) A vector space consists of two sets of mathematical objects—a set V of "vectors" and a set R of elements of a "field" (such as
the scalars of scalar algebra)—together with two operations for combining them. The first operation is known as "addition of vectors" and has the following properties such that for every u, v, w in the set V of vectors:

1. u + v is also a uniquely defined vector in V.
2. u + (v + w) = (u + v) + w.
3. u + v = v + u.
4. There exists a vector 0 in V such that u + 0 = 0 + u = u.
5. For each vector u in V there exists a unique vector −u such that

$$\mathbf{u} + (-\mathbf{u}) = (-\mathbf{u}) + \mathbf{u} = \mathbf{0}$$

(One may observe that under addition of vectors the set V of vectors is a group.) The second operation governs the combination of the elements of the field R of scalars with the elements of the set V of vectors and is known as "scalar multiplication." Scalar multiplication has the following properties such that for all elements a, b from the field R of scalars and all vectors u, v from the set V of vectors:

6. au is a vector in V.
7. a(u + v) = au + av.
8. (a + b)u = au + bu.
9. a(bu) = ab(u).
10. 1u = u; 0u = 0.

In introducing the idea of a vector space as an abstract mathematical system, we have so far deliberately avoided considering what the objects known as vectors might be. Our purpose in doing so has been to have the reader realize at the outset that a vector space is an abstract system that may be found in connection with various kinds of mathematical objects. For example, the vectors of a vector space may be identified with the elements of any field such as the set of real numbers. In such a case, the addition of vectors corresponds to the addition of elements in the field. In another example, the vectors of a vector space may be identified with n-tuples, which are ordered sets of real numbers. (In the upcoming discussion we will develop the properties of vectors more fully in connection with vectors represented by n-tuples.) Vectors may also be identified with the unidimensional random variables of mathematical statistics. This fact has important implications for multivariate statistics, and for factor analysis and structural equation modeling in particular, because it means one may use the vector concept to unify the treatment of variables in both finite and infinite populations. The reader should note at this point that the key idea in this book is that, in any linear analysis of variables of the behavioral, social, or biological sciences, the variables may be treated as if they are vectors in a linear vector space.
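For n-tuples, which are taken up next, these axioms can be spot-checked numerically; the sketch below (not in the original text) checks properties 7 through 9 for particular values, which is of course an illustration, not a proof:

```python
import numpy as np

# Treating 3-tuples as vectors: a numerical spot-check of the
# scalar-multiplication properties for one choice of scalars and vectors
u = np.array([1.0, 3.0, 2.0])
v = np.array([2.0, 1.0, 5.0])
a, b = 2.0, 3.0

print(np.allclose(a * (u + v), a * u + a * v))  # property 7
print(np.allclose((a + b) * u, a * u + b * u))  # property 8
print(np.allclose(a * (b * u), (a * b) * u))    # property 9
```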
The concrete representation of these variables may differ in various contexts (i.e., may be n-tuples or random variables), but they may always be considered vectors.

2.3.1 n-Tuples as Vectors

In a vector space of n-tuples, by a vector we shall mean a point in n-dimensional space designated by an ordered set of numbers known as an n-tuple, which are the coordinates of the point. For example, (1,3,2) is a 3-tuple which represents a vector in three-dimensional space which is 1 unit from the origin (of the coordinate system) in the direction along the x-axis, 3 units from the origin in the direction of the y-axis, and 2 units from the origin in the direction of the z-axis. Note that in a set of coordinates the numbers appear in special positions to indicate the distance one must go along each of the reference axes to arrive at the point. In graphically portraying a vector, we shall use the convention of drawing an arrow from the origin of the coordinate system to the point. For example, the vector (1,3,2) is illustrated in Figure 2.1.
In a vector space of n-tuples we will be concerned with certain operations to be applied to the n-tuples which define the vectors. These operations will define addition, subtraction, and multiplication in the vector space of n-tuples. As a notational shorthand to allow us to forgo writing the coordinate numbers in full when we wish to express equations in vector notation, we will designate individual vectors by lowercase boldface letters. For example, let a stand for the vector (1,3,2).

2.3.1.1 Equality of Vectors

Two vectors are equal if they have the same coordinates.

FIGURE 2.1 Graphical representation of vector (1,3,2) in three-dimensional space.
For example, if a = (1,2,4) and b = (1,2,4), then a = b. A necessary condition that two vectors be equal is that they have the same number of coordinates. For example, if

$$\mathbf{a} = (1, 2, 3, 4) \quad \text{and} \quad \mathbf{b} = (1, 2, 3)$$

then a ≠ b. In fact, when two vectors have different numbers of coordinates, they refer to a different order of n-tuples and cannot be compared or added.

2.3.2 Scalars and Vectors

Vectors compose a system complete in themselves. But they are of a different order from the algebra we normally deal with when using real numbers. Sometimes we introduce real numbers into the system of vectors. When we do this, we call the real numbers "scalars." In our notational scheme, we distinguish scalars from vectors by writing the scalars in italics and the vectors in lowercase boldface characters. Thus, a ≠ a. In vector notation, we will most often consider vectors in the abstract. Then, rather than using actual numbers to stand for the coordinates of the vectors, we will use scalar quantities expressed algebraically. For example, let a general vector a in five-dimensional space be written as

$$\mathbf{a} = (a_1, a_2, a_3, a_4, a_5)$$

In this example a stands for a coordinate, each being distinguished from the others by a different subscript. Whenever possible, algebraic expressions for the coordinates of vectors should take the same character as the character standing for the vector itself. This will not always be done, however.

2.3.3 Multiplying a Vector by a Scalar

Let a be a vector such that

$$\mathbf{a} = (a_1, a_2, \ldots, a_n)$$

and λ a scalar; then the operation

$$\lambda \mathbf{a} = \mathbf{c}$$

produces another vector c such that

$$\mathbf{c} = (\lambda a_1, \lambda a_2, \ldots, \lambda a_n)$$

In other words, multiplying a vector by a scalar produces another vector that has for components the components of the first vector each multiplied by the scalar. To cite a numerical example, let a = (1,3,4,5); then

$$\mathbf{c} = 2\mathbf{a} = (2 \times 1, 2 \times 3, 2 \times 4, 2 \times 5) = (2, 6, 8, 10)$$
2.3.4 Addition of Vectors

If a = (1,3,2) and b = (2,1,5), then their sum, denoted a + b, is another vector c such that

$$\mathbf{c} = \mathbf{a} + \mathbf{b} = ((1 + 2), (3 + 1), (2 + 5)) = (3, 4, 7)$$

When we add two vectors together, we add their corresponding coordinates together to obtain a new vector. The addition of vectors is found in physics in connection with the analysis of forces acting upon a body, where vector addition leads to the "resultant" by the well-known "parallelogram law." This law states that if two vectors are added together, then lines drawn from the points of these vectors to the point of the vector produced by the addition will make a parallelogram with the original vectors. In Figure 2.2, we show the result of adding the two vectors (1,3) and (3,2) together. If more than two vectors are added together, the result is still another vector. In some factor-analytic procedures, this vector is known as a "centroid," because it tends to be at the center of the group of vectors added together.

FIGURE 2.2 Graphical representation of the sum of vectors (1,3) and (3,2) by vector (4,5) in two-dimensional space, illustrating the parallelogram law for addition of vectors.
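A short sketch (not in the original text) of these componentwise operations, with NumPy arrays standing in for n-tuples:

```python
import numpy as np

a = np.array([1, 3, 2])
b = np.array([2, 1, 5])

print(2 * a)   # scalar multiplication: [2 6 4]
print(a + b)   # vector addition: [3 4 7], the "resultant"

# A centroid-like vector: the average of several vectors
vectors = np.array([[1, 3], [3, 2], [2, 4]])
print(vectors.mean(axis=0))
```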
2.3.5 Scalar Product of Vectors

In some vector spaces, in addition to the two operations of addition of vectors and scalar multiplication, a third operation known as the scalar product of two vectors (written $\mathbf{xy}$ for each pair of vectors $\mathbf{x}$ and $\mathbf{y}$) is defined, which associates a scalar with each pair of vectors. This operation has the following abstract properties, given that $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$ are arbitrary vectors and $a$ an arbitrary scalar:

1. $\mathbf{xy} = \mathbf{yx} =$ a scalar
2. $\mathbf{x}(\mathbf{y} + \mathbf{z}) = \mathbf{xy} + \mathbf{xz}$
3. $\mathbf{x}(a\mathbf{y}) = a(\mathbf{xy})$
4. $\mathbf{xx} \ge 0$; $\mathbf{xx} = 0$ implies $\mathbf{x} = \mathbf{0}$

When the scalar product is defined in a vector space, the vector space is known as a "unitary vector space." In a unitary vector space, it is possible to establish the length of a vector as well as the cosine of the angle between pairs of vectors. Factor analysis, like other multivariate linear analyses, is concerned exclusively with unitary vector spaces.

We will now consider the definition of the scalar product for a vector space of n-tuples. Let $\mathbf{a}$ be the vector $(a_1, a_2, \ldots, a_n)$ and $\mathbf{b}$ the vector $(b_1, b_2, \ldots, b_n)$; then the scalar product of $\mathbf{a}$ and $\mathbf{b}$, written $\mathbf{ab}$, is the sum of the products of corresponding components of the vectors, that is,

$\mathbf{ab} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$   (2.1)

To use a simple numerical example, let $\mathbf{a} = (1,2,5)$ and $\mathbf{b} = (3,3,4)$; then $\mathbf{ab} = 29$.

As a further note on notation, consider that an expression such as Equation 2.1, containing a series of terms to be added together which differ only in their subscripts, can be shortened by using summational notation. For example, Equation 2.1 can be rewritten as

$\mathbf{ab} = \sum_{i=1}^{n} a_i b_i$   (2.2)

As an explanation of this notation, the expression $a_i b_i$ on the right-hand side of the sigma sign stands for a general term in the series of terms to be added, as in Equation 2.1. The subscript $i$ in the term $a_i b_i$ stands for a general subscript. The expression $\sum_{i=1}^{n}$ indicates that one must add together a series of subscripted terms. The expression "$i = 1$" underneath the sigma sign indicates which subscript is pertinent to the summation governed by this summation sign—in the present example the subscript $i$ is pertinent—as well as the first value in the series that the subscript will take. The expression $n$ above the sigma sign indicates the highest value the subscript will take. Thus we are to add together all terms in the series with the subscript $i$ ranging in value from 1 to $n$.
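The following hedged Python sketch computes Equations 2.1 and 2.2 directly and checks two of the abstract properties numerically (again illustrative only; scalar_product is my name, not the book's):

```python
# Illustrative sketch only: the scalar product of Equations 2.1 / 2.2.

def scalar_product(a, b):
    """Sum of products of corresponding components of two vectors."""
    if len(a) != len(b):
        raise ValueError("scalar product requires vectors of equal dimension")
    return sum(a_i * b_i for a_i, b_i in zip(a, b))

a, b = (1, 2, 5), (3, 3, 4)
print(scalar_product(a, b))                          # 29 = 1*3 + 2*3 + 5*4
print(scalar_product(a, b) == scalar_product(b, a))  # property 1: symmetry
print(scalar_product(a, a) >= 0)                     # property 4: xx is never negative
```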
2.3.6 Distance between Vectors

In high school geometry we learn from the Pythagorean theorem that the square on the hypotenuse of a right triangle is equal to the sum of the squares on the other two sides. This theorem forms the basis for finding the distance between two vector points. We shall define the distance between two vectors $\mathbf{a}$ and $\mathbf{b}$, both with $n$ components, as

$|\mathbf{a} - \mathbf{b}| = \left[\sum_{i=1}^{n} (a_i - b_i)^2\right]^{1/2}$

This means that we find the sum of squared differences between the corresponding components of the two vectors and then take the square root of that sum. In Figure 2.3, we have diagrammed the geometric equivalent of this formula for the two-dimensional case:

$|\mathbf{a} - \mathbf{b}| = \left[(a_1 - b_1)^2 + (a_2 - b_2)^2\right]^{1/2}$   (2.3)

FIGURE 2.3  Graphical illustration of the application of the Pythagorean theorem to the determination of the distance between two two-dimensional vectors a and b.
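A minimal sketch of the distance formula, under the same assumed tuple representation:

```python
# Illustrative sketch only: Euclidean distance between two vectors.
import math

def distance(a, b):
    """|a - b|: square root of the summed squared coordinate differences."""
    return math.sqrt(sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b)))

print(distance((4, 5), (1, 1)))        # 5.0, a 3-4-5 right triangle
print(distance((1, 2, 3), (1, 2, 3)))  # 0.0, a vector is at zero distance from itself
```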
2.3.7 Length of a Vector

Using Equation 2.3, we can find an equation for the length of a vector, that is, the distance of its point from the origin of the coordinate system. If we define the "zero vector" as

$\mathbf{0} = (0, 0, \ldots, 0)$

then

$|\mathbf{a}| = |\mathbf{a} - \mathbf{0}| = \left[\sum_{i=1}^{n} (a_i - 0)^2\right]^{1/2} = \left[\sum_{i=1}^{n} a_i^2\right]^{1/2}$   (2.4)

The length of a vector $\mathbf{a}$, denoted $|\mathbf{a}|$, is thus the square root of the sum of the squares of its components. (Note: do not confuse $|\mathbf{a}|$ with $|\mathbf{A}|$, which is a determinant.)

2.3.8 Another Definition for Scalar Multiplication

Another way of expressing the scalar product of vectors is given by the formula

$\mathbf{ab} = |\mathbf{a}||\mathbf{b}| \cos\theta$   (2.5)

where $\theta$ is the angle between the vectors. In other words, the scalar product of one vector with another is equivalent to the product of the lengths of the two vectors times the cosine of the angle between them.

2.3.9 Cosine of the Angle between Vectors

Since we have raised the concept of the cosine of the angle between vectors, we should consider the meaning of the cosine function as well as other important trigonometric functions. In Figure 2.4, there is a right triangle where $\theta$ is the value of the angle between the base of the triangle and the hypotenuse. If we designate the length of the side opposite the angle $\theta$ as $a$, the length of the base as $b$, and the length of the hypotenuse as $c$, the ratios of these sides to one another give the following trigonometric functions:

$\tan\theta = a/b \qquad \sin\theta = a/c \qquad \cos\theta = b/c$

FIGURE 2.4  A right triangle with angle θ between base and hypotenuse.
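Equations 2.4 and 2.5 together give a practical way to compute the angle between two vectors: rearranging Equation 2.5 yields $\cos\theta = \mathbf{ab}/(|\mathbf{a}||\mathbf{b}|)$. A hedged sketch (function names length and cosine are mine):

```python
# Illustrative sketch only: vector length (Eq. 2.4) and the cosine of the
# angle between two vectors, from rearranging Eq. 2.5.
import math

def length(a):
    """|a|: the distance of a vector's point from the origin."""
    return math.sqrt(sum(a_i ** 2 for a_i in a))

def cosine(a, b):
    """cos(theta) for the angle theta between vectors a and b."""
    ab = sum(a_i * b_i for a_i, b_i in zip(a, b))  # scalar product
    return ab / (length(a) * length(b))

print(length((3, 4)))          # 5.0
print(cosine((1, 0), (0, 1)))  # 0.0: perpendicular vectors
print(cosine((2, 2), (1, 1)))  # ~1.0: collinear vectors (theta = 0)
```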