Factor Analysis with SPSS – East Carolina University (core.ecu.edu/psyc/wuenschk/mv/fa/fa-s…)


Factor Analysis with SPSS

Karl L. Wuensch, Dept. of Psychology

East Carolina University

What is a Common Factor?

• It is an abstraction, a hypothetical construct that relates to at least two of our measurement variables.

• We want to estimate the common factors that contribute to the variance in our variables.

• Is this an act of discovery or an act of invention?

What is a Unique Factor?

• It is a factor that contributes to the variance in only one variable.

• There is one unique factor for each variable.

• The unique factors are unrelated to one another and unrelated to the common factors.

• We want to exclude these unique factors from our solution.

Iterated Principal Factors Analysis

• The most common type of FA.
• Also known as principal axis FA.
• We eliminate the unique variance by replacing, on the main diagonal of the correlation matrix, the 1’s with estimates of the communalities.

• Initial estimate of communality = R2 between one variable and all others.
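The initial communality estimate (the squared multiple correlation of each variable with all the others) can be read off the inverse of the correlation matrix: SMC = 1 − 1/r^ii, where r^ii is the corresponding diagonal element of R⁻¹. A minimal numpy sketch, using an illustrative toy matrix rather than the beer data:

```python
import numpy as np

# Squared multiple correlation (SMC) of each variable with all the others,
# used as the initial communality estimate.  SMC_i = 1 - 1 / (i-th diagonal
# element of the inverse of the correlation matrix R).
def initial_communalities(R):
    return 1 - 1 / np.diag(np.linalg.inv(R))

# Toy 3-variable correlation matrix (illustrative values, not the beer data).
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(initial_communalities(R))
```

Each element of the result equals the R² you would get from regressing that variable on the remaining ones.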

Let’s Do It

• Using the beer data, change the extraction method to principal axis.

Look at the Initial Communalities

• They were all 1’s for our PCA.
• They sum to 5.675.
• We have eliminated 7 – 5.675 = 1.325 units of unique variance.

Communalities

            Initial   Extraction
COST          .738      .745
SIZE          .912      .914
ALCOHOL       .866      .866
REPUTAT       .499      .385
COLOR         .922      .892
AROMA         .857      .896
TASTE         .881      .902

Extraction Method: Principal Axis Factoring.

Iterate!

• Using the estimated communalities, obtain a solution.
• Take the communalities from the first solution and insert them into the main diagonal of the correlation matrix.
• Solve again.
• Take the communalities from this second solution and insert them into the correlation matrix.
• Solve again.
• Repeat this, over and over, until the changes in communalities from one iteration to the next are trivial.
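The loop above can be sketched in a few lines of numpy. This is a schematic implementation of iterated principal axis factoring, not SPSS’s exact algorithm: communalities go on the diagonal, the reduced matrix is factored, and the communalities are updated from the loadings until they stabilize.

```python
import numpy as np

# Sketch of iterated principal axis factoring: put communality estimates on
# the diagonal of R, factor the reduced matrix, update the communalities
# from the loadings, and repeat until the change is trivial.
def iterated_paf(R, n_factors, tol=1e-6, max_iter=200):
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # initial estimates = SMCs
    for _ in range(max_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)             # replace the 1's with communalities
        vals, vecs = np.linalg.eigh(Rr)      # eigenvalues in ascending order
        top = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))
        new_h2 = (loadings ** 2).sum(axis=1) # communality = sum of squared loadings
        if np.max(np.abs(new_h2 - h2)) < tol:
            break
        h2 = new_h2
    return loadings, new_h2
```

On a correlation matrix with a perfect one-factor structure, the iteration converges to the generating loadings (up to sign).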

• Our final communalities sum to 5.6.
• After excluding 1.4 units of unique variance, we have extracted 5.6 units of common variance.
• That is 5.6 / 7 = 80% of the total variance in our seven variables.
• We have packaged those 5.6 units of common variance into two factors:

Total Variance Explained

         Extraction Sums of Squared Loadings      Rotation Sums of Squared Loadings
Factor   Total   % of Variance   Cumulative %     Total   % of Variance   Cumulative %
1        3.123      44.620          44.620        2.879      41.131          41.131
2        2.478      35.396          80.016        2.722      38.885          80.016

Extraction Method: Principal Axis Factoring.

Our Rotated Factor Loadings

• Not much different from those for the PCA.

Rotated Factor Matrix(a)

              Factor 1   Factor 2
TASTE           .950      -.022
AROMA           .946       .021
COLOR           .942       .068
SIZE            .073       .953
ALCOHOL         .030       .930
COST           -.046       .862
REPUTAT        -.431      -.447

Extraction Method: Principal Axis Factoring.  Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 3 iterations.

Reproduced and Residual Correlation Matrices

• Correlations between variables result from their sharing common underlying factors.

• Try to reproduce the original correlation matrix from the correlations between factors and variables (the loadings).

• The difference between the reproduced correlation matrix and the original correlation matrix is the residual matrix.
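A reproduced correlation is just the sum, over factors, of the products of the two variables’ loadings. A quick numpy check using the TASTE and AROMA loadings from the rotated factor matrix above:

```python
import numpy as np

# Reproduced correlation between two variables = sum over factors of
# (variable i's loading on factor k) * (variable j's loading on factor k).
# Loadings below are TASTE and AROMA from the rotated factor matrix.
taste = np.array([0.950, -0.0217])
aroma = np.array([0.946, 0.02106])
reproduced = taste @ aroma
print(round(reproduced, 3))   # close to the observed TASTE-AROMA correlation
```

The residual for that pair is the observed correlation minus this reproduced value.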

• We want these residuals to be small.
• Check “Reproduced” under “Descriptives” in the Factor Analysis dialogue box, to get both of these matrices:

Reproduced Correlations

Reproduced Correlation
             COST    SIZE   ALCOHOL  REPUTAT  COLOR   AROMA   TASTE
COST        .745b    .818    .800    -.365     .015   -.026   -.063
SIZE         .818   .914b    .889    -.458     .134    .090    .049
ALCOHOL      .800    .889   .866b    -.428     .091    .048    .008
REPUTAT     -.365   -.458   -.428    .385b    -.436   -.417   -.399
COLOR        .015    .134    .091    -.436    .892b    .893    .893
AROMA       -.026    .090    .048    -.417     .893   .896b    .898
TASTE       -.063    .049    .008    -.399     .893    .898   .902b

Residual(a)
             COST    SIZE   ALCOHOL  REPUTAT  COLOR   AROMA   TASTE
COST                 .014   -.033    -.040     .003   -.021   -.001
SIZE         .014            .015     .065     .045    .008   -.023
ALCOHOL     -.033    .015            -.035    -.019   -.004    .004
REPUTAT     -.040    .065   -.035              .064   -.026   -.044
COLOR        .003    .045   -.019     .064             .016    .010
AROMA       -.021    .008   -.004    -.026     .016           -.028
TASTE       -.001   -.023    .004    -.044     .010   -.028

Extraction Method: Principal Axis Factoring.
a. Residuals are computed between observed and reproduced correlations. There are 2 (9.0%) nonredundant residuals with absolute values greater than 0.05.
b. Reproduced communalities.

Nonorthogonal (Oblique) Rotation

• The axes will not be perpendicular; the factors will be correlated with one another.
• The factor loadings (in the pattern matrix) will no longer equal the correlations between each factor and each variable.

• They will still equal the beta weights, the A’s in
  Xj = Aj1F1 + Aj2F2 + … + AjmFm + Uj

• Promax rotation is available in SPSS.
• First a Varimax rotation is performed.
• Then the axes are rotated obliquely.
• Here are the beta weights, in the “Pattern Matrix,” the correlations in the “Structure Matrix,” and the correlations between factors:

Beta Weights (Pattern Matrix) and Correlations (Structure Matrix)

Pattern Matrix(a)

              Factor 1   Factor 2
TASTE           .955      -.071
AROMA           .949      -.028
COLOR           .943       .019
SIZE            .022       .953
ALCOHOL        -.021       .932
COST           -.093       .868
REPUTAT        -.408      -.426

Extraction Method: Principal Axis Factoring.  Rotation Method: Promax with Kaiser Normalization.
a. Rotation converged in 3 iterations.

Structure Matrix

              Factor 1   Factor 2
TASTE           .947       .030
AROMA           .946       .072
COLOR           .945       .118
SIZE            .123       .956
ALCOHOL         .078       .930
COST           -.002       .858
REPUTAT        -.453      -.469

Extraction Method: Principal Axis Factoring.  Rotation Method: Promax with Kaiser Normalization.

Factor Correlation Matrix

Factor      1       2
1         1.000    .106
2          .106   1.000

Extraction Method: Principal Axis Factoring.  Rotation Method: Promax with Kaiser Normalization.

Exact Factor Scores

• You can compute, for each subject, estimated factor scores.

• Multiply each standardized variable score by the corresponding standardized scoring coefficient.

• For our first subject, Factor 1 = (-.294)(.41) + (.955)(.40) + (-.036)(.22) + (1.057)(-.07) + (.712)(.04) + (1.219)(.03) + (-1.14)(.01) = 0.23.
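That arithmetic is a dot product. A sketch, reading the pairs on the slide as (z score)(Factor 1 scoring coefficient) in the order TASTE, AROMA, COLOR, SIZE, ALCOHOL, COST, REPUTAT (my reading of the ordering, matching the rounded coefficients):

```python
import numpy as np

# Estimated factor score = dot product of the subject's standardized
# variable scores with the standardized scoring coefficients.
z = np.array([-0.294, 0.955, -0.036, 1.057, 0.712, 1.219, -1.14])   # first subject's z scores
coef = np.array([0.41, 0.40, 0.22, -0.07, 0.04, 0.03, 0.01])        # Factor 1 scoring coefficients (rounded)
factor1 = z @ coef
print(round(factor1, 2))   # 0.23
```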

• SPSS will not only give you the scoring coefficients, but also compute the estimated factor scores for you.

• In the Factor Analysis window, click Scores and select Save As Variables, Regression, Display Factor Score Coefficient Matrix.

• Here are the scoring coefficients:

• Look back at the data sheet and you will see the estimated factor scores.

Factor Score Coefficient Matrix

              Factor 1   Factor 2
COST            .026       .157
SIZE           -.066       .610
ALCOHOL         .036       .251
REPUTAT         .011      -.042
COLOR           .225      -.201
AROMA           .398       .026
TASTE           .409       .110

Extraction Method: Principal Axis Factoring.  Rotation Method: Varimax with Kaiser Normalization.  Factor Scores Method: Regression.

R2 of the Variables With Each Factor

• These are treated as indicators of the internal consistency of the solution.

• .70 and above is good.
• They are in the main diagonal of this matrix:

Factor Score Covariance Matrix

Factor      1       2
1          .966    .003
2          .003    .953

R2 of the Variables With Each Factor (continued)

• These squared multiple correlation coefficients are equal to the variance of the factor scores.

Use the Factor Scores

• Let us see how the factor scores are related to the SES and Group variables.

• Use multiple regression to predict SES from the factor scores.

Model Summary

Model     R      R Square   Adjusted R Square   Std. Error of the Estimate
1       .988a      .976           .976                    .385

a. Predictors: (Constant), FAC2_1, FAC1_1

ANOVA(b)

Model 1        Sum of Squares    df    Mean Square      F        Sig.
Regression        1320.821        2      660.410     4453.479    .000a
Residual            32.179      217         .148
Total             1353.000      219

a. Predictors: (Constant), FAC2_1, FAC1_1
b. Dependent Variable: SES

Coefficients(a)

Model 1       Standardized Beta      t        Sig.   Zero-order r   Part r
(Constant)                        134.810     .000
FAC1_1             .681            65.027     .000       .679        .681
FAC2_1            -.718           -68.581     .000      -.716       -.718

a. Dependent Variable: SES
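The same regression can be sketched outside SPSS with ordinary least squares on the two saved factor-score columns. The data here are simulated, not the workshop data; the coefficients and R² are illustrative only:

```python
import numpy as np

# Sketch: regress an outcome on two factor-score columns (like FAC1_1 and
# FAC2_1 saved by SPSS), using simulated data rather than the workshop data.
rng = np.random.default_rng(0)
n = 220
fac1 = rng.standard_normal(n)
fac2 = rng.standard_normal(n)
ses = 50 + 3.0 * fac1 - 3.2 * fac2 + 0.5 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), fac1, fac2])   # intercept + two predictors
b, *_ = np.linalg.lstsq(X, ses, rcond=None)     # OLS coefficients
resid = ses - X @ b
r2 = 1 - resid.var() / ses.var()
print(b, r2)
```

With nearly uncorrelated factor scores, each slope is close to the factor’s zero-order relationship with the outcome, mirroring the pattern in the SPSS table above.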

• Also, use independent t to compare groups on mean factor scores.

Group Statistics

          GROUP     N      Mean        Std. Deviation   Std. Error Mean
FAC1_1      1      121   -.4198775       .97383364         .08853033
            2       99    .5131836       .71714232         .07207552
FAC2_1      1      121    .5620465       .88340921         .08030993
            2       99   -.6869457       .55529938         .05580969

Independent Samples Test

                                      Levene’s Test           t-test for Equality of Means
                                      F        Sig.      t        df       Sig. (2-tailed)   95% CI of the Difference
FAC1_1  Equal variances assumed     19.264     .000    -7.933    218            .000         -1.16487 to -.701253
        Equal variances not assumed                    -8.173    215.738        .000         -1.15807 to -.708049
FAC2_1  Equal variances assumed     25.883     .000    12.227    218            .000         1.047657 to 1.450327
        Equal variances not assumed                    12.771    205.269        .000         1.056175 to 1.441809

Unit-Weighted Factor Scores

• Define subscale 1 as simple sum or mean of scores on all items loading well (> .4) on Factor 1.

• Likewise for Factor 2, etc.
• Suzie Cue’s answers on Color, Taste, Aroma, Size, Alcohol, Cost, and Reputation are 80, 100, 40, 30, 75, 60, and 10.
• Aesthetic Quality = 80 + 100 + 40 – 10 = 210
• Cheap Drunk = 30 + 75 + 60 – 10 = 155
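The unit-weighting above is just a signed sum of raw item scores. A sketch with Suzie Cue’s answers:

```python
# Unit-weighted subscale scores: sum the raw scores on items loading well
# (> .4) on a factor, with a negative weight for negatively loading items.
scores = {"Color": 80, "Taste": 100, "Aroma": 40, "Size": 30,
          "Alcohol": 75, "Cost": 60, "Reputation": 10}

aesthetic_quality = (scores["Color"] + scores["Taste"]
                     + scores["Aroma"] - scores["Reputation"])
cheap_drunk = (scores["Size"] + scores["Alcohol"]
               + scores["Cost"] - scores["Reputation"])
print(aesthetic_quality, cheap_drunk)   # 210 155
```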

• It may be better to use factor scoring coefficients (rather than loadings) to determine unit weights.

• Grice (2001) evaluated several techniques and found the best to be assigning a unit weight of 1 to each variable that has a scoring coefficient at least 1/3 as large as the largest for that factor.

• Using this rule, we would not include Reputation on either subscale and would drop Cost from the second subscale.

Item Analysisand Cronbach’s Alpha

• Are our subscales reliable?
• Test-Retest reliability
• Cronbach’s Alpha – internal consistency
  – Mean split-half reliability
  – With correction for attenuation
  – Is a conservative estimate of reliability

• AQ = Color + Taste + Aroma – Reputation
• Must negatively weight Reputation prior to item analysis.
• Transform, Compute, NegRep = -1 * Reputat.
• Analyze, Scale, Reliability Analysis
• Statistics, Scale if item deleted.
• Continue, OK

• Shoot for an alpha of at least .70 for research instruments.

• Note that deletion of the Reputation item would increase alpha to .96.
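For reference, Cronbach’s alpha is computable directly from the item scores: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A sketch (items must already be keyed in the same direction, e.g. Reputation negatively weighted first):

```python
import numpy as np

# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# Rows = subjects, columns = items, all keyed in the same direction.
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Equivalently, with C the item covariance matrix, α = k/(k−1) · (1 − trace(C)/sum(C)).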

Comparing Two Groups’ Factor Structure

• Eyeball Test
  – Same number of well defined factors in both groups?
  – Same variables load well on same factors in both groups?

• Pearson r
  – Just correlate the loadings for one factor in one group with those for the corresponding factor in the other group.
  – If there are many small loadings, r may be large because the factors are similar on the small loadings, despite a lack of similarity on the larger loadings.

• CC, Tucker’s coefficient of congruence
  – Follow the instructions in the document Comparing Two Groups’ Factor Structures: Pearson r and the Coefficient of Congruence.
  – A CC of .85 to .94 corresponds to similar factors; .95 to 1, essentially identical factors.
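Tucker’s CC is a cosine rather than a correlation: CC = Σ(a·b) / √(Σa² · Σb²), with no centering, so it is not inflated by agreement on many near-zero loadings. A sketch with illustrative (made-up) loadings for the same factor in two groups:

```python
import numpy as np

# Tucker's coefficient of congruence between two columns of loadings
# (the same factor extracted in two groups):
#   CC = sum(a*b) / sqrt(sum(a^2) * sum(b^2))
def congruence(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

group1 = np.array([0.95, 0.94, 0.92, 0.07, 0.03, -0.05, -0.43])  # illustrative
group2 = np.array([0.90, 0.91, 0.88, 0.10, 0.05, -0.02, -0.40])
print(round(congruence(group1, group2), 3))
```

Identical loading patterns give CC = 1 exactly.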

• Cross-Scoring
  – Obtain scoring coefficients for each group.
  – For each group, compute factor scores using coefficients obtained from the analysis for that same group (SG) and using coefficients obtained from the analysis for the other group (OG).
  – Correlate SG factor scores with OG factor scores.

• Cattell’s Salient Similarity Index
  – Factors (one from one group, one from the other group) are compared in terms of similarity of loadings.
  – Cattell’s Salient Similarity Index, s, can be transformed to a p value testing the null hypothesis that the factors are not related to one another.
  – See my document Cattell’s s for details.

Required Number of Subjects and Variables

• Rules of Thumb (not very useful)
  – 100 or more subjects.
  – At least 10 times as many subjects as you have variables.
  – As many subjects as you can get; the more the better.
• It depends – see the references in the handout.

• Start out with at least 6 variables per expected factor.

• Each factor should have at least 3 variables that load well.

• If loadings are low, need at least 10 variables per factor.

• Need at least as many subjects as variables. The more of each, the better.

• When there are overlapping factors (variables loading well on more than one factor), need more subjects than when structure is simple.

• If communalities are low, need more subjects.

• If communalities are high (> .6), you can get by with fewer than 100 subjects.

• With moderate communalities (.5), need 100-200 subjects.

• With low communalities and only 3-4 high loadings per factor, need over 300 subjects.

• With low communalities and poorly defined factors, need over 500 subjects.

What I Have Not Covered Today

• LOTS.
• For a brief introduction to reliability, validity, and scaling, see Document or Slideshow.
• For an SAS version of this workshop, see Document or Slideshow.
