
State of the Art


THE USE AND ABUSE OF FACTOR ANALYSIS IN COMMUNICATION RESEARCH

JAMES C. McCROSKEY and THOMAS J. YOUNG

West Virginia University

This paper discusses several decisions that researchers must make in their application of factor analysis to data related to communication phenomena. Several suggestions are provided to aid researchers in reaching appropriate decisions.

Since the early 1960s, the use of factor analysis in communication research has consistently increased. Unfortunately, many of the factor analytic studies reported have been so seriously flawed that it is no exaggeration to state that the field would be far better off if the studies had never been reported. As recently as two decades ago, it was extremely difficult and time-consuming to conduct a factor analytic investigation at most universities. Many hours, even days, of hand computation were required. People who performed factor analyses had to have a thorough knowledge of the method as well as considerable stamina. With the advent of wide accessibility to electronic computers and packaged factor analysis programs, the method has become available with minuscule effort to anyone with access to a computation center, which means almost everyone in a university setting.

The flagrant abuse of factor analysis in published communication research led one critic to suggest (and we think not completely facetiously) that before anyone be allowed access to a factor analysis computer program, they must receive a "Certified Factor Analyst" card from some special agency created for this purpose. We find this suggestion appealing, but, pending the formation of the certifying agency, we believe that a discussion of some of the decisions that a person must confront when utilizing the factor analytic method might be useful. We will address some of these decisions in this paper.

James C. McCroskey (Ed.D., Pennsylvania State University, 1966) is a professor and department chairperson and Thomas J. Young (Ph.D., University of Oregon, 1974) is an assistant professor in the Department of Speech Communication at West Virginia University, Morgantown, West Virginia 26506. This study was accepted for publication April 26, 1978.

Factor Analysis: A Brief Overview

Factor analysis is one method of examining a correlation (or covariance) matrix. In effect, the procedure searches for groups of variables that are substantially correlated with each other, while not maintaining high correlations with other variables or groups of variables. Such groups of variables are called "factors" or "dimensions." The degree to which a given variable is associated with a particular factor is estimated by its "factor loading," a statistic analogous to a correlation which can range from -1.00 to +1.00. The closer the loading approaches -1.00 or +1.00, the greater the association between the variable and the factor.

The procedure will automatically (with currently available computer packages) extract as many factors as there are variables present in the matrix examined. Some of these will be "common" factors, which represent variability in the matrix associated with several variables. Others will be "specific" factors, which represent variability primarily associated with a single variable in the matrix. The usual goal of the researcher employing factor analysis is to isolate the common factors. Specific factors typically will not be of interest. When a researcher reports a three-factor solution when 30 variables have been analyzed, he or she is indicating that three of the 30 possible factors represent common variance in the matrix, while the other 27 represent specific variance. Methods for determining the number of common factors in a matrix are discussed later.

376 HUMAN COMMUNICATION RESEARCH / Vol. 5, No. 4, SUMMER 1979
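The extraction just described can be illustrated with a short sketch. The correlation matrix below is hypothetical, and the extraction shown is a simple eigendecomposition of the correlation matrix, not any particular packaged program's algorithm:

```python
import numpy as np

# Hypothetical 4-variable correlation matrix: variables 1-2 and 3-4
# form two correlated pairs, so we expect two common factors.
R = np.array([
    [1.00, 0.70, 0.10, 0.05],
    [0.70, 1.00, 0.05, 0.10],
    [0.10, 0.05, 1.00, 0.65],
    [0.05, 0.10, 0.65, 1.00],
])

# Eigendecomposition of the correlation matrix: one eigenvalue (and
# potential factor) per variable, as the text notes.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated loadings: eigenvector scaled by sqrt(eigenvalue). Each
# loading behaves like a correlation between a variable and a factor,
# so it lies between -1.00 and +1.00.
loadings = eigvecs * np.sqrt(eigvals)

print(np.round(eigvals, 3))        # e.g. two large values, two small ones
print(np.round(loadings[:, :2], 3))
```

Here the first two factors would be retained as common factors; the remaining two carry only specific variance.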

Definition of Construct(s)

Although factor analysis can serve many other purposes, this method has been used most often in communication research for the purpose of investigating the dimensionality of constructs and the refinement of measuring instruments. In either instance, careful definition of the construct with which the researcher is concerned should be the first step in the research process. Later decisions depend on this step, and if the researcher is careless at this point, the product of the research may be of little value. We will use the factor analytic research on source credibility to illustrate the importance of decisions about the nature of the construct to be investigated, since this work has been seriously flawed as a result of a lack of careful definition of the construct to be investigated prior to conducting the research.

The factor analytic research on source credibility has been characterized by careless or nondefinition of the construct. As a result, factors of "credibility" have been reported that fail to correspond with the way the construct has been defined by scholars concerned with credibility theory. A good example is the "dynamism" or "extroversion" dimension originally reported by Berlo, Lemert, and Mertz (1961). Although the avowed purpose of the Berlo et al. research was to extend the work of Hovland (Hovland, Janis, & Kelley, 1953), at no point had the earlier research included nonevaluative elements in the definition of the credibility construct. Of additional note is the fact that Berlo et al. did not include nonevaluative elements in their definition either. In fact, they chose not to use "source credibility" as the label for their construct but to call it "dimensions for evaluating message sources" (italics ours). In the two decades that have followed, the distinction between "source credibility" and "person perception" has become increasingly fuzzy.

The definition problem illustrated by the credibility research is a serious one, because many later decisions in factor analytic research must be based on the definition of the construct being studied. One of the basic requirements of measurement is the need for isomorphism between the constituent definition of the construct and the operational definition (the measure[s]) of the construct. If there is no clear constituent definition, this requirement cannot be met, as we will see in the next section.

Item-Variable Selection

Once the construct to be studied is carefully defined, the next step in factor analytic research is the selection of the variables (or items, if we continue our credibility example) to be measured. The key to this process is the maintenance of isomorphism between the construct (as defined) and the measurement. If "source credibility" is defined as the "evaluation of a message source," then only evaluative scales should be chosen for use. If "person perception" is defined as "all of the ways one person can view another person," then both evaluative and nonevaluative scales should be included. If only evaluative scales are used for "person perception" or nonevaluative scales are used for "source credibility," isomorphism is sharply reduced and conclusions about the originally defined construct are negated.

There are many ways in which items may be generated for factor analytic work. The most commonly employed methods are surveying previous theoretical work, surveying previous factor analytic work, and obtaining free responses from research subjects. Any of these may be appropriate, given that each item is examined for its isomorphism with the constituent definition of the construct prior to use. Failing to apply this test to items is very likely to result in the "discovery" of a dimension that does not really exist. For example, we could add a "new" dimension of "source credibility" by including estimates of the person's height, weight, belt size, finger length, arm length, and inseam measure along with our other scales. These measures would be correlated with each other but probably not correlated with our other scales. We might call our new dimension "size," or even "dynamism" if we prefer. In either event, we would be making a dubious knowledge claim.

Determining Sample Size

There has been considerable discussion in the literature concerning the size of the sample necessary for conducting factor analytic research. Much of this has been less than useful. To understand the importance of sample size in factor analytic research, we should first recognize that factor analysis, unlike most other statistical procedures, does not operate from raw data. The input to factor analysis is a correlation or covariance matrix (although most standard programs allow the user to input raw data, which is transformed to a correlation matrix prior to beginning factor analysis). Consequently, factor analysis itself is completely insensitive to sample size. It makes not one bit of difference whether the correlation matrix is based on N=3 or N=3000; if the correlation matrix is the same, the factor analytic output will be the same. The key to the sample-size question, then, is the correlation matrix. With small sample sizes, the individual correlations are accompanied by a wide margin of error. As sample size increases, the confidence interval around the individual correlations is narrowed and the probability that factor analysis will be working with true correlations is increased. Thus, as with most quantitative research, the larger the sample size the more confidence we can have in the factor analysis.

But what sample size is sufficient? There is no firm number that can be set as an absolute criterion. Rather, the researcher must determine the size of error with which he or she is willing to live. With as few as 11 items or variables, there are 100 nonunity correlations in the matrix. Using a conservative (p<.01) significance criterion, we would still expect one significant correlation to occur by chance. With 40 items or variables (which is not an uncommonly large number), over 1500 correlations are included in the matrix, and 15 would be expected to be significant by chance alone, possibly enough for two or more factors by themselves. This problem, of course, cannot be completely overcome by increasing sample size. However, much of the negative impact can be overcome. With N=50, a Pearson r must exceed .353 to be significant. With N=200, r must exceed .180. With N=400, r must exceed .127, and with N=1000, r must exceed .080. Thus, the sample size is directly related to the severity of the problems introduced by the presence of chance correlations in the matrix. With small sample sizes, the factor analysis can be severely distorted by spurious correlations. (Correlations of .30 can have a major impact on factor structure.) But, with large sample sizes, such distortion will be greatly reduced because the spurious correlations are likely to be smaller.
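The thresholds quoted above can be approximated in a few lines. This sketch uses the normal approximation to the sampling distribution of r (two-tailed, p<.01), which reproduces the figures in the text to within about .01:

```python
import math
from statistics import NormalDist

def critical_r(n, alpha=0.01):
    """Approximate two-tailed critical value of Pearson's r for sample
    size n, using the normal approximation to the t distribution
    (adequate for the moderate-to-large samples discussed here)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # about 2.576 for alpha = .01
    return z / math.sqrt(z * z + n - 2)

for n in (50, 200, 400, 1000):
    print(n, round(critical_r(n), 3))
```

As the text notes, the critical value shrinks steadily as N grows, so spurious correlations large enough to distort the factor structure become increasingly unlikely.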

Large samples not only help to protect against alpha error (obtaining factors produced by chance correlations), but also help to protect against beta error (missing factors which actually exist). Chance can operate in either direction, or even in both directions in the same investigation. As in any other quantitative analysis, other things being equal, the larger the sample the higher the power of the analysis.

While no firm sample size can be set for all factor analytic work, we recommend approximately 200 for any study which purports to produce generalizable findings. With N=200 the correlations obtained are reasonably stable, and nonsignificant correlations can have little impact on the factor analysis. For less pretentious work that is extremely preliminary or exploratory (and not intended for publication prior to extensive replication), smaller samples may be appropriate. However, it should be recognized that this recommendation is only a "rule-of-thumb" to be applied during the planning stages of the study. Kaiser (1970) has noted that sampling adequacy is a function of the number of common factors extracted, the number of variables in the matrix, and the number of subjects. His Measure of Sampling Adequacy (MSA) is the best available method of determining whether enough subjects have been included. Unfortunately, at the time of this writing, the only widely used computer package which includes Kaiser's MSA is the BIOMED package.
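Kaiser's overall MSA compares the observed correlations with the anti-image (partial) correlations obtained from the inverse of the correlation matrix. The sketch below is an illustration of that formula applied to a hypothetical matrix, not the BIOMED routine:

```python
import numpy as np

def overall_msa(R):
    """Kaiser's overall Measure of Sampling Adequacy: the ratio of the
    sum of squared off-diagonal correlations to that sum plus the sum
    of squared partial (anti-image) correlations."""
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.diag(Rinv))
    partial = -Rinv / np.outer(d, d)       # partial correlation matrix
    off = ~np.eye(R.shape[0], dtype=bool)  # off-diagonal mask
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Hypothetical correlation matrix with one strong common factor.
R = np.full((4, 4), 0.6)
np.fill_diagonal(R, 1.0)
print(round(overall_msa(R), 3))
```

Values near 1.0 indicate that the correlations reflect common variance well; values near .5 or below suggest the sample (or the item pool) is inadequate for factoring.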

Insuring Adequate and Representative Variability

In most instances, factor analysis is conducted with the intention of generalizing the results beyond the narrow confines of the study itself. To make this possible, the researcher must insure adequate variability in his or her data and that the variability is representative of the population to which the results will be generalized.

The first and most obvious method of insuring adequate and representative variability is to select a large, random sample of subjects from the population to which the research will be generalized. If one wishes to generalize only to college sophomores, the sample should include college sophomores. However, if one should wish to generalize further, a more varied sample would be more appropriate.

In many cases, insuring adequate and representative variability in the subject sample is not enough. Let us again take the source credibility research to illustrate. This research has sought to determine generalizable dimensions which will apply to many different sources, or even different types of sources. The early work of Berlo et al. is exemplary in this regard in that 18 different sources were used in one study and 12 in another. It is not surprising, therefore, that later, well-conducted studies have generally obtained similar results. However, other studies which have not included sufficient numbers of sources generally have produced unreplicable results, and, in several instances, results that were not even internally consistent. Unfortunately, the latter type of research is much more common than the former.

Without sufficient variance in sources, the unique relationships among perceptions of one or two sources may produce factor structures vastly different from those representative of a generalized group of sources. Of course, if someone is interested only in the dimensions on which Richard Nixon is perceived, then that would be the only source used. But, if the construct defined is broader, there must be more sources employed. This same principle applies to many other factor analytic projects, such as rating scale development and any other concept based on multiple measurement of perceptions.

Although we have not yet said anything directly about factor analysis techniques, we will do so in a moment. But, before we do, we want to stress that the four decision points we have discussed thus far are crucial for competent use of any factor analytic technique. In fact, most abuses of factor analysis in communication research have stemmed from errors in these four areas. Nevertheless, there are a number of decisions that must be made about applying the techniques of factor analysis per se. Unless these decision points are addressed appropriately, the best data will not result in meaningful generalizations. We will now turn our attention to some of these decision points.

Unity vs. Estimates of Communality

Recently, there has been a call for factor analysts to be more sensitive to the actual methods used in determining values for the diagonal entries in factor analysis. This is not to suggest that this is somehow a "new" issue, nor is it to suggest that communication researchers have been unaware of its significance. But, because reporting a factor analytic investigation often leaves little room for a discussion of what diagonal entries were used (or even what they mean), a discussion of that issue will be offered.

Quite simply, diagonal entries in factor analysis are important because they can contribute to, or reduce, the error associated with any specific number of common factors. In any correlation matrix, we generally are not concerned with the diagonals, because we have specified that we are interested in the magnitude of relationships, which can be obtained from the off-diagonal values. However, this is where correlation matrices and factor analysis conceptually split. When using factor analysis, we are concerned with values for all the variables in an N x N matrix. Diagonal values are important in determining the relative contribution of a variable to the common factor structure. This is where unity vs. estimates of communality placed in the diagonals becomes a most controversial issue.

Placing a value of 1.0 in the diagonals of a correlation matrix will make the rank of the matrix equal to the number of variables. In other words, if we have 20 variables, making a 20 x 20 matrix with 380 known off-diagonal values, and then insert unity into the diagonals, we are suggesting from the outset that the full set of 20 factors is needed, i.e., that all of the variance is treated as common, and no specific or error variance can enter the factor structure. The alternative is not to be so presumptuous, and to enter only estimates of communality, thereby providing a truer estimate of the relative contribution the diagonals play in the factor structure. Of course, there are problems in doing this also. While many factor analysts have lamented the use of unity (Cattell goes so far as to say it is "barbarous"), little agreement exists as to which decision option a person should choose. These authors have encountered as many as 11 methods, but, for the sake of brevity, we will limit our discussion to the two most commonly used.

The first method is known as the "method of highest correlation." Quite simply, the highest correlation found in a variable's row of the matrix is used as that variable's diagonal value. Others have suggested that the squared multiple correlations serve as the best entries. While diagonal estimation is an important issue in factor analysis, the severity of its importance has decreased substantially, as high-speed computers have the ability to perform several iterations on diagonal entries. This procedure provides the researcher with the best estimate of communality. Most available standard programs include numerous iterations as a default option unless this option is suppressed. This has made the unity vs. communality controversy moot, unless the researcher forces retention of unity in the diagonals, as some suggest in research reports. But we doubt that this procedure is often, if ever, really employed.
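Both diagonal-entry methods can be sketched briefly. The correlation matrix below is hypothetical; the squared multiple correlation (SMC) for each variable comes from the diagonal of the inverted matrix:

```python
import numpy as np

# Hypothetical 4-variable correlation matrix (two correlated pairs).
R = np.array([
    [1.00, 0.70, 0.10, 0.05],
    [0.70, 1.00, 0.05, 0.10],
    [0.10, 0.05, 1.00, 0.65],
    [0.05, 0.10, 0.65, 1.00],
])

# Method of highest correlation: for each variable, the largest
# absolute off-diagonal correlation in its row.
off = R - np.eye(len(R))            # zero out the unit diagonal
h2_highest = np.max(np.abs(off), axis=1)

# Squared multiple correlation: 1 - 1 / diag(R^-1), i.e., how well
# each variable is predicted by all of the others.
h2_smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))

print(np.round(h2_highest, 3))
print(np.round(h2_smc, 3))
```

Either set of estimates would replace the unit diagonal before extraction; iterative programs then refine the estimates on each pass.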

Oblique vs. Orthogonal Rotations

Our field, rightfully so, may be charged with a very suspicious ambition in determining the factor structure associated with the concept/construct of credibility. This ambition relates to the issue of using orthogonally determined factors as the means of determining the dimensionality of the credibility construct. Only recently have communication researchers begun to utilize an alternate method of determining factors, that being oblique analysis.

As the name implies, orthogonality imposes independence on a structure, thus insuring that factors that are orthogonally determined are uncorrelated with one another. Oblique analysis, on the other hand, does not impose this requirement. Rather, it rotates all factors in hyperspace with one another in search of the best hyperplanes describing a construct, unlike orthogonal analysis, which rotates factors at 90° angles from one another. The following example will illustrate this principle. If we have 10 factors that appear to operate in a given matrix, orthogonal analysis will provide five shifts or rotations; oblique analysis is capable of making 10 rotations. Independence is lost, but we feel that information is gained. Oblique analysis assumes that correlations exist among all phenomena; the issue at hand is whether or not those correlations are of sufficient magnitude to warrant a redefinition of the construct. We maintain that this is the only realistic way to look at communication phenomena. Clean, tidy, independent structures are elegant in a mathematical sense, but they may not be representative of reality. For example, research has found the construct of "interpersonal attraction" to be multidimensional, consisting of at least three dimensions: physical, social, and task. While it is possible to employ orthogonal rotation to generate factors for each of these three dimensions which are uncorrelated, common experience (as well as numerous research studies) tells us that, most commonly, when our physical attraction for a member of the opposite sex increases, so does our social attraction for that person.

The best argument we have seen for selecting orthogonal rotation over oblique rotation is that this procedure will generate uncorrelated factor scores which, subsequently, may be used in multiple regression analyses without introducing the problem of multicollinearity of predictors. This is a distinct advantage. However, procedures for decomposing R2 are available which negate this advantage (Seibold & McPhee, 1979). Generally, then, we believe that oblique rotation procedures are preferable for communication research.
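For reference, the orthogonal (varimax) rotation discussed here and in the next section can be sketched with the standard SVD-based algorithm. The loading matrix below is hypothetical, and the routine is an illustration rather than any packaged program's implementation:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-8):
    """Varimax rotation: iteratively find the orthogonal rotation
    matrix T that maximizes the variance of the squared loadings
    (Kaiser's criterion, SVD formulation)."""
    p, k = L.shape
    T = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        Lr = L @ T
        B = L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(B)
        T = U @ Vt
        d = s.sum()
        if d < d_old * (1 + tol):   # converged
            break
        d_old = d
    return L @ T, T

# Hypothetical unrotated loadings for 4 variables on 2 factors.
L = np.array([[0.8, 0.4], [0.7, 0.5], [0.5, -0.6], [0.6, -0.5]])
Lrot, T = varimax(L)
print(np.round(Lrot, 3))
```

Because the rotation is rigid, the communalities (row sums of squared loadings) are unchanged; an oblique rotation would relax the orthogonality constraint on T and allow the rotated factors to correlate.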

Item on Factor Decision

Determining whether an item is on a factor is, on the one hand, a fairly obvious issue, but, on the other, a relatively complex issue. The most obvious basis for determination is that the factor that has the highest loading for an item across N factors is the factor upon which the item is loaded. However, items have loadings across all factors, and, in many instances, they contribute nearly equal variance on several factors. We, and several other factor analysts, have been plagued with the issue of what to do with items that contribute significant variance on two or more factors. We have employed in many instances the a priori criterion that for the loading of an item to be considered significant it must have a primary loading on one factor of at least .60, and no secondary loading on another factor with a value above .40. It has been charged that this criterion for item significance is very conservative, for, in fact, we may be discarding items that are contributing variance to the factor model. We agree. In our use of factor analysis, we have been primarily interested in instrument development. We have sought items that were "pure" measures of a given factor and have used orthogonal (varimax) rotation procedures to increase the purity. If one has carefully chosen items, and if those items appear to be representative of the construct under investigation, then one must be careful to select only those items that will meet the demands of replicability. If, however, one has a priori grounds for hypothesizing that certain items will load on certain factors, or even that they will contribute variance to several factors, the .60-.40 criterion would be of little value. In exploratory investigations of a construct, we would suggest the utilization of a more liberal criterion. Similarly, when any rotation method other than varimax is employed, the .60-.40 criterion is meaningless.

It is important to remember that all items are loaded on all factors, unless a given loading is actually 0.00. The issue of which factor has the "primary" loading is not important unless one plans to discard certain items or to score items on a priori criteria, rather than on the basis of factor weights. When developing measuring instruments, both of these practices are common. Thus, we would argue that very conservative criteria (such as the .60-.40 criterion) should be employed if the researcher expects her or his results to replicate in later studies.
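The .60-.40 criterion is easy to mechanize. A sketch, applied to a hypothetical varimax-rotated loading matrix:

```python
import numpy as np

def retain_items(loadings, primary=0.60, secondary=0.40):
    """Apply the .60-.40 criterion: keep an item only if its highest
    absolute loading is at least .60 and every other loading is below
    .40 in absolute value. Returns (item index, factor index) pairs
    for the retained items."""
    kept = []
    for i, row in enumerate(np.abs(loadings)):
        f = int(np.argmax(row))
        rest = np.delete(row, f)
        if row[f] >= primary and np.all(rest < secondary):
            kept.append((i, f))
    return kept

# Hypothetical varimax-rotated loadings for five items on two factors.
L = np.array([
    [0.72, 0.10],   # pure factor-1 item: retained
    [0.65, 0.38],   # retained; secondary loading just under .40
    [0.55, 0.20],   # primary loading below .60: discarded
    [0.45, 0.45],   # splits variance across factors: discarded
    [0.05, 0.81],   # pure factor-2 item: retained
])
print(retain_items(L))   # → [(0, 0), (1, 0), (4, 1)]
```

A more liberal exploratory criterion would simply lower the two cutoffs.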

Determining the Number of Factors

Probably the most perplexing problem confronted by the factor analyst is determining how many common factors exist in his or her data. There have been a sizeable number of methods suggested for making this determination, but none of these guarantees that the researcher will make the right decision. Most of the packaged factor analysis programs have built-in decision-making parameters that the user must override to avoid solutions that may be completely inappropriate. The two most common rules that are included in packaged programs are: (1) extract as many factors as there are items or variables, and (2) terminate extraction when a factor accounts for less variance than would be expected for the total variance of a single item or variable (eigenvalue = 1.0). Neither of these approaches is usually appropriate.

The task that the researcher faces is separating the factors that are based on common variance from those based on specific variance. Normally, one wishes to rotate all of the common factors, but exclude all of the specific factors. By common factors we mean those factors with which meaningful variance on more than one item/variable is associated. Specific factors are those with which meaningful variance on only a single item/variable is associated. However the decision about how many factors exist is made, the criteria to be applied must be determined in advance, much like the determination of alpha level before running an analysis of variance. We will consider some of the criteria that can be applied.

The first decision the researcher should make is whether there is more than one factor. In general, the law of parsimony applies to the interpretation of factor analysis results: the fewer the factors, the better. While this law does not always apply (more factors may increase the heuristic value of the study), generally, the optimal solution is a single-factor solution. To determine whether such a solution is appropriate, the unrotated factor structure should be examined. If all of the variables have their highest loadings (no matter of what absolute magnitude) on the first factor, the proper interpretation is that the variables form a single factor. If a relatively small proportion (as a rule of thumb we use 10 percent) of the variables does not have the highest loading on the first unrotated factor, the single-factor solution still may be the best interpretation. The content of the deviant variables should be examined to see if they appear to be psychologically related. Such deviant variables may represent specific factors or simply poor measures (e.g., bad items). In such cases, the variables can be excluded from future analysis. If it is determined that only one factor exists, the researcher has completed his or her task. If not, the researcher must determine how many factors to submit to rotation.
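The single-factor check just described (highest loadings on the first unrotated factor, tolerating up to roughly 10 percent deviant variables) can be sketched as:

```python
import numpy as np

def single_factor_plausible(unrotated, deviant_tolerance=0.10):
    """Rule of thumb from the text: if (almost) all variables have
    their highest absolute loading on the first unrotated factor,
    a single-factor interpretation is indicated. Up to 10 percent
    deviant variables may still permit it."""
    primary = np.argmax(np.abs(unrotated), axis=1)
    return bool(np.mean(primary != 0) <= deviant_tolerance)

# Hypothetical unrotated loadings: every variable loads mainly on
# the first factor, whatever the absolute magnitude.
U = np.array([[0.8, 0.2], [0.6, 0.1], [0.4, 0.3], [0.7, -0.2]])
print(single_factor_plausible(U))   # → True
```

Deviant variables flagged by such a check would then be inspected for psychological relatedness before being dropped.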

Probably the best means of determining how many factors to rotate is by considering prior research and theory, if any is available. If one is using a factor-based measure, one should rotate the number of factors previously observed on the instrument. If one is using a new instrument, the number of factors to rotate should be based upon the theory leading to the instrument. For example, if you write five items for each of four theoretical dimensions, four factors should be rotated. Examination of only this solution, however, may lead to a self-fulfilling prophecy. Thus, it is advisable also to examine rotations with both more and fewer factors than theoretically expected to see if some factors collapse together or split in two.

Many times, however, the researcher is in doubt about how many factors to expect. In this event, more mathematically based criteria should be employed. A common method is to rotate all factors with an eigenvalue > 1.0. The eigenvalue for a factor represents an estimate of the variance associated with that factor. An eigenvalue of 1.0 indicates that there is variance associated with the factor equal to that potentially generated by a single variable across all factors. The higher the eigenvalue, the more likely the factor represents common, rather than specific, variance. The researcher can be reasonably certain that this procedure will include all common factors, but some specific factors may also be included. To protect against including specific factors, two additional criteria should be applied. The first is Cattell's "scree test." The "scree test" is not really a test, but simply a method of eyeballing the raw factor analysis. The eigenvalues for the factors are plotted. The descending eigenvalue curve will be continuous to a point, then register a slight rise or hump, and then continue to decline slowly. The rise or hump on the curve marks the number of factors to rotate.

While the use of the scree test will probably exclude all of the specific factors, in some instances it will not. Thus, one should also apply a second criterion based on the number of items with their highest loading (how high is a separate question) on a given factor. The common requirement employed is either two or three items, depending on the size of the original item/variable pool.

Our colleague, Lawrence Wheeless, in an unpublished study, tested the above criteria to determine their usefulness. He generated a factor analysis based on data taken from a table of random numbers. He then applied all of the criteria that have been used in published factor analytic research in communication to see which ones would correctly indicate that all of his obtained factors were based on specific variance. The only ones he found to work were the combination of scree and items-on-factor (specifically, two items with loading > .60 and no secondary loading > .40 after varimax rotation). We have used these criteria subsequently in over 40 factor analytic investigations and believe they will usually accomplish their intended purpose.
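The items-on-factor rule just described can be expressed as a small check on a rotated loading matrix. In the sketch below, the loading matrix is hypothetical and the function name is ours; only the thresholds (.60 primary, .40 secondary, two items per factor) come from the text:

```python
import numpy as np

def meets_items_on_factor(loadings, primary=0.60, secondary=0.40, min_items=2):
    """For each factor, report whether at least `min_items` items have
    their highest loading on that factor, exceed `primary` there, and
    stay below `secondary` on every other factor."""
    loadings = np.asarray(loadings)
    n_items, n_factors = loadings.shape
    ok = []
    for f in range(n_factors):
        count = 0
        for i in range(n_items):
            row = np.abs(loadings[i])
            if (row.argmax() == f and row[f] > primary
                    and np.all(np.delete(row, f) < secondary)):
                count += 1
        ok.append(count >= min_items)
    return ok

# Hypothetical varimax-rotated loadings: factor 1 is well defined;
# factor 2 has only one item that loads cleanly.
L = [[0.75, 0.10],
     [0.68, 0.22],
     [0.30, 0.65],
     [0.45, 0.50]]
print(meets_items_on_factor(L))  # [True, False]
```

Under this check, the second factor would be treated as specific variance and the researcher would step down to a smaller rotation.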

Employing the above criteria requires the processing of multiple rotation analyses, so we need to clarify this procedure. We automatically process all possible rotations from two to the number of factors with eigenvalues > 1.0. We then begin with the largest number of factors and "step down." Presume there are 11 factors. Presume, in addition, that the scree test suggests eight factors. We then examine the eight-factor rotation and apply the items-on-factor criterion. If all factors meet that criterion, we report the eight-factor solution and discard the rest. However, presume the sixth factor did not meet the items-on-factor criterion. In such an event, we step down to the next solution (in this case the seven-factor solution) and once again apply the items-on-factor criterion. We repeat this procedure until we find a solution that meets the criterion. Our experience indicates that this process works exceptionally well. By "works" we mean that the structure we finally accept is replicable in later research with different subjects.
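The step-down search can be sketched as a simple loop. In this illustration the precomputed rotations are stubbed out as per-factor item counts rather than actual factor solutions, and the acceptance test stands in for the items-on-factor criterion; none of these names or numbers come from the original study:

```python
def step_down(rotations, start, acceptable):
    """Walk from `start` factors downward, returning the first rotated
    solution in which every factor passes the `acceptable` check."""
    for k in range(start, 1, -1):
        solution = rotations[k]
        if all(acceptable(factor) for factor in solution):
            return k, solution
    return None

# Stub: each "solution" lists how many items load cleanly on each factor;
# a factor is acceptable when at least two items load cleanly on it.
rotations = {
    8: [3, 4, 2, 3, 2, 1, 2, 3],  # one factor has only a single clean item
    7: [3, 4, 2, 3, 2, 1, 2],     # still a weak factor present
    6: [4, 4, 3, 3, 2, 2],        # all factors pass
}
k, _ = step_down(rotations, start=8, acceptable=lambda n: n >= 2)
print(k)  # 6
```

The loop mirrors the authors' procedure: start at the scree-suggested solution and accept the first rotation in which no factor fails the criterion.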

The Use of Factor Scores

Once a factor structure is accepted as representative of the available data, the next question is what to do with it. Central to this question is how to score data collected subsequently on the same measure.

382 HUMAN COMMUNICATION RESEARCH / Vol. 5, No. 4, SUMMER 1979

Should the scores be totaled, based on the raw numbers originally assigned (e.g., 1-7 on bipolar scales), or should the scores be weighted by the factor loadings? The answer depends on criteria applied earlier in the process.

Our experience has been that if conservative criteria for item selection have been employed (the .60-.40 criterion, for example), both scoring methods are essentially equivalent; the correlation usually exceeds .95. However, if all items are retained, or if there is a major difference in the loadings of items on the same factor, or if some items have large secondary loadings, scores weighted by factor loadings should be employed to remove extraneous error.
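This near-equivalence under conservative item selection can be illustrated with a small simulation. The loadings, response model, and sample size below are invented for the sketch; they are not drawn from any of the studies discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 respondents on five 1-7 items driven by one latent trait,
# with all items loading clearly above .60 (the conservative case).
n = 200
trait = rng.normal(size=n)
loadings = np.array([0.75, 0.70, 0.68, 0.72, 0.65])
items = np.clip(np.round(4 + 1.5 * np.outer(trait, loadings)
                         + rng.normal(scale=0.8, size=(n, 5))), 1, 7)

unit_scores = items.sum(axis=1)      # raw totals of the 1-7 responses
weighted_scores = items @ loadings   # scores weighted by factor loadings

r = np.corrcoef(unit_scores, weighted_scores)[0, 1]
print(round(r, 3))  # correlation near 1.0
```

When loadings are uneven or secondary loadings are large, the two scoring methods diverge, which is the condition under which the authors recommend loading-weighted scores.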

Interstudy Factor Comparisons

Interstudy factor comparisons are probably the most crucial basis for determining factor generalizability. The necessity for replication, by the initial investigator and by other investigators as well, is the most basic tenet of factor analytic research. However, when one scans the literature, one not only finds this basic tenet violated but also finds that, when interstudy factor comparisons are reported, they often do not represent a true replication of the construct originally factored.

Those researchers who report factor analytic results that have not as yet been replicated with different populations must be careful to spell out exactly the criteria utilized in determining their factor structure. Too often this is not done, and replication is virtually impossible. What items were factored, what means of analysis was used, and what criteria were specified for factor extraction and determination are all crucial details that need to be reported for factor replication.

Some researchers who have attempted replication of prior factor analytic research have neglected the criteria that were originally specified and, perhaps most importantly, have paid little attention to the constitutive and operational definitions that have been reported. We mentioned earlier in this paper that establishing the isomorphism between these two is crucial. However, some researchers have neglected consideration of this relationship. For example, Anatol and Applebaum (1973) attempted replication of several factor analytic investigations of the source credibility construct. Many of the investigations they attempted to replicate involved several sources and several populations. Their analysis of one source was simply not isomorphic with the prior investigations. Interstudy factor comparisons, when meaningful, are crucial, but only if done properly. Otherwise, they may make false knowledge claims and lead the unsuspecting or naive reader astray.

Our purpose in writing this paper has been to highlight some of the problems associated with conducting and interpreting factor analytic research. One of our colleagues once commented that factor analysis is "a blend of math, magic, and mischief." Indeed, as employed in previous communication research, elements of all three sometimes have been visible. We hope that the observations in this paper will help future researchers reduce reliance on the latter two elements.

NOTE

This paper is a revision of a paper presented at the annual convention of ICA in Chicago, 1975. The authors are indebted to Lawrence Chase and Thomas Steinfatt for their thorough critiques of an earlier version of this paper.

REFERENCES

APPLEBAUM, R.F., & ANATOL, K.W. The factor structure of source credibility as a function of the speaking situation. Speech Monographs, 1972, 39, 216-222.

BERLO, D.K., & LEMERT, J.B. A factor analytic study of the dimensions of source credibility. Paper presented at the Speech Association of America Convention, New York, 1961. For a later report of this research, see D.K. Berlo, J.B. Lemert, & R. Mertz. Dimensions for evaluating the acceptability of message sources. Public Opinion Quarterly, 1969, 33, 563-576.

CATTELL, R.B. Factor analysis: An introduction and manual for the psychologist and social scientist. New York: Harper & Brothers, 1952.

HARMAN, H.H. Modern factor analysis. Chicago and London: The University of Chicago Press, 1970.

HOVLAND, C.I., JANIS, I.L., & KELLEY, H.H. Communication and persuasion. New Haven: Yale University Press, 1953, 19-48.

KAISER, H.F. A second generation little jiffy. Psychometrika, 1970, 35, 401-415.

THURSTONE, L.L. Multiple factor analysis. Chicago: University of Chicago Press, 1947.