
Journal of Applied Psychology1975, Vol. 60, No. 1, 71-79

Developing a Comparative Measure of the LearningClimate in Professional Schools

Donald D. Bowen and Ralph H. KilmannGraduate School of Business, University of Pittsburgh

The Learning Climate Questionnaire (LCQ) was developed to assess the learning climate of professional schools. Seven populations of students from four schools participated in this study (N = 455). Five factors were extracted from the LCQ (Grading Process, Physical Environment, Task Relationships With Faculty, Social Relationships With Faculty, and Course Material Presentation) which were fairly independent (average intercorrelation = .33) and had high internal consistency (average alpha = .84) across all samples. Comparisons of the objective properties of the schools and measures of overall student satisfaction with the LCQ factors suggest considerable validity of the instrument. Consequently, it seems appropriate to utilize the LCQ for substantive research investigations into the organizational dynamics of professional schools.

Recent studies of organizational climate have suggested its usefulness for investigating important aspects of organizational behavior, performance, and effectiveness (e.g., Litwin & Stringer, 1968; Pritchard & Karasick, 1973; Taguiri & Litwin, 1968). In some cases, researchers have defined the concept of climate as being equivalent to the structural properties of the organization (Forehand & Gilmer, 1964) or as being synonymous with morale and satisfaction (Guion, 1973). We, along with Schneider (Note 1), suggest that organizational climate be defined as the descriptive, perceptual aspect of some organizational phenomenon, rather than the evaluative (i.e., satisfaction and morale) or objective aspects. In particular, organizational climate is viewed as the set of intervening variables between the antecedent objective properties of organizational systems, for example, technology, formal rules, span of control, number of hierarchical levels (Porter & Lawler, 1966),

Special acknowledgement is given to the eight MBA students of the 1973 class at the University of Pittsburgh (John Crawford, Andy Fedak, Gerard Klein, Nancy Klingelhoefer, Robert McLean, Joseph Menedez, Sam Paisley, and Charles Wise) who helped conduct the data collection and statistical analysis; the several professional schools who participated in this study; and to Douglas T. Hall, Benjamin Schneider, and an anonymous reviewer for their helpful comments on an earlier draft of this paper.

Requests for reprints should be sent to Donald D. Bowen, Graduate School of Business, University of Pittsburgh, Pittsburgh, Pennsylvania 15260.

and the resulting motivation, performance, and satisfaction of organizational members (Marrow, Bowers, & Seashore, 1967).

The intention of the present study was to develop a measure of learning climate, that is, the dimensions of the organizational climate moderating the impact of the objective properties of the professional schools on the motivation, learning, and satisfaction of the students. The basic assumption was that educational institutions develop a climate analogous to that found in industrial organizations and that this climate has a measurable impact on students. A climate focus was chosen for two reasons. First, there has been increasing attention to organizational change and development programs in educational systems (Schmuck & Miles, 1971; Watson, 1967; Kilmann, Note 2). This suggests the need for measures of organizational climate in these settings to assess, guide, and evaluate the effect of any change program designed to enhance the learning climate in the schools. Second, because of the previous difficulties in developing reliable and valid measures of organizational climate (e.g., Litwin & Stringer, 1968), it was decided to give special attention to instrument development before a concerted research program was undertaken in this area.

Educational psychologists have made some attempts to measure learning environments. Stern (1970) developed a theory of environmental press and investigated the types of



press which tend to enhance or block the personal development of college students. One of Stern's questionnaires is the Organizational Climate Index. Stern, however, quite explicitly emphasizes his primary interest in testing a need-press theoretical framework, rather than in attempting to determine the dimensions of climate as perceived by students. Astin (1968, 1970) and Pace (1963) have also developed questionnaires to measure students' impressions of their college environment. The theoretical perspective in these two questionnaires varies; Astin's is somewhat closer to the organizational climate concept, while Pace's work is a modification of Stern's. However, all three investigators focused on undergraduate rather than professional schools.

This article describes a two-phase study of the development and preliminary validation of the Learning Climate Questionnaire (LCQ), an instrument which attempts to assess the descriptive, perceptual aspects of professional schools which have a major impact on the learning process. The theoretical framework underlying the LCQ reflects the extensive research on participative management (e.g., Leavitt, 1965), organizational influence (Likert, 1967), and organizational control (Tannenbaum, 1967). These authors suggest that the difference between perceived and desired influence over one's work environment is an important determinant of motivation, performance, and satisfaction.

Design of the LCQ was also based upon Lewin's (1947) notion of the "restraining forces" in a situation (i.e., obstacles) versus the "driving forces" (i.e., influences). The Lewinian framework of quasi-stationary social equilibria has been shown to be useful in general analyses of organizational dynamics (e.g., Lippitt, Watson, & Westley, 1958), and specifically to educational systems (Jenkins, 1962).

Validation of the LCQ sought to achieve: (a) substantive validity (defining the pool of relevant items for the instrument and the selection of items, factor analyzing items to investigate the underlying dimensions of climate being assessed, testing the internal consistency of items identified with each dimension), (b) structural validity (that the format of the instrument and the calculation of individual and organizational scores is consistent with the intended concept of "climate"), and (c) external validity (investigating the expected relationships between learning climate and student satisfaction). This is the validation framework suggested by Loevinger (1967), which incorporates the notions of reliability and construct validity discussed by Peak (1953), Cronbach and Meehl (1967), and Campbell (1967).

METHOD

Study 1

Sample. One hundred twenty-five full-time master of business administration (MBA) students (virtually the entire population) responded to the LCQ at the beginning of their second semester in Business School IA. The school is noted for the work pressures generated; students are permitted very few electives; the physical environment is an old building, timeworn, without air-conditioning, study carrels, conference rooms, or a student lounge.

Procedure. The LCQ was administered during class by students participating in the conduct of the study. Respondents answered anonymously. A brief explanation of the purpose of the study (to study factors affecting learning in the school) was provided by the questionnaire administrators, with the same information and a guarantee of anonymity provided on a cover sheet to the questionnaires.

Questionnaire items. The investigators, with the aid of several student representatives from the MBA program, generated a pool of 36 items which were expected to represent the salient characteristics of the learning climate in a professional school. The items were cast in a Likert scale (a 7-point scale, 1 = not at all, 7 = extremely) and were explicitly designed into a particular questionnaire framework which was expected to promote structural validity (i.e., construction of the instrument should be based on fairly well-established evidence in the organizational behavior literature).

Specifically, Section A of the LCQ asks students: "To what extent do you feel influential in determining the following?" Section B asks (for the same 10 items): "To what extent would you want to be influential in determining the following?" Ten aspects of the learning climate are evaluated in terms of the discrepancy between experienced and desired influence:

1. The material that the instructor presents. (Item number 11 in Section B, etc.)
2. The manner in which class material is presented by the instructor.
3. The use of audiovisual and other classroom aids (e.g., cases).
4. The choice of which courses to take.
5. The choice of instructor for a course.
6. The type of graded assignments (e.g., problem sets, theory paper).


7. The topic of graded assignments.
8. The content and type of in-class exams.
9. The grading process in quantitative courses.
10. The grading process in qualitative courses.

Perceived versus desired influence represented the operationalization of the degree of participation by students in governing the learning process.
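The discrepancy scoring described above can be sketched as follows; the data are simulated for illustration only (random 7-point responses, not the study's data), and the direction of the difference is one plausible convention:

```python
# Sketch of the Section A/B discrepancy scoring: each respondent rates
# perceived influence (Section A, 10 items) and desired influence
# (Section B, the same 10 items) on a 7-point Likert scale; the climate
# score for each aspect is the desired-minus-perceived difference.
import numpy as np

rng = np.random.default_rng(0)
n_students = 125                      # e.g., the Business School IA sample size

section_a = rng.integers(1, 8, size=(n_students, 10))  # perceived influence, 1-7
section_b = rng.integers(1, 8, size=(n_students, 10))  # desired influence, 1-7

# Larger values indicate a larger gap between wanted and experienced
# participation in governing the learning process.
discrepancy = section_b - section_a

# Standardize each of the 10 difference scores across respondents,
# as was done before the factor analysis.
z = (discrepancy - discrepancy.mean(axis=0)) / discrepancy.std(axis=0)
print(z.shape)  # (125, 10)
```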

To measure the restraining forces (obstacles) in the environment, Section C of the LCQ instructs respondents to indicate (for 16 items): "To what extent have you experienced the following as obstacles to a meaningful and useful learning environment?"

21. Size of classes.
22. Faculty do not know students by name.
23. Students do not feel free to address faculty by their first name.
24. Lack of social activities with faculty.
25. Faculty are not easily accessible outside class.
26. Faculty do not seem to value student opinions and experiences.
27. Faculty do not treat students as willing to learn.
28. The presence of the current grading system.
29. Faculty seem more interested in activities besides teaching.
30. Students do not feel they can be open with faculty.
31. Faculty do not know students by first name.
32. Lack of audiovisual aids.
33. Lack of conference rooms.
34. The physical design of classrooms.
35. Design of student lounge.
36. General apathy of fellow students.

Analysis. The data were standardized for each of the 26 items (the 10 difference scores from Sections A and B, plus the 16 scores from Section C) and analyzed by the McKelvey factor and measet analysis program (McKelvey, Note 3, Note 4). This program employs a principal factor solution for identifying eigenvalues and a varimax rotation. The highest row correlations constitute the initial communality estimates, and with each iteration, the positive eigenvalues are computed, the vectors extracted, and the variance explained by these vectors becomes the communality estimate for the next iteration (except for the final iteration). The number of iterations is determined by the number of factors requested by the investigator. Having achieved a best estimate of simple structure, the program next attempts to identify a set of optimally efficient "measets"—scales composed of linear combinations of the highest loading items on each factor. Optimality is assessed on the basis of mean Cronbach alpha reliability coefficients (Cronbach, 1951), a process which rejects factorially complex items loading substantially on two or more factors; the result is a set of "pure" (internally consistent) measets or scales which measure independent dimensions. Correlations between measets and Spearman-Brown reliability estimates are also computed.

Study 2

Sample. Six additional samples were obtained from three business schools and one nonbusiness professional school.

1. Business School IB. Thirty-four part-time evening MBA students responded to the LCQ. The curriculum of the MBA program is virtually identical for all three samples from Business School I, and evening students use the same classroom and library facilities used by full-time students. The sample of 34 constitutes less than 10% of the evening student population.

2. Business School IC. The 25 students comprising this sample were an elite group of executives on a specially designed part-time program. In contrast to Business Schools IA and IB, all courses are held in air-conditioned, modernized facilities. Classes are small, and relationships with faculty are facilitated by the size of the class, special orientation arrangements, and the status of the participants.

3. Business School II. The 55 participants in this sample were both full- and part-time MBA students. The dean of the school emphasized the policy of the school to avoid making distinctions between the two types of students. About 40% of the program consists of elective courses, and some of these may be available to students in the first year of the program. The school is located in the heart of the city in a modernized, air-conditioned, and attractive building.

4. Business School III. The 44 respondents in this sample were full-time first-year MBA students at a prestigious business school in the suburbs of a large city. The school has a new building. The first-year students sampled take a core program with no electives, in contrast to the considerable flexibility available later in the program. The school is fairly large (total of 400 students), but class size is carefully controlled.

5. Nonbusiness Professional School (NBPS). The nonbusiness professional school sample consists of two subsamples: NBPS 1 consisted of first-year students taking a highly structured program (no electives) in large classes. A tradition of rigorous socialization to the profession is emphasized in the first year, but is gradually relaxed to an atmosphere of social informality later in the program. Advanced students in this school (NBPS 3) take many electives, in smaller classes, and have some choice over instructors. Grades are felt to be extremely important for progressing in the field. The physical environment of the school is slightly better than but generally comparable to that of Business School I.

The samples were 82 and 90 students for NBPS 1 and NBPS 3, respectively.

Procedure. Faculty and administrators were contacted in the schools sampled to obtain permission to conduct the study and to obtain background information on the school. In Business School III, the questionnaires were administered by a faculty member; in the other schools, the questionnaires were administered by the group of students participating in the project.


A revised cover sheet for the questionnaire was devised to explain that the purpose of the study was to extend a study of factors affecting learning, and to assure students and schools of the anonymity of their responses. These conditions were repeated by the administrators of the questionnaires.

Questionnaire revision. In order to assess the impact of LCQ dimensions on satisfaction with the learning process, the following nine items were added as Questions 37-45, using a 5-point Likert response format (1 = strongly agree, 3 = undecided, 5 = strongly disagree):

37. I am satisfied with my present program.
38. The program stretches me to my full potential.
39. I would look forward to interacting with my classmates in a school alumni association after graduation.
40. I feel this program is preparing me for my future career.
41. If I had to do it over again, I would enroll in the same program, or a similar program.
42. If I had to do it over again, I would enroll at the same school.
43. I really enjoy studying the course material.
44. I would like to do more course-related work beyond that required for courses.
45. I would describe the purpose of my enrollment as being primarily for the sake of learning.

TABLE 1
FACTOR AND MEASET ANALYSIS OF 26 ITEMS (QUESTIONNAIRE SECTIONS A, B, AND C) FOR COMBINED SAMPLES

                                                               Factor loading
Factor and Learning Climate Questionnaire item             I     II    III   IV    V

I—Grading Process
  The grading process in quantitative courses.            .86   .14   .10   .03   .25
  The grading process in qualitative courses.             .86   .13   .11  -.01   .22
  The type of graded assignments.(a)                      .71   .11   .02   .03   .39
  The content and type of in-class exams.(a)              .67   .05   .08   .04   .38
  The topic of graded assignments.(a)                     .62   .07   .04   .14   .31
  The choice of instructors for a course.(a)              .46   .04   .16   .05   .41

II—Task Relationships with Faculty
  Faculty do not treat students as willing to learn.      .04   .75   .11   .15   .10
  Faculty do not seem to value student opinions
    and experiences.                                      .12   .72   .11   .20   .13
  Students do not feel they can be open with faculty.     .05   .61   .13   .28   .14
  Faculty seem more interested in activities besides
    teaching.                                             .06   .60   .15   .26   .12
  Faculty are not easily accessible outside class.        .07   .52   .24   .29   .11
  The presence of the current grading system.(a)          .30   .42   .18   .12   .24

III—Physical Environment
  The physical design of classrooms.                      .12   .12   .80   .23   .25
  Lack of conference rooms.                              -.04   .13   .70   .40   .03
  Design of student lounge.                               .14   .08   .67   .05   .14
  Size of classes.(a)                                     .13   .20   .47   .15   .11
  Lack of audiovisual aids.(a)                           -.12   .26   .43  -.16   .23

IV—Social Relationships with Faculty
  Students do not feel free to address faculty by
    first name.                                           .07   .23   .16   .75   .14
  Lack of social activities with faculty.                -.02   .18   .23   .72   .15
  Faculty do not know students by first name.             .04   .25   .23   .65   .02
  Faculty do not know students by name.                   .15   .28   .30   .57   .09

V—Course Material Presentation
  The use of audiovisual and other classroom aids.        .21   .11   .18   .03   .67
  The manner in which the class material is presented
    by the instructor.                                    .29   .11   .04   .08   .66
  The material that the instructor presents.              .32   .18  -.02   .07   .63
  The choice of which courses to take.(a)                 .35   .02   .12  -.04   .53

Percentage of common variance                            25.5  20.2  18.5  18.2  17.5

Measet reliabilities
  Cronbach alpha                                          .95   .82   .81   .83   .79
  Spearman-Brown                                          .93   .83   .83   .84   .74

(a) Indicates item deleted from measet to improve reliability.


A final section of the questionnaire was added to obtain the following demographic data from respondents: age, sex, grade point average in present program, undergraduate major, number of credits completed at the end of the current term, and status as a full- or part-time student.

Analysis. Responses from all seven samples (N = 455) were standardized for each item for the first 36 questions (26 scores) from Sections A, B, and C of the LCQ, and the data were again analyzed using the McKelvey (Note 4) program. (Occasional missing data points resulted in some fluctuation in the N for a particular analysis. The programs employed disregard missing data points; they do not read them as zeros. An N of 453 was the basis of the factor analysis and the analyses involving measet scores.)

Measet scores, computed again as linear combinations of item scores, were correlated with the demographic variables of age and grades (all correlations were close to zero).

Six of the satisfaction questions were found to intercorrelate substantially (Numbers 37, 38, 40, 41, 42, and 43) and were combined to form an index of overall satisfaction. The alpha coefficient of internal consistency for the index was .83. Satisfaction scores were also found to be virtually independent of grades and age.
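Cronbach's alpha for such a composite is computed from the item variances and the variance of the summed scale; a minimal sketch follows (the simulated data are illustrative, not the study's responses):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
# the summed scale), as used for the six-item satisfaction index.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k) array of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: six items sharing one common component, so the
# composite should show high internal consistency.
rng = np.random.default_rng(2)
common = rng.normal(size=(455, 1))
items = common + 0.6 * rng.normal(size=(455, 6))
alpha = cronbach_alpha(items)   # high, since the items share most variance
```

When all items are perfectly parallel, alpha reaches 1; as item-specific variance grows relative to the shared component, alpha shrinks.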

RESULTS

Factor and Measet Analysis

Evaluation of the data from the first study began with the factor analysis of the 26 climate items. Having examined all of the McKelvey solutions from 3 to 9 factors, a 5-factor solution was selected, one which explained 53.55% of the total variance.

Examination of the factors suggested the following descriptive titles: I—Grading Process; II—Physical Environment; III—Task Relationships with Faculty; IV—Social Relationships with Faculty; and V—Course Material Presentation.

Inspection of these data suggested that the five measets compare quite favorably in terms of reliabilities normally reported for questionnaires. The average alpha coefficient of internal consistency was .83. The average intercorrelation between scales was .32, the largest (.56) between Measets I and V. Since these two measets (but none of the others) are composed entirely of items computed from differences between Sections A and B of the questionnaire, method variance may contribute to this correlation. In the second analysis, the correlation between these two measets was .50.

Since the items composing the five factors were virtually identical in Study 1 and Study 2, the description of the factor composition will be presented for the combined samples only.

Responses for all seven samples (N = 455) were subsequently pooled, standardized, and factor analyzed. Table 1 shows the factor and measet analysis based on the combined samples. The factors which emerged were similar to those found in the preliminary analysis in item contents, reliabilities (average alpha coefficient = .84, range = .80-.95), and measet intercorrelations (an average of .33 with a high of .56 between Scales II and IV).

It should be observed that the items loading on the factors provide considerable intuitive evidence for substantive validity of the measets (see Table 1).

Were the results for the combined sample reasonably similar to those for the first school (IA)? To answer this question, a new set of measet scores was computed for Business School IA, using only the items identified for the combined samples in the second factor analysis. Spearman-Brown reliabilities for the new measets ranged from .85 to .79 with an average of .81. The comparable range for the original Business School IA measets was .92 to .79 with a mean of .84. Measet intercorrelations averaged .33 with the highest (.53) occurring between Scales II and IV—two scales from Section C of the questionnaire. The results indicated that the general solution from the combined samples was almost equally reliable when applied to the original Business School IA data. Both factor analyses produced encouraging evidence for the substantive and structural validity of the LCQ.
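The Spearman-Brown estimates reported here can be illustrated with a split-half correlation stepped up to full test length; whether the McKelvey program used an odd-even split, as sketched below, is an assumption:

```python
# Split-half reliability stepped up with the Spearman-Brown formula
# r_full = 2r / (1 + r), where r correlates two half-scale totals.
import numpy as np

def split_half_spearman_brown(items):
    """items: (n_respondents, k) array; odd-even split of the k items."""
    half1 = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    half2 = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)
```

The step-up corrects for the fact that each half-scale is only half as long, and hence less reliable, than the full measet.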

Tables 2 and 3 summarize responses and some relevant objective comparisons among the sampled professional schools. For example, samples known to occupy better physical facilities (Business Schools IC, II, and III—see sample descriptions) reported that the physical environment was less of an obstacle to a useful and meaningful learning environment (p < .01). Also, the differences in perceiving both forms of faculty-student relationships in Business Schools IC and II, and in NBPS 3 were in accordance with prior expectations, as


TABLE 2
MEANS (AND STANDARD DEVIATIONS) OF LEARNING CLIMATE QUESTIONNAIRE SCORES AND SATISFACTION IN SEVEN SAMPLES

                                     Business schools                      NBPS
Dimension                   IA       IB       IC       II       III      1        3
                         (n=125)  (n=34)   (n=25)   (n=55)   (n=44)   (n=82)   (n=90)

Grading Process            4.06     3.35     3.34     3.51     3.40     4.76     4.47
                          (1.44)   (1.52)   (1.39)   (1.44)   (1.60)   (1.62)   (1.74)
Task Relationships
  with Faculty             3.75     3.40     1.87     2.98     3.63     3.17     3.11
                          (1.21)   (1.63)    (.81)   (1.10)   (1.40)   (1.24)   (1.17)
Physical Environment       3.84     3.32     1.83     2.11     2.22     3.82     3.76
                          (1.57)   (1.62)    (.92)    (.96)   (1.09)   (1.75)   (1.91)
Social Relationships
  with Faculty             3.21     2.90     1.48     2.45     3.29     3.04     2.84
                          (1.36)   (1.39)    (.56)   (1.34)   (1.27)   (1.44)   (1.63)
Course Material
  Presentation             3.31     3.47     2.94     2.75     2.86     3.60     3.14
                          (1.28)   (1.45)    (.77)   (1.26)   (1.26)   (1.61)   (1.20)
Overall satisfaction
  with learning process     Not     2.96     2.34     2.80     3.03     3.15     3.08
                         measured   (.82)    (.64)    (.73)    (.79)   (1.09)    (.74)

Note. Sample sizes for a particular item may vary slightly, due to missing data. Higher scores indicate greater perceived control discrepancy, more of an obstacle, or more dissatisfaction. NBPS = nonbusiness professional school.

were most of the differences on the Grading Process dimension. The major surprise in these results was that Business School III did not cluster with Business School IA. Subsequent investigation, however, uncovered potential critical differences in grading policy at the two schools: Business School III, where students evidence less concern with grading policy, requires a "C" average and allows "Fs" to be made up. At Business School IA, a "B minus" average is required, and making up "Fs," even if approved by the faculty, is extremely difficult within the structured framework of the program. At the nonbusiness professional school, grades are believed to be major determinants of subsequent career success.

The differences in Course Material Presentation could not be predicted on the basis of available knowledge about the schools. On the other hand, the outstanding showing of Business School IC on all five scales seems to reflect the exceptional treatment of this group in terms of orientation program, facilities, and other tokens of attention.

In sum, considerable confidence in the external validity of the measets seems merited based on predictable objective differences in the learning environments across the subsamples.

Table 4 presents the correlations between individual scores on the measets and overall satisfaction in the six schools where the latter data were collected. For the entire sample (N = 315), overall satisfaction appears to be most immediately related to the Task Relationships with Faculty (+.46, p < .001) and Course Material Presentation (+.39, p < .001)—an intuitively plausible outcome. All other scale scores evidenced smaller yet significant correlations (p < .001).

Within samples, variation in measet-overall satisfaction correlations is seen. Table 4 illustrates the aforementioned tendency for Task Relationships with Faculty and Course Material Presentation to account for disproportionate variance in overall satisfaction. For purposes of diagnosing individual school learning climates, however, it is clear that the significant relationships may be different, depending upon the sample. For example, comparison of Business School IB and NBPS 3 reveals a situation where, in the business school, Social Relationships with Faculty accounts for a significant correlation (r = +.46), while for the nonbusiness sample, this factor accounts for a small correlation (r = +.07).


While one aspect of the LCQ's validity (i.e., as a measure of individual perceptions) is evident in the measet-satisfaction correlations, the instrument was designed to measure climate, where climate is defined as a property of the organization (the shared perceptions of participants) which affects the satisfaction of students. Viewing the entire sample as the unit for analysis, Spearman rank-order correlations (rho) were computed for the sample means on each of the climate scales and for satisfaction (see Table 2). The resulting rhos were as follows: Grading Process, .77; Task Relationships with Faculty, .54; Physical Environment, .94; Social Relationships with Faculty, .71; and Course Material Presentation, .60. Since the sample size is small (N = 6 samples), only the rho for Physical Environment is significant (p = .01, one-tailed). However, all of the associations are relatively sizable and positive. Moreover, when mean ranks of school scores on the climate dimensions were rank-ordered and correlated with the satisfaction ranks, the resulting rho was .99 (p < .01). As shown in Table 3, students from two schools (IC and II) expressed significantly less concern with most dimensions of the learning environment, while the students of NBPS 1 voiced more concern on every dimension. Scheffé's test for multiple comparisons for the six satisfaction means showed the following ordering (p < .01): IC and II are more satisfied than IB, III, and NBPS 3, which, in turn, are more satisfied than NBPS 1. Hence the rho of .99 appears to

TABLE 3
SUMMARY OF SIGNIFICANT COMPARISONS BETWEEN SAMPLES

Samples less concerned                    Samples more concerned                   p

I—Grading Process
Business Schools IB, IC, II, III          Business School IA and NBPS 1, NBPS 3   .01

II—Task Relationships with Faculty
Business School IC                        All others                              .01
Business School II and NBPS 3             Business Schools IA, III, and NBPS 1    .05

III—Physical Environment
Business Schools IC, II, III              Business Schools IA, IB, and NBPS 1,
                                            NBPS 3                                .01

IV—Social Relationships with Faculty
Business Schools IC, II, and NBPS 3       Business Schools IA, III, and NBPS 1    .01

V—Course Material Presentation
Business Schools IC, II, III              Business Schools IA, IB, and NBPS 1     .01

Note. Scheffé tests for multiple comparisons (Glass & Stanley, 1970) were used.

reflect substantial differences between samples on both the climate and satisfaction scales.
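The rank-order computation can be replicated from the Table 2 sample means; a minimal sketch for the Physical Environment scale (the one significant rho), using the six samples for which satisfaction was measured:

```python
# Spearman rank-order correlation (no ties) between the six sample means
# on Physical Environment and on overall satisfaction, from Table 2.
# School IA is excluded because satisfaction was not measured there.
import numpy as np

def spearman_rho(x, y):
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)   # rank 1 = smallest value
        return r
    d = ranks(np.asarray(x, dtype=float)) - ranks(np.asarray(y, dtype=float))
    n = len(d)
    return 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))

# Sample means in the order IB, IC, II, III, NBPS 1, NBPS 3 (Table 2).
physical_env = [3.32, 1.83, 2.11, 2.22, 3.82, 3.76]
satisfaction = [2.96, 2.34, 2.80, 3.03, 3.15, 3.08]

print(round(spearman_rho(physical_env, satisfaction), 2))  # → 0.94
```

The result matches the .94 reported above: the samples occupying the worst physical facilities are also, in rank order, the least satisfied.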

TABLE 4
CORRELATIONS OF INDIVIDUAL SCORES ON LEARNING CLIMATE QUESTIONNAIRE MEASETS AND THE OVERALL SATISFACTION INDEX

                    I—Grading   II—Task         III—Physical   IV—Social       V—Course Material
Sample              Process     Relationships   Environment    Relationships   Presentation

IB (n = 34)           .25          .33*            .24            .46**           .41*
IC (n = 23)          -.06          .24             .12            .30            -.13
II (n = 55)           .19          .66***         -.02            .28*            .47***
III (n = 42)          .22          .34*            .01            .20             .45**
NBPS 1 (n = 77)       .32**        .50***          .27*           .45***          .41***
NBPS 3 (n = 84)       .19          .38***          .13            .07             .28**
Total (n = 315)       .26***       .46***          .23***         .33***          .39***

* p < .05. ** p < .01. *** p < .001.


DISCUSSION

Briefly summarized, the factor analyses and measet reliabilities provide evidence for the substantive and structural validity of the LCQ measets. Support for the external validity of the instrument was demonstrated in the correlations between measet and satisfaction scores on both an individual and sample basis. More importantly, the discriminative capacity of the LCQ measets was noted in their ability to discriminate between relevant known properties of the various school samples.

Given the correlational nature of the analysis, any attempt to interpret the directions of causal effects must be regarded as highly speculative. Nevertheless, parallel findings in the literature of organizational behavior suggest that the patterns found in the professional schools may represent instances of frequently observed organizational phenomena. This gives additional support to the validity of the instrument.

For example, it is generally accepted that features of the physical environment tend to condition the quality and patterns of social interaction that develop in human groups (see Shaw, 1971, pp. 87-89, for a review of these findings). Hence the quality of faculty-student relationships, particularly of the social variety, may well be linked to whether the physical facilities foster contact and interaction. Based on individual responses (N = 455), the correlation between Physical Environment and Social Relationships (r = +.45, p < .01) supports this interpretation.

It is instructive to examine the items comprising the Social Relationships with Faculty scale (see Table 1). In addition to the frequency of social activities, three of the questions deal with the use of names: Do faculty know students' names? Do students address faculty formally or informally? The common denominator for such questions seems to be the amount of social distance between teachers and students.

If social relationships are conducted on a formal and distant basis, task relationships may be characterized by an impersonal lack of mutual concern, trust, and interest, rather than a climate of mutual regard, support, and openness. The content of the items composing the Task Relationships with Faculty measet seems consistent with this expectation (see Table 1). Also, the correlation between Social Relationships and Task Relationships is .54 (N = 455, p < .01).

Numerous contemporary critics argue that the impact of educational processes depends heavily upon the interpersonal climate among faculty and students (Committee on the Student in Higher Education, 1968; Mann, Arnold, Binder, Cytrynbaum, Newman, Ringwald, Ringwald, & Rosenwein, 1970; Rogers, 1969). A parallel hypothesis runs through many behavioral science theories of organizations. For example, Argyris (1962) reported declining task effectiveness in organizations as a result of deteriorating interpersonal relationships (i.e., those marked by decreasing trust, openness, risk taking, authenticity, helping, etc., and increasing conformity, evaluations, defensiveness, etc.). In short, there is ample precedent for expecting that overall satisfaction with the educational process should be heavily dependent upon the quality of faculty-student relationships, as noted in the correlations reported here.

Looking ahead, it has been hypothesized that changes in organizational climate precede changes in outcome variables (e.g., Marrow, Bowers, & Seashore, 1967). Regarding any organizational development program to enhance student learning and satisfactions, does the LCQ provide a valid indicator and monitor of how specific organizational changes result in these outcomes? More specifically, can changes indicated by the LCQ over time suggest the strategy or the modification of a given organizational development program? Longitudinal action research assessments in professional schools would be necessary to explore these issues.

Furthermore, while the present study has focused primarily on graduate schools of business, does the validity of the instrument generalize to a broader range of professional schools, and would the LCQ be equally helpful in guiding educational changes in these systems? Research along these lines is currently underway.



REFERENCE NOTES

1. Schneider, B. The perceived environment: Organizational climate. Paper presented at the Midwestern Psychological Association Convention, Chicago, May 1973.

2. Kilmann, R. H. An action research model for the design of curricula for higher education: The MAPS method (Working Paper 52). Unpublished manuscript, University of Pittsburgh, Graduate School of Business, 1973.

3. McKelvey, W. W. On developing statistically significant measuring instruments by factor analytic technique (Research Paper 25). Unpublished manuscript, University of California, Los Angeles, Graduate School of Business Administration, 1969.

4. McKelvey, W. W. User's manual: McKelvey factor and measet analysis program (Research Paper 28). Unpublished manuscript, University of California, Los Angeles, Graduate School of Business Administration, 1970.

REFERENCES

Argyris, C. Interpersonal competence and organizational effectiveness. Homewood, Ill.: Dorsey, 1962.

Astin, A. W. The college environment. Washington, D.C.: American Council on Education, 1968.

Astin, A. W. The methodology of research on college impact, part two. Sociology of Education, 1970, 43, 437-450.

Campbell, D. T. Recommendations for APA test standards regarding construct, trait, or discriminant validity. In D. N. Jackson & S. Messick (Eds.), Problems in human assessment. New York: McGraw-Hill, 1967.

Committee on the Student in Higher Education. The student in higher education. New Haven: The Hazen Foundation, 1968.

Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika, 1951, 16, 297-334.

Cronbach, L. J., & Meehl, P. E. Construct validity in psychological tests. In D. N. Jackson & S. Messick (Eds.), Problems in human assessment. New York: McGraw-Hill, 1967.

Forehand, G., & Gilmer, B. Environmental variation in studies of organizational behavior. Psychological Bulletin, 1964, 62, 361-382.

Glass, G. V., & Stanley, J. C. Statistical methods in education and psychology. Englewood Cliffs, N.J.: Prentice-Hall, 1970.

Guion, R. M. A note on organizational climate. Organizational Behavior and Human Performance, 1973, 9, 120-125.

Jenkins, D. H. Force field analysis applied to a school situation. In W. G. Bennis, K. D. Benne, & R. Chin (Eds.), The planning of change. New York: Holt, Rinehart & Winston, 1962.

Leavitt, H. J. Applied organizational change in industry: Structural, technological, and humanistic approaches. In J. G. March (Ed.), Handbook of organizations. Chicago: Rand McNally, 1965.

Lewin, K. Group decision and social change. In T. M. Newcomb & E. L. Hartley (Eds.), Readings in social psychology. New York: Holt, Rinehart & Winston, 1947.

Likert, R. The human organization. New York: McGraw-Hill, 1967.

Lippitt, R., Watson, J., & Westley, B. The dynamics of planned change. New York: Harcourt, Brace & World, 1958.

Litwin, G., & Stringer, R. Motivation and organizational climate. Cambridge: Harvard University Press, 1968.

Loevinger, J. Objective tests as instruments of psychological theory. In D. N. Jackson & S. Messick (Eds.), Problems in human assessment. New York: McGraw-Hill, 1967.

Mann, R. D., Arnold, S. M., Binder, J., Cytrynbaum, S., Newman, B. M., Ringwald, B., Ringwald, J., & Rosenwein, R. The college classroom: Conflict, change, and learning. New York: Wiley, 1970.

Marrow, A., Bowers, D., & Seashore, S. Management by participation. New York: Harper & Row, 1967.

Pace, C. R. College and university environmental scales. Princeton: Educational Testing Service, 1963.

Peak, H. Problems of objective observation. In L. Festinger & D. Katz (Eds.), Research methods in the behavioral sciences. New York: Dryden Press, 1953.

Porter, L., & Lawler, E. Properties of organization structure in relation to job attitudes and job behavior. Psychological Bulletin, 1966, 41, 23-51.

Pritchard, R. D., & Karasick, B. W. The effects of organizational climate on managerial job performance and job satisfaction. Organizational Behavior and Human Performance, 1973, 9, 110-119.

Rogers, C. R. Freedom to learn. Columbus, Ohio: Charles E. Merrill, 1969.

Schmuck, R. A., & Miles, M. B. Organization development in schools. Palo Alto, Calif.: National Press, 1971.

Shaw, M. E. Group dynamics: The psychology of small group behavior. New York: McGraw-Hill, 1971.

Stern, G. G. People in context: Measuring person-environment congruence in education and industry. New York: Wiley, 1970.

Taguiri, R., & Litwin, G. (Eds.). Organizational climate: Explorations of a concept. Cambridge: Harvard University Press, 1968.

Tannenbaum, A. S. Control in organizations. New York: McGraw-Hill, 1967.

Watson, G. (Ed.). Change in school systems. Washington, D.C.: National Education Association, National Training Laboratories, 1967.

(Received February 20, 1974)