
1

Alternatives to Difference Scores:
Polynomial Regression and Response Surface Methodology

Jeffrey R. Edwards
University of North Carolina

2

Outline
I. Types of Difference Scores
II. Questions Difference Scores Are Intended To Address
III. Problems With Difference Scores
IV. An Alternative Procedure
V. The Matrix Approach to Testing Constraints
VI. Analyzing Quadratic Regression Equations Using Response Surface Methodology
VII. Moderated Polynomial Regression
VIII. Mediated Polynomial Regression
IX. Difference Scores As Dependent Variables
X. Answers to Frequently Asked Questions

3

Types of Difference Scores: Univariate
• Algebraic difference: (X – Y)
• Absolute difference: |X – Y|
• Squared difference: (X – Y)2

4

Types of Difference Scores: Multivariate
• Sum of algebraic differences: Σ(Xi – Yi) = D1
• Sum of absolute differences: Σ|Xi – Yi| = |D|
• Sum of squared differences: Σ(Xi – Yi)2 = D2
• Euclidean distance: √[Σ(Xi – Yi)2] = D
• Profile correlation: Σ(Xi – X̄)(Yi – Ȳ) / √[Σ(Xi – X̄)2 Σ(Yi – Ȳ)2] = Q
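A minimal Python sketch of these indices, using made-up profiles X and Y rather than the MBA data described later:

import numpy as np

# Illustrative profiles (made-up numbers, not the MBA data).
X = np.array([5.0, 3.0, 4.0, 6.0])
Y = np.array([4.0, 4.0, 2.0, 6.0])

d = X - Y
D1 = d.sum()                 # sum of algebraic differences
abs_D = np.abs(d).sum()      # sum of absolute differences
D2 = (d ** 2).sum()          # sum of squared differences
D = np.sqrt(D2)              # Euclidean distance
Q = np.corrcoef(X, Y)[0, 1]  # profile correlation

print(D1, abs_D, D2, D, Q)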

5

Questions Difference Scores Are Intended to Address
• How well do characteristics of the job fit the needs or desires of the employee?
• To what extent do job demands exceed or fall short of the abilities of the person?
• Are prior expectations of the employee met by actual job experiences?
• What is the degree of similarity between perceptions or beliefs of supervisors and subordinates?
• Do the values of the person match the culture of the organization?
• Can novices provide performance evaluations that agree with expert ratings?

6

Data Used for Running Illustration
• Data were collected from 373 MBA students who were engaged in the recruiting process.
• Respondents rated the actual and desired amounts of various job attributes and the anticipated satisfaction concerning a job for which they had recently interviewed.
• Actual and desired measures had three items and used 7-point response scales ranging from "none at all" to "a very great amount." The satisfaction measure had three items and used a 7-point response scale ranging from "strongly disagree" to "strongly agree."
• The job attributes used for illustration are autonomy, prestige, span of control, and travel.

7

Problems with Difference Scores: Reliability
• When component measures are positively correlated, difference scores are often less reliable than either component.
• The formula for the reliability of an algebraic difference is (Johns, 1981):

α(x–y) = (σx2αxx + σy2αyy – 2rxyσxσy) / (σx2 + σy2 – 2rxyσxσy)

where αxx and αyy are the reliabilities of X and Y, rxy is their correlation, and σx and σy are their standard deviations.

8

Problems with Difference Scores: Reliability
• To illustrate, if X and Y have unit variances, have reliabilities of .75, and are correlated .50, the reliability of X – Y equals:

α(x–y) = (1 × .75 + 1 × .75 – 2 × .50 × 1 × 1) / (1 + 1 – 2 × .50 × 1 × 1) = .50 / 1.00 = .50

9

Example: Reliability of the Algebraic Difference for Autonomy
• For autonomy, the actual amount (X) and desired amount (Y) measures had reliabilities of .89 and .85, variances of 1.16 and 0.88, and a correlation of .51. Hence, the reliability of the algebraic difference (X – Y) is:

α(x–y) = (1.16 × .89 + 0.88 × .85 – 2 × .51 × 1.08 × 0.94) / (1.16 + 0.88 – 2 × .51 × 1.08 × 0.94) = .74

• Note that this reliability is lower than the reliabilities of X and Y.
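The Johns (1981) formula is easy to script. A minimal Python sketch (the function name is mine) that reproduces the .50 and .74 values above:

import math

def alpha_diff(var_x, var_y, rel_x, rel_y, r_xy):
    # Reliability of the algebraic difference (X - Y), per Johns (1981).
    cov_term = 2 * r_xy * math.sqrt(var_x) * math.sqrt(var_y)
    return (var_x * rel_x + var_y * rel_y - cov_term) / (var_x + var_y - cov_term)

print(alpha_diff(1.0, 1.0, .75, .75, .50))    # 0.50
print(alpha_diff(1.16, 0.88, .89, .85, .51))  # ~0.74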

10

Reliabilities of Other Types of Difference Scores
• Reliabilities of other difference scores can be estimated using procedures for the reliabilities of squares, products, and linear combinations of variables.
• For example, a squared difference can be written as a linear combination of X2, XY, and Y2:

(X – Y)2 = X2 – 2XY + Y2

• The reliability of this expression can be derived by combining procedures described by Nunnally (1978) and Bohrnstedt and Marwell (1978).

11

Reliabilities of Other Types of Difference Scores
• The reliabilities of profile similarity indices such as D1, |D|, and D2 can also be derived by applying Nunnally (1978) and Bohrnstedt and Marwell (1978).
• The reliabilities of the squared and product terms that constitute |X – Y|, (X – Y)2, |D|, and D2 involve the means of X and Y, which are arbitrary for measures that use interval rather than ratio scales.
• Profile similarity indices usually collapse conceptually distinct dimensions, which obscures the meaning of their true scores and, thus, their reliabilities.

12

Problems with Difference Scores: Conceptual Ambiguity
• It might seem that component variables are reflected equally in a difference score, given that the components are implicitly assigned the same weight when the difference score is constructed.
• However, the variance of a difference score depends on the variances and covariances of the component measures, which are sample dependent.
• When one component is a constant, the variance of a difference score is solely due to the other component, i.e., the one that varies. For instance, when P-O fit is assessed in a single organization, the P-O difference solely represents variation in the person scores.

13

Variance of an Algebraic Difference Score
• The variance of an algebraic difference score can be computed using the following formula for the variance of a weighted linear combination of random variables:

V(aX + bY) = a2V(X) + b2V(Y) + 2abC(X,Y)

• For the algebraic difference score (X – Y), a = +1 and b = –1, which yields:

V(X – Y) = V(X) + V(Y) – 2C(X,Y)

• Thus, X and Y contribute equally to V(X – Y) only when V(X) and V(Y) happen to be equal.

14

Example: Variance of the Algebraic Difference for Autonomy
• For autonomy, the variance of X is 1.16, and the variance of Y is 0.88. The covariance between X and Y is their correlation multiplied by the product of their standard deviations, which equals .51 × 1.08 × 0.94 = 0.52. Using these quantities, the variance of (X – Y) is:

V(X – Y) = 1.16 + 0.88 – 1.04 = 1.02

• V(X – Y) depends more on V(X) than V(Y) and also incorporates C(X,Y). Thus, V(X – Y) does not reflect V(X) and V(Y) in equal proportions.
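A short Python sketch of this computation, using the rounded autonomy statistics from the slide:

var_x, var_y, r_xy = 1.16, 0.88, 0.51
cov_xy = r_xy * (var_x ** 0.5) * (var_y ** 0.5)  # ~0.52
var_diff = var_x + var_y - 2 * cov_xy
print(cov_xy, var_diff)  # ~0.52 and ~1.01; the slide's 1.02 presumably
                         # reflects unrounded inputs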

15

Variances of Other Types of Difference Scores
• Variances of difference scores involving higher-order terms, such as (X – Y)2, can be computed using rules for the variances of products of random variables (Bohrnstedt & Goldberger, 1969; Goodman, 1960).
• These formulas involve the means of X and Y, which are arbitrary when X and Y are measured on interval rather than ratio scales.
• Nonetheless, it is reasonable to assume that all components do not contribute equally, particularly when the number of components becomes large.

16

Problems with Difference Scores: Confounded Effects
• Difference scores confound the effects of the components of the difference.
• For example, an equation using an algebraic difference as a predictor can be written as:

Z = b0 + b1(X – Y) + e

• In this equation, b1 can reflect a positive relationship for X, a negative relationship for Y, or some combination thereof.

17

Problems with Difference Scores: Confounded Effects
• Some researchers have attempted to address this confound by controlling for one component of the difference. For an algebraic difference, this yields:

Z = b0 + b1X + b2(X – Y) + e

• However, controlling for X simply transforms the algebraic difference into a partialled measure of Y (Wall & Payne, 1973):

Z = b0 + (b1 + b2)X – b2Y + e

• Thus, b2 is not the effect of (X – Y), but instead is the negative of the effect of Y, controlling for X.

18

Problems with Difference Scores: Confounded Effects
• The effects of X and Y are easier to interpret if X and Y are used as separate predictors:

Z = b0 + b1X + b2Y + e

• The R2 from this equation is the same as that from the equation using (X – Y) and X as predictors, but its interpretation is more straightforward (see the sketch below).
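A quick Python check of this equivalence on simulated data (the variable names and data are illustrative, not the MBA sample): {X, X – Y} spans the same column space as {X, Y}, so the two equations must yield the same R2.

import numpy as np

rng = np.random.default_rng(0)
n = 360
X = rng.normal(size=n)
Y = 0.5 * X + rng.normal(size=n)
Z = 0.4 * X - 0.3 * Y + rng.normal(size=n)

def r_squared(predictors, z):
    A = np.column_stack([np.ones(len(z))] + predictors)
    fitted = A @ np.linalg.lstsq(A, z, rcond=None)[0]
    return 1 - ((z - fitted) ** 2).sum() / ((z - z.mean()) ** 2).sum()

print(r_squared([X, X - Y], Z))  # identical ...
print(r_squared([X, Y], Z))      # ... to this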

19

Example: Confounded Effects for the Algebraic Difference for Autonomy

• Results using (X – Y):

Dep Var: SAT   N: 360   Multiple R: 0.339   Squared multiple R: 0.115
Adjusted squared multiple R: 0.113   Standard error of estimate: 1.082

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.937        0.061      0.0       .          97.007  0.000
AUTALD    0.393        0.058      0.339     1.000      6.825   0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  54.589          1    54.589       46.586   0.000
Residual    419.498         358  1.172

20

Example: Confounded Effects for the Algebraic Difference for Autonomy

• Results using X and (X – Y):

Dep Var: SAT   N: 360   Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.122   Standard error of estimate: 1.077

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.835        0.077      0.000     .          75.874  0.000
AUTCA     0.145        0.066      0.134     0.650      2.187   0.029
AUTALD    0.301        0.071      0.260     0.650      4.235   0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  60.133          2    30.067       25.930   0.000
Residual    413.953         357  1.160

21

Example: Confounded Effects for the Algebraic Difference for Autonomy

• Results using X and Y:

Dep Var: SAT   N: 360   Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.122   Standard error of estimate: 1.077

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.835        0.077      0.000     .          75.874  0.000
AUTCA     0.445        0.062      0.413     0.737      7.172   0.000
AUTCD     -0.301       0.071      -0.244    0.737      -4.235  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  60.133          2    30.067       25.930   0.000
Residual    413.953         357  1.160

22

Problems with Difference Scores: Confounded Effects
• Other researchers have controlled for X, Y, or (X – Y) when using |X – Y| or (X – Y)2 as predictors. For example:

Z = b0 + b1(X – Y) + b2(X – Y)2 + e

• Although this approach might seem to provide a conservative test for (X – Y)2, the term b1(X – Y) merely shifts the minimum of the U-shaped curve captured by (X – Y)2. Specifically, if b1 is positive, the minimum of the curve is shifted to the left, and if b1 is negative, the minimum is shifted to the right.

23

Example: Confounded Effects for the Squared Difference for Autonomy

• Results using (X – Y)2:

Dep Var: SAT   N: 360   Multiple R: 0.310   Squared multiple R: 0.096
Adjusted squared multiple R: 0.093   Standard error of estimate: 1.094

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.993        0.067      0.000     .          89.830  0.000
AUTSQD    -0.183       0.030      -0.310    1.000      -6.162  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  45.463          1    45.463       37.972   0.000
Residual    428.623         358  1.197

24

Example: Confounded Effects for the Squared Difference for Autonomy

• Plot of (X – Y)2:

[Figure: fitted curve of Satisfaction (1–7) against Actual – Desired (–6 to +6).]

25

Example: Confounded Effects for the Squared Difference for Autonomy

• Results using (X – Y) and (X – Y)2:

Dep Var: SAT   N: 360   Multiple R: 0.364   Squared multiple R: 0.132
Adjusted squared multiple R: 0.127   Standard error of estimate: 1.074

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.003        0.066      0.000     .          91.641  0.000
AUTALD    0.277        0.072      0.240     0.632      3.863   0.000
AUTSQD    -0.097       0.037      -0.164    0.632      -2.646  0.008

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  62.660          2    31.330       27.185   0.000
Residual    411.426         357  1.152

26

Example: Confounded Effects for the Squared Difference for Autonomy

• Plot of (X – Y) and (X – Y)2:

[Figure: fitted curve of Satisfaction (1–7) against Actual – Desired (–6 to +6).]

27

Problems with Difference Scores: Confounded Effects
• Analogously, X and Y have been controlled in equations using |X – Y| as a predictor:

Z = b0 + b1X + b2Y + b3|X – Y| + e

• Controlling for X and Y does not provide a conservative test of |X – Y|. Rather, it alters the tilt of the V-shaped function indicated by |X – Y|. For example, if b1 = –b2 and b1 = b3, the left side is horizontal and the right side is positively sloped, and if b1 = –b2 and b2 = b3, the right side is horizontal and the left side is negatively sloped.

28

Example: Confounded Effects for the Absolute Difference for Autonomy

• Results using |X – Y|:

Dep Var: SAT   N: 360   Multiple R: 0.323   Squared multiple R: 0.105
Adjusted squared multiple R: 0.102   Standard error of estimate: 1.089

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.212        0.087      0.000     .          71.122  0.000
AUTABD    -0.531       0.082      -0.323    1.000      -6.464  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  49.555          1    49.555       41.788   0.000
Residual    424.532         358  1.186

29

Example: Confounded Effects for the Absolute Difference for Autonomy

• Plot of |X – Y|:

[Figure: fitted curve of Satisfaction (1–7) against Actual – Desired (–6 to +6).]

30

Example: Confounded Effects for the Absolute Difference for Autonomy

• Results using (X – Y) and |X – Y|:

Dep Var: SAT   N: 360   Multiple R: 0.374   Squared multiple R: 0.140
Adjusted squared multiple R: 0.135   Standard error of estimate: 1.069

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.140        0.088      0.000     .          69.983  0.000
AUTALD    0.265        0.069      0.229     0.669      3.819   0.000
AUTABD    -0.314       0.099      -0.191    0.669      -3.191  0.002

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  66.220          2    33.110       28.981   0.000
Residual    407.866         357  1.142

31

Example: Confounded Effects for the Absolute Difference for Autonomy

• Plot of (X – Y) and |X – Y|:

[Figure: fitted curve of Satisfaction (1–7) against Actual – Desired (–6 to +6).]

32

Problems with Difference Scores: Untested Constraints
• Difference scores impose untested constraints on the coefficients relating X and Y to Z.
• The constraints imposed by an algebraic difference can be seen with the following equations:

Z = b0 + b1(X – Y) + e

• Expansion yields:

Z = b0 + b1X – b1Y + e

33

Problems with Difference Scores: Untested Constraints
• Now, consider an equation that uses X and Y as separate predictors:

Z = b0 + b1X + b2Y + e

• Comparing this equation to the previous equation shows that using (X – Y) as a predictor constrains the coefficients on X and Y to be equal in magnitude but opposite in sign (i.e., b1 = –b2, or b1 + b2 = 0). This constraint should not be imposed on the data but instead should be treated as a hypothesis to be tested.

34

Example: Constrained and Unconstrained Algebraic Difference for Autonomy

• Results using (X – Y):

Dep Var: SAT   N: 360   Multiple R: 0.339   Squared multiple R: 0.115
Adjusted squared multiple R: 0.113   Standard error of estimate: 1.082

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.937        0.061      0.0       .          97.007  0.000
AUTALD    0.393        0.058      0.339     1.000      6.825   0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  54.589          1    54.589       46.586   0.000
Residual    419.498         358  1.172

35

Example: Constrained and Unconstrained Algebraic Difference for Autonomy

• Results using X and Y:

Dep Var: SAT   N: 360   Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.122   Standard error of estimate: 1.077

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.835        0.077      0.000     .          75.874  0.000
AUTCA     0.445        0.062      0.413     0.737      7.172   0.000
AUTCD     -0.301       0.071      -0.244    0.737      -4.235  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  60.133          2    30.067       25.930   0.000
Residual    413.953         357  1.160

36

Example: Constrained and Unconstrained Algebraic Difference for Autonomy

• Constrained and unconstrained results:

               X       Y        R2
Constrained    0.39**  -0.39**  .12**
Unconstrained  0.45**  -0.30**  .13**

37

Problems with Difference Scores: Untested Constraints
• The constraints imposed by an absolute difference can be seen using a piecewise linear equation:

Z = b0 + b1(1 – 2W)(X – Y) + e

• When (X – Y) is positive or zero, W = 0, and the term (1 – 2W)(X – Y) becomes (X – Y). When (X – Y) is negative, W = 1, and (1 – 2W)(X – Y) equals –(X – Y). Thus, W switches the sign on (X – Y) only when it is negative, producing an absolute value transformation (see the sketch below).
• Expanding the equation yields:

Z = b0 + b1X – b1Y – 2b1WX + 2b1WY + e
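A minimal Python sketch of the switch variable W and the resulting piecewise terms (illustrative arrays, not the MBA data):

import numpy as np

x = np.array([3.0, 5.0, 2.0, 4.0])
y = np.array([4.0, 2.0, 2.0, 6.0])

w = (x - y < 0).astype(float)  # W = 1 when (X - Y) is negative, else 0
terms = np.column_stack([x, y, w, w * x, w * y])  # predictors for the piecewise equation

# (1 - 2W)(X - Y) reproduces the absolute difference exactly:
print(np.allclose((1 - 2 * w) * (x - y), np.abs(x - y)))  # True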

38

Problems with Difference Scores: Untested Constraints
• Now, consider a piecewise equation using X and Y:

Z = b0 + b1X + b2Y + b3W + b4WX + b5WY + e

• Comparing this equation to the previous equation shows that |X – Y| imposes four constraints:
  – b1 = –b2, or b1 + b2 = 0
  – b4 = –b5, or b4 + b5 = 0
  – b3 = 0
  – b4 = –2b1, or 2b1 + b4 = 0
• These constraints should be treated as hypotheses to be tested empirically.

39

Example: Constrained and Unconstrained Absolute Difference for Autonomy

• Results using |X – Y|:

Dep Var: SAT   N: 360   Multiple R: 0.323   Squared multiple R: 0.105
Adjusted squared multiple R: 0.102   Standard error of estimate: 1.089

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.212        0.087      0.000     .          71.122  0.000
AUTABD    -0.531       0.082      -0.323    1.000      -6.464  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  49.555          1    49.555       41.788   0.000
Residual    424.532         358  1.186

40

Example: Constrained and Unconstrained Absolute Difference for Autonomy

• Results using X, Y, W, WX, and WY:

Dep Var: SAT   N: 360   Multiple R: 0.399   Squared multiple R: 0.159
Adjusted squared multiple R: 0.147   Standard error of estimate: 1.061

Effect      Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT    6.233        0.152      0.000     .          41.136  0.000
AUTCA       -0.150       0.184      -0.139    0.082      -0.818  0.414
AUTCD       0.183        0.188      0.148     0.102      0.970   0.333
AUTW        -0.349       0.201      -0.148    0.329      -1.737  0.083
AUTW*AUTCA  0.752        0.209      0.490     0.129      3.605   0.000
AUTW*AUTCD  -0.554       0.219      -0.406    0.093      -2.537  0.012

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  75.381          5    15.076       13.386   0.000
Residual    398.705         354  1.126

41

Example: Constrained and Unconstrained Absolute Difference for Autonomy

• Constrained and unconstrained results:

               X        Y       W      WX      WY       R2
Constrained    -0.53**  0.53**  0.00   1.06**  -1.06**  .11**
Unconstrained  -0.15    0.18    -0.35  0.75**  -0.55**  .16**

42

Problems with Difference Scores: Untested Constraints
• The constraints imposed by a squared difference can be seen with the following equations:

Z = b0 + b1(X – Y)2 + e

• Expansion yields:

Z = b0 + b1X2 – 2b1XY + b1Y2 + e

• Thus, a squared difference implicitly treats Z as a function of X2, XY, and Y2.

43

Problems with Difference Scores: Untested Constraints
• Now, consider a quadratic equation using X and Y:

Z = b0 + b1X + b2Y + b3X2 + b4XY + b5Y2 + e

• Comparing this equation to the previous equation shows that (X – Y)2 imposes four constraints:
  – b1 = 0
  – b2 = 0
  – b3 = b5, or b3 – b5 = 0
  – b3 + b4 + b5 = 0
• Again, these constraints should be treated as hypotheses to be tested empirically, not simply imposed on the data.

44

Example: Constrained and Unconstrained Squared Difference for Autonomy

• Results using (X – Y)2:

Dep Var: SAT   N: 360   Multiple R: 0.310   Squared multiple R: 0.096
Adjusted squared multiple R: 0.093   Standard error of estimate: 1.094

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.993        0.067      0.000     .          89.830  0.000
AUTSQD    -0.183       0.030      -0.310    1.000      -6.162  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  45.463          1    45.463       37.972   0.000
Residual    428.623         358  1.197

45

Example: Constrained and Unconstrained Squared Difference for Autonomy

• Results using X, Y, X2, XY, and Y2:

Dep Var: SAT   N: 360   Multiple R: 0.411   Squared multiple R: 0.169
Adjusted squared multiple R: 0.157   Standard error of estimate: 1.055

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.825        0.083      0.000     .          70.161  0.000
AUTCA     0.197        0.100      0.182     0.273      1.966   0.050
AUTCD     -0.293       0.106      -0.238    0.315      -2.754  0.006
AUTCA2    -0.056       0.047      -0.086    0.444      -1.177  0.240
AUTCAD    0.276        0.080      0.396     0.178      3.453   0.001
AUTCD2    -0.035       0.063      -0.054    0.242      -0.553  0.581

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  79.951          5    15.990       14.362   0.000
Residual    394.135         354  1.113

46

Example: Constrained and Unconstrained Squared Difference for Autonomy

• Constrained and unconstrained results:

               X      Y        X2       XY      Y2       R2
Constrained    0.00   0.00     -0.18**  0.36**  -0.18**  .10**
Unconstrained  0.20*  -0.29**  -0.06    0.28**  -0.04    .17**

47

Problems with Difference Scores: Dimensional Reduction
• Difference scores reduce the three-dimensional relationship of X and Y with Z to two dimensions.
  – The linear algebraic difference function represents a symmetric plane with equal but opposite slopes with respect to the X-axis and Y-axis.
  – The U-shaped squared difference function represents a symmetric U-shaped surface with its minimum (or maximum) running along the X = Y line.
  – The V-shaped absolute difference function represents a symmetric V-shaped surface with its minimum (or maximum) running along the X = Y line.

48

Two-Dimensional Algebraic Difference Function

[Figure: Z (1–7) plotted against (X – Y) (–6 to +6).]

49

Three-Dimensional Algebraic Difference Function

50

Two-Dimensional Absolute Difference Function

[Figure: Z (1–7) plotted against (X – Y) (–6 to +6).]

51

Three-Dimensional Absolute Difference Function

52

Two-Dimensional Squared Difference Function

[Figure: Z (1–7) plotted against (X – Y) (–6 to +6).]

53

Three-Dimensional Squared Difference Function

54

Two-Dimensional Algebraic Difference Function for Autonomy

[Figure: Satisfaction (1–7) plotted against Actual – Desired (–6 to +6).]

55

Three-Dimensional Algebraic Difference Function for Autonomy

56

Two-Dimensional Absolute Difference Function for Autonomy

[Figure: Satisfaction (1–7) plotted against Actual – Desired (–6 to +6).]

57

Three-Dimensional Absolute Difference Function for Autonomy

58

Two-Dimensional Squared Difference Function for Autonomy

[Figure: Satisfaction (1–7) plotted against Actual – Desired (–6 to +6).]

59

Three-Dimensional Squared Difference Function for Autonomy

60

Problems with Difference Scores: Dimensional Reduction
• These surfaces represent only three of the many possible surfaces depicting how X and Y may be related to Z.
• This problem is compounded by the use of profile similarity indices, which collapse a series of three-dimensional surfaces into a single two-dimensional function.

61

An Alternative Procedure
• The relationship of X and Y with Z should be viewed in three dimensions, with X and Y constituting the two horizontal axes and Z constituting the vertical axis.
• Analyses should focus not on two-dimensional functions relating the difference between X and Y to Z, but instead on three-dimensional surfaces depicting the joint relationship of X and Y with Z.
• Constraints should not be simply imposed on the data, but instead should be viewed as hypotheses that, if confirmed, lend support to the conceptual model upon which the difference score is based.

62

Confirmatory Approach
• When a difference score represents a hypothesis that is predicted a priori, the alternative procedure should be applied using the confirmatory approach.
  – The R2 for the unconstrained equation should be significant.
  – The coefficients in the unconstrained equation should follow the pattern indicated by the difference score.
  – The constraints implied by the difference score should not be rejected.
  – The set of terms one order higher than those in the unconstrained equation should not be significant.

63

Example: Confirmatory Test of Algebraic Difference for Autonomy

• Unconstrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.122   Standard error of estimate: 1.077

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.835        0.077      0.000     .          75.874  0.000
AUTCA     0.445        0.062      0.413     0.737      7.172   0.000
AUTCD     -0.301       0.071      -0.244    0.737      -4.235  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  60.133          2    30.067       25.930   0.000
Residual    413.953         357  1.160

64

Example: Confirmatory Test of Algebraic Difference for Autonomy
• Unconstrained surface: [three-dimensional surface figure]

65

Example: Confirmatory Test of Algebraic Difference for Autonomy
• The first condition is met, because the R2 from the unconstrained equation is significant.
• The second condition is met, because the coefficients on X and Y are significant and in the expected direction.
• For the third condition, testing the constraints imposed by the algebraic difference is the same as testing the difference in R2 between the constrained and unconstrained equations.

66

Example: Confirmatory Test of Algebraic Difference for Autonomy

• Constrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.339   Squared multiple R: 0.115
Adjusted squared multiple R: 0.113   Standard error of estimate: 1.082

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.937        0.061      0.0       .          97.007  0.000
AUTALD    0.393        0.058      0.339     1.000      6.825   0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  54.589          1    54.589       46.586   0.000
Residual    419.498         358  1.172

67

Example: Confirmatory Test of Algebraic Difference for Autonomy
• Constrained surface: [three-dimensional surface figure]

68

Example: Confirmatory Test of Algebraic Difference for Autonomy
• The general formula for the difference in R2 between two regression equations is:

F = [(R2U – R2C) / (dfC – dfU)] / [(1 – R2U) / dfU]

where the subscripts U and C refer to the unconstrained and constrained equations and df is the residual degrees of freedom.

• The test of the constraint imposed by the algebraic difference for autonomy is:

F = [(.127 – .115) / (358 – 357)] / [(1 – .127) / 357] = 4.91, p < .05

• The constraint is rejected, so the third condition is not satisfied (a sketch of this test follows).
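This F-test is simple to script. A minimal Python sketch (the function name is mine; the df arguments are residual degrees of freedom) that reproduces the test above:

from scipy import stats

def r2_difference_test(r2_u, r2_c, df_u, df_c):
    # F-test for the R2 difference between nested regression equations;
    # the constrained equation (C) is nested in the unconstrained (U).
    f = ((r2_u - r2_c) / (df_c - df_u)) / ((1 - r2_u) / df_u)
    p = stats.f.sf(f, df_c - df_u, df_u)
    return f, p

print(r2_difference_test(.127, .115, 357, 358))  # F ~ 4.91, p ~ .03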

69

Example: Confirmatory Test of Algebraic Difference for Autonomy
• For the fourth condition, the unconstrained equation for the algebraic difference is linear, so the higher-order terms are the three quadratic terms X2, XY, and Y2.
• Testing the three quadratic terms as a set is the same as testing the difference in R2 between the linear and quadratic equations.

70

Example: Confirmatory Test of Algebraic Difference for Autonomy

• Quadratic equation:

Dep Var: SAT   N: 360   Multiple R: 0.411   Squared multiple R: 0.169
Adjusted squared multiple R: 0.157   Standard error of estimate: 1.055

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.825        0.083      0.000     .          70.161  0.000
AUTCA     0.197        0.100      0.182     0.273      1.966   0.050
AUTCD     -0.293       0.106      -0.238    0.315      -2.754  0.006
AUTCA2    -0.056       0.047      -0.086    0.444      -1.177  0.240
AUTCAD    0.276        0.080      0.396     0.178      3.453   0.001
AUTCD2    -0.035       0.063      -0.054    0.242      -0.553  0.581

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  79.951          5    15.990       14.362   0.000
Residual    394.135         354  1.113

71

Example: Confirmatory Test of Algebraic Difference for Autonomy
• The test of the higher-order terms associated with the algebraic difference for autonomy is:

F = [(.169 – .127) / (357 – 354)] / [(1 – .169) / 354] = 5.96, p < .05

• The higher-order terms are significant, so the fourth condition is not satisfied.

72

Example: Confirmatory Test of Absolute Difference for Autonomy

• Unconstrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.399   Squared multiple R: 0.159
Adjusted squared multiple R: 0.147   Standard error of estimate: 1.061

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.233        0.152      0.000     .          41.136  0.000
AUTCA     -0.150       0.184      -0.139    0.082      -0.818  0.414
AUTCD     0.183        0.188      0.148     0.102      0.970   0.333
AUTW      -0.349       0.201      -0.148    0.329      -1.737  0.083
AUTCAW    0.752        0.209      0.490     0.129      3.605   0.000
AUTCDW    -0.554       0.219      -0.406    0.093      -2.537  0.012

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  75.381          5    15.076       13.386   0.000
Residual    398.705         354  1.126

73

Example: Confirmatory Test of Absolute Difference for Autonomy
• Unconstrained surface: [three-dimensional surface figure]

74

Example: Confirmatory Test of Absolute Difference for Autonomy
• The first condition is met, because the R2 from the unconstrained equation is significant.
• The second condition is not met, because the coefficients on X and Y are not significant, although they are in the expected direction.
• For the third condition, testing the constraints imposed by the absolute difference is the same as testing the difference in R2 between the constrained and unconstrained equations.

75

Example: Confirmatory Test of Absolute Difference for Autonomy

• Constrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.323   Squared multiple R: 0.105
Adjusted squared multiple R: 0.102   Standard error of estimate: 1.089

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.212        0.087      0.000     .          71.122  0.000
AUTABD    -0.531       0.082      -0.323    1.000      -6.464  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  49.555          1    49.555       41.788   0.000
Residual    424.532         358  1.186

76

Example: Confirmatory Test of Absolute Difference for Autonomy
• Constrained surface: [three-dimensional surface figure]

77

Example: Confirmatory Test of Absolute Difference for Autonomy
• The test of the constraints imposed by the absolute difference for autonomy is:

F = [(.159 – .105) / (358 – 354)] / [(1 – .159) / 354] = 5.68, p < .05

• The constraints are rejected, so the third condition is not satisfied.

78

Example: Confirmatory Test of Absolute Difference for Autonomy
• For the fourth condition, the unconstrained equation for the absolute difference is piecewise linear, so the higher-order terms are the six quadratic terms X2, XY, Y2, WX2, WXY, and WY2.
• Testing the six quadratic terms as a set is the same as testing the difference in R2 between the piecewise linear and piecewise quadratic equations.

79

Example: Confirmatory Test of Absolute Difference for Autonomy

• Piecewise quadratic equation:

Dep Var: SAT   N: 360   Multiple R: 0.431   Squared multiple R: 0.185
Adjusted squared multiple R: 0.160   Standard error of estimate: 1.053

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.193        0.206      0.000     .          30.124  0.000
AUTCA     -0.438       0.548      -0.407    0.009      -0.799  0.425
AUTCD     0.256        0.505      0.207     0.014      0.506   0.613
AUTW      -0.534       0.276      -0.225    0.172      -1.931  0.054
AUTCAW    0.672        0.608      0.438     0.015      1.105   0.270
AUTCDW    -0.373       0.592      -0.273    0.013      -0.631  0.529
AUTCA2    0.146        0.312      0.225     0.010      0.468   0.640
AUTCAD    -0.092       0.618      -0.133    0.003      -0.150  0.881
AUTCD2    0.107        0.350      0.169     0.008      0.307   0.759
AUTCA2W   -0.088       0.325      -0.082    0.026      -0.272  0.786
AUTCADW   0.325        0.641      0.368     0.004      0.507   0.613
AUTCD2W   -0.219       0.371      -0.342    0.007      -0.589  0.556

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  87.940          11   7.995        7.205    0.000
Residual    386.146         348  1.110

80

Example: Confirmatory Test of Absolute Difference for Autonomy
• The test of the higher-order terms associated with the absolute difference for autonomy is:

F = [(.185 – .159) / (354 – 348)] / [(1 – .185) / 348] = 1.85, p > .05

• The higher-order terms are not significant, so the fourth condition is satisfied.

81

Example: Confirmatory Test of Squared Difference for Autonomy

• Unconstrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.411   Squared multiple R: 0.169
Adjusted squared multiple R: 0.157   Standard error of estimate: 1.055

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.825        0.083      0.000     .          70.161  0.000
AUTCA     0.197        0.100      0.182     0.273      1.966   0.050
AUTCD     -0.293       0.106      -0.238    0.315      -2.754  0.006
AUTCA2    -0.056       0.047      -0.086    0.444      -1.177  0.240
AUTCAD    0.276        0.080      0.396     0.178      3.453   0.001
AUTCD2    -0.035       0.063      -0.054    0.242      -0.553  0.581

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  79.951          5    15.990       14.362   0.000
Residual    394.135         354  1.113

82

Example: Confirmatory Test of Squared Difference for Autonomy
• Unconstrained surface: [three-dimensional surface figure]

83

Example: Confirmatory Test of Squared Difference for Autonomy
• The first condition is met, because the R2 from the unconstrained equation is significant.
• The second condition is not met, because the coefficients on X and Y are significant, and the coefficients on X2 and Y2 are not significant.
• For the third condition, testing the constraints imposed by the squared difference is the same as testing the difference in R2 between the constrained and unconstrained equations.

84

Example: Confirmatory Test of Squared Difference for Autonomy

• Constrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.310   Squared multiple R: 0.096
Adjusted squared multiple R: 0.093   Standard error of estimate: 1.094

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.993        0.067      0.000     .          89.830  0.000
AUTSQD    -0.183       0.030      -0.310    1.000      -6.162  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  45.463          1    45.463       37.972   0.000
Residual    428.623         358  1.197

85

Example: Confirmatory Test of Squared Difference for Autonomy
• Constrained surface: [three-dimensional surface figure]

86

Example: Confirmatory Test of Squared Difference for Autonomy
• The test of the constraints imposed by the squared difference for autonomy is:

F = [(.169 – .096) / (358 – 354)] / [(1 – .169) / 354] = 7.77, p < .05

• The constraints are rejected, so the third condition is not satisfied.

87

Example: Confirmatory Test of Squared Difference for Autonomy
• For the fourth condition, the unconstrained equation for the squared difference is quadratic, so the higher-order terms are the four cubic terms X3, X2Y, XY2, and Y3.
• Testing the four cubic terms as a set is the same as testing the difference in R2 between the quadratic and cubic equations.

88

Example: Confirmatory Test of Squared Difference for Autonomy

• Cubic equation:

Dep Var: SAT   N: 360   Multiple R: 0.436   Squared multiple R: 0.190
Adjusted squared multiple R: 0.170   Standard error of estimate: 1.047

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.757        0.109      0.000     .          52.736  0.000
AUTCA     0.364        0.119      0.337     0.190      3.055   0.002
AUTCD     -0.312       0.120      -0.253    0.245      -2.609  0.009
AUTCA2    0.043        0.095      0.066     0.109      0.456   0.649
AUTCAD    0.356        0.175      0.511     0.037      2.033   0.043
AUTCD2    -0.075       0.126      -0.117    0.060      -0.594  0.553
AUTCA3    -0.104       0.037      -0.442    0.094      -2.817  0.005
AUTCA2D   0.052        0.066      0.167     0.052      0.794   0.428
AUTCAD2   -0.030       0.089      -0.098    0.028      -0.338  0.736
AUTCD3    0.003        0.053      0.011     0.046      0.047   0.962

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  90.233          9    10.026       9.142    0.000
Residual    383.853         350  1.097

89

Example: Confirmatory Test of Squared Difference for Autonomy
• The test of the higher-order terms associated with the squared difference for autonomy is:

F = [(.190 – .169) / (354 – 350)] / [(1 – .190) / 350] = 2.27, p > .05

• The higher-order terms are not significant, so the fourth condition is satisfied.

90

Exploratory Approach
• When no a priori hypothesis is available, the alternative procedure can be applied using the exploratory approach (see the sketch after this list).
  – The analysis begins by using the linear terms X and Y as predictors. If the R2 is not significant, the procedure stops, with the conclusion that X and Y are unrelated to Z.
  – If the R2 from the linear equation is significant, then the quadratic terms X2, XY, and Y2 are added as a set, and the increment in R2 is tested. If the increment in R2 is not significant, the linear equation is retained.
  – If the increment in R2 from the quadratic terms is significant, the four cubic terms X3, X2Y, XY2, and Y3 are added, and the increment in R2 is tested. If the increment is not significant, the quadratic equation is retained.
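A minimal Python sketch of this stepwise procedure on simulated data (variable names and data are illustrative; each step uses the R2-increment F-test given earlier):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 360
x, y = rng.normal(size=(2, n))
z = 0.4 * x - 0.3 * y + 0.3 * x * y + rng.normal(size=n)

def fit_r2(A, z):
    fitted = A @ np.linalg.lstsq(A, z, rcond=None)[0]
    return 1 - ((z - fitted) ** 2).sum() / ((z - z.mean()) ** 2).sum()

sets = [("linear", [x, y]),
        ("quadratic", [x**2, x*y, y**2]),
        ("cubic", [x**3, x**2*y, x*y**2, y**3])]
cols, r2_prev, df_prev = [np.ones(n)], 0.0, n - 1
for name, new_terms in sets:
    cols += new_terms
    A = np.column_stack(cols)
    r2, df = fit_r2(A, z), n - A.shape[1]
    f = ((r2 - r2_prev) / (df_prev - df)) / ((1 - r2) / df)
    p = stats.f.sf(f, df_prev - df, df)
    print(name, round(r2, 3), round(f, 2), round(p, 4))
    if p >= .05:
        break  # increment not significant: retain the previous equation
    r2_prev, df_prev = r2, df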

91

Exploratory Approach
  – The foregoing procedure continues, adding higher-order terms in sets and stopping when the increment in R2 is not significant.
  – The tests of higher-order terms involved in the exploratory procedure are susceptible to outliers and influential cases, and therefore regression diagnostics should be applied.
  – Like any exploratory analysis, the exploratory procedure described here can produce results that do not generalize beyond the sample in hand. Therefore, the obtained results should be considered tentative, pending cross-validation.
  – "It is folly to construct elaborate post-hoc interpretations of complex surfaces that are not both generalizable and conceptually meaningful" (Edwards, 1994, p. 74).

92

Example: Exploratory Analyses for Autonomy

• Linear equation:

Dep Var: SAT   N: 360   Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.122   Standard error of estimate: 1.077

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.835        0.077      0.000     .          75.874  0.000
AUTCA     0.445        0.062      0.413     0.737      7.172   0.000
AUTCD     -0.301       0.071      -0.244    0.737      -4.235  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  60.133          2    30.067       25.930   0.000
Residual    413.953         357  1.160

93

Example: Exploratory Analyses for Autonomy

• Quadratic equation:

Dep Var: SAT   N: 360   Multiple R: 0.411   Squared multiple R: 0.169
Adjusted squared multiple R: 0.157   Standard error of estimate: 1.055

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.825        0.083      0.000     .          70.161  0.000
AUTCA     0.197        0.100      0.182     0.273      1.966   0.050
AUTCD     -0.293       0.106      -0.238    0.315      -2.754  0.006
AUTCA2    -0.056       0.047      -0.086    0.444      -1.177  0.240
AUTCAD    0.276        0.080      0.396     0.178      3.453   0.001
AUTCD2    -0.035       0.063      -0.054    0.242      -0.553  0.581

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  79.951          5    15.990       14.362   0.000
Residual    394.135         354  1.113

94

Example: Exploratory Analyses for Autonomy
• The test of the quadratic terms for autonomy is:

F = [(.169 – .127) / (357 – 354)] / [(1 – .169) / 354] = 5.96, p < .05

• The quadratic terms are significant, so the cubic equation is estimated.

95

Example: Exploratory Analyses for Autonomy

• Cubic equation:

Dep Var: SAT   N: 360   Multiple R: 0.436   Squared multiple R: 0.190
Adjusted squared multiple R: 0.170   Standard error of estimate: 1.047

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.757        0.109      0.000     .          52.736  0.000
AUTCA     0.364        0.119      0.337     0.190      3.055   0.002
AUTCD     -0.312       0.120      -0.253    0.245      -2.609  0.009
AUTCA2    0.043        0.095      0.066     0.109      0.456   0.649
AUTCAD    0.356        0.175      0.511     0.037      2.033   0.043
AUTCD2    -0.075       0.126      -0.117    0.060      -0.594  0.553
AUTCA3    -0.104       0.037      -0.442    0.094      -2.817  0.005
AUTCA2D   0.052        0.066      0.167     0.052      0.794   0.428
AUTCAD2   -0.030       0.089      -0.098    0.028      -0.338  0.736
AUTCD3    0.003        0.053      0.011     0.046      0.047   0.962

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  90.233          9    10.026       9.142    0.000
Residual    383.853         350  1.097

96

Example: Exploratory Analyses for Autonomy
• The test of the cubic terms for autonomy is:

F = [(.190 – .169) / (354 – 350)] / [(1 – .190) / 350] = 2.27, p > .05

• The cubic terms are not significant, so the quadratic equation is retained.

97

The Matrix Approach to Testing Constraints
• The constraints imposed by difference scores and certain profile similarity indices (i.e., D1, |D|, D2) can be tested using statistical packages that permit linear constraints on regression coefficients (e.g., SAS, SPSS, SYSTAT). In SYSTAT, q constraints on k regression coefficients can be written as:

AB = D

where A is a q × k matrix of weights, B is a k × 1 column vector of regression coefficients, and D is a q × 1 vector of zeros. The values of A are chosen to express constraints as weighted linear combinations of regression coefficients set equal to zero.

98

Testing Constraints Imposed by an Algebraic Difference
• Recall that, in a linear regression equation, the constraint imposed by an algebraic difference is b1 = –b2, or b1 + b2 = 0. The corresponding A and B matrices are:

A = [0 1 1], B = [b0 b1 b2]′

so that AB = [b1 + b2] = [0].

99

Example: Testing the Algebraic Difference Constraints for Autonomy

• The following SYSTAT commands compute an algebraic difference, estimate constrained and unconstrained equations, and test the constraint:

MGLH
MOD SAT=CONSTANT+AUTALD
EST
MOD SAT=CONSTANT+AUTCA+AUTCD
EST
HYP
AMA [0 1 1]
TEST

100

Example: Testing the Algebraic Difference Constraints for Autonomy

• Results from the constrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.339   Squared multiple R: 0.115
Adjusted squared multiple R: 0.113   Standard error of estimate: 1.082

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.937        0.061      0.0       .          97.007  0.000
AUTALD    0.393        0.058      0.339     1.000      6.825   0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  54.589          1    54.589       46.586   0.000
Residual    419.498         358  1.172

101

Example: Testing the Algebraic Difference Constraints for Autonomy
• Results from the unconstrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.356   Squared multiple R: 0.127
Adjusted squared multiple R: 0.122   Standard error of estimate: 1.077

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.835        0.077      0.0       .          75.874  0.000
AUTCA     0.445        0.062      0.413     0.737      7.172   0.000
AUTCD     -0.301       0.071      -0.244    0.737      -4.235  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  60.133          2    30.067       25.930   0.000
Residual    413.953         357  1.160

102

Example: Testing the Algebraic Difference Constraints for Autonomy

• Test of constraints:

Hypothesis.

A Matrix
     1    2      3
     0.0  1.000  1.000

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  5.545    1    5.545  4.782  0.029
Error       413.953  357  1.160

103

Testing Constraints Imposed by an Absolute Difference
• In a piecewise linear regression equation, the constraints imposed by an absolute difference are b1 + b2 = 0, b4 + b5 = 0, b3 = 0, and 2b1 + b4 = 0. The corresponding A and B matrices are:

A = [0 1 1 0 0 0
     0 0 0 0 1 1
     0 0 0 1 0 0
     0 2 0 0 1 0], B = [b0 b1 b2 b3 b4 b5]′

so that AB = [b1 + b2, b4 + b5, b3, 2b1 + b4]′ = 0.

104

Example: Testing the Absolute Difference Constraints for Autonomy

• The following SYSTAT commands compute an absolute difference, estimate constrained and unconstrained equations, and test the constraints:

MOD SAT=CONSTANT+AUTABD
EST
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTW+AUTCAW+AUTCDW
EST
HYP
AMA [0 1 1 0 0 0;,
     0 0 0 0 1 1;,
     0 0 0 1 0 0;,
     0 2 0 0 1 0]
TEST

105

Example: Testing the Absolute Difference Constraints for Autonomy

• Results from the constrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.323   Squared multiple R: 0.105
Adjusted squared multiple R: 0.102   Standard error of estimate: 1.089

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.212        0.087      0.0       .          71.122  0.000
AUTABD    -0.531       0.082      -0.323    1.000      -6.464  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  49.555          1    49.555       41.788   0.000
Residual    424.532         358  1.186

106

Example: Testing the Absolute Difference Constraints for Autonomy
• Results from the unconstrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.399   Squared multiple R: 0.159
Adjusted squared multiple R: 0.147   Standard error of estimate: 1.061

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  6.233        0.152      0.000     .          41.136  0.000
AUTCA     -0.150       0.184      -0.139    0.082      -0.818  0.414
AUTCD     0.183        0.188      0.148     0.102      0.970   0.333
AUTW      -0.349       0.201      -0.148    0.329      -1.737  0.083
AUTCAW    0.752        0.209      0.490     0.129      3.605   0.000
AUTCDW    -0.554       0.219      -0.406    0.093      -2.537  0.012

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  75.381          5    15.076       13.386   0.000
Residual    398.705         354  1.126

107

Example: Testing the Absolute Difference Constraints for Autonomy

• Test of constraints:

Hypothesis.

A Matrix
     1    2      3      4      5      6
1    0.0  1.000  1.000  0.0    0.0    0.0
2    0.0  0.0    0.0    0.0    1.000  1.000
3    0.0  0.0    0.0    1.000  0.0    0.0
4    0.0  2.000  0.0    0.0    1.000  0.0

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  22.735   4    5.684  5.008  0.001
Error       401.796  354  1.135

108

Testing Constraints Imposed by a Squared Difference
• In a quadratic regression equation, the constraints imposed by a squared difference are b1 = 0, b2 = 0, b3 – b5 = 0, and b3 + b4 + b5 = 0. The corresponding A and B matrices are:

A = [0 1 0 0 0  0
     0 0 1 0 0  0
     0 0 0 1 0 -1
     0 0 0 2 1  0], B = [b0 b1 b2 b3 b4 b5]′

so that AB = [b1, b2, b3 – b5, 2b3 + b4]′ = 0 (given b3 = b5 from the third row, the fourth row is equivalent to b3 + b4 + b5 = 0).

109

Example: Testing the Squared Difference Constraints for Autonomy

• The following SYSTAT commands compute a squared difference, estimate constrained and unconstrained equations, and test the constraints:

MOD SAT=CONSTANT+AUTSQD
EST
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTCA2+AUTCAD+AUTCD2
EST
HYP
AMA [0 1 0 0 0 0;,
     0 0 1 0 0 0;,
     0 0 0 1 0 -1;,
     0 0 0 2 1 0]
TEST

110

Example: Testing the Squared Difference Constraints for Autonomy

• Results from the constrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.310   Squared multiple R: 0.096
Adjusted squared multiple R: 0.093   Standard error of estimate: 1.094

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.993        0.067      0.0       .          89.830  0.000
AUTSQD    -0.183       0.030      -0.310    1.000      -6.162  0.000

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  45.463          1    45.463       37.972   0.000
Residual    428.623         358  1.197

111

Example: Testing the Squared Difference Constraints for Autonomy

• Results from the unconstrained equation:

Dep Var: SAT   N: 360   Multiple R: 0.411   Squared multiple R: 0.169
Adjusted squared multiple R: 0.157   Standard error of estimate: 1.055

Effect    Coefficient  Std Error  Std Coef  Tolerance  t       P(2 Tail)
CONSTANT  5.825        0.083      0.0       .          70.161  0.000
AUTCA     0.197        0.100      0.182     0.273      1.966   0.050
AUTCD     -0.293       0.106      -0.238    0.315      -2.754  0.006
AUTCA2    -0.056       0.047      -0.086    0.444      -1.177  0.240
AUTCAD    0.276        0.080      0.396     0.178      3.453   0.001
AUTCD2    -0.035       0.063      -0.054    0.242      -0.553  0.581

Analysis of Variance
Source      Sum-of-Squares  df   Mean-Square  F-ratio  P
Regression  79.951          5    15.990       14.362   0.000
Residual    394.135         354  1.113

112

Example: Testing the Squared Difference Constraints for Autonomy

• Test of constraints:

Hypothesis.

A Matrix
     1    2      3      4      5      6
1    0.0  1.000  0.0    0.0    0.0    0.0
2    0.0  0.0    1.000  0.0    0.0    0.0
3    0.0  0.0    0.0    1.000  0.0    -1.000
4    0.0  0.0    0.0    2.000  1.000  0.0

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  34.489   4    8.622  7.744  0.000
Error       394.135  354  1.113

113

Analyzing Quadratic Regression Equations Using Response Surface Methodology
• Response surface methodology can be used to analyze features of surfaces corresponding to quadratic regression equations. These analyses are useful for two reasons:
  – Constraints imposed by difference scores are usually rejected, which makes it necessary to interpret unconstrained equations.
  – Many conceptually meaningful hypotheses cannot be expressed using difference scores.

114

Key Features of Response Surfaces: Stationary Point
• The stationary point is the point at which the slope of the surface relating X and Y to Z is zero in all directions.
  – For convex (i.e., bowl-shaped) surfaces, the stationary point is the overall minimum of the surface with respect to the Z axis.
  – For concave (i.e., dome-shaped) surfaces, the stationary point is the overall maximum of the surface with respect to the Z axis.
  – For saddle-shaped surfaces, the stationary point is where the surface is flat with respect to the Z axis.

115

Key Features of Response Surfaces: Stationary Point
• The coordinates of the stationary point can be computed using the following formulas:

X0 = (b2b4 – 2b1b5) / (4b3b5 – b42)

Y0 = (b1b4 – 2b2b3) / (4b3b5 – b42)

• X0 and Y0 are the coordinates of the stationary point in the X,Y plane.

116

Example: Stationary Point for Autonomy
• Applying these formulas to the equation for autonomy yields:

X0 = [(–0.293)(0.276) – 2(0.197)(–0.035)] / [4(–0.056)(–0.035) – (0.276)2] = 0.982

Y0 = [(0.197)(0.276) – 2(–0.293)(–0.056)] / [4(–0.056)(–0.035) – (0.276)2] = –0.315
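A minimal Python sketch of the stationary point computation, using the coefficients from the unconstrained quadratic equation for autonomy:

b1, b2, b3, b4, b5 = 0.197, -0.293, -0.056, 0.276, -0.035

denom = 4 * b3 * b5 - b4 ** 2
x0 = (b2 * b4 - 2 * b1 * b5) / denom
y0 = (b1 * b4 - 2 * b2 * b3) / denom
print(x0, y0)  # ~0.982, ~-0.315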

117

Example: Stationary Point for Autonomy

[Figure: response surface with the stationary point marked.]

118

Key Features of Response Surfaces: Principal Axes
• The principal axes describe the orientation of the surface with respect to the X,Y plane. The axes are perpendicular and intersect at the stationary point.
  – For convex surfaces, the upward curvature is greatest along the first principal axis and least along the second principal axis.
  – For concave surfaces, the downward curvature is greatest along the second principal axis and least along the first principal axis.
  – For saddle-shaped surfaces, upward curvature is greatest along the first principal axis, and downward curvature is greatest along the second principal axis.

119

Key Features of Response Surfaces: First Principal Axis
• An equation for the first principal axis is:

Y = p10 + p11X

• The formula for the slope of the first principal axis (i.e., p11) is:

p11 = [b5 – b3 + √((b3 – b5)2 + b42)] / b4

• Using X0, Y0, and p11, the intercept of the first principal axis (i.e., p10) can be calculated as follows:

p10 = Y0 – p11X0

120

Example: First Principal Axis for Autonomy
• Applying these formulas to the equation for autonomy yields:

p11 = [–0.035 – (–0.056) + √((–0.056 – (–0.035))2 + (0.276)2)] / 0.276 = 1.079

p10 = –0.315 – (1.079)(0.982) = –1.375

121

Example: First Principal Axis for Autonomy

[Figure: response surface with the first principal axis marked.]

122

Key Features of Response Surfaces: Second Principal Axis
• An equation for the second principal axis is:

Y = p20 + p21X

• The formula for the slope of the second principal axis (i.e., p21) is:

p21 = [b5 – b3 – √((b3 – b5)2 + b42)] / b4

• X0, Y0, and p21 can be used to obtain the intercept of the second principal axis (i.e., p20) as follows:

p20 = Y0 – p21X0

123

Example: Second Principal Axis for Autonomy
• Applying these formulas to the equation for autonomy yields:

p21 = [–0.035 – (–0.056) – √((–0.056 – (–0.035))2 + (0.276)2)] / 0.276 = –0.927

p20 = –0.315 – (–0.927)(0.982) = 0.594
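A minimal Python sketch of both principal axes, using the autonomy coefficients and the stationary point computed above:

import math

b3, b4, b5 = -0.056, 0.276, -0.035
x0, y0 = 0.982, -0.315

root = math.sqrt((b3 - b5) ** 2 + b4 ** 2)
p11 = (b5 - b3 + root) / b4  # slope, first principal axis (~1.079)
p21 = (b5 - b3 - root) / b4  # slope, second principal axis (~-0.927)
p10 = y0 - p11 * x0          # intercept, first axis (~-1.375)
p20 = y0 - p21 * x0          # intercept, second axis (~0.594)
print(p11, p10, p21, p20)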

124

Example: Second Principal Axis for Autonomy

[Figure: response surface with the second principal axis marked.]

125

Key Features of Response Surfaces: Shape Along the Y = X Line
• The shape of the surface along a line in the X,Y plane can be estimated by substituting the expression for the line into the quadratic regression equation.
• To estimate the slope along the Y = X line, X is substituted for Y in the quadratic regression equation, which yields:

Z = b0 + b1X + b2X + b3X2 + b4X2 + b5X2 + e
  = b0 + (b1 + b2)X + (b3 + b4 + b5)X2 + e

• The term (b3 + b4 + b5) represents the curvature of the surface along the Y = X line, and (b1 + b2) is the slope of the surface at the point X = 0.

126

Example: Shape Along Y = X Line for Autonomy
• For autonomy, the shape of the surface along the Y = X line is:

Z = 5.825 + [0.197 + (–0.293)]X + [–0.056 + 0.276 + (–0.035)]X2 + e

• Simplifying this expression yields:

Z = 5.825 – 0.096X + 0.185X2 + e

• The surface is curved upward along the Y = X line and is negatively sloped at the point X = 0 (the curvature is significant at p < .05).

127

Example: Shape Along Y = X Line for Autonomy

[Figure: surface contours showing the shape along the Y = X line.]

128

Key Features of Response Surfaces: Shape Along the Y = –X Line
• To estimate the slope along the Y = –X line, –X is substituted for Y in the quadratic regression equation, which yields:

Z = b0 + b1X – b2X + b3X2 – b4X2 + b5X2 + e
  = b0 + (b1 – b2)X + (b3 – b4 + b5)X2 + e

• The term (b3 – b4 + b5) represents the curvature of the surface along the Y = –X line, and (b1 – b2) is the slope of the surface at the point X = 0.

129

Example: Shape Along Y = –X Line for Autonomy
• For autonomy, the shape of the surface along the Y = –X line is:

Z = 5.825 + [0.197 – (–0.293)]X + [–0.056 – 0.276 + (–0.035)]X2 + e

• Simplifying this expression yields:

Z = 5.825 + 0.490X – 0.367X2 + e

• The surface is curved downward along the Y = –X line and is positively sloped at the point X = 0 (both are significant at p < .05).

130

Example: Shape Along Y = –X Line for Autonomy

[Figure: surface contours showing the shape along the Y = –X line.]

131

Key Features of Response Surfaces: Shape Along First Principal Axis
• To estimate the slope along the first principal axis, p10 + p11X is substituted for Y:

Z = b0 + b1X + b2(p10 + p11X) + b3X2 + b4X(p10 + p11X) + b5(p10 + p11X)2 + e
  = (b0 + b2p10 + b5p102) + (b1 + b2p11 + b4p10 + 2b5p10p11)X + (b3 + b4p11 + b5p112)X2 + e

• The composite terms preceding X2 and X are the curvature of the surface along the first principal axis and the slope of the surface at the point X = 0.

132

Example: Shape Along First Principal Axis for Autonomy
• For autonomy, the shape of the surface along the first principal axis is:

Z = [5.825 + (–0.293)(–1.375) + (–0.035)(–1.375)2]
  + [0.197 + (–0.293)(1.079) + (0.276)(–1.375) + 2(–0.035)(–1.375)(1.079)]X
  + [–0.056 + (0.276)(1.079) + (–0.035)(1.079)2]X2 + e
  = 6.162 – 0.395X + 0.201X2 + e

• The surface is curved upward along the first principal axis and is negatively sloped at the point X = 0 (both are significant at p < .05).

133

Example: Shape Along First Principal Axis for Autonomy

[Figure: surface contours showing the shape along the first principal axis.]

134

Key Features of Response Surfaces: Shape Along Second Principal Axis
• To estimate the slope along the second principal axis, p20 + p21X is substituted for Y:

Z = b0 + b1X + b2(p20 + p21X) + b3X2 + b4X(p20 + p21X) + b5(p20 + p21X)2 + e
  = (b0 + b2p20 + b5p202) + (b1 + b2p21 + b4p20 + 2b5p20p21)X + (b3 + b4p21 + b5p212)X2 + e

• The composite terms preceding X2 and X are the curvature of the surface along the second principal axis and the slope of the surface at the point X = 0.

135

Example: Shape Along Second Principal Axis for Autonomy
• For autonomy, the shape of the surface along the second principal axis is:

Z = [5.825 + (–0.293)(0.594) + (–0.035)(0.594)2]
  + [0.197 + (–0.293)(–0.927) + (0.276)(0.594) + 2(–0.035)(0.594)(–0.927)]X
  + [–0.056 + (0.276)(–0.927) + (–0.035)(–0.927)2]X2 + e
  = 5.639 + 0.671X – 0.342X2 + e

• The surface is curved downward along the second principal axis and is positively sloped at the point X = 0 (both are significant at p < .05).

136

Example: Shape Along Second Principal Axis for Autonomy

[Figure: surface contours showing the shape along the second principal axis.]

137

Key Features of Response Surfaces: Tests of Significance
• The formulas for shapes along predetermined lines such as Y = X and Y = –X can be tested using procedures for testing weighted linear combinations of regression coefficients.
• For example, a t-test for b1 + b2 is obtained by dividing b1 + b2 by its standard error, or the square root of the variance of b1 + b2:

S(b1 + b2) = √[V(b1) + V(b2) + 2C(b1,b2)]

• The variances of b1 and b2 are the squares of their standard errors, and the covariance of b1 and b2 is their correlation times the product of their standard errors.
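A minimal Python sketch of this t-test on simulated data (names and data are illustrative): the weight vector w picks out the combination b1 + b2, and its standard error comes from the coefficient covariance matrix.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 360
x, y = rng.normal(size=(2, n))
z = 0.4 * x - 0.3 * y + rng.normal(size=n)
X = np.column_stack([np.ones(n), x, y])

b = np.linalg.lstsq(X, z, rcond=None)[0]
df = n - X.shape[1]
sigma2 = ((z - X @ b) ** 2).sum() / df
cov_b = sigma2 * np.linalg.inv(X.T @ X)  # V(b) on the diagonal, C(bi, bj) off it

w = np.array([0.0, 1.0, 1.0])            # tests b1 + b2
t = (w @ b) / np.sqrt(w @ cov_b @ w)     # S(w'b) = sqrt(w' Cov(b) w)
print(t, 2 * stats.t.sf(abs(t), df))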

138

Key Features of Response Surfaces: Tests of Significance
• Weighted linear combinations of regression coefficients can also be tested with the matrix approach used to test constraints.
• Another approach is to test the reduction in R2 produced by the constraint represented by the weighted linear combination of coefficients.
• For instance, to jointly test (b1 + b2) and (b3 + b4 + b5), we set both quantities equal to zero and impose the resulting constraints.

139

Key Features of Response Surfaces: Tests of Significance
• The expression b1 + b2 = 0 implies b2 = –b1. Likewise, the expression b3 + b4 + b5 = 0 implies b5 = –b3 – b4. Imposing these constraints on the quadratic regression equation yields:

Z = b0 + b1X – b1Y + b3X2 + b4XY + (–b3 – b4)Y2 + e

• The expression simplifies to:

Z = b0 + b1(X – Y) + b3(X2 – Y2) + b4(XY – Y2) + e

• The reduction in R2 from this equation relative to the R2 from the quadratic equation is a joint test of b1 + b2 = 0 and b3 + b4 + b5 = 0.

140

Key Features of Response Surfaces: Tests of Significance
• X0, Y0, p10, p11, p20, p21, and slopes along the principal axes are nonlinear combinations of regression coefficients. For these quantities, significance tests can be conducted using the bootstrap, as follows:
  – A large number (e.g., 10,000) of samples of size N are randomly drawn with replacement.
  – Each sample is used to estimate the quadratic regression equation.
  – The coefficients from each sample are used to compute X0, Y0, p10, p11, p20, and p21.
  – The distributions of X0, Y0, p10, p11, p20, and p21 are used to construct confidence intervals.
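A minimal Python sketch of this bootstrap on simulated data (illustrative, not the MBA sample), shown for X0; the same loop extends to Y0 and the principal-axis slopes and intercepts:

import numpy as np

rng = np.random.default_rng(3)
n = 360
x, y = rng.normal(size=(2, n))
z = 0.2 * x - 0.3 * y + 0.3 * x * y + rng.normal(size=n)

def x0_from_fit(x, y, z):
    X = np.column_stack([np.ones(len(z)), x, y, x**2, x*y, y**2])
    b = np.linalg.lstsq(X, z, rcond=None)[0]
    return (b[2] * b[4] - 2 * b[1] * b[5]) / (4 * b[3] * b[5] - b[4] ** 2)

draws = []
for _ in range(10_000):
    idx = rng.integers(0, n, size=n)  # resample rows with replacement
    draws.append(x0_from_fit(x[idx], y[idx], z[idx]))

print(np.percentile(draws, [2.5, 97.5]))  # percentile 95% CI for X0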

141

Example: Testing Response Surface Features for Autonomy

• A joint test of (b1 + b2) and (b3 + b4 + b5), which represent the slope at the point X = 0 and the curvature along the Y = X line, is yielded by the following commands:

MGLH
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTCA2+AUTCAD+AUTCD2
EST
HYP
AMA [0 1 1 0 0 0;,
     0 0 0 1 1 1]
TEST

142

Example: Testing Response Surface Features for Autonomy
• For autonomy, this test yields the following result:

Hypothesis.

A Matrix
     1    2      3      4      5      6
1    0.0  1.000  1.000  0.0    0.0    0.0
2    0.0  0.0    0.0    1.000  1.000  1.000

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  16.878   2    8.439  7.580  0.001
Error       394.135  354  1.113

143

Example: Testing Response Surface Features for Autonomy

• Separate tests of (b1 + b2) and (b3 + b4 + b5) are yielded by the following commands:

MGLH
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTCA2+AUTCAD+AUTCD2
EST
HYP
AMA [0 1 1 0 0 0]
TEST
HYP
AMA [0 0 0 1 1 1]
TEST

144

Example: Testing Response Surface Features for Autonomy
• For autonomy, the results are:

A Matrix
     1    2      3      4      5      6
1    0.0  1.000  1.000  0.0    0.0    0.0

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  1.068    1    1.068  0.959  0.328
Error       394.135  354  1.113

A Matrix
     1    2      3      4      5      6
1    0.0  0.0    0.0    1.000  1.000  1.000

Test of Hypothesis
Source      SS       df   MS      F       P
Hypothesis  11.740   1    11.740  10.545  0.001
Error       394.135  354  1.113

145

Example: Testing Response Surface Features for Autonomy

• Likewise, a joint test of (b1 – b2) and (b3 – b4 + b5), which represent the slope at the point X = 0 and the curvature along the Y = –X line, is yielded by the following commands:

MGLH
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTCA2+AUTCAD+AUTCD2
EST
HYP
AMA [0 1 -1 0 0 0;,
     0 0 0 1 -1 1]
TEST

146

Example: Testing Response Surface Features for Autonomy
• For autonomy, this test yields the following result:

Hypothesis.

A Matrix
     1    2      3       4      5       6
1    0.0  1.000  -1.000  0.0    0.0     0.0
2    0.0  0.0    0.0     1.000  -1.000  1.000

Test of Hypothesis
Source      SS       df   MS      F       P
Hypothesis  39.512   2    19.756  17.744  0.000
Error       394.135  354  1.113

147

Example: Testing Response Surface Features for Autonomy

• Separate tests of (b1 – b2) and (b3 – b4 + b5) are yielded by the following commands:

MGLH
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTCA2+AUTCAD+AUTCD2
EST
HYP
AMA [0 1 -1 0 0 0]
TEST
HYP
AMA [0 0 0 1 -1 1]
TEST

148

Example: Testing Response Surface Features for Autonomy
• For autonomy, the results are:

A Matrix
     1    2      3       4      5      6
1    0.0  1.000  -1.000  0.0    0.0    0.0

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  8.105    1    8.105  7.279  0.007
Error       394.135  354  1.113

A Matrix
     1    2      3      4      5       6
1    0.0  0.0    0.0    1.000  -1.000  1.000

Test of Hypothesis
Source      SS       df   MS     F      P
Hypothesis  6.588    1    6.588  5.917  0.015
Error       394.135  354  1.113

• In SYSTAT, the bootstrap is implemented with the following commands:

MGLH
MOD SAT=CONSTANT+AUTCA+AUTCD+AUTCA2+AUTCAD+AUTCD2
SAVE AUTBOOT.SYD/COEF
EST/SAMPLE=BOOT(10000)

• These commands will produce a large output file with the results of all 10,000 regressions and a system file containing 10,000 sets of coefficients.
• The coefficients are used to construct confidence intervals (Mooney & Duval, 1993; Stine, 1989).
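A minimal Python version of the same procedure, sketched under the assumptions of the earlier blocks (the hypothetical df and the surface_features() helper):

import numpy as np
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
draws = []
for _ in range(10_000):
    boot = df.sample(n=len(df), replace=True, random_state=rng)  # N cases, with replacement
    b = smf.ols("SAT ~ AUTCA + AUTCD + AUTCA2 + AUTCAD + AUTCD2",
                data=boot).fit().params
    draws.append(surface_features(b["AUTCA"], b["AUTCD"], b["AUTCA2"],
                                  b["AUTCAD"], b["AUTCD2"]))

# Percentile 95% confidence intervals for X0, Y0, p10, p11, p20, p21
ci_low, ci_high = np.percentile(np.asarray(draws), [2.5, 97.5], axis=0)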

• For autonomy, the 95% confidence intervals for X0, Y0, p10, p11, p20, and p21 are:

         Value      CIL      CIU
X0       0.982    0.199    5.142
Y0      –0.315   –3.480    0.239
p10     –1.375  –11.423   –0.359
p11      1.079    0.688    2.123
p20      0.594   –1.167    1.120
p21     –0.927   –1.449   –0.466

Interpretation of Results for Autonomy

• The surface was saddle-shaped.
• The slope of the first principal axis did not differ from 1, and the intercept of the first principal axis was negative, meaning that the axis ran parallel to the Y = X line but was shifted to the right.
• The slope and intercept of the second principal axis did not differ from –1 and 0, respectively. Thus, the axis did not differ from the Y = –X line.
• The location of the first principal axis combined with the slope along the second principal axis indicates that satisfaction increased as actual autonomy increased toward desired autonomy, continued to increase as actual autonomy exceeded desired autonomy, and began to decrease when actual autonomy exceeded desired autonomy by about one unit.
• Within the range of the data, satisfaction increased at an increasing rate as actual and desired autonomy both increased along the first principal axis.

Moderated Polynomial Regression

• In some cases, the effect represented by a quadratic regression equation is believed to be moderated by another variable.
• Incorporating the moderator variable V into a quadratic regression equation yields:

Z = b0 + b1X + b2Y + b3X2 + b4XY + b5Y2 + b6V + b7XV + b8YV + b9X2V + b10XYV + b11Y2V + e

• Moderation is tested by assessing the increment in R2 yielded by the terms XV, YV, X2V, XYV, and Y2V; a sketch of this test follows.
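An illustrative sketch of the increment test, assuming a hypothetical DataFrame df with centered X and Y, moderator V, and outcome Z:

import statsmodels.formula.api as smf

# Build the higher-order and moderator product terms explicitly
df["X2"], df["XY"], df["Y2"] = df.X**2, df.X*df.Y, df.Y**2
for term in ["X", "Y", "X2", "XY", "Y2"]:
    df[term + "V"] = df[term] * df.V

base = smf.ols("Z ~ X + Y + X2 + XY + Y2 + V", data=df).fit()
moderated = smf.ols("Z ~ X + Y + X2 + XY + Y2 + V"
                    " + XV + YV + X2V + XYV + Y2V", data=df).fit()

f_val, p_val, df_diff = moderated.compare_f_test(base)  # 5 numerator df
print(f"delta R2 = {moderated.rsquared - base.rsquared:.3f}, "
      f"F({df_diff:.0f}, {moderated.df_resid:.0f}) = {f_val:.2f}, p = {p_val:.3f}")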

• The moderated quadratic regression equation can be rewritten to show simple surfaces at selected levels of the moderator variable, as follows:

Z = (b0 + b6V) + (b1 + b7V)X + (b2 + b8V)Y + (b3 + b9V)X2 + (b4 + b10V)XY + (b5 + b11V)Y2 + e

• The compound coefficients on the terms X, Y, X2, XY, and Y2 can be tested using procedures for testing weighted linear combinations of regression coefficients, as sketched below.
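One way to test such a compound coefficient under the same assumed setup; here the XY coefficient, b4 + b10V, is evaluated at a "high" level of V:

# Sketch: test a weighted linear combination of coefficients with t_test.
v_high = df.V.mean() + df.V.std()   # one common choice for a "high" moderator level
print(moderated.t_test(f"XY + {v_high} * XYV = 0"))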

Example: Moderated Polynomial Regression for Autonomy

• Quadratic equation with importance as a moderator:

Dep Var: SAT   N: 357   Multiple R: 0.431   Squared multiple R: 0.186
Adjusted squared multiple R: 0.160   Standard error of estimate: 1.057

Effect     Coefficient  Std Error  Std Coef  Tolerance       t  P(2 Tail)
CONSTANT         5.514      0.481     0.000          .  11.455      0.000
AUTCA            0.409      0.487     0.379      0.012   0.841      0.401
AUTCD           -0.740      0.518    -0.595      0.014  -1.429      0.154
AUTCA2           0.181      0.292     0.278      0.012   0.620      0.536
AUTCAD           0.595      0.489     0.855      0.005   1.218      0.224
AUTCD2          -0.225      0.306    -0.353      0.010  -0.736      0.462
AUTI             0.062      0.101     0.051      0.343   0.614      0.540
AUTCAI          -0.050      0.103    -0.242      0.009  -0.479      0.632
AUTCDI           0.103      0.115     0.454      0.009   0.890      0.374
AUTCA2I         -0.046      0.054    -0.408      0.011  -0.862      0.389
AUTCADI         -0.047      0.088    -0.392      0.004  -0.533      0.594
AUTCD2I          0.021      0.059     0.200      0.008   0.360      0.719

Analysis of Variance
Source      Sum-of-Squares   df  Mean-Square  F-ratio      P
Regression          88.023   11        8.002    7.158  0.000
Residual           385.670  345        1.118

• The test of the increment in R2 yielded by the five moderator terms is:

F = [(.186 – .169) / (350 – 345)] / [(1 – .186) / 345] = 1.44, p > .05

where .186 and .169 are the R2 values from the equations with and without the moderator terms, and 345 and 350 are their respective residual degrees of freedom.

• The increment in R2 is not significant, so moderation is not supported.

• Simple quadratic equations at low, medium, and high levels of importance:

              X        Y       X2      XY       Y2
Low        0.21  -0.33**    -0.00   0.41*    -0.14
Medium     0.16  -0.23      -0.05   0.36**   -0.12
High       0.11  -0.13      -0.09   0.32**   -0.10

• Simple surface for low importance: (surface plot)

• Simple surface for medium importance: (surface plot)

• Simple surface for high importance: (surface plot)

Mediated Polynomial Regression

• On occasion, the effect represented by a quadratic regression equation is believed to be mediated by (i.e., transmitted through) another variable.
• Mediation can be analyzed using two regression equations, one that regresses the mediator on the five quadratic terms, and another that regresses the outcome on the five quadratic terms and the mediator:

M = a0 + a1X + a2Y + a3X2 + a4XY + a5Y2 + eM
Z = b0 + b1M + b2X + b3Y + b4X2 + b5XY + b6Y2 + eZ

• The mediated effect represented by these two equations can be derived by substituting the equation for M into the equation for Z to obtain a reduced-form equation:

Z = b0 + b1(a0 + a1X + a2Y + a3X2 + a4XY + a5Y2 + eM) + b2X + b3Y + b4X2 + b5XY + b6Y2 + eZ

• Distributing b1 yields:

Z = b0 + a0b1 + a1b1X + a2b1Y + a3b1X2 + a4b1XY + a5b1Y2 + b1eM + b2X + b3Y + b4X2 + b5XY + b6Y2 + eZ

• Collecting like terms yields:

Z = (b0 + a0b1) + (b2 + a1b1)X + (b3 + a2b1)Y + (b4 + a3b1)X2 + (b5 + a4b1)XY + (b6 + a5b1)Y2 + (eZ + b1eM)

• The compound coefficients on X, Y, X2, XY, and Y2 capture the portion of the quadratic effect mediated by M as the products a1b1, a2b1, a3b1, a4b1, and a5b1.
• The portion of the quadratic effect that bypasses M is captured by b2, b3, b4, b5, and b6.
• These coefficients can be analyzed separately and jointly to examine the mediated quadratic effect; a sketch follows.
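A brief illustrative sketch of the two equations and the resulting effect decomposition, with hypothetical columns M for the mediator and Z for the outcome and the quadratic terms built as in the earlier sketches:

import statsmodels.formula.api as smf

m_eq = smf.ols("M ~ X + Y + X2 + XY + Y2", data=df).fit()      # first stage
z_eq = smf.ols("Z ~ M + X + Y + X2 + XY + Y2", data=df).fit()  # second stage + direct

b1 = z_eq.params["M"]                        # second-stage effect of the mediator
for term in ["X", "Y", "X2", "XY", "Y2"]:
    indirect = m_eq.params[term] * b1        # portion transmitted through M
    direct = z_eq.params[term]               # portion bypassing M
    print(f"{term}: indirect = {indirect:.3f}, direct = {direct:.3f}, "
          f"total = {direct + indirect:.3f}")

Confidence intervals for the products aib1 would again come from bootstrapping both equations together, as in the earlier bootstrap sketch.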

Example: Mediated Polynomial Regression for Autonomy

• Quadratic equation with intent to take the focal job as the outcome variable:

Dep Var: INT   N: 360   Multiple R: 0.276   Squared multiple R: 0.076
Adjusted squared multiple R: 0.063   Standard error of estimate: 1.174

Effect     Coefficient  Std Error  Std Coef  Tolerance       t  P(2 Tail)
CONSTANT         5.851      0.092     0.000          .  63.319      0.000
AUTCA            0.161      0.111     0.142      0.273   1.449      0.148
AUTCD           -0.244      0.119    -0.187      0.315  -2.056      0.041
AUTCA2          -0.076      0.052    -0.110      0.444  -1.438      0.151
AUTCAD           0.197      0.089     0.267      0.178   2.211      0.028
AUTCD2           0.008      0.070     0.013      0.242   0.121      0.904

Analysis of Variance
Source      Sum-of-Squares   df  Mean-Square  F-ratio      P
Regression          40.397    5        8.079    5.858  0.000
Residual           488.231  354        1.379

• Quadratic equation with satisfaction as the mediator variable:

Dep Var: SAT   N: 360   Multiple R: 0.411   Squared multiple R: 0.169
Adjusted squared multiple R: 0.157   Standard error of estimate: 1.055

Effect     Coefficient  Std Error  Std Coef  Tolerance       t  P(2 Tail)
CONSTANT         5.825      0.083     0.000          .  70.161      0.000
AUTCA            0.197      0.100     0.182      0.273   1.966      0.050
AUTCD           -0.293      0.106    -0.238      0.315  -2.754      0.006
AUTCA2          -0.056      0.047    -0.086      0.444  -1.177      0.240
AUTCAD           0.276      0.080     0.396      0.178   3.453      0.001
AUTCD2          -0.035      0.063    -0.054      0.242  -0.553      0.581

Analysis of Variance
Source      Sum-of-Squares   df  Mean-Square  F-ratio      P
Regression          79.951    5       15.990   14.362  0.000
Residual           394.135  354        1.113

• Quadratic equation with intent to take the focal job as the outcome variable and satisfaction as the mediating variable:

Dep Var: INT   N: 360   Multiple R: 0.760   Squared multiple R: 0.578
Adjusted squared multiple R: 0.571   Standard error of estimate: 0.795

Effect     Coefficient  Std Error  Std Coef  Tolerance       t  P(2 Tail)
CONSTANT         1.074      0.242     0.000          .   4.445      0.000
SAT              0.820      0.040     0.777      0.831  20.480      0.000
AUTCA            0.000      0.076     0.000      0.270   0.001      0.999
AUTCD           -0.003      0.081    -0.002      0.308  -0.038      0.969
AUTCA2          -0.030      0.036    -0.044      0.443  -0.842      0.401
AUTCAD          -0.030      0.061    -0.040      0.173  -0.484      0.629
AUTCD2           0.037      0.047     0.055      0.242   0.780      0.436

Analysis of Variance
Source      Sum-of-Squares   df  Mean-Square  F-ratio      P
Regression         305.506    6       50.918   80.556  0.000
Residual           223.122  353        0.632

• The compound coefficients are:
  - b0 + a0b1 = 1.07 + 5.83 × 0.82 = 1.07 + 4.78 = 5.85
  - b2 + a1b1 = 0.00 + 0.20 × 0.82 = 0.00 + 0.16 = 0.16
  - b3 + a2b1 = –0.00 – 0.29 × 0.82 = –0.00 – 0.24 = –0.24
  - b4 + a3b1 = –0.03 – 0.06 × 0.82 = –0.03 – 0.05 = –0.08
  - b5 + a4b1 = –0.03 + 0.28 × 0.82 = –0.03 + 0.23 = 0.20
  - b6 + a5b1 = 0.04 – 0.04 × 0.82 = 0.04 – 0.03 = 0.01
• The individual coefficients can be tested using the reported standard errors, and the products of coefficients can be tested using the bootstrap.

• Tests of individual and compound coefficients:

            Direct    First    Second   Indirect   Total
            Effect1   Stage    Stage    Effect     Effect
Intercept    1.07**   5.83**   0.82**    4.78**    5.85**
X            0.00     0.20*    0.82**    0.16*     0.16
Y           -0.00    -0.29**   0.82**   -0.24**   -0.24*
X2          -0.03    -0.06     0.82**   -0.05     -0.08
XY          -0.03     0.28**   0.82**    0.23**    0.20
Y2           0.04    -0.04     0.82**   -0.03      0.01

1The direct effect of the five quadratic terms was not significant.

• Surface for unmediated effect: (surface plot)

• Surface for direct effect: (surface plot)

• Surface for first stage of indirect effect: (surface plot)

• Surface for indirect effect: (surface plot)

• Surface for total effect: (surface plot)

Difference Scores as Dependent Variables

• Many of the problems that occur when difference scores are used as independent variables also occur when they are used as dependent variables.
• Alternative procedures for difference scores as dependent variables are fundamentally different from those for difference scores as independent variables.
• We will briefly consider procedures for cases in which the dependent variable is an algebraic difference and both components are endogenous, meaning they are caused by the independent variables.

• An equation that uses an algebraic difference as a dependent variable is:

(Y1 – Y2) = b0 + b1X + e

• Y1 and Y2 may be recast as separate dependent variables in a multivariate regression analysis:

Y1 = b10 + b11X + e1
Y2 = b20 + b21X + e2

• The correspondence between these equations can be seen by subtracting the Y2 equation from the Y1 equation, which yields:

(Y1 – Y2) = (b10 – b20) + (b11 – b21)X + (e1 – e2)

• This subtraction shows the following:

b0 = b10 – b20
b1 = b11 – b21

• These expressions reveal a fundamental ambiguity, in that b0 and b1 indicate the differences between the intercepts and slopes, respectively, from the Y1 and Y2 equations, but they provide no information regarding the absolute magnitudes of these intercepts and slopes.

• This ambiguity is illustrated by the following examples, all of which yield the same value for b1.
• In this pattern, the effects of X on Y1 and Y2 are equal in magnitude but opposite in sign:

b11 = b1/2, b21 = –b1/2

• Here, X is positively related to Y1 and unrelated to Y2:

b11 = b1, b21 = 0

• Here, X is negatively related to Y2 and unrelated to Y1:

b11 = 0, b21 = –b1

• These examples show that b1 is essentially useless for determining the effects of X on Y1 and Y2.

• The alternative procedure uses Y1 and Y2 jointly as dependent variables in multivariate regression equations (a minimal sketch follows this list).
• The multivariate equations reveal the separate effects of X on Y1 and Y2 and can be used to test whether these effects correspond to hypotheses implied when (Y1 – Y2) is used as a dependent variable.
• The procedure provides multivariate tests of the effects of X on Y1 and Y2 and of differences between these effects.
• Multivariate piecewise regression equations can be used as an alternative when |Y1 – Y2| is used as a dependent variable.
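A minimal sketch of the separate-equations idea (hypothetical columns Y1, Y2, and X), which recovers both the difference in slopes and their absolute magnitudes:

import statsmodels.formula.api as smf

# Estimate the Y1 and Y2 equations separately instead of regressing
# (Y1 - Y2) on X; b1 from the difference-score equation equals b11 - b21.
eq1 = smf.ols("Y1 ~ X", data=df).fit()
eq2 = smf.ols("Y2 ~ X", data=df).fit()

b11, b21 = eq1.params["X"], eq2.params["X"]
print(f"b11 = {b11:.3f}, b21 = {b21:.3f}, b11 - b21 = {b11 - b21:.3f}")

A joint multivariate test of both slopes (and of their difference) would use multivariate regression routines; the point of the sketch is simply that the difference-score equation reveals only b11 – b21.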

Answers to Frequently Asked Questions

Q: Which higher-order terms should I use? Are squared and product terms sufficient, or should I also use cubed terms, the products of squared and first-order terms, etc.?

A: The higher-order terms to be included in the equation depend entirely on one's hypotheses regarding the joint relationships of X and Y with Z. In most cases, I have found that the three quadratic terms (i.e., X2, XY, and Y2) are sufficient to capture most theoretically meaningful effects. In exploratory analyses, I have found significant effects for cubic and quartic terms, but these rarely survive cross-validation and are often symptoms of a few outliers or influential cases in the data.

Q: How do I interpret the coefficients on X2, XY, and Y2? I understand what they each mean separately, but thinking about them all together is confusing.

A: The coefficients on X2, XY, and Y2 should be interpreted along with the coefficients on X and Y as a set, because these coefficients collectively describe the shape of the surface relating X and Y to Z. Trying to interpret any one of these coefficients in the absence of the others will often yield erroneous conclusions. Instead, surfaces indicated by quadratic regression equations should be treated as whole entities, and features of the surfaces can be tested using response surface methodology. A major motivation for applying response surface methodology was my frustration when trying to make sense of coefficients from quadratic equations. Response surface methodology makes the task much easier.

Q: Given that the coefficients on X and Y are scale dependent when X2, XY, and Y2 are in the equation, how can I meaningfully interpret these coefficients?

A: The coefficients on X and Y (i.e., b1 and b2) are indeed scale dependent. However, this simply reflects the fact that b1 and b2 indicate the slope of the surface where X and Y are zero (i.e., the origin of the X,Y plane). One could add or subtract arbitrary constants to X and Y and change the values of b1 and b2, but doing so may shift the origins of X and Y beyond the bounds of the data, where it doesn't make sense to estimate b1 and b2 in the first place. A more reasonable strategy is to scale X and Y such that their origins represent a meaningful point in the distribution of the data in the X,Y plane, such as a point midway between their means or the midpoint of their common scale, as in the brief example below.
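For example, with hypothetical 1-to-7 ratings, both components could be centered at the scale midpoint so that b1 and b2 describe the slope of the surface at that substantively meaningful point:

# Sketch: center X and Y at the midpoint of their common 1-7 response scale.
df["X"] = df["X_raw"] - 4.0   # hypothetical raw actual-amount ratings
df["Y"] = df["Y_raw"] - 4.0   # hypothetical raw desired-amount ratings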

Q: How large should my sample be?

A: The sample should be large enough to provide the statistical power needed to test constraints and combinations of regression coefficients required to test hypotheses. Power is important because showing support for constraints requires support for the null hypothesis (i.e., the R2 values for the constrained and unconstrained equations do not differ). A related concern is that the sample should provide adequate dispersion of cases in the X,Y plane. For example, if cases are skewed in the direction of X > Y or X < Y, it will be very difficult to detect changes in the slope of the surface along the Y = –X line, which are usually of interest in congruence research. Keep in mind that skewness on either side of the Y = –X line cannot be detected by examining the distributions of X and Y separately.

Q: I have seen measures that ask the respondent to directly compare the degree to which X deviates from Y. Doesn't this approach avoid the problems with difference scores?

A: Not really. Although it removes the need for the researcher to calculate the difference, it does not guarantee that the respondent will not implicitly or explicitly calculate the difference between X and Y when providing a response (many response scales for such items prompt the respondent to do just that). If this occurs, then items that solicit direct comparisons are subject to the same problems as difference scores, because these problems do not depend on who calculates the difference. Moreover, direct comparison items hopelessly confound X and Y (analogous to any "double-barreled" item) and force the researcher to take a two-dimensional view of the relationship of X and Y with Z, even when a three-dimensional view may be more informative.

Q: The unconstrained equations for profile similarity indices contain so many items. How do I interpret all those coefficients, and what do I do about degrees of freedom?

A: Testing the full set of constraints imposed by D1, |D|, and D2 does indeed require using items for all of the dimensions as predictors. However, the items constituting profiles can often be grouped into conceptually homogeneous subsets. Scales corresponding to these subsets can then be constructed, which can drastically reduce the effective number of dimensions to be analyzed. This not only makes interpretation easier, but also reduces sample size requirements. Moreover, higher-order terms for each dimension can be tested as sets, and those that are not significant may be dropped (for an illustration of this, see Edwards, 1993). Of course, models derived in this manner should be considered exploratory, pending cross-validation.

Q: By not using difference scores, aren't we ignoring "fit"?

A: Models using difference scores are simply special cases of general models containing the components of the difference. Hence, these general models subsume those that use difference scores. The general models also permit tests of the constraints imposed by difference scores, which remain unverified when difference scores are used. Moreover, fit hypotheses can usually be restated in terms of relationships involving the variables that constitute the fit construct. By stating hypotheses in these terms, one can verify that relationships for these variables conform to patterns depicted by fit hypotheses. Thus, the use of component variables, supplemented by higher-order terms and response surface analyses, permits tests of most fit hypotheses as well as hypotheses difference scores cannot depict. This approach lets the researcher gain much and lose little, if anything at all.

Q: How can I apply the quadratic approach to structural equations modeling?

A: Drawing from the literature on moderated structural equation modeling, I have developed procedures for specifying and estimating quadratic structural equation models and applying response surface methodology. These procedures require squares and products of the indicators of first-order latent variables, involve complex nonlinear constraints on parameters, and use estimation methods for nonnormal data. I hope to finish a manuscript describing this procedure in the near future.

Q: How do you generate those fancy graphs?

A: I have traditionally used SYSTAT, which is great for plotting three-dimensional surfaces and adding contour lines, principal axes, and so forth. Surfaces can also be plotted using Microsoft Excel, and I have developed a file that allows the user to enter coefficient estimates from a quadratic equation and the minimum and maximum values of X and Y to produce a surface. If you would like a copy of the file, just send me an email.

Q: Are there empirical examples of polynomial regression in the organizational behavior literature?

A: The use of polynomial regression has grown since its introduction. Examples published through 2000 are cited in the Edwards (2001) article on difference score myths, and more recent examples are cited in the meta-analysis conducted by Kristof-Brown et al. (2005).

Q: Your approach looks like a real pain. Can I just pretend it doesn't exist? Or, can I just cite your work to make it look like I'm doing what you recommend?

A: Some researchers tenaciously cling to difference scores. Old habits die hard. As a case in point, in a 1992 Psychological Bulletin article, Lee Cronbach lamented that researchers continue to use profile similarity indices he once advocated (Cronbach, 1955; Cronbach & Gleser, 1953) but subsequently disavowed (Cronbach, 1958). Researchers have also developed clever ways of citing articles that criticize difference scores without following the advice in the articles. Here are some of my favorites, quoted from studies that cite Edwards (1994):

• "Computing a correlation across dimensions for each individual to predict outcomes of fit or congruence represents a flawed measure of fit (Edwards, 1994). However, for our purposes here, correlations across individuals within a dimension provide an appropriate measure of the relationship between person and environment."
• "The reliabilities of the difference scores created to assess similarity were relatively high, so it seemed simpler and more understandable to keep the analysis as it was rather than to apply more complicated alternatives (e.g., Edwards, 1994)."
• "Unmet expectations were assessed by subtracting scores on each item for the early expectations from scores on each item from the current situation . . . Problems in measuring and analyzing discrepancy scores, and unmet expectations in particular, have been reported recently (Edwards, 1994) . . . these problems have not been entirely overcome here."

Key References

Bohrnstedt, G. W., & Goldberger, A. S. (1969). On the exact covariance of products of random variables. Journal of the American Statistical Association, 64, 1439-1442.

Bohrnstedt, G. W., & Marwell, G. (1978). The reliability of products of two random variables. In K. F. Schuessler (Ed.), Sociological methodology 1978 (pp. 254-273). San Francisco: Jossey-Bass.

Edwards, J. R. (1994). The study of congruence in organizational behavior research: Critique and a proposed alternative. Organizational Behavior and Human Decision Processes, 58, 51-100 (erratum, 58, 323-325).

Edwards, J. R., & Parry, M. E. (1993). On the use of polynomial regression equations as an alternative to difference scores in organizational research. Academy of Management Journal, 36, 1577-1613.

Edwards, J. R. (1995). Alternatives to difference scores as dependent variables in the study of congruence in organizational research. Organizational Behavior and Human Decision Processes, 64, 307-324.

Edwards, J. R. (2001). Ten difference score myths. Organizational Research Methods, 4, 264-286.

Edwards, J. R. (2002). Alternatives to difference scores: Polynomial regression analysis and response surface methodology. In F. Drasgow & N. W. Schmitt (Eds.), Advances in measurement and data analysis (pp. 350-400). San Francisco: Jossey-Bass.

Kristof-Brown, A. L., Zimmerman, R. D., & Johnson, E. C. (2005). Consequences of individuals' fit at work: A meta-analysis of person-job, person-organization, person-group, and person-supervisor fit. Personnel Psychology, 58, 281-342.

Mooney, C. Z., & Duval, R. D. (1993). Bootstrapping: A nonparametric approach to statistical inference. Newbury Park, CA: Sage.

Stine, R. (1989). An introduction to bootstrap methods. Sociological Methods & Research, 18, 243-291.
