Taguchi Method and Robust Design: Tutorial and
Guideline
CONTENTS
1. Introduction
2. Microsoft Excel: Graphing
3. Microsoft Excel: Regression
4. Microsoft Excel: Analysis of Variance
5. Robust Design: An Example
References
Appendix A

1. INTRODUCTION
The Taguchi method is also known as quality engineering. The objective of quality
engineering is to choose, from all possible designs, the one that ensures the highest
functional robustness of products at the lowest possible cost.
The Taguchi method involves a three-step approach: system design, parameter design,
and tolerance design.
System design is the process of applying basic scientific and engineering principles to
develop a functional design. Parameter design is the investigation conducted to identify
settings that minimize or reduce the performance variation of the product or process.
Tolerance design is a method for determining tolerances that minimize the sum of the
product's manufacturing and lifetime costs. If parameter design cannot achieve the
required performance variation, tolerance design can be used to reduce the variation by
tightening tolerances based on the quality loss function.
Robust design is the operation of choosing settings for product or process parameters to
reduce the variation of that product or process's response from its target. Because it
involves the determination of parameter settings, robust design is also called parameter
design.
In order to design a system so that its performance is insensitive to uncontrollable (noise)
variables, one needs to systematically investigate the relationship between appropriate
control factors and noise variables, typically through off-line experiments, and judiciously
choose the settings of the control factors to make the system robust to uncontrollable
noise variation. Thus, the implementation of the robust design method includes the
following operational steps:
1. State the problem and objective.
2. Identify responses, control factors, and sources of noise.
3. Plan an experiment to study the relationships between the responses and the control
and noise factors.
4. Run the experiment and collect the data. Analyze the data to determine the control
factor settings that predict improvement of the product or process design.
5. Run a small experiment to confirm whether the control factor settings determined in
step 4 actually improve the product or process design. If so, adopt the control factor
settings and consider another iteration for further improvement. If not, correct or modify
the assumptions and go back to step 2.
This lab deals with the relevant aspects of step 4, i.e., data analysis using Microsoft
Excel. Section 2 deals with graphing, Section 3 with regression, and Section 4 with
analysis of variance. Finally, a numerical example is given in Section 5.
2. MICROSOFT EXCEL: GRAPHING
Excel is a spreadsheet program. When you start Microsoft Excel, a blank worksheet
appears on screen. A worksheet is a grid of columns and rows. The intersection of any
column and row is called a cell. Each cell in a worksheet has a unique cell reference, the
designation formed by combining the column and row headings. For example, B6 refers
to the cell at the intersection of column B and row 6.
The cell pointer is a white cross-shaped pointer that appears over cells in the worksheet.
You use the cell pointer to select any cell in the worksheet; the selected cell is called the
active cell. At least one cell is always selected.
A range is a specified group of cells. A range can be a single cell, a column, a row, or any
combination of cells, columns, and rows. Range coordinates identify a range. The first
element in the range coordinates is the location of the upper left cell in the range; the
second element is the location of the lower-right cell. A colon (:) separates these two
elements. For example, the range B6:D8 includes the cells B6, B7, B8, C6, C7, C8, D6,
D7, and D8.

With Excel, you can create a chart based on data in a worksheet. The axes form the grid
on which the data are plotted. On a 2-D chart, the y-axis is the vertical axis (value axis)
and the x-axis is the horizontal axis (category axis). A 3-D chart adds a third (z) axis.
Example 1: Draw the x-y curve of the following data:
x = 8, 9, 10, 12, 15, 18, 20, 25, 30, 35;
y = 25, 26.5, 28, 33, 36, 36.5, 36, 32.5, 26, 21.
Step 1: Sequentially input the x values into A1 through A10 and the y values into B1
through B10.
Step 2: Select the range A1:B10.
Step 3: Click "Chart Wizard", then hold the left mouse button and drag on the screen
from the upper-left to the lower-right; this defines the region of the chart. Click "Next".
Step 4: Select “XY (Scatter)” or what you want and click “Next”.
Step 5: Select the format No. 2 or what you want and click “Next”.
Step 6: Input Chart title, title of category (x) axis, and title of value (y) axis. Click
“Finish”.
Step 7: Save the file as “Lab-1”.
The curve is shown in Figure 1.
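If Excel is not available, the same chart can be drawn in a few lines of Python; a minimal sketch using matplotlib (the output file name is our own choice):

    import matplotlib.pyplot as plt

    x = [8, 9, 10, 12, 15, 18, 20, 25, 30, 35]
    y = [25, 26.5, 28, 33, 36, 36.5, 36, 32.5, 26, 21]

    # XY (Scatter) chart with points joined by lines, as in Excel's format No. 2
    plt.plot(x, y, marker="o", linestyle="-", label="Series1")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title("x-y curve")
    plt.legend()
    plt.savefig("lab-1.png")   # or plt.show() to display the chart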

Figure 1: x-y curve (scatter plot of y versus x)
3. REGRESSION
Example 2: Fit the data in Example 1 to a function y = f(x).
Step 1: Select a fitting model. According to the shape of the curve shown in Figure 1, the
following model may be appropriate:

y = a + b·x + c·x² + e

where the parameters a, b, and c are constants to be determined by regression, and e is a
random deviation assumed to be normally distributed with mean 0 and standard
deviation σ.
Step 2: Linearize the above model by the transformations

x1 = x,  x2 = x²

Thus, the model can be rewritten as

y = a + b·x1 + c·x2 + e
Step 3: Generate the data. Open the file "Lab-1". We use column D as x1, column E as
x2, and column F as y. Select D1 and type "=A1"; select E1 and type "=A1^2"; select F1
and type "=B1". Select D1:F1, hold the left mouse button, and drag down to row 10. This
completes the input of the data.
Step 4: Regression. Click "Tools", then "Data Analysis", and select "Regression". Type
"F1:F10" into "Input Y Range" and "D1:E10" into "Input X Range". Press "Enter". The
result is shown in Table 1.
For convenience of understanding, we introduce the following definitions and notation:
Residual sum of squares: SSr = Σ(y − ŷ)², where y is the observed value and ŷ is the
predicted value.
Total sum of squares: SSt = Σ(y − ȳ)², where ȳ is the mean of the y observations in the
sample.
Coefficient of multiple determination: R² = 1 − SSr/SSt.
Adjusted R²: 1 − [(n − 1)/(n − k)]·SSr/SSt.
Random deviation variance: σ̂² = SSr/(n − k), where n is the sample size and k is the
number of estimated constants in the model.
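As a check outside Excel, the following minimal Python sketch (numpy; our own construction) fits the quadratic model to the Example 1 data and reproduces the statistics just defined, with n = 10 and k = 3:

    import numpy as np

    x = np.array([8, 9, 10, 12, 15, 18, 20, 25, 30, 35], dtype=float)
    y = np.array([25, 26.5, 28, 33, 36, 36.5, 36, 32.5, 26, 21])

    # Design matrix [1, x1, x2] with x1 = x and x2 = x^2
    X = np.column_stack([np.ones_like(x), x, x**2])
    a, b, c = np.linalg.lstsq(X, y, rcond=None)[0]   # approx. 7.3135, 2.8810, -0.0726

    y_hat = a + b*x + c*x**2
    n, k = len(y), 3
    ss_r = np.sum((y - y_hat)**2)                    # residual sum of squares, approx. 19.56
    ss_t = np.sum((y - y.mean())**2)                 # total sum of squares, approx. 267.73
    r2 = 1 - ss_r/ss_t                               # R^2, approx. 0.9269
    adj_r2 = 1 - (n - 1)/(n - k)*ss_r/ss_t           # adjusted R^2, approx. 0.9060
    sigma_hat = np.sqrt(ss_r/(n - k))                # standard error, approx. 1.6718
    print(a, b, c, r2, adj_r2, sigma_hat)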

Table 1: Output of the regression
SUMMARY OUTPUT

Regression Statistics
Multiple R          0.96277
R Square            0.92693
Adjusted R Square   0.90605
Standard Error      1.67178
Observations        10

ANOVA
            df   SS         MS         F         Significance F
Regression  2    248.1611   124.0806   44.3963   0.000105
Residual    7    19.5639    2.7948
Total       9    267.7250

              Coefficients   Standard Error   t Stat    P-value     Lower 95%   Upper 95%
Intercept     7.31350        3.02963          2.4140    0.04650     0.14957     14.47743
X Variable 1  2.88099        0.33680          8.5539    5.93E-05    2.08458     3.67741
X Variable 2  -0.07265       0.00796          -9.1214   3.91E-05    -0.09148    -0.05381
Step 5: Interpret the result. The Multiple R under "Regression Statistics" reflects how
good the fit is: the greater it is, the better. In this example it equals 0.9628, which is close
to 1; this implies that the fitted model is reasonable. The coefficient corresponding to
"Intercept" is a, the coefficient corresponding to "X Variable 1" is b, and the coefficient
corresponding to "X Variable 2" is c. They are 7.31350, 2.88099, and -0.072646,
respectively.
Now, we draw this fitted curve and the data curve together.

Open the file "Lab-1". Let A11 = 6, A12 = A11+1, …, A41 = A40+1. Select C11 and
type "=7.3135+2.88099*A11-0.07265*A11^2". Select C11, hold the left mouse button,
and drag down to C41. This completes the calculation of the fitted y values. Now select
A1:C41 and repeat the steps presented in Example 1. Figure 2 shows the fitted curve
together with the data curve. As can be seen, the fit is not ideal. Therefore, we can try the
following model:
y = a·x^b·e^(c·x)

where the parameters a, b, and c are constants to be determined by regression. It can be
linearized by the transformations

z = ln(y),  A = ln(a),  x1 = ln(x),  x2 = x

Thus, the model can be rewritten as

z = A + b·x1 + c·x2
Similarly, we have the regression result shown in Table 2:

Table 2: Output of the regression

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.99397
R Square            0.98798
Adjusted R Square   0.98455
Standard Error      0.02343
Observations        10

ANOVA
            df   SS         MS         F          Significance F
Regression  2    0.315745   0.157873   287.7028   1.90E-07
Residual    7    0.003841   0.000549
Total       9    0.319586

              Coefficients   Standard Error   t Stat     P-value     Lower 95%   Upper 95%
Intercept     0.34169        0.13804          2.4752     0.04251     0.01527     0.66811
X Variable 1  1.75653        0.07639          22.9950    7.46E-08    1.57590     1.93716
X Variable 2  -0.10129       0.00424          -23.8928   5.72E-08    -0.11131    -0.09127
This time, Multiple R = 0.9940, which implies that this model is much better than the
previous one. The model parameters are as follows:

a = e^A = e^0.34169 = 1.40733,  b = 1.75653,  c = -0.10129

Figure 3 shows the fitted curve together with the data curve. As can be seen, the fit is
very good.
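As a check, here is a minimal Python sketch of the same log-linear fit (note that a is recovered by exponentiating the fitted intercept):

    import numpy as np

    x = np.array([8, 9, 10, 12, 15, 18, 20, 25, 30, 35], dtype=float)
    y = np.array([25, 26.5, 28, 33, 36, 36.5, 36, 32.5, 26, 21])

    # Linearized model: z = ln(y) = A + b*ln(x) + c*x
    X = np.column_stack([np.ones_like(x), np.log(x), x])
    A, b, c = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
    a = np.exp(A)                                  # approx. 1.40733
    print(a, b, c)                                 # b approx. 1.75653, c approx. -0.10129

    # Fitted values of the original model, for plotting against the data
    xs = np.arange(6, 37, dtype=float)
    ys = a * xs**b * np.exp(c * xs)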

Figure 3: Fitted curve together with the data curve
4. ANALYSIS OF VARIANCE (ANOVA)
Many investigations involve a comparison of several population means. A single-factor
analysis of variance problem involves a comparison of k population means; the objective
is to test whether these means are equal. The analysis of variance method analyzes
variability in the data to see how much can be attributed to between-group differences
and how much is due to within-group variability.
Notation:
Sample size: n_i
Sample mean: x̄_i = Σ_j x_ij / n_i
Sample variance: s_i² = Σ_j (x_ij − x̄_i)² / (n_i − 1)
Total number of observations: N = Σ_i n_i
Grand mean: x̄ = Σ_ij x_ij / N
Example 3: Suppose that three different teaching methods are used to teach a course to
three groups of students, and we have their scores on the final examination. Based on the
scores, we can analyze the effectiveness of each teaching method by the analysis of
variance technique. The scores are shown in Table 3:
Table 3: Data for Example 3
          Method 1   Method 2   Method 3
Student 1 57 64 68
Student 2 75 71 75
Student 3 98 79 50
Student 4 61 45 53
Student 5 84 50 61
Student 6 40 74
Now, input the five data values under "Method 1" into A1 through A5; input the six data
values under "Method 2" into B1 through B6; and input the six data values under
"Method 3" into C1 through C6. Click "Tools", then click "Data Analysis" and select
"Anova: Single Factor". Input "A1:C6" as the "Input Range" and click "OK". The result
is shown in Table 4. It includes two parts: the first part is the "SUMMARY" and the
second is the "ANOVA".
Definitions and notation:
Mean square (MS) for groups: MS1 = Σ_i n_i (x̄_i − x̄)² / (k − 1). It reflects
between-group variation.
df: degrees of freedom.
SS: sum of squares.
P-value: the smallest level of significance at which the null hypothesis can be rejected.
Mean square for error: MS2 = Σ_i (n_i − 1) s_i² / (N − k). It reflects within-group
variation.
F ratio: F = MS1/MS2. If F > the F critical value, there are significant differences
between the group means, and one cannot consider the means equal.

The F critical value is taken from the F distribution based on the numerator and
denominator degrees of freedom and a significance level. In our example,
MS1 = 396.8333 and MS2 = 206.7381, so F = 1.919498. The F critical value is 3.73889,
so we cannot conclude that there are significant differences between the means at a
significance level of 0.05.
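The same test can be run outside Excel. A minimal Python sketch with scipy (the empty cell under Method 1 for Student 6 is simply omitted):

    from scipy import stats

    method1 = [57, 75, 98, 61, 84]
    method2 = [64, 71, 79, 45, 50, 40]
    method3 = [68, 75, 50, 53, 61, 74]

    # Single-factor ANOVA: F approx. 1.9195, p approx. 0.1834
    f, p = stats.f_oneway(method1, method2, method3)
    print(f, p)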
Table 4: Output of the ANOVA
Anova: Single Factor
SUMMARY
Groups Count Sum Average Variance
Column 1 5 375 75 282.5
Column 2 6 349 58.16667 240.5667
Column 3 6 381 63.5 112.3
ANOVA
Source of Variation   SS         df   MS         F          P-value   F crit
Between Groups        793.6667   2    396.8333   1.919498   0.18336   3.73889
Within Groups         2894.333   14   206.7381
Total                 3688       16
Example 4: Two-factor ANOVA. An investigator will often be interested in assessing the
effects of two different factors, A and B, on a response variable, with several levels
corresponding to each factor. Suppose that an experiment is carried out, resulting in a
data set that contains some number of observations for each combination of the factor
levels; see Table 5. Here we have 3 + 4 = 7 sample averages (and variances) and one
grand average. These are given in the first part of the ANOVA output; see Table 6. We
can view each column as a group, giving a single-factor ANOVA; similarly, we can view
each row as a group, giving another single-factor ANOVA. These are shown in the
second part.
Table 5: Data for Example 4
B1 B2 B3 B4
A1 9.2 12.43 12.9 10.8
A2 8.93 12.63 14.5 12.77
A3 16.3 18.1 19.93 18.17
Table 6: Output of the ANOVA
Anova: Two-Factor Without Replication
SUMMARY
Count Sum Average Variance
Row 1 4 45.33 11.3325 2.830892
Row 2 4 48.83 12.2075 5.497492
Row 3 4 72.5 18.125 2.1971
Column 1 3 34.43 11.47667 17.46663
Column 2 3 43.16 14.38667 10.35163
Column 3 3 47.33 15.77667 13.57763
Column 4 3 41.74 13.91333 14.55963
ANOVA
Source of Variation   SS         df   MS         F          P-value    F crit
Rows                  109.2273   2    54.61366   122.0985   1.38E-05   5.143249
Columns               28.8927    3    9.6309     21.53159   0.001299   4.757055
Error                 2.68375    6    0.447292
Total                 140.8038   11
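The Excel output can be verified by hand, since the two-factor ANOVA without replication is a direct sum-of-squares decomposition. A minimal Python sketch using the Table 5 data (our own check of Table 6):

    import numpy as np
    from scipy import stats

    # Table 5: rows are levels of factor A, columns are levels of factor B
    data = np.array([[ 9.20, 12.43, 12.90, 10.80],
                     [ 8.93, 12.63, 14.50, 12.77],
                     [16.30, 18.10, 19.93, 18.17]])
    r, c = data.shape
    grand = data.mean()

    ss_rows = c * np.sum((data.mean(axis=1) - grand)**2)   # approx. 109.23
    ss_cols = r * np.sum((data.mean(axis=0) - grand)**2)   # approx. 28.89
    ss_total = np.sum((data - grand)**2)                   # approx. 140.80
    ss_err = ss_total - ss_rows - ss_cols                  # approx. 2.68

    df_err = (r - 1) * (c - 1)
    f_rows = (ss_rows / (r - 1)) / (ss_err / df_err)       # approx. 122.10
    f_cols = (ss_cols / (c - 1)) / (ss_err / df_err)       # approx. 21.53
    p_rows = stats.f.sf(f_rows, r - 1, df_err)             # approx. 1.38E-05
    p_cols = stats.f.sf(f_cols, c - 1, df_err)             # approx. 0.0013
    print(f_rows, p_rows, f_cols, p_cols)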
5. ROBUST DESIGN: AN EXAMPLE

Example 5: This example is taken from Reference [9]. There are 8 control factors, each
at two levels. We denote the two levels as -1 and 1, although they correspond to different
specific values for different factors. The control array is a 2^(8-4), 16-run fractional
factorial design; see Table 7.
Table 7: Data for Example 5
A  B  C  D  E  F  G  H
-1 -1 -1 -1 -1 -1 -1 -1
1 -1 -1 -1 -1 1 1 1
-1 1 -1 -1 1 -1 1 1
1 1 -1 -1 1 1 -1 -1
-1 -1 1 -1 1 1 1 -1
1 -1 1 -1 1 -1 -1 1
-1 1 1 -1 -1 1 -1 1
1 1 1 -1 -1 -1 1 -1
-1 -1 -1 1 1 1 -1 1
1 -1 -1 1 1 -1 1 -1
-1 1 -1 1 -1 1 1 -1
1 1 -1 1 -1 -1 -1 1
-1 -1 1 1 -1 -1 1 1
1 -1 1 1 -1 1 -1 -1
-1 1 1 1 1 -1 -1 -1
1 1 1 1 1 1 1 1
M1=100 M1=100 M2=200 M2=200 M3=300 M3=300
N1 N2 N1 N2 N1 N2
1 119.2 123.8 239.9 244.4 359.8 365.4
2 155.6 164.2 314.2 322.7 471.6 482.1
3 129 136.1 261.7 267.6 392.9 400.1
4 168.2 176.2 339.7 348 510.1 519.9
5 142.5 150.8 289.4 297 434.5 444
6 160 168.7 323.7 331.2 486.4 496
7 142.6 149.8 288.5 294.8 433 441
8 151.6 160.3 307 314.6 460.3 470.4
9 165.3 174.8 335.6 343.7 503.8 514.2
10 186 199.4 378.2 390.9 568.4 584.5
11 141.8 149.2 287.6 294.2 430.9 439.7
12 184.9 195.4 373.7 383.8 561.2 574.1
13 148.6 157.5 302.3 310.4 453.3 463.5
14 180.6 191 365.9 375.8 549.5 562
15 154.9 163.5 314.3 322 471.8 480.7
16 182.8 196.5 371.7 384.3 558.9 574.2
The second part of the table is the outer array, which consists of 3 levels of a signal
factor (M1 = 100, M2 = 200, M3 = 300) crossed with two levels of a noise factor, for a
total of 6 runs. Thus, we have 16 × 6 = 96 observations.
Let Y_ijk denote the observation corresponding to the i-th setting of the control factors,
the j-th setting of the signal factor, and the k-th setting of the noise factors. Under the
assumption of a linear ideal function with no intercept (i.e., no constant term), we have
the model

Y_ijk = β_i·M_j + ε_ijk

where β_i, the sensitivity measure, and σ_i² = var(ε_ijk) both depend on the control
factor setting, i = 1, 2, …, 16.
The measure of robustness is the so-called signal-to-noise (SN) ratio. The SN ratio for
evaluating the stability of the product is defined as

η = σ₁²/σ₂²,  or  η = 10·log₁₀(σ₁²/σ₂²)

where σ₁ and σ₂ are the standard deviations of the first part and the second part,
respectively. Basically, the SN ratio indicates the degree of predictable performance of
the product in the presence of noise factors.
In the current example σ₁ = β_i and σ₂ = σ_i, so the SN ratio is defined as

η_i = 10·log₁₀(β_i²/σ_i²)
For each combination of the control array, we find β_i and σ_i by regression. Input the
signal factor values – 100, 100, 200, 200, 300, 300 – into A1 through A6, and the Y
values of the first row – 119.2, 123.8, 239.9, 244.4, 359.8, 365.4 – into B1 through B6.
Select "Regression". Input Y range B1:B6 and X range A1:A6, and check the item
"Constant is Zero". The regression result is shown in Table 8. Thus, β_i is given by the
"Coefficient" of X Variable 1, 1.2097, and σ_i is given by the "Standard Error", 2.7286.
These values, along with the corresponding SN ratio, are given in Table 9. Similarly, we
can find the other β_i and σ_i; in other words, we need to carry out 16 such regressions.
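Because each fit is a one-parameter regression through the origin, it can also be computed directly; a minimal Python sketch for run No. 1 (repeating it over the 16 rows of Table 7 yields Table 9):

    import numpy as np

    # Outer-array signal levels and the observations of run No. 1 (Table 7)
    M = np.array([100, 100, 200, 200, 300, 300], dtype=float)
    Y = np.array([119.2, 123.8, 239.9, 244.4, 359.8, 365.4])

    # Least squares through the origin: beta = sum(M*Y)/sum(M^2)
    beta = (M @ Y) / (M @ M)                       # approx. 1.2097
    resid = Y - beta * M
    # n - 1 degrees of freedom, since only the slope is estimated
    sigma = np.sqrt(resid @ resid / (len(M) - 1))  # approx. 2.7286
    sn = 10 * np.log10(beta**2 / sigma**2)         # SN ratio, approx. -7.0652
    print(beta, sigma, sn)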
Table 8: Finding β_i and σ_i by regression

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.99968
R Square            0.99936
Adjusted R Square   0.79936
Standard Error      2.72863
Observations        6

ANOVA
            df   SS           MS           F           Significance F
Regression  1    58128.3812   58128.3812   7807.2579   9.84E-08
Residual    5    37.2271      7.4454
Total       6    58165.6083

              Coefficients   Standard Error   t Stat     P-value     Lower 95%   Upper 95%
Intercept     0              #N/A             #N/A       #N/A        #N/A        #N/A
X Variable 1  1.20971        0.00516          234.5940   2.67E-11    1.19646     1.22297
The results of these regressions are shown in Table 9. As can be seen from the table, the
combination No. 1 gives the maximum SN ratio.
Table 9: β_i, σ_i, and the SN ratio
beta     sigma    SN ratio
1.2097 2.7286 -7.06524
1.591 5.0998 -10.1177
1.3224 3.7115 -8.96373
1.7178 4.8004 -8.9261
1.4649 4.6625 -10.0562
1.6315 10.3545 -16.0508
1.4607 4.4665 -9.70813
1.5575 6.3244 -12.1718
1.6974 5.145 -9.63202
1.9223 7.7537 -12.1138
1.4523 4.2123 -9.24926
1.8933 6.1794 -10.2745
1.5293 5.0048 -10.2979
1.8534 6.0305 -10.2477
1.5888 4.6269 -9.28442
1.8895 7.6348 -12.129
Under the assumption of a linear ideal function with no intercept, the sensitivity
measurements (i.e., the coefficients) and the robustness are obtained above. The next
steps are: 1) determine which control factors have significant effects on the robustness
(i.e., the SN ratio) and their appropriate level settings; 2) identify which control factors
significantly affect the sensitivity measurements and their appropriate settings; 3)
determine the settings of these significant control factors such that the system is high in
both robustness (SN ratio) and sensitivity.
To identify the active dispersion effects (i.e., factors that are important for reducing
variability), one fits a linear model to the estimated SN ratios as a function of the control
factors. With the data in Table 7 for control factors A to H and the SN ratios in Table 9,
we use SPSS and apply ANOVA to obtain Table 10.
Table 10. Tests of Between-Subjects Effects
Dependent Variable: SN
Source             Type III Sum of Squares   df   Mean Square   F          Sig.
Corrected Model 20.912a 8 2.614 10.083 .003
Intercept 1525.618 1 1525.618 5884.526 .000
A 5.979 1 5.979 23.062 .002
B .109 1 .109 .421 .537
C .788 1 .788 3.039 .125
D 6.532 1 6.532 25.195 .002
E 1.249 1 1.249 4.817 .064
F .211 1 .211 .813 .397
G 5.671 1 5.671 21.874 .002
H .374 1 .374 1.441 .269
Error 1.815 7 .259
Total 1548.345 16
Corrected Total 22.727 15
a. R Squared = .920 (Adjusted R Squared = .829)
The significance of each control factor for the SN ratio can be seen from the last column
of Table 10. When the p-value (i.e., the Sig. value in the last column) is less than 0.05,
we say the corresponding factor is significant. From Table 10, factors A, D, and G are
significant (cf. Fig. 2(a) in [9]). To further determine the level settings of these
significant factors, the main effect of each factor is studied by plotting their profiles
using SPSS, as shown in Figures 4, 5, and 6, respectively.
Figure 4. Profile of factor A for SN ratio.

Figure 5. Profile of factor D for SN ratio
Figure 6. Profile of factor G for SN ratio
Figures 4 to 6 (cf. Figure 3 in [9]) show that the level settings for A, D, and G should all
be -1. Similarly, with the data in Table 7 and the sensitivities in Table 9, we apply
ANOVA to find the significant factors for sensitivity. The results from SPSS are shown
in Table 11.
Table 11. Tests of Between-Subjects Effects
Dependent Variable: BETA
Source             Type III Sum of Squares   df   Mean Square   F          Sig.
Corrected Model .617a 8 .077 13.296 .001
Intercept 41.537 1 41.537 7157.844 .000
A .341 1 .341 58.740 .000
B 6.202E-5 1 6.202E-5 .011 .921
C .002 1 .002 .302 .600
D .219 1 .219 37.766 .000
E .031 1 .031 5.303 .055
F .014 1 .014 2.356 .169
G .007 1 .007 1.184 .313
H .004 1 .004 .708 .428
Error .041 7 .006
Total 42.195 16
Corrected Total .658 15
a. R Squared = .938 (Adjusted R Squared = .868)
From Table 11, we can see that only factors A and D are significant. Hence, only factors
A and D affect the sensitivity of the system. To determine the levels of these two factors,
the profiles of factors A and D with respect to sensitivity are plotted in Figures 7 and 8,
respectively.
Figure 7. Profile of factor A for sensitivity

Figure 8. Profile of factor D for sensitivity
From Figures 7 and 8, we can see that the settings for factors A and D should both be +1.
These results are consistent with those in [9] (cf. Figure 4 in [9]).
One can conclude that factors A, D, and G have significant effects on the dispersion, and
from the profiles we know that A, D, and G should be set to the first level (-1). From the
sensitivity analysis, in order to make the sensitivity measure as large as possible, the
significant factors A and D should be set to the second level (+1). This conflicts with the
earlier choice of the first level for reducing the variability. As mentioned in [9], a
compromise is required. The result is one of the following situations: G is set to the first
level (-1), and either the first or the second level may be chosen for A and D. From [9],
we know the optimal choice is -1 for G and +1 for both A and D.
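These profile conclusions can be checked numerically. Below is a minimal numpy sketch of the main-effect calculation underlying such profile plots, with the control array transcribed from Table 7 and the SN ratios and betas from Table 9 (our own check, not the SPSS computation):

    import numpy as np

    # Control array (Table 7, factors A..H) and responses (Table 9)
    X = np.array([
        [-1,-1,-1,-1,-1,-1,-1,-1], [ 1,-1,-1,-1,-1, 1, 1, 1],
        [-1, 1,-1,-1, 1,-1, 1, 1], [ 1, 1,-1,-1, 1, 1,-1,-1],
        [-1,-1, 1,-1, 1, 1, 1,-1], [ 1,-1, 1,-1, 1,-1,-1, 1],
        [-1, 1, 1,-1,-1, 1,-1, 1], [ 1, 1, 1,-1,-1,-1, 1,-1],
        [-1,-1,-1, 1, 1, 1,-1, 1], [ 1,-1,-1, 1, 1,-1, 1,-1],
        [-1, 1,-1, 1,-1, 1, 1,-1], [ 1, 1,-1, 1,-1,-1,-1, 1],
        [-1,-1, 1, 1,-1,-1, 1, 1], [ 1,-1, 1, 1,-1, 1,-1,-1],
        [-1, 1, 1, 1, 1,-1,-1,-1], [ 1, 1, 1, 1, 1, 1, 1, 1]])
    sn = np.array([-7.06524, -10.1177, -8.96373, -8.9261, -10.0562, -16.0508,
                   -9.70813, -12.1718, -9.63202, -12.1138, -9.24926, -10.2745,
                   -10.2979, -10.2477, -9.28442, -12.129])
    beta = np.array([1.2097, 1.591, 1.3224, 1.7178, 1.4649, 1.6315, 1.4607, 1.5575,
                     1.6974, 1.9223, 1.4523, 1.8933, 1.5293, 1.8534, 1.5888, 1.8895])

    for j, name in enumerate("ABCDEFGH"):
        hi, lo = X[:, j] == 1, X[:, j] == -1
        # Main effect = mean response at level +1 minus mean response at level -1
        print(name, sn[hi].mean() - sn[lo].mean(), beta[hi].mean() - beta[lo].mean())

A negative SN effect means the -1 level gives the larger SN ratio, and a positive beta effect means the +1 level gives the larger sensitivity; for factors A and D this is exactly the conflict just described.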
It is noted that Appendix A presents another approach to determining the control factor
settings that trade off dispersion (SN ratio) against sensitivity (beta).
REFERENCES:
1. Christine H. Muller (1997), Robust planning and analysis of experiments.
2. Nancy D. Warner (1999), Easy Microsoft Excel 2000.
3. Ron Person (1997), Using Microsoft Excel 97.

4. Genichi Taguchi (translated by Shih-Chung Tsai, 1993), Taguchi on robust technology
development: bringing quality engineering upstream.
5. N. Logothetis (1992), Managing for total quality, From Deming to Taguchi and SPC.
6. Thomas J. Lorenzen & Virgil L. Anderson (1993), Design of experiments.
7. Special issue on Taguchi methods, Quality and Reliability Engineering International,
Vol. 4, No. 2, 1988.
8. Jay Devore & Roxy Peck (1993), Statistics: The Exploration and Analysis of Data,
Duxbury Press.
9. Mahesh Lunani et al. (1997), Graphical methods for robust design with dynamic
characteristics, Journal of Quality Technology, Vol. 29, No. 3, 327-338.

Appendix A:
To identify the active dispersion effects (i.e., factors that are important for reducing
variability), one fits a linear model to the estimated SN ratios as a function of the control
factors:

η = a0 + a1·A + a2·B + … + a8·H

The fitting result is shown in Table A1. The greater the absolute value of a coefficient,
the more important the corresponding factor. One would conclude from this analysis that
variables 1, 3, 8, and 5 (i.e., A, C, H, and E) are the important factors for reducing
variability. Thus, the control factors that are important for reducing variability are
determined, and their appropriate settings are chosen to maximize the SN ratio.
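For readers without SPSS, this regression can be reproduced with ordinary least squares. A minimal numpy sketch, with the arrays transcribed from Tables 7 and 9 (replacing sn with the beta column of Table 9 gives the sensitivity fit of Table A2):

    import numpy as np

    # Control array (Table 7, factors A..H) and SN ratios (Table 9)
    X = np.array([
        [-1,-1,-1,-1,-1,-1,-1,-1], [ 1,-1,-1,-1,-1, 1, 1, 1],
        [-1, 1,-1,-1, 1,-1, 1, 1], [ 1, 1,-1,-1, 1, 1,-1,-1],
        [-1,-1, 1,-1, 1, 1, 1,-1], [ 1,-1, 1,-1, 1,-1,-1, 1],
        [-1, 1, 1,-1,-1, 1,-1, 1], [ 1, 1, 1,-1,-1,-1, 1,-1],
        [-1,-1,-1, 1, 1, 1,-1, 1], [ 1,-1,-1, 1, 1,-1, 1,-1],
        [-1, 1,-1, 1,-1, 1, 1,-1], [ 1, 1,-1, 1,-1,-1,-1, 1],
        [-1,-1, 1, 1,-1,-1, 1, 1], [ 1,-1, 1, 1,-1, 1,-1,-1],
        [-1, 1, 1, 1, 1,-1,-1,-1], [ 1, 1, 1, 1, 1, 1, 1, 1]])
    sn = np.array([-7.06524, -10.1177, -8.96373, -8.9261, -10.0562, -16.0508,
                   -9.70813, -12.1718, -9.63202, -12.1138, -9.24926, -10.2745,
                   -10.2979, -10.2477, -9.28442, -12.129])

    # Fit eta = a0 + a1*A + ... + a8*H by least squares
    Xd = np.column_stack([np.ones(16), X])
    a = np.linalg.lstsq(Xd, sn, rcond=None)[0]
    print(a)   # approx. -10.393, -1.111, 0.305, -0.850, -0.011,
               # -0.501, 0.385, -0.244, -0.504 (cf. Table A1)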
Table A1: Output of the regression (SN ratio on A–H)

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.85666
R Square            0.73387
Adjusted R Square   0.42973
Standard Error      1.51331
Observations        16

ANOVA
            df   SS        MS       F        Significance F
Regression  8    44.2066   5.5258   2.4129   0.1314
Residual    7    16.0308   2.2901
Total       15   60.2374

              Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept     -10.39302      0.37833          -27.4709   2.17E-08   -11.28762   -9.49842
X Variable 1  -1.11091       0.37833          -2.9364    0.0218     -2.00551    -0.21631
X Variable 2  0.30464        0.37833          0.8052     0.4472     -0.58996    1.19925
X Variable 3  -0.85023       0.37833          -2.2473    0.0594     -1.74483    0.04437
X Variable 4  -0.01055       0.37833          -0.0279    0.9785     -0.90515    0.88405
X Variable 5  -0.50149       0.37833          -1.3256    0.2266     -1.39609    0.39311
X Variable 6  0.38476        0.37833          1.0170     0.3430     -0.50984    1.27936
X Variable 7  -0.24440       0.37833          -0.6460    0.5389     -1.13900    0.65020
X Variable 8  -0.50371       0.37833          -1.3314    0.2248     -1.39831    0.39090
To identify the active sensitivity effects, one fits a linear model to the β_i's as a function
of the control factors:

β = b0 + b1·A + b2·B + … + b8·H

Table A2 shows the result of the regression.
Table A2: Output of the regression (sensitivity on A–H)

SUMMARY OUTPUT

Regression Statistics
Multiple R          0.96736
R Square            0.93578
Adjusted R Square   0.86238
Standard Error      0.07759
Observations        16

ANOVA
            df   SS        MS        F         Significance F
Regression  8    0.61396   0.07674   12.7492   0.00154
Residual    7    0.04214   0.00602
Total       15   0.65609

              Coefficients   Standard Error   t Stat    P-value    Lower 95%   Upper 95%
Intercept     1.61136        0.01940          83.0750   9.64E-12   1.56550     1.65723
X Variable 1  0.14568        0.01940          7.5104    0.00014    0.09981     0.19154
X Variable 2  -0.00108       0.01940          -0.0554   0.9574     -0.04694    0.04479
X Variable 3  0.01059        0.01940          0.5458    0.6021     -0.03528    0.05645
X Variable 4  0.11693        0.01940          6.0282    0.00053    0.07106     0.16279
X Variable 5  0.04296        0.01940          2.2150    0.0623     -0.00290    0.08883
X Variable 6  0.02951        0.01940          1.5215    0.1719     -0.01635    0.07538
X Variable 7  -0.02021       0.01940          -1.0421   0.3320     -0.06608    0.02565
X Variable 8  0.01553        0.01940          0.8004    0.4498     -0.03034    0.06139
One would conclude from this analysis that variables 1 and 4 (i.e., A and D) are the most
important factors affecting sensitivity.
Once the important sensitivity and dispersion effects have been identified, we can choose
the appropriate settings of the factors to reduce variability and get close to the desired
sensitivity. To find parameter settings intuitively, we can fit the SN ratio as a function of
each individual control variable by regressing the model

η = a_j + b_j·x_j,  j = 1, 2, …, 8.

The regression straight lines are displayed in Figure A1, which indicates the magnitudes
of the effects of the factors. To make the SN ratio large, we have to choose the 1st level
(value -1) of A, C, D, E, G, and H, and the 2nd level (value +1) of B and F.

Figure A1: Regression lines of the SN ratio against each control factor (A to H)