

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL

Qual. Reliab. Engng. Int. 2000; 16: 405–416

A METHODOLOGY FOR MULTI-CHARACTERISTIC SYSTEM IMPROVEMENT WITH ACTIVE EXPERT INVOLVEMENT

P. PERSSON 1 *, P. KAMMERLIND 1 , B. BERGMAN 1,2 AND J. ANDERSSON 3

1 Linköping University, Division of Quality Technology and Management, S-581 83 Linköping, Sweden
2 Chalmers University of Technology, Department of Total Quality Management, S-411 29 Göteborg, Sweden
3 Linköping University, Division of Mechanical Engineering Systems, S-581 83 Linköping, Sweden

SUMMARY
When developing new products many different customer needs must be fulfilled. Often a few of the needs are prioritized (as emphasized in Quality Function Deployment) and most effort in the project is focused on these needs. However, even though there may be only four or five customer needs that are important, it is not easy to develop a product that complies with all of these. A well-known approach to this problem is to use Design of Experiments to find the settings of the system parameters. In a situation with several system characteristics, the analysis of the experiment is not as uncomplicated as in the case with just one system characteristic. This problem has been dealt with in the literature, but not as much as one would expect. In this paper we present a methodology to deal with multi-characteristic system improvement, using active expert involvement. Our methodology, which is based on methods from four areas: Expert Groups, Design of Experiments, Conjoint Analysis and Optimization, is introduced using an example. Copyright 2000 John Wiley & Sons, Ltd.

KEY WORDS: experimental design; expert group; multiple characteristics; optimization; simulation; systems engineering

1. INTRODUCTION

With an increased emphasis on time to market, increasing system complexity, and an increased emphasis on customer satisfaction, Systems Engineering methodologies have attracted more attention among both researchers and practitioners, see e.g. [1,2]. The rapid development of information technology has made realistic system simulation tools available to the systems engineers. Often, it is possible to create a system simulation model in quite early design phases, i.e. for any input vector x of system parameters, a response vector y of system characteristics (outputs) may be calculated from the simulation model. However, despite advances in computer power, the effort in preparation and the expense of running the simulation model often remain substantial. Also, due to the multi-dimensionality of the system it might be hard for the system engineer to find improved or optimal solutions to the system design without a systematic methodology.

The aim of this paper is to suggest a systematic method to find improved system solutions when a multi-dimensional system simulation model y = f(x) is available; see Figure 1. Different methods have been suggested in the literature. Some researchers, see e.g. [3], suggest that meta-modeling be used, i.e. a simpler model, usually polynomial, is approximated from a number of simulations planned according to Design of Experiments, see [4, pp. 429–432] for an example. Based on this model an optimal solution is to be found. A similar possibility is to use Response Surface Methodology, see e.g. [5]. However, these methods only take care of the multi-dimensionality of the input vector x. The problem of a multi-dimensional output remains. A main problem is that different system characteristics often give conflicting settings of the system parameters.

* Correspondence to: P. Persson, Linköping University, Division of Quality Technology and Management, S-581 83 Linköping, Sweden. Email: [email protected]
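The meta-modeling idea above can be sketched in a few lines: simulate the system at a small designed set of points, then fit a simple polynomial surrogate by least squares. The following is a minimal illustration, not the paper's model; the `simulator` function and its coefficients are made-up stand-ins for an expensive simulation.

```python
# Sketch of meta-modeling: approximate an expensive simulator with a
# simple polynomial fitted to a few planned runs.
import numpy as np

def simulator(x1, x2):
    # hypothetical stand-in for an expensive system simulation
    return 3.0 - 1.2 * x1 + 0.8 * x1 * x2 + 0.1 * x2

# 2^2 factorial plan plus a center point
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], float)
y = np.array([simulator(x1, x2) for x1, x2 in design])

# model matrix: intercept, x1, x2, x1*x2
X = np.column_stack([np.ones(len(design)),
                     design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # ≈ [3.0, -1.2, 0.1, 0.8]
```

The fitted coefficients recover the simulator's terms here because the surrogate has the same form; in practice the surrogate is only a local approximation, which is why the problem of conflicting multi-dimensional outputs discussed next remains.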

Lately, there has been an increased interest in the multiple system characteristics problem, see e.g. [6]. In almost all the literature that we studied for this project, dealing with multiple system characteristics includes some sort of weighting criteria applied to the single characteristics.

Derringer and Suich [7], pioneers in the field, describe the earliest hit-or-miss graphical methods as ineffective and insensitive. Khuri [8] describes them as methods to find reasonably good solutions for all responses, i.e. there is no optimization strategy. The graphical methods are hard to use for more than three parameters and two characteristics [9].

Figure 1. An illustration of a system simulation model using an adaptation of the P-diagram suggested by Phadke (1989)

Copyright 2000 John Wiley & Sons, Ltd.

Different kinds of desirability functions are the most frequently used multi-characteristic optimization techniques in practice, see [7,10]. Harrington [11] presented an optimization strategy utilizing a desirability function, which seems to be the first time that the term shows up in the literature. Derringer and Suich [7] modified this desirability function using a more sophisticated expression. The choice of the desirability function is subjective, governed by how experts assess the relative importance of each system characteristic. This method relies on an expert to make a fair judgement of the relative importance of each system characteristic and on the assumption that these relative importances are the same over the space of possible system characteristics.
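The desirability idea can be sketched as follows: each characteristic is mapped onto a [0, 1] desirability scale and the individual desirabilities are combined by a geometric mean, in the spirit of Derringer and Suich [7]. The limits, targets and shape exponents below are illustrative assumptions, not values from the paper.

```python
# Sketch of a one-sided (larger-is-better) desirability function and
# the geometric-mean combination of several desirabilities.
import numpy as np

def desirability_larger_is_better(y, low, target, r=1.0):
    """0 below `low`, 1 above `target`, a power curve in between."""
    d = ((y - low) / (target - low)) ** r
    return float(np.clip(d, 0.0, 1.0))

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, float)
    return float(ds.prod() ** (1.0 / len(ds)))

# two characteristics for one candidate design (made-up numbers)
d1 = desirability_larger_is_better(0.8, low=0.0, target=1.0)       # 0.8
d2 = desirability_larger_is_better(0.5, low=0.0, target=1.0, r=2)  # 0.25
print(overall_desirability([d1, d2]))  # sqrt(0.8 * 0.25) ≈ 0.447
```

Note how the geometric mean penalizes any single poor characteristic: if one desirability is zero, the overall desirability is zero, which is part of the appeal of this family of methods.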

Another approach is based on loss functions. Khuri and Conlon [12] base their strategy on a distance function, including the correlation structure among the system characteristics. The drawbacks of this strategy are that the process economics and priorities are ignored. These are taken care of in [13–15]. They all use a square loss function, which considers both the specific process economics and the correlation structure. Vining [15] also incorporates the quality of the predictions in the procedure. They all have a more or less subjective part in the method, always involving expert groups. Taguchi's quadratic loss function is used by some authors, see e.g. [16], and the dual response approach is applied in [17]. The difficulties here are similar to those of the methods mentioned before: experts need to assess the loss to society for not delivering the optimal quality.

For all the methods mentioned, the focus is on the statistical and optimization properties of the methods rather than on their operational use. These methods never consider the role of the expert group more closely, see e.g. [18]. Expert groups are mentioned but never described in more detail, i.e. neither the composition of the group nor how the work in the group is carried out is given.

The literature study made us feel that there is a need to create a methodology in which experts take an active role. Therefore, the purpose of this paper is to report on an ongoing effort to find an improved methodology for multi-characteristic system improvement based on some simple building blocks: Expert Groups, Design of Experiments, Conjoint Analysis and Optimization.

The method suggested in this paper is illustrated utilizing an actuator system for the aircraft industry. This system is modeled in the HOPSAN simulation package [19], and systems engineers from Saab AB were actively involved in the improvement process. The research reported here is part of a larger project on system optimization methodology. In an earlier part of this project, an approach with optimization (the complex method, see [20]) combined with Design of Experiments was used. The system characteristics were weighted together in an objective function, see [21].

The structure of the paper is as follows. Section 2 is a simple graphical introduction to the proposed methodology. In Section 3 a case study is presented. The theoretical background, together with a step-by-step overview of the methodology, is found in Section 4. The paper ends with a few conclusions and suggestions for future research.

2. THE BASIC IDEAS

The aim of this section is to give a simplified graphical illustration of the proposed methodology. This illustration includes two system parameters (x1 and x2) and two system characteristics (y1 and y2). Experiments are performed in the x-space and the result of each experiment is presented as concepts (c0–c4) to an expert group; see Figure 2. Each concept is described by its system characteristics and not by its system parameters.

The concepts are evaluated and their relative merits are assessed by the experts, and consensus is striven for. Utilizing the relative merits, a direction in the x-space can be calculated, see Figure 3, in which concepts representing better system performance can be expected to be found.

Experiments are carried out in the new direction and the results are again assessed by the experts. Around the best concept a new experimental design can be set up, and the procedure continues until no further improvements seem worthwhile.


Figure 2. A simplified illustration of the proposed methodology

Figure 3. A new direction with new experiments

3. THE PROPOSED METHODOLOGY—AN ILLUSTRATIVE EXAMPLE

This section introduces the proposed methodology using an example from Saab AB in Linköping, Sweden.

3.1. The system

The object of study for this work is a so-called electro-hydrostatic actuator system (EHA), as depicted in Figure 4. Electro-hydrostatic actuator systems are designed to replace the centralized hydraulic systems of today's aircraft. Instead of having a central hydraulic pump supplying actuators at the different control surfaces, the electrically powered EHA is situated directly at the flight control surface. These systems are also known as 'power by wire' systems, the use of which is a trend in modern aircraft flight control.

In order to evaluate the characteristics of different designs, simulation was employed; the system was modeled in the HOPSAN simulation package [19].

Figure 4. The electro-hydrostatic actuator situated directly at the control surface

Table 1. The system parameters and system characteristics used in the study

System parameters (x's)       System characteristics (y's)
x1: Pump size                 y1: Control error
x2: Cylinder size             y2: System cost
x3: Reservoir pressure        y3: System weight
x4: Control parameter C1      y4: Energy consumption
x5: Control parameter C2


Table 2. The experimental design

      x1x4  x4x5  x3x5  x2x5  x3x4  x2x4  x2x3  x1x5
Run   I     x1    x2    x3    x1x2  x1x3  x5    x4
1     1     −1    −1    −1     1     1     1    −1
2     1      1    −1    −1    −1    −1     1     1
3     1     −1     1    −1    −1     1    −1     1
4     1      1     1    −1     1    −1    −1    −1
5     1     −1    −1     1     1    −1    −1     1
6     1      1    −1     1    −1     1    −1    −1
7     1     −1     1     1    −1    −1     1    −1
8     1      1     1     1     1     1     1     1
9     1      0     0     0     0     0     0     0
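The design in Table 2 can be generated programmatically. Reading the columns off the table, the generators are x4 = x1·x2·x3 and x5 = x2·x3, so the x1x2 interaction of interest is aliased with x3x4 rather than with any main effect. The sketch below rebuilds the eight corner runs in standard order plus the center point.

```python
# Sketch: generating the 2^(5-2) fractional factorial of Table 2 from
# its generators x4 = x1*x2*x3 and x5 = x2*x3 (read off the table).
from itertools import product

rows = []
for x3, x2, x1 in product([-1, 1], repeat=3):   # x1 varies fastest
    x4 = x1 * x2 * x3
    x5 = x2 * x3
    rows.append((x1, x2, x3, x4, x5))
rows.append((0, 0, 0, 0, 0))                    # center point (run 9)

for i, r in enumerate(rows, start=1):
    print(i, r)
```

Running this reproduces the nine settings of (x1, x2, x3, x4, x5) in Table 2, with run 9 as the center point used later as the reference concept.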

Table 3. The results from the experiments in the simulation

      x1x4  x4x5  x3x5  x2x5  x3x4  x2x4  x2x3  x1x5
Run   I     x1    x2    x3    x1x2  x1x3  x5    x4      y1      y2     y3     y4
1     1     −1    −1    −1     1     1     1    −1    0.0233   2.99   15.82  24.14
2     1      1    −1    −1    −1    −1     1     1    0.0216   3.58   18.75  25.73
3     1     −1     1    −1    −1     1    −1     1    0.0243   2.65   18.32  24.14
4     1      1     1    −1     1    −1    −1    −1    0.0220   2.65   20.90  25.73
5     1     −1    −1     1     1    −1    −1     1    0.0229   2.98   15.82  24.14
6     1      1    −1     1    −1     1    −1    −1    0.0214   3.21   18.75  25.73
7     1     −1     1     1    −1    −1     1    −1    0.0246   2.68   18.32  24.14
8     1      1     1     1     1     1     1     1    0.0216   2.66   20.90  25.73
9     1      0     0     0     0     0     0     0    0.0221   2.81   18.54  24.95

3.2. System parameters and characteristics

There are several system characteristics that are vital when designing an actuator system. The first step was to determine which of them to select for the project. The following characteristics were chosen based on recommendations from the system engineers: system weight, cost, energy consumption and, naturally, the system performance measured by the control error during a specific duty cycle. In order to achieve the desired characteristics there are several system parameters that might be selected. The parameters are important properties of the different components within the system, as seen in Figure 4. The parameters and characteristics used in this project are listed in Table 1.

3.3. The first simulation

When the parameters had been decided upon, an appropriate experimental design had to be chosen. In the initial phase of the project our main concern was to find out which main parameters have a large influence on system performance. Already from the beginning, one of the systems engineers thought that one of the interactions, x1x2, was important. This is the interaction between pump size and cylinder size. Since there were five main parameters and one interaction of interest, we decided to use a fractional factorial (a 2^(5−2) design). One center point was added to be able to test for quadratic terms, but the most important reason was that it serves as a reference point in our methodology. The experimental design can be seen in Table 2.

The level of each parameter was set using knowledge from the system engineers. As the experimenters are likely to have insufficient knowledge prior to the experiment, it is of utmost importance to involve experts on the system when setting the levels.

The experiment was run in the HOPSAN simulation package [19]. The results of the experiment (y1–y4) can be found in Table 3.

Based on the results from each single experiment, nine different 'concepts' (c1–c9) were created. Each concept represents the four characteristics for a single experiment.

Figure 5. Concept 'A'

As can be seen in Figure 5, the control error was represented graphically and not by a number like the other characteristics. If possible, we would have wanted to represent all characteristics graphically, in order to make the forthcoming evaluation process easier. The next step in our methodology is perhaps the most important of them all. This is to present all concepts to an expert group. In our case the expert group consisted of three persons from Saab AB. They represented the divisions of hydraulic systems, electrical systems and modeling/simulation. The meeting with the expert group proceeded as follows.

First, we introduced the system and its place in the aircraft to the group. We also described each characteristic carefully, so that all members of the group understood the concepts. Then we gave each member one set of concepts (c1–c9). The concepts were marked 'A'–'H' and 'Ref'. The marking was made in random order for concepts 'A'–'H' (i.e. concept 'A' was not identical to row 1 in the design matrix). The corner points in the design were concepts 'A'–'H'. The center point was marked 'Ref' since it was the reference concept for the expert group. We had decided that all experts should value the results from the center point at 100 points.

Firstly, each member was told to assess all concepts, relating the number of points to the 100 points of the center point. This was done individually and the experts were not allowed to talk to each other. When this was completed, the results were summarized and each expert was given the possibility to explain how they had been thinking when setting points (in our case we chose to let the experts explain the points for the highest and the lowest points they had given). Thereafter, we told the experts to try to reach a consensus evaluation of the concepts. This was done quite easily since the experts had made similar evaluations. Therefore, on a suggestion from the experts (not from us), the consensus solution became the average. Afterwards there was a discussion about whether this was a good solution or not, and the experts thought it was good.

The consensus solution was then used as a new characteristic (y5) and was analyzed with standard Design of Experiments procedures. The experts' evaluations, the consensus evaluation (y5) and the calculated effects can be found in Table 4.

From the normal plot in Figure 6 it can be seen that x1 and the interaction x1x2 are active.


Table 4. The experts' evaluations, the consensus evaluations and the calculated effects

      x1x4  x4x5  x3x5  x2x5  x3x4  x2x4  x2x3  x1x5                                   Expert no.            y5
Run   I     x1    x2    x3    x1x2  x1x3  x5    x4      y1      y2     y3     y4      1    2    3   Total  consensus
1     1     −1    −1    −1     1     1     1    −1    0.0233   2.99   15.82  24.14   150  110  100   360    120.0
2     1      1    −1    −1    −1    −1     1     1    0.0216   3.58   18.75  25.73    25   50   50   125     41.7
3     1     −1     1    −1    −1     1    −1     1    0.0243   2.65   18.32  24.14    75   80  100   255     85.0
4     1      1     1    −1     1    −1    −1    −1    0.0220   2.65   20.90  25.73    75   70  100   245     81.7
5     1     −1    −1     1     1    −1    −1     1    0.0229   2.98   15.82  24.14   125  100  125   350    116.7
6     1      1    −1     1    −1     1    −1    −1    0.0214   3.21   18.75  25.73    50   50   50   150     50.0
7     1     −1     1     1    −1    −1     1    −1    0.0246   2.68   18.32  24.14   100   90  100   290     96.7
8     1      1     1     1     1     1     1     1    0.0216   2.66   20.90  25.73    75   85  150   310    103.3
9     1      0     0     0     0     0     0     0    0.0221   2.81   18.54  24.95   100  100  100   300    100.0

Effects     −35.4 9.583 9.583 37.08 5.417 7.083 −0.42                                                        88.3
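The effects in Table 4 follow from the standard factorial calculation: an effect is the mean consensus score at a column's high level minus the mean at its low level, while the center point enters only the overall mean. As a cross-check, the sketch below recomputes two of the tabulated effects from the consensus scores y5.

```python
# Cross-check (sketch): recomputing effects in Table 4 from the
# consensus scores y5 of the eight corner runs.
import numpy as np

x1 = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
x2 = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
y5 = np.array([120.0, 41.7, 85.0, 81.7, 116.7, 50.0, 96.7, 103.3])

effect_x1 = y5[x1 == 1].mean() - y5[x1 == -1].mean()
effect_x1x2 = y5[x1 * x2 == 1].mean() - y5[x1 * x2 == -1].mean()
mean_all = np.append(y5, 100.0).mean()   # eight corners plus center

print(effect_x1, effect_x1x2, mean_all)
```

This reproduces the tabulated −35.4 for x1, 37.08 for x1x2 and the overall mean 88.3 (up to rounding).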

Figure 6. Normal plot from the analysis after the first expert group meeting

Considering that x1x2 is active, we also chose to include x2 in the model. The prediction model is:

y5 = 88.3 − (35.4/2)x1 + (37.1/2)x1x2 + (9.58/2)x2

Since there was a center point in the design, we were able to test for non-linearity. This test supported the prediction model, i.e. a linear model.

Table 5. The settings for x1 and x2 in the new experiments

Step    x1      x2
0       0.00    0.00
1      −0.25    0.07
2      −0.50    0.01
3      −0.75   −0.19
4      −1.00   −0.43
5      −1.25   −0.67
6      −1.50   −0.91

3.4. The second iteration

The prediction model was used to find the direction in which y5 increases the most. This was done using the partial derivatives (a kind of modified steepest ascent approach, see [5]):

∂y5/∂x1 = −17.7 + 18.5 x2
∂y5/∂x2 = 4.8 + 18.5 x1

The center point was used as a starting point when finding the new direction. We also decided to use a step of 0.25 to calculate new settings for x1 in the new experiments. The settings for x2 are then calculated by the following formula:

x2,new = x1,new · [∂y5/∂x2(x1,old)] / [∂y5/∂x1(x2,old)]
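The steepest-ascent step above can be sketched numerically. Starting from the center point, x1 is stepped by −0.25 and x2 follows the ratio of the partial derivatives; only the first step is shown here, the full sequence being listed in Table 5.

```python
# Sketch of the modified steepest-ascent step, using the partial
# derivatives of the prediction model.
def dy_dx1(x2):
    return -17.7 + 18.5 * x2

def dy_dx2(x1):
    return 4.8 + 18.5 * x1

x1_old, x2_old = 0.0, 0.0          # center point
x1_new = x1_old - 0.25             # chosen step for x1
x2_new = x1_new * dy_dx2(x1_old) / dy_dx1(x2_old)
print(round(x2_new, 2))            # 0.07, as in Table 5
```

Note that the derivatives are re-evaluated at the previous point for each subsequent step, so the path curves as the interaction term x1x2 takes effect.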

Six new experiments were run in the simulation package according to the settings in Table 5. All other variables, which had earlier been found to be inert, were held constant in the new experiments.

For each experiment in Table 5 a new concept could be created. The six new concepts were presented to the expert group along with the old center point, and the experts were asked to identify the best of the seven concepts. The experts found the concept corresponding to experiment 1 in Table 5 to be the best one. This is still within the experimental region from the first set of experiments. This could be an indication that the optimum is near the experimental region, or that there are other variables influencing the response which have not been included in the experiment, or have been included but not varied sufficiently. Another possibility is that the experts have changed opinion since the previous evaluation.

Table 6. The new design matrix with results

                                                                                          y6
Run   I     x1    x2    x4    x1x2  x1x4  x2x4  x1x2x4    y1      y2     y3     y4     consensus
1     1     −1    −1    −1     1     1     1    −1      0.0229   2.86   17.68  24.55      55
2     1      1    −1    −1    −1    −1     1     1      0.0234   2.85   18.16  24.55      50
3     1     −1     1    −1    −1     1    −1     1      0.0223   2.89   18.38  24.95      50
4     1      1     1    −1     1    −1    −1    −1      0.0227   2.86   18.84  24.95      50
5     1     −1    −1     1     1    −1    −1     1      0.0226   2.87   17.68  24.55     120
6     1      1    −1     1    −1     1    −1    −1      0.0231   2.85   18.16  25.55      90
7     1     −1     1     1    −1    −1     1    −1      0.0220   2.90   18.38  24.95      70
8     1      1     1     1     1     1     1     1      0.0224   2.87   18.84  24.95     110
9     1      0     0     0     0     0     0     0      0.0226   2.86   18.27  24.75     100

Effects     −8.75  1.25 46.25 18.75  3.75 −6.25 16.25                                    77.2

3.5. The third simulation

Based on the results from the second simulation, we decided to set up a new experimental design with the new best point as the center point. We also decided to add x4, and to widen the range between its high and low levels. The reason for this was that the experts thought that we might not have varied this variable enough during the first set of experiments.

The new design matrix can be seen in Table 6. This time a non-fractionated 2^3 design was chosen.

Just like the first time, concepts were created and presented to the expert group. The procedure during evaluation of the concepts was also the same as the first time, except that this time the experts reached consensus without first setting the points individually. The new consensus solution was denoted by y6.

The advice from the experts was found to be valuable; x4 seems to be active when the range between its high and low levels is stretched, see Figure 7. To increase y6, x4 should be set at its high level. The normal plot in Figure 7 also indicates that any additional conclusions are not justified. The effects do not really line up well on a straight line. Also, the test for quadratic terms indicates that an extension of the experimental design to a Central Composite Design (CCD) might be appropriate.

Figure 7. Normal plot for the new experiment

3.6. Results from the experiment

The performance of the system was much improved during the experiments carried out, and we leave the extension to a CCD to a later part of this project. Table 7 summarizes the results from the experiment.


Table 7. The results from the experiment

x1: Pump size              Should be set to −0.25 according to the settings in the first experiment
x2: Cylinder size          Should be set to +0.07 according to the settings in the first experiment
x3: Reservoir pressure     Not active
x4: Control parameter C1   Should be set to +1 according to the settings in the second experiment
x5: Control parameter C2   Not active

3.7. Concluding remarks from the example

Finally, a few remarks are called for. Since this was a continuation of another project, the system was already programmed in the simulation package when this project started. Therefore, the characteristics were given and it was decided to include all of them in the study. However, normally much time should be devoted to the selection of which characteristics to include in the study.

In our case we thought that we had sufficient knowledge about which levels to use for the parameters before the experiment, which turned out to be a mistake. A lesson learned from this is that it is very important to bring in as much expertise as possible in an early phase of the project.

It should be noted that the experts were not given any information about which system parameters were varied in the experiments. The reason for this is that the experts should focus on the evaluation of the concepts through their system characteristics only.

During the evaluation of the concepts from the first simulation, we thought it was especially important that all experts first did an individual evaluation before the consensus process started. When evaluating the concepts from the third simulation we did not force the experts to do the individual evaluation. By this time the experts knew the system and had similar opinions about what was important to look for in the concepts, which was the reason that we let them do it this way. When analyzing this procedure afterwards, we must admit that it probably would have been better to have used the same procedure in all of the evaluations.

4. THE PROPOSED METHODOLOGY—THEORETICAL BACKGROUND

To overcome the known problems when improving systems with multiple system characteristics, we have developed a methodology that is based on active expert involvement. The knowledge of the experts is used very explicitly, and it is also possible for the experts to change opinion during the optimization process.

The methodology consists of methods from four main areas: Expert Groups, Design of Experiments, Conjoint Analysis and Optimization. These areas are presented in Sections 4.1–4.4. Section 4.5 is a step-by-step overview of the methodology.

4.1. Expert Groups

Expert involvement is used in many fields [22]. In multiple-characteristic problems, experts' involvement mostly consists of the construction of desirability functions [18,23]. However, the formal systematic involvement of experts is not described. In Conjoint Analysis the experts, i.e. the customers, evaluate different concepts in the product design process. Another field containing expert involvement is risk analysis, where the evaluations of risks performed by experts play an important role [22].

A technique used in Market Research is the Delphi method, which is described briefly in e.g. [22]. This technique uses a questionnaire to which chosen experts respond. The responses are anonymous and the respondents do not know each other. The results are analyzed and sent back to the respondents, including the median and interquartile range. The respondents are asked to re-evaluate their initial answers (predictions) and, in the cases where answers after re-evaluation are outside the interquartile range, arguments are required. The procedure is iterated three or four times, and the spread in the final stage is lower than in the first, i.e. the experts are closer to consensus. A modified approach is to let the experts rank their own expertise and base the analysis on the persons claiming to have the most expertise. This approach is said to improve the accuracy of the method [22].

In Conjoint Analysis and Risk Analysis, interaction between group members is not permitted; in Delphi techniques an interaction stage exists, but it excludes a face-to-face meeting. The evaluation procedure of quality awards starts with an individual evaluation followed by a group meeting in which consensus is reached [24]. This could be seen as a technique using expert groups in the same manner as we do.


Cooke [22] identifies four problems in utilizing expert opinion: the divergence of expert opinion, the dependence between the opinions of different experts, the reproducibility of the result and the calibration of experts' assessments.

4.2. Design of Experiments

Design of Experiments is an established technique used in product and process improvement, originating from the work of Sir Ronald Fisher, see e.g. [25]. Designed experiments introduce variation in a structured manner using control factors (or parameters), with the aim of reducing variation in the response variables. There exist many experimental designs using two- or three-level factors, see e.g. [25–27]. The designs of most interest here are two-level geometric factorial or fractional factorial designs (2^(n−p) designs), see e.g. [4,5]. The experiments are preferably run in sequence, e.g. moving to new locations in the factor space, or augmenting designs for curvature and higher-level interactions, e.g. with a Central Composite Design, see e.g. [5,28].

Influential sources of variation can be distinguished from inert ones with analysis techniques using normal plots [29,30] or analysis of variance (ANOVA), see e.g. [4,31].
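The coordinates behind such a (half-)normal plot are easy to compute: the absolute effects are ordered and paired with half-normal quantiles, and active effects show up as points far off the line through the bulk of near-zero effects. The sketch below uses the effects from the first experiment (Table 4); it only prints the plotting coordinates, the plot itself being left to any graphics tool.

```python
# Sketch: coordinates for a half-normal plot of the effects from the
# first experiment (Table 4).
from statistics import NormalDist

effects = {"x1": -35.4, "x2": 9.583, "x3": 9.583, "x1x2": 37.08,
           "x1x3": 5.417, "x5": 7.083, "x4": -0.42}

m = len(effects)
ordered = sorted(effects.items(), key=lambda kv: abs(kv[1]))
for i, (name, e) in enumerate(ordered, start=1):
    p = (i - 0.5) / m                        # plotting position
    q = NormalDist().inv_cdf(0.5 + 0.5 * p)  # half-normal quantile
    print(f"{name:5s} |effect| = {abs(e):6.2f}  quantile = {q:.3f}")
```

The two largest absolute effects, x1 and x1x2, sit far from the remaining five, which is consistent with the conclusion drawn from Figure 6.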

4.3. Conjoint Analysis

Conjoint Analysis was originally introduced within the field of psychology [32] and has been used in the field of market research for a long time [33]. More recently it has been introduced as a tool supporting Quality Function Deployment (QFD) in the design process [34]. Conjoint Analysis is included in the seven product-planning tools (7 PP-tools), i.e. tools and techniques useful in new product development.

Conjoint Analysis can be characterized as Design of Experiments applied to marketing decisions; attributes of interest are considered jointly. Three different approaches exist in Conjoint Analysis: paired comparison, full-profile and trade-off [34]. The full-profile approach uses an experimental plan to generate product concepts that are evaluated by customers. The concepts are shown to the customers, preferably with graphical support explaining each attribute.

The evaluation is made using a rating or ranking scale. Gustafsson [34] identifies five kinds of scales, all of which combine ranking and rating scales (e.g. the numbers 1 to 9).

The number of attributes should be limited to five or six [35]. The number of concepts is usually twelve to sixteen [36].
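The full-profile approach described above can be sketched in code: an experimental plan is translated into concept cards for the respondents to rate or rank. The attributes, levels and half-fraction plan below are hypothetical examples, not those of the paper.

```python
# Hypothetical attributes and levels for a full-profile conjoint study
attributes = {
    "response time": ("fast", "slow"),
    "weight": ("low", "high"),
    "cost": ("low", "high"),
}

def concept_cards(plan):
    """Turn an experimental plan (rows of -1/+1 codes, one column per
    attribute) into full-profile concept descriptions, one card per
    run, for the respondents to evaluate."""
    names = list(attributes)
    cards = []
    for row in plan:
        card = {names[j]: attributes[names[j]][0 if c == -1 else 1]
                for j, c in enumerate(row)}
        cards.append(card)
    return cards

# A 2^(3-1) half fraction keeps the number of concepts manageable
plan = [(-1, -1, 1), (1, -1, -1), (-1, 1, -1), (1, 1, 1)]
for card in concept_cards(plan):
    print(card)
```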

4.4. Optimization

Optimization is a very broad area. Here, two areas that have influenced us during this project are presented briefly.

Multi-Criteria Decision Making (MCDM) is a well-known approach within operational research when there are multiple objectives, see [37]. The utilization of MCDM compensates for human bounded rationality by highlighting conflicts and presenting alternative options in problems. It facilitates the implementation of decisions by clarifying concepts and entities when investigating available options, and it provides insight into the problem character by forcing decision-making units to communicate preferences and face real trade-offs [37]. Agrell [37] suggests an iterative algorithm for the optimization procedure in which solutions on the border of the admissible y-space are investigated, which differs from the methodology proposed in this paper.

Evolutionary Operation (EVOP) was introduced by Box [38]. It is a method for continuous monitoring and improvement of a full-scale process, with the objective of moving the operating conditions toward the optimum or following a 'drift' [5]. This is done by continuously using Design of Experiments during the normal running of a process, making relatively small changes in the parameters.
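One EVOP cycle can be sketched as a small 2^2 design of deliberately small perturbations around the current operating point, run during normal production. The function name, variable names and numbers below are illustrative only.

```python
def evop_cycle(center, deltas):
    """One EVOP cycle: a 2^2 design of small perturbations around the
    current operating conditions, plus the current point itself, so
    that the process keeps producing on-specification output while the
    experiment runs."""
    (c1, c2), (d1, d2) = center, deltas
    return [(c1, c2),                        # current operating point
            (c1 - d1, c2 - d2), (c1 + d1, c2 - d2),
            (c1 - d1, c2 + d2), (c1 + d1, c2 + d2)]

# Hypothetical: temperature 150, pressure 30, with deliberately small steps
for run in evop_cycle((150.0, 30.0), (2.0, 0.5)):
    print(run)
```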

4.5. Step-by-step overview of the proposed methodology

Below, the proposed methodology is presented in a step-by-step fashion. Under each step, special attention is given to the most important things to consider.

1. Appoint an expert group. It is important that the members of the group come from many different areas of expertise so that they contribute different knowledge.

2. Plan and carry out the experiments. Perform an experiment using an appropriate experimental design. We recommend following the planning procedure proposed by Coleman and Montgomery [39]. In most cases, when there are many system parameters to investigate, a fractional factorial design is preferable in order to save resources for later stages of the project. When the experiments have been carried out, the result is a set of system characteristics for each experiment.


3. Create concepts and let the expert group evaluate them. Create a concept (see Figure 5) for each set of system characteristics given in step 2. Present the concepts to the expert group and describe all system characteristics carefully to them. There are many ways to let the experts evaluate the concepts. We believe that the use of a reference point is important, so that the experts have a common starting point when doing the evaluation. In our example the experts were told to give points (individually) to the concepts, given that the reference point was worth 100 points. The number of points to give to each concept was unlimited. Another way to give points would have been to give the experts, for example, 800 points to distribute between the different concepts. Which of these two ways is preferable needs to be investigated further. When the experts have done the individual evaluation, a consensus process follows, in which the experts are supposed to reach a solution that is acceptable to all of them. The result from this step is the evaluation from the experts.
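The reference-point scoring in step 3 can be sketched as follows. In this hypothetical sketch each expert's points are rescaled so that the reference concept is worth exactly 100 for everyone, and the rescaled scores are then averaged; the function name and the scores are invented for illustration, and the result is only a common starting point for the consensus discussion, not a substitute for it.

```python
def relative_scores(scores, reference_id):
    """Rescale each expert's individual points so that the agreed
    reference concept is worth exactly 100 for every expert, then
    average the rescaled scores across experts."""
    agg = {}
    for expert in scores.values():
        ref = expert[reference_id]
        for concept, pts in expert.items():
            agg.setdefault(concept, []).append(100.0 * pts / ref)
    return {c: sum(v) / len(v) for c, v in agg.items()}

# Two hypothetical experts scoring three concepts; "ref" is the reference.
# Expert B used a different overall scale, which the rescaling absorbs.
scores = {
    "expert A": {"ref": 100, "c1": 120, "c2": 80},
    "expert B": {"ref": 200, "c1": 220, "c2": 190},
}
print(relative_scores(scores, "ref"))
```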

4. Find the direction in which to change the settings of the system parameters. The experts' evaluation is then used as a new system characteristic, which is analyzed with standard Design of Experiments methods. This gives a prediction model, which can be used to calculate the direction in which improved performance of the system can be expected. This is done using the partial derivatives of the prediction model. All system characteristics should be analyzed individually to gain knowledge about the system.
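For a first-order prediction model the partial derivatives are simply the regression coefficients, so the improvement direction of step 4 can be sketched as a normalized coefficient vector (the classical steepest-ascent direction). The function name and coefficient values below are hypothetical.

```python
def steepest_ascent_direction(coefficients):
    """For a first-order prediction model
        y_hat = b0 + b1*x1 + ... + bk*xk   (coded units),
    the partial derivative with respect to x_j is simply b_j, so the
    direction of expected improvement is the coefficient vector
    (b1, ..., bk), here normalized to unit length."""
    norm = sum(b * b for b in coefficients) ** 0.5
    return [b / norm for b in coefficients]

# Hypothetical fitted effects for the expert-score response (coded units)
b = [3.0, -4.0]  # b1, b2 from the analysis of the experts' evaluation
print(steepest_ascent_direction(b))
```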

5. Plan and carry out new experiments in the direction obtained in step 4. The experiments performed in this step are determined from the partial derivatives calculated in the previous step. The result from this step is, just as in step 2, a set of system characteristics for each experiment.
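The runs of step 5 can be laid out as points along the direction obtained in step 4, at multiples of a chosen step size from the design centre, a standard steepest-ascent path. The function name, centre, direction and step size below are hypothetical.

```python
def ascent_path(center, direction, step, n_runs):
    """Lay out experiments along an improvement direction: runs at
    successive multiples of `step` from the current design centre,
    all in coded units."""
    return [[c + i * step * d for c, d in zip(center, direction)]
            for i in range(1, n_runs + 1)]

# Hypothetical: centre (0, 0), unit direction (0.6, -0.8), four runs
for run in ascent_path([0.0, 0.0], [0.6, -0.8], 1.0, 4):
    print(run)
```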

6. Create new concepts and let the expert group find the best one. This step is almost the same as step 3; the only difference is the evaluation process. Since the purpose is to find the best point in the given direction, the experts are only asked to determine which of the concepts is the best. Then, the experts are asked to investigate whether the chosen concept could be an optimum (or a near optimum). If the experts believe that the concept can be improved, more experiments are needed (see step 7).

7. Use a new experimental design with the point corresponding to the best concept in step 6 as the center point. With this step the methodology starts over again at step 2. A new experimental design is chosen, experiments are carried out and the results are presented to the expert group. During the optimization procedure, knowledge may be gained about parameters that were not varied earlier in the experiments. If that is the case, these parameters can be included in this step. The analysis is done according to steps 3 and 4. If more experiments are needed to find an optimum, steps 5, 6 and 7 are repeated.

As can be seen, the proposed methodology is built up of parts from all the knowledge areas in Sections 4.1–4.4. Even though it is not explicitly used in a specific step of the methodology, the optimization theory in Section 4.4 has influenced the whole strategy of the methodology.

5. CONCLUSIONS

Even though we have tried the proposed methodology only once, a couple of benefits and drawbacks of the method became evident.

One of the benefits is that it is easier for the experts to evaluate a concept than to weigh different system characteristics against each other. It is also important that the experts have the possibility to change their opinion during the improvement process. Since the concepts should, if possible, be visualized graphically, important information can be gained from the experts. In our case, the experts concluded that the response time of the system was much too long and had to be improved before the system could be used in an aircraft. If y1 (control error) had not been pictured graphically, this would most certainly not have been detected so early in the product development process. Another benefit is that many persons become involved at an early stage.

One drawback of the methodology is that it is difficult for the experts to evaluate the concepts when the experimental region is close to the optimum, because then the concepts become very alike. However, if the experts cannot tell the difference between the concepts, further improvement of the system's performance might not be needed.

One possible way forward is to use the proposed methodology in screening experiments and other methods (for example, quadratic loss functions) when getting closer to the optimum. This would use the strengths of each method. The proposed methodology involves many persons, who learn the system early and steer the project in the right direction from the


beginning, avoiding many traps along the way. During the first experiments, these persons gain much of the knowledge needed to set the weights for fine-tuning with the other methods later in the project.

6. FUTURE RESEARCH

In the continuation of this project we intend to investigate further how robustness questions can be handled by an expert group. One of the hardest problems regarding robustness is that deviations in noise parameters are often small compared to changes in control parameters, and therefore it becomes difficult for the experts to detect the small changes in the concepts.

Another area where improvement is needed is the composition of the expert group. Questions that need to be investigated include, for example: What is the optimum size of the group? Which expertise should be included in the group?

The evaluation process must also be developed further. How should points be given? Is it efficient to use only one concept as a reference point?

The connections to Multi-Criteria Decision Making (MCDM) and other methods within operational research must also be investigated more thoroughly.

ACKNOWLEDGEMENTS

This project has been financed by NFFP (Nationellt Flygtekniskt Forskningsprogram), a national program for research on aircraft technology. We want to thank the experts at Saab AB in Linköping for their contributions to this project. Valuable comments on the paper have been given by Associate Professor Mats Lorstad, Linköping University, and Katarina Nilsson, Ph.D., Saab AB.

REFERENCES

1. Clausing DP. Total Quality Development. John Wiley & Sons, Inc: New York, 1994.

2. Bisgaard S. A conceptual framework for the use of quality concepts and statistical methods in product design. Journal of Engineering Design 1992; 3(1):31–47.

3. Simpson TW, Peplinski J, Koch PN, Allen JK. On the use of statistics in design and the implications for deterministic computer experiments. ASME Design Engineering Technical Conferences, Sacramento, CA, 14–17 September 1997.

4. Box GEP, Hunter WG, Hunter JS. Statistics for Experimenters – An Introduction to Design, Data Analysis, and Model Building. John Wiley & Sons, Inc: New York, 1978.

5. Myers RH, Montgomery DC. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. John Wiley & Sons, Inc: New York, 1995.

6. Carlyle WM, Montgomery DC, Runger GC. Optimization problems and methods in quality control and improvement. Journal of Quality Technology 2000; 32(1):1–31.

7. Derringer G, Suich R. Simultaneous optimization of several response variables. Journal of Quality Technology 1980; 12(4):214–219.

8. Khuri AI. Analysis of multiresponse experiments: a review. Statistical Design and Analysis of Industrial Experiments, Ghosh S (ed.). Marcel Dekker Inc: New York, 1990.

9. Osborne DM, Armacost RL, Pet-Edwards J. State of the art in multiple response surface methodology. IEEE International Conference on Computational Cybernetics and Simulation, Orlando, FL, 1997.

10. Del Castillo E, Montgomery DC, McCarville DR. Modified desirability functions for multiple response optimization. Journal of Quality Technology 1996; 28(3):337–345.

11. Harrington EC. The desirability function. Industrial Quality Control 1965; 21(10):494–498.

12. Khuri AI, Conlon M. Simultaneous optimization of multiple responses represented by polynomial regression functions. Technometrics 1981; 23(4):363–374.

13. Pignatiello JJ Jr. Strategies for robust multiresponse quality engineering. IIE Transactions 1993; 25(3):5–15.

14. Ames AE, Mattucci N, MacDonald S, Szonyi G, Hawkins DM. Quality loss functions for optimization across multiple response surfaces. Journal of Quality Technology 1997; 29(3):339–346.

15. Vining GG. A compromise approach to multiresponse optimization. Journal of Quality Technology 1998; 30(4):309–313.

16. Tong LI, Su CT. Optimizing multi-response problems in the Taguchi method by fuzzy multiple attribute decision making. Quality and Reliability Engineering International 1997; 13:25–34.

17. Vining GG, Myers RH. Combining Taguchi and response surface philosophies: a dual response approach. Journal of Quality Technology 1990; 21(1):38–45.

18. Das P. Concurrent optimization of multiresponse product performance. Quality Engineering 1999; 11(3):365–368.

19. HOPSAN, A Simulation Package, User's Guide. Technical Report LITH-IKP-R-704, Department of Mechanical Engineering, Linköping University, Sweden, 1991.

20. Box MJ. A new method of constrained optimization and a comparison with other methods. Computer Journal 1965; 8:42–52.

21. Nilsson K, Andersson J, Krus P. Method for integrated systems design – a study of EHA system. Recent Advances in Aerospace Hydraulics, Toulouse, France, 24–28 November 1998.

22. Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press: New York, 1991.

23. Derringer GC. A balancing act: optimizing a product's properties. Quality Progress 1994; June:51–58.

24. SIQ. Utmärkelsen Svensk Kvalitet (The Swedish Quality Award). Institutet för Kvalitetsutveckling: Gothenburg, 1999 (in Swedish).

25. Fisher RA. The Design of Experiments. Oliver and Boyd: Edinburgh, 1935.

26. Plackett RL, Burman JP. The design of optimum multifactorial experiments. Biometrika 1946; 33:305–325.

27. Box GEP, Behnken DW. Some new three level designs for the study of quantitative variables. Technometrics 1960; 2(4):455–475.

28. Box GEP. George's column – sequential experimentation and sequential assembly of designs. Quality Engineering 1992; 5(2):321–330.

29. Daniel C. Use of half-normal plots in interpreting factorial two-level experiments. Technometrics 1959; 1(4):311–341.


30. Daniel C. Application of Statistics to Industrial Experimentation. John Wiley & Sons, Inc: New York, 1976.

31. Montgomery DC. Design and Analysis of Experiments (3rd edn). John Wiley & Sons, Inc: New York, 1991.

32. Luce D, Tukey J. Simultaneous conjoint measurement: a new type of fundamental measurement. Journal of Mathematical Psychology 1964; 1:1–27.

33. Green EP, Rao V. Conjoint measurement for quantifying judgmental data. Journal of Marketing Research 1971; 1(August):61–68.

34. Gustafsson A. Customer focused product development by conjoint analysis and QFD. PhD Dissertation, Linköping Studies in Science and Technology, Linköping University, 1996.

35. Green EP, Srinivasan V. Conjoint analysis in marketing: new developments with implications for research and practice. Journal of Marketing 1990; 54(October):3–19.

36. Wittink RD, Cattin P. Commercial use of conjoint analysis: an update. Journal of Marketing 1989; 53(July):91–96.

37. Agrell PJ. Interactive multi-criteria decision-making in production economics. PhD Dissertation, Production-Economic Research in Linköping, Linköping, 1995.

38. Box GEP. Evolutionary operation: a method for increasing industrial productivity. Applied Statistics 1957; 6:81–101.

39. Coleman DE, Montgomery DC. A systematic approach to planning for designed industrial experiments. Technometrics 1993; 35(1):1–27.

Authors’ biographies:

Per Persson is a Ph.D. candidate at the Division of Quality Technology and Management, Department of Mechanical Engineering, Linköping University, Sweden. He received his M.Sc. degree in Industrial Engineering and Management from the same university in 1996. His research is mainly focused on design of experiments, but other statistical methods within the quality technology area are also in his field of interest. He is also a member of the board of the committee for statistical methods of the Swedish Association for Quality.

Peter Kammerlind is also a Ph.D. candidate at the Division of Quality Technology and Management, Department of Mechanical Engineering, Linköping University. He received his M.Sc. degree in Industrial Engineering and Management from the same university in 1998. Robust design methodology is his main research area, but he is also an examiner for the Swedish Health Care Quality Award.

Bo Bergman is the professor of Total Quality Management at Chalmers University of Technology, Gothenburg, Sweden. Before that he was the professor of Quality Technology and Management at Linköping University. Earlier he worked for 15 years in industry, primarily with reliability, quality management and statistical consultation. During part of this period he was also an adjunct professor at the Royal Institute of Technology in Stockholm. He is an author or co-author of seven books and more than fifty papers published in international scientific journals.

Johan Andersson is a Ph.D. candidate at the Division of Fluid and Mechanical Engineering Systems, Department of Mechanical Engineering, Linköping University. His research interests are modeling and simulation of multi-domain systems and, especially, how simulation and optimization techniques can be employed to support the development process. He presented his Licentiate thesis, entitled 'On Engineering Systems Design – A Simulation and Optimization Approach', in May 1999.
