
RELIABILITY BASED DESIGN OPTIMIZATION: FORMULATIONS AND METHODOLOGIES

A Dissertation

Submitted to the Graduate School

of the University of Notre Dame

in Partial Fulfillment of the Requirements

for the Degree of

Doctor of Philosophy

by

Harish Agarwal, B.Tech.(Hons), M.S.M.E.

John E. Renaud, Director

Graduate Program in Aerospace and Mechanical Engineering

Notre Dame, Indiana

December 2004

RELIABILITY BASED DESIGN OPTIMIZATION: FORMULATIONS AND METHODOLOGIES

Abstract

by

Harish Agarwal

Modern products, ranging from simple components to complex systems, should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer product under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment.

The original contributions of this research are the development of a novel, efficient, and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques to the unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty.


The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique for obtaining consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.


To Baba, Maa, Papa, Mom, Pawan, and Sweta


CONTENTS

FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . vi

TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . viii

ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . ix

CHAPTER 1: INTRODUCTION . . . . . . . . . . . . . . . . . . 1
  1.1 Design optimization . . . . . . . . . . . . . . . . . 1
  1.2 Reliability based design optimization . . . . . . . . 2
  1.3 Research Objectives . . . . . . . . . . . . . . . . . 5
    1.3.1 Decoupled methodology for reliability based design optimization . . . 5
    1.3.2 Unilevel methodology for reliability based design optimization . . . 6
    1.3.3 Application of continuation methods in unilevel reliability based design optimization formulation . . . 7
    1.3.4 Reliability based design optimization under epistemic uncertainty . . . 7
  1.4 Overview of the dissertation . . . . . . . . . . . . . 8

CHAPTER 2: DESIGN AND OPTIMIZATION . . . . . . . . . . . . . 10
  2.1 Deterministic design optimization . . . . . . . . . . 12
    2.1.1 Multidisciplinary systems design . . . . . . . . 13
    2.1.2 Multidisciplinary design optimization . . . . . . 15
    2.1.3 MDO Algorithms . . . . . . . . . . . . . . . . . 16
  2.2 Summary . . . . . . . . . . . . . . . . . . . . . . . 17

CHAPTER 3: UNCERTAINTY: TYPES AND THEORIES . . . . . . . . . 18
  3.1 Classification of uncertainties . . . . . . . . . . . 18
    3.1.1 Aleatory Uncertainty . . . . . . . . . . . . . . 21
    3.1.2 Epistemic Uncertainty . . . . . . . . . . . . . . 22
    3.1.3 Error (Numerical Uncertainty) . . . . . . . . . . 23
  3.2 Uncertainty modeling techniques . . . . . . . . . . . 23
    3.2.1 Probability Theory . . . . . . . . . . . . . . . 24
    3.2.2 Dempster-Shafer Theory . . . . . . . . . . . . . 25
    3.2.3 Convex Models of Uncertainty and Interval Analysis . . . 29
    3.2.4 Possibility/Fuzzy Set Theory Based Approaches . . . 29
  3.3 Optimization Under Uncertainty . . . . . . . . . . . . 30
    3.3.1 Robust Design . . . . . . . . . . . . . . . . . . 31
    3.3.2 Reliability based design optimization . . . . . . 31
    3.3.3 Fuzzy Optimization . . . . . . . . . . . . . . . 32
    3.3.4 Reliability Based Design Optimization Using Evidence Theory . . . 33
  3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . 35

CHAPTER 4: RELIABILITY BASED DESIGN OPTIMIZATION . . . . . . 36
  4.1 RBDO formulations: efficiency and robustness . . . . . 36
    4.1.1 Double Loop Methods for RBDO . . . . . . . . . . 37
    4.1.2 Probabilistic reliability analysis . . . . . . . 39
    4.1.3 Sequential Methods for RBDO . . . . . . . . . . . 45
    4.1.4 Unilevel Methods for RBDO . . . . . . . . . . . . 48
  4.2 Summary . . . . . . . . . . . . . . . . . . . . . . . 49

CHAPTER 5: DECOUPLED RBDO METHODOLOGY . . . . . . . . . . . . 50
  5.1 A new sequential RBDO methodology . . . . . . . . . . 50
    5.1.1 Sensitivity of Optimal Solution to Problem Parameters . . . 52
  5.2 Test Problems . . . . . . . . . . . . . . . . . . . . 54
    5.2.1 Short Rectangular Column . . . . . . . . . . . . 54
    5.2.2 Analytical Problem . . . . . . . . . . . . . . . 56
    5.2.3 Cantilever Beam . . . . . . . . . . . . . . . . . 61
    5.2.4 Steel Column . . . . . . . . . . . . . . . . . . 62
  5.3 Summary . . . . . . . . . . . . . . . . . . . . . . . 64

CHAPTER 6: UNILEVEL RBDO METHODOLOGY . . . . . . . . . . . . 65
  6.1 A new unilevel RBDO methodology . . . . . . . . . . . 65
  6.2 Test Problems . . . . . . . . . . . . . . . . . . . . 69
    6.2.1 Analytical Problem . . . . . . . . . . . . . . . 69
    6.2.2 Control Augmented Structures Problem . . . . . . 73
  6.3 Summary . . . . . . . . . . . . . . . . . . . . . . . 79

CHAPTER 7: CONTINUATION METHODS IN OPTIMIZATION . . . . . . . 82
  7.1 Proposed Algorithm . . . . . . . . . . . . . . . . . . 82
  7.2 Test Problems . . . . . . . . . . . . . . . . . . . . 83
    7.2.1 Short Rectangular Column . . . . . . . . . . . . 83
    7.2.2 Steel Column . . . . . . . . . . . . . . . . . . 85
  7.3 Summary . . . . . . . . . . . . . . . . . . . . . . . 87

CHAPTER 8: RELIABILITY BASED DESIGN OPTIMIZATION UNDER EPISTEMIC UNCERTAINTY . . . 89
  8.1 Epistemic uncertainty quantification . . . . . . . . . 90
  8.2 Deterministic Optimization . . . . . . . . . . . . . . 95
  8.3 Optimization under epistemic uncertainty . . . . . . . 96
  8.4 Sequential Approximate Optimization . . . . . . . . . 97
  8.5 Test Problems . . . . . . . . . . . . . . . . . . . . 100
  8.6 Analytic Test Problem . . . . . . . . . . . . . . . . 100
  8.7 Aircraft concept sizing problem . . . . . . . . . . . 104
  8.8 Summary . . . . . . . . . . . . . . . . . . . . . . . 110

CHAPTER 9: CONCLUSIONS AND FUTURE WORK . . . . . . . . . . . 112
  9.1 Summary and conclusions . . . . . . . . . . . . . . . 113
    9.1.1 Decoupled methodology for reliability based design optimization . . . 113
    9.1.2 Unilevel methodology for reliability based design optimization . . . 114
    9.1.3 Continuation methods for unilevel RBDO . . . . . 115
    9.1.4 Reliability based design optimization under epistemic uncertainty . . . 116
  9.2 Recommendations for future work . . . . . . . . . . . 117
    9.2.1 Decoupled RBDO using higher order methods . . . . 117
    9.2.2 RBDO for system reliability . . . . . . . . . . . 117
    9.2.3 Homotopy curve tracking for solving unilevel RBDO . . . 117
    9.2.4 Considering total uncertainty in design optimization . . . 118
    9.2.5 Variable fidelity reliability based design optimization . . . 118

BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . 120


FIGURES

2.1 Model of a multidisciplinary system analysis . . . . . . . . . . . . . . 14

3.1 Sources of Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.2 Belief (Bel) and Plausibility (Pl)[6] . . . . . . . . . . . . . . . . . . . 26

3.3 Design trade-off in RBDO . . . . . . . . . . . . . . . . . . . . . . . . 32

4.1 Reliability analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.2 Approximate MPP Estimation . . . . . . . . . . . . . . . . . . . . . . 46

5.1 Proposed RBDO Methodology . . . . . . . . . . . . . . . . . . . . . . 51

5.2 Convergence history for the example problem . . . . . . . . . . . . . 57

5.3 Contours of objective and constraints . . . . . . . . . . . . . . . . . . 58

5.4 Plot showing two reliable optima. . . . . . . . . . . . . . . . . . . . . 59

6.1 Contours of objective and constraints . . . . . . . . . . . . . . . . . . 71

6.2 Plot showing two reliable optima. . . . . . . . . . . . . . . . . . . . . 72

6.3 Control augmented structures problem. . . . . . . . . . . . . . . . . . 74

6.4 Coupling in control augmented structures problem. . . . . . . . . . . 74

7.1 Convergence of objective function. . . . . . . . . . . . . . . . . . . . . 85

7.2 Convergence of optimization variables. . . . . . . . . . . . . . . . . . 86

7.3 Convergence of objective function. . . . . . . . . . . . . . . . . . . . . 88

8.1 Simplified Multidisciplinary Model . . . . . . . . . . . . . . . . . . . 90

8.2 Known BPA structure . . . . . . . . . . . . . . . . . . . . . . . . . . 91

8.3 Complementary Cumulative Belief and Plausibility Function . . . . . 95


8.4 Expert Opinion for δ1 . . . . . . . . . . . . . . . . . . . . . . . . . 102

8.5 Expert Opinion for δ2 . . . . . . . . . . . . . . . . . . . . . . . . . 102

8.6 Design Variable History . . . . . . . . . . . . . . . . . . . . . . . . . 103

8.7 Convergence of objective function . . . . . . . . . . . . . . . . . . . . 104

8.8 Aircraft Concept Sizing Problem . . . . . . . . . . . . . . . . . . . . 106

8.9 Expert Opinion for p3 and p4 . . . . . . . . . . . . . . . . . . . . . . 108

8.10 Convergence of the Objective Function (ACS Problem) . . . . . . . . 109


TABLES

5.1 STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM . 55

5.2 COMPUTATIONAL COMPARISON OF RESULTS (SHORT RECTANGULAR COLUMN) . . . . . . . . . 56

5.3 STARTING POINT [-5,3], SOLUTION [-3.006,0.049] . . . . . . . . . 59

5.4 STARTING POINT [5,3], SOLUTION [2.9277,1.3426] . . . . . . . . . 60

5.5 STOCHASTIC PARAMETERS IN CANTILEVER BEAM PROBLEM 61

5.6 STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM . 63

6.1 STARTING POINT [-5,3], SOLUTION [-3.006,0.049] . . . . . . . . . 71

6.2 STARTING POINT [5,3], SOLUTION [2.9277,1.3426] . . . . . . . . . 73

6.3 STATISTICAL INFORMATION FOR THE RANDOM VARIABLES 77

6.4 MERIT FUNCTION AT THE INITIAL AND FINAL DESIGNS . . 79

6.5 HARD CONSTRAINTS AT THE FINAL DESIGN . . . . . . . . . . 80

6.6 COMPARISON OF COMPUTATIONAL COST OF RBDO METHODS . . . . . . . . . . . . . . . . 80

7.1 STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM . 84

7.2 STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM . 87

8.1 COMPARISON OF DESIGNS . . . . . . . . . . . . . . . . . . . . . . 105

8.2 DESIGN VARIABLES IN ACS PROBLEM . . . . . . . . . . . . . . 106

8.3 LIST OF PARAMETERS IN THE ACS PROBLEM . . . . . . . . . 107

8.4 LIST OF STATES IN THE ACS PROBLEM . . . . . . . . . . . . . 108

8.5 COMPARISON OF DESIGNS (ACS PROBLEM) . . . . . . . . . . . 110


ACKNOWLEDGMENTS

First, I would like to express my sincere gratitude to my advisor, Dr. John E. Renaud, for his support, encouragement, and guidance during my stay at Notre Dame. He has been a constant source of help and inspiration to me. He gave me a lot of freedom in my course and research work, and has been extremely supportive and understanding at all times.

I thank my readers, Dr. John E. Renaud, Dr. Stephen M. Batill, Dr. Steven B. Skaar, and Dr. Alan P. Bowling, for their help in the successful completion of this dissertation.

I appreciate the administrative help provided by the department administrative assistants Ms. Nancy Davis, Ms. Evelyn Addington, Ms. Judith Kenna, and Ms. Nancy O'Connor.

I would like to thank the National Science Foundation, the Center for Applied Mathematics at the University of Notre Dame, the Office of Naval Research, and the Department of Aerospace and Mechanical Engineering for their financial support.

I would like to extend my thanks to Dr. Dhanesh Padmanabhan for helping me settle in at Notre Dame and for his help in my research activities. I would like to thank Dr. Victor Perez for getting me involved in some of my initial research work and for providing information on cutting edge research.

Last but not the least, I thank my other lab members Dr. Xiaoyu Gu, Dr. Weiyu Liu, Andres, Shawn, Alejandro, and Neal. I thank my roommates, Kameshwar, Rajkumar, Sharad, Wyatt, Fabian, Parveen, Shishir, Anupam, and Himanshu.


CHAPTER 1

INTRODUCTION

Modern competitive markets require that engineers design inexpensive and reliable systems and products. These requirements apply broadly across a variety of businesses and products, ranging from small toys for children to passenger cars and space systems such as satellites or space stations. Reduced design cycle times and products characterized by lower prices, higher quality, and greater reliability are the driving factors behind the modern engineering design process. Engineers largely accomplish these objectives through the use of better simulation models, which continue to grow in both complexity and fidelity. The modern design process is thus being increasingly viewed as one of simulation based design. Depending upon the complexity of the system to be designed, simulation based design can be practiced both in the conceptual and preliminary design stages and in the final detailed design stages, albeit with different fidelity design tools.

1.1 Design optimization

The computational speed of computers has increased exponentially during the last 50 years. This has led to the development of large-scale simulation tools, such as finite element methods and computational fluid dynamics codes, for the analysis of complex engineering systems. The availability of complex simulation models that provide a better representation of the actual physical system has provided engineers with an opportunity to obtain improved designs. The process of obtaining optimal designs is known as design optimization.

Increasingly, the modern engineering community is employing optimization as a tool for design. Optimization is used to find optimal designs characterized by lower cost while satisfying performance requirements. Typical engineering examples include minimizing the weight of a cantilever beam while satisfying constraints on maximum stress and allowable deflection, maximizing the lift of an aircraft subject to a constraint on an acceptable range, and so on. The basic paradigm in design optimization is to find a set of design variables that optimizes an objective function while satisfying the performance constraints.

Most engineers, when using optimization for design purposes, assume that the design variables in the problem are deterministic. In this dissertation, this is referred to as deterministic design optimization. A deterministic design optimization does not account for the uncertainties that exist in modeling and simulation, manufacturing processes, design variables and parameters, etc. However, a variety of different kinds of uncertainties are present and need to be accounted for appropriately in the design optimization process.

1.2 Reliability based design optimization

In a deterministic design optimization, the designs are often driven to the limit of the design constraints, leaving little or no latitude for uncertainties. The resulting deterministic optimal solution is usually associated with a high chance of failure of the artifact being designed, due to the influence of uncertainties inherently present during the modeling and manufacturing phases of the artifact and due to uncertainties in its external operating conditions. The uncertainties include variations in certain parameters, which are either controllable (e.g., dimensions) or uncontrollable (e.g., material properties), and model uncertainties and errors associated with the simulation tools used for simulation based design [44].

Uncertainties in simulation based design are inherently present and need to be accounted for in the design optimization process. Uncertainties may lead to large variations in the performance characteristics of the system and a high chance of failure of the artifact. Optimized deterministic designs determined without considering uncertainties can be unreliable and might lead to catastrophic failure of the artifact being designed. Robust design optimization and reliability based design optimization are methodologies that address these problems. Robust designs are designs at which the variation in the performance functions is minimal. Reliable designs are designs at which the chance of failure of the system is low. It is extremely desirable that engineers design for robustness and reliability, as this helps in obtaining large market shares for products under competitive economic conditions. This dissertation specifically focuses on reliability based design optimization problems.
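
To make the distinction concrete, the two paradigms are often summarized by formulations of roughly the following form (an illustrative sketch in generic notation, not the specific formulations developed later in this dissertation), where $\mu_f$ and $\sigma_f$ are the mean and standard deviation of a performance function, $g_i$ are the failure modes, $X$ are the random variables, and $P_{f_i}^t$ are target failure probabilities:

$$\text{Robust design:}\quad \min_{d} \; w_1\,\mu_f(d) + w_2\,\sigma_f(d),$$

$$\text{RBDO:}\quad \min_{d} \; f(d) \quad \text{subject to} \quad P\big[g_i(d, X) < 0\big] \le P_{f_i}^t, \quad i = 1,\ldots,N_{hard}.$$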

Reliability based design optimization (RBDO) deals with obtaining optimal designs characterized by a low probability of failure. In RBDO problems, there is a trade-off between obtaining higher reliability and lowering cost. The first step in RBDO is to characterize the important uncertain variables and the failure modes. In most engineering applications, the uncertainty is characterized using probability theory, and the probability distributions of the random variables are obtained using statistical models. In designing artifacts with multiple failure modes, it is important that an artifact be designed such that it is sufficiently reliable with respect to each of the critical failure modes or to the overall system failure. In an RBDO formulation, the critical failure modes of the deterministic optimization are replaced with constraints on the probabilities of failure corresponding to each of the failure modes, or with a single constraint on the system probability of failure. The reliability index, or the probability of failure corresponding to either a failure mode or the system, can be computed by performing a probabilistic reliability analysis. Some of the techniques used in reliability analysis are the first order reliability method (FORM), the second order reliability method (SORM), and Monte Carlo simulation (MCS) techniques [20].
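
For reference, in the first order reliability method the failure probability of a mode with limit state function g is related to a reliability index β through expressions of roughly the following form (a standard FORM sketch in generic notation; the precise formulation appears in Chapter 4). Here u denotes the random variables transformed to uncorrelated standard normal space and Φ is the standard normal CDF:

$$\beta = \min_{u} \; \|u\| \quad \text{subject to} \quad g(u) = 0, \qquad P_f = P[g < 0] \approx \Phi(-\beta),$$

and the minimizer $u^*$ is the most probable point (MPP) of failure referred to later in this chapter.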

Traditionally, researchers have formulated RBDO as a nested optimization problem (also known as a double-loop method). Such a formulation is, by nature, computationally expensive because of the inherent expense of the reliability analysis, which itself involves the solution of an optimization problem. Solving such nested optimization problems is cost prohibitive, especially for multidisciplinary systems, which are themselves computationally intensive. Moreover, the computational cost associated with RBDO grows exponentially as the number of random variables and the number of critical failure modes increase. To alleviate the high computational cost, researchers have developed sequential RBDO methods. In these methods, a deterministic optimization and a reliability analysis are decoupled, and the procedure is repeated until the desired convergence is achieved. However, such techniques are not provably convergent and may yield spurious optimal designs.

For decades, uncertainty has been formulated solely in terms of probability theory. Such a representation is now being questioned, because several other mathematical theories, distinct from probability theory, are demonstrably capable of characterizing situations under uncertainty [33, 49]. The risk assessment community has recognized different types of uncertainty and has argued that it is inappropriate to represent all of them solely by probabilistic means when stochastic information is not available. Uncertainty of this kind is typically referred to as epistemic uncertainty. Therefore, a need exists to develop a methodology for performing reliability based design optimization under epistemic uncertainty.


1.3 Research Objectives

This dissertation investigates and develops formulations and methodologies for reliability based design optimization (RBDO). The main focus is to develop methodologies that are computationally efficient and mathematically robust. Efforts are focused on reducing the cost of the optimization by developing innovative formulations for RBDO. An efficient and robust unilevel formulation, in which the lower level optimization problem is replaced at the system level by its Karush-Kuhn-Tucker (KKT) optimality conditions, is developed. In addition to the unilevel formulation, a novel sequential methodology for RBDO is developed to address the concerns with some of the existing sequential RBDO methodologies. In this investigation a framework for design optimization under epistemic uncertainty is also developed. The epistemic uncertainty is quantified using Dempster-Shafer theory.

1.3.1 Decoupled methodology for reliability based design optimization

In this dissertation, a new decoupled methodology for reliability based design optimization is developed. Methodologies based on similar ideas have been developed by other researchers [13, 72]. In these methodologies, a deterministic optimization and a reliability analysis are performed separately, and the procedure is repeated until the desired convergence is achieved. Such techniques are referred to as sequential or decoupled RBDO techniques in the rest of this manuscript. Sequential RBDO techniques offer a practical means of obtaining a consistent reliable design at considerably reduced computational cost. In the methodology developed here, the sensitivities of the most probable point (MPP) of failure with respect to the decision variables are computed and used to update the MPPs during the deterministic optimization phase of the new RBDO approach. The MPP update is based on a first order Taylor series expansion around the design point from the last cycle. The update is found to be extremely accurate, especially in the vicinity of the point from the previous cycle. The methodology not only finds the true optimal solution but also the exact MPPs of failure, which is important to ensure that the target reliability index is satisfied.
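
Written out, an update of this kind takes the form of a first order expansion similar to the following (an illustrative sketch in generic notation; the exact expressions are developed in Chapter 5), where $u_i^*$ is the MPP for the i-th failure mode and $d_k$ is the design point from the previous cycle:

$$u_i^*(d) \;\approx\; u_i^*(d_k) + \frac{\partial u_i^*}{\partial d}\bigg|_{d_k}\,(d - d_k).$$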

1.3.2 Unilevel methodology for reliability based design optimization

In this dissertation, a new unilevel formulation for performing RBDO is developed. The proposed formulation provides improved robustness and provable convergence as compared to the unilevel variant given by Kuschel and Rackwitz [36]. The formulation of Kuschel and Rackwitz [36] replaces the direct first order reliability method (FORM) problems (the lower level optimization in the reliability index approach (RIA)) by their first order necessary KKT optimality conditions. The FORM problem in RIA is numerically ill conditioned [69]; the same is true for the formulation given by Kuschel and Rackwitz [36]. In this research, the basic idea is to replace the inverse FORM problem (the lower level optimization in the performance measure approach (PMA)) by its first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It was shown by Tu et al. [69] that PMA is robust in terms of probabilistic constraint evaluation. The method developed in this work is shown to be computationally equivalent to the original nested optimization problem if the lower level optimization problem is solved by satisfying the KKT necessary conditions (which is what most numerical optimization algorithms actually do). The unilevel method developed in this investigation is observed to be more robust and has a provably convergent structure as compared to the one given by Kuschel and Rackwitz [36].
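
As a sketch of the idea (in generic PMA notation rather than the exact symbols of Chapter 6), the inverse FORM subproblem for a limit state function g with target reliability index $\beta_t$, and the first order KKT conditions that replace it at the upper level, can be written as

$$\min_{u} \; g(u) \quad \text{subject to} \quad \|u\| = \beta_t,$$

$$\nabla_u g(u) + \lambda\,\frac{u}{\|u\|} = 0, \qquad \|u\| = \beta_t,$$

where λ is the Lagrange multiplier. One such set of conditions per failure mode is imposed as equality constraints of the single level problem.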


1.3.3 Application of continuation methods in unilevel reliability based design optimization formulation

Optimization problems with many equality constraints are usually difficult to solve with most commercially available optimization algorithms. The unilevel formulation for RBDO developed in this investigation is typically accompanied by a large number of equality constraints, which can cause numerical instability. Continuation methods are employed to relax the constraints and to obtain a relaxed feasible design. A series of less difficult optimization problems is solved for different values of the continuation parameter, and the relaxed problem is continuously deformed into the original problem to find its solution.
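
The underlying idea can be summarized with a standard convex homotopy (an illustrative sketch; the specific relaxation used in this work is developed in Chapter 7). If F(x) = 0 denotes the difficult constraint set and G(x) = 0 an easier problem with a known solution, one defines

$$H(x, t) = t\,F(x) + (1 - t)\,G(x) = 0, \qquad t \in [0, 1],$$

and solves a sequence of problems for increasing values of the continuation parameter t, starting from the known solution at t = 0 and recovering the original problem at t = 1.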

1.3.4 Reliability based design optimization under epistemic uncertainty

A traditional RBDO based on probability theory typically requires complete statistical information about the uncertainties. However, there exist cases where not all the uncertain parameters in a system can be described probabilistically. Such uncertainties, known as epistemic or subjective uncertainties, are usually extremely difficult to characterize by mathematical means. In this dissertation, the epistemic uncertainty associated with the disciplinary design tools and the input parameters is quantified, in terms of uncertain measures, using Dempster-Shafer theory. A trust region managed sequential approximate optimization (SAO) framework is used for driving reliability based design optimization under epistemic uncertainty. The uncertain measures of belief and plausibility provided by evidence theory are discontinuous functions. In order to use gradient based optimization techniques, response surface approximations are employed to smooth the discontinuous uncertain measures. Results indicate that the methodology is effective in locating reliable designs under epistemic uncertainty.


1.4 Overview of the dissertation

This dissertation is organized as follows. In chapter 2, a deterministic optimization problem formulation is presented, an overview of multidisciplinary systems is given, and some multidisciplinary algorithms are discussed.

Chapter 3 presents an overview of the different kinds of uncertainties in simulation based design and the different uncertainty modeling theories. The details of reliability analysis using probability theory and uncertainty analysis using Dempster-Shafer theory are described.

Chapter 4 presents a typical reliability based design optimization formulation that employs probabilistic reliability analysis for reliability calculation. Background work in this area by other researchers is reviewed, along with issues related to the high computational cost of reliability based design optimization, the numerical robustness of the formulations, spurious optimal designs, etc.

In chapter 5, a new sequential methodology for traditional reliability based design optimization is presented. The main optimization and the reliability analyses corresponding to the failure modes are decoupled from each other. The methodology is computationally efficient compared to the nested approach.

In chapter 6, a unilevel methodology for reliability based design optimization is developed. The KKT optimality conditions corresponding to the lower level optimization are imposed at the system level. The methodology is computationally efficient with respect to the traditional RBDO approaches.

Chapter 7 presents continuation methods for the unilevel RBDO methodology. The relaxation of the unilevel formulation using a continuation parameter, the identification of an initial feasible solution, and the deformation of the relaxed problem into the original problem are discussed.

In chapter 8, a framework for reliability based design optimization under epistemic uncertainty is presented. Research efforts in the area of epistemic uncertainty quantification are discussed, and the details of how epistemic uncertainty can be accounted for in design optimization are presented. The methodology is demonstrated in application to some test problems.

In chapter 9, the advantages and limitations of the reliability based design optimization methodologies developed in this investigation are presented. Important conclusions are drawn and some future work in this area is recommended.


CHAPTER 2

DESIGN AND OPTIMIZATION

Necessity is the mother of invention. A need or problem encourages creative efforts to meet the need or solve the problem. In the engineering world, meeting a need or solving a problem involves a number of activities, including analysis, design, fabrication, sales, research, and the development of systems. In this dissertation, the main subject is the design of systems, a major field of the engineering profession. The process of designing systems has been developed and used for centuries, and the existence of buildings, bridges, highways, automobiles, airplanes, space vehicles, and other complex systems is an excellent testimonial.

Design is an iterative process. The designer's experience, intuition, and ingenuity are required in the design of systems in most fields of engineering (aerospace, automotive, civil, chemical, industrial, electrical, mechanical, and so on). Iterative implies analyzing several trial systems in sequence before an acceptable design is obtained. Engineers strive to design the best systems and, depending on the specifications, best can have different connotations for different systems. In general, it implies cost effective, efficient, reliable, and durable systems. The process can involve teams of specialists from different disciplines requiring considerable interaction.

In engineering terminology, a design is transformed into specifications and requirements. The requirements can be expressed in terms of mathematical constraints, and the region delimited by the constraints is known as the feasible region.


The designer is faced with the challenge of designing artifacts that are consistent with the set of constraints. Competitive pressures continue to force product improvement demands on engineering and design departments. An improved design is one that complies with the same requirements but improves the value of the merit function. When the constraints and the merit function can be expressed in explicit mathematical form and are functions of quantifiable characteristics, a mathematical optimization problem can be posed. Being able to optimize a product for a desired performance outcome in the pre-design phase can mean more time for product innovation and a shorter time to market.

The advent of high speed computing has led to the development of large scale simulation models for complex engineering systems. Typical applications include models of nature for weather forecasting, stock market models for investment decision making, engineering models for analyzing complex flow patterns over an airfoil, nuclear power plant models, and so on. The modern design engineer frequently employs advanced simulation models for design applications.

The development of good simulation models has allowed engineers to design better systems and products. The modern engineering design process is, therefore, viewed as simulation based design. A prospective design can be identified much more easily long before an actual prototype is built. The designer can locate improved designs that are consistent with the constraints and have a better merit function value by simply varying the design inputs and executing the simulation model. In numerical optimization, the process of locating an improved design is automated, and advanced mathematical algorithms are employed to locate better designs efficiently.


2.1 Deterministic design optimization

In a deterministic design optimization, the designer seeks the optimum values of the design variables for which the merit function is minimized and the deterministic constraints are satisfied. A typical deterministic design optimization problem can be formulated as

$$
\begin{aligned}
\min_{d} \quad & f(d, p, y(d,p)) && (2.1)\\
\text{subject to} \quad & g_i^R(d, p, y(d,p)) \ge 0, \quad i = 1, \ldots, N_{hard}, && (2.2)\\
& g_j^D(d, p, y(d,p)) \ge 0, \quad j = 1, \ldots, N_{soft}, && (2.3)\\
& d^l \le d \le d^u, && (2.4)
\end{aligned}
$$

where d are the design variables and p are the fixed parameters of the optimization problem. $g_i^R$ is the i-th hard constraint, which models the i-th critical failure mechanism of the system (e.g., stress, deflection, loads, etc.). $g_j^D$ is the j-th soft constraint, which models the j-th deterministic constraint due to other design considerations (e.g., cost, marketing, etc.). The design space is bounded by $d^l$ and $d^u$. If $g_i^R < 0$ at a given design d, the artifact is said to have failed with respect to the i-th failure mode. The complete failure of the artifact depends on how all the failure modes contribute to what is known as the system failure of the artifact.

The merit function and the constraints in the above formulation are explicit functions of d, p, and y(d,p), where y(d,p) are the outputs of the analysis tools used to predict the performance characteristics of an artifact. These intermediate quantities are referred to in this study as state variables, and the analysis tools are referred to as contributing analyses.

Although a clear distinction is made between hard and soft constraints, deterministic design optimization treats both types of constraints similarly, and the failure of the artifact due to the presence of uncertainties is not taken into consideration. The distinction between hard and soft constraints is made to facilitate the introduction of concepts on reliability analysis and reliability based design optimization in subsequent chapters. It should be noted that equality constraints could also be included in the optimization formulation, but they have been omitted here without any loss of generality.
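
As a minimal illustration of the formulation in Eqs. (2.1)-(2.4), the following sketch solves a small deterministic design optimization with an off-the-shelf nonlinear programming routine. The merit function, constraints, and bounds are hypothetical placeholders, not a problem taken from this dissertation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-variable design problem in the form of Eqs. (2.1)-(2.4):
# minimize a weight-like merit function subject to hard constraints g >= 0.

def f(d):
    # merit function f(d, p, y(d, p)); here y is a trivial explicit function of d
    return d[0] * d[1]

def g_hard(d):
    # hard constraints g_i^R(d, p, y) >= 0 (placeholder stress/deflection margins)
    return np.array([d[0] * d[1]**2 - 2.0,       # "stress" margin
                     d[0] + d[1] - 2.5])         # "deflection" margin

bounds = [(0.5, 5.0), (0.5, 5.0)]                # d^l <= d <= d^u
constraints = [{"type": "ineq", "fun": g_hard}]  # scipy treats "ineq" as fun(d) >= 0

result = minimize(f, x0=np.array([2.0, 2.0]), bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)
```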

2.1.1 Multidisciplinary systems design

The concept of a multidisciplinary problem, though inherent to most engineering design problems, is more evident when analyzing complex systems such as automobiles or aircraft. This class of problems is characterized by two or more disciplinary analyses, such as controls, structures, and aerodynamics in an aeroservoelastic structure. Each discipline analysis can be a simulation tool or a collection of simulation tools that are coupled through the performance inputs and outputs of the individual disciplines. Coupling between the various simulation tools exists in the form of shared design variables and both input and output state performances. The solution of these coupled simulation tools is referred to as a system analysis (SA) or a multidisciplinary analysis (MDA). For a given set of independent variables that uniquely define the artifact, referred to as design variables, d, the SA is intended to determine the corresponding set of attributes that characterize the system performance, referred to as state variables (SVs), y.

An analysis of multidisciplinary systems often requires iterating between the individual disciplines until the states are converged. The convergence of the states is based on some convergence criterion, typically based on the change in the input parameters. A typical multidisciplinary system with full coupling is illustrated in Figure 2.1. Here the system analysis consists of three disciplines or contributing analyses (CAs). Each contributing analysis (CA) makes use of a simulation based discipline design tool. State information is exchanged between the three CAs, and an iterative solution is pursued until the states are converged and consistent. A consistent design refers to one in which the output states y are not in contradiction with the system of discipline equations (2.5) for a given design d.

Figure 2.1. Model of a multidisciplinary system analysis

A single evaluation of each discipline is referred to as a contributing analysis (CA). In general, a CA in a multidisciplinary system can be expressed as

$$y_i = CA_i(d_i, d_s, y_{ii}) \qquad (2.5)$$

The inputs to $CA_i$ are the discipline design vector $d_i$, the vector of shared variables $d_s$, and the vector of input states from the other CAs, $y_{ii}$. The output state vector is $y_i$.


In general, a multidisciplinary system with nss CAs can be expressed as

$$
\begin{aligned}
y &= SA(d, p), && (2.6)\\
\text{where } d &= \{d_1, d_2, \cdots, d_{nss}, d_s\}, && (2.7)\\
y &= \{y_1, y_2, \cdots, y_{nss}\}. && (2.8)
\end{aligned}
$$

It is possible for more than one solution to exist (or for no solution to exist) for a given set of design variables and parameters; however, this is not very common in practice. A coupled system analysis might take several iterations to converge and can therefore be extremely expensive to evaluate.
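
A common way to converge such a coupled system analysis is a Gauss-Seidel style fixed point iteration over the contributing analyses. The sketch below is purely illustrative: the three contributing analyses are made-up linear functions, not models from this dissertation, and the convergence test on the change in the states mirrors the criterion described above.

```python
import numpy as np

# Hypothetical contributing analyses CA1, CA2, CA3: each maps the design d and
# the states output by the other CAs to its own output state (scalars here).
def CA1(d, y2, y3): return 0.4 * d[0] + 0.2 * y2 - 0.1 * y3
def CA2(d, y1, y3): return 0.3 * d[1] + 0.1 * y1 + 0.2 * y3
def CA3(d, y1, y2): return 0.5 * d[0] - 0.2 * y1 + 0.1 * y2

def system_analysis(d, tol=1e-10, max_iter=100):
    """Gauss-Seidel iteration between the CAs until the states stop changing."""
    y = np.zeros(3)                      # initial guess for the state variables
    for _ in range(max_iter):
        y_old = y.copy()
        y[0] = CA1(d, y[1], y[2])
        y[1] = CA2(d, y[0], y[2])
        y[2] = CA3(d, y[0], y[1])
        if np.linalg.norm(y - y_old) < tol:   # convergence criterion on the states
            break
    return y

print(system_analysis(np.array([1.0, 2.0])))
```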

2.1.2 Multidisciplinary design optimization

The optimization of multidisciplinary problems is known as multidisciplinary design optimization (MDO). To perform optimization, the sensitivities of the converged SVs can be calculated by solving the global sensitivity equations (GSEs) [55], which are obtained from the implicit function differentiation rule. After an SA has been performed, the total sensitivities can be calculated using the expressions

$$\Delta_d\, y = C^{-1}\,\nabla_d\, y, \qquad (2.9)$$

$$\Delta_p\, y = C^{-1}\,\nabla_p\, y, \qquad (2.10)$$

where $\Delta_d$ and $\Delta_p$ are the total differential operators with respect to d and p, whereas $\nabla_d$ and $\nabla_p$ are the partial differential operators with respect to d and p, respectively. C is the coupling matrix obtained from the global sensitivity equations. y can be divided into nss components $y_1, y_2, \ldots, y_{nss}$, which are the outputs of $CA_1, CA_2, \ldots, CA_{nss}$, respectively. C (which is a function of d and p) is a matrix of dimension M × M, where M is the length of y, and consists of an array of rectangular matrices and identity matrices, as shown below:

$$
C = \begin{bmatrix}
I & -\nabla_{y_2} y_1 & \cdots & -\nabla_{y_{nss}} y_1 \\
-\nabla_{y_1} y_2 & I & \cdots & -\nabla_{y_{nss}} y_2 \\
\vdots & \vdots & \ddots & \vdots \\
-\nabla_{y_1} y_{nss} & -\nabla_{y_2} y_{nss} & \cdots & I
\end{bmatrix} \qquad (2.11)
$$

The SA and the sensitivities from the solution of the GSEs can be used to perform the deterministic design optimization (Eqs. (2.1)-(2.4)) using many standard nonlinear programming techniques.
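
In practice, Eqs. (2.9)-(2.11) amount to assembling the partial derivatives of each CA and solving a linear system rather than explicitly inverting C. The following sketch illustrates this for a hypothetical two-discipline system with scalar states; the partial derivative values are made up for illustration only.

```python
import numpy as np

# Hypothetical partial derivatives at a converged SA for two scalar states y1, y2
dy1_dy2 = 0.3                    # partial of CA1 output with respect to y2
dy2_dy1 = 0.2                    # partial of CA2 output with respect to y1
dy1_dd = np.array([0.5, 0.0])    # partials of y1 w.r.t. design variables d1, d2
dy2_dd = np.array([0.0, 0.7])    # partials of y2 w.r.t. design variables d1, d2

# Coupling matrix of Eq. (2.11) for nss = 2
C = np.array([[1.0,      -dy1_dy2],
              [-dy2_dy1,  1.0    ]])

# Partial sensitivity matrix (rows: states, columns: design variables)
partial = np.vstack([dy1_dd, dy2_dd])

# Total sensitivities of Eq. (2.9): solve C * (dy/dd) = partial
total = np.linalg.solve(C, partial)
print(total)
```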

2.1.3 MDO Algorithms

The optimization of a multidisciplinary system is usually computationally expensive because of the cost associated with each SA evaluation. To reduce the computational cost of MDO, a variety of approaches have been developed over the last 20 years. The vast majority of them take advantage of the multidisciplinary nature of the problem by dividing it into multilevel subproblems for solution.

Alexandrov and Lewis [4] categorized the MDO approaches from two different perspectives: a structural perspective and an algorithmic perspective. The structural perspective comprises approaches in which the original problem is decomposed based on the individual disciplines (structures) of the problem. The main idea behind this approach is to provide disciplinary autonomy, with each discipline solving an optimization subproblem while a system level optimization coordinates the procedure. The advantage of this approach is that the structural organization of the problem is maintained and the coordination between individual disciplines is reduced. This results in a bi-level formulation. Well known examples of structural decomposition approaches for MDO are optimization by linear decomposition (OLD), introduced in Sobieszczanski-Sobieski [63], concurrent subspace optimization (CSSO), introduced in Sobieszczanski-Sobieski [64], collaborative optimization (CO), proposed in Kroo [34] and Braun and Kroo [8], and the recently introduced bi-level integrated system synthesis (BLISS) of Sobieszczanski et al. [62].

In the algorithmic perspective, the MDO problem is reformulated to take advantage of the optimization algorithms. The idea here is to perform reliable and efficient optimization; under this perspective, robustness and efficiency of the optimization are more important than disciplinary autonomy. The formulations that arise are single level approaches. Some of the MDO methods that fall into this category are simultaneous analysis and design (SAND) [25], or the all-at-once approach [16], and the individual discipline feasible (IDF) approaches [38, 16, 17].

2.2 Summary

In a deterministic design optimization, the basic idea is to optimize a merit function subject to deterministic constraints and design variable bounds. The merit function and constraints explicitly depend on intermediate variables, also called state variables. Multidisciplinary systems require iteration between the various disciplines until the states are converged and consistent, and their solution typically requires iterative root solving methods. The sensitivities of the state variables can be efficiently obtained by solving the global sensitivity equations (GSEs), which are based on the implicit differentiation rule.


CHAPTER 3

UNCERTAINTY: TYPES AND THEORIES

Uncertainties are inherently present in any form of simulation based design. Over the last few years there has been an increased emphasis on accounting for the various forms of uncertainty that are introduced in mathematical models and simulation tools. Engineers, scientists, and decision makers are differentiating and characterizing the different forms of uncertainty. Various representations of uncertainty exist, and it is very important that each one of them is accounted for by appropriate means, depending upon the information available.

For decades, uncertainty has been formulated in terms of probability theory. Such a representation is now being questioned, because several other mathematical theories, distinct from probability theory, are demonstrably capable of characterizing situations under uncertainty [33, 49]. The risk assessment community has recognized different types of uncertainty and has argued that it is inappropriate to represent all of them solely by probabilistic means when enough information is not available.

3.1 Classification of uncertainties

Advances in computational capabilities have led to the development of large scale simulation tools for use in design. It is important that the uncertainties in the mathematical models (i.e., simulation tools) used for nondeterministic engineering systems design and optimization are quantified appropriately. The nondeterministic nature of the mathematical model of the system exists because: (a) the system responses of the model can be non-unique due to the existence of uncertainties in the input parameters of the model, or (b) there are multiple alternative mathematical models for the system and the environment. The simulation tool itself is deterministic in the sense that, for any given input data, it returns unique values of the response quantities [47].

In general, a distinction can be made between aleatory uncertainty (also referred to as stochastic uncertainty, irreducible uncertainty, inherent uncertainty, or variability), epistemic uncertainty (also referred to as reducible uncertainty, subjective uncertainty, model form uncertainty, or simply uncertainty), and numerical uncertainty (also known as error) [43, 47]. Oberkampf et al. [45] have described various methods for estimating the total uncertainty by identifying all possible sources of variability, uncertainty, and error in mathematical models and simulation tools.

Numerical models of engineering systems, such as finite element analysis (FEA) or computational fluid dynamics (CFD), are extensively used by designers. The uncertainty in these simulation tools comes from a variety of sources (see Figure 3.1). First, there is epistemic uncertainty when a physical model is converted into a mathematical model, because all the nonlinearity of the physical model cannot be exactly transformed into mathematical equations. Second, there is uncertainty (which can be both aleatory and epistemic) in the data that are inputs to the system. Third, the mathematical equations can be solved using a variety of techniques, and these different methods usually provide slightly different results. For example, simple analytic models for beam theory make the assumption that plane sections through a beam, taken normal to the axis, remain plane after the beam is subjected to bending. A more rigorous solution from the mathematical theory of elasticity would show that a slight warpage of these planes can occur. Thus, we can have different fidelity models to carry out the required analysis, with each fidelity model giving a different result. This is commonly referred to as model form uncertainty: there is uncertainty in the computer model used. Last but not least, there is numerical error due to round-off.

Figure 3.1. Sources of Uncertainty

Some of the modern uncertainty theories are the theory of fuzzy sets [75], Dempster-Shafer theory [22], possibility theory [18], and the theory of upper and lower previsions [71]. Some of these theories deal only with epistemic uncertainty; most deal with both epistemic and aleatory uncertainty; and some deal with other varieties of uncertainty and logic appropriate for artificial intelligence and expert systems. Many of these new representations of uncertainty are able to represent epistemic uncertainty more accurately than is possible with traditional probability theory. Engineering applications of some of these theories can be found in recent publications [23, 10].

3.1.1 Aleatory Uncertainty

Variation-type uncertainty, also known as aleatory uncertainty, describes the inherent variation of the physical system. Such variation is usually due to the random nature of the input data, and it can occur in the form of manufacturing tolerances or uncontrollable variations in the external environment. It is usually modeled as a random phenomenon characterized by probability distributions. The probability distributions are constructed using the relative frequency of occurrence of events, which requires a large amount of information.

Most often such information does not exist, and designers usually make assumptions about the characteristics (means, variances, correlation coefficients) of the random phenomena causing the variation.

One can have the following different cases:

(i) The bounds on certain variations are known exactly, but the probability density functions or distributions governing the variations are not known. For example, the maximum and minimum temperatures at which an artifact may operate are known, but not the characteristics of the underlying random process.

(ii) The individual probability density functions of the variations in certain parameters or design variables are known, but the correlations among them are unknown.

In most engineering applications, even though it is reasonable to represent the stochastic uncertainty (variability) associated with design variables and parameters by probabilistic means, there are cases where such a representation is questionable or not adequate. There are cases where some of the inputs to the system are known only to lie within an interval (or intervals) and nothing else is known about the distribution, mean, or variance. Sometimes data are available only as discrete points obtained from previous experiments. When such inputs are part of an engineering system, it is extremely difficult to estimate the corresponding uncertainty in the outputs of the system and to use it for design applications. Uncertainties of this type, characterized by incomplete information, can be represented using techniques such as convex models of uncertainty, interval methods, possibility theory, etc.

3.1.2 Epistemic Uncertainty

Epistemic uncertainty, also known as subjective uncertainty, arises due to ignorance, lack of knowledge, or incomplete information. In engineering systems, epistemic uncertainty can be either parametric or model-based. Epistemic uncertainty also arises in decision making.

Parametric and Tool Uncertainty

In engineering systems, epistemic uncertainty is mostly either parametric or model-based. Parametric uncertainty is associated with uncertain parameters for which the available information is sparse or inadequate. Model-form uncertainty, also known as tool uncertainty, is associated with improper models of the system due to a lack of knowledge of the physics of the system. In some applications, models (tools) are conservative, or consistently over-predict or under-predict certain characteristics. For example, in structural dynamics, the use of a consistent mass matrix is known to consistently overestimate the natural frequencies of a given structure, whereas using a lumped mass matrix does not follow such a trend. Usually, the model uncertainty is lower for higher fidelity analysis tools.

Model uncertainty also results from the selection of different mathematical models to simulate different conditions. It arises because of the lack of information about the conditions, or the range of conditions, under which the system could operate, and also due to the lack of a unified modeling technique; for example, employing different models for laminar and turbulent flows in a typical fluid mechanics application.

Uncertainty related to decision making

Uncertainty associated with decision making is also known as vagueness and imprecision. In a decision making problem, which is usually a multiobjective optimization problem, it is generally not possible to minimize all objectives (or satisfy all goals) due to constraints and conflicting objectives. In such circumstances, designers usually make decisions about which objectives to trade off, or how to perform the trade-off. Design optimization in such situations is done using traditional multiobjective optimization methods, fuzzy multiobjective optimization, and preference based design. Fuzzy sets are typically used to model such multiobjective problems.

3.1.3 Error (Numerical Uncertainty)

Error, also known as numerical uncertainty, is commonly associated with the numerical models used for modeling and simulation. Some common examples of such errors are the error tolerance in the convergence of a coupled system analysis, round-off errors, truncation errors, and the errors associated with the solution of ODEs and PDEs, which typically rely on discretization schemes.

3.2 Uncertainty modeling techniques

Uncertainties can be quantified using various uncertainty theories, among them probability theory, Dempster-Shafer theory, convex models of uncertainty, and possibility or fuzzy set theory. Probability theory is the most widely employed means of modeling uncertainty, especially when sufficient data are available. A brief description of some of these uncertainty theories is presented below.

3.2.1 Probability Theory

Probability theory represents the uncertainties as random variables. In general, the

variables can be both discrete and continuous. In this dissertation, we will consider

only the continuous random variables. A random variable will be represented by

an uppercase (e.g., X), and a particular realization of a random variable will be

represented by a lowercase letter (e.g., x). The nature of randomness and the

information on probability is represented by the probability density function (PDF),

fX(x). To calculate the probability of X having a value between x1 and x2, the

area under the PDF between these two limits needs to be calculated. This can be

expressed as

P(x_1 < X \leq x_2) = \int_{x_1}^{x_2} f_X(x)\, dx. \quad (3.1)

To calculate P (X ≤ x), which is specifically denoted as FX(x) and is known as

the cumulative distribution function (CDF) or simply as distribution function, the

area under the PDF needs to be integrated for all possible values of X less than or

equal to x; in other words, the integration needs to be carried out theoretically from

−∞ to x, and can be expressed as

P(X \leq x) = F_X(x) = \int_{-\infty}^{x} f_X(x)\, dx. \quad (3.2)

The CDF directly gives the probability of a random variable having a value less

than or equal to a specific value. The PDF is the first derivative of the CDF and

can be expressed as

f_X(x) = \frac{dF_X(x)}{dx}. \quad (3.3)
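As a concrete illustration of Eqs. (3.1)-(3.3), an interval probability can be obtained either by integrating the PDF or by differencing the CDF. The following is a minimal sketch using SciPy; the normal distribution and its parameters are illustrative assumptions, not values taken from this dissertation.

```python
# Minimal sketch of Eqs. (3.1)-(3.3) for an arbitrary normal random variable X.
# The distribution and its parameters are illustrative assumptions only.
from scipy.stats import norm
from scipy.integrate import quad

X = norm(loc=10.0, scale=2.0)        # X ~ N(mu = 10, sigma = 2)

x1, x2 = 9.0, 12.0
# Eq. (3.1): P(x1 < X <= x2) as the area under the PDF ...
p_interval_pdf = quad(X.pdf, x1, x2)[0]
# ... which equals the difference of CDF values, F_X(x2) - F_X(x1)
p_interval_cdf = X.cdf(x2) - X.cdf(x1)

# Eq. (3.3): the PDF is the derivative of the CDF (central finite difference)
h = 1e-5
pdf_from_cdf = (X.cdf(10.0 + h) - X.cdf(10.0 - h)) / (2.0 * h)

print(p_interval_pdf, p_interval_cdf)   # both are approximately 0.533
print(pdf_from_cdf, X.pdf(10.0))        # both are approximately 0.199
```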


The mean of X, denoted as µ, the standard deviation of X, denoted as σ, and

the correlation between random variables X_1 and X_2, denoted as ρ_12, are given by

\mu = \int_{-\infty}^{\infty} x f_X(x)\, dx, \quad (3.4)

\sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 f_X(x)\, dx, \quad (3.5)

\rho_{12}\sigma_1\sigma_2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x_1 - \mu_1)(x_2 - \mu_2)\, f_{X_1,X_2}(x_1, x_2)\, dx_1\, dx_2, \quad (3.6)

where f_{X_1,X_2}(x_1, x_2) is the joint probability density function of X_1 and X_2. These statistical properties are usually obtained from sampled data.

Consider a scalar function g(X_1, X_2, ..., X_n). The expected value (mean) of g can be calculated as

\mu_g = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, x_2, \ldots, x_n)\, f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)\, dx_1\, dx_2 \cdots dx_n. \quad (3.7)

Similarly, the distribution and other statistical properties of g can be obtained.

However, this is not always possible when g is obtained from expensive analysis tools.

Approximate techniques can be used for this purpose. They are described in the

next chapter.
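When g is available only as a black-box function, the integral in Eq. (3.7) is commonly estimated by sampling rather than evaluated analytically. The short sketch below illustrates a crude Monte Carlo estimate of µ_g for a hypothetical g of two independent normal variables; the function, the distributions, and the sample size are assumptions made purely for illustration.

```python
# Crude Monte Carlo estimate of the mean of g(X1, X2) in the spirit of Eq. (3.7).
# g and the input distributions are hypothetical; a real application would call
# an (expensive) analysis code instead.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

x1 = rng.normal(loc=1.0, scale=0.2, size=n_samples)   # X1 ~ N(1, 0.2^2)
x2 = rng.normal(loc=2.0, scale=0.5, size=n_samples)   # X2 ~ N(2, 0.5^2)

def g(x1, x2):
    """Hypothetical scalar performance function."""
    return x1**2 + 3.0 * x2

mu_g = np.mean(g(x1, x2))            # sampling estimate of Eq. (3.7)
sigma_g = np.std(g(x1, x2), ddof=1)  # sampling estimate of the standard deviation
print(mu_g, sigma_g)                 # mu_g is close to 1 + 0.2**2 + 6 = 7.04
```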

3.2.2 Dempster-Shafer Theory

In this section, the basic concepts of evidence theory are summarized. The uncertain

measures provided by evidence theory are mentioned along with their advantages

and disadvantages.

Evidence theory uses two measures of uncertainty, belief and plausibility. In

comparison, probability theory uses just one measure, the probability of an event.

Belief and plausibility measures are determined from the known evidence for a proposition without requiring the evidence to be distributed to subsets of the


Figure 3.2. Belief (Bel) and Plausibility (Pl)[6]

proposition. This means that evidence in the form of experimental data or expert

opinion can be obtained for a parameter value within an interval. The evidence

does not assume a particular value within the interval or the likelihood of any value

with regard to any other value in the interval. Since there is uncertainty in the

given information, the evidential measure for the occurrence of an event and the

evidential measure for its negation do not have to sum to unity as shown in Figure

3.2 (Bel(A) + Bel(Ā) ≠ 1).

The basic measure in evidence theory is known as the Basic Probability Assign-

ment (BPA). It is a function m that maps the power set (2^U) of the universal set U (also known as the frame of discernment) to [0, 1]. The power set simply represents all the possible subsets of the universal set U. Let E represent some event which is a subset of the universal set U. Then m(E) refers to the BPA corresponding exactly to the event E, and it expresses the degree of support of the evidential claim that the true alternative (prediction, diagnosis, etc.) is in the set E but not in any special subset of E. Any additional evidence supporting the claim that the true alternative is in a subset of E, say A ⊂ E, must be expressed by another nonzero value m(A).


m(E) must satisfy the following axioms of evidence theory.

(1) m(E) ≥ 0 for any E ∈ 2^U

(2) m(∅) = 0

(3) Σ_{E ∈ 2^U} m(E) = 1

where ∅ denotes the empty set. All the possible events E which are subsets of the universal set U (E ⊆ U) and have m(E) > 0 are known as the focal elements. The pair 〈F, m〉, where F denotes the set of all focal elements induced by m, is called a

body of evidence.

The measures of uncertainty provided by evidence theory are known as belief

(Bel) and plausibility (Pl). Once a body of evidence is given, these measures can

be obtained by using the following formulas.

\mathrm{Bel}(A) = \sum_{B \,|\, B \subseteq A} m(B) \quad (3.8)

\mathrm{Pl}(A) = \sum_{B \,|\, B \cap A \neq \emptyset} m(B) \quad (3.9)

Observe that the belief of an event Bel(A) is calculated by summing the BPAs

of the propositions that totally agree with the event A, whereas the plausibility of an event is calculated by summing the BPAs of propositions that agree with the event A totally or partially. In other words, Bel and Pl give the lower and upper bounds for the event, respectively. They are related to each other by the following equation.

\mathrm{Pl}(A) + \mathrm{Bel}(\bar{A}) = 1 \quad (3.10)

where Ā represents the negation of the event A. The BPA can be obtained from

the Belief measure with the following inverse relation.

m(A) = \sum_{B \,|\, B \subseteq A} (-1)^{|A - B|}\, \mathrm{Bel}(B) \quad (3.11)

27

Page 39: Reliability Based Design Optimization - Formulations and Methodologies

where |A − B| is the cardinality of the set difference of A and B.
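To make Eqs. (3.8)-(3.9) concrete, the sketch below computes Bel and Pl for an event directly from a basic probability assignment; the universal set and the BPA values are hypothetical and chosen only for illustration.

```python
# Minimal sketch of Eqs. (3.8)-(3.9): belief and plausibility of an event A
# computed from a basic probability assignment m. The body of evidence used
# here is a made-up example over the universal set {1, 2, 3}.
def belief(A, m):
    """Bel(A): sum of BPAs of focal elements B entirely contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def plausibility(A, m):
    """Pl(A): sum of BPAs of focal elements B that intersect A."""
    return sum(v for B, v in m.items() if B & A)

# Hypothetical BPA over subsets of U = {1, 2, 3}; the values sum to 1.
m = {
    frozenset({1}): 0.3,
    frozenset({2, 3}): 0.4,
    frozenset({1, 2, 3}): 0.3,
}

A = frozenset({1, 2})
print(belief(A, m), plausibility(A, m))   # 0.3 <= "P(A)" <= 1.0
```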

Sometimes the available evidence can come from different sources. Such bodies

of evidence can be aggregated using existing rules of combination. Commonly used combination rules are listed below [61].

(1) The Dempster rule of combination

(2) Discount+Combine Method

(3) Yager’s modified Dempster’s rule

(4) Inagaki’s unified combination rule

(5) Zhang’s center combination rule

(6) Dubois and Prade’s disjunctive consensus rule

(7) Mixing or Averaging

Dempster-Shafer theory is based on the assumption that these sources are inde-

pendent. However, information obtained from a variety of sources needs to be

properly aggregated. There is always a debate about which combination rule is

most appropriate. Dempster’s rule of combination (1) is one of the most popular

rules of combination used. It is given by the following formula.

m(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}, \qquad A \neq \emptyset \quad (3.12)

Dempster’s rule has been subject to some criticism in the sense that it tends to com-

pletely ignore the conflicts that exist between the available evidence from different

sources. This is because of the normalization factor in the denominator. Thus, it is

not suitable for cases where there is a lot of inconsistency in the available evidence. However, it is appropriate to apply when there is some degree of consistency or suf-

ficient agreement among the opinions of different sources. In the present research,

it will be assumed that there is some consistency in the available evidence from

different sources. Hence, Dempster’s rule of combination will be applied to combine


evidence. When there is little or no consistency among the evidence from different sources, it is appropriate to use the mixing or averaging rule [47].
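The following sketch implements the combination rule of Eq. (3.12) for two bodies of evidence; the BPAs are hypothetical, and no attempt is made to analyze the degree of conflict beyond the normalization factor in the denominator.

```python
# Minimal sketch of Dempster's rule of combination, Eq. (3.12), for two
# independent bodies of evidence m1 and m2 over the same universal set.
# The BPAs below are illustrative only.
from collections import defaultdict

def dempster_combine(m1, m2):
    combined = defaultdict(float)
    conflict = 0.0
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] += v1 * v2
            else:
                conflict += v1 * v2          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    # Normalization by 1 - conflict (the denominator of Eq. (3.12))
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

m1 = {frozenset({1}): 0.6, frozenset({1, 2}): 0.4}
m2 = {frozenset({1}): 0.5, frozenset({2}): 0.2, frozenset({1, 2}): 0.3}

print(dempster_combine(m1, m2))
```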

3.2.3 Convex Models of Uncertainty and Interval Analysis

In some cases, uncertain events form patterns that can be modeled using Convex

Models of Uncertainty [7]. Examples of convex models include intervals, ellipses or

any convex sets. Convex models of uncertainty require less detailed information to characterize uncertainties than a probabilistic model. They often

require a worst case analysis in design applications which can be formulated as a

constrained optimization problem. Depending on the nature of the performance

function, local or global optimization techniques will be required. When the convex

models are intervals, techniques in interval analysis can be used.

3.2.4 Possibility/Fuzzy Set Theory Based Approaches

Fuzzy set theory [19] can be used to model uncertainties when there is little infor-

mation or sparse data. Conventional sets have fixed boundaries and are called crisp sets, which are a special case of fuzzy sets. Let A be a fuzzy set or event (e.g., a quantity is "equal" to 10) in a universe of discourse U (e.g., all possible values of the quantity), and let x ∈ U. The degree of membership of x in A is then defined using a membership function, also called the characteristic function, µ_A(x) (e.g., a triangle-shaped function with a peak of 1 at x = 10 that is non-zero only when 9 < x < 11 and 0 elsewhere).

Possibility theory can be used when there is insufficient information about ran-

dom variations. Possibility distributions can be assigned to such variations that

are analogous to cumulative distribution functions in probability theory. The basic

definitions in possibility theory associated with the possibility of a complement of an event, or a union or intersection of events, are very different from those used in probability


theory. The membership function associated with a fuzzy set can be assumed to

be a possibility distribution of that set. The possibility distribution (membership

function) of a function of a variable (fuzzy set) with a given possibility distribution

(membership function) can be found using Zadeh's Extension Principle, also called the Vertex Method. The vertex method is based on combinatorial interval analysis, and the computational expense increases exponentially with the dimension of the uncertain

variables and increases with nonlinearity of the function. The vertex method can

be used to find the induced preferences on performance parameters due to pre-

scribed preferences for design variables, and possibility distributions of performance

parameters due to uncertain variables characterized by possibility distributions.

Researchers [11, 40, 10] have shown that using possibility theory can yield more

conservative designs as compared to probability theory. This is especially true when

the available information is scarce or when the design criterion is to achieve low

probability or possibility of failure.

3.3 Optimization Under Uncertainty

A deterministic optimization formulation does not account for the uncertainties in

the design variables, parameters, and simulation models. Optimized designs based on a deterministic formulation are usually associated with a high probability of failure because of the likely violation of certain hard constraints and can be subjected

to failure in service. This is particularly true if the hard constraints are active at

the deterministic optimum. In today’s competitive marketplace, it is very impor-

tant that the resulting designs are optimum and at the same time reliable. Hence,

it is extremely important that design optimization accounts for the uncertainties.

Some of the existing methodologies that perform optimization accounting for the

uncertainties are discussed in the following sections.


3.3.1 Robust Design

In some applications, it is important to ensure that the performance function is

insensitive to variations. A robust design needs to be found for such applications.

In [52], a formulation for robust design optimization, which corresponds to finding designs with minimum variation of certain performance characteristics, is presented. A Signal to Noise Ratio is maximized, where noise corresponds to

variation type uncertainties and signal corresponds to a performance parameter,

PP. In Taguchi techniques, experimental arrays are used to conduct experiments

at various levels of control factors (design variables, d), and for each experimental

control setting, a Signal to Noise Ratio is calculated. This usually requires another

experimental array for different settings of the uncertain variables, x. The Signal

to Noise Ratio (S/N) is calculated as follows

S/N(d_j) = -10 \log \left[ \frac{1}{m} \sum_{i=1}^{m} \left( PP(d_j, x_i) - \tau \right)^2 \right], \quad (3.13)

where τ denotes the target value of the performance parameter.

When the uncertainties are known probabilistically, a robust design optimization

corresponds to minimizing the variance of the performance function.
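A minimal sketch of Eq. (3.13) follows, assuming the common base-10 (decibel) convention for the logarithm and treating τ as the target value of the performance parameter; the PP samples and the target are made-up numbers.

```python
# Sketch of the signal-to-noise ratio in Eq. (3.13) for one control setting d_j.
# A base-10 logarithm is assumed; PP values and the target tau are illustrative.
import numpy as np

def signal_to_noise(pp_samples, tau):
    """S/N = -10 log10 of the mean squared deviation of PP from the target tau."""
    pp = np.asarray(pp_samples, dtype=float)
    msd = np.mean((pp - tau) ** 2)
    return -10.0 * np.log10(msd)

# Performance parameter evaluated at m settings of the uncertain (noise) variables x_i
pp_at_dj = [9.8, 10.3, 10.1, 9.7, 10.4]
print(signal_to_noise(pp_at_dj, tau=10.0))   # larger S/N means less deviation from target
```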

3.3.2 Reliability based design optimization

The basic idea in reliability based design optimization is to employ numerical opti-

mization algorithms to obtain optimal designs ensuring reliability. When the opti-

mization is performed without accounting for the uncertainties, certain hard constraints

that are active at the deterministic solution may lead to system failure. Figure 3.3

illustrates such a case, where the chance that the deterministic solution fails is about

75% due to uncertainties in design variable settings. The reliable solution is char-

acterized by a slightly higher function value and is located inside the feasible region.

In most practical applications, the uncertainties are modeled using probability the-

ory. The probability of failure corresponding to a failure mode can be obtained and


Figure 3.3. Design trade-off in RBDO: the deterministic optimum, around which almost 75% of the designs fail, and the reliable optimum located inside the feasible region.

can be posed as a constraint in the optimization problem to obtain safer designs.

3.3.3 Fuzzy Optimization

In fuzzy optimization, the uncertainties (fuzzy requirements or preferences) are mod-

eled using fuzzy sets. If the performance parameters are PP_i and the corresponding preferences are µ_i, the design optimization problem is to maximize all the preferences and hence is a multiobjective problem, usually called fuzzy programming or fuzzy multiobjective optimization [60]. Usually, all the preferences cannot be simultaneously maximized, and hence a trade-off between preferences is required. This is achieved through an aggregation operator P(·). The optimization problem therefore consists of maximizing the aggregated preference. Typical examples of P

are

P(\mu_1, \mu_2, \ldots, \mu_k) = \min(\mu_1, \mu_2, \ldots, \mu_k) \quad (3.14)

P(\mu_1, \mu_2, \ldots, \mu_k) = (\mu_1 \mu_2 \cdots \mu_k)^{1/k}. \quad (3.15)

Eqn. (3.14) represents a non-compensating trade-off while Eqn. (3.15) represents

an equally compensating trade-off among the various preferences.
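The two aggregation operators can be sketched as follows; the preference values are illustrative, and the geometric-mean form of Eq. (3.15) is the assumed reading of the equally compensating operator.

```python
# Sketch of the aggregation operators in Eqs. (3.14)-(3.15): a non-compensating
# (min) trade-off and an equally compensating (geometric-mean) trade-off among
# preference values mu_i in [0, 1]. The preference values are illustrative.
import math

def aggregate_min(prefs):
    """Eq. (3.14): the design is only as good as its worst preference."""
    return min(prefs)

def aggregate_geometric(prefs):
    """Eq. (3.15): geometric mean, allowing preferences to compensate equally."""
    k = len(prefs)
    return math.prod(prefs) ** (1.0 / k)

mu = [0.9, 0.6, 0.8]
print(aggregate_min(mu), aggregate_geometric(mu))   # 0.6 and about 0.756
```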


Antonsson and Otto [5] developed a method of imprecision (MoI) in which de-

signer preferences for design variables and performance variables are modeled in

terms of membership functions in the entire design variable and performance pa-

rameter space. A design trade-off strategy is identified and the optimal solution is

obtained by maximizing the aggregate preference functions for the design variables

and the induced preference on the performance parameters. The vertex method

can be used for computing induced preferences as well as for identifying the design

variables corresponding to the maxima of the aggregate preference [37].

Jensen and Sepulveda[29] provided a fuzzy optimization methodology in which

a trust region based sequential approximate optimization framework is used for

minimizing a non-compensating aggregate preference function. The methodology

develops approximations for intermediate variables and employs the vertex method

for computing the preference functions of the approximate intermediate variables in

an inexpensive way.

3.3.4 Reliability Based Design Optimization Using Evidence Theory

Uncertainty in engineering systems can be either aleatory or epistemic. Aleatory

uncertainty, also known as stochastic uncertainty, is associated with the inherent

random nature of the parameters of the system. It can be described mathematically

using probabilistic means. Once the probabilistic description is available for the

random parameters, the risk associated with the system's responses can be quantified in terms of a probability measure using appropriate methods such as FORM,

SORM, HORM (higher-order reliability method), Monte Carlo, etc. However, there

exist cases where all the uncertain parameters in a system cannot be described

probabilistically. In such cases, the usual practice is to assume some distribution

for the parameters for which probabilistic description is not available and perform


probabilistic analysis. The results obtained from such an analysis can be faulty. Epistemic uncertainty, also known as subjective uncertainty, arises due to ignorance, lack of knowledge, or incomplete information. A variety of different theories exist to

quantify such uncertainty. In engineering systems, the epistemic uncertainty can

be either parametric or model-based. Parametric uncertainty is associated with the

uncertain parameters for which the information available is sparse or inadequate and

hence cannot be described probabilistically. Model-form uncertainty is associated

with improper models of the system due to a lack of knowledge of the physics of

the system. Model form uncertainties also arise when variable fidelity mathematical

models are employed for simulation and design.

Fuzzy sets, possibility theory, Dempster-Shafer theory, etc., provide a means for mathematically quantifying epistemic uncertainty. In this dissertation, an attempt is made to show how uncertainty can be quantified in multidisciplinary systems analysis

subject to epistemic uncertainty associated with the disciplinary design tools and

input parameters. Evidence theory is used to quantify uncertainty in terms of the

uncertain measures of belief and plausibility.

After the epistemic uncertainty has been quantified mathematically, the designer

seeks the optimum design under uncertainty. The measures of uncertainty provided

by evidence theory are discontinuous functions. Such non-smooth functions can-

not be used in traditional gradient-based optimizers because the sensitivities of the

uncertain measures do not exist. In this research surrogate models are used to repre-

sent the uncertain measures as continuous functions. A formal trust region managed

sequential approximate optimization approach is used to drive the optimization pro-

cess. The trust region is managed by a trust region ratio based on the performance

of the Lagrangian which is a penalty function of the objective and the constraints.

The methodology is illustrated in application to multidisciplinary problems.


3.4 Summary

A variety of uncertainties exist during simulation based design of an engineering

system. These include aleatory uncertainty, epistemic uncertainty, and errors. In

general, probability theory is used to model aleatory uncertainty. Other uncertainty

theories such as Dempster-Shafer theory, fuzzy set theory, possibility theory, and

convex models of uncertainty, can be used to model epistemic uncertainty. It is ex-

tremely important that the uncertainties are taken into account in design optimiza-

tion. A deterministic design optimization does not account for the uncertainties. A

variety of techniques have been developed in the last few decades to address this

issue. These techniques include robust design, reliability based design optimization,

fuzzy optimization, and so on. This dissertation mainly focuses on reliability based

design optimization.


CHAPTER 4

RELIABILITY BASED DESIGN OPTIMIZATION

In this chapter, aleatory uncertainty is considered in design optimization. This

is typically referred to as reliability based design optimization. Reliability based design optimization (RBDO) is a methodology for finding optimized designs that are characterized by a low probability of failure. Primarily, reliability based design

optimization consists of optimizing a merit function while satisfying reliability con-

straints. The reliability constraints are constraints on the probability of failure

corresponding to each of the failure modes of the system or a single constraint on

the system probability of failure. The probability of failure is usually estimated by

performing a reliability analysis. During the last few years, a variety of different

formulations have been developed for reliability based design optimization. This

chapter presents RBDO formulations and research issues associated with standard

methodologies.

4.1 RBDO formulations: efficiency and robustness

There are two important concepts in relation to an RBDO formulation: efficiency and robustness. An efficient formulation is one in which the solution can be obtained faster as compared to other formulations. A real engineering design problem usually consists of a large number of failure modes. Traditional RBDO formulations require the solution of nested optimization problems, which is computationally in-


efficient. Thus it is important that the formulation that is solved for obtaining

reliable designs is computationally efficient.

Robustness, on the other hand, means that the RBDO formulation does not depend on factors such as the starting point. It implies that whenever the optimizer is invoked, it will

provide a local optimal solution. Some of the existing RBDO formulations are not

robust in the sense that there could be designs at which the formulation may not

hold. Hence it is also important that the formulation used is robust.

In the last two decades, researchers have proposed a variety of frameworks for

efficiently performing reliability based design optimization. A careful survey of the

literature reveals that the various RBDO methods can be divided into three broad

categories.

4.1.1 Double Loop Methods for RBDO

A deterministic optimization formulation does not account for the uncertainties in

the design variables and parameters. Optimized designs based on a deterministic

formulation are usually associated with a high probability of failure because of the

likely violation of certain hard constraints in service. This is particularly true if

the hard constraints are active at the deterministic optimal solution. To obtain a

reliable optimal solution, a deterministic optimization formulation is replaced with

a reliability based design optimization formulation.

Traditionally, the reliability based optimization problem has been formulated as a

double loop optimization problem. In a typical RBDO formulation, the critical hard

constraints from the deterministic formulation are replaced by reliability constraints,


as in

\min \; f(d, p, y(d, p)) \quad (4.1)

\text{subject to } g^{rc}(X, \eta) \geq 0, \quad (4.2)

g^D_j(d, p, y(d, p)) \geq 0, \quad j = 1, \ldots, N_{soft}, \quad (4.3)

d^l \leq d \leq d^u, \quad (4.4)

where g^{rc} are the reliability constraints. They are either constraints on the probabilities of failure corresponding to each hard constraint or a single constraint on the overall system probability of failure. In this dissertation, only component failure modes are considered. It should be noted that the reliability constraints depend on the random variables X and the limit state parameters η. The distribution parameters of the random variables are obtained from the design variables d and the fixed parameters p (see Section 4.1.2 on reliability analysis below). g^{rc} can be formulated

as

g^{rc}_i = P_{allow_i} - P_i, \quad i = 1, \ldots, N_{hard}, \quad (4.5)

where P_i is the failure probability of the hard constraint g^R_i at a given design, and P_{allow_i} is the allowable probability of failure for this failure mode. The probability

of failure is usually estimated by employing standard reliability techniques. A brief

description of standard reliability methods is given in the next section. It has to

be noted that the RBDO formulation given above (Equations (4.1)-(4.4)) assumes

that the violation of soft constraints due to variational uncertainties is permissible

and can be traded off for more reliable designs. For practical problems, design

robustness represented by the merit function and the soft constraints could be a

significant issue, one that would require the solution to a hybrid robustness and

reliability based design optimization formulation.


4.1.2 Probabilistic reliability analysis

Reliability analysis is a tool to compute the reliability index or the probability of

failure corresponding to a given failure mode or for the entire system [27]. The un-

certainties are modeled as continuous random variables, X = (X_1, X_2, ..., X_n)^T, with known (or assumed) joint cumulative distribution function (CDF), F_X(x). The design variables, d, consist of either distribution parameters θ of the random variables X, such as means, modes, standard deviations, and coefficients of variation, or deterministic parameters, also called limit state parameters, denoted by η. The design

terministic parameters, also called limit state parameters, denoted by ηηη. The design

parameters p consist of either the means, the modes, or any first order distribution

quantities of certain random variables. Mathematically this can be represented by

the statements

[p, d] = [\theta, \eta], \quad (4.6)

p is a subvector of θ. (4.7)

Random variables can be consistently denoted as X(θ), and the ith failure mode can be denoted as g^R_i(X, η). In the following, x denotes a realization of the random variables X, and the subscript i is dropped without loss of clarity. Letting g^R(x, η) ≤ 0 represent the failure domain, and g^R(x, η) = 0 be the so-called limit state function,

the time-invariant probability of failure for the hard constraint is given by

P(\theta, \eta) = \int_{g^R(x, \eta) \leq 0} f_X(x)\, dx, \quad (4.8)

where f_X(x) is the joint probability density function (PDF) of X. It is usually impossible to find an analytical expression for the above integral. In standard reliability techniques, a probability distribution transformation T : R^n → R^n is usually employed. An arbitrary n-dimensional random vector X = (X_1, X_2, ..., X_n)^T is mapped into an independent standard normal vector U = (U_1, U_2, ..., U_n)^T. This transformation is known as the Rosenblatt Transformation [58]. The standard normal random

tion is known as the Rosenblatt Transformation [58]. The standard normal random


variables are characterized by a zero mean and unit variance. The limit state func-

tion in U-space can be obtained as g^R(x, η) = g^R(T^{-1}(u), η) = G^R(u, η) = 0. The failure domain in U-space is G^R(u, η) ≤ 0. Equation (4.8) thus transforms to

P_i(\theta, \eta) = \int_{G^R(u, \eta) \leq 0} \phi_U(u)\, du, \quad (4.9)

where φ_U(u) is the standard normal density. If the limit state function in U-space is affine, i.e., if G^R(u, η) = α^T u + β, then an exact result for the probability of failure is P_f = Φ(−β/‖α‖), where Φ(·) is the cumulative Gaussian distribution function. If the limit state function is close to being affine, i.e., if G^R(u, η) ≈ α^T u + β with β = −α^T u*, where u* is the solution of the following optimization problem,

\min \; \|u\| \quad (4.10)

\text{subject to } G^R(u, \eta) = 0, \quad (4.11)

then the first order estimate of the probability of failure is P_f = Φ(−β/‖α‖), where α represents a normal to the manifold (4.11) at the solution point. The solution u* of the above optimization problem, the so-called design point, β-point, or the most probable point (MPP) of failure, defines the reliability index β_p = −α^T u*/‖α‖. This

method of estimating the probability of failure is known as the first order reliability

method (FORM) [27].
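As an illustration of Eqs. (4.10)-(4.11), the sketch below solves the FORM problem with a general-purpose SQP solver for a hypothetical affine limit state in two independent standard normal variables, so the first order estimate P_f = Φ(−β_p) is exact and easy to check; the limit state function is an assumption made for this example only.

```python
# Sketch of the FORM problem, Eqs. (4.10)-(4.11): find the most probable point
# u* that minimizes ||u|| on the limit state surface G(u) = 0, then estimate
# Pf = Phi(-beta). The limit state below is a hypothetical affine function of
# two independent standard normal variables, so the FORM estimate is exact.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def G(u):
    # Hypothetical limit state in standard normal space: failure when G(u) <= 0
    return 3.0 - u[0] - 2.0 * u[1]

res = minimize(
    fun=lambda u: np.linalg.norm(u),
    x0=np.array([0.1, 0.1]),
    constraints={"type": "eq", "fun": G},
    method="SLSQP",
)

u_star = res.x
beta = np.linalg.norm(u_star)   # reliability index
pf = norm.cdf(-beta)            # first order probability of failure
print(u_star, beta, pf)         # beta = 3/sqrt(5), about 1.342; Pf about 0.090
```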

In the second order reliability method (SORM), the limit state function is ap-

proximated as a quadratic surface. A simple closed form solution for the probability

computation using a second order approximation was given by Breitung [9] using

the theory of asymptotic approximations as

P_f(\theta, \eta) = \int_{G^R(u, \eta) \leq 0} \phi_U(u)\, du \approx \Phi(-\beta_p) \prod_{l=1}^{n-1} (1 - \beta_p \kappa_l)^{-1/2}, \quad (4.12)


where the κ_l are related to the principal curvatures of the limit state function at the minimum distance point u*, and β_p is the reliability index using FORM. Breitung [9] showed that the second-order probability estimate asymptotically approaches the first order estimate as β_p approaches infinity if β_p κ_l remains constant. Tvedt [70] presented a numerical integration scheme to obtain the exact probability of failure for a general quadratic limit state function. Kiureghian et al. [32] presented a method of computing the probability of failure by fitting a parabolic approximation to the limit state surface. The probability of failure can also be computed by using importance sampling techniques [28, 21] that employ sampling around the MPP, thereby requiring fewer samples than a traditional Monte Carlo technique. The concept of FORM and SORM is illustrated in Figure 4.1 for an example with two random variables, X_1 and X_2.

Figure 4.1. Reliability analysis: the safe and failed regions separated by the limit state in the original space and in the standard space.

The first order approximation, P_f ≈ Φ(−β_p), is sufficiently accurate for most

practical cases. Thus, only first order approximations of the probability of failure are

used in practice. Using the FORM estimate, the reliability constraints in Equation

(4.5) can be written in terms of reliability indices as

g^{rc}_i = \beta_i - \beta_{reqd_i}, \quad (4.13)


where β_i is the first order reliability index, and β_{reqd_i} = −Φ^{-1}(P_{allow_i}) is the desired reliability index for the ith hard constraint. When the reliability constraints are formulated as given in Equation (4.13), the approach is referred to as the reliability

index approach (RIA).

The sensitivities of the probabilities of failure and reliability index with respect

to the distribution parameters and limit-state parameters can also be obtained. The

sensitivities in FORM are given as follows

\frac{\partial \beta}{\partial \theta} = -\frac{\nabla_u G^R(u^*, \eta)}{\|\nabla_u G^R(u^*, \eta)\|} \frac{\partial T(x^*, \theta)}{\partial \theta} \quad (4.14)

\frac{\partial \beta}{\partial \eta} = -\frac{1}{\|\nabla_u G^R(u^*, \eta)\|} \frac{\partial G^R(u^*, \eta)}{\partial \eta} \quad (4.15)

A big advantage of reliability analysis is that the influence of the uncertainties on the probability of failure can be found from the components of α*. The designer can then recommend that the uncertainties which influence the probability of failure the most be reduced through appropriate quality control measures.

The system probability of failure can be computed if the components constituting

its failure are known. The system failure can either be a series event or a parallel

event. In a series system, failure of any component (failure mode) corresponds to

the failure of the system. In a parallel system, the failure modes that constitute system failure have to be defined. In a K-out-of-N system, K component modes have

to fail for the system to fail. For complex systems, fault tree diagrams are used to

analyze the system failure. The system probability of failure for series or parallel

systems can be bounded by using unimodal bounds or relaxed bimodal bounds. The

unimodal bounds for series and parallel system are as follows

maxi

P (GRi ≤ 0) ≤ P (∪GR

i ≤ 0) ≤n∑

i=1

P (GRi ≤ 0) (4.16)

0 ≤ P (∩GRi ≤ 0) ≤ min

iP (GR

i ≤ 0) (4.17)
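A minimal sketch of the unimodal bounds in Eqs. (4.16)-(4.17) is given below; the component failure probabilities are illustrative values, and the series upper bound is additionally capped at one.

```python
# Sketch of the first order (unimodal) bounds of Eqs. (4.16)-(4.17) on the
# system failure probability, given component (failure mode) probabilities.
# The component probabilities are illustrative values.
def series_system_bounds(p_components):
    """max_i P_i <= P(series failure) <= sum_i P_i (capped at 1)."""
    lower = max(p_components)
    upper = min(1.0, sum(p_components))
    return lower, upper

def parallel_system_bounds(p_components):
    """0 <= P(parallel failure) <= min_i P_i."""
    return 0.0, min(p_components)

p = [1e-3, 5e-4, 2e-3]            # component failure probabilities
print(series_system_bounds(p))    # (0.002, 0.0035)
print(parallel_system_bounds(p))  # (0.0, 0.0005)
```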


In this dissertation, only series systems are considered. Moreover, the first order

approximation to the probability of failure, P_f(η) ≈ Φ(−β_p), is reasonably accurate

for most practical cases. Thus, only first order approximations of the probability of

failure will be employed.

It should be noted that the first order reliability analysis involves a probability

distribution transformation, the search for the MPP, and the evaluation of the cu-

mulative Gaussian distribution function. To solve the FORM problem (Equations

4.10-4.11), various algorithms have been reported in the literature [39]. One of the

approaches is the Hasofer-Lind and Rackwitz-Fiessler (HL-RF) algorithm that is

based on a Newton-Raphson root solving approach. Variants of the HL-RF meth-

ods exist that add line searches to the basic HL-RF scheme. The family of HL-RF

algorithms can exhibit poor convergence for highly nonlinear or badly scaled prob-

lems, since they are based on first order approximations of the hard constraint.

Using a sequential quadratic programming (SQP) algorithm is often a more robust

approach. The solution typically requires many system analysis evaluations. More-

over, there might be cases where the optimizer may fail to provide a solution to

the FORM problem, especially when the limit state surface is far from the origin in

U-space or when the case G^R(u, η) = 0 never occurs at a particular design variable

setting.

In design automation it cannot be known a priori what design points the upper

level optimizer (minimizing the merit function subject to reliability and determin-

istic constraints) will visit; therefore, it is not known whether the optimizer for the FORM

problem (evaluation of reliability constraints) will provide a consistent result. This

problem was addressed recently by Padmanabhan et al. [48] by using a trust region algorithm for equality constrained problems. For cases when G^R(u, η) = 0 does not


occur, the algorithm provided the best possible solution for the problem through

\min \; \|u\| \quad (4.18)

\text{subject to } G^R(u, \eta) = c. \quad (4.19)

The reliability constraints formulated by the RIA are therefore not robust. RIA

is usually more effective if the probabilistic constraint is violated, but it yields a

singularity if the design has zero failure probability [69]. To overcome this difficulty,

Tu et al [69] provided an improved formulation to solve the RBDO problem. In

this method, known as the performance measure approach (PMA), the reliability

constraints are stated by an inverse formulation as

g^{rc}_i = G^R_i(u^{i*}_{\beta=\rho}, \eta), \quad i = 1, \ldots, N_{hard}. \quad (4.20)

u^{i*}_{\beta=\rho} is the solution to the inverse reliability analysis (IRA) optimization problem

\min \; G^R_i(u, \eta) \quad (4.21)

\text{subject to } \|u\| = \rho = \beta_{reqd_i}, \quad (4.22)

where the optimum solution u^{i*}_{β=ρ} corresponds to the MPP in the IRA of the ith hard constraint. Solving RBDO by the PMA formulation is usually more efficient and robust than the RIA formulation, where the reliability is evaluated directly. The efficiency lies in the fact that the search for the MPP of an inverse reliability problem is easier than the search for the MPP corresponding to an actual reliability analysis [69]. The RIA and the PMA approaches for RBDO are essentially inverses of one

another and would yield the same solution if the constraints are active at the op-

timum [69]. If the constraint on the reliability index (as in the RIA formulation)

or the constraint on the optimum value of the limit-state function (as in the PMA

formulation) is not active at the solution, the reliable solution obtained from the

two approaches might differ. In general, the RIA formulation yields a conservative


solution. Similar RBDO formulations were independently developed by other re-

searchers [53, 59, 31]. In these RBDO formulations, constraint (4.22) is considered

as an inequality constraint (‖u‖ ≤ β_{reqd_i}), which is a more robust way of handling

the constraint on the reliability index. The major difference lies in the fact that

in these papers semi-infinite optimization algorithms were employed to solve the

RBDO problem. Semi-infinite optimization algorithms solve the inner optimization

problem approximately. However, the overall RBDO is still a nested double-loop

optimization procedure. As mentioned earlier, such formulations are computation-

ally intensive for problems where the function evaluations are expensive. Moreover,

the formulation becomes impractical when the number of hard constraints increases,

which is often the case in real-life design problems. To alleviate the computational

cost associated with the nested formulation, sequential RBDO methods have been

developed.

4.1.3 Sequential Methods for RBDO

The basic concept behind sequential RBDO techniques is to decouple the upper level

optimization from the reliability analysis to avoid a nested optimization problem.

In sequential RBDO methods, the main optimization and the search for the MPPs of failure (reliability analysis) are performed separately, and the procedure is repeated until the desired convergence is achieved. The idea is to find a consistent reliable de-

sign at considerably lower computational cost as compared to the nested approach.

A consistent reliable design is a feasible design that satisfies all the reliability con-

straints and other soft constraints. The reliability analysis is used to check if a given

design meets the desired reliability level. In most sequential techniques of RBDO,

a design obtained by performing a deterministic optimization is updated based on

the information obtained from the reliability analysis or by using some nonlinear


transformations, and the updated design is used as a starting point for the next

cycle.

Chen et al. [13] proposed a sequential RBDO methodology for normally distributed random variables. Wang and Kodiyalam [72] generalized this methodology for nonnormal random variables and reported enormous computational savings when compared to the nested RBDO formulation. The methodology was extended to multidisciplinary systems in Agarwal et al. [2]. Instead of using the reliability analysis (inverse reliability problem) to obtain the true MPP of failure (u^*_{β=ρ}), Wang and Kodiyalam [72] use the direction cosines of the probabilistic constraint at the mean values of the random variables in the standard space, α = (∂g^R/∂u)/‖∂g^R/∂u‖, and the target reliability index ρ to make an estimate of the MPP of failure, u^*_{β=ρ} = −ρα (see Figure 4.2). It should be noted that the estimated MPPs lie on the target reliability sphere.

(see figure 4.2). It should be noted that the estimated MPPs lie on the target

Limit State SurfaceLimit State Surface

Failure Region Failure Region

Required Reliability Sphere Required Reliability Sphere

Figure 4.2. Approximate MPP Estimation

During optimization, the corresponding MPP in X-space needs

to be calculated to evaluate the probabilistic performance functions. The MPP of

failure in X-space is found by mapping u^*_{β=ρ} to the original space. If the random


variables in X-space are independent and normally distributed, then the MPP in the original space is given by x* = µ_x − u^*_{β=ρ} σ_x. If the variables have a nonnormal distribution, then the equivalent means (µ′_x) and equivalent standard deviations (σ′_x) of an approximate normal distribution are computed and used in the above expression

to estimate the MPP in X-space [72].

The advantage of this methodology is that it completely eliminates the lower

level optimization for evaluating the reliability based constraints. The most probable

point (MPP) of failure for each failure driven constraint is estimated approximately.

If the limit state function is close to linear in the standard space, then the estimate

of the MPP in U-space will be accurate enough and the final solution may be close

to the actual solution. However, if the limit state function in the standard space is sufficiently nonlinear, which is often the case in most real-life design problems, then the MPP estimates might be extremely inaccurate, which might result in a design which is not truly optimal. This is referred to as a spurious optimal design.

Chen and Du [12] also proposed a sequential optimization and reliability assess-

ment methodology (SORA). In SORA, the boundaries of the violated constraints

(with low reliability) are shifted into the feasible direction based on the reliability

information obtained in the previous iteration. Two different formulations were

used for reliability assessment, the probability formulation (RIA) and the percentile

performance formulation (PMA). The percentile formulation was reported to be

computationally less demanding compared to the probability formulation and the

overall cost of RBDO was reported to be significantly less compared to the nested

formulation. It should be noted that in this methodology an exact first order re-

liability analysis is performed to obtain the MPP of failure for each failure driven

constraint, which was not the case in the approximate RBDO methodology of [72].

Therefore, a consistent reliable design is almost guaranteed to be obtained from this


framework. However, a true local optimum cannot be guaranteed. This is because

the MPPs of failure for the hard constraints are obtained at the previous design point.

A shift factor, s_i, from the mean values of the random variables is calculated and

is used to update the MPP of failure for probabilistic constraint evaluation during

the deterministic optimization phase in the next iteration, as the optimizer varies

the mean values of the random variables. This MPP update might be inaccurate

because of the fact that as the optimizer varies the design variables, the MPP of

failure (and hence the shift factor) also changes, which is not addressed in SORA. This

might lead to spurious optimal designs.

4.1.4 Unilevel Methods for RBDO

During the last few years, researchers in the area of structural and multidisciplinary

optimization have continuously faced the challenge of developing more efficient tech-

niques to solve the RBDO problem. As outlined before, RBDO is typically a nested

optimization problem, requiring a large number of system analysis evaluations. The

major concern in evaluating reliability constraints is the fact that the reliability

analysis methods are formulated as optimization problems [54]. To overcome this

difficulty, a unilevel formulation has been developed in Kuschel and Rackwitz [36].

In their method, the direct FORM problem (lower level optimization, Eqs. (4.10)-(4.11)) is replaced by the corresponding first order Karush-Kuhn-Tucker (KKT)

optimality conditions of the first order reliability problem. As mentioned earlier,

the direct FORM problem can be ill conditioned, and the same may be true for the

unilevel formulation given by Kuschel and Rackwitz [36]. The reason is that

the probabilistic hard constraints might have a zero failure probability at a partic-

ular design setting, and hence the optimizer might not converge due to the hard

constraints (which are posed as equality constraints) not being satisfied. Moreover,


the conditions under which such a replacement is equivalent to the original bilevel formulation were not detailed in Kuschel and Rackwitz [36]. Therefore, the unilevel approach of Kuschel and Rackwitz [36] is not guaranteed to be mathematically equivalent to the bilevel approach. In this investigation, a

new unilevel method is being developed which enforces the constraint qualification

of the KKT conditions and avoids the singularities associated with zero probability

of failure.

4.2 Summary

It has been noted that the traditional reliability based optimization problem is a

nested optimization problem. Solving such nested optimization problems for a large

number of failure driven constraints and/or nondeterministic variables is extremely

expensive. Researchers have developed sequential approaches to speed up the opti-

mization process and to obtain a consistent reliability based design. However, these

methodologies are not guaranteed to provide the true optimal solution. A unilevel

formulation has been developed to perform the optimization and reliability eval-

uation in a single optimization. However, the existing formulation does not guarantee mathematical equivalence to the original bilevel problem.


CHAPTER 5

DECOUPLED RBDO METHODOLOGY

In this chapter, a novel decoupled methodology for reliability based design optimization (RBDO) is developed. In the proposed method, the sensitivities of the Most Probable Point

(MPP) of failure with respect to the decision variables are introduced to update the

MPPs during the deterministic optimization phase of the proposed RBDO approach.

For the test problem considered, the method not only finds the optimal solution but

it also locates the exact MPP of failure, which is important to ensure that the target

reliability index is met. The MPP update is based on the first order Taylor series

expansion around the design point from the last cycle. The MPP update is found to

be extremely accurate, especially in the vicinity of the point from the previous

cycle.

5.1 A new sequential RBDO methodology

In this investigation, a framework for efficiently performing RBDO is also developed.

As described earlier, a traditional nested RBDO formulation is extremely impracti-

cal for most real-life design problems of reasonable size (100 variables and 10 fail-

ure modes) and scope (e.g., multidisciplinary systems). Researchers have proposed

sequential RBDO approaches to speed up the optimization process and therefore

obtain a consistent reliability based design. These approaches are practical and at-


tractive because of the fact that a workable design can be obtained at considerably

lower computational cost. In the following, a new sequential RBDO methodology

is described in which the main optimization and reliability assessment phases are

decoupled. The sensitivities of the MPPs with respect to the design variables are

used to update them during the deterministic optimization phase. This provides a

good estimate of the MPP as the design variables are varied by the optimizer during

the deterministic optimization phase.

The flowchart of the proposed RBDO methodology is shown in Figure 5.1. The

Figure 5.1. Proposed RBDO Methodology: deterministic optimization, inverse reliability assessment, a convergence check, and the calculation of the optimal sensitivities of the MPPs are repeated until the final design is obtained.

methodology consists of the following steps.

1. Given an initial design d^0, set the iteration counter k = 0.


2. Solve the following deterministic optimization problem starting from the design d^k to get a new design d^{k+1}:

\min_d \; f(d, p) \quad (5.1)

\text{subject to } g^R_i(x_i^*, \eta) \geq 0, \quad i = 1, \ldots, N_{hard} \quad (5.2)

g^D_j(d, p) \geq 0, \quad j = 1, \ldots, N_{soft} \quad (5.3)

d^l \leq d \leq d^u \quad (5.4)

During the first iteration (k = 0), the MPP of failure for evaluating the probabilistic constraints is set equal to the mean values of the random variables (x_i^* = µ_x). It should be noted that the means of the random variables (distribution parameters) are a subset of the design variables and fixed parameters (see Equation (4.6)). This corresponds to solving the deterministic optimization problem (Equations (2.1)-(2.4)). From the author's experience, it has been observed that starting from a deterministic solution results in a lower computational cost for RBDO.

In subsequent iterations (k > 0), the MPP of failure for evaluating the probabilistic constraints is obtained from the first order Taylor series expansion about the previous design point (a numerical sketch of this update is given after the list below):

u_i^* = u_i^{*,k} + \frac{\partial u_i^{*,k}}{\partial d} (d - d^k), \quad i = 1, \ldots, N_{hard}. \quad (5.5)

Note that \partial u_i^{*,k} / \partial d is a matrix whose columns contain the gradient of the MPP with respect to each of the decision variables. For example, the first column of the matrix contains the gradient of the MPP vector, u_i^*, with respect to the first design variable d_1. The MPPs in X-space are obtained by using the probability distribution transformation.

3. At the new design d^{k+1}, perform an exact first order inverse reliability analysis (Equations (4.21)-(4.22)) for each hard constraint. This gives the MPP of failure of each hard constraint, u_i^{*,k+1}.

4. Check for convergence of the design variables and the MPPs, and check that the constraints are satisfied. If converged, stop. Else, go to the next step.

5. Compute the post-optimal sensitivities \partial u_i^{*,k} / \partial d for each hard constraint (i.e., how the MPP of failure will change with a change in the design variables; see the section below).

6. Set k = k + 1 and go to step 2.
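The linearized MPP update of Eq. (5.5) amounts to one matrix-vector product per hard constraint, as sketched below; the MPP, the post-optimal sensitivity matrix, and the designs are hypothetical numbers chosen only to show the shapes involved.

```python
# Sketch of the first order MPP update of Eq. (5.5) used inside the
# deterministic optimization phase: u_i* ~ u_i*(k) + (du_i*/dd) (d - d^k).
# All numbers are hypothetical and only illustrate the dimensions involved.
import numpy as np

u_star_k = np.array([1.2, -0.8, 0.3])          # MPP (in U-space) at design d^k
du_star_dd = np.array([[0.10, -0.05],          # d u*/d d: one column per
                       [0.02,  0.04],          # design variable
                       [-0.01, 0.03]])
d_k = np.array([5.0, 15.0])                    # design at the last cycle
d_new = np.array([5.4, 14.6])                  # design proposed by the optimizer

u_star_approx = u_star_k + du_star_dd @ (d_new - d_k)
print(u_star_approx)                           # approximate MPP at the new design
```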

5.1.1 Sensitivity of Optimal Solution to Problem Parameters

The proposed RBDO framework requires the sensitivities of the MPPs with re-

spect to the design variables. The post-optimal sensitivities are needed to update


the MPPs based on linearization around the previous design point. The following

techniques could be used to compute the post-optimal sensitivities for the MPPs.

1. The sensitivity of the optimal solution to problem parameters can be computed by differentiating the first order Karush-Kuhn-Tucker (KKT) optimality conditions [55]. The Lagrangian L for the inverse reliability optimization problem (Equations (4.21)-(4.22)) is

L = G^R(u, \eta) + \lambda (u^T u - \rho^2), \quad (5.6)

where λ is the Lagrange multiplier corresponding to the equality constraint (say h^R). The first order optimality conditions for this problem are

\frac{\partial L}{\partial u} = \frac{\partial G^R}{\partial u^*} + 2\lambda u^* = 0. \quad (5.7)

Differentiating the first order KKT optimality conditions with respect to a parameter in the vector z = [\theta, \eta]^T, the following linear system of equations is obtained:

\begin{bmatrix} \frac{\partial^2 L}{\partial u^2} & u \\ u^T & 0 \end{bmatrix} \begin{bmatrix} \frac{\partial u^*}{\partial z_l} \\ \frac{\partial \lambda^*}{\partial z_l} \end{bmatrix} + \begin{bmatrix} \frac{\partial^2 L}{\partial u\, \partial z_l} \\ \frac{\partial h^R}{\partial z_l} \end{bmatrix} = 0. \quad (5.8)

The above system needs to be solved for each parameter in the optimization problem to obtain the sensitivity of the optimal solution with respect to that parameter, \partial u^*/\partial z_l. In the proposed sequential RBDO framework, the system of equations needs to be solved only for those parameters that are decision variables in the upper level optimization. It should be noted that the Hessian of the limit state function needs to be computed when using this technique. If the Hessian of the limit state function is not available or is difficult to obtain, other techniques have to be used. In the present implementation, a damped BFGS update is used to obtain the second order information [42] (a sketch of this update is given after the list below). This method is defined by

r_k = \psi_k y_k + (1 - \psi_k) H_k s_k, \quad (5.9)

where the scalar

\psi_k = \begin{cases} 1 & \text{if } s_k^T y_k \geq 0.2\, s_k^T H_k s_k, \\ \dfrac{0.8\, s_k^T H_k s_k}{s_k^T H_k s_k - s_k^T y_k} & \text{if } s_k^T y_k < 0.2\, s_k^T H_k s_k, \end{cases} \quad (5.10)

and s_k and y_k are the changes in the iterate and in the gradient, respectively, from the previous iteration to the current one. The Hessian update is

H_{k+1} = H_k - \frac{H_k s_k s_k^T H_k}{s_k^T H_k s_k} + \frac{r_k r_k^T}{s_k^T r_k}. \quad (5.11)


2. The sensitivity of the optimal solution to problem parameters can also be obtained by using finite difference techniques. These techniques can be extremely expensive as the dimension of the decision variables and the number of hard constraints increase. This is because a full optimization is required to compute the sensitivity of the MPP with respect to each decision variable, and this has to be performed for each hard constraint. However, significant computational savings can be achieved if the previous optimum MPP is used as a warm starting point to compute the change in the MPP as the design variables are perturbed.

3. Approximations to the limit state function can also be utilized to compute the sensitivity of the optimal solution to problem parameters. This technique is described below.

(a) At a given design d^k, perform an inverse reliability analysis to obtain the exact MPP, x_i^{*,k}.

(b) Construct linear approximations of the hard constraint as follows:

g^R_i = g^R_i(x_i^{*,k}, \eta^k) + \left. \frac{\partial g^R_i}{\partial x} \right|_{x_i^{*,k}, \eta^k}^{T} (x - x_i^{*,k}) + \left. \frac{\partial g^R_i}{\partial \eta} \right|_{x_i^{*,k}, \eta^k}^{T} (\eta - \eta^k) \quad (5.12)

(c) Perform an inverse reliability analysis over the linear approximation at perturbed values of the design variables to obtain approximate sensitivities.
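A minimal sketch of the damped BFGS update of Eqs. (5.9)-(5.11) is given below; the step and gradient-difference vectors are illustrative, and the update is shown in isolation rather than embedded in the inverse reliability analysis.

```python
# Sketch of the damped BFGS update of Eqs. (5.9)-(5.11): given the current
# Hessian approximation H_k, the step s_k, and the gradient difference y_k,
# return an updated approximation that stays positive definite.
# The vectors below are illustrative values.
import numpy as np

def damped_bfgs_update(H, s, y):
    sHs = s @ H @ s
    sy = s @ y
    # Eq. (5.10): damping factor psi_k
    if sy >= 0.2 * sHs:
        psi = 1.0
    else:
        psi = 0.8 * sHs / (sHs - sy)
    # Eq. (5.9): damped gradient difference r_k
    r = psi * y + (1.0 - psi) * (H @ s)
    # Eq. (5.11): BFGS-type update with r_k in place of y_k
    Hs = H @ s
    return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / (s @ r)

H0 = np.eye(2)
s = np.array([0.5, -0.2])   # change in the iterate
y = np.array([0.4, -0.1])   # change in the gradient
print(damped_bfgs_update(H0, s, y))
```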

5.2 Test Problems

The decoupled RBDO methodology developed in this investigation is implemented

for a series of analytical, structural, and multidisciplinary design problems. The

methodology is compared to the nested RBDO approach using the PMA approach

for probabilistic constraint evaluation.

5.2.1 Short Rectangular Column

This problem has been used for testing and comparing RBDO methodologies in

Kuschel and Rackwitz [35]. The design problem is to determine the depth h and

width b of a short column with a rectangular cross section and minimal total mass

bh assuming unit mass per unit area. The uncertain vector, X = (P, M, Y ), the

stochastic parameters, and the correlations of the vector elements are listed in Table

5.1. The limit state function in terms of the random vector, X = (P,M, Y ), and


Table 5.1

STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM

Variable             Dist.   Mean/St. dev.   Cor. P   Cor. M   Cor. Y
Axial force (P)      N       500/100         1        0.5      0
Bending moment (M)   N       2000/400        0.5      1        0
Yield stress (Y)     LogN    5/0.5           0        0        1

the limit state parameters, η = (b, h) (which happen to be the same as the design vector d in this problem), is given by

g^R(x, \eta) = 1 - \frac{4M}{b h^2 Y} - \frac{P^2}{(b h Y)^2}. \quad (5.13)

The objective function is given by

f(d) = bh. (5.14)

The depth h and the width b of the rectangular column must satisfy 15 ≤ h ≤ 25 and 5 ≤ b ≤ 15. The allowable failure probability is 0.00621 or, in other words, the reliability index for the failure mode must be greater than or equal to 2.5. The optimization process was started from the point (u^0, d^0) = ((1, 1, −1), (5, 15)). Both approaches result in an optimal solution d* = (8.668, 25.0). The computational effort for this

problem is compared in Table 5.2. The nested approach requires 77 evaluations

of the limit state function and 85 evaluations of its gradients as compared to 31

evaluations of the limit state function and 31 evaluations of its gradients for the

proposed framework. Therefore, it is noted that the proposed methodology for

RBDO is computationally more efficient than the traditional RBDO approach for

this particular problem. The proposed method took three cycles for convergence,

the design history for which is shown in Figure 5.2. It is observed that after the


Table 5.2

COMPUTATIONAL COMPARISON OF RESULTS (SHORT RECTANGULAR

COLUMN)

Formulation        f    ∂f/∂d   g^R   ∂g^R/∂d   ∂g^R/∂u
Nested (PMA)       8    8       77    8         77
Decoupled Method   12   12      31    12        19

first cycle, the new design is very close to the optimal solution. However, at this

design the MPP has not converged. Therefore, by using the proposed methodology,

we are able to converge to the true MPP within a few cycles.

5.2.2 Analytical Problem

This is an analytical multidisciplinary test problem. Even though the problem is

just two-dimensional, it is sufficiently nonlinear and has the attributes of a general

multidisciplinary problem. This problem has two design variables, d_1 and d_2, and two parameters, p_1 and p_2. There are two random variables, X_1 and X_2. The design parameters, p_1 and p_2, are the means of the random variables, X_1 and X_2, respectively. This problem involves a coupled system analysis and has two CAs. The problem has two hard constraints, g^R_1 and g^R_2. The reliability based design


Figure 5.2. Convergence history for the example problem: the design variables d (objective bh) and the MPP components u_1, u_2, u_3 over the optimization cycles.

optimization problem in standard form is as follows.

Minimize   d_1^2 + 10 d_2^2 + y_1

subject to g^R_1 = Y_1(X, \eta)/8 - 1 \geq 0

           g^R_2 = 1 - Y_2(X, \eta)/5 \geq 0

           -10 \leq d_1 \leq 10

           0 \leq d_2 \leq 10

where d_1 = \eta_1, d_2 = \eta_2, and p_1 = \mu_{X_1} = 0, p_2 = \mu_{X_2} = 0,

CA1: Y_1(X, \eta) = \eta_1^2 + \eta_2 - 0.2\, Y_2(X, \eta) + X_1;
     y_1(d, p) = d_1^2 + d_2 - 0.2\, y_2(d, p) + p_1;

CA2: Y_2(X, \eta) = \eta_1 - \eta_2^2 + \sqrt{Y_1(X, \eta)} + X_2;
     y_2(d, p) = d_1 - d_2^2 + \sqrt{y_1(d, p)} + p_2.


It is assumed that the random variables X_1 and X_2 have a uniform distribution over the intervals [-1, 1] and [-0.75, 0.75], respectively. The desired value of the reliability index β_{reqd_i} (for i = 1, 2) is chosen as 3 for both the hard constraints.

Figure 5.3 shows the contours of the merit function and the constraints.

Figure 5.3. Contours of objective and constraints

The zero contours of the hard constraints are plotted at the design parameters, p1 and

p2 (mean of the random variables, X1 and X2). It should be noted that in determin-

istic optimization, two local optima exist for this problem. At the global solution,

only the first hard constraint is active, whereas at the local solution both the hard

constraints are active. They are shown by star symbols. Both of these solutions can

be located easily by choosing different starting points in the design space.

Similarly, two local optimum designs exist for the RBDO problem as well. Both

reliable designs get pushed into the feasible region, characterized with a higher

merit function value and a lower probability of failure. They are shown by the

shaded squares in Figure 5.4.

To locate the two local optimal solutions of this problem, two different starting

points, [−5, 3] and [5, 3], are chosen.

Figure 5.4. Plot showing two reliable optima.

The results corresponding to the starting point [-5, 3] are listed in Table 5.3.

Table 5.3

STARTING POINT [-5,3], SOLUTION [-3.006,0.049]

Cost Measure   Double-Loop RIA   Double-Loop PMA   Decoupled RBDO
SA calls       Not converged     225               65

Starting at the design d = [−5, 3], the proposed decoupled RBDO framework
converges to the reliable optimum point without any difficulty. The proposed
decoupled method requires 65 system analysis evaluations as compared to 225 when

using the traditional double loop PMA method. Analytical gradients were used in

implementing this problem for all methods. Note that the double loop method that


uses the reliability index approach to prescribe the probabilistic constraints does

not converge. For the designs that are visited by the upper level optimizer (say, dk

at the kth iteration), the FORM problem does not have a solution (because of zero

failure probability at these designs). Starting from the design [-5,3], the optimizer

tries to find the local design [-3.006,0.049]. However, it turns out that at this design,

the second hard constraint, gR2 , is never zero in the space of uniformly distributed

random variables, X. Since in the RIA method, the limit state function is enforced

as an equality constraint, the lower level optimizer does not converge.

The results corresponding to the starting point [5,3] are listed in Table 5.4.

Note that the double loop method that uses the RIA for probabilistic constraint

Table 5.4

STARTING POINT [5,3], SOLUTION [2.9277,1.3426]

Cost Measure   Double-Loop RIA   Double-Loop PMA   Decoupled RBDO
SA calls       Not converged     184               65

evaluation fails to converge for this starting point too. Again, the reason for this

is that there is zero failure probability (infinite reliability index) at the designs

visited by the upper-level optimizer and therefore the lower level optimizer does

not provide any true solution. All the other methods converge to the same local

optimum solution. The decoupled methodology developed in this investigation is
found to be significantly more efficient as compared to the nested formulation.


5.2.3 Cantilever Beam

This problem is taken from Thanedar and Kodiyalam [68]. A cantilever beam is

subjected to an oscillatory fatigue loading, Q1, and random design load in service,

Q2. The random variables in the problem are assumed to be independent with

statistical parameters given in Table 5.5.

Table 5.5

STOCHASTIC PARAMETERS IN CANTILEVER BEAM PROBLEM

Variable                       Symbol   Distribution   Mean/St. dev.           Unit
Young's modulus                E        Normal         30000/3000              ksi
Fatigue load                   Q1       Lognormal      0.5056/0.1492           klb
Random load                    Q2       Lognormal      0.4045/0.1492           klb
Unit yield strength            R        Weibull        50/6                    ksi
Fatigue strength coefficient   A        Lognormal      1.6323×10^10/0.4724     ksi

The design variables in the problem are the width (b) and depth (d) of the beam.
The objective is to minimize the weight of the beam (bd) (assuming unit weight per
unit volume) subject to the following hard constraints

gR1 = 0.3 E b³ d / 900 − Q2 ≥ 0,
gR2 = A (6 Q1 L / (b d²)) − N0 ≥ 0,
gR3 = ∆0 − 4 Q2 L³ / (E b d³) ≥ 0,
gR4 = R − 6 Q2 L / (b d²) ≥ 0,

where N0 = 2 × 10^6, ∆0 = 0.15″, and L = 30″. A minimum reliability index of 3

is desired for each failure mode. It is clear that the beam design problem exhibits

nonlinear limit state functions (gR1 through gR4), nonnormal random variables, and
multicriteria constraints.
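A minimal Python sketch of the four limit states follows; it assumes the groupings of the reconstructed expressions above and treats the design vector as (b, d). The mode labels in the comments and the function name are illustrative only.

```python
import numpy as np

# Deterministic constants from the text.
N0, DELTA0, L = 2.0e6, 0.15, 30.0


def cantilever_limit_states(d, x):
    """The four hard constraints gR1..gR4 (g >= 0 is safe), as reconstructed above.

    d = (b, dd): beam width and depth.
    x = (E, Q1, Q2, R, A): one realization of the random variables of Table 5.5.
    """
    b, dd = d
    E, Q1, Q2, R, A = x
    g1 = 0.3 * E * b**3 * dd / 900.0 - Q2             # stiffness-type mode
    g2 = A * (6.0 * Q1 * L / (b * dd**2)) - N0        # fatigue mode (loading Q1)
    g3 = DELTA0 - 4.0 * Q2 * L**3 / (E * b * dd**3)   # tip-deflection mode
    g4 = R - 6.0 * Q2 * L / (b * dd**2)               # static-stress mode
    return np.array([g1, g2, g3, g4])
```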


The optimization process was started from the point, d0 = (1, 1). Both ap-

proaches result in an optimal solution d∗ = (0.2941, 4.5559). The computational

cost for the two methods is compared in terms of the total number of g-function

evaluations taken by each method. The proposed decoupled RBDO method took 238

g-function evaluations as compared to 523 evaluations by the nested RBDO method.

This does not include derivative calculations as analytical first order derivatives were

used. Therefore, it is noted that the proposed methodology is significantly more ef-

ficient compared to the traditional approach while providing the same solution.

5.2.4 Steel Column

This problem is taken from Kuschel and Rackwitz [35]. The problem is a steel

column with design vector, d = (b, d, h), where

b = mean of flange breadth,

d = mean of flange thickness, and

h = mean of height of steel profile.

The length of the steel column (s) is 7500 mm. The objective is to min-

imize the cost function, f = bd + 5h. The independent random vector, X =

(Fs, P1, P2, P3, B, D, H, F0, E), and its stochastic characteristics are given in Table
5.6.

Table 5.6

STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM

Variable             Symbol   Distribution   Mean/Standard deviation   Unit
Yield stress         Fs       Lognormal      400/35                    MPa
Dead weight load     P1       Normal         500000/50000              N
Variable load        P2       Gumbel         600000/90000              N
Variable load        P3       Gumbel         600000/90000              N
Flange breadth       B        Lognormal      b/3                       mm
Flange thickness     D        Lognormal      d/2                       mm
Height of profile    H        Lognormal      h/5                       mm
Initial deflection   F0       Normal         30/10                     mm
Young's modulus      E        Weibull        21000/4200                MPa

The limit state function in terms of the random vector, X, and the limit state
parameters, η = d, is given as

GR(X, η) = Fs − P (1/As + (F0/Ms) · εb/(εb − P)),

where

As = 2BD   (area of section),
Ms = BDH   (modulus of section),
Mi = (1/2) B D H²   (moment of inertia),
εb = π² E Mi / s²   (Euler buckling load).

The means of the flange breadth b and flange thickness d must be within the intervals

[200, 400] and [10, 30] respectively. The interval [100, 500] defines the admissible

mean height h of the T-shaped steel profile. It is required that the optimal design

satisfies a reliability level of 3.
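A minimal Python sketch of the steel column limit state and cost function is given below. It assumes that P denotes the total load P1 + P2 + P3, which is the usual reading of this benchmark but is not stated explicitly in the excerpt above; names and the module-level constant are illustrative.

```python
import math

S_LENGTH = 7500.0  # column length s, in mm


def steel_column_limit_state(x):
    """Limit state GR for the steel column (GR >= 0 is safe).

    x = (Fs, P1, P2, P3, B, D, H, F0, E) as in Table 5.6.
    The total load P = P1 + P2 + P3 is an assumption of this sketch.
    """
    Fs, P1, P2, P3, B, D, H, F0, E = x
    P = P1 + P2 + P3
    As = 2.0 * B * D                          # area of section
    Ms = B * D * H                            # modulus of section
    Mi = 0.5 * B * D * H**2                   # moment of inertia
    eb = math.pi**2 * E * Mi / S_LENGTH**2    # Euler buckling load
    return Fs - P * (1.0 / As + (F0 / Ms) * eb / (eb - P))


def steel_column_cost(d):
    """Cost function f = b*d + 5*h for the design vector d = (b, d, h)."""
    b, dd, h = d
    return b * dd + 5.0 * h
```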

Again, both the methods yield the same optimal solution, d = (200, 17.1831, 100).

The computational cost of the two approaches is compared in terms of the number

of g-function evaluations taken by each method. The proposed decoupled RBDO


methodology took 236 evaluations of the limit state function as compared to 457

evaluations taken by the nested RBDO approach. This does not include derivative

calculations as analytical first order derivatives were used. Again, it is noted that

the proposed methodology is significantly more efficient compared to the traditional

approach while providing the same solution.

5.3 Summary

A new decoupled iterative RBDO methodology is presented. The deterministic

optimization phase is separated from the reliability analysis phase. During the de-

terministic optimization phase the most probable point of failure corresponding to

each failure mode is obtained by using a first order Taylor series expansion about
the design point from the previous cycle. The most probable point update during
deterministic optimization requires the sensitivities of the MPPs with respect to
the design vector. This requires the second order derivatives of each failure mode.

In this investigation, a damped BFGS update scheme is employed to compute the

second order derivatives. It is observed that the estimated most probable point

converges to the exact values in a few cycles. This implies that the Hessian up-

date scheme gives an accurate estimate of the second order information of the limit

state function. The framework is tested using a series of structural and multidisci-

plinary design problems. It is found that the proposed methodology provides the

same solution as the traditional nested optimization formulation, and is significantly

more computationally efficient. For the problems considered, the decoupled RBDO

methodology reduces the computational cost by a factor of 2 to 3 as compared to the
traditional approach.
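The MPP estimate used during the deterministic phase can be sketched as the first-order Taylor update below. The sensitivity matrix du*/dd is assumed to be supplied by the reliability-analysis phase; its computation, which relies on the damped BFGS approximation of the second order information, is not reproduced here.

```python
import numpy as np


def approx_mpp(u_prev, dudd_prev, d_prev, d_new):
    """First-order Taylor estimate of an MPP inside the deterministic phase.

    u_prev      : MPP of a failure mode from the previous reliability-analysis cycle.
    dudd_prev   : sensitivity matrix du*/dd evaluated at (u_prev, d_prev).
    d_prev, d_new: previous-cycle design and the design proposed by the optimizer.
    """
    return u_prev + dudd_prev @ (np.asarray(d_new) - np.asarray(d_prev))
```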


CHAPTER 6

UNILEVEL RBDO METHODOLOGY

In this chapter, a novel unilevel formulation for reliability based design optimiza-

tion is developed. In this formulation the lower level optimization (evaluation of

reliability constraints in the double-loop formulation) is replaced by its correspond-

ing first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the

upper level optimization. It is shown that such a replacement is computationally

equivalent to solving the original nested optimization if the lower level optimization

problem is solved by numerically satisfying the KKT conditions (which is typically

the case). Numerical studies show that the proposed formulation is numerically

robust (stable) and computationally efficient compared to the existing approaches

for reliability based design optimization.

6.1 A new unilevel RBDO methodology

The main focus of this research has been to develop a robust and efficient for-

mulation for performing RBDO. As mentioned earlier, the probabilistic constraint

specification using the performance measure approach is robust compared to the

reliability index approach. However, the methodology is still nested and is hence

expensive. In this research, the inverse reliability analysis optimization problem

is replaced by the corresponding first order necessary Karush-Kuhn-Tucker (KKT)

optimality conditions. The KKT conditions for the reliability constraints similar to


PMA (Eqns. (4.21)-(4.22)) are used. The treatment of Eqn. (4.22) is a bit subtle.

No simple modification of Eqn. (4.22) will result in an equality constraint that is

both quasiconvex and quasiconcave, which would be required for the sufficiency of

the KKT conditions. For necessity of the KKT conditions, observe that ‖u‖ − ρ is

convex and ‖u‖ − ρ ≤ 0 trivially satisfies Slater’s constraint qualification (feasible

set has a strictly interior point) [41]. Assume that GR(u, η) is pseudoconvex with
respect to u for each fixed η. Now GR(u, η) pseudoconvex and ‖u‖ − ρ convex means

that the KKT conditions are also sufficient, hence the original and KKT formulation

will be equivalent. Therefore, to facilitate development of the current method, the

inverse FORM can be restated as

min GR_i(u, η)     (6.1)
subject to ‖u‖ ≤ ρ.     (6.2)

The Lagrangian corresponding to the optimization problem is

L = GR(u, η) + λ(‖u‖ − ρ),     (6.3)

where λ is the scalar Lagrange multiplier. The first order necessary conditions for

the problem are

∇u GR(u∗, η) + λ ∇u(‖u∗‖ − ρ) = 0,     (6.4)
‖u∗‖ − ρ ≤ 0,     (6.5)
λ ≥ 0,     (6.6)
λ(‖u∗‖ − ρ) = 0,     (6.7)

where u∗ is the solution point u∗_{β=ρ} of the inverse reliability optimization problem
when ‖u∗‖ = ρ. u∗ = 0 is a special degenerate case, so assume henceforth that
u∗ ≠ 0. From equation (6.4), we have (assuming λ ≠ 0)

u∗ = −(‖u∗‖/λ) ∇u GR(u∗, η).     (6.8)


Observe that Eqn. (6.8) implies

λ = ‖∇u GR(u∗, η)‖ ≥ 0,     (6.9)

which is consistent with Eqn. (6.6) and is valid even if λ = 0. Substituting for λ in
equation (6.8) and rearranging,

∇u GR(u∗, η) / ‖∇u GR(u∗, η)‖ = −u∗ / ‖u∗‖.     (6.10)

Eqn. (6.10) says that u∗ and ∇u GR(u∗, η) point in opposite directions, which is
consistent with u∗ being the closest point in the manifold GR(u, η) = constant to
the origin.

Eqn. (6.10) is true for all η, if u∗ = u∗_{β=ρ} is the solution to the inverse reliability

optimization problem, because ρ − ‖u‖ ≤ 0 satisfies the reverse convex constraint

qualification (the equality constraint (4.22) is equivalent to the convex constraint

(6.2) and ρ − ‖u‖ ≤ 0, hence constraint qualifications are satisfied and the KKT

condition (6.10) is necessary). In general, without the pseudoconvexity assumption

on GR, solving equation (6.10) does not necessarily imply that u∗ is the optimal

solution to the optimization problem.

It should be noted that the KKT conditions for the direct and inverse FORM

problems differ only in terms of what constraints are being presented as equality

constraints to the upper level optimizer. When using the KKT conditions of the

direct FORM problem in the upper level optimization, the limit state function is

presented as an equality constraint and the constraint on the reliability index is

an inequality constraint. As mentioned earlier, it is possible to have cases where

the limit state function never becomes zero. In other words, it is associated with

zero (or one) failure probability. When such a case occurs, the formulation given

by Kuschel and Rackwitz [36] might fail to yield a solution. In other words, it is

numerically unstable.


In the current unilevel formulation, the first order conditions of the inverse

FORM problem are used. The corresponding KKT conditions for the inverse relia-

bility problem (Eqns. (6.1)-(6.2) ) are

h1_i ≡ ∇u GR_i(u_i, η) + λ_i u_i/‖u_i‖ = 0,     (6.11)
g1_i ≡ ‖u_i‖ − βreqd_i ≤ 0,     (6.12)
h2_i ≡ λ_i(‖u_i‖ − βreqd_i) = 0,     (6.13)
g2_i ≡ λ_i ≥ 0.     (6.14)

Using these first order optimality conditions, the unilevel RBDO architecture can

be stated as follows

min over daug   f(d, p, y(d, p))     (6.15)

daug = [d, u_1, .., u_Nhard, λ_1, .., λ_Nhard]

sub. to   GR_i(u, η) ≥ 0,   i = 1, .., Nhard,     (6.16)
          h1_i = 0,   i = 1, .., Nhard,     (6.17)
          h2_i = 0,   i = 1, .., Nhard,     (6.18)
          g1_i ≤ 0,   i = 1, .., Nhard,     (6.19)
          g2_i ≥ 0,   i = 1, .., Nhard,     (6.20)
          gD_j(d, p, y(d, p)) ≥ 0,   j = 1, .., Nsoft,     (6.21)
          dl ≤ d ≤ du.     (6.22)

If d∗ is a solution of Eqns. (4.1)–(4.4), then there exist u∗_i and λ∗_i such that
[d∗, u∗_1, .., u∗_Nhard, λ∗_1, .., λ∗_Nhard] is a solution of Eqns. (6.15)–(6.22). The converse is
true, under the mild assumption that all the functions GR_i(u, η) are pseudoconvex
in u for each fixed η.
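As an illustration of how Eqns. (6.11)–(6.14) enter the upper level problem, the Python sketch below assembles the KKT residuals for a single hard constraint so that they can be passed to a general-purpose optimizer as equality and inequality constraints. The callable grad_GR and the function signature are assumptions of this sketch.

```python
import numpy as np


def unilevel_kkt_constraints(u_i, lam_i, eta, grad_GR, beta_reqd):
    """Return (h1, g1, h2, g2) of Eqns. (6.11)-(6.14) for one failure mode.

    u_i     : current estimate of the MPP for this mode (part of d_aug).
    lam_i   : corresponding Lagrange multiplier (part of d_aug).
    grad_GR : callable giving the gradient of GR_i with respect to u at (u, eta).
    """
    norm_u = np.linalg.norm(u_i)
    h1 = grad_GR(u_i, eta) + lam_i * u_i / norm_u   # stationarity, Eqn. (6.11)
    g1 = norm_u - beta_reqd                          # ||u_i|| <= beta_reqd, Eqn. (6.12)
    h2 = lam_i * (norm_u - beta_reqd)                # complementarity, Eqn. (6.13)
    g2 = lam_i                                       # multiplier nonnegativity, Eqn. (6.14)
    return h1, g1, h2, g2
```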

It should be noted that the dimensionality of the problem has increased, as

in the unilevel method given in Kuschel and Rackwitz [36]. The optimization is


performed with respect to the design variables d, the MPPs of failure, and the

Lagrange multipliers, simultaneously. At the beginning of the optimization, ui does

not correspond to the true MPP at the design d. The exact MPPs of failure ui∗

and the optimum design d∗ are found at convergence.

6.2 Test Problems

The proposed unilevel method is implemented for a simple analytical problem to

illustrate the method. A higher dimensional multidisciplinary structures control

problem is used to illustrate the efficacy of the method for relatively medium size

(20-30 variables) engineering problems. The double loop methods (DLM) for RBDO

that use the reliability index approach (RIA) or the performance measure approach

(PMA) for reliability constraint evaluation are compared to the unilevel methods on

the analytical problem. The unilevel method developed by Kuschel and Rackwitz

[36] is referred to as unilevel-RIA and the method developed in this investigation is
referred to as unilevel-PMA. The proposed methodology (unilevel-PMA) is also com-

pared with the double-loop PMA approach on the structures control problem.

6.2.1 Analytical Problem

The method is illustrated with a small analytical multidisciplinary test problem.

This problem is chosen to illustrate the robustness of the proposed formulation

compared to other methods. Even though the problem is just two-dimensional, it is

sufficiently nonlinear and has the attributes of a general multidisciplinary problem.

This problem has two design variables, d1 and d2, and two parameters, p1 and p2.

There are two random variables, X1 and X2. The design parameters, p1 and p2, are

the means of the random variables, X1 and X2, respectively. This problem involves

a coupled system analysis and has two CAs. The problem has two hard constraints,

gR1 and gR2. The reliability-based design optimization problem in standard form is


as follows.

Minimize :    d1² + 10 d2² + y1
subject to :  gR1 = Y1(X, η)/8 − 1 ≥ 0
              gR2 = 1 − Y2(X, η)/5 ≥ 0
              −10 ≤ d1 ≤ 10
              0 ≤ d2 ≤ 10
where         d1 = η1, d2 = η2
and           p1 = µX1 = 0, p2 = µX2 = 0
CA1:          Y1(X, η) = η1² + η2 − 0.2 Y2(X, η) + X1;
              y1(d, p) = d1² + d2 − 0.2 y2(d, p) + p1;
CA2:          Y2(X, η) = η1 − η2² + √(Y1(X, η)) + X2;
              y2(d, p) = d1 − d2² + √(y1(d, p)) + p2.

It is assumed that the random variables X1 and X2 have a uniform distribution over

the intervals [-1,1] and [-0.75,0.75] respectively. The desired value of the reliability

index βreqd_i (for i = 1, 2) is chosen as 3 for both the hard constraints.

Figure 6.1 shows the contours of the merit function and the constraints. The

zero contours of the hard constraints are plotted at the design parameters, p1 and

p2 (mean of the random variables, X1 and X2). It should be noted that in determin-

istic optimization, two local optima exist for this problem. At the global solution,

only the first hard constraint is active, whereas at the local solution both the hard

constraints are active. They are shown by star symbols. Both of these solutions can

be located easily by choosing different starting points in the design space.

Similarly, two local optimum designs exist for the RBDO problem as well. Both

reliable designs get pushed into the feasible region, characterized with a higher
merit function value and a lower probability of failure. They are shown by the
shaded squares in Figure 6.2.

Figure 6.1. Contours of objective and constraints

To locate the two local optimal solutions of this problem, two different starting

points, [−5, 3] and [5,3], are chosen. The results corresponding to the starting point

[-5,3] are listed in Table 6.1.

Table 6.1

STARTING POINT [-5,3], SOLUTION [-3.006,0.049]

Cost Measure   Double-Loop RIA   Double-Loop PMA   Unilevel-RIA    Unilevel-PMA
SA calls       Not converged     225               Not converged   24

Starting at the design d = [−5, 3], the proposed unilevel method, which uses

the KKT conditions of the performance measure approach (PMA) to prescribe the

probabilistic constraints, converges to the reliable optimum point without any
difficulty.

Figure 6.2. Plot showing two reliable optima.

The proposed unilevel method requires 24 system analysis evaluations as

compared to 225 when using the traditional double loop PMA method. Analyti-

cal gradients were used in implementing this problem for all methods. Note that

the double loop method that uses the reliability index approach to prescribe the

probabilistic constraints does not converge. For the designs that are visited by the

upper-level optimizer (say, dk at the kth iteration), the FORM problem does not

have a solution (because of zero failure probability at these designs). Similar con-

clusions can be drawn for the unilevel-RIA method. Starting from the design [-5,3],

the optimizer tries to find the local design [-3.006,0.049]. However, it turns out that

at this design, the second hard constraint, gR2 , is never zero in the space of uniformly

distributed random variables, X. Since in the unilevel-RIA method, the limit state

function is enforced as an equality constraint, the optimizer does not converge.

The results corresponding to the starting point [5,3] are listed in Table 6.2.

Note that the double loop method that uses the RIA for probabilistic constraint


Table 6.2

STARTING POINT [5,3], SOLUTION [2.9277,1.3426]

Cost Measure   Double-Loop RIA   Double-Loop PMA   Unilevel-RIA   Unilevel-PMA
SA calls       Not converged     184               24             21

evaluation fails to converge for this starting point too. Again, the reason for this

is that there is zero failure probability (infinite reliability index) at the designs

visited by the upper-level optimizer and therefore the lower level optimizer does

not provide any true solution. All the other methods converge to the same local

optimum solution. The computational cost associated with the two unilevel methods

is comparable. However, the unilevel-PMA method developed here is numerically

robust compared to the unilevel-RIA method. Both unilevel methods are found to

be computationally more efficient as compared to the double loop methods.

6.2.2 Control Augmented Structures Problem

Figure 6.3 shows the control-augmented structure as proposed by Sobieszczanski-

Sobieski et al [65]. This test problem has been used by Padmanabhan et al [48] for

testing RBDO methodologies. The problem as described in Padmanabhan et al [48]

is given here. There are two coupled disciplines (contributing analyses (CAs)) in

this problem. They are the structures subsystem and the controls subsystem. The

structure is a 5-element cantilever beam, numbered 1-5 from the free end to the fixed

end, as shown in the Figure 6.3. Each element is of equal length, but the breadth

and height are design variables. Three static loads, T1, T2, and T3, are applied to

the first three elements. The beam is also acted on by a time varying force P, which
is a ramp function.

Figure 6.3. Control augmented structures problem.

Controllers A and B are designed as an optimal linear quadratic

regulator to control the lateral and rotational displacements of the free end of the

beam, respectively. The analysis is coupled since the weights of the controllers,

which are assumed to be proportional to the control effort, are required for the mass

matrix of the structure, and one requires the eigenfrequencies and eigenvectors of

the structure in the modal analysis for designing the controller, as shown in Figure

6.4.

Figure 6.4. Coupling in control augmented structures problem.

The damping matrix is taken to be proportional to the stiffness matrix by a
factor c for the dynamic analysis of the structure. This damping parameter is also

a design variable. The constraints arise from limits on static stresses, static
and dynamic displacements, and the natural frequencies. The main objective is to

minimize the total weight of the beam and the controllers.

Computation of Sensitivities of the SA

Sensitivities for the control augmented system analysis can be estimated using

a finite difference scheme, or one can use analytic techniques. The sensitivities

obtained from the analytical techniques are superior to those calculated from finite

difference techniques especially when used for coupled systems, since the use of finite

difference techniques can give inaccurate and “noisy” derivatives, and also because

it is difficult to obtain accurate sensitivities for certain outputs like the natural

frequencies and the mode shapes using finite difference techniques.

Since the problem being considered is coupled, one needs to use global sensitivity

equations (GSEs), which are based on the implicit function differentiation rule.

Sensitivities of the outputs of the structures’ module can be found using analytic

and numerical techniques [26]. The sensitivities of the static displacements and

stresses are quite easy to compute, but the computation of sensitivities of natural

frequencies and corresponding mode shapes is more involved and can be done using

various methods like Nelson’s method, the modal method, and the modified modal

method. Sutter et al [66] compare these methods in terms of computational costs

and rate of convergence. Analytic sensitivities of the controls' output require the

computation of sensitivities of the solution of an algebraic Riccati equation used for

obtaining a linear quadratic regulator, as described by Khot [30].

The design variables for this test problem are

d = [b1, b2, b3, b4, b5, h1, h2, h3, h4, h5, c]T , (6.23)

where


bi, hi = breadth and height of the ith element, resp.,

c = damping matrix to stiffness matrix ratio (scalar).

The random variables in the problem are

ρ = density of the beam material,

E = modulus of elasticity of the beam material, and

σa = ultimate static stress.

The constraints for the problem are formulated in terms of the allowable dis-

placements (lateral and rotational), first and second natural frequencies, and the

stresses. They are

g_i = 1 − (dl_i / dl_a)²,   i = 1, .., 5,
g_i+5 = 1 − (dr_i / dr_a)²,   i = 1, .., 5,
g_11 = ω1/ω1a − 1,
g_12 = ω2/ω2a − 1,
g_2i+11 = 1 − σr_i / σ_a,   i = 1, .., 5,
g_2i+12 = 1 − σl_i / σ_a,   i = 1, .., 5,
g_i+22 = 1 − (ddl_i / ddl_a)²,   i = 1, .., 5,
g_i+27 = 1 − (ddr_i / ddr_a)²,   i = 1, .., 5,

where

dl_i, dr_i = static lateral and rotational displacements of the ith element, resp.,
dl_a, dr_a = maximum allowable static lateral and rotational displacements,
ω1, ω2 = first and second natural frequencies,
ω1a, ω2a = minimum required values for the first and second natural frequencies,
σr_i, σl_i = maximum static stresses at the right and left ends of the ith element,
σ_a = maximum allowable static stress,
ddl_i, ddr_i = dynamic lateral and rotational displacements of the ith element, and
ddl_a, ddr_a = maximum allowable dynamic lateral and rotational displacements.

The random variables for this problem, σa, ρ, and E, are assumed to be indepen-

dent and normally distributed with statistical parameters given in Table 6.3.

Table 6.3

STATISTICAL INFORMATION FOR THE RANDOM VARIABLES

Random Variable   Mean     Standard Deviation
σa (psi)          30,000   3,000
ρ (lb/in³)        0.1      0.01
E (ksi)           10,500   1,050

For RBDO test studies, constraints g1, g6, g14, g16, g18, g20, g22, and g28 are

considered to be more important and therefore only these are considered as hard

constraints. The rest of the constraints are considered as soft (deterministic) con-

straints. The system probability of failure (Pallowsys) was required to be 0.001, which

was equally distributed among the 8 failure modes. This gives a desired reliability

index of βreqd_i = −Φ⁻¹(Pallowsys/8) = 3.6623 for each failure mode.

The RBDO was performed using two different methods, the double loop method

that uses the PMA approach to prescribe the probabilistic constraint and the

unilevel-PMA method described earlier. The unilevel-PMA method for this test

case results in a nontrivial problem. Since there are 8 failure modes and three ran-

dom variables for each failure mode, there are 32 equality constraints imposed by

the unilevel method. Also, since the unilevel method is solved in an augmented


design space consisting of the original design variables, the MPP of failure for each

failure mode, and the Lagrange multipliers for each failure mode, the dimensionality

of the design vector for this test case is 43. It should be noted that the sensitivities

of the first order KKT conditions (Eqn. (6.4)) require calculation of second order

information for the failure modes with respect to the augmented design variable

vector. In the present implementation, a damped BFGS update is used to obtain

the second order information [42]. This method is defined by

r_k = ψ_k y_k + (1 − ψ_k) H_k s_k,     (6.24)

where the scalar

ψ_k = 1                                                  if s_k^T y_k ≥ 0.2 s_k^T H_k s_k,
ψ_k = (0.8 s_k^T H_k s_k) / (s_k^T H_k s_k − s_k^T y_k)   if s_k^T y_k < 0.2 s_k^T H_k s_k,     (6.25)

and s_k and y_k are the changes in the optimization variables and in the corresponding
gradients from the previous iteration to the current iteration, respectively. The
Hessian update is

H_{k+1} = H_k − (H_k s_k s_k^T H_k)/(s_k^T H_k s_k) + (r_k r_k^T)/(s_k^T r_k).     (6.26)
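A minimal implementation of the damped BFGS update of Eqns. (6.24)–(6.26), assuming dense NumPy arrays, is given below; function and variable names are illustrative.

```python
import numpy as np


def damped_bfgs_update(H, s, y):
    """Damped BFGS update of Eqns. (6.24)-(6.26); keeps H positive definite.

    H : current Hessian approximation (n x n array).
    s : change in the (augmented) optimization variables.
    y : change in the corresponding gradient.
    """
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    if sy >= 0.2 * sHs:
        psi = 1.0
    else:
        psi = 0.8 * sHs / (sHs - sy)
    r = psi * y + (1.0 - psi) * Hs                                  # Eqns. (6.24)-(6.25)
    return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / (s @ r)    # Eqn. (6.26)
```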

The starting and the final designs for RBDO are given in Table 6.4. The

deterministic optimum design was chosen to be the starting design for RBDO. It

was noted that this significantly reduces the required number of system analysis

evaluations. For the proposed method the designer has to choose a starting point

for the MPPs in the augmented design vector. Different choices for the initial MPPs

may lead to different final designs. In this study, an inverse reliability analysis was

performed at the initial design d to identify a suitable starting point for the MPP

of failure for each hard constraint. At the initial design, g1 is active. Both the

DLM-PMA and the unilevel-PMA methods yield the same final design. Note that

the value of the merit function (weight of the beam) has increased considerably at


Table 6.4

MERIT FUNCTION AT THE INITIAL AND FINAL DESIGNS

Variable   Initial Design   Final Design
b1−5       3                3
h1         3.703            3.5805
h2         7.040            8.5816
h3         9.807            12.136
h4         11.998           14.863
h5         13.840           17.162
c          0.06             0.06
f          1493.97          1753.5

the final design. This is expected for a more reliable structure to account for the

variation in the random variables.

The values of the hard constraints at the final design are given in Table 6.5.

It should be noted that the constraints g6, g16, g18, g20, and g22 dictate the system

failure. The reliability constraints corresponding to these constraints are the only

active constraints in the RBDO. The other hard constraints have a value greater

than zero, which means that the reliability index corresponding to those constraints

is much higher than the desired index.

The computational cost of the two methods is compared in Table 6.6. The

proposed method is observed to be twice as fast as the nested approach. Therefore,

the proposed method is not only a robust formulation for RBDO problems, but is

also computationally efficient.

6.3 Summary

A traditional RBDO methodology is very expensive for designing engineering sys-

tems. To address this issue, a new unilevel formulation for RBDO is developed.


Table 6.5

HARD CONSTRAINTS AT THE FINAL DESIGN

gi    value at optimum
g1    0.2232
g6    4.7 × 10^−8
g14   0.1794
g16   1.1 × 10^−16
g18   1.1 × 10^−16
g20   1.1 × 10^−16
g22   3.3 × 10^−16
g28   0.3749

Table 6.6

COMPARISON OF COMPUTATIONAL COST OF RBDO METHODS

Method         SA Calls
DLM-PMA        493
Unilevel-PMA   261

The first order KKT conditions corresponding to the probabilistic constraint (as

in PMA) are enforced directly at the system level optimizer, thus eliminating the

lower level optimizations used to compute the probabilistic constraints. The pro-

posed formulation is solved in an augmented design space that consists of the original

decision variables, the MPP of failure corresponding to each failure mode,

and Lagrange multipliers. It is mathematically equivalent to the original nested op-

timization formulation if the inner optimization problem is solved by satisfying the

KKT conditions (which is effectively what most numerical optimization algorithms

do). Under mild pseudoconvexity assumptions on the hard constraints, the proposed


formulation is mathematically equivalent to the original nested formulation. The

method is tested using a simple analytical problem and a multidisciplinary struc-

tures control problem, and is observed to be numerically robust and computationally

efficient compared to the existing approaches for RBDO.

It is noted that the proposed formulation for RBDO is accompanied by a large

number of equality constraints. Most commercial optimizers exhibit numerical insta-

bility or show poor convergence behavior for problems with large numbers of equality

constraints. In the next chapter, continuation methods have been employed to solve

the unilevel RBDO problem.


CHAPTER 7

CONTINUATION METHODS IN OPTIMIZATION

In the unilevel formulation developed in the last chapter, the KKT conditions of

the inner optimization for each probabilistic constraint evaluation are imposed at

the system level as equality constraints. Most commercial optimizers are usually

numerically unstable when applied to problems accompanied by many equality con-

straints. In this chapter, continuation methods are used for constraint relaxation

and to obtain a simpler problem for which the solution is known. A series of opti-

mization problem are then solved as the relaxed optimization problem approaches

the original problem.

7.1 Proposed Algorithm

Since the problem of interest is accompanied by a large number of equality con-

straints, it is extremely important that the constraint relaxation techniques be such

that it is easier to identify an initial feasible starting point. In this investigation,

continuation methods have been used for this purpose[73]. Homotopy (continua-

tion) techniques have been shown to be extremely robust in the work of Perez et
al. [50]. The constraint relaxation used in this investigation is of the following form


gr ≡ g + (1− τ)b ≥ 0, (7.1)

hr ≡ h + (1− τ)c = 0, (7.2)

where g and h are generic inequalities and equalities, gr and hr are the relaxed

inequalities and equalities respectively. The constants b and c are chosen to make

the relaxed constraints feasible at the beginning. For the inequalities, b is based on

the value of g: if g ≥ 0, b is set equal to zero; if g < 0, b is set equal to the negative
of g. Similarly, for the equalities, the constant c is evaluated to satisfy the relaxed
equality at the initial design; it is set equal to the negative of the value of h in the
current studies. The homotopy parameter τ drives the relaxed constraints to the

original constraints by gradually adjusting τ = 0 → τ = 1.

After each cycle, the homotopy parameter τ is updated. In this investigation,

it is incremented by a constant value. Note that the homotopy parameter τ helps

in gradually solving simpler problems from a known solution. As the parameter is

changed from 0 to 1, the solution to the original problem is found.
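The relaxation of Eqns. (7.1)–(7.2) and the outer continuation loop can be sketched as follows. The choice of b and c follows the rules stated above, while the fixed increment for τ shown in the comment is an assumed value; names are illustrative.

```python
import numpy as np


def relaxed_constraints(g0, h0, tau, g, h):
    """Relaxed constraints of Eqns. (7.1)-(7.2) for a given homotopy parameter tau.

    g0, h0 : inequality/equality values at the initial design, used to pick b and c.
    g, h   : inequality/equality values at the current design.
    """
    b = np.where(np.asarray(g0) >= 0.0, 0.0, -np.asarray(g0))  # feasible relaxed inequalities at tau = 0
    c = -np.asarray(h0)                                        # feasible relaxed equalities at tau = 0
    g_r = np.asarray(g) + (1.0 - tau) * b
    h_r = np.asarray(h) + (1.0 - tau) * c
    return g_r, h_r


# Outer continuation loop (the increment of 0.1 is an assumed value):
# for tau in np.linspace(0.0, 1.0, 11):
#     solve the relaxed unilevel RBDO problem, warm-started from the previous
#     solution, with constraints given by relaxed_constraints(g0, h0, tau, g, h)
```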

7.2 Test Problems

The proposed algorithm is implemented for a short rectangular column design prob-

lem and a steel column design problem. Both the test problems are taken from the

literature and have been used to test RBDO methodologies.

7.2.1 Short Rectangular Column

The design problem is to determine the depth h and width b of a short column

with rectangular cross section with a minimal total mass bh assuming unit mass

per unit area. The uncertain vector, X = (P,M, Y ), the stochastic parameters,

and the correlations of the vector elements are listed in Table 7.1.

Table 7.1

STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM

Variable             Dist.   Mean/St. dev.   Cor. P   Cor. M   Cor. Y
Axial force (P)      N       500/100         1        0.5      0
Bending moment (M)   N       2000/400        0.5      1        0
Yield stress (Y)     LogN    5/0.5           0        0        1

The limit state function in terms of the random vector, X = (P, M, Y), and the
limit state parameters, η = (b, h) (which happens to be the same as the design
vector d in this problem), is given by

gR(X, η) = 1 − 4M/(bh²Y) − P²/(bhY)².     (7.3)

The objective function is given by

f(d) = bh. (7.4)

The depth h and the width b of the rectangular column must satisfy 15 ≤ h ≤ 25
and 5 ≤ b ≤ 15. The allowable failure probability is 0.00621, which corresponds to a
reliability index for the failure mode greater than or equal to 2.5. The optimiza-

tion process was started from the point (u0,d0, λ0) = ((1, 1,−1), (5, 15), 0.1). The

optimal solution for this problem is d∗ = (8.668, 25.0).

Figure 7.1 shows the history of the objective function. It is noted that as

the value of the homotopy parameter τ increases from 0 to 1, the objective function

gradually approaches the optimal solution.

Figure 7.2 shows the history of the augmented design variables. Observe that

the variables gradually approach the optimal solution of the original problem. The

homotopy parameter τ controls the progress of the optimization process. For highly

nonlinear problems, it might be difficult to locate the solution directly. The use
of the homotopy parameter allows one to start from a known solution and gradually
make progress towards the optimal solution.

Figure 7.1. Convergence of objective function.

7.2.2 Steel Column

The problem is a steel column with design vector, d = (b, d, h), where

b = mean of flange breadth,

d = mean of flange thickness, and

h = mean of height of steel profile.

The length of the steel column (s)is 7500 mm. The objective is to mini-

mize the cost function, f = bd + 5h. The independent random vector, X =

(Fs, P1, P2, P3, B, D,H, F0, E), and its stochastic characteristics are given in Table

7.2.

Figure 7.2. Convergence of optimization variables.

The limit state function in terms of the random vector, X, and the limit state
parameters, η = d, is given as

GR(X, η) = Fs − P (1/As + (F0/Ms) · εb/(εb − P)),

where

As = 2BD   (area of section),
Ms = BDH   (modulus of section),
Mi = (1/2) B D H²   (moment of inertia),
εb = π² E Mi / s²   (Euler buckling load).

The means of the flange breadth b and flange thickness d must be within the intervals

[200, 400] and [10, 30] respectively. The interval [100, 500] defines the admissible

mean height h of the T-shaped steel profile. It is required that the optimal design

satisfies a reliability level of 3.

Table 7.2

STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM

Variable             Symbol   Distribution   Mean/Standard deviation   Unit
Yield stress         Fs       Lognormal      400/35                    MPa
Dead weight load     P1       Normal         500000/50000              N
Variable load        P2       Gumbel         600000/90000              N
Variable load        P3       Gumbel         600000/90000              N
Flange breadth       B        Lognormal      b/3                       mm
Flange thickness     D        Lognormal      d/2                       mm
Height of profile    H        Lognormal      h/5                       mm
Initial deflection   F0       Normal         30/10                     mm
Young's modulus      E        Weibull        21000/4200                MPa

The optimal solution for this problem is d = (200, 17.1831, 100). A similar con-
vergence history was observed for this test problem as well. Figure 7.3 shows the
convergence of the objective function. Again it is observed that the homotopy

parameter controls the progress of the optimization process.

7.3 Summary

An optimization methodology for reliability based design is presented. From the au-

thors' experience, the unilevel formulation for RBDO, when directly coupled with an

optimizer, may have convergence difficulties if there are many equality constraints

or if the problem is very nonlinear. Since the unilevel formulation is usually ac-

companied by many equality constraints, homotopy techniques are used to relax

the constraints and identify a starting point that is feasible with respect to the

relaxed constraints. In this investigation, the homotopy parameter is incremented

by a fixed value. A series of optimization problems are solved for various values of

the homotopy parameter as the relaxed problem approaches the original problem.

It is realized that it is easier to solve the relaxed problem from a known solution

and make gradual progress towards the solution than solve the problem directly.

Figure 7.3. Convergence of objective function.

The proposed strategy is tested on two design problems. It is observed that the

homotopy parameter controls the progress made in each cycle of the optimization

process. As the homotopy parameter approaches the value of 1, the optimal solution

to the original problem is obtained.


CHAPTER 8

RELIABILITY BASED DESIGN OPTIMIZATION UNDER EPISTEMIC

UNCERTAINTY

Advances in computational performance have led to the development of large-scale

simulation tools for design. Systems generated using such simulation tools can fail in

service if the uncertainty of the simulation tool’s performance predictions is not ac-

counted for. In this chapter, an investigation of how uncertainty can be quantified in

multidisciplinary systems analysis subject to epistemic uncertainty associated with

the disciplinary design tools and input parameters is undertaken. Evidence theory

is used to quantify uncertainty in terms of the uncertain measures of belief and

plausibility. To illustrate the methodology, multidisciplinary analysis problems are

introduced as an extension to the epistemic uncertainty challenge problems identified

by Sandia National Laboratories.

After uncertainty has been characterized mathematically the designer seeks the

optimum design under uncertainty. The measures of uncertainty provided by ev-

idence theory are discontinuous functions. Such non-smooth functions cannot be

used in traditional gradient-based optimizers because the sensitivities of the uncer-

tain measures do not exist. In this investigation, surrogate models are used to

represent the uncertain measures as continuous functions. A sequential approximate

optimization approach is used to drive the optimization process. The methodology

is illustrated in application to multidisciplinary example problems.


8.1 Epistemic uncertainty quantification

In this section, a brief description of uncertainty quantification in multidisciplinary

system analysis is given. The nature of a multidisciplinary system was explained in

detail in chapter 3 along with the associated uncertainties. A simplified model is

shown again in Figure 8.1.

Figure 8.1. Simplified Multidisciplinary Model

There are two subsystems CA1 and CA2 interacting with each other. Each

subsystem simulation model has model form uncertainty δ1 and δ2. Also there

are uncertain parameters p and q which are inputs to CA1 and CA2 respectively.

Model form uncertainty and parametric uncertainty are known within intervals with

prescribed BPAs (basic probability assignments) as shown in Figure 8.2 for a variable

a.

In general, the intervals for an unknown parameter can be nested or non-nested

and they are usually obtained by experimental means or from expert opinion. Ex-


Figure 8.2. Known BPA structure

pert knowledge is primarily intuitive. In a given situation an expert makes an

intuitive decision based on judgement and experience and the context of the prob-

lem, and the decision has a high probability of being correct. The intervals for

the model form uncertainty (δi) usually represent the difference in the value that a

given mathematical model predicts to that of the actual system. The BPA for it

reflects the degree of belief of an expert on the uncertainty of the values given by

the mathematical model. Given this information, our objective is to determine the

belief and plausibility of y ≥ yreqd. The algorithmic steps are outlined below.

(1) Combine the evidences obtained from different sources for each uncertain

variable and for each model form uncertainty, e.g., p, q, δ1, and δ2. Dempster's rule

of combination is employed in this investigation.

(2) Determine the BPA for all the possible sets of the uncertain variables and model

uncertainties. For example, if p is given by 2 intervals, q is given by 3 intervals, δ1 is


given by 3 intervals and δ2 is given by 2 intervals, the different possible combinations

of the intervals is the product of all of them and is equal to 36. The BPA for each

combination is simply the product of the BPAs for each interval assuming them to

be independent.

(3) Propagate each set (C) (e.g., Cijkl = [pi, qj, δ1k, δ2l] where i,j,k,l are the indices

for the intervals of p, q, δ1, δ2 respectively) through the system analysis (Figure 8.1)

and obtain the bounds for the states y for each C. This is performed for the given

design x.

(4) Determine the Belief and Plausibility of y ≥ yreqd using Equations 1 and 2

respectively.

The above steps are now illustrated with an example problem. Researchers at

Sandia National Laboratories have identified a suite of epistemic uncertainty chal-

lenge problems [46] and one of the problems is adopted here for illustration purposes.

The mathematical model is given by the following equation.

y = (a + b)^a     (8.1)

where a and b are the input uncertain variables and y is the output response of

the model. The available evidence for a and b is assumed in order to illustrate the

calculation of belief and plausibility. The information from the first expert is given

as intervals with their BPAs.

From expert 1, for variable a

a11=[0.6,1], a12=[1,2], a13=[2,3]

m(a11)=0.3, m(a12)=0.6, m(a13)=0.1

From expert 1, for variable b

b11=[1.2,1.6], b12=[1.6,2], b13=[2,3]

m(b11)=0.3, m(b12)=0.3, m(b13)=0.4


where aij and bij refer to the jth proposition from the ith expert for variables a and b

respectively.

Similarly, from expert 2, for variable a

a21=[0.6,3], a22=[1,2]

m(a21)=0.6, m(a22)=0.4

From expert 2, for variable b

b21=[1.2,2], b22=[2,3]
m(b21)=0.8, m(b22)=0.2

Step 1 : Since the evidence for a and b comes from two sources, use Dempster’s

rule of combination (Equation 3.12) to combine them. The combined evidence is as

follows.

ac1=[0.6,1], ac2=[1,2], ac3=[2,3]

m(ac1)=0.2143, m(ac2)=0.7143, m(ac3)=0.0714

bc1=[1.2,1.6], bc2=[1.6,2], bc3=[2,3]

m(bc1)=0.4286, m(bc2)=0.4286, m(bc3)=0.1429

where c refers to the combined evidences.

Step 2 : Using the combined evidence for a and b, obtain all possible sets of the

intervals and their BPAs. This is shown below.

Cij = [aci, bcj]

mc(Cij) = m(aci)m(bcj)

Step 3 : Obtain the lower and upper bounds for the system response y corre-

sponding to each set Cij. Since, we have assumed monotonicity of y in Cij, Equa-

tion 8.1 is evaluated only at the vertices of Cij. For example, for C11 = [ac1, bc1] =


[[0.6, 1], [1.2, 1.6]], the function is evaluated at points [0.6, 1.2], [0.6, 1.6], [1, 1.2] and

[1, 1.6] to obtain the bounds for the state y. Note that the state y given by equation

8.1 is indeed monotonic in the intervals chosen for the variables a and b.

Bounds for y for each set and corresponding BPAs

Set   mc(Cij)   Lower bound   Upper bound
C11   0.0918    1.4229        2.6
C12   0.0918    1.6049        3
C13   0.0306    1.7741        4
C21   0.3061    2.2000        12.96
C22   0.3061    2.6000        16
C23   0.1020    3.0000        25
C31   0.0306    10.2400       97.336
C32   0.0306    12.9600       125
C33   0.0102    16.0000       216

Step 4 : Compute belief and plausibility of y ≥ yreqd. Belief and plausibility

plots are shown in Figure 8.3 as a function of yreqd.
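Steps 2–4 can be reproduced for this example with the short Python sketch below. It uses the combined evidence from Step 1, assumes monotonicity of y over each cell (so only vertex evaluations are needed), and treats a and b as independent when forming the joint BPAs; variable names are illustrative.

```python
import itertools

# Combined evidence (intervals and BPAs) for a and b from Step 1.
a_cells = [((0.6, 1.0), 0.2143), ((1.0, 2.0), 0.7143), ((2.0, 3.0), 0.0714)]
b_cells = [((1.2, 1.6), 0.4286), ((1.6, 2.0), 0.4286), ((2.0, 3.0), 0.1429)]


def y_bounds(a_int, b_int):
    """Bounds of y = (a + b)**a over a cell, using only vertex evaluations."""
    vals = [(a + b) ** a for a, b in itertools.product(a_int, b_int)]
    return min(vals), max(vals)


def belief_plausibility(y_reqd):
    """Bel and Pl of the event y >= y_reqd over the joint BPA structure."""
    bel = pl = 0.0
    for (a_int, m_a), (b_int, m_b) in itertools.product(a_cells, b_cells):
        lo, hi = y_bounds(a_int, b_int)
        m = m_a * m_b                 # joint BPA, independence assumed
        if lo >= y_reqd:              # cell lies entirely inside the event
            bel += m
        if hi >= y_reqd:              # cell intersects the event
            pl += m
    return bel, pl


# Example: Bel and Pl that y >= 10, comparable with Figure 8.3.
print(belief_plausibility(10.0))
```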

The propagation of C through the system analysis requires some discussion.

In general, a global optimization problem needs to be solved to determine exact

bounds for the states corresponding to each C. Examples of such techniques are

genetic algorithms and branch and bound algorithms. In our research, we have

assumed that the state information is monotonic in each C. Hence, the system

analysis (considered to be expensive) is evaluated only at the vertices of the set

C. Using this information, the belief and plausibility are easily determined. The

examples used in this investigation are monotonic in the space of uncertain variables.

However, if nothing is known about the behavior of the states with respect to the

uncertain variables, we must use discretization or global optimization techniques to
find exact bounds for the state variables corresponding to each C.

Figure 8.3. Complementary Cumulative Belief and Plausibility Function

This example illustrates how evidence theory can be used to characterize epis-

temic uncertainty. Our goal in this research is to use evidence theory to characterize

uncertainty in multidisciplinary systems and to optimize these systems under uncer-

tainty. In the next section, we briefly discuss the deterministic and non-deterministic

forms of the optimization problem.

8.2 Deterministic Optimization

Deterministic design optimization deals with the design variables, parameters and

responses as quantities that can be defined precisely. A conventional deterministic


optimization problem is of the following form.

minimize : f(x) (8.2)

subject to : g(x) ≥ 0 (8.3)

xmin ≤ x ≤ xmax (8.4)

where f is the objective function to be minimized and is subject to the inequality

constraints g and variable bounds xmin and xmax.

8.3 Optimization under epistemic uncertainty

Optimized designs without considering the uncertainty in the design variables, pa-

rameters and simulation based design tools are unreliable and can be subjected

to failure in service. Therefore, it is imperative that the designers consider the

reliability of the resulting designs. Reliability based design optimization (RBDO)

methods employ numerical optimization to obtain optimal designs that are reliable.

In RBDO, the constraints for an optimization problem are formulated in terms of

the probability of failure (or reliability indices) corresponding to the failure modes

(also known as limit state functions)[54, 14, 15, 59, 1] . Such an analysis requires

the exact values of the distribution parameters (means and variances) and the exact

form of the distribution functions of the random variables. RBDO methods make

use of non-deterministic formulations for the constraints and therefore are in the

class of non-deterministic optimization problems.

It is not always possible to have all the information for the uncertain variables.

In such cases, the unknown information for the uncertain variables is normally

assumed. Systems obtained by RBDO under such critical assumptions may
fail in practice. Therefore, new methods must be developed for optimization under
uncertainty when the information available for the uncertain parameters is sparse.


A typical non-deterministic optimization problem is of the following form.

minimize : f(x) (8.5)

subject to : G(x) = UM(g(x) ≥ 0)− UMreqd ≥ 0 (8.6)

xmin ≤ x ≤ xmax (8.7)

where G(x) are the non-deterministic uncertainty based constraints. It is required

that the value of the uncertain measure UM be at least UMreqd. Note that UM(g(x) ≥

0) is a measure of the reliability of the system since g(x) ≥ 0 implies feasibility.

When little information is available, evidence theory can be used to provide un-

certain measures such as belief and plausibility. Having quantified uncertainty in

terms of belief and plausibility measures, we are now ready to use these measures in

non-deterministic design optimization (Equations 8.5-8.7). However, a careful study

of Fig. 8.3 shows that the uncertain measures, belief and plausibility, are discontinu-

ous functions. The discontinuities are associated with the different combinations C.

These uncertainty measures are thus nonsmooth functions and their sensitivities do

not exist, restricting us from using traditional gradient-based optimizers. Surrogate

models can be used instead and the optimization can be performed on them. A

sequential approximate optimization approach is employed in this investigation for

optimization. It is described in the next section.

8.4 Sequential Approximate Optimization

Engineering design of multidisciplinary systems often involves the use of large-scale

simulation based design tools. A common practice in the industry is the use of

surrogate models in place of these expensive computer simulations to drive the

multidisciplinary design process based on nonlinear programming techniques. In

addition, if the responses of the system analysis are discontinuous or noisy, it is


better to use approximation techniques to obtain smooth functions for use in the

design optimization process. In this research, the constraints that we are dealing

with are discontinuous and they are evaluated using complex multidisciplinary sim-

ulation tools. Response surface approximations are therefore used to obtain smooth

functions and a sequential approximate optimization strategy is used for design

optimization. Equations 8.5 and 8.6 are thus approximated by equations 8.8 and

8.9 respectively and an approximate optimization problem is formulated as shown

below.

minimize : f̃(x)     (8.8)
subject to : g̃(x) ≥ 0     (8.9)
xlower ≤ x ≤ xupper     (8.10)

where f̃ and g̃ are the approximations to the objective function and the constraints

respectively within local variable bounds xlower and xupper obtained using a trust

region methodology. Second order polynomial approximations are used in this re-

search. The approximations employ zero order matching. The information for the

gradient and Hessian terms is obtained from a least squares fit.

Trust region model management strategies have been used in previous studies to

drive the sequential approximate optimization of multidisciplinary systems[57, 56,

51]. Trust region approaches manage the selection of move limits (i.e., local variable

bounds) for each sequence of approximate minimization based on the value of the

merit function obtained in the previous sequence. They are based on a trust region

ratio ρ, which is used to monitor the performance of the current approximation with

respect to the actual function. At the kth iteration, a local optimization problem of

the form (Equations 8.8 -8.10) is solved around the current design point xk. The

move limits are defined by the trust region ∆k, where ||x − xk||p ≤ ∆k and the p


norm defines the shape of the region. ∆k is known as the trust region radius. In

this particular implementation, ∆k defines a hypercube around xk which defines the

local bounds xlower and xupper.

In previous implementations [57, 56, 51] of sequential approximate optimization,

the trust region ratio ρk is obtained based on the performance of a merit function

which is an augmented Lagrangian. An augmented Lagrangian or penalty func-

tion is required to guarantee convergence of the trust region managed sequential

approximate optimization approach. In the preliminary study of Agarwal et al [3],

ρk is obtained based only on the performance of the objective function. To manage

convergence in Agarwal et al [3], infeasible designs are rejected and the trust region

reduced. This approach is not provably convergent as discussed in Agarwal et al [3].

To address this issue the implementation is modified and the trust region ratio is

obtained based on the performance of the Lagrangian L = f + λ^T g. Since true gradi-

ents of the constraints do not exist, the gradient information of the approximations

are employed. Using those approximate gradients and a least squares approach,

an estimate of the Lagrange multipliers is obtained. The solution of the approximate minimization problem (Equations 8.8-8.9), subject to the local trust region, gives a

new point xk+1∗ . The trust region ratio ρk is computed based on the value of the

Lagrangian at the new design point.

ρk = [L(xk) − L(xk+1∗)] / [L̃(xk) − L̃(xk+1∗)] (8.11)

where L̃ denotes the Lagrangian evaluated using the approximations.

ρk is the ratio of the actual change in the Lagrangian to the change predicted by the

approximation. If the value of ρk is close to one, it means that the approximation

is good, whereas a negative value of ρk indicates a poor approximation. If the value of ρk is greater than zero, the new point is accepted, i.e., xk+1 = xk+1∗, and the trust


region radius ∆k is updated according to the following rules.

∆k+1 = c1∆k if ρk < R1;  c2∆k if ρk > R2;  ∆k otherwise. (8.12)

Commonly used values for the limiting range constants are R1 = 0.25 and R2 = 0.75.

The trust region multiplication factors c1 and c2 are commonly set to 0.25 and 2

respectively.
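A compact sketch of the update logic in Equations 8.11 and 8.12 is given below. The variable names are illustrative, and the actual and approximate Lagrangian values are assumed to be supplied by the surrounding sequential approximate optimization framework; this is not the exact implementation used in this work.

```python
def trust_region_update(L_old, L_new, L_tilde_old, L_tilde_new, delta,
                        R1=0.25, R2=0.75, c1=0.25, c2=2.0):
    """Trust region ratio (Eq. 8.11) and radius update (Eq. 8.12).

    L_old / L_new are Lagrangian values from the true (simulation-based)
    responses at x_k and the candidate x_{k+1}*; L_tilde_old / L_tilde_new
    are the corresponding values from the response-surface approximations.
    Returns (rho, new_delta, accept) using the commonly quoted constants.
    """
    rho = (L_old - L_new) / (L_tilde_old - L_tilde_new)   # actual vs. predicted reduction
    accept = rho > 0.0                                    # accept the step only if rho > 0
    if rho < R1:
        delta = c1 * delta        # poor prediction: shrink the trust region
    elif rho > R2:
        delta = c2 * delta        # good prediction: expand the trust region
    return rho, delta, accept     # delta is left unchanged when R1 <= rho <= R2
```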

8.5 Test Problems

The sequential approximate optimization approach is used to drive the non-deterministic

optimization process. Evidence theory is used to estimate epistemic uncertainty in

terms of belief. The approach is applied to two multidisciplinary test problems: a small analytic problem and a higher dimensional aircraft concept sizing problem. Both problems involve coupled system analysis and are described in the following sections.

8.6 Analytic Test Problem

The first test-problem is an analytic problem. The system analysis makes use of

two design variables and outputs two states. The objective function is nonlinear and

there are two nonlinear constraints. The problem has two optimal solutions. This

small problem is useful for visualizing the results and understanding the performance

of the proposed method. The deterministic form of the optimization problem is as


follows.

minimize : f(x) = x1² + 10x2² + y1 (8.13)

subject to : g1 = y1 − 8 ≥ 0 (8.14)

g2 = 5− y2 ≥ 0 (8.15)

−10 ≤ x1 ≤ 10 (8.16)

0 ≤ x2 ≤ 10 (8.17)

where the states y1 and y2 are calculated by contributing analyses CA1 and CA2

respectively as

CA1 : y1 = x1² + x2 − 0.2y2 (8.18)

CA2 : y2 = x1 − x2² + √y1 (8.19)
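Because y1 depends on y2 and vice versa, evaluating the states for a given design requires a coupled solution of CA1 and CA2. The sketch below uses plain successive substitution, which is one possible scheme; the dissertation does not state which coupled-solution method is used, and the starting guess and tolerance here are arbitrary.

```python
import numpy as np

def system_analysis(x1, x2, delta1=0.0, delta2=0.0, tol=1e-10, max_iter=200):
    """Successive-substitution solution of the coupled contributing analyses
    CA1: y1 = x1^2 + x2 - 0.2*y2 (+ delta1)
    CA2: y2 = x1 - x2^2 + sqrt(y1) (+ delta2)
    """
    y1, y2 = 1.0, 0.0                                   # arbitrary starting guess
    for _ in range(max_iter):
        y1_new = x1 ** 2 + x2 - 0.2 * y2 + delta1
        y2_new = x1 - x2 ** 2 + np.sqrt(max(y1_new, 0.0)) + delta2   # sqrt guard for the sketch
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            break
        y1, y2 = y1_new, y2_new
    return y1_new, y2_new

# Example: with delta1 = delta2 = 0 at x1 = -2.92, x2 = 0.0 this returns
# y1 = 8.52 and y2 = 0.0, consistent with the OUU design reported in Table 8.1.
y1, y2 = system_analysis(-2.92, 0.0)
```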

To be suitable for optimization under uncertainty using evidence theory, the deter-

ministic form of the optimization problem (Equations 8.13 -8.17) is modified. Let’s

say that the states y1 and y2 (Equations 8.25-8.26) given by the CAs (simulation

based design tools) are not certain (i.e., there exists epistemic uncertainty in their

performance predictions). Assume that the uncertainties in CA1 and CA2 are given

by δ1 and δ2 respectively. Estimates of the epistemic uncertainty in δ1 and δ2 are obtained by eliciting expert opinion. Opinions from two experts are obtained. The experts provide intervals for the uncertainty and a corresponding

basic probability assignment (BPA) for those intervals. The available information

from the two experts is shown in Figures 8.4 and 8.5.

Figure 8.4. Experts Opinion for δ1 (intervals for δ1 with endpoints between −1 and 1; BPAs 0.2, 0.5, 0.3 from Expert 1 and 0.1, 0.7, 0.2 from Expert 2)

Figure 8.5. Experts Opinion for δ2 (intervals for δ2 with endpoints between −0.75 and 0.75; BPAs 0.2, 0.5, 0.3 from Expert 1 and 0.6, 0.4 from Expert 2)

Since the CAs are no longer exact, the original problem is reformulated as shown below.

minimize : f(x) = x1² + 10x2² + y1 (8.20)

subject to : G1 = UM(y1 − 8 ≥ 0)− UMreqd ≥ 0 (8.21)

G2 = UM(5− y2 ≥ 0)− UMreqd ≥ 0 (8.22)

−10 ≤ x1 ≤ 10 (8.23)

0 ≤ x2 ≤ 10 (8.24)

where UM is the uncertain measure of interest and UMreqd is the minimum required

value of the uncertain measure. To evaluate the objective function, it is assumed

that δ1 and δ2 are both zero. Since the formulation is posed in terms of the feasibility

of the original constraints, it means that UMreqd is the minimum reliability that should


be met. In evidence theory, UM can be either Bel or Pl. The states y1 and y2 are

obtained from uncertain contributing analyses CA1 and CA2 respectively as

CA1 : y1 = x1² + x2 − 0.2y2 + δ1 (8.25)

CA2 : y2 = x1 − x2² + √y1 + δ2 (8.26)

Note that Equations 8.21 and 8.22, which define the non-deterministic constraints of the reformulated optimization problem, are discontinuous and cannot be used directly in a gradient-based optimizer. Hence the sequential approximate optimization approach described earlier is used.

The uncertain measure used is belief (Bel). The minimum required value of

belief is taken to be 0.99. Figure 8.6 shows the history of the designs visited by the

framework. The starting point and the optimal design are marked in the figure. Figure 8.7 shows the history of the objective function. Note that the objective

function drops significantly after the first few iterations. The convergence history

proceeds linearly as the trust region is reduced.

Figure 8.6. Design Variable History (design points in the x1-x2 plane, showing the starting point, the design history, the optimal design, and the feasible and infeasible regions)


Figure 8.7. Convergence of objective function (objective f versus iteration number)

The design obtained using the framework for optimization under uncertainty

(OUU) is compared with the deterministic optima (see Table 8.1). Note that the

objective function value has increased. This is an expected trade-off for a more

reliable design.

8.7 Aircraft concept sizing problem

Apart from the simple analytical problem mentioned in the last section, the se-

quential approximate optimization approach is also implemented in application to a

higher dimensional multidisciplinary aircraft concept sizing problem. The problem

was originally conceived and developed by the MDO research group at the Uni-

versity of Notre Dame [74]. The objective is to perform preliminary sizing of the

aircraft subject to performance constraints. The design variables in this problem

come from several disciplines such as the aircraft configuration, propulsion and aero-

dynamics, and flight regime. All the design variables have appropriate bounds on

them. There are also some parameters which are fixed during the design process to

represent constraints on mission requirements, available technologies, and aircraft class regulations.

Table 8.1
COMPARISON OF DESIGNS

              Starting Point   Deterministic Optimization   OUU
x1            -5               -2.82                        -2.92
x2             3                0.05                         0.0
f             94.7             15.98                        17.05
y1            29.7              8                            8.52
y2            -8.5              0.006                        0.0
Bel(y1 ≥ 8)    1.0              0.05                         0.99
Bel(5 ≥ y2)    1.0              1.0                          1.0

The original deterministic design optimization problem has ten design variables

and five parameters. The design of the system is decomposed into three contributing

analyses. This problem has been modified by Tappeta [67] to fit the framework

of multiobjective coupled MDO systems (seven design variables and eight design

parameters). The problem has also been modified by Gu et al [24] to illustrate the

methodology of decision based collaborative optimization. It is further modified in

this research to be suitable for optimization under uncertainty (OUU) using evidence

theory. The modified problem (Tappeta [67]) is referred to as the ACS problem from

here-on and is described next. The OUU version of the ACS problem will be given

following this description.

The ACS problem has three disciplines as shown in Figure 8.8. They are

aerodynamics (CA1), weight (CA2), and performance (CA3). The dependency di-

agram indicates that there are two feed-forwards and no feed-backs between the

disciplines. However, CA2 is internally coupled. The design variables and the corresponding bounds are listed in Table 8.2. Note that there are five shared design variables (x1 to x4 and x7). Table 8.3 lists the design parameters and their values.

Figure 8.8. Aircraft Concept Sizing Problem (dependency diagram: Aero. (CA1) outputs the wetted area y1 and lift/drag ratio y2; Weight (CA2) outputs the empty weight y3 and total weight y4; Performance (CA3) receives y2 and y4 and outputs the range y5 and stall speed y6)

Table 8.2
DESIGN VARIABLES IN ACS PROBLEM

      Description (Unit)                              Lower Bound   Upper Bound
x1    aspect ratio of the wing                        5             9
x2    wing area (ft2)                                 100           300
x3    fuselage length (ft)                            20            30
x4    fuselage diameter (ft)                          4             5
x5    density of air at cruise altitude (slug/ft3)    0.0017        0.002378
x6    cruise speed (ft/sec)                           200           300
x7    fuel weight (lbs)                               100           2000

Table 8.4 provides a brief description of the various states and their relations with

each discipline. The state y2 is coupled with respect to CA1 and CA3 and the

state y4 is coupled with respect to CA2 and CA3.

The objective in the ACS problem is to determine the least gross take-off weight


Table 8.3
LIST OF PARAMETERS IN THE ACS PROBLEM

      Name     Description                              Value
p1    Npass    number of passengers                     2
p2    Nen      number of engines                        1
p3    Wen      engine weight (lbs)                      197
p4    Wpay     payload weight (lbs)                     398
p5    Nzult    ultimate load factor                     5.7
p6    Eta      propeller efficiency                     0.85
p7    c        specific fuel consumption (lbs/hr/hp)    0.4495
p8    Clmax    maximum lift coeff. of the wing          1.7

within the bounded design space subject to two performance constraints. The con-

straints are on the range and stall speed of the aircraft. The deterministic optimiza-

tion problem is as follows.

minimize : F = Weight = y4 (8.27)

subject to : g1 = 1 − y6/Vstallreq ≥ 0 (8.28)

g2 = 1 − Rangereq/y5 ≥ 0 (8.29)

Vstallreq = 70 ft/sec (8.30)

Rangereq = 560 miles (8.31)

The problem described above has been modified slightly to be suitable for design

optimization under uncertainty using evidence theory. The parameters p3 and p4

listed in Table 8.3 are considered uncertain. The information on p3 and p4 is assumed to be obtained through expert opinion. The expert's information is given as intervals with

corresponding BPA for each of the intervals. This is shown in Figure 8.9.


Table 8.4
LIST OF STATES IN THE ACS PROBLEM

      Description (Unit)                  Output From   Input To
y1    total aircraft wetted area (ft2)    CA1
y2    maximum lift to drag ratio          CA1           CA3
y3    empty weight (lbs)                  CA2
y4    gross take-off weight (lbs)         CA2           CA3
y5    aircraft range (miles)              CA3
y6    stall speed (ft/sec)                CA3

Figure 8.9. Expert Opinion for p3 and p4 (p3 intervals [180, 190], [190, 200], [200, 210], [210, 220] lbs with BPAs 0.2, 0.4, 0.3, 0.1; p4 intervals [360, 380], [380, 400], [400, 420], [420, 440] lbs with BPAs 0.1, 0.2, 0.5, 0.2)

Since the information on the parameters p3 and p4 is not certain, the determin-

istic optimization problem (Equations 8.27-8.29) is modified as follows.

minimize : F = Weight = y4 (8.32)

subject to : G1 = UM(1 − y6/Vstallreq ≥ 0) − UMreqd ≥ 0 (8.33)

G2 = UM(1 − Rangereq/y5 ≥ 0) − UMreqd ≥ 0 (8.34)

The objective function is calculated assuming the values of p3 and p4 as given


in Table 8.3. The uncertain measure used is belief (Bel). The minimum value of

required belief is taken as 0.98. Figure 8.10 shows the convergence of the objective

function. Observe that after a few iterations the design is close to the optimal solution; however, many more iterations are required to meet the convergence criteria.

Figure 8.10. Convergence of the Objective Function (ACS Problem): objective versus iteration number

The deterministic optimum and the optimal solution under uncertainty are compared in Table 8.5. Note that the starting point is infeasible, i.e., y6 has a value greater than 70, violating the stall speed constraint. Observe that the objective function (y4) has increased compared to the deterministic optimum. This is an expected trade-off for a more reliable design, as is evident from the fact that the value of y5 has increased and the value of y6 has decreased compared to the deterministic optimum, moving the design into the feasible region and ensuring the required belief measure.

In Table 8.5, Bel1 and Bel2 denote Bel(1 − y6/Vstallreq ≥ 0) and Bel(1 − Rangereq/y5 ≥ 0), respectively.


Table 8.5
COMPARISON OF DESIGNS (ACS PROBLEM)

        Starting Point   Deterministic Optimization   OUU
x1      7.0              5                            5
x2      200.0            176.53                       188.9
x3      22.0             20                           20
x4      4.2              4                            4
x5      2.1E-03          0.0017                       0.0017
x6      250.0            200                          200
x7      200.0            142.86                       150.7
y1      810.28           710.31                       742.48
y2      12.911           10.971                       11.09
y3      1463.3           1207.6                       1229.7
y4      2061.3           1748.4                       1778.4
y5      788.11           560.02                       587.87
y6      71.407           70.001                       68.246
Bel1    1                0.1                          0.98
Bel2    0                0.1                          0.98

8.8 Summary

In this investigation, an approach for performing design optimization under epis-

temic uncertainty is presented. Dempster-Shafer theory (evidence theory) has been

used to model the epistemic uncertainty arising due to incomplete information or

the lack of knowledge. The constraints posed in the design optimization problem

are evaluated using uncertain measures provided by evidence theory. The belief

measure is used in this research to formulate non-deterministic constraints. Since

the belief functions are discontinuous, a formal trust region managed sequential ap-

proximate optimization approach based on the Lagrangian is employed to drive the

design optimization. The trust region is managed by a trust region ratio based on


the performance of the Lagrangian. The Lagrangian is a penalty function of the

objective and the constraints. The framework is illustrated with multidisciplinary

test problems. The strength of the investigation is the use of evidence theory for

optimization under epistemic uncertainty. As a byproduct it also shows that sequen-

tial approximate optimization approaches can be used for handling discontinuous

constraints and obtaining improved designs.


CHAPTER 9

CONCLUSIONS AND FUTURE WORK

This chapter presents an overview and general conclusions related to the work devel-

oped in this dissertation. The general topic of research is developing novel reliability

based design optimization (RBDO) methodologies. In traditional RBDO, the un-

certainties are modelled using probability theory. In chapters 5 and 6, two different

methodologies for performing traditional RBDO were developed. Uncertainties in

the form of aleatory uncertainty were treated in design optimization to obtain op-

timal designs characterized by a low probability of failure. The main objective

was to reduce the computational cost associated with existing nested methodology

for RBDO. Both the methodologies were tested on engineering design problems of

reasonable size and scope. An optimization methodology based on continuation

techniques was developed for solving the unilevel RBDO methodology in chapter 7.

A second focus in this dissertation was to develop a methodology for performing op-

timization under epistemic uncertainty. Epistemic uncertainty, by its very nature, is

difficult to characterize using standard probabilistic means. Dempster-Shafer theory

was used to quantify epistemic uncertainty in chapter 8. A trust region managed

sequential approximate optimization (SAO) framework was proposed to perform

optimization under epistemic uncertainty.


9.1 Summary and conclusions

9.1.1 Decoupled methodology for reliability based design optimization

In chapter 5, a decoupled methodology for probabilistic design optimization is de-

veloped. Traditionally, RBDO formulations involve nested optimization making it

computationally intensive. The basic idea is to separate the main optimization phase

(optimizing an objective subject to constraints on performances) from the reliability

calculations (compute the performance that meets a given reliability requirement).

A methodology based on this paradigm is developed. During the deterministic

optimization phase, information on the most probable point (MPP) of failure corre-

sponding to each failure mode is required to calculate the performance constraints.

The most probable point of failure corresponding to each failure mode is obtained

by using the first order Taylor series expansion about the design point from the

previous cycle. This MPP update strategy during the deterministic optimization

phase requires the sensitivities of the MPP with respect to the design vector. In

practice, this requires the second order derivatives of the failure mode. In the current implementation, a damped BFGS update scheme is employed to approximate the second

order derivatives. The framework is tested using a series of structural and multidis-

ciplinary design problems taken from the literature. For the problems considered, it

is observed that the estimated most probable point converges to the exact values in

3-4 cycles. It is found that the proposed methodology provides the same solution as

the traditional nested optimization formulation, and is computationally 2-3 times

more efficient.
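The damped BFGS update mentioned above can be sketched as follows; this is the standard Powell-damped form with a typical damping constant, shown as a generic illustration rather than the exact implementation used in chapter 5.

```python
import numpy as np

def damped_bfgs_update(B, s, y, phi=0.2):
    """One Powell-damped BFGS update of a Hessian approximation B.

    s is the step in the variables and y the corresponding change in the
    gradient of the failure mode.  Damping replaces y by a convex blend of
    y and B s whenever the curvature condition s'y >= phi * s'Bs fails,
    which keeps the updated matrix positive definite.
    """
    Bs = B @ s
    sBs = float(s @ Bs)
    sy = float(s @ y)
    if sy >= phi * sBs:
        theta = 1.0
    else:
        theta = (1.0 - phi) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs      # damped gradient difference
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / float(s @ r)
```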

This methodology has its advantages and disadvantages. The major advantage is

the fact that a workable reliable design can be obtained at significantly less compu-

tational effort. The calculations in the main optimization phase and the reliability

calculation phase can be solved independently, with different optimizers. By using


higher order reliability calculation techniques (SORM, MCS, etc), the methodology

has the potential to give optimal designs with high reliability. The major limitation

of the methodology is that it is not provably convergent. However, the problems

on which the methodology was tested were nonlinear and the MPPs obtained were

exact, thus showing its potential.

9.1.2 Unilevel methodology for reliability based design optimization

In chapter 6, a new unilevel formulation for RBDO is developed. As mentioned

before, traditional RBDO involves nested optimization, making it computationally

intensive. In the proposed unilevel RBDO formulation, the first order KKT con-

ditions corresponding to each probabilistic constraint (as in PMA) are enforced

directly at the system level optimizer, thus eliminating the lower level optimizations

used to compute the probabilistic constraints. The proposed formulation provides

improved robustness and provable convergence as compared to a unilevel variant

given by Kuschel and Rackwitz [36]. The formulation given by Kuschel and Rack-

witz [36] replaces the direct first order reliability method (FORM) problems (lower

level optimization in the reliability index method (RIA)) by their first order nec-

essary KKT optimality conditions. The FORM problem in RIA is numerically ill

conditioned [69]; the same is true for the formulation given by Kuschel and Rack-

witz [36]. It was shown in Tu et al [69] that PMA is numerically robust in terms of

probabilistic constraint evaluation and is therefore used in this investigation. The

proposed formulation is solved in an augmented design space that consists of the

original decision variables, the MPP of failure corresponding to each failure driven

mode, and the Lagrange multipliers corresponding to each lower level optimization.

It is computationally equivalent to the original nested optimization formulation if

the lower level optimization problem is solved by satisfying the KKT conditions


(which is effectively what most numerical optimization algorithms do). It is proved

that under mild pseudoconvexity assumptions on the hard constraints, the proposed

formulation is mathematically equivalent to the original nested formulation. The

method is tested using a simple analytical problem and a multidisciplinary struc-

tures control problem, and is observed to be numerically robust and computationally

efficient compared to the existing approaches for RBDO.
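Schematically, and under the assumptions stated in the text (PMA-type inner problems with a target reliability index βt for each failure mode), the unilevel idea can be written as follows. This is only a sketch of the general structure; the precise formulation, sign conventions, and regularity conditions are the subject of chapter 6.

```latex
% Schematic only: for each failure mode i, the PMA subproblem
%     min_{u_i} g_i(d, u_i)   s.t.   ||u_i||^2 = beta_t^2
% is replaced by its first-order KKT conditions, which are imposed as
% constraints in a single (unilevel) optimization:
\begin{aligned}
\min_{d,\;\{u_i\},\;\{\lambda_i\}} \quad & f(d)\\
\text{s.t.} \quad & g_i(d, u_i) \ge 0,\\
& \nabla_{u_i} g_i(d, u_i) + 2\lambda_i u_i = 0,\\
& u_i^{\mathsf{T}} u_i = \beta_t^2, \qquad i = 1, \dots, n_g .
\end{aligned}
```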

One of the major advantages of this methodology is the fact that the RBDO problem can be solved in a single optimization. This helps in reducing the computational

cost of RBDO. For the structures control test problem, the unilevel methodology

was found to be two times as efficient as the nested RBDO methodology. The major

limitation of the formulation is that it is accompanied by a large number of equal-

ity constraints. Sometimes the commercial optimizers exhibit numerical instability

or show poor convergence behavior for problems with large numbers of equality

constraints. Also, the unilevel methodology is applicable only for FORM based

reliability constraints.

9.1.3 Continuation methods for unilevel RBDO

The unilevel formulation for RBDO is usually accompanied by a large number of

equality constraints which often cause numerical instability for many commercial

optimizers. In chapter 7, an optimization methodology employing continuation

methods is developed for reliability based design using the unilevel formulation.

Since the unilevel formulation is usually accompanied by many equality constraints,

homotopy techniques are used to relax the constraints and identify a starting point

that is feasible with respect to the relaxed constraints. In this investigation, the

homotopy parameter is incremented by a fixed value. A series of optimization

problems are solved for various values of the homotopy parameter as the relaxed


problem approaches the original problem. It is realized that it is easier to solve

the relaxed problem from a known solution and make gradual progress towards

the solution than to solve the problem directly. The proposed strategy is tested

on two design problems. It is observed that the homotopy parameter controls the

progress made in each cycle of the optimization process. As the homotopy parameter

approaches the value of 1, the local solution is obtained.
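A minimal sketch of such a fixed-increment continuation loop is shown below; the relaxed_problem interface and the use of SciPy's SLSQP solver are illustrative assumptions and not the solver setup used in chapter 7.

```python
import numpy as np
from scipy.optimize import minimize

def continuation_solve(relaxed_problem, x0, n_steps=10):
    """Solve a sequence of relaxed problems as the homotopy parameter t
    marches from 0 to 1 in fixed increments, warm-starting each solve
    from the previous solution.

    relaxed_problem(t) is assumed to return (objective, constraints) in
    the form accepted by scipy.optimize.minimize; at t = 1 the relaxed
    problem coincides with the original unilevel RBDO problem.
    """
    x = np.asarray(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        objective, constraints = relaxed_problem(t)
        result = minimize(objective, x, method="SLSQP", constraints=constraints)
        x = result.x                       # warm start for the next increment
    return x                               # solution of the original problem at t = 1
```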

9.1.4 Reliability based design optimization under epistemic uncertainty

In chapter 8, a methodology for performing design optimization under epistemic

uncertainty is developed. Epistemic uncertainty in nondeterministic systems arises

due to ignorance, lack of knowledge or incomplete information. This is also known

as subjective uncertainty. In general, epistemic uncertainty is extremely difficult to

quantify using probabilistic means. Dempster-Shafer theory (evidence theory) has

been used to model the epistemic uncertainty arising due to incomplete information

or the lack of knowledge. The constraints posed in the design optimization problem

are evaluated using uncertain measures provided by evidence theory. The belief

measure is used in this research to formulate non-deterministic constraints. Since

the belief functions are discontinuous, a formal trust region managed sequential

approximate optimization approach based on the Lagrangian is employed to drive

the design optimization. The trust region is managed by a trust region ratio based

on the performance of the Lagrangian. The Lagrangian is a penalty function of the

objective and the constraints. The framework is illustrated with multidisciplinary

test problems. Optimal designs characterized by a low uncertainty of failure can be obtained in a few cycles.

The main accomplishment of this research is the quantification of epistemic un-

certainty in design optimization. As a byproduct it also shows that sequential


approximate optimization approaches can be used for handling discontinuous con-

straints and obtaining better designs.

9.2 Recommendations for future work

9.2.1 Decoupled RBDO using higher order methods

In the decoupled RBDO methodology developed in chapter 5, first order reliability

techniques were used for reliability analysis. Since the reliability evaluation is sepa-

rated from the main optimization, it is possible to do higher order reliability analysis.

Techniques such as the second order reliability methods (SORM), Monte-Carlo sim-

ulation (MCS), etc., can be used for obtaining higher order estimates of reliability. This will lead to better designs with more accurate estimates of reliability.

9.2.2 RBDO for system reliability

In current work, only series systems were considered. However, there are numerous

systems where failure is governed by a combination of component failure modes.

Most of the research work in reliability based design optimization is limited to

series systems. Therefore, there is a need to develop methodologies for reliability based design optimization of parallel systems and of systems where overall system reliability can be incorporated in design optimization. The main challenge would be to develop techniques with which system reliability can be computed efficiently.

9.2.3 Homotopy curve tracking for solving unilevel RBDO

In this investigation, a continuation technique is employed for managing the re-

laxed unilevel reliability based design optimization problem. In the continuation

procedure, the homotopy parameter is incremented by a fixed value. A series of

optimization problems are solved for various values of the homotopy parameter as

the relaxed problem approaches the original problem. It is realized that it is eas-


ier to solve the relaxed problem from a known solution and make gradual progress

towards the solution than to solve the problem directly. In continuation methods the

homotopy parameter controls the progress made in each cycle of the optimization

process. As the homotopy parameter approaches the value of 1, the optimal solu-

tion is obtained. The heuristic approach of updating the homotopy parameter by a

fixed value has worked for the problems considered as part of testing the algorithm.

However, it has been shown in the literature that this may not always work. The use of formal homotopy curve tracking techniques for solving the unilevel reliability based design optimization problem would make the approach more robust and computationally efficient.

9.2.4 Considering total uncertainty in design optimization

In this dissertation, aleatory uncertainty and epistemic uncertainty were considered

separately in design optimization. A hybrid RBDO methodology can be developed

that incorporates both uncertainty types. Epistemic uncertainty can be quantified

using Dempster-Shafer theory and aleatory uncertainty using probability theory. A

total reliability analysis will involve full uncertainty quantification. The decoupled

RBDO methodology developed in this dissertation can be modified accordingly to

develop a hybrid framework.

9.2.5 Variable fidelity reliability based design optimization

A considerable amount of computational effort is usually required in reliability based

design optimization. Therefore, in recent years, surrogates of the simulation models

are widely employed to reduce the cost of optimization. Variable fidelity methods

employ a set of models ranging in fidelity to reduce the cost of design optimization.

The decoupled RBDO methodology and the unilevel RBDO methodology developed

in this research can be individually combined with variable fidelity methods to


further reduce the computational cost associated with RBDO.


BIBLIOGRAPHY

[1] H. Agarwal and J. E. Renaud, Reliability based design optimization formultidisciplinary systems using response surfaces. In Proceedings of the 43rdAIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Mate-rials Conference and Exhibit , AIAA-2002-1755 (Denver, Colorado. April 22-252002).

[2] H. Agarwal, J. E. Renaud and J. D. Mack, A decomposition approach forreliability-based multidisciplinary design optimization. In Proceedings of the44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, andMaterials Conference & Exhibit , number AIAA 2003-1778, Norfolk, Virginia(April 7-10 2003).

[3] H. Agarwal, J. E. Renaud, E. L. Preston and D. Padmanabhan, Uncertainty quantification using evidence theory in multidisciplinary design optimization. Reliability Engineering and System Safety (2003), (in press).

[4] N. M. Alexandrov and R. M. Lewis, Algorithmic perspectives on problem for-mulation in mdo. In Proceedings of the 8th AIAA/NASA/USAF Multidisci-plinary Analysis & Optimization Symposium, number AIAA-2000-4719, LongBeach, CA (September 6-8 2000).

[5] E. K. Antonsson and K. N. Otto, Imprecision in engineering design. Journal ofMechanical Design, 117(B): 25–32 (1995).

[6] H.-R. Bae, R. V. Grandhi and R. A. Canfield, Uncertainty quantificationof structural response using evidence theory. In Proceedings of the 43rdAIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Mate-rials Conference, AIAA-2002-1468 (Denver, Colorado. April 22-25 2002).

[7] Y. Ben-Haim and I. Elishakoff, Convex Models of Uncertainty in Applied Me-chanics . Studies in Applied Mechanics 25, Elsevier (1990).

[8] R. Braun and I. M. Kroo, Development and application of the collaborativeoptimization architecture in a multidisciplinary design environment. In N. M.Alexandrov and M. Y. Hussaini, editors, Multidisciplinary Design Optimiza-tion: State of the Art , SIAM (1997).

[9] K. Brietung, Asymptotic approximations for multinormal integral. Journal ofEngineering Mechanics , 110(3): 357–366 (1984).


[10] S. Chen, E. Nikolaidis, H. H. Cudney, R. Rosca and R. T. Haftka, Compari-son of probabilistic and fuzzy set methods for designing under uncertainty. InProceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, StructuralDynamics, and Materials Conference and Exhibit , pages 2860–2874, AIAA-99-1579 (April 1999).

[11] S. Chen, E. Nikolaidis, H. H. Cudney, R. Rosca and R. T. Haftka, Compari-son of probabilistic and fuzzy set methods for designing under uncertainty. InProceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, StructuralDynamics, and Materials Conference & Exhibit , number AIAA 99-1579, St.Louis (April 12-15 1999).

[12] W. Chen and X. Du, Sequential optimization and reliability assesment methodfor efficient probabilistic design. In ASME Design Engineering Technical Con-ferences and Computers and Information in Engineering Conference, numberDETC2002/DAC-34127, Montreal, Canada (2002).

[13] X. C. Chen, T. K. Hasselman and D. J. Neill, Reliability based structualdesign optimization for practical applications. In Proceedings of the 38thAIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Mate-rials Conference, number AIAA-97-1403, pages 2724–2732 (1997).

[14] K. K. Choi and B. D. Youn, Hybrid analysis method for reliability-based de-sign optimization. In Proceedings of 2001 ASME Design Engineering Techni-cal Conferences: 27th Design Automation Conference, number DETC/DAC-21044, Pittsburgh, PA (September 9-12 2001).

[15] K. K. Choi, B. D. Youn and R. Yang, Moving least square method for reliability-based design optimization. In Proceedings of the Fourth World Congress ofStructural and Multidisciplinary Optimization, Dalian, China (June 4-8, 20012001).

[16] E. J. Cramer, Dennis, J. E., jr., P. D. Frank, R. M. Lewis and G. R. Shubin, Onalternative problem formulations for multidisciplinary design optimization. InProceedings of the 4th AIAA/NASA/ISSMO Symposium on MultidisciplinaryAnalysis & Optimization, number AIAA-92-4752, pages 518–530 (1992).

[17] J. E. Dennis, Jr and R. M. Lewis, A comparison of nonlinear programmingapproaches to an elliptic inverse problem and a new domain decomposition ap-proach. Technical Report TR94-33, Department of Computational and AppliedMathematics, Rice University, Houston, Texas 77005-1892 (1994).

[18] D. Dubois and H. Prade, Possibility Theory : An Approach to ComputerizedProcessing of Uncertainty . Plenum Press, first edition (1988).

[19] D. Dubois and H. Prade, Possibility Theory: An approach to Computer Pro-cessing of Uncertainty . Plenum Press, NY (1988).

[20] I. Enevoldsen and J. D. Sorensen, Reliability-based optimization in structuralengineering. Structural Safety , 15(3): 169–196 (1994).


[21] S. Engelund and R. Rackwitz, A benchmark study on importance samplingtechniques in structural reliability. Structural Safety , 12: 255–276 (1993).

[22] M. Fedrizzi, J. Kacprzyk and R. R. Yager, Advances in Dempster-Shafer Theoryof Evidence. John Wiley & Sons Inc. (1994).

[23] T. Fetz, M. Oberguggenberger and S. Pittschmann, Applications of possibilityand evidence theory in civil engineering. In 1st International Symposium onImprecise Probabilities and Their Applications (29 June - 2 July 1999).

[24] X. Gu, J. E. Renaud, L. M. Ashe, S. M. Batill, S. M. Budhiraja and A. S. Krajewski, Decision based collaborative optimization. ASME Journal of Mechanical Design, 124(1): 1–13 (2001).

[25] R. T. Haftka, Simultaneous analysis and design. AIAA Journal , 25(1): 139–145(1985).

[26] R. T. Haftka, Z. Gurdal and M. P. Kamat, Elements of Structural Optimization,volume 1. Kluwer Academy Publications, second edition (1990).

[27] A. Haldar and S. Mahadevan, Probability, Reliability and Statistical Methodsin Engineering Design. John Wiley & Sons (2000).

[28] A. Harbitz, An efficient sampling method for probability of failure calculation.Structural Safety , 3: 109–115 (1986).

[29] H. A. Jensen and A. E. Sepulveda, Use of approximation concepts in fuzzydesign problem. Advances in Engineering Software, 31: 263–273 (2000).

[30] N. S. Khot, Optimization of controlled structures. Advances in Design Automa-tion (1994).

[31] C. Kirjner-Neto, E. Polak and A. der Kiureghian, An outer approximations ap-proach to reliability-based optimal design of structures. Journal of OptimizationTheory and Applications , 98(1): 1–16 (July 1998).

[32] A. D. Kiureghian, H.-Z. Lin and S.-J. Hwang, Second order reliability approx-imation. journal of engineering mechanics , 113(8): 1208–1225 (1987).

[33] G. J. Klir and M. J. Wierman, Uncertainty Based Information : Elements ofGeneralized Information Theory . Physica-Verlag (1998).

[34] I. M. Kroo, Decomposition and collaborative optimization for large-scaleaerospace design programs. In N. M. Alexandrov and M. Y. Hussaini, editors,Multidisciplinary Design Optimization: State of the Art , SIAM (1997).

[35] N. Kuschel and R. Rackwitz, Two basic problems in reliability based struc-tural optimization. Mathematical Methods of Operations Research, 46: 309–333(1997).

[36] N. Kuschel and R. Rackwitz, A new approach for structural optimization of series systems. Applications of Statistics and Probability, 2(8): 987–994 (2000).


[37] S. W. Law and E. K. Antonsson, Implementing the method of imprecision:An engineering design example. In Proceedings of the 3rd IEEE InternationalConference on Fuzzy Systems , volume 1, pages 358–363 (1994).

[38] R. M. Lewis, Practical aspects of variable reduction formulations and reducedbasis algorithms in multidisciplinary design optimization. In N. M. Alexandrovand M. Y. Hussaini, editors, Multidisciplinary Design Optimization: State ofthe Art , SIAM (1997).

[39] P.-L. Liu and A. D. Kiureghian, Optimization algorithms for structural relia-bility. Structural Safety , 9(3): 161–177 (1991).

[40] G. Maglaras, E. Nikolaidis, R. T. Haftka and H. H. Cudney, Analytical-experimental comparison of probabilistic and fuzzy set based methods for de-signing under uncertainty. Structural Optimization, 13: 69–80 (1997).

[41] O. L. Mangasarian, Nonlinear Programming . Classics in Applied Mathematics,SIAM, Philadelphia (1994).

[42] J. Nocedal and S. J. Wright, Numerical Optimization. Springer-Verlag (1999),Springer series in Operations Research.

[43] W. L. Oberkampf, S. M. Deland, B. M. Rutherford, K. V. Diegert and K. F.Alvin, A new methodology for the estimation of total uncertainty in computa-tional simulation. In Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASCStructures, Structural Dynamics, and Materials Conference (April 1999).

[44] W. L. Oberkampf, S. M. DeLand, B. M. Rutherford, K. V. Diegert and K. F.Alvin, Estimation of total uncertainty in modeling and simulation. TechnicalReport SAND2000-0824, Sandia National Laboratories (April 2000).

[45] W. L. Oberkampf, K. V. Diegert, K. F. Alvin and B. M. Rutherford, Variability,uncertainty, and error in computational simulation. In ASME Proceedings ofthe 7th. AIAA/ASME Joint Thermophysics and Heat Transfer Conference,volume 2, pages 259–272 (1998).

[46] W. L. Oberkampf, J. C. Helton, C. A. Joslyn, S. Wojtkiewicz and S. Ferson,Challenge problems : Uncertainty in system response given uncertain parame-ters. Reliability Engineering and System Safety (2003), (in press).

[47] W. L. Oberkampf, J. C. Helton and K. Sentz, Mathematical representationof uncertainty. In Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASCStructures, Structural Dynamics, and Materials Conference & Exhibit , num-ber AIAA 2001-1645, Seattle, WA (April 16-19, 2001 2001).

[48] D. Padmanabhan and S. M. Batill, Reliability based optimization using approx-imations with applications to multi-disciplinary system design. In Proceedingsof the 40th AIAA Sciences Meeting & Exhibit , number AIAA-2002-0449, Reno,NV (January 2002).

[49] S. Parsons, Qualitative Methods for Reasoning under Uncertainty . The MITPress (2001).


[50] V. M. Perez, J. E. Renaud and L. T. Watson, Interior point sequen-tial approximate optimization methodology. In Proceedings of the 10thAIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis & Op-timization, number AIAA-2002-5505, Atlanta, GA. (September 4-6 2002).

[51] V. M. Perez, J. E. Renaud and L. T. Watson, An interior point sequential approximate optimization methodology. In Proceedings of the 9th AIAA/NASA/USAF Multidisciplinary Analysis & Optimization Symposium, AIAA-2002-5505, Atlanta, GA (September 4-6 2002).

[52] M. S. Phadke, Quality Engineering Using Robust Design. Prentice Hall, Engle-wood Cliffs, NJ (1989).

[53] E. Polak, R. J.-B. Wets and A. der Kiureghian, On an approach to optimizationproblems with a probabilistic cost and or constraints. Nonlinear Optimizationand related topics , pages 299–316 (2000).

[54] R. Rackwitz, Reliability analysis-a review and some perspectives. StructuralSafety , 23(4): 365–395 (2001).

[55] J. E. Renaud, Sequential approximation in non-hierarchic system decomposi-tion and optimization: a multidisciplinary design tool . Ph.D. thesis, RenssalaerPolytechnic Institute, Department of Mechanical Engineering, Troy, New York(August 1992).

[56] J. F. Rodriguez, J. E. Renaud and L. T. Watson, Convergence of trust region augmented lagrangian methods using variable fidelity approximation data. Structural Optimization, 15: 141–156 (1998).

[57] J. F. Rodriguez, J. E. Renaud and L. T. Watson, Convergence using variable fidelity approximation data in a trust region managed augmented lagrangian approximate optimization. AIAA Journal, pages 749–768 (1998).

[58] M. Rosenblatt, Remarks on a multivariate transformation. The Annals of Math-ematical Statistics , 23(3): 470–472 (September 1952).

[59] J. O. Royset, A. D. Kiureghian and E. Polak, Reliability based optimal struc-tural design by the decoupling approach. Reliability Engineering and SystemSafety , 73(3): 213–221 (2001).

[60] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization. PlenumPress (1993).

[61] K. Sentz and S. Ferson, Combination of evidence in dempster-shafer theory.Technical report, Sandia National Laboratories (April 2002), SAND 2002-0835.

[62] J. Sobieszczanski, J. S. Agte and Sandusky R. R., jr., Bi-level integrated sys-tem synthesis (bliss). In Proceedings of the 7th AIAA/NASA/USAF Multi-disciplinary Analysis & Optimization, number AIAA-98-4916, St. Louis, Mis-souri (September 2-4 2000), Extended paper published as Technical ReportNASA/TM-1998-208715.


[63] J. Sobieszczanski-Sobieski, A linear decomposition method for large optimiza-tion problems- blueprint for development. Technical Report TM-83248-1982,NASA (1982).

[64] J. Sobieszczanski-Sobieski, Optimization by decomposition: A step from hierar-chic to non-hierarchic systems. In 2nd NASA/Air Force Symposium on RecentAdvances in Multidisciplinary Analysis and Optimization, number NASA TM-101494, CP-3031, Part 1, pages 28–30, Hampton, VA (1988).

[65] J. Sobieszczanski-Sobieski, C. L. Bloebaum and P. Hajela, Sensitivity of control-augmented structure obtained by a system decomposition method. AIAA Jour-nal , 29(2): 264–270 (February 1990).

[66] T. R. Sutter, C. J. Camarda, J. L. Walsh and H. M. Adelman, Comparisionof several methods for calculating vibration mode shape derivatives. AIAAJournal , 26: 1506–1511 (1988).

[67] R. V. Tappeta, An Investigation of Alternative Problem Formulations for Multidisciplinary Optimization. Master's thesis, University of Notre Dame (December 1996).

[68] P. B. Thanedar and S. Kodiyalam, Structural optimization using probabilisticconstraints. Structural Optimization, 4: 236–240 (1992).

[69] J. Tu, K. K. Choi and Y. H. Park, A new study on reliability-based design optimization. Journal of Mechanical Design, 121: 557–564 (December 1999).

[70] L. Tvedt, Distribution of quadratic forms in normal space-application to struc-tural reliability. Journal of Engineering Mechanics , 116(6): 1183–1197 (1990).

[71] P. Walley, Statistical Reasoning with Imprecise Probabilities . London: Chap-man and Hall. (1991).

[72] L. Wang and S. Kodiyalam, An efficient method for probabilistic androbust design with non-normal distribution. In Proceedings of the 43rdAIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materi-als Conference, number AIAA 2002-1754, Denver, Colorado (April 22-25 2002).

[73] L. T. Watson, Theory of globally convergent probability-one homotopiesfor nonlinear programming. SIAM Journal on Optimization, 11(3): 761–780(2001).

[74] B. A. Wujek, J. E. Renaud, S. M. Batill, E. W. Johnson and J. B. Brockman, Design flow management and multidisciplinary design optimization in application to aircraft concept sizing. In 34th Aerospace Sciences Meeting & Exhibit, AIAA (January 1996).

[75] H. J. Zimmermann, Fuzzy Set Theory and its Applications . Kluwer AcademicPublishers, second edition (1991).
