
Proceedings of the IMAC-XXI Conference on Structural Dynamics International Modal Analysis Conference

February 3-6, 2003 Kissimmee, Florida

UNCERTAINTY QUANTIFICATION OF A CONTAINMENT VESSEL DYNAMIC

RESPONSE SUBJECTED TO HIGH-EXPLOSIVE DETONATION IMPULSE LOADING

Ben H. Thacker*, Edward A. Rodriguez**, Jason E. Pepin‡, and David S. Riha†

*Manager, †Senior Research Engineer
Reliability and Engineering Mechanics
Southwest Research Institute
6220 Culebra Rd.
San Antonio, TX 78238

**Deputy Group Leader, ‡Technical Staff Member
Engineering Sciences & Applications Division
Los Alamos National Laboratory
MS P946
Los Alamos, NM 87545

ABSTRACT

In support of the U.S. Department of Energy (DOE) Stockpile Stewardship Program (SSP), Los Alamos National Laboratory (LANL) has been developing reliability-based structural evaluation techniques for performing weapon component and system reliability assessments. The development and application of probabilistic analysis methods play a key role in enabling weapon reliability assessment in support of weapon certification in the absence of nuclear testing. This paper focuses on the uncertainty quantification associated with the structural response of a containment vessel for high-explosive (HE) experiments. The probabilistic dynamic response of the vessel is evaluated by coupling the probabilistic code NESSUS [1] with the nonlinear structural dynamics code DYNA-3D [2]. The probabilistic model includes variations in geometry and mechanical properties, such as Young's modulus, yield strength, and material flow characteristics. These uncertainties are used to compute the probability of exceeding a specified strain limit, which is directly related to vessel failure. A verification and validation (V&V) plan is also being developed for the containment vessel; it is discussed here with emphasis on the key role uncertainty quantification plays in V&V.

INTRODUCTION

Over the past 30 years, Los Alamos National Laboratory (LANL), under the auspices of DOE, has been conducting confined high-explosion experiments utilizing large, spherical, steel pressure vessels. These experiments are performed in a containment vessel to prevent the release of explosion products to the environment. Design of these spherical vessels was originally accomplished by requiring that the vessel's kinetic energy, developed from the detonation impulse loading, be equilibrated by the elastic strain energy of the vessel. Within the last decade, designs have been accomplished utilizing sophisticated 3D computer codes that address both the detonation hydrodynamics and the vessel's highly nonlinear structural response.

Cylindrical and spherical pressure vessels are used to contain the effects of high explosions. In some cases, the vessel is designed for one-time use only, efficiently utilizing the significant plastic energy absorption capability of ductile vessel materials [3]. Alternatively, the vessel can be designed for multiple use, in which case the material response is restricted to the elastic range [4]. Understanding the dynamic events under detonation conditions is the first step towards the development of a rational pressure vessel design criterion. Multiple-use pressure vessels must, in effect, be designed to rules similar to those in Section III or VIII of the ASME Boiler & Pressure Vessel Code, hereafter referred to as the ASME Code. That is, it becomes imperative for the designer to maintain a purely elastic membrane response of the structural system. On the other hand, Environment, Safety, and Health (ESH) issues, such as waste-stream isolation and clean-up costs associated with HE detonations within vessels, may make multiple use prohibitively expensive because of hazardous materials that may be present. In this scenario, the pressure vessel design is driven to a "single-use" mode, dictating that a more cost-effective design be developed. The designer must start with a rational ductile-failure design criterion that utilizes the plastic reserve capacity in providing structural margin. As such, quantifying the uncertainties associated with loading functions, geometry (i.e., radius and thickness), fabrication, and material properties is of paramount importance when the design of these containment vessels is within the plastic regime. The containment vessel considered is manufactured from HSLA-100 steel and has a 6-ft inside diameter. Results are presented herein for a particular explosive test, with probability density functions describing the


vessel radius and thickness, and material parameters. No attempt is made in this paper to describe the loading function on a probabilistic basis; this will be accomplished in an upcoming study describing the variation of HE mass and the resulting pressure-time-history PDF. The containment vessel is shown in Figure 1 and consists of a minimum 2.0-in-wall-thickness HSLA-100 spherical shell with three ports. It is subjected to the transient pressure loading from a quantity of HE up to a maximum charge size of 40 lb equivalent TNT.

Figure 1: LANL 6-ft. ID containment vessel.

1. DESCRIPTION OF CONTAINMENT VESSEL

The containment vessel is a spherical vessel with three access ports: two 16-inch ports aligned on one axis on the sides of the vessel and a single 22-inch port at the top "north pole" of the vessel. The vessel has an inside diameter of 72 inches and a 2-inch nominal wall thickness. It is fabricated from HSLA-100 steel, chosen for its high strength, high fracture toughness, and freedom from post-weld heat treatment requirements. The vessel's three ports must maintain a seal during use to prevent any release of reaction-product gases or material to the external environment. Each door is connected to the vessel with 64 high-strength bolts, and four separate seals at each door ensure a positive pressure seal.

2. DETERMINISTIC ANALYSIS

A series of hydrodynamic and structural analyses of the spherical containment vessel were performed using a combination of two numerical techniques. Using an uncoupled approach, the transient pressures acting on the inner surface of the vessel were computed using the Eulerian hydrodynamics code CTH, which simulated the HE burn, the internal gas dynamics, and shock wave propagation. The HE was modeled as spherically symmetric with the initiating burn taking place at the

center of the sphere. The vessel's structural response to these pressures was then analyzed [5] using the DYNA-3D explicit finite element structural dynamics code. In this section, we summarize the results of the structural analysis of the confinement vessel subjected to the detonation of a 40 lb PBX-9501 HE charge ignited at the center of the vessel. The simulation required a large, detailed mesh to accurately represent the dynamic response of the vessel and to adequately resolve the stresses and discontinuities caused by various engineering features, such as the bolts connecting the doors to their nozzles. Taking advantage of two planes of symmetry, one quarter of the structure was meshed using approximately one million hex elements. Six hex elements were used through the 2-inch wall thickness to accurately simulate the bending behavior of the vessel wall. The one-quarter symmetry model is shown in Figure 2. The structural response simulation used PARADYN, a massively parallel version of DYNA-3D, the nonlinear, explicit Lagrangian finite element analysis code for three-dimensional transient structural mechanics. PARADYN was run on 504 processors of Los Alamos National Laboratory's Accelerated Strategic Computing Initiative platform "Blue Mountain," an interconnected array of independent SGI (Silicon Graphics, Inc.) computers. The containment vessel analysis runs on Blue Mountain in approximately 2.5 hours; the same analysis would have taken about 35 days on a single processor.

Figure 2: One-quarter symmetry mesh used for the structural analysis.

The pressure-time history used for the 40 lb HE load case, shown in Figure 3, was calculated for a duration of approximately 7 ms, which is more than sufficient to cover the initial blast loading and several subsequent reverberations inside the vessel. Confirmatory analyses have shown [6,7] that a smaller


duration (~2 ms) is adequate to represent the initial impulse loading that provides the driving energy to the vessel. After the 7 ms of computed pressure loading, the pressure inside the vessel was taken as constant, equal to the residual quasi-static pressure resulting from the expansion of the reaction-product gases. Numerical computations for the structural analysis were carried out to a duration of 20 ms, a time considered far enough removed from the initial pressure spike to capture the peak dynamic response. The vessel initially deforms in a "breathing mode," an almost uniform radial expansion of the entire vessel and ports. Because of the asymmetry of the vessel's ports, in terms of mass and stiffness, the breathing mode degenerates after a couple of cycles into a more complex combination of bending/extensional modes, as shown in Figure 4.

Figure 3: Pressure-time history for the center-detonated 40 lb HE load case.

Figure 4: Structural response of the containment vessel.

Stresses caused by high-order bending modes during the time history are significantly greater than the stresses caused by the initial breathing response. Even well after the initial load and unload, the dynamic stress waves propagate through the vessel and, at certain times, combine to cause plastic straining in the vessel wall. Pure membrane stresses are developed only during the initial vessel response, the breathing mode. As shown in Figure 5, a unique combination of localized bending and membrane stresses causes a small amount of plastic strain to occur at the bottom of the vessel at 5 ms.

Figure 5: Plastic strain occurs at the bottom of the confinement vessel for the 40 lb HE load case.

3. PROBABILISTIC ANALYSIS

After investigating the deterministic response of the vessel and where the maximum responses occurred, the equivalent plastic strain at the bottom of the vessel was selected as the response metric. To quantify the uncertainty associated with the plastic strain occurring at the bottom of the containment vessel, a probabilistic analysis was performed with the vessel geometry and material properties as random variables. Because of the long runtimes required to evaluate the vessel model throughout the time domain of interest, efficient probabilistic methods were required to compute the probabilistic response of the containment vessel [8]. These methods have been developed primarily for complex computational systems requiring time-consuming calculations, and their results have been shown to approach the exact solution obtained from traditional Monte Carlo methods using significantly fewer function evaluations [9].

Most Probable Point (MPP) Methods

A class of probabilistic methods based on the most probable point (MPP) is becoming routinely used as a means of reducing the number of g-function evaluations from that of brute-force Monte Carlo simulation. The g-function (or limit-state function) separates the failed and non-failed regions of the design space. This function is typically formulated in terms of a model response Z and a limiting value of the response z0 such that g = Z − z0 = 0. Although many variations have been proposed, the best-known and most widely used MPP-based methods include the first-order reliability method (FORM), the second-order reliability method (SORM), and the advanced mean value (AMV) method [10]. The basic steps involved in MPP-based methods are as follows:

(1) Obtain an approximate fit to the exact g-function at X*, where X* is initially the vector of mean random variable values;
(2) Transform the original, non-normal random variables into independent, standard normal random variables u;
(3) Calculate the minimum distance β (the safety index) from the origin of the joint PDF to the limit-state surface g = 0; this point, u*, on the limit-state surface is the most probable point (MPP) in u-space;
(4) Approximate the g-function g(u) at u* using a first- or second-order polynomial function; and
(5) Solve the resulting problem using available analytical solutions.

Step (1), which involves evaluating the g-function, represents the main computational burden in the above steps. Once a polynomial expression for the g-function is established, it is a numerically simple task to compute the failure probability and associated MPP. Because of this, the complete response cumulative distribution function (CDF) can be computed very quickly by repeating steps (2)-(4) for different z0 values. The resulting locus of MPPs is used in the advanced mean value algorithm (discussed next) to iteratively improve the probability estimates in the tail regions.

Advanced Mean Value (AMV) Method

The advanced mean value class of methods is most suitable for complicated but well-behaved response functions requiring computationally intensive calculations.
Assuming that the response function is smooth and a Taylor's series expansion of Z exists at the mean values, the mean value Z-function can be expressed as

\[ Z_{MV} = Z(\mu) + \sum_{i=1}^{n} \left. \frac{\partial Z}{\partial X_i} \right|_{\mu} (X_i - \mu_i) + H(X) \tag{1} \]

where Z_MV is a random variable representing the sum of the first-order terms and H(X) represents the higher-order terms. For nonlinear response functions, the MV first-order solution obtained by using Equation 1 may not be sufficiently accurate. For simple problems, it is possible to use higher-order expansions to improve the accuracy. For example, a mean-value second-order solution can be obtained by retaining second-order terms in the series expansion. However, for problems involving implicit functions and large n, the higher-order approach becomes difficult and inefficient.
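As a concrete illustration of the mean-value expansion, the following sketch estimates the first-order statistics of a response using a finite-difference gradient at the mean. The response function and the numbers are hypothetical stand-ins; in the paper, Z is an expensive PARADYN plastic-strain computation.

```python
import math

# Hypothetical smooth response function standing in for an expensive model
# evaluation (in the paper, Z would be a PARADYN plastic-strain result).
def Z(x):
    thickness, radius = x
    return radius / (100.0 * thickness)

mu = [2.0, 37.0]          # mean values (thickness, radius)
sigma = [0.0867, 0.0521]  # standard deviations

def gradient(f, x, h=1.0e-6):
    # central finite-difference gradient of f at x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

g = gradient(Z, mu)

# First-order (mean value) statistics, assuming independent inputs:
#   mean(Z_MV) = Z(mu),  var(Z_MV) = sum_i (dZ/dX_i * sigma_i)^2
mean_Z = Z(mu)
std_Z = math.sqrt(sum((gi * si) ** 2 for gi, si in zip(g, sigma)))
print(mean_Z, std_Z)
```

This is exactly the truncation that the AMV correction described next is designed to compensate for when Z is strongly nonlinear.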

The AMV method improves upon the MV method by using a simple correction procedure to compensate for the errors introduced by the truncation of the Taylor's series. The AMV model is defined as

\[ Z_{AMV} = Z_{MV} + H(Z_{MV}) \tag{2} \]

where H(Z_MV) is defined as the difference between the values of Z_MV and Z calculated at the most probable point locus (MPPL) of Z_MV, which is defined by connecting the MPPs for different z0 values. The AMV method reduces the truncation error by replacing the higher-order terms H(X) with a simplified function H(Z_MV). As a result of this approximation, the truncation error is not optimal; however, because the Z-function correction points are usually close to the exact MPPs, the AMV solution provides a reasonably good solution. The AMV solution can be improved by using an improved expansion point, typically obtained by an optimization or iteration procedure. Based initially on Z_MV, and by keeping track of the MPPL, the exact MPP for a particular limit state Z(X) − z0 can be computed to establish the AMV+ model, which is defined as

\[ Z_{AMV+} = Z(X^*) + \sum_{i=1}^{n} \left. \frac{\partial Z}{\partial X_i} \right|_{X^*} (X_i - X_i^*) + H(X) \tag{3} \]

where X* is the converged MPP. The AMV-based methods have been implemented in NESSUS and validated on numerous problems [8,9].

Probabilistic Sensitivity Analysis

For design purposes, it is important to know which problem parameters are the most important and the degree to which they control the design. This can be accomplished by performing sensitivity analyses. In a deterministic analysis, where each problem variable is single-valued, design sensitivities can be computed that quantify the change in the performance measure due to a change in the parameter value, i.e., ∂Z/∂Xi. As stated earlier, each random input variable is characterized by a mean value, a standard deviation, and a distribution type; that is, three parameters are defined as opposed to just one. The performance measure is the exceedance probability (or the safety index, which is inversely related to the probability). Sensitivity measures are therefore needed to reflect the relative importance of each of the probabilistic parameters on the probability of exceedance. NESSUS computes probabilistic sensitivities for both MPP- and sampling-based methods [8]. The sensitivity computed as a by-product of MPP-based methods is

\[ \alpha_i = \frac{\partial \beta}{\partial u_i} \tag{4} \]


and measures the change in the safety index with respect to the standard normal variate u. Although useful for providing an importance ranking, this sensitivity is difficult to use in design because u is a function of the variable's mean, standard deviation, and distribution, and therefore, is referred to as an importance factor. Two other sensitivities that are more useful for design (and for importance ranking as well) include

\[ S_{\mu} = \frac{\partial p_f}{\partial \mu_i} \frac{\sigma_i}{p_f} \tag{5} \]

which measures the change in the probability of exceedance with respect to the mean value of random variable i; and

\[ S_{\sigma} = \frac{\partial p_f}{\partial \sigma_i} \frac{\sigma_i}{p_f} \tag{6} \]

which measures the change in the probability of exceedance with respect to the standard deviation of random variable i. Multiplying by σi and dividing by p_f nondimensionalizes and normalizes the sensitivity to facilitate comparison between variables. The sensitivities given by Equations 5 and 6 are computed for both component and system probabilistic analyses. The four random variables considered are the radius of the vessel wall (radius), the thickness of the vessel wall (thickness), the modulus of elasticity (E), and the yield stress (Sy) of the HSLA steel. A summary of the probabilistic inputs is given in Table 1. The properties for radius and thickness are based on a series of quality-control inspection tests performed by the vessel manufacturer. The coefficients of variation for the material properties are based on engineering judgment. In this case, the material of the entire vessel, excluding the bolts, is taken to be a random variable. When the thickness and radius random variables are perturbed, the nodal coordinates of the finite element model change, with the exception of the three access ports in the vessel, which remain constant in size and move only to accommodate the changing wall dimensions. This was accomplished in NESSUS by defining a set of scale factors specifying how much, and in what direction, each nodal coordinate moves for a given perturbation in the thickness and radius. The scale factors were computed by subtracting a perturbed mesh from an unperturbed mesh. Perturbations in radius and thickness are also cumulative, so that thickness and radius can be perturbed simultaneously. Once the scale factors are defined and input to NESSUS, the probabilistic analysis, whether simulation or AMV+, can be performed without further user intervention. The response metric for the probabilistic analysis is the maximum equivalent plastic strain occurring over all times at the bottom of the vessel finite element model.
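The scale-factor scheme can be sketched as follows. The two-node "meshes," perturbation sizes, and coordinates below are hypothetical; the real NESSUS implementation applies the same arithmetic to every nodal coordinate of the full finite element model.

```python
# Scale factors are per-coordinate mesh sensitivities obtained by
# subtracting a perturbed mesh from the unperturbed mesh (toy example).
base = [[37.0, 0.0, 0.0], [0.0, 39.0, 0.0]]       # nominal nodal coords
mesh_dr = [[37.5, 0.0, 0.0], [0.0, 39.5, 0.0]]    # mesh with radius +0.5
mesh_dt = [[37.0, 0.0, 0.0], [0.0, 39.5, 0.0]]    # mesh with thickness +0.5

dr, dt = 0.5, 0.5  # perturbation sizes used to build the meshes

# scale factor = (perturbed - unperturbed) / perturbation size
s_r = [[(p - b) / dr for p, b in zip(pn, bn)] for pn, bn in zip(mesh_dr, base)]
s_t = [[(p - b) / dt for p, b in zip(pn, bn)] for pn, bn in zip(mesh_dt, base)]

def perturb(radius_change, thickness_change):
    # Perturbations are cumulative: radius and thickness can move together.
    return [[b + sr * radius_change + st * thickness_change
             for b, sr, st in zip(bn, srn, stn)]
            for bn, srn, stn in zip(base, s_r, s_t)]

print(perturb(0.1, -0.05))
```

Nodes belonging to the access ports would simply carry scale factors that translate the port rigidly rather than resize it, which reproduces the "ports move but do not change size" behavior described above.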

Using NESSUS, the iterative advanced mean value (AMV+) method was used to calculate a CDF for equivalent plastic strain. In addition, Latin hypercube simulation (LHS) [11] was performed with 100 samples to verify the correctness of the AMV+ solution near the mean value. The CDF is plotted in Figure 6 on a linear probability scale and in Figure 7 on a standard normal probability scale. Although Figures 6 and 7 present the same information, the standard normal scale used in Figure 7 displays the tail regions of the CDF more clearly. As shown, the LHS and AMV+ results are in excellent agreement. With far fewer PARADYN model evaluations, the AMV+ solution is able to predict probabilities in the extreme tail regions. If failure is assumed to occur when the equivalent plastic strain exceeds some critical value, then the CDFs shown in Figures 6 and 7 directly provide non-failure probabilities, i.e., the probability of failure is p_f = 1 − CDF.
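The relation p_f = 1 − CDF can be illustrated with an empirical CDF built from simulation samples. The strain values and limit below are invented for illustration; they are not PARADYN output.

```python
import bisect

# Toy equivalent-plastic-strain samples (hypothetical, for illustration)
strains = sorted([0.011, 0.009, 0.013, 0.010, 0.012, 0.015, 0.008, 0.014])

def empirical_cdf(samples_sorted, x):
    # fraction of samples less than or equal to x
    return bisect.bisect_right(samples_sorted, x) / len(samples_sorted)

limit = 0.0125                      # hypothetical critical strain
p_f = 1.0 - empirical_cdf(strains, limit)
print(p_f)
```

In practice the sampling estimate is only trustworthy where samples are plentiful (near the mean), which is why LHS is used here to check the AMV+ solution near the median rather than in the extreme tails.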

Table 1: Probabilistic inputs for the containment vessel random variables

Variable      PDF         µ          σ         COV
Radius (in)   Normal      37.0       0.0521    0.00141
Thick (in)    Lognormal   2.0        0.08667   0.04333
E (lb/in2)    Lognormal   29.0E+06   1.0E+06   0.03448
Sy (lb/in2)   Normal      106.0E+03  4.0E+03   0.03774
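A minimal sketch of how inputs like those in Table 1 might be sampled with Latin hypercube simulation: stratify the unit interval per variable, shuffle the strata independently, and map through each inverse CDF. The lognormal variables are parameterized from their mean and COV. This illustrates the general technique only, not the NESSUS implementation.

```python
import math
import random
from statistics import NormalDist

# Latin hypercube sampling: one draw per equal-probability stratum per
# variable, with strata shuffled independently for each variable.
def lhs(n, k, rng):
    cols = []
    for _ in range(k):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

# Table 1 inputs: (name, distribution, mean, standard deviation)
inputs = [("Radius", "normal", 37.0, 0.0521),
          ("Thick", "lognormal", 2.0, 0.08667),
          ("E", "lognormal", 29.0e6, 1.0e6),
          ("Sy", "normal", 106.0e3, 4.0e3)]

def to_physical(u, dist, mean, std):
    if dist == "normal":
        return NormalDist(mean, std).inv_cdf(u)
    # Lognormal: convert the mean/std of X into parameters of ln(X):
    #   s^2 = ln(1 + COV^2),  mu_ln = ln(mean) - s^2 / 2
    cov = std / mean
    s2 = math.log(1.0 + cov ** 2)
    mu_ln = math.log(mean) - 0.5 * s2
    return math.exp(NormalDist(mu_ln, math.sqrt(s2)).inv_cdf(u))

rng = random.Random(0)
samples = [[to_physical(u, d, m, s) for u, (_, d, m, s) in zip(pt, inputs)]
           for pt in lhs(100, 4, rng)]
mean_thick = sum(row[1] for row in samples) / len(samples)
print(round(mean_thick, 3))  # close to the 2.0-in mean thickness
```

With 100 stratified samples, every variable is sampled exactly once in each percentile band, which is what makes LHS so much more efficient than plain Monte Carlo for verifying the central part of the CDF.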

As shown in Figure 7, the CDF approaches an asymptotic state above about u = 2 (standard deviations). This is because beyond u = 2, the vessel wall thickness becomes so thin that the deterministic model is unable to compute a converged solution. Although not simulated here, this condition would suggest catastrophic failure such as rupture or fracture.

Figure 6: CDF for equivalent plastic strain at the bottom of the containment vessel.

Probabilistic sensitivities \( (\partial\beta/\partial\mu_i)\,\sigma_i \) and \( (\partial\beta/\partial\sigma_i)\,\sigma_i \) are shown in Figures 8 and 9, respectively. The subscript i refers to the particular


random variable, and β, the safety index, is inversely related to p_f through the relationship p_f = Φ(−β), where Φ is the standard normal CDF. The sensitivities are multiplied by σi to nondimensionalize the values and facilitate a relative comparison between parameters. Finally, the values are normalized such that the maximum value is equal to one.
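The β-to-probability conversion is easy to check numerically, since the standard normal CDF can be expressed through the error function:

```python
import math

# Standard normal CDF via the error function:
#   Phi(x) = (1 + erf(x / sqrt(2))) / 2
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# p_f = Phi(-beta): a safety index of 3 corresponds to p_f of about 1.35e-3
for beta in (1.0, 2.0, 3.0):
    print(beta, phi(-beta))
```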

Figure 7: CDF for equivalent plastic strain plotted in standard normal scale.

Figure 8: Probabilistic sensitivity with respect to the mean (u = 3).

Figure 9: Probabilistic sensitivity with respect to the standard deviation (u = 3).

The sensitivities shown in Figures 8 and 9 indicate how a change in the mean and standard deviation of each random variable can be expected to affect the computed probability. These results can be used to eliminate unimportant variables from the set of random variables considered, thus improving computational efficiency, or conversely, to indicate where resources could most effectively be focused. As shown, thickness contributes the most to the computed probability. This suggests that the most effective strategy for improving (or at least controlling) the reliability of the vessel is to control the thickness of the material.
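The normalized sensitivities of Equations 5 and 6 can also be estimated by brute force, using finite differences of a sampling-based probability estimate. The limit state, inputs, and numbers below are hypothetical stand-ins for the vessel model; MPP-based methods obtain these sensitivities far more cheaply, as described above.

```python
import random

# Hypothetical limit state: failure when Z = X1 + 0.5*X2 exceeds z0.
# A cheap Monte Carlo stand-in for the expensive vessel model.
def p_exceed(mu, sigma, z0=1.3, n=200_000, seed=0):
    rng = random.Random(seed)  # common random numbers smooth the difference
    fails = 0
    for _ in range(n):
        x1 = rng.gauss(mu[0], sigma[0])
        x2 = rng.gauss(mu[1], sigma[1])
        if x1 + 0.5 * x2 > z0:
            fails += 1
    return fails / n

mu, sigma = [1.0, 0.2], [0.1, 0.05]
p0 = p_exceed(mu, sigma)

# Normalized sensitivity to the mean of X1 (Equation 5, finite difference):
#   S_mu1 = (dp/dmu_1) * (sigma_1 / p)
h = 0.01
S_mu1 = (p_exceed([mu[0] + h, mu[1]], sigma) - p0) / h * sigma[0] / p0
print(p0, S_mu1)
```

Reusing the same random seed in both probability estimates is essential here; otherwise the Monte Carlo noise would swamp the small finite difference.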

4. VERIFICATION AND VALIDATION

Because numerical simulation tools and models are used in the design of the confinement vessels, a verification and validation (V&V) process is required to quantify the level of confidence in the models used for the design and analysis of DynEx vessels [13,14]. The expected outcome of the V&V effort is the ability to quantify the level of agreement between experimental and predicted results, as well as the accuracy of the numerical model when used in a predictive mode. V&V are the primary methods for building and quantifying confidence in numerical models; therefore, V&V is the approach being used to support the certification requirement of the DOE SSP. The definitions [12] of verification and validation adopted herein are as follows:

• Verification is the process of determining that a model implementation accurately represents the developer's conceptual description of the model and the solution to the model.

• Validation is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.

Figure 10 provides a high-level description of the V&V process. This graphical representation is a derivative of a diagram developed by the Society for Computer Simulation (SCS) in 1979 [15]. Although this diagram has some shortcomings, it provides a high-level illustration of the modeling and simulation activities (black solid lines) and the assessment activities (red dashed lines) involved in V&V. The Reality of Interest in Figure 10 represents the experiment for which data is being obtained. This is a point of possible confusion because the experiment is not necessarily the complete system of interest. A hierarchy of experiments is recommended which progresses from simple unit problems, to components, to sub-systems, and ending with the complete system. The Reality of Interest then represents the particular


problem being studied, whether a unit problem, complete system, or anything in between.

Figure 10: Basic activities involved in model verification and validation.

The Mathematical Model comprises all mathematical modeling data and equations needed to describe the Reality of Interest. This will usually take the form of the partial differential equations (PDEs), constitutive equations, and boundary conditions needed to describe the relevant physics mathematically. The Computer Model represents the implementation of the Mathematical Model, usually in the form of numerical discretization, solution algorithms, and convergence criteria. The process of selecting the important features and associated PDEs needed to represent the Reality of Interest in the Mathematical Model is termed the Confirmation Activity. Because decisions are made during this phase regarding what is and is not included in the model, this activity is termed Modeling. The Verification Activity focuses on the identification and removal of errors in the Software Implementation. This activity is described as two separate activities to delineate the difference between the removal of errors in the computer software itself (code verification) and the errors introduced during operation of the software (calculation verification). As the final phase, the Validation Activity aims to quantify the confidence in the model through comparisons of experimental data with Simulation Outcomes from the Computer Model. Both verification and validation are ongoing activities that have no clearly defined endpoint; V&V ends when sufficiency is reached. Put differently, V&V are processes that collect evidence of a model's correctness or accuracy for a specific set of parameters; thus, V&V cannot prove that a model is correct and accurate for all possible conditions and applications, but rather provide

evidence that a model is sufficient for its intended application. Verification is concerned with identifying and removing errors in the model by comparing numerical solutions to highly accurate benchmark solutions. Validation, on the other hand, is concerned with quantifying the accuracy of the model by comparing numerical solutions to experimental data. Because mathematical errors can cancel, giving the impression of correctness (i.e., the right answer for the wrong reason), verification must be performed to a sufficient level before the validation activity begins. Put succinctly, verification deals with the mathematics associated with the model, whereas validation deals with the physics associated with the model. It is important to distinguish between the terms "code" and "model" in the context of model V&V. A code is a computer (or numerical) implementation of a mathematical model. A model comprises the code, code inputs, constitutive model, grid size, and solution options and tolerances. Additionally, the model includes the failure model and the nondeterministic analysis method, with its solution options and tolerances. Consequently, we do not validate a code; rather, we verify a code and validate a model.

Role of Nondeterminism in V&V

Recognizing that models will always be approximate in nature and subject to error and uncertainty, the overarching goal of validation is to quantify the uncertainty and confidence in the predictive capability of the model. Therefore, the approach to validation assessment is to identify the sources of uncertainty in both the modeling and the experiments, propagate the modeling uncertainties through the model to estimate the uncertainties in the model predictions, and perform statistical analysis of the differences between experimental outcomes and model predictions to quantify the expected uncertainty and confidence in the model prediction.
The definition of validation assessment given above illustrates several key aspects that warrant further clarification. The phrase "process of determining" emphasizes that validation assessment is an ongoing activity that concludes only when acceptable agreement between experiment and simulation is achieved. The phrase "degree to which" emphasizes that neither the model predictions nor the experimental measurements are known with certainty and, consequently, each will be expressed as an uncertainty, e.g., an expected value with associated confidence limits. Finally, the phrase "intended uses of the model" emphasizes that the validity of a model is defined over a domain of model form, parameters, and responses. This effectively limits the use of the model to the application for which it was validated; use for any other purpose would require the validation to be repeated. Notionally, a model prediction can be viewed as either within the validation domain (interpolative) or outside the validation domain (extrapolative), as illustrated in Figure 11.

[Figure 11 (schematic): validation experiments and interpolative predictions lie within the validation domain, which is nested inside the larger application domain containing extrapolative predictions; the axes are design variables X1 and X2.]

Figure 11. Validation and application domain.
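The interpolative/extrapolative distinction amounts to a domain check on the prediction point. The sketch below uses an axis-aligned bounding box of the validation experiments; the coordinates and the function name are illustrative assumptions (a convex hull would bound the domain more tightly).

```python
import numpy as np

# Hypothetical validation experiments in the (X1, X2) design space;
# the coordinates are illustrative, not from the vessel program.
validation_pts = np.array([[1.0, 1.0], [1.0, 3.0], [3.0, 1.0], [3.0, 3.0]])

def prediction_type(x):
    """Classify a prediction point as interpolative if it lies within
    the bounding box of the validation experiments, else extrapolative."""
    lo, hi = validation_pts.min(axis=0), validation_pts.max(axis=0)
    inside = bool(np.all((x >= lo) & (x <= hi)))
    return "interpolative" if inside else "extrapolative"

print(prediction_type(np.array([2.0, 2.0])))  # point inside the domain
print(prediction_type(np.array([5.0, 2.0])))  # point outside the domain
```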

It is widely understood and accepted that uncertainties, whether reducible or irreducible, arise from the inherent randomness in physical systems, modeling idealizations, experimental variability, measurement inaccuracy, etc., and cannot be ignored. This complicates the already difficult process of model validation by creating an uncertain target, where neither the simulated nor the observed behavior of the system is known with certainty. Nondeterminism refers to the existence of errors and uncertainties in the outputs created by computational simulations; likewise, the measurements made to validate these predictions also contain errors and uncertainties. For the purposes of this discussion, we categorize and define the various sources of uncertainty as follows:

Errors: Errors create a reproducible (i.e., deterministic) bias in the prediction and can theoretically be reduced or eliminated. Errors can be acknowledged (detected) or unacknowledged (undetected). Examples include inaccurate model form, implementation errors in the computational code, and a non-converged computational model.

Irreducible Uncertainty: Irreducible uncertainty (i.e., variability, inherent uncertainty, aleatory uncertainty) refers to the inherent variations in the system being modeled. This type of uncertainty always exists in physical systems and is an inherent property of the system. Examples include variations in geometric or material properties, loading environment, and assembly procedures. Many mathematical tools are available to quantify and account for this type of uncertainty.

Reducible Uncertainty: Reducible uncertainty (i.e., epistemic uncertainty) refers to deficiencies that result from a lack of complete information about the system being modeled. An example is the statistical distribution of a geometric property of a population of parts. Measurements on a small number of the parts allow estimation of a mean and standard deviation for this distribution; however, unless the sample size is sufficiently large, there will be uncertainty about the "true" values of these statistics, and indeed about the "true" shape of the distribution. Obtaining more information (in this case, measuring more sample parts) reduces this uncertainty and yields a better estimate of the true distribution.

Nondeterminism is generally modeled through the theory of probability. The two dominant approaches are the frequentist approach (where probability is defined as the relative frequency of occurrence of an event) and the Bayesian approach (where probability is defined as the analyst's subjective degree of belief about an event). Both approaches are well suited to most engineering applications, where large amounts of data can be collected either experimentally or computationally. Other mathematical theories have been, or are being, developed for representing uncertainty when data are scarce, events are rare, or expert opinion must be solicited; these include fuzzy logic, the Dempster-Shafer theory of plausibility and belief, the theory of random sets, and the theory of information gap.
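The sampled-parts example of reducible uncertainty can be made concrete: the confidence half-width on an estimated mean shrinks roughly as 1/sqrt(n), so measuring more parts reduces the epistemic uncertainty about the true statistics. The population parameters below are made-up values, not vessel data.

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" population of a geometric property (e.g., a wall thickness in
# mm) with an assumed mean of 25.0 and standard deviation of 0.5.
population = rng.normal(25.0, 0.5, 100_000)

widths = []
for n in (5, 50, 500):
    sample = rng.choice(population, size=n, replace=False)
    # Approximate 95% confidence half-width on the mean; it shrinks as
    # 1/sqrt(n), i.e., more measurements reduce the epistemic
    # uncertainty about the true mean.
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    widths.append(half_width)
    print(f"n={n:4d}: mean = {sample.mean():.3f} +/- {half_width:.3f}")
```

Note that the underlying part-to-part variability itself is irreducible; only the uncertainty about its statistics shrinks with more data.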

5. SUMMARY

The work presented here represents an ongoing effort at Los Alamos National Laboratory to move promptly toward increased reliance on numerical simulation and reduced reliance on testing as part of the Stockpile Stewardship Program. The containment vessel was selected for analysis and V&V because of its safety- and mission-critical role in performing and containing small-scale dynamic experiments. The paper demonstrates the first successful application of the probabilistic analysis program NESSUS to a large-scale problem (>1M elements), demonstrating its usefulness for LANL-type problems. Extensive modifications were made to NESSUS to facilitate these calculations, including the ability to link any number of numerical and/or analytical models together, support for parallel and batch job execution, and a graphical user interface that runs on all laboratory hardware. Future work may include additional testing to improve the characterization of the vessel thickness, coupling the hydrocode simulation of the detonation event with the structural response simulation, and incorporating alternative failure models (fracture, burst, etc.). A verification and validation plan for the confinement vessel analysis and design is also being developed.

ACKNOWLEDGMENTS

This work was performed for the Los Alamos National Laboratory under contract No. W-7405-ENG-36 with the US Department of Energy (DOE) and the National Nuclear Security Administration (NNSA).

REFERENCES

1. NESSUS Version 7.0, Southwest Research Institute, September 2001.

2. DYNA-3D User's Manual, "A Nonlinear, Explicit, Three-Dimensional Finite Element Code for Solid and Structural Mechanics," Lawrence Livermore National Laboratory, October 1999.

3. Baker, W.E., "The Elastic-Plastic Response of Thin Spherical Shells to Internal Blast Loading," Journal of Applied Mechanics, Vol. 27, pp. 139-144, 1960.

4. White, J.J. and B.D. Trott, "Scaling Law for the Elastic Response of Spherical Explosion-Containment Vessels," Experimental Mechanics, Vol. 20, pp. 174-177, 1980.

5. Stevens, R.R. and S.P. Rojas, "Confinement Vessel Dynamic Analysis," Los Alamos National Laboratory Report LA-13628-MS, 1999.

6. Duffey, T.A. and E.A. Rodriguez, "Ductile Failure Criteria for Design of HSLA-100 Steel Confinement Vessels," Los Alamos National Laboratory Report LA-13800-MS, 2000.

7. Duffey, T.A. and E.A. Rodriguez, "Dynamic versus Static Pressure Loading in Confinement Vessels," Los Alamos National Laboratory Report LA-13801-MS, 2000.

8. Thacker, B.H., E.A. Rodriguez, J.E. Pepin, and D.S. Riha, "Application of Probabilistic Methods to Weapon Reliability Assessment," Proc. 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials (SDM) Conf., AIAA 2001-1458, Seattle, WA, 16-19 April 2001.

9. Thacker, B.H., et al., "Computational Methods for Structural Load and Resistance Modeling," American Institute of Aeronautics and Astronautics, AIAA-91-0918-CP, April 1991.

10. Wu, Y.-T., et al., "Advanced Probabilistic Structural Analysis Method for Implicit Performance Functions," AIAA Journal, Vol. 28, No. 9, September 1990.

11. McKay, M.D., et al., "A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code," Technometrics, Vol. 21, No. 2, May 1979.

12. AIAA, Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, American Institute of Aeronautics and Astronautics, AIAA-G-077-1998, Reston, VA, 1998.

13. Duffey, T.A. and E.A. Rodriguez, "Computer Code Verification and Validation (V&V) in Support of HSLA-100 Steel Confinement Vessel Design," Los Alamos National Laboratory Report.

14. Doebling, S.W., F.M. Hemez, J.F. Schultz, and S.P. Girrens, "Overview of Structural Dynamics Model Validation Activities at Los Alamos National Laboratory," Proc. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials (SDM) Conf., AIAA 2002-1643, Denver, CO, 22-25 April 2002.

15. Schlesinger, S., "Terminology for Model Credibility," Simulation, Vol. 32, No. 3, 1979.