Propagation of Error Bounds Across Reduction Interfaces



Probabilistic Error Bounds for Order Reduction of Smooth Nonlinear Models

Mohammad G. Abdo and Hany S. Abdel-Khalik

Presented by: Congjian Wang

North Carolina State University, Department of Nuclear Engineering

mgabdo@ncsu.edu and abdelkhalik@ncsu.edu

June 16, 2014


Motivation

ROM plays a vital role in many disciplines, especially for computationally intensive applications.
It is mandatory to equip reduced order models with error metrics in order to credibly defend the predictions of the reduced model.
Probabilistic error bounds have mostly been used in the linear setting.
Reduction errors need to be propagated across various interfaces, such as the parameter interface (e.g., cross sections), the state function (e.g., flux), and the responses of interest (e.g., reaction rates, detector responses).


We will adopt a formal mathematical definition that was developed back in the 1960s in the signal-processing community.

Definition. A nonlinear function f with n inputs is said to be reducible and of intrinsic dimension r (0 ≤ r ≤ n) if there exists a nonlinear function g with r inputs and an n × r matrix Q such that r is the smallest integer satisfying

f(x) = g(x̃), where x ∈ R^n and x̃ = Q^T x ∈ R^r.
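To make the definition concrete, here is a minimal sketch of ours (not an example from the slides) of a 5-input function of intrinsic dimension r = 1: it depends on x only through the scalar a^T x, so Q can be taken as the normalized a.

```python
# Hypothetical example: f depends on x only through a^T x, so its
# intrinsic dimension is r = 1 and Q = a / ||a||.
import numpy as np

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal(n)
Q = (a / np.linalg.norm(a)).reshape(n, 1)   # n x r matrix with r = 1

def f(x):
    s = a @ x
    return np.array([np.sin(s), s**2])

def g(x_tilde):
    # Same model expressed in the reduced variable x~ = Q^T x (a scalar here).
    s = np.linalg.norm(a) * x_tilde[0]
    return np.array([np.sin(s), s**2])

x = rng.standard_normal(n)
print(np.allclose(f(x), g(Q.T @ x)))        # True: f(x) = g(Q^T x)
```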


Reduction Algorithms

In our context, reduction algorithms refer to two different algorithms, each used at a different interface:

Snapshot reduction algorithm (gradient-free), which reduces the response interface.
Gradient-based reduction algorithm, which reduces the parameter interface.


Snapshot Reduction

Consider the reducible model under inspection to be described by:

y = f(x),   (1)

The algorithm proceeds as follows:

1. Generate k random parameter realizations {x_i}, i = 1, ..., k.

2. Execute the forward model in Eq. (1) k times and record the corresponding k response variations {y_i = f(x_i)}, i = 1, ..., k, referred to as snapshots, and aggregate them in a matrix:

Y = [y_1 y_2 ⋯ y_k] ∈ R^{m×k}.

3. Calculate the singular value decomposition (SVD): Y = UΣV^T, where U ∈ R^{m×k}.


Snapshot Reduction (cont.)

4. Select the dimensionality of the reduced space for the responses to be r_y, such that r_y ≤ min(m, k). Identify the active subspace as the range of the first r_y columns of the matrix U, denoted U_{r_y}. Note that in practice r_y is increased until the error upper bound in step 5 meets a user-defined error tolerance.

5. For a general response y, calculate the error resulting from the reduction as e_y = ‖(I − U_{r_y} U_{r_y}^T) y‖.
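The following is a minimal NumPy sketch of steps 1-5, assuming a generic black-box forward model `model(x)`; the test responses and the rank-selection rule are illustrative choices, not part of the original algorithm statement.

```python
# Sketch of the snapshot (gradient-free) reduction. `model` is a placeholder
# for the forward model y = f(x) with n inputs and m outputs.
import numpy as np

def snapshot_reduction(model, n, m, k=50, tol=1e-5, rng=None):
    rng = rng or np.random.default_rng()
    X = rng.standard_normal((n, k))                           # step 1: k random parameter realizations
    Y = np.column_stack([model(X[:, i]) for i in range(k)])   # step 2: snapshot matrix, m x k
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)          # step 3: Y = U Sigma V^T

    # Steps 4-5: grow r_y until the reduction error of a few test responses
    # meets the user-defined (relative) tolerance.
    Y_test = np.column_stack([model(rng.standard_normal(n)) for _ in range(10)])
    for r_y in range(1, min(m, k) + 1):
        U_r = U[:, :r_y]
        e_y = np.linalg.norm(Y_test - U_r @ (U_r.T @ Y_test), axis=0).max()
        if e_y <= tol * np.linalg.norm(Y_test, axis=0).max():
            return U_r, e_y
    return U, 0.0
```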


Gradient-based Reduction

This algorithm may be described by the following steps:

1. Execute the adjoint model k times, each time with a random realization of the input parameters, and aggregate the pseudo-response derivatives in a matrix:

G = [ dR_1^pseudo/dx |_{x_1}  ⋯  dR_k^pseudo/dx |_{x_k} ].

2. Calculate the SVD G = W S P^T, and select the first r_x columns of W (denoted W_{r_x}) to span the active subspace for the parameters, such that

e_x = ‖(I − W_{r_x} W_{r_x}^T) x‖.
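A minimal sketch of these two steps is shown below, assuming a placeholder `adjoint_gradient(x)` that performs one adjoint solve and returns dR^pseudo/dx at x; the default rank-selection rule is an illustrative assumption.

```python
# Sketch of the gradient-based reduction of the parameter interface.
import numpy as np

def gradient_based_reduction(adjoint_gradient, n, k=50, r_x=None, rng=None):
    rng = rng or np.random.default_rng()
    # Step 1: aggregate pseudo-response derivatives at k random parameter points.
    G = np.column_stack([adjoint_gradient(rng.standard_normal(n)) for _ in range(k)])
    # Step 2: SVD G = W S P^T; the first r_x left singular vectors span the
    # active subspace in the parameter space.
    W, S, Pt = np.linalg.svd(G, full_matrices=False)
    if r_x is None:
        r_x = int(np.sum(S > 1e-10 * S[0]))      # illustrative rank-selection rule
    W_r = W[:, :r_x]

    def parameter_error(x):
        # e_x = ||(I - W_r W_r^T) x||: the discarded component of a parameter point.
        return np.linalg.norm(x - W_r @ (W_r.T @ x))

    return W_r, parameter_error
```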


Notice that discarding components in the parameter space will give rise to errors in the response space even if no reduction in the response space is performed.

To distinguish between the different errors at the different levels, we introduce the following.


Different Errors

1. ‖f(x) − Q_y Q_y^T f(x)‖ / ‖f(x)‖ ≤ ε_y^y,

where Q_y is a matrix whose orthonormal columns span the response subspace S_y and ε_y^y is the user-defined tolerance for the relative error in the response due to reduction in the response space only.

2. ‖f(x) − f(Q_x Q_x^T x)‖ / ‖f(x)‖ ≤ ε_y^x,

where, similarly, Q_x is a matrix whose orthonormal columns span an active subspace S_x in the parameter space and ε_y^x is the user-defined tolerance for the relative error in the response due to reduction in the parameter space only.


Different Errors (cont.)

3. ‖f(x) − Q_y Q_y^T f(Q_x Q_x^T x)‖ / ‖f(x)‖ ≤ ε_y^xy,

where ε_y^xy is the user-defined tolerance for the relative error in the response due to simultaneous reductions in both spaces.
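The three relative errors can be computed directly once the bases are in hand; the sketch below does this for a generic model f and placeholder bases Q_x and Q_y (assumed to have orthonormal columns).

```python
# Sketch: the three relative reduction errors defined above, evaluated at a
# single parameter point x. f, Q_x, and Q_y are placeholders.
import numpy as np

def reduction_errors(f, Q_x, Q_y, x):
    y = f(x)
    y_resp = Q_y @ (Q_y.T @ y)                        # reduction in the response space only
    y_par  = f(Q_x @ (Q_x.T @ x))                     # reduction in the parameter space only
    y_both = Q_y @ (Q_y.T @ f(Q_x @ (Q_x.T @ x)))     # simultaneous reductions
    nrm = np.linalg.norm(y)
    return (np.linalg.norm(y - y_resp) / nrm,         # to be compared with eps_y^y
            np.linalg.norm(y - y_par)  / nrm,         # to be compared with eps_y^x
            np.linalg.norm(y - y_both) / nrm)         # to be compared with eps_y^xy
```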


The previous relative errors can be estimated using Dixon's theory [3].

Dixon's theory

It all started with Dixon (1983), who estimated the largest and/or smallest eigenvalue, and hence the condition number, of a real positive definite matrix A. His work relies on a basic set of theorems and lemmas [3, 7] that we introduce in the following few slides.


Theorem

Let A ∈ R^{n×n} be a real positive definite matrix whose eigenvalues are λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n > 0. Let S := {x ∈ R^n : x^T x = 1} be the unit hypersphere, and let x = [x_1 ⋯ x_n]^T, n ≥ 2, be distributed uniformly over S. Let θ ∈ R, θ > 1. Then

P{ x^T A x ≤ λ_1 ≤ θ x^T A x } ≥ 1 − √(2/π) √(n/θ).   (2)
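A quick Monte Carlo check of Eq. (2) is sketched below; the matrix, the value of θ, and the number of trials are illustrative choices.

```python
# Sketch: empirical coverage of Dixon's bound (Eq. 2) for a random SPD matrix.
import numpy as np

rng = np.random.default_rng(1)
n, theta, trials = 20, 100.0, 100_000
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                    # real positive definite matrix
lam1 = np.linalg.eigvalsh(A)[-1]               # largest eigenvalue

x = rng.standard_normal((trials, n))
x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform points on the unit sphere S
q = np.einsum('ij,jk,ik->i', x, A, x)          # x^T A x for each sample

coverage = np.mean((q <= lam1) & (lam1 <= theta * q))
lower_bound = 1 - np.sqrt(2 / np.pi) * np.sqrt(n / theta)
print(coverage, lower_bound)                   # observed coverage should exceed the bound
```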


The next corollary has been explored by many authors [8, 5, 6, 4] and has been employed in different applications; it gave the modern texture to Dixon's bound.

Corollary

Let B ∈ R^{m×n} be such that A = L L^T = B^T B, where L = B^T is the Cholesky factor of A, and let σ_1 ≥ ⋯ ≥ σ_n > 0 be the singular values of B (i.e., λ_i = σ_i^2). Then the previous theorem can be written as

P{ ‖Bx‖ ≤ (σ_1 = ‖B‖) ≤ √θ ‖Bx‖ } ≥ 1 − √(2/π) √(n/θ).   (3)

Selecting θ = α^2 (2/π) n, where α > 1, yields

P{ ‖B‖ ≤ α √(2/π) √n max_{i=1,2,...,k} ‖B x^(i)‖ } ≥ 1 − α^{-k}.   (4)
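Eq. (4) is what makes the bound usable in practice: the spectral norm of a matrix is bounded by a small number of random matrix-vector products. A minimal sketch follows (the sizes and α are illustrative):

```python
# Sketch: randomized upper bound on ||B|| via Eq. (4), using k random unit vectors.
import numpy as np

rng = np.random.default_rng(2)
m, n, k, alpha = 60, 40, 5, 10.0
B = rng.standard_normal((m, n))

X = rng.standard_normal((n, k))
X /= np.linalg.norm(X, axis=0, keepdims=True)       # k unit vectors on S
estimate = alpha * np.sqrt(2 / np.pi) * np.sqrt(n) * np.linalg.norm(B @ X, axis=0).max()

# ||B|| <= estimate with probability at least 1 - alpha**(-k).
print(np.linalg.norm(B, 2), estimate)
```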


Propagating error bounds

Consider a physical model y = f(x), where f: R^n → R^m.

The model is subjected to both types of reduction, at the parameter and the response interfaces; the corresponding responses are aggregated in Y_x and Y_y, respectively. The bound for each case is

ε_y^x = α_1 √(2/π) √N max_{i=1,2,...,k_1} ‖(Y − Y_x) w_i‖,

ε_y^y = α_2 √(2/π) √N max_{i=1,2,...,k_2} ‖(Y − Y_y) w_i‖.


Propagating error bounds (cont.)

Then the response error due to both reductions can be calculated from

P{ ‖Y − Y_xy‖ ≤ ε_y^x + ε_y^y } ≥ (1 − α_1^{-k_1}) (1 − α_2^{-k_2}).   (5)
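A sketch of how the two bounds and the combined probability of Eq. (5) might be assembled is shown below; the interpretation of N as the dimension probed by the random vectors w_i, and all the matrix arguments, are assumptions made for illustration.

```python
# Sketch: combined error bound of Eq. (5). Y, Y_x, Y_y are the full and reduced
# response matrices; W1, W2 hold the k_1 and k_2 random probing unit vectors.
import numpy as np

def combined_bound(Y, Y_x, Y_y, W1, W2, alpha1=10.0, alpha2=10.0):
    N = W1.shape[0]                              # assumed probed dimension
    c = np.sqrt(2 / np.pi) * np.sqrt(N)
    eps_x = alpha1 * c * np.linalg.norm((Y - Y_x) @ W1, axis=0).max()
    eps_y = alpha2 * c * np.linalg.norm((Y - Y_y) @ W2, axis=0).max()
    prob = (1 - alpha1 ** -W1.shape[1]) * (1 - alpha2 ** -W2.shape[1])
    return eps_x + eps_y, prob                   # ||Y - Y_xy|| <= eps_x + eps_y w.p. >= prob
```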


Case Study 1

The first numerical test is an algebraic prototype nonlinear model, y = f(x), f: R^n → R^m, with n = 15 and m = 10, such that:

[y_1 y_2 ⋯ y_10]^T = B × [
  a_1^T x
  (a_2^T x)^2
  (1.4 a_2^T x + 1.5 a_3^T x)^2
  1/(1 + exp(−a_2^T x))
  cos(0.8 a_4^T x + 1.6 a_5^T x)
  (a_6^T x + a_7^T x) [(a_7^T x)^2 + sin(a_8^T x)]
  (1 + 0.1 exp(−a_8^T x)) [(a_9^T x)^2 + (a_10^T x)^2]
  a_9^T x + 0.2 a_10^T x
  a_10^T x
  a_9^T x + 8 a_10^T x
]

where a_i ∈ R^n, i = 1, 2, ..., m, and B is a random m × m matrix.


Case Study 2

The second case study involves a realistic neutron transport model of a PWR pin cell.

The objective is to test the proposed probabilistic error bound due to reductions in both the parameter and response spaces. The computer code employed is TSUNAMI-2D, a control module in SCALE 6.1 [1], wherein the derivatives are provided by SAMS, the sensitivity analysis module of SCALE 6.1.


Case Study 1 (cont.)

The dimension of the parameter space is n = 15, that of the response space is m = 10, and a user-defined tolerance of 10^-5 is selected. The parameter active subspace is found to have size r_x = 9, whereas the response active subspace has size r_y = 9. The number of tests is 10,000.

Fig. 1 shows the function behavior plotted along a randomly selected direction in the parameter space.

Figure: Function behavior along a random input direction.


Table I shows the minimum theoretical probabilities predicted by the theorem and the actual probabilities, P_act = (number of successes) / (total number of tests), resulting from the numerical test.

Table: Algebraic Model Results

Error Bound                          P_act    P_theo
‖Y − Y_x‖/‖Y‖  ≤ ε_y^x               1.0      0.9
‖Y − Y_y‖/‖Y‖  ≤ ε_y^y               0.998    0.9
‖Y − Y_xy‖/‖Y‖ ≤ ε_y^x + ε_y^y       1.0      0.81


Relative Errors

Next we show the relative error ‖Y − Y_xy‖/‖Y‖ due to both reductions vs. the theoretical upper bound predicted by the theory, ε_y^x + ε_y^y.

Figure: Theoretical and actual error for case study 1.


Case Study 2 (cont.)

For the pin cell model, the full input space (cross sections) had a size of n = 1936, whereas the output (material flux) was of size m = 176. The cross sections of the fuel, clad, moderator, and gap were perturbed by 30% (relative perturbations). Based on a user-defined tolerance of 10^-5, the sizes of the input and output active subspaces are r_x = 900 and r_y = 165, respectively. Table II shows the minimum theoretical probabilities predicted by the theorem and the probabilities resulting from the numerical test.

Table: Pin Cell Model Results

Error Bound                          P_act    P_theo
‖Y − Y_x‖/‖Y‖  ≤ ε_y^x               1.0      0.9
‖Y − Y_y‖/‖Y‖  ≤ ε_y^y               1.0      0.9
‖Y − Y_xy‖/‖Y‖ ≤ ε_y^x + ε_y^y       1.0      0.81


Relative Errors

Next we show the relative error ‖Y − Y_xy‖/‖Y‖ due to both reductions vs. the theoretical upper bound predicted by the theory, ε_y^x + ε_y^y.

Figure: Theoretical and actual error for case study 2.


Conclusions

This manuscript has equipped our previously developed ROM techniques with probabilistic error metrics that bound the maximum errors resulting from the reduction.
Given that reduction algorithms can be applied at any of the various model interfaces, e.g., parameters, state, and responses, the developed metric effectively aggregates the associated errors to estimate an error bound on the response of interest.
The results show that we can start to break out of the linear mould and explore smooth nonlinear functions.
This functionality will prove essential in our ongoing work focusing on the extension of ROM techniques to multi-physics models.


Bibliography

[1] SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee, June 2011. Available from the Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785.

[2] Y. Bang, J. Hite, and H. S. Abdel-Khalik, "Hybrid reduced order modeling applied to nonlinear models," IJNME, 91 (2012), pp. 929–949.

[3] J. D. Dixon, "Estimating extremal eigenvalues and condition numbers of matrices," SIAM Journal on Numerical Analysis, 20 (1983), pp. 812–814.


[4] N. Halko, P. G. Martinsson, and J. A. Tropp, "Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, 53 (2011), pp. 217–288.

[5] P. G. Martinsson, V. Rokhlin, and M. Tygert, "A randomized algorithm for the approximation of matrices," technical report, Yale University.

[6] J. A. Tropp, "User-friendly tools for random matrices."

[7] S. S. Wilks, Mathematical Statistics, John Wiley, New York, 1st ed., 1962.


[8] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert, "A fast randomized algorithm for the approximation of matrices," preliminary report, Yale University.


Questions/Suggestions?
