TRANSCRIPT
Fault Diagnostics and Prognostics
methods in Nuclear Power Plants
Piero Baraldi
Politecnico di Milano
[Figure: measured signals x1, x2 of a Nuclear Power Plant component over time t; operation passes from normal to anomalous; the tasks are to Detect the anomaly, Diagnose the malfunctioning type among the classes c1, c2, c3, and Predict the Remaining Useful Life (RUL)]
Fault Diagnostics and Prognostics
Data-driven fault diagnostics
In This Lecture
1. Fault diagnostics: what is it?
2. Procedural steps for developing a fault diagnostic system
3. Artificial Neural Networks
Fault diagnostics: What is it?
[Figure: an industrial system, driven by forcing functions f1, f2, produces measured signals during a transient; FAULT DETECTION is the early recognition that an anomaly is occurring; FAULT DIAGNOSIS is the correct assignment of the anomaly to its class (f1, f2, or normal operation)]
Fault diagnostics: transient classification
[Figure: during a transient, the measured signals X1(t), X2(t), X3(t) are fed to the Diagnostic System (a classifier), which assigns the transient to one of the fault classes 1, 2, …, c]
In This Lecture
1. Fault diagnostics: what is it?
2. Procedural steps for developing a fault diagnostic system
3. Objectives of a fault diagnostic system
4. Unsupervised classification methods
5. Supervised classification methods
Empirical Approach to Fault Classification
1. Identify fault/degradation classes:
▪ System Analysis (FMECA, Event Tree Analysis, …)
▪ Good engineering sense and practice
▪ Analysis of the historical data
[Figure: overlapping fault classes {C1, C2} in the plane of signal 1 (x1) versus signal 2 (x2)]
Empirical Approach to Fault Classification
1. Identify fault/degradation classes
2. Collect data couples: <measurements, class>
▪ Historical data
▪ Fault transients in industrial systems may be rare
▪ Information on the class of the fault originating the transient is not always available
[Figure: labelled classes C1 and C2 in the plane of representative signal 1 (x1) versus representative signal 2 (x2)]
Example: turbine transients in a Nuclear Power Plant
[Figure: examples of turbine transients — Healthy, Type 1, Type 2, Type 3, and Type 4 malfunctioning]
Available data:
• 5 signals (operational conditions) + 7 signals (turbine behaviour)
• 149 shut-down transients from a nuclear power plant
• The types of anomalies are unknown
Empirical Approach to Fault Classification
1. Identify fault classes
2. Collect data couples: <measurements, fault class>
▪ Historical plant data
▪ Fault transients in industrial systems may be rare
▪ Information on the class of the fault originating the transient is not always available
▪ Physics-based simulator
[Figure: labelled classes C1 and C2 in the plane of representative signal 1 (x1) versus representative signal 2 (x2)]
Empirical Approach to Fault Classification
1. Identify fault classes
2. Collect data couples: <measurements, fault class>
3. Develop the empirical classification algorithm, using the collected data couples <measurements, fault class> as training set
[Figure: the labelled data couples in the (x1, x2) plane are used to train a classifier that maps the measured signals X1(t), X2(t), X3(t) into Fault Class 1, Fault Class 2, …, Fault Class c]
Data-driven fault diagnostics approaches
• Machine learning and data mining algorithms:
o Support Vector Machines
o K-Nearest Neighbour Classifier
o Fuzzy similarity-based approaches
o Artificial Neural Networks
o Neuro-fuzzy Systems
o Relevance Vector Machines
o …
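To illustrate the data-driven philosophy behind these algorithms, a minimal k-nearest-neighbour fault classifier can be sketched in a few lines. The data here are synthetic, and the two-class structure is purely illustrative:

```python
import numpy as np

def knn_classify(x_new, X_train, y_train, k=5):
    """Assign x_new to the majority class among its k nearest training patterns."""
    dists = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]          # labels of the k closest patterns
    classes, counts = np.unique(nearest, return_counts=True)
    return classes[np.argmax(counts)]                 # majority vote

# Synthetic example: two fault classes separated in the (x1, x2) signal plane
rng = np.random.default_rng(0)
X_c1 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))  # class 1 patterns
X_c2 = rng.normal([2.0, 2.0], 0.3, size=(50, 2))  # class 2 patterns
X_train = np.vstack([X_c1, X_c2])
y_train = np.array([1] * 50 + [2] * 50)

print(knn_classify(np.array([1.9, 2.1]), X_train, y_train))  # falls in the class-2 region
```

The same <measurements, class> training couples discussed above drive all the listed algorithms; only the way the decision boundary is built differs.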
o Prognostics
o What is it?
o Sources of information
o Prognostic approaches
Fault prognostics: What is it?
[Figure: degradation indicators x1, x2 measured over time t; the Predict task estimates the Remaining Useful Life (RUL)]
o Prognostics
o What is it?
o Sources of information
o Prognostic approaches
Prognostics: Sources of Information
[Diagram listing the sources of information:]
• Life durations of a set of similar components
• Threshold of failure
• Current degradation trajectory
• External/operational conditions
• Degradation trajectories of similar components
• A physics-based model of the degradation

Sources of Information: An example
Prognostics: an example
Component: turbine blade
Degradation mechanism: creep
Degradation indicator: blade elongation
x(t) = (Length(t) − initial length) / initial length
Sources of information for prognostics
• Life durations of a set of similar components which have already failed: T1, T2, …, Tn
[Figure: histogram of the failure times (days) of the similar components]
Sources of information for prognostics
• Life durations of a set of similar components which have already failed
• Threshold of failure:
[Figure: degradation indicator x(t) growing from the fault initiation and crossing the failure threshold x_th at the failure time t_f]
«A blade is discarded when the elongation, x, reaches 1.5%»
Sources of information for prognostics
• Life durations of a set of similar components which have already failed
• Threshold of failure
• A sequence of observations collected from the degradation initiation to the present time (current degradation trajectory): z1, z2, …, zk
[Figure: elongation measurements z(t) up to the present time t_k (the past evolution of the degradation indicator), together with the threshold of failure]
Sources of information for prognostics
• Life durations of a set of similar components which have already failed
• Threshold of failure
• A sequence of observations collected from the degradation initiation to the present time (current degradation trajectory)
• Degradation trajectories of similar components
[Figure: degradation trajectories z(t) of several similar components, from fault initiation onwards]
Sources of information for prognostics
• Life durations of a set of similar components which have already failed
• Threshold of failure
• A sequence of observations collected from the degradation initiation to the present time (current degradation trajectory)
• Degradation trajectories of similar components
• Information on external/operational conditions (past, present and future)
Past, present and future time evolution of: u1, u2, …, uk, uk+1, …
Example: u1 = T = temperature; u2 = θr = rotational speed
Sources of information for prognostics
• Life durations of a set of similar components which have already failed
• Threshold of failure
• A sequence of observations collected from the degradation initiation to the present time (current degradation trajectory)
• Degradation trajectories of similar components
• Information on external/operational conditions (past, present and future)
• A physics-based model of the degradation process:
x_{k+1} = f(x_k, x_{k−1}, …, u_{k+1})
Sources of information for prognostics
• Life durations of a set of similar components which have already failed
• Threshold of failure
• A sequence of observations collected from the degradation initiation to the present time (current degradation trajectory)
• Degradation trajectories of similar components
• Information on external/operational conditions (past, present and future)
• A physics-based model of the degradation process
• Measurement equation

Norton law for creep growth:
dx/dt = A · exp(−Q/(RT)) · φ^n
where the factor exp(−Q/(RT)) is the Arrhenius law, and the external/operational conditions enter through T and φ:
x = blade elongation
T = temperature
φ = K·θr² = applied stress
θr = rotational speed
A, Q, n = equipment inherent parameters
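The Norton/Arrhenius creep law can be integrated numerically to show how a failure threshold turns a degradation model into a failure-time (and hence RUL) prediction. The parameter values below are arbitrary illustrative choices, not actual turbine data:

```python
import numpy as np

# Norton law: dx/dt = A * exp(-Q/(R*T)) * phi**n  (illustrative parameters only)
A, Q, n = 1e-4, 5e4, 3.0     # equipment-inherent parameters (arbitrary)
R = 8.314                    # gas constant [J/(mol K)]
T = 900.0                    # temperature [K]
K, theta_r = 1e-6, 3000.0    # stress coefficient and rotational speed
phi = K * theta_r**2         # applied stress

x_th = 0.015                 # failure threshold: 1.5% elongation
dt = 1.0                     # time step [days]

x, t = 0.0, 0.0
while x < x_th:
    x += A * np.exp(-Q / (R * T)) * phi**n * dt   # Euler integration of creep growth
    t += dt

print(f"Predicted failure time: {t:.0f} days at elongation {x:.4f}")
```

Given the current time t_k, the RUL estimate is simply the predicted failure time minus t_k; changing T or θr in the model propagates the external/operational conditions into the prediction.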
o Prognostics
o What is it?
o Sources of information
o Prognostic approaches
Prognostic approaches
[Diagram: the sources of information map onto the prognostic approaches]
• Experience-Based: life durations of a set of similar components
• Data-Driven: threshold of failure; current degradation trajectory; external/operational conditions; degradation trajectories of similar components
• Model-Based: a physics-based model of the degradation; measurement equation
• Hybrid: combinations of the above
Data-driven fault prognostics
o When?
❖ The understanding of the first principles of component degradation is not comprehensive
❖ The system is sufficiently complex that developing an accurate physical model is prohibitively expensive
o Advantages:
❖ Quick and cheap to develop
❖ They can provide system-wide coverage
o Disadvantages:
❖ They require a substantial amount of data for training:
Degradation data are difficult to collect for new or highly reliable systems (running systems to failure can be a lengthy and costly process)
The quality of the data can be low due to difficulties in the signal measurements
Artificial Neural Networks

The problem: how to build an empirical model using input/output data?
x1(1), x2(1), x3(1) | t1(1), t2(1)
x1(2), x2(2), x3(2) | t1(2), t2(2)
…
What are Artificial Neural Networks (ANN)?
Multivariate, nonlinear interpolators capable of reconstructing the underlying complex nonlinear I/O relations by combining multiple simple functions.
An artificial neural network is composed of several simple computational units (also called nodes or neurons).
The computational unit
[Figure: a node receives the inputs y1, y2, …, yn, applies the activation function f to their sum, and delivers the output u]
For a sigmoidal activation, the derivative satisfies f′ = f(1 − f).
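A minimal sketch of the single computational unit with a sigmoid activation; the numerical check confirms the identity f′ = f(1 − f) quoted above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit_output(y, f=sigmoid):
    """A node sums its inputs y1..yn and applies the activation f."""
    return f(np.sum(y))

# Verify the sigmoid derivative identity f'(z) = f(z) * (1 - f(z)) numerically
z, eps = 0.7, 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)  # central difference
analytic = sigmoid(z) * (1 - sigmoid(z))
assert abs(numeric - analytic) < 1e-8

print(unit_output(np.array([0.2, -0.5, 1.1])))  # u = f(y1 + y2 + y3)
```

This identity is what makes the backpropagation formulas later in the lecture cheap to evaluate: the derivative reuses the already-computed node output.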
The computational unit: other representations
[Figure: alternative graphical representations of the same computational unit]
The weighted connection
[Figure: node j delivers its output u_j along a directed connection with weight w_mj to node m]
y_mj = w_mj · u_j
w_mj = synaptic weight between nodes j and m

An artificial neural network is composed of several simple computational units (also called nodes or neurons), directionally connected by weighted connections and organized in a proper architecture.
An example of ANN architecture: the multilayered feedforward ANN
[Figure: INPUT, HIDDEN and OUTPUT layers; the number of connections is (n_i × n_h) + (n_h × n_o), plus the bias connections]
Multilayered Feedforward NN: Forward Calculation

INPUT LAYER:
each k-th node (k = 1, 2, …, n_i) receives the value x_k of the k-th component of the input vector and delivers the same value.

HIDDEN LAYER:
each j-th node (j = 1, 2, …, n_h) receives the inputs and delivers
u_j^h = f_h( Σ_{k=1}^{n_i} w_jk · x_k + w_j0 ), with f_h typically sigmoidal.

OUTPUT LAYER:
each l-th node (l = 1, 2, …, n_o) receives
Σ_{j=1}^{n_h} w_lj · u_j^h + w_l0
and delivers
u_l^o = f_o( Σ_{j=1}^{n_h} w_lj · u_j^h + w_l0 ), with f_o typically linear or sigmoidal.
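The forward calculation above can be sketched directly. This is a minimal sketch with the bias weights w_j0, w_l0 folded into an extra column; the weight values are random placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_h, W_o):
    """Forward pass of a multilayered feedforward NN.
    W_h: (n_h, n_i + 1) hidden weights w_jk, last column = bias w_j0.
    W_o: (n_o, n_h + 1) output weights w_lj, last column = bias w_l0."""
    u_h = sigmoid(W_h @ np.append(x, 1.0))   # hidden layer: f_h sigmoidal
    u_o = W_o @ np.append(u_h, 1.0)          # output layer: f_o linear here
    return u_o

rng = np.random.default_rng(0)
n_i, n_h, n_o = 3, 4, 2
W_h = rng.normal(size=(n_h, n_i + 1))
W_o = rng.normal(size=(n_o, n_h + 1))

print(forward(np.array([0.5, -1.2, 0.3]), W_h, W_o))  # the two network outputs
```

Appending a constant 1.0 to each layer's input is a standard way to implement the bias terms w_j0 and w_l0 as ordinary weights.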
Why are we speaking of “Neural Networks”?

Biological Motivation
The human brain is a densely interconnected network of approximately 10^11 neurons, each connected, on average, to 10^4 others. Neuron activity is excited or inhibited through connections to other neurons.

Biological Motivation: the neuron
The dendrites provide input signals to the cell. The axon sends output signals to cell 2 via the axon terminals, which merge with the dendrites of cell 2.
What is the mathematical basis behind Artificial Neural Networks?
Neural networks are universal approximators of multivariate nonlinear functions.

KOLMOGOROV (1957):
For any real function f continuous in [0,1]^n, n ≥ 2, there exist n(2n+1) functions ψ_ml(·) continuous in [0,1] such that
f(x1, x2, …, xn) = Σ_{l=1}^{2n+1} χ_l( Σ_{m=1}^{n} ψ_ml(x_m) )
where the 2n+1 functions χ_l are real and continuous.
Thus, a total of n(2n+1) functions ψ_ml(·) and 2n+1 functions χ_l of one variable represent a function of n variables.
What is the mathematical basis behind Artificial Neural Networks?
Neural networks are universal approximators of multivariate nonlinear functions.

CYBENKO (1989):
Let σ(·) be a sigmoidal continuous function. The linear combinations
G(x) = Σ_{j=0}^{N} α_j · σ( Σ_{i=1}^{n} w_ji · x_i + θ_j )
are dense in the space of the continuous functions on [0,1]^n, i.e. any such function can be approximated arbitrarily well by a one-hidden-layer network of sigmoidal units.
How to train an ANN? Setting the ANN parameters

The ANN parameters to be set
• Once the ANN architecture has been fixed (number of layers, number of nodes per layer), the only parameters to be set are the synaptic weights (w_lj, w_jk)

Setting the ANN Parameters: Training Phase
Available input/output patterns:
x1(1), x2(1) | t(1)
x1(2), x2(2) | t(2)
…
x1(p), x2(p) | t(p)
…
x1(np), x2(np) | t(np)
[Figure: the available patterns are used to set the synaptic weights W]
The Training Objective
Training objective: minimize the average squared output deviation error (also called Energy Function):
E = (1 / (2 n_p n_o)) Σ_{p=1}^{n_p} Σ_{l=1}^{n_o} ( u_pl^o − t_pl )²
where t_pl is the TRUE l-th output of the p-th training pattern and u_pl^o is the ANN l-th output of the p-th training pattern.
The Training Objective: graphical representation
• E is a function of the ANN outputs
• E is a function of the synaptic weights [w_jk]
The error backpropagation algorithm
The weights are updated along the direction opposite to the gradient of E:
Δw = −η · ∂E/∂w (η = learning coefficient, ∂E/∂w = gradient)
Error Backpropagation (hidden-output)
[Figure: output-layer neuron l receives the weighted inputs y_l1^o, …, y_lj^o, …, y_ln_h^o and applies f; node j delivers u_j^h along the connection with weight w_lj, so that y_lj^o = w_lj · u_j^h]

E = (1/(2 n_o)) Σ_{l=1}^{n_o} (u_l^o − t_l)²

∂E/∂w_lj = (2 (u_l^o − t_l) / (2 n_o)) · ∂u_l^o/∂w_lj
         = ((u_l^o − t_l) / n_o) · (∂u_l^o/∂y_l^o) · (∂y_l^o/∂w_lj)
         = ((u_l^o − t_l) / n_o) · f′(y_l^o) · ∂( Σ_{j′=1}^{n_h} y_lj′^o )/∂w_lj
         = ((u_l^o − t_l) / n_o) · f′(y_l^o) · u_j^h

since y_lj^o = w_lj · u_j^h: only the j-th term of the sum depends on w_lj, and its derivative is u_j^h.
Error Backpropagation (hidden-output): weight updating
The weight update adds a momentum term to the gradient step:
Δw_lj(n) = −η · ∂E/∂w_lj + α · Δw_lj(n−1) (η = learning coefficient, α = momentum)
Error Backpropagation (input-hidden)
Updating weight w_jk (input-hidden connections): similarly to the updating of the output-hidden weights,
Δw_jk(n) = η · δ_j · u_k^i + α · Δw_jk(n−1)
where δ_j collects the output errors back-propagated to hidden node j, η is the learning coefficient and α the momentum.
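The update rules above can be assembled into a complete training loop. This is a minimal sketch on a toy regression problem (sigmoidal hidden nodes, linear output, online updates with momentum), not the code used in the lecture's application:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 21)          # scalar inputs
T = np.sin(np.pi * X)                  # targets: a smooth nonlinear mapping

n_i, n_h, n_o = 1, 6, 1
eta, alpha = 0.1, 0.5                  # learning coefficient, momentum
W_h = rng.normal(0, 1.0, (n_h, n_i + 1))   # w_jk (last column = bias w_j0)
W_o = rng.normal(0, 1.0, (n_o, n_h + 1))   # w_lj (last column = bias w_l0)
dW_h_prev, dW_o_prev = np.zeros_like(W_h), np.zeros_like(W_o)

for epoch in range(3000):
    for x, t in zip(X, T):
        xi = np.array([x, 1.0])
        u_h = sigmoid(W_h @ xi)                 # hidden outputs u_j^h
        uh1 = np.append(u_h, 1.0)
        u_o = W_o @ uh1                         # linear output u_l^o

        delta_o = u_o - t                       # output error (f_o linear, f_o' = 1)
        # back-propagate through w_lj; sigmoid derivative f' = f(1 - f)
        delta_h = (W_o[:, :n_h].T @ delta_o) * u_h * (1 - u_h)

        # weight updates with momentum: dw(n) = -eta * grad + alpha * dw(n-1)
        dW_o = -eta * np.outer(delta_o, uh1) + alpha * dW_o_prev
        dW_h = -eta * np.outer(delta_h, xi) + alpha * dW_h_prev
        W_o += dW_o
        W_h += dW_h
        dW_o_prev, dW_h_prev = dW_o, dW_h

preds = np.array([(W_o @ np.append(sigmoid(W_h @ np.array([x, 1.0])), 1.0))[0] for x in X])
print(f"training MSE: {np.mean((preds - T) ** 2):.5f}")
```

The two delta terms implement exactly the hidden-output and input-hidden gradients derived above; the momentum term reuses the previous weight change to damp oscillations of the gradient descent.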
Utilization of the Neural Network
After training:
• The synaptic weights are fixed
• A new input retrieves the information stored in the weights and produces the corresponding output
Capabilities:
• The nonlinearity of the sigmoids allows the NN to learn nonlinear mappings
• Each node is independent and relies only on local information (its synapses): this yields parallel processing and fault tolerance
CONCLUSIONS [ANN]
Advantages:
➢ No physical/mathematical modelling effort.
➢ Automatic parameter adjustment through a training phase based on the available input/output data, so as to obtain the best interpolation of the functional relation between input and output.
Disadvantages:
➢ “Black box”: difficulty in interpreting the underlying physical model.
ANN for fault diagnostics

Objective
• Build and train a neural network to classify different malfunctions in the plant component

Input/Output patterns
• Input: signal measurements
• Output: number (label) of the class of the failure
Application 1
Transient classification in a Feedwater System of a Boiling Water Reactor (BWR)
[Figures: the Boiling Water Reactor and its Secondary System]
Outputs
• Class 1: Leakage through the second high-pressure preheater
• Class 2: Leakage in the first high-pressure preheater to the drain tank
• Class 3: Leakage through the first high-pressure preheater drain back-up valve to the condenser
• Class 4: Leakage through the high-pressure preheaters bypass valve
• Class 5: Leakage through the second high-pressure preheater drain back-up valve to the feedwater tank
Transients
[Figure: locations of the faults in the secondary system — Class 1: Preheater EA1; Class 2: Preheater EA2; Class 3: Valve VB20; Class 4: Valve VB7; Class 5: Valve VA25]
Input variables
Variable | Signal | Unit
1 | Position level for control valve EA1 | %
2 | Position level for control valve EB1 | %
3 | Temperature drain before VB3 | ºC
4 | Temperature feedwater after EA2 train A | ºC
5 | Temperature feedwater after EB2 train B | ºC
6 | Temperature drain 6 after VB1 | ºC
7 | Temperature drain 5 after VB2 | ºC
8 | Position level control valve before EA2 | %
9 | Position level control valve before EB2 | %
10 | Temperature feedwater before EB2 train B | ºC
Sensors position
[Figure: locations of the ten measured variables in the secondary system]
■ Measurement: 36 sampling instants in [80, 290] s, one every 6 s.
Input/Output patterns
• Input: the class assignment is performed dynamically, as a two-step time window over the measured signals shifts to the successive time t+1
• Output: number of the class to which the variables belong
Training set
• A training set was constructed containing 8 transients for each class.
• For a transient of a given class, the 35 patterns used to train the network take the following form:
x1(2), x1(1), x2(2), x2(1), …, x10(2), x10(1) | class
⋮
x1(t), x1(t−1), x2(t), x2(t−1), …, x10(t), x10(t−1) | class
x1(t+1), x1(t), x2(t+1), x2(t), …, x10(t+1), x10(t) | class
⋮
x1(36), x1(35), x2(36), x2(35), …, x10(36), x10(35) | class
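The construction of these two-step time-window patterns can be sketched as follows. The signal values are synthetic; the shapes (10 signals, 36 sampling instants, 35 patterns per transient) follow the application:

```python
import numpy as np

def build_patterns(transient, label):
    """Build training patterns from one transient.
    transient: array of shape (36, 10) -- 36 sampling instants of 10 signals.
    Each pattern pairs time t+1 with time t for every signal:
    [x1(t+1), x1(t), x2(t+1), x2(t), ..., x10(t+1), x10(t)] -> class label."""
    n_t, n_sig = transient.shape
    patterns = []
    for t in range(n_t - 1):
        pat = np.column_stack([transient[t + 1], transient[t]]).ravel()
        patterns.append((pat, label))
    return patterns

rng = np.random.default_rng(0)
transient = rng.normal(size=(36, 10))      # one synthetic transient
patterns = build_patterns(transient, label=1)

print(len(patterns), patterns[0][0].shape)  # 35 patterns of 20 inputs each
```

Each of the 8 transients per class thus contributes 35 input/output couples to the training set.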
How to set the ANN architecture
[Figure: error on the validation set as a function of the number of hidden nodes (1, 2, 3, …); the architecture minimizing the validation error is selected]
Network architecture
• N_h, η, α optimized with a grid search
• Optimal values: N_h = 17, η = 0.55, α = 0.6
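The grid optimization over N_h, η and α can be sketched generically. Here train_and_validate is a hypothetical stand-in for training the network and returning its validation error; a synthetic error surface with its minimum at the lecture's optimal values is used for illustration:

```python
import itertools

def train_and_validate(n_h, eta, alpha):
    """Hypothetical stand-in: train an ANN with these hyperparameters and
    return the error on the validation set. A synthetic error surface with
    a minimum at (17, 0.55, 0.6) replaces the real training here."""
    return (n_h - 17) ** 2 / 100 + (eta - 0.55) ** 2 + (alpha - 0.6) ** 2

grid = itertools.product(range(5, 30, 2),          # candidate N_h values
                         [0.25, 0.4, 0.55, 0.7],   # candidate eta values
                         [0.3, 0.45, 0.6, 0.75])   # candidate alpha values
best = min(grid, key=lambda p: train_and_validate(*p))
print(best)  # hyperparameter triple with the lowest validation error
```

In the real application each grid point requires a full training run, so the grid is kept coarse and the validation error of the previous figure drives the choice.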
Test results
[Figure: neural network outputs versus true class values on the test data; for each class c = 1, …, 5 the outputs stay within roughly ±0.1 of the class integer over the 35 input patterns; panels 3a–3e show the five test transients of class 3]
Comments
• The real-valued outputs provided by the neural network closely approximate the class integer values from early times.
• As time progresses, the network output class assignment approaches the true class integer more and more closely.
New transients
• Desirable behaviour of the neural network when a new (unseen) type of transient is found:
• Identify that something anomalous is happening
• Classify it as “don’t know”
New transients
• New class
➢ Class 6: Steam line valve to the second high-pressure preheater closing
[Figure: location of Class 6 — Valve VA6]
New transients
• Close to Class 2, but with a larger deviation
• Value provided by the neural network close to 0
[Figure: neural network outputs for the Class 6 transients over the 35 input patterns (panels 4a, 4b); the outputs do not settle on any of the trained classes 1–5]
New transient
[Figure: temperature drain after VB2 during the second transient, falling below the lower training level; the upper and lower training levels are marked]
• The large output error is due to the fact that some of the inputs fed into the network fall outside the ranges of the input values of the training transients.
Conclusions
• A neural network has been constructed and trained to classify different malfunctions in the secondary system of a boiling water reactor.
• The network uses as input patterns the information provided by signals that are monitored in the plant, and provides as output an estimate of the class assignment.
• When a transient belonging to a different class is fed to the network, it is capable of detecting that the pattern does not belong to any of the classes for which it was trained.
ANN for fault prognostics
• Possible approaches:
• Learning directly from data the component RUL
• Modeling the cumulative damage (health index) and then extrapolating out to a damage (health) threshold
Direct RUL Prediction: The model
[Figure: the measured signals S1, …, Sn are fed to an ANN whose output is the RUL]
Information necessary to develop the model
• Signal measurements for a set of N similar components from degradation onset to failure
[Figure: degradation signals x1(t), …, xn(t) over time for similar components 1, …, N]

ANN training data
For each trajectory, the input patterns are windows of m consecutive measurements and the output is the corresponding RUL: for trajectory 1 the outputs run from t_f(1) − m − 1 down to 1, and similarly for the other trajectories up to t_f(N) − m − 1.
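Assembling the (input window → RUL) training pairs from N run-to-failure trajectories can be sketched as follows. The degradation signals are synthetic, and m is the window length (the exact pattern count per trajectory may differ slightly from the lecture's table depending on the windowing convention):

```python
import numpy as np

def rul_training_data(trajectories, m=5):
    """Build ANN training pairs from run-to-failure signal trajectories.
    Each input is a window of m consecutive measurements; the output is the
    RUL at the window's end, counting down to 1 at failure."""
    inputs, outputs = [], []
    for traj in trajectories:                # each traj: measurements up to failure
        t_f = len(traj)
        for t in range(m, t_f):
            inputs.append(traj[t - m:t])     # window ending at time t
            outputs.append(t_f - t)          # remaining useful life
    return np.array(inputs), np.array(outputs)

rng = np.random.default_rng(0)
# N = 3 synthetic components: increasing degradation + noise, different lifetimes
trajectories = [np.linspace(0, 1, t_f) + 0.05 * rng.normal(size=t_f)
                for t_f in (40, 55, 63)]
X, y = rul_training_data(trajectories, m=5)

print(X.shape, y.min(), y.max())  # window/RUL pairs pooled over all trajectories
```

Once these pairs are built, training the direct-RUL ANN proceeds exactly as in the diagnostic case, with the RUL replacing the class label as the target output.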
Where to study
• All slides
• Paper: “A Review of Prognostics and Health Management Applications in Nuclear Power Plants” by Jamie Coble, Pradeep Ramuhalli, Leonard Bond, J. Wesley Hines, and Belle Upadhyaya
• Document: Neural Network Modeling