
A variant of the particle swarm optimization for the improvement of fault diagnosis in industrial systems via faults estimation

Lídice Camps Echevarría a, Orestes Llanes Santiago a,*, Juan Alberto Hernández Fajardo a, Antônio J. Silva Neto a,b, Doniel Jiménez Sánchez a

a Instituto Superior Politécnico José Antonio Echeverría, Cujae, La Habana, Cuba
b Instituto Politécnico, IPRJ/UERJ, RJ, Brazil

ARTICLE INFO

Article history: Received 13 January 2012; received in revised form 6 November 2013; accepted 6 November 2013; available online 9 December 2013.

Keywords: Ant colony optimization; Fault diagnosis; Industrial systems; Particle swarm optimization; Robust diagnosis; Sensitive diagnosis

ABSTRACT

This paper proposes an approach for Fault Diagnosis and Isolation (FDI) on industrial systems via faults estimation. FDI is presented as an optimization problem and it is solved with Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms. A study of the influence of some parameters of PSO and ACO on the desirable characteristics of FDI, i.e. robustness and sensitivity, is also presented. As a consequence, the Particle Swarm Optimization with Memory (PSO-M) algorithm, a new variant of PSO, was developed. PSO-M has the objective of reducing the number of iterations/generations that PSO needs to execute in order to provide a reasonable quality diagnosis. The proposed approach is tested using simulated data from a DC Motor benchmark. The results and analysis indicate the suitability of the approach as well as of the PSO-M algorithm.

© 2013 Elsevier Ltd. All rights reserved.

    1. Introduction

A fault is an unpermitted deviation of at least one characteristic property or parameter of a system from the acceptable, usual or standard operating condition (Simani et al., 2002).

Faults can cause economic losses as well as damage to human capital and the environment. There is an increasing interest in the development of new methods for fault detection and isolation, FDI, also known as Fault Diagnosis, in relation to reliability, safety and efficiency (Isermann, 2005).

The FDI methods are responsible for detecting, isolating and establishing the causes of the faults affecting the system. They should also guarantee the fast detection of incipient faults (sensitivity to faults) and the rejection of false alarms that are attributable to disturbances or spurious signals (robustness).

The FDI methods are broken down into three general groups: the process history based methods (Venkatasubramanian et al., 2002c), those based on qualitative models (Venkatasubramanian et al., 2002b), and the quantitative model based methods, also known as analytical methods (Venkatasubramanian et al., 2002a).

The quantitative model based methods make use of an analytical or computational model of the system. The great variety of the proposed model based methods comes down to a few basic concepts such as the parity space, the observer approach and the parameter identification or estimation approach (Isermann, 1984; Frank, 1990, 1996; Isermann, 2005).

Many papers and books have been devoted to describing and establishing links among the different approaches for model based diagnosis (Frank, 1990; Venkatasubramanian et al., 2002a; Simani et al., 2002; Witczak, 2007; Metenidin et al., 2011). A clear description of each approach and its limitations is presented in Witczak (2007). In Witczak (2007) and Metenidin et al. (2011) it is recognized that observer and parity space approaches do not always allow the isolation of actuator faults. For nonlinear models, the complexity of the observer design increases, while an exact model of the system is necessary for the parity space approach (Witczak, 2007; Metenidin et al., 2011).

The parameter estimation approach requires knowledge of the relationships between such parameters and the physical coefficients of the system, as well as of the influence of the faults on these coefficients (Frank, 1990, 1996; Isermann, 2005). This approach does not provide a good diagnosis in the case of sensor faults. Furthermore, it usually demands a high computing time, which makes it infeasible in most situations (Isermann, 2005; Witczak, 2007).

The topics of robustness and sensitivity are of high interest in FDI. Thus, many robust analytical methods have been developed (Isermann, 1984, 2005; Frank, 1990; Chen and Patton, 1999; Patton et al., 2000). However, the unavoidable process disturbances and the modelling errors render most FDI methods unfeasible in practical applications (Simani et al., 2002; Simani and Patton, 2008). Therefore, further research on the topic of robust and sensitive FDI methods is indispensable (Isermann, 1984; Simani et al., 2002; Simani and Patton, 2008).

The FDI methods based on observers or parity space demand large efforts to generate robust and sensitive residuals, related to fault detection. The generation of a residual is highly dependent on the model that describes the system.

This paper proposes an approach for FDI in industrial systems via fault estimation. The proposal allows the system to be diagnosed based on a direct fault estimation. Soft computing techniques are recognized as attractive tools for solving various problems related to modern FDI (Witczak, 2007; Metenidin et al., 2011). This approach considers the use of meta heuristics in order to obtain a robust and sensitive diagnosis. Some FDI methods that use meta heuristics are reported in Witczak (2007), Yang et al. (2007), Wang et al. (2008), Camps-Echevarría et al. (2010) and Metenidin et al. (2011).

The meta heuristics Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) have simple structures. Moreover, they were recently applied to FDI (Liu et al., 2008; Liu, 2009; Samanta and Nataraj, 2009; Duarte and Quiroga, 2010; Metenidin et al., 2011). Therefore, they were selected for this approach.

This work is also aimed at studying the influence of parameters from PSO and ACO on robustness and sensitivity. This study is the basis for the development of a new variant of PSO, named Particle Swarm Optimization with Memory (PSO-M). The new algorithm has the objective of reducing the number of iterations/generations that PSO needs to execute in order to discover reasonable quality solutions. This means less computational time, which allows faster diagnosis. PSO-M can be easily extended to other optimization problems.

The proposed approach also permits the direct estimation of the faults, regardless of whether they take place in the actuator, process or sensors. This estimation is based on the residual, which is obtained directly between the measurements of the output of the system and the output estimated by the model. This avoids restrictions on the nature of the model.

The main contributions of this paper can be summarized as follows: the study of a new approach for the development of robust and sensitive FDI methods based on direct fault estimation with the meta heuristics PSO and ACO; the study of the influence of their parameters in order to increase the robustness and sensitivity; and the development of a new variant, PSO-M, which uses a pheromone matrix from ACO for storing the history of PSO, useful for reducing the computational cost required by PSO. The viability of the proposal is demonstrated by diagnosing the simulation data from a DC Motor.

This paper is organized as follows. The second section introduces the modelling of faults and the model-based FDI methods via parameter estimation. The proposed approach for FDI is also described in this section. The third and fourth sections give a brief description of the PSO algorithm and the ACO algorithm, respectively. The fifth section justifies and describes the PSO-M algorithm. Afterward, the next section details the DC motor case study and its simulations. The following sections present the experimental methodology, the experiments and the results, in that order. The tenth section presents a comparison between Parity Space, Diagnostic Observers and our proposal for FDI. Finally, some concluding comments and remarks are presented.

    2. Modelling faults and FDI based on direct fault estimation

FDI based on model parameters, which are partially unknown, requires online parameter estimation methods. For that purpose the input vector $u(t) \in \mathbb{R}^m$, the output vector $y(t) \in \mathbb{R}^p$ and the basic model structure must be known (Isermann, 2005).

The models for describing the systems depend on the dynamics of the process and on the objective to be reached with the simulation.

The most used model is the linear time invariant (LTI) one, which has two representations: the transfer function or transfer matrix, and the state space representation. This last representation is also valid for non-linear models.

Let us express the input/output behavior of SISO (single input single output) processes by means of ordinary linear differential equations:

$$y(t) = \psi^{T}(t)\,\theta(t) \qquad (1)$$

where

$$\theta(t) = [a_1 \;\cdots\; a_n \;\; b_0 \;\cdots\; b_m]^{T} \qquad (2)$$

and

$$\psi^{T}(t) = [-y^{(1)}(t) \;\cdots\; -y^{(n)}(t) \;\; u(t) \;\cdots\; u^{(m)}(t)] \qquad (3)$$

where $y^{(n)}(t)$ and $u^{(m)}(t)$ indicate derivatives ($y^{(n)}(t) = d^n y(t)/dt^n$).

The respective transfer function becomes, through Laplace transformation,

$$G_p(s) = \frac{y(s)}{u(s)} = \frac{B(s)}{A(s)} = \frac{b_0 + b_1 s + \cdots + b_m s^m}{1 + a_1 s + \cdots + a_n s^n} \qquad (4)$$

The faults affecting the system may eventually change one or several parameters in the vector $\theta(t)$. The FDI based on model parameters is divided into two steps. The first one is the estimation of the model parameter vector $\theta(t)$. The second one is the detection and isolation of the faults based on known relationships between model parameters, physical coefficients of the system and faults (Isermann, 1984, 2005).

The main drawback of this approach is that the model parameters should have physical meaning, i.e., they should correspond with the parameters of the system. In such situations, the detection and isolation of faults are very straightforward. Otherwise, it is usually difficult to distinguish a fault from a change in the parameter vector $\theta(t)$. Moreover, the process of fault isolation may become extremely difficult because model parameters do not uniquely correspond with those of the system. It should also be pointed out that the detection of faults in sensors and actuators is possible but rather complicated (Witczak, 2007; Metenidin et al., 2011).

The two approaches that are commonly used for estimating the model parameters $\theta(t)$ are classified with respect to the minimization function they use (Frank, 1990; Isermann, 2005): the sum of least squares of the equation error, and the sum of least squares of the output error.

The FDI based on parameter estimation considering the minimization of the sum of least squares of the output error requires numerical optimization methods. These methods give more precise parameter estimations but the computational effort is greater, and on-line applications are, in general, not possible (Isermann, 2005). Another typical limitation of parameter estimation-based approaches is related to the fact that the input signal should be persistently exciting (Witczak, 2007; Metenidin et al., 2011).

Instead of estimating the model parameter vector $\theta$, let us explicitly consider the faults in a SISO system in a closed loop described by an LTI model as

$$y(s) = G_{yw}(s)\,w(s) + G_{yf_u}(s)\,f_u(s) + G_{yf_y}(s)\,f_y(s) + G_{yf_p}(s)\,f_p(s) \qquad (5)$$

where $w(s) \in \mathbb{R}$ is the reference signal of the control system and $f_u, f_p, f_y \in \mathbb{R}$ are faults in the actuator, process and output sensor, respectively. The transfer function $G_{yw}(s)$ represents the dynamics of the system, while $G_{yf_u}(s)$, $G_{yf_p}(s)$ and $G_{yf_y}(s)$ are the transfer functions that represent the faults $f_u$, $f_p$ and $f_y$, respectively (Ding, 2008).

This proposed approach considers the estimation of the fault parameter vector $f = [f_u \; f_y \; f_p]$ instead of $\theta(t)$. Therefore, it requires a model that directly represents the effect of the faults on the actuator, process and sensors of the system. This kind of model is widely used in other FDI model based methods, such as those based on observers or on parity spaces (Frank, 1990; Isermann, 2005; Ding, 2008).

The estimation of $f$ allows diagnosing the system in a direct way: from the minimization of the sum of the squares of the output errors. The optimization is described as follows:

$$\min\; F(\hat f) = \sum_{t=1}^{I} \left( y_t(f) - \hat y_t(\hat f) \right)^2$$
$$\text{s.t.}\quad f_{\min} \le \hat f \le f_{\max} \qquad (6)$$

where $I$ is the number of sampling instants and $\hat y_t(\hat f)$ is the estimated output at each time instant $t$. $\hat y_t(\hat f)$ is obtained from the solution of the model given by Eq. (5), and $y_t(f)$ is the output measured by the sensor at the same instant (Ding, 2008).
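For illustration, a minimal sketch of the objective function in Eq. (6) is given below (Python/NumPy). The names simulate_model and measured_output are hypothetical placeholders for the solution of the model in Eq. (5) and the sensor measurements; they are not part of the original formulation.

```python
import numpy as np

def objective(f_hat, measured_output, simulate_model):
    """Sum of squared output errors, Eq. (6).

    f_hat           : candidate fault vector [f_u, f_y, f_p]
    measured_output : array with the I measured outputs y_t(f)
    simulate_model  : function returning the I model outputs y_hat_t(f_hat)
                      obtained from the model of Eq. (5) for a given fault vector
    """
    y_hat = simulate_model(f_hat)          # estimated outputs for the candidate faults
    residual = measured_output - y_hat     # output error at every sampling instant
    return float(np.sum(residual ** 2))    # F(f_hat)
```

The box constraints $f_{\min} \le \hat f \le f_{\max}$ of Eq. (6) are left to the optimizer that evaluates this function.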

The approach proposed in this paper permits the direct estimation of the faults taking place in the actuator, process or sensors. This alleviates one of the limitations of the model based methods for FDI (Witczak, 2007; Metenidin et al., 2011).

In order to obtain robustness to model uncertainties, disturbances or noise, the algorithms ACO and PSO are applied to the fault estimation. These two algorithms have shown robustness in other applications, and their parameters can be manipulated in order to increase this characteristic. Depending on the model used with the proposed approach and on the application of PSO and ACO, there is no need for additional effort in the generation of a robust residual. Thus, this proposal avoids another limitation of the model based FDI methods (Witczak, 2007; Metenidin et al., 2011).

The proposed approach requires the following assumptions to be met:

There is a known model of the system that represents the dynamics of the faults.

The faults cannot be intermittent.

The steps of the proposed methodology are:

1. Formulate the optimization problem, see Eq. (6).
2. Take the vector $\hat f = \vec{0}$ as a solution of the optimization problem and compute the objective function $F(\hat f)$. If $F(\hat f) < 0.01$ then the system is not under the influence of faults. Otherwise, go to step 3.
3. Apply ACO, PSO or PSO-M to solve the optimization problem, obtaining an estimation of $f$ (PSO-M is recommended).
4. Diagnosis: if any component of $\hat f$ is different from zero, then the fault that corresponds to that component is affecting the system. The magnitude of the fault coincides with the value of the estimation.

Due to model uncertainties, noise in the measurements and other disturbances, the value of the objective function is not equal to zero even when the estimation of the fault vector $\hat f$ coincides with the real fault vector $f$. Therefore, the estimated fault vector can be different from zero even when the system is not affected by faults. This causes uncertainties in the decision.

Step 2 of the methodology avoids this disadvantage. A threshold for the objective function is determined, considering the system not affected by faults and the measurements affected by noise of up to 8%. If this threshold is exceeded, then it is decided that the system is under the effect of faults.
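A minimal sketch of steps 1–4 and the threshold rule is given below. It assumes an objective function such as the one sketched after Eq. (6) and a hypothetical optimizer minimize_faults (for instance PSO-M); both names are illustrative and not part of the paper.

```python
import numpy as np

FAULT_NAMES = ("f_u", "f_y", "f_p")
THRESHOLD = 0.01   # threshold on F(f_hat) used in step 2

def diagnose(objective, minimize_faults, f_min, f_max):
    """Steps 1-4 of the proposed methodology (illustrative sketch)."""
    # Step 2: evaluate the objective at f_hat = 0
    f_zero = np.zeros(len(FAULT_NAMES))
    if objective(f_zero) < THRESHOLD:
        return {}   # the system is considered free of faults

    # Step 3: estimate the fault vector with a metaheuristic (PSO-M recommended)
    f_hat = minimize_faults(objective, f_min, f_max)

    # Step 4: every nonzero component points to a fault of that magnitude
    return {name: value for name, value in zip(FAULT_NAMES, f_hat) if value != 0.0}
```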

    3. Particle swarm optimization

Many strategies that mimic different natural behaviors have been proposed for handling difficult optimization problems. Swarm Intelligence brings together some optimization algorithms that are based on the observation of simplified social models. This is the case of Particle Swarm Optimization (PSO), which was introduced by Kennedy and Eberhart in 1995 (Kennedy and Eberhart, 1995; Kennedy, 1997). PSO is based on the social behavior of flocking birds and schooling fish.

PSO has been applied to different fields requiring parameter optimization in a high dimensional space. This is a result of its simplicity, high efficiency in searching, easy implementation and fast convergence to the global optimum (Kennedy and Eberhart, 1995; Kameyama, 2009). All these advantages, in addition to its applications in automatic control, system identification, and some recent results related to the FDI area (Poli, 2007; Samanta and Nataraj, 2009; Liu, 2009; Duarte and Quiroga, 2010), motivated the selection of PSO in this study.

3.1. Description of the algorithm PSO

PSO works with a group or population (swarm) of Z agents (particles), which are interested in finding a good approximation to the global minimum $x_0$ of the objective function $f: D \subseteq \mathbb{R}^n \to \mathbb{R}$.

Each agent moves throughout the search space $D$. The position of the $z$th particle is identified with a solution of the optimization problem. On each $l$th iteration its value is updated, and it is represented by a vector $X_l^z \in \mathbb{R}^n$.

Each particle accumulates its historical best position $X_{pbest}^z$, which represents the achieved individual experience. The best position achieved along the iterative procedure among all particles, $X_{gbest}$, represents the collective experience.

The generation of the new position needs the current velocity of the particle $V_l^z \in \mathbb{R}^n$ and the previous position $X_{l-1}^z$:

$$X_l^z = X_{l-1}^z + V_l^z \qquad (7)$$

The vector $V_l^z$ is updated according to the following expression:

$$V_l^z = V_{l-1}^z + c_1 \Phi \left( X_{pbest}^z - X_{l-1}^z \right) + c_2 \Phi \left( X_{gbest} - X_{l-1}^z \right) \qquad (8)$$

where $V_{l-1}^z$ is the previous velocity of the $z$th particle; $\Phi$ denotes a diagonal matrix with random numbers in the interval [0, 1]; and $c_1$, $c_2$ are the parameters that characterize the trend during the velocity updating (Kennedy, 1997; Kameyama, 2009). They are called the cognitive and the social parameter, respectively, and represent how the individual and the social experience influence the next agent decision. Some studies have been made in order to determine the best values for $c_1$ and $c_2$. The values $c_1 = c_2 = 2$, $c_1 = c_2 = 2.05$, or $c_1 > c_2$ with $c_1 + c_2 \le 4.10$, are recommended (Kennedy, 1998; Carlisle and Dozier, 2001; Beielstein et al., 2002).

Some variants of the algorithm have been developed with the objective of improving some characteristics of PSO, e.g. velocity, stability and convergence.

Eqs. (7) and (8) represent the canonical implementation of PSO. Another well known variant is the one with inertial weight, which considers either a constant inertial weight or an inertial weight reduction. The idea behind this variant is to add an inertial factor $\omega$ for balancing the importance of the local and the global search (Beielstein et al., 2002; Kameyama, 2009). This parameter affects the updating of each particle velocity by the expression

$$V_l^z = \omega V_{l-1}^z + c_1 \Phi \left( X_{pbest}^z - X_{l-1}^z \right) + c_2 \Phi \left( X_{gbest} - X_{l-1}^z \right) \qquad (9)$$

Nowadays, the most accepted strategy for $\omega$ is to establish $\omega \in [\omega_{\min}, \omega_{\max}]$ and to reduce its value according to the number of the current iteration $l$ by means of

$$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{Itr_{\max}}\, l \qquad (10)$$

where $Itr_{\max}$ is the maximum number of iterations to be reached. It is recommended to take $\omega_{\max} = 0.9$ and $\omega_{\min} = 0.4$ (Liang et al., 2006).

The basic PSO is recognized as a particular case of the inertial weight alternative, obtained by assigning $\omega = 1$ during the entire run of the algorithm (Beielstein et al., 2002; Kameyama, 2009).

The parameter $\chi$, called constriction factor (Clerc and Kennedy, 2002; Kameyama, 2009), is introduced in order to facilitate the control of the particle velocities:

$$V_l^z = \chi \left[ V_{l-1}^z + c_1 \Phi \left( X_{pbest}^z - X_{l-1}^z \right) + c_2 \Phi \left( X_{gbest} - X_{l-1}^z \right) \right] \qquad (11)$$

where

$$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|} \qquad (12)$$

and $\varphi = c_1 + c_2 > 4$. The literature recommends setting $\chi = 0.729$ with $c_1 = c_2 = 2.05$. This is equivalent to using the inertia weight variant with $\omega = 0.729$ throughout the entire procedure and setting $c_1 = c_2 = 1.49$ (Eberhart and Shi, 2001).

There are different topologies for PSO. In this work the Gbest topology is used: all the particles are connected to each other and form a single neighborhood (Kameyama, 2009).

    A pseudo-code for PSO is given in Fig. 1.
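As a complement to the pseudo-code in Fig. 1, a compact sketch of the inertia weight variant (Eqs. (7), (9) and (10)) with the Gbest topology is shown below. It is an illustrative NumPy implementation written for this text, not the authors' MATLAB code.

```python
import numpy as np

def pso_inertia(objective, lb, ub, Z=30, itr_max=100,
                c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=None):
    """PSO with linearly decreasing inertia weight, Eqs. (7), (9) and (10)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = lb.size

    X = rng.uniform(lb, ub, size=(Z, n))                 # particle positions
    V = np.zeros((Z, n))                                 # particle velocities
    pbest = X.copy()
    pbest_val = np.array([objective(x) for x in X])
    g = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    for l in range(itr_max):
        w = w_max - (w_max - w_min) * l / itr_max        # Eq. (10)
        r1, r2 = rng.random((Z, n)), rng.random((Z, n))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (9)
        X = np.clip(X + V, lb, ub)                       # Eq. (7), kept inside the bounds
        vals = np.array([objective(x) for x in X])

        improved = vals < pbest_val                      # update individual experience
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        g = int(np.argmin(pbest_val))                    # update collective experience
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    return gbest, gbest_val
```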

    4. Ant colony optimization

Ant Colony Optimization (ACO) was initially proposed for integer programming problems (Dorigo and Caro, 1992). ACO is inspired by the behavior of ants seeking a path between their colony and a source of food. This behavior is due to the deposit and evaporation of pheromone.

ACO was successfully extended to continuous optimization problems (Dorigo and Blum, 2005; Silva-Neto and Becceneri, 2009; Socha and Dorigo, 2008). An advantage of this algorithm is that its parameters can be manipulated in order to achieve more diversification or intensification during the search. This allows an efficient hybridization with other algorithms.

4.1. Description of the algorithm

For the continuous case the idea of ACO is to mimic the behavior of ants (Dorigo and Blum, 2005; Silva-Neto and Becceneri, 2009; Socha and Dorigo, 2008). In this work, the adaptation to the continuous case reported in Silva-Neto and Becceneri (2009) was applied. This variant has been applied to other problems (Becceneri and Zinober, 2001; Souto et al., 2005). In this variant, the first step is to divide the feasible interval of each variable of the problem into $k$ possible values. At each iteration the algorithm generates a family of Z new ants. This generation uses the information obtained from the previous ants, which is saved in the pheromone accumulative probability matrix $PC$ (this matrix has dimensions $n \times k$, where $n$ is the number of variables of the problem). This matrix is updated at each iteration $l$ as

$$pc_{ij}(l) = \frac{\sum_{g=1}^{j} f_{ig}(l)}{\sum_{g=1}^{k} f_{ig}(l)} \qquad (13)$$

where $f_{ij}$ are the elements of the pheromone matrix $F \in M_{n \times k}(\mathbb{R})$ and they express the pheromone level of the $j$th discrete value of the $i$th variable. This matrix is updated in each iteration based on the evaporation factor $C_{evap}$ and the incremental factor $C_{inc}$ as

$$f_{ij}(l) = (1 - C_{evap})\, f_{ij}(l-1) + \delta_{ij,best}\, C_{inc}\, f_{ij}(l-1) \qquad (14)$$

where

$$\delta_{ij,best} = \begin{cases} 1 & \text{if } j_i = x_i^{best} \\ 0 & \text{otherwise} \end{cases} \qquad (15)$$

with $x_i^{best}$ being the $i$th component of the best ant $X_{gbest}$.

The scheme for generating a new ant $X_l^z$ at iteration $l$ needs $n$ random numbers $q_1^{rand}, q_2^{rand}, \ldots, q_n^{rand}$. For each component $x_n^z$ of the ant $X_l^z$ to be generated, the following generation mechanism is set:

$$x_n^z = \begin{cases} x_{nm} & \text{if } q_n^{rand} < q_0 \\ x_{n\hat m} & \text{if } q_n^{rand} \ge q_0 \end{cases} \qquad (16)$$

where

$$m:\; f_{nm} \ge f_{n\bar m} \quad \forall\, \bar m = 1, 2, \ldots, k \qquad (17)$$

and

$$\hat m:\; pc_{n\hat m} > q_n^{rand} \;\wedge\; pc_{n\hat m} \le pc_{nm} \quad \forall\, m \ge \hat m \qquad (18)$$

i.e., $m$ is the discrete value with the largest pheromone level and $\hat m$ is the first discrete value whose accumulated probability exceeds $q_n^{rand}$. The control parameter $q_0$ allows controlling the level of randomness during the ant generation. The pseudo-code for ACO is given in Fig. 2.

    Fig. 1. Pseudo-code for PSO algorithm.

    Fig. 2. Pseudo-code for ACO algorithm.
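A compact sketch of the continuous ACO variant described above (Eqs. (13)–(18)) is also given: each variable is discretized into k values, the pheromone matrix F drives the accumulated probability matrix PC, and new ants are generated with the q0 rule. The code is an illustrative NumPy implementation written under these assumptions, not the authors' code.

```python
import numpy as np

def aco_continuous(objective, lb, ub, Z=30, k=63, q0=0.15,
                   c_evap=0.1, c_inc=0.5, itr_max=100, seed=None):
    """Continuous ACO sketch: pheromone over k discrete values per variable."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = lb.size
    grid = np.linspace(lb, ub, k).T            # shape (n, k): discrete values per variable
    F = np.ones((n, k))                        # pheromone matrix
    best_x, best_val = None, np.inf

    for _ in range(itr_max):
        PC = np.cumsum(F, axis=1) / F.sum(axis=1, keepdims=True)       # Eq. (13)
        ants = np.empty((Z, n))
        for z in range(Z):
            for i in range(n):
                if rng.random() < q0:          # exploit the value with most pheromone, Eq. (17)
                    j = int(np.argmax(F[i]))
                else:                          # roulette on accumulated probabilities, Eq. (18)
                    j = min(int(np.searchsorted(PC[i], rng.random())), k - 1)
                ants[z, i] = grid[i, j]
        vals = np.array([objective(x) for x in ants])
        z_best = int(np.argmin(vals))
        if vals[z_best] < best_val:
            best_x, best_val = ants[z_best].copy(), vals[z_best]

        # Eq. (14): evaporation plus reinforcement of the cells chosen by the best ant
        delta = np.zeros_like(F)
        for i in range(n):
            delta[i, int(np.argmin(np.abs(grid[i] - best_x[i])))] = 1.0
        F = (1.0 - c_evap) * F + delta * c_inc * F

    return best_x, best_val
```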


5. New variant: Particle swarm optimization with memory (PSO-M)

PSO has been hybridized successfully with other optimization methods. The interest in combining PSO with other methods is mostly based on its recognized characteristic of not improving the quality of the solutions when the number of iterations is increased (Angeline, 1998). Instead, PSO can find good enough solutions faster than other evolutionary algorithms (Angeline, 1998).

In the particular case of PSO and ACO, some works have focused on the hybridization between them. For example, in Shelokar et al. (2007) an algorithm is proposed in which ACO is used to perform a local search around each particle of PSO at each iteration.

Our proposal, called Particle Swarm Optimization with Memory (PSO-M), has the objective of reducing the number of iterations/generations that PSO needs to execute in order to discover reasonable quality solutions. This means less computational time, which allows a faster diagnosis. This is also a desirable characteristic for online diagnosis in industrial processes.

    The implementation of the algorithm consists of two stages:

First stage: A swarm explores the search space, i.e. PSO is applied. A pheromone matrix, as described in Section 4, stores the information of the search in each iteration.

Second stage: Another swarm performs an intensification of the promising regions of the search space. For that purpose, its initial position is generated using the generation scheme of ACO, which is described in Section 4. For this generation scheme, the algorithm uses the pheromone matrix obtained in the first stage.

It can be concluded that PSO-M uses the memory of ACO for storing the historical behavior of the agents of PSO during the first stage. In the second stage, the algorithm uses this memory to direct the search of another swarm.

    5.1. Description of the algorithm

This part describes the implementation of PSO-M.

In the first stage, it applies PSO, i.e. a swarm of $Z_1$ agents explores the search space $D$ following the structure of PSO. The new position $X_l^z \in \mathbb{R}^n$ and the new velocity $V_l^z$ of each agent $z$, with $z = 1, 2, \ldots, Z_1$, are updated at iteration $l$ following Eqs. (7) and (9), respectively. The values of the PSO parameters are based on the presented study of their influence on the robust and sensitive diagnosis. Following the idea of ACO for the continuous case, see Section 4, we divide the permissible interval of each variable into $k$ possible values $x_{nk}$, and a pheromone matrix $F$, with dimensions $n \times k$, is generated and updated at each iteration.

In order to update the pheromone matrix $F$, we propose the following strategy:

On each iteration $l$, each component of the vector $X_{gbest}$ is identified with only one of the $k$ discrete values assigned to the variable that corresponds with that $n$th component. This connection generates the vector $X_{gbest}^d$. We define a vectorial function $G: \mathbb{R}^n \to \mathbb{R}^n$. Let $x_n^{gbest}$ and $x_n^{gbest_d}$ be the $n$th components of the vectors $X_{gbest}$ and $X_{gbest}^d$ respectively; then it is established that

$$G_n(X_{gbest}) = x_n^{gbest_d} = x_{nm} \qquad (19)$$

where

$$m:\; |x_n^{gbest} - x_{nm}| = \min_{\bar m} |x_n^{gbest} - x_{n\bar m}| \quad \text{with } \bar m = 1, 2, \ldots, k \qquad (20)$$

With the new vector $G(X_{gbest}) = X_{gbest}^d$, the matrix $F$ is updated as in ACO, see Eq. (14).

The second stage considers a swarm of $Z_2$ ($Z_2 < Z_1$) agents that will perform an intensification of the promising regions of the search space $D$. For that purpose, the algorithm generates an initial swarm using the information stored in the pheromone matrix $F$ from the first stage. The mechanism for generating the initial swarm is the same as described in Eqs. (16)–(18).

    The pseudo-code for the PSO-M method is given in Fig. 3.
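A minimal sketch of the key PSO-M ingredient is given below: on every first-stage iteration, X_gbest is snapped to the discretized grid (Eqs. (19) and (20)) and the pheromone matrix is reinforced as in Eq. (14). The function name and the default factors are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def update_pheromone_from_gbest(F, grid, x_gbest, c_evap=0.1, c_inc=0.5):
    """First stage of PSO-M: map X_gbest onto the discrete grid and update F.

    F       : pheromone matrix, shape (n, k)
    grid    : discrete values of each variable, shape (n, k)
    x_gbest : current global best position of the PSO swarm, shape (n,)
    """
    delta = np.zeros_like(F)
    for i in range(F.shape[0]):
        m = int(np.argmin(np.abs(grid[i] - x_gbest[i])))   # nearest discrete value, Eq. (20)
        delta[i, m] = 1.0                                   # component of G(X_gbest), Eq. (19)
    return (1.0 - c_evap) * F + delta * c_inc * F           # pheromone update, Eq. (14)
```

In the second stage, a smaller swarm (Z2 < Z1) would be initialized with the ACO generation rule of Eqs. (16)–(18) applied to the matrix returned here, and then evolved with the usual PSO updates.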

    6. Benchmark DC motor

This section describes the main characteristics of the DC Motor control system DR300 (Ding, 2008). This system has been widely used for studying and testing new FDI methods due to its similarity with high speed industrial control systems (Ding, 2008).

The system is formed by a permanent magnet DC motor, which is coupled to a DC generator. The main function of this generator is to simulate the effect of a fault that results when a load torque is applied to the axis of the motor. The speed is measured by a tachometer that feeds the signal to a PI (proportional-integral) speed controller. Fig. 4 shows the block diagram of the DC Motor control system AMIRA DR300.

The voltage UT (V) is proportional to the rotational speed of the motor's axis W (rad/s). UT is compared with Uref (V) in order to use the error for computing the control signal UC (V) of the PI speed controller. The AMIRA DR300 system also includes an internal control loop for the armature current IA. The controller computes the motor armature voltage UA (V) as a function of the reference that is obtained by means of the gain K1 (V/A) and the output IA (A).

    Fig. 3. Pseudo-code for PSO-M algorithm.

    Fig. 4. Block diagram of the DC Motor control system AMIRA DR300.



    6.1. Mathematical model

For this study, the internal current loop, which is the process to be controlled, is treated as a single block. The block diagram of the closed loop is formed by the process and the PI controller. The parameters of the laboratory DC motor DR300 are reported in Table 3.1 of Ding (2008).

This analysis considers that the system can be affected by three additive faults fu, fp and fy. fu represents a fault in the actuator and is modeled as a deviation of the control signal; fp represents a fault in the process itself, due to a load torque applied to the axis of the motor; and fy represents a fault in the measurement of the motor speed.

The dynamics of the control system in open loop is described in the frequency domain by

$$U_T(s) = G_{yu}(s)\,U_C(s) + G_{yf_p}(s)\,f_p(s) \qquad (21)$$

$$G_{yu}(s) = \frac{8.75}{(1 + 1.225s)(1 + 0.03s)(1 + 0.005s)} \qquad (22)$$

$$G_{yf_p}(s) = \frac{31.07}{s\,(1 + 0.005s)} \qquad (23)$$

where $U_T$ is the controlled variable, $U_C$ is the control signal, $G_{yu}(s)$ represents the dynamics of the process in open loop and $G_{yf_p}(s)$ is the transfer function of the fault $f_p$ (Ding, 2008).

The transfer function of the PI speed controller is

$$G_c(s) = \frac{U_C(s)}{E(s)} = 1.96 + \frac{1.6}{s} \qquad (24)$$

$$E(s) = U_{ref}(s) - U_T(s) \qquad (25)$$

where $E(s)$ denotes the error signal.

Considering the other faults that may affect the system, the equation that describes it by means of the closed loop transfer functions is

$$U_T(s) = G_{yw}(s)\,U_{ref}(s) + G_{yf_u}(s)\,f_u(s) + G_{yf_p}(s)\,f_p(s) + G_{yf_y}(s)\,f_y(s) \qquad (26)$$

For the present study, it was considered that the faults are time invariant and under the following restrictions:

$$f_u, f_y \in \mathbb{R}:\; -1\,\text{V} \le f_u, f_y \le 1\,\text{V}; \qquad f_p \in \mathbb{R}:\; 0\,\text{Nm} \le f_p \le 1\,\text{Nm} \qquad (27)$$
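For orientation, the closed loop of Eqs. (21)–(26) can be assembled, for example, with the python-control package as sketched below. The numerical reconstruction of Gc(s) as 1.96 + 1.6/s follows the PI form stated above and should be treated as an assumption, as should the package choice; this is not the MATLAB implementation used in the paper.

```python
import numpy as np
import control

pmul = np.polymul   # polynomial product helper (coefficients in descending powers of s)

# Gyu(s) = 8.75 / ((1 + 1.225 s)(1 + 0.03 s)(1 + 0.005 s)), Eq. (22)
G_yu = control.tf([8.75], pmul(pmul([1.225, 1.0], [0.03, 1.0]), [0.005, 1.0]))

# Gyfp(s) = 31.07 / (s (1 + 0.005 s)), Eq. (23)
G_yfp = control.tf([31.07], pmul([0.005, 1.0], [1.0, 0.0]))

# PI speed controller Gc(s) = 1.96 + 1.6/s, Eq. (24)
G_c = control.tf([1.96, 1.6], [1.0, 0.0])

# Closed-loop channels of Eq. (26): reference -> UT and process fault -> UT
G_yw = control.feedback(G_c * G_yu, 1)
G_fp = G_yfp / (1 + G_c * G_yu)

# Response to the 15 V reference (3000 rpm) with a constant process fault fp = 0.5 Nm
t = np.linspace(0, 10, 1001)
_, y_ref = control.step_response(15 * G_yw, t)
_, y_fp = control.step_response(0.5 * G_fp, t)
u_T = y_ref + y_fp   # superposition of the two LTI channels, no noise added
```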

    6.2. Simulation of the benchmark DC Motor

Simulations of the speed control system in closed loop were carried out. In all test cases, the system was affected by noise of up to 2% or 8%. The addition of noise is aimed at simulating more realistic conditions; the noise affecting the system is one of the recognized causes of a wrong diagnosis and leads to the necessity of robust FDI methods. All the implementations were made in MATLAB R2008a. The reference speed was 3000 rpm, which corresponds to 15 V.

The direct estimation of the faults allowed the system to be diagnosed. These estimations can be obtained by the solution of the minimization problem described by Eq. (6).

In the analyzed application, $y_t(f)$ is the measurement of the speed at different time instants $t$, and $\hat y_t(\hat f)$ is the speed computed from the model at the same time instants.

    7. Experimental methodology

Three aspects were considered: robustness, sensitivity and computational cost. This allows analyzing the merits of the diagnosis based on fault estimation with PSO, ACO and PSO-M. The influence of some parameters from PSO and ACO on the characteristics of the diagnosis was also analyzed.

With this goal in mind, several faulty situations were considered. The cases can represent single fault (only one fault affecting the system), multiple faults (more than one fault affecting the system) or incipient fault conditions (faults of low magnitude). The experiments were divided into three parts:

1. General performance: for the general analysis of the diagnosis via PSO and ACO. This part includes multiple fault situations. The output of the system is corrupted with up to 2% level noise. The faulty situations are shown in Table 1.

2. Robust performance: for the analysis of robustness. The same situations from the first part apply, but now with a noise level of up to 8%.

3. Sensitive performance: for the sensitivity analysis. The cases include single and incipient faults, see Cases 6–8 from Table 2, as well as multiple and incipient faults, see Case 5 from Table 2. All the measurements are corrupted with a noise level of up to 8%.

Table 1
Faulty situations for the first and second parts of the numerical experiments.

Case     fu     fy     fp
Case 1   0.87   0.12   0.53
Case 2   0.27   0.96   0
Case 3   0.63   0      0.29
Case 4   0      0.47   0.86

Table 2
Faulty situations for the third part of the numerical experiments.

Case     fu     fy     fp
Case 5   0.08   0.09   0.2
Case 6   0.15   0      0
Case 7   0      0.1    0
Case 8   0      0      0.12

Table 3
Variants of PSO.

Alg     ω                                           c1    c2    Z    ω_min   ω_max
PSOB    1                                           2     2     30   –       –
PSOI1   ω_max − ((ω_max − ω_min)/Itr_max)·l         2     2     30   0.4     0.9
PSOI2   ω_max − ((ω_max − ω_min)/Itr_max)·l         3     1     30   0.4     0.9
PSOI3   ω_max − ((ω_max − ω_min)/Itr_max)·l         3.5   0.5   30   0.4     0.9

Table 4
Variants of ACO.

Alg      k     q0     Z
ACO1:1   63    0.15   30
ACO1:2   63    0.55   30
ACO1:3   63    0.85   30
ACO2:1   127   0.15   30
ACO2:2   127   0.55   30
ACO2:3   127   0.85   30



Different values for some parameters of PSO and ACO were considered, and the General, Robust and Sensitive performance of the resulting variants of PSO and ACO were analyzed. For each faulty situation, 30 runs of each algorithm were made. The abbreviation $F(\hat f)$ is used for the mean value of the objective function, and Eval for the arithmetic average of the minimum number of objective function evaluations required to reach the final value of the objective function. The analysis of the computational effort of the algorithms was based on the number of evaluations of the objective function.

Based on this study, the best set of parameters for ACO and PSO, respectively, was selected. For that selection the Sign test was used, which is an easy way to compare the overall performance of two algorithms (Derrac et al., 2011).

A comparison between the best variants of ACO and PSO was made using the Wilcoxon signed ranks test. This is a simple and safe nonparametric test for pairwise statistical comparisons (Derrac et al., 2011). The statistic T was computed and compared with the value of the Wilcoxon distribution for Num degrees of freedom (critical value of W), where Num is the number of cases for which the performance of the algorithms is compared (Derrac et al., 2011). The Wilcoxon signed ranks test was also used for comparing the best variant of PSO against PSO-M.
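These pairwise comparisons can be reproduced with standard statistical tools. The sketch below (Python with SciPy, assumed available) applies a Sign test, as a binomial test on the win counts, and the Wilcoxon signed ranks test to the per-case mean objective values of PSOI1 and ACO1:1 reported later in Table 8; the use of scipy.stats is an illustration, not the exact procedure followed in the paper.

```python
import numpy as np
from scipy import stats

# Mean F(f_hat) per case (Cases 1-8) for PSOI1 and ACO1:1, taken from Table 8.
psoi1 = np.array([0.7018, 9.1491, 2.1905, 0.3261, 2.8132, 4.3135, 3.7786, 3.1399])
aco11 = np.array([1.4702, 9.5828, 2.4367, 1.0940, 3.3207, 4.9194, 4.3432, 3.6654])

# Sign test: number of cases won by PSOI1 (smaller objective), tested against p = 0.5
wins = int(np.sum(psoi1 < aco11))
num = int(np.sum(psoi1 != aco11))
sign_p = stats.binomtest(wins, num, p=0.5).pvalue

# Wilcoxon signed ranks test on the paired results; the statistic is T = min{R+, R-}
T, wilcoxon_p = stats.wilcoxon(psoi1, aco11)

print(f"Sign test: {wins}/{num} wins for PSOI1, p = {sign_p:.4f}")
print(f"Wilcoxon: T = {T:.0f}, p = {wilcoxon_p:.4f}")   # T = 0, consistent with Table 7
```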

    8. Experiments

    8.1. Implementation of PSO

Two variants of PSO were considered: basic (canonical) PSO, PSOB, and PSO with Inertial Weight, PSOI; see the algorithm in Fig. 1. Table 3 shows the values of the parameters of the algorithm in each variant. These variants permit analyzing the influence of the parameters ω, c1 and c2 on the diagnosis. These parameters have a great influence on the quality of the solution and on the convergence (Becceneri et al., 2006). The selected values for c1 and c2 follow the recommendations from Kennedy (1998), Carlisle and Dozier (2001) and Kameyama (2009).

In all the experiments Z = 30 was selected, following the idea of taking Z proportional to dim(D). The values of the coefficients c1, c2 and ω permit establishing the balance between the intensification and the diversification of the search. In Table 3 the notation l represents the number of the current iteration, yielding a reduction of the inertial weight along the iterative procedure.

    8.2. Implementation of ACO

The variants of ACO were based on different values for the parameters q0 and k. The parameter q0 permits establishing the level of randomness in the selection of the discrete value of each variable (Silva-Neto and Becceneri, 2009), determining the trend of the search. The values q0 = 0.15, q0 = 0.55 and q0 = 0.85 indicate a greater diversification, a balance between diversification and intensification, and a greater intensification of the search, respectively. The parameter k determines the size of the search space. With k = 127 the size of the search is doubled compared with k = 63. The six variants considered in this paper are based on the algorithm in Fig. 2. Table 4 shows the values of the parameters of the algorithm in each variant. The value Z = 30 was used following the same criterion as in PSO.

    8.3. Implementation of PSO-M

The variant PSO-M was designed in two stages, see the algorithm in Fig. 3.

First stage: Variant PSOI1 (c1 = 2 and c2 = 2) with Z1 = 45, Itrmax1 = 15. The pheromone matrix has dimensions 3 × 63.

Second stage: Variant PSOI1 (c1 = 2 and c2 = 2) with Z2 = 20.

    8.4. Stopping criteria

The implemented criteria for all variants of PSO and ACO are:

1. Criterion 1: maximum number of iterations, Itrmax = 100.
2. Criterion 2: maximum number of iterations for which the best value of the objective function remains constant, Itrcte = 10, and minimum value of the objective function, $F(\hat f) < 0.01$.

For the first stage of PSO-M only stopping criterion 1 was considered. In the second stage of PSO-M all the stopping criteria were considered, with values Itrmax2 = 25, Itrcte = 10 and $F(\hat f) < 0.01$.

    9. Results

    9.1. Results of the diagnosis with PSO

9.1.1. General performance
In this part the variants PSOB and PSOI1 were applied to solve the optimization problem given in Eq. (6).

In Fig. 5 it is shown that both variants PSOB and PSOI1 detect the fault, but PSOI1 is more precise while requiring a greater number of objective function evaluations.

In order to determine the cause of this behavior, Fig. 6 shows the best value of the objective function versus the iterations for PSOB and PSOI1, respectively. The figures demonstrate that the greater number of evaluations of the objective function in the variant PSOI1 is a consequence of its capability for obtaining better estimations. This is related to the fact that the algorithm decreases the parameter ω as a function of the number of iterations, allowing more intensification around the better solutions.

    Fig. 5. Comparison between the performance of PSOB and PSOI1 when diagnosing the faulty situations in Table 1, up to 2% level noise.


The results of this part indicate that PSOI1 permits a better diagnosis than PSOB. Thus, only the PSOI variants are kept in the subsequent studies.

9.1.2. Robust performance
In this part the variants PSOI1, PSOI2 and PSOI3 were applied.

In Fig. 7 it is shown that there are no evident differences between the fault estimations provided by PSOI1, PSOI2 and PSOI3. This indicates that the differences between the values of the parameters c1 and c2 do not significantly affect the robustness. On the other hand, there are differences in the number of objective function evaluations: PSOI3 shows better results on this point.

9.1.3. Sensitive performance
In this part the objective is the sensitivity analysis of the proposal for FDI, as well as of the influence of some parameters from PSO on the quality of the diagnosis; see the description of the experiments in Section 7.


Fig. 6. Best value of F(f̂) obtained by PSOB and PSOI1 at each iteration for Case 3 in Table 1, up to 2% level noise. (a) PSOB and (b) PSOI1.

    Fig. 7. Comparison between the performance of PSOI1, PSOI2 and PSOI3 when diagnosing the faulty situations in Table 1, up to 8% level noise.

    Fig. 8. Comparison between the performance of PSOI1, PSOI2 and PSOI3 when diagnosing the faulty situations in Table 2, up to 8% level noise


Fig. 9. Comparison of the exploration of the search space by the variants PSOI1 and PSOI3. (a) PSOI1 and (b) PSOI3.



In Fig. 8 it is shown that the worst estimations were exhibited by PSOI3. The performance of PSOI1 and PSOI2 was quite similar regarding the quality of the estimations. This indicates that a higher diversification of the search space is important for a sensitive diagnosis.

In order to analyze the effect of diversification on sensitivity, the search of PSOI1 and PSOI3 was compared, see Fig. 9.

In Fig. 9 it is shown that PSOI1 performs a greater diversification than PSOI3. Based on the better estimations of PSOI1 over PSOI3, and based on Fig. 9, it is possible to conclude that the sensitivity is improved with a greater diversification of the search space. On the other hand, sensitivity does not necessarily imply a higher computational cost.

Taking into account all these results, it was also concluded that the better variants for obtaining a sensitive diagnosis are PSOI1 and PSOI2. Based on the Sign test, the variant PSOI1 was selected as the best variant with a significance level α = 0.05; see the results of the test in Table 5.

    9.2. Results of the diagnosis with ACO

9.2.1. General performance
In this part the variants ACO1:1 and ACO2:1 were applied.

In Fig. 10 the performance of ACO1:1 and ACO2:1 is shown. The results indicate that both variants detect the faults. The variant ACO1:1 shows the best performance for Case 1. For Case 2, ACO2:1 is more efficient than ACO1:1; for Cases 3 and 4 both variants are similar.

In Fig. 11 a comparison between ACO1:1 and ACO2:1 is shown. This time the figures show the evolution of the best value of the objective function obtained at each iteration. Taking into account these results, the conclusion is that the greater search space of ACO2:1 with a low value of the parameter q0 = 0.15 produces greater variations in the value of the objective function than ACO1:1.

9.2.2. Robust performance
Fig. 12 shows the performance of the six variants of ACO from Table 4. The results fluctuated within a wide range for some of these variants. Therefore, the variants ACO1:1 and ACO2:1 were selected. The values of the parameters of ACO1:1 and ACO2:1 lead to a robust variant for the case of study, while the others do not. The variants ACO1:1 and ACO2:1 give the most accurate estimations. Thus, it was concluded that the higher diversification obtained with the parameter q0 = 0.15 has a good influence on the robustness of the diagnosis.

9.2.3. Sensitivity performance
In Fig. 13 the performance of the six variants of ACO from Table 4 is shown. The variants ACO1:1 and ACO2:1 give the most accurate estimations, which coincides with the best performing algorithms in the robustness analysis. Thus, the conclusion is that the higher diversification obtained with the parameter q0 = 0.15 also has a good influence on the sensitivity of the diagnosis.

Considering all the results, ACO1:1 and ACO2:1 were selected as the better variants. Based on the Sign test, ACO1:1 was selected as the best variant with a significance level α = 0.05; see Table 6 for the results of the test.

Table 5
Sign test results: PSOI1 versus PSOI2.

Comparison        Criterion   PSOI1 wins   PSOI1 lost   Num   Critical value   α
PSOI1 vs PSOI2    F(f̂)            7            1          8         7          0.05

    Fig. 10. Comparison between the performance of ACO1:1 and ACO2:1 when diagnosing the faulty situations in Table 1, up to 2% level noise.


Fig. 11. Comparison between the best value of F(f̂) obtained by ACO1:1 and ACO2:1, up to each iteration, when diagnosing Case 3 in Table 1, up to 2% level noise. (a) ACO1:1 and (b) ACO2:1.


9.3. Comparison between PSO and ACO

After the previous study, PSOI1 and ACO1:1 were chosen as the best variants of PSO and ACO, respectively. Table 8 shows the comparison between them. The faulty situations are the same as in the Robust Performance and Sensitive Performance analyses.

The results in Table 8 point out that PSOI1 gave absolutely better fault estimations than ACO1:1 in six of the eight faulty situations. In the other two situations, PSOI1 was better than ACO1:1 in the estimation of two of the three faults. In Fig. 14 it is shown that the number of objective function evaluations needed by ACO1:1 was smaller than that needed by PSOI1, while the value of the objective function was always smaller in the diagnosis with PSOI1.

Table 7 shows the results of the application of the Wilcoxon test. In the first line, R+ represents the sum of the ranks for which PSOI1 outperformed ACO1:1, taking as criterion the value of the objective function. In the second line, R+ represents the sum of the ranks for which ACO1:1 outperformed PSOI1, taking as criterion the number of evaluations of the objective function.

From the results in Table 7, it was concluded that PSOI1 shows a significant improvement over the estimations obtained by ACO1:1, with a significance level α = 0.01. On the other hand, ACO1:1 permits obtaining acceptable estimations with a computational cost significantly better than PSOI1, with α = 0.01. This issue is important for online diagnosis.

Taking into account the above results, the new algorithm PSO-M is proposed. The objective of PSO-M is to keep the quality of the estimations provided by PSOI1, which means a robust and sensitive diagnosis, while reducing its number of objective function evaluations in order to improve its performance in online diagnosis.

    9.4. Validation of PSO-M

The comparison between the results obtained by PSOI1, ACO1:1 and PSO-M is shown in Table 8. This table also shows that PSO-M obtained more accurate fault estimations in six of the eight cases. The algorithm PSO-M also reduced the number of function evaluations needed by PSOI1 in all cases.

In order to evaluate the performance of PSO-M, the Wilcoxon test for the comparison between PSOI1 and PSO-M was applied. Table 9 shows the test results. R+ represents the sum of the ranks for which PSO-M outperformed PSOI1 in the first and second lines. The value of the objective function and the number of evaluations of the objective function, respectively, were chosen as criteria for the comparison.

The results in the first line of Table 9 indicate that the null hypothesis cannot be rejected, which means that there are no differences between the fault estimations obtained with PSO-M and with PSOI1. The results of the second line indicate that PSO-M outperforms PSOI1, with a significance level α = 0.01, taking as criterion the number of evaluations of the objective function, which implies a lower computational cost.

The comparison between PSOI1, ACO1:1 and PSO-M is also shown in Fig. 15. The algorithm PSO-M obtained estimations similar to PSOI1, as the Wilcoxon signed ranks test showed, and it also permitted a reduction of the computational time. PSO-M thus alleviates the disadvantages of the diagnosis via PSO and ACO.

    10. Comparison between this proposal and other FDI methods

A comparison with other FDI methods is included in this section. The main model based approaches for FDI are Parity Space and Diagnostic Observers. Recent techniques for model based FDI are mostly variations of them and they maintain the principal limitations of the original methods taken into account in the comparison (Odgaard and Matajib, 2008; Li and Dahhou, 2008; Narasimhana et al., 2008; Fliess et al., 2004). Therefore, Parity Space and Diagnostic Observers will be used for the comparison with the proposal of this paper.

Both approaches generate residuals, which are used to form appropriate decision functions.

The idea is to generate a structured set of residuals such that at least one measured quantity has no impact on a specific residual. In case of a faulty measurement, the decoupled residual remains small, while the others increase their value. This feature helps to locate the fault (Frank, 1990; Höfling and Isermann, 1996).

Both approaches are formulated based on the state representation but, in the linear case, they can be connected with transfer function models (Ding, 2008).

    10.1. Parity space approach

The Parity Space approach for FDI was introduced in the early 1980s. Different forms of the Parity Space approach have been developed since then. In this paper the original one is considered, which is based on the assumption of a state space model of a linear discrete-time system (Chow and Willsky, 1984) of the form

$$x(k+1) = A\,x(k) + B\,u(k) + E_f\,f(k)$$
$$y(k) = C\,x(k) + F_f\,f(k) \qquad (28)$$

where $x \in \mathbb{R}^n$ is the vector of state variables, and $u \in \mathbb{R}^m$ and $y \in \mathbb{R}^p$ are the measurable input and output signals, respectively. The matrices A, B, C, $E_f$ and $F_f$ are known and of appropriate dimensions.

    Fig. 12. Comparison between the performance obtained by six variants of ACO from Table 4 for the faulty situations from Table 1, up to 8% level noise.

Table 6
Sign test results: ACO1:1 versus ACO2:1.

Comparison          Criterion   ACO1:1 wins   ACO1:1 lost   Num   Critical value   α
ACO1:1 vs ACO2:1    F(f̂)            10             2         12         9          0.05


It is further assumed that (C, A) is observable and rank C = p. A continuous-time parity space can be generated in a way similar to the design of the discrete one (Höfling, 1993).

Using the notation

$$y_s(k) = \begin{bmatrix} y(k-s) \\ y(k-s+1) \\ \vdots \\ y(k) \end{bmatrix}, \qquad (29)$$

$$u_s(k) = \begin{bmatrix} u(k-s) \\ u(k-s+1) \\ \vdots \\ u(k) \end{bmatrix}, \qquad (30)$$

$$f_s(k) = \begin{bmatrix} f(k-s) \\ f(k-s+1) \\ \vdots \\ f(k) \end{bmatrix}, \qquad (31)$$

$$H_{u,s} = \begin{bmatrix} D & 0 & \cdots & 0 \\ CB & D & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ CA^{s-1}B & \cdots & CB & D \end{bmatrix} \qquad (32)$$

and

$$H_{f,s} = \begin{bmatrix} F_f & 0 & \cdots & 0 \\ CE_f & F_f & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ CA^{s-1}E_f & \cdots & CE_f & F_f \end{bmatrix} \qquad (33)$$

The matrix $H_{o,s}$ is called the extended observability matrix of the system. It is defined as (Ding, 2008)

$$H_{o,s} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{s} \end{bmatrix} \qquad (34)$$

    Fig. 14. Comparison between the performance obtained by ACO1:1 and PSOI1 for the faulty situations from Tables 1 and 2, up to 8% level noise.

Table 7
Wilcoxon signed ranks test results: PSOI1 versus ACO1:1.

Comparison          Criterion   R+   R−   T = min{R+, R−}   Critical value of W   α
PSOI1 vs ACO1:1     F(f̂)        36    0          0                   0            0.01
ACO1:1 vs PSOI1     Eval         36    0          0                   0            0.01

    Fig. 13. Comparison between the performance obtained by six variants of ACO from Table 4 when diagnosing the faulty situations in Table 2, up to 8% level noise.

Table 8
Results of the comparison between PSOI1, ACO1:1 and PSO-M, up to 8% level noise.

Case                       Variant    fu        fy        fp       F(f̂)     Eval

1 (0.87, 0.2, 0.53)        ACO1:1     0.8508    0.0794    0.5429   1.4702    510
                           PSOI1      0.8778    0.0985    0.5402   0.7018    1491
                           PSO-M      0.8635    0.1496    0.5526   0.5966    1037

2 (0.3, 0.96, 0)           ACO1:1     0.2349    0.7556    0.0857   9.5828    591
                           PSOI1      0.2496    0.9944    0.01734  9.1491    2244
                           PSO-M      0.2995    0.9587    0.0015   9.0020    1083

3 (0.63, 0, 0.29)          ACO1:1     0.6159    0.0889    0.3270   2.4367    549
                           PSOI1      0.6428    0.0275    0.3040   2.1905    1734
                           PSO-M      0.6412    0.0859    0.3293   2.1881    991

4 (0, 0.47, 0.86)          ACO1:1     0.0063    0.4889    0.8667   1.0940    483
                           PSOI1      0.0010    0.4576    0.8545   0.3261    1761
                           PSO-M      0.0087    0.4614    0.8570   0.3548    947

5 (0.08, 0.09, 0.2)        ACO1:1     0.1397    0.0444    0.1746   3.3207    660
                           PSOI1      0.0775    0.0951    0.2023   2.8132    1716
                           PSO-M      0.0720    0.1126    0.2113   2.8339    1051

6 (0.15, 0, 0)             ACO1:1     0.1714    0.0349    0.0222   4.9194    576
                           PSOI1      0.1633    0.0175    0.0088   4.3135    1941
                           PSO-M      0.1483    0.0247    0.0109   4.3073    963

7 (0, 0.1, 0)              ACO1:1     0.0032    -0.2127   0.0508   4.3432    606
                           PSOI1      0.0103    -0.0884   0.0043   3.7786    1473
                           PSO-M      0.0039    -0.0547   0.0194   3.6969    1077

8 (0, 0, 0.12)             ACO1:1     0.0190    0.0063    0.1175   3.6654    633
                           PSOI1      0.0028    0.0135    0.1261   3.1399    1830
                           PSO-M      0.0118    0.0315    0.1358   3.1353    1109


The parity relation is obtained by

$$y_s(k) = H_{o,s}\,x(k-s) + H_{u,s}\,u_s(k) + H_{f,s}\,f_s(k) \qquad (35)$$

If $s \ge n$, there exists at least one vector $v_s \in \mathbb{R}^{p(s+1)}$ (parity vector) with $v_s \ne 0$ such that $v_s H_{o,s} = 0$. The parity relation based residual generator, in the absence of faults, is constructed as

$$r_s(k) = v_s H_{o,s}\,x(k-s) = v_s\left[ y_s(k) - H_{u,s}\,u_s(k) \right] = 0 \qquad (36)$$

In the presence of faults the residual becomes

$$r_s(k) = v_s H_{o,s}\,x(k-s) + v_s H_{f,s}\,f_s(k), \quad v_s:\; v_s H_{o,s} = 0 \qquad (37)$$

$$r_s(k) = v_s H_{f,s}\,f_s(k) \ne 0 \qquad (38)$$
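A small numerical sketch of this residual generator is given below (Python with NumPy/SciPy). The matrices are illustrative placeholders with D = 0, not the DC motor model of Section 6; the parity vector is taken from the left null space of H_o,s, which exists for s ≥ n.

```python
import numpy as np
from scipy.linalg import null_space

def stacked_matrices(A, B, C, s):
    """H_o,s of Eq. (34) and H_u,s of Eq. (32), assuming D = 0."""
    p, m = C.shape[0], B.shape[1]
    Ho = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
    Hu = np.zeros((p * (s + 1), m * (s + 1)))
    for row in range(1, s + 1):
        for col in range(row):
            blk = C @ np.linalg.matrix_power(A, row - col - 1) @ B
            Hu[row * p:(row + 1) * p, col * m:(col + 1) * m] = blk
    return Ho, Hu

def parity_residual(A, B, C, y_stack, u_stack, s):
    """r_s(k) = v_s (y_s(k) - H_u,s u_s(k)), Eq. (36), with v_s H_o,s = 0."""
    Ho, Hu = stacked_matrices(A, B, C, s)
    vs = null_space(Ho.T)[:, 0]        # a left null vector of H_o,s: the parity vector
    return float(vs @ (y_stack - Hu @ u_stack))
```

In the fault-free, noise-free case the returned value is zero up to numerical precision; a fault drives it to the value of Eq. (38).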

    10.2. Diagnostic observers

The basic idea of the observer approach is to reconstruct the outputs of the system from the measurements, or from subsets of the measurements, with the aid of observers, using the estimation error as the residual for the detection and isolation of the faults (Frank, 1990; Ding, 2008).

In the case of a linear process with the state equations

$$\dot x(t) = A\,x(t) + B\,u(t) + E_f\,f(t)$$
$$y(t) = C\,x(t) + F_f\,f(t) \qquad (39)$$

the state $\hat x$ and output $\hat y$ of a full-order observer are governed by the equations

$$\dot{\hat x} = (A - HC)\,\hat x + B\,u + H\,y$$
$$\hat y = C\,\hat x \qquad (40)$$

where H denotes the feedback gain matrix that has to be chosen properly to achieve the desired performance of the observer.

The relations for the state estimation error, $\epsilon = x - \hat x$, and the output estimation error, $e = y - \hat y$, become

$$\dot\epsilon = (A - HC)\,\epsilon + E_f\,f - H F_f\,f$$
$$e = C\,\epsilon + F_f\,f \qquad (41)$$

where $e$ is used as the residual, $r$, for the purpose of detection and isolation of faults.
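A minimal sketch of such an observer-based residual generator follows (Python with NumPy/SciPy). The observer gain is obtained by pole placement through duality; the integration is a plain explicit Euler step and the pole locations are illustrative assumptions, not the design used in Section 10.4.

```python
import numpy as np
from scipy.signal import place_poles

def design_observer_gain(A, C, poles):
    """Gain H such that A - H C has the prescribed eigenvalues (duality with state feedback)."""
    return place_poles(A.T, C.T, poles).gain_matrix.T

def observer_step(A, B, C, H, x_hat, u, y, dt):
    """One explicit Euler step of Eq. (40); the residual is e = y - C x_hat, Eq. (41)."""
    residual = y - C @ x_hat
    x_hat_next = x_hat + dt * (A @ x_hat + B @ u + H @ residual)
    return x_hat_next, residual
```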

    10.3. Connections between parity space and diagnostic observers

It was demonstrated that there exists a one-to-one mapping between the design parameters of diagnostic observers and parity relation based residual generators. It was also shown that for a given parity relation based residual generator there exists a set of corresponding observer-based residual generators, and it was shown how to calculate the respective parity vector when an observer-based residual generator is provided, and vice versa (Frank, 1996; Ding, 2008).

The parity space based system design is characterized by its simple mathematical handling: it only deals with matrix- and vector-valued operations. In the case of observers, the design is more complex. Due to the connection between the parity space approach and the observer-based approach, a strategy called parity space design, observer-based implementation was developed (Ding, 2008). This strategy makes use of the computational advantage of the parity space approach for the system design (selection of a parity vector or matrix) and then realizes the solution in the observer form to ensure a numerically stable and less demanding online computation.

Fig. 15. Comparison between the performance obtained by ACO1:1, PSOI1 and PSO-M for the faulty situations in Tables 1 and 2, up to 8% level noise.


Fig. 16. Residual obtained with parity space and diagnostic observers; no faults affecting the system and no noise affecting the output. (a) Parity space and (b) diagnostic observers.

Table 9
Wilcoxon signed ranks test results: PSO-M versus PSOI1.

Comparison         Criterion   R+   R−   T = min{R+, R−}   Critical value of W   α
PSO-M vs PSOI1     F(f̂)        27    9          9                   6            0.1
PSO-M vs PSOI1     Eval         36    0          0                   0            0.01


10.4. Results of the comparison between parity space, diagnostic observers and our proposal when diagnosing faults in the DC motor

In the DC motor it is necessary to diagnose three faults with just one output (one sensor). Parity Space and Diagnostic Observers do not permit the diagnosis of additive faults when their number is larger than the number of sensors (Ding, 2008). Therefore, the system cannot be diagnosed with Parity Space or Diagnostic Observers; only the detection of the faults is possible. That is the first advantage of our proposal: it can diagnose the three faults using only one sensor.

Considering that only one fault affects the system allows a comparison with the Parity Space and Diagnostic Observers methods. The fault fu was chosen. In order to determine the parity vector and to design the observer, the model of the DC Motor was transformed into its state space representation. This permits the elements of both approaches to be computed in an easier way. The proposal of this paper, in contrast, can be applied independently of the kind of model describing the system, and no further elements need to be designed or computed.

The parity vector $v_s = [0.1057 \;\; 0.4499 \;\; 0.7162 \;\; 0.5056 \;\; 0.1336]^{T}$ and the matrix $H = [194.1163 \;\; 1.8394 \;\; 0.021 \;\; 1.3084]^{T}$ were computed.

    10.4.1. Robustness analysisIn Fig. 16 are showed the residual obtained by Parity Space and

    Diagnostic Observers when no faults are affecting the system andthe output of the system is not affected by noise. In this case, theresidual is equal zero.

    In Fig. 17, is shown the residual when the system is not affectedby faults but the output is corrupted with a noise level up to 8%.In Fig. 17 (b) a zoom of the residual obtained by DiagnosticObserver is shown. Both approaches, Parity Space and DiagnosticObservers generate residuals different from zero due to the noiseaffecting the output of the system. As a consequence, both

    approaches provide false alarms. This fact is related with the lackof robustness. In order to increase robustness, some thresholdscould be established. In Fig. 17 (c) is shown the results of theestimation of fu in this case, which is next to zero but not equal. Inthis case, it is necessary some thresholds which are representedwith red lines and indicates when the estimation values are withinthe 0.1% of the maximum values allowed to this fault.

The residuals and the result of the fault estimation approach when the system is affected by fu=0.9 at t=50 s are shown in Fig. 18. A zoom of the residual obtained by the Diagnostic Observer is shown in Fig. 19. The residuals exceeded the threshold values at this time in both approaches, which indicates that the system is under a fault fu. Both approaches detected the fault. On the other hand, our proposal also detected the fault and, in addition, provided fu=0.8940 as an estimate of its magnitude, see Fig. 18(c).

10.4.2. Sensitivity analysis

The residuals when the system is affected by an incipient fault fu=0.08 at t=50 s are shown in Fig. 20. Fig. 20(a) shows that the parity space based residual did not exceed the threshold values needed for a robust diagnosis. Thus, for Parity Space, the thresholds that avoid false alarms do not allow detecting incipient faults. On the other hand, Fig. 20(b) shows that the diagnostic observer detected the fault; a zoom of its residual is shown in Fig. 21. Our proposal detected the fault and also provided fu=0.08 as an estimate of its value, with the same thresholds established in order to obtain robustness.

The main disadvantage of the proposal of this paper, when compared with the parity space and diagnostic observer approaches, is the computational time. Once the parity vector is computed, the parity space approach only needs matrix multiplications, while the PSO-M algorithm needs to run many simulations of the model of the system.

Fig. 17. Residual obtained with parity space and diagnostic observers; no faults affecting the system and up to 8% level noise. (a) Parity space, (b) diagnostic observers and (c) fault estimation.


For the cases in Figs. 18 and 20, the PSO-M algorithm took 906 model simulations (174 s) and 1081 model simulations (305 s), respectively. On the other hand, the diagnostic observer took around 16 s to compute the residual. Considering that most industrial processes have large time constants, the processing time required by this proposal does not make its use impracticable.

    11. Conclusions

This study indicates that the application of metaheuristics, in particular PSO and ACO, constitutes a promising methodology for fault diagnosis problems based on direct fault estimation.

The advantages of the proposed approach are: it does not need to divide the FDI task into several steps, such as model parameter estimation, determination of the relationship between model parameters and physical parameters, and determination of the relationship between the latter and the faults, which would require more than one technique; and it permits the direct estimation of the faults, regardless of whether they take place in the actuators, the process or the sensors, which alleviates one of the described limitations of the model based methods for FDI.

After comparing the proposal of this paper with Parity Space and Diagnostic Observers, other advantages were identified: it permits diagnosing the system even when the number of faults is larger than the number of sensors; it allowed obtaining a robust and sensitive diagnosis in a simple way; and it is a general methodology that does not depend on the kind of model used to describe the system, since no further elements need to be designed or computed and only simulations of the model of the system are required.

The study of the influence of the parameters of PSO and ACO permitted analysing the influence of diversification and intensification on the quality of the diagnosis. Manipulating these parameters in order to increase robustness and sensitivity avoids the effort demanded by robust residual generation in other model based FDI methods. The study revealed that an adequate balance between these two tendencies, diversification and intensification, is essential for developing a robust and sensitive diagnosis based on PSO and ACO. The variants that gave the best results for each algorithm were those that performed higher diversification during the search.

A new algorithm, PSO-M, was proposed. It has a simple structure and is easy to implement. The application of the Wilcoxon signed ranks test indicated that PSO-M outperformed PSOI1 with respect to computational cost, while keeping the quality of the estimations.

Fig. 18. Residual obtained with parity space and diagnostic observers, actuator fault affecting the system fu=0.9 and up to 8% level noise. (a) Parity space, (b) diagnostic observers, and (c) fault estimation.

Fig. 19. Zoom of the residual presented in Fig. 18(b).


The variant PSO-M was described without loss of generality, which permits its application to other continuous optimization problems.

    Acknowledgments

The authors acknowledge the support provided by FAPERJ, Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro, CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, and CAPES, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.

    References

Angeline, P.J., 1998. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming (EP98), Lecture Notes in Computer Science, vol. 1447. Springer-Verlag, pp. 601–611.
Becceneri, J., Stephany, S., Campos-Velho, H.F., Silva-Neto, A.J., 2006. Solution of the inverse problem of radiative properties estimation with PSO technique. In: Inverse Problems in Engineering Symposium (IPES), Iowa State University, USA.
Becceneri, J.C., Zinober, A., 2001. Extraction of energy in a nuclear reactor. In: XXXIII Simpósio Brasileiro de Pesquisa Operacional, Campos do Jordão, SP, Brazil.
Beielstein, T., Parsopoulos, K.E., Vrahatis, M.N., 2002. Tuning PSO Parameters Through Sensitivity Analysis. Technical Report, Reihe Computational Intelligence CI 124/02, Collaborative Research Center (SFB 531), Department of Computer Science, University of Dortmund.
Camps-Echevarría, L., Llanes-Santiago, O., Silva-Neto, A.J., 2010. An approach for fault diagnosis based on bio-inspired strategies. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Studies in Computational Intelligence, pp. 53–63.
Carlisle, A., Dozier, G., 2001. An off-the-shelf PSO. In: Proceedings of the Particle Swarm Optimization Workshop, pp. 1–6.
Chen, J., Patton, R.J., 1999. Robust Model-Based Fault Diagnosis for Dynamic Systems. Kluwer Academic Publishers, Dordrecht.

Fig. 20. Residual obtained with parity space and diagnostic observers, actuator fault affecting the system fu=0.08 and up to 8% level noise. (a) Parity space, (b) diagnostic observers, and (c) fault estimation.

Fig. 21. Zoom of the residual presented in Fig. 20(b).


Chow, E.Y., Willsky, A., 1984. Analytical redundancy and the design of robust failure detection systems. IEEE Trans. Autom. Control 29, 603–614.
Clerc, M., Kennedy, J., 2002. The particle swarm – explosion, stability and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6, 58–73.
Derrac, J., García, S., Molina, D., Herrera, F., 2011. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 1, 3–18.
Ding, S.X., 2008. Model-Based Fault Diagnosis Techniques. Springer.
Dorigo, M., Blum, C., 2005. Ant colony optimization theory: a survey. Theor. Comput. Sci. 344, 243–278.
Dorigo, M., Caro, G.D., 1992. The Ant Colony Optimization Meta-Heuristic (Ph.D. thesis). Université Libre de Bruxelles.
Duarte, C., Quiroga, J., 2010. Algoritmo PSO para identificación de parámetros en un motor DC. Rev. Fac. Ing. Univ. Antioquia 55, 116–124.
Eberhart, R.C., Shi, Y.H., 2001. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 84–88.
Fliess, M., Join, C., Mounier, H., 2004. An introduction to nonlinear fault diagnosis with an application to a congested internet router. In: Advances in Communication and Control Networks, Lecture Notes in Control and Information Science. Springer.
Frank, P.M., 1990. Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy – a survey and some new results. Automatica 26, 459–474.
Frank, P.M., 1996. Analytical and qualitative model-based fault diagnosis – a survey and some new results. Eur. J. Control 2, 6–28.
Höfling, T., 1993. Detection of parameter variations by continuous-time parity equations. In: 12th IFAC World Congress, pp. 511–516.
Höfling, T., Isermann, R., 1996. Fault detection based on adaptive parity equations and single-parameter tracking. Control Eng. Pract. 4, 1361–1369.
Isermann, R., 1984. Process fault detection based on modelling and estimation methods – a survey. Automatica 20, 387–404.
Isermann, R., 2005. Model based fault detection and diagnosis. Status and applications. Annu. Rev. Control 29, 71–85.
Kameyama, K., 2009. Particle swarm optimization – a survey. IEICE Trans. Inf. Syst. E92-D, 1354–1361.
Kennedy, J., 1997. The particle swarm: social adaptation of knowledge. In: IEEE International Conference on Evolutionary Computation. IEEE, pp. 303–308.
Kennedy, J., 1998. The behavior of particles. In: Evolutionary Programming VII. Springer, pp. 581–590.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: IEEE International Conference on Neural Networks. IEEE, Perth, Australia, pp. 1942–1948.
Li, Z., Dahhou, B., 2008. A new fault isolation and identification method for nonlinear dynamic systems: application to a fermentation process. Appl. Math. Model. 32, 2806–2830.
Liang, J.J., Qin, A.K., Suganthan, P.N., Baskar, S., 2006. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10, 281–295.
Liu, L., Liu, W., Cartes, D.A., 2008. Particle swarm optimization-based parameter identification applied to permanent magnet synchronous motors. Eng. Appl. Artif. Intell. 21, 1092–1100.
Liu, Q.L.W., 2009. The study of fault diagnosis based on particle swarm optimization algorithm. Comput. Inf. Sci. 2, 87–91.
Metenidin, M.F., Witczak, M., Korbicz, J., 2011. A novel genetic programming approach to nonlinear system modelling: application to the DAMADICS benchmark problem. Eng. Appl. Artif. Intell. 24, 958–967.
Narasimhan, S., Rengaswamy, P.V.R., 2008. New nonlinear residual feedback observer for fault diagnosis in nonlinear systems. Automatica 44, 2222–2229.
Odgaard, P.F., Mataji, B., 2008. Observer-based fault detection and moisture estimating in coal mills. Control Eng. Pract. 16, 909–921.
Patton, R.J., Frank, P.M., Clark, R.N., 2000. Issues of Fault Diagnosis for Dynamic Systems. Springer, London.
Poli, R., 2007. An Analysis of Publications on Particle Swarm Optimisation Applications. Department of Computer Science, University of Essex.
Samanta, B., Nataraj, C., 2009. Use of particle swarm optimization for machinery fault detection. Eng. Appl. Artif. Intell. 22, 308–316.
Shelokar, P., Siarry, P., Jayaraman, V., Kulkarni, B., 2007. Particle swarm and ant colony algorithms hybridized for improved continuous optimization. Appl. Math. Comput. 188, 129–142.
Silva-Neto, A.J., Becceneri, J.C., 2009. Bioinspired computational intelligence techniques – application in inverse radiative transfer problems. Notes in Applied Mathematics. SBMAC, São Carlos.
Simani, S., Fantuzzi, C., Patton, R.J., 2002. Model-Based Fault Diagnosis in Dynamic Systems Using Identification Techniques. Springer.
Simani, S., Patton, R.J., 2008. Fault diagnosis of an industrial gas turbine prototype using a system identification approach. Control Eng. Pract. 16, 769–786.
Socha, K., Dorigo, M., 2008. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 185, 1155–1173.
Souto, R.P., Stephany, S., Becceneri, J.C., Campos Velho, H.F., Silva Neto, A.J., 2005. On the use of the ant colony system for radiative properties estimation. In: 5th International Conference on Inverse Problems in Engineering: Theory and Practice (V ICIPE), Leeds University Press, Leeds, England, pp. 1–10.
Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N., 2002a. A review of process fault detection and diagnosis. Part 1: quantitative model-based methods. Comput. Chem. Eng. 27, 293–311.
Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N., 2002b. A review of process fault detection and diagnosis. Part 2: qualitative models and search strategies. Comput. Chem. Eng. 27, 313–326.
Venkatasubramanian, V., Rengaswamy, R., Yin, K., Kavuri, S.N., 2002c. A review of process fault detection and diagnosis. Part 3: process history based methods. Comput. Chem. Eng. 27, 327–346.
Wang, L., Niu, Q., Fei, M., 2008. A novel quantum ant colony optimization algorithm and its application to fault diagnosis. Trans. Inst. Meas. Control 30, 313–329.
Witczak, M., 2007. Modelling and Estimation Strategies for Fault Diagnosis of Non-Linear Systems: From Analytical to Soft Computing Approaches. Springer.
Yang, E., Xiang, H., Zhang, D.G.Z., 2007. A comparative study of genetic algorithm parameters for the inverse problem-based fault diagnosis of liquid rocket propulsion systems. Int. J. Autom. Comput. 4, 255–261.

