

Expert Systems with Applications 41 (2014) 3460–3476


Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles

0957-4174/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.10.050

Kirupa Ganapathy ⇑, V. Vaidehi, Bhairavi Kannan, Harini Murugan
Department of Information Technology, Madras Institute of Technology, Anna University, Chennai, India

⇑ Corresponding author. E-mail addresses: [email protected] (K. Ganapathy), [email protected] (V. Vaidehi), [email protected] (B. Kannan), [email protected] (H. Murugan).

Keywords: Particle Swarm Optimization; Cloud computing; Remote health monitoring; Dynamic environment; Orthogonality; Dynamic Round Robin Scheduling

Abstract

Cloud computing is an emerging technology which deals with real-world problems that change dynamically. The users of dynamically changing applications in the cloud demand rapid and efficient service at any instant of time. To deal with this, this paper proposes a new modified Particle Swarm Optimization (PSO) algorithm that works efficiently in dynamic environments. The proposed Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles (HPSO-OCC) receives requests in the cloud from various resources, employs multiple-swarm interaction and implements cyclic and orthogonal properties in a hierarchical manner to provide a near-optimal solution. HPSO-OCC is tested and analysed in both static and dynamic environments using seven benchmark optimization functions. The proposed algorithm gives the best solution and outperforms existing PSO algorithms in terms of accuracy and convergence speed in dynamic scenarios. As a case study, HPSO-OCC is implemented in a remote health monitoring application for optimal service scheduling in the cloud. The near-optimal solution from HPSO-OCC and a Dynamic Round Robin Scheduling algorithm is implemented to schedule the services in healthcare.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

An optimization problem refers to the process of minimizing or maximizing the value of an optimization function subject to constraints. It is the process of evaluating the optimization function with selected values of input from a confined set. Evolutionary Computation (EC) is an optimization method with stochastic behavior. ECs work on a set of inputs called a population with an iterative approach. Swarm Intelligence (SI) (Poli, Kennedy, & Blackwell, 2007) is the process of achieving desirable results by a swarm as a whole. This is facilitated by local interactions within the swarm and communication with the environment. Members of the swarm learn from others by synergy and move towards the goal, thereby exhibiting a social behavior.

Particle Swarm Optimization (PSO) is one of the Evolutionary Algorithms (EAs) and has become an active branch of SI, simulating swarm behaviors like fish schooling, wasp swarming and bird flocking. PSO differs from other evolutionary optimization algorithms in not employing conventional operators like crossover and mutation. PSO solves the optimization problem by improving candidate solutions iteration by iteration. The solutions are represented as particles, the collection of which constitutes the swarm. The particles have distinct properties like velocity and position that define their state in the search space. These particles move in steps as defined by their velocity, which is determined by their local best known position and the global best position of the swarm. This way, the swarm is expected to converge at an optimum point. In order to avoid premature convergence, the particles may use the best position of a sub-swarm that is formed within the neighborhood of the particle. Particle neighborhood depends on the scheme of the swarm's population topology.

PSO has proved useful in copious applications deployed in dynamic settings. Applications are said to be dynamic when their environment undergoes continuous changes such as the advent of a new process, machine failure, unanticipated downtime, network failure and unstable network connectivity. Using classic PSO in a dynamic environment is problematic: the particles converge to local or global optima over some successive iterations, and the arrival of a new particle forces the converged particles to start from scratch to track the new optima. Classic PSO exhibits a major drawback due to loss of diversity. The probability of premature convergence is high when the dimensionality of the search space is large. Another reason for loss of diversity is that the particles move to a single point determined by the gbest and pbest; this point is not guaranteed to be a local optimum. These drawbacks of classic PSO are present in static environments and become even more severe in dynamic problems. Therefore, classic PSO needs to be modified to deal with loss of diversity in real-world problems. Several mechanisms have been adopted to improve the performance of classic PSO in dynamic environments. These mechanisms include dynamic parameter tuning, dynamic network topology, hybridizing PSO with genetic algorithms, multi-swarm approaches, multi-dimensionality, dynamically changing neighborhood structures, etc. However, a number of problems in such Dynamic Optimization Problems (DOPs) remain unsolved.

This paper suggests a novel two-level PSO optimization technique as a solution to DOPs, where a dynamic change in one level is overcome by the other. The contributions of this paper towards improving the adaptation of PSO to dynamic environments are described as follows.

1. Multiple swarms are constructed in the form of circles based on the particles' similarities.

2. The particles share information within the circles, which undergo conventional PSO for convergence.

3. Selection of a similarly converging circle for information sharing between the swarms employs a special orthogonal array analysis.

4. Hierarchical PSO (second-level PSO) is employed, where the velocity of the gbest of the selected ortho-cyclic circle is used to update the velocity of the competing circle's particles and refine their positions.

A brief discussion of the proposed Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles (HPSO-OCC) follows. The algorithm aims to improve the performance of dynamic PSO by offering an accurate best solution and faster convergence speed. In the first level, the swarms are grouped based on similar properties and allowed to undergo conventional PSO. In a topological structure, every swarm discovers the similarly converging neighbor swarms using the cyclic property and selects the best neighbor swarm using orthogonal analysis. The information from the personal best fitness (pbest) of the thus discovered ortho-cyclic circle is used along with the pbest of the circle and the gbest of all circles to define the velocity and refine the position. Second-level PSO is performed in the current swarm with the updated velocity equation.

The HPSO-OCC algorithm is suitable for numerous applications such as surveillance, military, habitat monitoring and sensor selection. As a case study, the proposed HPSO-OCC with a Dynamic Round Robin scheduler is implemented in a remote health monitoring application for optimal service scheduling. Physiological sensors worn on the patient's body send vital sign measurements to the remote cloud server. Web-enabled sensor data streams are categorised into normal and abnormal classes. Based on the abnormality percentage, the particles enter the swarms, and HPSO-OCC identifies the optimal patients (particles) to be served with minimum waiting time and schedules them using the dynamic round robin scheduler.

2. Particle Swarm Optimization Preliminaries

PSO employs population-based stochastic search (Engelbrecht, 2006; Kennedy & Mendes, 2002) with an iterative learning approach, exploring optima in a multidimensional search space. The algorithm is initialized with a population of particles, where each particle betokens a plausible solution in a d-dimensional space, which is found by utilizing the personal memories of the particles and shared information within a specific neighborhood. Each particle P_i has position vector x_i = [x_i1, x_i2, ..., x_id] and velocity vector v_i = [v_i1, v_i2, ..., v_id], where x_id ∈ [−100, 100]; i = 1, 2, ..., N (total number of particles); and d = 1, 2, ..., d (Kennedy & Eberhart, 1997). The movements of the particles are guided by their own best known position in the search space, called pbest (p_i), as well as the entire swarm's best known position, called gbest (g). This is called Global PSO (GPSO). A second version of PSO exists, called Local PSO (LPSO), that considers the best known position in a particle's neighborhood (lbest) instead of gbest. The neighborhood is defined based on the topology used, as shown in Fig. 1. The vectors v_i and x_i are randomly initialized and are revised based on Eqs. (1) and (2).

v_id ← ω·v_id + φ_p·r_p·(p_id − x_id) + φ_g·r_g·(g_d − x_id)    (1)

x_id ← x_id + v_id    (2)

The parameters ω, φ_p and φ_g, which are selected by the practitioner, regulate the behavior and efficacy of the PSO algorithm. Coefficients φ_p and φ_g are cognitive and social acceleration factors (Zhan, Zhang, Li, & Chung, 2009) that can take values in the range [0, 4] (1.49 and 2.00 are the most commonly used values), where φ_p is the personal accelerator and φ_g is the global accelerator. A high value of the inertia weight ω (Shi & Eberhart, 1998) advocates exploration, and a small value patronizes exploitation. ω is often linearly decreased from a high value (0.90) to a low value (0.40) along the generations of PSO. The search space of the flying particles is limited to the range [x_min, x_max]. Their velocities are regulated within a reasonable limit, which is taken care of by the parameter v_max, a positive value that determines the maximum step one particle can take during one iteration; it is generally set to the value x_max − x_min. The fitness of each particle is calculated in every generation using a fitness function f. The functioning of classical PSO is shown in Algorithm 1.
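The linearly decreasing inertia weight described above can be computed per generation as follows; the endpoint values 0.90 and 0.40 are the commonly used values mentioned in the text, and the function name is illustrative.

```python
def inertia_weight(t, t_max, w_start=0.90, w_end=0.40):
    """Linearly decrease the inertia weight from w_start to w_end
    over t_max generations (t = current generation index)."""
    return w_start - (w_start - w_end) * (t / t_max)
```

At t = 0 this yields 0.90 and at t = t_max it yields 0.40, so exploration dominates the early generations and exploitation dominates the late ones.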

Algorithm 1. PSO

BEGIN
  (a) UNTIL a termination criterion is met (i.e., the given number of iterations is performed), REPEAT
  (b) FOR EACH particle i = 1, ..., N DO
      (i) FOR EACH dimension d = 1, ..., d DO
          (1) Pick random numbers: r_p, r_g ~ U(0, 1)
          (2) Update the particle's velocity:
              v_id ← ω·v_id + φ_p·r_p·(p_id − x_id) + φ_g·r_g·(g_d − x_id)
      (ii) Update the particle's position: x_id ← x_id + v_id
      (iii) IF (f(x_i) < f(p_i)) THEN
          (1) Update the particle's best known position: p_i ← x_i
          (2) IF (f(p_i) < f(g)) THEN update the swarm's best known position: g ← p_i
  RETURN g
END
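The classical PSO loop of Algorithm 1 can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the sphere benchmark, the acceleration values of 1.49 and the linearly decreasing inertia weight are the common choices mentioned in the text, and all names are illustrative.

```python
import random

def pso(f, dim, n_particles=30, iters=200,
        phi_p=1.49, phi_g=1.49, x_min=-100.0, x_max=100.0):
    """Classic global-best PSO following Algorithm 1 and Eqs. (1)-(2),
    with the inertia weight linearly decreased from 0.90 to 0.40."""
    v_max = x_max - x_min
    # Random initialization of positions and velocities.
    x = [[random.uniform(x_min, x_max) for _ in range(dim)]
         for _ in range(n_particles)]
    v = [[random.uniform(-v_max, v_max) for _ in range(dim)]
         for _ in range(n_particles)]
    p = [xi[:] for xi in x]               # personal best positions (pbest)
    p_fit = [f(xi) for xi in x]           # personal best fitness values
    g_idx = min(range(n_particles), key=lambda i: p_fit[i])
    g, g_fit = p[g_idx][:], p_fit[g_idx]  # swarm best (gbest)
    for t in range(iters):
        omega = 0.90 - 0.50 * t / iters   # linearly decreasing inertia weight
        for i in range(n_particles):
            for d in range(dim):
                rp, rg = random.random(), random.random()
                # Eq. (1): velocity update, clamped to [-v_max, v_max].
                v[i][d] = (omega * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                v[i][d] = max(-v_max, min(v_max, v[i][d]))
                # Eq. (2): position update.
                x[i][d] += v[i][d]
            fit = f(x[i])
            if fit < p_fit[i]:            # improve pbest
                p[i], p_fit[i] = x[i][:], fit
                if fit < g_fit:           # improve gbest
                    g, g_fit = x[i][:], fit
    return g, g_fit

# Usage: minimize the sphere benchmark in 5 dimensions.
best, best_fit = pso(lambda pt: sum(c * c for c in pt), dim=5)
```

The gbest-based velocity term corresponds to the GPSO variant described above; replacing g with the best position in a particle's topological neighborhood would give LPSO.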

The remainder of this paper is organised as follows. Section 3 describes related work on PSO algorithms. Section 4 elaborates the proposed HPSO-OCC algorithm. Section 5 describes the application of HPSO-OCC in a remote healthcare application. Section 6 presents the results and discussion, covering the experimental set-up, a performance analysis of HPSO-OCC against existing PSO algorithms, and the performance of HPSO-OCC on the optimization functions. Section 7 draws conclusions.

3. Related works

The PSO algorithm is one of the evolutionary algorithms that solve optimization problems. Recently, many real-time problems have been solved using PSO due to its simplicity. Many researchers have modified classic PSO to address problems such as loss of diversity, outdated memory, reliability and convergence speed. Several techniques have been applied to improve the traditional PSO search mechanism in static and dynamic environments. In recent years, the investigation of PSO in changing-environment problems has become one of the most important issues for real-time applications. To solve these issues,



Fig. 1. Different types of particle topologies: (a) star topology used in gbest; (b) ring topology used in lbest; (c) von Neumann topology; (d) four clusters topology.


various dynamic Particle Swarm Optimization methods have been experimented with and tested (Blackwell & Branke, 2006; Greef & Engelbrecht, 2008; Hu & Eberhart, 2002). The major techniques used to enhance the search performance of dynamic optimization are parameter tuning, multi-swarm schemes (Zhao, Nagaratnam & Das, 2010), multi-dimensionality (Kiranyaz, Pulkkinen, & Gabbouj, 2011), update of particle velocity and position (Chen & Li, 2007; Liang & Qin, 2006), hybridized PSO (Shen, Shi, & Kong, 2008), speciation (Li, Branke, & Blackwell, 2006; Rezazadeh, Meybodi, & Naebi, 2011), exclusion and anti-convergence (Blackwell & Branke, 2006) and topology investigation (Li & Dam, 2003). A few techniques recently developed to improve the global convergence performance of dynamic PSO, and their issues, are briefly discussed below.

Liang and Suganthan (2005) proposed a modified dynamic Particle Swarm Optimization using the quasi-Newton method to achieve faster convergence. This algorithm constrains each particle within the range and calculates the fitness to update the pbest only if the particle is within the range. However, the results are not satisfactory for large populations; the grouping and exchange of information between the swarm particles are randomized, and the method fails to achieve the global optimum in a dynamic environment. Li and Yang (2008) developed a fast multi-swarm dynamic PSO where the swarm of particles identifies the gbest and forms a sphere of radius r which acts as a sub-swarm with the gbest as its centre. Child swarms are repeatedly formed for every gbest. If the distance between child swarms is less than r, the worse one is eliminated to avoid overlap. However, the work is completely based on the selection of the radius r, which is randomly selected from a range of values; no proper concept guides the selection of the radius. Liu, Yang, and Wang (2010) presented a mechanism to track promising peaks in dynamic environments. Composite particles are constructed using a worst-first principle and spatial information for efficient information sharing. A velocity-anisotropic reflection point is generated to replace the worst particle for a better direction in the search space. However, this algorithm does not focus on multiple-swarm interaction and involves high time complexity if the number of particles in the swarm is large. Connolly, Granger, and Sabourin (2012) proposed an incremental learning strategy based on Dynamic Particle Swarm Optimization (DPSO). The weights and the architecture of the pool of learning networks are optimized using DPSO. User-defined hyper-parameters are given to each classifier in the swarm, and the optimal hyper-parameters are stored and managed in long-term memory. The classifier with accuracy and diversity is selected and combined from the swarm for the given testing data. However, this work is close to static PSO, as it does not focus on maintaining diversity around each local optimum in a dynamic environment. Omranpur, Ebadzadeh, Shiry, and Barzegar (2012) focused on dynamic Particle Swarm Optimization to find the best local optimum by searching the backspace from the local minimum obtained. If any one particle is in a local minimum, the other nearby particles also converge to it. To overcome this, every particle in the search space is given a predefined pbest and pworst. As a particle enters the space, the pbest and pworst of every particle are compared, so the particle searches the backspace to get the best extremum. However, this technique fails to discuss the values assumed for pbest and pworst. Hernandez and Corona (2011) proposed a multi-swarm PSO in dynamic environments to manage outdated memory and diversity loss. The swarm is divided into two groups, an active swarm and a bad swarm, based on the quality (fitness) of the particles. A hypersphere is drawn around the gbest of the swarm with a certain radius. A control rule is given to identify and select the swarm that is moving towards the optimum solution. The remaining sub-optimal swarms are not considered, as they take significant computational resources. However, the information from the neighbor particles outside the hypersphere is not considered in achieving promising optima. This may worsen the performance in a high-dimensional search space; that is, particles not in the hypersphere may become active swarm particles after a few iterations. Cai and Yang (2013) developed dynamic parameter tuning in PSO for multi-target detection using multiple robots. Sensors in the robots (particles) sense the direction and the sensing distance. The objective of this multi-robot target detection is that when a robot detects some targets, the percentage of the area uncovered by the targets is estimated. This facilitates the other robots changing their direction and distance. Parameters such as the closeness of a robot to the others, the relative directions and the area are estimated to change the velocity and position of the particles reaching the target. However, the number of targets is fixed, and the method fails to handle a dynamic environment. This algorithm is application specific and will give poor performance for most other applications.

The existing techniques merely try to improve the search mechanism, convergence speed and diversity by introducing a new parameter. This new parameter is assigned a random value or threshold and thus yields limited improvement in dynamic environments. Randomizing a parameter in the swarm may eliminate important information learnt previously when it is tuned to another random value in a totally new environment. Parameter tuning of this kind fails to retain a reliable solution when the population size increases. The problem becomes even more challenging in higher dimensions and for multimodal problems. Weak particles that are not within the specified radius are eliminated, and knowledge from the weaker ones is not used by the fitter particles, creating a high possibility of an inaccurate solution (local optimum). An algorithm with proven mathematical concepts can be applied to any application in any scenario, rather than one relying on randomization of parameters. Another major drawback of traditional PSO and the above-mentioned PSOs is that the algorithm is applied to a search space of fixed/single dimensionality. A few PSO algorithms conclude that the number of dimensions is equal to the number of swarms. But in dynamic scenarios, the number of particles entering the swarm cannot be predicted, and therefore swarm generation is also dynamic; that is, the number of swarms changes over time. Kiranyaz et al. (2011) developed a multi-swarm multidimensional PSO algorithm which initially creates an artificial particle in every swarm, and this selects the best particle



location from the entire swarm iteration by iteration. If the artificial particle is better than the gbest of the swarm, the gbest is replaced by the location of the artificial particle. Traditional PSO is used for the positional optimum with velocity updates, and navigation of the particle through dimensions is done for the dimensional optimum. When a particle re-enters, it remembers its position, velocity and its pbest in its dimension. The personal best fitness score achieved is the optimum solution and dimension of the particle. However, this set-up enables convergence speed and diversity gain only for a small number of particles in a swarm. Also, the gbests of the swarms get disturbed if a new particle enters with no previous personal best fitness score. Known particles store their previous best information and attain their positions when they re-enter. Even in a dynamic scenario, a known particle identifies its best position and reaches the optimum soon, but an unknown particle takes more time to adapt to the system and adjust its parameters with the known particles. Real-world problems will definitely have varying populations and cannot be restricted to a certain population size; the algorithm thus proves not to be general.

Researchers have developed hybridized PSO techniques with other optimization algorithms, such as PSO with GA and PSO with neural networks. Most of the hybridization techniques are presented to optimize the tuning parameters and network structure of the other optimization algorithm using PSO. Recent hybrid PSO models use operators and collaborative populations to maintain the diversity of the swarm in dynamic environments. Korurek and Dogan (2010) performed ECG beat classification using a radial basis function neural network and PSO. R-peaks of the ECG beats are extracted for classification. The centres and the bandwidths of the hidden neurons are optimized using Particle Swarm Optimization; the particles are the centres and bandwidths specifying the solution. Classical PSO is applied in a search space where the classification performance is improved with 10 or fewer neurons in the hidden layer. This work is a hybrid technique which uses a neural network and PSO for accurate classification.

Recent research in PSO algorithms incorporates Orthogonal Experimental Design (OED) in evolutionary algorithms to improve search performance significantly. Leung and Wang (2001) proposed an orthogonal genetic algorithm using OED to improve the crossover operator and population inversion. Hu, Zhang and Zhong (2006) introduced OED on chromosomes to identify better solutions in the environment. Ho, Lin, Liauh, and Ho (2008) proposed a particle move behavior using orthogonal experimental design to adjust the velocity in high-dimensional vector spaces. The intelligent move mechanism generates two temporary moves and produces a good combination of partial vectors from these moves. The good combination selected using the orthogonal effect is the next move of the particle. This orthogonal PSO is applied to a task assignment problem and compared with traditional PSO in a static environment. However, the algorithm's performance depends on the random initialization of the two temporary moves. Also, it is not experimented with for multi-swarm interaction in a dynamic environment.

In this paper, we propose an entirely different approach to improving PSO performance in dynamic environments by applying a cyclic property, an orthogonal technique and two-level PSO. To overcome the issues of existing work, the proposed algorithm emphasizes multi-swarm neighbor interaction, weak particle encouragement, higher dimensionality, unimodal and multimodal problems, population size, dynamic swarm generation and stable optima tracking, and supplies a strong mathematical concept rather than randomization of a parameter. The proposed algorithm employs a multi-swarm strategy, where swarms are formed based on likeness among the particles. This multi-swarm strategy adopts interaction of a swarm with the neighbor swarms of similar property: it is less time consuming to interact with a limited number of swarms than to interact with all the swarms in the entire search space. Neighbor swarms are identified using the cyclic property for information sharing. Orthogonal test design is a robust design approach used to balance the convergence speed and the diversity of the particles. Instead of searching for better solutions among all the swarms evenly distributed in the search space, the orthogonal strategy is a time-saving approach that searches for the global best solution among the limited neighbor swarms (cyclic swarms). Selection of the swarm is the process of identifying the best combinations of factors that affect the chosen solution variables. The chosen better swarm's gbest particle guides the weaker particles newly entering another swarm to move in the correct direction in later iterations. In a changing environment, fewer or more particles newly enter their respective swarms and move in the direction of their respective orthogonal swarm's gbest. The advantages of the proposed method are summarized as follows. The particles in the swarm exploit information from another particle, in addition to the best-fit particle. Weak particles interact with fitter particles in order to progress towards promising optima. On the advent of a new particle, the swarm interacts with neighboring promising swarms, in the interest of attaining quick stability. The removal of a particle from the swarm has a disastrous effect on swarm convergence, especially when the removed particle is a gbest or an lbest; hence, the swarm communicates with similarly converging swarms to regain momentum. The experience of the weaker particles is used to educate the swarm about the discouraging positions in the search space. The changes in one swarm are ensured not to affect the diversity in other swarms. The optima in the multiple swarms are discovered and tracked simultaneously to improve the performance of the algorithm. The particles' memory becomes unreliable after a change in the environment, causing outdated memory; in this approach, the memory is recovered instead of being re-evaluated or forgotten. Thus a robust and statistically sound PSO algorithm is developed to handle large populations, high dimensions, and unimodal and multimodal problems in dynamic environments.

4. The proposed Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles in dynamic environments

4.1. Circle formation

HPSO-OCC employs a new construct, termed circles, treated analogous to particles. Circles are formed based on similarity in property among the particles in the search space; these swarms are denominated ''Circles''. Each circle has a Circle Representative (CR) that symbolizes the properties of that circle: a circle is represented by the best-fit particle belonging to that circle, known as the CR. Thus, the particles in the PSO landscape are partitioned into multiple swarms called circles, and the global best particle in each circle is considered to depict the behavior of the respective circle. The CRs are treated as high-level particles and the particles within circles are treated as low-level particles. The working of this two-level PSO is depicted in Fig. 2. The low-level particles are allowed to participate in dedicated classic PSO threads and the best particle is continuously tracked. Thus, there are as many gbests (or CRs) as the number of circles. When a circle is said to participate in the novel PSO with the ortho-cyclic property (at the higher level of the hierarchy), it is the respective CR that participates in the algorithm in actuality. The hops made by the CRs influence the other particles belonging to that circle, and thereby guide the particles to fly towards the global optimum even after an unanticipated change in the environment. This ensures that particles in each circle exploit information from another particle, i.e., the best particle in other circles, in addition to the best-fit particle within itself. Also, the parallel discovery and tracking of optima in circles
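The circle-formation step described above can be illustrated as follows. This is a hedged sketch, not the paper's procedure: the grouping key (binning particles by a scalar similarity property) and all function names are assumptions for illustration, since the section fixes only the idea of grouping by similarity and taking the best-fit particle of each circle as its CR.

```python
def form_circles(particles, fitness, prop, n_circles=4):
    """Group particles into circles by a similarity property and pick each
    circle's representative (CR) as its best-fit particle.

    particles: list of position vectors; prop(p): scalar similarity property
    used for grouping (illustrative); fitness(p): objective, lower is better.
    """
    lo = min(prop(p) for p in particles)
    hi = max(prop(p) for p in particles)
    width = (hi - lo) / n_circles or 1.0          # bin width; 1.0 if all equal
    circles = [[] for _ in range(n_circles)]
    for p in particles:
        # Bin each particle by its property value, clamping the top edge.
        idx = min(int((prop(p) - lo) / width), n_circles - 1)
        circles[idx].append(p)
    # CR of each non-empty circle: the particle with the best (lowest) fitness.
    crs = [min(c, key=fitness) for c in circles if c]
    return circles, crs

# Usage: 8 one-dimensional particles grouped into 4 circles by position.
circles, crs = form_circles([[float(i)] for i in range(8)],
                            fitness=lambda p: abs(p[0] - 3.0),
                            prop=lambda p: p[0])
```

Each entry of `crs` then plays the role of a high-level particle in the second-level PSO, while the members of `circles` run the dedicated classic PSO threads.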


Fig. 2. Circles in Hierarchical Particle Swarm Optimization.

Table 1. Factors and levels representation.

Level | Factor 1 | Factor 2 | Factor 3
1     | P1       | Q1       | R1
2     | P2       | Q2       | R2
3     | P3       | Q3       | R3

Table 2. Best combinations of factors and levels.

Combination | Factor 1 | Factor 2 | Factor 3
1           | P1       | Q1       | R1
2           | P1       | Q2       | R2
3           | P1       | Q3       | R3
4           | P2       | Q1       | R2
5           | P2       | Q2       | R3
6           | P2       | Q3       | R1
7           | P3       | Q1       | R3
8           | P3       | Q2       | R1
9           | P3       | Q3       | R2

3464 K. Ganapathy et al. / Expert Systems with Applications 41 (2014) 3460–3476

guarantees commendable improvement in the performance of the algorithm. The memory of the particles in a circle under environmental changes is now updated by the interaction of its CR with other CRs, and in turn the interaction of this CR with the circle particles, thereby solving the outdated memory problem.
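The circle construction and CR tracking described above can be sketched as follows. This is a minimal illustration, assuming a 2-D sphere objective and equal-width binning of an arbitrary similarity property; all function names are ours, not the paper's implementation:

```python
import random

def fitness(x):
    # Example objective (assumption): 2-D sphere function, minimized at the origin.
    return sum(v * v for v in x)

def build_circles(particles, n_circles, prop):
    """Partition the swarm into circles by a similarity property `prop`,
    binned into n_circles equal-width groups (an illustrative rule)."""
    lo = min(prop(p) for p in particles)
    hi = max(prop(p) for p in particles)
    width = (hi - lo) / n_circles or 1.0
    circles = [[] for _ in range(n_circles)]
    for p in particles:
        cin = min(int((prop(p) - lo) / width), n_circles - 1)
        circles[cin].append(p)
    return circles

def circle_representative(circle):
    # The CR is the best-fit (here: minimum-fitness) particle of the circle.
    return min(circle, key=fitness)

random.seed(1)
swarm = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50)]
circles = [c for c in build_circles(swarm, 5, prop=lambda p: p[0]) if c]
crs = [circle_representative(c) for c in circles]
```

Each circle then runs its own classic PSO thread while its CR participates in the second-level algorithm.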

The interaction of CRs at the higher level is a substantial process, as this phase of the algorithm decides the ability of the proposed PSO to operate in dynamic environments. Apart from the information from the pbest of the CR and the gbest of all gbests (the gbest of the CRs), a third parameter is used to determine the velocity and the new position. This parameter is decided by employing the orthogonal and cyclic properties of the CRs. The orthogonality property, inspired from OLPSO, helps to reveal useful information from the surrounding CRs, resulting in efficient search in complex problem spaces. The cyclic property helps to ensure that the CRs, after an environmental disturbance, are guided by similarly converging CRs. This relies on the property used to construct the circles: circles with similar properties are expected to converge in a similar fashion. Each CR has two cyclic neighbors, and of these two, an orthogonal CR is discovered. The information from the personal best fitness (pbest) of the thus discovered ortho-cyclic CR is used along with the pbest of the CR and the gbest of all CRs to define the velocity and refine the position.

4.2. Formation of Ortho-Cyclic Circles

Discovery of an OCC involves four steps, which are described below. The first step in OCC discovery is the selection of cyclic circles. The circles are allotted a CIN (Circle Identity Number) during their construction. If there are N circles, the CIN ranges from 1 to N. Circles with CIN (a − 1) mod N and (a + 1) mod N are said to be cyclic to the circle with CIN a (1 ≤ a ≤ N). This is unlike a ring neighborhood, where position is used as the base criterion.
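The cyclic-circle rule can be expressed compactly. The sketch below assumes 0-based CINs so that the (a − 1) mod N / (a + 1) mod N rule wraps cleanly (the paper numbers CINs from 1 to N):

```python
def cyclic_neighbours(cin, n_circles):
    """Two cyclic neighbours of the circle with identity `cin`. CINs are
    taken as 0..N-1 here (an assumption; the paper numbers them 1..N) so
    that the (a - 1) mod N / (a + 1) mod N rule wraps around cleanly."""
    return ((cin - 1) % n_circles, (cin + 1) % n_circles)
```

For the 10 circles of the later case study, `cyclic_neighbours(0, 10)` yields circles 9 and 1.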

The second step, the construction of the orthogonal array, follows the selection of the cyclic circles. An example is given in Table 1 to introduce the basic concept of experimental design methods. The fitness function depends on the variables, and these three quantities (factor 1, factor 2 and factor 3) are called the factors of the experiment. Each factor has three possible values, and each discriminative value of a factor is called a level of that factor. P1, P2, P3 are the values of factor 1 in 3 levels. Similarly, Q1, Q2, Q3 are the values of factor 2 in 3 levels, and R1, R2, R3 are the values of factor 3 in 3 levels. The factors are the parameters which affect the fitness function. The number of factors and the number of levels vary depending on the application variables. For example, the objective function of minimum service waiting time is affected by variables (factors) such as signal strength and bandwidth. The orthogonal strategy involves the construction of an orthogonal array to discover the potentially best combination of levels through factor analysis.

From Table 1, there are 3³ = 3 × 3 × 3 = 27 combinations. The orthogonal statistical design is used here to reduce the complexity of every particle interacting with its neighbor particles in a large population. If the number of factors and levels is large, the number of combinations grows correspondingly; it is therefore desirable to select a certain representative set of combinations for experimentation. Orthogonal array and factor analysis is an efficient technique proposed for the systematic selection of the neighbor swarm particle with the best factors in a small number of combinations. The conventional generate and go method interacts with all the swarm particles to find the better solution. Thereby, the proposed orthogonality reduces the diversity of the particles and increases the convergence speed. The orthogonal strategy used in this work selects the best factors from the possible level combinations, and the cyclic circle whose gbest is nearest or equal to this best factor combination is considered for interaction/information sharing as the similarly converging circle. If there are two factors and three levels, then 3² combinations of experimental designs are possible. The algorithm identifies the orthogonal circle using an Orthogonal Array (OA) (Yang, Bouzerdoum, & Phung, 2010), denoted as L_M(Q^N). Here L denotes the OA, i.e., a Latin square, M is the number of combinations, N represents the number of factors, and Q is the number of levels per factor. For example, the L9(3^4) OA obtained by the algorithm as given in Appendix I for a 3-level, 4-factor OA is

L9(3^4) =
    [ 1 1 1 1
      1 2 2 2
      1 3 3 3
      2 1 2 3
      2 2 3 1        (3)
      2 3 1 2
      3 1 3 2
      3 2 1 3
      3 3 2 1 ]

L9(3^4) can be utilized for applications with at most 4 factors. For 3 levels and 4 factors, 3^4 = 81 combinations are possible, and by applying the orthogonal array nine representative combinations are selected as the best combinations. The nine best combinations are shown in Table 2.
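The L9(3^4) array of Eq. (3) and its defining balance property can be checked programmatically; the helper below is a sketch (its name and the pairwise column-balance check are ours, not from the paper):

```python
# L9(3^4) from Eq. (3): nine representative combinations for up to
# four factors at three levels.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def is_orthogonal(oa, q):
    """Defining OA property: in every pair of columns, every ordered pair
    of levels occurs equally often (here, 9 / 3^2 = 1 time each)."""
    n_cols = len(oa[0])
    reps = len(oa) // (q * q)
    for c1 in range(n_cols):
        for c2 in range(c1 + 1, n_cols):
            pairs = [(row[c1], row[c2]) for row in oa]
            for a in range(1, q + 1):
                for b in range(1, q + 1):
                    if pairs.count((a, b)) != reps:
                        return False
    return True
```

This balance across column pairs is what lets nine runs stand in for all 81 full-factorial combinations.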


An orthogonal array is an array of rows and columns, where each row represents a combination of levels and each column represents a specific factor; e.g., (1, 1, 1) = (P1, Q1, R1), and similarly for the other combinations. The array is orthogonal in that each column represents a specific factor, i.e., one column is independent of the others. This implies that even a two-factored application can use this OA, omitting any two columns at random. Thus, we get M combinations to be tested for our maximum benefit from the OA.

The next step is the factor analysis, which concentrates on exploring the best among the M combinations (Mendes, Kennedy, & Neves, 2004). For each combination m (1 ≤ m ≤ M), a factor value f_m is calculated. Furthermore, a binary parameter B_mnq is calculated: B_mnq is set to 1 if the level of the nth factor (1 ≤ n ≤ N) is q (1 ≤ q ≤ Q) in the mth combination, and 0 otherwise. The effect of level q on factor n is denoted by E_nq, which is calculated as the sum of all factor values f_m in which the level of the nth factor is q, divided by the sum of the corresponding binary values B_mnq, as given in Eq. (4).

E_nq = ( Σ_{m=1}^{M} f_m · B_mnq ) / ( Σ_{m=1}^{M} B_mnq )    (4)

This helps to find the effect of each level on each factor, and the level with the maximum effect value is considered the best combination. Identification of the OCC follows, where the circle whose CR produces the maximum effect value is considered the best circle for memory update. Since this circle has a cyclic relationship with the current circle and is found to be the best using OA factor analysis, it is chosen as the Ortho-Cyclic Circle (OCC). The gbest of this OCC (its CR) is considered while calculating the new velocity, position and fitness of the particles in the current circle.
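For a full OA, the factor analysis of Eq. (4) reduces to averaging the factor values of the combinations where factor n sits at level q. A hedged sketch (function names and the toy data are ours):

```python
def effect_values(oa, f_values, q):
    """E_nq of Eq. (4): for each factor n and level lvl, average the factor
    values f_m over the combinations m whose nth entry is at that level
    (B_mnq selects exactly those combinations)."""
    n_factors = len(oa[0])
    effects = []
    for n in range(n_factors):
        row = []
        for lvl in range(1, q + 1):
            sel = [f_values[m] for m, comb in enumerate(oa) if comb[n] == lvl]
            row.append(sum(sel) / len(sel))
        effects.append(row)
    return effects

# Toy L4(2^3) example (illustrative data, not from the paper):
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
effects = effect_values(L4, [1.0, 2.0, 3.0, 4.0], q=2)
```

Per-factor, the level with the extreme effect value is then picked as the best combination.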

4.3. Hierarchical Particle Swarm Optimization with Ortho Cyclic Circle

The proposed HPSO with OCC embeds the properties of circles, orthogonality, cyclicity and hierarchy to handle dynamic complex problems using PSO. The steps of the algorithm are described below. Construction of circles is done based on the similarity in property of the particles; thus each circle is a typical multi-swarm of like particles. The circles are identified using Circle Identity Numbers (CINs) allotted such that circles with adjacent CINs have similar properties. Therefore, dynamic changes in one circle can be handled by considering the information from circles with adjacent CINs. Each circle is represented by a CR, which is the gbest particle within the circle. The classic PSO algorithm described in Section 2 is actuated for the particles within the circles. These particles update their velocity and position as per Eqs. (1) and (2), respectively. This is done concurrently and continuously for all the circles by allocating a dedicated thread to each circle. The next step is the selection of the CR. The concurrent execution of PSO within the circles leads to continuous change of their gbests. This change is tracked iteration by iteration and the CR is selected and updated accordingly. In parallel with the classic PSO within circles, an Ortho-Cyclic Circle is found for each circle, and Particle Swarm Optimization is executed for the CRs using the velocity of the CR of the OCC. The velocity update equation with which the second-level PSO is actuated is given in Eq. (5); the position of the CR is updated using Eq. (6).

v_bi,d = ω · v_oi,d + φ_p r_p (ν_i,d − x_bi,d) + φ_g r_g (g_d − x_bi,d)    (5)

x_bi,d = x_bi,d + v_bi,d    (6)

Here, v_bi is the velocity of CR_i, v_oi is the velocity of the CR of the OCC of circle i, and x_bi is the position of CR_i. The velocity update applied to the CR gets reflected in all the particles within that circle in the subsequent generation of classic PSO. This helps the particles in all

circles to achieve promising global optima, even after an environmental change. Classic PSO, selection of CRs and PSO-OCC are repeated continuously and simultaneously until all the particles in the search space attain promising optima.
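One second-level update step per Eqs. (5) and (6) might look like the following sketch; the inertia and acceleration coefficients shown are common PSO defaults, not values reported by the paper:

```python
import random

def update_cr(x_b, v_occ, nu, g, w=0.729, phi_p=1.49445, phi_g=1.49445):
    """One second-level step per Eqs. (5)-(6): the CR's new velocity blends
    the velocity of its OCC's CR (v_occ) with attraction towards its own best
    position nu and the swarm best g. Coefficients are common PSO defaults."""
    v_new, x_new = [], []
    for d in range(len(x_b)):
        r_p, r_g = random.random(), random.random()
        v = w * v_occ[d] + phi_p * r_p * (nu[d] - x_b[d]) + phi_g * r_g * (g[d] - x_b[d])
        v_new.append(v)
        x_new.append(x_b[d] + v)  # Eq. (6)
    return x_new, v_new

random.seed(0)
x_new, v_new = update_cr([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0])
```

With both attractors above the current position and a zero OCC velocity, the CR moves towards them on every dimension.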

Let N be the number of circles in the swarm. The position and velocity of the CR of circle i (CR_i) are the position and velocity of the gbest particle within that circle, represented as x_bi and v_bi, respectively. Let j_i be the number of particles within circle i, where particle P_ij has a position x_ij and a velocity v_ij. The fitness of each particle is calculated generation by generation using a fitness function f. Let p_ij be the pbest of particle P_ij and ν_i be the gbest of circle i; that is, ν_i is the best known position of CR_i. Let g be the best known position of the entire swarm (the gbest of the CRs) and d be the number of dimensions. The velocity of the CR of the OCC is represented as v_oi,d. The algorithm of HPSO-OCC is presented in Algorithm 2.

Algorithm 2 HPSO-OCC

Begin
(1) Construct circles
(2) FOR EACH circle C_i, i = 1, ..., N DO
    (a) FOR EACH particle P_ij, j = 1, ..., j_i DO
        (i) Initialize the particle's position with a uniformly distributed random vector: x_ij ~ U(b_low, b_up), where b_low and b_up are respectively the lower and upper boundaries of the search space
        (ii) Initialize the particle's velocity: v_ij ~ U(−|b_up − b_low|, |b_up − b_low|)
        (iii) Initialize the particle's best known position to its initial position: p_ij ← x_ij
    (b) IF f(ν_i) < f(g) THEN
        (i) Update the swarm's best known position: g ← ν_i
(3) UNTIL the number of iterations is performed, REPEAT
    (a) FOR EACH circle C_i, i = 1, ..., N DO
        (i) Perform classic PSO
        (ii) Update ν_i of C_i with the return value of classic PSO
    (b) FOR EACH circle C_i, i = 1, ..., N DO
        (i) Compute the OCC for circle C_i
        (ii) Calculate v_bi,d ← ω · v_oi,d + φ_p r_p (ν_i,d − x_bi,d) + φ_g r_g (g_d − x_bi,d)
        (iii) Calculate x_bi,d ← x_bi,d + v_bi,d
        (iv) IF f(x_bi) < f(ν_i) THEN
            A. Update CR_i's best known position: ν_i ← x_bi
        (v) IF f(ν_i) < f(g) THEN
            A. Update the swarm's best known position: g ← ν_i
End
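A condensed, runnable sketch of the two-level loop of Algorithm 2 is given below. It makes several simplifying assumptions: a sphere objective stands in for the application fitness, a cyclic neighbour's CR is used directly in place of the full OA-based OCC discovery, and the coefficients are common PSO defaults; all names are illustrative:

```python
import random

def sphere(x):
    # Stand-in objective (assumption): HPSO-OCC minimizes the fitness f.
    return sum(v * v for v in x)

W, PP, PG = 0.729, 1.49445, 1.49445  # common PSO coefficients, not the paper's

def pso_step(circle, gbest_pos):
    """One classic first-level PSO generation inside a circle; returns the
    particle holding the circle's best known position (its CR)."""
    for p in circle:
        for d in range(2):
            rp, rg = random.random(), random.random()
            p["v"][d] = (W * p["v"][d] + PP * rp * (p["p"][d] - p["x"][d])
                         + PG * rg * (gbest_pos[d] - p["x"][d]))
            p["x"][d] += p["v"][d]
        if sphere(p["x"]) < sphere(p["p"]):
            p["p"] = list(p["x"])
    return min(circle, key=lambda p: sphere(p["p"]))

def hpso_occ(circles, iters=30):
    crs = [min(c, key=lambda p: sphere(p["p"])) for c in circles]
    g = list(min((cr["p"] for cr in crs), key=sphere))
    for _ in range(iters):
        for i, c in enumerate(circles):       # step (a): classic PSO per circle
            crs[i] = pso_step(c, crs[i]["p"])
        for i, cr in enumerate(crs):          # step (b): second-level PSO on CRs
            occ = crs[(i - 1) % len(crs)]     # cyclic neighbour stands in for OCC
            for d in range(2):
                rp, rg = random.random(), random.random()
                cr["v"][d] = (W * occ["v"][d] + PP * rp * (cr["p"][d] - cr["x"][d])
                              + PG * rg * (g[d] - cr["x"][d]))
                cr["x"][d] += cr["v"][d]
            if sphere(cr["x"]) < sphere(cr["p"]):
                cr["p"] = list(cr["x"])
            if sphere(cr["p"]) < sphere(g):
                g = list(cr["p"])
    return g

random.seed(7)
def make_particle():
    x = [random.uniform(-5, 5) for _ in range(2)]
    return {"x": x, "v": [random.uniform(-1, 1) for _ in range(2)], "p": list(x)}

circles = [[make_particle() for _ in range(10)] for _ in range(3)]
best = hpso_occ(circles)
```

The swarm best only ever improves in this sketch, mirroring steps 3(b)(iv)-(v) of the algorithm.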

The behavior of the algorithm is expected to be outstanding in static as well as dynamic environments. When there are no changes in the backdrop, each circle chooses itself as its OCC and carries out an algorithm like classic PSO. However, when there is an unexpected change in the environment, the circle chooses one of its two cyclic circles as its OCC and updates itself with the information from the CR of that OCC. This algorithm therefore avoids the need for re-diversification, thereby reducing convergence time.

4.4. Behavior of HPSO-OCC in dynamic environment

The HPSO-OCC algorithm is found to handle dynamic conditions with good performance, taking advantage of the two levels of optimization. The behavior of the proposed algorithm and Classic


Table 3. Behaviour of HPSO-OCC and classic PSO in dynamic environments.

Arrival of a new particle
- HPSO-OCC: The new particle enters one of the circles based on its abnormality value. This new particle undergoes optimization, being affected by the other particles within the circle as well as by the other CRs. This makes the new particle attain optimization more quickly than in classic PSO.
- Classic PSO: The new particle enters a swarm that has already converged. Once the swarm has converged, particles lose their ability to track new optima, since the new velocity is affected by the far-lagging newly arrived particle. This deteriorates the performance heavily.

Deletion of an existing particle (worst case: deletion of the best particle)
- HPSO-OCC: The deletion of the best particle may initially affect the state of the particles in the circle. However, in the second level of the algorithm, where the CRs undergo ortho-cyclic PSO, the CR of the affected circle is rectified and stabilized by its Ortho-Cyclic Circle.
- Classic PSO: If the best particle of the swarm is deleted, the state of the entire swarm is affected, as all particles rely on the position of the best particle to attain optimization.


PSO in dynamic environments is illustrated in Table 3, showing that HPSO-OCC performs better than classic PSO.

5. A case study: remote health monitoring of chronic heart disease patients with optimal service scheduling in cloud

Healthcare is the examination, medication and prevention of physical and mental ailments by medical practitioners. The storage, processing and retrieval of healthcare information with the application of computer hardware and software for communication and decision making is referred to as Health Information Technology (HIT). However, existing HIT applications suffer from serious drawbacks such as unavailability of servers for the huge data storage of all patients, difficulty in serving a large number of patients, non-categorisation of data requests, absence of prioritisation of users, and the lack of a way for physicians to obtain the current physical status and image of abnormal patients. These issues are due to access delay, unsupported hardware, infinitely large access and huge data storage. The developed system overcomes these issues by porting the application to the cloud and supporting a high level of customization.

Integration of healthcare with cloud computing services is a recent area of interest among researchers, identified to solve the above-cited issues by offering valuable advantages like scalability, efficient data sharing and low cost. Cloud computing is the on-demand provisioning of computing resources, with an assurance of good Quality-of-Service (QoS) at an adaptable service rate (Vaquero, Rodero-Merino, & Caceres, 2008). It is a service-oriented approach with three types of services (Furht, 2010): Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). IaaS clouds provide virtual resources to store data, which reduces maintenance cost compared to the number of servers hospitals would otherwise require. PaaS provides tools to the user for supporting the applications deployed in the cloud. SaaS helps to improve the performance of the healthcare software in terms of throughput and service time. Thus cloud computing ensures that healthcare services are on-demand, elastic and pay-per-use.

However, existing cloud healthcare applications suffer from data redundancy: the normal as well as abnormal data of all patients are stored in the cloud server, while it is sufficient to store the data of the abnormal patients alone. Also, the healthcare software must be ensured to work in dynamically changing environments with uncompromised performance. With an alarming rate of requests to the server, the scheduling of services by the cloud server is a yet-to-be-solved issue. This paper proposes a scheme that optimizes the scheduling application of the cloud server using a novel Particle Swarm Optimization (PSO) algorithm, thus improving performance, accuracy and scalability.

Existing scheduling schemes do not consider the degree of severity of the patients. When the percentage of abnormality is used as the factor for scheduling, there arises a problem when a

number of patients have the same level of abnormality. Another factor called service waiting time is introduced, which is the maximum time a patient can wait before being served. This time is optimized using the proposed algorithm, thereby ensuring minimum waiting; hence the most critical patients are served first. The described healthcare application facilitates sharing of medical records with physicians, patients, caregivers and professionals, coupled with optimized scheduling of services to the abnormal patients. The application is integrated with bio-sensors, the patient's Personal Computer, which acts as the gateway server, and a cloud database, and uses HPSO-OCC for optimized scheduling of services. The developed system is found to guarantee optimal performance and prioritized access to cloud data. The evolution of wireless sensor networks led to the development of body area networks. Wireless wearable sensors of small size and light weight are placed over the body of the patient and monitored continuously to detect abnormality in vital signs. If a vital sign exceeds its threshold value, an alert is generated to the caregiver/physician for further treatment. A BioHarness 3 (Zephyr BioHarness) chest-strap body sensor node is placed on the body of the subject. The BioHarness 3 consists of ECG, heart rate and respiration rate sensors and communicates with the gateway server via Bluetooth. A wrist-worn wireless sensor node is used for sensing blood pressure. A fingertip-worn sensor (Nonin pulse oximeter) is used to sense the oxygen saturation level (SPO2). These physiological parameters are of major importance for chronic heart disease patients.

Remote monitoring does not restrict the mobility of the patient, whereas Bluetooth can send data only within a limited range of 100 m. Therefore, a PDA with Wi-Fi technology is placed on the waist of the patient for communication. The gateway server uses a screener application, which computes the percentage of anomaly of the patient from the received sensor data and identifies whether the patient is abnormal. If the patient is found abnormal, an image of the patient is captured by a camera controlled by the gateway server and is sent to the cloud server through the internet, along with the data of the patient. The percentage of anomaly, a, is calculated from various healthcare parameters like ECG, heart rate, respiration rate, blood pressure, skin temperature and SPO2, and history details such as age, gender, alcohol consumption and smoking habit. The cloud server processes the received data and forwards it to the registered physicians, who can interpret the patient's condition for further analysis.

5.1. System architecture of remote health monitoring system in cloud

The overall working principle of the remote health monitoring system is shown in Fig. 3. The various modules of this monitoring system are explained in detail in this section. The gateway server receives the sensor data and computes the percentage of abnormality in the physiological parameters. The optimal scheduling algorithm is the application deployed in the cloud platform to support high scalability, good Quality of Service (QoS) and on-demand access to a huge


Fig. 3. Remote healthcare monitoring with optimized service scheduling in cloud.

Fig. 4. Five phases of gateway server in Remote health monitoring system.

Table 4. Normal ranges of various health parameters.

Age (years) | ECG PR interval (ms), male | ECG PR interval (ms), female | Heart rate (beats/min), male | Heart rate (beats/min), female | Systolic BP (mmHg) | Diastolic BP (mmHg)
18–24 | 153 | 153 | 62–65 | 66–69 | 120 | 79
25–29 | 153 | 153 | 62–65 | 65–68 | 121 | 80
30–34 | 153 | 153 | 62–65 | 65–68 | 122 | 81
35–39 | 153 | 153 | 63–66 | 65–69 | 123 | 82
40–44 | 153 | 153 | 63–66 | 65–69 | 125 | 83
45–49 | 153 | 153 | 64–67 | 66–69 | 127 | 84
50–54 | 153 | 153 | 64–67 | 66–69 | 129 | 85
55–59 | 153 | 153 | 62–67 | 65–68 | 131 | 86
60–64 | 163 | 156 | 62–67 | 65–68 | 134 | 87
65–69 | 163 | 156 | 62–67 | 65–68 | 140 | 90
70–79 | 168 | 160 | 62–67 | 65–68 | 140 | 90
80–99 | 177 | 163 | 62–67 | 65–68 | 140 | 90

Table 5. Weights assigned to physiological parameters for abnormality calculation.

ID (i) | Parameter | Weight w_i
0 | Respiration rate | 0.21645373503453060
1 | Blood pressure | 0.18326235612262132
2 | ECG | 0.17219856315198487
3 | Heart rate | 0.16113477018134845
4 | SPO2 | 0.15007097721071200
5 | Skin temperature | 0.11687959829880270


database. This approach ensures that there is no data redundancy: only the data of patients with abnormal sensor readings is ported to the cloud. The normal data is placed in a local database in the gateway server, which deploys five main phases, namely Sensor Data Enumerator, Anomaly Appraiser, Anomaly Screener, Camera Trigger and Patient Data Handler, as shown in Fig. 4.

The Sensor Data Enumerator is the first phase, where the physiological data of the patients are measured by the wearable sensors enabled with the capacity to transmit the sensed data to the server


Fig. 5. Cloud server module of remote health monitoring system.


using Bluetooth. The second prime task of the gateway server is to estimate the percentage of abnormality of the patient using the Anomaly Appraiser. The entire application is dependent on the accuracy of this phase, and immense care is taken to ensure its correctness.

For each healthcare parameter, a range of normal and abnormal values has been identified, which is tabulated in Table 4. The abnormality percentage of the sensor data is determined by inferring the threshold values from Table 4 based on the gender and age of the subject. The rules used for finding the anomaly value are as follows. Normal skin temperature is 98.6 °F and normal respiration rate is 12–20 breaths per minute. The SPO2 range for a healthy person is 96–99% and for patients with respiratory disease is 90–99%. Smoking habit and alcohol habit each increase abnormality by 20%. A low heart rate in a patient having hypertension increases abnormality by 30%.

Weights are assigned to each physiological parameter as given in Table 5. The weights are calculated using Eq. (7), where T is the sum of the weights (T = 1), n is the number of parameters (n = 6), k is the intensity (k = n/2), and f(x) is the homogeneous Poisson process function given in Eq. (8).

w_i = T/n + (−1)^i · f(i mod k)    (7)

f(x) = e^{−kT} (kT)^x / x!    (8)

Table 6. Construction of circles.

Abnormality range (%) | CIN
[0.00, 9.99]   | C0
[10.00, 19.99] | C1
[20.00, 29.99] | C2
[30.00, 39.99] | C3
[40.00, 49.99] | C4
[50.00, 59.99] | C5
[60.00, 69.99] | C6
[70.00, 79.99] | C7
[80.00, 89.99] | C8
[90.00, 100]   | C9

The formula to calculate the abnormality is given in Eq. (9), where w_i is the weight assigned to parameter i and d_i is the deflection of parameter i from its normal value, given in Eq. (10). Here r_i is the sensor reading of parameter i, and n_i is the normal (expected) value of parameter i. The percentage of abnormality is given as a × 100.

a = Σ_{i=0}^{5} w_i · d_i    (9)

d_i = r_i / n_i − 1.0    (10)

Table 7. Mapping of HPSO-OCC with the healthcare application.

HPSO-OCC notation | Healthcare application
Particle | Sensor data of a patient
Swarm | Group of patients who arrive at an instance of time
Fitness | Service time
Circle | Group of patients whose abnormality falls in the same anomaly range
Circle representative | The patient with the best fitness within a circle
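Eqs. (9) and (10) with the Table 5 weights can be sketched directly (function names are ours; the weights are copied from Table 5):

```python
# Weights w_i copied from Table 5; index order: 0 respiration rate,
# 1 blood pressure, 2 ECG, 3 heart rate, 4 SPO2, 5 skin temperature.
WEIGHTS = [0.21645373503453060, 0.18326235612262132, 0.17219856315198487,
           0.16113477018134845, 0.15007097721071200, 0.11687959829880270]

def deflection(reading, normal):
    # Eq. (10): d_i = r_i / n_i - 1.0
    return reading / normal - 1.0

def abnormality(readings, normals, weights=WEIGHTS):
    # Eq. (9): a = sum_i w_i * d_i; the percentage of abnormality is a * 100.
    return sum(w * deflection(r, n) for w, r, n in zip(weights, readings, normals))
```

Since the weights sum to 1, a uniform 10% deviation of every reading from its normal value yields a ≈ 0.1, i.e., 10% abnormality.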

The next phase is the Anomaly Screener, where the computed percentage of abnormality is used as a parameter for data filtering to identify the abnormal patients. This ensures that the data of a patient in normal condition is neither processed nor stored until there is a demand. The abnormal data of the patients is sent to the cloud server by the Patient Data Handler: the percentage of anomaly, the data and the Patient ID are given to the cloud server for processing and service provision.

The server module is deployed in the centralized cloud, aiming to receive the data of abnormal patients from the gateway server and serve them in an optimal order. It includes five main phases, namely Patient Data Receiver, Circle Assembler, HPSO-OCC, Database Update, and Dynamic Round Robin Scheduler (DRRS), as shown in Fig. 5. The data of the abnormal patients are received and the HPSO-OCC algorithm is used to compute the optimal service waiting time for each patient. The patients are served in decreasing order of their abnormality, with the service waiting time being used as the scheduling parameter for patients with the same abnormality. In other words, the patient with the highest abnormality and the lowest service waiting time has the highest priority.

The Patient Data Receiver accepts the data from the patient data handler and passes it to the cloud server. Using the Patient Identity (PID), the history details of the patient, such as age, alcohol consumption, smoking habit and gender, are obtained from the cloud database. The received dynamic data and the fetched patient history are then processed. The Circle Assembler constructs the circles based on the percentage of anomaly in the vital sign measurements. The application is designed to construct 10 circles, with CINs C0 to C9, and hence 10 CRs. The construction of circles is done based on the mapping in Table 6.
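The Circle Assembler's Table 6 mapping amounts to 10-point banding of the abnormality percentage; a sketch (the helper name is ours, and we assume 100% falls into C9 as the table's last band indicates):

```python
def assign_cin(abnormality_pct):
    """Map a percentage of abnormality to its circle index per Table 6:
    10-point bands C0..C9, with 100% assumed to fall into C9."""
    return min(int(abnormality_pct // 10), 9)
```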

The HPSO-OCC algorithm is adopted to optimize the service scheduling. Table 7 shows the mapping of HPSO-OCC to the healthcare application under study. The property used here for circle construction and CIN allocation is the 'Percentage of Abnormality'. Fig. 6(a)


Fig. 6. (a) Particles are grouped into circles based on the percentage of abnormality, (b) classic PSO algorithm is applied within the circles, (c) ortho-cyclic PSO is applied among the CRs, (d) circles along with particles achieve optimization.


shows the grouping of abnormal patient data into circles. Particles with a similar abnormality range are grouped together. Fig. 6(b) depicts the behavior of the particles within circles when classic PSO is applied, where the particles in grey-scale symbolize the CR of the corresponding circle. Fig. 6(c) shows the behavior of the CRs, which implement PSO-OCC for optimization. The CRs communicate with their two cyclic particles and identify the OCC. The CR of the OCC helps in moving towards promising optima. This information is conveyed to the other particles in the circle during the lower level of PSO. Thus, all the circles and all the particles within circles achieve optimization, and in this way the service waiting time is minimized (see Fig. 6(d)).

The Dynamic Round-Robin Scheduler is used in parallel with the database update after optimization. The patients are served based on the solution of HPSO-OCC and the percentage of abnormality. The patient with the highest percentage of abnormality and the lowest service waiting time (the fitness value of HPSO-OCC) is given the highest priority. Here, service provision refers to the delivery of a Short Message Service (SMS) notification about the patient's health condition to the registered physician. Whenever a patient is found to be abnormal, an alert (SMS) is generated to the physician. The message sent as an alert includes details such as Patient ID, name, age, gender, abnormality and the abnormal parameters' values. The message code for each healthcare parameter is depicted in Table 8. The message format used by the application is <PID, PName, Age, a (in %), Param1: Value1, Param2: Value2, ...>, where Parami is the three-lettered

Table 8. Message codes for the Short Message Service.

Parameter | Message code
ECG | ECG
Heart rate | _HR
Respiration rate | _RR
Systolic blood pressure | SBP
Diastolic blood pressure | DBP
SPO2 | SPO
Skin temperature | STP

message code for healthcare parameter i and Valuei is a number between −100 and +100, with + symbolizing an increase, − symbolizing a decrease, and the number symbolizing the percentage of deflection from normalcy. For example, if a patient named XYZ with ID P2144 and age 56 years is found to be abnormal with a 40% increase in systolic blood pressure, a 25% increase in diastolic blood pressure, and a 20% increase in heart rate, the abnormality value a is computed as 0.0844 (i.e., 8.44% abnormal). The message packet is represented as <P2144, XYZ, 56, 8.44%, _HR: +10, SBP: +40, DBP: +25>.
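Building the alert packet from the Table 8 codes can be sketched as below (the helper name and signature are illustrative; the output follows the format just described, with commas used uniformly as separators):

```python
def format_alert(pid, name, age, abnormality_pct, deflections):
    """Build the SMS payload <PID, PName, Age, a(in %), Param1: Value1, ...>.
    `deflections` pairs Table 8 message codes with signed percentage
    deflections from normalcy. Helper name and signature are illustrative."""
    params = ", ".join(f"{code}: {value:+d}" for code, value in deflections)
    return f"<{pid}, {name}, {age}, {abnormality_pct:.2f}%, {params}>"

msg = format_alert("P2144", "XYZ", 56, 8.44, [("_HR", 10), ("SBP", 40), ("DBP", 25)])
```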

6. Results and discussions

6.1. Experimental set-up

The proposed HPSO-OCC algorithm is experimented on the cloud healthcare application that optimally schedules the services to the abnormal patients in dynamic environments. The application is deployed and tested in a real-time environment, the specifications of which are described below. A Zephyr BioHarness 3 is employed for measuring and transmitting the heart rate, the R-R interval of the ECG and the breathing rate of the patients. The details of the sensor sampling interval and the time to measure a stable output, as suggested by a physician, are listed in Table 9.

The back-end of the application is coded in the Java programming language, with the user interface in JavaServer Pages (JSP). JDK 1.6 has been used as the Java platform and JUnit 4.5 for testing. The web interface with Java Enterprise Edition 6 uses

Table 9. Sampling frequencies.

Parameter | Sampling interval (ms) | Time to stable output (s)
ECG | 4 | 15
Respiration rate | 40 | 45
Heart rate | 4 | 15


Table 10. Test functions for optimization.

f  | Name            | Formula
f1 | Rastrigin       | 20 + x1² + x2² − 10(cos(2πx1) + cos(2πx2))
f2 | McCormick       | sin(x1 + x2) + (x1 − x2)² − 1.5x1 + 2.5x2 + 1
f3 | Lévi            | sin²(3πx1) + (x1 − 1)²(1 + sin²(3πx2)) + (x2 − 1)²(1 + sin²(2πx2))
f4 | Matyas          | 0.26(x1² + x2²) − 0.48x1x2
f5 | Bukin           | 100·sqrt(|x2 − 0.01x1²|) + 0.01|x1 + 10|
f6 | Booth           | (x1 + 2x2 − 7)² + (2x1 + x2 − 5)²
f7 | Goldstein–Price | (1 + (x1 + x2 + 1)²(19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)) · (30 + (2x1 − 3x2)²(18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²))
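A few of the Table 10 functions, written out with their known global minima as sanity checks (a sketch; the paper's own test harness is in Java):

```python
import math

def rastrigin(x1, x2):   # f1: global minimum 0 at (0, 0)
    return 20 + x1**2 + x2**2 - 10 * (math.cos(2*math.pi*x1) + math.cos(2*math.pi*x2))

def matyas(x1, x2):      # f4: global minimum 0 at (0, 0)
    return 0.26 * (x1**2 + x2**2) - 0.48 * x1 * x2

def bukin6(x1, x2):      # f5: global minimum 0 at (-10, 1)
    return 100 * math.sqrt(abs(x2 - 0.01 * x1**2)) + 0.01 * abs(x1 + 10)

def booth(x1, x2):       # f6: global minimum 0 at (1, 3)
    return (x1 + 2*x2 - 7)**2 + (2*x1 + x2 - 5)**2
```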

Table 11. Performance comparison of HPSO-OCC on benchmark fitness functions.

I. Swarms = 1; Swarm size = 50; Total particles = 50
f  | μ ± σ            | Range            | m       | MAD
f1 | 4.474564 ± 19.9  | [4.40E-6, 62.22] | 1.863   | 14.7
f2 | 5.141371 ± 14.3  | [0.1084, 64.5]   | 0.5645  | 5.95
f3 | 4.376585 ± 7.63  | [1.48E-5, 38.42] | 2.303   | 4.82
f4 | 0.512558 ± 2.46  | [5.46E-6, 13.6]  | 0.6895  | 1.27
f5 | 0.856607 ± 2.03  | [2.20E-03, 14.3] | 0.6430  | 0.717
f6 | 1.578561 ± 2.55  | [3.37E-4, 14.2]  | 2.57E-2 | 1.06
f7 | 2.153872 ± 1.03  | [1.26E-7, 3.729] | 4.81E-2 | 0.641

II. Swarms = 2; Swarm size = 25; Total particles = 50
f  | μ ± σ               | Range             | m       | MAD
f1 | 14.88919154 ± 19.9  | [4.4E-06, 62.22]  | 1.863   | 14.7
f2 | 6.231780778 ± 14.3  | [0.1084, 64.46]   | 0.5645  | 5.95
f3 | 5.61452948 ± 7.63   | [1.48E-5, 38.42]  | 2.303   | 4.82
f4 | 1.417710916 ± 2.46  | [5.46E-6, 13.61]  | 0.6895  | 1.27
f5 | 0.907202938 ± 2.03  | [2.20E-03, 14.27] | 0.6430  | 0.717
f6 | 1.064839133 ± 2.55  | [3.37E-4, 14.23]  | 2.57E-2 | 1.06
f7 | 0.647837205 ± 1.03  | [1.26E-7, 3.729]  | 4.81E-2 | 0.641

III. Swarms = 10; Swarm size = 5; Total particles = 50
f  | μ ± σ             | Range               | m       | MAD
f1 | 4.879391 ± 9.13   | [3.925E-4, 42.45]   | 0.1609  | 4.80
f2 | 4.019455 ± 8.43   | [3.6795E-3, 39.63]  | 0.7479  | 3.56
f3 | 5.888844 ± 12.8   | [2.5066E-3, 55.9]   | 3.205   | 5.45
f4 | 0.609521 ± 0.933  | [1.489E-6, 2.790]   | 6.91E-3 | 0.607
f5 | 1.014863 ± 0.849  | [2.20E-3, 2.53]     | 0.9960  | 0.754
f6 | 0.523254 ± 2.25   | [6.557E-4, 15.61]   | 1.93E-2 | 0.518
f7 | 1.05363 ± 2.12    | [4.628E-9, 8.109]   | 2.62E-2 | 1.05

IV. Swarms = 1; Swarm size = 100; Total particles = 100
f  | μ ± σ               | Range              | m       | MAD
f1 | 17.70795327 ± 18.7  | [1.67E-2, 70.12]   | 9.251   | 15.6
f2 | 1.981629544 ± 3.06  | [2.493E-2, 16.43]  | 0.6675  | 1.6
f3 | 6.011749082 ± 14.5  | [1.10E-3, 66.50]   | 0.3258  | 5.92
f4 | 1.284776956 ± 2.13  | [8.71E-07, 6.633]  | 0.205   | 1.24
f5 | 0.967083899 ± 1.01  | [2.20E-03, 2.83]   | 0.3961  | 0.847
f6 | 0.667318031 ± 0.859 | [1.551E-4, 2.593]  | 5.66E-2 | 0.656
f7 | 0.549017765 ± 0.981 | [5.492E-9, 5.968]  | 0.1419  | 0.515

V. Swarms = 4; Swarm size = 25; Total particles = 100
f  | μ ± σ             | Range               | m      | MAD
f1 | 5.945523 ± 10.6   | [8.5369E-4, 72.2]   | 0.7508 | 5.75
f2 | 4.045756 ± 6.82   | [2.7393E-2, 28.5]   | 0.6675 | 3.67
f3 | 10.90637 ± 18.7   | [4.4903E-5, 83.8]   | 2.548  | 10.4
f4 | 0.888171 ± 1.9    | [3.01E-08, 15.5]    | 0.2157 | 0.833
f5 | 0.811841 ± 0.781  | [2.20E-03, 2.55]    | 0.7243 | 0.665
f6 | 0.85213 ± 2.06    | [2.4115E-4, 13.9]   | 0.1372 | 0.825
f7 | 0.936012 ± 1.4    | [3.17E-16, 5.96]    | 0.2392 | 0.887

VI. Swarms = 5; Swarm size = 20; Total particles = 100
f  | μ ± σ               | Range              | m      | MAD
f1 | 9.974601208 ± 15.9  | [1.31E-5, 70.9]    | 1.623  | 9.64
f2 | 2.366714973 ± 4.43  | [1.051E-2, 29.49]  | 0.7527 | 1.95
f3 | 6.810037228 ± 10.9  | [8.96E-04, 55.24]  | 0.7790 | 6.62
f4 | 1.020379153 ± 1.60  | [8.987E-5, 10.32]  | 0.3466 | 0.978
f5 | 0.8633002 ± 0.822   | [2.20E-03, 2.504]  | 0.8031 | 0.736
f6 | 1.325750966 ± 2.08  | [4.42E-4, 9.504]   | 0.4496 | 1.22
f7 | 0.775245686 ± 1.06  | [3.377E-8, 4.674]  | 0.2246 | 0.717

VII. Swarms = 6; Swarm size = 50; Total particles = 300
f  | μ ± σ             | Range               | m      | MAD
f1 | 9.125147 ± 14.3   | [7.220E-5, 61.57]   | 1.455  | 8.91
f2 | 5.290191 ± 10.5   | [3.81E-03, 61.34]   | 0.6675 | 4.92
f3 | 4.573632 ± 12.4   | [4.61E-04, 79.8]    | 0.2780 | 4.53
f4 | 0.950455 ± .63    | [3.900E-7, 12.47]   | 0.3210 | 0.88
f5 | 0.940586 ± 1.00   | [2.200E-03, 11.9]   | 0.8167 | 0.676
f6 | 2.340497 ± 3.63   | [7.303E-5, 15.61]   | 0.3666 | 2.27
f7 | 1.09257 ± 1.83    | [6.409E-16, 8.375]  | 0.1781 | 1.07

VIII. Swarms = 5; Swarm size = 100; Total particles = 500
f  | μ ± σ               | Range              | m       | MAD
f1 | 10.65370616 ± 17.2  | [1.103E-4, 80]     | 0.9696  | 10.4
f2 | 4.055450897 ± 10.9  | [5.160E-3, 88.72]  | 0.5083  | 3.89
f3 | 8.313125953 ± 16.3  | [3.20E-6, 82.85]   | 0.7402  | 8.12
f4 | 0.819697197 ± 1.32  | [1.40E-06, 10.8]   | 0.3496  | 0.728
f5 | 1.087635775 ± 0.858 | [2.20E-03, 2.83]   | 0.9363  | 0.713
f6 | 1.629673505 ± 2.87  | [1.73E-04, 15.61]  | 0.2482  | 1.57
f7 | 0.570570715 ± 1.22  | [9.1E-15, 7.678]   | 3.76E-2 | 0.564

IX. Swarms = 10; Swarm size = 50; Total particles = 500
f  | μ ± σ             | Range                | m       | MAD
f1 | 9.566704 ± 15.9   | [1.9991E-5, 80.00]   | 1.363   | 9.31
f2 | 3.34618 ± 7.81    | [3.827E-3, 79]       | 0.6542  | 3.01
f3 | 8.728926 ± 18     | [4.842E-5, 82]       | 0.3523  | 8.66
f4 | 0.789475 ± 1.31   | [3.3071E-7, 18.3]    | 0.4396  | 0.681
f5 | 1.022065 ± 0.732  | [2.200E-03, 2.668]   | 1.040   | 0.618
f6 | 1.857842 ± 3.08   | [7.64E-05, 15.61]    | 0.7487  | 1.73
f7 | 0.825703 ± 1.57   | [2.207E-16, 8.565]   | 7.50E-2 | 0.818

X. Swarms = 10; Swarm size = 100; Total particles = 1000
f  | μ ± σ               | Range               | m       | MAD
f1 | 8.0669 ± 16.4       | [1.34E-04, 183.4]   | 0.9318  | 7.82
f2 | 3.1606 ± 9.18       | [3.20E-06, 191.6]   | 0.6225  | 2.87
f3 | 7.5782 ± 23.2       | [3.01E-08, 596.5]   | 0.7949  | 7.46
f4 | 1.462725 ± 6.25     | [5.02E-07, 183.9]   | 0.3570  | 1.39
f5 | 1.046840427 ± 2.20  | [1.73E-04, 63.20]   | 0.9384  | 0.732
f6 | 2.1274 ± 10.6       | [3.18E-16, 328.1]   | 0.5168  | 2.04
f7 | 0.65869 ± 1.29      | [4.85E-15, 8.153]   | 9.03E-2 | 0.648

IX. Swarms = 10; Swarm size = 50; Total particles = 500 X. Swarms = 10; Swarm size = 100; Total particles = 1000f l ± r Range m MAD f l ± r Range M MADf1 9.566704 ± 15.9 [1.9991E-5, 80.00] 1.363 9.31 f1 8.0669 ± 16.4 [1.34E-04, 183.4] 0.9318 7.82f2 3.34618 ± 7.81 [3.827E-3, 79] 0.6542 3.01 f2 3.1606 ± 9.18 [3.20E-06, 191.6] 0.6225 2.87f3 8.728926 ± 18 [4.842E-5, 82] 0.3523 8.66 f3 7.5782 ± 23.2 [3.01E-08, 596.5] 0.7949 7.46f4 0.789475 ± 1.31 [3.3071E-7, 18.3] 0.4396 0.681 f4 1.462725 ± 6.25 [5.02E-07, 183.9] 0.3570 1.39f5 1.022065 ± 0.732 [2.200E-03, 2.668] 1.040 0.618 f5 1.046840427 ± 2.20 [1.73E-04, 63.20] 0.9384 0.732f6 1.857842 ± 3.08 [7.64E-05, 15.61] 0.7487 1.73 f6 2.1274 ± 10.6 [3.18E-16, 328.1] 0.5168 2.04f7 0.825703 ± 1.57 [2.207E-16, 8.565] 7.50E-2 0.818 f7 0.65869 ± 1.29 [4.85E-15, 8.153] 9.03E-2 0.648

3470 K. Ganapathy et al. / Expert Systems with Applications 41 (2014) 3460–3476

the Personal Glassfish v3 as the server. Cloudbees, which functions as a Platform as a Service (PaaS) with agile development and deployment services, is chosen as the cloud environment. Cloudbees is found to ensure ingenious services, with maximum resource utilization, global access and zero downtime. Selection of the Cloudbees environment ensures support for JVM-based languages and frameworks. The Cloudbees database is used by the application for storing the records of patients, physicians and hospitals. It also stores the abnormal data from sensors and the history of patients. The database server is ec2-50-19-213-178.compute-1.amazonaws.com, of type MySQL/5.0.51. The supported Git is used as the repository for the application.

The parameters of the novel PSO algorithm are fixed as follows. The number of swarms and the swarm size are dynamic. A cluster topology with a 5-particle neighborhood is adopted. The search space is defined as [−100, 100] and an inertia weight of 0.95 is applied. The Goldstein-Price fitness function is used, where the fitness function


Table 12. Best performance among various fitness functions (mean values for test cases I–X).

f    I       II      III     IV      V       VI      VII      VIII    IX      X
f1   4.4746  14.89   4.879   17.708  5.9455  9.975   9.12515  10.65   9.5667  8.067
f2   5.1414  6.232   4.019   1.9816  4.0458  2.367   5.29019  4.055   3.3462  3.161
f3   4.3766  5.615   5.889   6.0117  10.906  6.81    4.57363  8.313   8.7289  7.578
f4   0.5126  1.418   0.61    1.2848  0.8882  1.02    0.95045  0.82    0.7895  1.463
f5   0.8566  0.907   1.015   0.9671  0.8118  0.863   0.94059  1.088   1.0221  1.047
f6   1.5786  1.065   0.523   0.6673  0.8521  1.326   2.3405   1.63    1.8578  2.127
f7   2.1539  0.648   1.054   0.549   0.936   0.775   1.09257  0.571   0.8257  0.659

Table 13. Percentage of deviation from best mean value.

           I       II      III     IV      V       VI      VII     VIII    IX      X       % of deviation
Best mean  0.5126  0.648   0.523   0.549   0.686   0.775   0.939   0.571   0.789   0.659   NIL
f1         7.7299  21.98   8.325   31.254  7.669   11.87   8.716   17.67   11.12   11.25   13.758
f2         9.0308  8.619   6.682   2.6094  4.899   2.053   4.633   6.108   3.238   3.798   5.16705
f3         7.5387  7.667   10.25   9.95    14.9    7.784   3.87    13.57   10.06   10.5    9.60976
f4         0       1.188   0.165   1.3401  0.295   0.316   0.012   0.437   0       1.221   0.497391
f5         0.6712  0.4     0.94    0.7615  0.184   0.114   0.002   0.906   0.295   0.589   0.486154
f6         2.0798  0.644   0       0.2155  0.242   0.71    1.492   1.856   1.353   2.23    1.08228
f7         3.2022  0       1.014   0       0.365   0       0.163   0       0.046   0       0.478982


is evaluated for 300 iterations for each particle in the search space. The Dynamic Round Robin Scheduler uses a quantum time of 200 ms.

6.2. Comparison for various fitness functions

The efficiency and validity of optimization algorithms are tested using a chosen set of common standard benchmark functions, or test functions. For any new optimization algorithm, it is essential to validate its performance and compare it with other existing algorithms over a good set of test functions. To evaluate an algorithm, it is necessary to characterize the type of problem for which the algorithm is suitable. This is possible only if the test functions cover a wide variety of problems. In this paper, the seven benchmark functions listed in Table 10 are used in the experimental results. These benchmark functions are widely adopted in various global optimization problems. The functions are divided into two groups: unimodal functions and multimodal functions. Functions f6, f3 and f4 are non-separable unimodal functions; f1, f2, f5 and f7 are non-separable multimodal functions. If the solution of the algorithm falls near to, or exactly at, the global optimum of the test function, the run is judged to be successful.

The optima determined by the HPSO-OCC algorithm with various fitness functions are compared for performance analysis. The formulae for the fitness functions are presented in Table 10.

The performance analyses of HPSO-OCC with various fitness functions are done by varying the number of swarms, the number of

Table 14. Functions near to optimum.

Test case   Functions
Case 1      f4, f5
Case 2      f5, f7
Case 3      f4, f6
Case 4      f5, f6, f7
Case 5      f4, f5, f6, f7
Case 6      f5, f7
Case 7      f4, f5
Case 8      f4, f7
Case 9      f4, f7
Case 10     f7

particles in the swarm (swarm size), and the total number of particles. Statistical parameters such as the mean, standard deviation, range, median and average absolute deviation from the median are calculated for various landscapes. Each particle runs for 300 iterations and the statistical values are compared with the near-optimum values of the functions. The results are shown in Table 11, where μ denotes the mean, σ the standard deviation, m the median, and MAD the average absolute deviation from the median of the solutions obtained by HPSO-OCC. Experimentation is done using 10 test cases. For example, Test Case V denotes 4 swarms of 25 particles each, accounting for 100 particles in total. This test case gives a solution with mean 0.936012, standard deviation 1.4, lowest value 3.17E-16, highest value 5.96, median 0.2392 and MAD 0.887. Based on the abnormality category, the total particles mentioned for all cases I to X are distributed and, after some iterations, new particles of the same numbers enter the respective swarms. Every swarm that receives a new particle immediately selects the orthogonal neighbor swarm and continues exploration for the further iterations.
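The statistics reported above can be reproduced from a list of final solutions. The sketch below is an illustration of the definitions used in the tables (mean, standard deviation, range, median, and average absolute deviation from the median), not the authors' evaluation harness:

```python
import statistics

def run_statistics(solutions):
    """Summarize the final solutions of repeated runs: mean, standard
    deviation, range, median, and average absolute deviation from the
    median (MAD), as reported in Table 11."""
    mu = statistics.mean(solutions)
    sigma = statistics.stdev(solutions)          # sample standard deviation
    rng = (min(solutions), max(solutions))       # [lowest, highest] value
    m = statistics.median(solutions)
    mad = statistics.mean(abs(s - m) for s in solutions)
    return mu, sigma, rng, m, mad
```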

Fig. 7. Comparison of fitness functions for various datasets.


Fig. 8. Comparison of fitness functions for small datasets.

Fig. 9. Comparison of fitness functions for large datasets.


The behavior of the algorithm is expected to be outstanding in static as well as dynamic environments. When there are no changes in the backdrop, the circle chooses itself as its OCC and carries out an algorithm like classic PSO. In Table 11, for Swarms = 1, Total particles = 50 (case I) and Swarms = 1, Total particles = 100 (case IV), there is no neighbor swarm and the swarm runs OCC for itself. That is, when a new particle enters the swarm,

Table 15. Comparison of MAD of the fitness functions (test cases I–V).

f    I      II     III    IV     V
f1   14.7   14.7   4.8    15.6   5.75
f2   5.95   5.95   3.56   1.6    3.67
f3   4.82   4.82   5.45   5.92   10.4
f4   1.27   1.27   0.607  1.24   0.833
f5   0.717  0.717  0.754  0.847  0.665
f6   1.06   1.06   0.518  0.656  0.825
f7   0.641  0.641  1.05   0.515  0.887

the best combination factors obtained from orthogonality are given to the solution space to locate the good particles.

The new particles move to the best points for further exploration in subsequent iterations. This optimization technique prevents the swarm from tracking the optima from scratch in a changing environment. In case II (Swarms = 2, Total particles = 50), there are no cyclic circles. New particles entering either swarm interact with the other neighbor swarm for best-factor selection and run the next-level PSO. In cases III, V, VI, VII, VIII, IX and X, when there is an unexpected change in the environment, the circle chooses one of its two cyclic circles as OCC and updates itself with the information from the CR of the OCC. The results shown in Table 11 prove that the HPSO-OCC algorithm performs well for any population size.

Table 12 shows the mean values of all functions f1 to f7 for the ten test cases I to X. In every test case, the mean values of the solutions are compared with the optimum, and the functions near to the optimum are given in Table 14. It is seen that f4, f5 and f7 yield comparably good performance for most test cases with respect to the final solution; that is, they bring solutions with much higher accuracy to the problem. It is apparent that HPSO-OCC can avoid local optima and yields improved performance in obtaining the global optimum robustly for multimodal functions in higher-dimensional search spaces. The best mean value obtained is boldfaced. The results show that the Goldstein-Price fitness function (f7) generally outperforms the other fitness functions f1 to f6. For instance, f7 does better than all other functions for the test cases II, IV, VI, VIII and X. The Goldstein-Price function on average performs steadily for larger as well as smaller numbers of particles and swarms. Table 13 shows the best solution among the solutions obtained by functions f1 to f7, and the

Table 15 (continued). Comparison of MAD of the fitness functions (test cases VI–X).

f    VI     VII    VIII   IX     X
f1   9.64   8.91   10.4   9.31   7.82
f2   1.95   4.92   3.89   3.01   2.87
f3   6.62   4.53   8.12   8.66   7.46
f4   0.978  0.88   0.728  0.681  1.39
f5   0.736  0.676  0.713  0.618  0.732
f6   1.22   2.27   1.57   1.73   2.04
f7   0.717  1.07   0.564  0.818  0.648


Fig. 10. Comparison of fitness functions with respect to MAD.

Table 16. Comparison of HPSO-OCC with other existing PSOs (μ = mean, σ = standard deviation, m = median, MAD = average absolute deviation from the median).

I. Swarms = 1; Particles per swarm = 50; Total particles = 50
Algorithm  μ ± σ                 Range              m       MAD
HPSO-OCC   1.258599 ± 0.502      [0.1666, 1.982]    0.7135  0.375
HPSO       1.722566 ± 2.19       [0.188, 11.51]     0.9103  1.26
OLPSO      1.549234 ± 7.456E-02  [1.035, 1.569]     1.562   1.55E-2
PSO        2.217419 ± 0.470      [0.8094, 3.579]    2.252   0.408

II. Swarms = 2; Particles per swarm = 25; Total particles = 50
HPSO-OCC   0.866331 ± 0.502      [0.1666, 1.982]    0.7135  0.375
HPSO       3.018164 ± 4.50       [0.1870, 16.83]    1.033   2.41
OLPSO      0.917306 ± 0.126      [0.4335, 1.053]    0.8568  0.101
PSO        1.018746 ± 1.05       [0.1874, 2.718]    0.2343  0.804

III. Swarms = 10; Particles per swarm = 5; Total particles = 50
HPSO-OCC   0.683709 ± 0.454      [0.1663, 1.867]    0.5821  0.361
HPSO       1.511085 ± 2.11       [0.1870, 10.5]     0.5063  1.22
OLPSO      0.788224 ± 0.384      [0.2066, 1.64]     0.8221  0.315
PSO        1.824889 ± 2.37       [0.1895, 10.5]     0.7392  0.7392

IV. Swarms = 1; Particles per swarm = 100; Total particles = 100
HPSO-OCC   0.898513 ± 0.477      [0.1677, 1.663]    0.7729  0.373
HPSO       1.83033 ± 3.67        [0.1870, 13.43]    0.5817  1.47
OLPSO      1.299588 ± 8.979E-02  [0.6486, 1.325]    1.312   ####
PSO        1.659646 ± 1.31       [0.1889, 5.780]    1.502   0.850

V. Swarms = 4; Particles per swarm = 25; Total particles = 100
HPSO-OCC   0.6858336 ± 0.419     [0.1665, 1.674]    0.6427  0.347
HPSO       3.23320088 ± 4.64     [0.1871, 16.83]    0.7249  2.91
OLPSO      0.97726532 ± 0.502    [0.2010, 1.785]    1.009   0.386
PSO        2.11131837 ± 1.75     [0.1870, 5.703]    1.545   1.50

VI. Swarms = 5; Particles per swarm = 20; Total particles = 100
HPSO-OCC   0.911628 ± 0.562      [0.1663, 1.982]    0.8224  0.464
HPSO       2.750725 ± 3.40       [0.1870, 16.50]    1.476   2.28
OLPSO      0.957889 ± 0.332      [0.2262, 1.564]    0.9407  0.257
PSO        4.123436 ± 3.06       [0.1870, 9.725]    3.609   2.71

VII. Swarms = 6; Particles per swarm = 50; Total particles = 300
HPSO-OCC   0.93916302 ± 0.588    [0.1663, 1.982]    0.8956  0.508
HPSO       3.53451165 ± 4.07     [0.1870, 16.83]    1.702   2.99
OLPSO      1.11337511 ± 0.512    [0.2064, 2.000]    0.8636  0.411
PSO        3.03410006 ± 4.68     [0.1870, 13.21]    0.358   2.81

VIII. Swarms = 5; Particles per swarm = 100; Total particles = 500
HPSO-OCC   0.662357 ± 0.444      [0.1663, 1.982]    0.5416  0.35
HPSO       2.925982 ± 3.56       [0.1870, 16.50]    1.111   2.50
OLPSO      0.917207 ± 0.606      [0.1996, 1.815]    1.178   0.519
PSO        2.753394 ± 2.32       [0.1870, 6.524]    2.563   2.21

IX. Swarms = 10; Particles per swarm = 50; Total particles = 500
HPSO-OCC   0.740163 ± 0.525      [0.1663, 1.982]    0.538   0.434
HPSO       3.169782 ± 3.70       [0.1870, 16.83]    1.854   2.57
OLPSO      0.811775 ± 0.485      [0.1998, 1.966]    0.6939  0.4
PSO        2.338437 ± 2.62       [0.1870, 9.788]    1.168   1.94

X. Swarms = 10; Particles per swarm = 100; Total particles = 1000
HPSO-OCC   0.89173 ± 2.77        [0.1663, 86.60]    0.6730  0.553
HPSO       2.965196 ± 3.71       [0.1870, 16.83]    1.193   2.53
OLPSO      0.997134 ± 0.467      [0.2017, 1.94]     1.035   0.383
PSO        2.984747 ± 4.35       [0.1870, 16.83]    0.3359  2.76


percentage of deviation for each test case. The mean deviation is also shown in the last column of the table. It is evident that f7 performs the best, with the least mean percentage of deviation (boldfaced). Bukin and Matyas are found to perform equally well, with a low percentage of deviation. Booth's function is found to deliver a mediocre solution. Also, f1 (Rastrigin) is found to have the maximum deviation from the best solution in the HPSO-OCC algorithm, followed by the Levi function. The Rastrigin function's cosine modulation produces frequent local minima, and the locations of those minima are widely scattered.
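The standard Rastrigin definition (restated here from the common benchmark literature, under the assumption that it matches the paper's f1 in Table 10) shows where the cosine modulation that creates the dense grid of local minima enters:

```python
import math

def rastrigin(x):
    """Standard Rastrigin function: 10*n + sum(xi^2 - 10*cos(2*pi*xi)).
    The cosine term superimposes a regular grid of local minima on a
    sphere-like bowl; the global minimum is 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
```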

Fig. 7 shows the plot of the mean of the solutions of the fitness functions f1 to f7 for the test cases I to X, comparing the fitness functions across various data sets. The aim of this set of experiments is to test the effect of the number of swarms and the swarm size on the performance of HPSO-OCC in dynamic environments.

Experiments were carried out with the number of swarms set from 1 to 10 and the total number of particles from 50 to 1000. From Fig. 7, it can be seen that totals of 300–1000 particles give a better result (less deviation from the optimum) for most of the fitness functions. Rastrigin, Levi and McCormick show severe deviation from the current value and


Fig. 11. Comparison of PSOs with mean of final solution.

Fig. 12. Relative Performance of PSO variants with HPSO-OCC.


the optimum value (error) for smaller swarm sizes between 50 and 100.

Fig. 8 compares the behavior of the fitness functions for small datasets, namely I (Swarms = 1; Swarm size = 50; Total particles = 50), II (Swarms = 2; Swarm size = 25; Total particles = 50), III (Swarms = 10; Swarm size = 5; Total particles = 50), IV (Swarms = 1; Swarm size = 100; Total particles = 100), V (Swarms = 4; Swarm size = 25; Total particles = 100) and VI (Swarms = 5; Swarm size = 20; Total particles = 100). Similarly, Fig. 9 compares the performance of the fitness functions with respect to the final solution for large datasets, namely VII (Swarms = 6; Swarm size = 50; Total particles = 300), VIII (Swarms = 5; Swarm size = 100; Total particles = 500), IX (Swarms = 10; Swarm size = 50; Total particles = 500) and X (Swarms = 10; Swarm size = 100; Total particles = 1000).

Table 15 displays the average absolute deviation from the median (MAD) of the solutions obtained by the fitness functions f1 to f7 for the ten test cases I to X. It is evident that the f7 fitness function gives the least MAD (boldfaced) for seven out of ten test cases. This proves that f7 performs the best with respect to accuracy of the final solution, percentage of deviation from the best solution and MAD. The same is represented as a graph plot in Fig. 10.

6.3. Performance comparison of HPSO-OCC with other existing PSOs

Various PSO algorithms are used for comparison. Tables 11–15 show that test function f7 outperforms the other functions and is chosen as the better measure of final-solution accuracy. In this section, HPSO-OCC using the f7 function is compared with classic PSO, Orthogonal Learning PSO (OLPSO) and Hierarchical PSO (HPSO). The first is the traditional PSO (Engelbrecht, 2006); the second is OLPSO (Zhan, Zhang, Li, & Shi, 2011), which uses an orthogonal strategy to construct a particle that acts as a guiding exemplar for adjusting flying velocity and direction; the third is the HPSO designed in this paper, which is analogous to the proposed HPSO-OCC, with classic PSO applied at both levels of the hierarchy but without the orthogonal strategy. In HPSO, gbests are constructed in the first level for every swarm, and all the neighboring swarms' gbests are processed in the second level to find the global optimum. The mean (μ), standard deviation (σ), range, median (m) and average absolute deviation from the median (MAD) of the final solutions of HPSO-OCC, HPSO, OLPSO and PSO are given in Table 16. It is observed that HPSO-OCC gives the best solution for all the test cases, irrespective of the number of swarms and the swarm size. The algorithms can be ranked by performance in terms of these statistical measures of solution accuracy. It can be observed from the solutions that HPSO-OCC offers the best performance, showing the least deviation from the actual value, followed by OLPSO, HPSO and classic PSO. The results show that, irrespective of the number of swarms and the swarm size, the mean value of HPSO-OCC ranges from 0.6 to 0.9.
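The two-level HPSO baseline described here can be sketched as follows. This is an illustrative reconstruction from the description in the text (classic PSO inside each swarm, then a second-level PSO run over the per-swarm gbests), not the authors' implementation; the parameter values and the `sphere` usage function are placeholders:

```python
import random

def pso(fitness, particles, iters=100, w=0.95, c1=2.0, c2=2.0):
    """Classic PSO over a list of initial positions; returns the gbest position."""
    dim = len(particles[0])
    vel = [[0.0] * dim for _ in particles]
    pbest = [p[:] for p in particles]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])   # cognitive term
                             + c2 * r2 * (gbest[d] - p[d]))     # social term
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest + [gbest], key=fitness)  # gbest never worsens
    return gbest

def hierarchical_pso(fitness, swarms, iters=100):
    """Level 1: classic PSO inside each swarm. Level 2: another PSO run over
    the per-swarm gbests to locate the global optimum (no orthogonal strategy)."""
    level1_gbests = [pso(fitness, swarm, iters) for swarm in swarms]
    return pso(fitness, level1_gbests, iters)
```

For example, minimizing a sphere function with three swarms of ten 2-D particles returns a point at least as good as the best initial particle, since the gbest update is monotone.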

Fig. 11 plots the mean of the final solutions given by HPSO-OCC, HPSO, OLPSO and PSO for the test cases I to X. HPSO-OCC gives the best solution for all the test cases, irrespective of the number of swarms and the swarm size. Results show that HPSO-OCC performs 64.7% better than HPSO with respect to the final solution. Also, HPSO-OCC outperforms OLPSO by 16.6% and classic PSO by 59.55%. The relative performance of the other three algorithms with respect to HPSO-OCC is shown in Fig. 12.

7. Conclusions

In this paper, we presented a multi-swarm PSO technique for efficient and robust optimization over static and dynamic environments. The technique uses the cyclic property, an orthogonal strategy and hierarchical PSO. Briefly, the working principle of HPSO-OCC is as follows. In the first level, circle particles learn from their own historical experience using the traditional PSO algorithm. Orthogonal array analysis discovers a similarly converging neighbor circle. The global best particle's position and velocity of the ortho-cyclic circle are learned by its orthogonal circle. The second-level PSO in the hierarchy is executed with the updated velocity equation.

In some real-time applications, particles with certain properties would be expected to collect useful information from particles of similar properties rather than exchanging information with swarms of dissimilar properties. That is, particles should exploit information from their near neighbors, which hold valuable knowledge. This is the effective interaction mode for applications in both static and dynamic environments. This paper employs a multi-swarm strategy in which a swarm interacts only with the neighbor swarm of similar property. It is less time consuming to interact with a limited number of swarms than to interact with the swarms in the entire search space. Interaction with all neighboring swarms' particles may waste computation, take more time for convergence and provide no steady search direction. If the number of swarms and the number of swarm particles are high, the dynamism will be severe. Selection of the neighbor swarm with the best-factor particle using the orthogonal strategy reduces multiple-swarm



interaction and improves convergence speed. The orthogonal strategy can be applied to any kind of topology structure. Employing the cyclic property for selecting a few neighbor swarms, an orthogonal test design for selecting the best factor combination, and the update of the weaker particle's velocity and position with the ortho-cyclic circle's gbest velocity make HPSO-OCC a statistically robust design approach that balances convergence speed and the diversity of the particles.

Comprehensive experimental tests have been conducted on 7 benchmark test functions, including both unimodal and multimodal functions, for large and small numbers of particles in the swarms. The performance analyses of HPSO-OCC with various fitness functions are done by varying the number of swarms, the number of particles in the swarm (swarm size), and the total number of particles. Statistical parameters such as the mean, standard deviation, range, median and average absolute deviation from the median are calculated for various landscapes. Each particle runs for 300 iterations and the statistical values are compared with the near-optimum values of the functions. Experiments were carried out with the number of swarms from 1 to 10 and the total number of particles from 50 to 1000. It is observed that swarms with 300–1000 total particles give a better result (less deviation from the optimum) for most of the fitness functions. The Goldstein-Price multimodal fitness function generally outperforms the other fitness functions for any population size. It is apparent that HPSO-OCC can avoid local optima and yields improved performance in obtaining the global optimum robustly for multimodal functions in higher-dimensional search spaces. The behavior of the algorithm is expected to be outstanding in static as well as dynamic environments. When there are no changes in the backdrop, the circle chooses itself as its OCC and carries out an algorithm like classic PSO. If there is no neighbor swarm, the swarm runs OCC for itself; that is, when a new particle enters the swarm, the best combination factors obtained from orthogonality are given to the solution space to locate the good particles. The new particles move to the best points for further exploration in subsequent iterations. This optimization technique prevents the swarm from tracking the optima from scratch in a changing environment. In the case of only two swarms, there are no cyclic circles.
New particles entering either swarm interact with the other neighbor swarm for best-factor selection and run the next-level PSO. HPSO-OCC is compared with classic PSO, Orthogonal Learning PSO (OLPSO) and Hierarchical PSO (HPSO). HPSO-OCC gives the best solution for all the test cases, irrespective of the number of swarms and the swarm size. It can be observed from the solutions that HPSO-OCC offers the best performance, showing the least deviation from the actual value, followed by OLPSO, HPSO and classic PSO. The results show that, irrespective of the number of swarms and the swarm size, the mean value of HPSO-OCC ranges from 0.6 to 0.9. Overall, the proposed approach shows improvement in terms of speed and accuracy. It efficiently handles multiple swarms, large populations, weak-particle encouragement, higher dimensionality, unimodal and multimodal problems, dynamic swarm generation and stable optima tracking. This technique rests on a stronger mathematical foundation than randomization of parameters.

For future work, it would be valuable to apply HPSO-OCC to a real-time application and to change the constant coefficients in the velocity and position equations dynamically with respect to the population. We also plan to evaluate HPSO-OCC on shifted and rotated benchmark functions and to standardise HPSO-OCC by comparing it with other evolutionary learning algorithms.

Acknowledgement

This research project is supported by NRDMS, Department of Science and Technology, Government of India, New Delhi and Anna University, Centre for Research, Chennai. The authors would like to extend their sincere thanks to NRDMS and Anna University for their support.

Appendix I

Construction of Orthogonal Array L_M(Q^N)

- N is the number of factors.
- Q is the number of levels per factor.
- M is the number of level combinations.
- L is the orthogonal array with dimensions M × P.

M := Q × Q
P := Q + 1
for each i := 1 to M do
    L[i, 1] := ⌊(i − 1)/Q⌋ mod Q
    L[i, 2] := (i − 1) mod Q
    for each j := 1 to P − 2 do
        L[i, 2 + j] := (L[i, 1] × j + L[i, 2]) mod Q
return L
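The appendix pseudocode translates directly to Python (0-based indexing; this construction yields a valid orthogonal array when Q is prime, e.g. Q = 3 gives the familiar L9(3^4) array):

```python
def orthogonal_array(Q):
    """Construct an orthogonal array with M = Q*Q rows and P = Q + 1 columns,
    levels 0..Q-1, following the Appendix I pseudocode (0-based here)."""
    M, P = Q * Q, Q + 1
    L = [[0] * P for _ in range(M)]
    for i in range(M):  # i plays the role of (i - 1) in the 1-based pseudocode
        L[i][0] = (i // Q) % Q
        L[i][1] = i % Q
        for j in range(1, P - 1):
            L[i][1 + j] = (L[i][0] * j + L[i][1]) % Q
    return L
```

For Q = 3 every pair of columns contains each of the 9 ordered level pairs exactly once, which is the defining orthogonality property.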

References

Blackwell, T. M., & Branke, J. (2006). Multiswarms, exclusion, and anti-convergencein dynamic environments. IEEE Transaction on Evolutionary Computing, 10(4),459–472.

Cai, Y., & Yang, S. X. (2013). An improved PSO-based approach with dynamicparameter tuning for cooperative multi-robot target searching in complexunknown environments. International Journal of Control, Taylor and Francis,86(13), 1–13.

Chen, X., & Li, Y. (2007). A modified PSO structure resulting in high explorationability with convergence guaranteed. IEEE Transactions on Systems, Man, andCybernetics, Part B, 37(5), 1271–1289.

Connolly, J. F., Granger, E., & Sabourin, R. (2012). Evolution of heterogeneousensembles through dynamic particle swarm optimization for video-based facerecognition. Pattern Recognition, Elsevier, 45(7), 2460–2477.

Engelbrecht, A. P. (2006). Particle swarm optimization: Where does it belong? In Proceedings of the IEEE swarm intelligence symposium (pp. 48–54).

Furht, B. (2010). Cloud computing fundamentals. Handbook of Cloud Computing.Springer, pp. 3–19.

Greef, M., & Engelbrecht, A. P. (2008). Solving dynamic multi-objective problems with vector evaluated particle swarm optimization. In Proceedings of IEEE congress on evolutionary computation (pp. 2922–2929).

Hernandez, P. N., & Corona, C. C. (2011). Efficient multi-swarm PSO algorithms fordynamic environments. Memetic Computing, Springer, 3(3), 163–174.

Hu, X., & Eberhart, R. C. (2002). Adaptive particle swarm optimization: Detection and response to dynamic systems. In Proceedings of IEEE congress on evolutionary computation. Vol. 2 (pp. 1666–1670).

Hu, X. M., Zhang, J., & Zhong, J. H. (2006). An enhanced genetic algorithm with orthogonal design. In Proceedings of IEEE congress on evolutionary computation (pp. 3174–318).

Ho, S. H., Lin, H.-S., Liauh, W. H., & Ho, S. J. (2008). OPSO: Orthogonal particle swarmoptimization and its application to task assignment problems. IEEE Transactionson Systems, Man, and Cybernetics - Part A: Systems and Humans, 38(2), 288–298.

Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle swarmalgorithm. In International conference on systems, man, and cybernetics. Vol. 5,(pp. 4104–4108). Orlando, FL.

Kennedy, J., & Mendes, R. (2002). Population structure and particle swarm performance. In Proceedings of the IEEE congress on evolutionary computation. Vol. 2 (pp. 1671–1676).

Kiranyaz, S., Pulkkinen, J., & Gabbouj, M. (2011). Multi-dimensional particle swarmoptimization in dynamic environments. Expert Systems with Applications, 38(3),2212–2223.

Korurek, M., & Dogan, B. (2010). ECG beat classification using particle swarmoptimization and radial basis function neural network. Expert Systems withApplications, Elsevier, 37(12), 7563–7569.

Leung, Y. W., & Wang, Y. (2001). An orthogonal genetic algorithm with quantizationfor global numerical optimization. IEEE Transaction on Evolutionary Computation,5(1), 41–53.

Li, C., & Yang, S. (2008). Fast multi-swarm optimization for dynamic optimization problems. In Fourth international conference on natural computation. Vol. 7 (pp. 624–628). Jinan.

Li, X., Branke, J., & Blackwell, T. M. (2006). Particle Swarm with Speciation andAdaptation in a Dynamic Environment. In Proceedings of the 8th annualconference on genetic and evolutionary computation, ACM. (pp. 51–58), USA.

Li, X., & Dam, K. H. (2003). Comparing particle swarms for tracking extrema indynamic environments. In Proceedings of ieee congress on evolutionarycomputation. Vol. 3, (pp. 1772–1779).

Page 17: Hierarchical Particle Swarm Optimization with Ortho-Cyclic Circles

3476 K. Ganapathy et al. / Expert Systems with Applications 41 (2014) 3460–3476

Liang, J. J., & Qin, A. K. (2006). Comprehensive learning particle swarm optimizer forglobal optimization of multimodal functions. IEEE Transactions on EvolutionaryComputation, 10(3), 281–295.

Liang, J. J., & Suganthan, P. N. (2005). Dynamic multi-swarm particle swarm optimizer with local search. In Proceedings of IEEE congress on evolutionary computation. Vol. 1 (pp. 522–528).

Liu, L., Yang, S., & Wang, D. (2010). Particle swarm optimization with compositeparticles in dynamic environment. IEEE Transactions on Systems, Man andCybernetics - Part B: Cybernetics, 40(6), 1634–1648.

Mendes, R., Kennedy, J., & Neves, J. (2004). The fully informed particle swarm:Simpler, maybe better. IEEE Transaction on Evolutionary Computation, 8(3),204–210.

Nonin pulse oximeter. http://www.nonin.com/pulseoximetry/fingertip/onyx9550.

Omranpur, H., Ebadzadeh, M., Shiry, S., & Barzegar, S. (2012). Dynamic particleswarm optimization for multimodal function. International Journal of ArtificialIntelligence, 1(1), 1–10.

Poli, R., Kennedy, J., & Blackwell, T. M. (2007). Particle swarm optimization. Swarm Intelligence, Springer, 1(1), 33–57.

Rezazadeh, I., Meybodi, M. R., & Naebi, A. (2011). Adaptive Particle SwarmOptimization Algorithm for Dynamic Environments. In International conferenceon swarm intelligence, LNCS, Springer-Verlag (pp. 120–129), China.

Shen, Q., Shi, W. M., & Kong, W. (2008). Hybrid particle swarm optimization andtabu search approach for selecting genes for tumor classification using geneexpression data. Computational Biology and Chemistry, 32(1), 53–60.

Shi, Y. H., & Eberhart, R. C. (1998). A modified particle swarm optimizer. In Proceedings of the IEEE world congress on computational intelligence (pp. 69–73).

Vaquero, L. M., Rodero-Merino, L., Caceres, J., & Lindner, M. (2008). A break in the clouds: Towards a cloud definition. ACM SIGCOMM Computer Communication Review, 39, 50–55.

Yang, J., Bouzerdoum, A., & Phung, S. L. (2010). A particle swarm optimizationalgorithm based on orthogonal design. IEEE World Congress on EvolutionaryComputation, 593–599.

Zephyr BioHarness. http://www.zephyr-technology.com/, http://www.zephyranywhere.com/healthcare/zephyrlife/.

Zhan, Z. H., Zhang, J., Li, Y., & Chung, H. S. H. (2009). Adaptive particle swarmoptimization. IEEE Transactions on Systems, Man and Cybernetics, Part B, 39(6),1362–1381.

Zhan, Z.-H., Zhang, J., Li, Y., & Shi, Y.-H. (2011). Orthogonal learning particle swarmoptimization. IEEE Transactions On Evolutionary Computation, 15(6), 832–846.

Zhao, S.-Z., Suganthan, P. N., & Das, S. (2010). Dynamic multiswarm particle swarm optimizer with sub-regional harmony search. In WCCI 2010 IEEE world congress on computational intelligence (pp. 1983–1990).