
Research Article
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

M. Rajeswari,1 J. Amudhavel,2 Sujatha Pothula,1 and P. Dhavachelvan1

1Department of CSE, Pondicherry University, Puducherry, India
2Department of CSE, KL University, Andhra Pradesh, India

Correspondence should be addressed to M. Rajeswari; rajirajeswari18@gmail.com

Received 26 October 2016; Revised 6 January 2017; Accepted 1 March 2017; Published 4 April 2017

Academic Editor: Reinoud Maex

Copyright © 2017 M. Rajeswari et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization scheduling problem in which a set of nurses is assigned to shifts per day subject to both hard and soft constraints. A novel metaheuristic technique is required for solving the NRP. This work proposes a metaheuristic technique called the Directed Bee Colony Optimization Algorithm using the Modified Nelder-Mead Method for solving the NRP. To solve the NRP, the authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems. MODBCO integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.

1. Introduction

Metaheuristic techniques, especially the Bee Colony Optimization Algorithm, can be easily adapted to solve a large number of NP-hard combinatorial optimization problems by combining other methods. Metaheuristic methods can be divided into local search methods and global search methods. Local search methods such as tabu search, simulated annealing, and the Nelder-Mead Method are used to exploit the search space of the problem, while global search methods such as scatter search, genetic algorithms, and Bee Colony Optimization focus on the exploration of the search space [1]. Exploitation is the process of intensifying the search; such a method repeatedly restarts the search, each time from a different initial solution. Exploration is the process of diversifying the search to avoid being trapped in a local optimum. A hybrid method obtains a balance between exploration and exploitation by introducing local search within global search to produce a robust solution for the NRP. In a previous study, a genetic algorithm was chosen for the global search and simulated annealing for the local search to solve the NRP [2].

In swarm intelligence, organisms follow simple basic rules to structure their environment. The agents have no centralized control structure; instead, local interactions among the agents determine their complex global behavior [3]. Natural behaviors that have inspired swarm intelligence include bird flocking, ant colonies, fish schooling, and animal herding. The resulting algorithms include the ant colony optimization algorithm, the genetic algorithm, and the particle swarm optimization algorithm [4-6]. The natural foraging behavior of honey bees has inspired the bee algorithm. All honey bees start to collect nectar from various sites around their new hive, and the best nectar site is found through the group decision-making of the honey bees. Communication among the honey bees is carried out through the waggle dance, which informs hive mates about

Hindawi, Computational Intelligence and Neuroscience, Volume 2017, Article ID 6563498, 26 pages. https://doi.org/10.1155/2017/6563498


the location of rich food sources. Algorithms which follow the waggle-dance communication performed by scout bees about the nectar site include the bee system, Bee Colony Optimization [7], and the Artificial Bee Colony [8].

The Directed Bee Colony (DBC) Optimization Algorithm [9] is inspired by the group decision-making process of bee behavior for the selection of the nectar site. The group decision process includes consensus and quorum methods. Consensus is the process of vote agreement, in which the voting pattern of the scouts is monitored; the best nest site is selected once the quorum (threshold) value is reached. Experimental results show that the algorithm is robust and accurate in generating a unique solution. The contribution of this research article is the use of a hybrid Directed Bee Colony Optimization with the Nelder-Mead Method for effective local search. The authors have adapted MODBCO for solving multiobjective problems by integrating the following processes. At first, a deterministic local search method, the Modified Nelder-Mead Method, is used to obtain a provisional optimal solution. Then, a multiagent particle system environment is used in the exploration and decision-making process for establishing a new colony and selecting a nectar site. Only a few honey bees are active in the decision-making process, so the energy conservation of the swarm is highly achievable.

The Nurse Rostering Problem (NRP) is a staff scheduling problem that intends to assign a set of nurses to work shifts to maximize hospital benefit while considering a set of hard and soft constraints such as allotment of duty hours, hospital regulations, and so forth. Nurse rostering is the delicate task of finding combinatorial solutions that satisfy multiple constraints [10]. Satisfying the hard constraints is mandatory in any scheduling problem, while a violation of any soft constraint is allowable but penalized. Achieving an optimal global solution for the problem is impossible in many cases [11]. Many algorithmic techniques, such as metaheuristic methods, graph-based heuristics, and mathematical programming models, have been proposed to solve automated scheduling and timetabling problems over the last decades [12, 13].

In this work, the effectiveness of the hybrid algorithm is compared with different optimization algorithms using performance metrics such as error rate, convergence rate, best value, and standard deviation. The well-known combinatorial scheduling problem, the NRP, is chosen as the test bed to experiment with and analyze the effectiveness of the proposed algorithm.

This paper is organized as follows. Section 2 presents the literature survey of existing algorithms to solve the NRP. Section 3 highlights the mathematical model and the formulation of the hard and soft constraints of the NRP. Section 4 explains the natural behavior of honey bees in handling the decision-making process and the Modified Nelder-Mead Method. Section 5 describes the development of the metaheuristic approach and demonstrates the effectiveness of the MODBCO algorithm in solving the NRP. Section 6 discusses the computational experiments and the analysis of results for the formulated problem. Finally, Section 7 provides a summary of the discussion, and Section 8 concludes with future directions of the research work.

2. Literature Review

Berrada et al. [19] considered multiple objectives to tackle the nurse scheduling problem by considering various ordered soft constraints. The soft constraints are ordered based on priority level, and this ordering determines the quality of the solution. Burke et al. [20] proposed a multiobjective Pareto-based search technique and used simulated annealing with a weighted-sum evaluation function for preferences and a dominance-based evaluation function for the Pareto set. Many mathematical models have been proposed to reduce the cost and increase the performance of the task; the performance greatly depends on the type of constraints used [21]. Dowsland [22] proposed a technique of chain moves using a multistate tabu search algorithm. This algorithm exchanges feasible and infeasible search spaces to increase the transmission rate when the system gets disconnected, but it fails to solve other problems in different search space instances.

Burke et al. [23] proposed a hybrid tabu search algorithm to solve the NRP in Belgian hospitals. In their constraints, the authors added the previous roster along with the hard and soft constraints; to accommodate this, they included heuristic search strategies in the general tabu search algorithm. This model provides flexibility and more user control. A hyperheuristic algorithm with tabu search was proposed for the NRP by Burke et al. [24]. They developed rule-based reinforcement learning which is domain specific, but it chooses only a few low-level heuristics to solve the NRP. The indirect genetic algorithm is problem dependent and uses encoding and decoding schemes with genetic operators to solve the NRP. Burke et al. [25] developed a memetic algorithm to solve the nurse scheduling problem and compared the memetic and tabu search algorithms. The experimental results show that the memetic algorithm outperforms the genetic algorithm and the tabu search algorithm with better solution quality.

Simulated annealing has also been proposed to solve the NRP. Hadwan and Ayob [26] introduced a shift pattern approach with simulated annealing. The authors proposed a greedy constructive heuristic algorithm to generate the required shift patterns to solve the NRP at UKMMC (Universiti Kebangsaan Malaysia Medical Centre). This methodology reduces the complexity of the search space by building two- or three-day shift patterns to generate a roster. The efficiency of this algorithm was shown by experimental results with respect to execution time, performance considerations, fairness, and the quality of the solution. This approach is capable of handling all hard and soft constraints and produces a quality roster pattern. Sharif et al. [27] proposed a hybridized heuristic approach with a variable neighborhood descent search algorithm to solve the NRP at UKMMC. This heuristic is the hybridization of a cyclic schedule with a noncyclic schedule. They applied a repairing mechanism, which swaps shifts between nurses, to tackle random shift arrangements in the solution. A variable neighborhood descent search algorithm (VNDS) is used to change the neighborhood structure using a local search and generate a quality duty roster. In VNDS, the first


neighborhood structure rerosters nurses to different shifts, and the second neighborhood structure performs the repairing mechanism.

Aickelin and Dowsland [28] proposed a technique for shift patterns; they considered shift patterns with penalty preferences and the number of successive working days. An indirect genetic algorithm generates various heuristic decoders for shift patterns to reconstruct the shift roster for each nurse. A qualified roster is generated using decoders with the help of the best permutations of nurses. To generate the best search space solutions for the permutation of nurses, the authors used an adaptive iterative method to adjust the order in which nurses are scheduled one by one. Asta et al. [29] and Anwar et al. [30] proposed a tensor-based hyperheuristic to solve the NRP. The authors tuned a specific group of datasets and embedded a tensor-based machine learning algorithm. A tensor-based hyperheuristic with memory management is used to generate the best solution. This approach is intended for life-long learning applications that extract knowledge and desired behavior throughout the run time.

Todorovic and Petrovic [31] proposed a Bee Colony Optimization approach to solve the NRP, in which all the unscheduled shifts are allocated to the available nurses in a constructive phase. This algorithm combines constructive moves with local search to improve the quality of the solution. For each forward pass, a predefined number of unscheduled shifts is allocated to the nurses, and solutions with little improvement in the objective function are discarded. The process of intelligent reduction in the neighborhood search improved the current solution. In the construction phase, unassigned shifts allotted to nurses lead to constraint violations and higher penalties.

Several methods have been proposed using the INRC2010 dataset to solve the NRP; the authors have considered the five latest competitors to measure the effectiveness of the proposed algorithm. Asaju et al. [14] proposed an Artificial Bee Colony (ABC) algorithm to solve the NRP. This process is done in two phases: at first, heuristic-based ordering of shift patterns is used to generate a feasible solution, and in the second phase, the ABC algorithm is used to obtain the solution. In this method, premature convergence takes place, and the solution gets trapped in local optima; the lack of a local search algorithm in this process leads to higher penalties. Awadallah et al. [15] developed a metaheuristic technique, the hybrid artificial bee colony (HABC), to solve the NRP. In the ABC algorithm, the employed bee phase was replaced by a hill climbing approach to intensify the exploitation process. The use of hill climbing in ABC generates a higher value, which leads to high computational time.

A global-best harmony search with pitch adjustment design was used to tackle the NRP in [16]. The authors adapted the harmony search algorithm (HSA) for the exploitation process and particle swarm optimization (PSO) for the exploration process. In HSA, the solutions are generated based on three operators, namely memory consideration, random consideration, and pitch adjustment, for the improvisation process. They made two improvements to solve the NRP: multipitch adjustment to improve the exploitation process, and replacing random selection with global-best selection to increase the convergence speed.

A hybrid harmony search algorithm with hill climbing was used to solve the NRP in [17]. For local search, the metaheuristic harmony and hill climbing approaches are used. The memory consideration parameter in harmony search is replaced by the PSO algorithm. The derivative criteria reduce the number of iterations towards local minima. This process considers many parameters to construct the roster, since the improvisation process must run at each iteration.

Santos et al. [18] used integer programming (IP) to solve the NRP and proposed a monolithic compact IP model with polynomially many constraints and variables. The authors used both upper and lower bounds for obtaining the optimal cost. They estimated and improved the lower bound values towards the optimum, but this method requires additional processing time.

3. Mathematical Model

The NRP is a real-world problem at hospitals: the problem is to assign a predefined set of shifts (such as S1, day shift; S2, noon shift; S3, night shift; and S4, free shift) over a scheduling period to a set of nurses of different preferences and skills in each ward. Figure 1 shows an illustrative example of a feasible nurse roster which consists of four shift types, namely day shift, noon shift, night shift, and free shift (holiday), allocating five nurses over 11 days of the scheduling period. Each column in the schedule table represents a day, and each cell content represents the shift type allocated to a nurse. Each nurse is allocated one shift per day, and the number of shifts is assigned based on the hospital contracts. The problem has variants in the number of shift types, nurses, nurse skills, contracts, and the scheduling period. In general, both hard and soft constraints are considered for generating and assessing solutions.

Hard constraints are the regulations which must be satisfied to achieve a feasible solution; they cannot be violated, since hard constraints are demanded by hospital regulations. The hard constraints HC1 to HC5 must be fulfilled to schedule the roster. The soft constraints SC1 to SC14 are desirable, and the selection of soft constraints determines the quality of the roster. Tables 1 and 2 list the sets of hard and soft constraints considered to solve the NRP. This section describes the mathematical model of the hard and soft constraints extensively.

The NRP consists of a set of nurses $n = 1, 2, \ldots, N$, a set of shifts $s = 1, 2, \ldots, S$, and a set of days $d = 1, 2, \ldots, D$. The solution roster $\mathbf{S}$ is a 0-1 matrix of dimension $N \times SD$, given by

$$S_{nds} = \begin{cases} 1, & \text{if nurse } n \text{ works shift } s \text{ on day } d, \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$
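The 0-1 roster variable defined in (1) can be held directly in a three-dimensional binary array. The sketch below is an illustrative assumption, not code from the paper; the sizes follow the Figure 1 example:

```python
import numpy as np

# Dimensions taken from the Figure 1 example: 5 nurses, 11 days,
# 4 shift types (S1 = day, S2 = noon, S3 = night, S4 = free).
N, D, S = 5, 11, 4

# roster[n, d, s] == 1 iff nurse n works shift s on day d, else 0.
roster = np.zeros((N, D, S), dtype=int)
roster[0, 0, 0] = 1  # nurse 1 takes the day shift (S1) on day 1
roster[1, 0, 1] = 1  # nurse 2 takes the noon shift (S2) on day 1
```

Summing this array along different axes then yields the quantities used by the constraints below (nurses per shift, shifts per nurse per day, and so on).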

HC1. In this constraint, all demanded shifts are assigned to nurses:

$$\sum_{n=1}^{N} S_{nds} = E_{ds} \quad \forall d \in D,\ s \in S, \quad (2)$$
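As a concrete illustration of (2), a check of HC1 can be sketched as follows (this assumes the binary `roster[n, d, s]` representation; the function name is ours, not from the paper):

```python
import numpy as np

def satisfies_hc1(roster, demand):
    """HC1: for every day d and shift s, the number of assigned nurses,
    i.e. the sum of roster[n, d, s] over n, must equal the demand E[d, s]."""
    return bool(np.array_equal(roster.sum(axis=0), demand))

# Toy instance: 2 nurses, 1 day, 2 shifts, one nurse demanded per shift.
roster = np.array([[[1, 0]], [[0, 1]]])  # shape (N, D, S) = (2, 1, 2)
demand = np.array([[1, 1]])              # E[d, s]
print(satisfies_hc1(roster, demand))     # True
```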


[Figure 1: Illustrative example of the Nurse Rostering Problem: a roster assigning five nurses over 11 days, with shift codes S1 = day shift, S2 = noon shift, S3 = night shift, and S4 = free shift.]

Table 1: Hard constraints.

HC1: All demanded shifts are assigned to a nurse.
HC2: A nurse can work only a single shift per day.
HC3: The minimum number of nurses required for the shift.
HC4: The total number of working days for the nurse should be between the maximum and minimum range.
HC5: A day shift followed by a night shift is not allowed.

Table 2: Soft constraints.

SC1: The maximum number of shifts assigned to each nurse.
SC2: The minimum number of shifts assigned to each nurse.
SC3: The maximum number of consecutive working days assigned to each nurse.
SC4: The minimum number of consecutive working days assigned to each nurse.
SC5: The maximum number of consecutive days assigned to each nurse on which no shift is allotted.
SC6: The minimum number of consecutive days assigned to each nurse on which no shift is allotted.
SC7: The maximum number of consecutive working weekends with at least one shift assigned to each nurse.
SC8: The minimum number of consecutive working weekends with at least one shift assigned to each nurse.
SC9: The maximum number of weekends with at least one shift assigned to each nurse.
SC10: Specific working day.
SC11: Requested day off.
SC12: Specific shift on.
SC13: Specific shift off.
SC14: Nurse not working on the unwanted pattern.

where $E_{ds}$ is the number of nurses required on day $d$ at shift $s$, and $S_{nds}$ is the allocation of nurses in the feasible solution roster.

HC2. In this constraint, each nurse can work no more than one shift per day:

$$\sum_{s=1}^{S} S_{nds} \le 1 \quad \forall n \in N,\ d \in D, \quad (3)$$

where $S_{nds}$ is the allocation of nurse $n$ at shift $s$ on day $d$.

HC3. This constraint deals with the minimum number of nurses required for each shift:

$$\sum_{n=1}^{N} S_{nds} \ge \min{}_{ds} \quad \forall d \in D,\ s \in S, \quad (4)$$


where $\min{}_{ds}$ is the minimum number of nurses required for shift $s$ on day $d$.

HC4. In this constraint, the total number of working days for each nurse should range between the minimum and maximum for the given scheduling period:

$$W_{\min} \le \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} \le W_{\max} \quad \forall n \in N. \quad (5)$$

The average working shift for a nurse can be determined using

$$W_{\text{avg}} = \frac{1}{N} \sum_{n=1}^{N} \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds}, \quad (6)$$

where $W_{\min}$ and $W_{\max}$ are the minimum and maximum number of working days in the scheduling period, and $W_{\text{avg}}$ is the average working shift of a nurse.

HC5. In this constraint, shift 1 followed by shift 3 is not allowed; that is, a day shift followed by a night shift is not allowed:

$$\sum_{n=1}^{N} \sum_{d=1}^{D} \left( S_{n,d,s_3} + S_{n,d+1,s_1} \right) \le 1 \quad \forall s \in S. \quad (7)$$
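The hard constraints HC2 and HC5 can be checked directly on the binary roster array. The following sketch assumes the `roster[n, d, s]` representation with shift index 0 for S1 (day) and 2 for S3 (night), and follows the day-then-night reading of HC5 given in Table 1; the function names are our own:

```python
import numpy as np

def satisfies_hc2(roster):
    """HC2: each nurse works at most one shift per day, i.e. the sum of
    roster[n, d, s] over s is at most 1 for every nurse n and day d."""
    return bool((roster.sum(axis=2) <= 1).all())

def satisfies_hc5(roster, s_day=0, s_night=2):
    """HC5 (as stated in Table 1): a day shift on day d must not be
    followed by a night shift on day d + 1 for any nurse."""
    return bool((roster[:, :-1, s_day] + roster[:, 1:, s_night] <= 1).all())

# One nurse, two days, three shifts: day shift then night shift violates HC5.
bad = np.zeros((1, 2, 3), dtype=int)
bad[0, 0, 0] = 1  # day shift on day 1
bad[0, 1, 2] = 1  # night shift on day 2
print(satisfies_hc2(bad), satisfies_hc5(bad))  # True False
```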

SC1. The maximum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\left( \left( \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} - \Phi_n^{ub} \right), 0 \right) \quad \forall n \in N, \quad (8)$$

where $\Phi_n^{ub}$ is the maximum number of shifts assigned to nurse $n$.

SC2. The minimum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\left( \left( \Phi_n^{lb} - \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} \right), 0 \right) \quad \forall n \in N, \quad (9)$$

where $\Phi_n^{lb}$ is the minimum number of shifts assigned to nurse $n$.

SC3. The maximum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduling period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\left( \left( C_n^k - \Theta_n^{ub} \right), 0 \right) \quad \forall n \in N, \quad (10)$$

where $\Theta_n^{ub}$ is the maximum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_n^k$ is the length of the $k$th working span of nurse $n$.

SC4. The minimum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduling period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\left( \left( \Theta_n^{lb} - C_n^k \right), 0 \right) \quad \forall n \in N, \quad (11)$$
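Soft constraints SC3 and SC4 both operate on the lengths $C_n^k$ of a nurse's consecutive working spans. A minimal sketch of how those spans and the SC3 penalty of (10) could be computed (the helper names are ours, and the 0/1 working-day vector, with 1 meaning at least one shift that day, is an assumed intermediate):

```python
def working_spans(worked_days):
    """Lengths C_n^k of the maximal runs of consecutive 1s in a
    nurse's 0/1 working-day vector."""
    spans, run = [], 0
    for w in worked_days:
        if w:
            run += 1
        elif run:
            spans.append(run)
            run = 0
    if run:
        spans.append(run)
    return spans

def sc3_penalty(worked_days, theta_ub):
    """SC3: sum over spans k of max(C_n^k - Theta_n^ub, 0)."""
    return sum(max(c - theta_ub, 0) for c in working_spans(worked_days))

print(working_spans([1, 1, 1, 0, 1, 1, 0, 1]))   # [3, 2, 1]
print(sc3_penalty([1, 1, 1, 0, 1, 1, 0, 1], 2))  # 1
```

The SC4-SC8 penalties follow the same span-based pattern, with free days or working weekends in place of working days.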

where $\Theta_n^{lb}$ is the minimum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_n^k$ is the length of the $k$th working span of nurse $n$.

SC5. The maximum number of consecutive days assigned to each nurse on which no shift is allotted for the given scheduling period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\left( \left( \eth_n^k - \varphi_n^{ub} \right), 0 \right) \quad \forall n \in N, \quad (12)$$

where $\varphi_n^{ub}$ is the maximum number of consecutive free days of nurse $n$, $\Gamma_n$ is the total number of consecutive free spans of nurse $n$ in the roster, and $\eth_n^k$ is the length of the $k$th free span of nurse $n$.

SC6. The minimum number of consecutive days assigned to each nurse on which no shift is allotted for the given scheduling period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\left( \left( \varphi_n^{lb} - \eth_n^k \right), 0 \right) \quad \forall n \in N, \quad (13)$$

where $\varphi_n^{lb}$ is the minimum number of consecutive free days of nurse $n$.

SC7. The maximum number of consecutive working weekends with at least one shift assigned to each nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\left( \left( \zeta_n^k - \Omega_n^{ub} \right), 0 \right) \quad \forall n \in N, \quad (14)$$

where $\Omega_n^{ub}$ is the maximum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working weekend spans of nurse $n$ in the roster, and $\zeta_n^k$ is the length of the $k$th working weekend span of nurse $n$.

SC8. The minimum number of consecutive working weekends with at least one shift assigned to each nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\left( \left( \Omega_n^{lb} - \zeta_n^k \right), 0 \right) \quad \forall n \in N, \quad (15)$$


where $\Omega_n^{lb}$ is the minimum number of consecutive working weekends of nurse $n$.

SC9. The maximum number of weekends with at least one shift assigned to each nurse in four weeks is as follows:

$$\sum_{k=1}^{\kappa_n} \max\left( \left( \omega_n^k - \varpi_n^{ub} \right), 0 \right) \quad \forall n \in N, \quad (16)$$

where $\omega_n^k$ is the number of working days at the $k$th weekend of nurse $n$, $\varpi_n^{ub}$ is the maximum number of weekend working days for nurse $n$, and $\kappa_n$ is the total count of weekends in the scheduling period of nurse $n$.

SC10. The nurse can request to work on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \lambda_n^d = 1 \quad \forall n \in N, \quad (17)$$

where $\lambda_n^d$ is the day request from nurse $n$ to work any shift on a particular day $d$.

SC11. The nurse can request not to work on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \lambda_n^d = 0 \quad \forall n \in N, \quad (18)$$

where $\lambda_n^d$ is the request from nurse $n$ not to work any shift on a particular day $d$.

SC12. The nurse can request to work a particular shift on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_n^{ds} = 1 \quad \forall n \in N, \quad (19)$$

where $\Upsilon_n^{ds}$ is the shift request from nurse $n$ to work a particular shift $s$ on a particular day $d$.

SC13. The nurse can request not to work a particular shift on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_n^{ds} = 0 \quad \forall n \in N, \quad (20)$$

where $\Upsilon_n^{ds}$ is the shift request from nurse $n$ not to work a particular shift $s$ on a particular day $d$.

SC14. The nurse should not work on an unwanted pattern suggested for the scheduling period:

$$\sum_{u=1}^{\varrho_n} \mu_n^u \quad \forall n \in N, \quad (21)$$

where $\mu_n^u$ is the total count of occurrences of pattern $u$ for nurse $n$, and $\varrho_n$ is the set of unwanted patterns suggested for nurse $n$.

The objective function of the NRP is to maximize nurse preferences and minimize the penalty cost from violations of soft constraints, as in

$$\min f\left(S_{nds}\right) = \sum_{sc=1}^{14} P_{sc}\left( \sum_{n=1}^{N} \sum_{s=1}^{S} \sum_{d=1}^{D} S_{nds} \right) \ast T_{sc}\left( \sum_{n=1}^{N} \sum_{s=1}^{S} \sum_{d=1}^{D} S_{nds} \right). \quad (22)$$

Here, $sc$ indexes the set of soft constraints listed in Table 2, $P_{sc}(x)$ is the penalty weight for a violation of soft constraint $sc$, and $T_{sc}(x)$ is the total number of violations of soft constraint $sc$ in the roster solution. It should be noted that the penalty function [32] is used in the NRP to improve performance and to provide a fair comparison with other optimization algorithms.
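In the common case where each $P_{sc}$ is a fixed penalty weight and $T_{sc}$ a violation count, (22) reduces to a weighted sum over the soft constraints. A minimal sketch under that assumption (the weights and counts below are invented for illustration):

```python
def roster_cost(violations, weights):
    """Soft-constraint cost of a roster, reading (22) as a weighted sum:
    for each soft constraint sc, penalty weight P_sc times the number of
    violations T_sc found in the roster."""
    return sum(weights[sc] * violations[sc] for sc in violations)

# Hypothetical violation counts and penalty weights for three constraints.
violations = {"SC1": 2, "SC3": 1, "SC11": 0}
weights = {"SC1": 10, "SC3": 15, "SC11": 30}
print(roster_cost(violations, weights))  # 35
```

Since hard constraints are enforced for feasibility, this cost is what the search minimizes when comparing candidate rosters.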

4. Bee Colony Optimization

4.1. Natural Behavior of Honey Bees. Swarm intelligence is an emerging discipline for the study of problems which require an optimal approach rather than the traditional approach. Swarm intelligence is the part of artificial intelligence based on the study of the behavior of social insects. It is composed of many individual actions using a decentralized and self-organized system. Swarm behavior is characterized by the natural behavior of many species, such as fish schools, herds of animals, and flocks of birds, formed for the biological requirement to stay together. A swarm implies an aggregation of animals such as birds, fishes, ants, and bees based on collective behavior. The individual agents in a swarm have a stochastic behavior which depends on their local perception of the neighborhood. Communication between insects is formed with the help of the colonies, and it promotes collective intelligence among the colonies.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The proximity of the swarm must be capable of providing simple space and time computations, and the swarm should respond to the quality factors. The swarm should allow diverse activities and should not be restricted to narrow channels. The swarm should maintain stability and should not fluctuate based on behavior. The adaptability of the swarm must enable it to change its behavior mode when required. Several hundreds of bees from a swarm work together to find nesting sites and select the best nest site. Bee Colony Optimization is inspired by this natural behavior of bees. The bee optimization algorithm is inspired by the group decision-making processes of honey bees. A honey bee searches for the best nest site by considering both speed and accuracy.

In a bee colony there are three different types of bees: a single queen bee, thousands of male drone bees, and thousands of worker bees.

(1) The queen bee is responsible for creating new colonies by laying eggs.

Computational Intelligence and Neuroscience 7

(2) The male drone bees mate with the queen and are then discarded from the colonies.

(3) The remaining female bees in the hive are called worker bees, and they are the building blocks of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honey bee comb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy is exhausted. After reaching the hive, scout bees share their information and start to explore rich food source locations with forager bees. The scout bee's information includes the direction, quality, quantity, and distance of the food source it found. Information about a food source is communicated to foragers through dance. There are two types of dance: the round dance and the waggle dance. The round dance gives the direction of the food source when the distance is small. The waggle dance indicates the position and the direction of the food source; the distance can be measured by the speed of the dance. A greater speed indicates a smaller distance, and the quantity of the food depends on the wriggling of the bee. The exchange of information among hive mates serves to acquire collective knowledge. Forager bees silently observe the behavior of scout bees to acquire knowledge about the directions and details of the food source.

The group decision process of honey bees is used for searching the best food source and nest site. The decision-making process is based on the swarming process of the honey bee. Swarming is the process in which the queen bee and half of the worker bees leave their hive to explore a new colony. The remaining worker bees and a daughter queen remain in the old hive to monitor the waggle dance. After leaving their parental hive, swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony. This provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source and allows the colony to gather food quickly and efficiently. The decision-making process to find the best nest site can be carried out by swarm bees in two ways: consensus and quorum. Consensus is the group agreement taken into account, and quorum is the decision process triggered when the bee vote reaches a threshold value.

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds its neighboring solution from the current path. The algorithm has a forward and a backward pass. In the forward pass, every bee starts to explore the neighborhood of its current solution and performs constructive and improving moves. All bees in the hive start with a constructive move, and then local search begins. In the backward pass, bees share the objective values obtained in the forward pass. The bees with higher priority are used to discard all nonimproving moves. The bees then either continue to explore in the next forward pass or continue the same process within the neighborhood. The flowchart for BCO is shown in Figure 2. BCO is proficient in solving combinatorial optimization problems by creating colonies of the multiagent system. The pseudocode for BCO is described in Algorithm 1. The bee colony system provides standard, well-organized, and well-coordinated teamwork and multitasking performance [33].

Figure 2: Flowchart of the BCO algorithm (initialization, forward pass, construction move, backward pass, update the best solution, stopping criteria).

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments and filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents through the Modified Nelder-Mead Method (MNMM). Each agent in a fragment passes information about the optimized point using MNMM. Using MNMM, the best points are obtained, and the best solution is chosen by the decision-making process of honey bees. The algorithm is a simplex-based method: a D-dimensional simplex is initialized with D + 1 vertices, so in two dimensions it forms a triangle, and in three dimensions it forms a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is chosen


Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward Pass:
    For every bee:
    (2.1) set i = 1;
    (2.2) evaluate all possible construction moves;
    (2.3) based on the evaluation, choose one move using the Roulette Wheel;
    (2.4) i = i + 1; if (i ≤ N), go to step (2.2),
    where i is the counter for construction moves and N is the number of construction moves during one forward pass.
(3) Return to Hive.
(4) Backward Pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) Calculate the probability (or use logical reasoning) for each bee to continue with its computed solution and become a recruiter bee.
(7) For every follower, choose the new solution from the recruiters.
(8) If the stopping criterion is not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1: Pseudocode of BCO.
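As a rough illustration of Algorithm 1, the forward and backward passes can be sketched in Python. The bit-flip construction move, the toy cost function, and the 0.5 follower probability below are simplifying assumptions for this sketch, not the paper's NRP operators.

```python
import random

def bco(cost, n_bees=10, n_moves=5, n_iters=50, dim=12, seed=1):
    """Minimal BCO sketch: the forward pass performs constructive/improving
    moves; the backward pass shares costs and recruits follower bees."""
    rng = random.Random(seed)
    bees = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(n_bees)]
    best = min(bees, key=cost)[:]
    for _ in range(n_iters):
        # Forward pass: each bee explores its neighborhood with bit-flip moves,
        # keeping a move only when it improves the objective.
        for b in bees:
            for _ in range(n_moves):
                cand = b[:]
                cand[rng.randrange(dim)] ^= 1
                if cost(cand) < cost(b):
                    b[:] = cand
        # Backward pass: bees advertise solutions; a follower bee abandons its
        # own solution and copies a recruiter chosen by roulette wheel
        # (lower cost -> higher weight).
        weights = [1.0 / (1.0 + cost(b)) for b in bees]
        for i in range(n_bees):
            if rng.random() < 0.5:  # assumed follower probability
                bees[i] = rng.choices(bees, weights=weights)[0][:]
        best = min(bees + [best], key=cost)[:]
    return best

# Toy objective: number of assigned 1-bits (optimum is the all-zero vector).
sol = bco(cost=sum)
print(sum(sol))
```

With a real NRP instance, the cost function would be the penalized constraint-violation objective and the construction move a shift reassignment.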

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

The simplex operations update each vertex closer to its optimal solution; the vertices are ordered based on their fitness values. The best vertex is A_b, the second best vertex is A_s, and the worst vertex is A_w, determined based on the objective function. Let A = (x, y) be a vertex in the triangle representing a food source point: A_b = (x_b, y_b), A_s = (x_s, y_s), and A_w = (x_w, y_w) are the positions of the food source points, that is, the local optimal points. The objective function values for A_b, A_s, and A_w are calculated toward the food source points using (23).

The objective function used to construct the simplex for local search with MNMM is formulated as

f(x, y) = x² − 4x + y² − y − xy. (23)

Based on the objective function values, the vertices (food points) are ordered ascending together with their corresponding honey bee agents. The obtained values are ordered as A_b ≤ A_s ≤ A_w with their honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail below.
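For example, the ascending ordering of vertices under the sample objective of (23) can be checked numerically; the three food-point coordinates below are arbitrary illustrations, not values from the paper.

```python
# Evaluate the sample objective of equation (23), f(x, y) = x^2 - 4x + y^2 - y - xy,
# and order three simplex vertices (food points) ascending, as A_b <= A_s <= A_w.
def f(v):
    x, y = v
    return x**2 - 4*x + y**2 - y - x*y

vertices = [(0.0, 0.0), (1.2, 0.0), (0.0, 0.8)]  # arbitrary food points
A_b, A_s, A_w = sorted(vertices, key=f)          # best, second best, worst
print(A_b, A_s, A_w)
```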

(1) In the simplex triangle, the reflection coefficient α, expansion coefficient γ, contraction coefficient β, and shrinkage coefficient δ are initialized.

(2) The objective function for the simplex triangle vertices is calculated, and the vertices are ordered. The best vertex with the lowest objective value is A_b, the second best vertex is A_s, and the worst vertex is A_w; these vertices are ordered based on the objective function as A_b ≤ A_s ≤ A_w.

(3) The first two best vertices, A_b and A_s, are selected, and the construction proceeds by calculating the midpoint of the line segment joining the two best vertices, that is, food positions. The objective function decreases as the honey bee agent associated with the worst-position vertex moves toward the best and second best vertices; the value decreases as the agent moves from A_w to A_b and from A_w to A_s. The midpoint vertex A_m of the line joining the best and second best vertices is calculated using

A_m = (A_b + A_s)/2. (24)

(4) A reflection vertex A_r is generated as the reflection of the worst point A_w. The objective function value f(A_r) is calculated and compared with the worst-vertex value f(A_w). If f(A_r) < f(A_w), proceed with step (5). The reflection vertex is calculated using

A_r = A_m + α(A_m − A_w), where α > 0. (25)

(5) The expansion process starts when the objective function value at the reflection vertex A_r is less than that at the worst vertex A_w, f(A_r) < f(A_w), and the line segment is further extended to A_e through A_r and A_w. The vertex point A_e is calculated by (26). If the objective function value at A_e is less than that at the reflection vertex A_r, f(A_e) < f(A_r), then the expansion is accepted, and the honey bee agent has found a better food position than the reflection point:

A_e = A_r + γ(A_r − A_m), where γ > 1. (26)

(6) The contraction process is carried out when f(A_r) < f(A_s) and f(A_r) ≤ f(A_b) for replacing A_b with


Figure 3: Nelder-Mead operations on the simplex triangle (A_b, A_s, A_w): (a) simplex triangle; (b) reflection (A_r); (c) expansion (A_e); (d) contraction (A_h < A_r); (e) contraction (A_r < A_h); (f) shrinkage (A′_b, A′_s).

A_r. If f(A_r) > f(A_h), then the direct contraction without the replacement of A_b with A_r is performed. The contraction vertex A_c can be calculated using

A_c = βA_r + (1 − β)A_m, where 0 < β < 1. (27)

If f(A_r) ≤ f(A_b), the contraction is done and A_c replaces A_h; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process at step (6) fails; it is done by shrinking all the vertices of the simplex triangle except A_h using (28). If the objective function values of the reflection and contraction phases are not less than that of the worst point, then the vertices A_s and A_w must be shrunk towards A_h. Thus, the vertices of smaller value form a new simplex triangle with the other two best vertices:

A_i = δA_i + A_1(1 − δ), where 0 < δ < 1. (28)

(8) The calculations are stopped when the termination condition is met.
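The eight steps above can be sketched as one Nelder-Mead loop. This sketch follows the textbook simplex update with common coefficient choices (α = 1, γ = 2, β = 0.5, δ = 0.5) and a simplified contraction branch, rather than the exact branch order of the paper's modified variant; f23 is the sample objective of (23), whose exact minimum is at (3, 2).

```python
def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, n_iters=200):
    """Textbook Nelder-Mead loop over a list of coordinate tuples."""
    for _ in range(n_iters):
        simplex.sort(key=f)                      # order: best ... worst
        best, worst = simplex[0], simplex[-1]
        dim = len(best)
        # Midpoint (centroid) of all vertices except the worst, cf. step (3).
        m = tuple(sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
                  for i in range(dim))
        # Reflection, cf. step (4): A_r = A_m + alpha * (A_m - A_w).
        r = tuple(m[i] + alpha * (m[i] - worst[i]) for i in range(dim))
        if f(simplex[0]) <= f(r) < f(simplex[-2]):
            simplex[-1] = r
        elif f(r) < f(simplex[0]):
            # Expansion, cf. step (5): A_e = A_r + gamma * (A_r - A_m).
            e = tuple(r[i] + gamma * (r[i] - m[i]) for i in range(dim))
            simplex[-1] = e if f(e) < f(r) else r
        else:
            # Contraction, cf. step (6): A_c = beta * A_r + (1 - beta) * A_m.
            c = tuple(beta * r[i] + (1 - beta) * m[i] for i in range(dim))
            if f(c) < f(worst):
                simplex[-1] = c
            else:
                # Shrinkage, cf. step (7): A_i = delta * A_i + A_1 * (1 - delta).
                simplex = [best] + [tuple(delta * v[i] + (1 - delta) * best[i]
                                          for i in range(dim)) for v in simplex[1:]]
    return min(simplex, key=f)

# Sample objective of equation (23).
f23 = lambda v: v[0]**2 - 4*v[0] + v[1]**2 - v[1] - v[0]*v[1]
x, y = nelder_mead(f23, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(round(x, 3), round(y, 3))
```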

Algorithm 2 describes the pseudocode for the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is explained in Figure 5.

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, inspired by the natural foraging behavior of bees. The algorithm consists of two steps: the forward and the backward pass. During the forward pass, bees start to explore the neighborhood of their current solutions and find all possible ways. In the backward pass, bees return to the hive and share the objective function values of their current solutions. The nectar amount is calculated using a probability


Figure 4: Bees' search movement based on MNMM (successive simplex updates through A_m, A_r, A_e, A_c1, A_c2, and A_new).

function, and the solution is advertised; the bee which has the better solution is given higher priority. The remaining bees, based on the probability value, decide whether to explore their own solution or proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unison and interact with each other to achieve goals based on the group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of a food particle by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bees start the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized point through the natural behavior of bees called the waggle dance. When all the information about the best food sources has been shared, the best among the optimized points is chosen using the decision-making processes of honey bees called the consensus and quorum methods [34, 35].

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to achieve a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in its local environment; and agents attempt to perform their goal. All agents interact with each other and make decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the fixed parameters.

(1) Consider e independent parameters used to calculate the objective function. The range of the g-th parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value of the g-th parameter.

(2) Thus, the objective function can be formulated over e axes; each axis contains the total range of a single parameter in a different dimension.

(3) Each axis is divided into smaller parts, each part called a step. The g-th axis can be divided into n_g steps, each of length L_g, where g = 1 to e. The relationship between n_g and L_g is given as

n_g = (Q_gi − Q_gf)/L_g. (29)

(4) Then each axis is divided into branches; the branches of the e axes together form an


Modified Nelder-Mead Method for directed honey bee food search
(1) Initialization:
    A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1;
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage;
    f is the objective function to be minimized.
(2) Ordering:
    Order the vertices in the simplex from the lowest objective function value f(A_1) to the highest value f(A_(n+1)), so that A_1 ≤ A_2 ≤ ... ≤ A_(n+1).
(3) Midpoint:
    Calculate the midpoint of the best vertices in the simplex, A_m = Σ (A_i/n), where i = 1, 2, ..., n.
(4) Reflection Process:
    Calculate the reflection point A_r by A_r = A_m + α(A_m − A_(n+1)).
    if f(A_1) ≤ f(A_r) ≤ f(A_n) then A_n ← A_r and go to step (8) end if
(5) Expansion Process:
    if f(A_r) ≤ f(A_1) then calculate the expansion point using A_e = A_r + γ(A_r − A_m) end if
    if f(A_e) < f(A_r) then A_n ← A_e and go to step (8)
    else A_n ← A_r and go to step (8) end if
(6) Contraction Process:
    if f(A_n) ≤ f(A_r) ≤ f(A_(n+1)) then compute the outside contraction by A_c = βA_r + (1 − β)A_m end if
    if f(A_r) ≥ f(A_(n+1)) then compute the inside contraction by A_c = βA_(n+1) + (1 − β)A_m end if
    if f(A_r) ≥ f(A_n) then the contraction is done between A_m and the best vertex among A_r and A_(n+1) end if
    if f(A_c) < f(A_r) then A_n ← A_c and go to step (8) else go to step (7) end if
    if f(A_c) ≥ f(A_(n+1)) then A_(n+1) ← A_c and go to step (8) else go to step (7) end if
(7) Shrinkage Process:
    Shrink towards the best solution with new vertices by A_i = δA_i + A_1(1 − δ), where i = 2, ..., n + 1.
(8) Stopping Criteria:
    Order and relabel the new vertices of the simplex based on their objective function values and go to step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = ∏_{g=1}^{e} n_g. (30)

(5) The starting point of each agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]. (31)
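Under illustrative assumptions (two parameters whose ranges and step lengths are chosen only for this sketch), equations (29) and (30) amount to partitioning the parameter space into a grid and giving each bee one cell as a starting region; here each bee is placed at the center of its cell rather than by the literal formula (31).

```python
from itertools import product
from math import prod

# Illustrative ranges [Q_gi, Q_gf] and step lengths L_g (assumed, not from the paper).
Q = [(0.0, 10.0), (0.0, 6.0)]   # e = 2 independent parameters
L = [2.5, 2.0]

# Equation (29): number of steps on each axis, n_g = |Q_gi - Q_gf| / L_g.
n = [int(abs(qi - qf) / lg) for (qi, qf), lg in zip(Q, L)]

# Equation (30): total number of e-dimensional volumes, N_v = product of all n_g.
N_v = prod(n)

# Starting point of each bee: one point inside each volume (here, the cell center).
starts = [tuple(qi + (k + 0.5) * lg for (qi, qf), lg, k in zip(Q, L, idx))
          for idx in product(*(range(s) for s in n))]
print(n, N_v, starts[0])
```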

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done by a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighboring nest sites for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for food particles, report on the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for other quiescent bees to advertise


Figure 5: Workflow of the Modified Nelder-Mead Method (initialization of the coefficients α, γ, β, δ and the objective function f(A); ordering and labeling of the vertices; midpoint calculation; then reflection, expansion, contraction, and shrinkage until the termination criteria are met).


Multi-Objective Directed Bee Colony Optimization
(1) Initialization:
    f(x) is the objective function to be minimized.
    Initialize e parameters and the step lengths L_g, where g = 0 to e.
    Initialize the initial and final values of each parameter as Q_gi and Q_gf.
    /* Solution Representation */
    The solutions are represented as binary values, which can be generated as follows:
    for each solution i = 1, ..., n:
        X_i = {x_i1, x_i2, ..., x_id | d ∈ total days & x_id = (rand ≥ 0.29) for all d}
    end for
(2) The number of steps on each axis can be calculated using n_g = (Q_gi − Q_gf)/L_g.
(3) The total number of volumes can be calculated using N_v = ∏_{g=1}^{e} n_g.
(4) The midpoint of each volume, the starting point of the exploration, can be calculated using
    [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the values of the optimized points in the vector table as [f(V_1), f(V_2), ..., f(V_Nv)].
(7) The globally optimized point is chosen by the bee decision-making process using the consensus and quorum method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_Nv)].

Algorithm 3: Pseudocode of MODBCO.

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting an individual solution, gives it to the centralized system. To select the best optimal solution in the minimization case, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This formulation finds the minimal optimal case among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V; V is a vector which consists of e elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement within the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached so the group can start acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_Nv)]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained by the group decision-making process. When the solution reaches the optimal threshold level ξ_q, the solution is considered final based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is small, the computation time decreases, but it leads to inaccurate experimental results. The threshold value should therefore be chosen to attain less computational time together with an accurate experimental result.
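A minimal sketch of how consensus (33) and a quorum threshold might operate on the recorded vector table; the objective values, the threshold ξ_q, and the quorum size below are illustrative assumptions.

```python
# Objective values f(V_1), ..., f(V_Nv) recorded for each explored volume (illustrative).
vector_table = [62.0, 55.0, 48.0, 71.0, 53.0]

# Consensus, equation (33): group agreement on the globally optimized point.
f_global = min(vector_table)

# Quorum: the decision is taken once the number of "votes" (solutions at or
# below the threshold xi_q) reaches the quorum size; a looser threshold cuts
# computation time but risks a less accurate result.
xi_q, quorum_size = 55.0, 2
votes = sum(1 for v in vector_table if v <= xi_q)
quorum_met = votes >= quorum_size
print(f_global, votes, quorum_met)
```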

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparison with five different competitor methods. Six performance metrics are considered to investigate the significance and evaluate the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage of the difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_NRP-Instance − fitness_i)/Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence is a measure to evaluate the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions. The performance of the convergence time is improved by the Average Convergence, which explores more solutions in the population. The Average Convergence is calculated using

AC = Σ_{i=1}^{r} [1 − (Avg_fitness_i − Optimal_NRP-Instance)/Optimal_NRP-Instance] × 100, (35)

where r is the number of instances in the given dataset.


Figure 6: Flowchart of MODBCO (initialization of the parameters e, L_g, Q_gi, Q_gf and the objective function f(X); calculation of the number of steps, the number of volumes, and the midpoint values; exploration of the search using MNMM; waggle dance; consensus or quorum with a threshold level; global optimum solution).


6.1.3. Standard Deviation. Standard deviation (SD) is the measure of the dispersion of a set of values from their mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset. The ASD can be calculated using

ASD = sqrt( Σ_{i=1}^{r} (value obtained in instance_i − mean value of the instance)² ), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best − Convergence_worst, (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i − Cost_NRP-Instance)/(total number of instances), (38)

where r is the number of instances in the given dataset.
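The metrics in (34) to (38) can be sketched directly from their formulas. The per-instance numbers below are made-up illustrations; the LER sum uses the absolute gap (fitness never beats the known optimum in a minimization problem), and AC is taken as a mean over the r instances.

```python
from math import sqrt

# Illustrative values for r = 3 instances (not results from the paper).
optimal = [56, 58, 51]        # known optima of the NRP instances
best    = [56, 59, 51]        # best value obtained per instance
avg_fit = [60.0, 63.0, 54.0]  # average fitness per instance

r = len(optimal)

# Equation (34): Least Error Rate (%), gap between known optimum and best value.
LER = 100 * sum(abs(o - b) / o for o, b in zip(optimal, best))

# Equation (35): Average Convergence (%), quality of the population on average.
AC = 100 * sum(1 - (a - o) / o for o, a in zip(optimal, avg_fit)) / r

# Equation (36): Average Standard Deviation of the obtained values.
mean = sum(best) / r
ASD = sqrt(sum((b - mean) ** 2 for b in best))

# Equation (37): Convergence Diversity, best minus worst convergence rate.
CD = 0.98 - 0.71   # assumed convergence rates of best/worst individuals

# Equation (38): Average Cost Diversion from the known instance costs.
ACD = sum(b - o for o, b in zip(optimal, best)) / r

print(round(LER, 3), round(AC, 2), round(ASD, 2), round(CD, 2), round(ACD, 2))
```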

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided based on complexity and size into three tracks, namely the sprint, medium, and long datasets. Each track is divided into four types, namely early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early types, 10 late types, 10 hidden types, and 3 hint type datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track is medium, which is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 describes the detailed features of the datasets. Columns one to three index the dataset by track, type, and instance. Columns four to seven give the numbers of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol "×" shows that there is no shift off or day off in the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in case 1 to case 4 are small in size, those in case 5 to case 8 are considered medium in size, and the larger datasets are classified from case 9 to case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are done with different optimization algorithms under similar environment conditions to assess the performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Empirical evaluations set the parameters of the proposed system; appropriate parameter values are determined based on preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are presented in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of INRC2010 datasets based on the size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

such as ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions vary significantly [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show whether the performance of the algorithms differs.


Table 7: Experimental result with respect to best value (each entry is best/worst).

Instance | Optimal value | MODBCO | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued.

Instance | Optimal value | MODBCO | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.
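The one-way ANOVA decision rule described above can be sketched in a few lines. The fragment below is an illustrative sketch, not the authors' code: the F statistic is computed by hand for k groups, and the sample data are made-up best-value runs standing in for the 69 per-instance results of each algorithm.

```python
# Illustrative one-way ANOVA F statistic (Section 6.3.1), computed by hand.
# The groups below are hypothetical best-value samples, not the paper's data.
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of numeric samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of observations around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

groups = [[56, 59, 51, 59], [63, 66, 60, 68], [58, 64, 59, 67]]  # hypothetical
f, db, dw = one_way_anova_f(groups)
# Compare f against the critical F(db, dw) value at alpha = 0.05;
# f above the critical value means the null hypothesis is rejected.
```

At alpha = 0.05 one would look up the critical value of the F distribution for (df_between, df_within) degrees of freedom, exactly as the Sig. column of Tables 8, 10, 12, 14, 16, and 18 summarizes.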

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies significant and nonsignificant differences between any two methods: it ranks the methods by their mean values in increasing or decreasing order and groups methods that are not significantly different.
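The grouping step can be illustrated with a simplified sketch (not the authors' code): methods are ranked by mean and merged into a subset while the subset's span stays below a least significant range. A full DMRT derives that range from the studentized range distribution; here it is passed in as a constant for illustration, and the means are the rounded per-instance best-value means from Table 8(b).

```python
# Simplified Duncan-style grouping (Section 6.3.2). A real DMRT computes the
# least significant range (LSR) from the studentized range distribution; here
# the LSR is an assumed constant chosen for demonstration.
def duncan_groups(means, lsr):
    """means: dict name -> mean; lsr: least significant range. Returns subsets."""
    ranked = sorted(means.items(), key=lambda kv: kv[1])  # ascending by mean
    groups, current = [], [ranked[0]]
    for name, mean in ranked[1:]:
        # Stay in the current subset while the subset's full span is within LSR.
        if mean - current[0][1] < lsr:
            current.append((name, mean))
        else:
            groups.append([n for n, _ in current])
            current = [(name, mean)]
    groups.append([n for n, _ in current])
    return groups

means = {"MODBCO": 120.2, "M2": 124.1, "M5": 128.1, "M3": 129.3,
         "M1": 143.2, "M4": 263.6}   # rounded means from Table 8(b)
print(duncan_groups(means, lsr=25.0))
# -> [['MODBCO', 'M2', 'M5', 'M3', 'M1'], ['M4']]
```

With this (assumed) LSR the sketch reproduces the two homogeneous subsets reported in Table 8(b): M4 alone in one subset, all other methods in the other.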

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets under a similar environmental setup, using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best (or worst) solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061.949 | 5 | 212.3898 | 3.620681 | 0.003
Within groups | 23933.354 | 408 | 58.66018 | |
Total | 24995.303 | 413 | | |

(b) DMRT test (Duncan test: best value; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. MODBCO achieved the best result on 34 of the 69 instances.

The statistical analysis tests ANOVA and DMRT for best values are shown in Table 8. The significance values are less than 0.05, which shows that the null hypothesis is rejected. A significant difference between


Table 9: Experimental result with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | -0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | -0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | -0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | -9.65 | 20.03 | -2.64 | 33.48 | 18.53 | -4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | -2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%).

various optimization algorithms is observed. The DMRT test shows the homogeneous groups: two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competitor techniques. The computational analysis based on the error rate (%) is shown in Table 9; out of the 33 instances of sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of instances attained a lower error rate; for the medium- and larger-sized datasets, the corresponding figures are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they could not find the optimal solution and got trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.
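The error-rate metric can be read as the relative deviation of the best obtained cost from the published optimum. The sketch below is a hedged formalization of that reading (the paper does not spell out the formula); it is consistent with negative entries in Table 9 marking newly found optima.

```python
# Hedged sketch of the error-rate metric: percentage deviation of the best
# obtained objective value from the published INRC2010 optimum. A negative
# rate means a solution below the published optimum was found.
def error_rate(best, optimal):
    return (best - optimal) / optimal * 100.0

# Sprint early 01: optimum 56, MODBCO best 56 (Table 7).
print(round(error_rate(56, 56), 2))    # 0.0
# Sprint hidden 06: optimum 134, MODBCO best 128 (a new optimum).
print(round(error_rate(128, 134), 2))  # -4.48
```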


Figure 8: Performance analysis with respect to Average Convergence.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 63868.09 | 5 | 12773.61796 | 15.26182 | 0.000
Within groups | 341482.0 | 408 | 836.9657384 | |
Total | 405350.1 | 413 | | |

(b) DMRT test (Duncan test: error rate; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, indicating rejection of the null hypothesis; thus there is a significant difference in error rate across the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution relates the average fitness of the population to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances and an 82% convergence rate on medium instances; for long instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that, as the problem size increases, the convergence rate reduces and becomes worse for many algorithms on larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
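One plausible formalization of this metric (the paper gives the definition in words only, so the formula below is an assumption, not the authors' stated one) expresses how close the population's average fitness is to the optimum, in percent: 100% when the average equals the optimum, and negative when the average exceeds twice the optimum, which matches the large negative entries in Table 11.

```python
# Hedged sketch: Average Convergence as closeness of the population's average
# fitness to the optimum, in percent. This formula is an assumed reading of
# the verbal definition in Section 6.4.3.
def average_convergence(avg_fitness, optimal):
    return (1.0 - (avg_fitness - optimal) / optimal) * 100.0

print(round(average_convergence(56.0, 56.0), 2))   # 100.0 (average at optimum)
print(round(average_convergence(75.0, 56.0), 2))   # 66.07
```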

The statistical test result for Average Convergence with the different optimization algorithms is given in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | -139.09 | 39.82
Case 7 | 56.34 | -32.73 | 24.52 | 26.23 | -54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | -584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | -94.85 | 66.95
Case 12 | 15.13 | -46.99 | -22.73 | -9.46 | -411.18 | -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation.

in the mean values of convergence across the different optimization algorithms. The ANOVA test leads to rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.

The statistical test result for Average Standard Deviation with the different types of optimization algorithms is shown in Table 14. There is a significant difference in the mean values of standard deviation across the different optimization algorithms. The ANOVA test shows that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 71264.2475 | 5 | 14252.8495 | 15.47047 | 0.0000
Within groups | 375887.826 | 408 | 921.29369 | |
Total | 447152.073 | 413 | | |

(b) DMRT test (Duncan test: Average Convergence; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
M4 | 69 | -47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% toward the optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
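The definition above (best convergence minus worst convergence in the population) reduces to a one-line computation; the sketch below uses hypothetical per-run convergence values, since the paper reports only per-case aggregates.

```python
# Hedged sketch: Convergence Diversity as the spread between the best and
# worst convergence values observed in the population (Section 6.4.5).
# Input values are assumed to already be convergence percentages.
def convergence_diversity(convergences):
    return max(convergences) - min(convergences)

runs = [87.0, 72.5, 80.3, 57.0, 66.1]   # hypothetical per-run convergences
print(convergence_diversity(runs))       # 30.0
```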

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are given in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms based on mean value: there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances still yielded the optimum value; the larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are given in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 697.4407 | 5 | 139.4881 | 13.6322 | 0.000
Within groups | 4174.76 | 408 | 10.23226 | |
Total | 4872.201 | 413 | | |

(b) DMRT test (Duncan test: Average Standard Deviation; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2 | Subset 3
MODBCO | 69 | 7.4743 | |
M2 | 69 | | 9.615152 |
M3 | 69 | | 9.677027 |
M5 | 69 | | 9.729477 |
M1 | 69 | | 10.13242 |
M4 | 69 | | | 11.9316
Sig. | | 1.00 | 0.394 | 1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments for solving the NP-hard combinatorial Nurse Rostering Problem were conducted with the proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of the proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 7. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 30.00 | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2 | 22.96 | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3 | 53.05 | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4 | 29.99 | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5 | 7.01 | 7.87 | 8.33 | 6.71 | 8.77 | 9.29
Case 6 | 20.89 | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7 | 43.69 | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8 | 27.67 | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9 | 6.89 | 8.10 | 8.22 | 8.04 | 8.24 | 9.10
Case 10 | 37.10 | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43 | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13 | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 5144.758 | 5 | 1028.952 | 9.287168 | 0.000
Within groups | 45203.48 | 408 | 110.7928 | |
Total | 50348.24 | 413 | | |

(b) DMRT test (Duncan test: Convergence Diversity; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2 | Subset 3
M3 | 69 | 18.363232 | |
MODBCO | 69 | 18.42029 | |
M1 | 69 | | 19.68116 |
M2 | 69 | | 20.57971 |
M4 | 69 | | 20.68116 |
M5 | 69 | | | 21.2464
Sig. | | 0.919 | 0.096 | 1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of the proposed algorithm and the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO attains the best values on the performance metrics more often than the competitor algorithms on INRC2010.

Compared with the other existing methods, the mean value of MODBCO is 19% closer to the optimum, and it attained lower worst values in addition to the best solutions. The datasets are divided by size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of the proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of the proposed algorithm is reduced by 77% compared with the other competitor methods.

The proposed system was tested on larger-sized datasets, and it performs considerably better than the other techniques. Incorporating the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm strengthens exploitation within the given exploration search space. This method balances exploration and exploitation without bias; thus MODBCO converges the population toward an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches for solving the NRP in terms of time complexity.
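To make the Nelder-Mead component concrete, the fragment below sketches a single reflection step using the α coefficient from Table 6. It is a didactic continuous-domain sketch, not the authors' Modified Nelder-Mead Method, which additionally handles the NRP's discrete shift assignments; the toy objective and simplex are assumptions.

```python
# Illustrative Nelder-Mead reflection step (alpha > 0, as in Table 6).
# This is a continuous-domain teaching fragment, not the paper's modified
# method for discrete nurse-shift assignments.
def reflect(simplex, f, alpha=1.0):
    """simplex: list of points (tuples); f: objective. Returns reflected point."""
    simplex = sorted(simplex, key=f)            # order points best-to-worst
    worst = simplex[-1]
    n = len(worst)
    # Centroid of all points except the worst.
    centroid = tuple(sum(p[i] for p in simplex[:-1]) / (len(simplex) - 1)
                     for i in range(n))
    # x_r = centroid + alpha * (centroid - worst): step away from the worst point.
    return tuple(centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n))

f = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2   # toy objective, minimum at (1, 2)
simplex = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(reflect(simplex, f))   # (1.0, 1.0): closer to the minimum than the worst point
```

Expansion (γ > 1), contraction (0 < β < 1), and shrinkage (0 < δ < 1) steps follow the same centroid-based pattern with their respective coefficients from Table 6.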

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five existing methods. To assess the performance of the proposed algorithm, 69 cases of various-sized datasets were chosen, and 34 out of the 69 instances obtained the best result. Thus, the algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP, and MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 0.00 | 6.40 | 1.40 | 3.20 | 5.20 | 1.60
Case 2 | -1.00 | 21.20 | 0.80 | 19.60 | 21.90 | 0.80
Case 3 | -0.40 | 8.10 | 1.80 | 10.60 | 11.00 | 0.90
Case 4 | -5.67 | 11.33 | -1.67 | 20.33 | 11.00 | -2.33
Case 5 | 5.20 | 24.00 | 6.80 | 6.80 | 40.00 | 9.80
Case 6 | 15.80 | 46.40 | 25.60 | 25.60 | 215.60 | 39.80
Case 7 | 13.20 | 46.00 | 16.80 | 16.80 | 67.80 | 20.20
Case 8 | 5.00 | 49.00 | 10.67 | 10.67 | 43.67 | 13.67
Case 9 | 1.20 | 40.80 | 5.00 | 5.00 | 127.60 | 20.20
Case 10 | 18.00 | 45.40 | 26.20 | 26.20 | 100.00 | 32.60
Case 11 | 26.00 | 70.00 | 33.60 | 33.60 | 335.40 | 40.60
Case 12 | 15.67 | 36.33 | 20.00 | 20.00 | 141.67 | 29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061.949 | 5 | 212.3898 | 4.919985 | 0.000
Within groups | 17612.867 | 408 | 43.16879 | |
Total | 18674.816 | 413 | | |

(b) DMRT test (Duncan test: Average Cost Diversion; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
MODBCO | 69 | 6.202899 |
M2 | 69 | 10.10145 |
M5 | 69 | 14.05797 |
M3 | 69 | 15.31884 |
M1 | 69 | 29.13043 |
M4 | 69 | | 149.5217
Sig. | | 0.573558 | 1

Optimization for solving the NRP while satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions to approximate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is part of the research projects sponsored by the Major Project Scheme, UGC, India, Reference no. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems: a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.


2 Computational Intelligence and Neuroscience

the location of rich food sources. Some of the algorithms that follow the waggle-dance communication performed by scout bees about the nectar site are the bee system, Bee Colony Optimization [7], and the Artificial Bee Colony [8].

The Directed Bee Colony (DBC) Optimization Algorithm [9] is inspired by the group decision-making process bees use to select a nectar site. The group decision process includes consensus and quorum methods. Consensus is the process of vote agreement, in which the voting pattern of the scouts is monitored; the best nest site is selected once the quorum (threshold) value is reached. Experimental results show that the algorithm is robust and accurate in generating a unique solution. The contribution of this research article is the use of a hybrid Directed Bee Colony Optimization with the Nelder-Mead Method for effective local search. The authors have adapted MODBCO for solving multiobjective problems by integrating the following processes. First, a deterministic local search method, the Modified Nelder-Mead Method, is used to obtain a provisional optimal solution. Then, a multiagent particle system environment is used in the exploration and decision-making process for establishing a new colony and selecting a nectar site. Only a few honey bees are active in the decision-making process, so the energy conservation of the swarm is highly achievable.

The Nurse Rostering Problem (NRP) is a staff scheduling problem that aims to assign a set of nurses to work shifts so as to maximize hospital benefit while considering a set of hard and soft constraints, such as the allotment of duty hours and hospital regulations. Nurse rostering is the delicate task of finding combinatorial solutions that satisfy multiple constraints [10]. Satisfying the hard constraints is mandatory in any scheduling problem, while a violation of any soft constraint is allowable but penalized. Achieving an optimal global solution for the problem is impossible in many cases [11]. Many algorithmic techniques, such as metaheuristic methods, graph-based heuristics, and mathematical programming models, have been proposed to solve automated scheduling and timetabling problems over the last decades [12, 13].

In this work, the effectiveness of the hybrid algorithm is compared with different optimization algorithms using performance metrics such as error rate, convergence rate, best value, and standard deviation. The well-known combinatorial scheduling problem, the NRP, is chosen as the test bed to experiment with and analyze the effectiveness of the proposed algorithm.

This paper is organized as follows. Section 2 presents a survey of existing algorithms for solving the NRP. Section 3 highlights the mathematical model and the formulation of the hard and soft constraints of the NRP. Section 4 explains the natural decision-making behavior of honey bees and the Modified Nelder-Mead Method. Section 5 describes the development of the metaheuristic approach and demonstrates the effectiveness of the MODBCO algorithm in solving the NRP. Section 6 discusses the computational experiments and the analysis of results for the formulated problem. Finally, Section 7 provides a summary of the discussion, and Section 8 concludes with future directions of the research work.

2 Literature Review

Berrada et al. [19] considered multiple objectives to tackle the nurse scheduling problem with various ordered soft constraints. The soft constraints are ordered by priority level, which determines the quality of the solution. Burke et al. [20] proposed a multiobjective Pareto-based search technique using simulated annealing, with a weighted-sum evaluation function for preferences and a dominance-based evaluation function for the Pareto set. Many mathematical models have been proposed to reduce cost and increase task performance, and performance on the problem greatly depends on the type of constraints used [21]. Dowsland [22] proposed a technique of chain moves using a multistate tabu search algorithm, which alternates between the feasible and infeasible regions of the search space to increase the transmission rate when the search gets disconnected. However, this algorithm fails to solve other problems in different search-space instances.

Burke et al. [23] proposed a hybrid tabu search algorithm to solve the NRP in Belgian hospitals. In their constraints, the authors added the previous roster alongside the hard and soft constraints; to accommodate this, they included heuristic search strategies in the general tabu search algorithm. This model provides flexibility and more user control. A hyperheuristic algorithm with tabu search was proposed for the NRP by Burke et al. [24]; they developed rule-based reinforcement learning which is domain specific but chooses only a few low-level heuristics to solve the NRP. The indirect genetic algorithm is problem dependent and uses encoding and decoding schemes with genetic operators to solve the NRP. Burke et al. [25] developed a memetic algorithm to solve the nurse scheduling problem and compared the memetic and tabu search algorithms. The experimental results show that the memetic algorithm outperforms the genetic algorithm and the tabu search algorithm in solution quality.

Simulated annealing has also been proposed to solve the NRP. Hadwan and Ayob [26] introduced a shift-pattern approach with simulated annealing and proposed a greedy constructive heuristic algorithm to generate the required shift patterns to solve the NRP at UKMMC (Universiti Kebangsaan Malaysia Medical Centre). This methodology reduces the complexity of the search space needed to generate a roster by building two- or three-day shift patterns. Experimental results demonstrated the efficiency of this algorithm with respect to execution time, fairness, and solution quality, and the approach was capable of handling all hard and soft constraints while producing a quality roster pattern. Sharif et al. [27] proposed a hybridized heuristic approach with a variable neighborhood descent search algorithm to solve the NRP at UKMMC. This heuristic hybridizes a cyclic schedule with a noncyclic schedule and applies a repair mechanism that swaps shifts between nurses to tackle random shift arrangements in the solution. A variable neighborhood descent search algorithm (VNDS) is used to change the neighborhood structure using a local search and to generate a quality duty roster. In VNDS, the first


neighborhood structure rerosters nurses to different shifts, and the second neighborhood structure performs the repair mechanism.

Aickelin and Dowsland [28] proposed a technique based on shift patterns, considering penalty preferences and the number of successive working days. Their indirect genetic algorithm generates various heuristic decoders for shift patterns to reconstruct the shift roster of a nurse; a qualified roster is generated by the decoders using the best permutations of nurses. To generate the best search-space solutions for the permutation of nurses, the authors used an adaptive iterative method to adjust the order in which nurses are scheduled one by one. Asta et al. [29] and Anwar et al. [30] proposed tensor-based hyperheuristics to solve the NRP. The authors tuned a specific group of datasets and embedded a tensor-based machine learning algorithm; a tensor-based hyperheuristic with memory management is used to generate the best solution. This approach is suited to life-long learning applications that extract knowledge and desired behavior throughout the run time.

Todorovic and Petrovic [31] proposed a Bee Colony Optimization approach to solve the NRP in which all unscheduled shifts are allocated to the available nurses in a constructive phase. The algorithm combines constructive moves with local search to improve the quality of the solution. In each forward pass, a predefined number of unscheduled shifts is allocated to the nurses, and solutions with little improvement in the objective function are discarded. The process of intelligent reduction of the neighborhood search improved the current solution. In the construction phase, however, unassigned shifts allotted to nurses can lead to constraint violations and higher penalties.

Several methods have been proposed on the INRC2010 dataset to solve the NRP; the authors consider five recent competitors to measure the effectiveness of the proposed algorithm. Asaju et al. [14] proposed an Artificial Bee Colony (ABC) algorithm to solve the NRP. The process is done in two phases: first, heuristic-based ordering of shift patterns is used to generate a feasible solution; in the second phase, the ABC algorithm is used to obtain the solution. In this method, premature convergence takes place and the solution gets trapped in local optima; the lack of a local search algorithm leads to higher penalties. Awadallah et al. [15] developed a metaheuristic technique, the hybrid artificial bee colony (HABC), to solve the NRP. In the ABC algorithm, the employed-bee phase was replaced by a hill-climbing approach to intensify the exploitation process. The use of hill climbing in ABC generates better values but leads to high computational time.

The global-best harmony search with pitch adjustment design was used to tackle the NRP in [16]. The authors adapted the harmony search algorithm (HSA) for the exploitation process and particle swarm optimization (PSO) for the exploration process. In HSA, solutions are generated in the improvisation process based on three operators, namely, memory consideration, random consideration, and pitch adjustment. Two improvements were made to solve the NRP: multipitch adjustment to improve the exploitation process, and replacement of random selection with global-best selection to increase convergence speed.

A hybrid harmony search algorithm with hill climbing was used to solve the NRP in [17]; metaheuristic harmony search and a hill-climbing approach serve as the local search. The memory consideration parameter in harmony search is replaced by the PSO algorithm, and derivative criteria reduce the number of iterations towards local minima. This process considers many parameters to construct the roster, since the improvisation process must run at each iteration.

Santos et al. [18] used integer programming (IP) to solve the NRP and proposed a monolithic compact IP model with polynomial numbers of constraints and variables. The authors used both upper and lower bounds to obtain the optimal cost, estimating and improving lower-bound values towards the optimum; this method requires additional processing time.

3 Mathematical Model

The NRP is a real-world problem in hospitals: the problem is to assign a predefined set of shifts (e.g., S1 day shift, S2 noon shift, S3 night shift, and S4 free shift) over a scheduling period to a set of nurses with different preferences and skills in each ward. Figure 1 shows an illustrative example of a feasible nurse roster consisting of four shift types, namely, day shift, noon shift, night shift, and free shift (holiday), allocating five nurses over an 11-day scheduling period. Each column in the schedule table represents a day, and each cell contains the shift type allocated to a nurse. Each nurse is allocated one shift per day, and the number of shifts is assigned based on the hospital contracts. The problem has variants in the number of shift types, nurses, nurse skills, contracts, and the scheduling period. In general, both hard and soft constraints are considered when generating and assessing solutions.

Hard constraints are regulations that must be satisfied to achieve a feasible solution; they cannot be violated, since hard constraints are demanded by hospital regulations. The hard constraints HC1 to HC5 must be fulfilled to schedule the roster. The soft constraints SC1 to SC14 are desirable, and the selection of soft constraints determines the quality of the roster. Tables 1 and 2 list the sets of hard and soft constraints considered in solving the NRP. This section describes the mathematical model of the hard and soft constraints extensively.

The NRP consists of a set of nurses $n = 1, 2, \ldots, N$, where each row of the roster is specific to a particular nurse, a set of shifts $s = 1, 2, \ldots, S$, and a set of days $d = 1, 2, \ldots, D$. The solution roster $\mathbf{S}$ is a 0-1 matrix of dimension $N \times SD$, given by

$$S_{nds} = \begin{cases} 1, & \text{if nurse } n \text{ works shift } s \text{ on day } d \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$
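The 0-1 solution matrix of (1) can be sketched as a nested list, here for a hypothetical instance with the dimensions of Figure 1 (five nurses, 11 days, four shift types; the helper name is ours, not from the paper):

```python
# Hypothetical dimensions mirroring Figure 1: 5 nurses, 11 days, 4 shift types
# (0 = day S1, 1 = noon S2, 2 = night S3, 3 = free S4).
N, D, S = 5, 11, 4

# roster[n][d][s] == 1 iff nurse n works shift s on day d (the matrix S_nds).
roster = [[[0] * S for _ in range(D)] for _ in range(N)]
roster[0][0][0] = 1  # nurse 0 takes the day shift on day 0

def shifts_of(n, d):
    """Return the shift indices assigned to nurse n on day d."""
    return [s for s, x in enumerate(roster[n][d]) if x == 1]

print(shifts_of(0, 0))  # [0]
```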

HC1. In this constraint, all demanded shifts are assigned to a nurse:

$$\sum_{n=1}^{N} S_{nds} = E_{ds} \quad \forall d \in D,\; s \in S \qquad (2)$$


[Figure 1: Illustrative example of the Nurse Rostering Problem: a feasible roster allocating five nurses over an 11-day scheduling period to four shift types, S1 (day shift), S2 (noon shift), S3 (night shift), and S4 (free shift).]

Table 1: Hard constraints.

HC1: All demanded shifts are assigned to a nurse.
HC2: A nurse can work only a single shift per day.
HC3: The minimum number of nurses required for the shift.
HC4: The total number of working days for the nurse should be between the maximum and minimum range.
HC5: A night shift followed by a day shift is not allowed.

Table 2: Soft constraints.

SC1: The maximum number of shifts assigned to each nurse.
SC2: The minimum number of shifts assigned to each nurse.
SC3: The maximum number of consecutive working days assigned to each nurse.
SC4: The minimum number of consecutive working days assigned to each nurse.
SC5: The maximum number of consecutive days assigned to each nurse on which no shift is allotted.
SC6: The minimum number of consecutive days assigned to each nurse on which no shift is allotted.
SC7: The maximum number of consecutive working weekends with at least one shift assigned to each nurse.
SC8: The minimum number of consecutive working weekends with at least one shift assigned to each nurse.
SC9: The maximum number of weekends with at least one shift assigned to each nurse.
SC10: Specific working day.
SC11: Requested day off.
SC12: Specific shift on.
SC13: Specific shift off.
SC14: Nurse not working on the unwanted pattern.

where $E_{ds}$ is the number of nurses required on day $d$ at shift $s$ and $S_{nds}$ is the allocation of nurses in the feasible solution roster.

HC2. In this constraint, each nurse can work no more than one shift per day:

$$\sum_{s=1}^{S} S_{nds} \le 1 \quad \forall n \in N,\; d \in D \qquad (3)$$

where $S_{nds}$ is the allocation of nurse $n$ to shift $s$ on day $d$.

HC3. This constraint deals with the minimum number of nurses required for each shift:

$$\sum_{n=1}^{N} S_{nds} \ge \mathrm{min}_{ds} \quad \forall d \in D,\; s \in S \qquad (4)$$


where $\mathrm{min}_{ds}$ is the minimum number of nurses required for shift $s$ on day $d$.

HC4. In this constraint, the total number of working days for each nurse must lie between a minimum and a maximum for the given scheduling period:

$$W_{\min} \le \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} \le W_{\max} \quad \forall n \in N \qquad (5)$$

The average working shift per nurse can be determined using

$$W_{\mathrm{avg}} = \frac{1}{N} \sum_{n=1}^{N} \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} \qquad (6)$$

where $W_{\min}$ and $W_{\max}$ are the minimum and maximum numbers of working days in the scheduling period and $W_{\mathrm{avg}}$ is the average working shift of a nurse.

HC5. In this constraint, shift 3 followed by shift 1 is not allowed; that is, a night shift may not be followed by a day shift on the next day:

$$S_{n,d,s_3} + S_{n,d+1,s_1} \le 1 \quad \forall n \in N,\; d \in D \qquad (7)$$
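As an illustration, the hard constraints can be checked against a candidate roster with a few pure-Python predicates over the $S_{nds}$ representation (a sketch; the function names and the small test roster are ours, not the authors'):

```python
def hc2_ok(roster):
    """HC2: each nurse works at most one shift per day (cf. eq. (3))."""
    return all(sum(day) <= 1 for nurse in roster for day in nurse)

def hc1_ok(roster, demand):
    """HC1: the cover requirement demand[d][s] is met exactly (cf. eq. (2))."""
    N = len(roster)
    return all(sum(roster[n][d][s] for n in range(N)) == demand[d][s]
               for d in range(len(demand)) for s in range(len(demand[0])))

def hc5_ok(roster, night=2, day=0):
    """HC5: night shift (S3) on day d followed by day shift (S1) on day d+1
    is forbidden (cf. eq. (7))."""
    return all(roster[n][d][night] + roster[n][d + 1][day] <= 1
               for n in range(len(roster)) for d in range(len(roster[n]) - 1))

# Two nurses, two days, three shifts; nurse 0 works night then day: HC5 fails.
r = [[[0, 0, 1], [1, 0, 0]],
     [[1, 0, 0], [0, 1, 0]]]
print(hc2_ok(r), hc5_ok(r))  # True False
```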

SC1. The maximum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\!\left(\left(\sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} - \Phi_n^{ub}\right)\!,\, 0\right) \quad \forall n \in N \qquad (8)$$

where $\Phi_n^{ub}$ is the maximum number of shifts assigned to nurse $n$.

SC2. The minimum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\!\left(\left(\Phi_n^{lb} - \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds}\right)\!,\, 0\right) \quad \forall n \in N \qquad (9)$$

where $\Phi_n^{lb}$ is the minimum number of shifts assigned to nurse $n$.

SC3. The maximum number of consecutive working days assigned to each nurse on which a shift is allotted is as follows:

$$\sum_{k=1}^{\Psi_n} \max\!\left(\left(C_n^k - \Theta_n^{ub}\right)\!,\, 0\right) \quad \forall n \in N \qquad (10)$$

where $\Theta_n^{ub}$ is the maximum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_n^k$ is the length of the $k$th working span of nurse $n$.

SC4. The minimum number of consecutive working days assigned to each nurse on which a shift is allotted is as follows:

$$\sum_{k=1}^{\Psi_n} \max\!\left(\left(\Theta_n^{lb} - C_n^k\right)\!,\, 0\right) \quad \forall n \in N \qquad (11)$$

where $\Theta_n^{lb}$ is the minimum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_n^k$ is the length of the $k$th working span of nurse $n$.

SC5. The maximum number of consecutive days assigned to each nurse on which no shift is allotted is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\!\left(\left(ð_n^k - \varphi_n^{ub}\right)\!,\, 0\right) \quad \forall n \in N \qquad (12)$$

where $\varphi_n^{ub}$ is the maximum number of consecutive free days of nurse $n$, $\Gamma_n$ is the total number of consecutive free-day spans of nurse $n$ in the roster, and $ð_n^k$ is the length of the $k$th free-day span of nurse $n$.

SC6. The minimum number of consecutive days assigned to each nurse on which no shift is allotted is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\!\left(\left(\varphi_n^{lb} - ð_n^k\right)\!,\, 0\right) \quad \forall n \in N \qquad (13)$$

where $\varphi_n^{lb}$ is the minimum number of consecutive free days of nurse $n$, $\Gamma_n$ is the total number of consecutive free-day spans of nurse $n$ in the roster, and $ð_n^k$ is the length of the $k$th free-day span of nurse $n$.

SC7. The maximum number of consecutive working weekends with at least one shift assigned to a nurse is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\!\left(\left(\zeta_n^k - \Omega_n^{ub}\right)\!,\, 0\right) \quad \forall n \in N \qquad (14)$$

where $\Omega_n^{ub}$ is the maximum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working-weekend spans of nurse $n$ in the roster, and $\zeta_n^k$ is the length of the $k$th working-weekend span of nurse $n$.

SC8. The minimum number of consecutive working weekends with at least one shift assigned to a nurse is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\!\left(\left(\Omega_n^{lb} - \zeta_n^k\right)\!,\, 0\right) \quad \forall n \in N \qquad (15)$$


where $\Omega_n^{lb}$ is the minimum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working-weekend spans of nurse $n$ in the roster, and $\zeta_n^k$ is the length of the $k$th working-weekend span of nurse $n$.

SC9. The maximum number of weekends with at least one shift assigned to a nurse in four weeks is as follows:

$$\sum_{k=1}^{\chi_n} \max\!\left(\left(\omega_n^k - \varpi_n^{ub}\right)\!,\, 0\right) \quad \forall n \in N \qquad (16)$$

where $\omega_n^k$ is the number of working days at the $k$th weekend of nurse $n$, $\varpi_n^{ub}$ is the maximum number of weekend working days for nurse $n$, and $\chi_n$ is the total count of weekends in the scheduling period of nurse $n$.

SC10. A nurse can request to work on a particular day in the scheduling period:

$$\sum_{d=1}^{D} \lambda_n^d = 1 \quad \forall n \in N \qquad (17)$$

where $\lambda_n^d$ is the request of nurse $n$ to work any shift on a particular day $d$.

SC11. A nurse can request not to work on a particular day in the scheduling period:

$$\sum_{d=1}^{D} \lambda_n^d = 0 \quad \forall n \in N \qquad (18)$$

where $\lambda_n^d$ is the request of nurse $n$ not to work any shift on a particular day $d$.

SC12. A nurse can request to work a particular shift on a particular day in the scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_n^{ds} = 1 \quad \forall n \in N \qquad (19)$$

where $\Upsilon_n^{ds}$ is the request of nurse $n$ to work shift $s$ on day $d$.

SC13. A nurse can request not to work a particular shift on a particular day in the scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_n^{ds} = 0 \quad \forall n \in N \qquad (20)$$

where $\Upsilon_n^{ds}$ is the request of nurse $n$ not to work shift $s$ on day $d$.

SC14. A nurse should not work on an unwanted pattern suggested for the scheduling period:

$$\sum_{u=1}^{\varrho_n} \mu_n^u \quad \forall n \in N \qquad (21)$$

where $\mu_n^u$ is the total count of occurrences of unwanted pattern $u$ for nurse $n$ and $\varrho_n$ is the set of unwanted patterns suggested for nurse $n$.
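The working-span counts $C_n^k$ used by SC3 to SC6 can be derived from a nurse's day-by-day work indicator; a minimal sketch of such a helper (our own, assuming a 0/1 vector per nurse):

```python
from itertools import groupby

def span_lengths(worked, value=1):
    """Lengths of maximal runs of `value` in a 0/1 work indicator:
    the C_n^k of SC3/SC4 (value=1) or the free-day spans of SC5/SC6 (value=0)."""
    return [len(list(g)) for v, g in groupby(worked) if v == value]

def sc3_penalty(worked, max_consecutive):
    """SC3: sum over spans of max(C_n^k - upper bound, 0), as in eq. (10)."""
    return sum(max(c - max_consecutive, 0) for c in span_lengths(worked))

days = [1, 1, 1, 0, 0, 1, 1]
print(span_lengths(days))           # [3, 2]
print(span_lengths(days, value=0))  # [2]
print(sc3_penalty(days, 2))         # 1
```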

The objective function of the NRP is to maximize nurse preferences and minimize the penalty cost arising from violations of the soft constraints, as in (22):

$$\min f(S_{nds}) = \sum_{SC=1}^{14} P_{sc}\!\left(\sum_{n=1}^{N} \sum_{s=1}^{S} \sum_{d=1}^{D} S_{nds}\right) \ast T_{sc}\!\left(\sum_{n=1}^{N} \sum_{s=1}^{S} \sum_{d=1}^{D} S_{nds}\right) \qquad (22)$$

Here SC refers to the set of soft constraints indexed in Table 2, $P_{sc}(x)$ refers to the penalty weight of violating a soft constraint, and $T_{sc}(x)$ refers to the total number of violations of that soft constraint in the roster solution. It should be noted that the penalty function [32] is used in the NRP to improve performance and to provide a fair comparison with other optimization algorithms.
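Under the simplifying assumption that each soft constraint contributes its penalty weight times its violation count, the evaluation in (22) reduces to a weighted sum; a sketch with hypothetical weights and counts (not values from the paper):

```python
def roster_cost(violations, weights):
    """Weighted soft-constraint cost in the spirit of eq. (22):
    sum over soft constraints of penalty weight * violation count."""
    return sum(weights[sc] * count for sc, count in violations.items())

violations = {"SC1": 2, "SC3": 1, "SC11": 4}   # hypothetical violation counts
weights = {"SC1": 10, "SC3": 30, "SC11": 5}    # hypothetical penalty weights
print(roster_cost(violations, weights))  # 70
```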

4 Bee Colony Optimization

4.1. Natural Behavior of Honey Bees. Swarm intelligence is an emerging discipline for the study of problems that require an optimal approach rather than the traditional approach. Swarm intelligence is a branch of artificial intelligence based on the study of the behavior of social insects, composed of many individual actions in a decentralized and self-organized system. Swarm behavior is exhibited by many species, such as fish schools, herds of animals, and flocks of birds, formed out of the biological need to stay together. A swarm is an aggregation of animals such as birds, fishes, ants, and bees based on collective behavior. The individual agents in the swarm have stochastic behavior, which depends on their local perception of the neighborhood. Communication between insects is formed with the help of the colonies, and it promotes collective intelligence among the colonies.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The swarm must be capable of simple space and time computations (proximity) and should respond to quality factors. The swarm should allow diverse activities and should not be restricted to narrow channels. The swarm should maintain stability and should not fluctuate with every change in behavior; at the same time, its adaptability must allow it to change behavior mode when required. Several hundreds of bees from a swarm work together to find nesting sites and select the best nest site. Bee Colony Optimization is inspired by this natural behavior of bees; the bee optimization algorithm is modeled on the group decision-making processes of honey bees. A honey bee searches for the best nest site by balancing speed and accuracy.

In a bee colony there are three different types of bees: a single queen bee, thousands of male drone bees, and thousands of worker bees.

(1) The queen bee is responsible for creating new colonies by laying eggs.


(2) The male drone bees mate with the queen and are then discarded from the colonies.

(3) The remaining female bees in the hive, called worker bees, are the building block of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honeycomb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy is exhausted. After reaching the hive, scout bees share their information, and forager bees start to explore the rich food-source locations. The scout bee's information includes the direction, quality, quantity, and distance of the food source it found. A food source is communicated to foragers through dance, of which there are two types: the round dance and the waggle dance. The round dance indicates the direction of the food source when the distance is small. The waggle dance indicates both the position and the direction of the food source; the distance is encoded by the speed of the dance, with greater speed indicating a smaller distance, and the quantity of food is indicated by the wriggling of the bee. This exchange of information among hive mates builds collective knowledge; forager bees silently observe the behavior of scout bees to acquire knowledge about the directions and locations of food sources.

The group decision process of honey bees serves to find the best food source and nest site, and it is based on the swarming process. Swarming is the process in which the queen bee and about half of the worker bees leave their hive to explore a new colony, while the remaining worker bees and a daughter queen remain in the old hive to monitor the waggle dance. After leaving the parental hive, swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony; it provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source, allowing the colony to gather food quickly and efficiently. The decision-making process for finding the best nest site can be carried out by the swarm in two ways: consensus, in which group agreement is taken into account, and quorum, in which the decision is taken once the vote for a site reaches a threshold value.
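The quorum rule can be sketched in a few lines (a toy illustration; the threshold value and vote labels are hypothetical):

```python
from collections import Counter

def quorum_decision(votes, threshold):
    """Pick the nest site whose scout votes first reach the quorum
    threshold; return None while the swarm is still deliberating."""
    site, count = Counter(votes).most_common(1)[0]
    return site if count >= threshold else None

print(quorum_decision(["A", "B", "A", "A"], threshold=3))  # A
print(quorum_decision(["A", "B"], threshold=3))            # None
```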

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds a neighboring solution from its current path. The algorithm has forward and backward passes. In the forward pass, every bee starts to explore the neighborhood of its current solution and applies constructive and improving moves: all bees in the hive make a constructive move, and then a local search starts. In the backward pass, bees share the objective values obtained in the forward pass; the bees with higher priority are used to discard all nonimproving moves. The bees then continue exploring in the next forward pass or continue the same process within the neighborhood. The flowchart

[Figure 2: Flowchart of the BCO algorithm: initialization, then repeated forward passes (construction moves) and backward passes that update the best solution until the stopping criterion is met.]

for BCO is shown in Figure 2. BCO is proficient at solving combinatorial optimization problems by creating colonies of the multiagent system. The pseudocode for BCO is described in Algorithm 1. The bee colony system provides standard, well-organized, and well-coordinated multitasking teamwork [33].

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments, which are filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents using the Modified Nelder-Mead Method (MNMM); each agent in a fragment passes information about the optimized point using MNMM. The best points are obtained with MNMM, and the best solution is chosen by the decision-making process of the honey bees. The algorithm is simplex-based: a D-dimensional simplex is initialized with D + 1 vertices, that is, it forms a triangle in two dimensions and a tetrahedron in three dimensions. To determine the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is the one

8 Computational Intelligence and Neuroscience

Bee Colony Optimization
(1) Initialization: assign every bee an empty solution.
(2) Forward Pass: for every bee,
  (2.1) set i = 1;
  (2.2) evaluate all possible construction moves;
  (2.3) based on the evaluation, choose one move using the roulette wheel;
  (2.4) i = i + 1; if i ≤ N, go to step (2.2);
where i is the counter of construction moves and N is the number of construction moves during one forward pass.
(3) Return to the hive.
(4) Backward Pass starts.
(5) Compute the objective function for each bee and sort the bees accordingly.
(6) Using probability or logical reasoning, each bee decides whether to continue with its computed solution and become a recruiter bee.
(7) Every follower chooses a new solution from the recruiters.
(8) If the stopping criterion is not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1: Pseudocode of BCO.
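As a runnable companion to Algorithm 1, the sketch below implements the forward/backward loop on a toy binary minimization problem. It is an illustrative Python reimplementation under stated assumptions (the paper's experiments use MATLAB): the toy objective, the roulette weighting 1/(1 + cost), and the half-and-half recruiter/follower split are choices of this sketch, not details from the paper.

```python
import random

def roulette_wheel(rng, items, weights):
    """Select one item with probability proportional to its weight."""
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if acc >= r:
            return item
    return items[-1]

def bco(objective, options, n, n_bees=8, max_iter=40, seed=7):
    """Toy BCO loop: in the forward pass each bee builds an n-component
    solution move by move via the roulette wheel; in the backward pass the
    bees are sorted by objective value and the worse half follow recruiters."""
    rng = random.Random(seed)
    bees = [[rng.choice(options) for _ in range(n)] for _ in range(n_bees)]
    best, best_cost = None, float("inf")
    for _ in range(max_iter):
        # Forward pass: constructive moves, one component at a time.
        for b in range(n_bees):
            partial = []
            for _ in range(n):
                weights = [1.0 / (1.0 + objective(partial + [o])) for o in options]
                partial.append(roulette_wheel(rng, options, weights))
            bees[b] = partial
        # Backward pass: share objective values; the better half recruit the rest.
        bees.sort(key=objective)
        if objective(bees[0]) < best_cost:
            best, best_cost = list(bees[0]), objective(bees[0])
        half = max(1, n_bees // 2)
        for b in range(half, n_bees):
            bees[b] = list(bees[rng.randrange(half)])
    return best, best_cost

# Toy usage: minimize the number of 1s in a 6-bit string (known optimum: 0).
best, cost = bco(lambda s: sum(s), options=[0, 1], n=6)
```

With the fixed seed the loop reliably finds an all-zero or near-zero string; the point is only to show how the roulette wheel biases constructive moves toward lower partial cost.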

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely, reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

With each simplex operation the vertices are moved closer to the optimal solution; the vertices are ordered based on their fitness values. The best vertex is A_b, the second best vertex is A_s, and the worst vertex is A_w, determined by the objective function. Let A = (x, y) be a vertex of the triangle representing a food source point; A_b = (x_b, y_b), A_s = (x_s, y_s), and A_w = (x_w, y_w) are the positions of the food source points, that is, the local optimal points. The objective function values for A_b, A_s, and A_w are calculated with (23) towards the food source points.

The objective function used to construct the simplex for the local search with MNMM is formulated as

f(x, y) = x² − 4x + y² − y − xy. (23)

Based on the objective function values, the vertex food points are ordered in ascending order together with their corresponding honey bee agents. The obtained values are ordered as A_b ≤ A_s ≤ A_w, with their honey bee positions and food points in the simplex triangle. Figure 4 depicts the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail as follows.

(1) In the simplex triangle, the reflection coefficient α, expansion coefficient γ, contraction coefficient β, and shrinkage coefficient δ are initialized.

(2) The objective function for the simplex triangle vertices is calculated and the vertices are ordered. The best vertex with the lowest objective value is A_b, the second best vertex is A_s, and the worst vertex is A_w; these vertices are ordered based on the objective function as A_b ≤ A_s ≤ A_w.

(3) The first two best vertices, A_b and A_s, are selected, and the construction proceeds by calculating the midpoint of the line segment joining the two best vertices, that is, food positions. The objective function decreases as the honey agent associated with the worst vertex moves towards the best and second best vertices, that is, from A_w towards A_b and from A_w towards A_s. The midpoint vertex A_m of the line joining the best and second best vertices is calculated using

A_m = (A_b + A_s)/2. (24)

(4) A reflection vertex A_r is generated as the reflection of the worst point A_w. The objective function value f(A_r) is calculated and compared with the worst vertex value f(A_w). If f(A_r) < f(A_w), proceed with step (5). The reflection vertex is calculated using

A_r = A_m + α(A_m − A_w), where α > 0. (25)

(5) The expansion process starts when the objective function value at the reflection vertex A_r is less than that at the worst vertex A_w, f(A_r) < f(A_w), and the line segment through A_w and A_r is extended further to A_e. The vertex point A_e is calculated by (26). If the objective function value at A_e is less than that at the reflection vertex, f(A_e) < f(A_r), then the expansion is accepted, and the honey bee agent has found a better food position than the reflection point:

A_e = A_r + γ(A_r − A_m), where γ > 1. (26)

(6) The contraction process is carried out when f(A_r) < f(A_s) and f(A_r) ≤ f(A_b), for replacing A_b with


[Panels of Figure 3: (a) simplex triangle; (b) reflection; (c) expansion; (d) contraction (A_h < A_r); (e) contraction (A_r < A_h); (f) shrinkage.]

Figure 3: Nelder-Mead operations.

A_r. If f(A_r) > f(A_h), then a direct contraction, without the replacement of A_b with A_r, is performed. The contraction vertex A_c is calculated using

A_c = βA_r + (1 − β)A_m, where 0 < β < 1. (27)

If f(A_r) ≤ f(A_b), the contraction is performed and A_c replaces A_h; go to step (8). Otherwise proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process at step (6) fails; it shrinks all the vertices of the simplex triangle except A_h using (28). If the objective function values of the reflection and contraction phases are not less than that of the worst point, then the vertices A_s and A_w must be shrunk towards A_h. Thus the vertices of smaller value form a new simplex triangle with the other two best vertices:

A_i = δA_i + A_1(1 − δ), where 0 < δ < 1. (28)

(8) The calculations stop when the termination condition is met.

Algorithm 2 describes the pseudocode of the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is shown in Figure 5.

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, inspired by the natural foraging behavior of bees. The algorithm consists of two steps, the forward pass and the backward pass. During the forward pass, bees start to explore the neighborhood of their current solutions and find all possible ways. In the backward pass, bees return to the hive and share the objective function values of their current solutions. The nectar amount is calculated using a probability



Figure 4: Bees' search movement based on MNMM.

function, and the solution is advertised; the bee which has the better solution is given higher priority. The remaining bees, based on the probability value, decide whether to explore their own solution or to proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of food particles by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points through the natural behavior of bees called the waggle dance. When all the information about the best food sites has been shared, the best among the optimized points is chosen using the decision-making processes of honey bees called the consensus and quorum methods [34, 35].

5.1. Multiagent System. All agents live in an environment that is well structured and organized. In a multiagent system, several agents work together and interact with one another to achieve a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent

should be capable of interacting with other agents in its local environment; and agents attempt to perform their goals. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function is calculated based on the fixed parameters.

(1) Consider "e" independent parameters used to calculate the objective function. The range of the g-th parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value chosen for the g-th parameter.

(2) The objective function can thus be formulated over e axes; each axis contains the total range of a single parameter in a different dimension.

(3) Each axis is divided into smaller parts, each part called a step. The g-th axis can thus be divided into n_g steps, each of length L_g, where g = 1 to e. The relationship between n_g and L_g is given as

n_g = (Q_gi − Q_gf)/L_g. (29)

(4) Each axis is then divided into branches; the branches across the e axes form an


Modified Nelder-Mead Method for directed honey bee food search
(1) Initialization:
A_i denotes the vertices of the simplex, where i = 1, 2, ..., n + 1;
α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage;
f is the objective function to be minimized.
(2) Ordering:

Order the vertices of the simplex from the lowest objective function value f(A_1) to the highest value f(A_{n+1}), so that f(A_1) ≤ f(A_2) ≤ ⋯ ≤ f(A_{n+1}).
(3) Midpoint:

Calculate the midpoint (centroid) of the n best vertices of the simplex, A_m = Σ(A_i/n), where i = 1, 2, ..., n.
(4) Reflection Process:

Calculate the reflection point A_r by A_r = A_m + α(A_m − A_{n+1}).
if f(A_1) ≤ f(A_r) ≤ f(A_n) then
  A_n ← A_r and go to step (8)
end if

(5) Expansion Process:
if f(A_r) ≤ f(A_1) then
  calculate the expansion point using A_e = A_r + γ(A_r − A_m)
end if
if f(A_e) < f(A_r) then
  A_n ← A_e and go to step (8)
else
  A_n ← A_r and go to step (8)
end if

(6) Contraction Process:
if f(A_n) ≤ f(A_r) ≤ f(A_{n+1}) then
  compute the outside contraction by A_c = βA_r + (1 − β)A_m
end if
if f(A_r) ≥ f(A_{n+1}) then
  compute the inside contraction by A_c = βA_{n+1} + (1 − β)A_m
end if
if f(A_r) ≥ f(A_n) then
  contraction is done between A_m and the best vertex among A_r and A_{n+1}
end if
if f(A_c) < f(A_r) then
  A_n ← A_c and go to step (8)
else
  go to step (7)
end if
if f(A_c) ≥ f(A_{n+1}) then
  A_{n+1} ← A_c and go to step (8)
else
  go to step (7)
end if

(7) Shrinkage Process:
Shrink towards the best solution with new vertices by A_i = δA_i + A_1(1 − δ), where i = 2, ..., n + 1.

(8) Stopping Criteria:
Order and relabel the new vertices of the simplex based on their objective function values; if the termination condition is not met, go to step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.
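The ordering, reflection, expansion, contraction, and shrinkage cycle of Algorithm 2 can be sketched as a compact two-dimensional Nelder-Mead loop. This is an illustrative Python version of the textbook method, not the paper's MATLAB code, and its acceptance tests follow the common textbook variant rather than Algorithm 2's exact conditions; it is applied to the sample objective (23), whose minimum is f(3, 2) = −7.

```python
def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, iters=300):
    """Textbook 2-D Nelder-Mead: order, reflect, expand, contract, shrink."""
    for _ in range(iters):
        simplex.sort(key=lambda v: f(*v))                 # order: A_b <= A_s <= A_w
        (bx, by), (sx, sy), (wx, wy) = simplex
        mx, my = (bx + sx) / 2.0, (by + sy) / 2.0         # midpoint A_m, eq. (24)
        rx, ry = mx + alpha * (mx - wx), my + alpha * (my - wy)  # reflection, eq. (25)
        if f(rx, ry) < f(bx, by):                         # try expansion, eq. (26)
            ex, ey = rx + gamma * (rx - mx), ry + gamma * (ry - my)
            simplex[2] = (ex, ey) if f(ex, ey) < f(rx, ry) else (rx, ry)
        elif f(rx, ry) < f(wx, wy):                       # accept the reflected point
            simplex[2] = (rx, ry)
        else:                                             # contraction, eq. (27)
            cx, cy = beta * rx + (1 - beta) * mx, beta * ry + (1 - beta) * my
            if f(cx, cy) < f(wx, wy):
                simplex[2] = (cx, cy)
            else:                                         # shrink toward best, eq. (28)
                simplex = [simplex[0]] + [
                    (delta * x + bx * (1 - delta), delta * y + by * (1 - delta))
                    for x, y in simplex[1:]
                ]
    simplex.sort(key=lambda v: f(*v))
    return simplex[0]

# Objective (23): f(x, y) = x^2 - 4x + y^2 - y - xy, minimized at (3, 2) with f = -7.
f = lambda x, y: x ** 2 - 4 * x + y ** 2 - y - x * y
best_x, best_y = nelder_mead(f, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

Starting from the unit triangle, the loop settles on the minimizer (3, 2) of objective (23).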

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = ∏_{g=1}^{e} n_g. (30)

(5) The starting point of an agent in the environment, which is a point inside its volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]. (31)
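Steps (1)–(5) can be sketched numerically. The helper below is illustrative (the function name and the per-step midpoints are assumptions of this sketch; equation (31) itself writes a single midpoint vector): it derives n_g from (29), N_v from (30), and candidate starting points along each axis.

```python
def partition_search_space(ranges, step_lengths):
    """Split each parameter axis g with range [Q_gi, Q_gf] into n_g steps of
    length L_g (eq. 29); the number of e-dimensional volumes is the product
    of the n_g (eq. 30); the midpoint of each step seeds one exploration."""
    steps = [round(abs(qf - qi) / lg) for (qi, qf), lg in zip(ranges, step_lengths)]
    n_volumes = 1
    for n in steps:
        n_volumes *= n                                   # eq. (30): product over axes
    # Midpoints of each step along every axis (starting points inside volumes).
    axis_points = [
        [qi + (k + 0.5) * lg * (1 if qf >= qi else -1) for k in range(n)]
        for (qi, qf), lg, n in zip(ranges, step_lengths, steps)
    ]
    return steps, n_volumes, axis_points

# Two parameters: ranges [0, 4] and [0, 6], step lengths 1 and 2.
steps, n_volumes, pts = partition_search_space([(0.0, 4.0), (0.0, 6.0)], [1.0, 2.0])
```

Here the first axis yields n_1 = 4 steps, the second n_2 = 3, so N_v = 12 volumes, each supplying one bee's starting point.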

5.2. Decision-Making Process. A key task of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighbor nest site for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search for food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. After returning from the search for food particles, the scout bees report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


[Flowchart: initialize the coefficients α, γ, β, δ and the objective function f(A); order and label the vertices; compute the midpoint A_m of the two best vertices; then reflect (A_r = A_m + α(A_m − A_w)), expand (A_e = A_r + γ(A_r − A_m)), contract (A_c = βA_r + (1 − β)A_m), or shrink (A_i = δA_i + A_1(1 − δ)) according to the comparisons of f at A_b, A_r, A_w, until the termination criteria are met.]

Figure 5: Workflow of the Modified Nelder-Mead Method.


Multi-Objective Directed Bee Colony Optimization
(1) Initialization:
f(x) is the objective function to be minimized.

Initialize the number of parameters e and the step lengths L_g, where g = 1 to e.
Initialize the initial and final values of each parameter as Q_gi and Q_gf.
** Solution Representation **
The solutions are represented as binary values, generated as follows:
for each solution i = 1, ..., n
  X_i = {x_i1, x_i2, ..., x_id | d ∈ total days and x_id = (rand ≥ 0.29)} for all d
end for

(2) The number of steps along each axis can be calculated using n_g = (Q_gi − Q_gf)/L_g.
(3) The total number of volumes can be calculated using N_v = ∏_{g=1}^{e} n_g.
(4) The midpoint of each volume, the starting point of the exploration, can be calculated using [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].
(5) Explore the search volumes according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the values of the optimized points in the vector table as [f(V_1), f(V_2), ..., f(V_{N_v})].
(7) The globally optimized point is chosen by the bee decision-making process using the consensus and quorum

method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO.
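The binary solution representation in Algorithm 3 (x_id set to 1 when a uniform random draw is at least 0.29) can be sketched as follows; the function name, the seeding, and the default threshold parameter are illustrative assumptions, not from the paper's code.

```python
import random

def initial_solutions(n_solutions, total_days, seed=42, threshold=0.29):
    """Generate n binary solution vectors X_i = {x_i1, ..., x_id}: x_id = 1
    when a uniform draw is >= threshold (per Algorithm 3's rand >= 0.29)."""
    rng = random.Random(seed)
    return [
        [1 if rng.random() >= threshold else 0 for _ in range(total_days)]
        for _ in range(n_solutions)
    ]

# A sprint-sized example: 10 candidate solutions over a 28-day horizon.
roster = initial_solutions(n_solutions=10, total_days=28)
```

Each row is one candidate solution over the scheduling horizon; with the 0.29 threshold roughly 71% of the entries start as 1.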

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution for the minimal optimal case, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This formulation finds the minimal optimal case among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V, which consists of e elements. Each element contains the value of one parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached so that the bees can start acting on the decision. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on the threshold level obtained by the group decision-making process. When a solution reaches the optimal threshold level ξ_q, it is considered the final solution, based on a unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but it leads to inaccurate experimental results. The threshold value should therefore be chosen to attain low computational time with accurate experimental results.
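The consensus rule (33) and the quorum threshold ξ_q can be sketched in a few lines; `consensus` and `quorum` are hypothetical helper names, and the quorum semantics (accept the first candidate whose objective value already meets the threshold, otherwise fall back to full consensus) is one plausible reading of the text.

```python
def consensus(values):
    """Consensus, eq. (33): index and value of the global minimum among the
    objective values f(V_1), ..., f(V_Nv) recorded in the vector table."""
    i = min(range(len(values)), key=lambda k: values[k])
    return i, values[i]

def quorum(values, threshold):
    """Quorum: accept the first fragment whose objective value is already at
    or below the threshold level; otherwise fall back to consensus."""
    for i, v in enumerate(values):
        if v <= threshold:
            return i, v
    return consensus(values)

table = [56.0, 59.0, 51.0, 58.0]   # f(V_1) ... f(V_4) from four fragments
idx, best = consensus(table)
```

With a loose threshold, quorum terminates as soon as any fragment is good enough, trading solution quality for computation time exactly as the paragraph above describes.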

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparison with five different competitor methods. Six performance metrics are considered to investigate the significance of the algorithm and evaluate the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_NRP-Instance − fitness_i)/Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence (AC) measures the quality of the generated population on average; it is the percentage of the average convergence rate of the solutions. The Average Convergence is used to improve convergence time by exploring more solutions in the population. It is calculated using

AC = Σ_{i=1}^{r} (1 − (Avg_fitness_i − Optimal_NRP-Instance)/Optimal_NRP-Instance) × 100, (35)

where r is the number of instances in the given dataset.


[Flowchart: initialization (parameter count e, objective function f(X), step lengths L_g, parameter bounds Q_gi and Q_gf); compute the number of steps n_g = (Q_gi − Q_gf)/L_g, the number of volumes N_v = ∏ n_g, and the volume midpoints; explore each volume with MNMM; share results via the waggle dance; apply consensus and quorum with the threshold level to obtain the global optimum solution.]

Figure 6: Flowchart of MODBCO.


6.1.3. Standard Deviation. The standard deviation (SD) is a measure of the dispersion of a set of values from their mean. The Average Standard Deviation (ASD) is the average of the standard deviations over all instances taken from the dataset and can be calculated using

ASD = √(Σ_{i=1}^{r} (value obtained in instance_i − mean value of the instance)²), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and worst convergence rates generated in the population and can be calculated using

CD = Convergence_best − Convergence_worst, (37)

where Convergence_best is the convergence rate of the best fitness individual and Convergence_worst is the convergence rate of the worst fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i − Cost_NRP-Instance)/Total number of instances, (38)

where r is the number of instances in the given dataset.
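The metrics (34), (35), and (38) can be sketched numerically. This is an illustrative reading, not the paper's code: the gap in LER is taken as (best − optimal) so that the rate is nonnegative for minimization, and each sum is averaged over the r instances and scaled to a percentage where the text says "percentage".

```python
def performance_metrics(optimal, best, avg):
    """LER (eq. 34), AC (eq. 35), and ACD (eq. 38) over r instances, under
    the sign/averaging conventions stated in the lead-in."""
    r = len(optimal)
    # eq. (34): relative gap of the best value above the known optimum, in %
    ler = sum((b - o) / o for o, b in zip(optimal, best)) * 100.0 / r
    # eq. (35): average convergence of the population's mean fitness, in %
    ac = sum(1.0 - (a - o) / o for o, a in zip(optimal, avg)) * 100.0 / r
    # eq. (38): average absolute cost diversion from the known optimum
    acd = sum(b - o for o, b in zip(optimal, best)) / r
    return ler, ac, acd

# Two illustrative instances: known optima 56 and 58, best found 56 and 59,
# average fitness 60 and 62 (made-up numbers, not taken from Table 7 rows).
ler, ac, acd = performance_metrics([56, 58], [56, 59], [60, 62])
```

For these numbers the second instance misses its optimum by one cost unit, giving a small LER, an AC close to 100%, and an ACD of half a cost unit.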

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm for the NRP are as follows:

(a) Minimize the total cost of the rostering problem.
(b) Satisfy all the hard constraints described in Table 1.
(c) Satisfy as many of the soft constraints described in Table 2 as possible.
(d) Enhance resource utilization.
(e) Distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided based on complexity and size into three tracks, namely, the sprint, medium, and long datasets. Each track is divided into four types, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days, with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track

is medium, which is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint datasets. The scheduling period is 28 days, with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint datasets. The scheduling period for this track is 28 days, with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives a detailed description of the datasets. Columns one to three index each dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences, shift off and day off, are covered in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off for the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in cases 1 to 4 are small, those in cases 5 to 8 are medium in size, and the larger datasets are in cases 9 to 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 datasets. The experiments are run with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm for the NRP is coded in MATLAB 2012 under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set by empirical evaluation; appropriate parameter values were determined from preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are presented in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of a proposed algorithm against existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track Type Instance Nurses Skills Shifts Contracts Unwanted pattern Shift off Day off Weekend Time period

Sprint
Early 01–10 10 1 4 4 3 2 1-01-2010 to 28-01-2010
Hidden
01-02 10 1 3 3 4 2 1-06-2010 to 28-06-2010
03 05 08 10 1 4 3 8 2 1-06-2010 to 28-06-2010
04 09 10 1 4 3 8 2 1-06-2010 to 28-06-2010
06 07 10 1 3 3 4 2 1-01-2010 to 28-01-2010
10 10 1 4 3 8 2 1-01-2010 to 28-01-2010
Late
01 03–05 10 1 4 3 8 2 1-01-2010 to 28-01-2010
02 10 1 3 3 4 2 1-01-2010 to 28-01-2010
06 07 10 10 1 4 3 0 2 1-01-2010 to 28-01-2010
08 10 1 4 3 0 × × 2 1-01-2010 to 28-01-2010
09 10 1 4 3 0 × × 2 3 1-01-2010 to 28-01-2010
Hint
01 03 10 1 4 3 8 2 1-01-2010 to 28-01-2010
02 10 1 4 3 0 2 1-01-2010 to 28-01-2010

Medium
Early 01–05 31 1 4 4 0 2 1-01-2010 to 28-01-2010
Hidden
01–04 30 2 5 4 9 × × 2 1-06-2010 to 28-06-2010
05 30 2 5 4 9 × × 2 1-06-2010 to 28-06-2010
Late
01 30 1 4 4 7 2 1-01-2010 to 28-01-2010
02 04 30 1 4 3 7 2 1-01-2010 to 28-01-2010
03 30 1 4 4 0 2 1-01-2010 to 28-01-2010
05 30 2 5 4 7 2 1-01-2010 to 28-01-2010
Hint
01 03 30 1 4 4 7 2 1-01-2010 to 28-01-2010
02 30 1 4 4 7 2 1-01-2010 to 28-01-2010

Long
Early 01–05 49 2 5 3 3 2 1-01-2010 to 28-01-2010
Hidden
01–04 50 2 5 3 9 × × 2 3 1-06-2010 to 28-06-2010
05 50 2 5 3 9 × × 2 3 1-06-2010 to 28-06-2010
Late
01 03 05 50 2 5 3 9 × × 2 3 1-01-2010 to 28-01-2010
02 04 50 2 5 4 9 × × 2 3 1-01-2010 to 28-01-2010
Hint
01 50 2 5 3 9 × × 2 3 1-01-2010 to 28-01-2010
02 03 50 2 5 3 7 × × 2 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number Case Track Type
1 Case 1 Sprint Early
2 Case 2 Sprint Hidden
3 Case 3 Sprint Late
4 Case 4 Sprint Hint
5 Case 5 Medium Early
6 Case 6 Medium Hidden
7 Case 7 Medium Late
8 Case 8 Medium Hint
9 Case 9 Long Early
10 Case 10 Long Hidden
11 Case 11 Long Late
12 Case 12 Long Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods compared.

Type Method Reference
M1 Artificial Bee Colony Algorithm [14]
M2 Hybrid Artificial Bee Colony Algorithm [15]
M3 Global best harmony search [16]
M4 Harmony Search with Hill Climbing [17]
M5 Integer Programming Technique for NRP [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter Value
Number of bees 100
Maximum iterations 1000
Initialization technique Binary
Heuristic Modified Nelder-Mead Method
Termination condition Maximum iterations
Runs 20
Reflection coefficient α > 0
Expansion coefficient γ > 1
Contraction coefficient 0 < β < 1
Shrinkage coefficient 0 < δ < 1

differences between various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental results with respect to the best value.

Instances | Optimal value | MODBCO (Best, Worst) | M1 (Best, Worst) | M2 (Best, Worst) | M3 (Best, Worst) | M4 (Best, Worst) | M5 (Best, Worst)

Sprint early 01 56 56 75 63 74 57 81 59 75 58 77 58 77
Sprint early 02 58 59 77 66 89 59 80 61 82 64 88 60 85
Sprint early 03 51 51 68 60 83 52 75 54 70 59 84 53 67
Sprint early 04 59 59 77 68 86 60 76 63 80 67 84 61 85
Sprint early 05 58 58 74 65 86 59 77 60 79 63 86 60 84
Sprint early 06 54 53 69 59 81 55 79 57 73 58 75 56 77
Sprint early 07 56 56 75 62 81 58 79 60 75 61 85 57 78
Sprint early 08 56 56 76 59 79 58 79 59 80 58 82 57 78
Sprint early 09 55 55 82 59 79 57 76 59 80 61 78 56 80
Sprint early 10 52 52 67 58 76 54 73 55 75 58 78 53 76
Sprint hidden 01 32 32 48 45 67 34 57 43 60 46 65 33 55
Sprint hidden 02 32 32 51 41 59 34 55 37 53 44 68 33 51
Sprint hidden 03 62 62 79 74 92 63 86 71 90 78 96 62 84
Sprint hidden 04 66 66 81 79 102 67 91 80 96 78 100 66 91
Sprint hidden 05 59 58 73 68 90 60 79 63 84 69 88 59 77
Sprint hidden 06 134 128 146 164 187 131 145 203 219 169 188 132 150
Sprint hidden 07 153 154 172 181 204 154 175 197 215 187 210 155 174
Sprint hidden 08 204 201 219 246 265 205 225 267 286 240 260 206 224
Sprint hidden 09 338 337 353 372 390 339 363 274 291 372 389 340 365
Sprint hidden 10 306 306 324 328 347 307 330 347 368 322 342 308 324
Sprint late 01 37 37 53 50 68 38 57 46 66 52 72 39 57
Sprint late 02 42 41 61 53 74 43 61 50 71 56 80 44 67
Sprint late 03 48 45 65 57 80 50 73 57 76 60 77 49 72
Sprint late 04 75 71 87 88 108 75 95 106 127 95 115 74 99
Sprint late 05 44 46 66 52 65 46 68 53 74 57 74 46 70
Sprint late 06 42 42 60 46 65 44 68 45 64 52 70 43 62
Sprint late 07 42 44 63 51 69 46 69 62 81 55 78 44 68
Sprint late 08 17 17 36 17 37 19 41 19 39 19 40 17 39
Sprint late 09 17 17 33 18 37 19 42 19 40 17 40 17 40
Sprint late 10 43 43 59 56 74 45 67 56 74 54 71 43 62
Sprint hint 01 78 73 92 85 103 77 91 103 119 90 108 77 99
Sprint hint 02 47 43 59 57 76 48 68 61 80 56 81 45 68
Sprint hint 03 57 49 67 74 96 52 75 79 97 69 93 53 66
Medium early 01 240 245 263 262 284 247 267 247 262 280 305 250 269
Medium early 02 240 243 262 263 280 247 270 247 263 281 301 250 273
Medium early 03 236 239 256 261 281 244 264 244 262 287 311 245 269
Medium early 04 237 245 262 259 278 242 261 242 258 278 297 247 272
Medium early 05 303 310 326 331 351 310 332 310 329 330 351 313 338
Medium hidden 01 123 143 159 190 210 157 180 157 177 410 429 192 214
Medium hidden 02 243 230 248 286 306 256 277 256 273 412 430 266 284
Medium hidden 03 37 53 70 66 84 56 78 56 69 182 203 61 81
Medium hidden 04 81 85 104 102 119 95 119 95 114 168 191 100 124
Medium hidden 05 130 182 201 202 224 178 202 178 197 520 545 194 214
Medium late 01 157 176 195 207 227 175 198 175 194 234 257 179 194
Medium late 02 18 30 45 53 76 32 52 32 53 49 67 35 55
Medium late 03 29 35 52 71 90 39 60 39 59 59 78 44 66
Medium late 04 35 42 58 66 83 49 57 49 70 71 89 48 70
Medium late 05 107 129 149 179 199 135 156 135 156 272 293 141 164
Medium hint 01 40 42 62 70 90 49 71 49 65 64 82 50 71
Medium hint 02 84 91 107 142 162 95 116 95 115 133 158 96 115
Medium hint 03 129 135 153 188 209 141 161 141 152 187 208 148 166

18 Computational Intelligence and Neuroscience

Table 7: Continued.

Instances   Optimal value   MODBCO (Best, Worst)   M1 (Best, Worst)   M2 (Best, Worst)   M3 (Best, Worst)   M4 (Best, Worst)   M5 (Best, Worst)

Long early 01 197 194 209 241 259 200 222 200 221 339 362 220 243
Long early 02 219 228 245 276 293 232 254 232 253 399 419 253 275
Long early 03 240 240 258 268 291 243 257 243 262 349 366 251 273
Long early 04 303 303 321 336 356 306 324 306 320 411 430 319 342
Long early 05 284 284 300 326 347 287 310 287 308 383 403 301 321
Long hidden 01 346 389 407 444 463 403 422 403 421 4466 4488 422 442
Long hidden 02 90 108 126 132 150 120 139 120 139 1071 1094 128 148
Long hidden 03 38 48 63 61 78 54 72 54 75 163 181 54 79
Long hidden 04 22 27 45 49 71 32 54 32 51 113 132 36 58
Long hidden 05 41 55 71 78 99 59 81 59 70 139 157 60 83
Long late 01 235 249 267 290 310 260 278 260 276 588 606 262 286
Long late 02 229 261 280 295 318 265 278 265 281 577 595 263 281
Long late 03 220 259 275 307 325 264 285 264 285 567 588 272 295
Long late 04 221 257 292 304 323 263 284 263 281 604 627 276 294
Long late 05 83 92 107 142 161 104 122 104 125 329 349 118 138
Long hint 01 31 40 58 53 73 44 67 44 65 126 150 50 72
Long hint 02 17 29 47 40 62 32 55 32 51 122 145 36 61
Long hint 03 53 79 137 117 135 85 104 85 101 278 303 102 123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and thus the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted by rejecting the alternate hypothesis.
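The one-way ANOVA quantities behind this decision rule can be sketched as follows. This is an illustrative helper, not the authors' code: it produces the Sum of squares / df / Mean square / F entries laid out in Tables 8, 10, 12, 14, 16, and 18, while the Sig. (p) value would additionally require the F-distribution tail probability for (df between, df within) degrees of freedom.

```python
# Illustrative one-way ANOVA table entries, computed by hand (not the
# authors' implementation). Each inner list is one method's results.

def anova_oneway(groups):
    """Return (ss_between, ss_within, df_between, df_within, F)."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-groups sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of each value around its group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    F = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, F
```

A large F (equivalently, a Sig. value below 0.05) leads to rejecting the null hypothesis that all methods share the same mean.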

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies significant and nonsignificant differences between any two methods. This method ranks the methods by their mean values in increasing or decreasing order and groups the methods that are not significantly different.
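Duncan's multiple range test ranks the method means and chains them into homogeneous subsets. The sketch below illustrates only that rank-and-group step with a caller-supplied `critical_range`; the real DMRT derives size-dependent critical ranges from the studentized range distribution. The means in the usage note are illustrative values in the spirit of Table 8, not the paper's exact computation.

```python
# Simplified sketch of the grouping idea behind Duncan's multiple range
# test (NOT the full test: the critical range is supplied by the caller
# instead of being derived from the studentized range statistic).

def rank_and_group(means, critical_range):
    """Sort methods by mean; adjacent methods whose means differ by less
    than critical_range are chained into the same homogeneous subset."""
    ranked = sorted(means.items(), key=lambda kv: kv[1])
    groups = [[ranked[0][0]]]
    for (name, m), (_, prev_m) in zip(ranked[1:], ranked):
        if m - prev_m < critical_range:
            groups[-1].append(name)   # not significantly different: same subset
        else:
            groups.append([name])     # significant jump: start a new subset
    return groups
```

With illustrative means such as `{"MODBCO": 120.2, "M2": 124.1, "M5": 128.1, "M3": 129.3, "M1": 143.2, "M4": 263.6}` and a critical range of 25, this yields two subsets, with M4 isolated from the rest.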

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP using the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. The performance is compared with previous methods; each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is the minimization of cost, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test

Source factor: best value
                 Sum of squares   df    Mean square   F          Sig.
Between groups   1061949          5     212389.8      3.620681   0.003
Within groups    23933354         408   58660.18
Total            24995303         413

(b) DMRT test

Duncan test: best value
Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    120.2319
M2        69    124.1304
M5        69    128.087
M3        69    129.3478
M1        69    143.1594
M4        69                263.5507
Sig.            0.629       1.000

have considered 69 datasets of diverse sizes. It is apparent that MODBCO accomplished 34 best results out of 69 instances.

The statistical analysis tests ANOVA and DMRT for best values are shown in Table 8. It is perceived that the significance values are less than 0.05, which shows that the null hypothesis is rejected. The significant difference between


Table 9: Experimental result with respect to error rate (%).

Case      MODBCO   M1       M2      M3      M4       M5
Case 1    -0.01    11.54    2.54    5.77    9.41     2.88
Case 2    -0.73    20.16    1.69    19.81   22.35    0.83
Case 3    -0.47    18.27    5.63    23.24   24.72    2.26
Case 4    -9.65    20.03    -2.64   33.48   18.53    -4.18
Case 5    0.00     9.57     2.73    2.73    16.31    3.93
Case 6    -2.57    46.37    27.71   27.71   220.44   40.62
Case 7    28.00    105.40   37.98   37.98   116.36   45.82
Case 8    3.93     63.26    14.97   14.97   54.43    18.00
Case 9    0.52     17.14    2.15    2.15    54.04    8.61
Case 10   15.55    69.70    36.25   36.25   652.47   43.25
Case 11   9.16     40.08    18.13   18.13   185.92   23.41
Case 12   49.56    109.01   63.52   63.52   449.54   88.50

Figure 7: Performance analysis with respect to error rate. [Four bar-chart panels covering Cases 1–3, 4–6, 7–9, and 10–12 on the x-axis (NRP instance); y-axis: error rate (%); series: MODBCO, M1, M2, M3, M4, M5.]

various optimization algorithms is observed. The DMRT test shows the homogeneous groups: two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate compared to the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of 33 instances of sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of instances have

attained a lower error rate. For the medium and larger sized datasets, the obtained rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a better optimum value than the one specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they are not able to find the optimal solution and get trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.
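Assuming the usual definition of error rate as the percentage deviation of the obtained best value from the published optimum (consistent with negative entries in Table 9 denoting new best values), a minimal sketch is:

```python
# Hedged sketch of the error-rate metric (the exact formula is not stated
# in this excerpt); negative output means the published optimum was beaten.

def error_rate(obtained, optimal):
    """Percentage deviation of the obtained cost from the known optimum."""
    return 100.0 * (obtained - optimal) / optimal
```

For example, an obtained best of 63 against a published optimum of 56 gives an error rate of 12.5%.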


Figure 8: Performance analysis with respect to Average Convergence. [Four bar-chart panels covering Cases 1–3, 4–6, 7–9, and 10–12 on the x-axis (NRP instance); y-axis: Average Convergence (0–100); series: MODBCO, M1, M2, M3, M4, M5.]

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test

Source factor: error rate
                 Sum of squares   df    Mean square    F          Sig.
Between groups   638680.9         5     127736.1796    15.26182   0.000
Within groups    3414820          408   8369.657384
Total            4053501          413

(b) DMRT test

Duncan test: error rate
Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    5.402238
M2        69    13.77936
M5        69    17.31724
M3        69    20.99903
M1        69    36.49033
M4        69                121.1591
Sig.            0.07559     1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis. Thus there is a significant difference in the error-rate values with respect to the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small size instances and an 82% convergence rate on the medium size instances. For the longer instances, it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and got trapped in local optima. It is observed that as the problem size increases, the convergence rate reduces and becomes worse for many algorithms on the larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
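The text defines Average Convergence through the ratio of average population fitness to the optimal fitness but does not give the exact formula. One reading consistent with the negative entries in Table 11 (which appear when the population average deviates by more than the optimum itself) is the following hypothetical sketch:

```python
# Hypothetical reading of Average Convergence (%), not the authors' stated
# formula: 100% when the population average equals the optimum, and
# negative when the average deviates by more than the optimum itself.

def average_convergence(avg_fitness, optimal):
    """Convergence of the population average toward the optimal fitness."""
    return 100.0 * (1.0 - (avg_fitness - optimal) / optimal)
```

Under this reading, a population averaging exactly the optimum scores 100%, and one averaging twice the optimum scores 0%.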

The statistical test result for Average Convergence with the different optimization algorithms is observed in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence.

Case      MODBCO   M1       M2       M3      M4        M5
Case 1    87.01    70.84    69.79    72.40   62.59     66.87
Case 2    91.96    65.88    76.93    64.19   56.88     77.21
Case 3    83.40    53.63    47.23    38.23   30.07     47.44
Case 4    99.02    62.96    77.23    45.94   53.14     77.20
Case 5    96.27    86.49    91.10    92.71   77.29     89.01
Case 6    79.49    42.52    52.87    59.08   -139.09   39.82
Case 7    56.34    -32.73   24.52    26.23   -54.91    9.97
Case 8    84.00    21.72    61.67    68.51   22.95     58.22
Case 9    96.98    78.81    91.68    92.59   39.78     84.36
Case 10   70.16    8.16     29.97    37.66   -584.44   18.54
Case 11   84.58    54.10    73.49    74.40   -94.85    66.95
Case 12   15.13    -46.99   -22.73   -9.46   -411.18   -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. [Four bar-chart panels covering Cases 1–3, 4–6, 7–9, and 10–12 on the x-axis (NRP instance); y-axis: Average Standard Deviation (0–15); series: MODBCO, M1, M2, M3, M4, M5.]

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis test shows there are two homogenous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
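A minimal sketch of this dispersion measure, assuming the population form of the standard deviation over a method's repeated-run results (the excerpt does not state whether the sample or population form was used):

```python
# Population standard deviation of a method's run results; an assumed
# form, since the paper does not specify sample vs. population.

def std_dev(values):
    """Dispersion of run results around their mean."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
```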

The statistical test result for Average Standard Deviation with the different types of optimization algorithms is shown in Table 14. There is a significant difference in the mean values of standard deviation among the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity. [Four bar-chart panels covering Cases 1–3, 4–6, 7–9, and 10–12 on the x-axis (NRP instance); y-axis: Convergence Diversity; series: MODBCO, M1, M2, M3, M4, M5.]

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test

Source factor: Average Convergence
                 Sum of squares   df    Mean square   F          Sig.
Between groups   712642.475       5     142528.495    15.47047   0.0000
Within groups    3758878.26       408   9212.9369
Total            4471520.73       413

(b) DMRT test

Duncan test: Average Convergence
Method    N     Subset for alpha = 0.05
                1           2
M4        69    -47.6945
M1        69                46.4245
M5        69                53.6879
M3        69                57.6296
M2        69                59.5103
MODBCO    69                81.6997
Sig.            1.00        0.054

value 0.05. In the DMRT test, there are three homogenous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%. For the larger datasets, the Convergence Diversity is 62%, to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
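Under the definition above (best minus worst convergence in the population), a minimal sketch is:

```python
# Convergence Diversity as defined in the text: spread between the best
# and worst convergence values observed in the population.

def convergence_diversity(convergences):
    """Difference between the best and worst convergence in the population."""
    return max(convergences) - min(convergences)
```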

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of the Convergence Diversity among the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. From the DMRT test, the grouping of the various algorithms based on mean values is shown; there are three homogenous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion. [Single bar-chart panel with Cases 1–12 on the x-axis (NRP instance); y-axis: Average Cost Diversion; series: MODBCO, M1, M2, M3, M4, M5.]

among the various optimization algorithms with respect to the mean values of the cost diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances got diverged, out of which many instances yield the optimum value. The larger dataset got 56% of cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
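The excerpt does not give a formula for cost diversion. One plausible reading, consistent with negative entries meaning that runs found new, better optima, is the mean deviation of a method's run costs from the published optimum; this helper is an assumption, not the authors' formula:

```python
# Hypothetical reading of Average Cost Diversion (not the authors' stated
# formula): mean deviation of run costs from the published optimum.
# Negative output would mean the runs beat the published optimum on average.

def average_cost_diversion(run_costs, optimal):
    """Mean deviation of repeated-run costs from the known optimum."""
    return sum(c - optimal for c in run_costs) / len(run_costs)
```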

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals there are two homogenous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test

Source factor: Average Standard Deviation
                 Sum of squares   df    Mean square   F         Sig.
Between groups   69.74407         5     13.94881      13.6322   0.000
Within groups    417.476          408   1.023226
Total            487.2201         413

(b) DMRT test

Duncan test: Average Standard Deviation
Method    N     Subset for alpha = 0.05
                1          2           3
MODBCO    69    7.4743
M2        69               9.615152
M3        69               9.677027
M5        69               9.729477
M1        69               10.13242
M4        69                           11.9316
Sig.            1.00       0.394       1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test

Source factor: Convergence Diversity
                 Sum of squares   df    Mean square   F          Sig.
Between groups   5144.758         5     1028.952      9.287168   0.000
Within groups    45203.48         408   110.7928
Total            50348.24         413

(b) DMRT test

Duncan test: Convergence Diversity
Method    N     Mean (three subsets at alpha = 0.05)
M3        69    18.363232
MODBCO    69    18.42029
M1        69    19.68116
M2        69    20.57971
M4        69    20.68116
M5        69    21.2464
Sig.            0.919, 0.096, 1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO has more ability to attain the best value on the performance metrics compared to the competitor algorithms, which use the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% toward the optimum value, and it attained a lower worst value in addition to the best solution. The datasets are divided based on their size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 49%,

22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO achieved 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system was tested on the larger sized datasets, and it works astoundingly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus MODBCO converges the population toward an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method. However, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles solving the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of 69 instances got the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms the classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    -1.00    21.20   0.80    19.60   21.90    0.80
Case 3    -0.40    8.10    1.80    10.60   11.00    0.90
Case 4    -5.67    11.33   -1.67   20.33   11.00    -2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion
                 Sum of squares   df    Mean square   F          Sig.
Between groups   1061949          5     212389.8      4.919985   0.000
Within groups    17612867         408   43168.79
Total            18674816         413

(b) DMRT test

Duncan test: Average Cost Diversion
Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69                149.5217
Sig.            0.573558    1.000

Optimization in solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring unfeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F. No. 2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithm in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.


neighborhood structure will reroster nurses to different shifts, and the second neighborhood structure will perform a repairing mechanism.

Aickelin and Dowsland [28] proposed a technique for shift patterns; they considered shift patterns with penalty preferences and the number of successive working days. The indirect genetic algorithm generates various heuristic decoders for shift patterns to reconstruct the shift roster for the nurses. A qualified roster is generated using decoders with the help of the best permutations of nurses. To generate the best search-space solutions for the permutation of nurses, the authors used an adaptive iterative method to adjust the order of nurses as they are scheduled one by one. Asta et al. [29] and Anwar et al. [30] proposed a tensor-based hyperheuristic to solve the NRP. The authors tuned a specific group of datasets and embedded a tensor-based machine learning algorithm. A tensor-based hyperheuristic with memory management is used to generate the best solution. This approach is considered in life-long applications to extract knowledge and desired behavior throughout the run time.

Todorovic and Petrovic [31] proposed a Bee Colony Optimization approach to solve the NRP, in which all unscheduled shifts are allocated to the available nurses in a constructive phase. The algorithm combines constructive moves with local search to improve the quality of the solution. In each forward pass, a predefined number of unscheduled shifts is allocated to the nurses, and solutions with little improvement in the objective function are discarded. Intelligent reduction of the neighborhood search improved the current solution. In the construction phase, unassigned shifts allotted to nurses can violate constraints and lead to higher penalties.

Several methods have been proposed using the INRC2010 dataset to solve the NRP; the authors have considered five recent competitors to measure the effectiveness of the proposed algorithm. Asaju et al. [14] proposed an Artificial Bee Colony (ABC) algorithm to solve the NRP. The process is done in two phases: first, heuristic-based ordering of shift patterns is used to generate a feasible solution; in the second phase, the ABC algorithm is used to obtain the solution. In this method, premature convergence takes place and the solution gets trapped in local optima. The lack of a local search algorithm in this process leads to higher penalties. Awadallah et al. [15] developed a metaheuristic technique, the hybrid artificial bee colony (HABC), to solve the NRP. In the ABC algorithm, the employed-bee phase was replaced by a hill-climbing approach to strengthen the exploitation process. The use of hill climbing in ABC generates better values but leads to high computational time.

The global best harmony search with pitch adjustment design is used to tackle the NRP in [16]. The authors adapted the harmony search algorithm (HSA) for the exploitation process and particle swarm optimization (PSO) for the exploration process. In HSA, solutions are generated based on three operators, namely, memory consideration, random consideration, and pitch adjustment, for the improvisation process. Two improvements were made to solve the NRP: multipitch adjustment to improve the exploitation process, and replacing random selection with the global best to increase convergence speed.

A hybrid harmony search algorithm with hill climbing is used to solve the NRP in [17]. For local search, the metaheuristic harmony and hill-climbing approaches are used. The memory consideration parameter in harmony search is replaced by the PSO algorithm. The derivative criteria reduce the number of iterations towards local minima. This process considers many parameters to construct the roster, since the improvisation process runs at each iteration.

Santos et al. [18] used integer programming (IP) to solve the NRP and proposed a monolithic compact IP with polynomial numbers of constraints and variables. The authors used both upper and lower bounds to obtain the optimal cost. They estimated and improved lower-bound values towards the optimum, and this method requires additional processing time.

3. Mathematical Model

The NRP is a real-world problem at hospitals: the problem is to assign a predefined set of shifts (such as S1 day shift, S2 noon shift, S3 night shift, and S4 free shift) over a scheduling period to a set of nurses with different preferences and skills in each ward. Figure 1 shows an illustrative example of a feasible nurse roster, which consists of four shifts, namely, day shift, noon shift, night shift, and free shift (holiday), allocating five nurses over 11 days of the scheduling period. Each column in the schedule table represents a day, and each cell entry represents the shift type allocated to a nurse. Each nurse is allocated one shift per day, and the number of shifts is assigned based on the hospital contracts. The problem has variants in the number of shift types, nurses, nurse skills, contracts, and scheduling period. In general, both hard and soft constraints are considered for generating and assessing solutions.

Hard constraints are regulations that must be satisfied to achieve a feasible solution. They cannot be violated, since hard constraints are demanded by hospital regulations. The hard constraints HC1 to HC5 must be fulfilled to schedule the roster. The soft constraints SC1 to SC14 are desirable, and the selection of soft constraints determines the quality of the roster. Tables 1 and 2 list the sets of hard and soft constraints considered to solve the NRP. This section describes the mathematical model of the hard and soft constraints extensively.

The NRP consists of a set of nurses $n = 1, 2, \ldots, N$ (one row of the roster per nurse), a set of shifts $s = 1, 2, \ldots, S$, and a set of days $d = 1, 2, \ldots, D$. The solution roster $\mathbf{S}$ is a 0-1 matrix of dimension $N \times SD$, defined as

$$S_{nds} = \begin{cases} 1, & \text{if nurse } n \text{ works shift } s \text{ on day } d, \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$

HC1. In this constraint, all demanded shifts are assigned to a nurse:

$$\sum_{n=1}^{N} S_{nds} = E_{ds} \quad \forall d \in D,\ s \in S, \qquad (2)$$
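As a concrete illustration of the 0-1 encoding in equation (1), the sketch below (hypothetical helper name, illustrative sizes, not the paper's data) builds $S_{nds}$ from a per-nurse shift table and derives the coverage counts that appear on the left-hand side of (2):

```python
# Concrete illustration (hypothetical helper and sizes) of the 0-1 encoding in
# equation (1): roster[n][d][s] = 1 iff nurse n works shift s on day d.

N, D, S_TYPES = 3, 5, 4  # nurses, days, shift types (illustrative sizes)

def encode_roster(assignments, n_nurses, n_days, n_shifts):
    """assignments[n][d] is the shift index worked by nurse n on day d,
    or None for a free day with no shift."""
    roster = [[[0] * n_shifts for _ in range(n_days)] for _ in range(n_nurses)]
    for n in range(n_nurses):
        for d in range(n_days):
            s = assignments[n][d]
            if s is not None:
                roster[n][d][s] = 1
    return roster

assignments = [
    [0, 1, None, 2, 0],   # nurse 0: day, noon, free, night, day
    [1, None, 0, 0, 2],   # nurse 1
    [2, 2, 1, None, 1],   # nurse 2
]
roster = encode_roster(assignments, N, D, S_TYPES)
# Left-hand side of equation (2): number of nurses covering shift s on day d
coverage = [[sum(roster[n][d][s] for n in range(N)) for s in range(S_TYPES)]
            for d in range(D)]
```

Comparing `coverage` against the demand matrix $E_{ds}$ then checks HC1 directly.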


[Figure 1 shows a feasible roster grid for five nurses over 11 days (Monday through Thursday of the following week), with cells holding the shift types S1 (day shift), S2 (noon shift), S3 (night shift), and S4 (free shift).]

Figure 1: Illustrative example of the Nurse Rostering Problem.

Table 1: Hard constraints.

HC1: All demanded shifts are assigned to a nurse.
HC2: A nurse can work only a single shift per day.
HC3: The minimum number of nurses required for the shift.
HC4: The total number of working days for the nurse should be between the maximum and minimum range.
HC5: A night shift followed by a day shift is not allowed.

Table 2: Soft constraints.

SC1: The maximum number of shifts assigned to each nurse.
SC2: The minimum number of shifts assigned to each nurse.
SC3: The maximum number of consecutive working days assigned to each nurse.
SC4: The minimum number of consecutive working days assigned to each nurse.
SC5: The maximum number of consecutive free days (no shift allotted) for each nurse.
SC6: The minimum number of consecutive free days (no shift allotted) for each nurse.
SC7: The maximum number of consecutive working weekends with at least one shift assigned to each nurse.
SC8: The minimum number of consecutive working weekends with at least one shift assigned to each nurse.
SC9: The maximum number of weekends with at least one shift assigned to each nurse.
SC10: Specific working day.
SC11: Requested day off.
SC12: Specific shift on.
SC13: Specific shift off.
SC14: Nurse not working on an unwanted pattern.

where $E_{ds}$ is the number of nurses required on day $d$ at shift $s$, and $S_{nds}$ is the allocation of nurses in the feasible solution roster.

HC2. In this constraint, each nurse can work no more than one shift per day:

$$\sum_{s=1}^{S} S_{nds} \le 1 \quad \forall n \in N,\ d \in D, \qquad (3)$$

where $S_{nds}$ is the allocation of nurse $n$ in the solution at shift $s$ on day $d$.

HC3. This constraint deals with the minimum number of nurses required for each shift:

$$\sum_{n=1}^{N} S_{nds} \ge \min{}_{nds} \quad \forall d \in D,\ s \in S, \qquad (4)$$


where $\min{}_{nds}$ is the minimum number of nurses required for shift $s$ on day $d$.

HC4. In this constraint, the total number of working days for each nurse should lie between the minimum and maximum range over the given scheduling period:

$$W_{\min} \le \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds} \le W_{\max} \quad \forall n \in N. \qquad (5)$$

The average working shift for a nurse can be determined using

$$W_{\text{avg}} = \frac{1}{N} \sum_{n=1}^{N} \sum_{d=1}^{D} \sum_{s=1}^{S} S_{nds}, \qquad (6)$$

where $W_{\min}$ and $W_{\max}$ are the minimum and maximum numbers of working days in the scheduling period and $W_{\text{avg}}$ is the average working shift of a nurse.

HC5. In this constraint, shift 3 followed by shift 1 is not allowed; that is, a night shift followed by a day shift on the next day is not allowed:

$$S_{n d s_3} + S_{n,d+1,s_1} \le 1 \quad \forall n \in N,\ d \in D. \qquad (7)$$
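The hard constraints HC1 to HC5 above can be checked mechanically on a candidate roster. The following is a minimal sketch under assumed conventions (0-based shift indices with the day shift at index 0 and the night shift at index 2; the demand matrix, minimum cover, and workload bounds are invented for illustration, not the paper's data):

```python
# Hedged sketch for checking hard constraints HC1-HC5 on a 0-1 roster
# S[n][d][s] as defined in equation (1). E[d][s], min_cover, and the
# workload bounds are illustrative assumptions, not the paper's data.
# Shift indices are 0-based here: day shift = 0, night shift = 2.

def violates_hard_constraints(S, E, min_cover, w_min, w_max):
    N, D, Sh = len(S), len(S[0]), len(S[0][0])
    for d in range(D):
        for s in range(Sh):
            worked = sum(S[n][d][s] for n in range(N))
            if worked != E[d][s]:         # HC1: demanded shifts covered exactly
                return True
            if worked < min_cover[d][s]:  # HC3: minimum nurses per shift
                return True
    for n in range(N):
        for d in range(D):
            if sum(S[n][d]) > 1:          # HC2: at most one shift per day
                return True
        days_worked = sum(S[n][d][s] for d in range(D) for s in range(Sh))
        if not w_min <= days_worked <= w_max:  # HC4: workload bounds
            return True
        for d in range(D - 1):
            # HC5, following equation (7): a night shift must not be
            # followed by a day shift on the next day.
            if S[n][d][2] == 1 and S[n][d + 1][0] == 1:
                return True
    return False
```

A roster passing this check is feasible in the sense of Table 1; soft-constraint quality is assessed separately through the penalty objective.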

SC1. The maximum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\Bigl(\Bigl(\sum_{d=1}^{D}\sum_{s=1}^{S} S_{nds} - \Phi_n^{ub}\Bigr), 0\Bigr) \quad \forall n \in N, \qquad (8)$$

where $\Phi_n^{ub}$ is the maximum number of shifts assigned to nurse $n$.

SC2. The minimum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\Bigl(\Bigl(\Phi_n^{lb} - \sum_{d=1}^{D}\sum_{s=1}^{S} S_{nds}\Bigr), 0\Bigr) \quad \forall n \in N, \qquad (9)$$

where $\Phi_n^{lb}$ is the minimum number of shifts assigned to nurse $n$.

SC3. The maximum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduling period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\bigl((C_{kn} - \Theta_n^{ub}), 0\bigr) \quad \forall n \in N, \qquad (10)$$

where $\Theta_n^{ub}$ is the maximum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_{kn}$ is the length of the $k$th working span of nurse $n$.

SC4. The minimum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduling period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\bigl((\Theta_n^{lb} - C_{kn}), 0\bigr) \quad \forall n \in N, \qquad (11)$$

where $\Theta_n^{lb}$ is the minimum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_{kn}$ is the length of the $k$th working span of nurse $n$.

SC5. The maximum number of consecutive days on which no shift is allotted to each nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\bigl((\eth_{kn} - \varphi_n^{ub}), 0\bigr) \quad \forall n \in N, \qquad (12)$$

where $\varphi_n^{ub}$ is the maximum number of consecutive free days of nurse $n$, $\Gamma_n$ is the total number of consecutive free spans of nurse $n$ in the roster, and $\eth_{kn}$ is the length of the $k$th free span of nurse $n$.

SC6. The minimum number of consecutive days on which no shift is allotted to each nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\bigl((\varphi_n^{lb} - \eth_{kn}), 0\bigr) \quad \forall n \in N, \qquad (13)$$

where $\varphi_n^{lb}$ is the minimum number of consecutive free days of nurse $n$, $\Gamma_n$ is the total number of consecutive free spans of nurse $n$ in the roster, and $\eth_{kn}$ is the length of the $k$th free span of nurse $n$.

SC7. The maximum number of consecutive working weekends with at least one shift assigned to the nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\bigl((\zeta_{kn} - \Omega_n^{ub}), 0\bigr) \quad \forall n \in N, \qquad (14)$$

where $\Omega_n^{ub}$ is the maximum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working weekend spans of nurse $n$ in the roster, and $\zeta_{kn}$ is the length of the $k$th working weekend span of nurse $n$.

SC8. The minimum number of consecutive working weekends with at least one shift assigned to the nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\bigl((\Omega_n^{lb} - \zeta_{kn}), 0\bigr) \quad \forall n \in N, \qquad (15)$$


where $\Omega_n^{lb}$ is the minimum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working weekend spans of nurse $n$ in the roster, and $\zeta_{kn}$ is the length of the $k$th working weekend span of nurse $n$.

SC9. The maximum number of weekends with at least one shift assigned to the nurse in four weeks is as follows:

$$\sum_{k=1}^{\chi_n} \max\bigl((\chi_{kn} - \varpi_n^{ub}), 0\bigr) \quad \forall n \in N, \qquad (16)$$

where $\chi_{kn}$ is the number of working days at the $k$th weekend of nurse $n$, $\varpi_n^{ub}$ is the maximum number of working weekends for nurse $n$, and $\chi_n$ is the total count of weekends in the scheduling period for nurse $n$.

SC10. The nurse can request to work on a particular day of the given scheduling period:

$$\sum_{d=1}^{D} \lambda_{dn} = 1 \quad \forall n \in N, \qquad (17)$$

where $\lambda_{dn}$ is the request from nurse $n$ to work on any shift on a particular day $d$.

SC11. The nurse can request not to work on a particular day of the given scheduling period:

$$\sum_{d=1}^{D} \lambda_{dn} = 0 \quad \forall n \in N, \qquad (18)$$

where $\lambda_{dn}$ is the request from nurse $n$ not to work on any shift on a particular day $d$.

SC12. The nurse can request to work a particular shift on a particular day of the given scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_{dsn} = 1 \quad \forall n \in N, \qquad (19)$$

where $\Upsilon_{dsn}$ is the shift request from nurse $n$ to work shift $s$ on day $d$.

SC13. The nurse can request not to work a particular shift on a particular day of the given scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_{dsn} = 0 \quad \forall n \in N, \qquad (20)$$

where $\Upsilon_{dsn}$ is the shift request from nurse $n$ not to work shift $s$ on day $d$.

SC14. The nurse should not work on the unwanted patterns specified for the scheduling period:

$$\sum_{u=1}^{\varrho_n} \mu_{un} \quad \forall n \in N, \qquad (21)$$

where $\mu_{un}$ is the total count of occurrences of pattern type $u$ for nurse $n$, and $\varrho_n$ indexes the set of unwanted patterns specified for nurse $n$.

The objective function of the NRP is to maximize nurse preferences and minimize the penalty cost of violations of the soft constraints, as in (22):

$$\min f(S_{nds}) = \sum_{\mathrm{sc}=1}^{14} P_{\mathrm{sc}}\Bigl(\sum_{n=1}^{N}\sum_{s=1}^{S}\sum_{d=1}^{D} S_{nds}\Bigr) \cdot T_{\mathrm{sc}}\Bigl(\sum_{n=1}^{N}\sum_{s=1}^{S}\sum_{d=1}^{D} S_{nds}\Bigr). \qquad (22)$$

Here sc indexes the set of soft constraints listed in Table 2, $P_{\mathrm{sc}}(x)$ refers to the penalty weight of violating the soft constraint, and $T_{\mathrm{sc}}(x)$ refers to the total violations of the soft constraint in the roster solution. It should be noted that the penalty function [32] is used in the NRP to improve performance and to provide a fair comparison with other optimization algorithms.
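As a sketch of how (22) is evaluated in practice, the roster cost below is the weighted sum of soft-constraint violation counts; the constraint ids, weights, and counts are placeholders for illustration, not the paper's values:

```python
# Illustrative sketch of the penalized objective in equation (22): the roster
# cost is the sum over soft constraints of penalty weight times violation
# count. Weights and counts below are placeholders, not the paper's values.

def roster_cost(violations, weights):
    """violations[sc] and weights[sc] are keyed by soft-constraint id SC1..SC14."""
    return sum(weights[sc] * violations[sc] for sc in violations)

weights = {"SC1": 10, "SC2": 10, "SC11": 30}   # assumed penalty weights P_sc
violations = {"SC1": 2, "SC2": 0, "SC11": 1}   # violation counts T_sc on a roster
cost = roster_cost(violations, weights)        # 10*2 + 10*0 + 30*1 = 50
```

Minimizing this cost over feasible rosters is exactly the search carried out by the metaheuristic in the following sections.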

4. Bee Colony Optimization

4.1. Natural Behavior of Honey Bees. Swarm intelligence is an emerging discipline for the study of problems that require an optimal approach rather than the traditional approach. Swarm intelligence is a branch of artificial intelligence based on the study of the behavior of social insects, and it is composed of many individual actions in a decentralized and self-organized system. Swarm behavior is characterized by the natural behavior of many species, such as fish schools, herds of animals, and flocks of birds, formed out of the biological need to stay together. A swarm implies an aggregation of animals such as birds, fish, ants, and bees based on collective behavior. The individual agents in the swarm have a stochastic behavior that depends on the local perception of the neighborhood. Communication between insects takes place within the colonies, and it promotes collective intelligence among the colonies.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The swarm must be capable of simple space and time computations (proximity) and should respond to quality factors. The swarm should allow diverse activities and should not be restricted to narrow channels. The swarm should maintain stability and should not fluctuate with every change of behavior. Adaptability means the swarm must be able to change its behavior mode when required. Several hundreds of bees from a swarm work together to find nesting sites and select the best nest site. Bee Colony Optimization is inspired by this natural behavior of bees; the bee optimization algorithm derives from the group decision-making processes of honey bees. A honey bee searches for the best nest site by balancing speed and accuracy.

In a bee colony, there are three different types of bees: a single queen bee, thousands of male drone bees, and thousands of worker bees.

(1) The queen bee is responsible for creating new colonies by laying eggs.

Computational Intelligence and Neuroscience 7

(2) The male drone bees mate with the queen and are then discarded from the colonies.

(3) The remaining female bees in the hive are called worker bees; they are the building block of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honey bee comb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy is exhausted. After reaching the hive, scout bees share their information and start to explore rich food source locations with forager bees. The scout bee's information includes the direction, quality, quantity, and distance of the food source it found. The way of communicating information about a food source to foragers is a dance. There are two types of dance: the round dance and the waggle dance. The round dance provides the direction of the food source when the distance is small. The waggle dance indicates the position and the direction of the food source; the distance can be inferred from the speed of the dance. A greater speed indicates a smaller distance, and the quantity of food depends on the wriggling of the bee. The exchange of information among hive mates serves to acquire collective knowledge. Forager bees silently observe the behavior of scout bees to acquire knowledge about the directions and details of the food source.

The group decision process of honey bees serves to search for the best food source and nest site, and it is based on the swarming process of the honey bee. Swarming is the process in which the queen bee and half of the worker bees leave their hive to explore a new colony. The remaining worker bees and a daughter queen remain in the old hive to monitor the waggle dance. After leaving their parental hive, swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony; it provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source and allows the colony to gather food quickly and efficiently. The decision-making process for finding the best nest site can be carried out by swarm bees in two ways: consensus and quorum. Consensus is group agreement taken into account; quorum is the decision taken when the bee vote reaches a threshold value.
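The two decision modes can be written down in a few lines. This is an assumed formalization for illustration only, not a model taken from the paper:

```python
# Assumed formalization (illustration only) of the two decision modes above:
# consensus requires every scout to agree on one site; quorum accepts a site
# once its votes reach a threshold.

def consensus(votes):
    """Return the agreed site if all votes match, else None."""
    return votes[0] if len(set(votes)) == 1 else None

def quorum(votes, threshold):
    """Return a site whose vote count reaches the threshold, else None."""
    for site in set(votes):
        if votes.count(site) >= threshold:
            return site
    return None

votes = ["site_a", "site_a", "site_b", "site_a"]
```

Note that quorum can commit to a site while some scouts still disagree, which is why it trades a little accuracy for speed.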

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds a neighboring solution from its current path. The algorithm has a forward and a backward pass. In the forward pass, every bee starts to explore the neighborhood of its current solution and performs constructive and improving moves: all bees in the hive start their constructive moves, and then local search starts. In the backward pass, bees share the objective values obtained in the forward pass. The bees with higher priority are used to discard all nonimproving moves. The bees then either continue to explore in the next forward pass or continue the same process within the neighborhood. The flowchart

[Figure 2 shows the BCO flowchart: Initialization, then the forward pass with construction moves, then the backward pass and an update of the best solution, followed by a stopping-criteria check that loops back to the forward pass until satisfied.]

Figure 2: Flowchart of the BCO algorithm.

for BCO is shown in Figure 2. BCO is proficient at solving combinatorial optimization problems by creating colonies of a multiagent system. The pseudocode for BCO is described in Algorithm 1. The bee colony system provides standard, well-organized, and well-coordinated teamwork and multitasking performance [33].

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments and filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents through the Modified Nelder-Mead Method (MNMM). Each agent in a fragment passes information about the optimized point using MNMM. Using MNMM, the best points are obtained, and the best solution is chosen by the decision-making process of honey bees. The algorithm is a simplex-based method: a $D$-dimensional simplex is initialized with $D + 1$ vertices; in two dimensions it forms a triangle, and in three dimensions it forms a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is chosen


Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward pass: for every bee:
    (2.1) set i = 1;
    (2.2) evaluate all possible construction moves;
    (2.3) based on the evaluation, choose one move using the roulette wheel;
    (2.4) i = i + 1; if i <= N, go to step (2.2);
    where i is the counter for construction moves and N is the number of construction moves during one forward pass.
(3) Return to the hive.
(4) The backward pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) Calculate the probability (or use logical reasoning) for each bee to continue with its computed solution and become a recruiter bee.
(7) For every follower, choose a new solution from the recruiters.
(8) If the stopping criterion is not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1: Pseudocode of BCO.
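A compact, runnable sketch of the Algorithm 1 loop on a toy binary problem follows. The objective, parameters, and follower rule here are illustrative choices for demonstration, not the paper's NRP setup:

```python
import random

# Hedged sketch of the BCO loop of Algorithm 1 on a toy problem: each bee
# explores a neighbor (a bit flip chosen by roulette wheel), then in the
# backward pass bees are ranked and the worse half follows a recruiter.
# The objective and all parameters are illustrative, not the paper's.

def objective(bits, target=4):
    return (sum(bits) - target) ** 2  # minimized when exactly `target` bits are set

def roulette(weights):
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def bco(n_bees=10, length=8, iterations=30, seed=7):
    random.seed(seed)
    bees = [[random.randint(0, 1) for _ in range(length)] for _ in range(n_bees)]
    best = min(bees, key=objective)[:]
    for _ in range(iterations):
        # Forward pass: each bee tries one constructive move (bit flip),
        # chosen by roulette wheel over the quality of each candidate flip.
        for b, bits in enumerate(bees):
            weights = []
            for i in range(length):
                flipped = bits[:]
                flipped[i] ^= 1
                weights.append(1.0 / (1.0 + objective(flipped)))
            i = roulette(weights)
            candidate = bits[:]
            candidate[i] ^= 1
            if objective(candidate) <= objective(bits):  # keep improving moves
                bees[b] = candidate
        # Backward pass: rank bees; the worse half abandons its solution
        # and follows a recruiter from the better half.
        bees.sort(key=objective)
        for b in range(n_bees // 2, n_bees):
            bees[b] = bees[b - n_bees // 2][:]
        if objective(bees[0]) < objective(best):
            best = bees[0][:]
    return best, objective(best)

best, cost = bco()  # best solution found and its objective value
```

The same forward/backward skeleton carries over to the NRP by replacing the bit-flip move with a shift reassignment and the toy objective with the penalty function of equation (22).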

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely, reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

The simplex operations update each vertex closer to its optimal solution; the vertices are ordered based on their fitness values. The best vertex is $A_b$, the second best vertex is $A_s$, and the worst vertex is $A_w$, determined by the objective function. Let $A = (x, y)$ be a vertex in the triangle as a food source point; $A_b = (x_b, y_b)$, $A_s = (x_s, y_s)$, and $A_w = (x_w, y_w)$ are the positions of the food source points, that is, the local optimal points. The objective functions for $A_b$, $A_s$, and $A_w$ are calculated based on (23) towards the food source points.

The objective function used to construct the simplex for the local search with MNMM is formulated as

$$f(x, y) = x^2 - 4x + y^2 - y - xy. \qquad (23)$$

Based on the objective function values, the vertices (food points) are ordered in ascending order with their corresponding honey bee agents. The obtained values are ordered as $A_b \le A_s \le A_w$ with their honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail as follows.

(1) In the simplex triangle, the reflection coefficient $\alpha$, expansion coefficient $\gamma$, contraction coefficient $\beta$, and shrinkage coefficient $\delta$ are initialized.

(2) The objective function for the simplex triangle vertices is calculated, and the vertices are ordered. The best vertex with the lowest objective value is $A_b$, the second best vertex is $A_s$, and the worst vertex is named $A_w$; these vertices are ordered based on the objective function as $A_b \le A_s \le A_w$.

(3) The first two best vertices are selected, namely $A_b$ and $A_s$, and the construction proceeds by calculating the midpoint of the line segment joining the two best vertices, that is, food positions. The objective function decreases as the honey bee agent associated with the worst vertex moves towards the best and second best vertices; the value decreases as the honey agent moves from $A_w$ towards $A_b$ and from $A_w$ towards $A_s$. The midpoint vertex $A_m$ on the line joining the best and second best vertices is calculated using

$$A_m = \frac{A_b + A_s}{2}. \qquad (24)$$

(4) A reflection vertex $A_r$ is generated as the reflection of the worst point $A_w$. The objective function value $f(A_r)$ is calculated and compared with the worst vertex value $f(A_w)$. If $f(A_r) < f(A_w)$, proceed with step (5). The reflection vertex is calculated using

$$A_r = A_m + \alpha (A_m - A_w), \quad \text{where } \alpha > 0. \qquad (25)$$

(5) The expansion process starts when the objective function value at the reflection vertex $A_r$ is less than that at the worst vertex $A_w$, $f(A_r) < f(A_w)$, and the line segment is further extended to $A_e$ through $A_r$ and $A_w$. The vertex point $A_e$ is calculated by (26). If the objective function value at $A_e$ is less than that at the reflection vertex, $f(A_e) < f(A_r)$, then the expansion is accepted, and the honey bee agent has found a better food position than the reflection point:

$$A_e = A_r + \gamma (A_r - A_m), \quad \text{where } \gamma > 1. \qquad (26)$$

(6) The contraction process is carried out when $f(A_r) < f(A_s)$ and $f(A_r) \le f(A_b)$ for replacing $A_b$ with


[Figure 3 illustrates the Nelder-Mead operations on the simplex triangle with vertices $A_b$, $A_s$, and $A_w$: (a) simplex triangle, (b) reflection, (c) expansion, (d) contraction ($A_h < A_r$), (e) contraction ($A_r < A_h$), and (f) shrinkage.]

Figure 3: Nelder-Mead operations.

$A_r$. If $f(A_r) > f(A_h)$, then direct contraction without the replacement of $A_b$ with $A_r$ is performed. The contraction vertex $A_c$ can be calculated using

$$A_c = \beta A_r + (1 - \beta) A_m, \quad \text{where } 0 < \beta < 1. \qquad (27)$$

If $f(A_r) \le f(A_b)$, the contraction can be done and $A_c$ replaces $A_h$; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process at step (6) fails, and it is done by shrinking all the vertices of the simplex triangle except $A_h$ using (28). If the objective function values of the reflection and contraction phases are not less than the worst point, then the vertices $A_s$ and $A_w$ must be shrunk towards $A_h$. Thus, the vertices of smaller value form a new simplex triangle with the other two best vertices:

$$A_i = \delta A_i + A_1 (1 - \delta), \quad \text{where } 0 < \delta < 1. \qquad (28)$$

(8) The calculations are stopped when the termination condition is met.
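Steps (1) to (8) can be exercised on the sample objective of equation (23), $f(x, y) = x^2 - 4x + y^2 - y - xy$, whose minimum is $f(3, 2) = -7$. The sketch below follows the standard Nelder-Mead cycle with conventional coefficient values (assumed defaults, not tuned values from the paper), and its acceptance tests are a simplified variant of the rules above:

```python
# Hedged sketch of the reflection/expansion/contraction/shrink cycle on the
# sample objective of equation (23). Coefficients are the conventional
# Nelder-Mead choices, and the acceptance logic is a simplified variant.

def f(p):
    x, y = p
    return x * x - 4 * x + y * y - y - x * y  # minimum f(3, 2) = -7

def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, iters=200):
    for _ in range(iters):
        simplex.sort(key=f)                    # order so A_b <= A_s <= A_w
        (bx, by), (sx, sy), (wx, wy) = simplex
        mx, my = (bx + sx) / 2, (by + sy) / 2  # midpoint A_m, equation (24)
        rx, ry = mx + alpha * (mx - wx), my + alpha * (my - wy)  # reflection (25)
        if f((rx, ry)) < f(simplex[0]):
            # Expansion (26): push further along the reflection direction.
            ex, ey = rx + gamma * (rx - mx), ry + gamma * (ry - my)
            simplex[2] = (ex, ey) if f((ex, ey)) < f((rx, ry)) else (rx, ry)
        elif f((rx, ry)) < f(simplex[1]):
            simplex[2] = (rx, ry)              # accept the reflection
        else:
            # Contraction (27) between A_r and A_m.
            cx, cy = beta * rx + (1 - beta) * mx, beta * ry + (1 - beta) * my
            if f((cx, cy)) < f(simplex[2]):
                simplex[2] = (cx, cy)
            else:
                # Shrinkage (28): pull A_s and A_w towards the best vertex.
                simplex = [simplex[0]] + [
                    (bx + delta * (px - bx), by + delta * (py - by))
                    for px, py in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

best = nelder_mead(f, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])  # converges near (3, 2)
```

In MODBCO each bee would run this local search inside its own fragment of the search space before the group decision step.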

Algorithm 2 describes the pseudocode for the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is explained in Figure 5.

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, and it is inspired by the natural foraging behavior of bees. The algorithm consists of two steps, the forward and backward pass. During the forward pass, bees start to explore the neighborhood of their current solutions and find all possible ways. In the backward pass, bees return to the hive and share the values of the objective function of their current solutions. The nectar amount is calculated using a probability


[Figure 4 shows the bees' search movement on the simplex: from the vertices $A_b$, $A_s$, and $A_w$, the midpoint $A_m$, reflection point $A_r$, expansion point $A_e$, contraction points $A_{c1}$ and $A_{c2}$, and the new vertex $A_{\mathrm{new}}$ are generated.]

Figure 4: Bees' search movement based on MNMM.

function, and the solution is advertised; the bee with the better solution is given higher priority. The remaining bees, based on the probability value, decide whether to explore their own solutions or proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained by using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration for a food particle by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bees start the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized point through the natural behavior of bees called the waggle dance. When all the information about the best food is shared, the best among the optimized points is chosen using the decision-making process of honey bees called the consensus and quorum method [34, 35].

5.1. Multiagent System. All agents live in an environment that is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent

should be capable of interacting with other agents in the local environment; and agents attempt to perform their goals. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and solve complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the fixed parameters.

(1) Consider $e$ independent parameters used to calculate the objective function. The range of the $g$th parameter is $[Q_{gi}, Q_{gf}]$, where $Q_{gi}$ is the initial value and $Q_{gf}$ is the final value of the $g$th parameter.

(2) Thus, the objective function can be formulated over $e$ axes; each axis contains the total range of a single parameter with a different dimension.

(3) Each axis is divided into smaller parts, each called a step. The $g$th axis can be divided into $n_g$ steps, each of length $L_g$, where $g = 1$ to $e$. The relationship between $n_g$ and $L_g$ is given by

$$n_g = \frac{Q_{gi} - Q_{gf}}{L_g}. \qquad (29)$$

(4) Then each axis is divided into branches; taking one branch from each of the e axes will form an

Computational Intelligence and Neuroscience 11

Modified Nelder-Mead Method for directed honey bee food search

(1) Initialization
A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1.
α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
f is the objective function to be minimized.

(2) Ordering
Order the vertices in the simplex from the lowest objective function value f(A_1) to the highest value f(A_{n+1}), so that A_1 <= A_2 <= ... <= A_{n+1}.

(3) Midpoint
Calculate the midpoint of the n best vertices in the simplex: A_m = sum(A_i / n), where i = 1, 2, ..., n.

(4) Reflection Process
Calculate the reflection point A_r by A_r = A_m + α(A_m − A_{n+1}).
if f(A_1) <= f(A_r) <= f(A_n) then
    A_{n+1} <- A_r and go to Step (8)
end if

(5) Expansion Process
if f(A_r) <= f(A_1) then
    calculate the expansion point using A_e = A_r + γ(A_r − A_m)
end if
if f(A_e) < f(A_r) then
    A_{n+1} <- A_e and go to Step (8)
else
    A_{n+1} <- A_r and go to Step (8)
end if

(6) Contraction Process
if f(A_n) <= f(A_r) <= f(A_{n+1}) then
    compute the outside contraction by A_c = βA_r + (1 − β)A_m
end if
if f(A_r) >= f(A_{n+1}) then
    compute the inside contraction by A_c = βA_{n+1} + (1 − β)A_m
end if
(The contraction is thus performed between A_m and the better vertex among A_r and A_{n+1}.)
if f(A_c) < f(A_r) then
    A_{n+1} <- A_c and go to Step (8)
else
    go to Step (7)
end if

(7) Shrinkage Process
Shrink towards the best solution with new vertices by A_i = δA_i + A_1(1 − δ), where i = 2, ..., n + 1.

(8) Stopping Criteria
Order and relabel the new vertices of the simplex based on their objective function values and go to Step (3).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = prod_{g=1}^{e} n_g. (30)

(5) The starting point of the agent in the environment, which is one point inside the volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]. (31)
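As an illustrative sketch of the partitioning in Eqs. (29)–(31), the following Python fragment computes the steps per axis, the number of volumes, and the starting point; the parameter ranges, step lengths, and function name are invented for this example:

```python
from math import prod

def partition_search_space(q_init, q_final, step_lengths):
    """Partition the parameter ranges into volumes, per Eqs. (29)-(31).

    q_init[g] / q_final[g]: initial and final value of parameter g.
    step_lengths[g]: step length L_g on axis g.
    Returns (steps per axis, number of volumes, starting point).
    """
    # Eq. (29): number of steps n_g on each axis.
    n = [round((qi - qf) / L) for qi, qf, L in zip(q_init, q_final, step_lengths)]
    # Eq. (30): total number of e-dimensional volumes N_v.
    n_volumes = prod(n)
    # Eq. (31): starting point of the exploration, from the half-ranges.
    start = [(qi - qf) / 2 for qi, qf in zip(q_init, q_final)]
    return n, n_volumes, start

# e = 2 parameters with illustrative ranges [2, 10] and [2, 8].
steps, volumes, start = partition_search_space([10, 8], [2, 2], [2, 3])
# steps == [4, 2], volumes == 8, start == [4.0, 3.0]
```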

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighboring nest sites for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3, and Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. After returning from the search for a food particle, the scout bees report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


Figure 5: Workflow of the Modified Nelder-Mead Method (initialization of the coefficients α, γ, β, δ and the objective function f(A); ordering and labeling of the vertices; midpoint calculation; then the reflection, expansion, contraction, and shrinkage processes, repeated until the termination criteria are met).
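Assuming the coefficient ranges of Table 6 (α > 0, γ > 1, 0 < β < 1, 0 < δ < 1), the update rules of Algorithm 2 and Figure 5 can be sketched in Python as follows; the simplex, the coefficient values, and the quadratic test function are illustrative only, not the paper's implementation:

```python
def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, iters=200):
    """Minimize f over an (n+1)-vertex simplex (sketch of Algorithm 2)."""
    n = len(simplex) - 1
    for _ in range(iters):
        # Step (2): order the vertices by objective value, best first.
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # Step (3): midpoint (centroid) of the n best vertices.
        m = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        # Step (4): reflection of the worst vertex through the midpoint.
        r = [m[j] + alpha * (m[j] - worst[j]) for j in range(n)]
        if f(best) <= f(r) < f(simplex[-2]):
            simplex[-1] = r
            continue
        # Step (5): expansion when the reflected point is the new best.
        if f(r) < f(best):
            e = [r[j] + gamma * (r[j] - m[j]) for j in range(n)]
            simplex[-1] = e if f(e) < f(r) else r
            continue
        # Step (6): outside contraction if r improves on the worst, else inside.
        if f(r) < f(worst):
            c = [beta * r[j] + (1 - beta) * m[j] for j in range(n)]
        else:
            c = [beta * worst[j] + (1 - beta) * m[j] for j in range(n)]
        if f(c) < min(f(r), f(worst)):
            simplex[-1] = c
            continue
        # Step (7): shrink every vertex toward the best one.
        simplex = [best] + [[delta * v[j] + best[j] * (1 - delta) for j in range(n)]
                            for v in simplex[1:]]
    return min(simplex, key=f)

# Minimize a simple quadratic whose minimum lies at (1, 2).
best = nelder_mead(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                   [[0.0, 0.0], [2.5, 0.0], [0.0, 2.5]])
```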


Multi-Objective Directed Bee Colony Optimization

(1) Initialization
f(x) is the objective function to be minimized.
Initialize the e parameters and the step lengths L_g, where g = 1 to e.
Initialize the initial and final values of each parameter as Q_gi and Q_gf.
** Solution Representation **
The solutions are represented as binary values, generated as follows:
for each solution i = 1, ..., n
    X_i = {x_i1, x_i2, ..., x_id | d ∈ total days and x_id = (rand >= 0.29) for all d}
end for

(2) The number of steps on each axis can be calculated using
n_g = (Q_gi − Q_gf) / L_g.

(3) The total number of volumes can be calculated using
N_v = prod_{g=1}^{e} n_g.

(4) The midpoint of the volume, used as the starting point of the exploration, can be calculated using
[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].

(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.

(6) Record the value of the optimized point of each volume in the vector table using
[f(V_1), f(V_2), ..., f(V_{N_v})].

(7) The globally optimized point is chosen through the bee decision-making process using the consensus and quorum method approach:
f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO.
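A minimal sketch of the binary solution representation printed in step (1) of Algorithm 3; the 0.29 threshold is taken verbatim from the pseudocode, while the function and variable names are invented for illustration:

```python
import random

def initialise_solutions(n_bees, total_days, threshold=0.29):
    """Binary solution representation (sketch of Algorithm 3, step (1)).

    Each bee i carries X_i = (x_i1, ..., x_id) over the scheduling days,
    with x_id = 1 when rand >= threshold, as printed in the pseudocode.
    """
    return [[1 if random.random() >= threshold else 0 for _ in range(total_days)]
            for _ in range(n_bees)]

# 100 bees (Table 6) over the 28-day INRC2010 scheduling period.
population = initialise_solutions(n_bees=100, total_days=28)
```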

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting an individual solution, gives it to the centralized system. To select the best optimal solution for the minimal optimal case, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This mathematical formulation finds the minimal optimal case among the search solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V; V is the vector which consists of e elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. The consensus is the widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether it has reached an agreement and started acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on the threshold level obtained through the group decision-making process. When the solution reaches the optimal threshold level ξ_q, the solution is considered a final solution based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but it leads to inaccurate experimental results; the threshold value should therefore be chosen to attain low computational time together with accurate experimental results.
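The consensus selection of Eq. (33) and the quorum threshold test described above can be sketched as follows; the per-volume values and the threshold are invented for this example:

```python
def select_global_optimum(volume_values, quorum_threshold):
    """Consensus/quorum selection over per-volume optima (sketch).

    volume_values: f(V_1), ..., f(V_Nv), the optimized value each bee
    advertises via its waggle dance. quorum_threshold: xi_q.
    Returns (best value, whether the quorum threshold is reached).
    """
    # Consensus, Eq. (33): the globally optimized point is the minimum
    # of the advertised values in the vector table.
    best = min(volume_values)
    # Quorum: accept the solution once it reaches the threshold level xi_q.
    return best, best <= quorum_threshold

value, accepted = select_global_optimum([56.0, 61.5, 58.2], quorum_threshold=57.0)
# value == 56.0 and the quorum test passes
```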

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Six performance metrics are considered to investigate the significance of the experimental results and to evaluate them; the metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage of the difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = sum_{i=1}^{r} (Optimal_NRP-Instance − fitness_i) / Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence is a measure to evaluate the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions; it increases the usefulness of the convergence time by exploring more solutions in the population. The Average Convergence is calculated using

AC = sum_{i=1}^{r} (1 − (Avg_fitness_i − Optimal_NRP-Instance) / Optimal_NRP-Instance) * 100, (35)

where r is the number of instances in the given dataset.
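An illustrative sketch of Eqs. (34) and (35); the instance values are invented, and two details are assumptions rather than the paper's definitions: LER uses the fitness-minus-optimal sign so that positive values mean the obtained fitness exceeds the known optimum (matching the sign convention described for Table 9), and AC divides by the instance count r to take the average:

```python
def least_error_rate(optimal, fitness):
    """Least Error Rate, Eq. (34) (sketch; sign convention of Table 9)."""
    return 100 * sum((f - o) / o for o, f in zip(optimal, fitness))

def average_convergence(optimal, avg_fitness):
    """Average Convergence, Eq. (35); division by r is an assumption."""
    r = len(optimal)
    return sum(1 - (a - o) / o for o, a in zip(optimal, avg_fitness)) * 100 / r

# Two invented instances: the first misses its optimum of 50 by 10%.
ler = least_error_rate([50, 100], [55, 100])
ac = average_convergence([50, 100], [60, 110])
```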


Figure 6: Flowchart of MODBCO (initialization of the e parameters, the objective function f(X), the step lengths L_g, and the parameter bounds Q_gi and Q_gf; calculation of the number of steps n_g = (Q_gi − Q_gf)/L_g, the number of volumes N_v, and the midpoint values; exploration of the search space using MNMM; waggle dance; consensus and quorum decision making against the threshold level; and the global optimum solution at termination).


6.1.3. Standard Deviation. The standard deviation (SD) is the measure of the dispersion of a set of values from its mean value. The Average Standard Deviation is the average of the standard deviations of all instances taken from the dataset. The Average Standard Deviation (ASD) can be calculated using

ASD = sqrt( sum_{i=1}^{r} (value obtained in each instance_i − mean value of the instance)^2 ), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best − Convergence_worst, (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversion over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = sum_{i=1}^{r} (Cost_i − Cost_NRP-Instance) / Total number of instances, (38)

where r is the number of instances in the given dataset.
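The remaining metrics of Eqs. (36)–(38) can be sketched as follows; the per-instance values are invented, and the function names are illustrative only:

```python
from math import sqrt

def average_standard_deviation(values, means):
    """ASD, Eq. (36): dispersion of per-instance values from their means."""
    return sqrt(sum((v - m) ** 2 for v, m in zip(values, means)))

def convergence_diversity(conv_best, conv_worst):
    """CD, Eq. (37): gap between best and worst convergence rates."""
    return conv_best - conv_worst

def average_cost_diversion(costs, known_costs):
    """ACD, Eq. (38): mean gap between obtained and known instance costs."""
    r = len(costs)
    return sum(c - k for c, k in zip(costs, known_costs)) / r

asd = average_standard_deviation([56, 59], [57.0, 57.0])   # sqrt(1 + 4)
cd = convergence_diversity(87.01, 28.5)
acd = average_cost_diversion([59, 66], [56, 62])           # (3 + 4) / 2
```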

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm for the NRP are as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010), organized by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided, based on complexity and size, into three tracks, namely, the sprint, medium, and long datasets. Each track is divided into four types, namely, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early-type, 10 late-type, 10 hidden-type, and 3 hint-type datasets. The scheduling period is 28 days, with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early-type, 5 late-type, 5 hidden-type, and 3 hint-type datasets. The scheduling period is 28 days, with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early-type, 5 late-type, 5 hidden-type, and 3 hint-type datasets. The scheduling period for this track is 28 days, with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives the detailed description of the datasets. Columns one to three index the dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off for the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified based on size. The datasets in case 1 to case 4 are small, those in case 5 to case 8 are medium in size, and the large datasets fall in case 9 to case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are conducted with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows, on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Appropriate parameter values are determined based on preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5, and the heuristic parameters and their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm relative to existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

such as ANOVA and the post hoc Duncan's multiple range test to substantiate the effectiveness of the proposed algorithm and to differentiate it from the existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions vary significantly [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods for comparison.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental results with respect to the best value.

Instance | Optimal value | MODBCO (Best/Worst) | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7 Continued

Instance | Optimal value | MODBCO (Best/Worst) | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods. The method ranks the methods by their mean values in increasing or decreasing order and groups the methods that are not significantly different.
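The one-way ANOVA computation described in Section 6.3.1 can be sketched in pure Python as follows; the three sample groups below are invented for illustration (the published analysis compares six methods over 69 instances):

```python
def one_way_anova(*groups):
    """One-way ANOVA F statistic (sketch; p-value lookup omitted)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares and degrees of freedom.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    df_between = k - 1
    # Within-groups sum of squares and degrees of freedom.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_within = n - k
    return (ssb / df_between) / (ssw / df_within), df_between, df_within

f_stat, df_b, df_w = one_way_anova([56, 59, 51, 59, 58],
                                   [63, 66, 60, 68, 65],
                                   [57, 59, 52, 60, 59])
# F is then compared against the critical F value at the 0.05 level
# to accept or reject the null hypothesis.
```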

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP, using the INRC2010 datasets under a similar environmental setup and the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. The numbers in the table refer to the best solutions obtained using the corresponding algorithms. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to the best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 3.620681 | 0.003
Within groups | 23933354 | 408 | 58660.18 | |
Total | 24995303 | 413 | | |

(b) DMRT test (Duncan test: best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of the 69 instances.

The statistical analysis tests, ANOVA and DMRT, for the best values are shown in Table 8. The significance value is less than 0.05, which shows that the null hypothesis is rejected. A significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | −0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | −0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | −0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | −9.65 | 20.03 | −2.64 | 33.48 | 18.53 | −4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | −2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%) for Cases 1–12, comparing MODBCO with M1–M5.

various optimization algorithms is observed. The DMRT test shows the homogeneous groups; two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on the error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint-type datasets, 88% of the instances attained a lower error rate. For the medium and larger sized datasets, the obtained error rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instances attained a lower optimum value than specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they are unable to find the optimal solution and get trapped in local optima. The error rates (%) obtained using MODBCO and the other algorithms are shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence for Cases 1–12, comparing MODBCO with M1–M5.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 63868.09 | 5 | 12773.61796 | 15.26182 | 0.000
Within groups | 341482.0 | 408 | 836.9657384 | |
Total | 405350.1 | 413 | | |

(b) DMRT test (Duncan test: error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis; thus there is a significant difference in the values across the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of a solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small instances and an 82% convergence rate on the medium instances; for the long instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that, as the problem size increases, the convergence rate reduces, becoming worse for many algorithms on the larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.

The statistical test results for Average Convergence with the different optimization algorithms are given in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | −139.09 | 39.82
Case 7 | 56.34 | −32.73 | 24.52 | 26.23 | −54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | −584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | −94.85 | 66.95
Case 12 | 15.13 | −46.99 | −22.73 | −9.46 | −411.18 | −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation for Cases 1–12, comparing MODBCO with M1–M5.

in the mean values of convergence across the different optimization algorithms. The ANOVA test leads to rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows two homogeneous groups among the different optimization algorithms with respect to the mean convergence values.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from the mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.

The statistical test results for the Average Standard Deviation with the different optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation across the different optimization algorithms. The ANOVA test shows that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical



Figure 10: Performance analysis with respect to Convergence Diversity.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   71264.2475       5     14252.8495    15.47047   0.0000
Within groups    375887.826       408   921.29369
Total            447152.073       413

(b) DMRT (Duncan) test: Average Convergence

Method    N    Subset for alpha = 0.05
               1          2
M4        69   -47.6945
M1        69              46.4245
M5        69              53.6879
M3        69              57.6296
M2        69              59.5103
MODBCO    69              81.6997
Sig.           1.000      0.054

value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.
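The one-way ANOVA F statistic reported throughout these tables can be reproduced with a short pure-Python sketch (toy data below, not the paper's results):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over k groups, using the
    between/within sum-of-squares decomposition reported in the tables."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Toy data: objective values of two algorithms over three runs each.
print(one_way_anova_F([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))  # 13.5
```

The significance (p value) and the Duncan post hoc grouping are then obtained from the F distribution with (k − 1, n − k) degrees of freedom.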

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62%, yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
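Under the definition above, the metric is simply the spread of convergence values in the population; a minimal sketch:

```python
def convergence_diversity(convergences):
    """Spread between the best (highest) and worst (lowest)
    convergence values in the population."""
    return max(convergences) - min(convergences)

# Toy per-solution convergence percentages:
print(convergence_diversity([90.0, 77.4, 82.0, 60.5]))  # 29.5
```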

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are given in Table 16. There is a significant difference in the mean values of the Convergence Diversity among the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms based on their mean values; there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357


Figure 11: Performance analysis with respect to Average Cost Diversion.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances yield the optimum value. The larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instance has achieved a new optimized value. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
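The paper does not give an explicit formula for cost diversion; a common reading — the mean percentage deviation of the obtained costs from the best-known cost, with negative values indicating a new best — can be sketched as:

```python
def average_cost_diversion(costs, best_known):
    """Mean percentage deviation of the obtained costs from the
    best-known cost; negative values mean a new best was found."""
    return sum((c - best_known) / best_known * 100.0 for c in costs) / len(costs)

# Toy run costs against an assumed best-known cost of 100:
print(average_cost_diversion([100.0, 105.0, 95.0, 110.0], best_known=100.0))  # 2.5
```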

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are given in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

                 Sum of squares   df    Mean square   F         Sig.
Between groups   697.4407         5     139.4881      13.6322   0.000
Within groups    4174.76          408   10.23226
Total            4872.201         413

(b) DMRT (Duncan) test: Average Standard Deviation

Method    N    Subset for alpha = 0.05
               1        2          3
MODBCO    69   7.4743
M2        69            9.615152
M3        69            9.677027
M5        69            9.729477
M1        69            10.13242
M4        69                       11.9316
Sig.           1.000    0.394      1.000

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm, MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   51447.58         5     10289.52      9.287168   0.000
Within groups    452034.8         408   1107.928
Total            503482.4         413

(b) DMRT (Duncan) test: Convergence Diversity

Method    N    Subset for alpha = 0.05
               1           2          3
M3        69   18.363232
MODBCO    69   18.42029
M1        69               19.68116
M2        69               20.57971
M4        69               20.68116
M5        69                          21.2464
Sig.           0.919       0.096      1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO has more ability to attain the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is 19% closer to the optimum value, and it attains a lesser worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 49%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature. Thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both computational and statistical analyses show the significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles solving the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    -1.00    21.20   0.80    19.60   21.90    0.80
Case 3    -0.40    8.10    1.80    10.60   11.00    0.90
Case 4    -5.67    11.33   -1.67   20.33   11.00    -2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   106194.9         5     21238.98      4.919985   0.000
Within groups    1761286.7        408   4316.879
Total            1867481.6        413

(b) DMRT (Duncan) test: Average Cost Diversion

Method    N    Subset for alpha = 0.05
               1           2
MODBCO    69   6.202899
M2        69   10.10145
M5        69   14.05797
M3        69   15.31884
M1        69   29.13043
M4        69               149.5217
Sig.           0.573558    1.000

Optimization for solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC-2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation (Taipei, Taiwan, December 2000), pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. K. Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.


Figure 1: Illustrative example of the Nurse Rostering Problem: a two-week roster for five nurses, where S1 = day shift, S2 = noon shift, S3 = night shift, and S4 = free shift.

Table 1: Hard constraints.

HC1: All demanded shifts are assigned to a nurse.
HC2: A nurse can work only a single shift per day.
HC3: The minimum number of nurses required for the shift.
HC4: The total number of working days for the nurse should be between the maximum and minimum range.
HC5: A day shift followed by a night shift is not allowed.

Table 2: Soft constraints.

SC1: The maximum number of shifts assigned to each nurse.
SC2: The minimum number of shifts assigned to each nurse.
SC3: The maximum number of consecutive working days assigned to each nurse.
SC4: The minimum number of consecutive working days assigned to each nurse.
SC5: The maximum number of consecutive days assigned to each nurse on which no shift is allotted.
SC6: The minimum number of consecutive days assigned to each nurse on which no shift is allotted.
SC7: The maximum number of consecutive working weekends with at least one shift assigned to each nurse.
SC8: The minimum number of consecutive working weekends with at least one shift assigned to each nurse.
SC9: The maximum number of weekends with at least one shift assigned to each nurse.
SC10: Specific working day.
SC11: Requested day off.
SC12: Specific shift on.
SC13: Specific shift off.
SC14: Nurse not working on the unwanted pattern.

where $E_{ds}$ is the number of nurses required on day $d$ at shift $s$ and $S_{ds}$ is the allocation of nurses in the feasible solution roster.

HC2. Each nurse can work no more than one shift per day:

$$\sum_{s=1}^{S} S_{nds} \le 1 \quad \forall n \in N,\ d \in D \quad (3)$$

where $S_{nds}$ is the allocation of nurse $n$ to shift $s$ on day $d$ in the solution.

HC3. This constraint deals with the minimum number of nurses required for each shift:

$$\sum_{n=1}^{N} S_{nds} \ge \min{}_{nds} \quad \forall d \in D,\ s \in S \quad (4)$$

where $\min_{nds}$ is the minimum number of nurses required for shift $s$ on day $d$.

HC4. The total number of working days for each nurse should range between the minimum and maximum for the given scheduling period:

$$W_{\min} \le \sum_{d=1}^{D}\sum_{s=1}^{S} S_{nds} \le W_{\max} \quad \forall n \in N \quad (5)$$

The average working shift for a nurse can be determined using

$$W_{\mathrm{avg}} = \frac{1}{N} \sum_{n=1}^{N} \sum_{d=1}^{D}\sum_{s=1}^{S} S_{nds} \quad (6)$$

where $W_{\min}$ and $W_{\max}$ are the minimum and maximum number of working days in the scheduling period and $W_{\mathrm{avg}}$ is the average working shift of a nurse.

HC5. Shift 1 immediately after shift 3 is not allowed; that is, a night shift may not be followed by a day shift on the next day:

$$S_{nds_3} + S_{n(d+1)s_1} \le 1 \quad \forall n \in N,\ d \in D \quad (7)$$
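The hard constraints HC2–HC5 can be checked with a short sketch; the roster encoding below (one shift index per nurse per day, with 0 for a free day) is an illustrative assumption, not the paper's representation:

```python
# Roster encoded as roster[n][d] = shift index (1..num_shifts), or 0 for a free day.
# HC2 (one shift per nurse per day) holds by construction of this encoding.

def violates_hard_constraints(roster, min_cover, w_min, w_max, num_shifts):
    days = len(roster[0])
    # HC3: minimum number of nurses per shift per day.
    for d in range(days):
        for s in range(1, num_shifts + 1):
            assigned = sum(1 for nurse in roster if nurse[d] == s)
            if assigned < min_cover[d][s - 1]:
                return True
    # HC4: total working days per nurse within [w_min, w_max].
    for nurse in roster:
        worked = sum(1 for shift in nurse if shift != 0)
        if not w_min <= worked <= w_max:
            return True
    # HC5: the night shift (highest index) must not be followed by the day shift (index 1).
    for nurse in roster:
        for d in range(days - 1):
            if nurse[d] == num_shifts and nurse[d + 1] == 1:
                return True
    return False

# Two nurses, three days, two shifts (1 = day, 2 = night):
min_cover = [[1, 1], [0, 1], [1, 0]]  # required nurses per day, per shift
feasible = [[1, 2, 0], [2, 0, 1]]
print(violates_hard_constraints(feasible, min_cover, w_min=1, w_max=3, num_shifts=2))  # False
```

Any roster for which this check returns True is infeasible and is discarded (or repaired) before the soft-constraint penalty is evaluated.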

SC1. The maximum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\Bigl(\Bigl(\sum_{d=1}^{D}\sum_{s=1}^{S} S_{nds} - \Phi_n^{ub}\Bigr), 0\Bigr) \quad \forall n \in N \quad (8)$$

where $\Phi_n^{ub}$ is the maximum number of shifts assigned to nurse $n$.

SC2. The minimum number of shifts assigned to each nurse for the given scheduling period is as follows:

$$\max\Bigl(\Bigl(\Phi_n^{lb} - \sum_{d=1}^{D}\sum_{s=1}^{S} S_{nds}\Bigr), 0\Bigr) \quad \forall n \in N \quad (9)$$

where $\Phi_n^{lb}$ is the minimum number of shifts assigned to nurse $n$.

SC3. The maximum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduling period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\bigl((C_n^k - \Theta_n^{ub}), 0\bigr) \quad \forall n \in N \quad (10)$$

where $\Theta_n^{ub}$ is the maximum number of consecutive working days of nurse $n$, $\Psi_n$ is the total number of consecutive working spans of nurse $n$ in the roster, and $C_n^k$ is the length of the $k$th working span of nurse $n$.

SC4. The minimum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduling period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\bigl((\Theta_n^{lb} - C_n^k), 0\bigr) \quad \forall n \in N \quad (11)$$

where $\Theta_n^{lb}$ is the minimum number of consecutive working days of nurse $n$.

SC5. The maximum number of consecutive days assigned to each nurse on which no shift is allotted for the given scheduling period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\bigl((\eth_n^k - \varphi_n^{ub}), 0\bigr) \quad \forall n \in N \quad (12)$$

where $\varphi_n^{ub}$ is the maximum number of consecutive free days of nurse $n$, $\Gamma_n$ is the total number of consecutive free spans of nurse $n$ in the roster, and $\eth_n^k$ is the length of the $k$th free span of nurse $n$.

SC6. The minimum number of consecutive days assigned to each nurse on which no shift is allotted for the given scheduling period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\bigl((\varphi_n^{lb} - \eth_n^k), 0\bigr) \quad \forall n \in N \quad (13)$$

where $\varphi_n^{lb}$ is the minimum number of consecutive free days of nurse $n$.

SC7. The maximum number of consecutive working weekends with at least one shift assigned to a nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\bigl((\zeta_n^k - \Omega_n^{ub}), 0\bigr) \quad \forall n \in N \quad (14)$$

where $\Omega_n^{ub}$ is the maximum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working weekend spans of nurse $n$ in the roster, and $\zeta_n^k$ is the length of the $k$th working weekend span of nurse $n$.

SC8. The minimum number of consecutive working weekends with at least one shift assigned to a nurse for the given scheduling period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\bigl((\Omega_n^{lb} - \zeta_n^k), 0\bigr) \quad \forall n \in N \quad (15)$$


where $\Omega_n^{lb}$ is the minimum number of consecutive working weekends of nurse $n$, $\Upsilon_n$ is the total number of consecutive working weekend spans of nurse $n$ in the roster, and $\zeta_n^k$ is the length of the $k$th working weekend span of nurse $n$.

SC9. The maximum number of weekends with at least one shift assigned to a nurse in four weeks is as follows:

$$\sum_{k=1}^{\kappa_n} \max\bigl((\omega_n^k - \varpi_n^{ub}), 0\bigr) \quad \forall n \in N \quad (16)$$

where $\omega_n^k$ is the number of working days at the $k$th weekend of nurse $n$, $\varpi_n^{ub}$ is the maximum number of working days for nurse $n$, and $\kappa_n$ is the total count of weekends in the scheduling period of nurse $n$.

SC10. The nurse can request to work on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \lambda_n^d = 1 \quad \forall n \in N \quad (17)$$

where $\lambda_n^d$ is the request from nurse $n$ to work on any shift on a particular day $d$.

SC11. The nurse can request not to work on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \lambda_n^d = 0 \quad \forall n \in N \quad (18)$$

SC12. The nurse can request to work a particular shift on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_n^{ds} = 1 \quad \forall n \in N \quad (19)$$

where $\Upsilon_n^{ds}$ is the request from nurse $n$ to work shift $s$ on day $d$.

SC13. The nurse can request not to work a particular shift on a particular day in the given scheduling period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_n^{ds} = 0 \quad \forall n \in N \quad (20)$$

SC14. The nurse should not work on an unwanted pattern suggested for the scheduling period:

$$\sum_{u=1}^{\varrho_n} \mu_n^u \quad \forall n \in N \quad (21)$$

where $\mu_n^u$ is the count of occurrences of unwanted pattern $u$ for nurse $n$ and $\varrho_n$ is the set of unwanted patterns suggested for nurse $n$.

The objective function of the NRP is to maximize the nurse preferences and minimize the penalty cost from violations of the soft constraints, as in (22):

$$\min f(S_{nds}) = \sum_{sc=1}^{14} P_{sc}\Bigl(\sum_{n=1}^{N}\sum_{s=1}^{S}\sum_{d=1}^{D} S_{nds}\Bigr) \times T_{sc}\Bigl(\sum_{n=1}^{N}\sum_{s=1}^{S}\sum_{d=1}^{D} S_{nds}\Bigr) \quad (22)$$

Here $sc$ indexes the set of soft constraints listed in Table 2, $P_{sc}(x)$ refers to the penalty weight for violation of soft constraint $sc$, and $T_{sc}(x)$ refers to the total violations of that soft constraint in the roster solution. It should be noted that the penalty function [32] is used in the NRP to improve the performance and to provide a fair comparison with other optimization algorithms.
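A minimal sketch of evaluating the weighted penalty in (22), with hypothetical weights and violation counts (the INRC2010 instances define the actual weights):

```python
def roster_penalty(violations, weights):
    """Weighted sum of soft-constraint violations: each violation
    count T_sc is multiplied by its penalty weight P_sc."""
    return sum(weights[sc] * violations[sc] for sc in violations)

# Hypothetical weights/counts for three of the fourteen soft constraints:
weights    = {"SC1": 1.0, "SC10": 10.0, "SC14": 5.0}
violations = {"SC1": 3,   "SC10": 1,    "SC14": 2}
print(roster_penalty(violations, weights))  # 3*1 + 1*10 + 2*5 = 23.0
```

Minimizing this penalty over feasible rosters is what the search in MODBCO drives towards.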

4. Bee Colony Optimization

4.1. Natural Behavior of Honey Bees. Swarm intelligence is an emerging discipline for the study of problems which require an optimal approach rather than the traditional approach. The use of swarm intelligence is a part of artificial intelligence based on the study of the behavior of social insects. Swarm intelligence is composed of many individual actions using a decentralized and self-organized system. Swarm behavior is characterized by the natural behavior of many species, such as fish schools, herds of animals, and flocks of birds, formed for the biological requirement to stay together. Swarm implies the aggregation of animals, such as birds, fishes, ants, and bees, based on their collective behavior. The individual agents in the swarm have a stochastic behavior which depends on the local perception of the neighborhood. Communication between insects is formed with the help of the colonies, and it promotes collective intelligence among the colonies.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The proximity of the swarm must be capable of providing simple space and time computations, and it should respond to the quality factors. The swarm should allow diverse activities and should not be restricted to narrow channels. The swarm should maintain its stable nature and should not fluctuate based on the behavior. The adaptability of the swarm must allow it to change its behavior mode when required. Several hundreds of bees from the swarm work together to find nesting sites and select the best nest site. Bee Colony Optimization is inspired by this natural behavior of bees; the bee optimization algorithm is modeled on the group decision-making processes of honey bees. A honey bee searches for the best nest site by considering both speed and accuracy.

In a bee colony there are three different types of beesa single queen bee thousands of male drone bees andthousands of worker bees

(1) The queen bee is responsible for creating new colonies by laying eggs.

Computational Intelligence and Neuroscience 7

(2) The male drone bees mate with the queen and are then discarded from the colonies.

(3) The remaining female bees in the hive are called worker bees, and they are the building blocks of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honey bee comb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy is exhausted. After reaching the hive, scout bees share their information and start to explore rich food source locations with forager bees. The scout bee's information includes the direction, quality, quantity, and distance of the food source found. Information about a food source is communicated to foragers through dance, of which there are two types: the round dance and the waggle dance. The round dance provides the direction of the food source when the distance is small. The waggle dance indicates both the position and the direction of the food source; the distance can be inferred from the speed of the dance, with greater speed indicating a smaller distance, while the quantity of food depends on the wriggling of the bee. This exchange of information among hive mates builds collective knowledge. Forager bees silently observe the behavior of the scout bee to acquire knowledge about the direction and properties of the food source.

The group decision process of honey bees serves to find the best food source and nest site, and it is based on the swarming process. Swarming is the process in which the queen bee and about half of the worker bees leave their hive to explore a new colony, while the remaining worker bees and a daughter queen stay in the old hive to monitor the waggle dance. After leaving the parental hive, the swarm bees form a cluster while searching for the new nest site. The waggle dance is used to communicate with the quiescent bees, which are inactive in the colony; it provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source, which allows the colony to gather food quickly and efficiently. Swarm bees reach a decision about the best nest site by two methods: consensus and quorum. Consensus is a group-wide agreement reached by taking all votes into account, while quorum is a decision taken as soon as the number of bee votes reaches a threshold value.

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds a neighboring solution from its current path. The algorithm alternates between a forward pass and a backward pass. In the forward pass, every bee starts to explore the neighborhood of its current solution and performs constructive and improving moves: all bees in the hive make their constructive moves, and then a local search starts. In the backward pass, the bees share the objective values obtained in the forward pass. The bees with higher priority are used to discard all nonimproving moves. The bees then either continue to explore in the next forward pass or continue the same process within the neighborhood. The flowchart

Figure 2: Flowchart of the BCO algorithm (initialization, forward pass with construction moves, backward pass, updating the best solution, and a stopping-criteria check that loops back until satisfied).

for BCO is shown in Figure 2. The BCO is proficient in solving combinatorial optimization problems by creating colonies of a multiagent system. The pseudocode for BCO is described in Algorithm 1. The bee colony system provides a standard, well-organized, and well-coordinated multitasking teamwork performance [33].

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments that are filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents through the Modified Nelder-Mead Method (MNMM). Each agent in a fragment passes information about its optimized point using MNMM. With MNMM the best points are obtained, and the best solution is chosen by the decision-making process of the honey bees. The algorithm is simplex-based: a D-dimensional simplex is initialized with D + 1 vertices, so in two dimensions it forms a triangle and in three dimensions a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is chosen


Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward Pass:
For every bee:
(2.1) set i = 1;
(2.2) evaluate all possible construction moves;
(2.3) based on the evaluation, choose one move using the Roulette Wheel;
(2.4) i = i + 1; if (i ≤ N) go to step (2.2),
where i is the counter for construction moves and N is the number of construction moves during one forward pass.
(3) Return to Hive.
(4) Backward Pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) Calculate the probability (or use logical reasoning) for each bee to continue with its computed solution and become a recruiter bee.
(7) For every follower bee, choose a new solution from the recruiters.
(8) If the stopping criteria are not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1 Pseudocode of BCO
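The forward/backward structure of Algorithm 1 can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the candidate-move generator, the objective, and the weighting of the roulette wheel (favoring moves with lower partial cost) are all assumptions for demonstration.

```python
import random

def roulette_wheel(options, weights):
    # choose an option with probability proportional to its weight (step 2.3)
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for opt, w in zip(options, weights):
        acc += w
        if r <= acc:
            return opt
    return options[-1]

def bco_minimize(candidate_moves, objective, n_bees=10, n_moves=8, iterations=40):
    """Forward pass: each bee builds a solution move by move via roulette-wheel
    selection. Backward pass: bees compare objective values and the best bee's
    (the recruiter's) solution is retained."""
    best = None
    for _ in range(iterations):
        solutions = []
        for _ in range(n_bees):
            sol = []                                  # (1) start from an empty solution
            for _ in range(n_moves):                  # (2) forward pass
                moves = candidate_moves(sol)
                weights = [1.0 / (1.0 + objective(sol + [m])) for m in moves]
                sol.append(roulette_wheel(moves, weights))
            solutions.append(sol)
        solutions.sort(key=objective)                 # (5) backward pass: sort by objective
        if best is None or objective(solutions[0]) < objective(best):
            best = solutions[0]                       # recruiter's solution is kept
    return best

# toy usage: pick 8 moves from {0,1,2,3} so that their sum approaches 10
best = bco_minimize(lambda sol: [0, 1, 2, 3], lambda sol: abs(sum(sol) - 10))
```

The toy objective stands in for the roster-cost function of Eq. (22); in the NRP each "move" would instead assign a nurse to a shift.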

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely, reflection, expansion, contraction, and shrinkage. Figure 3 shows these operators on the simplex triangle in MNMM.

At each simplex operation the vertices are updated closer to the optimal solution, and the vertices are ordered based on their fitness values. The best vertex is A_b, the second best vertex is A_s, and the worst vertex is A_w, determined by the objective function. Let A = (x, y) be a vertex of the triangle representing a food source point; A_b = (x_b, y_b), A_s = (x_s, y_s), and A_w = (x_w, y_w) are the positions of the food source points, that is, local optimal points. The objective function values for A_b, A_s, and A_w are calculated using (23) with respect to the food source points.

The objective function used to construct the simplex for the local search with MNMM is formulated as

f(x, y) = x^2 - 4x + y^2 - y - xy. (23)

Based on the objective function values, the vertices (food points) are ordered in ascending order together with their corresponding honey bee agents. The obtained values are ordered as A_b ≤ A_s ≤ A_w with their honey bee positions and food points in the simplex triangle. Figure 4 illustrates the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail below.

(1) In the simplex triangle, the reflection coefficient α, expansion coefficient γ, contraction coefficient β, and shrinkage coefficient δ are initialized.

(2) The objective function is calculated for the simplex triangle vertices, which are then ordered. The best vertex with the lowest objective value is A_b, the second best vertex is A_s, and the worst vertex is A_w; these vertices are ordered by objective function value as A_b ≤ A_s ≤ A_w.

(3) The first two best vertices, A_b and A_s, are selected, and the construction proceeds by calculating the midpoint of the line segment joining the two best vertices, that is, food positions. The objective function decreases as the honey bee agent associated with the worst vertex moves towards the best and second best vertices, that is, from A_w towards A_b and from A_w towards A_s. The midpoint vertex A_m of the line joining the best and second best vertices is calculated using

A_m = (A_b + A_s) / 2. (24)

(4) A reflection vertex A_r is generated as the reflection of the worst point A_w. The objective function value f(A_r) is calculated and compared with the worst-vertex value f(A_w). If f(A_r) < f(A_w), proceed with step (5). The reflection vertex is calculated using

A_r = A_m + α(A_m - A_w), where α > 0. (25)

(5) The expansion process starts when the objective function value at the reflection vertex A_r is less than that at the worst vertex A_w, that is, f(A_r) < f(A_w), and the line segment through A_w and A_r is further extended to A_e. The vertex point A_e is calculated by (26). If the objective function value at A_e is less than that at the reflection vertex, f(A_e) < f(A_r), then the expansion is accepted, and the honey bee agent has found a better food position than the reflection point.

A_e = A_r + γ(A_r - A_m), where γ > 1. (26)

(6) The contraction process is carried out when f(A_r) < f(A_s) and f(A_r) ≤ f(A_b), replacing A_b with


Figure 3: Nelder-Mead operations on the simplex triangle A_b, A_s, A_w: (a) simplex triangle; (b) reflection; (c) expansion; (d) contraction (A_h < A_r); (e) contraction (A_r < A_h); (f) shrinkage.

A_r. If f(A_r) > f(A_h), then a direct contraction without the replacement of A_b with A_r is performed. The contraction vertex A_c can be calculated using

A_c = βA_r + (1 - β)A_m, where 0 < β < 1. (27)

If f(A_r) ≤ f(A_b), the contraction is accepted and A_h is replaced with A_c; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process in step (6) fails, and it is done by shrinking all the vertices of the simplex triangle except A_h using (28). When the objective function values of the reflection and contraction phases are not less than that of the worst point, the vertices A_s and A_w must be shrunk towards A_h. Thus the vertices of smaller value form a new simplex triangle with the other two best vertices.

A_i = δA_i + A_1(1 - δ), where 0 < δ < 1. (28)

(8) The calculations are stopped when the termination condition is met.

Algorithm 2 describes the pseudocode of the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is explained in Figure 5.
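The eight steps above can be sketched for the two-dimensional case (a triangle, as in the paper's example) as follows. This is a minimal illustrative implementation, not the authors' code: the coefficient values α = 1, γ = 2, β = 0.5, δ = 0.5 are conventional choices, and the acceptance logic follows the standard Nelder-Mead ordering. On the test function of Eq. (23) the minimum is f(3, 2) = -7.

```python
def f(v):
    x, y = v
    return x * x - 4 * x + y * y - y - x * y            # Eq. (23)

def mnmm(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, iters=300):
    simplex = [list(v) for v in simplex]
    add = lambda a, b, s: [ai + s * bi for ai, bi in zip(a, b)]
    sub = lambda a, b: [ai - bi for ai, bi in zip(a, b)]
    for _ in range(iters):
        simplex.sort(key=f)                             # order A_b <= A_s <= A_w
        a_b, a_s, a_w = simplex
        a_m = [(p + q) / 2 for p, q in zip(a_b, a_s)]   # Eq. (24) midpoint
        a_r = add(a_m, sub(a_m, a_w), alpha)            # Eq. (25) reflection
        if f(a_r) < f(a_b):
            a_e = add(a_r, sub(a_r, a_m), gamma)        # Eq. (26) expansion
            simplex[2] = a_e if f(a_e) < f(a_r) else a_r
        elif f(a_r) < f(a_s):
            simplex[2] = a_r                            # accept reflection
        else:
            a_c = [beta * r + (1 - beta) * m for r, m in zip(a_r, a_m)]  # Eq. (27)
            if f(a_c) < f(a_w):
                simplex[2] = a_c                        # accept contraction
            else:                                       # Eq. (28) shrink toward best
                simplex = [[delta * x + (1 - delta) * b for x, b in zip(v, a_b)]
                           for v in simplex]
    simplex.sort(key=f)
    return simplex[0]

best = mnmm(f, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

Starting from an arbitrary simplex, the loop contracts onto the minimizer of the quadratic test function.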

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, and it is inspired by the natural foraging behavior of bees. The algorithm consists of two steps, a forward pass and a backward pass. During the forward pass, bees start to explore the neighborhood of their current solutions and find all possible ways. In the backward pass, the bees return to the hive and share the objective function values of their current solutions. The nectar amount is calculated using a probability


Figure 4: Bee search movement based on MNMM (the simplex A_b, A_s, A_w with midpoint A_m and the reflected, expanded, and contracted points A_r, A_e, A_c1, A_c2 leading to a new point A_new).

function, and the solution is advertised; the bee which has the better solution is given higher priority. The remaining bees decide, based on the probability value, whether to explore their own solutions or to proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of a food particle by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points through the natural behavior of bees called the waggle dance. When all the information about the best food sources has been shared, the best among the optimized points is chosen using the decision-making processes of honey bees called the consensus and quorum methods [34, 35].

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in the local environment; and agents should attempt to perform their goals. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same well-organized and structured environment. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice in the environment depend upon the variables used. The objective function is calculated based on the fixed parameters.

(1) Consider e independent parameters for calculating the objective function. The range of the g-th parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value of the g-th parameter.

(2) Thus the objective function can be formulated over e axes; each axis contains the total range of a single parameter in a different dimension.

(3) Each axis is divided into smaller parts, each part called a step. The g-th axis can thus be divided into n_g steps, each of length L_g, where g = 1 to e. The relationship between n_g and L_g is given as

n_g = (Q_gi - Q_gf) / L_g. (29)

(4) Then each axis is divided into branches; the branches across the g axes together form an


Modified Nelder-Mead Method for directed honey bee food search
(1) Initialization:
A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1.
α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
f is the objective function to be minimized.
(2) Ordering:
Order the vertices in the simplex from the lowest objective function value f(A_1) to the highest value f(A_{n+1}), so that A_1 ≤ A_2 ≤ ... ≤ A_{n+1}.
(3) Midpoint:
Calculate the midpoint of the first two best vertices in the simplex, A_m = Σ(A_i / n), where i = 1, 2, ..., n.
(4) Reflection Process:
Calculate the reflection point A_r by A_r = A_m + α(A_m - A_{n+1}).
if f(A_1) ≤ f(A_r) ≤ f(A_n) then A_n ← A_r and go to step (8) end if
(5) Expansion Process:
if f(A_r) ≤ f(A_1) then calculate the expansion point using A_e = A_r + γ(A_r - A_m) end if
if f(A_e) < f(A_r) then A_n ← A_e and go to step (8)
else A_n ← A_r and go to step (8) end if
(6) Contraction Process:
if f(A_n) ≤ f(A_r) ≤ f(A_{n+1}) then compute the outside contraction by A_c = βA_r + (1 - β)A_m end if
if f(A_r) ≥ f(A_{n+1}) then compute the inside contraction by A_c = βA_{n+1} + (1 - β)A_m end if
if f(A_r) ≥ f(A_n) then contraction is done between A_m and the best vertex among A_r and A_{n+1} end if
if f(A_c) < f(A_r) then A_n ← A_c and go to step (8) else go to step (7) end if
if f(A_c) ≥ f(A_{n+1}) then A_{n+1} ← A_c and go to step (8) else go to step (7) end if
(7) Shrinkage Process:
Shrink towards the best solution with new vertices by A_i = δA_i + A_1(1 - δ), where i = 2, ..., n + 1.
(8) Stopping Criteria:
Order and relabel the new vertices of the simplex based on their objective function values, and go to step (4).

Algorithm 2 Pseudocode of Modified Nelder-Mead Method

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = ∏_{g=1}^{e} n_g. (30)

(5) The starting point of an agent in the environment, which is a point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 - Q_f1)/2, (Q_i2 - Q_f2)/2, ..., (Q_ie - Q_fe)/2]. (31)
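The partitioning in steps (3) to (5) can be sketched as follows. The parameter ranges in the usage line are hypothetical, and the formulas follow Eqs. (29) to (31) literally.

```python
def partition_search_space(Q_initial, Q_final, L):
    """Split e parameter axes into steps and volumes, and compute the
    midpoint coordinates used as starting points of the exploration."""
    # Eq. (29): number of steps on each axis g
    n = [(qi - qf) / lg for qi, qf, lg in zip(Q_initial, Q_final, L)]
    # Eq. (30): total number of e-dimensional volumes
    N_v = 1
    for ng in n:
        N_v *= ng
    # Eq. (31): midpoint coordinates [(Q_i1 - Q_f1)/2, ..., (Q_ie - Q_fe)/2]
    midpoint = [(qi - qf) / 2 for qi, qf in zip(Q_initial, Q_final)]
    return n, N_v, midpoint

# e = 2 hypothetical parameters with ranges [10, 0] and [8, 0], step lengths 2 and 4
steps, volumes, mid = partition_search_space([10, 8], [0, 0], [2, 4])
```

Each of the resulting volumes would then be assigned to one bee, which starts its MNMM search from the midpoint.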

5.2. Decision-Making Process. A key task of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighbor nest site for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for food particles, report about the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the quiescent bees to advertise


Figure 5: Workflow of the Modified Nelder-Mead Method (initialization of the coefficients α, γ, β, δ and the objective function f(A); ordering and labeling of the vertices; midpoint calculation; and the reflection, expansion, contraction, and shrinkage branches, repeated until the termination criteria are met).


Multi-Objective Directed Bee Colony Optimization
(1) Initialization:
f(x) is the objective function to be minimized.
Initialize the e parameters and the step lengths L_g, where g = 0 to e.
Initialize the initial and final values of each parameter as Q_gi and Q_gf.
** Solution Representation **
The solutions are represented as binary values, generated as follows:
for each solution i = 1, ..., n:
X_i = {x_i1, x_i2, ..., x_id | d ∈ total days and x_id = (rand ≥ 0.29) ∀d}
end for
(2) The number of steps on each axis is calculated using n_g = (Q_gi - Q_gf) / L_g.
(3) The total number of volumes is calculated using N_v = ∏_{g=1}^{e} n_g.
(4) The midpoint of each volume, which gives the starting point of the exploration, is calculated using [(Q_i1 - Q_f1)/2, (Q_i2 - Q_f2)/2, ..., (Q_ie - Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method (Algorithm 2).
(6) Record the values of the optimized points in the vector table [f(V_1), f(V_2), ..., f(V_{N_v})].
(7) The globally optimized point is chosen by the bee decision-making process using the consensus and quorum method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO
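The binary solution representation in step (1) of Algorithm 3 can be sketched as follows. The 0.29 threshold mirrors the `rand >= 0.29` rule in the pseudocode; interpreting each bit as a per-day assignment indicator over a 28-day scheduling period is an assumption for illustration.

```python
import random

def init_binary_solution(total_days, threshold=0.29, rng=random):
    # x_id = 1 when a uniform random draw is >= the threshold (Algorithm 3, step 1)
    return [1 if rng.random() >= threshold else 0 for _ in range(total_days)]

roster_bits = init_binary_solution(28)   # one 28-day scheduling period
```

In the full algorithm, one such bit vector would be generated per nurse and then repaired or penalized against the hard and soft constraints of Tables 1 and 2.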

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution in the minimal optimization case, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This formulation finds the minimal optimal case among the search solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V, which consists of e elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached so that the group can start acting on the decision. Honey bees use the consensus method to select the best search value: the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained through the group decision-making process. When a solution reaches the optimal threshold level ξ_q, it is considered the final solution based on a unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is small, the computation time decreases, but it can lead to inaccurate experimental results. The threshold value should therefore be chosen to attain low computational time together with accurate experimental results.
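The two decision rules can be sketched as follows. The vector-table values are hypothetical per-volume optima; the quorum rule here accepts the first candidate meeting the threshold, which is one simple reading of the threshold-based process described above.

```python
def consensus(vector_table):
    # Eq. (33): the globally optimized point is the minimum recorded value
    g = min(range(len(vector_table)), key=lambda i: vector_table[i])
    return g, vector_table[g]

def quorum(vector_table, threshold):
    # accept the first candidate whose objective value meets the quorum threshold;
    # a smaller threshold means faster but potentially less accurate decisions
    for i, v in enumerate(vector_table):
        if v <= threshold:
            return i, v
    return None

values = [56.0, 51.0, 58.0, 54.0]   # hypothetical per-volume optima f(V_1), ..., f(V_4)
```

Consensus always inspects the whole table, whereas quorum can stop early, trading solution quality for computation time exactly as the text describes.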

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Six performance metrics are considered to investigate the significance of the experimental results; they are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_NRP-Instance - fitness_i) / Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence (AC) measures the quality of the generated population on average; it is the percentage of the average of the convergence rates of the solutions. The Average Convergence improves the convergence time by exploring more solutions in the population. It is calculated using

AC = Σ_{i=1}^{r} [1 - (Avg_fitness_i - Optimal_NRP-Instance) / Optimal_NRP-Instance] × 100, (35)

where r is the number of instances in the given dataset.


Figure 6: Flowchart of MODBCO (initialization of the parameter count e, objective function f(X), step lengths L_g, and parameter bounds Q_gi and Q_gf; calculation of the number of steps, number of volumes, and midpoint values; exploration of the search space using MNMM; the waggle dance; the consensus or quorum decision method; and output of the global optimum solution).


6.1.3. Standard Deviation. The standard deviation (SD) is a measure of the dispersion of a set of values from their mean value. The Average Standard Deviation (ASD) is the average of the standard deviations over all instances taken from the dataset. The ASD can be calculated using

ASD = sqrt( Σ_{i=1}^{r} (value obtained in instance_i - mean value of the instance)^2 ), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and worst convergence rates generated in the population. It can be calculated using

CD = Convergence_best - Convergence_worst, (37)

where Convergence_best is the convergence rate of the best fitness individual and Convergence_worst is the convergence rate of the worst fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained from our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i - Cost_NRP-Instance) / (total number of instances), (38)

where r is the number of instances in the given dataset.
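The three aggregate metrics above can be sketched as follows. The formulas follow Eqs. (34), (35), and (38) literally (including their sign conventions), and the sample values in the usage lines are hypothetical results on two instances with known optima 50 and 100.

```python
def least_error_rate(optimal, best):
    # Eq. (34): accumulated relative gap between known optima and best values found
    return sum((o - b) / o for o, b in zip(optimal, best))

def average_convergence(optimal, avg_fitness):
    # Eq. (35): average population quality relative to the known optima, in percent
    return sum(1 - (a - o) / o for o, a in zip(optimal, avg_fitness)) * 100

def average_cost_diversion(known_costs, obtained_costs):
    # Eq. (38): mean gap between obtained and known costs over r instances
    r = len(known_costs)
    return sum(c - k for k, c in zip(known_costs, obtained_costs)) / r

ler = least_error_rate([50, 100], [55, 110])
ac = average_convergence([50, 100], [60, 120])
acd = average_cost_diversion([50, 100], [55, 110])
```

A value of ACD near zero indicates that the obtained roster costs track the best-known costs closely.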

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is described briefly in this section. The main objectives of the proposed algorithm are as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided based on complexity and size into three tracks, namely, the sprint, medium, and long datasets. Each track is divided into four types, namely, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track is medium, which is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on instance size and listed in Table 4.

Table 3 gives a detailed description of the datasets: columns one to three index each dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences, shift off and day off, are given in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off for the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in cases 1 to 4 are small, those in cases 5 to 8 are medium in size, and the larger datasets fall in cases 9 to 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are conducted with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set through empirical evaluation; appropriate parameter values were determined in preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of a proposed algorithm over existing algorithms. Various statistical tests and measures for validating the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

like the ANOVA test and the post hoc Duncan's multiple range test to substantiate the effectiveness of the proposed algorithm and to help differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods for comparison.

Type | Method                                 | Reference
M1   | Artificial Bee Colony Algorithm        | [14]
M2   | Hybrid Artificial Bee Colony Algorithm | [15]
M3   | Global best harmony search             | [16]
M4   | Harmony Search with Hill Climbing      | [17]
M5   | Integer Programming Technique for NRP  | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter                | Value
Number of bees           | 100
Maximum iterations       | 1000
Initialization technique | Binary
Heuristic                | Modified Nelder-Mead Method
Termination condition    | Maximum iterations
Runs                     | 20
Reflection coefficient   | α > 0
Expansion coefficient    | γ > 1
Contraction coefficient  | 0 < β < 1
Shrinkage coefficient    | 0 < δ < 1

differences between various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental results with respect to best value (each cell: best/worst).

Instance | Optimal value | MODBCO | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued (each cell: best/worst).

Instance | Optimal value | MODBCO | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted and the alternate hypothesis is rejected.
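The decision rule above can be illustrated directly from the reported best values. The sketch below computes the one-way ANOVA F statistic from first principles for three of the methods, using the Sprint Early 01–06 best values from Table 7; the critical value F(2, 15) ≈ 3.68 at the 0.05 level is a standard table constant, and restricting to three groups of six observations is purely illustrative (the paper's test uses all six methods over 69 instances).

```python
def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (n - k))

# Best values for Sprint Early 01-06 (Table 7): MODBCO, M1, and M4.
modbco = [56, 59, 51, 59, 58, 53]
m1 = [63, 66, 60, 68, 65, 59]
m4 = [58, 64, 59, 67, 63, 58]

F = one_way_anova_F([modbco, m1, m4])
F_CRIT = 3.68  # F(2, 15) critical value at the 0.05 significance level
print("F =", round(F, 2), "->", "reject H0" if F > F_CRIT else "fail to reject H0")
```

Since F ≈ 7.26 exceeds the critical value, the hypothesis of equal means would be rejected for this subsample, mirroring the outcome of the full test in Table 8.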

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods. The method ranks the algorithms by their mean values in increasing or decreasing order and groups those that are not significantly different.

6.4. Experiments and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets under a similar environmental setup, using the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value (decimal points restored from the flattened original).

(a) ANOVA test (source factor: best value)

Source         | Sum of squares | df  | Mean square | F        | Sig.
Between groups | 10619.49       | 5   | 2123.898    | 3.620681 | 0.003
Within groups  | 239333.54      | 408 | 586.6018    |          |
Total          | 249953.03      | 413 |             |          |

(b) DMRT test (Duncan test: best value; subsets for alpha = 0.05)

Method | N  | Subset 1 | Subset 2
MODBCO | 69 | 120.2319 |
M2     | 69 | 124.1304 |
M5     | 69 | 128.0870 |
M3     | 69 | 129.3478 |
M1     | 69 | 143.1594 |
M4     | 69 |          | 263.5507
Sig.   |    | 0.629    | 1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of 69 instances.

The statistical analysis tests ANOVA and DMRT for the best values are shown in Table 8. It is perceived that the significance values are less than 0.05, which shows the null hypothesis is rejected. The significant difference between


Table 9: Experimental results with respect to error rate (%) (decimal points restored from the flattened original).

Case    | MODBCO | M1     | M2    | M3    | M4     | M5
Case 1  | −0.01  | 11.54  | 2.54  | 5.77  | 9.41   | 2.88
Case 2  | −0.73  | 20.16  | 1.69  | 19.81 | 22.35  | 0.83
Case 3  | −0.47  | 18.27  | 5.63  | 23.24 | 24.72  | 2.26
Case 4  | −9.65  | 20.03  | −2.64 | 33.48 | 18.53  | −4.18
Case 5  | 0.00   | 9.57   | 2.73  | 2.73  | 16.31  | 3.93
Case 6  | −2.57  | 46.37  | 27.71 | 27.71 | 220.44 | 40.62
Case 7  | 28.00  | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8  | 3.93   | 63.26  | 14.97 | 14.97 | 54.43  | 18.00
Case 9  | 0.52   | 17.14  | 2.15  | 2.15  | 54.04  | 8.61
Case 10 | 15.55  | 69.70  | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16   | 40.08  | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56  | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%). Four panels plot the error rate of MODBCO and M1–M5 over Cases 1–3, 4–6, 7–9, and 10–12.

the various optimization algorithms is observed. The DMRT test shows the homogeneous groups; two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 instances of the Sprint type, 18 instances achieved a zero error rate. For the Sprint type dataset, 88% of the instances attained a lower error rate. For the medium and larger sized datasets, the obtained error rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a better optimum value than the one specified in INRC2010.

The competitors M2 and M5 generate better solutions at the initial stage; as the size of the dataset increases, they are unable to find the optimal solution and get trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.
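The error rate in Table 9 is a percentage deviation from the published optimum. Assuming the usual definition (best cost found minus the optimum, divided by the optimum), it can be computed as below; the negative sign arises exactly when an algorithm improves on the published value:

```python
def error_rate(best_found, optimal):
    """Percentage deviation of the best cost found from the published
    optimum; negative when a new, better optimum was reached."""
    return (best_found - optimal) / optimal * 100.0

# Sprint Hidden 06 (Table 7): published optimum 134, MODBCO best 128.
print(round(error_rate(128, 134), 2))  # negative: better than the published optimum
```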


Figure 8: Performance analysis with respect to Average Convergence. Four panels plot the Average Convergence of MODBCO and M1–M5 over Cases 1–3, 4–6, 7–9, and 10–12.

Table 10: Statistical analysis with respect to error rate (decimal points restored from the flattened original).

(a) ANOVA test (source factor: error rate)

Source         | Sum of squares | df  | Mean square | F        | Sig.
Between groups | 638680.9       | 5   | 127736.1796 | 15.26182 | 0.000
Within groups  | 3414820        | 408 | 8369.657384 |          |
Total          | 4053501        | 413 |             |          |

(b) DMRT test (Duncan test: error rate; subsets for alpha = 0.05)

Method | N  | Subset 1 | Subset 2
MODBCO | 69 | 5.402238 |
M2     | 69 | 13.77936 |
M5     | 69 | 17.31724 |
M3     | 69 | 20.99903 |
M1     | 69 | 36.49033 |
M4     | 69 |          | 121.1591
Sig.   |    | 0.07559  | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis; thus there is a significant difference in the values across the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances and an 82% convergence rate on medium instances; for long instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate reduces as the problem size increases, becoming worse in many algorithms for larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
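Table 11 reports convergence as a percentage. One plausible reading, consistent with values of 100% at the optimum and negative values when the population average drifts far above it, is the following sketch; the exact normalization is an assumption, as the paper does not state it explicitly:

```python
def average_convergence(avg_fitness, optimal):
    """Closeness (%) of the population's average cost to the optimal cost
    for a minimization problem (assumed definition): 100 when avg_fitness
    equals the optimum, negative once the average exceeds twice it."""
    return (1.0 - (avg_fitness - optimal) / optimal) * 100.0

# A population averaging cost 60 on an instance with optimum 50 (hypothetical).
print(average_convergence(60.0, 50.0))
```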

The statistical test results for Average Convergence with the different optimization algorithms are shown in Table 12. From the table it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence (%) (decimal points restored from the flattened original).

Case    | MODBCO | M1     | M2     | M3    | M4      | M5
Case 1  | 87.01  | 70.84  | 69.79  | 72.40 | 62.59   | 66.87
Case 2  | 91.96  | 65.88  | 76.93  | 64.19 | 56.88   | 77.21
Case 3  | 83.40  | 53.63  | 47.23  | 38.23 | 30.07   | 47.44
Case 4  | 99.02  | 62.96  | 77.23  | 45.94 | 53.14   | 77.20
Case 5  | 96.27  | 86.49  | 91.10  | 92.71 | 77.29   | 89.01
Case 6  | 79.49  | 42.52  | 52.87  | 59.08 | −139.09 | 39.82
Case 7  | 56.34  | −32.73 | 24.52  | 26.23 | −54.91  | 9.97
Case 8  | 84.00  | 21.72  | 61.67  | 68.51 | 22.95   | 58.22
Case 9  | 96.98  | 78.81  | 91.68  | 92.59 | 39.78   | 84.36
Case 10 | 70.16  | 8.16   | 29.97  | 37.66 | −584.44 | 18.54
Case 11 | 84.58  | 54.10  | 73.49  | 74.40 | −94.85  | 66.95
Case 12 | 15.13  | −46.99 | −22.73 | −9.46 | −411.18 | −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. Four panels plot the Average Standard Deviation of MODBCO and M1–M5 over Cases 1–3, 4–6, 7–9, and 10–12.

in the mean values of convergence across the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
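The per-instance statistics behind Tables 7 and 13 come from repeated runs (20 per instance, per Table 6). A minimal sketch of how best, worst, mean, and standard deviation are derived from one instance's run costs (the run values below are hypothetical):

```python
import statistics

# Costs returned by 20 independent runs on a single instance (hypothetical).
runs = [56, 58, 57, 56, 59, 60, 57, 56, 58, 57,
        59, 56, 57, 58, 56, 60, 57, 58, 56, 57]

best, worst = min(runs), max(runs)
avg = statistics.mean(runs)
std = statistics.stdev(runs)  # sample standard deviation across the runs
print(best, worst, round(avg, 2), round(std, 2))
```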

The statistical test results for the Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation across the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity. Four panels plot the Convergence Diversity of MODBCO and M1–M5 over Cases 1–3, 4–6, 7–9, and 10–12.

Table 12: Statistical analysis with respect to Average Convergence (decimal points restored from the flattened original).

(a) ANOVA test (source factor: Average Convergence)

Source         | Sum of squares | df  | Mean square | F        | Sig.
Between groups | 712642.475     | 5   | 142528.495  | 15.47047 | 0.0000
Within groups  | 3758878.26     | 408 | 9212.9369   |          |
Total          | 4471520.73     | 413 |             |          |

(b) DMRT test (Duncan test: Average Convergence; subsets for alpha = 0.05)

Method | N  | Subset 1 | Subset 2
M4     | 69 | −47.6945 |
M1     | 69 |          | 46.4245
M5     | 69 |          | 53.6879
M3     | 69 |          | 57.6296
M2     | 69 |          | 59.5103
MODBCO | 69 |          | 81.6997
Sig.   |    | 1.00     | 0.054

value 0.05. In the DMRT test there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%; for the larger datasets the Convergence Diversity is 62% toward yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
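Convergence Diversity as defined above (best convergence minus worst convergence within the population) can be sketched as follows; the inner convergence measure reuses the same assumed normalization discussed in Section 6.4.3:

```python
def convergence(cost, optimal):
    # Assumed convergence measure for a minimization problem (see 6.4.3).
    return (1.0 - (cost - optimal) / optimal) * 100.0

def convergence_diversity(population_costs, optimal):
    """Spread between the best and the worst convergence in the population."""
    return (convergence(min(population_costs), optimal)
            - convergence(max(population_costs), optimal))

# Population of solution costs on a hypothetical instance with optimum 52.
print(round(convergence_diversity([52, 60, 75], 52), 2))
```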

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are shown in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms based on their mean values; there are three homogeneous groups


Table 13: Experimental results with respect to Average Standard Deviation (decimal points restored from the flattened original).

Case    | MODBCO   | M1       | M2       | M3       | M4       | M5
Case 1  | 6.501382 | 8.551285 | 9.83255  | 8.645203 | 10.57402 | 9.239032
Case 2  | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3  | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4  | 5.270417 | 5.310443 | 10.24612 | 9.50821  | 9.092851 | 8.049586
Case 5  | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6  | 7.15779  | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7  | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8  | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9  | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846  | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion. One panel plots the Average Cost Diversion of MODBCO and M1–M5 over Cases 1–12.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value. The larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
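Average Cost Diversion appears to measure how far the cost averaged over runs drifts from the published optimum; under that assumed reading (not spelled out in the paper), it is the run-averaged analogue of the error rate:

```python
def average_cost_diversion(avg_cost, optimal):
    """Percentage deviation of the run-averaged cost from the published
    optimum (assumed definition); negative means a new best was found."""
    return (avg_cost - optimal) / optimal * 100.0

# Runs averaging cost 53 on an instance with optimum 50 (hypothetical).
print(round(average_cost_diversion(53.0, 50.0), 2))
```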

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are shown in Table 18. From the table it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation (decimal points restored from the flattened original).

(a) ANOVA test (source factor: Average Standard Deviation)

Source         | Sum of squares | df  | Mean square | F       | Sig.
Between groups | 6974.407       | 5   | 1394.881    | 13.6322 | 0.000
Within groups  | 41747.6        | 408 | 102.3226    |         |
Total          | 48722.01       | 413 |             |         |

(b) DMRT test (Duncan test: Average Standard Deviation; subsets for alpha = 0.05)

Method | N  | Subset 1 | Subset 2 | Subset 3
MODBCO | 69 | 7.4743   |          |
M2     | 69 |          | 9.615152 |
M3     | 69 |          | 9.677027 |
M5     | 69 |          | 9.729477 |
M1     | 69 |          | 10.13242 |
M4     | 69 |          |          | 11.9316
Sig.   |    | 1.00     | 0.394    | 1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments for solving the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms are chosen to solve the NRP and are compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 7. To evaluate the performance of the proposed


Table 15: Experimental results with respect to Convergence Diversity (decimal points restored from the flattened original).

Case    | MODBCO | M1    | M2    | M3    | M4    | M5
Case 1  | 30.00  | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2  | 22.96  | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3  | 53.05  | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4  | 29.99  | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5  | 7.01   | 7.87  | 8.33  | 6.71  | 8.77  | 9.29
Case 6  | 20.89  | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7  | 43.69  | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8  | 27.67  | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9  | 6.89   | 8.10  | 8.22  | 8.04  | 8.24  | 9.10
Case 10 | 37.10  | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43  | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13  | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

Table 16: Statistical analysis with respect to Convergence Diversity (decimal points restored from the flattened original).

(a) ANOVA test (source factor: Convergence Diversity)

Source         | Sum of squares | df  | Mean square | F        | Sig.
Between groups | 5144.758       | 5   | 1028.952    | 9.287168 | 0.000
Within groups  | 45203.48       | 408 | 110.7928    |          |
Total          | 50348.24       | 413 |             |          |

(b) DMRT test (Duncan test: Convergence Diversity; subsets for alpha = 0.05)

Method | N  | Subset 1  | Subset 2 | Subset 3
M3     | 69 | 18.363232 |          |
MODBCO | 69 | 18.42029  |          |
M1     | 69 |           | 19.68116 |
M2     | 69 |           | 20.57971 |
M4     | 69 |           | 20.68116 |
M5     | 69 |           |          | 21.2464
Sig.   |    | 0.919     | 0.096    | 1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO has greater ability to attain the best values on the performance metrics, compared to the competitor algorithms, on INRC2010.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% toward the optimum value, and it attained a lower worst value in addition to the best solution. When the datasets are divided by size into smaller, medium, and large datasets, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it performs considerably better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any bias; thus MODBCO converges the population toward an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches for solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances attained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms classical Bee Colony

Computational Intelligence and Neuroscience 25

Table 17: Experimental results with respect to Average Cost Diversion (%) (decimal points restored from the flattened original).

Case    | MODBCO | M1    | M2    | M3    | M4     | M5
Case 1  | 0.00   | 6.40  | 1.40  | 3.20  | 5.20   | 1.60
Case 2  | −1.00  | 21.20 | 0.80  | 19.60 | 21.90  | 0.80
Case 3  | −0.40  | 8.10  | 1.80  | 10.60 | 11.00  | 0.90
Case 4  | −5.67  | 11.33 | −1.67 | 20.33 | 11.00  | −2.33
Case 5  | 5.20   | 24.00 | 6.80  | 6.80  | 40.00  | 9.80
Case 6  | 15.80  | 46.40 | 25.60 | 25.60 | 215.60 | 39.80
Case 7  | 13.20  | 46.00 | 16.80 | 16.80 | 67.80  | 20.20
Case 8  | 5.00   | 49.00 | 10.67 | 10.67 | 43.67  | 13.67
Case 9  | 1.20   | 40.80 | 5.00  | 5.00  | 127.60 | 20.20
Case 10 | 18.00  | 45.40 | 26.20 | 26.20 | 100.00 | 32.60
Case 11 | 26.00  | 70.00 | 33.60 | 33.60 | 335.40 | 40.60
Case 12 | 15.67  | 36.33 | 20.00 | 20.00 | 141.67 | 29.00

Table 18: Statistical analysis with respect to Average Cost Diversion (decimal points restored from the flattened original).

(a) ANOVA test (source factor: Average Cost Diversion)

Source         | Sum of squares | df  | Mean square | F        | Sig.
Between groups | 10619.49       | 5   | 2123.898    | 4.919985 | 0.000
Within groups  | 176128.67      | 408 | 431.6879    |          |
Total          | 186748.16      | 413 |             |          |

(b) DMRT test (Duncan test: Average Cost Diversion; subsets for alpha = 0.05)

Method | N  | Subset 1 | Subset 2
MODBCO | 69 | 6.202899 |
M2     | 69 | 10.10145 |
M5     | 69 | 14.05797 |
M3     | 69 | 15.31884 |
M1     | 69 | 29.13043 |
M4     | 69 |          | 149.5217
Sig.   |    | 0.573558 | 1.000

Optimization for solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems,

(b) exploring infeasible solutions to imitate the optimal solution,

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy,

(d) investigating application to the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No.2014-15/NFO-2014-15-OBC-PON-3843 (SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.


[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.


Computational Intelligence and Neuroscience 5

where $\min_{nds}$ is the minimum number of nurses required for a shift ($s$) on the day ($d$).

HC4: In this constraint, the total number of working days for each nurse should lie between the minimum and maximum limits for the given scheduled period:

$$W_{\min} \le \sum_{d=1}^{D} \sum_{s=1}^{S} S_{n,d,s} \le W_{\max}, \quad \forall n \in N. \quad (5)$$

The average working shift per nurse can be determined using

$$W_{\text{avg}} = \frac{1}{N} \sum_{n=1}^{N} \sum_{d=1}^{D} \sum_{s=1}^{S} S_{n,d,s}, \quad (6)$$

where $W_{\min}$ and $W_{\max}$ are the minimum and maximum numbers of working days in the scheduled period and $W_{\text{avg}}$ is the average working shift of the nurse.

HC5: In this constraint, shift 3 followed by shift 1 is not allowed; that is, a night shift followed by a day shift on the next day is not allowed:

$$\sum_{n=1}^{N} \sum_{d=1}^{D} S_{n,d,s_3} + S_{n,d+1,s_1} \le 1, \quad \forall s \in S. \quad (7)$$
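To make the hard constraints concrete, the checks of Eqs. (5) and (7) can be sketched on a binary roster array. This is an illustrative sketch, not the authors' implementation; the (nurses × days × shifts) array layout, the shift index order (index 0 = shift 1, index 2 = shift 3), and the function name are assumptions.

```python
import numpy as np

def hard_constraint_violations(roster, w_min, w_max):
    """Check HC4 (workload range, Eq. (5)) and HC5 (no shift 3 -> shift 1,
    Eq. (7)) on a binary roster of shape (nurses, days, shifts)."""
    violations = []
    workload = roster.sum(axis=(1, 2))          # total shifts per nurse
    for n, w in enumerate(workload):
        if not (w_min <= w <= w_max):
            violations.append(("HC4", n))
    # HC5: night shift (index 2) on day d followed by day shift (index 0) on day d+1
    night_then_day = roster[:, :-1, 2] + roster[:, 1:, 0]
    for n, d in zip(*np.where(night_then_day > 1)):
        violations.append(("HC5", int(n), int(d)))
    return violations
```

Each violation is reported with the nurse index (and day index for HC5), so a repair heuristic knows exactly where the roster is infeasible.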

SC1: The violation of the maximum number of shifts assigned to each nurse for the given scheduled period is as follows:

$$\max\Bigl(\sum_{d=1}^{D} \sum_{s=1}^{S} S_{n,d,s} - \Phi_{n}^{ub},\ 0\Bigr), \quad \forall n \in N, \quad (8)$$

where $\Phi_{n}^{ub}$ is the maximum number of shifts assigned to nurse ($n$).

SC2: The violation of the minimum number of shifts assigned to each nurse for the given scheduled period is as follows:

$$\max\Bigl(\Phi_{n}^{lb} - \sum_{d=1}^{D} \sum_{s=1}^{S} S_{n,d,s},\ 0\Bigr), \quad \forall n \in N, \quad (9)$$

where $\Phi_{n}^{lb}$ is the minimum number of shifts assigned to nurse ($n$).

SC3: The violation of the maximum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduled period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\bigl(C_{n}^{k} - \Theta_{n}^{ub},\ 0\bigr), \quad \forall n \in N, \quad (10)$$

where $\Theta_{n}^{ub}$ is the maximum number of consecutive working days of nurse ($n$), $\Psi_n$ is the total number of consecutive working spans of nurse ($n$) in the roster, and $C_{n}^{k}$ is the count of the $k$th working span of nurse ($n$).

SC4: The violation of the minimum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduled period is as follows:

$$\sum_{k=1}^{\Psi_n} \max\bigl(\Theta_{n}^{lb} - C_{n}^{k},\ 0\bigr), \quad \forall n \in N, \quad (11)$$

where $\Theta_{n}^{lb}$ is the minimum number of consecutive working days of nurse ($n$), $\Psi_n$ is the total number of consecutive working spans of nurse ($n$) in the roster, and $C_{n}^{k}$ is the count of the $k$th working span of nurse ($n$).

SC5: The violation of the maximum number of consecutive days on which no shift is allotted to the nurse (free days) for the given scheduled period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\bigl(\eth_{n}^{k} - \varphi_{n}^{ub},\ 0\bigr), \quad \forall n \in N, \quad (12)$$

where $\varphi_{n}^{ub}$ is the maximum number of consecutive free days of nurse ($n$), $\Gamma_n$ is the total number of consecutive free spans of nurse ($n$) in the roster, and $\eth_{n}^{k}$ is the count of the $k$th free span of nurse ($n$).

SC6: The violation of the minimum number of consecutive days on which no shift is allotted to the nurse for the given scheduled period is as follows:

$$\sum_{k=1}^{\Gamma_n} \max\bigl(\varphi_{n}^{lb} - \eth_{n}^{k},\ 0\bigr), \quad \forall n \in N, \quad (13)$$

where $\varphi_{n}^{lb}$ is the minimum number of consecutive free days of nurse ($n$), $\Gamma_n$ is the total number of consecutive free spans of nurse ($n$) in the roster, and $\eth_{n}^{k}$ is the count of the $k$th free span of nurse ($n$).

SC7: The violation of the maximum number of consecutive working weekends with at least one shift assigned to the nurse for the given scheduled period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\bigl(\zeta_{n}^{k} - \Omega_{n}^{ub},\ 0\bigr), \quad \forall n \in N, \quad (14)$$

where $\Omega_{n}^{ub}$ is the maximum number of consecutive working weekends of nurse ($n$), $\Upsilon_n$ is the total number of consecutive working weekend spans of nurse ($n$) in the roster, and $\zeta_{n}^{k}$ is the count of the $k$th working weekend span of nurse ($n$).

SC8: The violation of the minimum number of consecutive working weekends with at least one shift assigned to the nurse for the given scheduled period is as follows:

$$\sum_{k=1}^{\Upsilon_n} \max\bigl(\Omega_{n}^{lb} - \zeta_{n}^{k},\ 0\bigr), \quad \forall n \in N, \quad (15)$$

where $\Omega_{n}^{lb}$ is the minimum number of consecutive working weekends of nurse ($n$), with $\Upsilon_n$ and $\zeta_{n}^{k}$ defined as above.

SC9: The violation of the maximum number of weekends with at least one shift assigned to the nurse in four weeks is as follows:

$$\sum_{k=1}^{\kappa_n} \max\bigl(\omega_{n}^{k} - \varpi_{n}^{ub},\ 0\bigr), \quad \forall n \in N, \quad (16)$$

where $\omega_{n}^{k}$ is the number of working days at the $k$th weekend of nurse ($n$), $\varpi_{n}^{ub}$ is the maximum number of working weekend days for nurse ($n$), and $\kappa_n$ is the total count of weekends in the scheduling period of nurse ($n$).

SC10: The nurse can request working on a particular day in the given scheduled period:

$$\sum_{d=1}^{D} \lambda_{n}^{d} = 1, \quad \forall n \in N, \quad (17)$$

where $\lambda_{n}^{d}$ is the day request from nurse ($n$) to work on any shift on a particular day ($d$).

SC11: The nurse can request not to work on a particular day in the given scheduled period:

$$\sum_{d=1}^{D} \lambda_{n}^{d} = 0, \quad \forall n \in N, \quad (18)$$

where $\lambda_{n}^{d}$ is the request from nurse ($n$) not to work on any shift on a particular day ($d$).

SC12: The nurse can request working on a particular shift on a particular day in the given scheduled period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_{n}^{d,s} = 1, \quad \forall n \in N, \quad (19)$$

where $\Upsilon_{n}^{d,s}$ is the shift request from nurse ($n$) to work on a particular shift ($s$) on a particular day ($d$).

SC13: The nurse can request not to work on a particular shift on a particular day in the given scheduled period:

$$\sum_{d=1}^{D} \sum_{s=1}^{S} \Upsilon_{n}^{d,s} = 0, \quad \forall n \in N, \quad (20)$$

where $\Upsilon_{n}^{d,s}$ is the shift request from nurse ($n$) not to work on a particular shift ($s$) on a particular day ($d$).

SC14: The nurse should not work on an unwanted pattern suggested for the scheduled period:

$$\sum_{u=1}^{\varrho_n} \mu_{n}^{u}, \quad \forall n \in N, \quad (21)$$

where $\mu_{n}^{u}$ is the total count of occurrences of pattern type $u$ for nurse ($n$) and $\varrho_n$ is the set of unwanted patterns suggested for nurse ($n$).
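The consecutive-span soft constraints (e.g., SC3 and SC4, Eqs. (10) and (11)) reduce to measuring run lengths in each nurse's binary working-day vector. A minimal sketch, assuming a per-day 0/1 vector and hypothetical bound names `theta_lb`/`theta_ub`:

```python
from itertools import groupby

def working_spans(schedule):
    """Lengths C_k of the consecutive working-day spans in a nurse's
    binary day vector (1 = at least one shift that day)."""
    return [sum(g) for worked, g in groupby(schedule) if worked]

def span_penalty(schedule, theta_lb, theta_ub):
    """SC3 + SC4: penalise spans longer than theta_ub or shorter than
    theta_lb, mirroring the max(. , 0) terms of Eqs. (10)-(11)."""
    spans = working_spans(schedule)
    over = sum(max(c - theta_ub, 0) for c in spans)
    under = sum(max(theta_lb - c, 0) for c in spans)
    return over + under
```

The free-day constraints SC5/SC6 follow the same pattern with the vector inverted, and SC7–SC9 apply it at weekend granularity.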

The objective function of the NRP is to maximize the nurse preferences and minimize the penalty cost from violations of the soft constraints, as in (22):

$$\min f(S_{n,d,s}) = \sum_{\mathrm{sc}=1}^{14} P_{\mathrm{sc}}\Bigl(\sum_{n=1}^{N} \sum_{s=1}^{S} \sum_{d=1}^{D} S_{n,d,s}\Bigr) \ast T_{\mathrm{sc}}\Bigl(\sum_{n=1}^{N} \sum_{s=1}^{S} \sum_{d=1}^{D} S_{n,d,s}\Bigr). \quad (22)$$

Here, sc indexes the set of soft constraints listed in Table 2, $P_{\mathrm{sc}}(x)$ is the penalty weight for violating soft constraint sc, and $T_{\mathrm{sc}}(x)$ is the total number of violations of that soft constraint in the roster solution. Note that the penalty function [32] is used in the NRP to improve performance and to provide a fair comparison with other optimization algorithms.
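Eq. (22) can be sketched as a penalty-weighted sum of violation counts. Representing each soft constraint as a (penalty weight, counting function) pair is an assumption made for illustration; the counting functions would be the $T_{\mathrm{sc}}$ evaluators of Eqs. (8)–(21):

```python
def roster_cost(roster, soft_constraints):
    """Objective of Eq. (22): sum over soft constraints of
    penalty weight P_sc times violation count T_sc(roster).
    soft_constraints is a list of (penalty, count_fn) pairs."""
    return sum(penalty * count_fn(roster)
               for penalty, count_fn in soft_constraints)
```

Keeping the weights separate from the counters makes it easy to re-run the same roster under a different penalty profile, as the INRC2010 instances require.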

4. Bee Colony Optimization

4.1. Natural Behavior of Honey Bees. Swarm intelligence is an emerging discipline for the study of problems that require an optimal approach rather than a traditional one. Swarm intelligence, a branch of artificial intelligence, is based on the study of the behavior of social insects. It is composed of many individual actions in a decentralized, self-organized system. Swarm behavior is characterized by the natural behavior of many species, such as fish schools, herds of animals, and flocks of birds, formed out of the biological need to stay together. A swarm thus implies an aggregation of animals such as birds, fish, ants, and bees based on their collective behavior. The individual agents in the swarm behave stochastically, depending on their local perception of the neighborhood. Communication between the insects forms the colonies, and it promotes collective intelligence among them.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The swarm must be capable of simple space and time computations (proximity) and should respond to quality factors. It should allow diverse activities and should not be restricted to narrow channels. The swarm should remain stable and not fluctuate with every change in behavior, yet it must be adaptable enough to change its behavior mode when required. Several hundred bees from a swarm work together to find nesting sites and select the best one. Bee Colony Optimization is inspired by this natural behavior: the algorithm mirrors the group decision-making processes of honey bees, in which a honey bee searches for the best nest site by balancing speed and accuracy.

In a bee colony there are three types of bees: a single queen bee, thousands of male drone bees, and thousands of worker bees.

(1) The queen bee is responsible for creating new colonies by laying eggs.

(2) The male drone bees mate with the queen and are then discarded from the colony.

(3) The remaining female bees in the hive are called worker bees, the building blocks of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honey bee comb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources at random and returns when its energy is exhausted. After reaching the hive, scout bees share their information and start to explore rich food source locations with forager bees. The scout bee's information includes the direction, quality, quantity, and distance of the food source it found. A food source is communicated to foragers through dance, of which there are two types: the round dance and the waggle dance. The round dance indicates the direction of the food source when the distance is small. The waggle dance indicates both the position and the direction of the food source; the distance is encoded in the speed of the dance, with greater speed indicating a smaller distance, and the quantity of food is encoded in the wriggling of the bee. This exchange of information among hive mates builds collective knowledge: forager bees silently observe the behavior of the scout bee to learn the direction and details of the food source.

The group decision process of honey bees serves to find the best food source and nest site. The decision-making process is based on the swarming behavior of the honey bee. Swarming is the process in which the queen bee and half of the worker bees leave their hive to explore a new colony, while the remaining worker bees and the daughter bee stay in the old hive to monitor the waggle dance. After leaving the parental hive, the swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony; it provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source, allowing the colony to gather food quickly and efficiently. The swarm can reach its decision in two ways: consensus, in which group agreement is taken into account, and quorum, in which the decision is made once the vote for a site reaches a threshold value.
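The consensus/quorum choice described above can be sketched as a simple vote-counting rule. The 60% quorum threshold and the fallback to a plain majority are illustrative assumptions, not values from the paper:

```python
def choose_nest_site(votes, quorum_fraction=0.6):
    """Group decision sketch: declare a quorum if any candidate's
    vote share reaches the threshold; otherwise fall back to
    consensus (the candidate with the most votes)."""
    total = sum(votes.values())
    best_site = max(votes, key=votes.get)
    if votes[best_site] / total >= quorum_fraction:
        return best_site, "quorum"
    return best_site, "consensus"
```

In MODBCO the "votes" correspond to bees reporting optimized points, so the same rule picks the globally best point found across fragments.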

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds a neighboring solution from its current path. The algorithm alternates a forward and a backward pass. In the forward pass, every bee starts to explore the neighborhood of its current solution through constructive and improving moves: all the bees in the hive make a constructive move, and then a local search starts. In the backward pass, the bees share the objective values obtained in the forward pass. The bees with higher priority are used to discard all nonimproving moves, and the bees then either continue to explore in the next forward pass or continue the same process within the neighborhood. The flowchart

[Figure 2: Flowchart of the BCO algorithm — initialization → forward pass → construction move → backward pass → update the best solution → stopping criteria (loop until met).]

for BCO is shown in Figure 2. BCO is proficient in solving combinatorial optimization problems by creating colonies of a multiagent system. The pseudocode for BCO is described in Algorithm 1. The bee colony system provides standard, well-organized, and well-coordinated teamwork and multitasking performance [33].

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and serves as a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments, each filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents through the Modified Nelder-Mead Method (MNMM). Each agent in a fragment passes information about the optimized point using MNMM. The best points obtained by MNMM are then submitted to the honey bees' decision-making process, which chooses the best solution. The algorithm is simplex-based: a $D$-dimensional simplex is initialized with $D + 1$ vertices; in two dimensions it forms a triangle, and in three dimensions it forms a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is chosen


Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward pass: for every bee:
  (2.1) set $i = 1$;
  (2.2) evaluate all possible construction moves;
  (2.3) based on the evaluation, choose one move using the roulette wheel;
  (2.4) $i = i + 1$; if ($i \le N$), go to step (2.2);
  where $i$ is the counter for construction moves and $N$ is the number of construction moves during one forward pass.
(3) Return to the hive.
(4) The backward pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) By probability or logical reasoning, decide whether to continue with the computed solution and become a recruiter bee.
(7) Every follower chooses a new solution from the recruiters.
(8) If the stopping criterion is not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1 Pseudocode of BCO
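The forward/backward loop of Algorithm 1 can be sketched compactly as follows. The constructive move is abstracted as a user-supplied neighbour function, and recruitment is simplified to "the better half recruit the rest" in place of the roulette-wheel selection of step (2.3); all function names are assumptions for illustration:

```python
import random

def bco(init_solution, neighbor, cost, n_bees=10, n_passes=50):
    """Bee Colony Optimization sketch following Algorithm 1:
    forward pass = each bee makes a constructive move;
    backward pass = bees compare objective values and
    followers adopt a recruiter's solution."""
    bees = [init_solution() for _ in range(n_bees)]
    best = min(bees, key=cost)
    for _ in range(n_passes):
        # forward pass: every bee explores a neighbouring solution
        bees = [neighbor(b) for b in bees]
        # backward pass: sort by objective; better bees recruit
        bees.sort(key=cost)
        recruiters = bees[: max(1, n_bees // 2)]
        bees = recruiters + [random.choice(recruiters)
                             for _ in range(n_bees - len(recruiters))]
        if cost(bees[0]) < cost(best):
            best = bees[0]
    return best
```

For example, minimizing $x^2$ with integer steps as the "construction move" steers the whole population toward zero within a few hundred passes.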

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely, reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

In each simplex operation the vertices are updated closer to the optimal solution, and the vertices are ordered based on their fitness values. The best vertex is $A_b$, the second best vertex is $A_s$, and the worst vertex is $A_w$, determined by the objective function. Let $A = (x, y)$ be a vertex of the triangle representing a food source point; $A_b = (x_b, y_b)$, $A_s = (x_s, y_s)$, and $A_w = (x_w, y_w)$ are the positions of the food source points, that is, local optimal points. The objective function values for $A_b$, $A_s$, and $A_w$ are calculated with (23) for the food source points.

The objective function used to construct the simplex for the local search with MNMM is formulated as

$$f(x, y) = x^2 - 4x + y^2 - y - xy. \quad (23)$$

Based on the objective function values, the vertices (food points) are ordered in ascending order together with their corresponding honey bee agents. The obtained values are ordered as $A_b \le A_s \le A_w$, with their honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail below.

(1) In the simplex triangle, the reflection coefficient $\alpha$, expansion coefficient $\gamma$, contraction coefficient $\beta$, and shrinkage coefficient $\delta$ are initialized.

(2) The objective function for the simplex triangle vertices is calculated and the vertices are ordered. The best vertex with the lowest objective value is $A_b$, the second best vertex is $A_s$, and the worst vertex is $A_w$; these vertices are ordered by objective function value as $A_b \le A_s \le A_w$.

(3) The first two best vertices, $A_b$ and $A_s$, are selected, and the construction proceeds by calculating the midpoint of the line segment joining the two best vertices, that is, food positions. The objective function decreases as the honey agent associated with the worst vertex moves towards the best and second best vertices, that is, from $A_w$ towards $A_b$ and from $A_w$ towards $A_s$. The midpoint vertex $A_m$ of the line joining the best and second best vertices is

$$A_m = \frac{A_b + A_s}{2}. \quad (24)$$

(4) A reflection vertex $A_r$ is generated as the reflection of the worst point $A_w$. Its objective function value $f(A_r)$ is calculated and compared with the value $f(A_w)$ at the worst vertex. If $f(A_r) < f(A_w)$, proceed with step (5). The reflection vertex is calculated using

$$A_r = A_m + \alpha (A_m - A_w), \quad \text{where } \alpha > 0. \quad (25)$$

(5) The expansion process starts when the objective function value at the reflection vertex $A_r$ is less than at the worst vertex, $f(A_r) < f(A_w)$, and the line segment through $A_w$ and $A_r$ is extended further to $A_e$. The vertex $A_e$ is calculated by (26). If the objective function value at $A_e$ is less than at the reflection vertex, $f(A_e) < f(A_r)$, the expansion is accepted, and the honey bee agent has found a better food position than the reflection point:

$$A_e = A_r + \gamma (A_r - A_m), \quad \text{where } \gamma > 1. \quad (26)$$

(6) The contraction process is carried out when $f(A_r) < f(A_s)$ and $f(A_r) \le f(A_b)$, replacing $A_b$ with

[Figure 3: Nelder-Mead operations — (a) simplex triangle; (b) reflection; (c) expansion; (d) contraction ($A_h < A_r$); (e) contraction ($A_r < A_h$); (f) shrinkage.]

$A_r$. If $f(A_r) > f(A_h)$, a direct contraction without the replacement of $A_b$ by $A_r$ is performed. The contraction vertex $A_c$ can be calculated using

$$A_c = \beta A_r + (1 - \beta) A_m, \quad \text{where } 0 < \beta < 1. \quad (27)$$

If $f(A_r) \le f(A_b)$, the contraction is accepted and $A_c$ replaces $A_h$; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process of step (6) fails, and it is done by shrinking all the vertices of the simplex triangle except $A_h$ using (28). When the objective function values of the reflection and contraction phases are not less than at the worst point, the vertices $A_s$ and $A_w$ must be shrunk towards $A_h$. Thus the vertices of smaller value form a new simplex triangle with the other two best vertices:

$$A_i = \delta A_i + A_1 (1 - \delta), \quad \text{where } 0 < \delta < 1. \quad (28)$$

(8) The calculations are stopped when the termination condition is met.

Algorithm 2 describes the pseudocode of the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM for obtaining the best solution to the NRP. The workflow of the proposed MNMM is explained in Figure 5.

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, inspired by the natural foraging behavior of bees. The algorithm consists of two steps, the forward and the backward pass. During the forward pass, bees start to explore the neighborhood of their current solutions and find all possible ways; in the backward pass, bees return to the hive and share the objective function values of their current solutions. The nectar amount is calculated using a probability

[Figure 4: Bees' search movement based on MNMM.]

function, and the solution is advertised; the bee with the better solution is given higher priority. The remaining bees decide, based on the probability value, whether to explore that solution or proceed with the advertised one. Directed Bee Colony Optimization is a computational system in which several bees work together in unison and interact with each other to achieve goals based on the group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained by using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of food particles by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bees start the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points through the natural behavior of bees called the waggle dance. When all the information about the best food sources has been shared, the best among the optimized points is chosen using the honey bees' decision-making processes of consensus and quorum [34, 35].

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in a local environment; and agents attempt to perform their goals. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the fixed parameters.

(1) Consider $e$ independent parameters used to calculate the objective function. The range of the $g$th parameter is $[Q_{gi}, Q_{gf}]$, where $Q_{gi}$ is the initial value and $Q_{gf}$ is the final value of the $g$th parameter.

(2) Thus the objective function can be formulated over $e$ axes, each axis containing the total range of a single parameter with its own dimension.

(3) Each axis is divided into smaller parts, each called a step. The $g$th axis can be divided into $n_g$ steps, each of length $L_g$, where $g = 1$ to $e$. The relationship between $n_g$ and $L_g$ is given by

$$n_g = \frac{Q_{gi} - Q_{gf}}{L_g}. \quad (29)$$

(4) Then each axis is divided into branches; the branches over the $e$ axes together form an


Modified Nelder-Mead Method for directed honey bee food search
(1) Initialization:
  $A_i$ denotes the list of vertices in the simplex, where $i = 1, 2, \ldots, n + 1$.
  $\alpha$, $\gamma$, $\beta$, and $\delta$ are the coefficients of reflection, expansion, contraction, and shrinkage.
  $f$ is the objective function to be minimized.
(2) Ordering:
  Order the vertices in the simplex from lowest objective function value $f(A_1)$ to highest value $f(A_{n+1})$, so that $A_1 \le A_2 \le \cdots \le A_{n+1}$.
(3) Midpoint:
  Calculate the midpoint of the best $n$ vertices in the simplex, $A_m = \sum (A_i / n)$, where $i = 1, 2, \ldots, n$.
(4) Reflection process:
  Calculate the reflection point $A_r$ by $A_r = A_m + \alpha (A_m - A_{n+1})$.
  if $f(A_1) \le f(A_r) \le f(A_n)$ then $A_n \leftarrow A_r$ and go to step (8).
(5) Expansion process:
  if $f(A_r) \le f(A_1)$ then calculate the expansion point using $A_e = A_r + \gamma (A_r - A_m)$.
  if $f(A_e) < f(A_r)$ then $A_n \leftarrow A_e$ and go to step (8); else $A_n \leftarrow A_r$ and go to step (8).
(6) Contraction process:
  if $f(A_n) \le f(A_r) \le f(A_{n+1})$ then compute the outside contraction by $A_c = \beta A_r + (1 - \beta) A_m$.
  if $f(A_r) \ge f(A_{n+1})$ then compute the inside contraction by $A_c = \beta A_{n+1} + (1 - \beta) A_m$.
  if $f(A_r) \ge f(A_n)$ then the contraction is done between $A_m$ and the best vertex among $A_r$ and $A_{n+1}$.
  if $f(A_c) < f(A_r)$ then $A_n \leftarrow A_c$ and go to step (8); else go to step (7).
  if $f(A_c) \ge f(A_{n+1})$ then $A_{n+1} \leftarrow A_c$ and go to step (8); else go to step (7).
(7) Shrinkage process:
  Shrink towards the best solution with new vertices by $A_i = \delta A_i + A_1 (1 - \delta)$, where $i = 2, \ldots, n + 1$.
(8) Stopping criteria:
  Order and relabel the new vertices of the simplex based on their objective function values and go to step (4).

Algorithm 2 Pseudocode of Modified Nelder-Mead Method
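One iteration of the simplex update in Algorithm 2 can be sketched as follows. The acceptance rules are slightly simplified relative to the listing (a single contraction formula is used, and the shrink step always keeps the best vertex), so this is an illustrative sketch rather than the authors' exact MNMM:

```python
def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5):
    """One iteration of a Nelder-Mead simplex update:
    order, reflect, then expand / contract / shrink."""
    simplex = sorted(simplex, key=f)         # A_1 (best) ... A_{n+1} (worst)
    best, worst = simplex[0], simplex[-1]
    dim = len(best)
    # centroid of all vertices except the worst
    centroid = [sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
                for i in range(dim)]
    reflect = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
    if f(reflect) < f(best):                 # expansion
        expand = [r + gamma * (r - c) for r, c in zip(reflect, centroid)]
        simplex[-1] = expand if f(expand) < f(reflect) else reflect
    elif f(reflect) < f(simplex[-2]):        # accept plain reflection
        simplex[-1] = reflect
    else:                                    # contraction
        contract = [beta * r + (1 - beta) * c
                    for r, c in zip(reflect, centroid)]
        if f(contract) < f(worst):
            simplex[-1] = contract
        else:                                # shrink toward the best vertex
            simplex = [best] + [[delta * x + (1 - delta) * b
                                 for x, b in zip(v, best)]
                                for v in simplex[1:]]
    return simplex
```

Iterating this step from any starting triangle drives the simplex toward the local minimum of the objective, which is the role MNMM plays inside each search fragment.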

$e$-dimensional volume. The total number of volumes $N_v$ can be formulated using

$$N_v = \prod_{g=1}^{e} n_g. \quad (30)$$

(5) The starting point of an agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

$$\left[\frac{Q_{i1} - Q_{f1}}{2}, \frac{Q_{i2} - Q_{f2}}{2}, \ldots, \frac{Q_{ie} - Q_{fe}}{2}\right]. \quad (31)$$
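Eqs. (29)–(31) can be sketched as follows. Note that, for illustration, this sketch places each bee at the centre of its own volume rather than using the single midpoint vector of Eq. (31), and all function and variable names are assumptions:

```python
from itertools import product

def search_volumes(param_ranges, step_lengths):
    """Partition of the search space following Eqs. (29)-(30):
    each axis g with range [Q_gi, Q_gf] is cut into n_g steps of
    length L_g, giving N_v = prod(n_g) volumes; each bee starts
    from the midpoint of its volume."""
    steps = [int(round(abs(qf - qi) / lg))          # n_g per axis, Eq. (29)
             for (qi, qf), lg in zip(param_ranges, step_lengths)]
    n_volumes = 1                                    # N_v, Eq. (30)
    for n in steps:
        n_volumes *= n
    midpoints = []
    for index in product(*(range(n) for n in steps)):
        midpoints.append(tuple(
            qi + (k + 0.5) * lg                      # centre of each cell
            for k, ((qi, qf), lg) in zip(index,
                                         zip(param_ranges, step_lengths))))
    return n_volumes, midpoints
```

Each midpoint is then handed to one bee as the starting vertex of its MNMM exploration.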

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. The bees follow a distributed decision-making process to find the neighboring nest sites for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3, and Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for food particles, report on the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise

12 Computational Intelligence and Neuroscience

Figure 5: Workflow of the Modified Nelder-Mead Method (initialization of the coefficients α, γ, β, δ and the objective function f(A); ordering and labeling of the vertices; calculation of the midpoint A_m of the best vertices; the reflection, expansion, contraction, and shrinkage processes; and the termination check).


Multiobjective Directed Bee Colony Optimization
(1) Initialization: f(x) is the objective function to be minimized.
    Initialize e parameters and the step lengths L_g, where g = 0 to e.
    Initialize the initial and final values of each parameter as Q_gi and Q_gf.
    ** Solution Representation **
    The solutions are represented in the form of binary values, generated as follows:
    for each solution i = 1, …, n
        X_i = {x_i1, x_i2, …, x_id | d ∈ total days & x_id = (rand ≥ 0.29) ∀d}
    end for
(2) The number of steps for each parameter can be calculated using
    n_g = (Q_gi − Q_gf) / L_g
(3) The total number of volumes can be calculated using
    N_v = ∏_{g=1}^{e} n_g
(4) The midpoint of the volume, the starting point of the exploration, can be calculated using
    [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, …, (Q_ie − Q_fe)/2]
(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the value of the optimized point of each volume in the vector table
    [f(V_1), f(V_2), …, f(V_Nv)]
(7) The globally optimized point is chosen based on the bee decision-making process using the consensus and quorum approach:
    f(V_g) = min[f(V_1), f(V_2), …, f(V_Nv)]

Algorithm 3: Pseudocode of MODBCO.
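The flow of Algorithm 3 can be sketched compactly in Python; this is an illustration, not the authors' MATLAB code. The binary initialization follows the solution-representation step above (with the stated threshold of 0.29), the exploration of each volume stands in for the Modified Nelder-Mead search, and all helper names are our own.

```python
import random

def init_solutions(n_solutions, total_days, p=0.29, seed=1):
    """Binary solution representation: x_id = 1 when rand >= 0.29."""
    random.seed(seed)
    return [[1 if random.random() >= p else 0 for _ in range(total_days)]
            for _ in range(n_solutions)]

def modbco(volumes, explore):
    """Outline of steps (5)-(7): explore every volume, then pick by consensus."""
    # Step (6): record the optimized value of every volume in the vector table.
    vector_table = [explore(v) for v in volumes]
    # Step (7): consensus - the global optimum is the minimum recorded value.
    return min(vector_table)
```

Here `explore` is a caller-supplied local search (e.g. the Nelder-Mead routine of Algorithm 2) started from the midpoint of each volume.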

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting an individual solution, gives it to the centralized system. To select the best optimal solution for the minimal optimal cases, the mathematical formulation can be stated as

  dance_i = min(f_i(V)). (32)

This formulation finds the minimal optimal cases among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V, which consists of e elements; each element contains the value of a parameter, and both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement within the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached, after which the bees start acting on the decision pattern. Honey bees use the consensus method to select the best search value: the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation

  f(V_g) = min[f(V_1), f(V_2), …, f(V_Nv)]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained through the group decision-making process. When a solution reaches the optimal threshold level ξ_q, it is considered the final solution of the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is small, the computation time decreases, but it leads to inaccurate experimental results; the threshold value should therefore be chosen to attain low computational time with accurate experimental results.
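The consensus and quorum rules reduce to a few lines; the sketch below is illustrative, and the early-acceptance behavior (return a candidate as soon as it reaches the threshold ξ_q, otherwise fall back to full consensus) is our reading of the description above.

```python
# Consensus, eq. (33): the global optimum is the minimum of the vector table.
def consensus(vector_table):
    return min(vector_table)

# Quorum: accept a candidate as the final solution as soon as its value
# reaches the threshold level xi_q; otherwise fall back to full consensus.
def quorum(vector_table, xi_q):
    for value in vector_table:
        if value <= xi_q:
            return value
    return min(vector_table)
```

A tighter ξ_q makes quorum behave like consensus (more accurate, slower); a looser ξ_q stops the scan earlier.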

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five competitor methods. Six performance metrics are considered to investigate the significance of, and to evaluate, the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

  LER (%) = Σ_{i=1}^{r} ((fitness_i − Optimal_NRP-Instance) / Optimal_NRP-Instance) × 100. (34)
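As a sketch (assuming the known optima and the best fitness values found are available as parallel lists), equation (34) can be computed as:

```python
# Least Error Rate, eq. (34): summed relative gap (in percent) between the
# best fitness found and the known optimum of each instance.
def least_error_rate(optima, best_fitness):
    return sum((f - opt) / opt * 100 for opt, f in zip(optima, best_fitness))
```

The result is negative whenever the algorithm beats the published optimum, matching the sign convention of Table 9.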

6.1.2. Average Convergence. The Average Convergence is a measure to evaluate the quality of the generated population on average; the Average Convergence (AC) is the percentage of the average convergence rate of the solutions. A higher Average Convergence allows more solutions in the population to be explored within the same convergence time. The Average Convergence is calculated using

  AC = Σ_{i=1}^{r} (1 − (Avg_fitness_i − Optimal_NRP-Instance) / Optimal_NRP-Instance) × 100, (35)

where r is the number of instances in the given dataset.
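A sketch of equation (35) follows, with the per-instance averaging over r made explicit (the list arguments and the normalization by r are our reading of the definition of AC as an average):

```python
# Average Convergence, eq. (35): how close the average fitness of the
# population stays to the known optimum, expressed in percent.
def average_convergence(optima, avg_fitness):
    r = len(optima)
    return sum((1 - (af - opt) / opt) * 100
               for opt, af in zip(optima, avg_fitness)) / r
```

A population whose average fitness equals the optimum scores 100; populations far from the optimum can score negative, as in Table 11.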


Figure 6: Flowchart of MODBCO (initialization of the parameter count e, the objective function f(X), the step length L_g, and the initial and final parameters Q_gi and Q_gf; calculation of the number of steps n_g = (Q_gi − Q_gf)/L_g, the number of volumes, and the midpoint values; exploration of the search space using MNMM; waggle dance; consensus and quorum decision making; and selection of the global optimum solution).


6.1.3. Standard Deviation. The standard deviation (SD) is the measure of the dispersion of a set of values from its mean value. The Average Standard Deviation is the average of the standard deviations of all instances taken from the dataset. The Average Standard Deviation (ASD) can be calculated using

  ASD = sqrt( Σ_{i=1}^{r} (value obtained in each instance_i − mean value of the instance)² ), (36)

where r is the number of instances in the given dataset.
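A sketch of equation (36) is given below; whether the inner and outer sums are normalized is not fully explicit in the text, so the normalizations here (per-instance variance, then averaging over the r instances before the square root) are assumptions.

```python
import math

# Average Standard Deviation, eq. (36): dispersion of the values obtained
# for each instance around that instance's mean, averaged over r instances.
def average_standard_deviation(values_per_instance):
    r = len(values_per_instance)
    total = 0.0
    for values in values_per_instance:
        mean = sum(values) / len(values)
        # Per-instance variance of the values around the instance mean.
        total += sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(total / r)
```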

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. The Convergence Diversity can be calculated using

  CD = Convergence_best − Convergence_worst, (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost of an NRP instance and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversion over the total number of instances taken from the dataset. The value of ACD can be calculated from

  ACD = Σ_{i=1}^{r} (Cost_i − Cost_NRP-Instance) / (total number of instances), (38)

where r is the number of instances in the given dataset.
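Equation (38) reduces to a mean of per-instance cost gaps; as a sketch (the parallel list arguments are assumptions):

```python
# Average Cost Diversion, eq. (38): mean gap between the cost obtained and
# the cost published for each NRP instance.
def average_cost_diversion(costs, published_costs):
    r = len(costs)
    return sum(c - p for c, p in zip(costs, published_costs)) / r
```

Negative results indicate that, on average, the obtained costs beat the published ones.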

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) Minimize the total cost of the rostering problem.
(b) Satisfy all the hard constraints described in Table 1.
(c) Satisfy as many of the soft constraints described in Table 2 as possible.
(d) Enhance resource utilization.
(e) Distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided based on complexity and size into three tracks, namely sprint, medium, and long. Each track is divided into four types, early, late, hidden, and hint, with reference to the competition INRC2010. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days, with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days, with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days, with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on instance size and listed in Table 4.

Table 3 gives the detailed description of the datasets. Columns one to three index the dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. Nurse preferences are managed by the shift off and day off entries in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that there is no shift off and day off for the corresponding datasets.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in case 1 to case 4 are small, those in case 5 to case 8 are medium, and those in case 9 to case 12 are large.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are run with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows, on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set through empirical evaluation; appropriate parameter values are determined from preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5, and the heuristic parameters and their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of an algorithm are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

such as ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions vary significantly [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods for comparison.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.
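The one-way ANOVA F statistic underlying the tests in Tables 8 to 18 can be computed by hand as follows. This is an illustrative sketch (the paper's analysis was presumably run in a statistics package); `groups` holds one list of observations per method.

```python
# One-way ANOVA: F = (between-group mean square) / (within-group mean square).
def one_way_anova_f(groups):
    k = len(groups)                                # number of methods compared
    n = sum(len(g) for g in groups)                # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Sum of squares between groups, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Sum of squares within groups, around each group's own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)              # df_between = k - 1
    ms_within = ss_within / (n - k)                # df_within = n - k
    return ms_between / ms_within
```

A large F with a significance value below 0.05 rejects the null hypothesis that all methods perform alike, after which a post hoc test such as DMRT groups the methods.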


Table 7: Experimental results with respect to best value.

Instances | Optimal value | MODBCO (Best, Worst) | M1 (Best, Worst) | M2 (Best, Worst) | M3 (Best, Worst) | M4 (Best, Worst) | M5 (Best, Worst)
Sprint early 01 | 56 | 56, 75 | 63, 74 | 57, 81 | 59, 75 | 58, 77 | 58, 77
Sprint early 02 | 58 | 59, 77 | 66, 89 | 59, 80 | 61, 82 | 64, 88 | 60, 85
Sprint early 03 | 51 | 51, 68 | 60, 83 | 52, 75 | 54, 70 | 59, 84 | 53, 67
Sprint early 04 | 59 | 59, 77 | 68, 86 | 60, 76 | 63, 80 | 67, 84 | 61, 85
Sprint early 05 | 58 | 58, 74 | 65, 86 | 59, 77 | 60, 79 | 63, 86 | 60, 84
Sprint early 06 | 54 | 53, 69 | 59, 81 | 55, 79 | 57, 73 | 58, 75 | 56, 77
Sprint early 07 | 56 | 56, 75 | 62, 81 | 58, 79 | 60, 75 | 61, 85 | 57, 78
Sprint early 08 | 56 | 56, 76 | 59, 79 | 58, 79 | 59, 80 | 58, 82 | 57, 78
Sprint early 09 | 55 | 55, 82 | 59, 79 | 57, 76 | 59, 80 | 61, 78 | 56, 80
Sprint early 10 | 52 | 52, 67 | 58, 76 | 54, 73 | 55, 75 | 58, 78 | 53, 76
Sprint hidden 01 | 32 | 32, 48 | 45, 67 | 34, 57 | 43, 60 | 46, 65 | 33, 55
Sprint hidden 02 | 32 | 32, 51 | 41, 59 | 34, 55 | 37, 53 | 44, 68 | 33, 51
Sprint hidden 03 | 62 | 62, 79 | 74, 92 | 63, 86 | 71, 90 | 78, 96 | 62, 84
Sprint hidden 04 | 66 | 66, 81 | 79, 102 | 67, 91 | 80, 96 | 78, 100 | 66, 91
Sprint hidden 05 | 59 | 58, 73 | 68, 90 | 60, 79 | 63, 84 | 69, 88 | 59, 77
Sprint hidden 06 | 134 | 128, 146 | 164, 187 | 131, 145 | 203, 219 | 169, 188 | 132, 150
Sprint hidden 07 | 153 | 154, 172 | 181, 204 | 154, 175 | 197, 215 | 187, 210 | 155, 174
Sprint hidden 08 | 204 | 201, 219 | 246, 265 | 205, 225 | 267, 286 | 240, 260 | 206, 224
Sprint hidden 09 | 338 | 337, 353 | 372, 390 | 339, 363 | 274, 291 | 372, 389 | 340, 365
Sprint hidden 10 | 306 | 306, 324 | 328, 347 | 307, 330 | 347, 368 | 322, 342 | 308, 324
Sprint late 01 | 37 | 37, 53 | 50, 68 | 38, 57 | 46, 66 | 52, 72 | 39, 57
Sprint late 02 | 42 | 41, 61 | 53, 74 | 43, 61 | 50, 71 | 56, 80 | 44, 67
Sprint late 03 | 48 | 45, 65 | 57, 80 | 50, 73 | 57, 76 | 60, 77 | 49, 72
Sprint late 04 | 75 | 71, 87 | 88, 108 | 75, 95 | 106, 127 | 95, 115 | 74, 99
Sprint late 05 | 44 | 46, 66 | 52, 65 | 46, 68 | 53, 74 | 57, 74 | 46, 70
Sprint late 06 | 42 | 42, 60 | 46, 65 | 44, 68 | 45, 64 | 52, 70 | 43, 62
Sprint late 07 | 42 | 44, 63 | 51, 69 | 46, 69 | 62, 81 | 55, 78 | 44, 68
Sprint late 08 | 17 | 17, 36 | 17, 37 | 19, 41 | 19, 39 | 19, 40 | 17, 39
Sprint late 09 | 17 | 17, 33 | 18, 37 | 19, 42 | 19, 40 | 17, 40 | 17, 40
Sprint late 10 | 43 | 43, 59 | 56, 74 | 45, 67 | 56, 74 | 54, 71 | 43, 62
Sprint hint 01 | 78 | 73, 92 | 85, 103 | 77, 91 | 103, 119 | 90, 108 | 77, 99
Sprint hint 02 | 47 | 43, 59 | 57, 76 | 48, 68 | 61, 80 | 56, 81 | 45, 68
Sprint hint 03 | 57 | 49, 67 | 74, 96 | 52, 75 | 79, 97 | 69, 93 | 53, 66
Medium early 01 | 240 | 245, 263 | 262, 284 | 247, 267 | 247, 262 | 280, 305 | 250, 269
Medium early 02 | 240 | 243, 262 | 263, 280 | 247, 270 | 247, 263 | 281, 301 | 250, 273
Medium early 03 | 236 | 239, 256 | 261, 281 | 244, 264 | 244, 262 | 287, 311 | 245, 269
Medium early 04 | 237 | 245, 262 | 259, 278 | 242, 261 | 242, 258 | 278, 297 | 247, 272
Medium early 05 | 303 | 310, 326 | 331, 351 | 310, 332 | 310, 329 | 330, 351 | 313, 338
Medium hidden 01 | 123 | 143, 159 | 190, 210 | 157, 180 | 157, 177 | 410, 429 | 192, 214
Medium hidden 02 | 243 | 230, 248 | 286, 306 | 256, 277 | 256, 273 | 412, 430 | 266, 284
Medium hidden 03 | 37 | 53, 70 | 66, 84 | 56, 78 | 56, 69 | 182, 203 | 61, 81
Medium hidden 04 | 81 | 85, 104 | 102, 119 | 95, 119 | 95, 114 | 168, 191 | 100, 124
Medium hidden 05 | 130 | 182, 201 | 202, 224 | 178, 202 | 178, 197 | 520, 545 | 194, 214
Medium late 01 | 157 | 176, 195 | 207, 227 | 175, 198 | 175, 194 | 234, 257 | 179, 194
Medium late 02 | 18 | 30, 45 | 53, 76 | 32, 52 | 32, 53 | 49, 67 | 35, 55
Medium late 03 | 29 | 35, 52 | 71, 90 | 39, 60 | 39, 59 | 59, 78 | 44, 66
Medium late 04 | 35 | 42, 58 | 66, 83 | 49, 57 | 49, 70 | 71, 89 | 48, 70
Medium late 05 | 107 | 129, 149 | 179, 199 | 135, 156 | 135, 156 | 272, 293 | 141, 164
Medium hint 01 | 40 | 42, 62 | 70, 90 | 49, 71 | 49, 65 | 64, 82 | 50, 71
Medium hint 02 | 84 | 91, 107 | 142, 162 | 95, 116 | 95, 115 | 133, 158 | 96, 115
Medium hint 03 | 129 | 135, 153 | 188, 209 | 141, 161 | 141, 152 | 187, 208 | 148, 166


Table 7: Continued.

Instances | Optimal value | MODBCO (Best, Worst) | M1 (Best, Worst) | M2 (Best, Worst) | M3 (Best, Worst) | M4 (Best, Worst) | M5 (Best, Worst)
Long early 01 | 197 | 194, 209 | 241, 259 | 200, 222 | 200, 221 | 339, 362 | 220, 243
Long early 02 | 219 | 228, 245 | 276, 293 | 232, 254 | 232, 253 | 399, 419 | 253, 275
Long early 03 | 240 | 240, 258 | 268, 291 | 243, 257 | 243, 262 | 349, 366 | 251, 273
Long early 04 | 303 | 303, 321 | 336, 356 | 306, 324 | 306, 320 | 411, 430 | 319, 342
Long early 05 | 284 | 284, 300 | 326, 347 | 287, 310 | 287, 308 | 383, 403 | 301, 321
Long hidden 01 | 346 | 389, 407 | 444, 463 | 403, 422 | 403, 421 | 4466, 4488 | 422, 442
Long hidden 02 | 90 | 108, 126 | 132, 150 | 120, 139 | 120, 139 | 1071, 1094 | 128, 148
Long hidden 03 | 38 | 48, 63 | 61, 78 | 54, 72 | 54, 75 | 163, 181 | 54, 79
Long hidden 04 | 22 | 27, 45 | 49, 71 | 32, 54 | 32, 51 | 113, 132 | 36, 58
Long hidden 05 | 41 | 55, 71 | 78, 99 | 59, 81 | 59, 70 | 139, 157 | 60, 83
Long late 01 | 235 | 249, 267 | 290, 310 | 260, 278 | 260, 276 | 588, 606 | 262, 286
Long late 02 | 229 | 261, 280 | 295, 318 | 265, 278 | 265, 281 | 577, 595 | 263, 281
Long late 03 | 220 | 259, 275 | 307, 325 | 264, 285 | 264, 285 | 567, 588 | 272, 295
Long late 04 | 221 | 257, 292 | 304, 323 | 263, 284 | 263, 281 | 604, 627 | 276, 294
Long late 05 | 83 | 92, 107 | 142, 161 | 104, 122 | 104, 125 | 329, 349 | 118, 138
Long hint 01 | 31 | 40, 58 | 53, 73 | 44, 67 | 44, 65 | 126, 150 | 50, 72
Long hint 02 | 17 | 29, 47 | 40, 62 | 32, 55 | 32, 51 | 122, 145 | 36, 61
Long hint 03 | 53 | 79, 137 | 117, 135 | 85, 104 | 85, 101 | 278, 303 | 102, 123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods: it ranks the methods by their mean values in increasing or decreasing order and groups methods that are not significantly different.

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table is the best solution obtained by the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 3.620681 | 0.003
Within groups | 23933354 | 408 | 58660.18 | |
Total | 24995303 | 413 | | |

(b) DMRT test (Duncan test: best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. The results show that MODBCO accomplished the best result on 34 out of the 69 instances.

The statistical analysis tests, ANOVA and DMRT, for the best values are shown in Table 8. It is observed that the significance values are less than 0.05, which shows the null hypothesis is rejected. A significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | −0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | −0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | −0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | −9.65 | 20.03 | −2.64 | 33.48 | 18.53 | −4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | −2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (error rate (%) per NRP instance, Cases 1–12, for MODBCO and M1–M5).

the various optimization algorithms is observed. The DMRT test shows the homogeneous groups: two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 instances of sprint type, 18 achieved a zero error rate. For the sprint type dataset, 88% of instances attained a lower error rate. For the medium- and larger-sized datasets, the corresponding figures are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than the one specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they fail to find the optimal solution and get trapped in local optima. The error rates (%) obtained using MODBCO and the other algorithms are shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence (Average Convergence per NRP instance, Cases 1–12, for MODBCO and M1–M5).

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test (Duncan test: error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis; thus there is a significant difference in the values obtained by the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances, an 82% convergence rate on medium instances, and a 77% convergence rate on long instances. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate drops as the problem size grows and becomes worse in many algorithms for larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | −139.09 | 39.82
Case 7 | 56.34 | −32.73 | 24.52 | 26.23 | −54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | −584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | −94.85 | 66.95
Case 12 | 15.13 | −46.99 | −22.73 | −9.46 | −411.18 | −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation (Average Standard Deviation per NRP instance, Cases 1–12, for MODBCO and M1–M5).

in the mean convergence values of the different optimization algorithms. The ANOVA test shows rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows there are two homogeneous groups among the different optimization algorithms with respect to the mean convergence values.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.

The statistical test results for the Average Standard Deviation with the different optimization algorithms are shown in Table 14. There is a significant difference in the mean standard deviation values of the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity (Convergence Diversity per NRP instance, Cases 1–12, for MODBCO and M1–M5).

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 712642.475 | 5 | 142528.495 | 15.47047 | 0.0000
Within groups | 3758878.26 | 408 | 9212.9369 | |
Total | 4471520.73 | 413 | | |

(b) DMRT test (Duncan test: Average Convergence)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
M4 | 69 | −47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean standard deviation values.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the other competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.

The statistical tests, ANOVA and DMRT, with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean Convergence Diversity values of the various optimization algorithms. For the post hoc analysis, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms based on their mean values: there are three homogeneous groups


Table 13: Experimental results with respect to Average Standard Deviation.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion (cost diversion per NRP instance, Cases 1–12, for MODBCO and M1–M5).

among the various optimization algorithms with respect to the mean Convergence Diversity values.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances still yielded the optimum value; the larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests, ANOVA and DMRT, with respect to Average Cost Diversion are observed in Table 18. From the table, it is inferred that there is a significant difference in the mean cost diversion values of the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals there are two homogeneous groups among

Table 14 Statistical analysis with respect to Average StandardDeviation

(a) ANOVA test

Source factor Average Standard DeviationSum ofsquares df Mean square 119865 Sig

Betweengroups 6974407 5 1394881 136322 0000

Withingroups 417476 408 1023226

Total 4872201 413

(b) DMRT test

Duncan test Average Standard Deviation

Method 119873 Subset for alpha = 0051 2 3

MODBCO 69 74743M2 69 9615152M3 69 9677027M5 69 9729477M1 69 1013242M4 69 119316Sig 100 0394 100

the various optimization algorithms with respect to the meanvalues of the cost diversion
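The F values reported in the ANOVA tables come from the standard one-way decomposition of variance into between-group and within-group sums of squares. A self-contained sketch of this arithmetic, using synthetic groups rather than the paper's 69-instance data, is:

```python
# Hedged sketch of the one-way ANOVA arithmetic behind Tables 14, 16, and 18:
# F = (between-group mean square) / (within-group mean square). The sample
# groups are synthetic; only the computation mirrors the tables' layout.

def one_way_anova(groups):
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1          # here: number of algorithms minus 1
    df_within = n_total - len(groups)     # total observations minus group count
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between / ms_within         # the F statistic

groups = [
    [6.2, 5.9, 6.5, 6.1],    # e.g., cost diversions for one algorithm
    [10.1, 9.8, 10.5, 9.9],
    [14.0, 13.6, 14.3, 14.2],
]
f_stat = one_way_anova(groups)
print(f_stat > 1)  # True: widely separated group means give a large F
```

A large F with a significance below the critical value is exactly what leads the tables above to reject the null hypothesis of equal means.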

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms are chosen to solve the NRP and are compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed

24 Computational Intelligence and Neuroscience

Table 15: Experimental result with respect to Convergence Diversity.

Case     MODBCO  M1    M2    M3    M4    M5
Case 1   3000    3525  3728  3283  3794  3883
Case 2   2296    2792  2918  2385  2795  2762
Case 3   5305    5621  6466  5929  6079  6505
Case 4   2999    3403  3362  3084  3946  3332
Case 5   701     787   833   671   877   929
Case 6   2089    2221  2698  1929  2545  2487
Case 7   4369    5466  4813  5547  5024  5618
Case 8   2767    3003  3183  2411  3035  2969
Case 9   689     810   822   804   824   910
Case 10  3710    4429  4553  3895  4191  4998
Case 11  1143    1164  1081  1136  1191  1215
Case 12  9113    7596  8178  6990  8663  8588

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test

Source factor: Convergence Diversity
                 Sum of squares   df    Mean square   F          Sig.
Between groups   514475.8         5     102895.2      9.287168   0.000
Within groups    4520348          408   11079.28
Total            5034824          413

(b) DMRT test

Duncan test: Convergence Diversity
Method    N     Subset for alpha = 0.05
                1           2          3
M3        69    1836.3232
MODBCO    69    1842.029
M1        69                1968.116
M2        69                2057.971
M4        69                2068.116
M5        69                           2124.64
Sig.            0.919       0.096      1.00

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcome of our proposed algorithm and the performance of other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO has more ability to attain the best value on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained a lesser worst value in addition to the best solution. The datasets are divided based on their size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO has achieved 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% when compared with other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature. Thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles solving the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for global search and the Modified Nelder-Mead Method for local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of 69 instances got the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms the classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case     MODBCO   M1      M2      M3      M4       M5
Case 1   0.00     6.40    1.40    3.20    5.20     1.60
Case 2   -1.00    21.20   0.80    19.60   21.90    0.80
Case 3   -0.40    8.10    1.80    10.60   11.00    0.90
Case 4   -5.67    11.33   -1.67   20.33   11.00    -2.33
Case 5   5.20     24.00   6.80    6.80    40.00    9.80
Case 6   15.80    46.40   25.60   25.60   215.60   39.80
Case 7   13.20    46.00   16.80   16.80   67.80    20.20
Case 8   5.00     49.00   10.67   10.67   43.67    13.67
Case 9   1.20     40.80   5.00    5.00    127.60   20.20
Case 10  18.00    45.40   26.20   26.20   100.00   32.60
Case 11  26.00    70.00   33.60   33.60   335.40   40.60
Case 12  15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion
                 Sum of squares   df    Mean square   F          Sig.
Between groups   10619.49         5     2123.898      4.919985   0.000
Within groups    176128.67        408   431.6879
Total            186748.16        413

(b) DMRT test

Duncan test: Average Cost Diversion
Method    N     Subset for alpha = 0.05
                1          2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69               149.5217
Sig.            0.573558   1.00

Optimization for solving the NRP by satisfying both hard and soft constraints.

The future work can be projected towards:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No.2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.
[3] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, 2009.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.
[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.
[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.
[7] D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.
[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.
[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.
[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.
[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.
[12] J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.
[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems: a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.
[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.
[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.
[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.
[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.
[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.
[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.
[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.
[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.
[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.
[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.
[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.
[25] E. Burke, P. Cowling, P. De Causmaecker, and G. Vanden Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.
[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.
[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.
[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.
[29] S. Asta, E. Özcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.
[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.
[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.
[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.
[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.
[34] T. D. Seeley, P. K. Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.
[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.
[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.
[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.
[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.
[39] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.
[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.
[41] G. González-Rodríguez, A. Colubi, and M. Á. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.
[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.




where Ω_n^{lb} is the minimum number of consecutive working weekends of nurse (n), Υ_n is the total number of consecutive working weekend spans of nurse (n) in the roster, and ζ_n^k is the count of the kth working weekend span of the nurse (n).

SC9. The maximum number of weekends with at least one shift assigned to the nurse in four weeks is as follows:

    ∑_{k=1}^{ω_n} max((κ_n^k - ϖ_n^{ub}), 0)   ∀n ∈ N,   (16)

where κ_n^k is the number of working days at the kth weekend of nurse (n), ϖ_n^{ub} is the maximum number of working days for nurse (n), and ω_n is the total count of weekends in the scheduling period of nurse (n).

SC10. The nurse can request working on a particular day for the given scheduled period:

    ∑_{d=1}^{D} λ_n^d = 1   ∀n ∈ N,   (17)

where λ_n^d is the day request from the nurse (n) to work on any shift on a particular day (d).

SC11. The nurse can request that they do not work on a particular day for the given scheduled period:

    ∑_{d=1}^{D} λ_n^d = 0   ∀n ∈ N,   (18)

where λ_n^d is the request from the nurse (n) not to work on any shift on a particular day (d).

SC12. The nurse can request working on a particular shift on a particular day for the given scheduled period:

    ∑_{d=1}^{D} ∑_{s=1}^{S} Υ_n^{ds} = 1   ∀n ∈ N,   (19)

where Υ_n^{ds} is the shift request from the nurse (n) to work on a particular shift (s) on a particular day (d).

SC13. The nurse can request that they do not work on a particular shift on a particular day for the given scheduled period:

    ∑_{d=1}^{D} ∑_{s=1}^{S} Υ_n^{ds} = 0   ∀n ∈ N,   (20)

where Υ_n^{ds} is the shift request from the nurse (n) not to work on a particular shift (s) on a particular day (d).

SC14. The nurse should not work on an unwanted pattern suggested for the scheduled period:

    ∑_{u=1}^{℘_n} μ_n^u   ∀n ∈ N,   (21)

where μ_n^u is the total count of occurring patterns for nurse (n) of type u, and ℘_n is the set of unwanted patterns suggested for the nurse (n).

The objective function of the NRP is to maximize the nurse preferences and minimize the penalty cost from violations of soft constraints, as in (22):

    min f(S_{nds}) = ∑_{SC=1}^{14} P_sc(∑_{n=1}^{N} ∑_{s=1}^{S} ∑_{d=1}^{D} S_{nds}) * T_sc(∑_{n=1}^{N} ∑_{s=1}^{S} ∑_{d=1}^{D} S_{nds}).   (22)

Here SC refers to the set of soft constraints indexed in Table 2, P_sc(x) refers to the penalty weight for violation of the soft constraint, and T_sc(x) refers to the total violations of the soft constraints in the roster solution. It has to be noted that the usage of the penalty function [32] in the NRP is to improve the performance and provide a fair comparison with other optimization algorithms.
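To make the weighted-penalty evaluation of (22) concrete, it can be sketched in Python. The two soft-constraint checkers, the toy roster, and the penalty weights below are illustrative assumptions for exposition; they are not the paper's actual INRC2010 constraints or weights.

```python
# Hedged sketch of the penalty objective in (22): the total cost is the sum
# over soft constraints of (penalty weight) x (number of violations). The two
# constraints and their weights below are illustrative, not the INRC2010 values.

def count_day_off_violations(roster, day_off_requests):
    """SC11-style check: a nurse asked not to work on day d but is assigned a shift."""
    return sum(
        1
        for (nurse, day) in day_off_requests
        if any(roster[nurse][day][s] for s in range(len(roster[nurse][day])))
    )

def count_max_consecutive_violations(roster, max_run=5):
    """Illustrative check: more than max_run consecutive working days."""
    violations = 0
    for nurse_days in roster:
        run = 0
        for day in nurse_days:
            run = run + 1 if any(day) else 0
            if run > max_run:
                violations += 1
    return violations

def objective(roster, day_off_requests, weights):
    """min f(S) = sum over constraints of P_sc * T_sc, cf. (22)."""
    totals = {
        "day_off": count_day_off_violations(roster, day_off_requests),
        "max_consecutive": count_max_consecutive_violations(roster),
    }
    return sum(weights[sc] * t for sc, t in totals.items())

# Toy roster: 2 nurses x 7 days x 2 shifts (1 = assigned).
roster = [
    [[1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [1, 0], [0, 0]],  # a 6-day run
    [[0, 0], [0, 1], [0, 0], [0, 1], [0, 0], [0, 1], [0, 0]],
]
weights = {"day_off": 10, "max_consecutive": 30}
cost = objective(roster, day_off_requests=[(1, 1)], weights=weights)
print(cost)  # 10 (nurse 1 works a requested day off) + 30 (one 6-day run) -> 40
```

Extending this to the full formulation only means adding one counter per soft constraint SC1-SC14 and its weight to the two dictionaries.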

4. Bee Colony Optimization

4.1. Natural Behavior of Honey Bees. Swarm intelligence is an emerging discipline for the study of problems which require an optimal approach rather than the traditional approach. The use of swarm intelligence is a part of artificial intelligence based on the study of the behavior of social insects. Swarm intelligence is composed of many individual actions using a decentralized and self-organized system. Swarm behavior is characterized by the natural behavior of many species, such as fish schools, herds of animals, and flocks of birds, formed for the biological requirement to stay together. Swarm implies the aggregation of animals such as birds, fishes, ants, and bees based on collective behavior. The individual agents in a swarm have a stochastic behavior which depends on the local perception of the neighborhood. Communication between insects is formed with the help of the colonies, and it promotes collective intelligence among the colonies.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The proximity of the swarm must be capable of providing simple space and time computations, and the swarm should respond to the quality factors. The swarm should allow diverse activities and should not be restricted to narrow channels. The swarm should maintain stability and should not fluctuate based on the behavior. The adaptability of the swarm must allow the behavior mode to change when required. Several hundreds of bees from a swarm work together to find nesting sites and select the best nest site. Bee Colony Optimization is inspired by this natural behavior of bees. The bee optimization algorithm is inspired by the group decision-making processes of honey bees. A honey bee searches for the best nest site by considering both speed and accuracy.

In a bee colony there are three different types of bees: a single queen bee, thousands of male drone bees, and thousands of worker bees.

(1) The queen bee is responsible for creating new colonies by laying eggs.

(2) The male drone bees mate with the queen and are then discarded from the colonies.

(3) The remaining female bees in the hive are called worker bees, and they are the building block of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honey bee comb.

Based on their responsibility, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy gets exhausted. After reaching the hive, scout bees share the information and start to explore rich food source locations with forager bees. The scout bee's information includes the direction, quality, quantity, and distance of the food source it found. The way of communicating information about a food source to foragers is dance. There are two types of dance: the round dance and the waggle dance. The round dance provides the direction of the food source when the distance is small. The waggle dance indicates the position and direction of the food source; the distance can be measured by the speed of the dance. A greater speed indicates a smaller distance, and the quantity of the food depends on the wriggling of the bee. The exchange of information among hive mates serves to acquire collective knowledge. Forager bees silently observe the behavior of the scout bee to acquire knowledge about the directions and information of the food source.

The group decision process of honey bees is for searching for the best food source and nest site. The decision-making process is based on the swarming process of the honey bee. Swarming is the process in which the queen bee and half of the worker bees leave their hive to explore a new colony. The remaining worker bees and the daughter bee remain in the old hive to monitor the waggle dance. After leaving their parental hive, swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony. This provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases based on the quality of the food source and allows the colony to gather food quickly and efficiently. The decision-making process can be done by swarm bees in two methods to find the best nest site: consensus and quorum. Consensus is the group agreement taken into account, and quorum is the decision taken when the bee vote reaches a threshold value.
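The consensus/quorum distinction above can be illustrated with a small voting sketch; the vote stream and the threshold value are hypothetical, since the paper does not specify numeric values.

```python
# Hedged sketch: scout bees "vote" for candidate nest sites. Under quorum,
# a site is chosen as soon as its supporter count reaches a threshold;
# consensus would instead wait for full group agreement.

def quorum_choice(votes, threshold):
    """Return the first site whose running vote count reaches the quorum threshold."""
    counts = {}
    for site in votes:                    # votes arrive one bee at a time
        counts[site] = counts.get(site, 0) + 1
        if counts[site] >= threshold:
            return site
    return None                           # no quorum reached

votes = ["A", "B", "A", "A", "C", "A"]
print(quorum_choice(votes, threshold=3))  # "A" reaches 3 supporters first
```

The quorum rule trades some accuracy for speed: the decision is made before every bee has voted, which mirrors the speed/accuracy balance described above.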

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds its neighboring solution from the current path. The algorithm has a forward and a backward pass. In the forward pass, every bee starts to explore the neighborhood of its current solution and enables constructive and improving moves. In the forward pass, all bees in the hive start the constructive move, and then the local search starts. In the backward pass, bees share the objective values obtained in the forward pass. The bees with higher priority are used to discard all nonimproving moves. The bees will continue to explore in the next forward pass or continue the same process with the neighborhood. The flowchart

Figure 2: Flowchart of the BCO algorithm (Initialization, Forward pass, Construction move, Backward pass, Update the best solution, Stopping criteria: if false, repeat; if true, stop).

for BCO is shown in Figure 2. The BCO is proficient in solving combinatorial optimization problems by creating colonies of the multiagent system. The pseudocode for BCO is described in Algorithm 1. The bee colony system provides a standard, well-organized, and well-coordinated teamwork, multitasking performance [33].

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables, and it is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments and filled with bee agents. To obtain the best solution, each fragment can be searched by its bee agents through the Modified Nelder-Mead Method (MNMM). Each agent in the fragments passes information about the optimized point using MNMM. By using MNMM, the best points are obtained, and the best solution is chosen by the decision-making process of honey bees. The algorithm is a simplex-based method: a D-dimensional simplex is initialized with D + 1 vertices; in two dimensions it forms a triangle, and in three dimensions it forms a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is chosen


Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward Pass: for every bee:
    (2.1) set i = 1;
    (2.2) evaluate all possible construction moves;
    (2.3) based on the evaluation, choose one move using the Roulette Wheel;
    (2.4) i = i + 1; if (i <= N), go to step (2.2);
    where i is the counter for construction moves and N is the number of construction moves during one forward pass.
(3) Return to Hive.
(4) Backward Pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) Calculate the probability (or use logical reasoning) to continue with the computed solution and become a recruiter bee.
(7) For every follower, choose the new solution from the recruiters.
(8) If the stopping criteria are not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1: Pseudocode of BCO.
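The forward/backward structure of Algorithm 1, including the Roulette Wheel choice in step (2.3), can be sketched as a minimal Python program. The toy objective, the move set, and all parameter values below are assumptions for illustration; they are not the paper's NRP formulation.

```python
import random

# Hedged, minimal sketch of Algorithm 1 (BCO) on a toy problem: each "move"
# is an integer cost in 0..9, and a solution is a list of num_moves moves.

def fitness(solution):
    return sum(solution)  # toy objective: minimize the total cost of moves

def roulette_wheel(moves):
    # Step (2.3): cheaper moves get a proportionally higher selection weight.
    weights = [1.0 / (1 + m) for m in moves]
    return random.choices(list(moves), weights=weights, k=1)[0]

def bco(num_bees=10, num_moves=5, iterations=30, seed=42):
    random.seed(seed)
    # Step (1): here every bee starts from a randomly constructed solution.
    bees = [[roulette_wheel(range(10)) for _ in range(num_moves)]
            for _ in range(num_bees)]
    best = min(bees, key=fitness)
    for _ in range(iterations):
        # Step (2), forward pass: each bee makes one construction move.
        for bee in bees:
            bee[random.randrange(num_moves)] = roulette_wheel(range(10))
        # Steps (3)-(7), backward pass: sort, recruit, followers copy recruiters.
        bees.sort(key=fitness)
        recruiters = bees[: num_bees // 2]
        for i in range(num_bees // 2, num_bees):
            bees[i] = list(random.choice(recruiters))  # copy, do not alias
        best = min(bees + [best], key=fitness)         # steps (9)-(10)
    return best

best = bco()
print(fitness(best))  # a small total cost, since cheap moves are favored
```

In MODBCO the constructive move and the follower/recruiter step are replaced by the MNMM local search and the honey bee decision process described below.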

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

Through the simplex operations, each vertex is updated closer to its optimal solution; the vertices are ordered based on their fitness values. The best vertex is A_b, the second best vertex is A_s, and the worst vertex is A_w, determined based on the objective function. Let A = (x, y) be a vertex of the triangle taken as a food source point; A_b = (x_b, y_b), A_s = (x_s, y_s), and A_w = (x_w, y_w) are the positions of the food source points, that is, local optimal points. The objective functions for A_b, A_s, and A_w are calculated based on (23) towards the food source points.

The objective function used to construct the simplex for the local search using MNMM is formulated as

    f(x, y) = x^2 - 4x + y^2 - y - xy.   (23)

Based on the objective function value, the vertices (food points) are ordered in ascending order with their corresponding honey bee agents. The obtained values are ordered as A_b <= A_s <= A_w with their honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail below.

(1) In the simplex triangle, the reflection coefficient α, expansion coefficient γ, contraction coefficient β, and shrinkage coefficient δ are initialized.

(2) The objective function for the simplex triangle vertices is calculated and ordered. The best vertex with the lowest objective value is A_b, the second best vertex is A_s, and the worst vertex is named A_w; these vertices are ordered based on the objective function as A_b <= A_s <= A_w.

(3) The first two best vertices are selected, namely A_b and A_s, and the construction proceeds by calculating the midpoint of the line segment which joins the two best vertices, that is, food positions. The objective function decreases as the honey agent associated with the worst position vertex moves towards the best and second best vertices. The value decreases as the honey agent moves from A_w to A_b and from A_w to A_s. It is feasible to calculate the midpoint vertex A_m of the line joining the best and second best vertices using

    A_m = (A_b + A_s) / 2.   (24)

(4) A reflection vertex A_r is generated by choosing the reflection of the worst point A_w. The objective function value f(A_r) for A_r is calculated and compared with the worst vertex objective function value f(A_w). If f(A_r) < f(A_w), proceed with step (5); the reflection vertex can be calculated using

    A_r = A_m + α(A_m - A_w),  where α > 0.   (25)

(5) The expansion process starts when the objective function value at the reflection vertex A_r is less than that at the worst vertex A_w, f(A_r) < f(A_w), and the line segment is further extended to A_e through A_r and A_w. The vertex point A_e is calculated by (26). If the objective function value at A_e is less than that at the reflection vertex A_r, f(A_e) < f(A_r), then the expansion is accepted, and the honey bee agent has found a better food position compared with the reflection point:

    A_e = A_r + γ(A_r - A_m),  where γ > 1.   (26)

(6) The contraction process is carried out when f(A_r) < f(A_s) and f(A_r) <= f(A_b) for replacing A_b with

Figure 3: Nelder-Mead operations: (a) simplex triangle; (b) reflection; (c) expansion; (d) contraction (A_h < A_r); (e) contraction (A_r < A_h); (f) shrinkage.

A_r. If f(A_r) > f(A_h), then the direct contraction without the replacement of A_b with A_r is performed. The contraction vertex A_c can be calculated using

    A_c = βA_r + (1 - β)A_m,  where 0 < β < 1.   (27)

If f(A_r) <= f(A_b), the contraction can be done and A_c replaces A_h; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process at step (6) fails; it shrinks all the vertices of the simplex triangle except $A_h$ using (28). When the objective function values of the reflection and contraction phases are not less than that of the worst point, the vertices $A_s$ and $A_w$ must be shrunk towards $A_h$. The vertices of smaller value then form a new simplex triangle with the other two best vertices.

$A_i = \delta A_i + A_1 (1 - \delta)$, where $0 < \delta < 1$. (28)

(8) The calculations are stopped when the termination condition is met.

Algorithm 2 gives the pseudocode for the Modified Nelder-Mead Method and portrays in detail the process by which MNMM obtains the best solution for the NRP. The workflow of the proposed MNMM is shown in Figure 5.
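The update rules of steps (4)-(8) can be sketched in Python as one simplex iteration. This is an illustrative sketch of the classical Nelder-Mead moves the paper builds on, not the authors' code; the coefficient values are common defaults, and the acceptance logic follows the standard method where the paper's wording is ambiguous.

```python
import numpy as np

def nm_step(simplex, f, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5):
    """One iteration of the update rules (25)-(28).

    simplex: (n+1, n) array of vertices; f: objective to minimize.
    Returns the updated simplex.
    """
    # Order vertices from best (lowest f) to worst (highest f).
    simplex = simplex[np.argsort([f(v) for v in simplex])]
    A_w = simplex[-1]                       # worst vertex
    A_m = simplex[:-1].mean(axis=0)         # midpoint of the n best vertices
    A_r = A_m + alpha * (A_m - A_w)         # reflection, Eq. (25)
    if f(A_r) < f(simplex[0]):              # better than the best: try expansion
        A_e = A_r + gamma * (A_r - A_m)     # expansion, Eq. (26)
        simplex[-1] = A_e if f(A_e) < f(A_r) else A_r
    elif f(A_r) < f(simplex[-2]):           # better than second worst: accept
        simplex[-1] = A_r
    else:
        A_c = beta * A_r + (1 - beta) * A_m  # contraction, Eq. (27)
        if f(A_c) < min(f(A_r), f(A_w)):
            simplex[-1] = A_c
        else:                               # shrinkage, Eq. (28), toward A_1
            simplex = delta * simplex + simplex[0] * (1 - delta)
    return simplex
```

Iterating `nm_step` until the simplex diameter (or the spread of objective values) falls below a tolerance reproduces the loop of steps (2)-(8).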

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems; it is inspired by the natural behavior of bees searching for food sources. The algorithm consists of two steps, a forward and a backward pass. During the forward pass, bees explore the neighborhood of their current solutions and find all possible ways. In the backward pass, bees return to the hive, share the values of the objective function of their current solutions, calculate the nectar amount using a probability

Figure 4: Bees' search movement based on MNMM, showing the simplex vertices $A_b$, $A_s$, $A_w$, the midpoint $A_m$, and the candidate points $A_r$, $A_e$, $A_{c1}$, $A_{c2}$, and $A_{new}$.

function, and advertise their solutions; the bee with the better solution is given higher priority. The remaining bees decide, based on the probability value, whether to explore their own solutions or proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of food particles by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, each bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points through the natural behavior of bees called the waggle dance. When all the information about the best food sources has been shared, the best among the optimized points is chosen using the honey bee decision-making processes called the consensus and quorum methods [34, 35].
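The forward/backward pass with dance-based recruitment described above can be sketched as follows. This is a hedged, high-level sketch: the names `explore` (where MNMM would be plugged in) and the loyalty rule are illustrative assumptions, not the paper's exact recruitment formula.

```python
import random

def dbco_pass(bees, explore, f, rng=random):
    """One forward/backward pass of the bee colony (illustrative sketch).

    bees: list of current solutions; explore: local-search operator
    applied to each bee's solution; f: cost function to minimize.
    """
    # Forward pass: each bee explores the neighborhood of its solution.
    bees = [explore(b) for b in bees]
    # Backward pass: bees share costs in the hive; a lower cost means a
    # higher nectar amount, hence higher priority when advertised.
    costs = [f(b) for b in bees]
    best = bees[costs.index(min(costs))]
    max_c, min_c = max(costs), min(costs)
    span = (max_c - min_c) or 1.0
    for i, c in enumerate(costs):
        loyalty = (max_c - c) / span      # 1.0 for the best bee, 0.0 for worst
        if rng.random() > loyalty:        # uncommitted bee follows the dance
            bees[i] = best
    return bees
```

Repeating this pass, with MNMM as the `explore` operator, gives the overall forward-pass/backward-pass loop of the algorithm.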

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in its local environment; and agents attempt to perform their goals. All agents interact with each other and take decisions to achieve the desired goals. A multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the parameters fixed.

(1) Consider $e$ independent parameters used to calculate the objective function. The range of the $g$th parameter is $[Q_{gi}, Q_{gf}]$, where $Q_{gi}$ is the initial value and $Q_{gf}$ is the final value of the $g$th parameter chosen.

(2) Thus the objective function can be formulated over $e$ axes; each axis contains the total range of a single parameter, with a different dimension.

(3) Each axis is divided into smaller parts, each part called a step. The $g$th axis can thus be divided into $n_g$ steps, each of length $L_g$, where $g = 1$ to $e$. The relationship between $n_g$ and $L_g$ is given as

$n_g = \dfrac{Q_{gi} - Q_{gf}}{L_g}$. (29)

(4) Then each axis is divided into branches; together, the branches along the $e$ axes form an


Modified Nelder-Mead Method for directed honey bee food search

(1) Initialization: $A_i$ denotes the list of vertices in the simplex, where $i = 1, 2, \ldots, n+1$; $\alpha$, $\gamma$, $\beta$, and $\delta$ are the coefficients of reflection, expansion, contraction, and shrinkage; $f$ is the objective function to be minimized.

(2) Ordering: order the vertices in the simplex from lowest objective function value $f(A_1)$ to highest value $f(A_{n+1})$, so that $f(A_1) \le f(A_2) \le \cdots \le f(A_{n+1})$.

(3) Midpoint: calculate the midpoint of the $n$ best vertices in the simplex, $A_m = \sum (A_i / n)$, where $i = 1, 2, \ldots, n$.

(4) Reflection process: calculate the reflection point $A_r$ by $A_r = A_m + \alpha (A_m - A_{n+1})$.
if $f(A_1) \le f(A_r) \le f(A_n)$ then $A_n \leftarrow A_r$ and go to Step (8) end if

(5) Expansion process:
if $f(A_r) \le f(A_1)$ then calculate the expansion point using $A_e = A_r + \gamma (A_r - A_m)$ end if
if $f(A_e) < f(A_r)$ then $A_n \leftarrow A_e$ and go to Step (8)
else $A_n \leftarrow A_r$ and go to Step (8) end if

(6) Contraction process:
if $f(A_n) \le f(A_r) \le f(A_{n+1})$ then compute the outside contraction by $A_c = \beta A_r + (1 - \beta) A_m$ end if
if $f(A_r) \ge f(A_{n+1})$ then compute the inside contraction by $A_c = \beta A_{n+1} + (1 - \beta) A_m$ end if
if $f(A_r) \ge f(A_n)$ then contraction is done between $A_m$ and the best vertex among $A_r$ and $A_{n+1}$ end if
if $f(A_c) < f(A_r)$ then $A_n \leftarrow A_c$ and go to Step (8) else go to Step (7) end if
if $f(A_c) \ge f(A_{n+1})$ then $A_{n+1} \leftarrow A_c$ and go to Step (8) else go to Step (7) end if

(7) Shrinkage process: shrink towards the best solution with new vertices by $A_i = \delta A_i + A_1 (1 - \delta)$, where $i = 2, \ldots, n+1$.

(8) Stopping criteria: order and relabel the new vertices of the simplex based on their objective function values and go to Step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.

$e$-dimensional volume. The total number of volumes $N_v$ can be formulated using

$N_v = \prod_{g=1}^{e} n_g$. (30)

(5) The starting point of an agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

$\left[ \dfrac{Q_{i1} - Q_{f1}}{2}, \dfrac{Q_{i2} - Q_{f2}}{2}, \ldots, \dfrac{Q_{ie} - Q_{fe}}{2} \right]$. (31)
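Steps (1)-(5) can be sketched as a short partitioning routine. This is an illustrative sketch, not the authors' code: it follows (29)-(30) for the step and volume counts, and, per item (5), uses the midpoint of each volume as that volume's starting point (the function name and the per-volume midpoint formula are assumptions).

```python
from itertools import product

def partition_search_space(q_init, q_final, lengths):
    """Partition e parameter axes into steps and enumerate volume midpoints.

    q_init[g], q_final[g]: range of the g-th parameter; lengths[g]: step
    length L_g.  Returns (steps per axis, total volumes, starting points).
    """
    e = len(q_init)
    # Eq. (29): n_g steps on the g-th axis (absolute value guards the sign)
    n = [round(abs(q_init[g] - q_final[g]) / lengths[g]) for g in range(e)]
    # Eq. (30): N_v = prod_g n_g volumes
    n_v = 1
    for n_g in n:
        n_v *= n_g
    # Item (5): the starting point of each volume is its midpoint on every axis
    starts = []
    for idx in product(*(range(n_g) for n_g in n)):
        starts.append(tuple(
            min(q_init[g], q_final[g]) + (idx[g] + 0.5) * lengths[g]
            for g in range(e)))
    return n, n_v, starts
```

Each starting point would then seed one bee's MNMM exploration of its own volume.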

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighboring nest sites for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. After returning from the search for food particles, the scout bees report on the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise

Figure 5: Workflow of the Modified Nelder-Mead Method: initialization of the coefficients $\alpha, \gamma, \beta, \delta$ and the objective function $f(A)$; ordering and labeling of vertices; midpoint calculation for the two best vertices; then the reflection, expansion, contraction, and shrinkage processes, repeated until the termination criteria are met.


Multi-Objective Directed Bee Colony Optimization

(1) Initialization: $f(x)$ is the objective function to be minimized. Initialize the $e$ parameters and the step lengths $L_g$, where $g = 0$ to $e$. Initialize the initial and final values of each parameter as $Q_{gi}$ and $Q_{gf}$.
** Solution representation **: the solutions are represented as binary values, generated as follows:
for each solution $i = 1, \ldots, n$: $X_i = \{x_{i1}, x_{i2}, \ldots, x_{id} \mid d \in \text{total days} \ \& \ x_{id} = \text{rand} \ge 0.29\ \forall d\}$
end for

(2) The number of steps on each axis can be calculated using $n_g = (Q_{gi} - Q_{gf})/L_g$.

(3) The total number of volumes can be calculated using $N_v = \prod_{g=1}^{e} n_g$.

(4) The midpoint of each volume, the starting point of the exploration, can be calculated using $\left[ (Q_{i1} - Q_{f1})/2, (Q_{i2} - Q_{f2})/2, \ldots, (Q_{ie} - Q_{fe})/2 \right]$.

(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.

(6) Record the values of the optimized points in the vector table as $[f(V_1), f(V_2), \ldots, f(V_{N_v})]$.

(7) The globally optimized point is chosen by the bee decision-making process using the consensus and quorum method approach: $f(V_g) = \min [f(V_1), f(V_2), \ldots, f(V_{N_v})]$.

Algorithm 3: Pseudocode of MODBCO.
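The binary solution representation in step (1) of Algorithm 3 can be sketched as follows. The 0.29 threshold is taken from the pseudocode as printed; the function name and the use of a seedable generator are illustrative assumptions.

```python
import random

def init_solutions(n_solutions, total_days, threshold=0.29, seed=None):
    """Generate binary solution strings X_i = {x_i1, ..., x_id} as in
    Algorithm 3, step (1): x_id = 1 when a uniform draw is >= threshold."""
    rng = random.Random(seed)
    return [[1 if rng.random() >= threshold else 0
             for _ in range(total_days)]
            for _ in range(n_solutions)]
```

For an INRC2010 instance, `total_days` would be the 28-day scheduling period, one bit per day of the planning horizon.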

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting an individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimal cases, the mathematical formulation can be stated as

$\text{dance}_i = \min (f_i(V))$. (32)

This mathematical formulation finds the minimal optimal cases among the searched solutions, where $f_i(V)$ is the search value calculated by the agent. The search values are recorded in the vector table $V$, a vector consisting of $e$ elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached so that the group can start acting on the decision. Honey bees use the consensus method to select the best search value: the globally optimized point is chosen by comparing the values in the vector table, using the mathematical formulation

$f(V_g) = \min [f(V_1), f(V_2), \ldots, f(V_{N_v})]$. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained through the group decision-making process. When a solution reaches the optimal threshold level $\xi_q$, it is considered the final solution, based on a unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but this leads to inaccurate experimental results; the threshold value should therefore be chosen to attain less computational time while keeping the experimental results accurate.
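The consensus rule (33) and the quorum shortcut can be sketched together. This is an illustrative sketch: the function name, the dictionary-based vector table, and the "first value under the threshold wins" reading of the quorum rule are assumptions layered on the paper's description.

```python
def select_global_optimum(vector_table, quorum_threshold=None):
    """Pick the globally optimized point from the vector table.

    vector_table: mapping volume -> objective value f(V).
    Consensus, Eq. (33), takes the minimum advertised value; the optional
    quorum threshold accepts a solution early once it is good enough.
    """
    if quorum_threshold is not None:
        # Quorum: the first advertised value at or under the threshold is
        # taken as the final solution (unison decision).
        for volume, value in vector_table.items():
            if value <= quorum_threshold:
                return volume, value
    # Consensus: widespread agreement on the best advertised value.
    volume = min(vector_table, key=vector_table.get)
    return volume, vector_table[volume]
```

A tighter `quorum_threshold` trades longer computation for a result closer to the consensus optimum, mirroring the trade-off described above.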

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Six performance metrics are considered to investigate the significance of the algorithm and evaluate the experimental results; the metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

$\text{LER}\,(\%) = \sum_{i=1}^{r} \dfrac{\text{Optimal}_{\text{NRP-Instance}} - \text{fitness}_i}{\text{Optimal}_{\text{NRP-Instance}}}$. (34)

6.1.2. Average Convergence. The Average Convergence is a measure of the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions; it improves the convergence time by exploring more solutions in the population. The Average Convergence is calculated using

$\text{AC} = \sum_{i=1}^{r} \left( 1 - \dfrac{\text{Avg\_fitness}_i - \text{Optimal}_{\text{NRP-Instance}}}{\text{Optimal}_{\text{NRP-Instance}}} \right) \times 100$, (35)

where $r$ is the number of instances in the given dataset.

Figure 6: Flowchart of MODBCO: initialization of the parameters $e$, $f(X)$, $L_g$, $Q_{gi}$, and $Q_{gf}$; calculation of the number of steps $n_g = (Q_{gi} - Q_{gf})/L_g$, the number of volumes $N_v = \prod_g n_g$, and the midpoint values; exploration of the search space using MNMM; the waggle dance; consensus and quorum decision making against the threshold level; and output of the global optimum solution.


6.1.3. Standard Deviation. The standard deviation (SD) is a measure of the dispersion of a set of values from its mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

$\text{ASD} = \sqrt{\sum_{i=1}^{r} (\text{value obtained in each instance}_i - \text{mean value of the instance})^2}$, (36)

where $r$ is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and worst convergence rates generated in the population. The Convergence Diversity can be calculated using

$\text{CD} = \text{Convergence}_{\text{best}} - \text{Convergence}_{\text{worst}}$, (37)

where $\text{Convergence}_{\text{best}}$ is the convergence rate of the best-fitness individual and $\text{Convergence}_{\text{worst}}$ is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

$\text{ACD} = \sum_{i=1}^{r} \dfrac{\text{Cost}_i - \text{Cost}_{\text{NRP-Instance}}}{\text{Total number of instances}}$, (38)

where $r$ is the number of instances in the given dataset.
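The five metrics (34)-(38) can be written directly from the formulas. This is a sketch, not the authors' evaluation code; the only liberty taken is applying the percentage scaling of (35) to the LER as well, since (34) is labeled as a percentage even though the printed formula omits the factor of 100.

```python
from math import sqrt

def least_error_rate(optimal, best_fitness):
    """Eq. (34): summed relative deviation of best values from the optima, in %."""
    return sum((o - b) / o for o, b in zip(optimal, best_fitness)) * 100

def average_convergence(optimal, avg_fitness):
    """Eq. (35): AC as a percentage over r instances."""
    return sum(1 - (a - o) / o for o, a in zip(optimal, avg_fitness)) * 100

def average_std_deviation(values, means):
    """Eq. (36): square root of the summed squared deviations per instance."""
    return sqrt(sum((v - m) ** 2 for v, m in zip(values, means)))

def convergence_diversity(best_rate, worst_rate):
    """Eq. (37): spread between best and worst convergence rates."""
    return best_rate - worst_rate

def average_cost_diversion(costs, instance_costs):
    """Eq. (38): average diversion of the obtained cost from the known cost."""
    r = len(costs)
    return sum(c - k for c, k in zip(costs, instance_costs)) / r
```

For example, a single instance with known optimum 100 and best obtained value 90 gives an LER contribution of 10%.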

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is illustrated briefly in this section. The main objective of the proposed algorithm is to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many soft constraints described in Table 2 as possible;
(d) enhance the resource utilization;
(e) equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided based on complexity and size into three tracks, namely, sprint, medium, and long. Each track is divided into four types, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest: it involves 10 nurses and consists of 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track: it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is given in Table 3. The datasets are classified into twelve cases based on the size of the instances, as listed in Table 4.

Table 3 gives a detailed description of the datasets. Columns one to three index each dataset by track, type, and instance. Columns four to seven give the numbers of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. Nurse preferences are managed by the shift-off and day-off entries in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off in the corresponding dataset.

Table 4 lists the datasets used in the experiment, classified by size. The datasets in Cases 1 to 4 are small, those in Cases 5 to 8 are medium, and the large datasets are classified into Cases 9 to 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are run with different optimization algorithms under similar environment conditions to assess performance. The proposed algorithm for solving the NRP is coded on the MATLAB 2012 platform under Windows, on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set through empirical evaluation; appropriate parameter values were determined in preliminary experiments. The competitor methods chosen to evaluate the performance of the proposed algorithm are listed in Table 5, and the heuristic parameters with their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures for validating the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track Type Instance Nurses Skills Shifts Contracts Unwanted pattern Shift off Day off Weekend Time period

Sprint Early 01–10 10 1 4 4 3 2 1-01-2010 to 28-01-2010
Sprint Hidden 01-02 10 1 3 3 4 2 1-06-2010 to 28-06-2010
Sprint Hidden 03, 05, 08 10 1 4 3 8 2 1-06-2010 to 28-06-2010
Sprint Hidden 04, 09 10 1 4 3 8 2 1-06-2010 to 28-06-2010
Sprint Hidden 06, 07 10 1 3 3 4 2 1-01-2010 to 28-01-2010
Sprint Hidden 10 10 1 4 3 8 2 1-01-2010 to 28-01-2010
Sprint Late 01, 03–05 10 1 4 3 8 2 1-01-2010 to 28-01-2010
Sprint Late 02 10 1 3 3 4 2 1-01-2010 to 28-01-2010
Sprint Late 06, 07, 10 10 1 4 3 0 2 1-01-2010 to 28-01-2010
Sprint Late 08 10 1 4 3 0 × × 2 1-01-2010 to 28-01-2010
Sprint Late 09 10 1 4 3 0 × × 2, 3 1-01-2010 to 28-01-2010
Sprint Hint 01, 03 10 1 4 3 8 2 1-01-2010 to 28-01-2010
Sprint Hint 02 10 1 4 3 0 2 1-01-2010 to 28-01-2010
Medium Early 01–05 31 1 4 4 0 2 1-01-2010 to 28-01-2010
Medium Hidden 01–04 30 2 5 4 9 × × 2 1-06-2010 to 28-06-2010
Medium Hidden 05 30 2 5 4 9 × × 2 1-06-2010 to 28-06-2010
Medium Late 01 30 1 4 4 7 2 1-01-2010 to 28-01-2010
Medium Late 02, 04 30 1 4 3 7 2 1-01-2010 to 28-01-2010
Medium Late 03 30 1 4 4 0 2 1-01-2010 to 28-01-2010
Medium Late 05 30 2 5 4 7 2 1-01-2010 to 28-01-2010
Medium Hint 01, 03 30 1 4 4 7 2 1-01-2010 to 28-01-2010
Medium Hint 02 30 1 4 4 7 2 1-01-2010 to 28-01-2010
Long Early 01–05 49 2 5 3 3 2 1-01-2010 to 28-01-2010
Long Hidden 01–04 50 2 5 3 9 × × 2, 3 1-06-2010 to 28-06-2010
Long Hidden 05 50 2 5 3 9 × × 2, 3 1-06-2010 to 28-06-2010
Long Late 01, 03, 05 50 2 5 3 9 × × 2, 3 1-01-2010 to 28-01-2010
Long Late 02, 04 50 2 5 4 9 × × 2, 3 1-01-2010 to 28-01-2010
Long Hint 01 50 2 5 3 9 × × 2, 3 1-01-2010 to 28-01-2010
Long Hint 02, 03 50 2 5 3 7 × × 2 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number Case Track Type
1 Case 1 Sprint Early
2 Case 2 Sprint Hidden
3 Case 3 Sprint Late
4 Case 4 Sprint Hint
5 Case 5 Medium Early
6 Case 6 Medium Hidden
7 Case 7 Medium Late
8 Case 8 Medium Hint
9 Case 9 Long Early
10 Case 10 Long Hidden
11 Case 11 Long Late
12 Case 12 Long Hint

such as ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions vary significantly [40]. The authors used a one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods compared.

Type Method Reference
M1 Artificial Bee Colony Algorithm [14]
M2 Hybrid Artificial Bee Colony Algorithm [15]
M3 Global best harmony search [16]
M4 Harmony Search with Hill Climbing [17]
M5 Integer Programming Technique for NRP [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter Value
Number of bees 100
Maximum iterations 1000
Initialization technique Binary
Heuristic Modified Nelder-Mead Method
Termination condition Maximum iterations
Runs 20
Reflection coefficient α > 0
Expansion coefficient γ > 1
Contraction coefficient 0 < β < 1
Shrinkage coefficient 0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show differences in the performance of the algorithms.


Table 7: Experimental results with respect to best value.

Instances Optimal value | MODBCO Best Worst | M1 Best Worst | M2 Best Worst | M3 Best Worst | M4 Best Worst | M5 Best Worst

Sprint early 01 56 | 56 75 | 63 74 | 57 81 | 59 75 | 58 77 | 58 77
Sprint early 02 58 | 59 77 | 66 89 | 59 80 | 61 82 | 64 88 | 60 85
Sprint early 03 51 | 51 68 | 60 83 | 52 75 | 54 70 | 59 84 | 53 67
Sprint early 04 59 | 59 77 | 68 86 | 60 76 | 63 80 | 67 84 | 61 85
Sprint early 05 58 | 58 74 | 65 86 | 59 77 | 60 79 | 63 86 | 60 84
Sprint early 06 54 | 53 69 | 59 81 | 55 79 | 57 73 | 58 75 | 56 77
Sprint early 07 56 | 56 75 | 62 81 | 58 79 | 60 75 | 61 85 | 57 78
Sprint early 08 56 | 56 76 | 59 79 | 58 79 | 59 80 | 58 82 | 57 78
Sprint early 09 55 | 55 82 | 59 79 | 57 76 | 59 80 | 61 78 | 56 80
Sprint early 10 52 | 52 67 | 58 76 | 54 73 | 55 75 | 58 78 | 53 76
Sprint hidden 01 32 | 32 48 | 45 67 | 34 57 | 43 60 | 46 65 | 33 55
Sprint hidden 02 32 | 32 51 | 41 59 | 34 55 | 37 53 | 44 68 | 33 51
Sprint hidden 03 62 | 62 79 | 74 92 | 63 86 | 71 90 | 78 96 | 62 84
Sprint hidden 04 66 | 66 81 | 79 102 | 67 91 | 80 96 | 78 100 | 66 91
Sprint hidden 05 59 | 58 73 | 68 90 | 60 79 | 63 84 | 69 88 | 59 77
Sprint hidden 06 134 | 128 146 | 164 187 | 131 145 | 203 219 | 169 188 | 132 150
Sprint hidden 07 153 | 154 172 | 181 204 | 154 175 | 197 215 | 187 210 | 155 174
Sprint hidden 08 204 | 201 219 | 246 265 | 205 225 | 267 286 | 240 260 | 206 224
Sprint hidden 09 338 | 337 353 | 372 390 | 339 363 | 274 291 | 372 389 | 340 365
Sprint hidden 10 306 | 306 324 | 328 347 | 307 330 | 347 368 | 322 342 | 308 324
Sprint late 01 37 | 37 53 | 50 68 | 38 57 | 46 66 | 52 72 | 39 57
Sprint late 02 42 | 41 61 | 53 74 | 43 61 | 50 71 | 56 80 | 44 67
Sprint late 03 48 | 45 65 | 57 80 | 50 73 | 57 76 | 60 77 | 49 72
Sprint late 04 75 | 71 87 | 88 108 | 75 95 | 106 127 | 95 115 | 74 99
Sprint late 05 44 | 46 66 | 52 65 | 46 68 | 53 74 | 57 74 | 46 70
Sprint late 06 42 | 42 60 | 46 65 | 44 68 | 45 64 | 52 70 | 43 62
Sprint late 07 42 | 44 63 | 51 69 | 46 69 | 62 81 | 55 78 | 44 68
Sprint late 08 17 | 17 36 | 17 37 | 19 41 | 19 39 | 19 40 | 17 39
Sprint late 09 17 | 17 33 | 18 37 | 19 42 | 19 40 | 17 40 | 17 40
Sprint late 10 43 | 43 59 | 56 74 | 45 67 | 56 74 | 54 71 | 43 62
Sprint hint 01 78 | 73 92 | 85 103 | 77 91 | 103 119 | 90 108 | 77 99
Sprint hint 02 47 | 43 59 | 57 76 | 48 68 | 61 80 | 56 81 | 45 68
Sprint hint 03 57 | 49 67 | 74 96 | 52 75 | 79 97 | 69 93 | 53 66
Medium early 01 240 | 245 263 | 262 284 | 247 267 | 247 262 | 280 305 | 250 269
Medium early 02 240 | 243 262 | 263 280 | 247 270 | 247 263 | 281 301 | 250 273
Medium early 03 236 | 239 256 | 261 281 | 244 264 | 244 262 | 287 311 | 245 269
Medium early 04 237 | 245 262 | 259 278 | 242 261 | 242 258 | 278 297 | 247 272
Medium early 05 303 | 310 326 | 331 351 | 310 332 | 310 329 | 330 351 | 313 338
Medium hidden 01 123 | 143 159 | 190 210 | 157 180 | 157 177 | 410 429 | 192 214
Medium hidden 02 243 | 230 248 | 286 306 | 256 277 | 256 273 | 412 430 | 266 284
Medium hidden 03 37 | 53 70 | 66 84 | 56 78 | 56 69 | 182 203 | 61 81
Medium hidden 04 81 | 85 104 | 102 119 | 95 119 | 95 114 | 168 191 | 100 124
Medium hidden 05 130 | 182 201 | 202 224 | 178 202 | 178 197 | 520 545 | 194 214
Medium late 01 157 | 176 195 | 207 227 | 175 198 | 175 194 | 234 257 | 179 194
Medium late 02 18 | 30 45 | 53 76 | 32 52 | 32 53 | 49 67 | 35 55
Medium late 03 29 | 35 52 | 71 90 | 39 60 | 39 59 | 59 78 | 44 66
Medium late 04 35 | 42 58 | 66 83 | 49 57 | 49 70 | 71 89 | 48 70
Medium late 05 107 | 129 149 | 179 199 | 135 156 | 135 156 | 272 293 | 141 164
Medium hint 01 40 | 42 62 | 70 90 | 49 71 | 49 65 | 64 82 | 50 71
Medium hint 02 84 | 91 107 | 142 162 | 95 116 | 95 115 | 133 158 | 96 115
Medium hint 03 129 | 135 153 | 188 209 | 141 161 | 141 152 | 187 208 | 148 166


Table 7: Continued.

Instances Optimal value | MODBCO Best Worst | M1 Best Worst | M2 Best Worst | M3 Best Worst | M4 Best Worst | M5 Best Worst

Long early 01 197 | 194 209 | 241 259 | 200 222 | 200 221 | 339 362 | 220 243
Long early 02 219 | 228 245 | 276 293 | 232 254 | 232 253 | 399 419 | 253 275
Long early 03 240 | 240 258 | 268 291 | 243 257 | 243 262 | 349 366 | 251 273
Long early 04 303 | 303 321 | 336 356 | 306 324 | 306 320 | 411 430 | 319 342
Long early 05 284 | 284 300 | 326 347 | 287 310 | 287 308 | 383 403 | 301 321
Long hidden 01 346 | 389 407 | 444 463 | 403 422 | 403 421 | 4466 4488 | 422 442
Long hidden 02 90 | 108 126 | 132 150 | 120 139 | 120 139 | 1071 1094 | 128 148
Long hidden 03 38 | 48 63 | 61 78 | 54 72 | 54 75 | 163 181 | 54 79
Long hidden 04 22 | 27 45 | 49 71 | 32 54 | 32 51 | 113 132 | 36 58
Long hidden 05 41 | 55 71 | 78 99 | 59 81 | 59 70 | 139 157 | 60 83
Long late 01 235 | 249 267 | 290 310 | 260 278 | 260 276 | 588 606 | 262 286
Long late 02 229 | 261 280 | 295 318 | 265 278 | 265 281 | 577 595 | 263 281
Long late 03 220 | 259 275 | 307 325 | 264 285 | 264 285 | 567 588 | 272 295
Long late 04 221 | 257 292 | 304 323 | 263 284 | 263 281 | 604 627 | 276 294
Long late 05 83 | 92 107 | 142 161 | 104 122 | 104 125 | 329 349 | 118 138
Long hint 01 31 | 40 58 | 53 73 | 44 67 | 44 65 | 126 150 | 50 72
Long hint 02 17 | 29 47 | 40 62 | 32 55 | 32 51 | 122 145 | 36 61
Long hint 03 53 | 79 137 | 117 135 | 85 104 | 85 101 | 278 303 | 102 123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the difference between any two methods as significant or nonsignificant; it ranks the methods by mean value in increasing or decreasing order and groups methods whose differences are not significant.
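The one-way ANOVA decomposition behind Tables 8 and 10 can be reproduced with a short calculation. This is a generic textbook sketch, not the authors' analysis script; converting the F statistic into a p-value additionally requires the F-distribution (e.g., a table or a statistics library).

```python
def one_way_anova(*groups):
    """One-way ANOVA: returns (F statistic, df_between, df_within).

    Decomposes total variation into between-group and within-group
    sums of squares, as reported in the ANOVA tables.
    """
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

With six algorithms over 69 instances each, `df_between` is 5 and `df_within` is 408, matching the degrees of freedom reported in Tables 8 and 10.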

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with that of other optimization algorithms for solving the NRP using the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. The numbers in the table refer to the best solutions obtained using the corresponding algorithms. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test

Source factor: best value
Source Sum of squares df Mean square F Sig.
Between groups 1061949 5 212389.8 3.620681 0.003
Within groups 23933354 408 58660.18
Total 24995303 413

(b) DMRT test

Duncan test: best value
Method N Subset 1 Subset 2 (alpha = 0.05)
MODBCO 69 120.2319
M2 69 124.1304
M5 69 128.0870
M3 69 129.3478
M1 69 143.1594
M4 69 — 263.5507
Sig. 0.629 1.000

have considered 69 datasets of diverse sizes. MODBCO achieved the best result on 34 of the 69 instances.

The statistical analyses (ANOVA and DMRT) with respect to best value are shown in Table 8. The significance values are less than 0.05, which shows that the null hypothesis is rejected. A significant difference between the


Table 9: Experimental results with respect to error rate (%).

Case MODBCO M1 M2 M3 M4 M5
Case 1 −0.01 11.54 2.54 5.77 9.41 2.88
Case 2 −0.73 20.16 1.69 19.81 22.35 0.83
Case 3 −0.47 18.27 5.63 23.24 24.72 2.26
Case 4 −9.65 20.03 −2.64 33.48 18.53 −4.18
Case 5 0.00 9.57 2.73 2.73 16.31 3.93
Case 6 −2.57 46.37 27.71 27.71 220.44 40.62
Case 7 28.00 105.40 37.98 37.98 116.36 45.82
Case 8 3.93 63.26 14.97 14.97 54.43 18.00
Case 9 0.52 17.14 2.15 2.15 54.04 8.61
Case 10 15.55 69.70 36.25 36.25 652.47 43.25
Case 11 9.16 40.08 18.13 18.13 185.92 23.41
Case 12 49.56 109.01 63.52 63.52 449.54 88.50

Figure 7: Performance analysis with respect to error rate (%) for Cases 1-12 of the INRC2010 instances, comparing MODBCO with M1-M5.

various optimization algorithms is observed. The DMRT test identifies the homogeneous groups; two homogeneous groups are formed among the competitor algorithms with respect to best value.

6.4.2. Error Rate. The evaluation based on error rate shows that the proposed MODBCO yields a lower error rate than the competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; 18 of the 33 sprint-type instances achieved a zero error rate. For the sprint-type datasets, 88% of the instances attained a low error rate; for the medium and large datasets, the corresponding figures are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than that specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they fail to find the optimal solution and get trapped in local optima. The error rates (%) obtained using MODBCO and the other algorithms are shown in Figure 7.

Figure 8: Performance analysis with respect to Average Convergence for Cases 1-12 of the INRC2010 instances, comparing MODBCO with M1-M5.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

                  Sum of squares   df    Mean square    F          Sig.
Between groups    6386.809         5     1277.361796    15.26182   0.000
Within groups     34148.20         408   83.69657384
Total             40535.01         413

(b) DMRT test (Duncan test: error rate; subsets for alpha = 0.05)

Method    N    Subset 1    Subset 2
MODBCO    69   5.402238
M2        69   13.77936
M5        69   17.31724
M3        69   20.99903
M1        69   36.49033
M4        69               121.1591
Sig.           0.07559     1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, indicating rejection of the null hypothesis. Thus, there is a significant difference in error rate across the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed from the different optimization algorithms with respect to the error rate.
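The one-way ANOVA reported in Table 10(a) can be sketched in pure Python; the three groups below are illustrative stand-ins for per-instance error rates, not the paper's full data, and `one_way_anova_F` is a hypothetical helper name (SciPy's `stats.f_oneway` computes the same F plus a p-value):

```python
def one_way_anova_F(groups):
    # F = (between-group mean square) / (within-group mean square)
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

modbco = [-0.01, -0.73, -0.47, 0.00, 0.52]   # near-zero error rates
m1 = [11.54, 20.16, 18.27, 9.57, 17.14]
m4 = [9.41, 22.35, 24.72, 16.31, 54.04]
F = one_way_anova_F([modbco, m1, m4])
# An F well above the F(2, 12) critical value (~3.89 at alpha = 0.05)
# leads to rejecting the null hypothesis of equal group means.
print(F)
```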

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances and an 82% convergence rate on medium instances; for longer instances, it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
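The exact formula behind this metric is not spelled out in the text; a minimal sketch, assuming convergence is measured as 100·(1 − (average − optimal)/optimal) for a minimised penalty (which gives 100 at the optimum and goes negative once the population average drifts past twice the optimal penalty, matching the negative entries in Table 11), could look like:

```python
def average_convergence(fitnesses, optimal):
    # Assumed reading: how close the population's average penalty
    # sits to the optimal penalty, as a percentage.
    avg = sum(fitnesses) / len(fitnesses)
    return 100.0 * (1.0 - (avg - optimal) / optimal)

population = [56, 58, 60, 62]   # penalties of a final population (illustrative)
print(average_convergence(population, 56))
```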

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%).

Case      MODBCO    M1        M2        M3       M4         M5
Case 1    87.01     70.84     69.79     72.40    62.59      66.87
Case 2    91.96     65.88     76.93     64.19    56.88      77.21
Case 3    83.40     53.63     47.23     38.23    30.07      47.44
Case 4    99.02     62.96     77.23     45.94    53.14      77.20
Case 5    96.27     86.49     91.10     92.71    77.29      89.01
Case 6    79.49     42.52     52.87     59.08    -139.09    39.82
Case 7    56.34     -32.73    24.52     26.23    -54.91     9.97
Case 8    84.00     21.72     61.67     68.51    22.95      58.22
Case 9    96.98     78.81     91.68     92.59    39.78      84.36
Case 10   70.16     8.16      29.97     37.66    -584.44    18.54
Case 11   84.58     54.10     73.49     74.40    -94.85     66.95
Case 12   15.13     -46.99    -22.73    -9.46    -411.18    -53.43

[Figure 9: Performance analysis with respect to Average Standard Deviation, comparing MODBCO with M1-M5 on NRP instances Cases 1-12 (four panels).]

in the mean values of convergence across the different optimization algorithms. The ANOVA test depicts rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.

The statistical test results for Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of standard deviation across the different optimization algorithms. The ANOVA test proves that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


[Figure 10: Performance analysis with respect to Convergence Diversity, comparing MODBCO with M1-M5 on NRP instances Cases 1-12 (four panels).]

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    71264.2475       5     14252.8495    15.47047   0.0000
Within groups     375887.826       408   921.29369
Total             447152.073       413

(b) DMRT test (Duncan test: Average Convergence; subsets for alpha = 0.05)

Method    N    Subset 1    Subset 2
M4        69   -47.6945
M1        69               46.4245
M5        69               53.6879
M3        69               57.6296
M2        69               59.5103
MODBCO    69               81.6997
Sig.           1.00        0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate together help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
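Taken literally, this metric reduces to a best-minus-worst range over the population's convergence values; a minimal sketch with illustrative numbers:

```python
def convergence_diversity(convergences):
    # Gap between the best and the worst convergence in the population
    return max(convergences) - min(convergences)

print(convergence_diversity([87.0, 70.8, 69.8, 62.6]))
```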

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test groups the various algorithms based on their mean values: there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO      M1          M2          M3          M4          M5
Case 1    6.501382    8.551285    9.83255     8.645203    10.57402    9.239032
Case 2    5.275281    8.170864    11.33125    8.472083    10.55361    7.382535
Case 3    5.947749    8.474928    8.898058    8.337811    11.93913    8.033767
Case 4    5.270417    5.310443    10.24612    9.50821     9.092851    8.049586
Case 5    7.035945    9.400025    7.790991    10.28423    13.45243    11.46945
Case 6    7.15779     10.13578    9.109779    9.995431    13.54185    8.020126
Case 7    7.502947    9.069255    9.974609    8.341374    11.50138    10.60612
Case 8    7.861593    9.799198    7.447106    10.05138    11.63378    10.61275
Case 9    9.310626    13.83453    9.437943    12.13188    13.56668    10.54568
Case 10   10.42404    15.14141    11.87235    10.59549    14.15797    12.96181
Case 11   10.25454    14.68924    10.33091    10.30386    11.21071    13.3541
Case 12   10.6846     13.49546    10.75478    15.40791    14.42514    11.3357

[Figure 11: Performance analysis with respect to Average Cost Diversion, comparing MODBCO with M1-M5 on NRP instances Cases 1-12.]

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances still yield the optimum value; the larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

                  Sum of squares   df    Mean square   F         Sig.
Between groups    697.4407         5     139.4881      13.6322   0.000
Within groups     4174.76          408   10.23226
Total             4872.201         413

(b) DMRT test (Duncan test: Average Standard Deviation; subsets for alpha = 0.05)

Method    N    Subset 1   Subset 2   Subset 3
MODBCO    69   7.4743
M2        69              9.615152
M3        69              9.677027
M5        69              9.729477
M1        69              10.13242
M4        69                         11.9316
Sig.           1.00       0.394      1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm, MODBCO. Various existing algorithms are chosen to solve the NRP and are compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with those of the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1       M2       M3       M4       M5
Case 1    30.00    35.25    37.28    32.83    37.94    38.83
Case 2    22.96    27.92    29.18    23.85    27.95    27.62
Case 3    53.05    56.21    64.66    59.29    60.79    65.05
Case 4    29.99    34.03    33.62    30.84    39.46    33.32
Case 5    7.01     7.87     8.33     6.71     8.77     9.29
Case 6    20.89    22.21    26.98    19.29    25.45    24.87
Case 7    43.69    54.66    48.13    55.47    50.24    56.18
Case 8    27.67    30.03    31.83    24.11    30.35    29.69
Case 9    6.89     8.10     8.22     8.04     8.24     9.10
Case 10   37.10    44.29    45.53    38.95    41.91    49.98
Case 11   11.43    11.64    10.81    11.36    11.91    12.15
Case 12   91.13    75.96    81.78    69.90    86.63    85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    5144.758         5     1028.952      9.287168   0.000
Within groups     45203.48         408   110.7928
Total             50348.24         413

(b) DMRT test (Duncan test: Convergence Diversity; subsets for alpha = 0.05)

Method    N    Subset 1     Subset 2    Subset 3
M3        69   18.363232
MODBCO    69   18.42029
M1        69                19.68116
M2        69                20.57971
M4        69                20.68116
M5        69                            21.2464
Sig.           0.919        0.096       1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and of the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO attains the best values on the performance metrics more consistently than the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained a lower worst value in addition to the best solution. Dividing the datasets by size into smaller, medium, and large, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reached 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any bias; thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for global search and the Modified Nelder-Mead Method for local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with that of five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus, our algorithm contributes a new deterministic search and an effective heuristic approach to solving the NRP. MODBCO thereby outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1       M2       M3       M4        M5
Case 1    0.00     6.40     1.40     3.20     5.20      1.60
Case 2    -1.00    21.20    0.80     19.60    21.90     0.80
Case 3    -0.40    8.10     1.80     10.60    11.00     0.90
Case 4    -5.67    11.33    -1.67    20.33    11.00     -2.33
Case 5    5.20     24.00    6.80     6.80     40.00     9.80
Case 6    15.80    46.40    25.60    25.60    215.60    39.80
Case 7    13.20    46.00    16.80    16.80    67.80     20.20
Case 8    5.00     49.00    10.67    10.67    43.67     13.67
Case 9    1.20     40.80    5.00     5.00     127.60    20.20
Case 10   18.00    45.40    26.20    26.20    100.00    32.60
Case 11   26.00    70.00    33.60    33.60    335.40    40.60
Case 12   15.67    36.33    20.00    20.00    141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    1061.949         5     212.3898      4.919985   0.000
Within groups     17612.867        408   43.16879
Total             18674.816        413

(b) DMRT test (Duncan test: Average Cost Diversion; subsets for alpha = 0.05)

Method    N    Subset 1    Subset 2
MODBCO    69   6.202899
M2        69   10.10145
M5        69   14.05797
M3        69   15.31884
M1        69   29.13043
M4        69               149.5217
Sig.           0.573558    1

Optimization in solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapt the proposed MODBCO to various scheduling and timetabling problems;

(b) explore infeasible solutions to imitate the optimal solution;

(c) further tune the parameters of the proposed algorithm and measure the exploitation and exploration strategy;

(d) investigate the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation (Taipei, Taiwan, December 2000), pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


(2) The male drone bees mate with the queen and are then discarded from the colonies.

(3) The remaining female bees in the hive are called worker bees, and they are the building blocks of the hive. The responsibilities of the worker bees are to feed, guard, and maintain the honeycomb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy is exhausted. After reaching the hive, scout bees share their information and start to explore rich food source locations together with forager bees. A scout bee's information includes the direction, quality, quantity, and distance of the food source it found. The information about a food source is communicated to foragers through dance, of which there are two types: the round dance and the waggle dance. The round dance provides the direction of the food source when the distance is small. The waggle dance indicates both the position and the direction of the food source, and the distance can be inferred from the speed of the dance: a greater speed indicates a smaller distance, while the quantity of the food depends on the wriggling of the bee. This exchange of information among hive mates builds collective knowledge; forager bees silently observe the behavior of scout bees to acquire knowledge about the directions and properties of the food sources.

The group decision process of honey bees serves to find the best food source and nest site. The decision-making process is based on the swarming behavior of the honey bee. Swarming is the process in which the queen bee and half of the worker bees leave their hive to found a new colony, while the remaining worker bees and a daughter queen remain in the old hive to monitor the waggle dance. After leaving their parental hive, swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony; it provides precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source, allowing the colony to gather food quickly and efficiently. Swarm bees can reach the decision on the best nest site in two ways: consensus and quorum. Consensus means taking group agreement into account, whereas quorum means deciding as soon as the number of bee votes for a site reaches a threshold value.
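The quorum rule described above can be sketched as a simple threshold on vote counts; the sites, votes, and threshold below are illustrative:

```python
def quorum_decision(votes, threshold):
    # Count votes per nest site; commit as soon as any site reaches
    # the quorum threshold.
    counts = {}
    for site in votes:
        counts[site] = counts.get(site, 0) + 1
        if counts[site] >= threshold:
            return site
    return None   # no quorum yet: deliberation continues (consensus path)

print(quorum_decision(["A", "B", "A", "A"], 3))
```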

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds its neighboring solution from the current path. The algorithm alternates a forward pass and a backward pass. In the forward pass, every bee explores the neighborhood of its current solution through constructive and improving moves; all bees in the hive begin with a constructive move, after which the local search starts. In the backward pass, the bees share the objective values obtained in the forward pass, and the bees with higher priority are used to discard all nonimproving moves. The bees then either continue to explore in the next forward pass or continue the same process within the neighborhood. The flowchart

[Figure 2: Flowchart of the BCO algorithm: Initialization → Forward pass → Construction move → Backward pass → Update the best solution → Stopping criteria (if not met, return to the forward pass).]

for BCO is shown in Figure 2. BCO is proficient in solving combinatorial optimization problems by creating colonies of a multiagent system; the pseudocode for BCO is described in Algorithm 1. The bee colony system provides standard, well-organized, and well-coordinated multitasking teamwork [33].

4.2. Modified Nelder-Mead Method. The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments and filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents through the Modified Nelder-Mead Method (MNMM), and each agent in a fragment passes information about the optimized point using MNMM. The best points obtained by MNMM are then submitted to the decision-making process of the honey bees, which chooses the best solution. The algorithm is simplex-based: a D-dimensional simplex is initialized with D + 1 vertices, so in two dimensions it forms a triangle and in three dimensions a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is chosen


Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward pass. For every bee:
    (2.1) set i = 1;
    (2.2) evaluate all possible construction moves;
    (2.3) based on the evaluation, choose one move using the roulette wheel;
    (2.4) i = i + 1; if (i <= N), go to step (2.2),
    where i is the counter for construction moves and N is the number of construction moves during one forward pass.
(3) Return to hive.
(4) Backward pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) Calculate the probability (or use logical reasoning) of continuing with the computed solution and becoming a recruiter bee.
(7) For every follower, choose the new solution from the recruiters.
(8) If the stopping criteria are not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1: Pseudocode of BCO.
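The forward/backward structure of Algorithm 1 can be sketched on a toy penalty-minimisation problem; the objective, parameter values, and recruitment rule below are illustrative placeholders, not the paper's NRP model:

```python
import random

def bco(num_bees=10, D=8, passes=40, seed=1):
    rng = random.Random(seed)
    penalty = lambda sol: sum(sol)   # toy objective: minimise total penalty
    # every bee starts from a random complete assignment of D slots
    bees = [[rng.randint(0, 2) for _ in range(D)] for _ in range(num_bees)]
    best = min(bees, key=penalty)[:]
    for _ in range(passes):
        # forward pass: each bee makes one construction move, chosen by
        # roulette wheel with weights biased toward low-penalty values
        for sol in bees:
            i = rng.randrange(D)
            sol[i] = rng.choices([0, 1, 2], weights=[3, 2, 1])[0]
        # backward pass: rank bees by objective; the worse half abandons
        # its solution and follows (copies) a recruiter from the better half
        bees.sort(key=penalty)
        recruiters = bees[: num_bees // 2]
        bees[num_bees // 2:] = [rng.choice(recruiters)[:]
                                for _ in range(num_bees - num_bees // 2)]
        if penalty(bees[0]) < penalty(best):
            best = bees[0][:]
    return best, penalty(best)

best, cost = bco()
print(cost)
```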

with the maximum value of the computed objective function. To form the simplex, new vertex function values are computed. The method uses four procedures, namely reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

The simplex operations update each vertex closer to its optimal solution; the vertices are ordered based on their fitness values. The best vertex is A_b, the second best vertex is A_s, and the worst vertex is A_w, determined from the objective function. Let A = (x, y) be a vertex of the triangle of food source points; A_b = (x_b, y_b), A_s = (x_s, y_s), and A_w = (x_w, y_w) are the positions of the food source points, that is, the local optimal points. The objective function values for A_b, A_s, and A_w are calculated from (23) towards the food source points.

The objective function used to construct the simplex for the local search with MNMM is formulated as

f(x, y) = x^2 - 4x + y^2 - y - xy. (23)

Based on the objective function values, the vertex food points are ordered in ascending order together with their corresponding honey bee agents; the obtained values satisfy A_b <= A_s <= A_w for the honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail below.

(1) In the simplex triangle, the reflection coefficient α, expansion coefficient γ, contraction coefficient β, and shrinkage coefficient δ are initialized.

(2) The objective function for the simplex triangle vertices is calculated and the vertices are ordered. The best vertex with the lowest objective value is A_b, the second best vertex is A_s, and the worst vertex is A_w; these vertices are ordered based on the objective function as A_b ≤ A_s ≤ A_w.

(3) The first two best vertices, namely A_b and A_s, are selected, and the construction proceeds by calculating the midpoint of the line segment joining the two best vertices, that is, food positions. The objective function decreases as the honey bee agent associated with the worst vertex moves towards the best and second best vertices; the value decreases as the agent moves from A_w towards A_b and from A_w towards A_s. The midpoint vertex A_m of the line joining the best and second best vertices is calculated using

A_m = (A_b + A_s) / 2. (24)

(4) A reflection vertex A_r is generated as the reflection of the worst point A_w. The objective function value f(A_r) is calculated and compared with the worst vertex value f(A_w). If f(A_r) < f(A_w), proceed with step (5). The reflection vertex is calculated using

A_r = A_m + α(A_m − A_w), where α > 0. (25)

(5) The expansion process starts when the objective function value at the reflection vertex A_r is less than that at the worst vertex A_w, that is, f(A_r) < f(A_w), and the line segment through A_w and A_r is further extended to A_e. The vertex A_e is calculated by (26). If the objective function value at A_e is less than that at the reflection vertex, f(A_e) < f(A_r), then the expansion is accepted and the honey bee agent has found a better food position than the reflection point.

A_e = A_r + γ(A_r − A_m), where γ > 1. (26)

(6) The contraction process is carried out when f(A_r) < f(A_s) and f(A_r) ≤ f(A_b) for replacing A_b with

Computational Intelligence and Neuroscience 9

Figure 3: Nelder-Mead operations. (a) Simplex triangle; (b) reflection; (c) expansion; (d) contraction (A_w < A_r); (e) contraction (A_r < A_w); (f) shrinkage.

A_r. If f(A_r) > f(A_w), then the direct contraction, without the replacement of A_b with A_r, is performed. The contraction vertex A_c can be calculated using

A_c = βA_r + (1 − β)A_m, where 0 < β < 1. (27)

If f(A_r) ≤ f(A_b), the contraction can be done with A_c replacing A_w; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process in step (6) fails; it is done by shrinking all the vertices of the simplex triangle towards the best vertex A_b using (28). When the objective function values of the reflection and contraction phases are not less than that of the worst point, the vertices A_s and A_w must be shrunk towards A_b; thus the vertices of smaller value form a new simplex triangle with the other two best vertices.

A_i = δA_i + A_1(1 − δ), where 0 < δ < 1. (28)

(8) The calculations are stopped when the termination condition is met.

Algorithm 2 describes the pseudocode of the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is explained in Figure 5.
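A compact, self-contained sketch of one Nelder-Mead loop with the four operations of steps (1)-(8) is given below. It uses the common textbook coefficients α = 1, γ = 2, β = 0.5, δ = 0.5 and a simplified acceptance logic; the MNMM of Algorithm 2 adds the bee-specific ordering and bookkeeping described above, so this is an illustration, not the authors' exact method:

```python
def nelder_mead(f, x0, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, iters=200):
    """Minimize f over R^n with reflection, expansion, contraction, shrinkage."""
    n = len(x0)
    # Initial simplex: x0 plus one unit-perturbed point per coordinate
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += 1.0
        simplex.append(p)

    def away(a, b, t):
        # componentwise a + t * (a - b)
        return [ai + t * (ai - bi) for ai, bi in zip(a, b)]

    for _ in range(iters):
        simplex.sort(key=f)                      # A_1 best ... A_{n+1} worst
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        xr = away(centroid, worst, alpha)        # reflection, eq. (25)
        if f(xr) < f(simplex[0]):
            xe = away(xr, centroid, gamma)       # expansion, eq. (26)
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr                     # accept the reflection
        else:
            # contraction, eq. (27): A_c = beta*A_r + (1 - beta)*A_m
            xc = [beta * ri + (1 - beta) * ci for ri, ci in zip(xr, centroid)]
            if f(xc) < min(f(xr), f(worst)):
                simplex[-1] = xc
            else:
                # shrinkage, eq. (28): A_i = A_1 + delta*(A_i - A_1)
                best = simplex[0]
                simplex = [[b + delta * (pi - b) for b, pi in zip(best, p)]
                           for p in simplex]
    simplex.sort(key=f)
    return simplex[0]

# Minimizing the test objective (23); the known minimum is f(3, 2) = -7
obj = lambda p: p[0]**2 - 4*p[0] + p[1]**2 - p[1] - p[0]*p[1]
best = nelder_mead(obj, [0.0, 0.0])
```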

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems; it is inspired by the natural foraging behavior of bees. The algorithm consists of two steps, the forward and the backward pass. During the forward pass, bees start to explore the neighborhood of their current solutions and find all possible ways. In the backward pass, bees return to the hive and share the values of the objective function of their current solutions. The nectar amount is calculated using a probability


Figure 4: Bees search movement based on MNMM.

function, and the solution is advertised; the bee which has the better solution is given higher priority. The remaining bees, based on the probability value, decide whether to explore a new solution or proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained by using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total varieties of individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of food particles by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points by the natural behavior of bees called the waggle dance. When all the information about the best food is shared, the best among the optimized points is chosen using a decision-making process of honey bees called the consensus and quorum method [34, 35].
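The forward/backward-pass loop with recruiters and followers can be sketched as follows. This is a deliberately simplified illustration: the MNMM local search is replaced by a greedy Gaussian perturbation, and the waggle-dance recruitment is reduced to followers copying a randomly chosen bee from the better half; all names and constants are illustrative:

```python
import random

def bee_colony(f, dim, n_bees=20, iters=150, seed=1):
    rng = random.Random(seed)
    # Each bee starts from its own random point of the search space
    bees = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_bees)]
    for _ in range(iters):
        # Forward pass: every bee explores a neighbor of its current solution
        for i, b in enumerate(bees):
            cand = [v + rng.gauss(0, 0.5) for v in b]
            if f(cand) < f(b):          # keep the neighbor only if it improves
                bees[i] = cand
        # Backward pass: sort by objective; the best half act as recruiters
        bees.sort(key=f)
        half = len(bees) // 2
        for i in range(half, len(bees)):
            # follower abandons its solution and copies a recruiter
            bees[i] = list(bees[rng.randrange(half)])
    return min(bees, key=f)

# Example on the test objective (23); the known minimum is f(3, 2) = -7
obj = lambda p: p[0]**2 - 4*p[0] + p[1]**2 - p[1] - p[0]*p[1]
best = bee_colony(obj, dim=2)
```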

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in a local environment; and agents attempt to perform their goal. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the parameters fixed.

(1) Consider "e" number of independent parameters to calculate the objective function. The range of the g-th parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value of the g-th parameter chosen.

(2) Thus the objective function can be formulated over e axes; each axis contains the total range of a single parameter with different dimensions.

(3) Each axis is divided into smaller parts, each called a step. The g-th axis can thus be divided into n_g steps, each of length L_g, where g = 1 to e. The relationship between n_g and L_g is given by

n_g = (Q_gi − Q_gf) / L_g. (29)

(4) Then each axis is divided into branches; for each branch, g number of branches will form an


Modified Nelder-Mead Method for directed honey bee food search
(1) Initialization:
    A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1;
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage;
    f is the objective function to be minimized.
(2) Ordering:
    Order the vertices in the simplex from the lowest objective function value f(A_1) to the highest value f(A_{n+1}), ordered as A_1 ≤ A_2 ≤ ... ≤ A_{n+1}.
(3) Midpoint:
    Calculate the midpoint of the n best vertices in the simplex: A_m = Σ(A_i / n), where i = 1, 2, ..., n.
(4) Reflection process:
    Calculate the reflection point A_r by A_r = A_m + α(A_m − A_{n+1}).
    if f(A_1) ≤ f(A_r) ≤ f(A_n) then A_{n+1} ← A_r and go to step (8) end if
(5) Expansion process:
    if f(A_r) ≤ f(A_1) then calculate the expansion point using A_e = A_r + γ(A_r − A_m) end if
    if f(A_e) < f(A_r) then A_{n+1} ← A_e and go to step (8)
    else A_{n+1} ← A_r and go to step (8) end if
(6) Contraction process:
    if f(A_n) ≤ f(A_r) ≤ f(A_{n+1}) then compute the outside contraction by A_c = βA_r + (1 − β)A_m end if
    if f(A_r) ≥ f(A_{n+1}) then compute the inside contraction by A_c = βA_{n+1} + (1 − β)A_m end if
    if f(A_r) ≥ f(A_n) then the contraction is done between A_m and the better of A_r and A_{n+1} end if
    if f(A_c) < f(A_r) then A_{n+1} ← A_c and go to step (8) else go to step (7) end if
    if f(A_c) ≤ f(A_{n+1}) then A_{n+1} ← A_c and go to step (8) else go to step (7) end if
(7) Shrinkage process:
    Shrink towards the best solution with new vertices by A_i = δA_i + A_1(1 − δ), where i = 2, ..., n + 1.
(8) Stopping criteria:
    Order and relabel the new vertices of the simplex based on their objective function values and go to step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = ∏_{g=1}^{e} n_g. (30)

(5) The starting point of the agent in the environment, which is one point inside the volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]. (31)
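Steps (1)-(5) of the partitioning can be sketched directly. The per-volume starting points below generalize the midpoint rule (31) to every volume rather than only the overall range; the two parameters and their ranges are illustrative values:

```python
from itertools import product

# Parameter ranges [Q_gi, Q_gf] and step lengths L_g for e = 2 parameters
Q = [(0.0, 10.0), (0.0, 6.0)]   # (initial, final) value per parameter
L = [2.0, 3.0]                  # step length per axis

# Eq. (29): number of steps per axis (absolute value, since Q_gi < Q_gf here)
n = [abs(qi - qf) / lg for (qi, qf), lg in zip(Q, L)]   # [5.0, 2.0]

# Eq. (30): total number of e-dimensional volumes
N_v = 1
for ng in n:
    N_v *= int(ng)              # 5 * 2 = 10 volumes

# Midpoint of each volume: the starting point of one bee's exploration
starts = []
for idx in product(*(range(int(ng)) for ng in n)):
    starts.append(tuple(Q[g][0] + (idx[g] + 0.5) * L[g] for g in range(len(Q))))

print(N_v, len(starts))   # 10 10
```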

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done by a process of decision-making to produce a unified decision. They follow a distributed decision-making process to find the neighbor nest site for their food particles. The pseudocode for the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for food particles, report on the quality of the food site by a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


Figure 5: Workflow of the Modified Nelder-Mead Method.


Multi-Objective Directed Bee Colony Optimization
(1) Initialization:
    f(x) is the objective function to be minimized.
    Initialize e number of parameters and step lengths L_g, where g = 0 to e;
    initialize the initial and final values of the parameters as Q_gi and Q_gf.
    ** Solution representation **
    The solutions are represented as binary values, generated as follows:
    for each solution i = 1, ..., n:
        X_i = {x_i1, x_i2, ..., x_id | d ∈ total days and x_id = (rand ≥ 0.29) for all d}
    end for
(2) The number of steps on each axis is calculated using n_g = (Q_gi − Q_gf) / L_g.
(3) The total number of volumes is calculated using N_v = ∏_{g=1}^{e} n_g.
(4) The midpoint of the volume, giving the starting point of the exploration, is calculated using
    [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the values of the optimized points in the vector table using [f(V_1), f(V_2), ..., f(V_{N_v})].
(7) The globally optimized point is chosen based on the bee decision-making process using the consensus and quorum method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO.

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimal cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This mathematical formulation finds the minimal optimal cases among the search solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V, which consists of e elements; each element contains the value of a parameter, and both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is the widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached and the group has started acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized points are selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on the threshold level obtained by the group decision-making process. When a solution reaches the optimal threshold level ξ_q, it is considered the final solution based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but it leads to inaccurate experimental results; the threshold value should therefore be chosen to attain less computational time with accurate experimental results.
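The consensus rule (33) and the quorum rule with threshold ξ_q can be sketched as follows (illustrative; the vector table is modeled as a plain list of (solution, objective value) pairs):

```python
def consensus(vector_table):
    # Eq. (33): globally optimized point = minimum objective over all volumes
    return min(vector_table, key=lambda entry: entry[1])

def quorum(vector_table, threshold):
    # Accept the first solution whose objective reaches the quorum threshold;
    # fall back to the consensus choice if none does
    for solution, value in vector_table:
        if value <= threshold:
            return solution, value
    return consensus(vector_table)

table = [("V1", 56.0), ("V2", 51.0), ("V3", 58.0)]
print(consensus(table))        # ('V2', 51.0)
print(quorum(table, 55.0))     # ('V2', 51.0)
```

A low threshold makes the quorum rule fall through to consensus more often, trading computation time against accuracy, as described above.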

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Here six performance metrics are considered to investigate the significance and evaluate the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage of the difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_NRP-Instance − fitness_i) / Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence is a measure to evaluate the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions; it improves the convergence time by exploring more solutions in the population. The Average Convergence is calculated using

AC = Σ_{i=1}^{r} (1 − (Avg_fitness_i − Optimal_NRP-Instance) / Optimal_NRP-Instance) × 100, (35)

where r is the number of instances in the given dataset.


Figure 6: Flowchart of MODBCO.


6.1.3. Standard Deviation. Standard deviation (SD) is the measure of the dispersion of a set of values from their mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

ASD = √(Σ_{i=1}^{r} (value obtained in each instance_i − mean value of the instance)²), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and the worst convergence rates generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best − Convergence_worst, (37)

where Convergence_best is the convergence rate of the best fitness individual and Convergence_worst is the convergence rate of the worst fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained from our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i − Cost_NRP-Instance) / total number of instances, (38)

where r is the number of instances in the given dataset.
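The metrics (34)-(38) can be computed in a few lines. The instance tuples below are illustrative; LER is computed as a fraction (the paper reports it as a percentage), and the averaging over the r instances is made explicit in AC:

```python
import math

# Per instance: (known optimal value, best fitness found, average fitness)
results = [(56, 56, 62.5), (58, 59, 66.0), (51, 51, 57.0)]
r = len(results)

# Eq. (34): Least Error Rate (as a fraction of the known optimum)
ler = sum((opt - best) / opt for opt, best, _ in results)

# Eq. (35): Average Convergence (%), averaged over the r instances
ac = sum(1 - (avg - opt) / opt for opt, _, avg in results) / r * 100

# Eq. (36)-style spread of the best values around their mean
mean_best = sum(best for _, best, _ in results) / r
asd = math.sqrt(sum((best - mean_best) ** 2 for _, best, _ in results))

# Eq. (38): Average Cost Diversion
acd = sum(best - opt for opt, best, _ in results) / r
```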

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objective of the proposed algorithm is to satisfy the multiple objectives of the NRP as follows:

(a) Minimize the total cost of the rostering problem.
(b) Satisfy all the hard constraints described in Table 1.
(c) Satisfy as many of the soft constraints described in Table 2 as possible.
(d) Enhance the resource utilization.
(e) Distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in Automated Timetabling [38]. The INRC2010 dataset is divided based on its complexity and size into three tracks, namely sprint, medium, and long datasets. Each track is divided into four types, early, late, hidden, and hint, with reference to the competition INRC2010. The first track, sprint, is the easiest and consists of 10 nurses and 33 datasets, sorted as 10 early type, 10 late type, 10 hidden type, and 3 hint type datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track

is medium, which is more complex than the sprint track; it consists of 30 to 31 nurses and 18 datasets, sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses, and consists of 18 datasets sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives the detailed description of the datasets: columns one to three index the dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" shows there is no shift off or day off for the corresponding datasets.

Table 4 shows the list of datasets used in the experiment, classified based on size. The datasets in case 1 to case 4 are smaller in size, case 5 to case 8 are considered medium in size, and the larger sized datasets are classified from case 9 to case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are done on different optimization algorithms under similar environment conditions to assess the performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Empirical evaluations set the parameters of the proposed system; appropriate parameter values are determined based on preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and the corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and help differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental results with respect to best value. For each method, the two values give the best and the worst result obtained.

Instance | Optimal | MODBCO | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56, 75 | 63, 74 | 57, 81 | 59, 75 | 58, 77 | 58, 77
Sprint early 02 | 58 | 59, 77 | 66, 89 | 59, 80 | 61, 82 | 64, 88 | 60, 85
Sprint early 03 | 51 | 51, 68 | 60, 83 | 52, 75 | 54, 70 | 59, 84 | 53, 67
Sprint early 04 | 59 | 59, 77 | 68, 86 | 60, 76 | 63, 80 | 67, 84 | 61, 85
Sprint early 05 | 58 | 58, 74 | 65, 86 | 59, 77 | 60, 79 | 63, 86 | 60, 84
Sprint early 06 | 54 | 53, 69 | 59, 81 | 55, 79 | 57, 73 | 58, 75 | 56, 77
Sprint early 07 | 56 | 56, 75 | 62, 81 | 58, 79 | 60, 75 | 61, 85 | 57, 78
Sprint early 08 | 56 | 56, 76 | 59, 79 | 58, 79 | 59, 80 | 58, 82 | 57, 78
Sprint early 09 | 55 | 55, 82 | 59, 79 | 57, 76 | 59, 80 | 61, 78 | 56, 80
Sprint early 10 | 52 | 52, 67 | 58, 76 | 54, 73 | 55, 75 | 58, 78 | 53, 76
Sprint hidden 01 | 32 | 32, 48 | 45, 67 | 34, 57 | 43, 60 | 46, 65 | 33, 55
Sprint hidden 02 | 32 | 32, 51 | 41, 59 | 34, 55 | 37, 53 | 44, 68 | 33, 51
Sprint hidden 03 | 62 | 62, 79 | 74, 92 | 63, 86 | 71, 90 | 78, 96 | 62, 84
Sprint hidden 04 | 66 | 66, 81 | 79, 102 | 67, 91 | 80, 96 | 78, 100 | 66, 91
Sprint hidden 05 | 59 | 58, 73 | 68, 90 | 60, 79 | 63, 84 | 69, 88 | 59, 77
Sprint hidden 06 | 134 | 128, 146 | 164, 187 | 131, 145 | 203, 219 | 169, 188 | 132, 150
Sprint hidden 07 | 153 | 154, 172 | 181, 204 | 154, 175 | 197, 215 | 187, 210 | 155, 174
Sprint hidden 08 | 204 | 201, 219 | 246, 265 | 205, 225 | 267, 286 | 240, 260 | 206, 224
Sprint hidden 09 | 338 | 337, 353 | 372, 390 | 339, 363 | 274, 291 | 372, 389 | 340, 365
Sprint hidden 10 | 306 | 306, 324 | 328, 347 | 307, 330 | 347, 368 | 322, 342 | 308, 324
Sprint late 01 | 37 | 37, 53 | 50, 68 | 38, 57 | 46, 66 | 52, 72 | 39, 57
Sprint late 02 | 42 | 41, 61 | 53, 74 | 43, 61 | 50, 71 | 56, 80 | 44, 67
Sprint late 03 | 48 | 45, 65 | 57, 80 | 50, 73 | 57, 76 | 60, 77 | 49, 72
Sprint late 04 | 75 | 71, 87 | 88, 108 | 75, 95 | 106, 127 | 95, 115 | 74, 99
Sprint late 05 | 44 | 46, 66 | 52, 65 | 46, 68 | 53, 74 | 57, 74 | 46, 70
Sprint late 06 | 42 | 42, 60 | 46, 65 | 44, 68 | 45, 64 | 52, 70 | 43, 62
Sprint late 07 | 42 | 44, 63 | 51, 69 | 46, 69 | 62, 81 | 55, 78 | 44, 68
Sprint late 08 | 17 | 17, 36 | 17, 37 | 19, 41 | 19, 39 | 19, 40 | 17, 39
Sprint late 09 | 17 | 17, 33 | 18, 37 | 19, 42 | 19, 40 | 17, 40 | 17, 40
Sprint late 10 | 43 | 43, 59 | 56, 74 | 45, 67 | 56, 74 | 54, 71 | 43, 62
Sprint hint 01 | 78 | 73, 92 | 85, 103 | 77, 91 | 103, 119 | 90, 108 | 77, 99
Sprint hint 02 | 47 | 43, 59 | 57, 76 | 48, 68 | 61, 80 | 56, 81 | 45, 68
Sprint hint 03 | 57 | 49, 67 | 74, 96 | 52, 75 | 79, 97 | 69, 93 | 53, 66
Medium early 01 | 240 | 245, 263 | 262, 284 | 247, 267 | 247, 262 | 280, 305 | 250, 269
Medium early 02 | 240 | 243, 262 | 263, 280 | 247, 270 | 247, 263 | 281, 301 | 250, 273
Medium early 03 | 236 | 239, 256 | 261, 281 | 244, 264 | 244, 262 | 287, 311 | 245, 269
Medium early 04 | 237 | 245, 262 | 259, 278 | 242, 261 | 242, 258 | 278, 297 | 247, 272
Medium early 05 | 303 | 310, 326 | 331, 351 | 310, 332 | 310, 329 | 330, 351 | 313, 338
Medium hidden 01 | 123 | 143, 159 | 190, 210 | 157, 180 | 157, 177 | 410, 429 | 192, 214
Medium hidden 02 | 243 | 230, 248 | 286, 306 | 256, 277 | 256, 273 | 412, 430 | 266, 284
Medium hidden 03 | 37 | 53, 70 | 66, 84 | 56, 78 | 56, 69 | 182, 203 | 61, 81
Medium hidden 04 | 81 | 85, 104 | 102, 119 | 95, 119 | 95, 114 | 168, 191 | 100, 124
Medium hidden 05 | 130 | 182, 201 | 202, 224 | 178, 202 | 178, 197 | 520, 545 | 194, 214
Medium late 01 | 157 | 176, 195 | 207, 227 | 175, 198 | 175, 194 | 234, 257 | 179, 194
Medium late 02 | 18 | 30, 45 | 53, 76 | 32, 52 | 32, 53 | 49, 67 | 35, 55
Medium late 03 | 29 | 35, 52 | 71, 90 | 39, 60 | 39, 59 | 59, 78 | 44, 66
Medium late 04 | 35 | 42, 58 | 66, 83 | 49, 57 | 49, 70 | 71, 89 | 48, 70
Medium late 05 | 107 | 129, 149 | 179, 199 | 135, 156 | 135, 156 | 272, 293 | 141, 164
Medium hint 01 | 40 | 42, 62 | 70, 90 | 49, 71 | 49, 65 | 64, 82 | 50, 71
Medium hint 02 | 84 | 91, 107 | 142, 162 | 95, 116 | 95, 115 | 133, 158 | 96, 115
Medium hint 03 | 129 | 135, 153 | 188, 209 | 141, 161 | 141, 152 | 187, 208 | 148, 166


Table 7: Continued.

Instance | Optimal | MODBCO | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194, 209 | 241, 259 | 200, 222 | 200, 221 | 339, 362 | 220, 243
Long early 02 | 219 | 228, 245 | 276, 293 | 232, 254 | 232, 253 | 399, 419 | 253, 275
Long early 03 | 240 | 240, 258 | 268, 291 | 243, 257 | 243, 262 | 349, 366 | 251, 273
Long early 04 | 303 | 303, 321 | 336, 356 | 306, 324 | 306, 320 | 411, 430 | 319, 342
Long early 05 | 284 | 284, 300 | 326, 347 | 287, 310 | 287, 308 | 383, 403 | 301, 321
Long hidden 01 | 346 | 389, 407 | 444, 463 | 403, 422 | 403, 421 | 4466, 4488 | 422, 442
Long hidden 02 | 90 | 108, 126 | 132, 150 | 120, 139 | 120, 139 | 1071, 1094 | 128, 148
Long hidden 03 | 38 | 48, 63 | 61, 78 | 54, 72 | 54, 75 | 163, 181 | 54, 79
Long hidden 04 | 22 | 27, 45 | 49, 71 | 32, 54 | 32, 51 | 113, 132 | 36, 58
Long hidden 05 | 41 | 55, 71 | 78, 99 | 59, 81 | 59, 70 | 139, 157 | 60, 83
Long late 01 | 235 | 249, 267 | 290, 310 | 260, 278 | 260, 276 | 588, 606 | 262, 286
Long late 02 | 229 | 261, 280 | 295, 318 | 265, 278 | 265, 281 | 577, 595 | 263, 281
Long late 03 | 220 | 259, 275 | 307, 325 | 264, 285 | 264, 285 | 567, 588 | 272, 295
Long late 04 | 221 | 257, 292 | 304, 323 | 263, 284 | 263, 281 | 604, 627 | 276, 294
Long late 05 | 83 | 92, 107 | 142, 161 | 104, 122 | 104, 125 | 329, 349 | 118, 138
Long hint 01 | 31 | 40, 58 | 53, 73 | 44, 67 | 44, 65 | 126, 150 | 50, 72
Long hint 02 | 17 | 29, 47 | 40, 62 | 32, 55 | 32, 51 | 122, 145 | 36, 61
Long hint 03 | 53 | 79, 137 | 117, 135 | 85, 104 | 85, 101 | 278, 303 | 102, 123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted, rejecting the alternate hypothesis.
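The F statistic behind Table 8(a) is MS_between / MS_within; with the sums of squares reported there (1061949 over 5 degrees of freedom between groups, 23933354 over 408 within), F ≈ 212389.8 / 58660.18 ≈ 3.62, and the significance 0.003 < 0.05 rejects the null hypothesis. A stdlib-only sketch of the computation on toy data:

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    k = len(groups)                                  # number of groups (algorithms)
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    ms_between = ss_between / (k - 1)                # df_between = k - 1
    ms_within = ss_within / (n - k)                  # df_within = n - k
    return ms_between / ms_within

# Two toy groups with clearly different means give a large F
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [11.0, 12.0, 13.0]])
print(f_stat)  # 150.0
```

The p-value (Sig.) would then be read from the F distribution with (k − 1, n − k) degrees of freedom.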

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods. The method ranks the methods by mean value in increasing or decreasing order and groups the methods that are not significantly different.

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP using the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. The performance is compared with previous methods; the numbers in the table refer to the best solutions obtained using the corresponding algorithms. Since the objective of the NRP is the minimization of cost, the lowest values are the best solutions attained. In the evaluation of the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source: factor best value)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    1061949          5     212389.8      3.620681   0.003
Within groups     23933354         408   58660.18
Total             24995303         413

(b) DMRT test (Duncan test: best value)

Method    N    Subset for alpha = 0.05
               1          2
MODBCO    69   120.2319
M2        69   124.1304
M5        69   128.087
M3        69   129.3478
M1        69   143.1594
M4        69              263.5507
Sig.           0.629      1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of the 69 instances.

The statistical analysis tests ANOVA and DMRT for best values are shown in Table 8. It is perceived that the significance values are less than 0.05, which shows that the null hypothesis is rejected. The significant difference between

Computational Intelligence and Neuroscience 19

Table 9: Experimental result with respect to error rate (%).

Case      MODBCO   M1       M2      M3      M4       M5
Case 1    -0.01    11.54    2.54    5.77    9.41     2.88
Case 2    -0.73    20.16    1.69    19.81   22.35    0.83
Case 3    -0.47    18.27    5.63    23.24   24.72    2.26
Case 4    -9.65    20.03    -2.64   33.48   18.53    -4.18
Case 5    0.00     9.57     2.73    2.73    16.31    3.93
Case 6    -2.57    46.37    27.71   27.71   220.44   40.62
Case 7    28.00    105.40   37.98   37.98   116.36   45.82
Case 8    3.93     63.26    14.97   14.97   54.43    18.00
Case 9    0.52     17.14    2.15    2.15    54.04    8.61
Case 10   15.55    69.70    36.25   36.25   652.47   43.25
Case 11   9.16     40.08    18.13   18.13   185.92   23.41
Case 12   49.56    109.01   63.52   63.52   449.54   88.50

Figure 7: Performance analysis with respect to error rate (four bar-chart panels over NRP instances Cases 1-3, 4-6, 7-9, and 10-12; y-axis: error rate (%); methods: MODBCO, M1-M5).

the various optimization algorithms is observed. The DMRT test shows that two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competitor techniques. The computational analysis based on the error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of instances attained a lower error rate. For the medium and larger sized datasets, 62% and 44% of instances, respectively, attained a lower error rate. A negative value in a column indicates that the corresponding instance attained a lower optimum value than specified in INRC2010.
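The error-rate metric itself is not restated here; our reading of Table 9 is a percentage deviation of the obtained cost from the best known INRC2010 cost, which also explains the negative entries. The formula below is that assumption, not a quotation from the paper.

```python
# Hedged sketch of the error-rate metric as read from Table 9.
def error_rate(obtained_cost, best_known_cost):
    """Percentage deviation from the best known cost.

    Negative values mean the algorithm improved on the published best,
    matching the negative entries in Table 9.
    """
    return 100.0 * (obtained_cost - best_known_cost) / best_known_cost

print(error_rate(209, 200))  # 4.5  -> 4.5% worse than the best known
print(error_rate(194, 200))  # -3.0 -> a new, better optimum
```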

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they could not find the optimal solution and became trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence (four panels over NRP instances Cases 1-3, 4-6, 7-9, and 10-12; y-axis: Average Convergence; methods: MODBCO, M1-M5).

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source: factor error rate)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    638680.9         5     127736.1796   15.26182   0.000
Within groups     3414820          408   8369.657384
Total             4053501          413

(b) DMRT test (Duncan test: error rate)

Method    N    Subset for alpha = 0.05
               1          2
MODBCO    69   5.402238
M2        69   13.77936
M5        69   17.31724
M3        69   20.99903
M1        69   36.49033
M4        69              121.1591
Sig.           0.07559    1.000

The statistical analysis on the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis. Thus there is a significant difference in the error rate with respect to the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small size instances and an 82% convergence rate on medium size instances. For longer instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and became trapped in local optima. It is observed that the convergence rate reduces with increasing problem size and becomes worse in many algorithms for larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
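The exact convergence formula is not restated in this section. One plausible formalization for a minimization problem, consistent with a value of 100% when the population average equals the optimum and with the negative entries of Table 11 for strongly diverged populations, is sketched below; the formula is our assumption.

```python
from statistics import mean

def average_convergence(population_costs, optimal_cost):
    """Assumed form: 100% when the population average equals the optimum,
    decreasing (and eventually negative) as the average cost diverges."""
    avg = mean(population_costs)
    return 100.0 * (1.0 - (avg - optimal_cost) / optimal_cost)

print(average_convergence([100, 100, 100], 100))  # 100.0 (at the optimum)
print(average_convergence([150, 250], 100))       # 0.0 (average is twice the optimum)
```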

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence.

Case      MODBCO   M1       M2       M3      M4        M5
Case 1    87.01    70.84    69.79    72.40   62.59     66.87
Case 2    91.96    65.88    76.93    64.19   56.88     77.21
Case 3    83.40    53.63    47.23    38.23   30.07     47.44
Case 4    99.02    62.96    77.23    45.94   53.14     77.20
Case 5    96.27    86.49    91.10    92.71   77.29     89.01
Case 6    79.49    42.52    52.87    59.08   -139.09   39.82
Case 7    56.34    -32.73   24.52    26.23   -54.91    9.97
Case 8    84.00    21.72    61.67    68.51   22.95     58.22
Case 9    96.98    78.81    91.68    92.59   39.78     84.36
Case 10   70.16    8.16     29.97    37.66   -584.44   18.54
Case 11   84.58    54.10    73.49    74.40   -94.85    66.95
Case 12   15.13    -46.99   -22.73   -9.46   -411.18   -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation (four panels over NRP instances Cases 1-3, 4-6, 7-9, and 10-12; y-axis: Average Standard Deviation; methods: MODBCO, M1-M5).

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis test shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
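As a small sketch (the function name and data shapes are ours), the per-instance standard deviation over repeated runs can be computed with the standard library and averaged across instances:

```python
from statistics import mean, stdev

def average_standard_deviation(runs_per_instance):
    """Mean of the per-instance standard deviations over repeated runs.
    `runs_per_instance` maps each instance to its list of run costs."""
    return mean(stdev(costs) for costs in runs_per_instance.values())

# Two hypothetical instances: one with spread, one with identical runs.
asd = average_standard_deviation({"a": [10, 12, 14], "b": [7, 7, 7]})
```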

The statistical test results for the Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation among the different optimization algorithms. The ANOVA test proves that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity (four panels over NRP instances Cases 1-3, 4-6, 7-9, and 10-12; y-axis: Convergence Diversity; methods: MODBCO, M1-M5).

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source: factor Average Convergence)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    712642.475       5     142528.495    15.47047   0.0000
Within groups     3758878.26       408   9212.9369
Total             4471520.73       413

(b) DMRT test (Duncan test: Average Convergence)

Method    N    Subset for alpha = 0.05
               1          2
M4        69   -47.6945
M1        69              46.4245
M5        69              53.6879
M3        69              57.6296
M2        69              59.5103
MODBCO    69              81.6997
Sig.           1.00       0.054

value of 0.05. In the DMRT test there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively. For the larger datasets, the Convergence Diversity is 62%, yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
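The definition above fits in a one-line sketch (the function name is ours):

```python
def convergence_diversity(convergences):
    """Convergence Diversity as defined in Section 6.4.5: the gap between
    the best and worst convergence values observed in the population."""
    return max(convergences) - min(convergences)

print(convergence_diversity([87.0, 62.5, 70.75]))  # 24.5
```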

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of the Convergence Diversity among the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test shows the grouping of the various algorithms based on mean value: there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion (bar chart over NRP instances Cases 1-12; y-axis: diversion; methods: MODBCO, M1-M5).

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances yield the optimum value. The larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source: factor Average Standard Deviation)

                  Sum of squares   df    Mean square   F         Sig.
Between groups    697.4407         5     139.48814     13.6322   0.000
Within groups     4174.76          408   10.23226
Total             4872.201         413

(b) DMRT test (Duncan test: Average Standard Deviation)

Method    N    Subset for alpha = 0.05
               1        2          3
MODBCO    69   7.4743
M2        69            9.615152
M3        69            9.677027
M5        69            9.729477
M1        69            10.13242
M4        69                       11.9316
Sig.           1.00     0.394      1.00

the various optimization algorithms with respect to the meanvalues of the cost diversion

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem were conducted with our proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source: factor Convergence Diversity)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    5144.758         5     1028.952      9.287168   0.000
Within groups     45203.48         408   110.7928
Total             50348.24         413

(b) DMRT test (Duncan test: Convergence Diversity)

Method    N    Subset for alpha = 0.05
               1           2          3
M3        69   18.363232
MODBCO    69   18.42029
M1        69               19.68116
M2        69               20.57971
M4        69               20.68116
M5        69                          21.2464
Sig.           0.919       0.096      1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO has more ability to attain the best values on the performance metrics than the competitor algorithms which use INRC2010.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained a lower worst value in addition to the best solution. With the datasets divided by size into smaller, medium, and larger datasets, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO has achieved 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. The error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature. Thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and statistical analyses show the significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for global search and the Modified Nelder-Mead Method for local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    -1.00    21.20   0.80    19.60   21.90    0.80
Case 3    -0.40    8.10    1.80    10.60   11.00    0.90
Case 4    -5.67    11.33   -1.67   20.33   11.00    -2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source: factor Average Cost Diversion)

                  Sum of squares   df    Mean square   F          Sig.
Between groups    1061949          5     212389.8      4.919985   0.000
Within groups     17612867         408   43168.79
Total             18674816         413

(b) DMRT test (Duncan test: Average Cost Diversion)

Method    N    Subset for alpha = 0.05
               1           2
MODBCO    69   6.202899
M2        69   10.10145
M5        69   14.05797
M3        69   15.31884
M1        69   29.13043
M4        69               149.5217
Sig.           0.573558    1

Optimization in solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to approach the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC 2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No. 2014-15/NFO-2014-15-OBC-PON-3843 (SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation (Taipei, Taiwan, December 2000), pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems - a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stolevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.




Bee Colony Optimization
(1) Initialization: assign every bee to an empty solution.
(2) Forward Pass:
    For every bee:
    (2.1) set i = 1;
    (2.2) evaluate all possible construction moves;
    (2.3) based on the evaluation, choose one move using the roulette wheel;
    (2.4) i = i + 1; if (i <= N), go to step (2.2);
    where i is the counter for construction moves and N is the number of construction moves during one forward pass.
(3) Return to Hive.
(4) Backward Pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) Calculate the probability (or use logical reasoning) to continue with the computed solution and become a recruiter bee.
(7) For every follower, choose a new solution from the recruiters.
(8) If the stopping criterion is not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.

Algorithm 1: Pseudocode of BCO.
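A compact, hypothetical rendering of Algorithm 1 may clarify the forward/backward-pass structure. The toy assignment problem, cost matrix, and parameter values below are our own illustrations (with one construction move per forward pass, so the recruiter/follower exchange is visible), not the paper's NRP encoding.

```python
import random

# Hypothetical toy problem: choose one shift (column) for each of three
# slots (rows); the objective is the total cost of the chosen shifts.
COSTS = [[3, 1, 4], [2, 7, 1], [5, 2, 6]]

def cost(bee):
    """Objective value of a (possibly partial) solution."""
    return sum(COSTS[i][j] for i, j in enumerate(bee))

def roulette(weights, rng):
    """Roulette-wheel selection: index i is chosen with probability
    proportional to weights[i] (step (2.3) of Algorithm 1)."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def bco(num_bees=6, seed=1):
    rng = random.Random(seed)
    bees = [[] for _ in range(num_bees)]            # (1) empty solutions
    for _ in range(len(COSTS)):                     # until solutions complete
        for bee in bees:                            # (2) forward pass: one move
            slot_costs = COSTS[len(bee)]
            weights = [1.0 / (1 + c) for c in slot_costs]  # cheaper -> likelier
            bee.append(roulette(weights, rng))
        bees.sort(key=cost)                         # (3)-(5) hive, evaluate, sort
        recruiters = bees[:num_bees // 2]           # (6) better half recruit
        for i in range(len(recruiters), num_bees):  # (7) followers copy one
            bees[i] = list(rng.choice(recruiters))
    return min(bees, key=cost)                      # (9)-(10) best solution

best = bco()
```

The sketch keeps the stopping criterion trivial (solutions complete after N moves); a real NRP solver would repeat the whole cycle and evaluate constraint penalties in `cost`.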

with a maximum value of the computed objective function. To form the simplex, new vertex function values are computed. This method is carried out using four procedures, namely, reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

The simplex operations update each vertex closer to its optimal solution; the vertices are ordered based on their fitness values. The best vertex is A_b, the second best vertex is A_s, and the worst vertex is A_w, determined by the objective function. Let A = (x, y) be a vertex in the triangle representing a food source point; A_b = (x_b, y_b), A_s = (x_s, y_s), and A_w = (x_w, y_w) are the positions of the food source points, that is, the local optimal points. The objective functions for A_b, A_s, and A_w are calculated based on (23) towards the food source points.

The objective function used to construct the simplex for the local search using MNMM is formulated as

f(x, y) = x^2 - 4x + y^2 - y - xy. (23)

Based on the objective function value, the vertex food points are ordered in ascending order with their corresponding honey bee agents. The obtained values are ordered as A_b <= A_s <= A_w with their honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail:

(1) In the simplex triangle the reflection coefficient 120572expansion coefficient 120574 contraction coefficient 120573 andshrinkage coefficient 120575 are initialized

(2) The objective function for the simplex triangle ver-tices is calculated and ordered The best vertex withlower objective value is 119860119887 the second best vertex is119860 119904 and the worst vertex is named as 119860119908 and thesevertices are ordered based on the objective functionas 119860119887 le 119860 119904 le 119860119908

(3) The first two best vertices are selected namely119860119887 and119860 119904 and the construction proceeds with calculatingthe midpoint of the line segment which joins the twobest vertices that is food positions The objectivefunction decreases as the honey agent associated withthe worst position vertex moves towards best andsecond best verticesThe value decreases as the honeyagent moves towards 119860119908 to 119860119887 and 119860119908 to 119860 119904 It isfeasible to calculate the midpoint vertex 119860119898 by theline joining best and second best vertices using

119860119898 = 119860119887 + 119860 1199042 (24)

(4) A reflection vertex A_r is generated as the reflection of the worst point A_w. The objective function value f(A_r) is calculated and compared with the worst vertex value f(A_w). If f(A_r) < f(A_w), proceed with step (5). The reflection vertex can be calculated using

A_r = A_m + α(A_m − A_w), where α > 0. (25)

(5) The expansion process starts when the objective function value at the reflection vertex A_r is less than at the worst vertex A_w, that is, f(A_r) < f(A_w); the line segment through A_w and A_r is then extended further to A_e. The vertex A_e is calculated by (26). If the objective function value at A_e is less than at the reflection vertex, f(A_e) < f(A_r), the expansion is accepted and the honey bee agent has found a better food position than the reflection point:

A_e = A_r + γ(A_r − A_m), where γ > 1. (26)

(6) The contraction process is carried out when f(A_s) ≤ f(A_r) and f(A_r) < f(A_w), replacing A_w with

Computational Intelligence and Neuroscience 9

Figure 3: Nelder-Mead operations on the simplex (A_b, A_s, A_w): (a) simplex triangle, (b) reflection, (c) expansion, (d) contraction (A_h < A_r), (e) contraction (A_r < A_h), (f) shrinkage.

A_r. If f(A_r) ≥ f(A_w), a direct contraction is performed without replacing A_w with A_r. The contraction vertex A_c can be calculated using

A_c = βA_r + (1 − β)A_m, where 0 < β < 1. (27)

If f(A_c) < f(A_w), the contraction is accepted and A_w is replaced with A_c; go to step (8). Otherwise proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process in step (6) fails; it is done by shrinking all the vertices of the simplex triangle except the best vertex A_b, using (28). When the objective function values of the reflection and contraction phases are not better than the worst point, the vertices A_s and A_w are shrunk towards A_b. The shrunken vertices then form a new, smaller simplex triangle together with the best vertex.

A_i = δA_i + A_1(1 − δ), where 0 < δ < 1. (28)

(8) The calculations are stopped when the termination condition is met.

Algorithm 2 describes the pseudocode of the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is shown in Figure 5.
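As a concrete illustration, the per-iteration logic of steps (1)-(8) can be sketched in Python. This is a minimal sketch of a standard Nelder-Mead update using the paper's coefficient names (α, γ, β, δ); the NRP-specific objective, the two-site memory of the bees, and the directed-bee bookkeeping are omitted, and the acceptance conditions follow the structure of Algorithm 2 rather than the partly garbled prose.

```python
import numpy as np

def mnmm_step(simplex, f, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5):
    """One ordering/reflection/expansion/contraction/shrinkage cycle
    over an (n+1, n) simplex for a scalar objective f to be minimized."""
    # Step (2): order vertices so that f(A_1) <= ... <= f(A_{n+1}).
    simplex = simplex[np.argsort([f(v) for v in simplex])]
    best, worst = simplex[0].copy(), simplex[-1].copy()
    # Step (3): centroid of all vertices except the worst (the "midpoint").
    a_m = simplex[:-1].mean(axis=0)
    # Step (4): reflection of the worst vertex through the centroid.
    a_r = a_m + alpha * (a_m - worst)
    if f(best) <= f(a_r) < f(simplex[-2]):
        simplex[-1] = a_r          # accept the reflected point
        return simplex
    # Step (5): expansion when the reflected point is a new best.
    if f(a_r) < f(best):
        a_e = a_r + gamma * (a_r - a_m)
        simplex[-1] = a_e if f(a_e) < f(a_r) else a_r
        return simplex
    # Step (6): contraction towards the better of A_r and the worst vertex.
    a_c = beta * min((a_r, worst), key=f) + (1 - beta) * a_m
    if f(a_c) < min(f(a_r), f(worst)):
        simplex[-1] = a_c
        return simplex
    # Step (7): shrink all non-best vertices towards the best one.
    simplex[1:] = delta * simplex[1:] + (1 - delta) * best
    return simplex
```

Iterating this step on a smooth objective drives the whole simplex towards a local minimum, which is the role MNMM plays as the local search inside each search volume.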

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, inspired by the natural foraging behavior of bees. The algorithm consists of two steps: the forward pass and the backward pass. During the forward pass, bees explore the neighborhood of their current solutions along all possible ways. In the backward pass, bees return to the hive and share the objective function values of their current solutions. The nectar amount is calculated using a probability


Figure 4: Bees' search movement based on MNMM (reflection A_r, expansion A_e, contraction points A_c1 and A_c2, and the new vertex A_new within the simplex A_b, A_s, A_w with midpoint A_m).

function, and the solution is advertised; the bee with the better solution is given higher priority. The remaining bees decide, based on the probability value, whether to explore their own solution or proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained by using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of food particles by each bee. The bees use the developed MNMM algorithm to find the best solution, remembering the last two best food sites they obtained. After obtaining the current solution, a bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about their optimized points by the natural behavior of bees called the waggle dance. When all the information about the best food is shared, the best among the optimized points is chosen using the decision-making processes of honey bees called the consensus and quorum methods [34, 35].
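The backward-pass recruitment described above can be sketched as follows. This is an illustrative sketch only: the paper does not specify the exact recruitment probability, so a fitness-proportional (inverse-cost) roulette rule, a common choice in Bee Colony Optimization, is assumed here.

```python
import random

def _roulette(weights, rng):
    """Pick an index with probability proportional to its weight."""
    pick, acc = rng.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= pick:
            return i
    return len(weights) - 1

def backward_pass(solutions, cost, rng=None):
    """Backward pass: bees advertise their solutions in the hive and each
    bee adopts one with probability proportional to its quality. Since the
    NRP minimizes cost, quality is taken as the inverse of (1 + cost)."""
    rng = rng or random.Random(0)
    weights = [1.0 / (1.0 + cost(s)) for s in solutions]
    return [solutions[_roulette(weights, rng)] for _ in solutions]
```

Cheaper rosters receive larger weights, so over repeated passes the population concentrates on the better advertised solutions while still leaving some probability mass for exploration.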

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in a local environment; and agents attempt to perform their goal. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the fixed parameters.

(1) Consider "e" independent parameters for calculating the objective function. The range of the g-th parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value of the g-th parameter chosen.

(2) Thus the objective function can be formulated over e axes; each axis contains the total range of a single parameter, with different dimensions.

(3) Each axis is divided into smaller parts, each part called a step. The g-th axis can thus be divided into n_g steps, each of length L_g, where g = 1 to e. The relationship between n_g and L_g is given as

n_g = (Q_gi − Q_gf) / L_g. (29)

(4) Then each axis is divided into branches; taking one branch from each of the e axes will form an


Modified Nelder-Mead Method for directed honey bee food search

(1) Initialization:
    A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1.
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
    f is the objective function to be minimized.
(2) Ordering:
    Order the vertices in the simplex from lowest objective function value f(A_1) to highest value f(A_{n+1}), ordered as f(A_1) ≤ f(A_2) ≤ ... ≤ f(A_{n+1}).
(3) Midpoint:
    Calculate the midpoint of the first two best vertices in the simplex: A_m = Σ(A_i / n), where i = 1, 2, ..., n.
(4) Reflection process:
    Calculate the reflection point A_r by A_r = A_m + α(A_m − A_{n+1}).
    if f(A_1) ≤ f(A_r) ≤ f(A_n) then A_n ← A_r and go to step (8) end if
(5) Expansion process:
    if f(A_r) ≤ f(A_1) then calculate the expansion point using A_e = A_r + γ(A_r − A_m) end if
    if f(A_e) < f(A_r) then A_n ← A_e and go to step (8)
    else A_n ← A_r and go to step (8) end if
(6) Contraction process:
    if f(A_n) ≤ f(A_r) ≤ f(A_{n+1}) then compute the outside contraction by A_c = βA_r + (1 − β)A_m end if
    if f(A_r) ≥ f(A_{n+1}) then compute the inside contraction by A_c = βA_{n+1} + (1 − β)A_m end if
    if f(A_r) ≥ f(A_n) then contraction is done between A_m and the best vertex among A_r and A_{n+1} end if
    if f(A_c) < f(A_r) then A_n ← A_c and go to step (8) else go to step (7) end if
    if f(A_c) ≥ f(A_{n+1}) then A_{n+1} ← A_c and go to step (8) else go to step (7) end if
(7) Shrinkage process:
    Shrink towards the best solution with new vertices by A_i = δA_i + A_1(1 − δ), where i = 2, ..., n + 1.
(8) Stopping criteria:
    Order and relabel the new vertices of the simplex based on their objective function values; if the termination condition is not met, go to step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = ∏_{g=1}^{e} n_g. (30)

(5) The starting point of an agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]. (31)
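Steps (1)-(5) above amount to a grid partition of the parameter hyper-box. The following sketch (function and variable names are illustrative; it assumes the axis ranges are traversed from the smaller towards the larger endpoint) enumerates the volumes and their midpoints as bee starting points:

```python
from itertools import product

def volume_midpoints(q_init, q_final, step_len):
    """Split axis g into n_g = |Q_gi - Q_gf| / L_g steps (cf. eq. (29)),
    enumerate the N_v = prod(n_g) volumes (eq. (30)), and return the
    midpoint of each volume as a bee's starting point (cf. eq. (31))."""
    n_steps = [round(abs(qf - qi) / lg)
               for qi, qf, lg in zip(q_init, q_final, step_len)]
    midpoints = []
    # The Cartesian product of per-axis step indices enumerates all volumes.
    for idx in product(*(range(n) for n in n_steps)):
        midpoints.append([min(qi, qf) + (k + 0.5) * lg
                          for k, qi, qf, lg in zip(idx, q_init, q_final, step_len)])
    return midpoints

starts = volume_midpoints([0.0, 0.0], [1.0, 2.0], [0.5, 1.0])
# 2 steps on each axis give N_v = 4 starting midpoints.
```

Each returned midpoint seeds one bee, whose MNMM local search then explores only its own volume, which is how the algorithm divides the search area into fragments.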

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighboring nest sites for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for food particles, report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


Figure 5: Workflow of the Modified Nelder-Mead Method.


Multi-Objective Directed Bee Colony Optimization

(1) Initialization:
    f(x) is the objective function to be minimized.
    Initialize the e parameters and the step lengths L_g, where g = 0 to e.
    Initialize the initial and final values of each parameter as Q_gi and Q_gf.
    ** Solution Representation **
    The solutions are represented in the form of binary values, which can be generated as follows:
    for each solution i = 1, ..., n:
        X_i = {x_i1, x_i2, ..., x_id | d ∈ total days & x_id = (rand ≥ 0.29) ∀d}
    end for
(2) The number of steps on each axis can be calculated using n_g = (Q_gi − Q_gf) / L_g.
(3) The total number of volumes can be calculated using N_v = ∏_{g=1}^{e} n_g.
(4) The midpoint of each volume, the starting point of the exploration, can be calculated using [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method (Algorithm 2).
(6) Record the values of the optimized points in the vector table using [f(V_1), f(V_2), ..., f(V_{N_v})].
(7) The globally optimized point is chosen based on the bee decision-making process using the consensus and quorum method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO.
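The binary solution representation in step (1) of Algorithm 3 can be sketched directly. The 0.29 threshold and the day-indexed layout are taken from the pseudocode; the flat list-per-solution layout (with no explicit nurse/shift dimensions) is a simplifying assumption of this sketch.

```python
import random

def init_binary_population(n_solutions, total_days, threshold=0.29, seed=42):
    """x_id = 1 when rand >= threshold, else 0 (Algorithm 3, step (1))."""
    rng = random.Random(seed)
    return [[int(rng.random() >= threshold) for _ in range(total_days)]
            for _ in range(n_solutions)]

population = init_binary_population(n_solutions=10, total_days=28)
```

With a threshold of 0.29, roughly 71% of the entries start as 1, biasing initial rosters towards assigned days before the local search refines them.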

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimal cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This mathematical formulation finds the minimal optimal cases among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V; V is a vector which consists of e elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. The consensus is the widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether they have reached an agreement and started acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized points are selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained by the group decision-making process. When the solution reaches the optimal threshold level ξ_q, the solution is considered final based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but this leads to inaccurate experimental results. The threshold value should therefore be chosen to attain less computational time while retaining accurate experimental results.
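Steps (6)-(7) of Algorithm 3 together with the quorum test can be sketched as follows. The ≤-comparison against the threshold ξ_q assumes a minimization objective, which is our reading of the description above.

```python
def consensus(vector_table):
    """Eq. (33): the global optimum is the minimum recorded objective
    value over the per-volume optima f(V_1), ..., f(V_{N_v})."""
    return min(vector_table)

def quorum_reached(candidate, threshold):
    """Quorum: accept the candidate as the final solution once it
    reaches the quorum threshold level xi_q."""
    return candidate <= threshold

table = [56.0, 71.5, 58.2]   # illustrative per-volume best costs
best = consensus(table)
```

A tight threshold forces further search passes (more time, better quality); a loose one stops early, matching the time/accuracy trade-off described above.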

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparison with five different competitor methods. Six performance metrics are considered to investigate the significance of, and to evaluate, the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage of the difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_NRP-Instance − fitness_i) / Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence is a measure of the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions; it improves convergence-time performance by exploring more solutions in the population. The Average Convergence is calculated using

AC = Σ_{i=1}^{r} [1 − (Avg_fitness_i − Optimal_NRP-Instance) / Optimal_NRP-Instance] × 100, (35)

where r is the number of instances in the given dataset.


Figure 6: Flowchart of MODBCO.


6.1.3. Standard Deviation. The standard deviation (SD) is the measure of the dispersion of a set of values from their mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

ASD = sqrt( Σ_{i=1}^{r} (value obtained in each instance_i − mean value of the instance)^2 ), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and the worst convergence rates generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best − Convergence_worst, (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i − Cost_NRP-Instance) / Total number of instances, (38)

where r is the number of instances in the given dataset.
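The metrics of Sections 6.1.1-6.1.5 can be computed as below. The signs and the division by r are our reading of eqs. (34)-(38): in particular, the LER sign is taken as (fitness − optimal)/optimal so that values below the published optimum come out negative, matching the note under Table 9.

```python
def least_error_rate(optima, bests):
    """LER (%): relative gap to the known optimum, summed over instances.
    Positive when the found best is worse than the published optimum."""
    return 100.0 * sum((b - o) / o for o, b in zip(optima, bests))

def average_convergence(optima, avg_fitness):
    """AC (%): cf. eq. (35), averaged over the r instances."""
    r = len(optima)
    return 100.0 * sum(1.0 - (af - o) / o
                       for o, af in zip(optima, avg_fitness)) / r

def convergence_diversity(conv_best, conv_worst):
    """CD: cf. eq. (37)."""
    return conv_best - conv_worst

def average_cost_diversion(costs, known_costs):
    """ACD: cf. eq. (38), averaged over the number of instances."""
    r = len(costs)
    return sum(c - k for c, k in zip(costs, known_costs)) / r
```

For example, a single instance with published optimum 50 and found best 55 gives a 10% LER and a 90% Average Convergence.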

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided, based on complexity and size, into three tracks, namely the sprint, medium, and long datasets. Each track is divided into four types, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track is medium, which is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives a detailed description of the datasets: columns one to three index each dataset by track, type, and instance. Columns four to seven give the numbers of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off for the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in cases 1 to 4 are smaller in size, those in cases 5 to 8 are medium in size, and the larger datasets are classified in cases 9 to 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are run with different optimization algorithms under similar environment conditions to assess performance. The proposed algorithm for solving the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Appropriate parameter values were determined through preliminary empirical evaluations. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of a proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01-02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods compared.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to detect differences in the performance of the algorithms.


Table 7: Experimental results with respect to best value.

Instance | Optimal | MODBCO (best/worst) | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued.

Instance | Optimal | MODBCO (best/worst) | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted and the alternate hypothesis is rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies significant and nonsignificant differences between any two methods. The method ranks the methods by mean value in increasing or decreasing order and groups the methods that are not significantly different.
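The one-way ANOVA used here can be reproduced from first principles; the sample groups below are illustrative numbers, not the paper's data. With k methods and N observations in total, the F statistic uses k − 1 between-group and N − k within-group degrees of freedom (cf. df = 5 and 408 in Table 8).

```python
def one_way_anova_f(groups):
    """F = MS_between / MS_within for a list of samples, one per method."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of observations around their own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

The computed F value is then compared against the F distribution's critical value at the chosen significance level (0.05 here) to decide whether to reject the null hypothesis.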

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP using the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competing methods are shown in Table 7. The performance is compared with previous methods; each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061.949 | 5 | 212.3898 | 3.620681 | 0.003
Within groups | 23933.354 | 408 | 58.66018 | |
Total | 24995.303 | 413 | | |

(b) DMRT test (Duncan test, best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of the 69 instances.

The statistical analysis tests ANOVA and DMRT for the best values are shown in Table 8. The significance value is less than 0.05, so the null hypothesis is rejected. A significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | −0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | −0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | −0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | −9.65 | 20.03 | −2.64 | 33.48 | 18.53 | −4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | −2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%). Four panels plot the error rates of MODBCO and M1–M5 on the NRP instances Cases 1–3, Cases 4–6, Cases 7–9, and Cases 10–12.

the various optimization algorithms is observed. The DMRT test shows the homogeneous grouping: two homogeneous groups of best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competing techniques. The computational analysis based on the error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint-type dataset, 88% of the instances attained a lower error rate. For the medium and larger sized datasets, the obtained error rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a better optimum value than the one specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they are unable to find the optimal solution and become trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.
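The error-rate comparison above can be reproduced with a short helper. A minimal sketch, assuming the usual relative-deviation definition error = (obtained − best known) / best known × 100, where the best known value is the published INRC2010 optimum; negative values then mean a new best was found, as in Table 9:

```python
def error_rate(obtained: float, best_known: float) -> float:
    """Relative deviation (%) of an obtained cost from the best known cost.

    Negative values indicate the obtained cost improved on the
    published INRC2010 optimum (cf. Table 9). The formula is an
    assumed, standard definition, not quoted from the paper.
    """
    return (obtained - best_known) / best_known * 100.0

# Hypothetical costs for illustration (not taken from the paper):
best_known = 1300.0
obtained = 1313.0
print(round(error_rate(obtained, best_known), 2))  # 1.0
```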


Figure 8: Performance analysis with respect to Average Convergence. Four panels plot the Average Convergence of MODBCO and M1–M5 on the NRP instances Cases 1–3, Cases 4–6, Cases 7–9, and Cases 10–12.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (factor: error rate)

Source            Sum of squares   df    Mean square   F        Sig.
Between groups    638680.9         5     127736.18     15.262   0.000
Within groups     3414820          408   8369.657
Total             4053501          413

(b) DMRT (Duncan) test, error rate

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    5.402238
M2        69    13.77936
M5        69    17.31724
M3        69    20.99903
M1        69    36.49033
M4        69                121.1591
Sig.            0.07559     1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis; thus there is a significant difference in the error rate across the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed among the optimization algorithms with respect to the error rate.
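The ANOVA step of this analysis can be sketched in a few lines. A minimal, hand-rolled one-way ANOVA (the paper's tables come from an SPSS-style tool, so this is only an illustration), taking one array of error rates per algorithm; the F statistic it returns is what Tables 8, 10, 12, 14, 16, and 18 report, to be compared against an F-distribution critical value (or p value) at the 0.05 level. The error-rate arrays below are made up, not the paper's data:

```python
def one_way_anova(*groups):
    """One-way ANOVA by hand: returns (F, df_between, df_within).

    Compare F against the F distribution's critical value (or compute
    a p value) to accept or reject the null hypothesis of equal means.
    """
    k = len(groups)                                   # number of algorithms
    n = sum(len(g) for g in groups)                   # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    return f, k - 1, n - k

# Hypothetical per-instance error rates for three algorithms (illustrative):
f, dfb, dfw = one_way_anova([0.5, 1.2, 0.0, 2.1], [11.5, 20.2, 18.3, 20.0],
                            [2.5, 1.7, 5.6, 2.6])
print(f"F({dfb}, {dfw}) = {f:.2f}")
```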

6.4.3. Average Convergence. The Average Convergence of the solution relates the average fitness of the population to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small instances and an 82% convergence rate on the medium instances; for the longer instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for the larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
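The figures in Table 11 can be computed once a convergence measure is fixed. A sketch under the assumption (not stated explicitly in this section) that convergence is 100% when the average population fitness equals the optimal cost and falls linearly with the relative deviation; this also reproduces the negative values reported for badly trapped runs:

```python
def convergence(avg_fitness: float, optimal: float) -> float:
    """Convergence (%) of a population: 100 when avg_fitness == optimal.

    Assumed formulation: for a minimization problem avg_fitness >= optimal,
    so values fall below 100 as the population deviates; deviations larger
    than the optimum itself yield negative values (cf. Table 11).
    """
    return (1.0 - (avg_fitness - optimal) / optimal) * 100.0

def average_convergence(avg_fitnesses, optimal):
    """Mean convergence over several runs of one instance (assumed aggregation)."""
    return sum(convergence(a, optimal) for a in avg_fitnesses) / len(avg_fitnesses)

# Hypothetical: optimum cost 100, population averages from three runs.
print(round(average_convergence([105.0, 110.0, 120.0], 100.0), 2))  # 88.33
```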

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%).

Case     MODBCO   M1       M2       M3      M4        M5
Case 1   87.01    70.84    69.79    72.40   62.59     66.87
Case 2   91.96    65.88    76.93    64.19   56.88     77.21
Case 3   83.40    53.63    47.23    38.23   30.07     47.44
Case 4   99.02    62.96    77.23    45.94   53.14     77.20
Case 5   96.27    86.49    91.10    92.71   77.29     89.01
Case 6   79.49    42.52    52.87    59.08   −139.09   39.82
Case 7   56.34    −32.73   24.52    26.23   −54.91    9.97
Case 8   84.00    21.72    61.67    68.51   22.95     58.22
Case 9   96.98    78.81    91.68    92.59   39.78     84.36
Case 10  70.16    8.16     29.97    37.66   −584.44   18.54
Case 11  84.58    54.10    73.49    74.40   −94.85    66.95
Case 12  15.13    −46.99   −22.73   −9.46   −411.18   −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. Four panels plot the Average Standard Deviation of MODBCO and M1–M5 on the NRP instances Cases 1–3, Cases 4–6, Cases 7–9, and Cases 10–12.

in the mean values of convergence across the different optimization algorithms. The ANOVA test indicates rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean; it helps to characterize the behavior of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
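The dispersion metric can be sketched as the standard deviation of the final costs over repeated runs, averaged per instance. This aggregation is an assumption, since the section does not spell the formula out; the Python statistics module suffices:

```python
from statistics import mean, pstdev

def average_standard_deviation(runs_per_instance):
    """Mean of the per-instance population standard deviations of final costs.

    runs_per_instance: list of lists, one inner list of final costs per
    NRP instance (hypothetical aggregation, for illustration only).
    """
    return mean(pstdev(runs) for runs in runs_per_instance)

# Two hypothetical instances, five runs each:
runs = [[56, 58, 57, 59, 60], [240, 244, 242, 246, 248]]
print(round(average_standard_deviation(runs), 3))
```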

The statistical test results for the Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation across the different optimization algorithms. The ANOVA test shows that the null hypothesis is rejected, since the significance value of 0.000 is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity. Four panels plot the Convergence Diversity of MODBCO and M1–M5 on the NRP instances Cases 1–3, Cases 4–6, Cases 7–9, and Cases 10–12.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (factor: Average Convergence)

Source            Sum of squares   df    Mean square   F        Sig.
Between groups    712642.475       5     142528.495    15.470   0.000
Within groups     3758878.26       408   9212.9369
Total             4471520.73       413

(b) DMRT (Duncan) test, Average Convergence

Method    N     Subset for alpha = 0.05
                1           2
M4        69    −47.6945
M1        69                46.4245
M5        69                53.6879
M3        69                57.6296
M2        69                59.5103
MODBCO    69                81.6997
Sig.            1.000       0.054

value of 0.05. In the DMRT test there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate together help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competing algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets the Convergence Diversity is 62%. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
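As defined above, Convergence Diversity is simply the spread between the best and the worst convergence values in the population; a minimal sketch:

```python
def convergence_diversity(convergences):
    """Difference between the best and worst convergence (%) in a population."""
    return max(convergences) - min(convergences)

# Hypothetical per-individual convergence values for one instance:
print(convergence_diversity([87.0, 64.5, 70.2, 57.0]))  # 30.0
```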

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of Convergence Diversity across the various optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test groups the algorithms by mean value; there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case     MODBCO     M1         M2         M3         M4         M5
Case 1   6.501382   8.551285   9.832550   8.645203   10.57402   9.239032
Case 2   5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3   5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4   5.270417   5.310443   10.24612   9.508210   9.092851   8.049586
Case 5   7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6   7.157790   10.13578   9.109779   9.995431   13.54185   8.020126
Case 7   7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8   7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9   9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10  10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11  10.25454   14.68924   10.33091   10.30386   11.21071   13.35410
Case 12  10.68460   13.49546   10.75478   15.40791   14.42514   11.33570

Figure 11: Performance analysis with respect to Average Cost Diversion, plotting the diversion of MODBCO and M1–M5 across the NRP instances Cases 1–12.

among the various optimization algorithms with respect to the mean values of Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competing techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yielded the optimum value; the larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instance achieved a new optimized value. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (factor: Average Standard Deviation)

Source            Sum of squares   df    Mean square   F        Sig.
Between groups    697.4407         5     139.4881      13.632   0.000
Within groups     4174.76          408   10.23226
Total             4872.201         413

(b) DMRT (Duncan) test, Average Standard Deviation

Method    N     Subset for alpha = 0.05
                1          2          3
MODBCO    69    7.47430
M2        69               9.615152
M3        69               9.677027
M5        69               9.729477
M1        69               10.13242
M4        69                          11.93160
Sig.            1.000      0.394      1.000

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with the proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of the proposed algorithm are compared with those of the competing methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case     MODBCO  M1      M2      M3      M4      M5
Case 1   30.00   35.25   37.28   32.83   37.94   38.83
Case 2   22.96   27.92   29.18   23.85   27.95   27.62
Case 3   53.05   56.21   64.66   59.29   60.79   65.05
Case 4   29.99   34.03   33.62   30.84   39.46   33.32
Case 5   7.01    7.87    8.33    6.71    8.77    9.29
Case 6   20.89   22.21   26.98   19.29   25.45   24.87
Case 7   43.69   54.66   48.13   55.47   50.24   56.18
Case 8   27.67   30.03   31.83   24.11   30.35   29.69
Case 9   6.89    8.10    8.22    8.04    8.24    9.10
Case 10  37.10   44.29   45.53   38.95   41.91   49.98
Case 11  11.43   11.64   10.81   11.36   11.91   12.15
Case 12  91.13   75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (factor: Convergence Diversity)

Source            Sum of squares   df    Mean square   F       Sig.
Between groups    514.4758         5     102.8952      9.287   0.000
Within groups     4520.348         408   11.07928
Total             5034.824         413

(b) DMRT (Duncan) test, Convergence Diversity

Method    N     Subset for alpha = 0.05
                1           2           3
M3        69    18.36323
MODBCO    69    18.42029
M1        69                19.68116
M2        69                20.57971
M4        69                20.68116
M5        69                            21.24640
Sig.            0.919       0.096       1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of the proposed algorithm and of the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO has more ability to attain the best values on the performance metrics than the competitor algorithms, all using INRC2010.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attains a lower worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and larger datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of the proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of the proposed algorithm is reduced by 77% compared with the other competitor methods.

The proposed system is tested on the larger sized datasets and works astoundingly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. The method balances exploration and exploitation without any bias; thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with that of five other existing methods. To assess the performance of the proposed algorithm, 69 cases of various sized datasets were chosen, and 34 out of the 69 instances achieved the best results. Thus the algorithm contributes a new deterministic search and an effective heuristic approach to solving the NRP, and MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case     MODBCO  M1      M2      M3      M4       M5
Case 1   0.00    6.40    1.40    3.20    5.20     1.60
Case 2   −1.00   21.20   0.80    19.60   21.90    0.80
Case 3   −0.40   8.10    1.80    10.60   11.00    0.90
Case 4   −5.67   11.33   −1.67   20.33   11.00    −2.33
Case 5   5.20    24.00   6.80    6.80    40.00    9.80
Case 6   15.80   46.40   25.60   25.60   215.60   39.80
Case 7   13.20   46.00   16.80   16.80   67.80    20.20
Case 8   5.00    49.00   10.67   10.67   43.67    13.67
Case 9   1.20    40.80   5.00    5.00    127.60   20.20
Case 10  18.00   45.40   26.20   26.20   100.00   32.60
Case 11  26.00   70.00   33.60   33.60   335.40   40.60
Case 12  15.67   36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (factor: Average Cost Diversion)

Source            Sum of squares   df    Mean square   F       Sig.
Between groups    1061949          5     212389.8      4.920   0.000
Within groups     17612867         408   43168.79
Total             18674816         413

(b) DMRT (Duncan) test, Average Cost Diversion

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69                149.5217
Sig.            0.573558    1.000

Optimization in solving the NRP while satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the research projects sponsored by the Major Project Scheme, UGC, India, Reference nos. F. No. 2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.
[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.
[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.
[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.
[7] D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.
[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.
[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.
[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.
[12] J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.
[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.
[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.
[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.
[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.
[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.
[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.
[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.
[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.
[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.
[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.
[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.
[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.
[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.
[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.
[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.
[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.
[29] S. Asta, E. Özcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.
[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.
[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.
[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.
[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.
[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.
[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.
[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.
[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.
[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.
[39] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.
[41] G. González-Rodríguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.
[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.


Figure 3: Nelder-Mead operations: (a) simplex triangle (A_b, A_s, A_w); (b) reflection (A_r); (c) expansion (A_e); (d) contraction, A_h < A_r (A_c); (e) contraction, A_r < A_h (A_c); (f) shrinkage (A_b′, A_s′).

A_r. If f(A_r) > f(A_h), then the direct contraction, without the replacement of A_b by A_r, is performed. The contraction vertex A_c can be calculated using

A_c = βA_r + (1 − β)A_m, where 0 < β < 1. (27)

If f(A_r) ≤ f(A_b), the contraction is done and A_c replaces A_h; go to step (8), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process at step (6) fails. It is done by shrinking all the vertices of the simplex triangle except A_h using (28): when the objective function values of the reflection and contraction phases are not less than the worst point, the vertices A_s and A_w must be shrunk towards A_h. Thus the vertices of smaller value form a new simplex triangle with the other two best vertices:

A_i = δA_i + A_1(1 − δ), where 0 < δ < 1. (28)

(8) The calculations are stopped when the termination condition is met.

Algorithm 2 presents the pseudocode for the Modified Nelder-Mead Method in detail, portraying the process by which MNMM obtains the best solution for the NRP. The workflow of the proposed MNMM is explained in Figure 5.
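The reflection/expansion/contraction/shrinkage cycle of steps (4)–(8) can be condensed into a short routine. A simplified sketch of standard Nelder-Mead iterations (not the paper's full MNMM, which adds the bee-specific bookkeeping), with the usual coefficients α = 1, γ = 2, β = 0.5, δ = 0.5:

```python
def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5,
                iters=200):
    """Minimize f over a list of n+1 vertices with standard Nelder-Mead moves."""
    n = len(simplex[0])  # dimension; simplex holds n + 1 vertices
    for _ in range(iters):
        simplex.sort(key=f)                      # step (2): order vertices
        best, worst = simplex[0], simplex[-1]
        # step (3): centroid of all vertices except the worst
        mid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [mid[i] + alpha * (mid[i] - worst[i]) for i in range(n)]  # (4)
        if f(best) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl                   # accept reflection
        elif f(refl) < f(best):                  # step (5): expansion
            exp = [refl[i] + gamma * (refl[i] - mid[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        else:                                    # step (6): contraction
            con = [beta * worst[i] + (1 - beta) * mid[i] for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:                                # step (7): shrink toward best
                simplex = [best] + [[best[i] + delta * (v[i] - best[i])
                                     for i in range(n)] for v in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Minimize a convex quadratic; the optimum is at (3, -2).
f = lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2
best = nelder_mead(f, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print([round(x, 3) for x in best])  # close to [3.0, -2.0]
```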

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems; it is inspired by the natural behavior of bees seeking food sources. The algorithm consists of two steps: the forward pass and the backward pass. During the forward pass, bees explore the neighborhood of their current solutions and find all possible ways. In the backward pass, bees return to the hive and share the values of the objective function of their current solutions. The nectar amount is calculated using a probability


Figure 4: Bees' search movement based on MNMM, showing the simplex vertices A_b, A_w, A_s, A_r, A_m and the expansion (A_e), contraction (A_c1, A_c2), and new (A_new) points.

function, and the solution is advertised; the bee with the better solution is given higher priority. The remaining bees, based on the probability value, decide whether to explore their own solution or proceed with the advertised one. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained by using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes; each volume determines the starting point of the exploration of a food particle by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points through the natural behavior of bees called the waggle dance. When all the information about the best food sources has been shared, the best among the optimized points is chosen using a decision-making process called the consensus and quorum method in honey bees [34, 35].
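The backward-pass recruitment described above can be sketched as a roulette selection in which bees advertising better (lower-cost) rosters are more likely to be followed. A toy sketch; the fitness-proportional rule 1/cost is illustrative, not the paper's exact formula:

```python
import random

def advertise_probs(costs):
    """Probability that each advertised solution is followed,
    proportional to fitness 1/cost (minimization); an assumed rule."""
    fitness = [1.0 / c for c in costs]
    total = sum(fitness)
    return [fit / total for fit in fitness]

def recruit(costs, rng):
    """Backward pass: roulette-wheel choice of one advertised solution."""
    r, acc = rng.random(), 0.0
    for i, p in enumerate(advertise_probs(costs)):
        acc += p
        if r <= acc:
            return i
    return len(costs) - 1

# Three bees advertising rosters with hypothetical costs:
probs = advertise_probs([56.0, 240.0, 310.0])
print([round(p, 3) for p in probs])  # the cheapest roster dominates
chosen = recruit([56.0, 240.0, 310.0], random.Random(7))
```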

5.1. Multiagent System. All agents live in an environment that is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in its local environment; and agents attempt to perform their goals. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the fixed parameters.

(1) Consider e independent parameters used to calculate the objective function. The range of the gth parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value of the gth parameter chosen.

(2) Thus the objective function can be formulated over e axes; each axis contains the total range of a single parameter, with a different dimension.

(3) Each axis is divided into smaller parts, each part called a step. The gth axis can be divided into n_g steps, each of length L_g, where g runs from 1 to e. The relationship between n_g and L_g is given as

n_g = (Q_gi − Q_gf) / L_g. (29)

(4) Then each axis is divided into branches; taken together across the e axes, the branches form e-dimensional volumes.

Computational Intelligence and Neuroscience 11

Modified Nelder-Mead Method for directed honey bee food search

(1) Initialization:
    A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1.
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
    f is the objective function to be minimized.
(2) Ordering:
    Order the vertices in the simplex from the lowest objective function value f(A_1) to the highest value f(A_{n+1}), so that A_1 <= A_2 <= ... <= A_{n+1}.
(3) Midpoint:
    Calculate the midpoint of the best n vertices in the simplex, A_m = Σ (A_i / n), where i = 1, 2, ..., n.
(4) Reflection process:
    Calculate the reflection point A_r = A_m + α(A_m - A_{n+1}).
    if f(A_1) <= f(A_r) <= f(A_n) then A_{n+1} <- A_r and go to Step (8).
(5) Expansion process:
    if f(A_r) <= f(A_1) then calculate the expansion point A_e = A_r + γ(A_r - A_m).
    if f(A_e) < f(A_r) then A_{n+1} <- A_e and go to Step (8);
    else A_{n+1} <- A_r and go to Step (8).
(6) Contraction process:
    if f(A_n) <= f(A_r) <= f(A_{n+1}) then compute the outside contraction A_c = βA_r + (1 - β)A_m.
    if f(A_r) >= f(A_{n+1}) then compute the inside contraction A_c = βA_{n+1} + (1 - β)A_m.
    The contraction is done between A_m and the better of A_r and A_{n+1}.
    if f(A_c) < min(f(A_r), f(A_{n+1})) then A_{n+1} <- A_c and go to Step (8);
    else go to Step (7).
(7) Shrinkage process:
    Shrink towards the best solution with new vertices A_i = δA_i + A_1(1 - δ), where i = 2, ..., n + 1.
(8) Stopping criteria:
    Order and relabel the new vertices of the simplex based on their objective function values and go to Step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method
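The steps above can be sketched in Python. This is a hedged illustration, not the authors' MATLAB implementation: the default coefficient values, the 0.5 perturbation used to build the initial simplex, and the tolerance-based stopping rule are assumptions.

```python
def nelder_mead(f, x0, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5,
                max_iter=500, tol=1e-10):
    """Minimal pure-Python sketch of steps (2)-(8) of Algorithm 2.
    Coefficient names follow the pseudocode; the defaults are the usual
    textbook choices, not values taken from the paper."""
    n = len(x0)
    # a + t*(b - a): a point on the line through a and b
    lin = lambda a, b, t: [ai + t * (bi - ai) for ai, bi in zip(a, b)]
    # initial simplex: x0 plus n vertices perturbed along each axis (assumed step 0.5)
    simplex = [list(x0)] + [
        [x + (0.5 if j == i else 0.0) for j, x in enumerate(x0)]
        for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)                        # (2) order vertices by f
        if abs(f(simplex[-1]) - f(simplex[0])) < tol:
            break                                  # (8) stopping criterion
        worst = simplex[-1]
        m = [sum(v[j] for v in simplex[:-1]) / n   # (3) centroid of the best n
             for j in range(n)]
        xr = lin(worst, m, 1 + alpha)              # (4) reflection: m + alpha*(m - worst)
        if f(simplex[0]) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(simplex[0]):                # (5) expansion
            xe = lin(m, xr, 1 + gamma)             # xr + gamma*(xr - m)
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                      # (6) contraction
            anchor = xr if f(xr) < f(worst) else worst
            xc = lin(m, anchor, beta)              # beta*anchor + (1 - beta)*m
            if f(xc) < min(f(xr), f(worst)):
                simplex[-1] = xc
            else:                                  # (7) shrink toward the best vertex
                simplex = [simplex[0]] + [lin(simplex[0], v, delta)
                                          for v in simplex[1:]]
    return min(simplex, key=f)

# usage: minimize a simple quadratic with minimum at (1, 2)
opt = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, [0.0, 0.0])
```

With the quadratic above, the returned vertex approaches (1, 2) well within the iteration budget.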

The total number of e-dimensional volumes N_v can be formulated as

N_v = Π_{g=1}^{e} n_g. (30)

(5) The starting point of the agent in the environment, which is one point inside the volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice is calculated as

[(Q_i1 - Q_f1)/2, (Q_i2 - Q_f2)/2, ..., (Q_ie - Q_fe)/2]. (31)
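As a small numeric illustration of (29)-(31), the partitioning can be sketched as follows. The function name is hypothetical, and the absolute value in the step count is an assumption to keep n_g positive regardless of the order of Q_gi and Q_gf.

```python
def partition(params, lengths):
    """Sketch of the volume partitioning in steps (1)-(5).
    params:  list of (Q_gi, Q_gf) ranges, one per parameter axis
    lengths: step length L_g for each axis"""
    # (29): number of steps per axis, n_g = (Q_gi - Q_gf) / L_g
    steps = [abs(qi - qf) / lg for (qi, qf), lg in zip(params, lengths)]
    # (30): total number of e-dimensional volumes, N_v = prod_g n_g
    n_volumes = 1
    for n in steps:
        n_volumes *= n
    # (31) as printed: one coordinate per axis for the starting point
    start = [(qi - qf) / 2 for qi, qf in params]
    return steps, n_volumes, start

# usage: two parameters with ranges [0, 10] and [0, 4], step lengths 2 and 1
steps, nv, start = partition([(10, 0), (4, 0)], [2, 1])
# steps = [5.0, 4.0], nv = 20.0
```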

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighbor nest site for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search for food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for a food particle, report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


[Figure: flowchart showing initialization (coefficients α, γ, β, δ; objective function f(A)), ordering and labeling of the vertices by f(A), midpoint calculation A_m for the two best vertices, then the reflection (A_r = A_m + α(A_m - A_w)), expansion (A_e = A_r + γ(A_r - A_m)), contraction (A_c = βA_r + (1 - β)A_m), and shrinkage (A_i = δA_i + A_1(1 - δ)) processes, repeated until the termination criteria are met.]

Figure 5: Workflow of the Modified Nelder-Mead Method


Multi-Objective Directed Bee Colony Optimization

(1) Initialization:
    f(x) is the objective function to be minimized.
    Initialize the e parameters and the step lengths L_g, where g = 1 to e.
    Initialize the initial and final values of each parameter as Q_gi and Q_gf.
    ** Solution Representation **
    The solutions are represented as binary values, which can be generated as follows:
    for each solution i = 1, ..., n:
        X_i = {x_i1, x_i2, ..., x_id | d in total days and x_id = (rand >= 0.29) for all d}
    end for
(2) Calculate the number of steps on each axis using n_g = (Q_gi - Q_gf) / L_g.
(3) Calculate the total number of volumes using N_v = Π_{g=1}^{e} n_g.
(4) Calculate the midpoint of each volume, the starting point of the exploration, using [(Q_i1 - Q_f1)/2, (Q_i2 - Q_f2)/2, ..., (Q_ie - Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method (Algorithm 2).
(6) Record the values of the optimized points in the vector table [f(V_1), f(V_2), ..., f(V_Nv)].
(7) Choose the globally optimized point based on the bee decision-making process using the consensus and quorum method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_Nv)].

Algorithm 3: Pseudocode of MODBCO
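The binary solution representation in step (1) can be sketched as below. The function name and the fixed seed are illustrative assumptions; 0.29 is the threshold written in the pseudocode, and each bit marks whether a day is assigned.

```python
import random

def initial_solutions(n_solutions, total_days, threshold=0.29, seed=42):
    """Sketch of Algorithm 3, step (1): x_id = 1 when a uniform random
    draw is >= the threshold (0.29, as written in the pseudocode)."""
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    return [[1 if rng.random() >= threshold else 0
             for _ in range(total_days)]
            for _ in range(n_solutions)]

# usage: a population of 5 candidate rosters over a 28-day period
pop = initial_solutions(n_solutions=5, total_days=28)
```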

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimization cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This formulation finds the minimal optimal cases among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V, which consists of e elements. Each element contains the value of a parameter; both the optimal solutions and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement within the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached so that the colony can start acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_Nv)]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained by the group decision-making process. When a solution reaches the optimal threshold level ξ_q, it is considered the final solution, based on a unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases but the experimental results become inaccurate; the threshold should therefore be chosen to attain low computational time with accurate experimental results.
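A minimal sketch of the consensus rule (33) together with a quorum-style acceptance follows. The vector-table layout (label, objective value) and the first-to-reach-threshold acceptance rule are illustrative assumptions, not the paper's exact data structures.

```python
def consensus(vector_table):
    """Consensus, eq. (33): the globally optimized point is the recorded
    entry with the minimal objective value."""
    return min(vector_table, key=lambda entry: entry[1])

def quorum(vector_table, threshold):
    """Quorum sketch (assumed rule): accept the first recorded solution
    whose objective value reaches the threshold level."""
    for entry in vector_table:
        if entry[1] <= threshold:
            return entry
    return None  # no solution has reached the quorum threshold yet

# vector table of (volume label, objective value) pairs, one entry per volume
table = [("V1", 58.0), ("V2", 51.0), ("V3", 56.0)]
best = consensus(table)         # the minimal entry
accepted = quorum(table, 56.0)  # first entry at or below the threshold
```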

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparison with five different competitor methods. Six performance metrics are considered to investigate the significance of, and evaluate, the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_NRP-Instance - fitness_i) / Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence (AC) is a measure of the quality of the generated population on average: the percentage average of the convergence rates of the solutions. The Average Convergence improves the convergence time by exploring more solutions in the population. It is calculated using

AC = Σ_{i=1}^{r} [1 - (Avg_fitness_i - Optimal_NRP-Instance) / Optimal_NRP-Instance] * 100, (35)

where r is the number of instances in the given dataset.
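Formulas (34) and (35) are direct to compute; a hedged sketch follows. The sign convention matches (34) as printed, so a best value worse than the optimum contributes negatively; the division by r in AC is assumed, since the text calls it an average.

```python
def least_error_rate(optimal, best_fitness):
    """LER, eq. (34) as printed: summed relative difference between the
    known optimum and the best value obtained, per instance."""
    return sum((o - b) / o for o, b in zip(optimal, best_fitness))

def average_convergence(optimal, avg_fitness):
    """AC, eq. (35): average closeness of the population to the known
    optimum, as a percentage (division by r assumed for the average)."""
    r = len(optimal)
    return sum(1 - (a - o) / o
               for o, a in zip(optimal, avg_fitness)) / r * 100

# usage: one instance whose known optimum is 50 and whose best value is 45
gap = least_error_rate([50.0], [45.0])   # (50 - 45) / 50 = 0.1
```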


[Figure: flowchart showing initialization (parameter e; objective function f(X); step length L_g; initial parameter Q_gi; final parameter Q_gf), calculation of the number of steps n_g = (Q_gi - Q_gf)/L_g, the number of volumes N_v = Π_{g=1}^{e} n_g, and the midpoint value [(Q_i1 - Q_f1)/2, (Q_i2 - Q_f2)/2, ..., (Q_ie - Q_fe)/2]; exploration of the search space using MNMM; the waggle dance; consensus and quorum with a threshold level; and calculation of the optimum solution leading to the global optimum.]

Figure 6: Flowchart of MODBCO


6.1.3. Standard Deviation. The standard deviation (SD) is the measure of the dispersion of a set of values from their mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

ASD = sqrt( Σ_{i=1}^{r} (value obtained in instance_i - mean value of the instance)^2 ), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and worst convergence rates generated in the population:

CD = Convergence_best - Convergence_worst, (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset and can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i - Cost_NRP-Instance) / total number of instances, (38)

where r is the number of instances in the given dataset.
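The remaining metrics (36)-(38) can be sketched directly from the formulas as printed. Whether (36) divides by r before taking the root is not shown in the text, so it is omitted here; the function names are illustrative.

```python
def average_standard_deviation(values, means):
    """ASD, eq. (36) as printed: root of the summed squared deviations of
    each instance's value from that instance's mean."""
    return sum((v - m) ** 2 for v, m in zip(values, means)) ** 0.5

def convergence_diversity(best_rate, worst_rate):
    """CD, eq. (37): spread between the best and worst convergence rates."""
    return best_rate - worst_rate

def average_cost_diversion(costs, known_costs):
    """ACD, eq. (38): average difference between the obtained cost and the
    known cost over all instances."""
    r = len(costs)
    return sum(c - k for c, k in zip(costs, known_costs)) / r

# usage: two instances whose obtained costs exceed the known costs by 4 each
acd = average_cost_diversion([60.0, 70.0], [56.0, 66.0])  # 4.0
```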

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) organized by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided, based on complexity and size, into three tracks: sprint, medium, and long. Each track is divided into four types, early, late, hidden, and hint, with reference to the competition INRC2010. The first track, sprint, is the easiest; it involves 10 nurses and 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days, with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it involves 30 to 31 nurses and 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days, with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days, with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is given in Table 3. The datasets are classified into twelve cases based on instance size, listed in Table 4.

Table 3 describes the datasets in detail: columns one to three index the dataset by track, type, and instance. Columns four to seven give the numbers of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. Nurse preferences are managed by the shift-off and day-off columns (nine and ten). The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off in the corresponding dataset.

Table 4 lists the datasets used in the experiment, classified by size: the datasets in cases 1 to 4 are small, those in cases 5 to 8 are medium, and the larger datasets are in cases 9 to 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. Experiments are run with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Parameters of the proposed system are set by empirical evaluation; appropriate parameter values were determined in preliminary experiments. The competitor methods chosen to evaluate the performance of the proposed algorithm are listed in Table 5, and the heuristic parameters and their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures for validating the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01-10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03-05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01-05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01-04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01-05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01-04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

such as ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods for comparison.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental results with respect to best value.

Instance | Optimal | MODBCO (Best, Worst) | M1 (Best, Worst) | M2 (Best, Worst) | M3 (Best, Worst) | M4 (Best, Worst) | M5 (Best, Worst)
Sprint early 01 56 56 75 63 74 57 81 59 75 58 77 58 77
Sprint early 02 58 59 77 66 89 59 80 61 82 64 88 60 85
Sprint early 03 51 51 68 60 83 52 75 54 70 59 84 53 67
Sprint early 04 59 59 77 68 86 60 76 63 80 67 84 61 85
Sprint early 05 58 58 74 65 86 59 77 60 79 63 86 60 84
Sprint early 06 54 53 69 59 81 55 79 57 73 58 75 56 77
Sprint early 07 56 56 75 62 81 58 79 60 75 61 85 57 78
Sprint early 08 56 56 76 59 79 58 79 59 80 58 82 57 78
Sprint early 09 55 55 82 59 79 57 76 59 80 61 78 56 80
Sprint early 10 52 52 67 58 76 54 73 55 75 58 78 53 76
Sprint hidden 01 32 32 48 45 67 34 57 43 60 46 65 33 55
Sprint hidden 02 32 32 51 41 59 34 55 37 53 44 68 33 51
Sprint hidden 03 62 62 79 74 92 63 86 71 90 78 96 62 84
Sprint hidden 04 66 66 81 79 102 67 91 80 96 78 100 66 91
Sprint hidden 05 59 58 73 68 90 60 79 63 84 69 88 59 77
Sprint hidden 06 134 128 146 164 187 131 145 203 219 169 188 132 150
Sprint hidden 07 153 154 172 181 204 154 175 197 215 187 210 155 174
Sprint hidden 08 204 201 219 246 265 205 225 267 286 240 260 206 224
Sprint hidden 09 338 337 353 372 390 339 363 274 291 372 389 340 365
Sprint hidden 10 306 306 324 328 347 307 330 347 368 322 342 308 324
Sprint late 01 37 37 53 50 68 38 57 46 66 52 72 39 57
Sprint late 02 42 41 61 53 74 43 61 50 71 56 80 44 67
Sprint late 03 48 45 65 57 80 50 73 57 76 60 77 49 72
Sprint late 04 75 71 87 88 108 75 95 106 127 95 115 74 99
Sprint late 05 44 46 66 52 65 46 68 53 74 57 74 46 70
Sprint late 06 42 42 60 46 65 44 68 45 64 52 70 43 62
Sprint late 07 42 44 63 51 69 46 69 62 81 55 78 44 68
Sprint late 08 17 17 36 17 37 19 41 19 39 19 40 17 39
Sprint late 09 17 17 33 18 37 19 42 19 40 17 40 17 40
Sprint late 10 43 43 59 56 74 45 67 56 74 54 71 43 62
Sprint hint 01 78 73 92 85 103 77 91 103 119 90 108 77 99
Sprint hint 02 47 43 59 57 76 48 68 61 80 56 81 45 68
Sprint hint 03 57 49 67 74 96 52 75 79 97 69 93 53 66
Medium early 01 240 245 263 262 284 247 267 247 262 280 305 250 269
Medium early 02 240 243 262 263 280 247 270 247 263 281 301 250 273
Medium early 03 236 239 256 261 281 244 264 244 262 287 311 245 269
Medium early 04 237 245 262 259 278 242 261 242 258 278 297 247 272
Medium early 05 303 310 326 331 351 310 332 310 329 330 351 313 338
Medium hidden 01 123 143 159 190 210 157 180 157 177 410 429 192 214
Medium hidden 02 243 230 248 286 306 256 277 256 273 412 430 266 284
Medium hidden 03 37 53 70 66 84 56 78 56 69 182 203 61 81
Medium hidden 04 81 85 104 102 119 95 119 95 114 168 191 100 124
Medium hidden 05 130 182 201 202 224 178 202 178 197 520 545 194 214
Medium late 01 157 176 195 207 227 175 198 175 194 234 257 179 194
Medium late 02 18 30 45 53 76 32 52 32 53 49 67 35 55
Medium late 03 29 35 52 71 90 39 60 39 59 59 78 44 66
Medium late 04 35 42 58 66 83 49 57 49 70 71 89 48 70
Medium late 05 107 129 149 179 199 135 156 135 156 272 293 141 164
Medium hint 01 40 42 62 70 90 49 71 49 65 64 82 50 71
Medium hint 02 84 91 107 142 162 95 116 95 115 133 158 96 115
Medium hint 03 129 135 153 188 209 141 161 141 152 187 208 148 166


Table 7: Continued.

Instance | Optimal | MODBCO (Best, Worst) | M1 (Best, Worst) | M2 (Best, Worst) | M3 (Best, Worst) | M4 (Best, Worst) | M5 (Best, Worst)
Long early 01 197 194 209 241 259 200 222 200 221 339 362 220 243
Long early 02 219 228 245 276 293 232 254 232 253 399 419 253 275
Long early 03 240 240 258 268 291 243 257 243 262 349 366 251 273
Long early 04 303 303 321 336 356 306 324 306 320 411 430 319 342
Long early 05 284 284 300 326 347 287 310 287 308 383 403 301 321
Long hidden 01 346 389 407 444 463 403 422 403 421 4466 4488 422 442
Long hidden 02 90 108 126 132 150 120 139 120 139 1071 1094 128 148
Long hidden 03 38 48 63 61 78 54 72 54 75 163 181 54 79
Long hidden 04 22 27 45 49 71 32 54 32 51 113 132 36 58
Long hidden 05 41 55 71 78 99 59 81 59 70 139 157 60 83
Long late 01 235 249 267 290 310 260 278 260 276 588 606 262 286
Long late 02 229 261 280 295 318 265 278 265 281 577 595 263 281
Long late 03 220 259 275 307 325 264 285 264 285 567 588 272 295
Long late 04 221 257 292 304 323 263 284 263 281 604 627 276 294
Long late 05 83 92 107 142 161 104 122 104 125 329 349 118 138
Long hint 01 31 40 58 53 73 44 67 44 65 126 150 50 72
Long hint 02 17 29 47 40 62 32 55 32 51 122 145 36 61
Long hint 03 53 79 137 117 135 85 104 85 101 278 303 102 123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple-comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies significant and nonsignificant differences between any two methods: it ranks the methods by their mean values in increasing or decreasing order and groups methods that are not significantly different.
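The one-way ANOVA decision described above reduces to computing an F statistic from between-group and within-group variability; a pure-Python sketch follows (the sample data in the usage line are made up, not the paper's measurements).

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: ratio of the between-group to the
    within-group mean square for k groups with n total observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares, df = n - k
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# usage: best values of three hypothetical methods on the same instances
f_stat = one_way_anova_f([[56, 58, 51], [63, 66, 60], [57, 59, 52]])
```

The resulting F is compared against the critical value at the chosen significance level (0.05 here) to accept or reject the null hypothesis.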

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed MODBCO algorithm is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. To evaluate the performance of the algorithms, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061.949 | 5 | 212.3898 | 3.620681 | 0.003
Within groups | 23933.354 | 408 | 58.66018 | |
Total | 24995.303 | 413 | | |

(b) DMRT test (Duncan test: best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 120.2319 | -
M2 | 69 | 124.1304 | -
M5 | 69 | 128.087 | -
M3 | 69 | 129.3478 | -
M1 | 69 | 143.1594 | -
M4 | 69 | - | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse sizes. MODBCO evidently accomplished 34 best results out of the 69 instances.

The statistical analysis tests, ANOVA and DMRT, for the best values are shown in Table 8. The significance values are less than 0.05, which shows that the null hypothesis is rejected. A significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | -0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | -0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | -0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | -9.65 | 20.03 | -2.64 | 33.48 | 18.53 | -4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | -2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

[Figure: four bar charts comparing the error rate (%) of MODBCO, M1, M2, M3, M4, and M5 on the NRP instances, grouped as Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Figure 7: Performance analysis with respect to error rate

various optimization algorithms is observed. The DMRT test shows the homogeneous groupings: two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; of the 33 sprint-type instances, 18 achieved a zero error rate. For the sprint-type datasets, 88% of instances attained a lower error rate; for the medium and larger sized datasets, the obtained rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than that specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage, but as the size of the dataset increases they are unable to find the optimal solution and become trapped in local optima. The error rates (%) obtained using MODBCO and the other algorithms are shown in Figure 7.


[Figure: four bar charts comparing the Average Convergence (%) of MODBCO, M1, M2, M3, M4, and M5 on the NRP instances, grouped as Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Figure 8: Performance analysis with respect to Average Convergence

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test (Duncan test: error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 5.402238 | -
M2 | 69 | 13.77936 | -
M5 | 69 | 17.31724 | -
M3 | 69 | 20.99903 | -
M1 | 69 | 36.49033 | -
M4 | 69 | - | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis; thus there is a significant difference in the values across the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of a solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small instances and an 82% convergence rate on the medium instances; for the long instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviate from the optimal solution and are trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse for many algorithms on the larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.

The statistical test results for Average Convergence with the different optimization algorithms are given in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | -139.09 | 39.82
Case 7 | 56.34 | -32.73 | 24.52 | 26.23 | -54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | -584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | -94.85 | 66.95
Case 12 | 15.13 | -46.99 | -22.73 | -9.46 | -411.18 | -53.43

[Figure: four bar charts comparing the Average Standard Deviation of MODBCO, M1, M2, M3, M4, and M5 on the NRP instances, grouped as Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Figure 9: Performance analysis with respect to Average Standard Deviation

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the values attained by the various optimization algorithms are depicted in Figure 9.

The statistical test results for the Average Standard Deviation with the different optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation among the different optimization algorithms. The ANOVA test proves that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


[Figure: four bar charts comparing the Convergence Diversity of MODBCO, M1, M2, M3, M4, and M5 on the NRP instances, grouped as Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Figure 10: Performance analysis with respect to Convergence Diversity

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test

Source factor: Average Convergence

                 Sum of squares   df    Mean square   F          Sig.
Between groups   71264.2475       5     14252.8495    15.47047   0.000
Within groups    375887.826       408   921.29369
Total            447152.073       413

(b) DMRT test

Duncan test: Average Convergence

Method    N     Subset for alpha = 0.05
                1           2
M4        69    −47.6945
M1        69                46.4245
M5        69                53.6879
M3        69                57.6296
M2        69                59.5103
MODBCO    69                81.6997
Sig.            1.000       0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58 and 50, respectively; for the larger datasets, the Convergence Diversity is 62, to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are reported in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. From the DMRT test, which groups the various algorithms based on their mean values, there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO      M1          M2          M3          M4          M5
Case 1    6.501382    8.551285    9.83255     8.645203    10.57402    9.239032
Case 2    5.275281    8.170864    11.33125    8.472083    10.55361    7.382535
Case 3    5.947749    8.474928    8.898058    8.337811    11.93913    8.033767
Case 4    5.270417    5.310443    10.24612    9.50821     9.092851    8.049586
Case 5    7.035945    9.400025    7.790991    10.28423    13.45243    11.46945
Case 6    7.15779     10.13578    9.109779    9.995431    13.54185    8.020126
Case 7    7.502947    9.069255    9.974609    8.341374    11.50138    10.60612
Case 8    7.861593    9.799198    7.447106    10.05138    11.63378    10.61275
Case 9    9.310626    13.83453    9.437943    12.13188    13.56668    10.54568
Case 10   10.42404    15.14141    11.87235    10.59549    14.15797    12.96181
Case 11   10.25454    14.68924    10.33091    10.30386    11.21071    13.3541
Case 12   10.6846     13.49546    10.75478    15.40791    14.42514    11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion. [One panel plots the cost diversion (y-axis, 0.00–600.00) of MODBCO and M1–M5 against the NRP instances, Cases 1–12.]

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value. The larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instance has achieved a new optimized value. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are reported in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test

Source factor: Average Standard Deviation

                 Sum of squares   df    Mean square   F         Sig.
Between groups   697.4407         5     139.4881      13.6322   0.000
Within groups    4174.76          408   10.23226
Total            4872.201         413

(b) DMRT test

Duncan test: Average Standard Deviation

Method    N     Subset for alpha = 0.05
                1          2           3
MODBCO    69    7.4743
M2        69               9.615152
M3        69               9.677027
M5        69               9.729477
M1        69               10.13242
M4        69                           11.9316
Sig.            1.000      0.394       1.000

the various optimization algorithms with respect to the mean values of the cost diversion.
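The one-way ANOVA used throughout Tables 12, 14, 16, and 18 partitions the total variability of the results into a between-algorithm and a within-algorithm sum of squares and forms the F statistic from their mean squares. The pure-Python sketch below illustrates only that decomposition; the group data are made-up toy values, not the INRC2010 results.

```python
def one_way_anova(groups):
    """Return (SS_between, SS_within, df_between, df_within, F)."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-groups SS: weighted deviation of each group mean from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups SS: deviation of each observation from its own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, f_stat

# Toy example: three "algorithms", three runs each
ssb, ssw, dfb, dfw, f = one_way_anova([[1, 2, 3], [2, 3, 4], [8, 9, 10]])
```

A large F with a significance below the critical value 0.05 rejects the null hypothesis of equal means, after which a post hoc test such as Duncan's multiple range test (DMRT) sorts the algorithms into homogeneous subsets.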

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm, MODBCO. Various existing algorithms are chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO    M1       M2       M3       M4       M5
Case 1    30.00     35.25    37.28    32.83    37.94    38.83
Case 2    22.96     27.92    29.18    23.85    27.95    27.62
Case 3    53.05     56.21    64.66    59.29    60.79    65.05
Case 4    29.99     34.03    33.62    30.84    39.46    33.32
Case 5    7.01      7.87     8.33     6.71     8.77     9.29
Case 6    20.89     22.21    26.98    19.29    25.45    24.87
Case 7    43.69     54.66    48.13    55.47    50.24    56.18
Case 8    27.67     30.03    31.83    24.11    30.35    29.69
Case 9    6.89      8.10     8.22     8.04     8.24     9.10
Case 10   37.10     44.29    45.53    38.95    41.91    49.98
Case 11   11.43     11.64    10.81    11.36    11.91    12.15
Case 12   91.13     75.96    81.78    69.90    86.63    85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test

Source factor: Convergence Diversity

                 Sum of squares   df    Mean square   F          Sig.
Between groups   5144.758         5     1028.952      9.287168   0.000
Within groups    45203.48         408   110.7928
Total            50348.24         413

(b) DMRT test

Duncan test: Convergence Diversity

Method    N     Subset for alpha = 0.05
                1            2           3
M3        69    18.363232
MODBCO    69    18.42029
M1        69                 19.68116
M2        69                 20.57971
M4        69                 20.68116
M5        69                             21.2464
Sig.            0.919        0.096       1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO attains the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained a lesser worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reached 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% when compared with the other competitor methods.

The proposed system is tested on the larger sized datasets, and it works astoundingly better than the other techniques. The incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature. Thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO    M1       M2       M3       M4        M5
Case 1    0.00      6.40     1.40     3.20     5.20      1.60
Case 2    −1.00     21.20    0.80     19.60    21.90     0.80
Case 3    −0.40     8.10     1.80     10.60    11.00     0.90
Case 4    −5.67     11.33    −1.67    20.33    11.00     −2.33
Case 5    5.20      24.00    6.80     6.80     40.00     9.80
Case 6    15.80     46.40    25.60    25.60    215.60    39.80
Case 7    13.20     46.00    16.80    16.80    67.80     20.20
Case 8    5.00      49.00    10.67    10.67    43.67     13.67
Case 9    1.20      40.80    5.00     5.00     127.60    20.20
Case 10   18.00     45.40    26.20    26.20    100.00    32.60
Case 11   26.00     70.00    33.60    33.60    335.40    40.60
Case 12   15.67     36.33    20.00    20.00    141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion

                 Sum of squares   df    Mean square   F          Sig.
Between groups   10619.49         5     2123.898      4.919985   0.000
Within groups    176128.67        408   431.6879
Total            186748.16        413

(b) DMRT test

Duncan test: Average Cost Diversion

Method    N     Subset for alpha = 0.05
                1            2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69                 149.5217
Sig.            0.573558     1.000

Optimization in solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. K. Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. González-Rodríguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.


Figure 4: Bees search movement based on MNMM. [Diagram of the simplex vertices A_b, A_w, A_r, A_s, A_m and the derived points A_e, A_c1, A_c2, and A_new produced by the reflection, expansion, and contraction moves.]

function and advertise the solution; the bee which has the better solution is given higher priority. The remaining bees, based on the probability value, decide whether to explore the solution or proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained by using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total varieties of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of the food particle by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bee starts its backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized point through the natural behavior of bees called the waggle dance. When all the information about the best food is shared, the best among the optimized points is chosen using the decision-making processes of honey bees called the consensus and quorum methods [34, 35].

5.1. Multiagent System. All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain the goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent

should be capable of interacting with other agents in a local environment; and agents attempt to perform their goal. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In a multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice points in the environment depend upon the variables used. The objective function can be calculated based on the parameters fixed.

(1) Consider e independent parameters used to calculate the objective function. The range of the g-th parameter is [Q_gi, Q_gf], where Q_gi is the initial value and Q_gf is the final value of the g-th parameter chosen.

(2) Thus the objective function can be formulated over e axes; each axis contains the total range of a single parameter with a different dimension.

(3) Each axis is divided into smaller parts, each part called a step. The g-th axis is divided into n_g steps, each with length L_g, where the value of g depends upon the parameters; thus g = 1 to e. The relationship between n_g and L_g is given by

n_g = (Q_gi − Q_gf) / L_g.  (29)

(4) Then each axis is divided into branches; for each branch, the g branches will form an


Modified Nelder-Mead Method for directed honey bee food search

(1) Initialization:
    A_i denotes the list of vertices in the simplex, where i = 1, 2, ..., n + 1.
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
    f is the objective function to be minimized.

(2) Ordering:
    Order the vertices in the simplex from the lowest objective function value f(A_1) to the highest value f(A_{n+1}), so that A_1 ≤ A_2 ≤ ... ≤ A_{n+1}.

(3) Midpoint:
    Calculate the midpoint of the best vertices in the simplex: A_m = Σ (A_i / n), where i = 1, 2, ..., n.

(4) Reflection Process:
    Calculate the reflection point A_r by A_r = A_m + α(A_m − A_{n+1}).
    if f(A_1) ≤ f(A_r) ≤ f(A_n) then A_n ← A_r and go to Step (8).

(5) Expansion Process:
    if f(A_r) ≤ f(A_1) then calculate the expansion point using A_e = A_r + γ(A_r − A_m).
    if f(A_e) < f(A_r) then A_n ← A_e and go to Step (8);
    else A_n ← A_r and go to Step (8).

(6) Contraction Process:
    if f(A_n) ≤ f(A_r) ≤ f(A_{n+1}) then compute the outside contraction by A_c = βA_r + (1 − β)A_m.
    if f(A_r) ≥ f(A_{n+1}) then compute the inside contraction by A_c = βA_{n+1} + (1 − β)A_m.
    if f(A_r) ≥ f(A_n) then the contraction is done between A_m and the best vertex among A_r and A_{n+1}.
    if f(A_c) < f(A_r) then A_n ← A_c and go to Step (8); else go to Step (7).
    if f(A_c) ≥ f(A_{n+1}) then A_{n+1} ← A_c and go to Step (8); else go to Step (7).

(7) Shrinkage Process:
    Shrink towards the best solution with new vertices A_i = δA_i + A_1(1 − δ), where i = 2, ..., n + 1.

(8) Stopping Criteria:
    Order and relabel the new vertices of the simplex based on their objective function values and go to Step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.
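As an illustration only, the cycle in Algorithm 2 can be sketched in Python. This is not the authors' implementation: it assumes a generic continuous objective, uses the standard centroid of all vertices except the worst as the midpoint A_m, and applies only the inside contraction.

```python
def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5, iters=300):
    """Minimize f over a simplex of (n + 1) vertices, each a list of n floats."""
    n = len(simplex) - 1
    for _ in range(iters):
        simplex.sort(key=f)                       # Step (2): order the vertices
        best, worst = simplex[0], simplex[-1]
        # Step (3): centroid of every vertex except the worst one
        mid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        # Step (4): reflection, A_r = A_m + alpha * (A_m - A_{n+1})
        refl = [mid[i] + alpha * (mid[i] - worst[i]) for i in range(n)]
        if f(best) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        elif f(refl) < f(best):
            # Step (5): expansion, A_e = A_r + gamma * (A_r - A_m)
            exp = [refl[i] + gamma * (refl[i] - mid[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        else:
            # Step (6): inside contraction, A_c = beta * A_{n+1} + (1 - beta) * A_m
            con = [beta * worst[i] + (1 - beta) * mid[i] for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:
                # Step (7): shrink every vertex towards the best, delta*A_i + (1-delta)*A_1
                simplex = [best] + [
                    [delta * v[i] + (1 - delta) * best[i] for i in range(n)]
                    for v in simplex[1:]
                ]
    return min(simplex, key=f)
```

For instance, minimizing f(x, y) = (x − 1)² + (y + 2)² from the starting simplex [[0, 0], [1, 0], [0, 1]] converges to a point near (1, −2).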

e-dimensional volume. The total number of volumes N_v can be formulated using

N_v = ∏_{g=1}^{e} n_g.  (30)

(5) The starting point of the agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].  (31)
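Steps (1)–(5) above, i.e., equations (29)–(31), can be exercised with a short sketch; the parameter ranges and step lengths below are illustrative assumptions, not values from the NRP model.

```python
def partition_search_space(ranges, step_lengths):
    """ranges: list of (Q_gi, Q_gf) per parameter; step_lengths: L_g per axis."""
    # Eq. (29): number of steps per axis, n_g = (Q_gi - Q_gf) / L_g
    steps = [round((qi - qf) / lg) for (qi, qf), lg in zip(ranges, step_lengths)]
    # Eq. (30): total number of e-dimensional volumes, N_v = product of the n_g
    volumes = 1
    for n in steps:
        volumes *= n
    # Eq. (31): midpoint used as the starting point of each bee's exploration
    midpoint = [(qi - qf) / 2 for qi, qf in ranges]
    return steps, volumes, midpoint

# Two toy parameters with ranges (10, 0) and (8, 0) and step length 2 on each axis
steps, volumes, midpoint = partition_search_space([(10, 0), (8, 0)], [2, 2])
```

Each of the N_v volumes is then explored by a separate bee via the MNMM local search, starting from its midpoint.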

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, and this is done by a process of decision-making that produces a unified decision. They follow a distributed decision-making process to find the neighboring nest site for their food particles. The pseudocode for the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for a food particle, report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


Figure 5: Workflow of Modified Nelder-Mead Method. [Flowchart of the reflection, expansion, contraction, and shrinkage processes, from the initialization of the coefficients α, γ, β, δ and the objective function f(A) through the ordering, midpoint, and update steps to the termination criteria.]


Multi-Objective Directed Bee Colony Optimization

(1) Initialization:
    f(x) is the objective function to be minimized.
    Initialize the e parameters and the step lengths L_g, where g = 0 to e.
    Initialize the initial and final values of each parameter as Q_gi and Q_gf.
    ** Solution Representation **
    The solutions are represented in the form of binary values, which can be generated as follows:
    for each solution i = 1, ..., n:
        X_i = {x_i1, x_i2, ..., x_id | d ∈ total days and x_id = (rand ≥ 0.29)} for all d
    end for

(2) The number of steps on each axis can be calculated using n_g = (Q_gi − Q_gf) / L_g.

(3) The total number of volumes can be calculated using N_v = ∏_{g=1}^{e} n_g.

(4) The midpoint of the volume, used to calculate the starting point of the exploration, can be calculated using [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].

(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.

(6) Record the values of the optimized points in the vector table using [f(V_1), f(V_2), ..., f(V_{N_v})].

(7) The globally optimized point is chosen through the bee decision-making process using the consensus and quorum method approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO.

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimal cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)).  (32)

This mathematical formulation finds the minimal optimal case among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V. V is the vector which consists of e elements; each element contains the value of a parameter, and both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is the widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached, so that they can start acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].  (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on the threshold level obtained by the group decision-making process. When the solution reaches the optimal threshold level ξ_q, the solution is considered final based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is lower, the computation time decreases, but it can lead to inaccurate experimental results. The threshold value should be chosen to attain less computational time together with an accurate experimental result.
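Computationally, the waggle dance, consensus, and quorum steps in equations (32) and (33) reduce to taking minima and checking a threshold. The sketch below uses illustrative function names and toy fragment values (the threshold and the fragment data are assumptions, not values from the paper).

```python
def waggle_dance(fragment_values):
    # Eq. (32): each scout advertises the best (minimal) value of its fragment
    return min(fragment_values)

def consensus(vector_table):
    # Eq. (33): the global optimum is the minimum over all advertised fragment optima
    return min(vector_table)

def quorum_reached(value, threshold):
    # Accept the candidate once its quality reaches the quorum threshold xi_q
    return value <= threshold

# Three toy fragments; the vector table holds one advertised optimum per fragment
table = [waggle_dance(v) for v in [[9, 4, 7], [5, 8], [6, 2, 3]]]
best = consensus(table)  # -> 2
```

A lower quorum threshold accepts solutions sooner, which is exactly the speed-versus-accuracy trade-off described above.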

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Six performance metrics are considered to investigate the significance of the algorithm and evaluate the experimental results; the metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

$$\mathrm{LER}\ (\%) = \sum_{i=1}^{r} \frac{\mathrm{Optimal}_{\text{NRP-Instance}} - \mathrm{fitness}_{i}}{\mathrm{Optimal}_{\text{NRP-Instance}}}. \tag{34}$$

6.1.2. Average Convergence. The Average Convergence is a measure of the quality of the generated population on average: the Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions. A higher Average Convergence improves the convergence time by exploring more solutions in the population. The Average Convergence is calculated using

$$\mathrm{AC} = \sum_{i=1}^{r} \left(1 - \frac{\mathrm{Avg\_fitness}_{i} - \mathrm{Optimal}_{\text{NRP-Instance}}}{\mathrm{Optimal}_{\text{NRP-Instance}}}\right) \times 100, \tag{35}$$

where \(r\) is the number of instances in the given dataset.
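Equations (34) and (35) are simple reductions over the per-instance results. The following Python sketch (with invented sample numbers, and assuming lower fitness is better, as in the NRP cost objective) shows both:

```python
# Hedged sketch of the LER (34) and AC (35) metrics; sample data invented.

def least_error_rate(optima, best_values):
    """LER per equation (34): summed relative gap between the known
    optimum and the best value obtained for each instance."""
    return sum((opt - best) / opt for opt, best in zip(optima, best_values))

def average_convergence(optima, avg_fitness):
    """AC per equation (35): one minus the relative deviation of the average
    population fitness from the known optimum, summed and scaled to %."""
    return sum(1 - (avg - opt) / opt for opt, avg in zip(optima, avg_fitness)) * 100

print(average_convergence([50], [55]))  # 90.0 (population averages 10% above optimum)
```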

14 Computational Intelligence and Neuroscience

[Figure omitted: flowchart of MODBCO. Start; initialization (objective function f(X), parameter e, step length L_g, initial parameter Q_gi, final parameter Q_gf); calculate the number of steps n_g = (Q_gi − Q_gf)/L_g; calculate the midpoint values [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]; calculate the number of volumes N = ∏_g n_g; explore the search using MNMM; waggle dance; choose method (consensus or quorum); calculate the optimum solution; check the threshold level; output the global optimum solution; stop.]

Figure 6: Flowchart of MODBCO.


6.1.3. Standard Deviation. The standard deviation (SD) is the measure of dispersion of a set of values from its mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

$$\mathrm{ASD} = \sqrt{\sum_{i=1}^{r} \bigl(\text{value obtained in instance}_{i} - \text{mean value of the instance}\bigr)^{2}}, \tag{36}$$

where \(r\) is the number of instances in the given dataset.
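Equation (36) also reduces to a one-liner. This sketch (names ours) follows the formula as printed, summing the squared deviations across instances before taking the root:

```python
import math

# Sketch of ASD per equation (36); the inputs are per-instance obtained
# values and the corresponding per-instance mean values.
def average_standard_deviation(values, means):
    return math.sqrt(sum((v - m) ** 2 for v, m in zip(values, means)))

print(average_standard_deviation([3.0, 4.0], [0.0, 0.0]))  # 5.0
```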

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. The Convergence Diversity can be calculated using

$$\mathrm{CD} = \mathrm{Convergence}_{\mathrm{best}} - \mathrm{Convergence}_{\mathrm{worst}}, \tag{37}$$

where \(\mathrm{Convergence}_{\mathrm{best}}\) is the convergence rate of the best-fitness individual and \(\mathrm{Convergence}_{\mathrm{worst}}\) is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

$$\mathrm{ACD} = \sum_{i=1}^{r} \frac{\mathrm{Cost}_{i} - \mathrm{Cost}_{\text{NRP-Instance}}}{\text{total number of instances}}, \tag{38}$$

where \(r\) is the number of instances in the given dataset.
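The last two metrics are direct translations of (37) and (38); in this sketch the helper names and sample numbers are illustrative only:

```python
# Sketch of Convergence Diversity (37) and Average Cost Diversion (38).

def convergence_diversity(best_conv, worst_conv):
    # CD: gap between the best and worst convergence rates in the population
    return best_conv - worst_conv

def average_cost_diversion(obtained_costs, known_costs):
    # ACD: cost gap to the known INRC2010 cost, averaged over the instances
    r = len(obtained_costs)
    return sum(c - k for c, k in zip(obtained_costs, known_costs)) / r

print(convergence_diversity(92, 69))               # 23
print(average_cost_diversion([56, 60], [56, 58]))  # 1.0
```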

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) Minimize the total cost of the rostering problem.
(b) Satisfy all the hard constraints described in Table 1.
(c) Satisfy as many of the soft constraints described in Table 2 as possible.
(d) Enhance resource utilization.
(e) Equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in Automated Timetabling [38]. The INRC2010 dataset is divided, based on its complexity and size, into three tracks, namely sprint, medium, and long. Each track is divided into four types, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it consists of 33 datasets with 10 nurses each, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track

is medium, which is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives a detailed description of the datasets: columns one to three index a dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that there is no shift off or day off in the corresponding datasets.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in Case 1 to Case 4 are smaller in size, Case 5 to Case 8 are medium in size, and the larger-sized datasets are classified from Case 9 to Case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are run with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set by empirical evaluation; appropriate parameter values are determined from preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are presented in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of an algorithm are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early | 01-10 | 10 | 1 | 4 | 4 | 3 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | - | - | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | - | - | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | - | - | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03-05 | 10 | 1 | 4 | 3 | 8 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | - | - | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01-05 | 31 | 1 | 4 | 4 | 0 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01-04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | - | - | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | - | - | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01-05 | 49 | 2 | 5 | 3 | 3 | - | - | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01-04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of INRC2010 datasets based on the size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show a difference in the performance of the algorithms.
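The four Nelder-Mead coefficients listed in Table 6 drive the reflection, expansion, contraction, and shrinkage moves of the local search. The following sketch (our own, not the authors' MATLAB code) shows the four moves with the textbook choices α = 1, γ = 2, β = 0.5, δ = 0.5, which satisfy the ranges in Table 6:

```python
# Illustrative Nelder-Mead moves; coefficients satisfy Table 6's ranges:
# reflection ALPHA > 0, expansion GAMMA > 1, contraction 0 < BETA < 1,
# shrinkage 0 < DELTA < 1.
ALPHA, GAMMA, BETA, DELTA = 1.0, 2.0, 0.5, 0.5

def reflect(centroid, worst):
    # move the worst vertex through the centroid of the remaining vertices
    return [c + ALPHA * (c - w) for c, w in zip(centroid, worst)]

def expand(centroid, reflected):
    # push further along a successful reflection direction
    return [c + GAMMA * (r - c) for c, r in zip(centroid, reflected)]

def contract(centroid, worst):
    # pull the worst vertex toward the centroid after a failed reflection
    return [c + BETA * (w - c) for c, w in zip(centroid, worst)]

def shrink(best, point):
    # shrink every remaining vertex toward the best one
    return [b + DELTA * (p - b) for b, p in zip(best, point)]

centroid, worst = [2.0, 2.0], [4.0, 0.0]
print(reflect(centroid, worst))  # [0.0, 4.0]
```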


Table 7: Experimental results with respect to best value.

Instance | Optimal value | MODBCO best/worst | M1 best/worst | M2 best/worst | M3 best/worst | M4 best/worst | M5 best/worst
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued.

Instance | Optimal value | MODBCO best/worst | M1 best/worst | M2 best/worst | M3 best/worst | M4 best/worst | M5 best/worst
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis is rejected.
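The one-way ANOVA decision above can be reproduced with a few lines of arithmetic. This pure-Python sketch (the three sample groups are invented, not the paper's data) computes the F statistic that is then compared against the critical value at the 0.05 level:

```python
# Minimal one-way ANOVA: F = (SS_between / df_between) / (SS_within / df_within).

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Best values of three hypothetical algorithms on the same three instances:
f, dfb, dfw = one_way_anova([[56, 58, 51], [63, 66, 60], [57, 59, 52]])
print(round(f, 2), dfb, dfw)  # 4.89 2 6
```

A large F (equivalently, a significance value below 0.05) rejects the null hypothesis that all algorithms perform equally.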

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies significant and nonsignificant differences between any two methods. The method ranks the algorithms by their mean values in increasing or decreasing order and groups those that are not significantly different.

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP using the INRC2010 datasets under a similar environmental setup, using the performance metrics discussed above. The results produced by MODBCO are more competitive than those of the previous methods listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. The numbers in the table refer to the best solutions obtained using the corresponding algorithms. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 10619.49 | 5 | 2123.898 | 3.620681 | 0.003
Within groups | 239333.54 | 408 | 586.6018 | |
Total | 249953.03 | 413 | | |

(b) DMRT test (Duncan test, best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of the 69 instances.

The statistical analysis tests ANOVA and DMRT for the best values are shown in Table 8. The significance value is less than 0.05, which shows that the null hypothesis is rejected. A significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | −0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | −0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | −0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | −9.65 | 20.03 | −2.64 | 33.48 | 18.53 | −4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | −2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

[Figure omitted: bar charts of error rate (%) for MODBCO and M1-M5 on the NRP instances, Cases 1-12.]

Figure 7: Performance analysis with respect to error rate.

the various optimization algorithms is observed. The DMRT test shows the homogeneous groups: two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 instances of sprint type, 18 achieved a zero error rate. For the sprint-type datasets, 88% of the instances attained a lower error rate; for the medium and larger-sized datasets, the corresponding figures are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than the one specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they are unable to find the optimal solution and get trapped in local optima. The error rates (%) obtained using MODBCO and the other algorithms are shown in Figure 7.


[Figure omitted: bar charts of Average Convergence (%) for MODBCO and M1-M5 on the NRP instances, Cases 1-12.]

Figure 8: Performance analysis with respect to Average Convergence.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test (Duncan test, error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis; thus there is a significant difference in the values across the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small-size instances and an 82% convergence rate on the medium-size instances; for the longer instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviate from the optimal solution and are trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for the larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.

The statistical test results for Average Convergence with the different optimization algorithms are shown in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | −139.09 | 39.82
Case 7 | 56.34 | −32.73 | 24.52 | 26.23 | −54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | −584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | −94.85 | 66.95
Case 12 | 15.13 | −46.99 | −22.73 | −9.46 | −411.18 | −53.43

[Figure omitted: bar charts of Average Standard Deviation for MODBCO and M1-M5 on the NRP instances, Cases 1-12.]

Figure 9: Performance analysis with respect to Average Standard Deviation.

in the mean values of convergence across the different optimization algorithms. The ANOVA test depicts rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to characterize the behavior of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.

The statistical test results for the Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation across the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


[Figure omitted: bar charts of Convergence Diversity for MODBCO and M1-M5 on the NRP instances, Cases 1-12.]

Figure 10: Performance analysis with respect to Convergence Diversity.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 71264.2475 | 5 | 14252.8495 | 15.47047 | 0.0000
Within groups | 375887.826 | 408 | 921.29369 | |
Total | 447152.073 | 413 | | |

(b) DMRT test (Duncan test, Average Convergence)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
M4 | 69 | −47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% toward yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are shown in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. From the DMRT test, the grouping of the various algorithms based on mean value is shown: there are three homogeneous groups


Table 13: Experimental results with respect to Average Standard Deviation.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

[Figure omitted: bar charts of Average Cost Diversion for MODBCO and M1-M5 on the NRP instances, Cases 1-12.]

Figure 11: Performance analysis with respect to Average Cost Diversion.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value; the larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are shown in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 6974.407 | 5 | 1394.881 | 13.6322 | 0.000
Within groups | 41747.6 | 408 | 102.3226 | |
Total | 48722.01 | 413 | | |

(b) DMRT test (Duncan test, Average Standard Deviation)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 | Subset 3
MODBCO | 69 | 7.4743 | |
M2 | 69 | | 9.615152 |
M3 | 69 | | 9.677027 |
M5 | 69 | | 9.729477 |
M1 | 69 | | 10.13242 |
M4 | 69 | | | 11.9316
Sig. | | 1.00 | 0.394 | 1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments for solving the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms chosen to solve the NRP are compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 7. To evaluate the performance of the proposed


Table 15: Experimental results with respect to Convergence Diversity.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 30.00 | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2 | 22.96 | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3 | 53.05 | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4 | 29.99 | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5 | 7.01 | 7.87 | 8.33 | 6.71 | 8.77 | 9.29
Case 6 | 20.89 | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7 | 43.69 | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8 | 27.67 | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9 | 6.89 | 8.10 | 8.22 | 8.04 | 8.24 | 9.10
Case 10 | 37.10 | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43 | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13 | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 51447.58 | 5 | 10289.52 | 9.287168 | 0.000
Within groups | 452034.8 | 408 | 1107.928 | |
Total | 503482.4 | 413 | | |

(b) DMRT test (Duncan test, Convergence Diversity)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 | Subset 3
M3 | 69 | 18.363232 | |
MODBCO | 69 | 18.42029 | |
M1 | 69 | | 19.68116 |
M2 | 69 | | 20.57971 |
M4 | 69 | | 20.68116 |
M5 | 69 | | | 21.2464
Sig. | | 0.919 | 0.096 | 1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO has more ability to attain the best values on the performance metrics than the competitor algorithms on INRC2010.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% toward the optimum value, and it attained a lower worst value in addition to the best solution. With the datasets divided by size into smaller, medium, and large datasets, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, is reduced to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO achieved 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% compared with the other competitor methods.

The proposed system is tested on the larger-sized datasets, and it works astoundingly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any bias; thus MODBCO converges the population toward an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms the classical Bee Colony

Computational Intelligence and Neuroscience 25

Table 17: Experimental results with respect to Average Cost Diversion.

Case      MODBCO    M1      M2      M3      M4       M5
Case 1      0.00    6.40    1.40    3.20    5.20     1.60
Case 2     -1.00   21.20    0.80   19.60   21.90     0.80
Case 3     -0.40    8.10    1.80   10.60   11.00     0.90
Case 4     -5.67   11.33   -1.67   20.33   11.00    -2.33
Case 5      5.20   24.00    6.80    6.80   40.00     9.80
Case 6     15.80   46.40   25.60   25.60  215.60    39.80
Case 7     13.20   46.00   16.80   16.80   67.80    20.20
Case 8      5.00   49.00   10.67   10.67   43.67    13.67
Case 9      1.20   40.80    5.00    5.00  127.60    20.20
Case 10    18.00   45.40   26.20   26.20  100.00    32.60
Case 11    26.00   70.00   33.60   33.60  335.40    40.60
Case 12    15.67   36.33   20.00   20.00  141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion

                 Sum of squares   df    Mean square      F         Sig.
Between groups       1061.949      5      212.3898    4.919985    0.000
Within groups       17612.867    408       43.16879
Total               18674.816    413

(b) DMRT test

Duncan test: Average Cost Diversion

Method   N    Subset for alpha = 0.05
              1           2
MODBCO   69    6.202899
M2       69   10.10145
M5       69   14.05797
M3       69   15.31884
M1       69   29.13043
M4       69               149.5217
Sig.           0.573558   1

Optimization for solving the NRP by satisfying both hard and soft constraints.

Future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to approach the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsored agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems - a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.



Modified Nelder-Mead Method for directed honey bee food search

(1) Initialization:
    Ai denotes the list of vertices in the simplex, where i = 1, 2, ..., n+1.
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
    f is the objective function to be minimized.

(2) Ordering:
    Order the vertices of the simplex from lowest objective function value f(A1) to highest value f(An+1), so that f(A1) ≤ f(A2) ≤ ... ≤ f(An+1).

(3) Midpoint:
    Calculate the midpoint of the n best vertices of the simplex: Am = Σ (Ai / n), where i = 1, 2, ..., n.

(4) Reflection Process:
    Calculate the reflection point Ar by Ar = Am + α(Am - An+1).
    if f(A1) ≤ f(Ar) ≤ f(An) then
        An+1 ← Ar and go to Step (8)
    end if

(5) Expansion Process:
    if f(Ar) ≤ f(A1) then
        Calculate the expansion point using Ae = Ar + γ(Ar - Am)
        if f(Ae) < f(Ar) then
            An+1 ← Ae and go to Step (8)
        else
            An+1 ← Ar and go to Step (8)
        end if
    end if

(6) Contraction Process:
    if f(An) ≤ f(Ar) ≤ f(An+1) then
        Compute the outside contraction by Ac = βAr + (1 - β)Am
    end if
    if f(Ar) ≥ f(An+1) then
        Compute the inside contraction by Ac = βAn+1 + (1 - β)Am
    end if
    (Contraction is thus done between Am and the better vertex among Ar and An+1.)
    if f(Ac) < f(An+1) then
        An+1 ← Ac and go to Step (8)
    else
        go to Step (7)
    end if

(7) Shrinkage Process:
    Shrink towards the best solution with new vertices by Ai = δAi + A1(1 - δ), where i = 2, ..., n+1.

(8) Stopping Criteria:
    Order and relabel the new vertices of the simplex based on their objective function values; if the termination criterion is not met, go to Step (4).

Algorithm 2: Pseudocode of the Modified Nelder-Mead Method.
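As an illustration of the steps in Algorithm 2, the following Python sketch (not the authors' MATLAB code) runs the reflection, expansion, contraction, and shrinkage moves on a toy two-dimensional quadratic; the simplex, coefficients, and test function are assumptions chosen for the example.

```python
# Illustrative sketch of the Nelder-Mead steps in Algorithm 2 on a toy
# quadratic f(x, y) = (x - 1)^2 + (y - 2)^2. Coefficient names alpha,
# gamma, beta, delta follow the paper (reflection, expansion,
# contraction, shrinkage). Only an inside contraction is used here.

def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5,
                iterations=200):
    n = len(simplex) - 1                       # dimension of the search space
    for _ in range(iterations):
        simplex.sort(key=f)                    # Step (2): order vertices by f
        best, worst = simplex[0], simplex[-1]
        # Step (3): midpoint (centroid) of all vertices except the worst
        mid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        # Step (4): reflection of the worst vertex through the midpoint
        refl = [mid[j] + alpha * (mid[j] - worst[j]) for j in range(n)]
        if f(best) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        elif f(refl) < f(best):                # Step (5): expansion
            expa = [refl[j] + gamma * (refl[j] - mid[j]) for j in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        else:                                  # Step (6): contraction
            cont = [beta * worst[j] + (1 - beta) * mid[j] for j in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                              # Step (7): shrink toward best
                simplex = [best] + [
                    [delta * v[j] + best[j] * (1 - delta) for j in range(n)]
                    for v in simplex[1:]]
    return min(simplex, key=f)                 # Step (8): best labeled vertex

f = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2
x = nelder_mead(f, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(round(x[0], 3), round(x[1], 3))          # converges near (1, 2)
```

The design choice mirrors the paper: the worst vertex is always the one replaced, and the simplex is reordered at the start of every iteration.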

e-dimensional volume. The total number of volumes Nv can be formulated using

Nv = ∏ (g = 1 to e) ng. (30)

(5) The starting point of the agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated as

[(Qi1 - Qf1)/2, (Qi2 - Qf2)/2, ..., (Qie - Qfe)/2]. (31)
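The step counts, volume count, and midpoints of equations (29)-(31) can be sketched numerically; the parameter bounds and step lengths below are made-up values for illustration only.

```python
# Sketch of the search-space partitioning: each parameter g has an
# initial bound Qgi, a final bound Qgf, and a step length Lg.
# The values here are illustrative, not from the paper.

Qi = [10.0, 8.0]    # initial parameter values Qgi
Qf = [0.0, 2.0]     # final parameter values Qgf
L  = [2.0, 3.0]     # step lengths Lg

# number of steps per parameter: n_g = (Qgi - Qgf) / Lg
n = [(qi - qf) / lg for qi, qf, lg in zip(Qi, Qf, L)]

# total number of volumes: Nv = product of n_g    (equation (30))
Nv = 1
for ng in n:
    Nv *= ng

# agent starting point: the midpoint [(Qi_g - Qf_g)/2, ...]  (equation (31))
mid = [(qi - qf) / 2 for qi, qf in zip(Qi, Qf)]

print(n, Nv, mid)
```

With these bounds the lattice has 5 x 2 = 10 volumes, and each agent starts from the midpoint of its volume.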

5.2. Decision-Making Process. A key role of the honey bees is to select the best nest site, and this is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighbor nest site for their food particles. The pseudocode of the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

5.2.1. Waggle Dance. The scout bees, after returning from the search for food particles, report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise


[Figure 5: Workflow of the Modified Nelder-Mead Method. The flowchart follows Algorithm 2: initialization of the coefficients α, γ, β, δ and the objective function f(A); ordering and labeling of the vertices; midpoint calculation for the two best vertices; and then the reflection, expansion, contraction, and shrinkage processes, repeated until the termination criterion is met.]


Multi-Objective Directed Bee Colony Optimization

(1) Initialization:
    f(x) is the objective function to be minimized.
    Initialize the number of parameters e and the step lengths Lg, where g = 0 to e.
    Initialize the initial and final values of each parameter as Qgi and Qgf.
    ** Solution Representation **
    The solutions are represented as binary values, generated as follows:
    for each solution i = 1, ..., n
        Xi = {xi1, xi2, ..., xid | d ∈ total days and xid = (rand ≥ 0.29), for all d}
    end for

(2) Calculate the number of steps in each dimension using
    ng = (Qgi - Qgf) / Lg.

(3) Calculate the total number of volumes using
    Nv = ∏ (g = 1 to e) ng.

(4) Calculate the midpoint of the volume, the starting point of the exploration, using
    [(Qi1 - Qf1)/2, (Qi2 - Qf2)/2, ..., (Qie - Qfe)/2].

(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.

(6) Record the values of the optimized points in the vector table using
    [f(V1), f(V2), ..., f(VNv)].

(7) Choose the globally optimized point based on the bee decision-making process using the consensus and quorum method approach:
    f(Vg) = min[f(V1), f(V2), ..., f(VNv)].

Algorithm 3: Pseudocode of MODBCO.
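The binary solution representation in step (1) of Algorithm 3 can be sketched as follows; the nurse and day counts are illustrative, while the 0.29 threshold is the one stated in the algorithm.

```python
# Sketch of the binary solution representation of Algorithm 3: each
# nurse i gets one bit per day d, set when a uniform random draw is
# >= 0.29 (threshold from the paper). Nurse/day counts are assumptions.
import random

random.seed(42)        # fixed seed so the sketch is reproducible
nurses, days = 10, 28  # e.g. a sprint-track roster: 10 nurses, 28 days

X = [[int(random.random() >= 0.29) for _ in range(days)]
     for _ in range(nurses)]

print(len(X), len(X[0]))   # 10 rows (nurses) x 28 columns (days)
```

Each row of X is one candidate assignment string; a population of such strings is what the bees subsequently refine.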

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimal cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)). (32)

This mathematical formulation finds the minimal optimal cases among the searched solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V; V is the vector which consists of e elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether it has reached an agreement and started acting on the decision pattern. Honey bees use the consensus method to select the best search value: the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation

f(Vg) = min[f(V1), f(V2), ..., f(VNv)]. (33)

5.2.3. Quorum. In the quorum method, the optimum solution is calculated as the final solution based on the threshold level obtained by the group decision-making process. When the solution reaches the optimal threshold level ξq, the solution is considered final based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but it leads to inaccurate experimental results. The threshold value should be chosen to attain less computational time with an accurate experimental result.
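The two decision rules above can be sketched side by side; the vector-table values and the threshold ξq below are made-up numbers for illustration.

```python
# Sketch of the bee decision-making of Section 5.2: consensus takes the
# minimum over the whole vector table (equation (33)); quorum accepts
# the first candidate that meets a threshold xi_q. Values are
# illustrative, not from the paper.

vector_table = [56.0, 61.0, 58.0, 63.0, 57.0]   # f(V1) ... f(VNv)

# consensus: the global optimum is the minimum of all recorded points
f_vg = min(vector_table)

# quorum: stop as soon as any candidate falls below the threshold xi_q
xi_q = 58.0
quorum_hit = next((v for v in vector_table if v <= xi_q), None)

print(f_vg, quorum_hit)
```

Consensus always inspects the full table, while quorum trades accuracy for time: a looser ξq returns earlier but may accept a worse roster, exactly the trade-off described above.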

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Here six performance metrics are considered to investigate the significance of the experimental results and to evaluate them. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ (i = 1 to r) (Optimal_NRP-Instance - fitness_i) / Optimal_NRP-Instance. (34)

6.1.2. Average Convergence. The Average Convergence is the measure to evaluate the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions. The performance of the convergence time is increased by the Average Convergence to explore more solutions in the population. The Average Convergence is calculated using

AC = Σ (i = 1 to r) [1 - (Avg_fitness_i - Optimal_NRP-Instance) / Optimal_NRP-Instance] * 100, (35)

where r is the number of instances in the given dataset.
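The two metrics can be sketched on made-up data; the per-instance optima and fitness values below are illustrative, the gap in (34) is taken as a magnitude, and both sums are averaged over the r instances so the results read as percentages.

```python
# Sketch of equations (34)-(35) on three illustrative instances:
# known optima, best value found, and average fitness per instance.

optimal  = [56, 58, 51]          # known optima (made-up)
best_fit = [56, 59, 51]          # best value found per instance
avg_fit  = [58.0, 60.0, 52.0]    # average fitness per instance
r = len(optimal)

# Least Error Rate (34): relative gap of the best value from the
# optimum (taken as a magnitude), averaged and expressed in percent
LER = sum(abs(o - b) / o for o, b in zip(optimal, best_fit)) / r * 100

# Average Convergence (35): one minus the relative gap of the average
# fitness, averaged over instances and expressed in percent
AC = sum(1 - (a - o) / o for o, a in zip(optimal, avg_fit)) / r * 100

print(round(LER, 3), round(AC, 3))
```

A perfect solver would give LER = 0% and AC = 100%; the small gap on instance two produces the nonzero error rate here.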


[Figure 6: Flowchart of MODBCO. The flowchart covers initialization of the parameter count e, the objective function f(X), the step lengths Lg, and the parameter bounds Qgi and Qgf; calculation of the number of steps ng = (Qgi - Qgf)/Lg, the number of volumes Nv, and the midpoint values; exploration of the search space using MNMM; the waggle dance; and the choice between the consensus and quorum methods, via the threshold level, to reach the global optimum solution.]


6.1.3. Standard Deviation. Standard deviation (SD) is the measure of the dispersion of a set of values from its mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset. The ASD can be calculated using

ASD = sqrt( Σ (i = 1 to r) (value obtained in each instance_i - mean value of the instance)^2 ), (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best - Convergence_worst, (37)

where Convergence_best is the convergence rate of the best fitness individual and Convergence_worst is the convergence rate of the worst fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ (i = 1 to r) (Cost_i - Cost_NRP-Instance) / total number of instances, (38)

where r is the number of instances in the given dataset.
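The remaining three metrics, equations (36)-(38), can be sketched on made-up per-instance values; the convergence rates plugged into (37) are also illustrative numbers.

```python
# Sketch of equations (36)-(38) on three illustrative instances.
import math

values = [56.0, 59.0, 51.0]      # value obtained per instance (made-up)
means  = [57.0, 58.0, 52.0]      # mean value of each instance over runs

# Average Standard Deviation (36): root of summed squared deviations
ASD = math.sqrt(sum((v - m) ** 2 for v, m in zip(values, means)))

# Convergence Diversity (37): spread between the best and worst
# convergence rates in the population (illustrative rates, in percent)
CD = 97.0 - 82.0

# Average Cost Diversion (38): mean gap between obtained cost and the
# published cost of the instance
costs      = [56.0, 59.0, 51.0]
known_cost = [56.0, 58.0, 51.0]
ACD = sum(c - k for c, k in zip(costs, known_cost)) / len(costs)

print(round(ASD, 4), CD, round(ACD, 4))
```

A small ASD and ACD together indicate that the solver is both stable across runs and close to the published optima, which is how Tables 17 and 18 are read.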

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) organized by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided based on complexity and size into three tracks, namely, sprint, medium, and long. Each track is divided into four types, early, late, hidden, and hint, with reference to the competition INRC2010. The first track, sprint, is the easiest and consists of 10 nurses and 33 datasets, sorted as 10 early type, 10 late type, 10 hidden type, and 3 hint type datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it consists of 30 to 31 nurses and 18 datasets, sorted as 5 early type, 5 late type, 5 hidden type, and 3 hint type. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early type, 5 late type, 5 hidden type, and 3 hint type. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives the detailed description of the datasets: columns one to three index each dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol "×" shows that there is no shift off or day off for the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified based on size. The datasets in case 1 to case 4 are smaller in size, those in case 5 to case 8 are medium in size, and the larger sized datasets are classified from case 9 to case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are run with the different optimization algorithms under similar environment conditions to assess the performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Empirical evaluations set the parameters of the proposed system; appropriate parameter values were determined in preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and the corresponding values are represented in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of an algorithm are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track  | Type   | Instance    | Nurses | Skills | Shifts | Contracts | Unwanted pattern | Shift off | Day off | Weekend | Time period
Sprint | Early  | 01-10       | 10     | 1      | 4      | 4         | 3                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02      | 10     | 1      | 3      | 3         | 4                |           |         | 2       | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08  | 10     | 1      | 4      | 3         | 8                |           |         | 2       | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09      | 10     | 1      | 4      | 3         | 8                |           |         | 2       | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07      | 10     | 1      | 3      | 3         | 4                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10          | 10     | 1      | 4      | 3         | 8                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Late   | 01, 03-05   | 10     | 1      | 4      | 3         | 8                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Late   | 02          | 10     | 1      | 3      | 3         | 4                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Late   | 06, 07, 10  | 10     | 1      | 4      | 3         | 0                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Late   | 08          | 10     | 1      | 4      | 3         | 0                | ×         | ×       | 2       | 1-01-2010 to 28-01-2010
Sprint | Late   | 09          | 10     | 1      | 4      | 3         | 0                | ×         | ×       | 2, 3    | 1-01-2010 to 28-01-2010
Sprint | Hint   | 01, 03      | 10     | 1      | 4      | 3         | 8                |           |         | 2       | 1-01-2010 to 28-01-2010
Sprint | Hint   | 02          | 10     | 1      | 4      | 3         | 0                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Early  | 01-05       | 31     | 1      | 4      | 4         | 0                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Hidden | 01-04       | 30     | 2      | 5      | 4         | 9                | ×         | ×       | 2       | 1-06-2010 to 28-06-2010
Medium | Hidden | 05          | 30     | 2      | 5      | 4         | 9                | ×         | ×       | 2       | 1-06-2010 to 28-06-2010
Medium | Late   | 01          | 30     | 1      | 4      | 4         | 7                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Late   | 02, 04      | 30     | 1      | 4      | 3         | 7                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Late   | 03          | 30     | 1      | 4      | 4         | 0                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Late   | 05          | 30     | 2      | 5      | 4         | 7                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Hint   | 01, 03      | 30     | 1      | 4      | 4         | 7                |           |         | 2       | 1-01-2010 to 28-01-2010
Medium | Hint   | 02          | 30     | 1      | 4      | 4         | 7                |           |         | 2       | 1-01-2010 to 28-01-2010
Long   | Early  | 01-05       | 49     | 2      | 5      | 3         | 3                |           |         | 2       | 1-01-2010 to 28-01-2010
Long   | Hidden | 01-04       | 50     | 2      | 5      | 3         | 9                | ×         | ×       | 2, 3    | 1-06-2010 to 28-06-2010
Long   | Hidden | 05          | 50     | 2      | 5      | 3         | 9                | ×         | ×       | 2, 3    | 1-06-2010 to 28-06-2010
Long   | Late   | 01, 03, 05  | 50     | 2      | 5      | 3         | 9                | ×         | ×       | 2, 3    | 1-01-2010 to 28-01-2010
Long   | Late   | 02, 04      | 50     | 2      | 5      | 4         | 9                | ×         | ×       | 2, 3    | 1-01-2010 to 28-01-2010
Long   | Hint   | 01          | 50     | 2      | 5      | 3         | 9                | ×         | ×       | 2, 3    | 1-01-2010 to 28-01-2010
Long   | Hint   | 02, 03      | 50     | 2      | 5      | 3         | 7                | ×         | ×       | 2       | 1-01-2010 to 28-01-2010

Table 4: Classification of INRC2010 datasets based on size.

SI number | Case    | Track  | Type
1         | Case 1  | Sprint | Early
2         | Case 2  | Sprint | Hidden
3         | Case 3  | Sprint | Late
4         | Case 4  | Sprint | Hint
5         | Case 5  | Medium | Early
6         | Case 6  | Medium | Hidden
7         | Case 7  | Medium | Late
8         | Case 8  | Medium | Hint
9         | Case 9  | Long   | Early
10        | Case 10 | Long   | Hidden
11        | Case 11 | Long   | Late
12        | Case 12 | Long   | Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to help differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare.

Type | Method                                  | Reference
M1   | Artificial Bee Colony Algorithm         | [14]
M2   | Hybrid Artificial Bee Colony Algorithm  | [15]
M3   | Global best harmony search              | [16]
M4   | Harmony Search with Hill Climbing       | [17]
M5   | Integer Programming Technique for NRP   | [18]

Table 6: Configuration parameters for experimental evaluation.

Parameter                | Value
Number of bees           | 100
Maximum iterations       | 1000
Initialization technique | Binary
Heuristic                | Modified Nelder-Mead Method
Termination condition    | Maximum iterations
Runs                     | 20
Reflection coefficient   | α > 0
Expansion coefficient    | γ > 1
Contraction coefficient  | 0 < β < 1
Shrinkage coefficient    | 0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.
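The arithmetic behind a one-way ANOVA table such as Table 18 can be sketched in pure Python; the three small samples below are illustrative, not the paper's data.

```python
# Sketch of one-way ANOVA: F is the ratio of the between-group mean
# square to the within-group mean square, exactly the layout of
# Table 18 (SS, df, MS, F). Sample values are made-up.

groups = [[56, 58, 51, 59], [63, 66, 60, 68], [57, 59, 52, 60]]

k = len(groups)                          # number of groups (methods)
N = sum(len(g) for g in groups)          # total number of observations
grand = sum(sum(g) for g in groups) / N  # grand mean over all values

# between-groups sum of squares: group sizes times squared deviation
# of each group mean from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)

# within-groups sum of squares: squared deviations from each group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)        # df_between = k - 1
ms_within = ss_within / (N - k)          # df_within  = N - k
F = ms_between / ms_within

print(round(F, 2))
```

A large F (compared against the F-distribution with k-1 and N-k degrees of freedom) rejects the null hypothesis that all methods perform equally, which is the reading of Sig. = 0.000 in Table 18.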


Table 7: Experimental results with respect to best value.

Instance         | Optimal | MODBCO (Best, Worst) | M1 (Best, Worst) | M2 (Best, Worst) | M3 (Best, Worst) | M4 (Best, Worst) | M5 (Best, Worst)
Sprint early 01  | 56  | 56, 75   | 63, 74   | 57, 81   | 59, 75   | 58, 77   | 58, 77
Sprint early 02  | 58  | 59, 77   | 66, 89   | 59, 80   | 61, 82   | 64, 88   | 60, 85
Sprint early 03  | 51  | 51, 68   | 60, 83   | 52, 75   | 54, 70   | 59, 84   | 53, 67
Sprint early 04  | 59  | 59, 77   | 68, 86   | 60, 76   | 63, 80   | 67, 84   | 61, 85
Sprint early 05  | 58  | 58, 74   | 65, 86   | 59, 77   | 60, 79   | 63, 86   | 60, 84
Sprint early 06  | 54  | 53, 69   | 59, 81   | 55, 79   | 57, 73   | 58, 75   | 56, 77
Sprint early 07  | 56  | 56, 75   | 62, 81   | 58, 79   | 60, 75   | 61, 85   | 57, 78
Sprint early 08  | 56  | 56, 76   | 59, 79   | 58, 79   | 59, 80   | 58, 82   | 57, 78
Sprint early 09  | 55  | 55, 82   | 59, 79   | 57, 76   | 59, 80   | 61, 78   | 56, 80
Sprint early 10  | 52  | 52, 67   | 58, 76   | 54, 73   | 55, 75   | 58, 78   | 53, 76
Sprint hidden 01 | 32  | 32, 48   | 45, 67   | 34, 57   | 43, 60   | 46, 65   | 33, 55
Sprint hidden 02 | 32  | 32, 51   | 41, 59   | 34, 55   | 37, 53   | 44, 68   | 33, 51
Sprint hidden 03 | 62  | 62, 79   | 74, 92   | 63, 86   | 71, 90   | 78, 96   | 62, 84
Sprint hidden 04 | 66  | 66, 81   | 79, 102  | 67, 91   | 80, 96   | 78, 100  | 66, 91
Sprint hidden 05 | 59  | 58, 73   | 68, 90   | 60, 79   | 63, 84   | 69, 88   | 59, 77
Sprint hidden 06 | 134 | 128, 146 | 164, 187 | 131, 145 | 203, 219 | 169, 188 | 132, 150
Sprint hidden 07 | 153 | 154, 172 | 181, 204 | 154, 175 | 197, 215 | 187, 210 | 155, 174
Sprint hidden 08 | 204 | 201, 219 | 246, 265 | 205, 225 | 267, 286 | 240, 260 | 206, 224
Sprint hidden 09 | 338 | 337, 353 | 372, 390 | 339, 363 | 274, 291 | 372, 389 | 340, 365
Sprint hidden 10 | 306 | 306, 324 | 328, 347 | 307, 330 | 347, 368 | 322, 342 | 308, 324
Sprint late 01   | 37  | 37, 53   | 50, 68   | 38, 57   | 46, 66   | 52, 72   | 39, 57
Sprint late 02   | 42  | 41, 61   | 53, 74   | 43, 61   | 50, 71   | 56, 80   | 44, 67
Sprint late 03   | 48  | 45, 65   | 57, 80   | 50, 73   | 57, 76   | 60, 77   | 49, 72
Sprint late 04   | 75  | 71, 87   | 88, 108  | 75, 95   | 106, 127 | 95, 115  | 74, 99
Sprint late 05   | 44  | 46, 66   | 52, 65   | 46, 68   | 53, 74   | 57, 74   | 46, 70
Sprint late 06   | 42  | 42, 60   | 46, 65   | 44, 68   | 45, 64   | 52, 70   | 43, 62
Sprint late 07   | 42  | 44, 63   | 51, 69   | 46, 69   | 62, 81   | 55, 78   | 44, 68
Sprint late 08   | 17  | 17, 36   | 17, 37   | 19, 41   | 19, 39   | 19, 40   | 17, 39
Sprint late 09   | 17  | 17, 33   | 18, 37   | 19, 42   | 19, 40   | 17, 40   | 17, 40
Sprint late 10   | 43  | 43, 59   | 56, 74   | 45, 67   | 56, 74   | 54, 71   | 43, 62
Sprint hint 01   | 78  | 73, 92   | 85, 103  | 77, 91   | 103, 119 | 90, 108  | 77, 99
Sprint hint 02   | 47  | 43, 59   | 57, 76   | 48, 68   | 61, 80   | 56, 81   | 45, 68
Sprint hint 03   | 57  | 49, 67   | 74, 96   | 52, 75   | 79, 97   | 69, 93   | 53, 66
Medium early 01  | 240 | 245, 263 | 262, 284 | 247, 267 | 247, 262 | 280, 305 | 250, 269
Medium early 02  | 240 | 243, 262 | 263, 280 | 247, 270 | 247, 263 | 281, 301 | 250, 273
Medium early 03  | 236 | 239, 256 | 261, 281 | 244, 264 | 244, 262 | 287, 311 | 245, 269
Medium early 04  | 237 | 245, 262 | 259, 278 | 242, 261 | 242, 258 | 278, 297 | 247, 272
Medium early 05  | 303 | 310, 326 | 331, 351 | 310, 332 | 310, 329 | 330, 351 | 313, 338
Medium hidden 01 | 123 | 143, 159 | 190, 210 | 157, 180 | 157, 177 | 410, 429 | 192, 214
Medium hidden 02 | 243 | 230, 248 | 286, 306 | 256, 277 | 256, 273 | 412, 430 | 266, 284
Medium hidden 03 | 37  | 53, 70   | 66, 84   | 56, 78   | 56, 69   | 182, 203 | 61, 81
Medium hidden 04 | 81  | 85, 104  | 102, 119 | 95, 119  | 95, 114  | 168, 191 | 100, 124
Medium hidden 05 | 130 | 182, 201 | 202, 224 | 178, 202 | 178, 197 | 520, 545 | 194, 214
Medium late 01   | 157 | 176, 195 | 207, 227 | 175, 198 | 175, 194 | 234, 257 | 179, 194
Medium late 02   | 18  | 30, 45   | 53, 76   | 32, 52   | 32, 53   | 49, 67   | 35, 55
Medium late 03   | 29  | 35, 52   | 71, 90   | 39, 60   | 39, 59   | 59, 78   | 44, 66
Medium late 04   | 35  | 42, 58   | 66, 83   | 49, 57   | 49, 70   | 71, 89   | 48, 70
Medium late 05   | 107 | 129, 149 | 179, 199 | 135, 156 | 135, 156 | 272, 293 | 141, 164
Medium hint 01   | 40  | 42, 62   | 70, 90   | 49, 71   | 49, 65   | 64, 82   | 50, 71
Medium hint 02   | 84  | 91, 107  | 142, 162 | 95, 116  | 95, 115  | 133, 158 | 96, 115
Medium hint 03   | 129 | 135, 153 | 188, 209 | 141, 161 | 141, 152 | 187, 208 | 148, 166

18 Computational Intelligence and Neuroscience

Table 7: Continued.

Instances    Optimal value    MODBCO (Best, Worst)    M1 (Best, Worst)    M2 (Best, Worst)    M3 (Best, Worst)    M4 (Best, Worst)    M5 (Best, Worst)

Long early 01    197    194, 209    241, 259    200, 222    200, 221    339, 362    220, 243
Long early 02    219    228, 245    276, 293    232, 254    232, 253    399, 419    253, 275
Long early 03    240    240, 258    268, 291    243, 257    243, 262    349, 366    251, 273
Long early 04    303    303, 321    336, 356    306, 324    306, 320    411, 430    319, 342
Long early 05    284    284, 300    326, 347    287, 310    287, 308    383, 403    301, 321
Long hidden 01    346    389, 407    444, 463    403, 422    403, 421    4466, 4488    422, 442
Long hidden 02    90    108, 126    132, 150    120, 139    120, 139    1071, 1094    128, 148
Long hidden 03    38    48, 63    61, 78    54, 72    54, 75    163, 181    54, 79
Long hidden 04    22    27, 45    49, 71    32, 54    32, 51    113, 132    36, 58
Long hidden 05    41    55, 71    78, 99    59, 81    59, 70    139, 157    60, 83
Long late 01    235    249, 267    290, 310    260, 278    260, 276    588, 606    262, 286
Long late 02    229    261, 280    295, 318    265, 278    265, 281    577, 595    263, 281
Long late 03    220    259, 275    307, 325    264, 285    264, 285    567, 588    272, 295
Long late 04    221    257, 292    304, 323    263, 284    263, 281    604, 627    276, 294
Long late 05    83    92, 107    142, 161    104, 122    104, 125    329, 349    118, 138
Long hint 01    31    40, 58    53, 73    44, 67    44, 65    126, 150    50, 72
Long hint 02    17    29, 47    40, 62    32, 55    32, 51    122, 145    36, 61
Long hint 03    53    79, 137    117, 135    85, 104    85, 101    278, 303    102, 123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted. Otherwise, the null hypothesis is retained and the alternate hypothesis is rejected.
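For illustration only (this is not the authors' code), the one-way ANOVA F statistic reported in Tables 8, 10, 12, 14, 16, and 18 can be sketched in Python from per-method result lists; `anova_f` is a hypothetical helper name:

```python
# Sketch: one-way ANOVA F statistic over k groups (one group per method).
# Returns (F, df_between, df_within); F is then compared against the
# critical value at the 0.05 significance level.
def anova_f(groups):
    k = len(groups)                               # number of methods
    n = sum(len(g) for g in groups)               # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

For example, `anova_f([[1.0, 2.0], [3.0, 4.0]])` gives F = 8.0 with (1, 2) degrees of freedom.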

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods. The method ranks the methods by their mean values in increasing or decreasing order and groups those that are not significantly different.

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets under a similar environmental setup, using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competing methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to the best value.

(a) ANOVA test (source factor: best value)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   1061949          5     212389.8      3.620681   0.003
Within groups    23933354         408   58660.18
Total            24995303         413

(b) DMRT test (Duncan test: best value)

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    120.2319
M2        69    124.1304
M5        69    128.0870
M3        69    129.3478
M1        69    143.1594
M4        69                263.5507
Sig.            0.629       1.000

have considered 69 datasets of diverse size. MODBCO achieved the best result in 34 of the 69 instances.

The statistical analyses, the ANOVA and DMRT tests for the best values, are shown in Table 8. The significance values are less than 0.05, which shows that the null hypothesis is rejected. A significant difference between

Computational Intelligence and Neuroscience 19

Table 9: Experimental result with respect to the error rate (%).

Case       MODBCO    M1        M2       M3       M4        M5
Case 1     -0.01     11.54     2.54     5.77     9.41      2.88
Case 2     -0.73     20.16     1.69     19.81    22.35     0.83
Case 3     -0.47     18.27     5.63     23.24    24.72     2.26
Case 4     -9.65     20.03     -2.64    33.48    18.53     -4.18
Case 5     0.00      9.57      2.73     2.73     16.31     3.93
Case 6     -2.57     46.37     27.71    27.71    220.44    40.62
Case 7     28.00     105.40    37.98    37.98    116.36    45.82
Case 8     3.93      63.26     14.97    14.97    54.43     18.00
Case 9     0.52      17.14     2.15     2.15     54.04     8.61
Case 10    15.55     69.70     36.25    36.25    652.47    43.25
Case 11    9.16      40.08     18.13    18.13    185.92    23.41
Case 12    49.56     109.01    63.52    63.52    449.54    88.50

[Figure omitted: four panels, error rate (%) versus NRP instance (Cases 1-3, 4-6, 7-9, and 10-12), series MODBCO and M1-M5.]

Figure 7: Performance analysis with respect to error rate (%).

the various optimization algorithms is therefore observed. The DMRT test shows the homogeneous grouping: two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competing techniques. The computational analysis based on the error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of the instances have

attained a lower error rate. For the medium and larger sized datasets, the obtained error rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than the one specified in the INRC2010.
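As a sketch (the function name is illustrative; the paper does not print the formula), the error rate here is taken as the percentage gap between the obtained cost and the INRC2010 published optimum, which also reproduces the negative-value convention of Table 9:

```python
# Sketch: error rate (%) of an obtained roster cost against the published
# optimum; a negative result means the obtained cost beats the published
# optimum, matching the sign convention used in Table 9.
def error_rate(obtained_cost, optimal_cost):
    return 100.0 * (obtained_cost - optimal_cost) / optimal_cost
```

For instance, a best cost of 59 against a published optimum of 56 gives roughly a 5.36% error rate.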

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they fail to find the optimal solution and become trapped in local optima. The error rate (%) obtained by MODBCO and the other algorithms is shown in Figure 7.


[Figure omitted: four panels, Average Convergence versus NRP instance (Cases 1-3, 4-6, 7-9, and 10-12), series MODBCO and M1-M5.]

Figure 8: Performance analysis with respect to Average Convergence.

Table 10: Statistical analysis with respect to the error rate.

(a) ANOVA test (source factor: error rate)

Source           Sum of squares   df    Mean square    F          Sig.
Between groups   638680.9         5     127736.1796    15.26182   0.000
Within groups    3414820          408   8369.657384
Total            4053501          413

(b) DMRT test (Duncan test: error rate)

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    5.402238
M2        69    13.77936
M5        69    17.31724
M3        69    20.99903
M1        69    36.49033
M4        69                121.1591
Sig.            0.07559     1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis. Thus, there is a significant difference in the error rate with respect to the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate in the small instances and an 82% convergence rate in the medium instances; for the longer instances, it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and became trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for the larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
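The paper does not print a closed form for this metric; one definition consistent with the reported values (100% when the population average equals the optimum, negative when the average fitness deviates badly, as for M4 in Table 11) is sketched below under that assumption:

```python
# Assumed definition (not stated explicitly in the paper): convergence (%)
# of a population of roster costs towards the known optimal cost, for a
# minimization problem.  100 means fully converged; the value goes negative
# when the average cost exceeds twice the optimum.
def average_convergence(population_costs, optimal_cost):
    avg = sum(population_costs) / len(population_costs)
    return 100.0 * (1.0 - (avg - optimal_cost) / optimal_cost)
```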

The statistical test results for Average Convergence with the different optimization algorithms are shown in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%).

Case       MODBCO    M1        M2        M3       M4         M5
Case 1     87.01     70.84     69.79     72.40    62.59      66.87
Case 2     91.96     65.88     76.93     64.19    56.88      77.21
Case 3     83.40     53.63     47.23     38.23    30.07      47.44
Case 4     99.02     62.96     77.23     45.94    53.14      77.20
Case 5     96.27     86.49     91.10     92.71    77.29      89.01
Case 6     79.49     42.52     52.87     59.08    -139.09    39.82
Case 7     56.34     -32.73    24.52     26.23    -54.91     9.97
Case 8     84.00     21.72     61.67     68.51    22.95      58.22
Case 9     96.98     78.81     91.68     92.59    39.78      84.36
Case 10    70.16     8.16      29.97     37.66    -584.44    18.54
Case 11    84.58     54.10     73.49     74.40    -94.85     66.95
Case 12    15.13     -46.99    -22.73    -9.46    -411.18    -53.43

[Figure omitted: four panels, Average Standard Deviation versus NRP instance (Cases 1-3, 4-6, 7-9, and 10-12), series MODBCO and M1-M5.]

Figure 9: Performance analysis with respect to Average Standard Deviation.

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
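A sketch of the underlying computation (population form; whether the authors used the sample or the population estimator is not stated):

```python
# Sketch: population standard deviation of the final roster costs
# obtained over repeated runs of one algorithm on one instance.
def std_dev(values):
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return variance ** 0.5
```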

The statistical test results for the Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation among the different optimization algorithms. The ANOVA test proves that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


[Figure omitted: four panels, Convergence Diversity versus NRP instance (Cases 1-3, 4-6, 7-9, and 10-12), series MODBCO and M1-M5.]

Figure 10: Performance analysis with respect to Convergence Diversity.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   71264.2475       5     14252.8495    15.47047   0.000
Within groups    375887.826       408   921.29369
Total            447152.073       413

(b) DMRT test (Duncan test: Average Convergence)

Method    N     Subset for alpha = 0.05
                1           2
M4        69    -47.6945
M1        69                46.4245
M5        69                53.6879
M3        69                57.6296
M2        69                59.5103
MODBCO    69                81.6997
Sig.            1.000       0.054

value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are shown in Table 16. There is a significant difference in the mean values of the Convergence Diversity among the various optimization algorithms. For the post hoc analysis, the significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test groups the various algorithms based on their mean values: there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case       MODBCO      M1          M2          M3          M4          M5
Case 1     6.501382    8.551285    9.83255     8.645203    10.57402    9.239032
Case 2     5.275281    8.170864    11.33125    8.472083    10.55361    7.382535
Case 3     5.947749    8.474928    8.898058    8.337811    11.93913    8.033767
Case 4     5.270417    5.310443    10.24612    9.50821     9.092851    8.049586
Case 5     7.035945    9.400025    7.790991    10.28423    13.45243    11.46945
Case 6     7.15779     10.13578    9.109779    9.995431    13.54185    8.020126
Case 7     7.502947    9.069255    9.974609    8.341374    11.50138    10.60612
Case 8     7.861593    9.799198    7.447106    10.05138    11.63378    10.61275
Case 9     9.310626    13.83453    9.437943    12.13188    13.56668    10.54568
Case 10    10.42404    15.14141    11.87235    10.59549    14.15797    12.96181
Case 11    10.25454    14.68924    10.33091    10.30386    11.21071    13.3541
Case 12    10.6846     13.49546    10.75478    15.40791    14.42514    11.3357

[Figure omitted: Average Cost Diversion versus NRP instance (Cases 1-12), series MODBCO and M1-M5.]

Figure 11: Performance analysis with respect to Average Cost Diversion.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competing techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances yield the optimum value. The larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are shown in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source           Sum of squares   df    Mean square   F         Sig.
Between groups   697.4407         5     139.4881      13.6322   0.000
Within groups    4174.76          408   10.23226
Total            4872.201         413

(b) DMRT test (Duncan test: Average Standard Deviation)

Method    N     Subset for alpha = 0.05
                1          2           3
MODBCO    69    7.4743
M2        69               9.615152
M3        69               9.677027
M5        69               9.729477
M1        69               10.13242
M4        69                           11.9316
Sig.            1.000      0.394       1.000

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case       MODBCO    M1       M2       M3       M4       M5
Case 1     30.00     35.25    37.28    32.83    37.94    38.83
Case 2     22.96     27.92    29.18    23.85    27.95    27.62
Case 3     53.05     56.21    64.66    59.29    60.79    65.05
Case 4     29.99     34.03    33.62    30.84    39.46    33.32
Case 5     7.01      7.87     8.33     6.71     8.77     9.29
Case 6     20.89     22.21    26.98    19.29    25.45    24.87
Case 7     43.69     54.66    48.13    55.47    50.24    56.18
Case 8     27.67     30.03    31.83    24.11    30.35    29.69
Case 9     6.89      8.10     8.22     8.04     8.24     9.10
Case 10    37.10     44.29    45.53    38.95    41.91    49.98
Case 11    11.43     11.64    10.81    11.36    11.91    12.15
Case 12    91.13     75.96    81.78    69.90    86.63    85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   5144.758         5     1028.952      9.287168   0.000
Within groups    45203.48         408   110.7928
Total            50348.24         413

(b) DMRT test (Duncan test: Convergence Diversity)

Method    N     Subset for alpha = 0.05
                1           2           3
M3        69    18.36323
MODBCO    69    18.42029
M1        69                19.68116
M2        69                20.57971
M4        69                20.68116
M5        69                            21.2464
Sig.            0.919       0.096       1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO has more ability to attain the best values on the performance metrics compared with the competitor algorithms on the INRC2010 datasets.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained a lower worst value in addition to the best solution. The datasets are divided based on their size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reached 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. The error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astonishingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any bias; thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated on the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets were chosen, and 34 out of the 69 instances attained the best result. Thus, our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. MODBCO thereby outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case       MODBCO    M1       M2       M3       M4        M5
Case 1     0.00      6.40     1.40     3.20     5.20      1.60
Case 2     -1.00     21.20    0.80     19.60    21.90     0.80
Case 3     -0.40     8.10     1.80     10.60    11.00     0.90
Case 4     -5.67     11.33    -1.67    20.33    11.00     -2.33
Case 5     5.20      24.00    6.80     6.80     40.00     9.80
Case 6     15.80     46.40    25.60    25.60    215.60    39.80
Case 7     13.20     46.00    16.80    16.80    67.80     20.20
Case 8     5.00      49.00    10.67    10.67    43.67     13.67
Case 9     1.20      40.80    5.00     5.00     127.60    20.20
Case 10    18.00     45.40    26.20    26.20    100.00    32.60
Case 11    26.00     70.00    33.60    33.60    335.40    40.60
Case 12    15.67     36.33    20.00    20.00    141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   10619.49         5     2123.898      4.919985   0.000
Within groups    176128.67        408   431.6879
Total            186748.16        413

(b) DMRT test (Duncan test: Average Cost Diversion)

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69                149.5217
Sig.            0.057       1.000

Optimization for solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. F.No.2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithm in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.


[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems - a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.



12 Computational Intelligence and Neuroscience

Figure 5: Workflow of the Modified Nelder-Mead Method. After initialization of the coefficients α, β, γ, and δ and the objective function f(A), the vertices are ordered and labeled by f(A) and the midpoint A_m of the two best vertices is calculated; the method then iterates the reflection A_r = A_m + α(A_m − A_w), expansion A_e = A_r + γ(A_r − A_m), contraction A_c = βA_r + (1 − β)A_m, and shrinkage A_i = δA_i + A_1(1 − δ) processes until the termination criteria are met.


Algorithm 3: Pseudocode of MODBCO (Multiobjective Directed Bee Colony Optimization).

(1) Initialization: f(x) is the objective function to be minimized. Initialize the e parameters and the step lengths L_g, where g = 0 to e, and initialize the initial and final values of each parameter, Q_gi and Q_gf.
    /* Solution representation: solutions are represented as binary values, generated as follows */
    for each solution i = 1, ..., n
        X_i = {x_i1, x_i2, ..., x_id | d ∈ total days and x_id = (rand ≥ 0.29)}, for all d
    end for
(2) Calculate the number of steps for each parameter: n_g = (Q_gi − Q_gf) / L_g.
(3) Calculate the total number of volumes: N_v = ∏_{g=1}^{e} n_g.
(4) Calculate the midpoint of the volume, the starting point of the exploration: [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the optimized point of each volume in the vector table: [f(V_1), f(V_2), ..., f(V_Nv)].
(7) Choose the globally optimized point by the bee decision-making process using the consensus and quorum approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_Nv)].
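Steps (2)-(4) of the pseudocode, which partition the search space into volumes before the MNMM exploration, can be sketched as follows (an illustrative Python sketch, not the authors' MATLAB implementation; the function name and the example parameter values are our own):

```python
from math import prod

def search_volumes(q_init, q_final, step_lengths):
    """Partition the search space (steps (2)-(4) of Algorithm 3).

    q_init, q_final : per-parameter initial/final values Q_gi and Q_gf
    step_lengths    : per-parameter step lengths L_g
    """
    # Step (2): number of steps per parameter, n_g = (Q_gi - Q_gf) / L_g
    n = [(qi - qf) / lg for qi, qf, lg in zip(q_init, q_final, step_lengths)]
    # Step (3): total number of volumes, N_v = product of all n_g
    n_v = prod(n)
    # Step (4): midpoint of the volume, the starting point of exploration
    mid = [(qi - qf) / 2 for qi, qf in zip(q_init, q_final)]
    return n, n_v, mid
```

With two parameters ranging over Q_gi = (10, 8), Q_gf = (2, 0), and step length 2, the sketch yields 4 steps per parameter, 16 volumes, and the midpoint [4.0, 4.0].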

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting an individual solution, gives it to the centralized system. To select the best optimal solution for the minimal optimal cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)).  (32)

This formulation finds the minimal optimal case among the searched solutions, where f_i(V) is the search value calculated by agent i. The search values are recorded in the vector table V, which consists of e elements; each element holds the value of a parameter, and both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is widespread agreement within the group based on voting; the voting pattern of the scout bees is monitored periodically to determine whether an agreement has been reached and the bees have started acting on the decision pattern. Honey bees use the consensus method to select the best search value: the globally optimized point is chosen by comparing the values in the vector table, using

f(V_g) = min[f(V_1), f(V_2), ..., f(V_Nv)].  (33)

5.2.3. Quorum. In the quorum method, the optimum solution is taken as the final solution based on the threshold level obtained by the group decision-making process. When a solution reaches the optimal threshold level ξ_q, it is accepted as the final solution by a unison decision process. The quorum threshold value describes the quality of the food-particle result: a lower threshold decreases computation time but leads to inaccurate experimental results, so the threshold should be chosen to attain low computational time together with accurate results.
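The consensus and quorum selections over the vector table can be sketched as below (a minimal illustration; representing the vector table as (solution, value) pairs and the form of the threshold comparison are our assumptions, not the paper's specification):

```python
def consensus(vector_table):
    # Consensus: the globally optimized point f(V_g) = min over all f(V_i), eq. (33).
    return min(vector_table, key=lambda entry: entry[1])

def quorum(vector_table, threshold):
    # Quorum: accept the first recorded solution whose objective value already
    # meets the threshold level; fall back to consensus when none reaches it.
    for solution, value in vector_table:
        if value <= threshold:
            return solution, value
    return consensus(vector_table)
```

For a table [("A", 5.0), ("B", 2.0), ("C", 3.5)], consensus returns ("B", 2.0), while quorum with a loose threshold of 6.0 stops early at ("A", 5.0), trading accuracy for computation time as described above.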

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparison with five different competitor methods. Six performance metrics are considered to evaluate the experimental results and their significance; the metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} [(Optimal_NRP-Instance − fitness_i) / Optimal_NRP-Instance] × 100.  (34)

6.1.2. Average Convergence. The Average Convergence (AC) is a measure of the quality of the generated population on average: the percentage average of the convergence rates of the solutions. A higher Average Convergence indicates that the population as a whole lies closer to the optimum. The Average Convergence is calculated using

AC = (1/r) Σ_{i=1}^{r} [1 − (Avg_fitness_i − Optimal_NRP-Instance) / Optimal_NRP-Instance] × 100,  (35)

where r is the number of instances in the given dataset.
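The two metrics above can be sketched as follows (an illustrative sketch; the sign convention for LER follows Table 9, where a negative value means the best value beat the known optimum, and all inputs are hypothetical):

```python
def least_error_rate(optima, best_values):
    # Eq. (34): summed relative gap between the obtained best value and the
    # known optimum, expressed as a percentage (negative when best < optimum).
    return sum((best - opt) / opt for opt, best in zip(optima, best_values)) * 100

def average_convergence(optima, avg_fitness):
    # Eq. (35): averaged closeness of the population's mean fitness to the
    # known optimum, expressed as a percentage.
    r = len(optima)
    return sum(1 - abs(avg - opt) / opt
               for opt, avg in zip(optima, avg_fitness)) / r * 100
```

For example, least_error_rate([100, 50], [110, 50]) gives approximately 10.0 (one instance 10% above its optimum, one exactly optimal), and average_convergence([100], [120]) gives approximately 80.0.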


Figure 6: Flowchart of MODBCO. After initialization (the e parameters, the objective function f(X), the step length L_g, and the initial and final parameters Q_gi and Q_gf), the method calculates the number of steps n_g = (Q_gi − Q_gf)/L_g, the number of volumes N_v = ∏_{g=1}^{e} n_g, and the midpoint values [(Q_i1 − Q_f1)/2, ..., (Q_ie − Q_fe)/2]. It then explores the search space using MNMM, calculates the optimum solution, performs the waggle dance, and applies the chosen decision method (consensus, or quorum with a threshold level) to select the global optimum solution.


6.1.3. Standard Deviation. Standard deviation (SD) is a measure of the dispersion of a set of values from their mean. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

ASD = sqrt( Σ_{i=1}^{r} (value obtained in instance_i − mean value of the instance)^2 ),  (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and worst convergence rates generated in the population and can be calculated using

CD = Convergence_best − Convergence_worst,  (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset and can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i − Cost_NRP-Instance) / (total number of instances),  (38)

where r is the number of instances in the given dataset.
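The remaining metrics can be sketched similarly (an illustrative sketch; the per-instance normalization inside the ASD computation is our reading of eq. (36), and all inputs are hypothetical):

```python
def average_standard_deviation(instance_values):
    # Eq. (36): dispersion of each instance's values from that instance's
    # mean, aggregated over the r instances of the dataset.
    total = 0.0
    for values in instance_values:
        mean = sum(values) / len(values)
        total += sum((v - mean) ** 2 for v in values) / len(values)
    return (total / len(instance_values)) ** 0.5

def convergence_diversity(best_rate, worst_rate):
    # Eq. (37): spread between best and worst convergence in the population.
    return best_rate - worst_rate

def average_cost_diversion(costs, known_costs):
    # Eq. (38): mean gap between the obtained cost and the reference cost
    # listed for the NRP instance.
    return sum(c - k for c, k in zip(costs, known_costs)) / len(costs)
```

For example, convergence_diversity(90.0, 60.0) gives 30.0, and average_cost_diversion([60, 58], [56, 58]) gives 2.0; a negative ACD would indicate costs below the known reference, as in Table 17.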

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm for the NRP are as follows:

(a) Minimize the total cost of the rostering problem.
(b) Satisfy all the hard constraints described in Table 1.
(c) Satisfy as many of the soft constraints described in Table 2 as possible.
(d) Enhance resource utilization.
(e) Distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010), organized by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided by complexity and size into three tracks, namely, sprint, medium, and long. Each track is divided into four types, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it involves 30 to 31 nurses and 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses and 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is given in Table 3, and the datasets are classified into twelve cases based on instance size in Table 4.

Table 3 describes the datasets in detail: columns one to three index each dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. Nurse preferences are managed by the shift-off and day-off columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that the corresponding dataset has no shift off and no day off.

Table 4 lists the datasets used in the experiment, classified by size: the datasets in case 1 to case 4 are small, those in case 5 to case 8 are medium, and those in case 9 to case 12 are large.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments on the different optimization algorithms are run under similar environmental conditions to assess performance. The proposed algorithm is coded in MATLAB 2012 under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system were set through empirical evaluation; appropriate values were determined in preliminary experiments. The competitor methods chosen to evaluate the performance of the proposed algorithm are listed in Table 5, and the heuristic parameters with their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm relative to existing algorithms. Various statistical tests and measures for validating the performance of an algorithm are reviewed by Demsar [39]. The authors used statistical tests


Table 3 The features of the INRC2010 datasets

Track Type Instance Nurses Skills Shifts Contracts Unwanted pattern Shift off Day off Weekend Time period

Sprint

Early 01ndash10 10 1 4 4 3 2 1-01-2010 to 28-01-2010

Hidden

01-02 10 1 3 3 4 2 1-06-2010 to 28-06-201003 05 08 10 1 4 3 8 2 1-06-2010 to 28-06-201004 09 10 1 4 3 8 2 1-06-2010 to 28-06-201006 07 10 1 3 3 4 2 1-01-2010 to 28-01-201010 10 1 4 3 8 2 1-01-2010 to 28-01-2010

Late

01 03ndash05 10 1 4 3 8 2 1-01-2010 to 28-01-201002 10 1 3 3 4 2 1-01-2010 to 28-01-2010

06 07 10 10 1 4 3 0 2 1-01-2010 to 28-01-201008 10 1 4 3 0 times times 2 1-01-2010 to 28-01-201009 10 1 4 3 0 times times 2 3 1-01-2010 to 28-01-2010

Hint 01 03 10 1 4 3 8 2 1-01-2010 to 28-01-201002 10 1 4 3 0 2 1-01-2010 to 28-01-2010

Medium

Early 01ndash05 31 1 4 4 0 2 1-01-2010 to 28-01-2010

Hidden 01ndash04 30 2 5 4 9 times times 2 1-06-2010 to 28-06-201005 30 2 5 4 9 times times 2 1-06-2010 to 28-06-2010

Late

01 30 1 4 4 7 2 1-01-2010 to 28-01-201002 04 30 1 4 3 7 2 1-01-2010 to 28-01-201003 30 1 4 4 0 2 1-01-2010 to 28-01-201005 30 2 5 4 7 2 1-01-2010 to 28-01-2010

Hint 01 03 30 1 4 4 7 2 1-01-2010 to 28-01-201002 30 1 4 4 7 2 1-01-2010 to 28-01-2010

Long

Early 01ndash05 49 2 5 3 3 2 1-01-2010 to 28-01-2010

Hidden 01ndash04 50 2 5 3 9 times times 2 3 1-06-2010 to 28-06-201005 50 2 5 3 9 times times 2 3 1-06-2010 to 28-06-2010

Late 01 03 05 50 2 5 3 9 times times 2 3 1-01-2010 to 28-01-201002 04 50 2 5 4 9 times times 2 3 1-01-2010 to 28-01-2010

Hint 01 50 2 5 3 9 times times 2 3 1-01-2010 to 28-01-201002 03 50 2 5 3 7 times times 2 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions vary significantly [40]. The authors used a one-way ANOVA test [41] to show the significance of the proposed algorithm; one-way ANOVA is used to validate and compare differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.

Table 5: List of competitor methods.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1


Table 7: Experimental results with respect to best value.

Instance | Optimal value | MODBCO (best/worst) | M1 (best/worst) | M2 (best/worst) | M3 (best/worst) | M4 (best/worst) | M5 (best/worst)
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued.

Instance | Optimal value | MODBCO (best/worst) | M1 (best/worst) | M2 (best/worst) | M3 (best/worst) | M4 (best/worst) | M5 (best/worst)
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the difference between any two methods as significant or nonsignificant; it ranks the methods by mean value in increasing or decreasing order and groups methods that are not significantly different.
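The one-way ANOVA decision used in Tables 8-18 reduces to comparing an F statistic against a critical value at the 0.05 level. A dependency-free sketch of the F computation (with hypothetical sample groups, not the paper's data) is:

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA:
    F = MS_between / MS_within, the layout reported in Tables 8-16."""
    k = len(groups)                      # number of methods compared
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-groups sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-groups sum of squares: spread of observations around group means
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

For three hypothetical groups [1, 2, 3], [2, 3, 4], and [7, 8, 9], the sketch returns F = 31.0 with (2, 6) degrees of freedom; a significance value below 0.05 for such an F rejects the null hypothesis, after which a DMRT-style grouping explores which methods actually differ.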

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets under a similar environmental setup, using the performance metrics discussed above. The results produced by MODBCO are competitive with those of the previous methods listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. The numbers in the table refer to the best solutions obtained by the corresponding algorithms. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061.949 | 5 | 212.3898 | 3.620681 | 0.003
Within groups | 23933.354 | 408 | 58.66018 | |
Total | 24995.303 | 413 | | |

(b) DMRT test (Duncan test: best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished the best result in 34 out of the 69 instances.

The statistical tests ANOVA and DMRT for the best values are shown in Table 8. The significance values are less than 0.05, which shows that the null hypothesis is rejected. The significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | −0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | −0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | −0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | −9.65 | 20.03 | −2.64 | 33.48 | 18.53 | −4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | −2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%), comparing MODBCO with M1-M5 over the NRP instance cases (four panels: Cases 1-3, 4-6, 7-9, and 10-12).

the various optimization algorithms is observed. The DMRT test shows that two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on error rate shows that the proposed MODBCO yields a lower error rate than the competitor techniques. The computational analysis based on error rate (%) is shown in Table 9: out of the 33 instances of sprint type, 18 achieved a zero error rate, and 88% of sprint instances attained a lower error rate. For the medium and larger sized datasets, the obtained error rates are 62% and 44%, respectively. A negative value in the column indicates that the corresponding instance attained a value lower than the optimum specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage, but as the size of the dataset increases they fail to find the optimal solution and become trapped in local optima. The error rates (%) obtained using MODBCO and the other algorithms are shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence (%), comparing MODBCO with M1-M5 over the NRP instance cases (four panels: Cases 1-3, 4-6, 7-9, and 10-12).

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638.6809 | 5 | 127.7361796 | 15.26182 | 0.000
Within groups | 3414.820 | 408 | 8.369657384 | |
Total | 4053.501 | 413 | | |

(b) DMRT test (Duncan test: error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis: there is a significant difference in the values across the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of a solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small instances and an 82% convergence rate on the medium instances; for the long instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and became trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for the larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.

The statistical test results for Average Convergence with the different optimization algorithms are shown in Table 12. From the table it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | −139.09 | 39.82
Case 7 | 56.34 | −32.73 | 24.52 | 26.23 | −54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | −584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | −94.85 | 66.95
Case 12 | 15.13 | −46.99 | −22.73 | −9.46 | −411.18 | −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation, comparing MODBCO with M1-M5 over the NRP instance cases (four panels: Cases 1-3, 4-6, 7-9, and 10-12).

in the mean values of convergence across the different optimization algorithms. The ANOVA test depicts rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean and helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the values attained by the various optimization algorithms are depicted in Figure 9.

The statistical test results for Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation across the different optimization algorithms. The ANOVA test proves that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity, comparing MODBCO with M1-M5 over the NRP instance cases (four panels: Cases 1-3, 4-6, 7-9, and 10-12).

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 71264.2475 | 5 | 14252.8495 | 15.47047 | 0.0000
Within groups | 375887.826 | 408 | 921.29369 | |
Total | 447152.073 | 413 | | |

(b) DMRT test (Duncan test: Average Convergence)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
M4 | 69 | −47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solutions is the difference between the best and the worst convergence generated in the population; together with the error rate, it helps to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% in yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are shown in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms by mean value: there are three homogeneous groups


Table 13: Experimental results with respect to Average Standard Deviation.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion, comparing MODBCO with M1-M5 across Cases 1-12 of the NRP instances.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value; the larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are shown in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation

(a) ANOVA test

Source factor: Average Standard Deviation

Source    Sum of squares    df    Mean square    F    Sig.
Between groups    6974.407    5    1394.881    13.6322    0.000
Within groups    41747.6    408    102.3226
Total    48722.01    413

(b) DMRT test

Duncan test Average Standard Deviation

Method    N    Subset for alpha = 0.05 (1 / 2 / 3)
MODBCO    69    7.4743
M2    69    9.615152
M3    69    9.677027
M5    69    9.729477
M1    69    10.13242
M4    69    11.9316
Sig.    1.00 / 0.394 / 1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms are chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 7. To evaluate the performance of the proposed

24 Computational Intelligence and Neuroscience

Table 15: Experimental result with respect to Convergence Diversity

Case    MODBCO    M1    M2    M3    M4    M5
Case 1    30.00    35.25    37.28    32.83    37.94    38.83
Case 2    22.96    27.92    29.18    23.85    27.95    27.62
Case 3    53.05    56.21    64.66    59.29    60.79    65.05
Case 4    29.99    34.03    33.62    30.84    39.46    33.32
Case 5    7.01    7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21    26.98    19.29    25.45    24.87
Case 7    43.69    54.66    48.13    55.47    50.24    56.18
Case 8    27.67    30.03    31.83    24.11    30.35    29.69
Case 9    6.89    8.10    8.22    8.04    8.24    9.10
Case 10    37.10    44.29    45.53    38.95    41.91    49.98
Case 11    11.43    11.64    10.81    11.36    11.91    12.15
Case 12    91.13    75.96    81.78    69.90    86.63    85.88

Table 16: Statistical analysis with respect to Convergence Diversity

(a) ANOVA test

Source factor: Convergence Diversity

Source    Sum of squares    df    Mean square    F    Sig.
Between groups    5144.758    5    1028.952    9.287168    0.000
Within groups    45203.48    408    110.7928
Total    50348.24    413

(b) DMRT test

Duncan test: Convergence Diversity

Method    N    Subset for alpha = 0.05 (1 / 2 / 3)
M3    69    18.363232
MODBCO    69    18.42029
M1    69    19.68116
M2    69    20.57971
M4    69    20.68116
M5    69    21.2464
Sig.    0.919 / 0.096 / 1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO has more ability to attain the best values on the performance metrics, compared on the INRC2010 dataset, than the competitor algorithms.

Compared with the other existing methods, the mean value of MODBCO is 19% closer to the optimum value, and it attained a lesser worst value in addition to the best solution. The datasets are divided based on their size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% when compared with the other competitor methods.

The proposed system is also tested on the larger sized datasets, and it works astoundingly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances got the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion

Case    MODBCO    M1    M2    M3    M4    M5
Case 1    0.00    6.40    1.40    3.20    5.20    1.60
Case 2    −1.00    21.20    0.80    19.60    21.90    0.80
Case 3    −0.40    8.10    1.80    10.60    11.00    0.90
Case 4    −5.67    11.33    −1.67    20.33    11.00    −2.33
Case 5    5.20    24.00    6.80    6.80    40.00    9.80
Case 6    15.80    46.40    25.60    25.60    215.60    39.80
Case 7    13.20    46.00    16.80    16.80    67.80    20.20
Case 8    5.00    49.00    10.67    10.67    43.67    13.67
Case 9    1.20    40.80    5.00    5.00    127.60    20.20
Case 10    18.00    45.40    26.20    26.20    100.00    32.60
Case 11    26.00    70.00    33.60    33.60    335.40    40.60
Case 12    15.67    36.33    20.00    20.00    141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion

(a) ANOVA test

Source factor: Average Cost Diversion

Source    Sum of squares    df    Mean square    F    Sig.
Between groups    1061.949    5    212.3898    4.919985    0.000
Within groups    17612.867    408    43.16879
Total    18674.816    413

(b) DMRT test

Duncan test: Average Cost Diversion

Method    N    Subset for alpha = 0.05 (1 / 2)
MODBCO    69    6.202899
M2    69    10.10145
M5    69    14.05797
M3    69    15.31884
M1    69    29.13043
M4    69    149.5217
Sig.    0.573558 / 1

Optimization in solving the NRP by satisfying both the hard and the soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F. No. 2014-15/NFO-2014-15-OBC-PON-3843 (SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.
[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.
[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.
[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.
[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.
[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.
[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.
[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.
[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.
[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.
[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems: a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.
[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.
[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.
[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.
[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.
[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.
[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.
[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.
[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.
[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.
[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.
[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.
[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.
[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.
[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.
[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.
[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.
[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.
[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.
[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.
[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.
[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.
[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.
[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.
[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.
[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.
[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.
[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.
[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.
[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


Multi-Objective Directed Bee Colony Optimization

(1) Initialization: f(x) is the objective function to be minimized. Initialize e, the number of parameters, and L_g, the length of steps, where g = 0 to e. Initialize the initial and the final value of each parameter as Q_gi and Q_gf.
** Solution Representation **: the solutions are represented in the form of binary values, which can be generated as follows:
for each solution i = 1, ..., n:
    X_i = {x_i1, x_i2, ..., x_id | d ∈ total days & x_id = (rand ≥ 0.29) ∀d}
end for
(2) The number of steps for each parameter is calculated using n_g = (Q_gi − Q_gf) / L_g.
(3) The total number of volumes is calculated using N_v = ∏_{g=1}^{e} n_g.
(4) The midpoint of each volume, the starting point of the exploration, is calculated using [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2].
(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the values of the optimized points in the vector table [f(V_1), f(V_2), ..., f(V_{N_v})].
(7) The globally optimized point is chosen by the bee decision-making process using the consensus and quorum approach: f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})].

Algorithm 3: Pseudocode of MODBCO.
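As a rough illustration (not the authors' MATLAB implementation), steps (2)-(7) of Algorithm 3 can be sketched in Python. The objective function, the bounds, and the step lengths below are hypothetical, and the per-volume exploration is left as a pluggable stand-in for the Modified Nelder-Mead Method of Algorithm 2:

```python
import itertools
import math

def modbco_sketch(f, q_init, q_final, step_len, local_search=None):
    """Sketch of Algorithm 3: partition the search space into volumes,
    start the exploration from each volume's midpoint, and pick the
    global best by consensus (minimum over the vector table)."""
    e = len(q_init)
    # (2) number of steps per parameter: n_g = (Q_gi - Q_gf) / L_g
    n = [max(1, int(abs(q_init[g] - q_final[g]) / step_len[g])) for g in range(e)]
    # (3) total number of volumes: N_v = prod(n_g)
    n_v = math.prod(n)
    # (4) midpoints of the volumes along every parameter axis
    axes = []
    for g in range(e):
        lo, hi = sorted((q_init[g], q_final[g]))
        width = (hi - lo) / n[g]
        axes.append([lo + (k + 0.5) * width for k in range(n[g])])
    # (5)-(6) explore each volume (plain evaluation of the midpoint unless a
    # local search such as the Modified Nelder-Mead Method is supplied)
    vector_table = []
    for point in itertools.product(*axes):
        x = local_search(f, point) if local_search else point
        vector_table.append((f(x), x))
    assert len(vector_table) == n_v
    # (7) consensus: the globally optimized point is the minimum entry
    return min(vector_table, key=lambda t: t[0])

# Hypothetical two-parameter objective with minimum at (1, -2)
best_val, best_x = modbco_sketch(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                                 q_init=(-5.0, -5.0), q_final=(5.0, 5.0),
                                 step_len=(1.0, 1.0))
```

With 10 steps per axis this enumerates 100 volume midpoints; plugging a real local search into `local_search` would refine each midpoint before the consensus step.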

their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting an individual solution, gives it to the centralized system. To select the best optimal solution for minimal optimal cases, the mathematical formulation can be stated as

dance_i = min(f_i(V)) (32)

This mathematical formulation finds the minimal optimal cases among the search solutions, where f_i(V) is the search value calculated by the agent. The search values are recorded in the vector table V; V is the vector which consists of e elements. Each element contains the value of a parameter; both the optimal solution and the parameter values are recorded in the vector table.

5.2.2. Consensus. Consensus is the widespread agreement among the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether an agreement has been reached and the bees have started acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized points are selected using the mathematical formulation

f(V_g) = min[f(V_1), f(V_2), ..., f(V_{N_v})] (33)

5.2.3. Quorum. In the quorum method, the optimum solution is accepted as the final solution based on the threshold level obtained by the group decision-making process. When the solution reaches the optimal threshold level ξ_q, the solution is considered a final solution based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is small, the computation time decreases, but it leads to inaccurate experimental results. The threshold value should therefore be chosen to attain less computational time together with an accurate experimental result.
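The waggle dance, consensus, and quorum steps described above can be sketched as follows; this is a toy illustration in which the agents' recorded values and the threshold ξ_q are hypothetical:

```python
def waggle_dance(agent_values):
    """Each agent advertises the best (minimum) of its own recorded
    search values, as in Eq. (32): dance_i = min(f_i(V))."""
    return [min(values) for values in agent_values]

def consensus(dances):
    """Eq. (33): the globally optimized point is the minimum over the
    advertised values in the vector table."""
    return min(dances)

def quorum(dances, threshold):
    """Accept the consensus value as the final solution only once it
    reaches the quorum threshold xi_q; otherwise keep exploring."""
    best = consensus(dances)
    return best if best <= threshold else None

# Three agents report the values recorded in their vector tables
dances = waggle_dance([[5.0, 3.0], [9.0, 2.0], [7.0]])
```

Here `quorum(dances, 4.0)` would accept the consensus value 2.0, while a stricter threshold of 1.0 would send the colony back to exploration.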

6. Experimental Design and Analysis

6.1. Performance Metrics. The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Here six performance metrics are considered to investigate the significance and evaluate the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate. The Least Error Rate (LER) is the percentage of the difference between the known optimal value and the best value obtained. The LER can be calculated using

LER (%) = Σ_{i=1}^{r} (Optimal_{NRP-Instance} − fitness_i) / Optimal_{NRP-Instance} (34)

6.1.2. Average Convergence. The Average Convergence is the measure used to evaluate the quality of the generated population on average. The Average Convergence (AC) is the percentage of the average of the convergence rates of the solutions. The performance of the convergence time is increased by the Average Convergence to explore more solutions in the population. The Average Convergence is calculated using

AC = Σ_{i=1}^{r} [1 − (Avg_fitness_i − Optimal_{NRP-Instance}) / Optimal_{NRP-Instance}] × 100 (35)

where r is the number of instances in the given dataset.


Figure 6: Flowchart of MODBCO. (Start → initialization of the parameter e, the objective function f(X), the step length L_g, and the initial and final parameters Q_gi and Q_gf → calculation of the number of steps n_g = (Q_gi − Q_gf)/L_g, the number of volumes N_v = ∏_{g=1}^{e} n_g, and the midpoint values [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2] → exploration of the search space using MNMM → waggle dance → consensus and quorum → threshold level check → global optimum solution → Stop.)


6.1.3. Standard Deviation. The standard deviation (SD) is the measure of dispersion of a set of values from its mean value. The Average Standard Deviation is the average of the standard deviations of all instances taken from the dataset. The Average Standard Deviation (ASD) can be calculated using

ASD = sqrt( Σ_{i=1}^{r} (value obtained in each instance_i − mean value of the instance)² ) (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best − Convergence_worst (37)

where Convergence_best is the convergence rate of the best fitness individual and Convergence_worst is the convergence rate of the worst fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instance and the cost obtained from our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = Σ_{i=1}^{r} (Cost_i − Cost_{NRP-Instance}) / Total number of instances (38)

where r is the number of instances in the given dataset.
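Under the definitions in (34)-(38), the five metrics can be written down directly. The sketch below is illustrative only: absolute gaps are used so the percentages stay nonnegative, and the sample size is used in the ASD denominator, neither of which is spelled out in the printed formulas:

```python
import math

def least_error_rate(optimal, best):
    """Eq. (34): percentage gap between the known optimum and the best
    value obtained, summed over the r instances of a dataset."""
    return sum(abs(o - b) / o for o, b in zip(optimal, best)) * 100

def average_convergence(optimal, avg_fitness):
    """Eq. (35): average closeness of the generated population to the optimum."""
    r = len(optimal)
    return sum(1 - abs(a - o) / o for o, a in zip(optimal, avg_fitness)) * 100 / r

def average_standard_deviation(values):
    """Eq. (36): dispersion of the values obtained for one instance."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def convergence_diversity(conv_best, conv_worst):
    """Eq. (37): spread between the best and the worst convergence rate."""
    return conv_best - conv_worst

def average_cost_diversion(known_costs, obtained_costs):
    """Eq. (38): mean difference between the obtained cost and the known
    cost of the NRP instance."""
    r = len(known_costs)
    return sum(c - k for k, c in zip(known_costs, obtained_costs)) / r
```

For example, an instance with a known optimum of 50 and a best obtained value of 55 gives an LER of 10%, and a negative average cost diversion signals a newly improved best value, matching the interpretation of the negative entries in Table 17.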

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance the resource utilization;
(e) equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in Automated Timetabling [38]. The INRC2010 dataset is divided based on its complexity and size into three tracks, namely, the sprint, medium, and long datasets. Each track is divided into four types, early, late, hidden, and hint, with reference to the competition INRC2010. The first track, sprint, is the easiest and consists of 10 nurses and 33 datasets, which are sorted as 10 early type, 10 late type, 10 hidden type, and 3 hint type datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track

is medium, which is more complex than the sprint track; it consists of 30 to 31 nurses and 18 datasets, which are sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, which are sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives the detailed description of the datasets: columns one to three index each dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol "×" shows that there is no shift off or day off in the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified based on size. The datasets in case 1 to case 4 are smaller in size, case 5 to case 8 are considered medium in size, and the larger sized datasets are classified as case 9 to case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are done on the different optimization algorithms under similar environment conditions to assess the performance. The proposed algorithm to solve the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. Empirical evaluations set the parameters of the proposed system; appropriate parameter values are determined based on preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and the corresponding values are represented in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over the existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets

Track Type Instance Nurses Skills Shifts Contracts Unwanted pattern Shift off Day off Weekend Time period

Sprint
Early    01–10    10    1    4    4    3    2    1-01-2010 to 28-01-2010
Hidden    01-02    10    1    3    3    4    2    1-06-2010 to 28-06-2010
Hidden    03, 05, 08    10    1    4    3    8    2    1-06-2010 to 28-06-2010
Hidden    04, 09    10    1    4    3    8    2    1-06-2010 to 28-06-2010
Hidden    06, 07    10    1    3    3    4    2    1-01-2010 to 28-01-2010
Hidden    10    10    1    4    3    8    2    1-01-2010 to 28-01-2010
Late    01, 03–05    10    1    4    3    8    2    1-01-2010 to 28-01-2010
Late    02    10    1    3    3    4    2    1-01-2010 to 28-01-2010
Late    06, 07, 10    10    1    4    3    0    2    1-01-2010 to 28-01-2010
Late    08    10    1    4    3    0    ×    ×    2    1-01-2010 to 28-01-2010
Late    09    10    1    4    3    0    ×    ×    2, 3    1-01-2010 to 28-01-2010
Hint    01, 03    10    1    4    3    8    2    1-01-2010 to 28-01-2010
Hint    02    10    1    4    3    0    2    1-01-2010 to 28-01-2010

Medium
Early    01–05    31    1    4    4    0    2    1-01-2010 to 28-01-2010
Hidden    01–04    30    2    5    4    9    ×    ×    2    1-06-2010 to 28-06-2010
Hidden    05    30    2    5    4    9    ×    ×    2    1-06-2010 to 28-06-2010
Late    01    30    1    4    4    7    2    1-01-2010 to 28-01-2010
Late    02, 04    30    1    4    3    7    2    1-01-2010 to 28-01-2010
Late    03    30    1    4    4    0    2    1-01-2010 to 28-01-2010
Late    05    30    2    5    4    7    2    1-01-2010 to 28-01-2010
Hint    01, 03    30    1    4    4    7    2    1-01-2010 to 28-01-2010
Hint    02    30    1    4    4    7    2    1-01-2010 to 28-01-2010

Long
Early    01–05    49    2    5    3    3    2    1-01-2010 to 28-01-2010
Hidden    01–04    50    2    5    3    9    ×    ×    2, 3    1-06-2010 to 28-06-2010
Hidden    05    50    2    5    3    9    ×    ×    2, 3    1-06-2010 to 28-06-2010
Late    01, 03, 05    50    2    5    3    9    ×    ×    2, 3    1-01-2010 to 28-01-2010
Late    02, 04    50    2    5    4    9    ×    ×    2, 3    1-01-2010 to 28-01-2010
Hint    01    50    2    5    3    9    ×    ×    2, 3    1-01-2010 to 28-01-2010
Hint    02, 03    50    2    5    3    7    ×    ×    2    1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size

SI number    Case    Track    Type
1    Case 1    Sprint    Early
2    Case 2    Sprint    Hidden
3    Case 3    Sprint    Late
4    Case 4    Sprint    Hint
5    Case 5    Medium    Early
6    Case 6    Medium    Hidden
7    Case 7    Medium    Late
8    Case 8    Medium    Hint
9    Case 9    Long    Early
10    Case 10    Long    Hidden
11    Case 11    Long    Late
12    Case 12    Long    Hint

like ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and help to differentiate it from the existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare

Type    Method    Reference
M1    Artificial Bee Colony Algorithm    [14]
M2    Hybrid Artificial Bee Colony Algorithm    [15]
M3    Global best harmony search    [16]
M4    Harmony Search with Hill Climbing    [17]
M5    Integer Programming Technique for NRP    [18]

Table 6: Configuration parameters for the experimental evaluation

Parameter    Value
Number of bees    100
Maximum iterations    1000
Initialization technique    Binary
Heuristic    Modified Nelder-Mead Method
Termination condition    Maximum iterations
Runs    20
Reflection coefficient    α > 0
Expansion coefficient    γ > 1
Contraction coefficient    0 < β < 1
Shrinkage coefficient    0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.
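For reference, the one-way ANOVA statistic behind Tables 14, 16, and 18 can be computed as below. This is a minimal illustrative helper with hypothetical sample data; the published analysis would additionally look up the p value of the F statistic to obtain the reported significance of 0.000:

```python
def one_way_anova(groups):
    """Minimal one-way ANOVA over a list of sample groups. Returns
    (SS_between, SS_within, df_between, df_within, F), mirroring the
    'between groups'/'within groups' rows of the ANOVA tables."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # variation of the group means around the grand mean
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    # variation of the samples around their own group means
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    # F = mean square between / mean square within
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, f_stat

# Hypothetical per-method result samples (three methods, three runs each)
ss_b, ss_w, df_b, df_w, f_stat = one_way_anova(
    [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [7.0, 8.0, 9.0]])
```

A large F with a p value below 0.05 rejects the null hypothesis of equal means, which is the reasoning applied to the significance values throughout Sections 6.4 and 7.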


Table 7: Experimental result with respect to best value

Instances    Optimal value    MODBCO (Best/Worst)    M1 (Best/Worst)    M2 (Best/Worst)    M3 (Best/Worst)    M4 (Best/Worst)    M5 (Best/Worst)
Sprint early 01    56    56/75    63/74    57/81    59/75    58/77    58/77
Sprint early 02    58    59/77    66/89    59/80    61/82    64/88    60/85
Sprint early 03    51    51/68    60/83    52/75    54/70    59/84    53/67
Sprint early 04    59    59/77    68/86    60/76    63/80    67/84    61/85
Sprint early 05    58    58/74    65/86    59/77    60/79    63/86    60/84
Sprint early 06    54    53/69    59/81    55/79    57/73    58/75    56/77
Sprint early 07    56    56/75    62/81    58/79    60/75    61/85    57/78
Sprint early 08    56    56/76    59/79    58/79    59/80    58/82    57/78
Sprint early 09    55    55/82    59/79    57/76    59/80    61/78    56/80
Sprint early 10    52    52/67    58/76    54/73    55/75    58/78    53/76
Sprint hidden 01    32    32/48    45/67    34/57    43/60    46/65    33/55
Sprint hidden 02    32    32/51    41/59    34/55    37/53    44/68    33/51
Sprint hidden 03    62    62/79    74/92    63/86    71/90    78/96    62/84
Sprint hidden 04    66    66/81    79/102    67/91    80/96    78/100    66/91
Sprint hidden 05    59    58/73    68/90    60/79    63/84    69/88    59/77
Sprint hidden 06    134    128/146    164/187    131/145    203/219    169/188    132/150
Sprint hidden 07    153    154/172    181/204    154/175    197/215    187/210    155/174
Sprint hidden 08    204    201/219    246/265    205/225    267/286    240/260    206/224
Sprint hidden 09    338    337/353    372/390    339/363    274/291    372/389    340/365
Sprint hidden 10    306    306/324    328/347    307/330    347/368    322/342    308/324
Sprint late 01    37    37/53    50/68    38/57    46/66    52/72    39/57
Sprint late 02    42    41/61    53/74    43/61    50/71    56/80    44/67
Sprint late 03    48    45/65    57/80    50/73    57/76    60/77    49/72
Sprint late 04    75    71/87    88/108    75/95    106/127    95/115    74/99
Sprint late 05    44    46/66    52/65    46/68    53/74    57/74    46/70
Sprint late 06    42    42/60    46/65    44/68    45/64    52/70    43/62
Sprint late 07    42    44/63    51/69    46/69    62/81    55/78    44/68
Sprint late 08    17    17/36    17/37    19/41    19/39    19/40    17/39
Sprint late 09    17    17/33    18/37    19/42    19/40    17/40    17/40
Sprint late 10    43    43/59    56/74    45/67    56/74    54/71    43/62
Sprint hint 01    78    73/92    85/103    77/91    103/119    90/108    77/99
Sprint hint 02    47    43/59    57/76    48/68    61/80    56/81    45/68
Sprint hint 03    57    49/67    74/96    52/75    79/97    69/93    53/66
Medium early 01    240    245/263    262/284    247/267    247/262    280/305    250/269
Medium early 02    240    243/262    263/280    247/270    247/263    281/301    250/273
Medium early 03    236    239/256    261/281    244/264    244/262    287/311    245/269
Medium early 04    237    245/262    259/278    242/261    242/258    278/297    247/272
Medium early 05    303    310/326    331/351    310/332    310/329    330/351    313/338
Medium hidden 01    123    143/159    190/210    157/180    157/177    410/429    192/214
Medium hidden 02    243    230/248    286/306    256/277    256/273    412/430    266/284
Medium hidden 03    37    53/70    66/84    56/78    56/69    182/203    61/81
Medium hidden 04    81    85/104    102/119    95/119    95/114    168/191    100/124
Medium hidden 05    130    182/201    202/224    178/202    178/197    520/545    194/214
Medium late 01    157    176/195    207/227    175/198    175/194    234/257    179/194
Medium late 02    18    30/45    53/76    32/52    32/53    49/67    35/55
Medium late 03    29    35/52    71/90    39/60    39/59    59/78    44/66
Medium late 04    35    42/58    66/83    49/57    49/70    71/89    48/70
Medium late 05    107    129/149    179/199    135/156    135/156    272/293    141/164
Medium hint 01    40    42/62    70/90    49/71    49/65    64/82    50/71
Medium hint 02    84    91/107    142/162    95/116    95/115    133/158    96/115
Medium hint 03    129    135/153    188/209    141/161    141/152    187/208    148/166

18 Computational Intelligence and Neuroscience

Table 7: Continued.

Instance | Optimal | MODBCO best/worst | M1 best/worst | M2 best/worst | M3 best/worst | M4 best/worst | M5 best/worst
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted and the alternate hypothesis is rejected.
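The one-way ANOVA statistic behind this decision rule can be sketched in a few lines of pure Python (the sample data here are hypothetical; the paper's analysis was presumably produced with a statistics package):

```python
# One-way ANOVA F statistic: ratio of between-group to within-group variance,
# matching the Sum of squares / df / Mean square / F layout of Tables 8-18.
def anova_f(groups):
    """Return the F ratio for a list of sample groups."""
    n = sum(len(g) for g in groups)                  # total observations
    k = len(groups)                                  # number of groups
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)                # df1 = k - 1
    ms_within = ss_within / (n - k)                  # df2 = n - k
    return ms_between / ms_within

# A large F (its p-value under the F(df1, df2) distribution below 0.05)
# rejects the null hypothesis of equal group means.
print(anova_f([[1, 2, 3], [2, 3, 4], [8, 9, 10]]))
```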

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies significant and nonsignificant differences between any two methods. The method ranks the mean values in increasing or decreasing order and groups methods that are not significantly different.
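A simplified sketch of how such homogeneous subsets arise is shown below. The real DMRT derives its least significant ranges from the studentized range statistic; here the threshold `lsr` and the mean values are hypothetical constants chosen for illustration only:

```python
# Rank methods by mean and place adjacent means within a least significant
# range (lsr) into the same homogeneous group, DMRT-style (simplified).
def homogeneous_groups(means, lsr):
    ranked = sorted(means.items(), key=lambda kv: kv[1])
    groups, current = [], [ranked[0]]
    for name, mean in ranked[1:]:
        if mean - current[0][1] <= lsr:   # within range of the group's smallest mean
            current.append((name, mean))
        else:
            groups.append(current)
            current = [(name, mean)]
    groups.append(current)
    return [[name for name, _ in g] for g in groups]

# Means shaped like a DMRT table in this paper (values hypothetical).
means = {"MODBCO": 120.2, "M2": 124.1, "M5": 128.1,
         "M3": 129.3, "M1": 143.2, "M4": 263.6}
print(homogeneous_groups(means, lsr=25.0))
# -> [['MODBCO', 'M2', 'M5', 'M3', 'M1'], ['M4']]
```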

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets, under the same environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods listed in Tables 7-18. The computational analysis on the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is minimization of cost, the lowest values are the best solutions attained. To evaluate the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 3.620681 | 0.003
Within groups | 23933354 | 408 | 58660.18 | |
Total | 24995303 | 413 | | |

(b) DMRT test (Duncan test, best value; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. MODBCO accomplishes the best result in 34 out of the 69 instances.

The statistical analysis tests ANOVA and DMRT for best values are shown in Table 8. The significance value is less than 0.05, so the null hypothesis is rejected. A significant difference between


Table 9: Experimental results with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | -0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | -0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | -0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | -9.65 | 20.03 | -2.64 | 33.48 | 18.53 | -4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | -2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%). Four panels compare MODBCO with M1-M5 on the NRP instances: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.

the various optimization algorithms is observed. The DMRT test shows that two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 sprint-type instances, 18 achieve a zero error rate. For the sprint-type dataset, 88% of instances attain a lower error rate; for the medium and larger sized datasets, the figures are 62% and 44%, respectively. A negative value in the column indicates that the corresponding instance attains a better optimum value than specified in INRC2010.
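The error-rate values in Table 9 are consistent with the usual relative-error definition, stated here as an assumption since the paper does not give the formula explicitly:

```python
# Relative error against the published INRC2010 optimum, in percent
# (an assumed formula): negative values mean the algorithm beat the optimum.
def error_rate(obtained, optimal):
    return (obtained - optimal) / optimal * 100.0

# e.g. an instance with published optimum 54 solved at cost 53:
print(round(error_rate(53, 54), 2))  # -1.85
```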

The competitors M2 and M5 generate better solutions at the initial stage; as the size of the dataset increases, they are unable to find the optimal solution and become trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence. Four panels compare MODBCO with M1-M5 on the NRP instances: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test (Duncan test, error rate; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected: there is a significant difference in error rate across the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances, an 82% convergence rate on medium instances, and a 77% convergence rate on longer instances. Negative values in the column show that the corresponding instances deviate from the optimal solution and become trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse for larger instances in many algorithms, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
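The text defines Average Convergence only loosely. One reading consistent with Table 11 (100 when the population's average cost equals the optimum, negative once the average cost exceeds twice the optimum) is sketched below; the exact formula is an assumption, not stated in the paper:

```python
# Assumed convergence metric: how close the population's average cost sits
# to the optimum, as a percentage of the optimum.
def average_convergence(avg_cost, optimal):
    return (1.0 - (avg_cost - optimal) / optimal) * 100.0

print(average_convergence(56.0, 56.0))   # population sitting at the optimum
print(average_convergence(120.0, 56.0))  # cost beyond twice the optimum: negative
```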

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | -139.09 | 39.82
Case 7 | 56.34 | -32.73 | 24.52 | 26.23 | -54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | -584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | -94.85 | 66.95
Case 12 | 15.13 | -46.99 | -22.73 | -9.46 | -411.18 | -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. Four panels compare MODBCO with M1-M5 on the NRP instances: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.

in the mean values of convergence across the different optimization algorithms. The ANOVA test rejects the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from the mean value, and it helps to characterize the behavior of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.

The statistical test results for Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of standard deviation across the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity. Four panels compare MODBCO with M1-M5 on the NRP instances: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 712642.475 | 5 | 142528.495 | 15.47047 | 0.000
Within groups | 3758878.26 | 408 | 9212.9369 | |
Total | 4471520.73 | 413 | | |

(b) DMRT test (Duncan test, Average Convergence; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
M4 | 69 | -47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. Convergence Diversity and error rate together help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, it is 62%. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of Convergence Diversity across the various optimization algorithms. For the post hoc analysis, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms by mean value: there are three homogeneous groups


Table 13: Experimental results with respect to Average Standard Deviation.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion, comparing MODBCO with M1-M5 across Cases 1-12.

among the various optimization algorithms with respect to the mean values of Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverge, of which many instances still yield the optimum value. The larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instance achieves a new optimized value. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table, it is inferred that there is a significant difference in the mean values of cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 697.4407 | 5 | 139.4881 | 13.6322 | 0.000
Within groups | 4174.76 | 408 | 10.23226 | |
Total | 4872.201 | 413 | | |

(b) DMRT test (Duncan test, Average Standard Deviation; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2 | Subset 3
MODBCO | 69 | 7.4743 | |
M2 | 69 | | 9.615152 |
M3 | 69 | | 9.677027 |
M5 | 69 | | 9.729477 |
M1 | 69 | | 10.13242 |
M4 | 69 | | | 11.9316
Sig. | | 1.00 | 0.394 | 1.00

the various optimization algorithms with respect to the mean values of cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with the proposed algorithm MODBCO. Various existing algorithms are chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of the proposed algorithm are compared with those of the competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental results with respect to Convergence Diversity.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 30.00 | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2 | 22.96 | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3 | 53.05 | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4 | 29.99 | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5 | 7.01 | 7.87 | 8.33 | 6.71 | 8.77 | 9.29
Case 6 | 20.89 | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7 | 43.69 | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8 | 27.67 | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9 | 6.89 | 8.10 | 8.22 | 8.04 | 8.24 | 9.10
Case 10 | 37.10 | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43 | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13 | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 514.4758 | 5 | 102.8952 | 9.287168 | 0.000
Within groups | 4520.348 | 408 | 11.07928 | |
Total | 5034.824 | 413 | | |

(b) DMRT test (Duncan test, Convergence Diversity; three homogeneous subsets at alpha = 0.05)

Method | N | Mean
M3 | 69 | 18.363232
MODBCO | 69 | 18.42029
M1 | 69 | 19.68116
M2 | 69 | 20.57971
M4 | 69 | 20.68116
M5 | 69 | 21.2464
Sig. | | 0.919, 0.096, 1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of the proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO attains the best values on the performance metrics compared to the competitor algorithms on INRC2010.

Compared with the other existing methods, the mean value of MODBCO is reduced by 1.9% towards the optimum value, and it attains a lower worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%,

22.2%, and 41.3%, respectively. The error rate of the proposed approach, compared with the other competitor methods on the various sized datasets, reduces by 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO achieves 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. The error rate of the proposed algorithm is reduced by 7.7% compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works considerably better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. The method balances exploration and exploitation without any bias; thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for global search and the Modified Nelder-Mead Method for local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of the proposed algorithm, 69 cases of various sized datasets are chosen, and 34 out of the 69 instances attain the best result. Thus, the algorithm contributes a new deterministic search and an effective heuristic approach to solving the NRP. MODBCO outperforms classical Bee Colony


Table 17: Experimental results with respect to Average Cost Diversion.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 0.00 | 6.40 | 1.40 | 3.20 | 5.20 | 1.60
Case 2 | -1.00 | 21.20 | 0.80 | 19.60 | 21.90 | 0.80
Case 3 | -0.40 | 8.10 | 1.80 | 10.60 | 11.00 | 0.90
Case 4 | -5.67 | 11.33 | -1.67 | 20.33 | 11.00 | -2.33
Case 5 | 5.20 | 24.00 | 6.80 | 6.80 | 40.00 | 9.80
Case 6 | 15.80 | 46.40 | 25.60 | 25.60 | 215.60 | 39.80
Case 7 | 13.20 | 46.00 | 16.80 | 16.80 | 67.80 | 20.20
Case 8 | 5.00 | 49.00 | 10.67 | 10.67 | 43.67 | 13.67
Case 9 | 1.20 | 40.80 | 5.00 | 5.00 | 127.60 | 20.20
Case 10 | 18.00 | 45.40 | 26.20 | 26.20 | 100.00 | 32.60
Case 11 | 26.00 | 70.00 | 33.60 | 33.60 | 335.40 | 40.60
Case 12 | 15.67 | 36.33 | 20.00 | 20.00 | 141.67 | 29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 10619.49 | 5 | 2123.898 | 4.919985 | 0.000
Within groups | 176128.67 | 408 | 431.6879 | |
Total | 186748.16 | 413 | | |

(b) DMRT test (Duncan test, Average Cost Diversion; subsets for alpha = 0.05)

Method | N | Subset 1 | Subset 2
MODBCO | 69 | 6.202899 |
M2 | 69 | 10.10145 |
M5 | 69 | 14.05797 |
M3 | 69 | 15.31884 |
M1 | 69 | 29.13043 |
M4 | 69 | | 149.5217
Sig. | | 0.573558 | 1.000

Optimization in solving the NRP while satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions to approximate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) investigating the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems: a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


Figure 6: Flowchart of MODBCO. The flow proceeds: Start → choose method → explore the search space using MNMM → consensus/quorum → calculate the optimum solution → waggle dance → threshold level → global optimum solution → Stop. Side computations in the flowchart: number of steps n_g = (Q_gi − Q_gf)/L_g; midpoint values [(Q_i1 − Q_f1)/2, (Q_i2 − Q_f2)/2, ..., (Q_ie − Q_fe)/2]; number of volumes N = ∏_g n_g. Initialization parameters: e (dimension of the objective function f(X)), L_g (step length), Q_gi (initial parameter), Q_gf (final parameter).
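The discretization steps shown in the flowchart can be sketched as follows (hypothetical function and parameter names; this is an illustrative reading of the flowchart's formulas, with `initial`/`final` standing for Q_gi/Q_gf and `step_lengths` for L_g):

```python
def discretize(initial, final, step_lengths):
    """Per-parameter step counts n_g = (Q_gi - Q_gf) / L_g,
    midpoint values (Q_gi - Q_gf) / 2 as printed in the flowchart,
    and the total number of volumes N as the product of the n_g."""
    steps = [(qi - qf) / lg for qi, qf, lg in zip(initial, final, step_lengths)]
    midpoints = [(qi - qf) / 2 for qi, qf in zip(initial, final)]
    total = 1
    for n in steps:
        total *= n  # N = product over all parameters g of n_g
    return steps, midpoints, total
```

For two parameters ranging over [0, 10] and [0, 8] with step length 2, this yields 5 × 4 = 20 volumes.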


6.1.3. Standard Deviation. Standard deviation (SD) is the measure of dispersion of a set of values from its mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset and can be calculated using

ASD = √( ∑_{i=1}^{r} (value obtained in instance_i − mean value of the instance)² ),  (36)

where r is the number of instances in the given dataset.

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population. It can be calculated using

CD = Convergence_best − Convergence_worst,  (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. The Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset, and can be calculated from

ACD = ∑_{i=1}^{r} (Cost_i − Cost_{NRP-Instance}) / (total number of instances),  (38)

where r is the number of instances in the given dataset.
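Under the definitions above, the three metrics can be sketched as plain functions (hypothetical names; Eq. (36) is implemented exactly as printed, without a further division by r):

```python
import math

def average_standard_deviation(values, means):
    """Eq. (36): square root of the summed squared deviations of
    each instance's obtained value from that instance's mean."""
    return math.sqrt(sum((v - m) ** 2 for v, m in zip(values, means)))

def convergence_diversity(best, worst):
    """Eq. (37): spread between best and worst convergence rates."""
    return best - worst

def average_cost_diversion(costs, known_costs):
    """Eq. (38): mean difference between obtained costs and the
    published INRC2010 instance costs."""
    r = len(costs)
    return sum(c - k for c, k in zip(costs, known_costs)) / r
```

A negative ACD, as in some rows of Table 17, means the approach found costs below the published values on average.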

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method for solving the NRP is described briefly in this section. The main objective of the proposed algorithm is to satisfy the multiple objectives of the NRP, namely to:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many of the soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) distribute the workload equally among the nurses.
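The hard/soft split in objectives (b) and (c) is commonly realised as a penalty-based evaluation; a minimal sketch, with hypothetical helper signatures rather than the authors' exact constraint model:

```python
def roster_cost(roster, hard_checks, soft_checks):
    """Penalty-based roster evaluation: a roster violating any hard
    constraint is infeasible; otherwise the cost is the weighted sum
    of soft-constraint violation counts (illustrative sketch only).

    hard_checks: predicates that must all hold for feasibility.
    soft_checks: (weight, violation_counter) pairs to be minimized.
    """
    if any(not check(roster) for check in hard_checks):
        return float("inf")  # infeasible: some hard constraint broken
    return sum(weight * count(roster) for weight, count in soft_checks)
```

With this shape, "minimize the total cost" in objective (a) is exactly the minimization of the weighted soft-constraint penalty over feasible rosters.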

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in Automated Timetabling [38]. The INRC2010 dataset is divided by complexity and size into three tracks, namely the sprint, medium, and long datasets. Each track is divided into four types, early, late, hidden, and hint, with reference to the INRC2010 competition. The first track, sprint, is the easiest; it involves 10 nurses and consists of 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification.

The second track, medium, is more complex than the sprint track; it involves 30 to 31 nurses and consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint datasets. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is given in Table 3. The datasets are classified into twelve cases based on instance size and listed in Table 4.

Table 3 gives a detailed description of the datasets: columns one to three index each dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. Nurse preferences are captured by the shift-off and day-off requests in columns nine and ten. The number of weekend days is shown in column eleven, and the last column indicates the scheduling period. The symbol "×" indicates that the corresponding dataset has no shift-off and day-off requests.

Table 4 lists the datasets used in the experiments, classified by size: the datasets in Cases 1 to 4 are small, those in Cases 5 to 8 are medium, and the large datasets fall in Cases 9 to 12.
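The classification of Table 4 can be captured as a simple lookup (hypothetical names; "medium" is used here for the track that Table 4 labels "Middle"):

```python
# Track/type pairs mapped to the twelve cases listed in Table 4.
CASES = {
    ("sprint", "early"): 1, ("sprint", "hidden"): 2,
    ("sprint", "late"): 3,  ("sprint", "hint"): 4,
    ("medium", "early"): 5, ("medium", "hidden"): 6,
    ("medium", "late"): 7,  ("medium", "hint"): 8,
    ("long", "early"): 9,   ("long", "hidden"): 10,
    ("long", "late"): 11,   ("long", "hint"): 12,
}

def case_of(track, type_):
    """Return the case number (1-12) for an INRC2010 instance."""
    return CASES[(track.lower(), type_.lower())]
```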

The performance of MODBCO on the NRP is evaluated using the INRC2010 dataset. The experiments run the different optimization algorithms under similar environmental conditions to assess their performance. The proposed algorithm is coded in MATLAB 2012 under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO for solving the NRP. The parameters of the proposed system are set by empirical evaluation, with appropriate values determined through preliminary experiments. The competitor methods chosen to evaluate the performance of the proposed algorithm are listed in Table 5, and the heuristic parameters with their corresponding values are given in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures for validating the performance of algorithms are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted patterns | Shift off | Day off | Weekend | Time period
Sprint | Early | 01-10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03-05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01-05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01-04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01-05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01-04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI number | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Middle | Early
6 | Case 6 | Middle | Hidden
7 | Case 7 | Middle | Late
8 | Case 8 | Middle | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

such as ANOVA, the Dunnett test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from the existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical tool to determine whether one or more solutions differ significantly [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm; one-way ANOVA validates and compares the differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis that the algorithms perform equally is tested against the alternative that their performance differs.

Table 5: List of competitor methods.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for the experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1
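The coefficient bounds in Table 6 are the standard Nelder-Mead parameters. One iteration of a plain Nelder-Mead step using those coefficients might look as follows; this is an illustrative sketch with typical values (α = 1, γ = 2, β = 0.5, δ = 0.5), not the authors' modified variant:

```python
def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5):
    """One reflection/expansion/contraction/shrink step of the
    classic Nelder-Mead simplex method for minimizing f."""
    simplex = sorted(simplex, key=f)          # best vertex first
    best, worst = simplex[0], simplex[-1]
    n = len(best)
    # Centroid of every vertex except the worst
    centroid = [sum(v[i] for v in simplex[:-1]) / (len(simplex) - 1)
                for i in range(n)]
    reflect = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
    if f(reflect) < f(best):
        # Try to expand further along the promising direction
        expand = [c + gamma * (r - c) for c, r in zip(centroid, reflect)]
        candidate = expand if f(expand) < f(reflect) else reflect
    elif f(reflect) < f(simplex[-2]):
        candidate = reflect
    else:
        # Contract toward the worst vertex
        contract = [c + beta * (w - c) for c, w in zip(centroid, worst)]
        if f(contract) < f(worst):
            candidate = contract
        else:
            # Shrink every vertex toward the best one
            return [[b + delta * (vi - b) for b, vi in zip(best, v)]
                    for v in simplex]
    return simplex[:-1] + [candidate]
```

In MODBCO this local step is the exploitation component, applied inside the bee colony's global exploration.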


Table 7: Experimental result with respect to best value (each cell gives best/worst).

Instance | Optimal | MODBCO | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued (each cell gives best/worst).

Instance | Optimal | MODBCO | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs across multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the difference between any two methods as significant or nonsignificant: it ranks the methods by mean value in increasing or decreasing order and groups together the methods that do not differ significantly.

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with the other optimization algorithms for solving the NRP on the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods, as listed in Tables 7-18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitor methods are shown in Table 7. Each number in the table is the best solution obtained using the corresponding algorithm; since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. To evaluate the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061.949 | 5 | 212.3898 | 3.620681 | 0.003
Within groups | 23933.354 | 408 | 58.66018 | |
Total | 24995.303 | 413 | | |

(b) DMRT test (best value)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.0870 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse size. MODBCO achieved the best result on 34 of the 69 instances.

The statistical analysis tests, ANOVA and DMRT, for the best values are shown in Table 8. The significance value is less than 0.05, so the null hypothesis is rejected and a significant difference between the various optimization algorithms is observed. The DMRT test shows the homogeneous groupings: two homogeneous groups are formed among the competitor algorithms for the best values.

Table 9: Experimental result with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | −0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | −0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | −0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | −9.65 | 20.03 | −2.64 | 33.48 | 18.53 | −4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | −2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%), plotting MODBCO against M1-M5 for Cases 1-12.
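The between/within decomposition behind the ANOVA tables (for example, Table 8) can be sketched as follows (hypothetical function name; the function returns the F statistic and the degrees of freedom, matching the df and mean-square columns of those tables):

```python
def one_way_anova(groups):
    """One-way ANOVA: partition total variation into between-group
    and within-group sums of squares and return (F, df_between,
    df_within), as tabulated in the paper's ANOVA tests."""
    k = len(groups)                       # number of algorithms
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    ms_b, ms_w = ss_between / df_b, ss_within / df_w  # mean squares
    return ms_b / ms_w, df_b, df_w
```

The null hypothesis is rejected when the p-value of this F statistic (on df_b, df_w degrees of freedom) falls below the 0.05 significance level used throughout the paper.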

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competitor techniques. The computational analysis of the error rate (%) is shown in Table 9: 18 of the 33 sprint instances achieved a zero error rate, and 88% of the sprint instances attained a lower error rate than the competitors. For the medium and large datasets, the corresponding figures are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a better optimum value than the one specified in INRC2010.

The competitors M2 and M5 generate good solutions at the initial stage, but as the dataset size increases they fail to find the optimal solution and become trapped in local optima. The error rates (%) obtained by MODBCO and the other algorithms are shown in Figure 7.
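The sign convention in Table 9 is consistent with a percentage deviation from the published optimum; a hedged sketch (the exact normalization is defined in the metrics section earlier in the paper):

```python
def error_rate(obtained, optimal):
    """Percentage deviation of the obtained best cost from the
    published INRC2010 optimum; negative values indicate a new,
    lower-cost solution, matching the negative entries in Table 9."""
    return 100.0 * (obtained - optimal) / optimal
```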


Figure 8: Performance analysis with respect to Average Convergence (%), plotting MODBCO against M1-M5 for Cases 1-12.

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test (error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected: there is a significant difference in error rate across the optimization algorithms. The DMRT test indicates two homogeneous groups among the optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution relates the average fitness of the population to the fitness of the optimal solution. The computational results for Average Convergence are shown in Table 11. MODBCO attains a 90% convergence rate on the small instances, 82% on the medium instances, and 77% on the long instances. Negative values in a column show that the corresponding instances deviated from the optimal solution and became trapped in local optima. As the problem size increases the convergence rate falls, and for many algorithms it becomes far worse on the larger instances, as shown in Table 11. The Average Convergence attained by the various optimization algorithms is depicted in Figure 8.
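One formula consistent with this description and with the negative entries in Table 11 is the percentage closeness of the population's average fitness to the optimum; this is an assumption for illustration, as the exact definition appears in an earlier section of the paper:

```python
def average_convergence(avg_fitness, optimal):
    """Closeness (%) of the population's average fitness to the
    optimal fitness; the value drops below zero when the population
    deviates far from the optimum, as seen in Table 11 (M4 rows)."""
    return 100.0 * (1.0 - abs(avg_fitness - optimal) / optimal)
```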

Table 11: Experimental result with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | −139.09 | 39.82
Case 7 | 56.34 | −32.73 | 24.52 | 26.23 | −54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | −584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | −94.85 | 66.95
Case 12 | 15.13 | −46.99 | −22.73 | −9.46 | −411.18 | −53.43

The statistical test results for Average Convergence across the different optimization algorithms are shown in Table 12. There is a significant difference in the mean convergence values of the algorithms: the ANOVA test rejects the null hypothesis, since the significance value is 0.000. The post hoc analysis shows two homogeneous groups among the optimization algorithms with respect to the mean convergence values.

Figure 9: Performance analysis with respect to Average Standard Deviation, plotting MODBCO against M1-M5 for Cases 1-12.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean and helps characterize the behaviour of the proposed algorithm. The computed results for the Average Standard Deviation are shown in Table 13, and the values attained by the various optimization algorithms are depicted in Figure 9.

The statistical test results for Average Standard Deviation across the different optimization algorithms are shown in Table 14. There is a significant difference in the mean standard deviation values: the ANOVA test rejects the null hypothesis, since the significance value is 0.000, which is less than the critical value 0.05. In the DMRT test, there are three homogeneous groups among the optimization algorithms with respect to the mean standard deviation values.

Figure 10: Performance analysis with respect to Convergence Diversity, plotting MODBCO against M1-M5 for Cases 1-12.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 712642.475 | 5 | 142528.495 | 15.47047 | 0.0000
Within groups | 3758878.26 | 408 | 9212.9369 | |
Total | 4471520.73 | 413 | | |

(b) DMRT test (Average Convergence)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
M4 | 69 | −47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population; together with the error rate, it helps characterize the performance of the proposed algorithm. The computational analysis of Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity is 58% for the small datasets and 50% for the medium datasets; for the large datasets, it is 62%. Figure 10 compares the various optimization algorithms with respect to Convergence Diversity.

The statistical tests, ANOVA and DMRT, for Convergence Diversity are shown in Table 16. There is a significant difference in the mean Convergence Diversity values across the optimization algorithms: the significance value of the post hoc analysis is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test groups the algorithms by mean value into three homogeneous groups with respect to the mean Convergence Diversity values.

Table 13: Experimental result with respect to Average Standard Deviation.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion, plotting MODBCO against M1-M5 for Cases 1-12.

6.4.6. Average Cost Diversion. The computational analysis of cost diversion shows that the proposed MODBCO yields less cost diversion than the competitor techniques. The results for Average Cost Diversion are shown in Table 17. For the small and medium datasets, 13% and 38% of instances diverged, while many instances still attained the optimum value; the large datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 compares the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests, ANOVA and DMRT, for Average Cost Diversion are shown in Table 18. There is a significant difference in the mean cost diversion values across the optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among the optimization algorithms with respect to the mean cost diversion values.

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 697.4407 | 5 | 139.4881 | 13.6322 | 0.000
Within groups | 4174.76 | 408 | 10.23226 | |
Total | 4872.201 | 413 | | |

(b) DMRT test (Average Standard Deviation)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05) | Subset 3 (alpha = 0.05)
MODBCO | 69 | 7.4743 | |
M2 | 69 | | 9.615152 |
M3 | 69 | | 9.677027 |
M5 | 69 | | 9.729477 |
M1 | 69 | | 10.13242 |
M4 | 69 | | | 11.9316
Sig. | | 1.00 | 0.394 | 1.00

7. Discussion

The experiments for solving the NP-hard combinatorial Nurse Rostering Problem are conducted with the proposed algorithm MODBCO. Several existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of the proposed algorithm are compared with those of the competitor methods, and the best values are tabulated in Table 7. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 30.00 | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2 | 22.96 | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3 | 53.05 | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4 | 29.99 | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5 | 7.01 | 7.87 | 8.33 | 6.71 | 8.77 | 9.29
Case 6 | 20.89 | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7 | 43.69 | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8 | 27.67 | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9 | 6.89 | 8.10 | 8.22 | 8.04 | 8.24 | 9.10
Case 10 | 37.10 | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43 | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13 | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 5144.758 | 5 | 1028.952 | 9.287168 | 0.000
Within groups | 45203.48 | 408 | 110.7928 | |
Total | 50348.24 | 413 | | |

(b) DMRT test (Convergence Diversity)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05) | Subset 3 (alpha = 0.05)
M3 | 69 | 18.36323 | |
MODBCO | 69 | 18.42029 | |
M1 | 69 | | 19.68116 |
M2 | 69 | | 20.57971 |
M4 | 69 | | 20.68116 |
M5 | 69 | | | 21.2464
Sig. | | 0.919 | 0.096 | 1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 report the outcomes of the proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO attains the best values on the performance metrics more often than the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean best value of MODBCO is about 19% closer to the optimum, and it attains a lower worst value in addition to the best solution. With the datasets divided by size into small, medium, and large, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. Relative to the competitor methods, the error rate of the proposed approach reduces to 1.06% for the small datasets, 9.45% for the medium datasets, and 7.04% for the large datasets. The convergence rate of MODBCO reaches 90% for the small datasets, 82% for the medium datasets, and 77.37% for the large datasets. Overall, the error rate of the proposed algorithm is reduced by 77% compared with the competitor methods.

The proposed system is also tested on the larger datasets, where it performs markedly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens exploitation within the given exploration search space. The method balances exploration and exploitation without bias, so MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance gains over the competitor algorithms in solving the NRP. The computational cost per iteration is higher due to the local Nelder-Mead heuristic; nevertheless, the proposed algorithm outperforms exact methods and other heuristic approaches for the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated on the INRC2010 dataset, and its performance is compared with five existing methods. To assess the performance of the proposed algorithm, 69 instances of various sizes are chosen, and 34 of the 69 instances attain the best result. The algorithm thus contributes a new deterministic search and an effective heuristic approach to solving the NRP, and MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 0.00 | 6.40 | 1.40 | 3.20 | 5.20 | 1.60
Case 2 | −1.00 | 21.20 | 0.80 | 19.60 | 21.90 | 0.80
Case 3 | −0.40 | 8.10 | 1.80 | 10.60 | 11.00 | 0.90
Case 4 | −5.67 | 11.33 | −1.67 | 20.33 | 11.00 | −2.33
Case 5 | 5.20 | 24.00 | 6.80 | 6.80 | 40.00 | 9.80
Case 6 | 15.80 | 46.40 | 25.60 | 25.60 | 215.60 | 39.80
Case 7 | 13.20 | 46.00 | 16.80 | 16.80 | 67.80 | 20.20
Case 8 | 5.00 | 49.00 | 10.67 | 10.67 | 43.67 | 13.67
Case 9 | 1.20 | 40.80 | 5.00 | 5.00 | 127.60 | 20.20
Case 10 | 18.00 | 45.40 | 26.20 | 26.20 | 100.00 | 32.60
Case 11 | 26.00 | 70.00 | 33.60 | 33.60 | 335.40 | 40.60
Case 12 | 15.67 | 36.33 | 20.00 | 20.00 | 141.67 | 29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 4.919985 | 0.000
Within groups | 17612867 | 408 | 43168.79 | |
Total | 18674816 | 413 | | |

(b) DMRT test

Duncan test: Average Cost Diversion
Method | N | Subset 1 | Subset 2
MODBCO | 69 | 6.202899 |
M2 | 69 | 10.10145 |
M5 | 69 | 14.05797 |
M3 | 69 | 15.31884 |
M1 | 69 | 29.13043 |
M4 | 69 | | 149.5217
Sig. | | 0.573558 | 1.000

Optimization in solving the NRP by satisfying both hard and soft constraints.

Future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to approach the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No.2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.
[3] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, 2009.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.
[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.
[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.
[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.
[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.
[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.
[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.
[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.
[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.
[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.
[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.
[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.
[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.
[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.
[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.
[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.
[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.
[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.
[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.
[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.
[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.
[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.
[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.
[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.
[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.
[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.
[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.
[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.
[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.
[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.
[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.
[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.
[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.
[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.
[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.
[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.
[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.



6.1.3. Standard Deviation. Standard deviation (SD) is the measure of dispersion of a set of values from its mean value. The Average Standard Deviation (ASD) is the average of the standard deviation over all instances taken from the dataset. The ASD can be calculated using

ASD = sqrt( sum_{i=1}^{r} (value obtained in instance_i - mean value of the instance)^2 ),  (36)

where r is the number of instances in the given dataset.
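The calculation in (36) can be sketched in a few lines of Python (a minimal illustration; the per-instance lists of run costs and the plain averaging of per-instance standard deviations are assumptions, since the paper reports only the aggregated ASD values):

```python
import math

def average_standard_deviation(runs_per_instance):
    """Average the standard deviation of repeated-run costs over all instances.

    runs_per_instance: list of lists, one inner list of run costs per instance.
    """
    sds = []
    for runs in runs_per_instance:
        mean = sum(runs) / len(runs)
        # population standard deviation of this instance's run costs
        sds.append(math.sqrt(sum((v - mean) ** 2 for v in runs) / len(runs)))
    return sum(sds) / len(sds)  # average over the r instances

# Two hypothetical instances with three runs each
print(average_standard_deviation([[56, 58, 60], [51, 51, 51]]))
```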

6.1.4. Convergence Diversity. The Convergence Diversity (CD) is the difference between the best and worst convergence rates generated in the population. The Convergence Diversity can be calculated using

CD = Convergence_best - Convergence_worst,  (37)

where Convergence_best is the convergence rate of the best-fitness individual and Convergence_worst is the convergence rate of the worst-fitness individual in the population.

6.1.5. Cost Diversion. Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. Average Cost Diversion (ACD) is the average of the cost diversion over the total number of instances taken from the dataset. The value of ACD can be calculated from

ACD = sum_{i=1}^{r} (Cost_i - Cost_NRP-Instance) / total number of instances,  (38)

where r is the number of instances in the given dataset.
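Both (37) and (38) are straightforward to compute; the following Python sketch illustrates them (function and variable names are illustrative, not from the paper):

```python
def convergence_diversity(rates):
    """CD: best minus worst convergence rate in the population, as in (37)."""
    return max(rates) - min(rates)

def average_cost_diversion(obtained_costs, known_costs):
    """ACD: mean difference between obtained cost and known cost, as in (38)."""
    assert len(obtained_costs) == len(known_costs)
    diversions = [o - k for o, k in zip(obtained_costs, known_costs)]
    return sum(diversions) / len(diversions)

print(convergence_diversity([91.9, 76.9, 64.1]))           # best rate minus worst rate
print(average_cost_diversion([56, 59, 51], [56, 58, 51]))  # mean gap to known costs
```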

6.2. Experimental Environment Setup. The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm are to satisfy the multiple objectives of the NRP as follows:

(a) minimize the total cost of the rostering problem;
(b) satisfy all the hard constraints described in Table 1;
(c) satisfy as many soft constraints described in Table 2 as possible;
(d) enhance resource utilization;
(e) equally distribute the workload among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) organized by PATAT-2010, a leading conference in automated timetabling [38]. The INRC2010 dataset is divided, based on its complexity and size, into three tracks, namely sprint, medium, and long. Each track is divided into four types — early, late, hidden, and hint — with reference to the competition INRC2010. The first track, sprint, is the easiest and consists of 10 nurses and 33 datasets, sorted as 10 early, 10 late, 10 hidden, and 3 hint datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it consists of 30 to 31 nurses and 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses; it consists of 18 datasets, sorted as 5 early, 5 late, 5 hidden, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. A detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.

Table 3 gives a detailed description of the datasets: columns one to three index the dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. Nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol "×" shows there is no shift off or day off for the corresponding dataset.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in case 1 to case 4 are small, those in case 5 to case 8 are medium, and the larger datasets fall in case 9 to case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments are conducted with different optimization algorithms under similar environmental conditions to assess performance. The proposed algorithm to solve the NRP is coded in MATLAB 2012 under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set by empirical evaluation; appropriate parameter values are determined based on preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are presented in Table 6.

6.3. Statistical Analysis. Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of an algorithm are reviewed by Demsar [39]. The authors used statistical tests


Table 3: The features of the INRC2010 datasets.

Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted patterns | Shift off | Day off | Weekend | Time period
Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01, 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010

Table 4: Classification of INRC2010 datasets based on the size.

SI no. | Case | Track | Type
1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Medium | Early
6 | Case 6 | Medium | Hidden
7 | Case 7 | Medium | Late
8 | Case 8 | Medium | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

such as ANOVA, Duncan's test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare.

Type | Method | Reference
M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]

Table 6: Configuration parameters for experimental evaluation.

Parameter | Value
Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient | α > 0
Expansion coefficient | γ > 1
Contraction coefficient | 0 < β < 1
Shrinkage coefficient | 0 < δ < 1

the differences between various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental result with respect to best value.

Instance | Optimal value | MODBCO (Best/Worst) | M1 | M2 | M3 | M4 | M5
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166


Table 7: Continued.

Instance | Optimal value | MODBCO (Best/Worst) | M1 | M2 | M3 | M4 | M5
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted, rejecting the alternate hypothesis.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods. The method ranks the means in increasing or decreasing order and groups methods that are not significantly different.
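For reference, the one-way ANOVA F statistic underlying these tests can be computed with a short, stdlib-only Python sketch (the sample data are illustrative; in the paper each group would hold the 69 per-instance values of one method):

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA over the groups."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    # Between-group sum of squares: size-weighted squared deviation of group means
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations inside each group
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Three illustrative groups of best-value costs
f, dfb, dfw = one_way_anova_f([[56, 58, 51], [63, 66, 60], [57, 59, 52]])
print(f, dfb, dfw)
```

The F statistic is then compared against the critical F value for (df_between, df_within) at the 0.05 level, matching the decision rule described above.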

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms in solving the NRP using the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are more competitive than those of the previous methods listed in Tables 7–18. The computational analysis on the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test

Source factor: best value
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 3.620681 | 0.003
Within groups | 23933354 | 408 | 58660.18 | |
Total | 24995303 | 413 | | |

(b) DMRT test

Duncan test: best value
Method | N | Subset 1 | Subset 2
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.0870 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000

have considered 69 datasets of diverse sizes. MODBCO achieved the best result in 34 out of the 69 instances.

The statistical analysis tests, ANOVA and DMRT, for the best values are shown in Table 8. The significance values are less than 0.05, which shows the null hypothesis is rejected. The significant difference between


Table 9: Experimental result with respect to error rate (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | -0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | -0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | -0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | -9.65 | 20.03 | -2.64 | 33.48 | 18.53 | -4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | -2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate. (Four panels plot the error rate (%) of MODBCO, M1, M2, M3, M4, and M5 for Cases 1–3, 4–6, 7–9, and 10–12.)

the various optimization algorithms is observed. The DMRT test shows the homogeneous groups: two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate compared to the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of the instances attained a lower error rate. For the medium and larger sized datasets, the obtained rates are 62% and 44%, respectively. A negative value in a column indicates the corresponding instance attained a lower optimum value than the one specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they are unable to find the optimal solution and get trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence. (Four panels plot the Average Convergence of MODBCO, M1, M2, M3, M4, and M5 for Cases 1–3, 4–6, 7–9, and 10–12.)

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test

Source factor: error rate
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test

Duncan test: error rate
Method | N | Subset 1 | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis. Thus, there is a significant difference in the values with respect to the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances and an 82% convergence rate on medium instances. For longer instances, it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that, as the problem size increases, the convergence rate reduces and becomes worse for many algorithms on larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
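One plausible reading of this metric can be sketched in Python, under the assumption that convergence is the population-average cost expressed as a percentage relative to the known optimum (the paper does not state the formula explicitly, so both the formula and the names below are illustrative):

```python
def average_convergence(population_costs, optimal_cost):
    """Average fitness of the population expressed as a % of the optimal cost.

    100% means the population average equals the known optimal cost; values
    well below 100% (or negative) indicate deviation from the optimum, which
    mirrors how deviated instances appear in Table 11.
    """
    avg_cost = sum(population_costs) / len(population_costs)
    return 100.0 * (1.0 - (avg_cost - optimal_cost) / optimal_cost)

# Hypothetical population of roster costs against a known optimum of 56
print(average_convergence([56, 58, 60], 56))
```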

The statistical test results for Average Convergence with the different optimization algorithms are shown in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%).

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | -139.09 | 39.82
Case 7 | 56.34 | -32.73 | 24.52 | 26.23 | -54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | -584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | -94.85 | 66.95
Case 12 | 15.13 | -46.99 | -22.73 | -9.46 | -411.18 | -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. (Four panels plot the Average Standard Deviation of MODBCO, M1, M2, M3, M4, and M5 for Cases 1–3, 4–6, 7–9, and 10–12.)

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis test shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm. The computed result with respect to the Average Standard Deviation is shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
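As a concrete illustration (the per-run costs below are hypothetical), the metric is simply the standard deviation of the best costs found over repeated independent runs on one instance:

```python
from statistics import stdev

# Hypothetical best costs from five independent runs on one instance.
run_costs = [56, 58, 57, 59, 60]

# Sample standard deviation of the run costs.
average_standard_deviation = stdev(run_costs)
print(average_standard_deviation)
```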

The statistical test result for the Average Standard Deviation with the different types of optimization algorithms is shown in Table 14. There is a significant difference in the mean values of the standard deviation among the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity (four panels, one per case group of NRP instances, plotting the Convergence Diversity for MODBCO and M1–M5).

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test

Source factor: Average Convergence
                 Sum of squares   df    Mean square   F        Sig.
Between groups   71264.248        5     14252.850     15.470   0.000
Within groups    375887.826       408   921.294
Total            447152.073       413

(b) DMRT test

Duncan test: Average Convergence
Method    N     Subset for alpha = 0.05
                1            2
M4        69    −47.6945
M1        69                 46.4245
M5        69                 53.6879
M3        69                 57.6296
M2        69                 59.5103
MODBCO    69                 81.6997
Sig.            1.000        0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
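Using the definition above, the metric reduces to a one-liner over the per-individual convergence values (the sample numbers below are hypothetical):

```python
def convergence_diversity(convergence_values):
    """Spread between the best and worst convergence in the population."""
    return max(convergence_values) - min(convergence_values)

# Hypothetical convergence values (%) of four individuals.
population_convergence = [87.0, 74.5, 62.0, 70.3]
print(convergence_diversity(population_convergence))
```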

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are shown in Table 16. There is a significant difference in the mean values of the Convergence Diversity among the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. From the DMRT test, the grouping of the various algorithms based on their mean values is shown; there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.832550   8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.508210   9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.157790   10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.35410
Case 12   10.68460   13.49546   10.75478   15.40791   14.42514   11.33570

Figure 11: Performance analysis with respect to Average Cost Diversion (Average Cost Diversion per NRP instance, Cases 1–12, for MODBCO and M1–M5).

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances still yield the optimum value; the larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are shown in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test

Source factor: Average Standard Deviation
                 Sum of squares   df    Mean square   F        Sig.
Between groups   697.441          5     139.488       13.632   0.000
Within groups    4174.760         408   10.232
Total            4872.201         413

(b) DMRT test

Duncan test: Average Standard Deviation
Method    N     Subset for alpha = 0.05
                1          2           3
MODBCO    69    7.4743
M2        69               9.615152
M3        69               9.677027
M5        69               9.729477
M1        69               10.13242
M4        69                           11.9316
Sig.            1.000      0.394       1.000

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm, MODBCO. Various existing algorithms are chosen to solve the NRP and are compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with those of the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity (%).

Case      MODBCO   M1       M2       M3       M4       M5
Case 1    30.00    35.25    37.28    32.83    37.94    38.83
Case 2    22.96    27.92    29.18    23.85    27.95    27.62
Case 3    53.05    56.21    64.66    59.29    60.79    65.05
Case 4    29.99    34.03    33.62    30.84    39.46    33.32
Case 5    7.01     7.87     8.33     6.71     8.77     9.29
Case 6    20.89    22.21    26.98    19.29    25.45    24.87
Case 7    43.69    54.66    48.13    55.47    50.24    56.18
Case 8    27.67    30.03    31.83    24.11    30.35    29.69
Case 9    6.89     8.10     8.22     8.04     8.24     9.10
Case 10   37.10    44.29    45.53    38.95    41.91    49.98
Case 11   11.43    11.64    10.81    11.36    11.91    12.15
Case 12   91.13    75.96    81.78    69.90    86.63    85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test

Source factor: Convergence Diversity
                 Sum of squares   df    Mean square   F          Sig.
Between groups   51447.580        5     10289.516     9.287168   0.000
Within groups    452034.800       408   1107.928
Total            503482.400       413

(b) DMRT test

Duncan test: Convergence Diversity
Method    N     Subset for alpha = 0.05
                1           2           3
M3        69    18.363232
MODBCO    69    18.420290
M1        69                19.681160
M2        69                20.579710
M4        69                20.681160
M5        69                            21.246400
Sig.            0.919       0.096       1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and of the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO has more ability to attain the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained a lesser worst value in addition to the best solution. When the datasets are divided based on their size into smaller, medium, and large datasets, the standard deviation of MODBCO is reduced to 4.9, 2.22, and 4.13, respectively. The error rate of our proposed approach, when compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO has achieved 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles solving the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various sized datasets are chosen, and 34 out of the 69 instances achieved the best result. Thus, our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Hence, MODBCO outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1       M2       M3       M4        M5
Case 1    0.00     6.40     1.40     3.20     5.20      1.60
Case 2    −1.00    21.20    0.80     19.60    21.90     0.80
Case 3    −0.40    8.10     1.80     10.60    11.00     0.90
Case 4    −5.67    11.33    −1.67    20.33    11.00     −2.33
Case 5    5.20     24.00    6.80     6.80     40.00     9.80
Case 6    15.80    46.40    25.60    25.60    215.60    39.80
Case 7    13.20    46.00    16.80    16.80    67.80     20.20
Case 8    5.00     49.00    10.67    10.67    43.67     13.67
Case 9    1.20     40.80    5.00     5.00     127.60    20.20
Case 10   18.00    45.40    26.20    26.20    100.00    32.60
Case 11   26.00    70.00    33.60    33.60    335.40    40.60
Case 12   15.67    36.33    20.00    20.00    141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion
                 Sum of squares   df    Mean square   F          Sig.
Between groups   106194.900       5     21238.980     4.919985   0.000
Within groups    1761286.700      408   4316.879
Total            1867481.600      413

(b) DMRT test

Duncan test: Average Cost Diversion
Method    N     Subset for alpha = 0.05
                1            2
MODBCO    69    6.202899
M2        69    10.101450
M5        69    14.057970
M3        69    15.318840
M1        69    29.130430
M4        69                 149.521700
Sig.            0.573558     1.000

Optimization for solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No.2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Özcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. González-Rodríguez, A. Colubi, and M. Á. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.




Table 3: The features of the INRC2010 datasets.

Track    Type     Instance      Nurses  Skills  Shifts  Contracts  Unwanted pattern  Shift off  Day off  Weekend  Time period
Sprint   Early    01–10         10      1       4       4          3                                      2        1-01-2010 to 28-01-2010
Sprint   Hidden   01, 02        10      1       3       3          4                                      2        1-06-2010 to 28-06-2010
Sprint   Hidden   03, 05, 08    10      1       4       3          8                                      2        1-06-2010 to 28-06-2010
Sprint   Hidden   04, 09        10      1       4       3          8                                      2        1-06-2010 to 28-06-2010
Sprint   Hidden   06, 07        10      1       3       3          4                                      2        1-01-2010 to 28-01-2010
Sprint   Hidden   10            10      1       4       3          8                                      2        1-01-2010 to 28-01-2010
Sprint   Late     01, 03–05     10      1       4       3          8                                      2        1-01-2010 to 28-01-2010
Sprint   Late     02            10      1       3       3          4                                      2        1-01-2010 to 28-01-2010
Sprint   Late     06, 07, 10    10      1       4       3          0                                      2        1-01-2010 to 28-01-2010
Sprint   Late     08            10      1       4       3          0                 ×          ×         2        1-01-2010 to 28-01-2010
Sprint   Late     09            10      1       4       3          0                 ×          ×         2, 3     1-01-2010 to 28-01-2010
Sprint   Hint     01, 03        10      1       4       3          8                                      2        1-01-2010 to 28-01-2010
Sprint   Hint     02            10      1       4       3          0                                      2        1-01-2010 to 28-01-2010
Medium   Early    01–05         31      1       4       4          0                                      2        1-01-2010 to 28-01-2010
Medium   Hidden   01–04         30      2       5       4          9                 ×          ×         2        1-06-2010 to 28-06-2010
Medium   Hidden   05            30      2       5       4          9                 ×          ×         2        1-06-2010 to 28-06-2010
Medium   Late     01            30      1       4       4          7                                      2        1-01-2010 to 28-01-2010
Medium   Late     02, 04        30      1       4       3          7                                      2        1-01-2010 to 28-01-2010
Medium   Late     03            30      1       4       4          0                                      2        1-01-2010 to 28-01-2010
Medium   Late     05            30      2       5       4          7                                      2        1-01-2010 to 28-01-2010
Medium   Hint     01, 03        30      1       4       4          7                                      2        1-01-2010 to 28-01-2010
Medium   Hint     02            30      1       4       4          7                                      2        1-01-2010 to 28-01-2010
Long     Early    01–05         49      2       5       3          3                                      2        1-01-2010 to 28-01-2010
Long     Hidden   01–04         50      2       5       3          9                 ×          ×         2, 3     1-06-2010 to 28-06-2010
Long     Hidden   05            50      2       5       3          9                 ×          ×         2, 3     1-06-2010 to 28-06-2010
Long     Late     01, 03, 05    50      2       5       3          9                 ×          ×         2, 3     1-01-2010 to 28-01-2010
Long     Late     02, 04        50      2       5       4          9                 ×          ×         2, 3     1-01-2010 to 28-01-2010
Long     Hint     01            50      2       5       3          9                 ×          ×         2, 3     1-01-2010 to 28-01-2010
Long     Hint     02, 03        50      2       5       3          7                 ×          ×         2        1-01-2010 to 28-01-2010

Table 4: Classification of the INRC2010 datasets based on size.

SI no.   Case      Track    Type
1        Case 1    Sprint   Early
2        Case 2    Sprint   Hidden
3        Case 3    Sprint   Late
4        Case 4    Sprint   Hint
5        Case 5    Medium   Early
6        Case 6    Medium   Hidden
7        Case 7    Medium   Late
8        Case 8    Medium   Hint
9        Case 9    Long     Early
10       Case 10   Long     Hidden
11       Case 11   Long     Late
12       Case 12   Long     Hint

like ANOVA, Duncan's test, and post hoc tests to substantiate the effectiveness of the proposed algorithm and help to differentiate it from the existing algorithms.

6.3.1. ANOVA Test. To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to demonstrate whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare

Table 5: List of competitor methods to compare.

Type   Method                                     Reference
M1     Artificial Bee Colony Algorithm            [14]
M2     Hybrid Artificial Bee Colony Algorithm     [15]
M3     Global best harmony search                 [16]
M4     Harmony Search with Hill Climbing          [17]
M5     Integer Programming Technique for NRP      [18]

Table 6: Configuration parameters for experimental evaluation.

Parameter                  Value
Number of bees             100
Maximum iterations         1000
Initialization technique   Binary
Heuristic                  Modified Nelder-Mead Method
Termination condition      Maximum iterations
Runs                       20
Reflection coefficient     α > 0
Expansion coefficient      γ > 1
Contraction coefficient    0 < β < 1
Shrinkage coefficient      0 < δ < 1

differences between the various algorithms. The ANOVA test is performed with a 95% confidence interval, that is, a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to show the difference in the performance of the algorithms.


Table 7: Experimental result with respect to best value (best/worst per method).

Instance            Optimal   MODBCO    M1        M2        M3        M4        M5
Sprint early 01     56        56/75     63/74     57/81     59/75     58/77     58/77
Sprint early 02     58        59/77     66/89     59/80     61/82     64/88     60/85
Sprint early 03     51        51/68     60/83     52/75     54/70     59/84     53/67
Sprint early 04     59        59/77     68/86     60/76     63/80     67/84     61/85
Sprint early 05     58        58/74     65/86     59/77     60/79     63/86     60/84
Sprint early 06     54        53/69     59/81     55/79     57/73     58/75     56/77
Sprint early 07     56        56/75     62/81     58/79     60/75     61/85     57/78
Sprint early 08     56        56/76     59/79     58/79     59/80     58/82     57/78
Sprint early 09     55        55/82     59/79     57/76     59/80     61/78     56/80
Sprint early 10     52        52/67     58/76     54/73     55/75     58/78     53/76
Sprint hidden 01    32        32/48     45/67     34/57     43/60     46/65     33/55
Sprint hidden 02    32        32/51     41/59     34/55     37/53     44/68     33/51
Sprint hidden 03    62        62/79     74/92     63/86     71/90     78/96     62/84
Sprint hidden 04    66        66/81     79/102    67/91     80/96     78/100    66/91
Sprint hidden 05    59        58/73     68/90     60/79     63/84     69/88     59/77
Sprint hidden 06    134       128/146   164/187   131/145   203/219   169/188   132/150
Sprint hidden 07    153       154/172   181/204   154/175   197/215   187/210   155/174
Sprint hidden 08    204       201/219   246/265   205/225   267/286   240/260   206/224
Sprint hidden 09    338       337/353   372/390   339/363   274/291   372/389   340/365
Sprint hidden 10    306       306/324   328/347   307/330   347/368   322/342   308/324
Sprint late 01      37        37/53     50/68     38/57     46/66     52/72     39/57
Sprint late 02      42        41/61     53/74     43/61     50/71     56/80     44/67
Sprint late 03      48        45/65     57/80     50/73     57/76     60/77     49/72
Sprint late 04      75        71/87     88/108    75/95     106/127   95/115    74/99
Sprint late 05      44        46/66     52/65     46/68     53/74     57/74     46/70
Sprint late 06      42        42/60     46/65     44/68     45/64     52/70     43/62
Sprint late 07      42        44/63     51/69     46/69     62/81     55/78     44/68
Sprint late 08      17        17/36     17/37     19/41     19/39     19/40     17/39
Sprint late 09      17        17/33     18/37     19/42     19/40     17/40     17/40
Sprint late 10      43        43/59     56/74     45/67     56/74     54/71     43/62
Sprint hint 01      78        73/92     85/103    77/91     103/119   90/108    77/99
Sprint hint 02      47        43/59     57/76     48/68     61/80     56/81     45/68
Sprint hint 03      57        49/67     74/96     52/75     79/97     69/93     53/66
Medium early 01     240       245/263   262/284   247/267   247/262   280/305   250/269
Medium early 02     240       243/262   263/280   247/270   247/263   281/301   250/273
Medium early 03     236       239/256   261/281   244/264   244/262   287/311   245/269
Medium early 04     237       245/262   259/278   242/261   242/258   278/297   247/272
Medium early 05     303       310/326   331/351   310/332   310/329   330/351   313/338
Medium hidden 01    123       143/159   190/210   157/180   157/177   410/429   192/214
Medium hidden 02    243       230/248   286/306   256/277   256/273   412/430   266/284
Medium hidden 03    37        53/70     66/84     56/78     56/69     182/203   61/81
Medium hidden 04    81        85/104    102/119   95/119    95/114    168/191   100/124
Medium hidden 05    130       182/201   202/224   178/202   178/197   520/545   194/214
Medium late 01      157       176/195   207/227   175/198   175/194   234/257   179/194
Medium late 02      18        30/45     53/76     32/52     32/53     49/67     35/55
Medium late 03      29        35/52     71/90     39/60     39/59     59/78     44/66
Medium late 04      35        42/58     66/83     49/57     49/70     71/89     48/70
Medium late 05      107       129/149   179/199   135/156   135/156   272/293   141/164
Medium hint 01      40        42/62     70/90     49/71     49/65     64/82     50/71
Medium hint 02      84        91/107    142/162   95/116    95/115    133/158   96/115
Medium hint 03      129       135/153   188/209   141/161   141/152   187/208   148/166


Table 7: Continued (best/worst per method).

Instance            Optimal   MODBCO    M1        M2        M3        M4          M5
Long early 01       197       194/209   241/259   200/222   200/221   339/362     220/243
Long early 02       219       228/245   276/293   232/254   232/253   399/419     253/275
Long early 03       240       240/258   268/291   243/257   243/262   349/366     251/273
Long early 04       303       303/321   336/356   306/324   306/320   411/430     319/342
Long early 05       284       284/300   326/347   287/310   287/308   383/403     301/321
Long hidden 01      346       389/407   444/463   403/422   403/421   4466/4488   422/442
Long hidden 02      90        108/126   132/150   120/139   120/139   1071/1094   128/148
Long hidden 03      38        48/63     61/78     54/72     54/75     163/181     54/79
Long hidden 04      22        27/45     49/71     32/54     32/51     113/132     36/58
Long hidden 05      41        55/71     78/99     59/81     59/70     139/157     60/83
Long late 01        235       249/267   290/310   260/278   260/276   588/606     262/286
Long late 02        229       261/280   295/318   265/278   265/281   577/595     263/281
Long late 03        220       259/275   307/325   264/285   264/285   567/588     272/295
Long late 04        221       257/292   304/323   263/284   263/281   604/627     276/294
Long late 05        83        92/107    142/161   104/122   104/125   329/349     118/138
Long hint 01        31        40/58     53/73     44/67     44/65     126/150     50/72
Long hint 02        17        29/47     40/62     32/55     32/51     122/145     36/61
Long hint 03        53        79/137    117/135   85/104    85/101    278/303     102/123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis is rejected.
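The test statistic behind this decision rule can be sketched with the standard library alone. This is a minimal sketch: it computes only the one-way ANOVA F statistic (ratio of between-group to within-group mean squares); in practice the significance value would be read from an F distribution (e.g. with scipy.stats.f.sf), and each group would hold the 69 per-instance results of one algorithm. The two small groups below are hypothetical:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for k independent groups:
    between-group mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # df_between = k - 1
    ms_within = ss_within / (n - k)     # df_within = n - k
    return ms_between / ms_within

print(one_way_anova_f([[1, 2, 3], [4, 5, 6]]))
```

A large F (relative to the critical value at the chosen df) corresponds to a small significance value and hence rejection of the null hypothesis.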

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs in multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the significant and nonsignificant differences between any two methods. This method ranks the methods by their mean values in increasing or decreasing order and groups the methods which are not significantly different.

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO is compared with the other optimization algorithms for solving the NRP on the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are more competitive than those of the previous methods; the performance of MODBCO relative to the previous methods is listed in Tables 7–18. The computational analysis of the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is the minimization of cost, the lowest values are the best solutions attained. In the evaluation of the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test

Source factor: best value
                 Sum of squares   df    Mean square   F          Sig.
Between groups   1061949          5     212389.8      3.620681   0.003
Within groups    23933354         408   58660.18
Total            24995303         413

(b) DMRT test

Duncan test: best value
Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    120.2319
M2        69    124.1304
M5        69    128.0870
M3        69    129.3478
M1        69    143.1594
M4        69                263.5507
Sig.            0.629       1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of the 69 instances.

The statistical analysis tests ANOVA and DMRT for the best values are shown in Table 8. It is perceived that the significance values are less than 0.05, which shows the null hypothesis is rejected. The significant difference between

Computational Intelligence and Neuroscience 19

Table 9: Experimental result with respect to error rate (%)

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | -0.01 | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | -0.73 | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | -0.47 | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | -9.65 | 20.03 | -2.64 | 33.48 | 18.53 | -4.18
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | -2.57 | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

Figure 7: Performance analysis with respect to error rate (%). [Four panels plotting the error rate of MODBCO, M1, M2, M3, M4, and M5 across Cases 1-12 of the NRP instances.]

the various optimization algorithms is observed. The DMRT test shows the homogeneous groups: two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competitor techniques. The computational analysis based on the error rate (%) is shown in Table 9. Out of the 33 instances of the sprint type, 18 instances achieved a zero error rate, and 88% of the sprint-type instances attained a lower error rate. For the medium and larger sized datasets, the obtained rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a better optimum value than the one specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage, but as the size of the dataset increases they fail to find the optimal solution and become trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.
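The error rate of Table 9 is, under the usual benchmark convention (the formula is not spelled out in this section, so this definition is an assumption), the percentage deviation of the best cost found from the published INRC2010 optimum; the sign convention then makes negative values mean the published figure was beaten:

```python
def error_rate(best_found, known_optimal):
    """Percentage deviation of the best cost found from the reference
    optimum; negative when the found schedule is cheaper than the
    published optimum, matching the sign convention of Table 9.
    Definition assumed, not quoted from the paper."""
    return 100.0 * (best_found - known_optimal) / known_optimal
```

For example, a best cost of 59 against a published optimum of 56 gives an error rate of about 5.36%, while a cost of 55 gives a negative rate.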


Figure 8: Performance analysis with respect to Average Convergence. [Four panels plotting the Average Convergence (%) of MODBCO, M1, M2, M3, M4, and M5 across Cases 1-12 of the NRP instances.]

Table 10: Statistical analysis with respect to error rate

(a) ANOVA test (factor: error rate)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.1796 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657384 | |
Total | 4053501 | 413 | | |

(b) DMRT test (Duncan test: error rate)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis. Thus there is a significant difference in the error rate values with respect to the various optimization algorithms. The DMRT test indicates two homogeneous groups formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution relates the average fitness of the population to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small instances and an 82% convergence rate on the medium instances; for the long instances it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for the larger instances, as shown in Table 11. The Average Convergence rate attained by the various optimization algorithms is depicted in Figure 8.
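The paper does not print a formula for Average Convergence; one formulation consistent with Table 11 (100% when the population average equals the optimum, turning negative once the average cost exceeds twice the optimum, which matches the negative entries for M4) is:

```python
def average_convergence(avg_fitness, optimal):
    """Convergence (%) of the population's average cost toward the
    known optimum for a minimization problem. This formulation is an
    assumption inferred from the sign pattern of Table 11."""
    return 100.0 * (1.0 - (avg_fitness - optimal) / optimal)
```

With this reading, a population averaging 1.13 times the optimal cost converges at roughly 87%, the order of magnitude MODBCO reports for Case 1.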

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%)

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | -139.09 | 39.82
Case 7 | 56.34 | -32.73 | 24.52 | 26.23 | -54.91 | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | -584.44 | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | -94.85 | 66.95
Case 12 | 15.13 | -46.99 | -22.73 | -9.46 | -411.18 | -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. [Four panels plotting the Average Standard Deviation of MODBCO, M1, M2, M3, M4, and M5 across Cases 1-12 of the NRP instances.]

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of the values from their mean, and it helps to deduce features of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
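The per-instance dispersion summarized in Table 13 can be computed directly with the Python standard library; the run data here are illustrative, not the paper's:

```python
from statistics import pstdev

# Best costs found in eight independent runs on one instance (made up).
runs = [2, 4, 4, 4, 5, 5, 7, 9]

# Population standard deviation of the run costs; use statistics.stdev
# instead if the runs are treated as a sample of a larger population.
spread = pstdev(runs)
```

Averaging this per-instance spread over a group of instances gives a figure comparable in spirit to the Average Standard Deviation column of Table 13.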

The statistical test results for Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of standard deviation among the different optimization algorithms. The ANOVA test proves that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity. [Four panels plotting the Convergence Diversity of MODBCO, M1, M2, M3, M4, and M5 across Cases 1-12 of the NRP instances.]

Table 12: Statistical analysis with respect to Average Convergence

(a) ANOVA test (factor: Average Convergence)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 7126424.75 | 5 | 1425284.95 | 15.47047 | 0.0000
Within groups | 37588782.6 | 408 | 92129.369 | |
Total | 44715207.3 | 413 | | |

(b) DMRT test (Duncan test: Average Convergence)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
M4 | 69 | -47.6945 |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054

value of 0.05. In the DMRT test there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets the Convergence Diversity is 62% toward the optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
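The definition above reduces to a range computation over the convergence values of the population; a minimal sketch:

```python
def convergence_diversity(convergences):
    """Gap between the best and the worst convergence value observed
    in the population, as defined for Convergence Diversity in
    Section 6.4.5. Input values are convergence percentages."""
    return max(convergences) - min(convergences)
```

A small gap indicates the whole population has closed in on similar-quality solutions, while a large gap signals that part of the population is still far from the best member.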

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of Convergence Diversity among the various optimization algorithms. In the post hoc analysis the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test groups the various algorithms based on their mean values: there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion. [Single panel plotting the Average Cost Diversion of MODBCO, M1, M2, M3, M4, and M5 across Cases 1-12 of the NRP instances.]

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value. The larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
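The section does not state a formula for Average Cost Diversion; a reading consistent with the negative entries of Table 17 is the mean gap between the costs found over repeated runs and the published optimum:

```python
def average_cost_diversion(costs, optimal):
    """Mean deviation of the costs obtained over repeated runs from the
    published optimum; negative when the runs beat the reference cost.
    This definition is an assumption inferred from Table 17."""
    return sum(c - optimal for c in costs) / len(costs)
```

Under this reading, an entry such as -1.00 for MODBCO on Case 2 means the runs found schedules that were, on average, one cost unit cheaper than the reference.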

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation

(a) ANOVA test (factor: Average Standard Deviation)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 6974.407 | 5 | 1394.881 | 13.6322 | 0.000
Within groups | 41747.6 | 408 | 102.3226 | |
Total | 48722.01 | 413 | | |

(b) DMRT test (Duncan test: Average Standard Deviation)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 | Subset 3
MODBCO | 69 | 7.4743 | |
M2 | 69 | | 9.615152 |
M3 | 69 | | 9.677027 |
M5 | 69 | | 9.729477 |
M1 | 69 | | 10.13242 |
M4 | 69 | | | 11.9316
Sig. | | 1.00 | 0.394 | 1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 30.00 | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2 | 22.96 | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3 | 53.05 | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4 | 29.99 | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5 | 7.01 | 7.87 | 8.33 | 6.71 | 8.77 | 9.29
Case 6 | 20.89 | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7 | 43.69 | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8 | 27.67 | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9 | 6.89 | 8.10 | 8.22 | 8.04 | 8.24 | 9.10
Case 10 | 37.10 | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43 | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13 | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

Table 16: Statistical analysis with respect to Convergence Diversity

(a) ANOVA test (factor: Convergence Diversity)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 5144.758 | 5 | 1028.952 | 9.287168 | 0.000
Within groups | 45203.48 | 408 | 110.7928 | |
Total | 50348.24 | 413 | | |

(b) DMRT test (Duncan test: Convergence Diversity)

Method | N | Subset 1 (alpha = 0.05) | Subset 2 | Subset 3
M3 | 69 | 18.363232 | |
MODBCO | 69 | 18.42029 | |
M1 | 69 | | 19.68116 |
M2 | 69 | | 20.57971 |
M4 | 69 | | 20.68116 |
M5 | 69 | | | 21.2464
Sig. | | 0.919 | 0.096 | 1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11 it is evident that MODBCO has more ability to attain the best value on the performance metrics than the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% toward the optimum value, and it attained a lower worst value in addition to the best solution. The datasets are divided based on their size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%,

22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO achieved 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. The error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method in the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus MODBCO converges the population toward an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles solving the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets are chosen, and 34 out of the 69 instances attained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms the classical Bee Colony

Computational Intelligence and Neuroscience 25

Table 17: Experimental result with respect to Average Cost Diversion

Case | MODBCO | M1 | M2 | M3 | M4 | M5
Case 1 | 0.00 | 6.40 | 1.40 | 3.20 | 5.20 | 1.60
Case 2 | -1.00 | 21.20 | 0.80 | 19.60 | 21.90 | 0.80
Case 3 | -0.40 | 8.10 | 1.80 | 10.60 | 11.00 | 0.90
Case 4 | -5.67 | 11.33 | -1.67 | 20.33 | 11.00 | -2.33
Case 5 | 5.20 | 24.00 | 6.80 | 6.80 | 40.00 | 9.80
Case 6 | 15.80 | 46.40 | 25.60 | 25.60 | 215.60 | 39.80
Case 7 | 13.20 | 46.00 | 16.80 | 16.80 | 67.80 | 20.20
Case 8 | 5.00 | 49.00 | 10.67 | 10.67 | 43.67 | 13.67
Case 9 | 1.20 | 40.80 | 5.00 | 5.00 | 127.60 | 20.20
Case 10 | 18.00 | 45.40 | 26.20 | 26.20 | 100.00 | 32.60
Case 11 | 26.00 | 70.00 | 33.60 | 33.60 | 335.40 | 40.60
Case 12 | 15.67 | 36.33 | 20.00 | 20.00 | 141.67 | 29.00

Table 18: Statistical analysis with respect to Average Cost Diversion

(a) ANOVA test (factor: Average Cost Diversion)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 10619.49 | 5 | 2123.898 | 4.919985 | 0.000
Within groups | 176128.67 | 408 | 431.6879 | |
Total | 186748.16 | 413 | | |

(b) DMRT test (Duncan test: Average Cost Diversion)

Method | N | Subset 1 (alpha = 0.05) | Subset 2
MODBCO | 69 | 6.202899 |
M2 | 69 | 10.10145 |
M5 | 69 | 14.05797 |
M3 | 69 | 15.31884 |
M1 | 69 | 29.13043 |
M4 | 69 | | 149.5217
Sig. | | 0.573558 | 1.000

Optimization in solving the NRP by satisfying both hard and soft constraints.

The future work can be projected to

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. F. No. 2014-15/NFO-2014-15-OBC-PON-3843 (SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.
[3] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, 2009.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, vol. 3, Pearson Education, 1988.
[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.
[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.
[7] D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.
[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.
[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.
[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.
[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.
[12] J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.
[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems: a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.
[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.
[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.
[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.
[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.
[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.
[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.
[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.
[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.
[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.
[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.
[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.
[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.
[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.
[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.
[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.
[29] S. Asta, E. Özcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.
[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.
[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.
[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.
[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.
[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.
[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.
[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.
[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.
[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.
[39] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.
[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.
[41] G. González-Rodríguez, A. Colubi, and M. Á. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.
[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


Table 7: Experimental result with respect to best value

Instance | Optimal | MODBCO Best/Worst | M1 Best/Worst | M2 Best/Worst | M3 Best/Worst | M4 Best/Worst | M5 Best/Worst
Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected and the alternate hypothesis is accepted; otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.

6.3.2. Duncan's Multiple Range Test. After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the difference between any two methods as significant or nonsignificant: it ranks the methods by their mean values in increasing or decreasing order and groups the methods that do not differ significantly.
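The ranking-and-grouping step can be sketched as below. This is a deliberate simplification: it uses one fixed critical range, whereas the actual DMRT derives a separate critical range from the studentized range distribution for every subset size. The means and the 25-unit range are illustrative values loosely modeled on Table 8:

```python
def duncan_style_groups(means, critical_range):
    """Simplified Duncan-style grouping: sort methods by mean and keep
    consecutive methods in the same homogeneous subset while their spread
    stays below the critical range.  (Real DMRT uses a size-dependent
    critical range from the studentized range distribution.)"""
    ordered = sorted(means.items(), key=lambda kv: kv[1])
    groups, current = [], [ordered[0]]
    for name, mean in ordered[1:]:
        if mean - current[0][1] <= critical_range:  # spread still within range
            current.append((name, mean))
        else:
            groups.append(current)
            current = [(name, mean)]
    groups.append(current)
    return [[name for name, _ in g] for g in groups]

# Mean best values in the spirit of Table 8 (illustrative numbers).
table8_means = {"MODBCO": 120.23, "M2": 124.13, "M5": 128.09,
                "M3": 129.35, "M1": 143.16, "M4": 263.55}
groups = duncan_style_groups(table8_means, critical_range=25.0)
# -> [['MODBCO', 'M2', 'M5', 'M3', 'M1'], ['M4']]
```

With these numbers the sketch reproduces the two homogeneous subsets reported in Table 8: M4 falls far enough from the rest to form its own group.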

6.4. Experimental and Result Analysis. In this section, the effectiveness of the proposed algorithm MODBCO in solving the NRP is compared with that of other optimization algorithms on the INRC2010 datasets, under a similar environmental setup and using the performance metrics discussed above. The results produced by MODBCO are more competitive than those of the previous methods listed in Tables 7-18. The computational analysis on the performance metrics is as follows.

6.4.1. Best Value. The results obtained by MODBCO and the competing methods are shown in Table 7; each number in the table is the best solution obtained by the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithms, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   1061949          5     212389.8      3.620681   0.003
Within groups    23933354         408   58660.18
Total            24995303         413

(b) DMRT test (Duncan test: best value)

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    120.2319
M2        69    124.1304
M5        69    128.0870
M3        69    129.3478
M1        69    143.1594
M4        69                263.5507
Sig.            0.629       1.000

have considered 69 datasets of diverse sizes. It is apparent that MODBCO attained the best result on 34 of the 69 instances.

The statistical analysis tests ANOVA and DMRT for the best values are shown in Table 8. The significance value is less than 0.05, so the null hypothesis is rejected, and a significant difference between


Table 9: Experimental results with respect to error rate (%).

Case      MODBCO   M1       M2      M3      M4       M5
Case 1    −0.01    11.54    2.54    5.77    9.41     2.88
Case 2    −0.73    20.16    1.69    19.81   22.35    0.83
Case 3    −0.47    18.27    5.63    23.24   24.72    2.26
Case 4    −9.65    20.03    −2.64   33.48   18.53    −4.18
Case 5    0.00     9.57     2.73    2.73    16.31    3.93
Case 6    −2.57    46.37    27.71   27.71   220.44   40.62
Case 7    28.00    105.40   37.98   37.98   116.36   45.82
Case 8    3.93     63.26    14.97   14.97   54.43    18.00
Case 9    0.52     17.14    2.15    2.15    54.04    8.61
Case 10   15.55    69.70    36.25   36.25   652.47   43.25
Case 11   9.16     40.08    18.13   18.13   185.92   23.41
Case 12   49.56    109.01   63.52   63.52   449.54   88.50

Figure 7: Performance analysis with respect to error rate. [Four panels, omitted here, plot the error rate (%) per NRP instance for MODBCO and M1-M5: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

the various optimization algorithms is observed. The DMRT test shows that two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the competing techniques. The computational analysis based on the error rate (%) is shown in Table 9. Out of the 33 instances of the sprint type, 18 achieved a zero error rate, and 88% of the sprint-type instances attained a lower error rate; for the medium- and larger-sized datasets, the figures are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instances attained a better optimum value than the one specified in INRC2010.
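The error rate reported in Table 9 behaves like the usual relative-deviation measure; the exact formula below is an assumption inferred from the tables, and a negative result matches the note that some instances beat the published optimum:

```python
def error_rate(obtained_cost, optimal_cost):
    """Relative deviation (%) of an obtained cost from the published
    INRC2010 optimum; negative means a new, better optimum was found."""
    return (obtained_cost - optimal_cost) / optimal_cost * 100.0

# Long hidden 02 from Table 7: MODBCO best 108 against the published optimum 90.
deviation = error_rate(108, 90)  # 20.0
```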

The competitors M2 and M5 generated better solutions at the initial stage, but as the size of the dataset increases, they fail to find the optimal solution and get trapped in local optima. The error rates (%) obtained by MODBCO and the other algorithms are shown in Figure 7.


Figure 8: Performance analysis with respect to Average Convergence. [Four panels, omitted here, plot the Average Convergence per NRP instance for MODBCO and M1-M5: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   638680.9         5     127736.1796   15.26182   0.000
Within groups    3414820          408   8369.657384
Total            4053501          413

(b) DMRT test (Duncan test: error rate)

Method    N     Subset for alpha = 0.05
                1          2
MODBCO    69    5.402238
M2        69    13.77936
M5        69    17.31724
M3        69    20.99903
M1        69    36.49033
M4        69               121.1591
Sig.            0.07559    1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, so the null hypothesis is rejected: there is a significant difference in the error rate across the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on the small instances, an 82% convergence rate on the medium instances, and a 77% convergence rate on the longer instances. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse in many algorithms for the larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.
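One way to write the Average Convergence down, consistent with the 100%-at-optimum and negative-when-trapped behaviour seen in Table 11, is the sketch below; the exact formula is an assumption, since the paper only gives the verbal definition:

```python
def average_convergence(population_costs, optimal_cost):
    """Average Convergence (%): 100 when the population's average cost
    equals the optimal cost, falling (and eventually going negative) as
    the population drifts away from the optimum."""
    avg_cost = sum(population_costs) / len(population_costs)
    return (1.0 - (avg_cost - optimal_cost) / optimal_cost) * 100.0

# Population average 110 against an optimum of 100 -> 90% convergence.
conv = average_convergence([100, 110, 120], optimal_cost=100)  # 90.0
```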

The statistical test results for Average Convergence with the different optimization algorithms are shown in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental results with respect to Average Convergence (%).

Case      MODBCO   M1       M2       M3      M4        M5
Case 1    87.01    70.84    69.79    72.40   62.59     66.87
Case 2    91.96    65.88    76.93    64.19   56.88     77.21
Case 3    83.40    53.63    47.23    38.23   30.07     47.44
Case 4    99.02    62.96    77.23    45.94   53.14     77.20
Case 5    96.27    86.49    91.10    92.71   77.29     89.01
Case 6    79.49    42.52    52.87    59.08   −139.09   39.82
Case 7    56.34    −32.73   24.52    26.23   −54.91    9.97
Case 8    84.00    21.72    61.67    68.51   22.95     58.22
Case 9    96.98    78.81    91.68    92.59   39.78     84.36
Case 10   70.16    8.16     29.97    37.66   −584.44   18.54
Case 11   84.58    54.10    73.49    74.40   −94.85    66.95
Case 12   15.13    −46.99   −22.73   −9.46   −411.18   −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation. [Four panels, omitted here, plot the Average Standard Deviation per NRP instance for MODBCO and M1-M5: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

in the mean values of convergence across the different optimization algorithms. The ANOVA test leads to rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from their mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13, and the Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
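A plausible reading of this metric, not spelled out in the paper and therefore an assumption, is the per-instance standard deviation of the costs over repeated runs, averaged across the instances of a case:

```python
import statistics

def average_standard_deviation(runs_per_instance):
    """Mean of the per-instance population standard deviations of the
    costs obtained over repeated runs (assumed aggregation)."""
    return statistics.mean(statistics.pstdev(runs) for runs in runs_per_instance)

# Two instances, each run twice; both have a population stdev of 1.0.
spread = average_standard_deviation([[2, 4], [4, 6]])  # 1.0
```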

The statistical test results for Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation across the different optimization algorithms: the ANOVA test rejects the null hypothesis, since the significance value is 0.00, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity. [Four panels, omitted here, plot the Convergence Diversity per NRP instance for MODBCO and M1-M5: Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   712642.475       5     142528.495    15.47047   0.0000
Within groups    3758878.26       408   9212.9369
Total            4471520.73       413

(b) DMRT test (Duncan test: Average Convergence)

Method    N     Subset for alpha = 0.05
                1           2
M4        69    −47.6945
M1        69                46.4245
M5        69                53.6879
M3        69                57.6296
M2        69                59.5103
MODBCO    69                81.6997
Sig.            1.00        0.054

value of 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. Together with the error rate, the Convergence Diversity helps to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competing algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% towards the optimum value. Figure 10 compares the various optimization algorithms with respect to Convergence Diversity.
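Under the definition above, the Convergence Diversity is simply the gap between the best and the worst convergence values in the population, which can be sketched as:

```python
def convergence_diversity(convergences):
    """Gap between the best and the worst convergence (%) in the population."""
    return max(convergences) - min(convergences)

# Illustrative convergence values for one instance's population.
diversity = convergence_diversity([87.0, 60.5, 57.0])  # 30.0
```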

The ANOVA and DMRT statistical tests with respect to Convergence Diversity are reported in Table 16. There is a significant difference in the mean values of the Convergence Diversity across the various optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test groups the various algorithms by mean value; there are three homogeneous groups


Table 13: Experimental results with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion. [Single panel, omitted here, plots the diversion per NRP instance (Cases 1-12) for MODBCO and M1-M5.]

among the various optimization algorithms with respect to the mean values of the convergence diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competing techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, of which many instances still yielded the optimum value; the larger dataset shows 56% cost divergence. A negative value in the table indicates that the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
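The paper does not give a formula for Average Cost Diversion; read here, as an assumption, as the average gap between the costs of repeated runs and a reference best cost, a negative result would then correspond to runs beating the reference, matching the note about new optimized values:

```python
def average_cost_diversion(run_costs, best_cost):
    """Average gap between the costs of repeated runs and a reference
    best cost (assumed formula; negative = runs beat the reference)."""
    return sum(cost - best_cost for cost in run_costs) / len(run_costs)

# Three runs against a reference best cost of 10.
diversion = average_cost_diversion([10, 12, 14], best_cost=10)  # 2.0
```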

The ANOVA and DMRT statistical tests with respect to Average Cost Diversion are reported in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test reveals that there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

                 Sum of squares   df    Mean square   F         Sig.
Between groups   697.4407         5     139.4881      13.6322   0.000
Within groups    4174.76          408   10.23226
Total            4872.201         413

(b) DMRT test (Duncan test: Average Standard Deviation)

Method    N     Subset for alpha = 0.05
                1         2          3
MODBCO    69    7.4743
M2        69              9.615152
M3        69              9.677027
M5        69              9.729477
M1        69              10.13242
M4        69                         11.9316
Sig.            1.00      0.394      1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm; the results are compared with those of the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental results with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   514.4758         5     102.8952      9.287168   0.000
Within groups    4520.348         408   11.07928
Total            5034.824         413

(b) DMRT test (Duncan test: Convergence Diversity)

Method    N     Subset for alpha = 0.05
                1           2          3
M3        69    18.363232
MODBCO    69    18.42029
M1        69                19.68116
M2        69                20.57971
M4        69                20.68116
M5        69                           21.2464
Sig.            0.919       0.096      1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and of the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO has more ability to attain the best values on the performance metrics than the competitor algorithms on INRC2010.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attained the lowest worst value in addition to the best solution. When the datasets are divided by size into smaller, medium, and large datasets, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% compared with the other competitor methods.

The proposed system was also tested on the larger-sized datasets, on which it works astoundingly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm increases the exploitation within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated on the INRC2010 dataset, and its performance is compared with that of five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various-sized datasets were chosen, and 34 out of the 69 instances attained the best result. Thus, our algorithm contributes a new deterministic search and an effective heuristic approach to solving the NRP, and MODBCO outperforms classical Bee Colony


Table 17: Experimental results with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    −1.00    21.20   0.80    19.60   21.90    0.80
Case 3    −0.40    8.10    1.80    10.60   11.00    0.90
Case 4    −5.67    11.33   −1.67   20.33   11.00    −2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

                 Sum of squares   df    Mean square   F          Sig.
Between groups   1061949          5     212389.8      4.919985   0.000
Within groups    17612867         408   43168.79
Total            18674816         413

(b) DMRT test (Duncan test: Average Cost Diversion)

Method    N     Subset for alpha = 0.05
                1           2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69                149.5217
Sig.            0.573558    1.000

Optimization in solving the NRP while satisfying both hard and soft constraints.

Future work can be projected to:

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC-2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the research projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No.2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agency.

References

[1] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithm in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems — a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Özcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. González-Rodríguez, A. Colubi, and M. Á. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.

Submit your manuscripts athttpswwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 18: Directed Bee Colony Optimization Algorithm to Solve the ...downloads.hindawi.com/journals/cin/2017/6563498.pdfDirected Bee Colony Optimization Algorithm to Solve the Nurse Rostering

18 Computational Intelligence and Neuroscience

Table 7 Continued

Instances Optimal value MODBCO M1 M2 M3 M4 M5Best Worst Best Worst Best Worst Best Worst Best Worst Best Worst

Long early 01 197 194 209 241 259 200 222 200 221 339 362 220 243Long early 02 219 228 245 276 293 232 254 232 253 399 419 253 275Long early 03 240 240 258 268 291 243 257 243 262 349 366 251 273Long early 04 303 303 321 336 356 306 324 306 320 411 430 319 342Long early 05 284 284 300 326 347 287 310 287 308 383 403 301 321Long hidden 01 346 389 407 444 463 403 422 403 421 4466 4488 422 442Long hidden 02 90 108 126 132 150 120 139 120 139 1071 1094 128 148Long hidden 03 38 48 63 61 78 54 72 54 75 163 181 54 79Long hidden 04 22 27 45 49 71 32 54 32 51 113 132 36 58Long hidden 05 41 55 71 78 99 59 81 59 70 139 157 60 83Long late 01 235 249 267 290 310 260 278 260 276 588 606 262 286Long late 02 229 261 280 295 318 265 278 265 281 577 595 263 281Long late 03 220 259 275 307 325 264 285 264 285 567 588 272 295Long late 04 221 257 292 304 323 263 284 263 281 604 627 276 294Long late 05 83 92 107 142 161 104 122 104 125 329 349 118 138Long hint 01 31 40 58 53 73 44 67 44 65 126 150 50 72Long hint 02 17 29 47 40 62 32 55 32 51 122 145 36 61Long hint 03 53 79 137 117 135 85 104 85 101 278 303 102 123

If the obtained significance value is less than the criticalvalue (005) then the null hypothesis is rejected and thusthe alternate hypothesis is accepted Otherwise the nullhypothesis is accepted by rejecting the alternate hypothesis

632 Duncanrsquos Multiple Range Test After the null hypothesisis rejected to explore the group differences post hoc ormultiple comparison test is performed Duncan developed aprocedure to test and compare all pairs in multiple ranges[42] Duncanrsquos multiple range test (DMRT) classifies thesignificant and nonsignificant difference between any twomethods This method ranks in terms of mean values inincreasing or decreasing order and group method which isnot significant

64 Experimental and Result Analysis In this section theeffectiveness of the proposed algorithm MODBCO is com-pared with other optimization algorithms to solve the NRPusing INRC2010 datasets under similar environmental setupusing performance metrics as discussed To compare theresults produced byMODBCO seems to bemore competitivewith previous methods The performance of MODBCO iscomparable with previous methods listed in Tables 7ndash18The computational analysis on the performance metrics is asfollows

6.4.1. Best Value. The results obtained by MODBCO and the competitive methods are shown in Table 7. Each number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In evaluating the performance of the algorithm, the authors

Table 8: Statistical analysis with respect to best value.

(a) ANOVA test (source factor: best value)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   1061.949           5   212.3898      3.620681   0.003
Within groups    23933.354        408   58.66018
Total            24995.303        413

(b) DMRT test (Duncan test: best value)

Method   N    Subset 1 (alpha = 0.05)   Subset 2
MODBCO   69   120.2319
M2       69   124.1304
M5       69   128.087
M3       69   129.3478
M1       69   143.1594
M4       69                             263.5507
Sig.          0.629                     1.000

have considered 69 datasets of diverse size. It is apparent that MODBCO accomplished 34 best results out of the 69 instances.

The statistical analysis tests, ANOVA and DMRT, for the best values are shown in Table 8. The significance values are less than 0.05, which shows the null hypothesis is rejected. A significant difference between

Computational Intelligence and Neuroscience 19

Table 9: Experimental result with respect to error rate (%).

Case      MODBCO   M1       M2      M3      M4       M5
Case 1    −0.01    11.54    2.54    5.77    9.41     2.88
Case 2    −0.73    20.16    1.69    19.81   22.35    0.83
Case 3    −0.47    18.27    5.63    23.24   24.72    2.26
Case 4    −9.65    20.03    −2.64   33.48   18.53    −4.18
Case 5    0.00     9.57     2.73    2.73    16.31    3.93
Case 6    −2.57    46.37    27.71   27.71   220.44   40.62
Case 7    28.00    105.40   37.98   37.98   116.36   45.82
Case 8    3.93     63.26    14.97   14.97   54.43    18.00
Case 9    0.52     17.14    2.15    2.15    54.04    8.61
Case 10   15.55    69.70    36.25   36.25   652.47   43.25
Case 11   9.16     40.08    18.13   18.13   185.92   23.41
Case 12   49.56    109.01   63.52   63.52   449.54   88.50

Figure 7: Performance analysis with respect to error rate (error rate (%) per NRP instance, Cases 1–12, for MODBCO and M1–M5).

various optimization algorithms is observed. The DMRT test shows that two homogeneous groups for the best values are formed among the competitor algorithms.

6.4.2. Error Rate. The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of instances attained a lower error rate. For the medium and larger sized datasets, the obtained error rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than specified in the INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increases, they could not find the optimal solution and got trapped in local optima. The error rate (%) obtained using MODBCO and the other algorithms is shown in Figure 7.
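The error-rate metric can be sketched as the relative deviation of the obtained best cost from the published INRC2010 optimum; the exact formula is not spelled out in the text, so the following is an assumed common definition consistent with the negative entries in Table 9:

```python
def error_rate(obtained_cost, best_known_cost):
    """Percentage deviation of the obtained cost from the published
    best known cost; negative values mean the obtained cost is
    below the best known one (a new optimum)."""
    return 100.0 * (obtained_cost - best_known_cost) / best_known_cost
```

Under this definition an algorithm that exactly matches the published optimum scores 0.00, matching the zero-error-rate instances reported above.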


Figure 8: Performance analysis with respect to Average Convergence (average convergence (%) per NRP instance, Cases 1–12, for MODBCO and M1–M5).

Table 10: Statistical analysis with respect to error rate.

(a) ANOVA test (source factor: error rate)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   6386.809           5   1277.361796   15.26182   0.000
Within groups    34148.20         408   83.69657384
Total            40535.01         413

(b) DMRT test (Duncan test: error rate)

Method   N    Subset 1 (alpha = 0.05)   Subset 2
MODBCO   69   5.402238
M2       69   13.77936
M5       69   17.31724
M3       69   20.99903
M1       69   36.49033
M4       69                             121.1591
Sig.          0.07559                   1.000

The statistical analysis of the error rate is presented in Table 10. In the ANOVA test, the significance value is 0.000, which is less than 0.05, showing rejection of the null hypothesis. Thus, there is a significant difference in value among the various optimization algorithms. The DMRT test indicates that two homogeneous groups are formed from the different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence. The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results with respect to Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small size instances and an 82% convergence rate on medium size instances. For longer instances, it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that as the problem size increases, the convergence rate reduces and becomes worse in many algorithms for larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.
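A hedged sketch of this metric, assuming the convergence percentage is 100% when the population's average cost equals the optimum and decreases (eventually going negative, as in Table 11) as the average cost drifts away; the precise formula is not given in the text:

```python
def average_convergence(avg_cost, optimal_cost):
    """Assumed form of the Average Convergence metric: 100% when the
    population average equals the optimal cost, lower (and possibly
    negative) as the average cost moves away from the optimum."""
    return 100.0 * (1.0 - (avg_cost - optimal_cost) / optimal_cost)
```

For example, a population whose average cost sits 10% above the optimum would score 90% under this assumed definition.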

The statistical test result for Average Convergence with the different optimization algorithms is observed in Table 12. From the table, it is clear that there is a significant difference


Table 11: Experimental result with respect to Average Convergence (%).

Case      MODBCO   M1       M2       M3      M4        M5
Case 1    87.01    70.84    69.79    72.40   62.59     66.87
Case 2    91.96    65.88    76.93    64.19   56.88     77.21
Case 3    83.40    53.63    47.23    38.23   30.07     47.44
Case 4    99.02    62.96    77.23    45.94   53.14     77.20
Case 5    96.27    86.49    91.10    92.71   77.29     89.01
Case 6    79.49    42.52    52.87    59.08   −139.09   39.82
Case 7    56.34    −32.73   24.52    26.23   −54.91    9.97
Case 8    84.00    21.72    61.67    68.51   22.95     58.22
Case 9    96.98    78.81    91.68    92.59   39.78     84.36
Case 10   70.16    8.16     29.97    37.66   −584.44   18.54
Case 11   84.58    54.10    73.49    74.40   −94.85    66.95
Case 12   15.13    −46.99   −22.73   −9.46   −411.18   −53.43

Figure 9: Performance analysis with respect to Average Standard Deviation (average standard deviation per NRP instance, Cases 1–12, for MODBCO and M1–M5).

in the mean values of convergence among the different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from the mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
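A minimal sketch of this metric, assuming (as is common for stochastic solvers) that the standard deviation of the final cost is computed per instance over repeated runs and then averaged; the helper name and this averaging scheme are illustrative assumptions:

```python
from statistics import pstdev, mean

def average_standard_deviation(runs_per_instance):
    """runs_per_instance: list of per-instance lists of final costs
    obtained over repeated runs. The per-instance (population)
    standard deviations are averaged into one dispersion score."""
    return mean(pstdev(runs) for runs in runs_per_instance)
```

A lower score indicates the algorithm lands near the same cost on every run, which is the stability property the section uses to compare methods.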

The statistical test result for Average Standard Deviation with the different types of optimization algorithms is shown in Table 14. There is a significant difference in the mean values of the standard deviation among the different optimization algorithms. The ANOVA test proves the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity (convergence diversity per NRP instance, Cases 1–12, for MODBCO and M1–M5).

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   7126.42475         5   1425.28495    15.47047   0.0000
Within groups    37588.7826       408   92.129369
Total            44715.2073       413

(b) DMRT test (Duncan test: Average Convergence)

Method   N    Subset 1 (alpha = 0.05)   Subset 2
M4       69   −47.6945
M1       69                             46.4245
M5       69                             53.6879
M3       69                             57.6296
M2       69                             59.5103
MODBCO   69                             81.6997
Sig.          1.00                      0.054

value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate together help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the other competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively. For the larger datasets, the Convergence Diversity is 62% to yield an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
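The definition above translates directly to code; this sketch assumes the per-individual convergence values of the population are already available:

```python
def convergence_diversity(convergences):
    """Difference between the best (highest) and worst (lowest)
    convergence values observed in the population."""
    return max(convergences) - min(convergences)
```

A small diversity means the whole population has settled near the same quality of solution, while a large diversity signals a wide spread between the best and worst individuals.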

The statistical tests ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of the Convergence Diversity among the various optimization algorithms. For the post hoc analysis test, the significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. From the DMRT test, the grouping of the various algorithms based on mean value is shown; there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion (average cost diversion per NRP instance, Cases 1–12, for MODBCO and M1–M5).

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows the proposed MODBCO yields less diversion in cost than the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances still yield the optimum value. The larger datasets show 56% cost divergence. A negative value in the table indicates the corresponding instances achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
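A hedged sketch of this metric, assuming cost diversion is the percentage by which the average cost over repeated runs diverges from the best known cost (negative when a new best is found, consistent with the negative entries in Table 17); the exact formula is not stated in the text:

```python
def average_cost_diversion(average_cost, best_known_cost):
    """Assumed form: percentage by which the average cost over repeated
    runs diverges from the best known cost. Negative values indicate
    the average cost fell below the published best (a new optimum)."""
    return 100.0 * (average_cost - best_known_cost) / best_known_cost
```

Under this assumption, an algorithm whose runs average exactly the best known cost scores 0.00, matching the zero entries for MODBCO.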

The statistical tests ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test reveals there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source           Sum of squares   df    Mean square   F         Sig.
Between groups   6974.407           5   1394.881      13.6322   0.000
Within groups    41747.60         408   102.3226
Total            48722.01         413

(b) DMRT test (Duncan test: Average Standard Deviation)

Method   N    Subset 1 (alpha = 0.05)   Subset 2   Subset 3
MODBCO   69   7.4743
M2       69                             9.615152
M3       69                             9.677027
M5       69                             9.729477
M1       69                             10.13242
M4       69                                        11.9316
Sig.          1.00                      0.394      1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem were conducted with our proposed algorithm MODBCO. Various existing algorithms were chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with those of the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   5144.758           5   1028.952      9.287168   0.000
Within groups    45203.48         408   110.7928
Total            50348.24         413

(b) DMRT test (Duncan test: Convergence Diversity)

Method   N    Subset 1 (alpha = 0.05)   Subset 2   Subset 3
M3       69   18.363232
MODBCO   69   18.42029
M1       69                             19.68116
M2       69                             20.57971
M4       69                             20.68116
M5       69                                        21.2464
Sig.          0.919                     0.096      1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and of the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO attains the best values on the performance metrics more often than the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 1.9% towards the optimum value, and it attained a lower worst value in addition to the best solution. With the datasets divided by size into smaller, medium, and large datasets, the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various sized datasets, reduces by 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reached 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 77% when compared with the other competitor methods.

The proposed system is tested on larger sized datasets, and it works astoundingly better than the other techniques. Incorporation of the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm increases the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature. Thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets are chosen, and 34 of the 69 instances obtained the best result. Thus, our algorithm contributes a new deterministic search and an effective heuristic approach to solving the NRP. MODBCO thereby outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    −1.00    21.20   0.80    19.60   21.90    0.80
Case 3    −0.40    8.10    1.80    10.60   11.00    0.90
Case 4    −5.67    11.33   −1.67   20.33   11.00    −2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   1061.949           5   212.3898      4.919985   0.000
Within groups    17612.867        408   43.16879
Total            18674.816        413

(b) DMRT test (Duncan test: Average Cost Diversion)

Method   N    Subset 1 (alpha = 0.05)   Subset 2
MODBCO   69   6.202899
M2       69   10.10145
M5       69   14.05797
M3       69   15.31884
M1       69   29.13043
M4       69                             149.5217
Sig.          0.573558                  1

Optimization in solving the NRP by satisfying both the hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO for various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference nos. F.No.2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.

Submit your manuscripts athttpswwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 19: Directed Bee Colony Optimization Algorithm to Solve the ...downloads.hindawi.com/journals/cin/2017/6563498.pdfDirected Bee Colony Optimization Algorithm to Solve the Nurse Rostering

Computational Intelligence and Neuroscience 19

Table 9 Experimental result with respect to error rate

Case MODBCO M1 M2 M3 M4 M5Case 1 minus001 1154 254 577 941 288Case 2 minus073 2016 169 1981 2235 083Case 3 minus047 1827 563 2324 2472 226Case 4 minus965 2003 minus264 3348 1853 minus418Case 5 000 957 273 273 1631 393Case 6 minus257 4637 2771 2771 22044 4062Case 7 2800 10540 3798 3798 11636 4582Case 8 393 6326 1497 1497 5443 1800Case 9 052 1714 215 215 5404 861Case 10 1555 6970 3625 3625 65247 4325Case 11 916 4008 1813 1813 18592 2341Case 12 4956 10901 6352 6352 44954 8850

Case 4 Case 5 Case 6NRP Instance

MODBCOM1M2

M3M4M5

0

20

40

60

80

Erro

r rat

e (

)

Case 10 Case 11 Case 12NRP Instance

MODBCOM1M2

M3M4M5

0

100

200

300

400

Erro

r rat

e (

)

Case 7 Case 8 Case 9NRP Instance

MODBCOM1M2

M3M4M5

0

20

40

60

80

100

Erro

r rat

e (

)

Case 1 Case 2 Case 3NRP Instance

MODBCOM1M2

M3M4M5

0

5

10

15

20

25

Erro

r rat

e (

)

Figure 7 Performance analysis with respect to error rate

various optimization algorithms is observed The DMRT testshows the homogenous group two homogeneous groups forbest values are formed among competitor algorithms

642 Error Rate The evaluation based on the error rateshows that our proposed MODBCO yield lesser error ratecompared to other competitor techniques The computa-tional analysis based on error rate () is shown in Table 9 andout of 33 instances in sprint type 18 instances have achievedzero error rate For sprint type dataset 88 of instances have

attained a lesser error rate For medium and larger sizeddatasets the obtained error rate is 62 and 44 respectivelyA negative value in the column indicates correspondinginstances have attained lesser optimum valve than specifiedin the INRC2010

TheCompetitorsM2 andM5 generated better solutions atthe initial stage as the size of the dataset increases they couldnot be able to find the optimal solution and get trapped inlocal optimaThe error rate () obtained by usingMODBCOwith different algorithms is shown in Figure 7

20 Computational Intelligence and Neuroscience

Case 1 Case 2 Case 3NRP Instance

0

20

40

60

80

100Av

erag

e Con

verg

ence

MODBCOM1M2

M3M4M5

Case 7 Case 8 Case 9NRP Instance

0

20

40

60

80

100

Aver

age C

onve

rgen

ce

MODBCOM1M2

M3M4M5

Case 10 Case 11 Case 12NRP Instance

0

20

40

60

80

100

Aver

age C

onve

rgen

ce

MODBCOM1M2

M3M4M5

Case 4 Case 5 Case 6NRP Instance

0

20

40

60

80

100

Aver

age C

onve

rgen

ce

MODBCOM1M2

M3M4M5

Figure 8 Performance analysis with respect to Average Convergence

Table 10 Statistical analysis with respect to error rate

(a) ANOVA test

Source factor error rateSum ofsquares df Mean square 119865 Sig

Betweengroups 6386809 5 1277361796 1526182 0000

Withingroups 3414820 408 8369657384

Total 4053501 413

(b) DMRT test

Duncan test error rate

Method 119873 Subset for alpha = 0051 2

MODBCO 69 5402238M2 69 1377936M5 69 1731724M3 69 2099903M1 69 3649033M4 69 1211591Sig 007559 1000

The statistical analysis on error rate is presented inTable 10 InANOVA test the significance value is 0000whichis less than 005 showing rejection of the null hypothesisThus there is a significant difference in value with respectto various optimization algorithmsThe DMRT test indicatestwo homogeneous groups formed from different optimiza-tion algorithms with respect to the error rate

643 Average Convergence The Average Convergence ofthe solution is the average fitness of the population to thefitness of the optimal solutionThe computational results withrespect to Average Convergence are shown in Table 11MOD-BCO shows 90 convergence rate in small size instances and82 convergence rate in medium size instances For longerinstances it shows 77 convergence rate Negative values inthe column show the corresponding instances get deviatedfrom optimal solution and trapped in local optima It isobserved that with increase in the problem size convergencerate reduces and becomesworse inmany algorithms for largerinstances as shown in Table 11The Average Convergence rateattained by various optimization algorithms is depicted inFigure 8

The statistical test results for Average Convergence with the different optimization algorithms are observed in Table 12. From the table, it is clear that there is a significant difference

Computational Intelligence and Neuroscience 21

Table 11: Experimental result with respect to Average Convergence (%).

Case      MODBCO    M1        M2        M3        M4         M5
Case 1    87.01     70.84     69.79     72.40     62.59      66.87
Case 2    91.96     65.88     76.93     64.19     56.88      77.21
Case 3    83.40     53.63     47.23     38.23     30.07      47.44
Case 4    99.02     62.96     77.23     45.94     53.14      77.20
Case 5    96.27     86.49     91.10     92.71     77.29      89.01
Case 6    79.49     42.52     52.87     59.08     -139.09    39.82
Case 7    56.34     -32.73    24.52     26.23     -54.91     9.97
Case 8    84.00     21.72     61.67     68.51     22.95      58.22
Case 9    96.98     78.81     91.68     92.59     39.78      84.36
Case 10   70.16     8.16      29.97     37.66     -584.44    18.54
Case 11   84.58     54.10     73.49     74.40     -94.85     66.95
Case 12   15.13     -46.99    -22.73    -9.46     -411.18    -53.43

[Figure: four panels, each plotting the Average Standard Deviation attained by MODBCO and M1-M5 on the NRP instances, grouped as Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Figure 9: Performance analysis with respect to Average Standard Deviation.

in the mean values of convergence across the different optimization algorithms. The ANOVA test indicates rejection of the null hypothesis, since the significance value is 0.000. The post hoc analysis shows there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of the values from their mean, and it helps to characterize the consistency of the proposed algorithm.

The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
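As a rough sketch of the aggregation (assumed, since the paper reports a single value per case), the standard deviation of the best costs over repeated runs of each instance can be averaged across instances:

```python
import statistics

def average_standard_deviation(runs_per_instance):
    """Average Standard Deviation across instances (assumed aggregation):
    the population standard deviation of the costs found over repeated
    runs of each instance, averaged over all instances.  pstdev treats
    the runs as the whole population; stdev would be the sample variant."""
    return statistics.mean(statistics.pstdev(runs)
                           for runs in runs_per_instance)

# Two hypothetical instances: one with spread-out costs, one constant.
print(average_standard_deviation([[2, 4, 4, 4, 5, 5, 7, 9],
                                  [1, 1, 1, 1]]))  # 1.0
```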

The statistical test results for Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of standard deviation across the different optimization algorithms. The ANOVA test shows the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


[Figure: four panels, each plotting the Convergence Diversity attained by MODBCO and M1-M5 on the NRP instances, grouped as Cases 1-3, Cases 4-6, Cases 7-9, and Cases 10-12.]

Figure 10: Performance analysis with respect to Convergence Diversity.

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test

Source (factor: Average Convergence)   Sum of squares    df    Mean square    F          Sig.
Between groups                         712642.475          5   142528.495     15.47047   0.0000
Within groups                          3758878.26        408   9212.9369
Total                                  4471520.73        413

(b) DMRT test

Duncan test: Average Convergence

                     Subset for alpha = 0.05
Method    N          1            2
M4        69         -47.6945
M1        69                      46.4245
M5        69                      53.6879
M3        69                      57.6296
M2        69                      59.5103
MODBCO    69                      81.6997
Sig.                 1.000        0.054

value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate together help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62% while still yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
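The definition above reduces to a best-minus-worst gap over the convergence values recorded for a population; a minimal sketch (the function name is illustrative):

```python
def convergence_diversity(convergences):
    """Convergence Diversity: the gap between the best and the worst
    convergence value (in %) achieved across the population, as
    tabulated per case in Table 15.  A smaller gap means the whole
    population converged more uniformly."""
    return max(convergences) - min(convergences)

# Convergence values from three hypothetical runs of one instance.
print(convergence_diversity([87.01, 70.84, 62.59]))  # ~24.42
```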

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are observed in Table 16. There is a significant difference in the mean values of Convergence Diversity across the various optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. From the DMRT test, which groups the various algorithms by mean value, there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO      M1          M2          M3          M4          M5
Case 1    6.501382    8.551285    9.83255     8.645203    10.57402    9.239032
Case 2    5.275281    8.170864    11.33125    8.472083    10.55361    7.382535
Case 3    5.947749    8.474928    8.898058    8.337811    11.93913    8.033767
Case 4    5.270417    5.310443    10.24612    9.50821     9.092851    8.049586
Case 5    7.035945    9.400025    7.790991    10.28423    13.45243    11.46945
Case 6    7.15779     10.13578    9.109779    9.995431    13.54185    8.020126
Case 7    7.502947    9.069255    9.974609    8.341374    11.50138    10.60612
Case 8    7.861593    9.799198    7.447106    10.05138    11.63378    10.61275
Case 9    9.310626    13.83453    9.437943    12.13188    13.56668    10.54568
Case 10   10.42404    15.14141    11.87235    10.59549    14.15797    12.96181
Case 11   10.25454    14.68924    10.33091    10.30386    11.21071    13.3541
Case 12   10.6846     13.49546    10.75478    15.40791    14.42514    11.3357

[Figure: a single chart plotting the Average Cost Diversion (0.00-600.00) attained by MODBCO and M1-M5 across all twelve NRP instance cases.]

Figure 11: Performance analysis with respect to Average Cost Diversion.

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value; the larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances have achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
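One formula consistent with the sign convention in Table 17 (negative exactly when a run improves on the best-known cost) is the mean relative deviation of the obtained costs from the best-known cost. A hypothetical sketch, with assumed names and formula:

```python
def average_cost_diversion(obtained_costs, best_known_cost):
    """Average Cost Diversion (%, assumed formula): mean relative
    deviation of the costs obtained over repeated runs from the
    best-known cost.  Negative values indicate that, on average, the
    runs beat the previously best-known cost (cf. Table 17)."""
    n = len(obtained_costs)
    return 100.0 * sum((c - best_known_cost) / best_known_cost
                       for c in obtained_costs) / n

# One run 10% above and one 10% below the best-known cost cancel out.
print(average_cost_diversion([110, 90], 100))  # 0.0
# A single run that improves on the best-known cost diverts negatively.
print(average_cost_diversion([95], 100))       # -5.0
```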

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are observed in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus, the null hypothesis is rejected. The DMRT test reveals there are two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test

Source (factor: Average Standard Deviation)   Sum of squares    df    Mean square    F         Sig.
Between groups                                697.4407            5   139.4881       13.6322   0.000
Within groups                                 4174.76           408   10.23226
Total                                         4872.201          413

(b) DMRT test

Duncan test: Average Standard Deviation

                     Subset for alpha = 0.05
Method    N          1           2            3
MODBCO    69         7.4743
M2        69                     9.615152
M3        69                     9.677027
M5        69                     9.729477
M1        69                     10.13242
M4        69                                  11.9316
Sig.                 1.000       0.394        1.000

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm, MODBCO. Various existing algorithms are chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO    M1       M2       M3       M4       M5
Case 1    30.00     35.25    37.28    32.83    37.94    38.83
Case 2    22.96     27.92    29.18    23.85    27.95    27.62
Case 3    53.05     56.21    64.66    59.29    60.79    65.05
Case 4    29.99     34.03    33.62    30.84    39.46    33.32
Case 5    7.01      7.87     8.33     6.71     8.77     9.29
Case 6    20.89     22.21    26.98    19.29    25.45    24.87
Case 7    43.69     54.66    48.13    55.47    50.24    56.18
Case 8    27.67     30.03    31.83    24.11    30.35    29.69
Case 9    6.89      8.10     8.22     8.04     8.24     9.10
Case 10   37.10     44.29    45.53    38.95    41.91    49.98
Case 11   11.43     11.64    10.81    11.36    11.91    12.15
Case 12   91.13     75.96    81.78    69.90    86.63    85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test

Source (factor: Convergence Diversity)   Sum of squares    df    Mean square    F          Sig.
Between groups                           514.4758            5   102.8952       9.287168   0.000
Within groups                            4520.348          408   11.07928
Total                                    5034.824          413

(b) DMRT test

Duncan test: Convergence Diversity

                     Subset for alpha = 0.05
Method    N          1            2            3
M3        69         18.363232
MODBCO    69         18.42029
M1        69                      19.68116
M2        69                      20.57971
M4        69                      20.68116
M5        69                                   21.2464
Sig.                 0.919        0.096        1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and of the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO attains the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 19% towards the optimum value, and it attains a lower worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and larger datasets; the standard deviation of MODBCO is reduced by 4.9%,

2.22%, and 4.13%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various-sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. The error rate of our proposed algorithm is reduced by 7.7% when compared with the other competitor methods.

The proposed system is tested on larger-sized datasets, and it performs markedly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without bias; thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for global search and the Modified Nelder-Mead Method for local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 different cases of various-sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus, our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. MODBCO thereby outperforms classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO    M1       M2       M3       M4        M5
Case 1    0.00      6.40     1.40     3.20     5.20      1.60
Case 2    -1.00     21.20    0.80     19.60    21.90     0.80
Case 3    -0.40     8.10     1.80     10.60    11.00     0.90
Case 4    -5.67     11.33    -1.67    20.33    11.00     -2.33
Case 5    5.20      24.00    6.80     6.80     40.00     9.80
Case 6    15.80     46.40    25.60    25.60    215.60    39.80
Case 7    13.20     46.00    16.80    16.80    67.80     20.20
Case 8    5.00      49.00    10.67    10.67    43.67     13.67
Case 9    1.20      40.80    5.00     5.00     127.60    20.20
Case 10   18.00     45.40    26.20    26.20    100.00    32.60
Case 11   26.00     70.00    33.60    33.60    335.40    40.60
Case 12   15.67     36.33    20.00    20.00    141.67    29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source (factor: Average Cost Diversion)   Sum of squares    df    Mean square    F          Sig.
Between groups                            1061949             5   212389.8       4.919985   0.000
Within groups                             17612867          408   43168.79
Total                                     18674816          413

(b) DMRT test

Duncan test: Average Cost Diversion

                     Subset for alpha = 0.05
Method    N          1            2
MODBCO    69         6.202899
M2        69         10.10145
M5        69         14.05797
M3        69         15.31884
M1        69         29.13043
M4        69                      149.5217
Sig.                 0.573558     1.000

Optimization in solving the NRP while satisfying both hard and soft constraints.

The future work can be projected to

(a) adapting the proposed MODBCO to various scheduling and timetabling problems,

(b) exploring infeasible solutions to approach the optimal solution,

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies,

(d) investigating application to the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No. 2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems - a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. K. Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


Compared with other existing methods the mean valueof MODBCO is 19 reduced towards optimum value withother competitor methods and it attained lesser worst valuein addition to the best solution The datasets are dividedbased on their size as smaller medium and large datasetthe standard deviation of MODBCO is reduced to 49

222 and 413 respectivelyThe error rate of our proposedapproach when compared with other competitor methodswith various sized datasets reduces to 106 for the smallerdataset 945 for the medium datasets and 704 for thelarger datasets The convergence rate of MODBCO hasachieved 90 for the smaller dataset 82 for the mediumdataset and 7737 for the larger dataset The error rate ofour proposed algorithm is reduced by 77 when comparedwith other competitor methods

Theproposed system is tested on larger sized datasets andit is working astoundingly better than the other techniquesIncorporation of Modified Nelder-Mead in Directed BeeColony Optimization Algorithm increases the exploitationstrategy within the given exploration search space Thismethod balances the exploration and exploitation withoutany biased natureThusMODBCO converges the populationtowards an optimal solution at the end of each iteration Bothcomputational and statistical analyses show the significantperformance over other competitor algorithms in solving theNRP The computational complexity is greater due to theuse of local heuristic Nelder-Mead Method However theproposed algorithm is better than exact methods and otherheuristic approaches in solving the NRP in terms of timecomplexity

8 Conclusion

This paper tackles solving the NRP using MultiobjectiveDirected Bee Colony Optimization Algorithm namedMOD-BCO To solve the NRP effectively Directed Bee Colonyalgorithm is chosen for global search and Modified Nelder-MeadMethod for local best searchTheproposed algorithm isevaluated using the INRC2010 dataset and the performanceof the proposed algorithm is compared with other fiveexisting methods To assess the performance of our proposedalgorithm 69 different cases of various sized datasets arechosen and 34 out of 69 instances got the best resultThus our algorithm contributes with a new deterministicsearch and effective heuristic approach to solve the NRPThus MODBCO outperforms with classical Bee Colony

Computational Intelligence and Neuroscience 25

Table 17 Experimental result with respect to Average Cost Diversion

Case MODBCO M1 M2 M3 M4 M5Case 1 000 640 140 320 520 160Case 2 minus100 2120 080 1960 2190 080Case 3 minus040 810 180 1060 1100 090Case 4 minus567 1133 minus167 2033 1100 minus233Case 5 520 2400 680 680 4000 980Case 6 1580 4640 2560 2560 21560 3980Case 7 1320 4600 1680 1680 6780 2020Case 8 500 4900 1067 1067 4367 1367Case 9 120 4080 500 500 12760 2020Case 10 1800 4540 2620 2620 10000 3260Case 11 2600 7000 3360 3360 33540 4060Case 12 1567 3633 2000 2000 14167 2900

Table 18 Statistical analysis with respect to Average Cost Diversion

(a) ANOVA test

Source factor Average Cost DiversionSum ofsquares df Mean square 119865 Sig

Betweengroups 1061949 5 2123898 4919985 0000

Withingroups 17612867 408 4316879

Total 18674816 413

(b) DMRT test

Duncan test Average Cost Diversion

Method 119873 Subset for alpha = 0051 2

MODBCO 69 6202899M2 69 1010145M5 69 1405797M3 69 1531884M1 69 2913043M4 69 1495217Sig 0573558 1

Optimization for solving NRP by satisfying both hard andsoft constraints

The future work can be projected to

(a) adapting proposed MODBCO for various schedulingand timetabling problems

(b) exploring unfeasible solution to imitate optimal solu-tion

(c) further tuning the parameters of the proposed algo-rithm andmeasuring the exploitation and explorationstrategy

(d) investigating for applying Second International INRC2014 datasets

Conflicts of Interest

The authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

This work is a part of the Research Projects sponsoredby the Major Project Scheme UGC India Referencenos FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE) dated March 2015 The authors would like toexpress their thanks for their financial support offered by theSponsored Agencies

References

[1] M Crepinsek S-H Liu and M Mernik ldquoExploration andexploitation in evolutionary algorithms a surveyrdquo ACM Com-puting Surveys vol 45 no 3 article 35 2013

[2] R Bai E K BurkeG Kendall J Li andBMcCollum ldquoAhybridevolutionary approach to the nurse rostering problemrdquo IEEETransactions on Evolutionary Computation vol 14 no 4 pp580ndash590 2010

[3] M Wooldridge An Introduction to Multiagent Systems JohnWiley amp Sons 2009

[4] E Goldberg David Genetic Algorithm in Search Optimizationand Machine Learning vol 3 Pearson Education 1988

[5] J Kennedy ldquoParticle swarm optimizationrdquo in Encyclopedia ofMachine Learning pp 760ndash766 Springer US 2011

[6] M Dorigo M Birattari and T Stutzle ldquoAnt colony optimiza-tionrdquo IEEE Computational Intelligence Magazine vol 1 no 4pp 28ndash39 2006

[7] D Teodorovic P Lucic G Markovic and M DellrsquoOrco ldquoBeecolony optimization principles and applicationsrdquo in Proceed-ings of the 8th Seminar on Neural Network Applications inElectrical Engineering pp 151ndash156 September 2006

[8] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing vol 8no 1 pp 687ndash697 2008

[9] R Kumar ldquoDirected bee colony optimization algorithmrdquoSwarm and Evolutionary Computation vol 17 pp 60ndash73 2014

26 Computational Intelligence and Neuroscience

[10] T Osogami and H Imai ldquoClassification of various neigh-borhood operations for the nurse scheduling problemrdquo inProceedings of the International Symposium on Algorithmsand Computation Taipei Taiwan December 2000 pp 72ndash83Springer Berlin Germany 2000

[11] H H Millar and M Kiragu ldquoCyclic and non-cyclic schedulingof 12 h shift nurses by network programmingrdquoEuropean Journalof Operational Research vol 104 no 3 pp 582ndash592 1998

[12] J Van den Bergh J Belien P De Bruecker E Demeulemeesterand L De Boeck ldquoPersonnel scheduling a literature reviewrdquoEuropean Journal of Operational Research vol 226 no 3 pp367ndash385 2013

[13] B Cheang H Li A Lim and B Rodrigues ldquoNurse rosteringproblemsmdasha bibliographic surveyrdquo European Journal of Opera-tional Research vol 151 no 3 pp 447ndash460 2003

[14] L B Asaju M A Awadallah M A Al-Betar and A T KhaderldquoSolving nurse rostering problem using artificial bee colonyalgorithmrdquo in Proceedings of the 7th International Conference onInformation Technology (ICIT rsquo15) pp 32ndash38 Amman JordanMay 2015

[15] M A Awadallah A L Bolaji and M A Al-Betar ldquoA hybridartificial bee colony for a nurse rostering problemrdquo Applied SoftComputing vol 35 pp 726ndash739 2015

[16] M A Awadallah A T Khader M A Al-Betar and A L BolajildquoGlobal best harmony search with a new pitch adjustmentdesigned for nurse rosteringrdquo Journal of King Saud University-Computer and Information Sciences vol 25 no 2 pp 145ndash1622013

[17] M A Awadallah M A Al-Betar A T Khader A L Bolajiand M Alkoffash ldquoHybridization of harmony search withhill climbing for highly constrained nurse rostering problemrdquoNeural Computing and Applications vol 28 no 3 pp 463ndash4822017

[18] H G Santos T A M Toffolo R A M Gomes and SRibas ldquoInteger programming techniques for the nurse rosteringproblemrdquoAnnals of Operations Research vol 239 no 1 pp 225ndash251 2016

[19] I Berrada J A Ferland and P Michelon ldquoA multi-objectiveapproach to nurse scheduling with both hard and soft con-straintsrdquo Socio-Economic Planning Sciences vol 30 no 3 pp183ndash193 1996

[20] E K Burke J Li and R Qu ldquoA Pareto-based search methodol-ogy for multi-objective nurse schedulingrdquo Annals of OperationsResearch vol 196 pp 91ndash109 2012

[21] K A Dowsland and J MThompson ldquoSolving a nurse schedul-ing problemwith knapsacks networks and tabu searchrdquo Journalof the Operational Research Society vol 51 no 7 pp 825ndash8332000

[22] K A Dowsland ldquoNurse scheduling with tabu search andstrategic oscillationrdquo European Journal of Operational Researchvol 106 no 2-3 pp 393ndash407 1998

[23] E Burke P De Causmaecker and G VandenBerghe ldquoA hybridtabu search algorithm for the nurse rostering problemrdquo in Pro-ceedings of the Asia-Pacific Conference on Simulated Evolutionand Learning vol 1585 pp 187ndash194 Springer Berlin Germany1998

[24] E K Burke G Kendall and E Soubeiga ldquoA tabu-search hyper-heuristic for timetabling and rosteringrdquo Journal of Heuristicsvol 9 no 6 pp 451ndash470 2003

[25] E Burke P Cowling P De Causmaecker and G V BergheldquoA memetic approach to the nurse rostering problemrdquo AppliedIntelligence vol 15 no 3 pp 199ndash214 2001

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.

[34] T. D. Seeley, P. K. Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.

[41] G. Gonzalez-Rodriguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


Computational Intelligence and Neuroscience 21

Table 11: Experimental result with respect to Average Convergence.

Case      MODBCO    M1        M2        M3        M4         M5
Case 1    87.01     70.84     69.79     72.40     62.59      66.87
Case 2    91.96     65.88     76.93     64.19     56.88      77.21
Case 3    83.40     53.63     47.23     38.23     30.07      47.44
Case 4    99.02     62.96     77.23     45.94     53.14      77.20
Case 5    96.27     86.49     91.10     92.71     77.29      89.01
Case 6    79.49     42.52     52.87     59.08     -139.09    39.82
Case 7    56.34     -32.73    24.52     26.23     -54.91     9.97
Case 8    84.00     21.72     61.67     68.51     22.95      58.22
Case 9    96.98     78.81     91.68     92.59     39.78      84.36
Case 10   70.16     8.16      29.97     37.66     -584.44    18.54
Case 11   84.58     54.10     73.49     74.40     -94.85     66.95
Case 12   15.13     -46.99    -22.73    -9.46     -411.18    -53.43

Figure 9: Performance analysis with respect to Average Standard Deviation (four panels covering Case 1-Case 12; x-axis: NRP Instance; y-axis: Average Standard Deviation; series: MODBCO, M1-M5).

in mean values of convergence among the different optimization algorithms. The ANOVA test rejects the null hypothesis, since the significance value is 0.000. The post hoc analysis shows that there are two homogeneous groups among the different optimization algorithms with respect to the mean values of convergence.
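The "homogeneous groups" reported by the DMRT can be illustrated with a simplified grouping rule: sort the method means and collect consecutive methods whose spread stays below a critical range. This is only a sketch of the idea (a real Duncan test derives the critical range from the studentized range distribution and the within-groups mean square); the means and the critical range below are illustrative values, not the paper's.

```python
def homogeneous_subsets(means, critical_range):
    """Greedy grouping of method means into subsets whose spread stays
    below a critical range; a simplified stand-in for the DMRT."""
    items = sorted(means.items(), key=lambda kv: kv[1])
    subsets, current = [], [items[0]]
    for name, m in items[1:]:
        if m - current[0][1] <= critical_range:
            current.append((name, m))
        else:
            subsets.append([n for n, _ in current])
            current = [(name, m)]
    subsets.append([n for n, _ in current])
    return subsets

# means resembling Table 12(b); critical range chosen for illustration
means = {"M4": -47.69, "M1": 46.42, "M5": 53.69,
         "M3": 57.63, "M2": 59.51, "MODBCO": 81.70}
groups = homogeneous_subsets(means, 40.0)
```

With these values the rule isolates M4 in its own subset and groups the remaining methods together, mirroring the "two homogeneous groups" reading of Table 12(b).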

6.4.4. Average Standard Deviation. The Average Standard Deviation is the dispersion of values from the mean value, and it helps to deduce features of the proposed algorithm. The computed results with respect to the Average Standard Deviation are shown in Table 13. The Average Standard Deviation attained by the various optimization algorithms is depicted in Figure 9.
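As a rough illustration of the metric, the Average Standard Deviation can be read as the mean of the per-instance standard deviations of the solution costs over repeated runs; this interpretation and the sample run costs below are assumptions for illustration, not values from the paper.

```python
import statistics

def average_standard_deviation(costs_per_instance):
    # per-instance population standard deviation of the run costs,
    # averaged over all instances
    deviations = [statistics.pstdev(runs) for runs in costs_per_instance]
    return sum(deviations) / len(deviations)

# hypothetical costs from five runs on two NRP instances
runs = [[310, 315, 312, 318, 311], [505, 500, 509, 507, 504]]
avg_sd = average_standard_deviation(runs)
```

A smaller value indicates that the algorithm produces similar costs across independent runs, which is how Table 13 is interpreted here.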

The statistical test results for the Average Standard Deviation with the different types of optimization algorithms are shown in Table 14. There is a significant difference in the mean values of the standard deviation among the different optimization algorithms. The ANOVA test shows that the null hypothesis is rejected, since the significance value is 0.000, which is less than the critical


Figure 10: Performance analysis with respect to Convergence Diversity (four panels covering Case 1-Case 12; x-axis: NRP Instance; y-axis: Convergence Diversity; series: MODBCO, M1-M5).

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test

Source factor: Average Convergence
                  Sum of squares   df    Mean square   F          Sig.
Between groups    7126424.75       5     1425284.95    15.47047   0.0000
Within groups     37588782.6       408   92129.369
Total             44715207.3       413

(b) DMRT test

Duncan test: Average Convergence
Method    N     Subset for alpha = 0.05
                1            2
M4        69    -47.6945
M1        69                 46.4245
M5        69                 53.6879
M3        69                 57.6296
M2        69                 59.5103
MODBCO    69                 81.6997
Sig.            1.00         0.054

value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.
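The ANOVA quantities reported in Tables 12, 14, 16, and 18 (between-/within-groups sums of squares, degrees of freedom, mean squares, and the F statistic) follow the standard one-way decomposition, which can be sketched as follows; the tiny sample groups are illustrative only.

```python
def one_way_anova(groups):
    # one-way ANOVA: total variation split into between-groups
    # and within-groups components, F = MS_between / MS_within
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = len(groups) - 1, n - len(groups)
    ms_between, ms_within = ss_between / df_between, ss_within / df_within
    return {"SS_between": ss_between, "df_between": df_between,
            "SS_within": ss_within, "df_within": df_within,
            "F": ms_between / ms_within}

result = one_way_anova([[1, 2, 3], [2, 3, 4]])
```

A large F whose significance falls below 0.05 rejects the null hypothesis of equal group means, which is the decision reported in each ANOVA table.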

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is calculated as the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis based on Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62%, yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
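Reading the metric as stated (best convergence minus worst convergence in the population), a minimal helper might look like this; the sample values are made up for illustration.

```python
def convergence_diversity(convergence_values):
    # spread between the best and worst convergence in the population
    return max(convergence_values) - min(convergence_values)

# hypothetical convergence percentages of a population of solutions
population = [87.0, 70.8, 62.6, 72.4]
spread = convergence_diversity(population)
```

A small spread means the population has converged tightly, while a large spread signals that some individuals lag far behind the best.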

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are reported in Table 16. There is a significant difference in the mean values of Convergence Diversity among the various optimization algorithms. For the post hoc analysis, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the various algorithms by their mean values; there are three homogeneous groups


Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357

Figure 11: Performance analysis with respect to Average Cost Diversion (x-axis: NRP Instance, Case 1-Case 12; y-axis: Average Cost Diversion; series: MODBCO, M1-M5).

among the various optimization algorithms with respect to the mean values of the Convergence Diversity.

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, out of which many instances still yield the optimum value; the larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instance has achieved a new optimized value. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
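One consistent reading of Table 17 is that Average Cost Diversion measures the percentage gap between the average obtained cost and the best-known cost, so a negative entry marks a run that improved on the best-known value. Under that assumed definition, a sketch:

```python
def average_cost_diversion(average_cost, best_known_cost):
    # percentage deviation of the average obtained cost from the
    # best-known cost; negative means a new best-known value was found
    return 100.0 * (average_cost - best_known_cost) / best_known_cost
```

For example, an average cost 10% above the best-known value yields 10.0, while an average cost below it yields a negative diversion, matching the negative entries in Table 17.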

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are reported in Table 18. From the table, it is inferred that there is a significant difference in the mean values of cost diversion among the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test

Source factor: Average Standard Deviation
                  Sum of squares   df    Mean square   F         Sig.
Between groups    697.4407         5     139.4881      13.6322   0.000
Within groups     4174.76          408   10.23226
Total             4872.201         413

(b) DMRT test

Duncan test: Average Standard Deviation
Method    N     Subset for alpha = 0.05
                1          2           3
MODBCO    69    7.4743
M2        69               9.615152
M3        69               9.677027
M5        69               9.729477
M1        69               10.13242
M4        69                           11.9316
Sig.            1.00       0.394       1.00

the various optimization algorithms with respect to the mean values of the cost diversion.

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm MODBCO. Various existing algorithms are chosen to solve the NRP and compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with those of the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed


Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test

Source factor: Convergence Diversity
                  Sum of squares   df    Mean square   F          Sig.
Between groups    5144.758         5     1028.952      9.287168   0.000
Within groups     45203.48         408   110.7928
Total             50348.24         413

(b) DMRT test

Duncan test: Convergence Diversity
Method    N     Subset for alpha = 0.05
                1           2          3
M3        69    18.363232
MODBCO    69    18.42029
M1        69                19.68116
M2        69                20.57971
M4        69                20.68116
M5        69                           21.2464
Sig.            0.919       0.096      1

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7-18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO attains the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 1.9% towards the optimum value, and it attains a lesser worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and large datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 1.06% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% compared with the other competitor methods.
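The error rate and convergence rate quoted above are not formally defined in this section; under the common definitions (relative percentage gap to the best-known cost, and share of runs that reach it), they can be sketched as follows, with made-up run costs:

```python
def error_rate(obtained_cost, best_known_cost):
    # relative error with respect to the best-known cost, in percent
    return 100.0 * (obtained_cost - best_known_cost) / best_known_cost

def convergence_rate(run_costs, best_known_cost):
    # percentage of runs whose final cost matches the best-known cost
    hits = sum(1 for cost in run_costs if cost == best_known_cost)
    return 100.0 * hits / len(run_costs)

# hypothetical example: five runs, three of which reach the best-known cost
rate = convergence_rate([100, 100, 101, 100, 105], 100)
```

These two quantities move in opposite directions as instance size grows, which matches the trend reported for the smaller, medium, and larger datasets.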

The proposed system is also tested on the larger sized datasets, where it performs considerably better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any bias; thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm is better than exact methods and other heuristic approaches in solving the NRP in terms of time complexity.
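The balance described above (global exploration by the bee colony, local exploitation by a Nelder-Mead-style step) can be caricatured on a continuous test function. The toy loop below is a hedged sketch of the idea, not the paper's MODBCO, which operates on rosters rather than real-valued vectors: random "scout" moves explore, while a reflection through the current best solution imitates the Nelder-Mead reflection step.

```python
import random

def sphere(x):
    # simple convex test function with optimum at the origin
    return sum(v * v for v in x)

def hybrid_search(f, dim=3, bees=10, iters=200, seed=1):
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(bees)]
    best = min(pop, key=f)
    for _ in range(iters):
        for i, bee in enumerate(pop):
            # exploration: random Gaussian perturbation (scout behaviour)
            cand = [v + rnd.gauss(0, 0.5) for v in bee]
            # exploitation: reflect the bee through the incumbent best,
            # in the spirit of a Nelder-Mead reflection
            refl = [b + (b - v) for b, v in zip(best, bee)]
            pop[i] = min((bee, cand, refl), key=f)
        best = min(pop, key=f)
    return best

solution = hybrid_search(sphere)
```

Greedy selection keeps only non-worsening moves, so each bee performs stochastic descent while the reflections pull the swarm toward the incumbent best, which is the exploration/exploitation trade-off the paragraph describes.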

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm, named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solve the NRP. Thus MODBCO outperforms the classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    -1.00    21.20   0.80    19.60   21.90    0.80
Case 3    -0.40    8.10    1.80    10.60   11.00    0.90
Case 4    -5.67    11.33   -1.67   20.33   11.00    -2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test

Source factor: Average Cost Diversion
                  Sum of squares   df    Mean square   F          Sig.
Between groups    10619.49         5     2123.898      4.919985   0.000
Within groups     176128.67        408   431.6879
Total             186748.16        413

(b) DMRT test

Duncan test: Average Cost Diversion
Method    N     Subset for alpha = 0.05
                1            2
MODBCO    69    6.202899
M2        69    10.10145
M5        69    14.05797
M3        69    15.31884
M1        69    29.13043
M4        69                 149.5217
Sig.            0.573558     1

Optimization for solving the NRP by satisfying both hard and soft constraints.

Future work can be directed towards

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions on the way to the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) applying the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No. 2014-15/NFO-2014-15-OBC-PON-3843 (SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72-83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems - a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.

[26] M Hadwan and M Ayob ldquoA constructive shift patternsapproach with simulated annealing for nurse rostering prob-lemrdquo in Proceedings of the International Symposium on Infor-mation Technology (ITSim rsquo10) pp 1ndash6 IEEE Kuala LumpurMalaysia June 2010

[27] E Sharif M Ayob andM Hadwan ldquoHybridization of heuristicapproach with variable neighborhood descent search to solvenurse Rostering problem at Universiti Kebangsaan MalaysiaMedical Centre (UKMMC)rdquo in Proceedings of the 3rd Confer-ence on Data Mining and Optimization (DMO rsquo11) pp 178ndash183June 2011

[28] U Aickelin and K A Dowsland ldquoAn indirect genetic algorithmfor a nurse-scheduling problemrdquo Computers and OperationsResearch vol 31 no 5 pp 761ndash778 2004

[29] S Asta E Ozcan and T Curtois ldquoA tensor based hyper-heuristic for nurse rosteringrdquoKnowledge-Based Systems vol 98pp 185ndash199 2016

[30] K Anwar M A Awadallah A T Khader and M A Al-BetarldquoHyper-heuristic approach for solving nurse rostering prob-lemrdquo in Proceedings of the IEEE Symposium on ComputationalIntelligence in Ensemble Learning (CIEL rsquo14) pp 1ndash6 December2014

[31] N Todorovic and S Petrovic ldquoBee colony optimization algo-rithm for nurse rosteringrdquo IEEE Transactions on Systems Manand Cybernetics Systems vol 43 no 2 pp 467ndash473 2013

[32] X-S Yang Nature-Inspired Meta-Heuristic Algorithms LuniverPress 2010

[33] S Goyal ldquoThe applications survey bee colonyrdquo IRACST-Engineering Science and Technology vol 2 no 2 pp 293ndash2972012

[34] T D Seeley P Kirk Visscher and K M Passino ldquoGroupdecision-making in honey bee swarmsrdquoAmerican Scientist vol94 no 3 pp 220ndash229 2006

[35] KM Passino T D Seeley and P K Visscher ldquoSwarm cognitionin honey beesrdquo Behavioral Ecology and Sociobiology vol 62 no3 pp 401ndash414 2008

[36] W Jiao and Z Shi ldquoA dynamic architecture for multi-agentsystemsrdquo in Proceedings of the Technology of Object-OrientedLanguages and Systems (TOOLS 31 rsquo99) pp 253ndash260 NanjingChina November 1999

[37] W Zhong J Liu M Xue and L Jiao ldquoA multi-agent geneticalgorithm for global numerical optimizationrdquo IEEE Transac-tions on Systems Man and Cybernetics Part B Cybernetics vol34 no 2 pp 1128ndash1141 2004

[38] S Haspeslagh P De Causmaecker A Schaerf and M StoslashlevikldquoThe first international nurse rostering competition 2010rdquoAnnals of Operations Research vol 218 no 1 pp 221ndash236 2014

[39] J Demsar ldquoStatistical comparisons of classifiers over multipledata setsrdquo Journal of Machine Learning Research vol 7 pp 1ndash302006

[40] A Costa F A Cappadonna and S Fichera ldquoA dual encoding-basedmeta-heuristic algorithm for solving a constrained hybridflow shop scheduling problemrdquo Computers and Industrial Engi-neering vol 64 no 4 pp 937ndash958 2013

[41] G Gonzalez-Rodrıguez A Colubi and M A Gil ldquoFuzzy datatreated as functional data a one-way ANOVA test approachrdquoComputational Statistics and Data Analysis vol 56 no 4 pp943ndash955 2012

[42] D B Duncan ldquoMultiple range and multiple 119865 testsrdquo Biometricsvol 11 pp 1ndash42 1955


22 Computational Intelligence and Neuroscience

[Figure 10: Performance analysis with respect to Convergence Diversity. Four panels cover the NRP instances (Cases 1-3, 4-6, 7-9, and 10-12), each plotting Convergence against Diversity for MODBCO and the competitor methods M1-M5.]

Table 12: Statistical analysis with respect to Average Convergence.

(a) ANOVA test (source factor: Average Convergence)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   7126.42475       5     1425.28495    15.47047   0.0000
Within groups    37588.7826       408   92.129369
Total            44715.2073       413

(b) DMRT test (Duncan test: Average Convergence)

Method   N    Subset for alpha = 0.05
              1          2
M4       69   -47.6945
M1       69              46.4245
M5       69              53.6879
M3       69              57.6296
M2       69              59.5103
MODBCO   69              81.6997
Sig.          1.00       0.054

…value 0.05. In the DMRT test, there are three homogeneous groups among the different optimization algorithms with respect to the mean values of the standard deviation.
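The ANOVA quantities reported in the tables of this section (sums of squares, degrees of freedom, mean squares, and the F statistic) follow the standard one-way decomposition. A minimal pure-Python sketch, using illustrative group samples rather than the paper's data:

```python
# One-way ANOVA sketch mirroring the table layout above: between-groups and
# within-groups sums of squares, degrees of freedom, mean squares, and F.
# The three group samples are illustrative, not the paper's measurements.
groups = {
    "MODBCO": [30.00, 22.96, 53.05, 29.99, 7.01],
    "M1": [35.25, 27.92, 56.21, 34.03, 7.87],
    "M4": [37.94, 27.95, 60.79, 39.46, 8.77],
}

values = [v for g in groups.values() for v in g]
grand_mean = sum(values) / len(values)
group_means = {k: sum(g) / len(g) for k, g in groups.items()}

ss_between = sum(len(g) * (group_means[k] - grand_mean) ** 2 for k, g in groups.items())
ss_within = sum((v - group_means[k]) ** 2 for k, g in groups.items() for v in g)
ss_total = sum((v - grand_mean) ** 2 for v in values)

df_between = len(groups) - 1             # k - 1 groups
df_within = len(values) - len(groups)    # N - k observations
f_stat = (ss_between / df_between) / (ss_within / df_within)

# The decomposition SS_total = SS_between + SS_within always holds.
print(round(ss_between + ss_within - ss_total, 9), round(f_stat, 4))
```

The null hypothesis of equal group means is rejected when the significance (p-value) of the F statistic falls below the critical value 0.05, which is the reading applied throughout Tables 12-18.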

6.4.5. Convergence Diversity. The Convergence Diversity of the solution is the difference between the best convergence and the worst convergence generated in the population. The Convergence Diversity and the error rate help to infer the performance of the proposed algorithm. The computational analysis of Convergence Diversity for MODBCO and the other competitor algorithms is shown in Table 15. The Convergence Diversity for the smaller and medium datasets is 58% and 50%, respectively; for the larger datasets, the Convergence Diversity is 62%, yielding an optimum value. Figure 10 shows the comparison of the various optimization algorithms with respect to Convergence Diversity.
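Under this definition, the metric reduces to a best-minus-worst gap over the population. A minimal sketch, with illustrative convergence values rather than the paper's data:

```python
# Convergence Diversity as described above: the gap between the best and the
# worst convergence values produced in the population for one NRP instance.
# The numbers are illustrative only.
convergences = [77.4, 81.2, 69.8, 85.0, 72.5]  # convergence (%) per solution

convergence_diversity = max(convergences) - min(convergences)
print(round(convergence_diversity, 2))  # 15.2
```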

The statistical tests of ANOVA and DMRT with respect to Convergence Diversity are reported in Table 16. There is a significant difference in the mean values of Convergence Diversity across the various optimization algorithms. For the post hoc analysis, the significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test groups the algorithms by mean value into three homogeneous groups among the various optimization algorithms with respect to the mean values of Convergence Diversity.

Table 13: Experimental result with respect to Average Standard Deviation.

Case      MODBCO     M1         M2         M3         M4         M5
Case 1    6.501382   8.551285   9.83255    8.645203   10.57402   9.239032
Case 2    5.275281   8.170864   11.33125   8.472083   10.55361   7.382535
Case 3    5.947749   8.474928   8.898058   8.337811   11.93913   8.033767
Case 4    5.270417   5.310443   10.24612   9.50821    9.092851   8.049586
Case 5    7.035945   9.400025   7.790991   10.28423   13.45243   11.46945
Case 6    7.15779    10.13578   9.109779   9.995431   13.54185   8.020126
Case 7    7.502947   9.069255   9.974609   8.341374   11.50138   10.60612
Case 8    7.861593   9.799198   7.447106   10.05138   11.63378   10.61275
Case 9    9.310626   13.83453   9.437943   12.13188   13.56668   10.54568
Case 10   10.42404   15.14141   11.87235   10.59549   14.15797   12.96181
Case 11   10.25454   14.68924   10.33091   10.30386   11.21071   13.3541
Case 12   10.6846    13.49546   10.75478   15.40791   14.42514   11.3357

[Figure 11: Performance analysis with respect to Average Cost Diversion. Diversion per NRP instance (Cases 1-12) for MODBCO and M1-M5.]

6.4.6. Average Cost Diversion. The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost compared to the other competitor techniques. The computational analysis with respect to Average Cost Diversion is shown in Table 17. For the smaller and medium datasets, 13% and 38% of the instances diverged, out of which many instances still yield the optimum value. The larger datasets show 56% cost divergence. A negative value in the table indicates that the corresponding instances have achieved new optimized values. Figure 11 depicts the comparison of the various optimization algorithms with respect to Average Cost Diversion.
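The paper does not spell out the formula; assuming cost diversion is the percentage deviation of the obtained cost from the best-known cost of an instance (so that a negative value marks a new best, as the paragraph above notes), a minimal sketch:

```python
# Assumed reading of the cost-diversion measure behind Table 17: percentage
# deviation of the obtained cost from the best-known cost of an instance.
# A negative result means the method improved on the best-known cost.
def cost_diversion(obtained_cost: float, best_known: float) -> float:
    return (obtained_cost - best_known) / best_known * 100.0

print(round(cost_diversion(101.0, 100.0), 2))  # 1.0  (1% above the best known)
print(round(cost_diversion(99.0, 100.0), 2))   # -1.0 (new best-known cost)
```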

The statistical tests of ANOVA and DMRT with respect to Average Cost Diversion are reported in Table 18. From the table, it is inferred that there is a significant difference in the mean values of the cost diversion across the various optimization algorithms. The significance value is 0.000, which is less than the critical value; thus the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among the various optimization algorithms with respect to the mean values of the cost diversion.

Table 14: Statistical analysis with respect to Average Standard Deviation.

(a) ANOVA test (source factor: Average Standard Deviation)

Source           Sum of squares   df    Mean square   F         Sig.
Between groups   697.4407         5     139.4881      13.6322   0.000
Within groups    4174.76          408   10.23226
Total            4872.201         413

(b) DMRT test (Duncan test: Average Standard Deviation)

Method   N    Subset for alpha = 0.05
              1        2          3
MODBCO   69   7.4743
M2       69            9.615152
M3       69            9.677027
M5       69            9.729477
M1       69            10.13242
M4       69                       11.9316
Sig.          1.00     0.394      1.00

7. Discussion

The experiments to solve the NP-hard combinatorial Nurse Rostering Problem are conducted with our proposed algorithm, MODBCO. Various existing algorithms are chosen to solve the NRP and are compared with the proposed MODBCO algorithm. The results of our proposed algorithm are compared with the other competitor methods, and the best values are tabulated in Table 6. To evaluate the performance of the proposed algorithm, various performance metrics are considered. Tables 7-18 show the outcomes of our proposed algorithm and of the other existing methods. From Tables 7-18 and Figures 7-11, it is evident that MODBCO attains the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Table 15: Experimental result with respect to Convergence Diversity.

Case      MODBCO   M1      M2      M3      M4      M5
Case 1    30.00    35.25   37.28   32.83   37.94   38.83
Case 2    22.96    27.92   29.18   23.85   27.95   27.62
Case 3    53.05    56.21   64.66   59.29   60.79   65.05
Case 4    29.99    34.03   33.62   30.84   39.46   33.32
Case 5    7.01     7.87    8.33    6.71    8.77    9.29
Case 6    20.89    22.21   26.98   19.29   25.45   24.87
Case 7    43.69    54.66   48.13   55.47   50.24   56.18
Case 8    27.67    30.03   31.83   24.11   30.35   29.69
Case 9    6.89     8.10    8.22    8.04    8.24    9.10
Case 10   37.10    44.29   45.53   38.95   41.91   49.98
Case 11   11.43    11.64   10.81   11.36   11.91   12.15
Case 12   91.13    75.96   81.78   69.90   86.63   85.88

Table 16: Statistical analysis with respect to Convergence Diversity.

(a) ANOVA test (source factor: Convergence Diversity)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   5144.758         5     1028.952      9.287168   0.000
Within groups    45203.48         408   110.7928
Total            50348.24         413

(b) DMRT test (Duncan test: Convergence Diversity)

Method   N    Subset for alpha = 0.05
              1          2          3
M3       69   18.36323
MODBCO   69   18.42029
M1       69              19.68116
M2       69              20.57971
M4       69              20.68116
M5       69                         21.2464
Sig.          0.919      0.096      1.00

Compared with the other existing methods, the mean value of MODBCO is 1.9% closer to the optimum value, and it attains a lesser worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and larger datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods across the various sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% when compared with the other competitor methods.

The proposed system is also tested on the larger sized datasets, where it works markedly better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus MODBCO converges the population towards an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational cost per iteration is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm remains better than exact methods and other heuristic approaches for the NRP in terms of time complexity.
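For reference, the classical Nelder-Mead iteration that the Modified Nelder-Mead Method builds on can be sketched as below; this is the textbook reflection/expansion/contraction/shrink step applied to a toy quadratic, not the paper's modified variant:

```python
# One classical Nelder-Mead iteration: reflect the worst vertex through the
# centroid of the rest, optionally expand, else contract, else shrink the
# whole simplex towards the best vertex. Illustrative sketch only.
def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    simplex.sort(key=f)                      # order: best first, worst last
    best, worst = simplex[0], simplex[-1]
    n = len(simplex) - 1                     # dimension of the search space
    centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
    reflected = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
    if f(reflected) < f(best):               # very good: try expanding further
        expanded = [centroid[i] + gamma * (reflected[i] - centroid[i]) for i in range(n)]
        simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
    elif f(reflected) < f(simplex[-2]):      # good enough: accept reflection
        simplex[-1] = reflected
    else:                                    # poor: contract towards centroid
        contracted = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
        if f(contracted) < f(worst):
            simplex[-1] = contracted
        else:                                # last resort: shrink towards best
            simplex[:] = [[best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                          for p in simplex]

# Usage: minimise f(x, y) = (x - 3)^2 + (y + 1)^2 from a unit simplex.
f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
simplex = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
for _ in range(200):
    nelder_mead_step(simplex, f)
print([round(x, 3) for x in min(simplex, key=f)])
```

Because every candidate move is accepted or rejected by a direct function comparison, the method is a deterministic local search, which is the property MODBCO exploits for its local best search.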

Table 17: Experimental result with respect to Average Cost Diversion.

Case      MODBCO   M1      M2      M3      M4       M5
Case 1    0.00     6.40    1.40    3.20    5.20     1.60
Case 2    -1.00    21.20   0.80    19.60   21.90    0.80
Case 3    -0.40    8.10    1.80    10.60   11.00    0.90
Case 4    -5.67    11.33   -1.67   20.33   11.00    -2.33
Case 5    5.20     24.00   6.80    6.80    40.00    9.80
Case 6    15.80    46.40   25.60   25.60   215.60   39.80
Case 7    13.20    46.00   16.80   16.80   67.80    20.20
Case 8    5.00     49.00   10.67   10.67   43.67    13.67
Case 9    1.20     40.80   5.00    5.00    127.60   20.20
Case 10   18.00    45.40   26.20   26.20   100.00   32.60
Case 11   26.00    70.00   33.60   33.60   335.40   40.60
Case 12   15.67    36.33   20.00   20.00   141.67   29.00

Table 18: Statistical analysis with respect to Average Cost Diversion.

(a) ANOVA test (source factor: Average Cost Diversion)

Source           Sum of squares   df    Mean square   F          Sig.
Between groups   10619.49         5     2123.898      4.919985   0.000
Within groups    176128.67        408   431.6879
Total            186748.16        413

(b) DMRT test (Duncan test: Average Cost Diversion)

Method   N    Subset for alpha = 0.05
              1           2
MODBCO   69   6.202899
M2       69   10.10145
M5       69   14.05797
M3       69   15.31884
M1       69   29.13043
M4       69               149.5217
Sig.          0.573558    1.00

8. Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated on the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets are chosen, and 34 out of the 69 instances obtain the best result. Thus our algorithm contributes a new deterministic search and an effective heuristic approach to solving the NRP, and MODBCO outperforms classical Bee Colony Optimization in solving the NRP while satisfying both hard and soft constraints.

The future work can be projected to:

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions to imitate the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategy;

(d) investigating the application of the Second International Nurse Rostering Competition (INRC2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the Research Projects sponsored by the Major Project Scheme, UGC, India, Reference no. F.No. 2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580-590, 2010.
[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.
[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.
[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760-766, Springer US, 2011.
[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28-39, 2006.
[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151-156, September 2006.
[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687-697, 2008.
[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60-73, 2014.
[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, pp. 72-83, Springer, Berlin, Germany, Taipei, Taiwan, December 2000.
[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582-592, 1998.
[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367-385, 2013.
[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems: a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447-460, 2003.
[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32-38, Amman, Jordan, May 2015.
[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726-739, 2015.
[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145-162, 2013.
[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463-482, 2017.
[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225-251, 2016.
[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183-193, 1996.
[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91-109, 2012.
[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825-833, 2000.
[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393-407, 1998.
[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187-194, Springer, Berlin, Germany, 1998.
[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451-470, 2003.
[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199-214, 2001.
[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1-6, IEEE, Kuala Lumpur, Malaysia, June 2010.
[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178-183, June 2011.
[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761-778, 2004.
[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185-199, 2016.
[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1-6, December 2014.
[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467-473, 2013.
[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.
[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293-297, 2012.
[34] T. D. Seeley, P. K. Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220-229, 2006.
[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401-414, 2008.
[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253-260, Nanjing, China, November 1999.
[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128-1141, 2004.
[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221-236, 2014.
[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.
[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937-958, 2013.
[41] G. González-Rodríguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943-955, 2012.
[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1-42, 1955.


Computational Intelligence and Neuroscience 23

Table 13 Experimental result with respect to Average Standard Deviation

Case MODBCO M1 M2 M3 M4 M5Case 1 6501382 8551285 983255 8645203 1057402 9239032Case 2 5275281 8170864 1133125 8472083 1055361 7382535Case 3 5947749 8474928 8898058 8337811 1193913 8033767Case 4 5270417 5310443 1024612 950821 9092851 8049586Case 5 7035945 9400025 7790991 1028423 1345243 1146945Case 6 715779 1013578 9109779 9995431 1354185 8020126Case 7 7502947 9069255 9974609 8341374 1150138 1060612Case 8 7861593 9799198 7447106 1005138 1163378 1061275Case 9 9310626 1383453 9437943 1213188 1356668 1054568Case 10 1042404 1514141 1187235 1059549 1415797 1296181Case 11 1025454 1468924 1033091 1030386 1121071 133541Case 12 106846 1349546 1075478 1540791 1442514 113357

Aver

age C

ost

Case

7

Case

8

Case

9

Case

10

Case

11

Case

12

Case

6

Case

1

Case

2

Case

3

Case

4

Case

5

NRP Instance

MODBCOM1M2

M3M4M5

000100002000030000400005000060000

Div

ersio

n

Figure 11 Performance analysis with respect to Average CostDiversion

among the various optimization algorithms with respect tothe mean values of the cost diversity

646 Average Cost Diversion The computational analysisbased on cost diversion shows proposed MODBCO yieldsless diversion in cost compared to other competitor tech-niques The computational analysis with respect to AverageCost Diversion is shown in Table 17 For smaller andmediumdataset 13 and 38 of instances got diverged out of whichmany instances yield optimum value The larger dataset got56 of cost divergence A negative value in the table indicatescorresponding instances have achieved new optimized val-ues Figure 11 depicts the comparison of various optimizationalgorithms with respect to Average Cost Diversion

The statistical test of ANOVA and DMRT is observed inTable 18 with respect to Average Cost Diversion From thetable it is inferred that there is a significant difference in themean values of the cost diversion with various optimizationalgorithms The significance value is 0000 which is less thanthe critical value Thus the null hypothesis is rejected TheDMRT test reveals there are two homogenous groups among

Table 14 Statistical analysis with respect to Average StandardDeviation

(a) ANOVA test

Source factor Average Standard DeviationSum ofsquares df Mean square 119865 Sig

Betweengroups 6974407 5 1394881 136322 0000

Withingroups 417476 408 1023226

Total 4872201 413

(b) DMRT test

Duncan test Average Standard Deviation

Method 119873 Subset for alpha = 0051 2 3

MODBCO 69 74743M2 69 9615152M3 69 9677027M5 69 9729477M1 69 1013242M4 69 119316Sig 100 0394 100

the various optimization algorithms with respect to the meanvalues of the cost diversion

7 Discussion

The experiments to solve NP-hard combinatorial NurseRostering Problem are conducted by our proposed algorithmMODBCO Various existing algorithms are chosen to solvethe NRP and compared with the proposed MODBCO algo-rithm The results of our proposed algorithm are comparedwith other competitor methods and the best values are tabu-lated in Table 6 To evaluate the performance of the proposed

24 Computational Intelligence and Neuroscience

Table 15 Experimental result with respect to Convergence Diversity

Case MODBCO M1 M2 M3 M4 M5Case 1 3000 3525 3728 3283 3794 3883Case 2 2296 2792 2918 2385 2795 2762Case 3 5305 5621 6466 5929 6079 6505Case 4 2999 3403 3362 3084 3946 3332Case 5 701 787 833 671 877 929Case 6 2089 2221 2698 1929 2545 2487Case 7 4369 5466 4813 5547 5024 5618Case 8 2767 3003 3183 2411 3035 2969Case 9 689 810 822 804 824 910Case 10 3710 4429 4553 3895 4191 4998Case 11 1143 1164 1081 1136 1191 1215Case 12 9113 7596 8178 6990 8663 8588

Table 16: Statistical analysis with respect to Convergence Diversity

(a) ANOVA test

Source factor: Convergence Diversity
                  Sum of squares   df   Mean square   F          Sig.
Between groups        514.4758      5     102.8952    9.287168   0.000
Within groups        4520.348     408      11.07928
Total                5034.824     413

(b) DMRT test

Duncan test: Convergence Diversity

Method    N    Subset for alpha = 0.05
               1           2          3
M3       69    18.363232
MODBCO   69    18.42029
M1       69                19.68116
M2       69                20.57971
M4       69                20.68116
M5       69                           21.2464
Sig.           0.919       0.096      1.000

algorithm, various performance metrics are considered to evaluate the efficiency of MODBCO. Tables 7–18 show the outcomes of our proposed algorithm and the other existing methods. From Tables 7–18 and Figures 7–11, it is evident that MODBCO attains the best values on the performance metrics compared to the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value of MODBCO is reduced by 1.9% towards the optimum value, and it attained a lesser worst value in addition to the best solution. The datasets are divided by size into smaller, medium, and larger datasets; the standard deviation of MODBCO is reduced by 4.9%, 22.2%, and 41.3%, respectively. The error rate of our proposed approach, compared with the other competitor methods on the various sized datasets, reduces to 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of our proposed algorithm is reduced by 7.7% compared with the other competitor methods.
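The percentage metrics quoted above can be illustrated with a small sketch. The formulas and the sample values are illustrative assumptions (the exact metric definitions are given earlier in the paper, not restated here): error rate is taken as the relative deviation of an obtained solution cost from the best-known cost, and convergence rate as the share of runs reaching the best-known solution.

```python
# Sketch of the percentage metrics discussed above, under the stated
# assumptions about their definitions.
def error_rate(obtained: float, best_known: float) -> float:
    # Percentage deviation of the obtained cost from the best-known cost.
    return 100.0 * (obtained - best_known) / best_known

def convergence_rate(runs_converged: int, total_runs: int) -> float:
    # Percentage of runs that reached the best-known solution.
    return 100.0 * runs_converged / total_runs

# Hypothetical values for one small INRC2010 instance.
print(error_rate(33.0, 30.0))        # 10.0
print(convergence_rate(9, 10))       # 90.0
```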

The proposed system is tested on larger sized datasets, and it performs considerably better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. This method balances exploration and exploitation without any biased nature; thus, MODBCO converges the population towards an optimal solution at the end of each iteration. Both computational and statistical analyses show significant performance over the other competitor algorithms in solving the NRP. The computational complexity is greater due to the use of the local heuristic Nelder-Mead Method; however, the proposed algorithm outperforms exact methods and other heuristic approaches for the NRP in terms of time complexity.
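The exploitation-within-exploration idea described above can be sketched in miniature. The toy below scatters "bees" across a continuous test surface and refines each with SciPy's Nelder-Mead routine; it is purely illustrative, since the actual MODBCO operates on discrete roster encodings with a Modified Nelder-Mead Method and honey-bee decision-making, so the objective function and parameters here are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):
    # Toy multimodal objective standing in for the roster penalty
    # function (the real NRP objective is combinatorial).
    return np.sum(x**2) + 3.0 * np.sum(np.cos(2.0 * np.pi * x))

# Exploration: scatter bees uniformly across the search space.
bees = rng.uniform(-5.0, 5.0, size=(20, 2))

# Exploitation: refine each bee with a Nelder-Mead local search,
# mirroring how MODBCO embeds the local method in the global loop.
refined = [minimize(cost, b, method="Nelder-Mead") for b in bees]
best = min(refined, key=lambda r: r.fun)

print("best cost:", round(best.fun, 3))
```

The division of labor is the point: the random scatter supplies diversification, while the simplex refinement supplies intensification, and keeping the best refined solution each iteration drives the population towards an optimum.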

8 Conclusion

This paper tackles the NRP using a Multiobjective Directed Bee Colony Optimization Algorithm named MODBCO. To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for the global search and the Modified Nelder-Mead Method for the local best search. The proposed algorithm is evaluated using the INRC2010 dataset, and its performance is compared with five other existing methods. To assess the performance of our proposed algorithm, 69 cases of various sized datasets are chosen, and 34 out of the 69 instances obtained the best result. Our algorithm thus contributes a new deterministic search and an effective heuristic approach to solve the NRP. In this way, MODBCO outperforms the classical Bee Colony


Table 17: Experimental result with respect to Average Cost Diversion

Case      MODBCO    M1       M2       M3       M4        M5
Case 1      0.00     6.40     1.40     3.20     5.20      1.60
Case 2     -1.00    21.20     0.80    19.60    21.90      0.80
Case 3     -0.40     8.10     1.80    10.60    11.00      0.90
Case 4     -5.67    11.33    -1.67    20.33    11.00     -2.33
Case 5      5.20    24.00     6.80     6.80    40.00      9.80
Case 6     15.80    46.40    25.60    25.60   215.60     39.80
Case 7     13.20    46.00    16.80    16.80    67.80     20.20
Case 8      5.00    49.00    10.67    10.67    43.67     13.67
Case 9      1.20    40.80     5.00     5.00   127.60     20.20
Case 10    18.00    45.40    26.20    26.20   100.00     32.60
Case 11    26.00    70.00    33.60    33.60   335.40     40.60
Case 12    15.67    36.33    20.00    20.00   141.67     29.00

Table 18: Statistical analysis with respect to Average Cost Diversion

(a) ANOVA test

Source factor: Average Cost Diversion
                  Sum of squares   df   Mean square   F          Sig.
Between groups        1061949       5     212389.8    4.919985   0.000
Within groups        17612867     408      43168.79
Total                18674816     413

(b) DMRT test

Duncan test: Average Cost Diversion

Method    N    Subset for alpha = 0.05
               1            2
MODBCO   69     6.202899
M2       69    10.10145
M5       69    14.05797
M3       69    15.31884
M1       69    29.13043
M4       69                 149.5217
Sig.           0.573558     1.000

Optimization in solving the NRP while satisfying both hard and soft constraints.

Future work can be projected to:

(a) adapting the proposed MODBCO to various scheduling and timetabling problems;

(b) exploring infeasible solutions as a route towards the optimal solution;

(c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies;

(d) investigating the application to the Second International Nurse Rostering Competition (INRC 2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is a part of the research projects sponsored by the Major Project Scheme, UGC, India, Reference no. FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE), dated March 2015. The authors would like to express their thanks for the financial support offered by the sponsoring agencies.

References

[1] M. Crepinsek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stutzle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorovic, P. Lucic, G. Markovic, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.


[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Belien, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Ozcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. González-Rodríguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.

Submit your manuscripts athttpswwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 24: Directed Bee Colony Optimization Algorithm to Solve the ...downloads.hindawi.com/journals/cin/2017/6563498.pdfDirected Bee Colony Optimization Algorithm to Solve the Nurse Rostering

24 Computational Intelligence and Neuroscience

Table 15 Experimental result with respect to Convergence Diversity

Case MODBCO M1 M2 M3 M4 M5Case 1 3000 3525 3728 3283 3794 3883Case 2 2296 2792 2918 2385 2795 2762Case 3 5305 5621 6466 5929 6079 6505Case 4 2999 3403 3362 3084 3946 3332Case 5 701 787 833 671 877 929Case 6 2089 2221 2698 1929 2545 2487Case 7 4369 5466 4813 5547 5024 5618Case 8 2767 3003 3183 2411 3035 2969Case 9 689 810 822 804 824 910Case 10 3710 4429 4553 3895 4191 4998Case 11 1143 1164 1081 1136 1191 1215Case 12 9113 7596 8178 6990 8663 8588

Table 16 Statistical analysis with respect to Convergence Diversity

(a) ANOVA test

Source factor Convergence DiversitySum ofsquares df Mean square 119865 Sig

Betweengroups 5144758 5 1028952 9287168 0000

Withingroups 4520348 408 1107928

Total 5034824 413

(b) DMRT test

Duncan test Average Standard Deviation

Method 119873 Subset for alpha = 0051 2 3

M3 69 18363232MODBCO 69 1842029M1 69 1968116M2 69 2057971M4 69 2068116M5 69 212464Sig 0919 0096 1

algorithm various performance metrics are considered toevaluate the efficiency of the MODBCO Tables 7ndash18 showthe outcome of our proposed algorithm and other existingmethods performance From Tables 7ndash18 and Figures 7ndash11it is evidently shown that MODBCO has more ability toattain the best value on performance metrics compared tocompetitor algorithms which use the INRC2010

Compared with other existing methods the mean valueof MODBCO is 19 reduced towards optimum value withother competitor methods and it attained lesser worst valuein addition to the best solution The datasets are dividedbased on their size as smaller medium and large datasetthe standard deviation of MODBCO is reduced to 49

222 and 413 respectivelyThe error rate of our proposedapproach when compared with other competitor methodswith various sized datasets reduces to 106 for the smallerdataset 945 for the medium datasets and 704 for thelarger datasets The convergence rate of MODBCO hasachieved 90 for the smaller dataset 82 for the mediumdataset and 7737 for the larger dataset The error rate ofour proposed algorithm is reduced by 77 when comparedwith other competitor methods

Theproposed system is tested on larger sized datasets andit is working astoundingly better than the other techniquesIncorporation of Modified Nelder-Mead in Directed BeeColony Optimization Algorithm increases the exploitationstrategy within the given exploration search space Thismethod balances the exploration and exploitation withoutany biased natureThusMODBCO converges the populationtowards an optimal solution at the end of each iteration Bothcomputational and statistical analyses show the significantperformance over other competitor algorithms in solving theNRP The computational complexity is greater due to theuse of local heuristic Nelder-Mead Method However theproposed algorithm is better than exact methods and otherheuristic approaches in solving the NRP in terms of timecomplexity

8 Conclusion

This paper tackles solving the NRP using MultiobjectiveDirected Bee Colony Optimization Algorithm namedMOD-BCO To solve the NRP effectively Directed Bee Colonyalgorithm is chosen for global search and Modified Nelder-MeadMethod for local best searchTheproposed algorithm isevaluated using the INRC2010 dataset and the performanceof the proposed algorithm is compared with other fiveexisting methods To assess the performance of our proposedalgorithm 69 different cases of various sized datasets arechosen and 34 out of 69 instances got the best resultThus our algorithm contributes with a new deterministicsearch and effective heuristic approach to solve the NRPThus MODBCO outperforms with classical Bee Colony

Computational Intelligence and Neuroscience 25

Table 17 Experimental result with respect to Average Cost Diversion

Case MODBCO M1 M2 M3 M4 M5Case 1 000 640 140 320 520 160Case 2 minus100 2120 080 1960 2190 080Case 3 minus040 810 180 1060 1100 090Case 4 minus567 1133 minus167 2033 1100 minus233Case 5 520 2400 680 680 4000 980Case 6 1580 4640 2560 2560 21560 3980Case 7 1320 4600 1680 1680 6780 2020Case 8 500 4900 1067 1067 4367 1367Case 9 120 4080 500 500 12760 2020Case 10 1800 4540 2620 2620 10000 3260Case 11 2600 7000 3360 3360 33540 4060Case 12 1567 3633 2000 2000 14167 2900

Table 18 Statistical analysis with respect to Average Cost Diversion

(a) ANOVA test

Source factor Average Cost DiversionSum ofsquares df Mean square 119865 Sig

Betweengroups 1061949 5 2123898 4919985 0000

Withingroups 17612867 408 4316879

Total 18674816 413

(b) DMRT test

Duncan test Average Cost Diversion

Method 119873 Subset for alpha = 0051 2

MODBCO 69 6202899M2 69 1010145M5 69 1405797M3 69 1531884M1 69 2913043M4 69 1495217Sig 0573558 1

Optimization for solving NRP by satisfying both hard andsoft constraints

The future work can be projected to

(a) adapting proposed MODBCO for various schedulingand timetabling problems

(b) exploring unfeasible solution to imitate optimal solu-tion

(c) further tuning the parameters of the proposed algo-rithm andmeasuring the exploitation and explorationstrategy

(d) investigating for applying Second International INRC2014 datasets

Conflicts of Interest

The authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

This work is a part of the Research Projects sponsoredby the Major Project Scheme UGC India Referencenos FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE) dated March 2015 The authors would like toexpress their thanks for their financial support offered by theSponsored Agencies

References

[1] M Crepinsek S-H Liu and M Mernik ldquoExploration andexploitation in evolutionary algorithms a surveyrdquo ACM Com-puting Surveys vol 45 no 3 article 35 2013

[2] R Bai E K BurkeG Kendall J Li andBMcCollum ldquoAhybridevolutionary approach to the nurse rostering problemrdquo IEEETransactions on Evolutionary Computation vol 14 no 4 pp580ndash590 2010

[3] M Wooldridge An Introduction to Multiagent Systems JohnWiley amp Sons 2009

[4] E Goldberg David Genetic Algorithm in Search Optimizationand Machine Learning vol 3 Pearson Education 1988

[5] J Kennedy ldquoParticle swarm optimizationrdquo in Encyclopedia ofMachine Learning pp 760ndash766 Springer US 2011

[6] M Dorigo M Birattari and T Stutzle ldquoAnt colony optimiza-tionrdquo IEEE Computational Intelligence Magazine vol 1 no 4pp 28ndash39 2006

[7] D Teodorovic P Lucic G Markovic and M DellrsquoOrco ldquoBeecolony optimization principles and applicationsrdquo in Proceed-ings of the 8th Seminar on Neural Network Applications inElectrical Engineering pp 151ndash156 September 2006

[8] D Karaboga and B Basturk ldquoOn the performance of artificialbee colony (ABC) algorithmrdquo Applied Soft Computing vol 8no 1 pp 687ndash697 2008

[9] R Kumar ldquoDirected bee colony optimization algorithmrdquoSwarm and Evolutionary Computation vol 17 pp 60ndash73 2014

26 Computational Intelligence and Neuroscience

[10] T Osogami and H Imai ldquoClassification of various neigh-borhood operations for the nurse scheduling problemrdquo inProceedings of the International Symposium on Algorithmsand Computation Taipei Taiwan December 2000 pp 72ndash83Springer Berlin Germany 2000

[11] H H Millar and M Kiragu ldquoCyclic and non-cyclic schedulingof 12 h shift nurses by network programmingrdquoEuropean Journalof Operational Research vol 104 no 3 pp 582ndash592 1998

[12] J Van den Bergh J Belien P De Bruecker E Demeulemeesterand L De Boeck ldquoPersonnel scheduling a literature reviewrdquoEuropean Journal of Operational Research vol 226 no 3 pp367ndash385 2013

[13] B Cheang H Li A Lim and B Rodrigues ldquoNurse rosteringproblemsmdasha bibliographic surveyrdquo European Journal of Opera-tional Research vol 151 no 3 pp 447ndash460 2003

[14] L B Asaju M A Awadallah M A Al-Betar and A T KhaderldquoSolving nurse rostering problem using artificial bee colonyalgorithmrdquo in Proceedings of the 7th International Conference onInformation Technology (ICIT rsquo15) pp 32ndash38 Amman JordanMay 2015

[15] M A Awadallah A L Bolaji and M A Al-Betar ldquoA hybridartificial bee colony for a nurse rostering problemrdquo Applied SoftComputing vol 35 pp 726ndash739 2015

[16] M A Awadallah A T Khader M A Al-Betar and A L BolajildquoGlobal best harmony search with a new pitch adjustmentdesigned for nurse rosteringrdquo Journal of King Saud University-Computer and Information Sciences vol 25 no 2 pp 145ndash1622013

[17] M A Awadallah M A Al-Betar A T Khader A L Bolajiand M Alkoffash ldquoHybridization of harmony search withhill climbing for highly constrained nurse rostering problemrdquoNeural Computing and Applications vol 28 no 3 pp 463ndash4822017

[18] H G Santos T A M Toffolo R A M Gomes and SRibas ldquoInteger programming techniques for the nurse rosteringproblemrdquoAnnals of Operations Research vol 239 no 1 pp 225ndash251 2016

[19] I Berrada J A Ferland and P Michelon ldquoA multi-objectiveapproach to nurse scheduling with both hard and soft con-straintsrdquo Socio-Economic Planning Sciences vol 30 no 3 pp183ndash193 1996

[20] E K Burke J Li and R Qu ldquoA Pareto-based search methodol-ogy for multi-objective nurse schedulingrdquo Annals of OperationsResearch vol 196 pp 91ndash109 2012

[21] K A Dowsland and J MThompson ldquoSolving a nurse schedul-ing problemwith knapsacks networks and tabu searchrdquo Journalof the Operational Research Society vol 51 no 7 pp 825ndash8332000

[22] K A Dowsland ldquoNurse scheduling with tabu search andstrategic oscillationrdquo European Journal of Operational Researchvol 106 no 2-3 pp 393ndash407 1998

[23] E Burke P De Causmaecker and G VandenBerghe ldquoA hybridtabu search algorithm for the nurse rostering problemrdquo in Pro-ceedings of the Asia-Pacific Conference on Simulated Evolutionand Learning vol 1585 pp 187ndash194 Springer Berlin Germany1998

[24] E K Burke G Kendall and E Soubeiga ldquoA tabu-search hyper-heuristic for timetabling and rosteringrdquo Journal of Heuristicsvol 9 no 6 pp 451ndash470 2003

[25] E Burke P Cowling P De Causmaecker and G V BergheldquoA memetic approach to the nurse rostering problemrdquo AppliedIntelligence vol 15 no 3 pp 199ndash214 2001

[26] M Hadwan and M Ayob ldquoA constructive shift patternsapproach with simulated annealing for nurse rostering prob-lemrdquo in Proceedings of the International Symposium on Infor-mation Technology (ITSim rsquo10) pp 1ndash6 IEEE Kuala LumpurMalaysia June 2010

[27] E Sharif M Ayob andM Hadwan ldquoHybridization of heuristicapproach with variable neighborhood descent search to solvenurse Rostering problem at Universiti Kebangsaan MalaysiaMedical Centre (UKMMC)rdquo in Proceedings of the 3rd Confer-ence on Data Mining and Optimization (DMO rsquo11) pp 178ndash183June 2011

[28] U Aickelin and K A Dowsland ldquoAn indirect genetic algorithmfor a nurse-scheduling problemrdquo Computers and OperationsResearch vol 31 no 5 pp 761ndash778 2004

[29] S Asta E Ozcan and T Curtois ldquoA tensor based hyper-heuristic for nurse rosteringrdquoKnowledge-Based Systems vol 98pp 185ndash199 2016

[30] K Anwar M A Awadallah A T Khader and M A Al-BetarldquoHyper-heuristic approach for solving nurse rostering prob-lemrdquo in Proceedings of the IEEE Symposium on ComputationalIntelligence in Ensemble Learning (CIEL rsquo14) pp 1ndash6 December2014

[31] N Todorovic and S Petrovic ldquoBee colony optimization algo-rithm for nurse rosteringrdquo IEEE Transactions on Systems Manand Cybernetics Systems vol 43 no 2 pp 467ndash473 2013

[32] X-S Yang Nature-Inspired Meta-Heuristic Algorithms LuniverPress 2010

[33] S Goyal ldquoThe applications survey bee colonyrdquo IRACST-Engineering Science and Technology vol 2 no 2 pp 293ndash2972012

[34] T D Seeley P Kirk Visscher and K M Passino ldquoGroupdecision-making in honey bee swarmsrdquoAmerican Scientist vol94 no 3 pp 220ndash229 2006

[35] KM Passino T D Seeley and P K Visscher ldquoSwarm cognitionin honey beesrdquo Behavioral Ecology and Sociobiology vol 62 no3 pp 401ndash414 2008

[36] W Jiao and Z Shi ldquoA dynamic architecture for multi-agentsystemsrdquo in Proceedings of the Technology of Object-OrientedLanguages and Systems (TOOLS 31 rsquo99) pp 253ndash260 NanjingChina November 1999

[37] W Zhong J Liu M Xue and L Jiao ldquoA multi-agent geneticalgorithm for global numerical optimizationrdquo IEEE Transac-tions on Systems Man and Cybernetics Part B Cybernetics vol34 no 2 pp 1128ndash1141 2004

[38] S Haspeslagh P De Causmaecker A Schaerf and M StoslashlevikldquoThe first international nurse rostering competition 2010rdquoAnnals of Operations Research vol 218 no 1 pp 221ndash236 2014

[39] J Demsar ldquoStatistical comparisons of classifiers over multipledata setsrdquo Journal of Machine Learning Research vol 7 pp 1ndash302006

[40] A Costa F A Cappadonna and S Fichera ldquoA dual encoding-basedmeta-heuristic algorithm for solving a constrained hybridflow shop scheduling problemrdquo Computers and Industrial Engi-neering vol 64 no 4 pp 937ndash958 2013

[41] G Gonzalez-Rodrıguez A Colubi and M A Gil ldquoFuzzy datatreated as functional data a one-way ANOVA test approachrdquoComputational Statistics and Data Analysis vol 56 no 4 pp943ndash955 2012

[42] D B Duncan ldquoMultiple range and multiple 119865 testsrdquo Biometricsvol 11 pp 1ndash42 1955

Submit your manuscripts athttpswwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 201

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 25: Directed Bee Colony Optimization Algorithm to Solve the ...downloads.hindawi.com/journals/cin/2017/6563498.pdfDirected Bee Colony Optimization Algorithm to Solve the Nurse Rostering

Computational Intelligence and Neuroscience 25

Table 17 Experimental result with respect to Average Cost Diversion

Case MODBCO M1 M2 M3 M4 M5Case 1 000 640 140 320 520 160Case 2 minus100 2120 080 1960 2190 080Case 3 minus040 810 180 1060 1100 090Case 4 minus567 1133 minus167 2033 1100 minus233Case 5 520 2400 680 680 4000 980Case 6 1580 4640 2560 2560 21560 3980Case 7 1320 4600 1680 1680 6780 2020Case 8 500 4900 1067 1067 4367 1367Case 9 120 4080 500 500 12760 2020Case 10 1800 4540 2620 2620 10000 3260Case 11 2600 7000 3360 3360 33540 4060Case 12 1567 3633 2000 2000 14167 2900

Table 18 Statistical analysis with respect to Average Cost Diversion

(a) ANOVA test

Source factor Average Cost DiversionSum ofsquares df Mean square 119865 Sig

Betweengroups 1061949 5 2123898 4919985 0000

Withingroups 17612867 408 4316879

Total 18674816 413

(b) DMRT test

Duncan test Average Cost Diversion

Method 119873 Subset for alpha = 0051 2

MODBCO 69 6202899M2 69 1010145M5 69 1405797M3 69 1531884M1 69 2913043M4 69 1495217Sig 0573558 1

Optimization for solving NRP by satisfying both hard andsoft constraints

The future work can be projected to

(a) adapting proposed MODBCO for various schedulingand timetabling problems

(b) exploring unfeasible solution to imitate optimal solu-tion

(c) further tuning the parameters of the proposed algo-rithm andmeasuring the exploitation and explorationstrategy

(d) investigating for applying Second International INRC2014 datasets

Conflicts of Interest

The authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

This work is a part of the Research Projects sponsoredby the Major Project Scheme UGC India Referencenos FNo2014-15NFO-2014-15-OBC-PON-3843(SA-IIIWEBSITE) dated March 2015 The authors would like toexpress their thanks for their financial support offered by theSponsored Agencies

References

[1] M. Črepinšek, S.-H. Liu, and M. Mernik, "Exploration and exploitation in evolutionary algorithms: a survey," ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.

[2] R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, "A hybrid evolutionary approach to the nurse rostering problem," IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.

[3] M. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, 2009.

[4] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, vol. 3, Pearson Education, 1988.

[5] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.

[6] M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.

[7] D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, "Bee colony optimization: principles and applications," in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.

[8] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.

[9] R. Kumar, "Directed bee colony optimization algorithm," Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.

[10] T. Osogami and H. Imai, "Classification of various neighborhood operations for the nurse scheduling problem," in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, December 2000, pp. 72–83, Springer, Berlin, Germany, 2000.

[11] H. H. Millar and M. Kiragu, "Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming," European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.

[12] J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, "Personnel scheduling: a literature review," European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.

[13] B. Cheang, H. Li, A. Lim, and B. Rodrigues, "Nurse rostering problems—a bibliographic survey," European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.

[14] L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, "Solving nurse rostering problem using artificial bee colony algorithm," in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.

[15] M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, "A hybrid artificial bee colony for a nurse rostering problem," Applied Soft Computing, vol. 35, pp. 726–739, 2015.

[16] M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, "Global best harmony search with a new pitch adjustment designed for nurse rostering," Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.

[17] M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, "Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem," Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.

[18] H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, "Integer programming techniques for the nurse rostering problem," Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.

[19] I. Berrada, J. A. Ferland, and P. Michelon, "A multi-objective approach to nurse scheduling with both hard and soft constraints," Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.

[20] E. K. Burke, J. Li, and R. Qu, "A Pareto-based search methodology for multi-objective nurse scheduling," Annals of Operations Research, vol. 196, pp. 91–109, 2012.

[21] K. A. Dowsland and J. M. Thompson, "Solving a nurse scheduling problem with knapsacks, networks and tabu search," Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.

[22] K. A. Dowsland, "Nurse scheduling with tabu search and strategic oscillation," European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.

[23] E. Burke, P. De Causmaecker, and G. Vanden Berghe, "A hybrid tabu search algorithm for the nurse rostering problem," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.

[24] E. K. Burke, G. Kendall, and E. Soubeiga, "A tabu-search hyperheuristic for timetabling and rostering," Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.

[25] E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, "A memetic approach to the nurse rostering problem," Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.

[26] M. Hadwan and M. Ayob, "A constructive shift patterns approach with simulated annealing for nurse rostering problem," in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.

[27] E. Sharif, M. Ayob, and M. Hadwan, "Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC)," in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.

[28] U. Aickelin and K. A. Dowsland, "An indirect genetic algorithm for a nurse-scheduling problem," Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.

[29] S. Asta, E. Özcan, and T. Curtois, "A tensor based hyper-heuristic for nurse rostering," Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.

[30] K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, "Hyper-heuristic approach for solving nurse rostering problem," in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.

[31] N. Todorovic and S. Petrovic, "Bee colony optimization algorithm for nurse rostering," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.

[32] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2010.

[33] S. Goyal, "The applications survey: bee colony," IRACST - Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.

[34] T. D. Seeley, P. Kirk Visscher, and K. M. Passino, "Group decision-making in honey bee swarms," American Scientist, vol. 94, no. 3, pp. 220–229, 2006.

[35] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[36] W. Jiao and Z. Shi, "A dynamic architecture for multi-agent systems," in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.

[37] W. Zhong, J. Liu, M. Xue, and L. Jiao, "A multi-agent genetic algorithm for global numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.

[38] S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, "The first international nurse rostering competition 2010," Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.

[39] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.

[40] A. Costa, F. A. Cappadonna, and S. Fichera, "A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem," Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.

[41] G. González-Rodríguez, A. Colubi, and M. A. Gil, "Fuzzy data treated as functional data: a one-way ANOVA test approach," Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.

[42] D. B. Duncan, "Multiple range and multiple F tests," Biometrics, vol. 11, pp. 1–42, 1955.
