
ELSEVIER Int. J. Production Economics 43 (1996) 127-137


Batching in production planning for flexible manufacturing systems

Bharatendu Srivastava a,*, Wun-Hwa Chen b,1

a Department of Management, Marquette University, P.O. Box 1881, Milwaukee, Wisconsin 53201-1881, USA
b Department of Business Administration, National Taiwan University, Taipei, Taiwan

* Corresponding author. 1 This work was partially done while this author was at Washington State University.

Accepted for publication 19 October 1995

Abstract

Generally, production planning in flexible manufacturing systems is hierarchically grouped into two subproblems: batching and loading. These two subproblems can be solved either sequentially or simultaneously to generate a feasible production plan. This paper focuses on the batching problem, which partitions the given production order of part types into batches that can be processed with the limited production resources, such as the capacity of the tool magazines, pallets, fixtures and the available machine time. A 0-1 integer program is formulated to address the batching problem and a simulated annealing algorithm is proposed for solving it. A systematic computational test is conducted to evaluate the performance of the proposed algorithm. The results show that the simulated annealing algorithm can provide high-quality solutions in a reasonable amount of time for practical size problems.

Keywords: Production planning; Flexible manufacturing systems

1. Introduction

A flexible manufacturing system (FMS) can be described as an integrated manufacturing system consisting of work stations (such as numerically controlled (NC), computer numerically controlled (CNC) and digital numerically controlled (DNC) machine tools) linked by an automated material handling system, such as automatic guided vehicles (AGVs) or robots. It is capable of producing a variety of part types simultaneously and can be used universally as a production system. However, currently most FMSs are being used in metal cutting, forming, and assembly operations for small to mid-volume batch production in the automobile, electrical equipment, machinery and aerospace industries [1]. Generally, an FMS has the flexibility associated with job shops and approaches the high productivity identified with transfer lines. A growing number of new FMSs consist of versatile machines, which are generally very reliable and can be run unattended [2]. Although an FMS offers many strategic and operational benefits over conventional manufacturing systems (such as flexibility in dealing with machine and tool breakdowns, and with changes in schedule, product mix, and volume, by rerouting the parts through different paths in the system), its efficient management requires the solution of fairly complex production planning problems. In this paper, we focus on batching, a subproblem in production planning for FMSs.

The purpose of production planning is to develop a cost effective and operative production plan over the planning horizon. Decisions regarding the planning problem have to be made before the start of actual production, and consist of organizing the limited production resources (such as machine tools, cutting tools, pallets, fixtures, etc.) efficiently, so as to satisfy a given master schedule. Given the size and complexity of most realistic production systems and the associated computational difficulties, it is highly unlikely that a single-level analytical model can solve the problem. Consequently, several hierarchical approaches have been proposed [2-6] which decompose the original problem into subproblems that are computationally simpler to solve. Decisions made at higher levels restrict the range within which lower level decisions are made.

Based on the classical planning and scheduling methodology, Kusiak's [4] approach decomposes the problem into four levels. At the highest level is aggregate planning, where the products are grouped into product types; this is similar to the model proposed in [7]. Resource grouping is carried out next, where the parts are grouped into part families and the machines into machine cells. Batching and loading form the next level, with scheduling at the lowest. On the other hand, Van Looveren, Gelders and Van Wassenhove [6] decompose the problem into three levels of planning, i.e., strategic, tactical, and operational. At the strategic level, long term decisions are made concerning the families of part types to be produced. These are also called the screening and selection problems. Batching and loading are carried out at the tactical level, whereas detailed sequencing, release and dispatching of the parts are dealt with at the operational level. Jaikumar and Van Wassenhove [2] proposed a three level hierarchical scheme. At the top level, selection of part types and their associated production quantities is carried out for processing in the next production period (such as a shift). At the second level, part types that share common tools are grouped into part families and scheduled for processing on various machine types so that machine utilization is maximized. Lastly, detailed releasing and dispatching supporting the decisions made earlier is carried out at the third level. Buzacott's [3] hierarchical model consists of pre-release planning at the top level, with part release and operational control for part routing at the next two levels, respectively. One of the most well-known and detailed decompositions of the planning problem in FMSs has been suggested by Stecke [5], where the following five subproblems were outlined: (i) part type selection: determine a subset from the set of part types having production requirements, for immediate and simultaneous processing; (ii) machine grouping: partition the set of machines into machine groups so that each machine in a particular group can perform the same set of operations; (iii) production ratio: determine the relative ratios at which the part types selected will be produced; (iv) resource allocation: allocate the limited number of pallets and fixtures of each fixture type to the selected part types; (v) loading: allocate operations and required tools for the selected part types among the machine groups subject to technological and capacity constraints of the FMS. Although the decomposition schemes described above are not identical, they share many similarities, and batching happens to be one of them. In this paper, our focus will be on the following batching problem:

Given a production order of part types (such as monthly or weekly) with known tool and processing time requirements, determine a subset of part types for immediate and simultaneous processing over the upcoming period of the planning horizon (such as a day or the next shift), according to an appropriate objective and within the limited capacity of the tool magazine and the available processing time of each machine type.

Once a batch of part types and their associated production quantities have been identified for processing, the next step in the production planning process is to allocate the tool types and part operations among the machines in some optimal fashion. This problem is called the loading problem. Generally, a batch of selected part types is processed without any preemption and, at the completion of the batch, the FMS is reconfigured for the next batch of part types. The batching and loading problems cannot be completely separated; they are generally linked through a set of constraints (such as the available capacity and tool magazines of the machines) and can therefore be solved sequentially, simultaneously or in an iterative manner to generate an operative and acceptable production plan for a given production order. In the past, several researchers have addressed these two problems or variations of them [8-14]. For the batching problem, Whitney and Gaul [14] proposed a greedy heuristic which sequentially selects the part type with the highest estimated probability of success into the batch. Hwang [8] suggested that only two subproblems, part selection and loading, need be considered in production planning for an FMS. His model assumed all machines to be of a general purpose type (the functionality of a machine being determined by the tools assigned to it) and considers only the tool magazine limitation. This model was subsequently extended by incorporating the processing time limitation of the machines, and a Lagrangian based heuristic was developed to solve the problem [9]. Rajagopalan [10] integrated part type selection, production ratio and loading into a single mixed integer programming model and solved it using two heuristics with different optimizing criteria. Stecke and Kim [13] proposed a flexible approach to part type selection which allows dynamic changes in the production requirements, such as the introduction of new part and tool types, once the production requirement for some part types has been completed. They used simulation to show the superiority of this approach over strict batching. Liang and Dutta [15] considered part selection, loading and tool configuration simultaneously and proposed a sequential bicriteria approach; however, for large size problems they proposed Lagrangian relaxation.

In this paper, we propose a simulated annealing algorithm for solving the batching problem described above. A systematic computational test is conducted to evaluate the performance of the proposed algorithm, and the results show its efficiency. The paper is organized as follows. In Section 2, we present the mathematical formulation of the problem. Section 3 describes the implementation of the simulated annealing algorithm, with Section 4 describing the computational experience. Finally, some concluding remarks are given in Section 5.

2. Mathematical formulation of the problem

In the batching problem under consideration, each batch selected for processing may contain several part types, and must be processed completely for the entire production order quantity of the selected part types. After the completion of a batch, the machines are set up for the next batch and the tools that are no longer needed are replaced with the tools required for the next batch. Major setups are thus carried out between batches. There may be minor setups within a batch; however, the ratio of setup times to processing times is small [2], and such setups can be included in the processing time. In many FMSs, tools are often automatically transported to and from the centralized or local storage area by a robot carrier. This operation may be carried out in parallel to reduce the setup times between batches. Automatic tool hauling can also reduce the variability of setup times between batches.

A machine type consists of one or more identical or equivalent units with limited capacity in its tool magazine. Machines within a machine type are identical in their ability to accept different tool types, and are identically tooled so that they can perform the same set of operations. Thus, within a machine type several different routings are possible. It is important to mention that identical machines can be partitioned into several different machine types and then loaded with different sets of tools. As pointed out by Jaikumar [2], individual machines in FMSs are becoming very versatile and most part operations can now be accomplished by one or two machine types, thereby eliminating the need to consider complex routings. This is especially true where an FMS is organized as an integrated system of independent cells. A tool type may be capable of performing several operations on a particular machine type, and many tools may be needed to process a part type. There is no duplication of a tool type on a machine, i.e., different part types share the same tool if they are routed to the same machine type. The size of a tool is measured by the number of tool slots it occupies. Further, the savings in tool slots resulting from the overlapping of tools on machines are considered to be insignificant. Ignoring tool overlap provides for redundancy, which can be used to load multiple tool copies to balance workload among the machines at the scheduling stage [9]. Consequently, in formulating the batching problem, only one machine of each type needs to be considered for tool assignment, whereas the total processing time available depends upon the number of units of each machine type, likely machine breakdowns, time for preventive maintenance, etc. The mathematical formulation of the problem uses the following notation:

n      the number of part types,
m      the number of machine types,
K      the number of tool types,

w_i    the weight associated with part type i,
g_k    the number of tool slots required by tool type k,
t_ij   the required processing time for the set of operations for the entire production order quantity of part type i on machine type j,
b_j    the total processing time available on machine type j,
h_j    the tool magazine capacity of machine type j,
K_ij   the set of tool types required for part type i on machine type j,
x_i    a 0-1 decision variable that is 1 if part type i is selected in the current batch, and 0 otherwise,
y_kj   a 0-1 decision variable that is 1 if tool type k is assigned to machine type j, and 0 otherwise.

The batching problem can then be formulated as the following 0-1 integer program:

P:    v(P) = \max \sum_{i=1}^{n} w_i x_i                                          (1)

s.t.  \sum_{i=1}^{n} t_{ij} x_i \le b_j,        j = 1, \ldots, m,                 (2)

      \sum_{k=1}^{K} g_k y_{kj} \le h_j,        j = 1, \ldots, m,                 (3)

      x_i \le y_{kj},        i = 1, \ldots, n,  j = 1, \ldots, m,  k \in K_{ij},  (4)

      x_i \in \{0, 1\},      i = 1, \ldots, n,                                    (5)

      y_{kj} \in \{0, 1\},   k = 1, \ldots, K,  j = 1, \ldots, m.                 (6)

The objective function in a problem like this can take several forms, such as maximizing the production rate or profit, or minimizing the number of tool changeovers or the number of batches. In addition, the objective function can be constructed to account for the due dates of the part types, which is one of the most important criteria. Hence, w_i in the objective function (1) denotes the weight associated with part type i, and can take on many different forms, some of which are described below.

(a) If w_i = 1, the model will select as many part types as possible in a batch. For a given production order of n part types, this will lead to nonincreasing batch sizes (in terms of the number of part types selected). However, as pointed out in [16], this does not lead to the minimization of the number of batches.

(b) Minimizing makespan is an important scheduling criterion, especially where the total production time for the entire production order of n part types is important. Assuming the setup time between batches to be constant, minimizing the number of batches or the number of major setups (tool changeovers) is likely to work towards reducing the makespan. The following definition due to Stecke and Kim [16] can be utilized for this purpose:

w_i = \sum_{k \in K_{i j_0}} g_k,   where   j_0 = \arg\max_{j} \Big\{ \sum_{i=1}^{n} \sum_{k \in K_{ij}} g_k / h_j \Big\}.

Here, part types requiring the largest number of tool slots on the machine type with the largest relative tool-slot requirement are selected first; a small code sketch of this computation is given after option (e) below.

(c) Part types may have a due date assigned to them either by the customer or by management based on the assembly or master schedule. In a just-in-time (JIT) manufacturing environment both earliness and tardiness are discouraged. The objective of due date management in JIT is to complete the work on time. Several performance measures related to due dates are used, such as maximum tardiness, mean tardiness, fraction of late orders, etc. If due dates are important, then batching should be carried out in such a way that most of the due date requirements are satisfied. Within the scope of P, one of the simplest ways to account for due dates is to set w_i = f(due_date_i), for example, w_i = 1/due_date_i.

(d) A balance between processing times and due dates can be achieved by suitably defining w_i = f(\sum_{j=1}^{m} t_{ij}, due_date_i).


(e) If w_i is the profit or benefit associated with part type i, then we are maximizing the total profit or benefit.
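As an illustration of option (b) above, the weights could be computed as in the following sketch; this is not from the paper, and the data structures are assumed to follow the notation of Section 2 (K_req[i][j] holds the set K_ij).

```python
# Sketch (assumed data structures): weights of option (b).
# K_req[i][j] is the set K_ij of tool types required by part type i on machine type j,
# g[k] is the number of slots taken by tool type k, h[j] is the magazine capacity of j.
def makespan_oriented_weights(K_req, g, h):
    n, m = len(K_req), len(h)
    # j0 = machine type with the largest total tool-slot requirement relative
    # to its magazine capacity (the bottleneck machine type).
    def relative_load(j):
        return sum(g[k] for i in range(n) for k in K_req[i][j]) / h[j]
    j0 = max(range(m), key=relative_load)
    # w_i = number of tool slots part type i needs on machine type j0.
    return [sum(g[k] for k in K_req[i][j0]) for i in range(n)]
```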

Referring to formulation P, constraint set (2) specifies that the required processing time cannot be greater than the available machine time. It is important to note that the available processing time on a machine type need not be the same for all machine types or equal to the time period (for example, a shift), for it depends upon the number of machines in that group, the time for scheduled maintenance, machine or tool breakdowns, etc. Constraint set (3) is the tool magazine capacity limitation. Constraint set (4) states that a part type can only be produced on a machine type equipped with the necessary tool types. Finally, constraint sets (5) and (6) state the 0-1 integrality requirements. A similar model was suggested by Khade and Ignizio [17], where a Lagrangian based heuristic was developed to solve the problem.
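For small and moderate instances, formulation P can also be handed to an off-the-shelf 0-1 solver, which is useful for benchmarking heuristic solutions. The sketch below is not part of the paper; it builds P with the open-source PuLP modeller, assuming the data structures follow the notation of Section 2 (K_req[i][j] holds the set K_ij).

```python
# Sketch only: formulation P in PuLP (not the authors' solution method).
import pulp

def build_batching_model(w, t, b, g, h, K_req):
    n, m, K = len(w), len(b), len(g)
    prob = pulp.LpProblem("FMS_batching", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", range(n), cat="Binary")
    y = pulp.LpVariable.dicts("y", (range(K), range(m)), cat="Binary")
    prob += pulp.lpSum(w[i] * x[i] for i in range(n))                    # objective (1)
    for j in range(m):
        prob += pulp.lpSum(t[i][j] * x[i] for i in range(n)) <= b[j]     # machine time (2)
        prob += pulp.lpSum(g[k] * y[k][j] for k in range(K)) <= h[j]     # tool slots (3)
        for i in range(n):
            for k in K_req[i][j]:
                prob += x[i] <= y[k][j]                                  # tooling link (4)
    return prob, x, y
```

Calling prob.solve() with PuLP's bundled CBC backend would then give an exact benchmark for the smaller problem sizes.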

3. Simulated annealing algorithm

Simulated annealing (SA), based on the physical annealing phenomenon, is a general purpose combinatorial optimization technique. It has been used successfully in solving several combinatorial problems; a review of various applications is given by Van Laarhoven and Aarts [18]. It is commonly described as a sequence of homogeneous Markov chains, where each computation step of the chain starts only when the previous step is completed. Under certain conditions the algorithm converges asymptotically to an optimal solution [18]; however, in practice this limit can never be attained and hence convergence to an optimal solution cannot be guaranteed. It does, however, possess the ability to escape a local optimum by accepting a deterioration in its objective function value.

The SA algorithm is a randomized search process and starts with an initial solution at a high temperature, which is gradually reduced. A random perturbation is used to generate a candidate solution from the neighborhood of the current solution. If the candidate solution is better than the current solution, then the candidate solution is accepted. On the other hand, if the current solution is better than the candidate solution, then the candidate solution is accepted with a probability of exp(-Δ/λ), where Δ is the change in the objective function value and λ is the temperature. Consequently, there is always a nonzero probability of accepting a candidate solution inferior to the current one, and it is this property of SA that helps in overcoming local optima. At each temperature a sufficient number of perturbations are attempted for the system to reach a stable configuration. Thus, at high temperatures almost all perturbations are accepted and the system moves almost freely between different states of its solution space, whereas at very low temperatures it behaves like a greedy algorithm.

The computation time required by SA can be very large, and the calculation of exp(-Δ/λ) is a significant factor in the run time of the algorithm. To reduce the computational time, a simple lookup table of precomputed exponentials, as described in [19], is used to approximate exp(-Δ/λ) to within 0.5% of its value. Lastly, it is possible for the algorithm to converge to a solution which is worse than the best solution found during the whole search process. Since it does not take much time or space to keep track of the best solution for this particular problem, the SA algorithm was slightly modified to keep track of the best feasible solution. With this modification, there is less need for the algorithm to rely on its stabilizing effect over time [20]. In order to use the SA algorithm, we need an objective function, a generation mechanism, a solution space and an annealing schedule, which are described next.
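The lookup-table idea can be illustrated as follows (a sketch, not the authors' C code): with a grid step of 0.01 on the ratio Δ/λ, the tabulated value differs from exp(-Δ/λ) by roughly exp(0.005) - 1, i.e. about 0.5%.

```python
import math
import random

# Sketch: Metropolis acceptance test with a precomputed table of exp(-r),
# r = delta / temperature, tabulated on a grid of step 0.01 (about 0.5% accuracy).
STEP, R_MAX = 0.01, 20.0
EXP_TABLE = [math.exp(-i * STEP) for i in range(int(R_MAX / STEP) + 1)]

def accept(delta, temperature, rng=random):
    if delta <= 0:                       # non-deteriorating move: always accept
        return True
    r = delta / temperature
    if r >= R_MAX:                       # acceptance probability is negligible
        return False
    return rng.random() < EXP_TABLE[round(r / STEP)]
```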

3.1. Solution space, objective function and generation mechanism

The SA algorithm as described above is applicable to unconstrained optimization problems. For constrained optimization problems such as P, the constraints are usually relaxed and incorporated into the objective function through penalty terms. Alternatively, the generation mechanism can be designed to generate only feasible states. It has been shown that the efficiency of the SA algorithm depends on the solution space and the neighborhood structure used [21-23]. In general, a neighborhood structure which imposes a smooth topology is preferred to a bumpy topology where there are many deep local optima. For a constrained problem like ours, allowing solutions which break the constraints at the expense of a suitably defined penalty function is more likely to lead to simpler neighborhood moves and a smoother topology [24]. This is because the candidate solutions generated by random perturbations need not be feasible with respect to constraint sets (2) and (3); hence, we have adopted this approach.

Since x_i and y_kj are linked only through constraint (4), the y values can be determined by simple inspection once x is fixed, i.e.,

y_kj = 1 if x_i = 1 and k ∈ K_ij for some i, and y_kj = 0 otherwise.

Hence, we define the solution space Ω as the collection of all possible part type selection patterns, which can be formally stated as

\Omega = \{ x : \{1, \ldots, n\} \to \{0, 1\} \},

with |Ω| = 2^n. An element x ∈ Ω is a solution state and will be called a selection pattern. The solution space Ω is thus not necessarily feasible with respect to the resource constraints (2) and (3). For any given x ∈ Ω, let

B = \{ j \in \{1, \ldots, m\} : \sum_{i=1}^{n} t_{ij} x_i - b_j > 0 \}   and   H = \{ j \in \{1, \ldots, m\} : \sum_{k=1}^{K} g_k y_{kj} - h_j > 0 \}

denote the sets of machine types violating constraints (2) and (3), respectively. By appending nonnegative penalties α and β for these two resource violations, the objective function can be written as

\max Z = \sum_{i=1}^{n} w_i x_i - (\alpha/\lambda) \sum_{j \in B} \Big( \sum_{i=1}^{n} t_{ij} x_i - b_j \Big) - (\beta/\lambda) \sum_{j \in H} \Big( \sum_{k=1}^{K} g_k y_{kj} - h_j \Big).

To ascertain feasibility of the solutions at low temperature, the penalties are divided by the temperature λ, which gradually increases the penalties as the temperature is reduced. Choosing suitable values of α and β is complex and problem dependent. If the penalty terms are too small, the algorithm will be ineffective in converging to a feasible solution. In other words, the final solution may select a large number of part types for which there may not be enough processing time available, and/or enough capacity in the machines' tool magazines. On the other hand, very large values of the penalty terms are likely to force the algorithm to converge to a feasible but inferior solution, which may result in under-utilization of the resources. In this implementation we have used an empirical approach (described later) to estimate α and β.

As mentioned before, the definition of the neighborhood is critical to the performance of the algorithm. Generating a new solution state from the current one is done by randomly adding or removing a part type from the current batch scheduled for production, in other words randomly changing an integer variable from its present value of x_i to 1 - x_i. Let Ω_x represent the set of neighborhood solutions of x ∈ Ω; then the above generation mechanism has a neighborhood size of |Ω_x| = n for all x ∈ Ω. A new solution state x' ∈ Ω is accepted if the objective value does not deteriorate (i.e., Δ ≤ 0); otherwise (deteriorated objective value), it is accepted with a probability of exp(-Δ/λ).
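A compact sketch of the penalized evaluation and the single-flip neighbourhood move described above is given below; the data structures are assumed to follow the notation of Section 2, and alpha and beta stand for the penalty weights α and β.

```python
import random

# Sketch of the penalized objective of Section 3.1; y is implied by x through (4).
def evaluate(x, w, t, b, g, h, K_req, alpha, beta, temperature):
    n, m = len(x), len(b)
    z = sum(w[i] for i in range(n) if x[i])
    for j in range(m):
        # violation of the processing-time constraint (2) on machine type j
        over_time = sum(t[i][j] for i in range(n) if x[i]) - b[j]
        # violation of the tool-magazine constraint (3); a shared tool is counted once
        tools_j = set().union(*(K_req[i][j] for i in range(n) if x[i]))
        over_slots = sum(g[k] for k in tools_j) - h[j]
        if over_time > 0:
            z -= (alpha / temperature) * over_time
        if over_slots > 0:
            z -= (beta / temperature) * over_slots
    return z

def neighbour(x, rng=random):
    """Flip the selection status of one randomly chosen part type (x_i -> 1 - x_i)."""
    i = rng.randrange(len(x))
    x2 = list(x)
    x2[i] = 1 - x2[i]
    return x2
```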

3.2. Annealing schedule

Generally, an annealing schedule specifies a starting temperature λ_0, an updating rule for λ, the number of transitions (length of the Markov chain) at each temperature, and a stopping criterion. The choice of annealing schedule has an important bearing on the performance of the algorithm. In many of the annealing schedules commonly used in the literature [19, 25], λ is decremented by multiplying it by a constant close to 1, with a variable length of Markov chain. However, this procedure does not guarantee polynomial-time convergence. In this paper, a polynomial-time annealing schedule (provided the neighborhood is polynomial) due to Aarts and Van Laarhoven [26] is adopted with some modifications. The schedule uses a fixed length of Markov chain with a variable decrement of the temperature.

The starting temperature λ_0 should be high enough to ensure a high probability of acceptance of all proposed transitions. The starting temperature is approximated based on a given initial acceptance ratio χ_0, i.e., the lowest temperature at which approximately a fraction χ_0 of the random perturbations is accepted. We start with an arbitrary temperature and update it using a binary search until the acceptance ratio is approximately equal to χ_0.

The length of the Markov chain at each temperature should be such that the system is able to attain equilibrium at that temperature. Here, the length of the Markov chain L is equal to the neighborhood size n at all temperatures. The temperature is updated using the following rule:

\lambda_{l+1} = \lambda_l / (1 + \gamma \lambda_l),   with   \gamma = \ln(1 + \delta) / (3 s(\lambda_l)),

where s(λ_l) is the standard deviation of the objective function values over the states of the Markov chain at temperature λ_l, and δ is a distance parameter used for controlling the annealing rate. Small values of δ lead to slower annealing and thus a better quality final solution. On the other hand, large values of δ lead to rapid annealing. Finally, the algorithm is terminated when the value of the objective function does not change for a certain number n_stop of consecutive temperatures.
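Under the reconstruction of the update rule given above, one temperature step could be sketched as follows, with s(λ_l) estimated from the objective values recorded along the Markov chain just completed.

```python
import math
import statistics

# Sketch of the Aarts-Van Laarhoven style temperature update (Section 3.2).
def next_temperature(temp, chain_objective_values, delta):
    s = statistics.pstdev(chain_objective_values)   # s(lambda_l) over the chain
    if s == 0.0:                                    # flat chain: keep the temperature
        return temp
    gamma = math.log(1.0 + delta) / (3.0 * s)
    return temp / (1.0 + gamma * temp)
```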

3.3. Determination of penalties

To determine α and β, we use an empirical procedure, which resulted in reasonably good values. This is done by performing SA runs for chosen values of α and β under conditions somewhat more relaxed than those of the final optimization run (i.e., with a much smaller chain length, a much lower limit on the maximum number of iterations, and a lower starting temperature). We describe the procedure for computing α below. The procedure for computing β is identical, with the corresponding set of constraints; the two are determined simultaneously.

We start with some initial values of α_l (lower limit of α) and α_u (upper limit of α), and the SA algorithm is run with α = (α_l + α_u)/2. At the end of the run we check the final solution as follows. If any of the constraints in constraint set (2) is violated, α_l = α and α_u is doubled. Otherwise, α_u = α and α_l is reduced by half. Simultaneously, we apply the same updating procedure for β_l and β_u with respect to constraint set (3). The SA algorithm is repeated with updated α and β (and a new starting solution), until the algorithm converges to a feasible solution and satisfies the convergence criterion of the SA algorithm. Since a number of trial runs are required, the computational time is increased. This procedure worked well and gave reasonably good values of α and β, with the algorithm converging to a feasible solution in most of the cases, while satisfying the convergence criterion of the SA procedure at the same time. We found this approach to be quite robust.
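The doubling/halving search for α and β could be sketched as follows; run_sa_relaxed is an assumed helper (not from the paper) that performs one short, relaxed SA pass and reports which constraint sets its final solution violates and whether the SA convergence criterion was met.

```python
# Sketch of the empirical penalty search of Section 3.3 (helper and its fields assumed).
def tune_penalties(run_sa_relaxed, alpha_lo=1.0, alpha_hi=100.0,
                   beta_lo=1.0, beta_hi=100.0, max_rounds=20):
    alpha = beta = None
    for _ in range(max_rounds):
        alpha = (alpha_lo + alpha_hi) / 2.0
        beta = (beta_lo + beta_hi) / 2.0
        result = run_sa_relaxed(alpha, beta)   # short run: small chain, few iterations
        if result.feasible and result.converged:
            break                              # stopping rule described in the text
        if result.violates_time:               # some constraint of set (2) violated
            alpha_lo, alpha_hi = alpha, alpha_hi * 2.0
        else:
            alpha_lo, alpha_hi = alpha_lo / 2.0, alpha
        if result.violates_tools:              # some constraint of set (3) violated
            beta_lo, beta_hi = beta, beta_hi * 2.0
        else:
            beta_lo, beta_hi = beta_lo / 2.0, beta
    return alpha, beta
```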

4. Computational experience

In order to test the effectiveness of the SA algorithm, we tested it on a set of randomly generated problems. To ensure that the generated problems are representative of realistic situations, we developed some guidelines for the problem generator based on the data presented in [27] and on the results of a survey of FMSs in the USA and Japan by Jaikumar [28]. For a given number of part, machine and tool types, the weight coefficient w_i for each part type was generated from uniform(10, 100). The processing time t_ij for part type i on machine type j was drawn from uniform(1, 10). The number of tool slots for tool type k, g_k, was generated with the following probabilities, as stated in [27]:

g_k = 1, with a probability of 0.80;
g_k = 2, with a probability of 0.15;
g_k = 3, with a probability of 0.05.

While in practice tools are usually symmetrical in shape, so that g_k is either 1 or 3, g_k = 2 has been used in some of the previous studies [9, 27]. A feasible solution is generated by randomly setting the x_i's equal to 0 or 1, and then obtaining the y values according to (4). This solution is used to calculate the processing time b_j and the number of tool slots h_j available on each machine type j by

b_j = v \sum_{i=1}^{n} t_{ij} x_i,        h_j = v \sum_{k=1}^{K} g_k y_{kj},

where the capacity tightening ratio (v) was set at 1.2, 1.0, and 0.8. This was done to test the effect of tightening the right-hand side of the constraints. As v is reduced the problem becomes comparatively harder to solve.

Four levels were chosen for the part type size (50, 75, 100, 125), three levels for the machine type size (10, 15, 25) and two levels for the tool type size (20, 30). The density of the part-machine-tool matrix D = {d_ijk} (d_ijk is equal to 1 if part type i requires tool type k on machine type j, and 0 otherwise) was set at 5% for all the runs. In generating the problems, we made sure that each part type required at least one operation to be carried out on some machine type. For each problem set, five replicates were tested, yielding a total of 360 randomly generated problems. We feel that this set of test problems is sufficient to illustrate the performance and complexity of the annealing procedure. It is important to note that these test problems are relatively hard to solve.
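Under the distributions described above, the test-problem generator could be sketched as follows; this is not the authors' code, and the handling of part-machine pairs that require no tools, as well as the default density argument, are assumptions.

```python
import random

# Sketch of a random instance generator following the guidelines of Section 4.
def generate_instance(n, m, K, v, density=0.05, rng=random):
    w = [rng.uniform(10, 100) for _ in range(n)]                  # part-type weights
    g = [rng.choices([1, 2, 3], weights=[0.80, 0.15, 0.05])[0]    # tool-slot sizes
         for _ in range(K)]
    # Part-machine-tool incidence with ~5% density; every part gets >= 1 operation.
    K_req = [[{k for k in range(K) if rng.random() < density} for _ in range(m)]
             for _ in range(n)]
    for i in range(n):
        if not any(K_req[i][j] for j in range(m)):
            K_req[i][rng.randrange(m)].add(rng.randrange(K))
    # Processing times; assumed zero where no tools (hence no operation) are required.
    t = [[rng.uniform(1, 10) if K_req[i][j] else 0.0 for j in range(m)]
         for i in range(n)]
    # Right-hand sides from a random reference selection, scaled by the ratio v.
    x = [rng.randint(0, 1) for _ in range(n)]
    b = [v * sum(t[i][j] for i in range(n) if x[i]) for j in range(m)]
    h = [v * sum(g[k] for k in set().union(*(K_req[i][j] for i in range(n) if x[i])))
         for j in range(m)]
    return w, g, t, b, h, K_req
```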

To evaluate the performance of an algorithm, an upper bound or a good heuristic solution is usually used if the optimal value cannot be obtained in a reasonable amount of time, which is the case here. An upper bound (UB) on the objective function value v(P) can be computed by relaxing constraint set (3), which reduces the problem to a multiconstraint knapsack problem; this was solved to optimality using the algorithm outlined by Gavish and Pirkul [29]. As the computational results will indicate, this is a good upper bound, and one of the reasons for it is that the objective function does not contain the integer variables y. The upper bound percentage gap (UBPG) is then computed as

UBPG = (UB - v(P)_{SA}) / UB \times 100,

where v(P)_{SA} is the objective function value obtained by the SA algorithm.

The values of the parameters used in this implementation of the algorithm were set at: χ_0 = 0.90, n_stop = 15, δ = 10.0 (for v = 1.2 and 1.0) and δ = 20.0 (for v = 0.8). The maximum number of iterations was set at 350. The algorithm was written in C under VM/CMS on an IBM 3090 mainframe.

Table 1 presents the number of instances (NCFS) where the algorithm converged to a feasible solution (out of 5) in each problem set. Although the algorithm was able to find at least one feasible solution in all the cases, the final solution was feasible only in the number of instances shown in Table 1. Generally, as v decreases the problem becomes more difficult to solve and hence the number of instances of feasible solutions decreases. In other words, the algorithm requires more iterations to converge to a feasible solution than the maximum allowed, for the same stopping criterion. In most of the cases where the final solution was infeasible, it was due to the violation of constraint set (3) by one or more machine types. While a sequence of specific moves can remove infeasibility, there is little likelihood of restoring feasibility by a single random move. This is especially true in this problem if infeasibility is due only to the violation of constraint set (3). Since x and y are related by constraint set (4), generally a series of moves will be required to remove infeasibility and satisfy constraint set (3), even though constraint set (2) may be satisfied at all times. Further, when the capacity tightening ratio v is reduced, restoring feasibility becomes more difficult and time consuming. This shows the difficulty associated with solving the problem as the right-hand side of the constraints is tightened. In the subsequent analysis of the computational results, we restrict our discussion to the converged feasible solutions only.

Table 1 also presents the CPU time as a function of the problem parameters for all three cases of v. Clearly, all three problem parameters, namely the number of part types (n), the number of machine types (m), and the number of tool types (K), affect the computational time to varying degrees, and the time generally increases with an increase in any of the three parameters. Additionally, the computational time increases as v decreases, since the problem becomes harder to solve. However, for a given v, an ANOVA showed that only the number of machine types and the number of part types have a significant effect on the computational time. The effect of these two factors on the computational time for v = 1.2 is shown in Fig. 1. Thus, these results indicate that the algorithm has been able to solve reasonably large problems in a fair amount of time.


Table 1
Number of instances of converged solutions and CPU time (seconds) as a function of problem parameters

                              Capacity tightening ratio (v)
              v = 0.8                   v = 1.0                   v = 1.2
                    CPU time                  CPU time                  CPU time
  n   m   K   NCFS   Avg     SD       NCFS   Avg     SD       NCFS   Avg     SD
 50  10  20    3     1.29    0.72      5    10.13    3.95      5     5.71    1.03
 50  10  30    2    11.86    3.43      4    11.28    3.21      5    11.55    4.44
 50  15  20    3    15.55    8.25      5    12.17    3.65      4     9.20    3.34
 50  15  30    3    14.79    6.12      5    14.83    4.85      3    10.45    2.42
 50  25  20    2    17.94    9.85      4    24.03   11.26      5    17.21   10.77
 50  25  30    3    30.98   12.90      3    31.69   20.64      3    28.63   16.26
 75  10  20    4    16.00    2.59      5    12.43    6.71      5    13.00    2.54
 75  10  30    4    17.66    4.83      5    20.44    8.77      5     8.84    4.63
 75  15  20    3    29.02    7.37      5    21.26   10.73      5    10.13    1.57
 75  15  30    1    25.20    0.00      3    34.13   12.31      5    19.18   17.06
 75  25  20    4    32.60   16.58      3    29.69    8.67      5    34.26   17.98
 75  25  30    2    62.90    2.08      3    50.07   17.27      4    41.51   21.61
100  10  20    4    18.71    1.56      5    17.56    7.32      5    10.69    5.57
100  10  30    3    26.41   12.12      5    28.93   12.96      3    11.90    3.11
100  15  20    2    18.68    4.38      5    31.65    7.92      4    13.66    4.45
100  15  30    3    43.70   19.10      2    34.97    5.70      4    23.12   11.89
100  25  20    3    61.64   21.72      5    59.49   27.79      4    42.06   25.72
100  25  30    1    35.87    0.00      3    78.44   36.99      5    47.03   32.91
125  10  20    4    26.15    6.90      3    22.72    8.69      5    15.06    6.94
125  10  30    2    31.50   12.25      4    43.66    5.59      5    21.81   12.54
125  15  20    2    30.25    4.62      4    30.11    7.14      5    26.21   18.45
125  15  30    2    42.68    6.43      4    49.69   19.78      5    27.44    7.45
125  25  20    1    81.19    0.00      3    77.57   37.98      3    33.23   16.43
125  25  30    3    79.11   40.91      4    78.59   26.62      5    54.13   16.09
Total         64(a)                   97(a)                   107(a)

n: number of part types; m: number of machine types; K: number of tool types; NCFS: number of converged feasible solutions out of 5; SD: standard deviation.
(a) Total number of converged feasible solutions out of 120.

Table 2 presents the solution quality as measured by UBPG (as defined before) for v = 1.0 and 1.2. We did not compute the upper bound for the case v = 0.8 due to the excessive computational time required to solve the multiconstraint knapsack problem to optimality using Gavish and Pirkul's algorithm. In many of the problem instances, computing an upper bound took more time than solving the problem with the SA procedure. As is evident from the table, the SA algorithm performed quite well in most of the cases, with the maximum average UBPG being 5.32. In fact, in some cases the algorithm was able to find an optimal solution. There is, however, no clear-cut pattern evident from the table regarding the relation between solution quality and the problem parameters, although in all but six cases the average UBPG increases as v decreases.

[Fig. 1. A plot of average computational time as a function of the number of machine types and part types for v = 1.2.]

5. Concluding remarks

In this paper we have developed an SA based algorithm for solving the batching problem in production planning for flexible manufacturing systems. In the absence of any theoretical basis for computing the penalties for constraint violation, the empirical approach outlined here performed quite well. Moreover, as the empirical approach is not problem specific, it can be utilized for other optimization problems. As the computational results and analysis indicate, the SA algorithm was able to obtain good quality solutions in a reasonable amount of time even for fairly large problems. It should be noted that the experimental problems used to test the algorithm are comparatively hard. The SA approach can be easily modified to solve more complex FMS production planning models that embody additional features found in real-world settings. For example, the nonlinearity associated with tool overlap is usually difficult to handle by traditional optimization approaches, but can be handled by SA in a simpler way. Thus, with a good annealing schedule, the SA algorithm is capable of providing high-quality solutions to practical sized production planning problems within a reasonable amount of time.

Acknowledgements

The authors thank the referees for their careful review and constructive comments on an earlier version of this paper.

Table 2
UBPG as a function of problem parameters

                    Capacity tightening ratio (v)
              v = 1.0                   v = 1.2
  n   m   K   Min    Avg    Max         Min    Avg    Max
 50  10  20   1.65   3.23   4.74        0.00   1.55   3.06
 50  10  30   0.00   3.37   6.27        0.81   4.00   6.72
 50  15  20   2.02   3.20   4.78        0.44   1.12   1.81
 50  15  30   0.00   2.21   5.59        2.40   4.91   9.61
 50  25  20   1.81   5.32   8.53        0.00   3.62   9.67
 50  25  30   1.48   2.42   3.13        2.95   4.65   7.95
 75  10  20   0.78   2.72   4.14        2.94   4.39   8.01
 75  10  30   1.01   2.70   6.74        1.05   1.91   3.82
 75  15  20   1.07   2.88   4.49        0.83   2.59   3.51
 75  15  30   2.22   3.44   4.89        0.95   2.50   3.85
 75  25  20   2.14   4.77   7.06        1.10   2.06   3.23
 75  25  30   1.43   2.37   3.52        1.87   3.37   5.52
100  10  20   1.09   2.51   4.97        0.80   3.49   8.62
100  10  30   1.32   3.03   5.46        0.91   1.54   2.60
100  15  20   1.45   2.61   4.38        0.68   1.57   2.38
100  15  30   3.06   3.45   3.84        0.36   1.68   3.15
100  25  20   2.73   4.30   6.75        0.11   2.40   3.51
100  25  30   1.54   3.05   4.75        1.10   2.69   4.05
125  10  20   2.27   2.74   3.09        0.66   1.94   5.01
125  10  30   0.95   2.59   5.03        0.40   1.79   2.83
125  15  20   2.41   3.18   4.36        0.70   1.61   2.28
125  15  30   0.51   2.43   3.53        1.05   1.98   2.82
125  25  20   3.21   3.71   4.40        2.98   3.25   3.49
125  25  30   2.28   3.50   6.22        0.62   3.22   4.98

n: number of part types; m: number of machine types; K: number of tool types.

References

[1] Mansfield, E., 1993. The diffusion of flexible manufacturing systems in Japan, Europe and the United States. Management Science, 39: 149-159.
[2] Jaikumar, R. and Van Wassenhove, L.N., 1989. A production planning framework for flexible manufacturing systems. J. Manuf. Oper. Mgmt., 2: 52-79.
[3] Buzacott, J.A., 1985. Modelling manufacturing systems. Robotics and Computer-Integrated Manufacturing, 2: 25-32.
[4] Kusiak, A., 1984. Loading models in flexible manufacturing systems, in: Recent Developments in Flexible Manufacturing Systems and Allied Areas. North-Holland, Amsterdam, pp. 119-132.
[5] Stecke, K.E., 1983. Formulation and solution of nonlinear integer production planning problems for flexible manufacturing systems. Management Science, 29: 273-288.
[6] Van Looveren, A.J., Gelders, L.F. and Van Wassenhove, L.N., 1986. A review of FMS planning models, in: Modelling and Design of Flexible Manufacturing Systems. Elsevier, Amsterdam, pp. 3-31.
[7] Hax, A.C. and Meal, H.C., 1975. Hierarchical integration of production planning and scheduling, in: M.A. Geisler (Ed.), Studies in Management Sciences, Vol. 1: Logistics. North-Holland, New York.
[8] Hwang, S., 1986. A constraint directed method to solve the part selection problem in flexible manufacturing systems planning stage. Proc. 2nd ORSA/TIMS Conf. on FMS: Operations Research Models and Applications, pp. 297-309.
[9] Hwan, S. and Shogan, A., 1989. Modelling and solving an FMS part selection problem. Int. J. Prod. Res., 27: 1349-1366.
[10] Rajagopalan, S., 1986. Formulation and heuristic solutions for parts grouping and tool loading in flexible manufacturing systems. Proc. 2nd ORSA/TIMS Conf. on FMS: Operations Research Models and Applications, pp. 311-320.
[11] Ram, B., Sarin, S.C. and Chen, C.S., 1990. A model and a solution approach for the machine loading and tool allocation problem in a flexible manufacturing system. Int. J. Prod. Res., 28: 637-645.
[12] Sawik, T., 1990. Modelling and scheduling of a flexible manufacturing system. Eur. J. Oper. Res., 45: 177-190.
[13] Stecke, K.E. and Kim, I., 1986. A flexible approach to implementing the short term FMS planning function. Proc. 2nd ORSA/TIMS Conf. on FMS: Operations Research Models and Applications, pp. 283-295.
[14] Whitney, C.K. and Gaul, T.S., 1984. Sequential decision procedures for batching and balancing in FMSs. Proc. 1st ORSA/TIMS Conf. on FMS: Operations Research Models and Applications, pp. 243-248.
[15] Liang, M. and Dutta, S.P., 1993. Solving a combined part-selection, machine-loading, and tool-configuration problem in flexible manufacturing systems. Prod. Oper. Mgmt., 2: 97-113.
[16] Stecke, K.E. and Kim, I., 1988. A study of FMS part type selection approaches for short term production planning. Int. J. Flexible Manuf. Systems, 1: 7-29.
[17] Khade, S.B. and Ignizio, J.P., 1990. A Lagrangian relaxation based algorithm for production planning in flexible manufacturing systems. Proc. Annual Meeting of the Decision Sciences Institute, pp. 1805-1807.
[18] Van Laarhoven, P.J.M. and Aarts, E.H.L., 1987. Simulated Annealing: Theory and Applications. Reidel, Dordrecht.
[19] Johnson, D.S., Aragon, C.R., McGeoch, L.A. and Schevon, C., 1989. Optimization by simulated annealing: An experimental evaluation; Part I, graph partitioning. Oper. Res., 37: 865-892.
[20] Glover, F. and Greenberg, H.J., 1989. New approaches for heuristic search: A bilateral linkage with artificial intelligence. Eur. J. Oper. Res., 39: 119-130.
[21] Cerny, V., 1985. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. J. Optim. Theory Appl., 45: 41-51.
[22] Kuik, R. and Salomon, M., 1990. Multi-level lot-sizing problem: Evaluation of a simulated-annealing heuristic. Eur. J. Oper. Res., 45: 25-37.
[23] Osman, I.H. and Potts, C.N., 1989. Simulated annealing for permutation flow shop scheduling. OMEGA Int. J. Mgmt. Sci., 17: 551-557.
[24] Eglese, R.W., 1990. Simulated annealing: A tool for operational research. Eur. J. Oper. Res., 46: 271-281.
[25] Kirkpatrick, S., Gelatt Jr., C.D. and Vecchi, M.P., 1983. Optimization by simulated annealing. Science, 220: 671-680.
[26] Aarts, E.H.L. and Van Laarhoven, P.J.M., 1985. Statistical cooling: A general approach to combinatorial optimization problems. Philips J. Res., 40: 193-226.
[27] Shanker, K. and Tzen, Y.J., 1985. A loading and dispatching problem in a random flexible manufacturing system. Int. J. Prod. Res., 23: 579-595.
[28] Jaikumar, R., 1986. Postindustrial manufacturing. Harvard Bus. Rev., 64: 69-76.
[29] Gavish, B. and Pirkul, H., 1985. Efficient algorithms for solving multiconstraint zero-one knapsack problems to optimality. Math. Programming, 31: 78-105.