CHAPTER 1

INTRODUCTION

1.1 BACKGROUND OF SUPPLY CHAIN MANAGEMENT

Competitiveness in today’s marketplace depends heavily on the ability of a company to handle several important challenges, such as reducing total supply chain operating cost, reducing lead-times, increasing customer service levels, and improving product quality.

In Figure 1.1 a typical non-integral supply chain is shown, in which the goods flow starts as raw materials at natural resources and ends with products at final customers. Raw material winners keep raw materials in stock and supply them to component manufacturers. Component

manufacturers have an inventory of materials at the start of the production

process and an inventory of components at the end. Product manufacturers

hold inventories of products and components, the latter being supplied by

component manufacturers. Wholesalers buy products from product

manufacturers and hold central and regional stock in central and regional

distribution centers respectively. Retailers get their supply from wholesalers

and have products in local stock for sales to final customers. In today’s

world, most supply chains still do not profit from supply chain management from natural resources to final customers. Instead, supply chains struggle with non-integral inventory management due to opposition between organizations. Due to the increase of competition and dynamics in today’s

markets, organizations are forced to further improve their business

performance. So far, very few supply chains have realized supply chain

management from natural resources to final customers, by one organization

dominating the other organizations in the supply chain. As presented in

Figure 1.2, a dominant organization can make use of hierarchical control to

achieve integral inventory management across organizational boundaries. The

subordinate organizations in the supply chain are forced to obey the

instructions of the dominant organization.

Figure 1.1 Non-integral inventory management by opposition between

organizations using central and procedural information

systems

To obtain the required flexibility, supply chains could introduce

supply chain management from natural resources to final customers, by co-

operation across a network of organizations. As is shown in Figure 1.3,

networked organizations apply lateral control to achieve integral inventory

management beyond their organizational boundaries.


Figure 1.2 Integral inventory management by domination of one

organization over others, using a central and procedural

information system

When compared to non-integral inventory management in supply

chains due to opposition between organizations, the extension of integral

inventory management beyond organizational boundaries can increase the

productivity of supply chains by reducing inventory related costs and

improving inventory related quality (customer service). By focusing on the

inventory aspect, a network of supply chains can be considered as a network

of stock-points connected through transit processes.

Figure 1.3 Integral inventory management by co-operation across

networked organizations, using distributed and object-oriented information systems


Raw materials, components, assemblies and finished products all

represent inventory at the various stages of production and distribution.

Inventory occurs at all places in a supply chain, both in storage processes and

in production and in transportation processes. Inventory in a storage process is

called stock-point inventory, while inventory in production and transportation

processes is called transit inventory.

Inventory categories in supply chains are plotted for a particular

supply chain, from natural resources to final customers in Figure 1.4. For

every stage in the supply chain the inventory levels are indicated as per

category. Given the levels of the desired as well as the required inventory,

there would ideally be no extra inventory categories in a supply chain.

Integral inventory management concerns the coordinated planning, control

and monitoring of inventory levels in stock-points throughout supply chains,

in order to maximize overall supply chain performance. Integral inventory

management is a means for businesses by which they can raise their

productivity, including cost reduction and quality improvement.

Figure 1.4 Inventory categories in supply chains

Thus, integral inventory management and networked organization

management are complementary directions for the improvement of business

performance in supply chains. Ideally, both issues in supply chain


management should be combined to satisfy customer requirements. The

combination of Integral Inventory Management (IIM) and Networked

Organization Management (NOM) is called Networked inventory

management (NIM) (Verwijmeren 1996a and Verwijmeren 1997). The

simultaneous achievement of integral inventory management and networked

organization management is depicted in Figure 1.5.

Figure 1.5 Networked inventory management

1.2 SUPPLY CHAIN MANAGEMENT (SCM)

The challenges facing business organizations continue to grow

rapidly due to the globalization of commerce, rapid commoditization of

products, demand for customized products, global distribution of

manufacturing and warehousing facilities, etc. All these factors have driven

business organizations to concentrate on their supply chains so as to gain

competitive advantage in the marketplace. The ability to manage the complete Supply Chain Network (SCN) and to optimize decisions within it is increasingly being recognized as a crucial competitive factor. In today’s competitive world, the success of an

industry is contingent upon the management of its supply chain. The global

economy and the recent developments in information technology have


significantly modified the business organization of enterprises and the way

that they do business. New forms of organizations such as extended

enterprises and virtual enterprises have begun to appear, and they are quickly being adopted by most leading enterprises. It has been observed that “competition in the future will

not be between individual organizations but between competing supply

chains” (Christopher, 1992). Thus, business opportunities are captured by

groups of enterprises in the same enterprise network. The main reason for this

change is the global competition that forces enterprises to focus on their core

competences. According to the visionary report Manufacturing Challenges 2020 conducted in the USA, this trend will continue; one of the report’s six grand challenges is the ability to reconfigure manufacturing enterprises rapidly in response to changing needs and opportunities. Among

the techniques supporting a multi-decisional context, computer simulation

algorithms can undoubtedly play an important role in evaluating quantitative

and qualitative benefits in a supply chain management environment. This research focuses on the application of new heuristic optimization algorithms to supply chain network architectures in order to meet the above challenges effectively and efficiently.

1.2.1 Definition of Supply Chain Management

The phrase “supply chain management” came into use in the early 1990s. In today’s global market, managing the entire supply chain has become a key factor for business success. World-class organizations now realize that

non-integrated manufacturing processes, non-integrated distribution processes

and poor relationship with suppliers and customers are inadequate for their

success. The challenges facing business organizations grow rapidly because

of the globalization of commerce, global distribution of manufacturing and

warehousing facilities, rapid commoditization of products, demand for

customized products, competitive pressures, rapid advances of information


technology, etc. The task of getting the right product to the right customer at

the right place and at the right time is not easy, and this task has led to the study of the ‘supply chain’. Supply Chain Management (SCM) is a relatively

new term. It crystallizes concepts about integrated business planning that have

been espoused by logistics experts, strategists, and operations research

practitioners as far back as the 1950s. Today, integrated planning is finally

possible due to advances in information technology, but most companies still

have much to learn about implementing new analytical tools needed to

achieve it.

Simchi-Levi et al (2000) defined SCM as a set of approaches

utilized to efficiently integrate suppliers, manufacturers, warehouses and

stores, so that merchandise is produced and distributed at the right quantities,

to the right location and at the right time in order to minimize system-wide

cost, while satisfying service level requirements. On the other hand,

Christopher (2000) defined SCM as the management of upstream and

downstream relationships with suppliers and customers to deliver superior

customer value at minimal cost in the supply chain as a whole. The common

thread in any definition is that supply chain management seeks to integrate

performance measures over multiple firms or processes, rather than taking the

perspective of a single firm or process. General supply chain network is

shown in Figure 1.6.


Figure 1.6 Multi-echelon supply chain network

1.2.2 Functions and Tasks of Supply Chain Management

Using the concept of flow from raw material to consumer, Mabert

and Venkataramanan (1998) presented a general structure of the supply chain

and a sample of the elements (managerial functions and tasks) that configure it. The

chain contains five aggregate or major stages to represent important phases in

the flow.

Sourcing involves not only the supply of raw materials and

components through a network of vendors; it also includes product

development support through subassembly design and tooling production for

process changes.


Inbound Logistics focuses on effective and efficient movement and

storage of required materials to meet production schedules.

Manufacturing uses provided inputs to produce a high quality and

price competitive product in a timely manner.

Outbound Logistics concentrates on movement of finished goods

through the distribution network to global markets for consumer use.

After-market Service recognizes the need to support the product

either through repair service, or customer service representatives, to answer

product-use questions.

1.2.3 Objectives of Supply Chain Management

The objective of supply chain management is to minimize total

supply chain cost to meet given demand in the market. This total cost may be composed of the following cost elements (a small illustrative summation follows the list):

Raw material and other acquisition costs

In-bound transportation costs

Facility investment costs

Direct and indirect manufacturing costs

Direct and indirect distribution center costs

Inventory holding costs

Inter-facility transportation costs

Out-bound transportation costs
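Summing these elements gives the objective value for a candidate supply chain plan. As a minimal illustration only (the component names and figures below are hypothetical placeholders, not data from this thesis), a Python sketch of the total-cost objective is:

    # Hypothetical cost components (illustrative placeholder values).
    cost_components = {
        "raw_material_acquisition": 120000.0,
        "inbound_transportation": 18000.0,
        "facility_investment": 75000.0,
        "manufacturing": 95000.0,
        "distribution_center": 22000.0,
        "inventory_holding": 14000.0,
        "inter_facility_transport": 9000.0,
        "outbound_transportation": 16000.0,
    }

    def total_supply_chain_cost(components):
        # Objective value: the sum of all cost components of a given plan.
        return sum(components.values())

    print(total_supply_chain_cost(cost_components))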


1.3 SUPPLY CHAIN PLANNING OPTIMIZATION FRAMEWORK

1.3.1 Importance of Performance Evaluation

A total supply chain cost analysis approach would look at all cost

consequences of supply chain structure and policy decisions. This approach

suggests that, as a team, the participants in the supply chain should maximize the

overall profit in the chain, rather than optimizing their own portion of it. The

implication is that pricing of goods moving between participants can be

adjusted to reflect a fair sharing of the profit pie.

The supply chain is a complex network of facilities and

organizations, with different and conflicting objectives; hence decision-making is also a complex process. Planning processes are therefore typically subdivided into multiple hierarchical planning levels. Each level has a

planning cycle that its processes follow. Currently, supply chain planning

decisions are usually done using three hierarchical planning levels:

Strategic or high-level planning is typically done yearly or on

an ad hoc basis

Tactical or mid-level planning is typically done quarterly or

monthly

Operational or low-level planning, largely involving scheduling, rescheduling, and execution, is typically done weekly, daily, or by shift

1.3.1.1 Strategic Level Planning - Supply Chain Network Design

To support supply chain design, optimization determines the

location, size, and the number of plants, distribution centers, and suppliers.


This level of planning includes sourcing and deployment plans for each plant,

each distribution center, and each customer. It also considers the flow of

goods through the supply chain network. Generally, supply chain network

design is done infrequently (i.e., every few years) as companies do not need to

add new plants or distribution centers on a routine basis.

1.3.1.2 Tactical Level Planning - Supply Planning

Supply chain planning at a tactical level is called supply planning

and involves optimizing the flow of goods throughout a given supply chain

configuration over a time horizon. Similar to supply chain design, supply

planning develops sourcing, production, deployment, and distribution plans.

But there are some major differences:

The supply chain network configuration is already in place

with supply entities such as suppliers, plants, distribution

centers, and transportation lanes.

Supply plans are time-dependent using a "time buckets"

concept (generally, supply planning is done monthly or

weekly).

Supply plans may consider aggregate views of multistage

production processes by incorporating partial levels of a

plant's routings and a product's bill-of-materials.

Setup and changeover times may also be considered, but not

the sequencing of orders through a manufacturing facility.

1.3.1.3 Operational Level Planning - Production Scheduling

At an operational level, supply chain planning can be viewed as

supply scheduling. For a manufacturer, supply scheduling is essentially


production scheduling done on a plant-by-plant basis. Production scheduling

develops a minute-to-minute or hour-to-hour schedule for all of a plant's

resource needs, including labor, equipment, and materials. Production

scheduling optimally sequences orders into the manufacturing process.

Generally, production scheduling is done frequently, potentially several times

a day to account for changes to orders, machine failures, material shortages,

and other plant disruptions. The process usually considers the lowest level of

detail on plant routings, product bills-of-material, and changeover and setup

times.

1.4 SUPPLY CHAIN NETWORK ARCHITECTURE OPTIMIZATION

1.4.1 Introduction

Generally, optimization problems seek a solution where decisions

need to be made in a constrained or limited resource environment. Most

supply chain optimization problems require matching demand and supply

when one, the other, or both may be limited. By and large, the most important

limited resource is the time needed to procure, make, or deliver something.

Since the rate of procurement, production, distribution, and transportation

resources is limited, demand cannot be instantaneously satisfied. It always

takes some amount of time to satisfy demand, and this may not be quick

enough unless supply is developed well in advance of demand. In addition to

time, other resources, such as warehouse storage space or a truck's capacity,

may be constrained in meeting demand. An optimization problem mainly comprises the following decision variables and constraints.

1) These SCN Decision Variables are within the planner's span of

control:


When and how much of a raw material to order from a

supplier?

When to manufacture an order?

When and how much of the product to ship to a customer or

distribution center?

2) SCN Constraints are limitations placed upon the supply plan:

A supplier's capacity to produce raw materials or components

A production line that can only run for a specified number of

hours per day and a worker that must only work so much

overtime

A customer's or distribution center's capacity to handle and

process receipts

Constraints in an optimization problem are either hard or soft. Hard

constraints, such as the number of working hours in a shift or the maximum

capacity of a truck, must be adhered to or satisfied. Soft constraints can be

relaxed or violated. Examples of soft constraints include customer due dates

or warehouse space limitations. Customer due dates can be changed or a

product may be squeezed into a warehouse temporarily, making constraints

less stringent.

1.4.2 General Optimization Objectives of Supply Chain Network

SCN Objectives maximize, minimize, or satisfy something, such as

the following:

Maximizing profits or margins

Minimizing supply chain costs or cycle times


Maximizing customer service

Minimizing lateness

Maximizing production throughput

Those unfamiliar with optimization are often confused about the

difference between a constraint and an objective. Fueling this confusion, some

factors can be formulated as either an objective or a constraint.

1.4.3 Optimization Models

Models describe the relationships among decisions, constraints, and

objectives. These are often expressed in the form of mathematical equations.

This is probably the most important but least understood part of an

optimization problem. Generally, the model must represent the "real world" to

the degree needed to capture the essence of the problem. It must represent the

important aspects of the supply chain in order to provide a useful solution.

Once an optimization problem is formulated, an algorithm or solver determines

the best course of action. A solver comprises a set of logical steps or

algorithms embodied in a computer program to search for a solution that

achieves the objective.

A solver can develop three types of solutions:

Feasible Solution-satisfies all the constraints of the problem.

Optimum Solution-the best feasible solution that achieves the

objective of the optimization problem. Although some problems

may yield more than one feasible solution, there is usually only

one optimum.

Optimized Solution-a solution that partially achieves the

objective of the optimization problem. It is not the optimum or


best solution, but it is a satisfying or reasonable one. This is

usually one of the best feasible solutions. However, for

optimization problems that have no feasible solutions, it may be

one of the best infeasible solutions.

Figure 1.7 represents an optimization problem with a generalized set

of objectives to maximize. It depicts the different types of solutions that might

be developed by a solver. In some cases a solution may be a local optimized

one (see Figure 1.7).

Figure 1.7 Graphical representation of optimization solution types


1.5 OPTIMIZATION

In recent years, optimization algorithms have received wide

attention by the research community as well as the industry. Many real world

issues are optimization problems and are combinatorial in nature. Generally,

combinatorial problems are NP-hard and they cannot be solved to optimality

within polynomial bounded computation time using exact methods like

branch and bound and dynamic programming.

To solve these combinatorial optimization problems, one often has

to use approximate methods which return near optimal solutions in a

relatively short time. Algorithms of this type are called heuristics, and they often use some problem-specific knowledge to either build or improve solutions. Recently, many researchers have focused their attention on a new class of algorithms called meta-heuristics, a set of algorithmic concepts that can be used to define heuristic methods applicable to a wide set of varied problems. The use of meta-heuristics has produced good-quality solutions to hard combinatorial problems in a reasonable time.

With rapidly advancing computer technology, computers are

becoming more powerful and correspondingly, the size and the complexity of

the problems being solved using optimization methods are also increasing.

1.5.1 Statement of an Optimization Problem

Optimization is the act of obtaining the best results for the given

problem under given circumstances. In design, manufacturing and

maintenance of any engineering system, engineers have to take many

technological and managerial decisions at several stages. The ultimate goal of

all such decisions is either to minimize the effort required or to maximize the


desired benefit. The effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, so optimization is the process of finding the conditions that give the maximum or minimum value of this function. An optimization problem can be stated as follows:

Find X = (x1, x2, …, xn) which minimizes f(X)   (1.1)

subject to the constraints

gj(X) ≤ 0,  j = 1, 2, …, m ;

hk(X) = 0,  k = 1, 2, …, p ;

where ‘X’ is an ‘n’ dimensional vector called the design vector, f(X) is termed the objective function, and gj(X) and hk(X) are known as the inequality and equality constraints respectively. The number of variables (dimensions) ‘n’ and the number of constraints ‘m’ and/or ‘p’ need not be related in any way. The problem stated in equation (1.1) is called a constrained optimization problem. Some optimization problems do not involve any constraints and are called unconstrained optimization problems (see equation (1.2)). An unconstrained optimization problem can be stated as follows:

Find X = (x1, x2, …, xn) which minimizes f(X)   (1.2)

Most of the practical problems are constrained in nature. A study of

unconstrained problems is also important for the following reasons:


a) The constraints do not have a significant influence in certain design and manufacturing problems.

b) Some of the powerful and robust methods of solving constrained optimization problems require the use of unconstrained minimization techniques.

c) The study of unconstrained minimization techniques provides the basic understanding necessary for the study of constrained minimization methods.

d) Most constrained optimization problems can be converted into unconstrained problems using methods like the penalty function approach (Deb 2000) and then solved using unconstrained algorithms.

1.5.2 Classification of Optimization Problems

Optimization problems can be broadly classified into continuous

and discrete problems. If the design variables of an optimization problem are permitted to take any real value, the problem is called a continuous optimization problem. Continuous problems may be constrained or unconstrained in nature. Most real-world unconstrained and constrained problems are highly nonlinear, multimodal and multidimensional in nature. One of the major issues for solving

constrained optimization problems is the method of handling the constraints.

A simple approach is to convert the constrained optimization problem into an

unconstrained optimization problem by adding penalty for violation of

constraints. If some or all of the design variables of an optimization problem

are restricted to have only integer (discrete) values, the problem is known as a discrete optimization problem. Examples of real-world discrete problems

include travelling salesman problems, scheduling, quadratic assignment

problems etc.

There is no single method available for solving all optimization

problems efficiently. A number of optimization techniques have been

developed over the years for solving different types of optimization problems.

Optimization algorithms are broadly classified into exact methods (traditional

methods) and approximate methods (modern heuristics). Further, exact

methods are classified according to the solution construction procedures.

Approximate methods are also classified based on deterministic search

process or the stochastic search process. Finally the stochastic search methods

are classified based on the number of solutions obtained - a single solution or

population of solutions. Traditional methods such as linear programming,

branch and bound and dynamic programming methods give exact solutions to

the problem. A gradient based method initialized with a good starting point

gives the optimum solution for the unimodal function. The direct analytical

methods can be stopped at any time and return a solution because they

process complete solutions, whereas branch and bound and dynamic

programming methods construct solutions during the search process, and

therefore they cannot be stopped to return a complete solution before the

whole search is done. A diagrammatic representation of the above

classification is shown in Figure 1.8.

1.5.2.1 Exact Methods

Exact methods are often useful in optimization. They are

deterministic, fast and give exact solutions. One class of exact methods is

based on exhaustive search. Simple exhaustive search is only possible when

the number of solutions is small enough so that all of them can be checked

within an acceptable time-span.

Figure 1.8 Classification of Optimization methods (exact/traditional methods, deterministic or constructive, such as LP, local search, gradient-based methods, branch and bound and dynamic programming, versus approximate methods/modern heuristics, single-solution or population-based, such as tabu search, simulated annealing, genetic algorithms, evolutionary programming, evolution strategies, genetic programming and particle swarm optimization)

When linear programming is used, it is not necessary to check all solutions during the exhaustive search process. The properties of the linear objective function and the convex search space make it sufficient to check only a path along the boundary of the search space. Optimization

methods such as branch and bound and dynamic programming methods work

on partial solutions, and can likewise cut off parts of the search space without

examining them. Algorithms that perform exhaustive search always find the

global optimum, but are often too time consuming or do not apply for solving

real-world problems. Either the search space of these problems is too large, or

the methods have to simplify the problem to be computationally efficient,

which is not possible for real world problems. When compared with

exhaustive search methods, local search methods and gradient based methods

are different.

In local search techniques, a new point is created within the

neighborhood of the current point, and if the neighborhood point is better than

the current point (better solution), it becomes the new current point. Gradient

based methods are analytical local search methods. These methods compute

the derivative information of the function and then move the search in the

direction of the derivative. These methods assume that the function is

differentiable with respect to the design variables and the derivatives are

continuous. Most of the real world industrial optimization problems are

discrete and combinatorial in nature, where objective functions are non

differentiable and discontinuous. In such cases gradient based methods are not

used to solve the problems.

Regarding problems with many optima (multi-modal), local search

strategies can only return a locally optimal solution. The neighborhood size

can of course be enlarged in an attempt to avoid ending up in a local

optimum, but this will increase the time complexity of the algorithm as the

size of the neighborhood is increased.


1.5.2.2 Approximate Methods

It is not possible to change the time complexity from exponential to

polynomial for the exact search algorithms on NP hard problems.

Approximate methods (modern heuristics) focus on escaping local optima and

try to find the global optimum solution. The major advantage of approximate

algorithms is that their efficiency or applicability is not tied to any specific

problem domain. Approximate methods are generally implemented as meta-

heuristics to solve the hard combinatorial problems. Some of the important

approximate methods are discussed briefly in the next subsections.

1.5.2.3 Tabu search

Tabu Search (TS) is a recent addition to non-derivative optimization

algorithms developed by Glover (1989). Tabu Search is an adaptive heuristic

strategy that was primarily designed for combinatorial optimization. It has

been applied to a wide range of problems, however, mostly combinatorial in nature. In the tabu search method, flexible memory cycles (tabu lists) are used to control the search. At each iteration the best available move that is not tabu is taken, even if it means an increase in the objective function value. The idea is that when the search reaches a local minimum, it is forced to escape via a different path.

A short term memory is implemented as a list of previously visited

solutions that are classed as tabu. While a solution is contained within the tabu list, it cannot be returned to. The tabu restrictions stop the search from

returning to the local optima and the new search trajectory ensures that new

regions of the search space are explored and the global optimum located.
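As an illustrative sketch only (not the implementation used in this thesis; the neighborhood function, tabu tenure and toy objective are assumptions), the basic tabu search loop described above can be written in Python as:

    import random

    def tabu_search(initial, objective, neighbors, max_iter=200, tenure=10):
        # Minimal tabu search: take the best non-tabu neighbor at every iteration,
        # even if it worsens the objective, and remember recently visited solutions.
        current = best = initial
        tabu_list = [initial]                         # short-term memory (tabu list)
        for _ in range(max_iter):
            candidates = [n for n in neighbors(current) if n not in tabu_list]
            if not candidates:
                break
            current = min(candidates, key=objective)  # best admissible (non-tabu) move
            tabu_list.append(current)
            if len(tabu_list) > tenure:               # bounded memory (tabu tenure)
                tabu_list.pop(0)
            if objective(current) < objective(best):
                best = current
        return best

    # Toy usage: minimize f(x) = x^2 over the integers; neighbors are x - 1 and x + 1.
    print(tabu_search(random.randint(-50, 50), lambda x: x * x, lambda x: [x - 1, x + 1]))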


1.5.2.4 Simulated Annealing approach

The simulated annealing process is a stochastic variant of the local search method, but, in contrast, it can escape local optima. It simulates the slow cooling of molten metal to achieve the minimum function value in a minimization problem. The cooling phenomenon is simulated by a temperature-like control parameter introduced through the concept of the Boltzmann probability distribution. According to the Boltzmann probability distribution, a system in thermal equilibrium at a temperature ‘T’ has its energy distributed probabilistically according to P(E) ∝ exp(−E/kT), where k is the Boltzmann constant. This expression suggests that a system at a high temperature has

almost uniform probability of being at any energy state, but at a low

temperature it has a small probability of being at a high energy state.

Metropolis et al (1953) suggested one way to implement Boltzmann

probability distribution in thermodynamic systems. The same can also be used

in the function minimization context. Let us say, at any instant the current point is s0 and the function value at that point is E(s0) = f(s0). The probability of the next point (from the current point s0) being at s depends on the difference in the function values at these two points, ΔE = E(s) − E(s0), and is calculated using the Boltzmann probability distribution:

P(s) = min{1, exp(−ΔE / kT)}   (1.3)

If ΔE ≤ 0, this probability is one and the point s is always accepted. In the function minimization context, this makes sense because if the function value at s is better than that at s0, the point s must be accepted. The interesting situation happens when ΔE > 0, which implies that the function value at s is worse than that at s0. According to many traditional algorithms, the point s must not be chosen in this situation. But according to the Metropolis algorithm, there is some finite probability of selecting the point s even though it is worse than the point s0. However, this probability is not the same for all situations; it depends on the relative magnitudes of ΔE and ‘T’. The pseudo code of simulated annealing is shown in Figure 1.9.

Figure 1.9 Pseudo Code of Simulated Annealing Algorithm

Step 1 : Choose an initial point x(0) and a termination criterion ε. Set T to a sufficiently high value, set the number of iterations to be performed at a particular temperature to ‘n’, and set t = 0.

Step 2 : Calculate a neighborhood point x(t+1) = N(x(t)). Usually a random point in the neighborhood is created.

Step 3 : If ΔE = E(x(t+1)) − E(x(t)) < 0, set t = t + 1; else create a random number u in the range (0, 1). If u ≤ exp(−ΔE / T), set t = t + 1; else go to Step 2.

Step 4 : If |x(t+1) − x(t)| < ε and T is small, terminate; else if (t mod n) = 0, lower ‘T’ according to a cooling schedule and go to Step 2; else go to Step 2.
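A direct Python rendering of the above pseudo code is sketched below; it is a minimal illustration under assumptions (a one-dimensional real-valued search, a uniform random neighborhood step, a geometric cooling schedule and a toy objective), not the implementation used in this thesis:

    import math
    import random

    def simulated_annealing(f, x0, T=100.0, n=50, alpha=0.9, T_min=1e-3, proposals=200000):
        # Minimization by simulated annealing, following Steps 1-4 of Figure 1.9.
        x = x0
        t = 0
        for _ in range(proposals):
            x_new = x + random.uniform(-1.0, 1.0)               # Step 2: random neighborhood point
            dE = f(x_new) - f(x)                                # difference in function values
            if dE < 0 or random.random() <= math.exp(-dE / T):  # Step 3: Metropolis acceptance
                x, t = x_new, t + 1
                if t % n == 0:                                  # Step 4: lower T by a cooling schedule
                    T *= alpha
            if T < T_min:                                       # stop once the system is "cold"
                break
        return x

    # Toy usage: minimize f(x) = (x - 3)^2 from a random starting point.
    print(simulated_annealing(lambda x: (x - 3.0) ** 2, x0=random.uniform(-10.0, 10.0)))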


1.5.2.5 Evolutionary Algorithms (EA)

Evolutionary Algorithms (EA) are stochastic search methods

inspired by the metaphor of natural biological evolution. The Genetic Algorithm (GA) is an Evolutionary Algorithm and was introduced by John Holland

(1975). These algorithms operate on a population of potential solutions

applying the principle of survival of the fittest to produce better

approximations to a solution.

Figure 1.10 Genetic algorithm flow chart

The initial population is randomly generated over the search space.

At each generation, operators borrowed from natural genetics such as

selection, recombination, mutation, migration, inversion, reinsertion, etc. are

applied to the individuals from the population. By applying genetic operators


these individuals are evolved. Each individual in the population is evaluated using a quality (fitness) function, and based on this quality the best individuals are selected at each generation. In this way new solutions are obtained. Some of these new solutions can be better than the existing solutions. There are many modalities for accepting the new solutions (also called offspring) into the population. Some algorithms accept a new solution only if it is better than its parent (or parents); elitist algorithms accept the newly obtained solution into the population. The general flow chart is depicted in Figure 1.10.
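The following Python sketch illustrates the generation loop of Figure 1.10 on a toy problem; the binary representation, tournament selection, one-point crossover, bit-flip mutation and parameter values are illustrative assumptions, not the operators used later in this thesis:

    import random

    def genetic_algorithm(fitness, n_pop=30, n_genes=10, n_gen=100, pc=0.9, pm=0.05):
        # Maximize `fitness` over binary strings: evaluate, select, recombine, mutate.
        pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(n_pop)]
        for _ in range(n_gen):
            def select():                       # selection: binary tournament on fitness
                a, b = random.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b
            new_pop = []
            while len(new_pop) < n_pop:
                p1, p2 = select(), select()
                if random.random() < pc:        # recombination: one-point crossover
                    cut = random.randint(1, n_genes - 1)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                else:
                    c1, c2 = p1[:], p2[:]
                for child in (c1, c2):          # mutation: flip each bit with probability pm
                    for i in range(n_genes):
                        if random.random() < pm:
                            child[i] ^= 1
                    new_pop.append(child)
            pop = new_pop[:n_pop]               # the new generation replaces the old one
        return max(pop, key=fitness)

    # Toy usage: maximize the number of ones in the string (OneMax).
    print(genetic_algorithm(sum))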

1.5.2.6 Ant Colony approach

The idea of imitating the behavior of ants for finding good solutions

to combinatorial optimization problems was initiated by Dorigo et al (1991).

Ant Colony Optimization (ACO) simulates the collective foraging habits of ants venturing out for food and bringing it back to the nest. Real ants are capable of finding the shortest path from a food source to their nest without using visual cues, as they have poor vision. They communicate information concerning food sources via an aromatic essence; this chemical substance deposited by ants as they travel is called a pheromone. A greater amount of pheromone on a path gives an ant a stronger stimulation and thus a higher probability of following it. Ants essentially move randomly, but when they encounter a pheromone trail, they decide whether or not to follow it, and if they do so, they deposit their own pheromone on the trail, which reinforces the path. Ants reaching the food source by a shorter path have a higher traffic intensity and therefore make the quantity of pheromone laid down on the shorter path grow faster. However, there is always a small

probability that an ant will not follow a well-marked pheromone trail. This

small probability allows for exploration of other trails. The foraging behavior

of the real ant colonies can be used to solve combinatorial optimization


problems. The detailed descriptions about the ACO algorithms and their

implementations are presented in the book “Ant Colony Optimization”

(Dorigo and Stutzle 2004).
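As a sketch of the pheromone-biased path choice described above (an assumption-based toy, not the ACO implementation of Dorigo and Stutzle), an artificial ant can pick its next city with probability proportional to the pheromone level times a heuristic desirability:

    import random

    def choose_next_city(current, unvisited, pheromone, distance, alpha=1.0, beta=2.0):
        # Probability of city j is proportional to pheromone^alpha * (1/distance)^beta.
        weights = [(pheromone[current][j] ** alpha) * ((1.0 / distance[current][j]) ** beta)
                   for j in unvisited]
        r = random.uniform(0.0, sum(weights))     # roulette-wheel selection
        acc = 0.0
        for j, w in zip(unvisited, weights):
            acc += w
            if acc >= r:
                return j
        return unvisited[-1]

    # Toy usage: 3 cities, uniform pheromone; city 2 is the closest to city 0.
    pher = [[1.0] * 3 for _ in range(3)]
    dist = [[0.0, 5.0, 2.0], [5.0, 0.0, 3.0], [2.0, 3.0, 0.0]]
    print(choose_next_city(0, [1, 2], pher, dist))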

1.5.2.7 Particle Swarm Optimization approach

The particle swarm optimization algorithm (PSO) is a relatively new

approach in modern heuristics for optimization. PSO is one of the

evolutionary computation methods. PSO algorithm was first proposed by

Kennedy and Eberhart (1995) for continuous function optimization. Since its

introduction, PSO has attracted a lot of researchers around the world.

PSO originated from the research of food hunting behaviors of

birds. Researchers found that in the course of flight flocks of birds would

always suddenly change direction, scatter and gather. Their behaviors are

unpredictable but always consistent as a whole, with individuals keeping the

most suitable distance. Through the research of the behaviors of similar

biological communities, it is found that there exists a social information

sharing mechanism in biological communities. This mechanism provides an

advantage for the evolution of biological communities, and provides the basis

for the formation of PSO. Every particle in PSO is a solution in the solution space. It adjusts its flight according to its own and its companions’ flying experience. The best position found in the course of flight of each particle is the best solution found by that particle, while the best position of the whole flock is the best solution found by the flock. The former is called pBest, and the latter is called gBest. Every particle continuously updates itself through the above-mentioned best solutions, and thus a new generation of the community comes into being. In practical operation, the fitness function, which is determined by the optimization problem, assesses how good or bad each particle is.


According to Angeline (1998), the two main distinctions between

PSO and evolutionary algorithms are:

i) Evolutionary algorithms rely on three mechanisms in their

processing: parent representation, selection of individuals

and the fine tuning of their parameters. In contrast, PSO only

relies on two mechanisms, since PSO does not adopt an

explicit selection function. The absence of a selection

mechanism in PSO is compensated by the use of leaders to

guide the search. However, there is no notion of offspring

generation in PSO as with evolutionary algorithms.

ii) A second difference between evolutionary algorithms and

PSO has to do with the way in which the individuals are

manipulated. PSO uses an operator that sets the velocity of a

particle to a particular direction. This can be seen as a

directional mutation operator in which the direction is

defined by both the particle’s personal best and the global

best (of the swarm). If the direction of the personal best is

similar to the direction of the global best, the angle of

potential directions will be small, whereas a larger angle will

provide a larger range of exploration. In contrast,

evolutionary algorithms use a mutation operator that can set

an individual in any direction (although the relative

probabilities for each direction may be different). In fact, the

limitations exhibited by the directional mutation of PSO has

led to the use of mutation operators similar to those adopted

in evolutionary algorithms.


Following are the two key aspects by which PSO has become more

popular:

The main algorithm of PSO is relatively simple (since in its

original version, it only adopts one operator for creating new

solutions, unlike most evolutionary algorithms) and its

implementation is, therefore, straightforward.

PSO has been found to be very effective in a wide variety of

applications, being able to produce very good results at a very

low computational cost.

1.6 VARIANTS OF PARTICLE SWARM OPTIMIZATION

The variations introduced in the basic PSO algorithm by various

researchers are discussed in this section.

1.6.1 Basic Particle Swarm Optimization algorithm (B-PSO)

In order to establish a common terminology, in the following we

provide some definitions of several technical terms commonly used:

Swarm: Population of the algorithm.

Particle: Member (individual) of the swarm. Each particle represents

a potential solution to the problem being solved. The position of a particle is

determined by the solution it currently represents.

pbest (personal best): Personal best position of a given particle, so

far. That is, the position of the particle that has provided the greatest success

(measured in terms of a scalar value analogous to the fitness adopted in

evolutionary algorithms).


lbest (local best): Position of the best particle member of the

neighborhood of a given particle.

gbest (global best): Position of the best particle of the entire swarm.

Leader: Particle that is used to guide another particle towards better

regions of the search space.

Velocity (vector): This vector drives the optimization process, that

is, it determines the direction in which a particle needs to “fly” (move), in

order to improve its current position.

Inertia weight: Denoted by w, the inertia weight is employed to

control the impact of the previous history of velocities on the current velocity

of a given particle.

Learning factor: Represents the attraction that a particle has toward

either its own success or that of its neighbors. Two learning factors are used, c1 and c2, where c1 is the cognitive learning factor and represents the

attraction that a particle has toward its own success and c2 is the social

learning factor and represents the attraction that a particle has toward the

success of its neighbors. Both, c1 and c2, are usually defined as constants.

In PSO in each iteration t, each particle k keeps track of its

coordinates in the problem space, which are associated with the best solution

it has achieved so far. This value is called Pk. Another best value that is

tracked is the overall best value, and its location, obtained so far by any

particle in the population. This location is called G (Global Best). In the

original PSO concept proposed by Kennedy and Eberhart (1995), in every

iteration t, changing the velocity of each particle makes it move toward possibly new Pk and G locations. The kth particle in the multidimensional search space is represented by Xk (i.e., (Xk1, Xk2, … XkD)). The best previous position (the position giving the best objective function value) of the kth particle is recorded and represented by Pk (i.e., (Pk1, Pk2, … PkD)), and the global best solution (obtained so far) is denoted by G (i.e., (G1, G2, …, GD)). The rate of the position change (i.e., velocity) for the particle is represented by Vk (i.e., (vk1, vk2, …, vkD)).

The new velocity and new position of the particle are calculated using the following equations,

Velocity (vkd)new = vkd + c1 r1 (Pkd − Xkd) + c2 r2 (Gd − Xkd),  for d = 1, 2, …, D   (1.3)

(Xkd)new = Xkd + (vkd)new   (1.4)

The equations (1.3) and (1.4) describe the flying trajectory of a population of

particles. Equation (1.3) describes how the velocity is dynamically updated

and equation (1.4) describes the position update of the flying particle. The

equation (1.3) consists of three parts. The first part is known as the

momentum part. The velocity cannot be changed abruptly. It is changed from

the current velocity. The second part is known as the cognitive part, which represents the private thinking of the particle, learning from its own flying experience. The third part is the social part, which represents the collaboration among particles, learning from the group’s flying experience. In equation (1.3), if the sum

of the three parts on the right side exceeds a constant value specified by the

user, then the velocity on that dimension is assigned to be ±vmax; that is, particles’ velocities on each dimension are clamped to a maximum velocity vmax, which is an important parameter. Originally vmax was the only parameter required to be adjusted by the users. A large vmax leads the particles to fly past good solution areas, while a small vmax can leave the particles trapped in local minima, unable to fly into better solution areas. Usually a fixed constant value is used as vmax, but a well designed, dynamically changing vmax might improve the performance of PSO. The velocity and position updates in Particle Swarm Optimization are depicted in Figure 1.11(b).

PSO can be applied, like other algorithms in the field of

evolutionary computation, in the areas of solving discrete problems involving

system design, multi-objective optimization, classification, pattern

recognition, system modelling, scheduling, planning, robotic applications,

decision making, simulation and identification.
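A compact Python sketch of the basic PSO update of equations (1.3) and (1.4) is given below; the parameter values, bounds and test function are illustrative assumptions rather than the settings used in this research:

    import random

    def basic_pso(f, dim, n_particles=20, n_iter=100, c1=2.0, c2=2.0, v_max=1.0, lo=-10.0, hi=10.0):
        # Minimize f: velocity update (1.3) with clamping to +/- v_max, position update (1.4).
        X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        P = [x[:] for x in X]                       # personal bests (pBest)
        G = min(P, key=f)[:]                        # global best (gBest)
        for _ in range(n_iter):
            for k in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    V[k][d] += c1 * r1 * (P[k][d] - X[k][d]) + c2 * r2 * (G[d] - X[k][d])
                    V[k][d] = max(-v_max, min(v_max, V[k][d]))    # clamp to +/- v_max
                    X[k][d] += V[k][d]
                if f(X[k]) < f(P[k]):               # update personal and global bests
                    P[k] = X[k][:]
                    if f(P[k]) < f(G):
                        G = P[k][:]
        return G

    # Toy usage: minimize the sphere function in three dimensions.
    print(basic_pso(lambda x: sum(v * v for v in x), dim=3))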

1.6.2 Linearly Decreasing Inertia Weight PSO algorithm (LDIW-PSO)

In evolutionary programming, the balance between the global and

local search is adjusted through adapting the variance (strategy parameter) of

the Gaussian random function or step size, which can even be encoded into

the chromosomes to undergo evolution itself.


The generic PSO algorithm for a minimization problem is given below.

Figure 1.11 (a) The generic PSO algorithm for a minimization problem

Step 1: Initialization of population: Generate K particles with random positions Xkd and initial velocities vkd in the multi (D) dimensional problem space (with d = 1, 2, …, D).

Step 2: For each particle, evaluate the optimization function f(Xk) yielded by the particle.

Step 3: Initialize the particle best Pkd = Xkd for d = 1, 2, …, D, and the global best G = Xk*, where k* is the particle with the lowest f(Xk); here f(Xk), in general, denotes the value of the objective function yielded by particle k.

Step 4: Apply the velocity and move the particle according to Equation (1.3) and Equation (1.4), respectively:

Velocity (vkd)new = vkd + c1 r1 (Pkd − Xkd) + c2 r2 (Gd − Xkd),  for d = 1, 2, …, D

and hence obtain (Xkd)new = Xkd + (vkd)new.

In the above, c1, c2 are two positive constants, and r1 and r2 are two uniformly distributed random numbers in the range (0, 1).

Step 5: Evaluate the new positions, update Pk and G, and go back to Step 4 until the termination criterion is met.


Figure 1.11(b) Depiction of the velocity and position updates in

Particle Swarm Optimization

Velocity changes of a PSO consist of three parts, the “social” part,

the “cognitive” part, and the “momentum” part. The balance among these

parts determines the global and local search ability, and hence the

performance of a PSO. The first new parameter added into the original PSO

algorithm is the inertia weight ‘w’ (Shi and Eberhart 1998a, 1998b). The

inertia weight has characteristics that are reminiscent of the temperature in the

simulated annealing. A large inertia weight facilitates a global search, while a

small inertia weight facilitates local search. By linearly decreasing the inertia

weight from a relatively large value to a small value through the course of the

PSO run, the PSO tends to have more global search ability at the beginning of

the run while having more local search ability near the end of the run.

The dynamic equation of PSO with inertia weight is modified to be:

Velocity (vkd)new = w vkd + c1 r1 (Pkd − Xkd) + c2 r2 (Gd − Xkd),  for d = 1, 2, …, D   (1.5)

The following weighting function is usually utilized in equation (1.5):

w = wmax − ((wmax − wmin) / itermax) × iter   (1.6)

where wmax is the maximum inertia weight, wmin is the minimum inertia weight, itermax is the maximum iteration number and iter is the current iteration number. The new position of the particle is then obtained as follows,

(Xkd)new = Xkd + (vkd)new   (1.7)

Equation (1.5) is the same as equation (1.3), except for a new

parameter, inertia weight w. The inertia weight is introduced to balance

between the global and local search abilities. A large inertia weight facilitates global search, while a small inertia weight facilitates local search.

The vmax can be simply set to the value of the range of each variable and the

PSO algorithm still performs well enough, if not better.
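A small sketch of the linearly decreasing inertia weight of equation (1.6) is shown below; the wmax and wmin values are illustrative assumptions, and the surrounding swarm loop is as in the basic PSO sketch above:

    def linear_inertia_weight(iteration, iter_max, w_max=0.9, w_min=0.4):
        # Equation (1.6): w decreases linearly from w_max to w_min over the run.
        return w_max - (w_max - w_min) * iteration / iter_max

    # In the velocity update (1.5) the momentum term becomes w * V[k][d]:
    # V[k][d] = linear_inertia_weight(it, n_iter) * V[k][d] \
    #           + c1 * r1 * (P[k][d] - X[k][d]) + c2 * r2 * (G[d] - X[k][d])
    print([round(linear_inertia_weight(i, 100), 3) for i in (0, 50, 100)])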

1.6.3 Clerc Constriction Factor Method PSO algorithm (CFM-PSO)

Another interesting variation of PSO has been reported by Clerc and Kennedy (2002); they introduced the new concept of a constriction coefficient, which controls the velocity update relation so as to limit the explosion of the particles beyond bounds.

Another parameter called constriction coefficient is introduced with

the hope that it can ensure that a PSO converges to the global minimum (see


Clerc and Kennedy 2002). A simplified method of incorporating the parameters is given in equation (1.8), where k is a function of c1 and c2.

Velocity (vkd)new = k [ vkd + c1 r1 (Pkd − Xkd) + c2 r2 (Gd − Xkd) ],  for d = 1, 2, …, D   (1.8)

and hence obtain

(Xkd)new = Xkd + (vkd)new   (1.9)

with

k = 2 / | 2 − φ − sqrt(φ² − 4φ) |   (1.10)

where φ = c1 + c2 and φ > 4.

Mathematically, equations (1.5) and (1.8) are proved to be

equivalent by setting inertia weight w to be k, and c1 and c2 to meet the

condition φ = c1 + c2 and φ > 4. The PSO algorithm with the constriction

factor can be considered as a special case of the PSO algorithm with inertia

weight and the three parameters related through equation (1.10). A better

approach to use is to utilize the PSO with constriction factor while limiting

vmax to Xmax , the range of each variable on each dimension, or utilize the PSO

with inertia weight, w and c1 and c2 selected according to equation (1.8) (see

Eberhart and Shi 2000). When Clerc’s constriction method is used, φ is commonly set to 4.1 and the constant multiplier k is approximately 0.729.

This is equivalent to the PSO with inertia weight w = 0.729 and

c1 = c2 =1.49445. Shi and Eberhart (1998a, 1999) introduced a linearly

decreasing inertia weight to the PSO and then further designed fuzzy systems


to nonlinearly change the inertia weight (Shi and Eberhart 2001a, 2001b).

Recently, a new variation of PSO model introducing nonlinear variation of

inertia weight with dynamic adaptation was proposed by Chatterjee and Siarry

(2006). The search process of a PSO algorithm is nonlinear and complicated.

A PSO with well-selected parameter set can have good performance, but

much better performance could be obtained if a dynamically changing

parameter is well designed (Shi et al 2005).
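The constriction factor of equation (1.10) can be computed directly; the sketch below uses the commonly cited choice φ = 4.1 (c1 = c2 = 2.05), which reproduces the k ≈ 0.729 quoted above:

    import math

    def constriction_factor(c1=2.05, c2=2.05):
        # Equation (1.10): k = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, with phi = c1 + c2 > 4.
        phi = c1 + c2
        return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

    print(constriction_factor())   # approximately 0.729
    print(0.729 * 2.05)            # = 1.49445, the effective c1 = c2 quoted in the text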

1.6.4 Non-Linear Inertia Weight PSO algorithm (NLIW-PSO)

Although the algorithms discussed above have shown important advances by providing a high speed of convergence on specific problems, it has also been reported that they have a tendency to get stuck in near-optimal solutions and may find it difficult to improve solution accuracy by fine tuning. This work uses a new variation of the PSO model, proposed by Chatterjee and Siarry (2006), that introduces a non-linear variation of the inertia weight along with the particle’s old velocity to improve the speed of convergence as well as to fine-tune the search in the multidimensional space. This method suggests a complete set of free parameters for any given problem, saving the user from a tedious trial-and-error approach to determine them for each specific problem.

While introducing the concept of inertia weight (w), Shi and

Eberhart (2002) observed that a reasonable choice for ‘w’ should decrease

with a higher choice of vmax. In fact, when vmax is assigned the same value as

xmax, a reasonable w under this condition coincides with that value obtained

for ‘w’ when vmax is independently chosen as a high value. As a general remark, Shi and Eberhart (2001a) opined that a better performance would be obtained if the inertia weight were chosen as a time-varying, linearly decreasing quantity,

rather than being a constant value and supported their statement with a single

case study. A higher value of the inertia weight implied larger incremental


changes in velocity per unit time step which meant exploration of new search

areas in pursuit of a better solution. However, a smaller inertia weight meant less variation in velocity, providing slower updating for fine tuning a local search. It was inferred that the system should start with a high inertia weight for coarse global exploration and that the inertia weight should linearly decrease to facilitate final local search exploration. This should help the system approach the optimum of the fitness function quickly. This method proposes a new nonlinear-function-modulated inertia weight adaptation with time for improved performance of the PSO algorithm.

This dynamic adaptation for the PSO algorithm proposes updating the velocity according to relation (1.11). The important modification is the determination of the inertia weight witer as a nonlinear function of the present iteration number (iter) at each time step. The proposed adaptation of witer is given by relation (1.12), where winitial is the initial inertia weight at the start of a given run, wfinal is the final inertia weight at the end of a given run (when iter = itermax), itermax is the maximum number of iterations in a given run, iter is the iteration number at the present time step, witer is the inertia weight at the present time step and n is the nonlinear modulation index.

The system starts with a high initial inertia weight (winitial), which should allow it to explore new search areas aggressively, and then decreases it gradually according to relation (1.12), following different paths for different values of n to reach wfinal at iter = itermax. The proposed algorithm also attempts to derive a reasonable set of choices for the free parameters of any given system, i.e. {winitial, wfinal, n}, on the basis of a fixed itermax. The objective is to arrive at an attractive solution for any given problem with the known, fixed free parameters by applying the proposed PSO variation, which should require less computational burden and time compared to trial-and-error approaches.


Velocity (vkd)new = witer vkd + c1 r1 (Pkd − Xkd) + c2 r2 (Gd − Xkd),  for d = 1, 2, …, D   (1.11)

witer = (winitial − wfinal) × ((itermax − iter) / itermax)^n + wfinal   (1.12)

where the remaining quantities are defined by relations (1.13a) and (1.13b) and the following parameter choices:

iter       the iteration number, starting from 0;

itermax    the maximum number of iterations for which the PSO algorithm is run; for example, if 1000 iterations are run, itermax = 999;

n          the nonlinear modulation index, which can be taken as 1.2;

winitial   the initial inertia weight, = 0.2; by choosing m = −2.5 × 10⁻⁴, wfinal can be calculated;

r1 and r2  two uniformly distributed random numbers generated from the seed.


The above is the basic equation used to calculate the velocity; the new position of the particle is then obtained as follows,

(Xkd)new = Xkd + (vkd)new   (1.14)

This algorithm was found to give better solutions for the benchmark functions considered in their paper, compared with the results obtained by well-known PSO models.
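A sketch of the nonlinear inertia weight of relation (1.12) follows; the winitial and wfinal values are illustrative assumptions (relations (1.13a) and (1.13b) are not reproduced here), and n = 1.2 matches the value suggested above:

    def nonlinear_inertia_weight(iteration, iter_max, w_initial=0.9, w_final=0.4, n=1.2):
        # Relation (1.12): w decays from w_initial to w_final along a path shaped by n.
        return (w_initial - w_final) * ((iter_max - iteration) / iter_max) ** n + w_final

    # The decay path changes with the modulation index n (n = 1 gives a linear decrease).
    for n in (0.8, 1.2, 3.0):
        print(n, [round(nonlinear_inertia_weight(i, 999, n=n), 3) for i in (0, 500, 999)])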

1.7 THE MULTI-OBJECTIVE OPTIMIZATION PROBLEM

1.7.1 Basic concept of multi-objective programming

This section focuses on the basic concepts and up-to-date development of Evolutionary Multi-objective Optimization (EMO) methods. Multi-objective programming problems are formulated as follows:

Minimize or Maximize f(x) = (f1(x), f2(x), …, fr(x))   (1.15)

The constraint set X may be given by

gj(x) ≤ 0,  j = 1, …, m   (1.16)

and/or a subset of Rn itself. For the problem (MOP), we define Pareto solutions as follows:

Definition: A solution x* ∈ X is said to be Pareto optimal if there is no better solution x ∈ X other than x*, namely, if there is no x ∈ X such that

f(x) ≤ f(x*) and f(x) ≠ f(x*)   (1.17)

In general, there may be many Pareto solutions. The final decision is

made among them taking the total balance over all criteria into account. This


is a problem of value judgment by the decision maker. The total balancing over criteria is usually called trade-off.
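A small Python sketch of the Pareto test of equation (1.17) for minimization is given below (an illustrative helper with toy data, not part of the thesis):

    def dominates(fa, fb):
        # fa dominates fb (minimization): no worse in every objective, strictly better in one.
        return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

    def pareto_front(points):
        # Keep the points that no other point dominates: the Pareto solutions.
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Toy usage with two objectives, both to be minimized.
    objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
    print(pareto_front(objs))   # (3.0, 4.0) is dominated by (2.0, 3.0) and is removed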

1.7.2 Constraint Handling

In many algorithms the underlying optimization problem is assumed to be free from any constraint. However, this is hardly the case when it comes to solving real-world optimization problems, as most engineering problems are constrained. The general mathematical formulation of the constrained

optimization problem has been discussed in the above section 1.5.1.

Constraints divide the search space in to two divisions-feasible and infeasible

regions. Constraints can be two types: equality and inequality constraints.

Constraints can be hard or soft. A constraint is considered hard if it must be

satisfied in order to make a solution acceptable. A soft constraint, on the other

hand, can be relaxed to some extent in order to accept solution. Hard equality

constraints are difficult to satisfy, particularly if the constraints are nonlinear

in decision variables. Such hard equality constraints may possible to relax (or

made soft) by converting them in to an inequality constraint with some loss of

accuracy (Deb 1995). In all of the constraint handling strategies discussed

here, we assume greater-than-equal-to type inequality constraints only. It is

important to reiterate that this relaxation does not mean that the algorithms

cannot handle equality constraints. Instead, it suggests that equality

constraints should be handled by converting them in to relaxed inequality

constraints.

1.7.3 Transformation Methods

Transformation methods are the simplest and most popular methods of handling constraints. The constrained problem is transformed into a sequence of unconstrained problems by adding penalty terms for each constraint violation: if a constraint is violated at any point, the objective function is penalized by an amount depending on the extent of the constraint violation. Penalty methods vary in the way the penalty is assigned. Some penalty methods cannot deal with infeasible points at all and even penalize feasible points that are close to the constraint boundary. These methods are known as interior penalty methods. In these methods, every sequence of unconstrained optimization finds a feasible solution. The solution found in one sequence is used as the starting solution for the next sequence of unconstrained optimization. In subsequent sequences, the solution improves gradually and finally converges to the optimum solution.

The other kind of penalty method penalizes infeasible points but does not penalize feasible points. These methods are known as exterior penalty methods. In these methods, every sequence of unconstrained optimization finds an improved yet infeasible solution.

1.7.4 Penalty function approach

The penalty function approach is a popular constraint handling strategy. Minimization of all objective functions is assumed here; however, a maximization function can be handled by converting it into a minimization function using the duality principle.

Before the constraint violation is calculated, all constraints are normalized. Thus, the resulting constraint functions are gj (x(i)) ≥ 0 for j = 1, 2, ……, J. For each solution x(i), the violation of each constraint is calculated as follows:

ωj (x(i)) = │gj (x(i))│ , if gj (x(i)) < 0 ;
ωj (x(i)) = 0 , otherwise        (1.18)


Thereafter, all constraint violations are added together to get the overall constraint violation:

Ω ( x(i) ) = Σ ωj ( x(i) ) , summed over j = 1, 2, ……, J        (1.19)

This constraint violation is then multiplied by a penalty parameter Rm and the product is added to each of the objective function values:

Fm (x(i)) = fm (x(i)) + Rm Ω ( x(i) )        (1.20)

The function Fm takes the constraint violations into account. For a feasible solution, the corresponding Ω term is zero and Fm becomes equal to the original objective function fm. However, for an infeasible solution, Fm > fm, thereby adding a penalty corresponding to the total constraint violation. The penalty parameter Rm is used to make both terms on the right side of the above equation have the same order of magnitude. Since the original objective functions could be of different magnitudes, the penalty parameter must also vary from one objective function to another. A number of static and dynamic strategies to update the penalty parameter have been suggested in the single-objective GA literature (Michalewicz 1992; Michalewicz and Schoenauer 1996; Homaifar et al 1994). Any of these techniques can be used here as well.
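A minimal Python sketch of relations (1.18)-(1.20) is given below. It is only an illustration: the list of normalized constraint functions, the objective fm and the penalty parameter Rm are assumed inputs, and the helper names are hypothetical.

def constraint_violation(x, constraints):
    # Relations (1.18)-(1.19): for normalized constraints g_j(x) >= 0, each
    # violated constraint contributes omega_j = |g_j(x)|; satisfied ones contribute 0
    return sum(abs(g(x)) for g in constraints if g(x) < 0)

def penalized_objective(x, f_m, constraints, R_m):
    # Relation (1.20): F_m(x) = f_m(x) + R_m * Omega(x)
    return f_m(x) + R_m * constraint_violation(x, constraints)

# Example: minimize f(x) = (x - 3)**2 subject to g(x) = 1 - x >= 0
# F = penalized_objective(2.0, lambda x: (x - 3) ** 2, [lambda x: 1 - x], R_m=100.0)

For a feasible x the Ω term vanishes and Fm reduces to fm, exactly as noted above.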

The change of the penalty parameter Rm in successive sequences of the penalty function method depends on whether an exterior or an interior penalty term is used. If the optimum point of the unconstrained objective problem is itself feasible, an initial penalty parameter Rm = 0 (or any other value of Rm) will solve the constrained problem. In the exterior penalty method, either a feasible or an infeasible point can be used as the initial point of the first sequence, whereas in the interior penalty method an initial feasible point is usually desired. In the case of an exterior penalty term, a small initial value of Rm results in an optimum solution close to the unconstrained optimum point. As Rm is increased in successive sequences, the solution improves and finally approaches the true constrained optimum point. With an interior penalty term, however, a large initial value of Rm results in a feasible solution far away from the constraint boundaries. As Rm is decreased, the solution is improved and approaches the true optimum point. The general algorithm of constraint handling by the penalty approach is shown in Figure 1.12.

The main advantage of this method is that any constraint (convex or non-convex) can be handled. The algorithm does not take into account the structure of the constraints; that is, linear or nonlinear constraints can equally be tackled with this algorithm.

Figure 1.12 General algorithm of constraint handling by penalty approach


Step 1 : Choose two termination parameters Є1, Є2, an initial solution x(0), a penalty term, and an initial penalty parameter Rm(0). Choose a parameter c to update Rm such that 0 < c < 1 is used for interior penalty terms and c > 1 is used for exterior penalty terms. Set t = 0.
Step 2 : Form Fm (x(t)) = fm (x(t)) + Rm(t) Ω ( x(t) ).
Step 3 : Starting with the solution x(t), find x(t+1) such that Fm (x(t+1)) is minimum for the fixed value Rm(t). Use Є1 to terminate the unconstrained search.
Step 4 : Is │ Fm (x(t+1)) − Fm (x(t)) │ ≤ Є2 ?
         If yes, set xT = x(t+1) and terminate;
         else go to Step 5.
Step 5 : Choose Rm(t+1) = c · Rm(t), set t = t + 1 and go to Step 2.
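The steps of Figure 1.12 can be sketched in Python as follows. This is a minimal sketch under stated assumptions: it reuses the penalized_objective helper sketched after relation (1.20), uses scipy.optimize.minimize only as a stand-in for the inner unconstrained search, takes the exterior-penalty choice c > 1, and all numeric parameter values are illustrative.

import numpy as np
from scipy.optimize import minimize   # stand-in for the inner unconstrained search

def sequential_penalty(f_m, constraints, x0, R0=0.1, c=10.0,
                       eps1=1e-8, eps2=1e-6, max_sequences=50):
    # Step 1: termination parameters eps1 and eps2, initial solution x0,
    # initial penalty parameter R0 and update factor c (> 1 for an exterior
    # penalty term; c < 1 would be used for an interior term)
    x, R, F_prev = np.asarray(x0, dtype=float), R0, None
    for _ in range(max_sequences):
        # Step 2: form the penalized function for the current penalty parameter
        F = lambda z, R=R: penalized_objective(z, f_m, constraints, R)
        # Step 3: unconstrained minimization starting from the previous solution,
        # terminated with tolerance eps1
        res = minimize(F, x, tol=eps1)
        x_next, F_next = res.x, res.fun
        # Step 4: stop when successive penalized optima are close enough
        if F_prev is not None and abs(F_next - F_prev) <= eps2:
            return x_next
        # Step 5: update the penalty parameter and repeat
        x, F_prev, R = x_next, F_next, c * R
    return x

# Example (same toy problem as above): the sequence approaches the constrained
# optimum x = 1 of (x - 3)**2 subject to 1 - x >= 0
# x_star = sequential_penalty(lambda x: (x[0] - 3) ** 2, [lambda x: 1 - x[0]], x0=[0.0])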


1.8 OUTLINE OF THE PRESENT STUDY

In this thesis, an attempt has been made at the design, modeling and analysis of supply chain network architectures using different variants of PSO as solution methodologies. PSO algorithms are proposed for performance evaluation applications and are tested by solving multi-echelon supply chain network problems. Novel solution procedures are presented for handling constrained multi-echelon supply chain network architectures when solving single- and multi-objective analysis problems.

The outline of the thesis is as follows:

Chapter 1 presents an overview of supply chain management, supply chain optimization and the general classification of optimization problems and methods to solve them. A detailed literature review on single- and multi-objective analysis in integrated multi-echelon supply chain network design, together with the models used, is discussed in Chapter 2. Chapter 3 develops a mathematical model for integrated tactical-level three-stage multi-echelon constrained supply chain network configurations and proposes the application of different PSO algorithms for their performance evaluation. Chapter 4 is an extension of Chapter 3; it presents the mathematical formulation and the application of the best PSO algorithm for the analysis and performance evaluation of a tactical-level four-echelon constrained supply chain network architecture. Chapter 5 is devoted to the multi-objective analysis of multi-stage multi-echelon production-inventory-distribution supply chain networks under different sets of objectives. The performance analysis is carried out using the weighted-sum approach, and trade-off solutions between the sets of objectives are proposed for managerial decision making. The performance analysis and validation of the application of the proposed best PSO algorithm are presented in Chapter 6. Chapter 7 summarizes this research and concludes with a discussion of some possible research extensions.
