
OPERATIONS RESEARCH

Contents:

Introduction

History

Basic OR concepts

Integer Programming

Advanced Linear Programming

1. Linear programming - formulation

2. Linear programming formulation examples

3. Linear programming - solution

4. Linear programming solution examples

5. LP relaxation

Queuing theory

Note: More topics will be added soon.

Introduction to OR

Terminology

The British/Europeans refer to "operational research", the Americans to "operations research" - but both are often shortened to just "OR" (which is the term we will use).

Another term which is used for this field is "management science" ("MS"). The Americans sometimes combine the terms OR and MS together and say "OR/MS" or "ORMS". Yet other terms sometimes used are "industrial engineering" ("IE") and "decision science" ("DS"). In recent years there has been a move towards a standardisation upon a single term for the field, namely the term "OR".

Journals

OR is a new field which started in the late 1930's and has grown and expanded tremendously in the last 30 years (and is still expanding). As such the academic journals contain many useful articles that reflect state of the art applications of OR. We give below a selection of the major OR journals.

1. Operations Research
2. Management Science
3. European Journal of Operational Research
4. Journal of the Operational Research Society
5. Mathematical Programming
6. Networks
7. Naval Research Logistics
8. Interfaces

The first seven of the above are mainly theoretical whilst the eighth (Interfaces) concentrates upon case studies. If you have access to an appropriate library that stocks these journals, have a browse through them to see what is happening in state-of-the-art OR.

Note here that my personal view is that in OR, as in many fields, the USA is the country that leads the world both in the practical application of OR and in advancing the theory (for example, the American OR conferences have approximately 2500 participants, the UK OR conference has 300).

One thing I would like to emphasise in relation to OR is that it is (in my view) a subject/discipline that has much to offer in making a real difference in the real world. OR can help you to make better decisions and it is clear that there are many, many people and companies out there in the real world that need to make better decisions. I have tried to include throughout OR-Notes discussion of some of the real-world problems that I have personally been involved with.

History of OR

OR is a relatively new discipline. Whereas 70 years ago it would have been possible to study mathematics, physics or engineering (for example) at university it would not have been possible to study OR, indeed the term OR did not exist then. It was really only in the late 1930's that operational research began in a systematic fashion, and it started in the UK. As such I thought it would be interesting to give a short history of OR and to consider some of the problems faced (and overcome) by early OR workers.

Whilst researching for this short history I discovered that history is not clear cut, different people have different views of the same event. In addition many of the participants in the events described below are now elderly/dead. As such what is given below is only my understanding of what actually happened.

Note: some of you may have moral qualms about discussing what are, at root, more effective ways to kill people. However I cannot change history and what is presented below is essentially what happened, whether one likes it or not.

1936

Early in 1936 the British Air Ministry established Bawdsey Research Station, on the east coast, near Felixstowe, Suffolk, as the centre where all pre-war radar experiments for both the Air Force and the Army would be carried out. Experimental radar equipment was brought up to a high state of reliability and ranges of over 100 miles on aircraft were obtained.

It was also in 1936 that Royal Air Force (RAF) Fighter Command, charged specifically with the air defence of Britain, was first created. It lacked however any effective fighter aircraft - no Hurricanes or Spitfires had come into service - and no radar data was yet fed into its very elementary warning and control system.

It had become clear that radar would create a whole new series of problems in fighter direction and control so in late 1936 some experiments started at Biggin Hill in Kent into the effective use of such data. This early work, attempting to integrate radar data with ground based observer data for fighter interception, was the start of OR.

1937

The first of three major pre-war air-defence exercises was carried out in the summer of 1937. The experimental radar station at Bawdsey Research Station was brought into operation and the information derived from it was fed into the general air-defence warning and control system. From the early warning point of view this exercise was encouraging, but the tracking information obtained from radar, after filtering and transmission through the control and display network, was not very satisfactory.

1938

In July 1938 a second major air-defence exercise was carried out. Four additional radar stations had been installed along the coast and it was hoped that Britain now had an aircraft location and control system greatly improved both in coverage and effectiveness. Not so! The exercise revealed, rather, that a new and serious problem had arisen. This was the need to coordinate and correlate the additional, and often conflicting, information received from the additional radar stations. With the outbreak of war apparently imminent, it was obvious that something new - drastic if necessary - had to be attempted. Some new approach was needed.

Accordingly, on the termination of the exercise, the Superintendent of Bawdsey Research Station, A.P. Rowe, announced that although the exercise had again demonstrated the technical feasibility of the radar system for detecting aircraft, its operational achievements still fell far short of requirements. He therefore proposed that a crash program of research into the operational - as opposed to the technical - aspects of the system should begin immediately. The term "operational research" [RESEARCH into (military) OPERATIONS] was coined as a suitable description of this new branch of applied science. The first team was selected from amongst the scientists of the radar research group the same day.

1939

In the summer of 1939 Britain held what was to be its last pre-war air defence exercise. It involved some 33,000 men, 1,300 aircraft, 110 antiaircraft guns, 700 searchlights, and 100 barrage balloons. This exercise showed a great improvement in the operation of the air defence warning and control system. The contribution made by the OR teams was so apparent that the Air Officer Commanding-in-Chief, RAF Fighter Command (Air Chief Marshal Sir Hugh Dowding) requested that, on the outbreak of war, they should be attached to his headquarters at Stanmore in north London.

Initially, they were designated the "Stanmore Research Section". In 1941 they were redesignated the "Operational Research Section" when the term was formalised and officially accepted, and similar sections set up at other RAF commands.

1940

On May 15th 1940, with German forces advancing rapidly in France, Stanmore Research Section was asked to analyse a French request for ten additional fighter squadrons (12 aircraft a squadron - so 120 aircraft in all) when losses were running at some three squadrons every two days (i.e. 36 aircraft every 2 days). They prepared graphs for Winston Churchill (the British Prime Minister of the time), based upon a study of current daily losses and replacement rates, indicating how rapidly such a move would deplete fighter strength. No aircraft were sent and most of those currently in France were recalled.

This is held by some to be the most strategic contribution to the course of the war made by OR (as the aircraft and pilots saved were consequently available for the successful air defence of Britain, the Battle of Britain).

1941 onward

In 1941 an Operational Research Section (ORS) was established in Coastal Command which was to carry out some of the most well-known OR work in World War II.

The responsibility of Coastal Command was, to a large extent, the flying of long-range sorties by single aircraft with the object of sighting and attacking surfaced U-boats (German submarines). The technology of the time meant that (unlike modern day submarines) surfacing was necessary to recharge batteries, vent the boat of fumes and recharge air tanks. Moreover U-boats were much faster on the surface than underwater as well as being less easily detected by sonar.

Amongst the problems that ORS considered were:

organisation of flying maintenance and inspection

Here the problem was that in a squadron each aircraft, in a cycle of approximately 350 flying hours, required in terms of routine maintenance 7 minor inspections (lasting 2 to 5 days each) and a major inspection (lasting 14 days). How then was flying and maintenance to be organised to make best use of squadron resources?

ORS decided that the current procedure, whereby an aircrew had their own aircraft, and that aircraft was serviced by a devoted ground crew, was inefficient (as it meant that when the aircraft was out of action the aircrew were also inactive). They proposed a central garage system whereby aircraft were sent for maintenance when required and each aircrew drew a (different) aircraft when required.

The advantage of this system is plainly that flying hours should be increased. The disadvantage of this system is that there is a loss in morale as the ties between the aircrew and "their" plane/ground crew and the ground crew and "their" aircrew/plane are broken.

In one trial (over 5 months) when flying was organised by ORS the daily operational flying hours were increased by 61% over the previous best achieved with the same number of aircraft. Their system was accepted and implemented.

comparison of aircraft type

Here the problem was one of deciding, for a particular type of operation, the relative merits of different aircraft in terms of factors such as: miles flown per maintenance man per month; lethality of load; length of sortie; chance of U-boat sighting; etc.

improvement of attack kill probability (the probability of attacking and killing a U-boat)

Experience showed that it required some 170 man-hours by maintenance and ground staff to produce one hour of operational flying and more than 200 hours of flying to produce one attack on a surfaced U-boat. Hence over 34,000 man-hours of effort were necessary just to attack a U-boat.

In early 1941 the attack kill probability was 2% to 3% (i.e. between 1.1 million and 1.7 million man-hours were needed by Coastal Command to destroy one U-boat). It is in this area that the greatest contribution was made by OR in Coastal Command and so we shall examine it in more detail. (Note here that we ignore the question of the U-boat being attacked and damaged, but not killed. To include this merely complicates the discussion).

Plainly in the above calculation the "weak link" is the low attack kill probability and it is this that really needs to be improved.

The main weapon of attack against a surfaced (when spotted) U-boat was depth charges dropped in a stick (typically six 250lb (110kg) depth charges) in a more or less straight line along the direction of flight of the attacking aircraft. After hitting the water a depth charge sinks whilst at the same time being carried forward by its own momentum. After a pre-set time delay, or upon reaching a certain depth, it explodes and any U-boat within a certain distance (the lethal radius) is fatally damaged. Six variables were considered as influencing the kill probability:

depth (time) setting for depth charge explosion

lethal radius

aiming errors in dropping the stick

orientation of the stick with respect to the U-boat

spacing between successive depth charges in the stick

low level bombsights

We consider each in turn.

depth (time) setting for depth charge explosion

In the first two years of the war depth charges were mainly set for explosion at a depth of 30/45 metres [this figure having been set years ago and never altered since]. Analysis of pilot reports by ORS showed that in 40% of attacks the U-boat was either still visible or had been submerged less than 15 seconds (these are the U-boats that we would expect to have most chance of killing as we have a good idea of their position). Since the lethal radius of a depth charge was around 5-6 metres it was clear that a shallower setting was necessary. Explosion at a depth of 15 metres was introduced and, as new fuses became available, at 10 metres and then 8 metres.

Here we have the issue of historical inertia in decision making - in the dim and distant past someone decided that the standard depth setting should be 30/45 metres and this historical decision has been carried forward - never being questioned/re-examined until ORS came on the scene.

lethal radius

As mentioned above the standard 250lb depth charge was believed to have a lethal radius of only 5-6 metres. Plainly to increase this radius (within the 250lb limit) the chemical explosive inside the depth charge should be more powerful (e.g. increasing the lethal radius by just 20% increases the lethal volume (sphere) around the depth charge by 72.8%). The best chemical explosive currently available was therefore introduced.

Note here that it could be argued (and was) that since a 250lb depth charge had too small a lethal radius a bigger charge was needed (600lb (270kg) was prescribed by the Air Staff). ORS suggested 100lb (45kg) on the basis that it would be more effective to have many small explosions rather than one large explosion. (As an analogy would you prefer to throw many small balls at a small target or one large ball?). In fact neither alternative ever really proceeded past the trial stage due to increasing success with the 250lb depth charge.

This illustrates the concept of "tradeoff" which often appears in OR in that, for a given total bomb load we have to make a choice (tradeoff) between bomb size and number of bombs (from one big bomb to many small ones).

aiming errors in dropping the stick

By the end of 1942 it had become clear that too many pilots were reporting having "straddled" a target U-boat with a stick of depth charges without sinking it. Either their claims were unduly optimistic (the ORS view) or the lethal radius of a depth charge was much less than currently believed (the Air Staff view).

To settle the issue cameras were installed for recording U-boat attacks. Analysis of 16 attacks indicated that ORS were right. This analysis also showed that pilots were following tactical instructions and "aiming off" (aiming ahead of the U-boat to allow for its forward travel during fall of the depth charges). However analysis also revealed that had they not aimed off 50% more kills would have been recorded. Pilots were therefore instructed not to aim off.

orientation of the stick with respect to the U-boat

Here the question was whether to attack from the beam, quarter or along the U-boat track. No definite answer was really reached until 1944 when it was concluded that track attacks were more accurate (probably due to the pilot using the U-boat wake to help him line the plane up).

spacing between successive depth charges in a stick

In the early part of the war this spacing was specified at 12 metres. ORS calculated that increasing this to 33 metres would increase kills by 35% and this was done.

low level bombsights

For much of the war all low level attacks on U-boats were by the pilot acting as bomb aimer/release. Although pilots (and Air Staff) believed they were accurate photographic evidence did not support this belief and ORS pressed for bombsights to be provided. By late 1943 a low level (Mk.III) sight came into use increasing kills per attack by 35%.

The overall effect of all the measures discussed above was such that by 1945 the attack kill probability had risen to over 40% (remember it started out at 2-3%).

Discussion

Although scientists had (plainly) been involved in the hardware side of warfare (designing better planes, bombs, tanks, etc) scientific analysis of the operational use of military resources had never taken place in a systematic fashion before the Second World War. Military personnel, often by no means stupid, were simply not trained to undertake such analysis.

These early OR workers came from many different disciplines, one group consisted of a physicist, two physiologists, two mathematical physicists and a surveyor. What such people brought to their work were "scientifically trained" minds, used to querying assumptions, logic, exploring hypotheses, devising experiments, collecting data, analysing numbers, etc. Many too were of high intellectual calibre (at least four UK wartime OR personnel were later to win Nobel prizes when they returned to their peacetime disciplines).

By the end of the war OR was well established in the armed services both in the UK and in the USA.

Following the end of the war OR took a different course in the UK as opposed to in the USA. In the UK many of the distinguished OR workers returned to their original peacetime disciplines. As such OR did not spread particularly well, except for a few isolated industries (iron/steel and coal). In the USA OR spread to the universities so that systematic training in OR for future workers began.

How was OR perceived?

Professor P.M.S. Blackett was one of the first scientists to define the essential elements of Operational Research. In October 1941 he wrote a Report on Operational Research which is considered by many to be the original 'definition of Operational Research'. Of the use of scientists at the operational level he said:

'The object of having scientists in close touch with operations is to enable operational staffs to obtain scientific advice on those matters which are not handled by the service technical establishments... Operational staff provide the scientists with the operational outlook and data. The scientists apply scientific methods of analysis to this data, and are thus able to give useful advice. The main field of their activity is clearly the analysis of actual operations, using as data the material to be found in an operations room, e.g. all signals, track charts, combat reports, meteorological information, etc. . . .'

In 1947 Kittel described OR thus: 'Operations Research is a scientific method for providing executive departments with a quantitative basis for decisions.' A year later Sir Charles Goodeve summed it up as 'quantitative common-sense'. By 1962 the definition had been expanded to: 'Operational Research is the attack of modern science on complex problems arising in the direction and management of large systems of men, machines, material and money in industry, business, government and defence...' Nowadays the OR Society steers clear of formal definitions, preferring to illustrate what OR does by means of examples.

Conclusion

OR started just before World War II in Britain with the establishment of teams of scientists to study the strategic and tactical problems involved in military operations. The objective was to find the most effective utilisation of limited military resources by the use of quantitative techniques.

Following the end of the war OR spread, although it spread in different ways in the UK and USA.

You should be clear that the growth of OR since it began (and especially in the last 35 years) is, to a large extent, the result of the increasing power and widespread availability of computers. Most (though not all) OR involves carrying out a large number of numeric calculations. Without computers this would simply not be possible.

Basic OR concepts

Definition

So far we have avoided the problem of defining exactly what OR is. In order to get a clearer idea of what OR is we shall actually do some by considering the specific problem below and then highlight some general lessons and concepts from this specific example.

Two Mines Company

The Two Mines Company own two different mines that produce an ore which, after being crushed, is graded into three classes: high, medium and low-grade. The company has contracted to provide a smelting plant with 12 tons of high-grade, 8 tons of medium-grade and 24 tons of low-grade ore per week. The two mines have different operating characteristics as detailed below.

Mine   Cost per day (£'000)   Production (tons/day)
                              High   Medium   Low
X      180                    6      3        4
Y      160                    1      1        6

How many days per week should each mine be operated to fulfil the smelting plant contract?

Note: this is clearly a very simple (even simplistic) example but, as with many things, we have to start at a simple level in order to progress to a more complicated level.

Guessing

To explore the Two Mines problem further we might simply guess (i.e. use our judgement) how many days per week to work and see how they turn out.

work one day a week on X, one day a week on Y

This does not seem like a good guess as it results in only 7 tons a week of high-grade, insufficient to meet the contract requirement for 12 tons of high-grade a week. We say that such a solution is infeasible.

work 4 days a week on X, 3 days a week on Y

This seems like a better guess as it results in sufficient ore to meet the contract. We say that such a solution is feasible. However it is quite expensive (costly).
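Checking a guess like this is only arithmetic, and a few lines of code can do the bookkeeping. Below is a minimal sketch in Python (the function name and layout are ours, not part of the original notes), using the production and cost figures from the table above.

# Check a guessed schedule for the Two Mines problem.
# Per day worked, mine X produces (6, 3, 4) tons of (high, medium, low) grade ore
# at a cost of 180 (£'000); mine Y produces (1, 1, 6) at a cost of 160.
WEEKLY_REQUIREMENT = [12, 8, 24]   # high, medium, low (tons per week)

def check_guess(x_days, y_days):
    produced = [6 * x_days + 1 * y_days,   # high-grade
                3 * x_days + 1 * y_days,   # medium-grade
                4 * x_days + 6 * y_days]   # low-grade
    feasible = all(p >= r for p, r in zip(produced, WEEKLY_REQUIREMENT))
    cost = 180 * x_days + 160 * y_days     # £'000 per week
    return feasible, cost

print(check_guess(1, 1))   # (False, 340) - infeasible: only 7 tons of high-grade
print(check_guess(4, 3))   # (True, 1200) - feasible but expensive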

Rather than continue guessing we can approach the problem in a structured logical fashion as below. Reflect for a moment though that really we would like a solution which supplies what is necessary under the contract at minimum cost. Logically such a minimum cost solution to this decision problem must exist. However even if we keep guessing we can never be sure whether we have found this minimum cost solution or not. Fortunately our structured approach will enable us to find the minimum cost solution.

Two Mines solution

What we have is a verbal description of the Two Mines problem. What we need to do is to translate that verbal description into an equivalent mathematical description.

In dealing with problems of this kind we often do best to consider them in the order:

1. variables
2. constraints
3. objective.

We do this below and note here that this process is often called formulating the problem (or more strictly formulating a mathematical representation of the problem).

(1) Variables

These represent the "decisions that have to be made" or the "unknowns".

Let

x = number of days per week mine X is operated
y = number of days per week mine Y is operated

Note here that x >= 0 and y >= 0.

(2) Constraints

It is best to first put each constraint into words and then express it in a mathematical form.

ore production constraints - balance the amount produced with the quantity required under the smelting plant contract

High grade:    6x + 1y >= 12
Medium grade:  3x + 1y >= 8
Low grade:     4x + 6y >= 24

Note we have an inequality here rather than an equality. This implies that we may produce more of some grade of ore than we need. In fact we have the general rule: given a choice between an equality and an inequality choose the inequality.

For example - if we choose an equality for the ore production constraints we have the three equations 6x+y=12, 3x+y=8 and 4x+6y=24 and there are no values of x and y which satisfy all three equations (the problem is therefore said to be "over-constrained"). For example the values of x and y which satisfy 6x+y=12 and 3x+y=8 are x=4/3 and y=4, but these values do not satisfy 4x+6y=24.
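This check can be reproduced numerically. Below is a small sketch (assuming numpy is available) that solves the first two equalities and confirms that the result violates the third.

import numpy as np

# Solve 6x + y = 12 and 3x + y = 8 as a pair of simultaneous equations.
A = np.array([[6.0, 1.0],
              [3.0, 1.0]])
b = np.array([12.0, 8.0])
x, y = np.linalg.solve(A, b)       # x = 4/3, y = 4

# Substitute into the third equation 4x + 6y = 24: the left-hand side is not 24,
# so no (x, y) satisfies all three equalities at once.
print(x, y, 4 * x + 6 * y)         # 1.333... 4.0 29.333...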

The reason for this general rule is that choosing an inequality rather than an equality gives us more flexibility in optimising (maximising or minimising) the objective (deciding values for the decision variables that optimise the objective).

days per week constraint - we cannot work more than a certain maximum number of days a week e.g. for a 5 day week we have

x <= 5
y <= 5

Constraints of this type are often called implicit constraints because they are implicit in the definition of the variables.

(3) Objective

Again in words our objective is (presumably) to minimise cost which is given by 180x + 160y

Hence we have the complete mathematical representation of the problem as:

minimise   180x + 160y
subject to 6x + y >= 12
           3x + y >= 8
           4x + 6y >= 24
           x <= 5
           y <= 5
           x, y >= 0

There are a number of points to note here:

a key issue behind formulation is that IT MAKES YOU THINK. Even if you never do anything with the mathematics this process of trying to think clearly and logically about a problem can be very valuable.

a common problem with formulation is to overlook some constraints or variables and the entire formulation process should be regarded as an iterative one (iterating back and forth between variables/constraints/objective until we are satisfied).

the mathematical problem given above has the form:

all variables continuous (i.e. can take fractional values)

a single objective (maximise or minimise)

the objective and constraints are linear i.e. any term is either a constant or a constant multiplied by an unknown (e.g. 24, 4x, 6y are linear terms but xy is a non-linear term)

Any formulation which satisfies these three conditions is called a linear program (LP). As we shall see later LP's are important.

we have (implicitly) assumed that it is permissible to work in fractions of days - problems where this is not permissible and variables must take integer values will be dealt with under integer programming (IP).

often (strictly) the decision variables should be integer but for reasons of simplicity we let them be fractional. This is especially relevant in problems where the values of the decision variables are large because any fractional part can then usually be ignored (note that often the data (numbers) that we use in formulating the LP will be inaccurate anyway).

the way the complete mathematical representation of the problem is set out above is the standard way (with the objective first, then the constraints and finally the reminder that all variables are >=0).
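To tie the Two Mines formulation to a numerical answer, here is a minimal sketch that solves the LP above with scipy's linprog routine (assuming scipy is installed; the >= constraints are multiplied by -1 because linprog expects <= constraints).

from scipy.optimize import linprog

c = [180, 160]                 # minimise 180x + 160y

# 6x + y >= 12, 3x + y >= 8, 4x + 6y >= 24, rewritten as <= by negating both sides
A_ub = [[-6, -1],
        [-3, -1],
        [-4, -6]]
b_ub = [-12, -8, -24]

bounds = [(0, 5), (0, 5)]      # 0 <= x <= 5, 0 <= y <= 5

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)          # roughly x = 1.71, y = 2.86, cost = 765.7 (£'000)

Note that the optimal days are fractional, which is precisely the point made above about working in fractions of days.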

Discussion

Considering the Two Mines example given above:

this problem was a decision problem

we have taken a real-world situation and constructed an equivalent mathematical representation - such a representation is often called a mathematical model of the real-world situation (and the process by which the model is obtained is called formulating the model).

Just to confuse things the mathematical model of the problem is sometimes called the formulation of the problem.

having obtained our mathematical model we (hopefully) have some quantitative method which will enable us to numerically solve the model (i.e. obtain a numerical solution) - such a quantitative method is often called an algorithm for solving the model.

Essentially an algorithm (for a particular model) is a set of instructions which, when followed in a step-by-step fashion, will produce a numerical solution to that model. You will see some examples of algorithms later in this course but note here that many algorithms for OR problems are available in computer packages.

our model has an objective, that is something which we are trying to optimise.

having obtained the numerical solution of our model we have to translate that solution back into the real-world situation.

On an historical note, the Encyclopedia Britannica notes that the word algorithm derives from Algoritmi de numero Indorum, the Latin translation of a work by the 9th-century Muslim mathematician Abu Ja'far Muhammad ibn Musa Al-Khwarizmi, "Al-Khwarizmi Concerning the Hindu Art of Reckoning."

Hence we have a definition of OR as:

OR is the representation of real-world systems by mathematical models together with the use of quantitative methods (algorithms) for solving such models, with a view to optimising.

One thing I wish to emphasise about OR is that it typically deals with decision problems. You will see examples of the many different types of decision problem that can be tackled using OR throughout OR-Notes.

We can also define a mathematical model as consisting of:

Decision variables, which are the unknowns to be determined by the solution to the model.

Constraints to represent the physical limitations of the system.

An objective function.

A solution (or optimal solution) to the model is the identification of a set of variable values which are feasible (i.e. satisfy all the constraints) and which lead to the optimal value of the objective function.

Philosophy

In general terms we can regard OR as being the application of scientific methods/thinking to decision making. Underlying OR is the philosophy that:

decisions have to be made; and

using a quantitative (explicit, articulated) approach will lead (on average) to better decisions than using non-quantitative (implicit, unarticulated) approaches (such as those used (?) by human decision makers).

Indeed it can be argued that although OR is imperfect it offers the best available approach to making a particular decision in many instances (which is not to say that using OR will produce the right decision).

Often the human approach to decision making can be characterised (conceptually) as the "ask Fred" approach, simply give Fred ('the expert') the problem and relevant data, shut him in a room for a while and wait for an answer to appear.

The difficulties with this approach are:

speed (cost) involved in arriving at a solution

quality of solution - does Fred produce a good quality solution in any particular case

consistency of solution - does Fred always produce solutions of the same quality (this is especially important when comparing different options).

You can form your own judgement as to whether OR is better than this approach or not.

Phases of an OR project

Drawing on our experience with the Two Mines problem we can identify the phases that a (real-world) OR project might go through.

1. Problem identification

Diagnosis of the problem from its symptoms if not obvious (i.e. what is the problem?)

Delineation of the subproblem to be studied. Often we have to ignore parts of the entire problem.

Establishment of objectives, limitations and requirements.

2. Formulation as a mathematical model

It may be that a problem can be modelled in differing ways, and the choice of the appropriate model may be crucial to the success of the OR project. In addition to algorithmic considerations for solving the model (i.e. can we solve our model numerically?) we must also consider the availability and accuracy of the real-world data that is required as input to the model.

Note that the "data barrier" ("we don't have the data!!!") can appear here, particularly if people are trying to block the project. Often data can be collected/estimated, particularly if the potential benefits from the project are large enough.

You will also find, if you do much OR in the real-world, that some environments are naturally data-poor, that is the data is of poor quality or nonexistent and some environments are naturally data-rich. As examples of this I have worked on a church location study (a data-poor environment) and an airport terminal check-in desk allocation study (a data-rich environment).

This issue of the data environment can affect the model that you build. If you believe that certain data can never (realistically) be obtained there is perhaps little point in building a model that uses such data.

3. Model validation (or algorithm validation)

Model validation involves running the algorithm for the model on the computer in order to ensure:

the input data is free from errors

the computer program is bug-free (or at least there are no outstanding bugs)

the computer program correctly represents the model we are attempting to validate

the results from the algorithm seem reasonable (or if they are surprising we can at least understand why they are surprising). Sometimes we feed the algorithm historical input data (if it is available and is relevant) and compare the output with the historical result.

4. Solution of the model

Standard computer packages, or specially developed algorithms, can be used to solve the model (as mentioned above). In practice, a "solution" often involves very many solutions under varying assumptions to establish sensitivity. For example, what if we vary the input data (which will be inaccurate anyway), then how will this affect the values of the decision variables? Questions of this type are commonly known as "what if" questions nowadays; a small numerical sketch of such a question is given after the list below.

Note here that the factors which allow such questions to be asked and answered are:

the speed of processing (turn-around time) available by using PCs; and

the interactive/user-friendly nature of many PC software packages.
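As an illustration of a "what if" question, the sketch below (scipy-based, and reusing the Two Mines data purely as a stand-in example) re-solves the LP with the high-grade requirement raised from 12 to 14 tons per week and reports how the minimum cost moves.

from scipy.optimize import linprog

c = [180, 160]
bounds = [(0, 5), (0, 5)]

# "What if" the weekly high-grade requirement changed? Re-solve for each value.
for high_grade in (12, 14):
    A_ub = [[-6, -1], [-3, -1], [-4, -6]]
    b_ub = [-high_grade, -8, -24]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(high_grade, res.x, res.fun)   # cost rises from about 766 to about 778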

5. Implementation

This phase may involve the implementation of the results of the study or the implementation of the algorithm for solving the model as an operational tool (usually in a computer package).

In the first instance detailed instructions on what has to be done (including time schedules) to implement the results must be issued. In the second instance operating manuals and training schemes will have to be produced for the effective use of the algorithm as an operational tool.

It is believed that many of the OR projects which successfully pass through the first four phases given above fail at the implementation stage (i.e. the work that has been done does not have a lasting effect). As a result one topic that has received attention in terms of bringing an OR project to a successful conclusion (in terms of implementation) is the issue of client involvement. By this is meant keeping the client (the sponsor/originator of the project) informed and consulted during the course of the project so that they come to identify with the project and want it to succeed. Achieving this is really a matter of experience.

A graphical description of this process is given below.

Example OR projects

Not all OR projects get reported in the literature (especially OR projects which fail). However to give you an idea of the areas in which OR can be applied we give below some abstracts from papers on OR projects that have been reported in the literature (all US projects drawn from the journal Interfaces).

Note here that, at this stage of the course, you will probably not understand every aspect of these abstracts but you should have a better understanding of them by the end of the course.

Yield management at American Airlines

Critical to an airline's operation is the effective use of its reservations inventory. American Airlines began research in the early 1960's in managing revenue from this inventory. Because of the problem's size and difficulty, American Airlines Decision Technologies has developed a series of OR models that effectively reduce the large problem to three much smaller and far more manageable subproblems: overbooking, discount allocation and traffic management. The results of the subproblem solutions are combined to determine the final inventory levels. American Airlines estimates the quantifiable benefit at $1.4 billion over the last three years and expects an annual revenue contribution of over $500 million to continue into the future.

Yield management is also sometimes referred to as capacity management. It applies in systems where the cost of operating is essentially fixed and the focus is primarily, though not exclusively, on revenue maximisation. For example all transport systems (air, land, sea) operating to a fixed timetable (schedule) could potentially benefit from yield management. Hotels and universities would be other examples of systems where the focus should primarily be on revenue maximisation.

To give you an illustration of the kind of problems involved in yield management suppose that we consider a specific flight, say the 4pm on a Thursday from Chicago O'Hare to New York JFK. Further suppose that there are exactly 100 passenger seats on the plane subdivided into 70 economy seats and 30 business class seats (and that this subdivision cannot be changed). An economy fare is $200 and a business class fare is $1000. Then a fundamental question (a decision problem) is: How many tickets can we sell?

One key point to note about this decision problem is that it is a routine one, airlines need to make similar decisions day after day about many flights.

Suppose now that at 7am on the day of the flight the situation is that we have sold 10 business class tickets and 69 economy tickets. A potential passenger phones up requesting an economy ticket. Then a fundamental question (a decision problem) is: Would you sell it to them? Reflect - do the figures given for fares ($200 economy, $1000 business) affect the answer to this question or not?

Again this decision problem is a routine one, airlines need to make similar decisions day after day, minute after minute, about many flights. Also note that in this decision problem an answer must be reached quickly. The potential passenger on the phone expects an immediate answer. One factor that may influence your thinking here is to consider certain money (money we are sure to get) and uncertain money (money we may, or may not, get).
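As a purely illustrative sketch of the certain versus uncertain money idea (the probability figure below is invented for illustration and is not part of the example), one simplified way to weigh the decision is to compare the sure fare from selling the seat now against the expected value of holding it for a possible later, higher-value request.

# Certain money: the fare we bank by selling the seat now.
sure_fare = 200

# Uncertain money: a later, higher-value request that may or may not arrive.
late_fare = 1000
p_late_request = 0.15          # assumed probability - an invented figure

expected_if_held = p_late_request * late_fare    # 150.0
decision = "sell now" if sure_fare >= expected_if_held else "hold the seat"
print(expected_if_held, decision)                # 150.0 sell now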

Suppose now that at 1pm on the day of the flight the situation is that we have sold 30 business class tickets and 69 economy tickets. A potential passenger phones up requesting an economy ticket. Then a fundamental question (a decision problem) is: Would you sell it to them?

NETCAP - an interactive optimisation system for GTE telephone network planning

With operations extending from the east coast to Hawaii, GTE is the largest local telephone company in the United States. Even before its 1991 merger with Contel, GTE maintained more than 2,600 central offices serving over 15.7 million customer lines. It does extensive planning to ensure that its $300 million annual investment in customer access facilities is well spent. To help GTE Corporation in a very complex task of planning the customer access network, GTE Laboratories developed a decision support tool called NETCAP that is used by nearly 200 GTE network planners, improving productivity by more than 500% and saving an estimated $30 million per year in network construction costs.

Managing consumer credit delinquency in the US economy: a multi-billion dollar management science application

GE Capital provides credit card services for a consumer credit business exceeding $12 billion in total outstanding dollars. Its objective is to optimally manage delinquency by improving the allocation of limited collection resources to maximise net collections over multiple billing periods. We developed a probabilistic account flow model and statistically designed programs to provide accurate data on collection resource performance. A linear programming formulation produces optimal resource allocations that have been implemented across the business. The PAYMENT system has permanently changed the way GE Capital manages delinquent consumer credit, reduced annual losses by approximately $37 million, and improved customer goodwill.

Note here that GE Capital also operates in the UK. My Debenhams store card is administered/operated by them.

Operational research example 1987 UG exam

After graduating from Imperial College you find yourself at a business lunch with the managing director of the company employing you. You know that he started as a tea-boy 40 years ago and rose through the ranks of the company (without any formal education) to his present position. He believes that all a person needs to succeed in business are (innate) ability and experience. What arguments would you use to convince him that the decision-making techniques dealt with in this course are of value?

Solution

The points that we would expect to see in an answer include:

OR obviously of value in tactical situations where data well defined

an advantage of explicit decision making is that it is possible to examine assumptions explicitly

might expect an "analytical" approach to be better (on average) than a person

OR techniques combine the ability and experience of many people

sensitivity analysis can be performed in a systematic fashion

OR enables problems too large for a person to tackle effectively to be dealt with

constructing an OR model structures thought about what is/is not important in a problem

a training in OR teaches a person to think about problems in a logical fashion

using standard OR techniques prevents a person having to "reinvent the wheel" each time they meet a suitable problem

OR techniques enable computers to be used with (usually) standard packages and consequently all the benefits of computerised analysis (speed, rapid (elapsed) solution time, graphical output, etc)

OR techniques an aid (complement) to ability and experience not a substitute for them

many OR techniques simple to understand and apply

there have been many successful OR projects (e.g. ...)

other companies use OR techniques - do we want to be left behind?

ability and experience are vital but need OR to use these effectively in tackling large problems

OR techniques free executive time for more creative tasks

Operational research example 1987 UG exam

Discuss the phases that a typical operational research project might go through, with reference to one particular problem of which you are aware.

Solution

The phases that a typical OR project might go through are:

1. problem identification
2. formulation as a mathematical model
3. model validation
4. solution of the model
5. implementation

We would be looking for a discussion of these points with reference to one particular problem.

Advanced linear programming

Linear and integer programming are mathematical techniques that are concerned with optimisation, that is with finding the best possible answer to a problem. They are often associated with the wider field of operations research. They have been studied and researched since the late 1940s and elements of them are now taught in undergraduate and graduate programmes in mathematics/operations research worldwide.

We consider here a number of more advanced LP topics than we considered previously.

The following are terms that you may come across if you do much solving of LP's using packages.

Dual linear program

For an LP with m constraints and n variables, if b is a one-dimensional vector of length m, c and x are one-dimensional vectors of length n, and A is a two-dimensional matrix with m rows and n columns, then the primal linear program (in matrix notation) is:

minimise   cx
subject to Ax >= b
           x >= 0

Associated with this primal linear program we have the dual linear program (involving a one-dimensional vector y of length m) given (in matrix notation) by:

maximise   by
subject to yA <= c
           y >= 0

Linear programming theory tells us that (provided the primal LP is feasible and has a bounded optimal solution) the optimal value of the primal LP is equal to the optimal value of the dual LP.
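This can be illustrated numerically. The sketch below (assuming scipy and numpy are available) sets up the Two Mines data in the primal form above, builds the corresponding dual, solves both with linprog and shows that the two optimal values coincide.

import numpy as np
from scipy.optimize import linprog

# Primal: minimise cx subject to Ax >= b, x >= 0 (Two Mines data, no day limits)
c = np.array([180.0, 160.0])
A = np.array([[6.0, 1.0],
              [3.0, 1.0],
              [4.0, 6.0]])
b = np.array([12.0, 8.0, 24.0])

primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Dual: maximise by subject to yA <= c, y >= 0.
# linprog only minimises, so minimise -b.y and negate the optimal value.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 3)

print(primal.fun, -dual.fun)   # both are roughly 765.7, as duality theory predicts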

Basis

Any LP involving inequality constraints can be converted into an equivalent LP involving just equality constraints (simply add slack and artificial variables). After such a conversion the LP (in matrix notation) is:

minimise   cx
subject to Ax = b
           x >= 0

with m equality constraints and n variables (where we can assume n >= m). Then, theory tells us that each vertex of the feasible region of this LP can be found by:

choosing m of the n variables (these m variables are collectively known as the basis);

setting the remaining (n-m) variables to zero; and

solving a set of simultaneous linear equations to determine values for the m variables we have selected.

If these values for the m variables are all >0 then the basis is non-degenerate. If one or more of these variables is zero then the basis is degenerate.
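The sketch below illustrates the recipe on a tiny equality-form LP (the data is invented for illustration and numpy is assumed): choose m of the n variables as the basis, set the rest to zero and solve the resulting square system.

import numpy as np

# Equality-form data with m = 2 constraints and n = 4 variables,
# e.g. x1 + x2 + s1 = 4 and x1 + 3x2 + s2 = 6 after adding slacks s1, s2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [0, 1]                        # choose m of the n variables (here x1, x2)
x_basic = np.linalg.solve(A[:, basis], b)

x = np.zeros(A.shape[1])              # the remaining n - m variables are set to zero
x[basis] = x_basic
print(x)                              # [3. 1. 0. 0.]: all basic values > 0, so non-degenerate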

Degeneracy in practice

Essentially the simplex algorithm starts at one vertex of the feasible region and moves (at each iteration) to another (adjacent) vertex, improving (or leaving unchanged) the objective function value as it does so, until it reaches the vertex corresponding to the optimal LP solution.

Obviously the ideal situation is that, in moving from one vertex to another, we improve the objective function value by a significant amount. The worst case is when the objective function value is unchanged when we move from one vertex to another.

When we solve the LP then, if it is highly degenerate (i.e. there are many vertices of the feasible region for which the associated basis is degenerate), we may find that a large number of iterations (moves between adjacent vertices) occur with little or no improvement in the objective function value.

Computationally this is very unfortunate!

Primal and dual simplex

The revised simplex algorithm is one example of a primal simplex algorithm. The dual simplex algorithm is related to the dual of the LP. Many packages contain both primal and dual simplex algorithms.

Computationally one algorithm (primal or dual) will solve a particular LP quicker than the other algorithm. If your LP solution time is excessive with primal simplex (the usual default algorithm) it may be worthwhile trying dual simplex. Trying dual simplex is particularly useful if your LP appears to be highly degenerate.

Package solution

The simplex algorithm, as typified by a package, will:

crash to find an initial vertex of the feasible region, that is, an initial basis (crashing is the name given to the procedures that packages adopt to find an initial feasible solution (vertex of the feasible region))

at each simplex iteration price to find a variable to bring into the basis (and then choose a variable to drop from the basis) - this replacing of a single variable in the basis by another variable is what moves us between adjacent vertices of the feasible region

at each simplex iteration perform a major or a minor iteration. Major iterations are iterations at which an inversion (sometimes called a factorisation or refactorisation) is carried out.

Essentially at a major iteration a matrix is inverted (the inverse is found). This is done to maintain numeric stability (i.e. avoid rounding errors) during simplex iterations. At a minor iteration no inversion is done.

The algorithm stops when the optimal solution is found.

Preprocessing

Some packages have options that enable you to preprocess the problem, e.g. you may have included in the LP a constraint of the form x=5. Obviously the variable x (as well as this constraint) could be eliminated from the LP very easily simply by doing some algebra (replace x by 5 everywhere it appears).

There are also more sophisticated tests available that enable reduction in the size of the LP to be achieved. Generally preprocessing is a good idea as it can reduce LP solution time dramatically.

Note too that preprocessing can also be applied to integer and mixed-integer programs.

Matrix generators and modelling languages

Obviously if we have a large LP then input procedures can become unwieldy. A matrix generator is a piece of software that enables you to easily generate the LP input for a package (or generate an MPS file that can be fed to any package). Nowadays it tends to be the case that LP input comes via an algebraic modelling language such as GAMS or AMPL.

Report generator

Obviously if we have a large LP then the output becomes potentially too large to digest intelligently. A report generator is a piece of software that enables you to easily extract from the LP solution items of interest and present them in a readable format.

Scaling

The computer time required to solve an LP can be affected by how the problem data is scaled. For example dividing all terms (including the right-hand side) of a constraint by 100 cannot affect the optimal solution but may affect the number of iterations needed to find the optimal solution. Many packages have options to scale the data automatically.

Restarting

Having solved an LP it is often worthwhile saving the final solution. This is because, at a later date, we may wish to solve essentially the same LP but with just a few changes made (e.g. some data values altered and some constraints added). Generally restarting from the previously saved solution, rather than from scratch, reduces solution time.

Column generation

The variables in an LP are often referred to as columns (thinking of them as being the columns of the A matrix in the definition of the problem). In column generation we choose to start solving the LP problem using just a subset of the variables and automatically include any additional variables that we need as required. Often at the LP optimal solution many variables have the value zero and so need never really have been considered during the course of the simplex algorithm. Column generation is often used for LP's with a relatively small number of constraints (rows) compared to the number of variables (columns).

Parametric analysis

This is an option available in some packages and involves automatically investigating how the solution changes as some parameter is altered. For example for the LP:

minimise 45x1 + 70x2

subject to certain constraints

a parametric analysis of the objective function would involve looking at the LP:

minimise (45+alpha)x1 + (70+alpha)x2

subject to the same constraints

for varying values of alpha and seeing how the optimal solution changes (if at all) as alpha varies. Parametric analysis can also be carried out for right-hand sides, columns and rows.
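A crude, hand-rolled version of parametric analysis can be sketched by simply re-solving the LP over a grid of alpha values and watching the optimal solution. The constraints below are invented purely for illustration (the example above does not specify them) and scipy is assumed; a package's parametric option does this far more efficiently and reports the exact breakpoints.

from scipy.optimize import linprog

# Invented constraints for illustration: 2x1 + x2 >= 12 and x1 + 3x2 >= 15
A_ub = [[-2, -1],
        [-1, -3]]
b_ub = [-12, -15]
bounds = [(0, None), (0, None)]

for alpha in (-40, -20, 0, 20):
    c = [45 + alpha, 70 + alpha]               # the parametrised objective
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(alpha, res.x, res.fun)
# Here the optimal vertex switches somewhere between alpha = -40 and alpha = -20;
# for the other alpha values the same vertex stays optimal.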

Linear programming - formulation

You will recall from the Two Mines example that the conditions for a mathematical model to be a linear program (LP) were:

all variables continuous (i.e. can take fractional values)

a single objective (minimise or maximise)

the objective and constraints are linear i.e. any term is either a constant or a constant multiplied by an unknown.

LP's are important - this is because:

many practical problems can be formulated as LP's

there exists an algorithm (called the simplex algorithm) which enables us to solve LP's numerically relatively easily.

We will return later to the simplex algorithm for solving LP's but for the moment we will concentrate upon formulating LP's.

Some of the major application areas to which LP can be applied are:

Blending

Production planning

Oil refinery management

Distribution

Financial and economic planning

Manpower planning

Blast furnace burdening

Farm planning

We consider below some specific examples of the types of problem that can be formulated as LP's. Note here that the key to formulating LP's is practice. However a useful hint is that common objectives for LP's are minimise cost/maximise profit.

Financial planning

A bank makes four kinds of loans to its personal customers and these loans yield the following annual interest rates to the bank:

First mortgage       14%
Second mortgage      20%
Home improvement     20%
Personal overdraft   10%

The bank has a maximum foreseeable lending capability of £250 million and is further constrained by the policies:

1. first mortgages must be at least 55% of all mortgages issued and at least 25% of all loans issued (in £ terms)
2. second mortgages cannot exceed 25% of all loans issued (in £ terms)
3. to avoid public displeasure and the introduction of a new windfall tax the average interest rate on all loans must not exceed 15%.

Formulate the bank's loan problem as an LP so as to maximise interest income whilst satisfying the policy limitations.

Note here that these policy conditions, whilst potentially limiting the profit that the bank can make, also limit its exposure to risk in a particular area. It is a fundamental principle of risk reduction that risk is reduced by spreading money (appropriately) across different areas.

Financial planning solution

We follow the same approach as for the Two Mines example - namely

variables
constraints
objective.


Note here that as in all formulation exercises we are translating a verbal description of the problem into an equivalent mathematical description.

A useful tip when formulating LP's is to express the variables, constraints and objective in words before attempting to express them in mathematics.

Variables

Essentially we are interested in the amount (in £) the bank has loaned to customers in each of the four different areas (not in the actual number of such loans). Hence let

xi = amount loaned in area i in £m (where i=1 corresponds to first mortgages, i=2 to second mortgages etc)

and note that xi >= 0 (i=1,2,3,4).

Note here that it is conventional in LP's to have all variables >= 0. Any variable (X, say) which can be positive or negative can be written as X1-X2 (the difference of two new variables) where X1 >= 0 and X2 >= 0.

Constraints

(a) limit on amount lent

x1 + x2 + x3 + x4 <= 250

Note here the use of <= rather than = (following the general rule we put forward in the Two Mines problem, namely given a choice between an equality and an inequality choose the inequality (as this allows for more flexibility in optimising the objective function)).

(b) policy condition 1

x1 >= 0.55(x1 + x2)

i.e. first mortgages >= 0.55(total mortgage lending) and also

x1 >= 0.25(x1 + x2 + x3 + x4)

i.e. first mortgages >= 0.25(total loans)

(c) policy condition 2

x2 <= 0.25(x1 + x2 + x3 + x4)


(d) policy condition 3 - we know that the total annual interest is 0.14x1 + 0.20x2 + 0.20x3 + 0.10x4 on total loans of (x1 + x2 + x3 + x4). Hence the constraint relating to policy condition (3) is

0.14x1 + 0.20x2 + 0.20x3 + 0.10x4 <= 0.15(x1 + x2 + x3 + x4)

Note: whilst many of the constraints given above could be simplified by collecting together terms this is not strictly necessary until we come to solve the problem numerically and does tend to obscure the meaning of the constraints.

Objective

To maximise interest income (which is given above) i.e.

maximise 0.14x1 + 0.20x2 + 0.20x3 + 0.10x4

In case you are interested the optimal solution to this LP (solved using the package as dealt with later) is x1=208.33, x2=41.67 and x3=x4=0. Note here that this optimal solution is not unique - other variable values, e.g. x1=62.50, x2=0, x3=100 and x4=87.50, also satisfy all the constraints and have exactly the same (maximum) objective function value of 37.5.
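If you would like to check this yourself, below is a minimal sketch using Python and SciPy's linprog routine (my assumption - it is not the package referred to in these notes); it simply restates the constraints above, with the policy conditions rearranged so that all variables sit on the left-hand side:

    from scipy.optimize import linprog

    # maximise 0.14x1 + 0.20x2 + 0.20x3 + 0.10x4, i.e. minimise the negative
    c = [-0.14, -0.20, -0.20, -0.10]
    A_ub = [
        [1, 1, 1, 1],                # total lending <= 250
        [-0.45, 0.55, 0, 0],         # x1 >= 0.55(x1 + x2)
        [-0.75, 0.25, 0.25, 0.25],   # x1 >= 0.25(total loans)
        [-0.25, 0.75, -0.25, -0.25], # x2 <= 0.25(total loans)
        [-0.01, 0.05, 0.05, -0.05],  # average interest rate <= 15%
    ]
    b_ub = [250, 0, 0, 0, 0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # variables default to >= 0
    print(res.x, -res.fun)                   # maximum interest income should be 37.5

Because the optimal solution is not unique the solver may report either of the variable settings given above (or indeed another one), but the objective value will still be 37.5.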

Blending problem

Consider the example of a manufacturer of animal feed who is producing feed mix for dairy cattle. In our simple example the feed mix contains two active ingredients and a filler to provide bulk. One kg of feed mix must contain a minimum quantity of each of four nutrients as below:

Nutrient   A    B    C    D
gram       90   50   20   2

The ingredients have the following nutrient values and cost

                         A     B     C    D    Cost/kg
Ingredient 1 (gram/kg)   100   80    40   10   40
Ingredient 2 (gram/kg)   200   150   20   -    60

What should be the amounts of active ingredients and filler in one kg of feed mix?

Blending problem solution

Variables

In order to solve this problem it is best to think in terms of one kilogram of feed mix. That kilogram is made up of three parts - ingredient 1, ingredient 2 and filler so let:


x1 = amount (kg) of ingredient 1 in one kg of feed mix
x2 = amount (kg) of ingredient 2 in one kg of feed mix
x3 = amount (kg) of filler in one kg of feed mix

where x1 >= 0, x2 >= 0 and x3 >= 0.

Essentially these variables (x1, x2 and x3) can be thought of as the recipe telling us how to make up one kilogram of feed mix.

Constraints

nutrient constraints

100x1 + 200x2 >= 90 (nutrient A)
80x1 + 150x2 >= 50 (nutrient B)
40x1 + 20x2 >= 20 (nutrient C)
10x1 >= 2 (nutrient D)

Note the use of an inequality rather than an equality in these constraints, following the rule we put forward in the Two Mines example, where we assume that the nutrient levels we want are lower limits on the amount of nutrient in one kg of feed mix.

balancing constraint (an implicit constraint due to the definition of the variables)

x1 + x2 + x3 = 1

Objective

Presumably to minimise cost, i.e.

minimise 40x1 + 60x2

which gives us our complete LP model for the blending problem.

In case you are interested the optimal solution to this LP (solved using the package as dealt with later) is x1= 0.3667, x2=0.2667 and x3=0.3667 to four decimal places.
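As a check, the same model can be set up in a few lines of Python with SciPy's linprog (an assumed tool, not the package these notes use); the filler x3 appears only in the balancing equation and carries zero cost:

    from scipy.optimize import linprog

    # variables: x1 (ingredient 1), x2 (ingredient 2), x3 (filler), kg per kg of mix
    c = [40, 60, 0]                  # filler assumed to cost nothing
    A_ub = [                         # nutrient minima, ">=" written as "<="
        [-100, -200, 0],             # nutrient A: 100x1 + 200x2 >= 90
        [-80, -150, 0],              # nutrient B
        [-40, -20, 0],               # nutrient C
        [-10, 0, 0],                 # nutrient D
    ]
    b_ub = [-90, -50, -20, -2]
    A_eq = [[1, 1, 1]]               # x1 + x2 + x3 = 1 (one kg of feed mix)
    b_eq = [1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(res.x, res.fun)            # expect roughly (0.3667, 0.2667, 0.3667), cost 30.67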

Obvious extensions/uses for this LP model include:

increasing the number of nutrients considered
increasing the number of possible ingredients considered - more ingredients can never increase the overall cost (other things being unchanged), and may lead to a decrease in overall cost
placing both upper and lower limits on nutrients
dealing with cost changes
dealing with supply difficulties
filler cost


Blending problems of this type were, in fact, some of the earliest applications of LP (for human nutrition during rationing) and are still widely used in the production of animal feedstuffs.

Production planning problem

A company manufactures four variants of the same product and in the final part of the manufacturing process there are assembly, polishing and packing operations. For each variant the time required for these operations is shown below (in minutes) as is the profit per unit sold.

            Assembly   Polish   Pack   Profit (£)
Variant 1   2          3        2      1.50
Variant 2   4          2        3      2.50
Variant 3   3          3        2      3.00
Variant 4   7          4        5      4.50

Given the current state of the labour force the company estimate that, each year, they have 100000 minutes of assembly time, 50000 minutes of polishing time and 60000 minutes of packing time available. How many of each variant should the company make per year and what is the associated profit?

Suppose now that the company is free to decide how much time to devote to each of the three operations (assembly, polishing and packing) within the total allowable time of 210000 (= 100000 + 50000 + 60000) minutes. How many of each variant should the company make per year and what is the associated profit?

Production planning solution

Variables

Let:

xi be the number of units of variant i (i=1,2,3,4) made per year
Tass be the number of minutes used in assembly per year
Tpol be the number of minutes used in polishing per year
Tpac be the number of minutes used in packing per year

where xi >= 0 i=1,2,3,4 and Tass, Tpol, Tpac >= 0

Constraints

(a) operation time definition

Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)
Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)


(b) operation time limits

The operation time limits depend upon the situation being considered. In the first situation, where the maximum time that can be spent on each operation is specified, we simply have:

Tass <= 100000 (assembly)
Tpol <= 50000 (polish)
Tpac <= 60000 (pack)

In the second situation, where the only limitation is on the total time spent on all operations, we simply have:

Tass + Tpol + Tpac <= 210000 (total time)

Objective

Presumably to maximise profit - hence we have

maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4

which gives us the complete formulation of the problem.

We shall solve this particular problem later in the course.

Factory planning problem

Under normal working conditions a factory produces up to 100 units of a certain product in each of four consecutive time periods at costs which vary from period to period as shown in the table below.

Additional units can be produced by overtime working. The maximum quantity and costs are shown in the table below, together with the forecast demands for the product in each of the four time periods.

Time     Demand    Normal Production   Overtime Production   Overtime Production
Period   (units)   Cost (£K/unit)      Capacity (units)      Cost (£K/unit)
1        130       6                   60                    8
2        80        4                   65                    6
3        125       8                   70                    10
4        195       9                   60                    11

It is possible to hold up to 70 units of product in store from one period to the next at a cost of £1.5K per unit per period. (This figure of £1.5K per unit per period is known as a stock-holding cost and represents the fact that we are incurring costs associated with the storage of stock).

It is required to determine the production and storage schedule which will meet the stated demands over the four time periods at minimum cost given that at the start of period 1 we have 15 units in stock. Formulate this problem as an LP.

Factory planning solution

Variables

The decisions that need to be made relate to the amount to produce in normal/overtime working each period. Hence let:

xt = number of units produced by normal working in period t (t=1,2,3,4), where xt >= 0
yt = number of units produced by overtime working in period t (t=1,2,3,4), where yt >= 0

In fact, for this problem, we also need to decide how much stock we carry over from one period to the next so let:

It = number of units in stock at the end of period t (t=0,1,2,3,4)

Constraints

production limits

xt <= 100 t=1,2,3,4

y1 <= 60 y2 <= 65 y3 <= 70 y4 <= 60

limit on space for stock carried over

It <= 70 t=1,2,3,4

we have an inventory continuity equation of the form

closing stock = opening stock + production - demand

then assuming

opening stock in period t = closing stock in period t-1 and that production in period t is available to meet demand in period t


we have that

I1 = I0 + (x1 + y1) - 130
I2 = I1 + (x2 + y2) - 80
I3 = I2 + (x3 + y3) - 125
I4 = I3 + (x4 + y4) - 195

where I0 = 15

Note here that inventory continuity equations of the type shown above are common in production planning problems involving more than one time period. Essentially the inventory variables (It) and the inventory continuity equations link together the time periods being considered and represent a physical accounting for stock.

demand must always be met - i.e. no "stock-outs". This is equivalent to saying that the opening stock in period t plus the production in period t must be greater than (or equal to) the demand in period t, i.e.

I0 + (x1 + y1) >= 130
I1 + (x2 + y2) >= 80
I2 + (x3 + y3) >= 125
I3 + (x4 + y4) >= 195

However these equations can be viewed in another way. Considering the inventory continuity equations we have that the above equations which ensure that demand is always met can be rewritten as:

I1 >= 0 I2 >= 0 I3 >= 0 I4 >= 0

Objective

To minimise cost - which consists of the cost of ordinary working plus the cost of overtime working plus the cost of carrying stock over (1.5K per unit). Hence the objective is:

minimise

(6x1 + 4x2 + 8x3 + 9x4) + (8y1 + 6y2 + 10y3 + 11y4) + (1.5I0 + 1.5I1 + 1.5I2 + 1.5I3 + 1.5I4)

Note here that we have assumed that if we get an answer involving fractional variable values this is acceptable (since the number of units required each period is reasonably large this should not cause too many problems).


In case you are interested the optimal solution to this LP (solved using the package as dealt with later) is x1=x2=x3=x4=100; y1=15, y2=50, y3=0 and y4=50; I0=15, I1=0, I2=70, I3=45 and I4=0, with the minimal objective function value being 3865.
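If you want to verify these figures, here is a minimal sketch in Python using SciPy's linprog (my assumption - not the package used in these notes), with the inventory continuity equations entered as equality constraints:

    from scipy.optimize import linprog

    demand = [130, 80, 125, 195]
    normal_cost = [6, 4, 8, 9]            # £K per unit, periods 1..4
    over_cost = [8, 6, 10, 11]
    over_cap = [60, 65, 70, 60]
    I0, hold = 15, 1.5                    # opening stock and stock-holding cost

    # variable order: x1..x4 (normal), y1..y4 (overtime), I1..I4 (closing stock)
    c = normal_cost + over_cost + [hold] * 4
    A_eq, b_eq = [], []
    for t in range(4):                    # It = It-1 + xt + yt - demand
        row = [0.0] * 12
        row[t] = -1.0                     # -xt
        row[4 + t] = -1.0                 # -yt
        row[8 + t] = 1.0                  # +It
        if t > 0:
            row[8 + t - 1] = -1.0         # -It-1
        A_eq.append(row)
        b_eq.append((I0 if t == 0 else 0) - demand[t])
    bounds = [(0, 100)] * 4 + [(0, cap) for cap in over_cap] + [(0, 70)] * 4
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x, res.fun + hold * I0)     # adding the constant 1.5*I0 should give 3865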

Note:

As discussed above assuming It >= 0 t=1,2,3,4 means "no stock-outs" i.e. we need a production plan in which sufficient is produced to ensure that demand is always satisfied.

Allowing It (t=1,2,3,4) to be unrestricted (positive or negative) means that we may end up with a production plan in which demand is unsatisfied in period t (It < 0). This unsatisfied demand will be carried forward to the next period (when it will be satisfied if production is sufficient, carried forward again otherwise).

If It is allowed to be negative then we need to amend the objective to ensure that we correctly account for stock-holding costs (and possibly to account for stock-out costs).

If we get a physical loss of stock over time (e.g. due to damage, pilferage, etc) then this can be easily accounted for. For example if we lose (on average) 2% of stock each period then multiply the right-hand side of the inventory continuity equation by 0.98. If this is done then we often include a term in the objective function to account financially for the loss of stock.

If production is not immediately available to meet customer demand then the appropriate time delay can be easily incorporated into the inventory continuity equation. For example a 2 period time delay for the problem dealt with above means replace (xt + yt) in the inventory continuity equation for It by (xt-2 + yt-2).

In practice we would probably deal with the situation described above on a "rolling horizon" basis in that we would get an initial production plan based on current data and then, after one time period (say), we would update our LP and resolve to get a revised production plan. In other words even though we plan for a specific time horizon, here 4 months, we would only even implement the plan for the first month, so that we are always adjusting our 4 month plan to take account of future conditions as our view of the future changes. We illustrate this below.

Period   1  2  3  4  5  6  7  8        P = plan
         P  P  P  P                    D = do (follow) the plan in a period
         D  P  P  P  P
            D  P  P  P  P
               D  P  P  P  P

This rolling horizon approach would be preferable to carrying out the plan for 4 time periods and then producing a new plan for the next 4 time periods, such as shown below.

Period   1  2  3  4  5  6  7  8
         P  P  P  P
         D  D  D  D  P  P  P  P
                     D  D  D  D


Linear programming formulation examples

Linear programming example 1996 MBA exam

A cargo plane has three compartments for storing cargo: front, centre and rear. These compartments have the following limits on both weight and space:

Compartment   Weight capacity (tonnes)   Space capacity (cubic metres)
Front         10                         6800
Centre        16                         8700
Rear          8                          5300

Furthermore, the weight of the cargo in the respective compartments must be the same proportion of that compartment's weight capacity to maintain the balance of the plane.

The following four cargoes are available for shipment on the next flight:

Cargo   Weight (tonnes)   Volume (cubic metres/tonne)   Profit (£/tonne)
C1      18                480                           310
C2      15                650                           380
C3      23                580                           350
C4      12                390                           285

Any proportion of these cargoes can be accepted. The objective is to determine how much (if any) of each cargo C1, C2, C3 and C4 should be accepted and how to distribute each among the compartments so that the total profit for the flight is maximised.

Formulate the above problem as a linear program.
What assumptions are made in formulating this problem as a linear program?
Briefly describe the advantages of using a software package to solve the above linear program, over a judgemental approach to this problem.

Solution

Variables

We need to decide how much of each of the four cargoes to put in each of the three compartments. Hence let:

xij be the number of tonnes of cargo i (i=1,2,3,4 for C1, C2, C3 and C4 respectively) that is put into compartment j (j=1 for Front, j=2 for Centre and j=3 for Rear) where xij >=0 i=1,2,3,4; j=1,2,3

Note here that we are explicitly told we can split the cargoes into any proportions (fractions) that we like.

Constraints


cannot pack more of each of the four cargoes than we have available

x11 + x12 + x13 <= 18
x21 + x22 + x23 <= 15
x31 + x32 + x33 <= 23
x41 + x42 + x43 <= 12

the weight capacity of each compartment must be respected

x11 + x21 + x31 + x41 <= 10
x12 + x22 + x32 + x42 <= 16
x13 + x23 + x33 + x43 <= 8

the volume (space) capacity of each compartment must be respected

480x11 + 650x21 + 580x31 + 390x41 <= 6800
480x12 + 650x22 + 580x32 + 390x42 <= 8700
480x13 + 650x23 + 580x33 + 390x43 <= 5300

the weight of the cargo in the respective compartments must be the same proportion of that compartment's weight capacity to maintain the balance of the plane

[x11 + x21 + x31 + x41]/10 = [x12 + x22 + x32 + x42]/16 = [x13 + x23 + x33 + x43]/8

Objective

The objective is to maximise total profit, i.e.

maximise 310[x11+ x12+x13] + 380[x21+ x22+x23] + 350[x31+ x32+x33] + 285[x41+ x42+x43]
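For what it is worth, the formulation above can be fed straight to an LP solver; the sketch below uses Python and SciPy's linprog (an assumption on my part, not the package these notes refer to) with the variables x11,...,x43 flattened into a single vector:

    from scipy.optimize import linprog

    weight = [18, 15, 23, 12]            # tonnes of C1..C4 available
    volume = [480, 650, 580, 390]        # cubic metres per tonne
    profit = [310, 380, 350, 285]        # £ per tonne
    wcap = [10, 16, 8]                   # compartment weight capacities (F, C, R)
    vcap = [6800, 8700, 5300]            # compartment space capacities

    # x[i][j] flattened to index 3*i + j
    c = [-profit[i] for i in range(4) for j in range(3)]
    A_ub, b_ub = [], []
    for i in range(4):                   # cannot pack more of a cargo than is available
        row = [0.0] * 12
        for j in range(3):
            row[3 * i + j] = 1.0
        A_ub.append(row); b_ub.append(weight[i])
    for j in range(3):                   # weight and space capacity of each compartment
        wrow, vrow = [0.0] * 12, [0.0] * 12
        for i in range(4):
            wrow[3 * i + j] = 1.0
            vrow[3 * i + j] = volume[i]
        A_ub.append(wrow); b_ub.append(wcap[j])
        A_ub.append(vrow); b_ub.append(vcap[j])
    A_eq, b_eq = [], []
    for j in (1, 2):                     # balance: front load/10 = other load/capacity
        row = [0.0] * 12
        for i in range(4):
            row[3 * i] = 1.0 / wcap[0]
            row[3 * i + j] = -1.0 / wcap[j]
        A_eq.append(row); b_eq.append(0.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(res.x, -res.fun)               # total profit for the flight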

The basic assumptions are:

that each cargo can be split into whatever proportions/fractions we desire
that each cargo can be split between two or more compartments if we so desire
that the cargo can be packed into each compartment (for example if the cargo was spherical it would not be possible to pack a compartment to volume capacity, some free space is inevitable in sphere packing)
all the data/numbers given are accurate

The advantages of using a software package to solve the above linear program, rather than a judgemental approach are:

actually maximise profit, rather than just believing that our judgemental solution maximises profit (we may have bad judgement, even if we have an MBA!)
makes the cargo loading decision one that we can solve in a routine operational manner on a computer, rather than having to exercise judgement each and every time we want to solve it
problems that can be appropriately formulated as linear programs are almost always better solved by computers than by people
can perform sensitivity analysis very easily using a computer

Linear programming example 1995 MBA exam

Briefly describe the main steps in using mathematical modelling to support management.

A canning company operates two canning plants. The growers are willing to supply fresh fruits in the following amounts:

S1: 200 tonnes at £11/tonne
S2: 310 tonnes at £10/tonne
S3: 420 tonnes at £9/tonne

Shipping costs in £ per tonne are:

           To: Plant A   Plant B
From: S1       3         3.5
      S2       2         2.5
      S3       6         4

Plant capacities and labour costs are:

              Plant A      Plant B
Capacity      460 tonnes   560 tonnes
Labour cost   £26/tonne    £21/tonne

The canned fruits are sold at £50/tonne to the distributors. The company can sell at this price all they can produce.

The objective is to find the best mixture of the quantities supplied by the three growers to the two plants so that the company maximises its profits.

Formulate the problem as a linear program and explain it.
Explain the meaning of the dual values associated with the supply and plant capacity constraints.
What assumptions have you made in expressing the problem as a linear program?

Solution

The main steps in using mathematical modelling to support management are:


1. Problem identification
   o Diagnosis of the problem from its symptoms if not obvious (i.e. what is the problem?)
   o Delineation of the subproblem to be studied. Often we have to ignore parts of the entire problem.
   o Establishment of objectives, limitations and requirements.
2. Formulation as a mathematical model
3. Model validation (or algorithm validation)
   o Model validation involves running the algorithm for the model on the computer in order to ensure:
        the input data is free from errors
        the computer program is bug-free (or at least there are no outstanding bugs)
        the computer program correctly represents the model we are attempting to validate
        the results from the algorithm seem reasonable (or if they are surprising we can at least understand why they are surprising).
4. Solution of the model
   o Standard computer packages, or specially developed algorithms, can be used to solve the model.
   o In practice, a "solution" often involves very many solutions under varying assumptions to establish sensitivity.
5. Implementation
   o This phase may involve the implementation of the results of the study or the implementation of the algorithm for solving the model as an operational tool (usually in a computer package).

To formulate the problem given in the question as a linear program we need to define:

variables constraints objective

Variables

We need to decide how much to supply from each of the three growers to each of the two canning plants. Hence let xij be the number of tonnes supplied from grower i (i=1,2,3 for S1, S2 and S3 respectively) to plant j (j=1 for Plant A and j=2 for Plant B) where xij >=0 i=1,2,3; j=1,2

Constraints

cannot supply more than a grower has available - a supply constraint


x11 + x12 <= 200
x21 + x22 <= 310
x31 + x32 <= 420

the capacity of each plant must be respected - a capacity constraint

x11 + x21 + x31 <= 460 x12 + x22 + x32 <= 560

Objective

The objective is to maximise total profit, i.e.

maximise revenue - grower supply cost - grower shipping cost - plant labour cost

and this is

maximise 50SUM{i=1,2,3}SUM{j=1,2}xij - 11(x11+x12) - 10(x21+x22) - 9(x31+x32) - 3x11 - 2x21 - 6x31 - 3.5x12 - 2.5x22 - 4x32 - 26SUM{i=1,2,3}xi1 - 21SUM{i=1,2,3}xi2
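A quick way to see what this gives numerically is to compute the net profit per tonne for each grower/plant pair and hand the result to an LP solver; the sketch below uses Python and SciPy's linprog (assumed here, not the package used in these notes):

    from scipy.optimize import linprog

    supply_cost = [11, 10, 9]            # £ per tonne from S1, S2, S3
    ship = [[3, 3.5], [2, 2.5], [6, 4]]  # £ per tonne to plant A, B
    labour = [26, 21]                    # £ per tonne at plant A, B
    # net profit per tonne = 50 (selling price) - supply - shipping - labour
    profit = [[50 - supply_cost[i] - ship[i][j] - labour[j] for j in range(2)]
              for i in range(3)]

    # variable order: x11, x12, x21, x22, x31, x32  (grower i, plant j)
    c = [-profit[i][j] for i in range(3) for j in range(2)]
    A_ub = [
        [1, 1, 0, 0, 0, 0],              # grower S1 supply <= 200
        [0, 0, 1, 1, 0, 0],              # grower S2 supply <= 310
        [0, 0, 0, 0, 1, 1],              # grower S3 supply <= 420
        [1, 0, 1, 0, 1, 0],              # plant A capacity <= 460
        [0, 1, 0, 1, 0, 1],              # plant B capacity <= 560
    ]
    b_ub = [200, 310, 420, 460, 560]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)
    print(res.x, -res.fun)               # maximum weekly profit in £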

The dual values associated with the supply and plant capacity constraints in the optimal solution of the above linear program tell us by how much the optimal objective function value will change per unit change in the right-hand side of the corresponding constraint.

The basic assumptions are:

can ship from a grower any quantity we desire
no loss in weight in processing at the plant
no loss in weight in shipping
can sell all we produce
all the data/numbers given are accurate

Linear programming example 1993 UG exam

The production manager of a chemical plant is attempting to devise a shift pattern for his workforce. Each day of every working week is divided into three eight-hour shift periods (00:01-08:00, 08:01-16:00, 16:01-24:00) denoted by night, day and late respectively. The plant must be manned at all times and the minimum number of workers required for each of these shifts over any working week is as below:

        Mon   Tues   Wed   Thur   Fri   Sat   Sun
Night   5     3      2     4      3     2     2
Day     7     8      9     5      7     2     5
Late    9     10     10    7      11    2     2


The union agreement governing acceptable shifts for workers is as follows:

1. Each worker is assigned to work either a night shift or a day shift or a late shift and once a worker has been assigned to a shift they must remain on the same shift every day that they work.

2. Each worker works four consecutive days during any seven day period.

In total there are currently 60 workers.

Formulate the production manager's problem as a linear program.
Comment upon the advantages/disadvantages you foresee of formulating and solving this problem as a linear program.

Solution

Variables

The union agreement is such that any worker can only start their four consecutive work days on one of the seven days (Mon to Sun) and in one of the three eight-hour shifts (night, day, late).

Let:

Monday be day 1, Tuesday be day 2, ..., Sunday be day 7

Night be shift 1, Day be shift 2, Late be shift 3

then the variables are:

Nij the number of workers starting their four consecutive work days on day i (i=1,...,7) and shift j (j=1,...,3)

Note here that strictly these variables should be integer but, as we are explicitly told to formulate the problem as a linear program in part (a) of the question, we allow them to take fractional values.

Constraints

upper limit on the total number of workers of 60

SUM{i=1 to 7} SUM{j=1 to 3} Nij <= 60

since each worker can start his working week only once during the seven day, three shift, week

lower limit on the total number of workers required for each day/shift period


let Dij be the (known) number of workers required on day i (i=1,...,7) and shift period j (j=1,...,3) e.g. D53=11 (Friday, Late)

then the constraints are

Monday: N1j + N7j + N6j + N5j >= D1j j=1,...,3
Tuesday: N2j + N1j + N7j + N6j >= D2j j=1,...,3
Wednesday: N3j + N2j + N1j + N7j >= D3j j=1,...,3
Thursday: N4j + N3j + N2j + N1j >= D4j j=1,...,3
Friday: N5j + N4j + N3j + N2j >= D5j j=1,...,3
Saturday: N6j + N5j + N4j + N3j >= D6j j=1,...,3
Sunday: N7j + N6j + N5j + N4j >= D7j j=1,...,3

The logic here is straightforward, for example for Wednesday (day 3) the workers working shift j on day 3 either started on Wednesday (day 3, N3j) or on Tuesday (day 2, N2j) or on Monday (day 1, N1j) or on Sunday (day 7, N7j) - so the sum of these variables is the total number of workers on duty on day 3 in shift j and this must be at least the minimum number required (D3j).

Objective

It appears from the question that the production manager's objective is simply to find a feasible schedule so any objective is possible. Logically however he might be interested in reducing the size of the workforce so the objective function could be:

minimise SUM{i=1 to 7} SUM{j=1 to 3} Nij

where all variables Nij>=0 and continuous (i.e. can take fractional values).

This completes the formulation of the problem as a linear program.
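Since the coverage constraints all have the same cyclic structure, they are easy to generate in a loop; the following minimal sketch does so in Python with SciPy's linprog (my assumption - these notes use a different package) and minimises the total number of workers used:

    from scipy.optimize import linprog

    # minimum staffing D[day][shift]; days Mon..Sun, shifts Night, Day, Late
    D = [[5, 7, 9], [3, 8, 10], [2, 9, 10], [4, 5, 7],
         [3, 7, 11], [2, 2, 2], [2, 5, 2]]

    # Nij = workers starting four consecutive days on day i, shift j; index 3*i + j
    n = 21
    c = [1.0] * n                        # minimise the total workforce used
    A_ub = [[1.0] * n]                   # at most 60 workers in total
    b_ub = [60.0]
    for d in range(7):
        for j in range(3):
            row = [0.0] * n              # who is on duty on day d, shift j?
            for back in range(4):        # anyone who started within the last four days
                i = (d - back) % 7
                row[3 * i + j] = -1.0    # ">= D" written as "<= -D"
            A_ub.append(row)
            b_ub.append(-float(D[d][j]))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)
    print(res.fun, res.x)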

Some of the advantages and disadvantages of solving this problem as a linear program are:

really need variable values which are integer
some workers will always end up working weekends
how do we choose the workers to use, e.g. if N43=7 which 7 workers do we choose to begin their work week on day 4 working shift 3
what happens if workers fail to report in (e.g. if they are sick) - we may fall below the minimum number required
the approach above enables us to deal with the problem in a systematic fashion
have the potential to reduce the size of the workforce by more effectively matching the resources to the needs
able to investigate changes (e.g. in shift patterns, workers needed per day, etc) very easily.


Linear programming example 1991 UG exam

A company manufactures four products (1,2,3,4) on two machines (X and Y). The time (in minutes) to process one unit of each product on each machine is shown below:

            Machine X   Machine Y
Product 1   10          27
Product 2   12          19
Product 3   13          33
Product 4   8           23

The profit per unit for each product (1,2,3,4) is £10, £12, £17 and £8 respectively. Product 1 must be produced on both machines X and Y but products 2, 3 and 4 can be produced on either machine.

The factory is very small and this means that floor space is very limited. Only one week's production is stored in 50 square metres of floor space where the floor space taken up by each product is 0.1, 0.15, 0.5 and 0.05 (square metres) for products 1, 2, 3 and 4 respectively.

Customer requirements mean that the amount of product 3 produced should be related to the amount of product 2 produced. Over a week approximately twice as many units of product 2 should be produced as product 3.

Machine X is out of action (for maintenance/because of breakdown) 5% of the time and machine Y 7% of the time.

Assuming a working week 35 hours long formulate the problem of how to manufacture these products as a linear program.

Solution

Variables

Essentially we are interested in the amount produced on each machine. Hence let:

xi = amount of product i (i=1,2,3,4) produced on machine X per week

yi = amount of product i (i=2,3,4) produced on machine Y per week

where xi >= 0 i=1,2,3,4 and yi >= 0 i=2,3,4

Note here that as product 1 must be processed on both machines X and Y we do not define y1.

Constraints


floor space

0.1x1 + 0.15(x2 + y2) + 0.5(x3 + y3) + 0.05(x4 + y4) <= 50

customer requirements

x2 + y2 = 2(x3 + y3)

Note here that as this is only an approximate (±5% say) constraint we might do better to express this constraint as

0.95[2(x3 + y3)] <= x2 + y2 <= 1.05[2(x3 + y3)]

available time

10x1 + 12x2 + 13x3 + 8x4 <= 0.95(35)(60) (machine X)

27x1 + 19y2 + 33y3 + 23y4 <= 0.93(35)(60) (machine Y)

Objective

maximise profit, i.e.

maximise 10x1 + 12(x2 + y2) + 17(x3 + y3) + 8(x4 + y4)
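As a sanity check on the formulation, the model can be typed into an LP solver more or less as it stands; the sketch below uses Python and SciPy's linprog (assumed, not the package used in these notes):

    from scipy.optimize import linprog

    # variable order: x1, x2, x3, x4 (machine X), y2, y3, y4 (machine Y)
    c = [-10, -12, -17, -8, -12, -17, -8]           # maximise profit
    A_ub = [
        [0.1, 0.15, 0.5, 0.05, 0.15, 0.5, 0.05],    # floor space <= 50
        [10, 12, 13, 8, 0, 0, 0],                   # machine X time <= 0.95*35*60
        [27, 0, 0, 0, 19, 33, 23],                  # machine Y time <= 0.93*35*60
    ]
    b_ub = [50, 0.95 * 35 * 60, 0.93 * 35 * 60]
    A_eq = [[0, 1, -2, 0, 1, -2, 0]]                # x2 + y2 = 2(x3 + y3)
    b_eq = [0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(res.x, -res.fun)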

Linear programming example 1987 UG exam

A company is planning its production schedule over the next six months (it is currently the end of month 2). The demand (in units) for its product over that timescale is as shown below:

Month    3      4      5      6      7      8
Demand   5000   6000   6500   7000   8000   9500

The company currently has in stock: 1000 units which were produced in month 2; 2000 units which were produced in month 1; 500 units which were produced in month 0.

The company can only produce up to 6000 units per month and the managing director has stated that stocks must be built up to help meet demand in months 5, 6, 7 and 8. Each unit produced costs £15 and the cost of holding stock is estimated to be £0.75 per unit per month (based upon the stock held at the beginning of each month).

The company has a major problem with deterioration of stock in that the stock inspection which takes place at the end of each month regularly identifies ruined stock (costing the company £25 per unit). It is estimated that, on average, the stock inspection at the end of month t will show that 11% of the units in stock which were produced in month t are ruined; 47% of the units in stock which were produced in month t-1 are ruined; 100% of the units in stock which were produced in month t-2 are ruined. The stock inspection for month 2 is just about to take place.

The company wants a production plan for the next six months that avoids stockouts. Formulate their problem as a linear program.

Because of the stock deterioration problem the managing director is thinking of directing that customers should always be supplied with the oldest stock available. How would this affect your formulation of the problem?

Solution

Variables

Let

Pt be the production (units) in month t (t=3,...,8)

Iit be the number of units in stock at the end of month t which were produced in month i (i=t,t-1,t-2)

Sit be the number of units in stock at the beginning of month t which were produced in month i (i=t-1,t-2)

dit be the demand in month t met from units produced in month i (i=t,t-1,t-2)

Constraints

production limit

Pt <= 6000

initial stock position

I22 = 1000

I12 = 2000

I02 = 500

relate opening stock in month t to closing stock in previous months

St-1,t = 0.89It-1,t-1


St-2,t = 0.53It-2,t-1

inventory continuity equation where we assume we can meet demand in month t from production in month t. Let Dt represent the (known) demand for the product in month t (t=3,4,...,8) then

closing stock = opening stock + production - demand

and we have

It,t = 0 + Pt - dt,t

It-1,t = St-1,t + 0 - dt-1,t

It-2,t = St-2,t + 0 - dt-2,t

where

dt,t + dt-1,t + dt-2,t = Dt

no stockouts

all inventory (I,S) and d variables >= 0

Objective

Presumably to minimise cost and this is given by

SUM{t=3 to 8}15Pt + SUM{t=3 to 9}0.75(St-1,t + St-2,t) + SUM{t=3 to 8}25(0.11It,t + 0.47It-1,t + 1.0It-2,t)

Note because we are told to formulate this problem as a linear program we assume all variables are fractional - in reality they are likely to be quite large and so this is a reasonable approximation to make (also a problem occurs with finding integer values which satisfy (for example) St-1,t=0.89It-1,t-1 unless this is assumed).

If we want to ensure that demand is met from the oldest stock first then we can conclude that this is already assumed in the numerical solution to our formulation of the problem since (plainly) it worsens the objective to age stock unnecessarily and so in minimising costs we will automatically supply (via the dit variables) the oldest stock first to satisfy demand (although the managing director needs to tell the employees to issue the oldest stock first).

Linear programming example 1986 UG exam


A company assembles four products (1, 2, 3, 4) from delivered components. The profit per unit for each product (1, 2, 3, 4) is £10, £15, £22 and £17 respectively. The maximum demand in the next week for each product (1, 2, 3, 4) is 50, 60, 85 and 70 units respectively.

There are three stages (A, B, C) in the manual assembly of each product and the man-hours needed for each stage per unit of product are shown below:

          Product 1   Product 2   Product 3   Product 4
Stage A   2           2           1           1
Stage B   2           4           1           2
Stage C   3           6           1           5

The nominal time available in the next week for assembly at each stage (A, B, C) is 160, 200 and 80 man-hours respectively.

It is possible to vary the man-hours spent on assembly at each stage such that workers previously employed on stage B assembly could spend up to 20% of their time on stage A assembly and workers previously employed on stage C assembly could spend up to 30% of their time on stage A assembly.

Production constraints also require that the ratio (product 1 units assembled)/(product 4 units assembled) must lie between 0.9 and 1.15.

Formulate the problem of deciding how much to produce next week as a linear program.

Solution

Variables

Let

xi = amount of product i produced (i=1,2,3,4)

tBA be the amount of time transferred from B to A

tCA be the amount of time transferred from C to A

Constraints

maximum demand

x1 <= 50

x2 <= 60


x3 <= 85

x4 <= 70

ratio

0.9 <= (x1/x4) <= 1.15

i.e. 0.9x4 <= x1 and x1 <= 1.15x4

work-time

2x1 + 2x2 + x3 + x4 <= 160 + tBA + tCA

2x1 + 4x2 + x3 + 2x4 <= 200 - tBA

3x1 + 6x2 + x3 + 5x4 <= 80 - tCA

limit on transferred time

tBA <= 0.2(200)

tCA <= 0.3(80)

all variables >= 0

Objective

maximise 10x1 + 15x2 + 22x3 + 17x4

Note we neglect the fact that the xi variables should be integer because we are told to formulate the problem as an LP.

Linear programming example

A company makes three products and has available 4 workstations. The production time (in minutes) per unit produced varies from workstation to workstation (due to different manning levels) as shown below:

            Workstation 1   Workstation 2   Workstation 3   Workstation 4
Product 1   5               7               4               10
Product 2   6               12              8               15
Product 3   13              14              9               17


Similarly the profit (£) contribution (contribution to fixed costs) per unit varies from workstation to workstation as below

            Workstation 1   Workstation 2   Workstation 3   Workstation 4
Product 1   10              8               6               9
Product 2   18              20              15              17
Product 3   15              16              13              17

If, one week, there are 35 working hours available at each workstation, how much of each product should be produced, given that we need at least 100 units of product 1, 150 units of product 2 and 100 units of product 3? Formulate this problem as an LP.

Solution

Variables

At first sight we are trying to decide how much of each product to make. However on closer inspection it is clear that we need to decide how much of each product to make at each workstation. Hence let

xij = amount of product i (i=1,2,3) made at workstation j (j=1,2,3,4) per week.

Although (strictly) all the xij variables should be integer they are likely to be quite large and so we let them take fractional values and ignore any fractional parts in the numerical solution. Note too that the question explicitly asks us to formulate the problem as an LP rather than as an IP.

Constraints

We first formulate each constraint in words and then in a mathematical way.

limit on the number of minutes available each week for each workstation

5x11 + 6x21 + 13x31 <= 35(60)

7x12 + 12x22 + 14x32 <= 35(60)

4x13 + 8x23 + 9x33 <= 35(60)

10x14 + 15x24 + 17x34 <= 35(60)

lower limit on the total amount of each product produced

x11 + x12 + x13 + x14 >= 100


x21 + x22 + x23 + x24 >= 150

x31 + x32 + x33 + x34 >= 100

Objective

Presumably to maximise profit - hence we have

maximise

10x11 + 8x12 + 6x13 + 9x14 + 18x21 + 20x22 + 15x23 + 17x24 + 15x31 + 16x32 + 13x33 + 17x34
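Again, generating the constraints in a loop keeps the data entry manageable; here is a minimal sketch in Python using SciPy's linprog (an assumption - not the package these notes use):

    from scipy.optimize import linprog

    minutes = [5, 7, 4, 10,              # product 1 on workstations 1..4
               6, 12, 8, 15,             # product 2
               13, 14, 9, 17]            # product 3
    profit = [10, 8, 6, 9,
              18, 20, 15, 17,
              15, 16, 13, 17]

    # xij flattened to index 4*i + j (product i, workstation j)
    c = [-p for p in profit]             # maximise profit
    A_ub, b_ub = [], []
    for j in range(4):                   # 35*60 minutes available at each workstation
        row = [0.0] * 12
        for i in range(3):
            row[4 * i + j] = float(minutes[4 * i + j])
        A_ub.append(row); b_ub.append(35.0 * 60)
    for i, need in enumerate([100, 150, 100]):   # minimum output of each product
        row = [0.0] * 12
        for j in range(4):
            row[4 * i + j] = -1.0        # ">= need" written as "<= -need"
        A_ub.append(row); b_ub.append(-float(need))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)
    print(res.x, -res.fun)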

Production planning problem

Consider the production of tin cans which are stamped from metal sheets. A can consists of a main body and two ends. We have 4 possible stamping patterns (involving 2 different types (sizes) of metal sheet), as detailed in the table below.


We have the following information:

Pattern                 1    2    3    4
Type of sheet used      1    2    1    1
Number of main bodies   1    4    2    0
Number of ends          7    4    3    9
Amount of scrap         s1   s2   s3   s4
Time to stamp (hours)   t1   t2   t3   t4

Note here that the si (i=1,2,3,4) and the ti (i=1,2,3,4) are not variables but constants (which have a known value). Often in formulating LP's it is easier to use a symbol for a number rather than write out the number in full every time it occurs in a constraint or in the objective function.


Let P be the profit obtained from selling one can, C be the cost per unit of scrap, T be the total number of hours available per week, L1 be the number of metal sheets of type 1 which are available for stamping per week and L2 be the number of metal sheets of type 2 which are available for stamping per week.

At the start of the week there is nothing in stock. Each (unused) main body in stock at the end of the week incurs a stock-holding cost of c1. Similarly each (unused) end in stock at the end of the week incurs a stock-holding cost of c2. Assume that all cans produced one week are sold that week.

How many cans should be produced per week?

Production planning solution

Variables

Let

xi be the number of patterns of type i (i=1,2,3,4) stamped per week

y be the number of cans produced per week

Note xi >= 0 i=1,2,3,4 and y >= 0 and again we assume that the xi and y are large enough for fractional values not to be significant.

Constraints

time available

t1x1 + t2x2 + t3x3 + t4x4 <= T

sheet availability

x1 + x3 + x4 <= L1 (sheet 1)

x2 <= L2 (sheet 2)

number of cans produced

y = min[ (7x1+4x2+3x3+9x4)/2, (x1+4x2+2x3) ]

where the first term in this expression is the limit imposed upon y by the number of can ends produced and the second term in this expression is the limit imposed upon y by the number of can bodies produced. This constraint (because of the min[,] part) is not a linear constraint.


Objective

Presumably to maximise profit - hence

maximise

revenue - cost of scrap - stock-holding cost of unused main bodies - stock-holding cost of unused ends

i.e. maximise

Py - C(s1x1 + s2x2 + s3x3 + s4x4) - c1(x1 + 4x2 + 2x3 - y) - c2((7x1 + 4x2 + 3x3 + 9x4) - 2y)

As noted above this formulation of the problem is not an LP - however it is relatively easy (for this particular problem) to turn it into an LP by replacing the y = min[,] non-linear equation by two linear equations.

Suppose we replace the constraint

y = min[ (7x1+4x2+3x3+9x4)/2,(x1+4x2+2x3) ] (A)

by the two constraints

y <= (7x1+4x2+3x3+9x4)/2 (B)
y <= (x1+4x2+2x3) (C)

(which are both linear constraints) then we do have an LP and in the optimal solution of this LP either:

(a) constraint (B) or constraint (C) is satisfied with equality, in which case constraint (A) is also satisfied with equality; or

(b) neither constraint (B) nor constraint (C) is satisfied with equality, i.e. y < (7x1+4x2+3x3+9x4)/2 and y < (x1+4x2+2x3) - but in this case we can increase y (without changing any xi values), increasing the objective function (assuming P + c1 + 2c2 > 0) and contradicting the statement (above) that we already had the optimal solution.

Hence case (b) cannot occur and so case (a) is valid - replacing constraint (A) by constraints (B) and (C) generates a valid LP formulation of the problem.

Note that this problem illustrates that even if our initial formulation of the problem is non-linear we may be able to transform it into an LP.


Note too that it is relatively easy to extend the LP formulation of the problem to cope with the situation where can bodies/ends unused at the end of one week are available for production the following week.
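To see the linearisation in action you can plug in trial numbers for the constants; the sketch below does so in Python with SciPy's linprog (assumed, not the package used in these notes). The values chosen for si, ti, P, C, c1, c2, T, L1 and L2 are entirely hypothetical - the notes deliberately leave them symbolic:

    from scipy.optimize import linprog

    # hypothetical values for the constants (the notes leave these symbolic)
    s = [0.5, 0.3, 0.6, 0.4]             # scrap per sheet stamped with pattern i
    t = [0.2, 0.3, 0.25, 0.2]            # hours to stamp one sheet with pattern i
    P, C, c1, c2 = 5.0, 0.1, 0.2, 0.05   # can profit, scrap cost, body/end holding costs
    T, L1, L2 = 40, 150, 100             # hours, type 1 sheets, type 2 sheets per week

    bodies = [1, 4, 2, 0]                # main bodies per sheet of each pattern
    ends = [7, 4, 3, 9]                  # ends per sheet of each pattern

    # variable order: x1, x2, x3, x4, y; the objective
    # P*y - C*sum(si xi) - c1*(bodies produced - y) - c2*(ends produced - 2y)
    # is negated so that linprog (a minimiser) maximises it
    c = [C * s[i] + c1 * bodies[i] + c2 * ends[i] for i in range(4)] + [-(P + c1 + 2 * c2)]
    A_ub = [
        t + [0],                         # stamping time <= T
        [1, 0, 1, 1, 0],                 # sheets of type 1 <= L1 (patterns 1, 3, 4)
        [0, 1, 0, 0, 0],                 # sheets of type 2 <= L2 (pattern 2)
        [-e / 2 for e in ends] + [1],    # constraint (B): y <= (ends produced)/2
        [-b for b in bodies] + [1],      # constraint (C): y <= bodies produced
    ]
    b_ub = [T, L1, L2, 0, 0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)
    print(res.x, -res.fun)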

Production planning problem

A company is producing a product which requires, at the final assembly stage, three parts. These three parts can be produced by two different departments as detailed below.

               Production rate (units/hr)
               Part 1   Part 2   Part 3   Cost (£/hr)
Department 1   7        6        9        25.0
Department 2   6        11       5        12.5

One week, 1050 finished (assembled) products are needed (but up to 1200 can be produced if necessary). If department 1 has 100 working hours available, but department 2 has 110 working hours available, formulate the problem of minimising the cost of producing the finished (assembled) products needed this week as an LP, subject to the constraint that limited storage space means that a total of only 200 unassembled parts (of all types) can be stored at the end of the week.

Note: because of the way production is organised in the two departments it is not possible to produce, for example, only one or two parts in each department, e.g. one hour of working in department 1 produces 7 part 1 units, 6 part 2 units and 9 part 3 units and this cannot be altered.

Production planning solution

Variables

We need to decide the amount of time given over to the production of parts in each department (since we, obviously, may not make use of all the available working time) and also to decide the total number of finished (assembled) products made. Hence let:

xi = number of hours used in department i (i=1,2)

y = number of finished (assembled) products made

where xi >= 0 i=1,2 and y >= 0 and (as is usual) we assume that any fractional parts in the variables in the numerical solution of the LP are not significant.

Constraints

working hours available

x1 <= 100


x2 <= 110

number of assembled products produced

1050 <= y <= 1200

production constraints relating the hours worked to the number of assembled products

We produce (7x1 + 6x2) part 1 units, (6x1 + 11x2) part 2 units and (9x1 + 5x2) part 3 units. Now to ensure that the number of assembled products produced is exactly y we need at least y part 1 units, at least y part 2 units and at least y part 3 units. Hence we have the three constraints

7x1 + 6x2 >= y

6x1 + 11x2 >= y

9x1 + 5x2 >= y

the total number of parts (of all types) produced is (7x1 + 6x2) + (6x1 + 11x2) + (9x1 + 5x2) = 22x1 + 22x2. Since we produce exactly y assembled products the number of parts left over at the end of the week is (22x1 + 22x2) - 3y and hence the constraint relating to the limited storage space is given by

22x1 + 22x2 - 3y <= 200

Objective

minimise 25.0x1 + 12.5x2

Obvious extensions to this problem involve increasing (from the current value of 3) the number of parts needed for the finished product and changing the ratio of parts used in a finished product from its current value of 1:1:1.

Linear programming - solution

To get some insight into solving LP's consider the Two Mines problem that we had before - the LP formulation of the problem was:

minimise 180x + 160y

subject to

6x + y >= 12
3x + y >= 8
4x + 6y >= 24
x <= 5
y <= 5
x,y >= 0

Since there are only two variables in this LP problem we have the graphical representation of the LP given below with the feasible region (region of feasible solutions to the constraints associated with the LP) outlined.

To draw the diagram above we turn all inequality constraints into equalities and draw the corresponding lines on the graph (e.g. the constraint 6x + y >= 12 becomes the line 6x + y = 12 on the graph). Once a line has been drawn then it is a simple matter to work out which side of the line corresponds to all feasible solutions to the original inequality constraint (e.g. all feasible solutions to 6x + y >= 12 lie to the right of the line 6x + y = 12).

We determine the optimal solution to the LP by plotting (180x + 160y) = K (K constant) for varying K values (iso-profit lines). One such line (180x + 160y = 180) is shown dotted on the diagram. The smallest value of K (remember we are considering a minimisation problem) such that 180x + 160y = K goes through a point in the feasible region is the value of the optimal solution to the LP (and the corresponding point gives the optimal values of the variables).

Hence we can see that the optimal solution to the LP occurs at the vertex of the feasible region formed by the intersection of 3x + y = 8 and 4x + 6y = 24. Note here that it is inaccurate to attempt to read the values of x and y off the graph and instead we solve the simultaneous equations

3x + y = 8
4x + 6y = 24

to get x = 12/7 = 1.71 and y = 20/7 = 2.86 and hence the value of the objective function is given by 180x + 160y = 180(12/7) + 160(20/7) = 765.71

Hence the optimal solution has cost 765.71.
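The same answer drops out of any LP solver; for example, a minimal check in Python with SciPy's linprog (assumed here - it is not the package used in these notes):

    from scipy.optimize import linprog

    c = [180, 160]                              # minimise 180x + 160y
    A_ub = [[-6, -1], [-3, -1], [-4, -6]]       # the >= constraints written as <=
    b_ub = [-12, -8, -24]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5), (0, 5)])
    print(res.x, res.fun)                       # expect x = 1.71, y = 2.86, cost = 765.71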

It is clear that the above graphical approach to solving LP's can be used for LP's with two variables but (alas) most LP's have more than two variables. This brings us to the simplex algorithm for solving LP's.

Simplex

Note that in the example considered above the optimal solution to the LP occurred at a vertex (corner) of the feasible region. In fact it is true that for any LP (not just the one considered above) the optimal solution occurs at a vertex of the feasible region. This fact is the key to the simplex algorithm for solving LP's.

Essentially the simplex algorithm starts at one vertex of the feasible region and moves (at each iteration) to another (adjacent) vertex, improving (or leaving unchanged) the objective function as it does so, until it reaches the vertex corresponding to the optimal LP solution.

The simplex algorithm for solving linear programs (LP's) was developed by Dantzig in the late 1940's and since then a number of different versions of the algorithm have been developed. One of these later versions, called the revised simplex algorithm (sometimes known as the "product form of the inverse" simplex algorithm) forms the basis of most modern computer packages for solving LP's.

Although the basic simplex algorithm is relatively easy to understand and use, the fact that it is widely available in the form of computer packages means that I decided it was not worth teaching you the details of the simplex algorithm. Instead I decided to teach you some things about the output from a simplex based LP package.

LP output

Recall the production planning problem concerned with four variants of the same product which we formulated before as an LP. To remind you of it we repeat below the problem and our formulation of it.


Production planning problem

A company manufactures four variants of the same product and in the final part of the manufacturing process there are assembly, polishing and packing operations. For each variant the time required for these operations is shown below (in minutes) as is the profit per unit sold.

            Assembly   Polish   Pack   Profit (£)
Variant 1   2          3        2      1.50
Variant 2   4          2        3      2.50
Variant 3   3          3        2      3.00
Variant 4   7          4        5      4.50

Given the current state of the labour force the company estimate that, each year, they have 100000 minutes of assembly time, 50000 minutes of polishing time and 60000 minutes of packing time available. How many of each variant should the company make per year and what is the associated profit?

Suppose now that the company is free to decide how much time to devote to each of the three operations (assembly, polishing and packing) within the total allowable time of 210000 (= 100000 + 50000 + 60000) minutes. How many of each variant should the company make per year and what is the associated profit?

Production planning solution

Variables

Let:

xi be the number of units of variant i (i=1,2,3,4) made per year

Tass be the number of minutes used in assembly per year
Tpol be the number of minutes used in polishing per year
Tpac be the number of minutes used in packing per year

where xi >= 0 i=1,2,3,4 and Tass, Tpol, Tpac >= 0

Constraints

(a) operation time definition

Tass = 2x1 + 4x2 + 3x3 + 7x4 (assembly)
Tpol = 3x1 + 2x2 + 3x3 + 4x4 (polish)
Tpac = 2x1 + 3x2 + 2x3 + 5x4 (pack)

(b) operation time limits


The operation time limits depend upon the situation being considered. In the first situation, where the maximum time that can be spent on each operation is specified, we simply have:

Tass <= 100000 (assembly)
Tpol <= 50000 (polish)
Tpac <= 60000 (pack)

In the second situation, where the only limitation is on the total time spent on all operations, we simply have:

Tass + Tpol + Tpac <= 210000 (total time)

Objective

Presumably to maximise profit - hence we have

maximise 1.5x1 + 2.5x2 + 3.0x3 + 4.5x4

which gives us the complete formulation of the problem.

Solution - using the QSB package

Below we solve this LP using the LP option in the pc package associated with this course. The LP module in this is not as sophisticated, nor as powerful, as packages intended purely for solving LP but it is sufficient for the purposes of this course.

If you do not have access to this (or similar) pc package you can solve this LP here using a Web based LP package. Alternatively a free copy of one package (albeit of restricted capacity) is available here.

Using the package we have:


which sets up the input and then we have a spreadsheet type screen into which we enter the problem data, as below

There are a number of points to make:

In order to input an LP to the package we need to rearrange the constraints such that the right-hand side of the constraint is just a number (and hence all the variables are collected together on the left-hand side of the constraint).

Lower and upper bounds allows us to constrain the values a single variable may take, i.e. lower bound <= variable <= upper bound. We could, if we have wished, entered the restrictions on Tass, Tpol and Tpac above as bounds rather than as explicit constraints.

Always save your problem at regular intervals, and especially after input - if you do not then you may lose it (e.g. if you exit the package without first having saved the problem).


If you use the package you have various options relating to solving and displaying tableaux. If I were going to teach you the simplex algorithm in detail then the term "tableau" would become familiar to you. As I do not intend to go into that level of detail we solve without displaying any tableaux.

MPS - stands for Mathematical Programming System and is a standard data format (initially from IBM). You may encounter it if you ever solve LP's. All serious LP packages will read an MPS file and MPS files are now a common way of transferring LP problems between different people and different software packages. You can find out more about MPS format here. Since our simple package is designed for an educational environment however it cannot cope with MPS files.

We can show the problem in a more natural form (equation form) by using "Switch to Normal Model Form" to get:

The solution to this problem is also shown below.


We can see that the optimal solution to the LP has value 58000 (£) and that Tass=82000, Tpol=50000, Tpac=60000, X1=0, X2=16000, X3=6000 and X4=0.

This implies that we only produce variants 2 and 3 (a somewhat surprising result in that we are producing none of variant 4 which had the highest profit per unit produced).

How can you explain (in words) the fact that it appears that the best thing to do is not to produce any of the variant with the lowest profit per unit?

How can you explain (in words) the fact that it appears that the best thing to do is not to produce any of the variant with the highest profit per unit?

Referring back to the physical situation we can see that at the LP optimal we have 18,000 minutes of assembly time that is not used (Tass=82000 compared with a maximum time of 100000) but that all of the polishing and packing time is used.

For each constraint in the problem we also have a "Slack or Surplus" column. This column tells us, for a particular constraint, the difference between the left-hand side of the constraint when evaluated at the LP optimal (i.e. when evaluated with X1, X2, X3 and X4 taking the values given above) and the right-hand side of the constraint.

Constraints with a "Slack or Surplus" value of zero are said to be tight or binding in that they are satisfied with equality at the LP optimal. Constraints which are not tight are called loose.

A summary of the input to the computer for the second situation considered in the question (only limitation is on total time spent on all operations) is shown below.


which in equation form is:

The solution to this problem is also shown below.


We can see that the optimal solution to the LP has value 78750 (£) and that Tass=78750, Tpol=78750, Tpac=52500, X1=0, X2=0, X3=26250 and X4=0.

This implies that we only produce variant 3.

Note here how much higher the associated profit is than before (£78750 compared with £58000, an increase of 36%!). This indicates that, however the earlier allocation of 100,000, 50,000 and 60,000 minutes for assembly, polishing and packing respectively was arrived at, it was a bad decision!
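One way to see why only variant 3 is made in this second situation is to compare profit per minute of total (assembly + polish + pack) time, as in the short calculation below (plain Python arithmetic, purely for illustration).

profit = [1.5, 2.5, 3.0, 4.5]
total_minutes = [2 + 3 + 2, 4 + 2 + 3, 3 + 3 + 2, 7 + 4 + 5]     # 7, 9, 8, 16
for i, (p, t) in enumerate(zip(profit, total_minutes), start=1):
    print("variant", i, ":", round(p / t, 3), "pounds per minute")
# variant 3 gives the best rate (0.375), so with only a total time limit we
# devote all 210000 minutes to it: 210000/8 = 26250 units, profit = 78750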

Solution - using Solver

Below we solve this LP with the Solver add-in that comes with Microsoft Excel.

If you click here you will be able to download an Excel spreadsheet called lp.xls that already has the LP we are considering set up.

Take this spreadsheet and look at Sheet A. You should see the problem we considered above set out as:

Here the values in cells B2 to B5 are how much of each variant we choose to make - here set to zero. Cells C6 to E6 give the total assembly/polishing and packing time used and cell F6 the total profit associated with the amount we choose to produce.

To use Solver in Excel do Tools and then Solver. In the version of Excel I am using (different versions of Excel have slightly different Solver formats) you will get the Solver model as below:


Here our target cell is F6 (ignore the use of $ signs here - that is a technical Excel issue if you want to go into it in greater detail), which we wish to maximise. We can change cells B2 to B5 (the amount of each variant we produce) subject to the constraint that cells C6 to E6 (the total amount of assembly/polishing/packing time used) cannot exceed the limits given in cells C7 to E7.

In order to tell Solver we are dealing with a linear program click on Options in the Solver box and you will see:


where both the 'Assume Linear Model' and 'Assume Non-Negative' boxes are ticked - indicating we are dealing with a linear model with non-negative variables.

Solving via Solver the solution is:

We can see that the optimal solution to the LP has value 58000 (£) and that Tass=82000, Tpol=50000, Tpac=60000, X1=0, X2=16000, X3=6000 and X4=0.

This implies that we only produce variants 2 and 3 (a somewhat surprising result in that we are producing none of variant 4 which had the highest profit per unit produced).

How can you explain (in words) the fact that it appears that the best thing to do is not to produce any of the variant with the lowest profit per unit?

How can you explain (in words) the fact that it appears that the best thing to do is not to produce any of the variant with the highest profit per unit?

Referring back to the physical situation we can see that at the LP optimal we have 18,000 minutes of assembly time that is not used (Tass=82000 compared with a maximum time of 100000) but that all of the polishing and packing time is used.


For the second situation given in the question, where the only limitation is on the total time spent on all operations, examine Sheet B in the spreadsheet lp.xls.

Invoking Solver in that sheet you will see:

where cell C7 is the total amount of processing time used and the only constraint in Solver relates to that cell not exceeding the limit of 210000 shown in cell C8. Note here that if you check Options in Solver here you will see that both the 'Assume Linear Model' and 'Assume Non-Negative' boxes are ticked.

Solving we get:


We can see that the optimal solution to the LP has value 78750 (£) and that Tass=78750, Tpol=78750, Tpac=52500, X1=0, X2=0, X3=26250 and X4=0.

This implies that we only produce variant 3.

Note here how much higher the associated profit is than before (£78750 compared with £58000, an increase of 36%!). This indicates that, however the earlier allocation of 100,000, 50,000 and 60,000 minutes for assembly, polishing and packing respectively was arrived at, it was a bad decision!

Problem sensitivity

Problem sensitivity refers to how the solution changes as the data changes. Two issues are important here:

robustness; and planning.

We deal with each of these in turn.

Robustness

Plainly, in the real world, data is never completely accurate and so we would like some confidence that any proposed course of action is relatively insensitive (robust) with respect to data inaccuracies. For example, for the production planning problem dealt with before, how sensitive is the optimal solution with respect to slight variations in any particular data item?

For example consider the packing time consumed by variant 3. It is currently set to exactly 2 minutes, i.e. 2.0000000. But suppose it is really 2.1, what is the effect of this on what we are proposing to do?

What is important here is what you might call "the shape of the strategy" rather than the specific numeric values. Look at the solution of value 58000 we had before. The shape of the strategy there was "none of variant 1 or 4, lots of variant 2 and a reasonable amount of variant 3". What we would like is that, when we resolve with the figure of 2 for packing time consumed by variant 3 replaced by 2.1, this general shape remains the same. What might concern us is if we get a very different shape (e.g. variants 1 and 4 only).

If the general shape of the strategy remains essentially the same under (small) data changes we say that the strategy is robust.

If we take Sheet A again, change the figure of 2 for packing time consumed by variant 3 to 2.1 and resolve we get:

This indicates that for this data change the strategy is robust.
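If you prefer to check this robustness experiment outside the spreadsheet, the same re-solve can be sketched with SciPy (again an assumption about available software, not the course package); only the single packing coefficient for variant 3 changes from 2 to 2.1.

from scipy.optimize import linprog

c = [-1.5, -2.5, -3.0, -4.5]
A_ub = [[2, 4, 3, 7],
        [3, 2, 3, 4],
        [2, 3, 2.1, 5]]                 # packing time for variant 3 changed from 2 to 2.1
b_ub = [100000, 50000, 60000]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, -res.fun)                  # still only variants 2 and 3 produced; profit drops only slightly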

Planning

With regard to planning we may be interested in seeing how the solution changes as the data changes (e.g. over time). For example, for the production planning problem dealt with above (where the solution was of value 58000 involving production of variants 2 and 3), how would increasing the profit per unit on variant 4 (e.g. by 10 per cent to 4.95 by raising the price) impact upon the optimal solution?

Again taking Sheet A, making the appropriate change and resolving, we get:


indicating that if we were able to increase the profit per unit on variant 4 by 10 per cent to 4.95 it would be profitable to make that variant in the quantities shown above.

There is one thing to note here - namely that we have a fractional solution X3=1428.571 and X4=11428.57. Recall that we have a linear program - for which a defining characteristic is that the variables are allowed to take fractional values. Up to now for this production planning problem we had not seen any fractional values when we solved numerically - here we do. Of course in reality, given that the numbers are large there is no practical significance to these fractions and we can equally well regard the solution as being a conventional integer (non-fractional) solution such as X3=1429 and X4=11429.

Approach

In fact the approach taken both for robustness and planning issues is identical, and is often referred to as sensitivity analysis.

Given the LP package it is a simple matter to change the data and resolve to see how the solution changes (if at all) as certain key data items change.

In fact, as a by-product of using the simplex algorithm, we automatically get sensitivity information (e.g. the reduced cost information given on the LP output for the production planning problem):

for the variables, the Reduced Cost (also known as Opportunity Cost) column gives us, for each variable which is currently zero, an estimate of how much the objective function will change if we make that variable non-zero. This is often called the "reduced cost" for the variable. Note here that an alternative (and equally valid) interpretation of the reduced cost is the amount by which the objective function coefficient for a variable needs to change before that variable will become non-zero.

for each constraint the column headed Shadow Price tells us by how much the objective function will change if we change the right-hand side of the corresponding constraint. This is often called the "marginal value" or "dual value" for the constraint.

However, interpreting simplex sensitivity information is somewhat complicated and so, for the purposes of this course, I prefer to approach sensitivity via resolving the LP.


Those interested in interpreting the sensitivity information automatically produced by the simplex algorithm can find some information here.
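If you do want the automatic sensitivity figures but are working with an open-source solver rather than the course package, the sketch below shows one way to obtain them; it assumes a reasonably recent SciPy whose HiGHS-based solver reports constraint marginals. Because we minimise the negated profit, the shadow prices of the original maximisation are the negatives of the reported marginals.

from scipy.optimize import linprog

c = [-1.5, -2.5, -3.0, -4.5]                           # negated profits (linprog minimises)
A_ub = [[2, 4, 3, 7], [3, 2, 3, 4], [2, 3, 2, 5]]      # assembly, polish, pack minutes per unit
b_ub = [100000, 50000, 60000]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
# Shadow prices for the maximisation (sign flipped): expect roughly 0 for the loose
# assembly constraint and positive values for the binding polish and pack constraints.
print([-m for m in res.ineqlin.marginals])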

LP - State of the art

Most large LP's are not entered directly into the computer in the same fashion as in our simple package. Instead an algebraic modelling language is used. A tutorial description of one of these modelling languages is available here. More about advanced LP can be found here.

Until the 1980's all packages for solving LP's relied upon variants of the simplex algorithm. However in 1984 Karmarkar published a new algorithm for LP, called an interior point method, which is completely different from the simplex algorithm.

Karmarkar's work has sparked an immense amount of LP research, both into interior point methods and into improving the simplex algorithm.

Since 1984 new commercial software products have appeared, e.g.

OSL (Optimisation Subroutine Library) from IBM
Cplex (from Cplex Optimization)

Both these products now include both simplex and interior point algorithms.

Let us consider the case where we have formulated some problem as an LP and we are thinking of solving it numerically. Can we find a computer package that has the capacity to solve our LP?

In terms of LP's the key factor is the number of constraints and a typical workstation/mainframe LP package (e.g. OSL (version 2)) has the capacity for 2 billion variables and 16 million constraints (excluding bounds on variables which are treated implicitly rather than explicitly).

However, these capacity limits vastly overstate what we could actually solve in real-life. To give you a rough idea of the state of the art the following results have been reported:

Code    Number of constraints   Number of variables   Time      Computer
OSL             105,000                 155,000       4 hours   IBM 3090
OSL                 750              12,000,000       27 mins   IBM 3090
Cplex               145               1,000,000        6 mins   Cray Y-MP
Cplex            41,000                  79,000        3 mins   Cray 2

This is not to say that it is going to be impossible to solve very large LP's by simplex as there is a small chance that advanced LP theory might enable us to solve such problems (by decomposition or by taking advantage of any structure in the LP) but essentially we face a very difficult task.

Note here that the problem with solving very large LP's via simplex is not merely a matter of using a faster computer - the problem is theoretical in nature due to large LP's being degenerate (there is some evidence to suggest that for large LP's the solution time required when using simplex is approximately proportional to (number of constraints)^2.4).

Large scale LP application areas

Problem areas where large LP's arise are:

Pacific Basin facility planning for AT&T

The problem here is to determine where undersea cables and satellite circuits should be installed, when they will be needed, the number of circuits needed, cable technology, call routing, etc over a 19 year planning horizon (an LP with 28,000 constraints, 77,000 variables).

Military officer personnel planning

The problem is to plan US Army officer promotions (to Lieutenant, Captain, Major, Lieutenant Colonel and Colonel), taking into account the people entering and leaving the Army and training requirements by skill categories to meet the overall Army force structure requirements (an LP with 21,000 constraints and 43,000 variables).

Military patient evacuation

The US Air Force Military Airlift Command (MAC) has a patient evacuation problem that can be modelled as a LP. They use this model to determine the flow of patients moved by air from an area of conflict to bases and hospitals in the continental United States. The objective is to minimise the time that patients are in the air transport system. The constraints are:

all patients that need transporting must be transported; and
limits on the size and composition of hospitals, staging areas and air fleet must be observed.

MAC have generated a series of problems based on the number of time periods (days). A 50 day problem consists of an LP with 79,000 constraints and 267,000 variables (solved in 10 hours).

Military logistics planning


The US Department of Defense Joint Chiefs of Staff have a logistics planning problem that models the feasibility of supporting military operations during a crisis.

The problem is to determine if different materials (called movement requirements) can be transported overseas within strict time windows.

The LP includes capacities at embarkation and debarkation ports, capacities of the various aircraft and ships that carry the movement requirements and penalties for missing delivery dates.

One problem (using simulated data) that has been solved had 15 time periods, 12 ports of embarkation, 7 ports of debarkation and 9 different types of vehicle for 20,000 movement requirements. This resulted in an LP with 20,500 constraints and 520,000 variables (solved in 75 minutes).

Bond arbitrage

Many financial transactions can be modelled as LP's, e.g. bond arbitrage, switching between various financial instruments (bonds) so as to make money. Typical problems have approximately 1,000 constraints and 50,000 variables and can be solved in "real-time".

Airline crew scheduling

Here solving an LP is only the first stage in deciding crew schedules for commercial aircraft. The problem that has to be solved is actually an integer programming problem, the set partitioning problem. American Airlines has a problem containing 12,000,000 potential crew schedules (variables) - see OSL above. Since crew scheduling models are key to airline competitive cost advantage these days (crew costs often being the second largest flying cost after fuel costs), we shall consider this problem in greater detail.

Within a fixed airline schedule (the schedule changing twice a year typically) each flight in the schedule can be broken down into a series of flight legs. A flight leg comprises a takeoff from a specific airport at a specific time to the subsequent landing at another airport at a specific time. For example a flight in the schedule from Chicago O'Hare to London Heathrow might have 2 flight legs, from Chicago to JFK New York and from JFK to Heathrow. A key point is that these flight legs may be flown by different crews.

Typically in a crew scheduling exercise aircraft types have been preassigned (not all crews can fly all types) so for a given aircraft type and a given time period (the schedule repeating over (say) a 2 week period) the problem becomes one of ensuring that all flight legs for a particular aircraft type can have a crew assigned. Note here that by crew we mean not only the pilots/flight crew but also the cabin service staff, typically these work together as a team and are kept together over a schedule.


As you probably know there are many restrictions on the hours that crews (pilots and others) can work. These restrictions can be both legal restrictions and union agreement restrictions. A potential crew schedule is a series of flight legs that satisfies these restrictions, i.e. a crew could successfully and legally work the flight legs in the schedule. All such potential crew schedules can have a cost assigned to them.

Hence for our American Airlines problem the company has a database with 12 million potential crew schedules. Note here that we stress the word potential. We have a decision problem here, namely out of these 12 million which shall we choose (so as to minimise costs obviously) and ensure that all flight legs have a crew assigned to them.

Typically a matrix type view of the problem is adopted, where the rows of the matrix are the flight legs and the columns the potential crew schedules, as below.

                Crew schedules
Leg        1    2    3    etc ----->
A-B        0    1    1
B-C        0    1    1
C-A        0    0    1
B-D        0    0    0
A-D        1    0    0
D-A        1    0    0
etc

Here a 0 in a column indicates that that flight leg is not part of the crew schedule, a 1 that the flight leg is part of the crew schedule. Usually a crew schedule ends up with the crew returning to their home base, e.g. A-D and D-A in crew schedule 1 above. A crew schedule such as 2 above (A-B and B-C) typically includes as part of its associated cost the cost of returning the crew (as passengers) to their base. Such carrying of crew as passengers (on their own airline or on another airline) is called deadheading.
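To make the structure of this matrix concrete, here is a minimal sketch of the underlying set partitioning model using SciPy's mixed-integer solver. The legs, schedules and costs are entirely made up for illustration (and deliberately tiny); the real American Airlines problem has millions of columns.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# A[i, j] = 1 if hypothetical crew schedule j covers flight leg i
# (legs, in order: A-B, B-C, C-A, A-D, D-A)
A = np.array([[0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
cost = np.array([7, 6, 9, 4])                             # made-up cost of each schedule

res = milp(c=cost,
           constraints=LinearConstraint(A, lb=1, ub=1),   # each leg covered exactly once
           integrality=np.ones(4),                        # each schedule is chosen (1) or not (0)
           bounds=Bounds(0, 1))
print(res.x, res.fun)                                     # picks schedules 1 and 3 at total cost 16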

LP is used as part of the solution process for this crew scheduling problem for two main reasons:

a manual approach to crew scheduling problems of this size is just hopeless: you may get a schedule, but its cost is likely to be far from minimal

a systematic approach to minimising cost can result in huge cost savings (e.g. even a small percentage saving can add up to tens of millions of dollars)

Plainly there are people in the real world with large LP problems to solve. Informally what appears to be happening currently is that an increase in solution technology (advances in hardware, software and algorithms) is leading to users becoming aware that large problems can be tackled. This in turn is generating a demand for further improvements in solution technology.

Linear programming solution examples


Linear programming example 1997 UG exam

A company makes two products (X and Y) using two machines (A and B). Each unit of X that is produced requires 50 minutes processing time on machine A and 30 minutes processing time on machine B. Each unit of Y that is produced requires 24 minutes processing time on machine A and 33 minutes processing time on machine B.

At the start of the current week there are 30 units of X and 90 units of Y in stock. Available processing time on machine A is forecast to be 40 hours and on machine B is forecast to be 35 hours.

The demand for X in the current week is forecast to be 75 units and for Y is forecast to be 95 units. Company policy is to maximise the combined sum of the units of X and the units of Y in stock at the end of the week.

Formulate the problem of deciding how much of each product to make in the current week as a linear program.

Solve this linear program graphically.

Solution

Let

x be the number of units of X produced in the current week
y be the number of units of Y produced in the current week

then the constraints are:

50x + 24y <= 40(60) machine A time
30x + 33y <= 35(60) machine B time
x >= 75 - 30, i.e. x >= 45, so production of X >= demand (75) - initial stock (30), which ensures we meet demand
y >= 95 - 90, i.e. y >= 5, so production of Y >= demand (95) - initial stock (90), which ensures we meet demand

The objective is:

maximise (x + 30 - 75) + (y + 90 - 95) = x + y - 50

i.e. to maximise the number of units left in stock at the end of the week

It is plain from the diagram below that the maximum occurs at the intersection of x=45 and 50x + 24y = 2400


Solving simultaneously, rather than by reading values off the graph, we have that x=45 and y=6.25 with the value of the objective function being 1.25
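As a numerical cross-check of the graphical solution, the sketch below re-solves the same LP with SciPy (an assumption about available software, not part of the exam answer); the constant -50 in the objective is simply added back at the end.

from scipy.optimize import linprog

res = linprog(c=[-1, -1],                                  # maximise x + y  ->  minimise -(x + y)
              A_ub=[[50, 24], [30, 33]], b_ub=[2400, 2100],
              bounds=[(45, None), (5, None)])              # x >= 45, y >= 5 as simple bounds
x, y = res.x
print(x, y, x + y - 50)                                    # expect 45, 6.25 and 1.25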

Linear programming example 1995 UG exam

The demand for two products in each of the last four weeks is shown below.

Week                  1    2    3    4
Demand - product 1   23   27   34   40
Demand - product 2   11   13   15   14

Apply exponential smoothing with a smoothing constant of 0.7 to generate a forecast for the demand for these products in week 5.

These products are produced using two machines, X and Y. Each unit of product 1 that is produced requires 15 minutes processing on machine X and 25 minutes processing on machine Y. Each unit of product 2 that is produced requires 7 minutes processing on machine X and 45 minutes processing on machine Y. The available time on machine X in week 5 is forecast to be 20 hours and on machine Y in week 5 is forecast to be 15 hours.


Each unit of product 1 sold in week 5 gives a contribution to profit of £10 and each unit of product 2 sold in week 5 gives a contribution to profit of £4.

It may not be possible to produce enough to meet your forecast demand for these products in week 5 and each unit of unsatisfied demand for product 1 costs £3, each unit of unsatisfied demand for product 2 costs £1.

Formulate the problem of deciding how much of each product to make in week 5 as a linear program.

Solve this linear program graphically.

Solution

Note that the first part of the question is a forecasting question so it is solved below.

For product 1 applying exponential smoothing with a smoothing constant of 0.7 we get:

M1 = Y1 = 23
M2 = 0.7Y2 + 0.3M1 = 0.7(27) + 0.3(23) = 25.80
M3 = 0.7Y3 + 0.3M2 = 0.7(34) + 0.3(25.80) = 31.54
M4 = 0.7Y4 + 0.3M3 = 0.7(40) + 0.3(31.54) = 37.46

The forecast for week five is just the smoothed average for week 4, i.e. M4 = 37.46, which we round to 37 (as we cannot have fractional demand).

For product 2 applying exponential smoothing with a smoothing constant of 0.7 we get:

M1 = Y1 = 11
M2 = 0.7Y2 + 0.3M1 = 0.7(13) + 0.3(11) = 12.40
M3 = 0.7Y3 + 0.3M2 = 0.7(15) + 0.3(12.40) = 14.22
M4 = 0.7Y4 + 0.3M3 = 0.7(14) + 0.3(14.22) = 14.07

The forecast for week five is just the smoothed average for week 4, i.e. M4 = 14.07, which we round to 14 (as we cannot have fractional demand).
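The smoothing recursion above is easy to reproduce; the short sketch below (plain Python, for illustration only) gives the same two forecasts.

def exp_smooth(demands, alpha=0.7):
    m = demands[0]                        # M1 = Y1
    for y in demands[1:]:
        m = alpha * y + (1 - alpha) * m   # Mt = 0.7 Yt + 0.3 Mt-1
    return m

print(exp_smooth([23, 27, 34, 40]))   # 37.46 -> forecast 37 for product 1
print(exp_smooth([11, 13, 15, 14]))   # 14.07 -> forecast 14 for product 2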

We can now formulate the LP for week 5 using the two demand figures (37 for product 1 and 14 for product 2) derived above.

Let

x1 be the number of units of product 1 produced

x2 be the number of units of product 2 produced

where x1, x2>=0


The constraints are:

15x1 + 7x2 <= 20(60) machine X

25x1 + 45x2 <= 15(60) machine Y

x1 <= 37 demand for product 1

x2 <= 14 demand for product 2

The objective is to maximise profit, i.e.

maximise 10x1 + 4x2 - 3(37- x1) - 1(14-x2)

i.e. maximise 13x1 + 5x2 - 125

The graph is shown below. From the graph we have that the solution occurs on the horizontal axis (x2=0) at x1=36, at which point the maximum profit is 13(36) + 5(0) - 125 = £343


Linear programming example 1994 UG exam

A company is involved in the production of two items (X and Y). The resources needed to produce X and Y are twofold, namely machine time for automatic processing and craftsman time for hand finishing. The table below gives the number of minutes required for each item:

Item    Machine time    Craftsman time
X             13               20
Y             19               29

The company has 40 hours of machine time available in the next working week but only 35 hours of craftsman time. Machine time is costed at £10 per hour worked and craftsman time is costed at £2 per hour worked. Both machine and craftsman idle times incur no costs. The revenue received for each item produced (all production is sold) is £20 for X and £30 for Y. The company has a specific contract to produce 10 items of X per week for a particular customer.

Formulate the problem of deciding how much to produce per week as a linear program.

Solve this linear program graphically.

Solution

Let

x be the number of items of X y be the number of items of Y

then the LP is:

maximise

20x + 30y - 10(machine time worked) - 2(craftsman time worked)

subject to:

13x + 19y <= 40(60) machine time
20x + 29y <= 35(60) craftsman time
x >= 10 contract
x, y >= 0

so that the objective function becomes

maximise

20x + 30y - 10(13x + 19y)/60 - 2(20x + 29y)/60


i.e. maximise

17.1667x + 25.8667y

subject to:

13x + 19y <= 2400
20x + 29y <= 2100
x >= 10
x, y >= 0

It is plain from the diagram below that the maximum occurs at the intersection of x=10 and 20x + 29y = 2100

Solving simultaneously, rather than by reading values off the graph, we have that x=10 and y=65.52 with the value of the objective function being £1866.5


Linear programming example 1992 UG exam

A company manufactures two products (A and B) and the profit per unit sold is £3 and £5 respectively. Each product has to be assembled on a particular machine, each unit of product A taking 12 minutes of assembly time and each unit of product B 25 minutes of assembly time. The company estimates that the machine used for assembly has an effective working week of only 30 hours (due to maintenance/breakdown).

Technological constraints mean that for every five units of product A produced at least two units of product B must be produced.

Formulate the problem of how much of each product to produce as a linear program.

Solve this linear program graphically.

The company has been offered the chance to hire an extra machine, thereby doubling the effective assembly time available. What is the maximum amount you would be prepared to pay (per week) for the hire of this machine and why?

Solution

Let

xA = number of units of A produced

xB = number of units of B produced

then the constraints are:

12xA + 25xB <= 30(60) (assembly time)

xB >= 2(xA/5)

i.e. xB - 0.4xA >= 0

i.e. 5xB >= 2xA (technological)

where xA, xB >= 0

and the objective is

maximise 3xA + 5xB

It is plain from the diagram below that the maximum occurs at the intersection of 12xA + 25xB = 1800 and xB - 0.4xA = 0


Solving simultaneously, rather than by reading values off the graph, we have that:

xA= (1800/22) = 81.8

xB= 0.4xA = 32.7

with the value of the objective function being £408.9

Doubling the assembly time available means that the assembly time constraint (currently 12xA + 25xB <= 1800) becomes 12xA + 25xB <= 2(1800) = 3600. This new constraint is parallel to the existing assembly time constraint, so the new optimal solution will lie at the intersection of 12xA + 25xB = 3600 and xB - 0.4xA = 0

i.e. at xA = (3600/22) = 163.6

xB= 0.4xA = 65.4

with the value of the objective function being £817.8


Hence we have made an additional profit of £(817.8-408.9) = £408.9 and this is the maximum amount we would be prepared to pay for the hire of the machine for doubling the assembly time.

This is because if we pay more than this amount then we will reduce our maximum profit below the £408.9 we would have made without the new machine.

Linear programming example 1988 UG exam

Solve

minimise

4a + 5b + 6c

subject to

a + b >= 11

a - b <= 5

c - a - b = 0

7a >= 35 - 12b

a >= 0 b >= 0 c >= 0

Solution

To solve this LP we use the equation c-a-b=0 to put c=a+b (>= 0 as a >= 0 and b >= 0) and so the LP is reduced to

minimise

4a + 5b + 6(a + b) = 10a + 11b

subject to

a + b >= 11

a - b <= 5

7a + 12b >= 35


a >= 0 b >= 0

From the diagram below the minimum occurs at the intersection of a - b = 5 and a + b = 11

i.e. a = 8 and b = 3 with c (= a + b) = 11 and the value of the objective function 10a + 11b = 80 + 33 = 113.

Linear programming example 1987 UG exam

Solve the following linear program:

maximise 5x1 + 6x2

subject to

x1 + x2 <= 10


x1 - x2 >= 3

5x1 + 4x2 <= 35

x1 >= 0

x2 >= 0

Solution

It is plain from the diagram below that the maximum occurs at the intersection of

5x1 + 4x2 = 35 and

x1 - x2 = 3

Solving simultaneously, rather than by reading values off the graph, we have that

5(3 + x2) + 4x2 = 35

i.e. 15 + 9x2 = 35

i.e. x2 = (20/9) = 2.222 and

x1 = 3 + x2 = (47/9) = 5.222

The maximum value is 5(47/9) + 6(20/9) = (355/9) = 39.444


Linear programming example 1986 UG exam

A carpenter makes tables and chairs. Each table can be sold for a profit of £30 and each chair for a profit of £10. The carpenter can afford to spend up to 40 hours per week working and takes six hours to make a table and three hours to make a chair. Customer demand requires that he makes at least three times as many chairs as tables. Tables take up four times as much storage space as chairs and there is room for at most four tables each week.

Formulate this problem as a linear programming problem and solve it graphically.

Solution

Variables

Let

xT = number of tables made per week

xC = number of chairs made per week


Constraints

total work time

6xT + 3xC <= 40

customer demand

xC >= 3xT

storage space

(xC/4) + xT <= 4

all variables >= 0

Objective

maximise 30xT + 10xC

The graphical representation of the problem is given below and from that we have that the solution lies at the intersection of

(xC/4) + xT = 4 and 6xT + 3xC = 40

Solving these two equations simultaneously we get xC = 10.667, xT = 1.333 and the corresponding profit = £146.667


Queuing theory

Queuing theory deals with problems which involve queuing (or waiting). Typical examples might be:

banks/supermarkets - waiting for service
computers - waiting for a response
failure situations - waiting for a failure to occur, e.g. in a piece of machinery
public transport - waiting for a train or a bus

As we know queues are a common every-day experience. Queues form because resources are limited. In fact it makes economic sense to have queues. For example, how many supermarket tills would you need to avoid queuing? How many buses or trains would be needed if queues were to be avoided/eliminated?

In designing queueing systems we need to aim for a balance between service to customers (short queues implying many servers) and economic considerations (not too many servers).

In essence all queuing systems can be broken down into individual sub-systems consisting of entities queuing for some activity (as shown below).


Typically we can talk of this individual sub-system as dealing with customers queuing for service. To analyse this sub-system we need information relating to:

arrival process:
o how customers arrive, e.g. singly or in groups (batch or bulk arrivals)
o how the arrivals are distributed in time (e.g. what is the probability distribution of time between successive arrivals (the interarrival time distribution))
o whether there is a finite population of customers or (effectively) an infinite number

The simplest arrival process is one where we have completely regular arrivals (i.e. the same constant time interval between successive arrivals). A Poisson stream of arrivals corresponds to arrivals at random. In a Poisson stream successive customers arrive after intervals which independently are exponentially distributed. The Poisson stream is important as it is a convenient mathematical model of many real life queuing systems and is described by a single parameter - the average arrival rate. Other important arrival processes are scheduled arrivals; batch arrivals; and time dependent arrival rates (i.e. the arrival rate varies according to the time of day).

service mechanism:
o a description of the resources needed for service to begin
o how long the service will take (the service time distribution)
o the number of servers available
o whether the servers are in series (each server has a separate queue) or in parallel (one queue for all servers)
o whether preemption is allowed (a server can stop processing a customer to deal with another "emergency" customer)

It is common to assume that the service times for customers are independent of one another and of the arrival process. Another common assumption is that service times are exponentially distributed.

queue characteristics:
o how, from the set of customers waiting for service, do we choose the one to be served next (e.g. FIFO (first-in first-out) - also known as FCFS (first-come first-served); LIFO (last-in first-out); randomly) (this is often called the queue discipline)


o do we have:
balking (customers deciding not to join the queue if it is too long)
reneging (customers leave the queue if they have waited too long for service)
jockeying (customers switch between queues if they think they will get served faster by so doing)
a queue of finite capacity or (effectively) of infinite capacity

Changing the queue discipline (the rule by which we select the next customer to be served) can often reduce congestion. Often the queue discipline "choose the customer with the lowest service time" results in the smallest value for the time (on average) a customer spends queuing.

Note here that integral to queuing situations is the idea of uncertainty in, for example, interarrival times and service times. This means that probability and statistics are needed to analyse queuing situations.

In terms of the analysis of queuing situations the types of questions in which we are interested are typically concerned with measures of system performance and might include:

How long does a customer expect to wait in the queue before they are served, and how long will they have to wait before the service is complete?

What is the probability of a customer having to wait longer than a given time interval before they are served?

What is the average length of the queue?
What is the probability that the queue will exceed a certain length?
What is the expected utilisation of the server and the expected time period during which he will be fully occupied (remember servers cost us money so we need to keep them busy)?

In fact if we can assign costs to factors such as customer waiting time and server idle time then we can investigate how to design a system at minimum total cost.

These are questions that need to be answered so that management can evaluate alternatives in an attempt to control/improve the situation. Some of the problems that are often investigated in practice are:

Is it worthwhile to invest effort in reducing the service time?
How many servers should be employed?
Should priorities for certain types of customers be introduced?
Is the waiting area for customers adequate?

In order to get answers to the above questions there are two basic approaches:

analytic methods or queuing theory (formula based); and
simulation (computer based).


The reason for there being two approaches (instead of just one) is that analytic methods are only available for relatively simple queuing systems. Complex queuing systems are almost always analysed using simulation (more technically known as discrete-event simulation).

The simple queueing systems that can be tackled via queueing theory essentially:

consist of just a single queue; linked systems where customers pass from one queue to another cannot be tackled via queueing theory

have distributions for the arrival and service processes that are well defined (e.g. standard statistical distributions such as Poisson or Normal); systems where these distributions are derived from observed data, or are time dependent, are difficult to analyse via queueing theory

The first queueing theory problem was considered by Erlang in 1908 who looked at how large a telephone exchange needed to be in order to keep to a reasonable value the number of telephone calls not connected because the exchange was busy (lost calls). Within ten years he had developed a (complex) formula to solve the problem.

Additional queueing theory information can be found here and here

Queueing notation and a simple example

It is common to use the symbols:

lambda to be the mean (or average) number of arrivals per time period, i.e. the mean arrival rate

µ to be the mean (or average) number of customers served per time period, i.e. the mean service rate

There is a standard notation system to classify queueing systems as A/B/C/D/E, where:

A represents the probability distribution for the arrival process
B represents the probability distribution for the service process
C represents the number of channels (servers)
D represents the maximum number of customers allowed in the queueing system (either being served or waiting for service)
E represents the maximum number of customers in total

Common options for A and B are:

M for a Poisson arrival distribution (exponential interarrival distribution) or an exponential service time distribution

D for a deterministic or constant value


G for a general distribution (but with a known mean and variance)

If D and E are not specified then it is assumed that they are infinite.

For example the M/M/1 queueing system, the simplest queueing system, has a Poisson arrival distribution, an exponential service time distribution and a single channel (one server).

Note here that in using this notation it is always assumed that there is just a single queue (waiting line) and customers move from this single queue to the servers.

Simple M/M/1 example

Suppose we have a single server in a shop and customers arrive in the shop with a Poisson arrival distribution at a mean rate of lambda=0.5 customers per minute, i.e. on average one customer appears every 1/lambda = 1/0.5 = 2 minutes. This implies that the interarrival times have an exponential distribution with an average interarrival time of 2 minutes. The server has an exponential service time distribution with a mean service rate of 4 customers per minute, i.e. the service rate µ=4 customers per minute. As we have a Poisson arrival rate/exponential service time/single server we have an M/M/1 queue in terms of the standard notation.

We can analyse this queueing situation using the package. The input is shown below:

with the output being:


The first line of the output says that the results are from a formula. For this very simple queueing system there are exact formulae that give the statistics above under the assumption that the system has reached a steady state - that is that the system has been running long enough so as to settle down into some kind of equilibrium position.

Naturally real-life systems hardly ever reach a steady state. Simply put, life is not like that. However, despite this, simple queueing formulae can very quickly give us some insight into how a system might behave. The package took a fraction of a second to produce the output seen above.

One factor that is of note is traffic intensity = (arrival rate)/(departure rate) where arrival rate = number of arrivals per unit time and departure rate = number of departures per unit time. Traffic intensity is a measure of the congestion of the system. If it is near to zero there is very little queuing and in general as the traffic intensity increases (to near 1 or even greater than 1) the amount of queuing increases. For the system we have considered above the arrival rate is 0.5 and the departure rate is 4 so the traffic intensity is 0.5/4 = 0.125
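For this simplest case the steady-state statistics quoted by the package can also be checked by hand using the standard M/M/1 formulae, as sketched below (these are textbook queueing results, not output from the course package).

lam, mu = 0.5, 4.0                 # mean arrival and service rates (customers per minute)
rho = lam / mu                     # traffic intensity = 0.125

L  = rho / (1 - rho)               # mean number of customers in the system
Lq = rho ** 2 / (1 - rho)          # mean number of customers in the queue
W  = 1 / (mu - lam)                # mean time in the system (minutes)
Wq = rho / (mu - lam)              # mean time in the queue (minutes)
print(rho, L, Lq, W, Wq)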

Faster servers or more servers?

Consider the situation we had above - which would you prefer:

one server working twice as fast; or two servers each working at the original rate?

The simple answer is that we can analyse this using the package. For the first situation one server working twice as fast corresponds to a service rate µ=8 customers per minute. The output for this situation is shown below.


For two servers working at the original rate the output is as below. Note here that this situation is a M/M/2 queueing system. Note too that the package assumes that these two servers are fed from a single queue (rather than each having their own individual queue).

Compare the two outputs above - which option do you prefer?

Of the figures in the outputs above some are identical. Extracting key figures which are different we have:


                                            One server twice as fast   Two servers, original rate
Average time in the system                           0.1333                     0.2510
(waiting and being served)
Average time in the queue                            0.0083                     0.0010
Probability of having to wait for service            6.25%                      0.7353%

It can be seen that with one server working twice as fast customers spend less time in the system on average, but have to wait longer for service and also have a higher probability of having to wait for service.
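The figures in this table can be reproduced (at least approximately) from the standard Erlang C formula for an M/M/c queue; the sketch below is a hand check, not the package's own calculation.

import math

def mmc_stats(lam, mu, c):
    # steady-state M/M/c statistics via the Erlang C formula
    a = lam / mu                       # offered load
    rho = a / c                        # utilisation per server
    p0 = 1.0 / (sum(a ** n / math.factorial(n) for n in range(c))
                + a ** c / (math.factorial(c) * (1 - rho)))
    p_wait = (a ** c / (math.factorial(c) * (1 - rho))) * p0    # P(customer has to wait)
    wq = p_wait / (c * mu - lam)       # mean time in the queue
    w = wq + 1 / mu                    # mean time in the system
    return w, wq, p_wait

print(mmc_stats(0.5, 8, 1))   # one server twice as fast: about (0.1333, 0.0083, 0.0625)
print(mmc_stats(0.5, 4, 2))   # two servers at the original rate: about (0.2510, 0.0010, 0.0074)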

Extending the example: M/M/1 and M/M/2 with costs

Below we have extended the example we had before where now we have multiplied the customer arrival rate by a factor of six (i.e. customers arrive 6 times as fast as before). We have also entered a queue capacity (waiting space) of 2 - i.e. if all servers are occupied and 2 customers are waiting when a new customer appears then they go away - this is known as balking.

We have also added cost information relating to the server and customers:

each minute a server is idle costs us £0.5
each minute a customer waits for a server costs us £1
each customer who is balked (goes away without being served) costs us £5

The package input is shown below:

with the output being:


Note, as the above output indicates, that this is an M/M/1/3 system since we have 1 server and the maximum number of customers that can be in the system (either being served or waiting) is 3 (one being served, two waiting).

The key here is that as we have entered cost data we have a figure for the total cost of operating this system, 3.0114 per minute (in the steady state).
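The 3.0114 figure can be verified by hand from the standard M/M/1/K steady-state probabilities P(n) = (1 - rho) rho^n / (1 - rho^(K+1)); the sketch below does exactly that (a hand check under the stated cost assumptions, not the package's internal method).

lam, mu, K = 3.0, 4.0, 3                      # arrival rate, service rate, system capacity
rho = lam / mu
p = [(1 - rho) * rho ** n / (1 - rho ** (K + 1)) for n in range(K + 1)]

Lq = sum((n - 1) * p[n] for n in range(1, K + 1))    # mean number waiting for service
cost = (0.5 * p[0]                            # server idle cost per minute
        + 1.0 * Lq                            # customer waiting cost per minute
        + 5.0 * lam * p[K])                   # balking cost: arrivals lost when the system is full
print(cost)                                   # about 3.0114, matching the output above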

Suppose now we were to have two servers instead of one - would the cost be less or more? The simple answer is that the package can tell us, as below. Note that this is an M/M/2/4 queueing system as we have two servers and a total number of customers in the system of 4 (2 being served, 2 waiting in the queue for service). Note too that the package assumes that these two servers are fed from a single queue (rather than each having their own individual queue).


So we can see that there is a considerable cost saving per minute in having two servers instead of one.

In fact the package can automatically perform an analysis for us of how total cost varies with the number of servers. This can be seen below.


General queueing

The screen below shows the possible input parameters to the package in the case of a general queueing model (i.e. not a M/M/r system).

Here we have a number of possible choices for the service time distribution and the interarrival time distribution. In fact the package recognises some 15 different distributions! Other items mentioned above are:

service pressure coefficient - indicates how servers speed up service when the system is busy, i.e. when all servers are busy the service rate is increased. If this coefficient is s and we have r servers each with service rate µ then the service rate changes from µ to (n/r)^s µ when there are n customers in the system and n>=r.

arrival discourage coefficient - indicates how customer arrivals are discouraged when the system is busy, i.e. when all servers are busy the arrival rate is decreased. If this coefficient is s and we have r servers with the arrival rate being lambda then the arrival rate changes from lambda to (r/(n+1))^s lambda when there are n customers in the system and n>=r.

batch (bulk) size distribution - customers can arrive together (in batches, also known as in bulk) and this indicates the distribution of size of such batches.

As an indication of the analysis that can be done an example problem is shown below:


Solving the problem we get:

This screen indicates that no formulae exist to evaluate the situation we have set up. We can try to evaluate this situation using an approximation formula, or by Monte Carlo Simulation. If we choose to adopt the approximation approach we get:


The difficulty is that these approximation results are plainly nonsense (i.e. not a good approximation). For example the average number of customers in the queue is -2.9813, the probability that all servers are idle is -320%, etc. Whilst for this particular case it is obvious that approximation (or perhaps the package) is not working, for other problems it may not be readily apparent that approximation does not work.

If we adopt the Monte Carlo Simulation approach then we have the screen below.


What will happen here is that the computer will construct a model of the system we have specified and internally generate customer arrivals, service times, etc and collect statistics on how the system performs. As specified above it will do this for 1000 time units (hours in this case). The phrase "Monte Carlo" derives from the well-known gambling city on the Mediterranean in Monaco. Just as in roulette we get random numbers produced by a roulette wheel when it is spun, so in Monte Carlo simulation we make use of random numbers generated by a computer.
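To give a flavour of what the package is doing internally, the sketch below is a crude Monte Carlo simulation of the simple M/M/1 shop example from earlier (my own illustration, not the package's code): it generates random interarrival and service times and averages the time customers spend in the system.

import random

def simulate_mm1(lam, mu, horizon, seed=1):
    # crude simulation of a single-server FIFO queue with Poisson arrivals
    random.seed(seed)
    next_arrival = random.expovariate(lam)
    server_free_at = 0.0
    times = []
    while next_arrival < horizon:
        arrive = next_arrival
        start = max(arrive, server_free_at)           # wait if the server is still busy
        service = random.expovariate(mu)
        server_free_at = start + service
        times.append(server_free_at - arrive)         # time in system for this customer
        next_arrival += random.expovariate(lam)
    return sum(times) / len(times)

print(simulate_mm1(0.5, 4, 1000))   # should come out near the M/M/1 value 1/(mu - lam) = 0.2857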

The results are shown below:


These results seem much more reasonable than the results obtained from the approximation.

However one factor to take into consideration is the simulation time we specified - here 1000 hours. In order to collect more accurate information on the behaviour of the system we might wish to simulate for longer. The results for simulating both 10 and 100 times as long are shown below.


Clearly the longer we simulate, the more confidence we may have in the statistics/probabilities obtained.

As before we can investigate how the system might behave with more servers. Simulating for 1000 hours (to reduce the overall elapsed time required) and looking at just the total system cost per hour (item 22 in the above outputs) we have the following:

Number of servers   Total system cost
        1                 4452
        2                 3314
        3                 2221
        4                 1614
        5                 1257
        6                  992
        7                  832
        8                  754
        9                  718
       10                  772
       11                  833
       12                  902

Hence here the number of servers associated with the minimum total system cost is 9.


Integer programming

When formulating LP's we often found that, strictly, certain variables should have been regarded as taking integer values but, for the sake of convenience, we let them take fractional values reasoning that the variables were likely to be so large that any fractional part could be neglected. Whilst this is acceptable in some situations, in many cases it is not, and in such cases we must find a numeric solution in which the variables take integer values.

Problems in which this is the case are called integer programs (IP's) and the subject of solving such programs is called integer programming (also referred to by the initials IP).

IP's occur frequently because many decisions are essentially discrete (such as yes/no, go/no-go), in that one or more options must be chosen from a finite set of alternatives.

Note here that problems in which some variables can take only integer values and some variables can take fractional values are called mixed-integer programs (MIP's).

As with formulating LP's, the key to formulating IP's is practice. Although there are a number of standard "tricks" available to cope with situations that often arise in formulating IP's, it is probably true to say that formulating IP's is a much harder task than formulating LP's.

We consider an example integer program below.

Capital budgeting

There are four possible projects, which each run for 3 years and have the following characteristics.

                          Capital requirements (£m)
Project   Return (£m)   Year 1   Year 2   Year 3
   1          0.2         0.5      0.3      0.2
   2          0.3         1.0      0.8      0.2
   3          0.5         1.5      1.5      0.3
   4          0.1         0.1      0.4      0.1
Available capital (£m)    3.1      2.5      0.4

We have a decision problem here: Which projects would you choose in order to maximise the total return?

Capital budgeting solution

We follow the same approach as we used for formulating LP's - namely:


variables
constraints
objective.

We do this below and note here that the only significant change in formulating IP's as opposed to formulating LP's is in the definition of the variables.

Variables

Here we are trying to decide whether to undertake a project or not (a "go/no-go" decision). One "trick" in formulating IP's is to introduce variables which take the integer values 0 or 1 and represent binary decisions (e.g. do a project or not do a project) with typically:

the positive decision (do something) being represented by the value 1; and the negative decision (do nothing) being represented by the value 0.

Such variables are often called zero-one or binary variables.

To define the variables we use the verbal description of

xj = 1 if we decide to do project j (j=1,...,4)
   = 0 otherwise, i.e. not do project j (j=1,...,4)

Note here that, by definition, the xj are integer variables which must take one of two possible values (zero or one).

Constraints

The constraints relating to the availability of capital funds each year are

0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 <= 3.1 (year 1)
0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 <= 2.5 (year 2)
0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 <= 0.4 (year 3)

Objective

To maximise the total return - hence we have

maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4

This gives us the complete IP which we write as

maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4

subject to


0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 <= 3.1
0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 <= 2.5
0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 <= 0.4
xj = 0 or 1 j=1,...,4

Note:

in writing down the complete IP we include the information that xj = 0 or 1 (j=1,...,4) as a reminder that the variables are integers

you see the usefulness of defining the variables to take zero/one values - e.g. in the objective the term 0.2x1 is zero if x1=0 (as we want, since there is no return from project 1 if we do not do it) and 0.2 if x1=1 (again as we want, since we get a return of 0.2 if we do project 1). Hence the zero-one nature of the decision variable means that the single term 0.2x1 captures what happens both when we do the project and when we do not do it.

you will note that the objective and constraints are linear (i.e. any term in the constraints/objective is either a constant or a constant multiplied by an unknown). In this course we deal only with linear integer programs (IP's with a linear objective and linear constraints). It is plain though that there do exist non-linear integer programs - these are, however, outside the scope of this course.

whereas before, when formulating LP's, we assumed that any fractional parts of integer variables could be ignored, it is clear that we cannot do so in this problem - what, for example, would be the physical meaning of a numeric solution with x1=0.4975?
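For readers who prefer code to package screenshots, the complete IP above can be written almost line for line in a modelling language. The sketch below uses the open-source PuLP library for Python - an assumption on my part, since the notes themselves use QSB and the Excel Solver:

    # Capital budgeting IP; assumes the PuLP library (pip install pulp) is available.
    from pulp import LpProblem, LpVariable, LpMaximize, value

    x = [LpVariable(f"x{j}", cat="Binary") for j in range(1, 5)]   # x[0] is x1, ..., x[3] is x4

    prob = LpProblem("capital_budgeting", LpMaximize)
    prob += 0.2*x[0] + 0.3*x[1] + 0.5*x[2] + 0.1*x[3]              # objective: total return
    prob += 0.5*x[0] + 1.0*x[1] + 1.5*x[2] + 0.1*x[3] <= 3.1       # year 1 capital
    prob += 0.3*x[0] + 0.8*x[1] + 1.5*x[2] + 0.4*x[3] <= 2.5       # year 2 capital
    prob += 0.2*x[0] + 0.2*x[1] + 0.3*x[2] + 0.1*x[3] <= 0.4       # year 3 capital

    prob.solve()
    print([value(v) for v in x], value(prob.objective))            # expect [0, 0, 1, 1] and 0.6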

Extensions to this basic problem include:

projects of different lengths
projects with different start/end dates
adding capital inflows from completed projects
projects with staged returns
carrying unused capital forward from year to year
mutually exclusive projects (can have one or the other but not both)
projects with a time window for the start time.

How to amend our basic IP to deal with such extensions is given here.

In fact note here that integer programming/quantitative modelling techniques are increasingly being used for financial problems.

Solving IP's

For solving LP's we have general purpose (independent of the LP being solved) and computationally effective (able to solve large LP's) algorithms (simplex or interior point).


For solving IP's no similar general purpose and computationally effective algorithms exist.

Indeed theory suggests that no general purpose computationally effective algorithms will ever be found. This area is known as computational complexity and concerns NP-completeness. It was developed from the early 1970's onward and basically is a theory concerning "how long it takes algorithms to run". This means that IP's are a lot harder to solve than LP's.

Solution methods for IP's can be categorised as:

general purpose (will solve any IP) but potentially computationally ineffective (will only solve relatively small problems); or

special purpose (designed for one particular type of IP problem) but potentially computationally more effective.

Solution methods for IP's can also be categorised as:

optimal
heuristic

An optimal algorithm is one which (mathematically) guarantees to find the optimal solution.

It may be that we are not interested in the optimal solution:

because the size of problem that we want to solve is beyond the computational limit of known optimal algorithms within the computer time we have available; or

we could solve optimally but feel that this is not worth the effort (time, money, etc) we would expend in finding the optimal solution.

In such cases we can use a heuristic algorithm - that is an algorithm that should hopefully find a feasible solution which, in objective function terms, is close to the optimal solution. In fact it is often the case that a well-designed heuristic algorithm can give good quality (near-optimal) results.

For example a heuristic for our capital budgeting problem would be:

consider each project in turn
decide to do the project if this is feasible in the light of previous decisions

Applying this heuristic we would choose to do just project 1 and project 2, giving a total return of 0.5, which may (or may not) be the optimal solution.
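A minimal sketch of this greedy heuristic in Python, assuming the capital requirement and return data given earlier:

    # Greedy heuristic: take each project in turn, accept it if the remaining
    # capital in every year allows it (illustrative sketch only).
    A = [[0.5, 1.0, 1.5, 0.1],          # capital needed per project, year 1
         [0.3, 0.8, 1.5, 0.4],          # year 2
         [0.2, 0.2, 0.3, 0.1]]          # year 3
    b = [3.1, 2.5, 0.4]                 # capital available per year
    returns = [0.2, 0.3, 0.5, 0.1]

    used = [0.0, 0.0, 0.0]
    chosen = []
    for j in range(4):                  # consider each project in turn
        if all(used[y] + A[y][j] <= b[y] for y in range(3)):
            chosen.append(j + 1)        # feasible given previous decisions: do it
            for y in range(3):
                used[y] += A[y][j]

    print("projects chosen:", chosen)                             # [1, 2]
    print("total return:", sum(returns[j - 1] for j in chosen))   # 0.5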

Hence we have four categories that we potentially need to consider:


general purpose, optimal
general purpose, heuristic
special purpose, optimal
special purpose, heuristic.

Note here that the methods presented below are suitable for solving both IP's (all variables integer) and MIP's (mixed-integer programs - some variables integer, some variables allowed to take fractional values).

General purpose optimal solution algorithms

We shall deal with just two general purpose (able to deal with any IP) optimal solution algorithms for IP's:

enumeration (sometimes called complete enumeration)
branch and bound (tree search).

We consider each of these in turn below. Note here that there does exist another general purpose solution algorithm based upon cutting planes but this is beyond the scope of this course.

Enumeration

Unlike in LP, where variables take continuous values (>= 0), in an IP (where all variables are integer) each variable can take only a finite number of discrete (integer) values.

Hence the obvious solution approach is simply to enumerate all these possibilities - calculating the value of the objective function at each one and choosing the (feasible) one with the optimal value.

For example for the capital budgeting problem considered above there are 2^4 = 16 possible solutions. These are:

x1 x2 x3 x4

0  0  0  0    do no projects

0  0  0  1    do one project
0  0  1  0
0  1  0  0
1  0  0  0

0  0  1  1    do two projects
0  1  0  1
1  0  0  1
0  1  1  0
1  0  1  0
1  1  0  0

1  1  1  0    do three projects
1  1  0  1
1  0  1  1
0  1  1  1

1  1  1  1    do four projects

Hence for our example we merely have to examine 16 possibilities before we know precisely what the best possible solution is. This example illustrates a general truth about integer programming:
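For a problem this small the enumeration can be coded directly; a minimal Python sketch, again assuming the data given earlier:

    # Complete enumeration of all 2^4 = 16 zero-one combinations (illustrative sketch).
    from itertools import product

    A = [[0.5, 1.0, 1.5, 0.1],
         [0.3, 0.8, 1.5, 0.4],
         [0.2, 0.2, 0.3, 0.1]]
    b = [3.1, 2.5, 0.4]
    returns = [0.2, 0.3, 0.5, 0.1]

    best = (-1.0, None)
    for x in product([0, 1], repeat=4):            # all 16 possibilities
        feasible = all(sum(A[y][j] * x[j] for j in range(4)) <= b[y] for y in range(3))
        if feasible:
            best = max(best, (sum(returns[j] * x[j] for j in range(4)), x))

    print("best return", best[0], "with x =", best[1])   # expect 0.6 with x = (0, 0, 1, 1)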

What makes solving the problem easy when it is small is precisely what makes it become very hard very quickly as the problem size increases

This is simply illustrated: suppose we have 100 integer variables each with 2 possible integer values then there are 2x2x2x...x2 = 2^100 (approximately 10^30) possibilities which we have to enumerate (obviously many of these possibilities will be infeasible, but until we generate one we cannot check it against the constraints to see if it is feasible or not).

This number is plainly too many for this approach to solving IP's to be computationally practicable. To see this consider the fact that the universe is around 10^10 years old so we would need to have considered 10^20 possibilities per year, approximately 4x10^12 possibilities per second, to have solved such a problem by now if we started at the beginning of the universe.

Be clear here - conceptually there is not a problem - simply enumerate all possibilities and choose the best one. But computationally (numerically) this is just impossible.

IP nowadays is often called "combinatorial optimisation" indicating that we are dealing with optimisation problems with an extremely large (combinatorial) increase in the number of possible solutions as the problem size increases.

Branch and bound (tree search)

The most effective general purpose optimal algorithm is LP-based tree search (tree search also being called branch and bound). This is a way of systematically enumerating feasible solutions such that the optimal integer solution is found.

Where this method differs from the enumeration method is that not all the feasible solutions are enumerated but only a fraction (hopefully a small fraction) of them. However we can still guarantee that we will find the optimal integer solution. The method was first put forward in the early 1960's by Land and Doig.

Consider our example capital budgeting problem. What made this problem difficult was the fact that the variables were restricted to be integers (zero or one). If the variables had been allowed to be fractional (taking any value between zero and one, for example) then we would have had an LP which we could easily solve. Suppose that we were to solve


this LP relaxation of the problem [replace xj = 0 or 1 j=1,...,4 by 0 <= xj <= 1 j=1,...,4]. Then using the package we get x2=0.5, x3=1, x1=x4=0 of value 0.65 (i.e. the objective function value of the optimal linear programming solution is 0.65).

As a result of this we now know something about the optimal integer solution, namely that it is <= 0.65, i.e. this value of 0.65 is an (upper) bound on the optimal integer solution. This is because when we relax the integrality constraint we (as we are maximising) end up with a solution value at least that of the optimal integer solution (and maybe better).

Consider this LP relaxation solution. We have a variable x2 which is fractional when we need it to be integer. How can we rid ourselves of this troublesome fractional value? To remove this troublesome fractional value we can generate two new problems:

original LP relaxation plus x2=0
original LP relaxation plus x2=1

then we will claim that the optimal integer solution to the original problem is contained in one of these two new problems. This process of taking a fractional variable (a variable which takes a fractional value in the LP relaxation) and explicitly constraining it to each of its integer values is known as branching. It can be represented diagrammatically as below (in a tree diagram, which is how the name tree search arises).

We now have two new LP relaxations to solve. If we do this we get:

P1 - original LP relaxation plus x2=0, solution x1=0.5, x3=1, x2=x4=0 of value 0.6
P2 - original LP relaxation plus x2=1, solution x2=1, x3=0.67, x1=x4=0 of value 0.63

This can be represented diagrammatically as below.


To find the optimal integer solution we just repeat the process, choosing one of these two problems, choosing one fractional variable and generating two new problems to solve.

Choosing problem P1 we branch on x1 to get our list of LP relaxations as:

P3 - original LP relaxation plus x2=0 (P1) plus x1=0, solution x3=x4=1, x1=x2=0 of value 0.6

P4 - original LP relaxation plus x2=0 (P1) plus x1=1, solution x1=1, x3=0.67, x2=x4=0 of value 0.53

P2 - original LP relaxation plus x2=1, solution x2=1, x3=0.67, x1=x4=0 of value 0.63

This can again be represented diagrammatically as below.

At this stage we have identified an integer feasible solution of value 0.6 at P3. There are no fractional variables, so no branching is necessary and P3 can be dropped from our list of LP relaxations.


Hence we now have new information about our optimal (best) integer solution, namely that it lies between 0.6 and 0.65 (inclusive).

Consider P4: it has value 0.53 and has a fractional variable (x3). However, if we were to branch on x3, any objective function values we get after branching can never be better (higher) than 0.53. As we already have an integer feasible solution of value 0.6, P4 can be dropped from our list of LP relaxations, since branching from it could never find an improved feasible solution. This is known as bounding - using a known feasible solution to identify that some relaxations are not of any interest and can be discarded.

Hence we are just left with:

P2 - original LP relaxation plus x2=1, solution x2=1, x3=0.67, x1=x4=0 of value 0.63

Branching on x3 we get

P5 - original LP relaxation plus x2=1 (P2) plus x3=0, solution x1=x2=1, x3=x4=0 of value 0.5

P6 - original LP relaxation plus x2=1 (P2) plus x3=1, problem infeasible

Neither P5 nor P6 leads to further branching, so we are done: we have discovered the optimal integer solution of value 0.6, corresponding to x3=x4=1, x1=x2=0.

The entire process we have gone through to discover this optimal solution (and to prove that it is optimal) is shown graphically below.


You should be clear as to why 0.6 is the optimal integer solution for this problem: simply put, if there were a better integer solution the above tree search process would (logically) have found it.

Note here that this method, like complete enumeration, also involves powers of two as we progress down the (binary) tree. However also note that we did not enumerate all possible integer solutions (of which there are 16). Instead here we solved 7 LP's. This is an important point, and indeed why tree search works at all. We do not need to examine as many LP's as there are possible solutions. Whilst the computational efficiency of tree search differs for different problems it is this basic fact that enables us to solve problems that would be completely beyond us were we to try complete enumeration.

You may have noticed that in the example above we never had more than one fractional variable in the LP solution at any tree node. This arises due to the fact that in constructing the above example I decided to make the situation as simple as possible. In general we might well have more than one fractional variable at a tree node and so we face a decision as to which variable to choose to branch on. A simple rule for deciding might be to take the fractional variable which is closest in value to 0.5, on the basis that the two branches (setting this variable to zero and one respectively) may well perturb the situation significantly.
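The whole procedure can be sketched compactly in code. The Python fragment below assumes the scipy library is available for solving the LP relaxations, and branches on the fractional variable closest in value to 0.5, as suggested above. It is an illustration of the idea for this particular example, not of how commercial solvers are implemented.

    # LP-based branch and bound for the capital budgeting IP; assumes scipy is installed.
    from scipy.optimize import linprog

    c = [-0.2, -0.3, -0.5, -0.1]                    # negated returns: linprog minimises
    A = [[0.5, 1.0, 1.5, 0.1],
         [0.3, 0.8, 1.5, 0.4],
         [0.2, 0.2, 0.3, 0.1]]
    b = [3.1, 2.5, 0.4]

    best_value, best_x = -1.0, None

    def branch(fixed):
        """Solve the LP relaxation with the variables in `fixed` pinned to 0 or 1."""
        global best_value, best_x
        bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(4)]
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success:                          # relaxation infeasible: prune this node
            return
        node_value = -res.fun
        if node_value <= best_value + 1e-9:          # bounding: cannot beat the incumbent
            return
        fractional = [j for j in range(4) if abs(res.x[j] - round(res.x[j])) > 1e-6]
        if not fractional:                           # integer feasible: new incumbent
            best_value, best_x = node_value, [int(round(v)) for v in res.x]
            return
        j = min(fractional, key=lambda k: abs(res.x[k] - 0.5))   # branch on variable nearest 0.5
        branch({**fixed, j: 0})                      # left branch: xj = 0
        branch({**fixed, j: 1})                      # right branch: xj = 1

    branch({})
    print("optimal value", best_value, "with x =", best_x)   # expect 0.6 with x = [0, 0, 1, 1]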

Good computer packages (solvers) exist for finding optimal solutions to IP's/MIP's via LP-based tree search. Many of the computational advances in IP optimal solution methods (e.g. constraint aggregation, coefficient reduction, problem reduction, automatic generation of valid inequalities) are included in these packages. Often the key to making successful use of such packages for any particular problem is to put effort into a good formulation of the problem in terms of the variables and constraints. By this we mean that for any particular IP there may be a number of valid formulations. Deciding which formulation to adopt in a solution algorithm is often a combination of experience and trial and error.

Constraint logic programming (CLP), also called constraint programming, which is essentially branch and bound but without the bound, can be of use here if:

the problem cannot be easily expressed in linear mathematics
the gap between the LP relaxation solution and the IP optimal solution is so large as to render LP-based tree search impracticable.

Currently there is a convergence between CLP and LP-based solvers with ILOG and CPLEX merging.

Capital budgeting solution - using QSB


The branch and bound method is the method used by the package. If you do not have access to an integer programming package one package (albeit with restricted capacity) is available free here.

The package input for the problem presented above is given below.

with the solution being shown below.

Here we can see that the optimal decision is to choose to do projects 3 and 4.


Capital budgeting solution - using Solver

Below we solve the capital budgeting problem with the Solver add-in that comes with Microsoft Excel.

If you click here you will be able to download an Excel spreadsheet called ip.xls that already has the IP we are considering set up.

Take this spreadsheet and look at Sheet A. You should see the problem we considered above set out as:

Here we have set up the problem with the decisions (the zero-one variables) being shown in cells B3 to B6. The capital used in each of the three years is shown in cells D9 to F9 and the objective (total return) is shown in cell B10 (for those of you interested in Excel we have used SUMPRODUCT in cells D9 to F9 and B10 as a convenient way of computing the sum of the product of two sets of cells).

Doing Tools and then Solver you will see:


where B10 is to be maximised (ignore the use of $ signs here - that is a technical Excel issue if you want to go into it in greater detail) by changing the decision cells B3 to B6 subject to the constraints that these cells have to be binary (zero-one) and that D9 to F9 are less than or equal to D7 to F7, the capital availability constraints.

Clicking on Options on the above Solver Parameters box you will see:

showing that we have ticked Assume Linear Model and Assume Non-Negative.

Solving we get:


indicating that the optimal decision (the decision that achieves the maximum return of 0.6 shown in cell B10) is to do projects 3 and 4 (the decision variables are one in cells B5 and B6) but not to do projects 1 and 2 (the decision variables are zero in cells B3 and B4).

IP applications

I will mention in the lecture (if there is time) a number of IP problems I have been involved with:

a manpower scheduling problem concerned with security personnel
a church location problem
placing boxes on shelves
scheduling aircraft landings

General purpose heuristic solution algorithms

Essentially the only effective approach here is to run a general purpose optimal algorithm and terminate early (e.g. after a specified computer time).

Special purpose optimal solution algorithms


If we are dealing with one specific type of IP then we might well be able to develop a special purpose solution algorithm (designed for just this one type of IP) that is more effective computationally than the general purpose branch and bound method given earlier. These are typically tree search approaches based upon generating bounds via:

dual ascent
lagrangean relaxation and:
  o subgradient optimisation; or
  o multiplier adjustment.

Such algorithms, being different for different problems, are really beyond the scope of this course but suffice to say:

such algorithms draw upon the concepts, such as branch and bound, outlined previously

such algorithms often use linear programming via LP relaxation

such algorithms take advantage of the structure of the constraints of the IP they are solving

different methods have different computational performance and their general behaviour is that each method is computationally effective up to a certain size of problem and then becomes computationally ineffective (as the effort needed to obtain an optimal solution begins to increase exponentially in terms of the size of problem considered).

A large amount of academic effort in this field is devoted to generating methods that out-perform previous methods for the same problem.

On a personal note this is an area with which I am familiar and special purpose algorithms can be very effective (e.g. a problem dealing with the location of warehouses and involving some 500,000 continuous variables, 500 zero-one variables and 500,000 constraints solved in some 20 minutes). They can be very successful compared with general purpose optimal algorithms (perhaps an order of magnitude or more in terms of the size of problem that can be solved).

You should be clear though why a special purpose optimal algorithm can be so much more computationally effective than a general purpose optimal algorithm: it is because you have to put intellectual effort into designing the algorithm!

Special purpose heuristic solution algorithms

With regard to heuristics we have a number of generic approaches in the literature, for example:

greedy
interchange
bound based heuristics (e.g. lagrangean heuristics)
tabu search
simulated annealing
population heuristics (e.g. genetic algorithms)

By generic here we mean that there is a general framework/approach from which to build an algorithm. All of these generic approaches however must be tailored for the particular IP we are considering. In addition we can design heuristics purely for the particular problem we are considering (problem-specific heuristics).

Heuristics for IP's are widespread in the literature and applied quite widely in practice. Less has been reported though in terms of heuristics for MIP's.

More about heuristics can be found here.

General IP application areas

There are many areas in which IP has been applied; below we briefly mention some of them, but only in words - no more maths!

Facility location

Given a set of facility locations and a set of customers who are served from the facilities then:

which facilities should be used
which customers should be served from which facilities
so as to minimise the total cost of serving all the customers.

Typically here facilities are regarded as "open" (used to serve at least one customer) or "closed" and there is a fixed cost which is incurred if a facility is open. Which facilities to have open and which closed is our decision (hence an IP with a zero-one variable representing whether the facility is closed (zero) or open (one)).
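As an illustration of how such a model can look in code, below is a minimal sketch of the classic (uncapacitated) facility location formulation, again using the PuLP library as an assumption; the facilities, customers and costs are made up purely for the example:

    # Uncapacitated facility location sketch; assumes the PuLP library is available.
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

    facilities = ["F1", "F2", "F3"]
    customers = ["C1", "C2", "C3", "C4"]
    fixed_cost = {"F1": 100, "F2": 120, "F3": 90}                   # cost of opening a facility
    pairs = [(i, j) for i in facilities for j in customers]
    serve_cost = {(i, j): 5 + 3 * abs(fi - cj)                      # made-up serving costs
                  for fi, i in enumerate(facilities)
                  for cj, j in enumerate(customers)}

    prob = LpProblem("facility_location", LpMinimize)
    is_open = LpVariable.dicts("open", facilities, cat="Binary")    # 1 if facility i is open
    serves = LpVariable.dicts("serves", pairs, cat="Binary")        # 1 if facility i serves customer j

    prob += (lpSum(fixed_cost[i] * is_open[i] for i in facilities)
             + lpSum(serve_cost[i, j] * serves[i, j] for (i, j) in pairs))
    for j in customers:
        prob += lpSum(serves[i, j] for i in facilities) == 1        # every customer is served
    for (i, j) in pairs:
        prob += serves[i, j] <= is_open[i]                          # only open facilities can serve

    prob.solve()
    print("open facilities:", [i for i in facilities if value(is_open[i]) > 0.5])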

Below we show a graphical representation of the problem.


One possible solution is shown below.

Other factors often encountered here:


customers have an associated demand with capacities (limits) on the total customer demand that can be served from a facility

customers being served by more than one facility.

Vehicle routing

Given a set of vehicles based at a central depot and a set of geographically dispersed customers which require visits from the vehicles (e.g. to supply some goods), which vehicles should visit which customers and in what order?

The problem is shown graphically below.

One possible solution is:


Other factors that can occur here are:

time windows for customer visits
deliveries and collections
compartmentalised vehicles (e.g. tankers).

For more about this problem see the vehicle routing notes.

How do I recognise an IP problem?

As a final thought, any decision problem where any of the decisions that have to be made are essentially discrete (such as yes/no, go/no-go), in that one option must be chosen from a finite set of alternatives, can potentially be formulated and solved as an integer programming problem.

How do I choose which IP solution method is appropriate?

Recall the four categories we had:

general purpose, optimal
general purpose, heuristic
special purpose, optimal
special purpose, heuristic.

One point to note here, although we did not stress it above, is that often a heuristic algorithm can be built for an IP problem without ever having a mathematical formulation of the problem. After all, one starts from a verbal description of a problem to construct a mathematical formulation; equally, one can construct (i.e. design and code) a heuristic algorithm from a verbal description of the problem (e.g. see here for an example of this).

Factors which come into play in choosing which IP solution method is appropriate are:

size of the IP (variables and constraints)
time available to build the model (formulation plus solution algorithm)
time available for computer solution once the model has been built
experience.

Note too that typically IP is used (if applicable) when:

the problem is a strategic one with large amounts of money involved
the problem is a tactical one that requires repeated solutions.

LP relaxation

For any IP we can generate an LP (called the LP relaxation) from the IP by taking the same objective function and same constraints but with the requirement that variables are integer replaced by appropriate continuous constraints

e.g. xi = 0 or 1 can be replaced by the two continuous constraints xi >= 0 and xi <= 1

We can then solve this LP relaxation of the original IP.

For example consider the IP for the capital budgeting problem considered before.

The LP relaxation of this IP is given by:

maximise 0.2x1 + 0.3x2 + 0.5x3 + 0.1x4

subject to

0.5x1 + 1.0x2 + 1.5x3 + 0.1x4 <= 3.1

0.3x1 + 0.8x2 + 1.5x3 + 0.4x4 <= 2.5

0.2x1 + 0.2x2 + 0.3x3 + 0.1x4 <= 0.4

xj <= 1 j=1,...,4


xj >= 0 j=1,...,4

If we solve this LP then:

the LP solution might turn out to have all variables taking integer values at the LP optimal solution - if so we have found the optimal integer solution (in such a case we have been lucky and the LP is said to be "naturally integer").

However for the LP relaxation of the capital budgeting problem the LP solution is x1=0, x2=0.5, x3=1, x4=0, so we have been unlucky.
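A minimal sketch, assuming the scipy library is available, of solving this LP relaxation directly (linprog minimises, so the returns are negated):

    # Solve the LP relaxation of the capital budgeting problem; assumes scipy is installed.
    from scipy.optimize import linprog

    c = [-0.2, -0.3, -0.5, -0.1]
    A = [[0.5, 1.0, 1.5, 0.1],
         [0.3, 0.8, 1.5, 0.4],
         [0.2, 0.2, 0.3, 0.1]]
    b = [3.1, 2.5, 0.4]

    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * 4, method="highs")
    print(res.x, -res.fun)       # expect roughly [0, 0.5, 1, 0] with objective value 0.65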

If we have variables taking fractional values at the LP optimal solution then we can round these to the nearest integer value. If we do this then:

this may lead to certain constraints being violated (i.e. we have an infeasible solution) - this may, or may not, be important.

For example if we round the LP solution given above to x1=0, x2=1, x3=1, x4=0 then the constraint relating to the amount of available capital in year 3 is violated.

the solution may be feasible (all constraints satisfied) but the solution may not be the optimal integer solution.

For example if we round the LP solution given above to x1=0, x2=0, x3=1, x4=0 then all the constraints are satisfied and we have a feasible solution of value 0.5.

In fact for this particular (very small) problem this is not the optimal solution. In general we can say that the rounded LP relaxation solution is almost certainly non-optimal and may be very non-optimal (in the sense that the objective function value of the rounded LP relaxation solution is very far away from the objective function value of the optimal integer solution).
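A small sketch, assuming the capital budgeting data above, that makes both rounding outcomes explicit:

    # Check feasibility and return of each rounding of the LP relaxation optimum (0, 0.5, 1, 0).
    A = [[0.5, 1.0, 1.5, 0.1],
         [0.3, 0.8, 1.5, 0.4],
         [0.2, 0.2, 0.3, 0.1]]
    b = [3.1, 2.5, 0.4]
    returns = [0.2, 0.3, 0.5, 0.1]

    def check(x):
        """Return (is the solution feasible?, its total return)."""
        feasible = all(sum(A[y][j] * x[j] for j in range(4)) <= b[y] for y in range(3))
        return feasible, sum(returns[j] * x[j] for j in range(4))

    print(check([0, 1, 1, 0]))   # x2 rounded up: infeasible - year 3 capital is violated
    print(check([0, 0, 1, 0]))   # x2 rounded down: feasible, value 0.5, below the optimum of 0.6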

In general the solution process used in many LP based IP packages is

specify the IP: objective, constraints, integer variables (typically xj = integer between 0 and n).

automatically generate the LP relaxation: same objective as IP, same constraints as IP with the addition of 0<=xj<=n, variables as before but no longer required to be integer

use the tree search procedure to generate the optimal solution to the IP.
