
ASSIGNMENT- Set 1

1. Write a note on the following:

a. Cause and effect diagram

Definition

A Cause and Effect Diagram is also sometimes called a "Fishbone" or "Ishikawa" diagram because, when completed, it often looks like fish bones radiating off a central spine. The Ishikawa diagram was originally developed by Kaoru Ishikawa, a Japanese quality management professional, from the cause-and-effect paradigm applied to management processes; it helps decipher the key relationships among process variables for use in process improvement. In simpler words, it is an analysis tool that provides a systematic way of looking at effects and the causes that create or contribute to those effects.

When to Use a Fishbone Diagram

1. When you are not able to identify possible causes for a problem.
2. When your team's thinking tends to fall into ruts and you do not know where to start.

Contributing Factors

The Ishikawa diagram is used to decompose the factors or forces that produce an outcome. A horizontal line on the page leads to the outcome, and the different forces or contributing factors branch off that main line. Each contributing factor can in turn be decomposed with additional lines branching off it.

Visual Understanding

After completing an Ishikawa diagram, a person gains a visual understanding of how many different causes or contributing factors led to an outcome. The tool also helps identify which elements or forces can be modified to alter that outcome. For further micro-analysis of the inferences drawn from the Ishikawa diagram, another technique, the Pareto chart, can also be used.

Steps in creating a Fish Bone Diagram


Step 1: Identify the problem

Write the problem/issue to be studied in the "head of the fish". From this box originates the main branch (the "fish spine") of the diagram.

Step 2: Identify the major factors involved

Brainstorm the major categories of causes of the problem. Label each bone of the fish, writing the categories of causes as branches from the main arrow. If this is difficult, use generic headings:

The 6 M's: Machine, Method, Materials, Measurement, Man and Mother Nature.
The 8 P's: Price, Promotion, People, Processes, Place/Plant, Policies, Procedures and Product.
The 4 S's: Surroundings, Suppliers, Systems, Skills.

Step 3: Identify possible causes

Brainstorm all the possible causes of the problem. Ask: "Why does this happen?" As each idea is given, the facilitator writes it as a branch from the appropriate category. Again ask "Why does this happen?" about each cause, writing sub-causes branching off the causes. Continue to ask "Why?" to generate deeper levels of causes; layers of branches indicate causal relationships. When the group runs out of ideas, focus attention on places on the chart where ideas are few.

Step 4: Interpret your diagram

Analyse the results of the fishbone after team members agree that an adequate amount of detail has been provided under each major category. Do this by looking for items that appear in more than one category; these become the most likely causes. For the items identified as most likely causes, the team should reach consensus on listing them in priority order, with the first item being the most probable cause.
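The brainstorming output described in the steps above can be captured as a simple nested mapping before it is drawn. This is a hypothetical sketch (the problem statement, category names and causes are invented for illustration; the categories follow the "6 M's" headings mentioned above):

```python
# Hypothetical example data: effect plus category -> cause -> sub-causes,
# mirroring the spine / bones / sub-bones of a fishbone diagram.
fishbone = {
    "problem": "High reject rate on line 3",
    "categories": {
        "Machine": {"Worn cutting tool": ["No preventive maintenance schedule"]},
        "Method": {"Inconsistent setup procedure": ["No written work instruction"]},
        "Man": {"New operators": ["Training programme incomplete"]},
        "Materials": {"Variable stock thickness": ["Supplier spec too loose"]},
    },
}

def outline(diagram):
    """Render the diagram as an indented text outline (spine -> bones -> sub-bones)."""
    lines = [f"Effect: {diagram['problem']}"]
    for category, causes in diagram["categories"].items():
        lines.append(f"  {category}")
        for cause, sub_causes in causes.items():
            lines.append(f"    - {cause}")
            for sub in sub_causes:
                lines.append(f"      * {sub}")
    return "\n".join(lines)

print(outline(fishbone))
```

Keeping the diagram as data like this makes it easy to count causes per category later, for example when feeding the results into a Pareto analysis.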

b. Control charts


Comparison of univariate and multivariate control data

Control charts are used to routinely monitor quality. Depending on the number of process characteristics to be monitored, there are two basic types of control charts. The first, referred to as a univariate control chart, is a graphical display (chart) of one quality characteristic. The second, referred to as a multivariate control chart, is a graphical display of a statistic that summarizes or represents more than one quality characteristic.

Characteristics of control charts

If a single quality characteristic has been measured or computed from a sample, the control chart shows the value of the quality characteristic versus the sample number or versus time. In general, the chart contains a center line that represents the mean value for the in-control process. Two other horizontal lines, called the upper control limit (UCL) and the lower control limit (LCL), are also shown on the chart. These control limits are chosen so that almost all of the data points will fall within them as long as the process remains in control.

[Figure: chart demonstrating the basis of a control chart]

Why control charts "work"

The control limits pictured in the graph might be 0.001 probability limits. If so, and if chance causes alone were present, the probability of a point falling above the upper limit would be one in a thousand, and similarly for a point falling below the lower limit. We would search for an assignable cause whenever a point fell outside these limits. Where we place these limits determines the risk of undertaking such a search when in reality there is no assignable cause for the variation.

Since two in a thousand is a very small risk, the 0.001 limits may be said to give practical assurance that, if a point falls outside them, the variation was caused by an assignable cause. It must be noted that two in a thousand is a purely arbitrary number; there is no reason why it could not have been set to one in a hundred, or even larger. The decision depends on the amount of risk the management of the quality control program is willing to take. In general (in the world of quality control) it is customary to use limits that approximate the 0.002 standard.


Letting X denote the value of a process characteristic, if the system of chance causes generates variation in X that follows the normal distribution, the 0.001 probability limits will be very close to the 3-sigma limits. From normal tables we glean that the probability beyond 3 sigma in one direction is 0.00135, or in both directions 0.0027. For normal distributions, therefore, the 3-sigma limits are the practical equivalent of 0.001 probability limits.

Plus or minus "3 sigma" limits are typical In the U.S., whether X is normally distributed or not, it is an acceptable practice to base the control limits upon a multiple of the standard deviation. Usually this multiple is 3 and thus the limits are called 3-sigma limits. This term is used whether the standard deviation is the universe or population parameter, or some estimate thereof, or simply a "standard value" for control chart purposes. It should be inferred from the context what standard deviation is involved. (Note that in the U.K., statisticians generally prefer to adhere to probability limits.)
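The 3-sigma recipe above can be sketched in a few lines. This is a simplified illustration with made-up subgroup means: the sigma here is just the sample standard deviation of the subgroup means, whereas production X-bar charts usually estimate sigma from subgroup ranges (R-bar/d2):

```python
import statistics

def three_sigma_limits(subgroup_means):
    """Center line and 3-sigma control limits for a list of subgroup means.

    Simplified sketch: sigma is taken as the sample standard deviation of the
    subgroup means themselves, not the range-based estimate used in practice.
    """
    center = statistics.mean(subgroup_means)
    sigma = statistics.stdev(subgroup_means)
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical subgroup means from a stable process
means = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
lcl, cl, ucl = three_sigma_limits(means)
out_of_control = [m for m in means if not lcl <= m <= ucl]
print(f"LCL={lcl:.3f}  CL={cl:.3f}  UCL={ucl:.3f}  out-of-control: {out_of_control}")
```

With all points inside the limits and no systematic pattern, the process would be judged in control.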

If the underlying distribution is skewed, say in the positive direction, the 3-sigma limit will fall short of the upper 0.001 limit, while the lower 3-sigma limit will fall below the 0.001 limit. This situation means that the risk of looking for assignable causes of positive variation when none exists will be greater than one out of a thousand. But the risk of searching for an assignable cause of negative variation, when none exists, will be reduced. The net result, however, will be an increase in the risk of a chance variation beyond the control limits. How much this risk will be increased will depend on the degree of skewness.

If variation in quality follows a Poisson distribution, for example with np = 0.8, the use of 3-sigma limits raises the risk of exceeding the upper limit by chance from 0.001 to 0.009, and reduces the risk at the lower limit from 0.001 to 0. For a Poisson distribution the mean and variance both equal np. Hence the upper 3-sigma limit is 0.8 + 3 sqrt(0.8) = 3.48 and the lower limit is 0 (here sqrt denotes "square root"). For np = 0.8, the probability of getting more than 3 occurrences is 0.009.
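The quoted figures can be checked directly from the Poisson probability mass function, P(X = k) = e^(-lambda) lambda^k / k!, with lambda = np = 0.8:

```python
import math

lam = 0.8  # Poisson mean (np in the text)
upper_3sigma = lam + 3 * math.sqrt(lam)  # 0.8 + 3*sqrt(0.8), about 3.48

# P(X <= 3) summed from the Poisson pmf, then the tail beyond the upper limit
p_at_most_3 = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(4))
p_more_than_3 = 1 - p_at_most_3

print(f"upper 3-sigma limit = {upper_3sigma:.2f}")
print(f"P(X > 3) = {p_more_than_3:.3f}")
```

This reproduces the 0.009 risk stated above for exceeding the upper 3-sigma limit by chance.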

Strategies for dealing with out-of-control findings If a data point falls outside the control limits, we assume that the process is probably out of control and that an investigation is warranted to find and eliminate the cause or causes.

Does this mean that when all points fall within the limits, the process is in control? Not necessarily. If the plot looks non-random, that is, if the points exhibit some form of systematic behavior, there is still something wrong. For example, if the first 25 of 30 points fall above the center line and the last 5 fall below the center line, we would wish to know why this is so. Statistical methods to detect sequences or nonrandom patterns can be applied to the interpretation of control charts. To be sure, "in control" implies that all points are between the control limits and they form a random pattern.
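A simple version of the non-randomness check described above (a long run of points on one side of the center line) can be sketched as follows; the run length of eight or nine as a signal is the threshold used by common run rules such as the Western Electric rules, not something this text prescribes:

```python
def longest_run_one_side(points, center):
    """Length of the longest consecutive run of points strictly on one side
    of the center line. Common run rules flag runs of 8 or 9 as non-random."""
    longest = current = 0
    prev_side = 0
    for p in points:
        side = (p > center) - (p < center)  # +1 above, -1 below, 0 on the line
        if side != 0 and side == prev_side:
            current += 1
        else:
            current = 1 if side else 0
        prev_side = side
        longest = max(longest, current)
    return longest

# Hypothetical plot: nine points above the center line, then one below
data = [10.2, 10.3, 10.1, 10.4, 10.2, 10.5, 10.1, 10.3, 10.2, 9.7]
print(longest_run_one_side(data, center=10.0))
```

Here every point is inside plausible control limits, yet the run of nine above the center line is exactly the kind of systematic behavior that would still prompt investigation.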

c. Pareto analysis


The Pareto effect.

In practically every industrial country a small proportion of all the factories employ a disproportionate number of factory operatives. In some countries 15 percent of the firms employ 70 percent of the people. This same state of affairs is repeated time after time. In retailing for example, one usually finds that up to 80 percent of the turnover is accounted for by 20 percent of the lines.

This effect, known as the 80 : 20 rule, can be observed in action so often that it seems to be almost a universal truth. As several economists have pointed out, at the turn of the century the bulk of the country’s wealth was in the hands of a small number of people.

This fact gave rise to the Pareto effect or Pareto's law: a small proportion of causes produce a large proportion of results. Thus frequently a vital few causes may need special attention while the trivial many may warrant very little. It is this phrase that is most commonly used in talking about the Pareto effect – 'the vital few and the trivial many'. A vital few customers may account for a very large percentage of total sales. A vital few taxes produce the bulk of total revenue. A vital few improvements can produce the bulk of the results.

The Pareto effect is named after Vilfredo Pareto, an economist and sociologist who lived from 1848 to 1923. Originally trained as an engineer he was a one time managing director of a group of coalmines. Later he took the chair of economics at Lausanne University, ultimately becoming a recluse. Mussolini made him a senator in 1922 but by his death in 1923 he was already at odds with the regime. Pareto was an elitist believing that the concept of the vital few and the trivial many extended to human beings.

Much of his writing is now out of favour and some people would like to re-name the effect after Mosca, or even Lorenz. However it is too late now – the Pareto principle has earned its place in the manager’s kit of productivity improvement tools.

This method stems in the first place from Pareto’s suggestion of a curve of the distribution of wealth in a book of 1896. Whatever the source, the phrase of ‘the vital few and the trivial many’ deserves a place in every manager’s thinking. It is itself one of the most vital concepts in modern management. The results of thinking along Pareto lines are immense.

For example, we may have a large number of customer complaints, a lot of shop floor accidents, a high percentage of rejects, and a sudden increase in costs etc. The first stage is to carry out a Pareto analysis. This is nothing more than a list of causes in descending order of their frequency or occurrence. This list automatically reveals the vital few at the top of the list, gradually tailing off into the trivial many at the bottom of the list. Management’s task is now clear and unavoidable: effort must be expended on those vital few at the head of the list first. This is because nothing of importance can take place unless it affects the vital few. Thus management’s attention is unavoidably focussed where it will do most good.


Another example is stock control. You frequently find an elaborate procedure for stock control with considerable paperwork flow. This is usually because the systems and procedures are geared to the most costly or fast-moving items. As a result, trivial parts may cost a firm more in paperwork than they cost to purchase or to produce. An answer is to split the stock into three types, usually called A, B and C. Grade A items are the top 10 percent or so in money terms, while grade C are the bottom 50-75 percent; grade B items are those in between. It is often well worthwhile treating these three types of stock differently, leading to considerable savings in money tied up in stock.
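The A/B/C split can be sketched as below. The item names, values and exact cut-offs are invented for illustration; the thresholds simply echo the rough "top 10 percent / bottom 50-75 percent" split described above:

```python
def abc_classify(items, a_cut=0.10, c_cut=0.50):
    """Grade stock items A/B/C by annual value (illustrative thresholds).

    After sorting by value, the top `a_cut` fraction of items becomes grade A
    and the bottom `c_cut` fraction grade C; the rest are grade B.
    """
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    n_a = max(1, round(n * a_cut))
    n_c = round(n * c_cut)
    grades = {}
    for i, (name, _value) in enumerate(ranked):
        if i < n_a:
            grades[name] = "A"
        elif i >= n - n_c:
            grades[name] = "C"
        else:
            grades[name] = "B"
    return grades

# Hypothetical annual stock values in money terms
stock = {"motor": 9000, "bearing": 2500, "gasket": 120, "bolt": 40,
         "shaft": 4200, "washer": 15, "seal": 300, "clip": 8,
         "pulley": 1800, "screw": 25}
print(abc_classify(stock))
```

Each grade can then be given its own reorder procedure: tight control and frequent review for A items, simple bulk rules for C items.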

Production control can use the same principle by identifying the vital few processes which control the manufacture, and then building the planning around these key processes. In quality control, concentrating in particular on the most troublesome causes follows the same principle. In management control, the principle is used by top management continually watching certain key figures.

Thus it is clear that the Pareto concept – ‘the vital few and the trivial many’ – is of utmost importance to management.

The Pareto chart

A Pareto chart is a graphical representation that displays data in order of priority. It can be a powerful tool for identifying the relative importance of causes, most of which arise from only a few of the processes, hence the 80:20 rule. Pareto Analysis is used to focus problem solving activities, so that areas creating most of the issues and difficulties are addressed first.

Some problems

Difficulties associated with Pareto Analysis:


• Misrepresentation of the data.
• Inappropriate measurements depicted.
• Lack of understanding of how it should be applied to particular problems.
• Knowing when and how to use Pareto Analysis.
• Inaccurate plotting of cumulative percent data.

Overcoming the difficulties

• Define the purpose of using the tool.
• Identify the most appropriate measurement parameters.
• Use check sheets to collect data for the likely major causes.
• Arrange the data in descending order of value and calculate percentage frequency and/or cost and cumulative percent.
• Plot the cumulative percent through the top right side of the first bar.
• Carefully scrutinise the results. Has the exercise clarified the situation?
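The descending-order and cumulative-percent calculation at the heart of a Pareto analysis can be sketched as follows, using an invented set of complaint counts:

```python
def pareto_table(causes):
    """Arrange cause counts in descending order and compute percentage
    frequency and cumulative percent. Returns (cause, count, pct, cum_pct) rows."""
    total = sum(causes.values())
    rows, cum = [], 0.0
    for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
        pct = 100.0 * count / total
        cum += pct
        rows.append((cause, count, round(pct, 1), round(cum, 1)))
    return rows

# Hypothetical customer-complaint tally from a check sheet
complaints = {"late delivery": 42, "damaged goods": 25, "wrong item": 18,
              "billing error": 9, "rude staff": 4, "other": 2}
for row in pareto_table(complaints):
    print(row)
```

The top of the resulting list automatically reveals the vital few: here the first three causes account for 85 percent of all complaints, so effort goes there first.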

In conclusion

Even in circumstances which do not strictly conform to the 80 : 20 rule the method is an extremely useful way to identify the most critical aspects on which to concentrate. When used correctly Pareto Analysis is a powerful and effective tool in continuous improvement and problem solving to separate the ‘vital few’ from the ‘many other’ causes in terms of cost and/or frequency of occurrence.

It is the discipline of organising the data that is central to the success of using Pareto Analysis. Once calculated and displayed graphically, it becomes a selling tool to the improvement team and management, justifying why the team is focusing its energies on certain aspects of the problem.

2. Write a note on the evolution of Quality.

Introduction

Before the concepts and ideas of TQM were formalised, much work had taken place over the centuries to reach this stage. This section charts the evolution, from inspection through to the present-day concepts of total quality.

From inspection to total quality

During the early days of manufacturing, an operative's work was inspected and a decision made whether to accept or reject it. As businesses became larger, so too did this role, and full-time inspection jobs were created.

Accompanying the creation of inspection functions, other problems arose:
• More technical problems occurred, requiring specialised skills, often not possessed by production workers
• The inspectors lacked training
• Inspectors were ordered to accept defective goods, to increase output


• Skilled workers were promoted into other roles, leaving less skilled workers to perform the operational jobs, such as manufacturing

These changes led to the birth of the separate inspection department with a "chief inspector", reporting to either the person in charge of manufacturing or the works manager. With the creation of this new department, there came new services and issues, e.g. standards, training, recording of data and the accuracy of measuring equipment. It became clear that the responsibilities of the "chief inspector" were more than just product acceptance, and a need to address defect prevention emerged. Hence the quality control department evolved, in charge of which was a "quality control manager", with responsibility for the inspection services and quality control engineering.

In the 1920s, statistical theory began to be applied effectively to quality control, and in 1924 Shewhart made the first sketch of a modern control chart. His work was later developed by Deming, and the early work of Shewhart, Deming, Dodge and Romig constitutes much of what today comprises the theory of statistical process control (SPC). However, there was little use of these techniques in manufacturing companies until the late 1940s.

At that time, Japan's industrial system was virtually destroyed, and it had a reputation for cheap imitation products and an illiterate workforce. The Japanese recognised these problems and set about solving them with the help of some notable quality gurus: Juran, Deming and Feigenbaum. In the early 1950s, quality management practices developed rapidly in Japanese plants and became a major theme in Japanese management philosophy, such that, by 1960, quality control and management had become a national preoccupation.

By the late 1960s and early 1970s, Japan's exports to the USA and Europe had increased significantly, owing to its cheaper, higher-quality products compared with their Western counterparts.

In 1969 the first international conference on quality control, sponsored by Japan, America and Europe, was held in Tokyo. In a paper given by Feigenbaum, the term "total quality" was used for the first time, referring to wider issues such as planning, organisation and management responsibility. Ishikawa gave a paper explaining how "total quality control" in Japan was different, meaning "company-wide quality control", and describing how all employees, from top management to the workers, must study and participate in quality control. Company-wide quality management was common in Japanese companies by the late 1970s.

The quality revolution in the West was slow to follow, and did not begin until the early 1980s, when companies introduced their own quality programmes and initiatives to counter the Japanese success. Total quality management (TQM) became the centre of these drives in most cases.


In a Department of Trade & Industry publication in 1982, it was stated that Britain's world trade share was declining and that this was having a dramatic effect on the standard of living in the country. There was intense global competition, and any country's economic performance and reputation for quality were made up of the reputations and performances of its individual companies and products/services.

The British Standard (BS) 5750 for quality systems had been published in 1979, and in 1983 the National Quality Campaign was launched, using BS 5750 as its main theme. The aim was to bring to the attention of industry the importance of quality for competitiveness and survival in the world marketplace. Since then, International Organization for Standardization (ISO) 9000 has become the internationally recognised standard for quality management systems. It comprises a number of standards that specify the requirements for the documentation, implementation and maintenance of a quality system.

TQM is now part of a much wider concept that addresses overall organisational performance and recognises the importance of processes. There is also extensive research evidence demonstrating the benefits of the approach. As we move into the 21st century, TQM has developed in many countries into holistic frameworks aimed at helping organisations achieve excellent performance, particularly in customer and business results. In Europe, a widely adopted framework is the so-called "Business Excellence" or "Excellence" Model, promoted by the European Foundation for Quality Management (EFQM), and in the UK by the British Quality Foundation (BQF).

3. Explain briefly the famous "14 points" of quality management by W. Edwards Deming.

Deming offered fourteen key principles to management for transforming business effectiveness. The points were first presented in his book "Out of the Crisis".


1. Create constancy of purpose toward improvement of product and service, with the aim to become competitive and stay in business, and to provide jobs.

2. Adopt the new philosophy. We are in a new economic age. Western management must awaken to the challenge, must learn their responsibilities, and take on leadership for change.

3. Cease dependence on inspection to achieve quality. Eliminate the need for massive inspection by building quality into the product in the first place.

4. End the practice of awarding business on the basis of price tag. Instead, minimize total cost. Move towards a single supplier for any one item, on a long-term relationship of loyalty and trust.

5. Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.

6. Institute training on the job.

7. Institute leadership (see Point 12 and Ch. 8 of "Out of the Crisis"). The aim of supervision should be to help people and machines and gadgets to do a better job. Supervision of management is in need of overhaul, as well as supervision of production workers.

8. Drive out fear, so that everyone may work effectively for the company. (See Ch. 3 of "Out of the Crisis")

9. Break down barriers between departments. People in research, design, sales, and production must work as a team, to foresee problems of production and in use that may be encountered with the product or service.

10. Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force.

11. a. Eliminate work standards (quotas) on the factory floor. Substitute leadership.
b. Eliminate management by objective. Eliminate management by numbers, numerical goals. Substitute leadership.

12. a. Remove barriers that rob the hourly worker of his right to pride of workmanship. The responsibility of supervisors must be changed from sheer numbers to quality.
b. Remove barriers that rob people in management and in engineering of their right to pride of workmanship. This means, inter alia, abolishment of the annual or merit rating and of management by objective (see Ch. 3 of "Out of the Crisis").

13. Institute a vigorous program of education and self-improvement.

14. Put everybody in the company to work to accomplish the transformation. The transformation is everybody's job.

4. Describe Philip Crosby's "four absolutes of Quality".

Crosby’s Four Absolutes of Quality.

Crosby espoused his basic theories about quality in four Absolutes of Quality Management as follows:


1. Quality means conformance to requirements, not goodness.

2. The system for causing quality is prevention, not appraisal.

3. The performance standard must be zero defects, not "that's close enough."

4. The measurement of quality is the price of nonconformance, not indexes.

To support his Four Absolutes of Quality Management, Crosby developed the Quality Management Maturity Grid and Fourteen Steps of Quality Improvement. Crosby sees the Quality Management Maturity Grid as a first step in moving an organization towards quality management. After a company has located its position on the grid, it implements a quality improvement system based on Crosby’s Fourteen Steps of Quality Improvement. Crosby’s Absolutes of Quality Management are further delineated in his Fourteen Steps of Quality Improvement as shown below:

Step 1 Management Commitment

Step 2 Quality Improvement Teams

Step 3 Quality Measurement

Step 4 Cost of Quality Evaluation

Step 5 Quality Awareness

Step 6 Corrective Action

Step 7 Zero-Defects Planning

Step 8 Supervisory Training

Step 9 Zero Defects

Step 10 Goal Setting

Step 11 Error Cause Removals

Step 12 Recognition

Step 13 Quality Councils

Step 14 Do It All Over Again

5. Describe briefly the '5S' principles.


Ans: '5S' is a list of five Japanese words which, transliterated and translated into English, start with the letter 'S'. They name a methodology for organizing, cleaning, developing and sustaining a productive work environment, and for bringing in an efficient flow of activities.

Industries producing critical items (health care, aerospace) have realized that clean and neat workplaces are essential in achieving low levels of defects and improving customer satisfaction. While the quality levels demanded by the Six Sigma approach may have a similar impact to 5S, the simplicity of following 5S is to be recognized.

These five words are part of a very basic management system that focuses employees' attention on the 5S movement of the organization.

The Japanese words, referred to as 5S process or house keeping steps are:

1) SEIRI (Sorting),

2) SEITON (Straighten),

3) SEISO (Sweep or Spic and span),


4) SEIKETSU (Standardize) and

5) SHITSUKE (Self-discipline).

Some of the benefits realized are listed below

Good House Keeping:

5S is a determination to organize the workplace, to keep it neat and clean, and to maintain the discipline necessary to do a good job. Good housekeeping, along with a pleasant and conducive atmosphere, facilitates idea generation and improvement.

Minimizes waste and rework, improves productivity:

As a system, 5S reduces waste and optimizes productivity, and an orderly workplace brings a sense of ownership that helps achieve consistent operational results.

Employee motivation:

A well organized workplace motivates people, leads to higher level of satisfaction and achievement.

Quality improvement:

5S facilitates defect prevention in quality management: non-conformities, abnormalities and discrepancies are identified before defects occur.

Improves production Efficiency:

5S is a philosophy of managing the workspace: it improves efficiency by eliminating waste, improving work flow, reducing process unevenness, improving logistics and ensuring a smooth production process. Further, cleaning of machines, preventive maintenance, shop floor painting, washing and replacing spares all contribute to improved production efficiency.

Ensures timely delivery:

Eliminating time lost in set-ups, document retrieval, etc., together with preplanning, helps ensure timely delivery of products.

Ensure Safety:


A well organized and orderly workplace is a safer workplace: 5S activities remove clutter, alert people to hazardous situations and give way to a safe work environment. Accident-prone areas are identified before accidents occur.

Improves employee morale and motivation:

5S process can improve the employee’s morale as they feel better about their work, take pride in their work and own the responsibility.

Has an effect on Cost Reduction:

All the benefits listed above result in improved efficiency and productivity, which directly contribute to effective cost reduction.

Optimizes utilization of space, time and energy:

Removing the unwanted and minimizing waste maximizes the productive space and also saves on other logistics.

6. What are the objectives and features of MBNQA?

The Malcolm Baldrige National Quality Award (MBNQA) is presented annually by the President of the United States to organizations that demonstrate quality and performance excellence. Three awards may be given annually in each of six categories:

• Manufacturing
• Service company
• Small business
• Education
• Healthcare
• Nonprofit

Established by Congress in 1987 for manufacturers, service businesses and small businesses, the Baldrige Award was designed to raise awareness of quality management and recognize U.S. companies that have implemented successful quality-management systems.

The education and healthcare categories were added in 1999. A government and nonprofit category was added in 2007.

The Baldrige Award is named after the late Secretary of Commerce Malcolm Baldrige, a proponent of quality management. The U.S. Commerce Department's National Institute of Standards and Technology manages the award and ASQ administers it.


Organizations that apply for the Baldrige Award are judged by an independent board of examiners. Recipients are selected based on achievement and improvement in seven areas, known as the Baldrige Criteria for Performance Excellence:

1. Leadership: How upper management leads the organization, and how the organization leads within the community.

2. Strategic planning: How the organization establishes and plans to implement strategic directions.

3. Customer and market focus: How the organization builds and maintains strong, lasting relationships with customers.

4. Measurement, analysis, and knowledge management: How the organization uses data to support key processes and manage performance.

5. Human resource focus: How the organization empowers and involves its workforce.

6. Process management: How the organization designs, manages and improves key processes.

7. Business/organizational performance results: How the organization performs in terms of customer satisfaction, finances, human resources, supplier and partner performance, operations, governance and social responsibility, and how the organization compares to its competitors.

ASSIGNMENT- Set 2


1. What is meant by “Design of experiments”? What are the salient features of Design of experiments?

Ans: Design of Experiments (DOE)

Taguchi observed that 80% of defective items are caused by poor design. The design of experiments technique is used to resolve these problems at the design stage itself.

Organizations are achieving world-class quality and improving productivity by using designed experiments, wherein changes are intentionally introduced into the process or system and their effect on the performance characteristics is observed.

A statistical approach is used here to evaluate the changes in variation before and after the experiment.

Salient Features of DOE:

1) DOE is the method used for testing and optimizing the performance of a process, product, service or solution.

2) DOE is an important engineering approach based on developing a robust design, i.e. the design of a product which can perform over a wide range of conditions.

3) DOE techniques include tests of statistical significance, correlation and regression about the behavior of a product or a process under varying conditions. Statistical process control is a very powerful tool for optimization of a process, system or design.

4) Experimental design is a systematic manipulation of a set of variables, in which the effects are determined, conclusions drawn and results implemented.

5) Hence experimental design is experimentation that makes the desired changes in the input variables of a process and measures the output response.

6) The goals of a designed experiment are to determine the variables and their magnitudes, set the levels of these variables, and evolve a plan to manipulate these variables for the desired results.

7) It is necessary to reduce the variations in the performance characteristics, check to what extent these variables have an impact on cost and quality, and examine how to inject and manipulate these variables for a better design.

8) DOE is a very powerful tool that helps you identify and quantify the effect of the Xs (inputs) on the Y (output), that is, to determine which inputs are significant in affecting the output of a process.


9) DOE is a means of identifying the most influential factors more efficiently through experiments with many factors simultaneously. As the number of factors rises, the cost of testing every combination increases exponentially.

10) A typical designed experiment has three factors, each set at two levels (maximum and minimum values). Such an experiment requires eight runs, with results measured for each run. From these values, an empirical model is evolved to predict the process behavior.

11) A good experiment must be efficient: a well-planned set of experiments is one in which all the parameters of interest are varied over a specified range and systematic data are collected until the desired results are achieved.

12) Every experimenter develops a nominal process/product that has the desired functionality for the end user and optimizes the process/product by varying the control factors, such that the results are reliable and repeatable (i.e. show less variation).
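The three-factor, two-level experiment described in point 10 can be sketched in a few lines of Python. The factor names and the response model below are invented purely for illustration:

```python
from itertools import product

# Sketch of a 2**3 full factorial design: three factors, each at a
# low (-1) and high (+1) level, giving eight runs.
# Factor names and the response model are hypothetical.
factors = ["temperature", "pressure", "time"]

design = list(product([-1, +1], repeat=len(factors)))  # the 8 runs

def run_experiment(levels):
    """Hypothetical process: yield driven mostly by temperature."""
    t, p, tm = levels
    return 70 + 5 * t + 2 * p + 0.5 * tm

results = [run_experiment(levels) for levels in design]

# Main effect of a factor = mean response at its high level
# minus mean response at its low level.
for i, name in enumerate(factors):
    high = [y for lv, y in zip(design, results) if lv[i] == +1]
    low = [y for lv, y in zip(design, results) if lv[i] == -1]
    print(name, (sum(high) - sum(low)) / 4)
# → temperature 10.0, pressure 4.0, time 1.0
```

With this invented model, the computed main effects single out temperature as the most influential factor, which is exactly the kind of conclusion a designed experiment is meant to support.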

2. Describe briefly the PDCA Cycle and its importance.

From problem-faced to problem-solved

The PDCA Cycle is a checklist of the four stages which you must go through to get from `problem-faced' to `problem solved'. The four stages are Plan-Do-Check-Act, and they are carried out in the cycle illustrated below.

The concept of the PDCA Cycle was originally developed by Walter Shewhart, the pioneering statistician who developed statistical process control at Bell Laboratories in the US during the 1930s. It is often referred to as `the Shewhart Cycle'. It was taken up and promoted very effectively from the 1950s on by the famous quality management authority W. Edwards Deming, and is consequently known by many as `the Deming Wheel'.


Use the PDCA Cycle to coordinate your continuous improvement efforts. It both emphasises and demonstrates that improvement programs must start with careful planning, must result in effective action, and must move on again to careful planning in a continuous cycle.

Also use the PDCA Cycle diagram in team meetings to take stock of what stage improvement initiatives are at, and to choose the appropriate tools to see each stage through to successful completion. How to use the PDCA Cycle diagram to choose the appropriate tool is explained in detail in the `How to use it' section below.

Plan-Do-Check-Act

Here is what you do for each stage of the Cycle:

Plan to improve your operations first by finding out what things are going wrong (that is identify the problems faced), and come up with ideas for solving these problems.

Do changes designed to solve the problems on a small or experimental scale first. This minimises disruption to routine activity while testing whether the changes will work or not.

Check whether the small scale or experimental changes are achieving the desired result or not. Also, continuously Check nominated key activities (regardless of any experimentation going on) to ensure that you know what the quality of the output is at all times to identify any new problems when they crop up.

Act to implement changes on a larger scale if the experiment is successful. This means making the changes a routine part of your activity. Also Act to involve other persons (other departments, suppliers, or customers) affected by the changes and whose cooperation you need to implement them on a larger scale, or those who may simply benefit from what you have learned (you may, of course, already have involved these people in the Do or trial stage).

You have now completed the cycle to arrive at `problem solved'. Go back to the Plan stage to identify the next `problem faced'.

If the experiment was not successful, skip the Act stage and go back to the Plan stage to come up with some new ideas for solving the problem, and go through the cycle again.

Plan-Do-Check-Act describes the overall stages of improvement activity, but how is each stage carried out? This is where other specific quality management, or continuous improvement, tools and techniques come into play. The diagram below lists the tools and techniques which can be used to complete each stage of the PDCA Cycle.


This classification of tools into sections of the PDCA Cycle is not meant to be strictly applied, but it is a useful prompt to help you choose what to do at each critical stage of your improvement efforts.

Importance

Use PDCA to answer the questions: "What do you have?" (current situation); "What do you want?" (target); "What has to be done to close the gap?" (perhaps the business must prosper, etc.). Articulate the plan to do it. Are we executing to the plan? (Check) Are we getting the results? (Check) If not, why not, and what are we doing about it? (Act)
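As a rough illustration, the cycle can be sketched as a loop; the defect rates and improvement gains below are invented numbers, not real data:

```python
# A toy sketch of the PDCA loop with invented figures.
def pdca(defect_rate, target, planned_changes):
    """Cycle Plan-Do-Check-Act until the target defect rate is met."""
    cycles = 0
    for gain in planned_changes:        # Plan: propose a change
        trial = defect_rate - gain      # Do: small-scale trial
        cycles += 1
        if trial < defect_rate:         # Check: did the trial help?
            defect_rate = trial         # Act: adopt on a larger scale
        if defect_rate <= target:
            break                       # problem solved
    return defect_rate, cycles

rate, cycles = pdca(0.12, 0.05, [0.03, 0.00, 0.05])
print(round(rate, 2), cycles)  # → 0.04 3
```

Note how a trial that brings no improvement (the second planned change) is simply discarded at the Check stage and the loop returns to Plan, mirroring the "skip the Act stage" rule above.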


4. What is meant by Quality Circle? What are the significant characteristics and features of Quality circle?

Quality Circle

A group of employees who perform similar duties and meet at periodic intervals, often with management, to discuss work-related issues and to offer suggestions and ideas for improvements, as in production methods or quality control.

Significant characteristics and features of a Quality Circle:

1. Volunteers

2. Set rules and priorities

3. Decisions made by consensus

4. Use of organized approaches to problem solving

4. Describe the concept of “poka-yoke”. What is it?

The term poka-yoke (poh-kah yoh-keh) was coined in Japan during the 1960s by Shigeo Shingo, an industrial engineer at Toyota. Shigeo Shingo is also credited with creating and formalizing Zero Quality Control (poka-yoke techniques to correct possible defects + source inspection to prevent defects = zero quality control).

The initial term was baka-yoke, which means ‘fool-proofing’. In 1963, a worker at Arakawa Body Company refused to use baka-yoke mechanisms in her work area, because of the term’s dishonorable and offensive connotation. Hence, the term was changed to poka-yoke, which means ‘mistake-proofing’ or more literally avoiding (yokeru) inadvertent errors (poka). Ideally, poka-yokes ensure that proper conditions exist before actually executing a process step, preventing defects from occurring in the first place. Where this is not possible, poka-yokes perform a detective function, eliminating defects in the process as early as possible.

Why is it important?

Poka-yoke helps people and processes work right the first time. Poka-yoke refers to techniques that make it impossible to make mistakes. These techniques can drive defects out of products and processes and substantially improve quality and reliability. It can be thought of as an extension of FMEA. It can also be used to fine-tune improvements and process designs from Six Sigma Define-Measure-Analyze-Improve-Control (DMAIC) projects. The use of simple poka-yoke ideas and methods in product and process design can eliminate both human and mechanical errors. Poka-yoke does not need to be costly. For instance, Toyota has an average of 12 mistake-proofing devices at each workstation and a goal of implementing each mistake-proofing device for under $150.


When to use it?

Poka-yoke can be used wherever something can go wrong or an error can be made. It is a technique, a tool, that can be applied to any type of process, be it in manufacturing or the service industry. Errors are of many types:

1. Processing error: Process operation missed or not performed per the standard operating procedure.

2. Setup error: Using the wrong tooling or setting machine adjustments incorrectly.

3. Missing part: Not all parts included in the assembly, welding, or other processes.

4. Improper part/item: Wrong part used in the process.

5. Operations error: Carrying out an operation incorrectly; having the incorrect version of the specification.

6. Measurement error: Errors in machine adjustment, test measurement or dimensions of a part coming in from a supplier.

How to use it? Step-by-step process in applying poka-yoke:

1. Identify the operation or process, based on a Pareto analysis.

2. Analyze the 5 whys and understand the ways a process can fail.

3. Decide the right poka-yoke approach, such as using a shut-out type (preventing an error being made) or an attention type (highlighting that an error has been made). Take a more comprehensive approach instead of merely thinking of poka-yokes as limit switches or automatic shutoffs; a poka-yoke can be electrical, mechanical, procedural, visual, human or any other form that prevents incorrect execution of a process step.

4. Determine whether a contact method (use of shape, size or other physical attributes for detection), a constant-number method (an error is triggered if a certain number of actions are not made) or a sequence method (use of a checklist to ensure completion of all process steps) is appropriate.

5. Trial the method and see if it works.

6. Train the operator, review performance and measure success.
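A hypothetical software sketch of the constant-number and sequence methods from step 4; the part count and checklist step names are invented for illustration:

```python
# Hypothetical poka-yoke combining a constant-number check (the right
# number of parts must be loaded) with a sequence check (checklist
# steps must be completed in order). Part count and step names are
# invented examples.
REQUIRED_PARTS = 4
CHECKLIST = ["clean", "align", "weld", "inspect"]

def poka_yoke(parts_loaded, steps_done):
    """Return True only if the next process step may proceed."""
    if parts_loaded != REQUIRED_PARTS:             # constant-number check
        return False
    if steps_done != CHECKLIST[:len(steps_done)]:  # sequence check
        return False
    return len(steps_done) == len(CHECKLIST)       # all steps complete

print(poka_yoke(4, ["clean", "align", "weld", "inspect"]))  # → True
print(poka_yoke(3, ["clean", "align", "weld", "inspect"]))  # → False (missing part)
print(poka_yoke(4, ["clean", "weld"]))                      # → False (out of sequence)
```

This is an attention-type detection: the check blocks the step and flags the error rather than physically preventing it, which is the fallback when a shut-out design is not possible.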

5. Describe the eight pillars of Total Productive Maintenance.

TPM is a set of techniques originally pioneered at Nippondenso, a Toyota Group company in Japan. The aim of TPM is to "optimize the effectiveness of manufacturing equipment and tooling".

TPM is based on eight pillars. They are:

1. Focused Equipment & Process Improvement: Measurement of equipment- or process-related losses & specific improvement activities to reduce the losses.

2. Autonomous Maintenance: Operator involvement in regular cleaning, inspection, lubrication & learning about equipment to maintain basic conditions & spot early signs of trouble.



3. Planned Maintenance: A combination of preventive, predictive & proactive maintenance to avoid losses & planned responses to fix breakdowns quickly.

4. Quality Maintenance: Activities to manage product quality by maintaining optimal operating conditions.

5. Early Equipment Management: Methods to shorten the lead time for getting new equipment online & making defect-free products.

6. Safety: Safety training; integration of safety checks, visual controls & mistake-proofing devices in daily work.

7. Equipment investment & maintenance prevention design: Purchase & design decisions informed by costs of operation & maintenance during the machine's entire life cycle.

8. Training & skill building: A planned program for developing employee skills & knowledge to support TPM implementation.

Each and every pillar of TPM is suitably considered to provide utmost stability and strength to the objective and principles of TPM. Each pillar will prove to be a milestone if meticulously understood and applied during the entire life span of the equipment.

TPM can be adopted at any juncture of equipment life and puts no limitations on usage; instead it supports increasing the life span and improving output quality. The eight pillars may be termed vital organs that keep the system (TPM) operating at its best, ensuring that every machine in a production process is always available to perform the desired task with optimum qualitative efficiency.

TPM focuses on equipment and process improvement through qualitative, autonomous and planned maintenance, by imparting the necessary and appropriate training and skills to the operator and the entire team who have ownership of the machine. Early equipment management is vital and important, along with involvement in equipment investment and maintenance prevention design.

TPM is a wholesome system that guarantees maximum equipment effectiveness and availability. If all eight pillars (principles) of the system are practiced in totality, the following Six Big Losses will be avoided:

1. Breakdowns / failures
2. Setups / changeovers
3. Minor stoppages
4. Reduced speed
5. Defects / rework
6. Startup / yield loss
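The six big losses are conventionally rolled up into the Overall Equipment Effectiveness (OEE) metric: breakdowns and setups reduce availability, minor stoppages and reduced speed reduce performance, and defects/rework plus startup losses reduce quality. A small Python sketch with invented shift figures:

```python
# OEE = Availability x Performance x Quality. The shift figures
# below (times in minutes, unit counts) are invented for illustration.
def oee(planned_time, downtime, ideal_cycle, units, good_units):
    availability = (planned_time - downtime) / planned_time
    performance = (ideal_cycle * units) / (planned_time - downtime)
    quality = good_units / units
    return availability * performance * quality

# 480-minute shift, 60 minutes lost to breakdowns/setups,
# 1-minute ideal cycle time, 350 units made, 330 of them good.
score = oee(planned_time=480, downtime=60, ideal_cycle=1.0,
            units=350, good_units=330)
print(f"OEE = {score:.3f}")
```

With these numbers the three factors multiply out to about 0.69, and each factor pinpoints which group of the six losses is eating into effectiveness.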


6. What is meant by Six Sigma? What are the steps in implementing Six Sigma?

Why Six Sigma?


For Motorola, the originator of Six Sigma, the answer to the question "Why Six Sigma?" was simple: survival. Motorola came to Six Sigma because it was being consistently beaten in the competitive marketplace by foreign firms that were able to produce higher quality products at a lower cost. When a Japanese firm took over a Motorola factory that manufactured Quasar television sets in the United States in the 1970s, it promptly set about making drastic changes in the way the factory operated. Under Japanese management, the factory was soon producing TV sets with 1/20th the number of defects they had produced under Motorola management. They did this using the same workforce, technology, and designs, making it clear that the problem was Motorola's management. Eventually, even Motorola's own executives had to admit, "our quality stinks."[i]

Finally, in the mid-1980s, Motorola decided to take quality seriously. Motorola's CEO at the time, Bob Galvin, started the company on the quality path known as Six Sigma and became a business icon largely as a result of what he accomplished in quality at Motorola. Today, Motorola is known worldwide as a quality leader and a profit leader. After Motorola won the Malcolm Baldrige National Quality Award in 1988, the secret of their success became public knowledge and the Six Sigma revolution was on. Today it's hotter than ever.

It would be a mistake to think that Six Sigma is about quality in the traditional sense. Quality, defined traditionally as conformance to internal requirements, has little to do with Six Sigma. Six Sigma is about helping the organization make more money. To link this objective of Six Sigma with quality requires a new definition of quality. For Six Sigma purposes I define quality as the value added by a productive endeavor. Quality comes in two flavors: potential quality and actual quality. Potential quality is the known maximum possible value added per unit of input. Actual quality is the current value added per unit of input. The difference between potential and actual quality is waste. Six Sigma focuses on improving quality (i.e., reducing waste) by helping organizations produce products and services better, faster and cheaper. In more traditional terms, Six Sigma focuses on defect prevention, cycle time reduction, and cost savings. Unlike mindless cost-cutting programs which reduce value and quality, Six Sigma identifies and eliminates costs which provide no value to customers: waste costs.

For non-Six Sigma companies, these costs are often extremely high. Companies operating at three or four sigma typically spend between 25 and 40 percent of their revenues fixing problems. This is known as the cost of quality, or more accurately the cost of poor quality. Companies operating at Six Sigma typically spend less than 5 percent of their revenues fixing problems (Figure 1). The dollar cost of this gap can be huge. General Electric estimates that the gap between three or four sigma and Six Sigma was costing them between $8 billion and $12 billion per year.

Figure 1: Cost of Poor Quality versus Sigma Level


What is Six Sigma?

Six Sigma is a rigorous, focused and highly effective implementation of proven quality principles and techniques. Incorporating elements from the work of many quality pioneers, Six Sigma aims for virtually error-free business performance. Sigma (σ) is a letter in the Greek alphabet used by statisticians to measure the variability in any process. A company's performance is measured by the sigma level of its business processes. Traditionally, companies accepted three- or four-sigma performance levels as the norm, despite the fact that these processes created between 6,200 and 67,000 problems per million opportunities! The Six Sigma standard of 3.4 problems per million opportunities[1] is a response to the increasing expectations of customers and the increased complexity of modern products and processes.
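The defect figures quoted above can be reproduced from the standard normal distribution together with the conventional 1.5-sigma long-term shift that underlies the published Six Sigma numbers; a short Python check:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level,
    assuming the conventional 1.5-sigma long-term process shift."""
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

for level in (3, 4, 6):
    print(f"{level} sigma ~ {dpmo(level):,.1f} DPMO")
# → roughly 66,807 DPMO at 3 sigma, 6,210 at 4 sigma, 3.4 at 6 sigma
```

The computed values line up with the ranges in the text: three- and four-sigma processes produce on the order of 6,200 to 67,000 problems per million opportunities, while a six-sigma process produces about 3.4.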

If you're looking for new techniques, don't bother. Six Sigma's magic isn't in statistical or high-tech razzle-dazzle. Six Sigma relies on tried and true methods that have been around for decades. In fact, Six Sigma discards a great deal of the complexity that characterized Total Quality Management (TQM). By one expert's count, there were over 400 TQM tools and techniques. Six Sigma takes a handful of proven methods and trains a small cadre of in-house technical leaders, known as Six Sigma Black Belts, to a high level of proficiency in the application of these techniques. To be sure, some of the methods used by Black Belts are highly advanced, including the use of up-to-date computer technology. But the tools are applied within a simple performance improvement model known as DMAIC, or Define-Measure-Analyze-Improve-Control[2]. DMAIC can be described as follows:

D Define the goals of the improvement activity. At the top level the goals will be the strategic objectives of the organization, such as a higher ROI or market share. At the operations level, a goal might be to increase the throughput of a production department. At the project level goals might be to reduce the defect level and increase throughput. Apply data mining methods to identify potential improvement opportunities.

M Measure the existing system. Establish valid and reliable metrics to help monitor progress towards the goal(s) defined at the previous step. Begin by determining the current baseline. Use exploratory and descriptive data analysis to help you understand the data.

A Analyze the system to identify ways to eliminate the gap between the current performance of the system or process and the desired goal. Apply statistical tools to guide the analysis.

I Improve the system. Be creative in finding new ways to do things better, cheaper, or faster. Use project management and other planning and management tools to implement the new approach. Use statistical methods to validate the improvement.

C Control the new system. Institutionalize the improved system by modifying compensation and incentive systems, policies, procedures, MRP, budgets, operating instructions and other management systems. You may wish to utilize systems such as ISO 9000 to assure that documentation is correct.

