
Lean Six Sigma Quality Transformation Toolkit (LSSQTT)* LSSQTT Tool #26 Courseware Content

“Failure Mode And Effects Analysis (FMEA), Quality Function Deployment (QFD), Base For Reliable Quality Communication”

1. Failure mode and effects analysis (FMEA)
2. The design process
3. Design, product and process FMEA
4. FMEA steps and procedures
5. Product liability, FMEA, reliability, finite element analysis (FEA)
6. General reliability issues, Kaizen and FMEA
7. Quality function deployment (QFD) for lean six sigma introduced

*Updated fall, 2007 by John W. Sinn.

Failure Mode And Effects Analysis (FMEA)

The purpose of the current tool is to introduce and explain FMEA and QFD, and to develop them within the context of other analysis and documentation tools. FMEA and QFD can both draw heavily upon several other tools and applications, specifically all basic SPC information as solid data, cause and effect, OPCP, SOP's and so on. FMEA and QFD are powerful analytical tools for systematically focusing our efforts for ongoing improvement.

FMEA is a formalized technique and process whereby cross functional teams of technical persons can assess product and process systems to assure that failures in components or elements have been addressed and, hopefully, prevented. This involves identification, analysis and prioritization for ongoing improvement, consistent with all Kaizen and documentation approaches for lean environments. FMEA is used to analyze failures after they have occurred or to prevent their occurrence. Frequently called potential failure mode and effects analysis, the opportunity to identify a problem before it becomes a reality is advantageous to all concerned. The more FMEA tools can be used prior to a failure to enhance the design function, the better. FMEA tends to be used either as a design tool, a process analysis tool, or a product improvement tool. The earlier FMEA systems are used, the better, raising applications opportunities for new product introduction and launch.

FMEA seeks out the root cause of a problem, or potential problem, and rates or prioritizes the likelihood of its occurrence. Since there are typically numerous roots involved in any problem, it is
important to identify them all, and then to rate them in terms of severity and/or likelihood of occurrence. Most FMEA systems actually result in a systematic identification of likely causes of failure, and a numeric weighting related to their occurrence, called the RPN or risk priority number. FMEA tools help flush out likely effects of the cause, again providing numeric rankings for effects. This is different from cause and effect analysis in that cause and effect is more interested in identifying the root causes for action and follow through, while FMEA documents and delineates actions categorically as a follow through mechanism. Follow through is also concerned with recommended actions and effects over time.

FMEA information is generally fed back to engineering and quality groups or others for enhanced design work or for other changes/improvements. It should be recognized that the design FMEA is likely aimed at engineering design changes. The process FMEA results in identification of root causes, based on noted or suspected effects in processing. Based on these causes and effects being identified, changes and improvements can and should be pursued through an applications engineering group or team in the traditional organization, or through quality engineering groups or teams of various persons in the less traditional organization. There is an obvious relationship between failures or problems in processing and possible problems in the end product.

It is particularly important that the FMEA process and application view be presented within the context of the broader systems approach to quality functions. Problem solving at the shop floor or job site level today (and more importantly in the future) must be within the context of technological impacts
and implications throughout the organization--all aimed at enhanced competitiveness based on ongoing improvements. This requires enhanced documentation based on data and people, as outlined in other tools and sections of the toolkit, all targeted at lean kaizen. A myopic view of the product and its operations will simply not be satisfactory for competitive problem solving. The FMEA process and application must also include a hands-on orientation, with knowledge and experience, as well as abilities in materials, processing and mechanical aspects of technological functions. If problems must be solved relative to design and process, these systems and areas within production must be approached from the hands-on operators' vantage point. Typical FMEA broader issues and relationships could include:

1. Root cause, effect. Analyzing quality problems or concerns related to processing or design issues or circumstances. This is an obvious relationship to cause and effect roots, and wanting to "know more".

2. Value analysis. Conducting value and/or cost analysis on a new process being considered for production. While it is not the intent of FMEA to conduct value or cost analysis, it is a logical by-product for good design and planning.

3. Innovation. Analyzing various materials or processes for a redesigned product. The FMEA process, if used systematically, can assist us in "looking inside" our product and process for new and innovative ways.

4. Layout improvement. Determining plant layout and materials handling in new or existing processing facilities--looking for implications in the final product--and at all stages of production.

5. Up front planning. New product or process development--implications for quality. The FMEA makes an excellent analytical tool to determine "up front" the impact of change.

6. Understanding the customer. Market analysis as related to technical aspects of product or process development--customer input--both internal and external.

7. Teaching and learning. Training or evaluation related to redesigned products or processes can and should occur. As we improve, the RPN numbers should be reduced over time. As we share information and knowledge in the process of conducting FMEA, we all have an opportunity and challenge to teach and learn.

8. Documenting process. Simply to document the process, creating an important record and paper trail of our overall effort. This must include both the macro process as well as sub-processes within the broader process.

9. Ongoing improvement, measurement. To identify ways to improve the product or process, prior to or after a problem has been realized. FMEA serves as a point of reference, and context, to gage our improvement, with everyone being on the same sheet of music.

10. Prevention. The interest in FMEA is prevention wherever possible. When a failure or other problem occurs, we will want to prevent future occurrences.

11. Reaction and enhancement. A systematic, immediate reaction and enhancement method for upgrading products and processes, based on potential concerns which surface from knowledgeable internal and external sources. If the FMEA is being pursued in a disciplined and well managed manner, we can anticipate problems rather than merely "put out fires".

12. Team process. The FMEA provides an excellent team "learning environment", mentoring new or existing persons for growth in the broader organization. In fact, it should be recognized and underscored that the FMEA should be conducted from a team approach, drawing upon various elements of expertise in suppliers and customers, internal and external.

13. Robust improvement. The FMEA is probably second only to the OPCP in its robustness as a planning and improvement tool--just by following the steps in the process it will likely lead to enhanced product performance. In fact, this will likely require most organizations to place the FMEA in the OPCP as a standard operating procedure, not simply being done by chance or when customers are yelling loudly.

14. Regular review systems. Ideally, systematically, and on a semi-regular basis, we should exercise the discipline required to review each design, product or process for improvements--the FMEA may prove useful for this. FMEA review process can "trigger" ongoing questioning for improvement.

15. Broad communication. The FMEA, like other documentation tools and systems, provides an excellent communication tool both internally and externally, to keep all parties and individuals concerned apprised of the situation, upstream and downstream, internal and external. And once again, this assumes that a cross functional team is doing the FMEA--at minimum representing quality, engineering and production (operators and supervisors). Maintenance personnel will be involved for obvious relationships to reliability.

Each of the above areas represents applications, circumstances and relationships which can bring about improvements in quality and productivity, leading to overall enhanced competitiveness. Operators, supervisors and others will be involved in teams in the FMEA process, since they will often be collecting much of the original data which will lead to further understanding and possibly the solution or completion for the FMEA. Knowledge about process or product is resident with operators--it is incumbent upon the teams to listen carefully to what the operators have to say. It is also important that we structure our teams carefully at the outset of a technical problem to fully and appropriately use our internal and external expertise, as well as others', as a team function.

The Design Process

Part of doing improvements in lean competitiveness, as well as for basic quality decisions, is the design process. Design processes typically involve at least four major phases. These phases are (1) identifying the problem, (2) preliminary design, (3) basic drawings, and (4) specification development. As might be guessed, problem identification occurs in the early stages of the design process, or market research. As the problem is further focused, renderings completed, prototypes developed, and other relationships explored, various plans are moving forward. It is the design process, as related to FMEA data and documentation, which we are primarily concerned about in this section and tool.

Following the preliminary design selection, working drawings are drawn up, showing all component details and relationships. Engineers and designers work closely with the production group to ensure that the product can actually be produced. The better the communication in this phase, the less likely that costly errors in judgment will be made. Also, the better the communication, the easier it will be to incorporate the new product design into production when the startup actually occurs, as a synchronous activity. A serious cost analysis will be conducted to provide further proof that the product is
feasible. It is possible at this phase, as with any phase, that the product could be shelved due to prohibitive cost estimates. It is also important to realize that lead time is increasingly needing to be reduced through concurrent engineering systems. Specifications are developed at some point in a final phase of the design process, leading to a fully developed product. Working with one or more prototypes which were developed during the second phase, various technical groups will usually begin gathering final production data prior to actual production. The prototypes will be disassembled, time studies completed, flow charts provided, machinery acquired, personnel hired and trained, and other final preparations made. A pilot run is usually conducted at this point to gain actual data prior to full production. Also during this final design/development phase various testing will be completed on prototypes and pilot products. The product tests may be field tests, destructive tests, or other appropriate methods for gaining final data inputs. It is quite likely at this point that changes in materials, processes, dimensions, or other relationships may occur since final specifications are often somewhat different than the original product ideas may have dictated. Creativity. A critical part of design and product development is creativity. Creativity is addressed here in terms of how it can be improved for purposes of providing better products, and certainly higher levels of productivity. In terms of addressing productivity through creativity it is important to recognize that there are many different ways to solve the same problem. It is important to try to solve problems in new ways rather than only using traditional approaches. People should learn to evaluate their ideas in objective terms without letting their ego interfere. Creative people often have problems seeking opinions about their ideas. Worse yet, once criticism is offered by others, creative people may become offensive and defensive rather than trying to take advice in a constructive fashion. Relating to the above points, teamwork is often an essential part of industrial design and creativity for product development. This is true for a variety of reasons, but fundamentally most technical problems are simply too complicated for a single individual to tackle. Teamwork for creativity is an art and a science requiring give and take in order to successfully and creatively be productive. Also, the creative individual must be able to present ideas and concepts, both verbally and graphically, in ways that will effectively communicate to all concerned. Far too many people are good CAD operators but are
limited designers because they simply can not exchange ideas and information with others in an effective manner. In order to be creative, people must know the field. All technical options, components, processes, materials and so on must be thoroughly understood so the designer can adequately create from among all possibilities. People must take time with colleagues and even by oneself to simply try to generate ideas/solutions to technical problems. It may be important to keep a notebook handy at all times to list/design possible solutions for varieties of problems. People should not trust themselves to remember, since the truly creative mind will be moving along into other areas and may likely forget an idea that is not documented. Technologists must keep a sketchpad handy. Some of the best ideas to become products started out on restaurant napkins, notepads from meetings, the back of a bulletin in church, and so on, where doodling often occurs. Sketching is probably one of the single best tools the creative mind has access to, and it should be capitalized on. People must learn to accomplish three-dimensional sketching efficiently and without effort, simply by practice and doing. It is generally only through concentrated effort over time that most ideas are actually turned into products. Much sweat and toil are necessary to bring ideas through creative development to fruition as products. Part of this relates to confidence. People must not be afraid to develop their ideas, nor should they be intimidated by others who may not want them to be successful for petty or unprofessional reasons. Fear of failure often not only keeps us from pursuing our ideas, but also slows down the creative mind, keeping an individual from moving ahead. People must think positively about their ideas and themselves, particularly to "sell" their ideas. We will only accomplish what we believe we can do. Although true creativity cannot be time-clocked in an eight to five fashion, many creative minds function better under the gun, with pressure to produce. This may be particularly true if the pressure is self imposed. If it is outside pressure (particularly from within the organization) creativity may even be hampered. But an appropriate balance is often necessary in the real world. While this does not mean we should be unrealistic about goals and timelines, most of us can afford to push ourselves, perhaps improving both productivity and creativity. The creative person will probably need to organize all resources in a logical manner and place. This, along with methodical and systematic pursuit of solutions to problems, is clearly part of the key to
creativity. It is recognized that creativity cannot be mechanically produced in a machine-like fashion. A reasonable amount of discipline surely must be a part of most creative acts. Many people create better at certain times and places. While there is no general rule to fit all individuals, creative people will generally know when and where their best work is accomplished. This could be in the shower, at church, while jogging, during meals, at sporting events, while shopping with the spouse, in the shop, on the plant floor, and at any time in the day or night. The "light bulb" of an idea may come on at any time in any environment. Creative people learn when and where they are most creative and then they capitalize on this.

It may be necessary to get some perspective. Sometimes we all hit low periods when we are simply less creative (or productive) than at other times. This may come and go briefly, or it may linger for days on end. The creative individual should recognize this for what it is and attempt to deal with it in a reasonable manner. This may mean taking some time away from the project to get some fresh air and start anew. Often a change in environment, people, or both, will help. Putting the project in perspective can help the creative mind be even more effective overall. Truly creative people are not afraid to admit that they may not know all the possibilities. The reality is that we all can access more information about any given technical subject or problem. Colleagues should be consulted, friends talked to, the literature reviewed, catalogues studied, and perhaps other sources surveyed. All sources of information should be exhausted prior to expecting a truly creative and definitive solution. Perhaps one of the best sources which must be studied is the competition. It may be necessary to obtain the competition's product, and physically tear it apart to determine the strengths and weaknesses. These then must be avoided or built into the creative version which is currently being addressed.

Skill exercised in creating products capable of inexpensive and efficient production often means the difference between success and failure. Many otherwise excellent products fail due to production problems which should have been solved in the design process. Product design considerations are: noise, strength, shape/size, reliability, corrosion, thermal considerations, dimensional stability, flexibility, versatility, styling/aesthetics, wear, process capability, lubrication, control elements, surface finish, friction, service life, cost/function, maintenance, safety, volume/lot size, quality, utility, value and weight. The extent to which the above factors are addressed will certainly help determine the success or failure of a product in the marketplace. These are not limited to mechanical properties only, but include a broad spectrum of areas to attend to for maximum technological competitiveness.

General Design Rules. In addition to the above factors, several general design rules should be observed when developing products. The design rules aim to help insure maximum simplicity in the overall design. This means developing the product for utmost simplicity in physical and functional terms, analyzing the materials to be used, selecting materials for suitability to the product as well as availability and cost. It is also necessary to use quality standards which are as liberal as possible without diminishing product quality. Wherever possible, use pre-manufactured components and standardized parts. Regarding inventories, components should be cross-referenced, using the same parts, materials and so on in as many products as possible. Focused more on production and design, as attempts are made to determine the best process, it is well to attempt to use existing processes. It is also important to note that special processes can often be built or obtained, although often at considerable cost. It should always be a goal to use existing machinery and processes in an attempt to keep capital expansion at a minimum. This should not, however, be done at the expense of efficiency. If a new piece of automated equipment will get production up to speed, it should be designed in, obtained and put on line. Production steps should be
eliminated at every possible occasion. Eliminate handling steps at every opportunity to help ease locating, set-up, orienting, moving, holding, transferring and so on. Plan for the largest volume of production possible since, generally, larger lot sizes lower the overall cost due to lower inventory, transport, training, among others. Concurrent Design. Increasingly, there is a desire to move the product from design phases to production more efficiently. Over the past several years there has been a trend in technological organizations to perform design and production activities in a more parallel manner. For purposes of technological organizations in general, it will be referred to as concurrent design and production, and it is clearly related to all technologies. Regardless of what it is called the fundamental concept remains the same. The concept is to reduce lead time between project start-up in design phases and on the tail end, when production is occurring. By sharing design information with production personnel in the early phases of design, consideration can be given to production equipment available and needed, fixturing required, new personnel requirements, vendors to be pursued, customer requirements in quality and so on. In all cases, however, formerly the design would have been complete prior to releasing information to production. But now, due primarily to competitive forces world-wide, organizations must be quicker to respond to the market demand, else they miss part of the available opportunity to capture that market and develop consumers around their product base. This also relates to downsizing and fundamental changes in organizations, due not only to competition but also to changes in the technology, primarily the computer. Now, due to computer aided design, designers can move their information virtually with a key stroke to others in various production functions such as computer numerically controlled toolpath programming, inventory and quality areas, among others. As well, engineering design changes can be more carefully performed and controlled, with less impact and better quality overall. As organizations continue downsizing and reorganizing to meet technological demands of the future, only a small staff of persons will be required to perform several different functions related to concurrent design and production responsibilities, obviously contingent upon size of the organization, nature of the product and competition, and so on. This is all consistent with the overall theme of this text, that the organization should be driven by the technological functions to meet the demands of the future.

Page 6: LSSQTT Tool 26 - Semantic Scholar...5. Product liability, FMEA, reliability, finite element analysis (FEA) 6. General reliability issues, Kaizen and FMEA 7. Quality function deployment

Material considerations. The basic question about materials' consideration is, "what material will meet the specifications and criterion set forth for a given product's performance or technological application"? Five categories of information are presented to help address this question, leading to proper material selections. These are design concerns, service requirements, fabrication or processing requirements, general economic considerations, and properties and characteristics of materials. Typical issues requiring decisions confronted in the design stage have to do with the conditions under which the material (in the product) must serve. But the bottom line also relates to being able to produce a product or service competitively and remain profitable. But materials' selection relates, during the design stage in other pivotal ways. Perhaps certain materials can not be used for availability, environmental, safety, or other similar reasons. Obviously, one of the key questions, to be followed-up on below, relates to processing capability and expertise. When studying the material/product selection requirements relationship, several technical points must be considered. Among those are dimensional stability, corrosion resistance, hardness, fatigue factors, thermal and electrical considerations. Fabrication or processing requirements. This material consideration pertains to processing techniques and was explored in a different tool. Typical technical fabrication and processing issues have to do with material machinability, formability, hardenability and thermal capabilities, ductility/toughness, chemical, joinability, and automation. This is a rather straight forward question of aligning the selection of materials with the technological processing capability. General economic considerations. Based on relationships established in the previous tools, general materials economic conditions can clearly affect the success or failure of a product. The basic question pertaining to materials and economics has to do with the ability of a product to successfully sell and perform at a required cost level. Sub-question related considerations include material availability and cost, general processing cost, finished product weight (shipping/transport costs), and volume production capability. Other economic considerations in materials' selection relate to waste due to type of material and overall design, environmental costs due to a given material application, process time costs, among others. Energy costs for transforming and preparing a material prior to final processing should also be considered.

Properties/characteristics of materials. Many properties and characteristics must be considered in the material selection for a product. Foremost are physical (gross and internal), mechanical, thermal, and electrical/chemical properties. A brief review of significant properties/characteristics is presented here, and more fully explored elsewhere. Readers are advised to refer to other tools on materials and processes. The current discussion depends, to a great extent, on understanding materials knowledge in general and particular to the product being considered.

Design, Product And Process FMEA

It is important to understand the role of the design process in the overall quality system, and the process of bringing product to fruition in production. This has been summarized in the previous section, generally to result in prints or drawings which capture the important specifications for production. While the prints may capture important specifications from design and engineering, still we must translate this into workable information and value adding potential for the organization. This becomes the role of the quality manager, engineer, technician or services person, all part of the broader quality professional team needed to do the work described. FMEA becomes one of the critical tools needed as a mechanism to translate and move data and documentation forward for communicating broadly with suppliers and customers, internal and external.

Design FMEA. Design FMEA is used primarily to enhance or modify engineering functions and designs by determining potential failure modes and effects in products or systems. The design FMEA would relate typically to the systems level, where components or devices in the product or process which must interact with other devices or components must be analyzed and improved. Once analyzed, and weaknesses or faults are identified, the weaknesses or faults are addressed and improved, based to a large extent on what is disclosed in the FMEA process. Design FMEA tends to be more focused on a broad area of a product, mechanism or system, relative to product or process FMEA which is more oriented to some specific element in the product or process
which is thought to be leading to failure. Design FMEA involves both product and process elements. Design FMEA would also be more typically aggressively pursued in the new product introduction programs, seeking to eliminate or reduce the likelihood of failures at the point of prototyping and test, or beyond. While this would traditionally be pursued by engineering personnel, it may frequently be done in collaboration with quality personnel, operators and others--those most knowledgeable with the design at the point of its origination in the new product introduction. While engineering must likely remain the prime mover on design FMEA's, others will equally as frequently need to be involved. Again this is true since both product and process will be involved ultimately as related to the design. Other likely opportunities for design FMEA activity, for ongoing improvement, are at the point of engineering design changes, or other similar areas, regardless of reason for change. When changes are performed in design and engineering, this is commonly difficult enough to communicate and provide to all needing the information--but it is also a good time to evaluate the design for robustness and integrity beyond simply "getting the job done". The disciplined approach--FMEA--provides a ripe opportunity for enhanced designs--and improvements. Product FMEA. Product FMEA is the typical terminology used where an existing, mature product, has failed in service--or is suspected of potential failure. Particularly where product in the field has been identified as having problems, it will be necessary to pursue aggressive measures to correct and improve the element, component, subsystem or system which has failed. This is not necessarily the result of field failures, but may also be a function of internal test and analysis programs done routinely before, during and after the product is in production. The product FMEA also considers the impact of failure of the product or system upon the user, and/or the entire system. This is typically discussed in terms of the critical or likely failures, with the overall system placing priorities on those most likely to occur, and the impact which will be felt. Failure would typically be a function of component or sub-system wearout or over-stress. Information determined through the FMEA process, via team activities, would be used to enhance the product design, and to prevent actual or future failures. Additionally, in the product FMEA, key characteristics of the product may be evaluated and/or upgraded or downgraded as a function of the FMEA steps. Dimensions and tolerances, or other
design deficiencies or malfunctions, may surface as needing to be changed. Changes in materials, or in the way we use the product may also be called for through the FMEA. Instructions for product use represent one additional area of concern which may contribute to the failure as well. This may also involve SOP's, training and other documentation for field use. Process FMEA. Process FMEA tends to be more oriented to processing systems relative to specific design functions. Process FMEA seeks to enhance specific dysfunction's in processing, leading to improved final quality. Most knowledgeable know that this opens the door for virtually unlimited opportunities to improve. The processing function affords many avenues for improvement--touching virtually all aspects of the product--from start to finish. Process FMEA may, and likely should, be done prior to starting production, as a function of effective planning. Process FMEA is useful in mature processes where malfunctions or failure are surfacing in end product or components, at various phases in production. It is also for this reason that the disciplined and systematic approach, the FMEA, is important. If regular review and enhancement opportunities are not pursued, the likelihood is that we simply will not be providing the more rigorous approach to our product--and the most competitive position. Process FMEA seeks to address the question, "might there be a better way, and can we reduce the likelihood of causing a failure in product, based on faulty processing?" Significant connections in broader "systemic" quality and productivity tools, particularly related to documentation outlined here, includes OPCP, SOP's, QFD and value analysis, among others. Relationships among FMEA types. Throughout product life cycle FMEA is useful for preventing, detecting or resolving failures. This can be done in the design FMEA early on in the product life, in product as it matures, and at the process where product is produced. There may be various FMEA's in motion at different stages of the product life cycle, depending on failures presumed or actual, and depending on other customer needs and requests. Various FMEA's may relate to one another, depending on focus and intent. The product life cycle, also related to quality function deployment, is shown below as related to FMEA. Product life cycle provides additional insights into relationships inherent in product development and deployment. Three stages are commonly identified as important parts of product life cycle related to
potential failures. These are early life, useful life and wear out. During early life, the failure rate is high because we do not have all the bugs out, people are being trained, new processes are being introduced, and so on. During useful life the failure rate goes down and levels off for a period of time because we have gotten the design and engineering side of the product, as well as production (process), under control. In the wear out phase, due to maintenance issues, length of service life, changes occurring around the product in other technologies, and other factors, we see an increase in failures. FMEA can be useful at any phase of the product life cycle, and the life cycle may be impacted by the use of FMEA at various points. Assuming various products are being introduced, matured, and gradually phased in and out, as a function of a broader organizational plan, FMEA, systematically used, will be pivotal for planning. FMEA should be used with other documentation tools--particularly the OPCP discussed elsewhere.

Another relational look at the FMEA approach and how the different types are functional is shown in the graphic on the next page. This graphic presents the design, product and process FMEA, and provides insights into how they function independently as well as together for the broader product life cycle and all products systematically. It should be noted that all systems involve distinct design, product and process opportunities for improvement, based on the FMEA process. Yet, at the same time, there is a real relationship between and among the three types of FMEA, all collectively aimed at improved quality.

FMEA Steps And Procedures

This is further pursued following design FMEA, process FMEA and product FMEA information and activity, presented next, and in example forms using actual product information over the next several pages. In all cases (design, process or product), note the similarities and relationships to and with QFD and cause and effect tools. Also, we are reminded that this information should all be considered as part of the information for the broader OPCP and statistical process control and data tools. Typical steps used to perform FMEA generally go something like the following, all related to a two part form provided in the applications section.

1. Process description/purpose. Identify key
steps of the process or function for analysis.

This would typically include referencing relevant SOP information, flow charting, layout diagrams, relational diagrams, and other detailed descriptive information to help communicate to all involved, how the process or product works, related to the failure (or potential failures) under study. In the case of a device or component it may be drawings or specification details related to characteristics, or others. FMEA's conducted on similar products could also prove helpful. If the element under study has more than one function, all are identified and described.

2. Potential failure mode. Identify how functional areas can fail. This can be based on past experience with failures in the field, complaints through existing or past customers, or through our own internal communication mechanisms. Test data may also be relevant at this point. Similar to root causes, failure mode should not be confused with effects. Identifying potential failure modes will require brainstorming and imagination, as well as flushing out pivotal information. Areas to consider include where poor maintenance was used, incorrect use procedures, failure of related parts or systems, and so on.

3. Potential effects of failure. What are the most significant likely effects on customer, both internal and external? If failure were to occur, how would we know this in actual likely effects noted? It should be observed that cause and effect tools may relate to this step (as well as others). This may also be noted as failure impacts or effects on components, sub-systems, or devices. Individuals knowledgeable of the product, process or design should be involved in providing views on what the potential effects may be. A relational diagram could also prove helpful again in this stage of the FMEA.

4. Severity. This is a numerical value from 1-10, based on observations regarding cause and effect in failure (1 is low and 10 is high). This is a relative ranking which must be defined within the culture of the organization and the context of the issue under study. For example, in most cases, a 10 ranking of severity for process would mean shutting down the line or calling back product. Or a 1 may mean that specifications or tolerances should be adjusted. But some definitional parameters should be assigned and documented for these, to enable all concerned
to better understand what the numbers mean. Typical definitions of ranking values could be:

(1) Minor. Customer will probably not notice.
(2--3) Low. Slight customer annoyance.
(4--6) Moderate. Causes some customer dissatisfaction.
(7--8) High. High degree of dissatisfaction, out of specification, and product is inoperable.
(9--10) Very High. Failure affects safety or involves non-compliance with standards or governmental regulations.

5. Potential cause of failure. Determine root cause--being certain to flush out all variables--but getting to the root. Cause and effect tools may help here, to communicate with others.

6. Occurrence of failure. This is a numeric value, identifying the probability of the failure occurring--again 1 to 10. If failure has occurred, and we are analyzing for future reductions in failure, this value should be assigned based on the likelihood of repeated failure. Typical values for defining occurrence may be:

(1) Remote. Failure is unlikely.
(2--3) Low. Relatively few failures.
(4--6) Moderate. Occasional failure may occur.
(7--8) High. Repeat failure has or will occur.
(9--10) Very High. Failure is inevitable.

7. Current detection/control. This is generally
some form of evaluative action relative to what our current systems are for detection, correction and overall control. What systems are in place to help eliminate or control the failure over time? This might include shifting from attribute to variable data and charting systems, evaluation of characteristics and/or specifications, enhancements to the data acquisition systems, or other detection and control activities being undertaken or considered, or in place.

8. Detection. This provides a numeric rating (1--10) for our judgment of the current detection and correction plan and system as listed in #7 above. If we have a lot of faith in our systems for detecting and controlling this failure, then a lower value is assigned, while if less faith is given to our systems we will assign a higher value. This is accomplished in this manner since it will tend to drive the RPN value up, as shown in later steps. Typical values assignable to help define detection systems could be:

(1--2) Very high. Detection program will almost certainly detect.
(3--4) High. Detection program has a high chance to detect.
(5--6) Moderate. Detection program may detect.
(7--8) Low. Detection program is not likely to detect.
(9--10) Very low. Detection program will not or cannot detect.

9. Risk priority number (RPN). This is a numeric value identified by multiplying the values from steps 4, 6 and 8 (severity, occurrence and detection) together, providing a combined value--the RPN. The RPN is a "quick" assessment value which customers and suppliers, internal and external, will use to help evaluate our effectiveness with the FMEA process. Each RPN value on the total FMEA will also serve to "automatically" rank order our areas of priority as well (a brief illustrative sketch of this calculation follows step 13 below).

10. Recommended actions. What is our plan of attack for how we will improve quality and reliability--and reduce the likelihood of failure? This is consistent with our "8 D" approach to corrective action.

11. Area/individual responsible. Consistent with the nature of the failure, and the recommended actions in #10 above, this should identify tasks, time frames, persons responsible, and so on. This would likely be related to a project plan which is, or may be, in place for a team or others as related to the FMEA or parts of related work.

12. Characteristics criticality rating. Depending on the nature of our product and process, and other variables relating to customers and suppliers, among others, it is often necessary to add a column/step and feature in the FMEA process. This is sometimes called a "criticality" measure or statement, identifying key characteristics in the process or component under study in the FMEA. A rating is typically added, providing the opportunity for all involved to actually "rate" or evaluate the characteristics as a function of relationships identified in the FMEA. This is not necessarily a part of the RPN calculation.


13. Actions taken. What is the reassessment action plan for demonstrating improvement? This should identify tasks, time frames, persons responsible, accountability systems and so on, all of an evaluative nature, to assist us all in reducing or eliminating the failure under analysis. This should, however, also tie into all existing data and documentation which should be available as a function of the broader quality system, and identified above as related to characteristics evaluation. Capability information, gage R & R, and other attribute and variable data information should be relevant to actions taken.
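
The worksheet columns walked through in steps 1-13 lend themselves to a simple record structure. The following is a minimal sketch only, not the actual LSSQTT two part form; the field names and example rankings are hypothetical, and the rpn property simply applies the step 9 multiplication so that rows can be rank ordered and recalculated after actions are taken.

# Minimal sketch, not the actual LSSQTT two part form: one worksheet row
# capturing the columns walked through in steps 1-13, with the step 9
# multiplication as a property. Field names and example values are hypothetical.

from dataclasses import dataclass

@dataclass
class FmeaRow:
    process_description: str   # step 1
    failure_mode: str          # step 2
    failure_effects: str       # step 3
    severity: int              # step 4, ranked 1-10
    potential_cause: str       # step 5
    occurrence: int            # step 6, ranked 1-10
    current_controls: str      # step 7
    detection: int             # step 8, ranked 1-10
    recommended_actions: str   # step 10
    responsibility: str        # step 11
    criticality: str = ""      # step 12, where used
    actions_taken: str = ""    # step 13

    @property
    def rpn(self) -> int:
        # Step 9: severity x occurrence x detection.
        return self.severity * self.occurrence * self.detection

row = FmeaRow(
    process_description="Seal press, station 40",
    failure_mode="Incomplete seal",
    failure_effects="Leak reaches external customer",
    severity=7,
    potential_cause="Press dwell time drifts low",
    occurrence=5,
    current_controls="Attribute check each shift",
    detection=6,
    recommended_actions="Variable charting of dwell time",
    responsibility="Process engineering, 30 days",
)
print(row.rpn)  # 7 x 5 x 6 = 210; sorting rows by RPN gives the step 9 priority ranking

# After actions are taken, the rankings are reassessed and the RPN recalculated,
# as discussed in the paragraph that follows; it should come down over time.
row.occurrence, row.detection = 3, 3
print(row.rpn)  # 63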

An additional measure or step which is sometimes added to the FMEA system is the recalculation of the RPN based on the actions taken, changes in key characteristics, and other variables identified and acted on in the FMEA process. While this is not shown on the form provided in the toolkit, it should be given careful consideration since it does recognize and allow for ongoing improvement as a function of the process. It is also true that generally the total FMEA process would be redone or repeated over time, allowing for a more robust approach to the recalculation of the RPN. We anticipate that this value would come down over time.

Product Liability, FMEA, Reliability, Finite Element Analysis (FEA)

When product fails, generally it is assumed that it is the producer who is at fault rather than the user. This prevailing attitude suggests that technologists must work ever harder to achieve reasonable assurances that we have done all that is possible to achieve a non-failure system or function. This is at the core of the FMEA process, meaning that the FMEA, if used properly, can assist us in avoiding failures which may or may not be our fault as the producer, but almost always will appear to be our responsibility. Recognize that if we have a substantial "paper trail" indicating we have tried to do the "prudent" thing in analyzing our product for potential failures and improvements, we should be in a preferred position relative to liability issues and circumstances. This is part of the purpose and function of the FMEA. It would also be safe to say that the producer, given their knowledge of the design, product and process involved, is generally in the best position to
identify and prevent failures through the use of FMEA. This is why we must:

1. Explore all reasonable design alternates and solutions, and select from among the best.

2. Ascertain the potential impact a failure may cause to the broader system.

3. Understand broader issues which may surface if failures occur in a sub-system or component.

4. Predict, with reasonable assurance, the likelihood of failures in product or process.

5. Adequately test all components, devices, and sub-systems in the broader product, process or system.

6. Perhaps most important, have systems in place, in the form of quality systems, to address concerns or problems, once identified as failures.

7. Have reasonable and responsible warranty policies in place to address failures if they do occur.

Even where proper instructions or directions for use or operation are provided, the producer is not relieved of the responsibility to provide safe and reliable product. This is why suppliers are often asked to participate on teams involving FMEA, helping assure that all supplied components are held to the same high standards as the original producer expects of themselves. This also explains why we try to locate and utilize the process which will hold the best capability, reducing defects and defectives in production. The overall quality system must be geared to assisting in identifying potential failures through data collection and analysis, documentation, well trained people, and so on. General reliability issues, Kaizen and FMEA. It must be recognized that FMEA relates to broader issues in quality, and that the door is opened, through FMEA, to better quality by understanding reliability in product, process and design. But, like FMEA, reliability and its failure relationship in quality, is not a simple or easily understood area. The purpose of the following discussion is to summarize and present a few key areas related to failure and reliability for purposes of helping us all be better equipped to address the FMEA process. This also typically relates to test/laboratory procedures and systems for analyzing product under rather controlled conditions. Reliability in quality addresses quality of product service life over time. Given adequate written instructions and general support, including sufficient long term maintenance and service, the product should have overall safety and reliability considerations from concept through completion of
service/use by the customer. Reliability through quality is also a fundamental concern for the length of service life, the basic issue being how long can the product function the way it was designed to be used? Quality of a product when it leaves production, and begins to be used by the consumer, must take into account overall reliability in a product's service life. If our only concern in the quality system is the immediate quality at shipping time, then over time, as the product is used by the consumer, we may likely fall woefully short as a technological organization. Reliability can be defined as the ability to perform without failure in a specified function under given service conditions for a specified period of time. This definition provides several general conditions required to determine reliability in products, as related to quality: 1. Stated successful product performance/service. 2. Defined environment where product performs. 3. Specified operating time and conditions. 4. Nature of failure behavior which is predicted. We know that our product is going to wear out and deteriorate over time. But through various tests and analysis, data collected over time, experiences both internal and external, and in other ways, we project that our product can be expected to perform in certain ways over time. The task then becomes to improve on this without adding substantially to the product cost. It should also be clear that use of data collected in production, various engineering data, market and customer use information and documentation, among others, can be very important in improving our quality over time. Failure rate is the ratio of the number of products that fail to the total time of all of the products being life tested. The failure rate is not always specified in terms of hours, but can be defined in terms of cycles of operation, distance traveled, etc. It is usually used to define the life of component products (products which are frequently not repairable). Mean time and failure relationships are generally expressed in the following ways: 1. Mean time to first failure. 2. Mean time to failure. 3. Mean time between failure. Each of these will be briefly addressed and explained. Mean time to first failure (MTTFF) is the average time to first failure of the products being
tested. This is usually used to define the reliability of non-repairable products. Obviously, the key variable in a non-repairable product or component is the point at which it will likely fail, and then why it failed, based on further analysis and study over time. While this may be most frequently used with non-repairable product, it may also be a useful piece of data for repairable product or components. Mean time to failure (MTTF) is the operating time of a single product divided by the total number of failures of the product during the measured time interval. The measurement is usually made during the period of time between early life and wearout failures. Mean time between failure (MTBF) is the total operating time of a population of repairable products divided by total number of failures. This measurement is usually made during the period of time between early life and wearout failures. Early life failures are those failures that occur just after completion of the product when failures occur at higher than normal due to defective parts or inadequate manufacturing procedures. This stage is sometimes referred to as the equipment "debugging" or component "burn-in" period. Wearout failures are those failures which occur as a result of deterioration processes or mechanical wear and whose probability of occurrence increases with time. These failures often occur near the end of life of a product and are usually characterized by chemical or mechanical changes. Early life, or "burn in" period, is where the product is being "broken in" or just getting started, while wearout failures are due to age or number of cycles in operation, or other variables relating to life in actual operation, or the point where the product is no longer functioning as it should according to specification. This also begins to relate to maintenance functions and other aspects of the broader applications environment or system. Failures can often be prevented by a replacement and maintenance policy, and through careful planning by users. Related to this, the maintenance ratio is the number of maintenance man-hours of downtime required to support each hour of operation. The ratio reflects the frequency of failures of the system, the amount of time required to locate and replace the faulty part and to some extent the overall efficiency of the maintenance organization. The maintenance ratio provides a figure of merit for use in estimating maintenance manpower requirements. Several techniques are used to increase reliability of products, briefly presented below. One technique, design margins, addresses building the product so that it will withstand much greater stress than it would ordinarily be subjected to.


For example, bridges have been designed with safety factors as high as four, which means that the bridge is four times stronger than necessary to meet normal, expected stresses. This safety factor is meant to cover unknown stresses that might occur, variability in the strength of materials used, variation in the manufacturing process, etc. Closely related is derating, where the product is used to perform a task less severe than the one it was designed for. Another technique is ease of maintenance, where products are designed so that maintenance is performed automatically or is relatively easy to perform. For example, use of oil-impregnated bushings can reduce the likelihood that failure will occur under proper conditions. Human error in maintenance can also be reduced in this way.

When a reliability requirement is written into a contract, a sampling plan has to be devised so that the buyer has assurance that the requirements are properly met. Usually the sampling plan will be in the form of multiple sampling, where multiple units are studied rather than only a single unit. Regardless of how the sampling occurs, reliability must be underscored in tests and inspection based on carefully designed plans. Moreover, all test documentation must be carefully handled to assure accuracy and security. Routine procedures facilitate both in-house and independent lab tests on the product to determine exactly what should be documented and certifiable.

Service/use instructions must be written from the perspective of helping consumers use products correctly, thus extending the life of the product. Technological producers are responsible for providing clear, accurate, and concise instructions for installing, servicing and, certainly, using and operating their product. No details should be overlooked, and proper diagrams and procedures should be included. This includes instructions on how to operate the product so that the environment is kept relatively constant. This may appear similar to the SOP approaches identified and explained elsewhere.

Human reliability techniques apply effective communication from producers to users, as well as within production, where people work to prevent product failures. Human reliability techniques take the position that simplification, motivation, communication, training, and other human factors can assist in improving reliability. Among other things, this will be done by improving engineering design change procedures through improved communications, by team problem solving techniques, computer networking, improved

documentation methods, and perhaps in other ways. But it is important to recognize the significance of the person, both in production, and in all other facets of the product, as it may relate to reliability and failure.
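
Before turning to specific reliability measures, the mean time definitions and maintenance ratio above can be tied together with a brief sketch; all of the operating hours, failure counts, and labor hours shown are assumed purely for illustration:

# Illustrative mean-time and maintenance-ratio calculations (all values assumed).

# MTBF: total operating time of a population of repairable products / total failures
total_operating_hours = 12000.0    # assumed fleet operating hours between early life and wearout
total_failures = 15                # assumed failures logged in that period
mtbf = total_operating_hours / total_failures
print(f"MTBF = {mtbf:.0f} hours per failure")

# MTTFF: average time to first failure for non-repairable units (assumed first-failure times)
first_failure_hours = [850, 920, 1010, 780, 990]
mttff = sum(first_failure_hours) / len(first_failure_hours)
print(f"MTTFF = {mttff:.0f} hours")

# Maintenance ratio: maintenance man-hours required to support each hour of operation
maintenance_man_hours = 300.0      # assumed maintenance labor for the same period
maintenance_ratio = maintenance_man_hours / total_operating_hours
print(f"Maintenance ratio = {maintenance_ratio:.3f} maintenance hours per operating hour")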

Synchronized reliability measures, robust design. Several key measures are generally accepted as part of the basis for determining and better understanding reliability. As was indicated in the previous introductory section, reliability has to do with performance over time. It is also true that reliability then has much to do with determining the point and nature of failure. Thus, one major analytical area associated with reliability is what is called failure analysis. Failure analysis includes failure rate, percentage of failures, mean life, the exponential failure law, inverse reliability, and break-in and wearout failure analysis.

Failure rate. This is further defined as the number of failures per some unit of time, generally one hour. Three conditions usually dictate how the failure rate is calculated: first, failures are repaired and replaced; second, no repairs or replacements are made but a log of actual failure times is kept; and third, no repairs or replacements occur, nor are failure times logged. Failures repaired and replaced. In the scenario where failures are repaired and replaced, the following formula is used:

λ = f / (nt)

where
λ = fph = failures per hour
f = total number of failures
t = testing period/unit test time
n = sample size

For example, 200 parts are tested for 5 hours, and when a failure occurs, it is immediately repaired and placed back in service. At the end of the test period, there had been 20 failures. The failure rate would be determined by:

λ = f / (nt) = 20 / (200 × 5) = 20/1000 = 0.02 fph

The failure rate is 0.02 fph, based on a total test time of 1,000 unit-hours (200 parts tested for 5 hours each). This scenario does not specify the length of time for repairs, or how or whether repair time would impact the overall failure rate; a more specific calculation and test system would be required to gain that additional detail. Failures not repaired/replaced, log kept. Next is the scenario where failures are not repaired and replaced, but a log of actual failure times is kept.


The formula used for determining this failure rate is:

λ = f / (Σtf + Σtg)

where
λ = fph = failures per hour
Σtf = sum of test times for the failed parts
Σtg = sum of test times for the good parts

An example is provided where failures were not repaired and replaced, but a log was kept of failure times. The 8 failures were identified as follows: 2, 4, 4, 3, 3, 2, 4.5 and 4.5 hours. 100 other parts also ran for the full 5 hour test without failure. Thus, based on the values provided in the example:

Σtf = 2 + 4 + 4 + 3 + 3 + 2 + 4.5 + 4.5 = 27
Σtg = 5 (100) = 500

λ = f / (Σtf + Σtg) = 8 / (27 + 500) = 8/527 = 0.015 fph

The total test time was 527 unit-hours when all parts tested were included in the scenario.

Failures not repaired/replaced, and no log kept. The third scenario is where failures are not repaired and replaced, and no log is kept of time to failure for the tested units. In this case, since time to failure is not known, a method must be used to factor in an average. This is done by using the number of good units at the start of testing, the number of good units at the end of testing, and a modified formula:

λ = 2f / [t (s1 + s2)]

where the value 2 provides the averaging effect, and f and t are the same as before, with

f = total number of failures
t = testing period/unit test time
s1 = number of good units at start of test
s2 = number of good units at end of test

Based on the values used in the previous example:

λ = 2 (8) / [5 (108 + 100)] = 16/1040 = 0.015 fph
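
The three failure rate scenarios can be checked with a short sketch; the numbers simply reproduce the worked examples above, and the function names are illustrative only:

# Failure rate (fph) under the three test scenarios described above.

def fph_repaired(f, n, t):
    """Failures repaired/replaced: lambda = f / (n * t)."""
    return f / (n * t)

def fph_logged(failure_times, n_good, t):
    """Not repaired/replaced, failure times logged: lambda = f / (sum tf + sum tg)."""
    return len(failure_times) / (sum(failure_times) + n_good * t)

def fph_no_log(f, t, s1, s2):
    """Not repaired/replaced, no log kept: lambda = 2f / [t (s1 + s2)]."""
    return 2 * f / (t * (s1 + s2))

print(round(fph_repaired(20, 200, 5), 3))                            # 0.02
print(round(fph_logged([2, 4, 4, 3, 3, 2, 4.5, 4.5], 100, 5), 3))    # 0.015
print(round(fph_no_log(8, 5, 108, 100), 3))                          # 0.015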

It should be remembered that the previous example had 8 units which failed during a test time of 5 hours; 108 units were good at the start of the test and 100 were good after the 8 failures. Note that the two fph values are equal to three decimal places, demonstrating another useful way of calculating reliability values.

Percentage of failure. Another simple, but useful, failure analysis technique is termed the percentage of failure. Sometimes called the ratio failure rate, it is simply the number of failures per part, calculated by dividing the number of failures (f) by the number of parts in the sample (n). Mathematically this is shown as the formula:

rf = f / n

where
rf = ratio of failures, failures per part
f = number of failures in the sample
n = sample size

An example could be cited where a certain process typically produces parts having a failure rate of 10%. Given a production run of 40, used here as the sample, what is the projected number of failures? Using the formula presented earlier, with values for our example:

rf = f / n
0.10 = f / 40
f = 0.10 (40) = 4

Thus, we know the baseline value to work forward from for improvement: the number of failures for this process would be projected at 4, under the conditions given.
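
The same kind of quick check applies to the ratio failure rate; the 10% rate and the run of 40 are the example values above:

# Projected failures from a ratio failure rate: f = rf * n
rf = 0.10        # ratio of failures (failures per part), from the example
n = 40           # sample size (production run)
f = rf * n
print(f"projected failures = {f:.0f}")   # 4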

Introduction to finite element analysis (FEA). This section is designed to introduce technologists to finite element analysis (FEA) in practical terms. Terms that correspond to FEA are introduced and explained in general ways which are applicable to most technological circumstances. Personal computer (PC) and mainframe FEA systems are discussed in terms of cost and performance. A brief practical example of the concepts introduced is applied to further demonstrate the usefulness of FEA.

Avoiding disasters in the development and production of a product is a major reason for using FEA. FEA can "drastically speed up" the prototyping of products. Modeling and discovering the minimum material requirements for a design before generating the physical model takes much of the time out of prototyping, potentially eliminating the embarrassment of releasing a poor design and
then the added expenses of redesign and damage control from the resulting disaster. Using the CAD data as a source or database makes the process part of a rapid prototyping scenario. After a design is analyzed, the data that was used to model it in FEA can be converted to CAM data, and in many instances the part may be machined.

FEA is a technique that is used to solve a broad range of industrial problems, including mechanical loadings and thermal characteristics. FEA is a method that allows the designer to analyze complex components, often nearly impossible to analyze mathematically as continuous forms, by dividing the shapes into smaller, simpler, finite elements. The elements are then analyzed for their stress and strain characteristics, and the results are related back to the entire structure of the component. The points, called nodes, form each element's shape, and lines crossing through nodes make up a finite element mesh. It should be noted that nodes are usually placed at each change of cross section in the structure, and as mesh density is increased the number of corresponding nodes is also increased. High density meshes may result in a more accurate analysis relative to low density meshes. However, high density meshes may also take longer to analyze, because each node has six equations to be solved, corresponding to its six degrees of freedom: the three translations along the x, y, and z axes and the three rotations about each of these axes. Correspondingly, the more nodes and elements needed, the more computer resources are required to calculate the stresses and strains at the nodes.

Development Of PC-Based FEA. Traditionally, FEA has been an exclusive engineering tool for mainframe and minicomputer users because of its large memory requirements. However, with developments in microprocessor technology, FEA packages can now also successfully run on PCs. In the mid 1980s, early full-feature FEA packages for the PC were introduced. Initially, the stress and dynamics package sold for about $5,000; compared to the cost of leasing the mainframe FEA package, which was as much as $15,000-20,000 for a yearly lease, $5,000 for a PC-based FEA system was a very attractive price. This allowed PC-based FEA to become a feasible alternative to mainframe-based FEA software. Technological improvements for PCs were continually being introduced in the 1980s. Faster processors, higher resolution graphics boards with color capability, increased memory, extended hard disk capacity, and lower prices were some of the changes in PCs that allowed for PC-based FEA developments. By the mid 1980s the producers of PC-based FEA software found that their PC-based

FEA product was more than an alternative to mainframe-based FEA. It had become, in many cases, the first choice. The price of PC-based FEA packages had been slashed from around $15,000 to $5,000, and it appears to be dropping further as time goes on.

Why is FEA used? FEA, as a leading approach to test designs against the laws of nature, has been well accepted and widely used in industry. Traditional analysis techniques can only be satisfactorily used with a range of typical shapes and specific loading conditions. The uncertain analysis of complex components requires designers to use high safety factors for the mechanical loads within the products they design, so that the components are usually overweight and waste expensive materials. For the sake of both safety and economics, designers can optimize mechanical structures by employing the FEA technique, which will assure that the materials used can support the related loadings while not wasting excessive amounts of material. Another benefit of FEA is that it allows us to see how the product will react under loads and predict any errors before going through all the expense of building and testing prototypes that are doomed to fail. Due to the high cost of the later stages of product development and (often concurrent) production, early optimization and refinement by FEA can greatly reduce time and cost in various stages of product development, removing lead time and cost requirements in traditional prototype development. The value of FEA as a tool for technologists is becoming increasingly apparent. With costs coming down and the utilization of the PC (as was discussed with PC-based FEA), the use of FEA is going from being a high technology toy to an essential tool in the technologist's toolkit. The real world application provided also assists in demonstrating the usefulness of the tool for aiding in the overall design process.

General FEA procedures. The procedure for performing FEA typically includes three steps: preprocessing, processing, and post-processing. Each of these is discussed briefly. Preprocessing starts with building a model, typically in a computer aided design (CAD) format. A solid model in an IGES, DXF, or similar file form would be a possible construction approach that could be used as the basis for FEA (in geometrical terms). Next, element types are chosen which break up the geometry involved in the model. Some possible choices for abstracting the continuous shape into elements include the following: thin shell, thick shell, plate, solid, plane, and membrane. It should be noted that software packages generally have additional specialized element types that apply to specific applications, and selection of element types varies according to software.
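
Before the remaining preprocessing, processing, and post-processing steps are described, the node and element idea can be made concrete with a minimal one-dimensional sketch. The bar geometry, material values, and load below are assumed for illustration only; commercial FEA packages handle full 3-D models with six degrees of freedom per node:

# A minimal sketch of the finite element idea for a 1-D bar pulled axially,
# using two hypothetical elements and three nodes (illustrative values only).
import numpy as np

E = 200e9        # assumed modulus of elasticity, Pa (steel-like)
A = 1e-4         # assumed cross sectional area, m^2
L = 0.5          # assumed length of each element, m
F = 10e3         # assumed axial load at the free end, N

# Element stiffness matrix for a 2-node axial element: (AE/L) * [[1,-1],[-1,1]]
k = (A * E / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Assemble the global stiffness matrix by adding element matrices at shared nodes
K = np.zeros((3, 3))
for e in [(0, 1), (1, 2)]:           # element connectivity: the nodes each element joins
    K[np.ix_(e, e)] += k

# Boundary condition: node 0 is fixed (u = 0); load applied at node 2
free = [1, 2]
Fvec = np.array([0.0, F])
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], Fvec)   # the "processing" step

# "Post-processing": element strains and stresses recovered from the nodal results
for e in [(0, 1), (1, 2)]:
    strain = (u[e[1]] - u[e[0]]) / L
    print(f"element {e}: strain = {strain:.3e}, stress = {E * strain / 1e6:.1f} MPa")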


CAD information, in terms of geometry, is used to construct a mesh that corresponds to the component being modeled in the FEA. This takes place when elements are projected onto the CAD geometry and a 3-D mesh is generated by the computer. By having the computer create the mesh automatically, the time to produce an FEA model and solution goes down drastically. Using large meshes in areas that are less critical, and smaller mesh sizes in areas that are critical, also helps speed up the solution time while not affecting the relative accuracy of the solution. Some software packages are automated to the point where they can do some of this mesh selection. After the mesh has been created, structural loads (or enforced displacements) and boundary conditions are applied to the geometry being analyzed. The structural loads are forces, temperatures, pressures, and/or accelerations applied at nodes and elements of the model. Boundary conditions describe how the geometric model is connected with its environment. This model represents the final output from the preprocessing phase of FEA.

Processing is the FEA procedure where the calculations for the stresses and strains are performed. Processing is a computer solution of the lengthy mathematical calculations required within the FEA. For a small problem, it takes minutes; for a medium problem, it may take hours; larger problems may take overnight or a weekend due to the extensive number crunching that must be performed by the computer. This is also a function of the available computing capacity.

Post-processing is the most exciting of the three steps, because it lets the user display and study the results of the analysis. The displays may include contour plots, deformed geometry, criterion plots, "hot spots", etc., and the output may also be a data file with node numbers and stress and strain values in it. Often, the data found in tables are displayed in color-coded plots as alternatives or in addition to the tabular form. In these color plots, failure areas can easily be identified graphically, aiding designers, engineers and technologists in knowing where to focus their energies for additional work. If an area in a design is "over done," it could be reduced in wall thickness, or in other material related ways, thus reducing costs and helping the organization producing the product remain increasingly competitive.

Quality Function Deployment (QFD) For Lean Introduced

One tool used to enhance competitiveness is termed quality function deployment (QFD). Originating with the Japanese, this improvement technique is an attempt to simplify the design and production phases of technology, and focuses on:

1. Product function. What must the product do? Do all product features add value to the product in ways the customer is truly interested in?

2. Deriving quality. How can quality be derived within product functions? How is value added to overall product quality via each function?

3. Prioritization. Prioritization for improvements in product occur as QFD flushes out potential areas for improvement in order of importance based on multi-inputs from various sources.

4. Quality versus function. How are quality and function best produced? What production systems will best achieve quality functions?

5. Voice of customer. What are customers really saying? How do we know what the voice of the customer, and others, really is?

6. Communication. Bringing enhanced communication to all. At various stages in product development, production, and deployment, QFD helps all better "listen" and "hear"--to better serve customers.

7. Understanding competition. Understanding the competition and our position in the marketplace relative to competition. By better understanding ourselves and our product we can better produce to be competitive.

8. Cost reductions. Reducing costs in product where features in design may not actually address functional requirements or other needs of the customer. Although not a value analysis tool, QFD is a systematic tool for comparing value versus cost, at least in general terms.

9. Innovation. Fostering innovation and creative thinking is encouraged through QFD by placing substantial and focused information together with various technical and non-technical talent for improvement.

10. Efficiency. Speeding up the design and technical process of bringing ideas to production, and improving the product in production.

11. Teaching and learning. Internal and external knowledge transfer is enhanced through QFD since all go eye-ball to eye-ball to address product improvement issues, and it is documented for others to learn from.

12. Cross functional teams. Since emphasis is on bringing together divergent views such as
engineering and marketing in team environments, we are not only encouraged to grow cross functionally, but it is inevitable.

Thus, QFD is a holistic way of thinking about customer needs, designing to meet those needs, and meeting them through production methods as well as design. All of this is quality focused and driven in all phases and stages of design and deployment for use. And like most Kaizen documentation techniques, it must be realized that QFD is not a one time process, but an ongoing iterative process which can result in systematic, incremental and disciplined improvements in product and process.

QFD and design stages for quality will generally include prototype development and study of performance standards and requirements, general feasibility studies, pre-market studies of the quality desired and needed, and overall cost projections related to achieving quality. As organizations continue shifting toward concurrent engineering methods, this stage will also include determining and preparing control systems for production, facilitating a reduction in the lead time to go to production. Finally, as design processes move forward, we must determine and define quality standards and characteristics which translate into production, to be tracked and logged per customer demands and input. QFD helps build quality into the product by verification through component and product testing under simulated operations/service conditions. This part of the QFD process requires that the product as designed be determined to be producible in existing facilities. If it is not producible under existing conditions, the question becomes what costs will be involved in making changes to attain or maintain quality? The bottom line is to determine what the functional requirements of the product or system are, allow quality to be built in around these requirements, and produce the product using proper systems.

Problem solving and brainstorming for quality improvements. Systematic methods for improving through QFD and related methods use effective problem solving. Solving problems merely to "put out fires" is less effective than making proper long term decisions. Technical problems are solved in disciplined ways:

1. Identify the problem. The theme or problem is related to objectives to be accomplished or specific problems which have arisen.

2. Set parameters. What are the reasons for the problem selection? What is the problem, and what are the expected results? What is not the focus?

3. Analyze the problem. This is analysis via focus on parts of the problem and sub components, sometimes called cause and effect.

4. Preliminary ideas selection. This identifies possible plans for action and alternative solutions. Various inputs and information are gathered relative to each alternative.

5. Decision identification. A decision is made, putting the best alternative into action.

6. Analyze decision. Comparison between original plan (or targets) and actual results in data-based value terms to the organization.

7. Prevention. This step provides action to prevent recurrence. This may mean standardization of procedures and training.

8. Future planning. Remaining problems, and analysis of the current solution's impact, prompt reflection on how to approach future problems.

Another approach to problem solving is often described as the classic "design process". This problem solving/decision making process commonly has six major steps consisting of problem identification, preliminary ideas, refinement, analysis, decision, and implementation, explained as follows:

1. Problem identification. A clear and concise definition of the problem is determined. The importance of this cannot be overstated, since until the problem is known, how can we attack it?

2. Preliminary ideas. Early solutions are identified based on preliminary reviews of available information, experience and so on.

3. Refinement. Additional detailed study is conducted on preliminary ideas and other available but increasing information.

4. Analysis. Detailed in-depth analysis and testing is pursued including engineering analysis, field testing, prototyping and so on.

5. Decision. Solution is selected and final work-ups for implementation conducted. This includes final planning for full production.

6. Implementation. The system or device is implemented. Evaluation provides feedback into the system for improvements, long-term.

Note that “ongoing improvement” provides obvious connections to Kaizen and QFD techniques.


Brainstorming is another useful tool for addressing QFD as related to robust problem solving. Brainstorming is an idea generating activity, usually conducted in groups of 3-12 people (this varies). The basis for brainstorming is that groups can typically be more creative and productive than an individual, based on synergy. Human imagination applied to problems via reflection and freewheeling assists in the success of brainstorming. It is necessary to have a group leader to help focus group efforts, and it is necessary to have someone, or some method, for documenting the ideas generated. It is also necessary to have a relatively comfortable atmosphere and agreement on the topic or problem. Problem focus provided through the previous steps identified in problem solving should be used wherever possible. When the process is actually being conducted, all members in the group should be encouraged to participate, providing only one idea per turn (to help avoid anyone dominating the process). People should be sequenced regularly to help provide ideas, and no criticism should be allowed. To assist in generating "free wheeling" and creative ideas, the following "idea spurring" questions should be asked:

1. Other uses. Can the unit be put to other uses? Are there new ways to use it as is? Are there other uses if modified? Or if not?

2. Adapt. Can the unit be adapted? What else is like this? What other ideas does this suggest?

3. Modify. Can we change meaning, color, motion, sound, odor, taste, form, shape?

4. More. Can we add? What? Where? Should frequency, strength or size be increased?

5. Reduce. Can we minify? Subtract? Eliminate? Smaller? Lighter? Slower? Split? Frequency?

6. Substitute. Can we substitute? What else instead? What other plans?

7. Rearrange sequence. Can we rearrange? Other layout? Other sequence? Change place?

8. Reverse, turn around. Can we reverse parts or components? Opposites? Turn it backwards, upside down, or inside out?

9. Combine, blend. Can components or parts be combined? How about a blend, assortment? Combine purposes? Combine ideas?

While there may be other questions which can assist in enriching and guiding the brainstorming process, the above points should aid in moving the process forward. As QFD is presented, it should be apparent that brainstorming and problem solving are highly useful methods to understand and employ.

Cause and effect relationships for QFD. Also referred to as fishbone diagramming, cause and effect tools help sort out root causes of effects in products, processes, and related circumstances. This is a problem solving type of tool which assists in isolating actual causes rather than symptoms of the cause--but tying in the symptoms as part of the analytical process. Each new branch of discovered cause provides additional roots to be sought, until the actual root cause is presented. This occurs when no further sources of potential or real cause can be identified. This relates directly to failure and reliability in process and product.

Getting to the root of the actual problem for improved reliability is not as easy as it may at first appear. If it were, there would be fewer problems to solve. The fundamental problem with getting to the root is that effects often seem to be more significant than they are in actuality, relative to the true cause. This is generally not readily apparent, else the problem would likely not be a problem. The analytical tool known as cause and effect can help build part of the information base required for QFD in the broader Kaizen documentation process. The main root of the cause and effect diagram represents what is thought to be the main driver of the problem at a given point in time. As additional points of view are brought to bear on the main driver, or main effect as it is sometimes called, it is likely that these will become additional drivers in the discovery of the actual root cause. Other important elements involved in the use of cause and effect tools, reviewed here in summary form, are:

1. List causes, effects. Identify as many causes of one part of the root main effect as possible prior to proceeding to any of these individually. After each sub-cause is identified, treat each of these as the effect and identify root causes, evolving multiple branches.

2. Repeat identification process. The process of identifying causes and effects is repeated until virtually no additional causes or effects can be identified. As the process proceeds, we may identify new areas for pursuit.

3. Five why's. At some point in the process, each cause and effect should be questioned over and over to determine its applicability to the actual root cause or problem to be solved. Called "five why's", this must be repeated over time to assist in getting to the root of the problem.


4. Brainstorm. These drivers and effects must be flushed out through iterative brainstorming with individuals familiar with the problem. This can be done formally or informally, but should involve the broader team process, if we wish to achieve maximum potential.

5. Prioritize. It is also quite likely that some type of weighting system must be applied to the effects to determine what knowledgeable persons perceive to be the main effects--and possibly the actual problem. These weighted values can actually be placed next to the causes and effects on the diagrams, as sketched after this list.

6. Repeat process, iterations. The information should be "recycled", and repeated, leading to enhanced cause and effect analysis over time.
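
As referenced in item 5, a minimal sketch of the weighting idea follows. The causes and weights are hypothetical, invented only to show how weighted causes can be sorted, Pareto-style, for prioritization:

# Hypothetical weighted causes for one effect, sorted for Pareto-style prioritization.
causes = {
    "worn locating fixture": 9,
    "inconsistent material lot": 7,
    "operator training gap": 4,
    "ambient temperature swing": 2,
}
for cause, weight in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{weight:>2}  {cause}")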

As was more thoroughly addressed in previous tools, the relationship to Pareto, checklists and other listings, "five why's", and certainly brainstorming tools must not be overlooked. Used either individually or collectively, these represent very powerful analytical tools. Even though it may require several brainstorming sessions to flush out the important effects, let alone the actual cause of the problem, much can be learned--and solved--about a given problem in this manner. The beauty of the tool is that it encourages free thinking, allowing innovative solutions or partial solutions to emerge in a rather rational and systematic manner. But this can also be a potential weakness, since it can become a misguided "black hole" of effort directed in the wrong direction. This speaks to having systematic rules and approaches for using the tool in disciplined ways.

Issues and infrastructure prior to undertaking QFD. Several basic decisions and issues must be made or wrestled with prior to undertaking the QFD process. This also relates to having necessary and sufficient infrastructural components in place for enabling a successful QFD experience. Even with the following, it must be realized that QFD, like most Kaizen documentation methods or systems, is designed to be used on an iterative basis through several "back and forth" approaches. This requires a fairly mature organizational infrastructure, and people with a true commitment to ongoing improvement. QFD should be used to enhance, and build further on, pre-existing infrastructural tools. Several "attitudinal" and "mechanical" issues or circumstances must be addressed and in place prior to trying to move forward with QFD. The following attitudinal issues or circumstances should be in place or underway for QFD to be successful:

1. Disciplined team. Based on the purpose of the QFD process, team functions should not be started with the QFD approach; QFD should not represent the organization's first attempt at teaming. QFD is used because it provides a powerful process for bringing together technical and non-technical groups and perspectives, but this also assumes that some degree of experience and knowledge in the team building approach has already been gained. When it is necessary to actually assemble a team for QFD purposes, it should virtually be "in place" based on the ongoing mature experiences of your group.

2. Growth and maturity. The QFD process requires a fair amount of maturity and knowledge from various views and perspectives. If the group is unaccustomed to growing and learning, and teaching together, they will likely struggle with QFD. This process assumes that we are sufficiently flexible and mature to work our way through novel situations without "flaking out" on one another. This is referred to as "stretching", part of the necessity for creativity and innovation.

3. Trust and relationship. Not only must we have a disciplined team in place, but we must truly trust one another as customers and suppliers. Internal and external groups and/or individuals may need to be involved in order to enhance quality functions, and without trust, this simply cannot happen. QFD requires that we sit "eye ball to eye ball" to grow product and process relationships. Increasingly we must involve external and internal customer and supplier groups in improvement processes, underscoring QFD as a tool to use.

Additionally, several "macro mechanical" infrastructural issues or circumstances should be in place or underway for QFD to be successful:

1. Long term systemic brainstorming for root causes. A broad and ongoing brainstorming and problem solving approach is entered into, with all parties providing their views relative to the problem or issue identified. These are provided as inputs into the total matrix and may also be arrived at through root cause and effect type tools and processes. It should be realized that this could be repeated and continued in an iterative process approach
until all inputs from various parties are exhausted. The mature organization would also draw upon FMEA and OPCP information for previously documented material to pursue and develop further with the QFD. But systematically, we must have ways to brainstorm root causes aimed at the QFD process.

2. Documentation and records. When sufficient information and rankings from various perspectives have been provided, this information can be summarized either numerically or in text forms, or both for solid and robust documentation purposes. It should be realized that several levels and phases of the same matrix could be produced at this point, through the process, again reflecting a mature and growing organization and group. Documentation and record keeping may also be a function of ISO registration or various forms of certification. The important point is that documentation is (a) shared widely in the organization, and (b) used as record keeping for future needs and issues with suppliers and customers. This could include evaluation, new product development, and so on.

3. Problem solving and suggestion systems. The QFD process works best where systems are in place to solve problems (previous tools). These exist in several forms but are best identified with the 8-D and suggestion systems shown in earlier tools. Documentation and systems for solving problems, including FMEA, SOP, OPCP, charting and other data-based information, bring more than simple opinion to the table, which is pivotal to mature improvement.

4. Evaluative systems. Attitudes and systems for evaluation are part of the QFD approach. The extent to which we have been successful in supplier evaluation and in building our own evaluative systems internally will be reflected in our QFD process, and in how well it works.

Several “start up” elements should be in place or underway for QFD to be successful. These are related to QFD documentation processes, and are also provided in the form in the applications section:

1. Matrices and format. QFD provides a mechanism for inputs from the producer, customer, or others who are knowledgeable, which can then be placed in matrix formats for constructive analysis. Persons from quality, engineering, manufacturing and sales or marketing are the typical targets of this tool. The QFD process also orders inputs from most to least important, affording the opportunity for all to eventually become familiar with this prioritized information. Generally from two to four matrices are identified and developed in the QFD system, with each being provided various symbols for rapid identification. It is possible to have two or more different but related parties who are concerned with the same quality function doing QFD. Typical symbols identified with each party's input and matrix are boxes, circles, and triangles, but others could be used. The organization or group wishing to conduct or participate in QFD must have a format and matrix identified, similar to the generic start up information provided. This is designed only as a start and should be modified and built on.

2. Symbols and numbers. Each party and matrix will place symbols on the matrix, and then use a weighting value from some internal or external (or most likely a combination--agreement) system to assist in determining importance of identified functions or design elements previously identified. This should likely be consistent with FMEA and other systems, and thus 1 to 10 is a good place to begin. This is relative per the organization, nature of the QFD group, issue under discussion, and so on. Organizations develop QFD processes per inputs from suppliers and customers, products in the market, and so on.

This section should provide some indications regarding the essential components for getting started and maintaining a QFD process which actually contributes toward the broader quality and productivity systems being addressed in Kaizen and documentation. The section should also serve as a useful evaluative guide for determining where we are as an organization on the journey of improvement.

Quality function deployment procedures. Quality function deployment (QFD) is built around and upon the cause and effect logic and system, as well as the broader problem solving approach, all of which help systematically and logically identify and improve on potential or real weaknesses in product quality. The emphasis in QFD tends to be on customer inputs, or demands as they are generally called. Much of the QFD logic also relies on matrix development to determine areas of common
concern and/or areas of weakness needing to be pursued. Matrix development helps identify cross functional interests or relationships, where all gain by better understanding and articulating information or knowledge between and among groups. The QFD focus is also heavily oriented toward product development and design functions for products being introduced. If weaknesses or faults in the current design are identified, and methods for improvement are developed, this information can be systematically built into a broader product program. The key, however, is that it is a communication tool for reducing lead time from concept to market--in conjunction with much other information such as cause and effect, design FMEA and process FMEA. In all cases, this is, in essence, the way a more robust and less failure-prone design can result.

The advantage of QFD is that it provides a positive system for directly placing customer inputs into the mix of information considered for product improvements and changes. It is important to remember that the customer and supplier relationship applies both internally and externally. QFD is a useful tool to use internally to help all persons understand what we are trying to accomplish. It is also a good way to determine and document external customer demands. The QFD process forces, or encourages, a consensus to be built around what we really "expect" our product to be, particularly from the quality side, but also in engineering terms, and so on. By putting all parties in an "eye ball to eye ball" meeting, recording all the inputs over time, and then repeating this process, we virtually force communication and detailing of the product or issue under consideration. It is important that all parties agree that this is our objective in the process. Various groups may use the QFD process as an evaluative tool for their own performance, rather than waiting for customers to review them. The QFD system, as a stand alone activity, can itself be considered a useful form of documentation and prioritizing for deployment of resources. These could include any number of elements in the product but are typically oriented toward specific functions, as the name would imply. QFD can be used similarly to value engineering, since the process forces us to consider each component's function, necessity, cost, contribution to quality, safety, and so on. The example approach illustrated is one interpretation--a specific organization may wish to develop its own approach for working with customers and others. One approach used for QFD follows, first outlining general start up and planning procedures:

1. Identify a leader. As with any of the tools, where we wish to move forward, leadership will be required. The QFD process is somewhat unique since it can bring together multiple, divergent, and sometimes nearly adversarial groups as the process kicks off. This is where a "special" leader may be required: a special blend of technical and people skills will be helpful. It will also be important for the leader to have broad technical and business knowledge about the product and process being studied.

2. Agree on the purpose. All participants will be more productive in the process if we have agreed up front that there is a specific purpose, or product focus, and generally how this is going to be approached. This may be a specific element or mechanism in a product, a component, or the entire product, either from a micro or macro view. Several iterations will be required for optimum results.

3. Build team. Based on the purposes of the QFD process, determine who should participate. This could be internal customers and suppliers or external suppliers, and may be primarily technical groups, but is not limited to these. Part of the reason QFD is used is that it provides a powerful process for bringing together technical and non-technical groups and perspectives. The extent to which a team is comfortable working together will be directly related to the final outcome of their work in a QFD process.

4. Brainstorm quality functions for root causes. This includes functions broadly considered, to ultimately become customer demands and technical requirements. The team should agree on broad functions based on root cause analysis and other brainstorming techniques.

5. Determine ground rules. Will all persons in the process place all information on one sheet? Or will all members keep their own sheet with their interpretation on it? Will we complete a first iteration and then summarize all in a consensus format at the conclusion? Will we publish a summary interpretation and analysis of the entire process? These are but some of the typical ground rules that should be decided before progressing, to enable all to fully understand where we are headed at the outset.

Specific steps used to conduct the actual process could easily take several hours to one or more days, depending on maturity of the product, organization,
level of iteration we are in, and so on. The process should be conducted in efficient ways but not rushed.

6. Identify customer demands. Identification of specific demanded quality characteristics, functions, or other customer inputs is placed on the vertical (left) side of the matrix. These are specific requirements or demands identified as being significant or important, based on the voice of the customer.

7. Prioritize customer demands. Using a scale, typically from one to five points, customer demands are rated or prioritized according to various perspectives represented. Customer demands do not need to be listed in any order of importance since our rankings will come from the 1-5 values placed on them in the column just to the right of each demand.

8. Identify technical requirements. These are listed across the top of the form. The technical requirements are actual engineering and functional issues which must be built into the product. They represent specifications, characteristics, or similar technical deliverables, primarily from the suppliers' perspective. While technical requirements are usually related to customer demands, they are not necessarily organized or directly correlated in an organized manner at this point. Customers and others are involved in this conversation, gaining a better understanding of the requirements to produce the product.

9. Explain technical requirements. Just below each technical requirement, a symbol for nominal, larger or smaller should be placed. This is designed to help all participants understand the prevailing view on each requirement. For example, if the technical requirement is generally agreed to be reasonable, useful and accepted by all, we would agree to use the bulls eye symbol. If, based on conversation, it is determined that the technical requirement should be more robust, the arrow will point up in the box, indicating an enhanced technical requirement. If the requirement is too strong, as in over produced and adding unnecessarily to costs, we will point the arrow down, indicating we wish to reduce this technical requirement's role.

10. Strength relationships. Just below each technical requirement, at the point in the matrix where the requirement column aligns with the demand information for customers, a symbol should be placed to indicate the strength of the relationship between the technical requirement and the perceived demands of customers. The symbols on the form are weighted at 9, 3 or 1, intended to indicate a high, medium or low level of strength. It is possible that no symbol will be placed if the relationship is deemed to be questionable or not well understood. It is also possible that a technical requirement will align with multiple customer requirements, and vice versa.

11. Strength relationship values. Based on the symbols placed at matrix intersections, and the previously identified priority values to the right of the customer demands, multiplication must occur for each matrix intersection point where a symbol was placed under the technical requirements. If no symbol was placed, then no multiplication occurs. All values are summed and placed in the horizontal "strength values" line on the form. These values provide the basis from which to pursue areas for improvement. Generally, the higher the value, the greater the strength relationship and the more likely we should pursue the area (a brief sketch of this arithmetic follows the list).

12. Evaluating demands and requirements. The areas at the far right and lowest horizontal sections on the form provide a place for plotting various views on performance in the demanded and required areas. On the far right, depending on our view, we may wish to place a nominal, or up or down arrow, showing our agreement or disagreement with the customer view, under either bad or good, as our reference on the customer demands. Across the lower horizontal area, we can evaluate the combined values of demands and requirements. "O's" are for us and "X's" are for them, and each matrix point is rated and then plotted out to show overall trends and directions. When completed, based on individual views, we should have a fairly good idea of how we view the relative rankings for each combined demand and requirement value. Where the two views cross over (equal one another) will represent areas of agreement and correlation in thinking, consensus for pursuit.

13. Brainstorming and problem solving. After all inputs are gained from all parties, additional rounds of brainstorming and problem solving are entered into with all parties providing their views relative to the problem or issue
identified. These are listed as changed inputs in the matrix, repeated and continued in an iterative process approach until all inputs from various parties are exhausted and the final points on the matrix are reached as consensus. This may take several iterations and sessions.

14. Blending and explaining. When all parties have provided their individual matrix inputs, these are collected from individuals and blended into one form for all to build on and analyze. This is essentially an educational and communication process to a great extent, since various parties are giving their point of view.

15. Negotiation and stretch. At or near the point of blending, it is important to have representatives from all parties involved, since much negotiation and "stretching" of various points of view will occur, of necessity, and communication and creativity will run quite high. Values and points of view may be changed or discussed, in the current iteration or later based on further study and information.

16. Documentation, reporting. When sufficient information and rankings from various perspectives have been provided, this information can be summarized either numerically or in text forms, or both for solid and robust documentation purposes. Several levels and phases of the same matrix could be produced at this point, through the process.

17. Analysis, ongoing improvements. Based on findings from QFD, various groups could evaluate further on their own, and eventually re-conduct the process. The QFD, like most Kaizen-related documentation processes, is not intended to be done only once, or quickly.
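
As referenced in step 11, the strength value arithmetic can be sketched briefly. The demands, requirements, priorities, and relationship weights below are invented purely for illustration; an organization would substitute the values agreed to in its own matrix:

# Hypothetical QFD matrix: customer demands with 1-5 priorities, and 9/3/1
# relationship weights linking each demand to each technical requirement.
priorities = {"easy to grip": 5, "does not corrode": 4, "low cost": 3}

relationships = {          # demand -> {technical requirement: 9, 3, or 1}
    "easy to grip":     {"handle diameter": 9, "surface finish": 3},
    "does not corrode": {"material spec": 9, "surface finish": 9},
    "low cost":         {"material spec": 3, "handle diameter": 1},
}

# Strength value for each technical requirement column = sum(priority x weight)
strength = {}
for demand, links in relationships.items():
    for requirement, weight in links.items():
        strength[requirement] = strength.get(requirement, 0) + priorities[demand] * weight

# Higher totals suggest the areas most worth pursuing for improvement
for requirement, value in sorted(strength.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{requirement}: {value}")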

One basic QFD approach is shown in matrix form on a nearby page. It should be recognized that support documentation in various forms would generally accompany this system--and that it all should relate to broader components of the system, many of which relate to FMEA, presented earlier.
