Journal of Industrial Technology • Volume 20, Number 2 • February 2004 to April 2004 • www.nait.org
The Official Electronic Publication of the National Association of Industrial Technology • www.nait.org • © 2004

A Statistical Comparison of Three Root Cause Analysis Tools

By Dr. Anthony Mark Doggett

Volume 20, Number 2 - February 2004 to April 2004

Peer-Refereed Article

KEYWORD SEARCH

Leadership
Management
Quality
Research
Sociology


Dr. Mark Doggett is a post-doctoral fellow and instructor at Colorado State University and is an adjunct faculty member at Aims Community College. He is currently working on grants for the National Science Foundation in medical technology and the Department of Education in career and technical education. He also teaches courses in process control, leadership, and project management. His research interests are decision-making and problem-solving strategies, technical management, theory of constraints, and operations system design.

To solve a problem, one must first recognize and understand what is causing the problem. According to Wilson et al. (1993), a root cause is the most basic reason for an undesirable condition or problem. If the real cause of the problem is not identified, then one is merely addressing the symptoms and the problem will continue to exist. For this reason, identifying and eliminating root causes of problems is of utmost importance (Dew, 1991; Sproull, 2001). Root cause analysis is the process of identifying causal factors using a structured approach with techniques designed to provide a focus for identifying and resolving problems. Tools that assist groups and individuals in identifying the root causes of problems are known as root cause analysis tools.

Purpose

Three root cause analysis tools have emerged from the literature as generic standards for identifying root causes. They are the cause-and-effect diagram (CED), the interrelationship diagram (ID), and the current reality tree (CRT). There is no shortage of information available about these tools. The literature provided detailed descriptions, recommendations, and instructions for their construction and use. The literature documented processes and structured variations for each tool. Furthermore, the literature was quite detailed in providing colorful and illustrative examples for each of the tools so practitioners can quickly learn and apply them. In summary, the literature confirmed that these three tools do, in fact, have the capacity to find root causes with varying degrees of accuracy, efficiency, and quality (Anderson & Fagerhaug, 2000; Arcaro, 1997; Brown, 1994; Brassard, 1996; Brassard & Ritter, 1994; Cox & Spencer, 1998; Dettmer, 1997; Lepore & Cohen, 1999; Moran et al., 1990; Robson, 1993; Scheinkopf, 1999; Smith, 2000).

For example, Ishikawa (1982) advocated the CED as a tool for breaking down potential causes into more detailed categories so they can be organized and related into factors that help identify the root cause. In contrast, Mizuno (1979/1988) supported the ID as a tool to quantify the relationships between factors and thereby classify potential causal issues or drivers. Finally, Goldratt (1994) championed the CRT as a tool to find logical interdependent chains of relationships between undesirable effects leading to the identification of the core cause.

A fundamental problem for these tools is that individuals and organizations have little information with which to compare them to each other. The perception is that one tool is as good as another. While the literature was quite complete on each tool as a stand-alone application and its relationship with other problem-solving methods, the literature is deficient on how these three tools directly compare to each other. In fact, there are only two studies that compared them, and the comparisons were qualitative. Fredendall et al. (2002) compared the CED and the CRT using previously published examples of their separate effectiveness, while Pasquarella et al. (1997) compared all three tools using a one-group post-test design with qualitative responses. There is little published research that quantitatively measures and compares the CED, ID, and CRT. This study attempted to address those deficiencies.

The purpose of this study was to compare the perceived differences between the independent variables: the cause-and-effect diagram (CED), the interrelationship diagram (ID), and the current reality tree (CRT) with regard to causality, factor relationships, usability, and participation. The first dependent variable was the perceived ability of the tool to find root causes and the interdependencies between causes. The second dependent variable was the perceived ability of the tool to find relationships between factors or categories of factors. Factors may include causes, effects, or both. The third dependent variable was the overall perception of the tool's usability to produce outputs that were logical, productive, and readable. The fourth dependent variable was the perception of participation resulting in constructive discussion or dialogue. In addition, the secondary interests of the study were to determine the average process times required to construct each tool, the types of questions or statements raised by participants during and after the process, and the nature of the tool outputs.

Delimitations, Assumptions, and Limitations

The delimitations of the study were that the tools were limited to the CED, ID, and CRT, while participants were limited to small groups representing an authentic application of use. The limitations of the study were grounded in the statistical requirements for the General Linear Model. The experimental results reflected the efficacy of the tools in the given context. While the researcher attempted to control obvious extraneous variables during the study, participant and organizational cultural attributes, politics, and social climate remained outside the scope and control of the analysis.

The assumptions of the study were that (a) root cause analysis techniques are useful in finding root causes, (b) the identification of a root cause will lead to a better solution than the identification of a symptom, and (c) the identification of causal interdependencies is important. In addition, expertise, aptitude, and prior knowledge about the tools, or lack thereof, were assumed to be randomly distributed within the participant groups and did not affect the overall perceptions or results. Also, the sample problem scenarios used in the study were considered as having equal complexity.

Methodology

The specific design was a within-subjects single-factor repeated measures with three levels. The independent variables were counterbalanced as shown in Table 1, where T represents the treatment, M represents the measure, and the group observations are indicated by O. The rationale for this design is that it compares the treatments to each other in a relative fashion using the judgments of the participants. In this type of comparative situation, each participant serves as his or her own control, making the use of independent groups unnecessary (Girden, 1992). The advantage of this design is that it required fewer participants while reducing the variability among them, which decreased the error term and the possibility of making a Type I error. The disadvantage was that it reduced the degrees of freedom (Anderson, 2001; Gliner & Morgan, 2000; Gravetter & Wallnau, 1992; Stevens, 1999).
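To make the counterbalancing concrete, the following minimal Python sketch rotates the three tools across the three groups with a cyclic Latin square so that each tool appears in each session position exactly once. The specific rotation is an assumption for illustration; the article's Table 1 defines the design but does not publish the exact ordering used.

    # A cyclic 3 x 3 Latin square: each tool occupies each session slot once.
    # The rotation shown is assumed; the study's exact ordering is not published.
    tools = ["CED", "ID", "CRT"]
    for g in range(3):
        order = [tools[(g + session) % 3] for session in range(3)]
        print(f"Group {g + 1}: {' -> '.join(order)}")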

Measures and Instrument

Three facilitators were trained in the tools, processes, and procedures before the experiment. They were instructed to be available to answer questions from the participants about the tools, goals, purpose, methods, or instructions. The facilitators did not provide information about the problem scenarios. They were also trained in observational techniques and instructed to intervene in the treatment process if a group was having difficulty constructing the tool or managing their process. The activity of the facilitators was intended to help control the potential diffusion of treatment across the groups.

To ensure consistency, each treatment packet was similarly formatted with the steps for tool construction and a graphical example based on published material. Each treatment group also received the same supplies for constructing the tools. The dependent variables were measured using a twelve-question self-report questionnaire with a five-point Likert scale and semantic differential phrases.
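As a sketch of how such an instrument can be scored, the snippet below averages a participant's twelve five-point Likert responses into the four dependent variables. The item-to-variable mapping is hypothetical (the article states only that usability was measured by four questions); it is shown purely to illustrate the scoring step.

    import numpy as np

    # Hypothetical mapping of the 12 items to the four dependent variables;
    # only the four-item usability subscale is documented in the article.
    subscales = {
        "causality": [0, 1, 2],
        "factor_relationships": [3, 4, 5],
        "usability": [6, 7, 8, 9],
        "participation": [10, 11],
    }

    # One participant's fabricated 1-5 Likert responses to the twelve items.
    responses = np.array([4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4])
    for name, items in subscales.items():
        print(f"{name}: {responses[items].mean():.2f}")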

Procedure

Participants and facilitators were randomly assigned to one of three groups. The researcher provided simultaneous instructions about the experiment, problem scenarios, and materials. Five minutes were allowed for the participants to review their respective scenarios and instructions, followed by a ten-minute question period. The participants were then asked to analyze and find the perceived root cause of the problem. The facilitators were available for help throughout the treatment. Upon completion of the treatment, the participants completed the self-report instrument. This process was repeated until all groups applied all three analysis tools to three randomized problems. Each subsequent treatment was administered every seven days.

Reliability and Validity

Content validity of the instrument was deemed adequate by a group of graduate students familiar with the research. Cronbach's alpha was .82 for a pilot study of 42 participants. The dependent variables were also congruent with an exploratory principal component analysis.
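For readers who want to reproduce the reliability check, the sketch below computes Cronbach's alpha directly from its definition. The response matrix is fabricated (random responses will yield an alpha near zero; correlated items push it toward one) and stands in for the pilot data, which are not published.

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of scores
        k = items.shape[1]
        sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
        return (k / (k - 1)) * (1 - sum_item_var / total_var)

    # Fabricated five-point Likert responses: 42 respondents (mirroring the
    # pilot's sample size) by 12 items. Not the study's data.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(42, 12)).astype(float)
    print(f"alpha = {cronbach_alpha(responses):.2f}")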

Table 1. Repeated Measures Design Model


Threats to internal validity were maturation and carryover effects. Other threats included potential participant bias, statistical regression to the mean, outside influence, interruptions, and interaction between participants. An additional threat was attrition of participants due to fatigue, boredom, or time constraints.

The most significant threat to external validity was the representativeness of the selected sample because of ecological components associated with qualitative research. The setting and context were congruent with a typical educational environment. Therefore, the assumption of generalizability would be based on a judgment about the similarity between the empirical design and an ideal situation in the field (Anderson, 2001). From a pure experimental standpoint, the external validity might be considered low, but from a representative design standpoint, the validity was high (Snow, 1974).

Participants

The participants were first- and second-year undergraduate students in three intact classroom sections of a general education course on team problem solving and leadership. Each group consisted of ten to thirteen participants and was composed primarily of white males. Females accounted for 11% of the sample and minorities comprised less than 3%. The initial sample was 107 participants. The actual sample was 72 participants due to attrition and non-responses on the instrument.

Data Analysis

A repeated-measures ANOVA with a two-tailed .05 alpha level was selected for the study. For significant findings, post hoc t tests with Bonferroni adjustments identified specific tool differences based on conditions of sphericity (Field, 2000; Stevens, 1999) and calculated effect sizes (d) (Cohen, 1988). As in other ANOVA designs, homogeneity of variance is important, but in repeated measures, each score is somewhat correlated with the previous measurement. This correlation is known as sphericity. While repeated-measures ANOVA is robust to violations of normality, it is not robust to violations of sphericity. For violations of sphericity, the researcher used the Huynh and Feldt (1976) corrected estimates suggested by Girden (1992). All statistical analyses were performed using the Statistical Package for the Social Sciences™ (SPSS) software.
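The study's analyses were run in SPSS. As a rough open-source analogue, the sketch below partitions the sums of squares for a one-way repeated-measures ANOVA and follows it with Bonferroni-adjusted post hoc paired t tests and paired-samples Cohen's d. The scores are fabricated, and the Huynh-Feldt sphericity correction applied in the study is omitted for brevity.

    import numpy as np
    from scipy import stats

    # Fabricated usability ratings: rows = participants, columns = tools
    # (CED, ID, CRT). Not the study's data.
    scores = np.array([
        [4.2, 3.9, 3.1],
        [3.8, 4.0, 2.9],
        [4.5, 4.1, 3.4],
        [3.9, 3.6, 3.0],
        [4.1, 3.8, 3.3],
    ])
    n, k = scores.shape
    grand_mean = scores.mean()

    # Partition the sums of squares for a one-way repeated-measures ANOVA.
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
    ss_treatment = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_treatment

    df_treat, df_error = k - 1, (k - 1) * (n - 1)
    f_stat = (ss_treatment / df_treat) / (ss_error / df_error)
    print(f"F({df_treat}, {df_error}) = {f_stat:.3f}, "
          f"p = {stats.f.sf(f_stat, df_treat, df_error):.4f}")

    # Post hoc paired t tests with a Bonferroni adjustment and Cohen's d.
    pairs = [(0, 1, "CED vs ID"), (0, 2, "CED vs CRT"), (1, 2, "ID vs CRT")]
    for i, j, label in pairs:
        t, p = stats.ttest_rel(scores[:, i], scores[:, j])
        p_adj = min(p * len(pairs), 1.0)        # Bonferroni correction
        diff = scores[:, i] - scores[:, j]
        d = diff.mean() / diff.std(ddof=1)      # paired-samples Cohen's d
        print(f"{label}: t = {t:.2f}, adjusted p = {p_adj:.4f}, d = {d:.2f}")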

Statistical Findings

Screening indicated the data were normally distributed and met all assumptions for parametric statistical analysis. After all responses, Cronbach's alpha for the instrument was .93. Descriptive statistical data for the dependent variables found that the mean for the CED was either the same or higher on all dependent variables, with standard deviations less than one. For the individual questions on the instrument, the mean for the CED was higher on eight questions, while the mean for the ID was higher on four.

No statistical difference was found among the three tools for causality or participation. Therefore, the null hypothesis (H₀) was retained, and there does not appear to be a difference in the perceptions of the participants concerning the ability of the tools to identify causality or affect participation.

No statistical difference was found among the three tools regarding factor relationships. Therefore, the null hypothesis (H₀) for factor relationships was retained. However, as shown in source Table 2, there was a significant difference in response to an individual question on this variable regarding how well the tools identify categories of causes (F(2, 74) = 7.839, p = .001). Post hoc tests found that the CED was perceived as statistically better at identifying cause categories than either the CRT (t(85) = 4.54, p < .001) or the ID (t(83) = 2.81, p = .023), with medium effect sizes of 0.59 and 0.47, respectively.

Using a corrected estimate, a significant statistical difference was also found for usability (F(1.881, 74) = 9.156, p < .001). Post hoc comparisons showed that both the CED (t(85) = 5.04, p < .001) and ID (t(81) = 2.37, p = .009) were perceived as more usable than the CRT, with medium effect sizes of 0.56 and 0.53, respectively. The overall results for usability are shown in source Table 3. Therefore, the null hypothesis (H₀) was rejected, and there is a significant difference between the CED, ID, and CRT with regard to perceived usability.

Table 2. Repeated Measures Analysis of Variance (ANOVA) for Individual Question Regarding Cause Categories

Table 3. Significant Within-Group Differences for Usability Variable

The usability variable was the perception of the tool's ease or difficulty, productive output, readability, and assessment of integrity. This variable was measured using four questions, on three of which participants perceived a significant difference. The statistical results for the individual questions about usability are shown in Table 4.

Using a corrected estimate, participants responded significantly to the question regarding perceived ease or difficulty of use (F(1.886, 71) = 38.395, p < .001). The post hoc comparison revealed a large difference between the CRT and CED (t(81) = 8.95, p < .001, ES = 1.15) as well as between the CRT and ID (t(77) = 5.99, p < .001, ES = 1.18). Thus, the CRT is perceived as being much more difficult than either the CED or ID.

Participants responded significantly to the question regarding perceived differences between the tools for productive problem-solving activity (F(2, 71) = 3.650, p = .028). Post hoc tests found that the CED was perceived to be statistically better at facilitating productive problem-solving activity than the CRT (t(81) = 3.67, p = .010). The result was a small-to-medium effect size of 0.38.

Participants responded significantly to the question regarding the perceived readability of the tools (F(2, 71) = 3.480, p = .033). Post hoc tests indicated the CED was significantly preferred over the CRT for overall readability (t(81) = 3.41, p = .021), with a small-to-medium effect size of 0.39.

Summary of Statistical Findings

The statistical significance of usability was primarily due to significant differences on three of the four individual questions. The large effect size between the CRT and the other tools in response to ease or difficulty of use was the dominant factor. Thus, the CRT was deemed more difficult than the other tools. The other significant statistical finding was that the CED was perceived as better at identifying cause categories than either the ID or CRT. However, this finding did not change participants' perceptions of factor relationships. Post hoc comparisons for all significant findings are shown in Table 5.

Other Findings

Process Times and Counts

The mean times for the CED, ID, and CRT were 26 minutes, 26 minutes, and 30 minutes, respectively. The CED had the smallest standard deviation at 5.59 and the CRT had the largest standard deviation at 9.47. The standard deviation for the ID was 8.94. The researcher also counted the number of factors and arrows on each tool output. On average, both the CED and CRT used fewer factors and arrows than the ID, but the CRT used a third fewer arrows than either the CED or ID.

Open-Ended Participant Responses

Comments received were generally about the groups, the process, or the tools. Comments about the groups were typically about group size or the amount of participation. Comments about the process were either positive or negative depending on the participant's experience. Comments about the CED and ID tools were either favorable or ambiguous. Most of the comments about the CRT concerned its degree of difficulty.

Researcher Observations

The researcher sorted participant questions and facilitator observations using key words in context to discover emergent themes or categories. This sorting resulted in four categorical types: process, construction, root cause, and group dynamics. Questions raised during the development of the CED were about the cause categories and whether multiple root causes were acceptable. Questions about the ID were primarily about the direction of the arrows. Questions about the CRT were varied, but generally about the tool process or construction. A common question from participants across all three tools was whether their assignment was to find a root cause or to determine a solution to the problem.

Most facilitator observations were about group dynamics or the overall process. Typical facilitator comments concerned the degree of participation or leadership roles. Facilitator observations were relatively consistent regardless of the tool used, except for the CRT. Those observations were different in that the construction observations were much higher and there were no observations concerning difficulty with root causes.

Table 4. Repeated Measures Analysis of Variance (ANOVA) for Individual Questions Regarding Usability


The tool outputs were also qualitatively evaluated for technical accuracy, whether a root cause was identified, and the integrity of the root cause. The technical accuracy of the tool was evaluated based on the (a) directionality of the cause-and-effect relationships, (b) specificity of the factors, (c) format, and (d) use of conventions. Root cause was evaluated by whether the group, through some means, visual or verbal, identified a root cause during the treatment. Groups that were unable to identify a root cause either disagreed about the root cause or indicated that a root cause could not be found. The integrity of the root cause was evaluated based upon specificity and reasonableness. Specificity was defined as something that could be specifically acted upon, whereas reasonableness was defined such that someone not familiar with the problem would state that the root cause seemed rational or within the bounds of common sense.

Summary of Other Findings

A summary of the questions, observations, and tool output evaluations is shown in Table 6. The technical accuracy of both the CED and the ID was high, whereas the technical accuracy of the CRT was mixed. Groups using the CED were seldom able to identify a specific root cause, while the groups using the ID did better. Because groups using the CED could not identify a specific root cause, the integrity of their selections also suffered. Only one CED group found a root cause that was both specific and reasonable. While the ID groups found a root cause most of the time, the integrity of their root causes was mixed. In contrast, CRT groups found a root cause most of the time, with high integrity in over half the cases. In addition, the CRT groups were able to do this using fewer factors and arrows than the other tools.

The number of root cause questions and observations for the CED was twice as high as for the ID and ten times greater than for the CRT. Conversely, the CRT generated more process and construction questions/observations than the ID or CED. In summary, the CED produced the greatest amount of questions and discussion about root causes, whereas the CRT produced the greatest amount of questions and discussion about process and construction.

Discussion

The CED was specifically perceived by participants as better than the CRT at identifying cause categories, facilitating productive problem-solving activity, being easier to use, and being more readable. The ID was easier to use, but was no different from any of the other tools in other aspects, except cause categories. Concurrently, none of the tools was perceived as being significantly better for causality or participation. Rather, the study suggested that the ID is another easy alternative for root cause analysis. The data also support that the ID leads groups to specific root causes in about the same amount of time as the CED.

The results neither supported nor refuted the value of the CRT other than to verify that it is, in fact, a more complex method. Considering the effect sizes, participants defined usability as ease of use. Thus, the CRT was perceived by participants as too complex. However, usability was not related to finding root causes. Authors and scholars can reasonably argue that a basic requirement for a root cause analysis is the identification of root causes. Until this can be statistically verified, one cannot definitively claim the superiority of one tool over another.

Table 5. Post Hoc T-Tests with Bonferroni Adjustment

It is postulated that the difference in average process time was due to the construction complexity of the CRT. Also, the variability of the ID improved over time while the variability of the CRT deteriorated. If any conclusion can be reached from this, it is perhaps that as groups gain experience, the learning curve for the ID increases. Conversely, experience may be detrimental to the CRT because it uses a different logic system that starts with effects and moves to causes, whereas the ID starts with potential causes.

Specific observations indicated that more extensive facilitator intervention was required in the CRT process than with the other tools. Facilitators had to intervene more often because the groups had difficulty building the CRT without making construction errors or process mistakes. Most interestingly, in spite of difficulties, groups using the CRT were able to identify specific and reasonable root causes over half the time as compared to the CED and ID. This was one of the distinguishing findings of the study.

With the ID, groups were able to identify root causes, but only because ID construction requires a count of incoming and outgoing arrows. By simply looking at the factor with the most outgoing arrows, groups declared that a root cause had been found and stopped any further critical analysis. Their certainty undermined critical analysis. Khaimovich (1999) encountered similar results in a behavioral study of problem-solving teams where participants were reluctant to scrutinize the validity of their models and remained content with their original ideas.
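The arrow-counting convention the groups relied on is mechanical enough to state in a few lines of Python. The sketch below tallies each factor's incoming and outgoing arrows and flags the factor with the most outgoing arrows as the candidate driver; the factor names echo the interrelationship diagram example, but the arrow list itself is an assumed illustration, not the figure's exact structure.

    from collections import Counter

    # Each arrow points from a cause to an effect (illustrative edges only).
    arrows = [
        ("HR policies & procedures", "Employee turnover is high"),
        ("HR policies & procedures", "Newest employees go to shipping"),
        ("Employee turnover is high", "Newest employees go to shipping"),
        ("Shipping policies & procedures", "Labels fall off packages"),
        ("Data entry is complex", "Labels fall off packages"),
    ]

    out_deg = Counter(cause for cause, _ in arrows)
    in_deg = Counter(effect for _, effect in arrows)

    for factor in sorted(set(in_deg) | set(out_deg)):
        print(f"{factor}: IN {in_deg[factor]}, OUT {out_deg[factor]}")

    # The ID convention: the factor with the most outgoing arrows is the driver.
    print("Candidate driver:", max(out_deg, key=out_deg.get))

As the study observed, this count makes a root cause easy to declare, which is precisely what discouraged further scrutiny.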

So why were participants unable to recognize the difference between the three tools with respect to the integrity of their root causes? First, because the participants were asked only to identify a root cause, not a root cause that was reasonable and specific enough to be acted upon. Second, most of the groups avoided creating the tension that might have produced better results. To quote Senge (1990), "Great teams are not characterized by an absence of conflict" (p. 249). Although the majority of the CRT groups were uncomfortable during the process, the quality of their outputs was better.

Third, groups were (a) learning the tools for the first time, (b) emotionally involved in the process, and (c) engaging in what Scholtes (1988) called the "rush to accomplishment." Because many participants were learning and doing, they did not have time to assess or reflect on the meaning of their outputs. Their reflection was impaired by the emotionally-laden group processes. In addition, participants were instructed to work on the problem until they had identified a root cause, which, in some groups, was manifested by the need to achieve that goal as quickly as possible.

Implications for Policy and Practice

The type and amount of training needed for each tool varies. The CED and ID can be used with little formal training, but the CRT requires comprehensive instruction because of its logic system and complexity. However, the CED and ID both appear to need some type of supplemental instruction in critical evaluation and decision-making methods. The CRT incorporates the evaluation system, but the CED and ID have no such mechanism and are highly dependent on the thoroughness of the group using them.

Serious practitioners should consider using facilitators during root cause analysis. Observations found that most groups had individuals who dominated or led the process. When leadership took place, individual contributions were carefully considered with a mix of discussion and inquiry. Group leaders did not attempt to convince others of their superior expertise, and conflicts were not considered threatening. In contrast, groups that were dominated did not encourage discussion, and differences of opinion were viewed as disruptive. An experienced facilitator could encourage group members to raise difficult and potentially conflicting viewpoints so the best ideas would emerge.

These tools can be used to their greatest potential with repeated practice. Like other developed skills, the more groups use the tools, the better they become with them. For many participants in this study, this was the first time they had used a root cause analysis tool. Indeed, for some, it was their first experience with any structured group decision-making method. Their experience and participation created insights that could be transferred to other analysis activities.

Table 6. Summary of Questions, Observations, and Tool Outputs

Conclusion

The intent of this research was to be able to identify the best tool for root cause analysis. This study was not able to identify that tool. However, knowledge was gained about the characteristics that make certain tools better under certain conditions. Using these tools, finding root causes seems to be related to how effectively groups can work together to test assumptions. The challenge of this study was how to capture and test what people say about the tools versus how the tools actually perform. Perhaps the answer lies in the interaction between the participants and the tool. Root cause analysis may be valuable because it has the potential of developing new ways of thinking.

References

Anderson, B., & Fagerhaug, T. (2000). Root cause analysis: Simplified tools and techniques. Milwaukee: ASQ Quality Press.

Anderson, N. H. (2001). Empirical direction in design and analysis. Mahwah, NJ: Erlbaum.

Arcaro, J. S. (1997). TQM facilitator's guide. Boca Raton, FL: St. Lucie Press.

Brassard, M., & Ritter, D. (1994). The memory jogger II: A pocket guide of tools for continuous improvement and effective planning. Salem, NH: GOAL/QPC.

Brassard, M. (1996). The memory jogger plus+: Featuring the seven management and planning tools. Salem, NH: GOAL/QPC.

Brown, J. I. (1994). Root-cause analysis in commercial aircraft final assembly. (Master's project, California State University Dominguez Hills). Master's Abstracts International, 33(6): 1901.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cox, J. F., III, & Spencer, M. S. (1998). The constraints management handbook. Boca Raton, FL: St. Lucie Press.

Dettmer, H. W. (1997). Goldratt's theory of constraints. Milwaukee: ASQC Press.

Dew, J. R. (1991). In search of the root cause. Quality Progress, 24(3): 97-107.

Field, A. (2000). Discovering statistics using SPSS for Windows. London: Sage.

Fredendall, L. D., Patterson, J. W., Lenhartz, C., & Mitchell, B. C. (2002). What should be changed? Quality Progress, 35(1): 50-59.

Girden, E. R. (1992). ANOVA repeated measures. Newbury Park, CA: Sage.

[Figure: Example of a Cause-and-Effect Diagram. The diagram analyzes high absenteeism, with cause categories for People, Methods, Machinery, Environment, and Maintenance (e.g., poor training, poor reward system, low morale, too much overtime, preventive maintenance not done on schedule).]

[Figure: Example of an Interrelationship Diagram. The diagram analyzes high employee turnover, relating factors such as shipping policies and procedures, Human Resources policies and procedures, complex data entry, newest employees going to shipping, and labels falling off packages, with IN/OUT arrow counts recorded for each factor.]


Gliner, J. A., & Morgan, G. A. (2000). Research methods in applied settings: An integrated approach to design and analysis. Mahwah, NJ: Erlbaum.

Goldratt, E. M. (1994). It's not luck. Great Barrington, MA: North River Press.

Gravetter, F. J., & Wallnau, L. B. (1992). Statistics for the behavioral sciences: A first course for students of psychology and education (3rd ed.). St. Paul, MN: West Publishing Co.

Huynh, H., & Feldt, L. (1976). Estimation of the Box correction for degrees of freedom from sample data in the randomized block and split-plot designs. Journal of Educational Statistics, 1(1): 69-82.

Ishikawa, K. (1982). Guide to quality control (2nd rev. ed.). Tokyo: Asian Productivity Organization.

Khaimovich, L. (1999). Toward a truly dynamic theory of problem-solving group effectiveness: Cognitive and emotional processes during the root cause analysis performed by a business process re-engineering team. (Dissertation, University of Pittsburgh). Dissertation Abstracts International, 60(04B): 1915.

Lepore, D., & Cohen, O. (1999). Deming and Goldratt: The theory of constraints and the system of profound knowledge. Great Barrington, MA: North River Press.

Mizuno, S. (Ed.). (1988). Management for quality improvement: The seven new QC tools. Cambridge: Productivity Press. (Original work published 1979).

Moran, J. W., Talbot, R. P., & Benson, R. M. (1990). A guide to graphical problem-solving processes. Milwaukee: ASQC Quality Press.

Pasquarella, M., Mitchell, B., & Suerken, K. (1997). A comparison on thinking processes and total quality management tools. In 1997 APICS constraints management proceedings: Make common sense a common practice (pp. 59-65), Denver, CO, April 17-18, 1997. Falls Church, VA: APICS.

Robson, M. (1993). Problem solving in groups (2nd ed.). Brookfield, VT: Gower.

Scheinkopf, L. J. (1999). Thinking for a change: Putting the TOC thinking processes to use. Boca Raton, FL: St. Lucie Press.

Scholtes, P. (1988). The team handbook: How to use teams to improve quality. Madison, WI: Joiner.

Senge, P. (1990). The fifth discipline. New York: Doubleday.

Smith, D. (2000). The measurement nightmare: How the theory of constraints can resolve conflicting strategies, policies, and measures. Boca Raton, FL: St. Lucie Press.

Snow, R. E. (1974). Designs for research on teaching. Review of Educational Research, 44(3): 265-291.

Sproull, B. (2001). Process problem solving: A guide for maintenance and operations teams. Portland: Productivity Press.

Stevens, J. P. (1999). Intermediate statistics: A modern approach. Mahwah, NJ: Erlbaum.

Wilson, P. F., Dell, L. D., & Anderson, G. F. (1993). Root cause analysis: A tool for total quality management. Milwaukee: ASQC Quality Press.

[Figure: Example of a Current Reality Tree. The tree analyzes why operators do not use SOPs, tracing undesirable effects (some operations do not have SOPs, SOPs are not updated regularly, some SOPs are incorrect, the company does not enforce the use of SOPs) back to root causes such as standardization of processes not being a company value and the belief that competency comes through experience.]