
THE FINAL TOLLGATE

Improving Web Feature On-time Delivery

BY DAVID PAISLEY

The Final Tollgate features a Six Sigma project as it would be presented to a panel of company executives at the final project review. The objectives of such a presentation are to communicate significant results of the project and share highlights of how results were achieved. The slides are the project leader's visual presentation and the accompanying text is the verbal presentation. It is assumed that the audience has a basic understanding of Six Sigma.

Do you have an exemplary Six Sigma project to share? Would you like to see it here? Submit it to us at isixsigma.com/submit.

The DevTeam division of a large software firm creates entertainment-related web features. A feature, for example, might be a popup with local movie listings, or a notice for an upcoming concert event. The goal of the DevTeam division is to deliver these features in a timely manner so that the software company can plan their releases accordingly. Web products have a short lifespan, so release management is a key success criterion.

The business problem, however, was that the DevTeam was not delivering features as promised. This led to a process problem of not being able to give realistic estimates of what would be accomplished each work cycle. The uncertainty in delivery times of features impacted the company's ability to release products according to plan and ship completed products in a competitive manner. To increase the number of features completed per cycle, and therefore better meet release dates, a DMAIC project was initiated, focusing on identifying and eliminating the source of the variation.





Define

The DevTeam operated under the Scrum methodology of software development (see Basic Scrum Terminology). The process that the DevTeam deployed to deliver web features began with the product owner supplying a list of features needed. Next, the DevTeam's subject matter experts (SMEs) would estimate the time needed to complete each task. The team would then execute the tasks according to specialty.

The Six Sigma team determined that only 46 percent of the features planned for each release were being delivered during the cycle, or sprint. The project team set a target of achieving 80 percent on-time delivery of website features per sprint, with an average estimation error rate of +/- 1 hour. Estimation error is how early or late a feature was actually completed compared to the predicted time. For example, a feature that was given an estimation of four hours would be completed within three to five hours.

One of the first steps in the Define phase was to gather voice-of-the-customer (VOC) data. A subset of six DevTeam clients (the internal groups in the software company that use their web-feature-development services) were interviewed individually, and the responses were grouped and interpreted using affinity diagrams. The project team then created a web-based survey and distributed it to the population, which consisted of 350 clients. After receiving 197 responses to the survey, the Six Sigma team determined that the survey had a 95 percent confidence interval, with a margin of error of +/- 4.62 percent.

The survey questions asked clients to rate their agreement with a series of statements on a scale of 1 (totally disagree) to 9 (totally agree). Their mean responses were as follows:

I want development costs to be lower. 5.67
I want to know if something isn't going to be delivered in time. 5.73
I want new deliverables more quickly. 6.05
I want better status reporting. 5.52
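The article does not show the calculation behind the +/- 4.62 percent figure, but it is consistent with a standard worst-case (p = 0.5) margin of error at 95 percent confidence, with a finite population correction for 197 responses out of a population of 350. A minimal sketch of that assumed calculation:

```python
import math

# Margin of error for a survey proportion, with finite population correction.
# Assumed inputs: worst-case p = 0.5; the article reports N = 350 clients,
# n = 197 responses, 95 percent confidence (z ~ 1.96).
N, n, p, z = 350, 197, 0.5, 1.96

se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
moe = z * se * fpc

print(f"Margin of error: +/- {moe * 100:.2f}%")   # +/- 4.62%
```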

Basic Scrum Terminology

Here are the operational definitions of some common terms used in the Scrum software development methodology:

Sprint: One round of the development process, usually one month in length

ScrumMaster: The individual in charge of managing each sprint

Product owner: The individual(s) who provide the requirements and to whom the output of each sprint is delivered

Subject matter expert (SME): An individual who is well-versed in a specific work type

Product backlog: List of requirements for the sprint, delivered at the start of each sprint

Voice of the Customer (VOC) Analysis

VOC: "I want deliverables more quickly"
What the customer is actually saying: "I need to ship new products and features to stay competitive in the marketplace"
Critical-to-customer characteristic (CTC): More new products and features delivered each cycle
Key business measure: Y = increased average in percentage of tasks completed; LSL = 80%, USL = 100%

VOC: "I want to know if something isn't going to be delivered on time"
What the customer is actually saying: "I need realistic estimates of what will get done each sprint"
Critical-to-customer characteristic (CTC): The ability to execute on their release management plan
Key business measure: Y = decreased variation in estimation error; LSL = no earlier than 1 hour per task, USL = no later than 1 hour per task

Multiple Regression Analysis

The multiple linear regression below calculates how much a 1-point improvement will impact overall customer satisfaction in each of the areas the DevTeam deems important.

Area                                 Avg. Score (scale of 1-9)   Return on Improvement   Impact on Overall Satisfaction
Overall satisfaction with DevTeam              5.96                       --                          --
Knowing if a feature will be late              5.73                      0.48                        6.44
New features delivered quicker                 6.05                      0.19                        6.15
Lower development costs                        5.67                      0.15                        6.11
Better status reporting                        5.52                      0.10                        6.06

Adj. R square: 0.66
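The raw survey responses behind this regression are not published, so the sketch below only illustrates the type of model assumed here: ordinary least squares regressing each respondent's overall-satisfaction rating on their four area ratings, with the fitted slopes playing the role of the "return on improvement" column. The data is randomly generated placeholder, not the team's data:

```python
import numpy as np

# Illustrative only: X stands in for 197 respondents' ratings (1-9) on the
# four areas, and y for their overall-satisfaction ratings.
rng = np.random.default_rng(0)
X = rng.integers(1, 10, size=(197, 4)).astype(float)
y = rng.integers(1, 10, size=197).astype(float)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Each slope estimates the change in overall satisfaction per 1-point
# improvement in that area, i.e. the "return on improvement".
areas = ["know if late", "delivered quicker", "lower cost", "status reporting"]
for name, b in zip(areas, coef[1:]):
    print(f"{name}: {b:+.2f} per 1-point gain")
```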


The project team also asked clients to rate their satisfaction with the DevTeam on a scale of 1 (very dissatisfied) to 9 (very satisfied). The mean response was 5.96.

Judging by the mean scores, the results would indicate that development costs and status reporting were the areas that needed the most improvement. However, the data was then further analyzed using multiple linear regression, leading to different conclusions.

The regression showed that the responses "I want deliverables more quickly" and "I want to know if something isn't going to be delivered on time" were the areas that would increase overall customer satisfaction. These attributes were clarified into critical-to-customer (CTC) characteristics. Key business measures were then established through a series of meetings with various clients in different disciplines and geographical regions.

Once the CTCs were clearly established, the DevTeam realized that they had been operating for nearly a year with no real key business measures.

For most software development, the amount of data is typically low, meaning teams rarely track time spent on the different parts of their development cycles. This team, however, used Scrum as their software development tool, so they had been collecting data as part of that methodology. This foresight about data would prove helpful during the improvement project.

Measure

To respond to the VOC statement, "I want new deliverables more quickly," the team measured the percentage of tasks that were completed, still in progress or not started for the previous seven development cycles. The team determined that none of the cycles met the goal of 80 percent on-time delivery and that the average delivery rate was 46 percent on time.

Next, the team measured the estimation error for the same seven development cycles. The average estimation error rate (the amount of time that each original estimate per task was incorrect) is expressed as actual time needed minus original time estimated. The resulting data were determined to be not normal. There were both positive and negative values, as there were times when the DevTeam delivered too early as well as too late (positive = hours late, negative = hours early). During the previous seven sprints, the average estimation error was 1.08 hours per task, with a standard deviation of 3.21 hours. There were a number of outliers that were individually reviewed and left in the data.
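For readers who want to reproduce this kind of check, the summary statistics and the Anderson-Darling normality test (the source of the A-squared figure reported in the pre-improvement chart) can be computed as follows. The array is a small stand-in, since the 533 actual per-task errors are not published:

```python
import numpy as np
from scipy import stats

# errors[] would hold the per-task estimation errors in hours
# (actual minus estimated; positive = late, negative = early).
errors = np.array([2.0, -1.0, 0.5, 4.0, 1.0, -3.0, 1.5, 0.0, 6.0, -2.0])

print(f"mean  = {errors.mean():.2f} h")
print(f"stdev = {errors.std(ddof=1):.2f} h")

# Anderson-Darling test of normality.
result = stats.anderson(errors, dist="norm")
print(f"A-squared = {result.statistic:.2f}")
# Compare the statistic against the critical value at each significance level.
for crit, sig in zip(result.critical_values, result.significance_level):
    print(f"  reject normality at {sig}% level: {result.statistic > crit}")
```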

The DevTeam group thought they were on target with their mean of 1.08 hours in estimation error and that the value of further improvements would be small. However, once they were introduced to the concepts of variation and process capability (Cp/Cpk), the team began to understand the magnitude of their customers' frustrations. This realization was a key turning point in the project.
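The capability calculation follows the standard Cp/Cpk definitions against the key business measure's spec limits (no more than 1 hour early or late per task). Using the pre-improvement mean and standard deviation reported above, a quick calculation shows why the "on target" impression was misleading; the Cp/Cpk values themselves are derived here, not reported in the article:

```python
# Process capability from the article's pre-improvement figures.
# Spec limits come from the key business measures: estimation error no
# more than 1 hour early (LSL) or 1 hour late (USL) per task.
LSL, USL = -1.0, 1.0
mean, sd = 1.08, 3.21   # pre-improvement sprint data (N = 533)

cp = (USL - LSL) / (6 * sd)                    # potential capability
cpk = min(USL - mean, mean - LSL) / (3 * sd)   # actual, accounts for centering

print(f"Cp  = {cp:.3f}")    # ~0.10: spread is ~10x wider than the spec window
print(f"Cpk = {cpk:.3f}")   # ~-0.008: the mean sits outside the upper spec limit
```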

Estimation Error: Pre-improvement

[Histogram with summary statistics of the per-task estimation error for the past seven development cycles; the estimation error rate is the actual time needed minus the original time estimated.]

Anderson-Darling normality test: A-squared = 8.75, p-value < 0.005
N = 533; mean = 1.0769; StDev = 3.2065; variance = 10.2817; skewness = -0.46377; kurtosis = 3.17042
Minimum = -16.0000; 1st quartile = 0.0000; median = 1.0000; 3rd quartile = 3.0000; maximum = 12.0000
95% confidence intervals: mean 0.8041 to 1.3498; median 1.0000 to 1.0000; StDev 3.0249 to 3.4115

Features Completion Rate

For the previous seven development cycles, none of the cycles met the goal of 80% completion of the features. Average completion rate: 46% of features on time.


Analyze

The next step was to map out the current process and create a failure mode and effects analysis (FMEA). The team conducted the FMEA to find all the potential points in the current process where a failure could occur. The results showed a large risk priority number (RPN) at the point where the SME estimates how long each task will take.
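The article does not reproduce the FMEA worksheet, but an RPN is conventionally the product of severity, occurrence and detection ratings, each on a 1-10 scale. A sketch with hypothetical ratings (not the DevTeam's actual scores) shows how the estimation step would surface at the top:

```python
# Risk priority number (RPN) = severity x occurrence x detection,
# each rated 1-10. These ratings are hypothetical illustrations.
failure_modes = {
    "SME misestimates task duration": (7, 8, 8),
    "Task reopened after 'done'":     (5, 4, 6),
    "Demo meeting runs unprepared":   (3, 5, 2),
}

# Rank failure modes by descending RPN.
for mode, (sev, occ, det) in sorted(
        failure_modes.items(), key=lambda kv: -kv[1][0] * kv[1][1] * kv[1][2]):
    print(f"RPN {sev * occ * det:4d}  {mode}")
```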

With the DevTeam now more on board regarding the severity of the problems, the next step undertaken was constant-noise-experimental analysis. The team mapped out all of the possible x's and determined which were constants (controllable factors), which were noise (uncontrollable factors) and which were experimental variables (factors that can be tested at different options or settings).

The biggest discovery here was that the DevTeam had no standard operating procedures (SOPs) defined at all. The team was also reluctant to create SOPs, claiming that they would stifle creativity and would be made obsolete too quickly due to technology changes. But after discussing the first constant (there was no clear definition of "done") for nearly an hour, they realized how much clarity an operational definition and SOP would bring, as everyone had a different definition and it was causing roadblocks to completing tasks. The result of these roadblocks was that workers would open up too many tasks, which caused a backlog of tasks that were still in progress.

Under the experimental variables, the fact that an SME did the estimation on tasks was identified as a failure mode in the FMEA and hotly discussed. While all team members saw the SME role as a rank of senior status in the group, some felt the SMEs were too out of touch with doing the actual work and did not adequately consider estimation input from the other members at large. Any error in estimation was generally attributed to the individual worker not doing their job correctly on a task, or to the previous worker not completing their task entirely.

Because variations in the as-is process were contributing to the task completions and estimation errors, the team agreed to develop SOPs as a first step in improvements. A software tool was created to improve the ease and accuracy of updating timesheets, which moved this task to the constant category. It was expected that SOPs would contribute immediately to reducing variation.

The length of sprints also was discussed. Sprints at the time lasted four weeks, and the team felt that was too long to wait for feedback. The team also felt that long sprints created too many tasks to be estimated and completed for each cycle.

The next step was to identify the source(s) of variation in estimation. The first areas looked at were investigating the blindingly obvious. Examples such as determining whether the root cause was 1) a certain time period, 2) the person(s) performing the tasks, or 3) the type of task itself were all areas where the team felt an obvious answer would emerge.


Constants-noise-experimental Analysis

Constants:
• No clear definition of "done"
• Too many open tasks per person
• No standard demo-meeting template
• Not enough test machines

Noise:
• Updating timesheets
• Turnover in roles
• Attitude regarding Scrum
• Programming language requirements
• Company has political silos
• Market changes
• Innovations in technologies

Experimental variables:
• SME doesn't do task
• A few SMEs do all estimation
• Sprints are too long

Input-process-output (IPO) Matrix

Each input is rated against each output, weighted by the output's priority, and summed.

Process Inputs                        Estimation      Features        Stakeholder     Sum
                                      Accuracy        Completed       Demos
                                      (Priority: 9)   (Priority: 9)   (Priority: 1)

CONSTANTS
No clear definition of "done"              9               3               3          111
Too many open tasks per person             3               9               1          109
No standard demo-meeting template          1               1               9           27
Not enough test machines                   1               1               1           19

EXPERIMENTAL VARIABLES
SME doesn't do task                        9               3               1          109
A few SMEs do all estimation               9               9               1          163
Sprints are too long                       3               3               9           63


Multi-vari charts that were completed using statistical analysis software showed that the variation was consistently within-part across time, owner and task type.

Because the source of variation was not consistently located with any time cycle, individual, or task type, the team determined that the root causes were more systemic, inside the process itself. The group decided to move forward with implementing solutions around creating SOPs for the constants and piloting improvement solutions to the experimental variables. All signs to this point were pointing to the SME estimation step in the process as a major source for improvement.
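The multi-vari charts themselves are not reproduced here. As a rough stand-in for that analysis, one can compare, for each candidate factor, the spread between group means against the average spread within groups; the data and the simple between/within comparison below are illustrative assumptions, not the team's actual method:

```python
import pandas as pd

# Stand-in data: per-task estimation errors tagged with candidate factors.
# A real study would use the 533 tasks from the seven sprints.
df = pd.DataFrame({
    "error":     [2.0, -1.0, 0.5, 4.0, 1.0, -3.0, 1.5, 0.0],
    "sprint":    [1, 1, 2, 2, 3, 3, 4, 4],
    "owner":     ["a", "b", "a", "b", "a", "b", "a", "b"],
    "task_type": ["ui", "db", "ui", "db", "ui", "db", "ui", "db"],
})

for factor in ["sprint", "owner", "task_type"]:
    between = df.groupby(factor)["error"].mean().std(ddof=1)  # spread of group means
    within = df.groupby(factor)["error"].std(ddof=1).mean()   # average spread inside groups
    print(f"{factor:10s} between = {between:.2f}  within = {within:.2f}")
# Within-group spread dominating for every factor points at the process
# itself, matching the team's conclusion.
```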

The team used an input-process-output (IPO) matrix to find which problem areas had the highest impact on the outputs. The highest-impact areas were around the steps in the process that involved SMEs. Next was the operational definition of the Scrum term "done," and the number of open tasks at any one time. The second tier of impacts included the length of sprints, the absence of a standardized demo meeting agenda and template, and too few test machines.
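The IPO matrix scores can be reproduced directly from the published ratings and priorities; a short sketch (the variable names are mine):

```python
import numpy as np

# IPO matrix from the article: output priorities weight each input's
# impact ratings; row sums rank the problem areas.
priority = np.array([9, 9, 1])  # Estimation Accuracy, Features Completed, Stakeholder Demos

inputs = ["No clear definition of done", "Too many open tasks per person",
          "No standard demo-meeting template", "Not enough test machines",
          "SME doesn't do task", "A few SMEs do all estimation",
          "Sprints are too long"]
ratings = np.array([
    [9, 3, 3],
    [3, 9, 1],
    [1, 1, 9],
    [1, 1, 1],
    [9, 3, 1],
    [9, 9, 1],
    [3, 3, 9],
])

scores = ratings @ priority   # weighted sum per input
for name, s in sorted(zip(inputs, scores), key=lambda t: -t[1]):
    print(f"{s:4d}  {name}")  # 163, 111, 109, 109, 63, 27, 19
```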

Improve

To begin improvements, the team deemed that a standard meeting template for the demo meetings was a quick win, as it was fast to do, easy to implement, inexpensive and reversible, if necessary. Getting more test machines was denied, as there was no clear link to any major improvement occurring as a result. The team decided to reduce the length of each sprint (cycle time) from one calendar month to two weeks, mainly due to ease of implementation and of scheduling the meetings so all parties could attend. No advanced analysis was done to see if a certain number of days yielded higher results than another.

Defining "done" was solved through a series of meetings to determine an operational definition for each task type, which was decided through a unanimous vote on each. (These definitions later became SOPs for the software firm and have been classified as confidential.) Limiting the number of open tasks was accomplished through the operational definitions. The solution was that a task had to meet the "done" criteria before another could be started.

The last two solutions centered on the SME roles. While SMEs did not work on the tasks they estimated, they did do all of the estimation. Politically, this was a very sensitive issue; the SMEs were senior people and did not want to relinquish their role because they were afraid it might be perceived as a reflection of incompetence. Their initial suggestion was to keep the idea of SME estimation and rotate the existing SMEs by assigning them to estimate other task types. That idea was rejected because no one was an SME for all task types. The next suggestion was to rotate everyone on the DevTeam through the SME role. That, too, was rejected, as not everyone had the necessary skill set or the interest.

Estimation Error: Post-improvement

[Histogram with summary statistics of the per-task estimation error after the improvements.]

Anderson-Darling normality test: A-squared = 4.42, p-value < 0.005
N = 191; mean = 0.13874; StDev = 1.51497; variance = 2.29512; skewness = 0.063266; kurtosis = 0.517264
Minimum = -4.00000; 1st quartile = -1.00000; median = 0.00000; 3rd quartile = 1.00000; maximum = 4.00000
95% confidence intervals: mean -0.07748 to 0.35497; median 0.00000 to 0.00000; StDev 1.37675 to 1.68427


The team then decided to implement TRIZ (Theory of Inventive Problem Solving) on the SME issues as a way of gaining more acceptance for the proposed solution. Through the TRIZ process, the DevTeam came up with a solution that involved incorporating those who know more about specific tasks into the estimation process and increasing the magnitude of feedback given for each estimation. The team also decided to increase team interaction when estimating and create a more transparent estimation process.

A final solution of incorporating everyone into a common estimation meeting each cycle was created; a majority vote would determine the estimation time adopted for each task. The number of times this general solution was brought up during the TRIZ exercise proved valuable in convincing the SMEs and management to try this change. A few team members who had also read the book The Wisdom of Crowds (Anchor, 2005), by James Surowiecki, were able to present additional anecdotes supporting this solution.

Sprint No. 8 was designated as the transitional period for preparing the new improvements, so no work was done during that period of time. Sprint 9 was the first cycle of the new solutions. After completing sprints 9 through 12, the overall percentage of features completed exceeded the 80 percent target. The number of in-progress tasks was reduced significantly, and no tasks were entered in the "not started" category for each sprint. Average estimation error fell from 1.07 hours to 0.13 hours. Standard deviation also dropped from 3.2 hours to 1.5 hours.
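Re-running the earlier capability sketch with the post-improvement mean and standard deviation quantifies the gain; the article does not report Cp/Cpk values, so these are derived figures:

```python
# Capability before and after, against the same +/- 1 hour spec limits.
LSL, USL = -1.0, 1.0

def cp_cpk(mean, sd):
    return (USL - LSL) / (6 * sd), min(USL - mean, mean - LSL) / (3 * sd)

for label, mean, sd in [("pre", 1.08, 3.21), ("post", 0.14, 1.51)]:
    cp, cpk = cp_cpk(mean, sd)
    print(f"{label:4s} Cp = {cp:.2f}  Cpk = {cpk:.2f}")
# Cpk moves from below zero to roughly 0.19: better centered and far less
# variable, though still short of a conventionally capable process.
```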

    Control

Since the solutions have been implemented, the process is now more centered, and more estimations are falling within specification limits. The overall percentage of features completed after each sprint now stands at 96 percent. In the first quarter the solutions were implemented, they translated into savings of roughly $35,000 through an increase in labor productivity. They were then scaled out in the division and, by year-end, had resulted in annualized savings of $574,080. The updated FMEA showed reduced RPNs for the process steps.

The project was a big win for Six Sigma in gaining acceptance in the software company's DevTeam division. Many long-standing practices and beliefs were challenged, and the solutions were heralded as major breakthroughs. However, opportunities for further reducing variation still exist.

The improvements, particularly the SOPs, are being leveraged as best practices across the company, and the DevTeam is interested in further reducing more of the variation in their estimation process in a future project.



David Paisley is a certified Master Black Belt and has practiced Six Sigma for more than 10 years.

Control Plan

Item to Improve                            Monitoring Frequency           Responsibility   Contingency Action Plan
Average estimation error                   Monthly (every other sprint)   ScrumMaster      Incorporate into next sprint planning
Standard deviation of estimation error     Monthly (every other sprint)   ScrumMaster      Incorporate into next sprint planning
Percentage of features completed on time   Monthly (every other sprint)   Product owner    Incorporate into next sprint planning

Post-improvement data:
• Average estimation error fell from 1.07 hours to 0.13 hours; standard deviation dropped from 3.2 hours to 1.5 hours
• Following sprints 9 through 12, the overall number of features completed on time exceeded the 80 percent goal
• Number of in-progress tasks fell significantly; none entered in the "not started" category

Continued improvement:
• The DevTeam was able to reduce headcount by 2 full-time-equivalent positions, saving more than $500,000 per year
• Overall percentage of features completed after each sprint: 96 percent
• Improvements and SOPs now being leveraged as best practices companywide
