
Understanding Software Estimation, Negotiation, and Enterprise Demand Management

An Executive Primer

Douglas T. Putnam and C. Taylor Putnam-Majarian, with a foreword by Lawrence Putnam, Sr.


Understanding Software Estimation, Negotiation, and Demand Management

An Executive Primer


Published by Quantitative Software Management, Inc.
2000 Corporate Ridge, Ste 700, McLean, VA 22102
800.424.6755 | [email protected] | www.qsm.com

Copyright © 2015 by Quantitative Software Management.

All rights reserved. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, microfilm, recording, or likewise. For information regarding permissions, write to the publisher at the above address.

Portions of this publication were previously published in journals and forums, and are reprinted here by special arrangement with the original publishers, as acknowledged in the preface of the respective chapters.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe. Every attempt has been made to annotate them as such.

First Edition


TABLE OF CONTENTS

FOREWORD by Lawrence Putnam, Sr.
CHAPTER 1: The Journey Forward
CHAPTER 2: The IT Innovation Cycle
CHAPTER 3: Most Common Reasons Why Software Projects Fail and What You Can Do About It
CHAPTER 4: Fact-Based Decisions – It Starts with Data and Establishing a Baseline
CHAPTER 5: The Software Production Equation and Rayleigh Staffing Model
CHAPTER 6: Single Release Estimation Using the Software Production Equation and the Rayleigh Staffing Model
CHAPTER 7: Multiple Release and Complex Program Estimation
CHAPTER 8: Capacity Planning and Demand Management
CHAPTER 9: Tracking, Forecasting, Execution, and Post Mortem
CHAPTER 10: Understanding Quality and Reliability
CHAPTER 11: What Does It Take to Successfully Implement Estimation?
CHAPTER 12: Summing It Up
CONTRIBUTING AUTHORS


FOREWORD

Larry Putnam Sr., President Emeritus of QSM, Inc.

______________________________________________

Thirty-five years ago, people who were involved with software development were frustrated because they did not have a good method to determine the cost and schedule for their projects. The typical approach was to take a number for software productivity, such as source lines of code per person-month (SLOC/PM), obtained from a previous project, from a friend, or from a magazine article, and then divide this number, say 200 SLOC/PM, into the best guess for the size of the project, say 50,000 SLOC (COBOL), to obtain 250 person-months of effort. This did not address the schedule, or how long it was going to take, and it gave uniformly poor results time after time. Bosses were not happy. They couldn't plan this way.

I was on active duty in the Army at the time, and in charge of the Army's data processing budget. I had some historical data available to me to analyze. There was a pattern to how software systems behaved in their demand for people at different points in time during the project. A project had a typical staffing buildup, peak, and tail-off as the project progressed. Errors were made that had to be found and fixed, and the project's environment changed, making changes to the software system necessary. It was a dynamic, time-varying process that needed to be treated as such.

Fortunately, over the next year or so, by looking at approximately 50 Army systems we found a time-varying model, a Rayleigh curve, that had the buildup, peak, and tail-off that we saw in the budget data. The parameters of this curve could be expressed in terms the bosses wanted to deal with – effort and schedule. We also found that these parameters could be related to the product size of the system (the functionality). This led to what we called the software equation.
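The staffing profile described here can be sketched in a few lines of code. This is an illustrative sketch only: the shape is the standard Rayleigh curve, but the total-effort and peak-time values below are made-up numbers, not QSM calibration data.

```python
import math

def rayleigh_staffing(t: float, total_effort: float, peak_time: float) -> float:
    """Rayleigh staffing curve: people applied at time t, given the total
    effort under the curve and the time at which staffing peaks."""
    return (total_effort / peak_time**2) * t * math.exp(-t**2 / (2 * peak_time**2))

# Illustrative numbers only: 200 person-months of effort, peaking at month 10.
K, td = 200.0, 10.0
curve = [rayleigh_staffing(month, K, td) for month in range(31)]
peak_month = max(range(31), key=lambda m: curve[m])
# The curve builds up, peaks near month td, then tails off --
# the same shape seen in the budget data.
```

Plotting `curve` over the 30 months reproduces the characteristic buildup, peak, and tail-off.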

We have used this software equation effectively since 1978, and it has been built into a comprehensive system to deal with most of the management issues related to software development in nearly every domain (business, engineering, real-time embedded) and industry sector (finance, telecom, medical, aerospace, defense, government, etc.). It works well, gets good results, and has stood the test of time for three generations. In fact, the fundamental software equation that I used to estimate COBOL programs running on a mainframe in the 1970s is the same one my son, Doug (now Co-CEO of QSM, Inc.), used to estimate object-oriented C++ programs that ran on a PC client-server platform in the 1990s. It is also the same equation that my granddaughter, Taylor (now a consulting analyst at QSM, Inc.), uses to estimate modern software projects that are written in Java using Agile methods and run on a cloud computing platform.


So the book that follows, entitled Software Estimation, Negotiation and Enterprise Demand Management – An Executive Primer, tells that story and shows the many ways these methods can be used by management to plan and control software development, with the opportunity to save a great deal of money in the process.

Lawrence H. Putnam, Sr.

Jan 1, 2015


CHAPTER 1. THE JOURNEY FORWARD

_______________________________________________

The news of another failed software project causes many to roll their eyes. As an industry that has been around since the 1940s, and with all the technological advancements made over the years, one would think that by now we would have figured out how long it takes to build software applications. What many do not understand is that software development is very complicated and not nearly as intuitive as society would like to think. Although this is not widely appreciated, many organizations are feeling pressure to do better.

By picking up this book, you have begun your journey toward better decision making in controlling individual software projects, as well as in long-term process improvements at the portfolio level. There are many reasons why process improvement may interest you. Perhaps you want to learn how to best utilize the resources available to you. Or perhaps you feel that you do not have the right resources to ensure the success of your software development projects. Maybe you want to be in a better position to negotiate schedules and budgets with business stakeholders or vendors. Whatever your reason, I want to commend you for educating yourself on a better way to achieve successful project outcomes.

You may or may not have a background in software development, yet you find yourself currently responsible for that function in your organization. While learning the ins and outs of software development may be useful, it's likely that you don't have enough time to do that in addition to all your other responsibilities. Your main concerns probably include the following:

1. What is the status of the current project?

2. What can be done to make long-term improvements to our organization?

3. What options are available right now in the current development process?

In this book, we will help you answer those questions by providing a basic explanation of the methods involved in the overall software management process, including demand management, software estimation and risk mitigation, as well as overall process improvement strategies.

SOFTWARE DEVELOPMENT IS NOT MANUFACTURING

One common mistake made when initiating a development project is to assume that software development mirrors manufacturing. In reality, the two behave very differently. Manufacturing strives to gain predictability in production by eliminating variance. Resources are often carefully distributed to ensure that production is not only controllable, but also controlled. In manufacturing, the relationship between time and cost is linear: if you want to double the output, you invest in a second manufacturing line and produce the same volume of product in half the time (see Figure 1.1).


Figure 1.1. The cost of a manufacturing project over time.

However, software development is much more abstract. You cannot physically touch a software project like you would a manufactured item, which makes it especially hard to judge its progress toward completion. In manufacturing, the tasks are mostly dependent, meaning one task must be completed before another task may be started. In software development, there can be both independent and dependent tasks, which makes it unlike traditional manufacturing. Additionally, the relationship between time and cost is non-linear in software development (see Figure 1.2).

Figure 1.2. The cost of a software development project over time.

If you attempt to shorten the schedule by adding people, you will find rapidly diminishing returns on the schedule, with corresponding large increases in cost and decreases in quality. Why, you might ask? The reason is quite simple, and relates to the basic laws of acceleration. Acceleration requires energy, and energy efficiency decreases as acceleration increases. A simple analogy that most people can relate to is a gasoline-powered automobile. Suppose you are at a stop light and the light turns green. If you stomp on the gas to get to the next stop light as quickly as possible, the miles-per-gallon meter will immediately drop to a lower efficiency rating. In the same situation, you could be more conservative and accelerate at a slower rate; it takes slightly longer to get to the next light, but you do it in a much more fuel-efficient manner. Software development staffing behaves in a similar manner. The "fuel" is human effort, measured in hours. If you add more people, the human communication complexity increases, which manifests itself in more defects, all of which need to be found and fixed. This creates more rework cycles and is one of the reasons why using larger teams only achieves marginal schedule compression. From a management standpoint, this means that leaders have a direct influence on cost and quality simply by controlling the project's staffing levels. As a leader, this can be very empowering.
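The non-linear tradeoff can be illustrated numerically. The sketch below assumes a Putnam-style power law in which, for fixed size and productivity, effort varies inversely with roughly the fourth power of the schedule; the exponent and the baseline numbers are illustrative assumptions, not prescriptions.

```python
def effort_for_schedule(base_effort: float, base_months: float,
                        target_months: float, exponent: float = 4.0) -> float:
    """Putnam-style tradeoff: with size and productivity held fixed,
    effort scales as (base/target)**exponent as the schedule shrinks."""
    return base_effort * (base_months / target_months) ** exponent

base_pm = 250.0                                     # person-months at a 15-month schedule
compressed = effort_for_schedule(base_pm, 15, 12)   # compress to 12 months
# (15/12)**4 ~= 2.44: cutting the schedule by 20% roughly
# multiplies the effort (and therefore the cost) by 2.4.
```

Run in reverse, the same relation shows why relaxing the schedule slightly can cut cost dramatically.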

SUMMARY

Understanding the non-linear relationship between schedule and cost can be extremely useful in project planning. Additionally, knowledge of the software development lifecycle will facilitate the logistics of how to staff projects. The following chapter describes the high-level phases of software development and how they can be applied to project staffing.


CHAPTER 2. THE IT INNOVATION CYCLE

_______________________________________________

When faced with a problem, regardless of what it is, humans tend to solve it using a common process. First, they determine whether the desired course of action is even feasible. Next, they formulate a plan of action. The third step is to actually carry out the planned activities, and finally, the fourth step is to clean up and fine-tune the finished product.

This problem-solving process is applicable to software development as well. Regardless of the development methodology that you use, all software projects have four high-level phases: Phase 1 – Feasibility; Phase 2 – Requirements & High-Level Design; Phase 3 – Build and Test; and Phase 4 – Post Delivery Stabilize and Deploy (see Figure 2.1). While various methodologies may further subdivide the development activities and increase or decrease the amount of overlap and concurrency, these are the overarching phases used in estimation and project management.

Figure 2.1. High-level software development phases


PHASE 1: THE ‘WHAT?’ PHASE

This is the earliest phase in the software development lifecycle, in which individuals determine what they hope to achieve by developing the software. This phase is extremely high-level, and the main focus is to determine whether building an application is even feasible. Typical activities include: a statement of need or list of the capabilities that the user expects from the system, a feasibility analysis assessing whether the project can and should be done, and a set of high-level plans documenting the management approach. This phase typically results in a go or no-go decision.

PHASE 2: THE ‘HOW?’ PHASE

The second high-level phase focuses primarily on how the work is going to get done. Many equate this with planning, because many decisions about the requirements and design are made here. Typical activities for this phase include: defining the interfaces between the system-level components, specifying software architecture and requirements by describing inputs and outputs, and finalizing the project plans. A design review may be held at the end of this phase, or it may be iterative in an agile-like development process. Phase 2 typically overlaps with the end of Phase 1, as well as the beginning of Phase 3.

PHASE 3: THE ‘BUILD IT’ PHASE

The third phase of the software development lifecycle is the longest, because it involves actually carrying out the plans made in Phase 2. The phase begins with detailed design and coding. The final portion of the phase includes system testing and hardening of the product prior to general availability. Phase 3 ends at the first customer release, which is typically when the system is 95% defect-free and can run for about one day without encountering a critical defect.

PHASE 4: THE POST DELIVERY STABILIZATION PHASE

The last phase of the software development lifecycle begins after the software has been released to customers. Typical activities include correcting latent defects found after general availability. The purpose of this phase is to provide product support to the customer base and stabilize the existing product. This phase may be considered complete once the project reaches a particular reliability milestone or certification level. The duration of this phase varies depending on the needs of the users. Prior to the conclusion of this phase, the base product often transitions to long-term operations and maintenance. The next innovation cycle begins as new capabilities are added to the base product.

An advantage of using a four-phase macro model of the innovation cycle is that it can be easily adapted to any software development methodology. Given the introduction and change rate of new methodologies, the ability to maintain some continuity in estimation techniques provides great benefits as the industry adapts to the most effective development approaches. The following table maps the four phases to some of the more popular methodologies used today.


Methodology | Phase 1: What?      | Phase 2: How?         | Phase 3: Build & Test    | Phase 4: Deploy & Stabilize
Waterfall   | Concept             | Requirements & Design | Construct & Test         | Deploy
RUP         | Initiation          | Elaboration           | Construction             | Transition
Agile       | Initiation          | Iteration Planning    | Iteration Development    | Production
SAP ASAP    | Project Preparation | Business Blueprint    | Realization & Final Prep | Go Live

SUMMARY

In summary, humans follow a distinctive process that they apply to innovation projects. Initially, they identify problems and potential solutions. The next step is to determine the most feasible solution to the problem. Then they design the solution and implement it. Finally, they put the solution to use and fine-tune it. This macro model, with its four high-level phases, is flexible and can be applied to any development methodology.


CHAPTER 3. THE MOST COMMON REASONS WHY SOFTWARE PROJECTS FAIL AND WHAT YOU CAN DO ABOUT IT

_______________________________________________

An article based on this chapter is scheduled to appear in an upcoming edition of the online journal InfoQ.

Knowing the basics of software development can greatly improve project outcomes. However, that alone is not enough to prevent project failures. Projects can be categorized as failures for a number of reasons: cost overruns, late deliveries, poor quality, and developing the wrong product are a few of the reasons why projects can be viewed unfavorably. When faced with a failure, project managers often find themselves wondering where it went wrong. More often than not, software developers bear the brunt of the responsibility for such situations; after all, they're the ones who built the application. However, closer examination of these projects does not always show evidence of incompetence. When asked to assess these projects, QSM compares the completed projects with our industry trends, and often finds that they performed "reasonably". So why are these projects considered failures? Our experience shows that the overwhelming majority of the problems can be tied to flawed estimation or poor business decision making. It is important to make sure that everyone understands a common set of terms, because we often find that individuals and organizations use them interchangeably when each has a unique meaning. Targets, constraints, estimates, commitments, and plans have distinct definitions, reviewed below.

Target – A goal; what we would like to do or achieve.

Constraint – Some internal or external limitation on what we are able to do.

Estimate – A technical calculation of what we might be able to do at some level of scope, cost, schedule, staff, and probability.

Commitment – A business decision made to select one estimate scenario and assign appropriate resources to meet a target within some constraints.

Plan – A set of project tasks and activities that (we calculate) will give us some probability of meeting a commitment at a defined level of scope, budget, schedule, and staff.

In other words, decisions based on unrealistic targets or constraints can send a project on a death march from the start. Here is our list of the top five reasons why IT projects fail:

1. Accepting a forced schedule or mandated completion/milestone dates without substantial data and analysis.

How many times have you found yourself in a situation in which you've been given an unreasonably short deadline? Someone in your organization publicly speculates that the project will be done by a particular date, thus unintentionally committing the team to that deadline. Perhaps your budget cycle dictates that the money allocated to this project must be spent by the end of the year or the next release will not get funded. Maybe the stakeholder wants the project finished by Christmas so that he or she can enjoy the holiday in peace, knowing that the project is complete. Or maybe the stakeholder just really likes round numbers and wants the project to release on the 1st of the month. There are numerous reasons why a development team might be given an arbitrary project completion deadline. The unfortunate reality is that an overzealous schedule often leads to overstaffing the project, the next reason why software projects fail.

2. Adding excessive personnel to achieve unrealistic schedule compression.

How do project managers deal with an overly optimistic schedule? One common response is to staff up the project, often adding far more people than necessary to complete it. Not only does this drastically increase the cost of a project, but it also decreases the quality. Having more people involved in the project increases opportunities for miscommunication and also makes it more challenging to integrate the different sections of code. Additionally, as Frederick Brooks (1975) observed, "adding manpower to a late software project makes it later," which makes sense: the people have to come from somewhere, often from other projects. This puts the other projects further behind schedule and requires that the new staff be brought up to speed by veteran staff, thus decreasing productivity across the board.

3. Failing to account and adjust for requirements growth or change in schedule and budget forecasts.

"Wouldn't it be great if…?" Those five words can be some of the most dreaded, especially when heard halfway through the construction portion of a project. While there is certainly a time and a place for brainstorming, those activities should take place in Phases 1 and 2. In fact, the purpose of these two phases is to determine whether a project is feasible, and what features the application should have. If you think of it this way, Phase 2 helps you figure out what you are building, while in Phase 3 you build whatever you decided upon in Phase 2. While some overlap may exist between the two high-level phases, by the time you're in Phase 3, the scope of required functionality for a given product release should be clear.

Requirements growth is a problem if you're adding functionality without allotting more time and budget for development. In essence, you're asking for more work in a shorter amount of time, a technique that we've already seen does not work. Changes to existing requirements can also be a problem, depending on the nature and timing of the change. Projects using Agile methods can manage changes to requirements details, as long as they occur before construction of a given iteration. However, any changes to requirements or software architecture that cause rework of code already written will almost certainly have an impact on the schedule and budget.

4. Emotional or "intuition-based" stakeholder negotiation that ignores facts and statistics.

At some point or another, we've all become emotionally invested in the outcome of a project we've worked on, software-related or not. For many, this may be the project where your reputation is on the line, the one that is too big to fail, and often we let our emotions get the best of us.

When the pending success or failure of a software project puts an individual's career on the line, it's likely that any related business decisions will be affected. Stress can cloud one's thinking, especially when the stakes are high. A stakeholder may call for a 12-month schedule in order to impress a customer, despite the fact that reports from previous projects of similar sizes all show a 15-month lifecycle. The stakeholder may dismiss advice from team members, stating that he has "a feeling" that the team will be able to pull through. In such situations, following your gut can be extremely detrimental and can lead to project failure.

5. The false but common belief that a proverbial IT silver bullet alone can solve project throughput or process issues.

When all else fails, a common next step is a change of strategy. "We must be doing it all wrong" and "What are our competitors doing?" are common thoughts at this point. This is when musings of an "IT silver bullet" may start circulating around the office. For instance, someone may suggest that the organization adopt the latest bandwagon development approach. While that might be a great direction in which to take the organization, a decision like this should not be taken lightly. Whichever development methodology your organization decides to use, it's only as good as your implementation. Merely switching development methodologies is not enough to transform your development operation. Regardless of what you decide, a successful implementation requires buy-in from all sides, individuals need to be trained, and everyone needs to be held accountable to the same standards. Otherwise, your development strategy will be roughly equivalent to waiting for the stars to align perfectly and hoping for a miracle each time you begin a project. Implemented without thought, this strategy is not only incredibly risky, but also reduces opportunities for team members to give feedback mid-project.
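The communication complexity behind reason 2 is easy to quantify: on a team of n people where everyone coordinates with everyone else, there are n(n-1)/2 pairwise communication channels, so coordination overhead grows roughly with the square of team size. A quick sketch:

```python
def channels(team_size: int) -> int:
    # Pairwise communication paths on a fully connected team: n(n-1)/2.
    return team_size * (team_size - 1) // 2

# Doubling the team from 5 to 10 people more than quadruples
# the coordination paths (10 -> 45), not just doubles them.
small, large = channels(5), channels(10)
```

This quadratic growth in communication paths is one mechanical reason why adding staff buys only marginal schedule compression.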

SUMMARY

In short, software projects fail for a number of reasons. Take a moment to reflect on whether any of the above reasons may have been the cause of a project failure in your organization. Now what can you do about it? As an executive leader you can do a lot, but it will require courage and a backbone to support your development operation. Your teams need to function within a reasonable set of expectations. They will still have to be held accountable, of course, so you will also need historic performance data as evidence of their capability. Doing this will put you in a much more powerful position to negotiate with customers and business stakeholders. A successful outcome is one in which every party can be successful. In all likelihood this will require some compromise, but through this process you will be sowing the seeds of success. We will discuss this further in the coming chapters.


References

Brooks, F. P. (1975). The Mythical Man-Month. Addison-Wesley.


CHAPTER 4. FACT-BASED DECISIONS:
IT STARTS WITH DATA AND ESTABLISHING A BASELINE

_______________________________________________

This article originally appeared in the May 20, 2015 online edition of Project Management Times and is reprinted here with permission.

One of the best ways to ensure realistic expectations for a project is to observe the past. Before getting started with any process improvement endeavors, it is important to understand the big picture and establish an initial baseline with your projects. To do this, you will want to collect data on your completed projects.

While it may sound overwhelming to collect data on completed projects, there are actually only a few pieces of information that you need to collect:

1. Size: A measure of the amount of functionality or value delivered in a system. It can be measured in Source Lines of Code (SLOC), Function Points, User Stories, Business or Technical Requirements, or any other quantifiable artifact of product size.

2. Time: The duration required to complete the system (i.e., the schedule). This is typically measured in months, weeks, or days.

3. Effort: A measure of the resources expended. This is typically measured in Person Hours or Person Months of work and is related to staffing as well as cost. The total effort multiplied by the labor rate is typically used to determine the project cost.

4. Quality: The reliability of the system, typically measured by the number of errors injected into the system, or the Mean Time to Defect (MTTD), the amount of elapsed time that the system can run between discoveries of new errors.

Once the size, schedule, effort, and quality data have been collected, a fifth metric, the Productivity Index (PI), can be calculated. The PI is a measure of the capability and complexity of a system. It is measured in index points ranging from 0.1 to 40 and takes into account a variety of factors, including personnel, skills and methods, process factors, and reuse. See Chapter 6 for more information on PI.
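The bookkeeping behind these metrics is straightforward. The sketch below records the four collected measures and derives cost from effort and labor rate, as described above; the field names and the $100/hour rate are illustrative assumptions, and the PI itself (which requires QSM's calibrated software equation) is not computed here.

```python
from dataclasses import dataclass

@dataclass
class CompletedProject:
    size: float             # e.g. function points, user stories, or SLOC
    duration_months: float  # time (schedule)
    effort_hours: float     # person-hours expended
    mttd_days: float        # mean time to defect (quality)

    def cost(self, labor_rate_per_hour: float) -> float:
        # Project cost = total effort x labor rate.
        return self.effort_hours * labor_rate_per_hour

# Hypothetical completed project:
p = CompletedProject(size=50_000, duration_months=15,
                     effort_hours=40_000, mttd_days=1.0)
total_cost = p.cost(100.0)   # 40,000 hours at $100/hour -> $4,000,000
```

A repository of such records is all that is needed to start building baseline trend lines.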

Together, these five metrics give a complete view of the project, which can be used to assess its performance. In order to establish a true baseline, a broad-reaching sample of historic performances is preferred (see Figure 4.1). However, it is better to start with something rather than nothing at all, so begin by creating your baseline with whatever is practical, and then build on it with data from newly completed projects.


Figure 4.1. Build datasets to include a sample of historical data at a variety of project sizes.

THE INTERNAL BASELINE

Once a repository of an organization's completed projects has been established, custom trend lines for schedule, effort, and productivity can be calculated and used to create the baseline. These trend lines serve as a reference point for comparing projects within your organization. Where your projects fall relative to the trend line will indicate better or worse performance than the average. This will give insight into your organization's current capabilities.

Figure 4.2. Project baseline and outliers.


We can learn a lot about what we do well and what we can improve upon by looking at our projects relative to the baseline. Examining how far a project deviates from the various trends can help isolate best- or worst-in-class performances. Project outliers can also provide great insight. Figure 4.2 displays a company's project portfolio against its baseline average productivity. Two of the projects stand out because they fall outside two standard deviations, one above the average and one below. Further examining the factors that influenced these projects (e.g., new technology, tools and methods, personnel, or project complexity) will help shed some light on why they performed so well or so poorly. Mimicking what went well for the best-in-class projects and avoiding what didn't work for the worst-in-class projects can help improve the performance of future projects.
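Flagging projects that fall outside two standard deviations of the baseline average can be sketched as follows. This is a simplified illustration using a plain mean and standard deviation over PI values; QSM's actual trend lines are fitted against project size, which this sketch omits.

```python
from statistics import mean, stdev

def find_outliers(values, n_sigma=2.0):
    """Split out projects more than n_sigma standard deviations
    below or above the baseline average."""
    avg, sd = mean(values), stdev(values)
    low = [v for v in values if v < avg - n_sigma * sd]
    high = [v for v in values if v > avg + n_sigma * sd]
    return low, high

# Hypothetical portfolio of Productivity Index values:
portfolio = [12.1, 13.0, 12.6, 12.8, 13.3, 12.4, 18.9, 6.2, 12.9, 13.1]
worst, best = find_outliers(portfolio)   # worst- and best-in-class outliers
```

The two flagged projects are the ones worth investigating for lessons to mimic or avoid.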

Understanding the baseline for your current development operation can help set reasonable expectations for future projects by showing what has been accomplished in the past. If the desired project parameters push the estimate into uncharted territory, you can use the historical baseline to negotiate for something more reasonable. This baseline can also be used for contract negotiation, evaluating bids and vendor performance, and navigating customer constraints, thus allowing you to achieve your cost reduction and process improvement goals. Chapters 5 and 6 discuss this in more detail.

INDUSTRY COMPARISON

While the internal baseline gives good insight into the practices of an individual organization, we have the market to think about as well. The internal project standings within an organization may not matter if they are not competitive with the industry. Therefore, it is important to have an external comparison for your project data.

QSM regularly collects and maintains project data that is used to create 17 different industry trend groups, ranging from business IT systems of various industry sectors and development methods to engineering and real-time systems. Since these industry trends are kept current, they can be a good reference point if you are new to estimation and do not have enough data from your own projects.

When using these trend lines to determine performance, it is important to examine the project

holistically. Judging a project’s performance based upon one metric can be very misleading

because of the tradeoffs that occur in software development. For instance, a project may have

been viewed favorably because it was delivered quickly. However, looking beyond the schedule,

a project may not have performed well overall.


Figure 4.3. Project delivered 4.7 months earlier than average.

Figure 4.3 shows a project that was delivered 4.7 months ahead of the industry average, an

accomplishment that is often viewed favorably by management because it provides an

advantage over market competition. While speed of delivery may be desirable in some

circumstances, compressing the schedule unnecessarily can cause tradeoffs, which often result

in higher costs and lower quality.

Figure 4.4 shows how the project performed in other areas including productivity, staffing, effort

expended, and the number of defects present during testing. These graphs tell a very different

story.


Figure 4.4. Holistic view of project shows that the effort, staffing, and defects are higher than

industry average.

While the productivity of this project was comparable with the industry average, the peak staffing

and effort expended were drastically higher than what is typical for other projects of similar

scopes. This directly translates into higher project costs to pay for the additional labor hours.

Additionally, the number of defects present in the system was also considerably higher than what

is typical for the industry (see Figure 4.4). This is likely a result of the higher staffing. When more

people work on a project, there is a greater chance that miscommunications between team

members could lead to errors being injected into the system. Utilizing more people further divides

the code base, which can result in more defects during integration. The following scenario helps

illustrate the challenges of working with large teams.

Suppose you and another team member were asked to write a book together. The two of you

would have to brainstorm a plot and characters, figure out who would write each chapter, and

make sure that all the chapters flow together. Now suppose that 20 more people were added to

your book-writing team, and each of them was assigned to write one chapter. Each of the

chapters might be really good on its own, but putting them together becomes increasingly

challenging as the number of team members grows. When put together, the chapters may seem


fragmented, especially if the authors of the earlier chapters do not communicate well with the

authors of the later chapters.

The editing process introduces challenges of its own. Suppose an author makes a change in

Chapter 2. This change has the potential to affect every chapter thereafter, thus creating a lot

of rework. If the ultimate goal of the project was to have a completed book by the end of the

allotted time, it may have actually been easier and faster for the two-person team to write the

book than the 22-person team.

The challenges of this scenario are similar to what it is like to develop software using large teams.

Large teams often struggle with communication between team members, which is why we

tend to see more defects than in projects that use smaller teams. Overall, this example

reinforces the need to examine a project holistically. While excelling in one area may make the

project appear as though it was a top performer, further examination may indicate otherwise. By

viewing performance across multiple dimensions, one can assess the strategy

employed on a project and determine whether that strategy delivered what was

anticipated.

Figure 4.5. The Five Star Report.

Five Star Report views, like the one shown in Figure 4.5, can be used to rate the overall

performance of a project or group of projects. For each metric, the project is given a rating of 1

to 5 stars, with one star being the worst performance and five stars being the best. The Composite

Project Star Rating column on the right gives an overall grade to the project by factoring in the

performance of all the individual metric scores. This value helps determine what effects adjusting

the staffing or schedule will have on the overall project rating. Here it is easy to see when a project

staffs up in order to decrease their schedule. In such a situation, the project would have a Duration

rating of 4 or 5 stars but a staffing and effort rating around 1 or 2 stars. The opposite can also

occur, thus indicating a project that used fewer than the average number of staff and lengthened

the schedule duration. Ideally, project managers should shoot for an average of 3 or more stars

for their projects.
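As a sketch of how a composite rating might be derived (QSM's actual weighting scheme is not described here, so a plain average of hypothetical metric ratings stands in for it):

```python
# Hypothetical metric ratings (1-5 stars) for a schedule-compressed project;
# a simple average stands in for QSM's composite weighting.
ratings = {"duration": 5, "effort": 2, "staffing": 1, "productivity": 3, "defects": 2}

composite = sum(ratings.values()) / len(ratings)
print(round(composite, 1))  # 2.6: fast delivery bought with heavy staffing
```

A project like this one, with 5 stars for duration but 1-2 stars for staffing, effort, and defects, averages below the 3-star threshold the text recommends.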


SUMMARY

Establishing a baseline will eliminate much of the up-front ambiguity and will provide detailed

recommendations based on quantifiable data that define the range of the organization’s current

capabilities. As various organizations strive for improvement, knowing where they stand relative

to the competition is important. A company with lower-than-average productivity will have different

goals and implement different process improvement measures than one that is average, or better

than average. Knowing where you stand as an organization can help you determine the most

appropriate measures to take, and decide the best method for moving forward. With this data

you will empower the organization to move toward fact-based decision making, thereby

maximizing the possibility of having successful project outcomes.


CHAPTER 5. THE SOFTWARE PRODUCTION

EQUATION AND THE RAYLEIGH

STAFFING MODEL _______________________________________

SLIM® is the acronym for Software Lifecycle Management. It is a methodology as well as an

estimation and measurement product suite. SLIM® is based on two fundamental equations that

trace their origins to the seminal research of our founder Larry Putnam Sr. in the 1970s. We

believe that these equations capture the fundamental time and effort behaviors related to how

people solve complex design problems. These equations are:

1. The Software Production Equation

2. The Rayleigh Equation

The software production equation is composed of four terms, namely: Size, Schedule, Effort and

Productivity. The equation is described below.

Size = Productivity × Time^(4/3) × Effort^(1/3)

Notice the exponents attached to the time and effort variables. The schedule variable, or time,

has been raised to the 4/3 power, while the effort variable has been raised to the 1/3 power.

Without getting too technical, it means that there is a diminishing return of schedule compression

as people are added to a software project. In the most extreme conditions, it means that

spending a great deal of money on additional labor buys little to no schedule compression (see Figure 5.1).

Figure 5.1. Utilizing more people requires exponentially more effort.
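The diminishing return follows directly from the exponents in the production equation. As a rough sketch (the size, productivity, and schedule values below are hypothetical placeholders, and this uses the simplified equation as printed, not QSM's calibrated implementation), solving for effort shows that effort scales with the fourth power of schedule compression:

```python
def effort_for_schedule(size, productivity, months):
    """Solve the simplified production equation
    Size = Productivity x Time^(4/3) x Effort^(1/3) for Effort."""
    return (size / (productivity * months ** (4 / 3))) ** 3

# Hypothetical project: the units and productivity value are placeholders.
size, pi = 60_000, 12.0

e12 = effort_for_schedule(size, pi, 12)  # 12-month plan
e9 = effort_for_schedule(size, pi, 9)    # compressed to 9 months

# Cutting the schedule 25% multiplies effort by (12/9)^4.
print(round(e9 / e12, 2))  # 3.16
```

In other words, under this model a 25% schedule cut more than triples the labor cost, which is the quantitative content behind Figure 5.1.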


Fortunately, this phenomenon also works in reverse. If a project can afford to

lengthen the schedule, the work can be done at significantly lower cost by using fewer people (see

Figure 5.2). In today’s economic environment this is a very attractive option.

Figure 5.2. Lengthening the schedule drastically reduces the cost.

The software production equation is simply a quantification of “Brooks’ Law,” which states that

adding people to a late software project only makes it later. Fundamentally, the software production

equation models the complexity of human communication and how that manifests itself in

defect creation and rework cycles. As more people are added to a project, human

communication complexity grows rapidly: the number of pairwise communication paths grows with the square of the team size. This complexity results in larger

volumes of defects being created, and more rework cycles to fix the problems (see Figure 5.3).

Figure 5.3. Communication complexities increase as people are added to a project.
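One way to see how fast communication overhead grows is Brooks' observation that every pair of team members is a potential communication channel, giving n(n-1)/2 channels for a team of n. A minimal sketch (team sizes are arbitrary examples):

```python
def communication_paths(team_size: int) -> int:
    """Pairwise communication channels in a team of n people: n(n-1)/2."""
    return team_size * (team_size - 1) // 2

for n in (2, 5, 10, 20):
    print(n, communication_paths(n))
# Doubling the team from 10 to 20 more than quadruples the channels (45 -> 190).
```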


Figure 5.4. The Rayleigh curve occurs because of both dependent and independent tasks.

The Rayleigh equation uses the schedule and effort variables from the software production

equation to produce time-based predictions of effort, staffing, and cost. Software projects

typically begin with fewer resources during Phases 1 and 2. The necessary resources quickly

increase during Phase 3 when more people are needed to build the application due to the arrival

of independent work tasks, and then decline in the second half during testing, when more of the

tasks become dependent. After delivery, a few staff remain to assist with fixing latent defects and

maintaining the application (see Figure 5.4).
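The staffing profile just described can be sketched with the textbook Rayleigh form; the total-effort and peak-month values below are hypothetical, and the shape parameter is simply chosen so the curve peaks at the given month (this is the generic Rayleigh shape, not SLIM's calibrated model):

```python
import math

def rayleigh_staffing(t, total_effort, peak_month):
    """Staffing rate at time t for a Rayleigh curve; the parameter a is
    set so the curve peaks at t = peak_month."""
    a = 1.0 / (2.0 * peak_month ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

def cumulative_effort(t, total_effort, peak_month):
    """Effort expended through time t; approaches total_effort."""
    a = 1.0 / (2.0 * peak_month ** 2)
    return total_effort * (1.0 - math.exp(-a * t ** 2))

# Hypothetical release: 400 person-months of effort, staffing peaks at month 8.
for month in range(0, 25, 4):
    print(month, round(rayleigh_staffing(month, 400, 8), 1))
```

Printing the curve shows the slow start, the buildup to a peak, and the long decline into maintenance, while the cumulative curve supplies the total-effort and cash-flow views the text mentions.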

The Rayleigh equation also produces cumulative curves to provide estimates of total effort and

cost, and rate curves to provide monthly or weekly estimates of staffing, effort or cash flow. These

are all important variables in the estimation, planning and negotiation process.

It is important to note that resource allocation in software development differs from traditional

manufacturing. In software development, the ideal work arrival rate follows the Rayleigh curve,

meaning the amount of available work is not always constant. Manufacturing, on the other

hand, tends to have a more stable amount of work, utilizing the same number of people working

from start to finish (see Figure 5.5).


Figure 5.5. Level loaded staffing shape is used in manufacturing.

Traditional project management has attempted to level load projects. However, this strategy is

often unsuccessful because it does not effectively utilize the available resources. It tends to

overstaff projects in the beginning and at the end of the lifecycle, thus wasting resources that

could be available to other projects. By using a Rayleigh curve to model the staffing buildup,

software projects can use their available resources more efficiently, and prevent some staff from

sitting idle at points in the project when there is not enough work to be done.

THE IMPOSSIBLE ZONE

QSM also uses historical data to determine the productivity of an organization and to identify

“The Impossible Zone”: a schedule so compressed that no project has ever been able

to achieve it (see Figure 5.6). It turns out that IT consumers and stakeholders love to demand

project schedules in the impossible zone.

Figure 5.6. ‘The Impossible Zone’ vs. ‘The Impractical Zone.’


QSM has over 10,000 historical projects in our repository, which can be used to support clients that

have no history of their own. Even so, we encourage clients to collect their own data because it is more

powerful, especially when one needs to negotiate out of the impossible zone.

SUMMARY

The SLIM equations are field-proven, with over 35 years of practical application on real-world

systems. The Software Equation allows one to predict both time and effort alternatives, while the

Rayleigh curve uses these outputs to create a staffing plan. One especially nice characteristic

of these algorithms is that they are applicable, even in the earliest phases of the software

development lifecycle, when little data is available but critical business decisions are often

made.


CHAPTER 6. SINGLE-RELEASE ESTIMATION

USING THE SOFTWARE PRODUCTION EQUATION AND

THE RAYLEIGH STAFFING MODEL

_______________________________________

So how would one actually estimate a single release using our two equations? We like

to start with the software production equation, where the required inputs are size and

productivity. We rearrange our equations algebraically such that our inputs are on one side of

the equation and we can solve for effort (measured in labor hours) and time (schedule).

Size / Productivity = (Effort)^(1/3) × (Time)^(4/3)

PRODUCTIVITY ASSUMPTIONS

The first major input is Productivity, which is measured using the Productivity Index (PI). PI is a

calculated macro measure of the total development environment, and embraces many factors

in software development, including: management influence, development methods, tools,

techniques, skill and experience of the development team, and application type complexity.

It is measured on a normalized scale that ranges from 0.1 to 40. In general, low values are

associated with poor environments and tools, and complex systems. High values, on the other

hand, are typically associated with good environments, tools and management, and well-

understood, straightforward projects. Business IT systems tend to have the highest PIs because

their projects generally have well-understood algorithms. Engineering systems have the next

highest PIs, followed by Real Time systems, because their increased algorithm complexity and

higher quality standards tend to lengthen the project durations.

Selecting the right Productivity Index is critical to producing a good estimate. As system size

increases, productivity typically also increases. Therefore, the PI selected will be size-dependent.

The best way to select your PI is from history. By rearranging the software production equation we

can calculate the actual productivity from past projects. This requires that we input actual

performance data on size produced, schedule months required and hours of effort expended.

Productivity = Size / ((Effort)^(1/3) × (Time)^(4/3))

Using a database of your historical projects, a productivity trend will be created from the average

PI at each project size. To use this method, simply select the PI that corresponds with your desired

project size. If there are 3 to 5 similar historical projects to draw on, we will have a good idea of

the developer’s average capability. We can then modify this productivity assumption based on

whether the project is expected to be more or less complicated than the historical baseline.
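This calibration step can be sketched as follows. The project figures below are hypothetical, and the function computes the raw productivity parameter from the rearranged equation as printed, not QSM's normalized 0.1-40 PI scale:

```python
def raw_productivity(size, effort_hours, months):
    """Rearranged production equation:
    Productivity = Size / (Effort^(1/3) x Time^(4/3))."""
    return size / (effort_hours ** (1 / 3) * months ** (4 / 3))

# Hypothetical completed projects: (size, effort in person-hours, months).
history = [(40_000, 9_500, 10), (55_000, 14_000, 12), (32_000, 7_800, 9)]

values = [raw_productivity(*p) for p in history]
baseline = sum(values) / len(values)  # this team's average capability
print(round(baseline, 1))
```

The average of the historical values becomes the productivity assumption for the new estimate, nudged up or down for expected differences in complexity.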


If no historical data is available, there is another option: simply use the PI from the QSM Industry

Trend that most closely corresponds with the type of project being built. The Industry Trends are

created from historical project data in the QSM Database. Project data and their corresponding

trends are regularly collected and updated to reflect what is currently typical in the industry.

SIZING ASSUMPTIONS

The other major assumption in estimation is sizing. Estimating the size of a software application

can be challenging. One must translate abstract requirements into a technical size quantity. It’s

like trying to determine the square footage for a new construction project. An architect might

use the number of floors to calculate the square footage of a house. In software development

we use sizing units such as Business Requirements, Agile Epics, Agile Stories, Function Points, or Use

Cases to calculate the number of programming constructs. As the design matures, component

based units like screens, reports, interfaces, scripts, and modules become more common and

can be used to help refine the estimate (see Figure 6.1).

Figure 6.1. Top-down estimation and the cone of uncertainty.

Top-down estimation is especially effective when used in the earlier phases of the software

development lifecycle. At the beginning of the lifecycle, the actual size can be as much as ±4

times the estimated size because there is much more uncertainty. The sizing units available at

this point in the lifecycle often measure the project’s functions (e.g., Business Requirements,

Function Points, Use Cases, etc.) and can vary somewhat. As the project progresses, the

uncertainty decreases until the project is complete. At this point it is easier to count the number

of programming constructs (e.g., Source Lines of Code, Configurations, etc.).

Often, when an initial estimate is needed, there is little sizing information available. Additionally,

the sizing information available may not be in the unit that is familiar to the developers who are

building the application. In such cases, we use a gearing factor to map the functional


requirements to the desired programming constructs. The gearing factor is simply the average

number of lines of code or configuration actions required to produce an average component.

It behaves similarly to the Rosetta Stone, which rendered the same text in three different scripts

(see Figure 6.2). In essence, gearing factors translate the high-level, abstract units, known at the

time of the estimate, into units more familiar to the developers. The gearing factors have been

calculated and validated from our clients’ historical project data.
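A gearing-factor translation can be sketched as a simple lookup and multiply. The factor values below are illustrative placeholders only; real gearing factors should come from validated historical data, not from this sketch:

```python
# Hypothetical gearing factors: average implementation constructs (e.g., lines
# of code or configuration actions) per functional unit. Placeholder values.
GEARING = {"business_requirement": 500, "use_case": 250, "function_point": 50}

def size_in_constructs(counts: dict) -> int:
    """Translate counts of functional units into one implementation-size figure."""
    return sum(GEARING[unit] * n for unit, n in counts.items())

# 40 use cases plus 300 function points, expressed in constructs.
print(size_in_constructs({"use_case": 40, "function_point": 300}))  # 25000
```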

Figure 6.2. Similar to the Rosetta Stone which translated the same meaning into three different

scripts, gearing factors translate the same functionality into different sizing methodologies.

For further information on sizing please refer to the included infographic.

THE INITIAL ESTIMATE & ALTERNATIVES

Now that you understand the inputs, size and productivity, you are able to begin your estimate.

SLIM takes the assumed inputs and uses them to estimate a range of potential schedule and

effort solutions. There are a finite but large number of schedule and effort combinations for any

set of inputs, so it is important to understand the available options (see Figure 6.3). For example,


one could use fewer people and lengthen the schedule, or shorten the schedule and

use more people. If you move the solution far enough in either direction you will eventually hit a

roadblock. Shortening the schedule too much could put the project into the impossible zone,

overstretch the budget constraint, or require more people than are available. Moving too far in

the other direction could also cause roadblocks. Lengthening the schedule too much could

cause the project to miss its deadline or go below the minimum staffing constraint.

Figure 6.3. The SLIM estimation process.

Once the project and organizational constraints are imposed, they create a practical alternatives

region. Any combination of schedule and staffing within this region would be a reasonable

solution to select. The next step would then be to identify the solution with the highest probability

of success. This optimum solution would then be used as the inputs to the Rayleigh equation to

generate the staffing, effort, cash flow, and cumulative cost plans (see Figure 6.4).

Figure 6.4. Standard estimation solution graphs.


DEALING WITH UNCERTAINTY AND RISK

Of course, our knowledge of size and productivity is imperfect since both are assumptions. They

are our best guess given the information available. How can we realistically manage

uncertainty? A Monte Carlo Simulation is a technique that solves the equation thousands of times

using variations of size and productivity to produce a range of potential outcomes based on the

current understanding of the system.

More importantly, it allows us to build a probability curve. We have an expected value (our

estimate) and an uncertainty distribution around it. Now the business decision makers need to decide

how much risk they are willing to take while executing this project. If the outcome has the

potential to tarnish the reputation of the corporation, it is probably worthwhile to commit to a

99% probability of reaching the expected value. If the outcome is less critical, perhaps 80% would

be more appropriate (see Figure 6.5).

Figure 6.5. Risk probability charts for schedule and effort.
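The Monte Carlo approach can be sketched as follows. The size and productivity distributions, and the fixed schedule, are hypothetical assumptions chosen for illustration; each trial solves the simplified production equation for effort, and the risk percentiles are read off the sorted results:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

def effort_hours(size, productivity, months):
    """Simplified production equation solved for effort."""
    return (size / (productivity * months ** (4 / 3))) ** 3

# Assumed (hypothetical) uncertainty in the two inputs.
SIZE_MEAN, SIZE_SD = 60_000, 9_000
PI_MEAN, PI_SD = 12.0, 1.5
SCHEDULE = 12  # months, held fixed for this sketch

samples = sorted(
    effort_hours(max(random.gauss(SIZE_MEAN, SIZE_SD), 1),
                 max(random.gauss(PI_MEAN, PI_SD), 0.1),
                 SCHEDULE)
    for _ in range(10_000)
)

def percentile(p):
    """Effort value not exceeded with probability p."""
    return samples[int(p * (len(samples) - 1))]

print(f"50%: {percentile(0.50):,.0f}  80%: {percentile(0.80):,.0f}  "
      f"99%: {percentile(0.99):,.0f}")
```

The 50%, 80%, and 99% values map directly onto the planning-versus-commitment strategy described above: plan at the 50% solution, commit at the 99% solution.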

It is important to deal with uncertainty in a realistic and scientific manner. One particularly

effective method for managing risk is to plan the project to the 50% risk solution, but make

commitments to stakeholders at the 99% solution to be very sure of not exceeding your

commitment. This builds in a buffer should the project deviate slightly from the original plan.


LEVERAGING ESTIMATES IN BUSINESS DECISIONS – LET THE FUN BEGIN!

The initial estimate rarely satisfies all of the targets and constraints that are imposed on the project.

When this happens, some negotiations need to take place. In such situations, the developers,

business stakeholders and executive management are faced with a few practical options.

1. Relax the schedule. Working with stakeholders to extend the schedule to include the

additional time necessary is typically a good first step. In many cases, the stakeholders

find that it’s perfectly acceptable to live without the system for a short period of time. In

general, this option is preferable, as it tends to be the least expensive and requires fewer

resources than other options.

2. Reduce capability. If the amount of time needed greatly exceeds the desired deadline,

another strategy is to reduce the capability of the system. An initial release can be

delivered to the stakeholder, giving them the basic functionality at the desired deadline.

A second release can then be integrated later which includes the additional functionality

that the stakeholders initially wanted. While this strategy is more labor intensive and

ultimately requires more resources than building the system in one release (the integration

requires much testing), it also satisfies stakeholders by delivering some working software

while also keeping development costs fairly consistent.

3. Add additional personnel. This strategy involves adding as many people as possible in

order to meet the desired deadline. While commonly utilized in businesses, this strategy is

the least effective of the three, and is almost never a good alternative when the

stakeholder expectations are off by more than three months. Due to the non-linear

relationship between effort and duration, the amount of schedule compression achieved

is minimal compared with all the negative consequences, such as higher costs and

decreased quality.

4. A combination of options 1 – 3.

In such situations, estimates should be used as a negotiation and communication tool. Presenting

an estimate that conflicts with their targets will help stakeholders realize that their expectations are

unreasonable. Sharing alternative solutions and their predicted outcomes will provide options

that facilitate the decision-making process. This strategy can help improve the chances of a

successful project outcome while also making the stakeholders feel more involved in the selection

process.

SUMMARY

Flawed estimation can drive ill-informed business decisions resulting in unrealistic expectations and

disappointing outcomes before the project even begins. It is well known that early in a project

lifecycle, information can be incomplete and ill-defined, which reduces the ability to forecast results.

However, the core value of a good estimate is identifying and contending with that uncertainty

in a legitimate, objective, and useful way, rather than speculating and hoping for a positive

outcome. Uncertainty is not a good reason to forgo data and measurement altogether.


The prevailing, critical flaw for countless software initiatives is that they are inherently destined to

disappoint because of ill-advised or overly optimistic expectations. Exposing such liabilities before

they cause an adverse, or disastrous, outcome will shed some light on current processes and

prevent future failures. Parametric estimation helps reveal probable scenarios and ongoing

considerations to guide and course-correct once the projects are underway. Those two core

principles are at the heart of the SLIM philosophy: to allow projects not only to begin on solid

ground, but to move, in control, more predictably through the full system development lifecycle.

Inherently non-deterministic estimates, after all, are still guesses. You can reduce uncertainty, but

can never completely eliminate it. How well you contend with that reality largely determines

how well your projects meet their objectives.


CHAPTER 7. MULTIPLE RELEASE AND

COMPLEX PROGRAM ESTIMATION

_______________________________________

The previous two chapters outlined the basics for creating a single-release estimate. However,

sometimes due to budget or schedule constraints a project may need to be completed in two or

more releases. Additionally, some projects may have non-development-related

components such as training programs, hardware purchases, or infrastructure costs. This

chapter will discuss the various strategies for creating multiple-release and complex program

estimates.

There are several reasons why a project may be delivered in multiple releases. Perhaps the project

is larger than the allocated schedule and must be broken into smaller releases so that the

stakeholder is able to use a portion of the software by a certain date. It is also possible that the

project has a series of maintenance updates built into the overall schedule.

Whatever the reason, multiple-release estimation can be much more difficult than basic single-

release estimation because of dependencies. Many organizations struggle with estimating one

release, so estimating beyond that can be particularly challenging, especially if very little

information is known.

The strategy here is not terribly different from that of the single-release estimate. If unsure about

the size or productivity inputs for your multi-release projects, examine the historical data and use

the averages. If none are available, use your best judgment. It is always possible to update the

estimates as more information becomes available.

COMPLEX SYSTEM INTEGRATION PROJECT

Once you have the basic estimates for each release, they can be aggregated into one view

using SLIM-MasterPlan. In this format (see Figure 7.1), it is easy to see how each contractor’s

portion and each release fit together within the project’s overall schedule. Additionally, it’s now

possible to assess the number of staff needed on a project at a particular point in time.


Figure 7.1. Multi-Release Complex Project View.

SLIM-MasterPlan allows users to view the complex estimate at both the high-level and detailed

levels. In Figure 7.1 the upper left chart shows the high-level total monthly average staff utilized

over the lifecycle of the project. Meanwhile, the lower left chart shows the monthly average staff

of each component in a greater level of detail.

The chart in the upper right shows some blue lines connecting all of the development and

integration components between the first and second release. These lines represent the

dependencies present in this multi-release project, meaning the status of one release depends on

the outcome of a prior release. The dependencies can be set up so that Release 2 begins 20%

before the end of Release 1. If Release 1 has a delayed start, these dependencies update the

start date for Release 2, thus preventing it from starting before there is a solid enough code

foundation from Release 1.
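The 20% overlap dependency can be sketched as simple date arithmetic. The dates and durations below are hypothetical, and this sketch only illustrates the rule; SLIM-MasterPlan's actual dependency engine is not shown:

```python
from datetime import date, timedelta

def dependent_start(r1_start: date, r1_months: float, overlap: float = 0.20) -> date:
    """Release 2 starts when the remaining fraction of Release 1 equals
    `overlap` (here, 20% before Release 1 ends). Slipping Release 1
    slips Release 2 by the same amount."""
    days = r1_months * 30.4  # average days per month
    return r1_start + timedelta(days=days * (1.0 - overlap))

r1_start = date(2016, 1, 4)  # hypothetical planned start
print(dependent_start(r1_start, 10))                           # on-time Release 1
print(dependent_start(r1_start + timedelta(days=45), 10))      # Release 1 slips 45 days
```

Because the dependency is expressed as a fraction of Release 1's schedule, any slip in Release 1 automatically pushes Release 2's start by the same amount, which is exactly the behavior the text describes.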

ITERATIVE RELEASE EXAMPLE

Another benefit to using SLIM-MasterPlan is that it has the ability to help manage your product line

release schedule. With this tool, it is easy to determine the resources necessary for executing

complex, multi-release projects.


Figure 7.2. Incremental Build Project Estimate.

Figure 7.2 shows a project with several releasable builds. While the staffing initially starts out slowly,

it quickly ramps up as the number of concurrent builds increases. The project reaches its peak

staff of 22 FTEs in July 2010. Start dates and their respective dependencies for each release

can be updated in this view to create alternative options if the necessary resources are not

available. Slipping the schedule creates less overlap between the releases, thus reducing the

overall staffing for the multi-release project.

Multiple product lines within an organization can also be modeled in SLIM, resulting in improved

abilities to manage product demand and organizational resources. Suppose that an organization

is planning the development activities of 5 product lines during the next fiscal year that will be

released at staggered intervals. Similar to the multi-release estimate scenario, these individual

estimates can be combined into one view to allow for resource totals to be appropriately

distributed among all 5 applications. If running too many projects simultaneously requires more

staff than are currently available, this view facilitates deciding which projects need to be

prioritized or delayed. This view can also accommodate other tasks, such as

management, training, etc., which may not impact the development schedule, but may have

an effect on the overall budget costs or personnel estimates. By examining the staffing and

schedule for all of the applications simultaneously, one can easily determine the company’s busy

periods, and when there are opportunities to take on more work. The following chapter examines

this concept in more detail.


An added benefit to examining an entire portfolio rather than individual projects is that if you want

to make adjustments to the estimates and try out some “what if?” scenarios, you can see how

they’ll impact not just an individual project but the overall schedule, staffing, and cost of the entire

portfolio (see Figure 7.3).

Figure 7.3. The Master What If Feature.

For many projects, scope creep is not only probable, it’s typical. With some projects increasing

their size by as much as 15%, creating estimates with multiple sizing scenarios provides beneficial

insight into plausible ranges for staffing, cost, and effort. The sizes of each project can be adjusted

either by the total size or by a percentage increase and will illustrate how these adjustments could

affect the entire product line. Similar “what-if” scenarios can be run by adjusting the productivity

and staffing as well. This type of insight can be particularly useful when it comes to complex

decision-making. By understanding the impact that changes to an individual application can

have on an entire product line, more informed decisions and effective project planning can result.
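A size-growth what-if can be sketched with the simplified production equation (the baseline size, productivity, and schedule values are hypothetical). Holding the schedule fixed, effort scales with the cube of size, so a 15% size increase drives roughly a 52% effort increase:

```python
def effort_hours(size, productivity, months):
    """Simplified production equation solved for effort."""
    return (size / (productivity * months ** (4 / 3))) ** 3

# Hypothetical baseline plan for one application in the portfolio.
size, pi, months = 50_000, 12.0, 11

base = effort_hours(size, pi, months)
grown = effort_hours(size * 1.15, pi, months)  # 15% scope growth, same schedule

# Effort grows by 1.15^3 when the schedule is held fixed.
print(round(grown / base, 2))  # 1.52
```

Running the same adjustment across every application in the portfolio is what the Master What If feature aggregates into revised portfolio-level staffing, cost, and schedule figures.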

SUMMARY

To summarize, multi-release estimation builds on the skills acquired from single-release estimation

and adds a degree of complexity. This method provides a more holistic view of your project

portfolio and allows estimators to gain greater insight into how their individual projects impact the

overall software development effort. The next chapter focuses on how to use this

information to make informed decisions regarding development capacity and demand.


CHAPTER 8. CAPACITY PLANNING AND

DEMAND MANAGEMENT

_______________________________________

This article originally appeared in the March 20, 2015 online edition of

Software Magazine and is reprinted here with permission.

If you think about it, enterprise application capacity planning can be a difficult juggling act. On

one side of the equation you have business demand. The business exists in a fluid, competitive

environment. With stakeholders hoping to gain an edge over their competitors, they are

constantly looking to innovation and technology to help them improve business performance and

increase their profitability. The IT organization stands on the other side of the equation, and is

responsible for satisfying the demands of the stakeholders. Their capacity is limited by their

facilities, the number of developers and their specific skills, and the infrastructure that is in place to

enhance their productivity. This leaves the business and technology executives in the unenviable

position of trying to balance the demand for IT development with their current capacity levels.

Figure 8.1. Conflicting demands on executive management.

Moreover, in a large enterprise there are thousands of projects moving through the development

pipeline, making the balancing process that much more difficult. In order to address these

complexities, many organizations have implemented enterprise Project and Portfolio


Management (PPM) systems. These PPM systems are great for allocating known resources against

approved projects, and aggregating the data into a portfolio to give a current picture of how the

IT capacity satisfies business demands.

In order for the capacity planning process to work, it is important to have a robust demand

estimation capability. This is a discipline that has proven to be difficult for many organizations.

Why is it so difficult? There are many reasons, but two stand out.

The first is that most organizations only estimate an aggregate number of hours by skill category

and have no estimate of the schedule or how the skills are applied over time on the project. The

most common practice today is a detailed, task-based estimation method.

Imagine a spreadsheet with a generic list of hundreds of tasks that typically are performed on any

given project. It is the estimator’s job to review the requirements and estimate the number of

labor hours required for each task. The task estimates are rolled up into a total labor estimate

which is then allocated out to labor categories. The problem is that this method does not provide

details of how the skilled labor categories ramp onto and taper off the project. This is

an important component of the capacity planning process. So using today’s most common

estimation practices, one ends up with a project estimate that looks something like Figure 8.2:

Unfortunately, this method also does not include a schedule. The schedule is usually decided

arbitrarily by the stakeholders and often does not align with the estimated effort. QSM research

shows that unrealistic schedule expectations are the number one reason why projects fail. This is

clearly a big deficiency. The second issue is that task-based estimation is not well matched to the

information that is available during the early stages of a project, when the estimates and business

decisions are typically made. The initial estimates are typically performed when there are only a

few pages of business requirements. This clearly is not sufficient information to do a detailed

task-level estimate. Moreover, doing a task-based estimate requires a lot of time and energy, so it

would not be appropriate if numerous alternative scenarios are requested.

TOP-DOWN ESTIMATION IS VERY EFFECTIVE IN THE EARLY STAGES OF THE PROJECT LIFECYCLE

Top-down parametric estimation is particularly effective in the early stages of the lifecycle. It

utilizes gross sizing and productivity assumptions to produce schedule and effort estimates. If the

estimates do not match expectations, top-down estimation tools can quickly evaluate what can

be accomplished with the available resources or determine how much additional time and effort

will be required to achieve the business objectives (see Figure 8.3).

Figure 8.2. Typical bottom-up estimate without project schedule.
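The arithmetic behind a top-down trade-off can be sketched with Putnam's software equation, on which the SLIM method is based: Size = PP · (Effort/B)^(1/3) · Time^(4/3). The constants below are illustrative placeholders, not calibrated QSM values, but the scaling behavior is the point of the exercise.

```python
# Top-down trade-off sketch based on Putnam's software equation:
#   Size = PP * (Effort / B)^(1/3) * Time^(4/3)
# Solving for effort shows why schedule compression is so expensive:
# effort scales as Time^-4. PP (process productivity) and B are calibration
# constants; the defaults here are illustrative, not QSM-calibrated values.

def effort_for_schedule(size, months, pp=4000.0, b=1.0):
    """Effort implied by a target schedule, in the equation's effort units."""
    return b * (size / (pp * months ** (4.0 / 3.0))) ** 3

# Halving the schedule multiplies the required effort by 2^4 = 16.
relaxed = effort_for_schedule(50_000, 6)
compressed = effort_for_schedule(50_000, 3)
```

This Time^-4 sensitivity is exactly why a quick top-down check can flag a stakeholder's desired schedule as "Risky" before any detailed task list exists.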


Figure 8.3. Desired schedule and cost compared against historical data and benchmark trends.

The beauty of the top-down method is that it uses crude size measures like Business Requirements,

Agile Epics, Use Cases, or Function Points as a direct input. Conveniently, these are the design

artifacts that are most likely to be available during the early stages of estimation.

With this basic information, top-down methods can quickly identify both high-risk and overly conservative

expectations, and if appropriate, suggest more reasonable alternatives. Let’s take a look at a

typical estimation situation early in the lifecycle. Suppose, for example, that a business has

identified some new capabilities that they would like to have implemented. They have a 5-page

document that describes 19 business requirements or capabilities. They would like to have the

capabilities available within 3 months and have a budget of $250,000. Figure 8.4 shows their

desired outcome.


Figure 8.4. Risky desired outcome compared with the recommended, more realistic estimate.

When compared to historical data, the desired outcome is determined to be “Risky” and an

alternative estimate of 5.7 months and $571,000 is recommended. Now a negotiation between

the stakeholders and the IT department can take place to explore more viable options. For

example, 11 business requirements could be completed in 3 months using the previously outlined

resources, or all 19 requirements could be completed in 4 months using a team of 12 people and

a budget of $1.8 million. The main purpose for estimating at this stage of the demand

management process is to smoke out grossly unrealistic expectations and negotiate realistic

options so that all parties can come out winners.

TURNING HOURS INTO SKILLS AND STAFFING REQUIREMENTS

The most “common practice” labor estimates just aren’t good enough to support capacity

planning. What is really needed is an estimate of how the various skills will ramp up and taper off

over the course of the project (see Figure 8.5).


Figure 8.5. Staffing skill categories by month.

With this information, PPM systems can match individuals with their respective skills to the project’s

needs at the time they are required and free them up when their work is done. It is important to

recognize that one size does not fit all projects. For a large enterprise there might be several skill

profiles or templates to support the different project types and methodologies. Typical

methodology and project examples today include Agile, Traditional Waterfall, Package

Implementation, Infrastructure, etc. The key point is to have just enough flexibility to provide

adequate coverage but not so much that the organization becomes a slave to template creation

and maintenance.

Configuration within the SLIM tool is flexible enough to add, change, and delete labor categories

and rates. It is also intuitive enough to allocate the buildup and ramp down of skill categories

over time (see Figure 8.6).


Figure 8.6. Configuring skill categories.

With a method to produce estimated skill allocation over time, it is possible to incorporate realistic

demand into the planning process and match demand with the available capacity. The SLIM

estimation tool has a PPM integration framework which allows SLIM to integrate with any PPM

program (see Figure 8.7).


Figure 8.7. SLIM integration with PPM tools.

The PPM integration allows SLIM to assess and adjust the planned resources and/or start dates of

projects already present in the PPM system, or of those not yet entered. This feature can be

particularly useful for projects that require further evaluation and validation.

MATCH DEMAND TO CAPACITY

Now we have a way to realistically estimate schedule, effort, cost and skill requirements over the

duration of a project, and can feed that information into a PPM system. By implementing this

discipline throughout the enterprise, it is possible to evaluate how demand matches capacity and

determine whether they are out of sync. If we discover that they are out of alignment, we can

evaluate practical options for matching demand and capacity.

Let’s use an example to illustrate the process:

Our example enterprise has 3 different data centers: one is located in the U.S., while the other

two are in India and Ireland. The total capacity for these three centers is limited to 250 IT

developers, and there are currently 35 projects that have been approved for production (see

Figure 8.8).


Figure 8.8. Peak staffing over time for individual projects and aggregated by development center.

Each of these projects is in a different stage of production, with some in the early planning stages,

others beginning coding, and some that are approaching delivery. At any point in time, it is fairly

typical to have a portfolio with projects at all of these production stages. Additionally, it is

common for a portfolio to have a wide range of project scopes and team sizes. Each project

can be estimated individually, and they can be rolled up to show the aggregate data at either

the data center level or the enterprise level to help determine the capacity and demand levels.

For simplicity’s sake, we’ll examine the data at the enterprise level in this example. When the data is

aggregated at this level, we notice that for a 9-month period, between August 2014 and May

2015, the demand for developers exceeds capacity by 65 Full Time Equivalents (FTEs); see Figure

8.9.
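The roll-up behind this kind of comparison is simple arithmetic. The sketch below aggregates hypothetical per-project monthly staffing figures (chosen to reproduce a 65-FTE overage) and flags the months that exceed capacity.

```python
# Roll per-project monthly FTE demand up to the enterprise level and flag
# the months where aggregate demand exceeds capacity. The staffing figures
# below are hypothetical.
from collections import defaultdict

def aggregate_demand(project_staffing):
    """project_staffing: list of dicts, each mapping month -> FTEs."""
    totals = defaultdict(float)
    for staffing in project_staffing:
        for month, ftes in staffing.items():
            totals[month] += ftes
    return dict(totals)

def months_over_capacity(totals, capacity):
    """Return the FTE shortfall for every month that exceeds capacity."""
    return {m: demand - capacity for m, demand in totals.items() if demand > capacity}

projects = [{"2014-08": 180, "2014-09": 110},
            {"2014-08": 135, "2014-09": 120}]
totals = aggregate_demand(projects)            # {"2014-08": 315, "2014-09": 230}
shortfall = months_over_capacity(totals, 250)  # {"2014-08": 65}
```

The same two functions work unchanged at the data-center or enterprise level; only the list of projects fed in changes.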


Figure 8.9. Demand exceeded capacity by 65 people for 9 months.

When the demand for developers exceeds capacity, there are usually three options:

1. Eliminate projects (usually not a practical option since there is a definitive business need),

2. Push back the start dates on projects that have not yet begun, or

3. Make selective staffing reductions on in-progress projects or on those yet to begin, and relax

their respective delivery dates.
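Option 2 can be sketched as a simple greedy loop: slip movable projects month by month until no month exceeds the staffing ceiling. The heuristic and the staffing profiles below are illustrative, not the algorithm used by SLIM or any particular PPM tool.

```python
# Greedy sketch of option 2: slip movable (not-yet-started) projects one
# month at a time until every month fits under the staffing ceiling. It
# assumes each over-capacity month has at least one movable project staffed
# in it. Staffing profiles are hypothetical monthly FTE lists.
from collections import defaultdict

def monthly_totals(projects):
    totals = defaultdict(float)
    for start, profile in projects:
        for offset, ftes in enumerate(profile):
            totals[start + offset] += ftes
    return totals

def delay_to_fit(projects, movable, capacity):
    """projects: list of [start_month, profile]; movable: indices of projects
    whose start dates may slip. Returns months of delay per movable project."""
    delays = {i: 0 for i in movable}
    while True:
        totals = monthly_totals(projects)
        peak = max(totals, key=totals.get)
        if totals[peak] <= capacity:
            return delays
        # slip the latest-starting movable project staffed in the peak month
        active = [i for i in movable
                  if projects[i][0] <= peak < projects[i][0] + len(projects[i][1])]
        worst = max(active, key=lambda i: projects[i][0])
        projects[worst][0] += 1
        delays[worst] += 1

plan = [[0, [200, 220, 200]], [1, [60, 60]]]  # project 1 may slip
print(delay_to_fit(plan, [1], 250))           # {1: 2}
```

A real what-if engine would weigh business priority and delivery commitments as well, but the mechanics of trading start-date delay for peak staffing are the same.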


Figure 8.10. Project start date delayed.

In this example case, we chose option 2 and pushed back the start dates. In a relatively short

period of time we were able to optimize the staffing to meet the capacity limit of 250 people. The

resulting impact on the eight affected projects was a delay of anywhere from 2 to 8

months (see Figure 8.10).

If we want to assess the skill requirements and match them to our pool of available resources, we

need to aggregate demand by skill category and compare it with our skill capacity.


Figure 8.11. Staffing resource profile.

Figure 8.11 gives a graphic depiction of the necessary resources at the enterprise level. It shows

the aggregated demand for each of the skill sets at various times within a 28-month period. This

information is useful to have because it lays out what you will need to successfully complete the

approved projects, so that you can compare it with the resources currently available. If there are

imbalances between supply and demand, you now know what areas need to be supported and

can take actions such as hiring or training additional employees to help fill the void. By having this

kind of information available and understanding its significance, great progress in capacity

planning can be realized by the IT organization.

SUMMARY

Capacity planning and demand management go hand in hand. To manage them effectively,

one needs to be able to:

1. Realistically forecast demand early in the lifecycle,

2. Negotiate functionality, schedule and effort with business stakeholders when their

expectations are unrealistic,

3. Have the ability to forecast effort by skill category, by month, or by week, and feed this

information to PPM systems, and

4. Be in a position to perform quick what-if analyses on the portfolio when demand exceeds

capacity.

If you take action to address these issues, then you can have a best-in-class IT capacity planning

solution for the enterprise.


CHAPTER 9. TRACKING, FORECASTING,

EXECUTION, AND POST MORTEM

_______________________________________

The previous chapters discussed the best practices involved in creating estimates. While creating

estimates is an important and necessary skill for improving productivity, that is only half the battle.

The second half is tracking actual performance against the estimate and reforecasting when

significant changes occur.

TRACKING

Suppose that after much consideration, an organization puts forth an estimate with the intention

of carrying it out as a project plan. At this point, many estimators would believe that they are

done. However, in order to maximize the benefits of the estimate, it will be necessary to track the

project’s progress through completion. While this may seem like a tedious process, project

tracking allows organizations to gain the greatest insight into their software development

lifecycles.

To do this, estimators will want to collect data on the current actuals for the following metrics at

regular intervals:

• Cumulative Size Produced – this metric is customizable and can include, but is not limited

to, Source Lines of Code, Agile Stories, Screens, etc.

• Monthly Effort – the Person Months or Person Hours expended thus far

• Staff – the number of Full Time Equivalents currently billing time to the project

• Defects – how many errors were discovered

Depending on the length of the project’s schedule, it’s recommended to collect data on the

actuals at the end of each month or, for shorter projects, at the end of each week. This data can

then be compared against the original estimate to track the project’s in-flight progress, as well as

any deviations from the estimate. Utilizing these practices can increase accountability within the

organization because current statuses are regularly maintained.
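The comparison itself is simple arithmetic, as the following sketch shows. Metric names and numbers here are illustrative; any set of consistently collected actuals would work.

```python
# Compare one period's cumulative actuals against the planned values.
# A negative size variance alongside on-plan staffing and effort suggests
# construction is falling behind even though consumption is on track.
# All metric names and numbers are illustrative.

def variance_report(plan, actual):
    """Return actual-minus-plan for every metric present in both dicts."""
    return {metric: actual[metric] - plan[metric] for metric in plan if metric in actual}

plan   = {"size_sloc": 12_000, "effort_ph": 3_200, "staff_fte": 8, "defects": 40}
actual = {"size_sloc": 10_500, "effort_ph": 3_300, "staff_fte": 8, "defects": 55}
report = variance_report(plan, actual)  # size is 1,500 SLOC behind plan
```

A report like this, produced at each monthly or weekly checkpoint, is what makes the deviation patterns discussed next visible early.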

Additionally, tracking these various metrics over time can help identify areas for improvement

within an organization. For example, if the project’s actual data match the projected values from

the original estimate in staffing and cumulative effort but not cumulative size, this observation

could indicate a couple of potential scenarios (see Figure 9.1). One possible explanation might

be that the organization has staffed the project with the correct number of people at a given

point in time, but has not appropriately accounted for the skills necessary to complete the project.

An excessive number of management personnel may have been staffed for the project, resulting

in a decreased staffing allotment for the developers. With fewer developers staffed, or with developers

lacking the skills needed for the project, producing code at the estimated rate would be

impossible and deviations from the original estimate would be expected.


Another possible explanation for the decreased code production is that the actual productivity

of the organization is in fact lower than originally estimated. There are a number of explanations

for why this occurs. Introducing new tools, development methodologies, or team members

impacts the development environment and can require additional time for developers to perform

at the previous productivity level. The next section describes some potential solutions if this

happens to your project.

Figure 9.1. Tracking actuals against the planned values: Consumption vs. construction.

FORECASTING AND EXECUTION

Life is rarely predictable, and that holds true in software estimation as well. Even the best estimates

cannot anticipate some of the curveballs that life throws at them. For instance, if several team

members are pulled off the project and temporarily reassigned to another, it is unlikely that the

project will progress as originally planned. In such situations, the best suggestion would be to

reforecast the estimate.

Similar to a GPS, SLIM is able to recalculate a new trajectory, or roadmap, if the project gets off

track. It factors in what has already been done and re-estimates productivity based on

the actual values entered. If a project drifts so far off course that the original plan is no longer

feasible, a new plan can be implemented and briefed to stakeholders so that everyone has similar

expectations (see Figure 9.2).

In this chart we see the project’s plan in blue compared with the actual data collected in monthly

intervals in red. In January, the amount of code produced was slightly below the quota outlined

in the plan, yet the cumulative effort expended was above the allotted amount. Additionally,


they were finding more defects in the system than initially predicted. Since the project has begun

to get off track from the original estimate, the wise choice would be to reforecast the project plan.

The new forecast is displayed in white and shows that based on the current trajectory, the project

should end about 2.5 months later and require 6,600 more person hours of effort than the original

plan. Knowing this information early allows time for orderly changes while the project is still underway.
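The GPS analogy can be made concrete with a deliberately naive sketch: extrapolate the remaining work from the recent actual production rate. SLIM refits a full parametric (Rayleigh-based) profile rather than a straight line; the version below only illustrates the reforecasting idea, with made-up numbers.

```python
# Naive in-flight reforecast: estimate the months remaining from the
# average production rate over the last three months of actuals. SLIM
# refits a parametric (Rayleigh-based) profile instead; this linear
# version just illustrates the idea. Numbers are illustrative.
import math

def months_remaining(total_size, cumulative_by_month):
    """cumulative_by_month: cumulative size at the end of each elapsed month
    (at least two data points)."""
    recent = cumulative_by_month[-3:]
    rate = (recent[-1] - recent[0]) / (len(recent) - 1)  # size units per month
    remaining = total_size - cumulative_by_month[-1]
    return math.ceil(remaining / rate)

# 60,000 units of planned work, four months of actuals, recent rate of
# 9,000 units per month:
forecast = months_remaining(60_000, [5_000, 12_000, 21_000, 30_000])  # -> 4
```

Rerunning a calculation like this at every checkpoint is what turns a static estimate into a living forecast that stakeholders can be briefed on.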

Figure 9.2. Forecasting chart: current plan, actuals, and current forecast for schedule, effort, defects, and size.

One of the greatest benefits of this technique is that it provides transparency into the

development process by showing the project’s progress at regular intervals. Development teams

know how much functionality will have to be delivered at each interval, and can plan

accordingly. If a project begins to drift off course, it’s much easier to determine this from the start

and take measures to mitigate risk early. Additionally, as new data is added, SLIM reforecasts the

plan like a GPS would recalculate the roadmap. If the new data shows improvements, it will be

reflected in the forecast going forward, thus creating a self-refining learning system. Having this

kind of insight and level of detail up front can dramatically improve accountability across the

organization.

POST MORTEM

Once a project has reached completion, it’s very easy to move on to the next without a second

thought. Regardless of whether the project went really well and the momentum is continuing or

if the project was so utterly painful that the wounds are still fresh, a post mortem assessment is



always a valuable exercise. Post mortems provide opportunities for teams to discuss what went

well and what areas could be improved. Additionally, post mortems allow teams to reconvene

at the end of the project to collect the final data for the project repository. This newly completed

project can be entered into SLIM along with the rest of the completed project data. A new set of

custom trends can then be created using the most recently added historical data, which can be

used for more accurate estimates in the future. You will find that over time, this process of

collecting historical data, updating trends, and using those trends to estimate future projects

allows the estimation process to come full circle, refining itself each time.

SUMMARY

While creating estimates is a great way to start a project off on the right foot, using parametric

methods to track and monitor your project’s progress provides even greater insight into the

development of your project. Knowing the status of a project in flight and having the ability to

share that with developers and stakeholders, when most can merely guess, is extremely beneficial

and puts project managers in a much more powerful position. The project can be tracked all the

way through completion, at which point, a post mortem assessment can be done to collect data

and further refine the estimation process. Using this model creates a refinement cycle whereby

collecting accurate data leads to creating more accurate estimates.


CHAPTER 10. UNDERSTANDING QUALITY

AND RELIABILITY

_______________________________________

One of the most overlooked but important areas of software estimation, measurement, and

assessment is quality. It often is not considered or even discussed during the early planning stages

of a development project, but it’s almost always the ultimate criterion for when a product is ready

to ship or deploy. Therefore, it needs to be part of the expectation-setting conversation from the

outset of the project.

So how can we talk about quality? It can be measured in a number of ways, but two in particular

give excellent insights into the stability of the product. They are:

1. The number of errors discovered in the system during testing, and

2. The Mean Time to Defect (MTTD), or the amount of time between errors discovered prior

to and after delivery.

The reason we like these two measures is that they both relate to product stability, a critical issue

as the release date approaches. They are objective, measurable, and can usually be derived

from most organizations’ current quality monitoring systems without too much trouble.

Generally, having fewer errors and a higher MTTD is associated with better overall quality. While

having the highest quality possible may not always be a primary concern for stakeholders, the

reliability of the project must meet some minimum standards before it can be shipped to the

customer. This means that at delivery, the project is about 95% defect free, or the application can

run for about 1 day without crashing. Another good rule of thumb is that the software typically

will be of minimum acceptable reliability when testers are finding fewer than 20 errors per month.

In other words, the product will run for about an 8-hour work day between failures. Of course, this rule of thumb is mostly

applicable for commercial IT applications. Industrial and military embedded applications require

a higher degree of reliability.
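Both stability measures fall straight out of an ordinary defect log, as the sketch below shows. The 176-hour "month" (22 working days of 8 hours) is an assumption for illustration, and the thresholds are the IT rules of thumb just described, not universal bars.

```python
# Compute the two stability measures from a defect discovery log: defects
# found in the most recent month of testing, and MTTD (mean time between
# discoveries). A "month" is taken here as 176 working hours (22 days x 8
# hours) -- an assumption for illustration.

def defects_last_month(discovery_hours, month_hours=176):
    """discovery_hours: sorted working-hour timestamps of discovered defects."""
    cutoff = discovery_hours[-1] - month_hours
    return sum(1 for t in discovery_hours if t > cutoff)

def mttd_hours(discovery_hours):
    """Mean time to defect: average gap between consecutive discoveries."""
    gaps = [b - a for a, b in zip(discovery_hours, discovery_hours[1:])]
    return sum(gaps) / len(gaps)

def meets_it_rule_of_thumb(discovery_hours):
    # Fewer than 20 defects per month and an MTTD of at least one 8-hour day.
    return defects_last_month(discovery_hours) < 20 and mttd_hours(discovery_hours) >= 8
```

Because both measures come from data most defect-tracking systems already hold, they can be reported at every checkpoint alongside size, effort, and staffing.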

THE RAYLEIGH DEFECT MODEL

The QSM defect estimation approach uses the Rayleigh function to forecast the discovery rate of

defects as a function of time throughout the software development process. The Rayleigh

function was discovered by the English physicist Lord Rayleigh in his work related to scattering of

acoustic and electromagnetic waves. We have found in our research that a Rayleigh reliability

model closely approximates the actual profile of defect data collected from software

development efforts.

In the QSM reliability modeling approach, the Rayleigh equation is used to predict the number of

defects discovered over time. The QSM application of the Rayleigh model has been formulated

to cover the duration of time from the High Level Design Review (HLDR - High Level Design is


Complete) until 99.9% of all the defects have been discovered. A sample Rayleigh defect

estimate is shown in Figure 10.1.

Figure 10.1. Sample Rayleigh defect estimate.

Note that the peak of the curve occurs early in the build and test phase. This means that a large

number of the total defects are created and discovered early in the project. These defects are

mostly requirements, design, and unit coding defects. If they are not found, they will surface later

in the project, resulting in extensive rework.
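The curve itself can be written down directly. In the usual form, D (total expected defects) and t_d (time of peak discovery) would be derived from size, PI, and staffing; the parameter values below are illustrative only.

```python
# The Rayleigh defect model: the discovery rate rises to a peak at t_peak
# and then tails off. total_defects (D) and t_peak (t_d) would be derived
# from size, PI, and staffing; the values used here are illustrative.
import math

def rayleigh_rate(t, total_defects, t_peak):
    """Defects discovered per unit time at time t."""
    return (total_defects * t / t_peak ** 2) * math.exp(-t ** 2 / (2 * t_peak ** 2))

def rayleigh_cumulative(t, total_defects, t_peak):
    """Defects expected to have been discovered by time t."""
    return total_defects * (1 - math.exp(-t ** 2 / (2 * t_peak ** 2)))

# With D = 500 defects peaking in month 5, about 39% are found by the peak
# and discovery is essentially complete (99.9%) by roughly month 19.
```

Note that the rate function is the derivative of the cumulative function, so the two stay consistent however the parameters are calibrated.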

Milestone 10 is declared to be the point in time when 99.9% of the defects have been discovered.

Less than 5% of the organizations that QSM has worked with record defect data during the

detailed design phase. Industry researchers claim that it can cost 3 to 10 times more to fix a defect

found during system test than during design or coding, so one could make a compelling

case to start measuring and taking action earlier in the process.

Simple extensions of the model provide other useful information. For example, defect priority

classes can be specified as percentages of the total curve. This allows the model to predict

defects by severity categories over time. This is illustrated in Figure 10.2.


Figure 10.2. Rayleigh defect model broken out by defect severity class.

A defect estimate could be thought of as a plan. For a particular set of conditions (size,

complexity, efficiency, staffing, etc.) a planned curve could be generated. A manager could

use this as a rough gauge of performance to see if his project is performing consistently with the

plan and by association with comparable historic projects. If there are significant deviations, this

would probably cause the manager to investigate and, if justified, take remedial action.

Figure 10.3 shows how one can use the defect discovery estimate to track and compare actuals.

Obviously, the actual measurements are a little noisier than the estimate but they track the

general pattern nicely and give confidence that the error discovery rate will be below 20 per

month, our minimum acceptable delivery criterion, at the scheduled end of the project.



Figure 10.3. Defect discovery rate plan with actuals plotted.

DEFECT MODEL DRIVERS

There are specific inputs that determine the duration and magnitude of the Rayleigh defect

model. The inputs enable the model to provide an accurate forecast for a given situation. There

are three macro parameters that the QSM model uses:

• Size (new and modified)

• Productivity Index

• Peak Staffing

These driving factors shape the defect behavior patterns that we see in software

projects.

Size

Historically, we have seen that as project size increases, so does the number of defects present in

the system (see Figure 10.4). Stated simply, building a larger project provides more opportunities

for developers to inject defects into the system, and also requires more testing to be completed.

This rate of defect increase is close to linear. Similarly, as size increases, the MTTD decreases. This

is due to the increased number of errors in the system. Larger projects tend to have lower

reliability because there is less time between defects. These trends are typical for the industry.



Figure 10.4. As size increases, so does the number of defects.

Productivity Index

Productivity also tends to have a great impact on the overall quality of a system. Historical data has

shown that the number of defects discovered decreases exponentially as the Productivity Index

increases. Figure 10.5 shows the cumulative number of defects discovered for the same sized

software application using two different PIs (17 and 21, respectively). The project with the higher

PI not only delivers the application 9 months faster, but also discovers about 360 fewer errors in

total. It makes sense that when development teams improve, they tend to make fewer errors to

begin with, thus decreasing the number of errors found during testing. Operating at a higher

productivity level can drastically increase software quality.


Figure 10.5. Comparison of defects produced for the same size system using different PIs.

Staffing

The size of the development team also can impact the quality of a system. As discussed previously

in Chapters 3 and 4, using larger teams produces more errors than using smaller teams. As shown

in Figure 10.6, when comparing a large team (red) with a small team (grey) at the same project

sizes, the small teams produce between 50 and 65 fewer errors than large teams at all project sizes.

Additionally, they pay little, if any, schedule penalty and use significantly less effort. This finding

can be especially useful when looking to identify areas of waste within an organization, because

it shows that adding resources to a project does not always improve its quality.


Figure 10.6. Small teams produce fewer errors than large teams at all project sizes.

SUMMARY

To conclude this section, software reliability and quality are two areas that should be addressed

in the expectation-setting and project-negotiation stages of a project. As a leader, there are

several strategies that you can use to improve reliability. Keep the developed size as small as

possible, use smaller teams of people, and make regular investments in your environment to

improve the efficiency of your development shop. All of these actions will pay reliability and

quality dividends.


CHAPTER 11. WHAT DOES IT TAKE TO SUCCESSFULLY IMPLEMENT ESTIMATION?
_______________________________________

It doesn’t take long for an individual or small team to achieve quick wins by implementing the estimation best practices outlined in the previous chapters. However, that initial success is always at risk of being derailed when a key individual leaves the organization, unless those best practices are institutionalized (i.e., fully embraced by the organizational culture, supported by a majority of stakeholders, and incorporated into the organizational process). So what needs to happen if you want to implement an effective estimation and measurement function? It will take determination and hard work, but the rewards are well worth it.

An “Estimation Center of Excellence” (ECoE) is an excellent way to operationalize the concepts we have been discussing into a structure that can support the needs of an enterprise. The ECoE encompasses people, process, and tools, ensuring that all three of these areas are incorporated into the planning and execution process.

The most common organizational structures for an ECoE are centralized, distributed, and a hybrid “hub and spoke” model, but the right choice depends on the organization. It’s important to find a structure that suits your culture and business decision-making needs. Sometimes the function is housed in an existing part of the operation, such as a project management office (PMO), while other organizations prefer that it be its own entity.

Here are some implementation issues that need to be considered:

1. How will this fit into our organization?
2. What is the best structure to support our needs and decision making?
3. What people will be needed, and what skills must they possess?
4. What tooling will be required?
5. How long will it take to stand up the operation?
6. What ROI can be expected from this endeavor?
7. How will we overcome resistance to change?

We have found that the best implementations usually follow a four-stage process:

1. Stage I – Requirements Definition
2. Stage II – Design and Development
3. Stage III – Launch and Pilot Operation
4. Stage IV – Transition to Full-Scale Operation

During each stage, people, process, and tooling activities are addressed. Here is a brief description of what takes place during each of the implementation stages.


STAGE I – REQUIREMENTS DEFINITION

During this stage, the organization identifies its major pain points and goals and builds a business case for addressing them. A charter is developed to identify the major stakeholders in the ECoE implementation and the ground rules they will follow. The current estimation process maturity is assessed to establish a baseline. From there, the required tooling and the skill sets of the ECoE staff are identified, and positions are advertised. Finally, a change management strategy is designed.

STAGE II – ECoE DESIGN AND DEVELOPMENT

During this stage, the ECoE staff are hired and trained. Detailed business processes are defined and service level agreements are established. Next, the estimation tooling is customized and templates are created to support the estimation process. Additionally, a historical data repository is set up to support estimation and business case ROI analysis. Several pilot estimates are performed to gain experience and fine-tune the estimation methods. This stage also includes regular information briefings to individuals throughout the organization to garner support and demonstrate the value of the solution.

STAGE III – ECoE LAUNCH AND PILOT OPERATION

During this stage the ECoE becomes operational. ECoE staff are mentored as they gain experience, working towards a formal estimation expert certification. The tooling, templates, and processes are adjusted as necessary based on feedback from operational use. Training sessions are conducted for new users, as required, if a distributed or hub and spoke model is implemented. Sharing circles are also established to spread knowledge and practical experience throughout the estimation community. Case studies and other findings continue to be briefed, demonstrating the value of the ECoE.

STAGE IV – FULL OPERATION

The ECoE is fully operational. The tools and processes continue to be improved based on newly completed projects and operational experience. Additional staff training and certification are conducted on an as-needed basis.

In our experience, completing the first three stages can take anywhere from 6 to 15 months, depending on the size and complexity of the organization. Figure 11.1 shows a good example of an ECoE implementation in a large organization.


Figure 11.1. Example Estimation Center of Excellence implementation.

ECoE FOCUS AREAS

The ECoE can have different areas of focus based on the needs of the organization:

1. The Internal Development ECoE
2. The Acquisition and Vendor Management ECoE
3. The Oversight ECoE

The Internal Development ECoE’s primary focus is estimation, stakeholder negotiation, risk assessment, resource management, capacity planning, and business decision support. The Acquisition and Vendor Management ECoE focuses on ensuring successful procurements of outsourced projects. It does this by providing a framework for establishing a realistic “should cost” analysis and a method for quantifying vendor bids. These can then be used to select vendors that put forth realistic proposals, and to monitor their execution quantitatively so that there are no surprises along the way. The Oversight ECoE typically operates at the portfolio level and provides executive-level oversight. It conducts independent reviews at key points during the development lifecycle to ensure that a project has a realistic estimate and is proceeding under control. Often special emphasis is placed on high-risk and/or high-visibility projects.


RETURN ON INVESTMENT

Your return will depend on the size of your organization and the size and volume of projects that are subject to ECoE analysis. Figure 11.2 shows what others have achieved.

Figure 11.2. Return on Investment for using SLIM.

SUMMARY

Adopting an Estimation Center of Excellence is a strategic, long-term investment rather than a quick fix. Leveraging defensible metrics and greater quantitative analysis will help your organization gain more insight into its own productivity, internal and external, as well as its estimation maturity, while better positioning it to plan, prioritize, track, and empirically assess the interrelated variables of project size, cost, quality, and duration. Before making changes as sweeping as those required to implement an ECoE, invest time at all levels of the organization to fully understand the task at hand and to assess how long it will take to reap the benefits and achieve the expected return on investment. Making deliberate, fully informed decisions will ultimately lead to better outcomes.


CHAPTER 12. SUMMING IT UP
_______________________________________

If you’ve stayed with us this far, you’ll know that there is a lot to be gained from incorporating top-down estimation into your current software development practices. Before leaving you, we wanted to recap some of our main points.

In any type of project, software development or otherwise, accountability is essential for success. By establishing expectations up front, the developers will know what they are responsible for and the stakeholders will know when the final product should be ready. However, without quantitative methods to measure productivity, this task is much easier said than done. A good estimate provides the framework necessary for establishing expectations and keeping everyone accountable.

To get to that point, it’s critical to measure not just the consumption of the project (i.e., how much money has been spent), but also the construction (i.e., how much functionality has been built). Monitoring and tracking both of these together will provide a holistic view of the project’s progress and can allow teams to adapt to changes in flight, should that be necessary.
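As a simple illustration, the consumption-versus-construction comparison can be reduced to a single tracking number. This is only a sketch; the function name, threshold, and sample figures below are hypothetical and are not part of any QSM tool.

```python
# Hypothetical sketch: compare the fraction of budget consumed against the
# fraction of planned functionality constructed. A project that spends
# faster than it builds is a candidate for in-flight corrective action.

def progress_gap(spent: float, budget: float,
                 built_units: float, planned_units: float) -> float:
    """Return consumption minus construction, each as a fraction complete.

    Positive values mean money is being consumed faster than
    functionality is being constructed.
    """
    consumption = spent / budget
    construction = built_units / planned_units
    return consumption - construction

if __name__ == "__main__":
    # Illustrative numbers: 60% of budget spent, 40% of functionality built.
    gap = progress_gap(spent=600_000, budget=1_000_000,
                       built_units=200, planned_units=500)
    status = "at risk" if gap > 0.1 else "on track"
    print(f"gap = {gap:+.0%} -> {status}")
```

Tracking this gap at regular intervals gives the holistic view described above without waiting for the end-of-project reckoning.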

Understanding how the two SLIM algorithms function is key to making business decisions about your projects. The law of tradeoffs states that there are numerous ways a project can be accomplished by adjusting the schedule and the effort. Estimators can create several what-if scenarios that can be presented as options to stakeholders and decision-makers. In essence, these scenarios become a vital negotiation tool because they back your proposals with quantitative data. Additionally, presenting multiple options to stakeholders naturally solicits their participation in the decision-making process, thus shaping better project outcomes overall.
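The shape of that tradeoff can be sketched with the published form of Putnam’s software equation, in which effort varies with roughly the inverse fourth power of the schedule. The size and productivity values below are hypothetical, chosen only to show the curve; this simplified form omits the calibration details of the SLIM tooling.

```python
# Illustrative sketch of the schedule-effort tradeoff implied by the
# published form of Putnam's software equation:
#   Size = Productivity * Effort^(1/3) * Time^(4/3)
# Solving for effort:
#   Effort = (Size / (Productivity * Time^(4/3)))^3
# All parameter values here are hypothetical, for illustration only.

def effort_person_months(size_sloc: float, productivity: float,
                         months: float) -> float:
    """Effort (person-months) to build size_sloc at a given process
    productivity and schedule, per the simplified software equation."""
    years = months / 12.0
    person_years = (size_sloc / (productivity * years ** (4.0 / 3.0))) ** 3
    return person_years * 12.0

if __name__ == "__main__":
    SIZE = 50_000          # new and modified source lines (hypothetical)
    PRODUCTIVITY = 12_000  # process productivity parameter (hypothetical)
    for months in (10, 12, 14, 16):
        e = effort_person_months(SIZE, PRODUCTIVITY, months)
        print(f"{months:2d}-month schedule -> {e:8.1f} person-months")
```

Because effort scales with the inverse fourth power of time, even a modest schedule extension buys a dramatic effort (and cost) reduction, which is exactly why what-if scenarios make such persuasive negotiation material.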

One of the greatest benefits of top-down estimation is that you can start creating more accurate estimates right away. As you continue to estimate over time, the benefits compound even further (see Figure 12.1). Improving resource and demand management can free up staff for other projects, increasing productivity and staff utilization.

Figure 12.1. The benefits of top-down software estimation.


With technology increasingly woven into our daily lives, now is the time to start implementing top-down estimation in your software development lifecycle. With so much at stake, you really can’t afford not to.


CONTRIBUTING AUTHORS

Douglas T. Putnam is Co-CEO of Quantitative Software Management (QSM), Inc. He has 35 years of experience in the software measurement field and is considered a pioneer in the development of this industry. Mr. Putnam has been instrumental in directing the development of the industry-leading SLIM Suite of software estimation and measurement tools, and is a sought-after international author, speaker, and consultant. His responsibilities include managing the delivery of QSM software measurement services, defining requirements for the SLIM Product Suite, and overseeing the research activities derived from the QSM benchmark database.

C. Taylor Putnam-Majarian is a Consulting Analyst at QSM with over seven years of specialized data analysis, testing, and research experience. In addition to providing consulting support in software estimation and benchmarking engagements for clients in both the commercial and government sectors, Taylor has authored numerous publications about Agile development, software estimation, and process improvement, and is a regular blog contributor for QSM. Most recently, Taylor presented research titled Does Agile Scale? A Quantitative Look at Agile Projects at the 2014 Agile in Government conference in Washington, DC. Taylor holds a bachelor’s degree from Dickinson College.