
Pushing the Envelope: Managing Very Large Projects

by Ken Orr, Fellow, Cutter Business Technology Council

Very large software projects are being tackled in high-intensity environments, with heavy user involvement — and with a higher chance of failure. Why? This Executive Report answers this question and argues that despite widely held beliefs, agile techniques can be useful in taming these IT beasts.

Agile Project Management
Vol. 5, No. 7

About Cutter Consortium

Cutter Consortium’s mission is to foster the debate of, and dialogue on, the business-

technology issues challenging enterprises today and to help organizations leverage IT

for competitive advantage and business success. Cutter’s philosophy is that most of the

issues managers face are complex enough to merit examination that goes beyond simple

pronouncements. The Consortium takes a unique view of the business-technology

landscape, looking beyond the one-dimensional “technology” fix approach so common

today. We know there are no “silver bullets” in IT and that successful implementation

and deployment of a technology is as crucial as the selection of that technology.

To accomplish our mission, we have assembled the world’s preeminent IT consultants —

a distinguished group of internationally recognized experts committed to delivering top-

level, critical, objective advice. Each of the Consortium’s nine practice areas features a

team of Senior Consultants whose credentials are unmatched by any other service

provider. This group of experts provides all the consulting, performs all the research

and writing, develops and presents all the workshops, and fields all the inquiries from

Cutter clients.

This is what differentiates Cutter from other analyst and consulting firms and why we say

Cutter gives you access to the experts. All of Cutter’s products and services are provided

by today’s top thinkers in business and IT. Cutter’s clients tap into this brain trust and are

the beneficiaries of the dialogue and debate our experts engage in at the annual Cutter

Summit, in the pages of Cutter IT Journal, through the collaborative forecasting of the

Cutter Business Technology Council, and in our many reports and advisories.

Cutter Consortium’s menu of products and services can be customized to fit your

organization’s budget. Most importantly, Cutter offers objectivity. Unlike so many

information providers, the Consortium has no special ties to vendors and can therefore

be completely forthright and critical. That’s why more than 5,300 global organizations

rely on Cutter for the no-holds-barred advice they need to gain and to maintain a

competitive edge — and for the peace of mind that comes with knowing they are

relying on the best minds in the business for their information, insight, and guidance.

For more information, contact Cutter Consortium at +1 781 648 8700 or [email protected].

Cutter Business Technology Council: Rob Austin, Tom DeMarco, Christine Davis, Lynne Ellyn, Jim Highsmith, Tim Lister, Ken Orr, Ed Yourdon

Pushing the Envelope: Managing Very Large Projects
by Ken Orr, Fellow, Cutter Business Technology Council
Agile Project Management Advisory Service

Executive Report, Vol. 5, No. 7

For large projects in the 10,000-100,000 function point range, the results are very grim indeed. More than half of these massive projects will be cancelled outright, and the percentage finishing approximately on schedule is less than 20%. This is a very troubling situation, because large systems are extremely expensive, and thus the failures can cost millions of dollars.

— Capers Jones [1]

FOREWORD

Serious engineering disciplines

devote enormous amounts of

time and effort to researching

failures, documenting what hap-

pened and why. When a bridge,

a building, or a walkway in a hotel

collapses, careful analysis of the

cause follows. The same is true

of transportation failures. In the

US, when airplanes, trains, or

ships collide with something, the

National Transportation Safety

Board teams are among the first

to arrive on-site. In our domain, however, IT is not nearly as good at analyzing its failures, even though it should be.

One of the reasons that IT is not

as good at analyzing its failures is

that IT disasters are rarely as pub-

lic as those concerning buildings

or transportation. People rarely die

(at least directly) as a result of IT

failures, and most large organi-

zations are not eager to air their

dirty wash in public. With building

and transportation failures, those

involved have little choice.

Once in a while, however, a bla-

tant case of IT failure manages to

reach the front page. Recently,

for example, the Royal Bank of

Canada made the news because

of problems created by a software

upgrade.1 And a few years back,

Foxmeyer Health Corporation, a

distributor of pharmaceuticals and

medical supplies, blamed its bank-

ruptcy on poor implementation of

a very large enterprise resource

planning (ERP) project. But for the

most part, IT failures, even major

ones, largely go unreported.

But very large IT failures continue

to happen, and organizations are

losing millions, perhaps billions,

because of them. So if develop-

ing large software systems is to

become more professionalized

and more reliable, we must do a

much better job of defining, archi-

tecting, designing, and building

these very large systems. Given the public's skepticism about the technology that people deal with every day, IT must do

this soon, before software is

considered too risky to attempt

anything really big or new and

the world loses one of its most

powerful tools of innovation.

INTRODUCTION

From one point of view, this

Executive Report is about scaling

up agile, or incremental, devel-

opment to the point at which it

can be used successfully on

very large projects (VLPs). In

this context, the report serves

as a discussion about how one

combines environments charac-

terized by high intensity, high

user involvement, and uncertain

requirements with the structure

needed to complete VLPs. The

report argues that VLPs have cer-

tain unique circumstances and

that architecture plays a unique

role. Many people believe —

wrongly, in my opinion — that

VLPs cannot avail themselves

of agile techniques because,

as the project grows, the prob-

lems of communication, coordi-

nation, and control become too

difficult for an approach that

depends heavily on face-to-face

communication.

But beyond agile development,

this report also focuses on

something close to my heart: the

cause of VLP failures. For years,

industry experts have pointed

out that IT project risk increases

dramatically with project size.

Unfortunately, the situation hasn’t

gotten better. Not only do VLPs

fail far more frequently than do

small and medium-sized ones

(see Table 1), but the rate of fail-

ure hasn’t changed much over

the years.

Despite the fact that hundreds of

books and thousands of courses

are offered on project manage-

ment, large projects continue to

fail on a kind of logarithmic scale.

Despite everyone’s best efforts,

large projects, particularly the

very largest ones, continue to be

extremely risky undertakings not

only for the companies involved

but also for all the individuals closely associated with them. A failed

project doesn’t look good on any-

one’s résumé.

Not only do Capers Jones’s figures

in Table 1 indicate that the num-

ber of projects delivered on time

decreases by 40% as one moves

from medium projects (with less

than 1,000 function points, or FPs)

to large projects (with more than

10,000 FPs), but the rate of cancel-

lation more than doubles. And it

gets much worse for projects with

greater than 100,000 FPs.

The most obvious solution would

seem to be to break VLPs into

much smaller parts. But there are

practical reasons why this can’t

always be done. There are some

very big projects that must be

completed to meet very big busi-

ness or organizational demands.

But new approaches need to be

applied that reduce the risks

posed by VLPs and improve the

odds of success.

VLPs GONE BAD

While IT does not have a resource

that collects and studies failures

on VLPs, one area provides some

documentation on VLP failures:

government systems. Because of

the size, cost, importance, and

constant scrutiny of large govern-

ment activities, we probably know

more about failures in this domain

than in any other. For this report,

I’ve included two notable cases

to provide a baseline.

The IRS Modernization Program

The IRS’s $8 billion modernization program, launched in 1999 to upgrade the agency’s IT infrastructure and more than 100 business applications, has stumbled badly. The first of multiple software releases planned for a new taxpayer database is nearly


three years late and $36.8 million over budget. Eight other major projects have missed deployment deadlines, and costs have ballooned by $200 million. This case study illustrates what can go wrong when a complex project overwhelms management capabilities of both vendor and client.

— Elana Varon [11]

The cover of the 1 April 2004

issue of CIO reads: “IRS Files

for Extension Again.” And an

article by Elana Varon spells out,

in excruciating detail, the IRS’s failure to bring its 1960s-vintage Master File system into the

21st century. Launched in 1999,

the entire project involved billions

of dollars and various applica-

tions. High-profile management

was brought in to manage the

project, and a major outside con-

tractor was chosen for implemen-

tation. But by the end of 2003, the

project was classed as a failure,

the contractor was constrained

from bidding on certain IRS proj-

ects, and all participants had egg

on their faces.

As you might expect, for a proj-

ect like this to make the trade

press, things have to be pretty

bad. But this was not the first time

the IRS had failed in its attempt

to build a better collection system.

For at least the past 30 years,

various attempts have been

made with similar results.

As Varon noted, “The IRS has

twice before failed to modernize.

In the late 1970s, President

[Jimmy] Carter put the kibosh

on a project to replace the aging

Master File when an external

review questioned whether the

agency could adequately protect

taxpayer privacy. Almost two

decades later, in 1995, Congress

pulled the plug on a second mod-

ernization program after the IRS

spent 10 years and $2 billion with

little to show for it” [11].

Despite this series of events, a

member and former chairman

of the IRS Oversight Board told

Congress, “People ask, Is modern-

ization going to fail? I say we can’t

let it fail”2 [11]. Those charged

with overseeing the IRS continue

to make comments like this, yet

modernization at this government

institution keeps failing. We

should bear in mind the old

saying, “Hope is not a strategy.”

And each time the IRS program

fails, project management is seen

as the culprit. Those involved

lament: “If we only had the right

vendor”; “If we only had the right

contract”; “If we had people with

more experience.” And yet,

despite all this experience, all this

management, and all this money,

modernization at the IRS fails.

Perhaps something else is wrong;

maybe it is more than project

management that the IRS lacks.

The FBI Trilogy Program

But the IRS story is not the only

one getting attention from the

technology press these days.

Recently, a series of government

and independent reviews have

been undertaken to evaluate the

FBI’s Trilogy program, a project launched in 1999 and slated for completion in October 2003, costing roughly $350 million. After

9/11, the FBI agreed to move the

completion date to October 2002

at the prompting of the US

Congress.


Probability of Software Project Results

Size          Early     On Time    Delayed    Cancelled   Sum
1 FP          14.68%    83.16%      1.91%       0.25%     100.00%
10 FP         11.08%    81.25%      5.67%       2.00%     100.00%
100 FP         6.07%    74.77%     11.83%       7.33%     100.00%
1,000 FP       1.24%    60.76%     17.67%      20.33%     100.00%
10,000 FP      0.14%    28.03%     23.83%      48.00%     100.00%
100,000 FP     0.00%    13.67%     21.33%      65.00%     100.00%
Average        5.53%    56.94%     13.71%      23.82%     100.00%

Table 1 — Distribution of Project Results by Size in Function Points (FPs) [1]
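Jones's distribution is easy to put to work in a planning discussion. The short Python sketch below simply transcribes Table 1 into a lookup; the function name outcome_odds and the choice to use the nearest tabulated size at or below the estimate are my own illustrative assumptions, not something from Jones or this report.

    # Outcome probabilities by project size in function points (FPs),
    # transcribed from Capers Jones's figures in Table 1 [1].
    OUTCOME_DISTRIBUTION = {
        1:       {"early": 0.1468, "on_time": 0.8316, "delayed": 0.0191, "cancelled": 0.0025},
        10:      {"early": 0.1108, "on_time": 0.8125, "delayed": 0.0567, "cancelled": 0.0200},
        100:     {"early": 0.0607, "on_time": 0.7477, "delayed": 0.1183, "cancelled": 0.0733},
        1_000:   {"early": 0.0124, "on_time": 0.6076, "delayed": 0.1767, "cancelled": 0.2033},
        10_000:  {"early": 0.0014, "on_time": 0.2803, "delayed": 0.2383, "cancelled": 0.4800},
        100_000: {"early": 0.0000, "on_time": 0.1367, "delayed": 0.2133, "cancelled": 0.6500},
    }

    def outcome_odds(function_points: int) -> dict:
        """Return the distribution for the nearest tabulated size at or
        below the estimate (no interpolation between rows)."""
        best = 1
        for size in sorted(OUTCOME_DISTRIBUTION):
            if size <= function_points:
                best = size
        return OUTCOME_DISTRIBUTION[best]

    odds = outcome_odds(25_000)  # a very large project
    print(f"Chance of outright cancellation: {odds['cancelled']:.0%}")  # -> 48%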

By the fall of 2002, however, it

became clear that the FBI and its

contractors would not make the

October 2002 date; moreover, it

was likely that they wouldn’t

make their original target of

October 2003. The project was

extended until the end of 2004,

with funding increased by roughly

$150 million. At that point, the FBI

asked the National Academy of

Sciences (NAS) to put together a

team to review the project.3

Conducted by NAS, the evaluation

dragged on for more than a year

and a half, with its culmination in

a recently published report sug-

gesting that while the FBI has

improved (it installed a new net-

work and thousands of new desk-

top systems and hired a new

government-savvy CIO), it still

lacks a visible and viable enter-

prise architecture [2]. Meanwhile,

the key new system in the works,

the Virtual Case File, may not be

ready for prime time. Even if it is, it should definitely not be installed with a “flash cutover.”4

But like the IRS, the FBI has a

spotty record in keeping its IT sys-

tems up to date. Perhaps more

important, in the middle of the

Trilogy project, the FBI was sud-

denly thrown into a frantic effort

to support a major change in its

mission from one of law enforce-

ment to counterintelligence/

counterterrorism. While the FBI

has been involved in domestic

intelligence since its earliest

days, 9/11 changed everything.

Domestic counterterrorism

became the number one priority.

The NAS review team discovered

that the FBI had continued with

the Trilogy project without fully

understanding what it was getting

into. Although the FBI brought

in top-level management talent

from the National Security Agency,

the challenge of developing state-

of-the-art applications that inte-

grate law enforcement and

counterintelligence was far more

difficult than anything previously

attempted by the FBI. The diffi-

culty arose from a fundamental conflict over the use and sharing of data between law enforcement and intelligence;

while the first group wants to

keep its data secret, the second

needs to share. The NAS team

pointed out that even after 50

years, this problem still plagues

agencies like the CIA.

THE USUAL SUSPECTS

In other fields, whenever some-

thing fails, a detailed report fol-

lows that covers all aspects of the

failure. For example, when the

walkways at the Hyatt Regency

Hotel in Kansas City, Missouri,

USA, collapsed in the 1980s,

an extensive analysis traced the

failure to a design change that

had not been reviewed properly.

Summaries of these reports

were made available to the

press. When the Challenger and

Columbia space shuttles failed,

extensive reviews revealed

problems with the design and

decision-making processes. Once

again, documentation of the

discoveries was made public so

that engineers and managers

could learn from previous mis-

takes. Out of these inquiries, the

public learned about the dangers

to these aircraft associated with

O-rings and breakaway pieces

of foam.

But rarely do we find a very large

IT project failure analyzed at this

level, though one could argue

that more economic damage

results from IT failures than from

building collapses or space shuttle

disasters. Because of this lack of

analysis, the little data we have on

major IT failure always points to

the same villains: project manage-

ment, people, technology, or a

bad contract.

Usual Suspect 1:

Project Management

Simply blaming bad project man-

agement is too easy to be of much

use, largely because it involves a

fundamentally circular argument.

By definition, good project man-

agement means the project suc-

ceeds; and if a project fails, we

have a case of “bad project man-

agement.” This kind of thinking

doesn’t help much. Since there

are many different kinds of project

management, it is important to

know which aspects of project

management were not particularly

good and which were lousy.

Traditional waterfall methods, for

example, use a series of steps.

The idea is that if you follow the

steps in the prescribed order,

things will turn out fine. But


suppose we have a failed project

and we did everything in the pre-

scribed order. Is the cause bad

project management or bad

methodology? Agile development,

on the other hand, says, keep

things small, deliver early and

often, stay on track. Not so many

steps, but you follow them strictly

and you get short-term feedback.

The agile answer is not process

but communication. But if a large

agile project goes bad, what’s

the problem? At what point does

face-to-face communication

break down?

Usual Suspect 2: People

Next in the list of likely suspects

in IT project failure is usually

“pilot error.” The most common

kind of airplane accident is

referred to as CFIT, or Controlled

Flight into Terrain.5 In other words,

the pilot flew into the ground or

a mountain or something else,

thinking that everything was OK.

In an aircraft crash, the first thing

people look for is pilot error. In

IT, we put it differently; we talk

about project managers who are

inexperienced or over their head

or untrained in the latest technol-

ogy. Often, the implication is that

the project team didn’t under-

stand what it was doing.

Sometimes, the implication is

that some of those on the project

team were not working hard

enough. But this is hardly ever

the cause: no big project that I

have observed, reviewed, or

participated in has ever failed for

lack of effort. In failures as well

as in successes, one of the hall-

marks of VLPs is that you have

dozens (or hundreds) of intelli-

gent, earnest, hard-working

individuals who work nights,

weekends, and holidays to con-

tribute to a goal to which they are

committed. A mentor of mine at

IBM once said, “No one at IBM

ever gets fired for not working

hard enough; they get fired for

doing the wrong things.” People

may be poorly trained or poorly

motivated, but to paraphrase

W. Edwards Deming, 85% of the

time the problem lies in the sys-

tem — that is, the approach —

not the people.

Usual Suspect 3: Technology

Consider these comments:

“We just couldn’t get the XXX to

work!”; “We would have made it

if the technology had delivered

the way they promised”; “We

could never get the XXX to work

with the YYY!” Technology often

shows up on the review board’s

radar when looking at significant

failures. Fair enough.

But again, my experience is

that technology is rarely the

ultimate cause of a VLP failure.

Technology problems may aggra-

vate an already bad situation,

but technology is almost never

the ultimate reason for things

going wrong. However, betting

on technology to bail you out of

a bad schedule often creates

serious problems.

Usual Suspect 4: A Bad Contract

And consider these comments:

“We got screwed!”; “We didn’t

pay attention to the contract”;

“The vendor didn’t live up to the

spirit of the contract”; “Given the

contract, there was no way we

could deliver what was needed.”

Now we get to the ground rules:

that is, the contract. Oftentimes,

the review board concludes that

the project was doomed because

it didn’t make the contract tough

enough on the vendor. In self-

defense, the vendor usually

responds that the customer didn’t

know what it wanted, and by the

time the customer figured out

what it wanted, the vendor had

gone past the time and costs

specified in the contract.

What gets lost in this discussion is

the fact that contracts are two-

way streets: if you block off traffic

in one direction, things are liable

to become congested. At this

year’s Cutter Summit, one of the

most interesting presentations

concerned an approach that

Cutter Consortium Senior

Consultant Stuart Kliman called

“Negotiation as if implementation

mattered.” On VLPs, contracts are

far more critical than on the aver-

age project because so much is

unknown. Since VLPs come

along so infrequently, few staff

members have experience writing

a contract that will span three or

four years and that addresses mul-

tiple phases that involve defining

exactly what the system will do.


MORE THAN $1 BILLION IN PROJECT FAILURES AND COUNTING

So if the usual suspects are not

the source of VLP failure, what

is? If not project management,

people, technology, and/or con-

tracting, what should we look out

for? What is the real problem?

I’ve been pondering these ques-

tions for some time.

I estimate that during my career

I have been directly involved in

reviewing and making recom-

mendations on more than $1 bil-

lion in problem-ridden projects.

When I add up the numbers, I’m

shocked but not necessarily sur-

prised. Nonetheless, I still find that

I’m called in as an outside expert

to “say grace” over a dying proj-

ect. It’s a ticklish job, but you

learn a lot. After a while, however,

you can quickly tell how much

trouble the project is in. Mostly,

the smart people on the project

have already deduced that the

project is terminal; they just

need someone to communicate

this to management.

But getting people to terminate an

important activity is not easy or

unique to IT. Recently, I found

myself sympathizing with a doctor

as he testified before a legislative

panel in Florida and tried to

explain why a young woman in a

persistent vegetative state should

be allowed to die. Try as he might,

he could not convince a hostile

group of legislators that even though the patient in question appeared to be aware and occasionally made gestures that seemed to represent smiles and frowns, she was nonetheless truly brain-dead and her chances

for even partial recovery were

infinitesimal.

I know how the doctor felt. In the vast majority of what I now call “permanent vegetative IT systems,” I end up recommending

that the project in question expire.

In the next section, I offer lessons

learned about killing large IT proj-

ects. In the final sections, I pull

together what is unique about

VLPs to help you avoid making the

mistakes that have led to costly

disasters.

PROJECT TITANIC SCENARIO

All big failures look alike.

— Ken Orr

Looking back, the most surprising

thing about all the failed projects I

have studied is how much they

resemble one another. When I

researched the history (via logs)

of some of the failed projects I

worked on early in my career, I

found that they look enormously

alike — so much so that my first

book on systems development

describes the following 21-step

approach for large project fail-

ure [6]. I refer to this multistep

approach to project failure as the

Project Titanic Scenario:

Step 1: Select project due date

and budget.

Step 2: Select acronym (i.e., set

project scope).

Step 3: Select project manager.

Step 4: Develop project schedule.

Step 5: Hire requirements

analysts.

Step 6: Develop requirements

specifications.

Step 7: Develop input specifica-

tions and order data input forms.

Step 8: Hire programmers.

Step 9: Begin work on input edit

programs.

Step 10: Begin work on

conversion programs.

Step 11: Begin design of

database.

Step 12: Begin work on update

programs.

Step 13: Hire additional

programmers.

Step 14: Begin work on output

reports, query screens.

Step 15: Slip schedule.

Step 16: Hire additional

programmers.

Step 17: Panic.

Step 18: Search for guilty.

Step 19: Punish the innocent.


Step 20: Promote the uninvolved.

Step 21: Go back to Step 1.

It is depressing that more than 25

years later, I can still present this

scenario to a large group of IT

managers and professionals and

someone will invariably approach

me and remark, “I think my proj-

ect is at Step 15” or “How do we

get off this project before we

reach Step 18?” Most large failed

IT projects look alike, and the

causes of failure are eerily similar.

(Note: To those who have dismissed the Project Titanic Scenario as “cute” or “trite”: it is neither. Built into the scenario

are the seeds of the failure of

VLPs.) Recently, I have received

several RFPs that could have

been drafted directly by following

the Project Titanic timeline. So in

the next few pages, I will try to

explain why what many (i.e.,

most) people would consider

“straightforward project manage-

ment” fails when applied to VLPs.

I do so by relaying observations

drawn from personal experience.

I have already touched on some

of them in our discussion of the

usual suspects, but they are

important to reiterate.

Observation 1: Large projects

don’t go bad; they start bad.

The root cause is not technology,

but ego. On most failed projects,

two egos create the conditions for

failure: the ego of the sponsor

(aka the boss) and the ego of the

project manager (aka the fall guy).

For some reason, top managers

and executives believe that they

can do things with IT projects that

they would never attempt in other

kinds of projects (e.g., building a

plant or rolling out a new prod-

uct). In itself, this is not an earth-

shaking observation. Executives

are always in a hurry and always

want to know the costs up front.

And executives, especially top

executives, all have big egos.

Unfortunately, on large IT projects,

several factors often conspire to drive project risk through the ceiling and cause a large percentage of these projects to fail.

The most important of these fac-

tors is the ego of the project man-

ager. On one failed project after

another, we find project managers

with personality traits that sow the

seeds of project failure.

On a large project, the typical

project manager is ambitious,

articulate, proud, thrill-seeking,

inexperienced, and a team player.

In other circumstances, including

smaller IT projects, these person-

ality traits yield success; but on

large projects, they tend to spell

disaster.

In my experience, it takes two to

create a really big failure. It is not

just the sponsoring executive’s

ego and desires that set up a

project for disaster, but that these

demands are coupled with the

ego of the project manager, who

has a do-or-die attitude. Note that

in the Project Titanic Scenario, the

project manager isn’t even hired

until after the most critical deci-

sions (i.e., the project deadline,

budget, and scope) have been

made. This means that the project

manager is willing to accept the

mission impossible and make it

happen. Unfortunately, real life often turns out differently than $100-million movies do.

Observation 2: Most of the

critical bad decisions on large,

failed IT projects are made

early on. Many people, especially

project managers assigned to

failed IT projects, believe that if

you’re smart enough and work

hard enough, you can overcome

any obstacle. Unfortunately, this is

patently false. A project’s fate is

often determined in the first few

weeks of the project’s conception,

often before it is even officially a

“project.”

Mary Shaw, a friend of mine at

Carnegie Mellon University, once recalled the ideas of one of her colleagues, a chemical engineer, on

the critical nature of these early

decisions. He said, “If you were

building a new chemical plant,

the expenditures tended to be

‘back end–loaded’ — that is, most

of the costs were associated with

the construction of the plant.”

(See Figure 1.) This same cost

curve shows up in a large number

of development projects as well.

“But,” continued Mary’s col-

league, “if you look at when these

expenditures are committed, the

graph is very different. Almost all


the expenditures are ‘committed’

[i.e., decisions have been made]

very early on in the project, in the

first weeks or months.” (See

Figure 2.)

Here too, there are strong similari-

ties with IT projects. Moreover, the

chemical engineer’s comments

about cost also apply to all other

aspects of a project, including

project staffing, project scope,

methodology, tools, and schedule.

Successful project managers

know this; unsuccessful ones

don’t. On projects that follow the

Project Titanic Scenario, the bud-

get, end date, and scope are,

for all intents and purposes,

established before the project

manager is even selected. As

mentioned previously, this is a

recipe for disaster.

Unfortunately, the knowledge

of the application and/or

technologies and methods that

will apply to the project often

lags significantly behind the

decision making. The middle

curve in Figure 3 is intended to

demonstrate that for most proj-

ects, knowledge acquisition is

“lumpy.” Normally, a great deal

is learned about the application

domain during the early part of

the project, and another spurt of

knowledge occurs late on the

project. But for the most part, this

knowledge is not factored into

the major commitments.

Experienced project managers

understand Figure 3 intuitively.

They know that they have to be

very careful about committing too

much too early. They know that

the more you know, the easier

decision making becomes. As I

will discuss later, architecture is

crucial, because on large projects,

a great deal of overhead is associ-

ated with knowledge manage-

ment (acquiring, documenting,

and disseminating knowledge

about the project design).

THE WRONG PEOPLE

Anyone who’s been on one large failure will make sure that they are never on another. Therefore, most large projects are staffed with trainees and masochists.

— Andy Kinslow, IBM

In most areas of human endeavor,

if you’re going to take on a project

that is likely to cost tens or hun-

dreds of millions of dollars and


[Figure 1 — Project expenditures over time. Axes: time vs. expenditure; curve: expenditures.]

[Figure 2 — Commitment of project expenditures. Axes: time vs. expenditure; curves: expenditures committed, expenditures.]

that will have an enormous effect

on an organization, you must

insist that the project leader has

extensive experience working on

a similar project. Moreover, you

will check credentials, and

closely. You expect the project

leader to have a professional

license and to have worked in

the field for an extensive period.

Because the ramp-up for VLPs is often rushed to meet a fixed deadline, people frequently forgo

this process when it comes to

large IT projects, which leads to

major problems.

One of the excuses frequently

offered for a lack of project leader

experience is that older project

managers don’t understand the

new technologies and/or meth-

ods. This may be true; old dogs

have trouble learning new tricks.

But if this appears to be a major issue, it should serve as a trigger:

do you really want to set out

on a critical undertaking using

technologies and methods that

only a few young people really

understand?6

Observation 3: Failed projects

are often managed by those

who are inexperienced in man-

aging VLPs with new, immature technologies and methods.

I have found that the key to the

success of any project, especially

a large project, is having the right

people with the right skills in the

right jobs. I wrote a series of

Cutter Consortium E-Mail Advisors

on the five big questions that

every organization should ask

about those selected to lead

large, critical projects [3, 4, 5].

These questions address experi-

ence and risk:

1. Has any project team member

built anything this big?

2. Has any project team member

used this methodology/

technology on a project

this big?

3. Has any project team member

been involved in a large sys-

tems failure?

4. Do the right people have plan-

ning, architecture, require-

ments, and design skills?

5. Has anyone thought about

contingency plans?
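One way to keep these five questions from being answered with a hand wave is to treat them as an explicit go/no-go checklist. The Python sketch below is purely illustrative: the field names, and the idea of listing every "no" as an unresolved risk, are my own framing of the questions above, not a Cutter artifact.

    from dataclasses import dataclass, fields

    @dataclass
    class VLPReadiness:
        """One yes/no answer per 'big question' about the proposed team."""
        built_something_this_big: bool              # Q1: comparable scale delivered before?
        used_method_at_this_scale: bool             # Q2: methodology/technology proven at this size?
        survived_a_large_failure: bool              # Q3: someone has a major failure under their belt?
        has_planning_and_architecture_skills: bool  # Q4: planning, architecture, requirements, design
        has_contingency_plans: bool                 # Q5: fallbacks actually thought through?

    def readiness_gaps(answers: VLPReadiness) -> list[str]:
        """Every 'no' is a named risk to resolve before committing a deadline, budget, or scope."""
        return [f.name for f in fields(answers) if not getattr(answers, f.name)]

    team = VLPReadiness(True, False, True, True, False)
    print("Unresolved risks:", readiness_gaps(team))
    # Unresolved risks: ['used_method_at_this_scale', 'has_contingency_plans']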

In order to know whether a proj-

ect team has enough experience

to warrant starting a big project,

you need to ask probing ques-

tions. With all the major positions

represented, this questioning

should be more than cursory. It

should address details and ask for

a description from the candidate

of what he or she does when

things go wrong or priorities

change. This questioning should

check how the applicant might

choose to produce short-term

deliverables. The candidate

should be grilled on his or her

experience with large projects.

How large a project has the candi-

date run or worked on?

Big projects require specific skills

in planning, scheduling, delegat-

ing, and monitoring. Successful

project managers know that there

is enormous pressure to do dumb

things, and it may be hard to

retain top management’s atten-

tion, especially if the project is

going to last for a few years.

And it is not enough for IT project

managers just to understand the

intricacies of project manage-

ment. Indeed, most project man-

agement training doesn’t prepare

project managers for a VLP with


[Figure 3 — Expenditures, commitments, and knowledge. Axes: time vs. expenditure and knowledge; curves include expenditures committed and domain/technology knowledge.]
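Figures 1-3 are qualitative, but the gap they illustrate can be made concrete with a few numbers. In the sketch below, the phase names and percentages are invented solely to mirror the shape of the curves (spending back-loaded, commitment front-loaded, knowledge arriving late); they are not data from this report.

    # Illustrative numbers only, chosen to mirror the shape of Figures 1-3:
    # spending is back-loaded, commitment is front-loaded, and domain/technology
    # knowledge arrives too late to inform the biggest commitments.
    phases    = ["concept", "requirements", "design", "build", "test", "deploy"]
    spent     = [  2,   8,  20,  55,  85, 100]   # % of total cost actually paid out
    committed = [ 60,  80,  90,  95,  99, 100]   # % of total cost already locked in by decisions
    knowledge = [ 10,  35,  50,  60,  90, 100]   # % of domain/technology knowledge acquired

    for phase, s, c, k in zip(phases, spent, committed, knowledge):
        # A large (committed - knowledge) gap marks decisions made before the
        # team knows enough to make them -- the pattern behind Observation 2.
        print(f"{phase:>12}: spent {s:3d}%  committed {c:3d}%  knowledge {k:3d}%  gap {c - k:4d}")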

its unique problems. Indeed, I

am not a fan of “content-free”

project management training;

I’m not sure what it means to be

a “certified” project manager if

you haven’t actually managed a

large project yourself.

Managers of VLPs must know

what their people are doing and

why. When managers can’t under-

stand what their people are say-

ing, they are at the mercy of their

people. Large project failures

often involve one of two scenar-

ios: (1) the project manager

fancies him or herself a “pure”

project manager and puts all trust

in the chief designer, architect, or

perhaps a consultant; or (2) the

project manager is an advocate of

some new approach or tool. Both

scenarios present difficulties.

Observation 4: On large proj-

ects, there are no silver bullets.

While advanced technologies and

methodologies are often treated

as silver bullets that can magically

overcome one or more major

project management problems

(e.g., time, cost, and personnel),

relying too heavily on them is

always an indication that some-

thing is wrong with the project

planning. Over the years, I have

watched managers on one failed

project after another put up a

smokescreen by saying that they

were implementing new methods

and tools to avoid embarrassing

questions about schedule and

process.7

Observation 5: On a VLP, there

is no substitute for a project

manager who has a failure

under his or her belt. You’ll

notice that the third big question

is about the importance of failure.

If the project manager has not

been involved in at least one

major failure, management ought

to be cautious about launching

the project. I tend not to trust

people who have only experi-

enced success, because, to para-

phrase Henry Petroski, one of the

great writers on engineering, you

simply don’t learn enough from

success [7]. Failures provide a

template in the mind of a project

manager. If you’ve survived a

major failure, when you hear cer-

tain phrases or see things that

happened on the failed project,

you instantly become more alert

and start asking more questions.

Observation 6: VLPs require

experienced project architects

and project managers. Over the

years, I have tried to be both the

project manager and the project

architect. It’s a tough job, and the

larger the project, the more diffi-

cult it becomes. Either one can

be the ultimate boss, depending

upon the kind of project, but in

general, you need two different

people: one to worry primarily

about the system that you’re trying

to develop and the technology

being used to install it (the project

architect) and the other to worry

primarily about communicating

with management, producing the

right reports, and tracking the

team’s progress (the project

manager).

Finding a good project architect is

every bit as difficult as finding a

good project manager. In fact,

good project architects are proba-

bly more critical to VLPs than are

good project managers. They

must have vision and understand

the details. They need to know

when programmers are lying or

fudging. They need to know what

people should be testing. They

should constantly think about inte-

gration. And perhaps most impor-

tant, they need to know when to

blow the whistle. Project archi-

tects are the ones who worry

about the project’s scope.

THE WRONG APPROACH

On a VLP, other than putting the

wrong people in charge, the

worst thing you can do is use the

wrong approach (methodology).

If you look at the Project Titanic

Scenario carefully, you will notice

that it is actually a distillation of a

series of the right tasks, but in the

wrong sequence. Project Titanic

involves setting the scope too

early, hiring too many people too

soon, and starting them working

in parallel on the wrong end of

the problem (i.e., the inputs)

when you have no idea what the

outcomes (i.e., the outputs)

should be. And it involves breaking the project into phases only when you’re in deep trouble.

But fundamentally, Project Titanic

is just a natural reaction to doing


the very worst things first —

namely, establishing the due date

and the budget too early. Most

inexperienced project managers

believe that if your due date and

budget are set ahead of time (i.e.,

before the project begins) the best

you can do is develop a schedule

working backward from the due

date, breaking up the tasks and

sequences as much as possible

so that much of the work can be

done in parallel, and then hoping

for the best. As it turns out, this

approach is nearly always a fatal

mistake.

Starting from the given due date

is not the best you can do; it is

actually the worst. It just allows

you to make management happy

initially and postpone the most

serious problems, at least for a

while. But sooner or later, the

project will start slipping and you

will ultimately have to go back

and ask for more money, more

people, and more time. And even-

tually, if you keep going back and

everything looks hopeless, they’ll

call in someone like me. At that

point, the task is to say what proj-

ect leaders failed to say at the

beginning — the truth.

Observation 7: The initial

project scope of a VLP is never

right. You simply can’t set the

project scope correctly until

you know the basic require-

ments and general design,

so don’t try to set scope in

concrete too early. A great many

people believe that “hard-headed”

project management consists of

deciding on project scope early

and then religiously fighting to

avoid scope creep. The notion is

a good one, but it doesn’t work.

Recall Figure 3: people on failed

projects tend to make their most

important (and most fatal) deci-

sions before they have the knowl-

edge to make those decisions.

This is especially true of project

scope.8

Take the following example.

Suppose our scope is “to install

a new supply chain management

system.” Lots of million-dollar

systems are launched with that

brief a goal. But what if that’s not

our problem? What if the reason

we have such difficulty with our

supply chain is that we have such a terrible accounts payable

system (or process) that people

don’t want to do business with us?

Assuming you can set the right

project scope without doing

requirements and high-level

design is like assuming you can

plot the route of a new highway

without surveying the land. Sure

you can draw a line on a map

and say, “I want a road from

here to here in three years.”

But suppose that when you per-

formed the land survey for this

new road, you found that the

proposed route crossed a river,

a swamp, and a couple of moun-

tains. What would you do? The

answer is you would change

your scope (or time frame),

and quickly.

Inexperienced, content-free proj-

ect managers (i.e., those who

believe that all you need to know

about managing a large software

project is project management

itself) think that being hard-

headed means that you make all

the important decisions, then

build a plan and get users to

“sign in blood.” But a wildly optimistic plan will rarely bring in an unspecified project before the arbitrary target date and under the arbitrary budget. This may not be a big deal on a small or

medium-sized project, but on a

big project, it’s a killer.

Observation 8: Check your

maps frequently. All large failed

projects have Gantt/PERT charts,

and none are up to date when

things go south.

The map is not the territory.

— Alfred Korzybski

If you don’t know where you are, a map won’t help.

— Anonymous

A Gantt chart is what you show management while you’re doing something else.

— Ken Orr

Every large failed project that I’ve

ever reviewed has had an elabo-

rate project Gantt/PERT chart early

in its history. So when I start gath-

ering data, one of my first ques-

tions is, “Where are we on this

chart?” And by this point in the

project, the response is always,


“We can’t actually show you

where we are on this chart” or,

even worse, “Which chart?”

During the first six or 10 months

of a big project, the project charts

are religiously updated and status

reports are filed right on time.

But as the project begins to fall

behind, the scheduled updates

become more sporadic and the

charts less reflective of “the plan.”

One of the reasons is that catch-

up strategies are often introduced,

and it becomes too involved, not

to mention embarrassing, to actu-

ally update the schedules and

dependencies to reflect the

revised situation.

Observation 9: Project man-

agers and their bosses will go

to great lengths to avoid telling

top management the truth.

The truth will make you free, but first it will make you miserable.

— Tom DeMarco

In most failing projects, there are

a few people at the top of the

organization who think they are

in trouble, lots of people at the

bottom who know they are in

trouble, and a bunch of worried

middle managers trying to keep

those at the top from talking to

those at the bottom. The hardest

thing to come by on a big, prob-

lem-riddled project is the truth.

This is understandable. In all these

VLP cases, dozens, even hun-

dreds, of people have worked

night and day on the project for

years, so admitting that a project

is in serious trouble is akin to

admitting personal failure. But in

the end, truth is the only guide.

THE END GAME: KNOWING WHERE YOU REALLY ARE

In 1707, four English Ships of the Line went aground on the Scilly Isles off England’s Southwest Coast. Their captain thought that they were two hundred miles further East. Two thousand sailors were lost in this tragedy.

— Dava Sobel [9]

Observation 10: Project sched-

ules on big projects do not slip

one day at a time; they slip in

big chunks. In one instance, I

was involved on a very large,

highly critical, extremely overdue

project. The situation got so bad

that management began to hold

daily progress meetings. Every

afternoon, management con-

vened all the key managers and

reviewed project progress in

detail. One day the news was par-

ticularly bad. “How the hell did

we slip six months overnight?” the

senior VP screamed. “We worked

very hard, Sir,” replied a small

voice from the back of the room.

Of course the project didn’t slip

six months overnight. Like Sobel’s

captain of the English fleet, the

project team members didn’t

know where they were or, more

to the point, how much still

had to be done. What they discov-

ered was that they were much

further from their target than they

thought. They had been operating

on the assumption that they were

located at one place on the map,

while they were at another. Or

more embarrassing, they realized

they had the wrong map!

If you haven’t developed require-

ments and general design, you

can’t estimate what you have yet

to do in detail. When you finally

determine how much work

remains, the amount often totally

exceeds what management had

been promised, even in the most

recently revised schedules.

Observation 11: In software

development, you can lie about

everything but the testing.

Testing for VLPs is like the fatal

shoals on which ships run

aground. Project plans, require-

ments, design documents, pro-

gram charts, and so on are just

pieces of paper. Unless they are

reviewed carefully by those who

know what they’re doing, they

are not really deliverables. If

they choose to, project teams

can hide major problems under

mounds of paper. But ultimately,

you can’t lie about testing. If a

program doesn’t work, it shows

up in testing — you can’t go on;

and if the system does work, test-

ing is the final arbiter. Many large

projects maintain the lie that they

are on schedule right until they

start serious testing.

Unfortunately, on large projects,

many of the most critical tests —

the ones the end user sees —

don’t occur until it is way too


late to change course.9 The result

is like hitting a large rock. The

project careens off schedule and

stays off schedule. Integration

schedules slip and data conver-

sion can’t progress. Meetings are

held daily; one excuse after another is offered; the search for the guilty commences, followed by the punishment of the innocent; and so on.

Observation 12: Most VLPs

are not cancelled; they simply

slip away. Admitting that you

blew (or were party to blowing) millions of dollars and wasted years of time is never a pleasant

task. As a result, all but the most

visible large projects are labeled

“partial successes” and allowed

to disappear from management’s

radar. Whenever the project comes up, even years or decades later, there is a wink and a nod and the subject changes. To keep the

discussion muted, those most

closely associated with it are

allowed to retire or slip into staff

positions. They’re not fired; they

just fade away.

Occasionally, things get so bad

that the company is hemorrhag-

ing money and those involved

become so obstinate that some-

one like myself gets called in to

put the project out of its misery.

And even then, the project is not

declared a failure. No wonder

we have so little data on these

kinds of projects.

WHAT ARE THE REALPROBLEMS WITH VLPs?

I have often asked myself if any-

one really knows how to manage

VLPs. Lots of people, and organi-

zations, claim to know how and

that they routinely bring big proj-

ects in on schedule and under

budget, but I’m skeptical. Given

my experience with VLP failures,

I am often less than impressed

when I hear about project suc-

cesses, largely because it is

difficult to know just how suc-

cessful any VLP was unless it was

reviewed after the dust settled,

preferably in a post-audit review a

year or two after the declaration

of project success.10

People Don’t Want to Remember

So the success or failure of big

projects is difficult to judge up

close. Moreover, the trauma of

being involved tends to cloud our

vision. Here’s a personal example:

Many years ago, my wife and I

remodeled our 19th-century

Victorian house, which was a

major undertaking. We added a

family room, an office, a new

master bath, and redid the

kitchen. The renovation involved almost every expensive thing you could do to an old house.

The remodeling took months,

and we moved out of the house

(something we hadn’t done the

last time we remodeled and

learned to regret). And though we

worked through every possible

detail (or so we thought) with the

architect over the previous 18

months, the job cost far more

than the estimate, and naturally

everything was late. And every

time we asked the contractor

when a particularly important

milestone such as the installation

of the cabinets would be done, his

response was “two weeks.” But

somehow, the contractor’s two

weeks had a habit of dragging on

for months, and things got more

and more expensive.

Shortly after the remodeling was

over, my wife and I needed some

relaxation, and we went to see

the comedy The Money Pit. In the

movie, a couple buys a run-down

mansion and remodels it. As with us, everything the couple touched fell apart, and — again, as with us —

when the couple hired contrac-

tors, everything was scheduled to

take “two weeks.” The costs, like

ours, shot up. It was a funny

movie, I suppose; everyone in the

theater laughed hilariously. But

somehow my wife and I did not

find the situation funny. The

Money Pit was too close to the

truth for us after going through

the real thing.

That’s the way it is with traumatic

events; people block the unpleas-

ant experiences from their mind.

And so it goes with people recov-

ering from the shock of a very

large failure. Often, when I men-

tion the Project Titanic Scenario,

I see tears come into people’s

eyes. The rule is, if you were

involved on a recent VLP, Project


Titanic isn’t likely to be funny.

And unlike The Money Pit, there

is rarely a happy ending.

Seat-of-the-Pants Management

Doesn’t Work

VLPs fail so frequently because

many of our most basic (instinc-

tive) approaches for managing

people, teams, and organizations

that work (or appear to work)

on small or medium-sized proj-

ects lead us astray as a project

exceeds a certain size.

In most organizations, most

people manage most of the time

by the seat of their pants. On

VLPs, however, this kind of

management is a bad thing. To

demonstrate, let’s trace the origin

of the expression “by the seat

of one’s pants.” Early airplane

pilots learned that people have

an instinctive sense of the direc-

tions of up and down. In most

earthbound circumstances, this

sense of up and down is both

correct and useful.

In flying, however, our natural

senses work against us. A large

number of pioneering pilots

were killed before it was discov-

ered that when you’re flying an

airplane, you can’t trust your

instincts. We now know that a

combination of centrifugal forces

and a lack of visual cues often

trick inexperienced pilots into

believing that up is down and

down is up, resulting in the

previously mentioned CFIT

experience.11

But it turns out that it is hard for

your brain to override these

instincts. And for beginner pilots,

the hardest task is learning to

trust the instruments in all cases.

As pilots will tell you, the most

demanding part of learning to fly

is becoming “instrument rated.”

During this training, the pilot is

blocked from seeing out of the

airplane and must give up the

reliance on instinct concerning

direction and learn to trust what

the instruments tell him.

Because the skill is so difficult

to master, it is not necessary to

become instrument rated to get a

basic license to pilot a plane. But

a pilot who isn’t instrument rated

is not supposed to fly when visual

flight rules12 do not apply. And

because of the real dangers

involved, most noninstrument-

rated pilots are careful to avoid fly-

ing in conditions when they lack

visual clues. But occasionally,

changes in weather, time of day,

or location conspire to place the unrated pilot at risk or — as was the case with John F. Kennedy Jr. — lead to tragedy.

In the same way that our physical

instincts betray us when flying an

airplane without visual clues, our

management instincts have a way

of betraying us when trying to

manage a project for which we

have to trust our “project instru-

ments” rather than our instincts.

Small or medium-sized projects

can be managed by direct com-

munication and/or inspection.

With small or medium-sized

projects, there are only a couple

of levels of management at most.

But with large projects, and espe-

cially VLPs, there are layers upon

layers of management to the point

that it is impossible to use the nor-

mal sets of visual clues that indi-

cate if the project is flying up or

down or where you are with

respect to the project terrain. (In

project management, as in flying,

it is important to know where you

are with respect to things like air-

ports, coastlines, the surface of

the earth, and mountains.)

Not only does managing by

instinct cause large projects to

veer off course, but many of our

basic planning assumptions often

fail us on VLPs. The most important of these assumptions is what I call the “front-to-back strategy.”

Designing Front to Back

Doesn’t Work

In general, there are three basic

strategies for developing large

systems:

1. Front-to-back strategy. Start

with the inputs and work “for-

ward” to the database, then to

the outputs.

2. Back-to-front strategy. Start

with the outputs and work “back-

ward” to the database, then to

the inputs.

3. Middle-out strategy. Start with

the database (or central objects)

and work backward to the inputs

and forward to the outputs.


Of these three approaches, the front-to-back strategy is by far the most instinctive and intuitive. Humans tend to think linearly and temporally, and we take our clues from execution. When this system is done, so the thought process goes, it will start with the inputs, update the database, and then periodically produce outputs, so that’s the way we ought to build it. Unfortunately, the front-to-back approach, like seat-of-the-pants management, is a trap that eventually catches up with you, usually late in the project when you can least afford to redo things. Here’s the problem.

The reason we build systems is to be able to produce certain outputs (payroll checks, up-to-date bank statements, benefit checks, etc.). Those outputs comprise data elements that are produced either from inputs or from data fields computed from inputs, together with various calculation formulas or decision rules.

In small or medium-sized systems, the connections between inputs and outputs are usually pretty clear and direct. That means if you start with the inputs, put those inputs on a database, and produce outputs from the database, the process doesn’t take long. If you make a mistake and find that you can’t produce a given output from the data you have on the database, you usually have time to go back, add a new element to the database, and then find a place on the input to capture that information.

On VLPs, however, the cycle of designing the inputs, the database, and the outputs is very long, and if you discover, as you always do, that someone missed one or more of the major outputs in requirements, you need to make major changes to both the database and the inputs. This “refactoring” can throw a monkey wrench into the planning process.

Working front to back, however, looks good from a project planning standpoint. IF we are constrained (from a project management standpoint) by an arbitrary due date, and IF we can define the inputs early, THEN we can define the database early. And IF we can define the database early, THEN we can have lots of people entering data, AND we can also get lots of people writing output reports independently of one another. I refer to this as “fantasy project planning.”

But of course, the real world does not follow this pattern. We almost never know what all the primary outputs for a large system are at the beginning of a project. Often, the reason we’re tackling a big system is that we need to change things. That means defining new requirements, and new requirements normally involve new outputs or new business rules or both. So we put aside the most important consideration — namely, defining the primary outputs — and start guessing about what inputs we will need. Once we come to a compromise on the inputs,13 we guess what the database should look like and then finally define the outputs. Then we discover that we need things to produce the outputs that no one told us about, so we have to go back and refactor. And all this at the worst possible time.

Learning to take a back-to-front approach, on the other hand, is much like learning to trust your flight instruments. It is hard to concentrate on defining the outputs when there is pressure to get started implementing the system, but a back-to-front strategy works much better on large systems. The great advantage of a noninstinctive, back-to-front approach is less rework, and in many big projects, rework is the killer. If you do a good job of defining the primary outputs of the system, designing the database becomes much more straightforward, and defining the inputs is a piece of cake. As a result, the implementation goes much more smoothly. But unfortunately, using a back-to-front strategy requires patience, and patience is often in short supply on VLPs.
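To make the back-to-front idea concrete, here is a minimal sketch (my own illustration, not something prescribed in this report) of how the data elements required by the primary outputs can be walked backward to determine what the database must hold and what the inputs must capture. Every output, field, and rule named below is a hypothetical example.

```python
# Illustrative back-to-front sketch: start from the primary outputs,
# derive what the database must hold, then derive the inputs.
# All output, field, and rule names are hypothetical examples.

primary_outputs = {
    "payroll_check": ["employee_id", "pay_period", "gross_pay", "net_pay"],
    "bank_statement": ["account_id", "statement_date", "balance"],
}

# Calculation rules: a computed field and the fields it is derived from.
derived_fields = {
    "net_pay": ["gross_pay", "tax_withheld"],
    "gross_pay": ["hours_worked", "hourly_rate"],
    "balance": ["posted_transactions"],
}

def required_data(outputs, derived):
    """Walk back from the output fields to the base data that must be captured."""
    needed, base = set(), set()
    stack = [field for fields in outputs.values() for field in fields]
    while stack:
        field = stack.pop()
        if field in needed:
            continue
        needed.add(field)
        if field in derived:
            stack.extend(derived[field])   # expand computed fields into their sources
        else:
            base.add(field)                # base data: must come from an input somewhere
    return needed, base

database_fields, input_fields = required_data(primary_outputs, derived_fields)
print("database must hold:", sorted(database_fields))
print("inputs must capture:", sorted(input_fields))
```

The point is not the code; it is that once the primary outputs are pinned down, the database and the inputs follow from them rather than the other way around.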

Starting Fast Doesn’t Work

In the absence of long-term plans, short-term demands will always drive out long-term goals.

— Ken Orr


In the Project Titanic Scenario, the due date and budget are already assigned when the project manager is selected. Surprisingly, this is often the case in the real world as well. As a result, many, if not most, project managers are already late when they get their assignments. Therefore, there is great pressure to get cracking and to cut corners. And when, in our boss’s perception, we’re already behind, our instincts tell us to start fast and rush. But here again, our instincts get us in trouble.

On VLPs, the tendency, especially among inexperienced project managers, is to confuse effort with progress, which is natural. Human beings tend to be impatient. In this age of instant gratification, impatience is compounded by the illusion of quick fixes. Big projects require careful planning, but on many large projects, there is a not-so-subtle message that there isn’t any time for planning, only execution.

On small or medium-sized projects, this can be an annoyance, but on VLPs, confusing effort with progress is deadly. Based on the fast-start scenario, there is often a push to bring people on too early, and once they are on board, there is ongoing pressure to keep everyone busy. The result is that the project manager often struggles to stay one day ahead of the programmers. At some point, something falls between the cracks and everything begins to slip.

The truth is that long-term things like planning and architecture are the keys to success on VLPs, and typically, these steps can’t be accomplished by large groups working in parallel. The early parts of VLPs must be serial activities, and they must be done well before you bring people on board.

If you are a manager of a large project, people will test you. Lots of top managers believe that if they set totally unrealistic schedules and demands, people will perform better. But the opposite is more often the case. Most people work best when they believe management knows what it’s doing and can set reasonable expectations. Because I’ve seen the premature-start strategy fail so often, I get really testy when people attempt to bully me into “doing something even if it’s wrong.”

WHAT SHOULD YOU DO?

If you’ve read this far, you know that there are many ways that VLPs go south. What I’ve done is describe the landscape, and I’m not going to turn around now and say that a cookie-cutter approach will work, because it won’t. The problem is that we have too many constraints. If you’re supposed to deliver an enterprise-wide supply chain system for less than $10 million by 1 September 2005, your project may already be doomed. Moreover, it doesn’t matter much whether you use a traditional waterfall methodology or an agile approach; the cards are stacked against you. This section discusses various activities that can dramatically alter the odds in your favor. And if you are committed to using agile development methods, it gives you a framework.

Break the Project in Two

In the real world, there are architects and there are general contractors. They work together in different ways, but logically the architect always comes first. The architect does all the basic design work, checks out the lay of the land (literally), reviews rules and regulations, and meets regularly with the owner and/or developer to establish the specifications. The architect produces the building specifications in terms of a variety of sketches, models,14 and drawings.

Historically, the architect receives 10% of the cost of the entire building, so if the cost of the building is $5 million, the architect gets $500,000; and if the building is estimated to cost $50 million, the architect gets $5 million. Architecture is recognized as a major component of the entire cost structure. Moreover, it involves detail as well as high-level design.

Most important, from our viewpoint, the funding for the project is usually broken into two pieces: (1) the planning and architecture and (2) the construction. If we are to remove the risk from managing VLPs, we must get our knowledge in sync with the magnitude of our decisions.

Phrasing this new approach in terms of risk management is often easier for executives to grasp. They may not understand VLP management or knowledge management, but they certainly understand risk management.

Do Requirements and Architecture First

The division between architecture and construction has worked for hundreds of years in construction, and I believe that we must employ a similar strategy on VLPs in the software world.

If we are ever to feel comfortable managing VLPs, we must focus on understanding the overall design (high-level requirements and high-level architecture) before we break the project into its major components and develop detailed schedules and detailed cost breakdowns. So we must separate the high-level requirements and architecture from the development (i.e., construction). This is just as important in agile development as it is in a traditional waterfall environment, perhaps even more so. To work well, agile development needs to break a large project into a set of small projects (small enough to be done by small teams composed of dedicated users and two programmers). In the next section, I will describe how this might be accomplished in the context of a VLP.

Make the Architect Responsible for Quality Control and Integration

In the real world, architects do more than design the building; they help estimate the cost and schedule, and equally important, they are responsible for change control and quality control throughout the project. I think they should play this role in VLPs as well.

And if you break a large project into several small (i.e., incremental) subprojects, you must ensure that, in the end, they fit together to create the finished product. On a VLP, a project architect should be responsible for the cohesion of these pieces as well as for change control and quality control.

Since computer science has not reached the level of professionalism of architecture and other engineering disciplines, it’s not likely that we will be able to pull off really big projects routinely for some time, but these kinds of changes could make a substantial dent in our VLP failure rate.

SHUTTING DOWN AND STARTING OVER

Things to Do on a Failing Project

If you are ever asked to go to another organization, or another division of your organization, and evaluate a large project that is in trouble, you can use some of the observations presented here as a sanity check. You should know that, in most cases, by the time an outsider gets called in, it’s too late to do anything but conduct as quiet a burial as circumstances allow. But the process still requires careful analysis, courage, and tact. And no matter what, submit your report to top management so that it is a permanent part of the project record.

Start with the people on the front line: the designers, programmers, testers, and documenters. Have management construct a current Gantt chart. Have an all-hands meeting and distribute sealed ballots. Ask everybody to write on the ballot the probability of the project meeting its goals. Even under the worst of circumstances, most people will be honest, especially if they believe their anonymity is protected.
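The report prescribes no tooling for this, but as a small, hypothetical sketch of how such a ballot might be summarized, a few lines of Python are enough to show how the front line really rates the project (the ballot values below are invented):

```python
# Hypothetical tally of anonymous ballots: each value is one person's
# estimate (0-100) of the probability that the project meets its goals.
from statistics import median, quantiles

ballots = [10, 15, 5, 40, 20, 10, 25, 5, 30, 15]   # invented sample data

q1, q2, q3 = quantiles(ballots, n=4)               # quartiles of the estimates
print(f"respondents:     {len(ballots)}")
print(f"median estimate: {median(ballots):.0f}%")
print(f"middle half:     {q1:.0f}%-{q3:.0f}%")
print(f"share below 50%: {sum(b < 50 for b in ballots) / len(ballots):.0%}")
```

A median well below 50% from the people actually doing the work is usually all the evidence an outside reviewer needs.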

Look under rocks and look the project managers in the eye. Don’t be afraid to recommend that the project be scuttled. Remember, everything that has been spent to date is sunk cost. Economically, it doesn’t matter how much has been spent; what matters is how much will likely be spent going forward.

Most organizations don’t like to advertise failure; it’s bad for the career, you know. But keep all your notes from the project so the next time you’re asked to be a project manager on a large project, you can pull out your notes and check them against the project schedule and due date. If the situation resembles the failed project you just helped kill, start making a fuss early. If nobody listens, start updating your résumé. Recall Andy Kinslow’s comment: large projects are often manned by trainees and masochists. Which one are you?

Starting Over

Let’s suppose that you have successfully shut down a VLP that failed. What next? Well, if the organization is game for another round, it may be willing to consider new alternatives. (And if management wants to set another due date and budget, run — don’t walk — away!) On the other hand, if the organization is willing to rethink its basic strategy (and organizations that have recently admitted failure are often more humble), suggest the concept of the project architect and breaking up the VLP into two major parts: (1) architecture and (2) construction.

If managers think these ideas make sense, explain that they need to think of committing to the architecture first and then reassessing how much the construction phase will cost and how it will be staged. If they are still listening, tell them about agile development.

Leveraging the Strengths of Agile Development Combined with Project Architecture

Agile development doesn’t have much to say about overcoming the special problems of VLPs, because few VLPs have been successful with agile. While agile development gurus refer to fairly large projects, they’re not usually talking about VLPs in our sense of the word.

Agile development performs well in small and medium-sized projects. Based on the approaches and methods of Cutter Consortium Senior Consultants Kent Beck, Alistair Cockburn, and others, the industry is rethinking how projects should be conducted. The emergence of agile development has prompted organizations to reconsider how people communicate on real projects and caused many to recognize the need to eliminate the busywork and to get all the major parties in the same room working on the same end product.

On VLPs, however, it’s difficult to get all the right people in the same room. And even if you did, it’s unlikely that much face-to-face communication would take place. If agile development is going to scale up, it clearly must come up with some major enhancements. I strongly believe that ultimately a lack of architecture will plague large-scale agile development projects just as it has traditional development projects. Indeed, in discussions on the use of agile development on large projects, many individuals have indicated to me that they insert an architecture phase routinely as the first step.

We all know that large projects increase both the opportunities and the need for communication faster than they increase knowledge. The larger the project, the greater the number of people who need to know about the various aspects of the design, which elongates the communication cycle, which creates the need for more meetings, and so on. Inserting an architecture phase attempts to maintain the balance between knowledge and effort.

Figure 4 offers an example of a relatively large agile development project developed using the Feature Driven Development methodology in Australia. The project was broken up into several subprojects, each of which could be accomplished with agile. The secret here was that before work began on the subprojects, a fairly extensive architecture phase broke the entire project into smaller ones.

But breaking things into manageable pieces is not enough for an organization to be successful on really big projects. To succeed with agile development on a VLP, three major issues must be addressed: (1) how to break a big thing into a set of smaller, interrelated pieces; (2) how to integrate the separate pieces back together; and (3) how to accommodate changes that affect the interrelation between the pieces. The second and third issues pose significant difficulties.

Adding an architecture phase to the agile lifecycle solves only the first of these problems. In order to solve the other two, more has to be done; we must deal with the issues of change/quality control and integration. What needs to be put in place, then, is a structure somewhat like the one in Figure 5.
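Neither the report nor Figure 5 specifies a mechanism for that integration and change-management layer, but a rough, hypothetical sketch of the bookkeeping it implies might look like the following: the project architect records which subprojects consume which shared interfaces, so any material change immediately names the teams that must review it. The interface and subproject names are invented for illustration.

```python
# Hypothetical change-impact registry for an architecture-led VLP.
# The project architect records which subprojects depend on which
# shared interfaces; a proposed change is checked here before approval.
from collections import defaultdict

class InterfaceRegistry:
    def __init__(self):
        self.consumers = defaultdict(set)   # interface name -> subprojects that use it

    def register(self, subproject, interfaces):
        for name in interfaces:
            self.consumers[name].add(subproject)

    def impact_of_change(self, interface):
        """Subprojects that must review and retest if this interface changes."""
        return sorted(self.consumers.get(interface, set()))

registry = InterfaceRegistry()
registry.register("Subproject 1", ["customer_record", "pricing_service"])
registry.register("Subproject 2", ["customer_record"])
registry.register("Subproject 3", ["pricing_service", "shipment_event"])

print(registry.impact_of_change("customer_record"))   # ['Subproject 1', 'Subproject 2']
```

However it is kept, the essential point is that this record belongs to the architect, not to any one subproject team.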

Architects in the software/systems world ultimately will have to assume responsibility for their architecture all the way through implementation, just as they do in real-world construction. Eventually, one might guess that project architects will be licensed, just as real-world architects and engineers are today. At any rate, for project architects to help solve our problems, they must closely monitor the development to ensure that what was originally designed gets built and, if material changes are made, to assess what the consequences will be.

Tools, Communication, and Online Collaboration

In the end, traditional waterfall development schemes, such as the CMM®, and agile development have much to learn from each other. CMM must learn to be more focused on the product and less on the process, while agile must learn to be more focused on architecture and on a formal means of documentation. I believe that, in time, much of this argument will be resolved through better tools for requirements, architecture, design, and collaboration.

In the end, distance will be less of a barrier to communication. Since my company, the Ken Orr Institute, is a small consulting organization with widely dispersed clients and coworkers, the collaboration tools available on the Internet allow me and my coworkers to communicate with a wide variety of others (clients, researchers, vendors, etc.) in ways that would have required substantial travel and/or wait time in the past. I believe strongly that CADD/CAE/CAM tools for IT will permit all organizations to attack VLPs in an increasingly agile manner.

Figure 4 — Agile development framework for a very large project (VLP). (The figure shows an architecture and planning phase, subprojects 1 through n, and integration and installation phases.)

Figure 5 — Augmented agile development framework for a VLP. (The figure shows the same phases, with an integration and change management function spanning the subprojects.)

I have something of an obsession with tools, not because I want to replace people but because I fear that unless we find better tools for developing, maintaining, and operating large systems, our organizations will become more and more difficult to manage. Already, many organizations depend on a few fragile systems and networks, possibly too vulnerable to survive major business or technological changes.

CONCLUSION

No one knows how to bring in VLPs successfully every time, even under the best of circumstances. What I do know, based on thousands of hours and dozens of projects, is what not to do. Managing VLPs takes great management and architectural skills. Until recently, people have underestimated the importance of architecture in VLPs, and we must redress that. Before we make the critical decisions in these projects, we need knowledge, and the best way to get high-level project knowledge is through architecture. Remember: hope is not a strategy.

EPILOGUE

Life has a way of repeating itself. Many years ago, I got an interesting phone call from someone at the IRS. After reassuring me that the call didn’t concern an audit, the caller said that a colleague of mine who worked in the IRS’s methodology section had suggested that he give me a call.

“We have a big problem,” he said.

“How big?” I asked.

“Eight billion dollars in 10 years,” he said.

“More like $20 billion in 20 years,” I calculated silently to myself. Out loud I said, “That’s big.”

“What kind of methodology works in a project this size?” he asked.

“None known to man,” I said.

“What should we do?” he said.

“Well, I’ll tell you, because I know you won’t do what I suggest. What I would do is hire 10 or 12 of the brightest people you can find inside and outside the IRS and put them in a room for about 12 to 18 months. I’d have them come up with a plan. Then I would start spending that $8 billion. By the way, my guess is that to get the job done, it’s going to take more like 12 to 15 years.”

My comments met silence on the other end, and then the IRS worker said, “You’re right, we’re not going to do that.” We then discussed the ideas the IRS had for spending the $8 billion for a few more minutes, and then the IRS worker rang off.

Probably 15 years have passed since this conversation, but clearly the IRS has still not been able to modernize its core system. During this period, the IRS has gone through at least two, and possibly three, iterations at roughly $8 billion a whack.

The phone call, brief as it was, got me thinking about architecture and VLPs. It has occurred to me more than once that the time, cost, and organizational pressures of VLPs are such that it is nearly impossible to succeed. Somehow, the cycle must be broken. Here I am, 15-plus years later, talking about what the IRS should do. I think I was right then and that I’m right now, but I’ll give you even money that the IRS is still not likely to follow my advice.

ABOUT THE AUTHOR

Ken Orr is a Fellow of the Cutter Business Technology Council and a Senior Consultant with Cutter Consortium’s Business-IT Strategies and Enterprise Architecture Practices. He is also a regular speaker at Cutter Summits and symposia. Mr. Orr is a Principal Researcher with the Ken Orr Institute, a business-technology research organization. Previously, he was an Affiliate Professor and Director of the Center for the Innovative Application of Technology with the School of Technology and Information Management at Washington University. He is an internationally recognized expert on technology transfer, software engineering, information architecture, and data warehousing. Mr. Orr has more than 30 years’ experience in analysis, design, project management, technology planning, and management consulting. He is the author of Structured Systems Development, Structured Requirements Definition, and The One Minute Methodology. He can be reached at [email protected].

ENDNOTES

1. As reported in Finextra.com: “Royal Bank of Canada says it will be working over the weekend to update customer accounts following a computer glitch that has caused payroll delays for thousands of Canadian workers. The bank fell behind in processing salary deposits for Ontario government workers after running into systems problems during a software upgrade earlier this week.” (See [8].)

2. Here’s a clue concerning what’s wrong: when people believe that just because the project is important it can’t fail, they sow the seeds of their own destruction.

3. I was a member of the NAS Trilogy review team.

4. In a “flash cutover,” an organization installs a new application throughout the organization all at once. By contrast, in a “phased implementation,” pieces of the application are installed over time. Flash cutovers are considered extremely risky, especially on VLPs.

5. CFIT occurs when an airworthy aircraft under the control of a pilot is inadvertently flown into terrain, water, or an obstacle with inadequate awareness on the part of the pilot of the impending disaster.

6. This is a serious challenge to agile development methods, and incidentally, few people with hands-on experience use agile methods on VLPs. It is particularly risky to attempt a VLP with agile in an organization that has only inhouse experience with agile on smaller projects.

7. Here again, agile development advocates must be careful. A considerable amount of data indicates that agile development produces better products in shorter time frames on small and medium-sized projects. Because agile is so new, there isn’t much data on VLPs using agile. It is important to be sure of what you’re doing before you attempt it.

8. This is where agile has a lot to teach. To paraphrase Cutter Business Technology Council Fellow Jim Highsmith: change early and change often until you and the user come to an agreement on what the system must do.

9. Again, agile development has it right. Test early, test often, show the user everything.

10. I’ve noticed that, in many cases, a struggling project simply disappears from management’s radar: even though the project is not finished, it is marked “ready for operation.” At this point, the time code on the project schedule changes from “development” to “maintenance.” From then on, sometimes for years, additional costs are hidden in the massive maintenance budget. And as any experienced manager knows, you can “maintain” something forever.

11. “Buried in many body structures, including the skin, joints, and muscles, the somatosensory sensors provide important equilibrium information as they respond to pressure and movement. The sensations enable you to know the relative position of your arms, legs, and body. This system is commonly called the ‘seat-of-the-pants’ sense because early pilots believed that they could determine which direction was down by analyzing which portions of their bodies were subject to the greatest amount of pressure. On the ground, the pull of gravity squeezes the pressure sensors in various positions of the body, indicating in which direction the earth lies. But in flight, centrifugal forces, combined with the pull of gravity, result in G-forces that make the seat-of-the-pants unreliable as an attitude indicator” [10].

12. Visual flight rules refer to a set of regulations under which a pilot may operate when weather conditions meet certain minimum requirements. The requirements are designed to promote sufficient visibility.

13. Defining the inputs takes more time than anyone expects because the design team must guess what the inputs will be used for. Unfortunately, not until you have defined the primary outputs with some specificity do the “true” inputs become clear.

14. Today, using computer-aided design and drafting (CADD) systems, architects routinely develop computer models that allow users, developers, and others to “walk through” the finished structure.

REFERENCES

1. Jones, Capers. “Why Flawed Software Projects Are Not Cancelled in Time.” Cutter IT Journal, Vol. 16, No. 12, December 2003.

2. McGroddy, James C., and Herbert S. Lin (eds.). A Review of the FBI’s Trilogy Information Technology Modernization Program. Committee on the FBI’s Trilogy Information Technology Modernization Program, National Research Council. National Academies Press, 2004.

3. Orr, Ken. “Five Big Questions.” Cutter Consortium Agile Project Management E-Mail Advisor, 4 December 2003.

4. Orr, Ken. “Answering Big Questions 1-3.” Cutter Consortium Agile Project Management E-Mail Advisor, 30 December 2003.

5. Orr, Ken. “The Last of the Five Big Questions.” Cutter Consortium Agile Project Management E-Mail Advisor, 29 January 2004.

6. Orr, Ken. Structured Systems Development. Yourdon Press, 1977.

7. Petroski, Henry. To Engineer Is Human: The Role of Failure in Successful Design. St. Martin’s Press, 1985.

8. “Royal Bank of Canada Computer Glitch Causes Payments Chaos.” Finextra, 4 June 2004 (www.finextra.com/topstory.asp?id=11948).

9. Sobel, Dava. Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time. Penguin, 1996.

10. US Air Force. “Flying Operations, Instrument Flight Procedures.” Air Force Manual 11-217, Vol. 2, 6 August 1998.

11. Varon, Elana. “For the IRS There’s No EZ Fix.” CIO, 1 April 2004.

