BETTER SOFTWARE, JUNE 2007, www.StickyMinds.com

By Lisa Crispin


As a tester, I've always believed that defect-tracking systems are a necessary tool in the software development toolbox, like source code control and a database. In some situations, I've found these systems to be helpful repositories of knowledge; other times they were required to keep defects from falling into a big black hole. I've always assumed that every software development team needs some efficient way to track defects, make sure they get fixed, discover whether the same ones are recurring, investigate their underlying causes, and notice if they're concentrated in certain parts of an application.

But recently, I've learned that some software developers, who are applying lessons learned from lean manufacturing, don't log defects. Basically, a log is an inventory, and the lean community considers any kind of inventory a liability. As Tom and Mary Poppendieck write in their book Implementing Lean Software Development:

Defect tracking systems are queues of partially done work, queues of rework if you will. Too often we think that just because a defect is in a queue, it's OK, we won't lose track of it. But in the lean paradigm, queues are collection points for waste.

Rather than tracking defects found during development, some software teams simply fix them as soon as they are discovered. They don't keep any record of the defect except a test written to reproduce the problem, prove that the fix worked, and detect any future regressions. These teams may track bugs found after release but not bugs found "internally" by developers or testers.

But rather than a liability, could a database of defects provide some value? Could software developers benefit by looking for patterns in the types of defects they create and in so doing improve their practices or processes to prevent them in the future? Should we track defects so we can learn from them? Or does the efficiency of "fix 'em and forget 'em" outweigh the benefits of recording defects in a tracking system?

I asked a number of people working "in the trenches" of software development how they approach this question, and I took a hard look at my own team's experience. Here's what I discovered. (See the StickyNotes for references.)

The Case for Defect Tracking

Although most of our project's defects are fixed within a day or two, my team records almost all of them in a defect-tracking system. If it's a bug in newly checked-in code and the programmer fixes it right away, we usually don't log it. But if it won't get fixed immediately, we record the steps to reproduce the problem in the tracking system and stick an index card with the bug number and subject on our task board. All bugs found in production also are logged, and if they are a high priority they get stuck on the task board to be fixed that day.

Sometimes the programmers put detailed notes in the defect report about the issue they found and fixed. For example, in a defect report two months ago, a developer recorded his analysis of the problem, noted an immediate solution via a data update, and then explained a more detailed code solution. He also documented other issues we had discussed and our decision that the software was working as designed. It would have been difficult to put that much detail into comments within the code or a test case. This week, a previously undetected problem surfaced in the same code. Having detailed notes from a related problem saved us a lot of time in analyzing the second issue.

I was surprised that our programmers find the defect-tracking system quite useful. The ability to read clear, detailed steps to reproduce the problem saves them time. If we didn't use defect-tracking software, we could write these details on cards or paper, or perhaps annotate screen prints, but it probably wouldn't be as easy to use. Of course, the programmer who plans to fix the bug could just talk to the tester who found the defect, but what if that tester isn't around at the time the programmer starts working on the bug?

Like many organizations, we have an old and rather ugly legacy system that will continue to haunt us for years to come. The business folk don't want us to spend time fixing the low-priority defects; they'd rather have us work on new features. But if we rewrite part of the old system, it's useful to know its current problems. Besides, every once in a while we have time to repair a few low-priority defects. We couldn't do that without easily retrievable defect records.

Some teams mine their "bugbases" for more information, using root cause analysis to reduce their defect rates. Chris Wheeler recommends this and comments:

There is some value to having project memory in a more durable form than the collective conscience of the team. This is because, in real life, teams do disband, projects do get outsourced to teams in different time zones, products do move into sustaining groups, and sometimes the brains don't go along with the product. Sometimes the tracking system facilitates better collaboration.

Tracking defects doesn't necessarily mean using an automated tool. Dave Rooney worked on a team that took a more visual approach. When testers found a problem, they printed screen shots, wrote down the steps to reproduce the problem, discussed it with a developer, and put their sheets in a pink file folder in the development area. Developers would triage each defect and work at fitting it into the schedule. Fixed defects were moved to a blue file folder and retested. Dave notes:

The file folders had an interesting visual impact: you could check at a glance how many bugs were being found. If the folder started to get more than a few bug reports, we'd "pull the cord" and stop to figure out as a team what was wrong.

To me, an important quality of any defect-tracking system is low overhead. While many people associate defect tracking with heavyweight, complex systems and processes, it doesn't have to be that way. If your bugbase is weighing you down, look for alternatives. Brad Appleton advises that defect reports should be kept to the bare necessities, and make sure it's really a defect when you report it. He emphasizes that bugbases should enable communication and collaboration, not become "the wall" over which you throw things back and forth. Talking over an issue with another team member might help narrow it down or determine it isn't a bug.

Some industry experts maintain that these days of regulatory compliance à la Sarbanes-Oxley may require us to record historical information such as reported defects and their dispositions. Jim Shore acknowledges that you may need a formal way to track defects. However, he cautions:

I never assume that a database will be necessary until the requirements of the situation prove that it is. Even then, I look for ways to use our existing process to meet the regulatory or organizational requirements. Although some shudder at the informality, archiving red bug cards in a drawer may be enough.

Finally, users may want to know the status of their problem: has it been fixed yet? If not, is it being worked on? Janet Gregory notes:

One of the important uses of a tracking system is for generating client reports. Clients want to know which bugs were fixed and released in the latest version. Most clients want to test and to make certain the fixes were done to their satisfaction.

The Case for Defect "Fix & Forget"

Apart from the "lean" idea that an inventory of defects is a liability rather than an asset, proponents of simply fixing bugs as they are found and not recording them at all often find that bugbases may be a dumping ground, cluttered with bugs that never will see the light of day. Our bug database contains hundreds of bugs, mostly from the legacy system, that probably will never get fixed.

Another complaint is that tracking systems don't tend to promote collaboration. As Jim Shore puts it:

Explicitly not providing a database helps create the attitude that bugs are an abnormality. It also removes the temptation to use the database as a substitute for direct, in-person communication.

Ron Jeffries shares this viewpoint:

I have never participated in a really great meeting with people sitting around the issues database.

The preferred approach to handling defects in the "lean" school is along the lines of what Ron personally does:

1. Write a test showing that the bug exists.
2. Add it to the test suite.
3. Make the test run.
4. Take no other action.

Now the defect not only is fixed, but if that particular piece of code fails again the same way, we have a regression test to catch it. Yes, other scenarios still may produce unforeseen bugs, but preventing regressions will give you more time to look for those.
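Ron's four steps can be sketched in a few lines of test code. This is an illustration only: the function, its off-by-one bug, and the test name are all hypothetical, not from any real project.

```python
# A hypothetical bug: paginate() dropped the last page whenever the item
# count was an exact multiple of the page size (an off-by-one error).
def paginate(items, page_size):
    # Fixed implementation; the buggy version iterated over
    # range(0, len(items) - 1, page_size) and lost the final page.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

# Step 1: a test that reproduces the reported defect.
# Steps 2 and 3: it lives in the suite and passes once the fix is in.
# Step 4: no bug report is filed; the test itself is the record.
def test_paginate_exact_multiple_keeps_last_page():
    pages = paginate(list(range(10)), page_size=5)
    assert len(pages) == 2
    assert pages[-1] == [5, 6, 7, 8, 9]
```

The test's name and assertions carry the "documentation" that would otherwise live in a bug report: anyone who breaks pagination the same way again gets an immediate, specific failure.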

If you think the whole idea of "fix and forget" is loony, at least in your own situation, take a fresh look at why you're tracking defects. When I fretted that without recording a defect in a tracking system I'd forget the details, Michael Bolton offered me this advice:


If you have written a defect down or recorded it in some way, you have an issue tracker. The questions then become "What do you value in an issue tracker?" and "Whose needs are you trying to satisfy?" If you're working on a project where the team is dispersed worldwide, a Moleskine notebook is probably not the place to record defects for the entire team; JIRA might be a better choice. If you're taking notes about a bug, preparing for a face-to-face conversation with a developer ten minutes hence, JIRA might be overkill and the Moleskine might be just the thing. If you're going to pass your notes to the developer, an index card or two might be preferable, since you don't want to give her your whole Moleskine. If you want to keep your notes in order, the Moleskine is probably better than index cards, because being unbound they might get out of order. If you want to record test ideas as they occur to you, free form, JIRA doesn't seem like a good choice; if you want to be able to filter out all the defects assigned to a particular developer, JIRA can produce a report in an instant. What do you want to do today? You might want to use multiple tools to accomplish multiple tasks.

(See the StickyNotes for more on Moleskines and JIRA.)

To Track or Not to Track? That Is the Question

Even teams that "fix and forget" are likely to track production bugs. Most businesses want reports with information about defects. Andreas Zeller's book Why Programs Fail asserts that as a manager you must be able to answer questions such as:

• Which problems currently are open?
• Which are the most severe problems?
• Did similar problems occur in the past?
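Zeller's questions are exactly the kind a queryable defect database answers well. A minimal sketch, assuming a hypothetical SQLite "bugbase" whose schema, column names, and sample data are invented for illustration:

```python
import sqlite3

# A hypothetical minimal bugbase; not the schema of any real tracker.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE defects (
    id INTEGER PRIMARY KEY,
    summary TEXT,
    severity INTEGER,   -- 1 = most severe
    status TEXT,        -- 'open' or 'fixed'
    component TEXT)""")
con.executemany(
    "INSERT INTO defects (summary, severity, status, component) "
    "VALUES (?, ?, ?, ?)",
    [("Crash on save", 1, "open", "editor"),
     ("Typo in dialog", 3, "fixed", "editor"),
     ("Slow report export", 2, "open", "reports")])

# Which problems currently are open?
open_bugs = con.execute(
    "SELECT summary FROM defects WHERE status = 'open'").fetchall()

# Which is the most severe open problem?
most_severe = con.execute(
    "SELECT summary FROM defects WHERE status = 'open' "
    "ORDER BY severity LIMIT 1").fetchone()

# Did similar problems occur in the past (same component, already fixed)?
past_editor = con.execute(
    "SELECT COUNT(*) FROM defects "
    "WHERE component = 'editor' AND status = 'fixed'").fetchone()[0]
```

A Moleskine or a drawer of red cards can hold the same facts, but it cannot answer questions like these in an instant, which is much of the managerial case for a database.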

So while you still might track production bugs, your focus should be on bug prevention. A large number of bugs, whether you're tracking them or not, is a red flag that needs investigation. Mary Poppendieck shared these observations with me when I asked her about my team's issues with giving up the defect-tracking system:

The objective is not to get rid of a defect tracking system; it is to not need a defect tracking system. There is a big difference. The trick is to expose a defect the moment it occurs. Now this is far easier said than done, but it should be the team's objective. In other words, stop worrying about the defect database and start worrying about why you are still creating code where defects are discovered a significant amount of time after the code has been written.

Jean McAuliffe echoes these sentiments:

The goal is defect elimination, not defect discovery.

That's a lofty goal, but you can realistically strive to reduce the number of defects found during testing or in production. As Mary and Jean explained, eliminating many defects requires disciplined practices, such as test-driven development (writing tests in concert with the code), continuous integration, and designing both architecture and code for testability. The time span between coding, integrating, and testing must be short enough that bugs are caught while their cause is still obvious, making them easier to fix. Techniques such as automating regression testing and taking time for diligent exploratory testing as soon as the code is written help teams lower their defect rate.

Jean pointed out that, like me, a lot of testers "grew up" when defect-tracking systems were often the only way to communicate issues to the programmers. I expect this is still true for many testers. But for teams that have adopted disciplined practices and have built solid processes, there might be simpler ways to deal with defects.

If we don't track all of our defects, what happens to the knowledge of what defects occurred and where and how they were fixed? As Ron Jeffries pointed out to me, other ways exist for a team to learn from mistakes and improve its practices and processes. Retrospectives are an excellent tool for reviewing what's been working well and what hasn't, identifying problem areas such as a high number of defects in one area of the application, and identifying actions to address them. If you do this on a regular basis, you might not have much to learn from a defect database. There is simply no substitute for face-to-face communication.

Exploring this topic led me to understand that the important debate isn't over whether we ought to track defects that already have been discovered. It's about how we can learn to minimize defects in the code we deliver in the future. If you're working on a buggy legacy system, root cause analysis of defects in a bugbase could help your team identify high-risk areas and focus your efforts on rewriting them in a testable, high-quality fashion. If you're on a "greenfield" project with a collocated project team and write code in a way that produces few defects, why take on the overhead of a defect-tracking system?

Even if you have a defect database, you might sometimes choose not to use it. Janet Gregory offers these words of wisdom:

A defect-tracking system is not meant as a communication tool. Nothing can take the place of discussions with the developers. The best thing a tester can do is to talk to the developer as soon as a problem is found. If they can fix it immediately, there is no need to enter a report, because you have a test that will catch it if it happens again. The issue is dealt with quickly and avoids waste.

If you use a defect-tracking system, make sure it doesn't replace direct communication. If you're spending a lot of time trying to find out if someone has fixed a defect yet, you may need a defect-tracking system (or a different one, if you have one and still can't determine the status of defects). If you already have a defect database, consider mining it for information about where defects cluster, and focus on cleaning up the related code. Most importantly, make sure all your tools, bug trackers included, fit your needs and help your team improve, rather than getting in your way.

Whether or not you maintain a defect database, see if your team can fix defects as soon as they are discovered, and make sure every defect has a test written for it that will catch future regressions as well as provide information about the defect. Fixing bugs as soon as they arise is generally a good practice for everyone. Surprisingly (at least, it was a surprise to me), your business managers may not want you to fix every single bug found. They may prefer that the development team concentrate on delivering new, valuable features, instead of spending time on hard-to-reproduce bugs or problems that users can work around.

Most importantly, your team should continually review its progress, identify problem areas, and collaborate on ways to improve. If you already have a big backlog of bugs, they might provide some useful information to help you focus your efforts. There are many practices that may help prevent bugs. Write testable code; rewrite buggy areas test-first. Try shortening the cycle of coding, integrating, and testing so that programmers get quick feedback about code quality. Involve business experts so that requirements are well understood.

StickyNotes

For more on the following topics go to www.StickyMinds.com/bettersoftware.

• References
• Moleskines
• JIRA

In the case of our team, we may find in a year or two that we've gotten really good at writing defect-free code, we've rewritten the really icky parts of our legacy system, and we have so few production defects that we don't feel the need to log them in an online database. We can decide then to stop using it, or we can decide that it's not a lot of extra work to log defects in the system and that it's a useful knowledgebase to keep around.

We need to keep our focus on the right target. We want to deliver the best quality code that we can, and we want to deliver value to the business in a timely manner. Projects succeed when the people working on them are able to do their best work. Our focus should be on improving communication and facilitating collaboration. If we encounter many defects, we need to investigate the real source of the problem. If we need a defect-tracking database to do that, so be it. If our team works more efficiently by documenting defects in test cases and fixing them right away, let's do that. If some combination of the two approaches supports our ability to improve continually, then that is right for us. {end}

Since 2000, Lisa Crispin has been a tester on agile teams developing Web-based applications. She co-authored Testing Extreme Programming (Addison-Wesley, 2002) with Tip House and is a regular contributor to Better Software magazine. Read more about Lisa's work at lisa.crispin.home.att.net.