

Page 1: Software Test & Performance issue Dec 2008

BEST PRACTICES: SCM Tools

A BZ Media Publication

VOLUME 5 • ISSUE 12 • DECEMBER 2008 • $8.95 • www.stpmag.com

Reeled In By The Allure Of Test Tools? School Your Team On Putting Test Automation Into Practice (page 12)

Teach Old Apps New Testing Tricks

Using Quality Gates To Prevent Automation Project Failure

Page 3: Software Test & Performance issue Dec 2008

Contents
A BZ Media Publication
VOLUME 5 • ISSUE 12 • DECEMBER 2008

12 • COVER STORY: Reeled In By The Allure Of Fancy Test Automation Tools?
Here are some real-world examples that can school your team on the ways of putting test automation into practice.
By Aaron Cook and Mark Lustig

18 • Teach Old Apps Some New Tricks
When redesign's not an option, and adding testability interfaces is difficult, you need ways to improve the testability of existing legacy apps.
By Ernst Ambichl

27 • Quality Gates For the Five Phases Of Automated Software Testing
"You shall not pass," someone might bellow, as if protecting the team from a dangerous peril. And so might your team if it embarks on the journey through the five phases of automated testing.
By Elfriede Dustin


Departments

5 • Editorial
How many software testers are there in the world, anyway?

6 • Contributors
Get to know this month's experts and the best practices they preach.

7 • ST&Pedia
Industry lingo that gets you up to speed.

9 • Out of the Box
News and products for testers.

11 • Feedback
It's your chance to tell us where to go.

32 • Best Practices
SCM tools are great, but they won't run your business or make the coffee.
By Joel Shore

34 • Future Test
When automating Web service testing, there's a right way and a wrong way.
By Elena Petrovskaya and Sergey Verkholazov

When redesign’s not an option, andadding testability interfaces is difficult,you need ways to improve testability ofexisting legacy apps.

By Ernst Ambichl


Page 5: Software Test & Performance issue Dec 2008


Ed Notes
How Many Software Testers Are Out There?
By Edward J. Correia

What is the size of the software testing market? How many software testers exist in the world?

I was asked that question recently, and I had to admit that I had no idea of the answer. My estimate was about 250,000, but that was just a guess. It was not based on research, statistical measurement or anything at all, really. It was just a number plucked from thin air.

Numbers I do know for sure are these: The circulation of this magazine is about 25,000. My e-mail newsletter goes to another 40,000. Our conferences (there are three of them each year) attract another few thousand software testers, QA professionals and senior test managers. Of course, there's some overlap between magazine and newsletter readers and conference attendees, so let's say BZ Media is reaching about 60,000 unique testers. Is that a lot, or are we just scratching the surface?

To help find the answer, I contacted Thomas Murphy, a research analyst with Gartner. While he couldn't provide even an approximate body count, he said that sales of software testing tools this year are expected to yield US$2.2 billion in revenue for the likes of Borland, Hewlett-Packard and other AD tool makers. That figure includes about $1.5 billion in test-tool revenue for so-called "distributed" platforms such as Linux, Mac OS X and Windows, and another $700 million for mainframes.

As for people, Murphy estimates that for the enterprise—that is, companies building applications for internal use—the most common ratio of developers to testers is about 5-to-1. That ratio can be quite different for ISVs. "At Microsoft, for instance, the ratio is almost 1-to-1. But most companies doing a good job [of QA] are in the 1-to-3 or 1-to-5 range." However, the ratio in about a third of companies is around 1-to-10, he said, which skews the average. "We focus on the enterprise. ISVs tend to have a higher saturation of testers. I've never seen an enterprise with a 1-to-1 ratio; maybe with certain teams [working on] critical apps. So I would say that 1-to-5 as an industry average would be close."

Murphy points to what he called the flip side of that argument. "How many lines of code does it take to test a line of code?" If, let's say, that ratio was 10-to-1, "how could I have three developers cranking out code and expect one tester to keep up with that?" Of course, test tools and automation have the potential to help the tester here, but his point is well taken; there are not enough testers out there.

Which perhaps helps to explain the meteoric rise of test-only organizations cropping up in India. "We've seen a huge growth of Indian off-shore groups where all they do is testing. Tata has spun out testing as its own entity," he said, referring to the gigantic Indian conglomerate. "They wanted [the testing organization] to be independent, with IV&V characteristics.

"Testing has been a growth business in India. There's a huge number of individuals doing manual testing, [and] building regression suites and frameworks for package testing." They've also had to shift their processes to adapt to changes in technology. "As they get deeper into testing Web 2.0 and SOA, just to get to the same level of quality they used to have requires tremendous increases in their own quality practices."

Still, the number of software testers in the world remains elusive. I've only just begun to search.


VOLUME 5 • ISSUE 12 • DECEMBER 2008


President Ted Bahr

Executive Vice President Alan Zeichick

Software Test & Performance (ISSN #1548-3460) is published monthly by BZ Media LLC, 7 High Street, Suite 407, Huntington, NY 11743. Periodicals postage paid at Huntington, NY, and additional offices.

Software Test & Performance is a registered trademark of BZ Media LLC. All contents copyright 2008 BZ Media LLC. All rights reserved. The price of a one-year subscription is US $49.95, $69.95 in Canada, $99.95 elsewhere.

POSTMASTER: Send changes of address to Software Test & Performance, PO Box 2169, Skokie, IL 60076. Software Test & Performance Subscriber Services may be reached at [email protected] or by calling 1-847-763-9692.

Director of Circulation

Agnes [email protected]

EDITORIAL

SALES & MARKETING

READER SERVICE

Art Director
Mara Leonardi

ART & PRODUCTION

BZ Media LLC
7 High Street, Suite 407
Huntington, NY 11743
+1-631-421-4158
fax [email protected]

Editor
Edward J. Correia
+1-631-421-4158 [email protected]

Copy Desk
Adam LoBelia
Diana Scheben

Editorial Director
Alan Zeichick
[email protected]

Contributing Editors
Matt Heusser
Chris McMahon
Joel Shore

Publisher

Ted Bahr
+1-631-421-4158 x101

[email protected]

Associate Publisher

David Karp
+1-631-421-4158 x102

[email protected]

Advertising Traffic

Nidia Argueta
+1-631-421-4158 [email protected]

List Services

Lisa [email protected]

Reprints

Lisa [email protected]

Accounting

Viena Ludewig
+1-631-421-4158 [email protected]

Customer Service/Subscriptions

[email protected]


Page 6: Software Test & Performance issue Dec 2008

Contributors

In her upcoming book, "Implementing Automated Software Testing" (Addison-Wesley, Feb. 2009), ELFRIEDE DUSTIN details Automated Software Testing Process best practices for test and QA professionals. In this third and final installment on automated software testing, which begins on page 27, she provides relevant excerpts from that book on processes describing the use of quality gates at each phase of a project as a means of preventing automation failure.

Elfriede has authored or collaborated on numerous other works, including "Effective Software Testing" (Addison-Wesley, 2002), "Automated Software Testing" (Addison-Wesley, 1999) and "Quality Web Systems" (Addison-Wesley, 2001). Her latest book, "The Art of Software Security Testing" (Symantec Press, 2006), was co-authored with security experts Chris Wysopal, Lucas Nelson, and Dino Dai Zovi.

Once again we welcome ERNST AMBICHL, Borland's chief scientist, to our pages. In the March '08 issue, Ernst schooled us on methods of load testing early in development to prevent downstream performance problems. This time he tells us how to make legacy and other existing applications more testable when redesign is not an option, and adding testability interfaces would be difficult.

Ernst served as chief technology officer at Segue Software until 2006, when the maker of SilkTest and other QA tools was acquired by Borland. He was responsible for the development and architecture of Segue's SilkPerformer and SilkCentral product lines. For Borland, Ernst is responsible for the architecture of Borland's Lifecycle Quality Management products.


If you’ve ever been handed a software-test automation tool and told that it will makeyou more productive, you’ll want to turn to page 12 and read our lead feature. It waswritten by automation experts AARON COOK and MARK LUSTIG, who themselves havetaken the bait of quick-fix offers and share their experiences for making it work.

AARON COOK is the quality assurance practice leader atCollaborative Consulting and has been with the companyfor nearly five years. He has led teams of QA engineers andanalysts at organizations ranging from startups to large multi-nationals. Aaron has extensive experience in the design,development, implementation and maintenance of QA proj-ects supporting manual, automated and performance test-ing processes, and is a member of the American Society forQuality. Prior to joining Collaborative, he managed the testautomation and performance engineering team for a start-up company.

MARK LUSTIG is the director of performance engineeringand quality assurance at Collaborative Consulting. In addi-tion to being a hands-on performance engineer, he special-izes in application and technical architecture for multi-tieredInternet and distributed systems. Prior to joining Collaborative,Mark was a principal consultant for CSC Consulting. Bothmen are regular speakers at the Software Test & Performanceconference.


TO CONTACT AN AUTHOR, please send e-mail to [email protected].

Page 7: Software Test & Performance issue Dec 2008

KEYWORD DRIVEN
An alternative to record/playback is to isolate user interface elements by IDs, and to examine only the specific elements mentioned. Keyword-driven frameworks generally take input in the form of a table, usually with three columns, as shown in Table 1.

Some versions omit the Command, or verb, where the framework assumes that every verb is a "click" or that certain actions will always be performed in a certain order. This approach is often referred to as data-driven.

Where record/playback reports too many false failures, keyword-driven frameworks do not evaluate anything more than the exact elements they are told to inspect, and can report too many false successes. For example, in Table 1, if the software accidentally also included a cached last name, and wrote "Hello, Matthew Heusser," this would technically constitute failure but would evaluate as true.
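To make the keyword-driven idea concrete, here is a minimal sketch in Python of a table-driven runner. The rows mirror Table 1; the driver object and its method names are hypothetical stand-ins for whatever GUI or Web automation library a team actually uses, not any specific product's API.

ROWS = [
    ("type", "your_name_txt", "Matthew"),
    ("click", "submit_btn", None),
    ("wait_for_text_present_ok", "body", "Hello, Matthew"),
]

def run_keyword_table(driver, rows):
    """Dispatch each (command, element, argument) row to a driver method of the same name."""
    for command, element, argument in rows:
        action = getattr(driver, command)      # e.g. driver.type, driver.click
        if argument is None:
            action(element)                    # commands like click take only the element ID
        else:
            action(element, argument)          # commands like type carry a data argument

class FakeDriver:
    """Stand-in driver that just logs actions; a real suite would plug in its own tool here."""
    def __getattr__(self, command):
        return lambda *args: print(command, *args)

run_keyword_table(FakeDriver(), ROWS)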

MODEL DRIVEN TESTING (MDT)
Most software applications can be seen as a list of valid states and transitions between states, also known as a finite state machine. MDT is an approach popularized by Harry Robinson that automates tests by understanding the possible valid inputs and randomly selecting a choice and value to insert. Some MDT software systems even record the order the tests run in, so they can be played back to recreate the error. Under the right circumstances, this can be a powerful approach.
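As a rough illustration of the model-driven idea (a sketch, not Harry Robinson's actual tooling), the snippet below walks a small finite state machine at random and records the path so a failing sequence can be replayed. The login/logout model is invented for the example.

import random

# Hypothetical model: state -> {action: next_state}
MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in":  {"view_report": "logged_in", "logout": "logged_out"},
}

def random_walk(steps=20, seed=None):
    """Randomly choose valid actions, recording the path so a failure can be replayed."""
    rng = random.Random(seed)
    state, path = "logged_out", []
    for _ in range(steps):
        action = rng.choice(list(MODEL[state]))
        path.append(action)
        state = MODEL[state][action]
        assert state in MODEL, f"model left a known state after {path}"  # oracle placeholder
    return path

print(random_walk(seed=42))   # a fixed seed makes the recorded sequence reproducible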

AUTOMATED UNIT TESTS
Software developers commonly use Test-Driven Development (TDD) and automated unit tests to generate some confidence that changes to the software did not introduce a serious "break," or regression. Strictly, TDD is unit testing typically developed in the same language as the code, and does not have the GUI integration issue challenges associated with automat-

Even the experts don't agree on the "right" way to approach test automation. If there was one, it would address one specific situation—which might or might not match yours. What we might all agree on are which problems are the most challenging, and some of the ways to attack them. We'll start with two of the most common problems of test automation:

The Minefield
Exploratory testing pioneer James Bach once famously compared the practice of software testing to searching for mines in a field. "If you travel the same path through the field again and again, you won't find a lot of mines," he said, asserting that it's actually a great way to avoid mines.

Automated tests that repeat the same steps over and over find bugs only when things stop working. In some cases, as in a prepared demo in which the user never veers from a script, that may be exactly what you want. But if you know that users won't stick to the prepared test scripts, the strategy of giving them a list of tests to repeat may not be optimal.

The Oracle
While injecting randomness solves the minefield problem, it introduces another. If there are calculations in the code, the automated tests now need to calculate answers to determine success or failure. Consider testing software that calculates mortgage amounts based on a percentage, loan amount, and payment period. Evaluating the correct output for a given input requires another engine to calculate what the answers should be. And to make sure that answer is correct, you need another oracle. And to evaluate that...and on and on it goes.
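The mortgage example can be made concrete with a sketch like the one below: the test computes its expected value with the standard annuity formula, which is exactly the regress the column describes, since the oracle itself is unverified. The function names and figures are illustrative only.

def monthly_payment(principal, annual_rate, years):
    """Amortized monthly payment: the 'oracle' the automated test must trust."""
    r = annual_rate / 12.0
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def check_mortgage_calculator(app_result):
    # The expected value comes from a second calculation engine -- itself unverified.
    expected = monthly_payment(200_000, 0.06, 30)
    assert abs(app_result - expected) < 0.01

# A real AUT would supply app_result; here the oracle checks itself, which is the point.
check_mortgage_calculator(monthly_payment(200_000, 0.06, 30))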

RECORD/PLAYBACK
In the 1990s, a common automation strategy was to record a tester's behavior and the AUT's responses during a manual test by use of screen capture. Later, these recorded tests could be played back repeatedly without the presence of the manual tester. Unfortunately, these recordings are fragile. They're subject to (false) failure due to changes in date, screen resolution or size, icons or other minute screen changes.

The Automation Automaton

ST&Pedia: Translating the jargon of testing into plain English


TABLE 1

Command                   Element        Argument
type                      your_name_txt  Matthew
click                     Button         Submit
wait_for_text_present_ok  Body           hello, Matthew

continued on page 33 >

Matt Heusser and Chris McMahon

Matt Heusser and Chris McMahon are career software developers, testers and bloggers. They're colleagues at Socialtext, where they perform testing and quality assurance for the company's Web-based collaboration software.

Q: What would your answers be?
• Can we shrink the project schedule if we use test automation?
• Now that the tests are repeatable...are they any good?
• Which heuristics are you using?
• Are we doing an SVT after our BVT?
• Does the performance testing pass?

A: ST&Pedia will help you answer questions like these and earn the respect you deserve.

Upcoming topics:

January: Change Management, ALM
February: Tuning SOA Performance
March: Web Perf. Management
April: Security & Vuln. Testing
May: Unit Testing
June: Build Management

Page 9: Software Test & Performance issue Dec 2008

Zephyr in late October launched version 2.0 of its namesake software test management tool, which is now available as a SaaS in Amazon's Elastic Compute Cloud. Zephyr gives development teams a Flex-based system for collaboration, resource, document and project management, test-case creation automation and archiving, defect tracking and reporting. The system was previously available only as a self-hosted system.

"The advantage of a SaaS is that you need no hardware," said Zephyr CEO Samir Shah. "It's a predeveloped back end, up and running immediately with 24/7 availability, backup, restore, and high-bandwidth access from anywhere." The cost for either system is US$65 per user per month. If you deploy in-house, you also get three free users for the first year.

Also new in Zephyr 2.0 is Zbots, which Shah said are automation agents for integrating Zephyr with other test tools. "These are little software agents that run on the remote machines on which you run Selenium, [HP QuickTest Pro] or [Borland] SilkTest." They let you run all your automated tests from within Zephyr, and bring the results back into Zephyr for analysis, reporting and communication to the team. "You build your automation scripts in those tools," he explained, "then within Zephyr, you get comprehensive views of all test cases so you can look at the coverage you have across manual and automated tests."

The system also lets you schedule tests to run. "When it comes time to run automated scripts, we can kick off automation scripts on the target machines, bring results back to Zephyr and update metrics and dashboards in real time." Shah claims that moving test cases into Zephyr is largely automatic, particularly if they're stored in Excel spreadsheets or Word documents, which he said is common. "At the end of the day, you have to give a report of the quality of your software. That process is manual and laborious. We automate that by bringing all that [data] into one place, from release to release, sprint to sprint, via a live dashboard. We give the test team and management a view into everything you are testing."

Also new in Zephyr 2.0 is two-way integration with the JIRA defect tracking system. "Objects stay in JIRA, and if they're modified in one place they're modified in the other. We built a clean Web services integration, and the live dashboard also reflects changes in JIRA."

Version 2.0 improves the UI, Shah said, making test cases faster and easier to write thanks to a full-screen mode. "When a tester is just writing test cases, we expand that for them and let them speed through that process." Importing and reporting also were improved, he said.

Attendees of the Windows Hardware Engineering Conference (WinHEC) in Los Angeles in early November received a pre-beta of Windows 7, the version of Windows that Microsoft said will replace Vista. The software, which Microsoft characterized in a statement as "API-complete," introduces a series of capabilities the company says will "make it easier for hardware partners to create new experiences for Windows PC customers."

The move is intended to "rally hardware engineers to begin development and testing" for its nascent operating system. Windows 7 could be available as soon as the middle of next year, according to a Nov. 7 report on downloadsquad.com. Microsoft's original promise for the operating system when it was introduced in 2007 was that of a three-year timetable that "will ultimately be determined by meeting the quality bar."

Word from Microsoft is that development of Windows 7 is on schedule. In a Nov. 5 statement, Microsoft said touch sensitivity and simplified broadband configuration were ready.

Helping to afford those opportunities (Microsoft hopes) is a new component called Devices and Printers, which reportedly presents a combined browser for files, devices and settings. "Devices can be connected to the PC using USB, Bluetooth or Wi-Fi," said the statement, making no mention of devices connected via serial, parallel, SCSI, FireWire, IDE, PS/2, PCI or SATA. The module also provides wizards.

Claiming to simplify connections to the Internet while mobile, Microsoft has broadened the "View Available Networks" feature to include mobile broadband. For makers of specialized or highly custom devices, Microsoft provides Device Stage, which "provides information on the device status and runs common tasks in a single window customized by the device manufacturer." Microsoft also said that the "Start menu, Windows Taskbar and Windows Explorer are touch-ready" in Windows 7. It's unclear if application developers can access the Windows Touch APIs. WinHEC attendees also received a pre-beta of Windows Server 2008 R2.

Hardware Makers Get Windows 7 Pre-Beta

Zephyr 2.0: Now in the Cloud

The test manager's 'desktop' in Zephyr provides access to information about available human and technology resources, test metrics and execution schedules, defects and administrative settings.


Page 11: Software Test & Performance issue Dec 2008

SOFTWARE IS BUGGED BY BUGS
Regarding "Software is Deployed, Bugs and All," Test & QA Report, July 29, 2008 (http://sdtimes.com/link/3299432634):

I was very interested in this article. I will have to track down the white paper and take a look, but there are some comments that I'd like to make based on the article. First, a disclosure of my own—I used to work for a company that made tools in this space and presently work for a distributor of IBM Rational tools.

Your "paradoxical tidbit" is, I think, absolutely a correct observation. It highlights something I've observed for some time...that many businesses seem to think managers in the IT department don't necessarily need to have an IT background, or they have an IT background but [no] real management training.

The net result is they either don't fully understand the implications of problems with the process, or they understand the problems but don't have the management skills to address them. It's very rare to find a truly well managed IT department. I think what you've described is really a symptom of that.

Some of the other "conclusions" seem less clearly supportable. Statistics is a dangerous game. Once you start looking at the lower-percentage results it is very easy to make conclusions that may be supportable, but often there are other equally supportable conclusions.

For example, the data re time required to field defects... just because it could take 20-30 days to fix a problem, that isn't necessarily the worst outcome for a business. I can think of several cases where this would be acceptable. To list a few...

1. The cost to the business of not releasing the product with defects could be even greater.
2. There is a workaround for the defect.
3. The defect is non-critical.
4. There is a release cycle in place that only allows fixes to be deployed once a month.
I'm sure there are others.

The takeaways are where the real test of a survey lies. From what you've published of the report it seems that the second one is supportable after a fashion. Clearly they need to fix their process, and it would seem obvious that an automated solution should be a part of that. However, I do wonder if they actually got data to support that from the survey.

The first conclusion says there are "debilitating consequences." Again I wonder if the survey actually established that. Clearly there are consequences, but were they debilitating? Was there any data about the commercial impact of the defects? Without that it is hard to say. Yes, we all know about the legendary bugs and their consequences, but that does not automatically imply that all defects are debilitating.

In any event, it is a topic that should be discussed more often, and I enjoyed the article.

Mark McLaughlin
Software Traction
South Australia

Feedback

FEEDBACK: Letters should include the writer's name, city, state, company affiliation, e-mail address and daytime phone number. Send your thoughts to [email protected]. Letters become the property of BZ Media and may be edited for space and style.


Page 12: Software Test & Performance issue Dec 2008

Beyond Tools: Test Automation in Practice

Once you've taken the bait, how to make the most of your catch

By Aaron Cook and Mark Lustig

One of the biggest challenges facing quality teams today is the development and maintenance of a viable test automation solution. Tool vendors do a great job of selling their products, but too often lack a comprehensive plan for putting the automation solution into practice.

This article will help you undertake a real-world approach to planning, developing, and implementing your test automation as part of the overall quality efforts for your organization. You will learn how to establish a test automation environment regardless of the tools in your testing architecture and infrastructure.

We also give you an approach to calculating the ROI of implementing test automation in your organization, an example test automation framework including the test case and test scenario assembly, and rules for maintenance of your automation suite and how to account for changes to your Application Under Test (AUT). We also cover an approach to the overall test management of your newly implemented test automation suite.

The set of automation tools an organization requires should fit the needs, environments, and phases of their specific situation. Automation requires an integrated set of tools and the corresponding infrastructure to support test development, management, and execution. Depending on your quality organization, this set of automation tools and technologies can include solutions for test management, functional test automation, security and vulnerability testing, and performance testing. Each of these areas is covered in detail.

In most organizations we have seen, phases and environments typically include development and unit testing, system integration testing, user acceptance testing, performance testing, security and vulnerability testing, and regression testing. Depending on your specific quality processes and software development methodologies, automation may not be applicable or practical for all testing phases.

Additional environments to consider may include a break-fix/firecall environment. This environment is used to create and test production "hot fixes" prior to deployment to production. A training environment may also exist. An example of this would be for a call center, where training sessions occur regularly. While this is usually a version of the production environment, its dedicated use is to support training.

Automation is most effective when applied to a well-defined, disciplined set of test processes. If the test processes are not well defined, though maturing, it may be practical and beneficial to tactically apply test automation over time. This would happen in parallel to the continued refinement and definition of the test process. A key area where automation can begin is test management.

Test Management
Mature test management solutions provide a repeatable process for gathering

Page 13: Software Test & Performance issue Dec 2008


and maintaining requirements, planning, developing and scheduling tests, analyzing results, and managing defects and issues. The core components of test management typically include test requirements management (and/or integration with a requirements management solution), and the ability to plan and coordinate the testing along with integration with test automation solutions. Many of the test management processes and solutions available today also include reporting features such as trending, time tracking, test results, project comparisons, and auditing. Test managers need to be cognizant of integrating with software change and configuration management solutions as well as defect management and tracking.

These core components can be employed within a single tool or can be integrated using multiple tools. Using one integrated tool can improve the quality process and provide a seamless enterprise-wide solution for standards, communication and collaboration among distributed test teams by unifying multiple processes and resources throughout the organization.

For example, business analysts can use the test management tool(s) (TMT) to define and store application business requirements and testing objectives. This allows the test managers and test analysts to use the TMT to define and store test plans and test cases. The test automation engineers can use the TMT to create and store their automated scripts and associated frameworks. The QA testers can use the TMT to run both manual and automated tests and to design and run the reports necessary to examine the test execution results, as well as to track and trend the application defects. The team's program and project managers can use the TMT to create status reports, manage resource allocation and decide whether an application is ready to be released to production. Also, the TMT can be integrated with other automation test suite tools for functional test automation, security and vulnerability test automation, and performance test automation.

The test management tool allows for a central repository to store and manage testing work products and processes, and to link these work products to other artifacts for traceability. For example, by linking a business requirement to a test case and linking a test case to a defect, the TMT can be used to generate a traceability report allowing the project team to further analyze the root cause of identified defects. In addition, the TMT allows for standards to be maintained, which increases the likelihood that quality remains high. However, getting all resources to leverage this tool requires management sponsorship and support.
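A sketch of the traceability idea, with invented record structures rather than any particular TMT's schema: requirements link to test cases, test cases link to defects, and a report simply walks the links back to the requirement.

# Invented, minimal records; a real TMT stores these with much richer metadata.
requirements = {"REQ-7": "Customer records must be unique"}
test_cases   = {"TC-12": {"covers": "REQ-7", "name": "Reject duplicate customer"}}
defects      = {"DEF-3": {"found_by": "TC-12", "summary": "Duplicate customer accepted"}}

def traceability_report():
    """For each defect, walk defect -> test case -> requirement."""
    for defect_id, defect in defects.items():
        tc = test_cases[defect["found_by"]]
        req = requirements[tc["covers"]]
        print(f"{defect_id}: {defect['summary']} | test: {tc['name']} | requirement: {req}")

traceability_report()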

Functional Test Automation
Functional test automation should be coupled with manual testing. It is not practical for all requirements and functions to be automated. For example, verifying the contents of a printed document will likely best be done manually.


Page 14: Software Test & Performance issue Dec 2008


HOOKED ON TESTING

However, verifying the print formatting for the document can easily be automated, allowing the test engineer to focus efforts on other critical testing tasks.

Functional test scripts are ideal for building a regression test suite. As new functionality is developed, the functional script is added to the regression test suite. By leveraging functional test automation, the test team can easily validate functionality across all environments, data sets, and select business processes. The test team can more easily and readily document and replicate identified defects for developers. This ensures that the development team can replicate and resolve the defects faster. They can run the regression tests on upgraded and enhanced applications and environments during off hours, so that the team is more focused on new functionality introduced during the last build or release to QA. Functional test automation can also provide support for specific technologies (e.g., GUI, text-based, protocol-specific), including custom controls.

Performance Testing
Performance testing tools can be used to measure load/stress capability and predict system behavior using limited resources. Performance testing tools can emulate hundreds and thousands of concurrent users putting the application through rigorous real-life user loads. IT departments can stress an application from end to end and measure response times of key business processes. Performance tools also collect system- and component-level performance information through system monitors and diagnostic modules.

These metrics can be combined to analyze and allow teams to drill down to isolate bottlenecks within the architecture. Most of the commercial and open source performance testing tools available include metrics for key technologies, operating systems, programming languages and protocols.

They include the ability to perform visual script recording for productivity, as well as script-based viewing and editing. The performance test automation tools also provide flexible load distribution to create synthetic users and distribute load generation across multiple machines and data centers/geographic locations.

Security and Vulnerability Testing
Security testing tools enable security vulnerability detection early in the software development life cycle, during the development phase as well as testing phases. Proactive security scanning accelerates remediation, and saves both time and money when compared with later detection.

Static source code testing scans an application's source code line by line to detect vulnerabilities. Dynamic testing tests applications at runtime in many environments (i.e., development, acceptance test, and production). A mature security and vulnerability testing process will combine both static and dynamic testing. Today's tools will identify most vulnerabilities, enabling developers to prioritize and address these issues.

Test Environments
When planning the test automation environment, a number of considerations must be addressed. These include the breadth of applications, the demand for automation within the enterprise, the types of automation being executed and the resources needed to support all system requirements. A key first step is tool(s) selection. Practitioners should consider questions such as:
• Should all tools be from a single vendor?
• How well do the tools interact with the AUT?
• How will tools from multiple vendors work together?
That last issue—that of integration—is key. Connecting tools effectively increases quality from requirements definition through requirements validation and reporting.

The next step is to determine the number of test tool environments required. At a minimum, a production test tools environment will be necessary to support test execution. It is worth considering the need for a test automation development environment as well. By having both a test automation development environment and a production automation environment, automation engineers can develop and execute tests independently and concurrently. The size and scale of each environment also can be managed to enable the appropriate testing while minimizing the overall infrastructure requirement.

The sample environment topology below defines a potential automation infrastructure. This includes:
• Primary test management and coordination server
• Automation test centralized control server
• Automation test results repository and analysis server
• System and transaction monitoring and metrics server
• Security and vulnerability test execution server
• Functional automation execution server(s)
• Load and stress automation execution server(s)
• Sample application under test (AUT), including:
  • Web server
  • Application server(s)
  • Database server

Return on Investment
Defining the ROI of an automation installation can be straightforward. For example, comparisons across time to create a defect before and after test automation, time to create and execute a test plan, and time to create reports before and after test management can all be effective measures.

Test automation is an investment that yields the most significant benefits over time. Automation is most effectively applied to well-defined, disciplined test processes. The testing life cycle is multi-phased, and includes unit, integration, system, performance, environment, and user acceptance testing. Each of these phases has different opportunities for cost and benefit improvements. Additionally, good test management is a key competency in realizing improvements and benefits when using test automation.

There are different ways of defining the return on investment (ROI) of test automation. It is important to realize that while ROI is easily defined as benefits over costs, benefits are more accurately defined as the benefits of automated testing versus the benefits of manual testing, over the costs of automated testing versus the costs of manual testing. Simply put, there are costs associated with test automation, including software acquisition and maintenance, training costs, automation development costs (e.g., test script development), and


Page 15: Software Test & Performance issue Dec 2008


HOOKED ON TESTING

automation maintenance costs (e.g., test script maintenance).

One simple approach to determining when it makes sense to automate a test case is to weigh the cumulative cost of repeated manual execution against the cost of developing and maintaining the automated script, as in the sketch below.
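The following is a common break-even comparison, shown as an assumption rather than as the authors' exact formula: automation pays off once the manual cost saved over all runs exceeds the cost of building and maintaining the script. The numbers are illustrative only.

def automation_pays_off(runs, manual_cost_per_run, automated_cost_per_run,
                        build_cost, maintenance_cost_per_run=0.0):
    """True when cumulative manual cost exceeds cumulative automated cost over 'runs' executions."""
    manual_total = runs * manual_cost_per_run
    automated_total = build_cost + runs * (automated_cost_per_run + maintenance_cost_per_run)
    return manual_total > automated_total

# Illustrative figures in hours: 2 hours manual, 5 minutes automated, 16 hours to build, light upkeep.
print(automation_pays_off(runs=12, manual_cost_per_run=2.0,
                          automated_cost_per_run=0.08, build_cost=16.0,
                          maintenance_cost_per_run=0.25))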

The benefits of automation are twofold, starting with a clear cost savings associated with test execution. First, automated tests are significantly faster to execute and report on than manual tests. Second, resources now have more time to dedicate toward other testing responsibilities such as test case development, test strategy, executing tests under fault conditions, etc.

Additional benefits of automation include:
• Flexibility to automate standalone components of the testing process. This may include automating regression tests, but also actions directly related to test management, including defect tracking and requirements management.
• Increased depth of testing. Automated scripts can enable more thorough testing by systematically testing all potential combinations of criteria. For example, a single screen may be used for searching, with results formatted differently for different result sets returned. By automating the execution of all possible permutations of results, a system can be more thoroughly exercised.
• Increased range of testing. Introduce automation into areas not currently being executed. For example, if an organization is not currently executing standalone component testing, this area could be automated.

Though ROI is a prevailing tool for conveying the value of test automation, the less quantifiable business advantages of testing may be even more important than cost reduction or optimization of the testing process itself. These advantages include the value of increased system quality to the organization. Higher quality yields higher user satisfaction, better business execution, better information, and higher operational performance. Conversely, the negative effects of a low-quality experience may include decreased user satisfaction, missed opportunities, unexpected downtime, poor information, missed transactions, and countless other detriments to the company. In this sense, we consider the missed revenue or wasted expense of ill-performing systems and software, not just the isolated testing process.

Test Automation Considerations
When moving toward test automation, a number of factors must be considered.

1. What to automate: Automation can be used for scenario (end-to-end) based testing, component-based testing, batch testing, and workflow processes, to name a few. The goals of automation differ based on the area of focus. A given project may choose to implement differing levels of automation as time and priorities permit.

2. Amount of test cases to be automated, based on the overall functionality of an application: What is the duration of the testing life cycle for a given application? The more test cases that are required to be executed, the more valuable and beneficial the automated execution of test cases becomes. Automation is more cost effective when required for a higher number of tests and with multiple permutations of test case values. In addition, automated scripts execute at a consistent pace. Human testers' pace varies widely, based on the individual tester's experience and productivity.

3. Application release cycles: Are releases daily, weekly, monthly, or annually? The maintenance of automation scripts can be more challenging as the frequency of releases increases if a framework-based approach is not used. The frequency of execution of automated regression tests

Page 16: Software Test & Performance issue Dec 2008

will yield financial savings as more regression test cycles can be executed quickly, reducing overall regression test efforts from an employee and time standpoint. The more automated scripts are executed, the more valuable the scripts become. It is worth noting the dependency on a consistent environment for test automation. To mitigate risks associated with environment consistency, environments must be consistently maintained.

4. System release frequency: Depending on how often systems are released into the environments where test automation will be executed (e.g., acceptance testing environment, performance testing environment, regression testing environments), automated testing will be more effective in minimizing the risk of releasing defects into a production environment, lowering the cost of resolving defects.

5. Data integrity requirements within and across test cases: If testers create their own data sets, data integrity may be compromised if any inadvertent entry errors are made. Automation scripts use data sets that are proven to maintain data integrity and provide repeatable expected results. Synergistic benefits are attained when consistent test data can be used to validate individual test case functionality as well as multi-test case functionality. For example, one set of automated test cases provides data entry while another set of test cases can test reporting functionality.

6. Automation engineer staffing and skills: Individual experience with test automation varies widely. Training courses alone will result in a minimum level of competency with a specific tool. Practical experience, based on multiple projects and lessons learned, is the ideal means to achieve a strong set of skills, time permitting. A more pragmatic means is to leverage the lessons learned from industry best practices, and the experiences of highly skilled individuals in the industry.

An additional staffing consideration is resource permanence. Specifically, are the automation engineers permanent or temporary (e.g., contractors, consultants, outsourced resources)? Automation engineers can easily develop test scripts following individual scripting styles that can lead to costly or difficult knowledge transfers.

7. Organizational support of test automation: Are automation tools currently owned within the organization, and/or has budget been allocated to support test automation? Budget considerations include:

• Test automation software
• Hardware to host and execute automated tests
• Resources for automation scripting and execution, including test management

Test Automation Framework
To develop a comprehensive and extensible test automation framework that is easy to maintain, it's extremely helpful if the automation engineers understand the AUT and its underlying technologies. They should understand how the test automation tool interacts with and handles UI controls, forms, and the underlying API and database calls.

The test team also needs to understand and participate in the development process so that the appropriate code drops can be incorporated into the overall automation effort and the targeted framework development. This comes into play if the development organization follows a traditional waterfall SDLC. If the dev team follows a waterfall approach, then major functionality in the new code has to be accounted for in the framework. This can cause significant automation challenges. If the development team is following an agile or other iterative approach, then the automation engineers should be embedded in the team. This also has implications for how the automation framework will be developed.

Waterfall Methodology
To estimate the development effort for the test automation framework, the automation engineers require access to the business requirements and technical specifications. These begin to define the elements that will be incorporated into the test automation framework. From the design specifications and business requirements, the automation team begins to identify those functions that will be used across more than one test case. This forms the outline of the framework. Once these functions are identified, the automation engineers then begin to develop the code required to access and validate between the AUT and the test automation tool. These functions often take the form of the tool interacting with each type of control on a particular form. Determining the proper approach for custom-built controls and the test tool can be challenging. These functions often take the form of setting a value (typically provided by the business test case) and validating a result (from the AUT). By breaking each function down into the


HOOKED ON TESTING

TABLE 1: GUTS OF THE FRAMEWORK

A business scenario defined to validate that the application will not allow the end user to define a duplicate customer record.

Components required:
• Launch Application
  LaunchSampleApp(browser, url)
  • Returns message (success or failure)
• Login
  Login(userID, password)
  • Returns message (success or failure)
• Create new unique customer record
  CreateUniqueCustomer({optional} file name, org, name, address)
  • Outputs data to a file including application-generated customer ID, org, name, and address
  • Returns message (success or failure)
• Query and verify new customer data
  QueryCustomerRecord(file path, file name)
  • Returns message (success or failure)
• Create identical customer record
  CreateUniqueCustomer({optional} file name, org, name, address)
  • Outputs data to a file including application-generated customer ID, org, name, and address
  • Returns message (success or failure)
  • Handle successful error message(s) from application
• Gracefully log out and return application to the home page
  Logout
  • Returns message (success or failure)
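The following is a sketch of how the Table 1 components might be chained into the duplicate-customer scenario. The component names come from the table; the stub framework, its return values (including "duplicate_rejected") and the sample data are invented here so the control flow is visible, and a real suite would call the framework's actual implementations.

def run_duplicate_customer_scenario(framework, browser, url, user, password, customer):
    """Drive the Table 1 components end to end; every step reports success or failure."""
    assert framework.launch_sample_app(browser, url) == "success"
    assert framework.login(user, password) == "success"
    assert framework.create_unique_customer(**customer) == "success"
    assert framework.query_customer_record("out/", "customer.csv") == "success"
    # The second, identical create must be rejected by the AUT -- that is the point of the scenario.
    assert framework.create_unique_customer(**customer) == "duplicate_rejected"
    assert framework.logout() == "success"

class StubFramework:
    """Stand-in for the real framework components named in Table 1; behavior is faked."""
    def __init__(self):
        self.seen = set()
    def launch_sample_app(self, browser, url):
        return "success"
    def login(self, user, password):
        return "success"
    def create_unique_customer(self, org, name, address):
        key = (org, name, address)
        if key in self.seen:
            return "duplicate_rejected"   # invented return value for the sketch
        self.seen.add(key)
        return "success"
    def query_customer_record(self, path, name):
        return "success"
    def logout(self):
        return "success"

run_duplicate_customer_scenario(StubFramework(), "firefox", "http://aut.example", "tester", "secret",
                                {"org": "Acme", "name": "Jane Doe", "address": "1 Main St"})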

Page 17: Software Test & Performance issue Dec 2008


HOOKED ON TESTING

simplest construct, the test automation engineer can begin to assemble them into more complex functions easily, without a great deal of code development.

For example, let's say the AUT has a requirement for Login. This is driven from the business requirement. This can also be a good example of a reusable test component. The function Login can be broken down into four separate functions: SetUserID, SetPassword, SubmitForm, and VerifyLogin. Each of these functions can be generalized so that you end up with a series of functions to set a value (in this case, both the UserID and the Password), a generic submit function and a generic validate-result function. These three functions form the basis for the beginning of the development of your test automation framework.

Agile Methodology
In the case of an agile methodology, oftentimes the development begins by coding a test case that will fail. Then the developers write just enough code to pass the test case. Once this is accomplished, the code is added to the hourly/daily build and compiled into the release candidate.
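The "write a failing test first" loop looks roughly like this; a tiny, hypothetical example using Python's built-in unittest module rather than any particular team's harness.

import unittest

def is_duplicate_customer(existing_ids, new_id):
    """Step 2 of the loop: just enough production code to make the tests below pass."""
    return new_id in existing_ids

class DuplicateCustomerTest(unittest.TestCase):
    # Step 1: these tests are written first and fail until is_duplicate_customer exists.
    def test_rejects_duplicate_id(self):
        self.assertTrue(is_duplicate_customer({"C-100", "C-101"}, "C-100"))

    def test_accepts_new_id(self):
        self.assertFalse(is_duplicate_customer({"C-100"}, "C-200"))

if __name__ == "__main__":
    unittest.main()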

For test automation to be successful in this type of environment, the test automation engineers need to be involved up front in the decisions of what tests to automate and what priority to automate them in. In this case, the team determines the list of requirements for the given sprint and assigns priority to the development items. The testers, in conjunction with the rest of the team, determine which test cases will be candidates for this sprint and in what order to begin development. From there, the actual development begins to follow a pattern similar to the earlier waterfall methodology. For example, suppose one of the scrum team decides to develop the AUT login function. The testers will follow the same process to decompose the high-level business process into three discrete technical components: SetValue, SubmitForm, and VerifyResult. Table 1 shows another example. These functions will form the basis for the underlying test automation framework. After each sprint is accomplished, the scrum team performs the same evaluation of functions to build and automate, priority and resources required. From that point forward, the automation framework begins to grow and mature alongside the application under development.

Putting the Pieces Together
When it comes time to execute your automated tests, it is often the case that they must be assembled into multiple technical and business scenarios. By taking a framework-based approach to building the automation components, the job of assembly is straightforward. The business community needs to dictate the actual business scenario that requires validation. They can do this in a number of ways. One way is to provide the scenario to the automation engineers (in a spreadsheet, say) and have them perform the assembly. A more pragmatic approach is for the business community to navigate to the test management system that is used for the storage and execution of the test automation components and to assemble the scenario directly. This makes the job of tracking and trending the results much easier and less prone to interpretation.

Because maintenance is the key to all test automation, the QA team needs to standardize the approach and methods for building all test automation functions. Each function needs to have a standard header/comment area describing the function as well as all necessary parameters. All outputs and return values need to be standardized. This allows for greater flexibility in the development of your automation code by allowing the automation engineer to focus on the tool's interaction with your AUT without having to reinvent the standard output methods. Any QA engineer can pick up that function or snippet of code and know what the output message will be. For example, all WebTableSearchByRow functions return a row number based on the search criteria (Figure 1).

To be successful, the automation effort must have organizational support and be given the same degree of commitment as the initial development effort for the project, and goals for test automation must be identified and laid out up front. Automation should have the support and involvement of the entire team, and the tool(s) should be selected and evaluated from within your environment.

Your test automation team needs a stable and consistent environment in which to develop components of the test automation framework and the discrete test cases themselves. The number and type of tests to be automated need to be defined (functional UI, system-level API, performance, scalability, security), and automators need to have dedicated time to develop the automation components, store and version them in a source code control system, and schedule the test runs to coincide with stable, testable software releases.

Failure in any one of these areas can derail the entire automation project. Remember, test automation cannot completely replace manual testing, and not all areas of an application are candidates for automation. But test automation of the parts that allow it, when properly implemented, can enhance the ability of your quality organization to release consistent, stable code in less time and at lower cost.

FIGURE 1: TRANSPARENT SCALES

WebTableSearchByRow_v3

' This function searches through a webtable to locate any item in the table.
' WebTableSearchByRow_v3 returns the row number of the correct item.
' This function takes 6 parameters:
'   @param wtsbr_browser        ' Browser as defined in the Library
'   @param wtsbr_page           ' Page as defined in the Library
'   @param wtsbr_webtable       ' WebTable as defined in the Library
'   @param wtsbr_rowcount       ' Total row count for the WebTable
'   @param wtsbr_searchcriteria ' String value to search for
'   @param wtsbr_columncount    ' Total column count for the WebTable
' @return Row number for the search criteria
'
' Example:
'   rownum = WebTableSearchByRow_v3(br, pg, wt, rc, sc, cc)
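For comparison only, here is a plain-Python analogue of the Figure 1 contract (the original is QTP-style script; this sketch is not the authors' implementation): given a table as a list of rows, return the 1-based row number whose cells contain the search criteria.

def web_table_search_by_row(rows, search_criteria):
    """Return the 1-based row number containing search_criteria, or 0 if not found."""
    for row_number, cells in enumerate(rows, start=1):
        if any(search_criteria in str(cell) for cell in cells):
            return row_number
    return 0

table = [["Acme", "Jane Doe", "Open"],
         ["Initech", "Bob Smith", "Closed"]]
print(web_table_search_by_row(table, "Bob Smith"))   # -> 2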

Page 18: Software Test & Performance issue Dec 2008

TEACH Your Old Applications To Be More Testable

By Ernst Ambichl

In my 15 years of experience in building functional and performance testing tools, there is one statement I've heard from customers more than any other: "Changing the application to make it more testable with your tool is not an option for my organization." If companies don't change this mindset, then test automation tools will never deliver more than marginal improvements to testing productivity.

Testability improvements can be applied to existing, complex applications to bring enhanced benefits and simplicity to testers. This article will teach you how to enhance testability in legacy applications where a complete redesign of the application is not an option (and special-purpose testability interfaces cannot easily be introduced).

You'll also learn how testability improvements can be applied to an existing, complex (not "Hello World") application, and the benefits gathered through improved testability. This article covers testability improvements from the perspective of functional testing as well as from that of performance and load testing, showing which changes are needed in the application to address either perspective.

You'll also understand the effort needed to enhance testability, with the benefits gathered in terms of increased test coverage and test automation and decreased test maintenance, and how an agile development process will help to successfully implement testability improvements in the application.

What Do I Mean by 'Testability'?
There are many ways to interpret the term testability, some in a general and vague sense and others highly specific. For the context of this article, I define testability from a test automation perspective. I found the following definition, which describes testability in terms of visibility and control, most useful:

Visibility is the ability to observe the states, outputs, resource usage and other side effects of the software under test. Control is the ability to apply inputs to the software under test or place it in specified states. Ultimately, testability means having reliable and convenient interfaces to drive the execution and verification of tests.

Stepchild of the Architecture
The likelihood and willingness of development teams to provide test interfaces varies considerably.

Designing your application with testability in mind automatically leads to better design, with better modularization and less coupling between modules. For modern development practices like extreme programming or test-driven development, testability is one of the cornerstones of good application design and a built-in practice.

Unfortunately, for many existing applications, testability was not a key objective from the start of the project. For these applications, it often is impossible to introduce new special-purpose test interfaces.

Test interfaces need to be part of the initial design and architecture of the application. Introducing them in an existing application can cause major architectural changes, which most often include rewriting extensive parts of the application that no one is willing to pay for. Test interfaces also need a layered architecture. Introducing an additional layer for the testability interfaces is often impossible for monolithic application architectures.

Testability interfaces for performance testing need to be able to sufficiently test the multiuser aspects of an application. This can become complex, as it usually requires a remote-able test interface. Thus, special-purpose testability interfaces for performance testing are even less likely to exist than testability interfaces for the functional aspects of the application.

To provide test automation for applications that were not designed with testability in mind, existing application interfaces need to be used for testing purposes. In most cases this means using the graphical user interface (GUI) for functional testing and a protocol interface or a remote API interface for performance testing. This is the approach traditional

Page 19: Software Test & Performance issue Dec 2008

functional and load testing tools are using.

Of course, there are problems using the available application interfaces for testing purposes, and you will find many examples of their deficiencies, especially when reading articles on agile testing. Some of those examples are:
• Existing application interfaces are usually not built for testing. Using them for testing purposes may use them in a way the existing client of the interface never would have intended. Driving tests through these interfaces can cause unexpected application behavior as well as limited visibility and control of the application for the test tool.
• A GUI with "custom controls" (GUI controls that are not natively recognized by the test tool) is problematic because it provides only limited visibility and control for the test tool.
• GUI controls with weak UI object recognition mechanisms (such as screen or window coordinates, UI indexes, or captions) provide limited control, especially when it comes to maintenance. This results in fragile tests that break each time the application UI changes, even minimally.
• Protocol interfaces used for performance testing are also problematic. Session state and control information is often hidden in opaque data structures that are not suitable for testing purposes. Also, the semantic meaning of the protocol often is undocumented.

But using existing interfaces need not be less efficient or effective than using special-purpose testing interfaces. Often slight modifications to how the application uses these interfaces can increase the testability of an application significantly. And again, using the existing application interfaces for testing is often the only option you have for existing applications.

Interfaces with Testability Hooks

A key issue with testability using an existing interface is being able to name and distinguish interface controls using stable identifiers. Often it's the absence of stable identifiers for interface controls that makes our life as testers so hard.

A stable identifier for a control means that the identifier for a certain control is always the same – between invocations of the control as well as between different versions of the application itself. A stable identifier also needs to be unique in the context of its usage, meaning that there is not a second control with the same identifier accessible at the same time.

This does not necessarily mean that you need to use GUID-style identifiers that are unique in a global context. Identifiers for controls should be readable and provide meaningful names. Naming conventions for these identifiers will make it easier to associate the identifier to the actual control.
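A simple convention could combine a module prefix, a control-type abbreviation and a functional name (this particular pattern is only an illustration; of the identifiers below, just the first one actually appears in the case study that follows, the other two are hypothetical):

tm_btn_login      // button that triggers the login action
tm_txt_username   // text field for the user name
tm_lnk_help       // help link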

Using stable identifiers also avoids the drawbacks of using the control hierarchy for recognizing controls. Control hierarchies are often used if you're using weak control identifiers, which are not unique in the current context of the application. By using the hierarchy of controls, you're providing a "path" for how to find the control.

Ernst Ambichl is chief scientist at Borland.




Basing your control recognition on this path just increases the dependency of the control recognition on other controls and introduces a maintenance burden when this hierarchy changes.

To improve the testability of your application, one of the simplest and most efficient practices is to introduce stable control identifiers and expose them via the existing application interfaces. This practice not only works for functional testing using the GUI but also can be adopted for performance testing using a protocol approach. How to accomplish this for a concrete application is shown next.

Case Study: Testability Improvements in an Enterprise Application

A major theme for the latest release of one of our key applications that we are developing in Borland's R&D lab in Linz, Austria, was to make the application ready for enterprise-wide, global usage. This resulted in two key requirements:

• Providing localized versions of the application.

• Providing significantly increased performance and scalability of the application in distributed environments.

Our application is a Web-based multiuser application using an HTML/Ajax front-end with multiple tiers built on a Java EE infrastructure with a database backend. The application was developed over several years and now contains several hundred thousand lines of code.

To increase the amount of test automation, we wanted to be able to test localized versions of the application with minimal or no changes to the functional test scripts we used for the English version of the product. We also wanted to be able to test the scalability and performance of existing functionality on a regular nightly build basis over the whole release cycle.

These regular performance tests should ensure that we detect performance regressions as early as possible. New functionality should be tested for performance as soon as it was available, and in combination with existing functionality, to ensure it met the scalability and performance requirements. We thought that by executing performance tests on a regular basis, we should be able to continuously measure the progress we were making toward the defined scalability and performance objectives.

Performance Testing

Among the problems we had were fragile performance test scripts. Increasing the amount of test automation for performance tests increases the importance of maintainable and stable performance test scripts. In previous releases, our performance test scripts lacked stability, maintainability and customizability. Often, only small modifications in the application were enough to break performance test scripts.

It was also hard to detect the reason for script failures. Instead of searching for the root cause of broken scripts, performance test scripts were re-recorded and re-customized. Complex test cases were not covered (especially scenarios that change data), as they were hard to build and had a high likelihood of breaking when changes were introduced in the application. Customization of tests was also complicated, and increased the fragility of the tests.

To get better insight into the reasons for the poor testability, we needed to look at the architecture of the application and the interfaces we used for testing:

For Web-based applications, the most common approach for performance testing is to test at the HTTP protocol level. So let's take a look at the HTTP protocol of the application.

Our application is highly dynamic, so HTTP requests and responses are dynamic. This means that they include dynamic session information as well as dynamic information about the active UI controls. While session information (as in many other Web applications) is transported using cookies and therefore automatically handled by most HTTP-based testing tools, the information about actions and controls is represented as URL query string parameters in the following way:

Each request that triggers an action on a control (such as pressing a button, clicking a link, expanding a tree node) uses an opaque control handle to identify the control on the server:

http://host/borland/mod_1?control_handle=68273841207300477604!...

On the server, this control handle is used to identify the actual control instance and the requested action. As the control handle references the actual instance of a control, the value of the handle changes after each server request (as in this case, where a new instance of the same control will be created). The control handle is the only information the server exposes through its interface. This is perfectly fine for the functionality of the application, but is a nightmare for every testing tool (and of course for testers).

What does this mean for identifying the actions on controls in a test tool?

It means that because of the dynamic nature of the requests (control handles constantly change), a recorded, static HTTP-based test script will never work, as it calls control handles that are no longer valid. So the first thing testers need to do is to replace the static control handles in the test script with the dynamic handles generated by the server at runtime. Sophisticated performance test tools will allow you to define parsing and replacement rules that do this for you and create the needed modifications in the test script automatically (Figure 1).

Parse the dynamic control handle from an HTTP response:

WebParseDataBound(<dynamic control handle>, "control_handle=", "!", …)

The function WebParseDataBound parses the response data of an HTTP request using "control_handle=" as the left boundary and "!" as the right boundary and returns the result into <dynamic control handle>. Your tool's API will vary, of course, but might look something like this.

Use the dynamic control handle in an HTTP request:

http://host/borland/mod_1?control_handle=<dynamic control handle>!...

FIG. 1: PARSING WORDS

WebPage("http://host/borland/login/");
WebParseDataBound(hBtnLogin, "control_handle=", "!", 6);

Calling the actual "Login" button then might look like this:

WebPage("http://host/borland/mod_1?control_handle=" + hBtnLogin + "!");

FIG. 2: PARSING LOGINS


An application of course has many actionable controls on one page, which in our case all use the same request format shown above. The only way to identify a control using its control handle is by the order number of the control handle compared to other control handles in the response data (which is either HTML/JavaScript or JSON in our case).

For example, the control handle for the "Login" button on the "Login" page might be the sixth control handle in the page. A parsing statement needed to parse the value of the handle for the "Login" button might then look like the one in Figure 2.

It should be obvious from the small example in Figure 2 that using the control handle to identify controls is far from a stable recognition technique. The problems that come with this approach are:

• Poor readability: Scripts are hard to read, as they use order number information to identify controls.

• Fragility to changes: Adding controls or just rearranging the order of controls will cause scripts to break or, even worse, introduce unexpected behavior causing subsequent errors that are very hard to find.

• Poor maintainability: Frequent changes to the scripts are needed. Detecting changes in the order number of control handles is cumbersome, as you have to search through response data to find the new order number of the control handle.

Stable Control Identifiers

Trying to optimize the recognition technique within the test scripts by adopting more intelligent parsing rules (instead of only searching for occurrences of "control_handle") proved not to be practical. We even found it counterproductive, because the more unambiguous the parsing rules were, the less stable they became.

So we decided to address the root cause of the problem by creating stable identifiers for controls. Of course, this needed changes in the application code—but more on this later.

We introduced the notion of a control identifier (CID), which uniquely identifies a certain control. Then we extended the protocol so that CIDs could easily be consumed by the test tool. By using meaningful names for CIDs, we made it easy for testers to identify controls in request/response data and test scripts. It also became easier for the developers to associate CIDs with the code that implements the control.

CIDs are only used to provide more context information for the test tool, the testers, and developers. There is no change in the application behavior – the application still uses the control handle and ignores the CID.

The HTTP request format changed from:

http://host/borland/mod_1?control_handle=68273841207300477604!...

to:

http://host/borland/mod_1?control_handle=*tm_btn_login.68273841207300477604!...

As CIDs are included in all responses as part of their control handles, it is easy to create a parsing rule that uniquely parses the control handle – the search will only return one control handle, as the CIDs are always unique in the context of the response.

The same script as above now looks like:

WebPage("http://host/borland/login/");
WebParseDataBound(hBtnLogin, "control_handle=*tm_btn_login.", "!", 1);

WebPage("http://host/borland/mod_1?control_handle=*tm_btn_login." + hBtnLogin + "!");

By introducing CIDs and exposing them at the HTTP protocol level, we are now able to build extremely reliable and stable test scripts. Because a CID will not change for an existing control, there is no maintenance work related to changes such as introducing new controls in a page, filling the page with dynamic content that exposes links with control handles (like a list of links), or reordering controls on a page. The scripts are also more readable, and the communication between testers and developers is easier because they use the same names when talking about the controls of the application.

Functional Testing

The problem here was the existence of language-dependent test scripts. Our existing functional test scripts heavily relied on recognizing GUI controls and windows using their caption.

For example, for push buttons, the caption is the displayed name of the button. For links, the caption is the displayed text of the link. And for text fields, the caption is the text label preceding the text field.

To automate functional testing for all localized versions of the application, we needed to minimize the dependencies between the test scripts and the different localized versions of the application. Captions are of course language dependent, and therefore are not a good candidate for stable control identifiers.

The first option would have been to localize the test scripts themselves by externalizing the captions and providing localized versions of the externalized captions. But this approach still introduces a maintenance burden when captions change or when new languages need to be supported.

Using the caption to identify an HTML link (HTML fragment of an HTML page):

<A … HREF="http://...control_handle=*tm_btn_login.6827...">Login</A>

Calling the actual "Login" link then might look like this:

MyApp.HtmlLink("Login").Click();

As we had already introduced the concept of stable control identifiers (CIDs) for performance testing, we wanted to reuse these identifiers for GUI-level testing as well. Using CIDs makes the test scripts language independent without the need to localize the scripts (at least for control recognition – verification of control properties may still need language-dependent code). To make the CID accessible to our functional testing tool, the HTML controls of our application exposed a custom HTML attribute named "CID." This attribute is ignored by the browser but is accessible from our functional testing tool using the browser's DOM.

Using the CID to identify a link:

<A … HREF="http://...control_handle=*tm_btn_login.6827..." CID="tm_btn_login">Login</A>

Calling the actual "Login" link using the CID then might look like this:

MyApp.HtmlLink("CID=tm_btn_login").Click();

Existing Test Scripts

We had existing functional test scripts where we needed to introduce the new mechanism for identifying controls. So it was essential that we had separated the declaration of UI controls, and how controls are recognized, from the actual test scripts using them.




Therefore we only had to change the declaration of the controls but not their usage in multiple test scripts.

A script that encapsulates control information from actions might look something like this:

// declaration of the control
BrowserChild MyApp {
  …
  HtmlLink Login {
    tag "CID=tm_btn_login"  // before: tag "Login"
    …
  }
}

// action on the control
MyApp.Login.Click();

This approach works similarly for GUI toolkits such as Java SWT, Java Swing/AWT, toolkits supporting Microsoft User Interface Automation (MSUIA) and Adobe Flex Automation. All these GUI toolkits allow developers to add custom properties to controls or offer a special property that you can then use to expose CIDs to testing tools. Of course, you need to check whether your GUI testing tool is able to work with custom properties of controls for each toolkit.
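In Java Swing, for instance, every component already carries a name property and arbitrary client properties, either of which most GUI test tools can read (a minimal sketch, not taken from the case study; the CID value is illustrative):

JButton loginButton = new JButton("Login");
loginButton.setName("tm_btn_login");                  // read by tools that use the component name
loginButton.putClientProperty("CID", "tm_btn_login"); // or expose the CID as a client property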

Code Changes Needed to Add Testability Hooks

One of the most enlightening experiences in this project was how easy it was to add the testability hooks to the application code. When we spoke with the developers of the UI framework used in the application and explained to them the need to use CIDs for recognizing controls, they immediately found that there was already functionality in the framework APIs that allowed them to add custom HTML attributes to the controls the UI framework provided.

So no changes in the UI framework code were even needed to support CIDs for functional testing! Certainly there was work related to creating CIDs for each UI control of the application. But this work was done step by step, at first introducing CIDs for the parts of the application we focused our testing efforts on.

Changes in the application code to introduce a CID:

Link login = new Link("Login");
login.addAttribute("CID", "tm_btn_login"); // additional code line to create the CID for the control

More Changes For Performance Testing

For performance testing, we needed to extend the protocol of the application to also include the CID as part of the control handle, which is used to relate the actual control instance on the server to the UI control in the browser. Once they understood what we needed, our UI framework developers immediately figured out how to accomplish it. Again the changes were minimal and were needed in just one base class of the framework.

Changes in the UI framework of the application code to introduce CIDs in control handles:

private String generateControlHandle() {
    String cid = this.getAttribute("CID");
    if (cid != null)
        return "*" + cid + "." + this.generateOldControlHandle();
    else
        return this.generateOldControlHandle();
}

The new version of the generateControlHandle method, which is implemented in the base class of all UI control classes, now generates the following control handle, which contains the CID as part of the handle:

*tm_btn_login.68273841207300477604!

Lessons Learned

Costs for enhancing testability using existing application interfaces were minimal. One of the most enlightening experiences in this project was how easy it was to add the testability hooks once we were able to express the problem to the developers. The code changes needed in the application to add the testability hooks to the existing application interfaces (GUI and protocol) were minimal.

There was no need to create special purpose testing interfaces, which would have caused a major rework of the application. And the runtime overhead that was added by the testability hooks was negligible. Most effort was related to introducing stable identifiers (CIDs) for all relevant controls of the application. All changes needed in the application summed up to an effort of about 4 person-weeks—this is less than 1% of the resources working on this application per year!

Relative to the changes and the effort to make them, the benefits we gathered were dramatic. Maintenance work for test scripts dropped significantly. This was especially true for performance testing, where for the first time we were able to have a maintainable set of tests. Before that we needed to recreate test scripts whenever the application changed. The performance testing scripts are now extremely stable, and if the application changes the test scripts are easy to adjust.

What's more, the changes in the existing functional test scripts that were needed for testing different localized versions of the application were small. Here, it helped that we had already separated the declaration of UI controls, and how controls are recognized, from the actual test scripts using them. Being able to also cover the localized versions of the application with automated tests not only increased the test coverage of the application, but it reduced the time we needed for providing the localized versions of the application. We are currently releasing all localized versions and the English version at the same time.

Performance testing is now done by regularly running a core set of benchmark tests with different workloads against different configurations on each nightly build. So there is continuous information on how the performance and the scalability of the application are improving (or degrading). We're also now able to detect defects that affect performance or scalability as soon as they are introduced—which is a huge benefit, as performance problems are usually extremely hard to diagnose. Knowing how performance has changed between two nightly builds greatly reduces the time and effort needed to find the root cause.

Moving to an agile development process was the catalyst. When it's so easy to improve the testability of existing applications and thereby increase test automation and improve quality so significantly, I wonder why it's not done more often.

REFERENCES

• Software Test Automation, Mark Fewster and Dorothy Graham, Addison-Wesley, 1999

• Design for Testability, Bret Pettichord, paper, 2002

• Lessons Learned in Software Testing, Cem Kaner et al., John Wiley & Sons, 2002

• Agile Tester blog: "Why GUI tests fail a lot? (From tools perspective)," http://developer-in-test.blogspot.com/2008/09/problems-with-gui-automation-testing.html


By Elfriede Dustin

Implementing a successful Automated Software Testing effort requires a well-defined and structured, but lightweight, technical process with minimal overhead; that process is described here. It is based on proven system and software engineering processes and consists of five phases, each requiring the "pass" of a quality gate before moving on to the next phase. By implementing quality gates, we help enforce that quality is built into the entire implementation, thus preventing late and expensive rework (see Figure 2).

The overall process implementation can be verified via inspections, quality checklists and other audit activities, each of which is covered later in this section.

In my experience in the defense industry, I have modified the Automated Testing Lifecycle Methodology (ATLM) to adapt to the needs of my current employer, Innovative Defense Technologies. See Figure 1 for an illustration of the process that's further defined next.

Our proposed Automated Software Testing technical process needs to be flexible enough to allow for ongoing iterative and incremental improvement feedback loops, including adjustments to specific project needs. For example, if test requirements and test cases already exist for a project, Automated Software Testing will evaluate the existing test artifacts for reuse, modify them as required and mark them as "to-be-automated," instead of re-documenting the test requirement and test case documentation from scratch. The goal is to reuse existing components and artifacts and use/modify them as appropriate, whenever possible.

The Automated Software Testing phases and selected best practices need to be adapted to each task at hand and need to be revisited and reviewed for effectiveness on an ongoing basis. An approach for this is described here.

The very best standards and processes are not useful if stakeholders don't know about them or don't adhere to them. Therefore Automated Software Testing processes and procedures are documented, communicated, enforced and tracked. Training for the process will need to take place.

Our process best practices span all phases of the Automated Software Testing life cycle. For example, in the requirements phase, an initial schedule is developed and maintained throughout each phase of the Automated Software Testing implementation (e.g., updating percentage complete to allow for program status tracking). See the section on quality gates activities related to schedules.

Weekly status updates are also an important ingredient for successful system program management, which spans all phases of the development life cycle. See our section on quality gates related to status reporting.

Post-mortems or lessons learned play an essential part in these efforts, and are conducted to help avoid repeats of past mistakes in ongoing or new development efforts. See our section on quality gates related to inspections and reviews.


Elfriede Dustin is currently employed by Innovative Defense Technologies (IDT), a Virginia-based software testing consulting company specializing in automated testing.


By implementing quality gates and related checks and balances along the Automated Software Testing effort, the team is not only responsible for the final test automation efforts, but also helps enforce that quality is built into the entire Automated Software Testing life cycle. The automation team is held responsible for defining, implementing and verifying quality.

It is the goal of this section to provide program management and the technical lead with a solid set of automated technical process best practices and recommendations that will improve the quality of the testing program, increase productivity with respect to schedule and work performed, and aid successful automation efforts.

Testing Phases and Milestones

Independent of the specific needs of the application under test (AUT), Automated Software Testing will implement a structured technical process and approach to automation and a specific set of phases and milestones to each program. Those phases consist of:

• Phase 1: Requirements Gathering—Analyze automated testing needs and develop high-level test strategies

• Phase 2: Design & Develop Test Cases

• Phase 3: Automation framework and test script development

• Phase 4: Automated test execution and results reporting

• Phase 5: Program review

Our overall project approach to accomplish automated testing for a specific effort is listed in the project milestones below.

Phase 1: Requirements Gathering

Phase 1 will generally begin with a kick-off meeting. The purpose of the kick-off meeting is to become familiar with the AUT's background, related testing processes, automated testing needs, and schedules. Any additional information regarding the AUT will also be collected for further analysis. This phase serves as the baseline for an effective automation program, i.e., the test requirements will serve as a blueprint for the entire Automated Software Testing effort.

Some of the information you gather for each AUT might include:

• Requirements
• Test Cases
• Test Procedures
• Expected Results
• Interface Specifications

In the event that some information is not available, the automator will work with the customer to derive and/or develop it as needed. Additionally, this phase of automation will generally follow this process:

1) Evaluate the AUT's current manual testing process and determine
   a) areas for testing technique improvement
   b) areas for automated testing
   c) the current quality index, as applicable (depending on AUT state)
   d) initial manual test timelines and duration metrics (to be used as a comparison baseline for ROI)
   e) the "automation index"—i.e., determine what lends itself to automation (see the next item)

2) Analyze existing AUT test requirements for ability to automate
   a) If program or test requirements are not documented, the automation effort will include documenting the specific requirements that need to be automated to allow for a requirements traceability matrix (RTM)
   b) Requirements are automated based on various criteria, such as
      i) Most critical feature paths
      ii) Most often reused (i.e., automating a test requirement that only has to be run once might not be cost effective)
      iii) Most complex area, which is often the most error-prone
      iv) Most data combinations, since testing all permutations and combinations manually is time-consuming and often not feasible
      v) Highest risk areas
      vi) Test areas which are most time consuming, for example test performance data output and analysis

3) Evaluate test automation ROI of test requirement
   a) Prioritize test automation implementation based on largest ROI
   b) Analyze AUT's current life-cycle tool use and evaluate reuse of existing tools
   c) Assess and recommend any additional tool use or the required development
   d) Finalize manual test effort baseline to be used for ROI calculation

A key technical objective is to demonstrate a significant reduction in test time. Therefore this phase involves a detailed assessment of the time required to manually execute and validate results. The assessment will include measuring the actual test time required for manually executing and validating the tests. Important in the assessment is not only the time required to execute the tests but also the time required to validate the results. Depending on the nature of the application and tests, validation of results can often take significantly longer than the time to execute the tests.
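These measurements feed directly into the ROI evaluation in step 3. As a rough sketch (the formula and variable names are illustrative, not prescribed by the process):

ROI = (runs x (manual_time - automated_time) - automation_cost) / automation_cost

Here, runs is the number of times the test will be executed over the period considered, manual_time covers both manual execution and manual validation of results, automated_time is the (usually much smaller) analysis time left after automation, and automation_cost covers building and maintaining the scripts. This is also why a test requirement that is run only once rarely justifies automation.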

Based on this analysis, the automator would then develop a recommendation for testing tools and products most compatible with the AUT. This important step is often overlooked. When it is overlooked and tools are simply bought up front without consideration for the application, the result is less-than-optimal results in the best case, and tools that simply cannot be used in the worst case.

FIGURE 1: THE ATLM


At this time, the automator would also identify and develop additional software as required to support automating the testing. This software would provide interfaces and other utilities as required to support any unique requirements while maximizing the use of COTS testing tools and products. A brief description:

• Assess existing automation framework for component reusability

• GUI record/playback utilities compatible with AUT display/GUI applications (as applicable in rare cases)

• Library of test scripts able to interface to AUT standard/certified simulation/stimulation equipment and scenarios used for scenario simulation

• Library of tools to support retrieval of test data generation

• Data repository for expected results

• Library of performance testing tools able to support/measure real-time and non-real-time AUTs

• Test scheduler able to support distributed testing across multiple computers and test precedence

The final step for this phase will be to complete the configuration for the application(s), including the procurement and installation of the recommended testing tools and products along with the additional software utilities developed.

The products of Phase 1 will typically be:

1. Report on test improvement opportunities, as applicable
2. Automation index
3. Automated Software Testing requirements walkthrough with stakeholders, resulting in agreement
4. Presentation report on recommendations for tests to automate, i.e., test requirements to be automated
5. Initial summary of high-level test automation approach
6. Presentation report on test tool or in-house development needs and associated recommendations
7. Automation utilities
8. Application configuration details
9. Summary of test environment
10. Timelines
11. Summary of current manual testing level of effort (LOE) to be used as a baseline for automated testing ROI measurements

Once the list of test requirements for automation has been agreed to by the program, they can be entered in the requirements management tool and/or test management tool for documentation and tracking purposes.

Phase 2: Manual Test Case Development and Review

Armed with the products of Phase 1, manual test cases can now be developed. If test cases already exist, they can simply be analyzed, mapped as applicable to the automated test requirements and reused, ultimately to be marked as automated test cases.

It is important to note that for a test to be automated, the manual test cases need to be adapted, as computer inputs and expectations differ from human inputs. As a general best practice, before any test can be automated, it needs to be documented and vetted with the customer to verify its accuracy and that the automator's understanding of the test cases is correct. This can be accomplished via a test case walkthrough.

Deriving effective test cases is important for successfully implementing this type of automation. Automating inefficient test cases will result in poor test program performance.

In addition to the test procedures, other documentation, such as the interface specifications for the software, is also needed to develop the test scripts. As required, the automator will develop any missing test procedures and will inspect the software, if available, to determine the interfaces if specifications are not available.

Phase 2 also includes collection and entry of the requirements and test cases into the test manager and/or requirements management tool (RM tool), as applicable. The end result is a populated requirements traceability matrix inside the test manager and RM tool that links requirements to test cases. This central repository provides a mechanism to organize test results by test cases and requirements.
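A requirements traceability matrix can be as simple as a table along these lines (an illustrative sketch; the IDs, columns and statuses are hypothetical, and most test management tools generate an equivalent view):

Requirement   Test Case(s)      Automated?   Last Result
REQ-017       TC-042, TC-043    Yes          Pass
REQ-018       TC-044            Planned      Fail (STR-0012)
REQ-019       TC-045            No           Not run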

The test case, related test procedure, test data input and expected results for each test case are also collected, documented, organized and verified at this time. The expected results provide the baseline to determine pass or fail status of each test. Verification of the expected results will include manual execution of the test cases and validation that the expected results were produced. In cases where exceptions are noted, the automator will work with the customer to resolve the discrepancies and, as needed, update the expected results. The verification step for the expected results ensures that the team will be using the correct baseline of expected results for the software baseline under test.

Also during the manual test assessment, pass/fail status as determined through manual execution will be documented. Software trouble reports will be documented accordingly.

The products of Phase 2 will typically be:

1. Documented manual test cases to be automated (or existing test cases modified and marked as "to be automated")
2. Test case walkthrough and priority agreement
3. Test case implementation by phase/priority and timeline
4. Populated requirements traceability matrix
5. Any software trouble reports associated with manual test execution
6. First draft of "Automated Software Testing Project Strategy and Charter" (as described in the Project Management portion of this document)

Phase 3: Automated Framework and Test Script Development

This phase will allow for analysis and evaluation of existing frameworks and automation artifacts. It is expected that for each subsequent implementation, there will be software utilities and test scripts we'll be able to reuse from previous tasks. During this phase we determine which scripts can be reused.

As needed, the automation framework will be modified, and test scripts to execute each of the test cases are developed. Scripts will be developed for each of the test cases based on the test procedures for each case.

The recommended process for developing an automated test framework or test scripts is the same as would be used for developing a software application. Key to a technical approach to developing test scripts is that implementations are based on generally accepted development standards; no proprietary implementation should be allowed.


This task also includes verifying that each test script works as expected.

The products of Automated Software Testing Phase 3 will typically be:

1. Modified automated test framework; reused test scripts (as applicable)
2. Test case automation—newly developed test scripts
3. High-level walkthrough of automated test cases with internal or external customer
4. Updated requirements traceability matrix

Phase 4: Automated Test Execution and Results Reporting

The next step is to execute the automated tests using the framework and related test scripts that were developed. Pass/fail status will be captured and recorded in the test manager. An analysis and comparison of manual and automated test times and pass/fail results will be conducted and summarized in a test presentation report.

Depending on the nature of the application and tests, you might also complete an analysis that characterizes the range of performance for the application.

The products of Automated Software Testing Phase 4 will typically be:

1. Test Report including Pass/Fail Status by Test Case and Requirement (including updated RTM)
2. Test Execution Times (Manual and Automated)—initial ROI reports
3. Test Summary Presentation
4. Automated Software Testing training, as required

Phase 5: Program Review and Assessment

The goal of Automated Software Testing implementations is to allow for continued improvements. During the fifth phase, we will review the performance of the automation program to determine where improvements can be made.

Throughout the automation efforts, we collect test metrics, many during the test execution phase. It is not beneficial to wait until the end of the automation project to document insights gained into how to improve specific procedures. When needed, we will alter detailed procedures during the test program, when it becomes apparent that such changes are necessary to improve the efficiency of an ongoing activity.

The test program review also includes an assessment of whether automation efforts satisfy completion criteria for the AUT, and whether the automation effort itself has been completed. The review also could include an evaluation of progress measurements and other metrics collected, as required by the program.

The evaluation of the test metrics should examine how well the original test program time/sizing measurements compared with the actual number of hours expended and test procedures developed to accomplish the automation. The review of test metrics should conclude with improvement recommendations, as needed.

Just as important is to document the activities that the automation effort performed well and did correctly, in order to be able to repeat these successful processes. Once the project is complete, proposals for corrective action will surely be beneficial to the next project, but corrective actions applied during the test program can be significant enough to improve the final results of the test program.

Automated Software Testing efforts will adopt, as part of their culture, an ongoing iterative process of lessons-learned activities. This approach will encourage automation implementers to take the responsibility to raise corrective action proposals immediately, when such actions potentially have significant impact on test program performance. This promotes leadership behavior from each test engineer.

The products of Phase 5 will typically be:

1. The final report

Quality Gates

Internal controls and quality assurance processes verify that each phase has been completed successfully, while keeping the customer involved. Controls include quality gates for each phase, such as technical interchanges and walkthroughs that include the customer, use of standards, and process measurement.

Successful completion of the activities prescribed by the process should be the only approved gateway to the next phase. Those approval activities or quality gates include technical interchanges, walkthroughs, internal inspections, examination of constraints and associated risks, configuration management; tracked and monitored schedules and cost; corrective actions; and more, as this section describes. Figure 1 reflects typical quality gates, which apply to Automated Software Testing milestones.

Our process controls verify that the output of one stage represented in Figure 2 is fit to be used as the input to the next stage. Verifying that output is satisfactory may be an iterative process, and verification is accomplished by customer review meetings, internal meetings and comparing the output against defined standards and other project-specific criteria, as applicable. Additional quality gates activities will take place as applicable, for example:

Technical interchanges and walkthroughs with the customer and the automation team represent an evaluation technique that will take place during, and as a final step of, each Automated Software Testing phase. These evaluation techniques can be applied to all deliverables, i.e., test requirements, test cases, automation design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed examination by a person or a group other than the author. These interchanges and walkthroughs are intended to help find defects, detect non-adherence to Automated Software Testing standards, test procedure issues, and other problems.

FIGURE 2: AUTOMATED SOFTWARE TESTING PHASES, MILESTONES AND QUALITY GATES


Examples of technical interchange meetings include an overview of test requirement documentation. When test requirements are defined in terms that are testable and correct, errors are prevented from entering the development pipeline, which would eventually be reflected as possible defects in the deliverable. Automation design-component walkthroughs can be performed to ensure that the design is consistent with defined requirements, conforms to standards and applicable design methodology, and that errors are minimized.

Technical reviews and inspections have proven to be the most effective form of preventing miscommunication, allowing for defect detection and removal.

Internal automator inspections of deliverable work products will take place to support the detection and removal of defects early in the development and test cycles; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.

A careful examination of goals and constraints and associated risks will take place, which will lead to a systematic automation strategy, will produce a predictable, higher-quality outcome and will enable a high degree of success. Combining a careful examination of constraints, as a defect prevention technology, together with defect detection technologies will yield the best results.

Any constraint and associated risk will be communicated to the customer, and risk mitigation strategies will be developed as necessary.

Defined QA processes allow for constant risk assessment and review. If a risk is identified, appropriate mitigation strategies can be deployed. We require ongoing review of cost, schedules, processes and implementation to prevent potential problems from going unnoticed until it is too late; instead, our process assures that problems are addressed and corrected immediately.

Experience shows that it's important to protect the integrity of the Automated Software Testing processes and environment. Means of achieving this include the testing of any new technologies in isolation. This ensures, for example, that tools perform up to specifications and marketing claims before being used on any AUT or customer environment. The automator also will verify that any upgrades to a technology still run in the current environment. The previous version of the tool may have performed correctly, and a new version may perform fine in other environments, but it might adversely affect the team's particular environment. Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process and help with rollback in the event of failure.

The automator incorporates the use of configuration management tools to help control the integrity of the automation artifacts. For example, we will include all automation framework components, script files, test case and test procedure documentation, schedules and cost tracking data under a configuration management system. This assures us that accurate, up-to-date version control and records of Automated Software Testing artifacts and products are maintained.

Schedules are Defined, Tracked and Communicated

It's also important to define, track and communicate project schedules. Schedule task durations are determined based on past historical performance and associated best estimates. Also, any schedule dependencies and critical path elements should be considered up front and incorporated into the schedule.

If the program is under a tight deadline, for example, only the automation tasks that can be delivered on time should be included in the schedule. During Phase 1, test requirements are prioritized. This allows the most critical tasks to be included and prioritized for completion first, and less critical and lower-priority tasks to be placed later in the schedule.

After Phase 1, an initial schedule is presented to the customer for approval. During the technical interchanges and walkthroughs, schedules are presented on an ongoing basis to allow for continuous schedule communication and monitoring. Potential schedule risks will be communicated well in advance, and risk mitigation strategies will be explored and implemented as needed. Any potential schedule slip can be communicated to the customer immediately and necessary adjustments made accordingly.

Tracking schedules on an ongoing basis also contributes to tracking and controlling costs.

Costs that are tracked can be controlled. By closely tracking schedules and other required resources, the automator assures that a cost tracking and controlling process is followed. Inspections, walkthroughs and other status reporting will allow for a closely monitored cost control tracking activity.

Performance is continuously tracked, with necessary visibility into project performance and related schedule and cost. The automation manager maintains the record of delivery dates (planned vs. actual), and continuously evaluates the project schedule. This is maintained in conjunction with all project tracking activities, is presented at weekly status reports and is submitted with the monthly status report.

Even with the best-laid plans and implementation, corrective actions and adjustments are unavoidable. Good QA processes will allow for continuous evaluation and adjustment of task implementation. If a process is too rigid, implementation can be doomed to failure.

When making adjustments, it's critical to discuss any and all changes with customers, and to explain why the adjustment is recommended and the impact of not making the change. This communication is essential for customer buy-in.

QA processes should allow for and support the implementation of necessary corrective actions. This allows for strategic course correction, schedule adjustments, and deviation from the Automated Software Testing phases to adjust to specific project needs. This also allows for continuous process improvement and ultimately a successful delivery.

REFERENCES

• This process is based on the Automated Testing Lifecycle Methodology (ATLM) described in the book Automated Software Testing. A diagram that shows the relationship of the Automated Software Testing technical process to the Software Development Lifecycle will be provided here.

• As used at IDT.

• Implementing Automated Software Testing, Addison-Wesley, Feb. 2009.


Best Practices

SCM Tools Are Great, But Won't Mind the Store

By Joel Shore

When it comes to managing collaborative code development, it is not so much the choice of tools used, but rather, ensuring that everyone plays by the same set of rules. It takes only one cowboy coder to bring disaster to otherwise carefully constructed processes, or one rogue developer to bring a project to a standstill.

Whether you've implemented an open-source tool such as CVS, Subversion, or JEDI, or a commercial tool, experts agree there had better be a man in a striped shirt ready to throw a penalty flag if shortcuts are attempted.

"You can't have a free-for-all when it comes to version control; it's essential to have clearly defined policies for code check-in and check-out," says David Kapfhammer, practice director for the Quality Assurance and Testing Solutions Organization at IT services provider Keane. "We see companies installing sophisticated source code and configuration management systems, but what they neglect to do is audit their processes."

And it's often large, well-established corporations that should know better. Kapfhammer says he is working with "very mature, very big clients" in the healthcare and insurance industries that have all the right tools and processes in place, "but no one is auditing, no one is making sure everyone follows the rules."

It boils down to simple human nature. With enormous pressure to ensure that schedules are met (though they often are not), it's common for developers—in their well-meant spirit of getting the product out the door—to circumvent check-in policies.

A simple, low-tech way to avoid potential disasters, such as sending the wrong version of code into production, is a checklist attached to developers' cubicle walls with a push pin. "This method of managing repetitive tasks and making sure that the details of the development process are followed sounds completely silly, but it works," says Kapfhammer.

That seems simple enough. But the follow-up question, of course, is "OK, but exactly which procedures are we talking about?" That's where the idea of a process model comes into play.

The current darling of process models is the Information Technology Infrastructure Library (ITIL), a comprehensive set of concepts and policies for managing development, operations, and even infrastructure. There's serious weight behind ITIL—it's actually a trademark of the United Kingdom's Office of Government Commerce.

The official ITIL Web site, www.itil-officialsite.com (apparently they're serious about being THE official site), describes ITIL as the "most widely accepted approach to IT service management in the world." It also notes that ITIL aims to provide a "cohesive set of best practices, drawn from the public and private sectors internationally." That's especially important as projects pass among teams worldwide in an effort to keep development active 24 hours a day.

But again, it works only if everyone plays, and only if development tasks are granular enough to be manageable.

"If I had to boil everything I know about source code management and version control down to three words, it would be this," says Norman Guadagno, director of product management for Microsoft's Visual Studio Team System: "Every build counts."

Guadagno's view of the development world is simple: Build control and process automation must be ingrained in the culture of the team and not something that's left to a manager. "If it's not instilled in the culture of the team, if they don't understand branching structures and changes, it all becomes disconnected from what makes great software." Developers, he notes, often think about build quality as something that happens downstream. "The thinking is 'if I build and it breaks, it's no big deal.' But that's not true. Do it up front and you'll invariably deliver a better product and do it more timely."

Branching, while necessary in an environment where huge projects are divvied up among many developers, is the proverbial double-edged sword. Handled correctly, it simplifies development and maintenance. But implement a model based on bad architecture, and projects can collapse under their own weight.

Guadagno recalled a small ISV with a specialized product in use by five customers. Their model of building a custom version of the app for each customer and doing that with separate branches was a good idea, he says. "But what happens if they grow to 50 or a hundred customers?" Introducing new functionality eventually will cause every build to break—and it did. Spinning off different branches and hoping for the best when it comes to customizations simply didn't work.

"It turns out that this was not a problem that any source code management tool or process could solve. The problem was the original system architecture, which could handle the addition of a few new customers, but which never foresaw the impact of adding dozens."

Guadagno's point is that while tools, processes, and procedures are essential in assuring a successful outcome, poor design will still yield poor results. "Good builds," he says, "are not a substitute for good architecture."

Joel Shore is a 20-year industry veteran and has authored numerous books on personal computing. He owns and operates Reference Guide, a technical product reviewing and documentation consultancy in Southboro, Mass.


ST&Pedia (continued from page 7)

AUTOMATED BUSINESS-FACING TESTS

A final approach is to have some method of accessing the business logic outside of the GUI, and to test the business logic by itself in a standalone form. In this way, the tests become a sort of example of "good" operations, or an executable specification. Two open-source tools designed to enable this are FIT and FitNesse. Business-facing tests also sidestep the GUI, but miss any rendering bugs that exist.
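With FIT, for example, a table in the specification document is backed by a small fixture class whose public fields and methods the framework maps onto the table's columns (a minimal sketch in the style of FIT's own examples; the class and the discount rule are purely illustrative):

public class Discount extends fit.ColumnFixture {
    public double amount;          // input column
    public double discount() {     // calculated column, checked against the expected value
        return amount > 1000 ? amount * 0.05 : 0;
    }
}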

AUTOMATED TEST SETUP

Software can be complex and yet not have simple save/open functionality—databases, for example. Building import/export hooks and a "sample test database" can save time, energy, and effort that would otherwise have to be spent over and over again manually. Automated environment setup is a form of test automation.

LOAD AND PERFORMANCE TESTING

Measuring a page load—or simulating 100 page loads at the exact same time—will take some kind of tooling to do economically.
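At its core, that tooling just issues many timed requests concurrently; a bare-bones sketch in Java (the URL is illustrative, and a real load tool adds ramp-up, think times, and response validation):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TinyLoadTest {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (int i = 0; i < 100; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        long start = System.nanoTime();
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL("http://host/borland/login/").openConnection();
                        int status = conn.getResponseCode(); // forces the request and waits for the response
                        long millis = (System.nanoTime() - start) / 1000000;
                        System.out.println(status + " in " + millis + " ms");
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}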

SMOKE TESTING

Frequently a team's first test automation project, automating a build/deploy/sanity-check process almost always pays off. (Chris once spent a few days building a smoke test framework that increased efficiency 600% for months afterward.)

COMPUTER ASSISTED TESTINGJonathan Kohl is well known for usingGUI scripting tools to populate a testenvironment with a standard set-up. Thetest automation brings the applicationto the interesting area, where a humanbeing takes over and executes sapienttests in the complex environment.

Any form of cybernetics can make a person better, faster, and less error prone. The one thing they cannot do is think. There’s no shame in outsourcing required, repetitive work to a computer, but you will always need a human being somewhere to drive the strategy. Or, to paraphrase James Bach: The critical part of testing is essentially a conversation between a human and the software. What would it even mean to “automate” a conversation?

ST&Pedia (continued from page 7)

Index to Advertisers

Advertiser                                URL                            Page
Automated QA                              www.testcomplete.com/stp       8
Empirix                                   www.empirixfreedom.com         4
FutureTest 2009                           www.futuretest.net             20, 21
Hewlett-Packard                           hp.com/go/quality              36
I Am Not a Lead                           www.i-am-not-a-lead.com        24
Qualitest                                 www.QualiTest-int.com          4
Ranorex                                   www.ranorex.com/stp            15
Reflective Solutions                      www.stresstester.net/stp       11
SD Times Job Board                        www.sdtimes.com/jobboard       26
Seapine                                   www.seapine.com/stptcm08       2
Software Test & Performance               www.stpmag.com                 6, 33
Software Test & Performance Conference    www.stpcon.com                 35
Test & QA Report                          www.stpmag.com/tqa             10

Learn Some New Tricks! Free White Papers at www.stpmag.com

Discover all the best software practices, gain an in-depth understanding of the products and services of leading software vendors, and educate yourself on a variety of topics directly related to your industry.


Web services are perfect candidates for automated testing. Compared with validating a UI, Web services testing is quite simple: send a request, get a response, verify it. But it’s not as easy as it may look. The main challenge is to identify the formal procedures that could be used for automated validation and verification. Other problems include dynamic data and differing request formats.

In the project our team was working on when this was written, there were many different Web services in need of testing. We needed to check not only the format of the request and response, but also the business logic, the data returned by the services, and the service behavior after significant changes were made in the architecture. From the very beginning we intended to automate as many of these tests as possible. Some of the techniques we used for automated verification of Web services functionality were as follows:

1. Response schema validation. This is the simplest and most formal procedure for verifying response structure (a minimal sketch appears after this list). There are free schema validators out there that can easily be built into automated scripts. However, schema validation is not sufficient if you also need to validate the Web services logic based on data returned from a service.

2. Comparison of the actual response with the expected response. This is a perfect method for Web service regression testing because it allows checking of both the response structure and the data within the response (also sketched after this list). After successful manual validation, all valid responses are considered correct and saved for future reference. The responses of all subsequent versions of that Web service will then be compared with these template files. There is absolutely no need to write your own script for XML file comparison; just find the most suitable free tool and use it within your scripts. However, this method is not applicable when dealing with dynamic data, or if a new version of the Web services involves changes in the request structure or Web services architecture.

3. Check XML nodes and their values. In the case of dynamic data, it can be useful to validate only the existence of nodes using an XPath query and/or check the node values using patterns (see the sketches after this list).

4. Validate XML response data against the original data source. As a “data source” it’s possible to use a database, another service, XML files, non-XML files and, in general, anything that can be considered a “data provider” for Web services (see the sketches after this list). In combination with response schema validation, this method could be the perfect solution for testing any service with dynamic data. On the other hand, it requires additional skills to write your own scripts for fetching data from the data source and comparing it with the service response data.

5. Scenario testing. This is used when the test case is not a single request but a set of requests sent in sequence to check Web services behavior (see the last sketch after this list). Usually Web services scenario testing includes checking one Web service operation using another. For example, you can check that the CreateUser request works properly by sending a GetUser request and validating the GetUser response. In general, any of the above techniques can be used for Web services scenarios. Just be careful to send requests one after another in the correct order. Also be mindful of the uniqueness of test data during the test. The service might not allow you to create, for example, several users with the same name. It’s also common for one request in a sequence to require data from a previous response. In this case you need to think about storing that data and using it on the fly.
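
What follows are minimal Python sketches of the five techniques above, in order; the file names, endpoints, element names and data in them are placeholders invented for illustration, not artifacts from our project. First, response schema validation, here using the third-party lxml library as the validator.

from lxml import etree

SCHEMA_FILE = "get_user_response.xsd"     # placeholder schema file
RESPONSE_FILE = "get_user_response.xml"   # placeholder captured response

def validate_against_schema(response_path: str, schema_path: str) -> None:
    """Fail loudly if the response does not conform to its XML schema."""
    schema = etree.XMLSchema(etree.parse(schema_path))
    document = etree.parse(response_path)
    if not schema.validate(document):
        raise AssertionError(f"Schema validation failed: {schema.error_log}")

if __name__ == "__main__":
    validate_against_schema(RESPONSE_FILE, SCHEMA_FILE)
    print("Response conforms to schema.")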
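
Next, comparing an actual response with a saved “template” (golden) response. The advice above is to reuse an existing XML comparison tool; as a stand-in, this sketch canonicalizes both documents with the standard library (Python 3.8 and later) so that harmless formatting differences do not cause false failures.

import xml.etree.ElementTree as ET

EXPECTED_FILE = "expected/get_user_response.xml"   # golden copy saved earlier
ACTUAL_FILE = "actual/get_user_response.xml"       # response from current build

def canonical(path: str) -> str:
    """Return canonicalized XML so whitespace and formatting are ignored."""
    return ET.canonicalize(from_file=path, strip_text=True)

def compare_with_golden(actual_path: str, expected_path: str) -> None:
    if canonical(actual_path) != canonical(expected_path):
        raise AssertionError("Response differs from the saved golden response.")

if __name__ == "__main__":
    compare_with_golden(ACTUAL_FILE, EXPECTED_FILE)
    print("Response matches the golden copy.")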
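
Then, checking individual XML nodes and their values when the data is dynamic: assert that the node exists, and that its value matches a pattern rather than an exact expected string. The element names and patterns are hypothetical.

import re
import xml.etree.ElementTree as ET

RESPONSE_FILE = "get_user_response.xml"   # placeholder captured response

def check_nodes(response_path: str) -> None:
    root = ET.parse(response_path).getroot()

    # The node must exist, even though its value changes from run to run.
    user_id = root.find(".//userId")
    assert user_id is not None, "userId element is missing"
    assert re.fullmatch(r"\d+", user_id.text or ""), \
        f"userId should be numeric, got {user_id.text!r}"

    # Dates are dynamic too, so validate the format rather than the value.
    created = root.find(".//createdDate")
    assert created is not None, "createdDate element is missing"
    assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", created.text or ""), \
        f"createdDate should look like YYYY-MM-DD, got {created.text!r}"

if __name__ == "__main__":
    check_nodes(RESPONSE_FILE)
    print("Node existence and value patterns check out.")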
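
Next, validating response data against the original data source. Here the “data provider” is a SQLite database with an invented schema; in practice it could be any database, file or upstream service, and the fetching code would change accordingly.

import sqlite3
import xml.etree.ElementTree as ET

DB_PATH = "reference_data.db"             # hypothetical source-of-truth database
RESPONSE_FILE = "get_user_response.xml"   # placeholder captured response

def check_response_against_source(response_path: str, db_path: str) -> None:
    root = ET.parse(response_path).getroot()
    user_id = root.findtext(".//userId")
    response_name = root.findtext(".//userName")

    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()

    assert row is not None, f"user {user_id} not found in the data source"
    assert row[0] == response_name, (
        f"service returned {response_name!r}, data source has {row[0]!r}")

if __name__ == "__main__":
    check_response_against_source(RESPONSE_FILE, DB_PATH)
    print("Service data matches the original data source.")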
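
Finally, scenario testing: a CreateUser request followed by a GetUser request, with a unique user name generated for each run and data from the first response carried into the second request. The endpoint and request formats are hypothetical; a real service will differ.

import time
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "http://test-server.local/userservice"   # hypothetical service URL

def call_service(body: str) -> ET.Element:
    """POST an XML request and parse the XML response."""
    request = urllib.request.Request(
        ENDPOINT,
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return ET.fromstring(response.read())

if __name__ == "__main__":
    # Unique test data per run, so repeated runs don't collide on user name.
    user_name = f"test_user_{int(time.time())}"

    create_response = call_service(
        f"<CreateUserRequest><name>{user_name}</name></CreateUserRequest>")
    user_id = create_response.findtext(".//userId")
    assert user_id, "CreateUser did not return a userId"

    # Data from the first response feeds the second request.
    get_response = call_service(
        f"<GetUserRequest><userId>{user_id}</userId></GetUserRequest>")
    assert get_response.findtext(".//name") == user_name, \
        "GetUser did not return the user created in the previous step"

    print(f"Scenario passed for {user_name} (id {user_id}).")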

All of these techniques were implemented using a home-grown tool that we use in our day-to-day work. Using that tool, we are able to automate about 70 percent of our test cases. This small utility was originally created by developers for developers to cover typical predefined test conditions, and it gradually evolved into a rather powerful tool. Having such a tool in your own organization could help your testing efforts well into the future.

Future Test

Automate Web Service Testing the Right Way

Elena Petrovskaya and Sergey Verkholazov


Elena Petrovskaya is lead software testing engineer and Sergey Verkholazov is chief software engineer working for EPAM Systems in one of its development centers in Eastern Europe, and currently engaged in a project for a leading provider of financial information.


Attend STPCon Spring 2009
March 31 – April 2, 2009 • San Mateo Marriott • San Mateo, CA
For more information, go to www.stpcon.com

3 Days • 5+ Tracks • 60+ Classes • 30+ Speakers • 500+ Colleagues • 6+ Tutorials • 25+ Exhibitors

Full-day tutorials offer a complete immersion into a subject, allowing much greater detail, comprehension and (in some cases) practice of the subject matter than a 1-hour session could convey.

Our speakers are hand-picked for their technical expertise and ability to communicate their knowledge effectively, in ways that are most useful for integrating that knowledge into your daily life.

More than 60 classes will be offered, covering software test/QA and performance issues across the entire application life cycle. There are 8 classes in a given time slot, which means you’ll always find something that fits your needs.

We’ll offer learning tracks designed especially for managers and other specialists just like you, who must stay on top of the latest technologies and development techniques to keep your company competitive.

Share experiences! Network with peers and instructors at our reception, coffee breaks and on the exhibit floor. Meet the makers of the latest software testing tools in our Exhibit Hall. Learn about their new products and features, test them out, and talk to the experts who built them.

We know you’re busy, so we packed the conference into 3 days to minimize your time out of the office. 29 minutes: San Mateo, CA is midway between San Francisco and Silicon Valley. Mark your calendar!

Value! $895. The travel and tuition expense of attending STPCon is less than other conferences. Check out JetBlue and Southwest for discount airfares. The earlier you sign up, the more you save.

Your Full Event Passport Registration Includes:
• Admission to workshops and technical classes
• Admission to keynote
• Admission to Exhibit Hall and Attendee Reception
• Admission to Lightning Talks and Tool Showcases
• All conference materials
• Continental breakfast, coffee breaks, and lunch

The Best Value of any Testing Conference!


Technology for better business outcomes. hp.com/go/quality

ALTERNATIVE THINKING ABOUT QUALITY MANAGEMENT SOFTWARE:

Make Foresight 20/20. Alternative thinking is “Pre.” Precaution. Preparation. Prevention. Predestined to send the competition home quivering.

It’s proactively designing a way to ensure higher quality in your applications to help you reach your business goals.

It’s understanding and locking down requirements ahead of time—because “Well, I guess we should’ve” just doesn’t cut it.

It’s quality management software designed to remove the uncertainties and perils of deployments and upgrades, leaving you free to come up with the next big thing.

©2008 Hewlett-Packard Development Company, L.P.
