WHITE PAPER
Functional Test Automation
Leverage Automation Frameworks for
Efficiencies in Software QA
Version 1.0, March 2009
Getting ready for PLM
Copyright Notice
© Geometric Limited. All rights reserved.
No part of this document (whether in hardcopy or electronic form) may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, to any third party without the written permission of Geometric Limited. Geometric Limited
reserves the right to change the information contained in this document without prior notice.
The names or trademarks or registered trademarks used in this document are the sole property of the
respective owners and are governed/ protected by the relevant trademark and copyright laws.
This document is provided by Geometric Limited for informational purposes only, without representation or
warranty of any kind, and Geometric Limited shall not be liable for errors or omissions with respect to the
document. The information contained herein is provided on an “AS‐IS” basis and to the maximum extent
permitted by applicable law, Geometric Limited hereby disclaims all other warranties and conditions, either
express, implied or statutory, including but not limited to, any (if any) implied warranties, duties or
conditions of merchantability, of fitness for a particular purpose, of accuracy or completeness of responses,
of results, of workmanlike effort, of lack of viruses, and of lack of negligence, all with regard to the
document.
THERE IS NO WARRANTY OR CONDITION OF NON‐INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS
WITH REGARD TO THE DOCUMENT. IN NO EVENT WILL GEOMETRIC LIMITED BE LIABLE TO ANY OTHER
PARTY FOR LOST PROFITS, LOSS OF USE, LOSS OF DATA, OR ANY INCIDENTAL, CONSEQUENTIAL, DIRECT,
INDIRECT, OR SPECIAL DAMAGES WHETHER UNDER CONTRACT, TORT, WARRANTY, OR OTHERWISE,
ARISING IN ANY WAY OUT OF THIS DOCUMENT, WHETHER OR NOT SUCH PARTY HAD ADVANCE NOTICE OF
THE POSSIBILITY OF SUCH DAMAGES.
Confidentiality Notice
This document is disclosed only to the recipient pursuant to a confidentiality relationship under which the
recipient has confidentiality obligations defined herein after. This document constitutes confidential
information and contains proprietary information belonging to Geometric Limited, and the recipient, by its
receipt of this document, acknowledges the same. The recipient shall use the confidential information only
for the purpose defined above for which this document is supplied. The recipient must obtain Geometric
Limited’s written consent before the recipient discloses any information on the contents or subject matter
of this document or part thereof to any third party which may include an individual, firm or company or an
employee or employees of such a firm or company. The recipient acknowledges its obligation to comply
with the provisions of this confidentiality notice.
Contents
Introduction ........................................................................................................................ 4
Automation Frameworks .................................................................................................... 4
The Need for a Framework ......................................................................................................... 4
Types of Frameworks .................................................................................................................. 5
Test Automation Life Cycle (TALC)...................................................................................... 6
Cost of test automation ...................................................................................................... 9
Cost of the Automation Tool....................................................................................................... 9
Cost of the Automation Effort..................................................................................................... 9
Benefits of Test Automation............................................................................................. 10
ROI on Automation: A Conservative Estimate.................................................................. 11
What can be Automated................................................................................................... 12
What to Automate, and When ......................................................................................... 13
Relationship of the TALC to the SDLC ............................................................................... 13
Challenges in Automation................................................................................................. 13
Automation Best Practices................................................................................................ 14
Conclusion......................................................................................................................... 14
About the Author.............................................................................................................. 15
About Geometric .............................................................................................................. 15
Introduction
Of every four dollars spent on software development, at least one dollar goes towards testing. Of
this dollar, at least 70 cents go towards functional testing. Within functional testing, test case
design is the key determinant of the quality of testing, but it is test execution that is the single
largest consumer of effort. This effort multiplies rapidly with the number of configurations to be
tested, is required in cycles, and is repetitive to a significant extent. Companies are constantly
looking for ways to ‘rationalize’ this cost and enable comprehensive testing, rather than having
to make risky choices about which platforms to test thoroughly. Test automation offers an
elegant way to tackle this challenge, though it is by no means an instant panacea. This explains
why worldwide spending on test automation is growing 10% faster than spending on testing as a
whole.
Test automation refers to the development and use of tools to determine the success or
failure of pre‐specified test cases against the Application Under Test (AUT), without human input.
The primary objective of test automation is to reduce repetitive manual tasks.
Several commercially available test automation tools allow the recording of user actions taken on
the AUT. The recording session generates scripts that can later be replayed without human input.
This paper attempts to outline the what, why, how and when of test automation. It focuses more
on automation for the QA team, as compared to that for the Development team.
Automation Frameworks
The Need for a Framework
An automation test suite can be built by simply recording various test cases. However, it is often
possible to significantly enhance the reusability and maintainability of such suites by developing
frameworks for automation.
Reusability
• Consider an AUT that has a combo box with five possible values. Further, assume there are five
separate test cases, each using a different value, but identical in other respects. If we use the
‘record and replay’ approach, we would have to record five different test cases. Instead, we
can abstract the combo value as an argument to be passed to the script, and then call the same
script five times with different arguments.
• There are certain user actions – log on, for instance – that might be common to several test
cases. Again, these actions (sequences) can be abstracted out and reused in several
automated test cases, rather than recording the same sequence multiple times. Short action
sequences can also be used to compose long ones.
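The two reusability ideas above can be sketched in plain Python. This is an illustrative sketch, not tied to any particular automation tool; the `log_on` and `select_combo_value` helpers are hypothetical stand-ins for recorded action sequences.

```python
# Hypothetical reusable action sequences; in a real suite these would wrap
# steps recorded with the automation tool.
def log_on(session, user, password):
    """Common action sequence, abstracted once and reused by many test cases."""
    session["user"] = user
    session["logged_in"] = True

def select_combo_value(session, value):
    """The combo value is an argument, so one script covers all five cases."""
    session["combo"] = value
    return session["combo"]

def run_combo_test(value):
    session = {}
    log_on(session, "tester", "secret")        # reused, not re-recorded
    return select_combo_value(session, value)  # parametrized action

# One script, called five times with different arguments, replaces
# five separately recorded test cases.
results = [run_combo_test(v) for v in ["A", "B", "C", "D", "E"]]
```

The same pattern scales to composing long action sequences from short ones: a checkout test, for instance, could call `log_on` followed by other reusable helpers.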
Maintainability
Consider a website (the AUT) that has a set of commonly needed links appearing in several web
pages. Further assume that we have automated the testing of this website.
In the absence of any framework, the identification of the links will be arbitrary: it could
be indexed on the current page layout, for instance. In a future version of the website, these
indices could change, say, due to the addition of other links. In this case, each test script
involving a web page that has this set of links will be broken, and will need rectification.
This could mean a significant rework effort, as there could be many such pages, perhaps
spanning the whole website.
Instead, if named references are provided to these links and only these names are referred to by
the test scripts, then changes will be needed only to (re)map the names to the actual links.
Additional benefits of a framework: A framework makes it easy to provide exception handling and
wait mechanisms, both of which further increase robustness. A well designed set of utilities in
the framework makes it easy to create a dashboard of the test results.
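The named-reference idea can be sketched as follows; the locator map and XPath strings here are hypothetical, and in practice the map would live in the framework's object repository.

```python
# Hypothetical name-to-locator map, kept in one place in the framework.
# Test scripts refer only to the names on the left; when a future site
# version shuffles the page layout, only the right-hand side is remapped.
LINK_MAP = {
    "home":     "//nav/a[1]",
    "products": "//nav/a[2]",
    "support":  "//nav/a[3]",
}

def click_link(name):
    """Resolve a stable name to its current locator before acting on it."""
    locator = LINK_MAP[name]  # single point of maintenance
    return f"clicked element at {locator}"

# A layout change becomes a one-line remap instead of edits to every script:
LINK_MAP["support"] = "//nav/a[4]"
```

Every script that calls `click_link("support")` keeps working unchanged after the remap, which is exactly the maintainability saving described above.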
Conceptually, a framework eliminates ‘hard coding’ and provides ‘modularization’. Based on our
experience, we estimate that in the long run, up to 50% of the script development and
maintenance effort can be saved by investing in creating an automation framework.
Types of Frameworks
Automation frameworks can broadly be classified into three types.
• Data driven: In this approach, variables are used to hold test data. At runtime, these
variables could be loaded from an external data source. This approach reduces the problem
of hard coding. Note that identification of GUI elements is still hard coded; for instance, the
script might contain instructions that effectively mean: “Select the image whose index
number is 3”.
• Keyword driven: In this approach, the input, user actions and expected output are encoded
using keywords that are typically independent of the AUT. A test case is encoded as a record
composed of these keywords. Test suites composed of such test cases are typically stored in
tables. As part of the framework development, scripts are written to translate these records
to a specific AUT.
This approach reduces the problem of hard coding and also provides modularization.
• Hybrid: This approach combines the two approaches outlined above, and brings in benefits
derived from both. Over a period of time, hybrid frameworks have emerged as the de facto
standard for automation requirements. Based on our experience, we recommend serious
evaluation of the hybrid approach during framework design.
Test Automation Life Cycle (TALC)
Broadly, the Test Automation Life Cycle can be depicted as follows. It should not be assumed that
efforts (person weeks) will be proportional to the schedules (calendar weeks) indicated.
[Figure: Gantt chart of the Test Automation Life Cycle over a twelve-month period spanning release cycles n and n+1, showing the Study, Framework Design, Scripting, Execution & Reporting, and Script Maintenance activities, color-coded by release (activities related to releases n-1, n, and n+1, and release-independent activities).]
The automation project should ideally be divided into five distinct phases as described below. It
is assumed that test case design is already done, and is not treated as part of the automation
effort.
Study
Typically, the study phase would involve the following steps:
• Identification of a few (could be just two) test cases that are fairly complex
• Identification of candidate tools. Several (say, six) tools may be compared and a few (say, two
or three) may be tagged as candidates. This short‐listing should be based on multiple
criteria, such as support for the AUT’s underlying technology, scripting
languages, and cost.
• PoC (Proof of Concept) automation of the selected test cases using the candidate tools. The
main intent is to discover problems that a particular automation tool may have with the AUT,
and solutions to those problems. At this point, the focus is not on elegance in scripting, script
reuse etc.
• Broad assessment of the long term returns from the automation initiative
• If the above assessment confirms sponsorship for the automation initiative, then
recommend the tool to be used, and the test cases to be taken up in the first automation
cycle.
Deliverables from this phase
• Broad assessment of the long term returns from automation
• Recommendation on the tool to use
• List of test cases to be taken up in the first automation cycle
• Data for estimation of subsequent phases
Framework Design
This phase is similar to the high level design phase in the SDLC (Software Development Lifecycle)
and involves deciding the type of framework to create, how to orchestrate the execution, and so
on.
Deliverables from this phase
• Architecture for the framework
• High level flowchart for the way test case execution will be orchestrated
• Format for reporting test results
• Test management strategy if not already decided
• List of the main keywords and functions that will be required
Scripting
The scripting phase is similar to the construction (coding) and unit testing phase in the SDLC. It
includes verification to ensure that an automatically executed test case returns the exact same
result as the manually executed test case.
Deliverables from this phase
• Automated test cases
• Test result reporting mechanism
In order to precisely estimate the effort required for scripting, it is recommended that the
following points be identified for each test case to be automated:
• Action points: They represent the number of user actions (clicks, selections, etc.) that are
required while executing the test case. Conceptually, they are similar to ‘function points’ in
application development, but are at a much finer level of granularity.
• Verification points: These are of two types:
• Checks required for the automation itself to work correctly. For instance, has a link
appeared before we attempt to click it?
• Checks that are deliberately inserted to validate the behavior of the AUT. These are
conceptually similar to ‘asserts’ used in application development.
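The two kinds of verification points can be illustrated with a small Python sketch. The `wait_for` helper and the simulated page state are assumptions for the illustration; real tools provide equivalent wait and checkpoint mechanisms.

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Verification point of the first kind: a check (with a wait mechanism)
    that the automation itself can proceed, e.g. a link has appeared."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated AUT state for the sketch: the link is already present.
page = {"links": ["checkout"]}

def element_present():
    return "checkout" in page["links"]

# First kind: has the link appeared before we attempt to click it?
appeared = wait_for(element_present)

# Second kind: a deliberate check on the AUT's behavior, akin to an assert
# in application development.
page["cart_total"] = 42
assert page["cart_total"] == 42, "AUT returned wrong cart total"
```
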
A verification point often takes more effort to develop than an action point.
As a guiding indicator, scripting effort is linearly proportional to the number of action points and
verification points. The other factors that significantly impact the scripting effort are:
• Recording support: While most tools provide support for recording, some (e.g. Unified
TestPro) do not. In such cases, more effort is needed.
• Level of reuse: The higher the level of reuse of action and verification points, the lower the
effort needed.
• Wait points: The more numerous and complex the wait points, the higher the effort needed.
• AUT(s): The higher the complexity of the AUT, the higher the effort needed. Greater effort
is also typically needed when more than one AUT is involved. The architecture of the AUT
(e.g. client‐server versus web) also impacts the scripting effort needed.
Execution and reporting
This phase involves the actual execution of the automated test cases, and reporting the results.
This is an ongoing phase and is almost completely automatic.
Deliverable from this phase
• Test results (summary and detailed)
Maintenance
Maintenance involves modifying and updating the automated test cases as required, to ensure
that they remain true to the intent of the test. Typical reasons necessitating maintenance are:
• A GUI change in the AUT
• Enhancement of the test case
Deliverable from this phase
• Updated / enhanced automated test cases
While good framework design will minimize the need to rework the scripts in response to a
change like GUI enhancement in the AUT, it is usually difficult to eliminate this need completely.
Cost of test automation
Investment in automation has two key components:
• Investment in the automation tool
• Investment in the effort required for developing, executing and maintaining the automated
test suite
Cost of the Automation Tool
Typical single‐user, perpetual licenses cost in the range of USD 5,000 for a named user license
and USD 10,000 for a floating license. Annual maintenance contracts (AMCs) are usually 20% of
the license costs. Some of the
most successful commercial test automation tools are provided by vendors like HP, IBM, Borland
and AutomatedQA.
Cost of the Automation Effort
Estimates discussed here are a broad representation and are based on our experience with test
automation of CAx and PLM systems. Actual estimates would primarily depend on the AUT,
automation tools, and the actual test cases to be automated.
• Study would take four to eight person weeks of effort.
• Framework design would take two to four person weeks of effort.
• Scripting: As indicated earlier, we recommend point based estimation. However, if a rough
estimate is needed in the early phases of the automation project, then the number of web
pages / forms, controls embedded in them, etc. can be used to arrive at an estimate of the
number of test cases, which can then be translated into an effort estimate.
• Reporting: Creating the result reporting mechanism would take one person week of effort.
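The point-based estimation recommended above can be sketched as simple arithmetic. The hours-per-point productivity figures below are illustrative assumptions, not benchmarks from this paper.

```python
# Illustrative point-based scripting estimate. The productivity figures
# (hours per point) are assumptions for the sketch, not published benchmarks.
HOURS_PER_ACTION_POINT = 0.25
HOURS_PER_VERIFICATION_POINT = 0.75  # verification points cost more to develop

def scripting_estimate(test_cases):
    """Effort taken as linearly proportional to action and verification points."""
    total_hours = 0.0
    for action_points, verification_points in test_cases:
        total_hours += (action_points * HOURS_PER_ACTION_POINT
                        + verification_points * HOURS_PER_VERIFICATION_POINT)
    return total_hours

# Three hypothetical test cases as (action points, verification points):
hours = scripting_estimate([(20, 4), (35, 8), (12, 2)])
person_weeks = hours / 40  # assuming a 40-hour person week
```

Counting points per test case up front also makes re-estimation cheap when the scope of the automation cycle changes.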
Benefits of Test Automation
Shorter test cycles
A ‘test cycle’ refers to a single run of a set of test cases. For instance, a system test cycle means
that all system test cases have been run. Automation significantly reduces the time required to
execute a test cycle through:
• Faster action triggering: Typically, a script can trigger actions on the AUT much faster than a
human. However, actual actions triggered (UI interactions done) during a manual test might
be different from those during an automated test due to a limitation of the automation tool.
• 24x7 replays: Since it is rare for manual testing to be done for more than eight working
hours in a day, the fact that automated replays can run continuously offers a significant
reduction in the test cycle time.
A 50‐70% reduction in the test cycle time is common.
Saving of manual testers’ time
The saving in a test engineer’s time is almost linearly proportional to the number of test cases
automated, and the number of test cycles to be executed.
Repeatability
In complex test environments (those involving several application components, platforms,
environment variables, etc.) human error can creep into manual test execution. Automation
ensures 100% repeatability and hence greater predictability in execution.
Enabling non‐functional testing – synergy with other quality tools
With additional tools and effort, it is often possible to configure special runs of the automated
test cases in order to perform non‐functional testing, for example:
• Performance testing
• Scalability/ load testing
• Memory profiling
• Code coverage and impact analysis
Depending on the project’s priorities, the above benefits can be translated into higher quality,
lower costs, or less time to market.
ROI on Automation: A Conservative Estimate
Based on the benefits outlined above, we recommend that automation should be seen as an
overall quality and productivity improvement initiative, rather than merely as a cost saving
exercise.
At the same time, automation is more amenable to a return on investment (ROI) analysis than
some other quality improvement initiatives. The monetary values of the following can be
compared:
• Return: Saving of manual testers’ time, converted to a monetary value
• Investment
• Cost of the automation tool
• Cost of the automation effort
For automation of functional test cases, a typical ROI profile might look like the one below.
[Figure: Typical ROI profile over four release cycles, plotting the cost of the tool and the cost of the automation effort (in K USD) alongside the cumulative percentage of test cases automated and the ROI percentage (scale from -100% to 200%).]
The above ROI assessment can be termed conservative, as it only considers cost elements, and
does not take into account the benefits resulting from shorter cycles, higher repeatability, etc.
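The conservative comparison of return and investment described above can be sketched as a simple calculation. All figures below are illustrative assumptions in K USD, not data from the chart.

```python
# Conservative ROI sketch: return = monetary value of testers' time saved;
# investment = tool cost + automation effort cost. Figures are illustrative.
def cumulative_roi(savings_per_cycle, tool_cost, effort_per_cycle):
    """ROI per cycle = (cumulative savings - cumulative investment)
    / cumulative investment."""
    profile = []
    savings = 0.0
    investment = tool_cost  # one-time tool purchase up front
    for effort in effort_per_cycle:
        savings += savings_per_cycle
        investment += effort
        profile.append((savings - investment) / investment)
    return profile

# Heavier scripting effort in early cycles, lighter maintenance later.
roi = cumulative_roi(savings_per_cycle=10.0, tool_cost=5.0,
                     effort_per_cycle=[8.0, 4.0, 2.0, 2.0])
```

Under these assumed figures the ROI starts negative and turns positive as maintenance effort tapers off, which matches the shape of the profile described above.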
What can be Automated
There is a wide spectrum of tasks that can be automated:
• Test data generation (particularly useful for load testing)
• Unit testing
• Integration testing
• System testing involving a single AUT
• System testing involving multiple AUTs and their integrations
• AUT installation
Non‐textual input/ output
Consider a simple 2D CAD AUT that has a graphics area, in which the user can pick objects such
as circles, lines, etc. and drag them with the mouse. Or, consider an AUT that converts text to
speech. Commercially available automation tools are usually unable to handle such AUTs out‐of‐
the‐box. However, with additional effort, it is possible to automate testing of AUTs with non‐
textual input/ output as well.
Pixel based automation and journal based automation, two common solutions to this problem,
are described in more detail below. While the CAD AUT has been used as an example,
variations of the approaches described can be used with other types of AUTs, such as the
text‐to‐speech application.
Pixel based automation
• Approach (non‐textual input): Record the user actions of picking the circle, dragging the
mouse, etc., just like other user actions.
• Approach (non‐textual output): Save a screenshot of the output to a bitmap file, then
compare this with the reference bitmap.
• Pros: GUI based, leading to high fidelity with respect to manual testing. No extra coding
required.
• Cons: Automation tends to be brittle. This can be mitigated by fine control over the
environment, by not picking close to the edge, validating output at the right precision
level, etc.
Journal based automation
• Approach (non‐textual input): Journal inputs (Pick Circle#15, Apply Move [10, 15], etc.) to a
file (a one‐time activity), then write a test app that mimics the user by using this file to
drive the API of the AUT. The commercial automation tool is not used at all.
• Approach (non‐textual output): Record the test case as usual, journal outputs
(NewCirclePosition [25,70]) to a file during test execution, and compare this output file
with the reference file.
• Pros (non‐textual input): Highly robust automation, not affected by environment, precision,
etc., or even by changes to the GUI.
• Pros (non‐textual output): Robust automation, not affected by environment, precision, etc.
• Cons (non‐textual input): The GUI is used neither for input nor for output, leading to a
significant loss of fidelity with respect to manual testing. Significant extra coding is
required to create the test apps.
• Cons (non‐textual output): The GUI is used for input but not for testing output, leading to a
loss of fidelity with respect to manual testing. Extra coding is required in the AUT for
journaling.
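The journal-comparison step of journal based automation can be sketched with the Python standard library alone. The one-operation-per-line journal format (e.g. `NewCirclePosition [25,70]`) follows the examples above and is otherwise an assumption.

```python
# Journal-based verification sketch: compare the journal produced during the
# test run with a reference journal, line by line.
import difflib

def compare_journals(actual_lines, reference_lines):
    """Return an empty list on a match, else a unified diff of the journals."""
    return list(difflib.unified_diff(reference_lines, actual_lines,
                                     fromfile="reference", tofile="actual",
                                     lineterm=""))

reference  = ["Pick Circle#15", "Apply Move [10, 15]", "NewCirclePosition [25,70]"]
actual_ok  = ["Pick Circle#15", "Apply Move [10, 15]", "NewCirclePosition [25,70]"]
actual_bad = ["Pick Circle#15", "Apply Move [10, 15]", "NewCirclePosition [25,71]"]

match = compare_journals(actual_ok, reference)      # empty: test case passes
mismatch = compare_journals(actual_bad, reference)  # non-empty: test case fails
```

Because the comparison is on journaled values rather than pixels, it is unaffected by screen resolution, rendering precision, or GUI changes, which is the robustness advantage noted above.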
What to Automate, and When
We do not recommend making automation decisions solely on the basis of a costs‐only ROI.
However, given that automation needs a non‐trivial effort, it is advisable to undertake
automation only after some parts of the AUT have become fairly stable. This ensures that the
automated test cases will get executed enough times to offset the automation effort.
A few consequences of this rationale are as follows:
• Automation is much more attractive in product development than in services projects.
• In product development, it would be prudent to start automation after a couple of releases
of the product. This ensures that chances of a GUI overhaul are minimal.
• It is advisable to first automate those test cases that involve functionalities that are fairly
stable. New features could typically be taken up for automation in the next release. This also
means that automation is most effective for regression testing.
Relationship of the TALC to the SDLC
Since much automation is targeted towards regression testing, it is not essential to synchronize
the Test Automation Lifecycle with the Software Development Lifecycle. An automation initiative
can be started at any point in the SDLC, provided that some parts of the AUT have reached stability.
However for convenience in planning and tracking, it is common to synchronize these two life
cycles, and to then treat the TALC as a sub‐project of the main SDLC. This is depicted in the figure
below.
Challenges in Automation
• The choice of the tool is often restricted by the technology underlying the AUT. For instance,
Qt, a popular cross‐platform development framework, is not supported well by one of the
leading automation tools.
• In web applications, multi‐window test cases are usually difficult to automate. Pop‐ups and
single child windows are not a problem.
• Integrations between AUTs are sometimes difficult to automate.
Automation Best Practices
We recommend the following best practices for automation.
• Since automation frameworks are essentially about abstraction, an important set of best
practices deals with ensuring loose coupling between:
• The test data and the test scripts,
• Test scripts themselves,
• The automation framework and the AUT(s),
• The test cases and the automation framework, and
• The automation framework and the automation tool.
The entities listed above will essentially have to reference one another to form a complete
working test automation system. It is important that these references be through well defined
interfaces only.
• Keyword names should be carefully chosen, so that human readability is also high. This
enables gradual transitioning from manual testing to automated testing.
• Avoid duplication in scripts. Any duplication should be investigated to check whether a
separate unit (say, function) can be created.
• Verification points should be judiciously inserted into the scripts. In case of test case failure,
these points accelerate the process of zeroing in on the reason for the failure.
• The development of an automation framework is similar to the development of an
application in several respects, and hence should be planned and tracked as a (sub) project
in itself. It should be noted that framework creation and test case design are distinct
activities (and require different skills).
• Simpler test cases should be automated before complex ones. This makes it easy for later
scripts to build on earlier ones. There is an exception to this approach though: fairly complex
test cases should be taken up in the study phase, as the objective there is to discover
problems in automation (rather than achieving a high level of reuse).
Conclusion
Test automation offers a promising way to improve quality and productivity in the area of
software testing, particularly for products. While manual testing remains required and desired
(except perhaps for a product that is purely in sustenance mode), the time and cost required for
it can be significantly reduced. Moreover, a part of this saving can be invested in better quality.
Commercial tools and a rapidly growing body of knowledge have led to a reduction in the time
needed for monetary returns to be seen, thus accelerating the adoption of test automation in
the industry.
We recommend that for all product development, at least the study phase of the test
automation lifecycle should be undertaken.
About the Author
Prashant Anaskure is the Head of the Quality Assurance and Support Practice, part of the
Software Product Engineering group, at Geometric. Prashant has a very strong background in
the product development lifecycle, having worked for over ten years with different business units in
Geometric. Prashant has led teams in product development and quality assurance activities for
Geometric’s Enterprise Products, Desktop Products, and also for our key customers.
About Geometric
Geometric is a specialist in the domain of engineering solutions, services and technologies. Its
portfolio of Global Engineering services and Digital Technology solutions for Product Lifecycle
Management (PLM) enables companies to formulate, implement, and execute global engineering
and manufacturing strategies aimed at achieving greater efficiencies in the product realization
lifecycle.
Headquartered in Mumbai, India, Geometric was incorporated in 1994 and is listed on the
Bombay and National Stock Exchanges. The company recorded consolidated revenues of Rupees
4.86 billion (US Dollars 121.6 million) for the year ended March 2008. It employs close to 3000
people across 10 global delivery locations in the US, France, Romania, India, and China.
Geometric is assessed at SEI CMMI Level 5 for its software services and ISO 9001:2000 certified
for engineering operations. For further details, please visit www.geometricglobal.com.