TRANSCRIPT
HELSINKI UNIVERSITY OF TECHNOLOGY
T-76.613 Software Testing and Quality Assurance
25.10.2006
Benefits and Pitfalls of Test Automation
Juha Itkonen, SoberIT
Tools in development lifecycle
Redrawn from Fewster and Graham. Software Test Automation, 1999.
[Figure: V-model lifecycle (requirement specification, architectural design, detailed design, code, unit test, integration test, system test, acceptance test) with tool categories mapped onto the phases: test design tools (logical design tools, physical design tools), management tools, static analysis tools, coverage tools and unit test frameworks, dynamic analysis tools, debugging tools, performance and simulator tools, and test execution and comparison tools.]
Some types of test automation
- Scripting (graphical) user interface tests using some script language
- Capture replay tools (record and playback)
- Random automation: executing randomly generated test sequences ("monkey" automation)
- Data-driven test automation
- Keyword-driven test automation
- Automatic test case generation
- Model-based test automation
- Tools for testing specific quality attributes: performance, load, etc.
- Unit testing frameworks

Note that "capture replay is not test automation" (Fewster and Graham. 1999. Software Test Automation).
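Unit testing frameworks are the most widespread of these tool types. As a minimal sketch, the snippet below exercises a hypothetical apply_discount function with Python's unittest; the function and its behaviour are invented for illustration only.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_plain_discount(self):
        # 25 % off 100.00 should leave 75.00
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        # out-of-range input must raise, not return a wrong price
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A suite like this is typically run with `python -m unittest` so that every test executes and failures are reported automatically.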
Test automation frameworks
1st generation frameworks:
- unstructured; test data is embedded in the scripts
- one script per test case
- scripts are generated with capture-and-replay tools or coded manually
- scripts are virtually non-maintainable: when the tested system changes, they need to be captured or written again

2nd generation frameworks:
- scripts are well-designed, modular, robust, documented, and maintainable
- scripts handle not only test execution but also, for example, setup and cleanup, and error detection and recovery
- test data is still embedded in the scripts; one driver script per test case
- code is mostly written manually; both implementation and maintenance require programming skills

3rd generation frameworks:
- have all the good characteristics of the 2nd generation
- go further by taking the test data out of the scripts
- one driver script may execute multiple similar test cases
- adding new tests is trivial; test design and framework implementation are separate tasks
Data-Driven Testing
A scripting technique that stores test inputs and expected outcomes as data, normally in a tabular format
- Test data (test inputs and expected results) are read from an external data source
- One driver script can execute all of the designed test cases
- Avoids the problems of test data embedded in the scripts:
  - when the test data needs to be updated or extended, the actual script code must be changed
  - scripts can be long and unstructured, while testers can lack programming skills
  - creating similar tests with slightly different test data always requires programming, which leads to copy-paste scripting
- External test data can be easily edited by test engineers, or even by business people or customers, without any programming skills
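A minimal sketch of the data-driven idea in Python: one driver script reads a tabular data source and executes every row as a test case. The add function and the inlined CSV table are assumptions for illustration; in practice the data would live in a separate file that test engineers edit without touching the script.

```python
import csv
import io

# Stand-in for an external .csv file; in a real setup this would be
# a file edited by non-programmers, not embedded in the script.
TEST_DATA = io.StringIO(
    "test_name,a,b,expected\n"
    "add_small,1,2,3\n"
    "add_negative,-4,9,5\n"
    "add_zero,0,0,0\n"
)

def add(a, b):
    """Hypothetical function under test."""
    return a + b

def run_data_driven_tests(data_file):
    """One driver executes every row of the data table as a test case."""
    results = []
    for row in csv.DictReader(data_file):
        actual = add(int(row["a"]), int(row["b"]))
        results.append((row["test_name"], actual == int(row["expected"])))
    return results
```

Adding a new test case is then just adding a row to the table; the driver script never changes.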
Keyword-Driven Testing
An automation approach where keywords instructing how to use the data are separated from the actual scripts, in addition to the test data (as in data-driven testing)
- Keywords and test data are read from an external data source
- When test cases are executed, the keywords are interpreted by a test library, which is called by the test automation framework; the test library is the test scripts
- Keyword-driven testing has all the benefits of data-driven testing and improves on the data-driven approach:
  - in the data-driven approach all test cases are similar, and creating totally new tests requires programming effort
  - in the keyword-driven approach, not only the test data but also the directives (keywords) telling what to do with the data are taken out of the test scripts and put into external input files
- The basic idea (reading test data from external files and running tests based on it) stays the same as in data-driven testing
- A good example of a data-driven and keyword-driven testing framework is FIT (Framework for Integrated Test): see lecture 7 slides and http://fit.c2.com/
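The keyword-driven idea can be sketched as follows, with a hypothetical CalculatorLibrary standing in for the test library: the framework reads keyword rows (here inlined, in practice from an external file) and dispatches each one to a library function.

```python
class CalculatorLibrary:
    """Hypothetical test library: each public method implements a keyword."""
    def __init__(self):
        self.value = 0

    def set_value(self, n):
        self.value = int(n)

    def add(self, n):
        self.value += int(n)

    def result_should_be(self, expected):
        # verification keyword: the library, not the framework, knows
        # how to check the outcome
        assert self.value == int(expected), f"{self.value} != {expected}"

# Keyword table; in a real framework this comes from an external file
# that test designers edit without programming.
TEST_CASE = [
    ("set_value", "10"),
    ("add", "5"),
    ("result_should_be", "15"),
]

def run_keywords(library, steps):
    """The framework interprets each keyword by dispatching to the library."""
    for keyword, arg in steps:
        getattr(library, keyword)(arg)
```

New kinds of tests only require new keyword rows, or, at most, a new keyword implemented once in the library.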
Architecture of Keyword-Driven Framework
[Figure: architecture of a keyword-driven test automation framework.] Pekka Laukkanen. Data-Driven and Keyword-Driven Test Automation Frameworks. Master's Thesis. Helsinki University of Technology. 2006.
Test case generation and model based testing
- Test cases are generated automatically, based on code, interfaces, or specifications
- Leads to large numbers of test cases; the challenge is how to generate the expected results
- In model-based testing, an abstract model of the behaviour of the tested system is designed:
  - test cases are generated based on this model
  - the model can also provide the expected results for the generated tests
- The idea is to generate a large number of tests that cover the model; there are many different criteria for covering the model
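A minimal model-based sketch, assuming a hypothetical login dialog modelled as a finite state machine: test sequences are generated from the model, and the same model supplies the expected final state, so it also acts as the oracle.

```python
from itertools import product

# Hypothetical behaviour model: (state, event) -> next state.
# Anything not listed is an event the model forbids in that state.
MODEL = {
    ("logged_out", "login_ok"):  "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in",  "logout"):    "logged_out",
}

def generate_tests(start, depth):
    """Generate every event sequence of length `depth` the model allows,
    paired with the expected final state predicted by the model."""
    events = sorted({event for (_, event) in MODEL})
    tests = []
    for seq in product(events, repeat=depth):
        state, valid = start, True
        for event in seq:
            if (state, event) not in MODEL:
                valid = False          # sequence not allowed by the model
                break
            state = MODEL[(state, event)]
        if valid:
            tests.append((list(seq), state))
    return tests
```

Each generated pair would then be replayed against the real system, comparing its end state to the model's prediction; richer coverage criteria only change which sequences are picked.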
Claimed benefits of automation
- Improved testing repeatability: regression tests for new versions; smoke tests (pre-testing builds before actual testing); running the same tests in other environments
- Enables running tests faster and more often: a shorter feedback loop
- Better use of resources: menial and boring repetition can be automated; skilled testers do a better job with the remaining, fewer, tests
- More accurate results; cheaper
- Improved confidence and knowledge of software status: unit, regression, smoke, performance, reliability, and load tests
- Improved developer testing: unit tests
- Makes some types of testing possible: load and performance testing, stress testing, …
Problems of test automation
- Unrealistic expectations
- Poor testing practices: automating chaos just gives faster chaos
- The expectation that automated tests will find a lot of new defects
- A false sense of security
- Maintenance of automated tests
- Technical problems
- Organizational issues
Example
The Reckless Assumptions of Automation
“Automated tests execute a sequence of actions without human intervention. This approach helps eliminate human error, and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time. Typically a company will pass the break-even point for labor costs after just two or three runs of an automated test.”
Based on James Bach, Test Automation Snake Oil, 1999
Reckless assumptions of automation #1
Testing is a “sequence of actions”?
- Manual testing is a process that adapts easily to change and can cope with complexity
- Humans are able to detect and evaluate hundreds of problem patterns at a glance
- Good testing is an interactive cognitive process
Reckless assumptions of automation #2
Testing means repeating the same actions over and over?
- Automated testing is regression testing (excluding advanced model-based testing): no new defects found
- Stepping in someone else’s footprints minimizes the chance of being blown up by a land mine
Reckless assumptions of automation #3
We can automate testing actions?
- Some tasks are easy for people but hard for computers
- How do you automatically interpret test results? You must recognize all categories of significant problems and ignore all insignificant variations
- There is a high degree of uncertainty and change in a typical software project and product
- It is hard to find good tools for automating many types of testing
Reckless assumptions of automation #4
An automated test is faster, because it needs no human intervention?
- It can be surprisingly hard to create a test suite that runs without human intervention: interpreting results, fixing and maintaining tests, problems and bugs in the test tools, changes to the software under test, …
Reckless assumptions of automation #5
Automation reduces human error?
- Some errors are amplified: any error that goes unnoticed when the test scripts and result files are created will go systematically unnoticed over and over again
Reckless assumptions of automation #6
We can quantify the costs and benefits of manual vs. automated testing?
- These are two different processes that reveal different defects, so the comparison is rather meaningless
- There are a lot of specific and hidden costs
- Test automation is one part of a test strategy, not a substitute for manual testing
Reckless assumptions of automation #7
Automation will lead to significant labor cost savings?
- Developing automation: writing test scripts is software development, testers are seldom competent coders, and the quality of test code must be good
- Operating automated tests
- Maintaining automated test suites
- All other new tasks necessitated by the automation
- Automation probably does not significantly reduce the need for manual testing
Reckless assumptions of automation #8
Automation will not harm the test process?
- It’s dangerous to automate something we don’t understand: the result will be a large mass of test code that nobody understands
Automation and oracles
- Automated testing depends on the ability to programmatically detect when the software fails
- An automated test is not equivalent to a similar manual test:
  - automatic comparison is typically more precise, but will be tripped by irrelevant discrepancies
  - a skilled human comparison will sample a wider range of dimensions, noting oddities that one wouldn't program the computer to detect
- “Our ability to automate testing is fundamentally constrained by our ability to create and use oracles” (Cem Kaner)
Automated comparison
- Designing good verifications defines the quality of the tests: the computer does not see anything but what it is told to see
- Expected outcomes prepared in advance vs. captured actual outcomes
- Comparison is the best task to automate in testing: a repetitive and detailed task
- Checking what is visible; added, altered, or deleted data in a file or database; messaging
- Dynamic or post-execution comparison
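A small sketch of post-execution comparison, assuming log-like outputs that carry a run timestamp: the expected outcome is prepared in advance, and the volatile field is masked before comparing so that an irrelevant discrepancy does not trip the automated comparison.

```python
import re

def mask_volatile(text):
    """Replace run-dependent fields (here: ISO timestamps) with a token,
    so only the parts the test actually cares about are compared."""
    return re.sub(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", "<TIME>", text)

def outcomes_match(expected, actual):
    """Post-execution comparison of a prepared expected outcome against
    the captured actual outcome, after masking volatile fields."""
    return mask_volatile(expected) == mask_volatile(actual)
```

Deciding what to mask is exactly the verification-design problem the slide describes: mask too little and the test trips on noise, mask too much and it stops seeing real failures.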
Types of outcome to compare
- Disk-based:
  - comparing text files
  - comparing non-textual forms of data
  - comparing databases and binary files
- Screen-based:
  - character-based applications: correct message, display attributes, displayed correctly
  - GUI applications: GUI components and their attributes
  - graphical images: avoid bitmap comparisons
- Others:
  - multimedia applications: sounds, video clips, animated pictures
  - communicating applications
- Simple vs. complex comparison
Test sensitivity in comparisons
[Figure: trade-offs between robust and sensitive tests along five dimensions: susceptibility to change, implementation effort, missed defects, failure analysis effort, and storage space. Redrawn from Fewster et al. Software Test Automation, 1999.]
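The trade-off can be illustrated with a sketch (the report text and both checks are invented for illustration): a sensitive check compares the whole output and trips on any change at all, while a robust check verifies only the fact the test was designed to confirm.

```python
# Hypothetical output captured from the system under test.
REPORT = "Invoice 1042\nCustomer: Acme Oy\nTotal: 120.00 EUR\n"

def sensitive_check(actual):
    """Compare the entire output: catches any deviation, but also
    fails on cosmetic changes that have nothing to do with the test."""
    expected = "Invoice 1042\nCustomer: Acme Oy\nTotal: 120.00 EUR\n"
    return actual == expected

def robust_check(actual):
    """Verify only the designed outcome: survives layout changes,
    but misses defects outside the checked field."""
    return "Total: 120.00 EUR" in actual
```

A cosmetic formatting change breaks the sensitive check while the robust one still passes, which is exactly the susceptibility-to-change vs. missed-defects trade-off in the figure.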
Effect of automation on the ”goodness” of a test case

[Figure: how a test case rates on four attributes (effective, exemplary, economic, evolvable) as a manual test, on the first run of automated tests, and as an automated test after many runs. Redrawn from Fewster and Graham. Software Test Automation, 1999.]
Automating pre- and post-processing
[Figure, redrawn from Fewster et al. Software Test Automation, 1999: two levels of automation. With automated tests, only parts of the process are automated; with automated testing, the pre- and post-processing steps are automated as well, leaving only failure analysis and defect reporting as a manual process.]

Automated tests:
- Select/identify test cases to run
- Set up test environment: create test environment, load test data
- Repeat for each test case: set up test prerequisites, execute, compare results, log results, analyze test failures, report defect(s), clear up after test case
- Clear up test environment: delete unwanted data, save important data
- Summarize results

Automated testing:
- Select/identify test cases to run
- Set up test environment: create test environment, load test data
- Repeat for each test case: set up test prerequisites, execute, compare results, log results, clear up after test case
- Clear up test environment: delete unwanted data, save important data
- Summarize results
- Remaining manual steps: analyze test failures, report defects
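The per-test-case setup and clear-up steps above map naturally onto the fixture hooks of a unit testing framework. A sketch with Python's unittest, where the exported file and its format are assumptions for illustration:

```python
import os
import tempfile
import unittest

class DatabaseExportTest(unittest.TestCase):
    """setUp plays the 'set up test prerequisites' step and tearDown
    the 'clear up after test case' step of the process above."""

    def setUp(self):
        # create test environment: a scratch directory per test case
        self.workdir = tempfile.mkdtemp()
        self.path = os.path.join(self.workdir, "export.csv")

    def test_export_creates_file(self):
        # execute: stand-in for the real export operation under test
        with open(self.path, "w") as f:
            f.write("id,name\n1,Acme\n")
        # compare results
        self.assertTrue(os.path.exists(self.path))

    def tearDown(self):
        # clear up: delete unwanted data so test cases stay independent
        if os.path.exists(self.path):
            os.remove(self.path)
        os.rmdir(self.workdir)
```

Because the framework always runs tearDown, even after a failure, the clear-up step cannot be forgotten, which is much of what "automated testing" adds over bare "automated tests".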
Automate or not?
- What is the goal of the automated tests?
- How much more effort does it take to automate, compared to running the tests manually?
- What is the expected value of rerunning the test often?
- What do you lose by running the tests manually?
Tests to Automate
- Tests that are run often: regression tests; important but trivial tests
- Tests that are expensive to perform manually: multi-user tests; endurance/reliability tests
- Tests that are difficult to perform manually: timing-critical tests; complex tests; difficult comparisons
Tests not to automate
- Tests that are run rarely
- Tests that are not important: they will not find severe faults
- Tests where defects are hard to recognize: usability tests (do the colors look nice?)
Relationship of testing activities
[Figure: how the effort spent on testing activities (set up, execute, analyze failures, clear up, edit tests/maintenance) develops over time, from manual testing, to the same tests automated, to more mature automation. Redrawn from Fewster et al. Software Test Automation, 1999.]
Tips for sensible automation
- Define the test process first, to know what to automate and why
- Use automated tests as baseline and regression suites, not as a replacement for manual testing
- Selecting a test tool is a big project: start with the needs, not with product demonstrations; the biggest tasks, after the tool is selected and purchased, are internal sales and marketing, deployment, and training
- Design your test suites carefully, paying attention to maintainability: automated tests are code, too
- Think testing first and keep the machinery and the craft separate
- What information does the automated test set provide? What do we know if an automated test set fails, and what do we know if it passes?
Introducing a new tool is a cultural change
- A tool changes working practices and habits
- Changing working practices is not easy: it requires managing, training, and support
- Resistance is natural
- Technology is only a matter of training and learning; organizational politics and “soft issues” are the hard part
Tips for choosing a tool
- Start and document an internal project; plan carefully
- Gain knowledge: start by looking at your current situation; identify your problems; explore alternative solutions; are you ready for tools?
- Make a business case for the tool: what are your current and future manual testing costs? what are the initial and future automated testing costs? what return will you get on the investment, and when?
- List potential tools and identify a few candidate tools to evaluate
- Remember your own priorities: identify constraints (economic, environmental, commercial, quality, political); classify tool features into mandatory and desirable; evaluate the features; investigate tool experiences
- Make the decision
How to put a tool into use
- Run a pilot project: not critical, no tight schedule, not too risky; make it easy for the users
- Take small, controlled steps; start with a small team
- Lobbying and selling: numbers and marketing; remember to advertise victories
- Make users hype the tool to each other; make users dependent on the tool; work statements about the tool into every document
Cost of a tool
[Figure: the cost of a tool breaks down into tool, training, and maintenance costs.]