m. holovaty, Automated Testing Concepts
DESCRIPTION
A short presentation on the effectiveness, or lack thereof, of automated application testing.
TRANSCRIPT
Lohika / HP Confidential
Automated Testing Concepts
Mikhail Holovaty, 05/31/2011
Abstraction Level Limitations
[Diagram labels: Abstraction, Implementation; Requirement, Test Case, Test Script, Test Execution; QA, Robot]
Automating a Manual Process
Level of abstraction accessible to human and automated tests
Cyclic Development Process
The development, execution, and results analysis of automation test scripts are determined by the cyclic development process of the tested application
For each development iteration, the application must implement a set of requirements. The set may include new or altered requirements as well as already implemented ones (the regression area)
An iterative development model
Cyclic Development Process
Test scripts are strongly bound to the implementation of application requirements. Automation results require attention only if there are test script failures. A test script failure means one of the following:
• Defect. The application implementation has changed and doesn't meet the requirements
• Test script is out of date. The application implementation has changed and meets new or altered requirements; the test script requires an update
• Environment. The failure is caused by the test environment rather than the application or the script
The results validation phase continues until application defects are fixed and test scripts are updated.
Simplified development cycle that includes automation test scripts implementation, execution and analysis
Cyclic Development Process
Important points
• a test script doesn't reveal defects by itself. It only shows areas of changed implementation
• automation results analysis cannot be detached from script updates and involves a person with development skills
• the test scripts version is bound to the application version. Test scripts cannot be treated as a universal pack invoked against different versions or branches of the application; each application branch or version should have its own scripts
Automation Efficiency
Let's compare the time needed by automation and by manual testing to validate the same functionality and find the same number of defects.
Automated testing is effective if the following inequality holds:

TA_cr / N + TA_errval + TA_upd < TM_val

• N — number of iterations since script creation
• TA_cr — time to create automation scripts
• TA_errval — average per-iteration time to understand the reasons for failed scripts
• TA_upd — average per-iteration time to update scripts
• TM_val — average per-iteration time of manual validation

This time estimate allows a further Return On Investment (ROI) calculation once the costs of automation and manual time are taken into account.
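The efficiency inequality is easy to turn into a quick check. A minimal sketch, with illustrative numbers and function names that are not part of any real framework:

```python
# Minimal sketch of the efficiency inequality; all names are illustrative.

def automation_is_effective(t_create, t_errval, t_upd, t_manual, iterations):
    """Return True when the amortized automation cost per iteration is
    lower than the cost of manual validation of the same functionality."""
    per_iteration_cost = t_create / iterations + t_errval + t_upd
    return per_iteration_cost < t_manual

# Example: 40h to create scripts, 2h/iteration to triage failures,
# 3h/iteration to update scripts, vs. 10h of manual validation.
print(automation_is_effective(40, 2, 3, 10, iterations=4))   # False: 40/4 + 2 + 3 = 15 > 10
print(automation_is_effective(40, 2, 3, 10, iterations=20))  # True:  40/20 + 2 + 3 = 7 < 10
```

As the example shows, the same scripts flip from unprofitable to profitable as the iteration count N grows, which is exactly the point of the next slide.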
Automation Efficiency
The cost of test script creation pays off when the same functionality must be validated over many development iterations. Automated validation isn't effective for functionality that is checked only once, or that changes significantly between iterations
Error validation time can be decreased with an effective results analysis strategy that should be supported by the scripts. The following questions have to be answered as quickly as possible:
• What functionality does each error correspond to?
• Which scripts fail with the same error?
• What actions did the scripts perform?
• What state and dynamic data did the application have at the moment of script execution?
Update time can be decreased with:
• Don't Repeat Yourself (DRY) principles (raising the abstraction level) on the test scripts side
• techniques that reduce test script dependency on the application implementation on the application side (automation-friendly GUI toolkits, ids of GUI elements, etc.)
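The DRY point can be sketched with a page-object layer: tests talk to one `LoginPage` class, so a changed element id is fixed in one place instead of in every script. The driver below is a stand-in fake and the element ids are illustrative; with a real tool such as Selenium the lookups would go through its WebDriver API instead.

```python
# Page-object sketch: tests depend on LoginPage, not on element ids.
# FakeDriver stands in for a real WebDriver; ids are illustrative.

class FakeDriver:
    """Records typing/clicks instead of driving a real browser."""
    def __init__(self):
        self.actions = []
    def type(self, element_id, text):
        self.actions.append(("type", element_id, text))
    def click(self, element_id):
        self.actions.append(("click", element_id))

class LoginPage:
    # Element ids live in one place; an id change touches only this class.
    USER_FIELD = "login-username"
    PASS_FIELD = "login-password"
    SUBMIT_BTN = "login-submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)

driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
print(driver.actions[-1])  # ('click', 'login-submit')
```

If the application renames `login-submit`, only the `SUBMIT_BTN` constant needs updating, which is the reduction in update time the slide is after.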
Automation Script
An automation script is fully based on the general test case definition. A test oracle mechanism is used to determine whether the program has passed or failed a test. A complete oracle has three capabilities:
• a generator, to provide predicted or expected results for each test
• a comparator, to compare predicted and obtained results
• an evaluator, to determine whether the comparison results are sufficiently close to be a pass
The application is viewed as a black box and is tested in terms of the outputs generated in response to predetermined inputs.
Scheme of a black box
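The three oracle capabilities can be sketched in a few lines, assuming the black box under test is just a function; all names here are illustrative:

```python
# Test-oracle sketch: generator, comparator, evaluator.
# The "application" under test is a stand-in function.

import math

def app_under_test(x):
    return math.sqrt(x)            # black box producing an output

def generator(x):
    return x ** 0.5                # independent source of expected results

def comparator(expected, actual):
    return abs(expected - actual)  # distance between predicted and obtained

def evaluator(delta, tolerance=1e-9):
    return delta <= tolerance      # pass/fail decision

def run_test(x):
    expected = generator(x)
    actual = app_under_test(x)
    return evaluator(comparator(expected, actual))

print(run_test(2.0))  # True
```

The evaluator's tolerance is the interesting design choice: a strict equality check would make the oracle brittle for floating-point or timing-dependent outputs.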
Automation Script
Failed automation test cases require human interaction to understand the failure reason. Ideally there shouldn't be a need to rerun failed test cases, watch the application under test, or repeat the tests manually. An automation test case should collect as much information as needed to answer quickly what the problem was
Logging automatically writes all actions performed by the script into a log file. It includes the state of the application at the moment of the test (e.g. user name, new advertisement id, page screenshot) and test script internals (e.g. stack trace, element locators). Logging is useful for detailed analysis of application and script behaviour.
A test result model is used for general analysis of a large number of failed test cases. The simplest result model is 'pass' and 'fail' statuses of a test case. For a deep and effective understanding of failure reasons, extended concepts should be used, such as:
• error reason
• requirement link
• dependent functionality
• execution time
Extended result concepts allow grouping test cases with the same failure reason, taking blocked test cases into account, building a traceability matrix, etc.
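An extended result model can be sketched as a small record type plus a grouping helper; the field names mirror the list above, and the sample data is invented for illustration:

```python
# Sketch of an extended result model; field names mirror the list above.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    status: str            # "pass" / "fail" / "blocked"
    error_reason: str = ""
    requirement: str = ""  # requirement link
    duration_s: float = 0.0

def group_by_error(results):
    """Group failed test cases by their error reason."""
    groups = defaultdict(list)
    for r in results:
        if r.status == "fail":
            groups[r.error_reason].append(r.name)
    return dict(groups)

results = [
    TestResult("login_ok", "pass", duration_s=1.2),
    TestResult("login_bad_pw", "fail", "timeout", "REQ-12", 30.0),
    TestResult("search", "fail", "timeout", "REQ-20", 30.0),
    TestResult("checkout", "blocked", requirement="REQ-31"),
]
print(group_by_error(results))  # {'timeout': ['login_bad_pw', 'search']}
```

The same records support the other analyses the slide mentions: filtering on `status == "blocked"`, or joining on `requirement` to build a traceability matrix.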
Script Design and Development
Ways of test scripts development
Record/Play
• performed by QA, no development skills
• no abstraction layer for reporting and code reuse
Visual Oriented Tools
• performed by QA or QA Automation, minimum development skills
• predefined visualized abstractions
• predefined reporting model
Language Oriented Tools
• performed by QA Automation or QA + Developer co-operation
• development skills required
• extendable abstraction layer
• extendable reporting model
QA Automation Evolution
[Diagram labels: xUnit frameworks; Test-driven development; Continuous Integration process and tools; Automated Testing — Unit testing, Integration testing, System testing; GUI drivers (e.g. Selenium RC), automation-friendly UI libraries; Programming; Quality Assurance; Record/Play; Visual Development; QA & Developers co-operation]