Test Process Essentials. Riitta Viitamäki, 14.4.2004. [email protected]
TRANSCRIPT
Principles

What is a bug?
• Error: a human action that produces an incorrect result
• Fault: a manifestation of an error in software
– also known as a defect or bug
– if executed, a fault may cause a failure
• Failure: a deviation of the software from its expected delivery or service

A failure is an event; a fault is a state of the software, caused by an error.
Why do faults occur in software?
• Software is written by human beings
– who know something, but not everything
– who have skills, but are not perfect
– who do make mistakes (errors)
• …under increasing pressure to deliver to strict deadlines
– no time to check, but the assumptions may be wrong
– systems may be incomplete
• If you have ever written software…
Testing and quality
• Testing measures software quality (it doesn't make it!)
• Testing can find faults; when they are removed, software quality (and reliability?) is improved
• What does testing test?
– System function, correctness of operation
– Non-functional qualities: reliability, usability, maintainability, reusability, testability, performance under normal and stress load, installability, etc.
Other factors that influence testing
• Contractual requirements
• Legal requirements
• Industry-specific requirements
– GSM specs and standards
– Air traffic control

It is difficult to determine how much testing is enough, but it is not impossible.
Test planning – different levels
• Test policy (company level)
• Test strategy (company level)
• High-level test plan (one for each project, IEEE 829)
• Detailed test plan for each test stage within a project (IEEE 829), for example component, system, security, and regression test plans
The test process
• Planning – detailed level
• Specification
• Execution
• Recording
• Check completion
Test planning – detailed level
• How the test strategy and project test plan apply to the software under test (SUT)
• Document in the plan any exceptions to the test strategy – for example, only one test design technique is needed for this functional area, because it is less critical
• List other software needed for the tests, and the environment details (start ordering the software, licenses and hardware)
• Set test completion criteria
Specification
• Identify conditions: ”what” is to be tested, and prioritise the conditions
• Design: ”how” the ”what” is to be tested (test cases with the actual steps, so that ”anybody can execute the tests in the same way”)
• Build: implement the tests (define data, record the scripts)
[Slide diagram: planning → specification → execution → recording → check completion]
Identify test conditions
• List the conditions that we would like to test
– use the test design techniques specified in the test plan (EP, BVA)
– there may be many conditions for each system function or attribute
• Prioritise the test conditions
– we can't test everything
– there is never enough time to do all the testing you would like
– so what testing should you do?
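As an illustration of the two techniques named above, here is a minimal Python sketch of deriving test conditions with equivalence partitioning (EP) and boundary value analysis (BVA). The age rule and all values are invented for the example:

```python
# Hypothetical rule under test: "age must be between 18 and 65 inclusive".
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# EP: one representative value per partition (invalid-low, valid, invalid-high).
ep_conditions = {
    "invalid below range": (10, False),
    "valid within range": (30, True),
    "invalid above range": (70, False),
}

# BVA: values on and just outside each boundary.
bva_conditions = {
    "just below lower bound": (17, False),
    "on lower bound": (18, True),
    "on upper bound": (65, True),
    "just above upper bound": (66, False),
}

for name, (value, expected) in {**ep_conditions, **bva_conditions}.items():
    status = "PASS" if is_valid_age(value) == expected else "FAIL"
    print(f"{status}: {name} (age={value})")
```

Each entry is one test condition with its predicted result; the seven conditions cover all three partitions and all four boundary values of the rule.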
Design test cases
• Design test input and test data
– each test exercises one or more test conditions
• Determine expected results
– predict the outcome of each test case: what is output, what is changed, what is not changed
• Design sets of tests
– different test sets for different objectives, such as regression, building confidence, and finding faults
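A sketch of what such designed cases might look like as data; the case IDs, condition IDs, inputs, and objective groupings are all hypothetical:

```python
# Each designed case exercises one or more test conditions and carries a
# predicted expected result, determined before execution.
test_cases = [
    {"id": "TC-1", "conditions": ["C1"],       "input": 17, "expected": False},
    {"id": "TC-2", "conditions": ["C1", "C2"], "input": 18, "expected": True},
    {"id": "TC-3", "conditions": ["C3"],       "input": 66, "expected": False},
]

# Different test sets for different objectives.
test_sets = {
    "regression":    ["TC-1", "TC-2", "TC-3"],  # rerun everything
    "confidence":    ["TC-2"],                  # show normal use works
    "fault_finding": ["TC-1", "TC-3"],          # probe the boundaries
}

# Sanity check: every set references only designed cases.
ids = {c["id"] for c in test_cases}
assert all(set(s) <= ids for s in test_sets.values())
print("every set references only designed cases")
```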
Build (implement) test cases
• Prepare test scripts
– the less system knowledge the tester has, the more detailed the scripts have to be
– scripts for tools have to specify every detail
• Prepare test data
– data that must exist in files and databases at the start of the tests
• Prepare expected results
– these should be defined before the test is executed; with exact values it is easier and quicker to compare actual and expected results
– what should be changed in the database table, and what should not
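The build step above could look like this in a minimal, self-contained sketch (the account "database" and the deposit function are invented for the example). Note how the expected result pins down both what changes and what must stay the same:

```python
# Test data that must exist at the start of the test.
initial_db = {"account_42": {"balance": 100, "owner": "Alice"}}

# Hypothetical function under test: deposit into an account.
def deposit(db, account, amount):
    db[account]["balance"] += amount

# Expected results, defined before execution: the balance changes to 150,
# and the owner must NOT change.
expected_db = {"account_42": {"balance": 150, "owner": "Alice"}}

db = {k: dict(v) for k, v in initial_db.items()}  # fresh copy for this run
deposit(db, "account_42", 50)                     # scripted step

assert db == expected_db, f"actual {db} != expected {expected_db}"
print("test passed: balance changed to 150, owner unchanged")
```

Because the expected state is an exact value, the comparison is a single equality check rather than a manual inspection.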
Execution
• Execute the prescribed test cases
– most important ones first
– you would not execute all test cases if:
• testing only fault fixes
• too many faults were found by the early test cases
• under time pressure
• Can be performed manually or automated
• If the actual results do not match the expected ones: try to repeat, analyse, and log the incident
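A sketch of an automated version of this loop (the test cases here are trivial stand-ins, and IDs are assumed to encode priority): run in priority order, repeat a mismatching case once to rule out a flaky run, then log an incident:

```python
def run_case(case):
    """Hypothetical test runner: returns the actual result of one case."""
    return case["func"](*case["inputs"])

incidents = []
test_cases = [
    {"id": "TC-1", "func": abs, "inputs": (-5,),  "expected": 5},
    {"id": "TC-2", "func": max, "inputs": (2, 3), "expected": 3},
]

for case in sorted(test_cases, key=lambda c: c["id"]):  # priority order
    actual = run_case(case)
    if actual != case["expected"]:
        actual = run_case(case)  # try to repeat before logging
        if actual != case["expected"]:
            incidents.append({"case": case["id"], "actual": actual,
                              "expected": case["expected"]})

print(f"{len(incidents)} incident(s) logged")
```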
Recording 1/2
• The test record contains the identities and versions of:
– the software under test
– the test specifications
• Follow the plan
– mark off progress on the test script
– document the actual outcomes of the test
– capture any other ideas you have for new test cases
– note that these records are used to establish that all test activities have been carried out as specified
Recording 2/2
• Compare the actual outcome with the expected outcome. Log discrepancies accordingly:
– software fault
– test fault (expected results were wrong, or steps were missing)
– environment or version fault
– test run incorrectly
• Log the coverage levels achieved (compare with the test completion criteria)
• After a fault has been fixed, repeat the required test activities (execute, design, plan)
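One possible shape for such a record entry, covering the items the slides list; all field names, versions, and values are hypothetical:

```python
# A test record entry: identities/versions, outcomes, and a discrepancy
# classification chosen from the four categories above.
record = {
    "sut_version": "1.2.3",          # software under test
    "test_spec_version": "spec-v7",  # test specification
    "case_id": "TC-1",
    "expected": 5,
    "actual": 4,
}

# Classify the discrepancy (here assumed to be a software fault; the other
# options are: test fault, environment/version fault, test run incorrectly).
if record["actual"] != record["expected"]:
    record["discrepancy"] = "software fault"

print(record["discrepancy"])
```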
Check test completion
• The completion criteria were specified in the test plan.
• If not met, need to repeat test activities, for example design or execute more tests
Completion criteria
• Completion or exit criteria apply to all levels of testing – to determine when to stop
– Coverage, using a measurement technique
• branch coverage for component testing
• user requirements (most important?) covered for acceptance testing
• most frequently used transactions covered for system testing
– Faults found vs. expected: too many serious faults found in a business-critical area means you cannot stop yet
– Cost or time
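A coverage-based criterion can be checked mechanically. A sketch with invented numbers:

```python
# Completion criterion from the (hypothetical) test plan:
# e.g. 90% branch coverage for component testing.
COMPLETION_CRITERION = 0.90

covered_branches = 45   # measured by a coverage tool
total_branches = 60

coverage = covered_branches / total_branches
if coverage >= COMPLETION_CRITERION:
    print("completion criterion met: stop testing")
else:
    # Loop back in the process: design and execute more tests.
    print(f"coverage {coverage:.0%} below {COMPLETION_CRITERION:.0%}: "
          "design/execute more tests")
```

With these numbers, coverage is 75%, the criterion is not met, and the process loops back to specification and execution.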
True or false? The purpose of testing is to prove that the system works.
Neither true nor false, because…
• One purpose of testing is to show that the system is working.
• For testers in development, another major purpose is to try to prove that the system is not working.
• No amount of testing can prove that the system has no bugs.
• Testing can show the presence of bugs but not their absence.
True or false… Testing can only start when coding is ready.
False, because…
• Test execution cannot start until there is code to run a given test, but testing is far more than just running tests:
– setting policy at the highest level,
– planning the test strategy and test activities for a project,
– deciding what tests will be performed,
– designing individual tests from the specifications they are based on,
– assembling the right data and the environment,
– describing the procedures to be followed when running tests;
…and none of these require code.
True or false… Programmers should test their own work.
Both true and false, because…
• True, because programmers should test their own code.
• False, because they should also do other testing, such as
– integrating with other programmers' modules,
– being involved in system-level testing and user acceptance testing.
So little time, so much to test…
• Test time will always be limited
• Use risk to determine
– what to test first
– what to test most
– how thoroughly to test each item
– what not to test this time
• Use risk to allocate the time available for testing by prioritising testing…
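A common way to operationalise this is risk = likelihood × impact. A sketch with invented test areas and scores:

```python
# Score each test area (all names and scores are hypothetical),
# run the riskiest areas first, and cut from the bottom when time runs out.
tests = [
    {"id": "login",    "likelihood": 3, "impact": 5},
    {"id": "reports",  "likelihood": 2, "impact": 2},
    {"id": "payments", "likelihood": 4, "impact": 5},
    {"id": "help",     "likelihood": 1, "impact": 1},
]

for t in tests:
    t["risk"] = t["likelihood"] * t["impact"]

prioritised = sorted(tests, key=lambda t: t["risk"], reverse=True)

time_for = 3  # only time for three test areas this round
to_run = [t["id"] for t in prioritised[:time_for]]
skipped = [t["id"] for t in prioritised[time_for:]]
print("run:", to_run)              # riskiest first
print("skip this time:", skipped)  # lowest-risk areas are dropped
```

Whenever testing stops, the work already done was the highest-risk testing that fitted in the time available, which is exactly the principle below.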
Most important principle
Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.