An Overview of Software Testing
Posted on 13-Jul-2015
Software Quality Assurance – A general walkthrough
• What is it?
• What are its objectives?
• What do QAs do?
Content
• Essence
• Terminology
• Classification
– Unit, System …
– Black box, White box
Definition
• Glen Myers
– Testing is the process of executing a program with the intent of finding errors
Objective explained
• Paul Jorgensen
– Testing is obviously concerned with errors, faults, failures and incidents. A test is the act of exercising software with test cases with the objective of:
• Finding failures
• Demonstrating correct execution
A Testing Life Cycle
• Requirement Specs → Design → Coding → Testing → Fault Classification → Fault Isolation → Fault Resolution → Fix
• Errors made during specification, design and coding become faults in the software; during testing, a fault surfaces as an incident
Verification versus Validation
• Verification is concerned with phase containment of errors
• Validation is concerned with the final product being error free
Relationship – program behaviors
• Program behaviors form two overlapping sets: Specified (expected) behavior and Programmed (observed) behavior
• Specified but not programmed behavior is a fault of omission
• Programmed but not specified behavior is a fault of commission
• The overlap of the two sets is the correct portion
Classification of Test
• There are two levels of classification
– One distinguishes at granularity level
• Unit level
• System level
• Integration level
– The other classification (mostly for unit level) is based on methodology
• Black box (Functional) testing
• White box (Structural) testing
Relationship – Testing wrt Behavior
• Three overlapping sets: Specified (expected) behavior, Programmed (observed) behavior, and Test cases (verified behavior)
• The Venn diagram numbers the resulting regions 1–8: region 1 is the three-way intersection, regions 5, 6 and 7 are the portions unique to each set, and region 8 lies outside all three
Cont…
• 2, 5 – specified behaviors that are not tested
• 1, 4 – specified behaviors that are tested
• 3, 7 – test cases corresponding to unspecified behavior
Cont…
• 2, 6 – programmed behaviors that are not tested
• 1, 3 – programmed behaviors that are tested
• 4, 7 – test cases corresponding to unprogrammed behaviors
Inferences
• If there are specified behaviors for which there are no test cases, the testing is incomplete
• If there are test cases that correspond to unspecified behaviors
– Either such test cases are unwarranted
– Or the specification is deficient (which also implies that testers should participate in specification and design reviews)
Test methodologies
• Functional (Black box) inspects specified behavior
• Structural (White box) inspects programmed behavior
When to use what
• A few guidelines are available
• A logical approach could be:
– Prepare functional test cases as part of the specification. However, they can be used only after the unit and/or system is available.
– Preparation of structural test cases can be part of the implementation/coding phase.
– Unit, integration and system testing are performed in that order.
Unit testing – essence
• Applicable to modular design
– Unit testing inspects individual modules
• Locates errors in a smaller region
– In an integrated system, it may not be easy to determine which module caused a fault
– Reduces debugging effort
Test cases and Test suites
• A test case is a triplet [I, S, O] where
– I is the input data
– S is the state of the system at which the data will be input
– O is the expected output
• A test suite is the set of all test cases
• Test cases are not randomly selected; they too need to be designed
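The [I, S, O] triplet above can be sketched as a small data structure. This is a hypothetical illustration (the `TestCase` class, `run_suite` helper, and `withdraw` unit are invented for the example, not taken from the slides):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    input_data: Any        # I: the input fed to the unit under test
    state: dict            # S: the system state at which the input is applied
    expected_output: Any   # O: the expected output

def run_suite(unit: Callable, suite: list) -> list:
    """Run every test case in the suite; return the failing cases."""
    failures = []
    for tc in suite:
        actual = unit(tc.input_data, tc.state)
        if actual != tc.expected_output:
            failures.append((tc, actual))
    return failures

# Hypothetical unit under test: a withdrawal whose result depends on state.
def withdraw(amount, state):
    return "ok" if amount <= state["balance"] else "insufficient funds"

suite = [
    TestCase(50, {"balance": 100}, "ok"),
    TestCase(200, {"balance": 100}, "insufficient funds"),
]
print(run_suite(withdraw, suite))  # an empty list: no failures
```

Note that the same input (I) can yield different expected outputs (O) under different states (S), which is why the state belongs in the triplet.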
Need for designing test cases
• Almost every non-trivial system has an extremely large input data domain, making exhaustive testing impractical
• If selected randomly, a test case may lose significance, since it may expose an error already detected by some other test case
Time for an exercise
• Give me all possible test cases for this object:
Black box testing
• Equivalence class partitioning
• Boundary value analysis
• Comparison testing
Equivalence Class Partitioning
• Input values to a program are partitioned into equivalence classes.
• Partitioning is done such that:
– the program behaves in similar ways for every input value belonging to an equivalence class.
Why define equivalence classes?
• Test the code with just one representative value from each equivalence class:
– as good as testing using any other value from that equivalence class.
Equivalence Class Partitioning
• How do you determine the equivalence classes?
– Examine the input data.
– Only a few general guidelines for determining the equivalence classes can be given.
Equivalence Class Partitioning
• If the input data to the program is specified by a range of values:
– e.g. numbers between 1 and 5000
– one valid and two invalid equivalence classes are defined
Equivalence Class Partitioning
• If the input is an enumerated set of values:
– e.g. {a, b, c}
– one equivalence class for valid input values and another equivalence class for invalid input values should be defined.
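For an enumerated input set, one representative per class suffices: one valid member and one invalid value. A minimal sketch, assuming a hypothetical `handle` function as the unit under test:

```python
# Enumerated input set {a, b, c}: two equivalence classes,
# valid (members of the set) and invalid (everything else).
VALID = {"a", "b", "c"}

def handle(ch: str) -> str:
    # Hypothetical unit under test: accepts only values in the enumerated set.
    if ch not in VALID:
        raise ValueError(f"invalid input: {ch!r}")
    return f"processed {ch}"

# One representative value per equivalence class:
print(handle("b"))        # valid class
try:
    handle("z")           # invalid class: expected to be rejected
except ValueError as e:
    print(e)
```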
Example
• A program reads an input value in the range of 1 to 5000:
– computes the square root of the input number
SQRT
Example (cont.)
• There are three equivalence classes:
– the set of integers smaller than 1,
– the set of integers in the range of 1 to 5000,
– the set of integers larger than 5000.
Example (cont.)
• The test suite must include:
– representatives from each of the three equivalence classes
– a possible test suite is: {-5, 500, 6000}
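The SQRT example can be sketched as follows. The implementation of `sqrt_in_range` is an assumption made for illustration; the slides only specify its contract (integers in [1, 5000]):

```python
import math

def sqrt_in_range(n: int) -> float:
    # Hypothetical implementation of the SQRT unit from the slides:
    # accepts integers in [1, 5000], rejects everything else.
    if not 1 <= n <= 5000:
        raise ValueError(f"{n} is outside the valid range [1, 5000]")
    return math.sqrt(n)

# One representative per equivalence class: {-5, 500, 6000}.
for n in (-5, 500, 6000):
    try:
        print(n, "->", sqrt_in_range(n))
    except ValueError as e:
        print(n, "->", e)
```

Only 500 should succeed; -5 and 6000 exercise the two invalid classes.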
Boundary Value Analysis
• Some typical programming errors occur:
– at the boundaries of equivalence classes
– might be purely due to psychological factors.
• Programmers often fail to see:
– special processing required at the boundaries of equivalence classes.
Boundary Value Analysis
• Programmers may improperly use < instead of <=
• Boundary value analysis:
– select test cases at the boundaries of the different equivalence classes.
Example
• For a function that computes the square root of an integer in the range of 1 to 5000:
– test cases must include the values {0, 1, 5000, 5001}.
• Acceptance testing
– Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
• Alpha testing
– Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
• Back-to-back testing
– Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.
• Beta testing
– Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Continued…
• Black-box testing
– Testing, either functional or non-functional, without reference to the internal structure of the component or system.
• Boundary value
– An input value or output value which is on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
• Boundary value analysis
– A black box test design technique in which test cases are designed based on boundary values.
• Branch testing
– A white box test design technique in which test cases are designed to execute branches.
• Defect
– A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Continued…
• Functional testing
– Testing based on an analysis of the specification of the functionality of a component or system.
• Integration testing
– Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
• Load testing
– A test type concerned with measuring the behavior of a component or system under increasing load, e.g. number of parallel users and/or number of transactions, to determine what load can be handled by the component or system.
• Monkey testing
– Testing by means of random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is meant to be used.
• Recoverability testing
– The process of testing to determine the recoverability of a software product.
• Regression testing
– Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
Continued…
• Severity
– The degree of impact that a defect has on the development or operation of a component or system.
• Smoke test
– A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
• Test automation
– The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.
• Test case specification
– A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
• Test design specification
– A document specifying the test conditions (coverage items) for a test item and the detailed test approach, and identifying the associated high-level test cases.
• Test environment
– An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
• Test harness
– A test environment comprising the stubs and drivers needed to execute a test.
• Test log
– A chronological record of relevant details about the execution of tests.
Questions?