12/01/2015
Software Testing Techniques
Lecture 16
Software Engineering
Software Testing
Testing is the process of exercising a
program with the specific intent of finding
errors prior to delivery to the end user.
Testing is the one step in the software process that
could be viewed psychologically as destructive
rather than constructive
Software Testing
Some commonly used terms associated with testing are:
Failure: This is a manifestation of an error (or defect or bug). However, the mere presence of an error does not necessarily lead to a failure.
Test case: This is the triplet [I,S,O], where I is the data input to the system, S is the state of the system at which the data is input, and O is the expected output of the system.
Test suite: This is the set of all test cases with which a given software product is to be tested.
Aim of Software Testing
The aim of the testing process is to identify
all defects existing in a software product.
However, for most practical systems, even
after satisfactorily carrying out the testing
phase, it is not possible to guarantee that the
software is error free.
What Testing Shows
errors
requirements conformance
performance
an indication of quality
Who Tests the Software?
Developer:
understands the system,
but will test "gently",
and is driven by "delivery"
Independent tester:
must learn about the system,
but will attempt to break it,
and is driven by "quality"
Testing Objectives
1) Testing is a process of executing a program with an
intent of finding an error
2) A good test case is one that has a high probability
of finding an as-yet-undiscovered error
3) A successful test is one that uncovers an as-yet-
undiscovered error
Objective: To design tests that systematically uncover
different classes of errors, and to do so with a minimum
amount of time and effort
Testing Principles
All tests should be traceable to customer requirements
Tests should be planned long before testing begins
The Pareto principle applies to software testing
Testing should begin “in the small” and progress toward
testing “in the large”
Exhaustive Testing is not possible
To be most effective, testing should be conducted by an
independent third party
Attributes of a Good Test
A good test has a high probability of finding an error
A good test is not redundant
A good test should be “best of breed”
A good test should be neither too simple nor too
complex
Exhaustive Testing
(Figure: a flow graph containing a loop that may execute up to 20 times)
There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
Selective Testing
(Figure: the same flow graph, with a single selected path highlighted)
What are the steps?
Software is tested from two different perspectives
Internal program logic is exercised using “white box” test case design techniques
Software requirements are exercised using “black box” test case design techniques.
… cont’d
There are two ways of testing a product
Black-Box testing
tests are conducted at the software interface
Checks whether inputs, outputs and functions are
properly working
It pays little regard to the internal logical structure
White-Box testing
Closely examines the procedural details
Logical paths are thoroughly tested
Testing of Engineered Product
Black-box testing: Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.
White-box testing: Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, that internal operations are performed according to specifications and that all internal components have been adequately exercised.
White-Box Testing
White-Box Testing (or Glass Box Testing)
... our goal is to ensure that all
statements and conditions have been executed at least once ...
White Box Testing
Examines the program structure and derives test cases from the program logic
Also known as:
Glass box
Structural
Clear box
Open box
1) Guarantee that all independent paths within a module
have been exercised at least once
2) Exercise all logical decisions on their true and false
sides
3) Execute all loops at their boundaries and within their
operational bounds
4) Exercise internal data structures to ensure their
validity
Why Cover?
logic errors and incorrect assumptions
are inversely proportional to a path's
execution probability
we often believe that a path is not
likely to be executed; in fact, reality is
often counterintuitive
typographical errors are random; it's
likely that untested paths will contain
some
White Box Testing
Fig: Stronger and complementary testing strategies
Statement coverage
Example: Consider Euclid's GCD computation algorithm:

    int compute_gcd(int x, int y)
    {
    1    while (x != y) {
    2        if (x > y)
    3            x = x - y;
    4        else y = y - x;
    5    }
    6    return x;
    }
By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we can exercise the program such that all statements are executed at least once.
Branch coverage
Test cases are designed so that every branch condition (the true and the false outcome of each decision) is evaluated at least once. Branch coverage is a stronger criterion than statement coverage.
Black-Box Testing
(Figure: input events enter the system, outputs are produced, and behavior is checked against the requirements)
Black Box Testing
The internal working is taken as perfect
Focus is on the interface
The internal working of the software being tested is not known to the tester
Checks conformance to requirements
Also known as Functional Testing
Why White Box ?
Why use white-box testing when black-box testing has been done? Because of logic errors and incorrect assumptions:
assumptions about execution paths may be incorrect and so lead to design errors. White-box testing can find these errors.
Verification and Validation
• Verification – are we building the product right ?
– refers to the set of activities that ensure that
software correctly implements a specific function.
• Validation – are we building the right product?
– refers to the set of activities that ensure that the
software that has been built is traceable to
customer requirements.
Verification
“Are we building the product right?”
Validation
“Are we building the right product?”
Testing Strategy
(Figure: the testing strategy proceeds through unit test, integration test, validation test, and system test)
Unit Testing
(Figure: the software engineer applies test cases to the module to be tested and examines the results)
Unit Testing
It is a verification effort on the smallest unit of the
software design – the software component or module
The unit test is white-box oriented
The steps can be conducted in parallel for multiple
modules
Unit Testing
What the unit test exercises:
interface
local data structures
boundary conditions
independent paths
error handling paths
Unit Test Procedures
Unit testing is normally considered an adjunct to the
coding step
Because a component is not a stand-alone program, driver and/or
stub software must be developed
Driver:
A driver is nothing more than a "main program" that
accepts test data,
passes it to the component to be tested,
and prints relevant results
Stub:
A stub serves to replace modules that are
subordinate to (called by) the component
to be tested.
It uses the subordinate module's interface,
may do minimal data manipulation,
prints verification of entry,
and returns control to the module
undergoing testing
Unit Testing Environment
(Figure: a driver invokes the module to be tested, stubs stand in for its subordinate modules, and test cases are applied to produce results; the unit test checks the interface, local data structures, boundary conditions, independent paths, and error handling paths)
Comments
Drivers and stubs represent overhead
Both must be written but are not delivered with
the final product
If drivers and stubs are kept simple, actual overhead is
relatively low
Unit testing is simplified when a component with ‘high
cohesion’ is designed
Integration Testing Strategies
Options:
• The big bang approach – slap everything together,
and hope it works. If not, debug it.
• An incremental construction strategy – add modules
one at a time, or in small groups, and debug problems
incrementally.
Integration Testing
Integration testing is a systematic technique for constructing
the program structure while at the same time conducting tests
to uncover errors associated with interfacing.
Objective: Take unit tested components and build a program
structure that has been dictated by design
Strategies:
Top-down integration
Bottom-up integration
Incremental integration
Top-down Integration
It is an incremental approach to construction of program
structure
Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module
(main program)
Modules subordinate (and ultimately subordinate) to the
main control module are incorporated into the structure in
either a depth-first or breadth-first manner
Top-down Integration
The top module is tested with stubs
Stubs are replaced one at a time, "depth first"
As new modules are integrated, some subset of tests is re-run
(Figure: a module hierarchy A, B, C, D, E, F, G used to illustrate the integration order)
Top Down Integration: "Depth first"
The top module is tested with stubs
Stubs are replaced one at a time, depth first
As new modules are integrated, some subset of tests is re-run
(Figure: the same module hierarchy, integrated down one control path at a time)
Top Down Integration: "Breadth first"
Incorporates all components directly subordinate at each level, moving across the hierarchy before descending
(Figure: the same module hierarchy, integrated one level at a time)
Steps in Top-down Integration
1) The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module
2) Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual components
3) Tests are conducted as each component is integrated
4) On completion of each set of tests, another stub is replaced with the real component
5) Regression testing may be conducted to ensure that new errors have not been introduced
Pros
Major decision points are verified early in the test process
Using depth-first integration, a complete function of the software may be implemented and demonstrated
Cons
Stubs are required, which adds overhead
Problems occur when processing at low levels in the hierarchy is required to test upper levels, because stubs replace the low-level modules at the beginning of top-down testing
There is no significant data flow upward in the program structure until the stubs are replaced by the actual components
Bottom-up Integration
It begins construction and testing with atomic modules
(i.e., components at the lowest levels in the program
structure)
processing required for components subordinate to a given
level is always available
need for stubs is eliminated
Bottom-Up Integration
Drivers are replaced one at a time, "depth first"
Worker modules are grouped into builds and integrated
(Figure: a module hierarchy in which the lowest-level worker modules form a cluster)
Bottom Up Integration
Worker modules are grouped into clusters
(Figure: worker modules grouped into Cluster 1 and Cluster 2, each exercised through its own driver, D1 and D2)
Steps in Bottom-up Integration
1) Low-level modules are combined into clusters that
perform a specific software function
2) A driver (a control program for testing) is written to
coordinate test case input and output
3) The cluster is tested
4) Drivers are removed and clusters are combined moving
upwards in the program structure
Pros
Processing required for components subordinate to a given level is
always available
The need for stubs is eliminated
As integration moves upward, the need for separate test
drivers lessens
Cons
“the program as an entity does not exist until the last module
is added”
Sandwich Testing
A combined approach that uses top-down tests for the upper levels of the program structure, coupled with bottom-up tests for the subordinate levels
Top modules are tested with stubs; worker modules are grouped into builds and integrated from below
(Figure: the module hierarchy, integrated top-down from the main control module while the lower-level cluster is integrated bottom-up)
High-Order Testing
• Validation Testing
• Verifies conformance with requirements
• Answers the question “Did we build the correct
product?”
• Alpha and Beta Testing
• Testing by the customer of a near-final version
of the product.
• System Testing
• Testing of the entire software system, focused
on end-to-end qualities.
Validation Testing
• Validation succeeds when the software under test functions in
a manner that can reasonably be expected by the customer.
• Validation is achieved through a series of black-box tests that
demonstrate conformity with requirements.
• The test plan should outline the classes of tests to be
conducted, and define specific test cases that will be used in an
attempt to uncover deviations from requirements.
• Deviations or errors discovered at this stage of a project can
rarely be corrected prior to scheduled delivery.
Alpha and Beta Testing
Alpha Test: the customer tests the software at the developer's site, with the developer present and reviewing the results
Beta Test: the customer tests the software at the customer's site, with the developer not present; the developer later reviews the reported problems
System Testing
System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based
system.
Although each test has a different purpose, all work to
verify that system elements have been properly integrated
and perform allocated functions
System Testing
• Recovery testing – forces the software to fail in a variety of
ways and verifies that recovery is properly performed.
• Security testing – attempts to verify that protection
mechanisms built into a system will in fact protect it from
improper access.
• Stress testing – executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume.
• Performance testing – tests run-time performance of
software within the context of an integrated system.
Acceptance, Alpha, and Beta Testing
Acceptance tests: carried out by customers to validate all requirements
Alpha tests: conducted at the developer's site by a customer, in a controlled environment
Beta tests: conducted at the customer's site by end users, in an environment not controllable by the developer
Black-Box Testing
Black-box Testing
Focuses on the functional requirements of the software
It is a complementary approach to white-box techniques
and uncovers a different class of errors
It is applied during later stages of testing
Black-Box
Finds Errors In:
Incorrect or missing functions
Interface errors
Data structures or external data base access
Performance errors
Initialization and termination errors
Black-Box Testing Methods
Graph-Based Testing
Equivalence Partitioning
Boundary Value Analysis
Comparison Testing
Equivalence Partitioning Method
Divides the input domain of a program into classes
of data from which test cases can be derived.
Test case design is based on an evaluation of
equivalence classes for an input condition.
An equivalence class represents a set of valid or
invalid states for an input condition.
Black-box Testing — Equivalence Partitioning
(Figure: invalid and valid input partitions feed the system, which produces outputs)
Input Condition & Equivalence Class
For a range, one valid and two invalid equivalence classes
are defined
For a specific value, one valid and two invalid equivalence
classes are defined
For a member of a set, one valid and one invalid equivalence
class are defined
For a Boolean, one valid and one invalid equivalence class
are defined
Black-box Testing — Equivalence Partitioning
Partition system inputs and outputs into equivalence sets based on specifications
One member of an equivalence set is as good a test case as any other
Guidelines for derivation of equivalence sets:
If a range v1 ... v2, then <v1, v1 ... v2, >v2
If a value v, then <v, v, >v
If a set S = { a, b, ..., x }, then S, any value not in S
If a Boolean b, then b = true, b = false
Example: a program is required to handle any integer between 1 and 25, so there are three equivalence sets: <1, 1 ... 25, >25
Boundary Value Analysis Method
Create test cases that exercise bounding values (the edges of each class)
Complements equivalence class testing
Derives test cases from the output domain as well as the input domain
Guidelines For Boundary Values
For a range A to B, create test cases for the values A and B, and for values just above and just below A and B
For a set of values, create test cases for the minimum and maximum values, and for values just above and just below them
The above guidelines also apply to output conditions
If internal program data structures have prescribed boundaries, create test cases for these boundaries
Black-box Testing — Boundary Value Analysis
Faults frequently exist at and on either side of the boundaries of equivalence sets
For a range v1 ... v2, test data should include v1 and v2, and values just above and just below v1 and v2
Example
Suppose that the range is 1 ... 25
Then test data are 0, 1, 2, 24, 25, 26
Equivalence Partitioning:
Sample Equivalence Classes
Valid data:
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)
Invalid data:
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
References:
Software Engineering: A Practitioner's Approach by Roger S. Pressman (6th Ed.)
Sections 14.3, 14.4
Software Engineering: A Practitioner's Approach by Roger S. Pressman (5th Ed.)
Chapter 17: 17.1 (17.1.1, 17.1.2, 17.1.3), 17.2, 17.3, 17.4 (17.4.1, 17.4.2, 17.4.3, 17.4.4), 17.6 (17.6.2, 17.6.3)