Transcript
Page 1: Integration testing(revisited)

INTEGRATION AND SYSTEM TESTING (CHAPTERS 12 & 13), CSci565

Spring 2009

H. Reza

Page 2: Integration testing(revisited)

OBJECTIVES

Integration testing of the Simple ATM (SATM); discuss integration testing strategies:

Top-down
Bottom-up
Decomposition-Based Integration Testing (DBIT)
Call-Graph-Based Integration Testing (CGBIT)
Path-Based Integration Testing (PBIT)


Page 3: Integration testing(revisited)


A SOFTWARE TESTING STRATEGY: SPIRAL MODEL

[Figure: spiral model of testing: system engineering pairs with system testing, requirements with validation testing, design with integration testing, and code with unit testing]

Page 4: Integration testing(revisited)

INTEGRATION TESTING:1

If all units/components work individually, do they work as a whole when we put them together? Not necessarily.

The problem is "putting them together", i.e., interfacing them.


Page 5: Integration testing(revisited)

PROBLEMS WITH INTERFACING

Integration faults are often traceable to incomplete or misunderstood interface specifications, or to mismatched assumptions about other components
Individually acceptable imprecision may be magnified to unacceptable levels
Global data structures can present problems
Inconsistent interpretation of parameters or values
E.g., mixed units (meters/yards) in the Martian lander
Violations of value domains, capacity, or size limits
...


Page 6: Integration testing(revisited)

INTEGRATION TESTING

Tests complete systems or subsystems composed of integrated components

Integration testing should be black-box testing, with tests derived from the specification

The main difficulty is localizing errors; incremental integration testing reduces this problem


Page 7: Integration testing(revisited)

APPROACHES TO INTEGRATION TESTING

Two major approaches:

Incremental approaches
Decomposition-based (tree) techniques: require stubs/drivers
Call-graph-based techniques: no stubs or drivers

Non-incremental approaches (big bang)


Page 8: Integration testing(revisited)

INCREMENTAL INTEGRATION TESTING


[Figure: incremental integration: test sequence 1 exercises modules A and B with tests T1-T3; test sequence 2 adds C and runs T1-T4; test sequence 3 adds D and runs T1-T5]

Page 9: Integration testing(revisited)

INCREMENTAL APPROACHES: TOP-DOWN

Top-down testing
Start with the high-level system and integrate from the top down, replacing individual components with stubs where appropriate
Depth-first or breadth-first traversal; no single best order
Test critical sections early
Early skeletal version using the I/O modules


Page 10: Integration testing(revisited)

STUBS

Stubs
A stub is a special module that simulates some functionality
Producing a stub can be a nontrivial task, because the code may have to simulate a very complicated task
E.g.:
Writing a stub for a database table search routine
Creating multiple versions of the same stub for various reasons
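As a sketch of the stub idea above: a stub standing in for a database table search routine can simply return canned records. This is a hypothetical example (the name `search_account_stub` and the records are invented, not SATM code):

```python
def search_account_stub(account_id):
    """Stub: simulates a database table search with canned records.

    The real routine would query a database; during top-down
    integration testing this stand-in is enough for the callers.
    """
    canned = {
        "1001": {"owner": "Alice", "balance": 250.00},
        "1002": {"owner": "Bob", "balance": 75.50},
    }
    # Return a canned record, or None for a missing account,
    # mimicking the real routine's contract.
    return canned.get(account_id)
```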


Page 11: Integration testing(revisited)

TOP-DOWN TESTING


Page 12: Integration testing(revisited)

TOP-DOWN: COMPLICATIONS

The most common complication occurs when processing at a low level of the hierarchy demands adequate testing of upper levels

To overcome this:
Either delay many tests until stubs are replaced with the actual modules (BAD)
Or develop stubs that perform limited functions simulating the actual module (GOOD)
Or integrate the software using the bottom-up approach

There can also be confusion about overlapping with design


Page 13: Integration testing(revisited)

INCREMENTAL TESTING: BOTTOM UP

Bottom-up testing
Integrate individual components in levels until the complete system is created


Page 14: Integration testing(revisited)

BOTTOM-UP APPROACH

Starts with the construction and testing of atomic modules; no stubs are needed
Low-level components are combined into clusters (or builds) that perform a specific sub-function
A driver (a control program for testing) is written
It contains hardcoded test input, calls the module being tested, and displays the results
The cluster is tested
Drivers are removed and clusters are combined, moving upward in the program structure
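The driver described above can be sketched in a few lines. The names are hypothetical (`validate_pin` stands in for a real low-level SATM unit); the shape is the point: hardcoded input, a call to the module under test, and a display of the results:

```python
def validate_pin(entered, expected):
    """Low-level module under test (a stand-in, not real SATM code)."""
    return entered == expected

def driver():
    """Driver: exercises validate_pin with hardcoded test input
    and displays each result."""
    cases = [("1234", "1234", True), ("0000", "1234", False)]
    ok = True
    for entered, expected, want in cases:
        got = validate_pin(entered, expected)
        print(f"validate_pin({entered!r}, {expected!r}) -> {got} (expected {want})")
        ok = ok and (got == want)
    return ok
```

Once the cluster passes, the driver is discarded and the cluster is combined upward, as the slide describes.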


Page 15: Integration testing(revisited)

BOTTOM-UP TESTING


Page 16: Integration testing(revisited)


[Figure: sample functional decomposition tree: root A with subordinate modules B through L]

Page 17: Integration testing(revisited)


[Figure: second state in the top-down integration: A and B are actual code; C, D, E, and F are stubs]

Page 18: Integration testing(revisited)


[Figure: intermediate state in the top-down integration: lower-level modules (B, D, F, I, J) have replaced some stubs; stubs remain for C, E, and H]

Page 19: Integration testing(revisited)

TOP-DOWN

Possible sequences of modules:
A, B, C, D, E, F, G, H, I, J, K, L
A, B, E, F, J, C, G, K, D, H, L, I
A, D, H, I, K, L, C, G, B, F, J, E
A, B, F, J, D, I, E, C, G, K, H, L

If parallel testing is allowed, other alternatives are possible
After A has been tested, one programmer could take A and test the combination A-B, while another programmer tests A-C


Page 20: Integration testing(revisited)

GUIDELINE FOR INTEGRATION TESTING

Integrate first the components that implement the most frequently used functionality

Perform regression testing for existing features

Perform progression testing for new features


Page 21: Integration testing(revisited)

NON-INCREMENTAL

Big bang
Imposes no order (GOOD)
Tests all the units (modules) at once
Very easy, but difficult to localize the source of errors


Page 22: Integration testing(revisited)

INTEGRATION TEST DOCUMENT

The overall plan for integration of the system under construction must be documented in a test specification

The test plan should describe the overall strategy for integration

Example phases:
User interaction (menus, buttons, forms, display presentation)
Data manipulation and analysis
Display processing and generation (RT-2D, RT-3D, etc.)
Database management
Logic (?)


Page 23: Integration testing(revisited)

INTEGRATION: CRITERIA

Integration is judged by the following criteria:

Interface integrity
Internal and external interfaces are tested as each module (or cluster) is incorporated

Functional validity
Tests to uncover functional errors

Information validity
Tests to uncover errors related to local/global data

Performance (quality)
Tests designed to verify performance bounds established during software design


Page 24: Integration testing(revisited)

TOP-DOWN VS. BOTTOM-UP

Architectural validation
Top-down integration testing is better at discovering errors in the system architecture

System demonstration
Top-down integration testing allows a limited demonstration at an early stage in the development

Test implementation
Often easier with bottom-up integration testing


Page 25: Integration testing(revisited)

THE WATERFALL LIFE CYCLE


[Figure: the waterfall life cycle: requirements specification, preliminary design, detailed design, coding, unit testing, integration testing, system testing]

Page 26: Integration testing(revisited)

INTEGRATION TESTING: SOFTWARE ARCHITECTURE

Integration testing
How can the software architecture be used to conduct tests that uncover errors related to the interfaces?


Page 27: Integration testing(revisited)

FIG. 12.2: PRIMARY DESIGN (OR INFORMAL SOFTWARE ARCHITECTURE) OF THE ATM USING TREE-BASED DECOMPOSITION


[Figure 12.2: tree-based decomposition of the ATM: Manage Session at the top, with Terminal I/O, Conduct Transactions, Card Entry, PIN Entry, and Select Transaction below]

Page 28: Integration testing(revisited)

MORE ON PRIMARY DESIGN

How do we perform integration testing for a non-tree-based functional decomposition? E.g.:
Integration testing for OO systems
Integration testing for client/server systems
Integration testing for layered systems
...


Page 29: Integration testing(revisited)

SIMPLE ATM (SATM)

A simple ATM
Provides 15 screens for interaction
Includes 3 function buttons
Modeled in structured analysis:
Data model (ERD)
Functional model (DFD)
Behavioral model (STD)


Page 30: Integration testing(revisited)

FIGURE 12.7


Page 31: Integration testing(revisited)

FIGURE 12.8


Page 32: Integration testing(revisited)

FIGURE 12.9


Page 33: Integration testing(revisited)

FIGURE 12.10


Page 34: Integration testing(revisited)

FIGURE 12.11


Page 35: Integration testing(revisited)

FIGURE 12.12


Page 36: Integration testing(revisited)

FIGURE 12.13


Page 37: Integration testing(revisited)

DECOMPOSITION BASED STRATEGIES

Decomposition-based strategies: top-down, bottom-up, sandwich, big bang


Page 38: Integration testing(revisited)

FIGURE 12.14


Page 39: Integration testing(revisited)

FIGURE 13.1


Page 40: Integration testing(revisited)

DECOMPOSITION BASED TESTING:1

Discussion revolves around the tree-based decomposition and the order in which units are tested and combined: top-down, bottom-up, sandwich, or big bang

The focus is on the structural compatibility among interfaces


Page 41: Integration testing(revisited)

TEST SESSIONS

A test session refers to one set of tests for a specific configuration of actual code and stubs

The number of integration test sessions for a decomposition tree can be computed as:
Sessions = nodes - leaves + edges


Page 42: Integration testing(revisited)

DECOMPOSITION BASED TESTING: 2

For the SATM system there are 42 integration test sessions (i.e., 42 separate sets of integration test cases)

Top-down: (nodes - 1) stubs are needed; 32 stubs for SATM

Bottom-up: (nodes - leaves) drivers are needed; 10 drivers for SATM
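The session, stub, and driver counts can be checked with a few lines. This sketch assumes the decomposition is a tree (so edges = nodes - 1); the node and leaf counts of 33 and 23 are implied by the slides' figures (32 stubs, 10 drivers) and reproduce the 42 sessions:

```python
def decomposition_counts(nodes, leaves):
    """Decomposition-based integration counts from the slides' formulas."""
    edges = nodes - 1            # a tree with n nodes has n - 1 edges
    sessions = nodes - leaves + edges
    stubs = nodes - 1            # top-down integration
    drivers = nodes - leaves     # bottom-up integration
    return sessions, stubs, drivers

print(decomposition_counts(33, 23))  # -> (42, 32, 10), matching SATM
```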


Page 43: Integration testing(revisited)

DECOMPOSITION-BASED STRATEGIES: PROS AND CONS

Pros:
Intuitively clear and understandable
In case of faults, the most recently added units are the prime suspects
Progress can be tracked against the decomposition tree
Suggests breadth-first or depth-first traversals
Units are merged using the decomposition tree
Correct behavior follows from individually correct units and interfaces

Cons:
Stubs/drivers are a major development overhead


Page 44: Integration testing(revisited)

CALL GRAPH BASED INTEGRATION TESTING

Call graph
A directed graph
Nodes correspond to units
Edges correspond to calls
E.g., A -> B (i.e., A calls B)

Attempts to overcome the decomposition problem (purely structural)

Moves toward behavioral testing


Page 45: Integration testing(revisited)

CALL GRAPH (CG): APPROACHES

Two main approaches are based on the call graph:
Pair-wise integration
Neighborhood integration


Page 46: Integration testing(revisited)

FIGURE 13.2: SATM CALL GRAPH


Page 47: Integration testing(revisited)

TABLE 2: AM


Page 48: Integration testing(revisited)

PAIR-WISE INTEGRATION

The main idea is to eliminate the stub/driver overhead

Uses actual code by restricting a test session to a pair of units in the call graph
One integration test session for each edge in the call graph
40 edges means 40 integration test sessions for the SATM
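Pair-wise sessions can be enumerated directly from the call graph's edges. This sketch uses a small illustrative graph (not the actual SATM call graph, which has 40 edges):

```python
# Adjacency list: caller -> list of callees (illustrative, not SATM).
call_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": [],
}

# One pair-wise integration session per edge of the call graph.
pairwise_sessions = [(caller, callee)
                     for caller, callees in call_graph.items()
                     for callee in callees]
print(len(pairwise_sessions), pairwise_sessions)  # 5 edges -> 5 sessions
```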


Page 49: Integration testing(revisited)

PAIR-WISE INTEGRATION


- Uses actual code
- One integration test session for each edge
- 40 edges for SATM

Page 50: Integration testing(revisited)

NEIGHBORHOOD INTEGRATION

The neighborhood of a node refers to the nodes that are one edge away from the given node

The number of neighborhoods can be computed as:
Neighborhoods = nodes - sink nodes
Or: the number of interior nodes + X (X = 1 if there exist leaf nodes connected directly to the root node, otherwise X = 0)

This results in a drastic reduction in the number of integration test sessions
In the case of SATM: 11 vs. 40
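The neighborhood computation can be sketched on a small illustrative call graph (not the real SATM graph, where the count is 11 neighborhoods vs. 40 pair-wise sessions):

```python
# Adjacency list: caller -> list of callees (illustrative, not SATM).
call_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": [],
}

def neighborhood(graph, node):
    """All nodes one edge away from `node`: its callees plus its callers."""
    callees = set(graph[node])
    callers = {n for n, cs in graph.items() if node in cs}
    return callees | callers

# Sink nodes call no one; neighborhoods = nodes - sink nodes.
sinks = [n for n, cs in call_graph.items() if not cs]
num_neighborhoods = len(call_graph) - len(sinks)
print(num_neighborhoods)                        # 3 (interior nodes A, B, C)
print(sorted(neighborhood(call_graph, "C")))    # callers and callees of C
```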


Page 51: Integration testing(revisited)

NEIGHBORHOOD INTEGRATION


Page 52: Integration testing(revisited)

TABLE 3: SATM NEIGHBORHOODS


Page 53: Integration testing(revisited)

PROS AND CONS

Benefits (GOOD)
More behavioral than structural
Eliminates stub/driver overhead
Works well with incremental development methods such as builds and composition

Liabilities (BAD)
Fault isolation: e.g., a fault in one node appearing in several neighborhoods


Page 54: Integration testing(revisited)

PATH-BASED INTEGRATION TESTING

The hybrid approach (i.e., structural and behavioral) is an ideal one for integration testing

The focus is on the interactions among the units
Interfaces are structural; interactions are behavioral

In unit testing, some path of source statements is traversed
What happens when there is a call to another unit?
Ignore the single-entry/single-exit assumption
Use an exit followed by an entry
Suppress the call statement


Page 55: Integration testing(revisited)

NEW AND EXTENDED CONCEPTS

Source node (begin)
A statement fragment at which program execution begins or resumes
E.g., BEGIN

Sink node (end)
A statement fragment at which program execution terminates
E.g., the final END


Page 56: Integration testing(revisited)

MORE ON CONCEPTS

Module execution path (MEP)
A sequence of statements that begins with a source node and ends with a sink node, with no intervening sink nodes
The implication of this definition is that a program graph (PG) may have multiple source/sink nodes

Message
A programming-language mechanism by which one unit transfers control to another unit
E.g., subroutine invocations, procedure calls, function references


Page 57: Integration testing(revisited)

THE PATH-BASED INTEGRATION TESTING (DEFINITION)

MM-Path (definition)
An interleaved sequence of module execution paths (MEPs) and messages

An MM-Path can be used to describe sequences of module execution paths, including transfers of control among units using messages
It represents feasible execution paths that cross unit boundaries
It extends the program graph, where nodes = execution paths and edges = messages

Atomic system function (ASF)
An action that is observable at the system level in terms of port input and output events


Page 58: Integration testing(revisited)

FIGURE 13.3


Page 59: Integration testing(revisited)


There are seven module execution paths (MEPs):

Page 60: Integration testing(revisited)

MM-PATH GRAPH (DEFINITION)

Given a set of units, their MM-Path graph is the directed graph in which nodes are module execution paths and edges represent messages/returns from one unit to another

Supports composition of units
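The definition can be sketched as a small data structure. This is a hypothetical example (node names like `A:p1` are invented labels for a unit's execution paths, not drawn from the SATM figures):

```python
# MM-Path graph: nodes are module execution paths ("Unit:path"),
# edges are messages or returns between units.
mm_path_graph = {
    "A:p1": ["B:p1"],   # A's first path sends a message to unit B
    "B:p1": ["A:p2"],   # B's path returns control to A's second path
    "A:p2": [],
}

def is_mm_path(graph, sequence):
    """Check that a sequence of MEPs follows message/return edges."""
    return all(nxt in graph[cur]
               for cur, nxt in zip(sequence, sequence[1:]))

print(is_mm_path(mm_path_graph, ["A:p1", "B:p1", "A:p2"]))  # True
```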


Page 61: Integration testing(revisited)

FIGURE 13.4


Page 62: Integration testing(revisited)

FIGURE 13.5


Page 63: Integration testing(revisited)

PDL DESCRIPTION OF SATM


Page 64: Integration testing(revisited)


Page 65: Integration testing(revisited)


Page 66: Integration testing(revisited)

MORE ON MM-PATHS

MM-Path issues
How long is an MM-Path? What is its endpoint?

The following observable behaviors can be used as endpoints:

Event quiescence (event inactivity)
A system-level event; happens when the system is idle/waiting

Message quiescence (message inactivity)
A unit that sends no messages is reached

Data quiescence (data inactivity)
Happens when a sequence of processing generates stored data that is not immediately used
E.g., an account balance that is not used immediately


Page 67: Integration testing(revisited)

MM-PATH GUIDELINES

Points of quiescence are natural endpoints for an MM-Path
Atomic system functions (system behaviors) are considered an upper limit for MM-Paths
MM-Paths should not cross ASF boundaries


Page 68: Integration testing(revisited)

PROS AND CONS

A hybrid approach (GOOD)
Works equally well for software developed with the waterfall model (GOOD)
Testing is closely coupled with actual system behavior (GOOD)
Identification of MM-Paths is an effort, though it can be offset by the elimination of stubs/drivers (BAD)


Page 69: Integration testing(revisited)

SYSTEM TESTING

Closely related to everyday experience
The goal is to test the quality (not the specification)
Works with the notion of threads (or scenarios)
A thread is a sequence of events (or ASFs)

Provides a unifying view of our three levels of testing, e.g.:
A scenario of normal usage
A stimulus/response pair
A sequence of machine instructions
A sequence of ASFs
...

Identifying threads:
FSM (node/edge coverage metrics)
Top-down or bottom-up


Page 70: Integration testing(revisited)

SOFTWARE ARCHITECTURE & TESTING

Using the traditional approach:
Formalizing and automating the integration test stage is difficult
The selection of test cases for the subsystems stresses structure over behavior
The order in which the components are incrementally combined depends entirely on the adopted system decomposition

How can we test whether a design and implementation comply with the SA?

How can we specify an SA such that one or more properties can "easily" be tested or verified?

How can integration testing be planned and controlled based on the SA?


Page 71: Integration testing(revisited)

SA-BASED APPROACH

The SA is used for prediction of system-level quality

The approach belongs to the black-box techniques of state-transition testing, i.e.:
The system specification is modeled by an automaton
The generation of test cases is aimed at covering the arcs (i.e., the transitions) and the nodes (i.e., the states) of it

Create sub-graphs from the specification representing specific views of the system

Use these views as a basis for the definition of coverage criteria and testing strategies

Select test cases to cover these sub-graphs
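The node/arc coverage idea can be sketched on a small illustrative automaton. The states and events here are hypothetical, ATM-flavored examples, not a real SATM specification:

```python
# Transition function: (state, event) -> next state (illustrative).
transitions = {
    ("Idle", "card_inserted"): "AwaitPIN",
    ("AwaitPIN", "pin_ok"): "SelectTxn",
    ("AwaitPIN", "pin_bad"): "Idle",
    ("SelectTxn", "done"): "Idle",
}

def coverage(test_cases, start="Idle"):
    """Return the sets of states (nodes) and arcs (transitions)
    exercised by a list of event-sequence test cases."""
    states, arcs = {start}, set()
    for events in test_cases:
        state = start
        for ev in events:
            nxt = transitions[(state, ev)]
            arcs.add((state, ev, nxt))
            states.add(nxt)
            state = nxt
    return states, arcs

tests = [["card_inserted", "pin_bad"],
         ["card_inserted", "pin_ok", "done"]]
states, arcs = coverage(tests)
print(len(states), len(arcs))   # these two cases cover all 3 states, all 4 arcs
```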


Page 72: Integration testing(revisited)

THE ADVANTAGES OF USING THE SA TO DERIVE THE AUTOMATON

The advantages for state-transition testing are evident:
We have a formal description, and the automaton can be automatically derived
The SA is at a high level of abstraction, so the number of states in the automaton can be kept manageable
We can trace changes in the SA to the corresponding modifications in the automaton, allowing for the testing of new systems obtained after insertion or modification of a component


Page 73: Integration testing(revisited)

ABOUT SURVEY PAPER


Example of an abstract for a survey paper: "In this report (or paper), we survey a number of X-based representations/algorithms/tool support used to generate/identify/execute/etc. test cases."

Many problems with references and citations

Examples of citation:

