SOFTWARE TESTING STRATEGIES 2
CS 360
Lecture 16
BUILDING RELIABLE SYSTEMS: TWO PRINCIPLES
For a software system to be reliable:
Each stage of development must be done well, with incremental verification and testing.
Testing and correction do not ensure quality, but reliable systems are not possible without testing.
STATIC AND DYNAMIC VERIFICATION TESTING
Static verification testing:
Techniques of verification that do not include execution of the software.
May be manual or use computer tools.
Dynamic verification testing:
Testing the software with trial data.
Debugging to remove errors.
STATIC VERIFICATION TESTING
Reviews are a form of static verification testing carried out throughout the software development process.
REVIEWS
Reviews are a fundamental part of good software development. Team members review each other's work:
Can be applied to any stage of software development, but particularly valuable to review program design or code
Can be formal or informal
Preparation: The developer(s) provide information on the components they're building:
Models
Specifications and design
Code
Participants (should) study the materials in advance.
Meeting: The developer leads the reviewers through the materials, describing what each section does and encouraging questions.
THE REVIEW MEETING
A review is a structured meeting. Participants and their roles:
Developer(s): person(s) whose work is being reviewed
Moderator: ensures that the meeting moves ahead steadily
Scribe: records discussion in a constructive manner
Interested parties: other developers on the same project
Client: representatives of the client(s) who are knowledgeable about this part of the process
BENEFITS OF REVIEWS
Benefits:
Multiple people looking at design/code. Uncover mistakes, suggest improvements
Developers share expertise which helps with training
Incompatibilities between components can be identified
Gives developers an incentive to tidy loose ends
Helps scheduling and management control
SUCCESSFUL REVIEWS
To make a review a success:
Senior team members must show leadership.
Good reviews require good preparation by everybody.
Everybody must be helpful, not threatening.
Allow plenty of time and be prepared to continue on another day.
STATIC VERIFICATION TESTING: PAIR DESIGN AND PAIR PROGRAMMING
Concept: achieve benefits of review by shared development
Two people work together as a team on:
design and/or coding
testing and system integration
documentation and hand-over
Benefits include:
two people create better software with fewer mistakes
cross-training
Many software houses report excellent productivity
STATIC VERIFICATION TESTING: PROGRAM INSPECTIONS
Formal program reviews whose objective is to detect faults.
Code is read or reviewed line by line: 150 to 250 lines of code in a 2-hour meeting.
Use a checklist of common errors.
Requires team commitment and trained leaders.
So effective that it is claimed that it can replace unit testing
STATIC VERIFICATION TESTING: PROGRAM INSPECTIONS, COMMON ERRORS
Data faults: Initialization, constants, array bounds
Control faults: Conditions, loop termination, compound statements, case statements
Input/output faults: All inputs used, all outputs assigned a value
Interface faults: Parameter numbers, types, and order, structures and shared memory
Storage management faults: Modification of links, allocation and deallocation of memory
Exceptions: Possible errors, error handlers
STATIC VERIFICATION TESTING: ANALYSIS TOOLS
Program analyzers scan the code for possible errors and anomalies.
Control flow: Loops with multiple exit or entry points
Data use: Undeclared or uninitialized variables, unused variables, multiple assignments, array bounds
Interface faults: Parameter mismatches, non-use of a function's return value, uncalled procedures
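The kinds of anomalies such analyzers flag can be packed into one small (deliberately flawed) function. The function itself is invented for illustration; tools such as pyflakes or pylint would report the anomalies noted in the comments without running the code.

```python
def report(items):
    total = 0
    unused = 42         # data-use anomaly: variable assigned but never used
    for item in items:
        total += item
    total = sum(items)  # multiple assignment: the loop's result is overwritten
    return total
```

The code still runs and returns the right answer, which is exactly why static analysis is valuable: these anomalies would survive a casual dynamic test.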
STATIC VERIFICATION TESTING: ANALYSIS TOOLS
Static analysis tools:
Cross-reference table:
Shows every use of a variable, procedure, object, etc.
Information flow analysis: Identifies input variables on which an output depends.
Path analysis: Identifies all possible paths through the program.
DEFENSIVE PROGRAMMING
Murphy's Law:
If anything can go wrong, it will.
Defensive programming:
Write simple code. Avoid risky programming constructs.
If code is difficult to read, rewrite it.
Incorporate redundant code to check the system state after modifications.
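A minimal sketch of this style (the function and its rules are invented for illustration): validate inputs explicitly instead of trusting the caller, and add a redundant check on the state after the modification.

```python
def withdraw(balance, amount):
    # Defensive checks on the inputs (preconditions).
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    new_balance = balance - amount
    # Redundant check of system state after the modification:
    # the new balance must be non-negative and strictly smaller.
    assert 0 <= new_balance < balance, "balance update violated invariant"
    return new_balance
```

The assert duplicates work the two if-statements already guarantee; that redundancy is the point, since it catches future edits that break the invariant.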
DYNAMIC VERIFICATION TESTING: STAGES OF TESTING
Testing is most effective if divided into stages:
User interface testing (carried out separately)
Unit testing: unit test
System testing: integration test, function test, performance test, installation test
Acceptance testing (carried out separately)
DYNAMIC TESTING: UNIT TESTING
Tests on small sections of a system: a single class or function.
Emphasis is on the accuracy of the code against its specification.
If unit testing is not thorough, system testing becomes almost impossible. If you are working on a project that is behind schedule, do not rush the unit testing.
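As a concrete sketch, a unit test for a single function against its specification might look like this using Python's built-in unittest module (the function under test, is_leap_year, is an invented example):

```python
import unittest

def is_leap_year(year):
    """Spec: divisible by 4, except centuries, unless divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_400_divisible_century_is_leap(self):
        self.assertTrue(is_leap_year(2000))
```

Run with `python -m unittest <file>`. Note that the cases target the tricky corners of the specification (centuries), not just the easy path.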
DYNAMIC TESTING: USER INTERFACE TESTING (DOCUMENTATION)
A subset of user interface testing guidelines:
General
Every action that alters user data can be undone.
All application settings can be restored to default.
The most frequently used functions are found at the top level of the menu structure.
Keyboard
Efficient keyboard access is provided to all application features.
No awkward reaches for frequently performed keyboard operations.
Keyboard operations are provided for all mouse operations.
Mouse
No operations depend on input from the middle or right mouse buttons.
The mouse pointer is never restricted to part of the screen by the application.
DYNAMIC TESTING: UNIT TESTING, INTERFACE (DOCUMENTATION)
Takes place when modules or sub-systems are integrated to create larger systems
Objectives are to detect faults due to interface errors or invalid assumptions about interfaces
DYNAMIC TESTING: UNIT TESTING, INTERFACE (DOCUMENTATION)
Interface types:
Parameter interfaces: Data passed from one procedure to another
Shared memory interfaces: Block of memory is shared between procedures
Procedural interfaces: Sub-system encapsulates a set of procedures to be called by other sub-systems
Message passing interfaces: Sub-systems request services from other sub-systems
DYNAMIC TESTING: UNIT TESTING, INTERFACE ERRORS
Interface misuse: A calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order
Interface misunderstanding: A calling component embeds incorrect assumptions about the behaviour of the called component
Timing errors: The called and the calling component operate at different speeds and out-of-date information is accessed
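The parameters-in-the-wrong-order case is easy to show concretely. In this invented sketch, the swapped call is still type-legal in Python, so the fault surfaces only when the interface is exercised by a test:

```python
def transfer(amount, account_id):
    """Record a transfer; the parameter order is part of the interface."""
    return {"account": account_id, "amount": amount}

# Correct use of the interface:
ok = transfer(50, "A-17")

# Interface misuse: arguments swapped. Python accepts the call,
# so nothing fails until the resulting record is checked.
bad = transfer("A-17", 50)
```

In a statically typed language the compiler might catch this; here only interface testing (or keyword arguments) will.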
DYNAMIC TESTING: UNIT TESTING, BASIS PATHS (DOCUMENTATION)
The objective of path testing is to build a set of test cases so that each path through the program is executed at least once. This ensures statement/branch coverage.
If every condition in a compound condition is considered, condition coverage can be achieved as well.
Steps for basis path testing:
Draw a (control) flow graph from the source code; line numbers can dictate each node in the graph.
Calculate the cyclomatic complexity using the flow graph.
Determine the basis set of linearly independent paths.
Design test cases to exercise each path in the basis set.
DYNAMIC TESTING: UNIT TESTING, BASIS PATHS (DOCUMENTATION)
Flow graphs:
Used to depict program control structure
Can be drawn from a piece of source code
Flow graph notation: composed of edges and nodes.
An edge starts from a node and ends at another node.
DYNAMIC TESTING: UNIT TESTING, BASIS PATHS (DOCUMENTATION)
Binary search flow graph
DYNAMIC TESTING: UNIT TESTING, BASIS PATHS (DOCUMENTATION)
Calculating cyclomatic complexity:
E = number of edges
N = number of nodes
P = number of connected components (with exit points)
CC = E − N + 2P
CC = 11 − 10 + 2 = 3
Independent paths through the program:
12, 1, 2, 3, 5, 6, 7, 10
12, 1, 2, 3, 5, 6, 7, 8, 7, 10
12, 1, 2, 3, 5, 6, 7, 8, 9, 7, 10
Test cases should be derived so that all of these paths are executed
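The complexity formula above can be checked mechanically. The edge list below is reconstructed from the three listed paths (so treat it as illustrative, not as the slide's original graph); the node numbers match the flow graph.

```python
# Edges reconstructed from the basis paths: each pair (a, b) is an
# edge from node a to node b in the flow graph.
edges = [(12, 1), (1, 2), (2, 3), (3, 5), (5, 6), (6, 7),
         (7, 8), (8, 9), (8, 7), (9, 7), (7, 10)]

# Collect the distinct nodes that appear in any edge.
nodes = {n for edge in edges for n in edge}

P = 1  # one connected component
cc = len(edges) - len(nodes) + 2 * P  # CC = E - N + 2P
print(cc)  # 11 edges, 10 nodes -> CC = 3
```

The result, 3, matches both the formula on the slide and the number of linearly independent paths.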
DYNAMIC TESTING: UNIT TESTING, BASIS PATHS (DOCUMENTATION)
Designing the test cases:
Path 1 test case: 12, 1, 2, 3, 5, 6, 7, 10
Input data: [4]; Expected output: 4
Path 2 test case: 12, 1, 2, 3, 5, 6, 7, 8, 7, 10
Input data: [6, 2, 5, 1, 3]; Expected output: 6
Path 3 test case: 12, 1, 2, 3, 5, 6, 7, 8, 9, 7, 10
Input data: [5, 2, 1, 1, 8, 3, 4]; Expected output: 8
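The slide's source code is not reproduced in the transcript. Judging from the expected outputs (each result is the largest input value), the routine under test returns the maximum element, so here is a hypothetical sketch consistent with the three basis paths; the function name and node annotations are assumptions:

```python
def find_largest(values):        # nodes 1-3: initialize with first element
    largest = values[0]
    i = 1                        # nodes 5-6
    while i < len(values):       # node 7: loop test
        if values[i] > largest:  # node 8: comparison
            largest = values[i]  # node 9: update branch (path 3 only)
        i += 1
    return largest               # node 10: exit

# Path 1 ([4]) never enters the loop; path 2 ([6, 2, 5, 1, 3]) loops
# without ever taking the update branch; path 3 ([5, 2, 1, 1, 8, 3, 4])
# takes the update branch at least once.
```

Each of the three inputs forces a different basis path, which is exactly what the design step asks for.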
DYNAMIC TESTING: SYSTEM TESTING, INTEGRATION TEST (DOCUMENTATION)
Tests complete systems or subsystems composed of integrated components
Main difficulty is localising errors: errors may not appear until components rely on each other to function properly
Incremental integration testing reduces this problem
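One common incremental tactic is to integrate a component against a stub standing in for a partner that is not ready yet, so any failure localises to the component under test. All the names in this sketch are invented:

```python
class TaxServiceStub:
    """Stands in for the real tax service during incremental integration."""
    def rate_for(self, region):
        return 0.10  # fixed, predictable answer for every region

class PriceCalculator:
    """Component under integration; depends on a tax-service interface."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net, region):
        return net * (1 + self.tax_service.rate_for(region))

# Integrate PriceCalculator with the stub: if this fails, the fault
# must be in PriceCalculator or in its use of the interface.
calc = PriceCalculator(TaxServiceStub())
assert round(calc.total(100.0, "EU"), 2) == 110.0
```

Once the real tax service is integrated, the same test is rerun against it; a new failure then points at the real service or the interface assumptions.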
DYNAMIC TESTING: SYSTEM TESTING, PERFORMANCE/STRESS TEST (DOCUMENTATION)
Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light.
Stressing the system tests its failure behaviour. Systems should not fail catastrophically; stress testing also checks for unacceptable loss of service or data.
Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.
DYNAMIC TESTING: ACCEPTANCE TESTING (DOCUMENTATION)
Used to determine if the requirements of a software product are met.
Gather the key acceptance criteria: the list of features/functions that will be evaluated before testing the product.
Determine testing approaches. Types of acceptance tests: stress, timing, compliance, capacity
Testing levels: system level, component level, integration level
Test methods and tools
Test data recording: description of how the acceptance test will be recorded
DYNAMIC TESTING: SYSTEM TESTING ACTIVITIES
[Figure: flow of system testing activities. The Performance Test (tests by developer) checks the system against the Requirements and yields a Validated System; the Acceptance Test (tests by client) checks it against the Client's Understanding of Requirements and yields an Accepted System; the Installation Test (tests by client) exercises it in the User Environment and yields a Usable System.]
DYNAMIC TESTING: ERROR HANDLING
Error prevention (before the system is released):
Use good programming methodology to reduce complexity
Use version control to prevent inconsistent systems
Apply verification to prevent algorithmic bugs
Error detection (while the system is running):
Testing: create failures in a planned way
Debugging: start from an unplanned failure
Monitoring: deliver information about system state; find performance bugs
Error recovery (recover from failure once the system is released):
Database systems (atomic transactions)
Modular redundancy
Recovery blocks
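The recovery-block scheme can be sketched briefly: run a primary routine, check its result with an acceptance test, and fall back to a simpler alternate when the check (or the routine itself) fails. All names here are invented, and the primary deliberately fails to show the fallback:

```python
def acceptable(result, data):
    """Acceptance test: result must equal the data sorted ascending."""
    return result == sorted(data)

def primary_sort(data):
    # Stand-in for a fast but faulty primary routine: here it fails outright.
    raise RuntimeError("simulated fault in the primary alternate")

def alternate_sort(data):
    # Simple, trusted fallback alternate.
    out = list(data)
    out.sort()
    return out

def recovery_block(data):
    # Try each alternate in order; a crash counts as a failed acceptance test.
    for routine in (primary_sort, alternate_sort):
        try:
            result = routine(data)
        except Exception:
            continue
        if acceptable(result, data):
            return result
    raise RuntimeError("all alternates failed")
```

The acceptance test is the critical design choice: it must be cheap and independent of how any alternate computes its answer.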
DOCUMENTATION OF TESTING (DOCUMENTATION)
Every project needs a test plan that documents the testing procedures for thoroughness, visibility, and for future maintenance.
It should include:
Description of the testing approach
List of test cases and related bugs
Procedures for running the tests
Test analysis report
A NOTE ON USER INTERFACE TESTING
User interfaces need several categories of testing. During the design phase, user interface testing is carried out with trial users.
Design testing is also used to develop graphical elements and to validate the requirements.
During the implementation phase, the user interface goes through the standard steps of unit and system testing to check the reliability of the implementation.
Finally, acceptance testing is carried out with users, on the complete system.
KEY POINTS OF SOFTWARE TESTING
Test parts of a system which are commonly used rather than those which are rarely executed
Acceptance testing is based on the system specifications and requirements
Flow graphs identify test cases which cause all paths through the program to be executed
Interface defects arise because of specification misreading, misunderstanding, errors, or invalid timing assumptions