CS 552 Testing
You cannot "test in" quality.
Conceptual Transition Underway
From Customization:
• Design
• Code
• Test
To Componentry:
• Find
• Link
• Verify
Kruchten's "4 + 1" Model for Developing a Software Architecture
+ 1: Business Scenarios
View 1: Logical -- End Users
View 2: Process -- System Integrators
View 3: Physical -- Engineers
View 4: Development -- TESTERS
Make sure that you understand this model.
Challenges due to Multi-Person Programming
• Dividing the project into work assignments, called modules.
• Specifying and assuring the behavior of each module.
Challenges to Testing Multi-Version Programming
• Are programs easy to modify?
• Are the subsets meaningful and testable?
• Can the modules be assembled easily?
• Are the interfaces coupled?
• Are the data exchanges normalized?
• Are errors contained?
Release Flow
[Diagram: Modules A-D move from development shops (code, unit test, documentation) through component test, integration & system tests, and cooperative testing into software manufacturing, which produces release packages for installation tests at customer sites (releases N-1 and N). Change or modification requests (MRs) flow back from the execution environment to the development environment.]
Extreme Programming (XP)
• Test before coding (see the sketch after this list).
• Pair programming.
• On-site customers.
• Ad hoc functionality.
• Evolutionary development.
• Continuous integration.
• Short cycles with feedback.
• Incremental development.
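A minimal sketch of the test-before-coding practice, using Python's built-in unittest; the function and values are hypothetical, not from the slides:

```python
import unittest

# The tests are written FIRST, against a function that does not exist yet.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertAlmostEqual(apply_discount(100.0, 0.10), 90.0)

    def test_discount_never_goes_negative(self):
        self.assertAlmostEqual(apply_discount(10.0, 2.0), 0.0)

# Only after the tests exist is the code written to make them pass.
def apply_discount(price, rate):
    return max(price * (1.0 - rate), 0.0)

if __name__ == "__main__":
    unittest.main()
```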
What is Paired Testing?
– Two testers with one machine.
– Pairs work together voluntarily. One person might pair with several others during a day.
– A given testing task is the responsibility of one person, who recruits one or more partners (one at a time) to help out.
– One tester strokes the keys (but the keyboard may pass back and forth in a session) while the other suggests ideas or tests, pays attention and takes notes, listens, asks questions, grabs reference material, etc.
Benefits of Paired Testing
Pairing forces each tester to explain and react to ideas. When each tester must phrase their thoughts to another tester, that simple act of phrasing brings the ideas into better focus and naturally triggers more ideas. Pairing:
• Encourages creativity
• Keeps testers focused and testing
• Is more fun
CS 552 Project Managers: Identify your tools in the next issue of your Development Plan
• Development Tools
• Production Tools
• Documentation Tools
• Configuration Management Tools
• Problem Tracking Tools
Software Testing
• Dijkstra: “Testing can show the presence of bugs but not their absence!”
• Independent testing is a necessary but not sufficient condition for trustworthiness.
• Good testing is hard and occupies 20% of the schedule; poor testing processes consume 40% of the schedule.
• Bottom line: test to assure confidence in operation, not to find bugs.
Test Ontology
• Unit
• Interface
• Cooperative
• Integration
• System
• Scenario
• Reliability
• Stress
• Verification
• Validation
• Certification
• Installation
Good Testing Practice
• Boehm: "... errors discovered in the operational phase incur costs 10 to 90 times higher than in the design phase
  – Over 60% of the errors were introduced during design
  – 2/3 of these were not discovered until operations"
• So, test requirements, specifications, architectures, designs, and components (code), as well as subsystems and systems.
• Models work well.
Testing Vocabulary
• Error: a human action that creates a bug.
• Fault: a manifestation of an error in the code.
• Failure: the execution of a fault that causes a system or component to fail (see the sketch after this list).
• Test: the process of checking the correctness of a software artifact.
• Verification: the process of evaluating a system during or at the end of the development process to assure that it satisfies specified requirements.
• Validation: the process of assuring that the solution solves the problem. The focus is on nominal and expected conditions.
• Certification: the process of determining the dynamic range or boundaries within which the system still performs correctly. The focus is on dispersions.
• Wicked testing: the process of creating unreasonable loads that stress the system to the point of failure.
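A small hypothetical illustration of the error/fault/failure chain (the `last_item` helper is invented for this example):

```python
# Error: the human mistake of believing the last index equals the length.
# Fault: that error manifests in the code as the off-by-one index below.
def last_item(items):
    return items[len(items)]  # fault: should be items[len(items) - 1]

# Failure: executing the fault makes the component fail.
try:
    last_item([1, 2, 3])
except IndexError as exc:
    print("failure observed:", exc)
```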
Test Process
[Diagram: a test strategy selects a subset of inputs from the program or document; the subset is executed against both the artifact and a prototype or model; the actual output is compared with the expected output to produce the test results. A sketch of this loop follows.]
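A minimal sketch of that loop in Python; the program under test, the model serving as the oracle, and the input subset are all hypothetical:

```python
def program_under_test(x):
    return x * x  # the artifact whose correctness is being checked

def model(x):
    # A simpler, trusted prototype/model that supplies the expected output.
    return abs(x) * abs(x)

def run_tests(inputs):
    # 'inputs' is the subset of the input space chosen by the test strategy.
    results = []
    for x in inputs:
        expected = model(x)             # expected output
        actual = program_under_test(x)  # actual output from execution
        results.append((x, expected, actual, expected == actual))  # compare
    return results

for case in run_tests([0, 1, 5, -3]):
    print(case)  # the accumulated test results
```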
Maintenance Testing
• Half of a project's life is spent in maintenance.
• Modifications induce another round of tests.
• Regression tests (see the sketch after this list)
  – Library of previous tests, plus additions (especially when a fix addressed a fault not uncovered by previous tests)
  – The issue is whether to retest everything vs. retest selectively; this is an expense-related decision (and one related to the state of the architecture/design: when entropy sets in, test thoroughly!)
  – A maintained regression library cuts the testing interval in half.
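A minimal sketch of how a regression library grows, assuming pytest-style tests and a hypothetical `parse_quantity` function; the second test was added after a field-reported fault and is rerun on every later release:

```python
def parse_quantity(text):
    # The fix: the original code crashed on surrounding whitespace.
    return int(text.strip())

def test_parse_quantity_basic():
    # Original test, from the library of previous tests.
    assert parse_quantity("42") == 42

def test_parse_quantity_whitespace_regression():
    # Added after the fix, for a fault the earlier tests did not uncover.
    assert parse_quantity("  42 \n") == 42
```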
Portal Team Learns Testing
When a portlet was accessed without logging on, we noticed a bug caused by attempting to access login data without checking whether anyone was actually logged on.
To solve it, code was added to check before accessing user information.
Later when we added functionality to allow the portlet to be configured from an XML file, the "no user logged on" issue reappeared.
What we saw was that we had inadvertently added a line of code to access the user's reports directory before the new access check.
We learned the importance of consistency and regression testing: we did not modify the code that originally created the bug, and we did not modify the code that was written to fix the bug, yet we still managed to (inadvertently) write code in such a way as to recreate the exact same error.
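A hypothetical reconstruction of the guard the team describes; all names are invented, since the actual portal code is not shown in the slides:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    user: Optional[str] = None  # None means no one is logged on

class Portlet:
    def __init__(self, session: Session):
        self.session = session

    def reports_directory(self) -> str:
        # The "no user logged on" check: every path that touches user
        # data must pass through this guard before accessing it.
        if self.session.user is None:
            raise PermissionError("no user logged on")
        return f"/reports/{self.session.user}"
```

The regression lesson: the check protects only the paths that go through it, so a later feature that reads user data before the guard reintroduces the same failure.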
Fault Seeding
• Artificially seed or insert faults, then test to discover both seeded and real faults:
Estimated total real faults = (total faults found − seeded faults found) × (total seeded faults / seeded faults found)
• Assumes real and seeded faults follow the same distribution, but manually generated faults are unrealistic.
• Trust results when seeded faults dominate the found faults.
• If there are many real faults, redesign the module since the probability of more faults in a module is proportional to the number of errors already found!
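A worked example of the seeding estimate above; the counts are hypothetical:

```python
def estimate_latent_real_faults(total_found, seeded_found, seeded_total):
    """Apply the seeding formula above to estimate real faults not yet found."""
    real_found = total_found - seeded_found
    # Scale the real faults found by the seeded-fault recovery rate.
    estimated_real_total = real_found * seeded_total / seeded_found
    return estimated_real_total - real_found

# 100 faults seeded, 80 of them rediscovered, 95 faults found in total:
# estimated real faults = (95 - 80) * (100 / 80) = 18.75, of which 15 are
# already found, so about 3.75 remain latent.
print(estimate_latent_real_faults(95, 80, 100))  # -> 3.75
```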
Software Testing Footprint
[Chart: tests completed vs. time; the planned curve is compared against tests run successfully. A success curve that flattens below plan indicates poor module quality and marks the rejection point.]
Case Study: Test Status
The variance of software engineering processes, practices, and technology is at odds with producing trustworthy software.
Isaac Levendel, PhD, Bell Labs & Motorola
Software Testing
• The problem of software testing is to find "design holes".
• In the reality of budget constraints, uniform coverage is not cost effective.
  – Testing needs to optimize the tradeoff between coverage and focused testing on design holes.
Bug Fixing Destroys Cohesion
[Chart: software changes c1-c14 plotted against software modules f1-f9 over time; successive bug fixes scatter changes across many modules, eroding cohesion.]
Two Testing Approaches

Traditional (impractical!)         Levendel's
Negates software variance          Embraces software variance
Implies full coverage              Implies discovering the largest holes
Implies preset testing             Implies adaptive testing
Detects fewer faults per test      Detects more faults per test (10x)
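A minimal sketch of the adaptive idea in the right-hand column, assuming per-module fault counts; the module names and test budget are hypothetical. The next batch of tests is spent where the most faults have already surfaced, i.e., on the largest holes:

```python
# Adaptive testing sketch: allocate the next batch of tests in proportion
# to the faults already found per module, instead of spreading uniformly.
def allocate_tests(faults_found, budget):
    total = sum(faults_found.values())
    if total == 0:
        even = budget // len(faults_found)  # no signal yet: spread evenly
        return {m: even for m in faults_found}
    return {m: round(budget * n / total) for m, n in faults_found.items()}

print(allocate_tests({"moduleA": 12, "moduleB": 2, "moduleC": 6}, budget=100))
# -> {'moduleA': 60, 'moduleB': 10, 'moduleC': 30}
```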
Observations Drive Architecture
• Modules with high defect couplings are actually one module and not components.
• Modules with no defect coupling can be components.
Software Type Variance

Software type           (a) Development fault density   (b) Field fault density   Ratio b/a
System administration   0.05                            0.015                     0.3
...                     ...                             ...                       ...
Operational recovery    0.1                             0.003                     0.03
Code size variance
>3000 3
>1000,<2000 7
>250,<500 7
>100,<250 10
>50,<100 8
>25,<50 17
>10,<25 24
<10 50
Code submission size Number of developers
Tester variance
1 55 48
2 50 46
3 49 45
4 27 25
5 26 24
6 21 21
. . .
37 1 1
. . .
44 1 0
45 0 0
. . .
51 0 0
Tester Trouble reports Real problems
Software Attributes
• Software defects are not uniformly distributed
  – Need a very high sampling rate to assess quality
  – Need an even higher sampling rate to correct quality
• Software systems are not linear; they are chaotic.
Software Testing Objectives
• If the only objective of testing is to predict behavior, then operational profiles are a good idea (see the sketch after this list).
• If the objective of testing is also to improve the software quality, then also focus on "design holes".
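A minimal sketch of operational-profile test selection; the operations and their probabilities are hypothetical. Test operations are drawn in proportion to their expected field usage, so the test run predicts field behavior:

```python
import random

# Hypothetical operational profile: expected share of field usage per operation.
profile = {"login": 0.40, "search": 0.35, "checkout": 0.20, "admin": 0.05}

def next_operation(rng):
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(7)  # seeded for a reproducible test run
print([next_operation(rng) for _ in range(10)])
```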
Levendel's Theory of Software Defects
• Defects "abhor" loneliness.
• Defective software areas look like "software poles".
  – The system becomes unstable when execution approaches the "poles".
• Good software areas look like "software zeros".
  – The system is stable around the "zeros".
Customer Interests
Before installation:
• Features
• Price
• Schedule
After installation:
• Reliability
• Response Time
• Throughput
Beware of Over-Hyping
Incorrect and "potentially false or misleading" claims were made by 65% of all the commercial software titles examined.
Study by Industry Canada's Competition Bureau, 1999
TOP BUSINESS ISSUES
• 66%: Recruiting qualified people
• 43%: Software quality
• 43%: Schedule delays
• 34%: Managing growth
TOP DEVELOPMENT ISSUES
1. Schedule overruns
2. Shortage of skilled staff
3. Poor requirements
4. Inaccurate project estimates
Psychological States of Testers

State of mind           Software   Other   Delta
Meaningfulness of job   5.49       5.40    +0.09
Responsibility          5.48       5.75    -0.27
Knowledge of results    5.00       5.00     0.00

Note that 0.05 is a significant difference.
Testers' Satisfaction Index

State of mind                  Software   Other   Delta
General satisfaction           5.37       4.88    +0.59
Satisfaction with co-workers   5.22       5.48    -0.26
Satisfaction with supervisor   4.60       4.89    -0.29