TRANSCRIPT
Use Of A Weighting Factor To Improve
Software Verification
Mark Flecken, Ericsson, Germany
Usage Of A Weighting Factor To
Improve Software Verification
Mark Flecken
Ericsson GmbH
Biography: Mark Flecken
Personal
– Mark Flecken (*1969)
– Studied in Aachen and Liverpool
– Master of Science in Information Technology
Professional experience
– 1997–1999: Hewlett Packard, network specialist
– 1999–present: Ericsson Germany in various positions:
  Senior Verification Engineer
  Test Team Leader for an e2e verification project
  Project Manager for an e2e verification project
  Project Manager, remote update and upgrade verification for GSM systems
Agenda
– Introduction / Overview
– In-Process Metrics
– Key Problems in Software Measurement
– Direct and Indirect Measures
– What Metrics Should Look Like
– Usual Metrics
– Case Study
– Conclusion / Way Forward
Introduction / Overview
Metric categories:
– Process Metrics
– Product Metrics
– Project Metrics
Typical measures: function points, LOC, schedule in time, average fault finding, test coverage, costs, actual costs, productivity.
Key Problems in Software Measurement
– Most attributes of interest cannot be measured directly
– Most metrics are very difficult to validate
– Most models are at best vague approximations
– Models usually need to be adapted to a particular organisation
– Data needs to be collected over a long period to validate and adapt them
– But you cannot control what you do not measure
– The technology keeps changing
Direct and Indirect Measurements

Direct measures:
– Length of source code
– Duration of the testing process
– Time a programmer spends on a project
– Number of defects discovered in testing

Indirect measures:
– Module defect density = number of defects / module size
– Defect detection efficiency = number of defects detected / total number of defects
– Programmer productivity = LOC produced / person-month effort
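As a sketch, the three indirect measures can be derived from the direct measurements like this (function names and the sample figures are illustrative, not from the case study):

```python
# Illustrative sketch: deriving the indirect measures listed above
# from direct measurements.  All names and inputs are made up.

def defect_density(defects: int, module_size_kloc: float) -> float:
    """Module defect density = number of defects / module size."""
    return defects / module_size_kloc

def detection_efficiency(found_in_test: int, total_defects: int) -> float:
    """Defect detection efficiency = defects detected in testing / total defects."""
    return found_in_test / total_defects

def productivity(loc_produced: int, person_months: float) -> float:
    """Programmer productivity = LOC produced / person-month effort."""
    return loc_produced / person_months

print(defect_density(12, 3.0))        # 4.0 defects per KLOC
print(detection_efficiency(90, 120))  # 0.75
print(productivity(6000, 4.0))        # 1500.0 LOC per person-month
```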
What Metrics Should Look Like
– To be useful, a metric should be able to tell what is “good” or “bad” in terms of quality or schedule
– A metric can also be used as an indicator pointing to where something went wrong (or well)
– To achieve these objectives, historical comparison or comparison against a model is often needed
Usual Defect Data Available in SW Verification Projects

Counts:
– Total number of defects found
– Total number of defects solved
– Total number of test cases passed
– Total number of test cases failed

Attributes:
– A defect is linked to a subsystem of the software
– A defect has a priority or importance
– A defect has an answer code
– A defect is linked to a project
– A failed test case is linked to a defect
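The attributes above can be pictured as a simple data model. The following is a hypothetical sketch; the field names are illustrative and are not Ericsson's actual defect-tracking schema:

```python
# Hypothetical data model for the defect and test-case attributes listed
# above; field names are illustrative, not Ericsson's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Defect:
    defect_id: str
    subsystem: str    # a defect is linked to a subsystem of the software
    priority: str     # "A", "B" or "C" (or importance high/medium/low)
    answer_code: str  # a defect has an answer code
    project: str      # a defect is linked to a project
    solved: bool = False

@dataclass
class TestCase:
    tc_id: str
    passed: bool
    defect_id: Optional[str] = None  # a failed test case is linked to a defect

defects = [Defect("D1", "Sub1", "A", "AC1", "P1"),
           Defect("D2", "Sub2", "B", "AC2", "P1", solved=True)]
test_cases = [TestCase("TC1", passed=True),
              TestCase("TC2", passed=False, defect_id="D1")]

print(sum(d.solved for d in defects))           # defects solved: 1
print(sum(not tc.passed for tc in test_cases))  # test cases failed: 1
```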
In-Process Metrics for Software Testing
Test progress S curve (planned, attempted, actual)
– Within Ericsson, a test progress tool is used
– It collects and reports faults in one place
– It measures test progress
Test Progress Curve per Test Object
Test progress S curve (planned, attempted, actual)
– Expected number of executed test cases
– Passed, failed and skipped test cases
– Measures the test progress
Software Development
How do we know whether software quality has improved from software release to release?
– Software Release 1: base system
– Software Releases 2, 3 and 4: base system + new features + bug fixes
– Each release consists of the same subsystems (Subsystem 1 to Subsystem 6)
Ericsson Software Life Cycle
– Defect tracking information is collected in the different test phases
– Each test phase has unique identifiers (e.g. where found, subsystem, submitter)
– Each defect has a priority identifier (e.g. priority A, B, C, or importance high/medium/low)
Test phases per software release: Design Test → Function Test → System Test → Network Test → Regression Test → Customer Use. Defects from every phase are stored in a central database.
What Can Be Done with This Data?
– Identify subsystems with major faults
– Determine in which testing phase major faults are found
– Check that major faults are found in testing, not in customer use
– Ask whether we are testing the right things, and how to improve
– Ask whether product quality has increased
Case Study: Weighting Factor Usage in Ericsson
– The aim was to find out whether product quality has improved
– Different key performance indicators were evaluated
– The weighting factor seemed useful and was extended to the complete SW test life cycle
– Goals: move testing towards customer-like testing, find weaknesses in testing, and improve testing
Defect Classification
A fault (defect) found during testing is classified according to priority:

Priority  Description          Weight
A         System outage        4
B         e.g. small fault     2
C         e.g. documentation   1

For the weighting factor, a weight is assigned according to the priority of the defect. The weighting factor is the weighted average priority of the defects found (1 = good, 4 = bad):

Weighting Factor = (4·A + 2·B + 1·C) / (A + B + C)

where A, B and C are the numbers of defects of each priority. The highest possible value is 4, which means that mostly major problems were found; low values indicate good software quality/testing.
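As a minimal sketch, the calculation looks like this (weights taken from the table above; the function name is illustrative). The example input is Subsystem 1 of SW Release 1 from the example that follows:

```python
# Weighting factor = weighted average defect priority, using the weights
# from the classification table: 4 for priority A, 2 for B, 1 for C.
# Range: 1 (good) .. 4 (bad).
WEIGHTS = {"A": 4, "B": 2, "C": 1}

def weighting_factor(a: int, b: int, c: int) -> float:
    """Return the weighting factor for the given per-priority defect counts."""
    total = a + b + c
    if total == 0:
        return 0.0  # no defects found: factor is undefined, return a sentinel
    return (WEIGHTS["A"] * a + WEIGHTS["B"] * b + WEIGHTS["C"] * c) / total

# Subsystem 1, SW Release 1: 298 A-defects, 436 B, 34 C
print(round(weighting_factor(298, 436, 34), 2))  # 2.73
```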
Weighting Factor Example
The weighting factor is suitable for analysing software systems at subsystem level and for weighting faults according to their severity.

SW Release 1
Defect Priority    A     B     C   Total   Weighting Factor
Subsystem 1      298   436    34     768   2.73
Subsystem 2       44   223    45     312   2.14
Subsystem 3        1     0     0       1   4.00
Subsystem 4        0   143    17     160   1.89
Total            343   802    96    1241

SW Release 2
Defect Priority    A     B     C   Total   Weighting Factor
Subsystem 1      244   301     4     549   2.88
Subsystem 2       22   223    67     312   1.93
Subsystem 3        0     1     1       2   1.50
Subsystem 4        1   101    17     119   1.87
Total            267   626    89     982

SW Release 3
Defect Priority    A     B     C   Total   Weighting Factor
Subsystem 1      100   122    17     239   2.77
Subsystem 2        2   123    88     213   1.61
Subsystem 3        0     0     1       1   1.00
Subsystem 4        2   143    17     162   1.92
Total            104   388   123     615

Observations:
– A weighting factor based on very few defects (e.g. Subsystem 3) is not meaningful
– The total number of defects has decreased from release to release
– The weighting factor is still high on one subsystem
– Threshold values need to be set for further investigation
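The last observation can be turned into a small analysis step. The sketch below flags subsystems for investigation; the threshold (2.5) and the minimum sample size (10 defects) are illustrative assumptions, not values from the case study:

```python
# Sketch: flag subsystems whose weighting factor exceeds a chosen threshold
# and whose defect count is large enough to be meaningful.  The threshold
# and minimum sample size are assumptions for illustration.
release_3 = {                 # (prio A, prio B, prio C) per subsystem
    "Subsystem 1": (100, 122, 17),
    "Subsystem 2": (2, 123, 88),
    "Subsystem 3": (0, 0, 1),
    "Subsystem 4": (2, 143, 17),
}

def wf(a, b, c):
    """Weighting factor = (4A + 2B + C) / (A + B + C)."""
    return (4 * a + 2 * b + c) / (a + b + c)

THRESHOLD, MIN_DEFECTS = 2.5, 10
for name, (a, b, c) in release_3.items():
    total = a + b + c
    if total < MIN_DEFECTS:
        print(f"{name}: only {total} defect(s), not meaningful")
    elif wf(a, b, c) > THRESHOLD:
        print(f"{name}: weighting factor {wf(a, b, c):.2f} -> investigate")
```

Run against the SW Release 3 figures, this flags Subsystem 1 (2.77) for investigation and marks Subsystem 3 (one defect) as not meaningful.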
GPRS/UMTS System under test
Ericsson Case Study Part I
Test phases per software release (SW Release A, B, …, x):
Design Test (3 months) → Function & System Test (3 months) → Network Integration (3 months) → Network Verification (3 months) → Customer Usage (6 months)

Defects aggregated over the test phases of two releases:
SW Release A: Prio A = 2113, Prio B = 6058, Prio C = 2551, Total = 10722, weighting factor 2.07
SW Release B: Prio A = 465, Prio B = 2183, Prio C = 1078, Total = 3726, weighting factor 1.96
Ericsson Case Study Part II
Defects and weighting factors per test phase (DT = Design Test, FT/ST = Function & System Test, NIV = Network Integration & Verification):

SW Release A
Phase      Prio A  Prio B  Prio C  Total   Weighting Factor
DT            528    1699    1322   3549   1.92
FT/ST        1295    3371    1046   5712   2.27
NIV           290     988     183   1461   2.27
Customer      213     586     226   1025   2.19

SW Release B
Phase      Prio A  Prio B  Prio C  Total   Weighting Factor
DT            169     834     558   1561   1.85
FT/ST         225     934     413   1572   2.02
NIV            71     285     107    463   2.07
Customer       30     130      59    219   2.00
Ericsson Case Study III
Weighting factor per subsystem and test phase:

SW Release A
Subsystem    DT   FT/ST    NIV   Cust
Sub1       2.00    2.30   2.70   2.30
Sub2       2.20    2.60   2.50   2.60
Sub3       2.40    2.40   2.40   2.55
Sub4       2.00    2.40   2.50   2.10
Sub5       2.00    2.20   2.20   2.00
Sub6       2.40    2.60   2.50   2.60
Sub7       1.50    1.80   2.00   2.80
Sub8       2.20    2.70   3.20   2.80
Sub9       1.90    2.40   2.70   2.30
Sub10      2.20    2.80   0.00   1.00

SW Release B
Subsystem    DT   FT/ST    NIV   Cust
Sub1       2.70    2.40   1.90   2.00
Sub2       2.00    2.20   1.70   2.40
Sub3       2.30    2.00   2.00   2.70
Sub4       2.00    1.90   2.20   2.50
Sub5       1.70    2.00   2.10   1.90
Sub6       2.00    2.40   2.50   2.50
Sub7       1.50    1.70   0.00   0.00
Sub8       2.10    2.30   2.30   2.40
Sub9       2.10    2.60   2.80   0.00
Sub10      2.50    2.40   0.00   0.00
Observations:
– Some subsystems show that major defects were found in customer usage
– Some subsystems show major defects both in testing and in customer usage
– Some subsystems show major defects found in DT/FT/ST but not in e2e testing
– How can the verification process be improved now?
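One way to read such tables programmatically is to look for subsystems whose weighting factor in customer usage exceeds every test-phase value, i.e. where major faults slipped past testing. A minimal sketch, using two rows copied from the SW Release A table (the selection rule itself is an assumption, not the paper's procedure):

```python
# Sketch: spot subsystems whose weighting factor in customer usage exceeds
# every test-phase value -- a sign that major faults escaped testing.
# Data: two subsystems from the SW Release A table above.
release_a = {
    "Sub7": {"DT": 1.50, "FT/ST": 1.80, "NIV": 2.00, "Cust": 2.80},
    "Sub8": {"DT": 2.20, "FT/ST": 2.70, "NIV": 3.20, "Cust": 2.80},
}

for sub, wfs in release_a.items():
    cust = wfs["Cust"]
    test_values = [v for phase, v in wfs.items() if phase != "Cust"]
    if all(cust > v for v in test_values):
        print(f"{sub}: customer WF {cust} above all test phases -> "
              f"strengthen testing of this subsystem")
```

For these two rows, only Sub7 is flagged: its customer weighting factor (2.80) is above all of its test-phase values, while Sub8 already shows a higher value (3.20) in NIV.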
Improvement Using the Weighting Factor

Improvement from the customer perspective:
– Set a threshold value for the weighting factor per subsystem for defects found in customer usage
– Analyse these defects, then adapt and increase the verification of these subsystems in the most appropriate test phases
– With the outcome of the analysis, customer-like testing can be achieved

Improvement from the development perspective:
– Each test phase sets a threshold value for the weighting factor per subsystem for defects found in that test phase
– After analysing these defects, adapt the test specification and verification accordingly
Improvement Using the Weighting Factor: Two-Dimensional Test Improvement
Test phases: Design Test → Function Test → System Test → Network Integration → Network Verification → Customer Usage

Development perspective: the test process of each test phase focuses test improvement on subsystems with a weighting factor above e.g. 2 in that phase.

Customer perspective: subsystems that show a weighting factor above e.g. 2 in customer usage (major defects found on subsystems in customer usage) are analysed, and the testing process is improved in the most suitable phase.
Conclusion
– A weighting factor can be used to improve a verification process
– Large defect data sets are needed
– The weighting factor shows whether a verification process is good at finding major faults
– The weighting factor can be used to find weaknesses in testing
– Threshold values help to improve the verification process
– The weighting factor can be used in two dimensions: customer focus and development focus
Further Reading / References
Books:
– Stephen H. Kan, Metrics and Models in Software Quality Engineering
– Norman Fenton, Shari Lawrence Pfleeger, Software Metrics: A Rigorous and Practical Approach
– Thaller, Software-Metriken
Papers:
– Veevers A., Marshall A.C. (1994) A relationship between software coverage metrics and reliability. Software Testing, Verification and Reliability, 4, January, pp. 3–8.
– Miranda E. (1998) The use of reliability growth models in project management. In: Proceedings of the 9th International Symposium on Software Reliability Engineering, 4–7 November 1998, Paderborn, Germany. IEEE Computer Society, pp. 291–298.
Acknowledgements
My sincere thanks to …
– Wilhelm Meding, Quality Controller, Ericsson Sweden
– Ericsson Germany

Thank you for your attention!
Questions?