TRANSCRIPT
Software Quality & Testing Industry Academia Symposium, Beer Sheba, 28 May 2013
Process Improvement and CMMI for Systems and Software:
The role of testing data
Prof. Ron S. Kenett, [email protected]
The KPA Group, Israel and University of Turin, Italy
Agenda
1. Background
2. Tools for modeling and analysis
   – Control charts
   – Pareto charts
   – Weibull analysis
   – Regressions
3. Software Trouble Assessment
4. RISCOSS and SQuAT
Intel, Cisco, Amdocs, IAI, Motorola, Tadiran, Magic, Ericsson
http://www.kpa-group.com/en/about-kpa/softrel
• Product and service development — CMMI for Development (CMMI-DEV)
• Service establishment, management, and delivery — CMMI for Services (CMMI-SVC)
• Product and service acquisition — CMMI for Acquisition (CMMI-ACQ)
CMMI high maturity process areas: CAR (Causal Analysis and Resolution), OPM (Organizational Performance Management), OPP (Organizational Process Performance), QPM (Quantitative Project Management)
• Process performance baseline (PPB)
• Process performance model (PPM)
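As a rough illustration of what a process performance baseline can look like in practice, the sketch below computes a mean and 3-sigma individuals-chart limits from hypothetical historical defect-density data. The data values and the choice of an individuals control chart are assumptions for illustration, not taken from the presentation.

```python
# Minimal sketch of a process performance baseline (PPB), assuming hypothetical
# historical defect-density data (defects per KLOC) from completed projects.
# The baseline here is the mean plus 3-sigma limits of an individuals (X) chart.
import statistics

# Hypothetical defect densities from past projects (defects/KLOC) -- illustrative only
defect_density = [4.2, 3.8, 5.1, 4.6, 3.9, 4.4, 5.0, 4.1, 4.7, 4.3]

mean = statistics.mean(defect_density)
# Moving-range estimate of process variability, as used for individuals charts
moving_ranges = [abs(b - a) for a, b in zip(defect_density, defect_density[1:])]
sigma_hat = statistics.mean(moving_ranges) / 1.128  # d2 constant for subgroups of 2

baseline = {
    "mean": round(mean, 2),
    "UCL": round(mean + 3 * sigma_hat, 2),             # upper control limit
    "LCL": round(max(mean - 3 * sigma_hat, 0), 2),     # lower control limit, floored at 0
}
print(baseline)
```

New project data falling outside these limits would then signal performance that is inconsistent with the baseline.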
Failure Reporting System
Error => Fault => Failure
[Figure: Scatterplot of Totbugs vs. Build number, with the Pre-α, α, and β phases and the FCS milestone marked]
[Figure: Scatterplot of WibullModelTot (Weibull model fit) and Totbugs vs. Build number]
[Figure: Same scatterplot with the Pre-α, α, and β phases and the FCS milestone marked]
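The Weibull overlay in the scatterplots above can be reproduced in spirit with a simple curve fit. The sketch below fits a Weibull-type growth curve M(t) = a(1 - exp(-(t/b)^c)) to hypothetical cumulative bug counts per build; the model form, the data, and the scipy-based fitting are assumptions, since the slides do not specify how WibullModelTot was computed.

```python
# Minimal sketch of fitting a Weibull-type growth curve to cumulative bug counts
# per build. Data below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def weibull_cumulative(t, a, b, c):
    """Cumulative number of bugs expected by build t."""
    return a * (1.0 - np.exp(-(t / b) ** c))

builds = np.arange(1, 21)                          # build numbers 1..20 (hypothetical)
cum_bugs = np.cumsum([8, 10, 9, 12, 11, 9, 8, 7,   # hypothetical bugs found per build
                      6, 6, 5, 4, 4, 3, 3, 2, 2, 1, 1, 1])

# Initial guesses: total ~ observed maximum, scale ~ mid project, shape ~ 1
params, _ = curve_fit(weibull_cumulative, builds, cum_bugs,
                      p0=[cum_bugs[-1], builds.mean(), 1.0], maxfev=10000)
a_hat, b_hat, c_hat = params
print(f"estimated total bugs a={a_hat:.1f}, scale b={b_hat:.1f}, shape c={c_hat:.2f}")
```

The fitted asymptote a gives an estimate of the eventual total number of bugs, which can be compared against what has already been found when deciding whether the product is ready for release.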
Data from Inspection of Documents
Data from Inspection of Drawings
Data on Review Preparation versus Failures
Data on Review Execution versus Failures
Data on Inspections and Reviews
[Figure: Relative frequency of preparation rate in pages/hr, with "Too Slow" and "Too Fast" regions marked]
Data on Inspections and Reviews
[Figure: Relative frequency of review rate in pages/hr, with "Too Slow" and "Too Fast" regions marked]
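A minimal sketch of how this "Too Slow" / "Too Fast" screening can be applied to raw inspection records follows; the page counts, hours, and the acceptable pages/hr band are placeholders, not values from the slides.

```python
# Flag inspections whose preparation rate falls outside a recommended band.
pages = [45, 60, 30, 80, 55]          # pages covered in each inspection (hypothetical)
hours = [3.0, 2.0, 4.0, 1.5, 3.5]     # preparation hours spent (hypothetical)

LOW, HIGH = 10, 20                    # assumed acceptable pages/hr range (placeholder)

for i, (p, h) in enumerate(zip(pages, hours), start=1):
    rate = p / h
    if rate < LOW:
        verdict = "too slow"
    elif rate > HIGH:
        verdict = "too fast"
    else:
        verdict = "ok"
    print(f"inspection {i}: {rate:.1f} pages/hr -> {verdict}")
```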
Inspection data versus Engineering Change Proposal (ECP)
Inspection Data
ECP Data
System and Software Life Cycle
[Figure: V-model life cycle linking Marketing Requirements, System Requirements Spec, System Design Spec, Software Requirements Spec, Design and Construction, and Construction Artifacts to the Software Test Spec, System Integration Spec, System Test Spec, and Acceptance Test Spec]
STAM basic questions
• When were errors detected? Depends on the inspection process efficiency, i.e., how it performs.
• When could errors have been detected? Depends on the inspection process effectiveness, i.e., how it was designed.
• When were errors created? Depends on the overall performance of the software development process.
System/Software Trouble Assessment Matrix (STAM)
Kenett, R.S., Assessing Software Development and Inspection Errors, Quality Progress, pp. 109-112, October 1994.
The Sys/Soft Trouble Assessment Matrix
[Figure: matrix cross-tabulating when errors were created and when they were detected against when they could have been detected]
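A STAM matrix of this kind can be assembled directly from individual error records. The sketch below cross-tabulates hypothetical (phase created, phase detected) pairs; the records are invented for illustration and only the phase names follow the slides.

```python
# Build a STAM-style cross-tabulation from individual error records.
from collections import Counter

PHASES = ["Requirements Analysis", "Top Level Design", "Detailed Design",
          "Programming", "Unit Tests", "System Tests", "Acceptance Test"]

# Each record: (phase created, phase detected) -- hypothetical data
errors = [
    ("Requirements Analysis", "System Tests"),
    ("Requirements Analysis", "Programming"),
    ("Top Level Design", "Unit Tests"),
    ("Programming", "Unit Tests"),
    ("Programming", "System Tests"),
]

matrix = Counter(errors)
for created in PHASES:
    row = [matrix.get((created, detected), 0) for detected in PHASES]
    if any(row):
        print(f"{created:>22}: {row}")  # one row per phase in which errors were created
```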
When were errors detected?

Life Cycle Phase        Number of Errors   Cumulative
Requirements Analysis          3                3
Top Level Design               7               10
Detailed Design                2               12
Programming                   25               37
Unit Tests                    31               68
System Tests                  29               97
Acceptance Test               13              110

Cumulative profile = S1
When could errors have been detected?

Life Cycle Phase        Number of Errors   Cumulative
Requirements Analysis          8                8
Top Level Design              14               22
Detailed Design               10               32
Programming                   39               71
Unit Tests                     8               79
System Tests                  26              105
Acceptance Test                5              110

Cumulative profile = S2
When were errors created?

Life Cycle Phase        Number of Errors   Cumulative
Requirements Analysis         34               34
Top Level Design              22               56
Detailed Design               17               73
Programming                   27              100
Unit Tests                     5              105
System Tests                   5              110
Acceptance Test                0              110

Cumulative profile = S3
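The three cumulative profiles follow directly from the per-phase counts in the tables above. A minimal sketch, using the slide data:

```python
# Compute the S1, S2, S3 cumulative profiles from per-phase error counts.
from itertools import accumulate

detected   = [3, 7, 2, 25, 31, 29, 13]    # when errors were detected
detectable = [8, 14, 10, 39, 8, 26, 5]    # when errors could have been detected
created    = [34, 22, 17, 27, 5, 5, 0]    # when errors were created

S1 = list(accumulate(detected))    # [3, 10, 12, 37, 68, 97, 110]
S2 = list(accumulate(detectable))  # [8, 22, 32, 71, 79, 105, 110]
S3 = list(accumulate(created))     # [34, 56, 73, 100, 105, 110, 110]

for name, profile in [("S1", S1), ("S2", S2), ("S3", S3)]:
    print(name, profile, "sum =", sum(profile))
```

Each profile ends at the same total (110 errors); what differs is how early the errors accumulate.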
S1, S2, S3 cumulative profiles
[Figure: cumulative error profiles S1, S2, S3 plotted against development cycle phase]
S1, S2, S3 cumulative profiles
[Figure: same cumulative profiles, highlighting the S1 values]
Sum of the S1 cumulative profile: 3 + 10 + 12 + 37 + 68 + 97 + 110 = 337
A Case Study
[Figure: S1, S2, S3 cumulative profiles for the case study]
Computation of STAM Metrics
Negligence ratio: 100 x (S2 - S1)/S1 = 200%
Evaluation ratio: 100 x (S3 - S2)/S2 = 15%
Prevention ratio: 100 x S1/(8 x total) = 15%
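A sketch of the ratio computation, applied to the example profiles from the earlier tables. It assumes that S1, S2, and S3 in the formulas denote the sums (areas) of the cumulative profiles, with the prevention-ratio denominator equal to (number of phases x total errors); the 200%/15%/15% figures on this slide come from the case study, so the example data yield different values.

```python
# STAM ratios computed from the example cumulative profiles above.
S1, S2, S3 = 337, 427, 588          # sums of the S1, S2, S3 cumulative profiles
n_phases, total_errors = 7, 110     # seven life-cycle phases, 110 reported errors

negligence = 100 * (S2 - S1) / S1                    # inspection efficiency
evaluation = 100 * (S3 - S2) / S2                    # inspection effectiveness
prevention = 100 * S1 / (n_phases * total_errors)    # development process execution

print(f"negligence ratio: {negligence:.0f}%")   # ~27% for the example data
print(f"evaluation ratio: {evaluation:.0f}%")   # ~38% for the example data
print(f"prevention ratio: {prevention:.0f}%")   # ~44% for the example data
```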
Definition of STAM Metrics
Negligence ratio: indicates the amount of errors that escaped through the inspection process filters. INSPECTION EFFICIENCY
Evaluation ratio: measures the delay of the inspection process in identifying errors relative to the phase in which they occurred. INSPECTION EFFECTIVENESS
Prevention ratio: an index of how early errors are detected in the development life cycle relative to the total number of reported errors. DEVELOPMENT PROCESS EXECUTION
Interpretation of STAM Metrics
1. Errors are detected, on average, 2 stages later than they should have been (i.e., had the inspection processes worked perfectly). The inspection process is very inefficient.
2. The theoretical inspection process considered by the analysts has errors detected 15% into the phase following their creation. The theoretical process is very effective.
3. Ideally, all errors are requirements errors and are detected in the requirements phase. In this example only 15% of this ideal is materialized, implying significant opportunities for improvement.
Open Source Software (OSS)
Some estimates indicate that by 2016 the prevalence of OSS will exceed 95% in commercial applications.
www.riscoss.eu
http://ow2.org/view/About/SQuAT
References
• Kenett, R.S., Process Trouble Assessment and Pareto Analysis of Software Errors, National Institute for Software Quality and Productivity Annual Conference, Washington D.C., April 1992.
• Kenett, R.S., Assessing Software Development and Inspection Errors, Quality Progress, pp. 109-112, October 1994.
• Kenett, R.S., Assessing Software Inspection Processes with STAM, Software Engineering Australia, SEA2000, Canberra, Australia, April 2000.
• Kenett, R.S., The Impact of Classification Errors on Assessing Software Inspection Processes with STAM, MMR2002, ISI Conference, Trondheim, Norway, June 2002.
• Kenett, R.S., Software Failure Data Analysis, in Encyclopedia of Statistics in Quality and Reliability, Ruggeri, F., Kenett, R.S. and Faltin, F. (editors in chief), John Wiley and Sons, 2007.
• Kenett, R.S. and Baker, E., Process Improvement and CMMI for Systems and Software, Taylor and Francis, Auerbach CRC Publications, 2010.
• www.riscoss.eu
• http://ow2.org/view/About/SQuAT